Paradoxical role of interleukin-33/suppressor of tumorigenicity 2 in colorectal carcinogenesis: Progress and therapeutic potential

Colorectal cancer (CRC) is presently the second most prevalent cause of global cancer mortality. CRC carcinogenesis is a multifactorial process involving internal genetic mutations and the external environment. In addition, the contribution of non-neoplastic cell activities within tumor microenvironments to CRC development has been established. Notably, interleukin (IL)-33, secreted by such cell types, plays a pivotal role in cancer progression through interaction with cellular constituents of the tumor-inflammation microenvironment. IL-33 belongs to the IL-1 cytokine family and acts as a ligand for the suppressor of tumorigenicity 2 (ST2) receptor. How to coordinate the tumor microenvironment and design and optimize CRC treatment strategies based on IL-33/ST2 signaling therefore remains a challenge. Although IL-33 has established influences on immunity-linked conditions, its effects on CRC progression and prevention and the related mechanisms are still controversial. This review summarizes the controversial activities of IL-33/ST2 in carcinogenesis and cancer prevention. Moreover, IL-33/ST2 signaling is a potential therapeutic target for CRC.

INTRODUCTION
During 2020 alone, approximately 19.3 million newly diagnosed cancer cases were recorded, together with nearly 10 million global cancer deaths [1]. Among these, colorectal cancer (CRC) represents the third most prevalent tumor (10%) and the second most prevalent cause of global cancer mortality (9.4%) [1]. Approximately 10% of all CRC cases are hereditary, while over 90% arise sporadically. In general, tumor initiation and development are primarily determined by key factors such as genetic instability, epigenetic changes, antiapoptotic activity, immune-system circumvention, invasiveness, and metastasis [2]. Three mechanisms of genetic instability in sporadic CRC have been identified: the CpG-island methylation phenotype, chromosomal instability, and microsatellite instability. Several risk factors are related to CRC development, including lack of exercise, smoking, and red meat and alcohol consumption [3]. In addition, obesity, type 2 diabetes and inflammatory bowel disease (IBD) are strongly linked to exacerbated CRC development. The roles of non-neoplastic cells within tumor microenvironments (TMEs) in cancer development have been identified [2,4]. The cytokines, growth factors, and hormones secreted by these non-neoplastic cells are pivotal in cancer progression through interaction with the cellular constituents of the tumor-inflammation microenvironment [5]. Such cytokines include interleukin (IL)-33, a member of the IL-1 cytokine superfamily, which has been shown to mainly invoke T-helper (Th)2 immune responses through its suppressor of tumorigenicity 2 (ST2) receptor [6]. IL-33/ST2 signal transduction is involved in IBD, maintenance of tissue homeostasis, and tumor invasion [7,8]. IL-33 can be pro- or antitumorigenic in CRC, and both activities indicate that IL-33 plays vital roles in recruiting immune cells to modulate the TME. In this review, IL-33/ST2 involvement in colorectal carcinogenesis, progression and therapeutic potential is discussed.

DISCOVERY AND STRUCTURE OF IL-33 AND ITS RECEPTOR ST2
IL-33 was first identified in 2003.
It was described as a nuclear protein highly expressed in high endothelial venules and accordingly given its first name, nuclear factor from high endothelial venules (NF-HEV) [9]. In 2005, IL-33 was recognized as a member of the IL-1 cytokine family and as a ligand for the ST2 receptor [6]. Full-length human IL-33 has a molecular weight of 30 kDa and contains 270 amino acid residues, while murine IL-33 has only 266 [6]. Human IL-33 consists of three domains: the N-terminal domain (aa 1-65), which is important for chromatin binding and nuclear localization; a central domain, which interacts with nuclear factor-κB; and the C-terminal IL-1-like cytokine domain (aa 112-270), which includes the region binding to ST2 [10]. After synthesis, IL-33 is passively released following cellular mechanical stress or damage [11]. Alternatively, precursor IL-33 can be cleaved to produce mature forms that are roughly 10-fold more active than full-length IL-33 and that effectively activate group 2 innate lymphoid cells (ILC2s) [12,13]. The ST2 receptor was first recognized as an oncogene in murine fibroblasts [14,15]. ST2 had been investigated for many years before IL-33 was established as its ligand, so it was long considered an orphan receptor. ST2 is encoded by IL-1RL1 and is a type-1 transmembrane protein [14]. Four ST2 isoforms are produced by alternative splicing: ST2L (ligand), sST2 (soluble), ST2V (variant), and ST2LV (ligand variant). ST2L is a membrane-anchored receptor similar to IL-1R1, with three immunoglobulin-like extracellular domains, a transmembrane domain, and an IL-1R1-like intracellular domain [16,17]. sST2 is a secreted isoform of ST2 that lacks the transmembrane domain but carries the same extracellular domain as ST2L, with 5-9 extra amino acids at the C terminus in humans and mice [16,18]. ST2V resembles sST2 but lacks the third immunoglobulin-like extracellular domain, carrying a hydrophobic tail in its place [19]. ST2LV is an additional soluble isoform with no transmembrane domain [20]. ST2L and sST2 have been thoroughly investigated, although knowledge about ST2V and ST2LV is scarce. ST2L is typically expressed on fibroblasts, mast cells, Th2 lymphocytes, dendritic cells and macrophages, whereas sST2 is mainly produced by fibroblasts and epithelial cells [20].

IL-33/ST2 and CRC carcinogenesis
Like most malignant tumors, CRC carcinogenesis involves multiple factors and processes. For most sporadic CRC, the important causes of carcinogenesis are malignant transformation of adenomas and intestinal polyps, mutation of the tumor suppressor gene APC (adenomatous polyposis coli), and TME formation [21]. Recently, many studies have shown that IL-33/ST2 plays a vital role in CRC occurrence and progression [22]. Cui et al [23] reported that the IL-33/ST2 axis promotes the neoplastic transformation of human colorectal adenoma to CRC, which correlates closely with increased IL-33 expression in CRC tissues compared with adjacent noncancerous tissues. Another study found that IL-33 acts as a mediator of intestinal polyposis and a regulator of tumor stromal cell activation in Apc(min/+) mice, a genetic model of intestinal tumorigenesis [24]. In Apc(min/+) polyps, IL-33 is expressed in tumor epithelial cells, and ST2 is associated with two stromal cell types, subepithelial myofibroblasts and mast cells. IL-33 stimulation induces stromal cells to express components of the extracellular matrix and growth factors that promote tumor development and growth [24].
He et al [25] reported that epithelial IL-33 promotes intestinal tumorigenesis in Apc(min/+) mice with transgenic expression of IL-33 in intestinal epithelial cells, acting through expansion of ST2+ regulatory T (Treg) cells, Th2 cytokine production and alternative activation of macrophages. Conversely, loss of IL-33 or ST2 in Apc(min/+) mice inhibits tumorigenesis and tumor angiogenesis and induces apoptosis in adenomatous polyps [24,25]. This suggests that IL-33 promotes the transition of adenomas and polyposis to CRC through activation of tumor stromal cells and formation of a protumorigenic microenvironment. The TME plays important roles in triggering cancer. The alarmin IL-33 has been shown to be involved in formation of the early TME and to influence carcinogenesis and progression. Pastille and colleagues used animal models and patient samples to show that IL-33/ST2 axis activity restricted effector CD8+ T cell functions in the CRC environment and promoted tumor growth in the colon [26]. In addition, IL-33 downregulates IL-17 differentiation through forkhead box (FOX)P3, indicating an immunosuppressive environment during CRC tumorigenesis [26]. In murine models of colon cancer, IL-33 within tumor regions can recruit and activate macrophages in the microenvironment, leading to prostaglandin E2 upregulation and consequently exacerbating colon cancer stemness and progression [27]. IL-33/ST2 signaling can activate c-Jun and stem cell genes (NANOG, NOTCH3 and OCT3/4) to induce CRC stemness and eventually promote carcinogenesis [27]. More importantly, Taniguchi et al [28] reported a potential role of IL-33 in regulating tumor-initiating cells and their interactions with the stem cell niche, which are necessary for tumor progression, highlighting a new role of IL-33 in promoting CRC stemness and carcinogenesis.

IL-33/ST2 and CRC progression
A major hallmark of CRC progression is chronic inflammation [29]. IL-33 is upregulated in the serum of ulcerative colitis patients and is involved in the development and maintenance of inflammation. Meanwhile, ulcerative colitis is intimately linked to CRC progression, indicating that IL-33 has a pivotal role in triggering colon tumors [30]. Using bone marrow chimera investigations, Kirsten and colleagues showed that the IL-33/ST2 axis is critical for CRC progression, partly because activation of IL-33/ST2 signaling damages intestinal barrier integrity and induces immune cells to express protumorigenic IL-6 [31]. Indeed, there is compelling evidence that serum IL-6 level is linked to late-stage CRC and is a predictor of poor prognosis in CRC patients [32]. Epidermal growth factor (EGF) is also a powerful signaling molecule affecting CRC progression and intestinal epithelial cell development [33-35]. IL-33 and ST2 expression can be strongly stimulated by EGF without increasing extracellular secretion of IL-33. The resulting IL-33 upregulation promotes CRC development, indicating that components of the EGF/IL-33/ST2 axis are novel drug targets against CRC [36]. In addition, CRC initiation and progression can be influenced by the immune microenvironment [37-39]. Multiple investigations have indicated that IL-33 thwarts host antitumor immunity, modulates the tumor stroma and exacerbates angiogenesis, thereby contributing to ST2-dependent CRC [40].
Recently, the interaction between IL-33 and Treg cells has attracted increasing attention. An early study showed that IL-33 can promote Treg cell function in the colorectum, where FOXP3+ Treg cells are abundant [41]. Treg cells can resist dysregulated inflammatory responses and consequently acquire tissue-specific survival and function. IL-33 and Treg cells in the TME are each implicated in CRC progression, although this is still in dispute. IL-33/ST2 signaling exacerbates CRC progression by modulating the phenotypic features of FOXP3+ Treg cells and curtailing IL-17 differentiation [26]. Furthermore, tumor-derived IL-33 can remodel the TME through recruitment of CD11b+/GR1+ and CD11b+/F4/80+ myeloid cells and promote CRC growth and liver metastasis in mice, with potential as a therapeutic target [42]. IL-33 also has a pivotal effect on the functional stability of Treg cells, and genetic deletion of IL-33 improves the effectiveness of cancer immunotherapies [43].

IL-33/ST2 and CRC prevention
IL-33 is considered to have a cancer-promoting role because induction of the IL-33/ST2 axis leads to CRC carcinogenesis and development. However, IL-33 also has a paradoxical role: selected studies have indicated a less well-known tumor-suppressive function in many malignant tumors [44,45]. With respect to cancer prevention, tumoral IL-33 overexpression increases antitumor immune responses and tumor rejection through activation of CD8+ T cells and NK cells [49]. Furthermore, Treg cell depletion synergizes with re-expression of IL-33 to promote cancer-eliminating Th1-type immunity, implying that IL-33 is a promising antitumor cytokine for immunotherapy [49]. Another study indicated a protective role for IL-33/ST2 against CRC invasiveness and metastasis, resulting in reduced colorectal tumor growth [50]. Malik et al [30] demonstrated that IL-33-deficient mice are more susceptible to colitis-associated cancer (CAC), and their study highlighted IL-33, IgA, IL-1α and the microbiota as candidate drug targets against IBD/CAC. Many studies have shed light on IL-33 functions, whereas the literature on ST2 in CRC is scarce. Antitumorigenic functions of the IL-33 receptor have been gradually elucidated in CRC since 2016. Akimoto and co-workers reported that sST2 negatively regulates malignant colon tumor growth in vivo by modifying the TME [51]. They further revealed the underlying mechanisms: sST2 inhibited IL-33-driven angiogenesis, macrophage infiltration and polarization, and Th1 and Th2 activities. Another study, by O'Donnell and colleagues, demonstrated that ST2L is downregulated in colon cancer and that this downregulation increases with tumor grade. Knockdown of ST2 in colon tumor cells led to increased tumor expansion in animal studies, with a decrease in IL-33-driven macrophage infiltration and recruitment through antagonism of the chemokine CCL2 [52]. This indicates that IL-33 has an antitumor function in CRC and that the IL-33/ST2 axis exerts protective functions against colon tumor initiation. Consequently, whether the IL-33/ST2 axis plays a negative role in CRC progression depends on its involvement in the induction of angiogenesis, regulation of antitumor immune responses and TME modulation [53]. Additional studies are needed to validate the precise functions adopted by IL-33/ST2 signaling in CRC (Figure 1).
DIVERSIFIED THERAPEUTICS BASED ON IL-33/ST2 SIGNALING IN CRC
IL-33 is related to carcinogenesis, progression and poor prognosis in some cancers, including CRC [40]. However, due to TME-resident IL-33/ST2 variability, their overexpression or administration as recombinant protein can also inhibit CRC expansion. This suggests the potential of the IL-33/ST2 axis as a drug target for CRC, and many studies have reported possible IL-33/ST2-based strategies for the treatment of CRC (Table 1).

IL-33/ST2 and conventional therapies
Intestinal mucositis and severe diarrhea are commonly associated with cancer chemotherapy and are thus dose-limiting adverse effects. Combining radiation with conventional chemotherapy can exacerbate mucositis, leading to chemotherapeutic dose reductions or inevitable cessation of treatment [54]. Chemotherapy directly causes DNA damage and apoptosis through reactive oxygen species (ROS) and alters cytokine production [55]; as a proinflammatory factor, IL-33 plays a pivotal part in driving inflammation and tumor growth through its ST2 receptor. One investigation highlighted that low-dose IL-33 protected selected tumor cells against platinum-drug-induced cell death and enhanced cellular invasiveness through JNK pathway activation [56]. Thus, regulation of the IL-33/ST2 pathway may relieve inflammation and improve the efficacy of chemotherapy. Irinotecan (CPT-11) is a topoisomerase I inhibitor used as an antitumor drug for metastatic CRC [57,58]. The clinical pharmacokinetics of CPT-11 and its metabolites, such as SN-38, appear to be key for the optimal use of anticancer chemotherapeutics [57]. Systemic CPT-11 treatment causes intense mucosal disruption and diarrhea, coinciding with IL-33 upregulation in the small intestine; however, the symptoms of mucositis were markedly milder in ST2-/- mice. Recombinant IL-33 protein reinforces CPT-11-driven mucositis, whereas blockade of IL-33 with a neutralizing antibody (or soluble ST2) significantly alleviates mucositis and reduces tumor growth under CPT-11 treatment in a mouse model of CT26 colon cancer [59]. These results indicate that thwarting the IL-33/ST2 axis could be exploited as a novel therapy against mucositis, thereby enhancing the beneficial effect of chemotherapy against CRC.

IL-33/ST2 and treatment with immune-checkpoint inhibitors
Immunotherapy represents a powerful method of cancer treatment. Immune checkpoint modulation has been broadly applied to treat multiple cancers following the discovery of cytotoxic T lymphocyte-associated protein 4 and programmed cell death (PD)-1 [60,61], which was recognized with the 2018 Nobel Prize in Physiology or Medicine. Immune checkpoint blockade yields promising clinical results in CRC; however, only the subset of patients with a high-frequency microsatellite instability phenotype develop durable antitumor immune responses, owing to the complicated TME associated with PD-1 and PD ligand 1 (PD-L1) [62,63]. Recent data indicate that IL-33/ST2 can regulate PD-1/PD-L1 signaling within tumors. For example, exogenous IL-33 upregulated PD-1 on CD8+ T cells, together with PD-L1 on murine acute myeloid leukemia (AML) cells [64]. Combining IL-33 with PD-1 antibody dramatically extended survival of AML mice in a CD8+ T-cell-dependent fashion, even leading to full regression in 50% of treated mice.
Another study showed that IL-33 triggered CD8+ T cells and ILC2s in pancreatic tumors, and that activated ILC2s increased PD-1 expression; subsequent combined treatment with IL-33 and PD-1 inhibition enhanced immunotherapy outcomes in a murine model [65]. Recent results suggest that IL-33/ST2 are candidate targets of checkpoint inhibitors for CRC immunotherapy, as they are secreted by lymphocytes, stromal cells and tumor cells to recruit immune cells and remodel the tolerogenic TME. A recent study reported that ST2 is specifically expressed in tumor-associated macrophages (TAMs) of CRC, and that ST2 upregulation is related to poor survival and reduced CD8+ T cell cytotoxicity in CRC [66]. The authors also found that ST2-positive TAMs were recruited into CRC xenograft tumors through the chemokine receptor CXCR3, promoting an immunosuppressive TME. Combining ST2 depletion (using ST2-knockout mice) with PD-1 antibody treatment had a significant suppressive effect on CRC growth, and an IL-33 trap fusion protein reduced tumor-infiltrating ST2+ TAMs and thwarted xenograft tumor expansion in preclinical CRC models. Thus, the IL-33/ST2 axis plays an important part in CRC immunotherapy.

IL-33/ST2 signaling and lymphocyte immunotherapy
As an alarmin and immune-regulatory factor, IL-33 has a pivotal role in regulating the function of a wide range of immune cells. However, whether immune lymphocytes regulated by IL-33/ST2 signaling exert antitumor immunity in CRC is still under investigation. Recent progress suggests positive reactivity based on type 1 effectors (CD8+ T and NK cells) and type 2 effectors (CD4+ T cells, ILC2s, eosinophils, etc.). Several studies have indicated that exogenous or endogenous IL-33 is positively related to the recruitment and activation of CD8+ T and NK cells within the TME. In melanoma and breast cancer models, exogenous application or transgenic expression of IL-33 recruits and activates (IFN-γ+CD107+) CD8+ T and NK cells to orchestrate the TME, limits xenograft tumor expansion and prevents lung metastasis of breast cancer in mice [49,67]. In a CRC model, Xia et al [68] found that overall antitumor responses and IFN-γ expression by tumor-infiltrating CD8+ T cells were impaired in IL-33-deficient mice. Conversely, IL-33 upregulated IFN-γ in activated CD4+/CD8+ T cells, improving CD8+ T cell infiltration and antitumor responses against the protumor effects of Treg cells. These results imply that the balance of CD8+ T cells and Treg cells within the TME is a crucial factor for IL-33-mediated anticancer responses in CRC. In addition to activating Th1 responses, IL-33 also modulates Th2 functions, including those of CD4+ T cells, ILC2s and eosinophils in the TME. IL-33 can directly target conventional and regulatory CD4+ T cells expressing ST2 and promote the immunosuppressive functions of Treg cells, which favors tumor growth and immune evasion [69]. IL-33 preferentially promotes Th2 responses to modulate tumor immunity. In murine CT26 and MC38 CRC models, recombinant IL-33 markedly reduced colon tumor expansion and metastasis to the lungs and liver [70]. IL-33 treatment augmented IFN-γ+CD4+ T cells and upregulated CD40L on tumor-infiltrating lymphocytes (TILs). Moreover, IL-33 was sufficient to upregulate ST2 on CD4+ T cells, but not on CD8+ T or NK cells, suggesting that IL-33/ST2 signaling activates CD4+ T cells through a positive feedback loop.
Emerging studies have demonstrated a positive role for eosinophils in mediating IL-33-driven anticancer immunity in several cancers, including CRC [71]. A more recent study by Kienzl and colleagues demonstrated that IL-33 can inhibit cancer expansion in CT26 engraftment and colitis-associated CRC mouse models [72]. The IL-33-induced effect was abolished in eosinophil-deficient dblGATA-1 mice but was rescued by adoptive transfer of eosinophils activated ex vivo with IL-33 [72]. They further found that IL-33 treatment upregulated eosinophil markers associated with activation and homing (CD11b and Siglec-F) and with degranulation (CD63 and CD107a) in vitro and in vivo. These results imply that eosinophils are required for the antitumor effect of IL-33 in CRC. Moreover, IL-33 stimulation can enrich ILC2s in the TME of many cancers, and ILC2s constitutively express ST2 [73]. Thus, IL-33 directly targets ILC2s and induces their expansion, enrichment and activation in tumors [74]. Indeed, local expression of IL-33 in murine CT26 CRC tumors enhanced MyD88-dependent antitumor ILC2 activity [75]. In this study, IL-33 promoted CXCL2 production by ILC2s and, through a dysfunctional angiogenesis/hypoxia/ROS axis, created a TME in which CXCR2-expressing tumor cells underwent tumor cell-specific apoptosis. These findings highlight the vital role of ILC2s in the IL-33-mediated antitumor effect and its relevance for CRC immunotherapy.

IL-33/ST2 signaling and cancer gene therapy and other blockade strategies
Recently, gene therapy using viral or nonviral vectors to deliver therapeutic genes has attracted increased attention, with notable breakthroughs in the treatment of genetic diseases. Gene therapy also shows promise in the field of human cancer treatment. In our group, cancer gene therapy using oncolytic viruses as vectors has achieved encouraging results. We have constructed multiple oncolytic viruses targeting different cancers, such as CD55-Smad4 for CRC [76], GD55 for liver cancer [77,78] and Ad-wnt(24) for Wnt signaling-positive cancers [79]. In CRC, the oncolytic adenovirus CD55-Smad4 was successfully developed and suppressed CRC cell growth, migration, and tumor stem-cell activity by restraining Wnt/β-catenin signaling. Previous reports have demonstrated that recombinant ST2/IL-33 significantly inhibits CRC growth and enhances antitumor immune effects [72]. This suggests that oncolytic viruses targeting CRC and carrying the IL-33 or ST2 gene have therapeutic potential by combining IL-33 or ST2 overexpression with oncolytic virus-mediated lysis of tumor cells. Our unpublished results showed that oncolytic adenovirus and vaccinia virus carrying the IL-33 gene can effectively inhibit the growth of mouse CT26 CRC cells in vitro, and further in vivo experiments are ongoing (Figure 2).

Figure 2: The potential therapeutics mediated by interleukin-33/suppressor of tumorigenicity 2 in colorectal cancer.

At least two anti-IL-33 antibodies (SAR440340 and MEDI3506) are being developed to treat chronic obstructive pulmonary disease, moderate-to-severe asthma, and chronic bronchitis in clinical phase I and II trials (NCT03387852, NCT03546907, NCT04751487, NCT04570657, NCT04701983, and NCT04631016).
Thus, blockade strategies using anti-IL-33 antibodies may also have potential for the treatment of human cancers, including CRC, in which IL-33 plays a protumorigenic role.

CONCLUSION
IL-33 plays a controversial role in carcinogenesis, cancer prevention and cancer immunity, although the specific mechanisms remain unclear. In CRC, the divergent roles of IL-33 may depend on the TME. Therefore, how to orchestrate the TME to design and optimize appropriate IL-33/ST2-based treatment strategies for CRC is an important question. Such strategies include how to activate and recruit IFN-γ-secreting CD4+ and CD8+ T cells, NK cells, dendritic cells, M1 macrophages, eosinophils and ILC2s, and how to best combine chemotherapy, immune checkpoint inhibitors and cancer gene therapy to achieve more effective treatment of CRC. Moreover, as an alarmin, IL-33 may serve as a potential biomarker for CRC diagnosis, therapy and prognosis.
CRISPR screens and lectin microarrays identify novel high mannose N-glycan regulators

Glycans play critical roles in cellular signaling and function. Unlike proteins, glycan structures are not templated directly from genes but are produced by the concerted activity of many genes, making them historically challenging to study. Here, we present a strategy that utilizes pooled CRISPR screens and lectin microarrays to uncover and characterize regulators of cell surface glycosylation. We applied this approach to study the regulation of high mannose glycans - the starting structure of all asparagine (N)-linked glycans. We used CRISPR screens to uncover the expanded network of genes controlling high mannose surface levels, followed by lectin microarrays to fully measure the complex effect of select regulators on glycosylation globally. Through this, we elucidated how two novel high mannose regulators - TM9SF3 and the CCC complex - control complex N-glycosylation by regulating Golgi morphology and function. Notably, this method allowed us to interrogate Golgi function in depth and reveal that similar disruptions to Golgi morphology can lead to drastically different glycosylation outcomes. Collectively, this work demonstrates a generalizable approach for systematically dissecting the regulatory network underlying glycosylation.

Introduction
All living cells and organisms are covered with glycans - complex carbohydrates linked to proteins, lipids, and RNA 1. Glycans play critical roles in many biological processes - intracellularly, glycans are essential for protein folding and influence the stability, localization, and activity of many proteins within and outside the cell. Extracellularly, glycans on the cell surface mediate cell-cell recognition and interactions, including many immunological responses 2. Under many acute and chronic disease states, glycosylation can become dysregulated and actively contribute to disease progression. For example, the high mannose glycan epitope, typically found intracellularly within the ER and the Golgi, was recently identified as a stress signal for influenza virus infection, and its cell surface presentation is suggested to cause excessive tissue damage through binding innate immune lectins and over-activating the complement pathway 7,8. However, how high mannose or other glycan motifs are regulated at the cell surface remains relatively unknown. Understanding how cell surface glycosylation is regulated has been historically challenging due to the non-templated nature of glycans. Unlike proteins, the biosynthesis of glycan structures is not directly encoded in genes. Instead, glycan synthesis is controlled by an expanded network of genes that regulate biosynthetic enzyme expression and localization, glycan trafficking, organelle function, substrate availability, and carbohydrate metabolism, producing a heterogeneous collection of glycans on the cell surface 9,10. While the biosynthetic enzymes that directly catalyze glycosidic linkages have been mostly mapped out through decades of dedicated research 10,11, the contribution of other genes remains relatively poorly understood. In addition, changes to cell states, such as activation of proteostasis stress response pathways 12, can also drastically alter the glycan repertoire of a cell, adding to the difficulty of understanding how glycosylation is regulated.
While understanding the biology underlying a specific glycan epitope remains challenging, recent advances in glycomic techniques have enabled a comprehensive survey of the glycan landscape of cells and tissues under healthy and disease states 13. In particular, lectin microarrays, which utilize a variety of lectins and antibodies to detect specific glycan moieties, have proven to be a powerful and highly sensitive method for uncovering glycosylation differences between biological samples 14. Lectin microarray analyses have identified glycan changes across many diseases and have been useful for biomarker discovery in predicting disease outcome and vaccine response 7,15-18. However, lectin microarrays alone cannot readily reveal the underlying biology that causes the glycan change in the first place. Recent advances in CRISPR screening have proven to be a powerful tool for understanding the genetics of glycosylation. Utilizing bacterial and plant toxins that bind known glycan moieties, novel genetic regulators have been identified that control the synthesis of glycoproteins and glycolipids 19-21. Expanding on these works, we utilized the accumulated knowledge of naturally isolated lectins and their binding specificities in CRISPR screens and lectin microarrays to identify and characterize novel regulators of cell surface N-glycosylation. Specifically, we applied our strategy to uncover regulators of high mannose glycans - the essential intermediate structure for all N-glycans and an important glycan epitope of the innate immune response. We first used FACS- and magnetic-based cell sorting methods to conduct genome-wide and targeted screens to uncover the expanded network of genes that control cell surface levels of high mannose N-glycans. Next, we employed lectin microarrays to measure the glycan changes comprehensively to obtain mechanistic insights into how select regulators control cell surface glycosylation. Through this, we discovered how two novel regulators of high mannose glycosylation - a previously poorly characterized gene, TM9SF3, and the endocytic recycling machinery CCC complex - control complex N-glycosylation and Golgi morphology and function. Specifically, we found that loss of TM9SF3 function reduces cis- and trans-Golgi colocalization and inhibits complex N-glycan formation, while disruption to the CCC complex leads to Golgi fragmentation yet mildly increases cis/trans colocalization, enhancing complex N-glycan production. Notably, the unbiased interrogation of Golgi function using lectin microarrays revealed that Golgi morphology changes that are similar on a surface level (i.e., fragmentation) can lead to drastically different glycosylation outcomes. Together, these findings reveal 3 novel cell surface high mannose N-glycosylation regulators and validate the strategy to combine CRISPR screening with lectin microarray technologies for revealing novel regulators of glycosylation.
UPRER activation upregulates high mannose glycans on and within cells
All N-glycans begin as the 14-sugar glycan structure (Glc3Man9GlcNAc2), which is added onto selected asparagine residues on nascent proteins as they enter the ER for folding 22. As these glycoproteins mature through the ER and cis-Golgi, they transition through a high mannose stage (Man5-Man9) after initial processing steps that trim off glucose residues. Typically, these high mannose structures are further processed into more complex glycans in the Golgi, such as those elongated with repeating units of Gal and GlcNAc ("poly-LacNAc") or capped with sialic acid residues, resulting in an extensive array of mature, complex N-glycans at the cell surface 22. When cells experience stresses such as influenza viral infection, high mannose glycans can become upregulated at the cell surface and function as a stress signal that binds innate immune lectins 7. However, it is unknown how healthy cells maintain low levels of high mannose at the surface or how these glycans become upregulated under stress conditions. Thus, we chose to focus on the regulation of high mannose glycans with our approach. Unfolded protein response (UPR) activation through the XBP1 pathway was required for high mannose expression in the human lung carcinoma cell line A549 in response to influenza and, in other work, was shown to induce high mannose in other cell lines in a cell-type-specific manner 7,12. We thus focused on understanding how high mannose glycans are regulated under basal and UPRER-activated conditions. To this end, we built a doxycycline (dox)-inducible system to enable the overexpression of XBP1s, the key transcription factor that mediates the IRE1 branch of the UPRER response (Fig. 1a). This dox-inducible XBP1s system was lentivirally introduced into the A549 cell line. Consistent with previous work that used similar strategies to activate branches of the UPRER 12,23, our system also enabled specific upregulation of XBP1s targets (Fig. 1b). Next, we tested whether XBP1s induction by itself could alter cell surface high mannose glycan levels in A549 cells. We utilized two lectins that specifically bind high mannose N-glycans - Hippeastrum hybrid lectin (HHL), which binds N-glycan structures with Man5 to Man8 24, and Griffithsin (GRFT), which binds Man6 to Man9 25. We find that XBP1s activation leads to a slight but highly reproducible increase in both HHL and GRFT binding (Fig. 1c, Extended Data Fig. 1b and c). This increase in binding is reduced by cleaving off high mannose and hybrid glycans using Endoglycosidase H 3, confirming the specificity of the lectins (Extended Data Fig. 1d). Notably, this increase in high mannose glycans on the cell surface also upregulates the binding of the complement pathway protein mannose-binding lectin 2 (MBL2), consistent with previous reports 7 (Fig. 1c, Extended Data Fig. 1d). In addition, partial activation of XBP1s using the small molecule IXA4 26 in wild-type A549 cells also increases cell surface high mannose structures in a dose-dependent manner (Extended Data Fig. 1f). Together, these results show that activating the XBP1s branch of the UPRER in A549 cells can enhance cell surface high mannose levels.
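For readers unfamiliar with how such lectin-binding shifts are typically quantified, a minimal sketch of a fold-change calculation on exported per-cell flow cytometry intensities is shown below. The file names, column layout, and use of median fluorescence intensity are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
import pandas as pd


def mfi_fold_change(control_csv: str, induced_csv: str, channel: str = "FITC_A") -> float:
    """Compare median lectin (e.g., HHL-FITC) signal between two conditions.

    Each CSV is assumed to contain one row per cell, with a column of
    background-subtracted fluorescence intensities for the lectin channel.
    """
    control = pd.read_csv(control_csv)[channel].to_numpy()
    induced = pd.read_csv(induced_csv)[channel].to_numpy()
    # Median fluorescence intensity (MFI) is more robust to outliers than the mean.
    return float(np.median(induced) / np.median(control))


# Hypothetical usage: fold change in HHL binding after XBP1s induction.
# print(mfi_fold_change("a549_dmso_hhl.csv", "a549_dox_hhl.csv"))
```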
Next, to determine how each high mannose glycan structure (Man5-Man9) changes upon XBP1s induction at the whole-cell level, we quantified all high mannose N-glycan structures using Ultra-Performance Liquid Chromatography (UPLC) with fluorescence detection 27. Interestingly, XBP1s induction massively upregulates all high mannose structures at the whole-cell level, with the largest increase in Man6-8 structures (Fig. 1d, Extended Data Fig. 1g). These data confirm that high mannose expression in lung cells can be triggered by XBP1-pathway induction, providing a mechanism for glycan-based reporting of cell damage and infection to the innate immune system. Our results also suggest that the changes in the cell surface glycome likely originate from changes in the early stages of N-glycosylation that occur in the ER and Golgi. Finally, to fully characterize the other glycosylation changes induced by XBP1s activation, we employed lectin microarrays to comprehensively profile changes in the glycan repertoire under basal and XBP1s-induced conditions (Fig. 1e, Source Data 1). We confirmed the changes in high mannose structures (HHL) and observed corresponding upregulation of oligomannose structures (Man3 to Man9) by the increased binding of the lectins SNA-II, UDA, and GNA 24 (Fig. 1f, Extended Data Fig. 1h). Interestingly, we did not observe a corresponding decrease in complex glycans but instead uncovered a concurrent upregulation of complex N-glycans capped with terminal galactose (Gal) and N-acetylgalactosamine (GalNAc), suggesting that cells can upregulate high mannose independently as a stress signal without compromising complex N-glycan synthesis. This is consistent with findings in influenza infection, in which high mannose upregulation did not impact the expression of most complex glycan epitopes 7. Furthermore, we observe an upregulation of O-linked glycans (lectins: AIA, MNA, and MPL), highlighting how XBP1s induction globally alters cellular glycosylation.

Genome-wide CRISPR screen uncovers the expanded network of genes regulating high mannose
To uncover the genes beyond glycan biosynthetic enzymes controlling cell surface high mannose levels, we utilized our cellular system to conduct a genome-wide CRISPR screen (Fig. 2a). To do so, we first engineered the XBP1s-inducible A549 line to also stably express Cas9 and confirmed that concurrent expression of sgRNAs targeting relevant genes, such as XBP1, can alter cell surface presentation of high mannose glycans (Extended Data Fig. 2a). Next, we lentivirally transduced a previously validated genome-wide sgRNA knockout library 28 into the Cas9-expressing, XBP1s-inducible A549 cells, with sgRNAs targeting all protein-coding genes (ten sgRNAs per gene) and ~10,000 negative controls. The cells were then dox-treated to induce XBP1s for 48 hours, fixed, and stained with FITC-labeled HHL. The populations of cells with the top 25% and bottom 25% of HHL signal were selected using fluorescence-activated cell sorting (FACS), such that cells expressing sgRNAs targeting genes that suppress high mannose glycan presentation will be enriched in the top 25% and depleted in the bottom 25%, while cells expressing sgRNAs targeting genes required for high mannose glycans will be enriched in the bottom 25% and depleted in the top 25% population. The proportion of each sgRNA in the two populations was measured by deep sequencing, and significant regulators of high mannose glycan presentation were identified using casTLE 29 (Fig. 2a).
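To illustrate the underlying logic of this comparison (this is a simplified sketch, not the casTLE algorithm), sgRNA read counts from the two sorted bins can be depth-normalized, converted to per-guide log2 fold changes, and summarized per gene. The count-table layout below is a hypothetical assumption.

```python
import numpy as np
import pandas as pd


def gene_enrichment(counts: pd.DataFrame, pseudocount: float = 1.0) -> pd.Series:
    """Summarize sgRNA enrichment per gene between sorted populations.

    `counts` is assumed to have columns: 'gene', 'sgRNA', 'high', 'low', where
    'high'/'low' are raw read counts from the top/bottom 25% HHL bins.
    """
    df = counts.copy()
    # Normalize each bin to reads per million to correct for sequencing depth.
    for col in ("high", "low"):
        df[col + "_rpm"] = df[col] / df[col].sum() * 1e6
    # Per-guide log2 fold change between the HHL-high and HHL-low bins.
    df["lfc"] = np.log2((df["high_rpm"] + pseudocount) / (df["low_rpm"] + pseudocount))
    # Median across guides is a simple, outlier-tolerant gene-level summary.
    return df.groupby("gene")["lfc"].median().sort_values(ascending=False)


# Genes with strongly positive scores would be candidate suppressors of surface
# high mannose; strongly negative scores would be candidate positive regulators.
```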
This initial screen identified 109 known and novel regulators of high mannose glycans at a 10% false discovery rate (Fig. 2b, Source Data 2). Among the strongest hits were biosynthetic enzymes directly involved in N-glycan maturation - MAN1A2 and MGAT1 22. These are Golgi-localized enzymes that act in sequential steps to remove mannoses from Man8 and Man9 structures to form Man5 and to add a GlcNAc residue (Fig. 2c). Deletion of any of these enzymes blocks glycan processing to more complex structures and can therefore lead to an increase in high mannose structures. Detection of these positive controls indicates that our screening strategy worked well to identify modulators of the high mannose epitope. Besides enzymes involved in glycosylation, the strongest hits that enhanced high mannose glycan expression were members of the tail-anchored (TA) protein insertion pathway (Fig. 2b). Knocking out four of the six canonical members (WRB, GET4, ASNA1, and CAMLG) leads to an upregulation of high mannose structures. This is likely because disruption of the TA-insertion pathway mislocalizes essential Golgi proteins 30,31, inhibiting proper N-glycan processing in the Golgi. Indeed, when we inhibited ASNA1 using the small molecule Retro-2 31,32, cell surface high mannose glycans became upregulated under both basal and XBP1s-induced conditions (Fig. 2d). In contrast, many of the strongest genes whose deletion caused loss of high mannose, even in the presence of induced XBP1s, were transcriptional regulators, some of which are likely to be involved in dox-induced overexpression of XBP1s rather than in direct induction of high mannose glycans by XBP1s. Given this, we decided to focus on genes whose deletion enhanced high mannose levels in our assay regardless of XBP1s. Next, we tested whether our top hits have the same impact on high mannose glycans under basal conditions. We established individual CRISPRi-knockdown cell lines with two independent sgRNAs each and assayed how the disruption of each gene impacted cell surface high mannose levels using competitive HHL binding assays (Fig. 2e). We found that all our top hits regulate high mannose levels in both basal and XBP1s-induced conditions (Fig. 2f, Extended Data Fig. 2c). Together, these results highlight the critical role of Golgi function in regulating cell surface presentation of high mannose glycans.

Magnetic sorting-based CRISPR screens uncover additional novel regulators of high mannose glycans under basal and UPRER-induced conditions
We decided to focus our attention on understanding high mannose regulation under basal, unstressed conditions. To do so, we generated a CRISPRi sublibrary targeting all genome-wide hits and genes functionally connected to our top hits, totaling 292 genes, with five sgRNAs each and 540 negative controls (Supplementary Table 1). Moreover, to enable faster screening at high coverage, we employed magnetic-activated cell sorting (MACS) to separate cells with high versus low levels of cell surface high mannose (Fig. 3a). We screened under both basal (untreated) and UPRER-induced (XBP1s, dox) conditions to identify genes that impacted high mannose generally.
Briefly, the targeted sgRNA library was lentivirally transduced into the A549 cell line with constitutively active CRISPRi machinery and dox-inducible XBP1s. The resulting cells were either treated with dox to induce XBP1s expression or left untreated. Cells were then lifted, stained with HHL conjugated to magnetic particles and separated magnetically, such that cells with increased levels of high mannose on the cell surface would be retained by the magnet, while cells with less high mannose would be eluted. Each population was subjected to three rounds of separation. The proportion of each sgRNA was measured by deep sequencing and analyzed by casTLE 29. This strategy validated 77 hits from our genome-wide screen and further identified 111 additional genes that regulate cell surface high mannose glycosylation under XBP1s induction. The increased number of genes identified in this secondary screen is likely due to the increased sensitivity from higher library coverage and CRISPR knockdown, which enabled essential genes to be more readily identified. Among these, 118 hits also regulate high mannose glycosylation under basal conditions. These include genes directly regulating early steps in processing the high mannose structure (e.g., MAN1A1, MAN1A2, and MGAT1). Identifying these glycosylation enzymes indicates that the screening approach worked well and has increased sensitivity in detecting high mannose regulators compared to the genome-wide screen. In addition, known Golgi regulators, such as all members of the COG complex (COG1-8), were also found to be regulators of the high mannose epitope (Fig. 3b and c, Source Data 3). Top hits were validated using competitive HHL binding assays (Fig. 3d). Our strategy also enabled us to uncover genes that, when depleted, suppress high mannose levels (Fig. 3b and c). Interestingly, these include many members of the CCC protein complex, which consists of CCDC22, CCDC93, and any of the ten COMMD proteins. The CCC complex works closely with the retriever complex to regulate protein recycling between the endosome and the cell surface 33,34, but no glycosyltransferases have been reported as cargo. Given the tight connection between endocytic recycling and the trans-Golgi network 34, it is plausible that the CCC complex might be acting through the Golgi to mediate high mannose levels. However, the precise role of the CCC complex in regulating high mannose or other types of glycosylation remains unclear. As high mannose is a key intermediate for all N-glycans, we expected that many of our hits would impact other glycan structures along the N-glycan maturation pathway. Therefore, to evaluate how the top hits affect other forms of glycosylation, we measured changes in other glycan epitopes using a panel of lectins with known specificities 24 on live, intact cells (Extended Data Fig. 3c). We found that knocking down known Golgi regulators (COG3, COG6, and GET1) generally shifts cells to display more high and oligomannose structures and fewer branched and complex epitopes. In contrast, disrupting the CCC complex (CCDC22 and VPS35L) leads to an upregulation of mature terminal glycan epitopes such as sialic acids and GalNAc and a corresponding downregulation of high and oligomannose structures. These results show that the top hits are not merely affecting glycan density on the cell surface but are impacting the cell's glycosylation pathways.
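One common way to summarize such a lectin-panel experiment is a knockdown-by-lectin matrix of log2 ratios relative to control cells, which makes shifts toward oligomannose versus complex epitopes easy to read off. The sketch below assumes median fluorescence intensities have already been extracted per sample; the numbers and lectin annotations are hypothetical and for illustration only.

```python
import numpy as np
import pandas as pd

# Hypothetical median fluorescence intensities (rows: lectins, columns: samples).
mfi = pd.DataFrame(
    {
        "control": [100.0, 80.0, 150.0, 200.0],
        "COG3_kd": [260.0, 170.0, 90.0, 120.0],
        "CCDC22_kd": [55.0, 50.0, 240.0, 310.0],
    },
    index=[
        "HHL (high mannose)",
        "GNA (oligomannose)",
        "SNA (a2,6-sialic acid)",
        "WGA (GlcNAc/sialic acid)",
    ],
)

# Log2 ratio of each knockdown to control; positive values mean the epitope is
# higher in the knockdown than in control cells.
log2_ratio = np.log2(mfi[["COG3_kd", "CCDC22_kd"]].div(mfi["control"], axis=0))
print(log2_ratio.round(2))
```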
Together, our two-tiered screening approach allowed us to uncover known and novel regulators of high mannose glycosylation beyond the expected biosynthetic enzymes, under both basal and XBP1s-induced conditions. However, identifying the genes alone does not provide sufficient information for understanding how these regulators, particularly ones without a known connection to the biosynthetic enzymes, control glycosylation. Thus, we next sought to investigate how two regulators with opposing phenotypes - TM9SF3 and the CCC complex - act to regulate glycosylation.

TM9SF3 regulates Golgi organization and promotes N-glycan maturation
One of the strongest hits in our screens was a poorly characterized gene, TM9SF3 (Transmembrane 9 Superfamily Member 3). Knockdown of this gene leads to a strong upregulation of cell surface high mannose under both basal and XBP1s-induced conditions (Fig. 3b, d, and e). TM9SF3 belongs to a family of four multipass membrane proteins characterized by nine transmembrane domains and was previously found to be localized to the Golgi 35,36. Interestingly, its knockdown leads to similar glycan changes as known Golgi regulators (Extended Data Fig. 3c), suggesting that TM9SF3's role may be linked to Golgi function. Moreover, another family member, TM9SF2, has been shown to regulate glycolipid synthesis 19,20. However, the role of any TM9SF in N-linked glycan regulation is unknown. To begin understanding the mechanism by which TM9SF3 regulates high mannose glycosylation on the cell surface, we first validated its effect on high mannose by establishing three knockdown lines using independent sgRNAs and found that, as expected, all three lines have increased high mannose under both basal and XBP1s conditions (Extended Data Fig. 4a, b). We next tested whether this was a general property of members of the TM9SF family. In line with our screen results, we find that only TM9SF3 acts to control high mannose levels on the cell surface (Fig. 4a, Extended Data Fig. 4c). We reasoned that TM9SF3 might play a role in Golgi function and morphology. The Golgi has three compartments: the cis-Golgi, where mannosidases reside; the medial-Golgi; and the trans-Golgi network (TGN), where complex N-glycans and sialosides are synthesized. Using intracellular staining coupled with flow cytometry quantification to study TM9SF3 knockdown cells (TM9SF3-KD), we observe a slight decrease in the TGN marker TGN46, suggesting that there might be mild defects in TGN function (Fig. 4b, Extended Data Fig. 4d). Characterization using confocal microscopy shows a similar reduction in TGN46 staining (Fig. 4c). Interestingly, imaging also revealed changes in cis- and medial-Golgi morphology in TM9SF3-KD cells. The cis- and medial-Golgi compartments become highly dispersed, whereas TGN morphology and dispersion remain largely unchanged in TM9SF3-KD cells (Fig. 4c and Extended Data Fig. 4f-h). This leads to a reduction in cis-Golgi compartments that colocalize with the TGN when compared to control cells (Fig. 4d). These results indicate that TM9SF3 regulates Golgi organization, which is essential for proper glycan maturation.
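Colocalization between a cis-Golgi channel and a TGN channel is commonly quantified with Manders-type coefficients on registered, background-subtracted confocal images. The sketch below shows that generic calculation; the thresholds and random stand-in images are assumptions and do not represent the authors' exact image-analysis workflow.

```python
import numpy as np


def manders_m1(channel_a: np.ndarray, channel_b: np.ndarray,
               thresh_a: float, thresh_b: float) -> float:
    """Fraction of channel-A signal (e.g., a cis-Golgi marker) that overlaps
    above-threshold channel-B signal (e.g., a TGN marker)."""
    a = np.where(channel_a > thresh_a, channel_a, 0.0)
    overlap = a[channel_b > thresh_b].sum()
    total = a.sum()
    return float(overlap / total) if total > 0 else 0.0


# Example with random data standing in for two confocal channels.
rng = np.random.default_rng(0)
cis, tgn = rng.random((512, 512)), rng.random((512, 512))
print(manders_m1(cis, tgn, thresh_a=0.5, thresh_b=0.5))
```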
Next, we used lectin microarrays to measure changes in global glycosylation in an unbiased manner. Because the enzymes involved in glycan processing and their Golgi localization are largely known, obtaining a comprehensive survey of the glycan repertoire can provide mechanistic insights into which glycosylation steps, and potentially which Golgi compartments, are altered when our gene of interest is knocked down. Surprisingly, this revealed a general upregulation of oligomannose glycans, suggesting that the early steps of N-glycan remodeling that convert high mannose into oligomannose structures, which occur in the cis- and medial-Golgi, can proceed normally despite the altered morphology (Fig. 4e, Extended Data Fig. 4j, Source Data 4). In addition, we observed a reduction in complex LacNAc epitopes, indicating that the final steps of glycan elongation and capping needed for forming complex glycans are inhibited. Indeed, in addition to a reduction in complex LacNAc epitopes, cell surface lectin binding assays also confirm the downregulation of other complex glycan epitopes such as α2,3-sialic acids (Fig. 4f, Extended Data Fig. 4k). Together, these results suggest that the fragmented cis- and medial-Golgi and the reduction in TGN in TM9SF3 knockdown cells may impede the trafficking of glycoproteins through Golgi compartments for glycan remodeling, resulting in a glycan repertoire enriched in high and oligomannose structures (Extended Data Fig. 5j).

The CCC complex negatively regulates Golgi function and complex glycan formation
Finally, we sought to study the role of the CCC complex in regulating N-glycosylation, given that multiple complex members were identified as regulators of the high mannose epitope. To first validate our screen results and carefully determine how each complex member impacts the high mannose epitope, we established individual knockdown lines of each member using CRISPRi with two sgRNAs each. Consistent with the screen results, we find that knocking down its core members (CCDC22 and CCDC93) and 7 of its 10 COMMD members reduces the high mannose epitope on the cell surface under both basal and XBP1s-induced conditions (Fig. 5a, Extended Data Fig. 5a). Notably, knocking down VPS35L, a component that the CCC shares with the Retriever complex 33, also downregulates high mannose and leads to a similar glycan profile as CCDC22 knockdown (Extended Data Fig. 3c), suggesting that the CCC and Retriever complexes may act together to regulate glycosylation. Given the critical role of protein recycling in the secretory pathway, we next sought to test how the Golgi might be impacted by disruption of the CCC complex. To do so, we generated a stable A549 line with the essential CCC complex component CCDC22 knocked down (Extended Data Fig. 5b). We found that CCDC22 depletion leads to a slight upregulation of both the cis-/medial-Golgi marker GM130 and the TGN marker TGN46 (Fig. 5b, Extended Data Fig. 5c). To characterize these Golgi changes further, we used confocal microscopy to monitor cis-/medial-Golgi as well as TGN morphology in CCDC22 knockdown cells. Surprisingly, we observed a dispersed cis-/medial-Golgi phenotype, similar to that observed in TM9SF3-KD cells despite the opposing phenotypes (Fig. 5c, Extended Data Fig. 5d and e). In the CCDC22-KD cells, we also observed a similar fragmentation and dispersion of the TGN (Fig. 5c, Extended Data Fig.
5f and g), which was consistent with previous reports 37. Interestingly, despite the dispersion of the Golgi compartments, the cis-Golgi becomes even more colocalized with the TGN (Fig. 5c and d). These findings led us to hypothesize that disrupting the CCC complex might either (1) trap high mannose glycans intracellularly, preventing them from reaching the cell surface, or (2) enhance high mannose remodeling into more complex glycans through increased association between the Golgi compartments. To test these possibilities, we utilized lectin microarrays to thoroughly interrogate glycosylation and Golgi function. This revealed a dramatic upregulation of complex glycan epitopes - glycans of CCDC22 knockdown cells are more likely to be highly branched, elongated with N-acetyllactosamine (LacNAc), and capped with terminal sialic acids or galactose (Fig. 5e, Extended Data Fig. 5h, Source Data 4). These changes are matched by their cell surface staining (Fig. 5f, Extended Data Fig. 5i). These findings strongly suggest that disruption of the CCC complex enhances the process by which high mannose glycans are remodeled into more complex N-glycans, resulting in an upregulation of complex glycans at the expense of high mannose glycans (Extended Data Fig. 5k). This may be due to the increased association between the Golgi compartments, allowing glycoproteins to be more efficiently trafficked through the Golgi and thereby promoting glycan maturation. The expanded Golgi network may also be concentrating glycan synthesis enzymes, particularly elongation and capping glycosyltransferases, to generate more complex glycans. Together, our results indicate that the CCC complex is a negative regulator of Golgi function and complex N-glycan formation.

Discussion
In this study, we present an approach utilizing CRISPR screening and lectin microarrays to identify and characterize the network of genes that regulate cell surface glycosylation. Applying this strategy, we first used genome-wide and targeted CRISPR screens to uncover regulators of high mannose glycosylation, which enabled us to identify genes beyond the known biosynthetic enzymes. We then used lectin microarrays to comprehensively measure glycosylation changes in two novel regulators - a previously uncharacterized gene, TM9SF3, and the protein recycling machinery CCC complex. Our analyses indicate that TM9SF3 is a regulator of Golgi organization and is required for proper complex N-glycan synthesis, whereas the CCC complex is revealed to be a negative regulator of Golgi function and complex glycosylation.
While it is no surprise that regulators of the Golgi would influence high mannose and other types of glycosylation, our approach allowed us to identify regulators of Golgi function in a manner that traditional morphology or single-glycoprotein analyses could not provide. Notably, the use of lectin microarrays allowed us to rapidly measure changes in N- and O-glycans simultaneously, providing comprehensive insights into the state of glycosylation pathways in the cell without the need to follow specific glycosyltransferases, which can be technically challenging due to their overlapping functions as well as low protein expression. Specifically, our work found an unexpected disconnection between Golgi morphology and function, in which fragmented and dispersed Golgi appear to retain the ability to process glycans. Interestingly, the scattered, smaller Golgi structures observed in our TM9SF3 and CCDC22 knockdown cells are reminiscent of Golgi satellites or outposts in dendrites of neurons, where localized glycosylation events can occur in response to neuronal excitation 26,38, suggesting that such regulation of Golgi morphology and function may be a general mechanism by which cells control glycosylation. Specifically, fragmentation and dispersion restricted to the cis- and medial-Golgi, along with disconnection from the TGN, as we observe in TM9SF3-KD cells, might allow high and oligomannose glycans to bypass the intact trans-Golgi and deplete cells of their complex glycan structures. On the other hand, fragmentation and rearrangement of the Golgi that brings the cis- and trans-Golgi into closer contact, as we observed when the CCC complex is disrupted, might allow for concentration of specific glycosyltransferases and/or more efficient trafficking through the Golgi, enhancing remodeling and upregulating complex glycans. Follow-up studies will be required to determine whether these genes regulate glycome changes in disease states and to fully elucidate how they may dynamically regulate glycosylation enzymes and the glycosylation of specific proteins. Glycosylation changes brought about by changes in Golgi dynamics can have significant implications for how cells interact with the immune system. In particular, the upregulation of complex epitopes capped with sialic acids has been shown to suppress the immune system 39. In contrast, excess high mannose epitopes can over-activate complement pathways through interaction with mannose-binding lectin 7,8 and promote cancer metastasis 40. Notably, Golgi dysregulation is a feature of many diseases, including bacterial and viral pathogenic infections, cancers, and neurodegenerative diseases 41-43. Careful research into how Golgi alterations in various diseases regulate and change glycosylation can provide insight into how they may alter cell surface glycan signals to escape surveillance or overactivate the immune system to cause chronic inflammation. Together, our work discovered novel regulators of high mannose glycosylation and Golgi function. Additionally, our work demonstrates a readily generalizable approach combining CRISPR screens and lectin microarrays for dissecting the complex network of genes that controls the production of any glycan epitope. This can be easily adapted to different cell types for studying the cell-type specificity of glycosylation 12,44. Collectively, this represents a powerful method for understanding glycosylation regulation and can allow us to investigate the origins of altered glycosylation in many diseases.
RT-qPCR for UPR ER targets

A549 cells with dox-inducible Cas9 were either treated with 2 µg/mL doxycycline (Sigma, D3072) or an equal volume of DMSO for 48 hours. Cells were washed 2x with ice-cold PBS (Gibco, 10010049). TRIzol reagent (Invitrogen, 15596026) and the RNeasy micro kit (Qiagen, catalog no. 74004) were used in conjunction to isolate cellular RNA according to the manufacturers' instructions. cDNA synthesis from the purified RNA was performed using the QuantiTect Reverse Transcription kit (Qiagen, catalog number 205311) according to the manufacturer's instructions. SYBR Green master mix (Applied Biosystems, catalog number A46109) and the primers listed below were used to set up qPCR reactions, which were analyzed on a QuantStudio 6 Flex.

UPLC quantification of high mannose N-glycan structures

Proteins were harvested from A549s with or without XBP1s induction using 1% NP-40 lysis buffer (1% NP-40, 150 mM NaCl, and 50 mM Tris-Cl pH 8) supplemented with cOmplete protease inhibitor (Roche, 11836170001). Isolated proteins were flash frozen and sent to UC San Diego's GlycoAnalytic Core for N-glycan analysis using Ultra-Performance Liquid Chromatography (UPLC) with fluorescent detection. Briefly, N-glycans were cleaved off by PNGase F, purified, and labeled with procainamide to allow for detection. The same amount of high mannose N-glycan structures with Man5-Man9 was spiked into each sample to allow for relative quantification of each high mannose N-glycan structure.

Lectin microarrays

Flash-frozen A549 cell pellets were washed with protease inhibitor cocktail-supplemented PBS and sonicated on ice until homogenous. 20 µg of protein from each homogenized sample was then labeled with Alexa Fluor 555-NHS. A reference sample was prepared by pooling equal amounts (by total protein) of all samples and labeled with Alexa Fluor 647-NHS. Lectin microarray printing, hybridization, and data analysis were performed as previously described 14. Details for the print are provided in the MIRAGE table (Supplementary Table 4).

FACS-based CRISPR-deletion screen for high mannose regulators

A previously established genome-wide, 10 sgRNA per gene CRISPR deletion library, which is separated into 9 sublibraries, was used for the genome-wide screen. A549s confirmed to stably express Cas9 and the dox-inducible XBP1s circuit were transduced with one sublibrary at a time at a multiplicity of infection (MOI) of 0.3-0.4. Cells expressing sgRNAs were selected using puromycin (Gibco, A1113803) at 1 µg/mL for 3-4 days such that >90% of cells were mCherry-positive as measured by flow cytometry. Cells were then allowed to expand for up to 7 days. Deep sequencing was used to confirm sufficient sgRNA representation in each library.
The screen was performed one sublibrary at a time due to the large amount of FACS required. For each sublibrary, cells were treated with 2 µg/mL dox for 48 hours to induce XBP1s expression. Cells were dissociated with Accutase and fixed with 4% PFA. Fixed cells were stained with HHL-FITC at 10 µg/mL in 3% BSA for 2 hours at 4 °C with rotation. Cells were then washed 2x with dPBS and resuspended in 3% BSA. HHL-stained cells were sorted on a BD Aria for the top and bottom 25% of HHL signal, with at least 1000x coverage in each population. Cells were sorted within a week. The recovered cells were unfixed by incubating with proteinase K (Qiagen, 19133) overnight at 56°C with shaking. Genomic DNA of each population was extracted using the Qiagen DNA Blood Midi kit (Qiagen, 51183). The sgRNAs were amplified and prepared for sequencing with a previously described nested PCR protocol, with slight modification to make the sgRNA sequencing library compatible with the Illumina read 1 primer. Briefly, the sgRNA-encoding constructs were first amplified with primers oKT187 and oKT188, followed by a second PCR to introduce staggered sequences and indices for multiplexing (see Supplementary Table 2 for primer sequences). The resulting PCR products were gel purified prior to sequencing on an Illumina HiSeq. Hit identification was performed using CasTLE 29. See Supplementary Table 3 for all sgRNAs used for validation.

MACS-based CRISPR-inhibition screens

To generate CRISPRi A549s, a CRISPRi construct (pLX_311-KRAB-dCas9, gift from John Doench & William Hahn & David Root, Addgene plasmid #96918; http://n2t.net/addgene:96918; RRID:Addgene_96918) was lentivirally introduced into the A549s expressing inducible XBP1s. These cells were selected with blasticidin (10 µg/mL) and single-cell cloned to ensure stable CRISPRi machinery expression. To generate the secondary CRISPRi screening library, sgRNAs targeting a total of 292 genes, including all genes that passed 10% FDR in the genome-wide screen as well as functionally related genes, were designed using CRISPick 45,46; these, along with ~500 control sgRNAs, were synthesized by Twist Bioscience and cloned into pMCB320 using BstXI/BlpI overhangs after PCR amplification (see Supplementary Table 1 for the complete list of genes and sgRNAs). This library was lentivirally installed into the A549s expressing the CRISPRi machinery as well as inducible XBP1s, and selected for with puromycin (1 µg/mL).
For the screen, 50 million library cells per condition were seeded in ten 15 cm plates. Cells were either treated with dox for 48 hours to induce XBP1s expression or with an equal volume of DMSO as a basal control. Cells were then lifted with Accutase, pooled, and washed 2x with dPBS. Cells were then resuspended in 10 mL 1% BSA and incubated with 100 µL of HHL coupled to magnetic beads, which were prepared by mixing biotinylated HHL and MojoSort streptavidin nanobeads (BioLegend, 480016) 1:1 for 30 minutes on ice. HHL-beads were allowed to bind to cells for 1 hour at 4°C with rotation. Cells were then washed 2x with cold MojoSort buffer (BioLegend, 480017) and resuspended in MojoSort buffer. Cells were then placed on a magnet and allowed to separate for 10 minutes. Unbound cells were collected into new tubes, whereas bound cells were resuspended in MojoSort buffer and allowed to separate again. After 10 minutes of separation, the unbound population was discarded, and the bound population was resuspended and allowed to separate one more time to increase purity. Similarly, the initial unbound population was placed on the magnet again and allowed to separate twice more, keeping the unbound population and discarding the bound population. Each bound and unbound population underwent a total of three rounds of separation. A total of 10M and 20M cells were collected for the high mannose-low and -high populations, representing 5000x and 10000x coverage of the sgRNA library, respectively. Finally, genomic DNA was extracted from all resulting populations using the Qiagen Blood Midi Prep, and sgRNAs were prepared for sequencing in the same manner as in the genome-wide screen. Hit identification was performed using CasTLE 29. See Supplementary Table 3 for all sgRNAs used for validation.
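Hit calling for both screens was done with casTLE, as stated above. Purely as an illustration of what the underlying count data look like, the sketch below computes a simpler per-gene log2 enrichment between the high- and low-HHL populations; the file names and column layout are hypothetical and not part of the actual pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical sgRNA count tables from the sorted/separated populations,
# one row per sgRNA with columns: sgRNA, gene, count.
high = pd.read_csv("hhl_high_counts.csv")
low = pd.read_csv("hhl_low_counts.csv")

counts = high.merge(low, on=["sgRNA", "gene"], suffixes=("_high", "_low"))

# Normalize each population to reads per million before comparing.
for col in ("count_high", "count_low"):
    counts[col + "_rpm"] = counts[col] / counts[col].sum() * 1e6

# Pseudocount of 1 avoids division by zero for dropout sgRNAs.
counts["log2_enrichment"] = np.log2(
    (counts["count_high_rpm"] + 1) / (counts["count_low_rpm"] + 1)
)

# Crude per-gene summary: median enrichment across its sgRNAs.
# casTLE instead estimates a maximum-likelihood effect size and confidence.
per_gene = counts.groupby("gene")["log2_enrichment"].median().sort_values()
print(per_gene.tail(10))  # genes whose loss increases surface high mannose
print(per_gene.head(10))  # genes whose loss decreases surface high mannose
```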
Immunofluorescence and confocal microscopy

Cells grown on glass coverslips were stained using standard immunocytochemistry techniques. Briefly, cells were fixed with 4% PFA, permeabilized with 0.1% Triton X-100, blocked with 3% BSA, and stained with the following antibodies: mouse anti-GM130 (1:250), rabbit anti-TGN46 (1:500), and Phalloidin Alexa Fluor 647 (ThermoFisher, A30107). Coverslips were mounted using VectaShield with DAPI (Vector Laboratories, H-1800-2). All images were collected on a Nikon Ti-E inverted microscope (Nikon Instruments, Melville, NY) equipped with a Plan Apo 60× oil objective. Images were acquired using a Zyla 5.5 camera (Andor Technology), using the iQ3 acquisition software (Andor Technology).

a, Schematic for the FACS-based CRISPR screen. Cas9-expressing A549s were lentivirally transduced with a genome-wide CRISPR-deletion sgRNA library. Resulting cells were dox-treated to induce XBP1s overexpression for 48 hours. Cells were then gently lifted with Accutase, fixed, and stained with FITC-labeled HHL. The top and bottom 25% of HHL-stained cells were isolated by FACS. The resulting populations were subjected to deep sequencing and analysis. The screen was performed in duplicate. b, Volcano plot of all genes indicating effect and confidence scores for the genome-wide screen performed in duplicate. Effect and P values were calculated by casTLE. c, Schematic for the initial steps of N-glycan mannose-trimming and remodeling. All three enzymes indicated are hits in the genome-wide screen. d, Disruption of the tail-anchored protein insertion pathway by the ASNA1 inhibitor Retro-2 in wild type A549s also upregulates cell surface high mannose glycan levels. A549s were treated with 2 µg/mL dox, 100 µM of Retro-2, both, or left untreated for 48 hours. Resulting cells were lifted with Accutase and stained with FITC-labeled HHL, followed by flow cytometry analysis. Data are presented as mean ± s.e.m. of the median of each replicate and are representative of two independent experiments performed in triplicate with consistent results. e, Schematic for competitive binding assays for measuring changes in high mannose levels. Cells expressing sgRNAs for CRISPRi-mediated knockdown (KD) and miRFP and cells expressing a control sgRNA and BFP were cocultured in a 1:1 ratio. Cells were either treated with dox to induce XBP1s or left untreated for 48 hours. Resulting cells were lifted and stained with HHL-FITC, and the log2 ratio of HHL intensity of KD:control was determined using flow cytometry. f, Validation of hits in XBP1s-induced A549s using competitive HHL binding assays. Data are presented as mean ± s.e.m.
and are representative of two independent experiments performed in triplicate with consistent results.

a, Schematic for the MACS-based CRISPR screen. A549 cells stably expressing CRISPRi machinery and the targeted sgRNA sublibrary were either dox-treated to induce XBP1s or left untreated for 48 hours. Cells were lifted and incubated with HHL coupled to magnetic beads. The cells were then placed on a magnet, on which high HHL-binding cells were retained, whereas the low HHL-binding cells were removed from the population. This separation was repeated twice more on each of the high and low HHL-binding populations to improve their purity. Finally, each resulting population was subjected to deep sequencing and analysis to identify hits. The screen was performed in duplicate. b, The maximum effect size (center value) estimated by CasTLE from both basal and XBP1s-induced conditions with five independent sgRNAs per gene. The bars represent the 95% credible interval, with red representing XBP1s and blue representing basal conditions. Only genes considered to be a hit in at least one condition are shown. Genes are ordered in descending order of the estimated maximum effect size of the XBP1s-induced condition. The top 30 positive and negative hits are shown in the expanded panels. c, Top 30 regulators for high mannose N-glycans with their reported subcellular localization. d, Validation of hits in A549 under basal conditions using competitive HHL binding assays. Each gene is knocked down by co-expression of two independent sgRNAs. Data are presented as mean ± s.e.m. and are representative of two independent experiments performed in triplicate with consistent results. e, Validation of hits in A549 under XBP1s-induced conditions using competitive HHL binding assays. Each gene is knocked down by co-expression of two independent sgRNAs. Data are presented as mean ± s.e.m. and are representative of two independent experiments performed in triplicate with consistent results.

a, Competitive HHL binding assay in A549s expressing three independent sgRNAs targeting TM9SF3. Data are presented as mean ± s.e.m. and are representative of two independent experiments performed in triplicate. b, Competitive HHL binding assay in A549s under basal conditions expressing three independent sgRNAs targeting TM9SF3. Data are presented as mean ± s.e.m. and are representative of two independent experiments performed in triplicate with consistent results. c, RT-qPCR for TM9SF family members. Gene expression is normalized to housekeeping genes GAPDH and HPRT1. d, Flow cytometry quantification of intracellular staining of GM130 and TGN46 in TM9SF3 knockdown cells under basal and XBP1s-induced conditions compared to wildtype control. Data are presented as mean ± s.e.m.
and are representative of three independent experiments performed in triplicate with consistent results. e, Schematic for microscopy analysis. Wildtype control cells (mCherry-positive) were cocultured with TM9SF3 knockdown cells (mCherry-negative) as in-well internal controls. f and h, Representative confocal microscopy images of TM9SF3 knockdown and wildtype control cells, stained with the cis-/medial-Golgi marker GM130 (f) or the TGN marker TGN46 (h). Wildtype cells are outlined in dotted white lines. Magnified views of the red boxed areas are shown in the right-most column, with red arrows denoting KD cells. Scale bars, 10 μm. Images are representative of two independent experiments performed in triplicate. g and i, Quantification of average cis-/medial-Golgi or TGN distances from the nucleus in Extended Data Fig. 4f and 4h, respectively. Each data point represents the mean distance of a single cell. The P-value is calculated by the Mann-Whitney U test. j, Volcano plot for lectin microarray results of XBP1s-induced A549 cells with TM9SF3 knocked down compared to wild type control. Lectins are color-coded by their glycan-binding specificities. k, Competitive cell surface lectin binding assay for TM9SF3 knocked-down A549s compared to wildtype control under XBP1s-induced conditions. Lectin binding specificities and the location where the modification predominantly occurs are indicated. Data are presented as mean ± s.e.m. and are representative of two independent experiments performed in triplicate with consistent results.

Extended Data Fig. 5: a, Competitive HHL binding assays on A549s under XBP1s-induced conditions for knockdown of all members of the CCC complex. Each gene is knocked down by co-expression of two independent sgRNAs. Data are presented as mean ± s.e.m. and are representative of three independent experiments performed in triplicate with consistent results. b, Western blot of the CCDC22 knockdown cell line showing the expected reduction in CCDC22 protein levels. c, Flow cytometry quantification of intracellular staining of GM130 and TGN46 in CCDC22 knockdown cells under basal and XBP1s-induced conditions compared to wildtype control. Data are presented as mean ± s.e.m. and are representative of three independent experiments performed in triplicate with consistent results. d and f, Representative confocal microscopy images of CCDC22 knockdown and wildtype control cells, stained with the cis-/medial-Golgi marker GM130 (d) or the TGN marker TGN46 (f). Wildtype cells are outlined in dotted white lines. Magnified views of the red boxed areas are shown in the right-most column, with red arrows denoting KD cells. Scale bars, 10 μm. Images are representative of two independent experiments performed in triplicate. e and g, Quantification of average cis-/medial-Golgi or TGN distances from the nucleus in Extended Data Fig. 5d and 5f, respectively. Each data point represents the mean distance of a single cell. The P-value is calculated by the Mann-Whitney U test. h, Volcano plot for lectin microarray results of XBP1s-induced A549 cells with CCDC22 knocked down compared to wild type control. Lectins are color-coded by their glycan-binding specificities. i, Competitive cell surface lectin binding assay for CCDC22 knocked-down A549s compared to wildtype control under XBP1s-induced conditions. Lectin binding specificities and the location where the modification predominantly occurs are indicated. Data are presented as mean ± s.e.m.
and are representative of two independent experiments performed in triplicate with consistent results. j, Model for how glycosylation is altered in TM9SF3-KD cells: high mannose glycans are converted into oligomannose glycans in the cis- and medial-Golgi despite the changes in morphology. However, the final stages of complex glycan synthesis, such as elongation by LacNAc motifs and capping with sialic acids, are inhibited. k, Model for how glycosylation is altered in CCDC22-KD cells: despite the fragmented Golgi morphology, glycans are able to be remodeled from high mannose into complex glycans, perhaps at an enhanced efficiency, resulting in a more complex N-glycan repertoire at the expense of high mannose glycans.

a, Schematic for dox-inducible XBP1s upregulating high mannose N-glycans. b, RT-qPCR for targets of the general UPR ER and XBP1s. Gene expression is normalized to housekeeping genes GAPDH and HPRT1. Data are presented as mean ± s.e.m. and are representative of two independent experiments performed in triplicate with consistent results. c, Fluorescent HHL and GRFT binding on A549 cells with or without dox-induction of XBP1s. Cells were treated with 2 µg/mL dox for 48 hours to overexpress XBP1s. The flow cytometry data are representative of three independent experiments performed in triplicate. d, Fluorescent MBL2 binding on A549 cells with or without dox-induction of XBP1s. Cells were treated with 2 µg/mL dox for 48 hours to overexpress XBP1s. The flow cytometry data are representative of three independent experiments performed in triplicate. e, UPLC quantification of high mannose N-glycan structures of A549 cells with or without XBP1s induction. Levels of each high mannose structure are normalized to the protein amount of each replicate. The experiment was performed in triplicate, and data are presented as mean ± s.e.m. f, Schematic for lectin microarray analysis of A549s under basal or XBP1s-induced conditions. g, Volcano plot of lectin microarray data. Median-normalized log2 ratios (sample/reference) of the A549 samples are presented. Lectins are color-coded by their glycan-binding specificities.

Fig. 3: Targeted CRISPRi screen uncovers additional novel regulators of high mannose glycans under basal and UPR ER-induced conditions.
Fig. 4: TM9SF3 regulates the Golgi organization and is required for formation of complex N-glycans.
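The Golgi-dispersion quantifications in the legends above compare per-cell mean distances with a Mann-Whitney U test. A minimal sketch of that comparison, using made-up per-cell distance values rather than the actual measurements, could look like this:

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-cell mean distances (µm) of GM130-positive Golgi objects
# from the nucleus; one value per segmented cell.
wildtype_um = [2.1, 2.4, 1.9, 2.8, 2.2, 2.5, 2.0, 2.3]
knockdown_um = [4.0, 3.6, 4.4, 3.9, 4.8, 4.1, 3.7, 4.5]

# Two-sided Mann-Whitney U test between knockdown and wildtype control cells.
stat, p_value = mannwhitneyu(wildtype_um, knockdown_um, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3g}")
```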
Correcting diacritics and typos with a ByT5 transformer model

Due to the fast pace of life and online communications and the prevalence of English and the QWERTY keyboard, people tend to forgo using diacritics and make typographical errors (typos) when typing in other languages. Restoring diacritics and correcting spelling is important for proper language use and the disambiguation of texts for both humans and downstream algorithms. However, both of these problems are typically addressed separately: the state-of-the-art diacritics restoration methods do not tolerate other typos, but classical spellcheckers also cannot deal adequately with all the diacritics missing. In this work, we tackle both problems at once by employing the newly-developed universal ByT5 byte-level seq2seq transformer model that requires no language-specific model structures. For comparison, we perform diacritics restoration on benchmark datasets of 12 languages, with the addition of Lithuanian. The experimental investigation proves that our approach is able to achieve results (>98%) comparable to the previous state of the art, despite being trained less and on fewer data. Our approach is also able to restore diacritics in words not seen during training with >76% accuracy. Our simultaneous diacritics restoration and typo correction approach reaches >94% alpha-word accuracy on the 13 languages. It has no direct competitors and strongly outperforms classical spell-checking or dictionary-based approaches. We also demonstrate that all the accuracies further improve with more training. Taken together, this shows the great real-world application potential of our suggested methods to more data, languages, and error classes.

Introduction

Since the dawn of the computer era, the English language, the Latin alphabet, and the QWERTY keyboard have been the "computer-native" means of communication. English remains the lingua franca in IT, science, and many other fields. Many people use it in addition to their native languages, as we do here. Most other languages that use a Latin-based alphabet have some diacritic signs ("č") that are added to the basic Latin characters ("c"), modifying their pronunciation. The initial ASCII character set was greatly expanded by the wide adoption of the Unicode Standard to accommodate the characters of other languages. Typing these characters, however, is not always convenient. Many different keyboard layouts exist; they can be more efficient for other languages as well as English, it is easy to remap physical keyboards in software, and virtual keyboards on touchscreens can even be dynamic. However, learning to type efficiently on different layouts is not easy, and they are also not universally available. In addition, large alphabets are not practical to fit on a keyboard layout so that each character can be typed by pressing just one key, instead requiring combinations or sequences of keys. All these factors made the QWERTY variations (including the similar QWERTZ and AZERTY) remain the most popular keyboard layouts for Latin-alphabet-based languages, where the diacritics are usually an afterthought. By necessity, haste, or convenience, people often forgo the diacritic signs and special characters in the languages that need them, and type using the base Latin alphabet and keyboard layout instead. Such texts can typically be largely understood nonetheless, but this introduces ambiguities and is not considered a proper use of the language.
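As a point of reference for later sections, producing such undiacritized text from correctly written text (for example, to create training pairs) is straightforward with the Python standard library. The snippet below is a generic sketch, not the exact preprocessing used in this work; note that letters without a canonical decomposition (e.g., the Polish "ł") would additionally require an explicit lookup table.

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove combining diacritical marks, e.g. "ačiū" -> "aciu"."""
    decomposed = unicodedata.normalize("NFD", text)  # split base letters and marks
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

print(strip_diacritics("ačiū"))                 # aciu
print(strip_diacritics("café, naïve, façade"))  # cafe, naive, facade
```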
Our aim, in this work, is to investigate automatic methods of restoring diacritic signs in such texts, as well as correcting other typical typographic errors, colloquially known as "typos", as such fast, sloppy typing usually results in both. Restoring diacritics (as well as correcting typos) is important for the human readability of the texts, as well as for disambiguation and the proper use of the language (and the prestige associated with it), preventing its degradation. On the more objectively-measurable technical side, undiacritized texts are also harder to process automatically: machine-translate, synthesize, parse, etc. The relevance and importance of diacritics restoration are revealed by evaluating them on the downstream tasks, i.e., extrinsically. There are several examples. The diacritics restoration helped to increase the automatic speech recognition quality for the Romanian language when diacritics were restored in the corpus used for the language model training [1,2]. The diacritics restoration also resulted in a better text-to-speech performance for Romanian [3]. Used as the integrative NLU component, the diacritics restoration also improved the accuracy of the intent classification-based Vietnamese dialogue system [4,5]. Similarly, statistical machine translation performance was positively correlated with correctly diacritized words for Arabic [6]. Moreover, a higher binary classification accuracy was achieved after Turkish text diacritization [7]. Usually, the progress in any Natural Language Processing (NLP) topic initially begins with research for the English language and then spreads to others, but the omitted diacritics problem is an exception. The English written language is highly dependent on the original Latin alphabet. Undiacritized ASCII equivalents of the few English loanwords with diacritics (such as "café", "naïve", "façade", etc., mostly borrowed from French) do not cause ambiguity and, therefore, can be easily restored with a dictionary. The level of ambiguity and complexity of restoration for the other languages strongly depends on the language characteristics. For languages where the omitted diacritics cause fewer disambiguation problems, the diacritics restoration is formulated as a spelling correction task. In this research, our focus is on languages that already have lexical and inflective ambiguity. Hence, the omitted diacritics exacerbate this problem even more, and simple solutions are not enough. Virtually all the previous works (see Section 2) investigated the diacritics restoration problem in isolation, i.e., restoring diacritics in otherwise correct texts. This is, however, not realistic: if not enough care and attention is given to using proper diacritics while typing a text, then, typically, the same is true of using the correct spelling. A carefully-typed text without diacritics might have been more common in the past, when Unicode was not widely supported for technical reasons, but this is no longer the case. Crucially, it is neither easy to correct typos before restoring diacritics, as those are not proper texts, nor after, as diacritics would not be restored on mistyped words. If, in addition to the missing diacritics, other typographical errors are introduced (as is common with fast, careless typing), specialized diacritics restoration algorithms break down.
Considering these limitations and trends in the current state of the art in diacritic restoration and typo correction, we take an approach with these main contributions: • In contrast to the current state of the art, we use the latest universal sequence-tosequence byte-level transformer model ByT5 [8] that has no task-or language-specific structure, vocabulary, or character set; • We experimentally investigate the effectiveness of this universal method in restoring diacritics on a standard set of 12 + 1 languages, comparing it to the state of the art; • We experimentally investigate the effectiveness of this universal method in correcting typos while simultaneously restoring diacritics on the same set of 12 + 1 languages. The rest of this paper is organized as follows. We provide a review of related work in the literature on diacritics restoration and typo correction in Sections 2 and 3, respectively. In Section 4, we give a detailed background on our chosen approach and related transformer models in general. In Section 5, we describe the datasets used. In Section 6 we outline the experimental setting, and in Section 7, we present the results. Finally, we discuss the findings of this work in Section 8 and summarize them in Section 9. fully combines the statistical bigram language model with the dictionary (of 750 000 entries) look-up method. The diacritization process contains three stages. During the first stage, substitution schemes are applied to the raw text result for generating the diacritized candidates; then, the validity of each candidate is determined via a comparison with dictionary forms; and finally, correct forms are selected with the language model. The authors demonstrated the effectiveness of their method not only on the artificial data (newspaper articles that were undiacritized, namely for experiments) but also on the real data (forum posts). The statistical language model can be created not only on the word level but on the character level, as in [21]. During the first stage, for recognized words, it uses a statistical n-gram language model with n = [1,4] that works on the word level; during the second stage, it processes the out-of-vocabulary words with the statistical n-gram character-based model that works on the character level. The authors proved that their offered approach led to the better diacritization accuracy of the Arabic dialectal texts. Translation-Based Approaches Sometimes the diacritization problem is formulated as the machine translation problem, but instead of translating from the source language to the target, the undiacritized text is " translated" into the diacritized text. However, such a translation problem is less complex due to a simpler (one-to-one) alignment and decoding. The phrase-based Statistical Machine Translation (SMT) system has been successfully applied to restore diacritics in the Algiers dialectal texts of the Arabic language [22]. This system uses the Moses (Open Source Toolkit for SMT) engine with the default settings, such as the bidirectional phrase and lexical translation probabilities, the distortion model with seven features, a word and phrase penalty, and a language model. The SMT-based method was also applied to Hungarian texts [23]. Similar to [22], Moses was used with the default configuration settings (except for the translation model that contained only unigrams, and the language model with n up to 5), monotone decoding, and without the alignment step. 
However, SMT alone was not enough to solve their task: the agglutinative morphology of the Hungarian language results in plenty of word forms that are unseen by the system with the restricted vocabulary. To handle this, a morphological analyzer was incorporated into the system. It generates candidates for unseen words that are later fed into the Moses decoder. The probability of each candidate was estimated from the corpus with a linear regression model considering its lemma frequency, the number of productively applied compounding, the number of productively applied derivational affixes, and the frequency of the inflectional suffix sequence returned by the analysis. Despite the problem to be solved in [24] being formulated as a word-to-word translation problem, this is not the typical case with SMT. The authors investigated two approaches that only required monolingual corpora. Their lexicon-based approach (applying the most frequent translation observed from the training data) was outperformed by the corpus-based approach (combining information about the probability of translation and the probability of observing a translation in the given context, via a simple log-linear model). This research is interesting for several reasons. First of all, the effectiveness of the method is proven in several languages, i.e., Croatian, Serbian, and Slovenian. Similarly, the diacritics are restored on both standard and non-standard (Web data) texts. Moreover, the authors also performed cross-lingual experiments by training their model on one language and testing it on another. The cross-lingual experiments revealed that the Croatian and Serbian languages can benefit from each other (training/testing in both directions), whereas the model trained on the Slovenian language was not effective for Croatian or Serbian. Character-Level Approaches Another important direction in diacritics restoration is character-level approaches. They solve problems that are typically defined as sequence labeling. The iterative process slides through an undiacritized sequence of characters by assigning their diacritized equivalents (labels). Each character is a separate classification instance with the surrounding content as other classification features. Such approaches typically require no additional language tools except for the raw text, which makes them suitable for less-resourced languages. Moreover, character-level methods are robust when dealing with unknown words. Depending on the chosen classifier, this classification process can be viewed as the independent instance-based classification (assuming that each instance is independent) or the sequence classification (considering conditional dependencies between predictions) problems. The seminal research work in [25] described the instance-based classification technique applied to the Czech, Hungarian, Polish, and Romanian languages. Authors tested different window sizes (of 1, 3, 5, 7, and 9 lower-cased characters to both sides) with two classifiers: the memory-based approach and the decision tree (C4.5). Their offered method achieved an accuracy which is competitive to word-level approaches. Another study, presented in [26], described the sequence classification tackled with the MaxEnt classifier. This approach is applied to the Arabic language, but instead of pure character features, it employs character-(character n-grams), segment-(words decomposed into prefixes, suffixes, stems, etc.), and part-of-speech tag-based feature types. 
The successful combination of these diverse sources resulted in a high diacritization accuracy. Similar to [25], three instance-based classifiers (a decision tree, logistic regression, and the Support Vector Machine, or SVM), with character n-grams (from a sliding window) as features, were investigated for the Hungarian language [27]. The decision tree, which is also good at identifying important features and keeping decisions easy to interpret, was determined to be the most accurate. This research is important for several reasons: it claims the effectiveness of the offered approach on non-normative language (web data, Facebook posts) and the superiority over lexicon lookup (retrieving the most common diacritized forms) and hybrid (the lexicon plus character bigrams) approaches in the comparative experiments. However, comparative experiments are not always in favor of character-level approaches. In [28], the character-level and word-level approaches are compared for the Lithuanian language. The authors used conditional random fields (CRF) as the sequence classifier by applying them to the character-level features. Despite different window sizes (up to 6), the character-based approach was not able to outperform the trigram language model with the back-off strategy. The character-based approach was also not the best choice when applied to the Spanish texts [29]. It was outperformed by the decision list (that combines the simple word-form frequency, morphological regularity, and the collocational information) and the part-of-speech tagging (trained on the tagged corpus with information about the diacritic placement) approaches. Two approaches, namely, sequence labeling (i.e., sequence classification) and SMT were compared in [30] for the Tunisian language. The sequence classification approach uses CRF as the classifier and is applied to the different character (windows up to 6-grams) and word-level (part-of-speech tags of two neighboring words) features. The SMT approach uses Moses with a 5-gramlanguage model and other parameters set to their default values. The comparative experiments demonstrated the superiority of the sequence labeling approach compared to the SMT approach. Even more comprehensive comparative experiments are performed in [31], and they cover 100 languages and several approaches, such as the lexicon lookup, the lexicon lookup with the bigram language model, several character-level methods with various window sizes, the hybrid of the lexicon lookup with the bigram language model (for words in the lexicon), and the character-level approach (for words that are not in the lexicon). With some exceptions, the hybrid approach performs the best for the majority of languages. A similar hybrid approach is also successfully applied to the Romanian language [32]. The candidates for each recognized undiacritized target word are generated based on mappings of the dictionary, and the appropriate candidates are selected with the Hidden Markov Model (HMM)-based language model. The diacritics for unknown words are restored with the character-level approach (described in [25]) using windows with up to eight characters. Another hybrid approach that is used for completely different purposes (to clarify/claim the output of the character-based method) is presented in [33] for the Turkish language. 
During the first stage, it performs the sequence classification with the CRF method, but next to current/neighboring character,s it also uses the current/neighboring tokens as features, i.e., five character-level and two word-level features. The output of the first stage is fed into the morphological analyzer-based language validator. The authors compared their hybrid approach with several others (rule-based, rule-based with the unigram language model, and character-based but without language validator stage) and proved it is the best model to use. In contrast to the previously described approaches, the sequence labeling problem can be solved, not on the character, but the syllable level, as in [34]. The authors solved the instancebased classification problem by treating each syllable as a separate independent classification instance and applying the SVM classifier on top. They used different types of features, such as the n-grams of syllables (surrounding the target with window sizes of 2 and 3); syllable types (uppercase, lowercase, number, other), characterizing surrounding syllables, and dictionarybased features (dictionary words that contain the target syllable). The method achieves a high accuracy on Vietnamese texts. Deep-Learning-Based Approaches With the era of Deep Neural Networks (DNNs), the diacritics restoration problem is being solved with these innovative techniques. Some of them rely on word embeddings, i.e., learned word representations that are capable of capturing the context. Word2vec embeddings were integrated into a three-stage diacritics restoration system for Turkish in [7]. During the first stage, candidates are generated for the target word. During the second stage, the morphological analyzer checks if the candidates are legitimate words. During the last stage, the word2vec-based tool evaluates the semantic relationship of each candidate to its neighboring words with the similarity method and chooses the most suitable one. The authors tested two types of word-embedding models (i.e., the continuous bag-of-words model, or CBOW, which predicts the target word based on its context, and the skip-gram model, which predicts the surrounding words based on the input word) and several similarity measures (Cosine, Euclidean, Manhattan, Minkowski, and Chebyshev). Their experimental investigation revealed that the skip-gram and cosinesimilarity approach was the most accurate on Twitter data. The omitted diacritics problem can also be tackled at the character level and solved as a character classification problem. An example of such a system is for the Arabic language, and the core of it is the Bidirectional Recurrent Neural Network (BiRNN) [35]. The BiLSTM takes the undiacritized character (as an input) and outputs its diacritized equivalent (as a label). The input characters are represented as real-number vectors that are randomly initialized at the beginning and are updated during the training. The output is the n-dimensional vector, with the size n equal to the size of the output alphabet. The approach outperformed the other methods in the comparative experiments. A similar approach is offered for Hebrew, and the base of it is the two-layer LSTM [36]. The Deep Belief Network (DBN) (as a stack of multiple restricted Boltzmann machines in which each layer communicates with both the previous and subsequent layers; however, the nodes in each layer do not communicate with each other literally), on the character level, is applied to Arabic [37]. 
The advantage of the DBN compared to the RNN-based approaches is that it overcomes the limitations of backpropagation. The authors tested their approach on several benchmark datasets and compared it to other competing systems, claiming their approach to be the best for the diacritization problem. The robustness of sequence classification was also tested for Croatian, Serbian, Slovenian, and Czech [38]. However, this language-independent part has the additional integration of the 2, 3, 4, 5-gram language model. This language model-based version, for the inference, uses the left-to-right beam search decoder that combines the neural network and language model likelihoods. The authors compared their method with other approaches (lexicon-based, corpus-based) and systems, demonstrating its superiority over the other models. The authors in [39] also assumed that pure character information is not enough to achieve a high accuracy for Arabic, because the lexical and syntactic information is closely interrelated. Due to this reason, they offer the multi-task approach, which jointly learns several NLP models, namely for segmentation (operating at the character level), part-of-speech tagging, and syntactic diacritics restoration (operating at the word level). All these aggregated models are later used for diacritics restoration. The segmentation, part-of-speech tagging, and syntactic diacritization models use separate BiLSTM methods with the softmax on top of each. Their outputs are aggregated, and they become the input for the diacritization model which, again, is BiLSTM-based. The authors compared their model to the other popular approaches, and they claim it is a statistically significant improvement. A similar character classification problem was solved in [40] for the Romanian language. The architecture of this offered system has three different input paths: for characters (to represent the window of characters around the target character), words, and sentences (in which the target character appears). The character input path is represented by a BiLSTM encoder for character embeddings, the word input path by the FastText word embeddings, and the sentence input path by the BiLSTM encoder applied on concatenated FastText word embeddings. The authors tested their approach with different combinations of input paths (only character input, character input with the word input, etc.) proving that the best accuracy can only be reached with all the three input paths. The sequence classification tasks were also solved for the Arabic, Vietnamese, and Yoruba languages [41]. The authors tested the Temporal Convolutional Network (TCN) (in which information flows from the past to the future, as in the LSTM) and the Acousal TCN (A-TCN) (where information flows in both directions, as in the BiLSTM) approaches, and compared them to the recurrent sequential models, i.e., the LSTM and the BiLSTM. The A-TCM approach yielded a significant improvement over the TCM and had a competitive performance over the BiLSTM. The hybrid approach (as the three-stage stacked pipeline) for the Arabic language [42] integrates a character classifier as the first language-independent component. The other two components, namely, the character-level deterministic rule-based corrector and the word-level statistical corrector, are already language-dependent, but help to increase the accuracy even further. Another research direction for the diacritics restoration problem is the sequence-to-sequence (seq2seq) methods. 
The seq2seq architecture consists of an encoder (converting an input sequence into a context vector) and decoder (reading the context vector to produce an output sequence) blocks as separate DNNs. Such a seq2seq approach, with the RNN-based core, was successfully applied to the Turkish language [43], and, with the LSTM-based core, to Vietnamese texts [5,44]. In [45], Romanian authors investigated four different encoder-decoder architectures operating on the character level: one-layer LSTMs, two types of stacked LSTMs, and the CNN-based method (three-layer CNN with the concatenated output of the encoder and decoder, processed with another two-layer CNN), and determined that the CNN-based approach was the most accurate. Moreover, they compared their seq2seq approaches with the classification-based approach. The first approach is a hybrid of the BiLSTM (operating on the word level) and the CNN (operating on the character level); the second is described in [38] and requires additional language resources (a language model). The comparative experiments revealed the superiority of seq2seq methods. Transformer-Based Approaches The state-of-the-art techniques in the diacritics restoration, as in all NLP fields, employ transformer-based models. The multilingual BERT was successfully applied to 12 languages (Vietnamese, Romanian, Latvian, Czech, Polish, Slovak, Irish, Hungarian, French, Turkish, Spanish, and Croatian) [46]. The BERT embeddings, created on the undiacritized text, are fed into a fully connected Feed-Forward Neural Network (FFNN). The output of such a network is a set of instructions (as labels) that define the diacritization operation necessary for each character of the input token. The authors claim that their BERT-based approach outperforms all previous state-ofthe-art models. The authors in [47] solve the character classification problem for the Vietnamese language by offering a novel Transformer Decoder method with the Penalty layer (TDP). The model is a stack of six decoder blocks. The encoder part is redundant since each input character corresponds to only one output character. The penalty layer restricts the output by only allowing the possible characters for each input character. The authors also performed comparative experiments, proving their approach is superior to those offered in [38]. Another transformer-based technique was applied to 14 languages (Bosnian, Czech, Estonian, Croatian, Hungarian, Lithuanian, Latvian, Polish, Romanian, Slovak, Slovenian, Albanian, Serbian, and Montenegro) [48]. The core of the diacritization approach is the Marian Neural Machine Translation (NMT) system [49] with six encoder-decoder layers, which is applied to the frequently occurring character sequences. The research is especially interesting because it is performed in monolingual (training and testing on the same language) and multilingual (by either mixing the data of all languages or by mixing the data of all languages, but inserting language codes as the first token of each segment) settings. The authors experimentally determined that the monolingual experiments gave almost the same accuracy as the multilingual experiments with the language codes. Related Work on Correcting Typographical Errors A typographical mistake is an error that occurs while printing the material. Historically, this was due to errors in the setup of the manual type-setting. 
The term includes errors caused by mechanical failure or the slipping of the arm (or finger), but does not include errors caused by ignorance, such as spelling errors. However, typos are the subset of a bigger category of misspelling errors. These are of the same importance and are solved with the same methods. The only difference is that typographical errors are easier to model, as they depend only on the keyboard (we discuss it more in Section 5.2) and not the language. The most classical spelling error correction systems follow these steps: 1. Error detection; 2. Candidate generation; 3. Error correction. We will cover separate methods constituting this pipeline below. Non-Word Detection The dictionary is the most popular error detection method, sometimes called a lexicon or a unigram language model. The dictionary detects non-words, that is, the ones that cannot be found in it. The first system [50] used exactly this method with some additional heuristics. Modern spell checkers, such as GNU Aspell [51] and Hunspell [52] also compare each word of a text to their large lists of words. In Hunspell's case, the dictionary is compacted by keeping only the main word forms with transformation rules, prefixes, and suffixes, thus supporting many languages with rich morphologies. There are some downsides to the dictionary method. As noted in [53], about 40% of spelling errors are real-word errors (i.e., "from" → "form") and cannot be detected by the dictionary. The study by [54] showed that GNU Aspell corrects only 51% of errors and performs best on non-word errors. Secondly, the dictionary cannot cover rare words, such as proper names, country and region names, technical terms, and acronyms. This issue could be dealt with by enlarging the dictionary. However, [53] argues that, eventually, most of the misspellings would match rare words and would, therefore, fail to be spotted. Candidate Generation This is the task of finding the confusion set of real words for a given misspelled word. One can manually craft a confusion set or look for a publicly available one, such as [55] for the Chinese language. However, usually these sets are generated on the fly. The similarity measure between words is obtained by the phonetic or the Minimum Edit Distance algorithms. The most-known phonetic algorithm is Soundex [56,57]. The cornerstone of the Soundex approach is that homophones (the same-sounding words) are encoded similarly, so that they can be matched regardless of subtle differences in their spelling. A Soundex code is computed from a misspelling, and words that have the same code are retrieved from the dictionary as correction candidates. A similar principle of misspelling encoding was used in the first system by [50]. Nowadays, the Metaphone representations of words (as an improvement over Soundex) [58] are used in Aspell [51]. The Minimum Edit Distance [59] measure is defined by the minimum number of edit operations needed to transform one string to another. As reported in [60], more than 80% of errors differ from the correct word by only a single letter; thus, the distance between them is low. There are several different edit distance algorithms: Levenshtein [61] (number of insertions, deletions, and substitutions), Damerau-Levenshtein [60] (treating transposition as a single edit), Hamming [62] (number of characters that differ between two equal-length strings), and the Longest Common Subsequence [63]. 
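To make these edit-distance notions concrete, the sketch below implements the restricted Damerau-Levenshtein (optimal string alignment) distance, where insertions, deletions, substitutions, and adjacent transpositions each cost one edit; it is a generic illustration rather than the exact variant used inside any particular spell checker.

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    n, m = len(a), len(b)
    # d[i][j] = distance between the first i characters of a and first j of b.
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            # An adjacent transposition counts as a single edit.
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[n][m]

print(damerau_levenshtein("from", "form"))     # 1: a single transposition
print(damerau_levenshtein("sviesa", "šviesa")) # 1: substituting the diacritized letter
```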
As an example, the widely-used Aspell uses the Damerau-Levenshtein distance between Metaphone representations of words.

Using Context and External Datasets

The given candidates can be simply ranked by their pre-computed distances. On the other hand, some additional information, whether from nearby words or from additional corpora, can aid target word selection. The approach in [64] uses a Bayesian combination rule to rank the given candidates. First, the probabilities for substitutions, insertions, and other errors are collected from a corpus of millions of words of typewritten text. Then, given a misspelled word, each of its inflections and the resulting word probabilities are combined to produce a probability estimate for each correction candidate. The n-gram language models [14] that are trained on a large external corpus can give a conditional probability of how likely a sequence of words is to be followed by a certain word. The n-gram model ranking for confusion sets is used in multiple works for spelling correction [65,66,67,68,69,54]. Character-level n-grams also allow for the calculation of a distance measure (such as Hamming in [70]) by comparing the character n-grams between two strings [71]. Spelling correction systems using n-grams usually employ back-off techniques [65,66,68] or other [72,73] smoothing techniques, and sometimes, due to their size, they even require a complex distributed setting [68,74]. The extensions and problems with the n-gram models have already been discussed in Section 2.1.2. External datasets are especially well-exploited by the neural network approaches. The authors of [75,76] used a FastText [77] shallow neural model to learn both known and unknown word vectors as a sum of character n-gram embeddings. Candidate words could then be scored with a cosine similarity to the context word vectors. The difference between these two works is the text domain. In the study by [75], the model was trained on the Bangla language, while in the study by [76], the model was trained on English and Dutch clinical texts. The ability to learn from vast text resources eventually culminated in the state-of-the-art transformer models, discussed in Sections 3.5 and 4.2.

Real-Word Errors

We already reviewed techniques for detecting and correcting non-word typos. The other, far more difficult, group is the real-word errors. These are misspellings that result in other real words. Ironically, these errors are also caused by automatic spelling correction systems [78]. As it is harder to apply unsupervised methods such as the dictionary, there is also a challenge in building tools for different languages with different alphabets and rules [79]. The detection of real-word errors can be done by searching every word in a confusion set and checking for a better alternative [80,66,72,81]. The candidate population is usually done by the n-gram method, and others, as already discussed in Section 3.2. Some works employ natural language parsers that check grammar [82,83] or look for words semantically unrelated to their context that have semantically-related spelling alternatives [84]. Since the detection is similar to the selection of candidates here, the real-word error correction systems often do detection and correction at the same time.

Transformer Models for Spelling Error Correction

Recent advances in natural language processing, particularly the transformer architecture [85], solve many problems encountered in traditional approaches.
Firstly, the traditional detectsuggest-select pipeline is discarded. Whether it is a seq2seq translation or an encoder-type each-token classification, target words are generated immediately. Secondly, the segregation of non-word and real-word methods is gone here. Finally, the use of the context from the whole input sequence and the knowledge from the additional datasets are now employed. Despite the advantages, some open issues are still being solved. An important problem for seq2seq models is the over-correction, which is the attempts of a model to correct the sentence even if it is not confident. The authors of [86] addressed this problem for their Korean spelling error correction system by using a dedicated Copy Mechanism. Correction is attempted only if it detects that the input is incorrect, otherwise, the input sequence is copied. The results showed that such a mechanism resulted in a better overall performance. The authors of [87] found that the over-correction can be mitigated by allowing the transformer to be trained with unfiltered (containing gibberish samples) inputs. In this way, the model is forced to stick to the initial input, unless there is a high certainty of a typo. There is also an attempt to use an additional error detection classification head in the encoder-type transformer model [88]. Usually, small available datasets are not enough to train transformer models. As a result, most works resort to the artificial spelling error generation. The authors of [87] used the statistics of their private 195 000 sample dataset to generate 94 million examples. The authors of [86] used Grapheme-to-Phoneme and Alphabetical (insertions, deletions, and substitutions) generators, together with 45 711 private samples. The authors of [88] constructed a random rule-based generator covering the most common error categories of the Vietnamese language. Works utilizing the BERT [89] encoder can utilize, or supplement, the default masking [MASK] token. The authors of [90] also used related words from confusion sets, while the authors of [91] replaced them with phonologically and visually similar ones. The original BERT [89] transformer model used subword tokenization. As misspellings happen at a character level, it is wise to also incorporate characters or other phonetic features. The authors of [88] used an additional character-level encoder to output character-level vectors. These are concatenated with word embeddings and are used in the final word encoder. For the Chinese language, [91] additionally added phonetic and shape embeddings acquired from separately-trained single-layer GRU [92] networks. Parallel to the character classification, authors also performed pronunciation prediction. Similarly, other works on the Chinese language find it useful to predict not only characters, but also pinyin and radicals, which is a total of three classification heads. In contrast to these approaches, we use the fine-grained model in the first place and we, therefore, can avoid the additional incorporation of character information. Our Methodology The analysis of related work revealed research performed under very different experimental conditions, which makes the results difficult to compare. Different languages have different levels of complexity and ambiguity, and omitting the diacritics or introducing typos exacerbates this problem even more. The training/testing texts cover normative (fiction, periodical, Bible texts) and non-normative (tweets, comments)language types. 
Investigated approaches are affected by the availability of language resources and the emergence of new methods, and vary from rule-based and traditional machine learning to the most innovative deep learning solutions. There are different evaluation types: extrinsic, which refers to evaluating the downstream tasks, vs. intrinsic, which refers to calculating the percentage of correctly restored words or characters; different evaluation metrics cover word-level and character-level (including all characters or only those with diacritics) techniques. Hence, there is no consensus about which approach is the best for the diacritics restoration and typographical error correction problems. Recent trends suggest that innovative approaches, such as transformer models, are still needed, and should be the most promising.

Formal Definition of the Solving Task

Let X = {x_1, x_2, ..., x_N} be a sequence of tokens, constituting our text without diacritics and/or with typos. Let Y = {y_1, y_2, ..., y_M} be a sequence of equivalents with their diacritics and/or typos corrected. Depending on the chosen tokenization form, a token can represent a word, subword, character, or byte value. The function η correctly maps X → Y. Our task is to find the method Γ which is as close an approximation of η as possible. In this work, we use a transformer model as the method Γ. Below, we further explain what is behind tokens in our case, and how the sequence mapping is performed.

Tokens

Generally, the text is represented as a Unicode string. It is a sequence of code points, which are numbers from 0 through 1 114 111. For example, the letter "s" has a code point of 115, while the same letter with the additional caron, "š", is at 353. Unicode describes a huge number of various symbols but is very wasteful in terms of memory space. The most popular symbols are at the beginning of this list, but they would still have to be represented as 32-bit integers. Instead, UTF-8 encoding is employed to translate the Unicode sequence into 8-bit bytes. If the code point is larger than 127, it is turned into multiple bytes with values between 128 and 255. Therefore, the code point 353 of the letter "š" is translated into the two bytes 197 and 161, while the letter "s" retains byte 115. The authors of [8] showed better results using the ByT5 transformer model on these byte-level tokens, rather than on characters. Inspired by their success on transliteration and noisy text tasks, we also use the same byte-level tokenization.

Mapping X to Y

One should note that the transformer model does not map the whole target sequence instantly. Starting with the first artificial start token y_0, it estimates the probability for each next token by taking into account the whole input sequence and the previously generated tokens (the context). The probability that the next token is y_i can be written as P(y_i | y_0, y_1, ..., y_{i-1}, X). Thus, the output from a transformer model is a list of probabilities, one for each token in the vocabulary, of being the next token y_i. The choice of the next token, given the probabilities of all candidates, depends on the decoding algorithm. There are two groups of maximization-based sampling: greedy and beam search. The most obvious greedy approach is to select the token with the highest probability. During beam search, a defined number (the so-called beam size) of the word sequences with the highest overall probabilities is kept. This way, a single low-probability word would not shadow a high-overall-probability sequence.
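The following toy sketch illustrates the difference between greedy decoding and beam search; the fixed probability table stands in for a real decoder (which would condition on the input sequence X and the generated prefix), so all numbers here are made up for illustration.

```python
import math

# Hypothetical next-token distribution, a stand-in for the transformer decoder.
TABLE = {
    (): {"a": 0.5, "b": 0.4, "c": 0.1},
    ("a",): {"x": 0.6, "y": 0.4},
    ("b",): {"x": 0.9, "y": 0.1},
    ("c",): {"x": 0.5, "y": 0.5},
}

def next_token_probs(prefix):
    return TABLE.get(tuple(prefix), {"<eos>": 1.0})

def beam_search(beam_size=2, steps=2):
    beams = [([], 0.0)]  # (generated tokens, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for tokens, logp in beams:
            for tok, p in next_token_probs(tokens).items():
                candidates.append((tokens + [tok], logp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

# Greedy (beam size 1) commits to "a" and ends with probability 0.5 * 0.6 = 0.30.
print(beam_search(beam_size=1))
# A beam of two keeps "b" alive and finds "b", "x" with probability 0.4 * 0.9 = 0.36.
print(beam_search(beam_size=2))
```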
Stochastic approaches are inappropriate for our task as there is only one right way to restore diacritics or correct typos. Transformer Models There are several key reasons why transformer [85] architecture became the top-performing model in multiple natural language processing leaderboards, such as SuperGLUE [93]. The first reason is that, compared to previous recurrent ones, it is highly parallelizable. It does not need to wait for the calculations to finish for the previous word. Instead, calculations for all words are done at once. Models can be elementary, trained on multiple dedicated machines (such as GPUs), thus quickly digesting vast amounts of data. Secondly, only after a single block (usually called a layer), the information between all tokens is already exchanged. This is accomplished by a self-attention layer inside the block, which processes a sequence by replacing each element with a weighted average of the rest of the sequence. As there are usually more than five blocks, it allows for the quick learning of long-range dependencies. Finally, it costs less computational power, demanding shorter sequences, which is the case for most of the language tasks. These reasons allowed transformer architecture to flourish. The capabilities of these models come with a price. Training them from scratch requires dedicated hardware (i.e., a GPU with a large enough memory), takes a long time, and consumes a lot of electricity. Solutions to alleviate this burden started with the introduction of the BERT [89] transformer. This model is pre-trained with a general word-masking task to be fine-tuned for any desired task later (the process called transfer learning). It is estimated that the pre-training of BERT caused more than 300 kg of CO 2 emissions [94], but it can be easily fine-tuned for a custom purpose at a small fraction of that cost. Three years later, there are plenty of similarly pre-trained publicly available models (e.g., at HuggingFace transformers library [95]). We also built our work on top of one such pre-trained ByT5 [8] model. In general, transformer models can be grouped into three categories: auto-encoding, autoregressive, and sequence-to-sequence. We will cover them in more detail below. Auto-Encoding Transformer Models This version of the transformer model possesses only an encoder part. It encodes the input text into distinct output vectors for each given token. Attention layers can access all the words in the initial sentence to get the most representative information of the whole sequence. Additional "heads" can be placed on top to further process this representation for a sentence or word classification, extractive question answering, regression, or other tasks. The most popular model of this category is the BERT [89]. Several diacritics restoration works use transformer encoders. The authors of [46] performed a classification of each transformation, described by a diacritic sign to be applied and its position in a word. Meanwhile, the model in [47], although it is named a "decoder", has its attention masking removed and classifies output diacritic mark categories for each input character. Auto-Regressive Transformer Models These models possess only the decoder side of the original architecture, and its tokens can only attend to the previous ones. Probably the most-known example is one of the latest gigantic (175 billion parameters) transformer models, GPT-3 [96]. It is used in practice by finishing sentence beginnings, which is the so-called zero-shot task solving. 
In this setting, the human must manage to convey all the necessary information for solving the task in the beginning, such as by providing examples of task solutions. Currently, we do not possess access to the latest GPT-3 model, nor do we believe it can adequately cover the languages we use in this work. However, it would be interesting to test its capabilities in an unsupervised zero-shot multilingual diacritics and typos correction. Sequence-to-Sequence Transformer Models These are the encoder-decoder models. In the encoder part, each token can attend to every other token. On the decoder side, there are two types of attention that occurs. The first type is the attention to the decoder's past inputs, which is the same as in the auto-regressive transformer models. The second type is the model's full attention to the tokens of the encoder. The most straightforward application of this network is the translation. The encoder only receives input language tokens, while the decoder is fed target language tokens and predicts them one at a time. As the diacritics restoration task can be viewed as a translation task, this transformer type is found in several related works [97,98,99]. The most popular model of this category is T5 [100]. Authors framed various tasks, even ones including numbers, to text-to-text format. They reported that there was no significant difference if a separate "head" was used, or an answer was generated as simple text. This, in turn, made the model very simple to use. In this work, we use the follow-up multilingual ByT5 [8] model designed to work with byte-level tokens. We think that the seq2seq approach is the most adequate, as it is universal. Additionally, operating on the byte-level gives a level of immunity to minor text noise, i.e., against typographical errors, and is more languageuniversal. The ByT5 Model The ByT5 model [8] is a general-purpose pre-trained multilingual text-to-text model, based on its earlier predecessor, mT5 [101]. It completely disposes of SentencePiece [102] tokenizer, as it does not need any. The authors concentrated 3/4 of the parameters into the encoder by decoupling the depth of the encoder and the decoder. A small version of the ByT5 now has 12 encoder layers and four decoder layers. In the ByT5 model's case, the total vocabulary size is 384, consisting of: three special tokens (<pad> for padding, </s> for the end of the sequence, and <unk> for unknown), 256 = 2 8 values of the main eight-bit byte, and 125 extra sentinel tokens used only in the pre-training task. In the small version, the vocabulary accounts only for 0.3% of the total parameters, while in a similarly-sized mT5 model, the vocabulary took 85% of the total parameters. As a result, the small ByT5 model, working with fine-granularity tokens (bytes), outperforms mT5, which worked inefficiently due to its large granularity and its rarely-used vocabulary parts (subwords) which took up much parameter space. Due to its byte-level nature, the ByT5 model is slower to compute. More fine-grained tokenization produces more tokens for the same text and requires more time for the model to digest. However, the ByT5 model's authors showed that, for short-to-medium length text, the time increase is negligible. This is the case for diacritics restoration, as the input is composed of a single sentence. The sequence-to-sequence nature of the ByT5 model tackles the limitations of the latest state-of-the-art diacritics restoration model [46], which is based on the BERT. 
The latter system was an auto-encoding type, and it performed classifications for each token. That is, it had to predict the proper classes of each token correction, described by the position and diacritic sign type. This system is limited to its predefined instruction set (correction classes), which is highly language-dependent and involves the single task of restoring diacritics. On the other hand, our sequence-to-sequence ByT5 approach allows us to address multiple grammatical errors and learn to generate output sequences in a much more universal, language-independent approach. Training Hyperparameters The artificial neural networks are trained by updating their weights according to their response to the input. In particular, we focused on mini-batch gradient descent. For every mini-batch of n training examples (input x i and output y i pairs), the model parameters θ are updated using an objective function J: θ = θ − η · ∇ θ J(θ; x i:i+n ; y i:i+n ). (2) The Adam [103] and Adafactor [104] extensions of this vanilla gradient descent are currently the most prevalent optimization algorithms for the transformer models. The success of training the models depends a lot on setting the hyperparameters in (2) correctly, such as the batch size n, the sequence length within a sample, and the learning rate η. We will discuss them in more detail. Batch Size This is the number of samples to be run through the model before updating the weights. The more tokens it has, the less disturbance an individual sample will cause during a (much smoother) weight update. On the other hand, very large batches take more time to compute and have diminishing gains. The first popular pre-trained transformer, the BERT [89] model, for its classification, used a batch size of 256 sequences. A later model, RoBERTa [105], showed that an increase in the batch size (up to 8 000) and the dataset size accordingly improved the downstream performance. However, the same authors had to fine-tune the downstream applications using only batches of a size up to 48. The popular seq2seq transformer, T5 [100], used batch size 128 for both pre-training and fine-tuning. Follow-up models, such as the multilingual version mT5 [101], the grammatical error correction model gT5 [106], and ByT5 [8] (the model we use in this work) all carried on with the same value for fine-tuning. The same size is also used in works solving the diacritics restoration task [47,107]. In conclusion, we can use a batch size of 128 or greater. All methods of this family use the same size and we are not strictly limited by the dataset size to increase it for better performance. Maximum Sequence Length When choosing the right batch size, one should also account for the maximum number of tokens allowed in a sample. There are two caveats here. First, the time complexity of the transformer model is quadratic on the sequence length n (number of tokens) O(n 2 ), thus, shorter sequences are preferred for a faster training time. Secondly, the model we use operates in byte granularity and needs more tokens to express the same text, compared to word-level granularity models. The authors of the ByT5 model [8] report that English language sequences in byte tokens are about five times longer than in subword ones. As a result, the maximum sequence length for the ByT5 model is set to 1024 tokens. In our case, samples are sentences and, in practice, they all fit into this length. Learning Rate The last important parameter in (2) is the learning rate η. 
It controls how much the model parameters are updated at each step. Low values of η ensure smooth, monotonic, but small updates of the learned weights and a prolonged convergence. Higher learning rates, on the other hand, make larger updates and speed up the training. However, due to the higher "energy" (or "temperature") in the optimization, a high η causes the learned parameter values to "bounce" and prevents them from settling in the best spot, resulting in a higher final training loss. An optimal learning rate value, as used during fine-tuning of the T5 family of models [100,106,101,8] with the Adafactor optimizer, is 0.001. Sometimes, better results can be achieved by scheduling the learning rate values during the training. Typically, there is a so-called warm-up period in the beginning, with low or linearly increasing learning rate values, to level out discrepancies between the previous parameters and the updates from the new domain. Similarly, as the training is about to finish, the "energy" of the optimization can be lowered by decreasing the learning rate and allowing the neural network weights to settle in a more favorable position. As an example, the original T5 [100] pre-training used a constant warm-up followed by an inverse square root decay, with a peak learning rate of 0.01. However, fine-tuning was performed with a constant value of 0.001. Such a learning rate does not depend on the dataset size, which enables straightforward comparisons of different setups. Overall, learning rate schedules can improve on constant learning rate results, but they are less flexible to experiment with. Evaluation To evaluate diacritics restoration capabilities, we use the alpha-word accuracy metric from [38]. Each text sample is segmented into words, and for each word we check if it is an alpha-word (alphabetical word):
• All characters in the word are alphabetic, where the general Unicode category property is one of "Lm", "Lt", "Lu", "Ll", or "Lo";
• It has at least one letter.
Given the number of gold (correct-text) words that satisfy this condition, T_g, as well as the number of these words that are correctly predicted by the system, T_s, the alpha-word accuracy is alpha-word accuracy = (T_s / T_g) · 100%. (3) A short reference implementation is given below. This metric ensures that our results are not polluted by words that cannot have accents (e.g., numbers). Moreover, it takes into account both necessary and unnecessary accent generation. Other metrics, such as the Word Error Rate (WER) or the Diacritic Error Rate (DER), restrict T_g to only the diacritized letters in the gold standard text [37]. Dataset The expansion of the internet brought many abundant multilingual text resources. They usually vary from noisy and colossal to small in quantity but high in quality. A good example of the former is the Common Crawl dataset of more than 20 TB of data, and its version OSCAR [108], which is filtered by language. Such huge datasets are now one of the main building blocks of popular transformer models' pre-training, but they are very costly to work with in fine-tuning scenarios such as ours. The other extreme, such as the small high-quality Universal Dependencies [109] dataset, is too small to cover most aspects of each language. Recent works on diacritics restoration seek a compromise between these two extremes. The authors of [48] use an OpenSubtitles dataset, which is of satisfactory quality. On the other hand, the authors of [46] combine low-quality and high-quality datasets.
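The alpha-word accuracy metric described above is simple to reproduce. The sketch below is our illustration (not the original evaluation script) and assumes the gold and system texts are word-aligned, which holds when only diacritics and in-word typos are changed; it uses Python's unicodedata module for the letter-category test.

```python
import unicodedata

LETTER_CATEGORIES = {"Lu", "Ll", "Lt", "Lm", "Lo"}

def is_alpha_word(word: str) -> bool:
    # every character must be a letter from the listed Unicode categories,
    # and the word must contain at least one letter (i.e., be non-empty)
    return len(word) > 0 and all(unicodedata.category(c) in LETTER_CATEGORIES for c in word)

def alpha_word_accuracy(gold_words, system_words):
    # T_g: gold alpha-words; T_s: those of them the system reproduced exactly
    gold_pairs = [(g, s) for g, s in zip(gold_words, system_words) if is_alpha_word(g)]
    t_g = len(gold_pairs)
    t_s = sum(g == s for g, s in gold_pairs)
    return 100.0 * t_s / t_g if t_g else 0.0
```

Returning to the datasets, the combined low-/high-quality strategy of [46] proceeds in two stages.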
They train first with the noisy web data, and finish with the higher quality Wikipedia dataset. However, training took two weeks for each language to reach the state-of-the-art results. We use the same 12-language (Croatian, Czech, French, Hungarian, Irish, Latvian, Polish, Romanian, Slovak, Spanish, Turkish, and Vietnamese) Wikipedia dataset, proposed in [38]. Recent state-of-the-art diacritics restoration results were reported [46] for this dataset, so it is straightforward to compare with our methods on this particular task. As our focus is on efficiency, we omitted the large web text part to work only with the better-quality Wikipedia part. We also add the Lithuanian language to the list, using the tools publicly provided by the original authors of [38] (we provide the links in the Data Availability Statement at the end of this article). The Lithuanian language is an omission we do not want to make here, not only because it is our mother tongue and, thus, we can interpret the results well, but also because it has some very unique features discussed in Section 5.1. The dataset consists of training, development, and testing sets. All three are lowercased, tokenized to words, and are split into sentences. The split between sets is performed on the Wikipedia article level. We show statistics of the training set in Table 1. The testing sets do not differ much, except that each language has exactly 30 000 sentences allocated to it and, thus, has a similar amount of words. The percentages of alpha-words, diacritic words, and diacritic letters in the testing sets do not deviate by more than 10%, compared to their training counterparts. The dataset is already preprocessed to be used by simpler approaches, such as dictionary mapping. The ByT5 tokenization does not require that, as any text can be encoded in UTF-8 bytes; thus, it can work with any processed or unprocessed text. Features of Lithuanian Here are some features of the Lithuanian language that make it interesting and important to include. The Lithuanian language is highly inflective (fusional) and derivationally complex. It is different from agglutinative languages, that rely on prefixes, suffixes, and infixes. For inflections, Lithuanian "fuses" inflectional categories together, whereas prefixes, suffixes, and infixes are still used to derive words. For example, a diminutive/hypocoristic word can be derived by adding suffixes to the root, and the word can have two-three suffixes (sometimes going up to six), where each added suffix changes its meaning slightly. The language has compounds (connecting two-three words). Moreover, verbs can be made from any onomatopoeia; phrasal verbs (e.g., go in, go out) are composed by adding the prefix to the verb. Some sentence structures are preferable in the Lithuanian language, but, syntactically, there is a lot of freedom in composing sentences. However, it is important to notice that the word order changes the sentence shade and message emphasis. This complexity and variety of the forms makes isolated Lithuanian words ambiguous: 47% of Lithuanian word forms are morphologically ambiguous [110]. This, in turn, makes diacritic restoration and typo correction even more challenging. A Realistic Model of Typos We produce our pairs of correct (target) and incorrect (input) texts by taking the dataset as the correct (gold) text and by generating the corresponding incorrect text automatically. 
The diacritic removal is straightforward: it is simply done by replacing all diacritic letters with their non-diacritic equivalents. For typographical error induction, however, a dedicated realistic corruption model is required. The approach taken by other works [78,87] is to infer probabilities for each error group from a smaller available dataset and to use them to generate errors on the target one. We took the same approach in this work. There are four prevailing categories of typographical errors. The authors of [60,111] reported that more than 80% of errors can be attributed to substitution, deletion, insertion, or transposition errors. This division allows us to model each category separately. The physical keyboard layout plays an important role in influencing typos. A single keypress instruction consists of information on which hand, finger, and key row to select. The authors of [53] argue that the confusion of these instructions is the main culprit of substitution errors, while mixed instruction timing between the two hands (operating on different parts of a keyboard) is the main culprit of transposition errors. While there may be more causes, such as visual and phonological factors [112], we restrict ourselves to the influence of the physical keyboard layout. This allows us to model typographical errors for all languages, given the distribution of keyboard errors for a single language. We also make no distinction between physical and touchscreen keyboards, large or small. Even for the data-rich English language, misspelling resources are limited, as shown in Table 2. The largest one is the GitHub Typo Corpus [113]. Although it contains edits for multiple languages, only the English part is of a significant size. There is also a multilingual Wikipedia edit history, which could be prepared similarly to the GitHub dataset. However, it must be filtered [114] so as not to include examples unrelated to typographical errors. Incorporating the Twitter Typo Corpus [115] may also not be worth the effort, as the domains are different, as is the length of the text spans (needed to normalize error frequencies). In the end, we used the single GitHub Typo Corpus to derive the probabilities of errors. Further details on generating the typos are provided in Section 6.2. Experiment Details Here, we provide further details on our experiments. ByT5 Model Fine-Tuning We chose a batch size of 256 and the default ByT5 maximum sequence length of 1024. Such a configuration matches the total maximum number of tokens per batch (256 × 1024 = 2048 × 128) of the best system for diacritics restoration [46]. The larger sequence length is essential, as our model works on byte-level fine-grained tokens, compared to coarser subword-level models. We used a GeForce RTX 2080 Ti GPU. Due to its modest memory size, we employed the gradient accumulation technique, which accumulates gradients sequentially rather than in parallel. In addition, feeding only a single sample at a time allowed us to avoid padding. We trained each model for 2048 steps, each consisting of 256 sentences/samples, for a total of 2048 × 256 = 524 288 sentences; this took up to 10 h for a single model. For example, for the Lithuanian language, this corresponds to 0.86 epochs over the total 612 724 sentences in its dataset (Table 1). In our results, we refer to such basic training as being trained for ×1 the number of sentences (#samples); a minimal sketch of this fine-tuning loop is given below.
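The fine-tuning setup just described can be sketched as follows. This is a minimal illustration using the Hugging Face transformers library, not the exact training script; the `pairs` iterable of (corrupted sentence, correct sentence) strings is an assumed placeholder for the prepared dataset.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.optimization import Adafactor

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small").train()
# constant learning rate of 0.001, as in the ByT5 fine-tuning setup
optimizer = Adafactor(model.parameters(), lr=1e-3,
                      scale_parameter=False, relative_step=False, warmup_init=False)

ACCUM_STEPS = 256   # effective batch size, reached via gradient accumulation
MAX_LENGTH = 1024   # default ByT5 byte-level sequence length

for i, (corrupted, correct) in enumerate(pairs, start=1):
    # one sample at a time: no padding is needed
    inputs = tokenizer(corrupted, return_tensors="pt", truncation=True, max_length=MAX_LENGTH)
    labels = tokenizer(correct, return_tensors="pt", truncation=True, max_length=MAX_LENGTH).input_ids
    loss = model(**inputs, labels=labels).loss
    (loss / ACCUM_STEPS).backward()      # accumulate gradients sequentially
    if i % ACCUM_STEPS == 0:             # one optimizer update per 256 samples
        optimizer.step()
        optimizer.zero_grad()
```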
We fix this training length, irrespective of the available dataset, for each language (Table 1) to make training comparable among languages. In experiments where we trained our models for longer (e.g., ×8), we used the whole dataset and passed through it as many times as needed, e.g., for Lithuanian ×8 corresponds to 6.8 epochs. We used the Adafactor [104] optimizer with a constant learning rate of 0.001. The same setup was employed by the ByT5 [8] authors for fine-tuning experiments. Moreover, the Adafactor optimizer also has very little auxiliary storage compared to the other popular optimizer, Adam [103]. More complex learning rate schedules may give a slightly better performance, but it would be more difficult to compare our runs, so we adhered to the constant learning rate approach. For the diacritics restoration task with each language, we trained three different models. Each model has a different weight initialization, and data sampling is performed differently, according to a given random seed. The results are reported as a mean and a standard deviation over these three runs. In addition, we trained models for simultaneous diacritics and typographical error corrections for each language. We also trained several models for a much longer time. First, we continued our basic fine-tuning setup with a batch size of 258 to 6 000 steps (all other basic setups are up to 2 048). At this stage, the loss became noisy (although it was low), so we increased our batch size to 8 192 and continued training further. Due to the change in batch size, we reported our model training steps by how much training data, compared to our basic setup, it consumed. In our results, we reported models trained for ×8 and ×19 the number of samples in the basic setup. We chose those ceiling-rounded numbers as a means of convenience in our setup. As long training is very time-consuming, we performed only a few of them. We think that it still sufficiently indicates the scaling effects. For text generation, in all our experiments we used a beam size of two. Later runs revealed that there is hardly a difference in size. As a result, for future work, we recommend adhering to a simpler beam size of 1. The training script and the Pytorch model implementation were used from the Hugging Face library [95]. If not stated otherwise, we used all default parameters as they are in this library version 4.12.0. The Generation of Typographical Errors We took a similar approach for the generation of typographical errors, as in [78]. Close to a process of text writing, the program moves through each symbol and induces errors in a stochastic manner by evaluating probabilities of various error types for each character. This includes deletion, insertion, substitution, and transposition operations. The chance for a letter to participate in a particular error type is determined according to the frequency of errors in the reference dataset. We used the largest known original typo dataset, the GitHub Typo Corpus [113]. The dataset was filtered for only English language typos and the characters were selected with a count of at least 1 000. Given the final character set C, the total number of times f (c) the character c ∈ C or a specific typo pattern appeared in the selected corpus, the following probabilities for each character are considered: Note that we divide insertion errors into two distinct categories, whether the character is inserted after the one in question, or before. 
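In essence, each of these probabilities is the count of the corresponding error pattern for the character c divided by its total count f(c). The overall corruption procedure, diacritic stripping followed by stochastic typo induction, can then be sketched as below. This is our simplified illustration: the `p_err` tables stand for the per-character, per-error-type probabilities estimated from the GitHub Typo Corpus, and the uniform choice over `KEYS` is a stand-in for the corpus-derived outcome distributions discussed in the next paragraphs. Note also that decomposition-based stripping does not cover letters that are not base-plus-combining-mark compositions (e.g., Polish "ł"), which need an explicit mapping.

```python
import random
import unicodedata

KEYS = "abcdefghijklmnopqrstuvwxyz"   # simplified outcome set for insertions/substitutions

def strip_diacritics(text: str) -> str:
    # decompose characters and drop the combining marks ("š" -> "s")
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def induce_typos(text: str, p_err) -> str:
    out, i, chars = [], 0, list(text)
    while i < len(chars):
        ch = chars[i]
        if i + 1 < len(chars) and random.random() < p_err["transpose"].get(ch, 0.0):
            out += [chars[i + 1], ch]                     # transposition with the next character
            i += 2
            continue
        if random.random() < p_err["delete"].get(ch, 0.0):
            i += 1                                        # deletion
            continue
        if random.random() < p_err["insert_before"].get(ch, 0.0):
            out.append(random.choice(KEYS))               # insertion before the character
        out.append(random.choice(KEYS)
                   if random.random() < p_err["substitute"].get(ch, 0.0) else ch)  # substitution
        if random.random() < p_err["insert_after"].get(ch, 0.0):
            out.append(random.choice(KEYS))               # insertion after the character
        i += 1
    return "".join(out)

def corrupt(sentence: str, p_err) -> str:
    return induce_typos(strip_diacritics(sentence), p_err)
```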
Both insertion probabilities are collected from the same samples, so we divide them by two. An alternative would be to collect triplets of the characters before and after the one in question, but the probabilities would then be sparse. Nevertheless, our chosen approach covers the so-called "fat-finger" errors. We ran some typographical error induction experiments on the original GitHub Corpus and confirmed that our generation method aligns with the original error type distribution. Initially, only about 1% of characters were corrupted, so we scaled our probabilities by a factor of three to be close to the low error rate, as defined in [78]. The final error type distribution and the percentage of corrupted characters for each language are depicted in Figure 1. The amount of generated errors for each language varies slightly because the letter frequencies derived from English differ in other languages. Figure 1: Distribution of generated typographical errors by category (the left vertical axis and stacked bars). Proportions for the English part of the GitHub Corpus (used to derive generation probabilities) are also depicted for reference. The total percentage of induced corruptions is included (the right vertical axis and corresponding blue dots). Insertion and substitution errors can result in many different outcomes. The probabilities for specific letters to emerge, given that this type of error occurs at a specific place, are estimated analogously from the frequencies of the specific outcomes in the reference corpus. As mentioned previously, we took the typo statistics from the English dataset and worked on the assumption that typos are driven purely by the layout of the keyboard (the proximity of keys, etc.), so the same typo statistics will hold in all the other languages using the QWERTY layout. We did not deal with the extensions of the character sets and keyboard layouts for different languages, as we only introduced typos to the undiacritized versions of the texts, irrespective of the case. We disregarded other possible minor variations in the keyboard layouts as insignificant. For the Croatian, French, Hungarian, and Slovak languages, which belong to different keyboard layout families (see Table 1), we remapped the original English QWERTY dataset before inferring typo probabilities. For example, for Croatian, which has a QWERTZ layout, we had to swap the letters "z" and "y" when calculating probabilities. In our initial experiments, we did not observe significant model performance differences between the QWERTY and remapped typo generation versions. Results We present the results of our different experiments here. Diacritics Restoration The diacritics restoration results are presented in Table 3. Our ByT5 results lie between the dictionary (a simple statistical Unigram model) and the state-of-the-art model [46]. The highest alpha-word accuracy is for French, Spanish, and Croatian, with results that were only 0.34%, 0.29%, and 0.56% behind the state of the art, respectively. These languages have the smallest percentage of diacritic words (see Table 1). The lowest scores are recorded for Vietnamese and Latvian, at 94.25% and 96.33%, respectively. We also note that the Irish language, with the smallest dataset, has the highest standard deviation of 0.32%. Table 3: Alpha-word accuracy results (%) for the diacritics restoration task.
The table reports means and standard deviations for three separate training runs with different initial model weights and dataset samplings, trained on 524 288 sentences (#samples: ×1), together with a single run trained on eight times more data (×8), cycling through the available training data. The "Raw" column in Table 3 indicates the alpha-word accuracy of the uncorrected text for comparison. Naturally, the more diacritic-heavy the language is, the lower this number. An Approach with the Dictionary and the ByT5 Models (Dict.+ByT5) We noticed that the dictionary method outperforms the ByT5 method for words that have only a single target translation in the dictionary. We grouped words by how many translation targets they have in the dictionary and show the ratio of ByT5-to-Dictionary error rates in Table 4. Values higher than 1 indicate the Dictionary outperforming the ByT5 model. This is the case for all languages in the word group with only a single translation. Table 4: Alpha-word error ratio between the ByT5 and Dictionary methods for two word groups and models in different training stages. Values higher than 1 indicate that the Dictionary method restores diacritics better. The first word group corresponds to words with exactly one possible translation target, and the second word group corresponds to words with two translation targets. Groups are determined by the training set statistics, while results are reported on the testing set. Table 4 also portrays how the ratio of ByT5-to-Dictionary error rates changes at half and full training. The trend is obvious: the transformer improves for all word groups with training. If our training were longer, the ByT5 model might even surpass the Dictionary model in the word group with one translation candidate. This is exactly what happened for the Latvian and Lithuanian languages after eight times more training samples. Note that at half the training, the standard deviation of the Turkish ratio is abnormally high. This is due to one of the three ByT5 training runs failing temporarily. However, with further training, the run recovered to the same accuracy level as the other two. This is a good example of how training dynamics can depend on different initial conditions and different data sampling. We constructed a hybrid approach by letting the Dictionary model restore words with only a single translation candidate, while leaving all the other words to the transformer. For our standard training, this improved the single ByT5 results by up to 0.37% on average, and allowed us to reach the state-of-the-art results for the Turkish language. However, we can observe that, with longer training, the pure ByT5 model can catch up to, or even surpass, the hybrid approach. Simultaneous Diacritics and Typos Corrections The results of the simultaneous diacritic and typographical error corrections are presented in Table 5. We see that the alpha-word accuracy results are significantly lower across the board, compared to restoring the diacritics alone. The Dictionary method was used in the same way as in the previous experiment, i.e., it was "trained" on the typo-free diacritization-only task in both the standalone and hybrid approaches. On average, the accuracy of the ByT5 model is reduced by 7.84%, while for the hybrid Dict.+ByT5 approach the reduction is 3.71%. The smaller reduction for the hybrid method suggests that the transformer does not cope well with the same words that it dealt with successfully when there were no typos present.
A possible reason may be that more learning is required when both tasks are combined, and up to 10 h of training might not be enough. Training the Hungarian model up to 19 times longer improves the performance substantially, but a gap of 2.98% between the ByT5 model and the hybrid remains. We also added correction results obtained with the open-source Hunspell spellchecker [52] by replacing the words that it found to be incorrectly spelled with its first suggestion. The results indicate that it is barely better than the raw uncorrected sentences. It is also significantly worse than our Dictionary approach, which is specialized in restoring diacritics. Performance on the Zipf's Tail Word frequencies can be modeled reasonably well by a Zipf distribution. It is a very heavy-tailed distribution, with a vast number of words of low frequency. The abundance of such words is a challenge for most learning systems, as the data for these points is sparse. Our question is: how hard are these words for our trained models? We grouped the words in our testing set by their frequencies in the training set. The resulting word groups are:
• Unseen: present in the test but not in the training data;
• [1, 100]: words appearing in the training set from 1 to 100 times;
• [101, 10 000]: words appearing in the training set from 101 to 10 000 times.
Alpha-word accuracy results for these groups are shown in Table 6. A substantial part of the errors comes from words that are unseen during training. Excluding Vietnamese and Irish, this ranges from 13% (Spanish, French) to 36% for Slovak. The Vietnamese outlier of 1% may be due to its linguistic nature, while the Irish outlier of 46% is due to its very small dataset. Overall, the smaller the dataset (Table 1), the more unseen or rare words, and associated errors, we have. As for the Dictionary method and the other classical methods, unseen data is also a significant source of errors for the transformer model. Unlike the classical approaches, however, the transformer model is based on neural networks and can generalize to unseen data. To investigate this generalization, we filtered all the words that were in the testing set but not in the training set and calculated the percentages shown in Table 7. We can see that the ByT5 model successfully restores more than 76% of unseen words for each language. Training Longer Training for longer is beneficial. As can be seen in Figure 2, the testing alpha-word accuracy of all our models only increases with training. The lack of training hurts the performance of the Vietnamese language the most, which is the language with the most diacritics. Training the corresponding model for eight times longer brings substantial improvements of over 3.28%. A similar trend is observed for all the models trained on the two tasks simultaneously in Figure 3. Here, the improvements are much larger. On the other hand, languages with fewer diacritics, such as French and Spanish, have diminishing gains from longer training. Overall, longer training is a must for the more difficult tasks. Figure 3: Alpha-word accuracy improvement during diacritics and typographical error correction training. Training data of ×1 corresponds to 2048 × 256 sentences for a given language. We also ran a single longer training session for the Hungarian language, with up to ×19 training steps. Note that, while the training is much longer, we still use the same dataset sizes presented in Table 1; we just iterate over them more times.
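Before moving to the discussion, the hybrid Dict.+ByT5 decision rule evaluated above can be summarized in a short sketch. This is our illustration, not the original implementation: `dictionary` maps an undiacritized word to the set of diacritized forms it took in the training set, and `byt5_restore` stands for the fine-tuned model applied to a whole sentence; the word-level alignment assumption holds for the diacritics-only task.

```python
def hybrid_restore(sentence, dictionary, byt5_restore):
    model_words = byt5_restore(sentence).split()
    out = []
    for raw, predicted in zip(sentence.split(), model_words):
        candidates = dictionary.get(raw, set())
        if len(candidates) == 1:
            out.append(next(iter(candidates)))   # unambiguous word: trust the dictionary
        else:
            out.append(predicted)                # ambiguous or unseen word: trust the transformer
    return " ".join(out)
```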
Discussion In this work, we show that accuracy can be improved by combining the transformer and the classical Dictionary methods. Yet, this is the case for more under-trained transformers. We show that the longer-trained ByT5 models start to bypass the hybrid approach. However, when resources are limited compared to the difficulty of the task, such a hybrid approach can be a viable solution, as is the case with our simultaneous diacritics restoration and typos correction tasks. The hybrid Dict.+ByT5 approach might also have an advantage in the latter task because the dictionary part is "trained" on the typo-free diacritization task and, thus, recognizes and corrects typo-free words well. The ByT5 model was trained only on the combined task, so it, thus, has a harder time learning to recognize these situations from the noisy data. Transformer models depend on the amount of training data, and small sizes can hinder the performance. Hungarian and Latvian languages, with a very similar percentage of diacritics (and, hence, the task difficulty), had a difference of four times between their dataset sizes. As a result, our achieved restoration score for Latvian was almost 2% lower. On the other hand, the alpha-word accuracy of over 96% and 98% can still be reached for Latvian and Irish languages, with dataset sizes of 5.5 and 1.2 million words, respectively. This indicates a correlation between the difficulty of the task and the size of the dataset needed. One way to improve our results is to leverage the fact that most of the errors are due to unseen and less-seen words in the training data. As we show in this work (Table 6), longer training improves the restoration of words with moderate frequencies but it is less effective for unseen words and is very time-consuming. The only way to improve unseen words is to rely on the additional dataset. Time constraints could, additionally, be relieved by employing boosting approaches [119], i.e., training on the filtered selection of data, which is known to be problematic. Such data could contain a high proportion of low-frequency and unseen words, while at the same time, being compact. A limitation of our work is that we had only a single moderate GPU at our disposal. Scaling the model size [106], incorporating additional datasets [46], and training longer can improve accuracy by several percent. Similarly, one can build a model of multiple languages to gain benefits by overlapping vocabularies and semantics of related under-represented languages, although studies report contradictory results [48,46]. We think that all these scaling approaches are promising as future work. In our work, we generated the typos for the entire datasets just once, but, in principle, we could generate different typos each time we pass through the dataset. This would require more computation, but it would enrich the data for longer training sessions. Another natural future direction is the incorporation of multiple error types. This is still an active area of research, as the currently achievable accuracy of such systems has a wide margin to improve [107]. In this work, we show how difficult the task becomes by combining just two classes of errors. However, this is a bigger problem for the classical hand-crafted approaches, but our ByT5-based models could, in principle, cope with this, given additional data and training times. Our approach is also easy to scale to other languages, as it does not depend on the alphabet or structure of the language. 
For example, only the typo dataset generation model in this work depends on the Latin alphabet and a corresponding keyboard layout. Altogether, this makes our approach very promising for large-scale real-world applications. Our combined diacritic restoration and typo correction solution could, in principle, already be used in, for example, auto-correcting text messages or social media posts/comments. Expanding the approach in the ways discussed above opens even bigger application horizons. Conclusions We achieved a 98.3% average alpha-word accuracy (within 1% of the state of the art) on the diacritic restoration task over 13 benchmark languages with a ByT5 universal byte-level transformer model approach, a smaller training dataset (Wikipedia), and a much-reduced training time (Table 3). When the training time is limited, the model is slightly improved by the assistance of a simple statistical Unigram model (Dict.+ByT5). There is a solid indication, however, that longer training gets very close to the state-of-the-art model, even without this assistance, and with the smaller dataset ( Figure 2). We achieved a 94.6% average alpha-word accuracy on the simultaneous diacritics restoration and typo correction tasks with the same models (Dict.+ByT5), training datasets and times. This is a much harder task, and is problematic for the specialized systems; thus, we have no state-of-the-art model to compare to (Table 5). There is also a strong indication that longer training can significantly improve these results (Figure 3). We investigated that most of the errors are caused by the words that are rare in the training dataset (Table 6). However, contrary to the classical approaches, our models generalize quite well to the unseen words (Table 7) and restore diacritics correctly on > 76% of the unseen words in every language. This gives us good hints on how the models can be further improved, often by simply training them more. The good performance and universality of this approach make it very promising for realworld applications, more languages and error classes. Data Availability: Publicly available datasets were analyzed in this study. The data for 12 benchmark languages can be found here: http://hdl.handle.net/11234/1-2607. Additional data for the Lithuanian were used from here: https://ufal.mff.cuni.cz/~majlis/ w2c/download.html and were preprocessed by the tools from https://github.com/arahusky/ diacritics_restoration/tree/master/data/create_corpus_scripts. All the links were last accessed on 3 January 2022. Conflicts of Interest: The authors declare no conflict of interest.
Overview of screening methods for fatty liver disease in children The prevalence of obesity and obesity-related comorbidities, including diabetes and nonalcoholic fatty liver disease (NAFLD), has been rising globally. Nonalcoholic fatty liver disease is emerging as a common liver disease among adults which can lead to the eventual development of complications including cirrhosis and hepatocellular carcinoma. With the rise of obesity in children, the development of detection methods for the presence of NAFLD is becoming imperative. Although the gold standard for diagnosis is liver biopsy, practical issues limit pediatric use and warrant development of noninvasive or minimally invasive screening tools for the detection and staging of NAFLD. A variety of diagnostic methods have been studied, including the use of aminotransferases, imaging studies and serologic markers, which have some population-based limitations. Additional factors such as gender and ethnicity may also play a role in the screening of NAFLD in pediatric population studies. INTRODUCTION Nonalcoholic fatty liver disease (NAFLD) has emerged as the most common cause of liver disease among children, paralleling the rise in obesity over the past few decades. Fatty liver disease has a spectrum of clinical manifestations, ranging from simple steatosis to steatosis with inflammation and fibrosis, i.e., nonalcoholic steatohepatitis (NASH) [1]. NAFLD was first described by Zelman [2] in 1952 among an inpatient population of thirty obese men with liver disease. In 1983, Moran et al [3] reported 3 children less than 14 years of age with severe hepatitis and fibrosis. Population studies also seem to suggest racial and gender variability regarding NAFLD [4,5]. Factors including obesity, gender and ethnicity may influence the development of NAFLD. Development of safe and cost-effective methods for screening and detection of NAFLD is critical given the large number of patients. Frequently used screening methods for NAFLD include aminotransferases and ultrasonography. NAFLD is the most common etiology for transaminase elevation among adults [6]. Although the gold standard for diagnosis is a liver biopsy, the invasiveness and expense of the procedure limit the feasibility of this option in children. Available imaging modalities, including ultrasound, computed axial tomography and magnetic resonance imaging, have some limitations for broad use, including cost, radiation exposure, as well as technical limitations due to body habitus. A literature search was performed through PubMed using the following terms and combinations thereof: NAFLD, NASH, nonalcoholic fatty liver, steatohepatitis, infant, child and adolescent. The results were limited to human studies; to infants, children and adolescents; and to the English language. The utility of current screening methods for the detection of pediatric NAFLD will be reviewed. SURROGATE OF NAFLD Unexplained alanine aminotransferase (ALT) elevation is a frequently used surrogate for the presence of NAFLD in children and adults. ALT elevation (> 30 U/L) was reported in 6% of overweight adolescents and 10% of obese adolescents among 2450 children enrolled in the NHANES Ⅲ survey (National Health and Nutrition Examination Survey cycle Ⅲ) by Strauss et al [7]. ALT elevation (> 30 U/L) was an independent predictor for NAFLD among an Italian pediatric sample of 268 children between the ages of 6 and 20 years with a body mass index (BMI) above the 90th percentile [8].
ALT elevation was present in 76 children with NAFLD (81% sensitivity of ALT for NAFLD prediction); in 49 children ALT values were > 40 U/L (89% sensitivity of ALT for NAFLD prediction) [8] . Louthan et al [5] noted that elevated ALT (ALT > 40 U/L) was four times more likely in obese children. In several studies, ALT elevation has correlated with the presence of hepatic fat on imaging. Fishbein et al [9] reported a retrospective review of hepatic magnetic resonance imaging (MRI) findings of 39 obese Caucasian children, noting hepatic fat fraction correlated with serum ALT (ALT > 35; r = 0.44; P < 0.05) and age (r = 0.54; P < 0.005) but not with BMI z-score. In a prior study of obese children with hepatomegaly, he reported 21 of 22 (95%) subjects had elevated fat fraction on hepatic MRI and 12 of 20 (60%) had elevated serum ALT (ALT > 35) [10] . Correlation between ALT elevation (ALT > 58) and fatty liver on ultrasound (P < 0.001) was reported in a prospective study of 84 Chinese children seen in the obesity and lipid disorder clinic (ages 9.5-14 years); gamma-glutamyl-transpeptidase (GGT, abnormal GGT > 40) also correlated with fatty liver on imaging (P < 0.001) [11] . Tazawa et al [12] reported sensitivity, specificity and positive predictive values of 0.92, 0.62 and 0.83 respectively for ALT elevation (ALT > 30 U/L) and detection of evidence of fatty liver on ultrasound for a school-aged population in Japan. PITFALLS OF ALT There can be shortcomings with utilizing ALT as a screening method for NAFLD. Aminotransferase elevation is not universally encountered among patients with NAFLD. The Dallas Heart study conducted in Dallas County on 2287 adult subjects revealed that abnormal ALT was not a useful diagnosis of NAFLD as 79% of subjects with hepatic steatosis (determined by elevated hepatic triglycerides on imaging) had normal ALT levels [13] . In the study conducted by Franzese et al [14] , 26 out of 38 (68%) obese children with fatty liver on imaging had normal aminotransaminases. Similar concerns were raised by Fishbein et al [10] upon demonstration that ALT (ALT > 35) did not detect low levels of hepatic fat fraction. In the study by Tazawa et al [12] , 18% of Japanese schoolchildren with normal ALT levels (ALT < 30) had ultrasound findings of a fatty fibrotic pattern suggestive of nonalcoholic steatohepatitis. A study by Burgert et al [15] demonstrated that only 48% of obese children (42% Caucasian/25% African American/33% Hispanic) with intrahepatic fat accumulation on MRI had abnormal ALT levels (ALT > 35), concluding that use of serum ALT as a screening tool may not be effective. Of note, children with an absence of abnormal ALT levels are rarely investigated for NAFLD; evidence of insulin resistance and diabetes should heighten concern for possible NAFLD as it has been associated with liver disease in adults and children [16] . Upcoming imaging methods may enhance capacities for non-invasive detection and staging of NAFLD and NASH in children. Preliminary adult data suggest the FibroScan ® probe as a potential noninvasive technique due to its non-specificity and potential to compensate for larger size. FibroScan ® measures liver stiffness by transient elastography as a surrogate for fibrosis [17] . FibroScan ® has been studied in adult mixed populations, including hepatitis and NAFLD. Prior probes were unable to measure liver stiffness in 2%-10% of patients due to inflammation and body size [18] . 
The XL ® FibroScan probe has improved detection of NAFLD and fibrosis among adults through improved transducer sensitivity with greater measurement depth but still has suboptimal reliability among morbidly obese adults (BMI > 40) and diabetics [18][19][20] . However, the reproducibility of results is a drawback as well as concerns regarding specificity of findings. GENDER IN NAFLD Several studies have indicated a potential relationship between gender and the presence of NAFLD. In general, it has been noted that NAFLD is more prevalent in males than females. Several imaging studies using ultrasound and hepatic MRI have suggested male predominance [8,15] . In addition, a retrospective review, published in 2006 of pediatric autopsies by Schwimmer et al [4] in San Diego County, observed that children with fatty liver were older and more likely to be male with a higher BMI. An earlier study published by Schwimmer et al [21] published in 2003 observed that age and sex did not differ in patients with liver fibrosis, although the majority of patients in the study with NAFLD were male (70%). Similarly, male dominance was reported in a Japanese study by Tominaga et al [22] but the values were not statistically significant. In an Australian study of 500 adolescents, the prevalence of transaminase elevation was increased in obese boys (40% in boys and 20% in girls), but there was no screening for the presence of underlying liver disease [16] . Likewise, in a study done in Taiwan (which included screening for hepatitis B and C), there was a higher prevalence of transaminase elevation in obese boys over girls [23] . A higher prevalence of transaminase elevation among obese boys has also been reported by Chan et al [11] and Schwimmer et al [24] (defined as ALT > 40 U/L), as well as Strauss et al [7] , but with a note of caution as there was alcohol consumption reported among adolescent males. Using subjects from the ages of 12-19 years from the NHANES study (1999)(2000)(2001)(2002) with exclusion of those with ethanol consumption, Graham et al [25] reported an interaction with male sex upon ALT elevation (ALT > 40). Gender influences upon the prevalence of NAFLD in children have not been consistently substantiated by other investigators. Louthan et al [5] did not report an influence of gender upon ALT (ALT > 40) in her pediatric study population. Similarly, Fishbein et al [9] did not detect differences in ALT based upon gender. ETHNICITY AND NAFLD There has been a correlation between ethnicity and ALT levels. Normal ALT ranges vary between different ethnicities and differing ALT levels will have to be regarded for different ethnic groups. In particular, African Americans have been noted to have the lowest percentage of elevated ALT levels, while those of Hispanic origin have been observed to have the highest. The prevalence of ALT elevation (ALT > 30) was 7.4% in Caucasian adolescents, 11.5% in Mexican Americans and 6.0% in African American adolescents in one study conducted utilizing the NHANES survey (1999)(2000)(2001)(2002)(2003)(2004) [26] . Louthan et al [5] also observed that elevated ALT was four times less likely in African Americans than Caucasians, despite increased obesity and insulin resistance suggestive of potential ethnic differences in ALT norms [5] . Several studies have noticed the effect of ALT on the Hispanic population. 
A recent multicenter pediatric cross-sectional study by Schwimmer et al [24] reported a prevalence of elevated ALT (ALT > 40) levels as 36%, 22% and 14% among Hispanic, Caucasian and African American adolescents, respectively; other studies have reported similar findings [27] . Discrepancies may also exist among Asian subpopulations as children of Filipino descent had a prevalence of 20%, but only 4% in those of Vietnamese or Cambodian origin [4] . Similar ethnic influences upon NAFLD/NASH have been reported among adults, although higher percentages of African American patients were encountered. Likewise, out of 151 adults cared for at Brooke Army Medical Center and diagnosed with NAFLD (46% of cohort), the prevalence of NAFLD/NASH confirmed by biopsy was 58.3% among Hispanics, 44% among Caucasians and 35.1% among African Americans [28] . CONCLUSION Paralleling the rise of obesity in children and adolescents has been a rise in the incidence of NAFLD in pediatric populations. Optimal methods for population-based screening for pediatric NAFLD remain undefined to date. As demographic factors such as gender and ethnicity may play a role in the prevalence of NAFLD/NASH, use of targeted screening methods may be feasible but consideration for ethnicity norms on markers, including ALT, may be necessary to enhance sensitivity. Data on influences of gender upon NAFLD/NASH prevalence/ detection in children has been inconsistent to date, warranting additional investigation. Utilizing ALT as a determinant of NAFLD may not be effective. Studies using ultrasonography indicated fibrotic patterns, yet subjects had normal ALT. Also, hepatic steatosis was noted in subjects with normal ALT in the Dallas Heart study. Therefore, further studies are needed to determine surrogate markers of NAFLD in varying pediatric populations.
Chance-Constrained Optimal Covariance Steering with Iterative Risk Allocation This paper extends the optimal covariance steering problem for linear stochastic systems subject to chance constraints to account for optimal risk allocation. Previous works have assumed a uniform risk allocation to cast the optimal control problem as a semi-definite program (SDP), which can be solved efficiently using standard SDP solvers. We adopt an Iterative Risk Allocation (IRA) formalism, which uses a two-stage approach to solve the optimal risk allocation problem for covariance steering. The upper-stage of IRA optimizes the risk, which is proved to be a convex problem, while the lower-stage optimizes the controller with the new constraints. This is done iteratively so as to find the optimal risk allocation that achieves the lowest total cost. The proposed framework results in solutions that tend to maximize the terminal covariance, while still satisfying the chance constraints, thus leading to less conservative solutions than previous methodologies. We also introduce two novel convex relaxation methods to approximate quadratic chance constraints as second-order cone constraints. We finally demonstrate the approach to a spacecraft rendezvous problem and compare the results. I. INTRODUCTION In this paper we address the problem of finite-horizon stochastic optimal control of a discrete linear time-varying (LTV) system with time-independent white-noise Gaussian diffusion. The control task is to steer the state from an initial Gaussian distribution to a final Gaussian distribution with known statistics. In addition to the boundary conditions, we consider chance constraints that restrict the probability of violating the state constraints to be less than a certain threshold. Hard state constraints are difficult to impose in stochastic systems because the noise can be unbounded, so chance constraints are used to deal with this problem by imposing a small, but finite, probability of violating the constraints. In the literature, there are two kinds of chance constraints; individual and joint [2]. Individual chance constraints limit the probability of violating each constraint, while joint chance constraints limit the probability of violating any constraint over the whole time horizon. In this paper, we consider the case of joint chance constraints, because they are a more natural choice for most applications. The control of stochastic systems can be best formulated as a problem of controlling the distribution of trajectories over time. Moreover, Gaussian distributions are completely characterized by their first and second moments, so the control problem can be thought of as one of steering the mean and the covariance to their terminal values. The problem of covariance control has a history dating back to the '80s, with the works of Hotz and Skelton [3], [4]. Much of the early work focused on the infinite horizon problem, where the state covariances asymptotically approach their terminal values. Only recently has the finite-horizon problem drawn attention, with much of the early work focusing on the covariance steering (CS) problem, namely, with the problem of steering an initial distribution to a final distribution at a specific final time step subject to LTV dynamics. The problem could be thought of as a linear-quadratic Gaussian (LQG) problem with a condition on the terminal covariance [5]. 
Moreover, it has been shown that the finite-horizon controller can be constructed as a state-feedback controller and the problem can be formulated as a convex program [5], [6], or as the solution of a pair Lyapunov differential equations coupled through their boundary conditions [7], [8]. Alternatively, for certain special cases one can solve the CS problem directly by solving an LQ stochastic problem with a particular choice of cost weights [9]. Other approaches [10], [11] use an affine disturbance feedback controller having two components, one that steers the mean state and the other that steers the covariance. In general, the theory of steering marginal distributions has a long history stemming from the problem of Schrödinger bridges and optimal mass transport [8], [12]- [14]. Recent work has focused on incorporating physical constraints on the system, such as state chance constraints [15], obstacles in path-planning environments [11], input hard constraints [16], incomplete state information [17], and extensions in the context of stochastic model predictive control [18] and nonlinear systems [19]- [21]. In this work, we extend the Covariance Steering Chance Constraint (CSCC) problem, to account for optimal risk allocation. By risk allocation we mean allocating the probability of violating each individual chance constraint at each time step. For example, if there are M chance constraints and N time steps, there would be N M total allocations for the whole problem. Previous works [9], [11], [15], [16], [18] have assumed a constant risk allocation, so that the resulting problem can be turned into a semi-definite program (SDP). Here, however, we adopt a two-stage algorithm that optimizes the risk distribution over all time steps, and subsequently optimizes the controller by solving a SDP. Other works have tried to optimize the risk using techniques such as ellipsoidal relaxation [22] and particle control [23]. However, ellipsoidal relaxation techniques are overly conservative and lead to highly suboptimal solutions. Particle control methods are computationally too demanding, since the number of decision variables grows with the number of samples. The two-stage risk allocation scheme proposed in this paper is computed iteratively until the cost is within a given tolerance of the minimum, from which we get the optimal risk allocation for the problem, as well as the optimal controller. Previous works on chance constrained optimization and CS use polyhedral chance constraints, since they can be represented as intersections of linear inequalities [24]. This formulation results in some favorable properties that help with the optimization. However, in many applications the constraints are in the form of a conical region (e.g., line-ofsight (LOS) constraints). Approximating such cone constraints with intersecting planes would make the problem rather large for high accuracy approximations. In this work, we also present a way to approximate such cone chance constraints (as special cases of general quadratic constraints) in terms of twosided polyhedral constraints. We then apply this formulation to the case of LOS cone chance constraints, and compare with a polyhedral approximation. Additionally, we present a geometric relaxation of the cone chance constraints, which is less conservative than the two-sided approximation. 
To illustrate the proposed risk allocation algorithm we use as an example a spacecraft rendezvous problem between two spacecraft, in which the approaching spacecraft has to remain within a predetermined LOS region during the whole maneuver. Both polyhedral and cone LOS constraints are investigated and compared. The paper is structured as follows. In Section II we define the general stochastic optimal control problem for steering a distribution from an initial Gaussian to a terminal Gaussian with joint state chance constraints. In Section III we review the two-stage risk allocation formalism, and formulate the SDP for the optimal controller as well as the proposed iterative risk allocation algorithm. In Section IV we present two different convex relaxations of quadratic chance constraints, one in terms of a two-sided linear constraint relaxation, and the other based on a geometric construction. Finally, in Section V we apply the theory to the spacecraft rendezvous and docking problem with both polyhedral and cone chance constraints.

II. PROBLEM STATEMENT

We consider the following discrete-time stochastic time-varying system subject to noise

x_{k+1} = A_k x_k + B_k u_k + D_k w_k,   (1)

where x ∈ R^n, u ∈ R^m, with time steps k = 0, . . . , N − 1, where N represents the finite horizon. The uncertainty w ∈ R^r is a zero-mean white Gaussian noise with unit covariance, i.e., E[w_k] = 0 and E[w_{k1} w_{k2}^T] = I_r δ_{k1,k2}. Additionally, we assume that E[x_{k1} w_{k2}^T] = 0, for 0 ≤ k_1 ≤ k_2 ≤ N. The initial state x_0 is a random vector drawn from the normal distribution

x_0 ~ N(μ_0, Σ_0),   (2)

where μ_0 ∈ R^n is the initial state mean and Σ_0 ∈ R^{n×n}, Σ_0 > 0, is the initial state covariance. The objective is to steer the trajectories of (1) from the initial distribution (2) to the terminal distribution

x_N ~ N(μ_f, Σ_f),   (3)

where μ_f ∈ R^n and Σ_f > 0 are the state mean and covariance at time N, respectively. The cost function to be minimized is

J(u_0, . . . , u_{N−1}) = E[ Σ_{k=0}^{N−1} ( x_k^T Q_k x_k + u_k^T R_k u_k ) ],   (4)

where Q_k ≥ 0 and R_k > 0 for all k = 0, . . . , N − 1. Additionally, and over the whole horizon, we impose the following joint chance constraint that limits the probability of state violation to be less than a pre-specified threshold, i.e.,

P( x_k ∈ X, k = 0, . . . , N ) ≥ 1 − ∆,   (5)

where P(·) denotes the probability of an event, X ⊂ R^n is the state constraint set, and ∆ ∈ (0, 0.5].

Remark 1: We assume that the system (1) is controllable, that is, for any x_0, x_f ∈ R^n, and no noise (w_k ≡ 0, k = 0, . . . , N − 1), there exists a sequence of control inputs {u_k}_{k=0}^{N−1} that steers the system from x_0 to x_f.

First, we provide an alternative description of the system (1) in order to solve the problem at hand. Using [9], [11], [15], [16], [18], we can reformulate (1) as

X = A x_0 + B U + D W,   (6)

where X := [x_0^T, . . . , x_N^T]^T ∈ R^{(N+1)n}, U := [u_0^T, . . . , u_{N−1}^T]^T ∈ R^{Nm}, and W := [w_0^T, . . . , w_{N−1}^T]^T ∈ R^{Nr} are the state, input, and disturbance sequences, respectively. The matrices A, B, and D are defined accordingly [9]. Using this notation, we can write the cost function compactly as

J(U) = E[ X^T Q̄ X + U^T R̄ U ],   (7)

where Q̄ and R̄ are defined accordingly. Note that since Q_k ≥ 0 and R_k > 0 for all k = 0, . . . , N − 1, it follows that Q̄ ≥ 0 and R̄ > 0. The initial and terminal conditions (2) and (3) can be written as

μ_0 = E_0 E[X],   Σ_0 = E_0 Σ_X E_0^T,   (8)

and

μ_f = E_N E[X],   Σ_f = E_N Σ_X E_N^T,   (9)

where Σ_X := E[X X^T] − E[X] E[X]^T, and E_k := [0_{n,kn}, I_n, 0_{n,(N−k)n}] picks out the kth block component x_k of the stacked vector X. Consequently, the state chance constraints (5) can be written as

P( E_k X ∈ X, k = 0, . . . , N ) ≥ 1 − ∆.   (10)

In summary, we wish to solve the following stochastic optimal control problem.

Problem 1: Given the system (6), find the control sequence U that minimizes the cost (7), subject to the boundary conditions (8) and (9) and the joint chance constraint (10).
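The block matrices A, B, and D in (6) are only defined implicitly above. The following NumPy sketch shows the standard lifted-system construction they refer to, written for a time-invariant (A, B, D) triple for brevity (the time-varying case only replaces the powers of A with products of the step matrices). The matrix names calA, calB, calD are introduced here to distinguish the lifted matrices from the step matrices; this is an illustration of the construction, not the authors' code.

```python
import numpy as np

def lifted_matrices(A, B, D, N):
    """Build calA, calB, calD such that X = calA @ x0 + calB @ U + calD @ W,
    where X stacks x_0,...,x_N, U stacks u_0,...,u_{N-1}, W stacks w_0,...,w_{N-1}.
    Assumes time-invariant (A, B, D); illustrative only."""
    n, m = B.shape
    r = D.shape[1]
    calA = np.zeros(((N + 1) * n, n))
    calB = np.zeros(((N + 1) * n, N * m))
    calD = np.zeros(((N + 1) * n, N * r))
    Apow = np.eye(n)                              # holds A^k
    for k in range(N + 1):
        calA[k * n:(k + 1) * n, :] = Apow
        Apow = A @ Apow
    for k in range(1, N + 1):                     # block row k: x_k
        for j in range(k):                        # block column j: u_j / w_j
            Akj = np.linalg.matrix_power(A, k - 1 - j)
            calB[k * n:(k + 1) * n, j * m:(j + 1) * m] = Akj @ B
            calD[k * n:(k + 1) * n, j * r:(j + 1) * r] = Akj @ D
    return calA, calB, calD

# Example: a double integrator with N = 5 steps (values are placeholders)
dt = 0.5
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
D = 0.01 * np.eye(2)
calA, calB, calD = lifted_matrices(A, B, D, N=5)
print(calA.shape, calB.shape, calD.shape)         # (12, 2) (12, 5) (12, 10)
```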
Lower-Stage Covariance Steering Borrowing from the work in [11], we adopt the control policy where v k ∈ R m , K k ∈ R m×n , and y k ∈ R n is given by Remark 2: The proposed control scheme (11)-(12) leads to a convex programming formulation of Problem 1 as follows. Using (11)-(12), we can write the control sequence as where )n a matrix containing the gains K k . It follows that the dynamics can be decoupled into a mean and error state as follows Additionally, the cost function takes the form where Σ Y := AΣ 0 A + DD . The terminal constraints can be reformulated as Qualitatively speaking, V steers the mean of the system to µ f , while K steers the covariance to Σ f . In order to make the problem convex, we relax the terminal covariance constraint B. Polyhedral Chance Constraints When dealing with the risk allocation problem, it is customary to assume that the state constraint set X is a convex polytope X p , so that where α j ∈ R n and β j ∈ R. Under this assumption, the probability of violating the state constraints (10) can be written as Equation (20) represents the objective that the joint probability of violating any of the M state constraints over the horizon N is less than or equal to ∆. Using Boole's Inequality [25], [26], one can decompose a joint chance constraint into individual chance constraints as follows where each δ j k represents the probability of violating the jth constraint at time step k. Notice that the probability in (21) is of a random variable with mean α j E kX and covariance α j E k Σ X E k α j . Thus, (21) can be equivalently written as where Φ(·) denotes the cumulative distribution function of the standard normal distribution. Simplifying (23) and noting that (24) can be computed from its Cholesky decomposition. The expression in (24) gives N M inequality constraints for the optimization problem. In summary, Problem 1 is converted into a convex programming problem. Problem 2 : Given the system (14) and (15), find the optimal control sequences V * and K * that minimize the cost function (16) subject to the terminal state constraints (17a) and (18), and the individual chance constraints (24). Remark 4: Note that it is not possible to decouple the mean and covariance controllers in the presence of chance constraints, because of (24). C. Risk Allocation Optimization Since δ j k are decision variables in (24), the constraints are bilinear, which makes it difficult to solve this problem. As mentioned previously, in order to transform Problem 2 to a more tractable form, the allocation of the risk levels δ j k may be assumed to be fixed to some pre-specified values, usually uniformly. In this case, δ j k are no longer decision variables and the problem can be efficiently solved as an SDP. However, a better approach is to allocate δ j k concurrently when solving the optimization Problem 2, so as to minimize the total cost. This gives rise to a natural two-stage optimization framework [1]. According to the approach in [1], the upper stage optimization finds the optimal risk allocation δ : ∈ R NM , and the lower stage solves the CS problem for the optimal controller U * = U * N −1 given the risk allocation δ from the upper-stage. Let the value of the objective function after the lower-stage optimization for a given risk allocation δ be J * , that is, where J(V, K) is given in (16). The upper-stage optimization problem can then be formulated as follows. D. 
Iterative Risk Allocation Motivation Even though we have formulated the solution of Problem 2 as a two-stage optimization problem, it is not clear yet how to solve Problem 3 efficiently in order to determine the optimal risk allocation. To gain insight into the solution, we first state a theorem about the monotonicity of J * (δ). Theorem 1. The optimal cost from solving Problem 2 is a monotonically decreasing function in δ j k , that is, Proof. Let δ, δ be two risk allocations, and let R(δ), R(δ ) denote the feasible regions, defined by the inequality constraints (24). If δ j k ≤ δ j k for all j and k, then R(δ) ⊆ R(δ ). To see this, let us rewrite (24) as follows Next, and since ∆ ∈ (0, 0.5], it follows that δ j k ∈ (0, 0.5]. Also note that in the domain z ∈ [0.5, 1] the cumulative distribution function Φ(z) forms a convex region M, as shown in Figure 1. Additionally, Φ −1 (z) is a monotonically increasing function, and hence Thus, if δ j k ≥ δ j k , the right hand side of (30) will be larger for δ than it is for δ. This implies that the inequality constraints are tighter for δ than for δ , which proves that R(δ) ⊆ R(δ ). This fact finally implies that J * (δ) ≥ J * (δ ). Remark 5: The chance constraints can be written in yet another form that will prove useful below. Starting from (24), notice that we can write the chance constraints as The quantityδ j k represents the true risk experienced by the optimal trajectories, i.e, when using (V * , K * ). Clearly, the risk we have selected does not need to be equal to the actual risk once the optimization is completed. When these values are equal we will say that the constraint is active, and is inactive otherwise. Good solutions correspond to cases when the true risk is within a small margin of the allocated risk. Many values of δ j k smaller than their true counterparts would imply an overly conservative solution. E. Iterative Risk Allocation Algorithm We can exploit Theorem 1 in the context of CS to create an iterative risk allocation algorithm that simultaneously finds the optimal risk allocation δ * and the optimal control pair (V * , K * ). To this end, suppose we start with some feasible risk allocation δ j k(i) , for all k, j, where i denotes the iteration number. Using this risk allocation, we then solve Problem 2 to get the optimal controller (V * (i) , K * (i) ), which corresponds to the optimal mean trajectoryX * (i) at iteration i. Next, we construct a new risk allocation δ j k(i) as follows: for all k, j such that δ j k(i) is active, we keep the corresponding allocation the same, i.e, δ j k(i) = δ j k(i) . However, for all k, j such that δ j k(i) is inactive we let δ j k(i) < δ j k(i) , which corresponds to tightening the constraints. Since this new risk allocation is smaller, it follows from (31) . Furthermore, this implies that The constraint (33) ensures that the optimal solution for δ (i) is feasible for δ (i) . Furthermore, since δ j k(i) < δ j k(i) , it follows that R(δ ) ⊆ R(δ), so the optimal solution for δ (i) is also the optimal solution for δ (i) as well, hence J * (δ ) = J * (δ). Next, we construct a new risk allocation δ j k(i+1) from δ j k(i) as follows. For all k, j such that δ j k(i) is inactive, leave the new risk allocation the same. For all k, j such that δ j k(i) is active, let δ j k(i+1) > δ j k(i) , which corresponds to relaxing the constraints. Following the same logic, Theorem 1 implies that J * (δ (i) ) ≥ J * (δ (i+1) ). 
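Two quantities used repeatedly above, the deterministic tightening of an individual chance constraint as in (24), and the true risk experienced by a given solution as in Remark 5, are straightforward to evaluate once the mean and covariance of the state are available. The sketch below shows both, with x_mean and Sigma standing for E_k X̄ and E_k Σ_X E_k^T at one time step; the numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.stats import norm

def tightened_constraint_ok(alpha, beta, x_mean, Sigma, delta):
    """Deterministic form of P(alpha^T x <= beta) >= 1 - delta for Gaussian x:
    alpha^T x_mean + Phi^{-1}(1 - delta) * sqrt(alpha^T Sigma alpha) <= beta."""
    std = np.sqrt(alpha @ Sigma @ alpha)
    return alpha @ x_mean + norm.ppf(1.0 - delta) * std <= beta

def true_risk(alpha, beta, x_mean, Sigma):
    """Realized risk of the same constraint for the given mean/covariance:
    delta_tilde = 1 - Phi((beta - alpha^T x_mean) / sqrt(alpha^T Sigma alpha))."""
    std = np.sqrt(alpha @ Sigma @ alpha)
    return 1.0 - norm.cdf((beta - alpha @ x_mean) / std)

# Illustrative half-space constraint alpha^T x <= beta (assumed numbers)
alpha = np.array([1.0, 0.0])
beta = 1.0
x_mean = np.array([0.2, 0.0])
Sigma = np.diag([0.1, 0.1])
delta = 0.01

print(tightened_constraint_ok(alpha, beta, x_mean, Sigma, delta))   # True
print(true_risk(alpha, beta, x_mean, Sigma))    # ~0.006 < delta, i.e. the constraint is inactive
```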
Thus, we have laid out an iterative scheme for a sequence of risk allocations {δ (0) , δ (1) , . . . , δ (i) } that continually lowers the optimal cost. This leads to Algorithm 1 that solves the optimal risk allocation for the CS problem subject to chance constraints. Note that the algorithm is initialized with a constant risk allocation. To tighten the inactive constraints in Line 9, the corresponding risk is scaled by a parameter 0 < ρ < 1 that weighs the current risk with the true risk from that solution. Additionally, to loosen the active constraints in Line 13, the corresponding risk is increased proportionally to the residual risk remaining. Solve Problem 2 with current δ to obtainδ 5N ← number of indices where constraint is active IV. CONE CHANCE CONSTRAINTS In many engineering applications polytopic constraints such as (19) are not realistic. Most often, the constraints have the form of a convex cone, namely, the feasible region is characterized by Cone constraints such as (34) are more realistic, as they better describe the feasible space. As with the case of a polyhedral feasible state space X p , we want the state to be inside X c throughout the whole time horizon. However, since the dynamics are stochastic and similar to (5), this assumption is relaxed to the condition that the probability that the state is not inside this set is less than or equal to ∆. In the context of convex cone state constraints, this condition becomes Remark 6: Although the set X c is convex, the chance constraint P(x ∈ X c ) ≥ 1 − δ may not be convex. Specifically, for large δ, it is possible that the chance constraint (35) is non-convex [27]. Since there is no guarantee that (35) will be a convex constraint, we need to make a convex approximation so that (35) holds for all ∆ ∈ (0, 0.5]. A. Two-Sided Approximation of Cone Constraints Recent work on two-sided affine chance constraints [27] has shown how to relax a general class of quadratic constraints of the form where ξ is a Gaussian random variable, and ∈ (0, 0.5]. In [27] the authors proved that (36) can be conservatively approximated by the following convex constraints where β ∈ (0, 1) represents a constant that balances the tradeoff between violating any of the two constraints (37a)-(37b). In order to cast the cone chance constraint (35a) in the form (36), we first replace the constraint in (35a) with the chance constraint Remark 7: The chance constraint (38) is a relaxation of the original chance constraint (35a). The proof of this result is given in Appendix A. In order to write (38) in the form (36), square both sides of the inequality in (38) and rearrange terms to obtain and identifying ξ = x k yields or, after rearranging terms and completing the squares, which yields the desired result. Remark 8: It should be noted that the set of equations (40) does not always have a solution. Specifically, (40a) implies that A A is the sum of two rank-one matrices, which is a restrictive condition. However, it turns out that this condition holds for our problem. In the case when the cone is centered at the origin, we have that b = d = 0 and a simple solution of equations (40) yields In this sense,â andĉ denote the unit vectors that parametrize the orientation of the cone. 
In the context of CS, (37a)-(37c) then result in the following four affine chance constraints These constraints are now in the standard affine form, and similar to (24), they can be converted tô As a result, the approximation of the quadratic chance constraints has resulted in four cone constraints at each time step, or 4N total cone constraints for the whole problem. Since these constraints are now convex, the resulting problem is convex and can be solved using standard SDP solvers similarly to the polyhedral chance constraint case. B. Geometric Approximation We limit the following discussion to the three-dimensional case, which often occurs when enforcing position constraints. However, the results can be generalized to n-dimensional convex cones. For simplicity, let b = 0 in (34), which corresponds to a cone centered at the origin. From a geometric point of view, one can think of the conical state space (35a) as imposing, at each time step k, that the projection ξ k := AE k X ∈ R 2 lies inside the disk r k = c E k X + d with probability greater than 1 − δ k . However, since E k X is a stochastic process, it follows that the radius of the disk is uncertain, therefore, and similar to Section IV-A, we relax the chance constraint such that the Gaussian vector ξ lie within the mean radius of the diskr k = c E kX + d. Using this approximation, the chance constraints (35a) become Note that the random variable ξ k = AE k X is Gaussian such that ξ k ∼ N (ξ k , Σ ξ k ), with meanξ k := AE kX and covariance Σ ξ k := AE k Σ X E k A . So far, we have turned the convex cone chance constraint (35a) into the chance constraint (46) that requires the probability of a Gaussian random vector being inside a circle of given radius to be greater than 1 − δ k . This problem can be analytically solved as follows. Proposition 1. Let ζ ∼ N (0, Σ ζ ) be a two-dimensional random vector. Then, for any a > 0, (47) Proof. The probability density function (PDF) of ζ is given by Then, the probability in (47) is given explicitly by where Lemma 1. Let ζ ∼ N (0, Σ ζ ) be a two-dimensional random vector, let σ 2 ζ = λ max (Σ ζ ), and let r > 0. Then Proof. Since the covariance matrix is positive definite, we can diagonalize it as Σ ζ = P DP where D is a diagonal matrix containing the eigenvalues λ i of Σ ζ and P is an orthogonal matrix. Since σ 2 ζ = max i λ i , it follows that From the previous expression, it follows that Rearranging the previous inequality gives ζ 2 2 /σ 2 ζ ≤ ζ Σ −1 ζ ζ, and using (47), it follows that Setting r 2 = σ 2 ζ a 2 achieves the desired result. Geometrically, the level sets {ζ Σ −1 ζ ζ = a 2 } define the contours of ellipses having probability 1 − e −a 2 /2 and the level sets { ζ 2 2 = r 2 } are the smallest circles that contain these ellipses. Using Proposition 2 we can now satisfy (46) by enforcing Note that σ 2 Therefore, using Σ X = (I + BK)Σ Y (I + BK) , we get In summary, the convex cone chance constraints (35a) become A. IRA-CS with Polytopic Chance Constraints In this section, we implement the previous theory of CS with optimal risk allocation to the problem of spacecraft proximity operations in orbit. We consider the problem where one of the spacecraft, called the Deputy, approaches and docks with the second spacecraft, called the Chief, such that in the process, the Deputy remains within the line-of-sight (LOS) of the Chief, defined initially to be the polytopic region shown in Figure 2. 
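For the polytopic case, the LOS region must be supplied to the optimization as the half-space data (α_j, β_j) of (19). The snippet below shows this encoding for a simple planar wedge anchored at the Chief; the actual vertices of the region in Figure 2 are not reproduced in the text, so the half-angle, range limit, and orientation here are purely illustrative. In the full six-dimensional state, each α_j would be padded with zeros in the velocity components.

```python
import numpy as np

# Illustrative planar LOS wedge (not the exact region of Figure 2):
# the Deputy position p = (x, y) must satisfy |x| <= y * tan(theta) and y <= y_max,
# with the Chief at the origin and the wedge opening along +y.
theta = np.deg2rad(30.0)     # assumed half-angle of the wedge
y_max = 2.0                  # assumed maximum range, km

# Half-space form alpha_j^T p <= beta_j (one row of `alphas` per constraint)
alphas = np.array([[ 1.0, -np.tan(theta)],
                   [-1.0, -np.tan(theta)],
                   [ 0.0,  1.0]])
betas = np.array([0.0, 0.0, y_max])

def in_los(p):
    """True if the planar position p lies inside the illustrative LOS wedge."""
    return bool(np.all(alphas @ p <= betas))

print(in_los(np.array([0.2, 1.0])))   # True: inside the wedge
print(in_los(np.array([1.0, 0.5])))   # False: outside the assumed half-angle
```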
Assuming that the Chief is in a circular orbit, the relative dynamics of the motion between the two spacecraft are given by the Clohessy-Wiltshire-Hill Equations [28], where m c is the mass of the Chief, ω = µ/R 3 0 is the orbital frequency, and F := [F x , F y , F z ] represents the thrust input components to the spacecraft. These equations of motion are written in a relative coordinate system, where the Chief is located at the origin, and x, y, z represent the position of the Deputy with respect to the Chief. Note that the z dynamics are decoupled from the x − y dynamics; furthermore, the z dynamics are globally asymptotically stable, so in theory we only need to control the planar dynamics. In Figure 2 the blue area represents the planar region. To write the system in state space form, let x := [x, y, z,ẋ,ẏ,ż] ∈ R 6 to obtain the LTI . To discretize the system, we divide the time interval into N = 15 steps, with a time interval ∆t = 0.5 sec. Assuming a zero-order hold (ZOH) on the control yields the discrete system where A d = e A∆t , B d = B∆t + AB∆t 2 /2 and we choose the associated noise characteristics G = diag(10 −4 , 10 −4 , 5 × 10 −8 , 5 × 10 −8 ) [29]. We assume that the initial state mean and covariance are µ 0 = [0.75, −1, 0.75, 0 1×3 ] km and Σ 0 = 10 −2 diag(0.1, 0.1, 0.1, 0.01, 0.01, 0.01), respectively. We wish to steer the distribution from the above initial state to the final mean µ f = 0 with final covariance Σ f = 1 4 Σ 0 , while minimizing the cost function (4) with weight matrices Q = diag(10, 10, 10, 1, 1, 1) and R = 10 3 I 3 . We impose the joint probability of failure over the whole horizon to be ∆ = 0.03, which implies that the probability of violating any state constraint over the whole horizon is less than 3%. The control inputs are bounded as u k ∞ ≤ 0.08 km/s 2 . Note that these bounds are hard constraints as opposed to (soft) chance constraints. To implement this input hard constraint within the CS framework, the algorithm in [16] was used. The details are given in the Appendix. Lastly, in the iterative risk allocation algorithm, we use a scaling parameter ρ (i) = (0.7)(0.98) i in Line 10 of the algorithm, where i represents the current iteration. The SDP in Problem 2 was implemented in MATLAB using YALMIP [30] along with MOSEK [31] to solve the relevant optimization problems. Figures 3 and 4 show the optimal trajectories with optimal risk allocation, and Figure 5 shows the two dimensional planar motion. Figure 6 compares the terminal trajectories of CS with a uniform risk allocation with the proposed method. The two solutions look similar and both satisfy the terminal constraints on the mean and the covariance. However, due to the relaxation Σ N ≤ Σ f , the uniform risk allocation leads to more conservative solutions, as shown in Figure 6. The volume of the final covariance ellipsoid, V N ∝ log det Σ N is considerably smaller for the uniform allocation solution compared to the optimal allocation solution (see Table I). In fact, we see that a consequence of optimal risk allocation is that it maximizes the final covariance given all the constraints, while still being bounded by Σ f . Figures 7 and 8 show the state trajectories and the optimal controls for the polyhedral chance constraints. The control is almost linear but saturates at the first and the last few time steps. 
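For reference, the model and discretization used in this example can be reproduced as follows. The sketch builds the continuous-time Clohessy-Wiltshire-Hill matrices for the state [x, y, z, ẋ, ẏ, ż] under one common sign convention (the paper's axis convention may differ), computes the exact zero-order-hold pair through an augmented matrix exponential, and compares it with the series approximation B_d ≈ BΔt + ABΔt²/2 quoted in the text. The orbital rate and mass are placeholders, not the values used in the paper's simulations.

```python
import numpy as np
from scipy.linalg import expm

omega = 0.0011   # rad/s, assumed orbital rate
m_c = 100.0      # kg, mass appearing in the thrust term (as written in the text)
dt = 0.5         # s, time step used in the example

# Continuous-time CWH dynamics, state [x, y, z, xdot, ydot, zdot]
A = np.zeros((6, 6))
A[0:3, 3:6] = np.eye(3)
A[3, 0] = 3 * omega**2          # xddot = 3 w^2 x + 2 w ydot + Fx/m
A[3, 4] = 2 * omega
A[4, 3] = -2 * omega            # yddot = -2 w xdot + Fy/m
A[5, 2] = -omega**2             # zddot = -w^2 z + Fz/m
B = np.zeros((6, 3))
B[3:6, :] = np.eye(3) / m_c

# Exact ZOH discretization via the augmented matrix exponential
M = np.zeros((9, 9))
M[0:6, 0:6] = A
M[0:6, 6:9] = B
Md = expm(M * dt)
Ad_exact, Bd_exact = Md[0:6, 0:6], Md[0:6, 6:9]

# Series approximation quoted in the text: Ad = e^{A dt}, Bd ~ B dt + A B dt^2 / 2
Ad_series = expm(A * dt)
Bd_series = B * dt + A @ B * dt**2 / 2

print(np.max(np.abs(Ad_exact - Ad_series)))   # agree up to round-off
print(np.max(np.abs(Bd_exact - Bd_series)))   # small because omega * dt << 1
```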
Figure 9 shows the a priori allocation of risk, as well as the true riskδ once the optimization is completed, where δ r corresponds to the risk allocated for the right boundary and δ u for the risk allocated for the top boundary. Notice that in Figure 9a the true risk exposure is much lower than the allocated risk, which confirms the conclusion that the solutions for the uniform allocation case are overly conservative. In fact, the true risk is nearly zero except at the initial and terminal times. Comparing this to Figure 9b we see a close correspondence between the allocated risk and the true risk exposure over the whole horizon for the optimal risk allocation case. It should be noted that although the true risk is still slightly less than the allocated risk, the error between the two is much smaller when compared to that of the uniform risk allocation strategy. The iterative risk allocation algorithm is robust in the sense that the algorithm will assign risk proportionately to how close the solution trajectories are to the boundaries of the state space. Since solutions are close to the right and top boundaries of the allowable LOS region for most of the horizon, the optimal allocation weighs these respective risks greater than those of the left and bottom boundaries. Thus, IRA assigns an extremely small risk to the right boundary during these time steps and only assigns a larger risk when the trajectories reach their terminal values. Table I shows the true joint probability of failure, defined as It is clear that the uniform risk allocation does not even come close to the desired design of ∆ = 0.03, while the IRA gives a true probability of failure very close to the desired one. Finally, we looked at the optimal cost function over each IRA iteration, as in Figure 10. The convergence criterion set in this example is = 10 −5 , or when all of the constraints are inactive, which can be proved in [1] to be a sufficient condition for optimality for Problem 3. We see that indeed (29) holds, and the optimization resulted even in a slight decrease of the objective function, converging within 16 iterations. Thus, the iterative risk algorithm optimizes the risk allocation at each time step without increasing the cost. B. IRA-CS with Cone Chance Constraints For the convex cone chance constraint case, we also implemented the method outlined in Section IV, namely the 4N constraints in (45). For this example, the following representation of a cone was used where λ = 1.2, which corresponds to a 50 • cone half-angle. This requirement translates to the individual chance constraints P x 2 k + z 2 k ≤ (λy k ) 2 ≥ 1 − δ k , k = 1, . . . , N. As discussed in Section IV, in order to put this in the form of (36), the probabilistic constraint in (68) is relaxed to κ = λȳ k , so that at each time step, the state is forced to stay inside a disk with radius λȳ k . Comparing (68) simulations. Figures 11 and 12 show the optimal trajectories in the three-dimensional space and in the projection on the x-y plane, respectively. It should be noted that for the two-sided approximation of the cone constraint, and since we approximated the quadratic constraints as four linear constraints, the IRA algorithm needs to be adjusted as follows. In Line 5 of Algorithm 1, a constraint is active at time step k if any of the four affine constraints in (45) is active. Similarly, when tightening the constraints in Line 10, the maximum true riskδ k := max jδ j k is used. 
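The overall loop, including the tighten/relax logic discussed above, can be summarized in the following Python sketch. Here solve_lower_stage is a placeholder for the SDP of Problem 2 and must return the optimal cost together with the true risks; the active-set test, the ρ-weighted tightening, and the uniform hand-back of the residual risk follow the description of Algorithm 1 but simplify its bookkeeping (in particular, for the two-sided cone case the paper classifies a step using any of the four affine constraints and tightens with the maximum true risk over them, while this sketch keeps the per-constraint bookkeeping of the polyhedral case).

```python
import numpy as np

def iterative_risk_allocation(solve_lower_stage, Delta, N, M,
                              rho0=0.7, decay=0.98, tol=1e-5, max_iter=50):
    """Sketch of the IRA outer loop.

    solve_lower_stage(delta) -> (cost, delta_true): placeholder for the SDP of
    Problem 2; delta and delta_true are (N, M) arrays of allocated / realized risks.
    """
    delta = np.full((N, M), Delta / (N * M))      # uniform initial allocation
    cost_prev = np.inf
    for i in range(max_iter):
        cost, delta_true = solve_lower_stage(delta)
        active = np.isclose(delta_true, delta, rtol=1e-2)
        # Stop when the cost has converged, or when all constraints are
        # active (nothing to tighten) or all inactive (sufficient for optimality).
        if cost_prev - cost < tol or active.all() or not active.any():
            break
        rho = rho0 * decay ** i
        # Tighten the inactive constraints toward their realized risk.
        inactive = ~active
        delta[inactive] = rho * delta[inactive] + (1.0 - rho) * delta_true[inactive]
        # Hand the unused (residual) risk back to the active constraints.
        residual = Delta - delta.sum()
        delta[active] += residual / active.sum()
        cost_prev = cost
    return delta, cost
```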
This is not needed for the geometric approximation because it approximates each cone chance constraint as a single convex constraint for each k, so the standard IRA algorithm is applicable. VI. CONCLUSION In this paper, we have incorporated an iterative risk allocation (IRA) strategy to optimize the probability of violating the state constraints at every time step within the covariance steering problem of a linear stochastic system subject to chance constraints. For the covariance steering problem, we showed that employing IRA not only leads to less conservative solutions that are more practical, but also tends to maximize the final covariance. Additionally, the use of IRA in the context of CS with chance constraints results in optimal solutions that have a true risk much closer to the intended design requirements, compared to the use of a uniform risk allocation. We also implemented quadratic chance constraints in the form of convex cones, which are more accurate and natural for many engineering applications. Using a two-sided affine approximation, the quadratic chance constraints can be made convex, and a slightly modified IRA algorithm was used to optimize the risk. Lastly, we also used a geometric approximation of the cone chance constraints, which is valid when the state space is three-dimensional, as is often the case when constraining the position of the vehicle, and which is less conservative than the two-sided affine approximation. Both relaxations result in convex programs, where the two-stage IRA algorithm is applicable.
2020-09-22T01:00:46.215Z
2020-09-21T00:00:00.000
{ "year": 2020, "sha1": "0a7b164e5eedacdd6a0b550c5bd93cf09cd197bf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0a7b164e5eedacdd6a0b550c5bd93cf09cd197bf", "s2fieldsofstudy": [ "Engineering", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
261371851
pes2o/s2orc
v3-fos-license
Understanding SEI evolution during the cycling test of anode-free lithium-metal batteries with LiDFOB salt Anode-free lithium-metal batteries (AFLMBs) have the potential to double the energy density of Li-ion batteries, but face the challenges of mossy dendritic lithium plating and an unstable solid electrolyte interphase (SEI). Previous studies have shown that AFLMBs with an electrolyte containing lithium difluoro(oxalato)borate (LiDFOB) salt outperform those with lithium hexafluorophosphate (LiPF6), but the mechanism behind this improvement is not fully understood. In this study, X-ray photoelectron spectroscopy (XPS) depth profile analysis and electrochemical impedance spectroscopy (EIS) were conducted to investigate the SEI on plated Li from the two conducting salts and their evolution in Cu‖NMC full cells during cycling. XPS results revealed that an inorganic-rich SEI layer is formed in the cell with LiDFOB-based electrolyte, with a low carbon/oxygen ratio of 0.56 compared to 1.42 in the LiPF6-based cell. With the inorganic-rich SEI, a dense electroplated Li with a shining surface on the Cu substrate can be retained after ten cycles. The inorganic-rich SEI enhances the reversibility of Li plating and stripping, with a high average CE of ∼98% and a stable charge/discharge voltage profile. The changes in SEI resistance and cathode electrolyte interphase resistance are more prominent compared to the changes in solution and charge transfer resistances, which further validate the role of the passivation films on Li deposits and NMC cathode surfaces in stabilizing AFLMB cycling performance.

Introduction

Lithium (Li) metal is hailed as the next holy grail of high-energy-density anodes, as it promises the lowest electrochemical potential (−3.04 V vs. standard hydrogen electrode) and the highest theoretical specific capacity (3861 mA h g−1).1,2 However, Li metal is costly, and handling Li metal requires ultralow humidity, which increases production costs.3 Furthermore, the use of excessive Li may pose safety concerns for the equipped battery.4 Thus, anode-free lithium-metal batteries (AFLMBs), with all the active lithium supplied from the cathode materials, become the most promising choice as the next-generation rechargeable batteries, with energy densities of up to 423 W h kg−1 and 1514 W h L−1.5 It is also noteworthy that the production of AFLMBs is compatible with the existing manufacturing facilities for Li-ion batteries.7,8 In particular, interfacial reactions that form the solid electrolyte interphase (SEI) layer consume the active Li and adversely affect AFLMB cell life.9,10 The undesirable heterogeneous SEI layer may form due to inherent electrolyte instability at low reduction potentials and inhomogeneous surface chemistry. This passivation layer produces an unequal electric field distribution and a Li concentration gradient, which may be the root cause of Li dendrite growth.11 The unstable Li dendrites could lose electrical contact with the Cu current collector during cycling, resulting in "dead" Li that is electrochemically irreversible.12 Thus, the formation and evolution of the SEI influence Li plating and stripping during cell cycling, and affect the coulombic efficiency (CE) of AFLMBs. Various strategies have been employed to suppress dead Li formation in AFLMBs, focusing on modified current collectors,13,14 carbon host materials,15 artificial solid electrolyte interphase additives,16 and innovative electrolyte strategies.

17odied current collectors, such as those coated with protective layers or porous structures, are reported to mitigate the dead Li formation by promoting more uniform lithium plating and stripping. 18Carbon-based hosts can serve as a physical barrier against dendrite growth and enhance lithium ion transport. 15However, Li host with high contact area may promote excessive SEI formation, leading to electrolyte depletion.An articial SEI protective layer can be engineered through additives or coatings, enhancing electrolyte compatibility and preventing the continuous growth of detrimental Li dendrites. 18owever, developing an articial SEI remains challenging due to the trade-off between ionic conductivity and mechanical robustness. 19In the end, material choice of electrolyte (i.e., solvent, co-solvent, salt, and additives) still play a pivotal role in AFLMBs performance as it controls the Li + ion ux, current density, and de-solvation mechanisms that can lead to homogenous and dense Li plating on the current collector. 1][22][23] Ether-based solvents, such as dimethyl ether (DME), are widely studied to achieve smooth lithium plating in Li‖Li or Li‖Cu symmetric cells. 22,24However, its low working voltage inhibits its compatibility with the highvoltage cathode.Using highly concentrated electrolytes can solve this problem, but the high production cost will be unavoidable. 25Notably, carbonate solvents can achieve AFLMBs with high cut-off voltage. 26Common combinations of carbonate solvents with a high dielectric constant (e.g., ethylene carbonate (EC)) and low viscosity solvent (e.g., diethylene carbonate (DEC)) can be employed in AFLMBs. 27However, the commercial electrolyte for Li-ion batteries with LiPF 6 as the conducting salt results in poor Li deposits due to the autocatalytic reaction of LiPF 6 with a trace amount of water. 28This reaction generates HF that damages SEI on the Li metal surface and causes poor cycling stability of AFLMBs.Besides, HF could also initiate transition metal dissolution on the cathode side, further deteriorating the battery performance. 29herefore, choosing a compatible Li salt for carbonate electrolytes is crucial for enabling high-energy-density and stable AFLMBs.1][32] In a LiFePO 4 (LFP)‖Cu cells, the oxalate group in LiDFOB was reported to regulate the growth of LiF particles by serving as a capping agent, producing a uniform distribution of LiF particles on the LFP surface. 33Weber et al. 32 employed different salt compositions in AFLMBs with LiNi 0.5 Ni 0.3 Co 0.2 O 2 cathode.Cells with LiDFOB salt cycled between 3.6-4.5V can reach 60 cycles with capacity retention above 80%, whereas cells with LiPF 6 only lasted for 10 cycles.The performance improvement of AFLMBs with LiDFOB salt can be related to the SEI composition and morphology of the electrodeposited lithium. 22,34eports using Li‖Li cells have shown that SEI inuences the lithium morphology and CE, which can be demonstrated by measuring the changes in the internal resistance by electrochemical impedance spectroscopy (EIS). 35The cells that produce dense plated Li morphology are reported to have a stable solution resistance, an increase of charge transfer resistance, and an increase in interfacial lm (i.e., SEI) resistance aer the symmetric cell is cycled for 600 hours.However, these prior reports are limited to Li‖Li cells, where the Li supply is unlimited and is completely different from Cu‖NMC cell conguration. 
In this work, we study the SEI evolutions governed by the two conducting salts (i.e., LiPF 6 and LiDFOB) in a full-cell conguration during the cycling test.The effect of salt type on the performance and stability of AFLMBs is investigated.Cu‖NMC cell with high cathode mass-loading of 20 mg cm −2 is used, mimicking the industrial standard for cathode mass loading.Xray photoelectron spectroscopy (XPS) depth prole analysis is used to examine the SEI composition formed by different salts aer the formation cycle.The plated Li of different cycles is monitored with SEM to reveal its morphology.The resistance is checked by EIS to further reveal the evolution of SEI behaviour in cell operation.Our ndings demonstrate that AFLMBs with LiDFOB-based electrolytes exhibit better performance and stability compared to those with LiPF 6 .Specically, the inorganic-rich SEI formed with LiDFOB promotes stable SEI formation and better Li plating and stripping kinetics, resulting in ∼98% CE with a retained shining and dense plated Li.These mechanistic insights can further explain the behaviour of LiD-FOB salts in SEI formation for prolonging the AFLMBs cycle life. Electrochemical measurements The formation cycle was done by applying a current rate of 5 mA g −1 to 3.8 V, then 25 mA g −1 to 4.5 V, followed by discharge at 50 mA g −1 to 2.8 V.The galvanostatic charge-discharge was conducted at room temperature with a battery analyzer (Neware Battery Testing System) within the potential range of 2.8 V to 4.5 V at 50 mA g −1 .Linear scan voltammetry (LSV) was conducted with Autolab PGSTAT 302N (Metrohm AG) from the open circuit voltage (OCV) to −0.2 V.The electrochemical impedance spectroscopy measurements were measured at room temperature with Autolab PGSTAT 302N (Metrohm AG) in the frequency range of 1 MHz to 0.01 Hz, with an amplitude of 10 mV aer fully charged to 4.5 V and rested for 24 hours. Subsequently, the EIS result was tted with the NOVA Soware.The electrolyte conductivity was measured with Pt microelectrode with TSC 1600 equipment (rhd Instruments) connected to Autolab M204 (Metrohm AG).Cell impedances with 1 M LiD-FOB or 1 M LiPF 6 in EC : DEC (1 : 1, v/v) were measured at the temperature range from −10 to 70 °C with 10 min for equilibration. Surface characterizations The cells were evaluated aer 100% charge (SOC) in the 1st, 5th, and 10th cycles, respectively.All the cells were disassembled inside the glovebox and washed with anhydrous dimethyl carbonate (DMC, dried with 4 Å molecular sieves) to thoroughly remove the electrolyte's remnant.The lithium-plated copper foil was then dried under a vacuum at room temperature for one hour.The sample was transferred to the characterization chamber (e.g., SEM or XPS) using an airtight container.The samples were exposed to the air for a maximum of 10 seconds. JEOL JSM-7600F eld emission scanning electron microscopy (FE-SEM) was used to observe the morphology of plated lithium.Characterization of X-ray photoelectron spectroscopy (XPS) was carried out at high vacuum (3.8 × 10 −8 Torr) in the PHI Quantera SKM X-ray photoelectron spectrometer with Al ka source (1486.6 eV).The examined areas were 250 mm 2 .XPS depth prole analyses of the SEI were done with Ar + sputteretching at different stages: 0 min (surface), 6 min, and 12 min.The etching rate is 1.67 nm min −1 based on SiO 2 .The obtained XPS spectra were calibrated to a C-C bond with a binding energy of 285 eV and tted using CasaXPS (version 2.3.17,Casa Soware Ltd). 
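Peak deconvolution of the kind performed here is normally done inside CasaXPS, but the basic idea can be illustrated with a least-squares fit of Gaussian components fixed at the binding energies used in this work (C-C/C-H 285 eV, C-O 285.8 eV, C=O 287.2 eV, Li2CO3 289 eV). The spectrum below is synthetic, only the amplitudes and a common width are fitted, and no Shirley background or line-shape constraints are applied, so this is a simplified sketch rather than the actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# C 1s component positions (eV): C-C/C-H, C-O, C=O, Li2CO3
POSITIONS = np.array([285.0, 285.8, 287.2, 289.0])

def c1s_model(be, a1, a2, a3, a4, width, baseline):
    """Sum of four Gaussians with fixed centers plus a flat baseline (sketch)."""
    amps = np.array([a1, a2, a3, a4])
    comps = amps[:, None] * np.exp(-0.5 * ((be[None, :] - POSITIONS[:, None]) / width) ** 2)
    return comps.sum(axis=0) + baseline

# Synthetic "measured" spectrum, for illustration only
be = np.linspace(282, 292, 400)
rng = np.random.default_rng(1)
measured = c1s_model(be, 120, 300, 80, 150, 0.7, 20) + rng.normal(0, 5, be.size)

popt, _ = curve_fit(c1s_model, be, measured, p0=[100, 100, 100, 100, 0.8, 10])
areas = popt[:4] * popt[4] * np.sqrt(2 * np.pi)   # Gaussian area = amplitude * sigma * sqrt(2*pi)
for label, frac in zip(["C-C/C-H", "C-O", "C=O", "Li2CO3"], areas / areas.sum()):
    print(f"{label}: {100 * frac:.1f} % of the C 1s area")
```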
Results and discussion In this study, the full-cell AFLMBs were assembled using electrolytes consisting of 1 M LiDFOB in EC : DEC (1 : 1, v/v) or 1 M LiPF 6 in EC : DEC (1 : 1, v/v) and NMC622 cathode with high areal mass loading of 20 mg cm −2 .The cells were cycled between 2.8 and 4.5 V in a full charge-discharge mode at a constant current density of 50 mA g −1 (or at a rate of C/4) aer the formation cycle.The performance of AFLMBs depends on the salt choice in the electrolyte, suggesting that the salt participates in the SEI formation and inuences the following Li plating process. 36The SEI composition can alter the distribution of electric current and Li + ux across the current collector, directly impacting the shape and structure of electrodeposited Li metal. 37To assess the chemical composition of the SEI at different depths, X-ray photoelectron spectroscopy (XPS) depth proling was employed on the deposited Li at Cu foil aer the formation cycle.The depth prole of the SEI is determined through a sputter-etching process with accelerated Ar + ions prior to the XPS measurement at different stages, namely the surface (i.e., 0 min), 6 min, and 12 min.The SEI surfaces were gradually removed through etching time, and then the distinctive photoelectron signature was collected. The XPS spectra of both samples at various etching times can indicate the presence of organic and inorganic components of the resultant SEI (Fig. 1). 38The C 1s spectra in Fig. 1a and b display the characteristic inorganic peak of Li 2 CO 3 (289 eV) along with deconvoluted peaks of organic compounds, such as carbonyl group C]O (287.2 eV), polyether carbon C-O (285.8 eV) and hydrocarbon C-C/C-H (285 eV). 28,30,39The C 1s XPS spectra of the cell with LiDFOB from 0 to 12 minutes of etching time are dominated by the inorganic compounds (Fig. 1a), as shown with the presence of Li 2 CO 3 with less prominent C 1s spectra intensity compared to the cell with LiPF 6 salt (Fig. 1b).The lower organic compound in the SEI may reect less SEI breakage and formation during Li plating, resulting in a high CE during the charge-discharge cycle. 40ompared to Li deposits from the cell with LiDFOB, the SEI formed in the cell with LiPF 6 is enriched with organic compounds, evidenced by high intensity of carbonyl, polyether, and hydrocarbon peaks (Fig. 1b).The C 1s spectra intensity remains signicant even at the inner SEI of LiPF 6 cell (i.e., aer 6 and 12 minutes of etching time), as shown with the polyether compound dominating the peak intensity.Therefore, the use of LiPF 6 -based electrolytes can lead to the formation of an organicrich SEI layer.The organic-rich SEI strongly bonds with Li metal due to its low interfacial energy. 41Thus, it may experience the same volume change with Li during the plating and stripping, leading to SEI breakage during cycling.Moreover, its low interfacial energy also facilitates vertical and dendritic Li growth, which is detrimental to battery performance. 12he O 1s XPS spectra can further conrm the presence of inorganic compounds in the SEI. 39Fig. 1c and d show the O 1s spectra for the cells with LiDFOB and LiPF 6 salt, respectively.The peaks associated with inorganic phases, such as LiOH (531.5 eV), Li 2 O (532.5 eV), and Li 2 CO 3 (532.9eV), along organic phases such as carbonyl (533.8 eV) and polyether (534.9 eV), are detected in both of O 1s spectra. 
39,42The relative atomic concentrations of carbon with oxygen are calculated based on C 1s and O 1s peak area to quantify the organic or inorganic phases inside the SEI.The SEI of the cell with LiDFOB salt possesses signicantly lower organic compounds (i.e., inorganic-rich), with a C/O ratio of 0.56 at the surface, compared to that with LiPF 6 salt (C/O ratio of 1.42).Hence, the use of LiDFOB-based electrolytes can lead to the formation of an inorganic-rich SEI layer.The inorganic lithium compounds were reported to have weak bonding with high interfacial energy with Li metal. 43,44Thus, the produced inorganic-rich SEI can keep its integrity during Li plating and stripping. 40Besides, the inorganic-rich SEI also possesses a high Young's modulus that can suppress dendritic Li growth and penetration. 45The high mechanical stability of inorganic-rich SEI could also prevent the SEI from continuous breakage during the Li plating and stripping, retaining high CE and stable cycle life of the battery. 40esides the SEI components, the energy required to dissociate the Li + from the solvent component (i.e., de-solvation energy) electrolytes is critical for the kinetics performance because the solvent molecules around Li + have to be completely stripped off before plating into the substrate. 46To show the desolvation energy of LiPF 6 and LiDFOB salt in the EC : DEC solvent, we have added the comparison of the Arrhenius plot between the 1 M LiPF 6 and 1 M LiDFOB in EC : DEC, as shown in Fig. 2a.The activation energy (E a ) of 1 M LiDFOB is 0.168 eV, compared to 0.176 eV for 1 M LiPF 6 .The lower E a value of LiDFOB salt suggests that the LiDFOB salt has a low desolvation barrier for Li + and promotes the facile de-solvation process of Li + compared to LiPF 6 salt in a carbonate-based solvent.Thus, Li + could easily dissociate from EC : DEC solvent and easily plated to Li. LSV plot in Fig. 2b also supports this argument, where the electrolyte that contains LiDFOB salt reduced rst at a potential of 1.24 V, compared to 1.05 V of Li‖Cu cell with LiPF 6 salt.Faster de-solvation of LiDFOB leads to early SEI formation, suggested to be benecial to protect the electrolyte from decomposition, as well as controlling the local current density across the current collector. 47With controlled local current density and synergistic effect of inorganic-rich SEI, the cell with LiDFOB salt achieves a low nucleation overpotential of 0.08 V, compared to 0.18 V of the cell with LiPF 6 salt (Fig. 2c).Lower nucleation overpotential can be regarded as low energy to form Li nuclei. 48Thus, LiDFOB salt can be a benecial contributor to achieving facile nucleation and homogenous spatial distribution of nuclei. The morphologies of plated Li in the Cu‖NMC cells with 1 M LiDFOB or 1 M LiPF 6 in EC : DEC (1 : 1, v/v) electrolyte aer fully charged at the 1st, 5th, and 10th cycle are depicted in Fig. 3. Initial plating at the formation cycle shows the Li dendritic growth with LiDFOB salt (Fig. 3a).Due to the presence of surface cracks, pits, and subsurface impurities on Cu foil, the localized high electron density at these inhomogeneous sites may lead to the preferential gathering of Li-ions and electrons at the interface, thereby resulting in the formation of dendritic Li upon the initial cycle. 6As the cycle number increases, the plated Li with LiDFOB becomes dense and spherical-like, as shown in Fig. 
3b and c.The inorganic-rich SEI layer is known to have a low Li + diffusion barrier, which facilitates a fast and uniform Li-ion diffusion during the Li plating, as illustrated in Fig. 3d. 49oreover, the dense plated lithium led to a good mechanical integrity of deposited lithium on the copper foil, proven by the retained shining Li deposits aer 10 cycles (Fig. S1a-c †). On the other hand, the Li morphology plated with LiPF 6 salt tends to form a mossy structure, as shown in Fig. 3e-g.The organic-rich SEI has a high porosity, which promotes heterogeneous local current buildup, leading to uneven Li + transfer inside SEI. 34Thus, the organic-rich SEI even can promote the growth of dendritic Li, as illustrated in Fig. 2h. 49This dendritic Li is easily detached from the current collector (Fig. S1d-f †). The electrochemical performance of Cu‖NMC cells with LiDFOB and LiPF 6 salt is depicted in Fig. 4. As the inorganicrich SEI governed by LiDFOB promotes dense electroplated Li on Cu foil, the Li reversibility of the cell is positively affected and shows an overlapping voltage prole (Fig. 4a).On the other hand, organic-rich SEI in the cell with LiPF 6 induces the mossy and dendritic Li growth on Cu foil.Thus, the Li loss caused by irreversible Li plating-stripping with LiPF 6 salt is prominent, as illustrated by the cells' voltage degradation in Fig. 4b.The Cu‖NMC cell with LiDFOB salt demonstrates an impressive average coulombic efficiency of 98% and capacity retention of 52% even aer 50 cycles (Fig. 4c).In contrast, cells utilizing LiPF 6 salt exhibit a complete depletion of capacity aer 40 cycles.This highlights the signicant performance enhancement achieved by incorporating LiDFOB salt in anode-free lithium metal batteries for extended cycles.The relatively low stability of the cell with LiDFOB salt in this study can be justi-ed by the use of a high cathode mass loading, a non-uorinated solvent, and a wide operating voltage range, all of which collectively contribute to the observed stability levels. 32esides shortening the battery cycle life, the SEI formation and evolution are important factors that govern the changes in the battery resistance. 34,45Therefore, the electrochemical impedance spectroscopy (EIS) of cells with LiDFOB and LiPF 6 is measured at the 1st, 5th, 10th, and 15th cycle (at a fully charged state) to monitor the cell resistance change from electrolyte, cathode, and anode.The details of the tting procedure and parameters are given in ESI S2 and S3, † respectively.The obtained Nyquist plot of both samples is composed of two semicircles with distinct characteristics.The rst semicircle, which dominates the impedance at high frequency, corresponds to the electrode resistance.It can be further analyzed by deconvoluting it into two semicircles from the passivation layers on the cathode and anode, as depicted in Fig. S2a.† Meanwhile, the last semicircle at low frequency is associated with the charge transfer resistance (R ct ). The impedance of symmetric cells composed of Cu‖Cu (Fig. S2c †) and NMC‖NMC (Fig. S2d †) retrieved from the two fully charged Cu‖NMC cells was measured to dene the contribution of the cathode and anode passivation layer.For the negative side, the symmetric Cu‖Cu cell (Fig. S2c †) exhibits smaller semicircles compared to NMC symmetric cell (Fig. S2d †).Comparing these to the full cell impedance spectra (Fig. 
S2a †), the negative lithium electrode contributes to high frequency (i.e., rst semicircles), while the positive electrode contributes to the medium frequency (i.e., second semicircle).This result is in line with the study by Iurilli et al., 50 which explains that the high and medium frequencies observed in the rst semicircle are attributed to the impedance of the passivation layer at the anode side (R SEI-anode ) and the cathode side (R CEI-cathode ), respectively.The tted Nyquist plots and parameters are shown in Fig. S3 and Table S1, † respectively. The changes in the resistance value of cells with LiDFOB and LiPF 6 within the rst 15 cycles are shown in Fig. 5a and b, respectively.The cell with LiDFOB shows a stable impedance over the cycles compared to the cell with LiPF 6 , which correlates well with the improved cycling stability and high CE.Furthermore, the changes in the R SEI-anode (Fig. 5c) and R CEI-cathode (Fig. 5d) are prominent.Thus, the battery's performance is related to the formation of the passivation lm on Li and NMC622 cathode surfaces.In contrast, the changes in R s (Fig. S4a †) and R ct (Fig. S4b †) are relatively small, indicating that the electrolyte conductivity and electrode kinetics are not major limiting factors in the electrochemical performance of the cells. Fig. 5c shows the distinct behaviour of SEI resistance at the anode (R SEI-anode ) for cells with LiDFOB and LiPF 6 .Changes in the SEI resistance may indicate the stability of SEI during charge and discharge cycles.The volume changes during Li plating can break the SEI, especially when Li is porous or dendritic. 49Aer the formation cycle, the cells with LiDFOB and LiPF 6 exhibit R SEI-anode of 14.53 and 25.04 U cm 2 , respectively.As the cycle number increases, the R SEI-anode of the LiDFOB cell increases to 20.95 U cm 2 , while the LiPF 6 cell decreases to 8.85 U cm 2 .The decrease of the R SEI-anode for the LiPF 6 cell can be attributed to the formation of porous lithium structure and the continuous SEI breakage and formation in each cycle. 49Fresh cracks of SEI expose Li metal, and SEI formation occurs again. 34Thus, a thinner SEI may be formed, reected in the decreased SEI resistance over the cycle number.In contrast, the increase in the R SEI-anode of the LiDFOB cell may reect the stable SEI formed at Li metal surfaces.Previous research also reported that the increase in SEI resistance correlates with the dense lithium metal plating in the lithium metal battery. 35The presence of stable SEI and densely plated Li metal was also reected in high CE and stable cycling stability of cells with LiDFOB. The evolution of interphase at the cathode is observed in the evolution of the R CEI-cathode (Fig. 5d).The cathode electrolyte interphase (CEI) plays a role in mitigating the capacity decay due to transition metal dissolution at the cathode (i.e., the crosstalk effect). 6,51In this case, the transition metal with IV+ oxidation state in the NMC cathode is likely to escape during lithiation at a high cut-off voltage, which degrades the cathode and increases the resistance of the battery. 52,53The cell with LiPF 6 salt exhibits an increased R CEI-cathode from 15.24 to 47.93 U cm 2 aer 15 cycles, while the cell with LiDFOB decreases its R CEI-cathode from 24.46 to 15.25 U cm 2 . 
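The resistances discussed above are extracted by fitting the Nyquist data to an equivalent circuit. One circuit consistent with the two-semicircle description, a solution resistance in series with three parallel R‖CPE elements for the anode SEI, the cathode CEI, and charge transfer, can be written down directly, as in the sketch below. The element values are illustrative only; the actual fits in this work were obtained with the NOVA software, whose circuit and parameters may differ in detail.

```python
import numpy as np

def r_parallel_cpe(R, Q, n, omega):
    """Impedance of R in parallel with a constant-phase element Z_CPE = 1/(Q*(j*w)^n)."""
    return R / (1.0 + R * Q * (1j * omega) ** n)

def cell_impedance(omega, Rs, R_sei, Q_sei, n_sei, R_cei, Q_cei, n_cei, R_ct, Q_dl, n_dl):
    """Rs + (R_SEI || CPE) + (R_CEI || CPE) + (R_ct || CPE): a common series model (sketch)."""
    return (Rs
            + r_parallel_cpe(R_sei, Q_sei, n_sei, omega)
            + r_parallel_cpe(R_cei, Q_cei, n_cei, omega)
            + r_parallel_cpe(R_ct, Q_dl, n_dl, omega))

# Frequencies spanning the measurement range (1 MHz to 0.01 Hz)
f = np.logspace(6, -2, 200)
omega = 2 * np.pi * f

# Illustrative element values, not fitted to the paper's data
Z = cell_impedance(omega, Rs=5.0,
                   R_sei=20.0, Q_sei=1e-5, n_sei=0.9,
                   R_cei=15.0, Q_cei=1e-4, n_cei=0.85,
                   R_ct=40.0,  Q_dl=1e-2,  n_dl=0.8)

# A Nyquist plot would show Z.real on the x-axis and -Z.imag on the y-axis
print(Z[:3])
```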
In particular, LiDFOB is known for stabilizing the CEI by forming LiF-rich phases while the electrolyte decomposes at the cathode side, preventing continuous parasitic reactions at the cathode (i.e., transition metal dissolution and electrolyte decomposition).51 The presence of LiDFOB stabilizes the CEI layer at the cathode surface, as shown by the decrease of the R CEI-cathode value with cycle number. On the other hand, the increase of R CEI-cathode in the cell with LiPF6 salt can be related to the immature passivation layer formed at the NMC cathode surface. The accumulation of CEI produced by the interfacial reaction at the cathode side further increases the interfacial resistance of the cathode and consumes the electrolyte.

Conclusions

The formation of mossy, dendritic lithium plating and an unstable SEI has long been known to be a significant challenge for the cycling performance of AFLMBs. Previous works showed that LiDFOB-based electrolytes enhanced the cycle life of AFLMBs. Our study adds to this body of work by highlighting the role of LiDFOB salt in SEI formation. The XPS data indicate that an inorganic-rich SEI is formed on the Li deposit in the cell with LiDFOB, which suppresses dendrite growth and promotes dense Li deposition, thus delivering a high CE. With the inorganic-rich SEI, a dense electroplated Li with a shining surface on the Cu substrate can be retained after 10 charge-discharge cycles of the Cu‖NMC cell. The cell with LiDFOB shows a relatively stable impedance over the cycles compared to the cell with LiPF6, which correlates well with the improved cycling stability and high CE. The changes in R SEI-anode and R CEI-cathode in both cells are more prominent compared to the changes in R s and R ct, indicating that the improved cycling performance is related to the formation of the passivation film on the Li and NMC622 cathode surfaces, rather than to electrolyte conductivity or electrode kinetics. The use of LiDFOB is shown to provide advantages in the passivation of plated lithium and NMC cathodes, leading to enhanced cycling performance of AFLMBs. These findings demonstrate the fundamental behaviour of LiDFOB salts in governing a stable inorganic-rich SEI and can provide insights for the design of future high-performance AFLMBs.

Fig. 1 XPS depth profile analysis of electroplated Li after the formation cycle, with etching times of 0, 6, and 12 minutes. C 1s XPS spectra of plated Li with (a) LiDFOB and (b) LiPF6 electrolyte salt. O 1s XPS spectra of plated Li with (c) LiDFOB and (d) LiPF6 electrolyte salt.
Fig. 3 (a-c) SEM images of plated Li on Cu foil after fully charged at different cycles with LiDFOB salt. (d) Illustration of dense Li plating with inorganic-rich SEI. (e-g) SEM images of plated Li on Cu foil after fully charged at different cycles with LiPF6 salt. (h) Illustration of dendritic Li plating with organic-rich SEI. All the cells are cycled from 2.8-4.5 V with a current density of 50 mA g−1.
Fig. 5 Nyquist plots of the fully charged Cu‖NMC cells with (a) LiDFOB and (b) LiPF6 salt at the 1st, 5th, 10th, and 15th cycle and the fitting data of (c) solid electrolyte interphase resistance at the anode (R SEI-anode) and (d) cathode electrolyte interphase resistance (R CEI-cathode). The fitting parameters are listed in Table S1.†
2023-08-31T15:05:33.397Z
2023-08-21T00:00:00.000
{ "year": 2023, "sha1": "068b992fdb3574666c1365858ca97684081a7a73", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/ra/d3ra03184e", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3a64800bc4dec12f6c36248f554f416ded0894f6", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
2149683
pes2o/s2orc
v3-fos-license
Applications of Electronic Health Information in Public Health: Uses, Opportunities & Barriers Electronic health information systems can reshape the practice of public health including public health surveillance, disease and injury investigation and control, decision making, quality assurance, and policy development. While these opportunities are potentially transformative, and the federal program for the Meaningful Use (MU) of electronic health records (EHRs) has included important public health components, significant barriers remain. Unlike incentives in the clinical care system, scant funding is available to public health departments to develop the necessary information infrastructure and workforce capacity to capitalize on EHRs, personal health records, or Big Data. Current EHR systems are primarily built to serve clinical systems and practice rather than being structured for public health use. In addition, there are policy issues concerning how broadly the data can be used by public health officials. As these issues are resolved and workable solutions emerge, they should yield a more efficient and effective public health system. Introduction Public health depends on a robust information base to carry out its primary tasks of assessment, policy development and assurance. 1 Reliable, timely data are needed perhaps most evidently in response to infectious disease and other acute events. Historically, public health surveillance has relied on telephone and mail, and more recently online, completion of notifiable disease reports and access to electronic laboratory reporting (ELR). However, several new types of health information technology (HIT) may play an important role in support of public health in the near future, including: electronic health records (EHRs), personal health records (PHR), health information exchange (HIE), clinical decision support (CDS), and Big Data analytics. Each technology has potential benefits, as well as significant barriers to use. This HIT is seen as central to achieving the "Triple Aim" of healthcare reform: "improving the individual experience of care; improving the health of populations; and reducing the per capita costs of care for populations. " 2 Ready access to data at the point of care supports clinical decision making that benefits the individual patient, and that same access to data is required to support agencies making decisions that have an impact on the health of populations. For example, public health could quickly assess the completeness of immunizations, understand which populations remain underimmunized, initiate action to understand the reasons, and take action targeted at clinical care systems, physicians, or patients as the need requires. 3 Similarly, the availability of electronic clinical information on cases and their management will greatly enhance the ability to improve the quality of traditional public health services. While most communicable disease services are currently provided outside of public health clinics, public health remains responsible for investigation, contact tracing and management, relying on laboratory and passive physician reporting to assure cases are referred. More efficient and more rapid transmission of medical data can lead to more rapid identification of patients, simplify identification of clusters, facilitate contact tracing and patient or professional education and other initiatives. 
The data can be used to identify gaps in quality of care, such as failure to follow recommended guidelines or inadequate follow-up and treatment. The advent of widely available electronic health information and Big Data, the massive amount of data produced each day, also provides new opportunities to understand social interactions, environmental and social determinants of health, and the impact of those environments on individuals. The powerful analytic tools that have been applied to marketing and other fields are not commonly present in public health departments, but implementing them has the potential to fundamentally change surveillance and other systems. By the same token, technology puts information into the hands of users who can use it to drive community change. Making data readily available, with appropriate protections of course, can empower stakeholders in ways that one can now only imagine. We are also at the cusp of major change as public health roles become more demanding and are being reshaped by the Affordable Care Act (ACA). As more people have insurance coverage, the need for public health to deliver clinical services will diminish substantially, with a residual function for those who remain without access to the mainstream clinical care system. What will not diminish, however, is the public health responsibility to control certain clinical conditions. This system transformation signals great opportunity for the integration of public health and health care through public health informatics. Indeed, public health informatics can support "the triple aim of achieving a public health goal faster, better, or at a lower cost by leveraging computer science, information science, or technology." 4 The Los Angeles County Department of Public Health has begun this journey and has a well-developed ELR system for notifiable diseases. However, it foresees the need to rapidly expand its capability to use emerging HIEs to access those data. Unlike the incentives available to the clinical care system for developing MU capabilities, public health has few resources to develop its information technology infrastructure and its ability to analyze and use those data efficiently and effectively. This article provides some insights into the practical impacts of the burgeoning electronic systems on public health departments. Current and Emerging Uses of Electronic Health Information Electronic health information can potentially improve many of the core functions in public health (see Figure 1). Data Collection Public health agencies monitor the health status of populations, collecting and analyzing data on morbidity, mortality, and the predictors of health status, such as socioeconomic status and educational level. There is a particular focus on diseases of public health importance, the needs of vulnerable populations, and health disparities. An EHR provides both episodic snapshots and a longitudinal view of a patient's health related to clinical care. A PHR provides a powerful tool for gathering information about clinical visits, as providers and patients use the application to access, manage, and share health information. 5 A survey of PHR users identified a willingness to share their data for care improvement and public health purposes. 6 The use of EHR data to support public health surveillance and epidemiology has been demonstrated for a wide range of conditions, including respiratory diseases, cancer, and even social determinants for disease.
7,8,9 The use of data mining and analytic techniques on EHR data has the potential to identify new risk factors and target interventions at the individual level. 10,11 For example, a health system's EHR was used to identify smokers for tobacco dependence interventions. 12 EHRs can provide data on subpopulations, geographic areas and health conditions that are typically underrepresented in public health surveillance and large-scale surveys, most often conducted at the federal or state level. For example EHR data from a county clinic serving children in foster care was used to describe the health status of this vulnerable population. 13 The NYC Macroscope project launched "a population health surveillance system that uses electronic health records (EHRs) to track conditions managed by primary care practices that are important to public health, " whereby they monitor the community prevalence of "chronic conditions, such as obesity, diabetes and hypertension, as well as smoking rates and flu vaccine uptake. " 14 The combination of EHR data and geographic information system (GIS) technology has the potential to provide for selective sampling of demographic groups or geographic communities and can be used to understand patterns of illness and delivery of care at the community level. 15,16 Cancer epidemiology has prioritized the use of Big Data, and genomics has used it to identify genetic risk factors for common diseases and mutations that confer a high risk for rare conditions. 17,18 Big Data facilitates more drilling down (viewing more detail), drilling up (viewing data in aggregate), and slicing-and-dicing (viewing specific combinations of data variables) than may be reasonable with traditional data collection and desktop-based analysis. 19 Many public health departments are pursuing a health-in-all-policies approach to assure that health is a consideration in all major policy decisions. These might include developing new housing, factories, transportation systems, recreation facilities, or educational initiatives to increase graduation rates. Health impact assessments play a critical role in informing decision makers about how their decisions can be used to maximize health and mitigate harms. 20 Using GIS to look at both Big Data and EHR data may support the detailed knowledge of risk groups, behaviors, social and physical environments needed for both epidemiology and comprehensive policy evaluation such as health impact assessments. Analysis, Diagnosis, and Investigation of Public Health Concerns Public health authorities are required to drill down for individual data and risk factors in order to diagnose, investigate and control disease and health hazards in the community, including disease that originates with social-, environmental-, occupational-and communicable-disease exposures. The community relies on public health to control exposure across jurisdictions and sectors, which may involve closing a school or business, isolating infectious individuals, or limiting exposures to health hazards. For example, a clinician or laboratory reports a case of active tuberculosis to the local health department. In response, public health staff performs chart reviews and patient interviews to identify exposed community members and immediately ensure appropriate precautions. For the next year they ensure that all affected patients receive appropriate care and case management. 
They may provide direct clinical services, expert consultation for drug-resistant and other challenging cases, or they may provide oversight of private sector care, to ensure an appropriate treatment regimen and patient adherence. This process is resource intensive and time-consuming for both the public health department and clinicians, which can lead to a suboptimal response and suboptimal public health control measures. Access to EHR data can improve the efficiency of both the investigation and the quality assurance process, because health department staff no longer must travel to multiple sites, manually abstract data from multiple electronic medical records (EMRs), or reenter abstracted data into an electronic public health information system. EHR data may offer more longitudinal, complete, and accurate information than a one-time interview with a patient. Data obtained from a PHR may differ in content or time frame, and a PHR might also offer information on patients who have not had a clinical visit. Bioterrorism events and outbreaks such as bacterial meningitis and pandemic influenza demand a rapid public health response that only timely access to clinical data can guide. EHR add-on technologies have been developed specifically to support real-time, automated reporting of notifiable diseases, influenza-like illness, and diabetes prevalence to health departments. 21 The efficiency of the public health response can also be improved when clinicians receive public health information in a timely way. Efforts to support bidirectional communication that integrates public health information and interventions at the point of care have been encouraged and might include information about patients followed by the public health agency (PHA), communicable disease outbreaks and control, environmental exposures, and medication and product recalls. 22 Implementation of Public Health Strategies Several pilot studies have demonstrated the promise of bidirectional HIE to support efficient surveillance and public health interventions, including linking patients to care and assuring the quality of clinical care. The Louisiana State University Hospital System and the Louisiana Office of Public Health implemented a bidirectional data exchange to link HIV-positive patients not currently receiving HIV treatment to appropriate medical care. 23 By matching hospital registration data with the local health department's HIV/AIDS registry, the authors were able to alert physicians that a presenting patient was not currently receiving HIV treatment. The median time between the patient's last medical visit and the alert was 20 months. More than 70% of alerts issued were followed by a documented action by the provider, helping to assure appropriate patient care. [Figure 1 (table): public health functions and opportunities with health IT. Entries recoverable from the extracted text include the following. Collection of data on individual or community health status: EHRs provide an episodic and longitudinal view of an individual's health;5 EHRs can monitor community prevalence of chronic conditions;14 EHRs can describe vulnerable populations' health status;13 EHR add-ons can supply bioterrorism monitoring through real-time, automated reporting.21 Analysis, diagnosis, and investigation of public health concerns: bidirectional alert systems that inform providers (entry truncated in the extracted text).27,28 Implementation of public health strategies: bidirectional support through combined registry and EHR data can alert clinicians when a patient isn't being treated for a condition;23 GIS-analyzed Big Data and EHRs can inform Health Impact Assessments.20 Other cells of the table are truncated in the extracted text.]
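As a rough illustration of the kind of registry matching behind the Louisiana alerts described above, the sketch below performs a simple deterministic match between hospital registration records and an HIV care registry and flags patients who appear to be out of care. The field names, matching rule, threshold, and data are hypothetical and are not the Louisiana system's actual implementation; production systems would use probabilistic linkage and proper identifiers.

```python
from datetime import date

# Hypothetical records, not real patient data.
hospital_registrations = [
    {"name": "Jane Q. Doe", "dob": date(1980, 4, 2), "registered": date(2013, 9, 1)},
    {"name": "John Smith",  "dob": date(1975, 1, 15), "registered": date(2013, 9, 1)},
]
hiv_registry = {
    ("jane q. doe", date(1980, 4, 2)): {"last_hiv_visit": date(2011, 12, 20)},
}

def key(name, dob):
    """Deterministic match key: normalized lower-cased name plus date of birth."""
    return (" ".join(name.lower().split()), dob)

def alerts(registrations, registry, max_gap_days=365):
    """Flag registered patients whose last documented HIV care visit is older than max_gap_days."""
    out = []
    for reg in registrations:
        match = registry.get(key(reg["name"], reg["dob"]))
        if match is None:
            continue
        gap = (reg["registered"] - match["last_hiv_visit"]).days
        if gap > max_gap_days:
            out.append((reg["name"], gap // 30))  # rough number of months out of care
    return out

for name, months in alerts(hospital_registrations, hiv_registry):
    print(f"Alert: {name} appears to be out of HIV care for about {months} months.")
```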
The Louisiana example of bidirectional communication provided person-specific, context-sensitive knowledge that supported both health-related decision making and action by healthcare providers, assuring high-quality clinical care and an effective and efficient public health intervention using CDS. More than merely providing information, CDS tools and processes can include automated reminders and alerts, condition-specific order sets, data reports and visualizations, clinical guidelines, and evidence-based references. 24 CDS is generally available as an embedded function of an EHR at the point of care; less commonly, CDS is provided through an EHR as an HIE service or to an individual via a PHR. CDS supports quality assurance efforts in that it facilitates high-quality clinical care that helps ensure a timely and effective public health response benefitting both the individual patient and the community. 25,26 The Institute for Family Health in New York City used advisory statements from the local health department to create alerts within their EHR system, prompting appropriate laboratory testing during foodborne disease outbreaks and appropriate testing and treatment during a local Legionella outbreak. 27,28 The alert was triggered by symptoms such as cough, chest pain, fever, chest congestion, or cold symptoms, and the management guidance included information on diagnosis, testing, and treatment. A prepared order set "included orders for sputum culture, Legionella urine antigen, chest x-ray, and complete blood count, as well as outpatient antibiotic prescriptions appropriate for community acquired pneumonia." Bidirectional communication via a PHR or a SMART ("Substitutable Medical Applications, Reusable Technologies") application for a mobile device may also offer CDS to guide individual patient action or provide data for population surveillance and investigation via HIE services. 29 [Figure 2 (table): Meaningful Use stages and their public health components. Entries recoverable from the extracted text include the following. Data submission to support traditional public health functions (i.e., reportable laboratory results, syndromic surveillance, immunization information systems, cancer registries, and other specialized registries). Advance Clinical Processes, with components including more rigorous HIE, electronic transmission of patient care summaries across multiple settings, more patient-controlled data, and the collection of clinical data of possible public health interest; HIE infrastructure standardizes data exchanged through varied PHA IT systems and clinical EHR models. Improved Outcomes, with components including decision support for national high-priority conditions, patient access to self-management tools, access to comprehensive patient data through patient-centered HIE, improving population health, submission of HAI reports, and submission of adverse event reports. Other cells of the table are truncated in the extracted text.]
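A hypothetical sketch of the kind of symptom-triggered alert and order-set logic described above for the Legionella outbreak follows; the rule structure and thresholds are illustrative assumptions, not the Institute for Family Health's actual EHR configuration, and real CDS would run inside the EHR against coded clinical data.

```python
# Hypothetical, simplified symptom-triggered CDS rule (illustration only).
TRIGGER_SYMPTOMS = {"cough", "chest pain", "fever", "chest congestion", "cold symptoms"}

ORDER_SET = [
    "sputum culture",
    "Legionella urine antigen",
    "chest x-ray",
    "complete blood count",
    "outpatient antibiotics appropriate for community-acquired pneumonia",
]

def legionella_alert(documented_symptoms):
    """Return an advisory with a suggested order set if any trigger symptom is documented."""
    hits = TRIGGER_SYMPTOMS & {s.lower() for s in documented_symptoms}
    if not hits:
        return None
    return {
        "advisory": "Local Legionella cluster: consider testing and treatment guidance.",
        "triggered_by": sorted(hits),
        "suggested_orders": ORDER_SET,
    }

alert = legionella_alert(["Fever", "Cough"])
if alert:
    print(alert["advisory"])
    print("Triggered by:", ", ".join(alert["triggered_by"]))
```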
Opportunities: Improvements in Infrastructure The application of electronic health information in public health is supporting the increased adoption of EHRs by the medical community, the inclusion of required public health reporting within the Centers for Medicare and Medicaid Services (CMS) incentives for EHR adoption, and the national infrastructure for HIE. CMS incentives for demonstrating the MU of certified EHR technology have increased the adoption of EHRs by healthcare providers and hospitals. 31 The first two stages of MU have both required and optional EHR functionality relevant to public health. The proposed third stage of MU recommends submission of Healthcare Associated Infection (HAI) reports, adverse event reports, and the ability to receive person-specific recommendations from an immunization information system. (See Figure 2.) HIE refers to the electronic movement of health-related information, including MU data, among organizations according to nationally recognized standards. 32 HIE may positively support public health data exchange with clinical health partners in a number of ways, including the following: reducing the number of system interfaces required to exchange data; providing automated routing of relevant electronic health information to public health; providing data standardization; providing record-linking services and supporting simultaneous queries across many care settings; supporting public health reminders and alerts; and, where agreed to, providing centralized data storage for efficient analysis. HIE infrastructure serves as a hub, through which hospitals, ambulatory practices, laboratories, pharmacies, and other clinical entities exchange electronic data among their information systems. Connecting to an HIE may reduce the technical effort for a PHA versus attempting to connect and maintain direct interfaces with each clinical entity. HIE infrastructure can potentially monitor data transactions for specific laboratory or physician-based diagnoses, whether mandated or not mandated by statute, and can route appropriate health information to the PHA; this may preclude having the PHA work directly with each clinical entity or their technology vendor to achieve the same end. 33 HIE infrastructure may provide services to standardize data exchanged across disparate IT systems, potentially reducing the data mapping effort of the PHA to create comparable population sets. A common feature of HIE service is the provision of a portal, which allows authorized users to search for health information about an individual across multiple healthcare settings, which could be used to support public health investigations. HIE infrastructure may serve as a mechanism for PHA communications back to clinical health; for example, in connecting to a provider's EHR to identify individuals with notifiable diseases that have been lost to follow up, or to provide general information on epidemiologic trends in the community. 33 Finally, HIE services may include the aggregation of data of public health interest into a centralized data warehouse to facilitate analysis. Barriers and Limitations Barriers and limitations are detailed below, and illustrated in Figure 3. Limitations of EHR and PHR Data While EHRs have demonstrated potential to support public health practice, there are current limitations to more widespread public health use of EHR data. 
With respect to data availability, EHRs are generally designed around the provider-patient clinical encounter, and often do not include psychosocial, behavioral, and environmental variables of interest to public health. Some projects have attempted to capture such information within the clinical workflow, but outside of the EHR context. 34 Attempting to incorporate these variables into EHRs can require significant time and resource investment by PHAs to engage with EHR vendors and may also add additional burden to clinician workflow. 35 [Figure 3: barriers and limitations. Entries recoverable from the extracted text include the following. Limitations of EHR and PHR data: questionable reliability and validity of EHR data for public health; variables of interest to public health may be missing (e.g., psychosocial, behavioral, and environmental factors). Health information organizations (HIOs): questionable viability and sustainability of HIOs; PHA participation in HIOs entails risk. Infrastructure funding: entries truncated in the extracted text. Policies/legislation: clinical partners may misinterpret the Health Insurance Portability and Accountability Act (HIPAA), and be wary of data exchange with PHAs.] With respect to data quality, the reliability and validity of EHR data for public health use may not be adequate, due to the use of different data models across EHRs, or to variation in data collection across practice settings, or to the use of free-text rather than structured data collection; however, there are demonstration projects currently underway to assess some of these issues. 36 With respect to data exchange with EHRs, significant barriers include the following: the inconsistent use of available data and messaging standards, which may require ongoing encouragement of public health's clinical partners; the establishment and ongoing monitoring of interfaces between clinical and public health; and policy or performance barriers to a PHA conducting direct queries or analyses. Similarly, PHRs may also be difficult to use as a public health data source. Current evidence is limited with regard to the effectiveness of PHRs as a surveillance tool. 37 In addition, adoption of PHRs has been limited, their respective functionalities vary, uncertainty exists regarding market demand and who will support the cost of PHRs, and the lack of standards for data collection and biophysical measurements may make PHRs prone to data quality issues. 38,39 Health Information Organizations While HIE shows promise as a service, there are significant concerns with regard to the financial viability and sustainability of the health information organizations (HIOs) that provide the oversight and governance for this service. 40 In a survey to assess the state of HIE activity in the United States, only 75 of 197 potential HIOs were operational, 50 of those 75 HIOs did not meet criteria for financial viability, and only 13 of the HIOs surveyed met criteria for the first stage of MU. 41 While information exchange with clinical care partners is conceptually attractive, PHA participation in an HIO entails significant risk where HIE is not already firmly entrenched as a sustainable, private community enterprise. Alternatively, where not supported by the market, HIE activity would have to be treated as a "public good," with support provided by government and/or payers.
42 If a PHA undertakes implementation of HIE activity, there are a number of prohibitive factors to consider: local PHA expertise in implementing these technologies should not be assumed; budgets must remain flexible to account for undiscovered work that is inevitably revealed during implementation; leadership should remain constant to ensure consistent vision; and contingencies must be in place to avoid delays that may undermine confidence, but delays should nevertheless be expected. 43 Infrastructure Funding The status of public-health information-technology infrastructure at health departments across the country is mixed. Baker and Koplan cited critical gaps in basic information technology services (such as fax, e-mail, and internet connectivity), although these gaps have been closing over the past decade. 44 However, the software applications that are used to support core public health functions present a more variegated picture, as some PHAs have modern systems, while others maintain outdated legacy systems, and there are still others for whom these applications are "virtually nonexistent. " 45 Seven percent of local health departments have implemented HIE and half have "no activity" in the area. 46 Even within health departments, there is significant variation. Categorically funded public health programs have historically been prohibited from developing information systems that might also support the needs of other programs, leading to the development of information technology silos and information process silos (i.e., different programs in a PHA having parallel interactions with external data partners, such as hospitals and laboratories, leading to redundant coordination and resource investment). 47,4 To support broad use of electronic health information across all programs in a public health department, shared infrastructure should be established for the receipt, processing, and analysis of data; development of such infrastructure will require separate dedicated funding and/or increased flexibility in the use of categorical funds. 48 Although eligible hospitals and providers are receiving incentive funding for MU of certified EHR technology -including the submission of relevant data to PHAs, dedicated funding for public health to support ongoing receipt and management of MU data has been spare. Recently, traditional grant funding sources, including the CDC Cooperative Agreements for Epidemiology and Laboratory Capacity and Public Health Emergency Preparedness, have allowed PHAs more latitude to establish infrastructure that specifically supports MU engagement with hospitals and providers, although without providing specific guidance on specific, sustainable approaches. 49,50,51 The National Association of County and City Health Officials have called for additional funds and The Joint Public Health Informatics Taskforce has called for alterations to cooperative agreements to fund necessary changes. 52,53 Public health will need to advocate not only for building capacity to connect to clinical health, but to connect more efficiently through centralized HIE. 
While more than $547 million was initially awarded to states to support HIE activity through the State Health Information Exchange Cooperative Agreement Program, PHAs were generally not funded through these awards to connect into HIE infrastructure, nor was public health engagement of primary interest to some HIE efforts: in a survey of 27 of these cooperative programs, less than half supported public health use cases for HIE, such as the reporting of notifiable conditions or immunization data. 54 It is critical for HIOs to be aware of PHAs as data exchange partners, and for both to pursue mutually beneficial funding opportunities that will sustain these relationships. Workforce Capacity Public health departments have variable information technology, informatics, and data analysis expertise with which to receive, store, manage, and utilize new electronic health information sources. In an informatics needs assessment by the National Association of County and City Health Officials (NACCHO), barriers to implementing information systems (after insufficient funding and lack of time and resources) included lack of project management staff and lack of persons with technical skill to support systems. 46 Even with information systems in place, local health departments may not have the appropriate staff to manipulate and understand the data: the percentages of local health departments that had epidemiologists, public health information specialists, and public health informatics specialists were 28%, 21%, and 13%, respectively. 55 This is further compounded by a more general loss of approximately 15% of the local public health workforce from 2008 to 2011. 56 For the moment, public health departments have not been mandated to receive MU data from hospitals and providers; however, if public health departments are compelled, or choose, to receive MU and other EHR data, they will likely need to add staff to conduct business analysis and project management to support the development of new systems and interfaces between public health and clinical health, as well as data analysts and epidemiology staff to prepare and analyze this new or augmented influx of data. Policies and Legislation In engagements between clinical and public health, the Health Insurance Portability and Accountability Act (HIPAA) is an oft-cited barrier to data exchange; in fact, according to the HIPAA Privacy Rule, covered entities may disclose protected health information for public health use, including reporting, surveillance, investigations, and interventions. 57 Public health will need to continue to educate its clinical partners on the permissibility of disclosures to public health, in order to support necessary and novel secondary uses of electronic health information. State and local statutes have mandated the reporting of selected infectious and noncommunicable diseases. However, many other conditions of interest to public health, including indicators of chronic disease (or even negative results from tests for reportable diseases), do not enjoy the same sanctions. As new sources of electronic health data become available, corresponding legislation will be required to support their use for public health purposes. For example, in 2005, the New York City Board of Health mandated the laboratory reporting of hemoglobin A1C test results to track blood glucose levels in diabetes patients; while preliminary feedback has been positive, it has not been without controversy, and may provide important lessons for other health agencies.
58 Conclusion Ultimately, the broad vision is that electronic health information from EHRs and PHRs will be made available to PHAs through direct connections or via consolidated HIE. The latter will leverage Big Data science to conduct surveillance and make inferences about health determinants, implementing traditional population-level interventions and individual clinical interventions via CDS technologies. The potential of electronic HIE implies timely availability and improved access to data, compared to traditional paper-based manual processes. The availability of enhanced technology implies more timely analysis and opportunities for innovation. In order to realize this vision, local health departments require additional funding and technical support from national bodies for infrastructure development. Additional personnel are needed for policy development and advocacy for the needs of public health departments in local HIE. Technical personnel must engage with both internal and external partners regarding MU reporting and must navigate the complex field of changing requirements and standards for health information technology. Appropriate business analysis and project management staff will be required to ensure that all systems are designed to help users work more efficiently, rather than simply automating and reinforcing redundant processes. Data analysis staff must then appropriately interpret these data and present the data in a way that makes decision making clear and actionable. Public health leaders at the state and local levels need an increased commitment to public health informatics and the development of sustainable centralized HIE. They must develop their own strategic plan for sustainability that includes the public health workforce and technical expertise. Furthermore, leaders will need to offer ongoing education to all parties involved in HIE on the role of HIPAA for health departments. Surveillance laws must be addressed to include chronic diseases and other diseases of public health importance to protect access to electronic clinical data for the benefit of the public's health. Just as electronic health information will transform the day-today practice of medicine, it will transform the practice of public health. Together with the changes brought about by health reform, it will facilitate the development of PHAs into knowledge organizations. The transition will require investment in new technology, analytic and application techniques, hiring and retraining of staff, but -most importantly -creative leadership to capitalize on these new opportunities.
2016-08-09T08:50:54.084Z
2013-10-28T00:00:00.000
{ "year": 2013, "sha1": "c36c55949ebbe0afba459b01087d22a207bdc9ac", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.13063/2327-9214.1019", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c36c55949ebbe0afba459b01087d22a207bdc9ac", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
258492995
pes2o/s2orc
v3-fos-license
A Comparison of Podoplanin Expression in Oral Leukoplakia and Oral Squamous Cell Carcinoma: An Immunohistochemical Study Introduction: Oral squamous cell carcinoma (OSCC) accounts for about 90% to 95% of all malignancies of the oral cavity.The majority of OSCCs are preceded by oral potentially malignant disorders (OPMDs). Podoplanin (PDPN) is a mucin-like small transmembrane glycoprotein. Alterations in PDPN immunoexpression have been reported in OPMDs and OSCCs. Objective: The objectives of this study were to evaluate the role of PDPN immunoexpression in oral leukoplakia (OL) and different histological grades of OSCC and to assess the role of PDPN as a potential biomarker for predicting the risk of malignant transformation. Materials and methodology: Immunohistochemical analysis for PDPN was performed in 45 histologically confirmed cases of formalin-fixed, paraffin-embedded specimens of different grades of OSCCs and 15 cases of OLs with 15 cases of the normal oral mucosa (NOM) as controls. The expression and distribution of this marker were analyzed in these lesions. Results: The immunoexpression of PDPN showed a significant increase in the expression of the percentage of positive cells, staining intensity, location of staining in the epithelium, tumor islands, and within the cells, as well as the mean lymphatic micro vessel density between NOMs, OLs, and different grades of OSCCs. Conclusion: Upregulation of PDPN can be related to the malignant transformation of OLs and biological aggressiveness of OSCCs. The enhanced immunoexpression of PDPN signifies that this immunomarker can have a role in tumor cell differentiation and the neoplastic progression of OSCCs. Increased density of lymphatic vessels suggested an important role of lymphangiogenesis in tumor progression and also as a prognostic factor for lymph nodal metastasis. Introduction Oral squamous cell carcinoma (OSCC) accounts for about 90-95% of all malignancies of the oral cavity [1]. The majority of OSCCs are preceded by visible clinical alterations in the oral mucosa which include oral potentially malignant disorders (OPMDs) such as oral leukoplakia (OL), erythroplakia, and oral submucous fibrosis. The malignant transformation rate of OPMDs to OSCCs is about 6% to 36% [2]. For the past few decades, the mortality rate of OSCC is relatively high with a five-year survival rate of 50% in spite of advancements in diagnosis and treatment modalities [3]. Thus, early diagnosis and appropriate treatment of OPMDs may facilitate the prevention of their malignant transformation into OSCCs [4]. Podoplanin (PDPN) is a mucin-like transmembrane glycoprotein [5]. It is generally expressed in the kidneys, type I alveolar cells of the lung, a few types of neurons, mesothelial cells, osteoblasts, choroid plexuses, glial cells, lymphatic endothelial cells, and various types of fibroblasts [6]. In the oral cavity, PDPN is weakly expressed in the basal layers of the epithelium and strongly expressed in the myoepithelial cells of salivary glands [7]. The physiological role of PDPN includes the normal development of the lymphatic system, lungs, and heart [5]. It is also involved in the morphogenesis of odontoblasts and helps in maintaining the shape of myoepithelial cell processes [7]. It has a role in tumourigenesis, cell motility, tumor invasion, and metastasis [5]. 
PDPN expression is upregulated in various malignancies such as squamous cell carcinomas of the oral cavity, larynx, skin, lung, esophagus, and cervix, colorectal adenocarcinomas, and testicular germ cell tumors [8]. Carcinogenesis in the oral cavity is a complex mechanism that alters various genes. Immunohistochemical studies allow the identification of these molecular changes as "tumor markers" and contribute to an improved capacity in diagnosis and evaluation of prognosis. Thus, there is a need for the identification of biomarkers that detect changes at a molecular level and predict the malignant transformation of OPMDs such as OLs to OSCCs [9]. Hence, this study was performed to evaluate the role of PDPN as a potential biomarker in predicting the malignant potential of OLs as well as the tumor progression of OSCCs. Materials And Methods The tissue blocks were sectioned at three-micron thickness and placed on 3-amino-propyltriethoxysilane-coated slides. Following deparaffinization by heating and treatment with three changes of xylene, the sections were kept in decreasing grades of isopropyl alcohol for rehydration and then immersed in water. The tissue sections were kept in an antigen retrieval solution (Tris-EDTA buffer) and treated at 95°C for three cycles. The antigen-retrieved sections were allowed to cool for 30 minutes and then washed in distilled water followed by rinsing in phosphate-buffered saline. Further, the slides were treated with 3% hydrogen peroxide for 10 minutes to block endogenous peroxidase activity. The tissue sections were then incubated with a PDPN primary antibody (Pathnsitu Biotechnologies Private Limited, Hyderabad, mouse monoclonal antibody, Clone PM231) for 30 minutes at room temperature, and then the slides were rinsed with a wash buffer. The tissue sections were then treated with a secondary antibody (horseradish peroxidase) at room temperature for 10 minutes. The visualization of the immunohistochemical reaction was performed with a freshly prepared substrate chromogen solution (diaminobenzidine). The sections were then counterstained with Harris hematoxylin and mounted using Dibutyl Phthalate Xylene. Evaluation of PDPN-positive cells was performed using a compound light microscope at 10x, 20x, and 40x magnifications by two independent observers. The internal positive control was lymphatic endothelium, which demonstrated strong PDPN positivity. For each case, five fields of the most representative areas were selected. The immunoreactive score (IRS) given by Remmele and Stegner et al. (Table 1) was calculated for each case [10]. Results In our study, the mean percentages of PDPN-positive cells in NOM, OL, WDOSCC, MDOSCC, and PDOSCC were 1.3, 28.9, 51.96, 65.75, and 74.18, respectively. There was a statistically significant increase in the mean percentage of PDPN-positive cells (p=0.003) from NOM to OL to different grades of OSCC (Figure 1). In our study, the intensity of PDPN staining was noted. The results showed a statistically significant increase in the intensity of staining from NOM to OL to different grades of OSCC (p=0.042). IRS classification was compared between all the groups by the Chi-square test. Our results showed a statistically significant increase in PDPN IRS from NOM to OL to different grades of OSCC (p=0.029) (Table 3).
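Table 1, which defines the immunoreactive score (IRS) of Remmele and Stegner referred to above, is not reproduced in the extracted text. The sketch below assumes the commonly used formulation in which a staining-intensity score (0-3) is multiplied by a positive-cell-percentage score (0-4), giving a range of 0-12; the percentage cut-offs are assumptions and may not match this study's exact grading.

```python
# Hedged sketch of a Remmele-Stegner-style immunoreactive score (IRS).
# Assumed formulation: IRS = intensity score (0-3) x percentage score (0-4), range 0-12.

def percentage_score(percent_positive):
    """Assumed cut-offs: 0 = none, 1 = <10%, 2 = 10-50%, 3 = 51-80%, 4 = >80%."""
    if percent_positive <= 0:
        return 0
    if percent_positive < 10:
        return 1
    if percent_positive <= 50:
        return 2
    if percent_positive <= 80:
        return 3
    return 4

def irs(intensity_score, percent_positive):
    assert 0 <= intensity_score <= 3
    return intensity_score * percentage_score(percent_positive)

# Example: moderate staining intensity (2) in 65% of cells gives IRS = 2 x 3 = 6.
print(irs(2, 65))
```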
The location of PDPN immunoexpression within the epithelium was compared in NOM and OL. In NOM, PDPN expression was detected only in the basal cells of four positive cases. In OL, out of nine positive cases, seven cases showed expression in the basal layer and the remaining two cases in the basal and suprabasal layers. There was a statistically significant increase (p=0.028) in PDPN immunoexpression within the epithelium from NOM to OL (Table 4) (Figures 2B, 3B). When there was a shift in histological grades of OSCC from WDOSCC to MDOSCC to PDOSCC, the location of PDPN also exhibited a progressive shift from membranous to a combination of membranous and cytoplasmic expression. There was a statistically significant difference (p=0.041) in the location of PDPN immunoexpression within the cells from NOM to OL to different grades of OSCC. In our study, the mean lymphatic micro vessel density (MLVD) was calculated by counting the number of PDPN-positive lymphatic vessels in five highly vascularized fields (under 20x magnification) and then calculating the mean value (Figure 9). Figure 10 shows the MLVD between groups I, II, IIIa, IIIb, and IIIc. The inter-observer agreement was assessed using the Kappa statistical test, and there was almost perfect agreement between observers 1 and 2, with a Kappa value of 0.966 (Table 5). Discussion In the Southeast Asian population, OSCC is considered a foremost concern due to common oral habits such as smoking, alcohol intake, betel quid, areca nut, and tobacco chewing [11,12]. A predisposition to genetic changes such as mutations has been found to be another significant risk factor in the occurrence of OSCC [11]. It is striking that most OSCCs arise from OPMDs such as OL and erythroplakia. The presence of dysplasia is more significant in predicting malignant transformation than the clinical appearance of OL and erythroplakia [13]. Hence, the histopathological assessment of epithelial dysplasia is considered the gold standard for assessing the malignant transformation of OPMDs [14]. Previous studies have reported that the malignant transformation rate of OL ranges from 0.13 to 34.0% [15]. Thus, novel molecular markers which help in predicting the malignant potential of OLs and the tumor aggressiveness of OSCC need to be studied extensively in the search for potential newer therapies for OSCC and also for its precursors [16]. PDPN is a cell surface protein of about 45 kilodaltons which can be induced in basal keratinocytes and dermal fibroblasts during skin remodeling [17]. It is specifically expressed in lymphatic endothelium but not in the endothelium of blood vessels [5]. PDPN expression is increased in many neoplasms like angiosarcoma, lymphangioma, and squamous cell carcinomas of the oral cavity, skin, esophagus, lung, larynx, and cervix [8]. PDPN expression in early invasive OSCCs is heterogeneous and fragmented, often confined to the invasive tumor front areas [8]. In malignant cells, PDPN is a component of invadopodia, which are actin-rich plasma membrane projections with proteolytic activity. They mediate the degradation of extracellular matrix components, thereby facilitating the invasion of tumor cells through the epithelial basement membrane [6]. PDPN induces both plasma membrane extension and actin cytoskeleton rearrangement, which favors tumor cell motility [7]. In our study, the mean percentage of PDPN-positive cells showed a statistically significant increase from NOM to OL to different grades of OSCC.
Our findings are in accordance with the studies of Logeswari et al. [18,19]. Thus, PDPN may be regarded as a predictor marker in assessing the malignant transformation of OPMDs like OL and also for the prognosis of OSCC. In contrast, a study by Aiswarya et al. (2019) showed a significant decrease in the percentage of positive cells from WDOSCC to PDOSCC [20]. It was suggested that the induction of PDPN expression in OSCC mediates a pathway mechanism that leads to collective and directional cell migration. In addition, it induces numerous adjustments of intracellular signaling pathways which cause modulation of Rho family GTPase activities, phosphorylation of ezrin, radixin, and moesin proteins, and rearrangement of the actin components, thereby enhancing cell invasion and migration. Therefore, it was concluded that PDPN can have a role in the tumor cell differentiation and neoplastic progression of OSCC [21]. In our study, a statistically significant increase in the intensity of PDPN staining from NOM to OL to different grades of OSCC (p=0.042) was noted. Our results are similar to the findings of Raluca et al. (2015), wherein they noted severe intensity in MDOSCC and PDOSCC compared to WDOSCC. They observed a progressive shift in staining intensity from mild to severe patterns from WDOSCC to PDOSCC [22]. In contrast to our findings, the study results of Aiswarya et al. (2019) showed intense expression of PDPN in WDOSCC, whereas mild intensity was noted in MDOSCC cases [20]. [19]. Thus, PDPN expression extending to suprabasal layers in some OL cases may represent the upward clonal expansion of stem cells during carcinogenesis. OL cases showing such clonal expansion may imply a significantly higher risk of malignant transformation [23]. In the present study, within different grades of OSCC, the location of PDPN expression was predominantly evident in the entire tumor islands as the grade of OSCC increased. Our observations are similar to the study results of Parhar et al. (2015), wherein they observed a diffuse pattern (both center and peripheral positive cells) of PDPN expression which was prominent in PDOSCC and least in WDOSCC [23]. In contrast to our findings, the study results of Prasad et al. (2015) showed that in WDOSCC, PDPN-positive expression was restricted to the periphery of the tumor nests, whereas the cells in the center of tumor nests showed negativity. Thus, the PDPN expression at the periphery of tumor nests signifies its high proliferative capacity, and central cells suggest terminal differentiation of tumor cells [24]. The location of PDPN expression within the cells was observed and compared. When there was a shift in histological grades of OSCC from WDOSCC to MDOSCC to PDOSCC, the location of PDPN immunoexpression also exhibited a progressive switch from a membrane to a combination of membranous and cytoplasmic expression. Thus, a statistically significant association between the location of PDPN within the cells and different grades of OSCC was noted. In accordance with our findings, the study of Raluca et al. (2015) showed a PDPN expression in both membrane and cytoplasm of tumor cells in different grades of OSCC [22]. In contrast, Laura et al. (2014) noted that the expression of PDPN was limited to membranous staining in cases of poorly differentiated OSCC [25]. Previous studies have suggested that tumor-associated lymphatic vessel formation plays a significant role in tumor progression and metastasis of various malignancies including OSCC [26]. 
In our study, a statistically significant increase of MLVD from NOM to OL to different grades of OSCC was observed. Within different grades of OSCC, there was a significant increase in MLVD from WDOSCC to PDOSCC. Our results are similar to the study of Aiswarya et al. (2019), in which MLVD was highest in OL and OSCC compared to NOM. Thus, an increase in MLVD in the tumor stroma has been shown to be associated with the metastasis of lymph nodes [20]. In contrast, the study results of Parhar et al. (2015) showed the highest MLVD in MDOSCC cases. Thus, an increase in MLVD in OSCC represents tumoral lymphangiogenesis which may act as an indicator of lymph nodal metastasis in OSCC patients [23]. The majority of OSCCs are diagnosed in the advanced stages. Even after extensive research works using newer diagnostic and prognostic markers, the five-year survival rate in OSCC patients is reported to be very low. This necessitates us to conduct research on new diagnostic and prognostic markers which helps in predicting the malignant potential of OPMDs like OLs as well as to predict the chance of the occurrence of OSCC and its progression, thereby improving the survival rates of patients with OSCC. One such marker is PDPN. In the present study, the immunoexpression of PDPN was statistically significant in relation to the percentage of positive cells, staining intensity, IRS classification, and location of staining within the epithelium, tumor islands, and cells, as well as MLVD among all the groups. There are certain limitations in our study like a smaller sample size and the lack of correlation between the risk factors such as smoking, tobacco chewing, and alcohol intake in relation to the occurrence and progression of OL and OSCC. Conclusions This study demonstrated that the upregulation of PDPN expression can be related to the malignant transformation of oral leukoplakias and the biological aggressiveness of OSCCs. From WDOSCC to PDOSCC, there was an increased expression of PDPN with respect to disease severity. The enhanced immunoexpression of PDPN signifies that this immunomarker can have a role in the tumor cell differentiation and neoplastic progression of OSCCs. In OPMDs like OL, showing increased PDPN expression has been found to be related to a high risk of progression to invasive oral cancer, thereby suggesting that it might act as a powerful tumor marker to predict the risk of malignant transformation in OL patients. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Ethics Committee, Osmania Medical College, Koti, Hyderabad issued approval ECR/300/Inst/AP/2013/RR-16(GDCH-IEC/PG/1922). After clearing all queries raised in the meeting, the committee has granted the ethical clearance for the study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-05-05T15:05:32.638Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "8a35d1482915664bfbc41570367c479865dda3b8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7759/cureus.38467", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0b9dbf93a371c485fed652acfd4d5ee5ae81f7f6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
18961720
pes2o/s2orc
v3-fos-license
Vitrectomy with internal limiting membrane peeling vs no peeling for Macular Hole-induced Retinal Detachment (MHRD): a meta-analysis Background We conducted a meta-analysis of published studies to assess existing evidence about the efficacy and safety of vitrectomy with ILM peeling vs. that of vitrectomy with no ILM peeling for macular hole-induced retinal detachment. Methods Databases, including Pubmed, Cochrane Library, Ovid, Web of Science, Wanfang and CNKI, were searched to identify studies comparing outcomes following vitrectomy with ILM peeling and that with no ILM peeling for macular hole-induced retinal detachment. The meta-analysis was performed with RevMan 5.1. Results Six comparative studies comprising 180 eyes were identified. The rates of retinal reattachment (Odds ratio (OR) = 3.03, 95 % Confidence interval (CI): 1.35 to 6.78; P = 0.007) and macular hole closure (OR = 6.74, 95 % CI: 3.26 to 13.93; P < 0.001) after initial surgery were higher, and the rate of recurrent retinal detachment (OR = 0.08, 95 % CI: 0.02 to 0.30; P = 0.0002) was lower, in the group of vitrectomy with ILM peeling than in the group of vitrectomy with no ILM peeling. However, the improvement in BCVA (Weighted mean difference (WMD) = 0.14, 95 % CI: −0.20 to 0.47; P = 0.42) and the rate of postoperative complications were similar between the two groups. Conclusion Vitrectomy with internal limiting membrane peeling is an efficient and safe procedure for macular hole-induced retinal detachment. Background Macular hole-induced retinal detachment (MHRD), also named retinal detachment resulting from a macular hole, is usually a vision-threatening complication in highly myopic eyes and is more common in the Asian adult population [1]. In past years, MHRD was presumed to be a rare disease, given the paucity of literature. But with OCT routinely used to evaluate RDs preoperatively, macular holes were found more frequently in RD [2]. The important causative factors of MHRD might be related to the tangential traction caused by a premacular membrane or fibrosis and the inverse traction caused by the posterior staphyloma [3][4][5]. More recent analysis by OCT showed that tangential traction of the vitreous cortex behind the vitreous pocket contributed to the development of MHRD [3][4][5][6]. In highly myopic eyes, the elongation of the axial length of the eye and the development of a posterior staphyloma result in thinning of the retina and choroid, which also leads to the development of MHRD [3,7]. Since the early 1990s, pars plana vitrectomy, gas endotamponade, and epiretinal membrane removal have been used in retinal detachment related to a macular hole [8][9][10][11][12][13]. However, the primary success rate of retinal reattachment (43.9-75 %) was not as high as expected, and the visual outcomes in some cases were poor [11][12][13][14][15]. To facilitate macular hole closure, removal of the internal limiting membrane (ILM) has been used as a surgical adjunct over the past fifteen years, primarily to counter surface traction and promote the closure of the macular hole [16][17][18]. It was proposed that removing the internal limiting membrane (ILM) ensures the complete removal of any overlying ERM adjacent to an idiopathic macular hole and of the vitreous traction on the retina [19][20][21][22][23][24].
Although an increasing number of reports have described PPV with ILM peeling for treating retinal detachments resulting from a macular hole, with better retinal reattachment and visual acuity [25][26][27][28][29], ILM peeling has been shown to lead to small but noticeable anatomic and functional changes in the peeled area of the retina, which should also be considered in the risk-benefit analysis. There is still debate among vitreoretinal surgeons about whether and when to peel the ILM in MHRD cases. The removal of the ILM may increase the incidence of postoperative complications, including the development of a MH and a MHRD [21]. Moreover, some functional outcomes such as postoperative scotomas and dissociated optic nerve fibre layer (DONFL) should also be considered in the risk-benefit analysis [30,31]. Most studies have been limited by small sample size and a single-institution design, so consensus has not been reached as to the necessity of ILM peeling in MHRD. To overcome these limitations, we conducted this meta-analysis of published studies to assess existing evidence about the efficacy and safety of vitrectomy with ILM peeling vs that of vitrectomy with no ILM peeling for macular hole-induced retinal detachment. Literature search A literature search was performed to identify all relevant prospective or retrospective studies that compared outcomes following vitrectomy with ILM peeling and that with no ILM peeling for macular hole-induced retinal detachment. The Pubmed, Cochrane Library, Ovid, Web of Science, Wanfang and CNKI databases were searched systematically for all articles published before June 2014. The following terms were used for the search: "retinal detachment", "macular hole", "myopic", "macular hole-induced retinal detachment", "internal limiting membrane peeling" and "vitrectomy". Only studies in the English or Chinese language were considered for inclusion. Reference lists of all retrieved articles were manually searched to broaden the search. All abstracts, studies and citations scanned were reviewed. Data extraction and assessment of study quality Two reviewers independently extracted the data from each study. The following information was extracted from each study: first author, year of publication, study design, inclusion and exclusion criteria, quality of study, study population characteristics, number of subjects in the vitrectomy with ILM peeling group and the vitrectomy with no ILM peeling group, baseline characteristics of the patients such as duration of symptoms, refractive error, axial length and preoperative BCVA in each group, and postoperative data. Discrepancies between the two reviewers were resolved by discussion and consensus with the corresponding author. Since most of our selected studies were non-randomized surgical research, the quality of each included trial was assessed using the methodological index for non-randomized studies (MINORS) [32]. This validated index involves 12 items, the first eight items specifically designed for noncomparative studies and the remaining four items applied to comparative studies. Items are scored as 0 (not reported), 1 (reported but inadequate) and 2 (reported and adequate). The maximum ideal score for comparative studies is 24. It is important to appreciate that such a scoring system was used in the quality comparison of nonrandomized research because other quality grading according to levels of evidence does not provide adequate stratification [32]. We evaluated each study with a quality score, and a score of 12 or more indicated a higher quality study.
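The MINORS scoring just described reduces to summing twelve item scores of 0, 1, or 2 and comparing the total to the quality threshold. A minimal sketch follows; the example item scores are hypothetical and are not taken from any study included in this meta-analysis.

```python
# Minimal sketch of MINORS-based quality scoring: twelve items, each scored
# 0 (not reported), 1 (reported but inadequate), or 2 (reported and adequate),
# giving a maximum of 24 for comparative studies and a quality threshold of 12.
def minors_score(item_scores, threshold=12):
    assert len(item_scores) == 12 and all(s in (0, 1, 2) for s in item_scores)
    total = sum(item_scores)
    return total, ("higher quality" if total >= threshold else "lower quality")

# Hypothetical scoring of one study (illustration only).
example = [2, 2, 1, 2, 1, 2, 2, 0, 1, 2, 1, 2]
print(minors_score(example))  # (18, 'higher quality')
```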
We evaluated each study with a quality score and the score of 12 or more indicated a higher quality study. Criteria for inclusion and exclusion To be included in this meta-analysis, studies had to fulfill the following criteria: (1) compare outcomes of patients receiving vitrectomy with ILM peeling with those of patients receiving vitrectomy with no ILM peeling for macular hole-induced retinal detachment; (2) report on at least one of the outcome measures mentioned below; and (3) if multiple studies were reported by the same institution and/or authors, either the best quality or the most recent publication was included in our analysis. Noncomparative studies were excluded. Abstracts, letters, editorials and experts opinions and reviews without original data were excluded. The following studies or data were also excluded: (1) studies included cases with both MHs and peripheral breaks; (2) the outcomes and parameters of patients were not clearly reported; (3) significant differences existed in duration of symptoms, refractive error, axial length and preoperative BCVA between vitrectomy with ILM peeling group and vitrectomy with no ILM peeling group; (4) If end points were not comparable, (5) if it was impossible to extract or calculate appropriate data from the published results. Outcomes of interest The following outcomes were used to compare between the group of vitrectomy with ILM peeling and that of vitrectomy with no ILM peeling. (1) data of efficacy, including rate of retinal reattachment after initial surgery, rate of macular hole closure after initial surgery, improved BCVA and rate of recurrent retinal detachment; (2) data of safety, the rate of postoperative complication such as retinal breaks, ERM (epiretinal membrane), cataract and intraocular pressure rise. Statistical analysis We used the Cochrane Collaboration's Review Manager Software (RevMan Version 5.1) for the data analysis. Dichotomous variables were analyzed by using estimation of odds ratios with a 95 % confidence interval (95 % CIs) and continuous variables using weighted mean difference (WMD) with 95 % CIs. For studies that presented continuous data such as median and range values, we converted these data to the mean and standard deviation by using the method of Hozo et al. [33]. Thus all continuous data were standardized for analysis. The homogeneous test of effects was performed using χ 2 test, with P < 0.05 and I 2 > 50 % indicating significant heterogeneity. A fixed-effects model was used when no heterogeneity was detected, which meant that there was no variances among studies. If any heterogeneity existed, a random-effects model, which leads to wider CIs than the fixed-effects model, was used for the meta-analysis. Presence of publication bias was evaluated qualitatively by a funnel plot. We also systematically describe and assess the results that are not appropriate to be combined in the meta-analysis. Selection of studies The initial search yielded 417 relevant studies. But most of these studies were not suitable for our analysis because they included duplicates, lab or animal studies, case reports, review and other study subjects irrelevant to our title. After screening all titles, abstracts and fulltest, 411 publications were excluded according to the selection criteria and a total of 6 studies [28,[34][35][36][37][38] were retrieved for more detailed evaluation. The search process is illustrated in Fig. 1. 
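To make the pooling procedure described in the Statistical analysis section concrete, the following sketch illustrates inverse-variance fixed-effect pooling of study-level odds ratios together with Cochran's Q and the I2 statistic used to judge heterogeneity. It is only a schematic re-implementation of the kind of computation RevMan performs (RevMan's default for dichotomous data is the Mantel-Haenszel method, so the numbers need not match exactly), and the 2x2 counts in the example are hypothetical rather than taken from the included studies.

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its standard error for one study.
    a/b: events/non-events in the ILM-peeling arm,
    c/d: events/non-events in the no-peeling arm.
    A 0.5 continuity correction is applied when any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

def pooled_fixed_effect(studies):
    """Inverse-variance fixed-effect pooled OR with Cochran's Q and I^2 (%)."""
    yi, wi = [], []
    for a, b, c, d in studies:
        y, se = log_odds_ratio(a, b, c, d)
        yi.append(y)
        wi.append(1.0 / se ** 2)
    pooled = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
    se_pooled = math.sqrt(1.0 / sum(wi))
    q = sum(w * (y - pooled) ** 2 for w, y in zip(wi, yi))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci, i2

# Hypothetical (events, non-events) counts per arm -- not the real study data.
studies = [(20, 2, 15, 7), (18, 3, 12, 8), (25, 1, 20, 6)]
or_pooled, ci95, i2 = pooled_fixed_effect(studies)
print(f"pooled OR = {or_pooled:.2f}, 95% CI = ({ci95[0]:.2f}, {ci95[1]:.2f}), I^2 = {i2:.0f}%")
```

When I2 exceeds 50 % (as for the BCVA comparison reported later), the analysis would instead fall back to a random-effects model, which widens the confidence interval.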
Characteristics and baseline of the included studies In total 6 studies [28,[34][35][36][37][38], 180 eyes (92 eyes with ILM peeling, 88 eyes with no ILM peeling) were included with retinal detachment resulting from macular hole. The characteristics of these 6 studies are summarized in Table 1. None of the studies were randomized controlled trials. Five studies [28,[35][36][37][38] were conducted in China, one in Japan [34]. The sample size of each study varied from 11 to 52. All the studies included in the meta-analysis were considerably well conducted and had balanced populations. The baseline characteristics of each included trial, such as duration of symptoms, refractive error, axial length and preoperative BCVA were found to be equivalent between the group of vitrectomy with ILM peeling and the group of vitrectomy with no ILM peeling. Meanwhile, analysis of the pooled data revealed that the two groups did not differ significantly and there was no statistical heterogeneity between the studies ( Table 2). Quality assessment The methodologic quality of the included trials is explained comprehensively in Table 3. In general, the quality of the studies was moderate to good (all >12). All data were analyzed in accordance with intention-to-treat principle. Meta-analysis of efficacy outcomes The pooled data from 5 studies including 128 eyes in the meta-analysis indicated that the group of vitrectomy with ILM peeling had higher rate of retinal reattachment after initial surgery than the group of vitrectomy with no ILM peeling (OR = 3.03, 95 % CI: 1.35 to 6.78; P = 0.007) and there was no statistical heterogeneity between the two groups (heterogeneity P = 0.26, I 2 = 25 %) (Fig. 2). The rate of macular hole closure after initial surgery was reported in 6 studies including 180 eyes. There was no statistical heterogeneity between the studies (heterogeneity P = 0.81, I 2 = 0 %). By using a fixed effects model, it was indicated that the rate of macular hole closure after initial surgery was higher in the group of vitrectomy with ILM peeling than that in the group of vitrectomy with no ILM peeling (OR = 6.74, 95 % CI: 3.26 to 13.93; P < 0.001) (Fig. 3). In 3 studies, it was indicated that there was no significant difference in the improvement of BCVA after surgery between the two groups. Analysis of the extracted data revealed that there was statistical heterogeneity between the studies (heterogeneity P = 0.03, I 2 = 72 %), which may have resulted from variations in the method used to measure the visual acuity and the data conversion from other unit to logMAR. Patients undergoing vitrectomy with ILM peeling experience a similar improvement of BCVA of those undergoing vitrectomy with no ILM peeling. The two groups did not differ significantly in the regard (WMD = 0.14, 95 % CI: −0.20 to 0.47; P = 0.42) using the random effects model (Fig. 4). The pooled data from 3 studies including 102 eyes in the meta-analysis indicated that the group of vitrectomy with ILM peeling had lower rate of recurrent retinal detachment after initial surgery than the group of vitrectomy with no ILM peeling (OR = 0.08, 95 % CI: 0.02 to 0.30; P = 0.0002) and there was no statistical heterogeneity between the two groups (heterogeneity P = 0.87, I 2 = 0 %) (Fig. 5). Meta-analysis of safety outcomes Five studies reported postoperative complications such as retinal breaks, ERM (epiretinal membrane), cataract and intraocular pressure rise. 
As is shown in Table 4, we analyzed the pooled data for each complication separately and found that the two groups did not differ significantly in this regard, using the fixed effects model. It is important to note that heterogeneity testing indicated no significant heterogeneity between the two groups. Testing for publication bias A funnel plot of the macular hole closure rate in the included studies demonstrated symmetry, which indicated no serious publication bias (Fig. 6). Discussion Gonvers and Machemer [39] first introduced the surgical procedures for treating retinal detachments (RDs) resulting from a macular hole. Since then, pars plana vitrectomy, gas endotamponade, and epiretinal membrane removal have been widely accepted as the treatment for retinal detachment related to a macular hole [8][9][10][11][12][13]. Despite the universality of pars plana vitrectomy for MHRD, the primary success rate of retinal reattachment was not as high as expected. Previous studies on retinal detachment related to high myopia in patients with macular hole have demonstrated that vitreous surgery can lead to anatomic macular hole closure, with the primary anatomic closure rate ranging from 46-75 % [8][9][10][11][12][13][14][15]. Kadonosono et al. first reported ILM peeling with the assistance of indocyanine green and sulfur hexafluoride gas injection for retinal detachment related to high myopia in patients with macular hole, with a high reattachment success rate of 91 % [25]. However, most studies have been limited by small sample sizes and single-institution designs, so consensus has not been reached as to the necessity of ILM peeling in MHRD. Since macular hole-induced retinal detachment is relatively uncommon, it is difficult to perform large-scale or randomized studies on the necessity of ILM peeling in this condition. We therefore designed our meta-analysis to determine the efficacy and safety of ILM peeling in macular hole-induced retinal detachment. In this meta-analysis, we pooled data from 6 studies and examined eight factors associated with efficacy and safety: rate of retinal reattachment after initial surgery, rate of macular hole closure after initial surgery, improved BCVA, rate of recurrent retinal detachment and rate of postoperative complications. We realized that the efficacy and safety outcomes were associated with the baseline characteristics of the eyes included in the studies, such as duration of symptoms, refractive error, axial length and preoperative BCVA. So we excluded studies in which significant differences existed in duration of symptoms, refractive error, axial length and preoperative BCVA between the group of vitrectomy with ILM peeling and that of vitrectomy with no ILM peeling. Before we analyzed the outcomes of efficacy and safety, we compared the duration of symptoms, refractive error, axial length and preoperative BCVA between the two groups and found no significant difference. According to this result we consider that the groups were comparable and that the conclusions below are reasonable. Our meta-analysis summarized the efficacy and safety outcomes of ILM peeling in vitrectomy with a total of 92 case eyes and 88 control eyes. The result indicated that the rate of retinal reattachment and macular hole closure after initial surgery was higher and the rate of recurrent retinal detachment was lower in the group of vitrectomy with ILM peeling than that in the group of vitrectomy with no ILM peeling (Figs. 2, 3 and 5).
However, the improved BCVA and the rate of postoperative complications were similar between the two groups (Fig. 4, Table 4). Macular hole-induced retinal detachment (MHRD) is usually a vision-threatening complication to highly myopic eyes, which is more common in Asian adult population [1]. The important causative factors of MHRD might be related to anterior-posterior traction of the vitreous on the macular area of the retina or fibrosis and the inverse traction caused by the posterior staphyloma [3][4][5]. Histologic studies of excised posterior vitreous cortex in the eyes with MHRD have shown that the fibrous astrocytes made up the majority of cells, and the cortical vitreous contained abundant newly formed collagen including fibrous long-spacing collagen surrounded by sparsely distribute native vitreous collagen [40]. These findings indicated that the removal of the vitrous cortex should reduce the tangential traction and resolve the myopic traction maculopathy. In highly myopic eyes, the elongation of the axial length of the eye and the development of a posterior staphyloma result in thinning of the retina and choroid, which then leads to the development of MHRD [3,7]. It was found that the posterior vitreous cortex or a thin ERM was adherent to the detached retina during surgery in all cases [34]. Thus, ILM peeling is considered to ensure the complete removal of the overlying residual vitreous cortex or ERMs and to relieve the tangential traction of residual prefoveal vitreous after posterior vitreous detachment of the contraction of epiretinal cellular constituents adjacent to the macular hole, resulting in closing the macular hole and aiding in the recovery of macular shape [41]. However, it is usually difficult to remove a thin and fragile ERM or posterior vitreous cortex completely from a detached retina. It was supposed that without ILM peeling, the remaining vitreous may act as a scaffold for the epiretinal membrane, thereby exerting traction on both the MH and retina in the posterior pole, thus limiting MH closure or even promoting reopening. In our meta-analysis, the improved BCVA was not significantly difference between the group of vitrectomy with ILM peeling and the group of vitrectomy with no ILM peeling (Fig. 4). No visual acuity improvement difference are likely explained by the fact that patients whose macular hole had not closed or who developed recurrent retinal detachment after initial surgery in the group of no ILM peeling were ethically allowed to receive second surgery including ILM peeling in clinical study. Thus the outcome of improved BCVA was observed similar in the last data point of follow-up between the two groups [42]. The ILM is the basal lamina of the Müller cells, and the Müller cell cone, which is an inverted cone-shaped zone of specialized Müller cells that form the base of the fovea [43], serves as a plug that binds the photoreceptor cells together in the macula and supports the macula structurally [44]. ILM peeling may decrease the structural support of the macula [21], reduce the amplitude of the local electroretinogram (ERG) [45] and dissociate optic nerve fiber layer [30]. Despite the anatomic change, there are no functional consequences have been attributed to these findings. However, we supposed that these anatomic changes had potential negative effect on the improvement of BCVA in the group of vitrectomy with ILM peeling. 
In our meta-analysis, the rate of postoperative complications was not significantly different between the group of vitrectomy with ILM peeling and the group of vitrectomy with no ILM peeling (Table 4). We suppose that increasing surgical experience and the use of dye to assist ILM peeling helped to avoid the adverse effects of ILM peeling. Retinal breaks, ERM (epiretinal membrane), cataract and intraocular pressure rise were the most common complications after the procedures. However, major surgical complications were few in both groups. The results of the present meta-analysis should be interpreted with caution because of several limitations. First, all the studies available for this meta-analysis were retrospective studies, so there was a possibility of evident selection bias and observer bias with regard to the adoption of the operative approach. Surgeons might have treated eyes with larger macular holes, higher refractive errors and longer symptom duration without ILM peeling to avoid postoperative complications such as retinal breaks and cataract. Second, as is known, successful vitrectomy with or without ILM peeling depends on individual experience. Surgeons with varying expertise from different clinical centers were included in our study. Therefore, the efficacy outcomes such as rate of retinal reattachment after initial surgery, rate of macular hole closure after initial surgery, improved BCVA and rate of recurrent retinal detachment might be affected. The problem of intersurgeon variability, which most clinical trials might encounter, was difficult to solve. Third, although our funnel plot showed that publication bias is unlikely, it is important to bear in mind that publication bias usually exists in meta-analyses based on published studies. Finally, converting non-normally distributed statistics (median and range) to normally distributed statistics (mean and SD) may be a cause of bias in our analysis. Conclusions In conclusion, the present meta-analysis of published studies has shown that vitrectomy with internal limiting membrane peeling is an efficient and safe procedure for the treatment of macular hole-induced retinal detachment, with higher rates of retinal reattachment and macular hole closure and a lower rate of recurrent retinal detachment as compared with the procedure of vitrectomy with no internal limiting membrane peeling. Therefore, vitrectomy with ILM peeling may be a preferred treatment for macular hole-induced retinal detachment.
Test of the Atiyah-Singer Index Theorem for Fullerene with a Superconducting Microwave Resonator Experiments have been performed using a spherical superconducting microwave resonator that simulates the geometric structure of the C60 fullerene molecule. The objective was to study with very high resolution the exceptional spectral properties emerging from the symmetries of the icosahedral structure of the carbon lattice. In particular, the number of zero modes has been determined to test the predictions of the Atiyah-Singer index theorem, which relates it to the topology of the curved carbon lattice. This is, to the best of our knowledge, the first experimental verification of the index theorem. Introduction.-The spectrum of graphene, a monolayer of carbon (C) atoms arranged on a hexagonal lattice, has been the focus of extensive theoretical [1,2] and experimental studies [3]. Its universal properties were often also investigated experimentally in analog systems, so-called 'artificial graphene' [4], e.g., in our group in photonic crystals [5][6][7][8][9][10]. Moreover, theoretically much attention has been devoted to curved graphene structures like fullerene molecules [11][12][13][14][15][16] and the connection between their spatial symmetries and electronic properties. Here, the most famous example is the C 60 molecule. It consists of 60 carbon atoms at the vertices of a truncated icosahedron and has the shape of a soccer ball. Concerning the spectral properties of fullerenes the number of near-zero modes, i.e., of electronic states with excitation energies close to zero, have been of particular interest since they determine the electrical conductivity. In [17,18] an index theorem has been derived that allows the computation of the number of such near-zero modes from the topology of the surface. It was deduced from the renowned Atiyah-Singer index theorem [19][20][21][22] which states that the analytic index of an elliptic differential operator on a compact manifold equals the topological one, in other words, that there is a connection between the number of zero modes of the operator and the topology of the manifold on which it is defined. The aim of the high-resolution experiments presented in this letter was to test these predictions in experiments with a superconducting microwave resonator of the same topology as the C 60 molecule. First we briefly review the salient features of graphene and fullerenes and outline the derivation of the index theorem from [17,18] for deformed graphene sheets. We then describe the experimental setup and compare the results of the measurements to the predictions from the index theorem and to tight-binding model (TBM) calculations. These allow us to study the approach to the thermodynamic limit of an infinite number of carbon atoms.
Graphene, fullerenes and the Atiyah-Singer theorem.-The honeycomb structure of graphene is formed by two interpenetrating triangular sublattices. As a consequence, at half filling the Fermi surface in graphene reduces to two independent points in the first Brillouin zone, the so-called 'Dirac points', denoted by K + and K − [1,2], that are conical intersections of the valence and the conduction band. Low energy excitations within the cone regions around K ± have a linear dispersion with a slope given by the Fermi velocity v F . On an infinite graphene sheet they are therefore described by a Dirac Hamiltonian for massless spin-1/2 quasiparticles consisting of partner Hamiltonians which describe excitations with momentum q = (q x , q y ) in each of the two Dirac cones around K ± . The Pauli matrices σ α with α = x, y act on the two sublattice components of the excitations, combined in two-dimensional spinors and hence referred to as quasi-spin. Both cones together then yield a four-component Dirac equation [1]. Fullerene molecules can be constructed by introducing positive curvature into an initially flat graphene sheet [23]. The bending is realized by replacing hexagons by pentagons, ensuring at the same time that the lattice is not stretched and each C atom keeps three neighbors. To determine the number of pentagons n 5 necessary to generate a spherical fullerene molecule with n 6 hexagons one uses the Euler formula [24], which relates the number of vertices V , of edges E, faces F and open ends N open of an arbitrary two-dimensional lattice to the genus g of the surface formed by it, via the Euler characteristic χ, V − E + F − N open = χ = 2(1 − g). (1) For a lattice of pentagons and hexagons, V = (5n 5 + 6n 6 )/3, E = (5n 5 + 6n 6 )/2 and F = n 5 + n 6 , this gives χ = n 5 /6. Without open ends χ must be an even integer on a closed orientable surface due to the Gauß-Bonnet theorem [24]. Hence, for a flat graphene sheet with periodic boundary conditions one has g = 1 for the torus and n 5 = 0, while a sphere with g = 0 needs n 5 = 12 pentagons to avoid open ends. Consequently, fullerenes are grown from the C 60 molecule by increasing the number of hexagons, i.e., always have twelve pentagons at the same relative positions. This also applies to the thermodynamic limit, and one expects that their low-energy electronic excitations are described by a Dirac equation on a sphere. To introduce a pentagon into the honeycomb lattice, a π/3 sector is cut out and then the edges are glued together [13,14,17,18] as illustrated in Fig. 1. Thereby a pentagon is created at the apex of the emerging cone. Along the seam, two C atoms from the same triangular sublattice, e.g., the red ones in Fig. 1, are connected. This results in a coupling of the Dirac operators associated with the K ± points. Indeed, when the four-dimensional spinor associated with the Dirac equation of the flat graphene sheet is transported around the apex by an angle 2π, it is forced to jump at the seam from a red site to another red one instead of to a blue one. It thus acquires a non-trivial phase, which can be accounted for by introducing a non-Abelian gauge field A µ in the Hamiltonian which yields a flux of (π/2) τ y when integrated along a closed loop around the apex. The Pauli matrix τ y thereby couples the K + and K − spinor components [1]. This description entails the existence of a fictitious magnetic monopole inside the surface. In the case of fullerenes it is located at the center of the spherical molecule, yielding a flux of 1/8 through each of the twelve pentagons.
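As a small numerical check of the counting argument above, the sketch below evaluates V, E, F and the Euler characteristic for closed pentagon-hexagon lattices, and then combines the flux of 1/8 per pentagon with the index relation used further below. The hexagon numbers for the larger fullerenes are obtained from n = 20 + 2 n6 for a C n molecule; this is illustrative arithmetic only, not part of the experiment or of the original derivation.

```python
from fractions import Fraction

def euler_data(n5, n6):
    """V, E, F and chi = V - E + F for a closed lattice of n5 pentagons and
    n6 hexagons with three-fold coordinated vertices and no open ends."""
    V = Fraction(5 * n5 + 6 * n6, 3)
    E = Fraction(5 * n5 + 6 * n6, 2)
    F = n5 + n6
    return V, E, F, V - E + F

# Spherical fullerenes (genus 0, chi = 2) always need n5 = 12 pentagons,
# independently of the number of hexagons n6 = (n - 20)/2 of a C_n molecule.
for n in (60, 240, 540, 720):
    n6 = (n - 20) // 2
    V, E, F, chi = euler_data(12, n6)
    assert V == n and chi == 2
    print(f"C{n}: V = {V}, E = {E}, F = {F}, chi = {chi}")

# Flux and zero-mode counting for the sphere:
total_monopole_charge = 12 * Fraction(1, 8)   # 12 pentagons x flux 1/8 = 3/2
index_per_sector = Fraction(12, 4)            # |index(D_l)| = n5/4 = 3 for each l = 1, 2
total_zero_modes = 2 * index_per_sector       # two triplets -> 6 zero modes
print(total_monopole_charge, index_per_sector, total_zero_modes)
```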
Thus the total magnetic monopole charge inside the sphere equals 3/2. In addition to that, analogous to the daily rotation of Foucault's pendulum, a deficit angle of π/3 arises when moving a frame along a loop around the apex. It is here described by a quasi-spin connection Q µ with circulation −(π/6) σ z around the apex. The coupling of the K ± spinor components in the resulting four-dimensional Dirac equation can be removed by a rotation, which leads to two independent two-dimensional Dirac equations, / D l ψ l = E ψ l with l = 1, 2, (2) [18], where, in the explicit form of / D l , e µ α is the Zweibein in the tangent plane of the surface, and A l µ are the components of A µ in the rotated basis with circulation ±π/2 for l = 1 and 2, respectively, A l now being an Abelian gauge field. The four-dimensional Dirac equation obtained from Eq. (2) provides a good description of the low-energy excitations of the C 60 molecules [13][14][15]. It yields the long-wavelength excitations of the deformed graphene sheet in the vicinity of the Dirac points, and thus also the zero modes that we are interested in. For the fullerenes the Dirac operators / D l are elliptic and defined on a compact surface. Hence the Atiyah-Singer index theorem [19][20][21][22] applies. Ten years after its first formulation a new proof was provided based on the heat equation [25], which was later employed for the derivation of an index theorem for graphene sheets deformed by pentagons and heptagons [17,18], as briefly reviewed in the following. Each Dirac operator in Eq. (2) can be written in terms of off-diagonal partner operators P and P † [26], so ( / D l ) 2 contains only the diagonal operators P P † and P † P that have the same number of zero modes as P † and P , respectively. Furthermore, the non-zero eigenvalues of P P † and P † P are identical. The analytic index of / D l is given by the difference of the numbers of zero modes of P and P † , denoted by ν + and ν − , respectively, i.e., index( / D l ) = ν + − ν − [17,18]. More importantly, however, this index is related to the total flux of the effective gauge field via the Atiyah-Singer index theorem [17], index( / D l ) = (1/2π) ∫ Ω F l . (3) The integral is taken over the compact surface Ω and F l = ∂ ∧ A l are the field strengths associated with the now Abelian gauge potentials A l . Stokes' theorem then implies from the closed loops around each apex that (1/2π) ∫ Ω F l = ±n 5 /4. (4) The Euler formula thus leads to the Atiyah-Singer index theorem for fullerenes in the form index( / D l ) = ±3(1 − g). In two dimensions either ν + or ν − vanishes. Hence, the index theorem provides the number of zero modes [17,18]. The total number of zero modes is the sum of those of the subsystems corresponding to l = 1, 2. Consequently, according to the index theorem, the zero modes of the four-dimensional Dirac operator for spherical fullerenes correspond to two triplets. The same result has been obtained in a continuum model for the low-energy electronic states of icosahedral fullerenes [13][14][15][16]. We emphasize, however, that the eigenvalues of the near-zero modes tend to zero, i.e., coincide with the energy at the Dirac points, only in the thermodynamic limit of an infinite number of C atoms. In a sufficiently large but finite fullerene molecule they are expected to lie much closer to the Dirac energy than all the other ones. Experimental setup and resonance spectra.-Hitherto, experiments have been performed with flat, superconducting microwave resonators, so-called 'microwave billiards' [27], to address problems from the fields of quantum chaos [28,29] and compound nucleus reactions [30].
In this context, the equivalence of the Helmholtz equation and the non-relativistic Schrödinger equation of the corresponding quantum billiard is exploited which holds below a maximum microwave frequency f max = c/(2d) with c the velocity of light and d the height of the billiard. Consequently, the eigenvalues of a quantum billiard can be obtained experimentally from the eigenfreqencies of the microwave billiard of corresponding shape. Recently, we realized experiments with superconducting microwave Dirac billiards and studied universal spectral properties of graphene sheets [31] with unprecedented accuracy [10,32,33]. The aim of the experiments presented here was the investigation of the universal spectral properties of the fullerene C 60 molecule attributed to its lattice structure and to determine the number of zero modes which, according to the Atiyah Singer index theorem solely depends on the number of pentagons. For this we use a system exhibiting the same topological properties, namely a quantum fullerene billiard on a sphere, consisting of a network of 60 circular billiards at the positions of the C atoms connected by three of the altogether 90 straight leads with three adjacent ones. We studied them experimentally by using instead of a planar, superconducting microwave (Dirac) billiard a cavity, which is imprinted on a sphere. The microwave fullerene billiard displayed in Fig. 2 was constructed by milling a total of 60 circular cavities (vertices) and 90 rectangular channels (edges) out of a brass sphere and then closing them with small triangular brass plates of 5 mm thickness and 3 mm thick rectangular ones, respectively. Before the parts were screwed together, they were covered with lead, which is superconducting below T c = 7.2 K. The diameter of the sphere of 160 mm was limited by the size of the liquid Helium cryostat in which the resonator was cooled down to 4.2 K in order to attain superconductivity. The radius of the circular cavities was 12 mm, the widths of the waveguides 14 mm, before lead coating them. Thus the cutoff frequency for the first propagating mode in the latter is f 1 c 10.714 GHz. In total, 8 antennas were attached to the triangular plates. Two, covered with red In the latter two cases they are twofold degenerate due to the mirror symmetry. Note, that the circular cavities exhibit no threefold symmetry, because each of them is part of one pentagon and two hexagons and the internal angles differ. The first band is located well below the cutoff frequency of the waveguide. Consequently, the electric field modes excited inside the circular cavities are only weakly coupled to those in the neighboring ones. The resonance frequencies in the second band are above the cutoff frequency. Accordingly, the modes in the cavities are coupled via the modes inside the waveguides, and thus mimick a situation where the C atoms are coupled to the neighboring ones via an extra atom, thus explaining the number of resonances in this band. The third band is still below the frequency f 2 c 20.143 GHz of the second propagating mode in the waveguides. As a result, the number of possible mode configurations is restricted due to the symmetry properties of the modes excited inside the cavities, that prefereably couple to the second excited mode inside the waveguide. Above 20.232 GHz, i.e., beyond f 2 c , several bands are intertwined. In summary, only the first band can be used to model the situation in the fullerene C 60 molecule. The middle panel of Fig. 
3 shows a magnification of it. Fifteen groups of nearly degenerate resonances are clearly visible. The number of resonances identified in each of them is indicated and coincides with the degrees of degeneracy predicted on the basis of group theoretical considerations for the eigenfrequencies because of its truncated icosahedral structure [12,34]. In the group with degeneracy degree 9, in fact, the energy values of 5 and 4 degenerate eigenfrequencies, respectively, are accidentally the same. We emphasize, that we were only able to identify all 60 resonance frequencies because the degeneracies were lifted. The reason is that the symmetries of the resonator structure were slightly perturbed due to the presence of the antennas and unavoidable marginal inhomogeneities in the lead coating. The influence of the former turned out to be negligible for sufficiently short antennas. The effect of the latter on the size of the splittings of the nearly degenerate resonance frequencies was tested by smoothing the surface of the resonator which indeed induced a reduction of the splittings in each group. The pair of triplets visible in the middle panel of Fig. 3 and shown in a further magnification of the spectrum in the lowest panel, is closest to the Dirac frequency at f D = 8.504 GHz which was determined as described below. The zoom demonstrates the high resolution necessary to resolve the 6 resonances. These are the 6 modes conjectured by the Atiyah-Singer index theorem that we were looking for. As is discussed next they are corroborated also by TBM calculations. Tight-binding model description of the spectra.-The eigenvalues of the C 60 molecule have been computed previously using the TBM [12], however, a stringent test of its applicability was missing. Given the experimental results on the eigenfrequencies in the first band of the fullerene resonator we are now in a position to check the validity of the TBM in detail. As stated above, the modes excited in the 60 cavities are weakly coupled, which is an essential prerequisit for the applicability of the TBM. Detailed calculations showed a quantitative agreement between the computed and the measured frequencies only when including next-nearest, and second and third-nearest neighbor couplings with strengths t 1 , t 2 and t 3 , respectively. This yielded for the frequency of the isolated cavities f 0 = 8.515 GHz and the coupling parameters t 1 = −0.0929 GHz, t 2 = 0.0035 GHz and t 3 = 0.0005 GHz. The eigenvalues deduced from the TBM appear as 15 groups of degenerate eigenvalues with the same multiplicities as the resonances in the spectrum shown in the middle panel of Fig. 3. An even better agreement was achieved by taking into account the fact that, due to the inhomogeneities in the lead coating, the radii of the cavities are slightly different. In order to estimate the deviations thus induced in f 0 , we used the fact, that f 0 is given by the first zero of the J 0 Bessel function, J 0 (kR) = 0, for a circular cavity of radius R. We inserted for each of the 60 cavities the measured radius and replaced f 0 by the individual values. Thereby, the degeneracies were removed. In panel a) of Fig. 4 we compare the resonance density ρ(f ) = i δ(f − f i ) obtained for the frequencies f i in the first band (black full line) with the TBM result (red dashed line). For display purposes we have replaced the δ functions by Lorentzians of finite width of Γ = 2 MHz. 
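The following sketch shows, in schematic form, how the quoted parameters (f0 = 8.515 GHz, t1 = −0.0929 GHz, t2 = 0.0035 GHz, t3 = 0.0005 GHz) could be combined into a tight-binding Hamiltonian and how its eigenfrequencies can be broadened into a resonance density with Lorentzians of width Γ, as done for Fig. 4a. The neighbor pair lists for the truncated icosahedron are assumed to be supplied from the geometry (building them is omitted here), the small cavity-to-cavity variations of f0 are ignored, and the toy usage at the end employs a six-site ring merely to show the call pattern; it illustrates the procedure rather than reproducing the published calculation.

```python
import numpy as np

def tbm_spectrum(n_sites, neighbor_pairs, f0=8.515, couplings=(-0.0929, 0.0035, 0.0005)):
    """Eigenfrequencies (GHz) of a tight-binding Hamiltonian with on-site
    frequency f0 and first-, second- and third-neighbor couplings t1, t2, t3.
    neighbor_pairs maps the neighbor order (1, 2, 3) to a list of site pairs (i, j)."""
    H = np.eye(n_sites) * f0
    for order, pairs in neighbor_pairs.items():
        t = couplings[order - 1]
        for i, j in pairs:
            H[i, j] = H[j, i] = t
    return np.linalg.eigvalsh(H)

def resonance_density(freqs, grid, gamma=0.002):
    """Sum of Lorentzians of width gamma (GHz) centred at the eigenfrequencies."""
    return sum((gamma / (2 * np.pi)) / ((grid - f) ** 2 + (gamma / 2) ** 2) for f in freqs)

# Toy usage with a 6-site ring instead of the real C60 connectivity:
ring = {1: [(i, (i + 1) % 6) for i in range(6)]}
freqs = tbm_spectrum(6, ring)
grid = np.linspace(freqs.min() - 0.05, freqs.max() + 0.05, 2000)
rho = resonance_density(freqs, grid)
print(freqs)
```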
The good agreement reassures the applicability of the TBM and thus justifies its use for further numerical studies with larger fullerene molecules with the parameters determined from the experiment. Panel b) of Fig. 4 shows a comparison between the resonance densities of the C 60 , C 240 , C 540 and C 720 molecules in ascending order. Here, we used Lorentzians of width Γ = 5 MHz for all cases. As stated above, all molecules contain the same number of pentagons whereas the num- ber of hexagons increases. The resonance densities should thus resemble more and more that of a graphene sheet. They exhibit a minimum bounded by two increasingly sharp peaks, that evolve into van Hove singularities in the limit of an infinite number of atoms [35,36]. A comparison of the resonance densities with that of a rectangular graphene sheet with periodic boundary conditions (panel c) of Fig. 4), which has been obtained in measurements with a microwave Dirac billiard [10], shows that they resemble for large fullerenes. In contradistinction to the latter, however, the resonance densities of the fullerenes all exhibit a peak of similar size located at the minimum which is due to the two triplets of zero modes. This remains true in the limit of an infinite number of atoms [13][14][15][16] and is thus a distinct feature of the spatial curvature and topology. According to the Atiyah-Singer index theorem a plane graphene sheet with periodic boundary conditions should not exhibit any zero modes. This is observed in panel c). Zero modes are expected only, if part of the graphene sheet is terminated with zigzag edges [31]. The associated states are called edge states, because their wave functions vanish everywhere except at these edges [6,10,32]. We have also computed the wave functions of the fullerene molecules under consideration using the TBM and found, that those of the 6 zero modes are localized at the pentagons, which may be considered to be equivalent to zigzag edges within the hexagon network. The central frequencies of the two triplets, marked by squares for the one close to the Dirac frequency (dotted line) and by a circle for the one further away are displayed in Fig. 5 as a function of the number n of C atoms. As is clearly visible, the distance between the triplets decreases with increasing size of the fullerene molecule and both approach the Dirac frequency. This behavior is well fitted by a function f (n) = f D + a/n b yielding the parameter values given in the caption of Fig. 5, and for the Dirac frequency finally a value of f D = 8.504 GHz. Note that this is essentially the only way to determine the Dirac frequency of a C 60 molecule. Conclusions.-The lowest 60 eigenvalues of a C 60 fullerene were determined in high-precision experiments using a superconducting microwave billiard of corresponding shape. They appear in 15 groups of nearly degenerate ones, where the multiplicity coincides with that determined based on the group theory of the truncated icosahedral structure of C 60 . We have demonstrated in TBM calculations for spherical fullerene molecules of increasing size that the two triplets of resonances detected close to the Dirac frequency correspond to the triplets of zero modes predicted by the Atiyah-Singer index theorem, and thus provided to the best of our knowledge the first experimental test of it. The exact value of the Dirac frequency was obtained as the asymptotic value attained by the frequencies of the triplets in the limit of an infinite number of atoms. 
This work was supported by the DFG within the Collaborative Research Center 634.
Congenital Malformation of the Brain More than 2000 different congenital malformations of the brain have been described in the literature, and their incidence is reported to be about 1 percent of all live births.1 Since the congenital anomalies of the brain are commonly encountered in day to day practice, it is very important for every radiologist to be familiar with the basic imaging findings of common congenital anomalies to make a correct diagnosis necessary for optimum management of these conditions. Magnetic resonance imaging (MRI) is very useful in studying these malformations. The aim of this chapter is to provide an overview of all important and routinely encountered congenital malformations of the brain. Introduction More than 2000 different congenital malformations of the brain have been described in the literature, and their incidence is reported to be about 1 percent of all live births. 1 Since the congenital anomalies of the brain are commonly encountered in day to day practice, it is very important for every radiologist to be familiar with the basic imaging findings of common congenital anomalies to make a correct diagnosis necessary for optimum management of these conditions. Magnetic resonance imaging (MRI) is very useful in studying these malformations. The aim of this chapter is to provide an overview of all important and routinely encountered congenital malformations of the brain. Normal brain development Congenital anomalies of the brain are extremely complex and are best studied by correlating with embryological development. Basic events in normal brain development includes the following four stages: 1 Stage 1: Dorsal Induction: Formation and closure of the neural tube -Occurs at 3-5 weeks -Three phases: Neurulation, canalization, retrogressive differentiation -Failure: Neural tube defects (Anencephaly, Cephalocele, Chiari malformations) and Spinal dysraphic disorders. Classification of congenital malformation of brain A number of classification systems have been proposed, but with regards to our basic purpose, simplified classification of brain malformations has been taken into account, which is as follows: 1 -Complex anomaly involving skull, dura, brain, spine and the cord -Skull and dural involvement  Luckenschadel (lacunar skull), concave clivus and petrous ridges  Small and shallow posterior fossa with low lying transverse sinuses and torcular herophilli  Hypoplastic tentorium cerebelli with gaping (heart shaped) incisura  Hypoplastic, fenestrated falx cerebri with interdigitating gyri  Gaping foramen magnum - The herniated brain dysgenetic and non-functional -Absence or erosion of the crista galli with enlargement of foramen cecum is a constant feature of a nasal cephalocele. Fig. 4. Occipital cephalocele. Sagittal T1W (a) image shows herniation of severely dysplastic cerebellar tissue and the occipital lobe into a large CSF containing sac through an osseous defect in the occipital bone(thin white arrow). Thin strand of dysplastic brain tissue or septa can be seen traversing the CSF within the sac. Also note small posterior fossa, and deformed brain stem. 2D Time-of-flight venogram (b) demonstrates no herniation of dural venous sinuses in the cephalocele sac. This is important information for the surgeons. 7 -Differential diagnosis of a nasal cephalocele includes congenital nasal masses (e.g. dermoid) where crista galli is present but split. -Antenatal ultrasound and MRI are useful in evaluation of content of the sac. 
MR venography is useful to assess major dural sinuses within the herniated sac, which may be responsible for major bleeding during surgery. CT is useful in demonstrating bony defect. -Associated anomalies  Chiari II and III malformation (seen with occipital cephalocele)  Corpus callosum agenesis, Dandy-Walker malformation (seen with parietal cephalocele) image shows a small subcutaneous mass (thin white arrow) in high occipital region just external to a small defect in the calvarium. Note that the brain is not entering the cephalocele; instead, a thin strand of fibrous tissue is seen extending across the osseous defect, from the surface of the brain to the subcutaneous mass. Small posterior fossa arachnoid cyst is also seen. 2D TOF venogram (c) shows presence of median procencephalic vein within embryonic falcine sinus(thin white arrow) and absence of sagittal sinus. Disorders of diverticulation and cleavage Holoprosencephaly 1,2,5 ( Figure 14 Lissencephaly. Axial T1W image shows a complete smooth brain with thickened cortex and shallow sylvian fissures(arrows) giving the brain characteristic figure of eight appearance. Fig. 18. Lissencephaly with hemimegalencephaly. Axial (a) and coronal (b) T1W image of brain shows right sided hemimegalencephaly and lissencephaly. Right frontal lobe is particularly enlarged, has a disorganized, thickened, nearly agyric cortex with complete loss of cortico-medullary differentiation (arrow). The anterior interhemispheric fissure is displaced to the opposite side by the hypertrophied frontal lobe; ipsilateral frontal horn is also enlarged. Heterotopias are isointense to normal gray matter in all pulse sequences and do not enhance on administration of intravenous contrast. They are best appreciated on medium tau inversion recovery sequences.  The differential diagnosis is subependymal nodules (SENs) of tuberous sclerosis. On MRI, SENs are not precisely isointense to gray matter, however, occasionally show enhancement after contrast administration. They are often calcified.  Large dysplastic and disorganized masses of ectopic gray matter may mimic intracranial mass, and produce severe deformity of ipsilateral ventricle.  Subcortical heterotopias are less frequent. -Band or laminar type  A layer of neurons interposed between the ventricle and cortex, seen as alternating layer of gray and white matter band - The cortex overlying the heterotopia is nearly always abnormal with pachygyria or polymicrogyria. Severe form of open lip schizencephaly has an appearance which is called "basket brain".  Closest differential is porencephalic cyst in which CSF space is lined by gliotic white matter, in contrast to gray matter lined schizencephaly. -Associated anomalies: heterotopias, septo-optic dysplasia, absence of septum pellucidum and callosal dysgenesis MR shows enlargement of a part or whole of one cerebral hemisphere, ipsilateral ventricle is frequently dilated and the frontal horn is stretched. 
The cortex is affected by diffuse migration anomaly (polymicrogyria, pachygyria) and the underlying white matter is gliotic and dysmyelinated -Rarely, associated enlargement and dysplasia of ipsilateral cerebellar hemisphere and brain stem may be present, a condition referred as total hemimegalencephaly -Heterotopias may be present -Associated anomalies: Epidermal nevus syndrome, Klippel-Trenaunay-Weber syndrome, Neurofibromatosis type 1 MR image demonstrates a large laminated-appearing T2W/FLAIR hyperintense and T1weighted hypointense mass involving the right cerebellar hemisphere. Note gross thickening of the cerebellar folias (arrow). No perilesional edema present. However, mass effect on the fourth ventricle with moderate hydrocephalus can be seen. Proton MR spectroscopy (d) reveals normal metabolites peak. Disorders of histogenesis "Neurocutaneous syndromes" or "Phakomatoses" constitute a group of congenital malformations which are characterized by cutaneous lesions associated with CNS anomalies. Some of the common neurocutaneous syndromes are described below. -Also known as Von Recklinghausen disease or peripheral neurofibromatosis -Accounts for > 90% of all NF cases -Incidence = 1:2000 to 3000 live births -Diagnostic criteria: two or more of the following findings are present  Six or more café-au-lait spots(≥5mm in pre-pubertal children and ≥15mm in postpubertal period)  One plexiform neurofibroma or two or more neurofibromas of any type  Two or more pigmented iris hamartomas(Lisch nodules)  Optic nerve glioma  Axillary or inguinal freckling  Osseous lesions such as dysplasia of greater wing of sphenoid, pseudoarthrosis  First degree relative with NF-1 -CNS lesions present in 15-20% cases. These include  Optic nerve glioma (most common CNS lesion), may extend to involve the optic chiasma, optic tract, optic radiation and the lateral geniculate bodies.  Nonoptic gliomas may involve the brain stem, tectum, and periaqueductal region.  Plexiform neurofibroma is a hallmark of NF-1. It is an unencapsulated neurofibroma along the path of major cutaneous nerve of the scalp and neck, which commonly involves the first (ophthalmic) division of trigeminal nerve. It is often associated with dysplasia of sphenoid bone and bony orbit.  Non-neoplastic hamartomatous lesions (80%) of basal ganglia and white matter. Majority of lesions show no mass effect or contrast enhancement. These lesions may increase in size or number in early childhood, diminishes with age and rarely observed into adulthood.  Other intracranial lesions include astrocytic proliferation of the retina, intracranial aneurysms, vascular ectasia and a progressive cerebral arterial occlusion disease akin to moya-moya pattern.  Spinal lesions may include cord astrocytoma / hamartoma, dural ectasia and lateral/anterior intrathoracic meningoceles.  Skeletal dysplasias may include hypoplasia of sphenoid bone and bony orbit, kyphoscoliosis, scalloping of posterior aspect of the vertebral bodies 24 -CNS lesions present in 100% cases. These include  Bilateral acoustic schwannomas, hallmark of NF-2  Schwannomas of other cranial nerves. Trigeminal nerve is next most frequently involved nerve, albeit, any cranial nerve may be affected (with the exception of the olfactory and optic nerves).  Meningiomas, often multiple  Choroid plexus calcification  Spinal lesions include cord ependymomas, meningiomas, or multilevel bulky schwannomas of exiting roots Fig. 33. 
Neurofibromatosis type 1: Opticochiasmatic-hypothalamic pilocytic astrocytomas. Axial T2-weighted (a) and coronal FLAIR (b) MR image shows enlargement of bilateral optic chiasma (thin black arrows) and ill-defined hyperintensity involving the hypothalamus(thin white arrow) and adjacent brain. Coronal T1-weighted post contrast image (c) demonstrates mild to moderate enhancement of the optic chiasma/hypothalamus but marked enhancement of the lesions involving the adjacent brain parenchyma. Moderate obstructive hydrocephalus is also present. FLAIR coronal images (d-f) of the same patient shows further extension of the optic pathway glioma to involve bilateral medial temporal lobes, basal ganglia region, mid brain and pons (thin white arrow). These lesions appear as ill-defined areas of high signal intensity on Flair images. The enlarged optic chiasma (thin black arrow) and obstructive hydrocephalus are also seen in these images. . Right schwannoma appears as a large homogenous enhancing right CP angle mass with intracnalicular extension and the left one is seen as a small intracanalicular enhancing mass. Multiple meningiomas(thin black arrows) are also present seen as enhancing extra-axial masses in right medial temporal and bilateral frontal regions. Right optic nerve meningioma is also seen completely filling the intraconal space. Non-contrast sagittal T1W(d) and coronal T2W image(e) of whole spine of the same patient demonstrates low cervical region meningioma(d) and multiple rounded lumbar region nerve root schwannomas(e), best appreciated on MR myelogram(f). Tram-track or gyriform pattern of cortical calcification underlying the leptomeningeal angioma is diagnostic of the syndrome. The calcification is unusual before two years of age. Calcifications are best seen on plain CT, T2W and GRE image. Summary Congenital malformations of the brain are both complex and multiple. The neuroradiologic diagnosis of such anomalies requires a basic understanding of normal brain development and pathogenesis. The aetiologies associated with development anomalies may result from a variety of insults from genetic to environmental. Abnormalities associated with the neural tube and the neural plate generally occur within the first 28 days of gestation. On the other hand, abnormalities associated with cellular proliferation and migration in the CNS generally occur after the 28th day of gestation. This chapter will cover malformations associated with both of these periods. Congenital anomalies of the brain are commonly encountered in day to day practice. Nevertheless, diagnosing it correctly is of paramount importance. Imaging plays an important role in reaching the correct diagnosis necessary for optimum management of these unfortunate conditions. It is as important for every radiologist to be familiar with basic imaging findings of common congenital anomalies, as it is for the paediatrician.
Effects of confinement on thermal stability and folding kinetics in a simple Ising-like model In cellular environment, confinement and macromulecular crowding play an important role on thermal stability and folding kinetics of a protein. We have resorted to a generalized version of the Wako-Saito-Munoz-Eaton model for protein folding to study the behavior of six different protein structures confined between two walls. Changing the distance 2R between the walls, we found, in accordance with previous studies, two confinement regimes: starting from large R and decreasing R, confinement first enhances the stability of the folded state as long as this is compact and until a given value of R; then a further decrease of R leads to a decrease of folding temperature and folding rate. We found that in the low confinement regime both unfolding temperatures and logarithm of folding rates scale as R-{\gamma} where {\gamma} values lie in between 1.42 and 2.35. Introduction In the past the majority of experiments on protein folding have been carried out in diluted solutions but in the last two decades it has become clear that these experiments do not take into account two issues which arise in vivo and whose relevance on thermal stability and equilibrium rates is not negligible.Namely, crowding and confinement [1,2,3,4].Crowding refers to the fact that about 30% of cells internal volume is occupied by macromolecules such as lipids, carbohydrates and proteins themselves [1].This fraction could even reach 40% in E. Coli [5].Confinement is merely a limitation in the volume available to the polypeptide chain as naturally occurs in the exit tunnel of ribosomes or in the chaperonin cavity. Studying protein folding properties in a crowded environment is experimentally possible simply by adding high concentrations of macromolecules to solutions, but this approach has problems because of specific interactions which arise between proteins and crowding agents and because crowding promotes protein-protein aggregation [1].Based on the idea that the main effect of crowding is the reduction of volume available to the protein due to steric constraints, theoretical studies and simulations have shown that crowding may be quantitatively mapped onto confinement as long as crowding agents are modelled as hard spheres and the volume fraction occupied by them does not exceed 10% [6].Thanks to this mapping, experimental and theoretical studies on confinement may give many hints also for crowding effects.However the above conditions often does not hold in the cell interior because of too high concentration of agents or presence of macromolecules-protein attractive interaction.In addition, gradients in macromolecule concentrations may exist [7] and, from a more general point of view, crowding is dynamic in nature whereas confinement is static.Thus, the mapping is not close enough to draw a completely satisfactory analogy between crowding and confinement. An experimental procedure to mimic the effects of confinement, is the encapsulation of proteins within pores of silica gels [8,9] or glasses [10] or polyacrylamide gels [11].These experiments reported, for most of the considered proteins, an increase in thermal stability when they are confined into nanopores.Melting temperature (T f ) shift is even dramatic in the cases of α-lactalbumin and RNase A, being as large as about 30 K [8,10].On the contrary, recent experiments suggested that crowding influence on stability is modest [7,12]. 
The commonly accepted reason for the increase in stability is the change in conformational entropy induced by confinement [13,14,15,16,17,18]. Encapsulating the protein in a given volume disallows the most expanded configurations of the denatured state ensemble and so indirectly favours more compact structures and, among them, the folded state. The same argument explains also why confinement should lead to an increase in folding rates k f as long as the nanopore size is large enough to contain the folded state and to permit chain reconfigurations around it [13,14,15,16,17,18,19]. From polymer physics we know that a polymer confined between two (sufficiently close) inert hard walls behaves like a pancake with the radius of gyration (parallel to the walls) that scales as a power of the number of monomers [20,21,22]. Furthermore, when it is confined within a cage with repulsive walls, its free energy follows a simple power law dependence on the size of the cage R [20,21]. Then, as shown by Takagi et al. [17], folding temperatures and rates should follow the scaling laws ∆T f ∼ R −γ and ∆ ln k f ∼ R −γ . Literature reports many values for the exponent γ: for an ideal Gaussian chain confined between two walls (d c = 1), in a cylinder (d c = 2) or in a spherical cavity (d c = 3), γ = 2, while for an excluded volume chain γ = 5/3 for d c = 1, 2 and γ = 15/4 for d c = 3. Using a Gō-model α-carbon representation of proteins and Langevin simulations in a cylindrical cage, Takagi et al. [17] found γ = 3.25 ± 0.09. Best and Mittal [18] simulated confinement of protein G and a 3-helix bundle in different geometries and reported that for d c = 1, 2 both values γ = 2 and γ = 5/3 are a good estimate of the behavior of the two proteins, but they also remarked that it is hard to distinguish which value best fits the simulations because least-squares fitting of power laws can produce biased estimates of parameters for small samples. For spherical confinement the same authors reported a behavior which is stronger than γ = 2 but much weaker than the expected behavior for the excluded volume chain (γ = 15/4). In the present work we confine a simple Ising-like model (WSME model) originally proposed by Wako and Saitô in 1978 and later reconsidered by Muñoz and Eaton [23,24,25,26,27]. Equilibrium thermodynamics of the model can be solved exactly [28]. The cluster variation method is exact for this model [29] and it successfully describes the kinetics of protein folding [30,31,32,33]. More recently, a generalized version of the model has been proposed that permits one to reproduce the general features of mechanical unfolding [34,35] and, through Monte Carlo simulations, to obtain for some already widely studied proteins and RNA fragments unfolding pathways which are consistent with results of experiments and/or of simulations made with more detailed models [36,37,38,39]. The model has also been used with success to study folding equilibrium and kinetics and to mimic mutations of a small ankyrin repeat protein [40]. We use the confined WSME model to study thermodynamics and kinetics of three ideal structures and three simple proteins in confining conditions. The ideal structures are a 10-residue ideal α-helix and 2-stranded and 3-stranded ideal β-sheets, each with 7 residues per strand. Real structures are a 3-helix bundle, protein G and its C-terminal β-hairpin. The paper is organized as follows: in Sec.
The model

The WSME model is a Gō-like model in which a given N-residue protein is described by a sequence of N binary variables m_k, whose value is 1 if the k-th residue is in the native configuration and 0 otherwise. Two residues interact only if they and all the residues between them are native, and only if they are in contact in the native structure, i.e. they have at least a pair of atoms which are closer than the threshold length of 4 Å in the native structure. If residues i and j are in contact in the native structure we associate to them a negative energy −ε_ij (defined as in [27]) and a contact matrix element ∆_ij = 1. If the two residues are not in contact, ∆_ij = 0. When the molecule is pulled at its ends by a constant force f, the Hamiltonian reads

H({m_k}, {σ_ij}) = − Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} ε_ij ∆_ij Π_{k=i}^{j} m_k − f L,

where L = L({m_k}, {σ_ij}) is the end-to-end length of the protein and {σ_ij} is a set of new binary variables, defined below, in which the greater entropy of the non-native states is encoded.

Here and in the following we define a native stretch from residue i to residue j as a sequence of native residues delimited by the two non-native residues i and j. The end-to-end length L is the sum of the native-stretch lengths l_ij, each multiplied by a sign +1 or −1 (the binary variable σ_ij) according to whether the stretch is parallel or antiparallel to the direction of the force. The binary variable σ_ij thus represents the direction of the stretch from the i-th to the j-th residue. Using the quantity (1 − m_i) (Π_{k=i+1}^{j−1} m_k) (1 − m_j), which is equal to 1 if the sequence of residues from i to j is a native stretch and is 0 otherwise, and setting the boundary conditions m_0 = m_{N+1} = 0, the length L is defined as

L = Σ_{0 ≤ i < j ≤ N+1} σ_ij l_ij (1 − m_i) (Π_{k=i+1}^{j−1} m_k) (1 − m_j).

The set of all possible lengths {l_ij} is obtained directly from the three-dimensional structure deposited in the Protein Data Bank (pdb) as the distances between the various pairs of central carbon atoms {C^α_i, C^α_j}. Besides l_ij, two other lengths associated to the stretch from the i-th to the j-th residue are important for what follows. These are the maximum p^max_ij and the minimum p^min_ij among the distances between C^α_i and the projections of each C^α_k (i ≤ k ≤ j) on the straight line from C^α_i to C^α_j. These lengths are illustrated in figure 1.

The partition function constrained to a given length can be calculated recursively, building up the protein residue by residue and evaluating at each step the partition function z_n(L), where n is the number of residues reached at that step (see the appendix of [35] for detailed calculations). In the recursion (4), χ_in = Σ_{k=i}^{n−1} Σ_{r=k+1}^{n} ε_kr ∆_kr is minus the energy of the native stretch from the (i−1)-th to the (n+1)-th residue, and the initial conditions are z_{−1}(L) = 1 for L = 0 and z_{−1}(L) = 0 for L ≠ 0. The goal of the recursive scheme is the constrained partition function Z(L; f = 0), which corresponds to z_N(L). The absolute value of the possible end-to-end lengths of a protein cannot be greater than L_max = Σ_{i=0}^{N} l_{i,i+1}, which corresponds to the length of the molecule in the completely unfolded, fully extended configuration. Thus, because of the finite resolution of the amino acid coordinates in the pdb file (which is 10^{-3} Å), L belongs to a finite set of values in the range [−L_max, L_max].
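As an illustration of the model just described, the following Python sketch evaluates the contact part of the WSME energy for a given set of native variables {m_k}. The contact map and energies are toy inputs (they mimic the ideal α-helix used later, with ε_ij = 1 for j = i + 4), and the −fL force term is omitted for brevity; this is a minimal sketch, not the code used in the paper.

import numpy as np

def wsme_contact_energy(m, eps, delta):
    """Contact energy of a WSME configuration (the -f*L force term is omitted).

    m     : length-N array of 0/1 native variables (1 = native residue)
    eps   : N x N array of contact energies eps_ij >= 0
    delta : N x N native contact map (1 if residues i and j are in contact)
    A native contact (i, j) contributes -eps_ij only if residues i..j are all native.
    """
    m = np.asarray(m)
    N = len(m)
    E = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            if delta[i, j] and np.all(m[i:j + 1] == 1):
                E -= eps[i, j]
    return E

# Toy usage: an ideal-helix-like contact map with (i, i+4) contacts.
N = 10
eps = np.zeros((N, N))
delta = np.zeros((N, N), dtype=int)
for i in range(N - 4):
    eps[i, i + 4] = 1.0
    delta[i, i + 4] = 1
print(wsme_contact_energy(np.ones(N, dtype=int), eps, delta))   # -6.0: all six contacts formed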
Confinement of the WSME model

Consider again the recursive scheme of (4) and set the starting point of the molecule in the middle of the cage. In order to confine the protein into a cage of size 2R with perfectly repulsive walls, when adding a native stretch from the (i − 1)-th to the (n + 1)-th residue (which are respectively at the distances L_{i−2} and L_n from the N-terminus), one has to require that every residue of this stretch lie inside the cage. This requirement may be enforced by considering also the lengths p^max_{i−1,n+1} and p^min_{i−1,n+1} of the native stretch (see axis x1 of figure 1) and inserting appropriate step functions in the recursive scheme, which suppress any term for which the stretch would protrude beyond the walls; here θ is the Heaviside step function, θ(x) = 1 for x ≥ 0 and θ(x) = 0 otherwise.

Translational freedom must also be taken into account. To this end, for a given configuration, instead of considering simply the end-to-end length, it is better to take as the relevant length the distance between the two farthest residues of that configuration. We call it the configuration effective length. Fixing the N-terminus in the center of the cage excludes from the partition functions z_n(L) the contribution of some of the configurations which have an effective length shorter than 2R (for example, in fig. 2a configuration a1 has an effective length shorter than configuration a2, but the former is forbidden while the latter is allowed). Thus, to take into account all the configurations with an effective length shorter than the cage size, the partition function has to be computed for different positions of the cage relative to the N-terminus. The final partition function will be the sum of the various partition functions at the different cage positions. Note that some configurations will appear many times in such a scheme (for example state a3 of fig. 2a), as a consequence of their greater translational freedom.

To obtain the final partition function one has to repeat this procedure considering all the possible positions of the cage relative to the N-terminus, i.e. to start with the range [−2R, 0] and to move the cage with a step ∆R equal to the resolution of the {l_ij} until the final range [0, 2R] is reached. To speed up computations we rounded the lengths to a resolution of 10^{-1} Å. For the 3-helix bundle we checked that this assumption does not modify the results, through a comparison with the results obtained at the resolution of 10^{-3} Å.

Equilibrium

In this study we considered six different structures. Three are real structures: a 3-helix bundle (pdb code 1PRB), protein G (pdb code 2GB1) and its final hairpin. The other three structures are an ideal α-helix of ten residues (radius 2.3 Å, pitch 5.4 Å, ε_ij = 1 if j = i + 4 and ε_ij = 0 otherwise), and 2-stranded and 3-stranded antiparallel β-sheets with 7 residues in each strand (the 3-stranded sheet is drawn in figure 3). In the following, code 'a010' refers to the ideal α-helix, 'b207' and 'b307' to the two β-sheets which have respectively 2 and 3 strands, and 'GB1h' refers to the final hairpin of protein G. To study the equilibrium response to confinement of the six structures, we computed, at different cage sizes R, thermodynamic quantities such as the Helmholtz free energy, the specific heat and the average fraction of native residues. For each structure we varied the distance 2R between the walls in a range from about the minimum effective length of the completely unfolded state to twice the maximum length of the completely unfolded state, i.e. from 4 Å (the distance between two subsequent amino acids is about 3.8 Å) to 2L_max.
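Before moving to the results, the cage-averaging procedure described in the previous subsection can be summarised by a short sketch: the constrained partition function is recomputed for each placement of the cage relative to the N-terminus and the results are summed. The helper Z_fixed_cage below is a hypothetical stand-in for the confined recursion; this is an illustrative sketch under that assumption, not the paper's implementation.

import numpy as np

def confined_partition_sum(Z_fixed_cage, R, dR=0.1):
    """Sum the constrained partition function over cage placements.

    Z_fixed_cage(lo, hi) is assumed to return the partition function of the
    chain with its N-terminus at the origin and hard walls at positions lo
    and hi (a hypothetical helper standing in for the confined recursion).
    The cage of width 2R is slid from [-2R, 0] to [0, 2R] in steps dR,
    mirroring the procedure described in the text.
    """
    total = 0.0
    for lo in np.arange(-2.0 * R, 0.0 + 1e-12, dR):
        total += Z_fixed_cage(lo, lo + 2.0 * R)
    return total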
If we denote with L^N_eff. the effective length of the native state (values are reported in table 1), we may naively distinguish between two different confinement regimes: (i) one, for 2R > L^N_eff., which disallows the more expanded conformations of the non-native basin but not the folded state, and (ii) the strong confinement regime, for 2R < L^N_eff., which also forbids the fully native state.

Table 1 also shows the effective length of the unfolded state, L^U_eff. This is obtained through a Monte Carlo simulation at the unfolding temperature, as the average effective length over the configurations belonging to the unfolded basin. Details about the Monte Carlo moves will be given in the next section. Since, without confinement, for a given set of binary variables {m_k}, the model admits 2^{Σ_{i=1}^{N}(1−m_i)} configurations, and this number grows exponentially with the number of non-native residues, we may expect that confinement in a cage of size R, with L_max > 2R > L^N_eff., gives a reduction of conformational entropy which affects the non-native basin more. Besides, one has to consider translational freedom, whose role is to further stabilize the most compact configurations irrespective of whether they belong to the native basin or not. Thus a structure with L^N_eff. > L^U_eff., as in the case of the ideal α-helix, does not undergo any stabilization of the folded state.

Figure 4 shows the free energy landscapes for the three real structures at different confinement sizes (for a better comparison the free energy of the completely folded state has always been set to zero). For the final hairpin of protein G confinement increases the free energy of both the native and non-native basin: both basins are destabilized, but the latter is more affected. On the contrary, for the 3-helix bundle both native and non-native basins are stabilized, with a slightly greater stabilization with confinement for the native state. Finally, for protein G, only the non-native basin is destabilized by confinement. The increased stability of the native state relative to the unfolded state should result in a higher unfolding temperature according to [16,17]:

∆T_f / T_f^0 = (T_f − T_f^0) / T_f^0 ∼ R^{−γ},    (6)

where here, and from now on, we denote with T_f^0 the unfolding temperature without confinement. For each protein, we have determined T_f as the temperature at which the average fraction of native residues M satisfies (M − M_∞)/(M_0 − M_∞) = 0.5, where M_∞ = 1/3 is the value of M at infinite temperature and M_0 ≈ 1 is its value at zero temperature.

The ideal α-helix is destabilized by confinement already from values of R lower than R = 15 Å, and no enhancement in the unfolding temperature could be detected for greater values of R. The other proteins exhibit an enhancement in their thermal stability to a different extent depending on their structure: the increase in unfolding temperatures is of a few percent for the 3-stranded β-sheet, the 3-helix bundle and protein G, while for the two β-hairpins T_f ≃ 6.6 T_f^0 (ideal 2-stranded β-sheet) and T_f ≃ 2.7 T_f^0 (final hairpin of protein G). Such drastically different behavior is due to the very short effective lengths of the native states of the two hairpins and to the limitation of the model, which projects the positions of all residues on a single direction and loses information on the real three-dimensional structure. For the 3-helix bundle and for protein G, the increases in unfolding temperature correspond respectively to about 1.5 K and 9.3 K. The values R_I^eq of the cage radius for which, at equilibrium, the unfolding temperature reaches its maximum, and the extent of the enhancement, are reported in table 2.

Table 2. Values of R for which the unfolding temperature reaches its maximum (T_f^max) and the extent of the enhancement; values of γ from fits to (6) and fit ranges. Fits in ranges from L^U_eff./2 to L_max for 'b207' and 'GB1h' result in the exponents γ quoted in the text.
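A minimal sketch of the T_f determination described above, assuming a callable that returns the equilibrium average fraction of native residues at a given temperature (a hypothetical stand-in for the exact solution of the model); the criterion is the one quoted in the text, (M − M_∞)/(M_0 − M_∞) = 0.5.

from scipy.optimize import brentq

def unfolding_temperature(avg_native_fraction, T_lo, T_hi, M_inf=1.0 / 3.0, M_0=1.0):
    """Locate T_f from the criterion (M - M_inf)/(M_0 - M_inf) = 0.5.

    avg_native_fraction(T) is a hypothetical callable returning the
    equilibrium average fraction of native residues at temperature T.
    """
    def g(T):
        M = avg_native_fraction(T)
        return (M - M_inf) / (M_0 - M_inf) - 0.5
    # g is positive at low T (M close to 1) and negative at high T (M close to 1/3),
    # so a bracketing root finder locates the crossing.
    return brentq(g, T_lo, T_hi)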
The enhancement in thermal stability can be appreciated in figure 5, where we report the specific heat as a function of temperature. The top panel also clearly shows another feature of the unfolding phase transition in a confined environment, namely a decreased cooperativity with confinement [17].

A fit to (6) of the unfolding temperatures as a function of R (figure 6) yielded the exponents γ reported in table 2. All values are in between 1.50 (3-helix bundle) and 2.35 (final hairpin of protein G). Remarkably, in this range we also find the theoretical values of γ for an excluded volume chain confined in a slit or in a cylinder (γ = 5/3) and for a gaussian chain in a slit, a cylinder or a sphere (γ = 2). Furthermore, a more careful analysis of the data in figure 6 suggested fitting, in the case of the β-hairpins, also in a more limited range of R values, going from L^U_eff./2 to L_max (figure 7). In this very low confinement regime γ = 1.72 for the ideal hairpin and γ = 1.6 for the final hairpin of protein G.

Figure 5. Specific heat as a function of the temperature at various confinement radii R for protein G and its final hairpin.

Figure 6. Shift in unfolding temperature as a function of the confining cage radius R. Fits to (6) in the ranges reported in table 2. The vertical lines represent the ranges spanned by the fits.

Figure 7. Shift in unfolding temperature as a function of the confining cage radius R for 'b207' and 'GB1h'. Fits to (6) in ranges from L^U_eff./2 to L_max. The vertical lines represent the ranges spanned by the fits.
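The fits to (6) can be illustrated by a simple straight-line regression in log-log space; as remarked earlier in the text, least-squares fits of power laws on small samples can give biased parameter estimates, so such numbers should be read with care. The data below are made up for illustration and are not taken from the paper.

import numpy as np

def fit_power_law(R, dTf):
    """Estimate gamma in dT_f ~ R^(-gamma) by a linear fit of log(dT_f) vs log(R)."""
    slope, intercept = np.polyfit(np.log(R), np.log(dTf), 1)
    gamma = -slope
    prefactor = np.exp(intercept)
    return gamma, prefactor

# Toy usage with made-up numbers:
R = np.array([10.0, 15.0, 20.0, 30.0, 40.0])
dTf = 50.0 * R ** -1.8
print(fit_power_law(R, dTf))   # approximately (1.8, 50.0)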
Kinetics

The folding kinetics have been studied by Monte Carlo (MC) simulations in which a two-component ternary variable (m_k, s_k), taking the values (1, 0), (0, +1) and (0, −1), has been associated to each residue k; when m_k = 0, s_k = ±1 gives the direction σ_kj of the native stretch from the k-th to the j-th residue. A single MC step consists in choosing a residue k with uniform probability among the N residues and changing the (m_k, s_k) variable with equal probability to either of its other two states. This move is alternated with a 0.1 Å translation of the entire protein to the left or to the right with equal probability. A few remarks are necessary. Suppose we have a native stretch from the i-th to the j-th residue and we transform the variable (m_k, s_k), i < k < j, from (1, 0) to (0, s_k = ±1). The direction of the new native stretch from the k-th to the j-th residue will be determined by s_k, while the new native stretch from i to k will inherit the direction of the old one from i to j. If instead we move the state of the k-th residue from (0, ±1) to (1, 0), two native stretches merge into one, with direction equal to the direction of the first old native stretch. At each MC step the confinement requirements must be checked.

Changes in folding rates have been estimated [18] from Kramers kinetics, k_f ∝ D exp(−∆G^barrier_U / k_B T), where ∆G^barrier_U is the free energy difference between the transition state and the unfolded state and D is a diffusion coefficient. Because the unfolded state is destabilized by confinement, the free energy barrier dividing the unfolded from the native state is smaller. Assuming that the free energy of the latter and the diffusion constant are not affected by confinement, and that the free energy of the unfolded state grows by a term ∼ T (R/L_0)^{−γ} [20,21], leads to the scaling law

∆ ln k_f ∼ R^{−γ}.    (7)

We determined folding rates as the inverse of mean first passage times, using 10^4 folding trajectories. The first passage time is defined as the time at which, starting from a random unfolded configuration, the weighted fraction of native contacts Q catches up with the threshold 0.9, which ensures that the protein has reached the folded state and has not got stuck in some intermediate. The temperature has been set to 0.9 T_f^0.

Table 3. Values of R for which the folding rate reaches its maximum k_f^max at T = 0.9 T_f^0 and the extent of the enhancement; values of γ from fits to (7).

When decreasing R, folding is accelerated until a certain size R_I^kin is reached; then folding rates start to decrease. Table 3 reports the R_I^kin values and the maximum extent of the folding rate enhancement. For the β-hairpins, the drastic difference between R_I^kin and R_I^eq is likely due to the fact that for very small confining cages, even if the native state is not compromised, the structure is squeezed so much that chain reconfigurations towards the folded state become difficult. The same reason should explain the small differences between R_I^kin and R_I^eq.

Figure 8. Shift in folding rates at T = 0.9 T_f^0 as a function of the confining cage radius R. Fits to (7) in the ranges reported in table 3.

Table 3 also reports the γ values obtained through a fit to (7), while figure 8 shows the folding rate behavior together with the fit lines. If for the two hairpins we consider the very low confinement regime, the exponents γ relative to folding rates are comparable with their equilibrium counterparts.
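A sketch of the folding-rate estimate from mean first passage times follows; run_folding_trajectory is a hypothetical helper standing in for the Monte Carlo dynamics described above, assumed to return the first time at which Q reaches the threshold (or None on timeout). This is an illustrative sketch under that assumption, not the paper's code.

import numpy as np

def folding_rate(run_folding_trajectory, n_traj=10000, q_threshold=0.9, max_steps=10**7):
    """Estimate k_f = 1 / <first passage time> from independent folding trajectories."""
    fpts = []
    for _ in range(n_traj):
        t = run_folding_trajectory(q_threshold, max_steps)
        if t is not None:
            fpts.append(t)
    return 1.0 / np.mean(fpts)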
Conclusions

We have investigated the effects of confinement on protein thermal stability and folding kinetics using a simple Ising-like model that we have contributed to develop and validate in recent years, now properly modified to include the confinement of a polypeptide chain into a slit. To study thermal stability we have made use of the property of the model of being exactly solvable at equilibrium, while to study the behavior of folding rates we have resorted to Monte Carlo simulations. Notwithstanding the simplicity of the model and its unidimensionality, we obtained results which follow the general trend of previous experimental studies [9,10,11,41] and simulations [16,17,18]: provided the native state is compact, when reducing the space available to a given protein, both the unfolding temperature T_f and the folding rate k_f grow until a certain confinement size which depends on the protein. If the confinement size is further decreased, unfolding temperatures and folding rates decrease. Furthermore, our results also support the theoretical prediction [13,17,18] that the enhancement depends on the confinement size R through the scaling law ∆T_f ∼ ∆ ln k_f ∼ R^{−γ}.

Among the six different protein structures studied in this work, one, a 10-residue ideal α-helix, does not show any enhancement of folding temperature and rate, because its native state cannot be considered compact when compared to the average unfolded state. For the other five structures we found that the exponents γ lie between the lower and upper values of 1.42 and 2.35, and that those obtained for unfolding temperatures from exact solutions at equilibrium are consistent with those obtained for the folding rate enhancement by Monte Carlo simulations.

The theoretical values of γ (γ = 5/3 for a chain with excluded volume confined into a slit or a cylinder and γ = 2 for a gaussian chain into a slit, a cylinder or a sphere) are not directly comparable to the results of our model, which differs from these theories both in the geometry (our chain is neither self-avoiding nor gaussian) and in the presence of specific interactions, which are neglected by these theories. Nevertheless, our results for γ, both from thermodynamics and from kinetics, are in the same range as the theoretical ones.

Furthermore, for a 3-helix bundle and for protein G, our results are consistent with those obtained through a more realistic model by Best and Mittal [18] for confinement of the same proteins into a slit: the γ values are consistent, and the maximum enhancement extents of folding temperatures and folding rates are also in good accordance. The two models also agree in the fact that protein G is more affected by confinement, but there is no accordance on the confinement radius at which the 3-helix bundle reaches its maximum folding temperature and its maximum folding rate.

Figure 1. Sketch of a configuration with residue m_{i−1} = 0. Axis x1 shows the relevant lengths of the entire molecule; axis x2 shows the relevant lengths of the native stretch from the (i − 1)-th to the (n + 1)-th residue.

Figure 2. Three different configurations which would give a contribution to the partition function constrained at length L without any cage. With cage a, only configurations 2 and 3 contribute; in b, only configurations 1 and 3 contribute.

Figure 4. Free energy profile as a function of the fraction of native residues M at various confinement radii R for the 3-helix bundle, protein G and its final hairpin. The free energy of the completely native state (M = 1) has been set to zero.

Table 1. Native state end-to-end length (L^N), effective length of the native state (L^N_eff.), maximum length of the fully unfolded state (L_max) and effective length of the unfolded state (L^U_eff.) for the six different structures.
2012-02-08T10:14:11.000Z
2011-12-06T00:00:00.000
{ "year": 2011, "sha1": "102343ce9931746cb0c08e9010b6161ed50c2dde", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1112.1193", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e88cca43cc390be2883519cbcca69a0c1e91acb8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Biology", "Medicine", "Chemistry" ] }
5844249
pes2o/s2orc
v3-fos-license
Use of the femoral vein ('groin injecting') by a sample of needle exchange clients in Bristol, UK Background Use of the femoral vein for intravenous access by injecting drug users (IDUs) (commonly called 'groin injecting') is a practice that is often observed but on which little is written in the literature. The purpose of this study was to describe self-reported data from a sample of groin injectors on the natural history and rationale regarding their groin injecting, to inform future research and the development of appropriate harm reduction strategies. Methods A convenience sample of groin injectors willing to participate in a semi-structured interview were recruited through the Bristol Drugs Project Harm Reduction Service. The interviews were conducted over the period of one week. Data on transition to groin injecting, rationale for use and incidence of problems were collected. Results Forty seven IDUs currently injecting in their femoral vein ('groin') were interviewed, 66% (n = 31) male and 34% (n = 16) female. Their mean age was 31 yrs (range 17 to 50 yrs; SD = 7.7). The mean length of time since first injecting episode was 9.6 yrs (range 6 mths to 30 yrs; SD = 7.0). The mean length of time since use of the groin began was 2.6 years (range 1 mth to 15 yrs; SD = 3.3). The mean length of time between first injection and first use of the groin was 7.0 yrs (SD = 7.0). One person had used no other area for venous access prior to using the groin, nine people had used one, nine people had used two, 10 people had used three, five people had used four and 13 people had used more than four areas. The main reason given for starting to inject in the groin was that 'no other sites were left'. However further discussion identified this meant no other convenient sites were accessible. Practises such as the rotation of injecting sites, as advocated in many harm reduction leaflets, were reported to be difficult and unreliable. The risk of missing the vein and subsequently losing the 'hit' was considered high. Use of the non-dominant hand to administer injections was problematic and deterred rotation between arms. The groin site was reported to be convenient, provide quick access, with little mess and less pain than smaller more awkward veins. The formation of sinuses over time facilitated continued use of the groin. Approximately two thirds of participants had experienced difficulty gaining IV access at their groin. Common problem included scar tissue occlusion, swelling and pain. Some reported infections and past history of deep vein thrombosis. Conclusion Use of the groin was perceived to be convenient by the study group. Problems following safer injecting advice were identified, including dexterity difficulties leading to fear of losing the 'hit'. Health problems at the groin site did not deter use. These results suggest further qualitative work is needed to explore the difficulties in following safer injecting advice in more detail and inform the development of more appropriate advice. Further quantitative work is necessary to establish the prevalence of groin injecting amongst IDUs and the incidence of associated problems. There is a need for a longitudinal study to examine the relationship between injecting technique and loss of patency of veins. If protective factors could be identified, evidence-based safer injecting advice could be established to preserve peripheral veins and reduce use of the groin site, which is high risk and associated with serious adverse consequences. 
Background The physical health complications of injecting drug use are well documented in the literature (e.g. [1][2][3][4][5][6]). The injections used by injecting drug users (IDUs) are nonsterile and not subject to quality control. This, coupled with frequent, chronic venous administration is associated with damage to the vascular structure. Vascular damage commonly begins with thrombophlebitis, leading to vein sclerosis and loss of patency [1,3], rendering the vein unusable. This leads to the IDU seeking other useable points of intravenous (IV) access. There is little research literature reporting patterns of vascular access in IDUs. If the natural history of IDU injecting patterns was better understood, effective strategies to protect vascular health could be explored. A paper by Darke et al [7] describes a pattern of use of various injecting sites over time, identified amongst a sample of injecting drug users in Sydney, Australia. The authors report most injectors began their injecting careers using the cubital fossa (inner crook of the arm), with a pattern of progression through forearm (after a median of two yrs from first injection), upper arm (3.5 yrs from first injection), hand (4 yrs from first injection), neck, feet, leg (all 6 yrs after first injection) and finally groin and peripheral digits (both 10 yrs from first injection). This suggests that use of the groin amongst this sample was reserved as a 'last resort' with other points of access being selected first. The use of the femoral vein in the groin by IDUs is of concern. It is linked with increased risk of vascular complications such as deep vein thrombosis (DVT), leg ulcers and vascular insufficiency. Its close proximity to the femoral artery and nerve also poses the risk of inadvertent trauma to these sites. Arterial injection is associated with arterial spasm and arterial thrombus formation [8]. The literature contains reports of adverse consequences from use of the groin site but little qualitative study of the factors that motivate this practice. The primary purpose of this study was to inform service development at Bristol Drugs Project (BDP) and compare the findings with that of Darke et al 5 . However the findings of this work are of interest to the wider harm reduction community because little is written in the literature about groin injecting. This study begins to shed some light on factors that motivate this practice. It also suggests future areas for research around groin injecting in order to inform the development of evidence-based safer injecting advice. Location This study was undertaken in Bristol, which is the largest city (pop. 382,000) in the South West region of England, UK. Participants were clients of BDP, which is a voluntary sector drug service. BDP is the only needle exchange and harm reduction agency in the city, but there are also pharmacy-based exchanges. Recruitment A convenience sampling method was used. Willing participants were recruited from attendees at BDP needle exchange base which is a fixed site service, and the Mobile Harm Reduction Service, which is a vehicle providing outreach needle exchange services across the city. Data was collected over a period of one week in 2004 by the same interviewer in all cases. All clients who used the needle exchange services staffed by the interviewer were invited to take part in the study. Participants were guaranteed strict confidentiality and data was collected anonymously. 
Data collection Data was gathered using a short semi structured interview based on a series of questions derived from previous discussions amongst needle exchange clients and staff. It explored injecting history and whether the person had ever or was currently experiencing problems using their groin. The study was reviewed and approved by the BDP management board. The interview was conducted after the needle exchange transaction was completed. Verbal consent was obtained. Data was recorded on a tick-box data collection form by the interviewer and by additional note writing. Analysis Data was coded and input into SPSS for Windows (v. 12, SPSS inc. Illenois, 2003) for analysis where appropriate. Descriptive data was analysed to identify emergent themes. Incidence of use of the groin and demographics The interview took approximately 10 minutes. A total of 92 clients were interviewed as part of the wider review and 47 (51%) of these were currently injecting in their groin. None of those who were not injecting in the groin presently had ever done so in the past. Of those injecting in their groin, 66% (n = 31) were male and 34% (n = 16) were female. The mean age of the groin injectors was 31 yrs (SD = 7.7), with the youngest being 17 and the oldest being 50 yrs. Twelve (26%) of the groin injectors were between 17-24 yrs, 29 (62%) were between 25 and 39 yrs and 6 (13%) were between 40-50 yrs. Length of time of groin injecting The mean length of time since use of the groin site began was 2.6 years (SD = 3.3), with the shortest time being 0.08 years (1 month) and the longest time being 15 years. The mode length of time was 2 years, reported by 6 people (13%). Time from first injecting episode to first use of the groin The mean length of time between first injection and first use of the groin for IV access was 7.0 yrs (SD = 7.0). Two people (4%) had been using the groin for IV access since they first began injecting, both were male. One had tried to inject into the arms unsuccessfully prior to using the groin, so switched to the groin straight away. This person was very thin with no visible veins, so chose to use the groin to ensure IV access. The other person had not tried any other injecting sites prior to the groin, as all his associates who were injecting at the time were using the groin, encouraging him not to attempt to try any others first. Twenty five people (53%) had begun using the groin within 5 years from their first injection and 11 (23%) had begun using the groin 5 or more yrs but less then 10 years since their first injection. Eleven people (23%) had begun using the groin 10 or more yrs since their first injection. The longest time between first injection and use of the groin was 23 yrs in a male who had begun injecting 25 yrs ago but only started using the groin 2 yrs ago. Areas used prior to the groin People were asked to report which areas they had injected into prior to using the groin. One (2%) person had used no other area for venous access prior to using the groin and had been using this site for 15 years. Nine (19%) people had used one other area and in all cases this was the cubital fossa (inner crook of the arm). Nine (19%) people had used two areas prior to the groin, all of these cases had used the cubital fossa, seven had also used site(s) on the legs, one had used the foot and one had used the neck. Ten (21%) people had used three areas prior to using the groin. 
Again in all cases the cubital fossa had been used, nine of the ten had used sites on the legs, six had used the feet and five had used the neck. Five (11%) people reported injecting in four areas prior to the groin. All had used the same four areas, which were the cubital fossa, legs, feet and neck. Thirteen (28%) people had used more than four areas prior to the groin and classed themselves as having used 'everywhere'. Why did you start injecting in your groin? The main reason given for starting to inject in the groin was that 'no other sites were left'. However as many peo-ple had not tried all other sites, this was probed further. Further discussion found that in the majority of cases, no other convenient sites were perceived to be accessible. Many reported that practises such as the rotation of injecting sites, as advocated in many harm reduction publications, were found to be difficult and unreliable. The risk of 'losing' an injection (missing IV access) through poor injecting technique was considered to be too big a risk, presumably because subcutaneous and intramuscular drug absorption does not provide the same euphoria. Use of the non-dominant hand to administer injections was also reported to be difficult and deter rotation of injecting sites between arms, or require third party assistance. The groin site was reported by most to be convenient, provide quick access, with little mess and less pain than smaller more awkward veins. The gradual formation of sinuses in the groin over time was reported to further facilitate continued use of this site. Drugs injected into the groin and equipment used The most common drug injected by the group was heroin, used by 46 (98%) of interviewees. Nineteen people (40%) injected crack cocaine and eight (17%) injected amphetamine. Twenty four people (51%) currently injected one main drug only into the groin, with 23 injecting heroin and one injecting amphetamine. Twenty people (43%) injected two main drugs into the groin, for 16 of these people their main drugs were heroin and crack cocaine. The remaining four injected heroin and amphetamine. Three people injected three main drugs into the groin and for all these were heroin, crack cocaine and amphetamine. No other drugs were reported to be injected into the groin within the group. The most common injecting equipment used to access the groin was detachable 1 ml syringes with orange needles (0.5 × 25 mm, 25G) used by 33 people (70%), 11 people (23%) used the same syringes with blue needles (0.6 × 30 mm, 23G) and one person (2%) used green needles (0.8 × 40 mm, 21G). Seven people (15%) used 1 ml insulin syringes. Numbers exceed 100% as four people regularly used more than one type of equipment for groin access. Condition of the groin site and history of access problems Participants were asked whether they were currently or had in the past experienced any problems gaining IV access using the groin site. They were also asked to selfassess the current condition of their groin based on a five point Likert scale: 'very poor', 'poor', 'OK', 'good' or 'very good'. Approximately one third of people reported never having had a problem gaining IV access at their groin site (n = 16, 34%). This group comprised 11 males and five females. Five described the current state of their groin site as 'ok', five said it was 'good' and six said it was 'very good'. Their mean length of use of the groin site was 1.1 yrs (SD = 1.2). 
Approximately two thirds of people had experienced problems with IV access at the groin site on one or more occasions in the past, or were currently experiencing problems (n = 31, 66%). This group comprised 20 males and 11 females. When asked to describe the current condition of their groin two said it was 'very poor', seven said it was 'poor', 14 described it as 'ok', three said it was 'good' and five said it was 'very good'. Their mean length of use of the groin site was 3.3 yrs (SD = 3.8). When asked to describe the types of access and health problems experienced, a common problem reported was hardened scar tissue occluding the site. This was said to be difficult to penetrate and a cause of needles bending and breaking, causing some people to select longer, thicker needles. Another common problem was swellings in the groin area, accompanied at times by pain. Some people reported infections at the injecting site. Some reported having experienced 'blood clots' or 'DVT' (deep vein thrombosis). It is unknown whether these had been medically diagnosed or treated. Discussion It is of interest that in the overall sample (n = 92) all those who had tried groin injecting (n = 47) continued to do so, despite two thirds having experienced problems with access and a range of health problems at the site. These included reports of infections, hardened tissue, swelling and DVT, which is consistent with problems described in the literature. Further exploration of factors that discourage use of the groin amongst non groin injectors would be of interest. Comparisons between vein health and injecting practices of groin injectors and those who do not inject in the groin would establish protective factors. A longitudinal study is necessary for this to establish the factors that protect and damage the patency of veins. Modifiable protective factors could inform safer injecting advice. The average length of time of groin injecting amongst the sample was 2.6 years with the longest time being 15 years, illustrating that this site of access can be used for considerable time. The formation of sinuses around hardened scar tissue was seen to be an advantage and facilitated continued use, despite posing the risk of breaking needles. Further work to explore responses to problems with the groin site and long-term consequences would be of interest. Darke et al [7] reported an average time of 10 years from first injection to use of the groin site in their sample of IDUs in Sydney. In this study the average length of time from first injection to use of the groin site was 7 yrs, with more than half (53%) of participants having begun groin injecting within 5 yrs. One theory for the earlier use of the groin site in Bristol is that the use of acidifiers such as citric acid, necessary to dissolve the brown base heroin common in Western Europe, shortens the usable life of veins. Acidifiers are not used in Australia as street heroin is in a soluble form. However further work would be needed to confirm is this theory is true. A pattern of use of various sites prior to the groin site was found in the majority in this study, similar to the findings of Darke et al [7]. However in this study there were some for whom rapid progression to use of the groin occurred, for example nine participants had only used one site, the cubital fossa, prior to the groin and a further nine had only used two sites. 
The qualitative data gathered from the sample illustrated that the groin was viewed as 'easy-touse' with more security of delivering the injection intravenously than other, more awkward sites. This study showed that the groin site was favoured over others for utility and convenience. Other useable sites did potentially exist in many, such as the cubital fossa of the dominant arm, but were viewed as difficult to use and risked loss of the injection. A need for safer injecting information was highlighted in many in this study and practice within the agency has been developed to address this. Future work should examine decision making around use of the groin and whether information on health risks coupled with support to access more awkward peripheral veins can deter use of the groin. However caution is needed not to promote use of sites that require third party assistance, as this may reduce the level of control the IDU has over the injecting process and increase the risks of transmission of hepatitis C and other blood-borne pathogens. Several points can be learned from this work to inform the delivery of harm reduction messages to this client group: 1. Recognition of the importance of utility and convenience when selecting an injecting site. Had this study not enquired about previous sites used and probed those who said they had 'no other sites' left, it may have been wrongly assumed that they did indeed have no useable sites. The identification of the 'utility and convenience' factor and the difficulties in using the non dominant hand for drug injecting has prompted the authors to consider the implementation of structured safer injecting training for IDUs, in order to deter use of the potentially high risk groin site. Such training, run by experienced nurses or anaesthetists, could develop injecting skills amongst IDUs in order to improve injecting techniques and promote the use of other available peripheral sites on the upper limbs. Such services could be integrated within safer injecting facilities. 2. The data on choice of injecting equipment is encouraging, as the majority of participants used detachable nee- dles, which are intended for intravenous access. Most also chose to use short needles (orange), which it is believed to reduce the extent of vascular assault when injecting. This practice has been promoted by needle exchange workers locally. However the identified use of insulin syringes and longer needles (e.g. blue) in some is of concern. Insulin syringes are fragile and intended for subcutaneous use only hence carry a risk of breaking, especially if scar tissue forms that is tough to penetrate. Longer needles may increase the assault on the vascular system or increase the risk of injuring the surrounding nerves or arteries. 3. The contribution of the quantities, frequency of injecting and poly drug use to vascular damage should be studied. Due to tolerance to the effects of many psychoactive drugs, injectors often use increasing quantities and inject with increasing frequency as their injecting careers lengthen. Some also progress to poly drug use. Just over half of this sample (51%) injected one drug only, and in all but one cases this was heroin with the remaining case being amphetamine. However of the remaining 49% (n = 23) who injected more than one drug, 19 were injecting crack cocaine. Cocaine is a potent local anaesthetic and could potentially increase the risks of using the groin site (and other sites) due to lack of sensation on injecting. 
Further work is needed to quantify and explore these risks and also to assess the longevity of intravenous access in relation to single and poly drug use and frequency of use. This study focused on the practices of groin site injectors in Bristol using the BDP needle exchange services, illustrating past and current injecting practices, identifying learning points for safer injecting advice delivery and future research. A convenience sample was used and the sample size was dictated by willingness to participate in the given time frame of the study. The results should not be extrapolated to the rest of the UK or those not in contact with BDP. Conclusion This study found, amongst a convenience sample of IDUs in Bristol, that the average time from first injection to use of the groin site was 7 yrs, with the majority having begun use of this site within five years. Reasons for use of the groin site centred on utility, convenience, ease of use and reduced risk of losing the euphoric effects due to extravenous delivery. Several key points were generated to inform the BDP harm reduction service, including the idea of developing safer injecting workshops for IDUs and encouragement that messages on use of equipment were successful. Several areas for future research have been prompted by this study. Groin site injecting is a risky practice that appears to have had little mention in the literature other than the reporting of case studies. The health risks are significant therefore further work to better under-stand this practice amongst IDUs and how to deter it would be of benefit.
2014-10-01T00:00:00.000Z
2005-04-15T00:00:00.000
{ "year": 2005, "sha1": "fdbef976b09196534480f02044ae8beafa613d8b", "oa_license": "CCBY", "oa_url": "https://harmreductionjournal.biomedcentral.com/track/pdf/10.1186/1477-7517-2-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "437ed6906e42ebf813e3062976fe8b1dcf45c4c2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119106986
pes2o/s2orc
v3-fos-license
Fully Constrained Mass Matrix: Can Symmetries alone determine the Flavon Vacuum Alignments?

A set of fully constrained Majorana neutrino mass matrices consistent with the experimental data was proposed in 2012. In the framework of the representation theory of finite groups, it was recently shown that a fully constrained mass matrix can be conveniently mapped into a sextet of Σ(72 × 3). In this paper, we expand on this work and introduce a formalism to incorporate additional symmetries onto Σ(72 × 3), so that the vacuum alignment of the sextet is entirely determined by the flavour symmetries alone. The complete flavour group is Σ(72 × 3) × X24 × X24, where X24 is a finite group specifically constructed with the required symmetries. Here, we define several flavons which transform as multiplets under Σ(72 × 3) as well as X24. Our construction ensures that the vacuum alignment of each of these flavons is a simultaneous invariant eigenstate of specific elements of the groups Σ(72 × 3) and X24, i.e. the vacuum alignment is fully determined by its symmetries. The flavons couple together uniquely to reproduce the fully constrained sextet of Σ(72 × 3).

PACS. 14.60.Pq Neutrino mass and mixing – 11.30.Hv Flavor symmetries

Introduction

More than two decades [1,2] of experiments in neutrino oscillations have provided us with measurements of the neutrino mixing angles θ_12, θ_23, θ_13 as well as the mass-squared differences ∆m^2_21, ∆m^2_31 [3,4]. Yet, several features of neutrinos remain a mystery. The ordering of the neutrino masses, CP violation in the neutrino sector, the nature of neutrinos (Majorana or Dirac) and the existence of sterile neutrinos are some of them. Parameters such as the light neutrino mass and the complex phases in the mixing matrix also need to be measured. Many of these questions are expected to be resolved by future experiments in the coming decades [5,6,7,8,9,10,11].

The initial measurements of large solar (θ_12) and atmospheric (θ_23) mixing angles stimulated the theoretical study of flavour symmetries in the neutrino sector based on discrete finite groups [12,13,14,15]. Tribimaximal mixing [16], with θ_12 = sin^{−1}(1/√3), θ_23 = π/4, θ_13 = 0, was widely used as a template for building models in the neutrino sector. With the measurement of a non-zero reactor (θ_13) mixing angle, inconsistent with tribimaximal mixing, theorists have turned to alternative mixing schemes. A natural approach is to extend tribimaximal mixing with one or more free parameters [17,18,19,20,21,22,23,24,25,26]. One such ansatz, called tri-phi-maximal mixing (TφM) [27], leads to a mixing matrix of the form

U_TφM = ( √(2/3) cos φ                1/√3        √(2/3) sin φ
          −cos φ/√6 + sin φ/√2        1/√3        −sin φ/√6 − cos φ/√2
          −cos φ/√6 − sin φ/√2        1/√3        −sin φ/√6 + cos φ/√2 ).    (1)

The angle φ parametrises the non-zero reactor mixing angle. Like tribimaximal mixing, tri-phi-maximal mixing also has a trimaximal second column and is CP conserving.

An ansatz of Majorana neutrino mass matrices which leads to TφM with φ = ±π/16 was proposed [28] shortly after the discovery of non-zero θ_13 by the Daya Bay experiment. These matrices are fully constrained, in the sense that they do not contain free parameters. Hence, they also provided the neutrino mass ratios¹ m_1 : m_2 : m_3 = √2 tan(3π/16) : 1 : … These are consistent with the measured neutrino mass-squared differences and also predict the experimentally undetermined light neutrino mass to be around 25 meV. In Fig. 1, we compare these ratios with the experimental mass-squared differences.

¹ Given as (2 + … : 1 : … in Ref. [28].
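As a rough numerical cross-check of the last statement, the first quoted ratio m_1/m_2 = √2 tan(3π/16) can be combined with a representative solar mass-squared difference to recover a light-neutrino mass scale close to the quoted ~25 meV. The sketch below is illustrative only: the value of ∆m^2_21 used is an assumed, typical global-fit number and is not taken from this paper.

import numpy as np

# Assumed experimental input (typical global-fit value, not taken from this paper):
dm2_21 = 7.5e-5   # eV^2

# Ratio quoted in the text: m1/m2 = sqrt(2) * tan(3*pi/16) ~ 0.945, so m1 < m2.
r = np.sqrt(2.0) * np.tan(3.0 * np.pi / 16.0)

# dm2_21 = m2^2 - m1^2 = m2^2 * (1 - r^2)  =>  m2 = sqrt(dm2_21 / (1 - r^2))
m2 = np.sqrt(dm2_21 / (1.0 - r * r))
m1 = r * m2
print(m1 * 1e3, m2 * 1e3)   # in meV; the lighter mass comes out near the ~25 meV quoted in the text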
Recently [29] it was shown that the group Σ(72 × 3) can be used to model such fully constrained mass matrices. Σ(72 × 3) can be obtained using four generators, namely C, E, V and X [30]. For the three-dimensional representation, we have The tensor product expansion of two triplets of this group is given by Σ(72 × 3) is the smallest group which produces a complex sextet from the tensor product of two identical triplets as shown in Eqs. (5,6). Note that the triplets of the continuous group SU (3) also have the same tensor product expansion. In Ref. [29], we assigned the right-handed neutrinos to be a conjugate triplet, In the Majorana mass term, two of these conjugate triplets couple to produce a conjugate sextet, where ν Rj .ν Rk are the Lorentz invariant products of the right-handed neutrino Weyl spinors. S ijk are the familiar Clebsch-Gordan coefficients for the symmetric tensor product of two triplets of SU (3). We use the conventional basis where the non-zero coefficients are given by We also introduced a flavon sextet, which couples with the conjugate sextet, Eq. (8), to produce the Σ(72 × 3)-invariant mass term, The flavon sextet acquires a Vacuum Expectation Value (VEV) through Spontaneous Symmetry Breaking (SSB) and this VEV determines the structure of the mass matrix. Comparing Eq. (10) with Eq. (11), it is clear that there is a one-to-one correspondence between the components of the sextet and the elements of the 3 × 3 complex-symmetric Majorana mass matrix. A specific VEV of the sextet fully constrains the mass matrix. The VEVs which correspond to the Majorana mass matrices, Eqs. (2), are In Ref. [29] we constructed flavon potentials which, through SSB, resulted in these VEVs and this reproduced the mass matrices, Eqs. (2). These mass matrices are diagonalised by 2 × 2 unitary matrices. The mixing matrix of the form TφM(φ = ± π 16 ) is obtained as the product of a 3 × 3 trimaximal contribution from the charged-lepton sector and the above mentioned 2 × 2 contribution from the neutrino sector. The mixing angles extracted from TφM(φ = ± π 16 ) are quite close to the experimental values. We used higher order corrections in the charged-lepton sector to account for the small discrepancy between the TφM(φ = ± π 16 ) and the experimental values. Vacuum Alignment in Flavour Space In this section, we briefly review the salient features of model building using flavons. Specifically we discuss the construction of the Majorana mass matrix involving three families of right-handed neutrinos 2 . The three neutrino states are assumed to form a triplet under a discrete flavour group, in general a subgroup of the continuous group, U (3). Since the neutrinos are assumed to transform as a triplet under the flavour group, we calculate the tensor product expansion of two such triplets. This expansion gives rise to several neutrino-neutrino terms which transform as multiplets under the flavour group. Flavons also transform as multiplets under the flavour group. The neutrino-neutrino multiplets and the corresponding flavon multiplets (conjugates) couple, leading to flavour group invariant mass terms. In model building, once we settle on a suitable flavour group and a set of flavon multiplets, there are two questions that determine the mass matrix. What are the values of the coupling constants that appear along side the invariant mass terms and how are the flavon VEVs aligned? We discuss these aspects in the rest of this section. 
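The section above notes a one-to-one correspondence between the components of the Σ(72 × 3) sextet and the entries of the 3 × 3 complex-symmetric Majorana mass matrix. The following Python sketch illustrates that bookkeeping with a simple packing/unpacking pair; the component ordering and any Clebsch-Gordan normalisation are placeholders chosen for illustration and do not reproduce the conventions of Eqs. (9)-(11).

import numpy as np

def pack_symmetric(M):
    """Map a 3x3 complex symmetric matrix to its 6 independent components
    (illustrative ordering: diagonal entries first, then off-diagonal)."""
    assert np.allclose(M, M.T)
    return np.array([M[0, 0], M[1, 1], M[2, 2], M[0, 1], M[1, 2], M[0, 2]])

def unpack_symmetric(s):
    """Inverse of pack_symmetric."""
    M = np.zeros((3, 3), dtype=complex)
    M[0, 0], M[1, 1], M[2, 2] = s[0], s[1], s[2]
    M[0, 1] = M[1, 0] = s[3]
    M[1, 2] = M[2, 1] = s[4]
    M[0, 2] = M[2, 0] = s[5]
    return M

# Round-trip check on an arbitrary complex symmetric matrix.
M = np.array([[1, 2j, 3], [2j, 4, 5], [3, 5, 6 + 1j]], dtype=complex)
assert np.allclose(unpack_symmetric(pack_symmetric(M)), M)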
Corresponding to each of the invariant mass term we associate a coupling constant. These coupling constants are independent of each other and their values are arbitrary from a theoretical point of view, i.e. neither the flavour symmetries nor the features of the flavons can be used to predict them. However, the presence of coupling constants could be an advantage when the aim of the model is to explain only certain features of the mass matrix while leaving other features untouched, i.e. the coupling constants in the model allows us to leave a few degrees of freedom within the mass matrix unconstrained by the theory. If the unconstrained features are experimentally known, we can fit our model with the data and subsequently determine the values of the coupling constants. Else, the values of these constants are left unknown. The flavour structure of a model is also determined by the relative orientation between the fermion flavour eigenstates and the flavon VEVs. Assigning the three flavours (families) of fermions as a triplet under the flavour group implies that they are aligned along the basis states of the representation. To obtain the alignment of flavon VEVs, we construct a flavon potential invariant under the flavour group. This potential will have a discrete set of extrema points. Through the mechanism of SSB, the flavon acquires a VEV which corresponds to one of these extrema. The fact that the symmetry is discrete, limits the extrema points to a finite set. SSB randomly chooses one among these extrema as the vacuum alignment. By changing the nature of the flavon potential we may alter the set of extrema points and thus change the possible vacuum alignments. The flavon VEVs form the building blocks of the mass matrix, so the alignment of the VEVs in flavour space has important consequences for the structure of the mass matrix. We expect that a given alignment has specific symmetry properties under the flavour group, which in turn impart specific features to the mass matrix. Let us use the discrete group S 4 as an example to demonstrate the alignment of states in the flavour space 3 . The triplet representation (3) of S 4 corresponds to the 24 proper rotations in the three dimensional real space that leaves a cube invariant. When we assign the fermions as a triplet, we are assigning its components, i.e. the three flavour states, as the basis states of the triplet representation. In the widely used basis of 3, the basis states are aligned along the face centres of cube, i.e. the x, y and z axes as shown in Fig. 2. The cube remains invariant under rotations about these axes by multiples of π/2. Therefore, the three fermion flavour states correspond to the three cyclic C 4 subgroups of S 4 . To couple with two triplets (3) of fermions, we introduce a flavon which transforms as a 3 . The representation 3 also corresponds to 24 rotational symmetries of a cube (12 proper and 12 improper rotations). We may construct a potential whose extrema correspond to flavon orientations directed towards the face-centres of the cube, as shown in Fig. 2 (left). Each of these extrema remains invariant under a C 2 × C 2 subgroup of S 4 consisting of 4 proper and improper rotations. Six such subgroups exist and the number of extrema orientations (the number of faces of the cube) is simply 24/4 = 6. When the flavour symmetry group S 4 is broken by SSB, the resulting vacuum alignment will be along one of these extrema. So the symmetry breaking is not complete, i.e. 
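The S4 cube example above counts the face-centre alignments as 24/4 = 6 from the order of the residual symmetry. The sketch below reproduces that orbit counting numerically using the 24 proper rotations of the cube; note that the paper works with a representation that also contains improper rotations, where the residual group is C2 × C2 rather than the C4 found here, but the counting 24/4 = 6 is the same. This is a generic illustration, not code from the paper.

import itertools
import numpy as np

# The 24 proper rotations of the cube: signed 3x3 permutation matrices with det = +1.
rotations = []
for perm in itertools.permutations(range(3)):
    for signs in itertools.product([1, -1], repeat=3):
        R = np.zeros((3, 3), dtype=int)
        for row, col in enumerate(perm):
            R[row, col] = signs[row]
        if np.isclose(np.linalg.det(R), 1.0):
            rotations.append(R)
assert len(rotations) == 24

v = np.array([1, 0, 0])                                   # a face-centre alignment
stabiliser = [R for R in rotations if np.array_equal(R @ v, v)]
orbit = {tuple(R @ v) for R in rotations}
print(len(stabiliser), len(orbit))                        # 4 and 6: orbit size = 24 / |stabiliser|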
a C 2 × C 2 subgroup remains as the residual symmetry. It is also possible to construct a potential whose extrema are oriented in directions with no symmetry properties. Such a potential will have 24 distinct extrema as shown in Fig. 2 (right). A VEV along one of these extrema breaks S 4 completely so that there remain no residual symmetries. By appropriately tuning the potential, we will be able to orient the extrema and the resulting VEV in any direction we may want. This seems to be true for all discrete groups, not just S 4 . We note that a considerable number of publications rely almost entirely on flavon potentials to determine their vacuum alignments. Authors utilise quite complicated potentials to obtain VEVs which are phenomenologically viable, but they fail to provide a justification for these VEVs in terms of the symmetries of the flavour group. Even though this procedure is technically valid, we argue that it goes against the very spirit of using the properties of discrete groups for determining the flavour structure. If the VEV is made to orient in an arbitrary direction with no apparent connection to the original symmetry, the whole purpose of using discrete symmetries can be called into question. We argue that the orientation of the fermion basis states as well as the flavon VEVs should be determined by their symmetries alone. These symmetries are nothing but the subgroups of the discrete flavour group. The mathematical elegance of the subgroup structure of the flavour group should manifest as the restrictiveness of the orientations of the flavour states and thus the predictiveness of the flavour model. When the flavon vacuum alignments, Eqs. (12,13), for the sextet of Σ(72 × 3) were proposed [29], they were not completely justified with the help of their symmetry properties alone. In this paper, we combine Σ(72 × 3) with a new discrete symmetry group which X 24 . We introduce flavons which transform under both Σ(72 × 3) and X 24 . Our flavon VEVs uniquely break the combined flavour group into its subgroups. In other words, the VEVs are completely determined by their symmetries alone. These flavons are coupled together to obtain the sextet of Σ(72× 3). This sextet in turn couples with the neutrino triplets resulting in the Majorana mass term. The Discrete Group X 24 We construct discrete group, X 24 , using the following generators: where ω = e i 2π 3 ,ω = e −i 2π 3 are the cube roots of unity and τ = e i π 4 ,τ = e −i π 4 are the eighth roots of unity. The largest cyclic subgroup of this group is C 24 , generated by ωτ and hence the subscript 24 in X 24 . These generators, Eq. (14), are selected so that the group constructed from them helps to uniquely define the required flavon VEVs. The rest of this section covers the mathematical study of the properties of this group. A reader who is more inclined towards applying the group theoretical results for the construction of the VEVs and the mass matrix may skip over to section 4 and may revert to this section when it is deemed necessary. As the first step in analysing X 24 , we construct the group elements, Using C(τ ) 1 , C(τ ) 2 and B we obtain the group element, A and |B| generate the group S 3 × S 3 which forms a subgroup of X 24 . To show this we obtain, where D 1 , E 1 and D 2 , E 2 in Eqs. 
(18, 19) separately form the generators of the group S_3, because they satisfy the defining group presentation of S_3, along with the corresponding relationship between the generators. The S_3 group elements g_1 and g_2, generated by D_1, E_1 and D_2, E_2 respectively, can be expressed as g_1 = D_1^{i_1} E_1^{j_1} and g_2 = D_2^{i_2} E_2^{j_2}, where i_1, i_2 ∈ {1, 2} and j_1, j_2 ∈ {1, 2, 3}. The first set of generators, Eq. (18), commutes with the second set, Eq. (19), i.e. every element of the first set commutes with every element of the second, so that we obtain the direct product of two S_3 groups. Thus we show that A and |B| generate the group S_3 × S_3, with the total number of elements equal to 2 × 3 × 2 × 3. Note that the elements of S_3 × S_3 in the basis given by Eqs. (18, 19) are matrices with '1's and '0's only.

C(τ)_1 and C(τ)_2, Eqs. (15, 16), individually generate the cyclic group C_8. In X_24 we can find two more such generators of C_8, Eqs. (26, 27). Four elements, similar to Eqs. (15, 16, 26, 27), which individually generate the cyclic group C_3, can also be found, and we also find a fifth independent C_3 generator. Using 6 × 6 special unitary diagonal matrices, the maximum number of independent C_n generators that can be constructed is five, and in Eqs. (28-32) we have listed all of them for C_3. For the case of the diagonal C_8 subgroups of X_24, it so happens that the upper and the lower 3 × 3 diagonal matrices are individually special unitary. This additional constraint limits the total number of independent generators to four, i.e. Eqs. (15, 16, 26, 27).

Eqs. (15, 16, 26, 27, 28-32) constitute an exhaustive list of generators producing all the diagonal elements within X_24. These elements form the subgroup C_8 × C_8 × C_8 × C_8 × C_3 × C_3 × C_3 × C_3 × C_3. The diagonal elements commute with each other, so they also form the centre (largest abelian subgroup) of X_24. Note that 3 and 8 are co-prime numbers, which implies that C_8 × C_3 is C_24. This can also be inferred from the multiplication of the C_8 and C_3 generators. In other words, the group C_24 × C_24 × C_24 × C_24 × C_3 forms the centre of X_24.

Every representation matrix of X_24 is of the form of a representation matrix of S_3 × S_3 with phases replacing a certain number of '1's in the S_3 × S_3 matrix. These phases can be extracted out using a diagonal phase matrix, i.e. an element of the centre of the group. In other words, any element of X_24 can be obtained by left multiplying (or right multiplying) the corresponding element of S_3 × S_3 with an appropriate diagonal phase matrix. Therefore, C_24 × C_24 × C_24 × C_24 × C_3 and S_3 × S_3 form a normal subgroup and the associated quotient group, respectively, of X_24. Using this information, we may express X_24 as the semidirect product

X_24 = (C_24 × C_24 × C_24 × C_24 × C_3) ⋊ (S_3 × S_3).

Any element of X_24 can be uniquely expressed as a product of powers of the above generators, where m_1, ..., m_4 ∈ {1, ..., 8}, n_1, ..., n_5 ∈ {1, 2, 3}, i_1, i_2 ∈ {1, 2}, j_1, j_2 ∈ {1, 2, 3}. So it is clear that the order of the group X_24 is 8^4 · 3^5 · 2^2 · 3^2. We used the group theory package GAP and verified that the group generated by A, B has this order, thus confirming our calculations. We also verified that the sextet representation, Eq. (14), is irreducible. We note that X_24 is not a subgroup of U(3).
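The section above identifies diagonal phase matrices generating C_8 and C_3 subgroups and notes that ωτ generates the largest cyclic subgroup, C_24. As a minimal numerical illustration, the sketch below computes the multiplicative order of such a diagonal element; since the explicit 6 × 6 generators of Eq. (14) are not reproduced in the extracted text, a representative phase element built from ω and τ is used instead, purely for illustration.

import numpy as np

def multiplicative_order(U, max_order=200, tol=1e-9):
    """Smallest n >= 1 with U^n = identity, for a finite-order unitary matrix."""
    P = np.eye(U.shape[0], dtype=complex)
    for n in range(1, max_order + 1):
        P = P @ U
        if np.allclose(P, np.eye(U.shape[0]), atol=tol):
            return n
    raise ValueError("order exceeds max_order")

omega = np.exp(2j * np.pi / 3)     # cube root of unity
tau = np.exp(1j * np.pi / 4)       # eighth root of unity

# Representative diagonal 6x6 element carrying the phase omega*tau (illustrative only).
D = (omega * tau) * np.eye(6, dtype=complex)
print(multiplicative_order(D))     # 24 = lcm(3, 8), matching the C_24 noted in the text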
The discussion of the Dirac mass term for the neutrinos (couplings between the right-handed neutrinos and the lefthanded lepton doublets) as well as for the charged-leptons (couplings between the right-handed charged-leptons and the left-handed lepton doublets) are omitted here. We assume that the Dirac sector follows the details as given in Ref. [29] in using Σ(72 × 3) to obtain the relevant mass matrices. The complete flavour group for our model is Σ(72 × 3) × X 24 × X 24 . Why we have used two copies of X 24 will become apparent during the course of this section. Table 1 shows how the right-handed neutrinos (ν R ) and the flavons (φ,φ, ∆) transform under the flavour group. In this paper, we use Latin and Greek letters to denote the indices which transform under Σ(72 × 3) and X 24 respectively, i.e. ν Ri , φ αi ,φ αi , ∆ αβ . Using the C-G coefficients, Eqs. (9), and considering the transformation properties given in Table 1, we construct the invariant term in the Majorana sector, where the summation is over all repeated indices. Comparing this invariant with Eq. (11), we obtain The flavonφ (andφ) can be considered as a set of six Σ(72×3)-triplets. In Eq. (37), we have a composite system of these triplets coupled together with ∆ to obtain ξ which is a sextet under Σ(72 × 3) and an invariant singlet under X 24 . The flavonsφ,φ and ∆ acquire VEVs through SSB. Let these vacuum alignments to be Consequently, the expression for the 3 × 3 Majorana mass matrix becomes The Flavon Vacuum Alignments In this section we show that the VEVs, Eqs. (38), can be uniquely defined by their symmetries. More concretely, we show that they can be expressed as unique and simultaneous invariant eigenstates of a set of group elements. These elements constitute a subgroup of the flavour group, Σ(72 × 3) × X 24 × X 24 . Since the VEVs remain invariant under the action of these elements, they break the flavour group into this subgroup. First we consider the flavon ∆. Consider the group element where the rows and columns of ∆ corresponds to the first and the second X 24 in the direct product. It is clear that, this operation multiplies all the vanishing elements in the VEVs with ω orω. Therefore, invariance of the VEV under O C∆ ensures that these elements vanish. Consider the group element where I is the identity. As a matrix equation, the operation of this element on the VEV can be written as This operation interchanges the rows 1, 2, 3 of the VEV with the rows 6, 5, 4 respectively. Invariance under this operation ensures that the rows 1, 2, 3 become equal to the rows 6, 5, 4 respectively. This condition is also satisfied by our VEV. Finally we consider the group element Eqs. (44, 46). To denote the action of a direct product element, we used left and right multiplications with the corresponding matrices. To represent a direct product element using a single matrix, we need to obtain the Kronecker product of the left and the right matrices. It can be shown that, the four Kronecker product matrices, corresponding to the four direct product elements, Eqs. (41, 43, 45, 47), commute with each other 5 , i.e. they generate the subgroup C 3 × C 2 × C 2 × C 3 . Therefore, the flavon VEV ∆ breaks X 24 ×X 24 into C 3 ×C 2 ×C 3 ×C 2 = C 6 × C 6 . To summerise, the C 6 × C 6 subgroup generated by O C∆ , O Dc∆ , O Dr∆ and O E∆ remains as the residual symmetry of the VEV and it uniquely defines the VEV (up to multiplication by an overall complex constant). Now we turn our attention to the flavons,φ andφ. 
Consider the group element It is clear that, this operation multiplies all the vanishing elements in the VEV with ω orω. Therefore, invariance of the VEV under O Cφ ensures that these elements vanish. Consider the group element (51) As a matrix equation, its operation on the VEV is (52) This operation interchanges the rows, 1, 2, 3, of the VEV with the rows, 4, 5, 6 respectively, along with multiplication of these rows with certain specific values of phases. Invariance under this operation ensures that the elements in the upper and the corresponding lower rows in the VEV have the same magnitude, but differ by specific phases. Our VEV satisfies this condition. Finally consider the group element As a matrix equation, its operation on the VEV is This operation cycles various sets of three elements of the VEV, along with multiplying these elements with specific phases. There are six such sets in the VEV. These include (49, 51, 53), generate C 3 , C 2 and C 3 groups respectively. This is evident by inspecting the corresponding matrix operations in Eqs. (50, 52, 54). These three elements also commute with each other, so that they generate the subgroup To prove that they commute we need to calculate the Kronecker product matrices, as we discussed in the case of the ∆ flavon. To summarise, the C 6 × C 3 subgroup generated by O Cφ , O Dφ and O Eφ remains as the residual symmetry of φ and φ after SSB and it uniquely defines these VEVs (up to multiplication by an overall complex constant). Since the neutrinos, ν R , form a triplet under Σ(72×3), the individual states, ν R1 , ν R2 and ν R3 correspond to the flavour basis states, (1, 0, 0) T , (0, 1, 0) T and (0, 0, 1) T respectively. These states are the invariant eigenstates of the group elements C, E 2 CE and ECE 2 respectively where the group generators are given in Eqs. (4). Individually, the above mentioned group elements form C 3 subgroups of Σ(72×3). To summarise, we have shown that the flavon VEVs as well as the neutrino states can be uniquely defined in terms of their symmetry properties. They are expressed as the invariant eigenstates of specific group elements which form specific subgroups of the flavour symmetry group. Thus the flavour structure of our model is entirely determined by the subgroup structure of the flavour symmetry group. It should be noted that, even though we have used matrix representations in convenient bases, our formalism is manifestly basis independent, i.e. expressible in terms of the abstract group generators. Summary Fermions and flavons transform as multiplets of a discrete group in the flavour space. The structure of the fermion mass matrix is determined by the relative orientation of the flavon VEVs with respect to the fermion flavour eigenstates. Therefore, fixing the vacuum alignments is central to the flavour problem. The canonical formalism involves constructing a flavon potential, extremising it and obtaining the VEV through spontaneous symmetry breaking. However, by carefully adjusting the potential we may obtain any arbitrary vacuum alignment. We argue that such a procedure goes against the spirit of using discrete symmetries to explain the flavour structure. The vacuum alignment should not be determined by the structure of the potential, but rather by its symmetries. In this paper we adopt a formalism in which the VEV is fully determined in terms of its residual symmetry, i.e. the unbroken part of the original discrete group. 
This constrains the possible orientations of the VEV, in relation to the fermion flavour eigenstates, to a unique and finite set. Thus we get rid of the arbitrariness of the vacuum alignment which could arise when using a potential. In an earlier publication, we showed that a fully constrained Majorana mass matrix can be constructed using a sextet of Σ(72 × 3). A specific VEV for this sextet leads to TφM-mixing with φ = π/16 and neutrino mass ratios, Eq. (3). In this paper, we obtain this VEV using the formalism of residual symmetries. To achieve it, we propose a new discrete symmetry group, X 24 . The flavonsφ,φ and ∆ which transform under the expanded flavour group Σ(72 × 3) × X 24 × X 24 are introduced. The VEV of each of these flavons is uniquely identified as an invariant eigenstate of several elements of the flavour group. The VEV remains invariant under the residual group generated by these elements, i.e. each VEV is determined by a particular subgroup of Σ(72 × 3) × X 24 × X 24 . The flavonsφ,φ and ∆ are coupled together to obtain the required sextet of Σ(72 × 3). By imposing the condition that the VEVs of the constituent flavons are invariant eigenstates under the simultaneous action of Σ(72 × 3) and X 24 , we make the VEV of the sextet of Σ(72 × 3) implicitely dependent on X 24 . Because the fermions exist in three families, only the discrete subgroups of U (3) have been used as flavour symmetry groups in literature so far. In this paper, we broadened the application of discrete groups in model building by utilising a group which is no longer required to be a U (3) subgroup. In a general framework, we may express the flavour group, G f , as a direct product, G f = G U (3) × G X . In this paper, we have G U (3) = Σ(72 × 3) and G X = X 24 × X 24 . But in general, G U (3) can be any discrete subgroup of U (3) such as A 4 , S 4 , A 5 , ∆(3n 2 ) and ∆(6n 2 ). On the other hand, G X , which we call the "auxiliary group" can be any discrete group. We hope that this newly introduced framework will stimulate further research involving other choices of auxiliary groups combined with the commonly studied subgroups of U (3). This may lead to novel choices of vacuum alignments for the G U (3) -flavons and new textures of mass matrices. I would like to thank Paul Harrison and Bill Scott for the helpful discussions and Aidan Wiederhold for helping with making the plot. I acknowledge the support from the University of Warwick and the hospitality of the Particle Physics Department at the Rutherford Appleton Laboratory. I thank the management of the School of the Good Shepherd, Thiruvananthapuram, for providing a convenient and flexible working arrangement conducive to research. Appendix Here we use the group S 4 to construct a couple of toy models for mass matrices. We investigate the flavon potentials and show that they may or may not lead to VEVs with specific symmetry properties. The representation 3 of S 4 is generated using the matrices where we have adopted the commonly used basis in literature. 3 consists 24 proper rotations which include 9 rotations about the axes passing through face centres, 8 rotations about the axes passing through vertices, 6 rotations about the axes passing through edge centres and the identity element. We assume that the right-handed neutrinos transform as a 3, The individual fermion states correspond to the basis states of the representation, for example ν R1 corresponds to (1, 0, 0) T . 
This state remains invariant under the action of the C 4 subgroup of S 4 generated by This subgroup consists of rotations by nπ/2 about the axis passing through (1, 0, 0) T . Similarly we have two more C 4 subgroups in relation to the states ν R2 and ν R3 . The tensor product expansion of two triplets (3 ) is given by Therefore, by coupling the neutrinos we obtain The generators corresponding to Eqs. (55), for the doublet (2) and the triplet (3 ) are The doublet representation (2) is not faithful. The 2×2 matrices, Eqs. (62), generate the dihedral group D 6 which forms a subgroup of S 4 . D 6 represents the rotation as well as the reflection symmetries of an equilateral triangle as shown in the Fig. 3 Let us define singlet (φ s ), doublet (φ d ) and triplet (φ t ) flavons which transform as 1 (invariant), 2 and 3 respectively. They couple with the neutrino multiplets, Eqs. (59-61), to produce the S 4 invariant mass term, where k s , k d and k t are the coupling constants. The flavons and the coupling constants in Eq. (64) can be written in a matrix form, where we have expressed the doublet and the triplet flavons in terms of their components, i.e. φ d = (φ d1 , φ d2 ) T and φ t = (φ t1 , φ t2 , φ t3 ) T . Substituting a specific vacuum alignment for the flavons in Eq. (65) produces the mass matrix. In the rest of the Appendix, two examples are provided where we minimise flavon potentials to obtain the VEVs and the corresponding mass matrices. In Example 1 the VEVs can also be defined in terms of their symmetries while in Example 2 they do not have such symmetry properties. Example 1 It is straightforward to write a potential for the invariant Extremising this potential leads to the VEV In order to construct a potential for the doublet flavon, we first consider the tensor product of two doublets. It can be shown that the tensor product leads to another doublet, Now we construct the potential, where the operator | | 2 represents () T (). This potential has three minima: . They form the vertices of the equilateral triangle as shown in Fig. 3 (left). We assume that the flavon acquires one of these minima as its VEV, This VEV breaks D 6 to one of its subgroups, C 2 , generated by where E, F are given in Eqs. (62). C 2 represents the reflection symmetry of the triangle which keeps (1, 0) T invariant. Conversely, the vacuum alignment, (1, 0) T , can be uniquely identified by this residual C 2 symmetry. Now we construct a potential for the triplet flavon, φ t , which transforms as a 3 . From the tensor product of two φ t triplets, we obtain the second order triplet, similar to Eq. (61). Using φ t and (φ t φ t ) t , we construct the potential, This potential has six minima φ t = (±1, 0, 0), (0, ±1, 0) and (0, 0, ±1). These are the face centres of the cube shown in Fig. 2 (left). We assume that the flavon acquires one of these minima as its VEV, This VEV breaks S 4 to one of its subgroups, C 2 × C 2 , generated by where E, F are given in Eqs. (63). The generators, Eqs. (75) represent two improper rotations of the cube. Apart from these two elements, C 2 × C 2 also consists of a proper rotation (Diag(1, −1, −1)) and the identity element. The VEV, (1, 0, 0) T , remains invariant under the action of these elements. Conversely, the VEV can be uniquely identified by this residual C 2 × C 2 symmetry. Substituting the VEVs, φ s , φ d and φ t in Eq. (65), we obtain the mass matrix, This matrix is diagonalised using the unitary matrix, which provides a bimaximal contribution to mixing. 
By a suitable selection of the coupling constants, k s , k d , k t , we can obtain any set of values for the masses without affecting the mixing part. Example 2 In this example we construct potentials for the flavons φ d and φ t leading to VEVs which leave no residual symmetries. For φ d , we use the potential where the scale Λ is added with the higher dimensional term. This potential has six minima which are of the form where g i are the six elements of the group D 6 . These minima are shown in Fig. 3 (right). We assume that flavon acquires one of the minima, as its VEV. This VEV does not possess any residual symmetry of D 6 . For constructing the potential for the triplet flavon, we first obtain a doublet from the tensor product of two triplets. Similar to Eq. (60), we obtain We construct the potential as where κ 1 and κ 2 are arbitrary constants. In Eq. (82) we have coupled the doublet flavon, φ d , with the triplet flavon, φ t . Therefore, we should extremise Eq. (82), together with the potential for the doublet flavon, Eq. (78). If we substitute φ d = (0, 1) T and φ t = (1, κ 1 , κ 2 ) T , both terms in Eq. (82) as well as in Eq. (78) vanish, indicating that these states of the flavons constitute a minimum of the potential. By transforming these flavon states under the action of S 4 we obtain further minima forming a discrete set. For φ t , these minima are shown in Fig. 2 (right) 6 . We select one of these minima as the VEV. As mentioned previously, this VEV does not possess any residual symmetry of S 4 . Using the VEVs of the doublet and the triplet flavons we obtain the mass matrix, which has more degrees of freedom compared to the previous case, Eq. (76). By suitably tuning these free parameters, we can ensure that this mass matrix is consistent with the current neutrino masses and mixing data. However, we argue that since the VEVs, Eqs. (80, 83), have no apparent connection with the original flavour symmetry (S 4 ), we can not claim that the texture of the resulting mass matrix has its origin in the aforementioned symmetry.
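As a numerical cross-check of the kind of extremisation used in these Appendix examples, the short sketch below minimises a simple S 4 -invariant potential for a real triplet flavon. The quartic form and the coupling values (m2, lam1, lam2) are illustrative assumptions, not the potentials of Example 1 or Example 2; the point is only that, for lam2 < 0, random starting points all relax to the six face-centre directions (±1, 0, 0), (0, ±1, 0), (0, 0, ±1) of Fig. 2 (left), i.e. to VEVs preserving a residual C 2 × C 2 symmetry.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative S4-invariant potential for a real triplet phi = (p1, p2, p3):
#   V = -m2 * s2 + lam1 * s2**2 + lam2 * (p1**4 + p2**4 + p3**4),   s2 = p.p
# The couplings below are assumptions chosen so that V is bounded from below
# and the anisotropic quartic term prefers the face-centre directions (lam2 < 0).
m2, lam1, lam2 = 1.0, 1.0, -0.5

def V(p):
    s2 = np.dot(p, p)
    return -m2 * s2 + lam1 * s2**2 + lam2 * np.sum(p**4)

rng = np.random.default_rng(0)
minima = set()
for _ in range(200):
    res = minimize(V, rng.normal(size=3), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12})
    minima.add(tuple(np.round(res.x, 3) + 0.0))   # +0.0 normalises -0.0 to 0.0

print(sorted(minima))
# expected: the six face-centre points (+-1, 0, 0), (0, +-1, 0), (0, 0, +-1)
```

Flipping the sign of lam2 (while keeping the potential bounded) makes the same code relax instead towards the vertex directions (±1, ±1, ±1)/√3, illustrating how the choice of quartic couplings selects which residual subgroup of S 4 survives spontaneous symmetry breaking.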
Convolutional neural network algorithm trained on lumbar spine radiographs to predict outcomes of transforaminal epidural steroid injection for lumbosacral radicular pain from spinal stenosis Little is known about the therapeutic outcomes of transforaminal epidural steroid injection (TFESI) in patients with lumbosacral radicular pain due to lumbar spinal stenosis (LSS). Using lumbar spine radiographs as input data, we trained a convolutional neural network (CNN) to predict therapeutic outcomes after lumbar TFESI in patients with lumbosacral radicular pain caused by LSS. We retrospectively recruited 193 patients for this study. The lumbar spine radiographs included anteroposterior, lateral, and bilateral (left and right) oblique views. We cut each lumbar spine radiograph image into a square shape that included the vertebra corresponding to the level at which the TFESI was performed and the vertebrae juxta below and above that level. Output data were divided into “favorable outcome” (≥ 50% reduction in the numeric rating scale [NRS] score at 2 months post-TFESI) and “poor outcome” (< 50% reduction in the NRS score at 2 months post-TFESI). Using these input and output data, we developed a CNN model for predicting TFESI outcomes. The area under the curve of our model was 0.920. Its accuracy was 87.2%. Our CNN model has an excellent capacity for predicting therapeutic outcomes after lumbar TFESI in patients with lumbosacral radicular pain induced by LSS. Lumbar spinal stenosis (LSS) is caused by narrowing of the lumbar spinal canal or lumbar vertebral foramen 1 .LSS typically results from degenerative changes in the spine, including degeneration of the disc, formation of osteophytes, and thickening of spinal ligaments 1 .These degenerative changes cause the narrowing of space that is available for the neural and vascular elements in the lumbar spine 2 .LSS can cause pressure on the nerve roots and vascular structures 2 .In addition to the compression of neurovascular structures in the lumbar spine, LSS causes an inflammatory response in which various inflammation-mediated cells and proinflammatory cytokines are involved, resulting in lumbosacral radicular pain 3 .The radicular pain from LSS may be aggravated, particularly during walking or standing for long periods 4 . Conservative treatments such as oral medication, physical therapy, and injection procedures are used to control the lumbosacral radicular pain caused by LSS 5,6 .Moreover, transforaminal epidural steroid injection (TFESI) is one of the most effective treatments for alleviating pain from LSS 3,6 .Corticosteroids inhibit the synthesis of various proinflammatory mediators 7 . 
The prediction of therapeutic outcomes after TFESI is important because it allows clinicians to elucidate a therapeutic plan for lumbosacral radicular pain due to LSS.Previous studies have evaluated outcomes according to stenosis severity observed on magnetic resonance imaging (MRI) 3,8 .However, the prognostic evaluation methods of the previous studies only showed a tendency of the therapeutic outcomes and did not provide individualized outcomes based on the specific structural characteristics of each patient 3,8 .Furthermore, MRI is expensive and not easily accessible.We believe it is possible to assume the degree of spinal stenosis by assessing degenerative findings in lumbar spine radiographs, such as disc space narrowing, osteophyte formation, and facet degeneration 9 .Lumbar spine radiographs can be easily performed because almost all clinics and hospitals are equipped with a radiographic imaging machine, and the cost for patients is relatively low.However, at present, no study has analyzed the therapeutic outcomes of TFESI based on findings visible in lumbar spine radiographs. Machine learning (ML) is a computer algorithm that can automatically learn from data without the need for explicit programming [10][11][12] .ML is known for its ability to overcome the limitations of existing image analysis techniques and enable breakthroughs in the field of image analysis [10][11][12] .Deep learning (DL) is an advanced ML approach that uses many hidden layers to build artificial neural networks with structures and functions similar to those of the human brain.It can learn from unstructured and perceptual image data, and several studies have demonstrated that the DL technique can outperform traditional ML techniques [13][14][15] .A convolutional neural network (CNN) is a representative DL model specializing in image analysis 16,17 .We believe that the CNN model can recognize and analyze the findings related to spinal degeneration on lumbar spine radiographs and could help predict the therapeutic outcome of TFESI.Furthermore, CNN can enable personalized prediction of therapeutic outcomes based on each patient's images. In the current study, we used lumbar spine radiographs as input data and trained a CNN model to predict therapeutic outcomes after lumbar TFESI in patients with lumbosacral radicular pain caused by LSS. Results Table 1 summarizes the sample characteristics of the proposed model and its performance measures.This study utilized 193 samples, with a training set comprising 79.8% (154 samples) and a validation set comprising 20.2% (39 samples).The training set had a 'favorable outcome' to 'poor outcome' ratio of 38.3-61.7%,while the validation set had a ratio of 33.3-66.7%.The trained model demonstrated robust performance with a training accuracy of 94.2% and an AUC of 0.983 (95% CI [0.967-1.000]).The validation accuracy was also high at 87.2%, with an AUC of 0.920 (95% CI [0.834-1.000])(Fig. 1). In terms of class-specific performance, for the 'favorable outcome' set, the precision was 0.733, recall was 0.917, and F1-score was 0.815.For the 'poor outcome set' the model showed a precision of 0.958, recall of 0.852, and F1-score of 0.902.The macro average across classes was a precision of 0.846, recall of 0.884, and F1-score of 0.858, while the weighted average was a precision of 0.889, recall of 0.872, and F1-score of 0.875. 
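The class-specific figures above can be cross-checked for internal consistency against the validation confusion matrix reported below (Figure 2). A minimal sketch, assuming the class counts implied by those figures (12 'favorable' and 27 'poor' outcomes in the validation set, with 11 and 23 correctly classified, respectively):

```python
import numpy as np
from sklearn.metrics import classification_report

# Reconstruct the validation predictions implied by the reported confusion
# matrix: 11/12 'favorable' and 23/27 'poor' outcomes correctly classified.
y_true = np.array([0] * 12 + [1] * 27)                      # 0 = favorable, 1 = poor
y_pred = np.array([0] * 11 + [1] * 1 + [0] * 4 + [1] * 23)

print(classification_report(y_true, y_pred,
                            target_names=["favorable outcome", "poor outcome"],
                            digits=3))
# Reproduces the reported values: precision 0.733 / recall 0.917 (favorable),
# precision 0.958 / recall 0.852 (poor), accuracy 34/39 = 0.872, and the stated
# macro and weighted averages.
```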
These results suggest that the model is highly accurate and distinguishes well between the 'favorable outcome' and 'poor outcome' classes, with a particularly strong performance in identifying the 'favorable outcome'. However, there is scope for improvement in the precision for the 'favorable outcome'. Figure 2 provides additional information on the model's characteristics through confusion matrix analysis for the validation data. The confusion matrix shows that the model correctly predicted 11 out of 12 patients who showed 'favorable outcomes' (91.7% recall). Also, the model correctly predicted 23 out of 27 patients with 'poor outcomes' (85.2% recall). Discussion In our study, we developed a CNN algorithm for predicting therapeutic outcomes of lumbar TFESI in patients with lumbosacral radicular pain following LSS. The accuracy of our algorithm was 87.2%, and the AUC was 0.920. Considering that AUCs of 0.7-0.8, 0.8-0.9, and > 0.9 are regarded as indicating acceptable, excellent, and outstanding diagnostic capacity, respectively, the performance of our CNN model developed using lumbar radiographs as input data can be considered excellent 18 . While neural networks and various other algorithms have been utilized for the past 50 years, developments in the field of CNNs constitute significant accomplishments 14,17 . The multiple convolutional and pooling layers of the CNN algorithm enable the identification of radiologic features or other image-based data and assign weights to important features 16,17 . Also, the CNN algorithm is less affected by distortions, horizontal or vertical shifts, contrasts, angles, and partial masks in images; it also requires less computer memory, which allows for more effective algorithm training 19 . In accordance with these advantages of the CNN algorithm, our algorithm is expected to accurately identify key features influencing the therapeutic prognosis after TFESI in lumbar radiographs of patients with LSS. Previous studies have evaluated therapeutic outcomes after TFESI in patients with LSS for alleviating lumbosacral radicular pain 3,8 . In 2018, Chang et al. evaluated the outcome of TFESI according to the severity of lumbar foraminal spinal stenosis (LFSS) 3 . Of 31 patients with mild to moderate LFSS, 27 patients (87.1%) showed favorable outcomes (≥ 50% reduction of initial pain at 3 months post-treatment). Of 26 patients with severe LFSS, 11 (42.5%) reported successful pain relief. In 2020, Do et al. evaluated the therapeutic outcomes of interlaminar epidural steroid injection in patients with chronic radicular pain according to the degree of lumbar central spinal stenosis (LCSS) 8 . At 3 months after treatment, nine (30.0%) of 30 patients with moderate LCSS showed favorable outcomes (≥ 50% reduction of initial pain at 3 months post-treatment). Five (17.9%) of 28 patients with severe LCSS reported successful pain relief. However, these results described only overall trends in therapeutic outcomes based on radiologic findings, such that it remains difficult to discern the favorable and poor therapeutic outcomes of an individual patient. In contrast, our algorithm can determine an individual patient's expected therapeutic outcome when that patient's lumbar radiographs are input into the algorithm. Regarding studies that developed a DL algorithm to predict therapeutic outcomes after TFESI, to the best of our knowledge, two studies have been published 20,21 . In 2022, Kim et al.
collected whole T2-weighted sagittal lumbar spine MR images from 503 patients with chronic lumbosacral radicular pain 20 .Similar to our study, the favorable and poor outcomes were defined as ≥ 50% and < 50% reduction at 2-month follow-up after TFESI, respectively.Kim et al. reported that the accuracy for predicting the therapeutic outcome of TFESI was 76.2%, and the AUC was 0.827.In 2023, Wang et al. recruited 288 patients with radicular pain due to cervical foraminal stenosis 21 .The authors collected single T2-axial spine MR images for each patient.They also defined ≥ 50% and < 50% reduction at 2-month follow-up after TFESI as favorable and poor outcomes, respectively.The accuracy of the developed model for predicting the outcome after TFESI was 79.3%, and the AUC was 0.801.Therefore, our study is the first to demonstrate the usefulness of a DL model trained using radiographs in predicting the therapeutic outcomes of spinal injections for radicular pain. Integrating our research results with a cloud system could significantly enhance its accessibility and scalability.A cloud-based platform would allow for deployment of the developed model across healthcare settings, enabling real-time analysis of lumbar spine radiographs.This approach would facilitate a centralized database for training and updating the model with new data, improving its accuracy over time.Moreover, cloud integration supports collaborative efforts among healthcare professionals and researchers, allowing for seamless sharing of insights and advancements in the treatment of lumbosacral radicular pain caused by LSS.By leveraging cloud technology, the research outcomes can be made more widely available, offering optimized treatment plans on a global scale. In conclusion, we found that a CNN model trained using four radiographs (the anteroposterior, lateral, and left and right obliques) per each patient had an excellent capacity (accuracy = 87.2%,AUC = 0.920) for predicting the therapeutic outcomes after lumbar TFESI in patients with lumbosacral radicular pain due to LSS.We believe that our developed model could be effectively applied as a supplementary tool in clinical practice by pain physicians.Our study had several limitations.(1) A relatively small number of patients were included.(2) We collected images from a single hospital.To increase the generalizability of our results, lumbar radiographs collected from multiple hospitals should be used as input data for the CNN training algorithm.(3) We assumed that the patients' pain was caused solely by single-level LSS.However, in reality, it is possible that the pain was associated with multiple levels of LSS.Therefore, for more accurate analysis, it is preferable to use the entire lumber spine image as input data for developing the DL algorithm.( 4) We used only the NRS as output data.If functional data were used instead, the developed algorithm could provide more information.( 5) For developing the DL algorithm, we used only lumbar spine radiographs as input data.Incorporating MRI data along with lumbar spine radiographs as input data could further improve the prediction accuracy of therapeutic outcomes after lumbar TFESI. 
Participants This retrospective observational study involved 193 patients (mean age = 74.3 ± 9.8 years; men:women = 71:122; injection levels L3:L4:L5:S1 = 2:3:24:156:8; right:left:bilateral = 69:68:56) who visited the spine center of a university hospital and underwent lumbar TFESI for LSS between January 2013 and December 2021. The inclusion criteria for this study were as follows: (1) single-level lumbar TFESI for segmental pain that radiated to the lower extremity due to LSS; (2) ≥ 3 months history of symptomatic lumbosacral radicular pain rated > 3 on a numerical rating scale (NRS-11; 0 = no pain; 10 = the worst pain) prior to TFESI; (3) ≥ 50% temporary pain relief following a diagnostic nerve block with 1 mL of 2% lidocaine; and (4) MRI and electrophysiological findings corresponding to the clinical manifestations. The data of patients with a history of spinal surgery, such as lumbar fusion or laminectomy, before TFESI were excluded. The study protocol was approved by the institutional review board of Yeungnam University Hospital, which waived the requirement for written informed consent owing to the retrospective nature of this study. The study was conducted in accordance with the Declaration of Helsinki. TFESI procedures TFESI was conducted using the standard method described in a previous study. All injections were performed by a single interventional physiatrist specializing in spinal injections. A strict aseptic technique was used to perform the TFESI procedures. Patients were prone, and C-arm fluoroscopy (Siemens, Erlangen, Germany) was used to aid level identification and needle placement. Lidocaine 1% was administered at the needle insertion site, and the tip of a 25-gauge 90-mm spinal needle, bent at the tip to allow for guidance, was positioned between the lateral vertebral body and the 6 o'clock position below the pedicle. Lateral fluoroscopic imaging demonstrated the presence of the needle tip between the spinal laminar margin and the posterior vertebral body. Under anteroposterior fluoroscopy, 0.3 mL of non-ionic contrast material was injected to confirm the absence of vascular uptake and the spread of contrast into the foramen. Subsequently, another injection of the contrast medium was performed under real-time fluoroscopic monitoring, and 20 mg (0.5 mL) of triamcinolone with 0.5 mL of bupivacaine hydrochloride and 1 mL of normal saline was injected. Images used for the DL algorithm (input data) Lumbar spine radiographs were used as input data for developing the DL algorithm. These comprised the anteroposterior, lateral, and bilateral oblique (left and right oblique) views. Oblique lumbar radiographs were obtained from a 45° anteroposterior orientation on both the left and right sides of each patient. Additionally, we cut each lumbar spine radiograph image into a square shape that included the vertebra corresponding to the level at which the TFESI was performed, as well as the vertebrae immediately below and above that level.
To prepare data for DL, the region of interest (ROI) was isolated, with images segmented by a physiatrist to delineate regions with critical lesions, and image dimensions were standardized.This ROI protocol positively influenced the learning efficacy of the DL model.Furthermore, image features were normalized prior to their input into the CNN model to optimize generalization capability.This approach enhances applicability of the model across diverse datasets within the medical imaging domain.Medical imaging requires precise lesion detection, therefore attributes such as brightness adjustment, blurring, and noise were not utilized in image processing methodology in this study. Measurement of therapeutic outcome (output data) Pain severity at pretreatment and 2-month follow-up after TFESI was assessed on the NRS (0 = no pain; 10 = worst pain).The NRS data were collected via chart review.A "favorable outcome" was defined as a ≥ 50% reduction in NRS score at 2 months post-TFESI versus the pretreatment NRS score.A "poor outcome" was defined as a < 50% reduction in NRS score at 2 months post-TFESI versus the pretreatment score.To validate the change in pain reduction, NRS scores were evaluated by assessing the difference between the pretreatment NRS scores and the 2-month post-treatment scores (change in NRS [%] = [pretreatment score − 2 months post-TFESI score]/ pretreatment score × 100). DL algorithms Python 3.8.10,scikit-learn 1.1.2,and TensorFlow 2.13.0 with Keras were used to develop the CNN model for predicting TFESI outcomes.We concurrently fed four X-ray ROI images (anteroposterior, lateral, left oblique, and right oblique) into the EfficientNetV2S CNN model for training.We employed a range of optimizers, learning rates, and batch sizes, integrating dropout regularization techniques to mitigate overfitting.Table 2 provides detailed information about the proposed model.The table is based on the outputs generated using Tensor-Flow's model.summary()function.Figure 3 offers a concise summary of each phase within the model training procedure. Statistical analysis Statistical analyses were executed utilizing Python 3.8.10 and scikit-learn version 1.1.2.A receiver operating characteristic (ROC) curve analysis was conducted, and the area under the curve (AUC) was computed.The 95% confidence interval (CI) for the AUC was determined following the method outlined by DeLong et al. 22 .Scikit-learn was employed for computing both the ROC curve and AUC.The classification report function in scikit-learn was employed to compute the accuracy, class-specific precision, recall, and F1 score. Figure 1 . Figure 1.Receiver operating characteristic curves for the validation and test datasets of our developed model.Acc, accuracy; AUC, area under the curve; CI, confidence interval. Figure 3 . Figure 3. Diagram of the process for the development of the deep learning model for predicting the therapeutic outcome after transforaminal epidural steroid injection in patients with lumbosacral radicular pain due to lumbar spinal stenosis.region of interest; AUC, area under the curve.
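A minimal sketch of the four-view architecture and training set-up described above is given below. It assumes a single EfficientNetV2S backbone shared across the four radiograph ROIs and one sigmoid output for the favorable/poor outcome label; the input size, head width, optimizer settings, and use of ImageNet weights are illustrative assumptions rather than the exact configuration used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SIZE = (384, 384)          # assumed ROI size after standardisation
VIEWS = ["ap", "lateral", "oblique_left", "oblique_right"]

# Shared EfficientNetV2S backbone applied to each of the four ROI views.
backbone = tf.keras.applications.EfficientNetV2S(
    include_top=False, weights="imagenet",
    input_shape=(*IMG_SIZE, 3), pooling="avg")

inputs, features = [], []
for view in VIEWS:
    inp = layers.Input(shape=(*IMG_SIZE, 3), name=view)
    inputs.append(inp)
    features.append(backbone(inp))

x = layers.Concatenate()(features)
x = layers.Dropout(0.3)(x)                       # dropout regularisation, as in the paper
x = layers.Dense(128, activation="relu")(x)
output = layers.Dense(1, activation="sigmoid", name="favorable_outcome")(x)

model = Model(inputs=inputs, outputs=output)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"])

# Outcome labels follow the paper's definition: 'favorable' = >= 50 % reduction
# in NRS at 2 months post-TFESI relative to the pretreatment score.
def favorable_outcome(nrs_pre, nrs_2m):
    change_pct = (nrs_pre - nrs_2m) / nrs_pre * 100.0
    return int(change_pct >= 50.0)
```

Whether the backbone weights were shared across views or trained separately per view, and which optimizer/learning-rate/batch-size combination was ultimately selected, are not specified in enough detail to reproduce exactly; the sketch only shows how four ROI inputs can be fed concurrently into one EfficientNetV2S-based binary classifier.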
The Effect of Antimicrobial Additive on Plastic Deterioration The Covid-19 pandemic, which began in 2019 and spread across the whole world, has negatively affected normal living conditions and, accordingly, has increased awareness of preventing microbial diseases transmitted by contact. In almost all sectors, precautions have been taken against the transmission of diseases by contact. As a requirement of our age, companies whose business is dominated by industrial locks, hinges, and handles are enriching their product ranges by developing new methods and innovations, both in terms of hygienic product design and in terms of the raw material used in production. For example, it is preferable that products in the air-conditioning sector have antimicrobial properties. In this article, we evaluate the effect of microorganisms on plastic raw materials and determine whether this effect causes future deterioration in plastic materials. Currently, the products supplied to these sectors are produced from raw materials that are readily available on the market, such as PA6 GFR30, ABS, and PA6. The aim is to provide products with antimicrobial properties, in accordance with the needs of the sector and the era, by changing the raw material used or by adding additives in certain proportions to the raw material. Introduction Industrial polymers are common materials used in the manufacturing of toys, packaging, and many household products such as disposable plates, spoons, and forks. Plastic materials are easy to use and easy to clean. However, reused plastic provides a suitable environment for bacterial growth, because plastic can biodegrade and its carbon can be used as food by fungi and bacteria [1]. Bacteria and microbes on plastic products cause biological deterioration. Biodegradation is defined as an undesirable change in the color, strength, or mass of the product. Plastic products can be made antimicrobial with additives that prevent biological degradation. Antimicrobials are used in a wide range of plastic applications [2]. The properties of these additives are described in related standards such as ISO 846:2019, Plastics - Evaluation of the action of microorganisms [3]. The standard examines the colonization of microorganisms on the surface of the plastic product, the deterioration of plastics that create a nutritious environment for microorganisms, and the biological deterioration of the product. The function of antibacterial additives is to prevent the growth of bacteria, mold, and pathogenic microorganisms by 99%. In response to the conditions of the pandemic, studies have been initiated to make antimicrobial products preferred and to supply them to the market [4]. Studies are being carried out so that products used in the air handling unit sector, such as locks, gaskets, and handles, comply with the above-mentioned standard, either by using different raw materials or by adding additives to the raw material currently used. An antimicrobial additive selected according to the criteria of this standard reduces the proliferation of the specified organisms in the product and the biological deterioration of the plastic.
By using antibacterial additives, it is aimed to prevent the proliferation of microorganisms in the products, as well as to support the antibacterial feature by designing the environments where microorganisms can adhere and multiply, by designing such environments at a minimum level. The main points to be considered in terms of design can be considered as follows. The part should not have indentations and protrusions that can create the necessary habitat for the reproduction and living of microorganisms such as water and dust. The surfaces should be as flat as possible, away from sharp recesses and sharp protrusions as much as possible. The recesses will create a suitable environment for the proliferation of microorganisms. Since there are no recesses and sharp surfaces that will create this environment, it will be ensured that water, dust, and similar substances that will cause the proliferation of microorganisms will not stick on the surface. Sharp transitions should be avoided as much as possible. The examples we made are examples of the lever locks available in Mesan lock Inc. (Fig. 1). Sharp corners and lines may contain substances such as dust and dirt and may create an environment for the growth of microorganisms. However, the round design will keep the accumulation of such factors to a minimum due to its rounded lines. Accordingly, the new lever lock design we have designed has been optimally adapted to the abovementioned features (Fig. 2). . Our aim in this work is to offer a product in compliance with antimicrobial standards by choosing the appropriate additive to the plastic raw material ensuring optimum mixing. Antimicrobial Additive. Products made from Plastic materials are easy to clean and use. However, it is a suitable environment for bacterial growth in reuse. There is a general belief in society that plastic materials are not hygienic products. Therefore, it is aimed to give antibacterial properties to plastic products by using additives. Antimicrobial additives are auxiliary materials that enable the 46 Engineering Chemistry Vol. 3 final product to have antimicrobial product properties by adding a certain amount into the plastic products currently used and dispersing them homogeneously. Thanks to these additives, the formation of bacteria and fungi on the product are prevented by 99% [5]. Heavy metals including silver, zinc, copper, mercury, tin, lead, bismuth, cadmium, chromium, and thallium among the metallic elements have antibacterial capabilities, and the exchange with these metals endows inorganic polymers like zeolites and zirconium with antibacterial activity [6]. Silver-supported zirconium phosphate or silica gel's antibacterial properties aren't brought on by the release of silver ions, but rather by the catalytic action of silver, which causes oxygen to be activated [7,8]. Silver is the most widely used technology as an antibacterial agent in the world due to its historical success, broad performance spectrum, many materials and applications suitability for use are the main factors. The additive brought to a surface during the production process includes the specific antimicrobial active, such as silver, and can be formulated as a concentrated powder, liquid suspension or masterbatch pellet, depending on the target material and production process. In this study, an additive containing 15% granular glass with silver ions concentration, which was added to the PA6 GFR 30 raw material, was used. 
The additive material was turned into masterbatch pellets and mixed homogeneously in PA6 GFR 30. Case Study This study, it is aimed to add a certain number of additives to the plastic raw materials, which are most widely used in industrial lock sectors, to gain antimicrobial properties. This experiment was performed in accordance with ISO 846 : 2019 Plastics -Evaluation of the action of microorganisms standard procedure A, including identification, and visual and microscopic evaluations. Before the PA6 GFR 30 raw material entered the test environment, the test samples were prepared by mixing the antimicrobial additive in certain percentages homogeneously with the raw material. Then, PA6 GFR30 material, which added a certain percentage of antimicrobial additives, was tested. We can consider the experiment in two stages. The first of these stages is the growth test, that is, the test in which the resistance of the raw material against mold is examined and the resistance tests against bacteria. Experiment 1 Fungal-Growth Test. Before starting the experiment, the surfaces of the prepared test samples were cleaned with 70% ethanol, and then the microbes detailed below were exposed and their incubation periods were observed. The observation period for this experiment was 4 weeks. As the first, Aspergillus Niger van Tieghem's characteristics are given in Table 1. As the second, Penicillium funiculosum's characteristics are given in Table 2. Additionally, Paecilomyces variotii's and Trichoderma virens's characteristics are given in Table 3. Finally, Chaetomium globosum's characteristics are given in Table 4. During this 4-week period, mold resistance of the above-mentioned Fungi was observed in the piece. This process is illustrated visually. In the image you see on the left, there is an antimicrobial added plastic sample sent before the test starts. On the right, there is a sample image that has not changed at all (that is, it passed the test and gave a positive result) as a result of the 4-week test period (Fig. 3). The meanings of the intensity grades used for the evaluation of mold growth are given in Table 5. The findings found as a result of the mold resistance test (growth test) are detailed as follows. It was observed that the raw material prepared with the additive did not show any change in color or structure compared to the uninoculated sterile sample and control samples. No mold formation could be detected on the sample surface, even under microscopic examination. The growth density was determined as "0". Overall evaluation growth intensity is also determined as "0". Experiment 2 Bacteria Resistance Test. The test is done according to above mentioned "Plastics -evaluation of the action of microorganisms" standard, procedure C, including identification, visual and microscopic evaluation [3]. For this experiment, the surface of the test pieces was cleaned again with 70% ethanol. Material prepared with the additive did not show any change in color or structure compared to the uninoculated sterile sample and control samples. No mold formation could be detected on the sample surface, even under microscopic examination. The growth density was determined as "0". As a result of our experiments, it was observed that the optimum ratio determined for the PA6 GFR30 raw material piece with antimicrobial additives was 6 %. Summary Microbe growth and raw material degradation create problems in many sectors. For this reason, antimicrobial additives are gaining value day by day. 
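For context on the dosing trade-off discussed in the Summary below, the short calculation sketched here converts a masterbatch addition rate into an approximate silver-ion glass content of the finished part. It assumes that the 15% figure refers to the silver-ion glass fraction of the masterbatch and that the masterbatch disperses homogeneously; both are readings of the description above, not measured values.

```python
# Approximate silver-ion glass content of the moulded part for different
# masterbatch let-down ratios, assuming a masterbatch that is 15 % silver-ion
# glass dispersed homogeneously in PA6 GFR30.
GLASS_FRACTION_IN_MASTERBATCH = 0.15

for letdown_pct in (2, 4, 6, 8, 10):
    glass_in_part_pct = letdown_pct * GLASS_FRACTION_IN_MASTERBATCH
    print(f"{letdown_pct:>2} % masterbatch -> {glass_in_part_pct:.2f} % silver-ion glass in part")

# Under these assumptions, the 6 % optimum reported above corresponds to roughly
# 0.9 % silver-ion glass in the finished part; higher let-down ratios add cost
# without a demonstrated antimicrobial benefit in these tests.
```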
With the spread of the coronavirus around the world, customers' interest in antimicrobial products is increasing. As this need for customers increased, companies started to add antimicrobial additives at certain rates to their products. In this study, we examined the effect of 6% antimicrobial additive added to PA6 GFR30 raw material by observing bacteria and mold formations. According to the results of our mold and bacteria growth experiments, no mold or bacteria growth was observed in the handle lock products produced in our company as a result of mixing the masterbatch containing 15% silver ion glass powder with 6% PA6 GFR30 raw material. Companies producing antimicrobial products with PA6 GFR30 raw material will increase their costs and produce inefficiently because of using more than 6% additives. Therefore, it is important for engineers to make decisions based on the results we have shown in our study. By making design improvements and adding additive raw materials to the products in the right ratio, it is possible to stop the reproduction of microbes and ensure that they do not stick to the surfaces 100%.
The health impacts of waste-to-energy emissions: a systematic review of the literature Waste-to-energy (WtE) processes, or the combustion of refuse-derived fuel (RDF) for energy generation, has the potential to reduce landfill volume while providing a renewable energy source. We aimed to systematically review and summarise current evidence on the potential health effects (benefits and risks) of exposure to WtE/RDF-related combustion emissions. We searched PubMed and Google Scholar using terms related to health and WtE/RDF combustion emissions, following PRISMA guidelines. Two authors independently screened titles, abstracts and then full-texts of original, peer-reviewed research articles published until 20th March 2020, plus their relevant references. Overall quality of included epidemiological studies were rated using an amended Navigation framework. We found 19 articles from 269 search results that met our inclusion criteria, including two epidemiological studies, five environmental monitoring studies, seven health impact or risk assessments (HIA/HRA), and five life-cycle assessments. We found a dearth of health studies related to the impacts of exposure to WtE emissions. The limited evidence suggests that well-designed and operated WtE facilities using sorted feedstock (RDF) are critical to reduce potential adverse health (cancer and non-cancer) impacts, due to lower hazardous combustion-related emissions, compared to landfill or unsorted incineration. Poorly fed WtE facilities may emit concentrated toxins with serious potential health risks, such as dioxins/furans and heavy metals; these toxins may remain problematic in bottom ash as a combustion by-product. Most modelling studies estimate that electricity (per unit) generated from WtE generally emits less health-relevant air pollutants (also less greenhouse gases) than from combustion of fossil fuels (e.g. coal). Some modelled estimates vary due to model sensitivity for type of waste processed, model inputs used, and facility operational conditions. We conclude that rigorous assessment (e.g. HRA including sensitivity analyses) of WtE facility/technological characteristics and refuse type used is necessary when planning/proposing facilities to protect human health as the technology is adopted worldwide. Introduction Global waste generation has been estimated to double in the decade from 2015 to 2025, from 3 to over 6 million tonnes of waste per day; this rate is expected to continue into the next century, when the estimate increases to 11 million tonnes per day (World Energy Council 2016). In parallel, the world is facing an energy sustainability crisis. Heightened electricity consumption increases energy demand, while conversely, greenhouse-gas emissions must be curbed to mitigate climate change. Sustainable energy and waste management requires policies that promote a 'circular economy' , balancing product life cycles (from production to disposal), and that minimise adverse economic, environmental, and societal impacts (Beyene et al 2018, IEA Bioenergy 2018. A circular economy reuses and recycles goods, where possible, restoring and regenerating products, components and materials to be at their highest utility and value at all times (IEA Bioenergy 2018). The process of wasteto-energy (WtE; also known as 'energy-from-waste') supports a circular economy by reducing landfill volume from municipal solid waste (MSW) by up to 80%, while also generating energy such as through combustion for turbine-driven electricity (Beyene et al 2018, U.S. 
Energy Information Administration 2018. Combustion of MSW is the most established method of energy recovery through WtE worldwide, accounting for nearly 90% of the WtE sector (Clean Energy Finance Corporation 2015, World Energy Council 2016. MSW includes domestic, commercial and institutional waste such as plastics, rubbers, wood, metals and paper, which may be combustible or recyclable. The combustible component of MSW is known as refuse-derived fuel (RDF), and is used in a thermal process (incineration, pyrolysis, or gasification) to generate electricity, or heat, fuel gases, and solids as primary recovery products (Beyene et al 2018). From a health perspective, the WtE process may have advantages compared to waste management practices that solely rely on landfill sites that are associated with contamination of the air (e.g. volatile organic compounds) alongside water and soils (Vrijheid 2000). However, the WtE process may emit higher concentrations of carbon dioxide (CO 2 ), sulfur dioxide (SO 2 ), and nitrogen oxides (NO x ) per unit electricity produced compared to other forms of energy such as natural gas or renewables (O'Brien 2006). The WtE process involves the combustion of RDF components for which emissions may also include persistent organic pollutants such as dioxins (Albores et al 2016). This concern is offset, to some extent, in modern, well-run WtE plants that emit lower concentrations of these pollutants compared to coal and oil-fired power plants or traditional incineration of MSW (US EPA 2016). Hence, WtE processes may have both beneficial and adverse impacts on the emission of airborne toxins, and consequently on health, relative to alternative waste disposal and energy generation processes. The WtE sector is already well established in Europe and provides up to 8% of electricity and up to 15% of domestic heating needs (World Energy Council 2016, Zafar 2018). As of 2008, 475 European WtE plants processed an average of 59 million tonnes of MSW creating revenue of US$4.5 billion each year (Zafar 2018). In Scandinavia, Denmark repurposes 54% of its MSW as RDF (Zafar 2018). Meanwhile Sweden, which has employed WtE since the 1940s, is aiming to match the repurposing of 99% of its local MSW (two million tonnes annually) with an equivalent amount of imported MSW as RDF (Fredén 2018). In 2012, approximately 600 WtE plants across 35 different countries were estimated to combust 130 million tonnes of MSW (Hoornweg and Bhada-Tata 2012), with the sector growing at a compounded annual rate of nearly 10% (World Energy Council 2016). Outside of Europe, the process is being adopted with eagerness, using the established Waste Incineration Directive (WID 2000/76/EC) of the European Commission as a guide for monitoring and regulating WtE emissions (Clean Energy Finance Corporation 2016). In 2016, the USA alone operated 71 WtE plants generating approximately 14 billion KWh of electricity from 30 million tonnes of RDF (U.S. Energy Information Administration 2018). In the Asia-Pacific region, China is the fastest growing adopter of WtE, recently planning 125 new plants to double national capacity (World Energy Council 2016, Zafar 2018. China, one of the major importers of MSW, has restricted imports of certain materials (e.g. plastics, paper) to reduce local widespread environmental contamination (Retamal et al 2019), challenging major exporters of MSW such as Australia (Cheng andHu 2010, Downes andDominish 2018). 
Responding to this challenge, Australia has estimated that a national shift towards WtE presents an opportunity to repurpose 20+ million tonnes of MSW otherwise going to landfill annually and avoid 9 million tonnes of CO 2 (equivalent) emissions by replacing fossil-fuel combustion while meeting 2% of national baseload electricity demand (Clean Energy Finance Corporation 2016). Hence, it is timely to consider the place of WtE in the energy transitions landscape and, in particular, to consider its impact on air quality and health. Despite the growing global interest in WtE, the public health implications of combusting RDF remains little studied. There has been no previous systematic literature review of the health impacts associated with WtE, although several reviews on municipal waste incineration have been published. In 2019, a systematic review on the evidence of health effects from waste incineration (2002 to 2017) was published in response to several new incinerators proposed for use within Australia (Tait et al 2020). The literature review, which did not include WtE facilities, concluded that the available evidence likely underestimated the health effects of exposure to incineration emissions due to most studies being of low quality and only examining a limited subset of potential exposure and disease pathways (Tait et al 2020). Other earlier reviews on the health impacts or risks of incineration and resulting emissions have focused on hazardous (industrial) or unsorted (municipal) solid waste, rather than sorted RDF for WtE. These reviews concluded that the evidence is insufficient to support an association between a specific waste incineration process and adverse health effects (Vrijheid 2000, Hu and Shy 2001, Giusti 2009, Porta et al 2009, Cordioli et al 2013. Associations between exposure to emissions and health outcomes such as increased risk of lung/throat cancer or ischaemic heart disease (Hu and Shy 2001), as well as non-Hodgkin's lymphoma and soft-tissue sarcoma (Giusti 2009), have been reported, however, the findings are inconsistent. The reason for this has been suggested to be due to poor methods of exposure characterisation which have relied on distance from source or self-reporting exposure, rather than measured or modelled pollutant concentrations (Hu and Shy 2001, Cordioli et al 2013, Hoek et al 2018, Tait et al 2020. More consistent associations have been reported between exposure to emissions and elevated biomarkers of organic chemicals or heavy metals in urine and blood (Hu and Shy 2001). In their review on MSW incineration without energy recovery (i.e. not WtE), Tait et al (2020), recommended future studies be conducted on the health impacts of WtE, including studying content and volume of feedstock (waste), combustion specifications, consideration of multiple exposure pathways, reporting of a larger array of health outcomes, and controlling for potential confounding factors (Tait et al 2020). Other reviews have suggested that previous limitations of incineration studies could be addressed by large, prospective, multi-site cohort studies with personal measurements of exposure, based on knowledge of biological pathways and toxicological effects of specific compounds (Giusti 2009, Porta et al 2009, Hoek et al 2018, however such studies can be expensive and sample size (of the study population) can be a limiting factor. 
Clearly, the expanding interest in WtE facilities, coupled with the current lack of evidence on the health impacts of their operation, requires stringent oversight to safeguard environmental and health outcomes. Our aim was to conduct a systematic review of the potential health effects associated with exposure to airborne emissions from WtE processes (including RDF combustion). The primary motivation for the current review was the perceived lack of data on the potential health impacts of WtE processes and emissions, and the growing demand in regions where WtE has not yet been adopted on a widespread basis. As WtE has been promoted worldwide as a potentially sustainable form of both waste management and electricity generation, we considered it timely to ascertain the extent and breadth of evidence from published studies with health-related data or information associated with airborne emissions from WtE processes. The study designs included in our review comprise epidemiological studies, environmental monitoring, health risk assessments/health impact assessments, and life-cycle analyses (detailed below in Methods). Literature search strategy We conducted a systematic search of PubMed and Google Scholar, supplemented by a hand search of the bibliographies of the articles included for full-text screening. We used PubMed as the primary database source given that our review was focused on health outcomes and PubMed is considered to be the most comprehensive health database. We used Google Scholar as a secondary source to identify relevant literature that PubMed does not catalogue, as done previously for hazardous waste reviews (Cordioli et al 2013). The search terms and Boolean operators (string) that we used were as follows: "air" AND "health" AND "energy" AND "waste" AND "energy from waste" OR "waste to energy" OR "incineration" OR "refuse derive$ fuel" AND "air pollution" OR "air quality" OR "emission". 'Incineration' was chosen as it is the industrial term that represents 'combustion' and 'burning'. In addition, the 'air pollution', 'air quality' or 'emission' terms were used to avoid pollutants or hazards associated with other emissions. Two investigators (TCH, CC) independently screened titles, abstracts and full-texts for inclusion or exclusion of articles. Where there was variation between the two investigators, this was resolved by reviewing the full-text article a second time until agreement was reached. The inclusion criteria used for selection of eligible articles were as follows: Exclusion criteria included articles that related to: (a) Hospital or medical waste, composting of waste, or agricultural waste. (b) Combustion of biomass fuel for cooking and heating in low-income settings. (c) Review papers. Literature review and synthesis We followed the approach (criteria) suggested by the PRISMA guidelines for performing and reporting the flow of a literature review process (e.g. figure 1) (Moher et al 2009). We synthesised study findings by grouping the articles by study design (methods).
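As an aside on the search strategy above, the PubMed leg of such a query can be issued programmatically. The sketch below uses Biopython's Entrez E-utilities wrapper; it is illustrative only — the AND/OR grouping, the wildcard rendering of 'refuse derive$ fuel', the contact e-mail and the retmax limit are assumptions rather than details taken from the review.

```python
# Illustrative only (not the authors' retrieval code): issuing the review's
# Boolean search against PubMed via Biopython's Entrez E-utilities.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder contact address expected by NCBI

query = (
    '("air" AND "health" AND "energy" AND "waste") AND '
    '("energy from waste" OR "waste to energy" OR "incineration" OR "refuse derive* fuel") AND '
    '("air pollution" OR "air quality" OR "emission")'
)  # the grouping and wildcard form are assumptions about the intended precedence

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} PubMed records matched; retrieved {len(record['IdList'])} PMIDs")
# The retrieved titles/abstracts would then feed the two-investigator title,
# abstract and full-text screening described above.
```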
We used the following groupings: epidemiological (examining direct associations between exposure and health risk); environmental monitoring (emissions or exposure assessments or modelling); health risk assessment (focused, standard methodology to estimate risks related to a single or a mix of pollutants; applying health risk estimates from epidemiological studies to quantify the health burden due to the exposure of interest in a defined population), or health impact assessment (broader methodology that assesses the public health impacts to inform decision making; often including HRA methods or other health risk findings) (Gulis 2017); and, life-cycle analyses (LCA; quantifying carbonrelated impacts and indirect health impacts, with some LCAs also addressing direct health impacts). We used a standardised series of tables to summarise the studies and to list exposure assessment methods, health outcomes, summary results, and risk of bias. We provided an overall quality rating for epidemiological studies, similar to the Navigation framework previously developed (Woodruff and Sutton 2014) and demonstrated (Johnson et al 2014). The Navigation framework was developed in recognition that usual quality frameworks used for reviewing health studies, such as Cochrane, do not necessarily translate well to studies of environmental exposures, due to the nature of the exposure and difficulty in conducting randomised trials. As there were few relevant epidemiological studies, the criteria were amended slightly to ensure relevance depending on the study design. For example, we included mention of sensitivity analyses in modelling studies or explicit statements about assumptions used in the analyses. However, we did not critically appraise or scrutinise the assumptions or the software used in the LCA models as this was beyond the scope of our study. Literature search results The PubMed literature search identified 258 relevant primary records (articles) for review. The Google Scholar search identified 11 unique records relevant for review. As such, the complete search gave a combined total number of 269 unique records for consideration. After two investigators independently reviewed the titles of these 269 records, 137 records were identified to be appropriate for abstract screening (which removed 74 records). Sixty-three records were subsequently identified as eligible for full-text review, leading to the exclusion of 46 records. Finally, 17 fulltexts were selected for our review synthesis, plus two of their references to give a total of 19 full-texts to be synthesised (figure 1). The 19 included articles all related to combustion of MSW as RDF or in WtE facilities or processes. MSW incineration studies were included if they presented information or data related to emissions that were relevant to WtE processes, such as incineration of RDF. All included articles were published in the past 15 years, reflecting the increasing interest in WtE. Most studies comprised health impact or risk assessments/risk modelling (n = 7), followed by lifecycle assessments (n = 5), environmental monitoring studies (n = 5), with only two being epidemiological studies. Synthesis and discussion of findings To our knowledge, this is the first systematic literature review focused primarily on studies of the health effects associated with WtE-related air emissions. 
We found that while implementation of WtE technologies is increasing, the majority of incineration-health studies to date do not specifically address the combustion of sorted waste (RDF) for WtE (shown to be different from MSW owing to waste composition characteristics by the environmental monitoring studies reported on below). Previous reviews have focused on the health impacts of waste incinerators (Cordioli et al 2013, Tait et al 2020), the economic implications of WtE technologies (Beyene et al 2018), exposure assessment methods in epidemiological studies of industrially contaminated sites (Hoek et al 2018) or waste incinerators, and the health impacts of general waste management practices (Giusti 2009). There are numerous epidemiological (e.g. cohort) studies on the health effects of other waste management risks including landfill leaching, sewage contamination and ionising radiation, yet few on air pollution emissions from RDF combustion. Due to the small number (n = 2) of epidemiological studies that directly measured health outcomes associated with WtE processes, we believed it was not appropriate to meta-analyse the evidence for WtE health effects. However, we reviewed studies of environmental monitoring and health risk assessments in order to contribute to the evidence base for decision making. The following synthesis details the contributions to this evidence base, from studies detailing process emissions to health risk assessments. Epidemiological studies of health outcomes The direct health effects of exposure to emissions from combustion of RDF for WtE have been little studied. This is likely to be partly due to the difficulty of quantifying population health effects from generally poorly characterised or low levels of exposure (Vrijheid 2000). This is despite previous recommendations that large prospective cohort studies with direct exposure and biomarker measurements be preferentially funded and performed (Giusti 2009). We found only two epidemiological studies relevant to exposures to WtE facilities or RDF emissions. One epidemiological before/after cohort study was performed in Italy among 380 individuals residing near a new WtE facility, with exposure assessed before and one year after operation began (Ruggieri et al 2019). In this biomonitoring study, chromium (but not other heavy metal) concentrations were higher in the urine of participants predicted to be exposed to WtE emissions compared with unexposed but otherwise comparable participants (Ruggieri et al 2019). However, this finding applied in both the baseline and follow-up years, and so the result cannot be directly attributed to operation of the WtE facility. Interestingly, concentrations of other heavy metals were higher in the control subjects, and so were attributed to other sources of personal exposure such as fish intake (arsenic) and tobacco smoke (cadmium) (Ruggieri et al 2019). Hence, residing near the WtE plant was not associated with greater exposure to heavy metals. We considered the study to be of good quality, having used dispersion modelling to assign exposures and having conducted before and after health outcome measurements in an 'exposed' and an 'unexposed' group. Validation of the emissions modelling by environmental sampling could have improved the exposure assessment. Further follow-up is planned for this cohort, which is expected to provide additional data (Ruggieri et al 2019).
A recently published birth cohort study conducted in Taiwan investigated social development in children residing near an incinerator (Lung et al 2020). The study of nearly 20 000 subjects (of which approximately five percent were considered exposed) reported a transitory negative effect on childhood social development for children living within 3 km of an MSW incinerator, although this effect was apparent at six months and no longer evident at 18 months. A limitation of this study was the coarse exposure assessment applied to subjects, which had the potential to lead to exposure misclassification. Exposure assessment ('whether there were incinerators within 3 km of their place of residence') and health outcome reporting were both coarse and subjective, with both being self-reported by parents (Lung et al 2020). We conclude that the results from the two epidemiological studies provide little evidence of an adverse impact of WtE air emissions on health outcomes. See table 1 for further details of the included epidemiological studies. Environmental monitoring While studies of emissions inventory profiles do not include health outcomes, they may provide valuable information on the potential pathways and hazards posed by incineration of the MSW components comprising RDF, with the potential for carcinogenic or toxic emissions relevant to WtE processes. Our review found five articles which reported on emissions testing and environmental monitoring of WtE facilities. In general, we found that the articles related to emissions monitoring predominantly fell into three categories: (1) the first related to estimating pollutant emissions of concern; (2) the second related to the need for monitoring to ensure the efficacy of treatment technologies in removing/reducing pollutants; and (3) the third related to the need for appropriate monitoring to determine the influence of the feedstock on pollutant formation. Of greatest concern for health is the combustion of plastic MSW (composed of hydrocarbon/oil products), which is concentrated in RDF for WtE, and which emits organic and chlorinated/fluorinated compounds (e.g. dioxins), polychlorinated biphenyls, furans, chlorophenols, and mono- and polycyclic aromatic hydrocarbons (Karunathilake et al 2016). Notwithstanding this, two environmental monitoring studies reported that after WtE upgrades to an Italian incinerator facility, which included stricter emission-control measures primarily aimed at reducing dioxin emissions, particulate matter (PM) emissions also declined (Buonanno et al 2010, 2011). This indicates that controlling emissions of critical contaminants such as dioxins and furans may have the additional beneficial effect of reducing PM, a standard air pollutant. The health risks of toxics predominantly relate to cancer, neurological and adverse birth outcomes, and these are considered to pose a greater risk to health than the standard regulated air pollutants such as PM and gaseous compounds. However, exposure to even low levels of PM is not benign, and many epidemiological studies point to a range of risks associated with PM, including increased risk of mortality, cardiovascular morbidity, lung cancer and more (Hime et al 2018). Thus, changes to existing treatment facilities that improve emissions controls for both types of pollutants are beneficial from the standpoint of exposure minimisation. The review also reports on articles which compared or discussed monitoring campaigns and/or trials of varying technologies.
In one study, a two-stage dry treatment system was shown to remove harmful acid gases (hydrogen chloride, SO 2 ) from WtE emissions even with a widely varying (potentially highly chlorinated) waste stream (Dal Pozzo et al 2016). This is an example of a monitoring program which can help provide evidence of the efficacy of treatment technologies. Two articles reported on the influence of feedstock on pollutant emission concentrations. These articles indicated that, rather than combusting RDF directly for electricity generation, mixing certain proportions of RDF components (e.g. certain plastics, wood chips) with traditional fuels (e.g. coal) for combustion to replace electricity for industrial heating applications (e.g. cement kilns) has the potential to reduce sector or total emissions of health-relevant chemicals (e.g. dioxins, mercury) (Chen et al 2014, Richards and Agranovski 2017). We conclude from the results of the environmental monitoring studies that there is a need for regulation of the feedstock used (e.g. removing food waste) for RDF and WtE facilities to maximise complete combustion and minimise carcinogenic/contaminant emissions (e.g. volatile organic compounds), more so than of the treatment technology used. See table 2 for further details of the included environmental monitoring studies. Health risk/impact assessment studies We found seven studies comprising HRAs or HIAs of WtE facilities or RDF emissions. In table 3 we outline the health outcomes assessed in the majority of the HRAs, including the hazard index (HI), hazard quotient (HQ), lifetime cancer risk (LCR) and other indices (4th column). These indices consider cancer and non-cancer risks for various chemicals of concern (3rd column, table 3), e.g. heavy metals, VOCs, and organic compounds such as dioxins and furans. Some of the HRAs/HIAs also considered air pollutant emissions such as NO x , PM, and sulfur oxides (SO x ). The risk of exposure is based on modelled estimates of the chemical/pollutant emissions from each WtE facility or alternative waste disposal method. Some of the studies used proprietary software which includes the exposure-response functions for the chemical/pollutant of concern, which we list in table 3 (3rd column). These studies generally showed that the risk to, or impact on, health from exposure to WtE and RDF incineration emissions is not substantially elevated above 'background' risk levels (Mindell 2005, Roberts and Chen 2006, Krajčovičová and Eschenroeder 2007, Rovira et al 2010, Ollson and Whitfield Aslund et al 2014). They also point to lower emissions from well-run WtE facilities compared to landfill (Paladino and Massabò 2017) and traditional incineration (Krajčovičová and Eschenroeder 2007), or when RDF is substituted for fossil fuel for incineration (Rovira et al 2010).
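The HI, HQ and LCR indices listed above follow the standard risk-assessment relations: a chronic daily intake divided by a reference dose for non-cancer endpoints (HQ, summed into HI), and intake multiplied by a cancer slope factor for the incremental lifetime cancer risk. The snippet below is a generic sketch of those equations; every numeric input is an illustrative placeholder, not a value taken from the reviewed HRAs.

```python
# Generic sketch of the inhalation risk equations behind the HQ, HI and LCR/ILCR
# indices summarised in table 3. All numeric inputs are illustrative placeholders.

def chronic_daily_intake(conc_mg_m3, inhalation_m3_day=20.0, exposure_days_yr=350,
                         exposure_years=30, body_weight_kg=70.0, averaging_years=70):
    """Chronic daily intake (mg per kg body weight per day) via inhalation."""
    averaging_days = averaging_years * 365
    return (conc_mg_m3 * inhalation_m3_day * exposure_days_yr * exposure_years) / (
        body_weight_kg * averaging_days)

def hazard_quotient(cdi, reference_dose):
    """Non-cancer risk: HQ = intake / reference dose; HQ > 1 flags potential concern."""
    return cdi / reference_dose

def incremental_lifetime_cancer_risk(cdi, slope_factor):
    """Cancer risk: ILCR = intake x slope factor; ~1e-6 is a common benchmark."""
    return cdi * slope_factor

# Hypothetical modelled ground-level concentrations (mg/m3) near a facility,
# with placeholder reference doses (rfd) and slope factors (sf).
pollutants = {
    "chromium_VI": {"conc": 1e-7,  "rfd": 1e-4,  "sf": 42.0},
    "dioxin_TEQ":  {"conc": 1e-12, "rfd": 7e-10, "sf": 1.3e5},
}

hazard_index = 0.0
for name, p in pollutants.items():
    cdi = chronic_daily_intake(p["conc"])
    hq = hazard_quotient(cdi, p["rfd"])
    ilcr = incremental_lifetime_cancer_risk(cdi, p["sf"])
    hazard_index += hq
    print(f"{name}: HQ = {hq:.2e}, ILCR = {ilcr:.2e}")

print(f"Hazard index (sum of HQs) = {hazard_index:.2e}")
```

Re-running such a calculation while perturbing the reference concentrations or exposure parameters is the kind of sensitivity analysis that several of the reviewed HRAs recommend.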
Six of the HRA studies estimated that exposure to WtE emissions was unlikely to increase the incremental LCR or HQ for cancer risk (Roberts and Chen 2006, Krajčovičová and Eschenroeder 2007, Rovira et al 2010, Ollson and Knopper et al 2014, Li et al 2015, Paladino and Massabò 2017). Two HRAs reported lower cancer risk for exposure to WtE emissions compared with incineration emissions (Karunathilake et al 2016) or with substitution of RDF for fossil fuels in cement production (Rovira et al 2010). One HRA estimated that cancer risk from exposure (all pathways) to WtE emissions (mainly dioxin) would be lower than for exposure to landfill emissions, and estimated that agricultural (milk and meat) product ingestion was a more important exposure pathway than inhalation of WtE emissions (Paladino and Massabò 2017). A health risk assessment conducted in Slovakia compared a traditional open-air (uncontained) MSW incinerator with a modern WtE plant, and found that the former increased the cancer risk 10-80 times above the background level, while the WtE plant presented a less than one-in-a-million excess risk of cancer (Krajčovičová and Eschenroeder 2007). That HRA estimated a substantially decreased cancer risk when MSW is sorted for RDF and its incineration emissions are properly controlled (contained), as advocated in modern, well-run WtE facilities. In China, a more recent HRA estimated that, under normal conditions, operational levels of emissions from WtE are unlikely to cause adverse health (incremental lifetime cancer) risks among nearby residents, with risks estimated for lifelong exposure through direct inhalation of ambient emissions and landfilling of bottom/solid ash residues (Li et al 2015). The exception to this was the risk of chromium exposure, which slightly exceeded the tolerance value (Li et al 2015). However, Li et al (2015) reported that all scenarios tested were sensitive to the model inputs, and estimated that during abnormal operation (e.g. malfunction of control systems) the WtE facility could also carry an elevated risk due to inhalation of acid gas (hydrogen chloride). Four HRAs assessed non-cancer risks (Mindell 2005, Roberts and Chen 2006, Ollson and Knopper et al 2014, Li et al 2015). Ollson et al (2014a) estimated that abnormal operation of a Canadian WtE facility could lead to infant consumption of breast milk contaminated with dioxins and furans (Ollson and Knopper et al 2014). Two of the HRAs (Li et al 2015, Karunathilake et al 2016) estimated no increased risk of non-cancer health effects from operation of their respective WtE facilities. Modelling studies of UK WtE plants estimated premature (total non-traumatic) deaths and respiratory-related hospital admissions to be less than or equal to one-in-a-million above background rates (Mindell 2005), and the overall risk of dying to be 1 in 4 million in any year (Roberts and Chen 2006). It should be noted that these UK studies were either funded by the proponent company for the WtE facility or written by previous employees of related boards/companies. It is clear from these studies that the choice of scenarios and model inputs can influence the risk findings, and so it is important that sensitivity analyses be conducted. Of note, Li et al (2015) reported that all of the scenarios studied in their analyses (WtE, landfill, and material recovery and composting) were sensitive to the inputs used for the reference concentrations and the landfill gas collection rates.
In sensitivity analyses, the HI for the WtE option increased the most, indicating that careful selection of the reference criteria values is needed and that sensitivity analyses are crucial for better understanding the operational limitations of WtE facilities and for avoiding abnormal operations or malfunction events. Together, we conclude that the HRA results show that under normal operating conditions there is little to no evidence of an increased risk of cancer or non-cancer effects in humans, as WtE facilities are capable of lower emissions (except for a predicted potentially higher emission of chromium) than the existing waste management practices of landfill and traditional incineration. However, close attention is required to ensure operational limits are not exceeded, as such conditions are estimated to be associated with increased risk of dioxin exposure (one HRA) and potentially hydrogen chloride gas exposure (one HRA). This highlights the need for appropriate sensitivity analyses to be conducted during the HRA process, along with careful selection of reference health criteria and consideration of the fuel used for combustion. See table 3 for further details of the included health risk/impact assessment studies. Life-cycle analyses A total examination of the environmental, social, and economic impacts associated with all stages of a product's life, from raw material extraction to final product disposal (e.g. landfill or WtE), is termed a life-cycle analysis/assessment (LCA) (Muralikrishna and Manickam 2017). An LCA is distinct from an HRA in that an LCA considers the full life-cycle of a product, from production to disposal, while an HRA typically considers only one stage of the life-cycle while focusing on a health impact. For example, an LCA for WtE will consider not only the impacts of the resulting toxic contaminants in ash and air emissions, but also emissions which have a greenhouse gas impact, such as carbon dioxide, as well as the impact of the fuel used for the WtE facility. Besides health impacts, LCAs can determine the equitability of a product's environmental impact, and can determine whether the overall impact (both on health and the environment) of one waste management process is more favourable than another. For example, one may ask whether exposure to atmospheric emissions from WtE is less harmful to health than that from unsorted (mass) waste incineration or landfill leachate, also taking into account the health impacts of climate change due to greenhouse gas emissions from each technology; however, none of the studies considered climate change in relation to health outcomes. Future LCA studies of new energy technologies could be important in estimating direct and immediate health impacts (due to a change in pollutant emissions) balanced against the potential for indirect and delayed health impacts due to increased greenhouse gas emissions and climate change. Nevertheless, our review reports the results of five LCAs. As with the HRAs reported above, two of the LCAs predict lower pollutant emissions from combustion of RDF (sorted for WtE) compared with incineration of unsorted MSW (still producing electricity). A Canadian LCA for a WtE facility estimated lower cancer and non-cancer health risk per unit of electricity generated with RDF than for unsorted MSW incineration (Karunathilake et al 2016).
Similarly, a lower health risk was attributable to the sorting and use of relatively high calorie, low toxicity waste for RDF (e.g. wood, paper, plastics, textiles, and rubbers; sorted, treated, shredded, and combusted to produce approximately 4 MWh of energy per tonne) (Reza et al 2013) compared to the use of coal. Although Reza et al estimated lower heavy metal emissions for RDF used in WtE facilities, an exception was an estimated increase in lead emissions. This LCA was conducted with a focus of comparing environmental benefits of the two feedstocks for use in cement kilns. Two of the LCAs estimated greater impacts from WtE processes compared with other waste management processes. Scipioni et al (2009) predicted lower emissions of respiratory related pollutants but greater potential for exposure to carcinogens, climate change pollutants (mainly CO 2 ) and radiation, when comparing dry and wet fly gas scrubbing, with and without WtE processing. Tan and Khoo's (2006) LCA analysis comparing landfill, WtE incineration, and recycling and composting stated that 'energy gained from incineration of waste materials is outweighed by the air pollution generated' and estimated that recycling and composting would result in the least ecosystem impact. The authors acknowledged the generally 'wetter' conditions of their MSW (in Singapore) which they suggested might be more suitable for composting. Although the article mentioned modelling of disability adjusted life years (DALYs; as a health measure) we could not see where these results were calculated or presented. In addition, the assumptions used in this LCA were not explicitly mentioned, so it is difficult to determine how a change in model inputs might influence these findings. In Passarini et al's (2014) LCA which compared various upgrades of an incinerator to enable functioning as a WtE plant, they estimated that concentrations of heavy metals in the fly and bottom ash were the main contributor to carcinogen endpoints and these remained constant over time. However, they estimated decreases in carcinogens and particulate matter in airborne emissions during operation as a WtE facility. The LCA concluded that human health improvements were expected with WtE operations due to both the lowered emissions and the predicted improvements associated with greenhouse gas mitigation. Most of the LCAs reviewed used accepted international methods for LCA, such as ISO standards along with the use of specialist software such as SimaPro and impact assessment methods such as Ecoindicator99. As with the HRA studies, some of the LCA studies highlighted the variability in calculated health risks to be dependent on the reference criteria and dose and other model inputs, thus indicating the necessity for sensitivity analyses to be conducted. In general, we conclude that the predictions from the majority of LCA studies indicate that emissions from, and therefore health risks associated with, WtE plants are lower than for landfill and traditional incineration. However, an increased potential for health risk is highlighted for lead (Reza et al, 2013) and other heavy metals in the bottom and fly ash (Passarini et al, 2014) that may be emitted in later stages of the life cycle (following combustion of RDF for WtE). See table 4 for further details of included life-cycle analyses. 
Implications Our review indicates that there is a dearth of studies on the potential health impacts of WtE-related emissions, even in countries where WtE facilities have been in operation for some years (such as Sweden); however, some practical implications can be drawn from the limited research done. This has implications for the emerging WtE sector. As a consequence of the lack of health studies related to WtE facilities, inference is often drawn from exposure studies to health-related emissions common to combustion of MSW. These studies might provide some indication of potential impacts from WtE process emissions, albeit newer technologies and tighter restrictions of feedstock appear to be implemented in WtE facilities. An example of this is exposure to dioxin emissions from older MSW incinerators, where past epidemiological studies have reported weak to moderate associations between dioxin emissions and an increased incidence of cancers including non-Hodgkin's lymphoma (Viel et al 2008) and sarcoma (Zambon et al 2007) among nearby residents and incinerator workers. These studies were conducted prior to the lowering of incinerator emission volumes through introduction of stricter regulations, and so the findings cannot be directly extrapolated to current WtE technologies. Furthermore, it should be acknowledged that due to the varying waste streams in different geographic regions and for different facilities, research evidence from one country may not accurately or wholly inform policy or practice in other countries/regions. Older studies have also tended to study the incineration of unsorted MSW. The extent to which the existing evidence base reviewed here can support a causal association between exposures to airborne emissions from WtE facilities and adverse health impacts, is very limited. While the evidence base, as a whole, is weak and there is little evidence of effects under normal operating conditions of WtE plants, the review has highlighted some potential areas for further study. There is clearly a place for more studies of the potential for health impact from WtE facilities, using the various study types included in this review: epidemiological studies; HRAs; LCAs; and, environmental monitoring. However, given the cost of completing well designed and adequately powered epidemiological studies, and the difficulty in ensuring sufficient sample size or a non-exposed control group, it is likely that other methods such as health risk assessment, along with exposure modelling, with or without LCAs, will prove to be useful in assessing new WtE facilities. Notwithstanding the above, there is a need for well-designed epidemiological studies of exposure to WtE emissions. Such studies could provide empirical data for subsequent HRAs and LCAs, but need to address issues of exposure misclassification potential which has occurred in the past (Forastiere et al 2011) such as using distance based measures as an exposure proxy. The collection of environmental monitoring data of environmental media, e.g. air and soil, in the vicinity of WtE facilities, along with emissions monitoring, would also facilitate validation of the exposure models used in epidemiological and HRA studies. 
There is an argument to be made for more standardised exposure assessment methods and standardised measurement units, reference criteria and models in studying the health impacts of WtE emissions, especially for more harmful components such as dioxins, given the variety of methods presented in the LCAs and HRAs reviewed for this paper. Further, we agree with previous researchers that modelling studies, such as HRAs and LCAs, should explicitly outline model input assumptions and associated uncertainties, given their influence on model outcomes (Scipioni et al 2009). Some of the studies included in this review have highlighted the need for special consideration of the feedstock used for RDF and WtE facilities, given that it is one of the critical issues affecting contaminant emissions, over and above the treatment technology used. For instance, the World Energy Council (2016) stated that dioxin (and other toxins such as furan) emissions from RDF can be reduced by nearly 100% with the implementation of regulatory emission-control strategies within the WtE sector, such as controlling the nature of the feedstock. This can result in emission volumes which are lower, per equivalent energy unit, than for coal or gas-powered power plants (World Energy Council 2016). Regulating the pre-sorting of waste for WtE processes can help to maximise complete combustion and minimise carcinogenic emissions (Reza et al 2013, Karunathilake et al 2016). Others state that food waste, for example, should be removed from the RDF stream as it yields little exportable energy in WtE processes (Diggelman and Ham 2003). Some researchers state that the higher calorific value of RDF results in more complete (higher temperature) combustion, resulting in lower emissions of other potentially toxic pollutants such as volatile organic compounds (Friege and Fendel 2011). Although not strictly airborne emissions, there is a need for increased scrutiny of the use/disposal of bottom and fly ash, given that at least two studies estimated increased concentrations of chromium and dioxin in bottom/fly ash. Others have advocated that WtE residuals should be re-purposed (isolated) as construction 'filler' rather than go to landfill (Tan and Khoo 2006, Passarini et al 2014, Malakahmad et al 2017). While toxins such as dioxins and furans found in breast milk, or heavy metals found in urine, are not themselves measured health impacts, they could cause health impacts with accumulation over time. The WHO recognises that, due to the omnipresence of dioxins, the whole population has background exposure which is not expected to affect human health (such as the levels found in our included studies); however, as these toxins have high toxicity, efforts need to be taken to reduce additional exposure such as from waste incineration (WHO 2020). As such, we can best prevent or reduce this exposure by continuing to measure directly at the source. More broadly, LCAs including health impacts of MSW stream management should not only consider direct pollutant emissions, but also the potential effects of repurposing waste, such as reducing and recycling, and the impact on greenhouse gas production and transport emissions (Giusti 2009).
A fair and full LCA may show that the most benefit from RDF/WtE processes may come from fuel substitution for industrial processes such as cement manufacturing (Reza et al 2013, Richards andAgranovski 2017), within which combustion ash could be isolated to further reduce the environmental impact from landfill. While WtE may have a larger carbon footprint (CO 2 emissions) compared with recycling of materials (e.g. plastic) (Tan and Khoo 2006), it generally emits lower concentrations of greenhouse gases (CO 2 , methane) than landfill (Giusti 2009, Clean Energy Finance Corporation 2015, Malakahmad et al 2017, Beyene et al 2018, Murray 2018, Orru et al 2019. Furthermore, WtE technology (e.g. dry treatment of flue gas) has the ability to offset traditional (fossil fuel) combustion for electricity generation and thereby potentially reduce total emissions of greenhouse gases or criteria air pollutants (Scipioni et al 2009). These are all important considerations from a broader public health perspective. LCA methodology appears to be well suited to provide useful information for the planning and design stage of waste management facilities, as they allow identification of alternative processes and treatment requirements, and so can enable decisions on long term infrastructure investments which benefit health not only locally, and more broadly. We recommend that in regions where WtE has not yet been fully adopted, that LCA incorporating HRA, should be undertaken using local data inputs and with local conditions in mind. As many regions of the world are needing to manage unprecedented volumes of waste, and at the same time are also experiencing slow implementation of cleaner/safer technologies, the risks related to waste management are likely to remain a challenge for years to come. Using RDF for WtE may address a gap in the circular economy for recovering energy from waste, and while seen as a renewable resource (Natural Resources Canada 2015), decision-makers should appropriately assess applications for new WtE facilities, taking a precautionary but not inhibitory approach, in light of the lack of rigorous health evidence. Conclusion We have found a dearth of well-conducted epidemiological studies investigating the health risks of exposure from WtE processes. The limited evidence from the two epidemiological studies, along with HRAs, LCAs and emissions monitoring studies suggests that the risks to human health from emissions of appropriately designed, properly managed (including feedstock), state-of-the-art WtE incineration plants are relatively lower compared to prevailing alternative waste management practices, including incineration of unsorted waste (without energy recovery) and land fill. Importantly, the waste management hierarchy recommends an emphasis on the reduction of material going to waste before it is re-purposed or recycled, as it is clear that the input waste stream can substantially influence pollutant emissions. While WtE practice might be a reasonable option for mitigating waste management and energy security issues, its implementation requires proper design, operation, and emissions management (monitoring) and control, as well as ongoing environmental and health monitoring and surveillance to maximise both economic and environmental benefits while minimising health impacts or risks. With respect to planning and design of WtE facilities, it is important that health risk assessments supported by comprehensive exposure monitoring, and robust modelling (e.g. 
detailed emissions modelling plus atmospheric modelling and real population data) be conducted for proposed WtE facilities to ensure that protective measures are optimally designed and emissions criteria appropriately implemented. Furthermore, close attention to the health data used and the assumptions made for reference doses, exposure duration and frequency, and concentration-response functions is needed. It is equally important for HRAs and LCAs to include sensitivity analyses to test such assumptions. Future reviews will rely on additional well-conducted epidemiological studies, HRAs and LCAs, and exposure modelling and monitoring, to further our knowledge in this area.
Quantum Chemistry–Machine Learning Approach for Predicting Properties of Lewis Acid–Lewis Base Adducts Synthetic design allowing predictive control of charge transfer and other optoelectronic properties of Lewis acid adducts remains elusive. This challenge must be addressed through complementary methods combining experimental with computational insights from first principles. Ab initio calculations for optoelectronic properties can be computationally expensive and less straightforward than those sufficient for simple ground-state properties, especially for adducts of large conjugated molecules and Lewis acids. In this contribution, we show that machine learning (ML) can accurately predict density functional theory (DFT)-calculated charge transfer and even properties associated with excited states of adducts from readily obtained molecular descriptors. Seven ML models, built from a dataset of over 1000 adducts, show exceptional performance in predicting charge transfer and other optoelectronic properties with a Pearson correlation coefficient of up to 0.99. More importantly, the influence of each molecular descriptor on predicted properties can be quantitatively evaluated from ML models. This contributes to the optimization of a priori design of Lewis adducts for future applications, especially in organic electronics. ■ INTRODUCTION Optimizing functional materials under real working conditions is essential but can be a tedious and expensive process. Accordingly, this has been assisted by computational tools for several decades. A variety of chemical properties can be predicted using current computational methods, but these often require computing the quantum mechanical wavefunction, a costly endeavor, especially for large molecules and any properties involving excited states and atypical bonding. In the last decade, machine learning (ML) is becoming a versatile computing tool to assist molecular design and optimization, together with calculations from physical laws. ML algorithms have been successfully employed for classification, regression, clustering, or dimensionality reduction tasks of large sets of input data. Machine learning is promising to solve data bottlenecks in many problems in chemistry and materials science. 1−6 Solutions employing machine learning offer advances to screen high volumes of compounds for advanced material applications ranging from efficient organic photovoltaics 7−12 to organic light-emitting diodes 13,14 to high-temperature alloys 15 and many more. Having eligible descriptors is critical for developing many ML models for practical applications as well as advancing fundamental understanding. Some descriptors can be computed readily from molecular structures, which can be called molecular descriptors. These descriptors, for example, the number of hydrogen atoms or molecular weight or the number of conjugated bonds, can be generated readily and economically for use in ML models. 16 −18 In several reports, a variety of descriptors were obtained from the traditional quantum approach, so-called quantum descriptors. 9,19,20 Examples include frontier molecular orbital energies, electron population, and triplet states. Despite the advantage of being calculated from first principles, compared to molecular descriptors, quantum descriptors present the challenges of time-and resource-consuming calculations and the uncertainty originating from their dependency on the level of theory employed. 
In addition to predicting the final performance of functional devices, a variety of fundamental properties of molecules and materials have been predicted by machine learning. 21−25 For example, machine learning can be employed to predict the energies of highest occupied molecular orbital−lowest unoccupied molecular orbital (HOMO−LUMO) orbitals, 26 lattice energies, 27 and charge transfer integrals of organic crystals. 24,27 Many such important properties are either very challenging to obtain experimentally or complicated and timeconsuming to be calculated using traditional computational tools. Machine learning can be applied to rapidly screen and predict those properties. For example, intramolecular reorganization energy, which is an important property of organic semiconductors, is typically an expensive calculation using ab initio methods. Recently, the employment of deep neural networks or kernel ridge regression ML models significantly reduced the time and power required for computing intramolecular reorganization. 28 Machine learning has been shown, in combination with legacy quantum methods, to increase the accuracy of computed properties with a minimal increase in computational expense. 29 In this work, we aim to employ machine learning using readily obtained molecular descriptors to predict properties that can be calculated using density functional theory (DFT) but are costly and highly sensitive to calculation details. We chose a model dataset to be the adducts of Lewis bases (LBs) and Lewis acids (LAs). In addition to showcasing classic coordinate covalent bonds, our model dataset is inspired from recent work using Lewis acids to form adducts with organic semiconductors to tune their optoelectronic properties and doping levels. 30−36 These adducts are formed by the partial electron density transfer from a semiconducting conjugated molecule or polymer, usually containing Lewis basic moieties, to external Lewis acids. Most of the molecules in these adducts have an alternating donor−acceptor motif, in which the acceptor unit contains atoms with a nonbonding pair of electrons capable of coordinating with Lewis acids. Boronbased LAs, such as BF 3 , BCl 3 , and B(C 6 F 5 ) 3 , have been widely utilized. Our recent work 37 employed electronic structure calculations to confirm the hypothesis that the changes in optical properties of parent conjugated molecules are tied to electron transfer from these molecules to Lewis acids. 30 Generally, in chemistry, this electron transfer (hereinafter called charge transfer to be consistent with previous studies) is a crucial quantum mechanical quantity. It relates not only the binding strength of a Lewis acid and a Lewis base but also the nature of the bond, for example, whether it is formed mainly by electrostatic or covalent interactions. 38 Although charge transfer in LA−LB bonds is intuitively understood via concepts from organic chemistry, it is too microscopically intricate to confirm experimentally. 38,39 We showed that the calculated amount of charge transfer correlated with the degree of red shift in optical absorption of the adducts for a given set of molecules. 37 In this paper, we broaden the screening and predicting power by using machine learning to predict the charge transfer and other optoelectronic properties of those adducts from molecular descriptors that can be obtained readily and inexpensively. 
In addition, we also obtain the relative weight of each molecular descriptor, reflecting its impact on the properties of adducts and permitting insight into the chemistry and physics associated with the molecular design. ■ METHODOLOGY We designed 1016 adducts from 90 Lewis bases (LBs) and 12 Lewis acids (LAs). A majority of these LBs are acceptor moieties commonly used in high-performing donor–acceptor-based organic semiconductors, 36,40−44 while others are typical LBs in chemistry, such as NH 3 , (CH 3 ) 2 NH, and aniline. Most LAs are common Lewis acids and have been experimentally validated to bind with conjugated Lewis bases and cause changes in optoelectronic properties. 30,35,36,38 Some representative LBs and LAs are presented in Figure 1, and all chemical structures are given in Supporting Information (SI) Figures S1 and S2. The adducts were formed by binding one LB to one LA. Charge transfer for an individual adduct was calculated using a two-step approach successfully implemented in our previous study. 37 First, the nuclear coordinates of most adducts were optimized in DFT using the APFD 45 exchange–correlation functional with the 6-311G(d,p) basis set. The adducts with BI 3 as the LA were calculated with the LANL2DZ basis set because of the heavy iodine atoms. The aforementioned functional was validated by comparing it to HF and two other DFT-based functionals (i.e., B3LYP and CAM-B3LYP with GD3BJ dispersion). Figure S3 demonstrates the comparison conducted with adducts of NH 3 (i.e., the representative for N-sp 3 ), pyridine (i.e., the representative for N-sp 2 ), and acetonitrile (i.e., the representative for N-sp), both in vacuum and in dichlorobenzene (DCB) using the polarizable continuum model (PCM). The degree of charge transfer is indicated to be fairly insensitive to the choice of DFT functional and impacted fairly uniformly in the presence of an implicit solvent across a representative subset of our Lewis acid−base pairs. In addition, it is noticeable that the charge transfer calculated by Hartree−Fock, which includes the full exchange with no self-interaction and no correlation potential, is in close agreement with the three different levels of functional choice. Due to the inclusion of an empirical dispersion model in APFD, as well as its use and agreement with experimental trends seen in our prior study, 37 this relatively recent functional was employed here for geometry optimization. After optimization, the nonbonding adducts were eliminated from the dataset. Then, charge transfer was calculated using atomic partial charges from NBO population analysis. 46,47 Molecular descriptors were calculated using the Dragon package. 48 Four groups of descriptors from Dragon were selected based on the level of insight with which they can inform the molecular design. They are constitutional descriptors (molecular composition information such as the molecular weight, MW, or the mean atomic Sanderson electronegativity scaled to carbon, Me), atom-centered fragments (Ghose−Crippen descriptors defined for hydrogen atoms, carbon atoms, and heteroatoms, such as the number of =CH 2 fragments, C-015), functional group count descriptors (counts of various functional groups, such as the number of nonaromatic conjugated C(sp 2 ) atoms, nCconj, or the number of imides, nN(CO) 2 ), and molecular properties (such as the Moriguchi octanol−water partition coefficient, logP, MLOGP).
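Dragon is proprietary software, so purely as a rough illustration, the open-source RDKit can generate a handful of loosely analogous constitutional and atom-count descriptors from SMILES strings and concatenate them into a single LA–LB feature vector. The descriptor names, SMILES strings and the choice of RDKit below are assumptions for illustration, not the Dragon workflow used in this work.

```python
# Rough illustration only: RDKit stands in for the proprietary Dragon package,
# so these are analogues of (not identical to) Dragon's constitutional/count
# descriptors. SMILES strings are examples, not the actual LB/LA set.
from rdkit import Chem
from rdkit.Chem import Descriptors

def simple_descriptors(smiles, prefix):
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    return {
        f"{prefix}_MW": Descriptors.MolWt(mol),                                  # molecular weight
        f"{prefix}_nB": sum(a.GetSymbol() == "B" for a in mol.GetAtoms()),       # boron count
        f"{prefix}_nAl": sum(a.GetSymbol() == "Al" for a in mol.GetAtoms()),     # aluminium count
        f"{prefix}_nN": sum(a.GetSymbol() == "N" for a in mol.GetAtoms()),       # nitrogen count
        f"{prefix}_nAromAtoms": sum(a.GetIsAromatic() for a in mol.GetAtoms()),  # aromatic atoms
    }

# Example adduct: pyridine (Lewis base) bound to BF3 (Lewis acid).
features = {**simple_descriptors("c1ccncc1", "lb"), **simple_descriptors("FB(F)F", "la")}
print(features)  # concatenated LB + LA descriptor vector for one adduct
```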
Molecular descriptors were independently calculated for the LB set and the LA set and then combined to create 141 descriptors (Table S1) to build ML models and predict the aforementioned charge transfer and other properties of the adducts. The ML models in this study/research were chosen based on their versatility and applicability in chemistry and materials science. They belong to linear-based models (linear regression (LR) and ridge linear regression (RIDGE)), support vector machine regression (SVR), k-nearest neighbor regression (KNN), artificial neural network (ANN), and decision-treeensemble-based models (random forest (RF) and gradient boosting (GB)). 20% of the dataset was selected as the test set using the stratified shuffle split function of the Scikit-learn Python module, which randomly selects the test set while keeping the histogram of both the training set and test set similar to the overall dataset ( Figure S5). 49,50 This splitting approach, often called stratified sampling in statistics, is commonly used in machine learning to avoid significant sampling bias toward certain groups of the predicted values. 50 Prior to training models, grid searches were also performed to optimize the model hyperparameters. The models are validated by two methods. First, they are used to predict the test set, and then, the Pearson correlation coefficients (r) and root mean squared error (RMSE) between ML-predicted results and DFT-calculated ones can be calculated. 50 The ML models were also validated by cross-validation algorithm with stratified-shuffle-split as the splitter resulting in 30 data points of r and RMSE for testing sets. ■ RESULTS AND DISCUSSION The data analysis before employing ML models is presented in Figures 2 and S4. The DFT-calculated charge transfer distribution of over 1000 adducts is shown in a histogram in Figure 2a, with two apparent peaks at around −0.15 and −0.30 electrons. This number indicates the amount of electron transfer (i.e., loss, hence the negative sign) from Lewis bases to Lewis acids upon the formation of adducts. Figure 2b reveals that the average absolute value of charge transfer in adducts with boron-based LAs (∼0.3) is noticeably higher than aluminum-based LAs (∼0.15), corresponding to the two aforementioned peaks in the histogram. The small peak near 0 in the histogram in Figure 2a is the result of charge transfer for SO 2 adducts. The average charge transfer is −0.20 e with a standard deviation of 0.09 e. This range of charge transfer in LALB adducts is consistent with previous computational and experimental studies. 38,51 In addition to charge transfer, we extracted other physiochemical properties of the adducts from the same DFT calculations in order to get more insights into the coordination bonds and showcased the applicability of machine learning in predicting a wide range of properties. One such important property is the formation energy, which indicates the binding strength of Lewis acids and bases energetically. Figure 2c shows the average formation energy for each LA, which is the difference in total energy of the product (adducts) and reactant (LA plus LB). The variation of formation energy among different LAs is discernably different from that of charge transfer. For example, the adducts of AlCF and AlCl 3 have the higher formation energy but lower charge transfer compared to those of BCF and BCl 3 , respectively. On the other hand, BH 3 has comparable formation energy but much higher charge transfer compared to those of AlH 3 . 
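A minimal sketch of the train/test validation workflow outlined in the Methodology above (a stratified shuffle split on the binned continuous target, then Pearson r and RMSE on held-out adducts) is given below. The placeholder arrays, bin edges, estimator settings and random seeds are illustrative assumptions, not the study's data or tuned hyperparameters.

```python
# Illustrative sketch of the stratified split / evaluation workflow; all arrays
# and settings are placeholders, not the paper's data or tuned hyperparameters.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import StratifiedShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(1016, 141))        # descriptor matrix (placeholder values)
y = rng.normal(-0.2, 0.09, size=1016)   # DFT charge transfer in e (placeholder values)

# Stratify the continuous target by histogram bin so the test set keeps the
# overall charge-transfer distribution, as described in the Methodology.
bins = np.digitize(y, np.quantile(y, [0.2, 0.4, 0.6, 0.8]))

splitter = StratifiedShuffleSplit(n_splits=30, test_size=0.2, random_state=0)
r_values, rmse_values = [], []
for train_idx, test_idx in splitter.split(X, bins):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    r_values.append(pearsonr(y[test_idx], pred)[0])
    rmse_values.append(np.sqrt(mean_squared_error(y[test_idx], pred)))

print(f"mean Pearson r = {np.mean(r_values):.2f}, mean RMSE = {np.mean(rmse_values):.3f} e")
```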
The uncorrelated behavior of the properties of over 1000 adducts is indicated in Figure 2d and is consistent with Figure 2e. Compared with the continuous distribution of charge transfer, one can deduce that the charge transfer is not well correlated with any of the descriptors, as demonstrated in Figure S4. Figure 3 demonstrates the exceptional performance of the ML models in predicting charge transfer (Figure 3a), and the cross-validation results (Figure 3d) demonstrate the reliability of the ML models over different choices of training and test sets. Among these, ANN, RF, and GB perform the best, with r around 0.97 and RMSE around 0.02 e, which is about 10% of the second peak in the charge transfer histogram. The fact that the ANN, RF, and GB models are more accurate than the LR, RIDGE, and KNN models might be attributed to the aforementioned low correlations between charge transfer and the descriptors, related to the difference in their distributions (continuous versus discrete). In order to inform molecular design toward adducts with desirable charge transfer, we evaluate the influence of each descriptor in determining the outcome predicted by the ML models, which is the charge transfer in this case. To that end, we extract the feature importance from the RF and GB models (with 30 repetitions of cross-validation) using available algorithms in Scikit-learn and calculate the relative weight of each descriptor by dividing its feature importance by the highest feature importance (i.e., that of la_nB). All molecular descriptors with their relative weights are given in Table S1 for both the RF and GB models. The 20 descriptors (out of 141) with the highest relative weights from the RF model are plotted in Figure 3b. Interestingly, among those 20 descriptors, the numbers of high-weight descriptors for LAs and LBs are 15 and 4, respectively (total_MW is the total molecular weight of the LA and LB). On average, the relative weights of all LA and LB descriptors from the RF model are 0.171 ± 0.212 and 0.018 ± 0.025, respectively, and those from the GB model are 0.042 ± 0.180 and 0.005 ± 0.014, respectively. This implies that, at least for this dataset, the molecular descriptors of LAs are more significant than those of LBs in determining the charge transfer. With 11 of the 12 LAs containing either B or Al to bind with LBs, both the GB and RF models properly capture the two highest-weight descriptors: the number of boron atoms (la_nB) and the number of heavy atoms, aluminum in this case (la_nHM). Apart from la_nB and la_nHM, other notable LA descriptors are the mean atomic polarizability (la_Mp), the mean atomic van der Waals volume (la_Mv), and the molecular weight of the LA (la_MW). Notable LB descriptors are the number of double bonds between carbon and a heteroatom (C-041) and the number of imides (nN(CO) 2 ). Furthermore, a challenge in studying chemically combinatorial datasets is that machine learning extrapolability cannot be assessed accurately using (even stratified shuffle) random splitting to obtain training and testing sets. It has been suggested that the so-called leave-one-cluster-out (LOCO) method should be used to provide more insight into the extrapolability of machine learning models. 55 In order to address this issue, the charge transfer dataset was clustered to perform LOCO predictions. First, the dataset is grouped into five different clusters based on LB structures (Table S2).
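This leave-one-cluster-out protocol maps directly onto scikit-learn's LeaveOneGroupOut splitter, as sketched below; the cluster labels, placeholder arrays and estimator settings are illustrative assumptions, not the actual clustering of Table S2.

```python
# Illustrative LOCO sketch: each LB-structure cluster is held out in turn and a
# model trained on the remaining clusters predicts it. Arrays, cluster labels
# and settings are placeholders, not the study's data or clustering.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(1016, 141))          # descriptor matrix (placeholder)
y = rng.normal(-0.2, 0.09, size=1016)     # charge transfer (placeholder)
clusters = rng.integers(0, 5, size=1016)  # 5 hypothetical LB-structure clusters

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=clusters):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    r, _ = pearsonr(y[test_idx], model.predict(X[test_idx]))
    print(f"held-out cluster {clusters[test_idx][0]}: Pearson r = {r:.2f}")
```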
Figure 4a shows the RF-predicted charge transfer values of LOCO predictions with the highest and the lowest Pearson correlations and those of "normal" predictions based on stratified shuffle split (Figure 3a). Similar graphs of the remaining ML models are demonstrated in Figure S9, and the Pearson correlation boxplots of LB-structure-based LOCO predictions of seven ML models are presented in Figure S11a. The results indicate that the LOCO predictions present an average of 16.21% lower Pearson correlation values compared to the stratified shuffle selection. This reduction, which is smaller than those reported in prior studies, 55,56 confirms the robustness of our models in predicting charge transfer for Lewis bases that have structure motifs not presented in the training set. It is also noticeable that all ML models demonstrate poor performance in predicting charge transfer very close to zero. For LA clustering, the dataset is divided into 12 clusters corresponding to 12 LAs (each cluster is named by the representative LA) to conduct LOCO predictions. The similar graphs and boxplots of LA-structure-based LOCO predictions are illustrated in Figures S10 and S11b, which can be taken as supporting evidence for the lower accuracy in LOCO predictions compared to normal prediction, especially for BCF and SO 2 clusters. For BCF, high DFT-calculated charge transfer adducts cannot be predicted well, which might result from the significantly high number of fluorine atoms in BCF compared to all other LAs in the training set, where all BCF adducts are excluded. In the case of SO 2 , the fact that the sulfur atom is not presented in the LOCO training set descriptors (none of the other LAs has sulfur) is hypothesized to result in the inferior LOCO predictions. Compared to all other clusters, the BCF and SO 2 results are statistically determined to be outliers. With the exclusion of the outliers, the average reduction of Pearson coefficients with LA-structure-based LOCO predictions is 13.33%. Finally, the Pearson correlations of both LA-structure-based and LB-structure-based LOCO prediction were combined for each ML model and are demonstrated in Figure 4b. The boxplots show the reliability of all ML models with average Pearson correlation values all above 0.7 except for the LR model. Similar to the normal prediction, RF is among the most stable ML model with the highest performance (i.e., 0.84 ± 0.14 in Pearson correlation) in the LOCO predictions. Given the high accuracy of ML models in predicting charge transfer, we used them to predict other DFT-calculated properties of the adducts employing the same set of descriptors. One such property is the first excited state (hereinafter called ES1) of the adducts and the other is the shift in the first excited states of the adducts from the first excited states of Lewis bases (hereinafter called delES1). ES1 and delES1 are desirable fundamental properties in designing LALB adducts as light emitters or light absorbers in LEDs, solar cells, and other applications. All excited states were computed by time-dependent DFT (TD-DFT) with the same level of theory as the one used for geometry optimization. The average and standard deviation values of ES1 are 2.927 and 1.509 eV, respectively, and those of delES1 are −0.197 and 0.816 eV, respectively. Histograms of ES1 and delES1 are given in Figure S12. The negative delES1 indicates the red shift in the adducts compared to LBs, which is consistent with the Figure 5. 
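For clarity, delES1 is simply the first excited-state energy of the adduct minus that of its parent Lewis base, so a negative value corresponds to a red shift. The tiny sketch below uses hypothetical TD-DFT energies, not values from this work.

```python
# delES1 = ES1(adduct) - ES1(parent LB); negative values indicate a red shift.
# The energies below are hypothetical placeholders, not results from the paper.
es1_lb = {"pyridine": 4.8, "quinoxaline": 3.9}                             # parent LBs (eV)
es1_adduct = {("pyridine", "BF3"): 4.3, ("quinoxaline", "B(C6F5)3"): 3.2}  # adducts (eV)

for (lb, la), e_adduct in es1_adduct.items():
    del_es1 = e_adduct - es1_lb[lb]
    shift = "red shift" if del_es1 < 0 else "blue shift"
    print(f"{lb}-{la}: delES1 = {del_es1:+.2f} eV ({shift})")
```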
In particular, for ES1, r varies narrowly around 0.96 for all models. The same descriptors are also shown to predict two other fundamental properties of the adducts (the HOMOs and the formation energy) with a high level of accuracy (Figures S13 and S14). As has been noted by others, accurate DFT prediction of the HOMO−LUMO gap may be quite sensitive to calculation details, including the particulars of the exchange−correlation functional and how solvent models are employed, especially when extrapolating to larger complexes and the solid state. 57 Although we see encouraging trends in our preliminary ES1 and delES1 results here, a comprehensive study of these issues remains for future work.

■ CONCLUSIONS

In summary, we demonstrate machine learning analysis as a powerful yet facile alternative to direct DFT calculations in quantum chemistry. Using readily obtained molecular descriptors, machine learning can predict the DFT-calculated charge transfer and other, more advanced, physical and chemical properties of Lewis acid−Lewis base adducts. The predictions of charge transfer, first excited states, red shifts of the first excited states, HOMOs, and formation energies of the adducts show a high level of accuracy for a wide range of machine learning models, from linear regression to decision-tree regression; especially noteworthy is the exceptional accuracy of the ANN and of ensemble models such as RF and GB. Even with leave-one-cluster-out testing, our ML models retain a relatively high level of accuracy in predicting the DFT charge transfer of most Lewis adduct clusters whose structural motifs are absent from the training chemical space. We also analyze the feature importances that influence the prediction of charge transfer for the RF and GB models, which might provide insights for molecular design toward specific applications. In a broader context, our approach promises to accurately and economically screen and predict a variety of fundamental properties of molecules that influence the performance of functional devices, such as solar cells and LEDs.

■ ASSOCIATED CONTENT
2023-05-21T15:09:43.325Z
2023-05-19T00:00:00.000
{ "year": 2023, "sha1": "8f4e2b7a2d42ffc2c4204edadb8daf670d60835a", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1021/acsomega.3c02822", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "68930e09883dadfbc34f252280a808cac8c719b9", "s2fieldsofstudy": [ "Chemistry", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
249808760
pes2o/s2orc
v3-fos-license
A Metabolomic Profiling of Intra-Uterine Growth Restriction in Placenta and Cord Blood Points to an Impairment of Lipid and Energetic Metabolism (1) Background: Intrauterine growth restriction (IUGR) involves metabolic changes that may be responsible for an increased risk of metabolic and cardiovascular diseases in adulthood. Several metabolomic profiles have been reported in maternal blood and urine, amniotic fluid, cord blood and newborn urine, but the placenta has been poorly studied so far. (2) Methods: To decipher the origin of this metabolic reprogramming, we conducted a targeted metabolomics study replicated in two cohorts of placenta and one cohort of cord blood by measuring 188 metabolites by mass spectrometry. (3) Results: OPLS-DA multivariate analyses enabled clear discriminations between IUGR and controls, with good predictive capabilities and low overfitting in the two placental cohorts and in cord blood. A signature of 25 discriminating metabolites shared by both placental cohorts was identified. This signature points to sharp impairment of lipid and mitochondrial metabolism with an increased reliance on the creatine-phosphocreatine system by IUGR placentas. Increased placental insulin resistance and significant alteration of fatty acids oxidation, together with relatively higher phospholipase activity in IUGR placentas, were also highlighted. (4) Conclusions: Our results show a deep lipid and energetic remodeling in IUGR placentas that may have a lasting effect on the fetal metabolism. Introduction Intrauterine growth restriction (IUGR) or fetal growth restriction (FGR) generally refer to inadequate fetal growth with a birthweight below the 10th percentile according to gestational age and sex [1,2]. It affects up to 10% of pregnancies and increases the risk of perinatal morbidity and mortality, as well as the risk of long-term onset of metabolic and cardiovascular diseases in [3]. Prenatal identification of IUGR relies on ultrasound measurements, but this offers poor prognostic ability [1], and biomarkers of the disease are still lacking. One of the branches of metabolomics, known as targeted metabolomics, consists in measuring absolute concentrations of a large but predefined set of metabolites. Metabolites are measured in samples from subjects with different clinical conditions (e.g., IUGR versus control placentas) and different statistical models are built to discriminate between affected and control samples. This hypothesis-free modeling of metabolomic data allows for the highlighting of candidate biomarkers, while shedding new light on pathophysiological mechanisms. Maternal blood, urine and hair, amniotic fluid, cord blood and newborn urine have been investigated using metabolomics in the context of IUGR [4,5]. These studies were recently subjected to a meta-analysis in which 15 publications were included [4]. Liquid chromatography paired with mass spectrometry was the most commonly used metabolomic approach, showing that fatty acids, phosphosphingolipids and amino acids were the most prevalent predictive metabolites. Vitamin D was the most prevalent predictive biomarker in the blood in the first trimester of pregnancy, the second one being homocysteine, an intermediate metabolite of DNA methylation, in the amniotic fluid in the second trimester. A deregulation of lipid metabolism, mostly fatty acids involved in energetic supply, was also observed in maternal blood. In the blood of mothers, Sovio et al. reported a metabolomic signature predictive of FGR [6]. 
They performed untargeted metabolomics using UPLC-MS/MS in 175 FGR cases compared to 299 controls at 12, 20 and 28 weeks of gestational age, highlighting 22 discriminating metabolites. A ratio calculated from four of these metabolites showed good predictive performance with respect to FGR outcome (AUC = 0.78). In neonates' urine, increased levels of myo-inositol were found in FGR cases. Myoinositol is known to regulate the free fatty acids released from adipose tissue and is potentially involved in metabolic syndrome [7,8]. Umbilical cord blood metabolome was shown to correlate strongly with birth weight, especially lysophosphatidylcholines, fatty acids and phosphatidylcholines, in the investiga-Here, we present a targeted metabolomics profiling of IUGR, conducted in cord blood and in two replicated placenta cohorts, and carried out using LC-MS/MS quantitative targeted metabolomics. Patients Placentas and cord blood samples were collected at the University Hospital of Angers, France. Maternal and fetal clinical data were collected from the patients' obstetric records. All patients gave written consent for the use of their placenta. This study was authorized by the CPP (Comité de Protection des Personnes) ethics committee and registered with the French Ministry of Research under number DC-2011_1467. The cohort has also been registered with the CNIL (Commission Nationale de l'Informatique et des Libertés). We included IUGR patients affected by poor placental perfusion (placental insufficiency), on the basis of clinical criteria with confirmed vascular anomalies seen on placentas, and excluded other causes of IUGR such as maternal hypertensive, metabolic and genetic diseases. Placentas were obtained from caesarean sections before or during the onset of labor, or from vaginal delivery. For the analysis, patients were classified into two groups, Intra-Uterine Growth Restriction (IUGR) and control. IUGR was defined by a reduction of fetal growth during gestation, with a notch observed by Echo-Doppler in at least one uterine artery and with Doppler abnormalities on umbilical Doppler and/or cerebral Doppler and/or ductus venosus, and with a birth weight below the 10th percentile according to the Audipog growth curve [1] and confirmed by the anatomopathological analysis of the placenta after birth. The control group was defined by women with normal pregnancies and who underwent a planned caesarean section before labor at term. Two cohorts of IUGR placental tissue, hereinafter named P1 (n = 20 IUGR versus 20 controls) and P2 (n = 24 IUGR versus 22 controls), were collected over a period of two years (2016-2017). Most of the women included in the IUGR group gave birth after a caesarean section (15/20 in P1 and 22/24 in P2) (Tables S1 and S2). These two cohorts differed only in their IUGR level of severity, the second cohort being more severe with a lower term age. Placental and Cord Blood Samples All placental tissues were dissected within 30 min after delivery. The protocol for placental dissection has been described previously [15]. Briefly, after removal of maternal decidua and amniotic membranes, 1 cm 3 sections of placental villi were dissected from four different cotyledons between the basal and chorionic plates. After washing with PBS to remove maternal blood, the tissues were frozen at −80 • C until metabolites were extracted. Placentas were then sent for pathological analysis and stored at the biological core facility at Angers University Hospital (Centre de Ressources Biologiques). 
Cord blood samples were collected from the umbilical vein. The blood transported in ice was immediately centrifuged for 15 min at 3000 rpm, and the supernatant (plasma) was stored as aliquots at −80 • C until metabolomic analysis. We included samples from both P1 and P2 cohorts for cord blood analysis: 15 in the IUGR group (4 P1 + 11 P2) versus 15 in the control group (8 P1 + 7 P2) (Table S3). Metabolite Extraction and Protein Quantification from Placental Tissues Placental tissues weighing between 20 and 50 mg were thawed on ice before being transferred to a 500 µL Precellys ® tube filled with ceramic beads. Forty microliters of cold water were added to the Precellys ® tube and tissues were grinded at 6500 rpm for 40 s. Five microliters of the supernatant obtained after centrifugation at 12,000× g for 5 min at 4 • C were taken for protein concentration determination. Two hundred microliters of supernatant were added to Precellys ® tubes and submitted to another grinding cycle of 6500 rpm for 40 s. After centrifugation at 16,000× g for 5 min at 4 • C, 100 µL of the supernatant were stored at −80 • C until mass spectrometry (MS) analysis. Protein concen-trations were measured using a colorimetric method using bicinchoninic acid following the manufacturer's instructions (BC Assay kit, Interchim, Montluçon, France). Metabolomic Analysis Using Biocrates ® Technology A targeted quantitative metabolomics approach was performed on placental and cord blood plasma extracts using the Biocrates AbsoluteIDQ p180 kit (Biocrates Life Sciences AG, Innsbruck, Austria) and an AB Sciex QTRAP 5500 mass spectrometer (SCIEX, Villebon sur Yvette, France). This kit allows the quantification of 188 metabolites, including 40 acylcarnitines, 21 amino acids, 21 biogenic amines, 90 glycerophospholipids, 15 sphingolipids and the sum of hexoses. Liquid chromatography (LC) was used to separate amino acids and biogenic amines before detection by tandem mass spectrometry (LC-MS/MS), whereas flow injection analysis with tandem mass spectrometry (FIA-MS/MS) was used to quantify acylcarnitines, glycerophospholipids, sphingolipids, and sugars. Ten µL of each sample (placenta homogenate supernatant or plasma) was added to the center of the filter placed on the top wall of the well in a 96-well plate. Metabolites were extracted in methanol solution using ammonium acetate after drying the filter spot under nitrogen flow and derivatization with phenylisothiocyanate for quantification of amino acids and biogenic amines. After validation of the three quality control levels, metabolite concentrations were used for statistical analyses only if they fell within the quantification range determined by the calibration curves. Metabolites with more than 20% of their values outside the range of quantification were not considered. Before excluding these metabolites, a χ2 test was performed to verify whether being outside the range of quantification was independent of the IUGR/control comparison. Statistical Analyses The Student's t-test was used to compare the metabolite concentrations in placenta and newborn cord blood samples for the IUGR and control groups. The non-parametric Mann-Whitney-Wilcoxon test was used to compare metabolite sums and ratios between these cohorts. The Benjamini-Hochberg correction was applied to account for risk I inflation associated with multiple comparisons. Metabolites were scaled to have zero mean and unit variance (UV scaling) before submission to unsupervised and supervised algorithms. 
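As an illustration, the univariate comparisons and scaling just described, together with a cross-validated goodness-of-prediction in the spirit of the supervised modelling described next, could be sketched as follows. This is not the authors' pipeline: scikit-learn has no OPLS-DA, so a plain PLS regression against a 0/1 group vector is used as a rough stand-in, and the file and column names are assumptions.

```python
# Illustrative sketch (assumed: a DataFrame with one metabolite per column and a
# "group" column holding "IUGR" or "control").
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests
from sklearn.preprocessing import StandardScaler
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

data = pd.read_csv("placenta_metabolites.csv")        # hypothetical file name
metabolites = [c for c in data.columns if c != "group"]
iugr = data[data["group"] == "IUGR"]
ctrl = data[data["group"] == "control"]

# Student's t-test per metabolite, then Benjamini-Hochberg correction.
pvals = [ttest_ind(iugr[m], ctrl[m]).pvalue for m in metabolites]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
univariate = pd.DataFrame({"metabolite": metabolites, "p": pvals,
                           "p_adj": p_adj, "significant": reject})

# UV scaling (zero mean, unit variance) before multivariate modelling.
X = StandardScaler().fit_transform(data[metabolites])
y = (data["group"] == "IUGR").astype(float).values

# Cross-validated goodness of prediction (a Q2Y analogue) for a 2-component PLS-DA.
cv = KFold(n_splits=7, shuffle=True, random_state=0)
y_pred = cross_val_predict(PLSRegression(n_components=2), X, y, cv=cv).ravel()
q2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

# A permuted-label Q2 should be clearly worse than the observed one.
rng = np.random.default_rng(0)
y_perm = rng.permutation(y)
y_pred_perm = cross_val_predict(PLSRegression(n_components=2), X, y_perm, cv=cv).ravel()
q2_perm = 1 - np.sum((y_perm - y_pred_perm) ** 2) / np.sum((y_perm - y_perm.mean()) ** 2)

print(univariate.sort_values("p_adj").head(10))
print(f"Q2Y = {q2:.2f}, Q2Y (permuted labels) = {q2_perm:.2f}")
```

A large gap between the observed and permuted Q2 plays the same role as the permutation and CV-ANOVA checks used to guard against over-fitting in the OPLS-DA models.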
Principal component analysis (PCA) and orthogonal projection to latent structures-discriminant analysis (OPLS-DA) were the unsupervised and supervised methods used in multivariate analysis. PCA enables outlier detection, based on Hotelling's T2 distance, and identification of similar samples grouping together in the scatter plot. In the supervised analysis, the X matrix of predictive variables was composed of metabolite concentrations and the Y vector contained the information relative to the group (control or IUGR). To avoid selecting optimistic but over-fitted models, the predictive capabilities of the OPLS-DA models were evaluated by cross-validation using cross-validated R 2 Y (Q 2 Ycum or goodness of prediction), the cross-validated analysis of variance (CV-ANOVA) test, and the goodness of prediction of permuted models (Q 2 Y cum-perm). Models with a low degree of over-fitting are characterized by Q 2 Ycum > 0.5, negative Q 2 Y cum-perm and are significantly more discriminant than the null model (p-value CV-ANOVA < 0.05). In predictive models, selection of metabolites of interest was made through the combination of two pieces of information: variable importance in the projection (VIP) and the loading between the metabolite in the X matrix and the predictive latent variable(s) of the OPLS-DA models. Only metabolites with a VIP value larger than 1 and absolute high loading values were considered as important in the metabolomics signature. Clinical Description Placenta P1 and P2, and cord blood cohorts are described in Supplementary Tables S1-S3, respectively. The main difference in P1 and P2 is the term of birth of the IUGR group. Indeed, newborns in the IUGR group are more preterm in P2 than in P1 (31.3 vs. 36.4 weeks of gestational age, respectively). The term of birth in the control group is thus closer to that of the IUGR group from the P1 cohort. Nevertheless, IUGR was more severe in newborns from the P1 cohort than P2 (−2.2 vs. −1.8 mean birth weight z-score). All the women included in the control group gave birth after a planned caesarean section. Most of the women included in the IUGR group gave birth after a caesarean section (15/20 in P1 and 22/24 in P2). Metabolomic Signature of P1 Cohort From 188 measured metabolites, 127 were in the quantitation range and were kept for statistical analyses (see Supplementary Table S1). PCA showed no outliers nor spontaneous sample grouping ( Figure 1A). The OPLS-DA method enabled good group discrimination (R 2 Y = 0.85) as observed in Figure 1B, with good predictive capabilities and a low risk of overfitting (Q 2 Y cum = 0.72; p-value CV-ANOVA < 0.0001; Q 2 Y cum-perm = −0.61). Metabolite ranking according to VIP and loadings are presented in Figure 1C. short (C2, C6, and C4) acylcarnitine species along with tryptophan (Trp) and creatinine. On the other hand, many lipids, including lysophosphatidylcholine (lyso PCs), sphingomyelin (SM) and hydroxy sphingomyelin (SM(OH)) as well as some phosphatidylcholine (PC) species, are increased in the IUGR group compared to controls. The concentration of some polar metabolites is also decreased in placentas from the IUGR group, such as the amino acids aspartate (Asp), serine (Ser), glycine (Gly), arginine (Arg), threonine (Thr) and tyrosine (Tyr), as well as biogenic amines including kynurenine, alpha-aminoadipic acid (alpha-AAA), two polyamines (putrescine and spermidine) and the collagen-degradation product trans-4-hydroxyproline (t4-OH-Pro). 
For phosphatidylcholines, the sum of the length of the two acyl or acyl-alkyl groups is noted after the C and is followed by the number of double bonds. The same notation is used to represent the length and the number of double bonds in the acyl chain of sphingomyelins and lysophosphatidylcholine species. When the OPLS-DA model is built, projecting the samples onto the predictive latent variable (pLV) enables perfect discrimination between IUGR and control samples (B). Loadings vs. VIPs (volcano plot, (C)) show increase concentrations of carnitine (C0) and long (C16, C18, C18:1) and short (C2, C6, and C4) acylcarnitine species along with tryptophan (Trp) and creatinine. On the other hand, many lipids, including lysophosphatidylcholine (lyso PCs), sphingomyelin (SM) and hydroxy sphingomyelin (SM(OH)) as well as some phosphatidylcholine (PC) species, are increased in the IUGR group compared to controls. The concentration of some polar metabolites is also decreased in placentas from the IUGR group, such as the amino acids aspartate (Asp), serine (Ser), glycine (Gly), arginine (Arg), threonine (Thr) and tyrosine (Tyr), as well as biogenic amines including kynurenine, alpha-aminoadipic acid (alpha-AAA), two polyamines (putrescine and spermidine) and the collagen-degradation product trans-4hydroxyproline (t4-OH-Pro). For phosphatidylcholines, the sum of the length of the two acyl or acyl-alkyl groups is noted after the C and is followed by the number of double bonds. The same notation is used to represent the length and the number of double bonds in the acyl chain of sphingomyelins and lysophosphatidylcholine species. Color code: amino acids: green; carnitine and acylcarnitine species: brown; biogenic amines: light green; lysophosphatidylcholine species: light orange; phosphatidylcholine species: dark orange; sphingomyelins and hydroxy sphingomyelins: yellow. Multivariate supervised analysis showed increased concentration of carnitine and acylcarnitine species (AC) (short chain length C2, C3-DC, C4 and C6 and long chain length C16, C18 and C18:1) in IUGR placentas compared to controls. Furthermore, the ratio between acetyl and propionyl carnitines (C2 + C3) and free carnitine (C0), an indicator of fatty acid β-oxidation, was significantly increased in IUGR placentas (median fold change of 1.49, p-value Wilcoxon = 0.006). Creatinine and tryptophan were also relatively increased in IUGR placentas. Kynurenine, a metabolite derived from tryptophan, was decreased in IUGR placentas, making the kynurenine/tryptophan ratio significantly diminished in IUGR compared to controls (median fold change of 0.13, p-value Wilcoxon < 0.0001). Arginine was also diminished in IUGR placentas with relatively increased activity of arginase in this group, as measured by the ornithine/arginine ratio (median fold change of 1.39, p-value The concentration of other amino acids such as aspartate, glycine, serine, and threonine was also found to be relatively lower in placentas from the IUGR group compared to controls. The polyamines putrescine and spermidine were found to be diminished in IUGR placentas with increased activity of spermine synthase, quantified by the spermine/spermidine ratio (median fold change of 1.26, p-value Wilcoxon = 0.001). A deep lipid remodeling was also observed when comparing IUGR and control placenta. 
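As a brief aside before the lipid results that follow, the ratio comparisons quoted above, for example (C2 + C3)/C0 as an index of fatty acid beta-oxidation, amount to the following kind of computation, reusing the hypothetical DataFrame from the earlier sketch; the acylcarnitine column names are assumptions, and the Mann-Whitney U test is the Wilcoxon rank-sum test named in the Methods.

```python
# Illustrative ratio comparison (assumed acylcarnitine column names "C0", "C2", "C3").
from scipy.stats import mannwhitneyu

ratio = (data["C2"] + data["C3"]) / data["C0"]
iugr_ratio = ratio[data["group"] == "IUGR"]
ctrl_ratio = ratio[data["group"] == "control"]

fold_change = iugr_ratio.median() / ctrl_ratio.median()
stat, p = mannwhitneyu(iugr_ratio, ctrl_ratio, alternative="two-sided")
print(f"(C2 + C3)/C0: median fold change = {fold_change:.2f}, p = {p:.4g}")
```

The other ratios discussed in this section (kynurenine/tryptophan, ornithine/arginine, spermine/spermidine) follow the same pattern with different column pairs.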
Lysophosphatidylcholine species of less than 22 carbon atoms were diminished in IUGR placentas compared to controls, with average values of 4.97 and 7.1 µmol/mg of protein, respectively (p-value Student < 0.0001). Concentrations of some sphingomyelins and of only a fraction of the phosphatidylcholines were also decreased in IUGR placentas compared to control placentas. However, the proportion of unsaturated fatty acyls, including monounsaturated (MUFA) and polyunsaturated (PUFA) acyl moieties of diacyl phosphatidylcholines (PC aa), was significantly increased in IUGR placentas compared to controls (median fold change of 1.47, p-value Wilcoxon = 0.0002).

Metabolomic Signature of P2 Cohort
From the 188 metabolites measured by the kit, 139 were retained for the statistical analysis (see Supplementary Table S2). Principal component analysis (PCA) showed neither outliers nor spontaneous sample grouping (Figure 2A). As for the P1 cohort, OPLS-DA provided good discrimination between groups (R 2 Y = 0.76), as observed in Figure 2B, with good predictive capabilities and a low tendency towards overfitting (Q 2 Y cum = 0.51; p-value CV-ANOVA < 0.003; Q 2 Y cum-perm = −0.64). Metabolite ranking according to VIP and loadings can be visualized in Figure 2C. This supervised multivariate model uncovers 25 metabolites also modified in P1, all varying in the same direction, as shown in the Venn diagram presented in Figure 3. This replication of our study in two distinct cohorts reinforces the reliability of the results obtained in common in both cohorts, but it should be noted that the disparities observed between the two cohorts may also be partly due to the greater prematurity and severity of the growth delay observed in the IUGR group of the P2 cohort. Indeed, the P2 cohort has slightly more discriminating metabolites in total (n = 51), notably phosphatidylcholine species, than the P1 cohort (n = 44). On the other hand, many lipidic metabolites, including lysophosphatidylcholine (lysoPC), sphingomyelin (SM) and hydroxy sphingomyelin (SM(OH)) as well as phosphatidylcholine (PC) species, are decreased. The concentration of some polar metabolites is also decreased in IUGR placentas, such as the amino acids tyrosine (Tyr), arginine (Arg), glycine (Gly), serine (Ser) and methionine (Met), as well as biogenic amines, including alpha-aminoadipic acid (alpha-AAA) and carnosine.
For phosphatidylcholines, the sum of the length of the two acyl or acyl-alkyl groups is noted after the C and is followed by the number of double bonds. The same notation is used to represent the length and the number of double bonds in the acyl chain of sphingomyelins and lysophosphatidylcholine species. Color code: amino acids: green; acylcarnitine species: brown; biogenic amines: light green; lysophosphatidylcholine species: light orange; phosphatidylcholine species: dark orange; sphingomyelins and hydroxy sphingomyelins: yellow; sum of hexose or H1: pink. Figure 3. Venn diagram showing common discriminant metabolites in placenta P1 and/or P2 cohorts. Discriminant P1 metabolites have been drawn in a blue circle whilst important P2 metabolites have been included in a green circle. The intersection of both signatures comprises 25 metabolites, all varying in the same way in both cohorts: increased levels of 6 acyl-carnitines and creatinine and decreased levels of glycine, serine, arginine, tyrosine, alpha-aminoadipic acid, five lysophosphatidylcholine species (lysoPC), one diacyl phosphatidylcholine (PC aa 32:0) and three alkyl-acyl phosphatidylcholine (PC ae) species and 4 (hydroxy)sphingomyelin species (SM (OH) and SM, respectively). For the acylcarnitine, lysophosphatidylcholine, phosphatidylcholine and (hydroxy)sphingomyelin species, the sum of the length of the one or two acyl or acyl-alkyl groups is noted after "C", "lysoPC", "PC" and "SM(OH)" or "SM", respectively, and is followed by the number of double bonds. * Indicates metabolites measured in only one cohort because they were out of range in the other cohort. Cord Blood Metabolomics One hundred and forty-one measured metabolites were retained for the statistical analysis of the plasma of cord blood (see Supplementary Table S3). Principal component analysis (PCA) showed no outliers, but a trend towards group distinction was observed in control and IUGR samples, which had positive and negative values, respectively, in the second principal component PC2 ( Figure 4A). OPLS-DA analysis enabled high group discrimination (R 2 Y = 0.95) as observed in Figure 4B, with good predictive capabilities and low overfitting (Q 2 Y cum = 0.72; p-value CV-ANOVA < 0.0002; Q 2 Y cum-perm = −0.61). Metabolite ranking according to VIP and loadings are displayed in Figure 4C. For the acylcarnitine, lysophosphatidylcholine, phosphatidylcholine and (hydroxy)sphingomyelin species, the sum of the length of the one or two acyl or acyl-alkyl groups is noted after "C", "lysoPC", "PC" and "SM(OH)" or "SM", respectively, and is followed by the number of double bonds. * Indicates metabolites measured in only one cohort because they were out of range in the other cohort. It should be noted that the values for tryptophan were below the lower limit of quantification for this P2 cohort and thus were not taken into consideration in the statistical analysis. Similar to the P1 cohort, the ratio between acetyl and propionyl carnitines (C2 + C3) and free carnitine (C0) was significantly increased in P2 IUGR placentas (median fold change of 1.4, p-value Wilcoxon = 0.017). Additionally, the ratio of unsaturated to saturated fatty acids was higher in IUGR placentas compared to controls (median fold change of 1.2, p-value Wilcoxon = 0.0029). Cord Blood Metabolomics One hundred and forty-one measured metabolites were retained for the statistical analysis of the plasma of cord blood (see Supplementary Table S3). 
Principal component analysis (PCA) showed no outliers, but a trend towards group distinction was observed in control and IUGR samples, which had positive and negative values, respectively, in the second principal component PC2 ( Figure 4A). OPLS-DA analysis enabled high group discrimination (R 2 Y = 0.95) as observed in Figure 4B, with good predictive capabilities and low overfitting (Q 2 Y cum = 0.72; p-value CV-ANOVA < 0.0002; Q 2 Y cum-perm = −0.61). Metabolite ranking according to VIP and loadings are displayed in Figure 4C. The predictive latent variable (pLV) correctly predicts samples allocation according to IUGR status. Loadings vs. VIPs (volcano plot, C) show increased concentrations of carnitine (C0) and two short chain acylcarnitine species (acetyl (C2) and butyryl (C4) carnitine, respectively) along with five amino acids (alanine (Ala), asparagine (Asn), tyrosine (Tyr), glutamine (Gln) and proline (Pro)), three biogenic amines (alpha-aminoadipic acid (alpha-AAA), trans-4-hydroxyproline (t4-OH-Pro) and spermine), one lysophosphatidylcholine (lysoPC) and two diacyl phosphatidylcholine (PC aa) species. On the other hand, many lipid species including diacyl-and alkyl-acyl-phosphatidylcholines, sphingomyelins and lysophosphatidylcholines of less than 22 carbons are found diminished in the cord blood plasma of newborns diagnosed IUGR compared to controls. The amino acid tryptophan is also relatively decreased in the IUGR cohort compared to controls. For phosphatidylcholines, the sum of the length of the two acyl or acyl-alkyl groups is noted after the C and is followed by the number of double bonds. The same notation was used to represent the length and the number of double bonds in the acyl chain of sphingomyelins and lysophosphatidylcholine species. Color code: amino acids: green; acylcarnitine species: brown; biogenic amines: light green; lysophosphatidylcholine species: light orange; phosphatidylcholine species: dark orange; sphingomyelins and hydroxy sphingomyelins: yellow. The predictive latent variable (pLV) correctly predicts samples allocation according to IUGR status. Loadings vs. VIPs (volcano plot, (C)) show increased concentrations of carnitine (C0) and two short chain acylcarnitine species (acetyl (C2) and butyryl (C4) carnitine, respectively) along with five amino acids (alanine (Ala), asparagine (Asn), tyrosine (Tyr), glutamine (Gln) and proline (Pro)), three biogenic amines (alpha-aminoadipic acid (alpha-AAA), trans-4-hydroxyproline (t4-OH-Pro) and spermine), one lysophosphatidylcholine (lysoPC) and two diacyl phosphatidylcholine (PC aa) species. On the other hand, many lipid species including diacyland alkyl-acyl-phosphatidylcholines, sphingomyelins and lysophosphatidylcholines of less than 22 carbons are found diminished in the cord blood plasma of newborns diagnosed IUGR compared to controls. The amino acid tryptophan is also relatively decreased in the IUGR cohort compared to controls. For phosphatidylcholines, the sum of the length of the two acyl or acyl-alkyl groups is noted after the C and is followed by the number of double bonds. The same notation was used to represent the length and the number of double bonds in the acyl chain of sphingomyelins and lysophosphatidylcholine species. Color code: amino acids: green; acylcarnitine species: brown; biogenic amines: light green; lysophosphatidylcholine species: light orange; phosphatidylcholine species: dark orange; sphingomyelins and hydroxy sphingomyelins: yellow. 
Figure 4C shows a deep lipid remodeling in the blood of IUGR newborns with decreased levels of lysophosphatidylcholines with acyl chain of less than 22 carbon atoms, many phosphatidylcholine species, and some sphingomyelins. The ratio of lysophos-phatidylcholines to phosphatidylcholines, measuring phospholipase activity, was significantly diminished in IUGR newborns (median fold change of 0.65, p-value Wilcoxon < 0.0001). Contrary to what was observed in placenta cohorts, the ratio of unsaturated to saturated fatty acids moieties in phosphatidylcholine molecules was significantly diminished in the plasma of IUGR newborns (median fold change of 0.84, p-value Wilcoxon = 0.0043). The same inversed situation was also observed for some polar metabolites. Indeed, tyrosine and alpha-aminoadipic acid concentrations were found to be relatively decreased and tryptophan relatively increased in the plasma samples of IUGR newborns compared to controls. The plasma's metabolomic signature was also characterized by elevated concentration of carnitine (C0), acetyl(C2) and butyryl(C4) carnitine species as well as the amino acids alanine, asparagine, proline, and glutamine. Plasmatic concentrations of biogenic amines trans-4-hydroxyproline and the polyamine spermine were also elevated in IUGR samples. Discussion The replication in two cohorts of IUGR and control placentas, recovered from women who underwent vaginal and cesarean delivery, reveals a set of 25 discriminating metabolites, similarly modified in both cohorts. In the case of IUGR, six short-and medium-chain AC and creatinine show increased concentrations, whereas four amino acids (glycine, serine, arginine, and tyrosine), alpha-aminoadipate, and twelve glycerophospholipids (five lysophosphatidylcholines, four phosphatidylcholine and three sphingomyelins) show decreased concentrations. Such a decrease in glycerophospholipid concentrations is also found the IUGR blood cords compared to controls (seven lysophosphatidylcholines, 27 phosphatidylcholines and four sphingomyelins, yet only two increased phosphatidylcholines and one lysophosphatidylcholine). Only two short-chain acylcarnitine species are increased, as in the placentas, in cord blood from IUGR compared to controls, with the addition of free carnitine (C0), which is also increased. In contrast to placentas, in umbilical cord blood plasma, creatinine does not appear to be discriminating and alpha-aminoadipate is found to be increased. Lastly, with respect to the amino acids, only tyrosine appears to be discriminating in cord blood as it is in placentas, but inversely, with an increased concentration in cord blood. Four other amino acids not present in the placental signature appear to be increased in blood (alanine, asparagine, glutamine, and proline), and tryptophan appears to be decreased. Creatinine is a degradation product of creatine phosphate that plays an important role in energy homeostasis through the ability of creatine-phosphate to phosphorylate ADP. The blood level of creatinine depends mainly on the production of creatine by skeletal muscle and its elimination by kidneys. Increased concentrations of creatinine have already been reported in the urine of IUGR newborns [7], in cord blood of IUGR patients [11] and in fetal umbilical venous plasma of growth-restricted fetal pigs [16], while the concentration of its precursor creatine has been shown to be increased in the umbilical cord blood plasma of patients with IUGR compared to controls [12]. 
Here, we show for the first time, to our knowledge, that creatinine has also elevated levels in the placenta of IUGR patients. As the placenta is known for its ability to perform creatine biosynthesis [17], the increase of this metabolite could result from mitochondrial dysfunction due to hypoxia, as has been suggested in the case of preeclampsia, where a similar elevation of creatine concentration can be seen [18]. The fetus is an "essentially glycolytic organism" and the paramount importance of placental-to-fetal glucose transfer is a well-accepted paradigm. Interestingly, hexoses, which are the other main energy source for mitochondria in the placental syncytium (PS), were also found to be increased in the most severe IUGR P2 cohort (Figures 3 and 4). The PS is equipped with the whole machinery for fatty acid oxidation and Shekhawat et al. have demonstrated that fatty acid oxidation in PS was comparable to or even greater than that in cultured human fibroblasts [19]. In our study, we observed a larger carnitine pool (free carnitine plus AC) in IUGR placentas in both P1 and P2 cohorts compared to controls (p-value P1 , Student test = 0.019 and p-value P2 , Student test < 0.001, respectively) ac-companied by significant alteration of AC/C0 ratios (p-value P1 , Wilcoxon test = 0.005 and p-values P2 , Wilcoxon test = 0.006, respectively). These results point toward increased carnitine accumulation and fatty acid oxidation in IUGR PS mitochondria. However, in the relatively hypoxic environment associated with IUGR, fatty acid oxidation is probably incomplete, resulting in further AC accumulation. Such acylcarnitine species and fatty acid accumulation has also been reported in umbilical cord blood [20] and in the blood of IUGR newborns [21]. Additionally, a negative correlation between birthweight and acylcarnitine species concentration in blood was recently identified in larger cohorts [22]. According to these authors, this metabolic signature could reflect insulin resistance that is closely related to mitochondrial energy metabolism. In this configuration of insulin resistance, fatty acid oxidation would be an important source of energy for the PS. Increased α-aminoadipic acid in newborn blood could be another effect of a state of insulin resistance. Indeed, α-aminoadipic acid, a metabolite of lysine catabolism, has been identified as an early biomarker of insulin resistance [23,24]. Interestingly, concentrations of α-aminoadipic acid were lower in IUGR placentas compared to control placentas in both cohorts. Taking together both placental and newborn blood data, the hypothesis of diminished placental clearance of fetal α-aminoadipic acid in IUGR pregnancies seems plausible. In pregnant sheep, Wilkes et al. provided evidence of the placenta's important role in clearing fetal α-aminoadipic acid after a maternal lysine load [25]. To our knowledge, no study has been carried out in humans to investigate the role of the placenta in eliminating fetal blood α-aminoadipic acid. Concerning amino acids, whose altered concentrations are generally attributed to altered transport in the IUGR deficient placenta [16], our signatures observed in placenta and cord blood diverge considerably. Tyrosine appears discriminating in the two samples, but in opposite direction. 
Others amino acids (glycine, serine, and arginine) are decreased in placenta, while five others, not present in the placental signature, appear to be either increased (alanine, asparagine, glutamine and proline) or decreased (tryptophan) in blood. The discriminant amino acids found by Bahado-Singh et al. [13] in IUGR placenta are not quite the same as ours, but the decreased concentration of tyrosine and glycine in both studies reinforces their pathophysiological importance. Similarly, in IUGR cord blood, according to the different studies, changes in the concentrations of tyrosine, alanine, glutamine, serine, proline, and tryptophan have been reported, highlighting their pathophysiological importance [4,12,16,21]. Creatinine, a surrogate of creatine synthesis, was significantly increased in IUGR placentas. Creatine is synthetized from glycine and arginine. Interestingly, both glycine and arginine were decreased in P1 and P2 IUGR samples. It is tempting to speculate about enhanced local creatine synthesis in IUGR placentas aiming to improve spatial energy allocation through phosphocreatine in this tissue. Our signature also shows a sharp rearrangement of glycerophospholipids in IUGR, with a massive decrease in their concentration in P1 and P2 cohorts and in cord blood. A global decrease in phosphatidylcholines has already been reported in the placenta of patients with fetal growth restriction [13] as well as in the blood of IUGR patients and in the blood of a rat model of IUGR [26]. In IUGR placentas the ratio of unsaturated to saturated fatty acids moieties in phosphatidylcholine species was significantly higher compared to controls. The opposite was observed in cord blood samples, illustrating a complex interplay of metabolic changes between the fetal and the placental compartments. Interestingly, the concentration of lysophosphatidylcholines in cord blood has been shown to positively correlate with birth weight [9]. The ratio of lysophosphatidylcholines to phosphatidylcholines, measuring phospholipase A2 activity, was significantly diminished in IUGR cord blood samples, but not at the placental level, showing, once again, the complexity of metabolic interactions inside the fetoplacental unit. This remodeling of glycerophospholipids could be either due to a structural modification of the placenta, with the phospholipids being the most important components of biological membranes, or due to a more general modification of lipid metabolism relating to the energetic impairment. Indeed, a substantial disruption of lipid metabolism, including altered lipoprotein profiles has been shown in mother and fetuses with IUGR [27]. Our study is the second to explore the placenta of IUGR patients after that recently published by Bahado-Singh et al. [13]. It differs from that study in its use of a replicated placental cohort and in its comparison of the metabolomic profile of fetal umbilical cord blood. The disparity in pregnancy term between IUGR and control groups in the P2 cohort is a limitation of our study, since it is difficult to constitute a cohort of healthy patients with a similar term to IUGR. However, this disparity in pregnancy term is less marked in our P1 cohort. The fact that we obtained a common metabolomic signature in the two cohorts studied, despite this disparity of pregnancy terms, shows that the signature of the IUGR is predominant over that of the variations in the term of pregnancy. 
Our study confirms a deep remodeling of glycerophospholipids, already shown by Bahado-Singh et al., and the modification of some amino acids. However, it also uncovers, for the first time, an altered saturated-to-unsaturated ratio of the acyl moieties forming these phospholipids and a potentially diminished PLA2 activity in the plasma of IUGR newborns. It also reveals an increase in creatinine, alpha-aminoadipate and acylcarnitine species in placentas obtained from IUGR pregnancies, pointing toward a disturbed energy metabolism with insulin resistance.

Conclusions
In summary, our study shows a profound alteration of energy and lipid metabolism in the placentas of IUGR patients, with insulin resistance, increased fatty acid oxidation activity, an altered ratio of saturated to unsaturated fatty acid moieties in phosphatidylcholines, and altered phospholipase activity. It is tempting to propose that this placental metabolic reprogramming could have a lasting effect on the fetus and that it may be responsible for the increased susceptibility to metabolic and cardiovascular diseases eventually observed in adulthood.
2022-06-18T15:14:37.469Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "8b47fd1e6da4855c57246c7b853f95d68f75eda7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9059/10/6/1411/pdf?version=1655274427", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a8796a2471023f38e57832e7a1ed388785fd23f0", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
5623273
pes2o/s2orc
v3-fos-license
Dental treatment for handicapped patients; sedation vs general anesthesia and update of dental treatment in patients with different diseases Dental treatment on Handicapped Patients is often difficult because many people with a wide range of ages (from children to the elderly) with different pathologies that can affect the oral cavity and differ widely are included in this group. This situation creates some controversy, because according to pathology, each patient will be treated differently depending on collaboration, general health status, age or medication used to treat this pathologies. According to this situation we can opt for an outpatient treatment without any kind of previous medication, a treatment under conscious or deep sedation or a under general anesthesia treatment. With this systematic review is intended to help clarify in which cases patients should be treated under general anesthesia, sedation (conscious or deep) or outpatient clinic without any medication, as well as clarify what kind of treatments can be carried in private dental clinics and which should be carried out in a hospital. It will also discuss the most common diseases among this group of patients and the special care to be taken for their dental treatment. Key words:Hospital dentistry, handicapped patient. Introduction The lack of integration within the public health system is one of the most important problems of dental treatment for patients with special needs. Moreover, the peculiarity of decentralization of Public Health Services in Spain, makes public health patients with special needs coverage, very different within the same country (1). This creates a lackof care that makes patients to keep looking for solutions to their dental problems in the field of private assistance (2). e171 We will present two examples of dental treatment integration in patients with special needs in a private hospital setting. To evaluate the service provided to this type of patients, not only in Spain, a systematic review of the literature was performed using database PubMed-Medline using the keywords dentistry and handicapped patient hospital. With the keywords used 151 articles were found, and then we proceeded to do a manual search to select the suitable items, select 22 to perform the review (Table 1). titude of common dental clinics full of arguments for them: the next customer location, integration into the commercial life of the neighborhoods etc. Then there is the assumed tradition by patients in concept of Dentist (something more aesthetic and associated with repairing teeth) and in concept of Hospital (associated with diseases). Hospitals are in most cases associated to sophisticated treatments, technological advances and specialization these are important arguments that we should offer to conquer a reference position, position that will give us the famous "word of mouth" (patients talking with other people about our services) and can be reinforced with simple advertising campaigns. Another essential aspect for the proper positioning within the Hospital of our dental service is to be able to choose the location close to fields related to our workspace: ENT, Dermatology, Plastic Surgery, Pediatrics, Psychiatry/Psychology (3). Looking to our hospital, like any other specialty, you need to be more than a tenant: you should generate traffic of patients that can consume in other departments (Analytical, Rx, interdepartmental, operating rooms). 
This, which could be required from Hospital direction, comes up by activity that generates working in a hospital enviroment. In fact, from the beginning, we will find care demands not only in teeth areas and also we could find complex patients who have not been successful treated in their regular dentists, and who need dental attention in other environment (4). b) Handicapped Patients. Probably the most important differential area compared to Dental Clinics. The special care in dentistry: Those people who for physical or mental characteristics can only be treated in hospital environment, either for monitoring and controlling risks or because the only way to access to their mouth, is using general anesthesia or other form of sedation. It's easy, even today, associate this operation to the public health service but since transfers were initiated to differents Spanish health services, we find many contradictions and discrimination for being resident in one or another part of the country (5). Additional cost that Dental performance in these kind of patients means, does, in many cases, impossible to treat them under the private health system when they need General Anesthesia, and they only have access to dental health through mutilation (extractions multiple) or the bounty of the Administration. At this point is very important that hospital direction could get involved to promote patients referral with some agreements that can range from the "total" (completely covered dental care for patients with disabilities) to "partial"promoting access to the hospital (General Anes-thesia and Operating Room) and giving only economic coverage overall performance (extractions and surgery). This simple formula, a combination of public and private health services, allows to do more just and equitable dental practice: handicapped patient can be treated at all levels at the same price as everyone else in the field of dentistry, and has the same right to Oral Health than any other person without pathologies because the state could provide additional medical facilities (hospital, operating room and anesthesia) for dental care. Probably this health service model could be the final format that can be applied by the state because of economic rationality and social justice . But from the point of view of independent dental professional, it is possible that this patient loads is not enough itself to maintain Dental Service. Again is necessary to apply business judgment in this kind of professional dedication to find profits and for Monitoring evolution and loyalty of patients, promote oral health in those centers which care of disabled patients, collaborate in monitoring and maintaining hygiene and also promote a familiar motivation. These actions, carried out systematically and notarized, have enabled successful experiences in creating Dentistry Services at Private Hospitals. Being also an obvious fact that morbidity and dependence in chronic patients in our society have increased. The incidence of cerebral palsy with a neonatal origin have been maintained and even decreased but dependencies related to age and senile dementias have gradually increased (6 -Group 4 would bring together all patients with pathologies with some degree of intellectual disability. Most of these patients, due to wide range of general pathology that may present, requires specially trained professionals, and individualized treatment plans that cover dental treatment needs. 
In all cases we should always perform a complete clinical history, we should also request medical reports and study the underlying pathology. We should also know the medical treatments that are being performed as the same time of dental attention. On the other hand, we must perform a dental history aimed to obtain an individual and objective assessment that will show us the most appropriate oral therapeutic needs in each case and will help us to decide what would be the best way to perform dental attention in clinic or under sedation or general anesthesia (7-9). All of this makes necessary an interrelation and intercommunication with other medical and dental specialties, developing integrated protocols in many cases. If all these actions could be coordinated from a hospital dental service that can organize referrals to other services (not only related to mouth) integrated in the same hospital, it will make everything easier, because not only number of movements of patient get reduced, because visits and tests could be jointly programmed, but, thanks to the computerization of services, it facilitates access from any specialty to all tests that have been done to that patient. Clinical dental treatment in handicapped and madically compromised patients Within this diverse spectrum of individual needs in dental treatment of handicapped patients, we must consider on one hand the underlying disorder which will mark the most appropriate treatment decisions, the different medical specialists who must intervene to schedule referrals and which protocols must be followed in each case. In case of medically compromised pediatric patients, we should always perform a complete clinical record and we should also ask their doctors for a full report of their illness, treatment and an update prognosis. When patients come for the first time to our clinic, we always ask them from 5 years of age onwards, about an orthopantomography which can be performed the day before our appointment. Oral treatment may be performed in normal clinical following special protocols for each specific type of pathology and in some cases it will require coordination with others specialists who could directly control overall patient´s illness. Patients from this group, usually attend a Hospital Dental Service because they feel safer getting dental care in this environment. Pathologies that medically compromised patients could present can be very large and diverse and always require an overall assessment of patient status, to design individually dental treatment plan which suits better, each case needs. Diabetic patients have no specific oral manifestations. For dental practice we must always control diabetes. Poorly controlled patients should be referred to specialist some days before dental treatment to control blood sugar levels. It is recommended to perform dental treatment within two hours after the insulin injection and we should not modificate patient´s usual breakfast, not change medication schedules and especially food intake, both, before and after treatment. Local anesthesia can be performed normally. We should remember that in these patients healing is slower and therefore, if extractions are performed, we recommend coverage with high spectrum antibiotics (10,11). The most common complication in this kind of patients is hypoglycemia, which usually can be solved with administration of orally quick absorption carbohydrates (or parenterally), but we can avoid it easily if we adapt schedules appointments to patient´s intake. 
Another group of patients are those who suffer from cardiac diseases. In these cases, patients who come more often are those with congenital cardiac abnormalities and those with functional murmurs. In first group we must always request specialist report (where should appear pathology´s current situation) , because is specialist who makes pathology following. We should always apply prophylaxis recommendation for bacterial endocarditis prevention in cases where it is indicated (12,13). In the second group, treatment does not require any special preparation. In pediatric patients, breathing problems we see more frequently are those associated with bronchial asthma. Most common oral abnormalities in this kind of patients are an increased predisposition to tooth decay, associated with prolonged use of steroids and other drugs in suspension form for inhalation and dispensed daily (14). It is recommended to perform Dental treatment in these patients in periods in which child is asymptomatic, appointments should be done at morning hours because we can monitor better patient´s situation, anesthetic with vasoconstrictor can be used, always performing aspiration during injection and should be avoided anesthetics with sulphites. After inhaler use it is recommended to these kind of patients to always rinse their mouth with water, to avoid decay risk (14). In epileptic patients without intellectual disabilities who take regular medication, are well controlled and who usually have no crisis, dental treatment can be normally performed at clinic. It is recommended to treat these e175 kind of patient within 2 hours after they have consumed their medication. In these patients we should avoid triggers factors for epilepsy, such as stress and anxiety. If during treatment, suddenly, a crisis appears, we should immediately remove all tools and materials from mouth, put patient in a supineposition, wemust tilt patients head sideways, and we should avoid mouth closing for avoid tongue biting (15). For patients with mild motor deficits involving only arms and legs and not accompanied by intellectual disabilities, there won´t be any problem for normal dental treatment. In cases which shortfall affects upper limbs it is important to involve their family in proper daily dental hygiene. Patients with sensory deficits, which more often come to clinic are those with impaired hearing ability or deaf patients, who in early-onset cases it will probably be associated with speech deficits. In these cases, patient communication is the most important problem, which will make treatment exclusive and personalized: You need a substantial visual communication and specialized collaboration in sign language by their family or by a professional (16). Dental tratment of handicapped and medically compromised patients in an operating room Within resources that can be provided in a hospital, one of the most important and requested by handicapped patients is dental treatment under general anesthesia or sedation (17,18). Within this group of patients we can find medically compromised patients (congenital cardiac abnormalities, blood dyscrasias, allergic reactions to local anesthetics,. uncontrollable epilepsy, etc.) On the other hand, we have all the patients with motor deficits that don´t allow proper treatment in clinics and all patients who have a mild or severe intellectual disability, whose condition or treatment inhibit a dental treatment in clinics (8,16). 
This group of patients (intellectual disabled) generally presents big problems, with many different oral pathologies, because they themselves are not able to seek medical care and disability also involves a failure to perform a proper daily oral hygiene and proper maintenance and it is also the group of patients which generates a greater request for hospital treatment under general anesthesia or sedation. Inside this group we can find genetic intellectual disabilities such as Down syndrome, fragile X syndrome, Angelman syndrome etc. Others would be patients with severe neuromuscular disorders such as cerebral palsy and spina bifida we can also include in this group patients with autism spectrum disorders, affected by Asperger Syndrome and Rett Syndrome. To assess these patients, we must make a general record of underlying disease with complete laboratory blood analysis and electrocardiogram , and dental history (or-tory (orthopantomography etc.). With these data we can make an inquiry with the anesthesiologist to determine the health risk presented by the patient. For this purpose the ASA group classification is used, which is a 6-degree scale created by the American Society of Anesthesiologists that relates the degree of surgical risk in the patient with his main pathology and with that, he will determine the type of anesthetic technique, general anesthesia or sedation, to perform dental treatment plan with the most appropriate option (17)(18)(19). To achieve sedation treatments patients can be only candidates if they are included in both ASA groups I and II ( ASAV moribund patient who is not expected to survive without the operation. ASA VI declared brain-dead patient whose organs are being removed for donor purposes Table 2. ASA patients classification. The anaesthesiologist will establish the preoperative protocol with minimum 6 fasting hours and suppression or not, of patient's underlying medications, depending on patient´s disease and type of drug. On this visit, an informed consent, for the type of anesthetic technique will be performed. If it´s necessary to administer some treatment to reduce anxiety and fear, at the time of admission before intervention, premedication and administration way( nasal, rectal or sublingual) will be scheduled (20)(21)(22). With all this we make a second patient reassessment where informed consent is signed to carry out the proposed dental treatment. You give to patient´s family, written preoperative and postoperative recommendations to prevent oversights and errors. In those cases in which correct exploration is impossible and therefore we can not perform a preoperative treatment plan, parents will be informed and we will proceed to evaluate patient needs and establish a treatment plan when patient is slept, forcing us to make changes in predicted treatment, make quick decisions taking into account needs of patient and degree of cooperation in the later maintenance of treatment . Patient will enter into health service as an ambulatory e176 surgery proceed,a couple of hours before surgery. In these cases treatments are usually performed along morning so patient could remain as shortest as possible in hospital, and they can return sooner to their usual environment. 
At the moment of admission to the hospital, the premedication prescribed by the anesthesiologist should appear as a medical order and will be administered by the nursing staff; during surgery, certain drugs may be administered parenterally according to the patient's requirements if needed, as in the case of endocarditis prophylaxis in patients with heart disease, analgesics to reduce postoperative pain, etc. After the intervention, the patient remains on an intravenous drip and under hospital supervision for about four hours; if tolerance of liquids is adequate and no vomiting or other complications appear, we proceed to discharge the patient with the drug regimen to be followed at home, which depends in each case on the dental treatment performed and on the patient's underlying disease. In this way we try to interfere as little as possible with the patient's general environment and keep the time spent in hospital to a minimum. The patient should return for a follow-up visit within the recommended period, at which time we will establish the inspection and maintenance protocol to be followed by the patient in order to minimize the emergence of new disease. At this visit we will suggest referrals to other services if necessary (8). In conclusion, we have described a sequence of steps for the outpatient treatment of handicapped patients with different pathologies, whether children or adults, both in private dental clinics and in hospitals, with the aim of changing their routine as little as possible and creating a favorable environment in which to face treatment, reserving sedation or general anesthesia as a last resort for extreme cases in which the patient's pathology, medication or irregular cooperation requires it.
2016-05-04T20:20:58.661Z
2013-10-13T00:00:00.000
{ "year": 2013, "sha1": "e171d6bb848f06cfc3195fd567fbd841668dbe9f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4317/medoral.19555", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e6f9061b2dd222265a4468303f1a3cdabde67152", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5143911
pes2o/s2orc
v3-fos-license
Absolutely Continuous Representations and a Kaplansky Density Theorem for Free Semigroup Algebras We introduce notions of absolutely continuous functionals and representations on the non-commutative disk algebra $A_n$. Absolutely continuous functionals are used to help identify the type L part of the free semigroup algebra associated to a $*$-extendible representation $\sigma$. A $*$-extendible representation of $A_n$ is ``regular'' if the absolutely continuous part coincides with the type L part. All known examples are regular. Absolutely continuous functionals are intimately related to maps which intertwine a given $*$-extendible representation with the left regular representation. A simple application of these ideas extends reflexivity and hyper-reflexivity results. Moreover the use of absolute continuity is a crucial device for establishing a density theorem which states that the unit ball of $\sigma(A_n)$ is weak-$*$ dense in the unit ball of the associated free semigroup algebra if and only if $\sigma$ is regular. We provide some explicit constructions related to the density theorem for specific representations. A notion of singular functionals is also defined, and every functional decomposes in a canonical way into the sum of its absolutely continuous and singular parts. Free semigroup algebras were introduced in [13] as a method for analyzing the fine structure of n-tuples of isometries with commuting ranges. The C*-algebra generated by such an n-tuple is either the Cuntz algebra O n or the Cuntz-Toeplitz algebra E n . As such, the free semigroup algebras can be used to reveal the fine spatial structure of representations of these algebras much in the same way as the von Neumann algebra generated by a unitary operator encodes the measure class and multiplicity which cannot be detected in the C*-algebra it generates. This viewpoint yields critical information in the work of Bratteli and Jorgensen [5,6,20,21] who use certain representations of O n to construct and analyze wavelet bases. From another point of view, free semigroup algebras can be used to study arbitrary (row contractive) n-tuples of operators. Frahzo [17,18], Bunce [7] and Popescu [23] show that every (row) contractive n-tuple of operators has a unique minimal dilation to an n-tuple of isometries which is a row contraction, meaning that the ranges are pairwise orthogonal. Thus every row contraction determines a free semigroup algebra. Popescu [26] establishes the n-variable von Neumann inequality which follows immediately from the dilation theorem. Popescu has pursued a program of establishing the analogues of the Sz. Nagy-Foiaş program in the n-variable setting [24,25,27]; the latter two papers deal with the free semigroup algebras from this point of view. Free semigroup algebras play the same role for noncommuting operator theory as the weakly closed unital algebra determined by the isometric dilation of a contraction plays for a single operator. In [12], the first author, Kribs and Shpigel use dilation theory to classify the free semigroup algebras which are obtained as the minimal isometric dilation of contractive n-tuples of operators on finite dimensional spaces. Such free semigroup algebras are called finitely correlated, because from the wavelet perspective, these algebras correspond to the finitely correlated representations of E n or O n introduced and studied by Bratteli and Jorgensen. 
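Since wandering vectors recur throughout the paper, including in the finitely correlated setting just mentioned, it is convenient to have the defining property written out in symbols. The display below merely restates the description given above, with no notation assumed beyond the isometries $S_1,\dots,S_n$:
\[
\xi \ \text{is a wandering vector for}\ S_1,\dots,S_n \iff \{\, S_w\xi : w \ \text{a word in the generators} \,\} \ \text{is an orthonormal set.}
\]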
It is interesting that this class of representations of O n are understood in terms of an n-tuple of matrices, a reversal of the the single variable approach of analyzing arbitrary operators using the isometric dilation. Out of the analysis of finitely correlated free semigroup algebras emerged a structural result that appeared to rely on the special nature of the representation. However in [11], two of the current authors and Katsoulis were able to expose a rather precise and beautiful structure for arbitrary free semigroup algebras. This structure plays a key role in this paper. The prototype for free semigroup algebras is the algebra L n determined by the left regular representation of the free semigroup F + n on Fock space. This representation arises naturally in the formulation of quantum mechanics. We have named it the non-commutative analytic Toeplitz algebra because of the striking analytic properties that it has [1,13,14,15,2]. In particular, the vacuum vector (and many other vectors in this representation) has the property that its image under all words in the n isometries forms an orthonormal set. We call such vectors wandering vectors. Such vectors play a crucial role in these representations, and a deeper understanding of when they occur is one of the main open questions in this theory. The norm-closed algebra A n generated by n isometries with orthogonal ranges is even more rigid than the C*-algebra. Indeed, it sits inside the C*-algebra E n , but the quotient onto O n is completely isometric on this subalgebra. As O n is simple, it is evidently the C*-envelope of A n . This algebra has been dubbed the non-commutative disk algebra by Popescu. It plays the same role in this theory as the disk algebra plays in the study of a single isometry. In this paper, we explore in greater depth the existence of wandering vectors. The major new device is the notion of an absolutely continuous linear functional on A n . In the one variable case, a functional on A(D) is given by integration against a representing measure supported on the Shilov boundary T. Absolute continuity is described in terms of Lebesgue measure. In our setting, we do not have a boundary, and we have instead defined absolute continuity in terms of its relationship to the left regular representation. A related notion that plays a key role are intertwining maps from the left regular representation to an arbitrary free semigroup algebra. The key observation is that the range of such maps span the vectors which determine absolutely continuous functionals, and they serve to identify the type L part of the representation (see below). These results will be used to clarify precisely when a free semigroup is reflexive. For type L representations, we establish hyper-reflexivity whenever there are wandering vectors-the reflexive case. Basically the only obstruction to hyper-reflexivity is the possibility that there may be a free semigroup algebra which is type L (isomorphic to L n ) but has no wandering vectors, and hence will be reductive (all invariant subspaces have invariant ortho-complements). The ultimate goal of this paper is to obtain an analogue of the Kaplansky density theorem. This basic and well-known result states that given any C*-algebra and any * -representation, the image of the unit ball is wot-dense in the unit ball of the wot-closure. In the nonselfadjoint setting, such a result is not generally true. 
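In symbols, the classical statement reads as follows, where $b_1(\cdot)$ denotes the closed unit ball (a notation used only in this display): for any C*-algebra $\mathfrak{A}$ and any $*$-representation $\sigma$,
\[
\overline{\sigma\bigl(b_1(\mathfrak{A})\bigr)}^{\,\mathrm{wot}} \;=\; b_1\Bigl(\overline{\sigma(\mathfrak{A})}^{\,\mathrm{wot}}\Bigr).
\]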
However, in the context of completely isometric representations of A n , we have a rather rigid structure, and we shall show that in fact such a Kaplansky type theorem does hold. Let σ be a * -extendible representation of A n , that is, σ is the restriction of a * -representation of O n or E n to A. We call it regular if the type L part coincides with the absolutely continuous part. It is precisely this case in which a density theorem holds, and the unit ball of σ(A n ) is weak- * dense in the unit ball of the free semigroup algebra. In particular, we shall see that this holds in the presence of a wandering vector. In fact, the only possible obstruction to a Kaplansky density result for all representations of A n is the existence of a representation where the free semigroup algebra is a von Neumann algebra and is also absolutely continuous. No such representation is known to exist. Preliminaries In this section, we will remind the reader of some of the more technical aspects which we need, and will establish some notation for what follows. A typical n-tuple of isometries acting on a Hilbert space H and having pairwise orthogonal ranges will be denoted by S 1 , . . . , S n . This may be recognized algebraically by the relations The C*-algebra that they generate is the Cuntz algebra O n when n i=1 S i S * i = I and the Cuntz-Toeplitz algebra E n when n i=1 S i S * i < I. The norm-closed unital subalgebra generated by S 1 , . . . , S n (but not their adjoints) is completely isometrically isomorphic to Popescu's non-commutative disk algebra A n . The ideal of E n generated by I − n i=1 S i S * i is isomorphic to the compact operators K, and the quotient by this ideal is O n . Let the canonical generators of E n be denoted by s 1 , . . . , s n . Then every such n-tuple of isometries arises from a * -representation σ of E n (write σ ∈ Rep(E n )) as S i = σ(s i ). We shall call a representation σ of A n * -extendible if σ is the restriction to A n of a *representation of E n or O n to the canonical copy of A n . It is easy to see that σ is * -extendible if and only if σ(S i ) are isometries with orthogonal ranges; or equivalently, σ is contractive and σ(s i ) are isometries. Let F + n denote the unital free semigroup on n letters. (Probably we should use the algebraist's term 'monoid' here, but our habit of using the term semigroup is well entrenched.) This semigroup consists of all words w in 1, 2, . . . , n including the empty word ∅. The Fock space ℓ 2 (F + n ) has an orthonormal basis {ξ w : w ∈ F + n }, and is the natural Hilbert space for the left regular representation λ. This representation has generators, denoted by L i := λ(s i ), which act by L i ξ w = ξ iw . The wot-closed algebra that they generate is denoted by L n . In general, each n-tuple S 1 , . . . , S n will generate a unital algebra, and the wot-closure will be denoted by S. When a representation σ of E n is given and S i = σ(s i ), we may write S σ for clarity. For each word w = i 1 . . . i k in F + n , we will use the notation S w to denote the corresponding operator S i 1 · · · S i k . In particular, L w ξ v = ξ wv for w, v ∈ F + n . It is a basic fact of C*-algebra theory that every representation of E n splits as a direct sum of the representation induced from its restriction to K and a representation that factors through the quotient by K. However, K has a unique irreducible representation, and it induces the left regular representation λ of F + n , described above. 
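The conventions of this section can be collected in a few displays. The first two lines restate the relations and the Fourier expansion described above; the integral formula for $\Sigma_k$ is the standard de la Vallée Poussin average and is offered only as one concrete realization consistent with the properties listed above:
\[
S_i^*S_j=\delta_{ij}I \quad (1\le i,j\le n), \qquad \sum_{i=1}^{n}S_iS_i^*\le I,
\]
\[
L_i\xi_w=\xi_{iw} \quad (w\in\mathbb{F}_n^+), \qquad A\sim\sum_{w\in\mathbb{F}_n^+}a_wL_w \ \ \text{where}\ \ A\xi_\emptyset=\sum_{w\in\mathbb{F}_n^+}a_w\xi_w,
\]
\[
\Sigma_k(X)=\int_{\mathbb{T}}V_k(t)\,\alpha_t(X)\,dm(t), \qquad \Sigma_k(A)=\sum_{|w|\le 2k+1}c_{|w|}\,a_w\,s_w \ \ \text{for}\ A\in A_n.
\]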
So σ ≃ λ (α) ⊕ τ where α is some cardinal and τ is a representation of O n . This is equivalent to the spatial result known as the Wold decomposition. The Wold decomposition is the observation that the range M of the projection I − n i=1 S i S * i is a wandering subspace, meaning that the subspaces {S w M : w ∈ F + n } are pairwise orthogonal, and together span the subspace S[M]. Any orthonormal basis for M will consist of wandering vectors which generate orthogonal copies of the left regular representation; moreover, the restriction of the S i to S[M] ⊥ will be a representation which factors through O n . We call the representation τ the Cuntz part of σ, and when α = 0, i.e. when n i=1 σ(s i s * i ) = I, we say simply that the representation σ is of Cuntz type. Recall [13] that every A ∈ L n has a Fourier series A ∼ w∈F + n a w L w determined by Aξ ∅ = w∈F + n a w ξ w . The representation λ is a canonical completely isometric map from A n into L n which sends s i to L i . Hence elements of A n inherit corresponding Fourier series, and we will write A ∼ w∈F + n a w s w . The functional ϕ 0 reads off the coefficient a ∅ . The kernel of ϕ 0 in A n and L n are denoted by A n,0 and L n,0 respectively. These are the norm and wot-closed ideals, respectively, generated by the generators s 1 , . . . , s n and L 1 , . . . , L n . Even when ϕ 0 is not defined on a free semigroup algebra S, we still denote by S 0 the wot-closed ideal generated by S 1 , . . . , S n . This will either be codimension one or equal to the entire algebra. The ideals A k n,0 and L k n,0 consist of those elements with zero Fourier coefficients for all words w with |w| < k; and are generated as a right ideal by {s w : |w| = k}. Moreover [14], each element in L k n,0 may be uniquely represented as A = |w|=k L w A w and A is equal to the norm of the column operator with entries A w . One can recover an element of A n or L n from its Fourier series in the classical way using a summability kernel. For t ∈ T, let α t be the gauge automorphism of O n determined by the mapping s i → ts i . Let V n (t) = 2n+1 k=−2n−1 c k t k be the de la Vallée Poussin summability kernel on T from harmonic analysis. Recall that V n is a trigonometric polynomial of degree 2n + 1 with Fourier transformV n (k) = 1 for |k| ≤ n + 1. Let m be normalized Lebesgue measure on T. Define linear maps Σ k on O n by Then Σ k is a unital completely positive map on O n which leaves A n invariant and moreover, for every X ∈ O n , Σ k (X) converges in norm to X. It has the additional property that the Fourier coefficients of Σ k (X) agree with those of X up to the k-th level. Indeed, if A ∼ w∈F + n a w L w lies in A n , then Σ k (A) = |w|≤2k+1 c |w| a w s w . Notice that for A ∈ L n , Σ k (A) converges to A in the strong operator topology. Let σ be a * -extendible representation and let S = S σ . We now recall some facts from [11] regarding the ideals S k 0 . The intersection J of these ideals is a left ideal of the von Neumann algebra W generated by the S i ; therefore, J has the form WP σ for some projection P σ ∈ S. (When the context is clear, we will write P instead of P σ .) The Structure Theorem for free semigroup algebras [11] shows that P is characterized as the largest projection in S such that P SP is self-adjoint. Moreover P ⊥ H is invariant for S and when P = I, the restriction of S to the range of P ⊥ is canonically isomorphic to L n . 
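In display form, with $P=P_\sigma$ and $P^\perp=I-P$, the decomposition and the structure ideal described above can be summarized as follows:
\[
\sigma \simeq \lambda^{(\alpha)}\oplus\tau, \qquad \mathcal{M}=\operatorname{Ran}\Bigl(I-\sum_{i=1}^{n}S_iS_i^*\Bigr), \qquad \bigcap_{k\ge1}S_0^{\,k}=WP_\sigma,
\]
and, when $P\ne I$, the restriction $SP^{\perp}$ of $S$ to the invariant subspace $P^{\perp}H$ is completely isometrically isomorphic to $L_n$.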
Indeed, the map taking S i | P ⊥ H to L i extends to a completely isometric isomorphism which is also a weak- * homeomorphism. Algebras which are isomorphic to L n are called type L. When P = I, the restriction of σ to the range of P ⊥ again determines a * -extendible representation of A n , and we call this restriction the type L part of σ. 4 Absolute Continuity In the study of the disk algebra, those functionals which are absolutely continuous to Lebesgue measure play a special role. Of course, the Shilov boundary of the disk algebra is the unit circle, and the Lebesgue probability measure m is Haar measure on it. Moreover, every representing measure for evaluation at points interior to the disk is absolutely continuous. We have been seeking an appropriate analogue of this for free semigroup algebras for some time. That is, which functionals on the non-commutative disk algebra A n should be deemed to be absolutely continuous? Unfortunately, there is no clear notion of boundary or representing measure. However there is a natural analogy, and we propose it here. Our starting point is the left regular representation of the semigroup (N, +). Under this representation, the generator of (N, +) is mapped to the unilateral shift S and elements of A(D) are analytic functions h(S) of the shift, which may be regarded as multipliers of H 2 (T). With this perspective, every vector functional h → h(S)f 1 , f 2 = T hf 1 f 2 dm corresponds to a measure which is absolutely continuous with respect to Lebesgue measure. On the other hand, suppose ϕ is a functional on A(D) given by integration over T by an absolutely continuous measure, so that ϕ(h) = T hf dm for some f ∈ L 1 (T). It is not difficult to show that such functionals on A(D) can be approximated by vector functionals from the Hilbert space of the left regular representation. Moreover, if one allows infinite multiplicity, one can represent ϕ as a vector state, that is, there are vectors x 1 and x 2 in Another view is that the absolutely continuous functionals on A(D) are the functionals in the predual of H ∞ (T). Our analogue of this algebra is L n . So we are motivated to make the following definition: Definition 2.1. For n ≥ 2, a functional on the non-commutative disk algebra A n is absolutely continuous if it is given by a vector state on L n ; i.e. if there are vectors ζ, η ∈ ℓ 2 (F + n ) so that ϕ(A) = λ(A)ζ, η . Let A a n denote the set of all absolutely continuous functionals on A n . For n ≥ 2, L n has enough "infinite multiplicity" that it is unnecessary to take the closure of vector functionals; in fact we shall see shortly that A a n is already norm closed. The following result shows that the notion of being representable as a vector state and being in the predual of L n are equivalent to each other and to a natural norm condition on the functional. n , the following are equivalent: (1) ϕ is absolutely continuous. Proof. (1) implies (2) by definition. The converse follows from [13] where it is shown that every weak- * continuous functional on L n is given by a vector state. The norm condition on the vectors ζ and η is also obtained there. Next, suppose (1) holds. The map λ carries A k n,0 into L k n,0 . Let Q k denote the projection of ℓ 2 (F + n ) onto span{ξ w : w ∈ F + n , |w| ≥ k}. Elements of L k n,0 are characterized by A = Q k A. Hence it follows that lim Conversely suppose that (3) holds. Then given A ∈ A n , we use the fact that Σ k (A) converges to A in norm. 
Note that when m ≥ k, Σ k (X) − Σ m (X) belongs to A k n,0 and has norm at most 2 A . It follows therefore that the adjoint maps satisfy, From (1) implies (2), we found that the set of absolutely continuous functionals is norm closed. Hence the limit ϕ is also absolutely continuous. The following is immediate. Corollary 2.3. The set A a n is the closed subspace of the dual of A n which forms the predual of L n . Definition 2.4. Let σ be a * -extendible representation of A n on the Hilbert space H σ . A vector x ∈ H σ is called an absolutely continuous vector if the corresponding vector state taking A ∈ A n to σ(A)x, x is absolutely continuous. Another straightforward but useful consequence is: If σ is a * -extendible representation of A n and x, y are vectors lying in the type L part of S = σ(A n ) wot (or even in the type L part of T = (σ ⊕ λ)(A n ) wot ), then ϕ(A) = σ(A)x, y is absolutely continuous. In particular, every vector lying in the type L part of H σ is absolutely continuous. Proof. Relative to H ⊕ ℓ 2 (F + n ), the structure projection P σ⊕λ for T decomposes as P 1 ⊕ 0, with P 1 ≤ P σ . By considering vectors of the form x ⊕ 0, where x is in the range of P σ , we may regard the type L part of S as contained in the type L part of T. Thus, we may assume to be working with the representation σ ⊕ λ from the start. By [11, Theorem 1.6], the type L part of σ ⊕ λ is spanned by wandering vectors . For any wandering vector w, the functional ϕ w (A) = (σ ⊕ λ)(A)w, y ⊕ 0 is absolutely continuous because the cyclic subspace 6 (σ ⊕ λ)(A n )[w] is unitarily equivalent to ℓ 2 (F + n ) and y ⊕ 0 may be replaced with its projection into this subspace. By the previous corollary, the set of absolutely continuous functionals is a closed subspace. Taking linear combinations and limits shows that ϕ is in this closure, and hence also absolutely continuous. Now we wish to develop a connection between absolute continuity and certain intertwining operators. is an invariant linear manifold for σ(A n ). The next result shows that V ac (σ) is also closed and equals the set of absolutely continuous vectors for σ. wot . The following statements hold. Proof. Let x, y ∈ V ac (σ), and choose vectors ζ, η in ℓ 2 (F n ) and X, Y ∈ X (σ) with Xζ = x and Y η = y. Then x, x is absolutely continuous, say ψ(A) = λ(A)ζ, η . Theorem 1.6 of [11] shows that x ⊕ ζ is a cyclic vector for an invariant subspace M of (σ ⊕ λ)(A n ) on which the restriction is unitarily equivalent to λ. Indeed, while the hypothesis of that theorem requires that σ be type L, this condition is used only to establish that ψ is absolutely continuous (in our new terminology). It is evident that a subspace of this type is the range of an intertwining isometry V ∈ X (σ ⊕ λ). Let X = P H V . Then X intertwines S and L. Moreover, since x ⊕ ζ is in the range of V , it follows that x is in the range of X, so (ii) holds. We now push this argument a little further. Observe that as in the proof of [11, Theorem 1.6], given t > 0, ζ may be replaced by tζ. Therefore, if x ∈ V ac (σ), the argument of the previous paragraph also shows that x belongs to the closed span of the wandering vectors for (σ ⊕λ)(A n ). Thus x belongs to the type L part of (σ ⊕λ)(A n ), whence V ac (σ) ⊆ Ran(P H Q ⊥ ). Conversely, since P H and Q commute, any vector x ∈ Ran(P H Q ⊥ ) lies in the type L part of T, and thus ψ(A) = Ax, x is absolutely continuous by Corollary 2.5. But then x ∈ V ac (σ) by part (ii). So Ran(P H Q ⊥ ) = V ac (σ). That V ac (σ) is closed is now obvious. 
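As they are used in the proof above and in the results that follow, the intertwining space and the set of absolutely continuous vectors admit the following working description; it is a formulation consistent with the way these objects appear in the arguments of this section rather than a verbatim definition:
\[
\mathcal{X}(\sigma)=\bigl\{\,X\in B\bigl(\ell^2(\mathbb{F}_n^+),H_\sigma\bigr): S_iX=XL_i,\ 1\le i\le n\,\bigr\}, \qquad \mathcal{V}_{ac}(\sigma)=\bigcup_{X\in\mathcal{X}(\sigma)}\operatorname{Ran}X.
\]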
We now give a condition sufficient for the existence of wandering vectors. Theorem 2.8. Let X belong to X (σ). Then the following statements are equivalent. i) The representations σ| Ran X and λ are unitarily equivalent; ii) Ran X = S[w] for some wandering vector w; iii) X * X = R * R for some non-zero R ∈ R n = L ′ n . In particular, this holds if X is bounded below. Proof. The equivalence of (i) and (ii) is clear from the definitions. To obtain (iii) ⇒ (i), suppose that X * X = R * R. By restricting σ to the invariant subspace Ran X, we may suppose that X has dense range, and that Xξ ∅ is a cyclic vector. We now show that σ is equivalent to λ. Since R ∈ R n , [13, Corollary 2.2], shows that R factors as the product of an isometry and an outer operator in R n . The equality X * X = R * R is unchanged if the isometry is removed, so we may assume that R has dense range. Since X and R have the same positive part, there is an isometry V such that X = V R and Ran V = Ran X; whence V is unitary. Then Therefore V intertwines S and L and so σ| Ran V is equivalent to λ. Finally, we show (ii) ⇒ (iii). If there is an isometry V ∈ X (σ) with Ran V = Ran X, then by again restricting to this range, we may assume that V is unitary, so that σ is equivalent whence R := V * X belongs to L ′ n = R n . Therefore X = V R and so X * X = R * R. Now suppose that X is bounded below. Again we may suppose that X has dense range, hence X is invertible. Consider the Wold decomposition of S. The Cuntz part is supported on Hence σ is a multiple of λ. Since Ran X has a cyclic vector Xξ ∅ , σ has multiplicity one, and thus is equivalent to λ. As an immediate corollary, we note the existence of wandering vectors is characterized by a structural property of X (σ). Corollary 2.9. Let σ be a representation of E n on H with generators S i = σ(s i ). Then S has a wandering vector if and only if there exists X ∈ X (σ) such that X is bounded below. Proof. If η ∈ H is wandering for S, then the isometric map determined by Xξ w = w(S)η belongs to X (σ). The converse follows from the theorem. Remark 2.10. If one only has X * X ≥ R * R for a non-zero R ∈ R n , one may still deduce that Ran X has wandering vectors. To do this, use Douglas' Lemma [16] to factor R = Y X. Then argue as in Theorem 2.8 that Y S i = L i Y . Then with N as in (1), one can show that Y N = {0}. Since Y has dense range, σ has a summand equivalent to λ. Moreover, since the range of an intertwiner consists of absolutely continuous vectors, the existence of this summand and Lemma 3.2 below show that the range of X is spanned by wandering vectors. Example 2.11. There are intertwining maps whose range is not equivalent to λ. For example, consider the atomic representation of type π z ∞ 2 [13, Example 3.2]. Then the restriction of S 2 to the spine ℓ 2 (Z × {0}) is the bilateral shift. Observe that there is a summable sequence (a k ) k∈Z such that k∈Z a k ξ k,0 is cyclic for the bilateral shift. Indeed, Beurling's Theorem states that the (cyclic) invariant subspaces of the bilateral shift, considered as M z on L 2 (T), have the form L 2 (E) for a measurable subset E of T or the form wH 2 where |w| = 1 a.e. Thus if a function g vanishes on a set of positive measure, it generates L 2 (supp(g)). On the other hand, if there is an outer function f in H 2 with |f | = |g| a.e., then the cyclic subspace is wH 2 where w = g/f . This occurs if and only if log |g| belongs to L 1 (T). 
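The dichotomy just described can be recorded in one display. Writing $[g]$ for the closed invariant subspace generated by $g\in L^2(\mathbb{T})$ under $M_z$ (notation introduced only here), the standard Beurling and Szegő facts give
\[
[g] = \overline{\operatorname{span}}\{\,z^kg : k\ge0\,\} =
\begin{cases}
(g/f)\,H^2, & \text{if } \log|g|\in L^1(\mathbb{T}),\ f \text{ the outer function with } |f|=|g| \text{ a.e.},\\
L^2(\operatorname{supp} g), & \text{if } \log|g|\notin L^1(\mathbb{T}).
\end{cases}
\]
In particular, $g$ is cyclic for the bilateral shift exactly when $g\ne0$ a.e. and $\log|g|$ is not integrable.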
So choose a C 2 function g on T which vanishes at a single point in such a way that log |g| is not integrable. For example, make g(θ) = e −1/|θ| near θ = 0 and smooth. Lying in C 2 guarantees that the Fourier coefficients are summable. For each k ∈ Z, there is an intertwining isometry V k with V k ξ ∅ = ξ k,0 . Then V = k∈Z a k V k is an intertwiner. Moreover, V ξ ∅ is cyclic for this Cuntz representation. So V has dense range; but the representation π z ∞ 2 is not equivalent to λ. Remark 2.12. Consider the completely positive map on B(H) given by Φ Moreover, sot-lim k Φ k (X * X) = 0. This latter condition is called purity by Popescu [29]. Under these two hypotheses, namely Φ(D) ≤ D and sot-lim k Φ k (D) = 0, Popescu proves the converse, that D = X * X for an intertwiner SX = XL (∞) using his Poisson transform. Wandering vectors and absolute continuity In [11], we showed that in the presence of summands which contain wandering vectors, the entire type L part is spanned by wandering vectors. In this section, we use the ideas of the previous section to strengthen this significantly by showing that the presence of one wandering vector implies that the type L part is spanned by wandering vectors. We then consider the various ways in which a representation can appear to be type L. Definition 3.1. Let σ be a * -extendible representation of A n . We say that σ is type L if the free semigroup algebra generated by σ( Notice that the restriction of a * -extendible representation σ of A n to the invariant subspace V ac (σ) produces an absolutely continuous representation. We call this restriction the absolutely continuous part of σ. Lemma 3.2. If σ is absolutely continuous and has a wandering vector, then H is spanned by its wandering vectors. In particular, σ is type L. Proof. Let η be a wandering vector in H, and set H 0 = S[η]. Let V be the isometry in X (σ) mapping ℓ 2 (F + n ) onto H 0 . By Theorem 2.7, every vector x ∈ H is in the range of some intertwining map X ∈ X (σ). We may assume that X = 1/2. Then V ± X are intertwiners which are bounded below. By Theorem 2.8, the ranges of these two intertwiners are the ranges of isometric intertwiners, and thus are spanned by wandering vectors. But the range of X is contained in the sum of the ranges of V ± X; and hence x is contained in the span of all wandering vectors. Corollary 3.3. If σ is any representation of E n such that σ(A n ) has a wandering vector, then the span of the wandering vectors for σ(A n ) is V ac (σ). Proof. Any wandering vector is an absolutely continuous vector, so simply restrict σ to the σ(A n )-invariant subspace consisting of absolutely continuous vectors and apply the lemma. We now delineate the various type L forms, and their relationships as we know today. There are no known examples of absolutely continuous representations without wandering vectors. Theorem 3.4. Consider the following conditions for a * -extendible representation σ of A n : (1a) σ is absolutely continuous σ is absolutely continuous and σ (k) has a wandering vector for some finite k (3c) σ (k) is spanned by wandering vectors for some finite k (4a) σ is absolutely continuous and has a wandering vector (4b) σ is spanned by wandering vectors Then properties with the same numeral are equivalent, and larger numbers imply smaller. Proof. (1a) ⇒ (1b). If σ is absolutely continuous, then σ ⊕ λ is absolutely continuous and has a wandering vector. Thus by Lemma 3.2, σ ⊕ λ is spanned by its wandering vectors, and so is type L. 
(1a) ⇒ (1c): If τ is any type L representation, there is an integer p so that τ (p) has a wandering vector. Thus (σ ⊕ τ ) (p) is absolutely continuous and has a wandering vector, and so is also type L. However being type L is not affected by finite ampliations, as this has no effect on the wot-closure. So σ ⊕ τ is type L. It is worthwhile examining the various weaker notions of type L in light of the Structure Theorem for Free Semigroup Algebras [11]. Let σ be a representation of E n and let S and W denote the corresponding free semigroup algebra and von Neumann algebra respectively. Then there is a projection P in S characterized as the largest projection in S for which P SP is self-adjoint. Then S = WP + SP ⊥ , P ⊥ H is invariant for S and SP ⊥ is type L. We wish to break this down a bit more. Definition 3.5. A representation σ of E n or O n is of von Neumann type if the corresponding free semigroup algebra S is a von Neumann algebra. If σ has no summand of either type L or von Neumann type, say that it is of dilation type. We also will say that σ is weak- * of some type if σ (∞) is of that type. A very recent result of Charles Read [30] shows that there can indeed be representations of von Neumann type. The reason for the nomenclature dilation type is that after all summands of von Neumann type and type L are removed, the remainder must have a non-zero projection P prescribed by the structure theorem such that P H is cyclic and P ⊥ H is cyclic for S * . For these algebras, the type L corner must be a multiple of λ. To see this, consider the subspace W = i S i P H ⊖ P H. This is a wandering subspace for the type L part. It is necessarily non-zero, for otherwise S would be a von Neumann algebra. Moreover, W is cyclic for the type L corner because of the cyclicity of P H. Hence the type L part is equivalent to λ (dim W) . This is an observation that was, unfortunately, overlooked in [11]. Hence one sees that the compressions A i = P S i | P H form a row contraction with S i as their minimal isometric dilation (in the sense of Frahzo-Bunce-Popescu). We record the most useful part of this for future reference. Proposition 3.6. If σ is dilation type, then it has wandering vectors. In particular, dilation type and weak- * dilation type coincide. Proof. The first statement was proven in the preamble. Once one has a wandering vector, the span of the wandering vectors includes all of the absolutely continuous vectors, which includes the weak- * type L part. We can now clarify the exceptional case in which there may be pathology. Proposition 3.7. Let σ be a * -extendible representation of A n . If the type L and absolutely continuous parts do not coincide, then σ is of von Neumann type, and decomposes as σ ≃ σ a ⊕ σ s where σ a is absolutely continuous and σ s has no absolutely continuous part. Proof. Decompose σ ≃ σ v ⊕ σ d ⊕ σ l into its von Neumann, dilation and type L parts. By Proposition 3.6, if there is a dilation part, then there are wandering vectors. So by Corollary 3.3, the type L and absolutely continuous parts coincide. Likewise if there is a type L part, the equivalence of (1a) and (1b) in Theorem 3.4 shows that the type L and absolutely continuous parts will coincide. So σ is necessarily of von Neumann type. Since V ac (σ) is invariant for S σ , and S σ is a von Neumann algebra, V ac (σ) is a reducing subspace for S σ . This gives the desired decomposition σ ≃ σ a ⊕ σ s . Definition 3.8. 
Call a * -extendible representation σ of A n regular if the absolutely continuous and type L parts of σ coincide. Remark 3.9. Proposition 3.7 shows that the only pathology that can occur in the various weak type L possibilities is due to a lack of wandering vectors. It is conceivable that a representation is type L but has no wandering vectors. Such an algebra is reductive and nonselfadjoint. There is no operator algebra known to have this property. So the (unlikely) existence of such an algebra would yield a counterexample to a well-known variant of the invariant subspace problem. A * -extendible representation σ which is weak- * type L but not type L must be von Neumann type by the preceding proposition. But then σ(A n ) w- * would be a weak- * closed subalgebra isomorphic to L n which is wot-dense in a von Neumann algebra. We have no free semigroup algebra example of this type of behaviour. However, Loebl and Muhly [22] have constructed an operator algebra which is weak- * closed and nonselfadjoint, but with the wot-closure equal to a von Neumann algebra. Therefore it is conceivable that such a free semigroup algebra could exist. Finally, one could imagine that σ is of weak- * von Neumann type but absolutely continuous. Clearing up the question of whether any of these possibilities can actually occur remains one of the central questions in the area. We conjecture that every representation is regular. Indeed, we would go further and speculate that type L representations always have wandering vectors. Reflexivity and hyper-reflexivity In this section, we establish two reflexivity results that extend previous work in light of the previous section. Theorem 4.1. If S is a free semigroup algebra which has a wandering vector, then it is reflexive. Proof. By [11, Proposition 5.3], S is reflexive if and only if the restriction to its type L part is reflexive. Thus, without loss of generality, we may assume that S is type L. Since S is type L and has a wandering vector, Lemma 3.2 shows that H is spanned by wandering vectors. Let W ⊆ H be the set of all unit wandering vectors. For each α ∈ W , let H α = S[α] and let V α : ℓ 2 (F + n ) → H L be the intertwining isometry which sends ξ w to S w α. Then the invariant subspaces H α span H and each restriction S| Hα is unitarily equivalent to L n via V α . If T ∈ Alg Lat S, then H α is invariant for T . Since L n is reflexive, there is an element B α ∈ L n so that T | Hα = V α B α V * α . For each α ∈ W , there is an element A α ∈ S so that A α | Hα = V α B α V * α . Fix an element α 0 ∈ W , let V 0 = V α 0 and A 0 = A α 0 . We shall show that T = A 0 . By replacing T with T − A 0 , we may assume that T | H 0 = 0, so that our task is to show T = 0. Given α ∈ W , the operator X = V 0 + .5V α is an intertwining map between S and L which is bounded below. Moreover, M := Ran X is closed and invariant for S; hence M is also invariant for T . But T Xξ ∅ = T V 0 ξ ∅ + .5T V α ξ ∅ = .5A α V α ξ ∅ =: y belongs to H α ∩ M. This implies that there is a vector ζ ∈ ℓ 2 (F + n ) so that y = Xζ = V 0 ζ + .5V α ζ belongs to H α , and thus V 0 ζ lies in H 0 ∩ H α . If ζ = 0 then y = 0, so that A α has the non-zero vector V α ξ ∅ in its kernel. Otherwise V 0 ζ is a non-zero vector in H 0 ∩ H α and A α V 0 ζ = T V 0 ζ = 0. Therefore, A α | Hα has non-trivial kernel. Hence B α is an element of L n with non-trivial kernel. Since non-zero elements of L n are injective [13, Theorem 1.7], we deduce that B α = 0. Hence 0 = A α | Hα = T | Hα . 
Since α∈W H α = H, we conclude that T = 0 as desired. Recall that an operator algebra A is hyper-reflexive if there is a constant C so that The known families of hyper-reflexive algebras are fairly small. It includes nest algebras [3] with constant 1, the analytic Toeplitz algebra [9] and the free semigroup algebras L n [13]. Bercovici [4] obtained distance constant 3 for all algebras having property X 0,1 and also showed that an operator algebra A has property X 0,1 whenever its commutant contains two isometries with orthogonal ranges. In particular, L n has property X 0,1 when n ≥ 2. Bercovici's results significantly increased the known class of hyper-reflexive algebras. There is a long-standing open question about whether all von Neumann algebras are hyperreflexive, which is equivalent to whether every derivation is inner [8]. The missing cases are von Neumann algebras whose commutant are certain intractable type II 1 algebras. This could include certain type II ∞ representations of O n , and hence would apply in our context. So for the next result, we restrict ourselves to the type L case. Theorem 4.2. If S is a type L free semigroup algebra which has a wandering vector, then S is hyper-reflexive. Before giving the proof, we pause for the following remark. Remark 4.3. If S is type L and has a wandering vector, then by [13] it has property A 1 and by [15] it even has property A ℵ 0 . In particular, Theorem 4.2 together with a result from [19] implies that every weak- * closed subspace of a type L free semigroup algebra with a wandering vector is also hyper-reflexive. Even though X 0,1 is only a bit stronger than A ℵ 0 , we were unable to show that S has it. So we are unable to apply Bercovici's argument. Thus, the proof which follows uses methods reminiscent of those used in [13]. If S is type L and has no wandering vector, then as noted in Remark 3.9, the algebra will be nonselfadjoint and reductive. In particular, it is not reflexive. Proof. Let T ∈ B(H), and set β(T ) = sup P ∈Lat S P ⊥ T P . Let x 0 be a wandering vector of S. Then S| S[x 0 ] ≃ L n . Since L n is hyper-reflexive with constant 3, there exists an A ∈ S with (T − A)| S[x 0 ] || ≤ 3β(T ). By replacing T with T − A, we can assume that T | S[x 0 ] ≤ 3β(T ). 13 Let x be a wandering vector with x = x 0 and let V be the isometric intertwiner from Let x i = S i x 0 , for i = 1, 2. For i = 0, 1, 2, define isometric intertwiners V i from ℓ 2 (F + n ) to H by V i ξ w = S w x i for w ∈ F + n . For i = 1, 2, set T i = V i + rV where 0 < r < 1/ √ 2, and define N i = Ran T i . We claim that N 1 and N 2 are at a positive angle to each other; so that N 1 ∩ N 2 = {0} and N 1 + N 2 is closed. Indeed, using δ := 1 − r √ 2 > 0, So the natural map of N 1 ⊕ N 2 onto N 1 + N 2 is an isomorphism. Observe next that for any w ∈ F + n , we have Therefore, lim k→∞ |w|=k As n j=1 S j N i has co-dimension one in N i , we find that. By the Wold decomposition, we deduce that S| N 1 +N 2 ≃ L (2) n . This algebra is hyper-reflexive with distance constant 3. So there is an element A ∈ S such that (T − A)| N 1 +N 2 ≤ 3β(T ). Suppose that y is a unit vector in S[x]. Observe that Choosing r sufficiently close to 1/ √ 2 yields that T | S[x] ≤ 26β(T ), so (2) holds. We now can estimate T . Fix any unit vector y ∈ H, and let T be the free semigroup algebra generated by S i ⊕ L i . Since S is type L, by [11,Theorem 1.6] there is a vector ζ ∈ ℓ 2 (F + n ) with ζ < ε such that T[y ⊕ ζ] is a subspace of H ⊕ ℓ 2 (F + n ) which is generated by a wandering vector. 
Hence T[y ⊕ ζ] is the range of an isometry W ′ from ℓ 2 (F + n ) to H ⊕ ℓ 2 (F + n ) intertwining L i with S i ⊕ L i . Then W ′′ := P H W ′ is a contraction in B(ℓ 2 (F + n ), H) satisfying S i W ′′ = W ′′ L i . Moreover, there is a vector ξ ∈ ℓ 2 (F + n ) of norm (1 + ε 2 ) 1/2 such 14 that W ′′ ξ = y. Identify S[x 0 ] with ℓ 2 (F + n ) via the isometry V 0 ∈ B(ℓ 2 (F + n ), H), and set W := W ′′ V * 0 ∈ B(S[x 0 ], H) and w := V 0 ξ. Let J be the inclusion map of S[x 0 ] into H. For |t| < 1, consider V t = J + tW . This is an intertwining map which is bounded below, and thus by Theorem 2.8, there is a wandering vector x t of S so that Ran(V t ) = S[x t ]. So T (w + ty) ≤ 26β(T ) w + ty . Since T w ≤ 3β(T ) w , if we let t increase to 1 and ε decrease to 0, we obtain T y ≤ 55β(T ). So T ≤ 55β(T ). Thus, S is hyper-reflexive with constant at most 55. The following proposition is complementary to [11,Proposition 2.10] showing that if S is of Cuntz type, then S ′′ = W is a von Neumann algebra. Proof. Since S is not Cuntz type, by the Wold decomposition, it has a direct summand equivalent to L n . That is, we may decompose the generators S 1 , ..., S n as S i = T i ⊕ L i on H = H 1 ⊕ ℓ 2 (F + n ). Let W be the von Neumann algebra generated by S. By the Structure Theorem [11, Theorem 2.6], there is a largest projection P in S such that P SP is self-adjoint and S = WP By Theorem 3.2, P ⊥ H is spanned by wandering vectors. For any wandering vector x α , let V α be the canonical intertwining isometry from ℓ 2 (F + n ) into H defined by V α ξ w = S w x α for w ∈ F + n . If we select x 0 = 0 ⊕ ξ ∅ , then V 0 maps onto the free summand. It is easy to check that V α V * 0 commutes with S. Let A ∈ S ′′ . Then since 0 ⊕ I commutes with S, A must have the form A = A 1 ⊕ A 2 . Moreover, A 2 ∈ L ′′ n = L n by [13]. There is an element B ∈ S such that B = B 1 ⊕ A 2 . Subtracting this from A, we may suppose that A = A 1 ⊕ 0. Then Thus AP ⊥ = 0. As above, AP lies in S, whence A belongs to S. A Kaplansky Density Theorem Kaplansky's famous density theorem states that if σ is a * -representation of a C*-algebra A, then the unit ball of σ(A) is wot-dense in the ball of the von Neumann algebra W = σ(A) w- * = σ(A) wot . In general, there is no analogue of this for operator algebras which are not self-adjoint. Indeed, it is possible to construct many examples of pathology [32]. On the other hand, the density theorem is such a useful fact that it is worth seeking such a result whenever possible. In this section, we establish a density theorem for regular representations of A n . Consider the following "proof" of the Kaplansky density theorem. Consider the C*algebra A sitting inside its double dual A * * , which is identified with the universal enveloping von Neumann algebra W u of A. Any representation σ of A extends uniquely to a normal representation σ of W u onto W = σ(A) ′′ . Because this is a surjective * -homomorphism of C*-algebras, it is a complete quotient map. In particular, any element of the open ball of W is the image of an element in the ball of W u . Now by Goldstine's Theorem, every element of the ball of A * * is the weak- * limit of a net in the ball of A. Mapping this down into W by σ yields the result. We call this a "proof" because the usual argument that W u is isometrically isomorphic to A * * requires the Kaplansky density theorem. Indeed, each state on A extends to vector state on W u . 
But the fact that all functionals on A have the same norm on W u follows from knowing that the unit ball is weak- * dense in the ball of W u . It seems quite likely that the use of Kaplansky's density theorem could be avoided, making this argument legitimate. Nevertheless, we can use this argument to decide when such a result holds in our context. Moreover, in the C*-algebra context, Kaplansky's theorem extends easily to matrices over the algebra because they are also C*-algebras. In our case, it follows from the proof. The double dual of A n may be regarded as a free semigroup algebra, in the following way. We shall use it as a tool in the proof of the Kaplansky density theorem, and we pause to highlight some of its features. Definition 5.1. Regard A n as a subalgebra of E n . Then the second dual A * * n is naturally identified with a weak- * closed subalgebra of E * * n . This will be called the universal free semigroup algebra. That this is a free semigroup algebra will follow from the discussion below. We shall denote its structure projection by P u . Denote by j the natural inclusion of a Banach space into its double dual. Then j(A n ) generates E * * n as a von Neumann algebra. If σ is a * -representation of E n on a Hilbert space H, then σ has a unique extension to a normal * -representation σ of E * * n on the same Hilbert space H. Moreover, σ(E * * n ) is the von Neumann algebra σ(E n ) ′′ generated by σ(E n ). Fix once and for all a universal representation π u of E n acting on the Hilbert space H u with the property that π u has infinite multiplicity, i.e. π u ≃ π (∞) u . This is done to ensure that the wot and weak- * topologies coincide on the universal von Neumann algebra W u = π u (E n ) ′′ . Then π u is a * -isomorphism of E * * n onto W u . This carries A * * n onto the weak- * closed subalgebra closure S u of π u (A n ). This coincides with the wot-closure, and thus this is a free semigroup algebra. Hence A * * n is a free semigroup algebra. Since π u has infinite multiplicity and contains a copy of λ, its type L part is spanned by wandering vectors. So by Theorem 3.4, the range of π u (P ⊥ u ) is V ac (π u ). Proposition 5.2. Let σ be a representation of E n and let P u ∈ A * * n be the the universal structure projection. Then σ(P ⊥ u ) is the projection onto V ac (σ). Proof. Consider the kernel of σ. There is a central projection Q σ ∈ E * * n such that ker σ = Q σ E * * n . Moreover, we may regard H as a closed subspace of H u and σ as given by multiplication by Q ⊥ σ , namely σ(X) = Q ⊥ σ X| H for any X ∈ E * * n . Let M be the range of σ(P ⊥ u ) and let x ∈ M. Then x ∈ Q ⊥ σ P ⊥ u H u , so x belongs to V ac (π u ). Thus for any A ∈ A n , σ(A)x, x = π u (j(A))Q ⊥ σ P ⊥ u x, Q ⊥ σ P ⊥ u . As the range of P ⊥ u consists of absolutely continuous vectors, we see that this is an absolutely continuous functional, so x ∈ V ac (σ). Conversely, if x ∈ V ac (σ), then there exists an intertwiner X ∈ X (σ) and ζ ∈ ℓ 2 (F + n ) so that x = Xζ. Observe that Q ⊥ σ X belongs to X (π u ), hence x ∈ V ac (π u ). Since the absolutely continuous part of π u coincides with the type L part of π u , we conclude that x ∈ P ⊥ u H u ∩ Q ⊥ σ H u and therefore σ(P ⊥ u )x = x. Since the type L part of a representation σ is contained in the absolutely continuous part, it follows that σ(P ⊥ u ) ≥ P ⊥ σ . Notice that by the previous result, σ is regular if and only if σ(P ⊥ u ) = P ⊥ σ , where P σ is the structure projection for S σ . Proposition 5.3. Let σ be a regular * -representation of E n . 
Then σ(A n ) wot = σ(A n ) w- * and Proof. Let T := σ(A n ) w- * , S := σ(A n ) wot and let W be the von-Neumann algebra generated by σ(A n ). Let P T and P S be the structure projections for σ(A n ) w- * and σ(A n ) wot respectively. Then P ⊥ T ≥ P ⊥ S . Since the absolutely continuous part of σ contains the range of P ⊥ T , the regularity of σ yields that P T = P S = σ(P u ). Hence T = WP + TP ⊥ and S = WP + SP ⊥ . Moreover both TP ⊥ and SP ⊥ are canonically isomorphic to L n and the isomorphisms agree on σ(A n ). Hence they are equal. For typographical ease, write P = P T = P S . Given X ∈ S, find X ′ ∈ E * * n such that σ(X ′ ) = X. We may suppose that X ′ = Q ⊥ σ X ′ . This determines X ′ uniquely, and σ is injective on Q ⊥ σ E * * n . By Proposition 5.2 and the regularity of σ, σ(P u ) = P . So σ(P u X ′ P ⊥ u ) = P XP ⊥ = 0, whence P u X ′ P ⊥ u = 0. To see that X ′ belongs to A * * n , it remains to show that P ⊥ u X ′ P ⊥ u lies in A * * n P ⊥ u , which is type L. But A * * n P ⊥ u and SP ⊥ are both canonically isometrically isomorphic to L n , from which it is clear that σ| A * * n P ⊥ u is an isomorphism onto SP ⊥ . We can now prove our Kaplansky-type theorem. Theorem 5.4. Let σ be a regular * -representation of E n . Then the unit ball of σ(A n ) is weak- * dense in the unit ball of σ(A n ) w- * , and the same holds for M k (σ(A n )). Proof. Let S := σ(A n ) w- * = σ(A n ) wot . We first show that (3) ker σ| A * * n = A * * n Q σ P u . To see this, notice that σ| A * * n P ⊥ u is an isometric map of the type L part of A * * n onto the type L part of S, that is, σ maps A n P ⊥ u isometrically onto SP ⊥ σ . Therefore, if X ∈ A * * n and σ(X) = 0, then σ(X)P ⊥ σ = 0, so that XP ⊥ u = 0. As X ∈ ker σ, we find X ∈ A * * n Q σ P u . The reverse inequality is clear, so (3) holds. Next we show that σ| A * * n is a complete quotient map onto S. For X ∈ A * * n , we have dist(X, ker The reverse inequality is clear, so that σ(X) = dist(X, ker σ| A * * n ). By tensoring Q σ and P u with the identity operator on a k-dimensional Hilbert space, the same argument holds for X ∈ M k (A * * n ) and the map σ k := σ ⊗ I C k . Thus σ| A * * n is a complete contraction. Consider any element T of the open unit ball of S. Since the map of A * * n onto S is a complete quotient map, there is a contraction T u ∈ A * * n which maps onto T . By Goldstine's Theorem, the unit ball of a Banach space is weak- * dense in the ball of its double dual. So select a net A λ in the ball of A n so that j(A λ ) converges weak- * to T u . Then evidently σ(A λ ) converges weak- * (and thus wot) to T . If one wants A λ ≤ T , a routine modification will achieve this. Because σ| A * * n is a complete contraction, the same argument persists for matrices over the algebra as well. Lemma 5.5. If σ is absolutely continuous and S satisfies Kaplansky's Theorem with a constant, then σ is type L. Proof. As σ is absolutely continuous, σ ⊕ λ is type L. Let τ denote the weak- * continuous homomorphism of L n into S obtained from the isomorphism of L n with S σ⊕λ followed by the projection onto the first summand. Note that if L is an isometry in L n , then (σ ⊕ λ)(L) is an isometry [11, Theorem 4.1]. Hence τ (L) is an isometry as well. Consider ker τ . This is a weak- * closed two-sided ideal in L n . If this ideal is non-zero, then the range of the ideal is spanned by the ranges of isometries in the ideal [14]. In particular, the kernel would contain these isometries, contrary to the previous paragraph. Hence τ is injective. 
Let C be the constant in the density theorem for S. If T ∈ S and T ≤ 1/C, then there is a net A i in the unit ball of A n such that σ(A i ) converges weak- * to T . Drop to a subnet if necessary so that the net λ(A i ) converges weak- * to an element A ∈ L n . Then (σ ⊕ λ)(A i ) converges weak- * to T ⊕ A. Hence τ (A) = T . That means that τ is surjective, and hence is an isomorphism. Now if σ is not type L, then it is von Neumann type by Proposition 3.7; and hence contains proper projections. But L n contains no proper idempotents [13]; so this is impossible. Therefore σ must be type L. Theorem 5.6. For a representation σ of E n , the following statements are equivalent. (2) is obvious, so suppose (2) holds. If σ is not regular, then it is von Neumann type by Proposition 3.7; and σ ≃ σ a ⊕σ s . Since Kaplansky holds with a constant, this persists for σ a because the wot-closure does not change by dropping σ s , it being the full von Neumann algebra already. This contradicts Lemma 5.5. Definition 5.7. A functional ϕ on A n is singular if it annihilates the type L part of A * * n . Proposition 5.8. For a functional ϕ on A n of norm 1, the following are equivalent: (1) ϕ is singular. (2) There is a regular representation σ of E n and vectors x, y ∈ H σ with x = P σ x such that ϕ(A) = σ(A)x, y . Proof. If ϕ ∈ A * , it is a weak- * continuous functional on A * * n , so we may represent it as a vector functional on H u , say ϕ(A) = π u (A)x, y . Since ϕ annihilates the type L part, it does not change the functional to replace x by P u x. So (1) implies (2). If (2) holds, then for every A ∈ A n we have ϕ(A) = σ(j(A)P u )x, y , which clearly annihilates the type L part of A * * . Thus (2) implies (1). If (1) holds, then ϕ(j(A)P ⊥ u ) = 0, so ϕ(j(A)) = ϕ(j(A)P u ). Now A * * n P u = k≥1 (A * * n,0 ) k , so that ϕ| (A * * n,0 ) k = 1 for all k ≥ 1. It is easy to see that (A * * n,0 ) k = (A k n,0 ) * * . By basic functional analysis, a functional on a Banach space X has the same norm on the second dual. Therefore ϕ| A k n,0 = 1 for all k ≥ 1. If (3) holds, then there is a sequence A k in the ball of A k n,0 so that lim k→∞ ϕ(A k ) = 1. Here is a version of the Jordan decomposition. Proposition 5.9. Every functional ϕ on A n splits uniquely as the sum of an absolutely continuous functional ϕ a and a singular one ϕ s . Moreover and these inequalities are sharp. Regard A n as a subalgebra of E n and extend ϕ to a linear functional (again called ϕ) on E n with the same norm. Then (using the GNS construction and the polar decomposition of functionals on a C*-algebra) there exists a * -representation σ of E n on a Hilbert space H σ and vectors x, y ∈ H σ with x y = ϕ so that for every A ∈ E n , ϕ(A) = σ(A)x, y . Therefore for A ∈ A n , we have ϕ a (A) = σ(A)σ(P ⊥ u )x, y and ϕ s (A) = σ(A)σ(P u )x, y . Hence The example following will show that the √ 2 is sharp. Example 5.10. Consider the atomic representation σ 1,1 on Cξ * ⊕ ℓ 2 (F + n ) given by S 1 ξ * = ξ * and S 2 ξ * = ξ ∅ ; and Then S σ contains A = ξ ∅ ξ * * / √ 2 + (I − ξ * ξ * * )/ √ 2 and ϕ(A) = 1. So we see that ϕ = 1. On the other hand, Question 5.11. Let S be the unilateral shift and consider the representation of A 2 obtained from the minimal isometric dilation of A 1 = S/ √ 2 and A 2 = (S+P 0 ) * / √ 2. The weak- * closed self-adjoint algebra generated by A 1 and A 2 is all of B(H). 
Therefore this representation is either dilation type with P SP = B(H) or it is type L, depending on whether the functional ϕ = e 0 e * 0 is singular or absolutely continuous. To check. it suffices to determine whether ϕ has norm 1 or less on A n,0 . We would like to know which it is. We provide an example of how the density theorem can be used to establish an interpolation result for finitely correlated presentations. Such representations are obtained from a row contraction of matrices A = A 1 . . . A n ∈ M 1,n (M k (C)) by taking the minimal isometric dilation [17,7,23]. These representations were classified in [12]. The structure projection P has range equal to the span of all {A * i } invariant subspaces on which A is isometric. In particular, it is finite rank. Also, the type L part is a finite multiple, say α, of the left regular representation. Thus elements of the free semigroup algebra S have the form X 0 Y Z (α) where X and Y lie in P WP and P ⊥ WP respectively, and Z ∈ L n , where W is the von Neumann algebra generated by S. Theorem 5.13. Let σ be a finitely correlated representation. If A ∈ S σ has A < 1 and k ∈ N, then there is an operator B ∈ A n so that σ(B)P = AP and the Fourier series of B up to level k agree with the coefficients of AP ⊥ . Since the weak and strong operator topologies have the same closed convex sets, the density theorem implies that there exists a sequence {L k } in L n so that L k < 1 − ε and A = sot lim σ(L k ). Recalling that P and Q k are finite rank, we conclude that there exists B 1 ∈ A n so that (A − σ(B 1 ))P + Q k (C − B 1 ) < ε/2. By [15,Corollary 3.7], there is an element C 1 ∈ L n so that Q k C 1 = Q k (C − B 1 ) and Proceed as above to define C 2 ∈ L n so that Q k C 2 = Q k (C 1 −B 2 ) and C 2 = Q k (C 1 −B 2 ) ; and then define A 2 = (A 1 − σ(B 2 ))P + C (α) 2 P ⊥ satisfying A 2 < ε/4. Proceeding recursively, we define B j for j ≥ 1 so that B = j≥1 B k is the desired approximant. Constructive examples of Kaplansky In this section, we give a couple of examples where we were able to construct the approximating sequences more explicitly. We concentrate on exhibiting the structure projection P as a limit of contractions. It is then easy to see that the whole left ideal WP has the same property by applying the C*-algebra Kaplansky theorem. We do not have an easy argument to show that one can extend this to the type L part without increasing the constant. Proposition 6.1. Let S be the free semigroup algebra generated by isometries S 1 , . . . , S n ; and let A be the norm closed algebra that they generate. Let P ∈ S be the projection given by the Structure Theorem. If P = I is the wot-limit of a sequence in A of norm at most r, then the sot-closure of the r-ball of A k 0 contains SP for all k ≥ 0. Proof. Since P = I, S has a type L part. Let Φ be the canonical surjection of S onto L n with Φ(S i ) = L i [11, Theorem 1.1]. Recall that the kernel of Φ is ∞ k=1 S k 0 = WP . Since the weak and strong operator topologies have the same closed convex sets, we may suppose that the sequence in A converges to P strongly. In particular, the restriction of this sequence to the type L part converges strongly to 0. Hence the Fourier coefficients each converge to 0. Thus a minor modification yields a sequence A k ∈ A k 0 of norm at most r converging sot to P . If T lies in the unit ball of WP , then by the usual Kaplansky density theorem, there is a sequence B k in the unit ball of C * (S) which converges sot to T . 
We may assume that B k are polynomials in S i , S * i for 1 ≤ i ≤ n of total degree at most k. Then observe that B k A 2k lies in A k n,0 , and converges sot to T P = T . Our first example is a special class of finitely correlated representations which are obtained from dilating multiples of unitary matrices. Theorem 6.2. Suppose that U i for 1 ≤ i ≤ n are unitary matrices in B(V), where V has finite dimension d, and that α i are non-zero scalars so that n i=1 |α i | 2 = 1. Let S i be the joint isometric dilation of A i = α i U i to a Hilbert space H. Let S be the free semigroup algebra that they generate; and let A denote the norm-closed algebra. Then the projection P = P V is the projection that occurs in the Structure Theorem, and there is a sequence of contractions in A which converges sot to P . Lemma 6.3. If U is a set of unitary matrices in M d , then the closure of the set of all nonempty words in elements of U is a subgroup of the unitary group U and the algebra generated by U is a C*-algebra. Proof. The closure G of words in U is multiplicative and compact. Any unitary matrix U is diagonalizable with finite spectrum. A routine pigeonhole argument shows that there is a sequence U n i which converges to I, and thus U n i −1 converges to U −1 . It follows that G is a group. It is immediate that the algebra generated by U contains U * and thus is self-adjoint. Proof of Theorem 6.2. From the Lemma, we see that the algebra generated by {A * i } is self-adjoint, and thus the space V is the span of its minimal A * i invariant subspaces. From the Structure Theorem for finitely correlated representations [12], we deduce that P = P V is a projection in S and that S = WP + SP ⊥ , where W is the von Neumann algebra generated by S, P ⊥ H is invariant, and SP ⊥ is a (finite) ampliation of L n . Consider the space X consisting of all infinite words x = i 1 i 2 i 3 . . . where 1 ≤ i j ≤ n for j ≥ 1. This is a Cantor set in the product topology. Put the product measure µ on X obtained from the measure on {1, . . . , n} which assigns mass |α i | 2 to i. Fix ε > 0. Since the closed semigroup G generated by {U i } is a compact group by Lemma 6.3, one may choose a finite set S of non-empty words which form an ε-net (in the operator norm). Let N denote the maximum length of these words. Then we have the following consequence: given any word w = i 1 . . . i k , there is a word v = j 1 . . . j l in S with l ≤ N so that U wv = U i 1 . . . U i k U j 1 . . . U j l satisfies U wv − I < ε. Recursively determine a set W of words so that S w have pairwise orthogonal ranges and U w − I < ε for w ∈ W as follows: start at an arbitrary level k 0 and take all words w with |w| = k 0 such that U w − I < ε. If a set of words of length at most k has been selected, add to W those words of length k + 1 which have ranges orthogonal to those already selected and satisfy U w − I < ε. We claim that sot-w∈W S w S * w = I. The argument is probabilistic. Let δ = min{|α i | 2 }. Associate to w the subset X w of all infinite words in X with w as an initial segment. By construction, the sets X w are pairwise disjoint clopen sets for w ∈ W with measure |α w | 2 , where we set α w = k t=1 α it . Verifying our claim is equivalent to showing that w∈W X w has measure 1. Consider the complement Y k of w∈W, |w|≤k X w . This is the union of certain sets X w for words w of length k. For each such word, there is a word v ∈ S so that U wv −I < ε. 
Now S wv has range contained in the range of S w , which is orthogonal to the ranges of words in W up to level k. It follows from the construction of W that there will be a word w ′ ∈ W so that w ′ divides wv. As a consequence, Y k+N has measure smaller than Y k by a factor of at most 1 − δ N because for each interval X w in Y k , there is an interval X wv which is in the complement, and its measure is at least δ N µ(X w ). Therefore lim k→∞ µ(Y k ) = 0. Observe that P S w = α w U w . Define a state τ on B(H) as the normalized trace of the compression to V. Since U w − I < ε, it follows that |τ (U w ) − 1| < ε. Compute By taking ε = 1/k and k 0 = k in the construction above, we obtain a sequence T k of polynomials T k ∈ A k 0 which are contractions and lim k→∞ τ (T k ) = 1. It follows that there is a subsequence which converges wot to a limit T ∈ S which lies in k≥1 S k 0 = WP . Moreover T ≤ 1 and τ (T ) = 1. The only contraction in B(V) with trace 1 if the identity, and therefore the compression P T = P . As T is contractive, we deduce that P ⊥ T = P ⊥ T P = 0, whence T = P . As the sot and wot-closures of the balls are the same, there is a sequence in the convex hull of the T k 's which converges to P strongly. Our second constructive example is the set of atomic representations introduced in [13]. To analyze these, we will need some of Voiculescu's theory of free probability. Theorem 6.4. If S is an atomic free semigroup algebra, then the structure projection is a sot-limit of contractive polynomials in the generators. It is convenient for our calculation to deal with certain norm estimates in the free group von Neumann algebra. We thank Andu Nica for showing us how to handle this free probability machinery. and takes positive real values on real numbers x < 1/4α(1 − α). An easy calculation shows that the singularity at z = 1 is removable. The power series for M pqp converges on the largest disk on which it is analytic. The branch point occurring at z = 1/4α(1 − α) is the only obstruction, and thus the radius of convergence is 1/4α(1 − α). On the other hand, from Hadamard's formula, the reciprocal of the radius of convergence is 4α(1 − α) = lim sup k→∞ τ ((pqp) k ) 1/k = pqp . Corollary 6.6. Let U i for 1 ≤ i ≤ n denote the generators of the free group von Neumann algebra, and let P i be spectral projections for U i for sets of measure at most α ≤ 1/2. Then n i=1 P i ≤ 1 + 2n 2 √ α. Proof. By Lemma 6.5, we have P i P j = P i P j P i 1/2 ≤ 2 √ α for i = j. If n i=1 P i = 1+x, then Hence x ≤ 2n 2 √ α as claimed. Recall from [13] the atomic representation σ u,λ determined by a primitive word u = i 1 . . . i d in F + n and a scalar λ in T. Define a Hilbert space H u ≃ C d ⊕ ℓ 2 (F + n ) d(n−1) with orthonormal basis ζ 1 , . . . , ζ d for C d and index the copies of ℓ 2 (F + n ) by (s, j), where 1 ≤ s ≤ d, 1 ≤ j ≤ n and j = i s , with basis {ξ s,j,w : w ∈ F + n }. Define a representation σ u,λ of F + n and isometries S i = σ u,λ (i) by ξ s,j,w = ξ s,j,iw for all i, s, j, w For our purposes, we need to observe that the vectors ζ 1 , . . . , ζ d form a ring which is cyclically permuted by the appropriate generators S is ; and all other basis vectors are wandering. The projection P in the structure theorem is the projection onto C d . Lemma 6.7. Let u be a primitive word and let λ ∈ T. Let S be the atomic free semigroup algebra corresponding to the representation σ u,λ . Then the projection P from the Structure Theorem is the limit of contractive polynomials in the generators.
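The norm identity behind Lemma 6.5 and Corollary 6.6, namely that two free projections of equal trace α ≤ 1/2 satisfy ‖pqp‖ = 4α(1 − α), can also be checked numerically: by Voiculescu's asymptotic freeness, independently Haar-rotated projections of relative rank α in M_d(C) become approximately free as d grows, so the top eigenvalue of pqp should approach 4α(1 − α). The short NumPy sketch below is an illustration added here, not part of the argument; the dimension and the value of α are arbitrary choices.

import numpy as np

def haar_unitary(d, rng):
    # QR of a complex Ginibre matrix, with the phases of diag(R) fixed,
    # gives a Haar-distributed unitary.
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases

def random_projection(d, rank, rng):
    # Haar-rotated coordinate projection of the given rank.
    u = haar_unitary(d, rng)
    diag = np.zeros(d)
    diag[:rank] = 1.0
    return u @ np.diag(diag) @ u.conj().T

rng = np.random.default_rng(0)
d, alpha = 400, 0.3
p = random_projection(d, int(alpha * d), rng)
q = random_projection(d, int(alpha * d), rng)

top = np.linalg.eigvalsh(p @ q @ p).max()
print(f"top eigenvalue of pqp     : {top:.4f}")
print(f"free prediction 4a(1 - a) : {4 * alpha * (1 - alpha):.4f}")

For d of a few hundred the two numbers agree to roughly two decimal places, which is the accuracy one expects from the finite-dimensional approximation of freeness.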
2014-10-01T00:00:00.000Z
2004-06-02T00:00:00.000
{ "year": 2004, "sha1": "21d2d3d4aaf0808e36ac19bc19e10b5f3c38c42c", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jfa.2004.08.005", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "21d2d3d4aaf0808e36ac19bc19e10b5f3c38c42c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
55598703
pes2o/s2orc
v3-fos-license
Stability of tetrons We consider the interactions in a mesonic system, referred here to as `tetron', consisting of two heavy quarks and two lighter antiquarks (which may still be heavy in the scale of QCD), i.e. generally $Q_a Q_b \bar q_c \bar q_d$, and study the existence of bound states below the threshold for decay into heavy meson pairs. At a small ratio of the lighter to heavier quark masses an expansion parameter arises for treatment of the binding in such systems. We find that in the limit where all the quarks and antiquarks are so heavy that a Coulomb-like approximation can be applied to the gluon exchange between all of them, such bound states arise when this parameter is below a certain critical value. We find the parametric dependence of the critical mass ratio on the number of colors $N_c$, and confirm this dependence by numerical calculations. In particular there are no stable tetrons when all constituents have the same mass. We discuss an application of a similar expansion in the large $N_c$ limit to realistic systems where the antiquarks are light and their interactions are nonperturbative. In this case our findings are in agreement with the recent claims from a phenomenological analysis that a stable $b b \bar u \bar d$ tetron is likely to exist, unlike those where one or both bottom quarks are replaced by the charmed quark. Introduction Multiquark hadrons, whose internal structure apparently goes beyond the standard template of three quark baryons and quark-antiquark mesons, have recently been observed in various experiments (for a recent review see e.g. [1,2]). All such exotic hadrons found so far contain a heavy b or c quark and a corresponding antiquark. For this reason they all are unstable with respect to annihilation of a heavy quark-antiquarks pair, even though their rate of dissociation into conventional hadrons can be small. There is a whole spectrum of theoretical models for description of such resonances. In particular the most discussed models for the mesonic ones are the molecular [3,2], the tetraquark (a recent review can be found in Ref. [4]) and the 'hadro-quarkonium' [5,6]. A different kind of phenomenology of multiquark hadrons would be accessible if there existed systems made of two heavy quarks (as opposed to a quark-antiquark pair) and two lighter antiquarks: Q a Q bqcqd , that would be bound below the threshold for dissociation into a pair of Qq mesons. Such hadrons have been discussed within the quark model for quite some time [7,8], and the lightest of them can decay only through the weak interaction. In view of special properties of such systems we call them here 'tetrons' implying that they in fact are 'stable' mesons made of four constituents. A recent revival of the interest in tetrons is inspired by the observation [11] by LHCb of a doubly charmed baryon Ξ ++ cc ∼ ccu. The measurement of the mass of the baryon at about 3621 MeV has provided an estimate of the effective mass of the heavy quark pair cc (with the interaction between the quarks in the color antitriplet state), and thus an input into phenomenological models [14,15]. The latter models are based on the picture [8], where due to the attraction in the colorantisymmetric state, the heavy quark pair forms a compact, in fact a point-like, bound state. This bound state then acts essentially as a heavy antiquark and binds either with a light quark to form a baryon, e.g. Ξ ++ cc , or with a light antiquark pair to form a tetron, e.g. Q a Q būd . 
Since the latter binding is similar to that in respectively a heavy meson and a heavy (anti)baryon, by applying the known mass differences, e.g. between Λ c and D, or between Λ b and the B-meson, the masses of possible tetrons containing cc, or bb, or bc heavy quark pair can be estimated. In this way it has been argued [14,15] that there are no stable tetrons with cc heavy quark pair, but there definitely is a bbūd one, well below the B −B0 threshold, and also likely similar weakly decaying strange tetrons [15] bbsq with q standing for either u or d. Numerical evidence for such states has been established in lattice nonrelativistic QCD [12], as well as using the approximation of static b-quarks [13]. (The conclusion about existence of mixed bottom-charm tetrons bcqq is not conclusive in Ref. [14] and negative in [15].) It is clear however that the similarity of the interaction in a tetron to that in an (anti)baryon, where a heavy antiquark is replaced by a compact color-antisymmetric pair of heavy quarks is not exact. One simple reason for a deviation is the spin-dependent interaction, which is suppressed for heavy quarks and which to some extent can be accounted for [15]. The other (and less tractable) reason is that the heavy quark pair has a finite size with the most important effect being a flip of the color state from antisymmetric to symmet-ric (with the corresponding change of the color of the light antiquark pair). The existence of these configurations was recognized in the previous studies [7,9,10] and was taken into account in a series of approximations. In what follows we treat the mixing of the color configurations explicitly within an expansion in the ratio of the distance between the heavy quarks to the characteristic distance to the light antiquarks. The point-like limit [8,14,15] is the first term in this expansion. It naturally appears that for sufficiently heavy quark pair with the (reduced) mass M , the characteristic size of the bound state is proportional to 1/M , while the distance scale for the light (massless) antiquarks in a tetron is set by Λ QCD , so that the ratio of the distance scales is proportional to Λ QCD /M . We will argue however that the effects of the deviation from the point-like approximation are enhanced in the limit of large number N c of colors, so that the relevant parameter for this deviation is in fact which at N c = 3 indicates that the point-like limit is not applicable if at least one of the heavy quarks is the charmed one. On the other hand, this limit may work with reasonably small corrections of order ξ for tetrons with the bb quark pair. Furthermore, it appears that a stable tetron does not exist if the parameter ξ is of order one or larger. To establish this behavior, we consider in Section 2 the limit where all the quarks and the antiquarks are asymptotically heavy, so that the relevant distances for bound states are short. One can then apply the Coulomb-like limit for the gluon exchange among all constituents, with a non-relativistic Hamiltonian describing the interplay of color configurations. The two scales are introduced in this model by considering the quarks Q as having mass M that is larger than the mass m of the antiquarksq. The ratio f = m/M is a variable parameter. 1 The bound state problem in this model is solved by a numerical variational calculation; on the other hand it is analyzed in terms of an expansion in the size of the heavy bound QQ pair. 
We find that an analog of the parameter (1) in this solvable model is On the other hand we find from the numerical calculation that a stable tetron in this system exists only when the ratio f is smaller than a certain critical value f c (N c ), where the coefficient a is of order one, numerically a ≈ 0.77. It is thus plausible that the condition for existence of a stable tetron is a small value of the expansion parameter [in this model ξ c in Eq. (2)] describing the deviation from the point-like model for the pair of heavy constituents. Unlike in the solvable model with Coulomb-like forces, interactions in a system containing light u, d, or s quarks cannot be described by a potential. However some features of a gluon exchange can be applied to such systems in the limit of large number of colors N c with the usual assumption [16] that, as N c increases, the coupling α s decreases, so that the product N c α s stays of order one. We discuss the parameters describing a tetron in this limit in Section 3. Finally, Section 4 contains general discussion and conclusions. A solvable model with superheavy quarks We consider a system of two heavy quarks Q with mass M and two lighter (but still heavy) antiquarksq with mass m each. For the start we assume no statistics symmetry constraints, e.g. assuming that the quarks are not identical, even though they have the same mass. The odd numbered positions r 1 and r 3 refer to the quarks, while the even ones r 2 and r 4 are those for the antiquarks. The gluon exchange potential between the color constituents at positions r i and r j is with T a (i) being the color generators acting on the constituent at r i , and d ij in the Coulomb limit is given by The condition for the system to be colorless can be satisfied with two configurations of the sub-systems described by the color combinations: where α and β are color indices in the fundamental representation of the color group SU (N c ). Clearly in the Ψ configuration the color singlets are (q (2) Q (1) ) and (q (4) Q (3) ) while in Φ they are (q (4) Q (1) ) and (q (2) Q (3) ). The sum of pairwise one-gluon exchanges among the four constituents results in the potential that can be written in terms of Ψ and Φ as where we have used the notation The potential matrix in Eq. (7) is not symmetric, because the color states Ψ and Φ in Eq. (6) are not orthogonal, Orthogonal (and normalized) states can be chosen as u = 1 and the one-gluon exchange potential (7) in the basis of these states reads as where The Hamiltonian with the potential (11) clearly has a Z 2 × Z 2 symmetry under switching of the positions of the quarks, r 1 ↔ r 3 , and (independently) switching the positions of the antiquarks, r 2 ↔ r 4 . The symmetry of the u and w components is opposite; e.g. if the w component is even under swapping of quarks then the u component has to be odd. This implies that the eigenstates of the Hamiltonian can be classified in terms of the symmetry of the w component: w ++ , w −− , w +− and w −+ . Furthermore, one can readily see that the states u and w contain the quark (antiquark) pair of a definite color symmetry: symmetric in u and antisymmetric in w. 2 In particular, at N c = 3 the u state contains a color sextet quark (anti-sextet antiquark) pair, while the state w contains the antitriplet quark (triplet antiquark) pair configuration. Thus it is the latter w component that is present in the phenomenological analyses of Refs. [14,15]. 
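As a cross-check on the signs involved, the one-gluon-exchange color factors ⟨T_a(i) · T_a(j)⟩ in the two channels follow from the quadratic Casimirs of SU(N_c). The short script below is an illustration added here, not part of the original calculation; it uses the textbook Casimir values of the fundamental and of the two-index symmetric and antisymmetric representations, and shows that the antisymmetric (w-type) quark pair is attractive while the symmetric (u-type) pair is repulsive for any N_c, with the quark-antiquark singlet factor growing like N_c/2.

from fractions import Fraction

def casimir_fundamental(N):
    return Fraction(N * N - 1, 2 * N)

def color_factor_qq(N, symmetric):
    # <T(i).T(j)> = [C2(pair) - 2 C2(fund)] / 2 for two quarks coupled to a
    # two-index representation of SU(N).
    if symmetric:
        c2_pair = Fraction((N - 1) * (N + 2), N)   # symmetric two-index rep
    else:
        c2_pair = Fraction((N - 2) * (N + 1), N)   # antisymmetric two-index rep
    return (c2_pair - 2 * casimir_fundamental(N)) / 2

def color_factor_qqbar_singlet(N):
    # quark-antiquark pair projected onto the color singlet
    return -casimir_fundamental(N)

for N in (3, 5, 10):
    print(f"N_c = {N}: qq antisym {color_factor_qq(N, False)}, "
          f"qq sym {color_factor_qq(N, True)}, "
          f"q-qbar singlet {color_factor_qqbar_singlet(N)}")

At N_c = 3 this gives -2/3 for the antitriplet pair, +1/3 for the sextet pair and -4/3 for the quark-antiquark singlet, which is the hierarchy underlying the short-distance limit discussed next.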
When the heavier quarks Q are close to each other, the term d 13 becomes dominant, (p + q) ≈ 2d 13 , and one recovers from Eq. (11) the attraction in the color antisymmetric state: This attraction binds the Q quarks into a compact Coulomb-like system with the size and energy becoming, at large N c , Clearly, at large M such distance scale is small in the scale R q of the dynamics of the lighter antiquarks in the considered system, and one can consider an expansion in the ratio r QQ /R q . In the zeroth order of this expansion, i.e. at vanishing r QQ , the off-diagonal terms in Eq. (11) vanish and there is no mixing between the w and u components, and thus one can set u = 0. Then the leading at large N c interaction for the lighter antiquarks is that with the heavier quarks. After setting r 3 = r 1 in the proportional to N c part of the diagonal term in Eq. (11) one finds the potential 2 The color and coordinate symmetry properties of the components certainly become essential for identical quarks with the constraint of the Fermi-Dirac statistics. describing an independent Coulomb-like interaction of the two lighter antiquarks with the compact QQ system. Naturally, the latter interaction corresponds to spectra of two independent Qq Coulomb-like quarkonia, with the distance and energy scale set as It is also clear that the ground state in both the potential (13) and (15) is spatially symmetric, so that the overall ground state of the tetron is of the type w ++ under the Z 2 × Z 2 symmetry. Due to the binding between the heavy quarks by the potential (13) the resulting fourquark system is stable under decay to two quarkonium mesons. It should be noted however that this binding is only sub leading in terms of the large N c counting, as can be seen by comparing the expressions (13) and (15). Thus the discussed 'hierarchy' of the binding energies is only applicable if the ratio f of the masses is small enough at a fixed N c . In other words there is a critical value of this ratio f c (N c ) above which the described approximation fails. In order to evaluate the behavior of f c (N c ) we consider here the effects arising at a finite ratio r QQ /R q . We find that the main effect arises due to non-vanishing off-diagonal elements in the potential (11): This term can be considered as small as long as the energy shift that it produces in the second order is small in comparison with either of the energy scales [in Eq. (14) or Eq. (16)]. One can readily verify that using the energy scale E QQ imposes a more stringent bound on f = m/M : so that the applicability of the discussed expansion fails at f > f c (N c ) with f c given by Eq. (3). In particular the absence of a stable bound state at larger mass ratio makes highly unlikely existence of a 'double bottomonium' occasionally discussed in the literature (see e.g. Ref. [17,18,19]). By performing a numerical variational calculation we find that the lowest bound state in the system is of the w ++ type and exists only when the ratio in Eq. (18) is small so that the mass ratio f is smaller than the critical value described by Eq. (3) (see Fig. 1). 3 The results for the values of f c at which the bound state disappears at different N c are shown in Fig. 2. We computed the data points in Fig. 2 with a generalization of the algorithm developed for the positronium molecule [20]. Both wave-function components u and w are represented as a sum of Gaussian trial functions of all six inter-particle distances. 
We use a basis of 200 trial functions for each of the components. Much larger bases can be employed if higher precision is warranted. A challenge in this calculation is a slow convergence very near the threshold. This explains the slight spread of the data points around the fitted curve in Fig. 2. Figure 1: Extra binding energy of a tetron (in units of the total binding for two independent Qq mesons) as a function of the antiquark/quark mass ratio f . The number of colors is N c = 3. The state with the symmetry w ++ (circles) is bound more strongly than w −− (triangles). Even the state w ++ is no longer bound when the mass ratio is higher than about f c 0.152. Tetron with superheavy quarks and massless antiquarks A potential description, and even more in terms of a Coulomb-like potential, is not applicable for the interaction of light u, d, s quarks, and other methods have to be invoked. In this section we consider a system of two very heavy quarks QQ with mass M each, and two massless quarksqq (which are not necessarily identical, e.g.ūd). Although literally the potential model of the previous section does not apply, some essential features of the interaction in Eq. (11) are retained, in particular a Coulomb-like potential treatment of the interaction between the heavy quarks. Namely, the one gluon exchange between the heavy quarks still produces a compact bound state in the potential (13) with the relevant parameters described by Eq. (14). This interaction, essential at large M , is however only sub-dominant in the large N c limit, in which limit the dominant effect (of order one) is the interaction between the light and heavy constituents. The heavy-light mesons Qq are formed and the estimate (16) Figure 2: Mass threshold f c for tetrons as a function of the number of colors. for the relevant characteristic size and energy scale is replaced by Moreover, the mixing between the w and u components, although not describable by a potential analog of the non-diagonal components in Eq. (11), retains the following features. It is of order one in the limit of large N c and it vanishes at zero spatial separation r QQ between the heavy quarks. Thus one can estimate the amplitude of the mixing in the linear order of the expansion in r QQ as The perturbation parameter for the mixing is then evaluated as which results in the estimate in Eq. (1). Final remarks The parameter ξ in Eq. (1), similarly to ξ c in (2), controls the applicability of the treatment of tetron starting from a compact bound diquark made of the heavy quarks. A perturbative expansion in the spatial separation is possible when this parameter is (formally) much less than one, and generally this expansion becomes invalid once the ξ is of order one. Our calculations in the solvable model with heavy quarks however revealed that not only that the expansion becomes inapplicable when ξ c is of order one, but no stable bound tetrons arise at all. We interpret this behavior as that the leading at large N c dipole force [the off diagonal terms in the potential (11)] results in a strong mixing between the w and u components. Such mixing essentially randomizes the total color of heavy diquark, so that a residual net interaction between the heavy constituents largely cancels between the color symmetric and antisymmetric configurations. 
We thus conclude that it is highly likely that in a more realistic tetron with light quarks the existence of a stable bound state is also controlled by the parameter ξ, and the stability does not exist if ξ is of order one or larger. It is certainly of a primary interest to understand the status of tetrons with the heavy constituents being the actual b and c quarks. Using the criterion based on the estimate in Eq. (1) one readily concludes that for the ccqq and bcqq systems, where the reduced mass M in the heavy diquark is determined by the charm quark mass, there is essentially no chance that the parameter ξ is small. Thus we confirm the finding of the earlier studies [7] that it is highly unlikely that there are stable tetrons with such quark structure. The parameter ξ from Eq. (1) is more likely to be small enough, if M is proportional to the mass of the b quark. Due to the inherent uncertainty in this estimate for a nonperturbative system it would be impossible to unambiguously claim existence of stable tetrons of such type, based solely on this estimate. However we believe that there is a strong indication that if stable tetrons do exist, the only possibility for them is to be of the double bottom type. At this point we find an agreement with the conclusions based on purely phenomenological estimates in Refs. [14,15]. It is certainly understood [15] that an experimental observation of double bottom tetrons can be quite challenging. However a search for them may be well worth the effort, as the tetrons possibly present a very unconventional form of hadrons that are stable with respect to strong decay. Clearly, the smallness of the parameter ξ, or its analog, requires existence of two strongly separated mass scales, whose ratio can ensure that the binding effect in the color antisymmetric state due to heavy masses is not eliminated by a larger in N c destabilizing mixing between the color states. We notice absence of such hierarchy of scales for four-quark systems with only the heavy b and c quarks, e.g. bbcc, so that we do not expect existence of stable tetrons of such type. The same negative conclusion applies to four-quark systems with hidden heavy flavors, such as a double bottomonium bbbb, or double charmonium, cccc, systems.
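The correlated-Gaussian variational method used for Figs. 1 and 2, with both components u and w expanded in Gaussian trial functions of the six inter-particle distances following the positronium-molecule algorithm of Ref. [20], reduces at each step to a generalized eigenvalue problem. The sketch below is only a two-body toy added for illustration, not the four-body code used in the paper: in units ħ = μ = e = 1 it recovers the hydrogen-like ground-state energy of -1/2 from a small Gaussian basis, using the standard analytic matrix elements for Gaussians.

import numpy as np
from scipy.linalg import eigh

# Basis phi_i(r) = exp(-a_i r^2) with a geometric ladder of exponents.
a = np.geomspace(1e-2, 1e2, 12)
ai, aj = a[:, None], a[None, :]
s = ai + aj

S = (np.pi / s) ** 1.5            # overlaps <i|j>
T = 3.0 * ai * aj / s * S         # kinetic energy <i| -grad^2 / 2 |j>
V = -2.0 * np.pi / s              # Coulomb attraction <i| -1/r |j>

energies = eigh(T + V, S, eigvals_only=True)
print(f"variational ground state: {energies[0]:.6f}  (exact: -0.5)")

The calculation of Section 2 differs in that the Gaussian exponents depend on all six pair separations and the diagonalization is carried out separately in each symmetry sector (w++, w--, and so on), but the underlying Rayleigh-Ritz step is the same.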
2017-08-21T16:28:03.000Z
2017-08-15T00:00:00.000
{ "year": 2018, "sha1": "483e43e39c534d27f8d2db70347c230ea7661187", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2018.01.034", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "483e43e39c534d27f8d2db70347c230ea7661187", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236615256
pes2o/s2orc
v3-fos-license
The Academy Is Well Positioned to Offer Pharmacy Technician Training Programs Schools and colleges of pharmacy are optimally positioned to train the entire pharmacy team, including pharmacists and pharmacy support personnel, because they can provide comprehensive workforce development, utilize established faculty expertise, harness existing infrastructure, afford opportunities for intraprofessional education, and support institutional growth and reputability. As the emphasis of training shifts towards team-based approaches and expanded responsibilities, ensuring the existing and future pharmacy workforce is equipped to serve their communities becomes increasingly important. Thus, schools and colleges of pharmacy should consider offering a pharmacy technician training program to meet the needs of their community and the profession. INTRODUCTION Just as the role of the pharmacist has continued to evolve, with a focus in recent years on pharmacists offering more clinical services, so too the role of the pharmacy technician is evolving. The technician's role has advanced over the past decade as employers seek to more efficiently support pharmacist-delivered patient care services. One example is an initiative that supports technicians completing order verification through a "tech-check-tech" model in community and hospital settings. [1][2][3] The COVID-19 pandemic has emphasized this need for an expanded scope of technician practice. In October 2020, the Department of Health and Human Services authorized qualified pharmacy technicians to administer COVID-19 vaccines. 4 With community pharmacies serving one of the most essential access points to health care, this authorization facilitated technicians playing an integral role in combatting COVID-19 in communities across the nation. This is a clear example where expanding the scope of the pharmacy technician's role could meet a current practice need and improve the efficient delivery of patient care. DISCUSSION Like most entry-level professions, however, there are major challenges faced by the technician workforce, such as a patchwork of practice entry requirements across states; reports of low job satisfaction, particularly in high stress environments; and high levels of turnover. [5][6][7] In recent years, numerous efforts have been made at the national and state level to develop the technician workforce. For example, many states now require technicians to be certified, or if not, allow those technicians who are certified to take on more responsibility within the pharmacy. 5,8 Additionally, the Pharmacy Technician Certification Board offers specialty assessment-based certifications and has announced an advanced pharmacy technician credential to support the expanding role of technicians. 9 Researchers have found that pharmacy technicians that are certified report having a stronger commitment to the pharmacy profession, and employers generally agree that certified technicians offer more value to the organization. 10,11 Of the 417,780 employed pharmacy technicians in the United States in 2019, 288,866 were certified. 12 These numbers are predicted to increase over the next decade as the job market for pharmacy technicians continues to grow. 13 Considering the gap in the number of pharmacy technicians that are certified, coupled with anticipated career growth, there is an opportunity for schools and colleges of pharmacy to train the next generation of pharmacy technicians for their expanded role on the pharmacy team. 
Surprisingly, though, few colleges of pharmacy are involved in formalized technician training programs. [14][15][16] Perhaps the largest barrier for institutions to adopt pharmacy technician training programs has been concerns related to cost and return on investment. However, as evidenced by the current COVID-19 pandemic, innovative education models, such as online or hybrid delivery methodologies, can meet student expectations and allow schools to recruit from a national pool of learners. In addition, national stakeholders have engaged in robust debate about these wide variations in pharmacy technician training, which led to consensus recommendations to advance the standardization for technician training. 17 With growing expectations from the public, regulators, and employers for pharmacy schools to take on this challenge, opportunities exist for institutions to invest in training programs for pharmacy technicians, which will ultimately elevate the entire pharmacy team. Currently, technician training programs are offered by a diverse group of organizations: community colleges, pharmacy chains, health systems, the United States military, and select colleges and universities. 18 On average, these programs have a class size of 26 students (range, 3 to 110 students), last 32 weeks (range, 18 hours to 24 months), and have a median tuition of $4,800, with select programs leading to an associate's degree. 14 In October 2020, The College of Pharmacy at the University of Tennessee Health Science Center launched a pharmacy technician training program, and has enrolled 29 learners in three cohorts to date. This program is completely online; is offered as a cohort model where all the learners progress through the 12-week program together, with four planned offerings per year; is selfpaced; provides dedicated faculty support; supports a flexible and affordable pricing model; and takes approximately 12 weeks to complete. Now that our institution has gained this year of experience in pharmacy technician education, we challenge more members of the Academy to invest in pharmacy technician training. Schools and colleges of pharmacy are optimally positioned to train the entire pharmacy team, including pharmacy technicians, for five reasons: they are able to provide comprehensive workforce development, utilize established faculty expertise, harness existing infrastructure, develop opportunities for intraprofessional education, and support institutional growth and reputability. First, a benefit of a pharmacy school having a technician training program is that it will allow them to provide comprehensive workforce development in that they can train every member of the pharmacy team. These training programs can equip technicians to meet the expectations that come with their expanded roles within the health care system that allow pharmacists to provide additional services that advance patient care. 2 This aligns with our college's mission to "educate, inspire, discover, and serve to advance health." 19 Comprehensive development will require training the entire team, and having a technician training program at a college of pharmacy will allow the institution to do just that. As program directors, our goal is to implement a pharmacy technician training program that optimally positions learners to attain appropriate certification. 
Even though certification is not the goal of every learner that participates, it is the goal for many, and pharmacists even report that certification should be required for technicians to take on advanced roles. 10 Also, since professional development should continue throughout the technician's career, we housed the program in our Office of Continuing Professional Development so we can continue to support learners in developing, maintaining, and expanding their skills through a lifelong learning approach. As previously noted, there are various training programs available across the United States for pharmacy technicians that vary widely in hours, cost, duration, and more. 14 Also, technicians commonly receive on-the-job training and may or may not be required to pass a licensing examination in their state to practice. The lack of standardization is quite opposite of that for the Doctor of Pharmacy degree, which requires students to complete several years of rigorous professional education and pass licensing examinations prior to starting practice. Thus, one challenge has been demonstrating the value of our program, even though it is not a degree-granting program, through our marketing and promotional materials. Our program directors often meet with learners prior to starting the program to help ensure the program aligns with their goals. Ultimately, to elevate the pharmacy profession and support the expanded roles of every team member, additional investment in the entire workforce will be necessary. This can be achieved through an educational shift that standardizes minimum training requirements, which is arguably more critical now than ever before, to meet the current need in the workforce for technicians to take on an expanded role. Second, establishing a pharmacy technician training program in a school of pharmacy allows the program to benefit from the existing faculty expertise, whether to develop the program, identify content that already exists to license for use in the program, or support learner success. Having dedicated faculty that understand the demands placed on every member of the pharmacy team is key to positioning the learner to be successful in their new role. Also, pharmacy faculty can ensure learners progress through the didactic material appropriately develop necessary study habits, and ultimately find employment (eg, helping with effective job searches, resume review, interview preparation, and identifying potential employers. These efforts by faculty will place learners in the best possible position to obtain the job they desire after completing the program. Concerns about faculty workload may deter some pharmacy schools from launching such a program. However, garnering support from full time faculty (affiliate or volunteer), staff, and current pharmacy students can maximize efficiencies with effort and time committments required by the program. For example, we partnered with an educational company to develop baseline content for the program, freeing faculty to focus on other aspects of the program such as recruitment, examination preparation, and learner progression. Another approach would be to involve students, residents, or fellows in supporting faculty by serving as instructors or teaching assistants for courses within the pharmacy technician program in which they possess expertise. Third, schools of pharmacy possess the infrastructure to adequately prepare pharmacy students to train pharmacy technicians. 
Pharmacy schools and/or continuing pharmacy education providers are keenly capable of identifying the needs of the pharmacy profession and developing a plan to address them. That is because they are required to engage in continuous quality improvement through gap analyses, assessments, and evaluations of their programsd to maintain accreditation from Accreditation Council for Pharmacy Education (ACPE). For example, research has shown that pharmacists report that pharmacy technicians need more training in "soft skills," such as communication. 20 A technician training program may opt to offer a specific course in communication or even a course with an experiential component by utilizing existing preceptor relationships that support experiential components of the curriculum for pharmacy students at the college. Fourth, a pharmacy technician training program affords opportunities to advance intraprofessional education among the entire pharmacy team. While student pharmacists are expected to mentor and train pharmacy technicians upon graduation in a variety of practice settings, there is little mention of intraprofessional training in the ACPE Standards 2016. 21 Even though these standards only provide guidance to develop a Doctor of Pharmacy curriculum, intraprofessional education is arguably a key skill student pharmacists must develop to manage their future team. Students do interact with technicians to varying degrees during their education, such as on introductory and advanced practice experiences and internships, but how student and technician perceptions and performance on the team is evaluated remains undocumented in the literature. However, research has demonstrated that poor pharmacy management skills are linked to pharmacy technician turnover. 22 Given the focus on team-based learning and critical importance of developing management skills, offering a technician training program is a ripe opportunity for intraprofessional learning activities. Fifth, a training program supports institutional growth and reputability. As pharmacy schools across the United States face a decrease in pharmacy student applicants, a technician training program not only meets an existing need in the workforce, but also adds a new source of revenue for the college. 22 Offering this program can be part of the solution in times of constructing budgets. Also, even though a technician training program does not place pharmacists in jobs or help them pass licensing examinations, the program could serve as a pipeline for the professional PharmD program, as several learners have expressed interest in pursuing a Doctor of Pharmacy degree in the future. There is also the added benefit of elevating an institution's reputation by bolstering community rapport with employers, recent graduates, and local and state government programs. This allows future employers to focus on organization-specific onboarding efforts. The availability of well-trained pharmacy technicians also allows pharmacists to expand their practice and provide additional patient care services. Opportunities abound to create synergy between a college of pharmacy and a technician training program. The launch of our program has not been without challenges, such as software platform issues and learner attrition. However, through communication and relationship building with both our partners and future learners, these challenges can be overcome. 
Future directions may include offering an experiential component to the pharmacy technician program by engaging existing preceptors, or incorporating vaccination training as a standard component of the curriculum. It will also be important to integrate pharmacy student and pharmacy technician training as much as possible. As the technician's role is expanding at an unprecedented rate, their training must keep pace to position them and the entire pharmacy team for success. CONCLUSION We hope that presenting these five reasons for pharmacy schools to establish a pharmacy technician training program will spur discussion within the Academy. As the focus shifts towards team-based approaches and expanded responsibilities for all team members, ensuring that the existing and future pharmacy workforce is equipped to serve their communities becomes increasingly important. Thus, colleges of pharmacy should consider offering a pharmacy technician training program to meet the needs of their community and the profession.
2021-08-02T00:06:12.096Z
2021-05-06T00:00:00.000
{ "year": 2022, "sha1": "9c8510032ed58556e5306556150dc34ae87f85c8", "oa_license": null, "oa_url": "https://www.ajpe.org/content/ajpe/early/2021/05/04/ajpe8554.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "96085212fc0037b0eea0ac1a2e60abdcb650f37d", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine", "Business" ] }
232046311
pes2o/s2orc
v3-fos-license
High-throughput nanoindentation mapping of cast IN718 nickel-based superalloys: influence of the Nb concentration A high-throughput correlative study of the local mechanical properties, chemical composition and crystallographic orientation has been carried out in selected areas of cast Inconel 718 specimens subjected to three different tempers. The specimens showed a strong Nb segregation at the scale of the dendrite arms, with local Nb contents that varied between 2 wt.% in the core of the dendrite arms to 8 wt.% in the interdendritic regions and 25 wt.% within the second phase particles (MC carbides, Laves phases and {\delta} phase needles). The nanohardness was found to correlate strongly with the local Nb content and the temper condition. On the contrary, the indentation elastic moduli were not influenced by the local chemical composition or temper condition, but directly correlated with the crystallographic grain orientation, due to the high elastic anisotropy of nickel alloys. Introduction Cast and forged In718 polycrystalline Ni-base superalloys present high temperature strength and fatigue resistance even in oxidizing and corrosive environments [1]. Such performance makes this material suitable for use as turbine discs and some static components of aero-engines working at temperatures up to 600-700 ºC [2]. The formulation of this alloy includes various transition metals which lead to strengthening by solid solution, precipitation via several second phase particles, γ', γ'', δ (Ni3Nb), MC carbides and Laves phases, as well as grain size refinement and a high twin boundaries density [3]. In casting processes, chemical segregation occurs during the solidification stage leading to local variations in the mechanical properties of the material at the microstructural scale that have not been studied before. The recent advances in high-speed nanoindentation mapping methods, such as XPM (accelerated property mapping) [4], enable evaluating local variations in the mechanical properties of large areas with lateral resolutions of a few micrometres. Such resolution values are similar to those typically selected in electron microscopy techniques for determining the local composition and the crystallographic orientation, such as EDS (electron dispersive X-ray spectroscopy) and EBSD (electron back scatter diffraction), respectively. Hardness is usually determined as the ratio between the maximum indentation load and the projected area of the residual imprint [5]. Nevertheless, when the indentation depth ranges from nanometres to a few micrometres such task becomes tedious, especially in the case of high-speed nanoindentation maps where the aim is to measure thousands of indentations. This issue can be addressed by depth-sensing instrumented indentation, where the indentation load (P) and the penetration depth (d) of the indenter are continuously recorded. Provided that the geometry of the indenter is known, the hardness and elastic modulus can be directly inferred from the indentation load-penetration curves using the Oliver and Pharr method [6]. In the case of high-speed nanoindentation maps is, however, important to keep track 3 of the indenter geometry during the process, because this can be altered by tip wear over the thousands of indentations involved in the maps. In this study, we correlate the XPM maps with the crystallographic orientation obtained by EBSD and the compositional maps obtained by EDS of selected areas of cast Inconel 718 specimens subjected to three different tempers. 
The EDS results showed acute Nb segregation across the dendritic arms and in the interdendritic regions that lead to strong nanohardness gradients in the XPM maps. Nevertheless, the XPM maps show relatively constant elastic moduli inside the grains, not influenced by the local chemical composition but a direct dependence of the grain orientation, as expected for materials with high elastic anisotropy like nickel. Determining the dependence of the local mechanical properties with chemical segregation is key for tailoring the properties of the resulting material during casting or other solidification processes, such us welding, repairing or 3D printing, for optimizing processing parameters and for developing microstructure based models. 2.1.-Microstructure The material used in the present study was a cast IN718 polycrystalline Ni-base superalloy, with the average composition shown in Table 1. iii. Overaging (O): after peak aging precipitation, 800 ºC during 36 h followed by furnace cooling. Optical micrographs of the cast Inconel 718 alloy are presented in Fig. 1a and b. The microstructure was characterized by large irregular grains in the millimetre size range. As typically occurs during solidification, each grain develops by dendritic growth, leaving behind interdendritic regions decorated by coarse second phase particles, mainly δ (Ni3Nb), MC carbides and Laves phases, as shown in Fig 1c. The volume fractions of MC carbides, Laves phases and δ needles were 1.3%, 0.6% and 4.4%, respectively. The secondary dendrite arm spacing was 157 µm ± 23 µm. slower than other elements [7,8], they tended to be localized in the dendrites core region. In the case of Mo and Al, their content was lower than the other elements and were homogeneously distributed along the different microstructure features. On the contrary, Ti segregated strongly to the interdendritic areas, taking part on the formation of the second phases particles present in these regions, mainly MC carbides. Finally, Nb segregated towards the dendrites outer region, into the interdendritic area and the second-phase particles. As a result, and even though the average Nb content was 5.1 wt.%, the Nb content of the dendrite core was as low as 2 wt.%, while the Nb levels in the interdendritic regions, away from the second-phase particles, reached 8 wt.%. The Laves phases and δ needles appeared as islands containing about 25 wt.%Nb. No significant differences in terms of chemical segregation were found with temper condition, which indicates that the temperatures and times were not high or long enough, respectively, to homogenise the chemical composition. 7 Finally, the precipitates in the peak aged condition were characterized by TEM. The alloy matrix comprises a FCC phase consisting on a Ni based solid solution. The matrix is strengthened by an intermetallic FCC ' phase with composition Ni3(Ti ,Al) and a BCT '' phase with composition Ni3Nb formed during ageing [9]. Fig.4 shows TEM images of the specimen in the peak aged condition at different magnifications. The '' precipitates exhibit an elongated disc shape, whereas the ' precipitates are almost spheroidal. Both had an average size of 20 nm. The presence of complex precipitates comprised of two half-spheroidal ' particles sandwiching a '' disc-shape precipitate could also be observed, as shown in Fig. 4b. 2.2.-Mechanical properties The hardness (H) and reduced elastic modulus (Er) maps for the solubilised, peak-aged and overaged precipitation state are shown in Fig. 
5 3.-Discussion The correlative studied carried out showed that, even though cast IN718 typically shows grain sizes in the order of millimetres, is a very heterogeneous material at the microscale. The dendritic growth during solidification leads to a large heterogeneity at the scale of the SDAS (around 150 μm in this case). As a result, strong chemical segregations occur within each grain at the dendrite scale, with dendrite cores that are richer in Fe and Cr and depleted in Nb, while the interdendritic regions are much richer in Nb and in hard second phases, such as the TiC particles (Fig. 1). The chemical segregation through the dendrite radius occurs due to differences in the diffusion rates of each constitutive element. While the slower elements, Cr and Fe, remain in the dendrite cores ( Fig. 3d-f and g-I, respectively), Ti and Nb tend to segregate towards the outer part of the dendrites ( Fig. 3p-r and s-u, respectively), and specially into the interdendritic areas. In the particular case of Nb, the presence of this element is crucial for the formation of the metastable ''-Ni3Nb precipitates responsible for the strengthening of Inconel 718. These precipitates are disc-shaped (Fig. 4) and play a key role as strengthening agents due to their higher volume fraction with respect to ' precipitates, as well as their coherency with the Ni matrix and the lattice distortion caused by the c-axis of the D022 body centred tetragonal '' structure [10,11]. Therefore, the Nb content is expected to play a crucial role on the local mechanical properties of this alloy. Fig. 6a represents the histogram of the Nb distribution for the three thermal treatments obtained from the corresponding EDS maps, excluding the second-phase particles (Fig. 3s-u). The distribution was similar for the three specimens, with a minimum and a maximum content of 2 and 8 wt.%, respectively, which indicates that Nb diffusion does not take place significantly during the temper treatments. On the contrary, the heat treatments are expected to affect the dissolution, precipitation and coarsening of the '' precipitates (Fig. 4), that are expected to be heterogeneously distributed at the dendrite scale, as a function of the local Nb content. In fact, the hardness maps (Fig. 5a, b and c) temper condition, as shown in Fig. 7. This information represents a very valuable local tool to assess the quality of welds, repairs or 3D printed components, as the local hardness maps can be directly correlated to the local microstructure, to identify areas with a strong Nb segregation or with local tempers, as a result of heat affected zones. Fig. 7. Hardness versus Nb concentration for the solubilized, peak-aged and over-aged temper conditions. Finally, it is interesting to notice that the reduced elastic modulus maps (Fig. 5d, e and f) do not show a correlation with the local chemical segregation or the temper condition. Instead, the elastic modulus maps present a direct correlation with the crystallographic orientation of the grains (Fig. 2a, b and c). This is not surprising considering the large elastic anisotropy of IN178 [12,13] and the fact that, contrary to hardness, elastic properties are relatively insensitive to microstructural features, such as precipitation stage or small variations in chemical composition. Fig. 8 plots the expected elastic modulus variation with crystallographic orientation [14], calculated using the singlecrystal elastic constants of IN718 (c11=259; c12=179; c44=109.6, in GPa) [12]. 
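For reference, the curve in Fig. 8 follows from the standard orientation dependence of the Young's modulus in a cubic crystal, 1/E_hkl = S11 - 2(S11 - S12 - S44/2)(l1^2 l2^2 + l2^2 l3^2 + l3^2 l1^2), with the compliances obtained from c11, c12 and c44. The short script below is a cross-check added here, not part of the original analysis; it reproduces the two extreme values quoted in the text for the <001> and <111> directions from the constants listed above.

import numpy as np

c11, c12, c44 = 259.0, 179.0, 109.6      # single-crystal stiffnesses of IN718, GPa

# Cubic compliances from the stiffnesses.
den = (c11 - c12) * (c11 + 2.0 * c12)
s11 = (c11 + c12) / den
s12 = -c12 / den
s44 = 1.0 / c44

def youngs_modulus(direction):
    # Directional Young's modulus (GPa) of a cubic crystal along [hkl].
    l = np.asarray(direction, dtype=float)
    l = l / np.linalg.norm(l)
    j = l[0]**2 * l[1]**2 + l[1]**2 * l[2]**2 + l[2]**2 * l[0]**2
    return 1.0 / (s11 - 2.0 * (s11 - s12 - 0.5 * s44) * j)

for hkl in ([0, 0, 1], [1, 1, 1]):
    print(hkl, f"E = {youngs_modulus(hkl):.0f} GPa")   # about 113 and 279 GPa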
The elastic modulus is expected to be much higher in the <111> direction, 279 GPa, than in the <101> and <001> directions, 104 and 113 GPa, respectively. Even though nanoindentation imposes a complex stress state under the indent, it is interesting to notice that the relative values of indentation elastic modulus correlate well with the normal crystallographic orientation of the grains. For instance, the light blue grain in the peak aged condition in Fig. 2b, with the surface normal between [101] and [111], presents the highest elastic modulus, of around 210 GPa (Fig. 5e and 6c), while the dark orange grain in the overaged condition (Fig. 2c), with its surface normal close to [001], presents the lowest elastic modulus, of around 165 MPa ( Fig. 5f and Fig.6c). 4.-Conclusions A high-throughput correlative study of local mechanical properties, chemical composition and crystallographic orientation has been carried out in selected areas of cast Inconel 718 specimens subjected to three different tempers. The specimens showed a strong Nb segregation at the scale of the dendrite arms, with local Nb contents that varied between 2 wt.% in the core of the dendrite arms to 8 wt.% in the interdendritic regions and 25 wt.% within the second phase particles (MC carbides, Laves phases and δ phase needles). The nanohardness was found to correlate strongly with the local Nb content and the temper condition in each case. On the contrary, the indentation elastic moduli was not influenced by the local chemical composition or temper condition, but directly correlated with the grain orientation, due to the with high elastic anisotropy of nickel. Determining the correlation between the local mechanical properties, the chemical composition and the temper state can be a very valuable tool to assess the quality of cast components and other solidification processes, such as welds, repairs or 3D printed components. This way the local hardness maps can be directly correlated to the local 13 microstructure, to identify areas with a strong Nb segregation or with local tempers, as a result of heat affected zones. 5.1.-Sample preparation Each sample was grinded using decreasing grit papers and then mechanically polished up to OP-S (0.025 μm). EBSD characterization required for an additional etching of the sample surface to resolve the Kikuchi´s patterns using Grundy´s reagent (52.6 cm 3 HCl, 36.9 cm 3 H2O, 10.5 cm 3 HNO3, 2.6 g CuCl2, 2.6 g FeCl3) during 60 s at room temperature. 5.2.-Microstructural characterization Compositional characterization was carried out in the areas of interest by EDS using an (1) where Lmax is the maximum load reached during the test and Amax is the contact area determined from the Oliver and Pharr method [6]. The reduced elastic modulus was obtained from the contact stiffness, i.e. the slope at the beginning of the unloading curve (S=dP/dh), using Eq. 2. where β is a dimensionless correction factor which accounts for the deviation in stiffness due to the lack of axisymmetry of the indenter tip with β =1.02-1.19 for a triangular Berkovich punch [16]. The sample´s elastic modulus, Es, can then be obtained using Eq. 3. where E and are the elastic modulus and the Poisson´s ratio, respectively, of the sample (s) and of the indenter (i). In order to ensure that the geometry of the indenter was not altered after such a large number of indents, the area function of the indenter was assessed before and after each indentation session. 
The diamond area function was determined by progressive loading-unloading cycles in a material with well-known mechanical properties, such as fused silica. No significant differences were found during the entire experimental campaign.
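For reference, the relations referred to in the text as Eqs. (1)-(3) are, in their standard Oliver and Pharr form, H = Lmax/Amax, Er = (sqrt(pi)/(2 beta)) * S/sqrt(Amax), and 1/Er = (1 - nu_s^2)/Es + (1 - nu_i^2)/Ei. The minimal sketch below shows this conversion for a single indent; it is an illustration added here, with made-up load, area and stiffness values rather than data from the maps, and with the commonly used diamond indenter constants.

import math

E_DIAMOND, NU_DIAMOND = 1141e9, 0.07      # commonly used indenter properties (Pa)

def hardness(L_max, A_max):
    # Eq. (1): hardness from peak load (N) and projected contact area (m^2).
    return L_max / A_max

def reduced_modulus(S, A_max, beta=1.05):
    # Eq. (2): reduced modulus (Pa) from the unloading stiffness S = dP/dh (N/m);
    # beta chosen inside the 1.02-1.19 range quoted for a Berkovich tip.
    return math.sqrt(math.pi) * S / (2.0 * beta * math.sqrt(A_max))

def sample_modulus(E_r, nu_s=0.3, E_i=E_DIAMOND, nu_i=NU_DIAMOND):
    # Eq. (3): strip the indenter compliance to obtain the sample modulus (Pa).
    return (1.0 - nu_s**2) / (1.0 / E_r - (1.0 - nu_i**2) / E_i)

# Made-up illustration values for one indent (not measured data):
L_max = 10e-3       # 10 mN peak load
A_max = 1.5e-12     # 1.5 um^2 projected contact area
S = 2.8e5           # 0.28 mN/nm unloading stiffness

E_r = reduced_modulus(S, A_max)
print(f"H = {hardness(L_max, A_max) / 1e9:.2f} GPa")
print(f"E_r = {E_r / 1e9:.0f} GPa, E_s = {sample_modulus(E_r) / 1e9:.0f} GPa")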
2021-02-26T02:15:30.142Z
2021-02-25T00:00:00.000
{ "year": 2021, "sha1": "06425e151728d151ac191fa4f44fcee0dc886206", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2102.12785", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "06425e151728d151ac191fa4f44fcee0dc886206", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
265156398
pes2o/s2orc
v3-fos-license
Animal Bite Injuries to the Face: A Retrospective Evaluation of 111 Cases The treatment of bite wounds to the face is discussed controversially in relation to surgery and antibiotics. The aim of this study is a retrospective evaluation of 111 cases of animal bite injuries to the face that presented to our unit of oral and maxillofacial surgery over a 13-year period. Children under 10 years of age were predominantly involved. A total of 94.5% of the assessed injuries were caused by dogs. Wound infections occurred in 8.1%. Lackmann type II was the most common type of injury (36.9%). The perioral area was affected most frequently (40.5%). Primary wound closure was carried out in 74.8% of the cases. In 91.9% of the cases, antibiotic prophylaxis was prescribed. The most often administered type of antibiotic was amoxicillin with clavulanic acid (62.1%). Patients without antibiotics showed an increased infection rate, although the difference was not statistically significant. Wound infections occurred significantly more frequently in wounds to the cheeks (p = 0.003) and when local flap reconstruction was necessary (p = 0.048). Compared to the other surgical treatment options, primary closure showed the lowest infection rates (4.8%, p = 0.029). We recommend antibiotic prophylaxis using amoxicillin with clavulanic acid and wound drains for wounds of Lackmann class II or higher. Primary closure seems to be the treatment of choice whenever possible.

Introduction

Bite-related injuries are a frequent cause of presentations in emergency departments. About 30,000-50,000 bite wounds are reported in Germany per year, mainly caused by dogs and cats [1]. An estimated 8,500 bite wounds per year are located on the face [2]. From 50 to 75% of these accidents happen to children [3]. The most common complication of dog bites is an infection secondary to wound contamination by both gram-positive and gram-negative microorganisms in the saliva [4].

The management of bite wounds is discussed controversially, and the risk of infection and the recommendations for anti-infective treatment are vague, ranging from cleansing the wound, through antibiotic prophylaxis, to antibiotic treatment in the case of wound infection [5]. Some studies do not deem routine antibiotic therapy in facial bite wounds necessary [6]. Others report the use of broad-spectrum antibiotics in all cases of dog bites [7]. Concerning surgical treatment, previous studies suggest, for example, that primary closure of facial dog bites in children can be achieved with a low infection rate and an excellent cosmetic outcome [8]. Unfortunately, major reconstructive surgery is required in rare cases of animal attacks as a result of high masticatory forces and large breeds [9]. To improve the management of facial animal bite wounds, we retrospectively analysed the epidemiological aspects as well as the surgical and antibiotic treatment of the patient population presenting to our department of oral and maxillofacial surgery with bite wounds to the face.
Patients and Methods

From 2009 to 2022, 111 patients presented with bite injuries to the face in our Department of Oral and Maxillofacial Surgery at Regensburg University Hospital. Epidemiologic data were collected from patients' medical records. Lackmann's classification was used to define wound severity [10] (Table 1). The main wound locations were categorized as nasal, periorbital, perioral, cheek, auricular or frontotemporal area. Intraoral perforation, tissue defect and lacrimal duct involvement were assessed. Surgical treatment was categorised into solitary wound cleansing, primary wound closure, local flap reconstruction, reconstruction with skin and cartilage grafts, and microsurgical replantation and free flap reconstruction. Criteria for wound infection were fever (over 38 °C), lymphangitis, abscess or at least four of five minor criteria: erythema, tenderness, swelling at the wound site, purulent secretion and leukocytosis of more than 12 × 10⁹/L [5,11]. Antibiotic therapy was assessed regarding the type of antibiotic and the treatment onset. A total of 100 patients were treated within 6 h of the trauma; 11 were treated after 6 h or more, but not later than the day after the trauma. If required, scar correction was performed at least 6 months after the primary surgical treatment. Comorbidities that affected the results could not be identified. Data were analysed using SPSS 26.0 (IBM Corp., Armonk, NY, USA). Significant differences were identified in cross-tabulations using Pearson's chi-square test, correlations and the Mann-Whitney U test. A p-value of less than 0.05 was considered statistically significant.

Table 1 (excerpt). Lackmann type IVb: the above, and bone involvement.

Epidemiology

Table 2 shows the base data of the entire cohort. A total of 59 female and 52 male patients underwent treatment for animal bite injuries to the face from 2009 to 2022 in our unit. The mean age was 30.30 ± 21.50 years (range 1-76 years). A total of 28 patients were 10 years old or less (25.2%) and represented the dominant age band (Figure 1). In 105 cases (94.5%), bites were caused by dogs, in 5 cases by horses (4.5%) and in one case by a fox (0.9%). Only female patients sustained horse bites (p = 0.032). A total of 64% of the involved animals (n = 71) were familiar to the victims. In 18.0% of the involved dogs, a complete vaccination status including rabies was documented (n = 20), and in 82.0% of cases, the animal's vaccination status was unknown (n = 91).
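As an illustration of the case definition and the statistical testing described above, the following is a minimal sketch in Python. The function and variable names are our own, the 2×2 table uses the group sizes reported later in the paper (7 of 102 infections with prophylaxis vs. 2 of 9 without), and the resulting p-value will not exactly reproduce the published figures because continuity-correction details may differ.

```python
from scipy.stats import chi2_contingency

MINOR_CRITERIA = ("erythema", "tenderness", "swelling_at_wound_site",
                  "purulent_secretion", "leukocytosis_over_12e9_per_L")

def wound_infected(fever_over_38c, lymphangitis, abscess, minor_findings):
    """Case definition above: any major criterion, or at least 4 of 5 minor criteria."""
    if fever_over_38c or lymphangitis or abscess:
        return True
    return sum(1 for c in MINOR_CRITERIA if c in minor_findings) >= 4

# Cross-tabulation (rows: prophylaxis yes/no; columns: infection yes/no).
table = [[7, 95],
         [2, 7]]
chi2, p, dof, _ = chi2_contingency(table)  # Pearson chi-square on the 2x2 table
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```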
Wound Infections

A total of 7 of the antibiotically treated patients developed a wound infection (6.8%). In the patient group without antibiotic prophylaxis, 2 of 9 patients showed signs of infection (22.2%). This difference was not statistically significant (p = 0.818). A higher infection rate in patients with delayed surgical treatment could not be demonstrated either (p = 0.90). Scar correction surgery was required significantly more often in patients with wound infection after the initial treatment (p = 0.002). Compared to the other surgical treatment options, direct wound closure led to significantly less need for scar correction (2.4%; p = 0.001). In 14 cases, a perforation into the oral cavity was described (12.6%). One of these perforating wounds led to an infection (7.1%). There was no significant correlation between oral perforation and wound infection (p = 0.887). In 4 of the 37 defect wounds, signs of infection were recognized (10.8%), without reaching statistical significance (p = 0.461). In 3 of the 21 drained wounds, signs of infection were detected (14.3%) (p = 0.249). Regarding surgical treatment, local flap reconstruction led to the highest percentage of infections (2 of 5 cases, 40%). The distribution of infections across the different surgical treatment options was significant (p = 0.048). Comparing primary closure to all other treatment options, a significantly lower rate of infection was found in the primarily closed wounds (p = 0.029) (Table 6). Concerning the Lackmann stage, the highest infection rate was found in stage II wounds (11.9%), followed by stage III (8.1%). A significant association between the Lackmann stage and wound infection could not be ascertained (p = 0.750).

Discussion

Up to 30,000-50,000 injuries per year are associated with animal bites in Germany. A total of 60-80% of these injuries result from dog bites [1]. Approximately one in twenty dogs will bite a human being during his or her lifetime [4]. Following the upper and lower extremities, the face is the most common area for bite injuries [12,13]. Especially in children, the face is reported to be the most common location of bite wounds [14]. Facial injury complications following animal bites include soft tissue infections and prominent scars [15]. In our own department, 111 patients with animal bite wounds to the face were documented over the 13-year period. A total of 94.5% of the bite wounds were caused by dogs and 4.5% by horses. Interestingly, cat bites were not reported. They seem to be located more commonly on the hands, followed by the upper and lower extremities [16].

Regarding the patients' age pattern, the dominant group were children between 0 and 10 years (25.2%). Other authors also report children to be at the highest risk of falling victim to dog bites [4,5,17]. This is likely caused by the unintentionally threatening and provoking behaviour of children towards dogs [3]. Another reason is probably the smaller size of children and their faces being within reach of medium- and large-sized dogs [18]. In this context, children are reported to be two times more likely to suffer a periorbital injury from dog attacks when compared to adults [19].
The most common site was the perioral region (40.5%), followed by the nose (22.5%) and the ear (17.1%). This is basically consistent with other studies [5,20,21] and may be explained by their exposed location. Horse bites mainly affected the periauricular and frontotemporal area, whereas injuries to the perioral and nasal tissues in particular were exclusively caused by dogs (p < 0.001). This might result from the different directions of the attacks. Dogs commonly attack from the bottom up, so the victim's perioral tissue is more easily reached by them. Horse bites were exclusively noticed in female patients (p = 0.032) as, in our region, the most common contact with horses originates from horseback riding.

An overall infection rate of 8.1% was detected. This is consistent with previous studies [22,23]. The significantly highest percentage of infections was registered in wounds affecting the cheeks compared to all other facial soft tissues (36.4%; p = 0.006). Guo et al. also identified soft tissue injuries to the cheek as being more at risk of infection compared to injuries to other facial areas, independent of the cause of the accident [24]. Stanbouly et al. identified the cheeks as the most frequent site to develop open wounds caused by dog bites and found that open wounds are more likely to develop an infection following dog bites [18]. We suppose the complex multi-layer anatomy of the cheek to be responsible for that.

With regard to the different ways of surgical treatment, a slight increase in infections in patients undergoing local flap reconstruction could be detected (p = 0.048). Local flap reconstruction was exclusively required in stage II and III wounds. The larger defect size and the involvement of deep tissues may be responsible for the higher infection rate in these cases. To prevent infections, we recommend installing a drain in cases of local flap reconstruction. Primary closure seems to cause the lowest infection rate (4.8%, p = 0.029). Regarding aesthetic outcomes, scar correction was required significantly less often after direct closure compared to the other surgical treatment options (p = 0.001). In line with other authors, we recommend prompt primary wound closure after careful cleansing and disinfection whenever possible [23]. Detailed information about possible complications such as wound infection and hypertrophic scarring has to be provided preoperatively to the patients to avoid future complaints [25]. Another interesting treatment option is described by Lisong et al., who recommend the application of medical glue after negative pressure sealing and drainage to treat children's maxillofacial dog bites. The use of medical glue is time-saving, leads to smooth scars and high satisfaction, especially in children and their families, and should be integrated into clinical routine in the case of animal bite injuries to the face [22].

Although previous authors have reported higher infection rates in intraoral/extraoral communicating wounds because of the additional exposure to the victim's own oral flora [9,24], oral perforation was not a promoting factor for infection in the present study. Nevertheless, in these cases, we advise proper wound cleansing from both the extraoral and intraoral sides and watertight closure of the intraoral aspect of the wound to prevent additional contamination by the patient's own salivary flora.

Regarding the need for antibiotic treatment, Kesting et al.
recommend antibiotic prophylaxis for all wounds of Lackmann class II or higher, in cat and horse bites, in children, in patients with immunodeficiency and in wounds older than 6 h [5]. Others advise the early prescription of prophylactic oral antibiotics in all cases of bite injuries [26][27][28]. In our department, antibiotic prophylaxis was administered to 91.9% of patients presenting with bite injuries to the face (n = 102). A total of 6.9% of them showed signs of wound infection despite prophylactic antibiotic treatment (n = 7). Nine patients did not receive antibiotic prophylaxis, and two of them developed a wound infection (22.2%). This difference was not significant (p = 0.197). The patients' age did not seem to influence the development of wound infections, as children under 10 years showed nearly the same infection rate (8.0%) as patients older than 10 years (8.1%). Regarding wound infection according to the Lackmann classification, the lowest infection rate was assessed in class I wounds (3.4%), whereas the most infections were documented in class II (11.9%) and III (8.1%). A significant correlation between the Lackmann stage and wound infection could not be demonstrated (p = 0.750). In stage I wounds, only one infection in 29 wounds was detected, whereas in stages II and III the percentage of infections obviously increased. In stage IV wounds, no infections were assessed. However, this finding can be attributed to the low number of stage IV wounds (n = 3) and may not be representative. With this in mind, our findings indicate that the risk of wound infection in stage I wounds is low and increases with the involvement of deep tissues. According to these results, we accede to the proposal of Kesting et al. for antibiotic prophylaxis for patients with Lackmann class II or higher facial bite injuries. In contrast, proper local disinfection seems to be appropriate in Lackmann class I cases after careful evaluation of the individual situation concerning the patient's immunological competence, macroscopic wound contamination, etc. We would not suggest a special need for preventive antibiotic use in children, as infection rates in children seem to be equal to those in adults. In other studies, the evaluation of complications revealed that hypertrophic scarring was the most common complication following surgery [21]. A total of eight patients required dermal scar correction after at least 6 months (7.2%). The percentage of patients undergoing scar correction was significantly increased in patients with documented wound infection compared to patients with complication-free wound healing (p = 0.001). We suppose that wound infections lead to enhanced scarring and reduced long-term aesthetics. In 21 cases, a wound drain was inserted as part of wound closure or reconstruction surgery. A total of 14.3% of the wounds with a drain showed signs of infection (n = 3 of 21), compared to the 6.7% infection rate in wounds without a drain (n = 6 of 90). However, there was a significant correlation between Lackmann class II or higher and the installation of a drain (p = 0.013), so the higher infection rate could be explained by the fact that drains were mainly installed in critical wounds with a higher risk of infection. Despite this higher occurrence of infections, we therefore still recommend inserting a drain in visibly contaminated wounds and optionally in Lackmann class II or higher.
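The prophylaxis and drainage policy argued for above can be condensed into a simple decision rule. The sketch below merely restates that recommendation in code for clarity; the function and its parameters are our own illustrative naming, and it is of course not a validated clinical algorithm.

```python
def recommend_management(lackmann_class, visibly_contaminated=False):
    """Restate the recommendation above (illustrative only, not clinical guidance)."""
    advice = {"antibiotic_prophylaxis": None, "wound_drain": False}
    if lackmann_class >= 2:
        # Lackmann class II or higher: prophylaxis plus a wound drain.
        advice["antibiotic_prophylaxis"] = "amoxicillin with clavulanic acid (3 days)"
        advice["wound_drain"] = True
    else:
        # Class I: careful local disinfection after individual risk assessment.
        advice["antibiotic_prophylaxis"] = "none (local disinfection only)"
    if visibly_contaminated:
        advice["wound_drain"] = True
    return advice

print(recommend_management(lackmann_class=2))
```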
Common pathogens associated with animal bites include Staphylococcus, Streptococcus, Pasteurella, Capnocytophaga, Moraxella, Corynebacterium, Neisseria and anaerobic bacteria [11]. Dog bites can result in the transmission of numerous pathogens, including Rabies lyssavirus (i.e., rabies), Clostridium tetani (i.e., tetanus), Pasteurella spp., Capnocytophaga canimorsus, Fusobacterium, Bacteroides, Prevotella spp., Propionibacterium, Peptostreptococcus, Eikenella corrodens and Streptococcus pyogenes, among others [18,29]. Amoxicillin with clavulanic acid is generally considered the first-line prophylactic treatment for animal bites [21,30,31]. Amoxicillin is a penicillin derivative and has similar activity against both gram-positive and gram-negative bacteria. With the addition of clavulanic acid, the spectrum is extended to include beta-lactamase-producing strains, broadening the coverage to other bacterial species as well [32]. Accordingly, the most frequently administered antibiotic agent was amoxicillin with clavulanic acid (n = 69). Amoxicillin with clavulanic acid is reported to be active against virtually all the bacteria isolated from bite wounds [5,33]. When given with a prophylactic intention, wound infection occurred in 7.2% (n = 5). This low number of infections supports amoxicillin with clavulanic acid as the agent of first choice in all facial bite injuries. Prophylactic antibiotics should be prescribed for 3 days [34]. If a wound shows evidence of infection, a microbiology swab should be taken for culture and sensitivity [34]. Antibiotics for the treatment of infection should be prescribed for 5 days [34]. As alternatives to amoxicillin with clavulanic acid, mainly clindamycin and cefuroxime were administered. Clindamycin is well known for its activity against anaerobic bacteria, particularly beta-lactamase-producing strains of the Bacteroides species, and for its activity against aerobic gram-positive cocci. However, clinicians should be aware of its failure against aerobic gram-negative rods [35]. Cefuroxime is stable against many β-lactamases and is active against many gram-positive and gram-negative organisms. Like most other cephalosporins, it is not active against Streptococcus faecalis, Pseudomonas species or Bacteroides species [36]. Tetanus and rabies immunization history must be checked, and vaccination and immune globulin should be administered when necessary. According to the recommendations of the WHO, nibbling of uncovered skin, minor scratches or abrasions without bleeding and licks on slightly abraded skin demand immediate post-exposure vaccination and local treatment of the wound. Single or multiple transdermal bites or scratches (with bleeding), licks on broken skin, contamination of a mucous membrane with saliva from licks and contact with bats (superficial or deep bites or scratches, contact with a wound or mucous membrane) require immediate post-exposure vaccination and the administration of immunoglobulins [37]. In the present cohort, rabies immunization was carried out in the case of the fox bite. Since 2008, Germany has been considered free from terrestrial rabies. Nevertheless, post-exposure prophylaxis should be carried out if the suspicion of exposure to rabies cannot be ruled out, as in our case [38]. It must be remembered that, in other countries, emergency physicians have to cope with a more difficult situation concerning rabies, related to a high number of straying dogs. For instance, Aydin et al.
report in a Turkish study that 97.1% of patients presenting with bite injuries receive a rabies vaccination [13].

Another interesting aspect of this study is the fact that long-term facial nerve malfunctions after a bite injury were not explicitly recorded. The incidence of permanent facial nerve damage after animal attacks to the face seems to be quite low. However, further research is required for a detailed assessment of facial nerve function in patients with facial bite injuries.

A limitation of the study is the retrospective design over a 13-year time period. Despite accurate documentation, detailed information about initial medical findings, treatments and outcomes may be absent. Several additional cases had to be excluded because of incomplete medical records. In this context, data about patients' comorbidities and smoking status are missing. This is a major compromising factor for the treatment outcomes and the results of the study. Regarding the surgical treatment, information about the use of specific disinfection agents could not be obtained. Therefore, it could not be determined which disinfection concept is appropriate for animal bite wounds. Moreover, treatment was carried out by multiple practitioners. Individual surgical experience might affect treatment outcomes but could not be reflected in the study's results. Another limitation of the study is an incomplete microbiological assessment, as the cultivation of bacteria causing wound infections was successful in only three cases. Cultures of bite wounds are not initially obligatory unless the wound is abscessed or already infected [39]. In our cohort, microbiological cultures were not collected routinely from wounds that were not infected. The lack of bacterial cultures from infected wounds may be promoted by the reflexive empirical administration of broad-spectrum antibiotics before taking swab specimens of the wounds.

Conclusions

Animal bite wounds to the face are a common reason for presentation in emergency departments. Children under 10 years of age make up the main portion of the patient population. The main location is the perioral region, followed by the nose and ear. Cheek wounds and wounds requiring local flap reconstruction are at the greatest risk of wound infection. Perforating wounds into the oral cavity do not imply increased infection rates. We recommend antibiotic prophylaxis with amoxicillin with clavulanic acid and wound drains for wounds of Lackmann class II or higher. Primary closure of the wounds seems to be the treatment of choice whenever possible, concerning both infection rates and aesthetic outcomes.

Figure 1. Age pattern of patients suffering animal bites to the face.

Figure 3. (A) Lackmann type III bite injury after a dog attack. (B) Local flap reconstruction; drain installed at the lateral wound site. (C) Situation 3 months after surgery. (D) Mouth opening 3 months after surgery.
2023-11-15T06:17:31.192Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "959d80c743471ed2b483c0e5265a84cfa4ae48df", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/12/21/6942/pdf?version=1699255011", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "63d793fbf05fac84395db67eb2d5ce3f2b810f9b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237632067
pes2o/s2orc
v3-fos-license
Evaluation of outcome of pregnancy in placenta accreta spectrum Background: The aim of the study was to evaluate the outcome of pregnancy in the placenta accreta spectrum in third trimester pregnancies at a tertiary care centre. Methods: This hospital-based retrospective study was carried out from 2017 to 2019. The case records of all women identified as placenta accreta spectrum from the hospital registers were retrieved. A total of 166 patients with the diagnosis of placenta accreta spectrum were included in the study. Results: The incidence of morbidly adherent placenta was 5 per 1,000 deliveries, with a mean age of 32.4±4.2 (23-39) years, and the study showed its relation with risk factors such as previous caesarean section (CS), placenta praevia and multiparity. The mean duration of MICU stay in placenta previa was 6.7±1.9 days (range 2-12 days). Complications occurred in 18 cases, including urinary bladder injury (3%), infection (9%), and PPH with coagulopathy (4.2%). The placenta was removed successfully in 141 cases, while 25 cases had caesarean hysterectomy (2.4%). Of the 166 cases in total, 26 (16.8%) were intrauterine deaths (IUD) and stillbirths, 5 (3%) were very low birth weight, 24 (14.5%) were low birth weight babies, 76 (45.8%) had neonatal intensive care unit (NICU) admissions, and 10 (6%) had an APGAR score of <5. Conclusions: Placenta accreta spectrum can be identified antenatally with a high index of suspicion in the presence of known risk factors and proper radiological studies, allowing for planned attempts to avoid life-threatening haemorrhage and caesarean hysterectomy.

INTRODUCTION

The placenta accreta spectrum, formerly known as morbidly adherent placenta, refers to the range of pathologic adherence of the placenta, which includes placenta accreta, increta and percreta. It was reported in 1:30,000 deliveries in 1950, and the incidence of morbidly adherent placenta has increased 10-fold in the past 50 years, to a current frequency of about 1 in 2,500 deliveries, because of the alarming increase in cesarean section rates. Severe and even life-threatening haemorrhage, which frequently necessitates blood transfusion, can result in maternal morbidity and fatality. Women with morbidly adherent placenta spectrum have a higher risk of maternal death. Morbidly adherent placenta has taken on epidemic proportions of late, which parallels the rise in cesarean deliveries. Major risk factors are a history of placenta previa, previous CS, advanced maternal age, multiparity and a history of endo-uterine maneuvers [1]. Grey scale ultrasound is adequate in most cases for diagnosing the condition. However, MRI may be required in some cases, especially in those with an anterior placenta with bladder invasion and a posterior placenta. It has been seen that antenatal diagnosis of this condition, with planned elective delivery in a tertiary care setup with availability of multidisciplinary care, can significantly improve maternal and fetal outcomes [4,5]. The mainstay and standard treatment of this condition is peripartum hysterectomy after CS, without disturbing the placenta. Attempts at manual removal of the placenta after cesarean in such cases can result in torrential hemorrhage with severe morbidity or mortality.

METHODS

This hospital-based retrospective study was carried out from 2017 to 2019 in the department of obstetrics and gynecology at a maternity hospital serving as a tertiary care center.
The case records of all women identified as placenta accreta spectrum from the hospital registers were retrieved after approval of the institutional ethical committee to carry out this study. A total of 166 patients with the diagnosis of morbidly adherent placenta were included in the study. The criterion for diagnosis of morbidly adherent placenta was taken as manual removal of the placenta being partially or totally impossible or evidence of gross placental invasion at surgery or women with an ultrasound diagnosis confirmed by failed attempts to remove the placenta during the third stage of labor or histopathological confirmation of hysterectomy specimen. The histopathological specimens were fixed in formalin, trimmed and stained with routine haematoxylin and eosin stain. Clinical and pathological correlation was done by obtaining relevant clinical details, radiological findings and histopathological diagnosis. Maternal demographic data, mode of delivery was noted. From the operative notes, data on placental location, estimated blood loss, units of blood transfusion required and surgical procedure carried out to control bleeding was retrieved. Post-operative ICU admission, fetal outcome and maternal and fetal mortality was recorded. Statistical analysis Categorical variables were presented in number and percentage (%) and continuous variables were presented as mean±SD and median. The data was entered in MS excel spreadsheet and analysis was done using statistical package for social sciences (SPSS) version 21.0. RESULTS Among 28779 deliveries, 166 cases of morbidly adherent placenta with incidence of 0.5% i.e., 5:1000 births. Age range of patients was 21-37 years, mean age being 32.4±4.2 (23-39) years. Most of the cases in study are multipara with at least one previous CS and no history of placenta previa. DISCUSSION Morbidly adherent placenta was seen in 5:1000 (0.5%) deliveries in our hospital over a 2-year period. In 2017, the first national and binational case-control study of morbidly adherent placenta in Australia and New Zealand found an incidence rate of 44.2/100000 women given birth or 0.0442%. 6,7 The most cited epidemiological study is that of Miller et al who found in the United States over a period of 10 years (1985-1994) 62 placentas accreta on 155 670 births with an incidence rate of 1/2 510 births or 0.0398%. 8 Recently, Carusi reported that the exact incidence of morbidly adherent placenta is not easy to ascertain, but it is about 1/1000 deliveries and this incidence is increasing along with increasing the risk factors. This lower incidence can be explained by the fact that screening is more effective in developed countries than in low-and middleincome countries. The only certainty is that the incidence of morbidly adherent placenta has increased dramatically in a few decades, and it was shown that this was likely correlated to the increasing rate of cesarean delivery. 9 The present results revealed that risk factors for morbidly adherent placenta were maternal age, >35 years, previous CS (≥2), multiparity (≥3) and previous history of placenta previa. These results agreed with many authors, Fitzpatrick et al. studied risk factors for morbidly adherent placenta and found that high maternal age, prior caesarean delivery and placenta previa were considered as significant risk factors. Also, another study in 2017 reported that older maternal age, prior CS, placenta previa and high parity were independent risk factors for morbidly adherent placenta. 
Also, other investigators reported similar results. In the Choudhary et al study, all patients were multiparous (84%); also, concurrent placenta praevia was recorded in 53% of cases, which is lower compared to 71% in the study by Choudhary et al [2] and 64% in the study by Fitzpatrick et al [10-12]. According to the current findings, blood transfusion was required in all 166 cases of placenta accreta spectrum. A recent study published in 2018 found that 94.7% of placenta accreta cases required blood transfusion, while another study found that 75.0% of cases required blood transfusion. In these circumstances, blood transfusion should be expected, and in some cases, major transfusion may be required. According to Wright et al, the average blood loss for morbidly adherent placenta cases undergoing caesarean hysterectomy was 3000 ml, with a mean transfusion requirement of 5 packed red blood cell (PRBC) units. About 41.7% of women with a recognised diagnosis of placenta accreta had an estimated blood loss of 5000 mL [13]. Our findings are similarly consistent with the findings of Epstein et al, which were based on a study of 77 women with morbidly adherent placenta. The hysterectomy group had a statistically significantly greater EBL than the conservative management group (2989 ml vs. 1410 ml). Our findings are also consistent with other studies in the literature showing that conservative management reduces the requirement for blood transfusions as compared to extirpative management [14]. The average length of MICU stay in the CS hysterectomy group was 6.7 days, according to our findings. It has been reported that the average MICU stay after a CS hysterectomy ranged from 4 to 8 days, which is similar to our data. Bladder damage was noted as a complication in 3% of our study participants. Many studies back up our findings, stating that complications following a caesarean hysterectomy are more common, with bladder and ureteric injuries being the most frequent [15]. In our analysis, the uterus was preserved in 97% of the instances. Overall, our findings show that conservative management may be beneficial in cases where couples want to continue trying to conceive, with the promise of a follow-up. We also found that UAE reduced placental vascularity and hastened placental resorption, which is consistent with previous research. The mortality rate of placenta accreta has been reported to be around 7%. In a recent study in Egypt, the death rate in PA cases was reported to be 3.2%. However, a nationwide study in the United States found a mortality rate of 1.0% in women who had an obstetric hysterectomy, although other studies found mortality rates of 1-6%. Fortunately, no deaths were reported in the current investigation [16]. Of the 166 cases in total, 26 (16.8%) were intrauterine deaths (IUD) and stillbirths, 5 (3%) were very low birth weight, 24 (14.5%) were low birth weight babies, 76 (45.8%) had NICU admissions, and 10 (6%) had an APGAR score of <5. In the study by Singh et al, four were stillborn, nine needed NICU transfer and eight had an APGAR score of 9 at 5 min after birth [17]. Cutting through the placenta and separating the placenta can cause torrential bleeding which is difficult to manage, so this should be avoided in cases of morbidly adherent placenta; it is also cost-effective, reducing the burden on health resources.
These cases should be managed in higher centres: tertiary care centres with all facilities, such as experienced obstetricians, blood banks for adequate blood transfusion, neonatal intensive care units, experienced anesthetists, urologists and RICUs. Expert radiologists to diagnose morbidly adherent placenta should also be available. When the placenta is in the anterior lower uterine segment in cases of previous LSCS, both ultrasound and MRI have to be done.

CONCLUSION

The increased frequency of CS has been linked to an increase in morbidly adherent placenta. It is possible to make an antepartum diagnosis using suitable radiological tests, and the patient can be saved from a life-threatening postpartum haemorrhage, as well as offered planned measures to save the uterus.
2021-09-25T15:50:43.024Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "8788f4292b06f0d507be09d0c64b12cd4dff0876", "oa_license": null, "oa_url": "https://www.ijrcog.org/index.php/ijrcog/article/download/10791/6783", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "82cad42df78587dfb434e1f814094cfe087355e9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
55979589
pes2o/s2orc
v3-fos-license
Accounting Education , Knowledge Management and Working Capital Management Performance : Evidence from China With the advent of the era of knowledge economy, skills training, knowledge acquirement and management are increasingly combined with business management practices. There is preliminary evidence show that the education, learning and management of accounting knowledge have a significant correlation with the enterprise performance. Based on the theory of accounting education and knowledge management, this paper investigates the influence of internal control on the performance of working capital management from the perspective of internal control acquisition, direct learning and indirect learning. The empirical study on China’s capital market shows that with the improvement of accounting education and knowledge management, the quality of internal control of enterprises will also be improved, furthermore, the performance of working capital management will be obviously improved. The conclusion of this paper not only enriches the literature in the field of accounting education and knowledge management, but also provides the crucial evidence that how Chinese enterprise can improve the rationality and scientificity of working capital management decision. INTRODUCTION The advent of the era of knowledge economy not only promoted the importance of enterprise to knowledge education, knowledge learning and knowledge management, but also promoted the importance of accounting education to the content of knowledge management.AACSB recently began to emphasize the importance to take big data and technology into accounting courses (Sledgianowski et al., 2017).There is preliminary evidence show that knowledge learning and management have a significant role in promoting enterprise performance (Reich et al., 2014;Cohen & Olsen, 2015;Zheng et al., 2017).In the information age, knowledge has become the most important source of wealth, it has become one of the hot issues for many scholars that how to use the knowledge and skill to enhance the enterprise performance (Andreeva & Kianto, 2012).Among all of the knowledge resources of the enterprise, the mastery and learning of accounting knowledge may be the most fundamental way to promote the improvement of enterprise performance.Especially, in recent years, the enterprise's internal control knowledge and system praised by many scholars, may become the booster for the improvement of enterprise performance.Since the promulgation of the SOX Act, internal control has gradually developed into an important part of corporate accounting system.Most of the listed companies in China have established internal control system.While this kind of system establishment is a long-term learning and a process of improvement, and the impact on enterprise performance may also be gradual. 
The impact of knowledge management on enterprise is realized mostly through the ways of the communication, sharing, cooperation, innovation of the enterprise members, and running through the whole business process of the enterprise.As one of the crucial parts of enterprise financial decision-making (Aktas et al., 2015), working management decision may be mostly influenced by accounting policies and knowledge management.Since 1930s, along with ever-changing global economic situations as well as intensified market competition, working capital management has increasingly drawn wide attention from the theoretical circle and the practical circle.Its content has expanded stepwise from the initial stage to the comprehensive stage that covers income management, supply chain management, and expenditure management.Compared to investment and financing management and profit distribution management, working capital management decisions are equally important to the financial management system as a member of the decision field.Existing research has proved that quite a great component of balance sheets of listed companies in various countries involves current assets and current liabilities as an important part of working capital (Mian & Smith, 1992).Operating risks and enterprise revenue are linked to the effectiveness of working capital management, which further impact on the realization of maximum enterprise values.Highly efficacious working capital management decisions are of great value relevance.A company is endowed with much better profitability, stock performance, and business performance if its working capital management decisions converge to the optimum value (Hassan, 2011;Baños-Caballero, 2014;Aktas et al., 2015;Mun & Jang, 2015). However, the blossom of theoretical studies fails to promote leapfrogging development of working capital management practice in a real sense.When the Chinese government reformed the accounting system in 1993, the concept of working capital management was introduced into China -for its financial meaning -so that complying with the trend of international convergence.Through tracking and survey on working capital management of Chinese listed companies during the period 1997-2014, Pro.Wang et al. 
from the China Business Working Capital Management Research Center conclude that working capital management of Chinese listed companies tends to: (1) have focused on and will focus more on investment activities; (2) lead to nonstop slight increases in the proportion of short-term financial liability and a constantly magnified working-capital funding risk; (3) witness a continuously deteriorating management performance, where accounts receivable and the marketing channel gradually become the key to performance enhancement; and (4) endow state-owned listed companies with much better performance and relatively higher short-term financial risk in comparison to non-state listed companies. Viewed as a whole, despite over 20 years of development, research on working capital management has still been underestimated, such that the risks and performance of working capital management remain trapped in a disadvantageous situation of continual exacerbation, and there is still a lack of systematic research into the influential factors of working capital management (Wang et al., 2007). Based on the era of the knowledge economy and the environment of the Chinese capital market, this paper holds the view that it is necessary to study working capital management decisions from the internal perspective of the company, especially internal control and knowledge management, and to discuss the impact of internal control quality on enterprise working capital management performance. The significance of this study lies in the fact that, from the perspective of knowledge management, internal control may directly and indirectly affect the efficiency of enterprise working capital allocation, thus promoting a substantial improvement of the overall working capital management performance of China's listed companies.

Contribution of this paper to the literature

• Based on the relevant contents of accounting education and knowledge management, this study examines the impact of internal control on working capital management, and expands the scope of the research on the economic consequences of internal control.

• This paper analyzes the influence of internal control on working capital management from two dimensions, direct and indirect, and defines the mechanism of internal control on working capital management.

Theoretically, there are at least three reasons why it is from the internal control perspective that studies are conducted on the approach to promoting WCMP:

1. The Chinese Application Guidelines for Enterprise Internal Control sets out requirements for working capital management and its contents with regard to risk control and business procedure. The Application Guidelines for Enterprise Internal Control No. 6 (finance activities), No. 7 (purchasing business), No. 8 (assets management), and No. 9 (sales business), respectively, list detailed operating guidelines for such components of working capital management as working capital, supply chain management, inventory management, accounts receivable management, and prepayment management.

2. According to the stakeholder theory, high-quality internal control may spur stakeholder groups such as shareholders, employees, suppliers, and governments to change their behaviors towards more efficient operating actions and working capital management for the company.

3.
Present-day focus on working capital management decisions is mainly on the measurement of WCMP indices, rather than on the internal control connotations behind them. Direct requirements of the internal control system will significantly change such indices as receivables turnover, inventory turnover, and current assets turnover.

All in all, it is reasonable and of great theoretical and practical value to research WCMP based on internal control, as this paper does. Given this, we intend to use a comprehensive measurement of WCMP and multiple regression analysis to empirically test the impact of enterprise internal control on WCMP, with panel data of A-share main-board companies on the Shanghai and Shenzhen stock markets from 2004 to 2013 as the research sample. According to the research results, high-quality internal control, with either DWC or WCP as the index, can steadily enhance WCMP. Further studies show that with internal control, a company can effectively shorten DSO and DIO, albeit cutting down DPO at the same time. For the first time, the research results in this paper provide empirical evidence for the positive correlation between enterprise internal control and WCMP. It not only identifies an approach to upgrading WCMP, but also helps deepen recognition and understanding of the economic consequences arising from enterprise internal control.

Knowledge Management and Enterprise Performance

During the era of the knowledge economy, knowledge has become an important means and resource for enterprises to gain competitive advantage, increase their wealth and improve their innovation ability (Bogner & Bansal, 2007). Wang (2009) proves the positive correlation between knowledge management orientation and enterprise performance based on the concept of knowledge management orientation. Through knowledge learning and the optimization of knowledge structures, enterprises promote knowledge sharing, transformation, innovation and application internally. The optimization of knowledge and an orderly process of knowledge management will effectively improve enterprise performance (Gold & Arvind Malhotra, 2001). Chang Lee et al. (2005) provide a new metric, the knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Chen (2009) proposes an approach for measuring a technology university's knowledge management (KM) performance from a competitive perspective. Mills (2011) uses survey data from 189 managers and structural equation modeling to assess the links between specific knowledge management resources and organizational performance. The results show that some knowledge resources (e.g., organizational structure, knowledge application) are directly related to organizational performance, while others (e.g., technology, knowledge conversion), though important preconditions for knowledge management, are not directly related to organizational performance. The above conclusions show that there is a significant positive correlation between knowledge management and firm performance.
Based on the above analysis, it can be seen that knowledge management has become a key factor of enterprise performance growth.In view of this, this paper will draw lessons from the related theory of knowledge management, take the process of enterprise internal control establishing and perfecting as the process of enterprise knowledge accumulation and the process of management optimizing.Furthermore, from the perspective of knowledge management, this paper will analyse the impact of internal control on enterprise working capital management performance. Direct Impact of Internal Control on WCMP Compared to the initial stage, the present-stage concept of working capital management has expanded greatly for its connotation and extension.Working capital management has been lifted to a comprehensive one where revenue (accounts receivable, procurement procedure, payments and receipts of accounts), supply chain (inventories and logistics), and expenditures (procurement and payments) are all managed (Kieschnick et al., 2013).As an important means of enterprise internal control, the internal control system is endowed with unique perspectives and efficacies in terms of controlling working capital risks, arranging working capital business procedures, and lifting WCMP.According to the specific requirements of the Chinese Application Guidelines for Enterprise Internal Control, there are several direct impacts of internal control on WCMP. According to the Application Guidelines for Enterprise Internal Control No.6 -finance activities, the objective of internal-control-based working capital management is to realize balance between physical flow and fund flow and to comprehensively enhance the efficiency of fund operation as well.In addition, No.6 proposes detailed measures of lowering operating risks and upgrading fund benefits at the same time from the perspectives of capital budget management, short-term capital deployment, accounting system control (receipt, payment, examination and approval of funds), etc. The Application Guidelines for Enterprise Internal Control No.7 -purchasing business arranges and improves the working capital management procedures related to supply chains of materials (labor service) procurement and payments, for example.Measures such as establishing, examining and approving the procurement system, separating authority from responsibility, and improving the evaluation mechanism are all complementary to the promotion of WCMP. According to the Application Guidelines for Enterprise Internal Control No.8 -assets management, a company is supposed to introduce the modern concept of logistic management to regulate inventory management procedures.It should take full advantage of information system for a reasonable determination of the optimal inventory status, and ensure effective control on all risks of inventory management, in a way that promoting efficacious increase of inventory turnover. The Application Guidelines for Enterprise Internal Control No.9 -sales business provides quite referential application measures with regard to enterprise sales business and accounts receivable management, which is considered by Wang et al. 
(2014) as the key for listed companies to upgrade their WCMP.On the one hand, No.9 accurately seizes the critical risk points of sales channel, and controls the business procedures of such links as market survey, conclusion of sales contract, sales process and after-sales service, so that the efficiency of assets management for the sales channel is substantially promoted.On the other hand, No.9 details control requirements of accounts receivable, the obstinate illness of WCMP, including receivables management, bill management, accounting system control, and bad debt treatment.The implementation of internal control on accounts receivable can reduce the probability of producing bad debt in an effective manner, accelerate accounts receivable turnover, and improve WCMP as well. As can be seen from above, the direct impact of enterprise internal control on WCMP is mainly realized by formulating direct system requirements or detailed business procedure control requirements to intervene the indices of working capital management including content construction, risk control or evaluation indices.The intention is to radically free listed companies from invalid or inefficient management of working capitals. Indirect Impact of Internal Control on WCMP Internal control exerts direct impact on WCMP by posing requirements to the internal control system or by controlling relevant business procedures.Apart from this, enterprise internal control also has indirect influences on WCMP by promoting stakeholders to change their behaviors so that improving the operating management strategies.According to the stakeholder theory, stakeholders are realistic society subjects with the motivation for safeguarding self-interests (Donaldson & Preston, 1995).Stakeholders will adopt corresponding actions to maintain positive influences or reverse negative influences exerted on their interests under the internal control system once they perceive such influences and feel the need.This will in turn impede the realization of WCMP or its objective.Theoretically, the essence of internal control is to protect and maximize the interests of stakeholders such as shareholders (investors), creditor, manager, employee, supplier, consumer, and government (Hoitash, 2009).Highquality internal control will urge stakeholder groups to optimize resources by intensifying superior resource allocation (Zhang, 2007), in a way that improving WCMP.The paper selects several typical kinds of stakeholders for a representative sketch of the access to indirect influences of internal control on WCMP. From the perspective of shareholders, premium internal control brings cash holding appreciation (Huang et al., 2015), and helps improve stock performance and its execution achievements in the market (Hammersley et al., 2008).Shareholders who are informed of internal control information can precisely perceive the consequences of corresponding economic behaviors, and will thus decide to take measures such as inputting more assets or other superior resources to the company.The company that may expand its cash resource accordingly will finally have its WCMP improved in an indirect way. 
From the perspective of creditors, the perception of high-quality internal control information will strengthen their confidence in continuous operation of the company, and promote them to upgrade the company's credit rating, extend the limitations of debt covenants or reduce loan interest rate (Costello, 2011;Sun et al., 2017).Any one of the above measures will benefit the company greatly by supplementing its capability of short-term financial loans or cash flow, whose WCMP is then be improved remarkably. From the perspective of employees, effective internal control will both restrict and stimulate them so that improving their work efficiency and production enthusiasm.The implementation of internal control will not only enhance the company's productivity as a whole, but also curtail employee management costs, which will in turn supplement WCMP. From the perspective of consumers or clients, high-class internal product control and corresponding information disclosure will strengthen their confidence in product quality.As a result, the company will not only have more products sold and payed in time, but also reinforce communication and interaction with consumers or clients (Su et al., 2014).The corresponding promotion of sales performance and payment reclamation as well as the reasonable arrangement of production schedule are particularly crucial to improving WCMP in relation to the company's sales channel. From the perspective of suppliers, the rational system of procurement management will strengthen suppliers' confidence that the company will execute the contract and pay up on time.They will provide product support with larger discounts and longer credit extensions.In this way, not only can the company invest less assets than planned before, but the procurement period can be shortened, so that significantly enhancing WCMP in relation to the company's procurement channel. From the perspectives of government, a safe and sound internal control with high efficiency can improve the quality of a company's accounting information (Doyle et al., 2007;Altamuro & Beatty, 2010), and can strengthen the protection of investors (Gong et al., 2013;Peng, 2017).This will benefit the company in that the government will provide relatively loose regulatory environments and more policy support.A sound operating environment and favorable policy support provides certain help for the improvement of WCMP. To sum up, no matter with direct constraints by system regulation, or with indirect constraints based on stakeholders' behavioral perception, the implementation of internal control will exert great positive influence on WCMP.On such foundation, we proposes the following research hypothesis H1 for verification: Hypothesis 1: Internal Control is positively related to a firm's working capital management performance. 
Sample and Data

Panel data of A-share main-board companies on the Shanghai and Shenzhen stock markets from 2004 to 2013 were selected as the research sample. For sample data selection and follow-up data processing, we mainly: (1) excluded the listed financial companies due to discrepancies in accounting criteria; (2) excluded the listed companies whose sample data for those years were in ST, *ST and PT status; (3) excluded the listed companies that underwent an IPO in those years; (4) excluded the listed companies with missing financial data; and (5) winsorized the major continuous variables at the 1% and 99% quantiles, with the intention of restricting the possible impact of extreme values on the research conclusions. After the above processing, there was a total of 8,887 sample observations that satisfied our requirements over the decade-long sample interval. For industry classification, we referred to the Guidelines for the Industry Classification of Listed Companies (2012 Revision), so that the selected sample covered all of the 19 industries except the financial industry and was strongly representative.

As can be seen from Table 1, during the period 2004-2013, the samples of different years have similar numbers and fluctuate around the mean value of 888.7. The fluctuation is controllable and reasonable. In view of the samples' nature, there are 6,109 state-owned companies and 2,778 non-state companies, which account for 68.7% and 31.3% of the total number of samples, respectively. This data distribution complies with the current special economic mechanism in which state-owned business predominates in the national economy. The internal control index and all other relevant financial data required in the research are derived from the following databases: (1) the DIB internal control and risk management database; (2) the CSMAR database; and (3) the CCER database. SPSS 21.0 and Stata 12.0 are used as the data processing software and the statistical analysis software for the research.

Model Design

The paper constructs a panel-based multiple regression model (Model 1) to study the WCMP response to enterprise internal control:

Performance_{i,t} = β0 + β1 ICI_{i,t} + Σ β_k Controls_{k,i,t} + μ_i + λ_t + ε_{i,t}   (Model 1)

where Performance denotes the proxy index of WCMP; DWC and WCP were used for the measurement of this dependent variable. The internal control index (ICI) was chosen as the independent variable, based on the internal control objectives provided by the DIB internal control and risk management data. The rest of the variables are control variables. In the model, μ_i denotes unobservable individual effects, λ_t denotes time effects, and ε_{i,t} represents the random disturbance term. The relationship between internal control and WCMP was identified by the sign and significance of the internal control parameter β1 in the model.
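As an illustration of how Model 1 can be estimated, the following is a minimal sketch of a two-way fixed-effects regression in Python. The simulated panel, the column names and the single control variable are illustrative assumptions only; they are not the study's actual data, variable set or software (the paper reports using SPSS and Stata).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms = [f"F{i:03d}" for i in range(50)]
years = list(range(2004, 2014))

# Simulated panel, only to make the sketch runnable: DWC falls as ICI rises.
rows = []
for f in firms:
    mu_i = rng.normal(0, 10)                      # unobservable firm effect
    for y in years:
        ici = rng.normal(600, 80)                 # internal control index
        size = rng.normal(21, 1)                  # placeholder control, e.g. ln(assets)
        dwc = 150 - 0.05 * ici + 2 * size + mu_i + (y - 2004) + rng.normal(0, 5)
        rows.append((f, y, dwc, ici, size))
panel = pd.DataFrame(rows, columns=["firm", "year", "dwc", "ici", "size"])

# Model 1 as a two-way fixed-effects regression: firm dummies absorb mu_i and
# year dummies absorb lambda_t; the coefficient on ici corresponds to beta_1.
fit = smf.ols("dwc ~ ici + size + C(firm) + C(year)", data=panel).fit()
print(f"beta_1 = {fit.params['ici']:.4f}, p-value = {fit.pvalues['ici']:.3g}")
```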
Variable Definition

Dependent variable: WCMP. Research on WCMP evaluation has a long history. Early studies mostly reflected WCMP with financial evaluation indices such as accounts receivable turnover (period), inventory turnover (period), and current assets turnover (period). However, as the concept of working capital management has expanded, describing it with isolated, single-faceted financial indices has become increasingly inadequate. Broadly, two types of indices are now widely recognized in the theory and practice of working capital management: DWC and WCP.

DWC: The early-stage indices of inventory turnover and receivables turnover, which correspond to the traditional scope of working capital management research, are no longer suited to the current trend of diversified working capital management. Early evaluation indices emphasize managing current assets but ignore the inherent relationships between the indices, so conflicts and contradictions arise in actual application (Wang et al., 2007). More importantly, because current liabilities are completely excluded from the early indices, it is difficult to capture the overall impact of items such as inventories, accounts receivable, and accounts payable on working capital.

Given that it is inadequate to measure WCMP with the single index of current assets turnover, Richards and Laughlin (1980) proposed using days of working capital as the WCMP index. Days of working capital can be understood as the mean time span from cash payment to cash receipt, and reflects the entire process of a company's working capital management activities. It has therefore been widely applied in empirical studies of WCMP (Deloof, 2003; Knauer & Wöhrmann, 2013). The formula for days of working capital is:

DWC = DSO + DIO - DPO

where DSO denotes days sales outstanding (receivables), DIO days inventory outstanding, and DPO days payables outstanding.

The American REL consulting company and CFO magazine began to survey the working capital of American listed companies in 1997 based on DWC and cash conversion efficiency (CCE). Since 2003, they have employed the formula DWC = DSO + DIO - DPO to rank the WCMP of American listed companies, which has played an important role in promoting the universal application of WCMP and reflects that measuring WCMP with DWC is objective and rational. The paper therefore uses DWC as the first index for evaluating WCMP.

WCP: Initiated by the Boston Consulting Group (BCG), the concept of WCP takes comprehensive account of a company's working capital management efficiency and its business level, and serves as an integrated index system for measuring WCMP as a whole. This simple and practical index has been widely accepted by both academics and practitioners, and is used as the second measure of WCMP in the paper. The corresponding formula is:

WCP = net sales / annual average working capital volume
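As a worked illustration of the two performance measures defined above, the following self-contained Python helper computes DWC and WCP from standard statement items. The component formulas for DSO, DIO, and DPO (receivables, inventory, and payables scaled to days) follow common textbook conventions and are assumptions on our part, since the text does not spell them out.

def working_capital_metrics(sales, cogs, receivables, inventory, payables,
                            avg_working_capital, days=365):
    """DWC and WCP from statement items (component definitions are assumed conventions)."""
    dso = receivables / sales * days        # days sales outstanding
    dio = inventory / cogs * days           # days inventory outstanding
    dpo = payables / cogs * days            # days payables outstanding
    dwc = dso + dio - dpo                   # days of working capital
    wcp = sales / avg_working_capital       # working capital productivity
    return {"DSO": dso, "DIO": dio, "DPO": dpo, "DWC": dwc, "WCP": wcp}

# Example: a firm with sales 1,200, COGS 800, receivables 200, inventory 160,
# payables 120, and average working capital of 400.
print(working_capital_metrics(1200, 800, 200, 160, 120, 400))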
Independent variable: ICI. The internal control index (ICI) provided by the DIB internal control and risk management data is chosen as the proxy for the internal control quality of Chinese listed companies. ICI is founded on the basic norms of enterprise internal control and combines five internal control objectives: operation, compliance, asset safety, strategy, and reporting. It also reflects dynamic information on the rectification of internal control deficiencies over its consecutive publication from 2000 to 2014. The application of ICI therefore renders the research conclusions considerably more comparable and reliable. In the model, ICI refers to the quality of a firm's internal control and is represented by the natural logarithm of the DIB (Dibo) internal control index.

Control variables. Related studies identify various factors that influence WCMP; in line with this literature, we also control for them in Model 1. SIZE refers to the scale of the firm and is represented by the natural logarithm of total assets. Firms that are large in scale generally hold more working capital to support their business (Almazari, 2014), and scale directly lowers the efficiency of working capital, so we predict this variable to be negatively related to a firm's WCMP. ROA refers to the ratio of income before extraordinary items to total assets; according to Palazzo (2012), the stronger a company's profitability, the more cash it holds. LEV refers to the ratio of total debt to total assets; according to the pecking order theory proposed by Myers (1984), the debt ratio is one of the important factors influencing the demand for working capital. Growth refers to the firm's year-on-year sales growth; Kim et al. (1998) and Opler et al. (1999) agree that a company's growth affects its future funding requirements. In view of the state-owned character of much of China's economy, we also add a State variable indicating whether a firm is state-owned: State equals one if the ultimate owner of the firm is the government, and zero otherwise. In addition, we control for time and industry dummy variables that may influence the conclusions of the study (Nunn, 1981).
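The control variables described above are simple transformations of financial statement items. The sketch below shows how they, together with the 1%/99% winsorization mentioned in the sample screening, might be constructed from a hypothetical raw firm-year DataFrame; all column names, and the exact income measure entering ROA, are illustrative assumptions rather than the authors' definitions.

import numpy as np
import pandas as pd

def winsorize_1_99(s: pd.Series) -> pd.Series:
    """Clip a continuous variable at its 1st and 99th percentiles."""
    lo, hi = s.quantile([0.01, 0.99])
    return s.clip(lower=lo, upper=hi)

def build_controls(raw: pd.DataFrame) -> pd.DataFrame:
    """Construct SIZE, ROA, LEV, Growth, and State from hypothetical raw items."""
    out = pd.DataFrame(index=raw.index)
    out["SIZE"] = np.log(raw["total_assets"])                    # firm scale
    out["ROA"] = raw["income_before_xi"] / raw["total_assets"]   # profitability
    out["LEV"] = raw["total_debt"] / raw["total_assets"]         # leverage
    out["Growth"] = raw["sales"] / raw["sales_lag1"] - 1.0       # YoY sales growth
    out["State"] = (raw["ultimate_owner"] == "government").astype(int)
    cont = ["SIZE", "ROA", "LEV", "Growth"]
    out[cont] = out[cont].apply(winsorize_1_99)                  # winsorize continuous vars
    return out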
EMPIRICAL RESULTS

Table 2 reports the descriptive statistics of the variables. As can be seen, the mean, maximum, minimum, and range of DWC are 4.674, 8.143, 0.795, and 7.348, respectively, only slightly different from the survey data on DWC reported by Wang for 1997-2014. The mean, maximum, minimum, and range of WCP are 6.508, 92.703, -38.535, and 131.238, respectively, showing that listed companies differ widely in the performance and efficiency of their working capital management. The mean, minimum, maximum, and range of the internal control index are 6.512, 5.926, 6.849, and 0.923, respectively. A possible reason for the small dispersion of internal control quality across listed companies is that the domestic internal control system is still at a preliminary stage and has not yet produced large quality differentials. Table 2 also reports the descriptive statistics of the remaining control variables, which are not discussed in detail here.

Against the backdrop of an economy in which state-owned business predominates, the paper divided the sample into two sub-samples, state-owned and non-state companies, according to the nature of the ultimate controller. It then conducted t tests on the differences in means and Wilcoxon rank-sum tests on the differences in medians for DWC, WCP, DSO, DIO, DPO, and ICI. As can be seen from Table 3, compared with non-state listed companies, state-owned companies have markedly shorter DWC (4.518 < 5.018 / 4.523 < 4.968) and higher WCP (7.240 > 4.897 / 3.7784 > 2.534), which means that state-owned companies outperform non-state companies with regard to WCMP; Wang (2013) reached a similar conclusion. Meanwhile, state-owned companies also show shorter DSO, DIO, and DPO than non-state companies. According to the significance tests on ICI, the internal control quality of state-owned companies exceeds that of non-state companies (6.524 > 6.486 / 6.535 > 6.520), indicating that Chinese state-owned listed companies are ahead of their non-state counterparts both in constructing the internal control system and in upgrading WCMP.
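The group comparisons in Table 3 correspond to standard two-sample tests. A minimal Python sketch using SciPy is given below; the DataFrame panel and its columns are the same hypothetical placeholders as above (here also assumed to contain State, DSO, DIO, and DPO), and Welch's t test is used for the mean comparison as a reasonable default.

from scipy import stats

soe = panel[panel["State"] == 1]       # state-owned firm-years
non_soe = panel[panel["State"] == 0]   # non-state firm-years

for var in ["DWC", "WCP", "DSO", "DIO", "DPO", "ICI"]:
    t_stat, t_p = stats.ttest_ind(soe[var], non_soe[var], equal_var=False)  # mean difference
    w_stat, w_p = stats.ranksums(soe[var], non_soe[var])                    # median (rank-sum) difference
    print(f"{var}: mean-diff p = {t_p:.3f}, rank-sum p = {w_p:.3f}")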
Table 4 and Table 5 report the regression results of Model 1. Given the panel structure of the sample data, panel-based multiple regression was used to study the impact of internal control on WCMP. The commonly used panel regression models are the pooled (hybrid) model, the random effect model, and the fixed effect model. The entity fixed effect model is most often employed in empirical research, on the presumption that the error term of the regression equation is correlated with some explanatory variable; its problem is that it ignores the time effect, and this omission can cause the results to deviate from the true ones, increasingly so as the time effect intensifies (Zhao et al., 2012; Liu, 2017). We therefore tested the joint significance of the annual dummy variables in the sample data (F = 497.86, P = 0.0000). Because the null hypothesis of "no time effect" is strongly rejected, the time effect should be included in the model. Furthermore, in choosing between the fixed effect model and the pooled model, the F statistic is highly significant (as shown in Table 4), corresponding to a significant individual effect, so the fixed effect model is preferable. When the fixed effect model and the random effect model are compared, the Hausman chi2 statistic is significant, so the fixed effect model is retained as the more appropriate and conservative choice.

Table 4 reports the multiple regression results of model (2), in which WCMP is represented by DWC. The estimated coefficient of the internal control index is -0.450 and passes the 1% significance test. This shows that, holding the other variables constant, the higher the internal control index (quality), the shorter the DWC and the higher the WCMP. The research hypothesis H1 is thereby directly supported.

Table 5 reports the multiple regression results of model (3), in which WCMP is represented by WCP. According to the F test and the Hausman test, the fixed effect model is again the best choice for model (3). The estimated coefficient of the internal control index is 3.554 and passes the 5% significance test. This shows that, holding the other variables constant, the higher the internal control index (quality), the higher the WCP and the higher the WCMP. The research hypothesis H1 is again directly supported.

As these results show, WCMP is positively associated with the internal control index (quality) whether it is represented by DWC or by WCP: the fewer the deficiencies in the internal control system, the higher the internal control quality and the better the WCMP. This conclusion is consistent with the theoretical analysis and research hypothesis of the paper that premium internal control can directly or indirectly and continuously improve the WCMP of Chinese listed companies.
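For readers who wish to reproduce the model-selection step, the sketch below contrasts fixed- and random-effects estimates and computes the classic Hausman statistic by hand from the coefficient and covariance differences. The data objects are the same hypothetical placeholders as in the earlier sketches, and only entity effects are used here so that the two specifications share identical regressors.

import numpy as np
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

exog_vars = ["ICI", "SIZE", "ROA", "LEV", "Growth"]
y, X = panel["DWC"], panel[exog_vars]

fe = PanelOLS(y, X, entity_effects=True).fit()
re = RandomEffects(y, X).fit()

# Hausman statistic: H = d' [V_FE - V_RE]^(-1) d, with d = b_FE - b_RE
d = fe.params - re.params
V = fe.cov - re.cov
H = float(d.T @ np.linalg.inv(V) @ d)
p = stats.chi2.sf(H, df=len(d))
print(f"Hausman H = {H:.2f}, p = {p:.4f}")  # a small p favours the fixed effect model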
It can also be seen that the impact of internal control on WCMP is realized in two ways. First, by shortening DWC: the constraints imposed by the internal control system through a series of rules shorten the DWC, which is of great significance for optimizing a company's working capital management decisions. Second, by improving WCP: apart from the direct constraints of the internal control system, WCP can be improved through information disclosure and other channels of information transfer both inside and outside the enterprise. Together, these two mechanisms upgrade the WCMP of Chinese listed companies.

Further Empirical Test

Conventional measures of WCMP, which rest on current assets turnover indices such as DSO and DIO, have played a significant role in empirical research on working capital management. As the scope of working capital management has expanded, these conventional indices have become mere components of DWC, yet they still play an important part in evaluating WCMP. Therefore, to ensure the prudence of the conclusions obtained here, and to clarify how internal control relates to the individual components of WCMP, the paper ran further regressions of DSO, DIO, and DPO on internal control. The required regression model is:

Component_{i,t} = β_0 + β_1 ICI_{i,t} + β_2 SIZE_{i,t} + β_3 ROA_{i,t} + β_4 LEV_{i,t} + β_5 Growth_{i,t} + μ_i + λ_t + ε_{i,t}

where Component denotes DSO, DIO, and DPO in models (4), (5), and (6), respectively.

Table 6 presents the multiple regression results for the internal control index and the components of WCMP. According to the F tests and Hausman tests, the fixed effect model should be used for models (4), (5), and (6). The regression coefficients of the internal control index in the DSO, DIO, and DPO equations are -118.393, -50.129, and -42.322, respectively, and are significant at the 1%, 10%, and 1% levels. This shows that, with other influencing factors held constant, the internal control index is negatively correlated with DSO, DIO, and DPO; that is, implementing high-quality internal control significantly shortens a listed company's DSO, DIO, and DPO.

Combined with the regression results for model (2), it is easy to see that the negative relation between internal control and DWC is achieved not by shortening DSO and DIO while prolonging DPO, but by simultaneously shortening DSO, DIO, and DPO, with DPO changing by the smallest magnitude. Indeed, in model (6) in Table 6 the absolute value of the ICI coefficient for DPO is 42.322, much smaller than the corresponding values of 118.393 for DSO in model (4) and 50.129 for DIO in model (5). This result is similar to the findings on WCMP by Kong et al. (2009) and on accounting prudence and WCMP by Du (2014).
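The component regressions in models (4)-(6) reuse the same specification with different dependent variables, which can be expressed as a short loop; again, panel and the column names are hypothetical placeholders rather than the authors' code.

from linearmodels.panel import PanelOLS

exog_vars = ["ICI", "SIZE", "ROA", "LEV", "Growth"]
for component in ["DSO", "DIO", "DPO"]:   # models (4), (5), (6)
    res = PanelOLS(panel[component], panel[exog_vars],
                   entity_effects=True, time_effects=True).fit()
    print(component, round(res.params["ICI"], 3), round(res.pvalues["ICI"], 4))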
DISCUSSION

Using panel data on A-share main-board companies listed on the Shanghai and Shenzhen stock exchanges from 2004 to 2013, the paper employed panel-based multiple regression to conduct an empirical study of the relationship between internal control and WCMP. Drawing on the Application Guidelines of Enterprise Internal Control and stakeholder theory, the paper first summarized the direct and indirect impact of internal control on WCMP and proposed the core research hypothesis H1: internal control is positively correlated with WCMP. The subsequent multiple regression analysis established that: (1) with either DWC or WCP as the proxy for WCMP, internal control and WCMP are positively related; that is, better internal control leads to shorter DWC or higher WCP and thus to better WCMP; and (2) according to the further regression analysis, the negative relation between internal control and DWC is realized not by shortening DSO and DIO while prolonging DPO, but by simultaneously shortening DSO, DIO, and DPO, with the change in DPO being the smallest. The findings remedy gaps in the research on the economic consequences of internal control and broaden the connotation and extension of those consequences.

CONCLUSION

The results of this paper show that, through in-depth accounting training and education, internal control can help an enterprise's knowledge management system play an important role. Building on the optimization of knowledge structures and knowledge learning, the quality of enterprise internal control can be upgraded and improved in an orderly way and then brought to bear on the enterprise's working capital management performance. This constitutes a complete research chain, and the conclusions not only enrich the research literature in the fields of accounting education and knowledge management, but also provide an optimized path for enterprises to improve their working capital management performance.
The research in this paper provides important guidance for reversing the deterioration of working capital management among Chinese listed companies. Specifically: (1) It offers a practical route to upgrading WCMP. Conventional corporate governance strategies such as incentives and constraints, accounting control, information disclosure, and corporate culture constraints are no longer adequate for managing working capital in the modern sense, especially now that working capital management extends to the supply chain channel. The paper has shown that the implementation of internal control plays a pivotal role in covering the full scope of working capital management and in upgrading WCMP. Against the background of Chinese government departments striving to build the internal control system, implementing internal control is undoubtedly a practical and low-cost route to highly efficient working capital management. (2) It is suggested that companies take advantage of the signal transmission function of internal control information, so as to arrange working capital activities rationally, from short-term financing plans, production plans, and sales plans to trade credit. The quality of working capital activities exerts a profound influence on a company's risks and revenues. The results show that disclosure of high-quality internal control information can lead the aforementioned stakeholders to change their behavior by providing a looser credit environment, more generous trade credit, and useful information on product demand, all of which supports the company's working capital management practice. A company with relatively high-quality internal control is therefore able to schedule its working capital management flexibly and can maximize WCMP while balancing risks and earnings.

Table 3. Difference examination between state-owned and non-state-owned subsamples.
Table 4. Multiple regression results of WCMP (DWC) and the internal control index in model (2).
Table 6. Regression results of the components of DWC and the internal control index.
2018-12-12T17:59:45.380Z
2017-10-04T00:00:00.000
{ "year": 2017, "sha1": "2c839edd5ac4baafef6af36b0b34e2df180ecc8c", "oa_license": "CCBY", "oa_url": "http://www.ejmste.com/pdf-78279-14748?filename=Accounting%20Education_.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2c839edd5ac4baafef6af36b0b34e2df180ecc8c", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
58536267
pes2o/s2orc
v3-fos-license
Potential Application of Yokukansan as a Remedy for Parkinson's Disease

Parkinson's disease (PD), the second most common progressive neurodegenerative disorder, is characterized by complex motor and nonmotor symptoms. The clinical diagnosis of PD is defined by bradykinesia and other cardinal motor features, although several nonmotor symptoms are also related to disability, an impaired quality of life, and shortened life expectancy. Levodopa, which is used as a standard pharmacotherapy for PD, has limitations including a short half-life, fluctuations in efficacy, and dyskinesias with long-term use. There have been efforts to develop complementary and alternative therapies for incurable PD. Yokukansan (YKS) is a traditional herbal medicine that is widely used for treating neurosis, insomnia, and night crying in children. The clinical efficacy of YKS for treating behavioral and psychological symptoms, such as delusions, hallucinations, and impaired agitation/aggression subscale and activities of daily living scores, has mainly been investigated in the context of neurological disorders such as PD, Alzheimer's disease, and other psychiatric disorders. Furthermore, YKS has previously been found to improve clinical symptoms such as sleep disturbances, neuropsychiatric and cognitive impairments, pain, and tardive dyskinesia. Preclinical studies have reported that the broad efficacy of YKS for various symptoms involves its regulation of neurotransmitters including GABA, serotonin, glutamate, and dopamine, as well as the expression of dynamin and glutamate transporters, and changes in glucocorticoid hormones and enzymes such as choline acetyltransferase and acetylcholinesterase. Moreover, YKS has neuroprotective effects at various cellular levels via diverse mechanisms. In this review, we focus on the clinical efficacy and neuropharmacological effects of YKS. We discuss the possible mechanisms underpinning the effects of YKS on neuropathology and suggest that the multiple actions of YKS may be beneficial as a treatment for PD. We highlight the potential that YKS may serve as a complementary and alternative strategy for the treatment of PD.

Introduction

Parkinson's disease (PD) is a chronic, progressive neurodegenerative disorder characterized by neuronal loss in the substantia nigra resulting in striatal dopamine deficiency [1]. PD is the second most common neurodegenerative disorder and occurs in 2-3% of people older than 65 years [1]. PD is typified by motor symptoms such as tremor, rigidity, bradykinesia, and postural instability. Additionally, most patients with PD experience nonmotor symptoms such as sleep disorders, cognitive impairments, disorders of mood and affect, autonomic dysfunction, sensory symptoms, and pain [1]. Thus, PD requires continued treatment to prevent deterioration of the quality of life. The gold standard therapy for PD is levodopa (L-DOPA), although the long-term use of L-DOPA and dopamine agonists causes diminished efficacy and side effects such as motor complications and neuropsychiatric symptoms, among others.

Clinical Effects of YKS on PD-Like Symptoms

Several clinical studies have identified effects of YKS on PD-like symptoms in various neurological disorders (Table 1).

Sleep Disturbances. Sleep disruption in PD starts early in the disease progression and is caused by multiple factors, such as abnormalities in primary sleep architecture, nocturia, and restless legs syndrome causing arousal.
Relevant subcategories of sleep disorders are rapid eye movement (REM) sleep behavior disorder (RBD), represented by an absence of REM atonia and dream-enacting behavior, and excessive daytime sleepiness [61]. In normal healthy adults, Yokukansankachimpihange (YKSCH), which comprises YKS and two additional herbs (compared to Anchusan), increased total sleep time and sleep efficiency based on polysomnography (PSG) recordings [7]. Additionally, YKS has been reported to be beneficial for sleep disturbance: it ameliorated sleep disorders as assessed by the neuropsychiatric inventory (NPI) and actigraphic evaluations in patients with Alzheimer's disease (AD) [5] and improved sleep quality as assessed via PSG and the Pittsburgh Sleep Quality Index in patients with dementia [6]. YKS also suppressed RBD, which is characterized by parasomnia, an absence of REM atonia, and dream-enacting behavior [8]. Collectively, these findings suggest that YKS may have therapeutic effects on insomnia, which is a nonmotor symptom of PD.

Neuropsychiatric and Cognitive Impairments. The neuropsychiatric and cognitive symptoms of PD include anxiety, depression, hallucinations, and cognitive deficits [61]. In patients with PD or PD with dementia (PDD), administration of YKS for 4 or 12 weeks improved the total NPI score, which evaluates BPSD, and the hallucinations subscale [2,9]. The long-term administration of YKS (12 weeks) also improved the anxiety and apathy subscale scores [2]. Based on the Mini-Mental State Examination (MMSE), used to assess cognitive function, YKS treatment produced slight improvements in outcomes in patients with PDD, but not in those with PD. Additionally, treatment with YKS did not alter motor function based on the Unified Parkinson Disease Rating Scale-III (UPDRS III), which determines mobility in PD, and the Hoehn-Yahr score, which evaluates PD severity [2,9]. In four clinical studies, NPI and Neuropsychiatric Inventory-Nursing Home version total scores were improved and MMSE scores were unchanged in patients with dementia treated with YKS for 4 or 8 weeks [6,[10][11][12]. Additionally, in four clinical studies of the effects of YKS treatment for 4 or 12 weeks in patients with AD, total NPI scores improved in three [14][15][16]. Furthermore, NPI Brief Questionnaire Form (NPI-Q) scores, a simpler evaluation tool for BPSD, did not change in a randomized placebo-controlled multicenter trial [13]; the NPI-Q may thus be inappropriate for evaluating the effect of YKS treatment in mild BPSD. Additional studies have revealed that MMSE, Zarit Burden Interview (ZBI), and Self-rating Depression Scale (SDS) scores are not improved by YKS treatment [13][14][15][16]. This lack of improvement in the ZBI, which evaluates the burden on caregivers, and in the SDS, which evaluates caregivers' depression, might have been due to the relatively short duration (4 weeks) of YKS administration in the above-mentioned studies [15]. These studies did reveal differences in the subscale items across studies in patients with PD, dementia, and AD, as well as improvements in total NPI scores and in the specific NPI subscales that measure neuropsychiatric symptoms [13][14][15][16]. However, YKS did not effectively improve cognitive or motor function. In addition, the outcome on a specific evaluation index depends on the duration of YKS administration.
In patients with vascular dementia, the effect of YKS was similar to that in patients with PD, dementia, and AD, with improvements in NPI but no changes in MMSE, the Barthel Index for activities of daily living, or the Disability Assessment for Dementia [17]. In very-late-onset schizophrenia-like psychosis, YKS treatment significantly improved all measures of psychotic symptomatology, including the psychiatric rating scale, clinical global impression scale-severity, and positive and negative syndrome scale scores, but did not significantly alter abnormal movements, as determined by the Simpson-Angus scale, Barnes Akathisia scale, and the involuntary movement scale [18]. Consequently, the therapeutic effects of YKS predominantly alter neuropsychiatric symptoms across various neurological disorders and may thus improve BPSD clinically. Several studies have examined improvements in cognitive function following YKS treatment. In most studies, YKS treatment did not affect MMSE scores (as a measure of cognitive function), while it did improve cognitive function in daily life and the Brief Assessment of Cognition in Schizophrenia, Japanese Version score in a schizophrenia case report [19]. Additionally, this effect of YKS may be mediated by serotonin (5-HT) transmission and the amelioration of aberrant glutamate transmission [19]. As mentioned above, administration of YKS induced slight improvements in cognitive function in patients with PDD [9]. Future studies should examine the effects of YKS on cognitive function using a variety of evaluation indexes.

Pain. Pain is a common symptom experienced by patients with PD and is associated with motor fluctuations and early morning dystonia [61]. Central neuropathic pain has been described in patients with PD, but has a low incidence. Additionally, while L-DOPA does not exert an analgesic effect on pain [62], YKS has been found to be clinically effective in patients with neuropathic pain (significant decreases in visual analogue scale and pain scores after treatment) [20]. However, further studies are needed to validate the effects of YKS and its underlying mechanism(s) of action in the context of pain.

Tardive Dyskinesia. PD is characterized by bradykinesia and cardinal motor features such as a resting tremor, rigidity, and postural instability [1]. Although the presence of tardive parkinsonism is controversial, drug-induced parkinsonism is not uncommon in patients treated with dopamine receptor-blocking agents [63]. TD is characterized by abnormal, involuntary, irregular choreoathetoid muscle movements in the head, limbs, and trunk. Critically, YKS improved TD in patients with schizophrenia who had neuroleptic-induced TD [21]. Administration of YKS in patients with schizophrenia similarly improved their TD and psychotic symptoms [21].

Protective Effects of YKS on PD-Like Symptoms in Animal Models

Several preclinical studies have attempted to clarify the effects of YKS on PD-like symptoms using various animal models of neurological disorders (Table 2).

Sleep Disturbances. A previous study using the pentobarbital-induced sleep test and electroencephalogram analysis reported sleep promotion via regulation of GABA-A receptors and GABA content with 5-hydroxytryptophan [64]. YKS enhanced pentobarbital-induced sleep in socially isolated mice, which have shorter sleeping times than group-housed mice.
This effect of YKS was reversed by bicuculline (a GABA-A receptor antagonist), suggesting that the GABA-A-benzodiazepine receptor complex is involved in the sleep-promoting effect of YKS [22]. Additionally, a recent study showed that a drop in body temperature is responsible for promoting sleep and that YKS has a sleep-promoting effect via decreases in body temperature, based on thermography used to screen sleep-inducing substances [23].

Depression. Depression affects 10-45% of patients with PD and is the most important predictor of quality of life in patients with PD [61]. Chronic stress is a well-known risk factor for depression [24]. Furthermore, brain glutamatergic neurotransmission is involved in the pathogenesis of stress-related depression. The excitatory amino acid transporter (EAAT), which modulates glutamate levels in the synaptic cleft, is decreased in the hippocampus of stress-maladaptive mice, an effect that was ameliorated by YKS. YKS also inhibited the decreased expression of EAAT2 in the hippocampus of stress-maladaptive mice, as found using western blot analysis, and improved depressive symptoms [24].

Anxiety. Anxiety is a common symptom in PD that can manifest as panic attacks and phobias [61]. Previous studies have reported an anxiolytic effect of YKS in animal models. In the elevated plus maze (EPM) test, administration of YKS or YKSCH attenuated freezing duration [25] and increased the time spent in the open arm [26,27,29,31], indicating an amelioration of anxiety-like behavior. In the contextual fear conditioning (CFC) test, YKS reduced freezing behavior (an anxiety response) [27,30]. Based on locomotor activity measurements, YKS improved anxiety-related responses, such as increased defecation [26], reduced rearing behavior in the open field test [28], and reduced time in the dark box in the light/dark test [29]. To elucidate the mechanisms underlying the anxiolytic effects of YKS, several studies have investigated YKS-induced changes in neurotransmitter systems, such as dopamine and serotonin, as well as in c-Fos expression as a marker of neuronal activation. Aging is known to increase anxiety, as reflected in increased defecation and decreased time spent in the open arm of the EPM, as well as altered extracellular concentrations of serotonin and dopamine [26]. Administration of YKS in aged rats increased extracellular concentrations of serotonin and dopamine in the PFC [26]. Several studies have investigated changes in the 5-HT 1A and 5-HT 2A serotonin receptors following YKS administration [27,28,30]. Furthermore, the anxiolytic effects of YKS in the CFC test were antagonized by a 5-HT 1A receptor antagonist (WAY-100635) [27], and 5-HT 1A receptor density in the PFC of socially isolated mice was significantly increased by YKS [28]. Moreover, YKS had an antagonistic effect on wet-dog shakes induced by a 5-HT 2A agonist, 1-(2,5-dimethoxy-4-iodophenyl)-2-aminopropane (DOI) [29]. Additionally, cotreatment with YKS and fluvoxamine (5 mg/kg, i.p.) specifically decreased 5-HT 2A receptor expression in the PFC [30]. Therefore, the anxiolytic effects of YKS may depend on 5-HT 1A receptor signaling and decreased 5-HT 2A receptor expression. Additionally, c-Fos expression has been examined in brain circuits related to anxiety, depression, and stress responses; for example, c-Fos expression was increased in the PFC by YKS but reduced in the prelimbic cortex and amygdaloid nuclei [31].
These results suggest that the effects of YKS are associated with attenuated neuronal activity in the PFC and amygdala [31].

Hallucinations. Hallucinations are present in 30-60% of patients with PD, caused by the side effects of treatment for PD and neuronal degeneration of the pedunculopontine nucleus, locus coeruleus, and raphe nuclei [65]. Scarce evidence exists regarding the effect of YKS on hallucination-like symptoms in animal models. Recently, isolation stress was found to enhance the 2,5-dimethoxy-4-iodoamphetamine (DOI; 5-HT 2A receptor agonist)-induced head twitch response, which is considered a hallucination-like symptom in mice [32]. Furthermore, 5-HT 2A receptors seem to be involved in hallucinations, based on 5-HT 2A receptor-evoked head twitches in mice [66], an effect that is increased by elevated corticosterone levels during chronic isolation stress [67]. Several behavioral studies have confirmed the involvement of 5-HT 2A receptor signaling in hallucination-like symptoms. For example, the DOI-induced head twitch response is induced by a 5-HT 2A receptor agonist and suppressed by a 5-HT 2A receptor antagonist [32]. Moreover, wet-dog shakes and head twitches are both evoked by the administration of 5-HT 2A receptor agonists [66,68,69]. In isolation-stressed mice, YKS treatment decreased hallucination-like behaviors and 5-HT 2A receptor density in the PFC [32,33]. Therefore, YKS may improve hallucinations, although it is necessary to develop animal models capable of differentiating between the symptoms of hallucination and anxiety to better understand these outcomes.

Aggressive Behavior. Aggression is a behavioral and psychological symptom of both dementia and PD. In various animal models, aggressive behavior is induced by social isolation, injection of amyloid-β (Aβ), cholinergic degeneration of the nucleus basalis of Meynert (NBM; an area of the substantia innominata of the basal forebrain containing acetylcholine [ACh] and choline acetyltransferase [ChAT]), para-chloroamphetamine (PCA) injections, and a zinc-deficient diet. Aggression is typically assessed using aggression and resident-intruder tests [34][35][36][37][38][39]. Alterations in dopaminergic and noradrenergic systems have also been implicated in aggression [70]. YKS ameliorated methamphetamine-induced hyperlocomotion mediated by the dopaminergic system (methamphetamine increases extracellular dopamine) [34]. Additionally, the 5-HT 1A receptor exhibits agonistic action via YKS [36], and ionotropic glutamate and GABA-A receptors are involved in social isolation-induced aggressive behavior [38]. YKS treatment ameliorated aggression via 5-HT 1A receptor stimulation [36,37] and increased glutamate and GABA concentrations in the resident-intruder test [38,39]. Furthermore, glucocorticoids are known to be involved in the regulation of neurotransmission: they enhance the excitability of glutamatergic neurons and increase cytosolic Ca2+ concentrations, which is consequently related to excitotoxicity in the hippocampus [71]. Among the constituents of YKS, geissoschizine methyl ether (GM), a component of Uncaria hook, and 18β-glycyrrhetinic acid (GA), a component of glycyrrhizin, ameliorated increases in glutamate release via attenuation of the intracellular Ca2+ increases evoked by KCl [40]. Thus, YKS may ameliorate social isolation-induced aggressive behavior by attenuating glucocorticoid secretion [39].

Cognitive Impairments.
YKS treatment has improved cognitive function in various animal models of diseases such as AD, cerebral ischemia, schizophrenia, aging, and thiamine deficiency [41][42][43][44][45][46][47]. Several studies have examined the potential cognition-enhancing effects of YKS via its effects on the cholinergic system, which plays an important role in cognition [72]. The death of hippocampal pyramidal neurons induced by repeated ischemia (RI) involves downregulated ACh signaling and induces memory impairments [73]. YKS treatment, however, plays a neuroprotective role in preventing apoptosis of pyramidal neurons in CA1 and improves memory impairments by increasing ACh levels in the dorsal hippocampus [42]. Elevated [K+]-evoked ACh release, dynamin 1 expression implicated in presynaptic vesicular recycling, ChAT activity, and decreased acetylcholinesterase (AChE), the enzyme that degrades ACh, are also involved [42]. Elevated [K+] evokes the release of stored ACh via increased presynaptic vesicular recycling [74] and ChAT activity [75]. Interestingly, the combination of Aβ oligomers and cerebral ischemia in rats attenuated this elevated [K+]-evoked ACh release and mimics the cognitive impairment of early AD [43]. Thus, YKS treatment may restore elevated [K+]-evoked ACh release and thereby alleviate some RI-induced memory deficits. Dynamin 1, a presynaptic protein implicated in early synaptic deficiencies [76], is decreased in a model of cerebral ischemia and was previously associated with memory loss prior to apoptotic neuronal loss in early AD. YKS restored dynamin 1 expression and increased ACh release [43]. ACh levels are also modulated by ChAT or AChE [42]. Olfactory bulbectomy (OBX) in mice causes olfactory loss, increased locomotor activity, aggressiveness, and impaired learning and memory. YKS treatment improved the cognitive deficits that follow degeneration of the cholinergic system induced by OBX [44]. Furthermore, YKS treatment counteracted the downregulation of ChAT and muscarinic M1 receptor expression in the hippocampus of mice with OBX [44]. Dopaminergic and glutamatergic systems are also involved in cognitive impairment [45,77]. YKS treatment may further ameliorate cognitive impairments by modulating dopaminergic mechanisms, since dopamine D1 receptor antagonism reduces the ameliorative effect of YKS, and by inhibiting glutamate excitotoxicity, for example by inhibiting extracellular glutamate elevations in the ventral posterior medial thalamus of thiamine-deficient rats [45,46]. Moreover, YKS inhibits inflammatory responses, oxidative damage, and neuronal death via inhibition of microglial activation and oxidative DNA damage and via promotion of neurogenesis in the hippocampal dentate gyrus [47,48]. Microglial activation and inflammation promote expansion of certain cell populations [78] and may be detrimental to the survival of new hippocampal neurons. Therefore, YKS treatment may ameliorate cognitive deficits via antiapoptotic and anti-inflammatory actions [47,48].

Pain. Previous studies have attempted to determine the effects of YKS on neuropathic pain. For instance, YKS treatment inhibited mechanical allodynia to a brush in the von Frey filament test [49,50] and cold allodynia in the acetone test [49], in both a rat model of chronic constriction injury [49] and a mouse model of partial sciatic nerve ligation (PSL), two neuropathic pain models [50].
Glutamatergic neurotransmission and spinal IL-6 expression are known to play important roles in neuropathic pain [49]. Therefore, YKS-induced alleviation of neuropathic pain may be mediated via attenuation of glutamate levels in cerebrospinal fluid dialysate through blockade of glutamate transporters in the spinal cord of rats with chronic constriction injury [49] and via reduced expression of spinal IL-6 mRNA in mice with PSL [50].

Tardive Dyskinesia. After injection of haloperidol decanoate to induce vacuous chewing movements (VCMs) in long-acting depot neuroleptic-treated rats, YKS ameliorated VCMs (single mouth openings in the vertical plane), an index of TD in animal models [51]. Furthermore, YKS treatment inhibited the increases in extracellular glutamate concentration and the decreased glutamate transporter (GLT-1) mRNA expression in the striatum of haloperidol decanoate-treated rats [51]. However, TD is not a major motor symptom of PD [63], and it is necessary to validate the effects of YKS in animal models that demonstrate the cardinal motor symptoms of PD.

In Vitro Neuroprotective Effects of YKS

Previous studies have revealed multiple mechanisms by which the neuroprotective effects of YKS act in various in vitro systems (Table 3).

Neuroprotection against Cytotoxicity. Corticosterone (CORT) inhibits cell proliferation and induces cytotoxic effects by modulating transcriptional responsivity. Plasma CORT levels increase in response to stressful conditions and may thus underlie neurological disorders, including neurosis and depression, via stimulation of endogenous stress responses [52]. In a previous study, YKS was demonstrated to inhibit the increased aggressive behavior and the elevated CORT and orexin levels of rats stressed by individual housing [79]. In an in vitro system, YKS was also found to have a neuroprotective effect on CORT-induced cytotoxicity in mouse hippocampal neurons, potentially by ameliorating CORT-induced inhibition of glucose metabolism [52]. In addition, Aβ is known to induce cytotoxicity and to serve as a causative molecular mechanism underlying AD [41]. YKS increased cell viability against Aβ-induced cytotoxicity in a primary culture of rat cortical neurons [53,56]. Therefore, YKS may exert neuroprotective effects against both CORT- and Aβ-induced cytotoxicity. YGS40, an active fraction of YGS, prevented oxidative stress by decreasing cytotoxicity, as confirmed using MTT and lactate dehydrogenase (LDH) assays. YGS40 also protected against H2O2-induced apoptosis in PC12 cells; hydrogen peroxide (H2O2), the main component of reactive oxygen species (ROS), can cause oxidative stress and induce apoptosis. YGS40 prevented mitochondrial damage, such as loss of mitochondrial membrane potential (MMP), during H2O2-induced apoptosis. Furthermore, YGS40 preserved the activity of the intracellular antioxidant enzyme superoxide dismutase and decreased levels of malondialdehyde, a marker of lipid peroxidation [55].

Neurotransmission. To overcome the limitations of antipsychotic medicines, such as extrapyramidal symptoms and other adverse events, YKS has been used therapeutically for BPSD. The serotonergic system plays an important role in BPSD pathophysiology and is implicated in cognitive dysfunction. Human recombinant 5-HT 1A receptors were expressed in the membrane of Chinese hamster ovary (CHO) cells and [3H] 8-OH-DPAT was used as a competitive radioligand to assess 5-HT 1A receptor binding. YKS prevented radioligand binding to 5-HT 1A receptors and had a partial agonistic effect on 5-HT 1A receptors in CHO cells [56].
These results may shed light on the neuropharmacological mechanisms of YKS and further suggest that YKS may be a therapeutic candidate for BPSD.

Relevance of YKS to Autonomic Dysfunctions

The nonmotor symptoms of PD (i.e., cardiovascular and urinary dysfunctions induced by dysautonomia) have been studied in both patients and animal models [61,[80][81][82]. The increased prevalence of cardiovascular dysfunction in early-stage PD patients has been confirmed by evidence of reduced total power in spectral analysis of heart rate at rest and by observations of mild exercise intolerance in these patients [80]. Additionally, urinary dysfunction in PD includes symptoms such as urgency, frequency, nocturia, and urge incontinence [83]. Dysautonomia is an important symptom, a primary complaint of PD patients, and significantly impairs their quality of life. However, little is known about the effects of YKS on clinical and preclinical autonomic dysfunction in the cardiovascular and urinary systems. Only one previous case study of the effectiveness of YKS in nocturnal enuresis in children has been conducted. Interestingly, in a child with monosymptomatic nocturnal enuresis who did not respond to desmopressin, the primary therapy for nocturnal enuresis, YKS combined with desmopressin was shown to be effective [84]. However, this case of pediatric monosymptomatic nocturnal enuresis featured no other lower urinary tract symptoms and no history of bladder dysfunction or PD-like symptoms. Given this limitation, the effects of YKS on dysautonomia in PD patients require further study.

Conclusion

PD is characterized by motor symptoms (e.g., tremor, rigidity, bradykinesia, and postural instability), nonmotor symptoms (e.g., sleep disorders, cognitive impairments, disorders of mood and affect, autonomic dysfunction, sensory symptoms, and pain), and drug-induced adverse events. L-DOPA, the gold standard drug for the treatment of PD, has a short half-life, resulting in discontinuous drug delivery and quick dissipation of its effects. Furthermore, L-DOPA is known to cause complications such as motor response oscillations and drug-induced dyskinesia [1]. Moreover, antipsychotic drugs for BPSD often induce extrapyramidal symptoms and increased mortality among elderly patients [53,56]. The development of complementary and alternative therapies may thus help to mitigate the symptoms of PD, circumvent the need to increase standard medication doses, and minimize adverse events related to conventional medication use. Therapeutic applications of YKS include the treatment of neurosis, insomnia, and night crying in children; some of these symptoms overlap with the nonmotor symptoms of PD. YKS may have therapeutic effects on PD, although many clinical and preclinical studies of YKS have addressed other neurological disorders. Primarily, YKS has been shown to improve NPI scores, a measure of BPSD symptoms, in patients with dementia and PD [2,[9][10][11][14][15][16][17]. The neuropharmacological mechanisms underlying YKS's actions include modulation of neurotransmitter systems, such as those for serotonin, dopamine, glutamate, and GABA, as well as neuroprotection [24-30, 32-34, 36-39, 52, 53]. Apart from BPSD, limited data are available on the effects of YKS on the symptoms of PD, including autonomic dysfunction (mainly orthostatic hypotension, urogenital dysfunction, constipation, and hyperhidrosis) and sensory symptoms (most prominently, hyposmia).
It remains necessary, however, to verify the complementary therapeutic effects of YKS on the various symptoms of PD before it can be used with confidence to overcome the limitations of current PD therapeutics (Figure 1). Conflicting reports on the effects of YKS have been made; for example, there is little evidence for YKS-mediated improvement in cognition among patients with PD in clinical trials, while positive results have been reported in preclinical studies. To resolve these differences, further research is needed to select an optimal drug dosage, period of administration, and evaluation index for use in clinical trials. Furthermore, preclinical models that more faithfully recapitulate the human PD condition are also needed. Before YKS is prescribed to patients with PD, its potential adverse effects must be considered and further research on them should be performed.

Conflicts of Interest

The authors declare that there are no conflicts of interest.
2019-01-23T21:23:07.638Z
2018-12-20T00:00:00.000
{ "year": 2018, "sha1": "a4ae826c426903a39d9656dde702d51d4276cd56", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2018/1875928", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9bd1f57b988841133866f313a1b18fd3220c658f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238218406
pes2o/s2orc
v3-fos-license
Effects of common Gram-negative pathogens causing male genitourinary-tract infections on human sperm functions

Male genitourinary tract (MGT) bacterial infections are considered responsible for 15% of male infertility, but the mechanisms underlying decreased semen quality are poorly known. We evaluated in vitro the effect of strains of Gram-negative uropathogenic species (two E. coli strains, three K. pneumoniae strains, P. aeruginosa and E. cloacae) on motility, viability, mitochondrial oxidative status, DNA fragmentation and caspase activity of human spermatozoa. All strains, except P. aeruginosa, significantly reduced sperm motility, with variable effects. Sperm Immobilizing Factor (SIF) was largely responsible for the deteriorating effects of the E. coli strains on sperm motility, since these effects were completely reverted by knockout of the SIF-coding recX gene. Sequence alignment for RecX showed the presence of highly homologous sequences in K. pneumoniae and E. cloacae but not in P. aeruginosa. These results suggest that, in addition to E. coli, other common uropathogenic Gram-negative bacteria affect sperm motility through RecX products. In addition to sperm motility, the E. coli strain ATCC 35218 also affected sperm viability and induced caspase activity, oxidative stress and DNA fragmentation, suggesting interspecies variability in the amount and/or type of the spermatotoxic factors produced. In general, our results highlight the need for a careful evaluation of semen infections in the diagnostic process of the infertile man.

Male factor is responsible for 40-50% of couple infertility, and it is estimated that male infertility affects up to 15% of couples 1. The most common cause of male infertility is poor semen quality, which may be due to alterations of testicular function or may originate during sperm transit in the male genital tract. Acute and chronic inflammation and infections are believed to be responsible for approximately 15% of cases of male infertility, likely because of a detrimental effect on spermatozoa, although the association between inflammation and infections and poor semen quality has not been clearly defined 2. Enterobacterales spp. are common pathogens of the urogenital tract and may interfere with male fertility 3,4. As reported in the study by Boeri et al. 5, Enterobacteriaceae represent the second most frequent pathogens responsible for semen infections in a cohort of 1689 European male partners of primary infertility couples. Similar frequencies were found in subfertile men attending the outpatient clinic of the University hospital of Florence 6. Escherichia coli is one of the most frequent species found in human semen 6,7 and in genitourinary infections 8, in particular epididymitis 9. E. coli rapidly adheres to human spermatozoa in vitro, resulting in agglutination of spermatozoa. A profound decline in sperm motility is evident over time, caused by severe alterations in sperm morphology 7 and by the release of soluble spermatotoxic factors such as sperm immobilizing factor (SIF, 2). An association with oligoasthenozoospermia and male infertility has, however, also been reported for other Enterobacteriaceae, such as Klebsiella pneumoniae and Klebsiella aerogenes 10, and for Pseudomonas aeruginosa 5.

Evaluation of the direct effects of bacteria on sperm functions in vitro is of great help in understanding the role of infections in male infertility.
So far, most in vitro studies evaluating the effects of Enterobacteriaceae on human spermatozoa have employed E. coli strains as the pathogen (for review see 2,8). In addition, most studies were limited to evaluating the effect of E. coli on human sperm motility and viability, and were performed on highly motile selected sperm populations [11][12][13][14][15], which are poorly representative of the real environment in which bacteria present in the male genital tract may produce the damage. Whether other bacterial species, such as K. pneumoniae, K. aerogenes, Enterobacter cloacae, and P. aeruginosa, which commonly cause genitourinary tract infections (GUTI), affect human sperm motility or other sperm functions is not yet known. Although progressive motility and sperm viability are of fundamental importance in both natural and assisted reproduction, other sperm characteristics are necessary for fertilization and embryo development. In particular, spermatozoa must deliver intact DNA to the oocyte. Oxidative and apoptotic pathways may cause sperm DNA fragmentation (sDF; 16,17), the most common type of DNA damage found in human spermatozoa 18, which has a negative impact on both natural and assisted reproduction 19. The effects of bacteria on sperm oxidative and apoptotic pathways have been poorly investigated. Besides damaging sperm DNA by inducing fragmentation, base oxidation and mutations, oxidative stress, when present at high levels, can cause lipid peroxidation of the plasma membrane, produce modifications of sperm proteins impairing their functions, alter mitochondrial function and induce apoptosis 20. In turn, activation of apoptotic pathways may impact other sperm functions and, ultimately, lead to cell death 21. The inhibitory effect of E. coli on sperm motility has been attributed to the release of SIF 2,8; however, whether SIF is involved in the inhibitory effect of other bacterial species is presently less clear, nor is it known what role this factor plays in other sperm alterations due to bacterial infections. In the present study we selected bacterial strains belonging to potentially uropathogenic species (E. coli, K. pneumoniae, K. aerogenes and E. cloacae) that express the recX SIF-coding gene or highly homologous genomic sequences, and evaluated their effects on sperm motility, viability, mitochondrial oxidative status, DNA fragmentation and caspase activity in whole semen. P. aeruginosa, a less frequent pathogen of the male urogenital apparatus found with a prevalence of 10% in infertile couples 22, was also included in the study. In addition, we further explored the role of SIF in sperm motility by using an E. coli strain knocked out for the SIF-coding gene.

Results

Effect of bacterial strains on sperm motility. Whole semen samples were incubated with live bacterial cells from E. coli (ATCC 29522 and ATCC 35218), K. pneumoniae, P. aeruginosa, E. cloacae and K. aerogenes strains at a sperm/bacteria ratio of 1:10 23, and progressive and total motility were recorded after 1 (n = 13) and 3 h (n = 16). All bacterial strains, except P. aeruginosa ATCC 27853, induced a significant decrease in total and progressive motility after both 1 h (Fig. 1, panels A and B) and 3 h (Fig. 1, panels C and D) of incubation. Among the bacteria tested, the E. cloacae ATCC 13047 and E. coli ATCC 35218 strains were the most potent in reducing sperm progressive motility at both time points. A significant increase in the percentage of non-progressive motility of spermatozoa was detected in cultures with K.
quasipneumoniae ATCC 700603, K. aerogenes ATCC 13048 and E. cloacae ATCC 13047 strains (data not shown). Direct adhesion of bacterial cells to the sperm tails, as well as sperm agglutination, was observed in cultures with E. cloacae ATCC 13047 (Supplemental Figure S1 and video S1), whereas such effects were not observed with the other bacterial strains (data not shown).

Role of SIF on sperm motility. The 56 kDa Sperm Immobilizing Factor (SIF) is considered one of the factors responsible for the detrimental effects of E. coli on sperm motility 24. To further confirm the role of SIF in sperm motility, we performed experiments using E. coli MG1655 as the reference strain and its mutated counterpart JW2668, knockout (KO) for the recX gene 25. The JW2668 E. coli strain did not affect either progressive (Fig. 2A) or total (Fig. 2B) sperm motility after 3 h of incubation, whereas the wild-type MG1655 strain decreased both. To investigate whether the other motility-affecting strains were also able to produce SIF-homologous proteins, we performed tBLASTn and BLASTp sequence alignments using recX from E. coli K12 MG1655 as the reference sequence. The results of this analysis, shown in Fig. 3, clearly indicate highly homologous recX gene and SIF protein sequences among the strains of E. coli (both strains), K. pneumoniae (both strains), K. aerogenes and E. cloacae. Conversely, sequence homology with RecX was not detected in the genome of P. aeruginosa ATCC 27853 (not shown), a species belonging to the Pseudomonadaceae family.

Effect of bacterial strains on sperm viability. To investigate whether the changes in motility were due to a decrease in sperm viability, we evaluated the percentage of viable spermatozoa after 3 h of incubation with the different bacteria by two different techniques (eosin staining and Live/Dead Fixable Green Dead Cell Stain coupled to flow cytometry). Figure 4 shows that, with the exception of E. coli ATCC 35218, none of the bacterial strains induced a significant reduction in cell viability with either method (A, B). Similar results were obtained after 1 h of incubation (not shown). Neither the reference E. coli MG1655 strain (wild type) nor the E. coli JW2668 strain (KO for the recX gene) significantly affected sperm viability as evaluated by eosin staining (Fig. 4C). Since SIF was reported to reduce sperm viability at higher concentrations than those affecting motility 24, we collected supernatants from E. coli ATCC 25922 cultured at 100 and 300 × 10 6 /ml, and purified the fractions containing proteins with molecular weights (MW) ≥ 30 kDa, thus including the 56 kDa SIF. After incubation for 3 h with such fractions, a decrease in sperm viability was observed with the fractions obtained from 300 million bacteria (Fig. 4D), suggesting that the toxic effect depends on the rate of secretion of spermatotoxic factors and that this rate varies among strains of the same species.

Effect of bacterial strains on sperm oxidative stress, caspase activity and DNA fragmentation. Pathogenic strains of the Enterobacteriaceae family are known to produce toxins or metabolites that induce oxidative stress in infected cells 26. To understand whether the bacterial strains tested in our study could increase ROS production and sperm oxidative status, we evaluated mitochondrial oxidation using the fluorescent probe MitoSOX™ Red. With the exception of E.
coli ATCC 35218, none of the tested bacterial strains affected sperm mitochondrial ROS generation (Fig. 5A). Interestingly, E. coli ATCC 35218 also affected sperm viability (Fig. 4A,B), suggesting that generation of oxidative stress could be involved in inducing sperm death through activation of an apoptotic pathway 16 . To support this hypothesis, the activity of caspase 3 and caspase 7, an established marker of sperm apoptosis 27 , was measured. As shown in Fig. 5B, E. coli ATCC 35218 significantly increased the percentage of spermatozoa expressing caspase 3 and 7 activity. E. coli ATCC 25922 and K. aerogenes, used as control strains that did not affect sperm viability (Fig. 4A,B) or mitochondrial oxidation (Fig. 5A), did not induce caspase activation. E. coli ATCC 35218 also induced a significant increase of total and PI brighter sDF 28 , whereas the other tested bacterial strains did not affect these parameters (Fig. 5C). Discussion Although bacterial pathogens are frequently found in semen samples of infertile men, there is no consistent epidemiological link between pathogens and male infertility or altered semen parameters. However, a recent study 2 reported that bacteriospermia was directly related to 15% of infertility in men treated with assisted reproduction. To our knowledge, the present study is the first in which the in vitro effect of a wide panel of bacteria belonging to the Enterobacteriaceae family commonly found in infected semen 5 was evaluated at the same time and in the same semen samples. We found that almost all bacterial strains directly affect human sperm motility, whereas only the E. coli strain ATCC 35218 impaired sperm viability, induced mitochondrial oxidative stress and DNA fragmentation, and activated the apoptotic pathway. Even if the strains included in the study were not isolated from infertile males, K. quasipneumoniae ATCC 700603 (ST489) has been isolated from a urine sample, and E. coli ST73 and ST127 (the clones of ATCC 25922 and ATCC 35218, respectively) were recently associated with hospital- and community-acquired urinary tract infections 29,30 . In agreement with previous studies 11-14,31,32 , a reduction of sperm motility was observed following in vitro incubation with SIF-producing strains of E. coli. Our study extends these findings to other Enterobacteriaceae, whose in vitro effects on human spermatozoa have been less investigated. However, the detrimental effects on motility were highly variable and dependent on the strain. We demonstrated the presence of sequences highly homologous to the recX gene and SIF protein of E. coli MG1655 in the K. pneumoniae, K. aerogenes and E. cloacae strains, suggesting that the secretion of this toxin may act as a common molecular mechanism used by Enterobacteriaceae to immobilize sperm. This conclusion is reinforced by the use of an E. coli strain KO for the recX gene, which did not affect sperm motility compared to the wild type. In addition, P. aeruginosa, which is found in about 10% of infertile couples 22 and belongs to a different taxonomic order (Pseudomonadales), does not express a homologous recX gene and does not have the property to immobilize sperm in vitro. We noted that, although culture with the K. pneumoniae, K. aerogenes and E. cloacae strains significantly decreased sperm progressive motility, these strains did not completely immobilize spermatozoa but rather increased the percentage of in situ motility.
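The recX/SIF homology screening discussed above relied on standard BLAST alignments. As a minimal, hedged sketch of how such a protein-level comparison could be run locally, the Python snippet below drives the NCBI BLAST+ command-line tool blastp via subprocess; it assumes BLAST+ is installed and on the PATH, and the FASTA file names are hypothetical placeholders, not files from this study.

```python
# Sketch of a local protein-homology search analogous to the BLASTp
# comparison described above. File names are hypothetical; requires the
# NCBI BLAST+ suite (blastp) to be installed and on the PATH.
import subprocess

def blastp_vs_subject(query_faa: str, subject_faa: str) -> str:
    """Align a query protein (e.g. RecX of E. coli K-12 MG1655) against a
    subject proteome and return tabular hits (identity, length, e-value,
    bit score)."""
    result = subprocess.run(
        [
            "blastp",
            "-query", query_faa,        # RecX protein sequence, FASTA
            "-subject", subject_faa,    # proteome of the strain under test
            "-outfmt", "6 sseqid pident length evalue bitscore",
            "-evalue", "1e-10",
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical usage:
# hits = blastp_vs_subject("recX_MG1655.faa", "K_pneumoniae_ATCC13883.faa")
# print(hits)
```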
The incomplete immobilization observed with these strains suggests that complete immobilization is influenced by the quality or the quantity of SIF released by the bacterial strains. In addition, the E. coli strains used in our study did not determine sperm agglutination, which was observed only after incubation with E. cloacae. The effects on sperm progressive and total motility were already present after 1 h of incubation and did not vary substantially after 3 h for most bacterial strains, indicating that the effect on motility may be quite rapid. This result suggests that long waiting times before sperm manipulation in assisted reproduction laboratories may be detrimental when bacteria are present in the sample. We show that most of the Enterobacteriaceae tested here reduce motility without affecting sperm viability, with the exception of the E. coli strain ATCC 35218. This result was confirmed by using two different methods to assess sperm viability, a subjective one (eosin staining), according to the indications of the WHO 33 , and an objective one (staining with LIVE/DEAD™ Fixable Green Dead Cell Stain coupled to flow cytometric detection). The results were qualitatively similar, although viability evaluated after staining with LIVE/DEAD™ Fixable Green Dead Cell Stain was lower in all samples. It is possible that LIVE/DEAD™ Fixable Green Dead Cell Stain is more efficient than eosin in detecting unviable spermatozoa or that it also stains apoptotic cells committed to die. We should also consider that the subjective analysis is done on 200 spermatozoa whereas the flow cytometric analysis concerns 8,000 events and is thus likely more precise. Unlike motility, the effect of bacteria on viability appears to depend on the amount of SIF released in culture. In fact, when spermatozoa were incubated with the SIF-containing purified fraction from supernatants of cultures of 300 million E. coli ATCC 25922, a partial spermicidal effect was observed, in agreement with previous studies 24 . However, we cannot exclude that factors different from SIF, produced by E. coli ATCC 35218, could also be involved in viability impairment. Of note, our analysis revealed that different E. coli strains could have a spectrum of different effects on sperm functions, ranging from decreased motility to induction of oxidative stress, DNA damage and cell death. In particular, E. coli ATCC 35218 was the only strain, among those tested, able to increase mitochondrial ROS production, activate apoptotic pathways and induce sperm DNA fragmentation. Oxidative stress may be both a consequence and an inducer of sperm apoptosis 16,34 , and may also be involved in increasing DNA fragmentation 16,35 . In particular, oxidative stress appears to be the main inducer of sDF after spermiation and during in vitro incubations 16,[36][37][38] . The inducing effects of E. coli isolates on oxidative status and apoptosis were reported previously using different experimental conditions compared to our study 14,15,39 . In particular, Boguen et al. 15 , by comparing three E. coli strains, demonstrated that the haemolytic strain shows a greater detrimental effect on spermatozoa than non-haemolytic ones, including the E. coli strain ATCC 25922. A comparative genomic analysis of the E. coli strains used in our study revealed the unique presence in E. coli ATCC 35218 of the chromosomal HlyE gene coding for hemolysin E (data not shown), a toxin with a short half-life that is known to impair membrane integrity in other cell types 40 .
Therefore, it is possible that the detrimental effects of the E. coli strain ATCC 35218 are mediated by more than one spermatotoxic factor 41 . Reduction of sperm viability and motility, as may occur in the case of semen infections, may strongly affect reproductive performance both in natural and assisted conception, as progressive motility is the necessary prerequisite to reach the oocyte and to penetrate its vestments, whereas viability is of fundamental importance for correct fertilization. In particular, motility is the primary sign used to determine sperm viability during intracytoplasmic sperm injection (ICSI). If no motility is present in a sample, techniques to identify viable spermatozoa can be used by embryologists 42 . In the case of E. coli ATCC 35218, where the reduction of motility may exceed the reduction of viability (89% vs 45% according to viability evaluated by LIVE/DEAD™ Fixable Green Dead Cell Stain), viable spermatozoa may show increased oxidative stress and/or activation of the apoptotic pathway and/or fragmented DNA, likely compromising the outcome of reproduction. In particular, E. coli ATCC 35218 induces DNA fragmentation within the PI brighter sperm population, which is unrelated to semen quality and may contain viable, DNA-fragmented spermatozoa 28 . sDF is associated with a reduced performance in ART, affecting implantation and increasing the probability of miscarriage [43][44][45][46] . The effect of bacterial contamination in semen on the outcomes of ARTs is controversial. Some studies indicated poor outcomes because of oocyte degeneration 47,48 whereas others did not report significant effects on ART outcomes 49 . In vitro incubation with E. coli reduces the ability of sperm to penetrate hamster oocytes, suggesting a negative effect on fertilization ability 32 . A strength of our study is testing the effect of strains of the most frequent Enterobacteriaceae infecting the male reproductive tract, evidencing differences in their effects on sperm characteristics. In addition, we evaluated the effect of bacteria in the natural environment where they may alter sperm functions. In contrast, most previous studies have been performed on highly motile selected sperm [11][12][13][14][15] or washed semen samples 39 , where the effect of bacteria is tested in a medium in which they never act, and which does not contain substances that can limit or enhance their effects. For instance, it has been shown that lactobacilli 14 may prevent the effect of E. coli on sperm motility. In addition, fragmented semenogelins generated after liquefaction 50 and enzymes present in semen 51 show antibacterial activity. A limitation of our study is the use of commercially available bacterial strains rather than strains isolated from semen samples. Moreover, we chose to use strains of Enterobacteriaceae with known genomes to allow the sequence alignment shown in Fig. 3. Such alignment allowed us to reveal that other Enterobacteriaceae contain recX-homologous sequences in their genomes. In conclusion, our data indicate that common uropathogenic Gram-negative bacteria induce an impairment of sperm motility through recX products and suggest that an increased secretion of SIF or other factors, produced by selected strains, may be involved in impairing other sperm functions. Since the effects of bacteria on human spermatozoa may be variable and dependent on the strain, a careful evaluation of semen infections in the diagnostic process of the infertile man is warranted.
Further experiments performed on bacterial samples isolated from semen cultures will be necessary in order to reinforce the experimental evidence of SIF-homologous activity secreted by Enterobacteriaceae strains derived from their natural site of infection. Materials and methods Ethics statement. The study was approved by the local Ethical Committee Comitato Etico Area Vasta Centro (CEAVC, protocol n. 16764_bio). All research was performed in accordance with the Declaration of Helsinki. Patients were informed about the aim of the study and signed an informed consent to use the remaining semen after routine analysis. Reagents and bacteria. Seven bacterial reference strains were included in the study as follows: two isolates of E. coli (ATCC 25922 and ATCC 35218), one isolate of K. pneumoniae (ATCC 13883), one isolate of K. quasipneumoniae (ATCC 700603), one isolate of P. aeruginosa (ATCC 27853), one E. cloacae (ATCC 13047) and one K. aerogenes (ATCC 13048). The E. coli K12 strain MG1655 and its derivative with the recX gene knocked out, from the Keio collection, were also added to the study collection 52 . Most of the selected reference strains were isolated from clinical human samples, except for K. pneumoniae ATCC 13883 and E. coli ATCC 25922, whose source is unknown (Supplemental Table S1). All bacterial strains were seeded on CHROMID® CPS® Elite agar (bioMérieux, Marcy l'Etoile, France) and incubated for 18 h at 35 ± 1 °C. Bacterial suspensions were prepared in 2 ml of sterile water and optical density was measured by a DensiCHEK™ spectrophotometer (bioMérieux). Extraction of proteins from bacterial supernatants. 100 or 300 × 10⁶ E. coli cells from the ATCC 25922 strain were cultured in Mueller Hinton broth at 37 °C overnight. Culture supernatants were collected and centrifuged at 4000 g for 10 min. The protein fraction with MW ≥ 30 kDa was purified using Centrifugal Filter Units (cut-off 30,000 NMWL) (Amicon® Ultra-4 and -15 Centrifugal Filter Units, 30,000 NMWL, Merck, Darmstadt, Germany) according to the manufacturer's recommendations. Protein amount in the purified fraction was quantified by the BCA (bicinchoninic acid) method and used at 15 Sperm samples and processing. Semen samples were obtained by masturbation from patients undergoing routine semen analysis for couple infertility in the Andrology laboratory of the University of Florence, Italy. Semen analysis was carried out according to World Health Organization (WHO) guidelines 33 . Semen samples with leukocytes and/or evident bacteria were excluded from the study. For the study purpose, spermatozoa from n = 32 normozoospermic subjects (see Supplemental Table S2 for semen characteristics) were included. After counting of spermatozoa, whole semen samples were divided into 9 equal aliquots and seeded in 96-well plates at a concentration range between 1 × 10⁶ and 10 × 10⁶ cells per well in a final volume of 100 µl. The bacterial infection assay was performed by incubating spermatozoa in the presence of bacterial strains at 1, 10 and 100 MOI/cell for 1 and 3 h at 37 °C in a humidified chamber with 5% CO 2 . The maximum effect was reached at 10 MOI/cell (data not shown); 10 MOI/cell was then used for all the experiments shown. An equal volume of sterile water was added to control aliquots. Evaluation of sperm motility. After incubation with the different bacterial strains, the percentages of progressive and total sperm motility were assessed by optical microscopy, according to WHO criteria 33 , evaluating at least 200 spermatozoa for each experimental point.
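The infection assay above is defined in terms of MOI (multiplicity of infection, i.e. bacteria per spermatozoon). As a minimal sketch of the inoculum arithmetic that such a design implies, the snippet below computes the volume of a bacterial suspension needed to reach a target MOI; all numerical values are purely illustrative and are not taken from the experiments described here.

```python
# Sketch of the inoculum calculation behind a target MOI
# (multiplicity of infection = bacteria per spermatozoon).
# Numbers are illustrative assumptions, not data from this study.

def inoculum_volume_ul(sperm_per_well: float,
                       target_moi: float,
                       bacterial_density_per_ml: float) -> float:
    """Volume (µl) of bacterial suspension needed for a given MOI."""
    bacteria_needed = sperm_per_well * target_moi
    return bacteria_needed / bacterial_density_per_ml * 1000.0  # ml -> µl

# e.g. 5 x 10^6 spermatozoa per well, MOI 10, suspension at 1 x 10^9 CFU/ml
vol = inoculum_volume_ul(5e6, 10, 1e9)
print(f"Add ~{vol:.1f} µl of suspension per well")  # ~50 µl
```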
Motility analysis was conducted in the Laboratory of Andrology of the Florence Careggi University Hospital, which has participated in the UK-NEQAS (United Kingdom National External Quality Assessment Service) external quality control program for semen analysis since 2005. The mean (± sd) percent biases of the laboratory for the year 2019 were 7.0 (± 15.6) and 1.1 (± 11.2) for total and progressive motility, respectively, and 9.2 (± 6.7) for sperm concentration (n = 16, data from UK-NEQAS). Evaluation of sperm viability. Sperm viability was evaluated by using eosin staining 33 and by LIVE/DEAD™ Fixable Green Dead Cell Stain coupled to flow cytometric detection. Assessment of mitochondrial ROS generation. Mitochondrial ROS generation was evaluated using MitoSOX™ Red 14,53 , which shows distinct specificity toward superoxide 53 . After incubation with the different bacterial strains, spermatozoa were washed in PBS and then divided into two aliquots: one aliquot was re-suspended in 100 µL PBS (negative control) and one in 100 µL PBS containing MitoSOX Red at a final concentration of 2 μM (test sample), and incubated for 15 min at room temperature. After washing in PBS, sperm samples were analysed by flow cytometry (see below). Evaluation of caspase activity. Caspase activity was evaluated by using the Vybrant FAM Caspase-3 and -7 Assay Kit, based on a fluorescent inhibitor of caspases (FLICA™), according to Marchiani et al. 54 . After incubation with bacteria, each sample was split into two aliquots: a test sample re-suspended in 300 µL of PBS supplemented with 10 μL of 30X FLICA working solution, and a negative control incubated only with PBS. After 1 h of incubation at 37 °C, samples were washed with Wash Buffer 1X and fixed with 40 μL of 10% formaldehyde for 10 min at room temperature. Wash and fixative solutions were supplied by the kit. Sperm samples were washed again twice and re-suspended in 400 µL of Wash Buffer 1X containing 6 µL of propidium iodide solution (PI, 50 µg/mL in PBS) and acquired by flow cytometry (see below). Evaluation of sperm DNA fragmentation. Sperm DNA fragmentation was evaluated by the TUNEL/PI method 28 . Briefly, after incubation with bacteria and washing twice with HTF medium, each aliquot was fixed with 200 µL of paraformaldehyde (4% in PBS, pH 7.4) for 30 min at room temperature. Semen samples were washed twice with 200 µL of PBS/1% bovine serum albumin (BSA), and then permeabilized with 100 µL of 0.1% sodium citrate/0.1% Triton X-100 (4 min in ice). Each sperm sample was divided into two aliquots and labelled with 50 µL of labelling solution (supplied by the kit) containing (test sample) or not (negative control) the terminal deoxynucleotidyl transferase (TdT) enzyme and incubated for 1 h at 37 °C in the dark. Samples were then washed twice, re-suspended in 500 µL of PBS, stained with 7.5 µL of PI (50 mg/mL, 10 min at room temperature in the dark) and acquired by flow cytometry. Flow cytometric analysis. Samples were acquired by a FACScan flow cytometer equipped with a 15-mW argon-ion laser for excitation. The FL-1 (515-555-nm wavelength band) and FL-2 (563-607-nm wavelength band) detectors revealed the green fluorescence of LIVE/DEAD™ Fixable Green Dead Cell Stain, caspases and TUNEL, and the red fluorescence of MitoSOX Red and PI, respectively. In the characteristic forward scatter/side scatter region of spermatozoa 28 , 8000 events were acquired.
In the dot plot of the fluorescence distribution of the negative sample, a marker including 99% of total events was established and translated to the corresponding test sample, and all events beyond the marker were considered positive. sDF was evaluated in the two sperm populations termed PI brighter and PI dimmer 28 and reported as the percentage of sDF in the two populations and in total sperm. For acquisition and analysis, the CellQuest-Pro software program (Becton-Dickinson) was used.
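As a minimal, hedged sketch of the gating logic described above (a marker enclosing 99% of negative-control events, translated to the test sample to score positive events), the snippet below assumes per-event fluorescence intensities have already been exported from the cytometer; the synthetic data are illustrative only and do not correspond to any sample in this study.

```python
# Sketch of the 99%-marker gating strategy described above, applied to
# exported per-event fluorescence intensities (illustrative data only).
import numpy as np

def percent_positive(negative_ctrl: np.ndarray, test_sample: np.ndarray) -> float:
    threshold = np.percentile(negative_ctrl, 99)   # marker includes 99% of control events
    positive = np.sum(test_sample > threshold)     # events beyond the marker
    return 100.0 * positive / test_sample.size

# Illustrative synthetic data: 8000 events per sample, arbitrary units
rng = np.random.default_rng(0)
neg = rng.normal(100, 20, 8000)
test = np.concatenate([rng.normal(100, 20, 6000), rng.normal(300, 40, 2000)])
print(f"{percent_positive(neg, test):.1f}% positive events")
```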
2021-09-30T06:23:58.920Z
2021-09-28T00:00:00.000
{ "year": 2021, "sha1": "7608e408dc2e65d3dbfa3ce5bbe1dde134c35065", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-98710-5.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "9ce8e304ec6c565d92fb1269742d46cdcacbfc83", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
207904772
pes2o/s2orc
v3-fos-license
An Unusual Manifestation of Blastic Plasmacytoid Dendritic Cell Neoplasm as a Testicular Tumor Blastic plasmacytoid dendritic cell neoplasm (BPDCN) is a clinically aggressive hematologic malignancy arising from precursors of plasmacytoid dendritic cells that represents less than 1% of hematological malignancies. BPDCN initially presents with cutaneous involvement and a characteristic immunophenotype of CD4, CD56, and CD123 co-expression. Upon disease progression, BPDCN shows a strong predilection for bone marrow, peripheral blood, and lymph nodes, whereas manifestations in visceral organs are rare. Significant heterogeneity in clinical presentation and immunophenotypic profile makes BPDCN challenging to diagnose without an integrated approach based on patient history, clinical features, tumor pathology, and comprehensive immunohistochemical studies. Herein we report the first case of relapsed BPDCN manifesting as a unilateral testicular tumor. Introduction The 2008 World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues recognized blastic plasmacytoid dendritic cell neoplasm (BPDCN) as a distinct entity characterized by the malignant proliferation of precursors of plasmacytoid dendritic cells [1]. BPDCN can present in all age groups, but is more common in adults, with a median age of presentation above 60 years and a slight male predominance (male to female ratio of 2.5 : 1) [2]. Initial presentation most commonly involves skin lesions with or without bone marrow involvement and leukemic dissemination, but cases with fulminant leukemia without cutaneous manifestation have been described [3,4]. Prognosis is poor, with most cases rapidly and uniformly fatal [4]. Initial manifestations of BPDCN without cutaneous involvement are extremely rare, with only a few case reports described in the literature. We report a case of a 54-year-old man who presented with a unilateral testicular mass mimicking a primary testicular neoplasm, ultimately diagnosed as relapsed BPDCN. Case Presentation The patient was a 54-year-old man who presented with nontender, left-sided scrotal swelling. Scrotal ultrasound showed a hypoechoic, hypervascular, left intratesticular mass with microcalcifications (3.2 × 2.4 × 3.2 cm) concerning for a primary testicular neoplasm. For definitive diagnosis, the patient underwent a left radical inguinal orchiectomy. On gross examination, the testicular mass was serially sectioned to reveal a tan-red, hemorrhagic, well-circumscribed circular mass (5.0 × 4.0 × 3.4 cm) with focal areas of tan-white friable necrosis (Figure 1(a)). The mass spared the epididymis (2 × 1.2 × 3 cm) and abutted the tunica albuginea. The remaining uninvolved testis parenchyma was tan-yellow and unremarkable. Histologically, the testicular mass showed diffuse, solid sheets of densely packed neoplastic cells infiltrating the testicular parenchyma, hilar soft tissue, epididymis, and spermatic cord (Figures 1(b)-1(d)) with sparing of the seminiferous tubules (Figure 1(e)). The tumor consisted of medium-sized neoplastic cells with blastoid morphology, scant agranular cytoplasm, irregular nuclei with fine to vesicular chromatin, and small nucleoli. The tumor exhibited increased mitotic activity with atypical mitotic figures, areas of necrosis, and abundant apoptotic debris (Figures 1(f)-1(h)). Based on morphology and patient age, the neoplastic cells seemed most consistent with either a lymphoma or a spermatocytic tumor with anaplastic features.
A preliminary immunohistochemistry (IHC) panel of CD3, CD20, AE1/AE3, and SALL4 was performed (Figures 2(a)-2(c)). The neoplastic cells were negative for all four markers, arguing against a diagnosis of lymphoma, primary germ cell tumor, or epithelial neoplasm. Figure 1: Gross and representative H&E images. Gross findings showed a tan-red, hemorrhagic, well-circumscribed circular mass with focal areas of tan-white friable necrosis (a). At low power, diffuse, solid sheets of densely packed neoplastic cells infiltrating the testicular parenchyma, hilar soft tissue, epididymis, and spermatic cord (b-d) with sparing of the seminiferous tubules (e) were seen. At high power (f-h), medium-sized neoplastic cells showed blastoid morphology, scant agranular cytoplasm, irregular nuclei with fine to vesicular chromatin, and small nucleoli. Increased mitotic activity with atypical mitotic figures, areas of necrosis, and abundant apoptotic debris were seen. To further characterize the neoplasm, a second IHC panel of CKIT, CD45, CD68, S100, Ki-67, and CD138 was performed (Figures 2(d)-2(f)). The neoplastic cells were weakly positive for CD45 and positive for CD68 with a granular, dot-like pattern. Ki-67 was expressed in 80% of cells. CKIT, S100, and CD138 were negative. The positive CD45 and CD68 were suggestive of a hematopoietic neoplasm with histiocytic differentiation. At this time, the patient's chart was reviewed, which showed a history of BPDCN that initially presented as scalp nodules. The patient had received chemotherapy and a bone marrow transplant less than three months earlier. Subsequently, a third IHC panel of CD4, CD56, and CD123 was performed (Figures 2(g)-2(i)). The neoplastic cells were diffusely positive for CD4 and CD56, and CD123 was positive in only rare cells, supporting the diagnosis of BPDCN. After the diagnosis of relapsed BPDCN, the patient was treated with five rounds of pralatrexate with palliative intention. Disease progression was evidenced by the presence of diffuse joint pain and inguinal lymphadenopathy. The patient expired three months later. Discussion BPDCN is a diagnosis of exclusion, and substantial heterogeneity in clinical presentation can make the diagnosis very challenging. In the natural history of BPDCN, the skin is typically the first affected site, where the disease usually remains confined until a rapid second step involving leukemic spread and multiorgan involvement, eventually leading to death [5]. Involvement of the tonsils, liver, soft tissues, paranasal cavities, lungs, eyes, and central nervous system has been described [2]. However, initial manifestations of BPDCN without cutaneous involvement are extremely rare. Dhariwal et al. recently described a case of a 13-year-old male who initially presented with BPDCN manifesting as a testicular mass without any cutaneous manifestation, followed by leukemia-like symptoms with bone marrow involvement. Figure 2: Representative immunohistochemistry images. The initial IHC panel of CD3 (a), CD20 (b), and SALL4 (c) was negative and argued against a diagnosis of lymphoma, primary germ cell tumor, or epithelial neoplasm. The neoplastic cells were weakly positive for CD45 (d) and granularly positive for CD68 (e) with a dot-like pattern. Ki-67 (f) was expressed in 80% of cells. The diagnosis of BPDCN was consistent with positive CD4 (g), CD56 (h), and weak CD123 (i).
2019-10-10T09:31:24.541Z
2019-10-07T00:00:00.000
{ "year": 2019, "sha1": "a75382afd9d11eb5d5b394ea822c7dd58682b9c9", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cripa/2019/9196167.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "26ae1de380a8633f0779e0039f4bb068764bd289", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246699906
pes2o/s2orc
v3-fos-license
Cardiometabolic outcomes of women exposed to hyperglycaemia first detected in pregnancy at 3-6 years post-partum in an urban South African setting Background Hyperglycaemia first detected during pregnancy (HFDP) has far-reaching maternal consequences beyond the pregnancy. Our study evaluated the cardiometabolic outcomes in women with prior HFDP versus women without HFDP 3–6 years post-partum in urban South Africa. Design and methods A prospective cohort study was performed of 103 black African women with prior HFDP and 101 without HFDP, 3–6 years post-partum, at Chris Hani Baragwanath Academic Hospital, Soweto. Index pregnancy data were obtained from medical records. Post-partum, participants were re-evaluated with anthropometric measurements, body composition assessment utilizing dual-energy X-ray absorptiometry (DXA) and biochemical analysis (two-hour 75 g OGTT, fasting insulin, lipids, creatinine and glucose levels). Cardiovascular risk was assessed by the Framingham risk score (FRS). Carotid intima media thickness (cIMT) was used as a surrogate marker for subclinical atherosclerosis. Factors associated with progression to cardiometabolic outcomes were assessed using multivariable logistic and linear regression models. Results Forty-six (45.1%) HFDP women progressed to diabetes compared to 5 (4.9%) in the non-HFDP group (p<0.001); only 20 (43.4%) in the whole group were aware of their diabetic status. The odds (OR, 95% confidence interval (CI)) of progressing to type 2 diabetes (T2DM) and metabolic syndrome (MetS) after correcting for confounders in the HFDP group were 10.5 (95% CI 3.7–29.5) and 6.3 (95% CI 2.2–18.1), respectively. All visceral fat indices were found to be significantly higher in the HFDP group after adjusting for baseline body mass index. Ten-year estimated cardiovascular risk (FRS) and mean cIMT were statistically higher in the HFDP group (8.46, IQR 4.9–14.4 and 0.48 mm, IQR 0.44–0.53, respectively) compared to the non-HFDP group (3.48, IQR 2.1–5.7 and 0.46 mm, IQR 0.42–0.50); this remained significant for FRS but was attenuated for cIMT after correcting for confounders. HIV did not play a role in progression to any of these outcomes. Conclusion Women with a history of HFDP have a higher risk of cardiometabolic conditions within 6 years post-partum in an urban sub-Saharan African setting. Introduction The non-communicable disease (NCD) burden remains the leading cause of death worldwide, with diabetes and cardiovascular diseases (CVD) accounting for almost half the burden. In South Africa, diabetes and CVD have been the second and third leading causes of death since 2014. In high-income countries, hyperglycaemia first detected during pregnancy (HFDP) has been associated with a sevenfold higher overall incidence of type 2 diabetes mellitus (T2DM) within the first decade following delivery [1] and an increased risk of CVD [2] and the metabolic syndrome (MetS) [3]. Prevalence figures for HFDP in Africa remain limited, with a few studies in South Africa demonstrating that 9-25% of women have HFDP [4][5][6]. These figures may underrepresent the disease burden since risk-factor-based screening and varying diagnostic tests and strategies are currently employed. Whilst numerous studies have explored the short-term maternal and neonatal outcomes following HFDP, long-term outcomes for the African continent remain limited to a mere three studies.
One study, performed amongst a predominantly mixed ethnic ancestral group in Cape Town, South Africa, demonstrated a 48% progression rate to T2DM at 5-6 years following the index pregnancy [7], although no control group was included. A high CVD risk and a high prevalence of MetS (60.9%) were demonstrated in the same cohort [8]. A further smaller study amongst the same ethnic group from the same region established that the prevalence of T2DM at six weeks post-partum was 27% [9]. Racial and ethnic disparities in progression to T2DM following HFDP are well known, with African American women being particularly vulnerable [10] owing to acculturation and lifestyle factors arising from social determinants, in addition to genetic susceptibility and traditional risk factors [11]. However, to our knowledge, the long-term CVD and metabolic impact of HFDP has not previously been explored in Black African women or compared to women in this setting without a history of HFDP. Obesity, a well-known risk factor for HFDP, T2DM and CVD, is commonly encountered amongst black South African women (40.9%) and accounts for an estimated 87% of their diabetic risk [12,13]. Obesity and weight-related characteristics, including pre-pregnancy body mass index (BMI), post-pregnancy BMI and weight gain following the pregnancy, have all been shown to increase the risk of progression to T2DM following HFDP [14]. The pathophysiological mechanism behind this lies in the differential impact of regional fat deposition, with upper body (android and visceral fat) and lower body fat (gynoid and leg fat) showing directionally opposite associations with these risks [15], which has also been observed in black SA women [16,17]. However, the impact of body composition characteristics on progression to T2DM, MetS and CVD following an HFDP pregnancy has yet to be explored in this setting. Moreover, little is known about how human immunodeficiency virus (HIV), with which the healthcare sector in Africa is faced in addition to the growing burden of NCDs, affects CVD and metabolic risk in the context of HFDP. The additional metabolic risk attributable to HIV is debatable, with the overall prevalence of glucose metabolism disorders in HIV-infected individuals on antiretroviral therapy (ART) in Africa ranging from 3-33.5% [18]. A meta-analysis of 5 case-control studies in Africa did not demonstrate a significant relationship between HIV and exposure to ART and the prevalence of T2DM, as encountered in other studies in Europe and North America. Given the growing prevalence of HFDP in South Africa and its well-known role in the intergenerational transmission of NCDs, we sought to explore its impact on maternal CVD risk and the development of T2DM and MetS in a group of Black African women with and without a prior history of HFDP. Secondary aims of our study were to explore how body composition differs between the two groups and whether HIV infection impacted the development of these outcomes following pregnancies with and without HFDP. To our knowledge this is the first study of its kind in Africa. Study design and population The study setting was the Chris Hani Baragwanath Academic Hospital (CHBAH), located in urban Soweto, South Africa. Between March and November 2019, we conducted a prospective cohort study in women previously diagnosed with HFDP (HFDP group) and women who tested negative for HFDP (non-HFDP group) using the same diagnostic test and criteria between February 2014 and January 2017.
Both groups of women were derived from the same specialist clinic, though they were identified differently. The HFDP group was selected first and consisted mostly of women who were identified by risk-factor-based screening and had attended a specialised gestational endocrine clinic at CHBAH for HFDP; their pregnancy characteristics and outcomes have been previously published [19]. A subgroup of the HFDP women were referred to the specialist clinic as a result of universal screening performed by a research study [5]. The "control group" comprised women who tested negative for HFDP over the same time period and had undergone universal screening as part of a previous study [5]. Women were diagnosed using a 75-gram 2-hour oral glucose tolerance test (OGTT) with International Association of Diabetes and Pregnancy Study Groups (IADPSG) criteria. HFDP comprised "true" gestational diabetes mellitus (GDM) and "overt" diabetes in pregnancy (DIP). Participants were recruited telephonically and, if unsuccessful, traced by visiting their home address. Of the initial 319 HFDP cases identified, 206 were not contactable or traceable, 4 declined to participate, and 6 were pregnant. Difficulties tracing participants following delivery were mostly due to relocation or change of contact details. There were 845 women who screened negative for HFDP identified from the database of the previous study, of whom 103 women were contacted in a random order until the number of mothers was the same as in the HFDP group. Two hundred and four participants were enrolled at follow-up, of whom 103 had confirmed HFDP and 101 did not. Sample size calculation Sample sizes were calculated for each of the three main outcomes (T2DM, MetS and CVD risk) using a two-sample proportion test based on population parameters, with a 5% margin of error, a confidence interval of 95% and a power of 80% to detect the effect. Given the reported estimated risk of developing T2DM following an HFDP pregnancy of 20-60%, compared with 12% in the background population [1,20], a sample size of 15 per group was calculated. The minimum total sample size calculated for the outcome MetS was 64, given the risk of 40% following HFDP [3] and a background prevalence of MetS of 10%. A size of 98 per group was needed based on a reported estimated risk of developing CVD of 17.6% [2] following HFDP vs. 7% in the background population. The sample size needed to establish CVD risk informed our final sample size. Data collection a) Questionnaire. A self-reported questionnaire was captured at the follow-up visit and incorporated maternal demographics, marital status, various socioeconomic parameters (SES), obstetrical history, maternal complications and outcomes, and postnatal factors (history of CVD risk factors, any vascular event/s, risk factors for the development of T2DM including recurrent HFDP pregnancies, breastfeeding following the index pregnancy, and a family history of diabetes or the presence of diabetes). Use of cholesterol-lowering or antihypertensive medication, smoking status and pack-year history, ethanol consumption (ml/day, based on quantity and frequency) and physical activity were captured. Contraceptive use was self-reported and was categorized as none, oral contraceptives or injectable contraceptives, with type noted. HIV status and therapy were noted where applicable (see S1 Table for relevant definitions of maternal variables).
The questionnaire (S1 Appendix) was informed by the literature and adapted from several existing standard and recognised sources [21] in order to incorporate relevant factors. b) Anthropometrics. Subjects underwent a physical examination for weight, height, waist and hip circumference and blood pressure measurements, utilising standardised methods performed by qualified, trained research assistants. Height (cm) was recorded to one decimal place using a wall-mounted Holtain stadiometer (Crymych, UK) with subjects standing on a flat surface at a right angle to the vertical board of the stadiometer. Weight (kg) was measured to the nearest 0.1 kg on a SECA digital scale (Hamburg, Germany), which was calibrated and standardised using a weight of known mass. Participants wore light clothing and were asked to remove their shoes and socks. Blood pressure (mmHg) measurements were taken. Details pertaining to each measurement are outlined in S1 Table. Dual-energy X-ray absorptiometry (DXA) (Hologic Discovery-A (S/N 83145), Bedford, MA, USA) was used to determine whole-body composition, since BMI alone is not an accurate indicator of body composition. This included subtotal (whole-body minus head) fat mass and fat-free soft tissue mass. Regional body fat, namely trunk, arm, leg, android, and gynoid fat mass (expressed in kg and as a percentage of subtotal fat mass, % FM), was measured using DXA cut-off lines positioned at standard anatomical positions, as defined in the software (software version Apex 4.2.0). FMI (kg/m²) was calculated from body fat mass and height; it offers superiority over BMI as a marker of obesity since the index is based on fat mass rather than body weight, which is a combination of fat and lean components. In addition, abdominal visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) were estimated using algorithms included in the DXA software, which have been shown to perform as well as clinical computed tomography [22]. During data collection, a phantom scan was performed each morning to determine the coefficient of variation (CV) of the DXA machine, which was less than 0.5% for all parameters. CVs for DXA parameters were <2% for total fat mass and 1% for fat-free soft tissue mass (FFSM). c) Biological samples and OGTT. Point-of-care testing was performed for haemoglobin (HemoCue® Hb 201) and HIV (Homemed HIV1/2 rapid test kit). An early morning midstream urine sample was collected from each participant to exclude pregnancy and to perform bedside testing with a urine dipstick (Roche Combur) screening for glycosuria, albumin/protein, and evidence of infection or renal disease. A finger-prick (OneTouch) fasting capillary glucose was performed at baseline and the OGTT was commenced irrespective of the result. At baseline, blood samples were drawn by a trained nurse after an 8-hour overnight fast for measurement of serum creatinine, lipogram, HbA1c, fasting insulin and glucose. This was followed by ingestion of 75 g glucose in 250 ml water; blood samples for glucose were drawn at baseline and at 2 hours. Those with a self-reported diabetes diagnosis, confirmed by either a medical card record or drugs in use, only had fasting bloods drawn. Specimens were centrifuged and stored at -80 °C within 30 min of being drawn. Categories of glucose intolerance were defined applying the 2006 WHO criteria [23] (S1 Table). d) Carotid and femoral imaging.
Ultrasonographic assessment of the common carotid artery (CCA) and common femoral artery (CFA) was performed (Linear-Array 12L-RS transducer with a B-mode Logic E ultrasound machine, GE Healthcare, CT, USA) to assess intima media thickness (IMT) and the presence of plaque. The IMT measurement was performed on the posterior wall of the common carotid artery and common femoral artery in an area free of plaque, defined as the distance between two echogenic lines represented by the lumen-intima interface and the media-adventitia interface of the arterial wall. The ultrasound machine software then detected the intima-lumen and media-adventitia interfaces and calculated the minimum, maximum, and mean common carotid IMT (cIMT) and femoral IMT in millimetres to 2 decimal places [24,25]. All patients were positioned supine with the neck slightly hyperextended and rotated in the opposite direction to the probe. A 45-degree angle wedge pillow was used to standardize lateral rotation. Measurements were performed by one observer with an intra-observer variability of 1.1%. e) Biochemistry and lab analyses. Plasma glucose was measured using the Randox Rx Daytona chemistry analyser using enzymatic methods (Randox Laboratories Ltd, London, UK); glycated haemoglobin A1c (HbA1c) was measured using the Bio-Rad D-10™ Haemoglobin analyser using the HPLC method (catalogue number 2200101) (Bio-Rad Laboratories, Inc., CA, USA). The precision and trueness of the Randox Rx Daytona chemistry analyser were verified using the Clinical and Laboratory Standards Institute document EP15. Coefficients of variation calculated from running 30 separate samples at 3 different times were 0.7% for glucose and 1.8% for HbA1c. Lipids, including HDL, low-density lipoprotein (LDL), triglyceride (TG) and total cholesterol concentrations, were analysed on the Randox Rx Daytona chemistry analyser using enzymatic colorimetric methods (catalogue numbers CH8311 (HDL), CH8312 (LDL), TR8332 (TG), CH8310 (Chol)) (Randox Laboratories Ltd., London, UK). Enzymatic colorimetric assays were used to measure TG, total cholesterol and HDL cholesterol using the Roche modular auto analyser, while low-density lipoprotein cholesterol was calculated using direct methods/the Friedewald formula. Fasting serum insulin concentrations were measured on the Immulite® 1000 Immunoassay system using the chemiluminescent method (catalogue numbers LKIN1/10381429) (Siemens Healthcare GmbH, Henkestr., Germany). Serum creatinine concentrations were analysed on the Randox Rx Daytona chemistry analyser using enzymatic methods (catalogue number CR8317) (Randox Laboratories Ltd., London, UK). CVs calculated from running 40 separate samples in duplicate were 0.8% for HDL and total cholesterol, 1.19% for TG, 3.9% for insulin and 0.7% for creatinine. Outcomes The primary outcome was progression to, and time to developing, T2DM between women with HFDP (sub-categorised as GDM and DIP) and women without HFDP following their index pregnancy. Secondary outcomes included comparison of body composition measures, progression to MetS, and CVD risk (utilising two surrogate measures, the Framingham Risk Score (FRS) and cIMT) between the groups. MetS was defined using the harmonised criteria [26]. Gender-specific prediction of 10-year CVD risk was calculated using the modified Framingham risk score 2008 (FRS) [27] (see S1 Table for definitions). Ethics This study was approved by the Human Research Ethics Committee at the University of the Witwatersrand (M180316).
Informed consent, both verbal and written, was obtained from participants prior to enrolment in the study. Statistical analysis Data were captured using REDCap [28] (Vanderbilt University, Nashville, USA) and analysed using Stata software 13.0 (College Station, USA) [29]. Sensitivity analysis to address potential bias from missing data was performed. Means and standard deviations were reported for normally distributed continuous variables (anthropometric parameters), medians and interquartile ranges for non-normally distributed measured data (all other continuous variables), and numbers and percentages for categorical variables (chronic hypertension, family history of diabetes, etc.). Statistical differences between the three groups (control, GDM and DIP) were tested using Analysis of Variance (ANOVA) or the Kruskal-Wallis test. For categorical variables, the Chi-squared test and Fisher's exact test (for small frequencies) were utilised. The statistical significance level was set at a two-sided p-value <0.05. Crude odds ratios (OR, 95% CI) and multivariable-adjusted odds ratios (aOR, 95% CI) for T2DM, MetS, and 10-year cardiovascular risk calculated using the FRS were estimated from logistic regression models. Covariates evaluated as potential confounders based on a priori hypotheses are included in Table 3. Covariates were excluded as confounders if they were not associated with both the dependent and independent variable (exposure to HFDP), p<0.05. Further multivariable models were designed to explore the relationship of maternal factors with the relevant outcomes, adopting a chronological approach in which maternal factors present at pre-pregnancy (distal model 1), the index pregnancy (intermediate model 2) and post-partum (proximal model 3) were assessed using logistic or linear regressions. The final model combined all variables from the three models. The outcomes explored in these multivariate models were fat mass index FMI (continuous, as a surrogate for adiposity), T2DM (binary) and cIMT (continuous). The independent variables were identified using univariate analysis for each outcome. HFDP, an independent variable for all models, was categorised according to the degree of dysglycaemia as GDM and DIP. Both BMI and WHR were used as continuous variables for the purposes of the models. For logistic regression model diagnostics, results are expressed as OR and 95% confidence intervals (CI), and we assessed the following: the linearity assumption using lowess graphs, multicollinearity using variance inflation factors, and model specification using the C-statistic; we confirmed the fit of the model using the Hosmer-Lemeshow goodness-of-fit test. We also checked for outliers. For linear regressions, cIMT was log-transformed to increase the normality of residuals. Results are expressed as beta coefficients and 95% CI. In order to assess the influence of HIV on the outcomes, it was included as a binary variable in each of the multivariate models exploring maternal factors and outcomes. Results There were 103 women recruited in the HFDP group and 101 in the non-HFDP group after all exclusions were applied. There was <1% missing data and no biases were detected through the sensitivity analyses. Details are shown in the study flow diagram (Fig 1). a) Baseline and follow-up demographics, maternal factors, anthropometrics, and biochemical parameters between the groups The majority of the cohort was of Black African ancestry (n = 198, 97%).
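As a hedged illustration of the adjusted odds-ratio estimation described in the Statistical analysis section above, the sketch below fits a multivariable logistic regression with statsmodels and exponentiates the coefficients. The column names, file name and confounder set are hypothetical placeholders rather than the authors' exact model or data.

```python
# Sketch of multivariable-adjusted odds-ratio estimation, analogous to
# the logistic regression models described above. Column names and the
# confounder set are hypothetical, not the authors' exact specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_or(df: pd.DataFrame) -> pd.DataFrame:
    """Fit outcome ~ exposure + confounders and return ORs with 95% CIs."""
    model = smf.logit("t2dm ~ hfdp + age_index + bmi_index + parity",
                      data=df).fit(disp=False)
    ci = model.conf_int()  # columns 0 (lower) and 1 (upper) on the log-odds scale
    table = pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
    })
    return table.drop(index="Intercept")

# Hypothetical usage:
# df = pd.read_csv("cohort.csv")          # one row per woman, 0/1 coded variables
# print(adjusted_or(df).loc["hfdp"])      # adjusted OR for prior HFDP
```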
Relevant baseline characteristics during the index pregnancy and at follow-up for the HFDP and non-HFDP groups are shown in Table 1. Of the participants with prior HFDP, 45 (43.7%) had "overt" diabetes in pregnancy (DIP), with the remaining 58 (56.3%) classified as "true" gestational diabetes mellitus (GDM). When comparing the HFDP vs. non-HFDP groups at first booking during their index pregnancy, the median age was higher for the HFDP group, 32.5 (IQR 29-38) vs. 29.5 (IQR 25-34), with a median follow-up period of 3 years (IQR 3-4), and their median BMIs were 35.2 (IQR 30.6-39.8) and 29.5 (IQR 25.0-32.9), respectively. The baseline prevalence of HIV during pregnancy was lower in the HFDP group, 13 (12.7%), vs. 20 (19.8%) amongst the non-HFDP group, and this remained the case at follow-up. All women with HIV were on fixed-dose combination treatment, whilst the majority (80%) were diagnosed at or before their index pregnancy. Forty-five (43.7%) of the women in the HFDP group experienced an obstetric complication vs. 16 (15.8%) in the non-HFDP group. When comparing the DIP and GDM groups, the only variables significantly different at baseline were glucose values on OGTT testing and exposure to therapeutic agents, with more in the DIP group being exposed to insulin, 22 (48.9%) vs. 8 (14.8%). At follow-up, the HFDP group remained obese (32.8 (29.1-39.2)), with elevated anthropometric measures and higher blood pressure measurements and glucose profiles compared to the non-HFDP group. Their overall socioeconomic status was lower than that of the non-HFDP group. b) Stratified analysis of maternal outcomes including diabetes, cardiovascular risk and metabolic syndrome by HFDP subtypes: DIP and GDM (Table 2) Of the HFDP group, 46 (44.6%) progressed to diabetes compared with 5 (4.9%) in the non-HFDP group (p<0.001). Only 20 (42.5%) of the entire group were aware of their diabetes status, and the average time to event was 30 months (SD +/-1.32). Both dysglycaemia (57 (55.9%) vs. 14 (13.9%)) and insulin resistance (92 (90.2%) vs. 69 (68.3%), p<0.001) were significantly higher in the HFDP group. Within the HFDP group, all measures of dysglycaemia, insulin resistance and progression to T2DM were higher in the DIP group compared to the GDM group. All CVD risk factors were higher in the HFDP group, including diabetes 46 (44.6%), hypertension 27 (26.1%), dyslipidemia 85 (82.5%), family history of CVD 20 (19.4%), history of ever smoking 10 (9.7%) and central obesity 66 (64.1%), though smoking and family history of CVD were not significantly different between the groups. Overall, the calculated FRS was significantly higher in the HFDP group (8.46, IQR 4.9-14.4) than in the non-HFDP group (3.48, IQR 2.1-5.7). Mean cIMT was also higher in the HFDP group, 0.48 mm (IQR 0.44-0.53) vs. 0.46 mm (IQR 0.42-0.5), p = 0.037, though this was no longer significant after adjusting for age. No atherosclerotic plaque was noted at either the carotid or femoral sites in any of the participants, and there was only one reported event of CAD and one of cerebrovascular accident. Though these measured outcomes may be attenuated after correcting for differences in maternal age and BMI at the index pregnancy, this has been explored in the subsequent regression models (Tables 5-7). e) Factors associated with FMI and progression to type 2 diabetes and cardiovascular risk using multiple variable logistic and linear regression i) Factors associated with FMI.
Linear regression models of relevant independent maternal factors present at pre-pregnancy (distal), pregnancy (intermediate) and post-partum (proximal) associated with log FMI are shown in Table 5. In the distal model, only multiparity was significantly associated with FMI. In the intermediate model, BMI measured at the first visit in pregnancy was significantly associated with the outcome, with BMI difference (post-partum BMI minus pregnancy BMI) being the only significant factor in the proximal model. In the final model, only initial BMI in pregnancy and BMI difference were significantly associated with the outcome log FMI. ii) Progression to T2DM (Table 6). Multivariate logistic regression models examining the association between relevant maternal risk factors and progression to diabetes found that higher parity, family history of T2DM, and positive HIV status were significant in the distal model. Prior HFDP was significant in the intermediate model, with absence of exclusive breastfeeding being significant in the proximal model. However, in the combined final model, only a prior history of HFDP, a family history of T2DM and an elevated VAT:SAT ratio were independently associated with risk of progression to T2DM. iii) Carotid intima media thickness (Table 7). Linear regression models examining maternal factors associated with log cIMT in the various models are displayed in Table 7. These analyses show that none of the variables included in the distal model were significant. In the intermediate model, maternal age and initial SBP remained significant, with triglyceride levels and BMI difference being significant in the proximal model. In the combined final model, maternal age, SBP at pregnancy and BMI difference were significantly associated with cIMT. HIV influence on outcomes The prevalence of HIV within our cohort was 21.6% (n = 44), of whom 18.5% (n = 19) were HIV reactive in the HFDP-exposed group. The independent influence of HIV on the measured outcomes, assessed in multivariate regression models, was not found to be significant between women with a prior history of HFDP and those without (Tables 5-7). Discussion We found that, compared to those without a history of HFDP, Black African women with prior HFDP have (a) a 10.5-fold increased risk of developing T2DM (4.6- and 27.6-fold for the GDM and DIP groups, respectively), (b) a 6-fold increased risk of having MetS, together with higher visceral adiposity, and (c) higher cardiovascular risk. HIV infection did not influence any of these outcomes. The rate of progression to T2DM we found was similar to that reported from Cape Town (48% at 5-6 years post-partum) in predominantly mixed-ancestral women using the same diagnostic criteria [7]. This is over 3-fold higher than the background prevalence rates of T2DM for black South African women [30], and identifies a highly vulnerable population. This risk was significantly higher both in women with GDM, the less severe form of dysglycaemia in pregnancy, and in women with DIP. Comparisons of our findings with international studies are challenging, as different study designs, lengths of follow-up, definitions and diagnostic criteria are employed. Nevertheless, a recent meta-analysis [31] identified the time period with the highest risk of progression as 3 to 6 years following the pregnancy, which aligns with our findings.
The modifiable risk factors associated with progression and risk of progression to T2DM have best been explored in high-income countries (HIC) [32][33][34], with limited data for low- to middle-income countries (LMIC) and very little African representation [35]. Identification of these factors plays a key role when implementing strategies to delay or prevent the onset of T2DM. The post-partum period offers a critical window of opportunity to implement such strategies and screen for diabetes in these high-risk individuals. In our study, though a history of HFDP, family history of T2DM and visceral adiposity were significantly associated with progression to T2DM, only adiposity is modifiable. Though these factors have been associated with progression in previous studies, two other studies in South Africa found a range of measures of the extent of dysglycaemia to be significant predictors: fasting and 2-hr plasma glucose on OGTT in pregnancy, diagnosis of HFDP before 24 weeks indicative of undiagnosed/unrecognized pregestational diabetes, and exposure to insulin and OHA in pregnancy [7,9], all of which are non-modifiable. The protective role of breastfeeding for T2DM [36] was not found to be significant in our study. The high number of women with HFDP who were unaware of their diabetes status indicates that postnatal assessment for diabetes is necessary even in an already overwhelmed LMIC health infrastructure. Our study highlighted that, in addition to the high risk of progression to T2DM, women with a history of HFDP are more vulnerable to CVD, as evidenced by their higher CVD risk scores and greater cIMT. Though their risk scores remained significantly elevated after correcting for obvious confounders, their cIMT, a surrogate marker for identifying pre-clinical atherosclerosis and hence often used as a surrogate CVD measure [37], was independent of prior HFDP and rather influenced by age, SBP and BMI difference. Though no cardiovascular events were reported in our study, their higher cardiovascular risk is likely to translate into CVD events with time, as our cohort was young with a relatively short period of follow-up. Consequently, our study did not corroborate the findings of a recent meta-analysis [2] demonstrating that women exposed to HFDP have a two-fold higher risk of cardiovascular events independent of the development of T2DM 10-25 years post-delivery. The rates of obesity were high across both groups during the index pregnancy and follow-up, in keeping with the high rates of obesity amongst Black women in SA [30]. However, fat distribution differed in those with and without a history of HFDP, with the former having significantly greater upper body fat distribution, in particular in the visceral area. Visceral adiposity is known to be a strong predictor of diabetes and CVD independent of overall fatness [17,38]. South African studies have demonstrated that visceral depots are generally lower in black African women compared to their white female counterparts, despite the higher insulin resistance and high rate of obesity in this group [30,39,40]. Pregnancies complicated by both obesity and HFDP are known to independently influence immediate and long-term maternal outcomes. In our study, initial BMI at the index pregnancy and weight gain post-pregnancy were not significantly associated with progression to diabetes or cardiovascular risk outcomes, but visceral adiposity was a significant predictor of progression to T2DM.
A better appreciation of the role of obesity and body composition in our measured cardiometabolic outcomes would have been possible were it not for our lack of knowledge of fat distribution before and during pregnancy and of BMI at pre-pregnancy and in the early post-delivery period, which are known to be independent risk factors for progression to T2DM [30]. It is therefore plausible that our study confirmed a high prevalence of MetS in women with HFDP, with a 6-fold increased risk after adjusting for age, parity, BMI and SBP. Overall, altered body composition favouring visceral adiposity, together with the increasing burden of HFDP and its cardiometabolic consequences, creates the setting for a perfect storm in which the coexistence of these entities may perpetuate a vicious cycle fueling the NCD burden. Of interest, the offspring of this cohort of women were evaluated for childhood adiposity at 3 to 6 years post-partum in another study [41]. Measurements included various anthropometric measures (BMI) and FMI as measured by the deuterium dilution method. Maternal BMI during pregnancy was found to play a more significant role than maternal hyperglycaemia in relation to the outcome. Though numerous confounders such as BMI, age and parity were identified in relation to the measured outcomes (T2DM, MetS and overall CVD risk), these were found to be non-significant; however, only weight-based parameters (initial BMI and post-partum weight gain), and not a history of HFDP, significantly influenced FMI. Strengths of our study were the inclusion of a control group of women with confirmed normoglycaemia on OGTT from the same time period and setting as the HFDP group. The sample was adequately powered for establishing both the primary and secondary outcomes of our study. The use of DXA to assess maternal body composition at 3-5 years post-partum has not previously been reported from Africa. Lastly, exploring the impact of HIV on these outcomes, which was not significant, contributes to the limited existing data. A major limitation of our study was that it was a single-centre study, which limits its applicability and generalisability. Further limitations included missing pre-pregnancy BMI values for participants, leading to difficulty in exploring this variable as a confounder. Difficulty tracing participants resulted in a small sample size and hence less power for sub-group comparisons, in particular within the HFDP group (GDM vs. DIP). Self-reporting bias was a potential problem in the administered questionnaire. Cardiovascular risk and body composition outcomes were not measured in pregnancy for longitudinal comparison. However, these parameters are often influenced by the normal physiological adaptations of pregnancy, making interpretation difficult. The use of the FRS to calculate cardiovascular risk in these women may have underestimated their risk, since the formula does not take into account other unique potential risk factors relating to HFDP such as recurrent HFDP and hypertensive disorders of pregnancy. The time of diabetes diagnosis is unclear as women did not have a 6-week post-partum OGTT; in some cases, HFDP may never have resolved post-partum. The small number of women who were HIV positive may have accounted for the lack of effect it had on the measured outcomes. The lack of data surrounding ART regimens was a further limitation.
The ever-growing epidemic of diabetes, obesity, and CVD, particularly amongst LMIC populations, poses a significant public health burden, occurring alongside the burden of chronic infectious diseases. Our study identifies a group of young women at high risk of cardiometabolic outcomes, in whom the postpartum period offers a window of opportunity to implement targeted screening, counseling and lifestyle and/or pharmacologic interventions. Future prospective studies are needed to explore the best timing and impact of these interventions in curtailing the adverse outcomes and improving cardiometabolic health in this vulnerable group of women.
The Efficacy of Gum Arabic in Managing Diseases: A Systematic Review of Evidence-Based Clinical Trials Gum arabic (GA) is a natural product commonly used as a household remedy for treating various diseases in the Sub-Saharan Africa region. Despite its claimed benefits, there has been a lack of research on the findings of current clinical trials (CTs) that investigated its efficacy in the treatment of various medical diseases. The aim of this systematic review was to study CTs which focused on GA and its possible use in the management of various medical diseases. A search of the extant literature was performed in the PubMed, Scopus, and Cochrane databases to retrieve CTs focusing on evidence-based clinical indications. The databases were searched using the keywords ("Gum Arabic" OR "Acacia senegal" OR "Acacia seyal" OR "Gum Acacia" OR "Acacia Arabica") AND ("Clinical Trial" OR "Randomized Controlled Trial" OR "Randomized Clinical Trial"). While performing the systematic review, data were obtained on the following parameters: title, authors, date of publication, study design, study aim, sample size, type of intervention used, targeted medical diseases, and main findings. Twenty-nine papers were included in this systematic review. The results showed that ingestion of GA altered lipid profiles, renal profiles, plaque, gingival scores, biochemical parameters, blood pressure, inflammatory markers, and adiposity. GA exhibited anti-inflammatory, prebiotic, and antibacterial properties. GA has been successfully used to treat sickle cell anemia, rheumatoid arthritis, metabolic disorders, periodontitis, gastrointestinal conditions, and kidney diseases. Herein, we discuss GA with respect to the underlying mechanisms involved in each medical disease, thereby justifying GA's future role as a therapeutic agent. Introduction Gum arabic (GA) is an exudate with a gummy texture obtained from the umbrella-shaped branches of Acacia seyal and Acacia senegal. The exudate either seeps naturally or is collected from cuts made in the branches, and it is left to harden in the air. GA is mainly found in Sudan, Chad, and Nigeria [1,2] (Figure 1). Structurally, GA is an arabinogalactan-protein complex. This complex is composed of magnesium, calcium, and potassium salts of arabic acid. The arabic acid structure is made up of 1,3-linked β-D-galactopyranosyl units, along with branches that consist of two to five β-D-galactopyranosyl residues linked together through 1,3-ether linkages and connected to the fundamental β-D-galactopyranosyl chain by 1,6-linkages (Figure 2) [1]. GA is largely fermented in the large intestine into short-chain fatty acids by microorganisms [3]. Traditionally, GA has been used as an oral hygiene substance [4]. Health benefits were seen following GA treatment. Direct application of a herbal formulation containing GA on teeth and gums significantly reduced gingival and plaque index scores [4]. GA contains high amounts of calcium and phosphate ions. In vitro studies demonstrated that it can prevent tooth enamel demineralization, in addition to enhancing its remineralization [5-7]. In mice, GA supplementation in drinking water or along with the diet was observed to reduce obesity by altering the expression of lipid metabolic genes and age-dependent fat deposition in the visceral adipose tissue [8,9]. It lowers cholesterol levels, as it possesses a high amount of fiber. GA treatment, along with atorvastatin, reduced total cholesterol, LDL, and triglyceride levels in patients with hyperlipidemia [10].
These effects subsequently decrease the risk of heart disease [10]. GA also exhibited antioxidant properties by increasing superoxide dismutase, catalase, and glutathione peroxidase activity in the liver [3,11-13]. Oral GA supplementation increased 24-hr creatinine clearance and binds free water, thereby reducing intestinal absorption and the water content of urine [14,15]. Surprisingly, GA was long overlooked by local communities, and the tree branches were instead used for charcoal and firewood, with consequent declines in production [1]. Thus, its benefits remained largely unknown until recently, when many experimental and CT studies revealed them. Several experimental studies have demonstrated the potential benefits of the use of GA in clinical practice [16-19]. In a recent study, GA treatment inhibited colorectal carcinogenesis in mice [16]. GA treatment reduced the formation of aberrant crypt foci in the colon, mainly by reducing local genotoxicity as well as oxidative stress [16]. In the same study, reduced genotoxicity in the liver and bone marrow, as well as lower oxidative stress in the liver and blood, were observed [16]. GA supplementation protected the rat heart from ischemia/reperfusion injury by decreasing apoptotic enzyme levels as well as the formation of proinflammatory cytokines [20]. GA treatment alleviated B1-induced hepatic injury through its antioxidant and anti-inflammatory properties [17]. Another research study showed that regular GA consumption stimulates innate immunity against various infections by inducing cathelicidin expression [18]. GA supplementation in type 2 diabetic rats prevented learning and memory loss.
These effects were associated with increased expression of PGC-1α and ATP synthase β-subunit protein in the hippocampus [21]. GA administration in rats with dextran sodium sulfate-induced colitis resulted in a reduction in the severity of colitis, colonic fibrosis, and TGFβ1 expression [22]. GA pretreatment prevented butralin-exposure-induced renal damage by promoting antioxidants and increasing free-radical scavenging activity [23]. Another experimental study on diabetic rats concluded that GA reduced the progression of chronic kidney disease [24]. Studies have also been published on GA and water pipe smoking (WPS) in mice. Researchers showed that in male mice exposed to hookah smoke for 30 minutes each day for 30 days, coadministration of GA reduced the negative effects of smoke exposure on the reproductive system [25]. Another study on mice exposed to WPS and GA concluded that GA reduced the harmful effects of WPS on thrombosis, cardiovascular toxicity, inflammation, and oxidative stress [26]. GA was found to be effective against diarrhea [27,28]. GA supplementation as an additive to oral rehydration solution significantly reduced the duration of diarrhea and the frequency of defecation and improved the consistency of the stool [27]. Clinical trials (CTs) are conducted in human populations to develop new therapeutic drugs in order to treat, prevent, or reduce the incidence of disease [29]. To the best of our knowledge, the present systematic review may be the first of its kind to discuss CTs conducted on human subjects exclusively focusing on GA and its beneficial effects in the management of various medical diseases. Study Design A systematic review of all human CTs was conducted to explore the current best evidence for the possible use of GA for various medical diseases. Search Strategy Relevant studies were identified through a thorough search of electronic databases, namely PubMed, Scopus, and the Cochrane library. In addition, a snowballing method was employed whereby the reference lists of retrieved papers were screened for any additional articles that met the eligibility criteria of the current study. The searches were performed between April 2022 and August 2022. The databases were searched using the keywords: ("Gum Arabic" OR "Acacia senegal" OR "Acacia seyal" OR "Gum Acacia" OR "Acacia Arabica") AND ("Clinical Trial" OR "Randomized Controlled Trial" OR "Randomized Clinical Trial"). Unpublished articles were excluded from the search strategy. Inclusion Criteria Published literature that fulfilled the following criteria was included: all studies that were published in the English language and reported CTs of GA treatment against targeted medical conditions in humans. All CTs of GA treatment, regardless of randomization, blinding, phase of trial, and statistical method, were used for assessment of outcome, irrespective of negative or positive results. Exclusion Criteria The exclusion criteria for the present systematic review were: "Studies that used other natural products/compounds combined with GA for the treatment"; "any preclinical, unpublished, duplicated, and incomplete CTs"; "Studies that reported the GA CT without a specific targeted medical disease"; and "Studies that were published in languages other than English". Data Collection Reviewers (Y.A.-J., N.T.B.A., R.A., J.M., and S.R.S.) first screened the titles and abstracts of all retrieved papers for inclusion.
The full texts of all screened papers were then studied independently in order to determine the final study selection. Duplicate information on the same studies was removed. Agreement on the inclusion and exclusion criteria was concordant, and discrepancies were resolved by consensus of all researchers. The following data were collected from the included studies: authors and date of study, study design, targeted medical disease, sample size, and main results/findings (Table 1). Study Selection This review was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The literature search led to the identification of 631 studies (Figure 3). Following an initial screening for duplicates and after applying the inclusion criteria to the titles, the abstracts of 38 articles were found to be suitable for full-text screening. During title screening, the agreement between the six reviewers was unanimous and conclusive. Twenty-nine articles were eligible, as they met the inclusion criteria, and were therefore included in the systematic review. Study Characteristics The majority of the publications (79.3%) were published after 2010. The papers included information from nine different countries. The studies included in the systematic review were from worldwide populations in Asia, Africa, Europe, and the United States. All the studies included in this review were CTs.
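For readers who want to re-run or update the database query described in the search strategy, the PubMed portion can be reproduced programmatically through NCBI's public E-utilities interface. The snippet below is only an illustrative sketch: it queries PubMed alone (so the hit count will differ from the 631 records identified across PubMed, Scopus, and Cochrane), it reflects the current index rather than the April to August 2022 search window, and it uses a placeholder e-mail address.

```python
# Illustrative sketch: reproducing the PubMed part of the search strategy
# via the NCBI E-utilities "esearch" endpoint. Counts will differ from the
# review's 631 records (which also included Scopus and Cochrane).
import requests

query = (
    '("Gum Arabic" OR "Acacia senegal" OR "Acacia seyal" OR '
    '"Gum Acacia" OR "Acacia Arabica") AND '
    '("Clinical Trial" OR "Randomized Controlled Trial" OR "Randomized Clinical Trial")'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": 200,                      # return up to 200 PMIDs
        "email": "your.name@example.org",   # placeholder; NCBI asks for contact info
    },
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print("Total hits:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```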
Of the 29 articles that were included, 31.0% were related to metabolic disorders (e.g., type 2 diabetes and hyperlipidemia), 6.9% were related to rheumatoid arthritis [46,47], and 3.4% were related to drug efficacy [48]; the remaining articles addressed sickle cell anemia [38-40], oral health, gastrointestinal conditions, and kidney disease (Table 1). Gum Arabic and Other Diseases The findings from the different published studies are discussed under the following sections. Gum Arabic and Metabolic Disorders GA was found to have a positive effect on satiety and appetite reduction [30]. Subjects reported a decrease in caloric intake and an increase in their satiety following consumption of GA [33]. Furthermore, when studying the effects of ingesting GA on adults who were at high risk of developing metabolic syndrome, it was found that study subjects had reduced systolic and diastolic blood pressure, fat-free body mass, appetite, and fasting plasma glucose, along with an increased dietary fiber intake. Improvements in bloating and bowel movements were also reported [34]. Another study reported similar findings; subjects who ingested GA exhibited a significant reduction in fasting plasma glucose and HbA1c [34-36]. Another trial, in which patients with hyperlipidemia were given GA alongside atorvastatin, reported that the reduction in their lipid profile was significantly improved [10]. A significant increase in the HDL cholesterol level was also noted [35]. These findings were contradicted by one study that investigated the effects of the viscosity of fiber supplements on the lipid profile. The authors discovered that GA, owing to its low viscosity, did not significantly lower cholesterol levels compared to other fiber supplements containing medium- to high-viscosity water-soluble dietary fiber [37]. Moreover, GA was deemed useful for the reduction of weight gain in patients with type 2 diabetes, as it helped to decrease the body adiposity index by 23.7% and BMI by 2% [31]. Another study yielded similar findings; a reduction of 2.18% in body fat was reported in patients who ingested GA [32]. It was further observed that consumption of GA significantly decreases fasting blood glucose, HbA1c, total protein, and uric acid concentrations [36]. It was also observed that GA is a helpful supplement for diabetic patients, specifically those who have diabetic nephropathy, as it decreases blood urea nitrogen and creatinine concentrations [36]. Gum Arabic and Sickle Cell Anemia In sickle cell anemia, dyslipidemia is a common occurrence as a result of oxidative stress reactions [38]. It was revealed that GA significantly reduced total cholesterol, LDL, and triglyceride levels [38]. A team of researchers who conducted a CT found that sickle cell patients who consumed GA experienced a significant reduction in their direct bilirubin, serum alanine transaminase, and serum urea levels [39]. Another beneficial effect of GA was an increase in total antioxidant capacity and a reduction in the MDA and H2O2 oxidative markers in sickle cell anemia patients [40]. Gum Arabic and Oral Health CTs were conducted to test the efficacy of GA as an antibacterial agent in comparison to liquorice and chlorhexidine mouthwashes. Results revealed a statistically significant decrease in the counts of Streptococcus mutans and Lactobacillus acidophilus for both the GA and liquorice mouthwashes, without any oral side effects. Moreover, resistance was observed in subjects who used chlorhexidine mouthwash.
No significant difference was found between the GA and liquorice mouthwashes, implying that both can be used to effectively prevent dental caries [41]. Furthermore, another research finding in the field of oral health showed the effective use of GA in the prevention of dental plaque [42]. In comparison to sugar-free gum, daily photographic assessment of erythrosine-stained plaque showed lower scores following consumption of GA [42]. A group of researchers observed similar findings with respect to reduction in plaque and gingival inflammation [43]. Subjects who applied GA powder had a significantly lower mean gingival index score, mean plaque index score, and gingival crevicular fluid interleukin-1β [43]. A CT reported that GA use has been of immense benefit for patients with xerostomia. Subjects in the GA group had a significantly greater increase in salivary flow (8.03 g within 10 min) compared to the control group [45]. Another CT observed a significant reduction in probing pocket depth (PPD) and a gain in clinical attachment level (CAL) in subjects who used GA gel. Additionally, improved plaque and gingival index scores were also noted [44]. Gum Arabic and Rheumatoid Arthritis It was found that GA had a positive effect on restoring the baseline liver and kidney profiles in patients with rheumatoid arthritis [46]. GA significantly decreased liver enzymes, with the exception of alkaline phosphatase, urea, and sodium levels, and significantly increased albumin levels, with a minor impact on the serum globulin level [46]. Another research study showed that GA significantly decreased TNF-alpha, the erythrocyte sedimentation rate, and the number of swollen and tender joints, as well as the disease severity, in rheumatoid arthritis patients [47]. Gum Arabic and Drug Interactions GA was also found to affect drug efficacy, specifically the absorption of amoxicillin [48]. When the peak amoxicillin concentration was compared between two groups (one that took GA 2 h after amoxicillin ingestion and the other that took the drug simultaneously with GA), it was found to be significantly lower in the group that took GA simultaneously [48]. In an experimental study, the effect of GA on gastric ulcers and its interaction with the antiulcer effect of ranitidine was studied in rats. GA significantly potentiated the antiulcer effect of ranitidine [56]. In another study, oral administration of GA was shown to accelerate the absorption of certain solutes [57]. Meloxicam is a non-steroidal anti-inflammatory drug that inhibits COX-1 and COX-2 [58]. In rats, GA supplementation showed protective effects against meloxicam-induced gastrointestinal insult. In this study, there were no pharmacological interactions with meloxicam [58]. Gum Arabic and Gastrointestinal Conditions Researchers observed that administration of GA improved acute non-bloody diarrhea in children in terms of symptoms, weight gain, and the prevention of marked severe dehydration [54]. Treatment with GA was found to be ineffective in patients with fecal incontinence. Patients who received GA supplementation had a fecal incontinence frequency that was not statistically different from that of the group that received a placebo. Instead, psyllium as a supplement was found to be beneficial in reducing the frequency of fecal incontinence [52]. Another CT revealed a beneficial effect of GA in children who had colostomies.
It was revealed that the group with GA ointment experienced a significant reduction in peristomal skin inflammation in comparison to the control group [49]. GA was also shown to have potential prebiotic benefits. Researchers found that following ingestion of GA, the counts of Bifidobacterium and Lactobacilli increased significantly [53]. In a CT, patients with gastroparesis were shown to benefit from GA administration, as it played a role in the regulation of their blood glucose levels. However, no significant findings were noted with respect to the mouth-to-cecum transit delay [55]. Gum Arabic and Chronic Kidney Diseases GA was also found to alleviate the adverse effects of chronic renal failure [13,14,50,51]. Patients with chronic renal failure who received GA showed significant decreases in serum urea levels compared to the baseline and the control group [51]. Serum creatinine levels also significantly decreased in the groups of gum users compared to the control group. There was a significant decrease in the serum uric acid level compared to baseline. Serum calcium levels increased, and this increase was significantly different from the baseline and control groups. Serum phosphorus levels decreased significantly compared to baseline [51]. A reduction in serum urea level was also reported in another CT, alongside an increase in fecal bacterial mass and nitrogen content in patients following GA consumption [14]. A study revealed that GA significantly increased total antioxidant capacity levels and reduced the oxidative markers MDA and C-reactive protein in patients undergoing hemodialysis, serving as evidence of the potent anti-inflammatory properties of GA [13]. These results are similar to those reported in another study showing that consumption of GA significantly decreased C-reactive protein and sodium levels without affecting the levels of other electrolytes, urine volume, or indoxyl sulfate [50]. A schematic diagram depicting the beneficial effects of GA on various medical diseases is shown in Figure 4. GA is a soluble fiber; dietary fiber increases fecal bulk [61] and reduces transit time [62]. Dietary fiber is important for combating obesity, and increased intake of dietary fiber has been associated with a reduction in BMI [63]. Dietary fibers have effects on satiety and blood glucose levels. A previous study showed that GA exhibited significant positive effects on satiety measures 15, 30, and 240 min following consumption [30]. The type and viscosity of fiber can have an impact on blood glucose levels after consumption, and variation in the amount of fiber consumed cannot consistently predict the resulting reduction in the postprandial glycemic response [30].
In animals [9,64] and humans, GA treatment has been shown to be effective in reducing body weight and adiposity. Researchers found that GA supplementation significantly reduced BMI, body fat percentage, hip circumference, lipid accumulation product, and visceral adiposity index (VAI) [31,32]. Furthermore, VAI is associated with impaired glucose and lipid metabolism, insulin resistance, and hypertension [31]. Thus, it was concluded that GA's positive effects in combating obesity may be related to its positive effect on satiety [30]. Studies confirmed the effects of GA on chronic conditions, demonstrating their repeatability [31,33]. These effects include a reduction in weight gain, blood pressure, and BMI, which are all positive indicators, strongly suggesting the use of GA as a supplement [31,33]. In addition, it has been demonstrated that GA ingestion leads to a decrease in total cholesterol, LDL, and triglycerides in patients with sickle cell anemia [38]. This lipid-lowering effect is useful, as dyslipidemia is common in patients with sickle cell anemia as a result of oxidative stress [38]. It has been suggested that GA lowers lipid levels. The proposed mechanism is that GA binds to bile acids and reduces their absorption from the terminal ileum [65]. The fermentation process in the large intestine then makes the bile acids insoluble, thus promoting their excretion in stool [65]. De novo production of bile acids by the liver requires serum cholesterol; thus, prolonged ingestion of GA may lead to a reduction in the plasma cholesterol level [65]. Whereas this information supports the former hypothesis, the findings from the published literature retrieved for this review do not necessarily confirm or deny this phenomenon. Further studies with sufficient clinical data need to be considered before GA is prescribed as a supplement for the treatment of medical diseases. A study was conducted on 47 patients carrying hemoglobin SS (HbS); GA was administered at a dose of 30 g/day for a period of 12 weeks [39]. In patients with sickle cell anemia, GA administration decreased bilirubin, AST, and serum urea [39]. The results also show that GA increased the level of HbF and significantly decreased the level of HbS [40]. The positive effect of GA was explained by the fact that GA degradation results in short-chain fatty acids, which, in turn, stimulate HbF expression in red blood cells [40]. It has not yet been confirmed whether GA directly causes these effects or whether they are byproducts of the repeatedly demonstrated effects on dietary content and overall caloric intake. However, this may be less likely, given that the effects lasted for eight weeks after patients discontinued treatment. Additionally, GA was found to significantly increase total antioxidant capacity and decrease MDA and H2O2 levels in patients with sickle cell anemia [40]. A similar effect was also found in patients undergoing dialysis, whereby treatment with GA led to decreased CRP and MDA levels and increased antioxidant capacity [51]. GA also resulted in decreased post-colostomy peristomal skin inflammation in pediatric patients treated with acacia ointment compared to those treated with zinc sulfate ointment [49]. Although this does not directly apply to the other studies because of the different mode of delivery, it does potentially provide insight into the mechanism of action of the active compounds in GA, which need to be studied in detail [49].
In addition to these specific mechanisms, GA's general anti-inflammatory [66] and antioxidant effects [11,13] are important in clinical conditions, especially following surgery. GA was also demonstrated to decrease serum urea, creatinine, uric acid, and phosphorus levels, and to increase serum calcium [51], in accordance with its positive effects on kidney function profiles reported in other studies involving various chronic kidney diseases [50]. It is pertinent to mention that DNA damage in kidney disease was first detected in the deoxycorticosterone acetate (DOCA)/salt model, in which researchers found DNA single- and double-strand breaks [67]. Oxidative stress leads to damage of the kidneys [68]. The antioxidative properties of GA may counteract the formation of superoxide and oxidative-stress-induced DNA double-strand breaks [11]. The potent antioxidant properties of GA can be used in vulnerable patient populations with various clinical conditions characterized by increased lipid peroxidation and tissue injury [37]. However, potential drug interactions need to be better characterized, and these findings need to be explored in CTs with larger sample sizes. GA was found to have a potential prebiotic effect, as researchers have demonstrated increased counts of Bifidobacteria and Lactobacilli in patients taking GA [53]. This can be explained by the fact that GA is only degraded in the cecum, where it undergoes complete fermentation and therefore promotes bacterial proliferation [53]. Fecal incontinence (FI) is the loss of control over bowel contents, leading to the discharge of fecal matter. Dietary fiber can lessen FI through its capacity to withstand fermentation by colonic bacteria, as well as through its solubility and degradation [52,69]. In a single-blind RCT, GA supplementation did not significantly reduce FI frequency compared to psyllium supplementation [52]. Researchers hypothesized that the high degradation of GA by colonic bacteria [69], and its consequently reduced content in feces, could be the reason for its lack of clinical effect on FI [52]. In rheumatoid arthritis, the inflammatory state affects the synovial joints [70]. In patients with rheumatoid arthritis, GA treatment improved liver and kidney enzyme profiles, with a positive subsequent improvement in their condition [46]. Other findings reported decreased TNF-α and ESR; patients also experienced fewer swollen and tender joints and lower disease severity scores [47]. GA may act as a positive immunomodulator. Butyrate is an end product of the anaerobic fermentation of dietary fiber and starch by colonic bacteria [71]. Butyrate is a well-known potent anti-inflammatory agent. It suppresses the expression of proinflammatory cytokines by inhibiting NFκB activation [71]. The anti-inflammatory property of GA is thus manifested through its fermentation product, butyrate. GA can be used as a natural means of increasing the level of short-chain fatty acids, which have an immunomodulatory effect that is helpful in reducing inflammation and improving patients' quality of life [47]. GA was also shown to affect amoxicillin absorption [48]. Although no specific mechanism was suggested in the study, this result potentially indicates that GA affects the absorption of biomolecules in the gut, which may explain some of the effects observed in other studies [48]. Any change in serum amoxicillin concentrations at a given dose would also be relevant to hypersensitivity reactions, which still needs to be considered if GA becomes more widely used.
This was further substantiated by a study that found that the coexistence of GA and amoxicillin in the upper gastrointestinal tract significantly decreased the absorption of amoxicillin [48]. This might lead to therapeutic failure and the development of drug resistance [48]. Numerous research papers have found a possible relationship between GA and oral health [41-45]. A recent CT on GA mouthwash showed promising caries-preventive and antibacterial effects with no oral side effects. Furthermore, the lack of a significant difference in oral Streptococcus mutans and Lactobacillus acidophilus counts, as well as in DMF index scores, between patients treated with either a licorice mouthwash or a GA mouthwash indicated equivalent capabilities of the two interventions in preventing caries [41]. The same study demonstrated bacterial resistance to, and oral side effects of, a chemical agent, chlorhexidine mouthwash, after 9 and 12 months of use. This demonstrates that GA may be used as a natural mouthwash to prevent caries and may have benefits associated with improved adherence. However, in our opinion, future studies are needed to ensure that other caries-causing bacteria are also inhibited by GA. In addition, GA supplementation was found to be associated with increased enamel hardness, which may be explained by the presence of polysaccharides and the high concentration of minerals (calcium, magnesium, and sodium) in GA [5-7]. Other findings are associated with oral health, including a decrease in plaque formation compared to intervention with sugar-free gum [42-44]. This suggests that GA is also a suitable supplement that can be used to effectively prevent oral infections, in particular in patients with difficulty brushing their teeth (e.g., Parkinson's patients living alone) or in areas where access to running water or dental hygiene products is limited [72]. The present study is subject to some limitations. We did not assess the reported CTs in terms of randomization, blinding, or phase of trial (0-V). Furthermore, the CTs were not scored in terms of quality. We did not limit the literature search to any particular time period but included as many studies as possible. Moreover, we did not assess the included published papers for publication bias. Recommendations The majority of the reported CTs on GA are limited by the following principal factors: a lack of a specifically demonstrated mechanism of action, a lack of repeatable findings, and a lack of studies involving special populations, for example, pregnant and breast-feeding women, children, and elderly patients. Addressing these issues will provide robust evidence for the use of GA in therapeutic applications. GA has been reported to contain various compounds. The isolation of active compounds from GA is highly recommended because the efficacy of active compounds can be easily studied at the molecular or genetic level. Furthermore, this can ease the marketing of the drug as an effective candidate for therapeutic use. The reported CTs used different doses of GA. Further research is needed on the selection and preparation of the final dosage of GA specific to each disease. For effective results, additional focus should be placed on nanoformulation-based drug delivery of GA. An example is resveratrol nanoformulations, which have been reported to lead to remarkable results [73]. The CTs included in this review did not report any toxicities or side effects associated with GA treatment.
The majority of the included CTs are of short duration and therefore may not have revealed toxicities or side effects. Hence, future multicenter studies with longer durations of treatment and larger sample sizes are warranted. Conclusions This systematic review represents a humble attempt to justify the role of GA in complementary medicine with evidence from published CTs. The results of the included CTs reported on the efficacy of GA against various diseases, such as sickle cell anemia, rheumatoid arthritis, periodontitis, metabolic disorders, kidney disease, oral health problems, gastrointestinal conditions, and peristomal skin inflammation. These findings indicate that GA exerted quantifiable benefits in a number of inpatient and outpatient cohorts. Admittedly, the CTs did not provide sufficient data on the adverse effects of GA. Future studies need to explore each active compound present in GA, which may have various protective functions in different medical diseases. Easy availability, compliance, and cost-effectiveness could encourage the use of GA, supported by evidence from larger studies from different parts of the world. Additionally, toxicity-related changes in the liver and kidney need to be explored in detail. Acknowledgments: The authors acknowledge the kind help received from Hassan Al-Lawati for the photographs taken from a Shutterstock subscription. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations: NF-κB, nuclear factor-κB; PGC-1alpha, peroxisome proliferator-activated receptor-gamma coactivator; TGFβ1, transforming growth factor beta 1; TNF, tumor necrosis factor; WPS, water pipe smoking.
Inhibitory effect of alpinetin on IL-6 expression by promoting cytosine methylation in CpG islands in the IL-6 promoter region Abstract Background Alpinetin is a flavonoid which exerts antibacterial and anti-inflammatory functions. In order to prove that induced methylation is an important mechanism by which alpinetin regulates the expression of the inflammatory factor interleukin-6 (IL-6), we detected the dinucleotide methylation status of the CpG islands in the IL-6 promoter region and the IL-6 level after treatment of RAW246.7 murine macrophages with alpinetin. Methods After RAW246.7 murine macrophages were treated with alpinetin, alpinetin + GW9662 (a peroxisome proliferator-activated receptor (PPAR) antagonist), or alpinetin + DNA methyltransferase 3 alpha (DNMT3A) siRNA for 96 hr, the CpG islands were analyzed using time-of-flight mass spectrometry (TOF-MS) and bisulfite sequencing polymerase chain reaction (BSP). The dinucleotide methylation status of the CpG islands in the IL-6 promoter region was analyzed by methylation-specific polymerase chain reaction (PCR). The IL-6 level was detected using the enzyme-linked immunosorbent assay (ELISA) method. Pearson's correlation analysis was conducted to test for a potential correlation between the methylation status of the CpG islands in the IL-6 promoter region and the IL-6 level in RAW246.7 cells. Results Alpinetin promoted the dinucleotide methylation of two CpG islands in the IL-6 promoter region stretching 500-2500 bp upstream of the transcriptional start site (TSS) (p < .05). This promoting effect was more pronounced for the CpG island stretching 500-1500 bp upstream. The dinucleotide methylation ratio at this position was significantly inversely correlated with the level of IL-6 (p < .05). The PPAR antagonist GW9662 and interference with DNMT3A could reverse both the alpinetin-induced methylation and the inhibitory effect on IL-6 expression. Conclusion Alpinetin could increase the dinucleotide methylation of the CpG islands in the IL-6 promoter region by activating a methyltransferase, thus inhibiting IL-6 expression in murine macrophages. | INTRODUCTION Inflammation is intended to restore the steady-state level of inflammatory factors, but the resulting inflammatory damage is considered a major threat to human health (Abdelhalim, Moussa, Qaid, & Al-Ayed, 2018). Glucocorticoids were once used as an intervention for inflammatory diseases including rheumatoid arthritis and inflammatory bowel disease, but the side effects of their long-term use cannot be ignored (Lambert, Roff, Panganiban, Douglas, & Ishmael, 2018; Palme, 2018). Therefore, looking for new anti-inflammatory drugs with low toxicity is a primary concern at present. Chinese patent medicines have proven to have anti-inflammatory functions, and many new herbal medicines have drawn increasing attention because of their anti-inflammatory effects (Hu, Yang, Tu, Luo, & Ma, 2013; Lee & Lee, 2016; Liang et al., 2018; Raja, Saranya, & Prabhu, 2019; Tsai et al., 2018; Yang et al., 2019). For example, a Dong medicine extracted from the fruits of rusty-leaf muuna is usually used to treat painful swelling on the body surface. Evidence further proves that 2-phenyl-chromone, a type of flavonoid, is the main active component in this medicine (Tsai et al., 2018). Flavonoids have drawn increasing attention due to their roles in regulating glucose and lipid metabolism and insulin resistance (Raja et al., 2019; Yang et al., 2019). Flavonoids may also play a part in regulating the production of inflammatory mediators (Lee & Lee, 2016).
The pharmacological effects of flavonoids are related to the activation of PPARs, which inhibit the expression of inflammatory mediators through several pathways. Among the known flavonoids, alpinetin, derived from Alpinia katsumadai Hayata, is the most easily available and is highly effective in activating PPAR (Hu et al., 2013). A previous study indicated that alpinetin inhibits intracellular inflammatory signaling pathways after activating PPARs, while inhibiting the synthesis of upstream transcription factors of inflammatory genes such as tumor necrosis factor α (TNF-α), IL-1β, and IL-6. Notably, alpinetin induces deacetylation of H3K9 bound to the promoter region of the inflammatory genes by activating histone deacetylase 1 (HDAC1), which further influences the binding of the transcription factors to the promoter (Liang et al., 2018). Additionally, alpinetin regulates the expression of the inflammatory mediators TNF-α and IL-1β as well as Toll-like receptor 4 (TLR4)-mediated nuclear transcription factor-kappaB (NF-κB) and NOD-like receptor protein 3 (NLRP3) inflammasome activation, indicating that alpinetin has protective effects on dextran sulfate sodium (DSS)-induced colitis in mice (He et al., 2016). In our previous report, it was found that the amount of DNMT3A bound to PPAR in the nucleus, detected by co-immunoprecipitation, increases with the concentration of alpinetin in RAW246.7 murine macrophages. This indicated that alpinetin may regulate the expression of its target genes by inducing methylation after activating PPAR (Liang et al., 2016). In the present study, we examined the effects and mechanisms of alpinetin on the dinucleotide methylation status of the CpG islands in the IL-6 promoter region and on the IL-6 level in RAW246.7 murine macrophages, to provide a basis for the clinical use of alpinetin. | MATERIALS AND METHODS 2.1 | Cell culture RAW246.7 murine macrophages were purchased from Nanjing Hua'ao Biotechnology Co., Ltd., China, and cultured in RPMI 1640 medium (Promega Biosciences Inc.) containing 10% fetal bovine serum (FBS), 100 U/ml penicillin and 0.1 mg/ml streptomycin (Invitrogen, Carlsbad, CA, USA) at 37°C in a 5% CO2 incubator. The culture medium was replaced on a regular basis and cell passage was performed. | DNA modification Wizard genomic DNA purification kits (Promega Biosciences Inc.) were used to extract DNA from RAW246.7 murine macrophages. DNA Methylation Gold kits (Promega Biosciences Inc.) were used for sodium bisulfite modification of DNA according to the instruction manual. Unmethylated cytosine was converted into uracil after sodium bisulfite modification, while methylated cytosine remained unchanged. | Bisulfite sequencing polymerase chain reaction (BSP) detection The sequence of the murine IL-6 gene was retrieved from the University of California Santa Cruz (UCSC) genome browser (http://genome.ucsc.edu/cgi-bin/hggateway). The sequence stretching 3,000 bp upstream of the transcriptional start site (TSS) was located at the Genomic Sequence interface and input into the on-line tool Cpgplot. Two CpG islands, spanning 500-1500 bp and 1500-2500 bp upstream of the TSS in the IL-6 promoter region, were confirmed to meet the definition of a CpG island: longer than 200 bp, GC content approaching 50%, and an observed-to-expected CpG ratio (ObsCpG/ExpCpG) exceeding 0.50.
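The CpG-island criteria quoted above (length greater than 200 bp, GC content approaching 50%, and an ObsCpG/ExpCpG ratio above 0.50) can be computed directly from any candidate sequence using the standard formula ObsCpG/ExpCpG = (number of CG dinucleotides * window length) / (count of C * count of G). The short sketch below illustrates that calculation; the example sequence is arbitrary and is not the murine IL-6 promoter.

```python
# Illustrative sketch of the CpG-island metrics used above.
# The example sequence is arbitrary, not the murine IL-6 promoter.

def cpg_island_metrics(seq: str):
    """Return length, GC fraction and observed/expected CpG ratio for a sequence."""
    seq = seq.upper()
    n = len(seq)
    c = seq.count("C")
    g = seq.count("G")
    cpg = seq.count("CG")                      # observed CpG dinucleotides
    gc_content = (c + g) / n if n else 0.0
    # Obs/Exp = (CpG count * length) / (C count * G count)
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0
    return n, gc_content, obs_exp

def looks_like_cpg_island(seq, min_len=200, min_gc=0.5, min_obs_exp=0.5):
    """Apply the thresholds quoted in the text (>200 bp, GC ~50%, Obs/Exp > 0.5)."""
    n, gc, ratio = cpg_island_metrics(seq)
    return n > min_len and gc >= min_gc and ratio > min_obs_exp

if __name__ == "__main__":
    demo = "CGCGATCGCGGCGCTACGCGTACGCGGCCGCGATCG" * 8   # toy CpG-rich sequence
    length, gc, ratio = cpg_island_metrics(demo)
    print(f"length={length} bp, GC={gc:.2f}, Obs/Exp CpG={ratio:.2f}, "
          f"island={looks_like_cpg_island(demo)}")
```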
Primers were designed based on the sequences flanking the two CpG islands in the IL-6 promoter region using the Methyl Primer Express V2.0 software (Applied Biosystems, Foster City, CA, USA). Two pairs of upstream and downstream primers were designed (Table 1). The amplified region was located within each CpG island, and the upstream and downstream primers contained CpG dinucleotides. According to the instruction manual (TaKaRa, Japan), the PCR reaction system was 50 μl in volume, containing 1 μl of bisulfite-modified DNA template, 1 μl each of the upstream and downstream primers, 5 μl of 10× PCR buffer containing Mg2+, 1 μl of 10 mol/L dNTP, and 0.8 μl of 5 × 10^6 U/L Taq DNA polymerase. The PCR procedure was: predenaturation at 95°C for 4 min; denaturation at 94°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 30 s, for 38 cycles; and final extension at 72°C for 8 min. PCR products were analyzed by 3% agarose gel electrophoresis. Target fragments were identified, and 10 μl of the products was submitted to China National GeneBank (Shenzhen, China) for TOF-MS. According to the principle of the modification, if a cytosine in the original sequence is methylated, the sequencing result remains unchanged; if the cytosine is not methylated, the sequencing will show that the cytosine has been converted into thymine (T). Each batch of DNA was treated in three replicates. The above CpG sites were screened and lined up in order. Different colors were used to indicate the sequencing results. The methylation ratio was calculated for the CpG sites in the amplified region. | Methylation-specific polymerase chain reaction (MSP) detection The Cpgplot library was searched. One pair each of methylated and unmethylated primers was designed for MSP using Methyl Primer Express V1.0 (Applied Biosystems Inc.) according to the sequences at 500-1500 bp and 1500-2500 bp upstream of the TSS in the IL-6 promoter region. Primers were synthesized by Sangon Biotech (Shanghai) Co., Ltd., China, and are shown in Table 2. PCR was performed using the methylated and unmethylated primers, respectively. The PCR reaction system was 20 μl in volume, containing 1 μl of bisulfite-modified DNA template, 1 μl each of the upstream and downstream primers, 5 μl of 10× PCR buffer containing Mg2+, 1 μl of 10 mol/L dNTP, and 0.8 μl of 5 × 10^6 U/L Taq DNA polymerase. The volume was made up to 20 μl using double-distilled water. The PCR reaction procedure was as follows: predenaturation at 95°C for 5 min; denaturation at 95°C for 100 s, annealing at 56°C for 10 s, extension at 72°C for 30 s, and final extension at 72°C for 30 s, for 36 cycles; and final extension at 72°C for 5 min. Then 10 μl of the PCR product was analyzed by 3% agarose gel electrophoresis. The gels were scanned using an imaging system. Target bands in the gel represented the methylation status of the CpG islands targeted by the primers. Optical densities of the target bands were calculated using AlphaEase FC Version 4 software (AlphaImager HP, Alpha Innotech) to reflect the relative methylation ratio.
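The readout logic just described, in which an unmethylated cytosine is read as thymine after bisulfite conversion while a methylated cytosine is still read as cytosine, can be mimicked in a few lines of code. The following sketch simulates conversion of a toy reference sequence and then calls the methylation state of each CpG site by comparing the read back to the reference; the sequence and methylation states are invented for illustration and do not come from the study.

```python
# Illustrative sketch of bisulfite-conversion logic and per-CpG methylation calls.
# Reference sequence and methylation states are invented for demonstration.

def bisulfite_convert(ref: str, methylated_cpg_positions: set[int]) -> str:
    """Convert unmethylated C to T; a C in a methylated CpG stays C."""
    out = []
    for i, base in enumerate(ref):
        if base == "C":
            is_cpg = i + 1 < len(ref) and ref[i + 1] == "G"
            if is_cpg and i in methylated_cpg_positions:
                out.append("C")          # methylated CpG resists conversion
            else:
                out.append("T")          # unmethylated C reads as T
        else:
            out.append(base)
    return "".join(out)

def call_cpg_methylation(ref: str, read: str) -> dict[int, bool]:
    """For each CpG in the reference, report True if the read still shows C."""
    calls = {}
    for i in range(len(ref) - 1):
        if ref[i] == "C" and ref[i + 1] == "G":
            # a 'T' at a reference CpG position indicates an unmethylated cytosine
            calls[i] = read[i] == "C"
    return calls

if __name__ == "__main__":
    ref = "ACGTACGGTCGAACGT"
    methylated = {1, 9}                       # toy positions of methylated CpGs
    read = bisulfite_convert(ref, methylated)
    calls = call_cpg_methylation(ref, read)
    ratio = sum(calls.values()) / len(calls)
    print(read, calls, f"methylation ratio = {ratio:.2f}")
```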
| Western blot analysis A total of 1 × 10^6 RAW246.7 cells with high viability were treated with RLN lysis buffer containing 0.1 mol/L Tris-HCl, 150 mmol/L NaCl, 1.5 mmol/L MgCl2 and 0.5% Nonidet to obtain the nuclear precipitate. Radio immunoprecipitation assay (RIPA) lysis buffer (Sangon Biotech (Shanghai) Co., Ltd., China) containing 50 mmol/L Tris-HCl, 150 mmol/L NaCl, 1% Triton X-100 and 1% sodium deoxycholate was then added, and the samples were oscillated. Centrifugation was performed and the supernatant was collected to obtain the nuclear protein extract. After sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), 30 μg of protein was transferred to membranes and blocked with skimmed milk powder in Tris-buffered saline with Tween (TBST) for 24 hr. Next, the membranes were incubated with primary antibodies against PPAR and DNMT3A (all diluted 1:1,000, Cell Signaling Technology, Danvers, MA, USA) at room temperature for 2 hr. The membranes were washed with TBST three times, further incubated with an HRP-labeled goat anti-mouse secondary antibody (1:1,000, Cell Signaling Technology), and washed with TBST another three times. Enhanced chemiluminescence (ECL) substrate was added for signal development. The optical density (OD) of each target band was calculated using AlphaEase FC Version 4 software (AlphaImager HP, Alpha Innotech), and the result was expressed as the grayscale ratio between the target protein and the internal reference (β-actin). | Determination of DNA methyltransferase 3 alpha (DNMT3A) activity In this assay, DNMT3A catalyzes the transfer of a methyl group donated by S-adenosylmethionine (SAM) to tyrosine, forming 3-methyltyrosine. We obtained 25 μl of nuclear protein extract, prepared as in the Western blot analysis, in a buffer of 50 mmol/L 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES), 0.2 mmol/L MnCl2, 2 mmol/L SAM and 2 mmol/L DTT. Thus, a methylation reaction system of 200 μl was established. The reaction time was 10-30 min, and 50 μl of the reaction liquid was removed every 5 min. The reaction was terminated by adding trifluoroacetic acid (TFA), and the reaction liquid was subjected to high-performance liquid chromatography (HPLC). The first peak appearing in the HPLC trace corresponded to tyrosine; the second peak, occurring 2.5-10 min after the first, corresponded to 3-methyltyrosine. The peak height ratio of 3-methyltyrosine to tyrosine was taken as the relative activity of DNMT3A. | Enzyme-linked immunosorbent assay (ELISA) measurement The IL-6 level in the culture medium was determined with ELISA kits (Beijing Huanya Taike Company, China) according to the instructions. Standard wells, sample wells and blank control wells were set up. A total of 10 μl of standard or sample was added into each well and incubated at 37°C for 30 min. The coated ELISA plate was washed and ELISA working solution was added. The plates were washed three times and incubated with substrate at 37°C in the dark for 15 min for color development. The reaction was terminated by adding the stopping solution. The IL-6 level was measured three times per well by plotting the standard curve, and the average was taken (pg/ml). | Statistical analysis Measurements were expressed as mean ± standard deviation (SD). All statistical analyses were conducted using Statistical Package for the Social Sciences (SPSS) 19.0 software (IBM Corp, Armonk, NY, USA). Multiple comparisons were performed using one-way analysis of variance (ANOVA), and pairwise comparisons were conducted using the least significant difference (LSD) test.
The strength of the correlation between the dinucleotide methylation status of the CpG islands in the IL-6 promoter region and the IL-6 level was measured by Pearson's correlation coefficient. The difference was considered significant when p < .05. | CpG islands located at the IL-6 promoter region in mice retrieved from bioinformatics databases As shown in Figure 1, two stretches located 500-1500 and 1500-2500 bp upstream of the TSS were considered CpG islands. Based on the location of the islands and the density of CpG dinucleotides, the 500-1500 bp region was defined as the first CpG island, containing 70 CpG dinucleotide pairs, while the 1500-2500 bp region was the second CpG island, containing 46 CpG dinucleotide pairs. | Alpinetin promoted methylation of the CpG islands in the IL-6 promoter region via activation of PPAR and DNMT3A Results in Figure 2 suggested that, in the control group, the methylation ratio of CpG at the two CpG islands remained at a low level (11.4% in the first island and 6.8% in the second island). However, alpinetin increased the methylation ratio of the two CpG islands, especially the first CpG island (500-1500 bp upstream of the TSS), in a dose-dependent manner. Furthermore, this promoting effect could be reversed by the PPAR blocker GW9662 or by DNMT3A interference. In addition, the methylation ratio of CpG dinucleotides in the first and second CpG islands was evaluated by MSP (Figure 3). It was shown that alpinetin increased the relative amount of sequence (located at the first island) amplified by the methylated primers in a dose-dependent manner, while a completely opposite result was obtained with the unmethylated primers (p < .05, Figure 3a). Moreover, the dose-effect relationship between alpinetin and methylation was less obvious at the second CpG island (p > .05); only at the highest concentration (1 mg/ml) was the amplified content affected by alpinetin. Furthermore, these changes in methylation status could also be reversed by the PPAR blocker GW9662 or by DNMT3A interference (p < .05, Figure 3b). Therefore, the above findings suggested that the increase in methyltransferase activity induced by alpinetin may be attributed to PPAR and DNMT3A activation. FIGURE 1. CpG islands located at the IL-6 promoter site retrieved from UCSC and Cpgplot. FIGURE 2. Methylation status of CpG dinucleotides at the CpG islands in the IL-6 promoter region tested by BSP combined with TOF-MS. In this test, the CpG dinucleotide pairs stretching 500-2500 bp upstream of the TSS of IL-6 were all selected and lined up in order. Each site was sequenced three times, and different colors were used to describe the result: if the site was confirmed as cytosine all three times, the locus was represented by black; light or dark grey respectively represented confirmation twice or once; white was used if thymine was indicated all three times. #,▽ p < .05 versus control group (# and ▽ respectively denote the first and second CpG island); ▼ p < .05 versus the second CpG island. | Alpinetin inhibited IL-6 production via the PPAR/DNMT3A pathway Following treatment with alpinetin, the protein expression of PPAR in RAW246.7 cells was tested by Western blot assay. The results showed that alpinetin up-regulated the expression of PPAR in a dose-dependent manner, but this increase could be blocked by the PPAR inhibitor GW9662 (p < .05). Interference with DNMT3A had no influence on PPAR expression (p > .05) (Figure 4).
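The Figure 2 scheme described above records, for each CpG site, how many of the three sequencing replicates still read cytosine. A minimal sketch of how a per-site and per-island methylation ratio can be tallied from such replicate calls is shown below; the replicate data are invented, and this is not the study's actual analysis pipeline.

```python
# Minimal sketch: per-site methylation ratio from three replicate base calls.
# Replicate calls are invented; 'C' = methylated (unconverted), 'T' = unmethylated.
from collections import Counter

# replicate_calls[site_index] = bases observed in the three sequencing replicates
replicate_calls = {
    0: ["C", "C", "C"],   # would be coded black in the Figure 2 scheme
    1: ["C", "T", "T"],   # cytosine confirmed once
    2: ["C", "C", "T"],   # cytosine confirmed twice
    3: ["T", "T", "T"],   # white: unmethylated in all replicates
}

def site_ratio(bases):
    """Fraction of replicates in which the CpG is still read as cytosine."""
    counts = Counter(bases)
    return counts["C"] / len(bases)

per_site = {site: site_ratio(bases) for site, bases in replicate_calls.items()}
island_ratio = sum(per_site.values()) / len(per_site)
print(per_site)                       # approx {0: 1.0, 1: 0.33, 2: 0.67, 3: 0.0}
print(f"overall methylation ratio of this island: {island_ratio:.2%}")
```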
Furthermore, the protein expression of DNMT3A in RAW246.7 cells was estimated by Western blot analysis. Data in Figure 5 showed that alpinetin at a high concentration (1,000 μg/ml) increased the protein expression level of DNMT3A (p < .05), while alpinetin at lower concentrations (500 μg/ml and below) showed no effect on the DNMT3A content (p > .05). This promoting effect could be completely reversed by interference with DNMT3A and partially blocked by GW9662. DNMT3A activity was also evaluated to support the above results, based on the peak height ratio of 3-methyltyrosine to tyrosine in mass spectrometry. Results in Figure 6 showed that alpinetin at different concentrations (0, 50, 100, 200, 500, 1,000 μg/ml) promoted the activity of DNMT3A in the nucleus, and this effect was dose-dependent (p < .05). This increase could be reversed by the use of GW9662 or by interference with DNMT3A, suggesting that DNMT3A was the methyltransferase activated following the alpinetin-induced activation of PPAR. The levels of IL-6 secreted by RAW246.7 cells in all groups were evaluated by ELISA. Data in Figure 7 demonstrated that alpinetin down-regulated IL-6 expression in a dose-dependent manner (p < .05) and that this decline could be completely blocked if GW9662 was added in advance. Interference with DNMT3A could also reverse the effect caused by alpinetin, but to a lesser extent than GW9662. | The methylation ratio of CpG dinucleotides in the IL-6 promoter region was negatively correlated with the IL-6 level Linear correlation analysis was applied to assess the association between the methylation ratio of the CpG dinucleotides located at the two CpG islands and the IL-6 level in the culture medium of RAW246.7 cells. As demonstrated in Figure 8, a significant negative association was found between the methylation ratio at the first CpG island and the IL-6 level (regression equation: y = −74.02x + 65.89, r = −0.879, p < .01). However, no significant correlation was found between the IL-6 level and the methylation ratio at the second CpG island (p > .05). FIGURE 3. Methylation ratio of CpG dinucleotides at the two CpG islands in the IL-6 promoter region evaluated by MSP. Following treatment with alpinetin at 0, 50, 100, 200, 500, and 1,000 μg/ml, the methylation ratios of CpG dinucleotides at the first (a) and the second (b) CpG islands in the IL-6 promoter region were evaluated by MSP. # p < .05 versus control group. FIGURE 4. Effect of alpinetin on protein expression of PPAR. Expression of PPAR was determined by Western blot assay after cells were treated with alpinetin at different concentrations. Data are expressed as means ± SD. # p < .05 versus control group, ▼ p < .05 versus 1 mg/ml Alp group. FIGURE 5. Effect of alpinetin on protein expression of DNMT3A. Expression of PPAR and DNMT3A in RAW246.7 cells was determined by Western blot assay after cells were treated with alpinetin (0, 50, 100, 200, 500, 1,000 μg/ml). Data are expressed as means ± SD. # p < .05 versus control group, ▼ p < .05 versus 1 mg/ml Alp group. FIGURE 6. Effect of alpinetin on DNMT3A activity. DNMT3A activity was confirmed by the 3-methyltyrosine-to-tyrosine conversion experiment in RAW246.7 cells. Data are expressed as means ± SD. # p < .05 versus control group, ▼ p < .05 versus 1 mg/ml Alp group. FIGURE 7. Effect of alpinetin on IL-6 production. The IL-6 level in the culture medium of RAW246.7 cells was determined by ELISA. Data are expressed as means ± SD. # p < .05 versus control group, ▼ p < .05 versus 1 mg/ml Alp group, ■ p < .05 versus 1 mg/ml Alp + GW9662 group. FIGURE 8. Relationship between the methylation ratio of CpG dinucleotides in the IL-6 promoter region and the IL-6 level. The correlation between the methylation ratio of CpG dinucleotides at the first (a) and second (b) CpG islands and the IL-6 level (pg/ml) was analyzed.
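The correlation analysis reported above (a fitted line of the form y = a*x + b together with Pearson's r) can be reproduced with standard scientific-Python tools. The snippet below uses made-up methylation-ratio and IL-6 pairs purely to show the computation; it does not use the study's data, so the resulting coefficients will not match the reported equation.

```python
# Illustrative sketch of the linear-correlation analysis between the
# methylation ratio (x, fraction) and the IL-6 level (y, pg/ml). Data are invented.
import numpy as np
from scipy import stats

methylation_ratio = np.array([0.11, 0.18, 0.25, 0.33, 0.41, 0.52, 0.60])
il6_pg_ml = np.array([58.0, 52.5, 47.0, 41.5, 35.0, 28.5, 22.0])

res = stats.linregress(methylation_ratio, il6_pg_ml)   # least-squares fit + Pearson r
print(f"regression: y = {res.slope:.2f}x + {res.intercept:.2f}")
print(f"Pearson r = {res.rvalue:.3f}, p = {res.pvalue:.3g}")
```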
4 | DISCUSSION

At present, inflammatory diseases are a major threat to human health and affect millions of people worldwide (Abdelhalim et al., 2018). Although hormones exhibit a considerable inhibitory effect on non-specific inflammation, their side effects should not be neglected. Previous studies have provided evidence that alpinetin has protective effects on acute pulmonary injury, ulcerative colitis and atherosclerosis (AS) in animal models (Jiang, Sang, Fu, Liang, & Li, 2015; Liang et al., 2016; Zhou et al., 2008). A better understanding of the anti-inflammatory effects and mechanism of alpinetin in RAW264.7 cells can therefore help us find better treatments for inflammatory diseases.

Researchers have already investigated the anti-inflammatory mechanism of alpinetin in depth and found that the nuclear factor kappa B (NF-κB) and extracellular signal-regulated kinase (ERK) signaling pathways are inhibited after alpinetin activates PPARγ, leading to a reduced synthesis of inflammatory factors (Ma et al., 2017). However, this is far from a sufficient explanation for the inhibited expression of inflammatory factors after the activation of PPARγ. With the emergence of epigenetic detection techniques, several research teams have found that acetylation is involved in the expression of inflammatory mediators and the progression of inflammatory diseases (Kim et al., 2015; Zhang et al., 2010). In this study, we detected intranuclear methyltransferase activity in RAW264.7 cells following alpinetin treatment and showed that the increased activity of DNMT3A is related to methylation of the CpG sites in the promoter regions of inflammatory factors. Such methylation disturbs transcription-factor binding to the promoter, which ultimately influences the expression of the inflammatory factors. Interference experiments indicated that the PPARγ antagonist GW9662 can completely reverse the alpinetin-induced inhibition of inflammatory factor synthesis, whereas interference of DNMT3A only partially reverses the effect of alpinetin, implying that alpinetin inhibits the expression of inflammatory factors via the activation of PPARγ.

DNA methylation is the most widely studied epigenetic mechanism. It is generally accepted that abnormal methylation of cytosine or histones is involved in the occurrence and development of inflammatory diseases and tumors (Shi et al., 2018). For example, the methylation level of the IL-6 gene can affect its protein expression, which in turn induces inflammatory responses in both cord blood monocytes and SK-N-BE neuroblastoma cells (Dinicola, Proietti, Cucina, Bizzarri, & Fuso, 2017; Sureshchandra et al., 2017). In animals with AS, the average methylation level at several CpG sites in the core regulatory region of the monocyte TLR4 promoter is decreased. Moreover, the H3K27me3 level in blood vessel plaques correlates with AS, and the expression of the methyltransferases MLL2 and G9a and of DNA methyltransferase 1 (DNMT1) is increased in unstable plaques (Greißel et al., 2016; Wierda et al., 2015).
Transfecting an interference fragment targeting enhancer of zeste homologue 2 (EZH2) into macrophages reduces the H3K27 methylation level, which in turn inhibits methylation of the promoter of the integral membrane protein ATP-binding cassette transporter A1 (ABCA1) and promotes ABCA1 expression. As a result, lipids are transported out of the cells, which helps stabilize AS plaques (Liang et al., 2013). In addition, the methyltransferase inhibitor 5-Aza-CdR inhibits the synthesis of inflammatory factors in endothelial cells induced by shock (Di Taranto et al., 2012) and delays the formation of AS plaques in ApoE-knockout mice (Cao et al., 2014). Taken together, methylation is involved in inflammatory factor synthesis, in the occurrence and development of inflammatory diseases, and in drug intervention in the inflammatory response. Methylation may also be a key mechanism by which PPARγ agonists inhibit the expression of inflammatory factors.

To verify the above hypotheses, we performed a co-immunoprecipitation assay in a preliminary experiment, which indicated that alpinetin promotes the binding of PPARγ to DNMT3A. DNMT3A is composed of an N-terminal domain and a C-terminal domain with methyltransferase activity, and it is the most important methyltransferase for CpG islands in mammals (Tajima, Suetake, Takeshita, Nakagawa, & Kimura, 2016). This protein catalyzes the conversion of cytosine into 5-methylcytosine and thereby influences gene expression (Cole et al., 2017; Yang et al., 2017). IL-6 was identified in the 20th century as one of the most important B-cell activating factors; it mediates the cross-talk among several types of immunocytes and the execution of their functions, and it acts as a key factor in triggering the inflammatory cascade (Ohtsu et al., 2017). We verified that alpinetin first activates PPARγ and then promotes cytosine methylation of the IL-6 promoter region by activating DNMT3A, thereby regulating the expression of IL-6 in RAW264.7 cells.

LIMITATIONS

In summary, alpinetin-induced PPARγ activation further increases DNMT3A activity or promotes its synthesis in RAW264.7 cells. As a result, cytosine methylation of the CpG islands in the IL-6 promoter is promoted and the expression of IL-6 is inhibited. Our study provides a new anti-inflammatory mechanism of alpinetin from the methylation perspective and indicates that reversing DNA methylation may be a new direction for treating inflammatory diseases with alpinetin and other Dong medicines. However, there are several limitations. First, the use of the RAW264.7 cell line is a major limitation; the results of this study need to be repeated in other cell lines. Second, we only investigated the effects of alpinetin on inflammation in cultured cells, and whether alpinetin can be used as a treatment in animals remains to be ascertained.
Illusions of Visual Motion Elicited by Electrical Stimulation of Human MT Complex Human cortical area MT+ (hMT+) is known to respond to visual motion stimuli, but its causal role in the conscious experience of motion remains largely unexplored. Studies in non-human primates demonstrate that altering activity in area MT can influence motion perception judgments, but animal studies are inherently limited in assessing subjective conscious experience. In the current study, we use functional magnetic resonance imaging (fMRI), intracranial electrocorticography (ECoG), and electrical brain stimulation (EBS) in three patients implanted with intracranial electrodes to address the role of area hMT+ in conscious visual motion perception. We show that in conscious human subjects, reproducible illusory motion can be elicited by electrical stimulation of hMT+. These visual motion percepts only occurred when the site of stimulation overlapped directly with the region of the brain that had increased fMRI and electrophysiological activity during moving compared to static visual stimuli in the same individual subjects. Electrical stimulation in neighboring regions failed to produce illusory motion. Our study provides evidence for the sufficient causal link between the hMT+ network and the human conscious experience of visual motion. It also suggests a clear spatial relationship between fMRI signal and ECoG activity in the human brain. Introduction The posterior temporal region of the non-human primate brain (areas MT/MST), and its human homologue, known as area V5 [1] or human MT complex (hMT + ) [2,3] are responsive to visual motion [4]. Electrical stimulation of this region in non-human primates can influence motion direction discriminations, suggesting that its activity is critically linked to perceptual decisions [5,6]. Although fundamental to our current understanding of motion perception, studies in non-human primates cannot ascertain conscious perceptual experiences during these direct alterations of neural activity. To determine whether a brain region is causally linked to a perceptual experience, one must modulate its neural activity. Causal necessity can be established by inactivation (e.g. lesion) of the brain region and observing a perceptual deficit, whereas causal sufficiency is established by modulating its activity (e.g. by electrical stimulation) and observing a corresponding change in the perceptual experience. Non-invasive methods such as functional magnetic resonance imaging (fMRI) have provided evidence in the human brain of relationships between hMT + responses and subjective visual motion perception (for review, see [7]). However, correlational techniques like fMRI and electroencephalography (EEG) cannot establish a causal relationship between hMT + activity and conscious motion perception. Non-human primate lesion studies first demonstrated the necessary role of MT in motion discrimination judgments [8,9]. Subsequent reports addressed the necessity of human MT + in the conscious experience of visual motion. For instance, visual motion blindness (akinetopsia) was reported in a few patients with extensive stroke in the posterior temporal region [10][11][12]. 
Deficits in motion processing have since been reported in healthy controls during transcranial magnetic stimulation (TMS) of posterior temporal cortex [13][14][15][16], in one patient with epilepsy during electrical stimulation of the anatomical area around hMT + , including superior, middle and inferior temporal and angular gyrus [17], and in a few patients with variable amounts of brain damage in the vicinity of the anatomical locus of hMT + [18][19][20][21][22][23][24]. In contrast to these findings of disruption of motion perception, reports of positive percepts caused by functional alteration of hMT + are missing [25]. Although some studies in humans have elicited ''motion percepts'' by electrical stimulation in various regions of the brain, the precise anatomical location of these stimulation sites and their spatial relationships to hMT + remain uncertain. Penfield first reported illusory motion caused by electrical brain stimulation (EBS) of the posterior temporal region in some cases of intraoperative monitoring [26]. Plant and colleagues [22] reported a patient who saw a moving colorless ''fog'', without moving objects, during seizure auras as well as during electrical stimulation of epileptic tissue. Lee and colleagues [27] reviewed the evidence of visual illusions caused by electrical stimulation of human visual cortex and suggested that the experience of ''visual movement'' can be elicited at many sites across cortex. We note, however, that the definition of visual movement was not specified. In a study of one patient implanted with intracranial electrodes, Matsumoto and colleagues [28] were the first to relate evoked potentials from magnetoencephalography (MEG) during a visual motion task with a patient's reported illusions of objects moving in depth during electrical stimulation of the posterior superior temporal sulcus. These previous findings of positive percepts must be interpreted with caution due to several caveats. The cortical tissue causing illusory percepts could have been diseased (epileptogenic), and the presence or absence of epileptic after-discharges (triggered by EBS) was not reported. In addition, the precise location of the stimulation was not adequately established by neuroimaging methods. Indeed, a more recent study failed to produce a visual motion percept by electrical stimulation at the border of fMRIdefined hMT + in one patient [29], leaving open the question of whether electrical stimulation of hMT + is sufficient to induce visual perceptions. The question of the spatial relationship between effective sites of induction of visual illusions by EBS and the site of visual stimulusinduced activity recorded by fMRI and electrocorticography (ECoG) remains unexplored. Moreover, the relationship between fMRI and ECoG signals during motion perception has not been characterized but has the potential to provide a bridge between human fMRI measures and electrophysiological recordings in animals [30]. Combining three methods of neuroscientific inquiry (i.e. fMRI, ECoG, and EBS) in the same conscious human subjects allowed us to address the critical link between fMRI and electrophysiological correlates of motion perception and the role of hMT + in the conscious perception of motion. 
Co-localization and pattern of responses to motion as measured by BOLD fMRI and ECoG In three subjects, functional imaging using fMRI independently revealed higher levels of blood-oxygenation-level-dependent (BOLD) responses bilaterally in the posterior inferior temporal sulcus when viewing moving, compared to static, visual stimuli ( Figure 1A-C). This area of increased BOLD activation in response to moving visual stimuli was labeled area hMT + in each individual separately (see Materials and Methods for details). Intracranial electrophysiological recordings (ECoG) in subject B revealed a marked spatial overlap between the BOLD response and electrophysiological activity during the same task. During blocks of moving images, there was a significant increase in power specific to the theta (4-7 Hz) and high-gamma (50-120 Hz) bands ( Figure 2A) only in the electrode directly overlapping with the fMRI-defined area hMT + . This electrophysiological signature is consistent with previous reports of the relationship between electrophysiological and BOLD measures [31]. Note that no ECoG recordings were performed in subject A. The temporal profile of the power ( Figure 2B) in the theta and high-gamma frequency bands shows several noteworthy findings. The profile of high-gamma and theta responses is very distinct after the first second of motion stimulus presentation. At the onset of the motion stimulus, the relative power of high-gamma band activity increases up to ,190% of the power during the static stimulus and sustains an elevated power (,120% of power during static stimulus) for the entire four seconds of the motion stimulus. The relative theta power is modulated at the frequency of the stimulus, with peaks in the theta power occurring approximately at the mid-point between transitions from inward to outward movement of the concentric circles. Interestingly, the peaks in theta power reach a higher level for outward than for inward motion, suggesting similarities in the response properties of our recorded theta modulation in the human brain to non-human primate neuronal tuning in MST, which shows a higher proportion of cells responsive to expansion than contraction [32]. Changes in the high-gamma and theta frequencies occur in individual four-second trials, only for the electrode overlapping with the area of significant BOLD modulation. For each electrode, we plotted the mean relative power for both the theta and highgamma bands over each individual four-second trial ( Figure 2C). These plots illustrate the clear separation of responses to motion and static stimuli in the mean relative power of the high-gamma and theta bands, only in electrode II (middle row, Figure 2C). For electrode II, 91% of motion trials show a response above the mean power of the high-gamma band (i.e. above y = 1 in Figure 2C), and 90% of static trials show a response below the mean relative power of the high-gamma band. Theta band responses during individual trials are similarly consistent (89% of motion trials above mean theta power and 94% of static trials below mean theta power). For both high-gamma and theta bands, the mean of the distribution during the motion condition is larger than the mean of the static condition (p,0.001, t-test; for all 3 experiments and each band). Electrodes that did not overlap with areas of significant BOLD modulation failed to show significant electrophysiological activation in response to the same motion stimulus in ECoG recordings. 
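The single-trial comparison summarized above (mean relative power in the theta and high-gamma bands over each four-second trial, compared between motion and static trials with a t-test) can be sketched as follows. The sampling rate, band edges, trial counts, and the synthetic signal are assumptions made for illustration, not the recorded data.

```python
# Sketch: mean relative band power per 4-s trial, compared between motion and
# static conditions with a t-test (cf. the analysis summarized above).
# The signal below is synthetic; real ECoG trials would be substituted.
import numpy as np
from scipy import signal, stats

fs = 1000                      # assumed sampling rate (Hz)
trial_len = 4 * fs             # 4-second trials
rng = np.random.default_rng(0)

def band_power(x, lo, hi):
    """Mean power of x in the [lo, hi] Hz band via a band-pass filter."""
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.mean(signal.sosfiltfilt(sos, x) ** 2)

# Synthetic trials: "motion" trials carry extra high-gamma power.
motion = [rng.standard_normal(trial_len)
          + 0.5 * np.sin(2 * np.pi * 80 * np.arange(trial_len) / fs)
          for _ in range(22)]
static = [rng.standard_normal(trial_len) for _ in range(23)]

gamma_motion = np.array([band_power(t, 50, 120) for t in motion])
gamma_static = np.array([band_power(t, 50, 120) for t in static])

# Express each trial relative to the mean power across all trials, then compare.
grand_mean = np.mean(np.concatenate([gamma_motion, gamma_static]))
t_val, p_val = stats.ttest_ind(gamma_motion / grand_mean, gamma_static / grand_mean)
print(f"high-gamma: t = {t_val:.2f}, p = {p_val:.3g}")
```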
In all other analyzed electrodes (N = 14), none of which overlapped fMRI-defined area hMT+, the distributions of responses to motion and static trials are not well-separated (p > 0.15, t-test, for all 3 experiments and each band), with approximately equal numbers of points of each condition falling above and below the mean relative power (see Electrodes I and III in Figure 2C as examples). In subject C, intracranial electrodes were situated near the border of, but not within, fMRI-defined area hMT+ (Figure 1C). In this subject, we did not find any significant task-induced theta or high-gamma band activity in any intracranial electrodes, congruent with the idea that the electrophysiological and BOLD signals agree spatially.

Figure 1. Overlap of intracranial electrodes with functional MRI localizer of area hMT+. Location of intracranial electrodes (blue disks) and the area of fMRI activation in the motion localizer task (orange-red) are shown for three subjects (A-C). Pairs of electrodes were electrically stimulated; cyan electrodes indicate those pairs between which electrical stimulation elicited reliable, lucid, illusory motion (see Figure 3). fMRI activation is thresholded at a p-value corresponding to a false discovery rate of 5% in each individual. fMRI time series, shown next to each subject's 3D cortical surface, are extracted from the hMT+ region of interest and averaged across two runs. doi:10.1371/journal.pone.0021798.g001

Electrical brain stimulation in hMT+ causes illusory visual motion

As part of routine brain mapping procedures conducted for clinical purposes, electrical stimulation was performed in all three patients. During this process, a weak and focal electrical current was delivered to the brain area located between two electrodes (i.e. bipolar stimulation) while subjects were lying comfortably in the hospital bed with their eyes open. Patients were generally unaware of the timing of electrical stimulus delivery, which also included interspersed sham stimulations. Subjects were asked to describe in detail all changes in perception or subjective experience during electrical stimulation. We define illusory visual motion percepts as any change in conscious visual perception that (a) involves the translocation of one or more parts of the visual environment across visual space and (b) is directly elicited by the electrical stimulation.

Reproducible, vivid, illusory visual motion percepts occurred when electrical charge was delivered through electrodes that were localized within the hub of fMRI activity corresponding to hMT+ in subjects A and B (Figure 3). The qualitative experience of the percepts was stereotyped within each individual regardless of stimulation intensity (1-12 mA) or duration (3-6 sec). The conscious illusory experiences in subjects A and B were similar but not identical. Electrical stimulation of right hMT+ in subject A caused displacement and transposition of the entire visual field to the left (i.e. optical allesthesia). This reported illusory percept of the visual field "jumping" to the left was spontaneously generated and present even with eyes fully deviated in left lateral gaze. In subject B, electrical stimulation of left hMT+ caused an illusory percept of objects moving in the contralateral (right) upper visual field as if they were "vibrating" (subject's word). For example, while looking at the experimenter's face, the subject reported that "the top right corner of the face is vibrating".
The effect was limited to the subject's upper right visual field quadrant. Interestingly, when the subject's eyes were closed and he was asked to imagine an object he had just seen, the imagined object was reported as "vibrating" during electrical stimulation and not during sham stimulation. The intensity of the illusory experiences in both subjects was not subtle. The subjects volunteered their descriptions readily and seemed to be completely captivated by the intensity of the experience. Importantly, subjects successfully kept fixation during electrical stimulation trials. Exhaustive direct inspection of the video recordings obtained during electrical stimulation did not reveal any macroscopic eye movements in either subject during electrical stimulation (see Video S1; note the consideration of imperceptibly small eye movements in Discussion).

Figure 2. Electrophysiological response to the same motion stimuli as during fMRI. (A) Power spectrogram from electrode II in subject B during one representative run. Electrode II was the only electrode overlying the area of significantly increased BOLD activation during the motion stimulus (see Figure 1B). This electrode shows significantly increased power (denoted in decibels, dB) in the high-gamma and theta bands during motion compared to static (yellow-red). The significance threshold is FDR-corrected (q = 0.1, p < 0.02). For presentation purposes, the spectrogram is smoothed over 3 frequency bands and 230 ms. (B) Temporal profiles of the relative power for high-gamma and theta bands in the same electrode during the same run. The power of high-gamma and theta band activity was normalized by the mean power (y = 1) within that band and across the run. The power in each band was scaled by these means. Vertical solid lines indicate transitions between static and motion, and vertical dotted lines indicate transitions between outward and inward motion of the stimulus. Horizontal dashed lines indicate the mean relative power during the static condition. The shading on the time courses indicates the standard error of the relative power. (C) The increase of high-gamma and theta power during the motion stimulus is consistent across individual trials and is selective to electrode II. Each red marker denotes the mean relative power across a single four-second trial of motion, while each blue marker denotes the mean relative power across a single four-second trial of static, for electrode II (middle row) and two neighboring electrodes (top and bottom rows, see Figure 1 for precise locations), across the duration of each run of the experiment. Different shapes (circles, squares, triangles) denote different runs of the experiment. Note that electrode I was near, but not overlapping with, the area of significantly increased BOLD activation to motion. doi:10.1371/journal.pone.0021798.g002

The total number of electrical stimulation trials at each site and the number of times a motion percept was elicited at that site is shown in Figure 3. Across all subjects, stimulation directly over hMT+ (III-IV in Subject A, I-II in Subject B, see Figure 1 for locations) elicited illusory motion percepts in 92% (24 of 26) of trials (Figure 3). Electrical stimulation at sites directly neighboring hMT+ (II-III and IV-V in Subject A, II-III in Subject B) elicited the illusory motion in 39% (7 of 18) of trials, but these positive trials only occurred at the highest stimulation amplitudes tested.
At these neighboring locations, one of the two bipolar electrodes was overlapping hMT+. In contrast, stimulation at all other cortical locations, where neither stimulating electrode overlapped hMT+, elicited illusory motion 0% (0 of over 100 trials) of the time. Sham stimulation trials, which were interspersed between hMT+ stimulation trials and did not involve current delivery, also did not elicit any illusory motion (0 of 6 trials). Illusory motion was not elicited by stimulation at any electrode sites in subject C, who had electrodes positioned adjacent to, but not within, fMRI-defined area hMT+ (Figure 1C). Together with the results from subjects A and B, these negative findings in subject C further suggest a high degree of spatial congruence between fMRI and electrophysiological responses to motion and conscious perceptions of motion elicited by electrical stimulation. Although not the focus of the current report, other perceptual illusions (such as an urge to move the contralateral hand, or tingling in the contralateral side of the body) occurred at some other electrode sites. None of these percepts were related to visual motion perception.

Figure 3. Electrical stimulation only over hMT+ evokes illusory motion. Red circles represent stimulation sites and amplitudes that elicited illusory motion at least once, while dark gray circles represent stimulation sites and amplitudes that did not elicit any illusory motion. Numbers inside circles represent the number of electrical stimulation trials evoking illusory motion over the total number of electrical stimulation trials with a particular pair of electrodes at that current amplitude. Sham indicates trials where no current was injected but the subject thought stimulation was taking place. Subject C did not perceive any illusory motion (not shown). The quality of the motion percept differed between subjects A and B but was highly consistent within each individual. No other stimulation sites elicited a percept of visual motion, even though all adjacent pairs of electrodes in the brain were electrically stimulated for clinical reasons. See Figure 1 for electrode positions. doi:10.1371/journal.pone.0021798.g003

Discussion

We report that electrical stimulation of functionally defined cortical area hMT+ causes reproducible illusions of visual motion. This illusory visual motion was only elicited when the site of electrical stimulation was precisely overlapping with the area of fMRI activation, defined independently in each subject in response to visual motion stimuli (Figure 1). Moreover, the electrophysiological activity recorded by ECoG during the same task was clearly limited to the electrode overlapping the area of fMRI activation (Figure 2). We interpret these results in the context of previous human and non-human primate studies that have shown the causal necessity of area MT for motion perception. Our results show, for the first time, that altering neural activity in hMT+ by electrical charge delivery is sufficient for producing complex positive illusions of visual motion (Figure 3). They also provide converging evidence from three different methodologies (i.e. fMRI, ECoG, and EBS) that allows inferences regarding the electrophysiological basis of the fMRI signal and the relevance of fMRI and ECoG correlates of a perceptual task to human conscious perception.

Necessity and sufficiency: a causal link between the activity of the hMT+ network and subjective visual motion perception

A substantial body of previous research has provided strong evidence to support correlations between hMT+ activity and visual motion perception [7]. Causal links between MT activity and motion direction discrimination judgments have been demonstrated in the non-human primate [5], but studies in non-human primates have limited ability to address the subjective perceptual experience produced by experimental alterations of MT neuronal activity. The loss of cortical tissue surrounding the anatomical location of MT/hMT+ has been shown to produce loss of motion sensitivity in non-human primates [9] and akinetopsia (motion blindness) in humans [11], both negative symptoms. Similarly, transcranial magnetic stimulation (TMS) to hMT+ can lead to transient loss of motion sensitivity [13,15]. These prior studies provide evidence for the necessity of MT/hMT+ in motion perception. While disruption of function (a negative effect supporting necessity) can occur following lesions or TMS, positive percepts (supporting sufficiency) can only be achieved by altering, rather than stopping, the activity of a critical network. Reports of positive percepts of motion are much rarer and have not been linked to human area MT+ as defined by BOLD fMRI. The placement of intracranial electrodes in the human brain is a unique opportunity to observe the effects on conscious perceptual experience during alterations of neural activity. Reproducible and consistently elicited conscious motion percepts caused by electrical charge delivery to hMT+, as reported here, satisfy conditions of sufficiency. That is, altering the neural activity within hMT+, and the network it is connected with, is sufficient for producing vivid subjective motion percepts.

In one previous study, electrical stimulation at the border of hMT+ [29] failed to elicit any percept (similar to our finding in subject C). Blanke et al. [17] also failed to produce positive illusory visual percepts during electrical stimulation of the temporoparietal region in a single patient, but it is noted that the posterior extension of their electrode grid only covered the anterior portions of the junction between the inferior temporal sulcus (ITS) and the ascending limb of the ITS, where area hMT+ is generally thought to be located. Also, no fMRI or ECoG measures of visual motion perception were obtained. In addition to exact location, precise electrical stimulation parameters may be crucial in determining whether positive or negative perceptual phenomena occur. The lack of positive perceptual phenomena in these previous studies is in line with our own null result in subject C, and can be explained by our observations in subjects A and B that the positive phenomenon of illusory visual motion is elicited only if the site of EBS is co-localized precisely with the brain site that shows a positive functional response (identified by fMRI or ECoG) during visual motion perception. This need for functional localization is clear when considering the individual variability of the location of hMT+ with respect to anatomical landmarks [33].

Visual imagery is affected by electrical stimulation of hMT+

In our experiment, we asked subject B to close his eyes and imagine a recently viewed object "in his mind's eye" while electrical charge was delivered to hMT+.
Interestingly, the subject reported the same visual motion illusion (''vibrating'', or oscillatory left-right motion of the imagined object) caused by electrical stimulation. In contrast, sham stimulation during imagery trials elicited no positive reports by the subject (i.e. he did not see any change in the mental image; see Video S1). Therefore, the percept produced by electrical stimulation of hMT + affects a mental image similarly to a real visual image. This finding lends support to the hypothesis that mental imagery may be an emulation of perception and that the neurons that code for a mental image may be the same as, or overlap with, those used in visual perception [34,35]. Propagating electrical charge within selective anatomical networks The spatial spread of electrical charge is an important consideration for interpreting results from EBS experiments. Although little is known about the effect of electrical stimulation of the cerebral cortex in the human brain, the emerging evidence from cortical micro-stimulation (micro-EBS) [36] and deep brain stimulation (DBS) [37] in mammalian brains strongly suggests that the electrical charge delivery is more likely to recruit neural fibers whereas the activity of neurons in the stimulated area is either unchanged [37], blocked through depolarization blockade [38], or only altered in a sparse and distributed set of neurons [36,39]. Reliable recruitment of neural fibers will lead to propagation of electrical activity along the afferent or efferent fibers and will reach the brain regions that are connected with the stimulated area of the brain [40]. Given that each region of the brain has selective neuroanatomical connectivity with cortical and subcortical structures, the propagation of electrical activity will only affect the activity of a selective neuroanatomical network. Thus it may be difficult to compare the functional effect of EBS, as used in brain mapping procedure, to the effect of TMS, micro-stimulation, DBS, or structural lesioning. In other words, during brain mapping, a volley of 50 Hz signals may cause depolarization blockade (i.e. impairment of function) in the actual target of electrical stimulation but, at the same time, the volley of 50 Hz electrical signals recruits a selective neuroanatomical network in the gamma band frequency. In our experiments, it is possible that hMT + may have been blocked by the depolarization blockade, but in conjunction with the recruitment of its selective neuroanatomical network (such as V1), the manipulation seems to be sufficient to lead to a subjective experience of visual motion. Given that the network is recruited artificially with 50 Hz signals, the resulting subjective experience is an illusion of visual motion when there is no real motion in the visual field (i.e. a positive phenomenon). It is interesting to note that back-propagation of signals from hMT + to V1 is thought to be necessary for visual awareness of motion percepts [41]. Whether the effect of EBS is excitatory or inhibitory depends on stimulation frequency, and stimulation frequency at 50 Hz, as in our study, is more likely to be inhibitory [42]. Inhibitory effects on connected brain areas may be as relevant as the excitatory effect of electrical stimulation for causing positive illusory phenomena. 
It is likely that the inhibitory effect of EBS on the areas connected to MT, such as visual areas V1 to hV4 and parts of parietal cortex [43], which are involved in maintaining the stability of the visual world [44][45][46][47], may result in instability of visual images and hence the illusion of motion. Because the EBS in our study was performed in a purely clinical setting for clinical diagnosis, which does not easily accommodate research stimuli/procedures, we were unable to test the ability of subjects to perceive normal visual motion during electrical charge delivery to area hMT + . However, given the magnitude of the illusory percept caused by the EBS, it is more than likely that the subjects would have failed to perceive normal visual motion during the procedure. Therefore, our finding of positive illusory percept is not in conflict with the previous findings of impairment in visual motion perception during electrical stimulation of hMT + . Mechanistic interpretations of different perceptual experiences The precise perceptual experiences reported by the two subjects differed and would be difficult to predict a priori. Nevertheless, previous literature suggests that both types of percepts are supported by hMT + activity. Subject A's percept is qualitatively similar to the phenomenon of ''apparent motion''. This phenomenon describes the perception of jumping motion between two sequentially blinking stationary stimuli separated in space. In humans, hMT + activity, and perhaps feedback from hMT + to early visual cortex, correlates with the perception of apparent motion [48,49]. As for subject B's percept, there is also evidence that MT in monkeys and humans is required for perceiving lateral oscillatory motion [50]. We further propose that the percepts in subjects A and B may both be related to the role of hMT + and its selective neuroanatomical network in supporting the stability of the visual world during normal vision. Specifically, the reported illusion in subject A is reminiscent of descriptions of a shifting visual world after retrobulbar paralysis of the eye muscles [51]. The similarity of these descriptions, along with the proposed roles of MT and parietal regions during saccadic eye movements [44,46], suggests that electrical alteration of activity in hMT + in subject A may have caused alteration of activity in its anatomical network (i.e. synthetic and erroneous signal from hMT + to its connected parietal areas). These synthetic signals could be interpreted by the receiving areas as a corollary discharge for an eye movement that did not, in fact, take place. A corollary discharge would be expected to result in a shifting visual world in preparation for an eye movement [44,47]. The experience of visual jitter in subject B may be related to MT's normal active role in suppressing movement of the visual world due to microsaccadic eye movements [45,52]. Introducing spurious signals through electrical stimulation of the set of neurons underlying these computations would conceivably alter the relationship between MT signals and ongoing microsaccadic eye movements, leading to perceptions of microsaccades in a restricted region of the visual field. (The effect would be spatially localized because hMT + is organized retinotopically-see [53]). Currently, we cannot distinguish between such an indirect effect and the possibility that the alternating electrical current from EBS is directly interpreted as alternating left-right motion in this subject. 
However, we can exclude the possibility that EBS directly caused eye movements that explain the percept because the percept was limited to one quadrant of the visual field, while an induced, microscopic nystagmus would be equally salient in all parts of the visual field. We note that all subjects were able to keep visual fixation during electrical stimulation trials (Video S1), although we cannot exclude the possibility that electrical stimulation caused imperceptible eye movements. However, such small eye movements would be unlikely to explain the large visual motion percepts experienced by subject A, or the spatially localized percepts (within a visual field quadrant) experienced by subject B. Even if small eye movements were to explain the reported percepts, it is interesting that they would have occurred only with electrical stimulation of hMT + . The differences in the percepts reported by the two subjects might be attributed to the involvement of different sub-regions of hMT + , MT and MST, each of which could have their own network connectivity. While our current methods did not allow us to specifically address whether different sub-regions were stimulated in each subject, future studies can incorporate stimuli intended to differentiate between MT and MST [3] to test this hypothesis. Finally, since electrode grids were implanted in the right hemisphere of subject A and the left hemisphere of subject B, the differences in reported perceptions may also be due to a leftright hemispheric functional asymmetry in the affected networks. Linking fMRI, ECoG, and EBS ECoG recordings are a field potential aggregated from approximately 5x10 5 neurons underlying each electrode [54], similar to the number of neurons in an fMRI voxel (10 5 neurons/ mm 3 6,5-30 mm 3 voxel size). This similar spatial resolution to fMRI, in conjunction with the similarity in the signal type to the local field potential (LFP), puts ECoG recordings in a unique position to link fMRI BOLD findings in humans to LFP responses in non-human primates [30]. The ECoG response to the same motion stimulus as used for fMRI was limited to the theta and high-gamma bands, suggesting that these particular frequency bands correlate with the hMT + BOLD signal response. Future studies can test the generality of these findings in more subjects. We note the strong similarity of our ECoG recordings from hMT + (Figure 2A) to LFP recordings from area MT in the nonhuman primate using microwire electrodes (Figure 3 in [55]). In both cases, there is increased power in the high-gamma band (,50-120 Hz) at the onset of the stimulus. In our recordings, using a long four-second stimulus, the strong high-gamma band response decreases somewhat after approximately 500 ms. The theta power is sustained at a high level throughout the stimulus ( Figure 2B), although it is also temporally modulated by the stimulus. Such differential dynamics of signals across frequency ranges will be an interesting point of study in the future. Combining the multiple methodologies of fMRI, ECoG, and EBS provides an especially powerful set of interrelated findings to help understand specific functions of cortical areas. Epileptic brains Although our results were obtained in patients with epilepsy, we believe the results are unlikely to be explained by pathological factors. As noted, area hMT + was void of any epileptiform activity in all three patients, and data from any electrodes showing epileptic activity were excluded in our electrophysiological analysis. 
The positive illusory percepts were also recorded without the presence of any after-discharges. Our study included only three subjects, but it should be noted that the posterior regions of the brain are rarely implanted with electrodes and thus intracranial recordings from hMT + are uncommon. Restrictions due to the clinical setting of this research provided other challenges as well. We were not able to perform ECoG recording from the hMT + electrodes in Subject A because the EBS procedure was performed shortly before surgery and we could not delay the surgery in order to obtain those recordings. Conclusions Taken together, our findings are consistent with studies in nonhuman primates suggesting a crucial role of area MT and its interconnected network in conscious motion perception. We demonstrate that electrical stimulation of area hMT + , as defined by fMRI and verified by electrophysiological responses in individual subjects, elicits a conscious experience of visual motion in awake human subjects. In the context of previous research, our results show that the hMT + network circuitry is both necessary and sufficient for producing conscious motion percepts. The spatial agreement of fMRI and electrophysiological measures allows inferences about the link between these stimulus-evoked signals and their ultimate relation to conscious visual perception when the activity of the same part of the brain is electrically modulated. Ethics Statement Our study was approved by the Stanford University IRB Office for Protection of Human Research Subjects. All subjects signed informed consent for participation in our research study. Subjects Our subjects were three patients (1 male, 2 female) undergoing epilepsy surgery for intractable epilepsy. Our study did not cause additional risk to any participants, and the intracranial procedures were conducted entirely for clinical reasons to localize the source of epileptic discharges. Our diagnostic studies revealed no pathological activity in hMT + . Patient A was diagnosed with multifocal epilepsy originating from frontal and posteromedial regions, whereas patients B and C were diagnosed with epileptic foci in the medial (but not lateral) parieto-occipital region, after resection of which, both subjects, to date, remain seizure free. Functional Magnetic Resonance Imaging (fMRI) Localizer sessions were aimed at identifying motion-responsive areas. The stimulus consisted of a set of concentric dark gray circles on a gray background. The stimulus alternated between static and moving in blocks of 16 sec. During motion blocks, the circles expanded and contracted at a rate of 0.5 Hz (i.e. one full expansion and contraction every two seconds). Each run (n = 2) lasted 208 secs (192 secs in subject B) and included 6 blocks of motion and 7 blocks of static stimuli (6 static in subject B). Subjects fixated on a white dot in the center of the screen and pressed a button anytime the fixation dot randomly flashed red. All subjects performed this independent task at near 100% accuracy, indicating stable fixation. Functional magnetic resonance images were acquired on a 3T GE MRI scanner and an 8-channel volume head coil using a spiral-trajectory pulse sequence [56] with the following parameters: one shot, TR = 2000 ms, TE = 30 ms, flip angle = 77u, FOV = 220 mm, voxel size = 1.7261.7262 mm 3 in subjects A and C, 36362.5 mm 3 in subject B. Twenty-one oblique slices covering occipital and temporo-parieto-occipital cortex were prescribed approximately along the AC-PC plane. 
We analyzed fMRI data using the freely available, open-source mrVista software package (http://vistalab.stanford.edu/software/). The acquired BOLD signal from each voxel was first divided by its mean in order to compute a time series of percent modulation. High-pass temporal filtering was used to deduct baseline drifts from the time series. Small motion artifacts within and across scans were corrected using an affine transformation of each temporal volume in a data session to the first volume of the first scan [57]. The data were analyzed on a voxel-by-voxel basis using a general linear model (GLM) that modeled the BOLD signal using a two regressors (motion and static), with an additional DC regressor for each run to account for shifts in baseline. Statistical maps were computed as voxel-wise t-tests between the motion and static conditions. Area hMT + was defined by the contrast motion . static at a statistical threshold equivalent to a false discovery rate of 5% (q = 0.05) in each individual subject. The resulting statistical contrast maps were interpolated to the T1-weighted volume anatomy and restricted to gray matter layers. These maps are projected onto a cortical surface mesh (consisting of the surface along the gray-white boundary) for visualization. In subject A, the fMRI hMT + localizer was performed post-surgically, while subjects B and C participated in the same localizer session before electrode implantation. Electrode Localization We used MS08R-IP10X-000 strips and IG64C-SP10X-0TB grids made by AdTech Medical Instrument Corporation (http:// www.adtechmedical.com) for recording and stimulation in our subjects. These electrodes have the following parameters: 4 mm flat diameter contacts with 2.3 mm diameter of exposed recording area (4.15 mm 2 ) and inter-electrode distance of 1 cm. Postsurgical computed tomography (CT) images indicating the location of electrodes were aligned to preoperative T1-weighted structural MRI images using a mutual-information algorithm, implemented in SPM5 (http://www.fil.ion.ucl.ac.uk/spm). The electrodes were easily identified in the CT scans and their locations were manually marked. These images were visualized using ITKGray, a segmentation tool based on ITKSnap [58]. The resulting images were manually aligned to 3D mesh renderings of the T1 anatomical images produced using mrVista, on which the fMRI activation is displayed, thereby conserving the electrode to T1 anatomical image alignment. This procedure allowed us to construct 3D visualization of electrode locations relative to each patient's cortical anatomy within a few millimeters (,,3 mm) in error. The accuracy of estimated electrode sites was also validated by digital photos, obtained intraoperatively. Electrophysiological Recording and Analysis After implantation of the electrodes and post-surgical stabilization, the hMT + -localizer task was administered to the patients for ECoG recordings (patients B and C only). This task was identical to the one described for fMRI, except that blocks were 4 seconds in length instead of 16 because of the increased temporal resolution of ECoG over fMRI. There were 22 blocks of motion and 23 blocks of static, giving a run time of 180 s. Subject B completed three runs, and subject C completed two runs. We recorded signals at 3051.8 Hz through a 128-channel recording system made by Tucker Davies Technologies (http://www.tdt. com/). Off-line, we applied a notch filter at 60 Hz and harmonics to remove power line noise. 
We removed channels with epileptic activity, as determined by the patient's neurologist. To visualize electrophysiological responses, we created event-related spectral perturbation (ERSP) maps based on the normalized power of electrophysiological activity during each condition. A Hilbert transform was applied to each of the 42 bandpass filtered time series to obtain instantaneous amplitude and power [59]. Using the Hilbert-transformed time series, time-frequency analysis was performed for event-related data. We logged the onset and duration of each trial via photodiode event markers for each experimental condition time locked with the ECoG recording. Event markers were used to align and average power at each frequency band over repeated trials for each condition to create ERSP maps. The ERSP was scaled by the total mean power at each frequency in order to compensate for the skewed distribution of power values over frequencies and the result was converted to decibel units. In order to test the significance of changes in ERSP, we compared each ERSP frequency-time point with a constructed ''null'' ERSP. We first generated a surrogate data set by transforming the original instantaneous power time series into the Fourier domain and adding random phases, resulting in a surrogate of instantaneous power that has randomized phase but preserved amplitude. Therefore, the first and second order moments of the surrogate remained unchanged but its local temporal structure was removed [60]. A ''null'' ERSP was then constructed from the surrogate data with the same number of trials (randomly selected) as the condition of interest. We constructed a set of ''null'' ERSPs by iterating the surrogate procedure 470 times (e.g. for the presented ERSP in Figure 2A, we generated 47 surrogate data sets and for each set, we shuffled the surrogate events 10 times). We expect that the distribution of the ''null'' ERSP at each frequency-time point approaches a Gaussian distribution with sufficient iterations (law of large numbers). We tested the Gaussianity of the constructed distribution by monitoring kurtosis. We kept the absolute value of the distribution kurtosis below 0.5 (the kurtosis of Gaussian is zero) by increasing the number of iterations of the surrogate procedure. Following this procedure, we used a normal distribution to fit the ''null'' ERSP at a given frequency for one cycle period in order to estimate its mean and standard deviation. We shifted and scaled the ERSP at each frequency-time point relative to the obtained mean and standard deviation (Z-score). We then converted the normalized ERSP (Z-scores) to p-values. Finally, we used a false discovery rate method to correct for multiple comparisons and to set a significance threshold level for each subject, electrode and condition. Tests of significance for increases or decreases in the ERSP map were performed separately. The parameters for presented ERSPs are: (q = 0.1; p-values for the increase and decrease are 0.02 and 0.001, respectively). Electrical Brain Stimulation (EBS) Electrical stimulation was performed as part of routine clinical procedure of brain mapping to determine areas of hyperexcitability whose stimulation causes the patient's typical behavioral seizures, and to determine the function of each brain region before making a decision about the extent of epilepsy surgery [25]. 
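As an illustration of the band-limited power estimation and surrogate construction described above, the sketch below band-passes a signal, derives instantaneous power from the analytic (Hilbert-transformed) signal, and builds a phase-randomized surrogate whose amplitude spectrum is preserved. The sampling rate matches the figure quoted above, but the signal, band edges and filter settings are placeholder assumptions, and the trial averaging, kurtosis check and FDR correction steps are omitted.

```python
# Sketch: instantaneous band power via the Hilbert transform, plus a
# phase-randomized surrogate used to build a "null" distribution
# (cf. the ERSP procedure described above). The signal is synthetic.
import numpy as np
from scipy import signal

fs = 3051.8                    # Hz, as in the recordings described above
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
x = rng.standard_normal(t.size) + 0.3 * np.sin(2 * np.pi * 70 * t)

def band_power(x, lo, hi, fs):
    """Instantaneous power in [lo, hi] Hz from the analytic signal."""
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    analytic = signal.hilbert(signal.sosfiltfilt(sos, x))
    return np.abs(analytic) ** 2

def phase_randomize(p, rng):
    """Surrogate with the same amplitude spectrum but randomized phases."""
    spec = np.fft.rfft(p)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, spec.size))
    phases[0] = 1.0                      # keep the DC component unchanged
    return np.fft.irfft(spec * phases, n=p.size)

power = band_power(x, 50, 120, fs)
null_power = phase_randomize(power, rng)
print(f"observed mean power {power.mean():.3f}, surrogate mean {null_power.mean():.3f}")
```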
The electrical charges used (50 µC/cm²/pulse) in each patient were within the safety parameters and appreciably less than the ones used in older classical studies by Penfield and colleagues (~700 µC/cm²/pulse). Stimulation was performed using the following parameters: square-wave currents from 1 to 12 mA at 50 Hz with a pulse width of 200 µs. The impedance of these electrodes is measured to be approximately 400 Ω at 1 kHz [61]. Subjects were comfortably lying in their hospital bed during bipolar electrical stimulation, with their eyes open (except where noted) and fixated on an object in the room. Eye movements were monitored by video recordings. Care was taken not to influence subjects' reports of perceptions by asking open-ended questions ("Did you hear, see, or feel anything strange?") and by including the same questions during sham stimulation trials.

Supporting Information

Video S1. This supplementary video file shows how stimulation of hMT+ in two patients with implanted intracranial electrodes causes an illusion of visual motion. (MP4)
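As a rough back-of-the-envelope check of these parameters (a sketch under assumed values, not part of the original report), the charge per pulse is the current multiplied by the pulse width, and dividing by the exposed contact area gives the charge density; with 4.15 mm² contacts, currents near the top of the 1-12 mA range and a 200 µs pulse land in the vicinity of the quoted 50 µC/cm² per pulse.

```python
# Back-of-the-envelope check of stimulation charge density (assumed values).
current_a = 12e-3          # 12 mA, upper end of the stated range
pulse_width_s = 200e-6     # 200 microseconds per pulse
area_cm2 = 4.15e-2         # 4.15 mm^2 exposed contact area

charge_per_pulse = current_a * pulse_width_s            # coulombs
density = charge_per_pulse / area_cm2 * 1e6             # microcoulombs per cm^2
print(f"{density:.0f} uC/cm^2 per pulse")               # roughly 58 uC/cm^2
```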
A Case of Eagle’s Syndrome Treated with Carbon Dioxide Laser Eagle syndrome is relatively uncommon with an incidence of abnormal stylohyoid length being 4% to 7.3%. A vast majority of individuals with elongation of the styloid process are asymptomatic. It is a syndrome marked by the clinical signs and symptoms of facial pain, ear pain, throat pain, dysphagia and a globus sensation in the throat. The cause of Eagle syndrome is believed to be a congenital or hormonal change and reactive osseus hyperplasia of the styloid process in response to pharyngeal trauma or surgical intervention, such as tonsillectomy. We present here a case of a 37-year-old female with a twelve-month history of both sided oropharyngeal pain and globus sensation which has no trauma or surgical intervention. The patient presented with a long, slender, bony intraoral projection that was found to be an elongated styloid process. We removed this elongated styloid process with a CO2 laser, and her symptoms disappeared. INTRODUCTION Eagle syndrome was defined in 1937, when Eagle reported a syndrome presenting with various symptoms such as dysphagia, odynophagia, sore throat, and foreign body sensation caused by an abnormal increase in the length of the styloid process. 1 The extended styloid process stimulates carotid arteries, cervical sympathetic nerves, and cerebral nerves, causing neck and facial pain, syncope, transient cerebral ischemia, and clicking joint noise when opening the mouth. 2 The normal length of the styloid process is 2.5-3 cm, and the length exceeding 3 cm is considered an abnormal extension, which may be caused by trauma, hormonal changes, and excessive calcification due to genetic causes. 3,4 Eagle syndrome, caused by an abnormal increase in the length of styloid process, has a low incidence of 4%-7.3%. Since most cases are asymptomatic, there are few complaints of symptoms. 5 When symptoms are reported, they are often mistaken for symptoms of tonsillitis, resulting in the prescription of antibiotics and painkillers. This report describes a patient who complained of odynophagia and foreign body sensation on both sides of the oropharynx for a year. The patient was diagnosed properly as Eagle syndrome by using neck computed tomography (CT). After operation, using carbon dioxide (CO2) laser for shortening styloid process, the patient showed an immediate improvement in symptoms. Thus, we report this case with a review of relevant literature. CASE REPORT A 37-year-old female patient was admitted after a series of repeated unsuccessful treatments at a local hospital performed in response to complaints of foreign body sensation and odynophagia in the oral cavity that had lasted for a year. There was no dysphagia or a clicking joint noise while rotating the neck. Patients past medical history was hypothyroidism and family history was malignant lymphoma (mother). Patient showed no history of smoking, drinking, trauma, or surgery. In physical examination, there was no notable finding on both sides of the tonsils ( Fig. 1A and B). Also there was no unusual finding in the oral cavity, nasopharynx, or both sides of the neck. Despite medication for tonsillitis at a local hospital for a year, there was no improvement in symptoms and the tonsils had no redness or edema to be considered as chronic tonsillitis. For these reasons neck CT image scan was performed. Neck CT revealed extended styloid process, invading both sides of the tonsils and advancing inward (Fig. 2). 
Because the apex of the styloid process was piercing the tonsils, odynophagia was caused by tonsil movement during swallowing. Based on the neck CT findings and the symptoms, the patient was diagnosed with Eagle syndrome. Since the patient had had no symptom relief after a year of medication and was now diagnosed with Eagle syndrome, there was no treatment available other than surgery. Under general anesthesia, tonsillectomy was performed. Projections of the styloid process were detected in the area from which the tonsils had been removed. After trimming the soft tissue around the styloid process, a 2.0 cm long segment of the styloid process protruding into the tonsillar fossa was removed using a CO2 laser, and the cut surface was smoothed. The operation site was closed using absorbable suture material (Fig. 3A-3D). The patient was discharged on the first day after surgery with immediate relief of symptoms such as foreign body sensation and odynophagia. The patient has been followed up for 7 years after the surgery without any relapse or complaints of discomfort. DISCUSSION The anatomical abnormality of the stylohyoid complex was first described by Marchetti in 1656. Later, in 1937, Eagle reported the relationship between abnormal elongation of the styloid process and pain; since then, the condition has been known as Eagle syndrome. 1 It is characterized by foreign body sensation, dysphagia, prosopalgia, and sore throat confined to the tonsillar fossa, and it can cause pain or a clicking joint noise even when turning the head or moving the tongue. In addition, it may stimulate cranial nerves V, VII, IX, and X as well as the cervical sympathetic nerves, leading to various symptoms including facial and neck pain. It must be differentiated from chronic tonsillitis, neuralgia, myofascial pain, dental pain, and temporomandibular joint abnormalities, which repeatedly cause similar symptoms. 2,5 Elongation of the styloid process occurs in nearly 4% of patients, and if calcification of the hyoid bone area is included, the incidence increases up to 7.3%, although patients who actually complain of symptoms are rare. 5 Therefore, in clinical practice, it is an important diagnostic point to determine whether a patient's symptoms are due to elongation of the styloid process. In this case, since the pattern of the elongated styloid process piercing the tonsil was clear enough to cause odynophagia, the diagnosis was not difficult. The styloid process may vary in length but normally ranges from 2.5 to 3 cm, and a length above 3 cm is regarded as abnormal elongation. 4 Various causes of abnormal elongation have been suggested, such as residual cartilage, ligament calcification and ossification, traumatic hyperplasia and metaplasia, aging-related inflammation, and hormonal changes. 6 Two conditions have been suggested to trigger pain in Eagle syndrome. First, patients who have undergone tonsillectomy for any other reason may develop scar tissue at the tonsillectomy site, which can irritate the styloid process and generate pain. The other condition is called stylocarotid artery syndrome, in which the carotid artery or the internal and external carotid arteries are compressed, decreasing the vascular diameter or stimulating nociceptors to cause pain. 7 In our case, however, the pain was caused by the styloid process directly piercing the tonsils, which could be a different mechanism. For diagnosis, the patient's medical history and a physical examination, such as palpation of the tonsils or tonsillar fossa, are essential.
Eagle syndrome is suspected if symptoms decrease when lidocaine is injected into the anterior pillar of the tonsillar fossa. 8 CT scans can show the length, angle, and ossification of the styloid process better than panoramic radiography. In particular, three-dimensional reconstructed CT scans are useful for identifying the position of the styloid process and its relationship with the surrounding structures. 9 Multiple therapeutic options may be considered, including analgesics, anticonvulsants, antidepressants, lidocaine, steroids, and nerve-blocking injections. 10 Symptoms can be improved with medication, but they may recur when the medication is tapered. 11,12 A more fundamental and consistent treatment is the surgical approach, which is divided into transoral and transcervical approaches. In the transoral approach, after tonsillectomy, the muscles and mucous membranes are dissected to expose the apex of the styloid process, which is then removed using a cutting tool (such as a CO2 laser). [13][14][15][16][17][18] The transoral approach has the cosmetic advantage of leaving no visible scar after surgery, but the disadvantage that the styloid process is not completely exposed during the operation. 19 Recently, a new method was introduced in which the styloid process is removed after an incision is made into the anterior tonsillar pillar and the tonsil is only pushed to the medial side, without performing tonsillectomy. 20 Complete resection is possible with the transcervical approach, although it carries risks such as postoperative scarring, a long recovery period, and the possibility of carotid artery damage during surgery. 21 In our case, however, since the styloid process piercing the tonsil was considered the cause of odynophagia, removing the styloid process after exposing it through tonsillectomy seemed the most reasonable method. In summary, a patient who had suffered from odynophagia and foreign body sensation even after a long period of medication was diagnosed with Eagle syndrome using CT and was completely cured after surgical intervention using a CO2 laser. As this is a rare case, we report it together with a review of the relevant literature.
2020-07-02T10:08:13.153Z
2020-06-30T00:00:00.000
{ "year": 2020, "sha1": "985ad18ce634a42a5fa80ce1cdfd25e2c6302bc3", "oa_license": "CCBYNC", "oa_url": "https://www.jkslms.or.kr/journal/download_pdf.php?doi=10.25289/ML.2020.9.1.71", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "aa953bb7d14773e4298cfcd42c13a0686b6bab7b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
139610040
pes2o/s2orc
v3-fos-license
Experimental study of thermophysical properties of thin-film coatings based on hollow microspheres The paper describes results of an experimental study of thermal properties of energy-efficient thin-film coatings based on hollow glass microspheres MS-V2L in a styrene acrylic dispersion binder «Akrilan 101». A value of energy-efficient paint thermal conductivity depending on its composition and temperature and a value of the thermal diffusivity of the paint were experimentally determined. Data on the energy saving paint density and specific heat capacity were also obtained. Introduction Reduction of heat loss is an urgent task to improve energy efficiency of heat energy generating, transmission and consumption facilities. To reduce heat loss various insulating materials are used, which must have a number of positive characteristics, i.e. low thermal conductivity, low moisture absorption, low corrosive activity and mechanical strength [1,2,3]. At present, in order to save energy thin film coatings consisting of hollow microspheres arranged in a binder material are widely used, which have physical properties of a paint, but with a lower thermal conductivity [4,5]. The data known in the literature on the thermal properties of energy-efficient thin-film coatings are highly contradictory and differ by at least an order of magnitude [6,7]. Therefore, the study of the thermal properties of energy-efficient thin-film coatings (energy saving paints) is a vital task, the solution of which will improve the accuracy of the thermal calculations. Determination of the thermal conductivity coefficient An estimate of the thermal conductivity coefficient was made experimentally for energy-saving paint samples with a mass content of microspheres of 8%, 25% and 32.6% in acrylic binder and for a sample of acrylic coating with no added microspheres. The thickness of the test samples was as follows: 1.5 mm for a binder with added microspheres and 1 mm for pure acryl. The thermal conductivity coefficient of the samples was determined on a laboratory bench by the method of a cylindrical layer in the regular regime of thermal conductivity [8,9]. Laboratory bench is a thick-walled steel cylinder with an outer diameter of 245 mm and 630 mm in length, inside which there are the two electrical heaters connected to the electrical network via an autotransformer. To ensure uniform heat transfer on the surface of the working zone and exclude convective currents near electric heaters, the inner cavity of the steel cylinder was filled with claydite. To reduce heat losses, the side and end surfaces of the cylinder were covered with thermal insulation. The surface temperature of the cylinder was measured by thermocouples located at 8 points along the circumference of the laboratory bench working zone. Energy-saving paint was applied to a thin plate of mild steel with a width of 50 mm and a thickness of 0.2 mm. The steel plate with the applied energy-saving paint was placed on the working zone of the laboratory bench. The temperature of the inner surface of the layer of the test sample is equal to the average surface temperature of the cylinder, which was found from the thermocouples readings of the working zone. The temperature of the outer layer of the test sample was determined by means of 4 contact thermocouples applied to the test sample. 
The density of the heat flow passing through the layer of energy-saving paint was determined by the ITP-MG4.03/X(I) "Potok" instrument with a relative measurement error of ±6% and an automatic data recording function. The temperatures of the internal and external surfaces of the energy-saving paint were determined using type "T" thermocouples and a secondary device ADAM-4000 with a measurement error of ±1°C. The power of the electric heaters was regulated by an autotransformer in the range from 5 V to 65 V in steps of 15 V. The stationary mode of thermal conductivity was established 24 hours after the start of heating. The readings of the measuring instruments were recorded in automatic mode at 1-hour intervals. A series of experiments was performed at different values of the heat flux passing through the samples under study. The average value of the thermal conductivity coefficient in the temperature range of 20 to 100°C was as follows: ~0.028 W/(m·K) for the binder (acrylic); ~0.025 W/(m·K) at 8% content of microspheres in the energy-saving paint by weight; ~0.022 W/(m·K) at 25% microsphere content; and ~0.019 W/(m·K) at 32.6% microsphere content. The thermal conductivity coefficient of the energy-saving paint in the temperature range of 20 to 100°C can be approximated, with an accuracy better than 12%, by a formula in W/(m·K) in which C is the mass concentration of microspheres, %, and T is the temperature, °C. The experimental values of the thermal conductivity coefficient of the energy-saving paint as a function of temperature and composition are shown in figures 1 and 2. Determination of thermal diffusivity The coefficient of thermal diffusivity of the energy-saving paint was found by the regular regime method [10,11] using a modified air "α-calorimeter", in which the test sample was asymmetrically heated longitudinally by the flow of hot air around it. The air flow velocity was chosen so that the thermally thick body condition Bi > 100 was satisfied for the heated sample. The test sample was a parallelepiped with dimensions of 45×60×75 mm, made of energy-saving paint with a mass content of microspheres of ~32.6% in an acrylic binder. The sample, insulated with mineral wool on all sides except the working surface, was blown by a stream of hot air from a blower with a built-in heater. Using the blower control unit, it was possible to vary the velocity and temperature of the air flow in the ranges of 0 to 20 m/s and 20 to 120°C, respectively. During the experiments, the velocity and temperature of the air flow were measured using a MES-200A meteorological meter with a velocity measurement error of ±1.5 m/s and a temperature measurement error of ±0.5°C. The temperature of the sample at three points (in the center and on the upper and lower faces) was determined using type "T" thermocouples and a secondary device ADAM-4000 with a measurement error of ±1°C. A series of experiments was performed at different air temperatures. It was found that the thermal diffusivity of the energy-saving paint was in the range of 2.7·10⁻⁸ to 3.1·10⁻⁸ m²/s. Determination of density The density of the energy-saving paint was determined by weighing the test sample on an electronic scale SHIMADZU UW-420H with a mass measurement error of 0.001 g. The test sample was made by successively applying layers of paint ~2 mm thick in a cylindrical form with a volume of 383.5 ml. After applying each layer of paint, the sample was dried for a day. Also, the form was weighed with the energy-saving paint without drying it. The density of the energy-saving paint was determined by the well-known expression ρ = m/V, kg/m³, where m is the mass of the sample, kg, and V is the sample volume, m³. Calculation of the specific heat capacity Knowing the experimental data on the thermal conductivity, thermal diffusivity, and density of the energy-saving paint, the mean specific mass heat capacity can be found by formula (3), J/(kg·K): c = λ/(a·ρ), where λ is the coefficient of thermal conductivity of the energy-saving paint, W/(m·K); a is the coefficient of thermal diffusivity, m²/s; and ρ is the energy-saving paint density, kg/m³. The specific mass heat capacity of the energy-saving paint with a mass content of microspheres of 32.6% is ~2670 J/(kg·K). Conclusions New data on the thermal conductivity coefficient of the energy-saving paint as a function of its composition and temperature were obtained, and the value of the coefficient of thermal diffusivity was estimated. The density of the energy-saving paint and its specific heat capacity were also determined. The new data on the thermophysical properties of thin-film coatings (energy-saving paints) will increase the accuracy of heat transfer analysis of multi-layer enclosing structures.
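As a quick numerical illustration of formula (3), the short sketch below back-calculates the paint density that is consistent with the reported thermal conductivity (~0.019 W/(m·K) at 32.6% microspheres), a mid-range thermal diffusivity (~2.9·10⁻⁸ m²/s), and the reported specific heat capacity (~2670 J/(kg·K)). The density value itself is not stated in the text above, so the number obtained here is only a consistency check, not a measured result.

# Consistency check of formula (3): a = lambda / (rho * c)  =>  c = lambda / (a * rho).
# The conductivity, diffusivity and specific heat are the reported values for the
# 32.6% microsphere paint; the density is back-calculated (it is not given above).
lam = 0.019           # thermal conductivity, W/(m*K)
a = 2.9e-8            # thermal diffusivity, m^2/s (mid-range of 2.7e-8 .. 3.1e-8)
c_reported = 2670.0   # specific mass heat capacity, J/(kg*K)

rho_implied = lam / (a * c_reported)   # density consistent with the values above, kg/m^3
c_check = lam / (a * rho_implied)      # reproduces c_reported by construction

print(f"implied paint density: {rho_implied:.0f} kg/m^3")        # ~245 kg/m^3
print(f"specific heat from formula (3): {c_check:.0f} J/(kg*K)")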
2019-04-30T13:04:44.349Z
2017-11-10T00:00:00.000
{ "year": 2017, "sha1": "19385778d266513a994e7c494b5ab07d633e93ce", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/891/1/012333", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6c0a5147c7d97f4532f78a8b1e85a7ed43de884f", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
6584865
pes2o/s2orc
v3-fos-license
Robust no-free lunch with vanishing risk, a continuum of assets and proportional transaction costs We propose a continuous time model for financial markets with proportional transactions costs and a continuum of risky assets. This is motivated by bond markets in which the continuum of assets corresponds to the continuum of possible maturities. Our framework is well adapted to the study of no-arbitrage properties and related hedging problems. In particular, we extend the Fundamental Theorem of Asset Pricing of Guasoni, R\'asonyi and L\'epinette (2012) which concentrates on the one dimensional case. Namely, we prove that the Robust No Free Lunch with Vanishing Risk assumption is equivalent to the existence of a Strictly Consistent Price System. Interestingly, the presence of transaction costs allows a natural definition of trading strategies and avoids all the technical and un-natural restrictions due to stochastic integration that appear in bond models without friction. We restrict to the case where exchange rates are continuous in time and leave the general c\`adl\`ag case for further studies. Introduction The main contribution of this paper is to construct a continuous time model for financial markets with proportional transaction costs allowing for a continuum of risky assets. Such a model should have two important properties: 1. financial strategies should be defined in a natural way ; 2. it should allow one to retrieve the main results already established in the "finite dimensional price" case. Our model has both. Frictionless models with a continuum of assets have already been proposed in the literature, cf. [2], [8], [13] and [24]. However, working with infinite dimensional objects leads to important technical difficulties when it comes to stochastic integration. This imposes non-natural restrictions on the set of admissible trading strategies, resulting in that even markets with a unique equivalent martingale measure are incomplete, in the sense that the set of attainable bounded claims is generically only dense in L ∞ and not closed. Other surprising pitfalls and counter-intuitive results were pointed out in [25]. Introducing transaction costs allows one to reduce these problems. The main reason is that it naturally leads to a definition of wealth processes which does not require stochastic integration. Once frictions are introduced, one comes up with a more realistic but also more natural and somehow simpler model. In [4], the authors studied for the first time an infinite dimensional setting within the family of models with proportional transaction costs. They considered a countable number of assets in a discrete time framework, and imposed a version of the efficient friction condition, namely that the duals of the solvency cones have non-empty interior. Since perfectly adapted to discrete time models, they studied the No-Arbitrage of Second Kind (NA2) condition, first introduced in [19] and [20]. They showed that it implies the Fatou closure property of the set of super-hedgeable claims and noted that this closure property is in general lost if the efficient friction condition is replaced by a weaker condition, such as only requiring the solvency cones to be proper (as in finite dimensional settings). In [4] also a dual equivalent characterization was given in terms of Many Strictly Consistent Price Systems (MSCPS condition), cf. [18], [19]. These price systems are the counterpart of the martingale measures in frictionless markets, i.e. 
the building blocks of dual formulations for derivative pricing and portfolio management problems. The main contribution of the present paper is to provide an extension of this model to a continuous time setting with a continuum of assets: the price process is, roughly speaking (for details see ( , when endowed with its sup norm. Taking into account the infinite dimension, we develop this into a Kabanov geometrical framework (cf. [18] for the finite dimensional case), with locally compact instantaneous solvency cones in M([0, ∞]) endowed with its weak* topology, their dual cones being viewed as subsets of C([0, ∞]). Within this model, we study the No-Free Lunch with Vanishing Risk property, which is admitted to be the natural no-arbitrage condition in continuous time frictionless markets since the seminal paper of Delbaen and Schachermayer [9]. As [14], we consider a robust version (hereafter RNFLVR), robust being understood in the sense of [23], see also [15]: the no-arbitrage property should also hold for a model with slightly smaller transaction cost rates. It is now standard in the continuous time literature. Within this framework, the Fatou-closure (resp. weak*-closure) property of the set of super-hedgeable claims evaluated in numéraire (resp. in numéraire units at t = 0) is established (Theorem 3.1). Moreover, by using Hahn-Banach separation and measurable selection arguments, we prove the existence of Strictly Consistent Price Systems, which turns out to be equivalent to the RNFLVR condition (Theorem 4.1 and Theorem 4.2). From these results, a super-hedging theorem would be easy to establish by following very standard arguments, compare for instance with [3], [7] and [11]. All these results are natural extensions of the finite dimensional case, which validates the well-posedness of our model. Several subjects are left to future studies. First, we have chosen to consider continuous price and transaction costs processes. This restriction is motivated by our wish to separate the difficulties related to the infinite dimensional setting and the ones coming from possibly time discontinuous prices and exchange rates. The latter case would require an enlargement of the set of admissible strategies along the lines of [7]. We have no doubt that this is feasible within our setting and leave it to further studies. Second, the NA2 property of no-arbitrage (robust or not) could also be discussed in continuous time settings, see [12]. We also leave this to further studies. Model formulation We first briefly introduce some notations that will be used throughout the paper. All random variables are supported by a filtered probability space (Ω, F , F, P), with F = (F t ) t∈T satisfying the usual conditions, T := [0, T ] for some T > 0. Without loss of generality, we take F 0 equal to {Ω, ∅} augmented with P-null sets, and F T = F . If nothing else is specified, assertions involving random variables or random sets are understood to hold modulo P-null sets. We denote by T the set of all stopping times τ ∈ T. As usually, for a sub σ-algebra G of F and a measurable space (E, E), L 0 (G; (E, E)) stands for the set (of equivalence classes modulo P-null sets) of G/E-measurable E-valued random variables. For a topological space E, the Borel σ-algebra generated by E is denoted B(E) and when no risk for confusion the terminology "measurable space E" is used. For a sub σ-algebra G of F , this defines the notation L 0 (G; E). 
For a normable (real) topological vector space E, we denote by L p (G; E), the linear subspace of elements ζ ∈ L 0 (G; E) such that, for a compatible norm · E , ζ E has a finite moment ζ L p (G;E) of order p if p ∈ (0, ∞), and is essentially bounded if p = ∞. For p ≥ 0, L p (G; E) is given its standard vector space topology. For E = R or G = F , we sometimes omit these arguments. For two topological spaces E and F, C(E; F ) is the set of continuous Let E be a compact Hausdorff topological space (in the sequel all compact spaces are supposed to be Hausdorff, if not stated differently). The Banach space C β (E) (resp. topological vector space C σ (E)) is by definition the vector space C(E) endowed with its supremum norm · C(E) (resp. with its weak σ(C(E), M(E)) topology), where M(E) is the vector space of real Radon measures on E, i.e. M(E) is the topological dual of C β (E). Such Radon measures will always be identified with their unique extension to the completion of a regular Borel measure on E. We use the standard notation µ(f ) = E f dµ for µ ∈ M(E) and all µ-integrable real valued maps f on E. If f is µ-essentially bounded, we write f µ to denote the measure in is by definition the vector space M(E) endowed with its total variation norm · M (E) (resp with its weak* σ(M(E), C(E)) topology). The positive orthants of C(E) and M(E) are denoted by C + (E) and M + (E) respectively. We also use the notation C >0 (E) for the set of continuous functions taking only strictly positive values. If G is a sub σ-algebra of F , G is a topological space and F is a setvalued function Ω ∋ ω → F (ω) ⊂ G, then L 0 (G; F ) is the subset of elements f ∈ L 0 (G; G), such that f (ω) ∈ F (ω) P-a.s., so L 0 (G; F ) is the set of G/B(G)measurable selectors of the graph Gr(F ) := {(ω, e) ∈ Ω × G : e ∈ F (ω)}. In this context, we make the following convention concerning the topology of G: is then the set of weakly (resp. weak*) measurable selectors of Gr(F ). When E =R + := R + ∪ {∞}, the one point compactification of R + , we simply write C C for C(R + ) and M M for M(R + ). The objects C are defined in an obvious way with reference to C C and M M. Given a subset Y ⊂ C(E), we say that a process ζ = (ζ t ) t∈T is Y -valued if ζ t (ω) ∈ Y for (ω, t) ∈ Ω × T a.e. dP ⊗ dt. We say that it is strongly (resp. The process ζ is said to be strongly continuous if ζ ∈ C(T; C β (E)) P-a.s. Given a family of random Radon Financial assets and transaction costs We first describe the financial assets. Since we want to allow for a continuum of assets, covering the case of bond markets, we model their evolution by a stochastic process with values in the set of curves on R + . More precisely, we consider a mapping and interpret S t (x) as the value at time t of the asset with index x. We make the following standing assumptions, throughout the paper: In models for bond markets, x ∈ R + can be interpreted as the maturity of a zero-coupon bond and it is usually assumed that x → S t (x)(ω) has (for a.e. ω) certain differentiability properties. In this paper, we only impose its continuity and positivity. Note that, although in applications to bond markets it is natural to model prices as a curve x → S(x) on R + , we here assume that R + ∋ x → S t (x)(ω)/S 0 (x) has an extension to C C. Similar conditions are satisfied in continuous time models without transaction costs, cf. [13,Theorem 2.2]. In this paper, we consider a market with proportional transaction costs. 
When transferring at time t an amount a(x, y) from the account invested in asset x to the account invested in asset y, the account invested in asset y is increased by a(x, y) and the account invested in asset x is diminished by (1 + λ t (x, y))a(x, y). Otherwise stated buying one unit of asset y against units of asset x at time t costs (S t (y)/S t (x))(1 + λ t (x, y)) units of asset x. The mapping is assumed to have the following continuity and measurability properties: λ is C(R 2 + )-valued and weakly F-adapted, λ is a strongly continuous process, (2.6) The two first assumptions are of technical nature. The "triangular condition" (2.7) is natural from an economical point of view and does not limit the generality. The important assumption is contained in (2.4) which imposes (strictly) positive transaction costs on any exchange between two different assets. This corresponds to a strong version of the usual efficient friction assumption, which was already imposed in continuous settings by [14], [15] and [17]. See Remark 2.3 below. Wealth process 2.2.1 Motivation through discrete strategies Before to provide a precise definition of the notion of trading strategy we shall use in this paper, let us consider the case of discrete in time and space strategies, in a deterministic setting. In such a context, we can model the money transfers from and to the accounts invested in assets x i ∈R + , i ≥ 1, at times s k , k ≥ 1, by non negative real numbers a s k (x j , x i ) ≥ 0: the amount of money transferred at time s k to the account invested in x i by selling some units of x j . Since the price at time s k of the asset x i is S s k (x i ), the net number of units of x i entering and exiting the portfolio at time s k is given by To obtain the time-t value of these transfers, one needs to multiply by S t (x i ): The global net value at time t of all transfers to and from the account invested in the asset x io on the time interval [0, t] is then given by These quantities will in general be random, but must be adapted in the sense For a real valued function f onR + , let us set Then, where L is the Borel measure on T ×R 2 + defined by for A × B × C in the Borel algebra of T ×R 2 + . If one wants to introduce an initial endowment v = (v({x i })) i≥1 labeled in amount of money, then one has to convert it into time t-values so that the time t-value of the portfolio becomes Viewing V t and v as a Radon measures onR + , this leads to Trading strategies and portfolio processes The discussion of the previous section shows that it is natural, in the presence of a continuum of assets, to model financial strategies and portfolio processes as measure-valued processes on T ×R 2 + andR + respectively. We now make this notion more precise. We recall that Radon measures are identified with their unique extension to regular Borel measures. is weak*-adapted. We set by convention L 0− ≡ 0, and denote by L the collection of such processes. Note that the above definition is a natural extension of the finite dimension case in which transfers are modeled by multidimensional càdlàg non-decreasing adapted processes. We are now in position to define the notion of portfolio processes. For f ∈ C(T ×R + ), we set We note that H is a linear continuous operator from C β (T×R + ) to C β (T×R 2 + ) (and also when both spaces are endowed with the weak topology) and observe that according to the definition of G in (2.8), . for some trading strategy L and some initial endowment v ∈ M M. If v = 0, we simply write V L . 
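To make the transfer convention above concrete, here is a minimal single-date sketch (in Python) of the bookkeeping rule just stated: a transfer of a(x, y) credits the account in asset y and debits the account in asset x by (1 + λ(x, y))·a(x, y). The asset labels, amounts and cost rates are purely hypothetical, and because only one date is involved the price revaluation factors S_t(x)/S_s(x) that appear in the text do not enter.

# Hypothetical two-asset, single-date illustration of the transfer rule:
# account y is increased by a(x, y), account x is decreased by (1 + lambda(x, y)) * a(x, y).
cost = {("cash", "bond"): 0.002, ("bond", "cash"): 0.002}   # lambda(x, y), hypothetical
accounts = {"cash": 100.0, "bond": 0.0}                     # money held in each asset

def transfer(accounts, cost, x, y, amount):
    """Move `amount` of value from the account in asset x to the account in asset y."""
    accounts[y] += amount
    accounts[x] -= (1.0 + cost[(x, y)]) * amount

transfer(accounts, cost, "cash", "bond", 40.0)
print(accounts)   # {'cash': 59.92, 'bond': 40.0}: 0.08 has been lost to transaction costs

# Transferring the bond position straight back only recovers 40.0 / 1.002 ~ 39.92 in cash,
# so the round trip costs about 0.16 in total -- the friction that the solvency cones encode.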
It follows from Proposition 5.1 (b) in the Appendix and from the continuity of H that V v,L is weak* F-adapted. A related question is: If we only know thatL is a M + (T ×R 2 + )-valued random variable and that the portfolio process VL, constructed as in (2.12), is weak*-adapted, does it follow thatL ∈ L, i.e. t →L| [0,t]×R 2 + is weak*adapted ? The answer is no. However, it follows from Corollary 5.2 in the Appendix that there always exists L ∈ L such that V L = VL. Solvency cones and dual cones We first define, for ω ∈ Ω, 13) where cone denotes the convex cone (finitely) generated by a family. The set K t (ω) := {ν ∈ M M : δ t ⊗ ν ∈K(ω)} coincides with solvent financial positions at times t ∈ T ∩ Q in the assets x ∈R + ∩ Q, i.e. portfolio values that can be turned into positive ones (i.e. elements of M M + ) by performing immediate transfers. This corresponds to the notion of solvency cone in the literature, see [18]. We then define K(ω) as the weak* closure in M(T ×R + ) ofK(ω). Using the a.s. continuity of (t, x, y) → λ t (x, y) noted in Remark 2.1, one easily checks that the (positive) dual cone K ′ (ω) of K(ω) in M σ (T ×R + ) is given by Given t ∈ T, the instantaneous solvency cone K t (ω) in the state ω at time t and, what will be proved to be, their dual cones K ′ t (ω) are defined as 16) in which cl denotes the norm closure on C C. Before continuing with our discussion, let us first state important properties of the above random sets. The proofs are provided at the end of this section. For each ω ∈ Ω and t ∈ T, we denote by int(K ′ (ω)) (resp. int( . Note that if the strong topology is replaced by the weak one, then the interiors of K ′ (ω) and K ′ t (ω) are always empty, since this is the case for C + (T ×R + ) and C C + . The proofs of the following results are provided at the end of this section. and The fact that the cones K ′ t have non-empty interior is an immediate consequence of the condition λ t (x, y) > 0 ∀(x, y) contained in (2.4). The condition intK ′ t = ∅ is usually referred to as the efficient friction assumption. In finite dimensional settings (i.e. ifR + is replaced by a finite set), it is equivalent to the fact that the K t are proper or that λ t (x, y) + λ t (y, x) > 0 for all x = y, see e.g. [18]. This last equivalence does not hold anymore when the dimension is not finite, see [4, Remark 6.1]. We now define the associated notion of liquidation value at t ∈ T, the highest value in asset 0 which can be obtained from a position ν ∈ L 0 (F t ; M M σ ) at t by liquidating all other positions in (0, ∞]: (2.20) Observe that the duality between K t and K ′ t implies The function ℓ t inherits the measurability properties of Proposition 2.2, as will be proved below. is the open ball in C(T ×R + ) of radius ǫ centered at 0. Since T ×R + is compact, it follows from (2.14) that such an ǫ exists if and only if formula (2.17) holds. Let e ∈ C(T ×R + ) be the constant function taking the value 1. Then e ∈ int(K ′ (ω)) according to (2.17), since λ(ω) has a strictly positive minimum on T ×R 2 + by compactness and continuity. Let A t be the right hand side in the equality (2.18). A t is non-empty since it contains the positive constant functions, recall (2.4). We define the linear continuous operator P t : Being also surjective, P t is an open mapping. Therefore O t := P t (int(K ′ (ω))) is a non-empty open set. For the moment, we make the hypothesis that Since int(K ′ (ω)) and A t are non-empty convex cones, their closures coincide with the closures of their interiors. 
The continuity of P t thus ensures that . This proves equality (2.19). Taking the interior of both sides of this equality gives (2.18). Finally, we prove the above hypothesis O t = A t . The inclusion O t ⊂ A t follows trivially, by definition (2.16) and equality (2.17). To prove the Since the unit constant function e belongs to int(K ′ (ω)), a compactness and continuity argument allows to choose δ > 0 small enough such that f ∈ int(K ′ (ω)) given by (2.17), recall Remark 2.1. 2. Here again, we fix ω ∈ Ω and set t := τ (ω) to alleviate the notations. By definition, 3. We now prove the measurability properties. a. We start with K ′ τ . For f ∈ C C and t ≤ T, let us set Note that, for f ∈ C C, For n ≥ 1 and 0 ≤ k ≤ 2 n , set s n k := k2 −n t, for some t ∈ T, and let (x l , y m ) l,m≥1 be dense inR 2 + . Then, the above, combined with the continuity of λ stated in Remark 2.1 and the compactness of T ×R 2 + , implies that into R 3 is a Carathéodory function, i.e. measurable with respect to ω and continuous with respect to f , hence F t ⊗ B(C C σ )-measurable. By continuous compositions, so is the mapping (ω, f ) → F s n k ,x l ,ym (f )(ω). Hence, A t ∈ F t ⊗ B(C C σ ). By arbitrariness of t ∈ T, this shows that Gr(K ′ τ ) ∈ F τ ⊗ B(C C σ ). For later use, note that minor modifications of the above arguments show that . It remains to discuss the measurability of Gr(K τ ). It will follow from the P-a.s. duality between K τ and K ′ τ . We first note that whereF is defined as in (2.22). Let (f n ) n≥1 be a dense family of C C β and set The assertion (2.23) implies that B := ∩ n B n is an element of F τ ⊗ B(M M σ ). Proof of Proposition 2.3. The result follows from Proposition 2.2 and the fact that, for c o ∈ R, 3 Robust no free lunch with vanishing risk and closure properties 3 .1 Definitions We are now in position to define the notion of no-arbitrage we shall consider. As in [14], we use the robust version of the No Free Lunch with Vanishing Risk criteria. For this purpose, we restrict to strategies that are bounded from below in the following sense. is the subset of random variables ζ ∈ L 0 (F T ; M M σ ) bounded from below by c in the sense that The set of all M M-valued random variables bounded from below is A strategy L ∈ L is said to be bounded from below, if there exists η ∈ M M such that We denote by L b the set of such strategies, they are said to be admissible. The set of admissible strategies, for which the terminal portfolio values are c-bounded from below is denoted by The set of bounded from below random claims that can be super-hedged starting from a zero initial endowment and by following an admissible strategy is The no-free lunch with vanishing risk property (NFLVR) is defined in a usual way. Definition 3.2 (NFLVR) We say that (NFLVR) holds if for each sequence (X n , c n ) n≥1 ⊂ X T b × R + : lim n c n = 0 and X n ∈ X T b (c n ) for all n ≥ 1 imply lim sup n ℓ T (X n ) ≤ 0 P-a.s. In order to define a robust version of the above, one needs to consider models with transaction costs strictly smaller than λ. We denote by Υ the set of C >0 (R 2 + )-valued adapted processes ǫ such that the left-hand side of (3.2) satisfies the conditions (2.4)-(2.7). The above definition is similar to Definition 5.2 in [14], except that they use a notion of simple strategies. Closure properties The main result of this section is a Fatou-type closure property for the set of terminal values of super-hedgeable claims X T b . 
A subset F of L 0 (F ; M M) is said to be Fatou-closed if any Fatou-convergent sequence has a limit in F . It will readily imply that the corresponding set of super-hedgeable claims labeled in terms of numeraire units at t = 0 is weak*-closed. The proof of Theorem 3.1 will be split in several parts. We first establish two boundedness properties which follow from our (RNFLVR) assumption (compare with [14, Lemma 5.4, Lemma 5.5]). Proof If the assertion of the lemma is not true, then one can find a real number α > 0 and a sequence (X n ) n≥1 ⊂ X T ǫ b (c) such that By definition of X T ǫ b (c), there exists (η n ) n≥1 ⊂ L 0 (F ; M M) such that η n M M ≤ c and X n + S T S −1 0 η n ∈ K ǫ T , for all n ≥ 1. SetX n := X n /n andη n := η n /n, so thatX n + S T S −1 0η n ∈ K ǫ T and c/n → 0. Under (NFLVR) ǫ , this implies that ℓ ǫ T (X n ) → 0 in probability. This contradicts (3.4). Proof Let ǫ be as in Definition 3.3. 1. Fix L ∈ L b (c) a c-admissible strategy and set it follows that where for some η ∈ M M with η M M ≤ c. Now observe that K T ⊂ K ǫ T , and therefore In particular, this shows that 2. Let L ∈ L b (c) be as above. By (3.5) and (2.21) applied to ℓ ǫ T , Appealing to (3.6) and Lemma 3.1, this implies that {ℓ ǫ T (µ L ), L ∈ L b (c)} is bounded in probability. We now apply Remark 2.4 to ℓ ǫ T : where ι T ∈ L 0 (F ; (0, ∞)). Since L ∈ M + (T ×R 2 + ), the lemma now follows from where a := inf{ǫ s (x, y)S T (x)/S s (x) : (s, x, y) ∈ T ×R 2 + } ∈ L 0 (F ; (0, ∞)) by a continuity and compactness argument, recall Remark 2.1 and the definition of Υ. In order to deduce from the above the required closure property, we now state a version of Komlòs lemma. Lemma 3.3 Let E be a compact space and (L n ) n≥1 ⊂ L 0 (F ; M +β (E)) be bounded in probability. Then, there exists a sequence (L n ) n≥1 , satisfyingL n ∈ conv(L k , k ≥ n) for all n ≥ 1, which weak*-converges P-a.s. to some L ∈ L 0 (F ; M + (E)). Proof a. Let I := (f k ) k≥1 be a dense subset of the separable space C β (E). Then, combining [18, Lemma 5.2.7] with a diagonalisation procedure shows that there exists a sequence (L n ) n≥1 such thatL n ∈ conv(L k , k ≥ n) for all n ≥ 1, and such that (L n (f k )) n≥1 converges P-a.s. to some ζ k ∈ L 0 (F , R). We set L(f k ) = ζ k . b. We now extend L to C(E). To do this, we note that, for each g ∈ C(E), one can find a sequence (g k ) k≥1 ⊂ I that converges in C β (E) to g. We claim that lim k≥1 L(g k ) is well defined and does not depend on the chosen sequence (g k ) k≥1 that converges to g. First, we show that (L(g k )) k≥1 is P-a.s. a Cauchy sequence. Indeed, The first term on the right is a.s. bounded while the second term converges to 0 as k, k ′ → ∞, since C β (E) is complete. It remains to check that the result is the same if we consider two different approximating sequences. But this follows immediately from the same estimates. For g as above, we can then define L(g) := lim k≥1 L(g k ). c. To see that (L n ) n≥1 converges P-a.s. to L in the weak* topology, let us note that, for g ∈ C(E), one has Taking (g k ) k≥1 that converges to g in C β (E) leads to the required result by first taking the limit n → ∞, and then k → ∞. d. The above also shows that the map C β (E) ∋ g → L(g) is continuous P-a.s. The linearity is obvious. e. The measurability is obvious since L(f k ) is F -measurable as the P-a.s. limit of F -measurable random variables, which extends to L(g) for any g by the construction in b. above. is bounded in probability. 
Then, there exists a sequence (L n ) n≥1 , satisfyingL n ∈ conv(L k , k ≥ n) for all n ≥ 1, that converges P-a.s. for the weak* topology to some L ∈ L. Proof It suffices to apply Lemma 3.3 to E := T ×R 2 + . The weak*measurability property of Definition 2.1 follows by the weak*-convergence property of Lemma 3.3. We are now in position to conclude the proof of Theorem 3.1 by using routine arguments, which we provide here for completeness. Proof of Theorem 3.1. a. Let us suppose that (X n ) n≥1 ⊂ X T b weak*converges P-a.s. to X ∈ L 0 (F T ; M M). Moreover, assume that there exists η n ∈ L 0 (F T ; M M) such that X n + S T S −1 0 η n ∈ K T a.s. and c := sup n η n M M ∈ L ∞ . Let (L n ) n≥1 ∈ L b (c) be a sequence of transfer measures associated to (X n ) n≥1 , i.e. such that X n (f ) ≤ L n (G T (f )) for all n ≥ 1 and f ∈ C C + . (3.7) It follows from Lemma 3.2 that (L n ) n≥1 is bounded in probability. Applying Corollary 3.1, we may assume without loss of generality (up to passing to convex combinations) that L n T weak*-converges P-a.s. to some L ∈ L b . Using Remark 2.1, one easily checks that L n (G t (f )) → L(G t (f )) P-a.s. for all f ∈ C C. Passing to the limit in (3.7) thus implies X(f ) ≤ L(G T (f )) for all f ∈ C C + . This shows that X T b is Fatou-closed. b. By Krein-Šmulian's Theorem, (c.f. Corollary, Ch. IV, Sect. 6.4 of [22]), it suffices to show thatX T b ∩ B 1 is σ(L ∞ (F T ; M M), L 1 (F T ; C C))-closed, where B 1 is the unit ball of L ∞ (F T ; M M). To see this, let (X α ) α∈I be a net in X T b ∩ B 1 which converges σ(L ∞ (F T ; M M), L 1 (F T ; C C)) to someX ∈ B 1 . After possibly passing to convex combinations, we can then construct a sequence (X n ) n≥1 inX T b ∩ B 1 which weak*-convergences P-a.s. toX, see e.g. [4,Lemma 4.1]. By the continuity property of Remark 2.1, this implies that (X n ) n≥1 in X T b weak*-converges P-a.s. to X, with X n (f ) :=X n (f S T /S 0 ) and 4 Equivalence with the existence of a strictly consistent price system From now on, we define the set of strictly consistent price systems, M(int(K ′ )), as the set of C C-valued weakly F-adapted càdlàg processes Z = (Z t ) t∈T such that s. for all predictable τ ∈ T , (Zc.) ZS/S 0 is a C C-valued martingale satisfying ZS/S 0 C C ∈ L 1 . The terminology strictly consistent price systems was introduced in [23]. They play the same role as equivalent martingale measures in frictionless markets, see e.g. [18]. Existence under (RNFLVR) The main result of this section extends the first implication in [14, Theorem 1.1] to our setting. In order to show the above, we shall follow the usual Hahn-Banach separation argument based on the weak*-closure property of Theorem 3.1 above. This is standard but requires special care in our infinite dimensional setting. In particular, we shall first need to show that simple strategies are admissible. To this purpose, we introduce the notation (4.1) Clearly, the measurability of Proposition 2.2 extends toK. An element of −K τ can be interpreted as a portfolio holding, evaluated in terms of time-0 prices, obtained by only performing immediate transfers at time τ . The following technical result is obvious in discrete time settings. Proof Fixξ ∈ L ∞ (F τ ; −K τ ). 
We must show that there exists L ∈ L b such that This equation is satisfied if the portfolio process V L satisfies V L τ (g) = L τ (H(1 ⊗ g)) = ξ(g), for all g ∈ C C, We can now apply Corollary 5.2 in the Appendix and define L by Since λ1 [0,t]×R 2 + and µ1 [0,t]×R + are F t -measurable, it follows that L has the properties required by Definition 2.1, recall Remark 2.1 and (a.) of Proposition 5.1 in the Appendix. Asξ ∈ L ∞ (F ; M M β ), the strategy is bounded in the sense of Definition 3.1 We can now provide the proof of Theorem 4.1. Proof of Theorem 4.1. Fix ǫ ∈ Υ such that (NFLVR) ǫ holds. We shall construct Z such that (Zc) holds and Z τ ∈ K ǫ′ τ for all stopping times τ ∈ T . In particular, as a martingale, ZS/S 0 has to be càdlàg (cf. [21, Ch. II, Th. (2.9)]), and, since S has continuous paths and takes strictly positive values, Z is càdlàg. We shall also show that Z T ≥ 0 and that Z T (x) > 0 for at least onex ∈R + (actually along a dense sequence). Since (ZS/S 0 )(x) is a martingale, this implies that Z τ (x) > 0 for all stopping times τ ∈ T . In view of the definition of K ǫ′ τ this readily implies that Z τ ∈ int(K ′ τ ). Our continuity assumptions, see Remark 2.1, then imply that Z τ − ∈ K ǫ′ τ for all predictable stopping time τ ∈ T . Similarly as above, we must have Z τ − (x) > 0, see e.g. [16,Lemma 2.27], so that Z τ − ∈ int(K ′ τ ), whenever τ is predictable. This will show that M(int(K ′ )) = ∅. To find anǭ ∈ Υ such that M(int(Kǭ ′ )) = ∅, we just note that (RNFLVR) for the original transaction costs λ implies (RNFLVR) for some λǭ defined as in (3.2) for some Υ ∋ǭ < ǫ. Thisǭ can be easily constructed by using the argument of Remark 3.1. 1. It follows from the assumption (NFLVR) ǫ thatX T ǫ b ∩ L ∞ (F T ; M M + ) = {0}. The Hahn-Banach theorem and Theorem 3.1 then imply that, for any ν ∈ L ∞ (F T ; M M + ) \ {0}, there exists f ν ∈ L 1 (F T ; C C) and a real constant a ν such that SinceX T ǫ b is a cone of vertex 0 which contains L 0 (F T ; −M M + ), we deduce that Also observe that we may assume without loss of generality that f ν C C ≤ 1. 2. In the following, we use the fact that M M + is the σ(M M, C C)-closure of the cone generated by the countable basis (δ If Γ ∈ F T is a non-null set, then P [Γ ∩ A k (ν)] > 0 for ν defined by ν := δ x k 1 Γ ∈ L ∞ (F t ; M M + ). This follows from the left-hand side of (4.7) and the right-hand side of (4.5). By virtue of [18, Lemma 2.1.3 p74], we can then, for k given, find a countable subfamily {A k (ν i k ) : i ∈ N} ⊂ A k such that Therefore, B := ∩ k B k is a set of measure 1. Let us setŽ On each B k ,Ž T (x k ) > 0. This follows from (4.8) and (4.6). Since x →Ž T (x) is continuous, this implies thatŽ T (x) ≥ 0 for all x ∈R + P-a.s. For later use, note that Indeed, if it is not the case then, for every ω in the non-null set Λ τ := {Z τ / ∈ K ǫ′ τ } ∈ F τ , we may find ξ ω ∈ K ǫ τ (ω) ∩ M M 1 such that ξ ω (Z τ ) < 0. It follows that the set Γ := (ω, ξ) ∈ Ω × M M 1 : ξ ∈ K ǫ τ (ω) and ξ(Z τ (ω)) < 0 is of full measure on Λ τ × M M 1 , i.e. Λ τ \ {ω ∈ Ω : ∃ ξ ∈ M M 1 s.t. (ω, ξ) ∈ Γ} = ∅ up to P-null sets. As Γ is F τ ⊗ B(M M σ )-measurable, by a measurable selection argument, we then obtain an F τ -measurable selector ξ such that (ω, ξ(ω)) ∈ Γ on Λ τ and ξ = 0 otherwise, see e.g. [ Then, since {(S 0 /S τ )ν : ν ∈ L ∞ (F τ ; −K ǫ τ )} ⊂X T ǫ b , see Proposition 4.1 above, we obtain a contradiction to (4.9) if τ is such that S 0 /S τ C C ∈ L ∞ . This shows that Z τ ∈ K ǫ′ τ for such stopping times τ . 
In view of (2.2) and (2.3), the general case is obtained by a standard localization argument. 4. It remains to prove (4.10). We notice that the ξ in (4.10) is F τmeasurable, by construction. Thus, the random measure (S 0 /S τ )ξ can be viewed as an optional random measure with respect to (F t∨τ ) t∈T . Since Z τ S τ /S τ is by construction the (F t∨τ ) t∈T -optional projection at the stopping time τ of Z T S T /S τ =Ž T S 0 /S τ , it follows from Theorem 5.1 in the Appendix that Existence of strictly consistent price systems implies (RNFLVR) The fact that the existence of strictly consistent price systems implies (NFLVR) follows as usual from the super-martingale property of admissible wealth processes when evaluated along consistent price systems. In our infinite dimensional setting, this super-martingale property can not be deduced directly from an integration by parts argument as in e.g. [7]. We instead appeal to an optional projection theorem which we state in the Appendix. In the following, we let M(K ′ ) be defined as M(int(K ′ )) at the beginning of Section 3 but with K ′ in place of int(K ′ ). Proof Fix t ≥ s ∈ T and L ∈ L b . 1. Fix τ ∈ T and assume that In the following, we write X τ for the stopped process X ·∧τ associated to an adapted process X taking values in C C, C(R 2 + ) or M M. One has Moreover, a.s. to E V L t (Z t ) − |F s . It is then sufficient to apply Fatou's Lemma to the left-hand side of (4.13) to deduce that which concludes the proof. where the last inequality follows from the fact that Z 0 ∈ K ′ 0 . We now use (2.21) and the fact that Z T (0) > 0 P-a.s. to obtain Z T (0)ℓ T (X) ≤ Z T (0)X(Z T /Z T (0)) = X(Z T ), (4.14) so that, by the above, 2. Let (X n , c n ) n≥1 ⊂ X T b × R + be such that lim n c n = 0 and X n ∈ X T b (c n ) for all n ≥ 1. Let (η n ) n≥1 ⊂ M M be such that η n M M ≤ c n and X n + η n ((S T /S 0 )·) ∈ K T for all n ≥ 1. Then, Since η n (Z T S T /S 0 ) → 0 P-a.s., the last inequality combined with (4.15) applied to X = X n implies that X n (Z T ) → 0 P-a.s. We conclude from (4.14) and the fact that Z T (0) > 0 P-a.s. that lim sup n ℓ T (X n ) ≤ 0. Remark 4.2 (i). The existence of Z ∈ M(int(K ′ )) also implies a version of the robust no free lunch condition which is weaker than the one of Definition 3.3. More precisely, it implies that we can find ǫ, satisfying all the conditions in the definition of Υ except that the process t → ǫ t may no more be strongly continuous but only càdlàg, such that (NFLVR) ǫ holds. It is given by Then, Z ∈ M(K ǫ′ ) and Z T (0) > 0 by construction. To check that the property (NFLVR) ǫ holds, it then suffices to observe that the strong continuity assumption on the process λ is not used in the proof of Corollary 4.1. (ii). Combining Theorems 4.1 and 4.2 leads to: M(int(K ǫ′ )) = ∅ for some ǫ ∈ Υ ⇔ (RNFLVR) holds. One may want to prove: M(int(K ′ )) = ∅ ⇔ (RNFLVR) holds. Actually, Theorem 4.1 provides the direction ⇐. To prove the reverse implication, one will typically need to construct some ǫ as in (i) above. But this one does not, in general, belong to Υ if one only knows that Z is int(K ′ )-valued. One would need more information, for instance that Z is strongly continuous. As a matter of fact, the last equivalence can, in general, only hold if one can remove the strong time continuity condition in the definition of Υ, i.e. deal with jumps in the bid-ask prices. As explained in the introduction, we leave this case for further research. 
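To connect the consistent price systems of this section with the more familiar finite-dimensional picture (cf. [18]), the following sketch checks, for finitely many assets and at a single instant, the finite-asset analogue of instantaneous dual-cone membership: with accounts measured in units of money as in Section 2, a nonnegative shadow value Z is compatible with the cost rates λ precisely when no immediate transfer can create value, i.e. Z(y) ≤ (1 + λ(x, y))·Z(x) for every ordered pair of assets. This is only a static, finite-dimensional illustration with hypothetical numbers; the martingale requirement (Zc.) on ZS/S_0 and the genuinely infinite-dimensional aspects of the cones K'_t are not touched.

# Finite-asset analogue of instantaneous dual-cone membership: a nonnegative "shadow value"
# Z (one number per asset, accounts measured in money) is compatible with the cost rates
# lambda if no immediate transfer can create value, i.e.
#   Z(y) <= (1 + lambda(x, y)) * Z(x)   for every ordered pair (x, y).
# All numbers are hypothetical; the martingale condition on Z*S/S_0 is not checked here.
lam = {("x0", "x1"): 0.01, ("x1", "x0"): 0.01,
       ("x0", "x2"): 0.01, ("x2", "x0"): 0.01,
       ("x1", "x2"): 0.01, ("x2", "x1"): 0.01}
Z = {"x0": 1.000, "x1": 0.995, "x2": 1.004}

def in_dual_cone(Z, lam):
    if any(v < 0 for v in Z.values()):
        return False
    return all(Z[y] <= (1.0 + lam[(x, y)]) * Z[x]
               for x in Z for y in Z if x != y)

print(in_dual_cone(Z, lam))   # True: all ratios Z(y)/Z(x) lie within [1/1.01, 1.01]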
Appendix We report here on technical results that were used in the previous proofs. On optional projections and the measurability of composition of maps We first provide two standard results, which we adapt to our context. The proofs follow classical arguments and are reported only for completeness. ) t∈T is optional for any A ∈ B(R + ). Assume further that |µ|(|X|) ∈ L 1 . Then where X o is defined as the point-wise optional projection of X: Proof Obviously, one can restrict to the case where µ is non-negative by considering separately µ + and µ − . If X is of the form X t (x)(ω) = 1 A (x)ξ t (ω) with A ∈ B(R + ) and ξ is F ⊗B([0, T ])-measurable and bounded, then the optional projection X o of X is given by . Then, µ A is an optional random measure on [0, T ] by our assumption on µ. It then follows from [10, Chapter VI.2] that (5.1) holds. The monotone class theorem allows to conclude in the case where X is just measurable and bounded. The general case is obtained by a standard truncation argument. Proposition 5.1 Let E be a compact metrizable topological space and G a sub σ-algebra of F . Proof: (a.) By Pettis' theorem, weakly-measurable and strongly measurable C(E)-valued random variables coincides. We can then assume that g is strongly measurable. Let (h n ) n (resp. (µ n ) n ) be a convergent sequence in the Banach space C β (E) (resp. the Polish space M +σ (E) (see Corollary 5.1)) converging to h (resp. µ). The triangular inequality implies that |µ n (h m ) − µ(h)| ≤ |(µ n − µ)(h)| + |µ n (h m − h)| for all n, m ≥ 1. The first term on the r.h.s. converges to 0 by weak* continuity. The second converges to 0 by norm convergence in C(E) and norm boundedness of (µ n ) n≥1 (since weak*convergent). This proves the continuity of the bi-linear form. (b.) This assertion now follows by continuous composition of measurable mappings. (c.) Also here the continuity of the bi-linear form and the composition with a measurable mapping gives the result. Some topological properties of the solvency cones We now establish some topological properties of the solvency cones. Many arguments below are inspired by standard texts, see e.g. [6]. Since a deterministic set-up is sufficient here, we only consider deterministic transaction costs λ, but we consider a slightly more general context in terms of spaces than in the preceding sections. Namely, we consider two spaces X and Y satisfying X is a compact metrizable space and Y := T × X (5.2) where T = [0, T ] for some T ∈ [0, ∞). For λ ∈ C + (T × X 2 ) the cone K(λ) is now defined (cf. Sec.2.3) to be the closure in M σ (Y ) of the cone is the dual cone of the cone K ′ (λ) in C σ (Y ) and also of the cone K ′ (λ) in C β (Y ). Let us define Λ int := {λ ∈ C + (T × X 2 ) s.t. int(K ′ (λ)) = ∅}, (5.5) in which the interior is taken in C β (Y ). The A 1 is dense in M σ (Y ). It follows directly from the definition of K(λ) and (5.3) that the set A = A 1 ∩ K σ (λ) is dense in K σ (λ). The topological space K σ (λ) is therefore separable. The space C β (Y ) is separable, since Y is compact and metrizable (cf. [5, Theorem 1, Ch. X, §3]). LetC be a countable and dense subset of C β (Y ) and let V be the linear hull ofC. Since M +σ (Y ) is a closed subset of K σ (λ), when λ ∈ Λ int , the following is deduced from the above by setting T = 0. Corollary 5.1 M +σ (X) is a Polish space. A measurable selection result for trading strategies We now establish a measurable selection result. 
It is used in the proof of Proposition 4.1 to establish that simple strategies are admissible. This requires the introduction of some additional notations and of an elementary notion of deterministic causality described by progressive measurability, but without reference to the filtered probability space (Ω, F , F, P). As in the preceding section, X and Y are given as in (5.2), while Λ int is defined in (5.5). Let Λ int,β be Λ int endowed with the induced topology as a subspace of C β (T × X 2 ). In all this section, we fix λ ∈ Λ int , and defineΛ (resp.Λ β ) as the subset of Λ int (resp. subspace of Λ int,β ) of elements λ ∈ Λ int such that λ ≥λ. The topological space is Polish since this is the case of M +σ (T × X 2 ) (apply Corollary 5.1). (5.7) endowed with the coarsest topology for which all functions in C Pr (T × A) (resp. C Pr (T × B)) are continuous. The mapping I Pr : A Pr → B Pr is defined by I Pr (t, λ, L) = (t, I(λ, L)), (5.12) where I is defined in (5.9). For t ∈ T consider the canonical projection For t ∈ T, F A t is the inverse image of the Borel σ-algebra B(A) under this projection and F A := (F A t ) t∈T defines a filtration of A (when endowed with its conventional Borel measurable space structure). Similarly, the σ-algebra max(λ s (x, y), λ t (x, y)) if s ∈ (t, T ], for all (t, x, y) ∈ T × X 2 . We then define sets of progressive processesà Pr andB Pr , representing the equivalence classes, and a mappingĨ Pr :à Pr →B Pr bỹ
2013-02-02T02:48:29.000Z
2013-01-30T00:00:00.000
{ "year": 2013, "sha1": "71add6110cbf7e60c7354bff96baf1c4bc372408", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.spa.2014.04.012", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "3e14bdb4aec7734814a52d0d4c1ef41ce972979a", "s2fieldsofstudy": [ "Economics", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Economics" ] }
221585961
pes2o/s2orc
v3-fos-license
Many-body theory of radiative lifetimes of exciton-trion superposition states in doped two-dimensional materials Optical absorption and emission spectra of doped two-dimensional (2D) materials exhibit sharp peaks that are often identified with pure excitons and pure trions (or charged excitons), but both peaks have been recently attributed to superpositions of 2-body exciton and 4-body trion states and correspond to the approximate energy eigenstates in doped 2D materials. In this paper, we present the radiative lifetimes of these exciton-trion superposition energy eigenstates using a many-body formalism that is appropriate given the many-body nature of the strongly coupled exciton and trion states in doped 2D materials. Whereas the exciton component of these superposition eigenstates is optically coupled to the material ground state, and can emit a photon and decay into the material ground state provided the momentum of the eigenstate is within the light cone, the trion component is optically coupled only to the excited states of the material and can emit a photon even when the momentum of the eigenstate is outside the light cone. In an electron-doped 2D material, when a 4-body trion state with momentum outside the light cone recombines radiatively, and a photon is emitted with a momentum inside the light cone, the excess momentum is taken by an electron-hole pair left behind in the conduction band. The radiative lifetimes of the exciton-trion superposition states, with momenta inside the light cone, are found to be in the few hundred femtoseconds to a few picoseconds range and are strong functions of the doping density. The radiative lifetimes of exciton-trion superposition states, with momenta outside the light cone, are in the few hundred picoseconds to a few nanoseconds range and are again strongly dependent on the doping density. The doping density dependence of the radiative lifetimes of the two peaks in the optical emission spectra follows the doping density dependence of the spectral weights of the same two peaks observed in the optical absorption spectra as both have their origins in the Coulomb coupling between the excitons and trions in doped 2D materials. Optical absorption and emission spectra of doped two-dimensional (2D) materials in general, and of transition metal dichalcogenides (TMDs) in particular, exhibit sharp and distinct peaks that are often attributed to neutral and charged excitons (or trions) [1][2][3][4][5][6][7][8][9][10][11]. Although optical signatures of excitons and trions in doped semiconductors have been observed for a long time [8], their nature, especially of trions, in doped materials had remained somewhat of a mystery. For one, it was difficult to understand how a photon, being a boson, could get absorbed and create a trion, if a trion is taken to be a fermionic bound state of three particles. Second, it was not clear what happened to one of the charged particles left behind when a trion emitted a photon. Pauli's exclusion required the left-behind charged particle to be deposited outside the Fermi sea, but the energy and momentum conservation requirements following from Pauli's exclusion were never observed in the measured photoluminescence spectra. Third, the variation of the energy separation of the two peaks observed in the optical absorption spectra, as well as the spectral weight transfer between these two peaks with doping, did not seem to follow from the assumption of excitons and trions being independent excitations. Several recent works have contributed to resolving this mystery and clarifying the nature of excitons and trions in doped semiconductors [12][13][14][15][16]. Recently, the authors have presented a theoretical model based on two coupled Schrödinger equations to describe excitons and trions in electron-doped 2D materials [12]. One is a 2-body Schrödinger equation for a conduction band (CB) electron interacting with a valence band (VB) hole, and the other is a 4-body Schrödinger equation of two CB electrons, one VB hole, and one CB hole interacting with each other. The CB hole is created when a CB electron is scattered out of the Fermi sea by an exciton. The eigenstates of the 2-body equation were identified with excitons and the eigenstates of the 4-body equation were identified with trions. A bound trion state is therefore a 4-body bosonic state, and not a 3-body fermionic state. The two Schrödinger equations are coupled as a result of Coulomb interactions between the excitons and the trions in doped materials. The model shows that pure exciton and trion states are not good eigenstates of the Hamiltonian in the presence of doping. However, good approximate eigenstates can be constructed from superpositions of exciton and trion states. This superposition includes both bound trion states as well as unbound trion states. The latter are exciton-electron scattering states. These superposition states, first proposed by Suris [13], resemble the exciton-polaron variational states proposed by Sidler et al. [14][15][16].
The optical conductivity obtained from the model proposed by the authors explains all the prominent features experimentally seen in the optical absorption spectra of doped 2D materials, including the observation of two prominent absorption peaks and the variation of their energy splittings and spectral shapes and strengths with the doping density [12]. Furthermore, the peaks observed in the optical absorption spectra of doped 2D materials do not correspond to pure exciton or pure trion states. Each peak corresponds to a superposition of exciton and trion states. While previous papers, including the one by the authors, have addressed the problem of light absorption by excitons and trions [12,13,15,16], questions related to light emission and radiative lifetimes of excitons and trions in doped materials remain unanswered. The model developed by the authors [12], rather interestingly, also showed that the 4-body trion states have no optical matrix elements with the material ground state. The ground state of, say an electron-doped material, is defined as the state consisting of a completely full valence band (no VB holes), and a completely full Fermi sea in the conduction band (no CB holes inside and no CB electrons outside the Fermi sea). Therefore, the contribution to the material optical conductivity from the 4-body trion states results almost entirely from their Coulomb coupling to the 2-body exciton states [17]. The exciton and trion states and the related couplings are depicted in Fig.1 (FIG. 1 caption: Only the exciton states are coupled to the material ground state via optical coupling. The trion states, which include both bound and unbound trion states, are optically coupled to excited states of the material consisting of a CB electron-hole pair). However, the trion states, including both bound and unbound trion states, are optically coupled to the excited states of the material consisting of a CB electron-hole pair. In other words, a trion state can decay by emitting a photon and leaving behind a CB electron-hole pair. The radiative rate of this process is significant after one has summed over all possible CB electron-hole pairs that can result from the radiative decay of a 4-body trion state. The experimentally relevant radiative lifetimes are not those of pure exciton and trion states, but of the approximate energy eigenstates which, as discussed above, are superpositions of exciton and trion states. The goal of this paper is to clarify the processes contributing to photon emission from these energy eigenstates in 2D materials and calculate the corresponding radiative lifetimes. Our main results are as follows. The radiative lifetimes of the exciton-trion energy eigenstates, with momenta inside the light cone, are found to be in the few hundred femtoseconds to a few picoseconds range and are strongly dependent on the doping density. Within the light cone, the exciton component of these eigenstates provides the dominant contribution to the radiative rates. The radiative lifetimes of the exciton-trion superposition states, with momenta outside the light cone, are in the few hundred picoseconds to a few nanoseconds range and are again strong functions of the doping density. Outside the light cone, only the trion component of these eigenstates contributes to the radiative rates. 
The doping density dependence of the radiative lifetimes of the two peaks in the optical emission spectra follows the doping density dependence of the spectral weights of the same two peaks observed in the optical absorption spectra as both have their origins in the Coulomb coupling between the excitons and trions in doped 2D materials. THEORETICAL MODEL In this Section we set up the Hamiltonian and derive the main equations. Although the focus is on electrondoped 2D TMD materials, the arguments are kept general enough to be applicable to any 2D material. The Hamiltonian We consider a 2D TMD monolayer located in the z = 0 plane inside a uniform medium of dielectric constant . The TMD layer interacts with both TE (electric field in the z = 0 plane) and TM (magnetic field in the z = 0 plane) polarized light modes. The Hamiltonian describing electrons and holes in the TMD layer (near the K and K points in the Brillouin zone) interacting with each other and with the optical mode in the rotating wave approximation is [2,[18][19][20], Here, E c,s ( k) and E v,s ( k) are the conduction and valence band energies. s, s represent the spin/valley degrees of freedom in the 2D material, and we assume for simplicity that the electron and hole effective masses are independent of the spin/valley. U ( q) represents Coulomb interaction between electrons in the conduction and valence bands and V ( q) represents Coulomb interaction among the electrons in the conduction bands. A is the monolayer area and AL is the volume assumed for field quantization. hω( / q) is the energy of a photon with momentum / q, and g j,s ( / q) is the electron-photon coupling constant for light with photon polarization j = TE, TM (see Fig.2). Most momentum vectors in the Hamiltonian above are in 2D. Those associated with light are in 3D, carry a slash in the notation for clarity, and / q = Q + q zẑ , where Q is the momentum component in the z = 0 plane. Other than for phase factors that are not relevant to the discussion in this paper, g j,s ( / q) for electron states near the band edges in 2D TMDs can be given by [19,20], where, v is the interband velocity matrix element [2,[18][19][20]. Exciton States, Trion States, and Energy Eigenstates As shown by Rana et al. [12], approximate eigenstates of the Hamiltonian in (1) can be written as a superposi-tion of exciton and trion states, Here, |GS is the ground state of the electron doped material. The normalization factors are, The above energy eigenstate has (in-plane) momentum Q. φ ex n, Q ( k +λ h Q) and φ tr m, Q ( k 1 , s 1 ; k 2 , s 2 ; p, s 2 ) are eigenstates of the 2-body exciton and 4-body trion eigenequations, respectively [12]. The corresponding eigenenergies are, E ex n ( Q, s) and E tr m ( Q, s 1 , s 2 ), respectively. is the electron (hole) effective mass. m tr = 2m e + m h , ξ = m e /m tr , and η = m h /m tr . The underlined vector k stands for k + ξ( Q + p). The summation over the index m implies summation over all bound and unbound trion states. Expressions for the coefficients α n and β m,s are given later in this paper. The states given above are good approximations to the actual eigenstates of the Hamiltonian in (1) within the purview of single electronhole pair excitations and provided one ignores multiple electron-hole pair excitations [12]. In most cases of practical interest involving 2D TMDs, only the lowest energy exciton state needs to be considered. 
However, bound trion states as well as the continuum of unbound trion states need to be included since the energy differences involved therein are small [12]. This makes the direct calculation of radiative rates using Fermi's Golden Rule awkward. The optical interaction term in the Hamiltonian couples the material ground state to only the exciton states and not to the trion states (see Fig.1) [12]. However, excited states of the material containing an electron-hole pair in the CB are optically coupled to the trion states. Given this, two different kinds of radiative transitions are possible and are depicted in Fig.3. Fig.3(a) shows photon emission resulting in a decay of the energy eigenstate into the material ground state. The transition rate is determined by |α n | 2 , the weight of the exciton component of the energy eigenstate in (3). This transition is possible only if the momentum Q of the energy eigenstate is within the light cone. Fig.3(b) shows photon emission resulting in a decay of the energy eigenstate into an excited state of the material that has a CB electron-hole pair. The CB electron-hole pair is left behind after photon emission from the trion components of the energy eigenstate. Unlike the process in Fig.3(a), the process in Fig.3(b) is possible even if the momentum Q of the energy eigenstate is outside the light cone. If the emitted photon has an in-plane momentum Q within the light cone, the difference Q − Q is taken by the electron-hole pair left behind in the CB. The radiative rate for this process is determined by the magnitude of the coefficients β m,s of the trion states in the expression for the energy eigenstate given in (3). In the Sections that follow, we will calculate separately the radiative rates for the two processes in Fig.3. RATE FOR RADIATIVE DECAY INTO THE MATERIAL GROUND STATE We first calculate the rate for the radiative decay of the energy eigenstate into the material ground state. This rate is expected to be proportional to the weight of the exciton component of the energy eigenstate, and the weight of the exciton component is conveniently given by the spectral density function which is proportional to the imaginary part of the exciton Green's function. Thus, we seek an expression for the radiative rate in terms of the exciton Green's function. Heisenberg Equations We start from the Heisenberg equation for the photon operator, The Heisenberg equation for the polarization operator is [12], Here, f c,s ( k) is the electron occupation probability in the conduction band (valence band is assumed to be completely full), γ ex is a phenomenological decoherence rate for the polarization that includes dephasing due to all processes other than exciton-electron scattering. F Q ( k, s; t) is a zero-mean delta-correlated quantum Langevin noise source that is introduced by the same processes that contribute to the decoherence γ ex [21]. The energies E c,s ( k) include renormalizations due to exchange at the Hartree-Fock level Taking the mean value of the operators in (6), ignoring the first term and the last two terms on the right hand side (RHS), and Fourier transforming the remaining terms results in a 2-body Schrödinger equation for the excitons [12,21,22]. The last two terms in (6) on the RHS contain four-body operators T c Q . We define the operator T Q ( k 1 , s 1 ; k 2 , s 2 ; p, s 2 ; t) as follows, As before, the underlined vector k stands for k +ξ( Q+ p). 
The average of the operator T Q describes correlations arising from Coulomb interactions among four particles: two CB electrons, a VB hole, and a CB hole. Q is the total momentum of this 4-body state. We also define the connected operator T c Q as follows [12], The Heisenberg equation for the operator T c Q ( k 1 , s 1 ; k 2 , s 2 ; p, s 2 ) is found to be [12], In deriving the above equation, all 6-body operator products were reduced to 4-body operator products using the random phase approximation [21,22]. By ignoring higher order correlations we are ignoring the generation of multiple particle-hole pairs in the CB. γ tr is a phenomenological decoherence rate and D Q is the corresponding zero-mean delta-correlated Langevin noise source. If r e1 , r e2 , r h1 , are r h2 the coordinates of the two electrons, the VB hole, and the CB hole, respectively, then k 1 , k 2 , Q, and p are the momenta associated with the coordinates r e1 − r h1 , r e2 − r h1 , R = ξ( r e1 + r e2 )+η r h1 , and R− r h2 , respectively. Here, R is the center of mass coordinate of the two electrons and the VB hole. Taking the mean value of the operators in (9), ignoring the last two terms on the RHS in (9) that involve P Q , and Fourier transforming the remaining terms will result in a 4-body Schrödinger equation for the trions [12]. Each term on the RHS in the above equation (except the first and the last two) describes Coulomb interaction between two of the four particles. The last two terms involving P Q describe the generation of four-body correlation from two-body correlations, or the creation of an CB electron-hole pair by an exciton. Solution of Heisenberg Equations The polarization operator P Q ( k, s; t) can be decomposed using the complete set of exciton eigenfunctions [12] φ ex n, Q ( k + λ h Q) as follows, We assume that at time t, P n, Q (s; t) has a non-zero mean value for some particular values of n and s. P n, Q (s; t) can be non-zero if the quantum state is a superposition of the material ground state |GS and one of the eigenstates described in Section . Following Milonni [23], the strategy going forward will then be as follows. The Heisenberg equations will be solved to find how the mean value P n, Q (s; t) decays with time due to radiative transitions, and the lifetime associated with this decay would give the radiative rate. Since we are exclusively interested in radiative transitions in this paper, several approximations will be made in order to keep the focus on the relevant physics and irrelevant terms will be ignored to keep the analysis simple. (5) can be be solved by direct integration to give, Next, we find the time dependence of the operator P n, Q (s; t). Using (10) in (6), ignoring the Langevin noise sources on the RHS in (6) and (9) (because these noise sources will not have any effect on the end results sought in this paper), and using the techniques discussed in a previous paper by the authors [12] for solving the coupled system of equations in (6) and (9), the operator P n, Q (s; t) is found to be, Here, Σ ex n,s ( Q, ω) is the self-energy of the excitons arising from their Coulomb coupling to the trions [12], The summation over m above implies a summation over all bound and unbound trion states consistent with the values of s and s . The expression for the Coulomb matrix elements M m,n ( Q, s, s ) coupling the exciton and trion states can be found in a previous paper by Rana et al. [12]. 
The exciton self-energy thus includes contribution of trion states to the polarization via exciton-trion Coulomb coupling. (12) gives the natural frequencies associated with the material polarization response, given by the poles of the expression in the denominator, and these frequencies also correspond to the energy eigenstates of the Hamiltonian [12]. It follows that on fast time scales (of the order of the inverse of the relevant optical frequencies), P n, Q (s; t) can be written as, The above approximation, when used together with (10) in (11), results in an expression for the photon operator in the standard Markoff approximation [23], Radiative Rate Use of (15) in the first term on the RHS of (6) introduces an additional source of damping in the material polarization which is due to radiative transitions. To show this more clearly, we substitute (15) in (6), then use the decomposition in (10) and project out the equation for P n, Q (s; t), take the mean value, and retain only those terms that are relevant to see this radiative damping to get, where the spontaneous emission rate R n,s ( Q) is, Here, c = 1/ √ µ o is the speed of light in the medium surrounding the 2D monolayer. The above result for the spontaneous emission is conveniently expressed in terms of the relevant exciton/trion optical conductivity of the 2D TMD monolayer. (17) is the main result of this paper. The optical conductivity of a 2D TMD monolayer, for inplane light polarization, can be written in terms of the exciton Green's function [12], Here, G ex n,s ( Q, ω) is the exciton Green's function [12], The energies of the eigenstates in (3) are given by the poles of the exciton Green's function. We label these energies as E lo n,s ( Q) and E hi n,s ( Q). Earlier, in Section , we had remarked that the radiative rate for the energy eigenstate to decay into the ground state is proportional to the weight of its exciton component given by α n in (3). Assuming, γ tr = γ ex = 0 for simplicity, |α n | 2 for an energy eigenstate equals the residue of the exciton Green's function at the energy of the eigenstate, Before exploring the above results further, it is instructive look at the optical conductivity of 2D materials. The exciton/trion optical conductivity of electron-doped 2D MoSe 2 was calculated by the authors in a recent paper and the results are reproduced in Fig.4 [12]. The spectra shows two prominent absorption peaks which correspond to the poles, E lo n,s ( Q) and E hi n,s ( Q), of the exciton Green's function in (19). The spectral weight shifts from the higher energy peak to the lower energy peak as the electron density increases. The energy separation between the two peaks also increases nearly linearly with the electron density [12]. In the literature, the lower energy absorption peak is often identified with the trions (or charged excitons) and the higher energy peak with the excitons. This identification is true only in the limit of very small electron densities. At electron densities large enough such that the lower energy peak has sufficient spectral weight to be experimentally visible in the absorption spectrum, each peak corresponds to an energy eigenstate that is a superposition of exciton and trion states, as shown in (3). Furthermore, at large electron densities, the higher energy peak is broadened due to exciton-electron scattering and acquires a wide pedestal (more visible on its higher energy side) that corresponds to the continuum of unbound trion states (or excitonelectron scattering states). 
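Because the displayed expressions for the self-energy, the optical conductivity, and the exciton Green's function were lost in extraction, the following is a minimal LaTeX sketch, in our own notation, of the generic structure described in this Section: an exciton propagator dressed by a trion self-energy, whose two poles give the eigenstate energies E lo and E hi and whose residues give the exciton weights. The exact forms are those of Ref. [12], and the expressions below are only an assumed schematic consistent with the text:
% Assumed generic structure (not the paper's exact equations (12)-(19)):
\begin{equation}
G^{\mathrm{ex}}_{n,s}(\vec{Q},\omega) \;\approx\;
\Big[\hbar\omega - E^{\mathrm{ex}}_{n}(\vec{Q},s) - \Sigma^{\mathrm{ex}}_{n,s}(\vec{Q},\omega) + i\gamma_{\mathrm{ex}}\Big]^{-1},
\qquad
\Sigma^{\mathrm{ex}}_{n,s}(\vec{Q},\omega) \;\approx\;
\sum_{m,s'} \frac{|M_{m,n}(\vec{Q},s,s')|^{2}}{\hbar\omega - E^{\mathrm{tr}}_{m}(\vec{Q},s,s') + i\gamma_{\mathrm{tr}}} .
\end{equation}
With gamma_ex = gamma_tr = 0, the residue of this Green's function at each of its two poles plays the role of the exciton weight |alpha_n|^2 quoted in the text.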
In Fig.4, linewidth broadening due to factors other than exciton-electron scattering, such as phonon scattering, was included by assuming that γ ex = γ tr = 4 meV. The rates, R lo n,s ( Q) = 1/τ lo n,s ( Q) and R hi n,s ( Q) = 1/τ hi n,s ( Q), corresponding to the lower and higher energy peaks in the absorption spectra, respectively, can each be obtained by restricting the frequency integral in (17) to the respective peak. Interestingly, because the integral of the optical conductivity in (18) satisfies the sum rule [12], one can expect from (17) the radiative rate for the lower energy absorption peak to increase with the electron density and the radiative rate for the higher energy absorption peak to decrease with the electron density such that the sum rule above is always satisfied. In addition, since the areas under the two peaks in Fig.4 become nearly the same at large electron densities (∼2×10 13 cm −2 ) (despite the fact that the peak optical conductivity of the lower energy peak is higher), one can expect the two lifetimes to become comparable at large electron densities. (FIG. 4 caption: Exciton/trion optical conductivity spectra of an electron-doped monolayer 2D MoSe2 for different electron densities. Only the lowest energy exciton state is considered in the calculations. The spectra are all normalized to the peak optical conductivity value at zero electron density. T = 5 K. The frequency axis is offset by the exciton eigenenergy E ex 0 ( Q = 0, s) of the two-body Schrödinger equation. Two prominent peaks are seen in the spectra. Each peak corresponds to an energy eigenstate that is a superposition of exciton and trion states, as shown in (3). Figure is reproduced from the paper by Rana et al. [12].) Numerical simulation results, presented in the next Section, confirm these findings. Numerical Simulations and Results For simulations, we consider an electron-doped monolayer of 2D MoSe 2 suspended in air. In monolayer MoSe 2 , spin-splitting of the conduction bands is large (∼35 meV [24]) and the lowest conduction band in each of the K and K valleys is optically coupled to the topmost valence band [25]. We use effective mass values of 0.7m o for both m e and m h which agree with the recently measured value of 0.35m o for the exciton reduced mass [26]. We use a wavevector-dependent dielectric constant ( q), appropriate for 2D materials [2], to screen the Coulomb potentials. We assume that γ ex = γ tr ∼ 4 meV [27]. We compute exciton and trion eigenfunctions and eigenenergies for different momenta and electron densities as described by Rana et al. [12]. (Figure caption: The radiative lifetimes, τ lo n=0,s ( Q) and τ hi n=0,s ( Q), of the lower and higher energy eigenstates, respectively, of the coupled exciton-trion system (and corresponding to the lower and higher energy peaks in the optical absorption spectra in Fig.4) are plotted as a function of the in-plane momentum Q for different electron densities (10 12 cm −2 and 6 × 10 12 cm −2 ) for an electron-doped monolayer 2D MoSe2 suspended in air. T = 5 K.) At large electron densities the two lifetimes become comparable. At small electron densities, when the entire spectral weight lies with the higher energy absorption peak in Fig.4, and the corresponding eigenstate is essentially a pure exciton state, the calculated lifetimes for the higher energy eigenstate agree well with the lifetimes published previously for excitons in 2D materials [20,28]. But at larger electron densities (>10 12 cm −2 ), the results in previous work, which treated excitons and trions as independent excitations, become incorrect. Fig.6 shows the radiative lifetimes, τ lo n=0,s ( Q) and τ hi n=0,s ( Q), plotted as a function of the in-plane momentum Q (within the light cone) for different electron densities. 
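Before turning to the light-cone behavior discussed next, a quick numerical check of the mass parameters quoted in this Section can be run as a short sketch; it uses only the numbers stated above and the standard reduced-mass and trion-mass definitions introduced earlier in the text.
# Consistency check of the MoSe2 effective-mass parameters used in the simulations.
m_e = 0.7   # CB electron effective mass, in units of the free electron mass m_o
m_h = 0.7   # VB hole effective mass, in units of m_o

mu_exciton = m_e * m_h / (m_e + m_h)   # exciton reduced mass
m_tr = 2 * m_e + m_h                   # trion mass scale, m_tr = 2 m_e + m_h
xi, eta = m_e / m_tr, m_h / m_tr       # momentum-partition factors defined in the text

print(mu_exciton)      # 0.35 m_o, matching the measured exciton reduced mass [26]
print(m_tr, xi, eta)   # 2.1 m_o, 0.333..., 0.333...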
The light cone momentum is defined as the momentum Q for which the energy of the eigenstate, E lo n,s ( Q) or E hi n,s ( Q), equals the photon energy ħQc. The radiative lifetimes are more or less constant for momenta within the light cone, decrease rapidly as the momentum approaches the light cone (due to an increase in the density of photon states), and then diverge for momenta outside the light cone (where the excitonic component of the energy eigenstates cannot emit a photon and decay into the material ground state). This behavior is well known for pure exciton states in 2D materials [20,21,28], and it carries over to the coupled exciton-trion energy eigenstates in doped 2D materials. RATE FOR RADIATIVE DECAY INTO THE MATERIAL EXCITED STATES The radiative rates calculated above correspond to the process depicted in Fig.3(a) in which the energy eigenstate decays into the material ground state. In this Section, we calculate the radiative rate for the process in Fig.3(b) in which the energy eigenstate decays into an excited state of the material that has an electron-hole pair in the CB. The final state after photon emission consists of a photon with momentum q = ẑq z + Q , a CB hole with momentum p and a CB electron with momentum p + Q − Q . The radiative rate expression must include a summation over all these final states. Furthermore, the radiative rate for the process in Fig.3(b) is expected to be determined by the magnitude of the coefficients β m,s of the trion states in the expression for the energy eigenstate given in (3). These coefficients are found to be, The summation over m above implies a summation over all bound and unbound trion states consistent with the values of s and s . The expression for the Coulomb matrix elements M m,n ( Q, s, s ) coupling the exciton and trion states (including bound and unbound trion states) can be found in a previous paper by Rana et al. [12]. Radiative Rate In order to calculate the radiative rates for the process in Fig.3(b), we avoid truncating the 6-body operator products to 4-body operator products that appear during the derivation of (9), and then include a Heisenberg equation for 6-body operator products in our model. The calculations are tedious and not particularly illuminating. The final result for the radiative rate R n,s ( Q) can be written in a simple form, The spectral function S n,s,m,s ( Q, p, Q , ω) is, Here, ∆ stands for the energy difference where, The spectral function S n,s,m,s ( Q, p, Q , ω) has the following two important properties: • Its poles are at the energies of the exciton-trion superposition eigenstates shifted by ∆, the energy taken by the electron-hole pair left behind in the CB after photon emission. Therefore, the spectrum of S n,s,m,s ( Q, p, Q , ω) will have two prominent peaks just like the spectrum of optical absorption. Since for Q << k F , the energy shift ∆ will be negligibly small for all p < k F , the peaks in the S n,s,m,s ( Q, p, Q , ω) spectrum will be more or less at the same energies as the peaks in the absorption spectrum. • Assuming γ ex = γ tr = 0, the residue of S n,s,m,s ( Q, p, Q , ω) at these two poles is exactly equal to the values of |β m,s | 2 given in (22), which is satisfying in the light of the discussion above. (FIG. 7 caption: The radiative lifetimes, τ lo n,s ( Q) and τ hi n,s ( Q), of the lower and higher energy eigenstates, respectively, of the coupled exciton-trion system (and corresponding to the lower and higher energy peaks in the optical absorption spectra in Fig.4) are plotted as a function of the in-plane momentum Q for different electron densities (10 12 cm −2 and 6 × 10 12 cm −2 ) for an electron-doped monolayer 2D MoSe2 suspended in air. The lifetimes shown correspond to the process depicted in Fig.3(b) for radiative decay into excited states of the material. T = 5 K. The lifetimes shown are three to four orders of magnitude longer than the lifetimes shown earlier in Fig.6 for the process depicted in Fig.3(a) for radiative decay into the material ground state.) The radiative rates, R lo n,s ( Q) = 1/τ lo n,s ( Q) and R hi n,s ( Q) = 1/τ hi n,s ( Q), corresponding to the lower and higher energy peaks in the absorption spectra, respectively, and associated with the process shown in Fig.3(b), can each be obtained by restricting the frequency integral in (23) to the respective spectral peak (the integral over frequency is implicit in (23) in the q z and Q integrations). (FIG. 8 caption: The radiative lifetimes, τ lo n,s ( Q) and τ hi n,s ( Q), of the lower and higher energy eigenstates, respectively, of the coupled exciton-trion system (and corresponding to the lower and higher energy peaks in the optical absorption spectra in Fig.4) are plotted as a function of the electron densities for an electron-doped monolayer 2D MoSe2 suspended in air. T = 5 K. The momentum value is chosen to be just outside the light cone Q ∼ 10 7 1/m. The lifetimes shown correspond to the process depicted in Fig.3(b) for radiative decay into the excited states of the material.) Simulation Results Fig.7 shows the radiative lifetimes for the process in Fig.3(b) plotted as a function of the in-plane momentum Q for different electron densities; these lifetimes remain finite for momenta outside the light cone and have a weak dependence on the momentum Q. More interestingly, the radiative rates shown in Fig.7 are three to four orders of magnitude smaller compared to the radiative rates for decay into the material ground state shown in Fig.6. This large difference can be understood as follows. Consider an energy eigenstate of momentum Q, as given in (3), and consider the 4-body bound trion state component of the energy eigenstate (the bound trion state has more weight in the eigenstate than all the unbound trion states). The small radius of the bound trion state (∼ 1 − 2 nm [12]) means that the phase space occupied by each one of the two CB electrons in the bound trion state is fairly large, and is of the order of a −2 , where a is the trion radius. When one of the two CB electrons in the bound trion state radiatively recombines with the VB hole, a CB electron and a CB hole are left behind. Suppose the in-plane momentum of the emitted photon is Q , the momentum of the CB electron left behind is p + Q − Q , and the momentum of the CB hole is p. Since Q is restricted to be within the light cone (the phase space area of which is ∼ ω 2 /c 2 ), only a very small portion of the phase space of the CB electron state prior to the photon emission contributes to photon emission. This phase space fraction is of the order of ω 2 a 2 /c 2 , which is between 10 −3 and 10 −4 . Note that τ hi n=0,s ( Q) > τ lo n=0,s ( Q) in Fig.7, which is the opposite of the case in Fig.6. This is because the radiative rates in Fig.7 are proportional to |β m,s | 2 (weight of the trion component in the energy eigenstate), whereas the radiative rates in Fig.6 are proportional to |α n | 2 (weight of the exciton component in the energy eigenstate). 
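The two order-of-magnitude estimates used in this Section, the light-cone momentum of roughly 10 7 1/m and the phase-space suppression factor ω 2 a 2 /c 2 , can be checked with a short script. The emission energy of about 1.6 eV for monolayer MoSe2 is an assumed, literature-typical value and is not stated explicitly above; the surrounding medium is taken to be air, as in the simulations.
import math

hbar = 1.0545718e-34        # J*s
c = 2.9979e8                # speed of light in air/vacuum, m/s
eV = 1.602176634e-19        # J

E_photon = 1.6 * eV         # assumed emission energy for monolayer MoSe2
omega = E_photon / hbar     # optical angular frequency, rad/s

# Light-cone momentum: eigenstate energy equal to the photon energy hbar*c*Q
Q_lc = omega / c
print(Q_lc)                 # ~8e6 1/m, consistent with the "Q ~ 10^7 1/m" quoted above

# Phase-space suppression factor ~ (omega*a/c)^2 for a trion radius a of 1-2 nm
for a in (1e-9, 2e-9):
    print(a, (omega * a / c) ** 2)   # ~7e-5 to ~3e-4, the order of magnitude quoted above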
Fig. 8 shows the radiative lifetimes, τ lo n=0,s ( Q) and τ hi n=0,s ( Q), for a momentum Q value just outside the light cone, plotted for different electron densities. At very small electron densities the radiative lifetime τ hi n=0,s ( Q) of the higher energy eigenstate is much longer than the lifetime τ lo n=0,s ( Q) of the lower energy eigenstate, and at very large electron densities these two lifetimes become comparable. The fact that τ lo n=0,s ( Q) << τ hi n=0,s ( Q) at very small electron densities can be understood as follows. At very small electron densities, |α n=0 | 2 ∼ 1 and |β m=0,s | 2 << 1, and the higher and lower energy eigenstates are thus nearly pure exciton and pure trion states, respectively, and exciton states do not radiatively decay into the excited states of the material. DISCUSSION AND CONCLUSION The results presented in this paper show that photons can be emitted by exciton-trion energy eigenstates when their momenta Q are inside or outside the light cone. Inside the light cone, radiative rates for transitioning into the material ground state are nearly four orders of magnitude faster than the radiative rates in which the final state is an excited state of the material. Outside the light cone, only radiative decay into an excited state of the material is possible. Our results are expected to clarify many concepts associated with light emission from excitons and trions in 2D materials. Certain other concepts and processes for radiative transitions have been proposed in the literature in the context of excitons and trions in doped 2D materials that are incorrect in the opinion of the authors. We discuss them briefly here. (FIG. 9 caption: Certain processes that have been proposed in the literature for photon emission involving excitons and trions in electron-doped materials are depicted. (a) Photon emission process involving a 3-body trion state in which the CB electron recombines with the VB hole, leaving behind another CB electron which is deposited outside the Fermi sea [1,20,29]. (b) Photon emission process involving an exciton in which an uncorrelated CB electron from the Fermi sea recombines with the VB hole, leaving behind an electron-hole pair in the CB [30]. (c) Photon emission process involving a trion in which an uncorrelated CB electron from the Fermi sea recombines with the VB hole, leaving behind two electron-hole pairs in the CB [30].) Fig.9(a) shows a photon emission process involving a 3-body trion state in which the CB electron recombines with the VB hole leaving behind a CB electron which is deposited outside the Fermi sea [1,20,29]. This model showed that the energy of the photon emitted by a trion state would be red-shifted (with respect to the photon emitted by an exciton in the same material) by roughly the Fermi energy E F (in addition to the trion binding energy) which is consumed in promoting the left-behind CB electron to the unoccupied states above the Fermi level. The red shift of the photon energy with the Fermi energy is in agreement with experiments [1,29]. However, there are several problems with this photon emission model and with the concept of a 3-body trion state itself [12]. Recent papers have unambiguously shown that the red-shifting of the lower energy eigenstate, linearly with the Fermi energy, with respect to the higher energy eigenstate is the result of Coulomb interactions [12][13][14][15][16]. Second, this model incorrectly assumes that the electrons forming the trion state are somehow not a part of the CB electronic states (as Fig.9(a) depicts) and then concludes that the electron left behind after photon emission needs to be deposited back into the CB with enough energy to avoid Pauli blocking. 
The correct model, depicted in Fig.3(b), shows that when a 4-body trion state emits a photon, the CB electron and the CB hole left-behind (and that were a part of the 4body trion state) remain in the states they occupied just before the emission of the photon. Fig.9(b) shows a photon emission process involving an exciton in which an uncorrelated CB electron from the Fermi sea recombines with the VB hole, leaving behind an electron-hole pair [30]. A simple calculation using an exciton state as the initial state and a final state consisting of a Fermi sea with an electron-hole pair in the CB, and using Fermi's Golden Rule, will show that the rate of this process, although very small, is roughly proportional to the electron density which in turn is proportional to the probability of finding an uncorrelated electron near the exciton. The catch here is that the probability of finding an electron of the same spin/valley near the exciton as that of the electron forming the exciton is not proportional to the electron density but is in fact near zero due to Pauli's principle. Each electron in the conduction band, including the one forming an exciton, is surrounded by its exchange hole and the size of this exchange hole is much larger than the size of the exciton in 2D materials for electron densities smaller than ∼ 10 13 cm −3 . In our model, when we switched from the 4-body operator T Q to the connected 4-body operator T c Q in (6), we removed terms that contributed to the process shown in Fig.9(b), and one of the difference terms, given in (8), gave the exchange energy contribution, which renormalized the CB energy E c,s ( k) on the LHS in (6). The similar process for trions, shown in Fig.9(c) [30], would have a negligibly small rate for the same reason. Finally, it needs to be mentioned here that the radiative lifetimes measured in experiments depend on the type of measurement performed and therefore some care is needed in comparing experiments with theory. Radiative lifetime measurements are usually performed over exciton/trion ensembles and these ensembles can be prepared in experiments in various ways. Ultrafast resonant optical generation of excitons within the light cone and their subsequent probing via 1s → 2s excitonic transitions using a mid-IR probe pulse have yielded exciton lifetimes in 2D TMDs that match well with theory [31]. Time resolved photoluminescence (PL) measurements on the other hand would rely on the exciton-trion energy eigenstates to relax down to the light cone before they can recombine radiatively with high efficiency [32]. This relaxation process is generally bottlenecked by phonon scattering times which are usually much slower (around a few picoseconds) than the radiative lifetimes inside the light cone [33][34][35][36]. In addition, as discussed in this paper, PL collected from both peaks in the emission/absorption spectra of doped 2D materials are from states that are superpositions of exciton and trion states and contribute to PL from both inside and outside the light cone. Although the radiative rates outside the light cone are much smaller than the rates inside the light cone, the phase space available outside the light cone for hosting a nonequilibrium exciton-trion population is also much larger and a lot more exciton-trions could be present outside the light cone than inside it depending on the nature of the experiment. 
An accurate modeling of radiative emission from non-equilibrium ensembles requires computational approaches well beyond the scope of this work [36].
2020-09-11T01:00:30.250Z
2020-09-09T00:00:00.000
{ "year": 2020, "sha1": "0fb2caf2e0c7b822c40695e10914ab55de44f6e1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2009.04603", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0fb2caf2e0c7b822c40695e10914ab55de44f6e1", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
257823295
pes2o/s2orc
v3-fos-license
Goss’s Wilt Resistance in Corn Is Mediated via Salicylic Acid and Programmed Cell Death but Not Jasmonic Acid Pathways A highly aggressive strain (CMN14-5-1) of Clavibacter nebraskensis bacteria, which causes Goss’s wilt in corn, induced severe symptoms in a susceptible corn line (CO447), resulting in water-soaked lesions followed by necrosis within a few days. A tolerant line (CO450) inoculated with the same strain exhibited only mild symptoms such as chlorosis, freckling, and necrosis that did not progress after the first six days following infection. Both lesion length and disease severity were measured using the area under the disease progression curve (AUDPC), and significant differences were found between treatments. We analyzed the expression of key genes related to plant defense in both corn lines challenged with the CMN14-5-1 strain. Allene oxide synthase (ZmAOS), a gene responsible for the production of jasmonic acid (JA), was induced in the CO447 line in response to CMN14-5-1. Following inoculation with CMN14-5-1, the CO450 line demonstrated a higher expression of salicylic acid (SA)-related genes, ZmPAL and ZmPR-1, compared to the CO447 line. In the CO450 line, four genes related to programmed cell death (PCD) were upregulated: respiratory burst oxidase homolog protein D (ZmrbohD), polyphenol oxidase (ZmPPO1), ras-related protein 7 (ZmRab7), and peptidyl-prolyl cis-trans isomerase (ZmPPI). The differential gene expression in response to CMN14-5-1 between the two corn lines provided an indication that SA and PCD are involved in the regulation of corn defense responses against Goss’s wilt disease, whereas JA may be contributing to disease susceptibility. Introduction Routinely in the top three most-produced crops globally, corn is an increasingly important cash crop in Canada, with the country contributing 1.2% of the globe's overall production [1]. With the increased demand for corn and corn products, there is an increased demand for more corn acres. This leads to tighter corn rotations and an increase in the incidence of corn diseases such as Goss's wilt, which is triggered by the bacterial pathogen Clavibacter nebraskensis (Cn) [2]. This highly damaging Gram-positive bacterium grows as orange-colored colonies in agar culture media [3,4] and is part of a super-group of bacteria within the genus Clavibacter that infect a variety of other crops [3,[5][6][7]. Goss's wilt has reached every state in the U.S. corn belt and every Canadian province with a significant corn production, and it has been on the rise since the adoption of glyphosate as the main method of weed control instead of conventional tillage [2,[8][9][10][11]. Depending on where Cn enters the plant, it can cause either corn wilt or blight [8,9,12,13]. The bacterium primarily resides in the harvest residues of previous corn crops and enters new plants via wounds and roots [8,14]. The most common symptoms are dark green water-soaked lesions that dry out and form scorched-looking lesions over time with disease progression and wind currents [8,[14][15][16]. In severe cases, yield reductions as high as 50% have been reported. Since there is no known effective chemical control for Cn, producers rely on tillage, clean farm equipment, and primarily good corn genetics [9,14,[17][18][19][20][21]. Controlling alternative hosts such as Seteria viridis (green foxtail) may also reduce the Cn transmission to and infection of corn plants [17,22]. 
Given that the manipulation of Gram-positive bacteria is difficult, there is a lack of sufficient information about the functional genetic makeup of Cn [23]. Cn can colonize corn tissues using a type II secretion system to transfer virulence factors such as proteases, cellulases, chitanases, and β-1,4-xylanases [4,24,25]. Additionally, few details are available about the mechanisms that Cn employs to invade corn tissues or how to counteract the disease progression of Goss's wilt. The hypersensitive response (HR) is a prompt cell reaction to foreign attacking organisms and is achieved by activating molecular systems that end disease progression [26,27]. Programmed cell death (PCD) refers to the process of rapid cell death that occurs at the infection site and is considered a component of HR. In theory, PCD should prevent the progression of disease since there is no more live tissue around the pathogen to infect. Nitric oxide and reactive oxygen species (ROS) are key elements of PCD, as they can kill the infected plant cells [27,28]. The plant NADPH/respiratory burst oxidase D (RbohD) gene is a key player in the generation of ROS, such as hydrogen peroxide (H 2 O 2 ) and superoxide, that kill the cells at the infection site. RbohD was upregulated in corn at the site of infection by Cn, suggesting a potential role in corn defense against this bacterium [28]. Different approaches, including an expression quantitative trait locus (QTL) analysis and the transcription profiling of resistant and susceptible corn genotypes, revealed complex molecular plant-pathogen interactions, including shifts not only in the expression of genes linked to defense responses mediated by salicylic acid (SA) and jasmonic acid (JA) but also in the oxidative status of infected tissues [29,30]. For SA, we tested two genes, ZmPAL and ZmPR-1, which are known to play important roles in the SA defense pathway against biotrophic pathogens. For PCD, we tested NADPH oxidase gene RbohD, a crucial mediator in ROS production, as well as polyphenol oxidase (PPO1), ZmRab7, and ZmPPI. For JA, we assayed the transcript levels of six selected genes: ZmAOS, ZmAOC1, ZmLox9, ZmJaz12, ZmMYC7, and ZMERF147. Oxylipins, including JA, are lipid-derived signaling molecules that participate in a wide variety of developmental processes and play roles in mediating defense responses to biotic and abiotic stress in plants [31]. The oxylipin biosynthesis begins with the oxidation of polyunsaturated fatty acids to form fatty acid hydroperoxides via enzymatic peroxidation catalyzed by lipoxygenases (LOXs) [32]. In maize, six genes are predicted to encode 13-lipoxygenases (LOX7, LOX8, LOX9, LOX10, LOX11, and LOX13), and seven genes encode 9-lipoxygenases (LOX1, LOX2, LOX3, LOX4, LOX5, LOX6, and LOX12), which convert 18:3 α-linolenic acid and 18:2 α-linoleic acid to 10-oxo-11-phytodienoic acid (10-OPDA) and 10-oxo-11-phytoenoic acid (10-OPEA), respectively [33]. Upon oxygenation by a 13-lipoxygenase, an allene oxide is formed by allene oxide synthase (AOS) and is subsequently cyclized by an allene oxide cyclase (AOC) to OPDA [34]. The expression of key biosynthetic marker genes of the JA signaling pathway, LOX2, LOX3, AOC1 (allene oxide cyclase), and AOS, in ZmGLP1 (a Germin-like Protein from Maize)-overexpressing Arabidopsis was strongly induced after PstDC3000 and Sclerotinia sclerotiorum infection [35]. 
ZmLOX9 was selected based on previous studies that showed that LOX9, belonging to the 13-lipoxygenases family, plays an important role in defense against Bipolaris maydis, which is responsible for causing southern leaf blight in corn and is also involved in JA biosynthetic pathways [36,37]. Jaz (Jasmonate ZIM-domain) family proteins serve as transcriptional repressors of the JA signaling pathway, preventing the plant from being overwhelmed by the overactivation of the pathway, which causes unintended plant damage [38][39][40][41][42]. ZmMYC7 is a putative MYC2 ortholog that plays a crucial role in protecting maize against Fusarium graminearum via the JA signaling pathway. ZmMYC7 was found to bind to G-box cis-elements in the ZmERF147 promoter in vitro and activate its transcription. However, this activation was impeded by two other proteins, ZmJAZ11 and ZmJAZ12 [43]. Significant efforts to decipher Cn-corn interactions have been made in order to develop better control strategies and prevent outbreaks. In this article, we aim to determine the role of SA and PCD defense-related genes in corn-Cn interactions. Our results indicate potential roles for the SA pathway and PCD in corn defense against the bacterial pathogen Cn, whereas the JA pathway showed little involvement in the successful reduction of Goss's wilt disease in corn. Pathogenicity To understand how corn plants respond to Goss's wilt at the molecular level, corn lines that are tolerant (CO450) and susceptible (CO447) to Goss's wilt were inoculated with CMN14-5-1. Relative to the uninoculated corn plants and Cn-inoculated CO450 corn lines, lesions in the CO447 lines spread rapidly parallel to the leaf veins ( Figure 1A). Figure 1B shows the total AUDPC in CO450 and CO447 plants inoculated with CMN14-5-1. No AUDPC was calculated for the control wounded and unwounded plants because the lesions from the initial wounded areas did not progress. The inoculated CO450 and CO447 plants showed a strong significant difference. The CO450 corn line had a significantly lower total AUDPC than CO447 at the end of the experiment, whereas no difference was observed between the wounded controls in both tested lines. The disease severity increased over time in the corn lines both tolerant and susceptible to CMN14-5-1. Figure 1C shows a clear difference in both inoculated corn lines in which the disease severity levels were strongly induced in the susceptible CO447 in comparison with the tolerant CO450. The AUDPC for lesion lengths was calculated in control and inoculated CO447 and CO450 corn lines. Lesion size was measured on CO450 and CO447 leaves from day 2 to day 10 after inoculation with CMN14-5-1. Asterisks denote significant differences, * p < 0.05. (C) Progression of disease severity in CO447 and CO450 lines from 2 to 10 dpi was calculated using a disease severity scale of 0-5. The values represent the AUDPC values mean of 6 biological replicates and standard error was represented over time. Plant Defense against Goss's Wilt Is Not Enhanced via the Jasmonic Acid Pathway The enhanced disease resistance of the CO450 (tolerant) corn plants against the aggressive strain, CMN14-5-1, led us to further research to assay the transcript levels of genes related to plant defense using a reverse transcriptase qPCR. Treatment with CMN14-5-1 caused a 13.7-fold increase in ZmAOS expression in CO447 and a minor increase (2.9-fold) in CO450 at 2 dpi compared with 0 dpi (Figure 2A). 
Additionally, the ZmAOS expression in CO447 (susceptible) increased by 10.8-fold in comparison with CO450 at 2 dpi. On the other hand, treatment with CMN14-5-1 did not cause any significant changes in ZmAOC1 ( Figure 2B), and ZmLOX9 ( Figure 2C) in the CO450 lines. In addition, the transcript levels of ZmAOC1 ( Figure 2B) were repressed in CO447 lines at 2 dpi. In contrast, ZmLOX9 ( Figure 2C) showed an induction of 0.9-fold (2.5 times) in CO447 at 2 dpi compared to 0 dpi. Treatment with CMN14-5-1 led to a 1.6-fold (7 times) increase in the transcript abundance of ZmJaz12 in CO450 at 2 dpi compared with 0 dpi but did not lead to an increase in this gene's transcripts in CO447 ( Figure 2D). Furthermore, ZmMYC7 ( Figure 2E), and ZMERF147 ( Figure 2F) were significantly downregulated in CO447 and CO450 lines at 2 dpi compared to 0 dpi. SA and PCD Regulate Goss's Wilt Disease Resistance The plant hormone SA is a key player in plant defense. To further study the correlation between the disease resistance and genes associated with the defense mechanisms to CMN14-5-1, we measured the expression levels of Phenylalanine ammonia-lyase (PAL), which is a key gene for pathogen-induced SA accumulation in plants, and the SA-responsive marker genes pathogenesis-related proteins-1 (PR-1) in both tested corn lines. According to our data, the expression levels of ZmPAL and ZmPR-1 were strongly induced at 2 dpi in the tolerant CO450 line, with a 1.6-fold (2.6 times) and 7.8-fold (2.1 times) enhancement, respectively, when compared to the susceptible CO447 line ( Figure 3A,B). Inoculation with Cn did not induce high enough levels of ZmPAL and caused an increase of ZmPR-1 (6.1-fold; 7.3 times) in CO447 at 2 dpi in comparison with 0 dpi, whereas pathogen infection in the CO450 tolerant line induced significantly higher levels of ZmPAL (2-fold; 4.4 times) and ZmPR-1 (13.7-fold; 11.4 times) in the CO450 tolerant line at 2 dpi in comparison with 0 dpi ( Figure 3A,B). Figure 3C shows the higher gene induction of respiratory burst oxidase D (ZmRbohD) within the tolerant CO450 line compared to the susceptible CO447 line at 0 and 2 dpi, as the differences between the two lines were 37 and 14 times, respectively. Although CMN14-5-1 inoculation induced significantly higher levels of polyphenol oxidase (ZmPPO1) at 2 dpi in both the susceptible CO447 (2.2 times) and tolerant CO450 (2.6 times) corn lines, there were significant variations in ZmPPO1 expression levels between the two tested corn lines at 0 and 2 dpi in which the tolerant CO450 line showed a change of 3.4 and 4 times over the susceptible CO447 line, respectively ( Figure 3D). We also assayed the ZmRab7 gene in the corn lines both tolerant and susceptible to CMN14-5-1. At 0 and 2 dpi, the transcript abundance of the ZmRab7 gene in the tolerant CO450 line was roughly two times higher than that in the susceptible CO447 line ( Figure 3E). Figure 3F displays the response of the peptidyl-prolyl cis-trans isomerase (ZmPPI) gene in the tested corn plants inoculated with CMN14-5-1. At 0 and 2 dpi, the tolerant CO450 line displayed levels of ZmPPI gene expression that were 4.3 and 11.1 times higher than the susceptible line of corn ( Figure 4A). Exogenous Application of SA and H 2 O 2 Confers Partial Disease Resistance against CMN14-5-1 As a next step, we tested whether the application of synthetic SA or H 2 O 2 could induce disease resistance in the susceptible cultivar CO447 by spraying the corn plants 48 h before inoculation. 
Interestingly, both SA and H 2 O 2 were able to confer partial resistance in CO447 against the aggressive strain CMN14-5-1 compared with the untreated inoculated plants (Figure 4A-C). The AUDPC values for both lesion length and disease severity in SA- and H 2 O 2 -treated CO447 corn plants in response to CMN14-5-1 were significantly lower than those in untreated CO447 corn plants. The AUDPC values for lesion length were 1.66 and 1.5 times lower for SA- and H 2 O 2 -treated CO447 plants, respectively, than for untreated plants (Figure 4E). Furthermore, the AUDPC values for leaf disease severity were 1.54 and 1.62 times lower for SA- and H 2 O 2 -treated CO447 plants, respectively, in comparison to the untreated CO447 plants (Figure 4F). Discussion This study illustrates that corn resistance against Goss's wilt disease is promoted through SA and programmed cell death. Significant differences were shown between the two tested corn lines, CO447 and CO450, in response to the highly aggressive Cn bacterial strain CMN14-5-1. The disease severity was higher in the susceptible CO447 line in comparison with the tolerant CO450 line, and this was represented by the highly significant difference in the length of lesions. To better understand the involvement of corn defense genes against Cn, we conducted gene expression experiments using qRT-PCR analysis to compare the relative expression of uninoculated and inoculated corn lines. Given that most plant defenses are associated with either JA or SA signaling pathways, we tested well-studied marker genes for each pathway. To test if the defense responses induced in the CO450 line were associated with the JA signaling pathway, we assayed six genes that are known to be part of JA biosynthesis and signaling defense pathways. JA has been shown to respond to both biotic and abiotic stresses [44]. Thus, we analyzed the expression levels of ZmAOS, ZmAOC1, ZmLOX9, ZmJaz12, ZmMYC7, and ZMERF147 (Figure 2A-F). The susceptible CO447 line had a higher-fold increase in ZmAOS transcript abundance than the resistant CO450 line. Additionally, inoculation with Cn caused a prominent induction in the expression of ZmJaz12, the JA transcriptional repressor gene, in CO450 compared with CO447. There was also no significant induction of ZmAOC1 and ZmLOX9 in the tolerant CO450 line. In addition, a significant repression in the transcript levels of ZmMYC7 and ZMERF147 was found in both tested corn lines. Together, these results suggest that disease resistance against CMN14-5-1 was independent of the JA defense signaling pathway. The other important signaling molecule is SA, which has key regulatory functions in plant defense and plant development [45][46][47]. We tested two genes known to play important roles in the SA defense pathway against biotrophic pathogens, ZmPAL and ZmPR-1. CMN14-5-1 induced the expression levels of ZmPAL and ZmPR-1 at a higher rate in CO450 compared to the CO447 plants. Generally, a reduction in PAL activity makes plants more vulnerable to disease because SA accumulation is reduced and systemic acquired resistance is abolished [47]. Both PAL and SA contribute to corn defense against the sugarcane mosaic virus infection [47]. It has also been reported that PR-1 genes have the capacity to inhibit PCD within the initial lesion caused by Pseudomonas syringae pv. tabaci and eventually cause plant death after the pathogen is contained by PCD [48]. 
In addition, previous studies have demonstrated that PR-1 genes possess antibacterial properties effective against both Gram-positive and Gram-negative bacteria [49]. This data prompted us to investigate PCD-related genes. Following pathogen colonization, ROS can either strengthen the cross-linking of the plant cell walls or promote the oxidative burst to effectively collapse the invaded plant tissues, resulting in limiting the disease spread within the plant [28,[50][51][52][53]. ROS production is closely linked to PCD in response to biotic stresses [28]. ROS are produced in animals and plants by the NADPH oxidase family of enzymes [54]. Plant NADPH oxidases are commonly referred to as respiratory burst oxidases (Rboh) due to their closest functional similarity to mammalian NADPH oxidases [55]. NADPH oxidase/RbohD is a crucial mediator in ROS production and the activation of PCD in many plants [28,50,52,56,57]. Therefore, we tested RbohD, which is a NADPH oxidase involved in ROS production and PCD activation [57]. We also quantified polyphenol oxidase (PPO1), which plays a key role in plant defense by using molecular oxygen to convert ortho-diphenols into ortho-quinones. The ortho-quinones are part of the browning reaction correlated with tissue damage, which has been suggested to restrict pathogen progression as a small HR [51,58,59]. The induction of PPO1 gene expression in tomato plants has been reported to trigger resistance against P. syringae pv. tomato [59]. Since Rab7 plays a critical role in regulating ROS scavengers to protect plants from mainly abiotic stresses and in the conversion of phagosomes into lysosomes, which deactivate potentially toxic secretions in response to pathogen infection [60][61][62], we assayed Rab7. The induced expression levels of Rab7 in wheat plants in response to Pucinia striiformis f. sp. Tritici had suggested the regulatory role of Rab7 following pathogen colonization to prevent disease spread [61]. Furthermore, we tested the peptidyl-prolyl cis-trans isomerase (PPI) gene, which has been reported to play various roles in plants, including protein folding, stress response, plant development, and redox reaction regulation [63]. When a pathogen attacks a plant, it triggers the HR response, which relies heavily on the ROS that are produced through redox reactions. These reactions must be tightly controlled to protect the plant from additional damage resulting from excess ROS. In order to limit the harmful impact caused by ROS, plants activate the PPI gene. Moreover, the PPI gene is expressed at a relatively higher level than that of the control treatment, and the early induction of RbohD transcripts during the infection might indicate that the RbohD gene is regulated by the PPI gene [63]. The PPI and Rab7 genes are strongly linked to RbohD downregulation by producing ROS scavengers that eliminate the harmful impacts of excess ROS, protecting plants from further damage [60,[62][63][64]. Our findings revealed an increase in the transcript abundance of ZmRbohD, ZmPPO1, ZmRab7, and ZmPPI at 0 and 2 dpi time points in CO450 compared to CO447, implying a critical role for PCD-related genes in inducing plant defense against Goss's wilt disease. Together, these results strongly imply that SA and PCD are essential in regulating defense responses against Goss's wilt disease in corn. SA and H 2 O 2 treatments induce disease resistance and systemic acquired resistance in plants [65][66][67][68]. 
SA confers disease resistance against the downy mildew pathogen, Peronosclerospora maydis, by promoting the expression levels of PR-1 and PR-5 genes [69]. It has also been shown that SA triggers PR genes in rice and barley [70,71]. Additionally, the synthetic chemical analogue of SA, Benzothiadiazole (BTH), was shown to activate an enhanced resistance in wheat to powdery mildew caused by Erisyphe graminis, the leafrust-causing Puccinia recondita, and Septoria leaf spot. Magnaporthe grisea, the bacterium that causes rice blast, was also reported to be controlled by the BTH treatment of rice seedlings [72]. Consistent with these results, the exogenous application of SA or H 2 O 2 was able to restore partial disease resistance in the susceptible CO447 plants against CMN14-5-1. These results suggest that SA and H 2 O 2 , in combination with other important components, likely act in an additive manner to induce disease resistance against Goss's wilt. Plant Material The corn (Zea mays L.) lines utilized in this research, namely, CO447 and CO450, were provided by Dr. Lana Reid from Agriculture and Agri-Food Canada located in Ottawa, Ontario. These inbred lines do not have common parental ancestry. Dr. Reid requested that Dr. Daayf's lab test the two lines in the field, and the results showed that CO447 was susceptible to Cn infection while the other was tolerant [73]. The plants were grown in a controlled environment with a 16/8 h light/dark cycle and a temperature of 22/18 • C for the day/night cycle. Chemical Treatments SA (Sigma; Steinheim, Germany; cat. no. 247588) and H 2 O 2 (Sigma; St. Louis, MO 63103, USA; cat. no. 216763) treatments were carried out using 500 µM solutions. SA was made ready for use by diluting it in water. The H 2 O 2 , which was available as a 30% solution, was also diluted in water. Each of the dilutions was freshly made. Treatments were carried out by spraying solutions on three-to four-week-old maize plants 48 h prior to bacterial inoculation. Bacterial Isolation, Preparation of Inoculum, and Leaf Inoculation The highly aggressive CMN14-5-1 [13] was grown for 2-3 days at 23 • C in a nutri- [74]. The inoculum consisted of bacterial cells that were mixed with a phosphate buffer solution containing 10 mM of both monobasic potassium phosphate and dibasic potassium phosphate at pH 6.7. The bacterial culture concentration was measured and then adjusted to 1 × 10 7 CFU/mL for the inoculation process [15]. Maize plants were mechanically wounded at the V5 leaf stage using a disposable syringe plunger covered with a 5 mm sandpaper disc. The third, fourth, and fifth leaves were wounded on both sides of the midrib [13]. The control plants had 20 µL of phosphate buffer solution applied to their wounds, while the inoculated plants received 20 µL of CMN14-5-1 inoculum. For the two types of corn tested, three different treatments were used in the experiment. The first treatment was an unwounded control, while the second treatment involved wounding the corn leaves and then treating them with a phosphate buffer. The third treatment also involved wounding the leaves, but this time they were treated with CMN14-5-1. The phosphate buffer did not have any negative effects on the control treatments. Experiments were carried out in three biological replicates, with each replicate consisting of six plants. After being treated, the plants were kept in a mist chamber overnight with 100% relative humidity before being moved to a growth room for a period of 10 days. 
Measurement of Lesion Length, Disease Severity Rating, and Sampling Treatments and time intervals were assigned to each inbred corn line. This experiment had six time intervals: 0 (15 min after inoculation), 2, 4, 6, 8, and 10 dpi. Three different treatments were used, including: (1) no wound (control); (2) wound with phosphate buffer (PPB; control); and (3) wound with CMN14-5-1. The size of the lesion was measured in both directions from the infection site at each specific time point. For each replicate, the area under the disease progress curve (AUDPC) was calculated using the mean of six biological replicates comprising six subsamples. The disease severity index was used to assess disease severity with the following scale: 0, the only lesion is the initial wound; 1, chlorosis or reddening only; 2, chlorosis or reddening accompanied by freckling and approximately 10% necrosis; 3, chlorosis or reddening accompanied by freckling and approximately 11-25% necrosis or wilting; 4, 26-50% necrosis; 5, 51-75% necrosis; and 6, 76-100% necrosis [13]. Leaf segments that contained the entire inoculation site plus the diseased area were excised and analyzed at 0 (15 min post inoculation) and 2 dpi for transcript analysis. Measurement of Transcript Levels TRI Reagent (Invitrogen; Vilnius, Lithuania; cat. no. AM9738) was used to extract the total RNA, which was then treated with DNase I (ThermoFisher Scientific; Vilnius, Lithuania; cat. no. EN0521) and reverse-transcribed into cDNA using the RevertAid First Strand cDNA Synthesis Kit (ThermoFisher Scientific; Vilnius, Lithuania; cat. no. K1622). Supplementary Table S1 lists all the primers used in the gene expression studies. The 2^−ΔΔCT method was used to quantify the relative gene expression [75], with actin serving as the reference gene. Statistical Analysis Statistical analyses for the experiments in Figures 1-3 were carried out using the PROC MIXED procedure of Statistical Analysis Software (SAS) (Release 9.1 for Windows; SAS Institute, Cary, NC, USA). The Tukey test (α = 0.05) was used to compare treatment means for the experiments in Figure 4. Each experiment was repeated three times, with six plants in each replicate. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy concerns and connections to other ongoing studies.
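As an aside for readers less familiar with relative quantification, the short Python sketch below illustrates the 2^−ΔΔCT calculation named above; the Ct values and the resulting fold change are hypothetical placeholders, not measurements from this study, although actin is used as the reference gene as in the paper.

```python
# Minimal sketch of the 2^-ddCT relative-quantification step described above.
# Ct values are invented placeholders, not data from this study.

def fold_change(ct_target_treated, ct_actin_treated,
                ct_target_control, ct_actin_control):
    """Return relative expression of a target gene (e.g., ZmRbohD)."""
    d_ct_treated = ct_target_treated - ct_actin_treated   # normalize to actin
    d_ct_control = ct_target_control - ct_actin_control
    dd_ct = d_ct_treated - d_ct_control                   # normalize to control
    return 2 ** (-dd_ct)

# Hypothetical example: inoculated leaf vs. buffer-wounded control.
print(fold_change(24.1, 18.0, 26.3, 18.2))  # ~4-fold induction
```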
2023-03-30T15:14:11.650Z
2023-03-28T00:00:00.000
{ "year": 2023, "sha1": "67ea521e567197d98cabb7cada8a617006948bce", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/12/7/1475/pdf?version=1679986159", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f537c8cefc9fecabb300c7026b24a3e908cddffe", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
51922780
pes2o/s2orc
v3-fos-license
Endoscopic closure of an anastomo-cutaneous fistula: Filling and shielding using polyglycolic acid sheets and fibrin glue with easily deliverable technique Background and study aims Recently, endoscopic closure of gastrointestinal fistulas using polyglycolic acid (PGA) sheets with fibrin glue (FG) has been attempted. A 70-year-old woman who had undergone pancreaticoduodenectomy for pancreatic cancer suffered from a refractory anastomo-cutaneous fistula at the site of gastro-jejunostomy. We attempted endoscopic closure with filling and shielding using PGA sheets and FG. After introducing a guidewire into the fistula, a small piece of PGA sheet was skewered onto the guidewire and then pushed using a tapered catheter over the guidewire and delivered into the fistula. A total of 10 sheets were delivered via the same procedure. Next, the mucosa around the fistula was ablated, and the orifice of the fistula along with the surrounding mucosa was shielded with a piece of PGA sheet fixed with hemoclips and FG. After this procedure, the leakage disappeared and the fistula was undetectable on contrast radiograms. Endoscopic closure of an anastomo-cutaneous fistula with filling and shielding using PGA sheets and FG is an effective, safe, minimally invasive treatment, and the filling technique using a guidewire ensures a safe, smooth procedure. dochojejunostomy 4 months after the operation. Computed tomography revealed no recurrent findings, and cytology of drainage from both PID and PTCD was negative for cancer. Contrast imaging performed by introducing a contrast medium through the cutaneous fistula revealed an anastomo-cutaneous fistula (▶ Fig. 1a). The site of anastomotic leakage was endoscopically confirmed by introducing indigo carmine through the cutaneous fistula (▶ Fig. 1b). Because the anastomo-cutaneous fistula had not closed despite conservative management 14 months after the operation, we tried endoscopic clip closure and shielding with a PGA sheet (Neoveil; Gunze Medical Division, Kyoto, Japan). However, the fistula was still not completely closed 3 months after starting these endoscopic approaches. Therefore, we attempted endoscopic closure with filling and shielding using PGA sheets and FG (Beriplast P Combi-Set; CSL Behring Pharma, Tokyo, Japan). 
After confirming the anastomotic fistula using an endoscope (3.2 mm-wide working channel, GIF-Q260J; Olympus Medical Systems, Tokyo, Japan), a guidewire (0.64 mm in diameter, RAYELISSE; CREATE MEDIC, Kanagawa, Japan) was introduced into the anastomotic fistula at the orifice of the cutaneous fistula with radiologic control (▶ Fig. 2a). A tapered catheter was inserted over the guidewire and the fistula was cleaned with an adequate quantity of saline. Subsequently, a small piece of PGA sheet (10 × 5 mm) folded in half was skewered onto the guidewire at the center and then pushed using the tapered catheter (MTW; MTW Endoskopie, Wesel, Germany) over the guidewire through the scope channel and delivered into the fistula (▶ Fig. 2b, ▶ Fig. 2c). A total of 10 PGA sheets were delivered via the same procedure and complete closure of the fistula was confirmed by contrast imaging. Next, the mucosa around the fistula was ablated with argon plasma coagulation. A piece of PGA sheet (20 × 20 mm) was then applied to the orifice of the fistula along with the surrounding mucosa and delivered with biopsy forceps through the scope channel to shield the fistula; it was fixed with five hemoclips at the edge of the sheet. Finally, FG was sprayed over the entire sheet with an injection needle (▶ Fig. 2d). After this procedure, no complications were observed, and leakage from the cutaneous fistula disappeared. The fistula was undetectable on contrast radiograms, even after pressure injection of contrast medium, at 1 month after the procedure (▶ Fig. 3). Discussion We were able to close a refractory anastomo-cutaneous fistula after gastrojejunostomy using PGA sheets and FG. In the current case, shielding using a PGA sheet was accompanied by a filling procedure to prevent deviation of the filled PGA sheets due to peristaltic pressure. Takimoto et al. [8] also reported filling and shielding using PGA sheets and FG to treat gastric perforation after endoscopic submucosal dissection; however, we found no reports describing the same procedure for treatment of gastrointestinal fistulas. While Nagami et al. [4] successfully used endoclips to gather the mucosa around the fistula after filling the fistula with PGA sheets, such a procedure carries a risk of inducing incomplete closure or position displacement of the filled sheets. Delivering PGA sheets over the guidewire with the tapered catheter is not only safe but also an easy procedure to repeat. Furthermore, we were able to adjust the filling position based on radiograms and fill the deep site of the fistula with the sheets. We were able to deliver several sheets simultaneously and adjust the size of the sheets depending on the size of the fistula. In case of difficulty with an antegrade approach, the sheets can be delivered retrograde from the downstream side of the fistula. Although careful guidance using a soft guidewire should be performed to avoid fistula injury, moderate mechanical stimulation to the fistula due to guidewire movement might promote formation of granulation tissue after filling the PGA sheets. To our knowledge, this is the first report showing the utility of a guidewire for a filling procedure with PGA sheets. ▶ Fig. 1 a Contrast imaging performed by introducing a contrast medium through the cutaneous fistula revealed an anastomo-cutaneous fistula (arrowheads). b The site of anastomotic leakage was endoscopically confirmed by introducing indigo carmine through the cutaneous fistula (arrow). 
Although our shielding procedure using a PGA sheet is able to prevent such problems, this procedure is complicated, especially when using larger sheets, and can be technically difficult. An easier shielding method should be developed, such as the adoption of a double-channel operating scope. Conclusion In conclusion, endoscopic closure of an anastomo-cutaneous fistula after gastro-jejunostomy with filling and shielding using PGA sheets and FG is an effective, safe, minimally invasive treatment, and the filling technique using a guidewire ensures a safe, smooth procedure. ▶ Fig. 2 a A guidewire (RAYELISSE; CREATE MEDIC, Kanagawa, Japan) was introduced into the anastomotic fistula. b, c A small piece of PGA sheet was skewered onto the guidewire at the center, and then pushed using the tapered catheter (MTW, MTW Endoskopie, Wesel, Germany) over the guidewire and delivered into the fistula. d The orifice of the fistula along with the surrounding mucosa was shielded by a piece of PGA sheet fixed with five hemoclips and FG.
2018-08-14T19:12:27.141Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "a22dab3700715c879d45b633cf78c8409060c1c7", "oa_license": "CCBYNCND", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/a-0584-6669.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9f60194464b0fc6ea035716608a953ecbb4016bf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
230633600
pes2o/s2orc
v3-fos-license
Upper gastrointestinal series in children: what surgeons need to know Upper gastrointestinal (UGI) series is the standard imaging tool for diagnosis of structural and functional abnormalities affecting the esophagus, stomach, and proximal small bowel. The aim of this study was to delineate the clinical indications for UGI series in children that are associated with the presence of significant radiological abnormalities, aiming for more standardized care for these patients. UGI series of 118 patients were analyzed with calculation of a clinical score. Vomiting was the most frequent primary complaint (63.6%), followed by dysphagia and recurrent chest infection. Forty-nine patients had positive upper GI findings (42%). The most frequently detected abnormalities were related to the stomach and duodenum (21.4%). Variable degrees of idiopathic gastroesophageal reflux were detected in 16 patients (13.6%). Patients with a clinical score of 2 or more had a significantly higher prevalence of abnormal findings (p = .001). Vomiting (especially when bilious), underweight, aspiration, and recurrent chest infection were strong predictors of abnormal findings on upper GI series (p = 0.007, 0.001, 0.009, and 0.001, respectively) and increased the diagnostic yield of upper GI series for detection of abnormalities by 3.48, 9.6, 4, and 4.12 times, respectively. The diagnostic yield of UGI series was relatively higher in patients having two or more symptoms (clinical score of 2 or more) and in children with bilious vomiting, aspiration and underweight, or repeated chest infection. Background Upper gastrointestinal (UGI) series is frequently performed in the pediatric radiology department. It is considered the standard imaging tool for diagnosis of structural and functional abnormalities involving the esophagus, stomach, and proximal small bowel. It clearly demonstrates congenital malformations affecting the gastrointestinal tract (e.g., hiatus hernia, intestinal malrotation); in addition, it can depict extra-luminal esophageal compression by an anomalous blood vessel or an external compressing mass. Underlying causes of dysmotility disorders such as achalasia can also be properly evaluated [1][2][3]. In clinical practice, UGI series is usually performed in many children with symptoms related to the gastrointestinal tract, particularly vomiting, dysphagia, or abdominal pain. However, an unindicated radiological procedure could unnecessarily expose pediatric patients, who are more radiosensitive, to the detrimental effects of ionizing radiation [4,5]. Moreover, in view of limited resources and the emphasis on quality and safety of radiologic procedures, the risk versus potential benefits should be weighed before proceeding with any procedure. The diagnostic value of UGI series has been explored in only a few papers in the literature, mostly focusing on children with vomiting [2,[6][7][8]. Consequently, there is a clear need to evaluate its usefulness in the clinical management of these patients. The aim of this study was to clearly delineate the clinical indications for UGI series in children and their association with the presence of significant radiological abnormalities, aiming for more standardized care for these patients. Patient selection After ethical and internal review board approval, a cross-sectional study was conducted at the radiology unit in our university pediatric hospital. 
All patients presenting to our unit for UGI series were initially included in the study; patients with a history of corrosive ingestion or prior UGI surgeries were excluded. Patients with incomplete studies or insufficient clinical data were also excluded. Informed oral consent including all procedure details was obtained from all parents or guardians before the procedure. All patients were subjected to detailed history taking and were instructed to fast for 4 to 6 h before the procedure. Figure 1 shows the history-taking sheet adopted in our department. Procedure detail UGI series were performed using a TOSHIBA ZEXIRA DRX unit (MODEL BLF-15B) with pulsed fluoroscopy and last-screen capture at a rate of 3-4 frames/s. An anteroposterior (AP) scout film was obtained. Imaging started from the oropharynx to the duodenojejunal flexure using a single-contrast technique. The contrast was administered by bottle or through a nasogastric tube. AP and lateral views of the esophagus were obtained. A lateral view of the duodenum was then obtained while the contrast passed through its second part. This was followed by an AP view of the duodenojejunal flexure. Finally, an AP image was obtained once the contrast had passed into the jejunum. No specific maneuvers or tests were performed to initiate reflux [9,10]. Data collection Age, gender, and the primary complaint for each patient were recorded, and all radiologic abnormalities were documented. Predictor variable The indication for UGI series was determined based on patients' symptoms. Indications included vomiting, dysphagia, underweight, aspiration, recurrent chest infection, abdominal pain, change in bowel habits, hematemesis, or melena. As patients might have more than one symptom, a clinical score was calculated for each patient, assigning 1 point for each symptom, as we have recently published [8]. The total number of patients having each symptom, whether isolated or in association with others, was also calculated. Outcome variables The UGI series findings were categorized into normal and abnormal groups, with the latter group including all positive UGI series. The abnormal group was further divided into 4 groups according to the location of the dominant abnormal findings: patients with idiopathic gastroesophageal reflux (GER), patients with findings related to the esophagus, patients with findings related to both the stomach and duodenum, and lastly patients with aspiration. The groups of patients with normal and abnormal findings were compared regarding their clinical scores. Statistical analysis Different groups were compared regarding patients' age and gender. They were also analyzed regarding their clinical presentation (vomiting, dysphagia, underweight, aspiration, recurrent chest infection, abdominal pain, change in bowel habit, hematemesis, and melena). Multinomial logistic regression models were built to assess confounding variables and relevant interactions, using backward stepwise regression to determine independent predictors for positive upper GI findings. Numeric variables were provided as the median (IQR) and range. The Kruskal-Wallis H test was used to compare continuous variables and the chi-square test to compare categorical variables. A probability value (p value) less than 0.05 was considered statistically significant. IBM® SPSS® Statistics version 21 was used for statistical analysis. Results One hundred and eighteen patients (out of 320 patients who underwent upper GI series) were included in the study. 
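To make the scoring and yield analysis described above concrete, the brief Python sketch below tallies the clinical score (1 point per symptom) and derives a crude odds ratio for abnormal findings at a score of 2 or more; the patient records and the resulting number are invented for illustration and do not reproduce the study's multinomial regression models.

```python
# Illustrative sketch of the per-patient clinical score and a crude odds
# ratio for abnormal UGI findings at a score of 2 or more.
# All patient records below are invented examples.

patients = [
    {"symptoms": {"vomiting", "underweight"}, "abnormal": True},
    {"symptoms": {"vomiting"}, "abnormal": False},
    {"symptoms": {"dysphagia", "recurrent chest infection", "underweight"}, "abnormal": True},
    {"symptoms": {"abdominal pain"}, "abnormal": False},
]

for p in patients:
    p["score"] = len(p["symptoms"])  # 1 point for each reported symptom

# 2 x 2 table: score >= 2 (exposed) vs. abnormal UGI series (outcome)
a = sum(p["score"] >= 2 and p["abnormal"] for p in patients)
b = sum(p["score"] >= 2 and not p["abnormal"] for p in patients)
c = sum(p["score"] < 2 and p["abnormal"] for p in patients)
d = sum(p["score"] < 2 and not p["abnormal"] for p in patients)

# Haldane correction (add 0.5 to each cell) avoids division by zero
odds_ratio = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
print(f"crude odds ratio for score >= 2: {odds_ratio:.1f}")
```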
The remaining patients were excluded due to one or more of the previously mentioned exclusion criteria. Patients ranged in age from 5 days to 17 years (median 2 years). Table 1 summarizes the patients and the findings of their UGI series according to their primary complaint. Vomiting was the most frequently encountered symptom (n = 85; 72.6%), and it was non-bilious in 79 patients; it was the primary complaint in 75 patients. Thirty-five patients reported recurrent chest infection (30%), and it was the primary complaint in only 11 patients. Dysphagia was reported by 18 patients (15.4%), and it was the primary complaint in 13 patients. Seventeen patients had a history of aspiration, and it was the primary complaint in 9. Additionally, underweight was present in 72 patients (61.5%), abdominal pain was present in 30, and changes in bowel habits were also reported. Patients with GER were found to be significantly younger than the other groups (median 10 months; range 3-14 months). In addition, GER was found to be the most prevalent abnormality below the age of 6 months. On the contrary, patients with findings related to the esophagus were found to be significantly older than patients with other findings (median 36 months; range 4-180 months; p = .04) (Fig. 8). The calculated clinical score was significantly lower in patients with a normal UGI series (p < .001). Among 29 patients with a solitary symptom (scored 1), 27 patients were found to have normal findings. Consequently, patients with a clinical score of 2 or more had a significantly higher prevalence of abnormal findings (p = .001) (Fig. 9). Forty-three patients (54.4%) out of 79 patients with non-bilious vomiting had a normal UGI series, and 19 patients (24%) had findings related to the stomach (Figs. 2 and 3). However, all patients presenting with bilious vomiting had abnormal findings (6 patients) related to the duodenum: four patients with midgut malrotation, one of whom had associated volvulus; one patient with duodenal atresia; and another one with SMA syndrome (Fig. 4). Vomiting, underweight, aspiration, and recurrent chest infection were strong predictors of abnormal findings on UGI series (p = 0.007, 0.001, 0.009, and 0.001, respectively) and increased the diagnostic yield of upper GI series for detection of abnormalities by 3.48, 9.6, 4, and 4.12 times, respectively. However, with multivariate analysis, recurrent chest infection did not show significant results (Table 2). Vomiting was a strong predictor of detection of GER and findings related to the stomach and duodenum on upper GI series (p = .001), as all patients with findings related to the stomach and duodenum and 93.8% of patients with detected GER on UGI series presented with vomiting. After adjusting for other variables, vomiting increased the risk of finding GER on UGI series by 8.7 times. Weight loss or failure to thrive and chest-related symptoms were found to be strong predictors of GER on UGI series (p = .001), as 93.8% and 62.5% of patients in the GER group had weight loss or failure to thrive and chest-related symptoms, respectively. After adjusting for other variables, weight loss or failure to thrive and chest-related symptoms increased the risk of finding GER on UGI series by 20.1 and 7.7 times, respectively. Dysphagia was a predictor of findings related to the esophagus on upper GI series (p = .034), as two of the four patients with esophageal findings had dysphagia (Figs. 6 and 7). 
After adjusting for other variables, dysphagia increased the risk of esophageal findings on UGI series by 8.4 times. Aspiration, underweight, and recurrent chest infection were strong predictors of the aspiration group (p = 0.001), as all five patients with detected aspiration on UGI series presented with all of these symptoms. Two of them had cerebral palsy, and one had a neurodegenerative disorder with pseudobulbar palsy and palato-pharyngeal incoordination. Also, abdominal pain was a strong predictor of findings related to the stomach and duodenum (p = 0.001), as 68% of findings related to the stomach and duodenum belonged to patients who presented with abdominal pain. After adjusting for other variables, abdominal pain increased the risk of such findings. Patients with changes in bowel habits, hematemesis, or melena were found to be linked to gastric findings, but this correlation did not reach statistical significance. Discussion The current study analyzed the different radiological findings of UGI series in children in relation to their symptoms. Vomiting was found to be the most common presenting symptom, followed by dysphagia and recurrent chest infection. The majority of abnormal findings were due to underlying gastric and duodenal pathologies (21.4%) followed by GER (13.6%). Vomiting and abdominal pain were found to be significantly associated with findings related to the stomach and the duodenum. In addition, we adopted our new clinical score to increase the diagnostic yield of UGI series, so that the impact of clinical symptoms on the findings of the UGI series could be objectively assessed [8]. We found that patients with higher clinical scores (especially a score of 2 or more) were more likely to have an abnormal UGI series. As previously reported, we found that idiopathic GER was more prevalent in younger children. Moreover, GER was the most prevalent abnormality in infants below the age of 6 months in whom non-bilious vomiting was the primary complaint. The high prevalence of GER is expected in this very young age group, as 70 to 85% of infants show physiological reflux during their first 60 days of life and it resolves spontaneously without intervention in the first year of life [6,11,12,17]. On the contrary, the age of the patients with abnormal esophageal findings was found to be significantly higher than the age of patients with GER or other positive findings, as previously reported [6]. This is primarily related to the nature of the encountered diseases themselves, as peptic stricture occurs after a relatively prolonged period of exposure to refluxed acidic gastric content; also, achalasia secondary to lymphoma occurs at a relatively older age. However, in our study, neither increasing age nor the presence of dysphagia improved the diagnostic yield of the UGI series. Fig. 7 UGI series of two patients with findings related to the esophagus. a A 15-year-old boy with lymphoma, who presented with dysphagia and significant weight loss, showing esophageal compression by a large homogeneous opacity and a parrot-beak appearance of the lower esophagus consistent with achalasia (arrow). b Axial chest CT showing enlarged mediastinal lymph nodes encasing mediastinal structures (asterisk). c A 14-year-old male patient with Alport syndrome presented with vomiting, epigastric pain, and dysphagia; UGI series shows a markedly dilated esophagus (arrows) and hiatus hernia (asterisk). d AP view shows a dilated esophagus with esophageal wall thickening representing leiomyomatosis (asterisk). 
It is likely that the term dysphagia was used incorrectly by those young patients or their guardians. Bilious vomiting was found to be a strong, significant predictor and improved the diagnostic yield of the upper GI series, as all patients presenting with bilious vomiting showed positive upper GI series findings related to the duodenum (duodenal atresia, midgut malrotation, and superior mesenteric artery syndrome). As previously reported, intestinal malrotation almost always presents with vomiting, which is characteristically bilious. Although it was reported that intestinal malrotation is classically a disease of infancy [13][14][15], intestinal malrotation should be suspected in every child with repeated attacks of bilious vomiting. Spontaneous aspiration was reported to be frequent in children with feeding difficulties and is most likely to occur in children with underlying neurologic disease [16]. In our study, most of the patients with aspiration had cerebral palsy or neurodegenerative disorders with pseudobulbar palsy and palato-pharyngeal incoordination. Our findings revealed that aspiration symptoms were significantly associated with the detection of aspiration on UGI series, and they significantly improved the diagnostic yield of the UGI series, especially if associated with repeated attacks of chest infection/symptoms. Our study could be limited by the heterogeneous patient population, which included all children up to 18 years of age and could make conclusions about certain indications for UGI series difficult. Dividing patients into different age ranges would be appropriate to study each group individually. Moreover, given the young age of most of the patients, most of their complaints were expressed by their parents or guardians, which might not be totally accurate. However, this is a cross-sectional study that included a relatively large number of children presenting with complaints related to the upper gastrointestinal tract, and it represents an experience from a tertiary care center. All patients underwent a standardized technique, allowing uniform interpretation of all studies and a comprehensive evaluation of the results. In addition, in view of the complex nature of the presenting symptoms, a clinical score was calculated considering patients' primary complaints and all other associated symptoms in order to study the diagnostic yield of UGI series for those patients with multiple complaints. Conclusion Vomiting, dysphagia, and recurrent chest infection were the most common indications for UGI series. Most radiologic findings were due to gastric and duodenal abnormalities followed by GER. Idiopathic GER was more prevalent in young infants. The diagnostic yield of UGI series was relatively higher in patients having two or more symptoms (clinical score of 2 or more) and in children with bilious vomiting, aspiration and underweight, or repeated chest infection.
2020-12-10T09:04:38.142Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "cb0ad964adbc1f8af41a1661ef927989e9af2cca", "oa_license": "CCBY", "oa_url": "https://aops.springeropen.com/track/pdf/10.1186/s43159-020-00061-9", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dec72d7e3cb176cd7d0020c83462771215f1af8c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14052680
pes2o/s2orc
v3-fos-license
B cell-stimulatory factor 1 (BSF-1) promotes growth of helper T cell lines. T cell-derived supernatants (SN) that contain B cell-stimulatory factor 1 (BSF-1) and lack IL-2 promote the growth of the IL-2-dependent T cell line, HT-2, as well as three other clones or lines of T cells that can provide help to B cells. The BSF-1 purified from these SNs promotes growth of HT-2 cells approximately 50% as effectively as purified IL-2. A potential involvement for contaminating IL-2 in the BSF-1 preparations was excluded by the demonstration that anti-BSF-1 mAbs blocked the BSF-1-induced growth of HT-2 cells; in contrast, these antibodies did not block the IL-2-induced proliferation of the HT-2 cells. In addition, anti-IL-2 mAbs or anti-IL-2-R antibodies blocked the HT-2 growth-promoting activity of purified IL-2, but not BSF-1. Finally, BSF-1 promoted only a very modest growth of Con A-induced T cell blasts, and failed to induce significant growth in seven other cytotoxic, alloreactive, and long-term T cell lines. Taken together, these results indicate that in addition to its known effects on resting and LPS-stimulated B cells, BSF-1 can promote growth of certain subsets of activated T cells, in particular, those that provide help to B cells. Preparation of IL-2 and BSF-1-containing SN. SN from PMA-stimulated EL-4 thymoma cells was prepared as described previously (21). Such SN contained high levels of IL-2 activity and negligible levels of BSF-1 activity. SN from Con A-pulsed PK 7.1 cells were prepared as described previously (11). Such SNs contained greater than 103 U/ml of BSF-1 activity and were devoid of IL-2 activity . BSF-1 Assay. BSF-I activity was measured by a modification (22,23) of the previously described anti-IgM costimulation assay (1) . 5 X 10' small (1 .083-1 .222 g/ml), Percollfractionated B cells (23) were cultured for 3 d in a volume of 200 Al containing 10 U1 of a 10% suspension of goat anti-mouse Ig coupled to Sepharose (S-GAMIg) (22,24) and SN or purified lymphokines (20 ul). The cells were then pulsed with 1 UCi/well of ['H]thymidine and harvested 16 h later. 1 U of BSF-I was designated as the reciprocal of the dilution containing 50% of the maximal activity of a standard BSF-1-containing PK 7.1 SN . IL-2 Assay. IL-2 activity was assayed by measuring the [3H]thymidine uptake of IL-2dependent HT-2 (12) or CTLL-2 (13) cell lines in response to IL-2or to BSF-1-containing SN . One HT-2 line has been maintained in our laboratory for 3 yr and the other was recently obtained from Dr. Robert Coffman (DNAX Research Institute of Molecular and Cellular Biology, Palo Alto, CA) . For the assay, 20-ul aliquots/well of a 2 .5 X 105 cells/ml suspension were added to 70 j1/well of different dilutions of the BSF-1 or IL-2-containing SN. Cells were pulsed with 1 ACi/well of [3H]thymidine after 48 h of incubation at 37°C/5% COz in air, and 16 h later the cells were harvested. IL-2 Assay on T Cell Blasts. Murine Con A T cell blasts were prepared as described by Malek et al . (25). For the assay, the cells were cultured at a density of 2 .5 X 10 5 cells/ml in 200 Al/well of assay medium containing various dilutions of the IL-2or BSF-1containing SN or the purified lymphokines . After 48 h the cells were pulsed with 1 ,uCi/well of [3H]thymidine, and 16 h later the cells were harvested . B Cell Growth Factor (BCGF) II Assay. BCGF-II activity was assayed by measuring the ability of the SN or purified lymphokines to synergize with dextran sulfate (DxS) in a B cell proliferation assay (26). 
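As a rough illustration of the unit definition used in these assays, the Python sketch below converts a hypothetical dose-response series into a titer expressed as the reciprocal of the dilution giving half-maximal activity; the dilution series, counts, and resulting titer are assumptions for demonstration only, not values from the assays reported here.

```python
# Sketch of the unit definition quoted above: the titer (U/ml) is the
# reciprocal of the supernatant dilution giving 50% of maximal activity.
# The dilution series and counts below are hypothetical, not assay data.

dilutions = [1/40, 1/80, 1/160, 1/320, 1/640, 1/1280]  # fraction of neat SN
cpm = [61000, 58000, 52000, 33000, 18000, 9000]        # [3H]thymidine uptake

half_max = max(cpm) / 2
for dilution, signal in zip(dilutions, cpm):
    if signal <= half_max:                              # first dilution at or below 50%
        print(f"titer ~ {round(1 / dilution)} U/ml")    # prints 640 U/ml
        break
```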
B cells were not subjected to a Percoll density gradient fractionation before culture, since BCGF-II acts on low density B cells (23). Test SN were added on day 0, on day 3 cells were pulsed with [3H]thymidine, and were harvested 16 h later . BCDF for IgGI (BCDF -'Y) Assay. BCDF-,y activity was measured as described previously (6,27), using B cells prepared as described for the BCGF-II assay. 20 ug/ml LPS was [sH]thymidine incorporation of HT-2 cells cultured in the presence of serial twofold dilutions of IL-2-containing EL-4 SN (0) and the BSF-1-containing PK 7 .1 SN (O) . SNs were prepared, and [sH]thymidine incorporation assays were performed as described in Materials and Methods . added at the initiation of culture and BSF-1 was added a day later. On day 6, the culture SN were assayed for IgGI by a RIA . BCDF for IgM (BCDF-,u) Assay . SN or purified lymphokines were tested for BCDF-A activity by their ability to stimulate the secretion of IgM from the in vitro-adapted BCL, cell line, 3113 (28) . T cell SN were added on day 0, and on day 6 the culture SNs were assayed for IgM by a /A-specific RIA . Gel Filtration . Gel filtration of the different SN was carried out at 4°C using a 2 .5 X 50 cm Sephacryl S-200 column (Pharmacia Fine Chemicals, Uppsala, Sweden) equilibrated with Tris-buffered saline (50 mM, pH 7 .4) containing 0 .1 % sodium azide . After chromatography, BSA was added to the different fractions at a final concentration of 1 mg/ml, and the fractions were dialyzed against 3 changes of PBS (3 X 1 liter) and once against RPMI-1640 (1 X 1 liter) . Purification of IL-2 and BSF-1 . The initial step in the partial purification of IL-2 or BSF-1 was carried out as described (10,29) by adsorption to trimethylsilyl-coated TMS-CpG and subsequent elution by a 50% acetonitrile/0 .2 M NaCl/0 .1 % TFA solution . This procedure resulted in the adsorption of >95% of the BSF-1 or IL-2 activities from the SN with minimal contamination by BCGF-I1 and BCDF-ft activities . Reverse-phase HPLC was carried out as described previously (10), in a model 332 gradient liquid chromatograph (Beckman Instruments, Inc ., Fullerton, CA) and an Altex Ultrasphere ODS (250 X 4 .6 mm, 5 pm) reverse-phase column (Altex Scientific, Berkeley, CA) . Results HT-2 Growth-promoting Activity of PK 7 .1 SN. The HT-2 cell line was derived from T cells that provide IA d-restricted help to B cells for an anti-SRBC response . The requirement of HT-2 cells for the growth-promoting activity of IL-2 is the basis for the 1L-2 assay used by many laboratories (12) . However, when HT-2 cells were cultured with T cell SN lacking IL-2, namely, PK 7.1 (11), the HT-2 cells proliferated, as judged by both [sH]thymidine incorporation and by an increase in cell number . As shown in Fig . 1, the stimulation indexes obtained at maximal stimulatory concentrations of BSF-1-containing PK 7 .1 SN were about 50% as high as those obtained with maximal stimulatory concentrations of the IL-2-containing EL-4 SN . Proliferation was observed at 24, 48, and 72 h using the ['H]thymidine incorporation assay, and was not affected by the inclusion of a-methyl-mannoside (20 mg/ml) in the medium (data not shown), suggesting that the proliferative activity of SN from Con A-pulsed PK 7 .1 cells is not due to contaminating Con A. an apparent M, of 30,000, in concordance with a previous report (30). The HT-2 growth-promoting activity of PK 7 .1 SN, however, coeluted with BSF-1 and had an apparent Mr of 18,000 . 
No IL-2 activity was detected in the PK 7 .1 SN and no BSF-1 activity was detected in the EL-4 SN .z To characterize further the HT-2 growth-promoting activity present in PK 7.1 SN, both HT-2 growth-promoting activity and BSF-1 activity were purified from PK 7 .1 SN by TMS-CpG adsorption and reverse-phase HPLC, as described (10) . IL-2 was purified from EL-4 SN by the same procedure. The beads absorbed the HT-2 growth-promoting activities from both SN and the anti-Ig-mediated costimulator and BCDF-y activities from the PK 7 .1 SN . The 50% acetonitrile eluates that contained all three activities were then subjected to reverse-phase HPLC, as described in Materials and Methods. As shown in Fig. 2A, the IL-2 activity from the EL-4 SN eluted at 60-80% acetonitrile . No BSF-1 activity was detected in the original EL-4 SNs, nor in any of the HPLC fractions. Using a system similar to the one described here, the BSF-1 activity has been reported to elute at acetonitrile concentrations of 47-49% (10, 31). As shown in Fig. 2B, both the BSF-1 and the HT-2 growth-promoting activities present in the PK 7.1 SN coeluted at 47-50% acetonitrile . No HT-2 growth-promoting activity was detected in those fractions in which IL-2 should have eluted (60-80% acetonitrile) . The fractions containing the BSF-1 activity were, as expected, also positive for BCDF-y activity, according to previous reports by our group (6) and other (32,33) that BSF-1 and BCDF-y are identical. No BCGF-11 or BCDF-,u activities were detected in these fractions. Inhibition of HT-2 growth-promoting activity of purified BSF-1 but not purified IL-2 by mAb to BSF-1 . Constant amounts of BSF-I or IL-2 were preincubated for 30 min at room temperature with the indicated dilutions of a 1 mg/ml stock of monoclonal anti-BSF-1, 1 1 B 11 (") or isotype-matched control antibodies, 50C 1 (Q) . HT-2 cells were then added and proliferation was measured . Results represent the mean of triplicate experiments ± SEM . activity isolated from the EL-4 SN . The former was similar to BSF-1 based on its physicochemical properties . We, therefore, postulated that BSF-1 had IL-2like growth-promoting activity on HT-2 cells . To test this hypothesis and to disprove the possibility of contamination of BSF-1 by an IL-2 species with physicochemical properties similar to those of BSF-1, we tested the effects of soluble anti-BSF-1 mAbs (I IB11), previously reported to inhibit BSF-1 activity (19), on both the BSF-1 and HT-2 growth-promoting activities of the purified BSF-1 and IL-2 preparations from PK 7.1 and EL-4 SN, respectively . As shown in Fig . 3A, the HT-2 growth-promoting activity purified from PK 7.1 SN was readily inhibitable by I IBI 1 antibody, but not by an isotype-matched anti-DNP antibody (50C1). In contrast, the HT-2 growth-promoting activity of the EL-4derived IL-2 was not affected by I IB11 (or the control antibody) at any of the antibody concentrations tested (Fig . 3B) . The mAb 1 1 B 1 1 completely blocked the anti-Ig-mediated costimulator activity of the PK 7 .1-derived BSF-1 at the same concentrations that inhibited the BSF-1-mediated proliferation of HT-2 cells (Fig . 4) . A similar experiment could not be performed with EL-4-derived IL-2 since it did not show any anti-Ig-mediated costimulatory activity on B cells . Anti-IL-2 mAb Does Not Block the HT-2 Growth-promoting Activity of BSF-1. 
To obtain additional evidence that the HT-2 growth-promoting activity of purified BSF-1 was not due to contaminating IL-2, and to investigate whether the BSF-1-mediated proliferation of HT-2 cells was due to autocrine production and utilization of IL-2 in response to BSF-1, soluble anti-IL-2 mAb (DMS-1) was added to cultures of HT-2 cells containing purified BSF-1 or rIL-2 . The anti-IL-2 mAb failed to inhibit the proliferation caused by BSF-1, whereas the same concentrations of antibody inhibited the rIL-2-mediated proliferation of HT-2 cells by 50-60% (Fig . 5) . These results suggest that BSF-1 purified from PK 7 .1 SN has both growth-promoting activity on HT-2 cells and anti-Ig-mediated costimulatory activity on resting B cells, and that neither activity can be accounted for by contaminating or autocrine IL-2 . Anti-IL-2-R mAb Blocks the Growth-promoting Activity of IL-2 but not BSF-1 on HT-2 Cells . Although the above results indicate that the HT-2 growth-promoting activity of BSF-1 was not due to IL-2 contamination or mediated by autocrine IL-2 production, the proliferative signal given by BSF-1 might have resulted from interaction with IL-2-R . To test this possibility, the anti-IL-2-R antibody, AMT-13, previously shown to inhibit IL-2 binding and IL-2-mediated proliferation in activated murine T cells (20), was added to cultures of HT-2 cells stimulated with rIL-2 or BSF-1 . AMT-13 antibodies blocked (by^-70%) the proliferation caused by HL-2 but did not affect the proliferation mediated by BSF-1 (Fig . 6) . These result suggest that BSF-1 and IL-2 do not bind to the same receptor, or at least the same epitope on the IL-2-R on HT-2 cells, even though both induce their growth . The nature of the BSF-1 receptor on HT-2 cells is not known . Morphology ofHT-2 Cells Cultured with BSF-1 or rIL-2 . When HT-2 cells were cultured with purified BSF-1, they showed striking changes in morphology (Fig . 7A) when compared with HT-2 cells cultured with IL-2 (Fig . 7B) . While the cells cultured with IL-2 usually maintained a rounded shape, grew in suspension, and formed small clusters, the cells cultured in BSF-1 acquired a flattened appearance, produced numerous dendritic-like cytoplasmic projections, and attached to the surface of the culture flask . The percentage of cells undergoing Such morphological changes in the BSF-1-containing cultures was dependent upon the concentration and time of exposure to BSF-1 . Morphologic alterations were evident as early as 12 h after exposure, although they did not reach a maximum until 48 h. Compared with the proliferative activity of BSF-1, higher concentrations of BSF-1 were required to induce morphologic changes in the HT-2 cells, since lower concentrations of BSF-1 that promoted proliferation failed to induce significant morphological alterations . BSF-1 has Growth-promoting Activity on Other Subsets of T Cells. The growthpromoting activity of BSF-I on HT-2 cells prompted us to investigate its effects on a variety of T cell lines and blasts . The population of T cells to be tested was incubated with the purified BSF-1 or IL-2 preparations for 24-48 h, and then was assayed for [3H]thymidine incorporation . 
T cell lines included: (a) a second HT-2 line obtained from DNAX ; (b) the cytotoxic, IL-2-dependent T cell line, CTLL-2 ; (c) the long-term alloreactive T cell clone, PK 7.1-E10 (11) ; (d) a longterm murine T cell line (with cytotoxic activity to P815 mastocytoma cells [14,15]), clone 96, and another long-term T cell line of similar origin, clone 29 (14) ; (e) three CTL lines (16) ; (f) three KLH-specific Th lines (reference 17 and unpublished observations) . The antigen-specific Th all induced IgM secretion in B cells when cultured with TNP-KLH (data not shown) ; and (g) Con A-activated spleen and peripheral T cell blasts (Fig . 8) . As shown in Fig. 8, the IL-2 preparation supported the growth of all the T cell lines tested . In contrast, the BSF-1 preparation supported the growth of the Th populations (C-E) and both HT-2 cell lines (A-B), but induced only a modest proliferation of Con A T cell blasts (M), and no significant proliferation of the other T cell lines, including cytotoxic, alloreactive, and long-term T cell lines (Fig . 8 F-L) . These results suggest that BS F-1 may act as a growth factor for only some T cell subsets. Based on the T cells tested to date, it appears that BSF-1 acts primarily on T cells that provide help to B cells . It will be necessary to test the effect of BSF-1 on additional T cell lines and clones to determine if additional subsets of cells are responsive . Discussion The major finding to emerge from these studies is that BSF-I, in addition to its known effects on B cells, also induces the proliferation of some subsets of T cells such as HT-2 cells and antigen-specific Th (that provide help to B cells) . references 34 and 35), it was essential to show that the growth-promoting activity of BSF-I was not due to the presence of contaminating IL-2 . To this end, it was shown that (a) PK 7 .1 SNs, which lack IL-2 activity, support the proliferation of HT-2 cells^-50% as well as IL-2-containing EL-4 SNs; (b) The HT-2 growthpromoting activity present in PK 7 .1 SNs copurifies with BSF-1 by both gel filtration and reverse-phase HPLC ; (c) The proliferation of HT-2 cells in response to purified BSF-1, but not to IL-2, is inhibited by anti-BSF-1 mAbs . The same antibodies block the anti-Ig-mediated costimulatory activity of BSF-1 ; (d) anti-IL-2 mAbs block the HT-2 growth-promoting activity of IL-2 but not BSF-1 ; (e) anti-IL-2R mAbs block the HT-2 growth-promoting activity of IL-2 but not BSF-1 ; and (f) BSF-1 preparations that support the growth of HT-2 cells do not induce significant proliferation of other IL-2-dependent T cell lines, with the exception of antigen-specific Th lines. In contrast, IL-2 supports the growth of all the T cell lines and normal T cell blasts tested . We have chosen the PK 7.1 SN as a source of BSF-1 because of its paucity of IL-2 activity and its high levels of BSF-1 activity (11) . Other T cell lines or clones that fail to secrete IL-2 but secrete BSF-1 and/or other lymphokines that act on B cells have been described (36)(37)(38)(39). In the last year, there have been several reports describing the partial purification of BSF-1 (10,31,32), and the cloning and expression of the BSF-I cDNA (33) . In these reports, no T cell growthpromoting activity associated with BSF-1 was described. One possible explanation is that two groups (31, 32) used cytotoxic, IL-2-dependent T cell lines (CSP 2.1 and CTLL-2, respectively) as indicator cells in their IL-2 assays . 
All cytotoxic or alloreactive T cell lines tested to date in our system have also failed to proliferate significantly in response to BSF-1 . Paradoxically, however, Ohara et al . (10) used HT-2 cells in their IL-2 assay and could not detect any HT-2 growth-promoting activity associated with their purified BSF-1 . This observation could be explained by the fact that they used cloned IL-2-dependent T cells from the HT-2 line . During the cloning process, the cells might have lost responsiveness to BSF-1, while retaining responsiveness to IL-2 . Another possible explanation is that the HT-2 growth-promoting activity associated with PK 7.1-derived BSF-1 is related to its cell source . However, this possibility appears unlikely since we have recently purified BSF-I from BSF-I-containing SN of EL-4 cells, and found that it behaves identically to that isolated from PK 7.1 SNs (unpublished observations). If BSF-I acts primarily on Th (L3T4+, Lyt-2-), then the observation that BSF-1 induced only modest proliferation of Con A-activated T cells could be explained by the fact that Con A induces the preferential proliferation of Lyt-2+ cells (40) . In support of this possibility is the finding that immunofluorescence staining of Con A-activated lymph node blasts revealed that <10% of the cells were L3T4 + (data not shown) . Presumably, the biological effects of BSF-1 on T cells are mediated by its binding to specific receptors on the surface of activated T cells analogous to other hormones (35) . However, BSF-I receptors have not yet been characterized. Recent results from our laboratory (41) have shown that antibodies directed against the a chain of LFA-I mimic the biological effects of BSF-I on B cells. However, it is not yet known whether LFA-I is the receptor for BSF-I, or another molecule that can signal resting B cells in a similar manner . It is of interest that immunofluorescence staining of HT-2 cells with anti-LFA-1 antibodies has revealed that >99% of the cells express LFA-I . BSF-I, in contrast to IL-2, induced striking morphologic alterations of HT-2 cells, namely, increased adherence to plastic and the appearance of elongated cytoplasmic projections . These phenotypic changes may depend on cytoskeletal alterations . Such alterations may play a role in facilitating T-B cell interaction and the delivery of T cell help to B cells . The capacity of BSF-1 to induce proliferation of Th, but not of the cytotoxic/alloreactive type (Tc/Ta), is provocative. However, it should be emphasized that the negative results using clones of Tc and Ta could be due to their production and use of BSF-1, thereby abrogating their requirement for exogeneous BSF-1 . A possible physiological role for BSF-1 is that it acts on both Th and B cells during Th-B cell interaction . Noelle et al . (4) and Roehm et al . (5) have reported that BSF-I increases the levels of class II MHC molecules (Ia antigens) on the surface of resting B cells. This increase in expression of Ia antigens may enhance the recognition of processed antigen and la by the Th (2) . A proliferative signal mediated by BSF-1 to specific Th could enhance proliferation of the specific Th in the T-B cluster. Such Th cells might subsequently form new conjugates with additional specific B cells not yet stimulated . It is not yet known whether the Th subset(s) that secretes BSF-1 also bears receptors for it, or whether different subsets secrete and bind the ligand or both . In this regard, Mosmann et al . 
(39) have recently shown the existence of two different types of murine Th clones based on their patterns of lymphokine secretion. Type 1 Th cells produce IL-2, IFN-,y, GM-CSF, and IL-3, whereas Type 2 Th cells produce BSF-1, a mast cell growth factor distinct from IL-3, and a T cell growth factor distinct from IL-2 . The effect of BSF-1 on the proliferation of such clones was not investigated . Nevertheless, the presence of a T cell growth factor activity, different from IL-2 in the SNs of those clones also producing BSF-1 (assayed by proliferation of HT-2 cells), suggests that this T cell growth factor activity might be due to BSF-1 . Arthur and Mason (42) have also recently reported that inducer/helper T cells in the rat can be separated into two functional subsets based on reactivity with an mAb (MRC OX-22) that recognizes high molecular weight forms of the rat leukocyte-common antigen . One subset (OX-22 +) proliferates well in mixed lymphocyte culture (MLC), responds to Con A, and produces high levels of IL-2 after stimulation; this subset would be expected to play a role in cell-mediated immunity . The other subset (OX-22 -) that proliferates poorly in response to MLC and Con A, produces low levels of IL-2 but provides effective help for B cell responses ; this subset presumably plays an important role in the induction of humoral responses. Based on these data, the rat CD4+, OX-22-subset would be expected to produce (and also respond to) a lymphokine analogous to murine BSF-1 . The effects of BSF-1 on B cell activation and differentiation coupled to its preferential growth-promoting activity on Th cells could provide the means, as suggested by Arthur and Mason (42), for an independent regulation of cellular and humoral immune responses. During the past year, the complexity of the biologic activities of BSF-1 has become evident (2)(3)(4)(5)(6)(7). Until now, the activity of BSF-1 had been exclusively associated with B cells. This report provides evidence that BSF-I also acts on some T cell subpopulations . Summary T cell-derived supernatants (SN) that contain B cell-stimulatory factor 1 (BSF-1) and lack IL-2 promote the growth of the IL-2-dependent T cell line, HT-2, as well as three other clones or lines of T cells that can provide help to B cells. The BSF-I purified from these SNs promotes growth of HT-2 cells -50% as effectively as purified IL-2 . A potential involvement for contaminating IL-2 in the BSF-I preparations was excluded by the demonstration that anti-BSF-I mAbs blocked the BSF-1-induced growth of HT-2 cells ; in contrast, these antibodies did not block the IL-2-induced proliferation of the HT-2 cells. In addition, anti-IL-2 mAbs or anti-IL-2-R antibodies blocked the HT-2 growthpromoting activity of purified IL-2, but not BSF-1 . Finally, BSF-1 promoted only a very modest growth of Con A-induced T cell blasts, and failed to induce significant growth in seven other cytotoxic, alloreactive, and long-term T cell lines. Taken together, these results indicate that in addition to its known effects on resting and LPS-stimulated B cells, BSF-1 can promote growth of certain subsets of activated T cells, in particular, those that provide help to B cells . We thank Ms . U . Prabhakar, Ms . K. Sill, and Mr . W . Muller for technical assistance ; Ms . C .. A . Cheek for secretarial assistance ; Drs. J. Ohara and W. Paul for their continuing generous gifts of 111311 and 50C I antibodies ; Drs. R. Coffman (DNAX) J. Kappler, and P. Marrack for the HT-2 cell lines; Dr . R . 
Hodes (National Institutes of Health) for the two Th lines; Dr. J. Forman (UTHSCD) for the three cytotoxic T cell lines; and Mr. K. Oliver for assistance with the immunofluorescence staining and analysis. Received for publication 28 April 1986. Note added in proof: Lee et al. (43) have recently generated a murine cDNA clone for BSF-1. Monkey COS cells transfected with this cDNA produce BSF-1. This BSF-1 has Ia-inducing activity, anti-Ig costimulatory activity, BCDF-γ/BCDF-ε activity, mast cell stimulatory activity, and HT-2 growth-promoting activity. Our results are in agreement with those of Lee et al.
2014-10-01T00:00:00.000Z
1986-08-01T00:00:00.000
{ "year": 1986, "sha1": "dcd36b87debe29eea65f41f3a7665216f16360cd", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/164/2/580.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "dcd36b87debe29eea65f41f3a7665216f16360cd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
232202674
pes2o/s2orc
v3-fos-license
State-of-the-art review of secondary pulmonary infections in patients with COVID-19 pneumonia Background The incidence of secondary pulmonary infections is not well described in hospitalized COVID-19 patients. Understanding the incidence of secondary pulmonary infections and the associated bacterial and fungal microorganisms identified can improve patient outcomes. Objective This narrative review aims to determine the incidence of secondary bacterial and fungal pulmonary infections in hospitalized COVID-19 patients, and describe the bacterial and fungal microorganisms identified. Method We perform a literature search and select articles with confirmed diagnoses of secondary bacterial and fungal pulmonary infections that occur 48 h after admission, using respiratory tract cultures in hospitalized adult COVID-19 patients. We exclude articles involving co-infections defined as infections diagnosed at the time of admission by non-SARS-CoV-2 viruses, bacteria, and fungal microorganisms. Results The incidence of secondary pulmonary infections is low at 16% (4.8-42.8%) for bacterial infections and lower for fungal infections at 6.3% (0.9-33.3%) in hospitalized COVID-19 patients. Secondary pulmonary infections are predominantly seen in critically ill hospitalized COVID-19 patients. The most common bacterial microorganisms identified in the respiratory tract cultures are Pseudomonas aeruginosa, Klebsiella species, Staphylococcus aureus, Escherichia coli, and Stenotrophomonas maltophilia. Aspergillus fumigatus is the most common microorganism identified to cause secondary fungal pulmonary infections. Other rare opportunistic infections reported, such as PJP, are mostly confined to small case series and case reports. The overall time to diagnose secondary bacterial and fungal pulmonary infections is 10 days (2-21 days) from initial hospitalization and 9 days (4-18 days) after ICU admission. The use of antibiotics is high at 60-100% in the studies included in our review. Conclusion The widespread use of empirical antibiotics during the current pandemic may contribute to the development of multidrug-resistant microorganisms, and antimicrobial stewardship programs are required for minimizing and de-escalating antibiotics. Due to the variation in definition across most studies, a large, well-designed study is required to determine the incidence, risk factors, and outcomes of secondary pulmonary infections in hospitalized COVID-19 patients. Introduction Since coronavirus disease 2019 (COVID-19) was first recognized in December 2019, it has resulted in the ongoing worldwide pandemic. COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), an enveloped RNA beta-coronavirus. SARS-CoV-2 shares substantial genetic identity with severe acute respiratory syndrome coronavirus (SARS-CoV) and belongs to the sarbecovirus subgenus of the Coronaviridae family [1]. COVID-19 primarily presents as a respiratory tract infection with symptoms varying from mild flu-like illness to acute respiratory distress syndrome (ARDS) [2,3]. Respiratory infections caused by related coronaviruses of the same family, such as SARS-CoV and Middle East respiratory syndrome coronavirus (MERS-CoV), have been reported to be associated with secondary bacterial and fungal infections [4][5][6][7]. However, secondary pulmonary infections in COVID-19 patients are not well described, and this represents an important knowledge gap. 
Furthermore, other infectious and non-infectious complications have been described in hospitalized COVID-19 patients strongly associated with underlying COVID-19 infection such as pneumothorax, myocarditis, and even device-related secondary infections (e.g., central venous catheter, foley catheter). [8][9][10]. The aim of this review is to explore the incidence of secondary bacterial and fungal pulmonary infections in hospitalized patients with COVID-19 infection. We also discuss the bacterial and fungal microorganisms identified, the time to diagnose secondary pulmonary infections, and the frequency of antibiotic use in hospitalized COVID-19 patients with suspected or confirmed secondary pulmonary infections. There is a lack of data in terms of well-defined risk factors or predictors, and associated outcomes of secondary pulmonary infections in hospitalized patients with COVID-19 infection and, therefore, will not be a major focus of this review. Method A literature search was performed through MEDLINE, Pubmed, and Google Scholar using keywords of "coronavirus disease 2019 (COVID- 19)," "severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)," "secondary infection," "superimposed infection," "superinfection," "bacterial infection," "fungal infection," "bacterial pneumonia," "fungal pneumonia," "bacteremia," "fungemia," "hospital-acquired pneumonia (HAP)," and "ventilatorassociated pneumonia (VAP)" from January 1st, 2020 to December 31st, 2020. Our selection criteria comprised of articles with confirmed diagnoses of secondary bacterial and fungal pulmonary infections (defined as new microorganisms identified 48 h after admission) using respiratory tracts with corresponding blood cultures for similar microorganisms thought to be respiratory in origin in hospitalized adult COVID-19 patients. Respiratory tract cultures were defined as cultures obtained from sputum, endotracheal aspirates, and bronchoalveolar lavage (BAL). We also included articles in which the diagnoses of secondary pulmonary infections were suspected based on the description of cultures obtained that were respiratory in nature or microorganisms that are recognized to be respiratory in origin. Articles published in the English language were selected, and any cited references were reviewed to identify relevant literature in the English language that comprised of observational studies, case reports, and series that met our selection criteria that described secondary pulmonary infections in hospitalized COVID-19 patients. We excluded articles involving COVID-19 infections in children and pregnant women; nonhospitalized COVID-19 patients; patients with pulmonary co-infections (defined as infections diagnosed at the time of admission) by non-SARS-CoV-2 viruses, bacteria, and fungal microorganisms; secondary pulmonary infections from microorganisms that were known to be colonizers such as candida; studies that included both secondary infections and co-infections from a non-pulmonary source in hospitalized COVID-19 patients; and the diagnosis of secondary pulmonary infections made during the post-mortem examination of deceased COVID-19 patients. We screened 114 studies and included 49 studies that described secondary pulmonary infections in hospitalized adult COVID-19 patients that met our criteria (Fig. 1). 
Of the 12 studies published in a non-English language (defined as articles not available in English and with no English translation) that described secondary pulmonary infections in hospitalized COVID-19 patients, 6 were published in Mandarin, 4 in Spanish, and the remaining 2 in French. The diagnosis of COVID-19 was made by reverse transcriptase-polymerase chain reaction (RT-PCR) in all cases from respiratory tract specimens that included nasal and pharyngeal swabs, sputum, endotracheal aspirates, and bronchoalveolar lavage (BAL).

Incidence of secondary pulmonary infections

Among the 49 studies identified (Table 1), 28 (57%) were observational studies of hospitalized COVID-19 patients; the remainder, 21 (43%), were small case series and case reports. Of the 28 observational studies, 78.6% were retrospective and 21.4% were prospective in nature. The majority of observational studies originated from China in 25% (7/28) of cases, followed by 17.9% (5/28) in Spain, 14.3% (4/28) in France, 7.1% in the Netherlands and USA, respectively, and the remainder in Belgium, Denmark, England, Germany, Italy, Mexico, Pakistan, and Switzerland. A total of 5,047 hospitalized patients with COVID-19-related pneumonia were identified in the 28 observational studies included in our review (Table 1). The incidence of secondary bacterial pulmonary infections in hospitalized COVID-19 patients was 16% (580/3,633) and ranged between 4.8 and 42.8% in 14 observational studies, whereas the incidence of secondary fungal infections in hospitalized COVID-19 patients was 6.3% (171/2,703) and ranged between 0.9 and 33.3% according to 18 observational studies.

Microbiology of secondary pulmonary infections

Out of the 28 observational studies, 14.3% (4/28) had no description of the specific bacterial or fungal microorganisms identified (Table 2). The most common bacterial microorganisms identified in the respiratory tract cultures among the nine observational studies were Pseudomonas aeruginosa, Klebsiella species, Staphylococcus aureus, Escherichia coli, and Stenotrophomonas maltophilia, and Aspergillus fumigatus was the most common fungal microorganism identified [17]. Other rare opportunistic fungal infections such as Pneumocystis jirovecii (PJP) had been observed in four case reports/series included in our review [18][19][20][21].

The time to diagnosis of secondary pulmonary infections and use of antibiotics

The average time taken to diagnose secondary bacterial and fungal pulmonary infections from hospital and ICU admission among the 18 observational studies was 10 days (range 2-21 days) and 9 days (range 4-18 days), respectively (Table 2). The reported use of empirical antibiotics was 60-100% during the current pandemic across 11 observational studies (Table 2). Furthermore, although specific data on antibiotic resistance patterns were lacking in the majority of observational studies included in our review, a limited number of observational studies reported the detection of multidrug-resistant (MDR) microorganisms such as extended-spectrum beta-lactamase (ESBL) Klebsiella pneumoniae, ESBL Escherichia coli, MDR Pseudomonas aeruginosa, carbapenem-resistant Klebsiella pneumoniae, and methicillin-resistant Staphylococcus aureus (MRSA).

Discussion

In hospitalized COVID-19 patients, the incidence of secondary pulmonary infections was low at 16% (4.8-42.8%) for bacterial infections and lower for fungal infections with an incidence of 6.3% (0.9-33.3%). However, the frequency of empirical antibiotic therapy was high at 60-100% among several observational studies included.
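The pooled incidence figures above are simple event-count aggregations across the included studies. As a minimal sketch of that arithmetic (the per-study counts below are hypothetical, not the data extracted for this review), the calculation can be expressed as:

```python
# Illustrative only: pools secondary-infection counts across studies the way the
# review's summary figures are built (total events / total patients), and reports
# the range of per-study incidences. The per-study numbers below are invented.
def pooled_incidence(studies):
    """studies: list of (events, patients) tuples from individual reports."""
    events = sum(e for e, _ in studies)
    patients = sum(n for _, n in studies)
    rates = [e / n for e, n in studies]
    return events / patients, min(rates), max(rates)

bacterial_studies = [(30, 620), (55, 400), (12, 250), (90, 1100)]  # hypothetical counts
pooled, low, high = pooled_incidence(bacterial_studies)
print(f"pooled incidence {pooled:.1%} (per-study range {low:.1%}-{high:.1%})")
```

Reporting both the pooled proportion and the per-study range, as done in this review, conveys the heterogeneity across cohorts rather than a single average alone.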
The most common bacterial microorganisms identified in the respiratory tract cultures were 21.1% Pseudomonas aeruginosa, 17.2% Klebsiella species, 13.5% Staphylococcus aureus, 10.4% Escherichia coli, and 3.1% Stenotrophomonas maltophilia. Aspergillus fumigatus was the most common fungal microorganism identified to cause secondary pulmonary infections. Other rare opportunistic infections such as PJP were mostly confined to small case series and case reports. The overall time to diagnose secondary bacterial and fungal pulmonary infections was 10 days (2-21 days) and 9 days (4-18 days), respectively, from the time of hospital and ICU admission. In contrast, the incidence of secondary bacterial pulmonary infections during the 2009 Influenza A pandemic was up to 7% in critically ill patients [27]. However, for secondary fungal pulmonary infections, the incidence is as high as 14% in critically ill patients with seasonal influenza [28,29]. A retrospective study by Rouze et al. reported that secondary bacterial pulmonary infections were 1.6 times more likely to occur in critically ill COVID-19 patients compared to influenza patients. Four observational studies did not report any specific type of microorganism identified [3,[30][31][32]. In these studies, secondary pulmonary infections were minor secondary outcomes identified while assessing the many characteristics, risk factors, and outcomes of hospitalized COVID-19 patients. Furthermore, although 18 observational studies described secondary fungal pulmonary infections, predominantly Aspergillus fumigatus, there was an absence of a standardized definition, with heterogeneity in the diagnostic criteria used to differentiate true infection from colonization [33][34][35]. The microorganisms identified to cause secondary bacterial pulmonary infections in hospitalized COVID-19 patients are similar to microorganisms isolated during seasonal/pandemic influenza and even during the 2003 SARS outbreak [13,36,37]. The identification of gram-negative microorganisms in hospitalized COVID-19 patients is consistent with the pathogens commonly associated with hospital-acquired pneumonia, involving Pseudomonas aeruginosa, Klebsiella species, Stenotrophomonas maltophilia, and Acinetobacter baumannii, and does not necessarily suggest a specific preference for gram-negative infections in COVID-19 [38][39][40][41]. The time taken for the diagnosis of secondary pulmonary infections is highly variable, between 2 and 21 days from hospital admission and 4-18 days from ICU admission, according to the 18 observational studies included in our review (Table 2). This contrasts with secondary bacterial infections in patients with influenza infection, which are diagnosed earlier, 3-6 days from the initial presentation [36,42]. For secondary invasive pulmonary aspergillosis in influenza patients, the median time to diagnosis is between 5 and 10 days after ICU admission [15,29]. Although all included observational studies described respiratory tract cultures obtained more than 48 h after admission, the variability in time to diagnosis can be due to inconsistency in when, and a lack of information on why, surveillance cultures were obtained. Bronchoscopy may be a useful tool to obtain respiratory tract cultures of sufficient quantity to help diagnose and isolate microorganisms in secondary pulmonary infections while determining antibiotic sensitivities in hospitalized COVID-19 patients.
The routine use of bronchoscopy may even lead to over-diagnosis of secondary pulmonary infections from respiratory tract colonization. According to three observational studies, the incidence of secondary bacterial pulmonary infections was 15% and more when routine bronchoscopy with BAL was performed in critically ill COVID-19 patients requiring IMV [11][12][13]. Four observational studies reported that the incidence of secondary fungal pulmonary infections was 20% and more in critically ill COVID-19 patients when bronchoscopy with BAL was performed routinely post-intubation, in a serial fashion, or any change in clinical status due to atelectasis, new lung infiltrates on imaging, and thick secretions [14][15][16]43]. Chang et al. described that respiratory tract cultures obtained from BAL have a higher positivity rate when compared to endotracheal aspirate and a greater tendency to detect different or second microorganisms as a cause of secondary pulmonary infections [12]. In the study by Torrego et al. , the microbiology findings on BAL resulted in a change in antibiotic prescribed in 83% of critically ill COVID-19 patients requiring IMV. However, the bacterial microorganisms identified such as Pseudomonas aeruginosa, Klebsiella species, Enterobacter cloacae, and Staphylococcus aureus was similar to bacterial microorganisms in mechanically ventilated non-COVID-19 patients [11]. However, bronchoscopy is often avoided as it is an aerosol-generating procedure that will predispose healthcare workers and patients to a substantial risk of further transmitting COVID-19 infection. The use of bronchoscopy in COVID-19 patients has been recommended when current respiratory samples from sputum and endotracheal aspirates are negative, in which an alternate diagnosis provided by BAL would significantly impact clinical management [44]. Nevertheless, two recent single-center retrospective studies showed no increase in the risk of COVID-19 transmission to healthcare providers when bronchoscopy is routinely performed while adhering to the proper infection control protocol [12,45]. The current knowledge of the risk factors for secondary pulmonary infections in SARS-CoV-2 is continuously evolving but remains poorly understood. Although it is becoming apparent that secondary pulmonary infections that occur in hospitalized COVID-19 patients can be associated with worse outcomes, it remains unclear if critically ill COVID-19 patients are at a greater likelihood of developing secondary pulmonary infections. COVID-19 infection will trigger innate and adaptive immune responses, including local immune response, recruitment of macrophages and monocytes, the release of cytokines, and prime adaptive Tand B-cell in an effort to resolve underlying inflammation [46][47][48][49]. However, in some cases, a dysfunctional immune response occurs that renders COVID-19 patients vulnerable to secondary pulmonary infections. Lymphocyte count, specifically T-cells, is substantially decreased, whereas inflammatory mediators of interleukins-(IL-)2, IL-6, IL-8, IL-10, tumor necrosis factor-alpha (TNF-a), and interferon-gamma are markedly increased within a week from COVID-19 presentation before recovering to normal levels, two weeks later [30,[50][51][52][53]. This dysregulated immune response that is seen to a greater degree in those with severe COVID-19 infections has an immunosuppression stage following the proinflammatory phase characterized by a sustained and substantial reduction in peripheral lymphocyte count [48,50,54]. 
Similar immunological findings have been described in SARS-CoV patients during the 2003 epidemic and H1N1 influenza during the 2009 pandemic [53][54][55]. This state of lymphocytopenia-induced immunosuppression observed in many hospitalized COVID-19 patients may explain the time taken for secondary pulmonary infection diagnosis seen in studies included in our review [30][31][32]56]. Furthermore, in a multi-center study involving 410 COVID-19 patients, secondary pulmonary infections were significantly associated with outcome severity. Critically ill patients had the highest percentage of secondary pulmonary infections (34.5%) compared to severely ill (8.3%) and moderately ill (3.9%) COVID-19 patients [32]. This high rate of secondary pulmonary infections occurs despite a majority of critically ill patients (92.9%) receiving antibiotics compared to 83.3% and 59.4% in the severely ill and moderately ill groups. Five observational studies reported that among critically ill COVID-19 patients, non-survivors/critically ill patients had a greater tendency to suffer from multi-organ dysfunction and develop secondary pulmonary infections despite up to 98% of them received antibiotics [24,30,31,56,57]. In all these studies, the degree of lymphocytopenia and corticosteroids administration was significantly higher in the critically ill/non-survivor group than in other groups. Furthermore, although the nadir CD4 + T-cell count was less than 200 cells/10 6 L in the majority of case reports/ series describing PJP among HIV patients co-infected with COVID-19 [19][20][21], a case report by Menon et al. described a hospitalized COVID-19 patient diagnosed with PJP despite the absence of HIV infection. Though her nadir CD4 + T-cell count was 291 cells/10 6 L and she was on chronic oral budesonide for her ulcerative colitis, the improvement with trimethoprim-sulfamethoxazole supported the diagnosis of secondary PJP infection [18]. On the contrary, a retrospective study by Karmen-Tuohy et al. reported no increased incidence of secondary bacterial or even PJP pulmonary infections in HIV-positive COVID-19 patients who were compliant with antiretroviral therapy, regardless of their CD4 + T cell count [58]. There is no single study to the current date, which has formally assessed lymphocytopenia as a risk factor for secondary pulmonary infections. Moreover, corticosteroids are frequently used in COVID-19 patients to prevent and treat cytokine storm and ARDS, which are suspected to be partly caused by dysregulated host immune response [50,53,59]. Recent studies assessing the use of corticosteroids in hospitalized COVID-19 patients demonstrated that a short course of corticosteroids over ten days has shown to be beneficial in the setting of hypoxic respiratory failure requiring oxygen therapy and mechanical ventilation requirement [60,61]. However, previous studies have demonstrated that corticosteroids may inadvertently increase the mortality and secondary infections in influenza patients, and prolong viral shedding and induce lymphocytopenia in SARS-CoV patients by down-regulating the innate and adaptive immune system. [29,54,62,63] Currently, there is no formal study to assess the risk of secondary pulmonary infections associated with corticosteroids administration in COVID-19 patients. However, in our review, although the majority of patients were receiving corticosteroids, the timing of administration, duration, and the dose of corticosteroids were not clearly described. 
Differentiating viral from secondary bacterial and fungal pulmonary infections remains a challenge for clinicians. This diagnostic uncertainty has contributed to the overuse of antibiotics in patients with COVID-19 viral illness. Although the incidence of secondary bacterial pulmonary infections in COVID-19 patients is low, the reported use of empirical antibiotics is 60-100% among the observational studies included in our review ( Table 2). These findings vastly differed when compared to patients with seasonal/pandemic influenza, in which the reported use of empirical antibiotics was 12-50% [36]. It is essential to consider how the frequent use of empirical antibiotic therapy could affect the prevalence of multidrug-resistant bacteria. The rising number of antibiotic use may predispose COVID-19 patients, especially those who are critically ill, to sepsis from secondary multidrug-resistant bacterial infections. An observational study during the SARS outbreak in 2003 demonstrated that MRSA acquisition identified on screening using nasal swabs drastically increased from 2.2 to 3.5 cases per 100 ICU admissions (pre-SARS and post-SARS period) to 25.3% per 100 ICU admissions (during SARS period), despite extensive infection control precautions [64]. This finding coincides with the increased use of broad-spectrum empiric antibiotics (4 th generation cephalosporins, fluoroquinolones, aminoglycosides, and carbapenems) during the SARS period, in which MRSA was responsible for up to 48% of microorganisms isolated in patients with VAP. Furthermore, common bacterial microorganisms identified on post-mortem examination of SARS patients were Pseudomonas aeruginosa, Klebsiella species, and Staphylococcus aureus, which are known for their high resistance to broad-spectrum antibiotics [65,66]. In the studies that we reviewed, antibiotic sensitivities of microorganisms and treatment duration were not reported even though MDR microorganisms were observed. Based on the current microbiological data from our review, it remains imperative that empiric antibiotic therapy covers multidrugresistant microorganisms such as MRSA and ESBL that are associated with a high fatality rate when concerns exist of possible secondary pulmonary infections in critically ill COVID-19 patients [39,40]. Our review supports the notion of frequently obtaining surveillance cultures (from sputum, endotracheal aspirate, blood, and BAL if beneficial) and daily decision-making on antibiotic requirements to deescalate and avoid prolonged therapy that will lead to the development of antibiotic resistance. The considerable variability in the incidence of secondary bacterial (4.8-42.8%) and fungal (0.9-33.3%) pulmonary infections reported and time taken for the diagnosis can be due to several limitations across the various observational studies included in this review. (1) The majority of the studies examining the incidence of secondary pulmonary infections are of poor quality and limited by the lack of a clear definition of secondary infections versus co-infections. There is also an absence of a standardized definition with the heterogeneity of diagnostic criteria used to differentiate between true invasive pulmonary fungal infection from colonization. [33][34][35] (2) Moreover, secondary pulmonary infections observed are a bystander (minor secondary outcome) result or identified during subgroup analysis while assessing the many characteristics, risk factors, and outcomes of hospitalized COVID-19 patients [3, 24, 30-32, 56, 58]. 
(3) It is not uncommon for early and late secondary infections to be frequently clustered together in the currently available literature for COVID-19 patients that may lead to the under-or overestimation of the exact incidence of secondary pulmonary infections, depending on the duration of the study period, especially among the 78.6% retrospective studies included in our review [67]. (4) The wide range of incidence rates reported for secondary pulmonary infections might have been due to the differences in the patient population, severity of illness, diagnostic sampling, and frequency of surveillance cultures obtained across various observational studies from multiple different countries. The routine use of bronchoscopy with BAL in critically ill COVID-19 patients where many are intubated and requiring IMV may lead to the over-diagnosis of secondary pulmonary infections [14][15][16]43]. (5) The restricted search methodology that is confined to English literature as we (authors) are not well-versed in other languages during this global pandemic likely contribute to the under-recognition of the true incidence of secondary pulmonary infections. (6) Furthermore, the high mortality rate associated with COVID-19 pneumonia may be an independent competing risk factor for the development of late secondary infection, leading to an unintended underestimation of the actual risk in non-deceased COVID-19 patients [67]. (7) Lastly, the widespread use of empirical antibiotics, analgesics, and corticosteroids likely mask underlying symptoms of infections, and lead to the delay and also underdiagnosis of secondary pulmonary infections. This could be due to the lack of routine surveillance cultures obtained because of fear towards COVID-19 transmission to health care professionals with prolonged patient contact [68]. These explain the variable incidence rate and inability to effectively perform a meta-analysis to determine better the incidence, risk factor, prognostic marker, and secondary pulmonary infection outcome in COVID-19 patients. Conclusion Our review on secondary pulmonary infections is limited by the lack of a clear definition of secondary infections versus co-infections, the inconsistency of the type microorganisms identified and time that surveillance cultures are obtained, the lack of information available on the associated antibiotic sensitivities of microorganisms, and duration of antibiotic treatment across various observational studies, small case reports and series, and variability in clinical characteristics reported in hospitalized COVID-19 patients. Additionally, with an observed strain being placed on the healthcare systems during the ongoing COVID-19 pandemic, there is a need for organized antimicrobial stewardship programs in the hospital to minimize the use of unnecessary empiric antibiotics and de-escalation of antibiotics when possible. As variation continues to exist on what constitutes a secondary infection (that we defined as infections occurring 48 h after admission) due to the lack of clear and consistent definition among many observational studies, we hope that a large, well-designed study can be performed in the future to accurately determine the incidence, microorganisms, risk factors, predictors, and outcomes of secondary pulmonary infections in hospitalized COVID-19 patients. Author contribution All authors had access to the data and were involved in writing the manuscript. Funding None.
Relationship between Duration of Pulp Exposure and Success Rate of Apexogenesis

INTRODUCTION: Apexogenesis is a way to preserve the vitality of damaged open apex teeth with mild or moderate pulp involvement. Such teeth cannot be repaired through the usual treatments. This treatment provides physiological conditions for the root to develop to its normal length. The aim of this study was to determine the success rate of apexogenesis according to the duration of pulp exposure. MATERIALS AND METHODS: In this animal study, mineral trioxide aggregate (MTA) and calcium hydroxide (CH) were used. The examined teeth were canines of cats with open apices. The treatment was accomplished in three periods of 1, 3, and 6 weeks after pulpal exposure. Four months later, the results were evaluated histologically and radiographically. RESULTS: The results showed no significant difference between the success rates of MTA and CH. Moreover, even after 6 weeks of pulpal exposure the treatment was successful. Root development and apical closure were detected in approximately 42% of teeth, while 33% of samples had a healthy Hertwig's sheath. CONCLUSION: The findings of this study suggested that conservative treatment of traumatized teeth after 1.5 months of pulpal exposure could be successful.

INTRODUCTION

The treatment of open apex teeth is one of the most difficult and important subjects in endodontic treatment. An important factor in such treatments is the extent and severity of pulpal inflammation. Numerous studies have evaluated the progress of inflammation in the pulp and the periapical region of closed apex teeth. Their results have shown that one week after pulp exposure, the area adjacent to the exposure point becomes necrotic and congested, and after four weeks, almost all parts of the pulp become necrotic (1)(2)(3)(4)(5)(6)(7). A healthy pulp is essential for the proper development of the root (8). It seems that the pulp of an open apex tooth is potentially more resistant to various irritations; thus, it may survive longer after a severe trauma than the pulp of a closed apex tooth (9). The condition of Hertwig's epithelial root sheath is another important factor that affects the treatment of an open apex tooth. In evaluations of the periapical condition and Hertwig's sheath, it has been shown that one month after pulp involvement, Hertwig's sheath remains healthy (10). Root growth is only possible when Hertwig's epithelial root sheath has retained its specialized function (11). Various materials have been used in pulp capping and pulpotomy procedures (12)(13)(14). Calcium hydroxide (CH) has been the most frequently used material since 1920. Nowadays, newer materials such as MTA are preferred for vital pulp therapy (15). They stimulate significantly greater hard-tissue formation in the periradicular tissues and result in less inflammation compared with the use of CH (16). Maintaining the vitality of immature teeth until their full root development is very important. Loss of vitality of these teeth before root completion leaves a poor crown/root ratio, weak roots more prone to fracture, and teeth more susceptible to periodontal breakdown. The findings of this study may help us to plan a more conservative treatment for such teeth referred to us at different periods after trauma. Thus the aim of this study was to evaluate MTA and CH used as pulpotomy agents in open apex teeth of cats at different times after pulp exposure.
MATERIALS AND METHODS

Eleven one-year-old cats with permanent open apex canine teeth were selected for this study. The treatment was done on their canine teeth. They were divided into three groups: a one-week group with 4 cats, a three-week group with 4 cats, and a six-week group with 3 cats. The cats were given general anesthesia with a mixture of 0.8 cc of ketamine hydrochloride (50 mg/ml) and 0.2 cc of 2% Rompun (a relaxant). After 3-5 minutes, intraoral anesthesia was achieved by injection of 1.8 ml of 2% lidocaine containing 1:100,000 epinephrine. A radiographic image was taken to confirm that the apices of the teeth were open. Using a high-speed fissure bur number 8 (Tizkavan, Tehran, Iran) and copious water spray, the crowns were ground at the incisal part until a very small exposure, about 0.5 mm in diameter, was achieved. These teeth were left exposed according to their respective group for 1, 3, or 6 weeks. Subsequent treatment was performed as follows: the cats were anesthetized again, the access cavity was prepared, and pulpotomy was performed until vital pulp tissue and normal bleeding were detected. The necrotic and inflamed pulp tissue was excavated completely. The apexogenesis materials (MTA and CH) were placed on the healthy pulp as follows: CH on the right upper and lower teeth, MTA on the left upper and lower teeth. A double-seal method with glass ionomer and amalgam was used for the coronal seal. In each group, one tooth was left intact as a control. Radiographic records were taken and the animals were followed. Four months later, after general anesthesia, another radiograph was prepared for each tooth. Then, vital perfusion was processed. Finally, histological slides were prepared, and pulpal inflammation, periapical inflammation, Hertwig's sheath health status, and root development were evaluated. For evaluation of inflammation severity, an area of 100 square micrometers was selected from the most inflamed region in the periapical zone, and plasma cells, lymphocytes, macrophages, and polymorphonuclears were counted at ×400 magnification. Considering the number of cells, four categories were developed:
1. Without inflammation: 0-1 cells per 100 µm²
2. Slight inflammation: 2-5 cells per 100 µm²
3. Moderate inflammation: 6-9 cells per 100 µm²
4. Severe inflammation: over 9 cells per 100 µm²
The type of inflammation was determined according to the type of infiltrating cells:
a. Chronic inflammation: plasma cells, lymphocytes, macrophages
b. Acute inflammation: polymorphonuclears
The Chi-square test was used in all statistical analyses.

RESULTS

In all control teeth, the pulp was normal and without any inflammation. The odontoblastic layer and the development of the root were normal, and the epithelial root sheath had a normal pattern (Figure 1). The results of the histopathological evaluation are shown in Table 1. Comparison of the two materials at the same time: In the one-week group, there was a significant difference between the two materials only in the condition of Hertwig's sheath (P<0.05). In the MTA-treated group, the condition of the Hertwig's layer was better (Figure 2). In the three- and six-week groups, there was no significant difference between the two materials in any of the four factors (P>0.05) (Figure 3, Figure 4, Figure 5, Figure 6). Comparison of the two materials regardless of time: There was no significant difference between the two materials in the four studied factors (p>0.05).
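The severity grading and group comparison just described lend themselves to a short worked example. The sketch below (Python, with invented cell counts rather than the study's actual histology data) classifies teeth into the four inflammation categories and compares two treatment groups with a chi-square test, mirroring the stated analysis:

```python
# Illustrative sketch only: grades periapical inflammation from infiltrating-cell
# counts per 100 µm² using the categories above, then compares two hypothetical
# treatment groups (MTA vs. CH) with a chi-square test. The counts are invented.
from scipy.stats import chi2_contingency

def inflammation_grade(cells_per_100um2: int) -> str:
    """Map an infiltrating-cell count to the severity category used above."""
    if cells_per_100um2 <= 1:
        return "none"      # 0-1 cells
    if cells_per_100um2 <= 5:
        return "slight"    # 2-5 cells
    if cells_per_100um2 <= 9:
        return "moderate"  # 6-9 cells
    return "severe"        # over 9 cells

mta_counts = [1, 3, 0, 2, 6]   # hypothetical per-tooth counts
ch_counts = [4, 7, 10, 2, 5]   # hypothetical per-tooth counts

grades = ["none", "slight", "moderate", "severe"]
table = [
    [sum(1 for c in mta_counts if inflammation_grade(c) == g) for g in grades],
    [sum(1 for c in ch_counts if inflammation_grade(c) == g) for g in grades],
]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")  # p > 0.05 would mean no significant difference
```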
Comparison of the three periods of time with the same material: In both material groups, there was no significant difference in the studied factors. Comparison of the results in each time period regardless of the type of material: There was a significant difference only in the rate of apex closure across the three time intervals (p<0.05). The results in the one-week group were better than those of the three- and six-week groups, and the results in the three-week group were better than those of the six-week group. Comparison of the results of the upper and lower jaws with a constant material: Neither MTA nor Ca(OH)2 showed a significant difference in the studied factors between the upper and lower jaws (p>0.05).

DISCUSSION

It is known that Hertwig's root sheath can organize the apical cells and cause continued formation of the root even after pulp necrosis, which demonstrates that the root sheath is not destroyed (17). Cvek stated that the root sheath is usually sensitive to trauma; however, in some circumstances it may resist damage from trauma and infection (18). In a study on open apex canine teeth of cats with MTA, CH, and formocresol conducted by Ghodduci et al., there was a significant difference in the rate of inflammation and root development between the MTA- and CH-treated groups. In all of the cases treated with MTA, the radicular pulp was normal and in 88.9% of them the root had developed completely, but in the CH-treated group, only 75% of the teeth had a vital pulp (19). The time interval between pulp exposure and apexogenesis in our study may be the reason for the differences between our results and theirs; our findings were more dependent on the elapsed exposure time. In the study of Thomas et al. (20) on 12 monkey incisor teeth and the study of Abedi et al. (16) on dog canine teeth, there was less pulpal inflammation in the MTA-treated group. The time factor could be a contributing element in this study too. In this study, treatment of some teeth in the six-week group was successful and root development was ongoing. This considerable finding can affect the treatment plan of an open apex tooth with no positive response to vitality tests at the first observation. In spite of trauma and infection, Hertwig's root sheath remained viable and continued to map out the apical segment, resulting in root-end development (21). Andreasen et al. suggested that by completely removing microorganisms and applying a material that is not irritating to the periapical tissue in the root canal, Hertwig's sheath may continue root-end completion in an apparently normal manner (22). We can manage these teeth with conservative treatment, preserving Hertwig's sheath in order to complete physiological root development.

CONCLUSION

According to the results of this animal study, the conservative treatment of pulpally exposed traumatized teeth after 6 weeks can be successful.
building and the link between public trust and corruption perception: Comparative analysis before and after the Armenian Velvet Revolution in 2018 : Eastern European post-communist countries inherited pervasive corruption after the breakup of the USSR. Public trust was the crucial factor in tackling corruption and democracy building in these countries. This article takes Armenia as a case to study the antecedents and evolution of trust in Eastern European post-communist countries that went through a government coup in the 21st century. By comparing the corruption situation in Armenia before and after the Velvet Revolution 2018, we scrutinise how trust was and is critical to combating corruption and democracy building. We argue that in transition governments, one can distinguish two sources of creating public trust. The first wave generates when the government is newly established, and people trust the leader and his persona. Arguably, in this stage, the level of trust generated is based on expectations. The second wave of trust comes with the government’s actual performance, measured partly based on corruption perception. INTRODUCTION Corruption was a pandemic problem in the USSR (Kramer 1977;Willerton 1992).This corruption included bribery, special treatment for party leaders and members, the inefficiency of governing institutions, patronage and other forms of self-dealing.Research studies suggest that the political corruption was a contributing cause to the eventual breakup of the Soviet Union in 1991 (Leitner and Meissner 2018;Leibert, Condrey, and Goncharov 2013).It did that by undermining the performance of the economy and the legitimacy of the regime. However, this political corruption did not disappear with the breakup of the USSR.Instead, as its former republics secured independence, the political corruption from the Soviet era and institutions carried over into all the new states (Dudwick 1997;Schultz 2008;Holmes 2013;Leitner and Meissner 2018).These states faced common problems such as building their own political institutions, including independent judicial systems, establishing free and fair election systems, strengthening civil society and engaging in public administration reform (Diamond 1999;Dowley and Silver 2002;Howard 2003;Liebert, Condrey, and Goncharov 2013;Luo 2005).These countries also faced common problem of trust and legitimacy.In part because of the entrenched corruption, the regimes had to gain initial support from their people, convincing them that the governments were serving the people and not selfdealing, as was often the criticism during the communist era. Critical to the battle to address corruption or its perception was the issue of trust.As Robert Putnam (1993;2000) argued, the creation of trust and its related concept of social capital are essential building blocks in the construction of democracies.Trust, for Putnam, is essential among citizens to facilitate the relationships necessary to work together economically and in civil society with such skills paralleling or reinforcing those necessary in politics.His focus for trust, though, was among citizens or among members of a political community.There is also a different type of trust, i.e., public perception regarding political or governmental institutions.Many factors can influence this trust, including their performance in delivering on policy promises or addressing the needs of its members (Gilbreath and Balasanyan. 
2017).Trust is also arguably connected to perceptions of corruption and the fairness of governmental institutions.Yet, in seeking to understand how trust is related to politics and corruption, we scrutinise one of many emerging questions, such as does trust in government change as perceptions of corruption evolve?Additionally, we study whether the quality of trust towards the government changes after a government coup and in which way. This article looks at what are the antecedents and evolution of trust in Armenia during the transition up to 2019 and before the 2020 renewed conflict over Nagorno-Karabakh with Azerbaijan.Armenia is an emerging economy that became independent along with 15 other countries after the collapse of the USSR.The state had a deep transformation of ruling elites after the breakdown of the Soviet Union, which led to a slow process of establishing democratic institutions and managing the links of oligarchs in politics.Specifically, using Armenia as an initial case study and as part of a larger project on the role of trust and institutional change in postcommunist states, it looks at how the building of trust was and is critical to combating corruption and democracy building.Moreover, Armenia was among the post-communist countries to go through a revolution in 2018 to replace the governing elites because of the lack of trust due to the high corruption level in the country.This paper seeks to elucidate how various colour revolutions in the postcommunist world have been a product of and impacted trust and corruption perception.Specifically, this article focuses on Armenia and its Velvet Revolution in 2018.The article seeks to comprehend how institutional trust is generated after a critical juncture and how it ramifies in society. What this article hopes to accomplish is an analysis of the relationship between trust and perceptions of corruption, and how both may be related to democracy building.To do that, the article describes the perception of corruption in Armenia before and after the 2018 Velvet Revolution.We specifically limit the time frame of the analysis to 2020, 2 years after the revolution.We do so for two reasons. First, the 2020 war with Azerbaijan and the 2020 pandemic and now the war in Ukraine and migration of Russian nationals to Armenia were major disruptions to the Armenian political system and regime.Examination of any survey research or data that includes dates after 2020 will not be able to untangle the impact these events had on trust and corruption perception related to Velvet Revolution in Armenia.Thus, we isolate or narrow our frame of post-coup trust analysis to be able to assess, although short term, the impact of the coup.Second, there is a limited survey research looking at changing public attitudes in Armenia since the 2018 coup.The studies that do exist provide limited analysis regarding attitudes towards the coup and how it affects trust.In effect, examination of trust beyond 2020 is limited by the inability to untangle other major disruptions to the regime and simply by a paucity of data. The larger thesis this article seeks to advance is that the road to democracy requires the development of trust in political institutions, and the latter is connected to perceptions of corruption. 
POLITICAL CORRUPTION AND THE TRANSITION TO POST-COMMUNIST STATES The road of post-communist states to stable democracies has been rocky.Collectively, all former Soviet or communist states faced challenges in rooting out political corruption, with varying levels of intentionality and success, while attempting to transition to democracy (Rose-Ackerman 1999;Steve and Rousso 2003;Worth 2015;Schultz and Harutyunyan 2016).In the case of the three Baltic states of Estonia, Latvia and Lithuania, they were more successful in combating corruption and establishing democracy (Holmes and Krastev 2020).Whereas other newly independent states such as Armenia, Georgia and Ukraine have had less success and have continued to endure higher levels of corruption or its perception (Dudwick 1997;Danielyan 2001;Stefes 2008;Gallina 2010;Kuzio 2015;Leitner and Meissner 2018). In these latter three countries, often referred to by colours, revolutions have been undertaken to root out political corruption and initiate democratic reforms (Mitchell 2012). Post-independence, Georgia's president was Eduard Shevardnadze who had served as the Minister of Foreign Affairs in the USSR from 1985 until its breakup in 1991.His tenure as president mostly meant that the politics of Georgia after 1991 continued the path of Soviet-style control that had been in place.However, in 2003, a pro-western movement called the Rose Revolution took place across the country with protests that challenged the parliamentary election results.Eduard Shevardnadze was forced out of office after protests that occurred when he tried to convene the disputed new parliament.This led to the eventual election in 2004 of Mikheil Saakashvili as president, who then in 2004 had to deal with a mini-Rose Revolution in Batumi and Adjara. This second Rose Revolution challenged the nearly dictatorial power of Aslan Abashidze, who served as the head of the Adjara region.Then in 2019-2020, street protests in the capital took place and were launched after Sergei Gavrilov, a Communist Party member of the Russian Duma, sat in a chair reserved by protocol for the Head of Parliament.He delivered a speech in Russian praising Russian-Georgian relations, even though he had voted in favour of the independence of Abkhazia, a region that is part of Georgia but not recognised by Russia. In 2004, Ukraine had its "Orange Revolution" to protest significant fraud in its presidential elections.It resulted in a new election that selected Viktor Yushchenko who pledged to address political reform in that country.But then, in 2014, the Euromaidan Revolution erupted over the failure of President Viktor Yanukovych to sign an agreement with the European Union that would have moved Ukraine in closer alignment with it.Among the concerns was the perception that Yanukovych was too closely aligned with Russia and was also politically corrupt (Hale and Orttung 2016).Finally, in 2019 on a campaign promise to address persistent corruption, Volodymyr Zelensky was elected president. 
Post-independent Armenia, too, has had several revolutions or mass political protests.Levon Ter-Petrosyan was the first president of post-Soviet Armenia who was forced to step down in 1998 after allegations of fraud and corruption.He was replaced by Robert Kocharyan, who in his first two elections too faced claims of fraud.In 2008, there were mass protests in Armenia, challenging the results of a disputed presidential election between Kocharyan and Ter-Petrosyan.Supporters of the unsuccessful Ter-Petrosyan alleged electoral fraud, resulting in widespread demonstrations in Yerevan and across the nation.On 1 March, Kocharyan, with the approval of the Armenian parliament, declared a 20-day state of emergency.He banned future demonstrations and censored the media from broadcasting any political news except that issued by official state press releases.Domestic and international criticism of the bans were significant. Then in 2018, there were anti-government protests in Armenia from April to May 2018, which have been called the Velvet Revolution.They were led in part by Nikol Pashinyan, who was the head of the Civil Contract party.They were in response to President Serzh Sargsyan's effective repudiation of his pledge not to return to office as prime minister after his term as president ended.At one point, Pashinyan was arrested and held in solitary confinement overnight.Protests ensued.Eventually, the National Assembly elected Pashinyan Prime Minister after Sargsyan was forced out of the race.His government was fortified when in December 2018, the "My step" bloc led by Pashinyan entered the parliament with 88% of electoral voices (Grigoryan, 2018). After eliminating the old governing elites, in 2019, the new established government's main tasks became strengthening the democracy in the country and developing a balanced diplomatic relationship with the EU, Russia and neighbouring countries.Nevertheless, the following year turned out more challenging for Pashinyan's government.In Spring 2020, the lockdown was introduced all over the country and the corona pandemic adversely impacted the implementation of economic and political reforms.Yet, the worse for the Pashinyan's government happened on 27 October when Azerbaijani military forces backed with support from Turkey and recruited fighters from Syria attacked the disputed territory of Artsakh (Nagorno Karabakh) inhabited with Armenians (McKernan, 2020).After the disadvantageous ceasefire for Armenia, anger was followed by protests against the prime minister (BBC News, 2020).The protests continued into the beginning of 2021, accusing Pashinyan of betrayal. The political demonstrations or so-called revolutions in Armenia, Georgia and Ukraine share several commonalities.All shared concerns and perceptions among voters that there was to a different extent political or electoral corruption or fraud (Gilbreath & Balasanyan 2017;Iskandaryan 2014;Dominioni 2017;Kovalov 2014).There were fears of democratic backsliding and a demand for reforms.Agitation over the degree of alignment with the West or Russia also seemed critical.All three countries having a territorial proximity with Russia were challenged in their efforts for economic and military independence.These three countries had historically "disputed" territories.But the demand to address political corruption clearly was a dominant theme (Baev, 2018;Galstyan, 2018;Gricius, 2019).There was also among these three countries a problem when it came to trust. 
After the Soviet Union's collapse, the level of institutional trust in three transition economies decreased, yet there was increasing interpersonal trust (Habibov & Afandi 2015).Personal ties became more reliable and made it quicker or easier to get things done. In these countries where interpersonal trust was high and institutional trust was low, the corruption perception level was high (Tonoyan, 2005;Transparency International, Corruption Perception Index 2003).Although all three countries were facing challenges in building democratic political institutions, Armenia had additional economic and historical challenges with its two neighbours.Closed borders with Turkey and Azerbaijan were blocking the economic development of Armenia.After the collapse of the Union, Azerbaijan became governed by Aliyev's family clan, and Aliyev was nominated as a person of the year in 2012 in organised crime and corruption (OCCRP, 2012).However, Turkey's dictatorship under Erdogan's government left little hope for the Armenian government to establish cooperative relationships with neighbours and become less economically dependent on Russia. TRUST AND DEMOCRACY The notion of trust has been extensively researched among social scientists during recent decades.According to political scientists, trust is a slowly established habit among people at the interface or overlapping of both commercial and civic activities (Putnam 1993;Fukuyama 1995).Psychological research defines it closer to morality and in contrast to unethical behaviour (Rotter 1980).Economist Albert Hirschman suggests that trust is "a moral good that grows with use and decays with disuse" (Hirschman 1984).Trust, in terms of how Putnam employs the concept, is mostly about trust among members of a community, and it is tied to market transactions or behaviour that takes place in civil society.But trust is also connected to attitudes towards government (Almond and Verba 1963).If individuals trust their government, then they are more likely to support it, its laws or its institutions because of a perception or belief that it is serving them or their needs.Trust is thus connected to regime legitimacy and eventually to democracy (Almond and Verba 1963;Mischler and Rose 1997;Zmerli and Newton. 2008). 
Scholars use trust to study its association with corruption level or its perception in a country (Bouckaert and van de Walle 2003;Warren 2004;Wroe, Allen, and Birch 2012).Empirical research displays trust to be both a cause and consequence of perceptions of corruption.Generalised trust creates reciprocity and nurtures social relations (Fukuyama, 2005).High levels of personalised trust can have adverse effects on the corruption level.In countries where generalised trust is higher, the perception of corruption is lower and vice versa.Arguably, countries with higher perceived corruption level among individuals have lower institutional trust level (Anderson & Tverdova 2003;Tonoyan 2005;Chang & Chu, 2006).Generalised or institutional trust has been historically low in post-communist Eastern and Central European countries (Paxton, 1999;Wallace & Latcheva, 2006).In the Armenian society, where family and kin comprise one of the essential parts of life, people heavily rely on these ties in everyday transactions.Thus, in Armenia, because of high levels of particularised trust, non-market corruption was more common, and illicit activities were mainly committed through created networks (Scott, 1972;Jar-Der Luo 2005;Tan, Yang, and Veliyath 2009).However, following the collapse of the Soviet Union, economic recession and the half-established public institutions, one did not see a high level of political trust.Thus, the high level of the perception of corruption persisted for decades in the country after the breakdown of the Soviet Union. Two main theories tend to explain what impacts the trust level in public institutions.Cultural theories of trust argue that trust in public institutions originates outside the political sphere (Jackman & Miller, 1996;Foley & Edvards, 1999).They argue that political and economic performance is not strongly related to political trust.The former changes very slowly in time and hence has a low impact on the latter.They claim that trust towards governmental institutions is instead an outcome of prevailing social norms and culture in a country.Hetherington (1998), advocating for the institutional theory of trust, argues that the performance of political institutions is decisive in establishing a trust level.Governments performing more satisfactorily can generate higher trust.At first glance, one would assume that improving socio-economic conditions would be the only key to increasing the trust in government in emerging economies.However, after the Velvet Revolution in Armenia when Pashinyan started to lead the government, he possessed a very high level of trust (in the first survey that we cite in this article, 91% had a favourable opinion about Nikol Pashinyan as a politician and person).However, at this point, he had not introduced significant socio-economic improvements in the country yet. Thus, in transition governments, one can distinguish two sources of creating public trust.The first wave generates when the government is newly established, and people trust the leader and his persona.We suggest that in this stage, the level of trust generated is based on the expectations.If individuals expect reforms to be successful, then initial trust will go up.The second wave of trust comes later, according to the government's actual performance, as measured in part based on perceptions of corruption.Here, if the government performs up to expectations, then trust goes up; if it fails expectations, then trust goes down. 
METHODOLOGY What we aim to test is the relationship between trust and perceptions of corruption.Specifically, as suggested above, we argue that there is a two-stage model or hypotheses being proposed.The first stage is where post-regime change trust in government goes up based on expectations of positive reform.If these expectations are met, then initial trust rises.The second stage looks at the actual performance of the government in office.If the government lives up to expectations, then trust increases; if it fails, then trust goes down.However, with the second wave of trust, a critical barometer of trust is corruption perception.By that, if the public believes the government to be corrupt or operating in a corrupt fashion, then trust goes down.Conversely, if the public perceives the government to be addressing corruption, then trust goes up.As noted above, building institutional trust is one of the several components critical to post-communist regime change to democracy.This article tests the second stage of trust and how it is related to perceptions by the Armenian people in terms of how well the government is addressing the corruption. For the purposes of this article, we follow the definition of corruption provided by Transparency International (Transparency International).They define it as "abuse of entrusted power for private gain."Their definition includes acts such as: Public servants demanding or taking money or favours in exchange for services Politicians misusing public money or granting public jobs or contracts to their sponsors, friends and families Corporations bribing officials to get lucrative deals In constructing their yearly global ranking of corruption, they have constructed a corruption perception index survey which is derived from many sources that seek to ascertain how corrupt a population perceives its government.Since it is impossible to directly detect all forms of corruption in society and even to classify whether a specific act constitutes corruption, this article accepts the fundamental premise that it is a perception of corruption that is critical to the sense of trust. Using this definition of corruption, we link surveys that measure trust in various institutions in the Armenian government to perceptions of corruption by the Armenian people.The article uses a "before and after" methodology, seeking to determine how the 2018 Revolution was a product of changing perceptions of corruption and trust and, subsequently, how it changed both.Unfortunately, the scantiness of longitudinal survey data during the time of this research limits the possibility of deep statistical analysis in this paper. 
TRUST AND POLITICAL REFORM IN ARMENIA Armenia continues to have relatively high levels of corruption or its perception after decades of the collapse of the Soviet Union.According to Transparency International, in 2002, Armenia's highest rank was 78, among 133 countries.The comparative stage ranking dropped to its lowest of 129 in 2011 when Serzh Sargsyan was the president and mass protests were occurring in Yerevan for political reform in the country.After introducing the new methodology by Transparency International in 2012, Armenia scored on average 35 from 2012 to 2018.According to Transparency International 0 means highly perceived corruption level in the country, and 100 indicates free from corruption.After the Velvet Revolution in 2018, Armenia's corruption perception index in 2018 was 42, demonstrating a notable seven-point improvement compared with the previous year (Transparency International, 2020).This improvement continued in 2019, scoring another seven-point improvement on the corruption perception index. Contrary, in 2016, according to Global Corruption Barometer, the highest number of representatives involved in corruption were those in governmental institutions (45%), the president and his staff (44%) and tax officials (43%) (Transparency International, 2016). After the revolution, the Center for Insights in Survey Research conducted three consecutive surveys among the Armenian residents comprising 1,200 respondents each.The survey covered all the regions in Armenia and Yerevan.The first wave was conducted from 23 July to 15 August 2018, the second from 9 to 29 October 2018 and the third from 6 to 31 May 2019.Interestingly, the first survey is called "High Expectations for Pashinyan's Government," the second "Expectations for Political and Economic Reform" and the last one "Public Expectations."As one can note from the titles, the expectations went down during that time. One explanation is that one after another, the high-ranked officials prosecuted for corruption were let free.Onerous and inefficient legal proceedings became more like vendetta performed by Pashinyan for personal offences when he was in opposition.(Later, we will discuss that those futile prosecutions could negatively impact the trust in courts and the judiciary system.)The second explanation for diminishing expectations might lie in unmet socio-economic conditions.Indeed, in the first survey, respondents listed unemployment/jobs (21%4), Artsakh (Nagorno-Karabakh) conflict (10%) and socio-economic problems (9%) as the main problems Armenia was facing then.In the successive surveys, social and economic conditions remained primary.Namely, in the second survey, socio-economic problems (12%) were in the second position after unemployment/jobs (14%) and before the Nagorno-Karabakh conflict (7%).In the last survey, the poverty/financial condition of people (8%) was in the top three main problems after unemployment/jobs (15%) and socio-economic problems (9%).Apparently, the survey respondents did not believe the new government was meeting the socio-economic and fiscal expectations of the population. 
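As a simple illustration of the before-and-after comparison used in this analysis, the year-over-year changes in the Corruption Perceptions Index cited above can be computed directly. The snippet below (Python) uses only the scores quoted in the text and is a sketch, not the authors' analysis pipeline:

```python
# Sketch only: year-over-year change in Armenia's Corruption Perceptions Index
# around the 2018 Velvet Revolution, using the scores cited in the text
# (35 in 2017, 42 in 2018, 49 in 2019); higher scores mean less perceived corruption.
cpi = {2017: 35, 2018: 42, 2019: 49}

years = sorted(cpi)
for prev, curr in zip(years, years[1:]):
    print(f"{prev} -> {curr}: {cpi[curr] - cpi[prev]:+d} points")

print(f"Two-year gain after the revolution: {cpi[2019] - cpi[2017]:+d} points")
```

Run against these values, the sketch reproduces the two consecutive seven-point improvements and the fourteen-point gain over the two years following the revolution.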
Finally, juxtaposing the results with the 2016 Global corruption barometer post-revolution survey, residents found as favourable the work of the prime minister's office (82%, 85% and 72%) and the president's office (72%, 78% and 81%).In terms of openness and transparency in all three surveys, the prime minister and the president's office were placed on the highest two ranks.Arguably Pashinyan's promise during the mass demonstrations to tackle corruption as a primary hindrance for the country's development and prosecuting high-level politicians for corrupt dealings shortly after becoming the prime minister inspired trust in his persona and office.Plausibly, young people appointed by Pashinyan in high-rank positions triggered trust in his administration and the fight against corruption.On the one hand, newly assigned statesmen were inexperienced in politics; on the other hand, they did not have a track record for corruption (Hauser, Simonyan, and Werner 2020). The surveys mentioned above and data are non-inclusive, depicting the efforts of the Armenian government in addressing the corruption challenges and strengthening the democracy in the country.However, these are surveys and indicators represented by independent international organisations spanning the whole country.In addition, the analysed surveys had at least two or three waves, making them comparable and increasing their statistical power. Before the Velvet Revolution, corruption has been a considerable hindrance to the Armenian business environment too.While ranking the Top Business Environment Obstacle for firms, 5.4% of firms chose corruption.Many firms, 28.3%, chose tax rates followed by tax administration, which was selected by 23.6% of the firms as an obstacle (The World Bank). According to the Worldwide Governance Indicators' Control of Corruption Index, Armenia scores below 0, indicating low-level control of petty and grand corruption (Table 1).The index estimates range from approximately -2.5 (weak) to 2.5 (strong) governance performance.In 2018 which was the year of the Velvet Revolution, we noticed a notable improvement in the index, which is still below zero. In three surveys conducted by the Center for Insights in Survey Research, the most popular response to the question "Why do you think the change in government is positive?" the answer was "decreased corruption," respectively 32%, 37% and 27%.Further, the first two highest responses, "the biggest failures of the previous government were corruption and robbery."5This implies that corruption was one of the pivotal factors bringing people out to march in favour of a revolution (Lanskoy & Suthers 2019;Feldman & Alibasic 2018). Increased trust in institutions along with anti-corruption reforms and strengthening the democracy of the public institutions decreased the corruption perception in the country.Thus, in the beginning, Pashinyan's government was very consistent in introducing policy reform fighting corruption, which he announced as one of the priorities as a new government leader.According to the public opinion polls, the two highest corruption reforms conducted by Pashinyan included National Security Service Disclosure (36%, 32% and 24%) and detention of oligarch leaders (23%, 19% and 19%) (International Republican Institute, 2022). 
Fines and prosecution of corrupt politicians comprised an essential part of the newly formed government.Criminal cases were opened for abuses of power, financial losses and embezzlement on highly ranked politicians, such as retired army general Manvel Grigoryan (June 2018), third president Serzh Sargsyan (December 2019), former mayor of Yerevan Yervand Zakharyan (September 2019), former head of the State Revenue Committee and former Minister of Finance Gagik Khachatryan (August 2019), and finally the second President Robert Kocharyan who would later become his primary opponent in the snap parliamentary elections to be held on 20 June 2021, which was set after losing the Artsakh war against Azerbaijan and Turkey (Sargsyan, 2020). To increase trust in his government among the people and with foreign investors, Pashinyan introduced democratic changes to the public institutions acknowledged by the international community.According to the Economist's Democracy Index, Armenia scored 4.11 in 2017, 4.79 in 2018 and 5.54 in 2019; 4.0 is a threshold between authoritarian and hybrid regimes.Similarly, 6.0 is a threshold between a hybrid regime and flawed democracy (Economist Intelligence, 2020).According to Freedom House, Armenia's democracy score in 2018 was 2.57 and increased to 2.93 in 2019 and 3.00 in 2020 (Freedom House, 2019).Armenia's increasing score indicates the shift from a hybrid regime to strengthening democratic institutions in the country. In May 2019, the Ministry of Justice of Armenia introduced the unified electronic platform for whistleblowing on corruption crimes.The platform was meant to create awareness among citizens to eradicate corruption and provided an opportunity to report a crime.Citizens can whistle blow in the www.azdararir.amplatform anonymously, and the ministry of Justice of Armenia guarantees their protection.The name of the platform is a translation of the imperative form of the verb "to whistle-blow" (in Armenian ազդարարել) which also forms the noun whistleblower (in Armenian ազդարար). Hence, the anti-corruption measures resulting in higher trust in the newly formed government could have influenced the corruption perception of individuals.Consequently, the development of institutional trust and decreased perceived corruption were cornerstones in strengthening the democracy of the public institutions.Thus, after coming to the government, Pashinyan leveraged institutional and legal anti-corruption mechanisms for yielding lower corruption levels in the country. Trust in Armenian Institutions Perceptions of corruption remain relatively high in Armenia, although they did fall after the 2018 revolution from 35 to 49, recording the best improvement worldwide in 2 years, as noted above.This suggests possibly that the revolution did change perceptions.Yet another way to map this out is to look at how trust in specific Armenian institutions have changed over time.Caucasus Barometer performs frequent surveys of the people in the three Caucasus states, asking, among other questions, about trust in various institutions.Figures 1-5 provide a time series evolution of popular senses of trust towards the courts, the executive government, parliament, the president and political parties. 
In general, trust in the courts decreased slightly, while trust in the parties and parliament decreased significantly. Conversely, trust in the president and executive offices increased. There is a clear breaking point in the diagrams between before and after the Velvet Revolution. Two waves of trust fluctuation are evident, in 2013 and 2018. First, in 2013, Serzh Sargsyan won a fraudulent election (Grigorian, 2015). It produced extreme distrust towards the authorities (Transparency International, 2013). Diminishing trust in public institutions led to an increased likelihood of protest actions (Jolobe, 2017), which contributed to a further decline in public trust in Sargsyan's government. Subsequently, his aspiration to become prime minister in 2018 prepared "fertile soil" for mass demonstrations in the country. Thus, the second alteration of the curves commenced following the Velvet Revolution in 2018. Active participation in the revolution and high expectations of citizens may be the reason for increased trust in public institutions.

If we combine these trust measures with the Transparency International rankings for corruption perception, the results can be plotted in the next chart. There are only nine data points between 2008 and 2019, too few to perform statistically meaningful tests. However, if the hypothesis of this article is correct, then there ought to be some inverse relationship between trust in political institutions and perceptions of corruption. The chart graphically demonstrates that. It also shows that after the 2018 Revolution, trust in many of the Armenian political institutions went up. It did so because of the perception among Armenians that the government was seeking to tackle the corruption problem, as it had promised it would. If the 2018 Revolution was about anything, it was a protest against corruption, and the new government took steps to honour its promise to address it.

Beyond graphic analysis, we performed a simple correlation analysis that looked at the relationship between trust in the courts, parliament, political parties, the executive department and the president, respectively, and the perception of corruption (Figure 6). We found correlations of -0.10 (courts), 0.835 (parliament), 0.267 (political parties), -0.535 (executive branch) and -0.465 (president). While the paucity of data points might make more robust conclusions impossible, the correlations do largely capture some relationships between trust in government and perceptions of corruption. Specifically, in comparing trust in various institutions and tying it to perceptions of corruption after the 2018 Revolution, there were noticeable changes: a decline in the perception of corruption had a medium or definite relationship with increased trust in the executive departments and the president. This is not a surprise given the central role that the presidency and the executive branch played in the ouster of one leader and his replacement by another.

CONCLUSIONS What this research sought to do was to examine, first, the factors influencing corruption perception among Armenians and how they changed over time, especially after the 2018 Revolution. We found that trust gradually increased over time, especially after the 2018 Revolution, when perceptions of corruption decreased.
This research undertook a "before and after" approach.By that, to understand the factors that influenced corruption perceptions in Armenia, this article looks at events in this country from the end of the 2000s, when Serzh Sargsyan became the third president of Armenia, till the end of 2019, before the pandemic penetrated the globe.In 1998, the first wave of the post-Soviet Union independence transition happened.At that time, the war with Azerbaijan was finished, and the government could dedicate its efforts to economic and foreign policy reconstructions.Next, Kocharyan came to the power with differing views from Ter Petrosyan on the critical issues including resolution of the Artsakh conflict, Armeno-Turkish relations and tax collection (Astourian, 2000).Yet during Serzh Sargsyan's government, the issue of corruption aggravated to an unbearable extent for ordinary citizens, bringing them out to street protest. It was during Sargsyan's tenure that among 42 European and Central Asian countries, Armenia ended among the seven countries facing the worst corruption issues (Pring 2016, p 29).During Sargsyan's presidency, election fraud in Armenia flourished (Policy Forum Armenia 2013, p9).According to the Armenian Corruption Household Survey (2010), 39% of Armenians considered corruption as a fact of life.In the same survey, 69.7% of Armenians in response to the question "What is the primary reason that people justify their participation in corruption?" mentioned that there is no other way to get things done.As discussed above, corruption perception changed rapidly among the Armenians just after the 2018 Velvet Revolution.Pashinyan's anti-corruption policy continued his pledge to curb corruption by replacing monopoly with competition and punishing government employees for demanding bribes from the business.This change in corruption perception and an increase in trust is consistent with our thesis regarding how the two are connected, especially for second-stage trust.However, we did not find similar changes in the relationship between perceptions of corruption and trust in other institutions.This lack of a relationship may be due to the central role of the presidency in the 2018 Revolution.It also might speak to how nuanced public perceptions of trust and corruption are and how institutionally specific they are. 
We limited the research to the end of 2019 because the later developments in the country require separate attention, including the arrival and handling of the coronavirus pandemic in 2020 and the war with Azerbaijan, backed by Turkey's military forces, in fall 2020. At the same time, this article leaves many questions unanswered. Is Armenia unique in its experiences, or is it characteristic of other post-communist states in terms of how perceptions of corruption and trust are related, especially after the initial post-Soviet style of government regime change? Future research will situate Armenia's efforts to address corruption by comparing it with Georgia and Ukraine. It will also seek to understand changing perceptions of corruption by looking at them before and after the respective revolutions in these three countries. This research will ask the following questions: How did the revolutions lead to changes in elections, regime structures or political alignment among political parties? How did the revolutions lead to changes in international relations? What were the unique factors in each country that prompted revolutions, and how have they subsequently defined the capacity of the different regimes to change? How successful have the three countries been in abating real corruption or its appearance? What role did constitutional and legal changes have in addressing perceptions of corruption? These are questions that still need to be answered.

At this stage of the research, the article has shown a pattern of how trust and perceptions of corruption are connected and how specific events might trigger or connect the two. Specifically, it provides insights into how the various colour revolutions may be a product of trust and perception and how the two change afterwards. For policymakers, the finding suggests that shortly after a revolution the perceived level of corruption will decrease, and the populace may be ready to support decisive action against corruption at this stage. Similarly, for local and international anti-corruption organisations, it is an optimal time to introduce robust anti-corruption measures that might have a long-term effect and last even when the first wave of post-revolutionary euphoria has passed. Finally, it suggests that trust and perceptions need to be disentangled more carefully, with the focus placed on specific governmental institutions and not simply regime-wide.

Armenia provides an initial case study for a broader research design that seeks to understand the empirical connections between perceptions of corruption, trust and democracy building. After Armenia, Ukraine and Georgia are case studies contrasting with states such as Latvia, Lithuania and Estonia, which have completed the path from the Soviet era to stable western European-style democracies. Future research is needed to look at how the interrelationship between corruption perceptions and trust has played out across states and what that means for a theory of democratic transitions and stability.

Fig. 6: Trust and perceptions of corruption in Armenia: 2008-2019
Tab. 1: Worldwide Governance Indicators Control of Corruption in Armenia
2023-03-30T13:04:34.416Z
2023-03-30T00:00:00.000
{ "year": 2023, "sha1": "a6e262213b9dade79f8b8b7fe4bc5ad58b0668c1", "oa_license": "CCBY", "oa_url": "https://sciendo.com/pdf/10.2478/cejpp-2023-0003", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "274cf8cd12aacf82460866acc7a866feaedaf73b", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
219621315
pes2o/s2orc
v3-fos-license
Probing Solute–Solvent Interactions of Transition Metal Complexes Using L-Edge Absorption Spectroscopy In order to tailor solution-phase chemical reactions involving transition metal complexes, it is critical to understand how their valence electronic charge distributions are affected by the solution environment. Here, solute–solvent interactions of a solvatochromic mixed-ligand iron complex were investigated using X-ray absorption spectroscopy at the transition metal L2,3-edge. Due to the selectivity of the corresponding core excitations to the iron 3d orbitals, the method grants direct access to the valence electronic structure around the iron center and its response to interactions with the solvent environment. A linear increase of the total L2,3-edge absorption cross section as a function of the solvent Lewis acidity is revealed. The effect is caused by relative changes in different metal–ligand-bonding channels, which preserve local charge densities while increasing the density of unoccupied states around the iron center. These conclusions are corroborated by a combination of molecular dynamics and spectrum simulations based on time-dependent density functional theory. The simulations reproduce the spectral trends observed in the X-ray but also optical absorption experiments. Our results underscore the importance of solute–solvent interactions when aiming for an accurate description of the valence electronic structure of solvated transition metal complexes and demonstrate how L2,3-edge absorption spectroscopy can aid in understanding the impact of the solution environment on intramolecular covalency and the electronic charge distribution. ■ INTRODUCTION Rationalizing electronic charge distributions of transition metal (TM) complexes is a fundamental challenge in order to tailor their properties to applications in catalysis 1 and sustainable energy research. 2 This is particularly the case for the liquid phase, where a fluctuating solvent network can critically determine (photo-) chemical reactivity. In this context, it is crucial to understand how intermolecular interactions with the solvent reshape the electronic charge distribution of the solute and thus determine its catalytic capabilities. On an intramolecular level, the valence electronic structure of TM complexes is generally described by relative contributions of donation and backdonation channels that constitute the metal−ligand bond. These covalent mechanisms in turn shape the total electronic charge distribution and can be expected to be impacted by varying intermolecular interactions with a solvent. Such effects resulting from solvation are known to, for example, mediate electron transfer 3,4 and can govern photochemical properties on a quantitative 5,6 as well as qualitative level. 7−9 To study the impact of solvation on the electronic structure of TM complexes on a fundamental level, iron (Fe) cyanides are well-suited model systems, since their electronic structure has been thoroughly investigated. 10−13 In a study on solvent effects, Penfold et al. performed Kα resonant inelastic X-ray scattering (RIXS) experiments and used molecular dynamics (MD) simulations to qualitatively interpret the spectroscopic features for different solvents. 14 Similarly, Ross et al. employed various spectroscopies across a wide range of energies from the infrared to the X-ray regime. 
The study highlighted the importance of explicitly treating the solvent in quantum chemical simulations in order to ensure an accurate modeling of the electronic structure of iron cyanides. 15 Both studies particularly found a strong hydrogen-bonding interaction between protic solvents and the cyanide (CN − ) ligands, which impacts intramolecular covalency. More specifically, the hydrogen bond has been reported to withdraw charge from the CN − ligands, which is compensated for by a concomitant increase in π-backdonation. 14 This mechanism has also been used to explain the solvatochromism of mixed-ligand Fe complexes involving cyanide ligands 16,17 as well as their solvent-dependent photochemical pathways. 8,9,18 In order to study such trends in the covalency of TM complexes, X-ray absorption spectroscopy 19 (XAS) at the TM L 2,3 -edge provides the most direct access to the relevant frontier orbitals. For third-row TM complexes, the underlying metal 2p → 3d excitations probe the unoccupied metal 3d orbitals. 20,21 Thereby, their composition and thus role in covalent metal−ligand interactions can be directly evaluated. In terms of sensitivity to the solvent, however, L 2,3 -edge absorption spectroscopy is rather underexplored, and experimental studies are scarce. Bonhommeau et al. studied how Fe(II) and Fe(III) ions form complexes with different alcohols, thereby deducing a polarity-dependence of the covalent solute−solvent interactions. 22 Hua et al. on the other hand performed a theoretical study targeting the solvation of Fe polypyridyl complexes in acetonitrile 23 and compared their simulations to previously published data. 24,25 Explicit solute− solvent interactions were found to be minimal, and the subsequent analysis was focused on structural effects. In this work, we use XAS at the Fe L 2,3 -edge to study the impact of solute−solvent interactions on the valence electronic structure of the mixed-ligand Fe(II) complex [Fe(bpy)-(CN) 4 ] 2− (bpy = 2,2′-bipyridine). The experiment is performed using a transmission flatjet endstation 26 that allows absolute X-ray absorption cross sections to be measured 27 for different solvents without the need to invoke edge jump normalizations as in yield-based approaches. 11,20,28−30 Solventdependent changes in π-backdonation are revealed as well as compensating donation effects that maintain local charge densities at the Fe center. These conclusions are confirmed by a combination of MD and time-dependent density functional theory (TD-DFT) that allow the observed spectroscopic trends to be reproduced. The study demonstrates a nonnegligible interaction between CN − -containing TM complexes and the solvent that must be considered in future L 2,3 -edge spectrum simulations of metal cyanides in order to accurately describe the underlying valence electronic structure. Furthermore, the combination of MD and TD-DFT simulations can be established as an approach that reasonably accounts for the relevant interactions between closed-shell TM complexes and their solution environment. 4 ]OH in methanol. Finally, the solvent was evaporated under low pressure, and the product of the ion exchange was dissolved in DMSO and EtOH. Experimental Details. The X-ray absorption data were measured at the UE52-SGM beamline 32 of the BESSY II light source using a transmission flatjet system described by Fondell et al. 26 Complementary measurements have been performed at the EDAX@UE49-SGM experiment. 
The sample is delivered into the experimental chamber by two colliding round jets with a diameter of about 30 μm. Thereby, a free-flowing liquid leaf is formed under vacuum conditions. 33 The leaf exhibits a thickness of several μm, which allows for the transmission of X-rays at 3d L2,3-edges, 27 while the constant sample replenishment prevents potential X-ray-induced sample damage. 34 Depending on the solvent, different flow rates are required to keep the jet stable. The flow rate was 2.0 mL/min for water, 1.3 mL/min for EtOH, and 1.6 mL/min for DMSO. The intensity transmitted through the sample was detected as the average current of a gallium arsenide photodiode and recorded as a function of the X-ray photon energy. The bandwidth of the incident X-ray radiation was 0.3 eV at 700 eV of excitation energy. Details on experimental procedures and data treatment can be found in the Supporting Information.

Computational Details. All molecular dynamics simulations were carried out with the Gromacs2019 package. 35 The SPC/Fw 36 force field (FF) was adopted for the simulations in water, while the OPLS-aa 37 FF was used for EtOH and DMSO. The parameters describing intermolecular interactions for the K+ counterions were taken from ref 38. To describe the bonded interactions of the solute, a specific parametrization was carried out via the JOYCE procedure. 39 The fitting was based on the optimized structure and the Hessian of the complex in the gas phase, which were obtained at the DFT/B3LYP 40,41 level with the def2-TZVP(-f) 42 basis set, employing the D3BJ dispersion correction. 43,44 The RIJCOSX 45 approximation was used with the def2-TZV/J auxiliary basis set 46 as implemented in the ORCA quantum chemistry package. 47 To describe the nonbonded interactions, the Lennard-Jones parameters for Fe were transferred from ref 48, while for C and N, the parameters from ref 49 were adopted (analogously to ref 50). For the Coulomb interaction term, point charges were derived using the CHELPG 51 fitting procedure in the Multiwfn 52 program. See the Supporting Information for the full set of parameters used in the MD simulations along with additional details.

The spectra in the liquid phase were calculated as the sum of spectra from 50 uncorrelated snapshots taken from the MD simulations, however, with a reduced number of explicit solvent molecules. The details of the solvation model adopted for each solvent will be given further in the text. In addition, the bulk solvation effects were accounted for implicitly via the conductor-like polarizable continuum model (CPCM). 53 The spectra for each snapshot were computed with linear-response TD-DFT. For the optical excitations, the lowest 50 singlet states were computed. The core-level spectra were obtained by restricting the excitation orbital window to include only the Fe 2p orbitals, and then 100 singlet and 100 triplet core states were computed. Finally, the spin−orbit coupling was taken into account perturbatively via the mean-field spin−orbit operator as described in ref 54. All spectrum calculations utilized the hybrid M06 55 exchange and correlation functional, which was benchmarked to best reproduce the optical MLCT band as well as L2,3-edge spectra. A more detailed discussion of the functional choice and a comparison with other popular functionals is given in the Supporting Information.
In order to disentangle the orbital contributions to the metal−ligand bond, a fragment decomposition of the molecular orbitals of the system was carried out using the charge decomposition analysis scheme 56,57 (CDA). To minimize nonphysical populations (due to the Mulliken partition), the smaller, closely related def2-SV(P) basis set was employed for the decomposition. This analysis was carried ■ RESULTS AND DISCUSSION Figure 1a shows the molecular structure of [Fe(bpy)(CN) 4 ] 2− . The complex is coordinated with four CN − and one 2,2′bipyridine (bpy) group, which allows the complex to be approximated as octahedral and its properties to be discussed within the notation of the O h point group. The corresponding valence electronic structure is shown in Figure 1b in terms of a molecular orbital diagram. [Fe(bpy)(CN) 4 ] 2− can be described as a nominal Fe(II) closed-shell singlet with the Fe-3d-derived orbitals being the fully filled t 2g -and completely unoccupied e g -orbitals. The complex furthermore exhibits unoccupied ligand π*-orbitals from the bpy as well as CN − groups. Due to the underlying selection rules and as indicated in the scheme, the different unoccupied ligand π*-orbitals can be independently accessed by either optical or X-ray absorption spectroscopy. Here, the two experimental techniques therefore serve as complementary probes of the valence electronic structure. The optical absorption spectra of [Fe(bpy)(CN) 4 ] 2− dissolved in water, EtOH, and DMSO are displayed in Figure 2a. The two bands can be attributed to Fe t 2g → bpy π*-metalto-ligand charge-transfer (MLCT) excitations, whose energies shift as a function of the solvent. The MLCT energies have been shown to scale linearly with the acceptor number (AN), 16 which constitutes an empirical measure of the solvent Lewis acidity. Thereby, the AN accounts for both nonspecific interactions like the solvent polarizability as well as the hydrogen bond donation ability. 58 Within this framework, the solvatochromic behavior of the complex has been rationalized by negative charge being withdrawn (accepted) by the solvent from the CN − ligands via nonspecific interactions as well as hydrogen-bonding (depending on the solvent). The resulting charge deficiency is compensated for by an increase in πbackdonation from the metal center onto the CN − ligands. This stabilizes the t 2g -orbitals and subsequently linearly increases MLCT excitation energies with higher Lewis acidity. 16 Due to their sensitivity to intramolecular bonds, a similar linearity for metal−cyanide complexes has been revealed by IR spectroscopy. 17 The complementary transmission X-ray absorption measurements at the Fe L 2,3 -edge are displayed in Figure 2b for [Fe(bpy)(CN) 4 ] 2− in the same three solvents. The recorded signals are background-corrected for solvent absorption by a linear fit of the pre-edge region (E photon < 703 eV) as well as continuum excitations from 2p 3/2 and 2p 1/2 core holes by two arctangent functions following procedures by Wasinger et al. 20 and Cho et al. 25 Each spectrum is then normalized to the sample concentration and the thickness of the respective liquid leaf (deduced from the absorption at 700 eV in comparison to tabulated values 59 ). In analogy to optical absorption spectroscopy, this yields the extinction coefficient as the final entity independent of experimental parameters. 
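As an aside, the data-reduction chain just described (linear pre-edge fit below 703 eV, subtraction of two arctangent continuum steps, normalization to concentration and leaf thickness) can be sketched in a few lines of Python. The step positions, widths and heights below are placeholders to be fitted to real data, and the function names are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np

def arctan_step(E, E0, width, height):
    """Smooth continuum step (one per 2p3/2 and 2p1/2 edge) centered at E0."""
    return height * (0.5 + np.arctan((E - E0) / width) / np.pi)

def extinction_coefficient(E, absorbance, conc_M, thickness_um,
                           steps=((709.0, 0.5, 1.0), (722.0, 0.5, 1.0))):
    """Convert a raw absorbance curve into an extinction coefficient (M^-1 um^-1).

    1) subtract a linear fit of the pre-edge region (E < 703 eV),
    2) subtract two arctangent steps approximating the 2p3/2/2p1/2 continua,
    3) divide by sample concentration and liquid-leaf thickness.
    The step parameters (position, width, height) are placeholders.
    """
    pre_edge = E < 703.0
    slope, intercept = np.polyfit(E[pre_edge], absorbance[pre_edge], 1)
    corrected = absorbance - (slope * E + intercept)
    for E0, width, height in steps:
        corrected = corrected - arctan_step(E, E0, width, height)
    return corrected / (conc_M * thickness_um)
```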
In contrast to the optical regime, however, the extinction coefficient in this case is displayed in units of M−1 μm−1, corresponding to typical attenuation lengths in the soft X-ray regime. The general shape of all three X-ray absorption spectra is very similar to previous partial fluorescence yield measurements of aqueous [Fe(bpy)(CN)4]2− by the authors. 60 There, the two main transitions at the L3-edge could be assigned to be of predominantly Fe 2p → eg (708.7 eV) and Fe 2p → CN− π* (711 eV) excitation character. When comparing the three spectra presented here, the former resonance appears to be rather insensitive to the solvent environment, and the visible differences are well within the margin of error resulting from the normalization procedure and background subtraction (see Supporting Information). This is consistent with the Fe-centric character of the 2p → eg excitation. In contrast, substantial differences can be observed at the Fe 2p → CN− π* excitation. The underlying transition effectively probes the Fe 3d character of the CN− π*-system locally at the metal center. Consequently, the feature has previously been identified as a "direct probe of back-bonding". 11 The measurements clearly show an increasing peak height in solvents with higher Lewis acidity (in the direction of DMSO, EtOH, water). This spectral trend therefore provides direct evidence for the previously reported enhancement of π-backdonation by strong Lewis acids, in agreement with the results from optical and IR spectroscopy. 16,17 As mentioned before, these studies additionally established linear trends between their spectroscopic observables and the solvent Lewis acidity. Figure 2c therefore shows the integrated L2,3-edge intensity over the measured spectral range. It can be seen that the resulting total X-ray absorption cross section exhibits a similar dependence on the solvent Lewis acidity. The total L2,3-edge absorption decreases linearly as the Lewis acidity (corresponding ANs taken from ref 58) of the solvent is reduced. When extrapolating this trend, one can estimate the corresponding cross section for the gas-phase complex or within a noninteracting solvent (e.g., hexane). L2,3-edge absorption cross sections have been shown to scale with the number of nominally unoccupied d orbitals as well as the covalency of the complex. 11,20,27 As we compare an Fe(II) complex with unchanged 3d occupation in different solvents, the linear increase of the absorption cross section can therefore be expected to directly reflect changes in covalency as a function of the solvent Lewis acidity.

To substantiate these interpretations, we performed MD simulations to, in a first step, acquire information on the structural arrangements of [Fe(bpy)(CN)4]2− within the three different solvents. The main results of these simulations in terms of solvation structures are presented in Figure 3. Due to the mixed-ligand character of [Fe(bpy)(CN)4]2−, the solvation of the complex exhibits varying degrees of asymmetry in the different solvents. Figure 3a shows the pair correlation functions between the N sites of the CN− groups and the H and O atoms of water and EtOH. In the case of DMSO, pair correlation functions between the CN− N sites and the S and O atoms are shown.
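The solvation-shell picture sketched here is quantified in the following paragraph through a geometric hydrogen-bond criterion (N−O distance below 3 Å and ∠NHO below 20°). A minimal per-frame count of such contacts could look as follows; reading the angular criterion as a deviation of the N···H−O arrangement from linearity is an assumption about the convention, and periodic boundary conditions are ignored for brevity.

```python
import numpy as np

def is_hydrogen_bond(n_acceptor, o_donor, h_donor, r_max=3.0, dev_max=20.0):
    """Geometric check for one N...H-O contact in a single MD frame.

    Criteria (following the text): N-O distance < r_max (Angstrom) and the
    N...H-O arrangement within dev_max degrees of linearity (assumed reading
    of the angular cutoff). No periodic-boundary handling.
    """
    if np.linalg.norm(n_acceptor - o_donor) > r_max:
        return False
    v_hn = n_acceptor - h_donor                    # H -> acceptor N
    v_ho = o_donor - h_donor                       # H -> donor O
    cos_a = np.dot(v_hn, v_ho) / (np.linalg.norm(v_hn) * np.linalg.norm(v_ho))
    angle_nho = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return (180.0 - angle_nho) < dev_max           # 180 deg = perfectly linear

def hydrogen_bonds_per_cyanide(cn_nitrogens, donor_oh_pairs):
    """Average number of hydrogen bonds per CN- nitrogen in one frame."""
    counts = [sum(is_hydrogen_bond(n, o, h) for o, h in donor_oh_pairs)
              for n in cn_nitrogens]
    return float(np.mean(counts))
```

Averaging such per-frame counts over the trajectory would yield numbers comparable to the hydrogen-bond statistics quoted below.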
In the case of the two protic solvents, the hydrogen-bonding causes a pronounced solvation shell with the first maxima being located at N−H bond distances of 1.7 and 1.8 Å in water and EtOH, respectively. Due to the absence of hydrogen-bonding in the case of DMSO, the coordination is significantly less structured at largely increased average solute− solvent bond distances. A closer look at the hydrogen-bonding properties of the complex reveals that, on average, the cyanide ligands experience 2.5 ± 0.4 hydrogen bonds in water and 1.8 ± 0.4 hydrogen bonds in ethanol (number of hydrogen bonds defined for r N−O < 3 Å and ∠NHO < 20°). It should also be noted that there are some inhomogeneities in the solvation shell of the cyanides with the axial ligands displaying a slightly higher average number of hydrogen bonds than the ones lying in the bpy plane, (The full analysis is available in the Supporting Information.) A different picture, however, is given from the perspective of the bpy group. Figure 3b shows the pair correlation functions between the H atoms of the bpy group and O and H atoms of the two protic solvents. For the case of DMSO, pair correlation functions between the bpy H atoms and the O and S atoms are shown. Judging from the comparably unstructured coordination in all three solvents, solute−solvent interactions between the bpy group and the two protic solvents seem to be much smaller than at the CN − side, while a similarly small interaction is observed for the case of DMSO. In order to build a realistic but manageable solvation model for the simulation of the experimental spectra, it therefore seems reasonable to make the following approximations. For the case of water and EtOH, molecules of the first solvation shell around the CN − ligands are explicitly included. The bulk solvent beyond that as well as the molecules coordinating with the bpy group are modeled implicitly by a CPCM approach. Due to the absence of coordinated solute−solvent interactions between [Fe(bpy)-(CN) 4 ] 2− and DMSO, we only consider the structural evolution of [Fe(bpy)(CN) 4 ] 2− in DMSO but model the solvent solely with a CPCM approach. The results of the simulations for the optical regime are presented in Figure 4, There, the previously introduced experimental absorption spectra in the three different solvents (a) are compared to the spectrum simulations (b). All simulated spectra represent an average of over 50 TD-DFT calculations on uncorrelated snapshots of our MD simulations with the structures being reduced as previously described. Each calculated transition is convoluted with a 0.2 eV Gaussian function to account for an estimated broadening due to vibronic contributions as well as an undersampling by the used 50 snapshots. Due to the comparably long lifetime of the valence-excited final states, a Lorentzian broadening is neglected. The calculated spectrum in water is normalized to the maximum of the low-energy MLCT band in the experimental spectrum and shifted by 0.4 eV. The calculated spectra for EtOH and DMSO are scaled and shifted accordingly. When comparing the experimental and theoretical optical absorption spectra in Figure 4a,b, the model fully reproduces the experimentally observed shift in energy of the two MLCT bands with only a slight underestimation of the shift for the case of DMSO. 
Our simulations further confirm previous notions 8,9,16 that this shift is caused by, on average, lower t2g-orbital energies in water than in EtOH and DMSO. These findings, as well as the agreement with the experimental spectra, allow it to be concluded that the applied model represents a reasonable description of the solute−solvent interactions in the three different solvents. It should further be emphasized that it is crucial to explicitly include water and EtOH molecules, since neglecting the associated hydrogen-bonding does not allow the experimentally observed spectral shifts to be reproduced (see Supporting Information). We therefore proceed to the X-ray regime, where the results of the simulations are presented in Figure 5. Analogously to the optical spectra, the simulations are based on 50 TD-DFT snapshots. It should be noted that TD-DFT cannot be expected to fully account for 3d−3d and 2p−3d correlation effects in the core-excited final state of the L2,3-edge absorption process. However, the approach has successfully been implemented previously to model the L3-edge absorption spectra of closed-shell Ru 61,62 as well as Fe complexes. 23 Similarly to the optical absorption spectra, all calculated X-ray transitions in Figure 5b are convoluted with an experimental Gaussian broadening of 0.3 eV as determined from the beamline bandwidth at a photon energy of 700 eV. An additional broadening of 0.2 eV is deduced by comparison to the calculations by Hua et al., 23 which accounts for vibronic contributions to the spectra. Furthermore, a broadening of 0.5 eV is estimated to again account for the supposed undersampling. Lastly, a 0.4 eV Lorentzian broadening is applied to account for the lifetime of the 2p core hole. 63 All calculated spectra are shifted by 10.3 eV to match the experimental spectra. It is important to note that, in analogy to the absolute absorption cross sections measured in the experiment, each simulated spectrum for the three different solvents is displayed based on absolute oscillator strengths, which are only normalized to the number of snapshots and broadened according to the previously described procedure. Thereby, the very similar intensities of the eg-resonance at 708.7 eV are reproduced. This verifies the experimentally deduced insensitivity of its metal-centric character to the solvent environment. This comparability, however, does not necessarily hold for the spectrum of the optimized gas-phase structure, which is additionally shown. Due to the absence of intermolecular interactions in the gas phase, the applied broadening (which is the same as for the spectra of the solvated complex) overestimates the structural variations of an isolated molecule, which are expected to be lower than in the case of a solvated molecule. For comparability, the gas-phase spectrum is therefore scaled to the maximum of the sampled spectrum in water. When comparing the four simulated X-ray absorption spectra, a decrease of the second resonance can be observed in the series of decreasing Lewis acidities from water to the gas phase, however, with DMSO as an outlier. We attribute this to the insufficient treatment within our solvation model: in contrast to the optical absorption, the CPCM-only modeling of DMSO does not seem to fully account for the solute−solvent interaction.
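For illustration, the snapshot-averaging and broadening procedure described above might be implemented roughly as below. Combining the individual Gaussian widths in quadrature and interpreting all quoted widths as FWHM values are assumptions not specified in the text.

```python
import numpy as np
from scipy.special import voigt_profile

def broadened_spectrum(energies, strengths, grid,
                       gaussian_fwhms=(0.3, 0.2, 0.5), lorentzian_fwhm=0.4,
                       shift=10.3):
    """Broaden one TD-DFT stick spectrum (a single MD snapshot) onto a grid.

    Each transition becomes a Voigt profile; the Gaussian widths are combined
    in quadrature (an assumption) and all widths are treated as FWHM values.
    """
    g_fwhm = float(np.sqrt(np.sum(np.square(gaussian_fwhms))))
    sigma = g_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian sigma
    gamma = lorentzian_fwhm / 2.0                          # FWHM -> Lorentzian HWHM
    spectrum = np.zeros_like(grid)
    for energy, strength in zip(energies, strengths):
        spectrum += strength * voigt_profile(grid - (energy + shift), sigma, gamma)
    return spectrum

def ensemble_spectrum(snapshots, grid):
    """Average the broadened spectra of uncorrelated snapshots (e.g., 50)."""
    return np.mean([broadened_spectrum(e, f, grid) for e, f in snapshots], axis=0)
```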
We expect a full explicit treatment of DMSO molecules of the first solvation shell around the CN − as well as the bpy ligands to account for this discrepancy. When again evaluating the correlation functions of DMSO displayed in Figure 3, the coordination structure around the ligands, though significantly smaller than in the two protic solvents, The Journal of Physical Chemistry B pubs.acs.org/JPCB Article appears to be more crucial for the description of the X-ray absorption spectra than expected. An explicit treatment of the full first solvation shell of DMSO in the simulations is, however, computationally unfeasible at the employed level of theory. Nevertheless, the failure of the model for the X-ray regime, although reproducing the optical spectra, emphasizes the particular sensitivity of L 2,3 -edge absorption spectroscopy to the solvent environment. Despite the discrepancies for the case of DMSO, the model reproduces the spectral trends observed for the two protic solvents and with respect to the gas phase. We therefore proceed to rationalize the underlying mechanism with the proposed solvation model as a starting point and by comparing the case of water to the gas phase. It should be noted that the solvation model is, in order to reproduce the ensemble average detected within the measurements, based on a manifold of structures. For the sake of simplicity, we therefore employ a reduced model that still captures the essential properties of the water environment. It is based on the gas-phase structure of the complex, however, embedded in an idealized solvent structure, as determined from the MD simulations. Deduced from the previously presented hydrogen bond analysis of the complex in water, this amounts to approximately three hydrogen bonds between each CN − group and the surrounding waters with respect to the CN − group. The positions of the water molecules were chosen to mimic the first solvation shell of the cyanides, based on bond lengths and angles from the MD simulation (see Supporting Information), however, with the additional requirement that C 2v symmetry was preserved, to ease the comparison to the gas-phase complex. This model therefore neglects structural differences but allows effects caused by interaction with the solvent to be isolated. The exemplified structure is shown in Figure 6a. It is displayed as a charge-density difference, where the density of the solvated structure ρ sol is compared to the gas-phase chargedensity ρ gas . The charge density of the eight water molecules ρ H 2 O , which is calculated without the presence of the complex, is additionally subtracted. The displayed charge density is therefore calculated as ρ = ρ sol − ρ gas − ρ H 2 O . It can be clearly seen that the introduction of solvent molecules causes a manifold of charge redistribution effects throughout the CN − ligands as well as the Fe center. Our qualitative analysis is, however, restricted to a single CN − group as displayed in Figure 6b, since analogous effects (however, to a smaller degree for the axial ligands) can be seen for the other CN − ligands. In order to rationalize the observed changes also in terms of changes to the metal−ligand bond, Table 1 shows a decomposition of a selected set of molecular orbitals into contributions from different molecular fragments (see Computational Details and Supporting Information). In the discussion below, metal-centered 3d orbitals are referred to as Fe 3d orbitals and ligand-centered orbitals as either CN − (π), (π*), (σ), and bpy(π*). 
Approximate O h labels are used for simplicity, instead of the strict C 2v notation shown in the Supporting Information. Starting from the solvent, it can be observed how the water molecules are polarized in a way that negative charge accumulates on one side and a positively charged hydrogen points toward the ligand. This allows the N site of the CN − group to accommodate additional electronic charge within a πshaped orbital similar to observations of the HCN molecule in the presence of an electric dipole. 64 We interpret this as the signature of increased π-backdonation, as deduced from the experimental X-ray absorption spectra. This interpretation is in full agreement with the increased CN − (π*) contribution to the occupied Fe 3d orbitals (see Table 1). The fragment decomposition furthermore reveals a decrease in π-bonding, the mixing between occupied CN − (π) and Fe 3d orbitals, which has been suggested previously. 9 Since this mechanism constitutes the mixture between two occupied orbitals, it does, however, not affect the overall charge distribution. Interestingly, the increase in charge density at the N site is not at the expense of the t 2g -character of the occupied Fe 3d orbitals, as one would intuitively expect from the backdonation process, since it is usually referred to as a delocalization of metal t 2gelectronic charge onto the ligand. On the contrary, an increase of t 2g -like charge density can be seen. This is due to a concomitant decrease of π-backdonation onto the bpy ligands, which can be seen in Figure 6a from the decrease in π-shaped charge density on the bpy ligand and is confirmed by the The Journal of Physical Chemistry B pubs.acs.org/JPCB Article fragment decomposition, which yields a reduced admixture of bpy(π*) character to the occupied Fe 3d orbitals (compare Table 1). The depletion of the t 2g -character of the occupied Fe 3d orbitals due to the increase in π-backdonation onto the CN − ligands is thereby overcompensated and results in a net increase of t 2g -charge density. Spectroscopically, this effect can, however, not be observed, since the corresponding core excitations into the bpy(π*) orbitals are buried under the intense transitions into the metal-centric e g -orbitals. A loss of metal charge density can instead be found in the e gmanifold. This can be understood as a compensating mechanism that maintains local charge densities similar to effects observed for charge-transfer excitations 18,30 and oxidation/reduction processes 65 in covalent Fe complexes. As σ-density additionally localizes at the C site, this compensating effect can therefore be interpreted as a reduced degree of σdonation. This conclusion is again confirmed by the orbital decomposition (see Table 1), where a decrease of Fe e gcontribution to occupied CN − orbitals of dominantly σcharacter can be observed. Lastly, it should be noted that additional effects such as the decrease in charge density at the C site can be observed that are most likely caused by differences in the weight of πand σ-like contributions to the occupied CN − orbitals as well as the significant mixing of, in particular, occupied CN − (π) orbitals with water upon solvation (see Supporting Information). Additional measurements at the N K-edge 64 could further elucidate the effects of this behavior on the intraligand-bonding by providing the complementary ligand perspective of the underlying changes in orbital character. 
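As a schematic illustration of the fragment decomposition underlying Table 1, the snippet below evaluates Mulliken-type fragment weights for a single molecular orbital. The CDA scheme cited in the Computational Details is more elaborate; the array layout and fragment assignment here are purely illustrative.

```python
import numpy as np

def fragment_weights(mo_coefficients, overlap, fragment_of_basis, n_fragments):
    """Mulliken-type fragment weights of one molecular orbital.

    mo_coefficients  : (n_basis,) coefficient vector of the MO
    overlap          : (n_basis, n_basis) AO overlap matrix
    fragment_of_basis: (n_basis,) fragment index of each basis function,
                       e.g. 0 = Fe 3d, 1 = CN- pi/pi*, 2 = CN- sigma, 3 = bpy pi*
    Returns the fraction of the orbital residing on each fragment.
    """
    # gross Mulliken population of each basis function: c_mu * (S c)_mu
    gross = mo_coefficients * (overlap @ mo_coefficients)
    weights = np.zeros(n_fragments)
    for mu, fragment in enumerate(fragment_of_basis):
        weights[fragment] += gross[mu]
    return weights / weights.sum()   # sums to 1 for a normalized orbital
```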
In order to quantify the effect of the revealed changes in metal−ligand covalency on the overall charge distribution, Figure 6c shows the charge density of the solvated complex as a function of the radius R from the Fe center (in analogy to previous work by the authors 18 as well as Johansson et al. 65 and Kubin et al. 66 ). At radii below 1 Å, the Fe L and M shells can be seen, while at bigger distances from the Fe center, the charge density can be attributed to the different ligands as well as the water molecules. The charge-density difference between the solvated complex and the gas-phase molecule is shown in Figure 6d. Only a marginal change can be observed in the Fe M shell in the case of the solvated complex. When integrated (compare Figure 6e), this amounts to an increase of ∼1% of an electronic charge at the Fe center. Changes in local metal charge are traditionally expected to be accompanied by shifts in the absorption onset of L 2,3 -edge spectra. 60,67,68 An increase/decrease in negative charge would therefore lower/ raise the excitation energy due to variations of the effective screening of the 2p core hole. The absence of any observable onset shift for the measurements in the three different solvents (compare Figure 5a) therefore is in agreement with the quantitive analysis of the reduced model, which revealed only a negligible change in local charge. This demonstrates how the solvent environment introduces sufficient degrees of freedom to allow for a full compensation of local charge-density variations around the Fe center. This is facilitated by the previously described charge-density rearrangements between the σand π-manifolds, however, to varying degrees in the different solvents. It is important to note that the constant absorption onset for the three different solvents is fully reproduced by the spectrum simulations, thereby again reinforcing the validity of the applied model underlying the spectrum calculations of the solvated complex (compare Figure 5b). The gas-phase spectrum, however, exhibits a slight shift of the absorption onset with respect to the spectra of the solvated molecule. Since the reduced model in Figure 6 yields only marginal changes in local charge at the Fe center, this is therefore more likely the result of structural effects induced by solvation. This potentially has an impact on the associated orbital energies, which in turn can affect the configuration interaction between core-excited states within the e g -manifold. Within this framework of charge-density compensation effects, we can finally also qualitatively rationalize the linear increase of the total absorption cross section with higher solvent Lewis acidity as shown in Figure 2c. As previously discussed, the increase in t 2g -like charge density can be attributed to a reduction in π-backdonation onto the bpy ligand. In order to compensate for this excess of negative electronic charge, a concomitant decrease in σ-donation can be observed, which lowers the Fe e g -content in occupied CN − (σ) orbitals and thus decreases Fe e g -like charge density at the metal center. This could be interpreted as an effective increase of the density of unoccupied states around the metal center. Within this reasoning, the overall higher absorption cross sections for higher solvent Lewis acidity would then correspond to an increase of unoccupied states as seen through the 2p core electron. 
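The radial analysis in Figure 6c−e can be emulated by binning a grid-based (difference) charge density by its distance from the Fe center and accumulating the enclosed charge, as sketched below; the grid handling and units are illustrative assumptions.

```python
import numpy as np

def radial_charge_profile(density, coords, fe_position, voxel_volume,
                          r_max=6.0, n_bins=120):
    """Radial profile and cumulative integral of a (difference) charge density.

    density      : (n_voxels,) values, e.g. rho_sol - rho_gas - rho_H2O on a grid
    coords       : (n_voxels, 3) Cartesian coordinates of the grid points
    fe_position  : (3,) position of the Fe center
    voxel_volume : volume element of the grid
    Returns bin centers, charge per radial shell, and the cumulative charge
    enclosed within radius R (same sign convention as `density`).
    """
    r = np.linalg.norm(coords - fe_position, axis=1)
    bins = np.linspace(0.0, r_max, n_bins + 1)
    shell_charge, _ = np.histogram(r, bins=bins, weights=density * voxel_volume)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, shell_charge, np.cumsum(shell_charge)
```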
This interpretation might seem contradictory to the orbital-based interpretation of the two resonances in the L 2,3 -edge absorption spectrum, which would consequently predict changes in the e g in addition to the ones in the CN − π*-resonance. It should be noted, however, that the final states of the two transitions are not fully independent, resulting in limitations of the applied single-electron picture. It has been shown previously for the case of [Fe(CN) 6 ] 4− that the CN − π*-resonance can "borrow" intensity from the e gresonance by configuration interaction in the core-excited final state, where the degree of mixing is determined by the energy separation between the two resonances. 11 To a small extent, this can be also observed in the experimental spectra. The CN − π*-resonance, at least for the case of DMSO, clearly exhibits a small shift to higher energies with respect to the spectrum in water (see inset in Figure 5a). This effect is also captured by our calculations, where the shifts to higher energies are however overestimated. This has been also observed in previous DFT calculations on [Fe(bpy)(CN) 4 ] 2− based on the restricted open-shell configuration singles method. 18,69 It should also be pointed out that the energy separation between the e g -and π*-resonances is quite sensitive to the amount of Hartree−Fock exchange included in the functional (see Supporting Information). Lastly, simulations based on singledeterminant reference methods like TD-DFT can, however, not be expected to fully account for the underlying final state effects of the core-excited state, since even restricted active space spectrum calculations struggle to correctly reproduce the energy of the CN − π*-resonance of Fe cyanides. 13,70 Future ab-initio efforts 13,70−73 that are capable of explicitly accounting for the solvent environment will be necessary to fully rationalize the underlying mechanisms. ■ CONCLUSION In this work, we have demonstrated how L 2,3 -edge absorption spectroscopy can be sensitive to charge rearrangements in TM complexes resulting from a varying solution environment. For the case of the mixed-ligand solvatochromic complex [Fe-(bpy)(CN) 4 ] 2− , we have revealed an increase in π-backdonation as a function of the solvent Lewis acidity, which can The Journal of Physical Chemistry B pubs.acs.org/JPCB Article be directly inferred from the experimental L 2,3 -edge absorption spectrum. Furthermore, a linear increase of the absorption cross section can be observed, which is caused by a concomitant decrease in σ-donation that maintains the absolute local charge densities around the Fe center. Our findings can serve as a benchmark for generally describing the interaction of TM cyanide complexes with a solution environment and how this interaction alters the valence electronic structure to a varying degree depending on the solvent's Lewis acidity. Furthermore, the combination of MD and TD-DFT simulations could be established as a framework that is capable to qualitatively account for the dominant spectral changes in L 2,3 -edge absorption measurements of closed-shell systems caused by solute−solvent interactions. This was achieved by explicitly considering the solvent as a part of the total molecular entity, which allowed charge rearrangements to be followed throughout the solute− solvent interface. Further theoretical developments based on multiconfigurational approaches will be necessary, however, in order to achieve a more quantitative agreement between simulation and experiment. 
This will further allow final state effects within the core-excited state to be fully rationalized as well as the framework to be expanded to a wider range of systems including the important class of open-shell TM complexes.

■ SUPPORTING INFORMATION Additional details on data acquisition and data treatment as well as details of the molecular dynamics simulations, hydrogen bond analysis, functional benchmarking, and charge decomposition analysis (PDF)
2020-06-14T13:02:47.728Z
2020-06-12T00:00:00.000
{ "year": 2020, "sha1": "396cba759aa2f19598b462d07f706f6cd8f7cbb7", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jpcb.0c00638", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "12ab4ed6140b231e64b1fd11cd348ec685e33bab", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
216553263
pes2o/s2orc
v3-fos-license
Chiral torsional effect with finite temperature, density and curvature We scrutinize the novel chiral transport phenomenon driven by spacetime torsion, namely the chiral torsional effect (CTE). We calculate the torsion-induced chiral currents with finite temperature, density and curvature in the most general torsional gravity theory. The conclusion complements the previous study on the CTE by including curvature and substantiates the relation between the CTE and the Nieh-Yan anomaly. We also analyze the response of chiral torsional current to an external electromagnetic field. The resulting topological current is analogous to that in the axion electrodynamics. Very recently, a rather new type of chiral transport phenomenon is discovered. It is induced by the spacetime torsion in the presence of chirality imbalance, and naturally termed "chiral torsional effect" (CTE) [39]. Torsion is a hypothetical spacetime property in the augmented gravity theory called the Einstein-Cartan gravity, which has raised great attention among gravity physics as reviewed by Refs. [40,41]. It arouses extra research interest that the CTE is supposed to be connected with the Nieh-Yan anomaly [42] which depicts the torsional topology of spacetime [43,44]. Although never observed in real spacetime so far, torsion can be imitated by a lattice dislocation. This idea is formulated in lattice field theory and buttressed by numerical computation in Ref. [45]. In condensed matter, torsion is realizable in diverse materials like graphene [46], topological insulators [47][48][49] and Weyl semimetal [50][51][52][53][54], where the deformation of the materials acts as torsion effectively. Especially, Weyl semimetal is an ideal context for the CTE experiments since it bears a chirality imbalance as well. Despite its profound theoretical significance and promising experimental verifiability, to the best of our knowledge, the previous studies of the CTE are incomplete in the sense that they have neglected curvature effects, or specifically, the spin connection term in the covariant derivative, and overlooked a certain torsional term allowed in the general torsional gravity Lagrangian. Besides, the connection between the CTE and the Nieh-Yan anomaly remains not entirely clear. Firm computation in a complete setup is indispensable for comprehending the interplay between torsion, curvature and axial anomaly. Hence in this paper, we decisively calculate the CTE current at finite temperature, density and curvature in the most general torsional gravity theory. In addition, we analyze the current driven by the electromagnetic field in torsional spacetime, unveiling the impact of torsion on the conventional Maxwell electrodynamics. This paper is organized as follows. Sec. II serves as a brief review of torsional gravity. We introduce the basic notion of torsion and expound the general form of the coupling between torsion and a fermion. In Sec. III, we calculate the torsion-induced current. We first evaluate the current at zero temperature and density to clarify its relation to the Nieh-Yan's torsional topological invariant, and then generalize our calculation to finite temperature and density. In Sec. IV, We analyze the current driven by electromagnetic fields in the presence of torsion, and hereby illuminate the analogy between the electrodynamics of the torsional gravity theory to the axion electrodynamics. The conclusive section V presents our summary and outlook. We adopt the Euclidean spacetime throughout this work. II. 
TORSION The standard Einstein gravity theory assumes the symmetry of affine connection Γ λ µν = Γ λ νµ . Together with the metricity condition, this assumption leads one to identify the affine connection with the Christoffel symbol determined solely by metric: with a permutation symbol ∆ αβγ µνρ ≡ δ α ρ δ β µ δ γ ν + δ α ν δ β µ δ γ ρ − δ α µ δ β ν δ γ ρ . By contrast, the Einstein-Cartan gravity theory arXiv:2004.11899v1 [hep-th] 24 Apr 2020 relaxes the assumption of a symmetric affine connection, allowing for an antisymmetric part termed "torsion": We henceforth attach tilde in denoting quantities containing torsion. Affine connection itself is not a tensor, but torsion is, thus qualified as a physical quantity. Provided the metricity condition, the relation betweenΓ λ µν and Γ λ µν reads Equation (3) demonstrates that the spacetime features two independent intrinsic properties, metric and torsion. Correspondingly, the covariant derivative of a spinor field comprises an extra term embodying the coupling of torsion with a fermion: where e µ m is the vierbein satisfying the orthonormal relations e µ m e µn = δ mn , e µm e m ν = g µν . The first term in Eq. (4) is the torsion-free covariant derivative in the Einstein gravity theory: with σ ab ≡ i 2 [γ a , γ b ] and the spin connection With the covariant derivative defined by Eq. (4), we readily write down the Dirac Lagrangian in torsional curved spacetime, which is sometimes called the minimal theory. After some algebra [41], we rewrite Eq. (7) to sort out the torsional contribution: where S µ is what we call "screw torsion": with ε µνρσ denoting the covariant Levi-Civita tensor. The most general Lagrangian obeying covariance, locality, renormalizability and parity symmetry allows for another type of torsional term that we call "edge torsion", and takes the form of: The parameters η 1 and η 2 are arbitrary real numbers for the general theory, while the specific choice η 1 = 1/8, η 2 = 0 recovers the minimal theory (8). Two features of the Lagrangian (11) play essential roles in later computation. Firstly, the torsional terms are entirely separated. Thus we can conveniently define the perturbation away from the torsion-free theory that corresponds to the choice η 1 = η 2 = 0. Secondly, the edge torsion couples to a fermion in the same way as a U(1) gauge field. It enables us to easily encompass an external electromagnetic field by combining it with the edge torsion: In this way, we consider E µ together with the electromagnetic field in Sec. IV. Until then we turn off A µ for simplicity. III. TORSION-INDUCED CURRENT We aim to evaluate the torsion-induced chiral current in the most general theory (11) with metric and torsion treated as background fields. Our calculation starts from the following vacuum or thermal expectation value of the chiral current: where "+" and "−" stand for right-handedness and lefthandedness respectively, and P ± ≡ 1 2 (1 ± γ 5 ) denotes the chiral projector. Throughout the present section, as explained above, the electromagnetic field together with the edge torsion, A µ , is shut down, and the screw torsion S µ is disposed as a perturbation to the linear order. In parallel, the effect of curvature is also kept to the leading order in terms of the curvature tensor R µνρσ . The chiral current J µ ± is calculated in two different setups. The result at zero temperature and density is achieved in Sec. III A. 
The axial current, in this case, depends on the ultraviolet cutoff and its divergence proves related to the Nieh-Yan topological invariant. Then the generalization to finite temperature and density is accomplished in Sec. III B. The chiral current relies on the interplay between torsion and curvature, and exhibits a distinctive dependence on temperature and density in contrast to the CME and the CVE. A. Zero temperature and density At zero temperature and density, given that the screw torsion S µ is an axial vector, the vector current vanishes at O(S µ ). We therefore focus on the axial current We calculate it as the trace involving the propagator, in a similar way to Ref. [45]. The perturbative expansion with respect to the screw torsion gives rise to with G representing the torsion-free propagator and Tr standing for the trace over both Dirac indices and coordinate space. We make two remarks about our power counting. Firstly, given the symmetry property of the curvature tensor, the torsion-independent part vanishes at zero temperature and density, as also pointed out in Ref. [4]. Secondly, from the perspective of parity, one can understand that the first-order derivative of S µ does not contribute to the axial current. To simplify our computation, we employ the Riemann normal coordinate around the point x at which the current is evaluated. In this coordinate system, the Christoffel symbol Γ µ νρ vanishes at x and the γ-matrices are those in flat spacetime. After the transformation into momentum k-space, the propagator at the coincidental point acquires the following form according to Ref. [55]: where the function G(k) includes curvature effects in a perturbative way: The first coefficient A 1 is proportional to the scalar curvature, The subsequent coefficients, A 1α , A 1αβ , A 2 and so forth, consist of higher orders of curvature or derivatives thereof. One can refer to Ref. [55] for the specific value of them but we focus on the leading-order curvature effect so that A 1 suffices. Inserting Eqs. (16) and (17) into Eq. (15) and taking the trace over Dirac indices yield: We have introduced the ultraviolet cutoff Λ so as to figure out the dependence of the axial current on Λ, which is also implied in Refs. [43,44]. With detailed computation left in Appendix A, we present the conclusive result as: Let us examine the axial anomaly indicated by Eq. (20). To this end, we take the massless limit m → 0. Furthermore, since the curvature is independent of torsion and irrelevant to our interest here, we rightfully take R µνρσ = 0. Then the axial current reads Accordingly, the divergence of the axial current takes the form of [42] ∂ µ J µ 5 = In fact, the volume integral of the divergence is proportional to Nieh-Yan's topological invariant [42], which characterizes the torsional topology of spacetime. The relation (22) is referred to as the Nieh-Yan anomaly [43,44] in that the right-hand side has an anomalous nature and the left-hand side embodies Nieh-Yan's topological invariant. It is noteworthy that in a general sense, the divergence of the axial current in a torsional curved spacetime receives other contributions in addition to Eq. (22), which we are nevertheless unable to capture under our truncation scheme. For instance, Nieh-Yan's topological invariant should own the Pontryagin form of the curvature [42] in accompany with Eq. (23), which is at the second order of the curvature tensor. Besides, as firstly indicated in Ref. 
[56], there is Λ-independent torsional contribution to the axial anomaly from higher orders of torsion and its derivative. To grasp this, one shall extend our analysis to include higher-order terms of curvature and torsion. B. Finite temperature and density We now generalize to the chiral current (13) at finite temperature T , vector chemical potential µ and axial chemical potential µ 5 . For such purpose, we resort to the Matsubara formalism. We impose the stationary condition of metric, i.e., all metric components are timeindependent and the temporal components are spaceindependent, which justifies the standard Matsubara formalism. For simplicity, we consider a massless fermion with m = 0. We observe from the Lagrangian (11) that the temporal component of the screw torsion couples to a fermion in an identical way with the axial chemical potential. Thus we absorb it into a redefined axial chemical potential: Then without loss of generality, we specify the screw torsion to be pure space-like, and further direct it along the z-axis as S µ = S zẑ on account of spherical symmetry. One can manifest that only the τ -and z-components of the current (13) are nonvanishing. Since the τcomponent does not depend on S z at the linear order, we focus on the z-component, It is straightforward to prove that the current (25) can be evaluated by a similar formula to Eq. (15) with the momentum k µ therein replaced by with the Matsubara frequencies ω n ≡ 2πT (n + 1 2 ) and the chiral chemical potential µ ± ≡ µ ± µ 5 . To the linear order, the chiral current is expressed as where G ± is given by with the perturbative expansion of G ± (k) being formally similar to Eq. (17), Applying the formulas (28) and (29) to the expression (27) and carrying out the Dirac trace, we boil the computation down to the following sum-integral: Now that we are interested in the dependence of J z ± on temperature and density rather than the ultraviolet scale, we calculate the integral with dimensional regularization and subtract the divergence according to the modified minimal subtraction scheme. After the computation of the sum-integral detailed in Appendix B, we obtain the final result: The dependence on temperature and density is expressed utilizing digamma function ψ(z) as which is depicted in Fig. 1. We remark that the result (31) should not be directly compared with that in zero temperature and density (20), because the result of J µ ± would be changed by altering the order of taking the three limits, T → 0, µ ± → 0, and m → 0. For example in Eq. (31), the µ ± → 0 limit can be taken straight while the T → 0 limit should be analyzed through the asymptotic expansion, and apparently the results bear different coefficients of RS z . Discussion on the T → 0 limit is provided in Appendix B. Moreover, during the dimensional regularization, an infinite portion in J z ± exists as the counterpart of the Λ-dependent term in Eq. (20), but has already been subtracted, thus absent in Eq. (31). IV. TORSIONAL ELECTRODYNAMICS Now we come to the study of the current response to an external electromagnetic field in the presence of torsion. It is worth reminding that we combine the edge torsion with the electromagnetic field as Hence our analysis in this section accounts for the current driven by the edge torsion as well. For simplicity, we confine our study to the massless fermion on a flat metric with zero chemical potential. We also assume the screw torsion to be stationary and homogeneous. 
Under these assumptions, we can perform an axial transformation to eliminate S µ from the fermionic sector of the Lagrangian. This transformation meanwhile yields the following anomalous term in the gauge sector: where F µν ≡ ∂ µ A ν − ∂ ν A µ andF µν ≡ 1 2 ε µνρσ F ρσ . Remarkably, Eq. (35) is formally the same as the action of axion electrodynamics and the screw torsion plays the role of the derivative of the vacuum angle: S µ ∼ ∂ µ θ. The functional derivative of the action (35) with respect to A µ gives rise to the vector current, This equation summarizes multiple torsion-induced phenomena. The temporal component represents an anomalous charge density resembling the Witten effect [57], in which magnetic flux traversing the gradient of the vacuum angle induces the extra charge. Thus we entitle Eq. (37) the torsional Witten effect. On the other hand, the spatial component of the current reads: The first term is the torsional realization of the chiral magnetic effect [1] in which S τ acts as the axial chemical potential. We thereupon designate it as the torsional magnetic effect. The second term is a current perpendicular to the electric field, which we name the torsional Hall effect after the anomalous Hall effect [58,59]. As the parity dual of the vector current (36), the axial current is derived in parallel from the anomalous action (35) as Given that the torsion is mimicked by lattice dislocation [45], this relation would be suggestive for condensed matter experiments about creating chirality imbalance without axial chemical potential. V. CONCLUSION We calculate the torsion-induced current at finite temperature, density and curvature for the general Einstein-Cartan gravity theory. The axial current at zero temperature and density reveals the relation between the CTE and Nieh-Yan's topological invariant. The chiral current at finite temperature and density features a rather nontrivial dependence on temperature and density, distinguished from the quadratic dependence on T and µ ± in the CVE. Our work has not only theoretical significance but also phenomenological implications. It has been proposed that torsion can be realized as lattice dislocation, indicating that the torsion-induced current is experimentally verifiable. The interaction between torsion and electromagnetic field demonstrates torsion as an alternative to the axial chemical potential for the production of chirality imbalance, heralding broader physical contexts for the study of chiral transport phenomena. The analogy between torsional electrodynamics and axion electrodynamics substantiates that novel topological effects in the latter can exist in a torsional spacetime even without a vacuum angle. One interesting example is the recently discovered axionic Casimir force that proves anomalously repulsive in Ref. [60]. Based on this paper, several future directions await us to explore. For example, we have truncated the result to the leading order of both torsion and curvature. The generalization to higher orders would fully clarify the relation between the torsion-induced current and the axial anomaly in the Einstein-Cartan gravity theory. Also, we have treated the torsion as a background field and the extension to dynamical torsion would be a challenging yet intriguing future task. After some algebra, the Matsubara sum amounts to where ζ(z, a) denotes the Hurwitz zeta function. 
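As a numerical sanity check of this kind of fermionic frequency sum (a standard identity of thermal field theory rather than the specific sum-integral evaluated here, which involves the Hurwitz zeta function), a short Python sketch can verify that T Σ_n [(ω_n + iμ)² + E²]⁻¹, with ω_n = 2πT(n + 1/2), equals [tanh((E+μ)/2T) + tanh((E−μ)/2T)]/(4E):

import numpy as np

def matsubara_sum(E, T, mu, n_max=200000):
    # Truncated fermionic Matsubara sum: T * sum_n 1 / ((omega_n + i*mu)^2 + E^2)
    n = np.arange(-n_max, n_max)
    omega = 2.0 * np.pi * T * (n + 0.5)
    return (T * np.sum(1.0 / ((omega + 1j * mu) ** 2 + E ** 2))).real

def closed_form(E, T, mu):
    # Standard thermal-field-theory result for the same sum
    return (np.tanh((E + mu) / (2 * T)) + np.tanh((E - mu) / (2 * T))) / (4 * E)

E, T, mu = 1.3, 0.7, 0.4
print(matsubara_sum(E, T, mu))   # agrees with the line below
print(closed_form(E, T, mu))     # up to the truncation error, of order 1/n_max

The same bookkeeping, with the frequency shifted by the chiral chemical potential, is what enters the sum-integrals of this appendix.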
The integrals I_1 and I^z_2 have no divergence at ε = 0 and can be evaluated directly. On the other hand, the integrals I_2 and I^z_3 diverge at ε = 0 and therefore require regularization. We exploit the Laurent series expansion of the Hurwitz zeta function. In this way, we derive the result in which the function F(z), defined in Eq. (32), appears together with a constant term. Following the modified minimal subtraction scheme, we subtract the infinity as well as the logarithmic term in Eq. (B10), and obtain the final result. Eventually, one can obtain the chiral current (31) by plugging the sum-integrals (B7), (B8), (B12) and (B13) into the formula (B1). Notably, although the temperature T appears in the denominator of the argument of F(z), taking the zero-temperature limit T → 0 does not produce a singularity, because the digamma function converges for an argument with a large imaginary part. To elaborate this point, we perform the asymptotic expansion of the digamma function, which leads to the equation that illuminates the proper way to examine the low- or zero-temperature limit of our result (31). By comparison, one can analyze the small-density limit via the Taylor expansion of Eq. (31) with respect to μ_± straightforwardly.
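For reference, the asymptotic expansion invoked above is the standard large-|z| expansion of the digamma function,
\psi(z) \;\sim\; \ln z \;-\; \frac{1}{2z} \;-\; \sum_{k\ge 1}\frac{B_{2k}}{2k\,z^{2k}} \;=\; \ln z - \frac{1}{2z} - \frac{1}{12 z^{2}} + \cdots, \qquad |z|\to\infty,\ |\arg z|<\pi,
so for an argument with a large imaginary part, as in the T → 0 limit at fixed μ_±, ψ(z) grows only logarithmically and no singularity is encountered, consistent with the behaviour of Eq. (31) described above.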
2020-04-28T01:00:43.889Z
2020-04-24T00:00:00.000
{ "year": 2020, "sha1": "0c320337feb5271941a56513a192779119b877c5", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.102.016001", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "359a4378b52a69086ce0488614095fd7566424f4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237730619
pes2o/s2orc
v3-fos-license
Allogeneic hematopoietic stem cell transplantation in patients older than 65 years with acute myeloid leukemia and myelodysplastic syndrome: a 15-year experience Allogeneic stem cell transplantation (allo-HCT) is the only curative option for intermediate- and high-risk adult acute myeloid leukemias (AML) and myelodysplastic syndromes (MDS). Despite the median age of occurrence of these diseases (>65y) [1], the majority of patients older than 65y had historically been excluded from this potentially curative option because of both changes in tumor biology (conferring treatment resistance) and patient characteristics (decreasing allo-HCT tolerance), which translated into lower survival rates. Understanding which older patients are likely to benefit from allo-HCT versus low-intensity therapies or supportive care is critical. We here report our 15-year experience of allo-HCT in patients diagnosed with AML or MDS and older than 65y, with remarkable results in terms of survival, transplant-related mortality, relapse and graft-versus-host disease (GvHD) incidence. Between January 2005 and 2019, 90 consecutive adult patients aged >65y received an allo-HCT at San Raffaele, Milan. Patients' and disease characteristics are described in Supplementary Data 1. Patients' median age was 68.29y (maximum 76.53y). Donors' median age was 38y (18-74): 64y (55-74) for sibling donors, 39y (20-58) for haploidentical donors, 30y (maximum 50) for matched unrelated donors (MUD) and 32y (18-52) for mismatched unrelated donors (MMUD). Conditioning regimens and GvHD prophylaxis are described in the Supplementary Data; this platform allowed control of relapse and GvHD without increasing NRM. Multicentric and larger studies with similar platforms are needed to confirm our results. The novelties of our experience compared to previous and contemporary literature on patients allografted older than 65y are several. First, the majority of our patients received a treosulfan-based conditioning regimen and a rapamycin-based GvHD prophylaxis. The myeloablative properties of the treosulfan/fludarabine conditioning regimen have already been demonstrated [2][3][4] and our data confirm prompt engraftment, with full-donor hematopoietic chimerism, as well as a low relapse incidence, even in an older population. Second, the intensity of the conditioning regimens used in our patients is remarkably different from all previous reports on older patients so far. Indeed, in half of our patients we attempted to increase the intensity of the conditioning regimen with a second alkylating agent or with TBI. This could be a possible explanation for the lower relapse incidence in our patients, despite the high percentage of high/very high risk diseases.
Moreover, it has been reported that patients with advanced disease performed better after transplant if treosulfan was administered as part of their conditioning regimen [4]. Third, almost all patients received in-vivo T-cell depletion. NRM was not affected by the increased intensity or by the in-vivo T-cell depletion strategies; this may be a result of the low toxicity profile of the treosulfan-based conditioning regimen [5] and of the use of PTCy as the in-vivo T-cell depletion strategy [6,7]. Both day-100 and 3-y NRM were in line with previous reports on leukemic/myelodysplastic allografted patients, even younger ones [8,9]. The nearly exclusive use of PBSC did not increase the incidence of grade II-IV aGvHD, and the incidence of grade III-IV was similar to that of previous reports in older patients [10]. These results confirm the efficacy of a rapamycin-based GvHD prophylaxis and may support the more favorable cytokine production due to treosulfan use compared to busulphan [4]. Of note, most cases of extensive cGvHD resolved, and patients regained a good quality of life after withdrawing all IST. These encouraging results in terms of NRM, RI and GvHD resulted in a high OS and DFS at 3 years in our older population, far better than the first reports about allo-HCT in the elderly [8,11]. In contrast to previous data [11] but in agreement with Ustun et al. [12], we did not find any difference in transplant outcomes between patients younger and older than 70 years. Both KPS and HCT-CI proved useful for patient selection, but they are not superimposable. Patients diagnosed with AML had reduced survival after transplant, mostly as a result of a higher number of deaths from infections. Therefore, patients with an indication for allo-HCT should proceed to transplantation as soon as possible, according to their fitness. Based on our data, the use of a matched donor with PTCy as the backbone for GvHD prophylaxis seems to be the best choice for improving survival in the elderly. Our results confirm that patient age should not be the only criterion for excluding AML/MDS patients older than 65y from allo-
2021-09-27T20:55:40.458Z
2021-07-23T00:00:00.000
{ "year": 2022, "sha1": "e2bcc9ac484b48d59fe496358869dedefcbdc5ef", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-711774/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "fbac41cebcc50d48c389fa604c9fffc9344eaebb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270637158
pes2o/s2orc
v3-fos-license
Deep Ensemble learning and quantum machine learning approach for Alzheimer’s disease detection Alzheimer disease (AD) is among the most chronic neurodegenerative diseases that threaten global public health. The prevalence of Alzheimer disease and consequently the increased risk of spread all over the world pose a vital threat to human safekeeping. Early diagnosis of AD is a suitable action for timely intervention and medication, which may increase the prognosis and quality of life for affected individuals. Quantum computing provides a more efficient model for different disease classification tasks than classical machine learning approaches. The full potential of quantum computing is not applied to Alzheimer’s disease classification tasks as expected. In this study, we proposed an ensemble deep learning model based on quantum machine learning classifiers to classify Alzheimer’s disease. The Alzheimer’s disease Neuroimaging Initiative I and Alzheimer’s disease Neuroimaging Initiative II datasets are merged for the AD disease classification. We combined important features extracted based on the customized version of VGG16 and ResNet50 models from the merged images then feed these features to the Quantum Machine Learning classifier to classify them as non-demented, mild demented, moderate demented, and very mild demented. We evaluate the performance of our model by using six metrics; accuracy, the area under the curve, F1-score, precision, and recall. The result validates that the proposed model outperforms several state-of-the-art methods for detecting Alzheimer’s disease by registering an accuracy of 99.89 and 98.37 F1-score. www.nature.com/scientificreports/ that QML algorithms provide important advantages over traditional machine learning algorithms for many kinds of applications, including healthcare 11 . Several medical sectors have adopted deep learning and machine learning approaches to predict, diagnose, and classify the possibility of Alzheimer's disease.However, the findings have been impacted by the shortage of data and the accuracy of the model.Making full use of a few resources to improve AD diagnostic accuracy poses a significant obstacle in boosting healthcare.Deep learning techniques are mostly employed to autonomously find patterns in the targeted dataset without the involvement of human intervention.The Ensemble Learning takes the benefit of two or more deep models to enhance the correctness of the models.In this study, an ensemble deep learning algorithm was adopted to extract significant features from MRI scans and feed those features for QML to classify into mild demented, moderated demented, very mild demented, and non-demented.The proposed model detects the AD stages from an MRI of the brain using an ensemble of customized version of VGG16-ResNet50 deep learning models as feature extraction and applying QML algorithms for classification.The suggested approach makes decisions in a more thorough, dependable, and varied manner.The following are the main contributions of our study: 1.The proposed efficient ensemble approach that combines customized version of VGG16 and ResNet50 for Alzheimer's disease feature extraction from ADNI MRI image data 2. We used Quantum Support Vector Machine (QSVM) for classification with high accuracy using ADNI1 and ADNI2 datasets and also explored the effect of quantum ML to effectively improve the computational efficiency of the model 3. 
We evaluated the proposed models' effectiveness against other cutting-edge techniques.Additionally, we conducted a comparison between classical Support Vector Machine (SVM) and QSVM classifiers. The order of the remaining sections is as follows.In-depth review of prior works is discussed in Section "Related work".In Section "Methods", the background material-which consists of the two primary components, EL and QML-as well as the technique utilized is described.The data set, the models under investigation, and the suggested ensemble model are the main points of interest.The experimental results investigation, acquired data, and model evaluation are presented in Section "Result and discussion".Section "Conclusion and recommendation" concludes with recommendations for further research. Related work The difficulty of diagnosing AD from MRI scans has been examined, and different approaches have been explored.Modeling the diagnosis as a prediction and classification problem is the most employed approach 9,12,13 .In the light of recent studies Ruiz et al. 13 proposed an ensemble of 3D densely connected convolutional network models to perform a 4-way classification of 3D MRI images.As every layer in the proposed network is connected to each subsequent layer in a block, dense connections are applied to improve data flow inside the model.Better outcomes come from their studies using the ADNI dataset, which consists of preprocessed 3D MRI images from four subject groups: AD, healthy control, early MCI, and late MCI.Another research study Leela et al. 14 developed a deep learning-based, automated solution to detect AD early.Durable principal component analysis (RPCA), deep VGG-19 approaches, and the idea of transfer learning were all implemented into their proposed model.The approach plays a role because it is capable of identifying Alzheimer's disease subtypes using fused CT-MRI and EEG signals.Additionally, Chen et al. 6 introduced Multiview-slice attention and 3D convolution neural network (MSA3D), a fusion model that integrates multiple slice features and 3D structural information organically for AD classification.They fused 2D and 3D features to generate more discriminative representations.As determined by their experimental result, the model obtained accuracy values of 91.1 and 80.1% on ADNI-1 for diagnosing AD and mild cognitive impairment (MCI) convention prediction, respectively.We propose to solve this problem in our research, which is the dearth of data and the inadequate accuracy that still hinders their use in real-life applications.The authors in Kalkan et al. 15 presented cutting-edge CNN applications using single and multimodality neuroimaging data for AD classification.The authors explored the effective approaches for classifying AD to assess several dataset types, neuroimaging modalities, preprocessing strategies, and data management strategies.CNN has achieved major advances in classifying AD, but there are still many obstacles to overcome, especially given the dearth well neuroimaging data and its potential application in this area.The authors in Amoroso et al. 4 examined how brain connectivity is affected by AD, using T1 brain Magnetic Resonance Imaging data (MRI) acquired within the ADNI.They showed how graph theory-based models can accurately identify these clinical problems and how game theory's SHapley values applied to make developed models understandable and simple to grasp.The other researchers Orouskhani et al. 
16 introduced a few-shot learning technique called deep metric learning, that utilizes a conditional loss function to overcome the limitation of a few samples and enhance the accuracy of the model.Experiments with OASIS datasets reveal that the model, which was inspired by VGG16, exceeded the most advanced models as a matter of accuracy.Further Baglat et al. 17 employed diverse machine learning classification algorithms including Logistic Regression, Decision Tree, Random Forest classifier, Support Vector Machine, and AdaBoost for the early identification and classification of Alzheimer 's disease using Open Access Series of Imaging Studies (OASIS) dataset.Their efforts revealed to us a noteworthy level of performance and resulted in classification using the Random Forest classifier.The authors Lu et al. 18 developed an MRI-based AD diagnosis based on deep learning/ transfer learning classifier on significantly vast and diverse datasets.They trained and tested the algorithm on a dataset of unprecedented size and diversity (from more than 217 sites/scan.They constructed an Inception-ResNet-V2 as a sex classifier with high generalization capability and achieved 94.9% accuracy.Another work conducted for the early identification of heart ailments Heidari and Hellstern 10 presented two quantum machine-learning techniques: a hybrid quantum neural network and a hybrid random forest quantum neural network.Moreover, to estimate the risk of heart disease Abdulsalam et al. 11 proposed an ensemble machine-learning model based on quantum machine-learning classifiers.The proposed approach used a quantum support vector classifier as the base classifier in a bagging ensemble learning framework.Also, they utilized the SHapley Additive exPlanations (SHAP) framework to figure out and quantify the relevance of each feature in the prediction. A deep learning pipeline was offered by EL-Geneedy et al. 19 for comprehensive stage-by-stage classification of Alzheimer's disease (AD).Their approach divides 2D T1 brain MRI images into four stages: very mild dementia, mild dementia, moderate dementia, and non-dementia using a low-level convolutional neural network architecture.The authors conduct a comparative analysis between their methodology and cutting-edge deep learning architectures, such as InceptionV3, DenseNet121, ResNet50, VGG 16, and EfficientNetB7.Reported testing accuracy reached 99.68%.However, further elaboration on the specifics of their model may be necessary. To classify Alzheimer's disease from cognitively normal and its mild cognitive impairment, Lim and colleagues 20 developed a model using just three-dimensional brain MRI scans from the ADNI collection.Together with a CNN that was created from scratch, pre-trained VGG-16 and pre-trained ResNet-50 were employed as feature extractors.The most accurate of these models was VGG-16, which had an accuracy of 83.9%. MRI scans of AD and healthy individuals are similar, making the AD classification task challenging.While the previously stated studies achieved high accuracy when distinguishing between AD and healthy, their practical applicability is still limited because of computationally inefficient classification algorithms and still there is a room for accuracy improvement, which is a limitation that we propose to address in this study.Our method focused on classifying AD dementia stages classification by taking key features based on an ensemble learning model and feeding the combined features to an efficient and robust QSVM classification algorithm. 
Methods The focus of this work is the development of an ensemble DL-based quantum machine learning classification model for the diagnosis of AD disease.Deep learning architectures are the most widely employed for processing and analyzing brain images in research projects 8,18 .In this paper, we introduced a method based on ensemble learning and quantum machine learning classification algorithms that analyze MRI brain images and extract meaningful features for successful classification of AD stages.According to Fig. 1, The proposed algorithm uses sequential steps, The ADNI1 and ADNI2 MRI image data sets are first prepared, pre-processed, and then combined to use in the proposed approach.This is followed by the building of an ensemble model, and its parameters are configured for feature extraction and the features are fed into QSVM for classification of AD dementia stages.Finally, the performance of the proposed model is conducted and compared against the other cutting-edge methods.To judge the models, numerous performance metrics, including accuracy, recall, precision, F1-score, and AUC were calculated.The results implied that the ensemble model-based QSVM is superior to the other cutting-edge methods in terms of performance.Likewise, it can be inferred that effective outcomes can be achieved by combining quantum classifiers and ensemble learning. As visualized in Fig. 1, the VGGNet model consists nine convolutions, two batch normalizations, three maxpooling, two dropouts, and one flattened layer.A dropout layer is inserted below each max pooling and dense layer to overcome overfitting.while the ResNet model consists of one convolution, one max-pooling, one average polling, one batch normalization, one activation, two identity blocks, three conv blocks, and one flattened layer.Finally, the extracted and flattened features from the two models are concatenated to classify into four Alzheimer's disease types using the QSVM classifier. Data and material The data used to develop the model in our work was retrieved from different sources, such as the ADNI 1 dataset and the ADNI2 dataset from the Kaggle databases (ADNI_Extracted_Axial (kaggle.com)).Alzheimer's disease Neuroimaging Initiative is a large-scale study focusing on the early detection and progression monitoring of AD.We combine ADNI 1 and ADNI 2 datasets of AD since both are obtained from MRI scans of AD patients in different time stamps.Combining different sources of datasets used to boost deep learning training algorithms for performance improvement.We used only MRI scans of patients from both sources to increase the number of datasets.The high number of datasets helps to reduce overfitting problems in DL algorithms.The summary of our dataset is in Table 1. 
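As an informal sketch of how the two MRI collections can be merged into a single labeled dataset (the directory paths, folder names and the TensorFlow ≥ 2.9 utility below are illustrative assumptions, not the authors' exact pipeline):

import tensorflow as tf

IMG_SIZE = (128, 128)

# Each root is assumed to contain one sub-folder per class:
# MildDemented / ModerateDemented / NonDemented / VeryMildDemented
adni1 = tf.keras.utils.image_dataset_from_directory(
    "data/ADNI1", image_size=IMG_SIZE, label_mode="int", batch_size=32, seed=42)
adni2 = tf.keras.utils.image_dataset_from_directory(
    "data/ADNI2", image_size=IMG_SIZE, label_mode="int", batch_size=32, seed=42)

# Folder names (and hence label indices) must match in both roots before concatenating
merged = adni1.concatenate(adni2).shuffle(1000)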
Preprocessing The chosen data was pre-processed using a standard processing pipeline.We used a cropping algorithm to eliminate the bone and skeleton portions of the MRI images since this superfluous portion is not significant for AD classification.The original dataset had an image resolution of 176*208.We must scale the MRI image to 128*128 pixels in width and height due to hardware limitations.We lowered the dataset's dimensionality to 5 qubits in order to use it for quantum classification.In our study, we used an adaptive median filter to remove outliers for the facilitation of a reliable classification process.The augmentation technique was applied to increase the number of the Alzheimer's disease dataset.The efficacy of the augmentation in terms of model over fitting was also improved to increase the generalization capability of deep learning models.Also, there is a class imbalance problem so to alleviate that we used data augmentation comprising arbitrary height and width shift (range, 0-10%) and zooming (range, 0-8%) on the training set. Deep learning Deep learning models, specifically convolutional neural networks (CNN), have revolutionized disease detection in healthcare.We employed the prepared VGG-16 convolutional neural network model, which was enhanced by 24 .In our Ensemble model, we concatenated the features obtained from the two models as shown in the architecture depicted in Fig. 1.As previously discussed, the proposed model aims to accurately diagnose AD disease by concatenating deep features extracted from MRI images by using two different models (customized CNN architecture of VGGNet and ResNet50).First, a VGGNet model is proposed to extract features from MRI images.Correspondingly, the ResNet model extracts features from the same images.To end with, the extracted features from these models are flattened and concatenated into a single classification descriptor.Then, the extracted features are fed into the QSVM classifier. 
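A minimal TensorFlow/Keras sketch of such a two-backbone feature extractor is given below; the stock VGG16 and ResNet50 applications stand in for the customized VGGNet/ResNet variants described above, and the dropout rate, the tanh activation and the 5-unit bottleneck (matching the 5-qubit reduction mentioned earlier) are illustrative choices rather than the authors' settings:

import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SHAPE = (128, 128, 3)   # images resized to 128x128; grayscale slices replicated to 3 channels

# Two backbone feature extractors (randomly initialized stand-ins for the customized variants)
vgg_base = tf.keras.applications.VGG16(include_top=False, weights=None, input_shape=IMG_SHAPE)
res_base = tf.keras.applications.ResNet50(include_top=False, weights=None, input_shape=IMG_SHAPE)

inputs = layers.Input(shape=IMG_SHAPE)
f_vgg = layers.Flatten()(vgg_base(inputs))
f_res = layers.Flatten()(res_base(inputs))

# Concatenate the two flattened feature maps into a single descriptor
features = layers.Concatenate()([f_vgg, f_res])
features = layers.Dropout(0.3)(features)                    # dropout against overfitting (illustrative rate)
bottleneck = layers.Dense(5, activation="tanh")(features)   # 5 bounded features -> 5 qubits downstream

feature_extractor = Model(inputs, bottleneck, name="vgg_resnet_ensemble_features")
feature_extractor.summary()

# X_train: (N, 128, 128, 3) float32 images; the 5-D vectors then feed the (Q)SVM classifier
# train_features = feature_extractor.predict(X_train)

In practice the two backbones would first be trained or fine-tuned on the four-class task before the concatenated features are exported to the quantum classifier.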
Classification with quantum machine learning Support vector machine is a classical machine learning algorithm that uses training data from the sets to classify vectors in a feature space into one of two sets.It attempted to discover a high-probability optimum separation hyperplane between two distinct groups or class in a set of samples, with whole training samples of the class located on one side of the hyperplane.The linear discrimination problem is to develop a hyperplane that may be used as a decision-boundary classification task while also providing substantial differences between two class regions 25 .Quantum-enhanced machine learning techniques can accomplish several tasks, among them lowering training time, managing complex network topology, automatically modifying network hyper parameters, performing complex matrix and tensor manipulation at high speeds, and using quantum tunneling to achieve actual objective function goals, in contrast to traditional machine learning algorithms 26 .In quantum computers, Quantum SVM is the quantum counterpart of the classical SVM.The QSVM has a quantum advantage over the classical SVM in situations where it is challenging to estimate the feature map classically.Using a quantum kernel in QSVM algorithms, quantum computers can accelerate learning by using a quantum kernel 11 .Using quantum feature maps that map data points to quantum states, classical data can be encoded to be processed by a quantum computer 10 .In the following direction, QSVM, a quantum machine learning algorithm was adopted to take the essential features obtained by the ensemble model and classify the MRI scan as AD stages. Figure 2 illustrates the structure of the QSVM algorithm, in which the feature maps are flattened by applying the dense layer.The flattened feature maps were subsequently mapped to quantum spaces using a 5-qubit feature map.By taking the inner product of the quantum feature maps, the quantum kernel maps the quantum state data points into higher-dimensional space.Following the QSVM classifier fitting to the training data and evaluating the performance of the model using the test data, for each classical input, the measurements decode the quantum data into the corresponding classical output data. Result and discussion We conducted the experiments using a Hewlett Packard Core i5, sixth-generation, 8 GB RAM, and a Colab Pro GPU that was manufactured by Google.This section presents all the experiments conducted on four class Alzheimer's brain disease datasets.We utilized efficient ensemble deep-learning architectures that consumed minimum resources.The parameters utilized for this experiment are presented in Table 2. 
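Before turning to the results, a minimal sketch of the quantum-kernel classification step described above, assuming the qiskit-machine-learning package (class names and signatures differ between Qiskit releases, so the imports are indicative only):

from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# 5 features per sample -> 5-qubit feature map, matching the dimensionality reduction above
feature_map = ZZFeatureMap(feature_dimension=5, reps=2, entanglement="linear")
quantum_kernel = FidelityQuantumKernel(feature_map=feature_map)

qsvc = QSVC(quantum_kernel=quantum_kernel, C=10.0)   # C is an illustrative hyperparameter
qsvc.fit(train_features, train_labels)               # 5-D ensemble features from the CNN stage
test_predictions = qsvc.predict(test_features)       # four-class prediction via the kernel SVM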
Results of the end-to-end deep learning models Experiments are conducted on pre trained VGG16 + ResNet50 models but they scored less accuracy in our dataset.This is due to pre trained models are trained using different plants from ImageNet dataset and these image features are different from Alzheimer diseases features that is the reason for these pre trained models scored less accuracy in our experiment.Experiments were conducted using fine-tuned deep learning models including customized version of VGG-16 (VGGNet), ResNet-50 (ResNet), and Ensemble models (VGGNet ResNet).These models were trained and evaluated by applying the categorical cross-entropy loss function for mild demented, moderate demented, non-demented, and very mild demented cases.ResNet and VGGNet were enhanced with a batch normalization layer to speed up training, decrease learning time, and lessen generalization errors.Moreover, a dropout layer was utilized to avoid overfitting.There were 125 epochs implemented for each model.The results of the individual and ensemble models are summarized in Table 3.As compared to individual model VGGNet, ResNet scored the lowest accuracy, precision, recall, F1 score, and area under the curve for Alzheimer's disease four-class classification.Both the VGG-16 and Ensemble models achieved nearly the same classification accuracy.Ensemble models achieved outstanding performance in individual deep-learning models. The performance comparison of individual models using various metrics is shown in Fig. 3. ResNet scored poorly in terms of recall and F1 score.VGGNet achieved outstanding result in addition to ensemble models.When compared to the other metrics, the area under the curve (AUC) fared better. The training-validation accuracy are displayed in Fig. 4a.At epoch 1, we observed that the training accuracy was 52.56, despite this by epoch 10, we started to see disparities in the data.We boosted the performance of the ensemble deep learning models by training them for 125 epochs.Figure 4b The performance of the deep learning models with SVM classification The effects of the deep models based on SVM were also examined on the Alzheimer's disease dataset to figure out the effectiveness of the proposed model, as illustrated in The performance of the deep learning models with QSVM classification The results of our proposed ensemble model, deep learning models with QSVM classifier are presented in Table 5. The VGGNet with QSVM achieved a 95.65% accuracy and 98.85% AUC.ResNet + QSVM achieved a 91.56%The performance comparison of the ensemble models with other models is presented in Fig. 5.Among the ensemble models with other deep models, the proposed ensemble model performed efficiently, with highperformance metrics.The VGGNet with QSVM achieved outstanding results than the end-to-end VGGNet and VGGNet + SVM.Similarly, ResNet with QSVM scored with remarkable accuracy as compared to end-to-end ResNet and ResNet with SVM.The experiments validate that using QSVM for classification provided excellent results in all deep learning models in all evaluation metrics.Table 6 tabulates a systematic comparison of the present work with previous related approaches. 
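For completeness, the reported metrics can be computed with scikit-learn along the following lines; this is an illustrative sketch, not the authors' evaluation code, and it assumes per-class probability scores are available for the AUC:

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    # y_true, y_pred: integer labels for the 4 AD stages
    # y_score: (N, 4) per-class probabilities, e.g. from a classifier with probability=True
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "f1":        f1_score(y_true, y_pred, average="macro"),
        "auc":       roc_auc_score(y_true, y_score, multi_class="ovr", average="macro"),
    }

print(evaluate(test_labels, test_predictions, test_scores))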
Conclusion and recommendation The challenge of AD classification pushed us to develop an efficient model by integrating data from different sources to obtain a better outcome prediction. This paper proposed an ensemble DL model with QSVM for AD classification that feeds the learned features into the QSVM. Using the ADNI1 and ADNI2 MRI data taken from Kaggle, we evaluated the performance of ensemble learning methods with the classical SVM and QSVM classifiers. We used 5-qubit quantum hardware or a simulator, utilized the QSVM model from the Qiskit library, and optimized it by tuning the hyperparameters. According to the experimental results, the proposed model outperforms the classical SVM in terms of AD classification accuracy and training time. It provides a significant solution to support AD primary care, especially where the MRI scan is blurred and difficult for experts to properly suggest the disease. However, further research needs to be conducted to evaluate implementation scenarios by integrating the model within medical devices for AD diagnosis.
Figure 1. Architecture of the proposed ensemble model with QSVC/QSVM.
Figure 3. Comparison of individual models using performance metrics.
Figure 4. (a) Training and validation accuracy curves of the proposed ensemble model; (b) training and validation loss curves of the proposed ensemble model. Figure 4b presented the performance curves of the ensemble VGGNet + ResNet model, where the training loss was at its lowest point at epoch 80 and the validation loss at epoch 60; the validation loss dropped off dramatically between epochs 40 and 125.
Figure 5. Comparison of deep models using performance metrics with QSVM.
Table 1. Summary and overview of our datasets. Simonyan and Zisserman identified the 16-layer convolutional architecture referred to as the VGG-16 model in 2014. The VGG-16 model is a large network with about 138 million parameters. It stacks many convolutional layers to construct deep neural networks, which boosts their ability to learn hidden features. The network's input image has dimensions of (224 × 224 × 3), and the network includes 16 convolutional layers with fixed-size (3 × 3) filters and 5 max-pooling layers of size (2 × 2) 21. ResNet-50 is perhaps the most powerful convolutional neural network architecture of the recent decade 22 and was also a winner of the ILSVRC competition. ResNet-50, a convolutional neural network with 50 layers, is one of the versions of ResNet: 48 convolution layers plus one max-pooling and one average-pooling layer. It is a deep residual learning framework that can resolve the vanishing-gradient problem even in extremely deep networks. Despite containing 50 layers, ResNet-50 has around 23 million trainable parameters, significantly fewer than previous architectures. Rather than learning features directly, the residual network learns residuals, i.e., the difference between the learned features and the layer inputs; ResNet connects the input of the nth layer directly to an (n + x)th layer, allowing additional layers to be stacked and a deep network to be established. The proposed ensemble model for feature extraction. Feature extraction enhances the performance of categorizing biomedical signals. To increase the efficacy of the classifier, feature extraction intends to discover the most relevant and valuable set of features (unique properties). The most important step in classifying biomedical signals is feature extraction, since improperly chosen features can cause the classification performance to suffer 12. Unlike traditional methods that are time-consuming and require specialized knowledge for feature extraction, deep learning can automatically extract relevant features from input images, resulting in improved prediction accuracy 23. In this research, we used VGG16-ResNet50 as a base model to exploit the local spatial characteristics of the images. A multitude of features is retrieved from pre-processed image data. The features of MRI images of AD patients are explored by ensemble deep learning models, namely customized versions of VGG16 and ResNet50. Since different CNN architectures capture diverse information from the input images, which increases performance compared with a single model, concatenating the features of the two models integrates the information from different CNNs to create a more discriminative feature representation than that extracted from a single CNN model.
Table 2. Summary of our proposed model parameter settings.
Table 3. Results of individual deep learning models.
Table 4. The performance results of the deep models with classical SVM. The VGGNet + SVM model achieved an 85.24% accuracy, 85.00% precision, 85.30% recall, 85% F1 score, and 89.18% area-under-the-curve (AUC) score. The ResNet + SVM model achieved 82.24% accuracy, 85.43% recall, and an 84.73% F1-score. The proposed ensemble with the classical SVM model achieved 86.78% accuracy and a 90.53% area under the curve (AUC). Even though the proposed ensemble model achieved 86.7% accuracy on the SVM classifier, which is better than the other models, it is not a remarkable result.
Table 5. Results of deep learning models with QSVM. All the ensemble models achieved good performance results and accurately classify the AD stages from the merged ADNI dataset. The ResNet + QSVM model achieved 6% better accuracy than the standalone ResNet end-to-end model. The proposed model achieved 8.5% and 12.21% better results when compared with the proposed end-to-end ensemble model and the ensemble with SVM, respectively.
Table 6. Comparison of the present work with previous architectures.
2024-06-22T06:17:44.657Z
2024-06-20T00:00:00.000
{ "year": 2024, "sha1": "ebcf5fc39e8b03181d20f865b99af55b2f35e34f", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ed240fc80e2587cea954d8959af9ab08e9fe2985", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
237300567
pes2o/s2orc
v3-fos-license
Improvement of Video Measuring Systems for Electric Traction Network Diagnostics Purpose. The purpose of the article is system analysis of the state of electric traction networks, as well as methods of complex diagnostics of the contact network from a moving laboratory car to increase the resolution capability of the systems for monitoring the quality of interaction between the contact network and current collectors. Methodology. The problem was solved by theoretical analysis and experimental studies of the current collection parameters, a generalized model of the device for monitoring the wear of the overhead wire and its functional units in order to determine the factors affecting the control error, as well as the development of methods that reduce the specified error. The apparatus of factor analysis, the theory of optoelectronic circuits and methods of statistical information processing were used. Findings. Innovative approaches and qualitatively new diagnostic tools are proposed that allow expanding the functionality of the laboratory cars for testing the contact network for power supply enterprises of electrified railways, industrial and urban electric transport. Hardware and software have been developed to improve the system for measuring the parameters of the overhead wire and other components of the contact network. Originality. The theoretical maximum permissible, from the point of view of the contact network operation, error in monitoring the wear of the overhead wire and other components of the electric traction network has been determined. A method for increasing the resolution capability of a stereo television system and an adaptive lighting system is proposed. It consists in preliminary image transformation and expansion of the dynamic range of image measurement. The ways of introducing a high-speed real-time compression algorithm and using LED backlighting are proposed. Practical value. The quality of the contact network diagnostics in difficult conditions for video surveillance has been improved. A camera with a built-in image compression module without losing its performance is proposed, which allows capturing and transmitting full-frame images to a computing complex for the application of new diagnostic algorithms for contact network components. The modernized video measuring systems for the wear of the overhead wire for monitoring the grounding of the contact network supports are proposed, as well as elements of track facilities located in the visibility zone of specialized cameras, which ensure the operability of the systems at any time of the day at speeds up to 160 km/h. An air curtain subsystem was implemented to protect the cameras. Introduction The basis for ensuring the traffic safety is systems for diagnosing the state of infrastructure devices, which allow predicting their possible failures and eliminating emergency situations in a timely manner. A non-redundant link of electrified lines of railways and urban electric transport is a contact network (CN), the failures of which lead to delays of trains, urban electric transport and, as a consequence, to economic damage [1][2][3][4][5][6][7][8][9][10][11][12]. An important feature of the CN is its participation in the current collection, which causes the appearance of loads requiring the improvement of interaction models of current collectors with the CN and the development of a theory of high-current contact to improve the diagnostic systems [3,14]. 
The uniqueness of the CN places high demands both on the design of its devices and to the methods of their technical operation [4,7]. Reliable and economical operation of the CN is impossible without automated diagnostic devices that allow detecting the locations of malfunctions or other aberrations, as well as analysing them to develop managerial decisions to ensure the uninterrupted movement of the rolling stock [9,[11][12][13]16]. Railway and industrial companies in many countries strive to eliminate interruptions in the movement of trains and electric industrial and urban electric transport through high-quality diagnostics of the CN and malfunction repair. In this case, the economic aspect associated with the optimization of the service life of the CN devices and, first of all, the overhead wire (OW) plays the most important role [7,8]. Today, the triangulation method for determining the height and zigzag of the OW, the phase measuring method based on the use of light sources, the method of monitoring the position of the video camera system and OW wear are known in application. The existing methods are based on one of two basic principles: contactless measurement or contact one, when the sensor is installed on the current collector and touches the overhead wire. Using the contactless systems, measurements can be made at any movement speed, but the measurement accuracy is reduced. Contact systems have higher accuracy, but they provide measurements at low speeds [15,17]. The optical method of automatic control involves installation of several optical systems on the roof of the laboratory car for testing the contact network (LTCN), which films the catenary suspension with supporting structures on the move from different points and transfers the resulting images to a storage unit. In the measurement system, specialized highspeed television cameras are used, and fan-shaped raster pulse laser illuminators are used to illuminate the overhead wires. On the German Railways Network (DBAG), for monitoring small devices, such as bolt heads or torn wire strands of the carrier cable, the optics resolu-tion is 1-2 mm, and the flash duration when illuminating the object should not exceed 45 μs. When measuring the OW wear, comparison with previous results is not required. A comparison with the crosssection of the new wire is enough here. The measurement accuracy of this type is 0.1 mm, and a decrease in the wire thickness can be detected already at a length of 2-3 cm [15,17]. In recent years, qualitatively new diagnostic tools have been developed based on video measuring systems. Their speed and detection reliability when diagnosing the CN elements have been increased. Computing power has been significantly increased, new more advanced photoreceiving components and image processing algorithms have been developed [9,[13][14][15][16][17]. Therefore, there is a scientific problem of improving video measuring diagnostics systems to ensure reliable and economical current collection on electrified lines of railways and urban electric transport. Purpose The main purpose of the article is a systematic analysis of the state and development prospects of electric traction networks of electrified railways and urban electric transport, the development of hardware and software for improving video measuring diagnostic tools and expanding the functionality of laboratory cars for testing the contact network. 
Methodology In our opinion, in the current situation, it is necessary to solve the problem of complete replacement of most of the devices of electric traction networks of railways and urban electric transport by investing significant funds in modernization. This is evidenced by the experience of foreign countries. On many railway sections of transport corridors, as well as in large cities, a new CN is required, and only in this case safe and economical operation of the power supply for train traction system and urban electric transport will be ensured [6,14]. The problem of increasing the resolution capability of the quality control systems of the current collectors and CN's interaction from the moving laboratory car was solved using a complex approach. This approach includes theoretical analysis and experi-mental research of the parameters of the control objectoverhead wire (OW), modelling of the control device of the wear of overhead wire and its functional units, determination of factors influencing the control error [3,4,7,10]. At the same time, the apparatus of factorial analysis, the theory of optoelectronic circuits and methods of statistical information processing were used to determine the theoretically maximum allowable error in monitoring the wear of the OW and other components, from the point of view of the operation of CN. Findings Analysis of the state and development prospects of electric traction networks of railways and urban electric transport. At all stages of the development of railways, electrification was the leading link in their reconstruction, qualitatively changing the operational work (Fig. 1). For the period 1994-2011 more than 1700 km of the operational length of railways were electrified, the polygon of electrified lines was increased by 21%, while the volume of electric traffic increased to 89.7%. The highest electrification rates were achieved in 2011-2012 on the sections of the accelerated movement of passenger trains with an operational length of 176 km. The specific weight of the length of electrified lines increased to 47.3%, and the specific weight of electric cargo turnover was more than 91.2% [1]. The task of further electrification was planned in the volume of phased implementation: in total for the period 2013-2020about 1 841 km. However, this program failed. At the same time, in recent years, there has been a tendency for the development of urban electric transport in large cities. The rate of ageing of power supply devices, given the existing funding shortfall, continues to outstrip the rate of reconstruction. The length of the electrified lines operated beyond the average period (40 years) increased from 5012 km (or 52.0%) in 2007 to 6393 km (or 62.3%) in 2012, and in 2020 up to 6820 km (or 67.9%). Today, 73% of the total number of traction substations of urban electric transport operate with a service life of more than 40 years, and 43% with a service life of more than 50 years. Thus, a complete reconstruction of more than 80% of the length of the contact network and traction substations of urban electric transport is required. There is no global experience in operating a contact network with such ageing rates. Specific damage to CN, which has served for 40 years or more, is 2.7 times higher than in the sections with a service life of 10 years [1,2]. 
Analysis of the number of damages at all junctions of the railway CN for different periods shows that the overhead wire and cables, insulators, droppers, clamps and parts fail most often. Fig. 3 shows the failure dynamics of these devices for railways. This is explained by the fact that, structurally, all other elements of the CN are designed to support the OW in the set position. Failures of any element of the CN often result in OW failure. On the other hand, the OW is the element of the CN that directly interacts with the current collector. Interaction with the current collector during the current collection process causes intensive ageing of the wires and a large number of sudden failures caused by malfunctions of the electric rolling stock. Over the past 10 years, there has been a tendency for an increase in the number of CN failures due to mechanical and electromechanical wear of droppers, as well as of clamps and parts. The exhaustion of the height reserve of the support structures, which occurred on a significant part of the polygon due to multiple track repairs, has also become a problem. This problem can be solved only with an overhaul repair of the CN [2]. Calculations have shown that over the past 10 years there have been changes in the failure risks, which reflect the growth of ageing, wear and degradation processes of the CN. The most significant is the risk of OW failures. The risk of failures of the catenary suspension and current collectors in monetary terms is so great that it requires drastic decisions in the field of investments, both in overhaul repair or construction of new catenary suspension and current collectors of electric rolling stock, and in new systems for diagnostics of current collection. Analysis of damage to power supply devices for urban electric transport shows that the overwhelming amount of damage is accounted for by contact networks, especially in the sections with a service life of more than 40 years. Significantly less damage occurs at traction substations (TS). The damage distribution by type of device is: CN, 49.7%; TS, 5.4%; cable lines and trackways, 44.6%. Over the past 10 years, the mathematical expectations and root-mean-square deviations of damage to the contact network and current collectors, expressed through delays of trams and trolleybuses as a percentage of the total number of delays, are 194.5 and 15.5, which is 64.3% and 3.4%, respectively. The need to expand tram and trolleybus lines and to modernize power supply devices in a resource-saving environment requires new technologies for the design, construction and operation of infrastructure facilities. For the first time in Kharkiv, a new generation of traction substation with dry transformers, 12-pulse rectification circuits, digital protection and equipment diagnostics was put into operation, ensuring operation according to condition. It is necessary to create automated systems for laboratory cars for testing the CN of trams (LTCN-T), which recognize hidden defects in the CN (Fig. 4), as well as laboratories on the basis of trolleybuses or trucks for the trolleybus CN. This task is posed in our country for the first time [7][8][9][10][11][12]. The modern LTCN-T includes an optical-mechanical unit; a laser fast-acting system for OW diagnostics; a video surveillance and information processing system; an additional power supply system; a complex control panel and functional panel; and sensors of rotation angle, stresses, lateral displacements, ambient temperature and car movement speed.
The tram laboratory for a comprehensive assessment of the infrastructure state, in addition to the LTCN-T parameters, allows measurements of track depression and alignment, longitudinal track gradient, acceleration on the bogie and body, track width, rail wear and CN support dimensions, as well as video monitoring of the rail track, connections of assembling joints, connection of supply cables and inter-rail connections. For example, a specialized video system (Fig. 4, c) records a video image, whose viewer is displayed on the screen. The programs work synchronously. This makes it possible to stop the recording at places where clarification is needed (for example, a large zigzag), enter the video program at this place and take a photo with the necessary comment for analysis, for issue to the repair site and for restoring serviceability. The use of the LTCN-T allows obtaining objective data on the CN state, conducting an automated assessment of the CN state in one or several passes, linking the measurement results to a place on the map and performing video surveillance of the CN infrastructure. All the above makes it possible to create databases on the state of the contact network, track, cable lines, etc., as well as the conditions for a transition to condition-based maintenance. The solution of the problem allows improving the quality of assessment of the CN state and reducing the possibility of failures, as well as ensuring energy and resource saving in the process of passenger transportation. Video measuring systems for diagnostics of contact networks. In recent years, hardware and software have been developed to improve the system for measuring the parameters of the OW and other CN components. A method for increasing the resolution of a stereo television system and an adaptive lighting system is proposed. It consists in preliminary image transformation and expansion of the dynamic range of image measurement. The quality of CN diagnostics in conditions difficult for video surveillance has been improved. A camera with a built-in image compression module, without loss of speed, has been proposed, which allows capturing and transmitting full-frame images to a computer complex for the application of new CN diagnosing algorithms [8,10,15,16]. The stereo television system is based on a specialized fast-acting television camera of a new generation, and the lighting system can operate both in continuous and in pulsed mode with a light-pulse duration from 20 μs. Cameras can be equipped with lenses with automatic iris control according to the P-iris standard, a serial interface, a high-speed video compression module and a frame grabber. For the power supply enterprises of electrified railways and industrial and urban electric transport, innovative means of complex diagnostics of the CN state have been developed. These are laboratory cars, which provide monitoring of OW wear, the state of high-voltage insulation, heating of electrical connections, and grounding of supports to the rail. An automated video-measuring system is proposed for monitoring the grounding of CN supports and their other equipment, as well as elements of the track facilities located in the visibility zone of specialized cameras, which ensures the system's operability at any time of day at speeds up to 160 km/h, with the following characteristics: discretization of image lines along the track of 0.5 mm; an electronic shutter value at a speed of 160 km/h of not more than 22 μs; and at least 1000 pixels per line.
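As a rough cross-check of how the figures just quoted relate to one another, the short sketch below (Python, written for this text and not taken from the source) computes the motion blur implied by a 22 μs electronic shutter at 160 km/h and the line rate needed for a 0.5 mm sampling step along the track; the kinematic relations themselves are elementary assumptions, not results from the article.

```python
# Back-of-envelope implications of the quoted video-system specifications.
# The 160 km/h speed, 22 us shutter and 0.5 mm discretization come from the
# text above; everything else is straightforward kinematics.
speed_kmh = 160.0
speed_mm_s = speed_kmh / 3.6 * 1000.0          # ~44 444 mm/s along the track

shutter_s = 22e-6                               # electronic shutter, 22 us
blur_mm = speed_mm_s * shutter_s                # image smear during one exposure
print(f"motion blur per exposure ~ {blur_mm:.2f} mm")      # ~0.98 mm

step_mm = 0.5                                   # discretization of image lines
line_rate = speed_mm_s / step_mm                # lines per second required
print(f"required line rate ~ {line_rate:,.0f} lines/s")    # ~88 889 lines/s
```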
An air curtain subsystem is implemented to protect the cameras. Development of new diagnostic tools for the contact network and improvement of the efficiency of existing ones is a priority area of activity of DAK-Energetika LLC, which carries out the entire range of works, including research, design, manufacture, installation, commissioning, warranty and service. The manufactured measuring equipment is included in the State Register of Measuring Instruments and in the Register of Measuring Instruments, Test Equipment and Methods of Measurements Used in Ukrainian Railway OJSC, and is metrologically certified and protected by patents. Improvement of the WEAR laser fast-acting system for measuring the parameters of the contact wire. The Aptima MT9M413C36STC video sensor used in the WEAR system has a 100-bit output data bus that transmits a block of 10-bit brightness readings of 10 neighbouring pixels of the current line per cycle of the operating frequency fg. Each line of the image has a size of 1280 pixels and is transmitted in a block consisting of 128 fg cycles. For contactless measurement of the profile of the worn part of the OW, measurement of the position of the OW relative to the current collector axis, and detection of OW overturns and lateral slopes of the OW clips (dropper, pull-off, etc.), the LTCN is equipped with the WEAR fast-acting laser diagnostic OW system. This diagnostic system belongs to the group of systems that measure OW wear by its profile; their operating principle is described in [8,10,13]. The measuring system consists of 8 fan-shaped laser emitters, in which the collimated laser beam is converted into a flat fan-shaped light beam 0.3-0.6 mm thick using a spreading system, and 4 matrix television cameras. When the fan beam of light strikes the OW, a visible line is formed on the wire surface at its intersection with the plane in which the beam lies. It is this intersection line that is distinguished by the processing system from the resulting image of the current frame of the television camera. In this case, the shape of the recorded line depends only weakly on the OW inclination and is mainly determined by its wear. The program provides the ability to display a 3-D OW model with the measured wear superimposed for the selected camera (Fig. 5). The use of LED illumination, which effectively illuminates the entire surface of the lower part of the overhead wire and clamps, together with the possibility of obtaining a full frame of the image at the input of the information-computing complex, can significantly increase the informativeness of the WEAR system. Thus, many unclear situations caused by insufficient informativeness of the measuring system can be resolved in real time by visual or programmatic assessment of the received frames corresponding to the CN section that raises questions. The second important aspect of the WEAR system modernization is the need to obtain full-frame illuminated images of the CN elements at high speed for continuous scanning of the objects of interest. Based on the optical characteristics of the lenses, the frame resolution and the distance of the cameras from the measured objects, the field of view along the OW is l = 37 mm. The maximum speed at which the WEAR system operates is v = 72 km/h. The maximum time T of receiving one frame, at which continuous scanning of the CN is provided, is determined by the expression T = c • (l / v), where c = 0.0036 is the coefficient for reducing the values to the SI system.
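To make the frame-time, frame-rate and channel-bandwidth arithmetic that follows easy to verify, here is a minimal sketch (Python, written for this text). The values l = 37 mm, v = 72 km/h, r = 1280 • 128 pixels and n = 10 bits are those quoted in the surrounding text; interpreting the quoted megabits as 2^20 bits is an assumption made here, since it is what brings the computed bandwidth close to the 846 Mb/s figure given below.

```python
# Frame-time, frame-rate and uncompressed bandwidth for the WEAR system,
# using the values quoted in the text.
l_mm = 37.0                    # field of view along the overhead wire, mm
v_kmh = 72.0                   # maximum survey speed, km/h

T = (l_mm / 1000.0) / (v_kmh / 3.6)   # time available per frame, s
f = 1.0 / T                            # required frame rate, frames per second
print(f"T = {T:.5f} s, f = {f:.0f} fps")        # T ~ 0.00185 s, f ~ 541 fps

r = 1280 * 128                 # pixels per frame
n = 10                         # bits per pixel
C = r * n * f                  # uncompressed bit rate, bits per second
print(f"C ~ {C / 2**20:.0f} Mb/s")              # ~845 Mb/s if Mb = 2**20 bits
```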
With the given values of l and v, T = 0.00185 s, which corresponds to a frame rate of f = 541 fps. The required bandwidth C of the channel for transmitting only the uncompressed image, for a frame with resolution r and a bit depth per pixel of n, is determined by the following expression: C = r • n • f. With r = 1280 • 128 and n = 10, C = 846 Mb/s. Test results of the WEAR system. The automated system for measuring the wear of the overhead wire installed on the LTCN was tested along station tracks 1 and 2 under variable cloud conditions at a temperature of +28°C. A double contact wire MF-100 is suspended within the test section. Automated measurements were carried out at a measuring car speed of 37 km/h. Manual wear measurements were taken between the 69th and 71st supports along the first station track. For accurate synchronization of the measurements, manual measurements were carried out next to the dropper and pull-off clips, since such places can be easily identified using the measurement data from the WEAR system. The measurement results are given in Table 1. It was found that at the points checked, the difference between the automated measurements of the WEAR system and the manual measurements of the residual height does not exceed 0.26 mm. During the test drive on the second station track, several places were found where, due to lateral wear of the overhead wire, the wear area reached the clamp itself, as a result of which bite wear began (Fig. 5). The differences between manual and automatic measurements are summarized in Table 2. Originality and practical value The theoretical maximum permissible error, from the point of view of contact network operation, in monitoring the wear of the overhead wire and other components of the electric traction network has been determined. A method for increasing the resolution capability of a stereo television system and an adaptive lighting system is proposed, which consists in preliminary image transformation and expansion of the dynamic range of image measurement. The quality of diagnostics of the contact network in conditions difficult for video surveillance has been improved. A camera with a built-in image compression module, without loss of performance, has been proposed, which allows capturing and transmitting full-frame images to a computer complex for the application of new diagnostic algorithms for the components of the contact network of electrified railways and urban electric transport. Conclusions To reduce the wear of the overhead wire and current collector plates, and to ensure reliable and economical current collection in the process of transportation by electric transport, high-quality diagnostics of the electric traction network is required. The proposed video system has the following speed characteristics: obtaining a JPEG image with a compression ratio of k ≥ 10 and a resolution of 1280 × 128 at a speed of 976 fps. In this case, the required maximum speed of compressed data transfer does not exceed 85 Mb/s. The system is equipped with fast-acting LED backlighting, which makes it possible to obtain a continuous illuminated image of the CN elements in real time. Thus, the improvement of the WEAR laser fast-acting system can significantly increase the reliability and reduce the detection time of emerging malfunctions of the OW and other CN components.
2021-08-25T17:24:25.860Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b099b04e3c6f391a6cfa01d8aab7499527a4895d", "oa_license": "CCBY", "oa_url": "http://stp.diit.edu.ua/article/download/230232/229739", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5175123c84c572768d58275b1dc9194573807e10", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
59571193
pes2o/s2orc
v3-fos-license
Study on Survival of Chlamydia trachomatis in the Presence of Antichlamydial Drugs Problem statement: Recurrent genital Chlamydia trachomatis infections due to treatment failures may result in complex sequelae leading to reproductive complications and morbidity. They can result from heterotypic resistance, a decreased drug susceptibility characteristic of the isolate. Studies are needed to understand the treatment failures and resistance characteristics of C. trachomatis. Hence, an in vitro study was conducted on a C. trachomatis isolate in the presence of antichlamydial drugs. Approach: Our aim was to study the ygeD gene in a C. trachomatis clinical isolate having a decreased drug susceptibility profile and to analyze HeLa cells phenotypically upon infection in the presence of antichlamydial drugs. Sequencing was done to check for any mutational change(s) in the ygeD gene of the C. trachomatis isolate (CT-244), and mRNA expression was analyzed in the presence of antichlamydial drugs by real-time RT-PCR. A transduction study was carried out in infected HeLa cells to detect changes at the cellular level in the presence of antichlamydial drugs by transducing with GFP/RFP-tagged proteins and analyzing by FACS. Results: A point mutation was detected in the ygeD gene of the C. trachomatis isolate. Further, the mRNA expression level of the ygeD gene was observed to be increased at 8 hpi in the presence of doxycycline, while in the presence of azithromycin it was increased at 24 hpi. GFP-tagged plasma membrane protein expression in infected HeLa cells was found to be reduced compared with uninfected cells. Upon infection, RFP-tagged actin protein expression was up-regulated in comparison with uninfected HeLa cells. No difference in the expression of plasma membrane and actin proteins was observed between susceptible serovar D and the CT-244 isolate. Conclusion: The present study suggests that a C. trachomatis isolate with a decreased drug susceptibility profile may have an active efflux strategy for its survival in the presence of antichlamydial drugs, and that it may not affect its host cell plasma membrane or actin organization for its survival in order to resist the antichlamydial drugs. INTRODUCTION Chlamydia trachomatis, an obligate intracellular pathogen, causes a spectrum of clinically important chronic inflammatory diseases of humans. C. trachomatis infection is one of the most prevalent sexually transmitted diseases in the world (Gerbase et al., 1998; Beagley and Timms, 2000). In females, C. trachomatis causes cervicitis, urethritis, ectopic pregnancy, pelvic inflammatory disease, tubal factor infertility and chronic pelvic pain (Morre et al., 2000). Studies have also implicated an association of C. trachomatis infection with cervical and ovarian cancer and an increase in HIV infectivity (Luostarinen et al., 2004). Antibiotics have the major role in treating chlamydial infections; azithromycin and doxycycline are considered first-line drugs by the Centers for Disease Control and Prevention (CDC) (Workowski and Berman, 2010). The efficacy of these drugs for the treatment of chlamydial infections is high; however, many researchers report the problem of recurrent infections and treatment failures (Wang et al., 2005). It has also been reported that in women with persistent or recurrent infections, the infection can spread upwards from the endocervix to the fallopian tubes and may result in infertility or ectopic pregnancy (Hillis et al., 1997). Recurrent C.
trachomatis infections often result from failure of antibiotic therapy or from reinfection due to unprotected sexual contact with either an untreated existing partner or a new infected partner (Hillis et al., 1994). However, the atypical intracellular forms of C. trachomatis known as persistent bodies are suggested to have a role in refractoriness to antichlamydial drugs and in recurrent infections (Beatty et al., 1994). Further, emerging antibiotic resistance in chlamydia may create severe problems in the treatment of the disease. There are a few documented in vitro reports of antibiotic resistance in chlamydia, but no examples of natural and stable antibiotic-resistant strains collected from humans. A few studies with clinical isolates of C. trachomatis from treatment-failure patients have demonstrated in vitro heterotypic resistance (Samra et al., 2001). Recently, four clinical isolates demonstrating in vitro resistance to macrolides were shown to carry mutations in the 23S rRNA gene (Misyurina et al., 2004). In vitro studies suggest that antibiotic-resistant genotypes of C. trachomatis can be generated and transferred to C. trachomatis, C. suis or C. muridarum isolates with the capability of expressing significant resistant phenotypes (Sandoz and Rockey, 2011). Hence, emerging heterotypic bacterial resistance against antichlamydial drugs resulting in treatment failures in clinical settings cannot be neglected. Studies are needed for the characterization of C. trachomatis clinical isolates showing decreased susceptibility towards antichlamydial drugs, which may result in resistant characteristics of the bacteria and can be interpreted with respect to the patient's treatment failure(s) or reinfection(s). Further, it has also been suggested that genotypic changes may not be solely responsible for the resistant characteristics of clinical C. trachomatis isolate(s) obtained from multiple-treatment-failure patients. Different drugs have different targets for their action in bacteria; hence, mutation in a single gene may not be expected to result in multiple treatment failures. It has been reported that in gram-negative pathogens, efflux is the predominant mechanism of tetracycline resistance, including in Chlamydia suis (Dugan et al., 2007). Hence, studies are needed to explore the role of efflux gene(s) in emerging resistance in C. trachomatis. In addition, under stress conditions the host cell might play a role in the altered drug sensitivity profile of the bacteria. Resistant bacteria may act on various systems of a cell, directly or indirectly, for their survival in the presence of drugs. According to many studies, C. trachomatis changes host cell plasma membrane and actin organization by modifying their arrangements to complete its life cycle (Kumar and Valdivia, 2008a; 2008b). In India, a high prevalence (>30%) of C. trachomatis infections in symptomatic female patients has been reported (Singh et al., 2003). In our previous study, the antibiotic susceptibility profile towards the first-line antichlamydial drugs was studied, and decreased in vitro susceptibility was observed in isolates (Bhengraj et al., 2010). A few of them appeared to be heterotypic resistant isolates in cell culture in the presence of antichlamydial drugs. Further, we characterized them for the presence of possible mutational changes at the reported resistance marker genes (L4, L22, 23S rRNA) (Bhengraj et al., 2011).
However, genotypic characterization did not reveal any mutational changes at the drug target site(s); hence, further characterization is needed. Thus, the aim was to study the efflux (ygeD) gene in the heterotypic resistant C. trachomatis isolate for the presence of any mutational change(s) and its mRNA expression under cell culture conditions in the presence of azithromycin and doxycycline. In addition, the host HeLa cell plasma membrane and actin were also studied to determine whether C. trachomatis indirectly affects them in the presence of antichlamydial drugs to complete its life cycle, which may result in altered in vitro drug susceptibility characteristics. Antimicrobial agents: Azithromycin and doxycycline (Sigma-Aldrich) were dissolved according to the manufacturer's instructions and dilutions were prepared in DMEM cell culture medium without antibiotics. DNA Isolation and Polymerase Chain Reaction (PCR): HeLa cells infected with the C. trachomatis isolate were subjected to DNA extraction using the QIAamp Viral RNA mini Kit (Qiagen, CA, USA) according to the manufacturer's instructions. Briefly, infected cells were harvested at 48 h post infection (hpi) and the cell suspension was centrifuged at 3000 rpm for 10 min at 4°C. Supernatants were centrifuged at 16000 rpm for 1 h at 4°C; pellets were collected and processed for DNA isolation. The concentration of DNA was quantified spectrophotometrically at 260 nm (Biometra, USA). Amplification of the efflux gene was carried out by polymerase chain reaction (PCR) in an Eppendorf Mastercycler personal thermal cycler (Eppendorf GmbH, Germany). The primer sequences are 5' ACGATCTTTCCGTGCATTGGTCGT 3' for the forward primer and 5' GCCATGTAAGAGCCGACACCCA 3' for the reverse primer (MWG-Biotech, Germany). The thermal conditions for amplification were initial denaturation at 95°C for 10 min, followed by 35 cycles of denaturation at 94°C for 30 s, primer annealing at 60°C for 1 min and extension at 72°C for 2 min, then a final extension at 72°C for 10 min. The PCR product was visualized by electrophoresis on a 1.5% agarose gel stained with ethidium bromide on an Alpha Imager gel documentation system (AlphaInnotech, San Leandro, USA). DNA sequencing: The PCR products were purified using a Qiagen gel extraction kit as per the manufacturer's instructions. Sequencing of the purified PCR products was carried out using BigDye Terminator v3.1 (Applied Biosystems, CA, USA) as per the recommendations. Briefly, 75-150 ng µL⁻¹ of purified PCR product and sequencing primers (1 pmol/µL) were added to 4 µL of BigDye Terminator reaction mix and the final volume was made up to 10 µL with autoclaved MilliQ water. Sequencing PCR was set up with 30 cycles of 30 sec denaturation at 96°C, 30 sec annealing at 55°C and 4 min extension at 60°C. After sequencing PCR, the products were purified and re-suspended in Hi-Di formamide (Applied Biosystems). The samples were denatured at 94°C for 5 min, followed by a brief incubation on ice, and loaded on the 3130XL Genetic Analyzer (Applied Biosystems). Sequence analysis was carried out using Sequence Analysis software (Applied Biosystems) and the SeqMan module of DNASTAR v5.07 software. RNA isolation and real-time RT-PCR analysis: HeLa cell monolayers were prepared by seeding (3×10⁵ cells/well) in six-well tissue culture plates and infected with C. trachomatis inoculum at an MoI of 2 as described earlier (Bhengraj et al., 2010). Dilutions of the drugs (0.5, 5 and 10 µg mL⁻¹) were added at 2 hpi and cultures were incubated at 35°C in 5% CO₂.
Total RNA was isolated at 8, 24 and 48 hpi using TRIzol reagent (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions, and quantified using a UV-VIS spectrophotometer. RNA was treated with DNase I to prevent DNA carryover. The isolated RNA was further tested by PCR to check for any carryover DNA contamination; no amplification product was detected, and the RNA was considered DNA-free. Complementary DNA was prepared using the SuperScript™ First-Strand Reverse Transcriptase kit (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions. Real-time PCR was performed with the DyNAmo™ SYBR® Green qPCR Kit (Finnzymes, Espoo, Finland). The primer sequences used for the efflux (ygeD) gene were F 5' ACGATCTTTCCGTGCATTGGTCGT 3' and R 5' GCCATGTAAGAGCCGACACCCA 3', and those for the endogenous control (16S rRNA) gene were 5' CTGCAGCCTCCGTAGAGTCTGGGCAGTGTC 3' and 5' TTCAGATTGAACGCTGGCGGCGTGGATG 3', as described earlier (Mpiga and Ravaoarinoro, 2006). Primers were of HPLC-purified grade and were commercially synthesized (MWG-Biotech AG, Ebersberg, Germany). The negative control consisted of nuclease-free water substituted for cDNA. PCR amplification was performed in an Applied Biosystems 7000 Real-Time PCR System (Applied Biosystems, CA, USA). For data analysis, the 2⁻ΔΔCt method was used to calculate fold change (Livak and Schmittgen, 2001). Transduction of HeLa cells: For targeting host cell actin and plasma membrane proteins, HeLa cells were transduced with Cellular-Lights and Organelle-Lights transduction reagents (Molecular Probes, Invitrogen, Carlsbad, CA, USA), respectively. The transduction was based on the BacMam technology of viral delivery for specific expression of a targeted (fluorescent) protein in mammalian cells. Transduction was carried out according to the manufacturer's instructions. HeLa cells (~1×10⁶-4×10⁶) were seeded in a 50 cm² tissue culture flask (Greiner, Germany) and allowed to adhere and grow for approximately 24 h at 37°C, 5% CO₂. Cellular-Lights transduction solution was prepared in Dulbecco's Phosphate Buffered Saline (D-PBS) without Ca²⁺ or Mg²⁺. Upon the adhered cells reaching 70-80% confluency, the culture medium was aspirated and 5.5 mL of the diluted transduction solution was added. The cells were incubated at room temperature (20-25°C) in the dark for 4 h with gentle rocking. The transduction solution was then aspirated from the culture flask, and culture medium without serum plus 1X enhancer was added. Cells were incubated for 2 h at 37°C and 5% CO₂. After incubation, the enhancer solution in the culture flask was replaced with the appropriate culture medium and the cells were incubated at 37°C, 5% CO₂ for >16 h. The same method was followed for transduction of HeLa cells with Organelle-Lights. Transduced HeLa cells were plated in 6-well tissue culture plates at a cell density of 3×10⁵ cells/well in EMEM containing 10% FCS. On reaching subconfluence, monolayers were washed twice with PBS and infected with chlamydial EBs at an MoI of 2. For homogeneous infection, tissue culture plates were placed on a rocker for 2 h at 35°C after addition of serum-free medium containing EBs. Medium containing unbound EBs was aspirated and the cells were supplemented with complete DMEM containing 10% FCS. Infected HeLa cells were incubated at 35°C with 5% CO₂. Thereafter, at 2 hpi the medium was aspirated and replaced with fresh medium containing azithromycin or doxycycline. At 48 hpi, cells were analysed for fluorescence using a flow cytometer (BD FACSCalibur) in the FL-1 and FL-2 channels.
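For concreteness, the 2⁻ΔΔCt fold-change calculation cited above can be sketched as follows (Python, written for this text); the Ct values in the example are hypothetical placeholders, not data from this study, with ygeD as the target gene and 16S rRNA as the endogenous control, as in the Methods.

```python
# Minimal sketch of the 2^-ddCt fold-change calculation (Livak and Schmittgen,
# 2001). All Ct values below are made-up placeholders.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated   # normalize to 16S rRNA
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control    # treated vs. control
    return 2.0 ** (-delta_delta_ct)

# Example: drug-exposed culture versus untreated control (hypothetical Cts).
fc = fold_change(ct_target_treated=24.1, ct_ref_treated=15.0,
                 ct_target_control=26.3, ct_ref_control=15.2)
print(f"fold change = {fc:.2f}")   # > 1 indicates increased ygeD expression
```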
To correct for autofluorescence, the same pool of untransduced cells was used, and appropriate settings were used for further acquisition and analysis. Flow histograms were analysed for geometric mean using FCS Express V3 (DeNovo Inc). Every experiment was done in triplicate. Statistical analysis: Differences between two groups were evaluated using Student's t test, and p<0.05 was considered significant. C. trachomatis ygeD gene: The C. trachomatis efflux (ygeD) gene was checked for any changes at the genetic level. A fragment of 822 bp was amplified and a single band was observed on a 1.5% agarose gel. The product was sequenced in both directions and reviewed by assembling into alignments using the reference sequence of C. trachomatis serotype D (GenBank accession number NC000117). The sequence showed variation, with a point mutation T to G at position 734318 of the studied C. trachomatis clinical isolate (CT-244). Real-time RT-PCR analysis: The efflux ygeD gene of the C. trachomatis isolate (CT-244) was studied for any changes in gene expression in the presence of doxycycline and azithromycin in host HeLa cells. Increased expression of the ygeD gene was detected at 8, 24 and 48 hpi in the absence of doxycycline. However, in the presence of doxycycline, significantly (p<0.05) increased expression was observed only at 8 hpi, while at 24 and 48 hpi it was found to be decreased at all three concentrations of doxycycline (Fig. 1). On addition of azithromycin, no significant changes were detected at 8 and 48 hpi with any of the three concentrations of the drug; however, at 24 hpi, expression was observed to be significantly (p<0.05) increased (Fig. 2). Host cell analysis: Upon infection with serovar D and the CT-244 isolate, expression of green fluorescent protein (GFP)-tagged plasma membrane protein in HeLa cells was found to be significantly reduced compared with uninfected transduced cells. On addition of the drugs, expression was found to be upregulated in comparison with the absence of drugs and was comparable to that of uninfected cells. Further, no difference in expression was observed between serovar D and the CT-244 isolate in the absence of antichlamydial drugs. However, at the higher concentration of azithromycin and the lower concentration of doxycycline, a non-significant difference was observed in the protein expression (Fig. 3). Host cells were further studied for any changes in actin protein expression in the presence of antichlamydial drugs. Red fluorescent protein (RFP)-tagged actin protein expression was found to be up-regulated on infection with the isolates in comparison with uninfected HeLa cells. No difference in the expression of actin was observed between serovar D and the CT-244 isolate; however, addition of the drugs increased the expression of RFP-tagged proteins in infected transduced cells (Fig. 4). DISCUSSION To avoid the severe sequelae of C. trachomatis infection, antibiotic strategies are important to eradicate the pathogen. First-line antichlamydial drugs have proven successful for the treatment of C. trachomatis infection; however, treatment failures have been observed in a notable number of cases (Horner, 2006). A few studies suggest resistance as a cause of clinical treatment failures (Jones et al., 1990; Somani et al., 2000). The obligate intracellular nature of Chlamydia may limit the emergence of antibiotic resistance in vivo (Abdelrahman and Belland, 2005). However, the extensive use of drugs has been known to favor the selection of resistance in pathogens, including Chlamydia suis in pigs (Lefevre and Lepargneur, 1998).
The role of genetic resistance in the recurrence of chlamydial infections is still not clear and needs further attention. Besides the well-known mechanisms, a further resistance mechanism, active drug efflux, has become increasingly important in the current threat of multidrug resistance. It involves certain bacterial transport proteins that pump antimicrobial compounds out of the cell; overexpression of these pumps due to mutations decreases the intracellular antibiotic concentration. Efflux pumps possessed by various pathogens are likely to contribute to their pathogenic mechanisms by allowing escape from a number of antimicrobial compounds (Poole, 2005). Hence, we studied the efflux ygeD gene of the heterotypic resistant C. trachomatis isolate in order to explore its resistance characteristics. The studied sequence showed variation, with a point mutation T to G relative to the reference sequence of serotype D. There was no difference in the encoded products of the mutated nucleotide, as the reference sequence has CTT (leucine) and the mutated sequence has CTG, which also codes for leucine. This may be a mutation without significance for the development of in vitro resistance. In another study of the efflux (ygeD) gene in clinical isolates of C. trachomatis resistant to high and intermediate levels of fluoroquinolone (FQ) concentrations, several silent mutations and mutations resulting in amino acid substitutions were observed (Misiurina et al., 2004). Hence, it can be concluded that the mutation may not be directly related to the resistance characteristics of the bacteria, but it might have some indirect role, which may make the bacteria more refractory to the drugs. Further, expression of the efflux gene was also analyzed in the isolate, and it was observed that the efflux gene was actively expressed at 8 hpi in the presence of doxycycline, suggesting that its expression may have helped in reducing the doxycycline pressure at this initial time point. On addition of azithromycin, expression of the ygeD gene was observed to be significantly increased at 24 hpi, suggesting that in the presence of azithromycin the efflux gene was capable of reducing the drug pressure at 24 hpi but not at the initial time point. Hence, we may conclude that a C. trachomatis isolate with an altered drug susceptibility profile may have an active efflux strategy for its survival in the presence of antichlamydial drugs. The antimicrobial susceptibility profile of C. trachomatis may depend on the host cell environmental conditions and host cell-specific factors. It has been reported that oxygen concentrations in the female urogenital tract affect the removal of chlamydia upon antibiotic treatment (Shima et al., 2010). In addition, it has been observed that pathogenic microbes exploit the host cytoskeleton for entry, colonization and intracellular survival in eukaryotic cells (Rottner et al., 2005). C. trachomatis also co-opts host actin and intermediate filaments to form a dynamic scaffold for providing structural integrity to the chlamydial vacuole and minimizing immune detection for its survival (Kumar and Valdivia, 2008a; 2008b). Hence, host cell factors should be studied to determine whether they affect the antibiotic susceptibility profile. Therefore, host HeLa cells harbouring the heterotypic resistant C. trachomatis isolate were studied for any phenotypic changes at the cellular level. The significantly reduced expression of GFP-tagged plasma membrane protein detected in HeLa cells may be due to the use of these proteins for the invagination of infectious elementary bodies of C. trachomatis.
However, on addition of the drugs, expression was found to be comparable to that of uninfected cells. Further, to detect any changes in actin protein expression in the presence of antichlamydial drugs, host cells were studied for RFP-tagged actin protein expression, which was found to be up-regulated upon infection. However, addition of the drugs increased the expression in infected cells. No difference was observed in the expression of plasma membrane and actin proteins between serovar D and the CT-244 isolate. Hence, it may be suggested that a C. trachomatis isolate with an altered drug susceptibility profile does not affect its host cell plasma membrane or actin organization for its survival in order to resist the antichlamydial drugs. CONCLUSION In conclusion, our study supports the emergence of clinical antibiotic resistance as not an impossible scenario for C. trachomatis, despite its isolated niche, which limits the opportunity for acquisition of antibiotic resistance genes from other organisms (McOrist, 2000). Successful treatment is necessary for preventing the sequelae of chlamydial infections; hence, treatment failures and the in vitro antibiotic resistance characteristics of C. trachomatis are of great concern. The results of the present study in characterizing resistance in a clinical isolate may enhance the understanding of chlamydial therapy and of the nature and transmission of resistant C. trachomatis. Further studies are needed in a larger number of C. trachomatis clinical isolates to establish the biological relevance of these findings to in vivo conditions. ACKNOWLEDGMENT Science Publications is acknowledged for sponsoring the article for publication. The University Grants Commission, New Delhi, is acknowledged for providing a research fellowship to Apurb Rashmi Bhengraj. The Indian Council of Medical Research (ICMR), India, is also acknowledged for providing financial assistance in the form of fellowships to Harsh Vardhan, Pragya Srivastava and Suraj Singh Yadav. The study was funded by the Indian Council of Medical Research.
2019-03-30T13:02:40.408Z
2012-02-20T00:00:00.000
{ "year": 2012, "sha1": "ba8c6463486c215e88f9825dd4e07d8d9c70d4ff", "oa_license": "CCBY", "oa_url": "https://thescipub.com/pdf/ajidsp.2012.5.12.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d8ed649751990a683dba9314ab0df7a64f351e37", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
230782003
pes2o/s2orc
v3-fos-license
Microstructure Analysis and Reconstruction of a Meniscus Objective To analyze the characteristics of meniscus microstructure and to reconstruct a microstructure-mimicking 3D model of the meniscus. Methods Human and sheep menisci were collected and prepared for this study. Hematoxylin–eosin staining (HE) and Masson staining were conducted for histological analysis of the meniscus. For submicroscopic structure analysis, the meniscus was first freeze-dried and then scanned by scanning electron microscopy (SEM). The porosity of the meniscus was determined according to the SEM images. A micro-MRI was used to scan each meniscus, immersed in distilled water, and a 3D digital model was reconstructed afterwards. A three-dimensional (3D) resin model was printed out based on the digital model. Before high-resolution micro-CT scanning, each meniscus was freeze-dried. Then, micro-scale two-dimensional (2D) CT projection images were obtained. The porosity of the meniscus was calculated according to the micro-CT images. With micro-CT, multiple 2D projection images were collected. A 3D digital model based on the 2D CT pictures was also reconstructed. The 3D digital model was exported in STL format. A 3D resin model was printed by a 3D printer based on the 3D digital model. Results As revealed in the HE and Masson images, a meniscus is mostly composed of collagen, with a few cells disseminated between the collagen fiber bundles at the micro-scale. The SEM images clearly show the path of highly cross-linked collagen fibers, and massive pores exist between the fibers. According to the SEM images, the porosity of the meniscus was 34.1% (34.1% ± 0.032%) and the diameters of the collagen fibers were varied. In addition, the cross-linking pattern of the fibers was irregular. The scanning accuracy of micro-MRI was 50 μm. The micro-MRI demonstrated the outline of the meniscus, but the microstructure was obscure. The micro-CT clearly displayed microfibers in the meniscus with a voxel size of 11.4 μm. The surface layer, lamellar layer, circumferential fibers, and radial fibers could be identified. The mean porosity of the meniscus according to the micro-CT images was 33.92% (33.92% ± 0.03%). Moreover, a 3D model of the microstructure based on the micro-CT images was built. The microscale fibers could be displayed in the micro-CT image and the reconstructed 3D digital model. In addition, a 3D resin model was printed out based on the 3D digital model. Conclusion It is extremely difficult to artificially simulate the microstructure of the meniscus because of the irregularity of the diameter and cross-linking pattern of the fibers. The micro-MRI images failed to demonstrate the meniscus microstructure. Freeze-drying and micro-CT scanning are effective methods for 3D microstructure reconstruction of the meniscus, which is an important step towards mechanically functional 3D-printed meniscus grafts. Introduction As a semilunar fibrocartilaginous tissue between the femoral condyle and tibial plateau, the meniscus assists in load bearing and transmission, joint stabilization, and shock absorption 1,2. It is well known that a torn meniscus and/or surgical removal of the meniscus will result in early articular cartilage damage and, eventually, early osteoarthritis 3. Thus, preservation of the function of the meniscus is of great interest for doctors and researchers. Due to the limited blood supply of the meniscus, meniscus suture or repair can result in non-healing or secondary surgery.
Therefore, meniscal allograft transplantation has been considered for the preservation of meniscal tissue. However, concerns over limited availability, disease transmission, immune rejection, and anatomical mismatching adversely influence its application 4. In addition, although the initial mechanics are appropriate, dense tissue is not well suited for cellular infiltration and remodeling, resulting in poor and inconsistent long-term outcomes 5. Hence, it is important to develop new strategies for meniscus transplantation. To overcome these limitations, different kinds of tissue-engineered scaffolds have been developed 6,7, including using biomaterials to fabricate porous scaffolds, seeding cells on the scaffold, and adding growth factors for cell proliferation and differentiation 8. A functional, tissue-engineered scaffold should mimic the biomechanical structure of the meniscus and have appropriate mechanical properties to bear stress from different directions 9. Traditional methods for scaffold fabrication, such as lyophilization, solvent casting, phase separation, and electrospinning, cannot meet these requirements 10. Three-dimensional (3D) printing can be used to manufacture objects with a desired size and structure and has been applied in meniscus scaffold fabrication for many years. The main steps of 3D printing meniscus scaffolds include the preparation of bioink, reconstruction of the 3D digital model of the meniscus, and 3D printing 11. Most studies have used MRI or CT scanning to obtain 3D digital models of a meniscus. Some researchers have used computer-aided design (CAD) to design a digital scaffold based on a 3D digital meniscus, then printed the scaffold, added further matrix, and cultured bioactive cells on the scaffold 12,13. Some studies have printed a hollow scaffold with a relatively closed surface and added the matrix to the hollow scaffold 14. Several studies have used bioink to directly print the meniscus based on a digital model 15,16. The microstructure of the meniscus is extremely complicated due to the complexity of the mechanical environment within the knee. Electron microscopy imaging studies have revealed three distinct layers of the collagen sheets in a meniscal cross-section: a superficial network that covers the surfaces by a meshwork of very thin fibrils (30 nm); a lamellar layer beneath the superficial network, represented by a layer of lamellae of collagen fibrils (150-200 nm); and a central main portion, composed of predominantly circular-oriented bundles of collagen fibrils with occasional radial-tie fibers 17. The mechanical properties are the most important features of these materials and are the first to be considered in regenerative medicine and tissue engineering 18. Unfortunately, the scaffolds mentioned above barely mimic the microstructure of the meniscus and, therefore, do not obtain the same initial or long-term mechanical properties as an intact meniscus. One of the most important reasons is that these studies did not obtain a 3D-printer-readable microscale 3D digital model. Jeffrey et al. 14 printed a meniscus 3D injection mold and injected a mix of cells, alginate, and CaSO4 into the mold. After a few weeks of culture, the tissue was engineered. The resulting construct reached a maximum equilibrium modulus of 60 kPa, which is 50% of that of native tissue.
Several studies [19][20][21][22] have used 3D-printed PCL scaffolds for meniscal regeneration, which have a compressive modulus in the range of 10-54 MPa and a tensile modulus in the range of 40-80 MPa; the tensile modulus of these scaffolds is substantially lower than that of human menisci (78-125 MPa). These PCL scaffolds were designed by CAD according to a 3D digital meniscus model. It can be concluded that the mechanical properties of these scaffolds are far from those of native tissue. One of the most important reasons may be that the scaffolds did not mimic the microstructure of the meniscus fibers. More specifically, the 3D microscale digital model is still lacking in resolution. Therefore, it is urgent to develop a method to efficiently reconstruct a 3D-printer-readable and microstructure-mimicking meniscus model. The aim of this study is: (i) to explore the characteristics of meniscus microstructure; (ii) to reconstruct the meniscus microstructure; and (iii) to print a 3D model of the meniscus based on the digital model. In this study, we used hematoxylin and eosin (HE) and Masson staining to analyze the histological features of the meniscus. Scanning electron microscopy (SEM) was used to help understand the meniscus microstructure. Menisci were acquired and scanned by high-resolution micro-MRI and micro-CT. We hypothesize that micro-MRI and micro-CT two-dimensional (2D) images can display the microstructure as the SEM does and that a microscale 3D digital model can be reconstructed based on the micro-MRI or micro-CT file. Methods All experimental protocols were approved by the Zhujiang Hospital of Southern Medical University Review Board. Informed consent was obtained from all subjects. Specimen Preparation Four menisci were removed from four TKA human knees (two men, two women; mean age, 56.5 ± 7.5 [standard deviation] years) without gross meniscal tears. One mature sheep meniscus was purchased from a butcher. The menisci were washed with phosphate-buffered saline (Sigma-Aldrich, St. Louis, MO, USA). Histological Analysis A human meniscus was fixed in 10% (v/v) buffered formalin, dehydrated with a series of graded alcohols, and embedded in paraffin. Tissue sections (4-μm thick) were stained with HE 23 for morphologic analysis and Masson's trichrome 24 for cross-linked collagen. Preparation of Freeze-Dried Menisci The freeze-drying protocol was documented in our previous publication 25. Briefly, a human meniscus (4.7 cm × 4.3 cm) was frozen at −60 °C overnight and transferred to a freeze-drying machine (Alpha 2-4 LSCplus, Martin Christ Gefriertrocknungsanlagen, Germany), in which the water inside the frozen meniscus was sublimated under a pressure of 0.105 Pa and at a temperature of −40 °C. The freeze-drying process lasted for 5 days. Scanning Electron Microscopy A small section of freeze-dried human meniscus was cut into thin slices. Samples were sputter-coated with gold prior to SEM observation. The samples were then imaged using SEM (JSM-7600F, JEOL, Japan). Micro-MRI Scanning A human meniscus sample (4.7 cm × 3.2 cm) was placed in the bottom of the scanning tube, and the tube was filled with distilled water. After sample preparation, the tube was placed into the head coil of the micro-MRI (M3; Aspect Imaging, Jerusalem, Israel) and scanned in the T2 phase. Each sample underwent continuous scanning with a scanning accuracy of 50 μm (slice thickness, 1 mm; interslice gap, 0.1 mm; horizontal field of view, 12 mm; vertical field of view, 25 mm; pixel size, 0.05 mm).
After scanning, the 2D images were exported as DICOM files 26. Three-Dimensional Printing A sheep meniscus sample (2.7 cm × 2.0 cm) was freeze-dried and scanned by micro-CT as described above. MATLAB 2015a (The MathWorks, Natick, MA) was used on a computer with an i7-4790K CPU and 16.0 GB memory to analyze the images. A 3D voxel model was reconstructed from the data, converted to STL format, and printed in resin at an enlarged size because of the limited accuracy of the 3D printer 25. Porosity Measurement Porosity is defined as the ratio of the volume of pores to the volume of the bulk material and is usually expressed as a percentage. Porosity gives biomaterials the ability to allow tissue infiltration and integration. It is usually the main factor taken into account during the design and synthesis of a biomaterial. SEM images and micro-CT images of the meniscus were submitted to porosity measurement using ImageJ (version 1.47 for Windows, 64 bit, free software, National Institutes of Health, Bethesda, MD, USA). Briefly, the images were opened with ImageJ. After thresholding was done, the porosity was determined by using the image volume method to sum up the porosity pixels of all analyzed images, dividing that value by the sum of the areas observed on these images, and multiplying the obtained value by 100%. Histological Characteristics Hematoxylin-eosin staining and Masson staining showed that the meniscus was mostly composed of collagen, with a few cells disseminated between the collagen bundles (Fig. 1). Moreover, Masson staining revealed aligned collagen fiber bundles and circumferential fibers (Fig. 1D). Scanning Electron Microscopy Analysis In SEM images, massive aligned fibers are present (Fig. 2A). At a higher level of magnification (Fig. 2B), varied diameters of fibers can be found, and the fibers are highly cross-linked with numerous pores. The diameters of the collagen fibers are varied and the cross-linking pattern of the fibers is irregular. The porosity of the meniscus according to SEM images was 34.1% (34.1% ± 0.032%). Micro-MRI-Based Reconstruction The outline of the meniscus is clear in micro-MRI images. Because of the limited diameter of the scanning container, the model seems to be twisted. The microstructure is obscure (Fig. 3A, B). Figure 3C displays a voxel model of the meniscus. A resin 3D model was printed based on the voxel model (Fig. 3D). Figure 4A and B show a human meniscus before and after freeze-drying, respectively. The freeze-dried sample was subjected to micro-CT scanning. In the sagittal section of the micro-CT images (Fig. 5A), the surface layer (blue arrow), lamellar layer (red arrow), circumferential fibers (green arrow), and radial fibers (yellow arrow) are distinct. Figure 5B shows the sagittal section of another portion of the same meniscus. The path of the collagen fibers was clearly displayed in the transverse section (Fig. 5C). Even in the reconstructed digital model, the collagen fibers were distinct in the transverse section (Fig. 5D). The mean porosity of the meniscus according to micro-CT images was 33.92% (33.92% ± 0.03%). A 3D digital model of a sheep meniscus was reconstructed from the micro-CT files, and based on this digital model, an enlarged resin 3D meniscus model was printed (Fig. 6).
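As an illustration of the porosity calculation and the voxel-to-STL reconstruction described in the Methods, the sketch below re-expresses both steps in Python on a synthetic volume; it is not the ImageJ/MATLAB workflow actually used in the study, and the array size and the 34th-percentile threshold are placeholders chosen only so the example runs on its own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# Synthetic stand-in for the stack of thresholded micro-CT slices: smoothed
# random noise split into solid (True) and pore (False) voxels.
rng = np.random.default_rng(0)
noise = gaussian_filter(rng.random((64, 64, 64)), sigma=2)
solid = noise > np.percentile(noise, 34)

# Porosity as described in the Methods: pore pixels summed over all analysed
# images, divided by the total analysed area, and multiplied by 100 %.
porosity = 100.0 * np.count_nonzero(~solid) / solid.size
print(f"porosity = {porosity:.2f} %")

# Surface mesh of the solid phase via marching cubes, then a minimal ASCII STL
# export so that no extra mesh library is needed.
verts, faces, _, _ = measure.marching_cubes(solid.astype(float), level=0.5)
with open("meniscus_sketch.stl", "w") as out:
    out.write("solid meniscus\n")
    for tri in faces:
        p0, p1, p2 = verts[tri]
        n = np.cross(p1 - p0, p2 - p0)
        n = n / (np.linalg.norm(n) + 1e-12)
        out.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
        out.write("    outer loop\n")
        for p in (p0, p1, p2):
            out.write(f"      vertex {p[0]:.6e} {p[1]:.6e} {p[2]:.6e}\n")
        out.write("    endloop\n  endfacet\n")
    out.write("endsolid meniscus\n")
```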
Discussion Three-dimensional printed meniscus grafts are an emerging and promising innovation for meniscus transplantation. However, current 3D-printed meniscus grafts are barely mechanically functional, primarily because these grafts poorly mimic the original microstructure of a meniscus. In this study, a histological method and SEM were used to analyze the characteristics of the meniscus microstructure. The highly cross-linked fibers and massive micropores indicate the difficulty in reconstructing the complicated microstructure of the meniscus. Micro-MRI and micro-CT were used to scan the meniscus. Compared to micro-MRI, the freeze-drying and micro-CT strategy is better at displaying the microstructure and reconstructing a printer-readable meniscus digital model. Microstructure Analysis of the Meniscus Of all the functions of the meniscus, load bearing and stabilizing are the most important. Under normal circumstances, the meniscus is subjected to compressive and shear forces at different flexion angles. To withstand these forces, the meniscus has a unique structure and composition. Specifically, the meniscus is composed of water, cells, and extracellular matrix. The extracellular matrix includes collagen, proteoglycans, and adhesion glycoproteins 27. As is evident from the HE and Masson staining, a few cells are disseminated between a large number of fibers. According to SEM, the fibrous structure of the meniscus can be divided into three layers. The superficial layer, which contacts the tibial and femoral surfaces, is composed of a meshwork of thin fibrils. Beneath the superficial layer is the lamellar layer, which is a layer of lamellae of collagen fibrils on the tibial and femoral surfaces. The central layer, which is the main portion of the meniscus collagen fibrils, is located in the central region between the femoral and tibial surface layers. It contains radially aligned collagen fiber bundles and circumferential fibers. The lamellar, circumferential, and radial fibers form a complex network within the meniscus that helps it to withstand the varied forces (e.g. shear, tension, and compression) to which it is exposed. The lamellar layer is known to serve as an envelope for the circumferentially-oriented fiber bundles in the central main portion of the meniscus and is well suited to facilitate surface-to-surface motion. Choi et al. found that the normal lamellar layer plays a considerable role in the resistance to a compressive load, especially at the contact surfaces between the articular cartilage and meniscus 28. The large and thick C-shaped bending fibers help resist tension and transfer the knee joint load for the meniscus. Moreover, radially scattered fibers (e.g., the "rope"), which strengthen the meniscus, prevent longitudinal tears caused by excessive pressure. Overall, the extremely complicated microstructure of the meniscus ensures its intricate mechanical function. In the SEM image of our study, we can see that the diameter of the fibers is varied. In addition, the fibers are highly cross-linked, and between the fibers are numerous pores, which are suitable for cell migration. Considering the complicated fiber structure and the cross-linking of the fibers, it is almost impossible to fabricate a scaffold mimicking the meniscus using traditional methods. Therefore, 3D printing has become a promising approach to solve this problem. Theoretically, only if an original 3D-printer-readable digital model is obtained can any object be printed using the additive method of 3D printing.
Reconstruction and Three-Dimensional Printing of Meniscus Microstructure To reconstruct the digital model, we used micro-MRI to scan the meniscus. Healthy meniscus tissue has a short T2 on MRI, which means these tissues exhibit low intensity on conventional MR images. In our study, the meniscus sample showed low intensity on MRI. The meniscus microstructure could not be discriminated on the MR image. However, the outline was obvious. A 3D digital model was reconstructed based on the MR images, and a resin 3D meniscus model was printed. Because the diameter of the container that holds the meniscus sample was limited, the sample was twisted in the container, resulting in a distorted 3D model. However, the shape was precise. Our result was comparable to published studies that used MR images to reconstruct the model 19,21. Choi et al. used ultrashort echo time MR images to display the lamellar layer, which may be a future direction of meniscus MR imaging 28. As a result, micro-CT was taken into consideration. However, soft tissue cannot be viewed clearly on micro-CT because it has a high water content 29. To improve the contrast of biological tissue in micro-CT imaging, many contrast agents have been used to label the target tissue and to image the microstructure more clearly [30][31][32][33]. Unfortunately, the resulting images and 3D microstructures of these digital models are insufficient to allow 3D printing. Researchers have used micro-CT to scan rough menisci. The outline can be reconstructed, but the microstructure is obscure 14,34. Freeze-drying is a sublimation process that removes moisture from materials at low temperatures while maintaining their structure, bioactivity, and other properties. We freeze-dried the meniscus before micro-CT scanning. Surprisingly, the CT image not only showed a precise outline but also displayed a relatively clear microstructure of the meniscus. From the transverse CT image, the surface layer, lamellar layer, circumferential fibers, and radial fibers could be identified. Moreover, a resin 3D model was printed based on the micro-CT files, which confirmed that this microscale 3D digital model was printable. Chen et al. 35 (2018) used 3D micro-printing to fabricate freestanding polymer 3D nanostructures. Combined with our study, we believe that a biomimetic micro-printed meniscus is achievable in the near future. Limitations Our study has some limitations. First, limited by the resolution of the micro-CT we used, the exact path of the inner fibers was not as clear as in the SEM images. Second, few 3D printers and printing materials could match the nano-scale printing accuracy, which limited the precision of the 3D model of the meniscus. In addition, the regional variation of the fiber path was not fully analyzed in this study. The present study was the first attempt to reconstruct the microstructure of a meniscus using a freeze-drying and micro-CT strategy. In future studies, we will reconstruct more accurate models and analyze the microstructure more systematically. Conclusion The most important findings of our study are that a lyophilized meniscus can be scanned by micro-CT, the micro-CT image clearly displays the microstructure of the meniscus, and a reconstructed microscale 3D digital model is printable. This study provides a new strategy to reconstruct the microstructure of the meniscus, providing a crucial step towards the completely biomimicked 3D printing of the meniscus.
Multiple Biogenic Waste Valorization via Pyrolysis Technologies in Palm Oil Industry: Economic and Environmental Multi-objective Optimization for Sustainable Energy System Agricultural biomass is one of the major wastes in the world. Most of these wastes end up in landfills and incineration, causing significant environmental problems that are detrimental to human health and other species on the Earth. Thermochemical conversion can solve this issue by utilizing the energy embedded inside the biomass, mainly organic matter, into high-grade fuels and chemicals. Fast pyrolysis is one of the technologies that can convert biomass waste to a high yield of bio-oil, which can then be used as biofuels in vehicles. In this study, palm oil biomass wastes are valorized to generate bio-oil sustainably via several pyrolysis technologies such as conventional pyrolysis, microwave pyrolysis, and thermo-catalytic pyrolysis in a multi-objective optimization framework. The formulated multi-objective mixed-integer linear programming problems are solved using the ɛ-constraint method. The Pareto-optimal solutions have illustrated a clear trade-off between two conflicting objectives: total annualized profit and the global warming potential. The most profitable solution economically has an annualized profit of $237 per ton of biomass with an emission of 628 kg CO2 equivalent per ton of biomass. On the other hand, the most environmentally sustainable solution, while still generating positive income, has an annualized profit of $122 per ton of biomass with an emission of 132 kg CO2 equivalent per ton of biomass. A second scenario with a case study presented on the palm oil industry in Malaysia has also demonstrated the selection of biomass during feedstock blending when a constraint on biomass feedstock availability is pre-defined. The proposed model is robust for planning bioenergy complex, especially those involving multiple biomass feedstocks. In fact, this study has addressed the research gap in comparison of multiple distinctive pyrolysis processes with respect to multiple palm biomass feedstocks. Introduction One of the main goals targeted at the 26th Conference of the Parties (COP26) is to reach net zero carbon emission by 2050 by ending deforestation, reducing methane emissions, and avoiding using coal as the energy provider. Coal is the second-largest energy source, accounting for 30% of the energy consumption globally to generate electricity (Adedoyin et al. 2020). The consumption of coal is led by China (50.5%), India (11.3%), and the USA (8.5%) (Chien et al. 2022). The large-scale usage of coal in emerging countries like China and India is because of their rich coal reserves, advanced exploration, mining, and utilization technologies, as well as the promotion of industrialization (Duan and Luo 2022). The growth rate of coal for energy consumption has slowed down significantly based on the review of world energy statistics conducted by British Petroleum in 2019. However, coal consumption, especially in China and India, is still massive due to the significant demand for energy sources for industrialization and electricity generation. Recently, the electricity consumption has been too excessive, and power rationing has to be implemented in China to reduce methane and carbon dioxide emissions. This is because most of the power supply is still derived from coal due to cheap production costs and availability. 
Therefore, alternative energy sources, especially those that are renewable and cleaner, can be used to replace coal as the primary energy provider to reduce carbon emissions. For instance, energy-rich agricultural biomass can be used as one of the alternative energy sources for industrial production or electricity generation. Agricultural biomass is a consistent by-product derived from farming production. Most of these waste biomasses are not utilized efficiently; instead, they are usually left to decompose or burned in the open air, especially in developing countries (Tripathi et al. 2019). The continual increment in agricultural biomass production will pose a greater risk to human health and the environment. For instance, statistics showed that the open burning of biomass contributed to 18% of the total global emission of CO2, together with large quantities of particulates and soot (Jain et al. 2014). On the other hand, uncontrolled biomass disposal on land leads to water pollution, eutrophication, and the production of biomass-induced microflora in the soil. This microflora will emit NO and N2O, greenhouse gases with higher global warming potential than CO2 (Tripathi et al. 2019). One of the major agricultural biomasses is oil palm (Elaeis guineensis), an estate crop commodity and oil-producing plant that is produced and consumed globally. Palm oil has been in the prime position in the world vegetable oil market, with an annual production of 30 million tons since 2004, registering an annual growth rate of 8% (Hambali and Rivai 2017). Nonetheless, the supply chain to produce palm oil generates several distinct biomasses, unlike other crops such as soybean, rice, maize, and wheat. For example, oil palm fronds (OPFs) and oil palm trunks are produced during the harvesting of oil palm fruits and replanting activities. In the process of pressing fresh fruit bunches (FFBs) to produce palm oil, several biomasses are produced, such as palm mesocarp fiber (PMF), palm kernel shell (PKS), and oil palm empty fruit bunch (EFB) (Hambali and Rivai 2017). Palm oil mill effluent is also one of the wastes produced in the wastewater treatment process of the palm oil mill. In addition, palm oil sludge (POS) is one of the solid wastes from the wastewater treatment process that can be valorized after separation. In current practice, most of these palm oil biomasses and wastes are disposed of at a landfill or incinerated to generate energy. One method to efficiently utilize the energy embedded in the biomass is through thermochemical conversion. Thermochemical conversion is the decomposition of organic materials to produce liquid biofuel, solid fuels, and gaseous products (Halder and Azad 2019). There are a few thermochemical conversion technologies, such as direct combustion, pyrolysis, gasification, and hydrothermal liquefaction. The latter three have greater advantages in liberating the energy embedded in the biomass waste. Combustion is a highly exothermic chemical reaction that burns fuel in the presence of oxygen to generate heat or electrical energy (Kohse-Höinghaus 2021). On the other hand, gasification is the partial oxidation of solid fuels at a temperature range of 550-1000 °C, leading mostly to syngas (synthesis gas) and traces of ashes (Klinghoffer and Castaldi 2013). Gasification differs from combustion in that its product gas can be converted into various forms of energy.
The main products of gasification are combustible gases such as H 2 , CO, CH 4 , and CO 2 . These gases can be further converted into other gaseous or liquid fuels and chemicals such as ethanol and ethylene, which are the basic block for other chemical syntheses. Hydrothermal liquefaction is a relatively low-temperature process (300-400 °C) as compared to other thermochemical conversion but requires high pressure (40-200 bar) to convert wet biomass into solid biochar, liquid bio-oil, and combustible gases in the presence of hydrogen and catalyst (Milledge et al. 2014). This process also incurred a higher operating cost than other thermochemical conversions route due to its high-pressure operating condition (Gollakota et al. 2018;Milledge et al. 2014), and it is unfavorable for dry biomasses. Pyrolysis, on the other hand, occurred without oxygen to produce solid biochar, liquid bio-oil, and combustible gases at a temperature range of 300-700 °C. Pyrolysis is further separated into two major types: fast and slow. Slow pyrolysis is carried out at a temperature of about 300 °C and requires a few hours to complete, leading to biochar as the main product (Djandja et al. 2020). In contrast, fast pyrolysis involves rapid heating of fuels at low residence time (< 1 s) at about 500 °C, leading to high composition of bio-oil products (Klinghoffer and Castaldi 2013). Both technologies produce different products, leading to different applications; however, slow pyrolysis usually reduces the process efficiency (Sipra et al 2018). In fact, among the various pyrolysis products, the bio-oil is the major product obtained from the fast pyrolysis process that attracted great interest as it can be used as liquid fuel in vehicles after several treatments and upgrading via hydrotreatment (Li et al 2020). Therefore, fast pyrolysis process will be the focus in this study. It is also regarded as the most promising approach as an estimation of 75 wt% bio-oil can be produced and applied in many applications directly or after upgradation (Inayat et al. 2022;Czernik and Bridgwater 2004). Several life cycle assessment (LCA) studies have proven that the biofuels produced from fast pyrolysis can reduce the greenhouse gas emission from vehicles by 51% to 96%, depending on the type of feedstock, pyrolysis technology, and pyrolysis yield (Han et al. 2013). In the terms of economic analysis, few studies have concluded that the production of bio-oil can be economically viable. For instance, Diehlmann et al. (2019) have demonstrated the economical viability of production of bio-oil from rice straw where 46-65% of the biomass is converted. Wang et al. (2019) have concluded that a manufacturing cost of $3 per kg of cotton stalk is achieved for a production capacity of 18,000 tons bio-oil per year. It is also worth noting that different sources of raw materials, different pretreatments and upgrading methods, and recycling techniques would have significant impacts on the economic feasibility of the production of bio-oil (Inayat et al 2022). There are various types of pyrolysis technologies, including conventional pyrolysis, microwave pyrolysis, and thermo-catalytic reforming (TCR) pyrolysis. Conventional pyrolysis employs an external heating source to transfer heat to the material through a surface that depends significantly on the material physical properties, such as density, heat capacity, and thermal diffusivity. 
In contrast, microwave pyrolysis employs microwave heating that interacts with the biomass via electromagnetic wave (Beneroso et al 2017). This provides a direct electromagnetic energy transfer to the material, leading to volumetric and instantaneous heating regardless of the size of the material (Bermúdez et al 2015). Furthermore, the liquid product yield from microwave pyrolysis is higher compared to conventional pyrolysis, with the expense of greater capital cost incurred in power consumption (Sivagami et al 2021). On the other hand, TCR pyrolysis operates between fast pyrolysis and slow pyrolysis region, where the reaction temperature is similar to fast pyrolysis but with a lower heating rate and longer residence time (Schmitt et al 2019). TCR pyrolysis is also incorporated with a post-reforming process to improve the bio-oil quality. The catalyst required for the reforming process, biochar, is derived from the pyrolysis process (Neumann et al 2015). The combination of these processes in TCR pyrolysis converts the biomass waste into hydrogen-rich syngas, char without volatiles, and high-quality bio-oil with high heating value owing to its high carbon content, low water, and oxygen content (Schmitt et al 2019). Figure 1 demonstrates the block flow diagram for the three mentioned pyrolysis technologies to show the significant difference between unit operations and process pathway configurations. In Malaysia, 77% of the palm oil mills use combustion technology via a boiler or combined heat and power systems. In comparison, only 5% of the palm oil mills use advanced thermochemical conversion technologies such as gasification, pyrolysis, or hydrothermal liquefaction for selfconsumption power generation (Umar et al 2013). Several drawbacks in switching to advanced thermochemical conversion technologies that bring better energy efficiency for the palm oil mills include the insufficient quantity of feedstocks, especially for smallholder palm oil mills. On the other hand, large players such as government agencies and multinational companies with large biomass feedstocks prefer to sell them at an attractive buying price or use them for other purposes such as mulching (Hamzah et al 2019). Furthermore, the shifting to advanced thermochemical conversion technologies requires a high investment cost. Therefore, it is not economically viable as the available boiler capacity can sustain the palm oil mills' daily operation (Umar et al 2018). In the current practice, most palm oil mills utilize only PMF and PKS as the boiler feedstock. However, other potential feedstocks, such as EFB, OPF, and POS, can also be converted to energy and fuel via thermochemical conversion (Hamzah et al 2019). Nevertheless, various biomass feedstocks available in the palm oil industry are comprised of different amounts of organic content and different chemical composition, thus leading to different energy contents. On the other hand, the moisture content of various palm oil biomasses is not the same. Hence, the energy required to achieve the optimal moisture content for thermochemical conversion is very much dependent on it. Furthermore, different pyrolysis technologies have diverse process configurations, resulting in different pyrolysis products' yield. On the bright side, synergistic effect may occur when different palm biomass is considered for pyrolysis (Vasu et al 2020). 
All the factors mentioned above have a significant impact and should be considered during the design stage of a new pyrolysis plant for energy production from biomass waste. To date, no studies have considered multiple distinct pyrolysis processes or technologies, although a few have analyzed and compared types of pyrolysis (i.e., slow and fast pyrolysis) (Adelawon et al 2021; Zhao et al 2020). Multi-objective optimization is an effective tool widely utilized in designing efficient yet sustainable systems for various processes, especially in cases that involve conflicting objectives. In general, maximizing the profit and minimizing the capital and operating costs are essential for the system to be economically viable, while the life cycle environmental emission of the process is minimized. The minimization of life cycle environmental emission is significant as awareness of sustainability has grown in recent years. This tool can be applied to the valorization of palm biomass feedstocks by considering numerous pyrolysis technologies. In the past, various publications have addressed pyrolysis products. For instance, Wu et al. (2019) proposed a mathematical model to optimize the co-processing of bio-oil and vacuum gas oil in the fluid catalytic cracker process by choosing the optimal biomass feedstock and bio-oil production process. In summary, the optimal biomass feedstock is pulpwood, with fast pyrolysis adopted as the bio-oil production process under a minimized total annualized cost objective function. On the other hand, Gebreslassie et al. (2013) proposed a bicriteria optimization to maximize net present value and minimize global warming potential for a hydrocarbon biorefinery via fast pyrolysis with hydrotreating, hydrocracking, and hydrogen production from hybrid poplar feedstock. Zhang et al. (2014) further expanded their work by introducing various hydrogen production technologies. The objective of this study is to address the research gap on the comparison of multiple distinct pyrolysis process pathways with respect to multiple palm biomasses as pyrolysis feedstocks. Factors including multiple biomass feedstocks derived from the palm oil industry, such as OPF, PMF, PKS, EFB, and POS, and several pyrolysis technologies, namely conventional pyrolysis, microwave pyrolysis, and TCR pyrolysis, are accounted for via superstructure optimization to identify the optimum biomass selection and pyrolysis route with the best economic and environmental performance. The remainder of the work is organized as follows. The next section provides the problem statement of this study and the process overview of the various pyrolysis technologies and palm oil biomasses derived from the palm oil mill industry. This is followed by the multi-objective mixed-integer linear programming (MILP) model formulation that addresses the economic and environmental objectives. The results and discussion are then presented for two different scenarios, and a concluding remark is provided. Problem Statement The optimization problem to be solved in this work is to utilize various biomass feedstocks from the palm oil mill industry, which include palm mesocarp fiber (PMF), palm kernel shell (PKS), oil palm empty fruit bunches (EFBs), oil palm frond (OPF), and palm oil sludge (POS), that will undergo various pyrolysis technologies, such as conventional pyrolysis, microwave pyrolysis, and thermo-catalytic reforming (TCR) pyrolysis, to produce bio-oil and biochar.
The physical properties of these palm biomasses are provided in Table 1. This work aims to determine the optimum configuration of oil palm biomass feedstocks and pyrolysis technologies that maximizes the total annualized profit (TAP) and minimizes the global warming potential (GWP) simultaneously. The process overview of the pyrolysis of various palm oil biomass feedstocks derived from palm oil mills is shown in Fig. 2. First, the different types of palm biomass feedstocks are collected from the plantation site and palm oil mill, then transported to the pyrolysis site to convert the palm biomass into biochar and bio-oil as the main products. Three different pyrolysis processes are analyzed in this work: conventional pyrolysis, microwave pyrolysis, and TCR pyrolysis. Different pyrolysis technologies lead to different yields of biochar and bio-oil, which also depend on the feedstocks used. Furthermore, the working principles of the pyrolysis processes differ from one another. Thus, different unit operations are installed, directly affecting the process's capital cost and energy consumption. As demonstrated in Fig. 1, the conventional pyrolysis method includes a drying process that is not required in microwave pyrolysis, because microwave heating uses radiation heat instead of convection heat. On the other hand, the TCR pyrolysis method differs from the conventional pyrolysis method in that the primary reaction is divided into two parts, intermediate pyrolysis and a post-reforming process. This novel method is said to improve the bio-oil quality in terms of lower oxygen content (Hornung et al. 2016), which leads to a less intense hydrotreating process for upgrading the bio-oil to liquid fuels. A detailed process explanation for each of the unit operations is provided in Appendix B. General Model Formulation The problem of selecting the optimal pyrolysis process considering multiple palm biomass feedstocks is solved via a multi-objective mixed-integer linear programming (MILP) model. Detailed equations and notation are provided in the "Supplementary Information." In the MILP model, two conflicting objective functions are introduced: maximizing the total annualized profit (TAP) and minimizing the global warming potential (GWP). These objective functions are subject to several constraints, including the supply of the five different types of feedstocks, and pyrolysis facility, economic, and environmental constraints. The outline of the model is as follows:

max  Total annualized profit (TAP)
min  Global warming potential (GWP)
s.t. Economic constraints
     Environmental constraints
     Pyrolysis facility constraints
     Palm biomass feedstock constraints

To deal with the multi-objective nature of the model, an ɛ-constraint method is introduced. This method is frequently used to handle multi-objective optimization problems by presenting Pareto-optimal solutions due to its simplicity (Zhao et al 2020), and it can obtain all the Pareto-optimal solutions, including those in the convex space of the objective space (Laumanns et al 2006). To obtain the Pareto-optimal curve in multi-objective problems, one of the objective functions, the GWP emission in this case, is converted into an ɛ-constraint. Note that either one of the two objectives can be selected for conversion to the ɛ-constraint. Additional information regarding the ɛ-constraint is provided in the "Supplementary Information."
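To make the formulation above concrete, the following is a minimal sketch of how the superstructure selection and the ɛ-constraint conversion described here could be prototyped. It uses the open-source PuLP package rather than the LINGO implementation employed in the study, and every number (yields, prices, emission factors, capacity) is a placeholder assumption, so the output is illustrative only.

```python
# Illustrative sketch, not the study's model: a tiny palm-biomass pyrolysis
# superstructure MILP with the GWP objective handled as an epsilon-constraint.
import pulp

feedstocks = ["PMF", "PKS", "EFB", "OPF", "POS"]
techs = ["conventional", "microwave", "TCR"]

# Placeholder per-ton profit ($/t) and emission (kg CO2-eq/t) for each
# feedstock-technology pair; real values would come from the process models.
profit = {(f, t): 100.0 for f in feedstocks for t in techs}
profit[("OPF", "conventional")] = 240.0
gwp = {(f, t): 400.0 for f in feedstocks for t in techs}
gwp[("EFB", "microwave")] = 130.0
capacity = 2000.0  # t/day of biomass the plant can process (assumed)

def solve(epsilon):
    """Maximize TAP subject to total GWP <= epsilon and one technology built."""
    m = pulp.LpProblem("palm_pyrolysis", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("feed", (feedstocks, techs), lowBound=0)
    y = pulp.LpVariable.dicts("build", techs, cat="Binary")

    m += pulp.lpSum(profit[f, t] * x[f][t] for f in feedstocks for t in techs)
    m += pulp.lpSum(gwp[f, t] * x[f][t] for f in feedstocks for t in techs) <= epsilon
    m += pulp.lpSum(y[t] for t in techs) == 1            # exactly one technology
    for t in techs:                                      # flow only through it
        m += pulp.lpSum(x[f][t] for f in feedstocks) <= capacity * y[t]

    m.solve(pulp.PULP_CBC_CMD(msg=0))
    total_gwp = sum(gwp[f, t] * x[f][t].value() for f in feedstocks for t in techs)
    return pulp.value(m.objective), total_gwp

# Sweep epsilon from a loose to a tight emission budget to trace the Pareto front.
for eps in [800_000, 600_000, 400_000, 260_000]:
    tap, emis = solve(eps)
    print(f"eps = {eps:>7,} kg  ->  TAP = ${tap:>10,.0f}, GWP = {emis:>9,.0f} kg CO2-eq")
```

Each solve corresponds to one point on a Pareto curve of the kind shown in Fig. 3; in the study, the same logic is applied to the full constraint set and data described in the Supplementary Information.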
After converting the environmental objective function to a corresponding ɛ-constraint, the multi-objective MILP model is reformulated into a single-objective MILP model, as follows:

max  Total annualized profit (TAP)
s.t. ɛ-constraint on global warming potential
     Economic constraints
     Environmental constraints
     Pyrolysis facility constraints
     Palm biomass feedstock constraints

In this study, two scenarios are demonstrated. The first scenario focuses on maximizing the TAP and minimizing the GWP emission without considering any constraint on the palm biomass feedstock distribution (i.e., there is no minimum feed required for each biomass). Therefore, the model favors a single biomass feedstock in achieving the optimum result. In the second scenario, the state of Johor in Malaysia is selected as the case study by considering the production of palm biomass based on the plantation area in the selected location. The second scenario considers the actual production of palm biomass in the state of Johor; thus, utilizing the multiple types of biomasses available becomes the limiting constraint in the MILP model. The oil palm plantation area is about 0.8 million hectares in the state of Johor, and 15 million tons of fresh fruit bunches are processed annually (Aljuboori 2013). The pyrolysis plant aims to utilize 20% of the biomass available in the selected location. Table 2 shows the annual production rate of the various palm biomasses based on the plantation size. Scenario I-Multi-objective Optimization of Single Biomass Feedstock for the Pyrolysis Process The model is solved using LINGO 13.0 software, with an elapsed time of one second, running on a Windows 10 Home Edition 64-bit personal computer with an Intel Core i5-4590 at 3.30 GHz and 8.00 GB of RAM. For the first scenario, the Pareto-optimal solutions for the multi-objective optimization of the economic and environmental criteria of the pyrolysis process from palm mill feedstock are illustrated in Fig. 3. The Pareto curve represents the trade-off between the economic and environmental objectives. In general, greater annualized profit and lower environmental emissions are preferred. Thus, the region above the curve is infeasible, while the region below the curve is sub-optimal. Six Pareto-optimal solutions (Point A, Point B, Point C, Point D, Point E, and Point F in Fig. 3) are selected for further analysis. Point A constitutes the lowest environmental emission; however, the profit generated from the pyrolysis plant is also the lowest. For Point B and Point C, as compared to Point A, a slight trade-off in the environmental emission can yield a steep increase in profit, mainly by switching the palm biomass feedstock from EFB to PKS and OPF. EFB exhibits properties such as higher moisture content and lower bio-oil yield compared to PKS and OPF, requiring more energy to remove the moisture from the biomass and resulting in a lower profit. However, in terms of environmental criteria, EFB is superior to PKS and OPF with respect to biomass size; thus, less energy is required to further reduce the size of EFB to the condition required for the pyrolysis reaction. The energy consumed in grinding for size reduction is greater than the energy of drying. Comparing Point B and Point C, the reason for the superiority of OPF over PKS in terms of the economic criteria is its greater bio-oil yield, which leads to better profit.
However, in terms of environmental criteria, the PKS is far better than OPF due to lower moisture content and lower water content in bio-oil. This led to less intensive pretreatment and post-treatment to remove the water content in the biomass and bio-oil, respectively, thereby saving energy and reducing carbon emission. Point A, Point B, and Point C employ the microwave pyrolysis route that is more environmentally friendly, which will be further discussed in a later section involving the environmental emission breakdown of pyrolysis unit operations. Point D, Point E, and Point F are analogous to Point A, Point B, and Point C. The difference is that the former points exhibit both higher economic returns and greater environmental impact due to a different pyrolysis unit being used. Point A, Point B, and Point C employed microwave pyrolysis, while Point D, Point E, and Point F employed conventional pyrolysis. Conventional pyrolysis has a lower start-up cost than microwave pyrolysis, which leads to a greater profit margin. Point F represents the solution with the greatest profit but at the cost of the highest environmental emission. It is good to note that all the solutions that lie on the curve, including Point A, Point B, Point C, Point D, Point E, and Point F, are optimal solutions, where the selection of the point is based on the preference between the two objectives. Solutions in the left region (i.e., Point A, Point B, and Point C) would place importance on environmental emission as a top priority. In contrast, solutions in the right region (i.e., Point D, Point E, and Point F) aims to maximize the profit of the pyrolysis plant. Microwave pyrolysis does not require tedious pretreatment processes such as grinding and drying, which consume some of the process energy, the application of microwave process in scale-up operations is still lacking in technical information and detailed design. Thus, a higher capital cost is estimated for microwave pyrolysis, leading to a lower profit margin. It is also interesting to note that PMF and POS are not selected as the feedstock in the model mainly due to PMF fetching a high water content in bio-oil. This renders the overall lesser bio-oil yield, resulting in loss generated for the plant. Similarly, POS favors biochar yield to bio-oil yield, where the profit generated for biochar is far lesser than treated bio-oil. This resulted in meeting the same fate with PMF biomass in incurring losses if it is used as the biomass feedstock for pyrolysis. The economic breakdown for the various pyrolysis processes is demonstrated in Fig. 4 with information such as TAP, total annualized revenue, total annualized capital cost, and total annualized operating cost. Note that the biomass feedstock is constant while varying the pyrolysis technologies. Similar to the Pareto-optimal curve shown in Fig. 3, conventional pyrolysis has a higher TAP, which places the process that selected conventional pyrolysis on the right region of the Pareto-optimal graph. This is followed by microwave pyrolysis and then TCR pyrolysis. Since TCR pyrolysis has the lowest profit generated, as shown in Fig. 4, this technology is not selected, hence does not appear in the Pareto-optimal graph. One of the reasons that led to the lowest TAP for TCR pyrolysis is the reforming process resulting in higher capital costs and operating costs than conventional pyrolysis. 
Since some of the biochar is used as the catalyst in the catalytic reforming unit operation, the overall yield of biochar is reduced, leading to lower total annualized revenue (Schmitt et al 2019). On the other hand, microwave pyrolysis has the highest total annualized revenue from the high yield of pyrolysis products due to its unique thermal gradients. The exceptional cooler surroundings during microwave heating allows the carbohydrate derivatives to be preserved in larger quantities than in other pyrolysis processes (Beneroso et al 2017). However, the TAP of microwave pyrolysis is offset by the higher total annualized capital cost and operating cost, mainly due to a higher estimation of the unit operations of microwave pyrolysis. A few of the factors supporting this statement include the lack of large-scale operation data, even from demonstration plants, in which the technical risks have not been evaluated thoroughly and mitigated (Buttress et al 2016). This resulted in the capital cost being a vital sensitivity variable due to unforeseen uncertainties (Wang et al 2015). Haeldermans et al. (2020) have concluded that conventional pyrolysis is more viable than microwave pyrolysis as it is a simpler and more established technology. Furthermore, Kim et al. (1999) have reported that the capital cost of microwave hardware is $1000-$2000 per kW of installed power, which is higher than the other conventional heating hardware. Nevertheless, the energy efficiency of the microwave heater is much better than conventional pyrolysis that uses a burner, where the efficiency improvement can be up to fourfold (Binti Mohd 2017). This directly reduces the energy consumption per unit mass of biomass during the pyrolysis process. Table 3 shows the breakdown of environmental emission for each unit operation with respect to the type of pyrolysis process. In terms of environmental emission, the microwave pyrolysis pathway yields the most negligible environmental impact, followed by conventional pyrolysis and then TCR pyrolysis, which contribute almost the same emission at about 650 kg CO 2 equivalent per unit mass of biomass. By breaking down the pyrolysis pathways into a single unit, the grinder unit operation contributed most to the environmental emission, which is one of the essential pretreatment sections that must be done so that the heating efficiency and uniformity of biomass during the pyrolysis process are improved. Furthermore, grinding action consumes electrical energy, which contributes more CO 2 emission per kW of energy than heat energy powered by natural gas in the life cycle assessment perspective. Since microwave pyrolysis employs radiation heating, the size of biomass does not have a great impact on the heating process. Thus, the emission is lower as it does not require an energy-intensive grinding operation. The second unit operation that has the most significant impact on global warming is the condenser, which also consumes tremendous electrical energy during the separation process of bio-oil and biogas. These two unit operations should be the top priority during process optimization or intensification to reduce pyrolysis emissions. Looking solely at the pyrolysis process in Table 3, microwave pyrolysis contributed more emission than conventional pyrolysis, which has the least emission generated, followed by TCR pyrolysis when both pyrolysis and reforming process are combined. 
Experiment studies have shown that microwave pyrolysis consumes 75% lesser energy than conventional pyrolysis (Binti Mohd 2017). This is because microwave heating is an internal heating mechanism that leads to rapid and selective heating, as compared to other pyrolysis processes, which require a longer residence time to achieve the targeted temperature due to conductive and convective techniques (Gautam et al 2019;Liew et al 2019). Additionally, the conversion efficiency of electrical energy to heat energy for microwave heating is above 80%, thus minimizing the energy loss (Osepchuk 2002). However, because microwave pyrolysis operation requires electricity, the emission from microwave pyrolysis is seen greater than conventional pyrolysis. Scenario II-Feedstock Blending with a Constraint on Biomass Availability Based on Location A capacity of 2000 tons per day of pyrolysis facility is introduced to fully cover the 20% of the biomass available in the state of Johor. The biomass generated in the area will be transported to the pyrolysis facility in the state itself. The environmental factor is analyzed in this scenario, but not the economic factor. This is because the selection of biomass does not affect the capital cost but only the operating cost, which has a linked relationship with the environmental emission. The environmental emission shown in Fig. 5 exhibits a trend as in Table 3 where microwave pyrolysis technology has the lowest emission, followed by conventional pyrolysis and then TCR pyrolysis. It is interesting to note that the model will generate different optimal blending of biomass when there is a constraint on feedstock availability in a particular area. From Table 3, the available palm biomass based on 0.8 million hectares in the state of Johor is 715 tons, 482 tons, 679 tons, 4296 tons, and 294 tons of PMF, PKS, EFB, OPF, and POS, respectively. Given a pyrolysis plant with a 2000-ton feed capacity, feedstock blending using various biomasses must be considered to fulfill the required capacity. PKS, EFB, and POS are fully utilized for conventional and microwave pyrolysis, with the remaining sourced from OPF. Feedstock blending is proved to be advantageous because of the variety in biomass options, lower threat, and lower carriage costs (Oasmaa et al 2010). The selection of biomass also followed the priority of POS and then EFB, PKS, and OPF, which is similar to the Paretooptimal chart in Fig. 3 based on environmental emission. The POS has the least environmental emission due to the low bio-oil yield; hence, lesser treatment is needed. However, as mentioned above, the high acquisition cost and the low yield of bio-oil from POS biomass would cause the plant to incur losses if it is used as the sole feedstock for pyrolysis. The case might be different when there is a mixture of biomass. The overall environmental emission can be reduced, and other biomass that generates profits can cover the losses. The PMF biomass is not selected as the feedstock because of the high water content in bio-oil that requires greater energy for post-treatment, leading to higher cost. Another reason for this is the availability of OPF in large quantities that can fulfill the capacity of the pyrolysis plant. However, the drawback of using OPF is the accessibility of this biomass as it needs to be collected on the plantation site compared to other biomasses that can be obtained immediately after the mill operation. 
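The blending logic just described (filling a 2000 t/day plant from the biomasses available in the area while favouring the lowest-emission feeds) can be expressed as a small linear program. In the sketch below, the daily availabilities are the figures quoted above for Johor, while the per-ton emission factors are placeholder values that only respect the ordering reported in the text (POS lowest, then EFB, PKS, OPF, and PMF); the real factors would come from the life cycle inventory of each feedstock and technology.

```python
# Sketch of the Scenario II blending decision (assumed emission factors; the
# availabilities are the daily figures quoted in the text for Johor). The goal
# is to meet the 2000 t/day plant capacity at minimum GWP.
import pulp

availability = {"PMF": 715, "PKS": 482, "EFB": 679, "OPF": 4296, "POS": 294}  # t/day
# Placeholder GWP factors (kg CO2-eq per ton), ordered as in the text:
# POS < EFB < PKS < OPF < PMF. Actual values would come from the LCA inventory.
gwp_factor = {"POS": 80, "EFB": 110, "PKS": 140, "OPF": 170, "PMF": 200}
capacity = 2000  # t/day

blend = pulp.LpProblem("scenario_II_blending", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", availability, lowBound=0)

blend += pulp.lpSum(gwp_factor[b] * x[b] for b in availability)   # minimize GWP
blend += pulp.lpSum(x[b] for b in availability) == capacity       # meet capacity
for b, limit in availability.items():
    blend += x[b] <= limit                                        # availability cap

blend.solve(pulp.PULP_CBC_CMD(msg=0))
for b in availability:
    print(f"{b}: {x[b].value():.0f} t/day")
```

With these assumptions, the solution reproduces the pattern described above: POS, EFB, and PKS are used in full, and the remaining capacity is covered by OPF.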
For TCR pyrolysis, OPF is not selected as the fourth biomass; instead, PMF is used. This is because TCR pyrolysis, with its additional reforming section, can yield bio-oil with a lower water content (Hornung et al 2016). Therefore, less intensive hydrotreatment is required at the post-treatment stage to remove the water content from the bio-oil. The study performed in Scenario II has shown that different biomasses impact the environmental emissions differently. Furthermore, depending on the selected pyrolysis technology, certain biomasses perform better in reducing emissions and in saving costs. Figure 6a, b, and c gives a clearer picture of the input and output mass and energy balances for conventional pyrolysis, microwave pyrolysis, and TCR pyrolysis for the case study in Scenario II. A sensitivity analysis is performed by varying the biomass feed for Scenario II to mimic the context of a disrupted biomass supply, as shown in Fig. 7. For example, a biomass can be diverted to produce other potential products with better economic yield; therefore, other palm biomass feeds must be utilized to fulfill the capacity of the pyrolysis plant. It is reasonable to note that the maximum supply of each type of biomass also depends on its availability in the area. In Fig. 7, the first case study, denoted as 1, which corresponds to the conventional pyrolysis case in Fig. 5, registers the lowest emission when all the biomass feeds are available and PKS, EFB, and POS are fully utilized. For case study 2, PKS is not supplied to the pyrolysis facility, and the remaining capacity is supported by OPF, leading to a higher emission than in case study 1. In case study 3, EFB is not supplied to the pyrolysis facility. Due to the ample supply of OPF, it can cover the remaining deficit required for the pyrolysis facility. However, the GWP in case study 3 is also the highest, leading to the conclusion that EFB is one of the cleanest biomass feeds for the pyrolysis process, as its smaller size compared to other biomasses gives it the lowest emission. Case study 4 does not include OPF in the pyrolysis facility; thus, PMF has to be utilized to cover the remaining capacity. The usage of PMF results in a slightly higher GWP compared to case study 1. The final case study, 5, does not include POS for the pyrolysis plant. This sensitivity analysis has shown that different biomass feeds have a marked effect on the GWP. Conclusion In this work, a bioenergy complex is proposed consisting of multiple palm biomass feedstocks available in the palm oil supply chain, such as palm mesocarp fiber (PMF), palm kernel shell (PKS), oil palm empty fruit bunches (EFBs), oil palm frond (OPF), and palm oil sludge (POS), which are fed into various pyrolysis technologies, namely conventional pyrolysis, microwave pyrolysis, and thermo-catalytic reforming (TCR) pyrolysis. A multi-objective mixed-integer linear programming (MILP) model was formulated to determine the optimal designs of the palm biomass supply chain by considering both the economic and environmental objectives. This multi-objective optimization problem is solved via an ɛ-constraint method. The proposed model is robust for planning bioenergy complexes, especially those involving multiple biomass feedstocks. Furthermore, this model is generic, as it can be applied not only to different regions with different biomass availability but also to other agricultural industries.
In the multi-objective optimization of single biomass feedstock for the pyrolysis process, Scenario I, some insights are obtained from the results. First, the Pareto-optimal curve has shown a clear trade-off between the two conflicting economic and environmental objectives, namely the total annualized profit and the global warming potential. The most profitable solution achieves an annualized profit of $237 per ton of OPF converted, with an emission of 628 kg CO2 equivalent per ton of OPF consumed. On the other hand, the most environmentally sustainable solution generates an annualized profit of $122 per ton of EFB converted, with an emission of 132 kg CO2 equivalent per ton of EFB consumed. Scenario II, which reflects a case study on the palm oil industry in Johor, Malaysia, has also demonstrated the selection of biomass during feedstock blending when a constraint on biomass feedstock availability is pre-defined. The selection of palm biomass feedstock based on the lowest GWP, in ascending order, is POS, followed by EFB, PKS, OPF, and then PMF. However, the type of pyrolysis technology directly impacts the feedstock blending, where certain feedstocks can create synergy effects with the pyrolysis units that result in either lower emission or better cost savings. For future research, the price fluctuation of the feedstocks can be considered, as the price will directly impact the plant's profitability. Furthermore, debottlenecking the unit operations with the greatest emission, for instance the grinding and condenser unit operations, through process optimization and process intensification can aid in reducing the total carbon footprint on the environment. Funding Open Access funding enabled and organized by CAUL and its Member Institutions. Data Availability The dataset generated and analyzed in the current study is available from the corresponding author on reasonable request. Conflict of Interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Fig. 6 Mass and energy balance process flow diagrams for Scenario II: a conventional pyrolysis, b microwave pyrolysis, and c thermo-catalytic reforming pyrolysis. Fig. 7 Sensitivity analysis of the various biomass feeds for conventional pyrolysis.
Molluscs community as a keystone group for assessing the impact of urban sprawl at intertidal ecosystems Mollusc communities are getting endangered in the aftermath of urban sprawl because artificial structures do not surrogate natural substrates. In this study, we compared the diversity, community and trophic arrangements of molluscs among different models of artificial substrate and their adjacent natural rock, to detect relationships between some abiotic variables and the mollusc communities. Complexity, chemical composition and age were tested as potential drivers of the community. Diversity, community and trophic structure differed between natural and artificial substrates. Complexity at the scale of cm was detected as the most important factor driving the community structure. In addition, a chemical composition based on silica and/or scarce calcium carbonates seems to be relevant for molluscs, as well as for the secondary substrate where they inhabit. However, age did not seem to be a driving factor. Among the different artificial structures, macroscale complexity was detected as the main factor diverging a drastically poor community at seawall from other artificial structures. In this context, macro and microscale complexity, chemical composition and mineral type are variables to consider in future designs of artificial substrates. Introduction Biodiversity on natural coastal habitats is under threat by many causes, mainly: coastal artificialization, exploitation of renewable (fisheries) and non-renewable (mineral and energy extraction) resources, pollutant discharge and marine debris (Dulvy et al. 2003;Jacob et al. 2018). In the Bay of Sydney (Australia), 50% of the natural coastline is replaced by artificial substrates (Chapman 2006;Dafforn et al. 2015) and around 22.000 Km 2 of European coasts are covered with concrete or asphalt (European Environment Agency Report 2006; Airoldi and Beck 2007). This coastal transformation, the so-called 'urban sprawl' , is being boosted by shore erosion due to more frequent stormy events and the sea-level rise (Bouma et al. 2014;Bulleri and Chapman 2010), altogether, threatening intertidal ecosystems. Intertidal communities are diverse and complex due to the broad range of biotic and abiotic interactions that occur on intertidal natural substrates (Chapman 2013). For example, wave and tide action (Southward and Orton 1954), desiccation or top-down processes (predation, competition, grazing, etc.) modulate both the sessile and vagile biota, promoting the development of rich and ecologically important communities. Molluscs are one of the most abundant taxa in the intertidal zone, providing important ecosystem services (see Table 2 in Firth et al. 2016). They are considered early colonizers of substrates (Underwood and Chapman 2013) and play important roles in C and Si cycles (Meysman and Montserrat 2017). Sessile filter molluscs can contribute to clean water and improve nutrient uptake for algae (Eriksson et al. 2017) and together with other sessile organisms, they serve as engineers (Melero et al. 2017;Commito et al. 2018) setting up a secondary substrate for many different species. Also, mobile grazers can feed on macrophytes, cleaning areas for subsequent colonization of many species ). Previous studies have reported a negative impact of artificial substrates on intertidal molluscs. For example, Moreira et al. (2006) suggested that seawall do not sustain viable populations of limpets. Furthermore, complexity/heterogeneity (e.g. 
micro-roughness) can affect the abundance of chitons (Moreira et al. 2007) or limpets (Rivera-Ingraham et al. 2011) on artificial substrates. In fact, substrate complexity is one of the biggest drivers of intertidal biodiversity. Concrete-made artificial substrates usually lack microhabitats (crevices, rock pools, etc.), preventing refuge from stressful conditions such as desiccation or predation, and are largely responsible for the biodiversity deficit of artificial substrates compared with the natural rocky shore (Firth et al. 2016 and references therein). Therefore, eco-engineering actions that added habitat complexity at different scales have been related with a higher number of taxa (Kefi et al. 2015;Strain et al. 2018) and enhanced recruitment and survival of sessile and mobile macrofauna (Atilla and Finelli 2005). Substrate composition, like minerals and elements, is also known to be an important factor affecting communities developing on artificial structures (Coombes et al. 2015;Sempere-Valverde et al. 2018). For example, acidic siliceous quartz from sandstone may cause oxidative stress and hold less diverse and mature community when compared to limestone (Bavestrello et al. 2000;Cattaneo-Vietti et al. 2005). The mineralogical composition usually varies from artificial substrates (normally made from concrete) to natural substrates (Ido and Shimrit 2015;Ponti et al. 2015). Concrete may liberate toxic metals and carbonates that enhance alkalinity (pH ~ 13) producing stress on individuals (Ido and Shimrit 2015). In the case of molluscs, higher saturation of aragonite can facilitate a higher occurrence of burrowing bivalves (Mos et al. 2019) and alkaline concrete surfaces may increase oysters' recruitment (Anderson 1996). Although ecological succession may not occur in a parallel manner on artificial and natural substrates (Burt et al. 2011), the age of substrates has been considered as an important factor explaining the differences between artificial and natural substrates (Glasby and Connell 1999a, b). Some authors have estimated that it takes from 5 to 20 years for artificial structures to reach climax communities (Coombes 2011;Hawkins et al. 1983;Pinn et al. 2005), while others suggest that communities on low crested structures never reach climax (Gacia et al. 2007) or take more than 100 years (Perkol-Finkel et al. 2005). Consequently, we decided to consider the date of substrates deployment in the present study, together with substrate composition and complexity, to study mollusc diversity associated with artificial substrates. Furthermore, changes in the community structure of epifaunal organisms associated with artificial substrates can cause trophic shifts (Sedano et al. 2020a). Artificial substrates are known to affect prey resources (Munsch et al. 2015), limiting the diet of some mollusc species (Burgos-Rubio et al. 2015) and ultimately restricting the diversity of trophic strategies. For example, the reduced primary productivity on seawalls has been related to the scarcity of herbivore grazers (Lai et al. 2018). These effects, among others, call for an ecological evaluation of coastal artificial substrates to prevent the decline of intertidal habitats (Dafforn et al. 2015;Firth et al. 2016) and promote other ecological services (García-Gómez et al. 2014;Dearborn and Kark 2010). Taking into account that molluscs are diverse and contribute highly to this habitat (Ricciardi et al. 
1997), we decided to study the community of molluscs as a model to detect relationships between the abiotic features of the man-made intertidal substrate and the associated fauna. We focused on habitat complexity, substrates composition and age intending to identify which factors are driving the differences in molluscs taxonomic and trophic structure between artificial substrates and natural substrates. In this regard, we hypothesized that: 1. Substrate complexity and composition would be the main drivers differentiating artificial from natural substrates, given the differences in complexity and composition between artificial and natural substrates in our study area. Additionally, we hypothesized that the mollusc community at rip-raps (an artificial substrate made from natural rock) would be the most similar to natural substrates. 2. Trophic community structure would vary among different artificial substrates and between artificial and natural substrates. 3. Age will be a driver structuring intertidal molluscs' community on artificial substrates. Study area Our study area was located in the Algeciras Bay (Cadiz, Spain), which achieves 400 m in depth and occupies 73 Km 2 of area. This deep bay is found next to one of the most relevant marine regions in the world, the Strait of Gibraltar. It is a marine area with high biodiversity due to its location and structure, which is placed between Africa and Europe and between two water bodies, the Atlantic Ocean and the Mediterranean Sea (Usero et al. 2016). Algeciras Bay contains five different substrates (four artificial and their nearest natural rocky shore), very close to each other and under very similar environmental conditions. We selected four nearby artificial substrates (acropods, cubes, rip-raps and seawall) and compared the molluscan assemblages and trophic structure among them and with the nearest natural substrate. Given the difficulty to find different artificial substrates next to each other, we limited our study area to this single Bay (Fig. 1). Abiotic analysis To identify possible drivers of the differences between substrates, we measured the physicochemical features of each substrate. The variables included macro and microscale complexity, elemental composition, minerals, crystallinity, calcination percentage (C.P) and age. Complexity measures were divided into macroscale complexity (m) and microscale complexity (cm). In both cases (macro and microscale), substrate roughness was calculated as in Rivera-Ingraham et al. (2011) using the equation by Blanchard and Bourget (1999): Roughness or topographical heterogeneity index (THI) = Tr / Ts, where Tr is the "effective" distance between two points "A-B" (measuring the contour between A-B) and Ts is the linear distance between A-B. Macroscale roughness was calculated over 15 m length transects. Three transects were selected at each substrate and a flexible meter was laid directly over it, trying to conform as closely as possible to all contours of the bare substrate. Regarding microscale roughness, three 15 cm profile gauges with 0.5 mm pins were pushed onto the bare rock to record the surface of each substrate . The resulting profiles were photographed, and the images were digitally processed with Adobe Photoshop to obtain two coloured images. The length of the contour of the profile was obtained with ImageJ software. 
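As an illustration of the roughness metric used here, the topographical heterogeneity index can be computed directly from a digitized profile. The sketch below assumes the (x, y) contour coordinates exported from the ImageJ trace are available as arrays; the example profile itself is invented.

```python
# Sketch of the topographical heterogeneity index (THI = Tr / Ts) for one
# digitized profile, assuming the (x, y) contour points exported from ImageJ
# are available as arrays; the example coordinates are invented.
import numpy as np

def thi(x, y):
    """Effective contour length between the profile endpoints divided by
    the straight-line distance between them (Blanchard and Bourget 1999)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    contour = np.sum(np.hypot(np.diff(x), np.diff(y)))   # Tr: along the surface
    chord = np.hypot(x[-1] - x[0], y[-1] - y[0])          # Ts: A to B directly
    return contour / chord

# Invented 15 cm profile: a flat surface gives THI = 1, rougher surfaces give > 1.
x = np.linspace(0, 150, 301)          # mm along the profile gauge
y = 2.0 * np.sin(x / 5.0)             # mm of relief (illustrative only)
print(f"THI = {thi(x, y):.2f}")
```

A perfectly flat surface gives THI = 1 and values increase with surface relief, so the same index applies to both the 15 m macroscale transects and the 15 cm microscale profiles.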
The elemental composition and calcination percentage, mineralogical absorption spectra, crystallinity and lithology composition of each sample were obtained from Sedano et al. (2019). All chemical composition was characterized using three powdered fragments of each substrate. Age of the substrate was based on the date of construction and resulting from the difficulty of dating age of the natural substrate, the oldest possible date in the same order of magnitude compared to the oldest artificial substrate was used instead. Also, wave exposure was quantified at each substrate using a combination of the maximum fetch and the modified effective fetch (Fe) index developed by (Howes et al. 1994): Fe = [∑ (cos ɵ i ) X F i /∑cos ɵ i ], where ɵ i is the angle between the shore-normal, and the directions 0º, 45º left and 45º right and F i is the fetch distance in Km along the relevant vector. To determine if substrates differed physico-chemically and to detect the most relevant abiotic components that separate the substrates, we performed a Principal Component Analyses (PCA) using macro, microscale complexity, elemental composition (calcium, silicon and magnesium), crystallinity and age. Data were normalized before analyses. Biotic analysis Community and trophic structure, as well as biodiversity indices (richness, Pielou's Evenness and Shannon's diversity), were compared among artificial and natural substrates. Three different sites were randomly selected within each of the five substrates (natural, cubes, acropods, rip-raps and seawall). At each site, three replicate quadrats of 20 × 20 cm were scraped (3 sites × 3 replicates × 5 substrates = 45 samples). The samples were collected during low tide and within the lower intertidal zone (5-30 cm over the lowest tidal level). We scraped the biotic substrate (secondary substrate) and the associated fauna and preserved it in 96% ethanol until laboratory analyses. At the laboratory, associated molluscs were sorted out from the rest of sessile and vagile biota, identified down to species level whenever possible and quantified in terms of their abundance. Since the secondary substrate (sessile biota developing on the hard primary substrate) can influence the associated fauna (Chapman et al. 2005), all sessile fauna and flora that conform the secondary substrate were volumetrically quantified at each replicated site to control this variable (used as a covariate in the analyses). Besides, percentages of the most abundant species of the secondary substrate were recorded as well, to detect possible differences between substrates. To identify possible trophic shifts, we grouped the different species into trophic categories and compared the trophic structure among substrates. Species were assigned and grouped according to their trophic strategies Table 1 in Donnarumma et al. 2018), with slight modifications to better represent the feeding strategy of the species in our study. We assigned them into different groups depending on what they feed on, and the way to obtain the food (trophic guilds) (Table 1). Despite being known to feed on larvae of animals and detritus (Burgos-Rubio et al. 2015), limpets were considered herbivores. We also considered omnivores all animals that feed on suspended organic-matter or by filtering particles in the water column. After species identification, three biodiversity indexes were measured (Badalamenti et al. 2002): species richness, Shannon-Wiener diversity and Pielou evenness for each replicated site in all substrates. 
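For reference, the three indices named above can be obtained from the abundance counts of a single replicate as in the short sketch below; the counts are invented, and the Shannon index is computed here with the natural logarithm, which is an assumption rather than a detail stated in the text.

```python
# Sketch of the three biodiversity indices for a single replicate quadrat,
# computed from species abundance counts (the counts below are invented).
import numpy as np

abundances = np.array([12, 7, 3, 1, 1])   # individuals per species in one 20 x 20 cm quadrat

richness = np.count_nonzero(abundances)                          # S: number of species
p = abundances[abundances > 0] / abundances.sum()                # relative abundances
shannon = -np.sum(p * np.log(p))                                 # H' (natural log)
pielou = shannon / np.log(richness) if richness > 1 else 0.0     # J' = H' / ln(S)

print(f"S = {richness}, H' = {shannon:.2f}, J' = {pielou:.2f}")
```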
To test for biodiversity differences among substrates, we performed a nested ANOVA for each biodiversity index using GMAV5 (Underwood et al. 2002). A Student-Newman-Keuls (SNK) test was also conducted to elucidate differences among pairs of substrates. The statistical design had two factors: "Substrate" (fixed), with five levels (Natural, Cubes, Acropods, Rip-raps and Seawall), and "Site" (random), which was nested in Substrate and had three levels (1, 2, 3). Cochran's test was performed to confirm the homoscedasticity of the biotic data. Metric multidimensional scaling (MDS) based on a Bray-Curtis similarity matrix was performed on the community and trophic structure (see Table 1 with six categories). Additional CLUSTER and SIMPROF tests were run to group the substrates depending on their dissimilarity. Furthermore, a permutational multivariate analysis of variance (PERMANOVA) was also carried out to test whether the community and trophic structure of molluscs varied significantly among substrates and sites. Data were previously square-root transformed and the analysis was carried out on a Bray-Curtis triangular matrix. When a significant source of variance was detected, a pair-wise test between pairs of substrates was also computed to obtain the corresponding p-values. Correlation analyses Correlation tests between the community and the abiotic matrix were made to explore potential relationships between community structure and the abiotic variables. Multicollinearity among abiotic factors was previously tested with a Draftsman's plot based on Pearson correlations, and only one abiotic factor was used when pairs were highly correlated (the Pearson correlation limit was set at 0.80) (see Fig. 5 in Sedano et al. 2019). Variance inflation factors (VIF) were also computed to avoid multicollinearity. Furthermore, a distance-based redundancy analysis (dbRDA) was computed using a fourth-root transformed biotic matrix paired with a normalized abiotic matrix, to give similar weight to variables measured in different units. The dbRDA was portrayed as a two-dimensional representation. A BIOENV routine (Clarke and Ainsworth 1993) was run to detect the set of variables that best fits the response data. This method calculates correlation coefficients between response variables (community matrix) and predictor variables (abiotic matrix) (Balkenhol et al. 2009). The RELATE routine (Clarke and Warwick 2001) was carried out to obtain the correlation coefficient between the community and abiotic dissimilarity matrices. All multivariate and correlation analyses were carried out with PRIMER + PERMANOVA 6 using 9999 permutations (Anderson et al. 2008). Abiotic analyses The results of the fetch index indicate that all the substrates belong to a semi-exposed wave exposure class, whereas the age of origin was different for each substrate (Table 2). Regarding substrate complexity, microscale complexity was higher at the natural substrate and cubes compared with the rest of the substrates, being very low at seawall and acropods. In contrast, macroscale complexity was higher at acropods and cubes than at rip-raps and the natural substrate (Table 2). From a chemical point of view, elemental composition differed between natural and artificial substrates and among all substrates. Silica (SiO2) concentration was higher at the natural substrate compared to the artificial substrates, which were characterized by a higher concentration of calcium oxide (CaO) in all samples (Table 3).
According to the mineralogical composition, the natural substrate was very different from the artificial ones, and differences were also found within the artificial substrates. The natural substrate was composed of high percentages of quartz, while cubes and rip-raps were mostly composed of quartz and CaO in a carbonated form, calcite (CaCO3). Acropods presented high levels of magnesium oxide (MgO), and their mineralogical composition was based on dolomite (CaMg(CO3)2) (Table 3) (full mineralogical composition in the supplementary files of Sedano et al. 2019). Figure 2 represents these values in two dimensions. In addition, crystallinity was positively correlated with silica and negatively correlated with calcium oxide and calcination percentage. Age separated the cubes and natural samples from the rest of the substrate samples, and correlated with microscale complexity. Regarding trophic groups, the natural substrate contained all the groups measured, and scavengers were exclusive to this substrate. This group was formed by a single species, Tritia tingitana, with an abundance of 2 individuals. On the other hand, almost all artificial substrates lacked the detritus feeder group, except for the seawall. Shared species per substrate are shown in Fig. 4. The differences in the percentages of groups between substrates were also remarkable. In terms of percentages, more macro- and micrograzers appeared at artificial substrates compared to the natural substrate. In contrast, predators and detritus feeders appeared in higher percentages at the natural substrate, with the exception of the seawall, where the percentage of predators and detritus feeders was higher than that of the natural substrate. There were also differences among artificial substrates, because the percentage of filter feeders was higher at rip-raps and cubes compared to the natural substrate and acropods, while more macro- and micrograzers appeared at acropods compared to the rest of the substrates. Finally, the seawall had a very heterogeneous trophic structure among samples (Fig. 5, Supplementary Material Table 1). Shannon's diversity and richness varied significantly among substrates (p < 0.001) (Table 4). According to the SNK test, Shannon's diversity was greater on the natural substrate than on the artificial substrates. Among artificial substrates, acropods showed higher Shannon's diversity values than the seawall, cubes and rip-raps. In contrast, cubes, rip-raps and the seawall did not differ significantly in Shannon's diversity. Similarly, the SNK test showed that the natural substrate had higher species richness than the artificial substrates. Among artificial substrates, acropods were richer than cubes, rip-raps (p < 0.05) and the seawall (p < 0.01). Cubes and rip-raps did not differ from each other, but both were richer than the seawall (p < 0.05). Finally, Pielou's evenness did not differ among substrates (Fig. 6). (Table: percentages of volume of the most abundant species of the secondary substrate (species ml/total species ml) at each substrate; percentages < 1% are not included.) The MDS analysis of the community structure showed three groups: 1 = natural, 2 = seawall and 3 = acropods, rip-raps and cubes. These groups were statistically supported by the SIMPROF test (p < 0.05). The natural group appeared homogeneous and distinct from the rest of the substrates. The seawall group was heterogeneous, but it also appeared clearly segregated. The third group, formed by acropods, cubes and rip-raps, was homogeneous but distinct from the natural substrate and seawall groups (Fig. 7).
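The ordination and clustering behind groupings such as the one just reported were produced with PRIMER's MDS, CLUSTER and SIMPROF routines; a minimal Python sketch of the same kind of workflow (Bray-Curtis dissimilarities, metric MDS ordination, group-average clustering) is shown below with made-up data, purely to illustrate the logic.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

rng = np.random.default_rng(7)
# Hypothetical abundance matrix: 15 samples (5 substrates x 3 sites) x 8 taxa
x = rng.poisson(4, size=(15, 8)).astype(float) ** 0.5   # square-root transform
labels = [s for s in ("natural", "cubes", "acropods", "ripraps", "seawall") for _ in range(3)]

d = pdist(x, metric="braycurtis")                        # Bray-Curtis dissimilarities
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(squareform(d))

# Group-average (UPGMA) clustering, analogous in spirit to PRIMER's CLUSTER routine
tree = linkage(d, method="average")
groups = fcluster(tree, t=0.6, criterion="distance")     # cut the dendrogram at 60% dissimilarity

for lab, xy, g in zip(labels, coords, groups):
    print(f"{lab:9s} cluster {g}  MDS coords ({xy[0]: .2f}, {xy[1]: .2f})")
```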
MDS of the trophic structure revealed three groups: groups 1 and 2 each comprised only seawall samples, while group 3 comprised the natural substrate, acropods, cubes, rip-raps and the remaining seawall samples. These groups were statistically supported by the SIMPROF test (p < 0.05). The seawall was the most heterogeneous substrate, diverging into three groups, while acropods, rip-raps, the natural substrate and cubes were more similar to each other. However, the natural substrate was homogeneous and significantly different from the rest at a similarity level of 80% (Fig. 8). The PERMANOVA test indicated significant differences in community structure between substrates and among sites. According to the pair-wise tests, the community at the natural substrate differed from all the others. When comparing among artificial substrates, the seawall differed from rip-raps and acropods (p < 0.01), but not from cubes. Cubes, acropods and rip-raps seemed to have a similar community structure (Table 5). The PERMANOVA test indicated differences in trophic structure among substrates but not among sites. The pair-wise tests revealed that the trophic structure on the natural substrate was different from that on the artificial substrates. Among artificial substrates, the seawall differed from rip-raps (p < 0.05) and acropods (p < 0.01) but not from cubes. Also, acropods, rip-raps and cubes had a similar trophic structure (Table 6). The volume of the secondary substrate, used as a covariate, was significant for the taxonomic PERMANOVA but not for the trophic one. Correlation analyses The dbRDA analysis revealed a relationship between the physico-chemical composition and the molluscan community. The associated mollusc community at the natural substrate was highly correlated with a low CaO concentration and carbonated nature, a high microscale complexity and an older age. On the other hand, the community at artificial substrates was correlated with a high CaO concentration and carbonated nature, low microscale complexity and younger age. Because SiO2 was negatively collinear with CaO and the calcination percentage, these parameters were not included in the analyses. Among artificial substrates, the seawall community was also correlated with a low macroscale complexity, and therefore clustered apart from the more heterogeneous artificial substrates (acropods, cubes and rip-raps) (Fig. 9). The BIOENV analyses showed that the most correlated variables (p < 0.01) were macroscale complexity, microscale complexity and crystallinity (Rho = 0.669). The RELATE test showed a significant correlation between the abiotic and taxonomic matrices (Rho = 0.518; p < 0.01). Discussion The mollusc community structure and diversity differed significantly between artificial and natural substrates. Among the variables studied, substrate complexity (macro- and microscale roughness) and chemical composition appeared to be the main drivers of those differences. In addition, the trophic structure also seemed to differ between artificial and natural substrates. According to our results, habitat complexity, in terms of the relative abundance of microhabitats such as crevices, rockpools and macrophytes, is considered one of the most influential factors for intertidal communities (Warfe et al. 2008). Higher heterogeneity at the scale of centimetres increases recruitment of spores and larvae (Sempere-Valverde et al. 2018) due to a higher number of refuges (Kostylev et al. 2005; Coombes et al. 2010). This can be particularly important for intertidal molluscs, since they can not only find shelter against environmental stress (Meager et al. 2011; Harley and Helmuth 2003; Loke et al.
2015), but also against predation (Warfe et al. 2008) and competition (Huston 1979) by finding crevices that fit their shell size (Loke and Todd 2016), determining community structure and diversity. Moreover, algal turfs that cover the rocky substrate can influence the abundance and biodiversity of the associated fauna, playing biogenic roles similar to sessile animals such as barnacles or annelids, which act as "ecological engineers" providing the secondary substrate on which many species live (Simboura et al. 1995; Bavestrello et al. 2000). The greater abundance of the calcareous alga Ellisolandia elongata on the natural substrate can influence the associated fauna by increasing both habitat volume and habitat complexity (Guerra-García et al. 2012; Veiga et al. 2014; Torres et al. 2015). Moreover, it can decrease desiccation and temperature stress by providing shelter for mobile fauna (Singh et al. 2013; Kefi et al. 2015). In our study, 15 species were exclusive to the natural substrate. The natural substrate had high microscale complexity, but it was also densely covered by the calcareous alga Ellisolandia elongata; together, these features possibly boosted the higher occurrence of taxa. Species sensitive to disturbance, such as the bivalves Irus irus and Parvicardium vroomi and gastropods such as Skeneopsis planorbis, only appeared at the natural substrate. For example, S. planorbis and P. vroomi are known to be well represented along Algeciras Bay in association with the alga Halopteris sp. (Sánchez-Moyano et al. 2000), a highly complex alga (like E. elongata) that can support rich associated communities (Navarro-Barranco et al. 2018). In addition, P. vroomi has shown a preference for the alga Halopteris filiscina (Avila 2003). Similarly, sea snails and bivalves were more abundant at the natural substrate. For example, Cerithiopsis tubercularis is usually restricted to living on algae associated with its food (sponges), as is the case for the branched Ellisolandia spp. and its association with the sponges Halichondria and Hymeniacidon (Fretter and Manly 1977). (Fig. 6: bar graph of mean Pielou's evenness, Shannon's diversity and richness; error bars represent standard deviation; a > b > c.) Given the close association between algae and molluscs that certain species can show, the absence or very low abundance of these species at artificial substrates, where algal cover was very scarce, highlights the importance of calcareous algae in supporting richer mollusc communities on artificial substrates in this area. In contrast, a lower microscale complexity and a scarce algal canopy probably lead to a lower abundance of the less competitive bivalves and sea snails, because fewer microhabitats are available (Underwood and Fairweather 1989; Hills 1996; Strain et al. 2018), as happens at the seawall. Also, the sandstone porosity of natural substrates increases algal settlement (Green et al. 2012), probably generating positive cascading effects. However, a higher complexity at the scale of metres increases recruitment of propagules and the dissipation of wave energy (Vieira et al. 2020) on cubes, acropods and rip-raps, boosting the abundance, richness and diversity of the associated fauna on these substrates in comparison with the seawall. However, these species were mostly limpets, such as Fissurella nubecula, Siphonaria pectinata and Patella caerulea, and chitons.
The increase of these taxa is possibly related to the fact that artificial substrates are a better habitat for sedentary species, such as limpets and chitons, than for strictly vagile gastropods (Rivera-Ingraham et al. 2011; Cha et al. 2013), probably because they suffer lower predation and are more resilient to wave action. In fact, non-native species of Siphonaria and barnacles have been recorded on seawalls at Plymouth and Singapore (Hsiung et al. 2020). Seawalls have a small intertidal area for recruitment but, as in our study, they harbour abundant beds of mussels (Chapman et al. 2005) and barnacles on the secondary substrate, associated with lower biodiversity values in comparison with natural substrates (People 2006; Sedano et al. 2020b). The community at the seawall was very scarce and had the lowest diversity. Chapman (2006) suggested that seawalls lack microhabitats for many species and limit the life strategies of specialized intertidal fauna, such as limpets and chitons; for example, chitons of the genus Ischnochiton are habitat specialists that live underneath boulders (Grayson and Chapman 2004). On the other hand, the pulmonate limpet Siphonaria pectinata was absent at the seawall, in accordance with Moreira et al. (2006), who detected a relationship between living on seawalls and a reduction in the reproductive output of this limpet. However, these results contrast with Hsiung et al. (2020), who recently detected the non-native Siphonaria guanemensis and barnacles on seawalls at Plymouth and Singapore. Chemical composition was also identified as a possible driver of the community, mainly differentiating communities settled on natural versus artificial substrates, since the natural rock was mainly pure quartz (SiO2), while artificial substrates contained a large amount of carbonated minerals, with high levels of calcite (CaCO3). The effect of quartz on natural substrates and of the carbonated mineralogy on artificial substrates could affect the associated fauna. For example, it has been reported that quartzitic radicals inhibit the settlement of first recruits of secondary substrates, such as the hydroid Eudendrium glomeratum (Bavestrello et al. 2000) or the sponge Cliona sp. (Cerrano et al. 2007), while they are neutral for algal settlement. Moreover, in the present study, the associated fauna was more abundant and more diverse at the natural substrate, where more Ellisolandia elongata appeared, possibly due to a reduction in competition with other sessile biota affected by the toxicity of silicon radicals (Cerrano et al. 1999). In addition, facilitation by calcium hydroxides, which are released by concrete artificial substrates to the substrate surface, alkalinizing the pH, also contributes to the settlement of bivalves (Anderson 1996; Soniat and Burton 2005; Burt et al. 2009) and barnacles (Guilbeau et al. 2003) on the sessile substrate, as occurs in the concrete substrates in this study (acropods and seawall). The concrete substrates are also rich in magnesium oxides and other minerals, which could influence the presence of exclusive species. For example, aragonite has been related to improved settlement of boring bivalves (Green et al. 2013), being more soluble in water than calcite (Cornelis and Cornelius 2007), and the acropods, which are composed of this material, showed the presence of the boring species Leiosolenus aristatus.
Therefore, a combination of a carbonated nature and a lower microscale complexity at artificial substrates possibly promotes a different community of molluscs and increases the dominance of the most colonizing species of the secondary substrate (mussels and barnacles) at cubes, acropods and the seawall (Miller and Etter 2008; Underwood and Chapman 2013), all of which disturbs the associated mollusc fauna. (Fig. 9: dbRDA for the taxonomic structure using the abiotic variables as predictors.) Regarding the trophic structure, the natural substrate seems to be more diverse, mostly because it contained all the trophic groups measured, while acropods, rip-raps and cubes lacked suspension feeders and scavengers. Conversely, the seawall had all the measured groups except scavengers. Another interesting difference was the different percentage of groups among substrates, because detritus feeders and predators were more abundant at the natural substrate than at acropods, rip-raps and cubes, where a higher percentage of macro- and micrograzers appeared. The natural substrate held the majority of detritus feeders, a fact that could be related to higher sediment retention by macrophytes (Melero et al. 2017; Casoli et al. 2019). In contrast, the detritus feeder Barleeia unifasciata appeared at the seawall. In fact, littorinid snails have been related to breakwaters with lower crevice availability (Aguilera et al. 2014). In addition, predators were exclusive to the natural substrate, probably because of intraguild predation (Janssen et al. 2007) and because the number of prey has been reported to be lower at structures with less complexity. In contrast, macro- and micrograzers were highly abundant at artificial substrates, mostly as a result of an increase in limpet and chiton species (see the first part of the Discussion). Limpets are known to control the volume of macrophytes, in concert with sea urchins, through their grazing activity (Piazzi et al. 2016), and in this area they have been recorded as omnivorous and very generalist (Burgos-Rubio et al. 2015). This could explain the lower volume of the secondary substrate at cubes, rip-raps and acropods when compared with the natural substrate and the seawall. In addition, the higher volume of secondary substrate on the seawall, a substrate with a low abundance of molluscs, supports the hypothesis that grazers control these sessile populations at acropods, rip-raps and cubes, especially Ellisolandia elongata, as has been observed at rip-raps. The idea that these grazers could be controlling the associated fauna at artificial substrates should be considered. Several authors have pointed out that biodiversity could be driven by the age of the substrate (Perkol-Finkel et al. 2005; Glasby and Connell 1999a, b), and others have reported that temporal heterogeneity among artificial and natural substrates is a relevant factor driving communities (Glasby and Connell 1999a). For example, on artificial substrates, first recruits such as ephemeral algae, sponges and bivalves arrive quickly, in less than a year, but later, as a consequence of low microscale complexity, dominant species outcompete the first colonizers (Burt et al. 2011). Nevertheless, among the artificial substrates studied in the present work, age did not appear as a driver of the community, because the community at cubes (80 y) and rip-raps (20 y) was similar, while the communities at the seawall (20 y) and acropods (20 y) differed, independently of age.
Conclusions and future approach As expected under our first hypothesis, mollusc community structure and diversity differed between artificial and natural substrates. A distortion of the bottom-up interactions between the artificial substrates, with their combination of low microscale complexity and a carbonated, calcium-rich nature, and the mollusc community seems to impact many mollusc species and the common calcareous alga Ellisolandia elongata they inhabit, in comparison with the natural substrate. Moreover, macroscale complexity seems to influence the mollusc community, increasing recruitment of species at acropods, rip-raps and cubes in comparison with the seawall, but mostly benefiting limpets, chitons and bivalves, as well as barnacles on the secondary substrate. As for our second hypothesis, physico-chemical factors seem to alter the trophic community, increasing the percentage of macro- and micrograzers and filter feeders on artificial substrates. In contrast with our third hypothesis, age did not appear as a driver of the mollusc community. We suggest, in agreement with previous studies, that increasing habitat heterogeneity by adding crevices (Archambault and Bourget 1996) and rock pools, and increasing microscale complexity, is fundamental in future designs of artificial substrates. On the other hand, chemical composition should be included as an important topic of research in new models of artificial substrates, possibly depending on the geology and chemistry of the surrounding land (Moschella et al. 2005). In the case of Algeciras, rip-raps and cubes were the most similar to the natural substrate in terms of abiotic features. At the same time, in terms of community and trophic structure, these substrates seem to be the least disturbed. Even though the study was replicated within substrates, the lack of several types of substrates under similar environmental conditions at a larger spatial scale represents a limitation. In this sense, further studies re-analyzing already published data and/or meta-analyses at larger spatial scales will provide valuable insights into the role of artificial substrates in structuring coastal assemblages. Furthermore, more research should clarify how mollusc or macrophyte recruitment and survival are influenced by the chemical and other physical properties of the substrate. Finally, biological interactions between the secondary substrate and the associated fauna should also be explored in future designs.
2022-12-26T14:53:56.540Z
2022-01-08T00:00:00.000
{ "year": 2022, "sha1": "f9be8d8dba731842eb39d8e8aba216ff3163cef2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1007/s11252-021-01192-6", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "f9be8d8dba731842eb39d8e8aba216ff3163cef2", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
237864598
pes2o/s2orc
v3-fos-license
Three new alien Chenopodiaceae species in the flora of Russia 1 Moscow State University, Leninskiye Gory, 1/12, Moscow, 119234, Russian Federation 2 Tomsk State University, Lenina Pr., 36, Tomsk, 634050, Russian Federation 3 Komarov Botanical Institute RAS, Prof. Popova St., 2, St. Petersburg, 197376, Russian Federation 4 Perkalsky Dendrological Park of the Komarov Botanical Institute RAS, Pyatigorsk, Stavropol Territory, 357506, Russian Federation 5 E-mail: suchor@mail.ru; ORCID iD: https://orcid.org/0000-0003-2220-826X 6 ORCID iD: https://orcid.org/0000-0003-4833-5953 7 ORCID iD: https://orcid.org/0000-0001-9954-2181 8 ORCID iD: https://orcid.org/0000-0002-5158-3096 * Corresponding author Introduction The Chenopodiaceae clade is recognized as a part of Amaranthaceae Juss. s. l. following extensive molecular studies (e.g., Cuénoud et al., 2002; Kadereit et al., 2003; Brockington et al., 2009). The members of this clade play an important role in steppe, desert, and coastal vegetation types, or are noxious weeds in temperate regions (e.g., Korovin, 1934; Danin, 1983; Australian vegetation, 1994; Busso, Bonvissuto, 2009). In Russia, the number of Chenopodiaceae species can provisionally be estimated at 180 (133 species in European Russia: Sukhorukov, 2014), but some difficult genera such as Corispermum L. and Chenopodium L. have not yet been properly revised in the Asiatic part. Many articles report new alien Chenopodiaceae in different provinces of Russia. Such translocations occur from natural vegetation types northwards, into habitats undergoing change. In almost all cases, the species originate on the same continent (intracontinental introduction). The Central Asian species Atriplex laevis C. A. Mey., Axyris amaranthoides L., Corispermum declinatum Stephan ex Iljin, Salsola collina Pall., and Teloxys aristata (L.) Moq. are the most frequent invaders in different parts of European Russia (Sukhorukov, 2014), being synanthropic components of the flora. These examples do not include Chenopodiaceae crossing between continents. However, the vegetation in many subtropical regions is already facing the naturalization of some Chenopodiaceae, especially Chenopodioideae from Australia, e.g. Atriplex inflata F. Muell., A. nummularia Lindl. and A. suberecta Verdoorn in North and South Africa and South America (Maire, 1962; Germishuizen, Meyer, 2003; Brignone et al., 2016; APD, 2019), and Dysphania pumilio (R. Br.) Mosyakin et Clemants in Africa, South Europe, and the Americas (Uotila, Raus, Kalheber in Greuter, Raus, 2001 [with references therein]; Iamonico, 2011; USDA, NRCS, 2021; Uotila et al., 2021). The present article provides new information about the recent intercontinental introduction of three Chenopodiaceae species in different parts of Russia. Material and Methods The first author, Alexander Sukhorukov (AS), has observed many native and alien Chenopodiaceae in different parts of the world, especially in European Russia (1997+), the Nepal Himalaya (2005–2015), and African countries (2009+). The morphological evaluation of the three species under consideration was also carried out by AS. Continuous field investigations were carried out in North-West Russia (the Leningrad Region) by Elena Glazkova (1993+), who also paid special attention to alien species during field trips to the North Caucasus (2005) and the Far East (the Kuril Islands) (2019).
Dmitry Shilnikov (DS) carried out field studies in the North Caucasus (1999+), covering the Krasnodar Territory, Stavropol Territory, and the Republics of Adygeya, Karachayevo-Circassian, Kabardino-Balkarian, North Ossetia–Alania and Daghestan (Russia), as well as Azerbaijan. Some other territories of Russia were also visited by DS. The identification of Chenopodiastrum simplex collected by E. Glazkova was also confirmed by a phylogenetic analysis that will be reported in another paper (Uotila et al., in prep.). Morphology. For a detailed description see Basset, Crompton (1982, as Chenopodium gigantospermum). Related to Chenopodiastrum hybridum (L.) S. Fuentes, Uotila et Borsch and C. badachschanicum (Tzvelev) S. Fuentes, Uotila et Borsch. From the first species, C. simplex is easily differentiated by the loose pericarp and rugose seeds (C. hybridum has an adherent pericarp and foveolate seeds). Both C. simplex and C. badachschanicum possess the same carpological characters mentioned above. However, the leaves show some differences: they are ovate in outline, dentate or with attenuate lobes in C. simplex, and usually triangular in C. badachschanicum. To check the relationships within Chenopodiastrum and clarify the taxonomic status of the sample collected in the Leningrad Region, we conducted a more detailed phylogenetic analysis of the genus compared with the previous molecular trees (Fuentes-Bazan et al., 2012). The results presented in Uotila et al. (in prep.) revealed that the sample from Russia falls within one subclade together with another sample of C. simplex from North America. Habitat. Not mentioned on the label, but the specimen seems to have been collected at a ruderal site. General distribution. Australia; as alien in W and C Europe, South America, southern Africa. Morphology. The description of this species is available, e.g., in Wilson (1984) and Sukhorukov (2014). Compared to Dysphania species from Africa and Asia (see detailed investigations by Uotila, 2013; Sukhorukov, 2014; Sukhorukov, Kushunina, 2014; Sukhorukov et al., 2019; Uotila et al., 2021), this species has white (not green) incurved perianth segments and other distinctive reproductive characters. Compared with D. carinata, D. pumilio has slightly keeled (not carinate or cristate), glabrous or slightly hairy perianth segments. Habitat. A small population (10 m2) was found near a private house in a disturbed habitat, together with Polygonum arenastrum Boreau, Poa pratensis L., Plantago major L., and Taraxacum officinale F. H. Wigg. General distribution. Australia; as alien and naturalized in N and S America, N, S and E Africa, South Europe, East Asia. Discussion The introduction, with subsequent naturalization, of alien species and the possible transformation of natural and secondary habitats is one of the biggest problems in the biological sciences (Didham et al., 2005; Traveset, Richardson, 2006; Pyšek et al., 2017; Russell et al., 2017). In the Chenopodiaceae clade (Amaranthaceae s. l.), some species, especially of the Chenopodioideae, are widespread weeds in temperate regions of Eurasia, as well as being aliens on other continents (Wilson, 1984). A large number of Eurasian Chenopodiaceae that subsequently naturalized were discovered in temperate South and North America (e.g., Aellen, 1929; Zappettini, 1953; Clemants, Mosyakin, 2003; Brignone et al., 2016; Jocou et al., 2020; Brignone, Denham, 2021).
Similarly, many Australian and American Chenopodiaceae, especially various Chenopodium taxa (recently considered within the genera Chenopodium s. str., Blitum, Lipandra and Oxybasis: Fuentes-Bazan et al., 2012), were discovered in Europe, especially in the first half of the 20th century, mostly brought with wool and other goods (Aellen, 1929; Uotila, 2001). Here we pay special attention to the distribution of Chenopodiastrum simplex, Dysphania carinata and D. pumilio outside their natural ranges, with further categorization of their possible alien status. Chenopodiastrum simplex is native to North America and seems to be a rare casual alien species in Europe, known in Fennoscandia mostly from the first half of the 20th century (Uotila, 2001). However, examination of herbarium collections in Vienna (W) by the first author (AS) showed the presence of plants with the same leaf and fruit/seed characters in Austria. According to GBIF (2019+), the species is also known from Germany, the Netherlands and Spain, but the corresponding records have not been checked by us. To date, the recent distribution of C. simplex in Europe requires further study. The report of C. simplex in Siberia (Lomonosova, 1992, as Chenopodium hybridum subsp. gigantospermum) is erroneous, and the corresponding specimen belongs to C. badachschanicum (Tzvelev) S. Fuentes, Uotila et Borsch, a species widely distributed in Central Asia, South Siberia, the Himalaya and Tibet (Uotila et al., in prep.). Both species are morphologically similar, and the exact identification may be verified by phylogenetic analysis. Chenopodiastrum simplex seems to be a recent immigrant to the remote Moschny Island, since it was not found during previous expeditions to the islands of the Eastern Gulf of Finland (Glazkova, 2001). Although only one specimen of C. simplex was found in August 2017, and it was not possible to visit this island later, the population may not be extinct, owing to the ability of many annual Chenopodioideae, including Chenopodiastrum, to form a viable soil seed bank for at least several years, in conjunction with carpological characteristics (different thickness of the seed-coat testa resulting in heterospermous seeds) and physiological dormancy (Sukhorukov, Zhang, 2013; Sukhorukov, 2014, with references therein). The introduction of C. simplex may be connected with crop import. Similarly, C. simplex was brought into Fennoscandia as a rare casual species with North American grain and soybeans (Uotila, 2001). The current status of C. simplex in Russia can be described as 'casual alien', as for some other rare exotic Chenopodiaceae in the Leningrad Region, e.g. Beta maritima L. and Atriplex oblongifolia Waldst. et Kit., found earlier on the islands of the Gulf of Finland (Glazkova, 2006; Sukhorukov, Uotila, 2007). The native distribution area of Dysphania carinata covers the easternmost part of Australia, including Queensland, New South Wales and Victoria (Wilson, 1984). As an alien plant, it clearly prefers areas with an arid climate, e.g., Namibia, where it mostly occupies dried-up river beds and can be considered a naturalized species (AS, pers. obs. in 2017–2018). In South Africa, it has been found in many regions (Germishuizen, Meyer, 2003). The records in other semi-arid and arid regions of Africa are still scattered (Brenan, 1954, 1988; APD, 2019; AS, pers. obs. and a specimen [MW] collected in Tanzania in 2020).
Dysphania carinata seems to have been introduced into Europe in the late 19th century with Australian wool (Aellen, 1929, as Chenopodium carinatum), and it has also been reported from parts of West, Central and North Europe (Uotila, 2011) and West Asia (Al-Turki, Ghafoor, 1996, as Chenopodium carinatum). However, this name is frequently misapplied to the closely related D. pumilio in many floras and checklists of Europe, another alien on the continent (Chytry, 1993). For this reason, the naturalization status of D. carinata has not been properly assessed, the records have not been mapped (Jalas, Suominen, 1980), and the species is considered a casual alien (Uotila, 2011; Sukhorukov, 2014). In East Asia, it is reported from Japan (GBIF, 2019+), but at least some records refer to D. pumilio (Flora-Kanagawa Association, 2018; A. Sukhorukov, re-identifications in different herbaria). The reverse misidentification occurred in Ignatov (1988), where a specimen of D. carinata from the Primorye Territory of Russia was erroneously identified as D. pumilio. Thus, we exclude D. pumilio from the flora of the Russian Far East. Based on the scattered records of D. carinata in Eurasia, its naturalization in the Russian Far East seems impossible owing to unsuitable climatic conditions (e.g., high precipitation, low winter temperatures). Dysphania pumilio occurs as a native species mostly in southern Australia (Wilson, 1984), with further spread as an alien into many subtropical regions of southern, central and eastern Africa (Brenan, 1954; Germishuizen, Meyer, 2003; Sukhorukov et al., 2016), Japan (Flora-Kanagawa Association, 2018), and North and South America (Gleason, 1952, as Chenopodium pumilio; Clemants, Mosyakin, 2003; Funez et al. 2017; Brignone, 2020). At present, it is considered an invasive plant in North America (CABI, 2021). Knowledge of the occurrence and status of D. pumilio in Europe has changed dramatically. At the beginning of the 20th century this species had not yet been noticed (Aellen, 1929), owing to confusion with D. carinata (re-identifications of the late P. Aellen and A. Sukhorukov in 2019, G!), but several decades later it was reported as an alien species in West and Central Europe (Aellen, 1961; Jalas, Suominen, 1980). The first collections from Central Europe date from the 1870s (Aellen, 1961). Lhotská, Hejný (1979) reported the presence of a viable seed bank, based on observations in the Czech Republic, as well as different dispersal characteristics facilitating the naturalization of D. pumilio in Central Europe. To date, the alien status of D. pumilio has been changed to 'naturalized alien' in Central, South and East Europe (Ukraine) (Uotila, 2011). Recent detailed investigations confirm its naturalized and invasive status in at least some countries of South Europe, namely Spain (Castroviejo, 1990; Uotila, 2011), Italy (Iamonico, 2011), and Serbia (Bogosavljević, Zlatković, 2017), where it is found in various disturbed habitats. According to observations by the first author (AS), D. pumilio is common in Barcelona (Spain) and Lisbon (Portugal), growing in asphalt cracks and sandy areas. First records of the species as a casual alien are known from Bulgaria (Grozeva, 2007) and Belarus (Dzhus, 2011). Based on the literature data and our own observations, D.
pumilio is able to naturalize in countries with a subtropical or warm temperate climate, and therefore further records of the species in Russia are expected in the North Caucasus and in the arid regions of European Russia. Conclusion The three species reported here have different alien status in their secondary distribution areas, and we assume a different naturalization status for them in Russia. Dysphania pumilio could potentially become a successful invader of ruderal sites in the southern part of European Russia and the North Caucasus. Acknowledgments The research of A. Sukhorukov was carried out in accordance with the scientific programme 121032500084-6 of the Department of Higher Plants (Lomonosov Moscow State University). The study of E. Glazkova and V. Shvanova was carried out within the framework of research project no. АААА-А 19-119031290052-1 (Vascular plants of Eurasia: systematics, flora, plant resources) of the Komarov Botanical Institute, RAS. The field investigations of E. Glazkova on Moschny Island in 2017 were supported by the Complex Expedition "Gogland" of the Russian Geographical Society, and she thanks the organizers and all participants of the expedition. We thank Geoffrey Harper for proofreading the paper and Maria Kushunina for help in the preparation of the final maps. We are also indebted to Pertti Uotila, who drew our attention to an unusual specimen of Chenopodiastrum from Moschny Island. We also thank Marina Legchenko (LE) and Nina Stepanova (MHA) for the scanned images of Chenopodiastrum simplex and Dysphania carinata, respectively.
2021-09-01T15:09:21.591Z
2021-06-25T00:00:00.000
{ "year": 2021, "sha1": "cd352edd41b4db9d99c9820553fe22b52a9b1541", "oa_license": "CCBY", "oa_url": "http://turczaninowia.asu.ru/article/download/9827/8059", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ba1245e97abc553e75b693189547da4f3b54d31c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
119203353
pes2o/s2orc
v3-fos-license
The hidden charm decay of Y(4140) by the rescattering mechanism Assuming that Y(4140) is the second radial excitation of the P-wave charmonium $\chi_{cJ}^{\prime\prime}$ ($J=0, 1$), the hidden charm decay mode of Y(4140) is calculated in terms of the rescattering mechanism. Our numerical results show that the upper limit of the branching ratio of the hidden charm decay $Y(4140)\to J/\psi\phi$ is on the order of $10^{-4}\sim 10^{-3}$ for both of the charmonium assumptions for Y(4140), which disfavors the large hidden charm decay pattern indicated by the CDF experiment. It seems to reveal that the pure second radial excitation of the P-wave charmonium $\chi_{cJ}^{\prime\prime}$ ($J=0, 1$) is problematic. molecular state for Y(4140) and claimed that hybrid charmonium with $J^{PC} = 1^{-+}$ cannot be excluded. In Ref. [5], a molecular $D_s^*\bar{D}_s^*$ current with $J^{PC} = 0^{++}$ was used, and the mass $m_{D_s^*\bar{D}_s^*} = (4.14 \pm 0.09)$ GeV was obtained, which can explain Y(4140) as a $D_s^*\bar{D}_s^*$ molecular state. The author of Ref. [6] also used QCD sum rules to study Y(4140) and came to a conclusion different from that in [5]. As indicated in our work [2], the study of the decay modes of Y(4140) is important for testing the $D_s^*\bar{D}_s^*$ molecular structure of Y(4140). Assuming Y(3940) and Y(4140) to be $D^*\bar{D}^*$ and $D_s^*\bar{D}_s^*$ molecular states, respectively, the authors of Ref. [7] calculated the strong decays $Y(4140) \to J/\psi\phi$ and $Y(3940) \to J/\psi\omega$ and the radiative decays $Y(4140)/Y(3940) \to \gamma\gamma$ within the effective Lagrangian approach. The results for the strong decays of Y(3940) and Y(4140) strongly support the molecular interpretation of Y(3940) and Y(4140). On the other hand, studying the decay modes under other structure assignments for Y(4140) will help us to understand the character of Y(4140) more accurately. Along this line, we calculate the hidden charm decay mode of Y(4140), assuming it to be a conventional charmonium state, by the rescattering mechanism [8,9]. In Refs. [10,11,12], the effective Lagrangians relevant to the present calculation are constructed based on chiral symmetry and heavy quark symmetry, where $D$ and $D^*$ are the pseudoscalar and vector heavy mesons, respectively, and $V$ denotes the nonet vector meson matrix. The coupling constants are taken following Ref. [13], with $f_\pi = 132$ MeV; $g_V$, $\beta$ and $\lambda$ are the parameters in the effective chiral Lagrangian that describes the interaction of the heavy mesons with the low-momentum light vector mesons [12]. Following Ref. [14], we take $g = 0.59$, $\beta = 0.9$ and $\lambda = 0.56$. Based on the vector meson dominance model and using the leptonic width of $J/\psi$, the authors of Ref. [15] determined $g^2_{J/\psi DD}/(4\pi) = 5$. As a consequence of the spin symmetry of the heavy quark effective field theory, $g_{J/\psi DD^*}$ and $g_{J/\psi D^*D^*}$ satisfy the relations $g_{J/\psi DD^*} = g_{J/\psi DD}/m_D$ and $g_{J/\psi D^*D^*} = g_{J/\psi DD}$ [16]. Since the contributions from Fig. 1 (c) and (d) are the same as those corresponding to Fig. 2 (a) and (b), respectively, the total decay amplitude of $Y(4140) \to D_s^+ D_s^- \to J/\psi\phi$ is built from the amplitudes $A_{1-a}$ and $A_{1-b}$, which are formulated with the Cutkosky cutting rule. Similarly, we write the total decay amplitude of $Y(4140) \to D_s^+ D_s^{*-} + D_s^- D_s^{*+} \to J/\psi\phi$, in which a pre-factor 2 arises because the contribution from $D_s^+ D_s^{*-}$ rescattering is the same as that from $D_s^- D_s^{*+}$ rescattering. The absorptive contributions from Fig.
2 (a)-(d) are obtained in the same way with the Cutkosky cutting rule. In the expressions above for the decay amplitudes, form factors $F^2(m_i, q^2) = \left[\left(\Lambda^2 - m_i^2\right)/\left(\Lambda^2 - q^2\right)\right]^2$ compensate for the off-shell effects of the mesons at the vertices, where $\Lambda$ is a phenomenological parameter. As $q^2 \to 0$, the form factor becomes a constant, and if $\Lambda \gg m_i$ it becomes unity. As $q^2 \to \infty$, the form factor approaches zero: as the distance becomes very small, the inner structure of the hadrons manifests itself, and the hadron-level picture of the interaction is no longer valid, so the form factor vanishes and cuts off this end of the momentum range. The parameter $\Lambda$ is defined as $\Lambda(m_i) = m_i + \alpha\Lambda_{QCD}$ [13], where $m_i$ denotes the mass of the exchanged meson, $\Lambda_{QCD} = 220$ MeV, and $\alpha$ is a phenomenological parameter of the rescattering model. By fitting the central value of the total width of Y(4140) (11.7 MeV), we obtain the coupling constant $g_Y$, where we approximate $D_s^+ D_s^-$ and $D_s^+ D_s^{*-} + h.c.$ as the dominant decay mode of Y(4140) when assuming Y(4140) to be $\chi_{c0}''$ and $\chi_{c1}''$, respectively. In this way, we can extract the upper limit of the coupling constant $g_Y$, which in turn gives the upper limit of the hidden charm decay pattern of Y(4140). The value of $\alpha$ in the form factor is usually of order unity [13]; in this work, we take the range $\alpha = 0.8 \sim 2.2$. The dependence of the decay widths on $\alpha$ is shown in Fig. 3 and summarized in Table I. In summary, in this paper we discuss the hidden charm decay of Y(4140), newly observed by the CDF experiment, assuming Y(4140) to be $\chi_{c0}''$ or $\chi_{c1}''$. According to the rescattering mechanism [8,9], the hidden charm decay mode $J/\psi\phi$ occurs via $D_s^+ D_s^-$ and $D_s^+ D_s^{*-} + h.c.$, corresponding to the $\chi_{c0}''$ and $\chi_{c1}''$ assumptions for Y(4140), respectively. Our numerical results indicate that the upper limit of the order of magnitude of the branching ratio of $Y(4140) \to J/\psi\phi$ is $10^{-4}\sim 10^{-3}$ for both assumptions, which is consistent with the rough estimate indicated in Ref. [2]. Y(4140) lies well above the open charm decay threshold, and a charmonium with this mass would decay dominantly into an open charm pair; the branching fraction of its hidden charm decay mode $J/\psi\phi$ is therefore expected to be small. Such a small hidden charm decay disfavors the large hidden charm decay pattern of Y(4140) announced by the CDF experiment [1], which further supports the conclusion that explaining Y(4140) as the pure second radial excitation of the P-wave charmonium $\chi_{cJ}''$ is problematic [2]. We encourage further experimental measurements of the decay modes of Y(4140), which will enhance our understanding of its character.
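As a purely illustrative check of the cutoff dependence discussed above, the following sketch evaluates a monopole-type form factor with $\Lambda(m_i) = m_i + \alpha\Lambda_{QCD}$ over the quoted range of α. The numerical inputs (exchanged-meson mass and transferred momentum) are assumptions chosen only to show orders of magnitude; they are not values taken from the paper.

```python
import numpy as np

LAMBDA_QCD = 0.220  # GeV

def form_factor_sq(m_i, q2, alpha):
    """Monopole-type form factor squared: F^2 = ((L^2 - m_i^2)/(L^2 - q2))^2, with L = m_i + alpha * Lambda_QCD."""
    lam = m_i + alpha * LAMBDA_QCD
    return ((lam**2 - m_i**2) / (lam**2 - q2)) ** 2

# Hypothetical kinematics: a D_s meson exchanged (m_i ~ 1.968 GeV) at spacelike q^2 = -0.5 GeV^2
m_ds, q2 = 1.968, -0.5
for alpha in np.arange(0.8, 2.3, 0.2):
    print(f"alpha = {alpha:.1f}  ->  F^2 = {form_factor_sq(m_ds, q2, alpha):.3f}")
```

The suppression grows quickly as α decreases, which is why the extracted branching ratios in such rescattering estimates are usually quoted as a range over α rather than as a single number.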
2009-08-26T00:48:58.000Z
2009-04-01T00:00:00.000
{ "year": 2009, "sha1": "1fd0b89a2ebfbec296986f65236061eace818ed0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2009.08.049", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "1fd0b89a2ebfbec296986f65236061eace818ed0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
239027221
pes2o/s2orc
v3-fos-license
Epidemiologic Features and Influencing Factors of Norovirus Outbreaks in the City of Wuxi, China from 2014 to 2018 ABSTRACT. The study investigated the genotypic changes and epidemiologic features of norovirus outbreaks, and the factors influencing the attack rate and outbreak duration, in Wuxi from 2014 to 2018. Norovirus outbreaks, monitored through the surveillance systems, were investigated. Norovirus-positive specimens from outbreaks were collected and genotyped using a dual polymerase-capsid genotyping protocol based on a one-step polymerase chain reaction (PCR) amplicon. The genotypes were analyzed with the Norovirus Typing Tool Version 2.0. A total of 74 norovirus outbreaks were reported in Wuxi from 2014 to 2018. Most (93.2%) norovirus outbreaks were caused by GII genotypes. The predominant norovirus genotypes in outbreaks changed from GII.17 (20.3%) in 2014–2015 to GII.P16/GII.2 (40.5%) in 2017–2018. GII.P16/GII.2 outbreaks in the 2017–2018 season were more prevalent than GII.17 outbreaks in the 2014–2015 season (χ2 = 4.741, P = 0.029). Of the outbreaks, 56.7% occurred in primary schools. The re-outbreak rate was 16.2%, and 66.7% of re-outbreaks were caused by norovirus variants different from the previous genotypes. Outbreaks in nonprimary school settings (odds ratio [OR]: 4.007; 95% CI: 1.247–12.876) and those leading to temporary school or institution closure (OR: 20.510; 95% CI: 1.806–232.937) were reported with a higher attack rate. Outbreaks in primary schools (OR: 4.248; 95% CI: 1.211–14.903), re-outbreaks (OR: 6.433; 95% CI: 1.103–37.534) and those with longer report timing (OR: 8.380; 95% CI: 2.259–31.089) showed a significantly longer duration. It is therefore important to strengthen the monitoring of norovirus outbreaks for the emergence of novel strains, along with responsive prevention and control interventions, in adults and the school-age population, especially primary school students and preschool children. INTRODUCTION Norovirus is a genetically diverse virus that belongs to the family Caliciviridae. 1 The viral genome consists of a single positive-strand RNA of 7.7 kb encompassing three open reading frames (ORFs). ORF1 of the norovirus genome encodes six nonstructural proteins, including the RNA-dependent RNA polymerase (RdRp). ORF2 and ORF3 encode the major capsid protein VP1 and the minor capsid protein VP2, respectively. On the basis of the amino acid sequence of VP1, norovirus is classified into seven genogroups (GI-GVII), of which GI, GII, and GIV are responsible for human infections. To date, norovirus has been genetically categorized into 39 genotypes, with at least nine genotypes for GI and 22 genotypes for GII. 2 In the past few years, the circulation of norovirus GII genogroups has increased significantly in both developed countries 3-7 and developing countries. 8-11 Norovirus infection may induce symptoms such as vomiting, diarrhea, abdominal pain, mild fever and nausea in infected cases. 12 As a leading global cause of diarrheal diseases across all ages, norovirus infection was estimated to account for 18% of acute gastroenteritis (AGE) cases and at least 50% of gastroenteritis outbreaks worldwide. 13,14 Due to the low infectious dose and stable survival in the environment, noroviruses are highly contagious, and thus they easily cause large outbreaks with high attack rates. 15 In addition, norovirus re-outbreaks may occur as a result of the host's short-lasting immunity and the long duration of viral shedding. 16 Wuxi is a well-developed city in the southeast of Jiangsu Province, China.
It has five districts and two counties with a total population of 6.5 million people. In 2009, Wuxi launched a laboratory-based gastroenteritis surveillance program. Similar to other studies in China, 17 norovirus was reported as the most common pathogen of AGE illnesses in Wuxi, accounting for 58.9% (data not shown) of AGE outbreaks. Previous studies in other cities have described that the circulating genotypes of norovirus infection in China changed from GII.4 to the new variant GII.P16/GII.2; 9,18-22 however, little information is available on the genotypic trends and epidemiologic features of norovirus outbreaks in Wuxi. In this study, we analyzed the genotype shift and explored the influencing factors associated with the attack rate and duration of norovirus outbreaks in Wuxi. MATERIALS AND METHODS Surveillance of gastroenteritis outbreaks. Outbreaks of norovirus gastroenteritis in Wuxi were monitored through two surveillance systems. The first system is the Emergent Public Health Event Information Management System, in which an acute gastroenteritis outbreak is defined as ≥ 20 cases with symptoms including vomiting and/or diarrhea within 1 week. The other system is the norovirus outbreak surveillance system in Jiangsu province, in which an outbreak is defined as 5-19 cases with symptoms of vomiting and/or diarrhea within 3 days. Each outbreak must be reported to Wuxi Municipal CDC (Wuxi CDC). Outbreak reports included data and information such as year, month, type of exposure setting, number of cases, number of exposed persons, sanitation status, control measures, onset dates of the first and last cases, and the reporting date of the outbreak. On the basis of outbreak scale, the control measures were categorized into case isolation, unit closure, and temporary school or institution closure. Case isolation meant that cases could not go to school or work until their recovery. Case isolation was conducted in work institutions or school facilities with ≤ 25% infections in the same class or work department. Unit closure meant that the classroom or work department was temporarily closed. Unit closure was conducted in work institutions or school facilities with > 25% infections in the same classroom or work department. Temporary school or institution closure meant that the school or work institution was temporarily closed for the outbreak. Temporary school or institution closure was conducted in situations with ≥ 25% infections in the whole school or work institution. Sample collection, norovirus detection, and genotyping. Feces or vomitus specimens from outbreaks were tested for noroviruses by Wuxi CDC. RNA extraction and norovirus detection were conducted as previously described. 23 For each norovirus-positive outbreak, one norovirus-positive sample was selected for genotyping using a dual polymerase-capsid genotyping protocol based on a one-step PCR amplicon obtained with primer pair Mon431/G2SKR for GII strains and Mon432/G1SKR for GI strains (Table 1). The genotypes were determined by Norovirus Typing Tool Version 2.0 (https://www.rivm.nl/mpf/typingtool/norovirus). Re-outbreaks screening criteria. The screening criteria for re-outbreaks were: 1) the name of the exposure setting was the same; and 2) > 14 days between the two reporting dates of the outbreaks. Statistical analysis. Descriptive statistics were used to analyze the epidemiological characteristics of outbreaks.
Categorical variables were presented as numbers and proportions, and continuous variables as medians and interquartile ranges (IQR). The χ2 test was applied to compare the proportions of GII.P16/GII.2 and GII.17 in their respective epidemic seasons. The Kruskal-Wallis H test and Mann-Whitney U test were conducted for stratified comparisons of attack rate and outbreak duration. Influencing factors for norovirus attack rate and outbreak duration were assessed using logistic regression models. Variables significant in univariate logistic regression were included in the multivariate regression model. All analyses were performed with SPSS version 16.0 (SPSS, Chicago, IL). P values were derived from two-tailed tests, and significance was assumed at P < 0.05. Ethical considerations. This investigation was performed in response to a public health emergency, and based on the Regulation on the Urgent Handling of Public Health Emergencies (http://www.gov.cn/zwgk/2005-05/20/content_145.htm), formal ethical approval was not required. However, verbal consent was obtained from all participants before the interview and sampling. Parents or guardians of participants under 15 years granted consent on their behalf and accompanied them during the interview. Consent was recorded on the questionnaire using the participant's and/or guardian's signature. All participants were informed of their rights according to the law outlined above. We can confirm that all data, including all questionnaires and samples, were gathered in accordance with the "Guideline for norovirus outbreak reports and investigation," issued by the Health Department of Jiangsu Province, China. No additional data were acquired by the authors, and no participant identifying information was associated with the reported data. RESULTS The changes in circulating genotypes. From January 2014 to December 2018, a total of 74 norovirus outbreaks were reported in Wuxi. All outbreaks were genotyped into 17 variants, including four GI variants and 13 GII variants. The GII variants were responsible for 93.2% (N = 69) of the norovirus outbreaks. Variants GII.P16/GII.2 and GII.17 were the two major circulating genotypes during the study period, accounting for 40.5% and 20.3% of the outbreaks, respectively. Other genotypes detected in the study can be seen in Figure 1. The epidemiologic features and characteristics of outbreak-associated indicators. All 74 outbreaks together involved 2,564 illnesses and 114,099 exposed people. The overall median attack rate was 2.1% (IQR: 1.1-4.1%) and the overall median outbreak duration was 5.0 days (IQR: 3.0-7.0). The reported outbreaks and outbreak-associated indicators in different groups are listed in Table 2. The majority (70.3%, N = 52) of outbreaks were reported in spring and autumn. About 85.1% (N = 63) of outbreaks occurred in urban areas. Most (73.0%, N = 54) of the outbreaks occurred in settings with good sanitation status. However, there was no significant difference in either attack rate or outbreak duration among seasons, locations, genotypes, or sanitation status (P > 0.05). Outbreaks were most frequently reported in primary schools (56.7%, N = 42), followed by preschool facilities (20.3%, N = 15), secondary schools (17.6%, N = 13) and settings involving adults and elderly people (5.4%, N = 4). The attack rate and outbreak duration varied by exposure setting (P < 0.05). In contrast to the order of outbreak frequency, the attack rate was highest in settings involving adults and elderly people.
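The comparisons and regression screening reported in this section were carried out in SPSS, as stated in the Methods; the following sketch is only an illustration of the same workflow (outbreak-level attack rates, a median split, univariate screening, then a multivariate logistic model) using hypothetical variable names and simulated data, not the study's data.

```python
# Illustrative sketch only: the study used SPSS 16.0; all variables and data below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "setting": rng.choice(["primary", "preschool", "secondary", "adult"], size=n),
    "closure": rng.integers(0, 2, size=n),          # temporary school/institution closure (0/1)
    "exposed": rng.integers(300, 1500, size=n),
})
# Simulated case counts, with a modest extra risk when a closure was needed
p_attack = np.clip(0.01 + 0.01 * df["closure"] + rng.normal(0, 0.01, n), 0.002, 0.2)
df["cases"] = rng.binomial(df["exposed"].to_numpy(), p_attack)
df["attack_rate"] = df["cases"] / df["exposed"]
df["high_attack"] = (df["attack_rate"] > df["attack_rate"].median()).astype(int)

# Univariate screening of each candidate factor, then a multivariate logistic model
for term in ("C(setting)", "closure"):
    fit = smf.logit(f"high_attack ~ {term}", data=df).fit(disp=False)
    print(term, "pseudo R2 =", round(fit.prsquared, 3))
multi = smf.logit("high_attack ~ C(setting) + closure", data=df).fit(disp=False)
print(np.exp(multi.params).round(2))   # odds ratios for each retained term
```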
The norovirus re-outbreaks. Of the 62 exposure settings reporting 74 outbreaks in all, eight settings, namely preschool facilities and schools, experienced 12 re-outbreaks in total. The re-outbreak rate was 16.2% (12/74). Six settings reported norovirus outbreaks twice (8.1%), one setting three times (2.7%), and one setting five times (5.4%). In settings that reported outbreaks twice, the time interval between the two outbreaks was 8.5 (IQR: 3.3-29.3) months. In the setting that reported outbreaks three times, the time intervals between outbreaks were 9 months and 32 months. In the setting that reported outbreaks five times, the time intervals between outbreaks were 12 months, 6 months, 12 months, and 23 months, respectively. About 75% of re-outbreaks occurred within 12 months. The influencing factors for norovirus attack rate and outbreak duration. The univariate (Table 4) and multivariate logistic regression models indicated that the type of exposure setting and the control measures had a significant influence on norovirus attack rate, with nonprimary school settings and temporary school or institution closure being risk factors for a relatively higher attack rate (Table 5). In addition, the logistic regression model demonstrated that the type of exposure setting, re-outbreaks and report timing were influencing factors for norovirus outbreak duration, with outbreaks in primary schools, re-outbreaks and longer report timing (4-12 days) showing a relatively longer duration (Table 6). DISCUSSION Norovirus is highly genetically diverse and constantly evolving, resulting in the emergence of new genotypes every 2-4 years. 24,25 From January 2014 to December 2018, a total of 74 outbreaks were reported in Wuxi, and the noroviruses involved belonged to 17 genotypes. Consistent with other findings in China, 9 genogroup GII comprised the most commonly detected strains in Wuxi, and the prevalence of the various genotypes in this study was in accordance with the general evolution of norovirus GII in the rest of the world. 26-28 In our study, the genotypic changes of norovirus outbreaks were characterized into three distinct phases. The first phase showed a prevalence of GII.17 from October 2014 to June 2015. The GII.17 variant was first reported in September, 29 and this variant quickly became predominant on other continents, raising global concern about its pandemic potential. 26,30,31 Before the emergence of GII.17, GII.4 was the major circulating genotype in norovirus outbreaks worldwide. 4 32 Based on these earlier findings, we could speculate that the circulation of GII.P16/GII.2 in Wuxi was introduced from other, neighboring cities. Norovirus outbreaks in China generally peak in winter and early spring. 17 Similarly, a remarkable increase in norovirus outbreaks was observed from autumn to early spring in our study, possibly related to the low temperature and high humidity. Schools, including primary and secondary schools, and preschool facilities accounted for 94.6% of all the outbreaks in Wuxi. A similar percentage of norovirus outbreaks among schools and childcare facilities was also reported in a national-level study in China. 17 Schools, as well as preschool facilities, seemed to be more densely populated than those in other countries, with a large capacity of nearly 50 students in each classroom. The high contact rate among students and their insufficient immunity to the virus could increase the possibility of morbidity, especially among children with poor personal hygiene. 35
Our descriptive analysis indicated that the outbreaks in school settings and preschool facilities were associated with relatively lower attack rates than those in institutions involving adults and elderly people. This result may be attributed to the high reporting frequency of norovirus illnesses in schools and preschool facilities, which strengthened rapid emergency response and disease control. Additionally, routine morning checks and health screening of children attending kindergartens and schools were conducted to exclude cases with fever, vomiting, or diarrhea. This program has been helpful in detecting suspected norovirus illnesses and reducing potential infection and transmission. Regarding disease control measures, interventions, including case isolation, unit closure, and temporary school or institution closure, were introduced according to outbreak severity. The outbreaks managed with temporary school or institution closure were, by definition, those with large numbers of norovirus-associated illnesses, and thus they were reported with a significantly greater attack rate. Furthermore, our results showed a shorter duration among outbreaks reported within 3 days, compared with outbreaks reported 4 days or later, underlining the benefit of prompt outbreak reporting to the local CDC. Prolonged viral shedding from both symptomatic and asymptomatic infections, combined with limited long-term immunity, greatly contributes to secondary virus spread and norovirus re-outbreaks. 36 It has been observed that norovirus re-outbreaks with various genotypes occur frequently in children 37 and that re-outbreaks can occur within a year in both children and adults. 38,39 In our analysis, the majority of re-outbreaks were also reported in primary schools and preschool facilities, that is, among young children. Also, a high percentage (75%) of re-outbreaks occurred within 12 months. Another significant finding was that re-outbreaks with different norovirus genotypes were common. This could be because direct immune protection is limited to the same genotype. The influencing factors on norovirus outbreak attack rate and duration have not yet been well researched, because published studies have mostly analyzed a single reported outbreak rather than a cluster of outbreaks in one area. In this study, multivariate analysis indicated that outbreaks occurring in nonprimary school settings and those with temporary school or institution closure experienced a statistically higher attack rate. Moreover, our results demonstrated that norovirus outbreak duration was significantly associated with the type of exposure setting, re-outbreaks and report timing, with outbreaks in primary schools, re-outbreaks and longer report timing (4-12 days) showing a relatively longer duration. Therefore, the monitoring of norovirus outbreaks, along with responsive prevention and control interventions, should be strengthened in adults and the school-age population, particularly among primary school students and preschool children. Targeted health education should be conducted in primary schools to develop strong preventive awareness of norovirus infection. There were several limitations in this study. Firstly, the epidemiological information collected by local CDCs was limited to some extent. The route of transmission, as well as gender and age information, is not required in outbreak reports, although it may be useful for identifying risk factors and appropriate control measures.
In addition, the results were inevitably subject to inaccuracies in the outbreak data. The attack rates and outbreak duration were probably underestimated because of underreporting. Secondly, asymptomatically infected individuals, as well as close contacts such as family members and siblings, were excluded from our study, which could have affected the observed epidemiologic features of norovirus. Thirdly, multilevel modeling and variability analysis at both the individual and group level were not implemented in our study.
CONCLUSION
Although a diversity of norovirus genotypes was observed in Wuxi, the reported norovirus outbreaks showed an alternating predominance of GII.17 in 2014-2015 and GII.P16/GII.2 in 2017-2018. The majority of re-outbreaks were reported in primary schools and preschool facilities. Compared with the primary outbreaks, re-outbreaks were frequently caused by different norovirus genotypes. Outbreaks in non-primary-school settings and those with temporary school or institution closure had a higher attack rate, and outbreak duration was relatively longer for outbreaks in primary schools, re-outbreaks, and outbreaks with longer report timing (4-12 days). It is critical that the monitoring of norovirus outbreaks, as well as responsive prevention and control interventions, be strengthened in adult and school-age populations, especially among primary school students and preschool children.
Pharmacokinetic Evaluation of Metabolic Drug Interactions between Repaglinide and Celecoxib by a Bioanalytical HPLC Method for Their Simultaneous Determination with Fluorescence Detection Since diabetes mellitus and osteoarthritis are highly prevalent diseases, combinations of antidiabetic agents like repaglinide (REP) and non-steroidal anti-inflammatory drugs (NSAID) like celecoxib (CEL) could be commonly used in clinical practice. In this study, a simple and sensitive bioanalytical HPLC method combined with fluorescence detector (HPLC-FL) was developed and fully validated for simultaneous quantification of REP and CEL. A simple protein precipitation procedure and reversed C18 column with an isocratic mobile phase (mixture of ACN and pH 6.0 phosphate buffer) were employed for sample preparation and chromatographic separation. The fluorescence detector was set at a single excitation/emission wavelength pair of 240 nm/380 nm. The linearity (10–2000 ng/mL), accuracy, precision, extraction recovery, matrix effect, and stability for this method were validated as per the current FDA guidance. The bioanalytical method was applied to study pharmacokinetic interactions between REP and CEL in vivo, successfully showing that concurrent administration with oral REP significantly altered the pharmacokinetics of oral CEL. Furthermore, an in vitro metabolism and protein binding study using human materials highlighted the possibility of metabolism-based interactions between CEL and REP in clinical settings. Introduction Arthritis and diabetes mellitus are highly prevalent diseases with a total of over 350 million patients worldwide [1,2].The most common types of arthritis and diabetes mellitus are osteoarthritis (OA) and type 2 diabetes mellitus (T2DM), respectively [3].OA affects 14% of adults aged ≥25 years, and 34% of these patients are aged >65 years [4]; similarly, T2DM affects 12% of adults aged ≥20 years, and 26% of these are aged >65 years [5].A recent survey estimated that the prevalence of OA was higher in individuals with T2DM than in those without T2DM [6].Thus, T2DM is generally recognized as a comorbidity of arthritis [7], while some previous studies have focused on diabetes as a risk factor of arthritis [8,9].Anyway, it is evident that T2DM is closely associated with an increased incidence and prevalence of OA, though the reasons remain unclear. 
Repaglinide (REP), as shown in Figure 1, is a short-acting oral antidiabetic drug belonging to the class of meglitinides and is used to lower postprandial blood glucose levels in T2DM patients [10]. It stimulates insulin release from the pancreas, depending on the residual function of β-cells in the pancreatic islets [11]. REP is eliminated primarily by CYP2C8- and CYP3A4-mediated oxidative metabolism [12]. Systemic exposure to REP has been reported to be altered by co-administration of trimethoprim (inhibitor of CYP2C8) [13], itraconazole (inhibitor of CYP3A4) [14], or rifampicin (inducer of CYP3A4) [11]. Celecoxib (CEL), as shown in Figure 1, is a cyclooxygenase-2 (COX-2) selective non-steroidal anti-inflammatory drug (NSAID) and is used to treat pain and inflammation associated with OA and rheumatoid arthritis (RA) [15]. Because gastrointestinal mucosal integrity is compromised by COX-1 inhibition, traditional nonselective NSAIDs such as aspirin, ibuprofen, and indomethacin that inhibit both COX-1 and COX-2 may cause serious side effects in the gastrointestinal tract [16]. Thus, CEL exhibits distinctly reduced gastrointestinal toxicity as compared to conventional NSAIDs, thereby becoming a blockbuster drug for treating OA and RA [17]. CEL is eliminated primarily by extensive metabolism through methyl hydroxylation to form hydroxycelecoxib, which is catalyzed by CYP2C9 and CYP3A4 [18].
Because T2DM frequently co-exists with OA, there is a possibility of concurrent administration of REP and CEL. Hence, a bioanalytical method for simultaneous determination of REP and CEL could be useful and efficient for further pharmaceutical development and therapeutic optimization. To date, several bioanalytical methods have been developed and validated for quantitative determination of REP or CEL individually using HPLC with UV/Vis detection [19][20][21][22][23] or using liquid chromatography with tandem mass spectrometry (LC-MS/MS) systems [24][25][26]. However, these methods are associated with a few limitations, such as insufficient sensitivity, relatively large sample volume, and/or time-consuming liquid-liquid extraction procedures with volatile solvents that are potentially hazardous to health. Moreover, LC-MS/MS methods require relatively complex and/or expensive instrumentation, which may not be affordable for small-sized laboratories and companies in resource-limited settings. To the best of our knowledge, there have been no reported methods for simultaneous quantification of REP and CEL in biological samples using HPLC coupled with a fluorescence detector (HPLC-FL). Furthermore, a previous in vitro study reported that CEL inhibited REP metabolism in pooled human liver microsomes (HLM) with a Ki of 3.1 µM [27]. This suggests the possibility of a pharmacokinetic drug interaction between REP and CEL, but no information is currently available regarding this issue. Therefore, further investigation of the pharmacokinetic drug interaction between REP and CEL is necessary to prevent adverse effects in the use of these drugs.
In the current study, a sensitive and simple HPLC-FL method was developed and fully validated for simultaneous quantification of REP and CEL in rat plasma. The linearity, sensitivity, precision, accuracy, recovery, matrix effect, and stability of this HPLC-FL method were determined. Next, the potential for pharmacokinetic drug interactions between REP and CEL was investigated in vivo using Sprague-Dawley rats and in vitro using rat liver microsomes (RLM) and HLM.
Animals
Male Sprague-Dawley rats (nine-week-old; approximately 300 g) were purchased from Samtako Bio Korea Co.
(Gyeonggi-do, Korea).They were kept in a clean room of the Laboratory Animal Center of Pusan National University (Busan, Korea) at a relative humidity of 50 ± 5% and temperature of 20-23 • C with 12 h dark (19:00-07:00) and light (07:00-19:00) cycles.They were housed in metabolic cages (Tecniplast USA Inc., West Chester, PA, USA) with tap water and standard chow diet (Agribrands Purina Canada Inc., Levis, QC, Canada) provided ad libitum.The present animal study protocols were approved by the Pusan National University-Institutional Animal Care and Use Committee (PNU-IACUC, Busan, South Korea) for ethical procedures and scientific care (approval number: PNU-2018-1848; approval date: 01/05/2018). Calibration Standards and Quality Control Samples Stock solutions of REP, CEL, and IS (1000 µg/mL in DMSO) were prepared.The stock solutions of the mixture of REP and CEL were diluted with mobile phase for the preparation of working standard solutions with concentrations ranging from 1 to 200 µg/mL.The working solution of IS (final concentration of 5 µg/mL in ACN) was prepared by diluting the stock solution of IS with ACN.Calibration standard samples were prepared by spiking blank rat plasma with each working solution, yielding final plasma concentrations of 10, 20, 50, 100, 200, 500, 1000, and 2000 ng/mL.Quality control (QC) samples were prepared from separate stocks of REP and CEL in an identical manner to the preparation of calibration standards.The concentration levels of QC samples were 10 (lower limit of quantification; LLOQ), 30 (low; LQC), 120 (middle; MQC), and 1200 ng/mL (high; HQC). Sample Preparation For deproteinization, 400 µL ice-cold ACN containing IS (50 ng/mL) was added to 120 µL plasma samples.The resultant mixture was vortex-mixed for 5 min, followed by centrifugation at 15,000× g for 5 min.Next, 400 µL supernatant was transferred to another microtube and dried by N 2 gas stream.For reconstitution, 60 µL mobile phase was added to the resultant residue, and after sufficient vortex-mixing, 20 µL finally prepared sample solution was injected to the HPLC system. Method Validation This new bioanalytical method for simultaneous determination of REP and CEL was validated based on the US-FDA guidelines [28].The selectivity was assessed based on the comparison among chromatograms of REP, CEL, and IS in blank rat plasma; blank rat plasma spiked with REP, CEL, and IS; and rat plasma sample obtained from a pharmacokinetic study in rats.The presence of endogenous interferences at the acquisition windows of the analytes was examined. The linearity was determined by the addition of increasing amounts of REP and CEL to a blank biological matrix.Calibration curves (n = 5) were constructed by plotting the peak area ratios of analytes to IS (y-axis) versus the concentration ratios of REP and CEL (10-2000 ng/mL) to IS (50 ng/mL) in plasma (x-axis), and linear regression analysis was conducted using the least squares method with a weighting factor of 1/x (x = concentration).The sensitivity was assessed based on LLOQ, defined as the lowest quantifiable concentration levels of REP and CEL in calibration curves (signal-to-noise [S/N] ratio of more than 5).REP and CEL peaks at the LLOQ level should be identifiable, reproducible, and discrete with acceptable accuracy (within 80-120%) and precision (<20%). 
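As a hedged illustration of the 1/x-weighted least-squares calibration fit described above, the sketch below fits a straight line to analyte/IS peak-area ratios versus nominal concentration and back-calculates an unknown. The response values are invented and only demonstrate the weighting trick, not the study's calibration data.

```python
# Hypothetical 1/x-weighted linear calibration fit (analyte/IS peak-area ratio
# versus nominal concentration); the response values are invented.
import numpy as np

conc  = np.array([10, 20, 50, 100, 200, 500, 1000, 2000], dtype=float)   # ng/mL
ratio = np.array([0.011, 0.021, 0.052, 0.10, 0.21, 0.51, 1.02, 2.05])    # peak-area ratio

# np.polyfit minimizes sum(w_i**2 * (y_i - fit_i)**2); for 1/x weighting of the
# squared residuals, pass w = 1/sqrt(x).
slope, intercept = np.polyfit(conc, ratio, deg=1, w=1.0 / np.sqrt(conc))
print(f"ratio = {slope:.5f} * conc + {intercept:.5f}")

# Back-calculate an unknown sample concentration from its measured ratio.
measured_ratio = 0.35
print("estimated conc (ng/mL):", (measured_ratio - intercept) / slope)
```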
The precision and accuracy were estimated by comparison between the measured concentrations and their respective nominal concentrations in the QC samples, which were prepared as five separate sets on one day (intra-day) and five different days (inter-day).Precision was expressed as a coefficient of variation (CV) of the mean values of the measured concentration.Accuracy was expressed as a relative error between the measured and nominal concentrations.They were determined with plasma samples spiked with REP and CEL at the four different QC levels in five replicates. The extraction recovery and matrix effect were determined by comparison among the analytical signals (peak area) obtained from (A) the extracted sample, (B) the post-extracted spiked sample (extracts of blanks spiked with the analyte post extraction), and (C) non-extracted neat sample (diluted stock solution).The recovery was calculated as 'A/B × 100', and the matrix effect was calculated as 'B/C × 100'.Five replicates were assessed at the four different QC levels. The stability was assessed by comparison of the analytical signals (peak area) obtained from plasma samples exposed to various handling and storage conditions with those obtained from plasma samples.Bench-top stability was determined by exposing spiked plasma samples to room temperature for 180 min.Freeze-thaw stability was determined by exposing spiked plasma samples to freeze-thaw cycles (from −20 • C to room temperature) three times on consecutive days.Long-term stability was determined by storing spiked plasma samples at −20 • C for 30 days.Autosampler stability (post-preparative stability) was determined by exposing extracted plasma samples to 25 • C for 1 day in an autosampler.The stability was determined at the four different QC levels. In Vivo Pharmacokinetic Study in Rats Rats were fasted for 12 h prior to the pharmacokinetic experiment and then anesthetized with zoletil (intramuscular, 20 mg/kg).The femoral artery and vein of the rats were cannulated with a polyethylene tube (BD Medical; Franklin Lakes, NJ, USA) at 240 min prior to drug dosing.A single oral dose of REP alone (0.4 mg/kg), CEL alone (2 mg/kg), or REP and CEL at the same doses was administered to the rats (n = 5 per group).Drugs were dissolved in a vehicle that is a clear mixture of DMSO, ethanol, polyethylene glycol 400, and saline at a ratio of 1:5:30:64 (v/v/v/v).Approximately 300 µL aliquots of blood were collected in heparin pre-treated microcentrifuge tubes via the femoral artery at 0, 10, 20, 30, 45, 60, 90, 120, 180, 240, 360, and 480 min after the oral dosing.Following centrifugation of blood samples at 2000× g at 4 • C for 10 min, 120 µL aliquots of plasma were stored at −80 • C until HPLC analysis. 
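Looking back at the validation formulas defined earlier in this section (recovery = A/B × 100, matrix effect = B/C × 100, precision expressed as CV), the following is a minimal numerical sketch of that arithmetic. All peak areas and QC concentrations below are invented placeholders, not the study's measurements.

```python
# Hypothetical worked example of the validation arithmetic defined above.
import numpy as np

A = np.array([1020., 1015., 998., 1030., 1008.])   # peak areas, extracted QC samples
B = np.array([1001., 995., 990., 1012., 1000.])    # peak areas, post-extraction spiked samples
C = np.array([1050., 1042., 1038., 1060., 1047.])  # peak areas, neat (non-extracted) solutions

recovery      = A.mean() / B.mean() * 100   # 'A/B x 100'
matrix_effect = B.mean() / C.mean() * 100   # 'B/C x 100'

measured = np.array([31.2, 29.5, 30.8, 28.9, 30.1])  # ng/mL, LQC replicates (invented)
nominal  = 30.0
cv       = measured.std(ddof=1) / measured.mean() * 100   # precision as CV (%)
accuracy = measured.mean() / nominal * 100                # accuracy as % of nominal

print(f"recovery {recovery:.1f}%, matrix effect {matrix_effect:.1f}%")
print(f"CV {cv:.2f}%, accuracy {accuracy:.1f}%")
```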
In Vitro Metabolism and Protein Binding Study An in vitro microsomal metabolism study was conducted using Corning ® Gentest TM pooled male RLM (from Sprague-Dawley rats) and HLM (from more than 5 male donors) as previously described [29,30], with slight modifications and in accordance with the manufacturer's protocol.To assess the possibility of metabolic interaction between REP and CEL, a microsomal reaction mixture was prepared as follows (total volume: 0.2 mL): RLM or HLM (0.5 mg/mL), 50 mM phosphate buffer, 1 mM NADPH, 10 mM MgCl 2 , 1 µM substrate, and various concentrations of inhibitor (1-100 µM).The disappearance rates of REP (as a substrate) in the absence or presence of CEL (as an inhibitor), and vice versa, were determined.At 0 and 15 min (REP) or 0 and 45 min (CEL) after starting the metabolic reaction, a 50 µL aliquot of microsomal incubation mixture was sampled and transferred into a clean 1.5 mL microcentrifuge tube containing 100 µL cold ACN containing IS (50 ng/mL) to stop the metabolic reaction.After vortex mixing and centrifugation at 15,000× g for 10 min, a 100 µL aliquot of the supernatant was stored at −80 • C until HPLC analysis. The fractions of unbound REP and CEL (f u ) in rat and human plasma were measured using the rapid equilibrium dialysis (RED) device (Thermo Fisher Scientific, Inc.) as previously described [31].The plasma was spiked with REP alone, CEL alone, and both drugs, yielding final concentration of 10 µM.A 0.2-mL spiked plasma was placed into the 'sample' chamber, and a 0.35 mL isotonic phosphate buffered saline was placed into the adjacent 'buffer' chamber.The fraction unbound was calculated as the ratio of the drug concentrations in the 'buffer' compartment to those in the 'sample' compartment. Data Analysis The IC 50 of CEL for the inhibition of the metabolism of REP was determined by GraphPad Prism 5.01 (GraphPad Software, San Diego, CA, USA) according to the following Hill equation: Analytical data were acquired and processed using the LC Solution Software (Version 1.25; Shimadzu Co.).Non-compartmental analysis was conducted to estimate pharmacokinetic parameters such as total area under plasma concentration versus time curve from time zero to infinity (AUC inf ), total area under plasma concentration versus time curve from time zero to time of last sampling (AUC last ), and terminal half-life (t 1/2 ) using the NCA200 and 201 models of WinNonlin software (Version 3.1; Certara USA Inc., Princeton, NJ, USA) [32].Peak plasma concentration (C max ) and time to reach C max (T max ) were directly read from the observed data. Statistical Analysis A p-value below 0.05 was considered statistically significant by using t-test for comparison between two unpaired means or by using analysis of variance (ANOVA) with post-hoc Tukey's honestly significant difference test for comparison among three unpaired means.Unless indicated otherwise, all data except T max were expressed as mean ± standard deviation (median (ranges) for T max ).All data numbers were rounded to three significant figures. Method Development In this study, various chromatographic conditions were evaluated for sufficient sensitivity and good separation of analytes from endogenous substances of biological matrix within an appropriate run time.Several experiments were performed to choose suitable stationary phase, mobile phase, sample preparation procedure, and IS. 
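The Hill equation referenced in the Data Analysis subsection above did not survive extraction of the text. As a hedged stand-in (an assumption about the model form, not necessarily the exact GraphPad Prism equation the authors used), the sketch below fits a standard two-parameter Hill inhibition model, remaining activity = 100 / (1 + (C/IC50)^h), to invented inhibition data.

```python
# Hypothetical IC50 estimation with a standard Hill inhibition model; the
# inhibitor concentrations and % remaining activity values are invented.
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ic50, h):
    """Percent of control activity remaining at inhibitor concentration conc."""
    return 100.0 / (1.0 + (conc / ic50) ** h)

conc_um   = np.array([1, 3, 10, 30, 100], dtype=float)   # µM inhibitor
remaining = np.array([95., 85., 62., 35., 12.])           # % of control activity

(ic50, h), _ = curve_fit(hill_inhibition, conc_um, remaining, p0=[10.0, 1.0])
print(f"IC50 approx {ic50:.1f} µM, Hill slope approx {h:.2f}")
```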
The composition of mobile phase was optimized with different buffer types, such as citrate buffer (pH 3-5) and phosphate buffer (pH 6-7), and various ACN contents.Changes in the pH of mobile phase considerably influenced the peak retention times of REP (acidic compound) and endogenous interferences; however, it exerted little influence on those of CEL and ketoconazole that are neutral compounds.As a result, the mobile phase of pH 6.0 containing 46.4% ACN achieved good separation from endogenous interference in plasma with acceptable peak resolution.Thus, we settled for this mobile phase in developing the present HPLC-FL method. Sample preparation was performed using a solvent precipitation-reconstitution method which is an efficient and economical sample pretreatment procedure compared with a solid phase or liquid-liquid extraction method.For optimization, several organic solvents, such as acetone, methanol, trichloroacetic acid, ACN, and their mixtures, were evaluated.Among them, ACN yielded the lowest matrix effect and highest recovery for analytes following centrifugation at 15,000× g for a relatively short precipitation time of 5 min. Several fluorescent compounds, such as diclofenac, diflunisal, doxorubicin, metoprolol, naproxen, propranolol, and quinidine, were tested as a potential IS.However, these were unsuitable as IS, due to poor separation from analytes and endogenous substances in biological matrix.As a result, ketoconazole was finally chosen, because it exhibited good separation with acceptable retention time, peak resolution, and fluorescence intensity at the same wavelength as REP and CEL. Method Validation: Selectivity, Linearity, Sensitivity, Precision, and Accuracy As shown in Figure 2, the analyte peaks were well separated from each other and from endogenous matrix peaks in the blank plasma.Thus, it appears that the present bioanalytical method could offer acceptable selectivity without endogenous interferences occurring at the retention times of the analytes.The calibration curves (REP-to-IS or CEL-to-IS peak area ratio versus REP or CEL concentration, respectively) for REP and CEL were observed to be linear from 10 to 2000 ng/mL in rat plasma samples.A representative equation for the calibration curves was constructed, as follows: y = 1.018x − 3.029 for REP and y = 2.093x − 0.723 for CEL, where y represents the ratio of the peak area of REP or CEL to that of IS, and x represents the ratio of nominal concentration of REP or CEL.The correlation coefficients (r 2 ) were over 0.999, showing good linearity of this method.Generally, the sensitivity of a bioanalytical method is represented by the LLOQ value, which, in the present study, was determined to be 10 ng/mL for both REP and CEL.Moreover, the present method offered good sensitivity for CEL, with LLOQ comparable to those reported by previous LC-MS/MS methods in human plasma (LLOQ: 5-10 ng/mL; plasma volume: 100-200 µL) [25,26,33].The intra-and inter-day precision and accuracy of this method were determined for REP and CEL at the four different QC levels, as shown in Table 1.The precision was estimated to be 8.30% or less, and the accuracy ranged from 98.6% to 112%.These values are within a generally acceptable range, showing that the present method was precise, accurate, and reproducible. (D) Figure 2. 
Figure 2. Representative chromatograms of repaglinide (REP), celecoxib (CEL), and ketoconazole (IS) in rat plasma: blank rat plasma (A); blank rat plasma spiked with REP, CEL (10 ng/mL, lower limit of quantification (LLOQ)), and IS (B); blank rat plasma spiked with REP, CEL (120 ng/mL, middle quality control (MQC)), and IS (C); plasma sample collected 120 min after concurrent oral administration of REP and CEL solution in rats, where calculated concentrations of REP and CEL were 53 and 968 ng/mL, respectively (D). EU: emission unit.
Table 1. Intra- and inter-day precision and accuracy of REP and CEL in rat plasma (n = 5). HQC: high quality control.
Method Validation: Recovery, Matrix Effect, and Stability
As shown in Table 2, we assessed the recovery and matrix effect of the method for REP and CEL at the four different QC levels and for IS at 50 ng/mL. The mean recovery of REP and CEL was observed to be 98.5-104% with CV values of ≤2.57%. There were no significant differences in recovery values among the four different QC levels (p = 0.066 for REP and 0.502 for CEL), indicating concentration-independent recovery for both drugs. The mean matrix effect for REP and CEL was observed to be 92.1-102% with CV values of ≤5.25%. The stability was assessed under various handling and storage conditions relevant to this HPLC-FL method. Bench-top stability, autosampler stability, freeze-thaw stability, and long-term stability were determined for REP and CEL at the four different QC levels. The extent of bias in the concentration was within ±15% of the corresponding nominal value, while the remaining fraction of REP and CEL was observed to be 89.2-106% with CV values of ≤6.04%, as shown in Table 3. These data clearly indicate that the sample preparation procedure employed in the bioanalytical method proposed herein offered sufficient extraction recovery with minimal matrix effect, and that REP and CEL remained stable under several conditions related to the present bioanalytical procedures.
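Before turning to the in vivo pharmacokinetic results below, here is a minimal sketch of the non-compartmental calculations (AUClast by the linear trapezoidal rule, Cmax, Tmax, and a terminal-slope t1/2) described in the Data Analysis section. The concentration-time values are invented and do not reproduce the study data; WinNonlin's actual algorithms may differ in detail.

```python
# Hypothetical non-compartmental calculation from a plasma concentration-time
# profile; all concentration values are invented.
import numpy as np

t = np.array([0, 10, 20, 30, 45, 60, 90, 120, 180, 240, 360, 480], dtype=float)  # min
c = np.array([0, 55, 160, 250, 290, 270, 210, 150, 80, 45, 15, 6], dtype=float)  # ng/mL

auc_last = np.trapz(c, t)          # ng*min/mL, up to the last sampling time
cmax     = c.max()
tmax     = t[c.argmax()]

# Terminal half-life from a log-linear fit of the last few sampling points.
slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)
t_half = np.log(2) / -slope

print(f"AUClast = {auc_last:.0f} ng*min/mL, Cmax = {cmax:.0f} ng/mL, "
      f"Tmax = {tmax:.0f} min, t1/2 = {t_half:.0f} min")
```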
Pharmacokinetic Drug Interaction Studies
Rats received oral REP (0.4 mg/kg) and CEL (2 mg/kg) either alone or in combination. Then, plasma concentration versus time profiles of REP and CEL were evaluated as shown in Figure 3. The relevant pharmacokinetic parameters are listed in Table 4. The oral doses used were selected based on previous rat pharmacokinetic studies on REP or CEL [34][35][36]. After oral dosing of REP, plasma REP levels increased for 20 to 45 min and then declined in a multi-exponential fashion. The AUClast, AUCinf, and t1/2 of REP were not significantly changed by concurrent administration with oral CEL, as shown in Table 4. After oral administration of CEL, its plasma concentration profiles markedly fluctuated during the whole period of blood collection (480 min). Thus, the AUCinf and t1/2 of CEL could not be determined in this study because there was no discernible linear terminal phase observed in the plasma concentration versus time curves of CEL. The multiple peaks in the plasma concentration profiles of CEL may be caused by slow and variable gastrointestinal absorption, which warrants further investigation. Notably, the AUClast of CEL was significantly higher (by 76.2%) after co-administration of CEL and REP than after administration of CEL alone (p = 0.0213). Because CEL is eliminated primarily by extensive metabolism [15], the increased systemic exposure of oral CEL could be attributable to a reduction of hepatic first-pass and/or systemic metabolism of CEL caused by concurrent administration of REP.
Previously reported pharmacokinetic parameters of intravenous REP and CEL in rats are listed in Table S1. Because the blood-to-plasma concentration ratio (RB) was 0.61 for REP and 2.66 for CEL in rat blood (our in-house data), the blood CL (CLB) was determined to be 8.52 mL/min/kg for REP and 2.92 mL/min/kg for CEL (calculated as plasma CL/RB). Because the urinary excretion of the unchanged drug was reported to be negligible for both drugs, as shown in Table S1, it is plausible to assume that the CLB of REP and CEL could represent their hepatic clearance, which is far below the reported hepatic blood flow rate in rats (QH; ranging from 50 to 80 mL/min/kg) [37]. This indicates that REP and CEL are drugs with low hepatic extraction ratios of 0.037 to 0.170 (calculated as CLH/QH). Based on the well-stirred model, the CLH of a drug with a low hepatic extraction ratio primarily depends on its intrinsic metabolic clearance (CLint) and fraction unbound in blood (fB) [30]. Thus, the hepatic metabolism and plasma protein binding of the drugs were further investigated in vitro using rat and human liver microsomes and plasma. As shown in Figure 4, dose-response curves for the inhibitory effect of REP on the metabolism of CEL were constructed in RLM and HLM. REP significantly inhibited the metabolic reaction of CEL with IC50 values of 16.1 ± 4.5 µM in RLM and 14.4 ± 0.6 µM in HLM. However, the metabolism of REP in RLM and HLM was not significantly altered by CEL (data not shown). Additionally, protein binding interactions between the two drugs in rat and human plasma were assessed. As shown in Figure 5, there were no significant differences in fractions of unbound drugs either alone or in combination, suggesting a minimal possibility of protein binding-based interactions between the two drugs.
Since oral AUC is calculated as F × D/CL (F, oral bioavailability; D, dose; CL, total clearance), an increase in F and/or decrease in CL results in an increase in AUC. Moreover, hepatic metabolism is the major elimination route for both REP and CEL, which are drugs with a low hepatic extraction ratio. Thus, inhibition of the hepatic metabolism of the two drugs can reduce their hepatic first-pass effect (increase in F) and hepatic systemic clearance (decrease in CL), consequently leading to an increase in AUC. In our present in vitro metabolism study in RLM and HLM, as shown in Figure 4, the metabolism of REP was not significantly changed by CEL, while the metabolism of CEL was inhibited by REP with a mean IC50 of 16.1 µM in RLM and 14.4 µM in HLM. Assuming that the in vivo concentration levels of REP in the rat liver after oral dosing are high enough to inhibit the metabolism of CEL, the increased oral systemic exposure of CEL by concurrent administration of REP (AUClast in Table 4) could be attributable to a reduction of the hepatic first-pass effect and/or hepatic systemic clearance of CEL caused by the inhibitory activity of REP on the hepatic metabolism of CEL.
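As a quick numerical check of the extraction-ratio reasoning above, the short script below uses only the blood clearance values and the hepatic blood flow range quoted in the text; it is illustrative and adds no new data.

```python
# Worked check of the hepatic extraction ratio range quoted above, E = CL_blood / Q_H,
# using CL_blood = 8.52 (REP) and 2.92 (CEL) mL/min/kg and Q_H = 50-80 mL/min/kg.
cl_blood = {"REP": 8.52, "CEL": 2.92}   # mL/min/kg, values quoted in the text
q_h = (50.0, 80.0)                      # rat hepatic blood flow, mL/min/kg

for drug, cl in cl_blood.items():
    e_low, e_high = cl / q_h[1], cl / q_h[0]
    print(f"{drug}: E = {e_low:.3f} - {e_high:.3f}")
# Overall range is roughly 0.037-0.17, consistent with the 0.037-0.170 quoted in the text.
```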
The present rat study highlighted the possibility of metabolism-based interactions between CEL and REP in clinical settings. As shown in Figure 4, REP significantly inhibited the metabolic reaction of CEL with comparable IC50 values between RLM and HLM. The Cmax of REP administered orally to rats was reported to be 297 ± 103 ng/mL (dose: 0.4 mg/kg) in this study, as shown in Table 4, and 105.1 ± 30 ng/mL (dose: 0.5 mg/kg) in a previous study [35], which are roughly comparable to the reported Cmax of 65.8 ± 30.1 ng/mL (dose: 4 mg; converted to 0.41 mg/kg in rats based on the human equivalent dose concept proposed by the FDA) in humans (FDA drug label information). Moreover, there were no significant differences in the unbound fraction of REP between rat and human plasma, as shown in Figure 5. Based on these findings, it is plausible that the increased systemic exposure of CEL by co-administration of REP in the present rat pharmacokinetic study could have some clinical relevance, depending on species differences in hepatic distribution profiles of REP between rats and humans.
Additionally, there have been no reported studies on the relationship between the AUC and toxicity of CEL. However, the FDA drug label of CEL indicates that the steady-state AUC of CEL is increased by about 40% and 180% in mild (Child-Pugh Class A) and moderate (Child-Pugh Class B) hepatic impairment, respectively. Thus, the daily recommended dose of CEL should be reduced by approximately 50% in patients with moderate (Child-Pugh Class B) hepatic impairment. In our present study, the AUClast of CEL in rats was observed to be 189 (ranging from 113 to 285) µg·min/mL after administration of CEL alone and 333 (ranging up to 458) µg·min/mL after administration of CEL and REP in combination. If these rat data could be extrapolated to humans, it is plausible that careful toxicity monitoring and/or dose modification may be needed for combined dosing of CEL and REP in clinical practice. Despite intrinsic limitations associated with nonclinical studies, our present in vivo rat and in vitro HLM data warrant further clinical study on drug interactions between REP and CEL.
Conclusions
This study successfully developed a simple, sensitive, and validated HPLC-FL method for simultaneous determination of REP and CEL in rat plasma. The new bioanalytical method provided several merits, including good sensitivity, high extraction recovery, negligible matrix effect, and simplicity of sample preparation procedures. The application of this method in the study of pharmacokinetic interactions between REP and CEL revealed that the pharmacokinetics of oral CEL were significantly altered by concurrent administration with oral REP. Furthermore, an in vitro metabolism and protein binding study using HLM and human plasma highlighted the possibility of metabolism-based interactions between CEL and REP in clinical settings. Therefore, the bioanalytical method proposed herein could become a promising tool for preclinical pharmacokinetic studies and, by extension, clinical use after partial modification and validation.
Figure and table captions:
Figure 2. Representative chromatograms of repaglinide (REP), celecoxib (CEL), and ketoconazole (IS) in rat plasma: blank rat plasma (A); blank rat plasma spiked with REP, CEL (10 ng/mL, LLOQ), and IS (B); blank rat plasma spiked with REP, CEL (120 ng/mL, MQC), and IS (C); plasma sample collected 120 min after concurrent oral administration of REP and CEL solution in rats, where calculated concentrations of REP and CEL were 53 and 968 ng/mL, respectively (D). EU: emission unit.
Figure 3. Plasma concentration versus time profiles of REP (A) and CEL (B) after oral administration of 0.4 mg/kg REP and 2 mg/kg CEL either alone (closed circle) or in combination (open circle) to rats. The circles and vertical bars represent means and standard deviations, respectively (n = 5).
Figure 4. Dose-response curves for the inhibitory effect of REP on metabolic reactions of CEL in RLM (A) and HLM (B). The circles and vertical bars represent the means and standard deviations, respectively (n = 4).
Figure 5. Fraction of unbound REP (A) and CEL (B) either alone or in combination in rat and human plasma (n = 3).
Table 1. Intra- and inter-day precision and accuracy of REP and CEL in rat plasma (n = 5). HQC: high quality control.
Table 2. Recovery and matrix effect of REP, CEL, and IS in rat plasma (n = 5).
Table 3 footnotes: a Room temperature for 3 h. b 10 °C for 1 day in the autosampler. c Three freezing and thawing cycles. d −20 °C for 30 days.
Table 4. Pharmacokinetic parameters of REP and CEL in rats after oral administration of 0.4 mg/kg REP and 2 mg/kg CEL either alone or in combination (n = 5). * Significantly different from the single group (p < 0.05).
Opposite Impact of REM Sleep on Neurobehavioral Functioning in Children with Common Psychiatric Disorders Compared to Typically Developing Children Rapid eye movement (REM) sleep has been shown to be related to many adaptive cognitive and behavioral functions. However, its precise functions are still elusive, particularly in developmental psychiatric disorders. The present study aims at investigating associations between polysomnographic (PSG) REM sleep measurements and neurobehavioral functions in children with common developmental psychiatric conditions compared to typically developing children (TDC). Twenty-four children with attention-deficit/hyperactivity disorder (ADHD), 21 with Tourette syndrome/tic disorder (TD), 21 with ADHD/TD comorbidity, and 22 TDC, matched for age and gender, underwent a two-night PSG, and their psychopathological scores and intelligence quotient (IQ) were assessed. Major PSG findings showed more REM sleep and shorter REM latency in the children with psychiatric disorders than in the TDC. Multiple regression analyses revealed that in groups with developmental psychopathology, REM sleep proportion correlated positively with scores of inattention and negatively with performance IQ. In contrast, in the group of TDC, REM sleep proportion correlated negatively with scores of inattention and positively with performance IQ. Whilst shorter REM latency was associated with greater inattention scores in children with psychopathology, no such an association existed in the group of TDC. Altogether, these results indicate an opposite impact of REM sleep on neurobehavioral functioning, related to presence or absence of developmental psychiatric disorders. Our findings suggest that during development, REM sleep functions may interact dissimilarly with different pathways of brain maturation. INTRODUCTION Rapid eye movement (REM) sleep is characterized by bizarre dreaming consciousness upon lack of external input (Hobson, 2009;Nir and Tononi, 2010), which is accompanied by specific neurophysiologic signatures including wake-like low frequency desynchronized electroencephalogram (EEG) dominated by theta and gamma EEG oscillation, swift occurrence of REMs and pontine-geniculate-occipital waves and absence of muscle tone (Rechtschaffen and Kales, 1968;Hobson and Pace-Schott, 2002;Hobson et al., 2014). Remarkably, amongst all sleep-wake stages, REM sleep has a prominent role in enabling neuronal plasticity, increased synaptic connectivity, and immediate early genes synthesis (Ribeiro et al., 2002;Grosmark et al., 2012) and is signified by a strong cortical activation (De Gennaro et al., 2004;Massimini et al., 2010). Thus, the functional roles of REM sleep and dreaming have been of sustained interest. In healthy adults, REM sleep has been shown to support many adaptive functions. These include consolidation of emotional memory (Wagner et al., 2001;Nishida et al., 2009), resolution of affect (van der Helm et al., 2011), further transformation of previously consolidated during non-REM (NREM) sleep memories (Walker and Stickgold, 2010;Rasch and Born, 2013;Llewellyn and Hobson, 2015), consolidation of procedural or implicit memory and motor learning (Yordanova et al., 2008;Diekelmann and Born, 2010), and promoting human heuristic creativity (Cai et al., 2009;Brand et al., 2010). REM sleep's physiological and psychological features also have been associated with more complex functions. 
For example, REM sleep has been suggested to heighten autobiographic memory Malinowski and Horton, 2015) and to render previously encoded memories more distinct through its hyper-associative dreaming state (Llewellyn, 2013;Llewellyn and Hobson, 2015). Also, it is thought to incorporate these previously encoded memories into a broader vital context, thus embedding them in consolidated residuals of hypotheses, emotions, basic needs, and individual genetic traits (Kirov, 2013). Next, REM sleep has been proposed to generate an innate virtual model of the world, thus modulating predictive coding (Hobson and Friston, 2012;Hobson et al., 2014;Hopkins, 2016). In view of REM sleep deviations in almost all psychiatric conditions (Benca et al., 1997;Gottesmann and Gottesman, 2007;Baglioni et al., 2016), REM sleep also has been regarded as a mechanism mediating brain adaptation in normal and pathological conditions (Benca et al., 1992;Horne, 2013Horne, , 2015Goldstein and Walker, 2014;Hobson et al., 2014;Hopkins, 2016;Mota et al., 2016). From a developmental perspective, infants have much more REM sleep quantity, which descends through childhood and adolescence, than adults (Roffwarg et al., 1966;Brand and Kirov, 2011). It has been proposed that this developmental decrease in REM sleep sub-serves brain maturation through synaptic reorganization and/or pruning, internally generated stimulation of neuronal assembles, or genetic programming (Marks et al., 1995;Jouvet, 1998;Feinberg and Campbell, 2010;Hopkins, 2016). In this regard, an insufficient decline or variations of the normal REM sleep decline during development is proposed to underpin a broad spectrum of child and adolescent psychiatric disorders (Partonen, 1998;Kobayashi et al., 2004;Garcia-Rill et al., 2008;Brand and Kirov, 2011;Kirov and Brand, 2014). Given that both the hypothalamus-pituitary-adrenocortical axis activation and REM sleep overdrive are closely associated in psychiatric conditions (Steiger et al., 2013), this notion has received an indirect support by documenting an existence of elevated cortisol levels in association with disturbed sleep and impaired neurobehavioral functions in a cohort of children with various psychiatric symptoms (Hatzinger et al., 2012), and thus, probably, with deviant stress sensitivity (Brand and Kirov, 2011;Gruber, 2014). However, whether and how REM sleep in common developmental child psychiatric disorders may be linked to neurobehavioral functioning is still less well understood. We have shown previously a REM sleep overdrive in children with attention-deficit/hyperactivity disorder (ADHD), Tourette syndrome/chronic tic disorder (TD) and ADHD/TD comorbidity, with this REM sleep overdrive being associated mostly with ADHD core symptoms (Kirov et al., 2007). More recently, we have demonstrated the following pattern of associations between REM sleep quantity and neurobehavioral functions in youth ADHD: (1) In children with ADHD, REM sleep proportion correlated positively with inattention and negatively with performance intelligence quotient (IQ). (2) In opposition, the proportion of REM sleep in typically developing children (TDC) correlated negatively with inattention and positively with performance IQ . Similarly, another recent study showed that whereas in youths with ADHD, theta (4-8 Hz) EEG power during REM sleep correlated negatively with emotional memory consolidation, in healthy individuals, this correlation was positive (Prehn-Kristensen et al., 2013). 
Collectively, these latter findings suggest at least a bi-directional role of REM sleep and its physiology, depending on presence or absence of ADHD psychopathology. The present study aimed at further investigating the impact of REM sleep on neurobehavioral functioning in children with a broader spectrum of common developmental psychiatric disorders. In an attempt to clarify if the associations between REM sleep proportion and neurobehavioral functions reported previously were only linked to ADHD psychopathology possibly reflecting a disorderspecific dysfunction of neural regulation leading to both daily symptoms expression and sleep disturbances (Kirov et al., 2007), we enrolled in the present study larger sample sizes of children with ADHD, TD and ADHD/TD comorbidity and compared them to healthy TDC, while testing associations not only between REM sleep parameters and psychopathological scores, but also considering children's IQ. We hypothesized a differential impact of REM sleep on cognitive and behavioral functioning in the children with the continuum of common developmental psychiatric disorders (Gaze et al., 2006;Kirov et al., 2011) compared to the TDC. Also, we proposed that REM sleep may be associated dissimilarly with different psychopathological scores across the children's specific psychiatric diagnoses. Subjects Eighty-eight children aged between 8 and 16 years (66 outpatients with ADHD, TD and ADHD/TD comorbidity and 22 TDC) participated in the study. All children and their parents were native German speakers. Children were examined clinically by two independent board-certified child psychiatrists and underwent clinical tests for neurological and internal diseases, including routine EEG and electrocardiogram. All patients were consecutive referrals to the Clinic for Child and Adolescent Psychiatry at the University Medical Center of Goettingen, Germany. They were diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders 4th edition (DSM-IV; American Psychiatric Association [APA], 1994) with ADHD-combined subtype (314.01), Tourette syndrome/chronic TD (307.22/307.23) and ADHD/TD comorbidity. TDC were recruited among friends and relatives of the clinical staff. Exclusion criteria for the children with psychopathology were presence of internal diseases, neurological or psychiatric problems not associated with ADHD and TD, and verbal, performance and total IQ < 70 (the German version of Wechsler Intelligence scale for children; Tewes, 1999), as evaluated by the certified child psychiatrists. Further, the exclusion criteria applied for the controls were neuropsychiatric or internal diseases, IQ < 70 and current sleep problems, as assessed during an adaptation night with polysomnography (PSG). None of the patients had clinically expressed psychiatric disorders different from ADHD, TD and ADHD/TD co-morbidity, somatic and neurological diseases and IQ < 70, and none of the TDC had neuropsychiatric or internal diseases, IQ < 70 and current sleep problems. The 88 children formed the following four groups. Twentyfour children (27.3%) with ADHD-combined subtype, 21 (23.9%) with TD, 21 (23.9%) with ADHD/TD comorbidity, and 22 (25%) TDC ( Table 1). The four groups were matched for age and gender, but not for IQ. 
However, as can be seen in Table 1, the groups did not differ significantly for IQ. (Table 1 abbreviations: ADHD, attention-deficit/hyperactivity disorder; TD, tic disorder; ADHD/TD, co-morbidity; MED, medicated before study; NMED, never medicated; N/A, not applicable; CPRS, Conners parent rating scale; CBCL, child behavior checklist; LOI, Leyton obsession inventory; TSSS, Tourette syndrome severity scale; PSG, polysomnography. Significant (p < 0.05) group differences (independent samples t-tests): a, ADHD vs. controls; b, TD vs. controls; c, ADHD+TD vs. controls; d, TD vs. ADHD and ADHD/TD.) Most patients (n = 40; 60.6%) had never received any medications. The medication of the others (n = 26; 39.4%) was as follows: (1) Nine boys and one girl with ADHD were treated with Methylphenidate Hydrochloride (MPH: Ritalin®, Novartis Pharma GmbH, Nuremberg, Germany). (2) In the TD group, seven boys were treated with Tiaprid (Tiaprid®, neuraxpharm Arzneimittel GmbH, Langenfeld, Germany), and two boys with Haloperidol (Haloperidol®; ratiopharm direct GmbH, Ulm, Germany). (3) Six boys with ADHD/TD comorbidity received a combination of MPH and Haloperidol, and one girl received Tiaprid (Table 1). The medication of the 26 children with psychiatric disorders was discontinued 5 to 14 days before the study. The study was performed according to the clinical standards of the Declaration of Helsinki and approved by the Local Ethics Committee at the University Medical Center of Goettingen, Germany. A detailed description of the investigation was provided to the parents and their children. Parents of each child signed written consent and children gave age-appropriate consent.
Psychometric Assessment
To further provide detailed data for a quantitative assessment of psychopathological problems across groups, the control and patient groups were carefully assessed by means of psychometric questionnaires, including the Child Behavior Checklist (CBCL; Achenbach and Edelbrock, 1983), Conners Parent Rating Scale (CPRS; Goyette et al., 1978) and Leyton Obsessional Inventory (LOI; Berg et al., 1986). Only for the children with TD and ADHD/TD comorbidity, the Tourette Syndrome Severity Scale (TSSS; Shapiro et al., 1988) was used. For a quantitative assessment of the level of hyperactivity and impulsiveness, the short 10-item version of the CPRS (3-point scale ranging from not true to often true) was used (Cronbach's α = 0.89). On the 20-item LOI concerning obsessive-compulsive behavior, children were asked to respond to the items with 'yes' or 'no', scored as 1 or 0 points, respectively. When 'yes' responses were obtained, children were assessed on 4-point scales for either resistance to their symptoms or interference with other activities caused by the symptoms (Cronbach's α = 0.90). The Tourette Syndrome Severity Scale (five items) was applied to quantitatively measure the severity of motor and vocal tics (rated as 'not true', 'somewhat', and 'often true' and scored as 0, 1, and 2 points, respectively; Cronbach's α = 0.86). CBCL and CPRS scores were rated by the mothers of the children, whereas the scores of the LOI and TSSS were rated by experts after psychiatric interviews with mothers and children. As shown in Table 1, the groups significantly differed on each CBCL subscale and on the CPRS and LOI subscales, with no significant differences between the TD and ADHD/TD comorbidity groups on the TSSS. All psychometric and IQ evaluations of the patient and control groups of children were made 1 or 2 days before conducting a two-night PSG.
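As an aside on the internal-consistency coefficients (Cronbach's α) quoted for the questionnaires above, the sketch below shows the standard computation on an invented item-response matrix; it is illustrative only and uses none of the study data.

```python
# Hypothetical Cronbach's alpha computation for a k-item questionnaire;
# rows are respondents, columns are items, responses are invented.
import numpy as np

items = np.array([
    [2, 1, 2, 2, 1],
    [0, 0, 1, 0, 0],
    [2, 2, 2, 1, 2],
    [1, 1, 0, 1, 1],
    [2, 1, 1, 2, 2],
    [0, 1, 0, 0, 0],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha is approximately {alpha:.2f}")
```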
Polysomnographic (PSG) All children underwent a PSG in the sleep laboratory during two consecutive nights. An unrestrained sleep regime was employed with the major goal to avoid as much as possible situational variations that could potentially affect sleep findings. The PSG included EEG (C3 and C4) electrodes referenced to the right (A2) and left (A1) mastoids, electrooculogram recorded from electrodes above and below the right eye and the outer canthi of the orbits and submental electromyogram. All PSG recordings were performed on a 21-channel polyphysiograph (Nihon Kohden, Tokyo, Japan) with electrode impedance <5 kohms and stored on a computerized video-monitoring system (Sagura Polysomnograph 2000, Sagura Medizintechnik GmbH, Muhlheim, Germany). PSG data were analyzed visually in 30-s epochs according to standard criteria (Rechtschaffen and Kales, 1968) by three independent certified technicians blind to subject grouping: inter-rater agreement [>89%; Cohen's kappa: (range 0.89-0.93)]. The first PSG night served as an adaptation night, during which a monitoring for presence of primary sleep disorders was conducted. To avoid the first-night effect, sleep PSG data were taken only from the second night (Kirov et al., 2012). Statistics Data were analyzed using IBM SPSS Statistics 19 (IBM Corp., Armonk NY, USA). All demographic (except gender), IQ and psychometric data and all sleep PSG data were first tested for normality of distribution by means of Kolmogorov-Smirnov test, with the following results obtained: (1) patients (n = 66: Z < 1.28, p > 0.12), (2) TDC (n = 22: Z < 1.19, p > 0.14), (3) ADHD group (n = 24: Z < 1.11, p > 0.16), (4) TD group (n = 21: Z < 1.01, p > 0.18), and (5) ADHD/TD comorbidity group (n = 21: Z < 1.22, p > 0.12), verifying a Gaussian distribution for all data sets. Therefore, parametric statistics was used. The demographic and clinical data, excluding gender (chi-squared test) were statistically evaluated by means of independent samples t-tests (Table 1). All sleep PSG parameters were subjected to a one-way multivariate analysis of variance (MANOVA) with one between-subjects factor group. In case of significant group effects, independent samples t-tests were conducted. The alpha level of significance was fixed at 0.05. To test if any index of the neurobehavioral functioning (psychometric and IQ scores) may be a specific determinant of REM sleep parameters in the psychopathological group (ADHD, TD and ADHD/TD comorbidity; n = 66), multiple regression stepwise analyses were conducted, where in separate analyses different REM sleep parameters were included as dependent variables, and psychometric and IQ scores, group, age (in months), gender, and medication status ( Table 1) were used as independent predictors. The same multiple regression stepwise analyses were also conducted for the TDC, as well as separately in the three patient groups. Finally, to test whether other sleep PSG variables may be associated with the neurobehavioral functioning across groups, the multiple regression stepwise analyses described above were performed additionally. Further, both the crude (taken from lights-off) and the adjusted (taken from sleep onset) REM sleep latencies in the 66 children with psychopathologies, were only predicted by, and correlated negatively with inattention (R > 0.357; R 2 > 0.127; Adjusted R 2 > 0.114; F 1/64 > 9.33; p < 0. 
When applied the multiple regression analyses, where the relative and absolute amounts of REM sleep and REM latencies were dependent variables to each psychopathological group separately, and the following patterns of results were obtained: (1) In the ADHD group, the relative REM sleep proportion was predicted independently by inattention and performance IQ, and correlated positively with inattention and negatively with performance IQ. (2) In the TD group, the relative REM sleep proportion was predicted independently by inattention and interference LOI scores, and correlated positively with scores of inattention and LOI (interference scores). (3) In the group of ADHD/TD comorbidity, the relative REM sleep proportion correlated positively with inattention ( Table 3). In each one of the psychopathological groups separately, the absolute REM sleep amount correlated positively with only inattention (R > 0.580; R 2 > 0.337; Adjusted R 2 > 0.302; F 1/19(22) > 9.65, p < 0.001; B > 1.98; β > 0.580: t > 3.01, p < 0.006). Further, in each one of the groups with psychopathologies, both the crude and the adjusted REM sleep latencies correlated negatively with only inattention (R > 0.441; R 2 > 0.194; Adjusted R 2 > 0.152; F 1/19(22) > 4.59, p < 0.04; B > −2.56; β > −0.441; t > −2.14; p < 0.04). Additional multiple regression analyses did not extract any predictors for SOL, SWS latency, as well as for the other sleep stages proportion and PSG parameters, including total sleep time and sleep efficiency. DISCUSSION The key findings of the present study were that among a sample of children diagnosed with ADHD, TD and ADHD/TD comorbidity, the more REM sleep amount was associated with higher scores of inattention and lower scores of performance IQ. By contrast, in TDC, the more REM sleep amount was associated with lower scores of inattention and higher performance IQ. Further, whereas in children with psychiatric disorders, the shorter REM sleep latencies were associated with higher scores of inattention, no such association was found for the TDC. The present findings add to the current literature in that we observed "double-edged" associations between REM sleep PSG parameters and neurobehavioral functioning, not only in ADHD Prehn-Kristensen et al., 2013). Notably, this study clearly showed that this opposite impact of REM sleep on daytime neurobehavioral functions is also observable in a broader spectrum of developmental child psychiatric conditions, as compared with TDC. Collectively, these findings give support to our hypothesis that REM sleep parameters may be associated dissimilarly with daytime neurobehavioral functions, depending on presence or absence of developmental psychiatric disorders. Although the relative REM sleep proportion in each of the three psychopathological groups was predicted consistently by inattention, some observations (Table 3) merit further attention. First, the observed positive association between REM sleep proportion and interference LOI scores in the TD group might underline the closeness of TD with subclinical obsessivecompulsive symptoms. Second, the modest differences in REM sleep predictors across the groups ( Table 3 and the reported within text results) may be accounted for by variations in statistical power due to the relatively small sample sizes included in each group. The present data do not allow a deeper introspection into the exact neurobiological and psychological mechanisms underlying the pattern of associations, as found and described above. 
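To make the stepwise multiple regression analyses described in the Statistics section more concrete, a simple forward-selection loop can be written around statsmodels OLS. This is only an illustrative sketch, not the SPSS procedure used by the authors, and the DataFrame and column names are hypothetical stand-ins for the psychometric, IQ and demographic predictors listed in Table 1:

import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, dependent, candidates, alpha_in=0.05):
    """Forward stepwise OLS: repeatedly add the candidate with the smallest
    p-value, as long as that p-value is below alpha_in."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[dependent], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_in:
            break
        selected.append(best)
        remaining.remove(best)
    model = sm.OLS(df[dependent], sm.add_constant(df[selected])).fit()
    return selected, model

# Hypothetical predictor names mirroring those listed in the Statistics section
predictors = ["inattention", "hyperactivity_impulsivity", "loi_interference",
              "verbal_iq", "performance_iq", "age_months", "gender",
              "medication_status", "group"]
# selected, model = forward_stepwise(patients_df, "rem_sleep_percent", predictors)
# print(selected); print(model.summary())   # R^2, adjusted R^2, F and beta weights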
It is notable, however, that currently reported REM sleep alterations in children with a broad spectrum of psychiatric disorders correspond to findings in adults where psychiatric conditions such as depression, major depressive disorder, and post-traumatic stress disorder are featured by enhanced amounts of REM sleep (Benca et al., 1997;Wang et al., 2015;Baglioni et al., 2016). In adults, two parallel ways with a common source in the monoaminergic or cholinergic systems have been assumed to regulate both REM sleep and symptoms of depression (Wang et al., 2015). Likewise, we have proposed previously that altered aminergic-cholinergic ratio may lead to a REM sleep overdrive in the spectrum of developmental psychiatric conditions, as shown in the present study (Kirov et al., 2004Brand and Kirov, 2011;Kirov and Brand, 2014). Further, specific brain regions including the hippocampus, amygdala and medial prefrontal cortex may contribute to both daily symptoms and REM sleep impairments in both depressive adults (Wang et al., 2015) and children with ADHD (Hart et al., 2013) While the precise neurobiological mechanisms of the co-existing deviations in daily behaviors and REM sleep remain to be established (Wang et al., 2015), our current findings on the relationships between REM sleep amount and daily behavior both in patients and healthy children may open a relevant new line of understanding by considering the contra-directionality of these relationships. If increased REM sleep amounts in children were expressions of co-impaired regulation of REM sleep and attention, increased REM sleep might not predict superior achievements in TDC. If, on the contrary, increased REM sleep in patients reflected compensation, it should have predicted less symptoms severity in patients. The currently found bi-directionality of associations suggests that functional efficiency of REM sleep might be critically impaired in children with psychiatric disorders. There is evidence that the functional efficiency of REM sleep may vary both in pathological and normal conditions. For example, chronic stress in rats has been found to synchronize the theta rhythm between the hippocampus and amygdala, which was accompanied by increased amounts of REM sleep (Hegde et al., 2011). Further, Pellicciari et al. (2013) provide evidence for functional inefficiency of REM sleep alpha activity in depression in humans, which can be remediated by repetitive transcranial magnetic stimulation. These observations imply that the functional rhythms and associated mechanisms during REM may be primarily impaired in children with psychiatric disorders. Hence, in pathology, the increased REM sleep amount may reflect an attempt to compensate for functional inefficiency. While functional inefficiency of REM sleep may not be compensated by increased REM sleep amount in patients, increase in functionally efficient REM sleep in TDC is associated with an improved attention and higher performance IQ. The present results also imply effects on psychological functioning. REM sleep plays a role in the consolidation of negative emotional memories (Wagner et al., 2001;Nishida et al., 2009;Goldstein and Walker, 2014) and resolution of affect through dissipation of amygdala activity in response to previous emotional experiences, thus reducing next-day subjective emotionality (van der Helm et al., 2011). 
In this regard, the present results suggest that those REM sleep neuronal mechanisms sub-serving successful emotional processing may be intact in the TDC and may be insufficient or impaired in the children with the spectrum of psychiatric disorders. Thus, our results point tentatively to a possible contribution of emotional liability and related anxiety in these child psychiatric disorders Gruber, 2014), Since a wealth of empirical evidence show that children with common developmental psychopathologies display greater difficulties in coping with emotional problems, inappropriate behaviors and social interactions (Swain et al., 2007;Brand and Kirov, 2011;Kirov and Brand, 2014;, this may affect the natural functions of REM sleep and may lead to impaired attention and procedural skills, respectively. Further, from a view point of the aminergic-cholinergic reciprocal interaction model for NREM-REM sleep cycle regulation (Hobson et al., 1975), we have proposed previously that altered aminergic-cholinergic ratio may lead to a REM sleep overdrive in the spectrum of developmental psychiatric conditions, as shown in the present study (Kirov et al., 2004Brand and Kirov, 2011;Kirov and Brand, 2014). Though speculative, changes in this ratio may lead to dissimilar functions of REM sleep for implicit motor memory. Indirect support to this assumption comes from two consecutive studies. Whilst suppression of REM sleep induced by noradrenergic agonists led to a slight improvement of procedural memory, suppressing REM sleep by application of the cholinergic antagonists have produced opposite effects (Rasch et al., 2009a,b). Thus, REM sleep physiological, neurochemical and psychological features may interact differently with pathological traits in psychiatric conditions and, on the other hand, in normative behavioral functions in healthy youths. Our findings are in accord with previous research, observing an adverse impact of REM sleep overdrive or its EEG signatures on daytime neurobehavioral functions in children with ADHD compared to TDC (Kirov et al., 2007Prehn-Kristensen et al., 2013). Yet, they are at odds with findings from other studies. An earlier study has found a less REM sleep proportion and prolonged REM sleep latency in a community-based sample of 5-to7-years-old children with ADHD relative to controls, and has demonstrated that the less REM sleep and the longer REM latency correlated positively with inattention and hyperactivity/impulsivity (O'Brien et al., 2003). Another study conducted among 6-to 16-years-old children with TD, ADHD and TD/ADHD comorbidity, diagnosed according to DSM-III-R (American Psychiatric Association [APA], 1987) did not find any changes in REM sleep, but did find modest correlations between movements in REM sleep and hyperactivity/impulsivity (Stephens et al., 2013). Recently, it has been shown a REM sleep overdrive in un-medicated children with ADHD which did not correlate to any psychopathologies or impaired daytime behaviors (Virring et al., 2016). Notably, however, our patients and controls differed from those in the above studies in age, applied diagnostic criteria and medication. Hence, as proposed by Kirov and Brand (2014), differences in age, diagnostic criteria used and medication status could significantly contribute to controversial sleep PSG findings in youths with ADHD and their effect on daytime behavior. 
Although further studies are needed to clarify the role of neurobiological variables, the present findings show that the delayed and/or deviant developmental decrease in REM sleep might represent a risk factor for developmental psychopathology. Despite of the present findings, several considerations warrant against overgeneralization of the results. First, since periodic limb movements in sleep (PLMS) and sleep-disordered breathing (SDB) are among the most common sleep disorders in ADHD (Cortese et al., 2009), their presence was not reported here, because our focus was on the associations between sleep stages and daytime neurobehavioral functions. Thus, the role for PLMS and SDB in our findings mandates future investigations. Second, the sample sizes used are relatively small and heterogeneous in terms of age, gender and medication status before study. However, none of the multiple regression analyses extracted age, gender and medication status as predictors. Third, while it would be helpful to supplement our results by providing data about presence of parasomnias, sleep hygiene and circadian rhythm disorder, the lack of sleep diaries and actigraphy precludes such observations. Last, the present results might have emerged due to further latent neuroendocrinologic variables, which might have biased two or more dimensions in the same or opposite direction (Steiger et al., 2013). CONCLUSION Our results indicate an opposite impact of REM sleep on neurobehavioral functioning, depending on presence or absence of developmental psychiatric disorders. Thus, the development of REM sleep functions and the way how these relate to different kinds of children's behavior later on seems to depend on different factors (e.g., genes, environment, adaptation, etc.) influencing the child's brain maturation either to a normal or an aberrant neuronal system. AUTHOR CONTRIBUTIONS RK, TB, and AR: Substantial contributions to the conception and design of the work. RK, SB, and TB: Interpretation of data and drafting the manuscript. RK, SB, and TB: Statistical analysis. RK and TB: Data selection and matching the groups for age and gender. SB, TB, and AR: Final approval of the paper draft and agreement to be accountable for all aspects of the work. FUNDING The reported study in this manuscript was supported by University Medical Center of Goettingen, Goettingen, Germany, and did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
2017-05-05T09:13:22.977Z
2017-01-09T00:00:00.000
{ "year": 2017, "sha1": "2e5a92be23d5eae55a241c2e6ab46da541ce1592", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2016.02059/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2e5a92be23d5eae55a241c2e6ab46da541ce1592", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
226301732
pes2o/s2orc
v3-fos-license
A Reliable BioFET Immunosensor for Detection of p53 Tumour Suppressor in Physiological-Like Environment The concentration of wild-type tumour suppressor p53wt in cells and blood has a clinical significance for early diagnosis of some types of cancer. We developed a disposable, label-free, field-effect transistor-based immunosensor (BioFET), able to detect p53wt in physiological buffer solutions, over a wide concentration range. Microfabricated, high-purity gold electrodes were used as single-use extended gates (EG), which avoid direct interaction between the transistor gate and the biological solution. Debye screening, which normally hampers target charge effect on the FET gate potential and, consequently, on the registered FET drain-source current, at physiological ionic strength, was overcome by incorporating a biomolecule-permeable polymer layer on the EG electrode surface. Determination of an unknown p53wt concentration was obtained by calibrating the variation of the FET threshold voltage versus the target molecule concentration in buffer solution, with a sensitivity of 1.5 ± 0.2 mV/decade. The BioFET specificity was assessed by control experiments with proteins that may unspecifically bind at the EG surface, while 100pM p53wt concentration was established as limit of detection. This work paves the way for fast and highly sensitive tools for p53wt detection in physiological fluids, which deserve much interest in early cancer diagnosis and prognosis. Introduction The transcription factor p53 is considered the most important tumour suppressor for its pivotal role in many cellular processes inducing a number of genes implicated in DNA repair, cell-cycle arrest and apoptosis [1,2]. More than half of human cancers harbour p53 mutations, this reducing wild-type p53 (p53 wt ) concentration in cells and extracellular physiological fluids in patients with cancer, with respect to the standard nanomolar range characterizing healthy tissues [3,4]. On the other side, high concentrations of p53 wt protein in blood have been correlated to asbestosis, cancer of lungs and mesothelioma, even many years before the malignancy became clinically overt [5][6][7]. Thus, p53 wt concentration in cells and blood deserves a clinical significance for the early diagnosis of many important diseases. As a consequence, a variety of techniques have been proposed for p53 wt protein assay, including, but not limited to, colorimetry, chemiluminescence, electrochemiluminescence, immunochromatography, enzyme-linked immunosorbent assays (ELISA), surface-enhanced Raman spectroscopy (SERS), surface plasmon resonance (SPR), electrochemistry, standard and field-effect transistor (FET) based amperometry [3,4,[8][9][10][11][12][13][14][15]. In the last years, among various analytical methods, biosensing based on FET technology (also known as BioFET) has drawn the attention of the scientific community for its outstanding properties, both on the basis of sensing performance (i.e., sensitivity, specificity and fast response to target concentration) and of the effective cost and possibility of mass production thanks to semiconductor technology [16,17]. Most importantly, BioFET sensors would overcome issues related to target structural modifications by label introduction, which, in turn, could affect the biosensor recognition ability. Finally, BioFET does not need any optical devices for readout, with significant reduction of the biosensor dimensions. 
In a FET device, the current flow between two suitably polarized terminals (drain and source electrodes), embedded in a semiconductor substrate, is finely modulated by the potential of a third element, the gate electrode. In BioFET, the gate electrode is in contact with an electrolyte solution in which a stable reference electrode is placed to set the gate surface potential constant. The gate surface is suitably biofunctionalized with receptor molecules that can specifically recognize and capture the target molecules in the solution. If the target molecules are endowed with a net charge, which depends on both their isoelectric point (pI) and the solution pH, capturing of the charged targets onto the FET gate surface results into a variation of its potential due to the charge variation that, in turn, affects the measurable drain-source current [18]. Therefore, a correlation between such a current and the target concentration would allow an unknown biomarker concentration in solution to be determined. Although huge efforts have been devoted to the implementation of commercial BioFET, two main drawbacks have limited, up to date, the development of low-cost and disposable devices. The first originates from the presence of ion species in physiological solutions that hinder the target charge to affect the gate surface potential, above the so-called Debye screening length. This length characterizes the charge screening in electrolytes under the Debye-Hückel model, and mainly depends on the working solution temperature and ionic strength [17,19,20]. In physiological solutions and ambient temperatures, the Debye screening length is in the order of one nanometer. Since the biomolecules involved in biorecognition processes at the basis of target molecule capture may largely exceed this length, a drastic electrostatic screening of the target charges may occur, with minimal or no effect on the gate surface potential [21]. Buffer dilution reduces the ionic strength of the solution and extends the Debye length; however, this can cause severe issues such as instability of the biomolecules in solution, or reduced affinity of the capturing-target molecule interaction [17]. Nevertheless, it is worth noting that, in some cases, biosensing by means of BioFET has been reported well beyond the Debye length, in high ionic strength solutions, and this has been attributed to the formation of a Donnan equilibrium within the adsorbed target layer, which modifies the charge distribution at the interface, affecting either the interface capacitance [22] or the surface potential [19]. However, a series of loopholes emerged to overcome the Debye screening adverse effect in physiological solutions, which were based on the use of short receptors, such as aptamers [23], or nanostructured gate electrodes, such as silicon nanowires or carbon nanotubes [24,25]. Also, a particular sensing layer based on long chain of polyethylene glycol (PEG) polymer has been successfully implemented to extend the Debye length [26][27][28], thanks to a newly induced Donnan equilibrium [29]. A second drawback is related to the coupling difficulties arising when a semiconducting electronic device (FET) is to be employed in a wet environment. Accordingly, a new device design (EGFET) has been proposed, in which the FET gate is extended through a suitable conductive electrode, the so-called extended gate (EG) [28,30,31]. 
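As a quick numerical check of the Debye screening length discussed above, the standard Debye–Hückel expression lambda_D = sqrt(eps_r * eps_0 * k_B * T / (2 * N_A * e^2 * I)) can be evaluated for a few ionic strengths. A minimal sketch, assuming a symmetric 1:1 electrolyte at 25 °C with eps_r ≈ 78.5 for water (these parameter values are assumptions, not taken from the paper):

import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB   = 1.381e-23   # Boltzmann constant, J/K
E    = 1.602e-19   # elementary charge, C
NA   = 6.022e23    # Avogadro constant, 1/mol

def debye_length(ionic_strength_mol_per_l, temperature_k=298.0, eps_r=78.5):
    """Debye screening length (in metres) of a symmetric 1:1 electrolyte."""
    n = ionic_strength_mol_per_l * 1e3 * NA          # ions per cubic metre
    return np.sqrt(eps_r * EPS0 * KB * temperature_k / (2 * E**2 * n))

for c in (0.001, 0.01, 0.1, 0.15):                   # mol/L
    print(f"{c*1e3:6.0f} mM  ->  lambda_D = {debye_length(c)*1e9:.2f} nm")
# At ~150 mM (physiological) this gives ~0.8 nm, i.e. the "order of one nanometer" quoted above.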
In such a configuration, a suitably microfabricated electrode, whose upper part can be functionalized with target-capturing molecules, is exposed to the target-containing solution, while the other side of this electrode is placed in contact with the internal FET gate terminal. In such a way, the charge variation effect occurring upon biorecognition is transmitted to the FET gate, resulting analogously in the I ds variation. Then, contamination of the FET silicon die is avoided and the FET can be reused, while the disposable EG can be removed after the measurement. In this paper, we present a label-free EGFET immunosensor for the detection of p53 wt , which benefits from both the extended-gate configuration, in terms of disposability and low-cost production, and the introduction of a suitable PEG layer for the enhancement of the device sensitivity as due to an extended Debye length. Such a BioFET design allowed us to reach a limit of detection (LOD) of 100 pM for p53 wt in physiological solutions at room temperature. The BioFET detection range spans over three orders of magnitude (0.1-10 nM), with a sensitivity of 1.5 ± 0.2 mV/decade. All these features make the proposed biosensor particularly suitable for the assays of p53 wt at the concentration range expected in cancer cells with p53 gene mutation (0.1 nM), as well at concentrations expected in healthy cells (1 nM), and beyond. Experimental Setup To reduce the FET-to-EG distance, contact resistances and ambient noise, a commercial zero-threshold n-type metal oxide semiconductor field-effect transistor (MOSFET; ALD110900A from Advanced Linear Devices Inc., Sunnyvale, CA, USA) was integrated in a printed circuit board (PCB) with a socket for connecting the sensor chip (which contains the gold EG and two pseudoreference electrodes, see next Section 2.2) and plugs for wiring the probe station ( Figure 1a). The sensor chip was vertically immersed in a custom fluid cell, together with a commercial bulky Ag/AgCl reference electrode (DriRef-2, World Precision Instruments Ltd, Hitchin, UK). The fluid cell had the dimensions of 1.0 × 0.5 × 3.5 cm 3 , and it was fitted into a heavy plastic support during measurements, to ensure stability and avoid mechanical noise. At a height of 1.5 cm from the bottom of the fluid cell, a lateral hole allowed the injection of the target solutions into the cell during measurements. The maximum capacity of the cuvette was 600 µL. The experimental measurements were carried out by a Keithley 2636B (Tektronix, Beaverton, OR, USA), which have a sensitivity in the fA range for current measurements and µV range for potential measurements. Two source-meter units (SMUs) of the Keithley apparatus were used. The SMU1 applied the voltage between the drain and the source electrodes (V ds ), which was kept constant at 100 mV for all the experiments, and measured the corresponding current flow (I ds ). The SMU2 applied a potential bias (V ref ) to the reference electrode in solution. Sensors 2020, 20, x 3 of 13 In this paper, we present a label-free EGFET immunosensor for the detection of p53wt, which benefits from both the extended-gate configuration, in terms of disposability and low-cost production, and the introduction of a suitable PEG layer for the enhancement of the device sensitivity as due to an extended Debye length. Such a BioFET design allowed us to reach a limit of detection (LOD) of 100 pM for p53wt in physiological solutions at room temperature. 
The BioFET detection range spans over three orders of magnitude (0.1-10 nM), with a sensitivity of 1.5 ± 0.2 mV/decade. All these features make the proposed biosensor particularly suitable for the assays of p53wt at the concentration range expected in cancer cells with p53 gene mutation (0.1 nM), as well at concentrations expected in healthy cells (1 nM), and beyond. Experimental Setup To reduce the FET-to-EG distance, contact resistances and ambient noise, a commercial zerothreshold n-type metal oxide semiconductor field-effect transistor (MOSFET; ALD110900A from Advanced Linear Devices Inc., Sunnyvale, CA, USA) was integrated in a printed circuit board (PCB) with a socket for connecting the sensor chip (which contains the gold EG and two pseudoreference electrodes, see next Section 2.2) and plugs for wiring the probe station ( Figure 1a). The sensor chip was vertically immersed in a custom fluid cell, together with a commercial bulky Ag/AgCl reference electrode (DriRef-2, World Precision Instruments Ltd, Hitchin, UK). The fluid cell had the dimensions of 1.0 × 0.5 × 3.5 cm 3 , and it was fitted into a heavy plastic support during measurements, to ensure stability and avoid mechanical noise. At a height of 1.5 cm from the bottom of the fluid cell, a lateral hole allowed the injection of the target solutions into the cell during measurements. The maximum capacity of the cuvette was 600 μL. The experimental measurements were carried out by a Keithley 2636B (Tektronix, Beaverton, OR, USA), which have a sensitivity in the fA range for current measurements and μV range for potential measurements. Two source-meter units (SMUs) of the Keithley apparatus were used. The SMU1 applied the voltage between the drain and the source electrodes (Vds), which was kept constant at 100 mV for all the experiments, and measured the corresponding current flow (Ids). The SMU2 applied a potential bias (Vref) to the reference electrode in solution. Sensing Chip Microfabrication The sensing chip was microfabricated through the procedure shown in Figure 1b, in order to have the gold EG and two Ag/AgCl pseudoreference electrodes on the same silicon chip. The manufacturing procedure was carried on thermal oxidized (100) 4" silicon wafers (n-type) accurately cleaned with an acetone and 2-propanol ultrasonic bath for 15 min and then rinsed in deionised water, followed by hotplate dehydration at 120 • C for 600 s. A single layer of AZ5214 reversal image (negative tone) photoresist (MicroChemicals GmbH, Ulm, Germany) about 1.4 µm-thick was dispensed on the silicon substrate and soft-baked at 110 • C for 50 s. The patterning of the photoresist was realized by 365 nm optical lithography process in hard contact mode with a Karl Suss MA6 (Suss Microtec, Waterbury Center, VT, USA) tool. The photoresist layer was exposed with a 5" quartz mask with chromium patterns for the realization of the working electrode; after reversal image procedure completion (for AZ5214), the sample was dipped into AZ326 developer to remove soluble photoresist areas. In order to completely remove photoresist residuals from silicon exposed surface and guarantee highly reproducible subsequent etching process, a final five-second spray step with AZ326 developer was added to standard procedure. After photolithographic steps, a Cr/Au thin film (5 nm/200 nm) was deposited by e-beam evaporation, followed by lift-off to pattern the working electrode and the contact paths for reference electrodes (see process graphic Figure 1). 
A second lithography step with the same recipe was performed for the deposition of the silver reference electrodes, completed by a 300 nm silver deposition by e-beam evaporator. After the fabrication of all electrodes, the wafer was fully passivated by 400 nm of ultra-low residual stress (+50 MPa) PECVD silicon nitride, deposited at 350 • C. A final photolithography process allows to selectively etch the silicon nitride layer by Inductively Coupled Plasma process with a SF 6 /O 2 mixture to expose working and reference electrodes to the solution. A final measurement of the gold working electrode before and after the plasma process, confirms the average final roughness of about 5 nm. Figure 1 also shows the fabricated electrode with two embedded pseudoreference electrodes around the circular working electrode. The electrochemical chloridation of the silver electrodes to Ag/AgCl was performed in 1% NaCl aqueous solution with an applied DC current density of 0.003 mA/mm 2 for 90 s, followed by multiple rinses in deionized water. These Ag/AgCl microfabricated pseudoreference electrodes were introduced in the sensing chip design to improve the compactness of the experimental setup; this deserving interest in connection with the developing of future devices. However, preliminary, careful experiments conducted by using these electrodes to set the gate potential to a fixed value, evidenced a tension offset with respect to the bulky commercial electrode (see previous Section 2.1) of about 100 ± 20 mV. Therefore, in order to provide experimental results comparable with those reported in the literature with the same MOSFET working also in liquid environment [32], we decided to conduct experiments by using a bulky commercial electrode as reference electrode. Extended-Gate Electrode Surface Biofunctionalization Prior to functionalization, the EG gold electrode was cleaned with H 2 O 2 (Merck Millipore, Darmstadt, Germany) under ultraviolet (UV) light for 30 min, according to the so-called liquid-based hydrogen peroxide-mediated UV-photooxidation (liquid-UVPO) technique [33]). Then, it was rinsed with deionized water and dried with nitrogen. The biofunctionalization procedure ( Figure 2) was performed essentially in agreement with that reported by Tarasov and coworkers [28]. It consists of four steps, each one separated by the following by rinsing the EG surface with phosphate-buffered saline (PBS) 1X (147 mM) solution at pH 7.4 for 15 min, to remove unbound molecules, successive rinsing with filtered (0.2 µm filtering membrane pore size) deionized water, and drying by pure nitrogen. First, the EG electrode was incubated with a mixed solution of SH-PEG-COOH (0.5 kDa; Biochempeg, Watertown, MA, USA), and SH-PEG (10 kDa; Merck Millipore), at a concentration ratio of 40 µM to 2 µM, respectively, in PBS 1X solution at pH 7.4, for one hour at room temperature. ratio in PBS 1X solution at pH 7.4, for 20 min at room temperature. Then, the EG electrode was incubated overnight at 4 • C with the target-capturing p53 antibody (mouse monoclonal anti-p53wt, PAb1620 clone antibody; Merck Millipore) at a concentration of 13 nM in PBS 1X solution at pH 7.4. Finally, to avoid nonspecific binding, the unbound sites on the electrode surface were blocked by incubating it with a 1 mg/mL bovine serum albumin (BSA; Merck Millipore) solution in PBS 1X solution at pH 7.4 for one hour at room temperature. 
Biosensing Experiments All the biosensing experiments were performed by using PBS 100 mM solution at pH 8.0 as working buffer. The ionic strength of the working buffer was chosen to mimic the screening length of physiological solutions [26], while the pH value was selected somewhat higher than the physiological one (7.4), in order to increase the net charge of the p53wt target molecules. Indeed, p53wt (Genscript, Piscataway, NJ, USA) has a pI of 6.3 in its unphosphorilated state [15], and thus it will bear a negative net charge at pH 8.0 [17]. Due to the geometry of the EG sensing area (circular gold area on the top of the sensor chip in Figure 1a) and of the fluid cell, the starting buffer volume was set at 350 μL, to ensure the complete immersion of the EG electrode area. The electrolyte solution was not purged to remove dissolved oxygen and was exposed to the atmosphere during the measurements. Each biosensing experiment consisted of three steps: (i) the BioFET transfer curve (Ids-Vref curve) was acquired by measuring the MOSFET Ids as a function of Vref, as applied by the reference electrode, while it is dipped in the electrolyte solution; (ii) Ids at constant both Vds (100 mV) and Vref (400 mV) was consecutively measured, until a stable signal was reached; (iii) Ids at constant Vds (100 mV) and Vref (400 mV) was consecutively measured while small volumes (20 μL) at increasing concentrations of p53wt were added by pipetting into the fluid cell (the same working buffer was used, in order to avoid current responses as a result of change in pH or ionic strength in the electrolyte [34]). Transfer curves from nine different experiments were used to calculate the average curves. Per each tested p53wt concentration, five different biosensing experiments were carrier out. Data analysis, plotting and fitting were performed by using the Microcalc OriginPro 8.5 software product (OriginLab Corporation, Northampton, MA, USA). Figure 3 shows the average transfer curve Ids-Vref of the BioFET as obtained when the EG electrode is functionalized with p53wt specific antibody (black curve). For comparison, the average transfer curve obtained with bare gold EG surface is also shown (red curve). The slope in the linear regime of the single transfer curves, corresponding to the gm transconductance [17]), has been Biosensing Experiments All the biosensing experiments were performed by using PBS 100 mM solution at pH 8.0 as working buffer. The ionic strength of the working buffer was chosen to mimic the screening length of physiological solutions [26], while the pH value was selected somewhat higher than the physiological one (7.4), in order to increase the net charge of the p53 wt target molecules. Indeed, p53 wt (Genscript, Piscataway, NJ, USA) has a pI of 6.3 in its unphosphorilated state [15], and thus it will bear a negative net charge at pH 8.0 [17]. Due to the geometry of the EG sensing area (circular gold area on the top of the sensor chip in Figure 1a) and of the fluid cell, the starting buffer volume was set at 350 µL, to ensure the complete immersion of the EG electrode area. The electrolyte solution was not purged to remove dissolved oxygen and was exposed to the atmosphere during the measurements. 
Each biosensing experiment consisted of three steps: (i) the BioFET transfer curve (I ds -V ref curve) was acquired by measuring the MOSFET I ds as a function of V ref , as applied by the reference electrode, while it is dipped in the electrolyte solution; (ii) I ds at constant both V ds (100 mV) and V ref (400 mV) was consecutively measured, until a stable signal was reached; (iii) I ds at constant V ds (100 mV) and V ref (400 mV) was consecutively measured while small volumes (20 µL) at increasing concentrations of p53 wt were added by pipetting into the fluid cell (the same working buffer was used, in order to avoid current responses as a result of change in pH or ionic strength in the electrolyte [34]). Transfer curves from nine different experiments were used to calculate the average curves. Per each tested p53 wt concentration, five different biosensing experiments were carrier out. Data analysis, plotting and fitting were performed by using the Microcalc OriginPro 8.5 software product (OriginLab Corporation, Northampton, MA, USA). Figure 3 shows the average transfer curve I ds -V ref of the BioFET as obtained when the EG electrode is functionalized with p53 wt specific antibody (black curve). For comparison, the average transfer curve obtained with bare gold EG surface is also shown (red curve). The slope in the linear regime of the single transfer curves, corresponding to the g m transconductance [17]), has been obtained by a linear fit of the curves in the 300-500 mV range. The obtained average values are (70.5 ± 2.7) × 10 −6 A/V when the EG electrode is clean, and (79.6 ± 1.2) × 10 −6 A/V when it is biofunctionalized. negative values, as registered for the functionalized EG electrode, implies the accumulation of positive charges at this electrode. BioFET Transfer Characteristics Based on the obtained transfer curves, we set the working Vref at 400mV, to ensure that the ntype MOSFET is working in the linear regime conditions. In this way, each variation of the recorded Ids will be proportional to the corresponding shift in the Vth (see inset in Figure 3), with the transconductance gm being the proportional constant (ΔIds = −gm ΔVth)) [17]. [35], and the correspondence between Ids and Vth changes are shown. Stability Over Time The starting Ids current in every BioFET experiment (at constant Vds = 100 mV and Vref = 400 mV) is in range of 30-40 μA and it always exhibited an approximately exponential increase of a few μA before reaching a stable value, as shown in Figure 4. A similar behaviour is observed also when clean gold EG electrodes are used. Drift of the Ids in BioFET could depend on leakage due to reference electrode microfabrication [19]. Indeed, we found some instabilities (not shown) in the gate voltage when our microfabricated Ag/AgCl pseudoreference electrodes (see Figure 1) were used. Therefore, we decided to use a much more stable bulky Ag/AgCl reference electrode [34], as discussed in Section 2.2. In this case, the (around ten µA) is highlighted by the grey area. Inset: both extraction of the threshold voltage (V th ) from a representative curve, by a linear extrapolation method [35], and the correspondence between I ds and V th changes are shown. The threshold voltages (V th ) of the transfer curves have been obtained by applying a linear extrapolation method [35], as shown in Figure 3 (inset). Their average values are (+160 ± 30) mV for clean gold EG electrode, and (−70 ± 40) mV for biofunctionalized gold EG electrode. 
These deviations from the zero-threshold which characterizes the commercial n-type MOSFET used, could be attributed to a combination of three possible effects: (i) the presence of the electrical double layer (EDL) which is formed in the electrolytic solution; (ii) the presence of the different molecules involved in the electrode functionalization; (iii) changes in the flat band voltage, somewhat correlated to the previous mechanisms [34,36]. In particular, the shift of the threshold (and obviously of the transfer curve) toward positive values, as observed for the clean gold electrode dipped in the electrolytic solution (see Figure 3), is indicative of the accumulation of negative charges at the EG electrode surface. Indeed, when an n-type MOSFET is used as transduction element, the accumulation of negative charges on the EG surface (and then on the FET gate) induces holes in the semiconductor channel and, thus, a decrease of I ds measured at a fixed V ref . Conversely, the threshold shift toward negative values, as registered for the functionalized EG electrode, implies the accumulation of positive charges at this electrode. Based on the obtained transfer curves, we set the working V ref at 400mV, to ensure that the n-type MOSFET is working in the linear regime conditions. In this way, each variation of the recorded I ds will be proportional to the corresponding shift in the V th (see inset in Figure 3), with the transconductance g m being the proportional constant (∆I ds = −g m ∆V th )) [17]. Stability over Time The starting I ds current in every BioFET experiment (at constant V ds = 100 mV and V ref = 400 mV) is in range of 30-40 µA and it always exhibited an approximately exponential increase of a few µA before reaching a stable value, as shown in Figure 4. A similar behaviour is observed also when clean gold EG electrodes are used. To the best of our knowledge, no previous studies have mentioned similar current drifts when a gold EG electrode is immersed in the electrolyte solution. Nevertheless, we can speculate that, in this case, the application of Vref could cause a charge translocation from the gold EG to the gate electrode of the FET, where dispersive transport may occur due to trapped states in the dielectric layer. Response to p53wt: Extended-Gate Surface Potential Changes and Detection Limit After current stabilization, the sensor response to p53wt has been tested by continuously recording Ids while injecting small volumes (20 μL) of p53wt solutions in the fluid cell. As shown in Figure 5a-c, p53wt solution injection induces a decrease of Ids, as due to a shift of the Vth towards positive values, as expected since p53wt bears a net negative charge at pH 8.0 [15]. Protein injections have been performed also by sequentially injecting increasing concentrations of p53wt. In this last case, at each step, the total Ids response (Ids shift with respect to the starting value) has been associated to the total p53wt concentration (as obtained by cumulating the different injections). Comparable current jumps are observed at the same p53wt concentration, either in single and in multiple injection experiments. Concentrations of p53wt from 50 pM to 10 nM have been tested, obtaining Ids jumps ranging from about −5.0 × 10 −8 A up to about −5.0 × 10 −7 A. The measured current jumps ΔIds have been converted in the corresponding ΔVth, and averaged per p53wt concentration. 
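A sketch of the two operations used above, the linear-extrapolation estimate of V_th from the transfer curve and the conversion of current jumps into threshold-voltage shifts via dI_ds = -g_m * dV_th, is given below. The 300-500 mV fitting window and the numerical values come from the text; the zero-current extrapolation is a simple small-V_ds approximation:

import numpy as np

def threshold_voltage(v_ref, i_ds, fit_range=(0.3, 0.5)):
    """Estimate V_th (V) and g_m (A/V) by fitting the linear part of the
    transfer curve I_ds(V_ref) and extrapolating the line to I_ds = 0."""
    v_ref, i_ds = np.asarray(v_ref), np.asarray(i_ds)
    mask = (v_ref >= fit_range[0]) & (v_ref <= fit_range[1])
    g_m, intercept = np.polyfit(v_ref[mask], i_ds[mask], 1)
    return -intercept / g_m, g_m

def delta_vth(delta_i_ds, g_m):
    """Threshold-voltage shift from a current jump: dI_ds = -g_m * dV_th."""
    return -delta_i_ds / g_m

# Orders of magnitude reported above (illustrative use)
g_m = 79.6e-6                        # A/V, biofunctionalized electrode
for jump in (-5.0e-8, -5.0e-7):      # A, smallest / largest p53wt-induced jumps
    print(f"dI_ds = {jump:.1e} A  ->  dV_th = {delta_vth(jump, g_m)*1e3:.2f} mV")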
The so-called BioFET calibration curve can thus be obtained by plotting the obtained average ΔVth as a function of the p53wt concentration, on a semilogarithmic scale, as shown in Figure 6 [17]. The error bars represent the data standard deviation, which likely depends on variability in the sensor microfabrication, solution preparation and volume injection, and also on the antigen concentration. The best regression line (R 2 = 0.96), shown by the red line in Figure 6, reveals that the BioFET sensor responds to three orders of magnitude of the target concentration. Therefore, Drift of the I ds in BioFET could depend on leakage due to reference electrode microfabrication [19]. Indeed, we found some instabilities (not shown) in the gate voltage when our microfabricated Ag/AgCl pseudoreference electrodes (see Figure 1) were used. Therefore, we decided to use a much more stable bulky Ag/AgCl reference electrode [34], as discussed in Section 2.2. In this case, the observed I ds drift can depend only on changes of the surface potential of the gate electrode, which affects the threshold voltage V th in the transfer curve [34]. An analogous drift of V th has been frequently observed when oxide materials (i.e., SiO 2 , Si 3 N 4 , Al 2 O 3 , Ta 2 O 5 ) are used for the BioFET gate electrode and directly immersed in the electrolyte solution [17]. This initial current drift has been described in terms of a rate-limiting ion transport from the solution to buried sites on the gate surface, and the application of a dispersive transport model led to a power-law time dependent decay of diffusivity and mobility, consistent with a stretched exponential time dependence of the gate surface charge [34]. To the best of our knowledge, no previous studies have mentioned similar current drifts when a gold EG electrode is immersed in the electrolyte solution. Nevertheless, we can speculate that, in this case, the application of V ref could cause a charge translocation from the gold EG to the gate electrode of the FET, where dispersive transport may occur due to trapped states in the dielectric layer. Response to p53 wt : Extended-Gate Surface Potential Changes and Detection Limit After current stabilization, the sensor response to p53 wt has been tested by continuously recording I ds while injecting small volumes (20 µL) of p53 wt solutions in the fluid cell. As shown in Figure 5a-c, p53 wt solution injection induces a decrease of I ds , as due to a shift of the V th towards positive values, as expected since p53 wt bears a net negative charge at pH 8.0 [15]. Protein injections have been performed also by sequentially injecting increasing concentrations of p53 wt . In this last case, at each step, the total I ds response (I ds shift with respect to the starting value) has been associated to the total p53 wt concentration (as obtained by cumulating the different injections). Comparable current jumps are observed at the same p53 wt concentration, either in single and in multiple injection experiments. Concentrations of p53 wt from 50 pM to 10 nM have been tested, obtaining I ds jumps ranging from about −5.0 × 10 −8 A up to about −5.0 × 10 −7 A. Sensors 2020, 20, x 8 of 13 the calibration curve allows to determine an unknown p53wt concentration from the corresponding measured Vth shift, within the above mentioned range. From the slope of the regression line, a sensitivity (σ) of 1.5 ± 0.2 mV/decade is obtained, in agreement with the indication of the International Union of Pure and Applied Chemistry (IUPAC) [17]. 
Such a sensitivity value, together with the tested range of concentration, enables our BioFET as suitable tool for the assays of p53wt at concentrations expected in cancer cells with p53 gene mutation (0.1 nM), as well at concentrations expected in healthy cells (1 nM), and beyond (to diagnostic diseases characterized by high level of p53wt in blood). It is worth noting that the obtained sensitivity is low if compared to the limiting value of 59 mV/decade, expected if biomolecule binding would follow the Nernstian model of equilibrium potentials of ions across semipermeable membranes [17]. Such an extreme value has been previously reported for glucose [37] and urea [32] BioFETs with gold EG. However, there are significant differences between proton equilibria and macrobiomolecule binding equilibria, which imply that the Nernst model would not apply to protein binding [17]. Indeed, much lower sensitivity values (down to about 10 −3 mV/decade) have been reported for protein biosensing via BioFET with gold EG [38]. To test the specificity of the BioFET, we have analysed the current response to the addition of two other biomolecules (BSA and human serum albumin-HSA), which also bear a negative net charge at the working buffer pH, by injecting them in the sensor fluid cell at comparable concentrations and volumes with respect to p53wt. A representative current response obtained as consequence of the injection of 20 μL of working buffer with 1 nM of HSA is shown in Figure 5c. The injection of HSA or BSA generally gives rise to current variations in the (-5.0 ± 3.0) × 10 −8 A range, resulting lower than the p53wt induced current changes (except for the 50 pM concentration). The corresponding Vth variation is in the 0.6 ± 0.4 mV range, and it is represented, in Figure 6, by the grey area at the bottom. This offset is likely due to unspecific interactions between the proteins and the EG biofunctionalized electrode [17,39], and it can be considered as a blank signal. The measured current jumps ∆I ds have been converted in the corresponding ∆V th , and averaged per p53 wt concentration. The so-called BioFET calibration curve can thus be obtained by plotting the obtained average ∆V th as a function of the p53 wt concentration, on a semilogarithmic scale, as shown in Figure 6 [17]. The error bars represent the data standard deviation, which likely depends on variability in the sensor microfabrication, solution preparation and volume injection, and also on the antigen concentration. The best regression line (R 2 = 0.96), shown by the red line in Figure 6, reveals that the BioFET sensor responds to three orders of magnitude of the target concentration. Therefore, the calibration curve allows to determine an unknown p53 wt concentration from the corresponding measured V th shift, within the above mentioned range. From the slope of the regression line, a sensitivity (σ) of 1.5 ± 0.2 mV/decade is obtained, in agreement with the indication of the International Union of Pure and Applied Chemistry (IUPAC) [17]. Such a sensitivity value, together with the tested range of concentration, enables our BioFET as suitable tool for the assays of p53 wt at concentrations expected in cancer cells with p53 gene mutation (0.1 nM), as well at concentrations expected in healthy cells (1 nM), and beyond (to diagnostic diseases characterized by high level of p53 wt in blood). 
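A minimal sketch of how the semilogarithmic calibration curve, the sensitivity in mV/decade and the blank-based LOD estimates described above can be computed is given below. The calibration points are illustrative values chosen to give roughly the reported 1.5 mV/decade slope, not the measured data; the blank mean and standard deviation are those quoted in the text:

import numpy as np

conc = np.array([1e-10, 1e-9, 1e-8])      # M (illustrative calibration points)
dvth = np.array([1.0, 2.5, 4.0])          # mV

# Semilog calibration: dV_th = sigma * log10(C) + b
sigma, b = np.polyfit(np.log10(conc), dvth, 1)
print(f"sensitivity = {sigma:.2f} mV/decade")

# Reading an unknown concentration back from a measured shift
dvth_measured = 3.2                        # mV
print(f"C = {10 ** ((dvth_measured - b) / sigma):.2e} M")

# LOD from the unspecific (BSA/HSA) blank response: mean + k * SD, k = 1, 2, 3
blank_mean, blank_sd = 0.6, 0.4            # mV
for k in (1, 2, 3):
    signal = blank_mean + k * blank_sd
    print(f"k = {k}: dV_th = {signal:.1f} mV -> LOD ~ {10 ** ((signal - b) / sigma):.2e} M")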
It is worth noting that the obtained sensitivity is low if compared to the limiting value of 59 mV/decade, expected if biomolecule binding would follow the Nernstian model of equilibrium potentials of ions across semipermeable membranes [17]. Such an extreme value has been previously reported for glucose [37] and urea [32] BioFETs with gold EG. However, there are significant differences between proton equilibria and macrobiomolecule binding equilibria, which imply that the Nernst model would not apply to protein binding [17]. Indeed, much lower sensitivity values (down to about 10 −3 mV/decade) have been reported for protein biosensing via BioFET with gold EG [38]. Sensors 2020, 20, x 9 of 13 Determination of a blank signal allows us to calculate the BioFET LOD, which is defined by IUPAC as the smallest measure that can be reasonably detected for a given analytical procedure, and which is obtained by the blank signal incremented by a certain number of times of its standard deviation (SD), depending on the confidence level required [17]. For BioFET devices with gold EG, the LOD has been calculated by the value of the blank signal incremented by three times its standard deviation [40]. Other authors calculated the LOD as three times the standard deviation of the blank signal divided by the slope of the calibration curve [38]. We have obtained the LOD of our BioFET by calculating <Vth blank> + k SD blank per k = 1, 2, 3, obtaining LOD = 130, 190 and 290 pM, respectively. On the other hand, some authors refer to the LOD simply as to the lowest target concentration that provide a detectable sensor response visually, based on the calibration plot [28,32]. In such a case, a LOD of 100 pM is obtained for our BioFET. A LOD of 100 pM obtained in physiological solution is a good result if compared with the previously realized BioFET for p53wt, for which a label-free detection of 100 nM in diluted buffer has been reported [15]. On the other hand, the LOD of our BioFET is from one to four orders of magnitude higher than those previously reported for optical biosensors (see Introduction), except for the colorimetric assay, which has a LOD of 5 nM [9]. However, our BioFET is able to detect label-free p53wt in physiological buffer, while most of the previously proposed methodologies (excepted SPR [4]) are label-requiring and/or need dilution of the medium (with a few exception [3,9,11,13]). Moreover, thank to the chosen antibody, our BioFET is highly specifically sensitive only to p53wt, excluding the mutated forms; this avoiding misleading (unsorted) outputs. Finally, our BioFET has the clear advantages of reduced dimension, low cost and capability to work with small sample volumes, which render it a good candidate for the future development of portable devices. Energetics of the Adsorption Process The BioFET response to the target adsorption can further provide information on the energetics of the receptor-target binding, by applying models based on the Langmuir adsorption isotherm. This has been previously obtained both from real-time measurements through flow cells [41,42] and by static measurements such as in our case [26,27,39,43,44]. 
Within this context, and taking into account the offset observed when nonspecific adsorption occurs (ΔV0), the Vth variation can be described by: To test the specificity of the BioFET, we have analysed the current response to the addition of two other biomolecules (BSA and human serum albumin-HSA), which also bear a negative net charge at the working buffer pH, by injecting them in the sensor fluid cell at comparable concentrations and volumes with respect to p53 wt . A representative current response obtained as consequence of the injection of 20 µL of working buffer with 1 nM of HSA is shown in Figure 5c. The injection of HSA or BSA generally gives rise to current variations in the (-5.0 ± 3.0) × 10 −8 A range, resulting lower than the p53 wt induced current changes (except for the 50 pM concentration). The corresponding V th variation is in the 0.6 ± 0.4 mV range, and it is represented, in Figure 6, by the grey area at the bottom. This offset is likely due to unspecific interactions between the proteins and the EG biofunctionalized electrode [17,39], and it can be considered as a blank signal. Determination of a blank signal allows us to calculate the BioFET LOD, which is defined by IUPAC as the smallest measure that can be reasonably detected for a given analytical procedure, and which is obtained by the blank signal incremented by a certain number of times of its standard deviation (SD), depending on the confidence level required [17]. For BioFET devices with gold EG, the LOD has been calculated by the value of the blank signal incremented by three times its standard deviation [40]. Other authors calculated the LOD as three times the standard deviation of the blank signal divided by the slope of the calibration curve [38]. We have obtained the LOD of our BioFET by calculating <V th blank > + k SD blank per k = 1, 2, 3, obtaining LOD = 130, 190 and 290 pM, respectively. On the other hand, some authors refer to the LOD simply as to the lowest target concentration that provide a detectable sensor response visually, based on the calibration plot [28,32]. In such a case, a LOD of 100 pM is obtained for our BioFET. A LOD of 100 pM obtained in physiological solution is a good result if compared with the previously realized BioFET for p53 wt , for which a label-free detection of 100 nM in diluted buffer has been reported [15]. On the other hand, the LOD of our BioFET is from one to four orders of magnitude higher than those previously reported for optical biosensors (see Introduction), except for the colorimetric assay, which has a LOD of 5 nM [9]. However, our BioFET is able to detect label-free p53 wt in physiological buffer, while most of the previously proposed methodologies (excepted SPR [4]) are label-requiring and/or need dilution of the medium (with a few exception [3,9,11,13]). Moreover, thank to the chosen antibody, our BioFET is highly specifically sensitive only to p53 wt , excluding the mutated forms; this avoiding misleading (unsorted) outputs. Finally, our BioFET has the clear advantages of reduced dimension, low cost and capability to work with small sample volumes, which render it a good candidate for the future development of portable devices. Energetics of the Adsorption Process The BioFET response to the target adsorption can further provide information on the energetics of the receptor-target binding, by applying models based on the Langmuir adsorption isotherm. 
This has been previously obtained both from real-time measurements through flow cells [41,42] and by static measurements such as in our case [26,27,39,43,44]. Within this context, and taking into account the offset observed when nonspecific adsorption occurs (∆V 0 ), the V th variation can be described by: ∆V thmax is the potential variation associated at the surface saturation, C p53 is the p53 wt concentration in solution, and K D is the dissociation constant at the chemical equilibrium: The experimental ∆V th as a function of C p53 have been replotted, up to the saturation level (i.e., up to 50 nM), in linear scale, in Figure 7. As for Figure 6, the experimental errors could mainly originate from variability in the sensor microfabrication, solution preparation, volume injection and antigen concentration. In the measurements performed at 50 nM concentration, in addition to the variability induced by the above mentioned issues, it should be taken into account the variability in the sensor surface preparation, which become critical in the saturation regime, when the number of available binding site on the electrode represents an upper limit to charge accumulation at the gate surface. The data have been fitted with the proposed model (red line), obtaining a good match (R 2 = 0.96). From the fitting analysis, an offset value of 1.0 ± 0.4 mV is obtained, which is consistent with the blank signal represented by the grey area in Figure 6. The dissociation constant of the interaction among p53 wt and its conformational antibody PAb1620 is also obtained, as K D = (2.2 ± 1.3) × 10 −8 M. This corresponds to an affinity constant K A = 1/K D ≈ 4.5 × 10 7 M −1 that is only slightly lower than those previously reported between p53 wt protein and PAb241, PAb246 antibodies or consensus ds-DNA (6.9 × 10 8 M −1 , 3.7 × 10 8 M −1 and 1.1 × 10 8 M −1 , respectively [4]). Indeed, the monoclonal antibody we used (PAb1620) binds an epitope on the surface of correctly folded p53 wt , and, upon binding, allosterically inhibits p53 wt binding to DNA. However, it is highly sensitive to protein unfolding or mutation (even single aminoacidic mutation) and p53 wt is known to assume different conformations, not all of which are recognized by PAb1620 [45]. This could partially justify the observed K D, which, on the other hand, has not been so far reported in the literature. An even lower affinity (K D = 3.5 × 10 −6 M) has been reported by SPR for the binding of a peptide cloning the p53 wt binding site to PAb1620 [45]. allosterically inhibits p53wt binding to DNA. However, it is highly sensitive to protein unfolding or mutation (even single aminoacidic mutation) and p53wt is known to assume different conformations, not all of which are recognized by PAb1620 [45]. This could partially justify the observed KD, which, on the other hand, has not been so far reported in the literature. An even lower affinity (KD = 3.5 × 10 −6 M) has been reported by SPR for the binding of a peptide cloning the p53wt binding site to PAb1620 [45]. Conclusions We have developed a BioFET for the detection, in physiological-like solution, of the wild-type form of the tumour suppressor p53, whose concentration in cells and blood has a clinical significance for early diagnosis of some types of cancer and other importantly related diseases. We have implemented the device with a sensing electrode constituted by a carefully microfabricated high-purity gold electrode acting as a disposable extended gate. 
The sensing electrode is, at one of its ends, placed in the biological solution and, at the other end, connected to the MOSFET gate. Electrostatic (Debye) screening by the solution ions is mitigated by creating at the EG surface a Donnan-like membrane, permeable to the target biomolecules, by means of suitable PEG polymers included in the biofunctionalization procedure. We have thus obtained a BioFET with a disposable biosensing element, endowed with a good sensitivity (1.5 ± 0.2 mV/decade). The BioFET LOD can be as low as 100 pM, with a very good specificity toward p53 wt . The range of significance spans three orders of magnitude, falling within the range of p53 wt concentrations that are relevant for the early diagnosis and prognosis of cancer. All these features pave the way for the future development of a related point-of-care device, provided that it is compactly coupled to a suitable microelectronic system able to acquire, process and display high-resolution, statistically significant signals, with the aim of reducing dimensions and cost while remaining highly competitive with standard immunoassays.
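Returning briefly to the Langmuir analysis of the previous section, the following sketch fits ∆V th = ∆V 0 + ∆V thmax C/(K D + C) to a synthetic concentration-response series with SciPy. The data points and initial guesses are invented for illustration and are not the measured values behind Figure 7.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C_nM, dV0, dVmax, KD_nM):
    """Langmuir adsorption isotherm with a nonspecific-offset term (mV, nM)."""
    return dV0 + dVmax * C_nM / (KD_nM + C_nM)

# Hypothetical concentration series (nM) and V_th shifts (mV); placeholders only.
C_nM = np.array([0.05, 0.1, 1.0, 5.0, 10.0, 50.0])
dVth = np.array([0.9, 1.3, 2.0, 3.4, 4.3, 5.6])

# Initial guesses: ~1 mV offset, ~6 mV saturation plateau, K_D of a few tens of nM.
popt, pcov = curve_fit(langmuir, C_nM, dVth, p0=[1.0, 6.0, 20.0])
perr = np.sqrt(np.diag(pcov))

dV0, dVmax, KD_nM = popt
print(f"offset  dV0   = {dV0:.2f} +/- {perr[0]:.2f} mV")
print(f"plateau dVmax = {dVmax:.2f} +/- {perr[1]:.2f} mV")
print(f"K_D           = {KD_nM * 1e-9:.2e} M  (K_A = {1.0 / (KD_nM * 1e-9):.2e} 1/M)")
```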
2020-11-12T09:10:12.991Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "8e0569ea835e3200c8248cd68bec0d8c8ad510c5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/20/21/6364/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bf5d1dd6d9571c5399faae149204b6888526a717", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Computer Science", "Chemistry" ] }
18651717
pes2o/s2orc
v3-fos-license
Localized surface plasmon resonances dominated giant lateral photovoltaic effect observed in ZnO/Ag/Si nanostructure We report substantially enlarged lateral photovoltaic effect (LPE) in the ZnO/Ag/Si nanostructures. The maximum LPE sensitivity (55.05 mv/mm) obtained in this structure is about seven times larger than that observed in the control sample (7.88 mv/mm) of ZnO/Si. We attribute this phenomenon to the strong localized surface plasmon resonances (LSPRs) induced by nano Ag semicontinuous films. Quite different from the traditional LPE in PN junction type structures, in which light-generated carriers contributed to LPE merely depends on direct excitation of light in semiconductor, this work firstly demonstrates that, by introducing a super thin metal Ag in the interface between two different kinds of semiconductors, the nanoscale Ag embedded in the interface will produce strong resonance of localized field, causing extra intraband excitation, interband excitation and an enhanced direct excitation. As a consequence, these LSPRs dominated contributions harvest much more carriers, giving rise to a greatly enhanced LPE. In particular, this LSPRs-driven mechanism constitutes a sharp contrast to the traditional LPE operation mechanism. This work suggests a brand new LSPRs approach for tailoring LPE-based devices and also opens avenues of research within current photoelectric sensors area. Results and Discussion All the experiments were performed in the ZnO/Ag/Si and ZnO/Si nanostructures. The thickness of the N-type Si (111) wafer is around 0.3 mm and the resistivity is in the range of 50-80 Ω cm at room temperature. The Ag nanoscale films were deposited by dc magnetron sputtering at room temperature, and the nominal thickness controlled by different grown time was ranging from 2.7 nm to 13.5 nm. The ZnO layer was then fabricated by magnetron reactive sputtering under the deposition pressure 0.6 Pa for Ar and 0.15Pa for O 2 , and the identical thickness was approximately 28.5 nm. The samples were labeled as sample 1 to sample 4 in accordance with the sequential silver thickness 2.7 nm, 8.1 nm, 10.8 nm and 13.5 nm. Here the no silver intermingled ZnO/Si was regarded as the control sample. As a matter of fact, the deposition rate of Ag and ZnO determined by the stylus profile meter on thick calibration samples were checked out to be 1.35 Å/s and 0.95 Å/s respectively. All the samples were scanned spatially with a He-Ne laser focused on a roughly 50 μm diameter spot at the surface and without any spurious illumination (e.g. background light) reaching to the samples. Furthermore, all the contacts (less than 1 mm in diameter) to the film were formed by alloying indium and showed no measurable rectifying effect. Other experimental details were similar with our recently published papers on LPE 12,13 . As shown in the Fig. 1a, when the laser spot is impinging on the ZnO side of nanostructures (see the inset of Fig. 1b), all the values of measured LPVs in these systems basically have presented with linear change trend versus the laser spot position as we have reported before 9 . The nonlinearity (spatial resolution for 100 μm) of the effective linear area (namely the distance between the innermost side of the two contacts) was kept in a range of 3-10% in this report. Clearly, these results were much smaller than the lowest nonlinearity value 15% required in the practical application, suggesting an appropriate candidate for the PSD devices. 
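As an illustrative aside on how a position-sensitive-detector figure of merit can be extracted from such a scan, the sketch below fits a line to synthetic LPV-versus-position data and reports the slope as the sensitivity, together with a simple residual-based nonlinearity estimate. The numbers are invented, and the nonlinearity definition used in the paper (based on a 100 μm spatial resolution) may differ from this simplified metric.

```python
import numpy as np

# Hypothetical laser-spot positions (mm, measured from the midpoint between contacts)
# and the corresponding lateral photovoltages (mV).
x_mm = np.linspace(-1.5, 1.5, 13)
lpv_mV = 50.0 * x_mm + np.random.default_rng(0).normal(0.0, 1.0, x_mm.size)

# Position sensitivity = slope of the best-fit line (mV/mm).
slope, intercept = np.polyfit(x_mm, lpv_mV, 1)

# A simple nonlinearity metric: maximum residual from the fit,
# normalised to the full-scale LPV swing over the scanned range.
fit = slope * x_mm + intercept
nonlinearity = np.max(np.abs(lpv_mV - fit)) / (lpv_mV.max() - lpv_mV.min())

print(f"sensitivity  ~ {slope:.2f} mV/mm")
print(f"nonlinearity ~ {100 * nonlinearity:.1f} % of full scale")
```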
Besides, all the LPVs output induced by the 405 nm laser have obtained their maximum values respectively when the incident light is closest to the contacts (namely, the electrodes), and then as the spot scanned away from the indium electrodes, the values displayed a fitted linear decrease and dropped to zero at the midpoint. Yet, here the unexpected fact is that the largest sensitivity signal 55.05 mv/mm detected in sample 3 (see the pink dot depicted in the inset of Fig. 1a) was about seven times larger than the one 7.88 mv/mm tested in the control sample ZnO/Si. To better investigate this particular phenomenon, we also made a series of additional LPV tests for all these samples with the varying laser wavelength from 405 nm to 780 nm within the 4 mm contacts spacing as shown in Fig 1b. Indeed, we can discover that all the samples embedded with the semicontinuous Ag film have unambiguously exhibited better sensitivities compared with the control sample ZnO/Si in the visible and near infrared region. Therefore, it is clear that the Ag semicontinuous film in these complex nanostructures is certain to play a crucial role in modulating the optical and electrical properties of the multilayer systems. It has been well known that the silver embedded nanoscale metal-dielectric composite materials can generate strong localized surface plasmon resonances (LSPRs) at frequencies in the visible range since its dielectric constant has a large negative real part and a relatively small imaginary part 21,24 . Given this peculiar behavior, many positive results have been achieved in the Ag nanostructure enhanced photoluminescence (PL) material researches 18,25 , especially for the band gap emission (~370 nm) of ZnO hybrid systems in recent years because that the metallic LSPRs here can not only take the special responsibility for the more efficient absorption of excitation light but also assist in radiating the consequent fluorescence emission of nearby molecules to the far-field 26 . Therefore, PL spectra are always clues to the presence of potential LSPRs. Based on this fact and considering that the silver semicontinuous interlayer film in this research was also processed in nanoscale, we performed a detailed investigation of the PL spectrum to check whether these samples show some LSPRs characteristics as an attempt. The Fig. 2a shows the PL spectra of all the five different samples. We find the sample 3 (the pink line) recorded the strongest PL peak. This indicates that the nanoscale (10.8 nm) Ag granular films in the interface of sample 3 can cause the strongest LSPRs. Likewise, the sample 1, sample 2 and sample 4 showed relatively weak PL peak due to relatively weak LSPRs. The control sample presented with a practically negligible PL peak because there is no Ag in the interface to stimulate the metallic LSPRs. It has been well accepted that the metallic LSPRs can trigger both plasmonic excitation and interband excitation 27 . Besides, since the samples in this report were fabricated on the Si substrate, the silver associated LSPRs could also lead to an amplified direct photoelectron generation process owing to its large absorption cross section and high localized optical intensity 24,28 . Obviously, if that happens, the output of current of IV curve (under the same applied voltage and the uniform incident laser) will increase proportionally to PL intensity. This is because the greater the intensity, the more LSPRs-induced carriers will participate in conducting (see Fig. 3c). 
Thus, to further confirm the presence of LSPRs, we measured the IV curves of five samples under light illumination (see Fig. 2b). It can be seen that the sample 3 outputs the biggest current, and the values of output current of sample 1, sample 2 and sample 4 (including the control sample) decrease accordingly in sequence of their PL intensities. These results show that IV curves of the samples are well consistent with their PL measurements and also consistent with our foregoing analysis. Based on these LSPRs-related results, the greatly enhanced LPE can be well interpreted. For general PN junction type or metal-oxide-semiconductor structures, taking the control sample ZnO/Si as an example, the LPE operation mechanism 8,29 can be interpreted as following (please refer to Fig. 3a,b): When the incident laser impinged at one point on the surface of the structure, photons with energies larger than the energy bandgap of Si will generate electron-hole pairs inside the semiconductor substrate. After a very short while, the excited electrons will tunnel into the ZnO layer at the laser spot position through the Schottky Barrier (SB) while the holes are left in the semiconductor. Then the excess electrons in ZnO and holes in Si would generate a concentration gradient laterally between the illuminated spot and the unilluminated area due to the non-equilibrium state. Later then, the photon-generated electrons flow diffuses in an exponential way along the ZnO side from the laser spot position to the ohmic indium contacts which is taking charge of collecting the diffusing carriers and hence ultimately form the lateral photovoltaic output numerically described as the formula below 8,29 : Here N is the electrons density at the laser position x of the ZnO layer, λ is the electron diffusion length in the nanoscale ZnO film which is approximately several millimeters according to our previous works 8, 29 . K is the proportionality coefficient related to electron charge, fermi level, and the temperature. L is the distance of the two contacts. Ideally, when λ  L/2 , the expression of LPV and its sensitivity can be simplified as: In fact, as recorded from the Fig. 1a, we have found that all of the LPV outputs had made acceptable linear changes versus the laser spot displacement, showing a relatively large electron diffusion length (Here the length of λ can be obtained from measuring the exponentially decreasing LPV outputs of the region outside the two collecting contacts as we have reported before 8 . In this letter, to focus our discussion on the linear changing parts and further explore their underlying mechanisms, we did not show the LPV performances (exponential curves) outside the two electrodes). Obviously, these experimental results basically agree with what the equation (2) have analyzed. However, it is intriguing that all the silver nanostructure embedded samples presented with LPV performances several times larger than that of the control sample. Based on the aforementioned PL spectra and IV characteristics, we think that this phenomenon was ought to be associated with the metallic LSPRs triggered by Ag granular films. 
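Since the explicit expressions of refs. 8 and 29 are not reproduced in the extracted text above, the following sketch uses a commonly quoted exponential-diffusion form of the LPV as an assumed stand-in: carriers injected at the laser spot decay exponentially with the diffusion length λ toward the two contacts, and the LPV is taken as proportional to the difference between the carrier densities collected at the two electrodes. It should be read as an illustration of the qualitative behaviour described above, not as the authors' exact formula.

```python
import numpy as np

def lpv(x_mm, L_mm=4.0, lam_mm=3.0, K_N=10.0):
    """Assumed exponential-diffusion LPV model (arbitrary units set by K*N).

    Carriers generated at laser position x diffuse toward the two contacts
    located at x = -L/2 and x = +L/2, decaying over the diffusion length lam.
    """
    left = np.exp(-np.abs(x_mm + L_mm / 2) / lam_mm)
    right = np.exp(-np.abs(x_mm - L_mm / 2) / lam_mm)
    return K_N * (right - left)

for xi in np.linspace(-1.8, 1.8, 7):
    print(f"x = {xi:+.1f} mm -> LPV ~ {lpv(xi):+.3f} (a.u.)")

# For lam much larger than L/2 the two exponentials nearly cancel except for a
# term linear in x, which is why the measured LPV varies almost linearly with
# the laser-spot position and vanishes at the midpoint between the contacts.
```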
It has been well known that the metallic LSPRs induced local field can effectively drive a boasted conduction electrons excitation process since its violent oscillations resonant with the applied optical field can attenuate randomly in a very short time through a nonradiative decay mechanism including the interband transition and intraband transition (plasmonic carriers generation) 16,27,30,31 . Yet here on account of the fact that the Ag nanofilm was closely constructed on the semiconductor Si substrate, the LSPRs would trigger three concurrent processes and thus generate large amount of high energy carriers. That is to say: the enhanced direct photon-induced electrons (N 1 + ΔN 1 ) yielded in the Si substrate due to the abnormal light trapping (here the density of direct photoelectron in the control sample was merely amount to N 1 ); the plasmonic carriers coming from the intraband excitation occurred nearby the metal-dielectric interfaces ΔN 2 and the interband excitation mainly due to the d-band transitions inside the silver layer ΔN 3 . These are all clearly illustrated in the Fig. 3c,d 27,30 . As we have known that the optical absorption of the silver granules was strongly localized here, hence the excited electron quantity P MFP of an active silver granule that participated in the effective transfer should be figured up by integrating the product of frequency ω, local electric field strength E 2 and the imaginary part of the dielectric function ε Im( ) over the volume V MFP adjacent to the silver granules within the mean-free path (MFP), eventually it would come out to be 27 : Based on this mechanism, electron density N LSPRs which is closely related to the P MFP at the laser position x of the ZnO layer should be expressed as: Here V 1 , V 2 and V 3 represent the valid unit volume account for the ΔN 1 , ΔN 2 and ΔN 3 respectively. Notably, the N LSPRs participated in the consequent formation of the LPV output is believed to be much larger than that generated in the control sample which barely valued as = N N 1 . Thus according to the equation (2) and equation (3), it can be eventually demonstrated that these high energy charge carriers 1 would significantly facilitate the tunneling efficiency of the excited carriers and greatly increase the quantity of diffusing electrons located at the laser spot, causing an enhanced LPV. We want to stress here that, once again according to the equation (2), the LPV can also be regulated by changing the diffusing length λ in the ZnO layer even if there are not any LSPRs effect. It is sure that the nanoscale Ag embedded interfaces could bring in a decreased resistivity ρ, an enhanced Fermi level E F and a prolonged life-time τ of the non-equilibrium electrons in the structures, thus eventually leading to a slightly increased . diffusing length., which can be written as: 8,30 Furthermore, on the basis of our experimental data, the length of λ had presented with a little bit increase much less than 1 mm 8 , consistent with what the equation (6) has demonstrated. Thus these very slight changes of λ are not enough to arouse such a huge LPV enhancement in this report. Besides, according to our previous researches on the ZnO/Si nanostructures with different λ 29,32 , even the optimum λ can never bring forth a LPE improvement of several folds. Thus, we believe that the large extra production of high energy conducting electrons triggered by the metallic LSPRs are meant to be the major factors of this anomalous giant LPV performance. 
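To make the volume integral described above more tangible, here is a toy discretised evaluation of a quantity proportional to ω ∫ |E|² Im(ε) dV around a single granule. The field map, dielectric values and grid are invented placeholders; in practice |E| would come from an electromagnetic (e.g., FDTD) simulation and Im(ε) from tabulated optical constants of silver.

```python
import numpy as np

# Toy 3D grid (nm) around a single Ag granule of ~10 nm radius.
n = 40
axis = np.linspace(-20.0, 20.0, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

# Invented hot-spot field enhancement near the granule surface and a toy
# choice of Im(eps) restricted to the absorbing (metallic) region.
E2 = 1.0 + 50.0 * np.exp(-((r - 10.0) / 3.0) ** 2)
eps_im = np.where(r < 10.0, 0.3, 0.0)

omega = 2 * np.pi * 3e8 / 633e-9        # rad/s for a 633 nm He-Ne laser
dV = (axis[1] - axis[0]) ** 3            # grid-cell volume (nm^3)

P = omega * np.sum(E2 * eps_im) * dV     # proportional to the absorbed power
print(f"P_MFP (arbitrary units) ~ {P:.3e}")
```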
In addition, the rapid oscillation of the local field also produced another kind of plasmonic energy dissipation called Joule heating effect 33 , which was meant to favor the lateral diffusion process inside the ZnO layer owing to an inherent thermalization method, i.e. through the internal electron-electron scattering, electron-phonon scattering and electron surface scattering as well [33][34][35] . To gain further insights into this metallic LSPRs effect, the spatial fluctuations of the resonant electromagnetic field intensity for the silver nanoscale granules are simulated with the finite-difference time-domain (FDTD) method under the setting periodic boundary conditions 36 . As shown in the Fig. 3e,f, the left-hand image demonstrate the intense spatial localization and remarkable field enhancement of the Ag donated surface plasma strength confined on a nanoscale 100 nm*40 nm x-y plane while the right-hand graphic page unequivocally reflect the evanescent waves nature of this localized resonant field in a subwavelength region, where the various relucent colors represent a set of field intensity dissipated from high to low. These results are well fitted with the intrinsic characteristics of the LSPRs as we have discussed above. In summary, we have discovered a LSPRs-based giant lateral photovoltaic effect (LPE) in ZnO/Ag/Si structures for the first time. The new operation mechanism behind the LSPRs constitutes a sharp contrast to the traditional LPE mechanism and can be expected to improve the LPV performance at a rate of several times with a simple preparation process and a relatively low cost, suggesting a brand new LSPRs approach for tailoring LPE-based devices. We believe it will also open avenues of research within current photoelectric sensors area.
2018-04-03T04:30:09.459Z
2016-03-11T00:00:00.000
{ "year": 2016, "sha1": "53ead945076d987a8fd8fff6c61e1e5d835db3e3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep22906.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "53ead945076d987a8fd8fff6c61e1e5d835db3e3", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
35177925
pes2o/s2orc
v3-fos-license
Underlying mechanism of protection from hypoxic injury seen with n-butanol extract of Potentilla anserine L. in hippocampal neurons The alcohol and n-butanol extract of Potentilla anserine L. significantly protects myocardium from acute ischemic injury. However, its effects on rat hippocampal neurons and the mechanism of protection remain unclear. In this study, primary cultured hippocampal neurons from neonatal rats were incubated in 95% N2 and 5% CO2 for 4 hours. Results indicated that hypoxic injury decreased the viability of neurons, increased the expression levels of caspase-9 and caspase-3 mRNA, as well as cytochrome c, Caspase-9, and Caspase-3 protein. Pretreatment with 0.25, 0.062 5, 0.015 6 mg/mL n-butanol extract of Potentilla anserine L. led to a significant increase in cell viability. Expression levels of caspase-9 and caspase-3 mRNA, as well as cytochrome c, Caspase-9, and Caspase-3 protein, were attenuated. The neuroprotective effect of n-butanol extract of Potentilla anserine L. was equivalent to tanshinone IIA. Our data suggest that the n-butanol extract of Potentilla anserine L. could protect primary hippocampal neurons from hypoxic injury by deactivating mitochondrial cell death. INTRODUCTION Neurons die from hypoxia or hypoxia-ischemia much faster than other cell types [1] . Extensive studies have indicated that mitochondrial injury is the central cause of hypoxic brain injury [2][3][4] . After hypoxia, cytochrome c in the mitochondria is released, and results in the opening of the mitochondrial permeability transition pore [5][6] , thus triggering the caspase cascade. Caspase-9 is the major initiator caspase of the intrinsic mitochondrial apoptotic pathway [7][8] . Caspase-3 acts as the final executor of cell death and is also activated in hypoxic neurons [9][10] . Caspase inhibitors can reduce hypoxia or hypoxia-ischemia induced neuronal death [11][12][13] . Potentilla anserina L., commonly called the monorchid herminium herb, belongs to the Rosaceae family and contains polysaccharides, amylum, fatty acids, essential amino acids, and vitamins. Potentilla anserina L. possesses a high medical and nutritional value, and has been used as a crude drug and a Chinese herbal medicine in Tibet, China. Recent studies have shown that Potentilla anserina L. strengthens immunity, exhibits anti-oxidative activity, and anti-hypoxic properties [14][15][16] . A previous study showed that the alcohol extract of Potentilla anserina L. could protect myocardium cells from ischemic or ischemic/reperfusion injury in vitro and in vivo [17][18][19] . In particular its n-butanol extract, an effective part of the alcohol extract, could remarkably protect the myocardium from acute ischemic injury [20][21] . However, its effects on rat hippocampal neurons and the mechanism of this protection are not yet well understood. In the present study, we investigated the effects of the n-butanol extract of Potentilla anserina L. on hypoxic injury induced by low oxygen density in primary hippocampal neurons. The effects of Potentilla anserina L. were then compared with tanshinone IIA, which has been shown to be neuroprotective [22][23][24][25][26][27] . Morphology of primary cultured hippocampal neurons After 7 days in culture, neurons were plump, strongly refractory, displayed central cell nuclei and nucleoli were clearly visible. Neuronal processes were interwoven into a thick network ( Figure 1A). 
Microtubule-associated protein 2 (MAP2) is an abundant neuronal cytoskeletal protein that binds to tubulin and stabilizes microtubules [28] . MAP2 is essential for the development and maintenance of neuronal morphology [29] . MAP2 was abundantly expressed in hippocampal neurons, and seldom expressed in gliocytes. The purity of primary cultured hippocampal neurons was identified by immunocytochemistry using MAP2. Results showed that the proportion of positively stained cells reached 75.2 ± 8.1% (Figure 1B). These cells were then used for subsequent research. Pretreatment with n-butanol extract of Potentilla anserine L. significantly increased cell viability in hypoxic hippocampal neurons Cell viability was verified by MTT assay. Hypoxia led to a decrease in neuron cell viability (P < 0.01, versus the control group). Decreased neuronal viability was suppressed by pretreatment with the n-butanol extract of Potentilla anserine L. (P < 0.01, versus the model group). The 0.25 mg/mL dosage group showed increased viability compared with the 0.062 5 mg/mL dosage group (P < 0.05). Moreover, pretreatment with tanshinone IIA could also increase the viability of hippocampal neurons under hypoxia (P < 0.01, versus the model group; Figure 1C: neuronal viability determined by MTT assay, data expressed as mean ± SEM, n = 12). n-butanol extract of Potentilla anserine L. significantly decreased the release of cytochrome c and attenuated the expression of caspase-9 and caspase-3 in hypoxic hippocampal neurons Reverse transcription-PCR results showed that the expression levels of caspase-9 and caspase-3 mRNA were very low in the control group. Hypoxia strongly induced the activation of caspase-9 and caspase-3 mRNA in hippocampal neurons (P < 0.01, versus the control group). Each dosage of Potentilla anserine L. extract could significantly reduce the expression of caspase-9 and caspase-3 mRNA in hypoxic neurons (P < 0.01, versus the model group; Figure 2). Western blot revealed that the protein expression levels of cytochrome c, Caspase-9 and Caspase-3 were very low in the control group. Hypoxia strongly induced the expression of cytochrome c, Caspase-9 and Caspase-3 in hippocampal neurons (P < 0.01, versus the control group). Pretreatment with Potentilla anserine L. extract or tanshinone IIA could significantly decrease the expression of Caspase-9 and Caspase-3 (P < 0.01 or 0.05, versus the model group; Figure 3). However, pretreatment with tanshinone IIA did not decrease the expression of cytochrome c (P < 0.01; Figure 3). DISCUSSION Mitochondrial injury is a major contributor to hypoxic brain injury as it connects upstream and downstream signal transmission. After hypoxia, electron transfer in the mitochondrial respiratory chain is hindered and energy metabolism is obstructed. Cytochrome c in the mitochondria is released and reactive oxygen species are generated, resulting in the opening of the mitochondrial permeability transition pore. The released cytochrome c from the mitochondria induces cell death in two ways. First, it triggers the caspase cascade. Caspases, cysteine-dependent protein kinases, are the most important enzymes in cell death.
Cytochrome c can associate with apoptosis protease activating factor and pro-caspase 9, and triggers the activation of caspase-3 and apoptosis [30][31] . Caspase-3 can specifically cleave Bcl-2, resulting in the loss of the inhibitory effect on the mitochondrial permeability transition pore, thus releasing more cytochrome c [32][33][34] . Second, it interrupts the electron transport chain, and thereby inhibits oxidative phosphorylation; this generates oxygen free radicals and results in a lack of ATP and eventually cell death. The caspase-dependent pathway is the faster process leading to cell death. In addition, hypoxic stress also leads to activation of caspase-8, which elicits the release of cytochrome c into the cytosol and activates other caspases. This process initiates internucleosomal DNA fragmentation and results in apoptosis [35][36] . Mitochondrial cell death can occur due to apoptosis and necrosis. Hypoxic/ischemic brain injury induces cell death in neurons by apoptotic and necrotic mechanisms [37] . Apoptosis and necrosis were initially identified as two different modes of death, based on morphological criteria. Caspases are essential for the execution steps in apoptosis [38] . Caspase-9 is the major initiator caspase of the intrinsic mitochondrial apoptotic pathway, and its inhibition in the brain by LEHD-CHO (a Caspase-9 inhibitor) has been demonstrated to have a neuroprotective effect. Caspase-9 is important in the pathophysiology of hypoxic/ischemic neuronal destruction in newborn rats [39] . Caspase-3 acts as the final executor of cell death and is also activated in hypoxic neurons. Caspases may also be expressed in the context of necrotic cell death [40] . Necrosis is generally considered to occur because of an external stimulus (changes in ion flux), but recent studies have shown that neuronal death after oxygen deprivation is far from passive cell swelling and dissolution, and instead requires an orderly activated cell death program. Necrosis may be triggered by mitochondrial dysfunction, subsequently leading to the release of cytochrome c and the activation of the caspase system. This means that apoptosis and necrosis have the same final pathway. This involves mitochondrial dysfunction leading to cytochrome c release, followed by the activation of Caspase-9 and Caspase-3 by cytochrome c, along with apoptosis protease activating factor-1. This activation is followed by hydrolysis, and finally the activation of the caspase kinase system occurs. However, in the process of apoptosis, gene expression and protein synthesis require a large amount of energy. During hypoxia, ischemia, or any other low-energy state, the huge amount of energy required for protein synthesis cannot be met.
By contrast, caspases exist in normal cells, and activation requires only a small amount of energy; therefore, in hypoxia, necrosis may be the main cause of cell death [41] . In this study, expression levels of caspase-9 mRNA and caspase-3 mRNA were very low in the control group. Hypoxia strongly induced the activation of caspase-9 and caspase-3 in neurons. However, pretreatment with Potentilla anserine L. extract significantly reduced the expression of caspase-9 mRNA and caspase-3 mRNA in hypoxic-neurons. Likewise, similar results were observed for the expression of Caspase-9 and Caspase-3 protein. Pretreatment with Potentilla anserine L. extract also significantly decreased the release of cytochrome c into the cytosol. These findings suggested that the n-butanol extract of Potentilla anserine L. could protect primary hippocampal neurons from hypoxic injury by attenuating mitochondrial cell death. Oxygen free radicals are a main cause of mitochondrial injury. Our previous studies demonstrate that Potentilla anserina L. exhibits anti-oxidative activity [16,20] . This activity may contribute to the mitochondrial protective effect of Potentilla anserina L. In summary, our findings demonstrate that the n-butanol extract of Potentilla anserine L. has a neuroprotective effect on hypoxic injury in primary hippocampal neurons. The possible mechanism is as follows: the n-butanol extract of Potentilla anserine L. protects mitochondrial function by attenuating the release of cytochrome c into the cytosol, and thereby inhibits the caspase cascade pathway. This prevents cell death. These findings provide a theoretical basis for developing the n-butanol extract of Potentilla anserine L. as a neuroprotective agent. MATERIALS AND METHODS Design A cytological comparison study. [42] . Drugs Potentilla anserine L. was purchased from Qinghai Institute of Chinese Medicine, China. The n-butanol extract of Potentilla anserine L. was extracted as previously described [20] . Five compounds have been isolated from the extract, which were considered as contributors to the protective function of anti-hypoxia in neurons. They are adenosine, daidzin, puerarin, 3'-methoxypuerarin and daidzein 8-C-apiosyl glucoside. Tanshinone IIA (Huike Botanical Development Co., Shaanxi, China) with a purity of more than 98%, was dissolved in 0.1% dimethyl sulfoxide and made up to a concentration of 20 mg/mL in D-Hank's medium (Gibco, Grand Island, NY, USA). Primary hippocampal neuron cultures Sprague-Dawley neonatal rats were anesthetized with diethyl ether and disinfected with 75% alcohol. Primary hippocampal neurons were prepared from the hippocampus under sterile conditions [43] . Neurons were suspended in a culture medium that contained DMEM-F12 (Gibco), fetal bovine serum, mycillin, and glucose (4 × 10 5 cell/mL), and then plated onto poly-D-lysine-coated 60 mm dishes. The medium was changed after 48 hours by replacing the fetal bovine serum with N2 (Gibco), and half of the medium was replaced every 3 days. The cells were cultured in a CO 2 incubator at 37°C and 5% CO 2 . After 7 days in culture, observation under a phase-contrast microscope (Olympus, Tokyo, Japan) demonstrated that cells were predominantly neurons (> 96%). All experiments were performed after cells were cultured for 7 days. The purity of primary cultured hippocampal neurons identified by immunocytochemical analysis Cultured cells were fixed in 4% paraformaldehyde, permeabilized in 0.1% Triton X-100, and blocked in 5% bovine serum albumin. 
MAP2 was detected with rabbit anti-MAP2 polyclonal antibody (1:100; Cell Signaling Technology, Beverly, MA, USA), and primary antibodies were incubated overnight at 4°C , followed by goat anti-rabbit secondary antibodies (Invitrogen, Grand Island, NY, USA). 3,3-diaminobenzidine was then used to visualize immunohistochemical staining. Cell nuclei were then counterstained with hematoxylin. Images were obtained with an Olympus BX51 microscope (Olympus) and the proportion of positive staining cells was analyzed with Image-Pro plus 5.1 software (Bethesda, MD, USA) [28][29] . Grouping and intervention Culture dishes were randomly divided into the control, hypoxic injury model, and the 0.25, 0.062 5, and 0.015 6 mg/mL of n-butanol extract of Potentilla anserine L. groups. These concentrations were chosen based on previous studies [20] . Tanshinone IIA, a positive control, was preincubated before hypoxia at a working concentration of 0.2 mg/mL. After 7 days of culture, control dishes were kept in normoxic conditions. D-Hank's medium with different concentrations of the extract was used for the hypoxic injury model group and the intervention group. D-Hank's medium with extract was initially placed in a hypoxic environment (95% N 2 , 5% CO 2 ) for 30 minutes and then replaced with normal medium. The model and intervention groups were then exposed to a 95% N 2 , 5% CO 2 air mixture for 4 hours. Viability of hippocampal neurons determined using MTT assay Neuronal viability was assessed using MTT assay as previously described [44] , with some modifications. The yellow MTT was reduced to a purple formazan by mitochondrial dehydrogenase in live cells. Briefly, a total of 5 mg/mL MTT was added to each well (final concentration was 1 mg/mL) and another 4 hours of incubation at 37°C was conducted. The assay was stopped by the addition of a 100 µL lysine buffer (20% SDS in 50% N, Ndimethylformamide, pH 4.7). Absorbance value was measured at 570 nm with ELX-800 microplate assay reader (BioTek, Winooski, VT, USA). Determination of caspase-9, caspase-3 mRNA in hippocampal neurons by reverse transcription-PCR The total RNA were extracted from neurons using TRIzol reagent (Gibco), and its purity and integrity were measured. The primers (Invitrogen) were as follows: caspase-9, F: 5'-CCC GTG AAG CAA GGA TTT-3', R: Reverse transcription-PCR was performed using a TaKaRa RNA PCR Kit (AMV) Version 3.0 (TaKaRa, Otsu, Japan). Products were visualized with ethidium bromide staining. The relative expression of caspase-9, and caspase-3 mRNA was given as the ratio of the target mRNA absorbance value to the β-actin absorbance value. The absorbance of each band was analyzed with Gel-pro 3.0 (Bethesda). Western blot analysis of cytochrome c, Caspase-9, and Caspase-3 protein expression in hippocampal neurons Cells were lysed in ice-cold lysis buffer. After centrifugation, the supernatants were collected. Protein samples were then run on sodium dodecyl sulfate polyacrylamide gel electrophoresis and transferred to a polyvinylidene difluoride container. Membranes were blocked with 5% fat-free milk powder in Tris-buffered saline, incubated at 4°C overnight with mouse anti-rat cytochrome c (1:1 000), Caspase-3 (1:1 000), Caspase-9 (1:1 000), and β-actin (1:1 000) monoclonal antibodies (Santa Cruz Biotechnology, Santa Cruz, CA, USA). This incubation was followed by probing with goat anti-mouse secondary antibody (1:1 000, Santa Cruz) at 37°C for 1 hour. 
The protein was visualized with enhanced chemiluminescence solution, scanned, photographed, and the target protein absorbance value to the β-actin absorbance value was analyzed with a gel-image analytical system, Gel-pro 3.0 (Bethesda). Statistical analysis Statistical analyses were performed using SPSS 10.0 (SPSS, Chicago, IL, USA) and the results were expressed as mean ± SEM. Differences between the means were determined by one-way analysis of variance followed by a Student-Newman-Keuls test for multiple comparisons. A value of P < 0.05 was considered significant. Author statements: The manuscript is original, has not been submitted to or is not under consideration by another publication, has not been previously published in any language or any form, including electronic, and contains no disclosure of confidential information or authorship/patent application disputations.
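As a final illustrative note on the statistical workflow stated in the Statistical analysis section above (one-way ANOVA followed by a post-hoc multiple comparison), the sketch below runs the same kind of comparison on made-up viability values. Tukey's HSD is used here only as a readily available stand-in for the Student-Newman-Keuls procedure, which is not part of the common Python statistics libraries; the group names and numbers are placeholders, not the published data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical MTT absorbance values (n = 12 per group).
groups = {
    "control":    rng.normal(1.00, 0.08, 12),
    "hypoxia":    rng.normal(0.55, 0.08, 12),
    "extract_hi": rng.normal(0.85, 0.08, 12),
    "extract_lo": rng.normal(0.70, 0.08, 12),
}

# Omnibus one-way ANOVA across all groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")

# Pairwise post-hoc comparisons (Tukey HSD as a stand-in for SNK).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 12)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```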
2018-04-03T05:45:26.527Z
2012-11-25T00:00:00.000
{ "year": 2012, "sha1": "208c27a0a7906150a44870890dcd2374aa78d28e", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "686571e032ff14ad5c7f1ef5ad6314ccea4c993b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
251371453
pes2o/s2orc
v3-fos-license
Family of New Binary Transition Metal Nitrides Superconductors Superconductivity in transition metal nitrides (TMNs) has been investigated for a long time, such as zirconium nitride (ZrN) with a superconducting transition temperature Tc of 10 K. Recently, a phase diagram has been revealed in ZrN x with different nitrogen concentrations, which is very similar to that of high-temperature copper oxide superconductors. Here, we study the TMNs with face-centered cubic lattice, where ZrN and HfN have been experimentally obtained, and predict eight new stable superconductors by the first-principle calculations. We find that CuN has a high Tc of 39 K with a very strong electron-phonon coupling (EPC) strength. In contrast to ZrN, CuN has softening acoustic phonons at the high symmetry point L , which accounts for its much stronger EPC. In addition, the highly symmetrical structure leads to topological protected nodal points and lines, such as the hourglass Weyl loop in k x/y/z = 0 plane and Weyl points in k x/y/z = 2 π/a plane, as well as quadratic band touch at Γ point. CuN could be a topological superconductor. Our results expand the transition metal nitrides superconductor family and would be helpful to guide the search for high temperature topological superconductors. Transition metal nitrides (TMNs), as a class of materials with excellent physical properties, such as high hardness and strength, strong corrosion resistance, high melting point, good chemical and thermal stability [1][2][3][4][5], have attracted extensive attention and wide applications in various fields [6][7][8][9][10]. In addition, many TMNs exhibit superconductivity [11,12]. Since the 1930s, many TMN superconductors have been discovered. Nitrogen atoms in these superconductors provide strong bonding and large electron-phonon coupling (EPC), resulting in superconductivity [13]. For instance, ZrN is a typical TMN superconductor, which displays the highest superconducting transition temperature Tc of 10.0 K among the IVB TMNs [14,15]. As a hard metal nitride superconductor, ZrN is suitable for applications under extreme working conditions, while most materials with superior mechanical strength and hardness are semiconductors or insulators, which lack metallicity and superconductivity [16]. A recent theoretical study has shown that on the deformation path of high-symmetry crystallography [001], Tc of ZrN can reach 17.1 K under tensile strain [17]. Several other TMN superconductors have been reported including TiN with a maximal Tc of 6.0 K [18], HfN with a Tc of 8.8 K [19], VN with a Tc of 8-9 K [20,21], NbN with a Tc of 17-18 K [20, 22, 23], TaN with a Tc of 10.8 K [24], and W 2 N with a Tc of 1.3 K [20]. MoN has been predicted to have the high Tc since 1981 [11,13,[25][26][27]. A high Tc of 30 K was theoretically predicted in MoN, but the low Tc of 5-14K were obtained in the experiments [28][29][30][31][32], where the obtained MoN samples were not pure and often contaminated with Mo 2 N, γ-Mo 2 N, Mo or even MoO x phases. In addition, studies have shown that the doping have a great impact on the superconductivity in TMNs. For example, the Tc of TiN 0.995 , TiN 0.95 , TiN 0.8 , and TiN 0.55 are 6.0, 1.7, 1.5, and 1.2 K, respectively [33]. Pure VN, 0.5% B doped VN, 0.5% La doped VN, 0.2% B and La each co-doped VN, and 0.5% B and La each co-doped VN have the Tc of 9.2, 8.0, 7.8, 8.2 and 5.6 K, respectively [34][35][36][37]. ZrN x with different nitrogen concentrations has been synthesized recently. 
Although ZrN x is generally believed to obey the Bardeen-Cooper-Schrieffer (BCS) superconducting mechanism, the phase diagram of ZrN x is very similar to that of high-Tc copper oxide superconductors, indicating a possible relation between high-temperature superconductivity and ZrN x [38]. In this article, we study the face-centered cubic (FCC) TMNs, among which ZrN and HfN have been experimentally reported. By substituting elements, eight new stable superconductors have been predicted through first-principles calculations. Interestingly, we found that CuN has a high Tc of 39 K with a very strong EPC of 3.099. In contrast to ZrN, CuN has softening acoustic phonons at the high symmetry point L, which accounts for its much stronger EPC. In addition, the highly symmetrical structure leads to topologically protected nodal points and lines, such as the hourglass Weyl loop in the k x/y/z = 0 planes and Weyl points in the k x/y/z = 2π/a planes, as well as a quadratic band touching at the Γ point. As shown in Fig. 1(a), ZrN has a NaCl-type FCC lattice with the Fm-3m space group. By substituting Zr with other transition metal elements and N with other VA group elements, we have obtained eight new structurally stable transition metal pnictide superconductors from density functional theory (DFT) calculations, including CdN, CdP, CdAs, CuN, CuSb, HfP, HgSb, and ZnSb. Using the McMillan-Allen-Dynes approach [39,40] based on the BCS theory with a typical Coulomb repulsion value µ* = 0.1, we calculate the Tc, EPC strength λ, and logarithmic average frequency ω log of these superconductors, as listed in Table I. In addition, we also calculate the Tc of two experimentally reported TMN superconductors, ZrN and HfN, which are 10.93 and 9.1 K, respectively, close to the experimental Tc of 10 and 8.8 K. Among these stable FCC TMN superconductors, CuN has the strongest EPC and the highest Tc of 39 K. In the following, we study CuN as an example in detail, and in the Supplemental Materials (SM) we provide the structural information, electronic bands, phonon spectra, density of states (DOS), and superconducting properties of the other seven transition metal pnictide superconductors [41]. The optimized lattice constant of CuN is 4.177 Å. Fig. 1(b) gives the Fermi surface of CuN, where red and blue surfaces represent the different bands crossing the Fermi surface, which obviously accords with the crystal symmetry of this structure. Fig. 1(c) shows the band structure along the high-symmetry paths Γ-X-U-K-Γ-L-W-X and the projected DOS of CuN, which show the metallic property, with comparable contributions of Cu and N atoms near the Fermi level. Because of the highly symmetric crystal structure, this material can exhibit many interesting topological properties in the band structure [42]. Fig. 2(a) shows the shape of the nodal loop obtained from the DFT calculations in the k z = 0 plane within the Brillouin zone (BZ). Similarly, two other nodal loops appearing in the k x = 0 and k y = 0 planes can also be obtained by symmetry. In Fig. 2(c), we plot the constant energy slice at -0.8 eV, which cuts through the drumhead states, forming a few arcs. In Fig. 1(c), we can find another band crossing point [14,15] along the U-X path at about -1 eV. It is a nontrivial band crossing, leading to a nodal Weyl point. Owing to the fourfold rotation symmetry along Γ-X, there should exist another three Weyl points in the k y = 2π/a plane, as shown in Fig. 2(b).
In addition, the band dispersion around the Γ point is quadratic along all three directions in k space, as shown in Fig. 2(d), indicating that CuN possesses a quadratic contact point at Γ protected by the crystalline symmetry [43]. In Fig. 3, we study the phonon spectra weighted by the magnitude of the EPC λ qν , the projected phonon DOS, the Eliashberg spectral function α 2 F (ω), and the cumulative frequency-dependent EPC λ(ω) for CuN and ZrN. For CuN, the vibration modes can be divided into two parts: low-frequency modes (< 8 THz) contributed by both Cu and N atoms and high-frequency modes (> 8 THz) dominated by the vibration of N atoms. For ZrN, the vibration modes can also be divided into two parts: low-frequency modes and high-frequency modes dominated by Zr and N atoms, respectively. Between the two frequency ranges of ZrN, there is a 4.5 THz phonon gap. It is worth noting that at the high symmetry point L, the acoustic phonons of CuN exhibit obvious softening, accounting for about 83% of the total EPC (λ = 3.099) below 6 THz, while ZrN has no softening acoustic phonons, and the vibration modes of the Zr atoms below 7.5 THz only account for 63% of the total EPC (λ = 0.657). Softening acoustic phonons are the main source of the large EPC in CuN. Fig. 4 gives the pressure dependence of the logarithmic Tc and the EPC λ of CuN and ZrN, respectively. Their detailed calculation results are listed in Table II. It can be seen that with increasing pressure the EPC λ of both CuN and ZrN decreases, resulting in the decrease of Tc. The calculated dTc/dP of ZrN is -0.133 K/GPa, which is close to the experimental result of -0.17 K/GPa [44]. Based on the BCS theory, Tc(P) obeys the following relationship [45]: where B is the bulk modulus, γ ≡ -d ln ω/d ln V is the Grüneisen parameter, η ≡ N(E F )⟨I 2 ⟩ is the Hopfield parameter [46] with ⟨I 2 ⟩ the square of the electron-phonon matrix element averaged over the Fermi surface, and ∆ ≡ 1.04λ(1+0.38µ*)[λ-µ*(1+0.62λ)] −2 . Because the first term on the right-hand side of Eq. (1) is smaller than the second term [47], the sign of dTc/dP is mainly determined by the relative magnitude of the two terms in curly brackets; the slope obtained from Eq. (1) is also comparable with the fitting slope of −0.035 K/GPa in Fig. 4(a). In summary, we have found eight new stable transition metal pnictide superconductors by first-principles calculations. Among these superconductors, CuN is found to have the strongest EPC of 3.099 and the highest Tc of 39 K. Considering the highly symmetric crystal structure, we have obtained a quadratic contact point at Γ, where the band dispersion is quadratic along all three directions in k space. Besides, Weyl loops in the k x/y/z = 0 planes and Weyl points in the k x/y/z = 2π/a planes have also been uncovered, indicating the topological properties of CuN. In contrast to ZrN, we find that the softening acoustic phonon modes in CuN are responsible for the much higher EPC strength and the much higher Tc. By studying the relationship between Tc and pressure, we find that both ZrN and CuN match well with the BCS superconducting mechanism. Our results not only predict CuN to have the highest Tc among TMN superconductors, but also offer new materials with both superconductivity and novel topology in their band structures, which would be helpful for studying Majorana zero modes in topological quantum computation.
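For readers who want to reproduce this kind of estimate, the sketch below evaluates the McMillan-Allen-Dynes expression Tc = (ω log /1.2) exp[-1.04(1 + λ)/(λ - µ*(1 + 0.62λ))] used throughout the paper. The λ values are those quoted in the text, while the ω log values are assumed placeholders chosen so that the output roughly matches the quoted Tc, since Table I is not reproduced here.

```python
import math

def allen_dynes_tc(lam, omega_log_K, mu_star=0.1):
    """McMillan-Allen-Dynes Tc (K) from EPC strength lam and omega_log (K)."""
    return (omega_log_K / 1.2) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

# lam values as quoted in the text; omega_log values are assumed placeholders.
for name, lam, omega_log in [("CuN", 3.099, 215.0), ("ZrN", 0.657, 370.0)]:
    print(f"{name}: Tc ~ {allen_dynes_tc(lam, omega_log):.1f} K")
```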
2022-08-09T07:35:17.703Z
2022-01-01T00:00:00.000
{ "year": 2023, "sha1": "bc27888aa672e7c6f9bbbcfe3b8805a3d9c1a928", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bc27888aa672e7c6f9bbbcfe3b8805a3d9c1a928", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [] }
53444555
pes2o/s2orc
v3-fos-license
Mutation Patterns Due to Converging Mitochondrial Replication and Transcription Increase Lifespan, and Cause Growth Rate-Longevity Tradeoffs Introduction DNA replication and RNA transcription share many properties (Little et al., 1993;Hassan & Cook, 1994;Marczynski and Shapiro 1995;Mohanty et al., 1996;Prado & Aguilera 2005), notably in mitochondria (Nass 1995;Lee & Clayton, 1997). The joint occurrence of transcription and replication on DNA apparently necessitates coordination (Gilbert, 2001;MacAlpine at al., 2004), among others because collisions occur between the replication and transcription complexes on the same DNA strand (Mirkin & Mirkin, 2005). This coordination may be part of the regulation of gene expression (Patnaik, 1997) and the rates of both processes (Morton, 1999). This predicts the structural organization of genes on chromosomes around replication origins in relation to functional pressures (Schwaiger & Schubeler, 2006): highly expressed genes are located close to replication origins, those expressed in few tissues are more distant (Huvet et al., 2007). Such functional pressures seem strong enough to cause convergences in genome organization between very distant organisms such as yeast (Saccharomyces cerevisiae) and Caulobacter, despite that the proteins involved in their replication and transcription are basically unrelated (Brazhnik & Tyson, 2006). For that reason, transcription-associated genes are frequently located close to replication origins (Couturier and Rocha, 2006).
The conserved arrangements of mitochondrial tRNA genes in vertebrates also seems to optimize between early replication of tRNAs whose anticodons have high probability to mutate in the single strand state (Seligmann et al., 2006a) and early transcription of tRNAs with frequently used cognate amino acids (Satoh et al., 2010). Note that this principle of optimizing between two competing processes exists also at the level of translation, between initiation and elongation (Xia et al., 2007), and might apply to many other molecular processes. mitochondrial genomes seem good candidates for testing this hypothesis, because a) data are available for many species, b) they affect ageing and c) mitochondrial replication and transcription are at least partially collinear. Indeed, in vertebrate mitochondria, the distance from the D-loop determines D ssh t, while D ssh r results from calculating relative distances from the OH, also in the D-loop, and the light strand replication origin (OL, see Seligmann et al., 2006b for details on D ssh calculations, and Seligmann, 2008). Usually, one considers that mitochondrial genomes have a single OL located in the WANCY region, a cluster of 5 tRNA genes (Desjardins & Morais, 1990;Clayton, 2000), resulting in D ssh rW. Both processes are only partially collinear when solely the WANCY region functions as OL, but the probabilistic combination of multiple tRNA clusters distributed across the genome that putatively act as OLs (Seligmann et al., 2006b;Seligmann, 2008;Seligmann, 2010b) can result in an overall replication gradient (D ssh rX) collinear with the transcription gradient (D ssh t). In Figure 1, D ssh rX (as it is expected after integrating with equal weights all putative tRNA clusters as OLs into D ssh r calculations) is highly correlated with the distance from the Dloop. As compared to D ssh rW, this D ssh rX has only one region with high mutation risks (this region codes for ND6 and CytB), while for D ssh rW, there is an additional region (coding for ND1 and ND2), ranging over 4 of all 13 mitochondrially encoded protein coding genes and both rRNAs. Hence, one could expect that evolution of multiple OLs in mitochondrial genomes, especially in taxa with long lifespan, would tend towards increasing collinearity of D ssh rX with D ssh t, reducing the extent of DNA regions with high mutation risks. Multiple OLs would regulate D ssh r->D ssh t convergence. These interactions between mitochondrial replication and transcription would be an additional process interacting with mitochondrial transcription (Bonawitz et al., 2006). Alternative replication mechanisms This is in line with studies suggesting that multiple OLs exist in vertebrate mitochondria (Brown et al., 2005;. The hypothesis that mitochondrial light strands are replicated at multiple locations by Okazaki fragments (Holt et al., 2000) as the lagged strand in nuclear genomes is also compatible with the statistical patterns observed by Seligmann et al., (2006b). In fact, the deamination gradients detected by comparative analyses are considered as strong evidence in favor of the unidirectional replication mechanisms (Gibson, 2005). My interpretation is that the unidirectional replication is relatively rare, but it leaves at evolutionary scales a clear imprint on genomes because it causes biases in mutation patterns, and that at least one other replication process, putatively similar to the one in nuclei, exists. That process is more frequent and effective, affecting less the genome at evolutionary scale. 
Indeed, some evidence on mitochondrial transcription factors suggests that two replication modes coexist, and that the modes of mitochondrial replication are regulated by mitochondrial metabolism (Pohjoismaki et al., 2006). Results and conclusions will be also interpreted according to this hypothesis, considering that only one replication mechanism, the unidirectional one, creates replication deamination gradients. Lifespan and convergence of replication and transcription Heavy strand sequences of mitochondrial tRNA genes tend to form OL-like structures and seem to assist the "recognized" vertebrate mitochondrial OL in the WANCY region (Seligmann & Krishnan, 2006). D ssh rX resembles D ssh t more than does D ssh rW (see Figure 1, and Seligmann et al., 2006b). Pathogenic mutations, as compared to non-pathogenic polymorphisms in human mitochondrial tRNAs, disturb the fine balance of D ssh rX by altering which tRNAs function and which do not function as alternative OLs (Seligmann et al., 2006b). These observations strengthen the hypothesis that collinearity between these processes increases longevity by slowing ageing. A further observation in line with this hypothesis is that nucleotide contents of heavy strand DNA sequences coding for the first and second positions of tRNA anticodons in vertebrate mitochondrial genomes correlate with D ssh t calculated according to the highly conserved tRNA arrangement along the vertebrate mitochondrial genome. When mitochondrial replication and transcription are collinear, as observed in Homo sapiens after integrating all putative OLs in D ssh rX calculations, overall deamination risks at sites coding for the first two anticodon positions are minimized (Seligmann et al., 2006b), not only during transcription, but also during replication because replication is collinear with transcription in this case. It hence makes sense that ageing-related processes, such as developmental stability and lifespan, are affected by convergence between replication and transcription. I test this hypothesis and discuss alternative hypotheses that could account for the patterns described below. Materials and methods In order to test this ageing-related collinearity hypothesis, I calculated rt, the correlation coefficient between C or T contents at third codon positions for all 13 protein coding genes (methodology as in Seligmann et al., 2006b) and D ssh t; and rW, the correlation coefficient between C or T contents as above, and D ssh rW. I used the light strand sequences, coding Cs by "1" and Ts by zero, so that the gradient reflects the slower deamination reaction of A to G that occurs during replication on the heavy strand as a function of D ssh . I did also similar calculations for the other gradient, coding light strand As by "1" and Gs by zero, reflecting the heavy strand gradient due to the faster deamination of C to T. Results were generally qualitatively similar for this gradient, but are not presented here. I used for D ssh calculations the numbering system of Genbank for nucleotide sites, also used by Tanaka and Ozawa (1994) and Seligmann et al., (2006b). D ssh t is the relative distance of the base pair from the starting point of the transcription, meaning the number assigned to that base pair following that numbering system divided by the total length of the genome: where b is the distance in bases of the nucleotide position from the genome numbering starting point and N is the total mitochondrial genome length. 
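To make the verbal definition of D ssh t above concrete (D ssh t = b/N), here is a minimal sketch that codes third-codon-position C/T sites of one gene as 1/0, computes their D ssh t values, and correlates the two; it also applies the Fisher z-transform used later in the analysis. The coordinates, sequence and genome length are invented placeholders, and the piecewise D ssh rW formulas described next are not implemented here.

```python
import numpy as np
from scipy.stats import pearsonr

genome_length = 16500          # placeholder total mtDNA length (bp)

# Hypothetical third-codon-position coordinates (Genbank-style numbering) and
# their light-strand bases for one protein-coding gene.
positions = np.array([3310, 3313, 3316, 3319, 3322, 3325, 3328, 3331])
bases = np.array(list("CTCCTTCT"))

# Keep only C/T sites and code C as 1, T as 0 (light strand).
mask = np.isin(bases, ["C", "T"])
ct_binary = (bases[mask] == "C").astype(int)

# D_ssh_t: relative distance from the transcription / numbering start point.
d_ssh_t = positions[mask] / genome_length

r, p = pearsonr(d_ssh_t, ct_binary)
print(f"rt = {r:.3f} (p = {p:.3f}) over {mask.sum()} C/T sites")

# Fisher z-transform applied before using r as a variable in later analyses.
z = np.arctanh(r)
print(f"z-transformed rt = {z:.3f}")
```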
I calculated D ssh rW of the ND1 and ND2 genes using the corresponding equation in Seligmann et al. (2006b), where W is the position at the mid-location of the sequence forming the classical light strand replication origin. In species lacking the classical origin, I used for W the mid-location of the sequence located between the two tRNAs that normally flank the regular light strand replication origin, tRNA-Asn and tRNA-Cys. For other genes, I calculated D ssh rW according to the equation given for those genes in Seligmann et al. (2006b). Note that visual examinations of gradients in single species, such as those shown in Figure 2, are based on gene-wise averages of the binary C and T contents of each gene at third codon positions. Analyses based on such averages would probably yield qualitatively similar results. I opt for the method using site-specific nucleotide contents, without averaging over genes, in order to maximize the amount of information used from the raw sequence data. It is possible that reducing data by averaging following the natural units of protein coding sequences might reveal additional phenomena or aspects of the main phenomenon examined, a point that should be kept in mind. The gene-wise averaging method has clear advantages for graphical presentation and is therefore used here in various Figures. D ssh r calculations for tRNA clusters different from the WANCY region containing the classical light strand replication origin are done following Seligmann et al. (2006b). These calculations were used only for Figure 1; for the other analyses presented here, only D ssh rW and D ssh t were used. D ssh r calculations for the various tRNA clusters as light strand replication origins, as they were used to estimate D ssh rX in Figure 1, follow the principles described by Seligmann (2008). Maximal lifespan estimates are from http://genomics.senescence.info/species/, complemented for some primate species by additional sources (Sade and Hildrech 1965; http://www.missouri.edu/~anthmark/courses/mah/factfiles/redcolubus.htm for Procolobus badius; Wich et al., 2004 for Pongo abelii). In other groups, I only tested for the correlation of rt-rW with lifespan. The use of maximal lifespan for animals in captivity is a reasonable proxy for longevity, as is maximal lifespan in the wild, as was shown at least in geckos (Werner et al., 1993). Before using correlation coefficients as variables in analyses, they were z transformed in order to linearize their scales (Amzallag, 2001), considering sample sizes (Seligmann et al., 2007). Analyses were done for various groups for which the relevant genomic and life history data were available for a sufficient number of species. For lizards, correlations with lifespan were tested, as well as correlations with estimates of developmental stabilities, when such were available (Seligmann et al., 2003). Two independent sets of lizard species were used: Amphisbaenidae (Bipes biporus, Bipes canaliculatus, Bipes tridactylus, Diplometopon zarudnyi, Rhineura floridana), using the number of intercalated annuli on the ventral side of these animals (Seligmann & Krishnan, 2006) as a measure of developmental instability (the association with maximal longevity was not tested in this group because of lack of longevity data), and a second set of species for which both complete genome sequences and estimates of maximal lifespan or of fluctuating asymmetry in subdigital lamellae under the fourth toe were available; Agamidae, Varanus niloticus and Sphenodon punctatus were excluded from analyses. I also tested for correlations of rt-rW, termed collinearity between gradients or D ssh r->D ssh t convergence, with the length of the gestation period.
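The z transformation of correlation coefficients can be sketched as follows. The sketch uses the standard Fisher transformation with its usual sample-size-dependent standard error; the exact sample-size adjustment of Amzallag (2001) and Seligmann et al. (2007) may differ, so the scaling shown here is only an assumption.

import math

def fisher_z(r):
    # Fisher transformation, linearizing the scale of a correlation coefficient.
    return 0.5 * math.log((1.0 + r) / (1.0 - r))

def fisher_z_scaled(r, n):
    # One simple way to take sample size into account: divide Fisher z by its
    # standard error 1 / sqrt(n - 3) (an assumption, not the cited method).
    return fisher_z(r) * math.sqrt(n - 3)

rt, rW, n_sites = 0.45, 0.30, 200          # hypothetical gradient strengths
convergence = fisher_z(rt) - fisher_z(rW)  # rt - rW on the linearized scale
print(convergence, fisher_z_scaled(rt, n_sites))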
Information on gestation periods is also from http://genomics.senescence.info/species/. Replication versus transcription gradients in various species Examining graphs plotting the mean C/T ratio at third codon positions for each gene as a function of D ssh t and as a function of D ssh rW, one finds that for a majority of species, D ssh rW is the better predictor of nucleotide contents at third codon positions, and there is no evidence for a gradient resembling the one that would be expected due to transcription, whether due to replication convergent with transcription or to transcription itself (for example the greater white-toothed shrew Crocidura russula in Figure 2a). In some species, usually relatively long lived, such as the western gorilla and the Yangtze river dolphin Lipotes vexillifer (Figures 2b, c), the situation is less clear, with both D ssh t and D ssh rW explaining a significant amount of variation in nucleotide contents, although D ssh rW is the better predictor and hence can be considered the major cause of the gradient (meaning the WANCY region would be the most commonly used OL).
Fig. 2. Proportion of cytosine at third codon positions as a function of time spent single stranded during transcription (D ssh t) and replication (D ssh r) in 13 mitochondrial protein coding sequences: a) in the insectivore Crocidura russula, a typical example where the replication gradient is by far stronger than the transcription-like gradient; b) in the western gorilla, where the transcription-like gradient is apparent, but weaker than the replication gradient; c) in the cetacean Lipotes vexillifer, where both gradients are similar; and d) in the lizard Lepidophyma flavimaculatum, where the transcription-like gradient is stronger than the replication one.
In some rarer cases, such as in the yellow-spotted night lizard Lepidophyma flavimaculatum (Figure 2d), the correlation of nucleotide contents with D ssh t is better than with D ssh rW, indicating that in that species, processes causing deamination gradients due to the time spent single stranded usually tend to start within the D-loop. This could either indicate that in that species the frequency of transcription is overwhelmingly larger than that of replication, or that replication relatively rarely starts in the region of the regular OL. The latter option is in line with the fact that there is no recognized OL sequence in that lizard species at the regular OL location, between tRNA-Asn and tRNA-Cys, and that in this species, unlike in other lizards lacking a recognized OL sequence between these tRNAs, the adjacent 5' arm of tRNA-Asn and 3' arm of tRNA-Cys, including the short intergenic sequence, do not form OL-like structures such as those found in Trogonophis (Seligmann & Krishnan, 2006) and other lizards (Macey et al., 1997). Hence it is likely that patterns are due to replication resembling (converging with) transcription, rather than to transcription itself. This point is further discussed below.
These observations suggest that variation exists among species in the extent to which D ssh r converges with D ssh t, and that this variation might associate with life history: in the examples presented, regular replication gradients starting at the recognized OL sequence are observed in short lived species with high metabolic rates (shrew), while the convergence between replication and transcription increases for longer lived species with lower metabolisms (gorilla, dolphin, lizard), paralleling the dichotomy noted above for gradients between prokaryotes (where patterns are more reminiscent of those found in mitochondria of short lived mammals) and eukaryotes (resembling more those found in mitochondria of long lived animals with slower metabolisms). This justifies testing whether the extent of D ssh r->D ssh t convergence correlates with lifespan and other ageing-related processes. Gradient convergence and lifespan in Primates In Primates, the strength of the replication gradient that considers only the recognized OL (rW) does not correlate with maximal lifespan (r = 0.11, P = 0.29, one tailed test, not shown); the strength of the transcription gradient (rt) increases with maximal lifespan (r = 0.318, P = 0.049, one tailed test, not shown). This improvement in the correlation with lifespan fits the prediction that the actual replication gradient, calculated having considered all putative OLs and not only the one in the WANCY region, is to some extent collinear with the transcription gradient, and hence that the strength of the transcription gradient, rt, is a better estimate of the strength of the replication gradient than rW. In this case, and as expected by the working hypothesis, the extent by which rt is stronger than rW would measure the extent by which D ssh rX resembles D ssh t. I quantified this extent by calculating the residuals of rt for each Primate species from the regression between rt (dependent) and rW (independent) (rt = 0.822*rW+0.04, r = 0.83, P < 0.001). These residuals are, from a statistical point of view, unlikely to correlate with lifespan even though rt does: they represent only a small fraction of the variation inherent to rt, because rW explains 69% of the variation in rt. Nevertheless, results show that they correlate better than rt with maximal lifespan (r = 0.405, P = 0.016, 1 tailed test; see Figure 3), indicating that the extent of D ssh r->D ssh t convergence affects lifespan. Analyses reveal similar patterns in other groups, such as Carnivora (Figure 4, analyses excluding Pinnipedia). In these cases, no residual analyses were done, and rW was simply subtracted from rt. The correlation is positive as expected for a pool of groups excluding Mustelidae and other closely related groups. Patterns in Mustelidae closely resemble those for other Carnivora, except for an outlier, the Eastern spotted skunk (Spilogale putorius), whose maximal lifespan is lower than expected considering its relatively high D ssh r->D ssh t convergence.
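The residual analysis described above can be sketched as follows; the per-species values are made-up placeholders, and only the analysis structure (regressing rt on rW and correlating the residuals with maximal lifespan, one tailed) follows the text.

import numpy as np
from scipy import stats

# Hypothetical per-species values: rt, rW and maximal lifespan (years).
rt = np.array([0.35, 0.42, 0.28, 0.51, 0.46])
rW = np.array([0.30, 0.38, 0.27, 0.40, 0.44])
lifespan = np.array([22.0, 35.0, 18.0, 50.0, 40.0])

# Regression of rt (dependent) on rW (independent), as in the text.
slope, intercept, r, p, se = stats.linregress(rW, rt)
residuals = rt - (slope * rW + intercept)

# The residual of rt given rW is the measure of D ssh r -> D ssh t convergence;
# test its correlation with maximal lifespan (one tailed for a positive trend).
r_conv, p_two_tailed = stats.pearsonr(residuals, lifespan)
p_one_tailed = p_two_tailed / 2 if r_conv > 0 else 1 - p_two_tailed / 2
print(r_conv, p_one_tailed)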
Fig. 3. Maximal primate lifespan as a function of a measure of convergence between replication and transcription in primate mitochondrial genomes. The x axis is the residual of rt, the strength of the transcriptional deamination gradient, with rW, the strength of the replicative deamination gradient calculated considering only the classically recognized OL in the WANCY region. Gradient strengths are estimated by Pearson correlation coefficients (see also text for further explanations). Species names are followed by numbers that indicate pairing in phylogenetic contrast analyses, then by NCBI (genbank) entries for species that were not used by Seligmann et al. 2006a.
Convergence of replication towards transcription Analyses between the various life history traits and gradient strengths presented and discussed in the rest of this study did not detect any significant correlation with rW, while those with rt were systematically stronger and sometimes statistically significant. This is despite the strong, mathematically trivial correlation existing between rt and rW, which is also apparent from Figure 1. But the strongest correlations were systematically with rt-rW, confirming that the factor most relevant to life history is the extent of convergence of replication towards transcription, rather than the extent of the transcription-like replication gradient. This is the main point of the hypothesis presented here. Too extreme convergence between replication and transcription decreases lifespan Closer examinations of Figures 3 and 4 reveal that for species with relatively high (or even extreme) convergence between gradients (rt>rW: Cercopithecus aethiops, C. sabaeus, Macaca mulatta and M. sylvanus in Primates, Figure 3; similar patterns exist in Mustelidae, Figure 4), lifespan is sometimes much below the general trend expected according to other species with lower convergence levels. This suggests that at high D ssh r->D ssh t convergence levels, another factor decreases lifespan. It is plausible that collinearity between the processes increases the frequency of collisions between replication and transcription forks. This decreases the respective rates of these processes, increasing the overall time spent single stranded and causing more mutations. This increase might be greater than the relative decrease in mutation rate due to collinearity between the processes, especially at high collinearity levels. Figure 5 plots lifespan in Cetacea as a function of D ssh r->D ssh t convergence.
Fig. 5. Maximal lifespan as a function of a measure of convergence between replication and transcription in mitochondrial genomes in Cetacea. Axes are as in Figure 3. Species names are followed by NCBI (genbank) entries.
At low convergence levels, lifespan increases with convergence until a threshold region in D ssh r->D ssh t convergence. Beyond that threshold, lifespan decreases with D ssh r->D ssh t convergence. It is hence not a surprise to find a negative correlation between D ssh r->D ssh t convergence and maximal lifespans in lizards (Figure 6). Hence the few outliers found in Figures 3 and 4a would reflect the same phenomenon as the one observed for a larger part of species in Cetacea (those for which a negative correlation of lifespan with convergence exists at high convergence levels) or for lizards (Figure 6). Developmental stability and convergence between transcription and replication Analyses testing for correlations between D ssh r->D ssh t convergence and developmental stabilities yield qualitatively similar results to those found for associations with maximal lifespan: in some groups, convergence decreases stability (in a pool of lizards from several families, r = -0.52, Figure 7), and in others, convergence decreases instability (Amphisbaenia, r = -0.76, Figure 8).
Rates of development and convergence between replication and transcription As noted above, convergence between replication and transcription increases the frequency of collisions between these processes, hence decreasing their respective rates. Ultimately, decreased replication and transcription rates should impede an organism's development, decreasing its differentiation and growth rates. I used the length of the gestation period as an estimate inversely proportional to differentiation rate and tested for the expected positive correlation between gestation period and D ssh r->D ssh t convergence levels (see the example for Insectivora in Figure 9). Because maximal lifespan, together with brain size, correlates positively with the length of the gestation period (Sacher & Staffeld, 1974; Jones & MacLarnon, 2004), this result does not independently confirm the D ssh r->D ssh t convergence hypothesis, even though the mechanisms assumed to cause the correlations with lifespan and those with gestation length differ: lifespan is presumed to increase because convergence increases mutational robustness (only extreme convergence decreases mutational robustness and lifespan); at the same time, convergence decreases the rates of replication and transcription, and presumably also developmental rates. However, the rationale that D ssh r->D ssh t convergence affects both lifespan and gestation yields a prediction that is not trivial, despite the strong positive correlation that exists between lifespan and the length of the gestation period: in groups of species with high fertility and rates of development (short gestation), considered as r-strategists, one expects that D ssh r->D ssh t convergence adaptively coevolved with the length of gestation, while in groups of species with low fertility and rates of development (long gestation and lifespan), considered as K-strategists, it makes sense to expect adaptive coevolution between D ssh r->D ssh t convergence and lifespan. Hence, even though lifespan and the length of gestation are highly correlated, a testable, independent, nontrivial prediction exists: correlations between D ssh r->D ssh t convergence and lifespan should be weaker in r-strategists than those between D ssh r->D ssh t convergence and the length of gestation, while in K-strategists, the opposite is expected. This is estimated by subtracting the z transformed correlation coefficient between D ssh r->D ssh t convergence and the length of gestation from the z transformed correlation coefficient between D ssh r->D ssh t convergence and lifespan in that group (the z transformation was adjusted for differences in sample sizes between different taxonomic groups, see method in Seligmann et al., 2007). Figure 10 tests this prediction by plotting this subtraction as a function of the mean maximal lifespan for that taxonomic group, used here as an estimate of the extent to which the group is a relatively r- or K-strategist (short and long maximal lifespans, respectively). Results in Figure 10 fit the expectation that correlations with lifespan, relative to those with the length of the gestation period, increase along the r-K gradient. The increase in the subtraction is approximately gradual along the r-K gradient (which is estimated by the mean maximal lifespan in that group). According to this result, patterns from more than 100 mitochondrial genomes follow the complex predictions from a simple hypothesis.
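The per-group statistic plotted in Figure 10 can be sketched as follows: within one taxonomic group, the correlation of convergence with gestation length is subtracted from its correlation with lifespan, on the z transformed scale. The species values are hypothetical, and the plain Fisher transformation stands in for the sample-size-adjusted version used in the text.

import math
from scipy import stats

def fisher_z(r):
    return 0.5 * math.log((1 + r) / (1 - r))

def group_contrast(convergence, lifespan, gestation):
    # z transformed difference between the correlation of convergence with
    # maximal lifespan and its correlation with gestation length in one group.
    r_life, _ = stats.pearsonr(convergence, lifespan)
    r_gest, _ = stats.pearsonr(convergence, gestation)
    return fisher_z(r_life) - fisher_z(r_gest)

# Hypothetical group of 4 species: convergence (rt - rW), lifespan (years),
# gestation length (days); the prediction is that this contrast increases
# with the group's mean maximal lifespan along the r-K gradient.
conv = [0.05, 0.12, 0.20, 0.08]
life = [12.0, 20.0, 30.0, 15.0]
gest = [60.0, 75.0, 90.0, 65.0]
print(group_contrast(conv, life, gest))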
Replication versus transcription gradients in various species Results show that species vary widely in the extent of convergence between replication and transcription gradients. In many species, the replication gradient starting at the recognized OL is the only or the major gradient detected, as found for Crocidura (Figure 2a). In these species, no gradient resembling the transcription gradient, whether due to transcription or replication, was detected. This observation, considering that transcription occurs in all species, suggests that most mutations on mitochondrial DNA occur during replication. The lack of detection of gradients that resemble what could be interpreted as a transcription-related gradient suggests that in those fewer species where significant correlations occur between nucleotide contents and D ssh t, these reflect mutations occurring during replication, with replication origins distributed such that the overall replicational gradient (integrating over different replication origins) resembles the one caused by transcription.
Fig. 10. Difference between strength of association of convergence between replication and transcription gradients with maximal lifespan and with length of gestation period as a function of the mean maximal lifespan in that mammalian taxonomic group. For each taxonomic group, the Pearson correlation coefficient of gradient convergence with length of gestation in that group was subtracted from its correlation coefficient with maximal lifespan. Group names are indicated near datapoints, followed by the number of species used for the lifespan analyses, and the correlation coefficients with lifespan and length of gestation, respectively. Values used for the y axis, but not those indicated inside the figure, are z transformed correlation coefficients, taking into account sample sizes (see text). Carnivora- indicates that analyses were done excluding Mustelidae and Pinnipedia, and Primates+ indicates that analyses of this group included Cynocephalus and Tupaia.
It is possible that because transcription is much more frequent than replication, the reactions that create the gradients are saturated, and hence no gradient is detected at the time scales at which the phenomenon is observed here. Hence, while the comparative methodology used seems adequate to detect replication gradients, other methods should be used in order to detect transcription gradients. This means that at this evolutionary time scale, replication is the main phenomenon, and transcription is probably a secondary phenomenon, whose detection necessitates more sophisticated methods, as explained for some cases from the literature in the Introduction. The correlation of the convergence between replication and transcription gradients with lifespan was r = 0.80 in Chiroptera (5 species), but this group is not included in Figure 10 because genomic and gestation period data were both available for only 2 Chiroptera. Results presented for lizards (Figure 9) are not included because there is no gestation per se in these groups. Transcription or replication? For the sake of simplicity, I consider that the major cause of the observed gradients in nucleotide contents with D ssh t is replication, meaning that in these cases replication converged with transcription, but that transcription itself is not responsible for the observed gradients. Although this is at this point a rough simplification, there are several reasons beyond those given above justifying this assumption.
It makes sense that the polymerization rates by the gamma polymerase, the enzyme replicating mitochondrial DNA, and by the mitochondrial RNA polymerase (Bonawitz et al., 2006) differ, because these are very different enzymes and the functional requirements differ for each process: the frequency of transcription is much greater than that of replication, and its rate is also probably much greater. However, the impact of errors during RNA polymerization is lower than that during DNA replication and hence RNA and DNA polymerase fidelities are also probably very different. Deamination gradients result from time spent single stranded during these processes, but because one can assume that transcription is much faster than replication, it is likely that the properties of the mutation gradient resulting from transcription differ from those of the replication gradient. Hence effects of one D ssh unit on nucleotide contents should differ between gradients caused by transcription or replication. Examining the various graphs in Figure 2, one can see that this is not the case: the slopes found for gradients with D ssh rW and D ssh t are very similar when gradients are detected with each D ssh rW and D ssh t (see for example the western gorilla, Figure 2b). This justifies the simplifying assumption that replication is the major cause of the observed gradients, and this approach should be considered as a satisfying approximation at this point. This does not mean that this assumption should not be tested later, especially that exploring this issue might yield valuable information on the relative regulations of transcription, replication, and/or various types of replication, which are at the heart of the mitochondrial replication controversy and ageing-related pathologies. Note that even at that level of distinguishing between deamination gradients caused by transcription and those caused by replication in a situation where both are confounded because replication is collinear with transcription, bioinformatics analyses can be helpful. Two deamination gradients exist on the heavy strand, one caused by the chemical reaction C->T, and one by A->G (both hydrolytic deaminations). The former is the faster reaction, and therefore the latter saturates less quickly, also from an evolutionary point of view (see Krishnan et al., 2004a, b). Therefore each of these two mutation types reacts differently to D ssh . Hence the ratio between the slopes of each of these gradients should differ if the gradients are due to transcription (C->T should be less saturated and more similar to the A->G gradient because transcription is faster than replication) than when deamination gradients are due to replication. Hence such analyses could determine which process, transcription or replication, created the detected gradient(s), even when both processes are collinear and apparently confounded. Gradient convergence and lifespan in Primates Results in Figure 3 suggest that convergence between replication and transcription slows ageing-related processes in Primates. Note that Figure 3 shows that relative to other Primates, longevity in Homo sapiens is greater than expected according to convergence between replication and transcription. This would be congruent with the hypothesis that human longevities increased recently, due to factors other than convergence of D ssh r->D ssh t, but suggests that future evolution increasing this convergence could still increase longevity. 
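The slope comparisons discussed above (similar slopes for the D ssh rW and D ssh t gradients within a species, and the ratio between the slopes of the pyrimidine and purine deamination gradients as a diagnostic of transcription versus replication) can be illustrated with a minimal sketch. The gene-wise contents below are fabricated for illustration; only the comparison logic is taken from the text.

import numpy as np
from scipy import stats

def gradient_slope(dssh, content):
    # Slope of a deamination gradient: base content regressed on D ssh.
    slope, _, _, _, _ = stats.linregress(dssh, content)
    return slope

# Hypothetical gene-wise values of D ssh and of mean base contents at third
# codon positions on the light strand (fabricated, for illustration only).
dssh = np.linspace(0.05, 0.95, 13)
c_content = 0.80 - 0.50 * dssh   # C/(C+T), reflecting heavy strand A->G
a_content = 0.60 - 0.20 * dssh   # A/(A+G), reflecting heavy strand C->T

# Ratio between the slopes of the two gradients; the text predicts this ratio
# should differ depending on whether the gradients are created by replication
# or by the faster, less saturating transcription process.
ratio = gradient_slope(dssh, c_content) / gradient_slope(dssh, a_content)
print(ratio)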
The correlation in Figure 3 also remains significant after applying the method of phylogenetic contrasts (Felsenstein, 1985) to the data, suggesting that the results are statistically valid independently of phyletic constraints (r = 0.50, P = 0.03, one tailed test). It makes sense that results of regular correlations, with and without accounting for phylogenetic contrasts are qualitatively similar because evolution of tRNAs functioning as OLs tends to be saltatory (Seligmann et al., 2006b). Gestation time, despite its association with maximal lifespan in Primates (r = 0.67, P < 0.01), only slightly increases with the level of D ssh r->D ssh t convergence (r = 0.11, P < 0.1). This suggests that collinearity between replication and transcription might cause interferences, slowing down both processes and ultimately developmental rates. Even a weak effect on developmental rates (inversely proportional to gestation length) could be a potent selective pressure in natural populations, counterbalancing pressures against cumulating excess mutations that favor collinearity between the processes. This effect on growth rates is probably relatively weak in Primates and in general Kstrategists, which maximize lifespan rather than developmental rates (Brookfield, 1986). The opposite is expected in groups that are, relatively to Primates, more r-strategy-oriented, a strong prediction corroborated in Figure 10 and discussed below. Correlations between molecular and whole organism levels One should note that several correlations between life history parameters and molecular indices characterizing metabolic strategies of cells have already been described, specifically for Primates: the length of the gestation period with cost minimization of nuclear amino acid usages (Seligmann, 2003), cost minimization of mitochondrial ribosomal frameshifts , slopes of (regular) mitochondrial replication gradients (Raina et al., 2005); and now maximal lifespan with convergence between mitochondrial replication and transcription. Seligmann & Krishnan (2006) discuss how whole organism properties probably result from many different, coadapted cellular processes, so that the wealth of significant correlations detected between molecular properties and whole organism features should be of no surprise. In addition, it is notable that nuclear genome size is not related to life-history traits in Primates (Morand & Ricklefs, 2005), so that effects of mitochondrial properties are more likely to be detected in this group. Too extreme convergence between replication and transcription The examination of Figures 3 and 4 shows that the trend between maximal lifespan and D ssh r->D ssh t convergence has outliers, and that these outliers are usually placed in the same relative area of the graph: these are species with relatively high convergence but lower than expected lifespan. It is possible that this situation results from asymmetry in inaccuracies in maximal lifespan estimations, as sampling error can only cause lower values than the real maximal lifespan. However it makes little sense that the well studied Macaca species, for example, have a lifespan that is much greater than in Figure 3, although these species are clearly outliers in respect to the general trend in Figure 3. This situation was also observed in other taxonomic groups (results not presented graphically here but used in Figure 10), and it is remarkable that there were never cases of outliers with low convergence but high lifespan. 
Hence the hypothesis of statistical artifact is unlikely here, and this situation is most probably biologically meaningful. It indicates that low convergence between replication and transcription does not enable to reach a long lifespan, but that high convergence is not necessarily a sufficient condition to enable a long lifespan, and that other factors affect this. The results for Cetacea ( Figure 5) indeed show that high convergence might in fact limit lifespan. Presumably, this is because at high convergence levels, the decrease in mutations due to collinearity between replication and transcription might be smaller than the increase due to longer D ssh because of increasing delays due to collisions between replication and transcription. This could explain the relatively sharp boundary between the region where convergence increases lifespan, and the one where a negative correlation is observed in Cetacea, and would account for outliers in figures presenting results for other taxa. Rates of development and convergence between replication and transcription The hypothesis that collisions decrease rates of replication and transcription when both processes are collinear predicts that rates of development decrease with D ssh r->D ssh t convergence. The cause for this would differ from the correlation between D ssh r->D ssh t convergence and lifespan. For lifespan, convergence decreases cumulation of mutations and in general, increases mutational robustness; for developmental rates, they are the direct result of decreased replication and transcription rates because of increased collision frequencies between replication and transcription forks. It is notable that this rationale yields a molecular mechanism for the well known negative association between metabolic rates and longevities, as described in Insects (Antler flies, Bonduriansky & Brassil, 2005;Drosophila, Marden et al., 2003;Novoseltsev et al., 2005;Mockett & Sohal, 2006), nematodes (Jenkins et al., 2004;Chen et al., 2007;Lee et al., 2006;Hughes et al., 2007) and mice (Cargill et al., 2003;and others, Bonsall, 2006). Some ecological data explaining the tradeoffs exist (Bonduriansky & Brassil, 2005), and results suggest the tradeoff is due to dietary metabolism (Partridge et al., 2005a,b;Speakman, 2005a,b;Kaeberlein et al. 2006;Ruggiero & Ferrucci, 2006;Szewczyk et al., 2006;Wolkow & Iser, 2006). Other evidence shows that this rule might not be universal (Van Voorhies et al., 2004;Khazaeli et al., 2005;Johnston et al., 2006), stressing the need for unifying hypotheses. Several molecular or biochemical mechanisms have been proposed (Balaban et al., 2005;Bartke ,2005;Knauf et al., 2006;Powers et al., 2006 ) but no general molecular model exists, stressing the importance to link the D ssh r->D ssh t convergence hypothesis with the lifespan-growth rate tradeoff. Making a meaningful test for this prediction that D ssh r->D ssh t convergence decreases developmental rates (hence increases the length of gestation) is not straightforward because of the strong positive association that exists between maximal lifespan and gestation length. However, using evolutionary ecology theory on r and K strategists, the simple molecular mechanism makes complex predictions on the relative strengths of association of D ssh r->D ssh t convergence with lifespan and gestation length, respectively. 
The fact that these predictions are overall verified by the analysis of a large number of species and groups of species in Figure 10 is strong support for the D ssh r->D ssh t convergence hypothesis and its coevolution with major life history traits. Causal interpretations of correlations between lifespan and convergence between replication and transcription The hypothesis that collisions decrease rates of replication and transcription when both processes are collinear enables to predict the occurrence of species that seem outliers in graphical analyses. However, the longevity-growth rate tradeoff hypothesis suggests the possibility that the causal interpretation of the association of D ssh r->D ssh t convergence with maximal lifespan is opposite to the direction assumed. This is because offspring fitness decreases with parental age (Kern, 2001;Priest et al., 2002;Moore & Harris, 2003;Moore & Sharma, 2005), putatively due to ontogenetic cumulation of mutations, especially in mothers (McIntyre & Gooding, 1998;Hercus & Hoffmann, 2000), which are inherited by offspring. This issue is particularly relevant to mitochondria. Indeed, species with long lifespan probably have relatively high transcription/replication ratios. Hence what appears to be convergence of replication gradients towards transcription gradients could be the result of increased lifespan, rather than its cause. This interpretation assumes that the gradients observed are transcription-, rather than replication ones, which remains possible despite the arguments against this in previous sections. Notwithstanding these arguments, this interpretation is not compatible with other predictions presented here about developmental rates, the relatively frequent outlying species characterized by high convergence and lower than expected lifespan, and the threshold phenomenon observed in Figure 5. In addition, this individual-based observation is a stabilizing feedback mechanism where increased longevity causes inheritance of mutations that decrease offspring longevity. This would rather predict negative correlations, or no correlation at time scales larger than that of single generations, such as in the inter-species comparisons described in the Results. The specific situation in Homo, where recent evolution caused a rapid increase in lifespan that is not paralleled by a proportionately high D ssh r->D ssh t convergence, could be interpreted both ways: lifespan, which is known to have increased recently by man-made environmental changes and not cell metabolism (Larkin, 2000), does not fit what would be expected according to cell metabolism (as measured by D ssh r->D ssh t convergence), suggesting that in other species where no such fast changes occurred, D ssh r->D ssh t convergence explains lifespan. Alternatively, one could speculate that in Homo, the recent man-made increase in lifespan did not yet alter the relative strengths of transcription versus replication gradients, following the hypothesis that a long lifespan increases more the number of transcriptions than of replication. According to that scenario, the relatively recently increased transcription/replication ratio did not yet result in stronger transcription gradients in Homo, explaining the position of that species in Figure 3. 
Besides being based on a more complex rationale than the former, the latter interpretation also seems less likely: if the causal mechanism underlying the D ssh r->D ssh t convergence-lifespan association were that increased lifespan causes more transcription-related deaminations, this would be due to mutations accumulating ontogenetically (see the effects of parental age on offspring quality referred to above). However, following this rationale, gradients should react almost immediately to the increase in lifespan, which is not the case in Homo. Developmental stability and convergence between transcription and replication It is interesting to note that the principles observed for the association between D ssh r->D ssh t convergence and lifespan are also valid for that between D ssh r->D ssh t convergence and developmental stability. This observation fits the general trend that developmental instability associates with low fitness and pathologies. It would be interesting to explore whether this hypothesis of convergence of replication with transcription fits with the "double-agent" unifying hypothesis of ageing and diseases, based on the tradeoff between oxidative stress inducing genetic reaction mechanisms against stress and its effects on ageing and age-related disease (Lane, 2003). The molecular processes presented provide mechanistic explanations for these similarities. Conclusions I present the original hypothesis that heavy strand sequences of tRNA-coding genes functioning as additional light strand replication origins tend to increase the similarity of mutational patterns resulting from replication with those due to transcription, putatively decreasing the accumulation of mutations during the two processes. Variation exists among mitochondrial genomes in the extent to which replication mutation gradients resemble transcription gradients; in most species (mainly short lived with high metabolism), replication gradients do not resemble transcription gradients. The similarity of replication mutation gradients to transcription ones correlates positively with maximal lifespan in Primates and other taxa. Systematically, outliers to these trends are short lived species whose replication mutation gradients relatively resemble transcription gradients; the opposite (long lived outliers with replication gradients not resembling transcription gradients) does not occur. In some taxa, such as Cetacea, this phenomenon is enhanced, with two clear-cut ranges in similarity between replication and transcription: one with relatively low similarities, where maximal lifespan increases with the similarity of replication gradients to transcription gradients, and another region where similarities are highest and maximal lifespan decreases with similarity. These patterns suggest that low convergence does not enable high maximal lifespans, but too high convergence limits lifespan, probably because too many collisions between replication and transcription forks decrease both replication and transcription rates, increasing durations spent single stranded and mutation frequencies. The length of gestation periods also increases with convergence, notably in r-strategists; in K-strategists, convergence levels coevolve more with maximal lifespan, fitting the rationale that the molecular machinery is adapted for high metabolism and fertility in r-strategists, and for high survival in K-strategists.
Results are interpreted assuming that the observed phenomena are due to replication that sometimes resembles transcription, but are not due to transcription. Evidence supporting this is presented: in species possessing two gradients, one according to the classical replication origin and one resembling transcription, both mutation gradients have very similar slopes, which is more compatible with a single enzymatic machinery (the mitochondrial gamma DNA polymerase) causing both gradients than with each being due to a different polymerase. A method based on differences in the respective rates of replication and transcription for distinguishing between replication and transcription gradients is suggested, where the ratios between slopes of mutation gradients of purines versus pyrimidines should vary when mutation gradients are due to replication resembling transcription rather than to transcription itself. The hypothesis that results are due to a causal relationship opposite to the one proposed (high longevity causes high transcription/replication ratios and hence transcription gradients dominate replication ones) is examined and discussed. This interpretation is unlikely, not only because gradients seem to be due to a single enzymatic process, but also because this hypothesis is less compatible with patterns in the data: among others, it does not predict the patterns observed for outliers or the differences between r- and K-strategists.
Conflict Management Strategies for Children of Interfaith Marriages in Religious Decision Making Interfaith marriage is a form of pluralism development in Indonesia. This, however, creates a conflict for someone who chooses to marry someone of a different religion, such as when choosing a child's religion. Parents will decide the religion of their children when they are still a baby. However, when children understand the law, they will decide their religion which is often at odds with their parent's decisions. Differences of opinion in making religious decisions will lead to interpersonal conflicts in the family. This research used a qualitative descriptive approach with a constructivism paradigm. This research was carried out in the city of Solo and its surroundings with a population of children from interfaith marriages who have reached the age of 17 years and over. The sampling technique used in this research is purposive sampling. This research aims to determine the conflict and management strategies that exist in families of different religions in making religious decisions for their children. Dyadic power theory is a theory of power that looks at several factors that cause a person to believe he has power over others through interpersonal communication. INTRODUCTION Interfaith marriage is not a new phenomenon in Indonesia, and it is a form of plurality [1]. In the view of Islam and other religions, interfaith marriage is not allowed. The holy book of Islam explains that interfaith marriage is prohibited, as well as in II Corinthians 6:14-18 regarding the prohibition of interfaith marriage. The Islamic viewpoint on interfaith marriages is expressed in the Fatwa decision of the Indonesian Ulema Council (MUI) Number: 4/MUNAS VII/MUI/8/2005, which concludes that interfaith marriages are not permissible because they will cause conflict between Muslims and humans. Religious differences in marriage can cause several conflicts. One of them will arise when the couple has children. Religious conflicts, such as the child's choice of religion, are common among children born from interfaith marriages. The decision to choose religion in children as adults will become the child's identity, which will affect his or her future. Family decisions are frequently made by the person who dominates the family. This means that each family has one person with decision-making authority, namely the parents. This case is significant to study because each family member has their own decision to make but is regularly hampered by the parents' decisions. When a child reaches the age of 18 and is legally capable, he or she will be able to make decisions about his or her own life. Religion is a sensitive subject; it is feared that it will cause conflict among family members, with one party unable to resist each other's wishes and accept the decisions of others. This can lead to interpersonal conflict in the family, which can persist if not addressed. There has been previous research on the religious choices of children in adolescence from interfaith marriages. The study depicted the conflict that occurred, but there was no discussion of conflict management strategies [2]. A similar study was conducted on the religious decision-making for children of interfaith couples who try to decide their children's religion but still want to maintain a respectful relationship [3]. 
However, this study did not use dyadic power theory to explain decision-making in families, instead applying communication strategies appropriate for the conflicts encountered by the informants. Based on the above description, the researchers formulate the problem of this research as follows: "What is the conflict management strategy of children of interfaith marriages in making religious decisions?" Dyadic Power Theory This research used dyadic power theory, which focuses on power and close relationships in making joint decisions. This theory was first proposed by Rollins and Bahr (1976) and later revised by Dunbar, with the conclusion that power develops as part of a relationship, especially in dyads such as husband-wife, parent-child, and employee-boss, and that it can influence how couples interact with one another and make decisions together [5]. Dyadic power theory is a power theory that examines a number of aspects that lead to a person believing they have power over others through interpersonal communication [6]. Interpersonal communication and power from others can help people make joint decisions, which can lead to better relationships. Joint decisions made in a close relationship, such as a family, will benefit from the power that exists inside it. The power in question is that of a single family member who holds the family's highest position of respect. Power is one way of strengthening close relationships through interpersonal communication and decision-making in the family [21]. Often, it is the parents who wield such authority. However, in this research, children play an important role in determining which religion to choose. As a result, the children take over the power that was formerly held by the parents. Parents who believe they have authority over their children will make decisions for them. However, children are not always able to carry out their parents' decisions, which can lead to conflict. In this research, dyadic power theory, which is a theory about power in close relationships, is used to examine interpersonal relationships in the family. In interpersonal relationships with many differences, decisions can be made by discussion or unilaterally. Unequal power in the family leads to unilateral decisions in interpersonal relationships; the power lies with one member of the family. Interpersonal relationships with unbalanced power will lead to vague satisfaction on one side when decisions are made; hence it is critical for interfaith families to manage interpersonal conflicts in their relationships [6]. Conflict Management Strategy Interpersonal conflict can cause discontent in relationships because of differences in goals or opinions; therefore, conflict management becomes a strategy to avoid prolonged conflict [20]. Conflict management strategies can range from avoidance to doing something extreme such as physical violence [17]. Conflict management strategies are influenced by several aspects, namely goals, emotional state, cognitive assessment of the situation, personality and communication competencies, and family history. DeVito outlined conflict management strategies that can be linked to this research [7]. 1) Avoidance and Active Fighting Strategy. Avoidance can take the form of running away from the place of conflict or of emotional avoidance.
Sometimes, it is preferable for an individual to fight actively by listening to and respecting the opinions of others. 2) Force and Talk. Force is a strategy related to physical violence that can harm interpersonal relationships. The Talk strategy might help to resolve problems by communicating honestly and giving good feedback on the other party's arguments without resorting to violence. 3) Defensiveness and Supportiveness. Defensiveness is a way of speaking in a judging tone. Conversations that pass judgment on specific parties will provoke a negative response. Supportive strategies include carefully selecting phrases to avoid making someone feel judged and making it easy for someone to listen to arguments. 4) Face-Attacking and Face-Enhancing Strategies. A face-attacking strategy is used to bring down the other party by exploiting their flaws. This strategy focuses on blaming others rather than finding solutions to problems. Meanwhile, face enhancing is a strategy that emphasizes apologizing, providing support, and respecting others' decisions. 5) Silencers and Facilitating Open Expression. Silencing is a strategy that involves using unpleasant actions like sobbing or shouting to silence the other person. Facilitating open expression is a dispute resolution strategy that includes positive behaviors such as allowing the other person to express the truth and appreciating the viewpoints of others. 6) Gunnysacking and Present Focus. Gunnysacking occurs when someone discusses past mistakes during a conflict. The present focus is a strategy that only focuses on the current conflict and attempts to find a way out of it. 7) Verbal Aggressiveness and Argumentativeness. Verbal aggressiveness is the selfish behavior of one party who seeks to win an argument by hurting the other, while argumentativeness occurs when someone persuades the other party to agree to an argument without hurting them. Rahim and Bonoma (1979) also provided five conceptions of conflict management strategies: obliging others, asserting self-will, avoiding conflict, dominating, and compromising or discussing. These five are based on concern for oneself and for others [18]. METHODS The constructivism paradigm was applied in this qualitative descriptive research. This method was used to investigate the material in more depth. The research was conducted in the city of Solo and its surroundings, with the research population consisting of children of interfaith marriages. Purposive sampling was used for sample selection, involving people with an understanding of the object of research. The sample in this research was children from interfaith couples who were over 17 years old. The age of 17 was chosen because at that age a person understands the law and can decide which religion to follow. Data collection was carried out using primary and secondary data. Primary data were obtained through interviews with several research participants, while secondary data were gathered from reference books or journals related to the topics of decision making, family relationships, and interpersonal communication. The review was done by comparing the interview findings with the research as a whole. Data reduction was done by sorting the data so that they did not deviate from the research topic. Coding was done using a deductive approach that progressively narrowed the data into groups.
The data interpretation stage was used to understand the data using dyadic power theory and conflict management strategies. Finally, theoretical triangulation was used to test the validity of the data by determining the research pattern through theory-based analysis. Father's power in deciding his children's religion The experiences shared by the three respondents show that their fathers have the power to decide things in the family. The authority of the father is manifested in children's religious decisions when they are young. "…. When I was young, I was a Catholic, because my parents married in Catholicism. In Indonesia, you can't get married in two religions. I was raised Catholic even though my mother was Confucian and my father was Catholic…." (Respondent 1) "….since I was young, my father was Catholic" (Respondent 2) "….I was previously a Moslem, I was taught Islam since I was a kid" (Respondent 3) The explanations of the three respondents indicate that their fathers had the power to decide their children's religion while the children were young. All three respondents followed the father's religion because he had the dominant power in the family. The power of each family member can influence decision-making. Dunbar argued that dyadic power applies when the person who is most dominant in an interpersonal relationship has more power to make decisions [9]. As small children, the three respondents followed their father's religion because they could not yet decide which religion to follow; this occurred because the father's power was greater than that of other family members. Interpersonal conflict is common in close relationships experiencing negative emotions as a result of differences in viewpoints and goals [24]. In delivering messages, there are times when children and parents have opposing viewpoints, which can lead to interpersonal conflicts. "At the age of 17, I wanted to make an ID card, and I didn't feel like I fit in Islam, I just felt it was more comfortable in my mother's religion" (Respondent 3) Respondent 3 expressed a different point of view. She felt more at ease if she practiced her mother's religion. Disagreements in viewpoints between children and parents almost always lead to conflict, including in the relationship between respondent 3 and her parents. "I converted to Islam following my mother because many of my family are Muslim… then my father found out about this, and I was labeled as an apostate" The experience of respondent 2 shows that there were differences of opinion between him and his father. The disagreement was between his father's religion, which he had followed from a young age, and the religion to which he now adheres. These differences of perspective created interpersonal conflict between father and son. In contrast to respondents 2 and 3, respondent 1's difference of opinion was with her mother. Her mother had differing views and concerns about the religious choice her child was considering before she converted to Islam. This concern was based on the religion of the mother's extended family. However, over time the respondent's mother came to share her opinion, so the interpersonal conflict did not last long. Respondents 2 and 3 experienced similar interpersonal conflicts, specifically conflicts with their fathers, whose preferences differed from their children's choices.
This caused interpersonal conflicts that made both parties uncomfortable. According to Verderber and Fink, interpersonal conflict is a disagreement between two people who understand that their goals are different [10]. Respondents 2 and 3 had religious perspectives and desires different from their fathers'. This created interpersonal conflict in their relationships. Respondent 1 had a similar experience, despite having a different point of view from her mother. However, these differences could be addressed immediately, so the interpersonal conflict did not last long. Joint decisions were reached through closeness and mutual communication in the interpersonal relationship [2]. Following the interpersonal conflict, respondent 1 became closer to her mother and came to share the same views. The closeness of the two was marked by the easing of the mother's worries about her child's religious decision. Religious decision-making by children Conflicts arose for the three respondents, where one of the parents, either father or mother, did not agree with the religion they chose. When respondent 1 converted to Islam, she experienced some conflicts with her Confucian mother and extended family. Respondent 1 was a devout Catholic until she graduated from high school. One day, something happened that made her fall and lose her way, and she began to waver about her religion. This made respondent 1 decide to embrace Islam and discuss it with her parents. Both parents were relieved that their daughter chose Islam, but, on the other hand, her mother did not agree with her choice of religion because the mother's extended family adhered to Confucianism, and she was afraid that her daughter would be disparaged and rejected. "My parents' reaction was typical because my grandmother's family was diverse. In fact, the challenge was my mother's family. I didn't want to go to Medan until my grandmother got sick. My mother's family has not agreed until now, but I have not given it much thought" (respondent 1) The mother's concerns regarding her child's religious preference made their interpersonal relationship a bit problematic. However, the conflict did not last long because both of them understood and were willing to listen to the reasons for the conversion. Respondent 2 is still experiencing conflict with one of his parents. His father is opposed to his son's change of religion. "When I was in elementary school, I had a baptismal name. I was raised Catholic, but my mother was originally Muslim. After my parents separated, I converted to Islam" (respondent 2) Respondent 2 embraced Catholicism as a child until grade 1 of high school and converted to Islam when he was in grade 2 of high school. Prior to his conversion, he had a strained relationship with his father. When his father learned that he had changed religion, he became enraged and said harsh words that hurt his feelings. Respondent 2 decided not to contact his father as a result of this negative response. His parents are currently in the process of divorcing. Religion became a sensitive issue to discuss in respondent 3's case. This occurred after she converted to Christianity, her mother's religion. "When I was 17, I wanted to make an ID card and I didn't feel like I fit in Islam, I felt comfortable in my mother's religion.
I was unsure about which religion to choose, then I discussed it with my parents. Despite the pros and cons, until now I have embraced Christianity. My mother agrees because she is also a Christian, but my father does not because I was taught Islam from a young age" (respondent 3). Respondent 3 received an Islamic education from her father when she was a child, but as an adult she felt she was incompatible with Islam. She tried to tell her parents about it, but her father was opposed to her conversion and was dissatisfied with his daughter's religious preference. To this day, religion is still a sensitive issue in the family. The three respondents decided to change religions when they grew up, and in each case the decision was rejected by one of the parents. This parental refusal had an impact on interpersonal conflicts in the family. In the families of respondents 2 and 3, power was concentrated on one side, namely the father, resulting in an unbalanced power structure. According to Dunbar, growing up in a family with unequal power can blur the lines between power and contentment [5]. Conflict arises as a result of one side's dissatisfaction with the religious decisions made by the children, and this conflict creates a schism between parents and their children. Respondent 1's experience was different: although one of the parents was initially concerned about the child's religion, when the child attempted to explain, the parent was willing to accept differences of opinion. Dyadic power theory shows that when power in the family is balanced, in other words when power lies with each family member, members tend to communicate in an approachable way [6]. The strategy used by respondent 1 was to explain her reasons for converting. DISCUSSION The three respondents are from Javanese families, which generally follow the decisions of the head of the family. They adhere to the rule that children must embrace the religion of their father, which shows the dominant paternal power in the three families. Wives in Javanese families tend to follow the decisions made by the head of the family, namely the husband, and this tendency persists even when spouses have the same social status and education. The father is the family's head and acts as the decision-maker, emphasizing the interaction of family members in negotiating joint decisions [8]. However, power dominance is not limited to Javanese families. Several studies have concluded that fathers have more influence and power in the family due to the tradition of women supporting men, particularly in the family realm [11]. The lack of women's power in the family means that fathers have more power to influence and make decisions. In interfaith families, religious adjustment is borne by the children and also by the women, meaning that they are expected to follow the rules or religion of the head of the family [23]. This experience also demonstrates how patriarchal values influence interfaith family negotiations and how women are more likely to bear the burden of adjustment. Two out of three respondents practiced their father's religion and were rejected by their father when they decided to change religions. This demonstrates that in these two respondents' families the father wields the power, as well as the authority to make religious decisions. Decision-making often occurs amid interpersonal conflicts, especially when one party holds more power in several areas [14].
According to dyadic power theory, a person's strength and power are shaped by how they influence and control other people in close relationships, so it becomes difficult for someone who is less empowered to take power in making decisions. However, when it comes to children's religion, fathers do not hold greater authority, because religion is a personal choice. Differences of opinion regarding the decision can lead to interpersonal conflict. Kellerman argued that interpersonal conflict can arise when people have opposing viewpoints on a topic [12]. Face Attacking and Face Enhancing Strategies Respondent 2 experienced verbal attacks from his father as a form of rejection related to his religious conversion. Such attacks do not aid in the resolution of conflicts but rather worsen relationships, particularly interpersonal relationships. "I converted to Islam following my mother because many of my family members are Moslem. Then, my father found out about this, and I was labeled as an apostate. Now, we do not communicate any longer" (respondent 2) The statements of respondent 2 show that the relationship between the two became increasingly distant due to the father's verbal attacks. Respondent 2 preferred not to reply to his father's words because he was upset and hurt. Verbal attacks directed at someone in a close relationship can trigger conflict because they are hurtful and aggressive [22]. Face attacking and face enhancing strategies often occur in interpersonal relationships. Face attacking is a strategy in which someone attacks another person by exploiting their weaknesses [7]. Instead of focusing on resolving a conflict, some people tend to blame others, causing relationships to deteriorate. Respondent 2 experienced conflict because his father focused on his son's religious conversion, which made the father angry. The father of respondent 2 intentionally vented his emotions with harsh words that made their relationship worse. Face enhancing strategies help others maintain a positive image, and those who use them tend to engage in discussion when looking for a solution. The results of such discussions reduce the likelihood of future conflicts and improve interpersonal relationships, particularly family relationships [7]. Unfortunately, in this case, respondent 2 and his father did not discuss the conflict at hand. When the father vented his rage through verbal attacks, respondent 2 preferred to avoid his father and ended the relationship. Respondent 2's strategy harmed their interpersonal relationship: the conflict was not handled properly, and the two drifted apart due to opposing viewpoints. Parents frequently respond in ways that are contrary to the wishes of their children. This can lead to conflict because parents have a sense of ownership over their children, and it is assumed that the parents' choice is the best choice [13]. Avoidance and Active Fighting Strategies Conflict avoidance occurs when a person leaves the scene of the conflict. Emotional avoidance is another type of avoidance, in which a person withdraws psychologically from the conflict because he or she does not want to deal with the problems and arguments that cause it. Instead, one should participate in interpersonal conflicts by actively listening and expressing one's opinions without hurting the other person. ".... Due to the conflict, I had to stay at my aunt's house.
Until now, my father has been adamantly opposed to my conversion to Christianity. We no longer discuss religion in the family because he is still sensitive" (respondent 3) The account of respondent 3 shows that avoidance was practiced by both father and daughter. The father avoided his emotions by neither listening to nor agreeing with his daughter's arguments, while the daughter chose to leave the place where the conflict was occurring. Their responses meant the conflict could not be resolved on the spot. Avoidance causes a person to fail to fulfill the interests of both himself and others because he refuses to discuss the conflicts that arise [19]. Respondent 3 and her father both avoided the conflict, so they were unable to reach an agreement. In such a case, someone must act as an intermediary to help the two parties understand each other; another family member, namely the mother, should participate in the family relationship in order to resolve the conflict. Unfortunately, this did not occur here, because the two of them had already avoided discussing the conflict. The avoidance practiced by both had a negative impact on the interpersonal relationship between father and daughter. The rejection of the religious conversion experienced by respondent 3 occurred because of the father's sense of ownership over his child. Parents exert power over their children when they are young, but this can change as children grow up, because they come to have their own power and make their own decisions [4]. Respondent 1 avoided the conflict as well. Her avoidance took the form of not caring about the extended family's response or about her mother's concern over that response. Because of this avoidance, there was interpersonal conflict in the family. Interpersonal conflict is among the most damaging things that can happen in a relationship because it causes negative emotions to arise [15]. Respondent 1 experienced negative emotions, so she avoided the place of the conflict. After avoiding the issue for a while, respondent 1 felt something was wrong with her actions and felt guilty. Guilt can aid in the resolution of interpersonal conflicts [25]; as a result, respondent 1 took an active role in fighting for acceptance by explaining her conversion. Respondent 1 effectively communicated her argument about her religious conversion, allowing her mother to accept the differing points of view. CONCLUSION Each family has its own way of dealing with conflict. Two out of three respondents did not succeed in resolving the conflicts in their families, resulting in long-lasting interpersonal conflict. Respondents 2 and 3 both chose strategies that negatively impacted their interpersonal relationships with their parents. In contrast, respondent 1 chose a strategy of actively fighting for parental approval. Dyadic power theory, a theory about power in close relationships, can be applied to interpersonal relationships when making decisions. However, if interpersonal conflicts arise during the decision-making process, they must be resolved first. Essentially, dyadic power theory is a theory of decision-making power based on power, resources, and social exchange to reach joint decisions. When power is concentrated on one side of the family, it is very likely that decisions are made by one party only, leaving the satisfaction of other family members unclear.
This unclear satisfaction can lead to interpersonal conflict, so balanced power among family members can help children make religious decisions while minimizing conflict. Often, power is concentrated on one side of the family, namely the parents, and particularly the father. Husbands or fathers typically wield more power in a relationship and can make decisions unilaterally [16]. If interpersonal conflicts arise during the decision-making process, it is preferable for those involved to be aware of conflict management strategies. In that way, conflicts can be resolved more easily and joint decisions can be made without hurting one another, because religious decisions involve several family members, not only children or parents.
2022-05-19T15:23:04.316Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "75e82b2445ecec01d279ae03946dae10628a9785", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125974100.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9162a5ddf18f719b172e3531c051eeca1b82a3a9", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
195872905
pes2o/s2orc
v3-fos-license
eMap: a Web Application for Identifying and Visualizing Electron or Hole Hopping Pathways in Proteins I. INTRODUCTION Many proteins, including photosystem II, ribonucleotide reductase, galactose reductase, cytochrome c oxidase, photolyase, and cryptochrome, utilize redox-active aromatic Tyrosine (Tyr or Y) and Tryptophan (Trp or W) amino acid residues to shuttle electrons or holes. 2-5 The recent analysis by Gray and Winkler has revealed that one third of structurally characterized proteins exhibit extended chains of Tyr and Trp residues, which they hypothesize to serve as a universal oxidative stress protection mechanism in proteins. 6,7 Understanding the mechanistic features of proteins and their stability, therefore, often depends on the availability of detailed information on the electron/hole transfer pathways through aromatic sites. Here, we present eMap, a robust computational tool aimed at qualitative mapping of electron hopping pathways in proteins based on their tertiary structures. The primary purpose of this software is to efficiently identify possible electron hopping channels to be used in further quantitative studies. The performance of the model is illustrated on two representative proteins. Existing software for predicting paths of electron transfer in proteins ranges from empirical models (e.g. the Pathways plugin 8 for VMD 9) to more sophisticated electronic-structure-based schemes (e.g. the Electron Tunneling in Proteins Program, or ETP 10). The Pathways model 11,12 by Beratan and Onuchic describes electron transfer as a collection of pathways, each of which is defined as a sequence of through-space, through-covalent bond, and through-hydrogen bond hopping events with multiplicative penalty functions. The ETP software developed by Stuchebrukhov and co-workers 10 evaluates and visualizes electron tunneling current based on multi-level electronic structure calculations, including semi-empirical simulation of an entire protein. However, this sophisticated analysis is computationally demanding. eMap provides an inexpensive chemistry-inspired alternative and can be viewed as a coarse-grained version of the Pathways model, limiting the analysis to through-space hopping between automatically identified Electron Transfer Active (ETA) moieties. The structure of the manuscript is as follows. The empirical electron hopping model used in eMap is described in Sec. II A and II B. The structure of the software is outlined in Sec. II C. The functionality and user interfaces are described in Sec. II D. Finally, the results of the eMap analysis for two representative proteins are discussed in Sec. III. II. MODEL AND IMPLEMENTATION A. Pathways model In the graph of Fig. 1, surface-exposed residues are indicated as squares, while buried residues are indicated as circles.
The eMap analysis is based on a coarse-grained version of the Pathways model [11][12][13][14] with only through-space tunneling between aromatic and user-specified sites being considered. In the Pathways model, an electron/hole transfer pathway between the specified donor and acceptor is described as a series of through-space, through-covalent bond, and through-hydrogen bond tunneling events. Considering multiplicative penalty functions (ε_i/j/k) for each tunneling event, the resulting tunneling matrix element (T_DA), an effective donor-acceptor coupling, has the following form: [11][12][13][14] T_DA ∝ Π_i ε_i × Π_j ε_j × Π_k ε_k. (1) The penalty functions for through-covalent bond tunneling (ε_j = 0.6) are constant regardless of the nature of the atoms and bond length. The penalty functions for through-space (ε_i = 0.3 × exp(−1.7(R_i − 1.4))) and through-hydrogen bond (ε_k) tunneling are empirical and distance-dependent, where R_i/k is the interatomic distance in bohr. The protein then can be viewed as an undirected graph, with each atom representing a node and the edges connecting the nodes being associated with the penalty functions. To make the problem tractable with graph theory methods, it is convenient to operate with non-negative edge lengths and additive parameters rather than multiplicative penalty functions. This can be achieved using modified penalty functions, P = −log ε. Maximizing the product of penalty functions, which determines the T_DA coefficient (Eq. 1), is equivalent to minimizing the sum of the corresponding modified P functions. [12][13][14] This can be efficiently done using graph theory algorithms, for example, Dijkstra's. The analysis relies on the available tertiary structure and a specified electron donor and acceptor. The model employed by eMap can be viewed as a coarse-grained extension of the Pathways model where only through-space hopping events between ETA sites are considered. The unique features that distinguish eMap from other available software are discussed below. The nodes of the graph are the automatically identified ETA moieties (aromatic amino acid residues and aromatic fragments of cofactors) together with any user-specified sites (see Sec. II D for more details). The software then finds the shortest paths connecting a given electron/hole source (node 1 in Fig. 1) to either all surface-exposed residues of the protein (nodes 2, 3, and 7 in Fig. 1) or to a user-specified target residue, which can be buried or surface-exposed. More details on different modes of using eMap are given in Sec. II D. C. Implementation details eMap is implemented as a web application and takes information on the protein structure (PDB) and user-defined algorithmic parameters as the input. The eMap architecture is schematically shown in Fig. 2. The front-end part is responsible for direct communication with the user. The initial step of the analysis is to set up the input for eMap, which includes specifying the protein structure using a PDB ID or PDB/CIF files and defining the algorithmic parameters (see Sec. II D). This step is fully performed on the client side (Module I in Fig. 2). Once all input parameters are specified, the back-end part carries out the preliminary analysis based on the specified input. The first step on the back end is to parse the protein structure file, which mainly relies on the open-source Biopython package. 15,16 eMap locates all ETA sites, including aromatic amino acid residues, aromatic moieties of cofactors, and user-specified ETA sites (Module II in Fig. 2). The next step is to identify the surface-exposed ETA sites (Module III in Fig. 2).
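Before turning to the surface-exposure classification, the graph formulation of Sec. II B can be illustrated with a minimal sketch (not the eMap source code): pairwise through-space penalties are converted into additive edge weights P = −log ε, and Dijkstra's algorithm then returns the most strongly coupled hopping path. The site names and distances below are hypothetical.

```python
import math
import networkx as nx

def through_space_penalty(r):
    """Distance-dependent through-space penalty, eps = 0.3*exp(-1.7*(R - 1.4));
    r is taken in the same units assumed by these empirical parameters."""
    return 0.3 * math.exp(-1.7 * (r - 1.4))

# Hypothetical ETA sites and pairwise distances.
distances = {
    ("FAD", "W400"): 5.0,
    ("W400", "W377"): 5.5,
    ("W377", "W324"): 6.0,
    ("FAD", "W377"): 10.5,  # a longer, less favourable direct edge
}

G = nx.Graph()
for (a, b), r in distances.items():
    eps = through_space_penalty(r)
    # Additive weight: minimising the sum of -log(eps) maximises the product of eps.
    G.add_edge(a, b, weight=-math.log(eps))

print(nx.dijkstra_path(G, "FAD", "W324", weight="weight"))
# -> ['FAD', 'W400', 'W377', 'W324'] for these made-up distances
```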
ETA residues are classified as buried or surface-exposed using residue depth [17][18][19] or relative solvent accessibility [20][21][22] criteria (see Sec. II D). Biopython, 15,16 together with the MSMS 17 or the DSSP 20,23 software, is utilized to perform this classification. A pairwise distance matrix is then constructed for the ETA sites, and an image of a graph, with each node representing an ETA site and the edge lengths being defined by the distances between sites, is returned to the front end (Module III in Fig. 2). The final step on the back end is to search for the shortest paths connecting a specified electron/hole source to each surface-exposed residue or to a single user-specified target (Module IV in Fig. 2). The analysis is done using the NetworkX python package, 24 and the graphs are visualized using PyGraphviz. 25 The graph images, along with the results of the analysis (the identified paths ranked by their lengths), are then passed to the client side, where the pathways are also visualized in 3D using the NGL Viewer 26,27 (Module V in Fig. 2). Some of the relevant algorithmic details and features are discussed below. The full list of external packages, references, and licenses is given in Supporting Information. D. Features and the user interface Below we discuss the key input parameters and algorithmic details used to construct the graph and predict the most efficient electron transfer channels. Specifying ETA sites. Once the protein structure uploaded by the user (using either a PDB ID or a PDB/CIF file) has been parsed, the user can specify the sites to be considered as ETAs and, therefore, to be included in the analysis. This is done using the options illustrated in Fig. 3a and 3b. By default, all of the Tyr and Trp residues from every chain of the protein and all of the automatically identified aromatic moieties of cofactors are included in the analysis. The "Additional Residues" tab (Fig. 3b) shows identified non-amino acid aromatic sites. The user can also manually specify ETA sites atom-by-atom (from the PDB atom serial number) using the Custom Atom Range option (Fig. 3b). Identifying surface-exposed residues. The user can choose one of two algorithms to classify residues as surface-exposed or buried (Fig. 3a). The default option is the residue depth criterion. [18][19] Alternatively, the user can choose relative solvent accessibility, [20][21][22][23] which is defined as the ratio of the calculated solvent accessible surface area to the tabulated maximum solvent accessible surface area (MaxASA) for this residue type. 22 Relative solvent accessibility cannot be evaluated for non-protein ETA sites due to the lack of pre-computed MaxASA values for non-protein residues. Graph and shortest path search parameters. The general parameters panel sets the distance measure used in the penalty function: either the distance between the centers of mass of the two ETA sites or the shortest distance between two atoms of the ETA sites (Fig. 3a). In addition, the "Advanced" tab enables tuning threshold parameters (Fig. 3c) for how the graph is constructed. Edges with distances greater than the Distance Cutoff (Fig. 3c) are immediately discarded. The density of the graph can be tuned using the Edges per Vertex and Standard Deviation (SD) sliders (Fig. 3c).
The former specifies the percentage of the shortest edges that are kept per node, with a minimum of 2 edges being preserved as long as they satisfy the distance cutoff. Among the remaining edges, only those with length l ≤ l̄_node + σ_node are kept, where σ_node is determined by the SD parameter and l̄_node is the average length of the edges for a given node. Specifying electron/hole donor and acceptor. Once all of the input parameters are specified and the structure has been processed, the user then specifies the electron or hole source. The target(s) can be selected to be a single site, or the collection of all surface-exposed residues. The shortest paths are evaluated using the NetworkX package. 24 For a single target, the five shortest paths connecting the source and target are identified based on Yen's algorithm. 28 The shortest paths connecting the source to each surface-exposed residue are identified using Dijkstra's algorithm (a single path per target residue). The paths are then grouped based on the first surface-exposed residue reached along the path, and ranked according to their length. The pathways are further visualized in 3D using the NGL viewer. 26,27 III. APPLICATIONS Below we illustrate the capabilities of eMap in predicting electron hopping pathways in proteins using Arabidopsis thaliana Cryptochrome 1 (Cry1) and Pseudomonas aeruginosa azurin as examples. Mapping electron transfer in Cryptochrome. Cryptochromes are photoactive flavoproteins with diverse biological functions, including being an integral part of the circadian clock machinery and likely being involved in magnetoreception by birds. 29 Upon photoexcitation, the flavin of the FAD cofactor is reduced by electron transfer from a nearby tryptophan. 30,31 The resulting hole then propagates to the surface of the protein via the so-called Trp triad, three conserved Trp residues (W400 - W377 - W324 in Cry1). 29,30,32,33 Here we analyze the Cry1 structure (PDB ID 1U3D 34) with eMap. Once the structure is parsed by eMap, three non-protein aromatic sites are identified in addition to the standard side chains of the aromatic residues: FAD510-1, FAD510-2, and ANP511. The two former sites are the adenine and flavin of the FAD cofactor, whereas ANP511 is the adenine of AMP-PNP bound to the protein. All three non-amino acid aromatic sites as well as all Trp and Tyr residues have been included in the analysis. Default values for all thresholds have been used in this simulation. After the structure has been processed by eMap, the source and target sites have been specified. The source of the electron/hole transfer is chosen to be the flavin of FAD (FAD510-2) and W324 is selected as the hole target. With this selection of the source and target, eMap successfully identifies the electron transfer channel active upon photoactivation of Cry1: FAD510-2 - W400 - W377 - W324. The identified path is shown in bold in the 2D graph (Fig. 4a) and in 3D using the NGL viewer 26,27 (Fig. 4b). Electron transfer in an azurin mutant. As shown above, eMap can efficiently predict electron transfer pathways in proteins. Yet, simple distance-dependent penalty functions also impose some limitations. In particular, intermediate nodes might be missing in the predicted shortest path. To illustrate this, an example of a Pseudomonas aeruginosa azurin mutant 35 is considered, in which a Re complex serves as the hole donor and the Cu center as the hole acceptor. 35 The crystallographic structure with PDB ID 6MJS 35 was used for the analysis. The Custom Atom Range feature was used to specify the hole donor (Re atom) and acceptor (Cu atom). The results of the eMap analysis are shown in Fig. 5.
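As a side note on the graph-construction thresholds described in Sec. II D, the pruning rules (Distance Cutoff, Edges per Vertex, and the SD criterion) can be sketched roughly as follows; this is an illustrative interpretation with made-up distances, not the eMap implementation. The k shortest paths for a single target could likewise be generated with networkx.shortest_simple_paths, a Yen-type search.

```python
import numpy as np

def prune_edges(dist, cutoff=20.0, keep_fraction=0.5, n_sigma=1.0):
    """Keep, per node: edges within the distance cutoff, the shortest keep_fraction
    of them (at least 2), and among those only edges with l <= mean + n_sigma*std."""
    n = dist.shape[0]
    kept = set()
    for i in range(n):
        neighbours = sorted((dist[i, j], j) for j in range(n) if j != i and dist[i, j] <= cutoff)
        n_keep = max(2, int(round(keep_fraction * len(neighbours))))
        candidates = neighbours[:n_keep]
        if not candidates:
            continue
        lengths = np.array([l for l, _ in candidates])
        threshold = lengths.mean() + n_sigma * lengths.std()
        kept.update((min(i, j), max(i, j)) for l, j in candidates if l <= threshold)
    return kept

# Hypothetical symmetric distance matrix for four ETA sites:
D = np.array([[0, 6, 12, 25],
              [6, 0, 7, 18],
              [12, 7, 0, 9],
              [25, 18, 9, 0]], dtype=float)
print(prune_edges(D))
```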
The CUST-2 and CUST-1 ETA sites represent the Re and Cu atoms, respectively. One can see that if the default parameters for edge generation are used (Fig. 5a), the edge between W124 and CUST-1 is present in the graph, and, therefore, the shortest path found by eMap corresponds to the CUST-2 - W124 - CUST-1 hopping pathway, rather than to a pathway involving two Trp residues. Thus, if direct hopping is allowed in the model (i.e. the corresponding edge is present in the graph), the resulting one-step hopping will always be preferred by eMap over multi-step hopping, which may or may not be the case in the actual protein system. If the Distance Cutoff criterion is tightened (13 Å), the longer direct edge is excluded from the graph and eMap instead identifies a multi-step hopping pathway through the Trp residues (Fig. 5b and 5c). FIG. 1. A graph representing the connectivity network between the aromatic sites in a protein. P_ij is a penalty function associated with electron/hole hopping between the sites (see text for more details). FIG. 3. Options specifying pairwise distance map construction: specification of the chains included in the analysis, algorithm used to identify surface-exposed residues, inter-site distance evaluation scheme, and standard ETA sites (a); selection of the non-protein ETA sites included in the analysis (b); tuning thresholds and cutoffs for drawing graph edges (c). FIG. 4. Results of the eMap analysis for the Cry1 protein: shortest path connecting the flavin of FAD (FAD510-2) and the terminal Trp of the Trp triad (W324), visualized in 2D (a) and 3D (b). FIG. 5. The results of the eMap analysis for the 6MJS structure: (a) the shortest path connecting the Re (CUST-2) and Cu (CUST-1) atoms identified using the default parameters for graph edge generation; (b) the shortest path connecting Re (CUST-2) and Cu (CUST-1) identified using a tighter Distance Cutoff (13 Å); (c) 3D image of the pathway identified with the 13 Å cutoff.
2019-07-11T13:15:37.011Z
2019-03-27T00:00:00.000
{ "year": 2019, "sha1": "b2300419c92b809bd0706c99d2937714a1fa280a", "oa_license": "CCBYNC", "oa_url": "https://figshare.com/articles/journal_contribution/eMap_A_Web_Application_for_Identifying_and_Visualizing_Electron_or_Hole_Hopping_Pathways_in_Proteins/9199397/1/files/16754171.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "542ba5847158a9d47dd471be4f674e5665623583", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
225796340
pes2o/s2orc
v3-fos-license
Cognitive Similarity-Based Collaborative Filtering Recommendation System Introduction The information overload problem on the Internet is widespread today, and recommendation systems are powerful methods for handling it. Recommendation systems cover a wide range of targets such as travel, movies, restaurants, fashion, news, and so on [1,2]. Clearly, one of the most effective technologies applied to recommendation systems is Collaborative Filtering (CF) [3][4][5][6]. Basically, the operation of a CF system is described as follows. First, CF collects user feedback; such responses reside within a certain domain, and users rate items within that domain. Second, CF exploits the similarities between the rating behaviors of users. Finally, it determines how to recommend an item. CF accumulates user-item ratings, identifies users with common ratings on items, and offers recommendations based on inter-user comparison. In other words, recommendations for a specific user are based on the behavior and evaluations of other users. The motivation for CF comes from the idea that people often get the best recommendations from others (i.e., neighbors) who have similar preferences. The main problem of collaborative filtering is how to incorporate and weigh the preferences of neighbors. The purpose of collaborative filtering algorithms is to suggest new items or predict the utility of a given item for a particular user based on the feedback from that user and from the other users who like and leave ratings for the item. Assume there is a list of n users U = {U_i | i ∈ [1, ..., n]} and a list of m items I = {I_j | j ∈ [1, ..., m]}. Each user u_i has a list of items I_{u_i} on which the user has provided feedback. The feedback can be given by the user as a rating score, usually on a certain numerical scale, or can be implicitly derived from historical records, by analyzing timing logs, mining web hyperlinks, and so on [6,7]. Note that I_{u_i} ⊆ I and it is possible for I_{u_i} to be the empty set. There exists a distinguished user, called the active user, for whom the task of a collaborative filtering algorithm is to produce an output that can take two forms: prediction or recommendation. Figure 1 shows the schematic diagram of the collaborative filtering process (ratings-table input, CF algorithm, output interface). In addition, CF algorithms represent the data as an n × m user-item matrix, in which each entry is the preference score (rating) of user u_n on item i_m. Each rating is on a numerical scale, and when the user has not yet rated an item, the entry is 0. Generally, collaborative filtering algorithms can be divided into two main categories: memory-based and model-based algorithms [8]. Memory-based collaborative filtering algorithms use all or a sample of the user-item database to make predictions. Each user is part of a group with similar preferences, and predictions of a new (or active) user's preferences for items can be created by identifying that user's neighbors. On the other hand, model-based collaborative filtering algorithms allow the system to learn a model so that the algorithms recognize complex patterns in the training data. Then, based on the learned models, the system makes intelligent predictions for collaborative filtering tasks on test or real-world data.
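A tiny illustrative sketch of the n × m user-item matrix just described (with 0 standing for "not yet rated") might look as follows; the users, items, and ratings are made up.

```python
import numpy as np

users = ["u1", "u2", "u3"]
items = ["i1", "i2", "i3", "i4"]

# n x m rating matrix, 0 = not yet rated
R = np.zeros((len(users), len(items)))
feedback = [("u1", "i1", 5), ("u1", "i3", 3), ("u2", "i1", 4), ("u3", "i4", 2)]
for u, i, score in feedback:
    R[users.index(u), items.index(i)] = score

print(R)
# Most entries remain 0, which is the sparsity challenge discussed below.
```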
Besides the outstanding advantages mentioned above, user-based collaborative filtering has had considerable success. However, its widespread use has revealed challenges, such as: • Scalability: In large systems, such as Netflix (https://www.netflix.com) and Amazon (https://www.amazon.com/), the number of users and items increases substantially every day. The traditional CF algorithm will face serious scalability issues, with computational resources exceeding practical or acceptable levels. For example, if we have millions of users and millions of distinct items, the complexity of the CF algorithm is already too large. Besides, the system needs to respond immediately to online requests and make recommendations to all users regardless of their purchase and rating history, thus requiring high scalability. • Sparsity: In practice, many commercial recommendation systems are used to evaluate very large sets of products. Therefore, the user-item matrix used for collaborative filtering will be very sparse, and the performance and predictions of CF systems are challenged. In particular, the cold start problem occurs when there is a new user or item in the system: similar users or items may not be found because there is not enough information (this is also called the new user problem or new item problem [9,10]). Besides, neighbor transitivity is another problem with sparse databases. Users with similar preferences may not be identified if they have not rated any of the same items, which reduces the effectiveness of a recommendation system that compares users in pairs to make predictions. This paper has three primary research contributions: (i) proposing a cognitive similarity approach and collecting real cognitive similarity data through a crowdsourcing system (called OurMovieSimilarity) [11]; (ii) formulating a pre-computed model (the three-layered architecture) to extract cognitive similarity from users; and (iii) proposing the cognitive similarity-based collaborative filtering recommendation system. In particular, we create a crowdsourcing system [11][12][13][14] to collect cognitive data from users. Then, we propose the three-layered architecture [15] to extract cognitive information from users. Our architecture is bottom-up, made of three superposed networks that are strongly linked: • A user network relating users on the basis of explicit relations derived from the cognition network. • A cognition network relating the cognitive similarity between users based on their selections of similar items. • An item network relating items on the basis of comparing features extracted from them. The remainder of this paper is organized as follows. In the next section, we present research related to user-based collaborative filtering. In Section 3, we present a definition of cognitive similarity and propose the three-layered architecture to extract it. We present the recommendation system based on cognitive similarity in Section 4. The details of our experiment, data set, evaluation, and results are provided in Section 5. In the final section, we provide concluding remarks and directions for future work. Related Work GroupLens [3,6] implemented MovieLens (https://movielens.org) [16], one of the large systems that allows new users to sign up and rate their favorite movies. GroupLens researchers have also released a data set collected over the years, containing more than 25 million movie ratings.
They provide a pseudonymous collaborative filtering solution for movies based on their data set, aiming to address the disadvantages of collaborative filtering and, in particular, to improve user-based collaborative filtering in recommender systems. Other technologies have also been applied, such as Bayesian networks and clustering. A Bayesian Network (BN) [17][18][19] is a compact representation of a multivariate statistical distribution function. A BN encodes the probability density function governing a set of random variables {X_i | i ∈ [1, ..., n]} by specifying a set of conditional independence statements together with a set of conditional probability functions. In particular, a BN consists of a qualitative part, a directed acyclic graph whose nodes mirror the random variables X_i, and a quantitative part, the set of conditional probability functions. In general, a BN creates a model based on a training set, with a decision tree at each node and edges representing user information. The model can be built off-line over a matter of hours or days. The resulting model is very small, fast, and essentially as accurate as the nearest neighbor method. Recently, many improvements to user-based CF have been proposed to mitigate the effects of data sparseness [20,21]. For example, singular value decomposition was used to condense the original user-item matrix [22] for dimensionality reduction, and latent semantic models [23] were used to cluster the users and items. However, these approaches have the disadvantage that the decomposition must be renewed every time another user or rating is added to the matrix. Another, more recent contribution is based on an analysis of prediction errors to improve the accuracy of user-based CF. This approach has the limitation that calculating the errors [24] of all ratings during training is quite expensive. Alternative approaches, using recursive prediction strategies, have been proposed to exploit not only the neighbors but also the neighbors of the neighbors [25]. Because the similarity calculation over all neighbors of neighbors is required, such strategies incur high computational costs that grow exponentially with the depth of the recursion. Moreover, these strategies must enrich the information of the user-item matrix to improve the performance of user-based CF [8,26]. In addition, in [27,28], two item-based similarity measures were designed to overcome the cold-start problem by incorporating genre data of items; the experiments use popular datasets such as MovieLens and MovieTweets. In their approach, an item can be similar to other items because they share more than one common genre. Therefore, by considering the association of common genres, they exploit a similarity measure that determines the degree of direct asymmetric correlation between items. The method proposed in this paper was inspired by [29], which proposes the Rated-Item Pools (RIP-based) approach to improve user-based CF. That approach aims to eliminate extra calculations that increase computational complexity, thereby avoiding the need to add external knowledge resources and their potential cost. In order to formulate the approach, the author used a related method [3] that applies Equation (1) to predict the rating value R_u,i for an active user u and item i: R_u,i = R̄_u + [ Σ_{v∈N_u,i} sim(u, v) × (R_v,i − R̄_v) ] / [ Σ_{v∈N_u,i} |sim(u, v)| ], (1) where N_u,i represents the subset of neighbors v of the active user u who explicitly rated item i.
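A minimal sketch of this user-based prediction scheme, using the Pearson correlation defined below as sim(u, v), is given here for illustration; the ratings matrix is made up, and this is the generic textbook form rather than the authors' implementation.

```python
import numpy as np

def pearson(u, v, R):
    """Pearson correlation over the items co-rated by users u and v (0 = unrated)."""
    co = (R[u] > 0) & (R[v] > 0)
    if co.sum() < 2:
        return 0.0
    ru, rv = R[u, co], R[v, co]
    du, dv = ru - ru.mean(), rv - rv.mean()
    denom = np.sqrt((du**2).sum() * (dv**2).sum())
    return float((du * dv).sum() / denom) if denom > 0 else 0.0

def predict(u, i, R):
    """Predict R[u, i] from the neighbours v that actually rated item i (Eq. (1) style)."""
    rated_u = R[u][R[u] > 0]
    base = rated_u.mean() if rated_u.size else 0.0
    num, den = 0.0, 0.0
    for v in range(R.shape[0]):
        if v == u or R[v, i] == 0:
            continue
        s = pearson(u, v, R)
        rated_v = R[v][R[v] > 0]
        num += s * (R[v, i] - rated_v.mean())
        den += abs(s)
    return base + num / den if den > 0 else base

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)
print(predict(0, 2, R))
```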
In addition, the user similarity sim(u, v) is normalized by the sum of all similarities computed between the active user and the neighbors from N_u,i. The classic user-based CF approach calculates sim(u, v) as a global similarity. To calculate sim(u, v), either the cosine similarity metric (also referred to as vector spatial similarity) or, more frequently, the Pearson correlation coefficient is generally used. The Pearson correlation coefficient is defined as follows: sim(u, v) = [ Σ_{i∈C_u,v} (R_u,i − R̄_u)(R_v,i − R̄_v) ] / [ √(Σ_{i∈C_u,v} (R_u,i − R̄_u)²) × √(Σ_{i∈C_u,v} (R_v,i − R̄_v)²) ], (2) where R_u,i represents user u's rating on item i; C_u,v represents the intersection of the items rated by users u and v; and R̄_u represents the average rating of user u on the C_u,v co-rated items. In Equation (2), similarity is not only used to contribute sim(u, v) to Equation (1), but also to find the neighbors N_u,i of the active user u. Normally, there are two methods for selecting the nearest neighbors. The more popular one is to choose the K users most similar to the active user [3]. Alternatively, a similarity threshold is estimated and all users whose distance to the active user does not exceed that threshold are chosen [4]. We refer to the classical method described above as User-based Pearson Correlation Similarity (UBPS) and adopt it as the baseline for the comparative analysis presented in this paper. Cognitive Similarity Our work explores the cognitive similarity between users, from which we can define the most similar user for the active user. For example, consider the relations between users such as Kyle, Jason, and Paul. Typically, a CF system first detects the preferences of Jason based on the items he rates. In the second step, the system compares Jason's ratings against those of Kyle and Paul to find the most "similar" tastes. The final step is to recommend items that similar users have rated highly but that Jason has not yet rated. However, how do we combine and weigh the preferences of user neighbors to define the top-N recommendations for Jason? We recognize that the behavior of users when using a service is crucial to making accurate predictions [30]. Our work aims to understand the cognitive similarity of the user. According to our approach, we can determine that the user most similar to Jason is Kyle, so that suitable recommendations for Jason depend mostly on Kyle and only a little on Paul. As shown in Table 1, with the traditional CF method the user Jason has the same relation to Kyle and Paul, while our proposed method shows that Jason and Kyle have a stronger relation than Jason and Paul. In the remainder of this part, we describe the details of our approach to extracting the cognitive similarity between users. Suppose n is the number of items and k is the number of features extracted from each item. Each item is then represented as a vector I_n = {F_i | i ∈ [1, ..., k]}. When a user u selects a pair of similar items, the cognitive similarity of user u is represented through the feature similarities Sim_F_i, where Sim_F_i is a cosine similarity between the features F_i of the items. The cognitive similarity between users is enriched by each of their selections. Generally, we define cognitive similarity as follows: Definition 1.
The cognitive similarity (CS) between users u and v reflects their priorities over the F features extracted from each item i in the process of selecting a pair of similar items (Equation (3)), where Sim_{F_i,F_j} is the similarity of the features F extracted from the pair of items i and j in the selections of user u and user v, and p_{i,j} represents the similarity of the priorities of user u and user v in selecting pairs of similar items. Measuring Cognitive Similarity The most important step in memory-based collaborative filtering algorithms is calculating the similarity between items or users. The basic idea of calculating the similarity between two items, item i and item j, is to first isolate the users who evaluated both of these items and then apply a calculation technique to determine the similarity Sim(item_i, item_j). In this study, we use the soft cosine similarity as the metric for measuring similarity. The cosine similarity is defined to equal the cosine of the angle between two non-zero vectors of an inner product space. Given a vector I and a vector J, the cosine similarity is represented as follows: cos(I, J) = ( Σ_h I_h × J_h ) / ( √(Σ_h I_h²) × √(Σ_h J_h²) ), (4) where I_h and J_h are the components of vector I and vector J, respectively. For example, given a movie m_i which users have seen and all movies m_j remaining in the database, we measure the similarity Sim(m_i, m_j) using vector cosine-based similarity. A movie m is represented as a vector ⟨T, G, D, A, P⟩, in which T, G, D, A, and P represent the features title, genre, director, actors, and plot. In this regard, the score Sim(m_i, m_j) is computed from T_ij, G_ij, D_ij, A_ij, and P_ij, which measure the similarity between movie m_i and movie m_j with respect to titles, genres, directors, actors, and plots (Equation (5)). In particular, for the title feature, applying Equation (4) gives the similarity T_ij between the titles of movie m_i and movie m_j. The remaining feature similarities, genre (G_ij), director (D_ij), actors (A_ij), and plot (P_ij), are measured in the same way. Finally, we repeat that calculation for all the remaining movies in the system and obtain a set {Sim(m_i, m_{j+h}) | h ∈ [1, ..., n]}. In addition, we incorporate the priority of users when they select a similar movie. Hence, Equation (5) is re-written with weights ω_k, where ω_k denotes the priority of the user for feature k in selecting a pair of similar movies; k indexes the features extracted from a movie, namely the title, genre, director, actor, and plot; and Sim_k(m_i, m_j) is the similarity between movie m_i and movie m_j for feature k as described in Equation (5). (A code sketch illustrating this weighted per-feature computation is given below.) The priorities of users in selecting pairs of similar movies are dynamically re-calculated and updated by the OMS system whenever a user performs a new activity (selects a new pair of similar movies). Based on the history of the users (their activities), we collect all the pairs of similar movies, so that each user can be represented by the features extracted from all of the similar-movie pairs the user has recognized. Using Equation (3), we can then measure the cognitive similarity between, for example, user Kyle and Paul or Jason. Three-Layered Architecture for Cognitive Similarity Our purpose is to extract the similarity between users based on their cognitive similarity in finding pairs of similar items. This cognitive similarity can then be used to find the k-nearest neighbors of an active user.
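Before introducing the architecture, a rough sketch of the per-feature weighted movie similarity discussed above is given here. The exact scoring formulas referenced above (Equation (5) and its weighted form) are not reproduced; a simple normalised weighted sum of per-feature cosine similarities is assumed for illustration, and the feature vectors and priorities are hypothetical.

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def movie_similarity(features_i, features_j, priorities):
    """features_*: dict feature_name -> vector; priorities: dict feature_name -> weight w_k."""
    total_w = sum(priorities.values())
    return sum(priorities[k] * cosine(features_i[k], features_j[k]) for k in priorities) / total_w

# Hypothetical bag-of-words style feature vectors for two movies and one user's priorities.
m_i = {"title": [1, 0, 1], "genre": [1, 1, 0], "actors": [0, 1, 1]}
m_j = {"title": [1, 1, 0], "genre": [1, 1, 0], "actors": [0, 1, 0]}
w   = {"title": 0.2, "genre": 0.5, "actors": 0.3}

print(movie_similarity(m_i, m_j, w))
```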
Hence, we introduce a three-layered architecture, shown in Figure 3, comprising (i) an item network S, (ii) a cognition network C, and (iii) a user network U. The networks capture several different relations between individuals, and each network is characterized as a set of relations and a set of objects (nodes). The characteristics of each layer and the relationships between layers in the three-layered architecture are described below. • Item Layer In the item network S, nodes represent items and relations (i.e., edges) represent the similarity between items. An item network S is a directed graph ⟨N_S, E_S^similarity⟩, where N_S is the set of items and E_S^similarity ⊆ N_S × N_S is the set of relations between these items. In the item network in Figure 3, the dotted edges represent the relationships between the nodes, while the nodes represent the items (the movies). In this study, the relation between items is measured using cosine vector similarity, as mentioned above. • Cognition Layer The cognition network C is a network ⟨N_C, E_C^i⟩, in which N_C is a set of cognitive similarities from groups of users and E_C^i ⊆ N_C × N_C is the relationship between these groups. The objective relationship from S to C is established through users' selections of pairs of similar items, which can be expressed by a relation: Selections ⊆ N_S × N_C. We can interpret the hubs as user groups that bring together a large number of other users with cognitive similarity. These would be an attractive starting point for any new user willing to annotate a set of objects similar to those of his or her peers. For example, in the cognition network in Figure 3, the new user Bill has cognitive similarity in the group {John, Karla, Bill}, so Bill initially becomes related to the group {John, Paul, Kyle, Karla}. These relations are enriched as Bill's cognition develops and change dynamically in this network. The relation between these groups is represented by the overlap of the sets of features collected implicitly from the activities of users. In particular, the feature set extracted from movies is {t, g, d, a, p}, in which t denotes title, g denotes genre, d denotes director, a denotes actor, and p denotes plot. Likewise, the cognitive similarity between users is extended and updated based on the history of users' activities (i.e., their item-similarity selections). Clearly, there is a difference between cognition networks and item networks: in item networks, cognition is extracted from several connected items, whereas the connections in the cognition network extend to include the relations between users' cognitive similarities and between user groups. Thus, it is useful to recognize the hubs that connect users in the same groups, as these are likely to express alignment between two groups. • User Layer In the user layer U, nodes are users and relations are the numerous kinds of relationships that can be found in cognitive similarity. The user network U is a network ⟨N_U, E_U^i⟩, in which N_U is a set of user entities and E_U^i ⊆ N_U × N_U is the relationship between these entities. The relations are extracted from the objective relationship from C to U, that is, through the extraction of users' group relations in the cognition similarity. Recommendation System Based on Cognitive Similarity CF-based recommendation systems usually use a similarity method to find the k-nearest neighbor users of a target user.
Then, the system utilizes the past ratings of neighbor users to predict or recommend new content that the active user is likely to enjoy. Content recommendations can also be made using different methods based on the similarity of information from the users' past activity (buying, browsing, and so on). In this paper, we use cognitive similarity among users to find the k-nearest neighbor users. Obviously, by using rating scores we can identify user preferences, but a key problem is how to combine and weight the preferences of user neighbors. We consider another angle, namely finding the cognitive similarity between users and combining it with the user preferences. It is worth mentioning that users' cognitive similarity must be constructed based on their cognition about the items instead of rating scores. All activities of the user can be collected and saved in the database. The features extracted from the items that a user uses to recognize similarity can be used to develop the initial cognitive similarity of the user. In this case, these features are collected implicitly from users through their selections of similar movies. The system then analyzes and updates the cognitive similarity of each user individually based on the collected features. The system continues to recommend pairs of similar movies from the k-nearest neighbors to collect feedback from the active user. Finally, the feedback from users on the recommendation results can be used to adjust their cognitive similarity. In order to develop the cognitive similarity, item similarity needs to be elaborated in preprocessing. After that, the cognitive similarity between users emerges based on the similar items previously browsed and selected. The recommendation process can be divided into three steps as follows: • The representation of user information. The cognitive similarity of each user is analyzed and modeled. • The generation of neighbor users. The similarity of users can be extracted from the three-layered architecture according to the data collected and the collaborative filtering algorithm presented in Section 3. • The generation of recommendations. Top-N items are recommended to the users according to the cognitive similarity of the neighbors. Following the above steps, each user activity in the database can be used to calculate the user's list of neighbors, which is recorded in the corresponding record in the user database. When users log into the system, recommendations can be presented based on the cognitive similarity of the neighbors. Then, each subsequent activity of the user can be used to enrich their cognitive similarity and is stored in the database. The process of recommendation is shown in Figure 4. Most recommendation systems rely on user feedback to provide high-quality recommendations. Explicit feedback is sometimes considered more reliable, while implicit feedback requires less intervention from users, captures short-term interest, and continuously updates user preferences [21]. Modern approaches make the quality of recommendations based on implicit feedback comparable to those based on explicit feedback. That is the reason we consider dynamically updating cognitive similarity based on understanding implicit feedback from the user. By allowing the user to update their selections or suggest new item similarities for collecting feedback, we can make the measurement of cognitive similarity more efficient. In addition, recommendations are computed from the cognitive similarity of neighbors.
According to the extracted cognitive similarity, we know the neighbors of each user, so we can list all the item similarities and summarize the most popular ones. For example, from the three-layered architecture described in Figure 3, consider the users Paul, Kyle, and Jason: we can recognize that the neighbors of Kyle are Jason and Paul within the threshold neighborhood. Hence, the pairs of similar items ⟨A, B⟩ and ⟨C, D⟩, which were recognized as similar by Jason and Paul, should be presented to the user Kyle. Then, when Kyle makes a selection (feedback from Kyle), his cognitive similarity is incrementally and dynamically re-calculated and updated in the database. Overview of the OMS System We propose OurMovieSimilarity (http://recsys.cau.ac.kr:8084/ourmoviesimilarity) (OMS), a crowdsourcing system for collecting the cognitive similarity of users. Our system was built on Java and a MySQL database [31]. Because our system contains both web services and background services, security and handling multiple concurrent accesses are among the most important concerns. Therefore, we designed the system based on the Model-View-Controller (MVC) [32] model. In addition, we use Apache Tomcat for the web service side, and the MySQL database was used because it offers rock-solid reliability, scalability, and security. Since the OMS system is a web-based crowdsourcing platform, we identified that low latency must be considered carefully. Another main challenge in a web-based system is providing enough instructions for users throughout the entire system. To solve this problem, we use the concept of progressive disclosure, that is, "show users what they need, when they need it, and where they want it", across all functions of OMS. In addition, to improve the user experience, we focus on the simplest interactions and fast responses in designing our system. In particular, we use one template for the system to maintain consistency, so users can more easily recognize the interface functions (e.g., buttons) when they interact with OMS. All of the features mentioned above are based on the three golden rules of user interface design [33]. Generally, interaction with the user is the most important aspect for our purpose, so we made the process of selecting a similar movie as simple as possible: the user selects the movies that they have seen and then chooses which movie is the similar one from the suggestions of the OMS system. In case the user does not find any of the proposed movies suitable, it is possible to search for movies from IMDB and add them to the OMS system. Data Set When the OMS was designed, we needed an initial movie database to conduct the process of collecting the cognitive similarity of users. Therefore, we implemented a movie crawling function in the OMS system, which automatically collects movie information from sources provided online. We identified IMDB (https://www.imdb.com/) as an extensive, highly scalable movie database. In order to implement the crawling functions to collect movie information from IMDB, we used the open API provided by OMDb (http://www.omdbapi.com/). Up to now, we have collected over 14,000 popular movies from 1990 to 2019 with nine genres, 3439 directors, and 8057 actors. The OMS system continues collecting data online. At this time, we have about 150 active users and more than five thousand user activities.
Each data record collected from users has the format (U_i, m_j, m_k, CS^{U_i}_{m_j,m_k}, γ_i), where U_i is the id of the user; m_j and m_k are a pair of similar movies; CS^{U_i}_{m_j,m_k} is a vector representing the cognitive similarity of user U_i; and γ_i is the number of times the user changed the suggested movies when selecting a pair of similar movies. Evaluation To evaluate the recommender system, the similar-item pairs of each user (in this evaluation simply called items) were first divided into two sets. These sets were selected randomly and are called the Training-set (the first set) and the Test-set (the second set). The proposed algorithms were first run on the Training-set in order to filter N items to be recommended to the active user, called the top-N. Then, the items in the top-N were compared with the items in the Test-set. The items common to the Test-set and the top-N are called the Hit-set. Finally, after obtaining the Test-set, Training-set, and Hit-set, we can calculate the accuracy of the algorithm using evaluation criteria. Here, we used two evaluation criteria, Precision and Recall. Precision returns the proportion of relevant recommendations among the total recommendations (denoted N), where relevant recommendations are those with ratings equal to or greater than a threshold. Recall is the proportion of relevant recommendations among the total relevant items (out of the total number of items selected by the user). Note, however, that whereas N is a constant, the number of relevant items is not. Hence, Recall is a "relative" measure, because extracting relevant recommendations from a few relevant items is more difficult than from a large number of relevant items. Generally, for better assessment we use F1, which combines the two criteria above and can be formulated as follows: F1 = (2 × S_Hit-set) / (S_Test-set + S_Top-N-set), where S_Hit-set is the size of the Hit-set; S_Test-set is the size of the Test-set; and S_Top-N-set is the size of the top-N set. F1 was computed for each user, and the average F1 over all users was taken as the criterion for determining the algorithm's accuracy. To compare the proposed method with previous methods, we also compared it with a recommendation system designed based on association rules. The following diagram shows the results of these algorithms. In the following evaluations, various values of top-N were considered, from 10 to 250. Experimental results show that the accuracy of collaborative filtering based on cognitive similarity (CF-Cognitive Similarity) is higher than that of the collaborative filtering approach based on Pearson correlation similarity (CF-Pearson Correlation Similarity). The proposed method achieves an improvement over the baseline of 11.1% in the best case. Considering various values of top-N in the set {10, 50, 100, 150, 200, 250}, the comparison between the proposed method and the baseline is shown in Table 2. In addition, to strengthen the evaluation, we also carried out a comparative analysis using MAE and RMSE as evaluation metrics, defined as follows: MAE = (1/n) Σ_{i=1..n} |y_i − y_i^p|, RMSE = √( (1/n) Σ_{i=1..n} (y_i − y_i^p)² ), where n denotes the number of cognitive similarity values in the Test-set, y_i denotes the actual cognitive similarity values, and y_i^p denotes the predicted cognitive similarity values. In general, MAE ranges from 0 up to the maximum error determined by the scale of the measured cognitive similarity values.
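The evaluation quantities above can be sketched in a few lines; the recommendation lists and similarity values below are made up and serve only to show how the Hit-set-based F1 and the MAE/RMSE are computed.

```python
import numpy as np

def precision_recall_f1(top_n, test_set):
    """Precision, Recall, and F1 of a top-N recommendation list against the Test-set."""
    hits = len(set(top_n) & set(test_set))
    precision = hits / len(top_n) if top_n else 0.0
    recall = hits / len(test_set) if test_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return precision, recall, f1

def mae_rmse(actual, predicted):
    """MAE and RMSE between actual and predicted (cognitive similarity) values."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    err = actual - predicted
    return float(np.abs(err).mean()), float(np.sqrt((err**2).mean()))

print(precision_recall_f1(["m1", "m2", "m3"], ["m2", "m4"]))
print(mae_rmse([0.8, 0.5, 0.9], [0.7, 0.6, 0.8]))
```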
The main reason for following this approach is that the predicted cognitive similarity values induce an ordering of the items, so predictive accuracy can be used to measure the ability of a recommendation system to rank items according to user cognition [34]. To create the test set, we divide the items of each user into k sections/folds (k-fold cross-validation), where each fold is used as the test set at some point. Following [35], we set k to 5, because this yields test error rate estimates that suffer neither from excessively high bias nor from very high variance. Specifically, the data set was split into five folds. In the first iteration, the first fold is used as the test set and the remainder as the training set. In the next iteration, the second fold becomes the test set and the remainder the training set. This process is repeated until each of the five folds has been used as the test set. As mentioned above, we used Pearson correlation similarity, with the similarity and recommendation models (1) and (2), as the baseline. Because the baseline is reported to perform best with around 50 neighbors of the active user, we chose 5, 10, 20, 30, and 50 as the neighborhood sizes in our experiments. The comparison between our proposed method and the baseline is shown in Tables 3 and 4. Table 3. Comparison between CF-Pearson Correlation Similarity and CF-Cognitive Similarity with varying neighborhood sizes (MAE metric).

Conclusions

In this paper, we proposed a three-layered architecture that extracts cognitive similarity so that the k-nearest neighbors can be identified exactly. To apply the architecture, we created a web-service crowdsourcing platform (called OurMovieSimilarity) to collect cognitive feedback from users. Our crowdsourcing system has been deployed online and continues to collect user feedback. Our data set includes over 150 users and more than 5000 feedback records stored in our database. To evaluate how accurately the proposed method works in the recommendation system, we designated collaborative filtering with Pearson correlation similarity as the baseline against which to compare our method. The results demonstrate that the accuracy of cognitive similarity-based collaborative filtering is higher than that of the baseline. Specifically, compared with the Pearson correlation, our method is more accurate and achieves an improvement over the baseline of 11.1% in the best case. The results also show that our method achieves a consistent improvement of 1.8% to 3.2% over the various neighborhood sizes in the MAE calculation, and of 2.0% to 4.1% in the RMSE calculation.
Propagation of reinforcement corrosion: principles, testing and modelling

Abstract

Reinforcement corrosion is the risk most frequently cited to justify concrete durability research. The number of studies specifically devoted to corrosion propagation, once the object of most specialised papers, has declined substantially in recent years, whilst the number addressing initiation, particularly where induced by chlorides, has risen sharply. This article briefly describes the characteristics of steel corrosion in concrete that need to be stressed to dispel certain misconceptions, such as the belief that the corrosion zone is a pure anode. That is in fact seldom the case and, as the zone is also affected by microcells, galvanic corrosion accounts for only a fraction of the corrosion rate. The role of oxygen in initiating corrosion, the scant amount required and why corrosion can progress in its absence are also discussed. Another feature addressed is the dependence of the chloride threshold on medium pH and the buffering capacity of the cement, since corrosion begins with acidification. Those general notions are followed by a review of the techniques for measuring corrosion, in particular polarisation resistance, which has proved to be imperative for establishing the processes involved. The inability to ascertain the area affected when an electrical signal is applied to large-scale elements is described, along with the concomitant need to use a guard ring to confine the current or deploy the potential attenuation method. The reason that measurement with contactless inductive techniques is not yet possible (because the area affected cannot be determined) is discussed. The method for integrating corrosion rate over time to find cumulative corrosion, P_corr, is explained, together with its use to formulate the mathematical expressions for the propagation period. The article concludes with three examples of how to use corrosion rate to assess cathodic protection, new low-clinker cements or determine the chloride threshold with an integral accelerated service life method.
Introduction

Social appreciation for the built heritage is growing. The conservation and maintenance required to prevent its deterioration entail high costs, however, given the characteristics of all the technologies and processes involved in rehabilitation and repair. Beyond such economic considerations, deterioration diminishes functionality and may even pose the risk of accidents with the loss of human life due to unexpected collapse. Buildings and infrastructure form an essential part of developed societies, and as such their usability to the highest standards of safety is an indisputable priority. The material most widely used in the modern built heritage is concrete, and corrosion of its steel reinforcement heads the list of its possible causes of decay. Concrete underwent ceaseless development in the twentieth century for many reasons, one of which, directly related to this study, was that most design engineers deemed it to be eternally maintenance-free, in the understanding that cement alkalinity affords chemical protection for the steel and the cover constitutes a physical barrier much more effective than paint. It was not until the nineteen seventies and early eighties that Rilem began to explore reinforcement corrosion, a subject that aroused the interest of very few researchers. As new construction was booming and time-mediated damage had not yet been detected, only a handful of cases of corrosion had been identified, despite one very distressing instance of collapse [1]. By way of example of the scant attention lent to corrosion, when the author defended her Ph.D. thesis in 1973, she found only around 40 articles on the subject in all the journals on concrete published at the time. Pioneering the concern was Committee 60-CSC, Corrosion of Steel in Concrete, chaired by Peter Schiessl, who published a summary of the state of the art at the time [2]. Engineer Hans Arup, who also alerted to the problem early on, organised the first workshop devoted exclusively to corrosion in 1981. A photo of the participants is reproduced in Fig. 1. One of the scientists in the picture, Kyosti Tuutti, published a much-cited and impactful thesis [3] in 1982, describing a two-period (initiation and propagation) model. A number of Rilem committees were subsequently created, but only one on reinforcement corrosion, RILEM TC-154-EMC: "Electrochemical Techniques for Measuring Corrosion in Concrete", chaired by the author, published Recommendations on the techniques for measuring corrosion, which are still in effect [4-7]. Later committees went on to develop service life models, in particular in connection with chloride diffusion [8], or to determine the steel depassivation limit or its governing parameters. The many and very valuable scientific and technical papers that have been forthcoming in the interim attest to the economic implications of the subject and the scientific challenges posed.
The author apologises for listing but a few of those papers here [9-14], chosen because they deal specifically with active steel corrosion. This article summarises some of the principles associated with corrosion, its measurement and modelling in the propagation stage, as a tribute to Rilem on its 75th anniversary and as a token of my gratitude for the inspiration I have drawn from its working approach throughout my career. Rilem's routine practice of creating short-lived technical committees, consistently mandated to explore innovative and topical concerns, expands the frontiers of knowledge, invariably challenging presumably established principles. This review calls some of those principles into question and identifies the many gaps still extant in our understanding of reinforcement corrosion.

Basic principles and mechanisms

Concrete corrosion is an electrochemical process, for it involves both chemical reactions and the circulation of an electrical current, in which the reagents are electrical charges (electrons and ions). In essence, corrosion is triggered when concrete pH at the interface with the steel drops to levels below around 8 [15], dissolving the passive layer that forms in the highly alkaline aqueous phase. Under those circumstances the iron atoms in the steel convert to positively charged ions (oxidation), generating an excess of electrons in the base metal:

Fe → Fe²⁺ + 2e⁻

To maintain its electrical neutrality, the metal in adjacent areas induces another reaction (reduction) which in neutral and basic media uses dissolved oxygen:

O₂ + 2H₂O + 4e⁻ → 4OH⁻

That oxidation-reduction (redox) mechanism supports the corrosion 'cell', an only apparently simple development, for neither oxidation nor reduction takes place in a single stage. Rather, both entail a series of intermediate species, whilst electron migration from the steel to the reagent depends on the so-called 'tunnel effect' to overcome the energy barrier involved. Therefore, although the basic mechanisms governing steel corrosion may be simplified, sight should not be lost of their complexity, a feature they share with nearly all chemical reactions taking place in successive or simultaneous stages. As a result, in the spontaneous reaction (corrosion-free potential), the total anodic and cathodic current densities are equal, and equal as well to the corrosion current density. As some of the most significant characteristics of these basic processes and their governing parameters are often poorly understood, they are dealt with in the sections listed below.

1. Corrosion cell and acidification of the corrosion zone
2. Microcells and macrocells (corrosion rate and galvanic current)
3. Effect of the oxygen
4. The role of potential
5. Composition of the rust formed
6. Effect of temperature

Although these effects feed back into one another, they are addressed separately here for greater clarity.

Corrosion cell and acidification of the corrosion zone

As Evans showed [16] nearly 100 years ago with his droplet experiment (Fig. 2), corrosion begins at the atomic level with the formation of microcells. When a drop of salt water is left on steel, the cell forming in the centre of the drop is anodic because it contains less oxygen than the border of the droplet, which becomes cathodic.
Evans proved that anodes and cathodes were formed (Fig. 2) by adding phenolphthalein (reduction generates hydroxyls, colouring the outer side of the drop pink) and sodium sulfocyanide (which when combined with iron yields a blue iron sulfocyanide complex). If left standing, the colours mix in the drop due to the formation of many nano- and microcells. According to Pourbaix [15], the pH dips to around 3-5. Corrosion always entails local acidification in the zones affected [17]. Consequently, the buffering capacity of portlandite is of key importance not only in retarding carbonation but also in chloride-induced depassivation in concrete. When the first iron atoms are released into the solution by the depassivating action of the chlorides, they hydrolyse the water, lowering the local pH. That effect is neutralised by the buffering capacity of the hydroxyls, sourced primarily from the portlandite, as well as from the alkalis, which repair the passive layer, mitigating the decrease in pH. The critical chloride content is reached when the amount of dissolved Fe²⁺ ions, generating protons by inducing water hydrolysis, cannot be neutralised. Consequently, the Cl⁻/OH⁻ ratio governs the chloride corrosion threshold. In chloride attack, acidification is local because the attack is localised and the rest of the pores retain their alkalinity, passivating the rest of the metal surface. Moreover, the polarisation and lower potential induced by cell formation beginning at the pit polarise the surrounds to the pit potential (Fig. 3). That creates a need for more chlorides to exceed the Cl⁻ limit at the lower mixed potential induced by pit formation, ultimately protecting the surrounds. The attack therefore proceeds in depth instead of spreading laterally, which may nonetheless occur, depending on the degree of polarisation at the pit borders and pit growth. In carbonation, hydroxyl ions from portlandite exert a buffering effect across the entire surface, attempting to offset the carbon dioxide-mediated reduction of pH: as the hydroxyl ions are consumed, calcium carbonate forms with the calcium ion of portlandite, the pH declines and corrosion is driven across the entire surface by many micro-anodes and micro-cathodes (Fig. 4). In that case acidity is governed by carbonate/bicarbonate ions, which tend to maintain a neutral pH at values of around 7. One important implication of acidification in the corroded areas is that the cathodic reaction may reduce not only the oxygen in the adjacent passive areas, but also the protons in the corrosion zone itself. Corrosion may consequently proceed independently via the interior microcells (self-catalysis) if the source of oxygen to the adjacent areas is interrupted (Fig. 3). In other words, protons may be reduced in a cathodic reaction, a development that has been detected in the form of H₂ bubbles inside pits [18]. Acidification and the repassivation capacity of the passive layer are the factors that determine whether the attack is local (proceeding depth-wise) or laterally widespread. In stainless steels, with a very high repassivation capacity, corrosion proceeds as very small pits that bore quickly inward. Therefore, whilst the use of stainless steel ensures very high chloride thresholds, once corrosion is triggered and the pits become active, the attack is very dangerous because the pit may grow depth-wise in a narrow area with a scant loss of material. In contrast, in black steel, with a lower repassivation capacity, the pits are shallower and spread sideways.
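Before moving on to microcells and macrocells, a rough numerical illustration of the Cl⁻/OH⁻ criterion discussed above may help to show why the buffering capacity of the pore solution, and not the absolute chloride content, controls depassivation. The snippet below compares the ratio at two pore-solution pH values. The 0.6 reference ratio is the classical threshold proposed by Hausmann and is used here only as an illustrative assumption; real thresholds depend on the binder, the steel surface and the exposure conditions.

```python
def cl_oh_ratio(chloride_mol_per_l, ph):
    """Molar Cl-/OH- ratio of a pore solution at a given pH (25 degC, ideal behaviour)."""
    oh = 10 ** (ph - 14)          # [OH-] in mol/l from the ionic product of water
    return chloride_mol_per_l / oh

HAUSMANN_THRESHOLD = 0.6          # classical Cl-/OH- depassivation criterion (assumed here)

for ph in (13.5, 12.5):
    ratio = cl_oh_ratio(0.1, ph)  # same chloride content, different alkalinity
    status = "risk of depassivation" if ratio > HAUSMANN_THRESHOLD else "likely still passive"
    print(f"pH {ph}: Cl-/OH- = {ratio:.2f} -> {status}")
```

With 0.1 mol/l of chloride, the ratio stays below the reference value at pH 13.5 but exceeds it several-fold at pH 12.5, which mirrors the role of the alkaline reserve described in the text.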
Microcells and macrocells (corrosion rate and galvanic current)

In microcell corrosion the anodes and cathodes are separated by just a few microns, whereas in macrocell corrosion they may be spaced at several centimetres or even metres. Metres-wide macrocell separation can only exist in concrete in marine media, where electrical charges can circulate across long distances thanks to the low resistivity of seawater. Corrosion rate and galvanic current are different concepts. The reference area for expressing the galvanic current, which is the current circulating between the anodic and cathodic zones, is the anodic area only, whereas the corrosion rate refers to the sum of the anodic and cathodic areas. Corrosion, particularly in concrete, is invariably supported mostly by microcells, with macrocells contributing to the corrosion rate only marginally (see Figs. 3 and 4). If corrosion proceeds at the micron level, what role do passive zones much greater than the corrosion area play in local corrosion? Along with the galvanic current, I_g, they indisputably contribute to the corrosion current density, but unless the anodic area is very small, they do not account for a majority proportion of the total corrosion current. In other words, the corrosion current measured is the sum of the microcell and galvanic currents. The inference is that the galvanic current measurement alone cannot be directly likened, nor is it necessarily proportional, to the corrosion rate. This issue will be discussed more fully in the section on measuring techniques. That corrosion is primarily supported by microcells in the corrosion zone was confirmed long ago, as the earlier reference to Evans's droplet experiment shows. Atomic force microscopy (AFM) has recently corroborated those findings. In [19] a drop of saltwater lying on a steel plate was observed under an atomic force microscope at different ages. Figure 5 shows, in a 5 micron window, that the ferrite acting as the anode dissolved whereas the cementite acting as the cathode remained intact. The behaviour clearly visible in Fig. 5 for the metal in a prestressing steel tendon, in which layers of cementite and ferrite alternate, may vary in other steel microstructures. Generic corrosion is summarised in Fig. 6, which shows that at the nano- and micro-levels some areas of the steel are anodic and others cathodic. The figure also describes the mechanism in which water is hydrolysed by Fe²⁺, generating protons that acidify the anodic zones. On a somewhat larger (millimetre) scale, the trial conducted by Mansfeld [20] with a Kelvin probe, likewise with a drop of saltwater, is also illustrative. As Fig. 7 shows, the potential is positive only on the outer side that acts as a cathode, whereas the potential on the inner side adopts very negative values. The mixed or corrosion-free potential measured on a metal bar immersed in an electrolyte would lie in between the anodic and cathodic semi-reaction corrosion potentials. The exact value would depend on how the process is conducted and whether the anode and cathode are coplanar or positioned face-to-face. As shown in Fig. 8, steel bars may be placed face-to-face in separate chambers connected by a salt bridge (Fig. 8, left) or in the same chamber separated by a concrete disc acting as a diffusion cell (Fig. 8, right). Electrical resistances are then introduced in the circuit between anolyte and catholyte to simulate different types of concrete and degrees of water saturation.
Figure 9 shows some of the results obtained from this experiment using galvanised steel - black steel couples. In the figure, S_C is the cathodic bar surface and S_A the anodic (galvanised) bar surface [21]. Inasmuch as the salt bridge itself generates ohmic resistance, in the graph on the left in Fig. 9 the resistance drop prevents the potentials from concurring. The mixed potential is any value from the cathodic -500 mV to the anodic -1400 mV. In the middle graph in Fig. 9 the mixed potential continues to differ, the gap between anodic and cathodic potential being wide at high resistance values, but as that resistance lowers the mixed potential ultimately concurs with the anodic potential. Given that the anode is aerated, it can support corrosion by itself with its microcells. The graph on the right depicts the situation when the anode is non-aerated. Here also, if resistance in the medium is very high the potentials fail to concur. In contrast, when resistance declines both the anode and the cathode are polarised, yielding an intermediate mixed potential for the galvanic couple. The explanation is that, due to non-aeration, corrosion at the anode (right in the figure) cannot be sustained by its own microcells alone. That is, if the anodic zone is as aerated as the cathode (middle of Fig. 9), the mixed potential is the anode potential, whereas the potential adopts an intermediate value when the corrosion zone has restricted access to oxygen. In addition to these factors, the value of the mixed potential is affected by the size of the cathodic relative to the anodic area. Another aspect that influences the contribution of the galvanic current to the corrosion rate is that, since in a given bar the anodic and cathodic zones are in the same plane (coplanar), the galvanic current is transmitted laterally, whereby the 'transmission line model' should be applied in its calculation [22]; the resulting contribution may be of smaller proportion than in a face-to-face arrangement.

Effect of the oxygen

The role of oxygen in corrosion generation and progress in concrete has been misconceived in some respects. Trying to control oxygen very seldom limits concrete corrosion. Oxygen is essential for depassivation but not for corrosion progression. The description of its role can be divided into two stages, as discussed below: under passive conditions and in the presence of active corrosion.

Passive conditions

For oxygen to feed the cathodic reaction it must be dissolved in the pore solution. In the absence of pollutants, oxygen contributes to thickening the passive layer. The variation over time in the corrosion potential characteristic of a bar embedded in an undersea structure with a 7 cm cover is graphed in Fig. 10. The potential evolves towards more positive values, up to around +80 mV (Ag/AgCl), denoting steel passivity and the presence of a compact, late-age passive layer due to an uninterrupted supply of oxygen. The figure also shows the fluctuation in corrosion potential in a bar immersed in a 0.5 M NaCl solution bubbled with nitrogen throughout the test to maintain the dissolved oxygen at 0.3 ppm or under. The potential under these conditions is much more cathodic because the low O₂ concentration induces structural changes in the passive layer. The experiment with the non-aerated solution depicted in Fig. 10 confirms the near impossibility of lowering the oxygen content of a solution below 0.1 ppm by de-aerating with nitrogen [23], due to the oxygen impurities in this gas. The potential is consequently around -500 mV (Ag/AgCl) and not lower.
If the oxygen level declines further, however, the potential would shift to more cathodic values, entering the generalised corrosion region (Fig. 11) and, in the absence of depassivating ions, triggering passivation. Lower potential values would carry the current into the cathodic protection region, with the reduction of water as per the cathodic reaction:

2H₂O + 2e⁻ → H₂ + 2OH⁻

To put it briefly, under passive conditions the value of the corrosion potential is governed by oxygen. When the oxygen concentration declines, the potential shifts to more cathodic values, hampering depassivation. However, as Fig. 10 shows, oxygen has fairly ready access to the reinforcement not only in dry but also in saturated concrete, where the oxygen content in the pore solution suffices to maintain and raise steel passivity. The reason is that water contains around 8 ppm of oxygen, which is more than enough to heighten the passivation of concrete-embedded steel. Oxygen concentration in water depends on salt concentration and temperature and, although oxygen solubility is lower in highly saline solutions, such situations are seldom present in concrete, even when it is underground or undersea. The behaviour deducible from Fig. 10 likewise stands as evidence that cover thickness has no impact on oxygen availability around the reinforcement. That is tantamount to saying that although corrosion at different depths logically begins at different times, because it takes chloride ions longer to penetrate thicker covers, the corrosion rate is similar. Therefore, oxygen concentration is either the same at all depths or it does not control the process. Models that make corrosion rate dependent upon oxygen permeability are consequently erroneous.

The role of oxygen in depassivation and propagation of corrosion

As said, sufficient oxygen is normally present in concrete pore solutions to maintain passivity. That notwithstanding, in alkaline media it plays an essential role in the initiation of corrosion, but not in its propagation. In the presence of a high enough chloride content, oxygen at concentrations as small as a few tenths of a ppm suffices to initiate corrosion (localised at the site where the oxygen is in contact with the metal surface). At low oxygen concentration, corrosion is highly localised, as shown in Fig. 12 and Table 1. Rusting was more visible in the 30 day test at 0.5 M NaCl. The corrosion products formed along a line in the bar surface which may have been generated during manufacture (Table 1). Crevice corrosion was also observed at the edge of the tape used to delimit the working area. In both the 0.5 M and the 2.5 M NaCl solutions only the areas with surface imperfections, such as seams or rolled laps, corroded. These imperfections seem to be the preferred sites for pit initiation. High I_corr values are indicative of active corrosion even when rust is not visible to the naked eye until it reaches certain dimensions (hence the readier visibility in the month-long than in the 5 day experiment). When the potential is not spontaneous but electrochemically induced (potentiostatic test), the results are as given in Table 2. At potential values more cathodic than -650 mV (Ag/AgCl electrode), no pits are detected. Pits form at -550 mV, while generalised corrosion appears at -350 mV. As commented, once depassivation has occurred and sufficient rust has formed, oxygen enhances the corrosion rate, but it is not necessary for the corrosion to progress, because the protons and the Fe³⁺ ions may feed the cathodic reaction.
The role of potential in depassivation is illustrated in Fig. 13, where breakdown potential is plotted against the chloride concentration threshold. Further to the time-honoured fundamentals on breakdown potential in the presence of depassivating ions, for localised corrosion to take place the potential must be greater than the potential at which passivity breaks down, i.e., when, due to the presence of oxygen, the potential exceeds the breakdown potential threshold. Figure 13 [24], inspired by Pourbaix's [15] findings, represents potentiostatic tests in mortars made with different types of cement [25]. Different potentials are applied to bars embedded in specimens immersed in a NaCl solution and the current is recorded. When the amount of chlorides arriving at the bar surface exceeds the threshold, the current rises steeply, denoting breakdown of the passive layer. Potential, then, controls the tolerance to one chloride threshold or another. The more positive the spontaneous potential, the more Fe³⁺ is found in the passive layer (due to more oxygen present), the more vulnerable passivity is to breakdown and the lower is the chloride level required to initiate local corrosion. As the graph in Fig. 13 shows, when the potential applied is high, fewer chlorides are needed to induce corrosion, whereas when the potential shifts to negative values, the chloride threshold is higher. The graph mimics the natural pattern described earlier, in which a low enough potential prompts cathodic protection. Briefly, potential is an indication of the level of oxygen present. Where it is not under -650 mV, the risk of corrosion subsists and local or generalised corrosion may take place with scarce oxygen, given a sufficiently high chloride content [23]. In concrete semi-submerged in seawater, corrosion tends to begin in the more aerated tidal zone, which constitutes a sacrificial anode for the rest of the structure, inasmuch as saltwater is a very good conductor. The differential aeration between tidal and undersea zones gives rise to a galvanic macrocell. In the absence of such a difference in potential, corrosion may begin in the underwater zones. While not immune to corrosion, given the more cathodic potential present, such underwater zones fail to depassivate until a higher chloride concentration is reached.

Composition of the rust formed

As the reaction products may vary widely and involve many intermediate species, establishing the basic mechanism calls for very specific studies and the application, for instance, of ring-disk electrodes [26] that are unusable in concrete. Although the description of the mechanism is normally simplified when intended for practical purposes, sight should not be lost of the complexity ensuing from its many stages and intermediate species [15, 26-28]. The oxides involved in steel corrosion in concrete are the same as observed in its corrosion in the atmosphere, including nearly all the most common iron oxides [27], such as magnetite, goethite and lepidocrocite and, in the presence of chlorides, akaganeite. Given such diversity, all attempts to relate oxide composition under real exposure conditions to corrosion mechanisms have proven futile, for composition fluctuates constantly, in keeping with the balance between acidity (in turn dependent upon chloride or bicarbonate ion concentration) and oxygen availability. When exposed to natural media, the composition and dynamics of the oxides present are neither predictable nor indicative of corrosion rate.
Although composition is variable, certain differences exist between the oxides detected in the early stages of corrosion, depending on whether it is carbonation- or chloride-induced (with the appearance in the latter of akaganeite, an oxide that takes up chlorides in its structure). Chlorides generally give rise to more soluble oxides that diffuse farther than those involved in carbonation. Since the corrosion zone acidifies, the oxide ions form a suspension that progressively gels due to coagulation of the individual constituents (Figs. 14, 15). The figure shows that the OH⁻-solvated tetrahedron housing the Fe ion evolves into an octahedron that begins to coagulate with others, forming larger particles that ultimately constitute a network of crystalline oxides. These grains can migrate across the pores and spread, precipitating and clustering until they appear on the surface. When the resulting gels [29] find alkaline cathodic zones they coagulate further and precipitate, ceasing to spread as they convert into laminar rust or solid particles. They diffuse neither continuously nor at a continuous rate. Rather, diffusion adapts to the steady increase of concrete resistivity over time, rendering the corrosion rate resistivity-dependent [30].

Cover cracking effect

If the reaction proceeds, the rust on the concrete migrates across its pores. Although the rust exerts pressure from the outset, due to the porosity of the steel/concrete interface the concrete is unaffected by the full thrust of the pressure. When the pores at the interface become saturated, the full radial pressure is transferred to the concrete, ultimately prompting cracking at the surface of the bar that extends to the surface of the concrete (Fig. 15). The more porous the concrete, then, the longer it takes for rust to exert effective pressure that could crack it. Very porous or moist concrete may not crack at all. The cantilevered roof over Maracaná Stadium, for instance, contained chlorides due to the haste with which it was built. Sixty years later, no cracks were detected even though the reinforcement had corroded completely and disappeared entirely in extensive areas [31]. As corrosion progresses the rust no longer migrates but remains at the bar/concrete interface where, depending on the steel microstructure or the presence of moisture in the concrete, it spreads by layers or cycles (Fig. 15). This rust detaches readily when accessed, but may afford misleading information, for the residual sound diameter may appear to be larger than the actual uncorroded diameter (Fig. 15g). Another fact worth mentioning is that section losses of 20% or higher may render the steel more brittle, for the hydrogen generated during corrosion may diffuse toward the steel. If the pressure exerted by the lack of space to accommodate the rust forming suffices to crack the concrete, cracking begins at the reinforcement surface (Fig. 16) and proceeds toward the closest concrete surface, where it emerges and spreads practically linearly [32] with corrosion penetration, P_corr, of between 10 and 50 microns. Corrosion-induced cracking has been a popular line of research in recent years, with any number of papers on analytical and numerical calculation methods. Many fewer experimental studies have been conducted, however [33,34].
The general consensus is that, as an attack of under 10 µm to 50 µm suffices to crack the cover, cracking may be a visual symptom of the presence of corrosion. The earlier corrosion is detected the better, whilst attempts to prevent it by adding fibres to the concrete or wrapping with carbon-based textiles may be counterproductive.

Alternative cathodic reactions

Another consideration to be borne in mind with respect to the oxides formed during corrosion is that the ferric oxides forming upon Fe²⁺ oxidation may in turn be reduced, acting as cathodic reactants. Whether corrosive action continues therefore depends not only on oxygen availability in the still passive zones. It may also be supported by the oxidation-reduction of the oxides forming in the corrosion zones [35] and by the reduction of the protons generated in water hydrolysis. That is what is meant when corrosion is said to be self-catalytic, and explains that it may proceed with no or scant access to oxygen.

Effects of temperature

Temperature is known to accelerate chemical reactions by reducing activation energy. The Arrhenius equation is normally used to calculate the effect of a change in temperature on the rate constant and therefore on the rate of reaction [36]. In concrete, however, several concurrent effects complicate that simple picture. Fluctuations in temperature also change the solubility of the components of the aqueous phase, which may not necessarily rise with temperature. Portlandite and sulfates, for instance, are less soluble at higher temperatures. Furthermore, temperatures of over 60 °C may decompose sulfoaluminate phases, releasing sulfates into the aqueous phase. Another significant effect of rising temperature is the decline in gas solubility in liquids. Oxygen is consequently expelled at higher temperatures, which translates into less rather than more corrosion. For all the foregoing reasons, the Arrhenius equation is not recommended for application to corrosion in concrete. By way of summary, all these developments make corrosion much more than a simplistic process involving anodes and cathodes in which iron oxidises and oxygen is reduced. A rigorous interpretation must take all the intermediate steps and the many stages governing them into consideration.

Corrosion rate and measuring techniques

Corrosion entails a loss of metal. Since in the case of reinforcement steel the aggressive substances penetrate the concrete from outside, the outcome is normally an asymmetrical loss of diameter (Fig. 17, left). The residual diameter, Ø_t, is therefore equal to the initial diameter, Ø_0, less the corrosion penetration, P_corr. When corrosion is localised (Fig. 17, right), the depth of the deepest pit (P_pit) determines the residual diameter. Loss can be measured gravimetrically, i.e., (a) by weighing the sample before and after testing it in specific environments, to find the cumulative loss, P_corr, expressed in g/cm² of area exposed to corrosion, or (b) by using electrochemical techniques able to measure the metal weight loss through Faraday's law.
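It may help to make the Faraday conversion explicit before turning to the measuring techniques. The sketch below translates a corrosion current density in µA/cm² into a penetration rate and then estimates how long the 10-50 µm attack mentioned above would take to develop. It is a back-of-the-envelope illustration assuming uniform corrosion of iron (n = 2, M = 55.85 g/mol, density 7.87 g/cm³), not a substitute for the referenced procedures.

```python
FARADAY = 96485.0        # C/mol
M_FE = 55.85             # g/mol, molar mass of iron
N_E = 2                  # electrons exchanged (Fe -> Fe2+ + 2e-)
RHO_FE = 7.87            # g/cm3, density of steel
SECONDS_PER_YEAR = 3.156e7

def penetration_rate_um_per_year(i_corr_uA_cm2):
    """Uniform penetration rate from a corrosion current density (Faraday's law)."""
    i_A_cm2 = i_corr_uA_cm2 * 1e-6
    mass_loss = i_A_cm2 * SECONDS_PER_YEAR * M_FE / (N_E * FARADAY)   # g/cm2 per year
    return mass_loss / RHO_FE * 1e4                                    # cm -> micrometres

rate = penetration_rate_um_per_year(1.0)
print(f"1 uA/cm2 is roughly {rate:.1f} um/year")       # about 11.6 um/year
for p_crack_um in (10, 50):
    print(f"{p_crack_um} um of attack at 1 uA/cm2 takes about {p_crack_um / rate:.1f} years")
```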
Electrochemical techniques

The oldest of such techniques, the corrosion potential measurement, is qualitative, indicating only the risk of corrosion, as specified in [4]. A second technique, which measures concrete resistivity, an indication of the degree of material saturation [5], was recently proposed by the author to characterise overall concrete durability, as discussed in a subsequent section. The technique that has been widely shown to quantify corrosion most accurately consists in measuring polarisation resistance, R_p, or linear polarisation [6]. Over the years, this technique, first applied to concrete in the early nineteen seventies [37,38] to study a series of corrosion inhibitors, has proven [39] to be the most appropriate, yet to be excelled by any other, for quantifying and monitoring material loss both in concrete and in any other electrolyte/metal system. Inasmuch as corrosion is an electrochemical event, such techniques are indisputably the most suitable for quantification purposes. Linear polarisation, R_p [6,37], routinely used in all metal/electrolyte systems, consists in applying no more than 20-30 mV above the corrosion-free potential (ΔE) of the embedded steel to induce an electrical current (ΔI). The duration of the increase in potential should be at least 30-60 s to obtain the correct R_p value, for at shorter times the value is affected by the transitory electrical capacity induced by the double layer at the steel/electrolyte interface. Measurements lasting only a few seconds, as in some devices, yield erroneous values due to the non-dissipation of that effect. The duration of testing to ensure optimal results is one of the features addressed in the earliest studies of this technique [37,39]. The ratio between R_p and the corrosion rate, I_corr, is known as the 'Stern formula' (I_corr = B/R_p). Between 1970 and 1978 [37], further to an approach developed by the author, the constant B in that expression, when applied to concrete, was quantified by calibration with simultaneous gravimetric losses (Fig. 18) and application of Faraday's law to convert µA/cm² to corrosion penetration in mm/yr. The constant B was found to be 52 mV in the passive state and 26 mV in the active or corroded state, although 26 mV can be used in all cases inasmuch as an error of a factor of 2 in passive steel is negligible, in light of the gravimetric losses. Three important issues have to be taken into account regarding this technique [6].

1. As the area exposed to corrosion must be taken into account, R_p is expressed in Ω·cm² and hence the corrosion rate in µA/cm². R_p,ap is the value not normalised to the area of the polarised metal.
2. In concrete the upper case 'I' rather than the lower case 'i' (= uniform corrosion current density) is used [41] as an indication that corrosion may be localised (not visible in concrete). The use of i_corr is only correct when corrosion is generalised and uniform.
3. In the absence of coloured rust (= where the steel remains passive), I_corr is less than 0.1-0.2 µA/cm², and it is greater than those values when coloured rust is detected. Values of over 0.1 µA/cm² may be temporarily recorded soon after the steel is immersed in the alkaline solution due to passive layer formation. Once the layer becomes uniform the I_corr values decline to under 0.1 µA/cm², whether the steel is immersed in a solution or embedded in concrete. When the steel is actively corroding, I_corr values lower than 0.1 µA/cm² may nonetheless be found when the concrete is dry.

The aforementioned technique is applicable to all systems, as shown by the mean corrosion rates for solutions with different pH used to simulate the values inside and outside a pit induced by chlorides, given in Fig. 19.
In this figure it is appreciable that only solutions with a pH of over 12.5 exhibit I_corr values of less than 0.1 µA/cm². That does not mean, however, that corrosion begins at pH values below 12.5, for the pH attendant upon carbonation-mediated corrosion has been shown to be around 8 [15]. Depassivation as a result of a general decline in pH would, then, depend on the composition of the pore solution and call for values of around 8.

Electrochemical impedance

Electrochemical impedance spectroscopy (EIS) delivers results very similar to those found with linear polarisation. As noted many years ago [39], the parameters are the same in the two, but expressed differently: as direct current in terms of time (linear polarisation) or as frequency (EIS). Since EIS is more work-intensive, its use is entirely unnecessary in most cases. Its application is only justified in specific studies where, for instance, the object is to determine capacity-related properties in the double layer of the concrete or the corrosion mechanisms involved. It is often used to establish concrete resistivity, which can be found quickly with a single pulse using only one frequency. It is not discussed in detail in this review paper in light of such specific applicability.

Contactless measurement

As accessing the reinforcement entails breaking the concrete cover, corrosion should ideally be measured on the surface, with no physical contact with the bar. That approach, which had never before been used to measure corrosion (even though the corrosive power of stray current had long been known), was shown by the author to be viable in small-scale specimens in a paper published in 2001 [42] and was subsequently patented in Spain (Patent No. ES 2 237 241 B). It was based on a simplification of the analogue electrical model (Fig. 20) [43]. Contactless measurements in large specimens need to account for the reinforcement length (critical length, see later) reached by the electrical pulse, as is done with normal corrosion measurements. No publications were found by the author on how to determine this critical length in the contactless method. The application of this new non-contact method to large structures with the same level of accuracy as the traditional one should therefore be questioned until the scientific basis and convincing results are published.

Measurements in small- and large-scale elements

Devices able to measure R_p are based on potentiostat-galvanostats, which can apply a fixed potential or current. The technique can be readily deployed in small specimens, and all commercial devices include the option. The problem in large members in real structures is that, as the counter-electrode is much smaller than the reinforcement area, the potential scatters across large distances, as shown in Fig. 21. The problem of finding the critical length can be countered in one of two ways.

• In the guard ring method for current confinement [44] the ring must be modulated (adapting the outer ring current to confine it to the circle predetermined by the two inner Vss electrodes, Fig. 22). The current is not correctly confined without those electrodes [6]. Only one commercial device correctly confines the current with this method; other market devices without such inner electrodes deliver erroneous corrosion rate values, under- or over-confining the central current.
• In the potential attenuation method [45], electrodes aligned with the counter-electrode (Fig. 23) measure the distance reached by the potential applied, and the total area polarised is used [6]. Only one commercial device (the one correctly confining the current) features this method, which should only be implemented when the structure is very moist and satisfactory confinement is not possible.

Fig. 21 Scatter, with distance along the steel bar, of the current applied through the small counter-electrode. Fig. 22 Modulated confinement of the current: a guard ring controlled with the two (Vss) electrodes placed between the guard ring and the central counter-electrode [44].

All that need be done to verify whether a device delivers correct values is to measure samples containing passive steel. Devices that fail to confine the current satisfactorily cannot deliver values below 0.1 µA/cm², the correct rate for passive steel. This is relevant, for instance, to determine whether corrosion inhibitors effectively reduce the corrosion rate [44].

Resistance control: relationship between R_p and R_ohm

One issue with significant implications that has not been addressed sufficiently in the literature to date is that in concrete I_corr is subject to resistance control, i.e., it is inversely proportional to the resistivity of the medium. Empirical verification of that fact [30] led to the so-called corrosion rate-resistivity (I_corr-ρ) diagram (Fig. 24), from which an inverse relationship of the form I_corr = k/ρ was derived. The relationship exhibits some scatter, depending on the temperature at which resistivity is measured and the porosities and resistivities of the concrete pore solution, although the general pattern is consistently observed. With that empirical relationship, I_corr can be related both to resistivity and hence to the effective diffusion coefficient [46], from which the apparent diffusion coefficient can be deduced. This reasoning can be carried one step further by comparing that expression with the basic equation for R_p, whereby R_p can be equated to resistivity. In a concrete specimen with actively corroding steel and a resistivity of 20 kΩ·cm, for instance, the bar corrosion rate can be estimated directly from that relationship. As the relationship only holds in the presence of active reinforcement corrosion, it is not applicable when the steel is still passive, for in such circumstances deducing R_p from the ρ value would yield wholly erroneous results.

Propagation period model

Once the corrosion rate is quantified, the effect of any variable that alters it [47], such as inhibitors or the presence of moisture in carbonated or chloride-bearing concrete, can be studied, the condition of a real structure can be assessed, and the propagation period [48] can be quantified by integrating the value over time. On those grounds, and bearing in mind the structural consequences of corrosion [49], the I_corr values can be classified into four levels, as shown in Fig. 25, and the difference between the instantaneous corrosion rate, I_corr, and the cumulative corrosion penetration, P_corr, can be expressed [6]. I_corr values of over 100 µA/cm², equivalent to a diameter loss of over 1 mm/year, are extremely high, for the very highest values measured in concrete [48] are on the order of 100-200 µA/cm². Those should consequently be the maxima used to artificially accelerate corrosion with an external current [33]. Tests with I_corr > 200 µA/cm² create unrealistic conditions that yield types of rust differing from those actually found, due to the high acidity generated by such high currents.
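A short sketch may help to tie these quantities together: the Stern formula converting R_p to I_corr, the resistivity-based estimate, and the time integration of I_corr into cumulative penetration P_corr. The Stern constant B = 26 mV for active steel follows the text; the resistivity constant k = 3 × 10⁴ (with I_corr in µA/cm² and ρ in Ω·cm) is the commonly cited empirical value for the I_corr-ρ relation and should be treated here as an assumption, as should the 11.6 µm/year per µA/cm² Faraday conversion used for the integration.

```python
B_ACTIVE_MV = 26.0          # Stern constant for actively corroding steel (mV), from the text

def i_corr_from_rp(rp_ohm_cm2, b_mv=B_ACTIVE_MV):
    """Stern formula: corrosion current density (uA/cm2) from polarisation resistance."""
    return b_mv * 1e3 / rp_ohm_cm2           # (mV -> uV) / (ohm.cm2) = uA/cm2

def i_corr_from_resistivity(rho_ohm_cm, k=3.0e4):
    """Empirical resistivity estimate I_corr ~ k / rho (active corrosion only).

    k = 3e4 is the commonly cited value and is an assumption of this sketch.
    """
    return k / rho_ohm_cm

def cumulative_penetration_um(i_corr_series_uA_cm2, dt_years):
    """Integrate equally spaced I_corr readings into P_corr (micrometres),
    using the approximate 11.6 um/year per uA/cm2 conversion for steel."""
    return sum(i * 11.6 * dt_years for i in i_corr_series_uA_cm2)

print(i_corr_from_rp(26_000))            # 1.0 uA/cm2 -> clearly active corrosion
print(i_corr_from_resistivity(20_000))   # 20 kOhm.cm -> about 1.5 uA/cm2
# Hypothetical monthly readings over one year (wetter winters, drier summers):
readings = [1.2, 1.0, 0.8, 0.4, 0.2, 0.1, 0.1, 0.2, 0.4, 0.8, 1.0, 1.2]
print(cumulative_penetration_um(readings, dt_years=1 / 12))   # roughly 7 um in that year
```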
Integrating I_corr over time has also provided insight into the effect of climate, leading to the establishment of patterns and the definition of the yearly representative corrosion rate, I_corr,REP. Figure 26 reproduces an example of the instantaneous corrosion rate in carbonated concrete over several years [50] and Fig. 25b the respective cumulative corrosion penetration values, P_corr. The graphs in Fig. 25c, d show the same parameters for concrete with chlorides, and Fig. 25e, f are of concretes submerged in seawater. The inference drawn from these behaviours is that corrosion can evolve over time with different trends: either corrosion evolves linearly (Fig. 25, line A), or it accelerates (line C), or it may attenuate (line B). The difference can be impacted by cracking. In carbonated concrete, once formed, the cracked region on the steel surface remains dry longer than the uncracked zones (Fig. 24a, b). When cracks form in underwater structures, the opposite occurs, for the ingress of seawater across the cracks accelerates corrosion (Fig. 24e, f). In summary, the propagation period can be expressed as a linear relationship between time and P_corr when I_corr is constant, or as two bilinear periods that may also be expressed mathematically as exponential or power laws (Fig. 27, bilinear trend of the propagation period [51]). A representative yearly corrosion rate, I_corr,REP, can be obtained from the slope of the evolution of P_corr with time. In the linear trend the value is unique, whereas in the bilinear trend there would be two representative corrosion rates.

Diameter loss and structural implications

Although the relationship between corrosion and structural decay is not addressed in this paper, diameter loss in reinforcement is acknowledged as the basic parameter for quantifying the loss of bearing capacity (Fig. 28) [43]. In future, this link should be used for developing better models of corroded structures in order to optimise the calculation of the residual safety. Three examples are presented for the practical application of the principles described before.

1. Verification of the efficacy of cathodic protection
2. Accelerated service life testing: diffusion coefficient, chloride threshold and corrosion rate in concrete; inhibitor efficacy
3. Corrosion resistance of new binders.

Verification of the efficacy of cathodic protection

The uncertainties associated with the verification of cathodic protection efficacy are well known, for none of the methods tested to date ensures that the steel has repassivated or that corrosion has been reduced to negligible levels. The methods most commonly used are:

• 'instant-off' potential measurement
• potential change after 4 h of depolarisation
• 24 h depolarisation potential.

Efficacy can be accurately measured, however, if in addition to any of these methods R_p is measured on site. I_corr should be measured by suitably confining the current (modulated confinement). The condition can be measured when the steel is disconnected from the power source. After depolarising for 4-24 h, its I_corr value indicates whether or not the steel is passive. Figure 29 illustrates the case of a bridge. The left of the figure shows the decay in potential registered when depolarising after 6 months with cathodic protection. The decay is smaller than required (100 mV), while the right of the figure shows I_corr before and after the 6 months. The I_corr values have, however, shown a clear decrease in the corrosion rate.
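As a small illustration of the two checks described above, the snippet below combines the usual 100 mV depolarisation criterion with a direct check of the corrosion rate against the 0.1 µA/cm² passivity bound used throughout this article. The numeric values in the example are invented for illustration only; they merely mimic the bridge case, where the potential decay criterion failed while the measured I_corr indicated effective protection.

```python
def cp_effective(depolarisation_mv, i_corr_uA_cm2,
                 decay_criterion_mv=100.0, passive_limit_uA_cm2=0.1):
    """Return both cathodic-protection checks as a dict.

    depolarisation_mv: potential decay after switching the CP current off (4-24 h).
    i_corr_uA_cm2: corrosion rate measured with modulated confinement after depolarising.
    """
    return {
        "decay_criterion_met": depolarisation_mv >= decay_criterion_mv,
        "steel_passive": i_corr_uA_cm2 < passive_limit_uA_cm2,
    }

# Hypothetical readings inspired by the bridge example: decay below 100 mV,
# but the corrosion rate has dropped to passive levels.
print(cp_effective(depolarisation_mv=70.0, i_corr_uA_cm2=0.05))
# {'decay_criterion_met': False, 'steel_passive': True}
```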
Accelerated testing to determine the service life of a concrete mix is one of the challenges facing specialists today. The procedure normally involves testing the mix for chloride penetration, either under accelerated or natural diffusion, to find the diffusion coefficient (D_ap). That, in conjunction with the respective age factor value, the surface concentration and an assumed chloride threshold, can be used to predict the steel depassivation time. Present tests [52] deliver only the values of the coefficient D_ap and of the surface concentration: all the others are assumed. An accelerated 'integral' method has been developed [53] to find the values of the service life prediction parameters, determining:

• the surface concentration (likewise derivable from other tests)
• the chloride threshold
• the corrosion rate after depassivation.

The method, now standardised in Spain [54], consists in embedding one bar each in two cubic concrete specimens and conducting an accelerated chloride migration test by applying 12 V between two electrodes until the steel depassivates, as shown in Fig. 30. The subsequent steps are as follows.

• The drop in exterior voltage is interrupted in one of the specimens, allowing corrosion to proceed for at least 15 days to measure the I_corr values that appear spontaneously.
• The other specimen is split open to: extract a sample of the concrete in contact with the chloride solution from which to find the chloride surface concentration prevailing during the test; observe the corrosion induced in the steel by the chloride front, visualised by spraying the material with silver nitrate; and extract a sample of the concrete/steel interface to determine the (threshold) chloride content that induced such corrosion.
• The time to depassivation is then used to calculate the apparent diffusion coefficient (Fig. 31) through the application of the equation in Table 3.

Figure 31 also shows the chloride threshold measured. Figure 32 shows the recording of the corrosion rate I_corr with time from the initiation of the test. In the figure several concentrations of the same inhibitor are compared. It is evident that A0, with no inhibitor, presents the earliest depassivation, while the rest present later depassivation times as the inhibitor concentration increases. The findings were reasonable and, while the long-term results regarding inhibition power are unknown, comparisons that help classify the starting materials in the concrete may be drawn from the short-term data. To date, the values obtained with this accelerated test have been found to be consistent with those found with the natural diffusion test [55]. This accelerated method delivers more experimental parameters than the diffusion coefficient test alone. More specifically, it constitutes an accelerated method for determining the critical chloride content.

Corrosion resistance in new binders

The intense efforts presently being made to lower the clinker content in cement as a way of reducing its carbon footprint entail the use of additions of different types. The smaller proportion of portlandite attendant upon the reduction in clinker content translates into less effective buffering of the decline in pH induced by corrosion. As noted earlier, the buffering capacity of the so-called 'alkaline reserve' in cement plays a pivotal role in long-term concrete durability.
Steel passivation capacity or corrosion resistance in any type of cement, including new binders with much less clinker, has traditionally been studied on the grounds of electrochemical techniques. The methodology was exemplified in [55]. A thorough study of corrosion resistance calls for specific passivation tests and for plotting polarisation curves for different amounts of chlorides, a discussion of which lies outside the object of this article. Be it said, however, that the comparison with a type I reference cement can be made by testing the chloride and carbonation resistance.

• Chloride resistance can be determined with the 'integral' method described above. Mortars or concretes can be prepared with the cements to be tested, comparing the D_ap values, the chloride thresholds, the surface concentrations and the corrosion rates with those of a type I cement.
• Carbonation resistance calls for two types of specimens: cubic non-reinforced specimens, used to observe carbonation penetration in natural or accelerated (≤3% CO₂) tests, and reinforced specimens with a small cover, to shorten the time required for carbonation to reach the steel/concrete interface. CO₂ concentrations of up to 20% may be applied for accelerated carbonation, as the aim is full carbonation rather than rate comparison. After the specimen is fully carbonated, it is exposed to high humidity (submerged conditions) to induce corrosion. The I_corr values observed for the new binders are then compared to those found for a type I reference cement.

Experiments of this nature are summarised in [55] and Fig. 33. Reports of resistance to depassivation must specify whether chloride or carbonation testing was involved. Concrete resistivity and cement buffering capacity are both instrumental in chloride-mediated depassivation. In contrast, buffering capacity is the predominant mechanism for resisting the effects of carbonation.

Final comments

Steel corrosion takes place in concrete in much the same way as in the atmosphere or the soil, although cement generates a unique alkaline medium. Corrosion is triggered by local carbonation- or chloride attack-mediated acidification and proceeds primarily via microcells. Corrosion zones in concrete, even where involving a single pit, are not pure anodes connected to a cathode. Galvanic currents between corrosion and passive zones are not equivalent to the corrosion rate and in fact account in general for a very small percentage of that rate. Their measurement may therefore detect only part of the process. Oxygen is needed to trigger corrosion in concrete, but as tenths of a ppm suffice for localised corrosion where the chloride content is high, the process may occur even underwater. Less aerated zones cannot therefore be said to be immune to corrosion. Oxygen is not required for corrosion to proceed (due to water hydrolysis and the reduction of the oxides formed), although its presence intensifies the process, which is enhanced by the sum of all the cathodic processes. Concrete was one of the first systems in which the instantaneous corrosion rate, also known as 'polarisation resistance' or 'linear polarisation' (alluding to the fact that a small alteration in potential is used to induce a linear increase in current), was measured. The technique has been widely used ever since it was described in the ASTM G59-97 (2014) standard for all metal/electrolyte systems. Attempts to discredit it have been unable to disprove its good correlation with weight losses simultaneously measured by gravimetry and converted using Faraday's law.
At this writing, its direct current version (more readily interpreted than electrochemical impedance) continues to be the easiest and most convenient technique for quantifying corrosion in any metal/electrolyte system. The application of R p to real structures is influenced by the quasi-infinite area of the reinforcement, which spreads well beyond the auxiliary electrode border: inasmuch as the actual area of the metal polarised by the signal must be known, the area of the auxiliary electrode is not the correct reference. That circumstance necessitates specific devices in which measurement is based on modulated current confinement or on measuring the attenuation of potential with distance from the auxiliary electrode border. Although a method for contactless corrosion measurement was published in 2001, to date no market device has proven able to take accurate measurements in large-scale structures (despite claims to the contrary), for the area affected has not yet been determined, nor have studies on its calibration been forthcoming. Periodic measurement of the corrosion rate has provided the grounds for calculating cumulative corrosion and proposing corrosion propagation period models. In outdoor structures, seasonal cycles can be applied to show that corrosion may be constant if expressed as a yearly mean (I corr,REP). In 'resistance control' of corrosion, also shown to exist in concrete, the rate is inversely proportional to resistivity. This provides grounds for certain analogies, such as deducing the corrosion rate from resistivity (under conditions of active corrosion) or relating it to the effective diffusion coefficient. An incorrect diagnosis of the causes of corrosion and of how it proceeds is known to be one of the major reasons for the limited durability of concrete repairs. Corrosion rate measurement is essential to studying and diagnosing the progression of this process in a given structure. Activity in each zone of the structure can only be reliably determined by measuring the corrosion rate. Engineers engaging in the repair of deteriorated structures must acquire more specialised knowledge to ensure effective intervention. Structural behaviour is in itself complex, but the basic processes triggering corrosion are much more complex than the mere presence of a corrosion cell with an anode and a cathode. Another challenge to be confronted is to understand the structural behaviour of concrete with microcracks around the reinforcement, for with the concomitant impact on its bond with the steel it ceases to behave as a composite material. Intense basic research on reinforcement corrosion will continue to be needed in the years to come to fill the existing gaps in the knowledge of its effect on structural behaviour and, using the findings, to develop calculation tools to accurately assess the residual strength and hence the safety of the structures concerned.
2019-04-30T13:08:52.662Z
2018-12-26T00:00:00.000
{ "year": 2018, "sha1": "e02bad295f55ff8a30601053cfeded9381348677", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1617/s11527-018-1301-1.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "2cda163253b805fee8d239f68e25e470c10b95f9", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
257279761
pes2o/s2orc
v3-fos-license
Random Fixed Boundary Flows We consider fixed boundary flow with canonical interpretability as principal components extended on non-linear Riemannian manifolds. We aim to find a flow with fixed starting and ending points for noisy multivariate data sets lying on an embedded non-linear Riemannian manifold. In geometric term, the fixed boundary flow is defined as an optimal curve that moves in the data cloud with two fixed end points. At any point on the flow, we maximize the inner product of the vector field, which is calculated locally, and the tangent vector of the flow. The rigorous definition derives from an optimization problem using the intrinsic metric on the manifolds. For random data sets, we name the fixed boundary flow the random fixed boundary flow and analyze its limiting behavior under noisy observed samples. We construct a high level algorithm to compute the random fixed boundary flow and the convergence of the algorithm is provided. We show that the fixed boundary flow yields a concatenate of three segments, of which one coincides with the usual principal flow when the manifold is reduced to the Euclidean space. We further prove that the random fixed boundary flow converges largely to the population fixed boundary flow with high probability. We illustrate how the random fixed boundary flow can be used and interpreted, and showcase its application in real data sets. Introduction Most existing statistical methods assume a linear dependency between features. As the dimensionality of features increases, the representation of the features in a high-dimensional space becomes more complex and it thus becomes more challenging to understand the relationships between features. In many applications, modern data structures are often complex and not necessarily linear. Indeed it is often the case that there is a lower-dimensional structure, namely a manifold embedded in the high-dimensional ambient space (Fefferman et al., 2016(Fefferman et al., , 2018, as in the examples of geometric shapes in the shape space (Turk and Levoy, 1994;Dryden and Mardia, 2016;Kilian et al., 2007;Bradley et al., 2013) and graphs in computer graphics (Phillips et al., 1997;Gross, 2005;Arjovsky et al., 2017). A series of methods that aim to recover the underlying structure of the lower-dimensional manifold have been developed over the past two decades. These methods, usually called manifold learning, are focused mostly on mapping data in a d-dimensional space into a set of points close to an m-dimensional (m d) manifold. Among them, is a method known as known as the Principal Component Analysis (PCA), which is commonly used to reduce the feature dimension in the Euclidean space. To address features lying in a non-linear space (i.e., a manifold), methods such as LLE (Roweis and Saul, 2000), Isomap (Tenenbaum et al., 2000), MDS (Cox and Cox, 2000), and LTSA (Zhang and Zha, 2004), which determine the low-dimensional embedding, preserving local properties of the data, may be preferable. A comprehensive review of such work appears in Ma and Fu (2011). Another line of research relating to statistics on manifolds is centered on the extension of existing methods defined in the Euclidean space to the manifold space. The manifold space can be the actual physical space that the data lies on or the learnt manifold created through the manifold learning methods. 
In recent decades, numerous non-linear approaches have been developed to analyze the data on the manifold directly (Jupp and Kent, 1987;Fletcher et al., 2004;Huckemann and Ziezold, 2006;Kume et al., 2007;Fletcher and Joshi, 2007;Kenobi et al., 2010;Jung et al., 2012;Eltzner et al., 2018). Throughout the paper, we focus on the known manifold, based on the assumption that the manifold embedding is known. Next, we will mainly review the "curve fitting" methods on manifolds. A geodesic is a generalization of the straight lines in the standard Euclidean space to the manifold. The principal geodesic analysis (Fletcher et al., 2004), which extends the PCA to the manifold, was proposed to describe the non-linear variability of data on a manifold. The principal curves, proposed in Hastie and Stuetzle (1989), are flexible one-dimensional curves that pass through the middle of data points. Having said that, principal curves are able to better capture the non-linear variation of data in comparison to all other regression lines in the Euclidean space. Ozertem and Erdogmus (2011) redefined principal curves and surfaces in terms of the gradient and the Hessian of the probability density estimate, based on the consideration that every point on the principal surface should be at the local maximum of the probability density in the local orthogonal subspace, and not the expected value as in Hastie and Stuetzle (1989). For applications in classification tasks, Ladicky and Torr (2011) proposed a new curve fitting method to find the smooth decision boundary with bounded curvature. A recent piece of work on principal flows (Panaretos et al., 2014) works as an extension of the principal curves on Riemannian manifolds. Therefore, the principal flows are also flexible onedimensional curves, which pass through the Fréchet mean of the data points. The principal flows are able to capture the non-geodesic pattern of variation both locally and globally. Instead of handling curves with an explicit parameterization, Liu et al. (2017) combine the level set method with the principal flow algorithm to obtain a fully implicit formulation, so that the obtained co-dimension one surface on the manifold fits the data set well. When the data comes with multiple paths, it would be quite natural to want to isolate one of the paths in particular -that with a fixed direction. All the methods outlined above/earlier fail to determine flows with fixed directions. Hence, we propose flows with a fixed direction, each determined by fixed data boundaries, namely their start and end points. For example, we consider seismological events that took place in the Sea of Japan between 1904 and 2015, with the epicentres plotted as green dots in Figure 1 (b). From the information of tectonic plates shown in Figure 1(a), we observe that the seismological events in this analysis tended to occur around the tectonic plate boundaries (shown as black curves with triangles in Figure 1(a)). Specifically, we deduce that the seismological events occurred frequently along the boundaries of four tectonic plates: the North American plate, the Eurasian plate, the Philippine Sea plate and the Pacific plate. Given these seismological events, the principal flow passes through the Fréchet mean and captures local variations that depend on the value for the scale parameter, h. 
Since there are a greater number of seismological events along the boundary of the Pacific plate, the resulting Fréchet mean appears around the Pacific plate and the principal flow starts moving from the Fréchet mean. In Figure 1(b), the red curve represents the principal flow of the earthquake data for a scale parameter of 400 miles. We observe that the principal flow moves along the boundary of the Pacific plate (red curve in Figure 1(c)). When we focus on the seismological events caused by the Philippine Sea plate, the principal flow will not be of interest in terms of finding a boundary. In this sense, the trend along the boundary highlighted in blue in Figure 1(c) would be more appropriate. Although we could derive a flow similar to that shown in blue by selecting the data with latitudes and longitudes around the Philippine Sea plate, it is hard to accurately determine which data points to include in practice. Hence, we propose fixed boundary flows, where the flow will be automatically determined by using boundary points that are chosen by users manually. If we select start and end points around the Philippine Sea plate, the obtained fixed boundary flow for a scale parameter of 400 miles is shown in blue in Figure 1(b). Furthermore, we observe that the fixed boundary flow starts from the fixed starting point, moves along the boundary of the Philippine Sea plate (blue curve in Figure 1(c)) and terminates at the fixed ending point. In order to clarify the aforementioned concepts and parameters, we hereby review the technique proposed for principal flows in brief and demonstrate that this technique comes up short when considering boundary constraints. Throughout this paper we work within the context of a complete Riemannian manifold M of dimension m, and M is isometrically embedded into the Euclidean space (R d , · ) with m < d. The related preliminaries in Riemannian geometry can be found in the Supplementary Materials. Given data points {x i } n i=1 on the Riemannian manifold, the methodology for the principal flow seeks to solve for a curve on the manifold that passes through the Fréchet mean of the data, such that the tangent vector along the curve locally follows the direction of maximal variation of the data in a local tangent space. As we will define later, the vector field characterizes the direction of maximal variation and the scale parameter characterizes how locally or globally we wish to describe a path of maximal variation. A flow with a large scale parameter captures the global trend while a flow with a reduced scale parameter describes the finer structure. Mathematically, the principal flow finds a curve γ : [0, r] → M starting at a Fréchet meanx and maximizing ( 1.1) where λ 1 (x) and W n,h (x) are the first eigenvalue and the first unit eigenvector of the local tangent covariance matrix Σ n,h (x), respectively. The definition of Σ n,h (x) is reviewed in the Supplementary Materials, and we remark that Σ n,h (x) is computed with the projections of data points onto T x M, which implies that the first eigenvector W n,h (x) also locates in the tangent space at T x M. The subscript n of Σ n,h indicates that Σ n,h (x) is calculated from the data points of cardinality n, while the subscript h of Σ n,h (x) indicates the locality. Specifically, Σ n,h (x) is computed using the data points in B d (x, h), the Euclidean ball centered at x of radius h. With a different x, the eigenvectors form the vector field W = {W n,h (x)}. 
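To make the vector field concrete, a minimal sketch of the ambient-space variant mentioned above is given here: for a query point x it collects the sample points inside the Euclidean ball of radius h, forms their covariance, and returns the leading eigenvalue and eigenvector. The tangent-space projection used in the paper's exact definition of Σ n,h (x) is omitted, and the sign convention for the eigenvector is a practical detail not spelled out in the text.

```python
import numpy as np

def local_vector_field(X, x, h):
    """Leading eigenpair (lambda_1, W) of the local covariance of the sample
    points of X (n x d array) lying in the Euclidean ball B(x, h).
    Ambient-space approximation: no projection onto the tangent space."""
    nbrs = X[np.linalg.norm(X - x, axis=1) <= h]
    if len(nbrs) < 2:
        raise ValueError("scale h too small: not enough neighbours around x")
    centred = nbrs - nbrs.mean(axis=0)
    cov = centred.T @ centred / len(nbrs)         # average of the a a^T terms
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    return eigvals[-1], eigvecs[:, -1]

def orient(W, reference):
    """Eigenvectors are defined only up to sign; align W with a reference
    direction (e.g. the previous tangent) so the field varies continuously."""
    return W if float(np.dot(W, reference)) >= 0 else -W
```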
To avoid confusion, we will omit the subscript n and h of W n,h (x) hereafter. The first eigenvalue is assumed to be simple throughout, which guarantees the uniqueness of W . We note that the projection onto the local tangent space might be impossible in practice, in the case that either M or the formula of the local tangent space is unavailable. Under these circumstances, we might omit the projection step in computing Σ n,h (x) and use the local covariance matrix in ambient space instead. Let us think of a simple example: noisy "C"-shaped data in M = R 2 as shown in Figure 2. Furthermore, by setting h = ∞, we will use this example to demonstrate that determining fixed boundary flows is not a simple extension of the work of principal flows. The first eigenvalue λ 1 (x) in (1.1) varies with x with Figure 2 visualizing its changes. One may see that the first eigenvalue reaches its trough at the Fréchet mean, which in turn implies that the first eigenvalue would have been increasing along any direction after its departure fromx. By differentiating λ 1 (x) (see derivation given in the Supplementary Materials) we have and that λ 1 (x) increases most rapidly along its gradient, that is, W (x) and −W (x). Therefore, maximizing either λ 1 (γ(t)) or γ(t), W (γ(t)) at γ(t) =x locally leads to two half-lines along W (x) and −W (x) starting fromx. Hence, maximizing the optimization problem (1.1), which is the product of λ 1 (γ(t)) and γ(t), W (γ(t)) locally, leads to the principal flow along W (x) through x, as represented by the dashed line on the left panel. Figure 2: Distribution of the first eigenvalue λ 1 (x) for x ∈ M with h = ∞. The black points represent sample points; the red arrow represents the direction of W (x); the red cross on the left panel represents the Fréchet meanx of the sample and the dashed line on the left panel represents the principal flow; two red crosses on the right panel represent the fixed boundaryx 1 andx 2 ; the dashed arrow on the right panel represents the opposite direction of W (x); the color bar represents the magnitude of λ 1 (x). Things are very different when one considers the fixed boundary flows, which begin from the fixed starting pointx 1 , move along the data points and end at the fixed ending pointx 2 . From the right panel of Figure 2, we observe that the λ 1 (x) is large at the boundary and will decrease when a curve moves towards the data cloud's center fromx 1 . Furthermore, from the differentiation form, λ 1 (x) decreases most rapidly along its gradient W (x 1 ), as shown by the red arrow in the right panel of Figure 2 and increases most rapidly along −W (x 1 ), the dashed arrow shown on the right panel of Figure 2. Therefore, if one maximizes the inner product γ(t), W (γ(t)) at γ(t) =x 1 in (1.1), in favor of the curve moving along the vector field W (γ(t)), one should takeγ(t) = W (γ(t)). This means the curve would move along the red arrow in the right panel of Figure 2, which makes λ 1 (γ(t)) decrease most rapidly. While if one maximizes λ 1 (γ(t)), one should takeγ(t) = −W (γ(t)), in favor of the curve moving along −W (γ(t)) since it is the gradient of λ 1 (γ(t)). However, takinġ γ(t) = −W (γ(t)) makes γ(t), W (γ(t)) decrease fastest. From this point of view, we conclude that maximizing λ 1 (γ(t)) and the inner product γ(t), W (γ(t)) in (1.1) is mutually conflicting. 
Such conflict makes the fixed boundary flows unique, unlike the principal flows, meaning one cannot, therefore, simply extend the optimization problem of principal flows to fixed boundary flows. We are now motivated to consider the fixed boundary flows that capture the manifold data variation in a way that differs from the principal flows. To achieve this, we initialize an optimization problem to capture a smooth flow for non-random data lying on the manifold that starts and ends at pre-defined points in Section 2. For each point of the flow, its tangent vector is close to the vector field at that point. When noise presents, the data follows from the underlying distribution of the population flow on the manifold and it is thus non-deterministic. And so too are the fixed boundary flows. The random fixed boundary flows, generalizing the fixed boundary flows, are proposed in Section 3.1. An efficient algorithm to determine the random fixed boundary flow, with its convergence of the random fixed boundary flow, is outlined in Section 3.2. In Sections 4 and 5, we illustrate that the random fixed boundary flow is able to capture patterns of variation in synthetic, seismic and real-world image data. Several statistical properties and theories of the fixed boundary flow are examined in Section 6. Fixed boundary flows Fixing two boundary points produces an infinite number of flows. To begin with, we describe the class of curves that provide the candidates of the fixed boundary flows. Givenx 1 andx 2 , we define the class as: where W (γ(t)) is the value of the vector field W , calculated form local data {x i } n i=1 at γ(t), and (γ[0, t]) denotes the length of the parametric flow γ[0, t] from γ(0) to γ(t), for all 0 < t ≤ r. Here, ∆ = d(x 1 ,x 2 ) denotes the geodesic distance betweenx 1 andx 2 and C > 1 is a given constant. The choice of C controls the size of Γ(x 1 ,x 2 ). Since t ∈ [0, C∆], the length of the flows in the class Γ(x 1 ,x 2 ) is less than C∆. A smaller C filters out the flows that (1) are far away from the data cloud by restricting the length of flows in the class Γ(x 1 ,x 2 ), and (2) overfits the data (this is because overfitted flows tend to go through all data points which will increase its length). We assume ∆ < 1 without loss of generalization, otherwise the manifold M should be rescaled. For any flow γ ∈ Γ(x 1 ,x 2 ), we could determine its moving direction and vector field at every point. The moving directions and vector fields vary with different points and different flows. To follow the direction of highest variation, we aim to find a flow with a moving direction that matches the vector field as much as possible at any given point on the flow. From the classical mechanics perspective, we seek a flow with fixed starting and ending points, that best approximate the vector field globally. Conventional local Euclidean approaches fail to achieve this without being able to accommodate the boundary conditions globally, while forcing the flow to stay on the manifold. We term such an optimal flow the fixed boundary flow; that is, it is defined as a smooth flow γ on the manifold M, starting and ending at the fixed points, with a derivative vectorγ that is maximally compatible with the vector field W , calculated from local data. Definition 2.1. (Fixed boundary flow at scale h) Letx 1 ,x 2 ∈ B, where B is the neighborhood that contains the data {x i } n i=1 on the manifold. Assume that Σ n,h (x) have distinct first and second eigenvalues for any x ∈ B. 
A fixed boundary flow of {x i } n i=1 with givenx 1 andx 2 is the curve satisfying where W (γ(t)) is the vector field over the neighborhood of γ(t) for 0 ≤ t ≤ C∆. The fixed boundary flow is the solution of the optimization problem defined in (2.2). Random fixed boundary flows Besides being high-dimensional, the data on the manifold is usually noisy, representing some underlying distribution. One accessible way to illustrate the noisy data is shown in the following assumption. Remark 3.1. Also, γ * (t 1 ) =x 1 and γ * (t n ) =x 2 . As shown in Figure 3,x 1 andx 2 are chosen to be at the inner end of the data cloud so that there are enough samples in the neighborhood ofx 1 andx 2 . Section 3.2 will further formulate the relationship betweenx 1 (x 2 ) and the end points of the population flow. Under Assumption 3.1, the relation between the fixed boundary flow and γ * is summarized in the following theorem. The proof of Theorem 3.1 is given in Appendix B in the Supplementary Materials. From Theorem 3.1, we observe that the inner product γ * (t), W (γ * (t) is close to its maximum, that is, 1 with sufficiently small h. This means that the integrand in the optimization problem (2.2) achieves a very large value along the main segment of the flow γ * . Note that we choose to work with γ simply because there might not be enough samples at the two ends of γ * . Here, boundaryx 1 and x 2 are at the inner end of the data cloud so that the main segment γ * (x 1 ,x 2 ) is h/2 away from γ * (0) and γ * (r * ). Hence, γ * (x 1 ,x 2 ) well approximates the optimal solution to (2.2), as illustrated by Figure 3. The theoretical analysis will focus on the main segment of γ * . For convenience, we use γ * simplifying γ * (x 1 ,x 2 ) for the rest of the paper. Figure 3: Illustration of the population flow γ * and random fixed boundary flowγ. The main section of γ * is plotted along the purple dashed line andγ is shown on the blue curve. A selected data point x i and its corresponding segment γ * (t i ) are highlighted in green. Now, let us turn to the random fixed boundary flows. Under Assumption 3.1 and given fixed boundaries, a random fixed boundary flow is the empirical flow,γ, computed from the data points with the fixed boundary. Our focus here is twofold. First is to determine the random fixed boundary flows through an efficient algorithm without intensive computation. Second is to investigate the distance property between the random fixed boundary flow and the population flow γ * , where a theoretical analysis of the bound of the Hausdorff distance is derived from the geometry property of the underlying manifold. Determination of random fixed boundary flows The aim of the proposed approach is to determine the random fixed boundary flow via a discrete flow with the fixed boundary. Furthermore, each point of the discrete flow moves along the direction of the vector field, which captures the localized variation maximally. From this perspective, the proposed approach attains an approximate solution for the original optimization problem in (2.2). Given the fixed boundary pointsx 1 andx 2 , the implementation begins with a discrete flowγ (0) starting atx 1 and ending atx 2 , with a user-defined resolution N . The choice of the initial flow γ (0) can be a geodesic on the manifold M or a straight line fromx 1 tox 2 in the ambient space, neither derailing the convergence of the algorithm, as we will show. 
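As an illustration of those two initialisation choices, the sketch below builds a discrete initial flow of 2N + 1 points between the fixed boundary points: either the straight chord in the ambient space or, for the unit-sphere setting used in the examples later on, the connecting great-circle arc (spherical linear interpolation). The boundary points and the resolution N are user inputs; the spherical version assumes the two points are neither equal nor antipodal.

```python
import numpy as np

def straight_line_init(x1, x2, N):
    """Discrete initial flow: 2N+1 equally spaced points on the chord x1 -> x2."""
    t = np.linspace(0.0, 1.0, 2 * N + 1)[:, None]
    return (1.0 - t) * x1 + t * x2

def great_circle_init(x1, x2, N):
    """Discrete initial flow along the unit-sphere geodesic (slerp).
    Assumes x1 != +/- x2 so the arc is well defined."""
    omega = np.arccos(np.clip(np.dot(x1, x2), -1.0, 1.0))   # angle between points
    t = np.linspace(0.0, 1.0, 2 * N + 1)[:, None]
    pts = (np.sin((1 - t) * omega) * x1 + np.sin(t * omega) * x2) / np.sin(omega)
    return pts / np.linalg.norm(pts, axis=1, keepdims=True)  # guard round-off
```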
The initial flow is denoted Then, the proposed approach will iteratively update the flowγ (k) (t i ) from k = 1 until the convergence criterion is met. Gradually, at each point, the flowγ (k) (t i ) is determined to maximize the localized variation of the data. Hence, user-defined values for the scale parameter h, shrinkage constant ρ and stopping criterion constant , are each needed during iterations. During the iterations, we update the discrete flowγ (k) (t i ), i = 0, 1, . . . , 2N , for k = 1, 2, . . . by maximizing the optimization problem (2.2). There are four core steps with this aim in mind: choosing scale parameter, calculating local covariance matrix, determining vector field, and updating. Here, we elaborate on each of these core steps, as shown in Figure 4. (1) Choosing scale parameter: we choose an appropriate scale parameter h (k) = ρ k h, where h ≤ 1 and ρ ∈ (0, 1] is a shrinkage constant. In our study, we let ρ = 0.9. One may note that the shrinkage constant ρ makes the scale parameter h (k) decrease during the iterations. Hence, the scale parameter h (k) guarantees the capture of the local variation. (2) Calculating local covariance matrix: the local covariance matrix is determined by using the discrete flowγ (k−1) that we have obtained from the previous iteration. Specifically, we use the pointsγ (k−1) (t 2j+1 ), j = 0, 1, . . . , N − 1, with odd indices to calculate the local covariance matrix. Determining the local covariance matrix is a vital step for the following updating step of the discrete flow. We note that the pointsγ (k−1) (t 2j+1 ), j = 0, 1, . . . , N − 1, may not lie inside the data cloud. To capture the local variation accurately, we propose to project these points back inside the data cloud. To this aim, we first project these points to the nearest data points. As the nearest data points might be outliers, we further select the local data points within the distance of h (k) from the nearest data points and obtain the mean points. Eventually, the projected pointsγ (k) proj (t 2j+1 ), j = 0, . . . , N − 1 are the nearest data points to the mean points. Then, the projected pointsγ (k) proj (t 2j+1 ) are used to select the local data points to further compute the local covariance matrix. Denote by {y l } n 2j+1,k l=1 the data points in the neighborhood of the Euclidean ball B d (γ (k) proj (t 2j+1 ), h (k) ) with centerγ (k) proj (t 2j+1 ) and radius h (k) . Eventually, the local covariance matrix is computed at the mean z (k) 2j+1 of the local data points {y l } n 2j+1,k l=1 and can be calculated by where a ⊗ a = aa T . It is crucial to ensure that the random fixed boundary flow always moves along the direction that maximizes the vector field. Therefore, a stop criterion is necessary to the implementation. According, we terminate the iteration process when the optimization function f (γ (k) ) does not change too much. Lastly, interpolation and projection will be implemented to ensure that the pointsγ(t i ) on the resulting random fixed boundary flow are equidistant and lie on the manifold. The detailed algorithm is summarized in Algorithm 1. The convergence of the random fixed boundary flow will be investigated in Section 3.2. Convergence of the random fixed boundary flow In the following statement, We use upper C, C 0 , C 1 , · · · or lower c, c 0 , c 1 , · · · to denote constants greater or less than 1. Here, a constant means a value independent of h, h (k) and x. Values of C and c with various subscripts may differ from line to line. 
Recalling our Assumption 3.1, samples are blurred by Gaussian noise. Hence, by Gaussian concentration, the maximal distance between a point x i and γ * is bounded above by σ( √ d + ln(n C )) with probability at least 1 − n −C . If σ is sufficiently small such that we can further bound the maximal distance between a point x i and γ * above by √ σ, with probability 1 − n −C , since the following holds This inequality above shows that the samples mainly lie in the tube Note that Step 3(a) of Algorithm 1 projects points to the data cloud by finding its nearest samples in X. Assumption 3.2 bounds the distance between the given point and the projected point above, which essentially leads to the convergence. Algorithm 1 selects decreasing scales h (k) = ρh (k−1) with a given ρ ∈ (0, 1] in each iteration, until the scale is less than 4 √ σ or the objective function hardly changes. Each iteration takes the output discrete flow of the previous iteration as input, updates the vector field with a smaller scale and outputs a discrete flow using the updated vector field. Theorems 3.2 -3.4 with full proofs in Appendix C in the Supplementary Materials, together prove that the random fixed boundary flow converges to the population flow γ * , given certain conditions of the initial discrete flow. Specifically, Theorem 3.2 exploits the k-th iteration and bounds d H (γ (k+1) , γ * ) above when (a) its input discrete flow is sufficiently close to γ * , (b) the points in the discrete flow are sufficiently dense, and (c) the points with odd indices are not too close to the two ends of the population flow. Note that (c) is needed since the vector field near the two ends does not follow the population flow. This means that the fixed boundariesx 1 andx 2 should be chosen not too close to the two ends, γ * (0) and γ * (r * ), in practice. Theorem 3.3 proves that imposing constraints on the initial discrete flow, that is the input discrete flow for k = 0, also leads to the upper bound of d H (γ (k+1) , γ * ). Theorem 3.4 proves the convergence of the random fixed boundary flow, as the projection ofγ (K) onto M. Theorem 3.2. Suppose the discrete curve at the k-th iteration satisfies the following conditions: For any given δ > 0, there exists C such that any point in the polylinẽ We only sketch the proof of Theorem 3.2. Recalling Algorithm 1, the polylineγ (k+1) is composed of segments passing {γ (k+1) (t 2j+1 )} N (k) −1 approximates γ * in the order of h (k) 2 . Hence, the polylineγ (k+1) which is composed of these segments is also within Hausdorff distance O(h (k) 2 ) to γ * . According to the stopping criteria, h (K) = O( √ σ) when Algorithm 1 stops. Hence, the final Note that the interpolation step generates a discrete curve containingγ (K) , and the projection step will not change the order of the Hausdorff distance as Theorem 3.4 has proved. To be precise, the final discrete curve of Algorithm 1 is located in a tube along the population curve γ * with a radius in order of √ σ. Simulations To illustrate the performance of random fixed boundary flows, we studied several random data sets generated on two manifolds, a unit sphere and a right-circular unit cone. The two manifolds are in R 3 with intrinsic dimension d = 2. In the simulation, the boundary points were selected manually from the given data set so that there are enough data points around the boundary points to calculate the local variation. 
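Both the convergence statements above and the simulation results reported below are phrased in terms of the Hausdorff distance between a fitted discrete flow and the population flow. A brute-force sketch of that distance for two finite point sets sampled from the curves is given here; it is the obvious pairwise version, not the authors' code.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Hausdorff distance between two point clouds A (n x d) and B (m x d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(),   # sup over points of A of distance to B
               D.min(axis=0).max())   # sup over points of B of distance to A
```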
To generate the random fixed boundary flows, we applied the proposed algorithm with different values of the scale parameter h. Here, we note that the random fixed boundary flow is a discrete curve with derivatives that approximately capture the direction of the maximum local variation depending on h. Throughout the numerical studies in sections 4 and 5, we use RFBFs to denote random fixed boundary flows. In the first part of the simulation, we evaluate the performance of the RFBFs on the unit sphere. The noisy data sets are randomly generated from three population flows, which are plotted in purple in Figure 5 (a)-(c). Specifically, Gaussian noise is added to the points on the population flows with a constraint such that the perturbed points remain on the test manifold. In this manner, we generated three noisy data sets, each representing different types of variation on the unit sphere. The first data set is concentrated around a "C"-shaped curve on the unit sphere, thus presenting a variation pattern along the geodesic. After that, we considered two data sets from two non-convex closed flows. In this setting, the simulated data sets present local variation patterns along the non-convex flows. In particular, the second data set is generated from a quarter of the six-fold star-shaped flow, and the third data set is concentrated around a half of the two-fold star-shaped flow. To obtain RFBFs, the initial flows used in our analysis are straight lines connectingx 1 andx 2 . One may use other initial flows, for example, the geodesic fromx 1 tox 2 . Given a set of randomly generated data, we obtained a RFBF with a specific h. For the data sets plotted in Figure 5 (a)-(c), the RFBFs obtained with a specific value of h are illustrated in red in Figure 5 (d)-(f). To further investigate the performance, we obtained ten sets of random data for each population flow. The RFBFs are then obtained with a sequence of h for the random data. An analysis of the mean errors for the Hausdorff distances between the population flow and RFBFs are summarized in Table 1. From the numerical results, we note that the RFBFs are able to capture the variation globally and locally. As we lower h, the performance accuracy of the RFBFs improves generally. On the other hand, overfitting may occur as we lower h gradually. For the two non-convex population flows in Figure 5 (b)-(c), we also generated noisy data sets from the whole closed flows. As boundary points are required to obtain RFBFs, we handled these noisy data sets parts by parts. For example, we fitted the noisy six-fold data set quarter by quarter and the noisy two-fold data set half by half. We specified the boundary points for each part of the whole data set and obtained the RFBFs with predetermined values of h. The obtained RFBFs are shown in red in Figure 6. To compare the performance accuracy, we further applied the level set methods in Liu et al. (2017) to the random data sets and plotted the obtained curves in blue in Figure 6. In contrast to the level set methods, the RFBFs are able to capture the local variation better, especially at the parts of the curves with high curvature. We also note that the level curve methods reach the locations outside the data cloud at some parts of the two-fold data. In the second part of the simulation study, the testing manifold is extended to a right-circular unit cone, with apex at (0, 0, 0), height H = 1 and radius R = 1. 
Three types of random data sets are generated to examine the performance of RFBFs on the right-circular unit cone. The first data set is concentrated around a band on the cone. For the second and third data sets, they are generated from a "C"-shaped and "S"-shaped population flows on the tested manifold. The RFBFs with a predetermined value of h are illustrated in red in Figure 7. As the data plotted shown, we observe that the RFBFs work well to capture different types of variations on the cone. Similarly, we fitted RFBFs with a sequence of h for ten randomly generated data sets. To examine the performance accuracy, the mean errors of the Hausdorff distances between the population flows and RFBFs are summarized in Table 1. As expected, the obtained RFBFs do indeed divine the variation accurately on the cone as we lower h. It becomes more challenging to capture the variation accurately for all three types of variation investigated when the variation pattern becomes more complicated. Seismological Data Here we explain the full analysis of the previously mentioned seismological events. The data set was sourced from the International Seismological Center (ISC) and features significant earthquakes (magnitude 5.5 in Richter scale and above, including continental events of magnitude 5.0) between (a) noisy six-fold data (b) noisy two-fold data Figure 6: Comparison of RFBFs and the level curve methods on the unit sphere. Black dots: data points; red dots: RFBFs; blue dots: curves obtained from the level curve methods. (a) noisy band data (b) noisy "C" shape data (c) noisy "S" shape data 1904 and 2015. The earthquake epicentre data is plotted in black in Figure 8. Before we fit the RFBF for the earthquake data, we first investigate the distribution of the first eigenvalue for the data. This is shown in Figure 8, from which we observe that the variation of the first eigenvalue among the earthquake epicentres along the distribution of the earthquakes is quite non-uniform. Furthermore, we also observe that the first eigenvalue changes with different values of h, which changes the determination of local variation. Hence, the analysis of seismological events is an example with a varying first eigenvalue and we will investigate the performance of RFBF for this case. We note that earthquakes tend to occur around the tectonic plate boundaries. As has been mentioned earlier, the shape of the plate boundaries shown in Figure 1(a) carries the global variation (from east to west, or north to south) and the localized variation along different plates. If we selectx 1 andx 2 around the Philippine Sea plate manually, we expect the RFBFs would move along the plate boundary and mirror the blue curve shown in Figure 1(c). At the same time, the movement of the RFBFs will also reflect the local variation pattern of the data, which is captured by h. In our analysis, we scaled the data onto the unit sphere and selected three different sets ofx 1 andx 2 along the Philippine Sea plate manually. Figure 9 illustrates the earthquake data on a flat world atlas with the three sets ofx 1 andx 2 , namely (a)-(c), (d)-(f) and (g)-(i). To visualise and compare the performance, we fit RFBFs using three values of h. As we expected, the RFBFs move along the boundary of the Philippine Sea plate and capture the variation between the given boundary points. Furthermore, we let h vary and visualize the RFBFs that reflect the various localized variation patterns. 
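One preprocessing detail worth making explicit: as noted above, the epicentres are scaled onto the unit sphere before fitting. A minimal sketch of that conversion, assuming the catalogue supplies longitude and latitude in degrees (the argument names are hypothetical), is:

```python
import numpy as np

def lonlat_to_unit_sphere(lon_deg, lat_deg):
    """Map longitude/latitude in degrees to points on the unit sphere S^2."""
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return np.column_stack((np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)))
```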
Given the boundary points, we note that the RFBFs work well in capturing the variation patterns of the data. As we lower h, the RFBFs uncover the global and local variation pattern more accurately. For example, when we set h = 0.075, the RFBFs in Figure 9 (a), (d) and (g) move inside the data cloud and trace the global variation from south to north better than the RFBFs in the other plots of Figure 9. When we gradually increase the value of h, more data points will be involved in the determination of the local variation and this also influences the trend of the RFBFs. In the last three plots of Figure 9, we select two sets of boundary points with opposite directions. Comparing the results in Figure 9 (a)-(c) and (g)-(i), we note that the direction of the boundary points does not inordinately affect the RFBFs with the same h. Labeled Faces in the Wild In this section, we consider another concrete case -Labeled Faces in the Wild (LFW) in Huang et al. (2007). The data set comprising face photographs is designed to provide a system of face recognition with over 13, 000 images of faces collected from the web. Each face image is labeled with the name of the person in that image. Note also that among those face images are 1, 680 people who have two or more distinct photographs in the data set. In our study, we downloaded 264 images of 66 people with four images of each person. To facilitate the analysis, the face region was cropped from the original image and resized to 50 × 37 pixels. The images of the face region for the 66 individuals can be found in the Supplementary Materials. As the analysis uses four different images for each individual, the data set can be written as {x i } 264 i=1 , where x i are vectors in the ambient space R 1850 . We assume that the data points {x i } 264 i=1 lie on the unit sphere S 1849 , which is embedded in R 1850 . To begin with, we chose two images with the largest distance from one another in the ambient space, setting them asx 1 andx 2 . As shown in Figure 10, the image of Andy Roddick in Figure 10(a) is the starting image, and the image of Jack Straw in Figure 10(p) is the ending image in our analysis. Then, we fit RFBFs with various values of h. The obtained RFBFs are discrete flows of face images which capture the variation of facial structure from the starting image to the ending image. With the exception of the boundary images on the RFBFs, we generated a sequence of fake faces, which are plotted in Figure 10 (b)-(o). The person plotted in each fake face image is not a real person that can be identified in the given image set. On the contrary, the person is constructed using the characteristics extracted from the local and global variation pattern of the given images. The intermediate face images on the RFBF reflect the progressive face changing from the starting image to the ending image. There are some noteworthy conclusions that we draw from the RFBFs. First, the skin tone of Andy Roddick shown in the starting image of Figure 10 (a) appears somewhat wheatish, while Jack Straw's face, plotted in the ending image of Figure 10 (p), possesses a light skin tone. Through the fake faces constructed on the RFBF, we are able to observe the gradual changes of skin tone from dark to light. Second, we note that the Andy Roddick dons a cap in the starting image and Jack Straw's hairstyle features a fringe in the ending image. For the first few images in Figure 10 (b)-(f), the fake faces on the RFBF are also wearing caps. 
In the last few images plotted in Figure 10 (m)-(o), the fake faces of the RFBF have fringes. Hence, we are also able to monitor the change of hairstyle through the intermediate fake faces on the RFBF. Although RFBFs are able to reveal some progressive face changes, the characteristics captured by the RFBFs are onedimensional. Hence, the variation pattern analyzed by the RFBFs is limited when we are dealing with high-dimensional data with large m values. We will consider the extension of RFBFs in the future. Fixed Boundary Flow for Non-random Data in Euclidean Space The aim of this section is to prove that fixed boundary flows for non-random data are canonical, in the sense that they will pass through the usual principal component, in the context of Euclidean spaces. Hereafter, we suppose M is a linear subspace of R d , and h = ∞, which implies Under this configuration, we will figure out the supremum of L(W, γ) defined in (2.2) subjected to the constraint γ ∈ Γ(x 1 ,x 2 ) defined in (2.1). Proposition 6.1 analyzes the optima of (2.2) under a strict condition that γ * (t) =x 2 with t = C∆. If the condition is relaxed to be t ≤ C∆, things are more difficult. For further analysis, we suppose the original point of M to bex = n i=1 x i , and [v 1 , · · · , v d ] to be the basis with v 1 = W (x). For convenience, we denote z i = v T i x to be the i-th coordinate of any z ∈ M and Before giving our final proposition, we define some important sets and curves first. With representing Hadamard multiplication, we denote a subset of Γ(x 1 ,x 2 ), where the curves have the same direction with W , The red curves in Figure 11 (a) demonstrate flows satisfyingγ(t) W (γ(t)) ≥ 0, that is, the curves have the same direction as W . Denote p 1 = v 1 v T 1x 1 and p 2 = v 1 v T 1x 2 as the projections ofx 1 andx 2 , respectively, onto the first axis. And Γ + (x 1 , p 1 , v 1 )(Γ + (x 2 , p 2 , v 1 )) as the set of the curves fromx 1 (x 2 ) to p 1 (p 2 ), orthogonal to v 1 and satisfyingγ(t) W (γ(t)) ≥ 0. We set And we also setγ : [0, p 1 − p 2 ] → M as the straight line between p 1 and p 2 , that isγ(t) = p 1 + t p 2 −p 1 (p 2 − p 1 ). Let γ s be the concatenation ofγ 1 ,γ andγ 2 , that is γ s : [0, (γ 1 ) + (γ) + (γ 2 )] → M satisfying then γ s is continuous and in the closure of Γ + (x 1 ,x 2 ) by Proposition 4.1 in the Supplementary Materials. The yellow curve in Figure 11 (a) demonstrates γ s . In Figure 11 (a), we use the blue arrows to demonstrate an example of the vector field satisfying Assumption 6.1. Generally speaking, this refers to the arrows at the left half plane pointing towardsx and arrows at the right half plane pointing in the opposite direction. Moreover, the arrows straighten horizontally as they approach the second axis. We summarize the assumptions on the vector field in Assumption 6.1 (b) and (c). Assumption 6.1. Assumption 6.1 is not strict. Figure 12 illustrates (b) and (c) of Assumption 6.1 with two data sets, as represented by black points that are concentrated around a "C"-shaped curve and an "S"-shaped curve in R 2 . The two diagrams in the left-hand panel show the vector fields for the two data sets, both of which satisfy Assumption 6.1(b), while the diagrams in the right-hand panel show how |v T 2 W (x)| varies at different points of x. Specifically, |v T 2 W | gets larger when the color transitions to yellow, and smaller when the color transitions to blue. 
One can conclude from the two diagrams in the right panel that the vector field between the two orange lines satisfies Assumption 6.1(c). Figure 12: Vector field (left) and |v T 2 W | (right) at different points. We can now set out sthe second proposition, which is under the general condition γ(t) =x 2 with t ≤ C∆. This proposition shows that if we restrict γ in Γ + (x 1 ,x 2 ), the fixed boundary flow will pass through the usual principal component. The proof of Proposition 6.2 can be found in Appendix D in the Supplementary Materials. Also in Appendix D in the Supplementary Materials, we further explain the inequality Combining the inequality in Proposition 6.2 and the inequality (7.6), we conclude that the optimal solution of (2.2) always passes through the usual principal component. The scheme to show the inequality (7.6) is organized as follows. As shown in Figure 11(b) and (c), we construct γ + (the red curve) by any γ (the blue curve), and illustrate L(W, γ s ) ≥ L(W, γ). In particular, if the dimension of the space is 2, the comparison between L(W, γ s ) and L(W, γ) can be achieved by calculating the integration over the gray area using Green's Theorem. Discussion The determination of a fixed boundary flow for data points on non-linear manifolds is a very different problem from the case of principal flow. We propose the notion of a fixed boundary flow to define a curve with fixed starting and ending points and a tangent velocity that matches the maximal variation of data in its neighborhood at each point. The local geometry of data variation is represented by the tangent space at the given point, which compels us to use the local vector fields. Based on this choice, we formulate an optimization framework to construct a smooth curve on the manifold, with a tangent vector that always matches the local vector fields. There is no doubt that the solution to the optimization problem, and equivalently, the fixed boundary flow, depends on how a neighborhood is defined at a certain point, just as with principal flow. The choice of the neighborhood depending on the scale parameter h determines how local or global covariation features are captured by the fixed boundary flow. Algorithm 1 provides a way to select a series of decreasing h (k) till ρh (k) ≤ 4 √ σ, which obliges us to focus on the global trend of the curve first and the local second. Using this algorithm, we generate a curve represented bỹ γ. We discuss below the construction of a "confidence band" for the resulting fixed boundary flow γ ∈ M. As we define the confidence band for the flow on the manifold, it should be a confidence ellipsoid. Note that the samples in B d (x, h) roughly lie within an ellipsoid with principal axes of length h, λm √ λ 1 h, respectively. Thus, we use the formulation of {λ i } m i=1 to construct the "confidence band". Specifically, for any point x =γ(t) on the computed fixed boundary flowγ, we can define an ellipsoid of dimension (m − 1) in the intersection of T x M and the normal space at x, which could cover most samples in this intersection. By allowing the orthonormal U (x) ∈ R d×m be a basis of T x M, the confidence ellipsoid is of dimension (m − 1) obeying Note that U (x) can be estimated with certain theoretical guarantees (see Tyagi et al. (2013)). We remark thatγ(t) usually approximates W (γ(t)), that is, e 1 . This makes V full column rank, and consequently, the dimension of the ellipsoid is (m − 1). 
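To make the construction in this section tangible, the sketch below assembles the three-segment candidate curve γ s for a Euclidean data set with h = ∞: both boundary points are projected onto the first principal axis through the sample mean and joined by a segment running along that axis. It only illustrates the geometry of the candidate curve; the optimality argument is the content of Proposition 6.2 and is not reproduced here.

```python
import numpy as np

def gamma_s(X, x1, x2, pts_per_segment=50):
    """Three-segment curve x1 -> p1 -> p2 -> x2, where p1 and p2 are the
    projections of the boundary points onto the first principal axis
    through the sample mean (Euclidean case, h = infinity)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    v1 = Vt[0]                                   # first principal direction
    p1 = mean + np.dot(x1 - mean, v1) * v1       # projection of x1
    p2 = mean + np.dot(x2 - mean, v1) * v1       # projection of x2
    t = np.linspace(0.0, 1.0, pts_per_segment)[:, None]
    seg = lambda a, b: (1 - t) * a + t * b       # straight segment a -> b
    return np.vstack([seg(x1, p1), seg(p1, p2), seg(p2, x2)])
```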
Ifγ(t) is happened to be orthogonal to W (γ(t)), the dimension of the ellipsoid would reduce to (m − 2). With certain covering ellipsoid conditions for the samples in the neighborhood, one might consider boundingγ andγ under the current setting. Some of the results in Yao and Zhang (2020) will be helpful in this respect. As this is part of our ongoing work, we intend to further investigate it in the future. In this section, we will introduce some preliminaries in Riemannian geometry and review the principal flows. We focus on studying a complete Riemannian manifold M of dimension m, equipped with a metric g. The smooth Riemannian manifold M can be isometrically embedded into the Euclidean space (R d , · ), m < d. Assuming that the embedding is known, there exists a known differentiable function F : R d → R m , and we have The Riemannian metric g(·, ·) on the Riemannian manifold M is induced by an inner product ·, · defined in the tangent space T x M at each point x ∈ M, and the tangent space T x M is denoted by where DF is the m × d derivative matrix of F evaluated at x. Since the tangent space T x M is able to locally approximate the manifold, we define two mappings between the tangent space and the manifold. The exponential map at x takes a tangent vector v ∈ T x M denoted by and there exists a unique geodesic γ v satisfying γ v (0) = x with initial velocityγ v (0) = v/ v . Therefore, the exponential map is locally defined by exp The inverse of the exponential map, the logarithm map, is denoted by The tangent components e 1 (x), . . . , e m (x) atx form a basis for the tangent space TxM, and they are given by the first r eigenvectors of the scale h local tangent covariance matrix When h = ∞, and M = R d , the local tangent covariance matrix reduced to be Noting W (x) is unit length, W (x), dW (x) = 0, we could calculate the derivation of λ 1 (x) as follows: Appendix B: Proof of Theorem 3.1 In subsequent proof, we need a special case of Theorem 8 by Yao and Xia (2019), where the manifold degenerates into a curve, that is, the dimension of the manifold is 1. We state this special case of Theorem 8 below, where we use h instead of r as the scale, to ensure the same as the symbol in this paper. Theorem 7.1 (Slight deformation of Theorem 8 by Yao and Xia (2019)). Let z be a point off a curve γ, z * be the projection of z onto γ, d(z, γ * ) ≤ h. We have where Π * z * denotes the orthogonal projection onto the normal space of z * and Π z = v ⊥ v T ⊥ . Here V ⊥ is the orthogonal component of v and v is the first eigenvector of Σ h (z). To bound the summation of some power of ξ i above, we need Proposition 2.3 by Yao and Xia (2019) as follows. Proposition 7.1 (Proposition 2.3 by Yao and Xia (2019)). Suppose ξ ∼ N (0, σ 2 I d ); then we have, for any positive integer k: (4) ξ i k 2 and ξ j k 2 are independent if ξ i and ξ j are independent, where C 1 , C 2 , and C 3 are three constants depending on d and k. Based on the above proposition, we obtain the upper bound of the summation of each ξ i k for points lying in a tube surrounding γ * . Proposition 7.2. For a given δ, there exists C n such that if n ≥ C n √ σ, then Proof. Noticing {ξ i } are i.i.d. samples drawn from Gaussian distribution, we can obtain the expectation µ k = C 1 σ k , variance σ 2 k = C 2 σ 2k and the third moment ρ k = C 3 σ 3k of ξ i k according to Proposition 7.1. By Berry-Esseen Theorem, the cumulative distribution of where Φ is the cumulative distribution function of standard normal distribution. 
So, there exists C depending on d, k and δ such that with probability at least 1 − δ/3 − C / |I(x, h)|. To estimate |I(x, h)|, we calculate the probability of i ∈ I(x, h) based on h > 4 √ σ, . Proof of Theorem 3.1. For any t ∈ T , plugging z = γ * (t) into Theorem 7.1, we have where the second inequality holds by Proposition 7.2 with probability 1−δ, and the last inequality holds since h > 4 √ σ. Let u =γ * (t), the tangent vector of γ * at γ * (t), and v = W (γ * (t)), the first eigenvector of Σ h (γ * (t)). By the definition of Π z and Π * z * , we have Π 3) in the main manuscript and To get a tight bound on d(z, γ * ), we denote Π * z * to be the orthogonal projection onto the normal space of γ * at z * and obtain where the second term of the last inequality follows Theorem 4.18 by Federer (1959). Noting and 1 |I(x,h)| i∈I(x,h) ξ i ≤ Cσ with high probability by Proposition 7.2, we could bound which completes the proof. Lemma 7.2. Let z be a point off γ * , z * be the projection of z onto γ * , and v be the tangent vector of γ * at z * . If d(z, γ * ) ≤ C 1 h 2 and z − γ * (t) > h/2 for t = 0, 1, then for any given δ, there exists C such that vv T − e 1 (z)e 1 (z) T ≤ Ch with probability 1 − δ. Proposition 7.3. Suppose u and v are normal vectors, then uu T − vv T = √ 2 (I − vv T )u . Proof. To prove this proposition, we calculate uu T − vv T 2 and (I − vv T )u 2 respectively as per uu T − vv T 2 = uu T , uu T + vv T , vv T − 2 uu T , vv T = 2 − 2 uu T , vv T , and (I − vv T )u 2 = (I − vv T )u, (I − vv T )u = I, uu T − uu T , vv T = 1 − uu T , vv T , which complete the proof. Proof. Let a = γ * (t 0 ) and ∆t = b − a, γ * (t 0 ) . By Taylor's expansion, Proof. This proof is conducted in two steps: First, we show that there isx ∈ T z (k) To begin with, where vv T − e 1 (z (k) i )e 1 (z (k) i ) T ≤ Ch (k) by Lemma 7.2. Denote the projection of x onto T z (k) i * γ * to bex, then Taking a = z (k) i * and b =x in Proposition 7.4, we have d(x, γ * ) ≤ Ch (k) 2 , which completes the proof. then the three conditions of Lemma 7.3 hold with probability (1 − δ) k for any k ≥ 1. Proof of Theorem 3.3. By Proposition 7.7, the three conditions of Lemma 7.3 hold with probability (1 − δ) k for any k ≥ 1. When the conditions hold, we have d H γ (k+1) , γ * = O(h (k) 2 ), with probability 1 − δ by Theorem 3.2. Hence, Proof of Theorem 3.4. Since γ * ⊂ M, we have d(x, M) ≤ d(x, γ * ). Using this inequality, we could obtain the following inequalities: Appendix D: Data set of Labelled Faces in the Wild in Section 5.2 In our study, we downloaded 264 images of 66 people with four images of each person. The images of the face region for the 66 individuals are shown in Figure 13. Appendix E: Proof of Proposition 6.2 Proposition 7.8. If (γ 1 ) + (γ) + (γ 2 ) ≤ C, then γ s belongs to the closure Γ + (x 1 ,x 2 ). In Figure 14, we display the cross sectional area of M along the first and i-th axis for i ≥ 2. In the left panel, the blue curve is γ and the red curve is γ + . Without loss of generality, we focus on v T ix 1 ≤ 0 and t ≤ t 0 . The other three cases in (7.7) can be similarly verified. First, we compare the integrals over the orange curve C 1 and the yellow curve C 2 in Figure 14. Then the integral on C 1 denoted by I 1 is and I 2 = C 2 v T i W (z)dz i . Then, I 1 − I 2 is the integral of v T i W (z) over the closed anticlockwise curve consisting of C 1 and the inverse of C 2 . 
When d = 2, such integral is equal to an integral over the gray region denoted by D shown in the right panel of Figure 14 by Green Theorem, that is, ≤ 0 for z 1 ≤ 0 based on Assumption 6.1(c). For i ≥ 2, if ∂v T i W (z) ∂z j ≥ 0 holds for any j > i and ∂v T i W (z) ∂z j < 0 holds for any j < i, the conclusion can be extended to a higher dimension by the Stokes' theorem. Second, we compare the integrals over the purple and pink curve in Figure 14. By Assumption 6.1 (b), the integral of v T i W over the purple curve is negative, while the integral over the pink curve is zero. So, the integral of v T i W over the purple curve is less than the pink curve. The above discussion summarizes · v T i W (γ + (t))dt, for any i ≥ 2. Moreover, where the last inequality can be verified by similar proof of Proposition 6.2. Implementing the above discussion for t ≥ t 0 analogically, we also have Along with (7.5) we conclude ≤ L(W,γ) + L(W,γ) + L(W,γ) = sup γ∈Γ + (x 1 ,x 2 ) L(W, γ), which supports the inequality (7.6).
2019-04-24T13:55:47.000Z
2019-04-24T00:00:00.000
{ "year": 2019, "sha1": "355bf0c0d14c98752cfe8e3b3cfb9966a4173b5f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "355bf0c0d14c98752cfe8e3b3cfb9966a4173b5f", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
231906038
pes2o/s2orc
v3-fos-license
Association of daily step count and serum testosterone among men in the United States Purpose To describe the association between daily activity (i.e., daily step counts and accelerometer intensity measures) and serum TT levels in a representative sample of US adults aged 18 years or older. Methods A retrospective cohort study was carried out utilizing the NHANES (National Health and Nutrition Examination Survey) 2003–2004 cycle. Physical activity was measured with a waist-worn uniaxial accelerometer (AM-7164; ActiGraph) for up to 7 days using a standardized protocol. Using linear and multivariable logistic regression controlling for relevant social, demographic, lifestyle, and comorbidity characteristics, we assessed the association between daily step counts and TT. Results A total of 279 subjects with a median age 46 (IQR: 33–56) were included in the analysis. 23.3% of the cohort had a low serum TT level (TT < 350 ng/dl). Compared to men who took <4000 steps per day, men who took >4000 or >8000 steps/day had a lower odd of being hypogonadal (OR 0.14, 95% CI: 0.07–0.49 and 0.08, 95%CI: 0.02–0.44, respectively). While a threshold effect was noted on average, TT increased 7 ng/dL for each additional 1000 steps taken daily (β-estimate: 0.007, 95% CI: 0.002–0.013). Conclusions Patients with the lowest daily step counts had higher odds of being hypogonadal. The current work supports a possible association between daily steps, total testosterone, and hypogonadism for men in the US. Introduction Testosterone is necessary for normal male development and function. Abnormal testosterone levels have been associated with changes in male body muscle mass and fat distribution as well as bone metabolism and energy levels [1]. Male hypogonadism comprises of both persistent-specific symptoms and biochemical evidence of testosterone deficiency [2]. Hypogonadism becomes more prevalent with age. In particular, the EMAS study reported a 0.4% per annum decrease in total testosterone (TT) and a 1.3% per annum decline in free testosterone (fT) [3]. In addition, there is a high prevalence of hypogonadism within specific populations, including patients with type 2 diabetes, metabolic syndrome, obesity, and low performance status [4,5]. Given the association between serum testosterone and health, it is reasonable to assume that physical activity (PA) is also associated with TT [6]. PA is an important source of physical, psychological, cognitive, and social health benefit for all age groups, including the prevention of muscle-skeletal fragility events, which may ultimately lead to long-term pain, loss of function, and higher mortality rates in the elderly [7]. This relationship is evident in prostate cancer (PCa) survivors treated with androgen deprivation therapy (ADT) who are at higher risk for muscle mass and muscular strength decrease, loss of bone density as well as increasing in body weight and fat mass [8]. However, previous studies from the National Health and Nutrition Examination Survey (NHANES) [9,10] were unable to detect any association between overall circulating TT levels and the amount of PA as assessed by survey responses. However, self-reports of type, duration, and intensity of PA, may be affected by recall and social desirability bias as well as by individual perception of PA intensity [11,12]. Objective measures of PA through activity monitors may provide more reliable measures. 
The purpose of the current study was to describe the association between daily activity (i.e., accelerometer intensity measures) and serum TT, fT, and bioavailable T levels in a representative sample of US adults aged 18 years or older. Study population Data from the NHANES cycle 2003-2004 were analyzed (available from: https://wwwn.cdc.gov/nchs/nhanes/2003-2004/PAXRAW_C.htm). NHANES consists of a sample of noninstitutionalized US civilians, selected using a multistage probability sampling design that considers geographical area and minority representation, via a cross-sectional survey conducted by the National Center for Health Statistics (NCHS) of the Centers for Disease Control and Prevention (available from https://wwwn.cdc.gov/nchs/nhanes/analyticguidelines.aspx). Sample weights are generated to create nationally representative estimates for the US population and subgroups defined by age, sex, and race/ethnicity [13]. As the analysis utilized deidentified data with no direct participant contact, it is not considered to be human subjects research and consequently does not require institutional review board approval. Demographic information (i.e., age, ethnicity, education), health behaviors (smoking, BMI) and concomitant comorbidities were also collected. The main inclusion criteria were participants aged 18 years or older with available serum samples in the repository who also wore an ActiGraph model 7164 accelerometer on the hip during waking hours for a 7-day period with at least 1 day of valid wear (i.e., ≥10 h/d). Following an overnight fast, serum samples were drawn from the men between 8:30 and 11:30 a.m., and testosterone concentrations were then determined using a competitive electrochemiluminescence immunoassay on the 2010 Elecsys autoanalyzer (Roche Diagnostics, Indianapolis, IN, USA), with the lowest detection limit of the assay being 0.02 ng/mL. All sex steroid hormones from the present NHANES cycle were assayed at Boston Children's Hospital (Boston, MA, USA) by laboratory technicians blinded to participant characteristics. The details of the NHANES laboratory methodology for testosterone determination are available from: https://wwwn.cdc.gov/nchs/nhanes/2003-2004/SSCHL_C.htm. fT and bioavailable T were then separately calculated from the serum values of sex hormone-binding globulin and albumin in accordance with the formula described by Vermeulen et al. [14], available from http://www.issam.ch/freetesto.htm. Men were excluded from the analysis if they had a medical history of PCa (as they may have been treated with hormone ablation therapy), reported limitations to engaging in PA (i.e., unable to walk without an assistive device), or were missing information on testosterone/accelerometer data or on covariates of interest. Of 2922 men aged at least 18 years old at the time of the survey, 2052 had valid accelerometer data. A total of 386 of the 2052 men with accelerometer data also had available serum TT information. Of these, 107 (27.7%) were excluded due to missing data, yielding a final sample size of 279 subjects included in the analysis. Accelerometer-measured PA PA was measured with a waist-worn uniaxial accelerometer (AM-7164; ActiGraph LLC; Ft. Walton Beach, FL) for up to 7 days using a standardized protocol [13] within the same data acquisition time frame for each NHANES participant.
The signals are then filtered and digitized by converters in the device and summed over a user-specified period of time (epoch) to provide activity counts per epoch, commonly expressed as count per minute or per day (CPD). Data were initially screened for non-wear time using a previously developed algorithm for NHANES accelerometer data [13]. Days with fewer than 10 h of wear time were excluded and participants with at least 1 valid day of accelerometer data were included in the analysis. Pax intensity assessment The physical activity monitors used in NHANES collected objective information on the intensity and duration of common locomotion activities such as walking, and jogging defined by the "PAXINTEN" variable, which correspond to the sequential observation number in minutes as recorded by the monitor device intensity value (available from: https://wwwn.cdc.gov/nchs/nhanes/2003-2004/PAXRAW_ C.htm). Each day of wear produces 1440 individual minute records up to the last minute of day 7. Pax intensity values were classified into weighted quartiles and modeled upon TT serum levels and the available covariates information. The 25th percentile (~<155078.6 CPD) was used as the reference [13]. Multiple imputation method for daily steps count determination Although activity count data from the 2003-2004 NHANES cycle has been publicly available, step count data from this cycle were not released because of missing data. Several research teams previously addressed this issue by utilizing a semi-parametric multiple imputation method, whereby bootstrapping and ordinary least squares regression provided accurate values for missing data [15,16]. This imputation method uses demographic, questionnaire, laboratory and accelerometer data on participants who are not missing steps values, to predict step values for individuals who are missing daily step values. Prior literature has validated the imputation [15]. Ordinary least squares regression on newly imputed dataset was compared to those of the pre-imputation dataset, and similar estimates of linear and/or logistic regression analysis were found between both datasets. Statistical analysis Following the recommended guidelines from the NCHS (Centers for Disease Control and Prevention 2012a), all the analyses were performed with appropriate weights for the complex survey sampling method of NHANES data. Student's t test and chi-squared statistic were used to assess differences among means and proportions between continuous variables and subgroups, respectively. To explore the relationship concerning step CPD or pax intensity CPD, we examined the odds of impaired TT levels (i.e., <350 ng/ dl), fT (<6.5 ng/dl), and bioavailable T (<110 ng/dl) in jointly classified categories with low step counts (<4000 steps/day) and low pax intensity (lower quartile) as reference group. Linear regression was used to estimate and compare variation of serum TT concentrations across cumulative recorded steps and pax intensity measures. Oneway ANOVA on ranks (Kruskal-Wallis test) was used to test the differences concerning continuous TT levels between groups categories (steps categories and pax intensity quartiles). Multivariable logistic regression was used to estimate the odds of low TT, fT, and Bioavailable T for varying step and pax intensity levels. Sensitivity analyses were performed using different TT thresholds and associations with step counts. 
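The free/bioavailable testosterone derivation and the weighted regression described above can be roughly sketched as follows. This is a minimal illustration rather than the authors' code: the input file, column names, and reduced covariate set are hypothetical, the Vermeulen association constants are the commonly cited values (which may differ slightly from those used by the issam.ch calculator), and the survey weights are treated as simple frequency weights rather than with full design-based variance estimation.

```python
# Illustrative sketch (not the authors' code): derive free T with the Vermeulen
# formula, flag low TT, and fit a weighted logistic model on daily step category.
import numpy as np
import pandas as pd
import statsmodels.api as sm

K_ALB = 3.6e4   # association constant of albumin for T (L/mol), commonly cited value
K_SHBG = 1.0e9  # association constant of SHBG for T (L/mol), commonly cited value

def vermeulen_free_t(tt_nmol_l, shbg_nmol_l, albumin_g_l):
    """Free T (nmol/L) from total T, SHBG and albumin via the Vermeulen quadratic.
    Bioavailable T is the free plus albumin-bound fraction (free T times N)."""
    n = 1.0 + K_ALB * (albumin_g_l / 69000.0)            # albumin g/L -> mol/L (MW ~69 kDa)
    a = n * K_SHBG
    b = n + K_SHBG * (shbg_nmol_l - tt_nmol_l) * 1e-9     # nmol/L -> mol/L
    c = -tt_nmol_l * 1e-9
    ft_mol_l = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
    return ft_mol_l * 1e9                                  # back to nmol/L

df = pd.read_csv("nhanes_2003_2004_merged.csv")            # hypothetical merged analysis file
df["ft_nmol_l"] = vermeulen_free_t(df["tt_nmol_l"], df["shbg_nmol_l"], df["albumin_g_l"])
df["low_tt"] = (df["tt_ng_dl"] < 350).astype(int)          # hypogonadal threshold used here
df["step_cat"] = pd.cut(df["steps_per_day"], bins=[0, 4000, 8000, np.inf],
                        labels=["<4000", "4000-8000", ">8000"])

# Weighted logistic regression of low TT on step category plus a reduced covariate set.
X = pd.get_dummies(df[["step_cat", "age", "bmi"]], drop_first=True).astype(float)
X = sm.add_constant(X)
fit = sm.GLM(df["low_tt"], X, family=sm.families.Binomial(),
             freq_weights=df["survey_weight"]).fit()
print(np.exp(fit.params))  # odds ratios for the step categories and covariates
```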
The TT threshold was chosen from the established values at which several international American/European andrological and/or endocrinological societies recommend that men consider hormone replacement therapy, and it is similar to that used in prior analyses [10]. A locally weighted scatterplot smoother (LOWESS) function was used to graphically depict the relationship between continuous step/pax intensity counts and TT, fT, and bioavailable T. The analysis was adjusted for covariates selected based on previous investigations, including age, BMI, race/ethnicity (non-Hispanic white, Mexican American, non-Hispanic black), education level (less than high school, high school, greater than high school), and smoking status (current vs. never). Moreover, the model was adjusted for all comorbidity covariates known to influence TT homeostasis (alcohol consumption, diabetes, hypercholesterolemia, hypertension, cancer) as well as participants' mobility limitations (stroke, coronary artery disease, heart failure). Data analysis was performed using SAS v.9.2 (Cary, NC, USA), with p values < 0.05 considered statistically significant. Results A total of 279 men had complete data regarding the accelerometer, serum TT, and covariates of interest. Overall, 214 (76.7%) men had normal TT levels and 65 men had low TT levels. Men presenting with normal TT levels (i.e., >350 ng/dl; mean 581.1 ± 182 vs. 248 ± 75) were significantly younger (p = 0.045) and had a lower mean BMI (p = 0.006) compared to men with lower TT levels. No significant or clinically relevant differences were identified with regard to the distribution of other sociodemographic characteristics or comorbidities (Table 1). Participants took a mean of 8702.2 (SD: 4337.2) steps per day, while the mean pax intensity value was 249,848.5 (SD: 139,941.1). TT, fT, and bioavailable T levels were consistently different across the step categories (p = 0.012, 0.001, and 0.005, respectively) and pax intensity quartiles (p = 0.026, 0.000, and 0.146, respectively) (Table 2). Daily pax intensity and TT levels Findings similar to those for the daily step count data were confirmed when assessing the association between pax intensity and TT levels (Supplementary Fig. 1). A positive association between TT levels and pax intensity was seen (p = 0.001). On multivariable logistic regression, we confirmed lower odds of low TT, fT, and bioavailable T with increasing pax intensity levels. The LOWESS function showed an analogous trajectory, revealing a reduced probability of low TT, fT, and bioavailable T with cumulative increases in pax intensity values per day (Supplementary Figs. 2 and 3a, b, respectively). Discussion In the present cross-sectional survey, we observed a positive association between daily step count and TT levels. Moreover, as daily steps increase, the odds of hypogonadism decline across a range of serum TT hypogonadism thresholds. Men with more than 4000 steps per day had significantly lower odds of having low TT levels. To our knowledge, this is the first study to report an association between daily step count and serum testosterone levels. The interaction between various measures of PA and testosterone has been studied before. Previous studies have addressed this relationship relying on self-reports of PA intensity and/or overall duration. Muller et al. [17] demonstrated that greater TT levels were associated with higher PA in subjects aged 40-80 years. However, PA was assessed over the year prior to the survey and was therefore susceptible to recall bias.
Evidence suggests that the increase in circulating testosterone levels only occurs within a short period of time from the onset of exercise and returns to baseline thereafter [18]. Moreover, the acute exercise-induced boost in TT has a smaller magnitude of increase and a relatively shorter time benefit [19]. [Table 2. Steps and pax intensity count per day (CPD) according to normal, normal-low, and low total testosterone values within the study population.] [Fig. 1. Multivariable-adjusted one-way ANOVA on ranks assessing differences in continuous total testosterone levels between step count per day (CPD) categories.] While self-report is the most cost-effective and simple method to measure PA [11,20] and can provide estimates of the type, duration, and intensity of exercise in population-based studies, the differing questionnaires adopted and activity definitions often make it difficult to compare studies. For example, a cross-sectional analysis of 696 men from the European Prospective Investigation into Cancer and Nutrition study reported that high levels of vigorous exercise (i.e., 3 or more hours/week) were associated with an 11% higher testosterone concentration compared with those who reported no vigorous exercise [21]. However, no association was observed between total recreational exercise time (<7.5, 7.5-14, or >14 h per week) and serum TT levels. In contrast, two studies examined the NHANES dataset with regard to sex steroid hormone levels and PA [10,22]. The NHANES cycles analyzed were 1988-1991, 1991-1994, and 1999-2004, and the authors reported that men in the highest category of frequency of PA had higher concentrations of TT, consistent with the current report. Of note, Shiels et al. [22] found that the participants with the highest PA per week had higher mean levels of TT, although participating in vigorous exercise was not associated with higher TT levels. In contrast, Steeves et al. [10] noted an association between PA and TT only in those expending the highest level of energy and who were non-obese. The etiology is uncertain, but some investigators have hypothesized mechanisms of action involving increased caloric expenditure [23,24]. For example, in vitro and in vivo evidence suggests a possible role for both the HPA axis and a direct effect of exercise on the regulation of body mass composition. In a rabbit model of metabolic syndrome-associated hypogonadotropic hypogonadism, the majority of the steroidogenic enzymes leading to T synthesis were downregulated. Interestingly, the increases in genes related to inflammation, estrogen signaling, and glucose metabolism observed in metabolic syndrome were significantly reduced after the rabbits were exercise-trained to run on a treadmill for a 12-week period. In this model, Corona et al. [24] demonstrated that the expression of the Kiss1 gene and its receptor (Kiss1R) (neurotransmitters regulating GnRH secretion), the decreased orexigenic and GnRH-inhibiting factors (dynorphin and its receptors OPRD1 and OPRK1), and the increased anorexigenic factors (proopiomelanocortin) were significantly restored in the exercise-trained animals. Furthermore, studies of professional athletes have historically shown how physical effort directly damages muscle fibers, leading to a subsequent increase in the concentrations of anabolic hormones such as insulin-like growth factor-1 (IGF-I) and growth hormone (GH) [25].
Moreover, physical expenditure has been found to significantly change the cortisol/fT ratio, which can impact the secretion of GH and IGF-I from the liver and, ultimately, fat-free body mass composition by blocking E2-mediated negative feedback on the HPA axis, thus increasing luteinizing hormone (LH) secretion and ultimately testosterone production [26]. This can also be empirically observed in otherwise healthy hypogonadal subjects and/or individuals with induced testosterone deficiency (ADT for PCa or administration of GnRH agonists), who present overall lower muscle volume, strength and function, higher fat mass, and a higher incidence of insulin resistance when compared to age-matched eugonadal men [27,28]. In addition, there is a relevant body of evidence suggesting that testosterone therapy preserves muscle strength and power in aging men. Magnussen et al. [29] found that testosterone administration improved muscle mechanical and physical function, in addition to increasing lean leg mass and total lean body mass, in men aged 50-70 years with type 2 diabetes and bioavailable T levels <7.3 nmol/L, thus corroborating the importance of androgen homeostasis in men's health. Certain limitations warrant mention. First, only a limited number of NHANES respondents had available data for analysis. Nevertheless, we did identify a significant association between daily step count and TT. Importantly, the NHANES data reported in the present study must be considered observational, and no causal inferences may be drawn. Moreover, the information concerning total step counts was extrapolated through a multiple imputation strategy, given that the original information was not released due to missing data in this specific 2003/04 cycle. However, the same technique has previously been reported by Saint-Maurice et al. in their analysis of daily step counts and overall mortality within NHANES [16]. In addition, TT samples were not measured with liquid chromatography tandem mass spectrometry, which is considered the gold standard for testosterone determination; moreover, insufficient data were available on the medications taken by individual participants, precluding further adjustment of our model for specific drugs such as morphine or oral glucocorticoids. Finally, we were unable to determine each man's "valid wearing days" for the accelerometer or to extrapolate the relative influence of sports activities on the overall step CPD, which may limit the interpretation. The relative influence of those participants with only 1 or 2 "valid wearing days", or of those actively involved in sports activities, might have affected the contribution of the accelerometer-measured daily step count by over- or underestimating the overall effect sizes described in the present study. Conclusion Our present analysis is the first study assessing PA volume using objective measures of PA. We found that higher physical expenditure, quantified by the daily step count (or pax intensity) taken by each individual, was associated with lower odds of total, free, and bioavailable T levels in the hypogonadal range. While there were differences in baseline BMI and age among men with and without hypogonadism, the current work supports an association between daily steps, serum testosterone levels, and the risk of hypogonadism. Funding Open Access funding provided by Università degli Studi di Roma La Sapienza. Compliance with ethical standards Conflict of interest The authors declare that they have no conflict of interest.
2021-02-13T15:01:21.739Z
2021-02-12T00:00:00.000
{ "year": 2021, "sha1": "dd3d5d7cee9086906a0bd74b487666ede2d07504", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12020-021-02631-2.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "dd3d5d7cee9086906a0bd74b487666ede2d07504", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
169072725
pes2o/s2orc
v3-fos-license
Indonesian Waste Management: Municipal Biowaste Inventory at Yogyakarta City in 2017 Municipal solid waste is the main waste problem that has to be managed by the city. Biodegradable components of municipal solid waste, such as fruit and vegetable wastes, are produced in large quantities in markets and constitute a source of nuisance in municipal landfills because of their biodegradability. In this research, these wastes are referred to as biowaste. The objectives of this research were to calculate the inventory of municipal biowaste at Yogyakarta City and to determine the sources of municipal biowaste. The methods used in this research were field observation, mapping, and municipal waste measurement and sampling based on Indonesian National Standard SNI 19-3964-1994, Method of Collecting and Measurement of Samples and Composition of Urban Waste. The results of the research showed that the waste composition generally contained around 47% municipal biowaste. This research classified the results into waste produced by residences and waste produced by city public facilities. Residences at Yogyakarta City produce around 61.12% municipal biowaste. Public facilities at Yogyakarta City produce around 30% municipal biowaste. Most of the biowaste from public facilities was generated by traditional markets and restaurants. From the results of the inventory calculation, recommendations for biowaste treatment and management at Yogyakarta City could be determined. Introduction Municipal solid waste is the main waste problem that has to be managed by the city. Every Indonesian generates around 0.76 kg/day of solid waste, while the total population of Indonesia is more than 200 million and the total area available for disposal is limited [1]. Municipal solid waste has become a problem in several cities in Indonesia. In 2005, a landslide disaster at the Bandung disposal site killed hundreds of people [1]. In Yogyakarta, the Piyungan disposal site has limited carrying capacity. On the other hand, based on the existing land use map of Yogyakarta City, Yogyakarta has no area available for a new disposal site. Thus, this condition has become a waste emergency for Indonesia, especially in Yogyakarta [2]. Biowaste, also called biodegradable waste, means any waste capable of undergoing anaerobic or aerobic decomposition, such as food and garden waste, and paper and paperboard [3]. Based on previous research, the amount of municipal solid waste in D.I. Yogyakarta amounts to 470 ton/day, consisting of a 77% organic fraction and a 23% inorganic fraction [4]. Hence, most municipal solid waste in D.I. Yogyakarta Province is biowaste. The management of municipal biowaste is required in order to reduce the burden on the carrying capacity of the final disposal site. The initial step in managing municipal biowaste in Yogyakarta was to calculate the inventory of the municipal biowaste composition. The existing municipal solid waste management system in Yogyakarta includes waste banks and temporary disposal sites. With the waste-management facilities and infrastructure currently available in Yogyakarta City, the city's waste problems can at least be partially reduced [2]. The objectives of this research were to calculate the inventory of municipal biowaste at Yogyakarta City and to determine the sources of municipal biowaste. Based on the results of this research and the existing condition of municipal solid waste management in Yogyakarta, strategies to manage the municipal biowaste could be determined.
Methods and Material The methods used in this research were field observation, sampling, and municipal waste measurement and sampling based on Indonesian National Standard SNI 19-3964-1994, Method of Collecting and Measurement of Samples and Composition of Urban Waste. Field Observation In order to determine the sources of municipal biowaste, field observation is needed. The purposes of the field observation were to decide where the sampling points are located and where the sources of municipal biowaste are. An explanation of how the research was conducted is given below. Land use Mapping Land use mapping is the basic tool for establishing the waste reduction strategy for Yogyakarta City. The purposes of the land use mapping were to identify the existing land use in Yogyakarta City and to identify the sources of municipal waste in Yogyakarta on a land use basis. Primary and secondary data were needed to create the land use map. The secondary data required to support the land use map were Yogyakarta City satellite imagery and the Rupa Bumi Indonesia map, which includes topography, infrastructure, rivers, and roads. The primary data for the land use map were the existing data on public facilities and residential areas. The primary data were collected by direct observation in the research field. The existing land use of Yogyakarta City was classified into several categories, such as residential, facility (non-residential), and waste reduction facility. The land use map was overlaid with the results of the field observation. The overlay result shows the land use areas and the problems within them, such as the variation in waste composition and the magnitude of waste generation in the city. The outputs of this activity were a land use map and a waste reduction facility map of Yogyakarta City. The existing land use map was obtained by integrating the direct observations with the tentative land use map of Yogyakarta City. Sampling The sampling method for the waste generation measurement was purposive sampling. Purposive sampling is a sampling technique based on specific considerations [5]. The considerations used in this research were the facility type and residential criteria [2]. The sampling points for the non-residential or public facilities were schools/education institutions, office buildings, shopping areas, traditional markets, restaurants, hotels, recreation areas, medical facilities, industries, prayer buildings, and roads. The standard method for measuring and taking the samples was based on SNI 19-3964-1994 and the calculation method guideline on waste generation and waste composition from the Ministry of Environment and Forestry Year 2012, with some modifications [6] [7]. Municipal Waste Measurement This method was used to obtain the existing values of waste generation and waste composition in Yogyakarta City. The data from the measurement could be used to determine the components of Yogyakarta City waste. From the waste composition, the inventory of municipal biowaste could be calculated. Sampling point determination for the residential areas used the standard from SNI 19-3964-1994. The population of Yogyakarta City was 417,744 persons. This population size is categorized as a medium city [6]. The daytime population increases to 1.5 times the number of native inhabitants. Thus, Yogyakarta City can be categorized as a big city. From this, we assumed that the total population of Yogyakarta City was around 600,000 persons. A standard calculation method is then used to determine the number of sampling points.
The number of sampling points was calculated using the equation given in SNI 19-3964-1994. The Ministry of Environment and Forestry guideline (2012) explains that the proportion of settlements in a big city is 25% permanent, 30% semi-permanent, and 45% non-permanent [7]. However, after the field observation, the existing condition of Yogyakarta City showed more than 70% permanent settlement. Thus, the sampling point collection was modified. The Existing Land Use Map of Yogyakarta City Before the inventory calculation of biowaste, this research used the land use map to select the sampling points and investigate the sources of municipal waste. Based on the field observation, the existing land use map of Yogyakarta City is shown in Figure 1. There were 2 classifications for the settlements: the residential sector and the non-residential sector. The residential sector was divided into 3 classifications: permanent residential, semi-permanent residential, and non-permanent residential. The non-residential sector was divided into several public facility categories: education, office building areas, medical, business and service, industrial areas, prayer buildings, and other public facilities. From the map, the most dominant land use classification in Yogyakarta City is settlement or residential. The total residential area in Yogyakarta was 64.62%. The total land use area for business and services was 17.88%. The total land use area for office buildings and education was 11.94%. The total land use area for industry was 1.6%. The other land uses, for agriculture, tourism, and others, had a total area of 3.92%. Composition of Waste Generation in the Residential Area The percentage composition of municipal solid waste in Yogyakarta City was calculated from the measurement of waste generation. The results showed that the general municipal solid waste composition in Yogyakarta City was 61.12% organic waste, 30.55% inorganic waste, 3.7% hazardous waste, and 4.63% residue. The detailed composition of the municipal solid waste can be seen in Table 1. Organic waste was the dominant waste component in the residential areas of Yogyakarta City. From Table 2, the organic waste in this composition consists of food waste, vegetables, fruits, and yard garbage. The dominant organic wastes are generated from food waste (45.06%), yard garbage (10.91%), and vegetables and fruits (5.15%). Composition of Waste Generation in the Non-Residential Area The composition of municipal waste in the public facility areas of Yogyakarta City can be seen in Table 3. Food waste and vegetable and fruit waste are still the dominant wastes in the non-residential areas of Yogyakarta City. Food waste is generally produced by restaurants. Nowadays, many restaurants have become culinary tourist destinations. Yogyakarta City is one of the tourism cities in Indonesia [8]. Markets, especially traditional markets, produce a large amount of vegetable and fruit waste. The bus terminal also generates around 39.22% food, vegetable, and fruit waste. In this classification, the biowaste composition includes food, vegetable, and fruit waste. The dominant waste in this classification differs for each public facility. Besides restaurants and markets, tourism also contributes food, vegetable, and fruit waste. Tourism produced around 55.34%. One of the traditional markets in Yogyakarta City already had a plan to treat the municipal biowaste and turn it into fertilizer, but the result was not significant in terms of soil nutrition.
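The composition percentages reported above are simple mass fractions of the sorted waste samples. A minimal sketch of that calculation is shown below; the component weights are hypothetical placeholders for illustration only, not measured values from this study.

```python
# Illustrative only: compute waste composition and the biowaste share from sorted
# sample weights (kg). The numbers below are hypothetical, not data from this study.
sample_kg = {
    "food waste": 18.0,
    "vegetables and fruits": 2.1,
    "yard garbage": 4.4,
    "plastic": 6.0,
    "paper": 4.0,
    "hazardous": 1.5,
    "residue": 1.9,
}
biowaste_components = {"food waste", "vegetables and fruits", "yard garbage"}

total = sum(sample_kg.values())
composition = {name: 100.0 * kg / total for name, kg in sample_kg.items()}
biowaste_share = sum(composition[name] for name in biowaste_components)

for name, pct in sorted(composition.items(), key=lambda kv: -kv[1]):
    print(f"{name:>22}: {pct:5.2f}%")
print(f"{'municipal biowaste':>22}: {biowaste_share:5.2f}%")
```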
From the measurements, the average municipal biowaste produced at Yogyakarta City was 40.25% of the total waste generation in the city. It is suggested that this municipal biowaste be treated and recycled as fertilizer [2]. According to the 2017 report, the suggested strategies were compost and liquid fertilizer [2]. Liquid fertilizer has the highest potential for implementation and is an efficient way to utilize municipal biowaste. 4. Conclusion a. The total amount of organic waste for the residential areas in the existing land use classification was 61.12%. b. The total amount of food, vegetable, and fruit waste for the non-residential areas was 30.30%. c. The municipal biowaste at Yogyakarta City is composed of food waste, vegetable waste, and fruit waste. d. The total amount of municipal biowaste was 50.21% of the total residential municipal solid waste generation and 30.30% of the total non-residential municipal solid waste generation.
2019-05-30T23:46:31.572Z
2018-12-31T00:00:00.000
{ "year": 2018, "sha1": "ff31b593d8d87f23b595cc5d7d4059c5d3632f67", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/212/1/012011", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9156337715d3f01f9dd068d5480868e517aa4203", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Business" ] }
231689886
pes2o/s2orc
v3-fos-license
RNA-dependent RNA polymerase (RdRp) inhibitors: The current landscape and repurposing for the COVID-19 pandemic The widespread nature of several viruses is largely credited to their rapidly mutating RNA genomes, which enable the infection to persist despite the challenges presented by host cells. Encoded within the RNA genomes of these viruses is RNA-dependent RNA polymerase (RdRp), an essential enzyme that drives RNA synthesis by catalysing the RNA template-dependent formation of phosphodiester bonds. Therefore, RdRp is an important therapeutic target in diseases caused by RNA viruses, including SARS-CoV-2 infection. In this review, we describe the promising RdRp inhibitors that have been launched or are currently in clinical studies for the treatment of RNA virus infections. Structurally, nucleoside inhibitors (NIs) bind to the RdRp protein at the enzyme active site, and nonnucleoside inhibitors (NNIs) bind to the RdRp protein at allosteric sites. By reviewing these inhibitors, more precise guidelines for the development of more promising anti-RNA virus drugs should be set, and, given the current health emergency, such drugs may eventually be used for COVID-19 treatment. RNA virus infections Among the major groups of infections, RNA virus infections contribute considerably to the worldwide mortality and morbidity attributable to viral infection. Chronic illness related to persistent RNA virus infections represents a crucial public health concern [1]. While human immunodeficiency virus-1 and hepatitis C virus (HCV) are perhaps the best-known examples of persistent RNA viruses that cause chronic disease, evidence suggests that numerous other RNA viruses, including re-emerging viruses such as Ebola virus and Zika virus, establish persistent infections [2]. Furthermore, swine and avian influenza viruses, together with Middle East respiratory syndrome coronavirus (MERS-CoV) and severe acute respiratory syndrome coronavirus (SARS-CoV), represent substantial pandemic risks to the general population [3]. Considering the high frequency and wide circulation of RNA viruses, their huge genetic diversity, the frequent recombination of their genomes, and increasing activity at the human-animal interface, these viruses are recognized as a recurring hazard to human health [4]. This fact once again became strikingly apparent in late 2019 and early 2020, when severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was found to be the cause of a large and rapidly spreading outbreak of lower respiratory tract infection and disease, including potentially fatal pneumonia, in Wuhan, China [5,6]. This novel coronavirus-induced febrile respiratory disease was formally named coronavirus disease 2019 (COVID-19) by the WHO. Presently, the COVID-19 pandemic is still spreading worldwide. Along with vaccines, people are also hoping for specific therapeutic drugs [7,8]. An increasing number of drug candidates have come to the attention of clinical scientists, and many clinical trials are being performed throughout the world. RNA virus and SARS-CoV-2 An RNA virus is a virus that utilizes RNA as its genetic material. Based on their genome and mode of replication, three distinct groups of RNA viruses have been classified: double-stranded RNA (dsRNA) viruses, single-stranded RNA (ssRNA) viruses, and retroviruses [13,14].
dsRNA viruses contain one to a dozen different RNA molecules, each of which encodes one or more viral proteins [15]. The genome of positive-sense ssRNA viruses is directly used as mRNA, and the host ribosomes translate this genome into a single protein that is modified by host and viral proteins to form the various proteins needed for replication [16e18]. One of these includes RdRp, which copies the viral RNA to develop a double-stranded replicative type [19,20]. Consequently, this dsRNA routes the development of brand-new viral RNA. The genome of negativesense ssRNA viruses must be copied by an RNA replicase to form positive-sense RNA [21,22]. The positive-sense RNA molecule then acts as viral mRNA, and this mRNA is translated into proteins by host ribosomes. In addition, retroviruses have a ssRNA genome but are generally not considered RNA viruses because they utilize DNA intermediates for replication [23]. The nucleic acid of a pathogenic virus, such as coronaviruses, hepacivirus, influenza viruses and respiratory syncytial virus (RSV), is typically ssRNA (Table 1) [24,25]. These RNA viruses can create lower respiratory system tract infections that cause bronchiolitis as well as pneumonia [26]. Young children, the elderly, and patients with compromised heart, lung, or immune systems are at the highest risk for significant disease related to these RNA virus-related breathing infections [27,28]. SARS-CoV-2 is a positive-sense ssRNA virus that can infect humans [29]. The infection largely spreads from human to human through close contact and by breathing droplets generated from sneezes or coughs. The 30-kb genome of SARS-CoV-2 contains 14 open reading frames (ORFs) that can encode at least 27 proteins [30e32]. The 3 0 end of the genome encodes four structural proteins, namely, spike, envelope, membrane, and nucleocapsid proteins, and eight accessory proteins that disrupt the host's inherent immune responses. The ORF1ab region at the 5 0 end inscribes a polyprotein, which is hydrolysed into 16 nonstructural proteins (nsp 1-16) to create a replicase/transcriptase complex (RTC). The main RTC is RdRp (nsp12) (Fig. 1). In RNA viruses, such as SARS-CoV-2, RdRp creates the machinery needed for RNA synthesis and the organized replication and transcription of genomic RNA. RNA-dependent RNA polymerase The genomic replication process of RNA infections is controlled by RdRp, which is inscribed by the virus itself [30,33]. After the virus attacks a host cell, the viral genomic RNA is directly utilized as a template, and the host cell protein synthesis system is utilized for the translation of RdRp. RdRp is consequently used to complete the transcriptional synthesis of negative-strand subgenomic RNA, the synthesis of different structural protein-related mRNAs, and the replication of viral genomic RNA. RdRp can properly and efficiently synthesize tens of thousands of nucleotides and thus facilitates all other biological activities that occur after the virus invades a host cell. The structure of RdRp of positive-strand RNA viruses resembles that of a cupped right hand and includes fingers, palm and thumb subdomains that are largely associated with binding to the design template, polymerization, nucleoside triphosphate (NTP) access and associated features [34e36]. In addition to these three central subdomains, an N-terminal subdomain that bridges the fingers and thumb subdomains is located in all RdRps [37], and this subdomain serves as the active site of RdRp. 
The completely encircled active site cavity allows considerable communication between the finger and thumb subdomains. The active site of RdRp is extremely well preserved. The finger subdomain plays a considerable role in establishing the geometry of the active site [38] by holding the template RNA in place and facilitating polymerization. The thumb subdomain harbours residues that are involved in packing against the template RNA and stabilizing the initiating NTP on the template [39]. This subdomain also facilitates the translocation of the template following polymerization by accommodating large conformational rearrangements. The thumb subdomain exhibits the greatest diversity among the identified RdRps and differs in size and complexity depending on the mode of replication initiation. The palm subdomain is located at the junction of the finger and thumb subdomains and houses many structurally conserved elements associated with catalysis [40]. The palm subdomain is involved in the selection of NTPs over deoxyribonucleoside triphosphates (dNTPs) and catalyses the phosphoryl transfer reaction by coordinating two metal ions (Mg2+ and/or Mn2+). The best-known RdRps of positive-strand RNA viruses are the polioviral 3Dpol and hepatitis C virus nonstructural 5B (NS5B) proteins, as well as the SARS-CoV-2 RdRp, which has recently attracted much attention. The structure of the SARS-CoV-2 RdRp complex consists of an nsp12 core catalytic unit, an nsp7-nsp8 (nsp8-1) heterodimer, and an additional nsp8 subunit (nsp8-2); nsp12-nsp7-nsp8 is defined as the minimal core component for virus RNA replication [31]. The N-terminal portion of nsp12 contains a β-hairpin (V31–K50) and a nidovirus-specific extension domain (D60–R249). The β-hairpin is sandwiched by the palm subdomain of the RdRp core and the nidovirus RdRp-associated nucleotidyltransferase (NiRAN) domain, a configuration not observed in other coronavirus RdRp structures [41]. The region A250–F369 connects the NiRAN domain to the catalytic portion of nsp12 through an interface subdomain. The C-terminal catalytic domain of nsp12 (S367–F920) adopts the canonical cupped right-handed configuration of all viral RdRps, composed of the finger, palm, and thumb subdomains. Catalytic metal ions are not observed in the absence of primer-template RNA and NTPs, although they are present in several structures of viral polymerases that are actively synthesizing RNA. The nsp7-nsp8 heterodimer binds above the thumb subdomain and stabilizes the thumb-finger interface. Nsp7 makes a major contribution to the binding of the heterodimer to nsp12, while nsp8 only contacts a few residues of nsp12. The other copy of nsp8 (nsp8-2) sits atop the finger subdomain and forms additional interactions with the interface subdomain. In this structure, similar to other positive-strand RNA virus RdRps, the template/primer entry channel, the NTP entry channel, and the nascent strand exit channel are all positively charged and converge in a central cavity, which is the active site of the SARS-CoV-2 RdRp and is formed by seven conserved catalytic motifs (A to G). In this central cavity, these RdRp motifs mediate template-guided RNA synthesis. Motif A (T611–M626) houses the catalytic motif DX2-4D, in which the first aspartic acid, D618, is invariant in most viral polymerases. The flexible loop in Motif B (T680–T710) serves as a hinge that undergoes conformational rearrangement associated with template RNA and substrate binding.
Motif C (F753–N767) contains the catalytic motif SDD, which is essential for binding the metal ions. Motif D contains residues L775–E796. Motif E contains residues H810–V820 and combines with the palm subdomain to support the primer strand. Motif F (K912–E921) interacts with the phosphate groups of the incoming NTP. The NTP entry channel is formed by a set of hydrophilic residues, including K545, R553 and R555 in motif F [42]. Motif G (K500–S518) interacts with the template strand. The RNA template strand enters the active site, composed of motifs A and C, through the groove sandwiched by motifs F and G. The product-template hybrid exits this active site through the RNA exit channel on the front side of the RdRp (Fig. 2). The polymerase of segmented negative-strand RNA viruses (sNSVs), including influenza virus, is composed of three polypeptides: PB1, PB2 and PA/P3. PB1 contains the polymerase active site, whereas PB2 and PA/P3 have the cap-binding and endonuclease domains, respectively, needed for transcription initiation by cap snatching [43–45]. In addition, the catalytic core of nonsegmented negative-strand RNA viruses (NNVs), including vesicular stomatitis virus (VSV), RSV and Ebola virus, is the L protein, which catalyses RNA polymerization during both replication and transcription, cap addition, and cap methylation of nascent viral mRNAs [36,46–49]. The RdRp domain is a functional domain of the L protein, and the L proteins of nonsegmented negative-sense single-stranded RNA viruses share six conserved regions and three functional domains (RdRp, capping, and cap methyltransferase) (Figs. 3 and 4). RdRp is an important therapeutic target because it plays a pivotal role in replication of the RNA genome and because the host lacks a functional equivalent of this protein. In addition, due to the absence of a counterpart of RdRp in mammalian cells, its inhibition is not expected to cause target-related side effects, and thus RdRp is considered an attractive target in drug discovery and development. The development of effective RdRp inhibitors to block viral replication has long been a research topic in many scientific institutions and pharmaceutical companies. There are two known classes of RdRp inhibitors: nucleoside analogue inhibitors (NIs) and nonnucleoside analogue inhibitors (NNIs). The two classes differ in structure and in the site at which they bind RdRp: the enzyme active site (NIs) or allosteric sites (NNIs). This review summarizes the promising RdRp inhibitors that have been launched or are currently in clinical studies for the treatment of RNA virus infections, including COVID-19. Nucleoside inhibitors NIs terminate the RNA synthesis step, which is essential for RNA replication, through their incorporation by RdRp, which prevents incoming nucleotides from being added to the RNA chain [50]. It has been proposed that steric hindrance by nucleoside inhibitors that retain a 3′-hydroxyl group is responsible for the observed termination of chain elongation [51]. Due to this mechanism, NIs are sometimes called chain termination inhibitors [52]. NIs of RNA viruses, which have been developed as prodrugs, are eventually cleaved at their site of action in the liver by hepatic enzymes and undergo phosphorylation into a triphosphate form, which targets the polymerase at its highly conserved active site (Fig. 5). NIs of RdRp are classified into three major classes: pyrimidine nucleoside inhibitors, purine nucleoside inhibitors and miscellaneous nucleoside inhibitors (Table 2).
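As a compact reference, the seven SARS-CoV-2 RdRp (nsp12) catalytic motifs and the residue ranges quoted in the structural description above can be collected into a small lookup structure. The sketch below is purely illustrative: the ranges are taken as given in the text rather than from a structural database, and the helper function is a hypothetical convenience, not part of any published tool.

```python
# Residue ranges of the SARS-CoV-2 nsp12 catalytic motifs as quoted in the text above.
# Purely illustrative; ranges are (first_residue, last_residue) in nsp12 numbering.
RDRP_MOTIFS = {
    "A": (611, 626),  # houses the DX2-4D catalytic motif (D618 invariant)
    "B": (680, 710),  # flexible hinge loop for template/substrate binding
    "C": (753, 767),  # SDD catalytic motif, metal-ion binding
    "D": (775, 796),
    "E": (810, 820),  # supports the primer strand with the palm subdomain
    "F": (912, 921),  # interacts with the incoming NTP phosphates
    "G": (500, 518),  # interacts with the template strand
}

def motifs_containing(residue: int) -> list[str]:
    """Return the motif labels whose quoted range covers a given nsp12 residue."""
    return [m for m, (start, end) in RDRP_MOTIFS.items() if start <= residue <= end]

if __name__ == "__main__":
    for res in (618, 760, 505):
        print(res, motifs_containing(res) or "outside the quoted motif ranges")
```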
Cytosine analogue inhibitors 1 acts as a weak alternative substrate for cytidine triphosphate (CTP) to potentially modulate virus replication steps that are dependent on these structures, such as encapsidation, translation and replication [58]. 2 (EIDD-2801) is the 5′-isopropyl ester of 1 and exhibits broad activity against influenza viruses and multiple coronaviruses. Ridgeback Biotherapeutics is evaluating 2 in phase II clinical trials for the treatment of newly hospitalized adults with COVID-19 and of symptomatic adult outpatients with COVID-19. Preclinical studies are also ongoing to determine its potential as a treatment for influenza and MERS-CoV infection. The study of pyrimidine nucleoside inhibitors as potential antiviral drugs revealed that nucleosides bearing unique substituents at the C2′ or C4′ position exhibit obvious antiviral activity. The incorporation of 2′-C-modified monophosphates onto the 3′ termini of growing virus RNA strands promotes the termination of elongation due to steric hindrance between the incoming natural nucleotide and the unnatural 2′-C group of the inhibitor [59,60]. 3 (NM-107) is a 2′-C-methylcytidine that was initially identified as a competitive inhibitor of the NS5B polymerase, and the EC50 of 3 in wild-type replicon cells is 1.85 μM [61]. Upon phosphorylation into its 5′-triphosphate form, this metabolite inhibits viral RNA chain elongation and viral RdRp activity, and these effects block the viral production of HCV RNA and thus viral replication. In addition to HCV, this compound inhibits the replication of a variety of other viruses, such as dengue virus (DENV) and norovirus [62,63]. 4 (valopicitabine, NM-283), which is the 3′-O-valinyl ester of 3, was synthesized to obtain a compound with improved oral bioavailability compared with that of its parent compound, 2′-C-methylcytidine. Physicochemical, pharmacokinetic, and toxicokinetic studies have shown that 4 is an acid-stable prodrug of 2′-C-methylcytidine with excellent pharmacokinetic and toxicokinetic profiles [64]. 4 is currently being evaluated in phase II clinical trials for the treatment of chronic hepatitis C virus (HCV) infection. Janssen screened a series of 4′-substituted cytidine nucleoside analogues based on their pharmacodynamic and pharmacokinetic properties and found that 4′-chloromethyl-2′-deoxy-2′-fluorocytidine (5, ALS-8112) exhibited the most promising activity in the RSV replicon assay, with an EC50 of 0.15 μM [65]. 5 enters various types of epithelial cells in the respiratory tract and is subsequently phosphorylated to form an intracellular nucleoside triphosphate with a half-life (t1/2) of approximately 29 h [66]. The 5′-triphosphate of 5 is the active form of the drug and inhibits RSV polymerase with an IC50 of 0.02 μM, and no appreciable inhibition of human DNA and RNA polymerases was detected at a concentration of 100 μM [67]. 6 (lumicitabine, ALS-8176), the 3′,5′-di-O-isobutyryl prodrug of 5, is a first-in-class nucleoside RSV polymerase inhibitor that demonstrated excellent anti-RSV efficacy and safety in a phase II clinical trial for the treatment of RSV [68]. Moreover, a number of 2′-fluoro-4′-substituted cytidine nucleosides exhibited potent inhibition of the RSV replicon with a wide selectivity window in these studies, and their 5′-triphosphates effectively inhibited RSV polymerase with high selectivity with respect to the host polymerases [65].
These findings indicate that access to the F atom might allow the synthesis of first-in-class antiviral agents against RSV infection. 7 (4 0 -Azido-2 0 -deoxy-2 0 -C-methylcytidine) is a potent nucleoside inhibitor of the NS5B polymerase that displays an EC 50 value of 1.2 mM and shows moderate in vivo bioavailability in rats (F ¼ 14%) [69]. 8 (TMC-649128) is the di-isobutyryl ester of 7, and its pharmacokinetic properties in rats can potentially be improved by introducing prodrug esters. Isobutyryl ester 8 exhibits greatly improved mean maximum plasma concentrations (C max ¼ 4.65 mM) and hence a larger area under the blood level-time curve (AUC 0t ¼ 12.7 mM/h) and greater oral bioavailability (F ¼ 65%). Fluorinated nucleosides are well known for their antiviral and anticancer properties. Pharmasset screened a series of 2 0 -cytosine nucleoside analogues with obvious anti-HCV activity. b-D-2 0 -Deoxy-2 0 -fluoro-2 0 -C-methylcytidine (9, PSI-6130), in which the C2 0 position of uridine is substituted by an F atom and methyl group, exerts a highly effective inhibitory effect on HCV replication [70,71]. Unfortunately, clinical phase I trials showed that 9 does not have good pharmacokinetic characteristics. The low oral bioavailability of 9 might be due to the presence of hydroxyl and amino groups, which are more polar, exhibit poor fat solubility, and cannot easily penetrate biofilms in its structure [72]. In addition, its amino group at position 4 is unstable under acidic conditions, and this amino group can be easily removed to generate carbonyl groups. To resolve the problem of bioavailability and metabolism, the 3 0 ,5 0diisobutyrate prodrug 10 (mericitabine, RG-7128) was designed to promote the absorption and metabolism of the compound in the intestine [73e76]. 10 is rapidly absorbed via the oral route and converted to 9, which is subsequently metabolized to its metabolite. The results also showed that 10 improves the pharmacokinetic parameters, and clinical trials have also shown that the drug exerts a certain effect. However, 10 is not the most ideal medicine due to its low general efficacy and short t 1/2 . Uracil analogue inhibitors b-D-2 0 -deoxy-2 0 -fluoro-2 0 -C-methyluridine (11, PSI-6206) is a metabolite of 9 in vivo [77] that can significantly inhibit HCV replication. The patients with hepatitis C treated with 11 generally show good tolerance. However, the bioavailability of 11 is too low (25%) because 11 cannot be converted into 11-triphosphate (11-TP). However, observations of the metabolites of 9 revealed that monophosphate 9 can be further transformed into 11-TP, which has a long t 1/2 and better activity than 9-TP [77,78]. This major discovery indicates that uridine monophosphate derivatives might be ideal direct-acting antiviral agents (DAAs). To ensure the safety and efficiency of the drug, the F atom and methyl group at the C2 0 position are retained. Because compounds containing phosphoric acid groups are negatively charged, the related compounds cannot be easily absorbed by the human body. The prodrug design was finally adopted, and the first prodrug 12 (PSI-7672) was designed [77]. Since then, a large number of derivatives have been synthesized, and some of the related experience has been previously summarized [79]. The isomeric form of the amino acid is significant because the D-alanine derivative is inactive, which means that the natural L-amino acid is required for activity. 
Observations of the amino acid side chain (R 1 ) have revealed that a small alkyl group is a viable substitution, but significant decreases in potency are observed with substitutions larger than ethyl. Methyl results in the greatest potency and is therefore a viable substitution. If the amino acid is alanine and the phosphate ester is a phenyl substituent, the groups at the carboxylic acid ester (R 2 ) that provide the desired submicromolar activity are small alkyl groups and branched alkyl groups. However, cytotoxicity was observed with n-butyl, 2-butyl and n-pentyl esters. Phenyl and halogenated alkyl groups do not provide sufficient improvements in potency. The evaluation of the phosphoramidate ester substituent (R 3 ) revealed that a derivative with phenyl as a substituent exhibits good potency and is not cytotoxic ( Table 3). It was finally determined that 13 (PSI-7851) is an ideal DAA with a favourable pharmacokinetic profile for inhibiting HCV [77,78,80]. 13 contains a chiral phosphorous atom and is therefore a mixture of two diastereomers, S p diastereoisomer 14 (PSI-7976) and R p diastereoisomer 15 (sofosbuvir, PSI-7977) [81]. The activity of 15 is significantly better than that of 14, and this difference might be due to the different binding orientations of 14 and 15 to the enzyme, which are productive and nonproductive, respectively. 15 might preferentially bind in the nonproductive orientation and form a dead-end complex to exert a significant antiviral effect [82]. Compound 15 is an approved NS5B polymerase inhibitor (EC 90 ¼ 0.42 mM) and exerts pangenotypic antiviral effects against HCV genotypes (GTs) 1e6. The potent antiviral activities of 15 are higher than 90% [83]. In addition, 15 has the ability to suppress different families of viruses, including Zika virus (ZIKV), DENV and chikungunya virus [84e86]. Moreover, 15 exhibits a rapid response, and over 0.8 and 2 days, this compound Table 3 Structureeactivity relationship of the 2 0 -deoxy-2 0 -a-fluoro-b-C-methyluridine-5 0 -monophosphate analogue. [87]. The bioavailability of 15 is high, and maximum plasma concentrations (C max ) were detected 0.5e2 h after oral administration [88]. 15 has been considered a potential effective drug for inhibiting SARS-CoV-2 infection since the emergence of the COVID-19 pandemic. The administration of 15 and daclatasvir in combination with standard of care (SOC) for the treatment of patients with COVID-19 resulted in better 14-day recovery rates and a shorter hospital stay. The patients in the therapy group experienced a shorter duration of hospital stay and a shorter median time to discharge than the control group (6 vs 8 days and 6 vs 11 days, respectively). Recently, Pinar Mesci et al. [89] reported that 15 can potentially be used to alleviate COVID-19related neurological symptoms. 16 (AL-335) is a type of uridine analogue with 4 0 -fluoro-2 0 -Csubstituted sugar moieties [90]. 16-TP exhibits potent inhibition of NS5B polymerase with IC 50 values as low as 27 nM. In an HCV subgenomic replicon assay, the phosphoramidate prodrug of 16 demonstrated very potent activity with EC 50 values as low as 20 nM [91]. The administration of 16 in combination with simeprevir and odalasvir has been evaluated in human phase II clinical trials and has shown promising efficacy and safety results. 16 is well tolerated when administered as single and multiple doses and exhibits an acceptable pharmacokinetic profile [92]. 
17 (JNJ-54257099) is a cyclic phosphate ester derivative belonging to the class of 2 0 -deoxy-2 0 -spirooxetane uridine nucleotide prodrugs [93]. This compound profoundly dose-dependently decreases HCV RNA levels in mouse models of HCV GT 1a and 3a infections. 17 was terminated following completion of phase I clinical studies conducted by Janssen Pharmaceutical. This is because the clinical antiviral activity of 17 in patients infected with HCV GT 1 was insufficient to justify further clinical studies. Compound 18 (VX-135) exhibited pronounced antiviral activity against GTs 1e6 (EC 50 values between 12 and 390 nM) in vitro [94] and has been evaluated in phase II clinical studies for the treatment of hepatitis C. The phase I study evaluated the pharmacokinetics, safety and antiviral activity of 18 in 48 healthy controls and 30 patients with HCV GT 1 infection. The most common adverse events were headache and diarrhoea (two subjects each), and no severe adverse events were recorded. Compound 18 demonstrated potent antiviral activity with a 4.5 log 10 decrease in HCV RNA over 7 days at a dose of 200 mg quaque die (QD) in patients infected with chronic hepatitis C [95]. Thymine analogue inhibitors 19 (ACH-3422) is an NS5B polymerase inhibitor that displays pangenotypic activity and a high in vitro barrier to resistance. 19 is designed to introduce three deuteriums on the side chains of pyrimidine and ribose groups. The incorporation of deuterium into pharmacologically active agents according to the principle of deuterium isotope effects (DIEs) offers potential benefits, such as improved exposure profiles and decreased production of toxic metabolites that could yield improvements in efficacy, tolerability, or safety [96]. Therefore, 19 is well tolerated and induces no serious adverse events in healthy volunteers and hepatitis C patients [97]. Among active patients, increasing doses of 19 resulted in increased viral decline. In the proof of concept group administered 700 mg of the antiviral, mean decreases in the maximum viral load of 3.4 log 10 , 4.2 log 10 , and 4.6 log 10 were obtained after 7, 10, and 14 days of treatment, respectively. Three of six patients (50%) achieved viral clearance after the administration of 700 mg for 14 days. Adenine analogue inhibitors 20 (galidesivir, BCX4430) is an adenosine nucleoside analogue developed by BioCryst Pharmaceuticals. This compound was originally intended as a treatment for HCV but was subsequently developed as a potential treatment for deadly filovirus infections [98,99]. Studies have shown that 20 protects against both Ebola and Marburg viruses in both rodents and monkeys [99]. This compound also shows broad-spectrum antiviral effectiveness against a range of other RNA virus families, including SARS-CoV and MERS-CoV [100]. 20 can bind SARS-CoV-2 RdRp, with a binding energy of À7.0 kcal/mol [101]. In April 2020, BioCryst opened enrolment into a randomized, double-blind, placebo-controlled clinical trial aiming to assess the safety, clinical impact and antiviral effects of 20 in patients with COVID-19. 21 (Nuc, GS-441524) is a 1 0 eCNemodified adenosine C-nucleoside analogue that exhibits antiviral activity against a variety of RNA viruses [102]. Structurally, the 1 0 -CN group provides potency and selectivity for viral RdRp. A study conducted in 2019 revealed that 21 can potentially be used for the treatment of feline infectious peritonitis caused by a coronavirus [103]. 
21 is converted intracellularly to its triphosphate, 21-TP, which interferes with the activity of viral RdRp. However, the kinetics of the monophosphorylation of 21 are slow, and using a parent nucleoside already modified with a monophosphate might greatly increase the intracellular NTP concentration [102]. Compound 22 (remdesivir, GS-5734) is the Sp isomer of the 2-ethylbutyl L-alanine phosphoramidate prodrug and effectively bypasses this rate-limiting monophosphorylation step of 21 [104]. Compound 22 is activated more rapidly than 21 in human cells infected with SARS-CoV and MERS-CoV, and multiple uses of this compound have been explored with the aim of addressing urgent and unmet medical needs around the world, including Ebola virus disease, SARS, MERS and, most recently, COVID-19 [105]. In Vero E6 cells, 22 effectively blocks SARS-CoV-2 infection at low concentrations (EC50 = 0.77 μM) and exhibits low cytotoxicity (CC50 > 100 μM); its EC90 against SARS-CoV-2 in Vero E6 cells is 1.76 μM [10]. A study using the rhesus macaque model of SARS-CoV-2 infection revealed that therapeutic treatment with 22 initiated early during infection results in a clear clinical benefit [106,107]. NIAID reported that remdesivir was superior to placebo in shortening the time to recovery in adults who were hospitalized with COVID-19 and had evidence of lower respiratory tract infection in phase III clinical trials [108]. Recently, the FDA approved 22 for the treatment of patients with COVID-19 requiring hospitalization; to date, 22 is the first and only FDA-approved treatment for COVID-19 in the United States. In addition, some researchers have argued for the direct administration of 21 as a COVID-19 treatment because 21 exhibits similar or greater potency than 22 against SARS-CoV-2 in cell culture [109]. 23 (AT-527) is a novel modified guanosine nucleotide prodrug inhibitor of the NS5B polymerase that belongs to the same category as 22. Compound 23 exhibits higher in vitro antiviral activity than compound 15: the free base of 23 had an EC95 value of 25 nM and thus exhibited 10-fold higher potency than 15 in Huh-7 cells bearing the HCV GT 1b replicon [110]. The antiviral activity and safety of 23 have been demonstrated in phase II clinical studies in hepatitis C patients. The mean maximum reductions observed in noncirrhotic subjects with HCV GT 1b, noncirrhotic subjects with HCV GT 3, and subjects with compensated cirrhosis after 7 days of treatment were 4.4, 4.5 and 4.6 log10 IU/mL, respectively [111]. A phase II clinical trial has been established to evaluate the safety and efficacy of 23 for the treatment of adult patients hospitalized with moderate COVID-19 disease. 24 (INX-189), the phosphoramidate nucleoside analogue prodrug of 2′-C-methylguanosine, is a potent HCV replication inhibitor (EC50 = 35 nM) that is currently being investigated in a phase II clinical trial by Bristol-Myers Squibb for the oral treatment of hepatitis C virus infection [112].
Guanine analogue inhibitors
25 (IDX-184), a highly potent inhibitor of HCV replication in vitro, was designed to achieve enhanced targeting to the liver through monophosphorylation and to reduce the exposure of other tissues to the drug. Compound 25 is preferentially cleaved by hepatic enzymes to form the triphosphate, and 25-TP potently inhibits NS5B polymerase (IC50 = 0.31 μM, Ki = 52.3 nM) but does not inhibit human polymerases α, β or γ (IC50 > 50 μM) [113].
The administration of 25 at single and multiple doses of up to 100 mg/day for three days revealed that the compound is safe and well tolerated in both healthy volunteers and treatment-naïve HCV GT 1-infected subjects. 25 in combination with pegylated interferon-α (Peg-IFN) and ribavirin (RBV) was generally safe and well tolerated in HCV GT 1-infected subjects, and even its lowest dose of 50 mg QD resulted in marked viral load reductions compared with Peg-IFN/RBV alone [114]. 26 (favipiravir, T-705) is a broad-spectrum anti-RNA-virus drug that was approved in 2014 for the oral treatment of influenza A and influenza B infections. Studies have shown that, in addition to influenza viruses, 26 also exhibits good antiviral effects against a variety of RNA viruses, such as Ebola virus and rabies virus [115–117]. Wang et al. [10] showed that 26 can effectively reduce SARS-CoV-2 infection in vitro (EC50 = 61.88 μM, CC50 > 400 μM, SI > 6.46). Ongoing clinical trials have shown that 26 can accelerate the recovery of COVID-19 patients, as demonstrated by a median cure time of 2.5–9 days, whereas that of the control group was 11 days (8–13 days). Compared with the control group, patients with nonsevere COVID-19 in the 26 group exhibited a shorter virus clearance time, and chest CT also showed significant improvement. In addition, the patients in the 26 treatment group experienced fewer adverse reactions and better tolerance [118]. The results of a clinical trial conducted in Wuhan suggest that, among patients with common COVID-19, the 7-day clinical recovery rate in the 26 treatment group was 71.43%, significantly higher than the 55.68% obtained in the control group; in addition, treatment with 26 significantly shortened the times to fever and cough relief in patients with hypertension/diabetes [119]. Phase III clinical trials of 26 for the treatment of hospitalized patients with SARS-CoV-2 infection are ongoing, and future large-scale clinical trials will help verify the effectiveness and safety of 26 as a drug for the treatment of COVID-19. 27 (ribavirin, ICN-1229) directly induces antiviral activity against a number of RNA viruses by increasing the mutation frequency in their genomes [120]. This compound is primarily indicated for the treatment of hepatitis C and viral haemorrhagic fevers. 27-TP also exhibits an inhibitory action on the viral mRNA guanylyltransferase and mRNA 2′-O-methyltransferase of DENV [121], and 27 was used as a therapeutic drug during the SARS outbreak in 2003 [122]. Tong et al. compared 27 with supportive therapy for patients with severe COVID-19 and found that ribavirin therapy is not associated with an improved negative-conversion time in the SARS-CoV-2 test or with an improved mortality rate in patients with severe COVID-19 [123]. In addition, a clinical trial studied the efficacy and safety of the combination of interferon beta-1b, lopinavir–ritonavir and 27 for the treatment of patients with COVID-19. The results showed that early triple antiviral therapy was safe and superior to lopinavir–ritonavir alone in alleviating symptoms and shortening the durations of viral shedding and hospital stay in patients with mild symptoms, and the side effects were mild and controllable [124]. However, these results need to be further verified in an expanded double-blind trial.
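Two of the figures quoted in this subsection follow from simple arithmetic: the selectivity index (SI) is the ratio CC50/EC50, and a reduction of x log10 units in viral load is a 10^x fold decrease. The sketch below merely reproduces those conversions for the in vitro favipiravir values reported by Wang et al. [10] and an example log10 drop; it is illustrative only and adds no new data.

```python
# Selectivity index (SI) = CC50 / EC50; values for compound 26 (favipiravir)
# against SARS-CoV-2 in vitro as reported by Wang et al. [10].
ec50_uM = 61.88      # half-maximal effective concentration
cc50_uM = 400.0      # cytotoxic concentration (reported as "> 400 uM", so the SI is a lower bound)

si = cc50_uM / ec50_uM
print(f"SI > {si:.2f}")   # > 6.46, matching the value quoted in the text

# A reduction of x log10 units in viral load is a 10**x fold decrease,
# e.g. the 4.5 log10 HCV RNA drop reported above for compound 18.
log10_drop = 4.5
print(f"{log10_drop} log10 = {10**log10_drop:,.0f}-fold reduction")  # ~31,623-fold
```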
It is worth noting that the safety of ribavirin has been controversial. The adverse reactions that have been reported include teratogenicity and haemolytic anaemia [125]. Non-nucleoside inhibitors The structures of NNIs are diverse. Most NNIs change the spatial conformation of RdRp by binding to allosteric sites on the surface of the enzyme and thereby inhibit its activity and the replication of RNA viruses (Fig. 6). 28 (pimodivir, JNJ-63623872) is an NNI of the PB2 domain of the RdRp of influenza A virus [126]. Phase I and II clinical trials have shown that 28 has the potential to not only reduce the viral load but also have a clinical impact on patients [127,128]. However, due to the unsatisfactory results of phase III clinical trials, the clinical study of 28 has been terminated. In addition, clinical studies of allosteric site inhibitors have mainly focused on anti-HCV infection. Five different allosteric binding sites for NNIs in HCV NS5B polymerase have been discovered [129,130]. Two of these sites are located in the thumb subdomain of the polymerase, and the other three are located in the palm subdomain. The NNIs of HCV can be divided into five different classes according to the location of their respective binding sites (referred to as thumbs I and II and palms I, II and III) ( Table 4). Thumb I inhibitors Thumb I inhibitors are mainly benzimidazole and indole compounds, and the compounds under clinical studies all show excellent anti-HCV activity. These types of compounds bind to thumb I mainly through hydrophobic interactions and salt bridge/hydrogen bonds between the ester group or carbonyl group of the compound and the guanidine group of the amino acid residue R503. The initially discovered benzimidazole 29 has weak inhibitory activity (IC 50 ¼ 1.6 mM) against HCV GT 1 NS5B polymerase, and its EC 50 value is higher than 10 mM in a cell-based model for subgenomic replicon [131]. The structure has been optimized to improve the activity of benzimidazole compounds. JTK-109 (30) is a superior compound that was obtained by modifying position 2 of benzimidazole. 30 inhibits HCV GT 1NS5B polymerase with an IC 50 value of 0.022 mM and an EC 50 value of 0.62 mM [132]. In addition, the carboxyl group at position 5 of benzimidazole can be modified to enhance the antiviral activity of the compound. A longer side chain was linked by an amide bond to obtain 31, which exerts an inhibitory effect on NS5B polymerase GT 1 polymerase (IC 50 ¼ 0.3 mM) with an EC 50 value of 1.7 mM [133]. To further improve the activity of the compounds on enzymes and cells, the structure of these compounds was further modified by replacing the benzimidazole ring with the indole ring, which yielded 32. The IC 50 value of 32 in inhibiting HCV GT 1b NS5B polymerase is 0.016 mM, and the EC 50 value is 4.2 mM [134]. Compared with 29, the activity of 32 at the enzyme level was increased 100-fold, and the activity in the cell model was also improved. The structure of 32 was modified using the same strategy as that used to obtain 29, which yielded 33 (BILB-1941). The IC 50 value of 33 in inhibiting HCV GT 1b NS5B polymerase is 0.045 mM, and the EC 50 value is 0.084 mM [131]. Compound 33 was administered via a single oral dose in a clinical phase I trial, which revealed that this compound has antiviral activity against HCV GT 1. Adverse events (AEs) were mainly related to the gastrointestinal tract (most frequent diarrhoea), and the frequency increased with increasing dose [135]. 
However, at high doses (450 mg), all five actively treated patients were unable to tolerate the compound due to gastrointestinal reactions, and clinical studies have been discontinued [136]. Whether the compound has benzimidazole or indole as its core, the dihedral angle between the core and the 2-position aryl group exerts a greater impact on the activity of the compound during the process of structural modification [137]. Structure-activity relationship (SAR) studies involving an indolo-benzoxazocine scaffold led to the identification of 34 (MK-3281), an inhibitor that exhibits good potency in the HCV subgenomic replication assay and attractive molecular properties suitable for a clinical candidate [138]. Compound 34 inhibited HCV NS5B polymerase with an IC 50 value of 0.006 mM, and an EC 50 value of 0.038 mM was obtained in the replicator model (GT 1b). The compound caused a consistent decrease in viremia in vivo, as demonstrated with the chimaeric mouse and chimpanzee model of HCV infection [139]. In a 7-day clinical trial of 34 monotherapies, patients with HCV GT 1b infection showed the greatest decrease in viral load and no viral load rebound, whereas patients with HCV GT 1a infection exhibited a small decrease in viral load, and their viral load appeared to rebound. One patient developed severe myoclonus side effects, and clinical trials of the compound were terminated. The EC 50 values of 35 (deleobuvir, BI 207127) in the cell-based HCV GT 1b and GT 1a subgenomic replicons are 23 nM and 11 nM, respectively [140]. In a clinical trial of 35 combined with SOC for the treatment of HCV infection, the patients showed a significant reduction in viral load [141]. However, the use of 35 for the treatment of hepatitis C has been discontinued by Boehringer Ingelheim due to weak market competition. 36 (beclabuvir, BMS-791325) is a thumb I-NS5B polymerase ligand [142]. In cell culture, 36 inhibits the replication of HCV subgenomic replicons representing HCV GTs 1a and 1b at EC 50 values of 3 nM and 6 nM, respectively, and similar values (3e18 nM) were obtained for GTs 3a, 4a, and 5a [143]. The oral bioavailability of 36 is 66%, and its volume of distribution is 2.7 L/kg; following intravenous administration, a plasma clearance of 3.5 mL/min/kg and an estimated plasma t 1/2 of 8.3 h were obtained in a 24-h rat pharmacokinetic study [144]. 36 represents a valid drug that exhibits a good tolerability and safety profile, and together with other antivirals, this compound exhibits optimal efficacy against HCV in compensated phases of the diseases, as demonstrated in clinical studies [145,146]. At present, 36 in combination with asunaprevir and daclatasvir has been launched in Japan for the treatment of hepatitis C. Thumb II inhibitors The thumb II site is a spatially distinct allosteric site on the polymerase situated at the base of the thumb subdomain at a distance of 30 Å from the enzyme active site. Inhibitors acting on the thumb II site are mainly dihydropyrones, thiophene carboxylic acids and pyranoindole compounds. The lipophilic substituents of these compounds occupy the shallow grooves formed by the amino acid residues Leu 419, Tyr477 and Trp 528 in the thumb II site. The acidic groups of the compounds generate hydrogen bonds with the backbone amide bonds of the amino acid residues Ser 476 and Tyr477 [147,148]. 37 (radalbuvir, GS-9669) is an inhibitor of NS5B polymerase that is currently in phase II clinical trials for the oral treatment of patients with HCV infection. 
In replicon cell lines, compound 37 exerts a high antiviral effect against HCV GT 1a (EC 50 ¼ 11 nM), HCV GT 1b (EC 50 ¼ 2.8 nM) and HCV GT 5a (EC 50 ¼ 8 nM) [149]. Due to its synergistic or additive effects with other antivirals and the lack of cross-resistance, this compound might be an important component of interferon-free combinations for the treatment of HCV infection [150]. 38 is a seed compound identified by high-throughput screening. The compound has an IC 50 value of 0.93 mM for HCV GT 1b NS5B polymerase inhibition, and an EC 50 value of 48 mM was obtained in the replicon model [148]. To optimize the structure of the compound, one strategy is to introduce an aromatic group that can interact with the residue and undergo p-p stacking interactions, and another strategy is to replace the sulfur atom connecting the dihydropyrone and the aromatic group on the right with a carbon atom. These strategies aim to improve the membrane permeability and pharmacokinetic properties of the compound and yield highly active compound 39 (filibuvir, PF-868554) [151,152]. [154]. In a phase I clinical trial of 41, the average viral load of HCV type 1 patients who were treated with 800 mg TID was evaluated 10 days later, and the largest decrease in volume reached 2.5 log 10 IU/mL [155]. Although more patients had diarrhoea, no serious adverse reactions were observed. The EC 50 value of 42 for HCV GT 1 in the replicon model was found to be equal to 0.1 mM [156]. In a clinical phase I trial, the compound rapidly reduced the viral load of HCV GT 1 patients after 3 days of treatment, and the maximum reduction in viral load obtained with these treatments was 1.5 log 10 IU/mL [157]. The compound was well tolerated in healthy volunteers administered a single oral dose of 1500 mg and in patients with HCV orally administered 750 mg BID, and the maximum average viral load observed 3 days after the treatment was decreased by 3.7 log 10 IU/mL. The phase II clinical trial of 43 in combination with telaprevir for the treatment of patients with HCV GT 1 was terminated [158,159]. Compound 44 (HCV-371) is a type of pyranoindole HCV thumb II inhibitor that has a structure that differs from that of the two abovementioned types of compounds [160e162]. For 90% HCV GTs 1a and 1b, the IC 50 value was 0.3e1.4 mM, the IC 50 value for HCV GT3a inhibition was 1.8 mM, and the EC 50 values for HCV GTs 1a and 1b in the replicon model were 6.1 and 4.8 mM, respectively. The compound was well tolerated in clinical phase I trials, but due to a lack of significant antiviral activity, clinical trials of the compound have been terminated. Palm I inhibitors The palm I binding site is located between the active site and the palm II site and contains a deep hydrophobic pocket. The inhibitors that bind to this site mainly include N-aryl uracil analogues, benzothiadiazines and acyl pyrrolidines. A series of N-aryl uracil analogues was reported as a novel structural class of NS5B polymerase NNIs [163]. Compound 45 is a potent inhibitor of GTs 1a (EC 50 ¼ 51 nM) and 1b (EC 50 ¼ 19 nM) NS5B polymerase [164]. Replicon activity was maintained when the assay was conducted in the presence of 40% human plasma [EC 50 ¼ 61 nM (1a), EC 50 ¼ 22 nM (1b)]. However, 45 exhibited poor pharmacokinetic properties in rats, with high plasma clearance and poor oral bioavailability (F ¼ 1.4%). The physical properties of this compound that can be associated with poor oral exposure include low aqueous solubility and poor membrane permeability. 
The solubility problem is likely related to the high melting point of the compound. Based on its excellent antiviral activity profile, 46 (ABT-072) was developed. The replacement of the amide linkage in 45 with a trans-olefin yielded a compound with improved permeability and solubility and markedly better pharmacokinetic properties in preclinical species. The replacement of the dihydrouracil in 45 with an N-linked uracil provided better potency in the HCV GT 1 replicon assay. Compound 46 is a potent inhibitor of HCV GT 1 replicons, with EC 50 values of 1 nM and 0.3 nM against HCV GT 1a and GT 1b, respectively. The results from phase I clinical studies supported the once-daily oral dosing of HCV-infected patients with 46 [163,165]. A phase II clinical study that combined 46 with the HCV protease inhibitor ABT-450 revealed a sustained virologic response at 24 weeks after dosing (SVR24) in 10 of 11 patients who received treatment [166]. 47 (dasabuvir, ABT-333) is also an Nlinked uracil derivative that was identified via throughput screening of the aryl dihydrouracil fragment [167]. This compound does not exhibit stereoisomerism, is thermodynamically stable and shows aqueous solubility, dissolution, and Caco-2 permeability. The IC 50 for clinical isolates ranges between 2.2 and 10.7 nM for HCV GTs 1a and 1b [168]. In 2016, 47 in combination with ombitasvir/ paritaprevir/ritonavir was launched for the treatment of chronic hepatitis C infection. However, 47 has limitations in terms of its limited genotypic coverage, and its administration to patients with advanced cirrhosis is difficult. This compound also represents a large pill burden when added to combination therapy [169]. Compounds with benzothiadiazine as the basic skeleton show strong activity at the enzyme level and in the cell model, and their physical and chemical properties are not ideal due to their special compound structure (intramolecular hydrogen bonds bring the aromatic ring close to the same plane), which results in poor pharmacokinetic properties in the body [170,171]. Such structures are optimized by introducing carbon-containing branches, reducing the number of aromatic rings, and reducing the polar surface area (PSA) of the molecule. 48 (setrobuvir, ANA598) is a benzothiadiazine analogue with a reduced aromatic ring that is currently in phase II clinical trials. In the replicon model, the EC 50 values of 48 in inhibiting HCV GTs 1a and 1b NS5B polymerases were found to be equal to 0.05 and 0.003 mM, respectively [172]. In a phase I clinical trial, BID treatment with 48 at doses of 200 mg, 400 mg, and 800 mg for 4 days decreased the viral load by 2.4, 2.3 and 2.9 log 10 IU/ml, respectively. In an earlier phase II study, 48 was administered in combination with Peg-IFN/RBV to naïve HCV GT 1infected patients for 12 weeks, and the combination exhibited potent antiviral activity and good safety and tolerability [173]. [175]. 50 is well absorbed and well tolerated by all healthy male volunteers included in the phase I study. A single-day 200-mg BID dose resulted in exposure-related HCV activity with maximal 0.5 to 1.1 log 10 reductions in plasma HCV RNA levels [176]. 51 (CC-31244, undisclosed structure) is a pangenotypic inhibitor of NS5B polymerase (GTs 1e5) that was designed for the treatment of hepatitis C infection. Compound 51 shows no significant cytotoxicity, CYP450 inhibition, or off-target or drug-drug interactions [177]. 
In the phase I study, a rapid and marked decline in HCV RNA levels, slow viral rebound after treatment, and no viral breakthrough during treatment were observed in the patients, which indicates that this compound is highly favourable compared with the currently approved NNIs [178]. Palm II inhibitors The palm II site is mainly composed of a large hydrophobic pocket in the palm area, and the RdRp inhibitors that bind to this site include benzofurans. These palm II site inhibitors are different from other nonnucleoside inhibitors in that they exhibit potent activity against HCV GTs 1 to 4 and NS5B polymerase. 52 (nesbuvir, HCV-796) is the first palm II-NNI inhibitor to enter phase II clinical trials. In hepatoma cells containing an HCV GT 1b replicon, the IC 50 value of 52 was found to be 9 nM [179]. In a phase I clinical trial, the greatest decreases in the average viral load, which reached 1.4 log IU/mL, were observed with the 1000 mg BID and 1500 mg BID treatments [180]. The initial results of phase II clinical trials showed that the combined treatment of 52 and PEG-IFNa can increase the therapeutic effect and reduce the occurrence of mutant strains. However, elevated liver enzyme levels were observed in some patients administered the combination treatment for at least 8 weeks [181]. The efficacy and safety of 52 for the treatment of hepatitis C via intravenous injection are currently being evaluated. 53 (tegobuvir, GS-9190) is currently in a phase II clinical trial for the treatment of HCV infection [182]. Its EC 50 values were lower than 16 nM against HCV GT 1 and higher than 100 nM for other GTs [183]. In the clinical studies of 53 for the treatment of HCV GT 1-infected patients (doses of 40 mg and 120 mg BID), the viral load decreased by 1.4 and 1.7 log IU/ml after 8 days, respectively. The X-ray crystal structure of a complex with 54 has provided structural insights into the mechanism of inhibition and aided the rationalization of the structure-activity relationships. Compound 54 binds within the active site cavity of NS5B polymerase near the top of the palm subdomain, and the binding site of this compound is a new binding site, which has been denoted the palm III site. Discussion and perspectives Currently, only 22 has been approved in the United States as the first COVID-19 treatment drug, the specific drug for the treatment of COVID-19 remains scarce, and the rapid identification of an effective strategy for the treatment of COVID-19 is currently a major challenge facing researchers. In terms of development time, research progress, safety and effectiveness, small-molecule drugs are the best choice compared with other therapies, such as monoclonal antibodies, oligonucleotide-based therapies, plasma therapies, and peptide therapies. However, the medicinal chemistry of SARS-CoV-2 infection remains in its infancy, and target-specific lead molecules remain to be identified. The existing antiviral drugs have established safety characteristics and effectiveness against related coronaviruses. The reuse of existing small molecule antiviral drugs is an important and promising strategy for addressing the COVID-19 epidemic. If the existing drugs can be repurposed for the treatment of SARS-CoV-2 infection, preclinical research (such as animal experiments and pharmacological research) and early clinical research can be bypassed, and the drugs can directly enter phase II or III clinical trials. 
RdRp is one of the most important viral proteins of RNA viruses for RNA synthesis and has been proposed as a valuable target for the development of antiviral therapeutics. Considering the similarity of the key drug-binding pockets between SARS-CoV-2, SARS-CoV, and MERS-CoV RdRps, repurposing known RdRp inhibitors for SARS-CoV-2 remains a promising strategy [185]. W. Yin et al. reported the complex structure of 22 inhibiting SARS-CoV-2 RdRp, which provided insights into the mechanism of viral RNA replication and a rational template for drug design to combat viral infection [186]. In this study, the complex structure revealed that the partial double-stranded RNA template was inserted into the central channel of RdRp, where 22 was covalently incorporated into the primer strand at the first replicated base pair and terminated chain elongation. At present, the candidate drugs that have shown obvious anti-SARS-CoV-2 at the cellular level or in clinical trials are 2, 15, 20, 22, 23, 25, 26 and 27 (Fig. 7). Like 22, these nucleotide analogues can converge into a central cavity of viral RdRp and inhibit viral RdRp through nonobligate RNA chain termination, a mechanism that requires conversion of the parent compound to the triphosphate active form. 2, 20, 22, 26 and 27 retain the entire ribose group, so they may be able to form a stable hydrogen bond network similar to natural substrates [31]. In addition, the unmodified side chain hydroxyl groups on the 20 and 27 nucleosides can also form hydrogen bonds. In particular, 2 has been shown to be 3 to 10 times as potent as 22 in blocking SARS-CoV-2 replication [54]. The N4 hydroxyl group off the cytidine ring forms an extra hydrogen bond with the side chain of K545, and the cytidine base also forms an extra hydrogen bond with the guanine base from the template strand. These two extra hydrogen bonds may explain the apparent higher potency of 2 in inhibiting SARS-CoV-2 replication [186]. However, 15 only formed 7H-bonds and two hydrophobic contacts with the SARS-CoV-2 RdRp residues. This is because fluorine substitution occurs on the ribose group of 15, so they cannot form a hydrogen bond network, but this is necessary to keep the incoming natural nucleotides stable. The same phenomenon occurs when compound 23 is combined with SARS-CoV-2 RdRp. The guanosine triphosphate derivative 25 formed 10H-bonds with the SARS-CoV-2 RdRp residues and two metal interactions with the active site residue of RdRp [101]. Moreover, among all intracellular NTPs, cytidine triphosphate is found at a lower intracellular concentration than other NTPs. Therefore, pyrimidine nucleoside inhibitors such as compound 2 are more likely to be developed into antiviral drugs. In addition, a variety of antiviral drugs were suggested as lead candidates against COVID-19 through homologue model-based virtual screening and molecular docking, including candidate drugs that have been applied in other diseases (Fig. 8). M.S.A. Parvez et al. [187] revealed that antibacterial drugs, including rifabutin, rifapentine, fidaxomicin, 7-methyl-guanosine-5 0 -triphosphate-5 0guanosine and ivermectin, have a potential inhibitory interaction with RdRp of SARS-CoV-2 and could be effective drugs for COVID-19. The drug surface hotspot study revealed that the molecular binding sites of all the compounds had a similar pattern. The vast number of noncovalent interactions between these screened compounds and RdRp suggests that the protein-inhibitor complexes are very stable. 
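The role of the intracellular NTP pool can also be made quantitative with the Cheng–Prusoff relation for a competitive inhibitor, IC50 = Ki·(1 + [S]/Km): the higher the concentration of the competing natural nucleotide relative to the polymerase's Km, the larger the apparent IC50 of a triphosphorylated nucleoside inhibitor, whereas an allosteric inhibitor is unaffected. The sketch below uses purely illustrative Ki, Km and NTP concentrations, so it shows only the size of this shift and is not data from any of the cited studies.

```python
# Cheng-Prusoff relation for a competitive inhibitor:
#   IC50 = Ki * (1 + [S] / Km)
# Here the "substrate" is the natural NTP competing with a triphosphorylated
# nucleoside inhibitor (NI) at the RdRp active site. All numbers are
# illustrative assumptions, not measured values.
def apparent_ic50(ki_uM: float, ntp_uM: float, km_uM: float) -> float:
    """Apparent IC50 of a competitive NI triphosphate at a given NTP level."""
    return ki_uM * (1.0 + ntp_uM / km_uM)

ki = 0.1     # hypothetical intrinsic Ki of the NI triphosphate (uM)
km = 10.0    # hypothetical Km of the polymerase for the natural NTP (uM)

for ntp in (10.0, 100.0, 1000.0):   # low -> high intracellular NTP pool (uM)
    print(f"[NTP] = {ntp:7.1f} uM  ->  apparent IC50 = {apparent_ic50(ki, ntp, km):6.2f} uM")
# The apparent IC50 rises from 0.20 uM to about 10 uM as the NTP pool grows,
# illustrating why NIs may need higher doses than NNIs, which bind allosteric
# sites and do not compete with intracellular NTPs.
```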
Considering that patients with COVID-19 may have combined bacterial or fungal infections, the proportion of antibiotics used in clinical treatment is relatively high [188]. If these antibacterial drugs that have inhibitory effects on RdRp can show a significant decrease in viral load in in vitro/vivo studies, they would be good therapies and play a dual antiviral/antibacterial role. For the more effective screening of candidate drugs, we compared the advantages and disadvantages of NIs and NNIs. NIs targeting the active site of virus RdRps and NNIs targeting allosteric sites have different biochemical properties. NIs mostly exhibit spectral antiviral activity. Many anti-HCV NIs are active against multiple HCV GTs, which indicates that the catalytic active sites bound by such inhibitors are relatively highly conserved. However, the problem faced in the development of NIs is the high concentration of intracellular natural NTP. For triphosphorylated NIs to compete with high concentrations of cellular NTP to exert their antiviral activity, the dose of the drug needs to be increased, which increases the risk of drug toxicity. In addition, NNIs acting on allosteric sites exert antiviral activity by affecting the binding of the catalytically active site of RdRp to the substrate. Such inhibitors do not need to undergo metabolic activation and do not need to compete with intracellular NTP. Combined with the structural diversity of such inhibitors, these compounds appear to be better antiviral drugs than nucleoside analogues. However, the structural variability and nonconservation of adjacent allosteric sites cause the RNA virus to rapidly develop resistance to allosteric site inhibitors. As part of ongoing global efforts to prevent the spread of SARS-COV-2 and treat the resulting infection, the use of approved drugs for nonapproved uses can alleviate urgent needs. However, to prevent viruses with similar genomic and pathological characteristics from returning a few years later, more specific, safe and effective drugs need to be developed. 2, the isopropyl ester prodrug of N4-hydroxycytidine, has been approved for use in clinical trials for the treatment of patients with COVID-19 and has shown positive effects. Since deuterium atoms are twice as heavy as hydrogen atoms, the vibrational zero-point energy of carbon-deuterium bonds (C-D) is lower than that of carbon-hydrogen bonds (CeH), and C-D is more stable than CeH [189]. Wen et al. replaced the hydrogen in the active molecular group with isotope deuterium to close the metabolic site and prolong the t 1/2 of the drug, which further improves the metabolism of remdesivir in vivo and expands the scope of the treatment window and thereby reduces the therapeutic dose [190]. Additionally, the pharmacophore model is a ligand-based drug design tool that starts from the structure of known active compounds to find common pharmacodynamic feature information, thereby guiding the rational design or virtual screening of new compounds. M.S.A. Parvez et al. [187] designed a pharmacophore using 22, which was used further for screening the ZINC database. Molecular docking analysis revealed that two compounds (ZINC09128258 and ZINC09883305) with pharmacophore features that interact effectively with RdRp of SARS-CoV-2, indicating their potential as effective inhibitors of the enzyme (Fig. 9). In addition, in recent years, the research and development of RdRp inhibitors for RNA viruses has continued to be hot. Wang et al. 
reported the synthesis and biological evaluation of a series of 2′,3′- and 2′,4′-substituted guanosine nucleotide analogues as HCV NS5B polymerase inhibitors [191], and 6′-fluorinated aristeromycins were designed as dual-target compounds for the development of broad-spectrum antiviral agents against RNA viruses [192]. These new compounds may also offer hope against the COVID-19 epidemic. Undeniably, the development of efficient anti-SARS-CoV-2 drugs within a short time faces considerable obstacles and unidentified threats. Nonetheless, efforts to establish antiviral medicines to fight the novel coronavirus are urgently needed. Scientists from clinical research organizations and pharmaceutical companies, along with front-line doctors, should enhance their cooperation to jointly advance the pharmaceutical, scientific and preclinical testing of appropriate anti-coronavirus medications.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Irregular Wave Validation of a Coupling Methodology for Numerical Modelling of Near and Far Field Effects of Wave Energy Converter Arrays : Between the Wave Energy Converters (WECs) of a farm, hydrodynamic interactions occur and have an impact on the surrounding wave field, both close to the WECs (“near field” effects) and at large distances from their location (“far field” effects). To simulate this “far field” impact in a fast and accurate way, a generic coupling methodology between hydrodynamic models has been developed by the Coastal Engineering Research Group of Ghent University in Belgium. This coupling methodology has been widely used for regular waves. However, it has not been developed yet for realistic irregular sea states. The objective of this paper is to present a validation of the novel coupling methodology for the test case of irregular waves, which is demonstrated here for coupling between the mild slope wave propagation model, MILDwave, and the ‘Boundary Element Method’-based wave–structure interaction solver, NEMOH. MILDwave is used to model WEC farm “far field” effects, while NEMOH is used to model “near field” effects. The results of the MILDwave-NEMOH coupled model are validated against numerical results from NEMOH, and against the WECwakes experimental data for a single WEC, and for WEC arrays of five and nine WECs. Root Mean Square Error (RMSE) between disturbance coefficient (Kd) values in the entire numerical domain ( RMSE K d , D ) are used for evaluating the performed validation. The RMSE K d , D between results from the MILDwave-NEMOH coupled model and NEMOH is lower than 2.0% for the performed test cases, and between the MILDwave-NEMOH coupled model and the WECwakes experimental data RMSE K d , D remains below 10%. Consequently, the efficiency is demonstrated of the coupling methodology validated here which is used to simulate WEC farm impact on the wave field under the action of irregular waves. Introduction Ocean waves are an enormous marine renewable energy source with the potential to contribute to a reduction in the world's fossil fuel dependency. The exploitation of wave energy is a complex and expensive process that takes place in a rough environment. As a result, a large number of Wave Energy Converters (WECs) technologies are under development [1], with none of them yet reaching a commercial stage. In addition, many WECs have to be deployed and arranged in WEC farms to produce large amounts of electricity and to have economically viable wave energy projects. The overall wave power absorption of a WEC farm will affect the surrounding wave field creating areas of reduced wave energy (areas of decreased wave height) in the lee of the WEC farm as seen in [2][3][4][5][6][7][8]. The hydrodynamic problem of wave power absorption between the WECs within a farm, and between the WECs and the incident wave field is characterized by three different problems namely: wave reflection, diffraction and radiation. The superposition of the reflected, diffracted and radiated wave fields results in a perturbed wave field. The perturbed wave field close to the WECs of the farm caused both by WEC-WEC and wave-WEC interactions is often referred to in literature as the "'near field" effects while the propagation of this perturbed wave field at a larger distance from the WEC farm e.g., in the coastal zone, is referred to as the "far field" effects [9][10][11][12][13][14][15][16]. 
Substantial numerical research has been carried out to study the "'near field" effects in WEC farms, focusing on optimizing the WEC farm layout and maximizing the power output by employing wave-structure numerical models. Typically, numerical models based on potential flow theory have been used either for calculating semi-analytical coefficients [17][18][19] or by means of Boundary Elements Method based models (BEMs) [20][21][22]. The aforementioned numerical models are suited to resolve more accurately the details of WEC (farm) "'near field" effects. However, they are not able to account for the physical processes that influence the "far field" effects such as wave propagation over a varying bathymetry and wave breaking. Furthermore, the numerical simulation time can increase considerably when increasing the number of WECs modelled and the size of the numerical domain. In recent years, the use of non-linear numerical models based on Computational Fluid Dynamics (CFD) [23,24] and Smoothed Particle Hydrodynamics (SPH) [12,25,26] has increased as these models can take into account non-linear effects for wave-structure interactions. Nonetheless, the use of these models is restricted to a small spatial and temporal scale and to an even more limited number of WECs, which makes them also not suitable to study WEC (farm) "far field" effects in a large numerical domain due to high computational cost. "Far field" effects are traditionally studied in a computationally cost-efficient way using wave propagation models. In [2][3][4]7,8,[27][28][29], phase-averaging spectral models are used to obtain the wave field in the lee of a WEC farm. The WEC farms in these studies are simplified as obstacles which have been assigned a fixed transmission (and thus wave power absorption) coefficient. In a similar way, Refs. [30,31] used a time-dependent mild slope equation model and simplified each WEC as a wave power absorbing obstacle. To obtain the frequency-dependent wave power absorption coefficient for phase-averaging spectral models and the wave power absorption coefficient (assigned to obstacles/structures) for time-dependent mild slope equation models, wave tank testing or numerical modeling are required. Therefore, the simplified parametrization of the wave power absorbed by WECs is not taking into account the wave-structure interactions of diffraction and radiation of the different WECs modelled [32]. This inaccuracy may lead to an overestimation or underestimation of the WEC farm power absorption and consequently an unrealistic estimation of the "far field" effects in the coastal zone. From the aforementioned studies, it is clear that modeling the perturbed wave field around a farm of WECs is a complex process. Usually "near field" and "far field" effects are approached separately due to the difficulties in using a single numerical model to obtain a fast and accurate solution for both effects. To rectify these limitations, different coupling methodologies between wave-structure interaction solvers and wave propagation models have been developed in the recent years [9][10][11][12][13][14][15]. This allows higher precision in the estimation of "far field" effects, by using a wave-structure interaction solver to obtain an accurate solution of the wave field in a limited area around the WECs of a farm and propagating this resulting wave field further away using a wave propagation model over a coastal zone. 
As pointed out in [12], there are different types of coupling methodologies which use one-way and two-way coupling, respectively. In one-way coupled models, there is information transfer in one direction only, where each numerical model is run independently. Examples of such studies, which present linear simulation of "far field" effects of WEC farms by coupling a wave propagation model and a BEM solver, are carried out by [9][10][11]13,33,34]. Alternatively, in two-way coupled models, both numerical models are run at the same time with a two-way transfer of information between them. Examples of two-way coupled models are provided by [12] who demonstrated coupling of a non-linear wave propagation model with an SPH wave-structure interaction solver, or by [35] who simulated a submerged buoy using a non-hydrostatic wave-flow model implemented in the wave propagation model SWASH [36]. In the present study, a continuation of the one-way coupling methodology presented in [13,14,37] for regular waves between the wave propagation model MILDwave [10,38] and the wave-structure interaction solver NEMOH [39] is performed. This coupling methodology is based on the work of [9,38], who first presented a coupling between a wave propagation model (MILDwave) and a wave-structure interaction solver WAMIT [40]. In [14] specifically, the step-by-step procedure of this coupling methodology is presented and its application range. Moreover, in [14], the theoretical background of both the coupling methodology and of the employed numerical models (MILDwave and NEMOH) is provided. Furthermore, in [14], experimental data from the "WECwakes" database [41] has been used and more specifically wave field measurements for a 9-WEC array interacting with the incoming waves. The latter was used to perform validation of the coupling methodology for regular waves propagating through the 9-WEC array, obtaining good agreement between the experimental and numerical results regarding the impact of the 9-WEC array on the surrounding wave field. In [14], irregular waves were briefly introduced, yet not validated, without presenting a fully developed coupling methodology for irregular wave simulations. Here, the novelty of this study is the validation of a fully developed coupling methodology for modeling irregular waves using available experimental data [16,41]. In the present manuscript, the coupling methodology is presented in detail for irregular wave generation. Furthermore, the irregular wave cases of a 9-WEC array, a 5-WEC array and a single WEC are selected from the "WECwakes" database for simulations using the coupling methodology and for validation purposes. Moreover, numerical results of the MILDwave-NEMOH coupled model are compared to NEMOH numerical results and experimental data, showing that the coupled model is able to accurately parse the information between the NEMOH and MILDwave numerical domains in the "near field". This information is then propagated into the "far field" in the MILDwave numerical domain as MILDwave correctly models coastal transformations [42]. Based on the results from [14] and on the current results from the present work, it is demonstrated that the developed and validated coupling methodology can be a useful tool for cost-efficient computational time simulations of coastal impacts of farms of floating structures and WECs over a large coastal zone. 
In contrast, it should also be noted that, due to the limitations of the numerical models employed here, the resulting MILDwave-NEMOH coupled model cannot be used for non-linear sea states and to model morphological coastal impacts. The structure of the paper is as follows: Section 1 provides a short overview of the state-of-the-art and problem statement. Section 2 presents a description of the generic coupling methodology. Section 3 illustrates the MILDwave-NEMOH coupled model, including a detailed description of the coupling methodology implementation, the wave propagation solver MILDwave and the wave-structure interaction solver NEMOH. A validation test case is described in Section 4 and the results are presented in Section 5. In Section 6, the capability of the "MILDwave-NEMOH" coupled model to simulate "far field" effects of WEC farms is discussed. Finally, the conclusions of this and future work are drawn in Section 7. Generic Coupling Methodology In this section, the generic coupling methodology first introduced by [9] is briefly presented. The objective of the coupling methodology is to obtain the total wave field around a (group of) structure(s), as a superposition of the incident wave field and the perturbed wave field (which is a combination of the reflected, diffracted and radiated wave fields). The incident wave field propagation and transformation is calculated over a large domain using a wave propagation numerical model. The perturbed wave field is simulated using a wave-structure interaction solver over a restricted domain around the structure(s), namely the coupling region. As it has been pointed out in [9], this coupling methodology can be applied by employing any wave-structure interaction solver that describes the perturbed wave field, any wave propagation model and any type of oscillating or floating structure(s). The general strategy for the coupling methodology has been also recently reported and updated in [14], but, for clarity, it is presented here briefly. It consists of four steps. Firstly (Step 1), a wave propagation model is used to obtain the incident wave field at the location of the structure(s) when the structure(s) is (are) not present. Secondly (Step 2), the obtained wave field from Step 1 is used as an input for the wave-structure interaction solver at the location of the structure(s). Then, the motion of the structure(s) is solved and an accurate solution of the perturbed wave fields around the structure(s) is obtained. Thirdly (Step 3), the perturbed wave field is used as an input in the wave propagation model and is propagated throughout a large domain. This is done by prescribing an internal wave generation boundary around the structure location. Finally (Step 4), the total wave field due to the presence of the structure(s) is obtained as the superposition of the incident wave field and the perturbed wave field in the wave propagation model. Application of the Coupling Methodology between the Wave Propagation Model, MILDwave, and the Wave-Structure Interaction Solver NEMOH for Irregular Waves In this section, the generic coupling methodology presented in Section 2 will be demonstrated for coupling between the wave propagation model MILDwave and the wave-structure interaction solver NEMOH. First, a description of the two numerical models employed is presented. Subsequently, a description of the irregular wave generation for the incident, perturbed and total wave fields is provided. 
The Wave Propagation Model, MILDwave and the Wave-Structure Interaction Solver, NEMOH The wave propagation model chosen for demonstrating the proposed coupling methodology is the mild slope model MILDwave [10,38], developed at the Coastal Engineering Research Group of Ghent University, in Belgium. MILDwave is a phase-resolving model based on the depth-integrated mild slope equations of Radder and Dingemans [43]. MILDwave allows for solving the shoaling and refraction of waves propagating above mild slope varying bathymetries, and it has been widely used in the modeling of WEC farms [10,11,13,30,31,41,44,45]. The basic MILDwave equations are reported in [10]. The wave-structure interaction solver chosen to solve the diffraction/radiation problem is the open-source potential flow BEM solver NEMOH, developed at Ecole Centrale de Nantes [39]. Linear potential flow theory has hitherto been utilized in a majority of the investigations into WEC array modeling-for example, see [11,19,46,47]. NEMOH is based on linear potential flow theory [48], and the basic equations and assumptions employed are reported in [14]. Generation of the Incident Wave Field for Irregular Waves Irregular waves can be generated by applying the superposition principle of a number of different linear regular wave components. The incident wave field for a linear regular wave is generated intrinsically in MILDwave. Moreover, MILDwave allows for solving shoaling and refraction of waves propagating over complex bathymetries. The numerical set-up of MILDwave is illustrated in Figure 1. Waves are generated along a linear offshore wave generation boundary by applying the boundary condition of linear regular waves generation: where η I,reg is the incident regular wave surface elevation, a is the wave amplitude, ω is the angular frequency, k is the wave number and θ is the wave direction. To minimize unwanted wave reflection, absorption layers are placed down-wave and up-wave in the numerical wave basin. By applying the superposition principle, a first order irregular wave is represented as the finite sum of N regular wave components characterized by their wave amplitude, a j , and wave period, T j , derived from the wave spectral density, S j : where where η I,irreg is the incident irregular wave surface elevation and a j is the wave amplitude, ω j is the wave angular frequency, f j is the wave frequency, k j is the wave number, θ j is the wave direction and ϕ j is the incident phase, of each wave frequency component. ϕ j is selected randomly between −π and π to avoid local attenuation of η I,irreg . Generation of the Perturbed Wave Field for Irregular Waves To calculate the irregular perturbed wave field around a (group of) structure(s) first, it is necessary to obtain the perturbed wave field for each wave frequency as a regular wave. The perturbed wave field in the time domain for a regular wave is obtained in two steps and the generic numerical set-up is illustrated in Figure 1. First, a frequency-dependent simulation is performed using NEMOH to obtain the complex perturbed wave field around the (group of) structure(s). NEMOH resolves the wave frequency-dependent wave radiation problem for each structure(s) and the diffraction (including radiation) over a predetermined numerical grid with the wave phase ϕ = 0 at the center of the domain. 
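The component superposition described in this section, used for both the incident and the perturbed irregular wave fields, lends itself to a compact numerical illustration: each regular component receives an amplitude derived from the discretized spectrum (the common choice a_j = sqrt(2·S(f_j)·Δf) is assumed here) and a random phase in (−π, π), and the irregular surface elevation is their sum. The Python sketch below is a minimal, stand-alone illustration of that synthesis at a single point; it is not MILDwave or NEMOH code, the JONSWAP-like spectrum routine is a generic assumption, and the parameter values are only representative of the test conditions discussed later.

```python
import numpy as np

# Minimal sketch of first-order irregular wave synthesis by superposition of
# N_f regular components with random phases. Spectrum shape, Hs, Tp and N_f
# are illustrative assumptions.
rng = np.random.default_rng(42)

Hs, Tp, Nf = 0.104, 1.26, 20                 # significant wave height (m), peak period (s), components
f = np.linspace(0.5 / Tp, 3.0 / Tp, Nf)      # discretized frequency axis (Hz)
df = f[1] - f[0]

# Unnormalised JONSWAP-like shape, rescaled so that m0 = (Hs/4)^2.
fp, gamma = 1.0 / Tp, 3.3
sigma = np.where(f <= fp, 0.07, 0.09)
shape = f**-5 * np.exp(-1.25 * (fp / f)**4) * gamma**np.exp(-((f - fp)**2) / (2 * sigma**2 * fp**2))
S = shape * (Hs / 4.0)**2 / np.sum(shape * df)   # spectral density (m^2/Hz)

a = np.sqrt(2.0 * S * df)                    # component amplitudes a_j
phi = rng.uniform(-np.pi, np.pi, Nf)         # random phases to avoid local attenuation
omega = 2.0 * np.pi * f

# Surface elevation at one point (the spatial phase k_j*(x*cos(theta)+y*sin(theta)) is omitted).
t = np.arange(0.0, 600.0, 0.05)              # 600 s record, as in the sensitivity study
eta = np.sum(a[:, None] * np.cos(omega[:, None] * t[None, :] + phi[:, None]), axis=0)

print(f"Hs recovered from the time series: {4.0 * eta.std():.3f} m (target {Hs} m)")
```

The same superposition applies to the perturbed components, with the amplitudes and phases taken at the centre of the circular wave generation boundary instead of the spectrum alone.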
The resulting radiated and diffracted wave fields for each wave frequency depend on the shape and number of floating structure(s), the number of Degrees of Freedom (DOF) considered, the local constant water depth and the wave period. The radiated (for each structure) and diffracted (for all structures) complex wave fields in NEMOH are summed up to obtain the perturbed wave field, η pert : where η rad is the radiated wave field and η di f f is the diffracted wave field. Secondly, the perturbed wave field is transformed from the frequency domain to the time domain and imposed onto MILDwave using an internal wave generation boundary ( Figure 1). For this study, a circular wave generation boundary is prescribed; however, it can be defined using other shapes as well. Waves are forced away from the circular wave generation boundary by imposing values of free surface elevation η circ (x, y, t) as described by Equation (5): where η pert is the perturbed complex wave field in the circular wave generation boundary, and a c and ϕ pert,c are the wave amplitude of the incident wave and the wave phase of the perturbed wave at the center of the circular wave generation boundary, respectively. To avoid unwanted wave reflection, wave absorption layers or relaxation zones are implemented up-wave, down-wave and also in the sides of the MILDwave numerical domain ( Figure 1). As in the case for the calculation of the irregular incident wave field, the irregular perturbed wave field is calculated as the finite sum of N regular perturbed wave components characterized at the center of the wave generation boundary by their wave amplitude, a c,j , derived from the wave spectrum: where and η pert,irreg is the perturbed irregular wave surface elevation where S c,j is the spectral density and ϕ pert,c,j is the perturbed wave phase of each frequency component. ϕ pert,c,j is selected randomly between −π and π to avoid local attenuation of the surface elevation. Generation of the Total Wave Field for Irregular Waves The total wave field for irregular waves due to the presence of a (group of) structure(s) is obtained by applying the generic coupling methodology described in Section 2. This is performed by superimposing the irregular incident wave field and the irregular perturbed wave field generated in MILDwave as shown in Sections 3.2 and 3.3, respectively. Step 1 of the generic coupling methodology is applied N times for irregular waves to calculate the incident wave field for N regular wave components in MILDwave by applying a random phase ϕ i for each simulation. From each simulation a c,i , and ϕ c,i are obtained at the center of the circular wave generation boundary and are used as input values for NEMOH. In Step 2, the perturbed wave field is obtained in NEMOH. In NEMOH, ϕ pert,c,j is referenced with respect to the center of the domain (Section 3.3). Therefore, ϕ pert,c,j at the NEMOH numerical domain has to be corrected using the ϕ j of the regular incident wave field to assure wave phase matching between the incident and the perturbed waves in MILDwave. Afterwards, in Step 3, the perturbed wave field is then transformed from the frequency domain to the time domain and propagated into MILDwave for N regular perturbed wave components along the circular wave generation boundary. Finally, in Step 4, the irregular incident wave field is obtained as the superposition of the N incident regular waves simulations from Step 1. 
The irregular perturbed wave field is obtained as the superposition of the N perturbed regular wave simulations from Step 3. The total wave field for irregular waves is obtained as the combination of the irregular incident and perturbed wave fields: where η tot,irreg is the total irregular wave surface elevation, and η I,reg,j and η pert,reg,j are the incident and perturbed wave surface elevations of each wave frequency, respectively. Validation Strategy of the Coupling Methodology between the Wave Propagation Model, MILDwave, and the Wave-Structure Interaction Solver, NEMOH In this section, a validation test case is presented to validate the MILDwave-NEMOH coupled model against numerical results from NEMOH and experimental data. Showing that the perturbed wave field can be precisely parsed from the NEMOH to the MILDwave domain in the near field of the WEC array. The criteria evaluated for the numerical model validation are also described. Validation Test Cases The validation of the demonstrated generic coupling methodology is carried out by comparing the results from the MILDwave-NEMOH coupled model to those obtained from the numerical model NEMOH and the WEC array experimental data from the WECwakes project [9,16,41]. WECwakes Experimental Data-Set This section gives a short description of the experimental data-set from the WECwakes project [9,16,41] conducted in the Shallow Water Wave Basin of DHI, Hørsholm (Denmark). In the WECwakes project, arrays up to 25 point absorber type WECs (cylinders of a diameter of 0.315 m) were tested to study "near field" and "far field" effects of heaving point absorber type WECs. A Coulomb friction based damping is used. The DHI wave basin is 22 m wide and 25 m long and the overall water depth is fixed to 0.7 m. Different WEC array configurations have been tested during the WECwakes project under a wide range of sea states, a large experimental data-set has been generated and is publicly available for numerical validation purposes and for WEC array design guidelines. The wave field around the WECs has been recorded using 41 resistive wave gauges (WGs) distributed in the wave basin. For the present validation study, three different WEC configurations are selected: a single WEC, an array of five WECs arranged in a 1 × 5 WEC layout and an array of nine WECs arranged in a 3 × 3 WEC layout (see Figure 1B-D). A total of 15 wave gauges located in the front, leeward and sides of the WECs array configurations are used to compare the significant wave height, H s , and the spectral density, S, between the MILDwave-NEMOH coupled model and the experimental data-set. The separating distance between the different WECs is equal to 1.575 m (centre-to-centre distance). The incident irregular wave conditions used to generate waves during the experiments test are defined by a JONSWAP spectrum with H s = 0.104 m and two peak wave periods of T p = 1.18 s and 1.26 s. "Test Case" Program The primary objective of the present research is to validate the total wave field around a WEC array obtained using the MILDwave-NEMOH coupled model. For this reason, a "Test Case" (Table 1) program based on the WECwakes experimental data-set has been designed for different irregular wave cases and WEC (array) configurations: The different "Test Cases" included in Table 1 are performed both using the MILDwave-NEMOH coupled model, and NEMOH. 
NEMOH simulation results are used: (1) as input for the MILDwave-NEMOH coupled model, and (2) as a benchmark for the validation of the MILDwave-NEMOH coupled model, which is also compared with the WECwakes data. For the simulations carried out to obtain the incident wave field, waves are generated along a linear wave generation boundary (Figure 1A). For the simulations carried out to obtain the perturbed wave field, waves are generated using an internal circular wave generation boundary (Figure 1B-D). The three different WEC (array) configurations of Table 1 are simulated using different coupling radii for the circular wave generation boundary (see Figure 1B-D). Each coupling radius is obtained following the recommendations by [11] as 0.5 times the wave length (L) plus the radius of the WEC or the distance from the centre of the circular area to the most distant WEC, for a single WEC and a WEC array, respectively. Four equally sized wave absorbing sponge layers are placed on all sides of the numerical domain. The dimensions of the total numerical wave basin in MILDwave are not always the same, as the length of the wave absorbing sponge layers (B) is different for each set of wave conditions and depends on L. As irregular waves are obtained as a superposition of N_f regular wave components, B is calculated using L_max, which corresponds to T_max of the discretized spectra. An increase of B causes a decrease of wave reflection and, as pointed out in [5], for B = 3 · L_max the wave reflection coefficient drops to 1%. The total wave field of the MILDwave-NEMOH coupled model is obtained as the superposition of the numerical results from the domains of Figure 1A-D for a single WEC, five WECs and nine WECs, respectively. In NEMOH, the effect of the WEC's Power Take-Off (PTO) system is taken into account by adding a suitable external damping coefficient, B_PTO = 28.5 kg/s, as defined in [14]. Criteria Used for the Numerical Model Validation The accuracy of the obtained numerical results is evaluated in two steps. Firstly, results from the MILDwave-NEMOH coupled model are compared against the NEMOH results. Secondly, results from the MILDwave-NEMOH coupled model are compared against WECwakes experimental data. The comparison between the MILDwave-NEMOH coupled model and NEMOH is assessed by calculating K_d coefficient values, as defined in Equations (9) and (10), respectively. The K_d coefficient is defined as the ratio between the numerically calculated local total significant wave height, H_s,tot, and the target incident significant wave height, H_s,I, imposed along the linear wave generation boundary. In the MILDwave-NEMOH coupled model, K_d,coupled is obtained in the time domain as:

K_d,coupled = 4 · sqrt( (1/Δt) · Σ_Δt (η_I,irreg,t + η_pert,irreg,t)² · dt ) / H_s,I,    (9)

where η_I,irreg,t and η_pert,irreg,t are the free surface elevations for irregular incident and perturbed waves in each time step dt, from the domains of Figure 1A-D, respectively, and Δt is the time window over which K_d is computed. In NEMOH, K_d,NEMOH is obtained in the frequency domain as:

K_d,NEMOH = 4 · sqrt( Σ_{j=1}^{N_f} |η_tot,irreg,freq,j|² / 2 ) / H_s,I,    (10)

where η_tot,irreg,freq is the absolute value of the free surface elevation for the complex total wave obtained in the frequency domain. The K_d value is a useful parameter that has been used extensively in the literature to study wave field variations [9-11,22,30,31,34,42,45]. K_d > 1 and K_d < 1 indicate an increase and a decrease of the local wave height, respectively. When studying WEC arrays, increases in the local wave height indicate the presence of "hot spots" [49], defined as areas of high wave energy concentration. Instead, a decrease in the local wave height denotes "wake" effects, which result in an area of reduced wave energy.
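As a concrete illustration of these two K_d definitions, the short Python sketch below computes K_d from a surface elevation time series and from complex frequency-domain amplitudes. It is our own illustrative post-processing outline (not code from MILDwave or NEMOH), it assumes zero-mean elevations and uses H_s = 4·sqrt(m0).

import numpy as np

# Minimal sketch: disturbance coefficient Kd as the ratio of the local total
# significant wave height to the target incident significant wave height Hs_I.
def kd_time_domain(eta_incident, eta_perturbed, hs_incident):
    """Kd from time series of incident and perturbed surface elevations (zero mean assumed)."""
    eta_total = eta_incident + eta_perturbed          # total surface elevation
    hs_total = 4.0 * np.std(eta_total)                # Hs = 4 * sqrt(m0)
    return hs_total / hs_incident

def kd_frequency_domain(eta_total_complex, hs_incident):
    """Kd from the complex total wave amplitudes of the N frequency components."""
    m0 = np.sum(np.abs(eta_total_complex) ** 2) / 2.0  # zeroth spectral moment
    return 4.0 * np.sqrt(m0) / hs_incident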
To evaluate K_d differences between the MILDwave-NEMOH coupled model and NEMOH, three different outputs have been generated: 1. K_d contour plots of the entire numerical domains; 2. K_d cross-sections along the length of the numerical domains (parallel to the wave propagation direction); 3. Contour plots of the "Relative Difference" between the obtained K_d values (RD_Kd), evaluated at each of the G grid points of the numerical domain (D). The validation of results obtained from the MILDwave-NEMOH coupled model against WECwakes experimental data is carried out using data recorded at the 15 numerical and experimental WGs, respectively, as illustrated in Figure 1A. For each WG, two different outputs have been generated: 1. Spectral density plots comparing the wave spectra between the MILDwave-NEMOH coupled model and the WECwakes experimental data for the 15 WGs. 2. The Root Mean Square Error between the K_d of the MILDwave-NEMOH coupled model and the K_d,WECwakes of the WECwakes experimental data for the 15 WGs, RMSE_Kd,WG:

RMSE_Kd,WG = sqrt( (1/C) · Σ_{c=1}^{C} (K_d,coupled,c − K_d,WECwakes,c)² ),

where C is the number of Test Cases. Validation Results Sensitivity Analysis for Irregular Wave Generation Before performing the numerical simulations listed in Table 1, a sensitivity analysis is carried out to ensure a converging result of the irregular wave simulation, while keeping the computational time low. This sensitivity analysis is based on three numerical simulation criteria: (1) the total simulation time Q_tot, (2) the number of regular wave components (N_f), and (3) the grid cell size (d_x and d_y) employed in MILDwave. For each criterion, the studied parameter is varied while the other two are kept constant. The numerical domain in Figure 1A is used. Firstly, different Q_tot are considered to ensure a fully developed wave spectrum. Secondly, N_f is modified in order to achieve a wave spectrum close to the theoretical one. Thirdly, d_x = d_y is varied based on the wave length L_p of the incident waves in order to achieve a convergent solution with the theoretical spectral density S_t(f). The numerical spectral density in MILDwave, S_n,M(f), is obtained at the centre of the domain for the incident wave of "Test Case 1" of Table 1. The shortest Q_tot, smallest N_f and largest d_x = d_y resulting in an accurate solution of S_n,M(f) are then selected to perform the numerical simulations for the rest of the Test Cases. The results of the irregular wave generation sensitivity analysis are shown in Figure 2. S_n,M(f) for different Q_tot is plotted in Figure 2a, while N_f = 15 and d_x = d_y = 0.08 m are kept constant. It is clearly observed that, for Q_tot of 100 s and 300 s, S_n,M(f) does not represent S_t(f). For Q_tot of 600 s, there is a good agreement with S_t(f) even for high frequency wave components, without leading to computationally expensive simulations. S_n,M(f) for different N_f is plotted in Figure 2b, while Q_tot = 600 s and d_x = d_y = 0.08 m are kept constant. Simulations are performed for N_f = 15, 20 and 40. There is a good agreement between S_n,M(f) and S_t(f) for all three simulations, showing a slight amount of spurious energy at high wave frequencies, which is reduced by increasing N_f. Nevertheless, the accuracy gained by increasing N_f from 20 to 40 is not significant, as the S_n,M(f) peak and the energy contained within the S_n,M(f) curve are practically the same. Consequently, it is concluded that increasing N_f is not required and therefore N_f is kept at 20 to reduce the computational time.
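Referring back to the two error metrics defined above, the following Python sketch shows one way they can be evaluated. It is illustrative only; in particular, the sign convention for RD_Kd (coupled minus NEMOH, expressed as a percentage) is our assumption and not stated explicitly in the text.

import numpy as np

# Minimal sketch of the error metrics: pointwise relative difference in Kd over
# the G grid points of the domain, and the RMSE of Kd at one wave gauge over
# the C Test Cases.
def relative_difference_kd(kd_coupled, kd_nemoh):
    """Pointwise relative difference (%); sign convention (coupled - NEMOH) assumed."""
    return (kd_coupled - kd_nemoh) / kd_nemoh * 100.0

def rmse_kd_wg(kd_coupled_cases, kd_experiment_cases):
    """RMSE (%) between modelled and measured Kd at one wave gauge, over C cases."""
    diff = np.asarray(kd_coupled_cases) - np.asarray(kd_experiment_cases)
    return np.sqrt(np.mean(diff ** 2)) * 100.0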
Irregular Waves with Wave Period T_p = 1.26 s Using the MILDwave-NEMOH coupled model for Test Cases 2, 4 and 6 from Table 1, the total wave field around one, five and nine WECs, respectively, is simulated using the numerical domains of Figure 1B-D, respectively. K_d results obtained for each considered Test Case are illustrated in Figure 3. The coupling region in the MILDwave-NEMOH coupled model is masked out using a white solid circle and is not considered for the validation. For all three Test Cases, the hydrodynamic behaviour and WEC motions obtained within the coupling region affect the incident wave field in the MILDwave-NEMOH coupled model. As a result, a wave reflection pattern is generated in front of the WECs with increased K_d values, while, in the lee of the WECs, "wake effects" appear with reduced values of K_d. Across the three WEC (array) configurations, the impact in terms of wave reflection and wake effects increases with the number of WECs. Figure 3. K_d results for Test Cases 2, 4 and 6 of Table 1. Contour levels are set at an interval of 0.05 of K_d value (-). The coupling region is masked out using a white solid circle which includes the WECs (indicated by using black solid circles). Incident waves are generated from the left to the right. S1 and S2 indicate the location of cross-sections. For the validation, K_d values obtained with the MILDwave-NEMOH coupled model and with NEMOH are compared by means of the RD_Kd. Three contour plots for Test Cases 2, 4 and 6 are illustrated in Figure 4a-c, respectively. The MILDwave-NEMOH coupled model provides lower K_d results than NEMOH in the wave reflection zone up-wave of the WECs, indicated by positive values of RD_Kd, while the extent and magnitude of the wake effects are larger for the MILDwave-NEMOH coupled model, as indicated by negative values of RD_Kd. The maximum and minimum values of RD_Kd are 4% and −4%, respectively, and are obtained for Test Case 6. These differences in the RD_Kd between the two models appear close to the coupling region and the wave diffraction zones around the WECs, where increased K_d values are observed, and they increase with the number of WECs simulated. Nevertheless, these RD_Kd differences are reduced when moving away from the coupling region. Figure 4. RD_Kd results for Test Cases 2, 4 and 6 of Table 1. Contour levels are set at an interval of 2 of relative difference in K_d value (-). The coupling region is masked out using a white solid circle which includes the WECs (indicated by using black solid circles). Incident waves are generated from the left to the right. To have a closer look at the comparison between the K_d results from the MILDwave-NEMOH coupled model and NEMOH, for Test Cases 2, 4 and 6, two longitudinal cross-sections (indicated in Figure 3) are drawn: through the centre of the domain, at y = 0 m (S1), and through the location of WGs 17, 18, 19 and 20 (see Figure 1A), at y = 4.75 m (S2). Again, the coupled zone is masked out in cross-section S1 using gray colour. For all considered Test Cases, it can be observed in Figure 5 that there is very good agreement for K_d results between the MILDwave-NEMOH coupled model and NEMOH. For the MILDwave-NEMOH coupled model, K_d values are lower in the wave reflection and diffraction regions in front and on the side of the WECs, and higher in the region where wake effects occur in the lee of the WECs, compared to NEMOH. Figure 5. K_d results for the MILDwave-NEMOH coupled model and for NEMOH along two longitudinal cross-sections S1 (left) and S2 (right), as indicated in Figure 3, for: (a,b) Test Case 2; (c,d) Test Case 4; and (e,f) Test Case 6.
The coupling region is masked out in gray colour and includes the WECs' cross-sections, which are indicated by black vertical areas. Comparison Summary To complete the validation of the MILDwave-NEMOH coupled model, the rest of the Test Cases of Table 1 are presented following the methodology of Section 5.2.1. Similar conclusions are drawn for Test Cases 1, 3 and 5: the MILDwave-NEMOH coupled model provides lower K_d results than NEMOH in the wave reflection zone up-wave of the WECs, and an increased magnitude of the wake effects down-wave of the WECs, indicated by positive and negative values of RD_Kd, respectively. The results for all six Test Cases of Table 1 are then summarized by calculating the RMSE_Kd,D over all the grid points of the numerical domain. Figure 6 reports that RMSE_Kd,D values remain below 1.60% for the simulated Test Cases. Test Case 6 Results for Test Case 6 are shown in Figures 7 and 8 for the 15 WGs shown in Figure 1A. The K_d values from the MILDwave-NEMOH coupled model and from the experimental measurements (K_d,coupled and K_d,WECwakes, respectively), and the numerical (MILDwave-NEMOH coupled model) and experimental spectral densities (S_n,M−N(f) and S_WECwakes(f), respectively), are plotted in Figures 7 and 8. The MILDwave-NEMOH coupled model and the experimental data show good agreement at the WGs in the lee of the WECs, where wake effects take place, and at the Bottom Lateral WGs (see Figure 1A), for both K_d and S(f). Comparison Summary To complete the validation of the MILDwave-NEMOH coupled model against experimental data, the RMSE_Kd,WG is calculated between K_d,coupled and K_d,WECwakes for all Test Cases of Table 1. Figure 9 shows the RMSE_Kd,WG obtained for each WG of Figure 1A. The K_d obtained from the numerical data differs by at most 10.03% from the experimental data. The RMSE_Kd,WG ranges between 2.00% and 10.03%, while the highest agreement is observed at the WGs located in the lee of the WECs and at the Bottom Lateral WGs. The largest RMSE_Kd,WG values are obtained at the front WGs and at the Top Lateral WGs. Discussion An irregular wave generation sensitivity analysis for MILDwave was performed using the different simulation parameters of Section 5.1. The results show that keeping a small N_f for discretizing the irregular wave spectra, using d_x = d_y = L_p/20 and a Q_tot representing 500 waves is sufficient to obtain a good representation of the target irregular long crested sea state. Increasing N_f, decreasing d_x (= d_y) or increasing Q_tot does not lead to a significant increase in the accuracy of the obtained results, while it does lead to a steep increase of the computational time. This is illustrated for N_f = 40, where the computational time is four times higher than the computational time for N_f = 20. Section 5.2 demonstrates that the MILDwave-NEMOH coupled model can accurately propagate the perturbed wave field around different WEC (array) configurations for the linear wave theory based coupling employed here. The results of the MILDwave-NEMOH coupled model are compared against NEMOH results. Small discrepancies between NEMOH and the MILDwave-NEMOH coupled model are found close to the coupling wave generation circle in front of and in the lee of the WEC (array). These discrepancies increase as the number of WECs modelled increases, as shown in Figure 9a, though they remain within ±4%.
This shows, as pointed out in [14], that the complexity of the hydrodynamic interactions has little influence when modelling the "far field" effects. Validation of the MILDwave-NEMOH coupled model against the experimental WECwakes data is performed in Section 5.3, showing a good agreement for the different Test Cases used in this study. The error in predicting the K_d values measured at the 15 WGs of the WECwakes tests is quantified in terms of RMSE_Kd,WG (%). RMSE_Kd,WG values range from 2% to 10.02%, with the WGs in front of the WECs showing the least correspondence with the experimental data. On the contrary, a better agreement is obtained for WGs that are located further away from the WECs. The difference at the Front WGs arises from the non-linear effect of the friction between the WEC shafts and the WEC buoys, which cannot be represented with the BEM-based coupling methodology employed, as BEM is based on linear wave theory. This friction causes the experimental WEC buoy to have a smaller motion amplitude than the numerical one obtained in the BEM solver. Thus, the WEC absorbs less energy from the incoming waves, yielding a higher wave reflection in front of the WEC (array). Finally, the asymmetry in the K_d results between the Bottom and the Top Lateral zones is caused by the non-linear behaviour of the WECs in the experimental model and by unwanted wave reflection in the wave basin that cannot be modelled in the MILDwave-NEMOH coupled model. In the MILDwave-NEMOH coupled model, all the WECs of the array have identical behaviour, as shown by the symmetric values of K_d given for the top and the bottom lateral zones in Figure 7 and the symmetric total wave field shown in Figure 3. Despite this, the following considerations have to be made: (1) a linear coupled model is compared to experimental data that is inherently non-linear, as confirmed by [11], who reported that the incident wave is a weakly non-linear Stokes second order wave; (2) moreover, the experimental PTO system behaves as a Coulomb damper, yet in the numerical model it is approximated as a linear damper. Regarding the error in K_d at the 15 WGs of Figure 1A between the MILDwave-NEMOH coupled model and the WECwakes experimental data, this never exceeds 10.02%. Therefore, as the information is parsed well between the two numerical models, it can be concluded that the coupling methodology can be used to extend the numerical domain for simulating an irregular long crested wave and thus simulate the "far field" effects of WEC farms and arrays in a cost effective way. However, as has already been mentioned in the authors' previous work [14], the coupling of MILDwave and NEMOH has some limitations. Firstly, despite the fact that the computational time for simulating the different WEC arrays in this study is reasonable (the longest recorded computational time was that for Test Case 6, which lasted 2 h on 10 cores of an Intel(R) Core(TM) i7-8700 CPU @ 3.2 GHz), it can increase considerably when increasing the number of WECs. For an array of J WECs with six DOFs each, the computational time of the BEM model grows rapidly with the total number of degrees of freedom, 6J, and it further increases for larger numerical domains. Secondly, irregular waves are calculated as a superposition of regular waves. It has been proven that it is possible to obtain very good results with a low N_f; however, if a higher resolution of S_n,M−N(f) is needed, depending on the study case requirements, this would lead to a considerable increase of the computational time.
Thirdly, NEMOH calculations can only be performed for a constant bathymetry, which introduces a further limitation. Moreover, MILDwave is applied for mild slope bathymetries, limiting the MILDwave-NEMOH coupled model to coastal regions with a slope lower than 1/3. Finally, a realistic modeling of the WEC PTO system is required to maximize the WEC (array) power output and quantify WEC effects on the surrounding wave field [50]. Modeling a resistive PTO system allows us to obtain a cost-efficient simulation regarding computational times, but may result in an overestimation of the incident wave power absorbed by the WEC(s). Realistic PTO systems lead to a reduction of the power output due to losses and differences between the predicted optimum damping and the optimum damping that can be achieved in operational conditions. The control and optimization of the PTO system, however, as shown in [37], does not have a significant influence on the wave field in the "far field". The limitations of the proposed coupling methodology depend, in each case, on the type of models that are coupled [14]. Specifically, for a coupling between two linear models such as NEMOH and MILDwave, the resulting coupled model will provide conservative results in study cases where non-linear phenomena dominate. On the other hand, the above limitations can be overcome by applying the proposed coupling methodology to non-linear models. However, the use of non-linear models needs to be justified for each specific study case, as they often introduce computational instability and high computational costs. Conclusions In the present study, the validation of a novel generic coupling methodology for modeling both near and far field effects of floating structures and WECs is presented for the test case of irregular waves. This coupling methodology is demonstrated by employing the models MILDwave and NEMOH, used for the generation of irregular long crested waves. The main objective of the coupling methodology is to obtain "far field" effects of WEC arrays at a cost-efficient computational time. To validate the coupling methodology, several Test Cases from the WECwakes experimental data-set have been considered for different WEC (array) configurations and wave conditions, and performed using NEMOH and the MILDwave-NEMOH coupled model. First, the total wave field evaluated in terms of K_d was compared between the MILDwave-NEMOH coupled model and NEMOH. The MILDwave-NEMOH coupled model showed a good agreement with NEMOH for all the considered test cases, with an RMSE_Kd,D below 2%. Next, the model was validated against the experimental WECwakes data, obtaining a satisfactory agreement, with an RMSE_Kd,WG smaller than about 10% for all test cases. Despite some discrepancies between the numerical and experimental results, which are mainly caused by the inherent non-linear behavior of the experiments, it has been demonstrated that the proposed coupling methodology between the wave propagation model MILDwave and the BEM solver NEMOH can accurately parse the information between the two models, simulate the hydrodynamic behaviour of a WEC array and obtain the modified total wave field in the "near field" for irregular long crested wave conditions. As MILDwave has proven to provide the required level of accuracy for coastal real-world applications, it is possible to extend the numerical domain of the coupled model and simulate "far field" effects over large coastal areas.
Nevertheless, the MILDwave-NEMOH coupled model has some limitations: (1) its applicability is limited to linear and weakly non-linear wave conditions; (2) the computational time can increase considerably if a large number of frequencies and WECs or a complex PTO type is modelled; and (3) the extension of the WEC array is limited to a fixed bathymetry domain. Regardless of these limitations, based on the results from [14] and on the current results, we can conclude that the MILDwave-NEMOH coupled model introduced here has proven to be a reliable tool that can be applied in a fast and efficient way to calculate "far field" effects of WEC arrays. The next step in our modeling work is to extend the methodology to short crested wave conditions. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
2019-02-11T02:49:52.768Z
2019-02-08T00:00:00.000
{ "year": 2019, "sha1": "9f4db563d972c05226d1a0957384b0c690fb6c3d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/12/3/538/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2b3a6a561ad6111abeb635ec4a678e843d3a0074", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
201366898
pes2o/s2orc
v3-fos-license
Are e-scooters polluters? The environmental impacts of shared dockless electric scooters Shared stand-up electric scooters are now offered in many cities as an option for short-term rental, and marketed for short-distance travel. Using life cycle assessment, we quantify the total environmental impacts of this mobility option associated with global warming, acidification, eutrophication, and respiratory impacts. We find that environmental burdens associated with charging the e-scooter are small relative to materials and manufacturing burdens of the e-scooters and the impacts associated with transporting the scooters to overnight charging stations. The results of a Monte Carlo analysis show an average value of life cycle global warming impacts of 202 g CO2-eq/passenger-mile, driven by materials and manufacturing (50%), followed by daily collection for charging (43% of impact). We illustrate the potential to reduce life cycle global warming impacts through improved scooter collection and charging approaches, including the use of fuel-efficient vehicles for collection (yielding 177 g CO2-eq/passenger-mile), limiting scooter collection to those with a low battery state of charge (164 g CO2-eq/passenger-mile), and reducing the driving distance per scooter for e-scooter collection and distribution (147 g CO2-eq/passenger-mile). The results prove to be highly sensitive to e-scooter lifetime; ensuring that the shared e-scooters are used for two years decreases the average life cycle emissions to 141 g CO2-eq/passenger-mile. Under our Base Case assumptions, we find that the life cycle greenhouse gas emissions associated with e-scooter use is higher in 65% of our Monte Carlo simulations than the suite of modes of transportation that are displaced. This likelihood drops to 35%–50% under our improved and efficient e-scooter collection processes and only 4% when we assume two-year e-scooter lifetimes. When e-scooter usage replaces average personal automobile travel, we nearly universally realize a net reduction in environmental impacts. Introduction With a small electric motor and a deck on which a single rider stands, stand-up scooters are designed to transport riders short distances around urban settings. Ride share companies are introducing fleets of these vehicles into urban areas, allowing participants to rent the scooters for short periods of time. Dockless ride sharing allows the scooters to be left at a final destination of the user, ultimately to be retrieved by the next user or picked up for charging. Dockless shared e-scooters are touted as a solution to the last-mile problem, a means to reduce traffic congestion, and an environmentally preferable mode of transportation [1,2]. While these e-scooters have no tailpipe emissions, full consideration of the life cycle impacts is required to properly understand their environmental impacts. In this study, we use life cycle assessment (LCA) to quantify the total global warming, acidification, eutrophication, and respiratory impacts of shared dockless electric scooters. The goal of this study is to identify the key drivers for adverse environmental impacts, to offer recommendations on policies or practices that would reduce these impacts, and to compare the overall impacts to other modes of transportation.
To the best of our knowledge, this is the first peer reviewed study that comprehensively examines the environmental life cycle impacts of shared e-scooters. Life cycle approaches have been used extensively to address comparable questions for other transportation technologies. For example, other studies have examined alternative transportation options, including electric vehicles [3,4], car-sharing programs [5], autonomous vehicles [6], and electric bicycles [7,8], as well as full urban transportation systems [9] and comparisons across two-wheel vehicles [10,11]. Although it is not peer reviewed, Chester published the results of an LCA on shared dockless e-scooters that is most relevant to our analysis [12]. This study included impacts from materials and manufacturing, collection and distribution, charging, and disposal of e-scooters. Results show that manufacturing and materials are responsible for the most lifecycle CO 2 emissions, followed by collection and distribution and charging of the scooter, for a total of 320 g CO 2 /mile in the best-case scenario. While similar to our study's goal, our analysis extends this work by collecting detailed primary data for the materials inventory, daily e-scooter usage, and survey data on transportation modes being displaced, among other differences. Compared to e-scooters, far more research has been conducted on the environmental impacts of bicycles, electric bicycles, mopeds, and motorcycles. In 2001, Zhang et al quantified the life cycle environmental impacts of electric bike applications in Shanghai and found that electrification improved the environmental performance of many, but not all, impact categories relative to gasoline-powered motorbikes [7]. Cherry studied life cycle environmental impacts of personal electric two-wheelers (moped scooters) on the transportation system in China [8], finding that the materials burdens far outweighed the impacts from assembly, that there were considerable emissions attributable to charging, and that the adverse impacts from the lead batteries was high. (For modern e-scooters, lead batteries have been replaced with lithium-ion batteries.) These e-bikes were found to have lower life cycle emissions than cars per mile traveled across all pollutants examined, with most lifecycle impacts not from local pollutants, but incurred during the production process, introducing environmental externalities into other regions [8]. Luo et al used a life cycle approach to compare station-based and dockless bike sharing programs in the US [13]. They found that rebalancing (collection and distribution) of shared bicycles was the main source of life-cycle emissions in dockless bicycles, while the docking infrastructure was a major source of impacts in station-based bicycles. Consistent with Cherry [8], Luo et al note that car displacement is the most important factor in reducing emissions, with at least 34% of bike sharing trips needed to replace car usage in order to realize net impact reductions [13]. Similarly, Weiss et al, Sheng et al, and Rose studied the impacts of personal electric motorcycles and e-bicycles on the environment and transportation system, each noting the importance of the modes of transportation being displaced [11,14,15]. In addition to these studies, Hsieh et al used a system dynamics approach to examine the air pollution mitigation potential of seated electric scooters in Taiwan, but limited the scope to use phase impacts [16]. 
Sheng et al [14] compared electric motorcycles to gasoline-powered motorcycles with respect to urban noise, finding that electrification can reduce noise pollution. Additionally, Bishop et al examined the use phase environmental performance of seated electric scooters in the United Kingdom, while Leuenberger and Frischknecht conducted a full comparative LCA for two-wheeled vehicles including seated electric scooters [10,17]. Our study differs from previous research by conducting a full LCA to address the environmental impacts associated with the materials, manufacturing, transportation, charging, and end-of-life for shared dockless standing e-scooters. Conducting an LCA on an emerging technology offers a greater ability to inform policy makers and consumers, as regulations and market behavior are still developing [18]. Wender et al [19] describe approaches to early stage LCA, including the use of scenario development (e.g. [20]) to explore the range of possible outcomes. Although e-scooter technology is not in an early stage of development, the business model of shared, dockless e-scooters has emerged quickly in 2018 and is still evolving. Our study utilizes scenario analysis to better understand the ability to reduce the environmental impacts based on this model of shared dockless e-scooters. In this study, we collect primary data on e-scooter materials and use, coupled with scenarios and a Monte Carlo analysis to explore the magnitude and drivers for environmental impacts. In section 2, we describe our data sources and methods. In sections 3 and 4, we present our results and discussion. Methods In accordance with the ISO standards, our LCA includes goal and scope, inventory analysis, impact assessment, and interpretation. Figure 1 shows our system boundary diagram, which includes materials, manufacturing, e-scooter transport, use and charging, and end-of-life. Due to a lack of data availability and short scooter lifetime, we exclude routine maintenance such as replacing tires or parts during the lifetime of the scooters. The functional unit for our study is one passenger-mile traveled. Our Base Case assumptions for daily scooter usage and collection requirements are consistent with assumptions for Raleigh, North Carolina. We test a range of parameter values to assess each input's sensitivity and ensure broad applicability of results. Equation (1) describes the generalized calculation for impact per passenger-mile for each impact category:

I = ( M + T + Σ_d [ MPS_d · EF_auto + Σ_i ( E_grid,i,d · EF_grid,i,d ) ] ) / Σ_d D_d,    (1)

where the sums run over the days d of the scooter lifetime and the hours i of each day. I represents the life cycle impacts for a given impact category (kg-eq/passenger-mile). M represents the burdens associated with the materials and manufacturing of the scooter (kg-eq/scooter) and T is the burden associated with transportation of the scooter from shipping and trucking (kg-eq/scooter). MPS_d is the auto-miles traveled per day (d) for collection and distribution of scooters (auto-miles/scooter-day) and EF_auto is the emissions factor for the vehicle used to collect the scooters (kg-eq/auto-mile). E_grid,i,d is the electricity used for charging in hour i, day d (MWh/scooter). EF_grid,i,d represents the emission factor associated with the specific grid region where the scooter is being charged (kg-eq/MWh). D_d represents the scooter distance traveled on day d. We use TRACI v 2.1 characterization factors to convert inventory results to environmental impacts [21].
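To show how an impact score of this form can be evaluated, the following minimal Python sketch implements an Equation (1)-style calculation for a single impact category. It is an illustrative outline rather than the authors' model, and the function name and argument layout are our own.

import numpy as np

# Minimal sketch: life cycle impact per passenger-mile for one impact category,
# given per-day collection mileage, hourly charging energy and daily scooter
# distance over the scooter lifetime.
def impact_per_passenger_mile(M, T, mps_day, ef_auto, e_grid_day_hour, ef_grid_day_hour, dist_day):
    """
    M, T             : materials/manufacturing and transport burdens (kg-eq/scooter)
    mps_day          : auto-miles driven per scooter for collection, per day
    ef_auto          : collection-vehicle emission factor (kg-eq/auto-mile)
    e_grid_day_hour  : charging electricity per day and hour (MWh), shape (days, 24)
    ef_grid_day_hour : marginal grid emission factor (kg-eq/MWh), shape (days, 24)
    dist_day         : scooter distance travelled per day (passenger-miles)
    """
    collection = np.sum(np.asarray(mps_day) * ef_auto)                        # daily collection driving
    charging = np.sum(np.asarray(e_grid_day_hour) * np.asarray(ef_grid_day_hour))  # hourly charging emissions
    total_miles = np.sum(dist_day)                                             # lifetime passenger-miles
    return (M + T + collection + charging) / total_miles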
Materials and manufacturing To create an accurate materials inventory for an electric scooter, we disassembled a Xiaomi M365 scooter, representative of the model that shared scooter companies including Bird and Lyft currently deploy [22]. Table S1 (available online at stacks.iop.org/ERL/14/084031/mmedia) in the supporting information provides the data for the materials inventory and the ecoinvent v3.3 processes used. The materials characterization was informed by the documentation provided by the manufacturer and the material codes imprinted directly on the components. The mass of each component was recorded to the nearest gram. The ecoinvent material for production of aluminum alloy was used for the frame, representing the best ecoinvent material for 'aerospace grade' aluminum alloy, as described by the manufacturer. The major materials and components of the e-scooter include an aluminum frame (6.0 kg), steel parts (1.4 kg), a lithium ion battery (1.2 kg), an electric motor (1.2 kg), and tires with tubing (0.83 kg), which in total account for 89% of the total scooter mass. The lithium-ion battery has the cathode material LiNi1/3Mn1/3Co1/3O2 (NMC 111), as indicated by the battery manufacturer. We use the methods detailed by Ellingsen et al [23] and Ciez and Whitacre [24] to determine the environmental impacts of battery production and recycling. Manufacturing burdens are estimated from the ecoinvent process electric bicycle production, which is used as a proxy for the energy requirements to manufacture and assemble the scooter from components. We use a recycled content approach; our Base Case assumes 24% recycled content for aluminum, consistent with Chinese aluminum in 2017 [25]. Transportation to the United States We assume the scooter and battery are assembled in Shenzhen, China, as indicated by the manufacturer [26]. The total mass of the scooter, packaging, and accessories is 17.5 kg. We calculate the transportation burdens based on freight shipping to Los Angeles, California (estimated at 11 800 km) and trucking from Los Angeles to Raleigh, North Carolina (estimated at 4000 km), resulting in 207 ton-km and 70 ton-km for shipping and trucking, respectively, per scooter. Use phase The use phase impacts are influenced by the daily distance traveled on each e-scooter, the method of scooter pick-up for charging, the frequency of charging, and the time of day and location of charging. The electricity impacts of charging use seasonal marginal emissions factors from Azevedo et al [27], at an eGRID spatial resolution, which employs a statistical relationship between power plant emissions and the hourly generation from fossil fuel generators for a region [28,29]. We assume a charging rate of 84 W and a full battery charge of 0.335 kWh based on the manufacturer's specifications. E-scooter employees (chargers) can collect any scooters at any location in the city once the scooters become available for collection, without specified collection routes, areas for pick-up, or specified scooters. Matching current policy in Raleigh, we assume that the e-scooters are picked up each evening to be charged, regardless of the batteries' state of charge. To determine the distribution of the battery state of charge at pick-up, we collected end of day (8 pm-10 pm) data on 800 scooters through the Bird rider application. We found that 4.6% of scooters were fully charged (i.e. unused that day), as shown in figure S1.
We assume that the distribution of personal vehicles in Wake County, North Carolina is representative of the vehicles used for collection and distribution. Using the EPA Motor Vehicle Emission Simulator (MOVES) version 2014a, we determine a global warming (kg CO2-eq/mile) distribution of passenger car and truck emissions for both gasoline and diesel vehicles. For respiratory effects (kg PM2.5-eq/mile), acidification (kg SO2-eq/mile), and eutrophication (kg N-eq/mile), we use a lognormal distribution based on the small passenger vehicle (EURO4) process in ecoinvent 3.3. To bound the parameters in our analysis, we collected data from several employees of shared scooter companies on how many scooters are picked up per trip and the distance traveled for collection and distribution of scooters, finding a range of 0.6-2.5 miles per scooter for collection and distribution. Given that shared dockless e-scooters are a recent phenomenon, comprehensive data do not yet exist for the distribution of lifetimes for these products under these usage conditions. In our analysis, we test a wide range of plausible scooter lifetimes (0.5-2 years), informed by battery lifetimes, the manufacturer warranty, and reports of damage under shared usage programs [26,30]. A 500 cycle lifespan for NMC 111 batteries, as specified by the manufacturer, would result in a scooter lifetime of 18 months under a high-usage approach. For the sale of these scooters to individuals, the manufacturer provides a warranty of 12 months on the main body and 6 months for the accessories [26]. Shared e-scooters may have much shorter lifetimes due to mistreatment, or may last longer under lower usage scenarios. Recent reports have suggested that many scooters are damaged by e-scooter users or other citizens, and thus that shared e-scooters may have far shorter lifetimes [30]. To better understand the net impacts of e-scooter usage, we compare our results to alternative modes of transportation. To properly bound this comparison, we conducted a survey of 61 riders and use another published survey [31] to gain insights into the mode of transportation that e-scooters are replacing (e.g. walking, personal automobile). See tables S6 and S7. Monte Carlo analysis and scenarios To investigate the inherent variability and uncertainty of several of the parameters used in this study, we conduct a Monte Carlo analysis with assumed distributions for relevant parameters to determine the overall distribution of life cycle impacts, as shown in table 1. Then, using the Base Case assumptions, we test the sensitivity of the results to each parameter in isolation to determine which parameters have the greatest impact on the results. In addition to the Base Case, we examine three scenarios relating to the e-scooter collection for charging and one additional scenario related to e-scooter lifetime. In 'Low Collection Distance,' we assume that the retrieval and distribution distance of e-scooters is reduced, resulting in 0.6 miles driven per scooter for collection and distribution. In 'High Vehicle Efficiency,' we assume that fuel-efficient vehicles are used exclusively for collection and distribution. In 'Battery Depletion Limit,' we assume that only scooters with a low battery state of charge are collected for charging. In 'High Scooter Life,' we assume a two-year scooter lifetime. Figure 2 shows the life cycle environmental impacts per passenger-mile traveled for each scenario. In the Base Case, the average global warming impact is 202 g CO2-eq/passenger-mile, with 50% from materials and manufacturing and 43% of impacts coming from collection and distribution. The burdens from the electricity used to charge the scooter contribute only 4.7% of the total, while the transportation from the manufacturer proves to be trivial.
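To illustrate the structure of such a Monte Carlo analysis, the Python sketch below samples the uncertain inputs and propagates them to a distribution of global warming impact per passenger-mile. The sampling ranges for lifetime (0.5-2 years) and collection distance (0.6-2.5 miles per scooter-day) and the 0.335 kWh charge come from the text; M_MANUFACTURING, EF_GRID, DAILY_MILES and the vehicle emission factor distribution are illustrative placeholders, not values reported in the study.

import numpy as np

# Minimal sketch of the Monte Carlo propagation (not the authors' model).
rng = np.random.default_rng(0)
N_RUNS = 10_000

M_MANUFACTURING = 150.0   # kg CO2-eq per scooter (assumed placeholder)
EF_GRID = 0.5             # kg CO2-eq/kWh charged (assumed placeholder)
CHARGE_KWH = 0.335        # full battery charge, from manufacturer specifications
DAILY_MILES = 8.0         # passenger-miles per scooter-day (assumed placeholder)

lifetime_days = rng.uniform(0.5, 2.0, N_RUNS) * 365       # scooter lifetime (days)
collect_miles = rng.uniform(0.6, 2.5, N_RUNS)              # auto-miles per scooter-day
ef_auto = rng.normal(0.4, 0.05, N_RUNS)                    # kg CO2-eq/auto-mile (assumed)

per_day = collect_miles * ef_auto + CHARGE_KWH * EF_GRID   # daily collection + charging burden
impact = (M_MANUFACTURING + per_day * lifetime_days) / (DAILY_MILES * lifetime_days)
impact_g = impact * 1000.0                                 # g CO2-eq/passenger-mile

print(f"mean: {impact_g.mean():.0f} g CO2-eq/passenger-mile")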
The error bars in figure 2 represent the range in which 95% of the Monte Carlo results fall. As shown in figures 2(b)-(d), respiratory effects, acidification, and eutrophication are also driven by a combination of the e-scooter materials and manufacturing and daily collection of the scooters. Using the recycled content approach with 24% recycled content of aluminum, the aluminum frame and lithium-ion battery make up 53%-73% of impacts in manufacturing and materials across all impact categories. The aluminum frame is found to be the highest impact driver of respiratory effects, accounting for 46% of the PM 2.5 -eq from materials and manufacturing, and the battery pack is found to be the highest driver of acidification, accounting for 46% of SO 2 -eq. Given that the e-scooters are manufactured in China and much of the primary materials are not sourced from the United States, these environmental harms are consequently not borne by the end users' community in our study. Results Alternative approaches to collect and distribute e-scooters can greatly reduce the adverse environmental impacts. Reducing the average driving distance for collection and distribution to 0.6 miles per scooter reduces the average life cycle global warming impacts by 27%, while the exclusive use of fuel-efficient vehicles for collection results in a 12% reduction. Limiting scooters collection to those with a low battery state of charge would require a change in policy to allow scooters to remain in public spaces overnight, but could yield a net reduction in global warming impacts of 19%. Figure 3 shows the distribution of the results for scenarios that may reduce global warming impacts. In all scenarios except High Scooter Life, we observe a wide range of outcomes which are driven primarily by the range of scooter lifetimes. The average values for each scenario, shown by vertical lines in figure 3, are further right than the mode value of each scenario due to short scooter lifetimes which yield a long rightward tail. Although figure 3 is truncated to more clearly display the mean values as vertical lines, the results extend as high as 514 g CO 2 -eq/passenger-mile. Table S5 in the supporting information provides the median values for the Monte Carlo results. Due to the long tail of high values for the Base Case, Low Collection Distance, Battery Depletion Limit, and High Vehicle Efficiency scenarios, the median results are 13% to 19% lower than the average results. Comparable results for respiratory impacts, acidification, and eutrophication are shown in the SI, Figures S2-S4. Figure 4 shows the results of the sensitivity analysis on global warming impacts. We see that the global warming impacts are most sensitive to the daily usage of the scooter, scooter lifetime, distance driven for collection, and vehicle fuel efficiency. Both low daily usage of the scooter and low scooter lifetimes show very high global warming impacts driven from the manufacturing and materials burdens, which are spread across a smaller number of passenger-miles traveled over the e-scooter lifetime. Figure 4 also shows that the results are insensitive to the distance for transporting the scooter from the manufacturer to the point of use and the grid emissions. While this study was conducted with parameters specific to Raleigh, North Carolina, the results can be interpreted and used for a wide range of locations. 
We found that the environmental impacts of the transportation of the scooter from the manufacturer to the end use location is trivial and the potential differences in grid emissions for charging the e-scooter yield small changes in the overall results. Relative to emissions from charging in Raleigh, charging with a 0 kg CO 2 /kWh power source (to approximate wind, solar, or nuclear) would decrease life cycle emissions by 6%, while charging with a 1 kg CO 2 /kWh power source (to approximate coal generation) would increase life cycle emissions by 4%. The most important parameter that would vary across locations is the collection miles driven per scooter mile. Densely populated metropolitan areas may enable higher densities of e-scooters and lower collection driving distances per scooter. Conversely, sparsely populated or sprawling areas would likely necessitate higher collection miles driven. Our sensitivity analysis shows that reduced collection distances of 0.6 miles per scooter reduce the life cycle CO 2 emissions by 27%, while longer driving distances of 2.5 miles per scooter increase life cycle CO 2 emissions by 27%. To better understand the net impacts of shared e-scooter use, we consider the modes of transportation that are being displaced. In our survey of e-scooter riders, 7% of users reported that they would not have taken the trip otherwise, 49% would have biked or walked, 34% would have used a personal automobile or ride-share service, and 11% would have taken a public bus (table S7). These results are consistent with a survey conducted in Portland, Oregon, which shows 8% would not have taken the trip, 45% would have biked or walked, 36% would have used an automobile, and 10% would have used a bus or streetcar [31]. To estimate the displaced burdens from e-scooter usage, we assume that each passenger-mile on an e-scooter displaces 0.34 passenger-miles in a personal car, 0.11 passenger-miles on a public bus, and 0.08 miles on a bicycle. We also assume the life cycle global warming impacts of personal car use is 414 g CO 2 -eq/passenger-mile, using Argonne National Laboratory's GREET 2 model with US average petroleum mix, vehicle model year 2012, 26 miles per gallon efficiency, and one passenger [32]. We assume impacts from bus ridership is 82 g CO 2 -eq/passenger-mile [33], consistent with the well-to-wheels calculation for urban diesel bus use during peak hours from Chester and Horvath, 2009, with the important caveat that emissions from buses do not decrease proportionally with the loss of one rider. We assume that the use of a personal bicycle results in 8 g CO 2 -eq /passenger-mile [11]. Using these assumptions, we calculate that the avoided life cycle emissions from car and bus use is 150 g CO 2 /passenger-mile, which we term the 'Benchmark Displacement.' This Benchmark Displacement rate is 26% lower than the average Base Case impacts associated with the use of shared e-scooters and very near the High Scooter Life and Low Collection Distance scenarios. In table 2, we present the likelihood that the e-scooter life cycle global warming impacts per passenger-mile traveled exceeds the impacts associated with the Benchmark Displacement and alternative single modes of transportation. For this assessment, we use representative life cycle emissions values for these alternatives and report the share of e-scooter Monte Carlo analysis results that exceed those values. 
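Returning to the Benchmark Displacement introduced above, a quick arithmetic check using the displacement shares and per-mile values quoted in the text gives 0.34 × 414 + 0.11 × 82 + 0.08 × 8 ≈ 140.8 + 9.0 + 0.6 ≈ 150 g CO2-eq/passenger-mile, which matches the Benchmark Displacement value used in the comparisons that follow.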
The personal automobile, bus with high ridership, and bicycle emissions match those previously described in the calculation of the Benchmark Displacement. The shared dockless bicycles represent non-electric bikes that require 'rebalancing' [13]. The electric moped, electric bicycle, and bicycle values represent personal ownership, which do not require rebalancing [11]. These results show that dockless e-scooters consistently result in higher life cycle global warming impacts relative to the use of a bus with high ridership, an electric bicycle, or a bicycle per passenger-mile traveled. However, choosing an e-scooter over driving a personal automobile with a fuel efficiency of 26 miles per gallon results in a near universal decrease in global warming impacts. The use of dockless e-scooters are often preferable to dockless bicycles, yielding lower life cycle emissions 67% to 100% of the time across the scenarios. When compared to the Benchmark Displacement CO 2 emissions, our Base Case shows a 65% chance that the life cycle e-scooter emissions will be higher. This likelihood is reduced, but nontrivial, for our Low Collection Distance (35%), Battery Depletion Limit (40%), High Vehicle Efficiency (50%), and High Scooter Lifetime (4%) scenarios. These results underscore the importance of ensuring long lifetimes for e-scooters in reducing life cycle emissions. Discussion In this study, we found that the global warming impacts associated with the use of shared e-scooters are dominated by materials, manufacturing, and automotive use for e-scooter collection for charging. Increasing scooter lifetimes, reducing collection and distribution distance, using more efficient vehicles, and less frequent charging strategies can reduce adverse environmental impacts significantly. Without these efforts, our Base Case calculations for life cycle emissions show a net increase in global warming impact when compared to the transportation methods offset in 65% of our simulations. Taken as a whole, these results suggest that, while e-scooters may be an effective solution to urban congestion and last-mile problem, they do not necessarily reduce environmental impacts from the transportation system. Cities that seek to integrate e-scooters into their transportation system have several policy options available to reduce the life cycle environmental burdens associated with their use. Allowing e-scooters to remain in public areas overnight would decrease the automobile burdens associated with picking up fully charged or nearly fully charged e-scooters. Requiring central management or improved e-scooter collection processes could reduce the auto-miles traveled for collection and distribution. Additionally, cities could enact or enforce anti-vandalism policies to reduce e-scooter misuse or mistreatment which can result in short lifetimes (and thus high materials and manufacturing burdens per passenger-mile traveled). The scooter companies also can take meaningful action to reduce the life cycle burdens of their products. They can reduce collection and distribution burdens by incentivizing or requiring the use of efficient automobiles. In addition, they could reduce vehicle miles traveled for collection and distribution through centralized management or by allowing chargers to 'claim' e-scooters to eliminate unnecessary and competitive driving during daily collection. 
This study clearly demonstrates that there is the potential for e-scooters to increase life cycle emissions relative to the transportation modes that they displace. Although we use a Monte Carlo analysis with informed ranges for input parameters such as scooter lifetime, collection distance, and vehicle efficiency, cities and e-scooter companies alike can use this study to further explore life cycle impacts of e-scooters with a higher level of detail in the future. Claims of environmental benefits from their use should be met with skepticism unless longer product lifetimes, reduced materials burdens, and reduced e-scooter collection and distribution impacts are achieved.
2019-08-23T16:43:33.460Z
2019-08-02T00:00:00.000
{ "year": 2019, "sha1": "aadac82660ed0bd4df70793e82c03e7340c901c8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1748-9326/ab2da8", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "163e75e40f816d95987b4c30fba0b8aaa1460936", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
119654362
pes2o/s2orc
v3-fos-license
Cocompact lattices on \tilde{A}_n buildings Let K be the field of formal Laurent series over the finite field of order q. We construct cocompact lattices \Gamma'_0 < \Gamma_0 in the group G = PGL_d(K) which are type-preserving and act transitively on the set of vertices of each type in the building associated to G. The stabiliser of each vertex in \Gamma'_0 is a Singer cycle and the stabiliser of each vertex in \Gamma_0 is isomorphic to the normaliser of a Singer cycle in PGL_d(q). We then show that the intersections of \Gamma'_0 and \Gamma_0 with PSL_d(K) are lattices in PSL_d(K), and identify the pairs (d,q) such that the entire lattice \Gamma'_0 or \Gamma_0 is contained in PSL_d(K). Finally we discuss minimality of covolumes of cocompact lattices in SL_3(K). Our proofs combine a construction of Cartwright and Steger with results about Singer cycles and their normalisers, and geometric arguments. Introduction Let F_q be the finite field of order q where q is a power of a prime p, and let K be the field F_q((t)) of formal Laurent series over F_q, with discrete valuation ν : K^× → Z. Let ∆ be the building Ã_n(K, ν), as constructed in, for example, [R2, Chapter 9] (see also Section 2.2 below). Then ∆ is an affine building of type Ã_n, meaning that the apartments of ∆ are isometric images of the Coxeter complex of type Ã_n. The link of each vertex of ∆ may be identified with the n-dimensional projective space PG(n, q) over F_q. Let d = n + 1 and let G be the group G = G(K), where G is in the set {GL_d, PGL_d, SL_d, PSL_d}. Then G is a totally disconnected, locally compact group which acts on ∆ with kernel Z(G). It follows from a theorem of Tits [T1] that G/Z(G) is cocompact in the full automorphism group of ∆. If G is GL_d or PGL_d, then the G-action is type-rotating and transitive on the vertex set of ∆, while if G is SL_d or PSL_d, then the G-action is type-preserving and transitive on each type of vertex. See Section 2 below for definitions of these terms. By definition, a subgroup Γ ≤ G is a lattice if it is a discrete subgroup such that Γ\G admits a finite G-invariant measure, and a lattice Γ is cocompact if Γ\G is compact. In the cases G = PGL_d, SL_d and PSL_d, the centre of G = G(K) is compact, hence G acts on ∆ with compact vertex stabilisers. A subgroup Γ ≤ G is then discrete if and only if Γ acts on ∆ with finite vertex stabilisers, and if Γ ≤ G is discrete then Γ is a cocompact lattice if and only if, in addition, Γ acts cocompactly on ∆. Given any lattice Γ and a set A of vertices of ∆ which represent the orbits of Γ, the Haar measure µ on G may be normalised so that µ(Γ\G), the covolume of Γ in G, is given by the series Σ_{a∈A} |Stab_Γ(a)|^{−1} (see [BL]). This is a finite sum if and only if Γ is cocompact. The existence of an arithmetic cocompact lattice in G = G(K) is due to Borel-Harder [BH]. By Margulis' Arithmeticity Theorem [M], if d ≥ 3 then every lattice in such G is arithmetic. In the rank 1 case, that is, for d = 2, the building ∆ is a tree of valence q + 1, and there are several additional known constructions of cocompact lattices in G. For example, Figà-Talamanca and Nebbia [FTN] constructed lattices in G = PGL_2(F_q((t))) which act simply transitively on the set of vertices of the tree ∆. Such lattices are necessarily free products of s copies of the cyclic
group of order 2, and t copies of the infinite cyclic group, where s + t = q + 1. The cocompact lattices of minimal covolume in G = SL_2(F_q((t))) were constructed in [L1, LW]. These lattices are fundamental groups of finite graphs of finite groups which, using Bass' covering theory for graphs of groups [B], are embedded in G. Lubotzky [L2] also constructed a moduli space of cocompact lattices in SL_2(F_q((t))) which are finitely generated free groups, using a Schottky-type construction. If d = 3, then additional constructions of lattices in G may be complicated by the fact that there exist uncountably many "exotic" Ã_2-buildings, that is, buildings of type Ã_2 which are not of the form Ã_2(K, ν) for any field K, not necessarily commutative, with discrete valuation ν (Tits [T2]). On the other hand, for d ≥ 4, that is, for n ≥ 3, there are no exotic buildings of type Ã_n (Tits [T3]). For d ≥ 3, there exists a chamber-transitive lattice in PSL_d(F_q((t))) if and only if d = 3 and q = 2 or q = 8 (see [KLT] and its references). Lattices in the group G = PGL_d(F_q((t))) which act simply transitively on the vertex set of the associated building ∆ were constructed for the case d = 3 in [CMSZ], and for d > 3 in [CS]. We will describe the work of [CMSZ] and [CS] further below. In addition, in the case d = 3, Ronan [R1] constructed lattices acting simply transitively on the set of vertices of the same type in some, possibly exotic, Ã_2-building, and Essert [E] constructed lattices acting simply transitively on the set of panels of the same type in some, again possibly exotic, Ã_2-building. Essert's construction used complexes of groups (see [BrH]), and had as vertex stabilisers cyclic groups acting simply transitively on the set of points and lines of PG(2, q), the projective plane over F_q. Our work resolves some open questions of [E], as we explain below. Our main results are Theorems 1 and 2 below. See Section 2.1 below for the definition of a Singer cycle in PGL_d(q); such a group acts simply transitively on the set of points and lines of PG(2, q). We first construct lattices in PGL_d(F_q((t))). Theorem 1. Let G = PGL_d(F_q((t))) and let ∆ be the building associated to G. Then G admits cocompact lattices Γ′_0 ≤ Γ_0 such that: • the action of Γ′_0 and of Γ_0 on ∆ is type-preserving and transitive on each type of vertex; • the stabiliser of each vertex in Γ′_0 is isomorphic to a Singer cycle in PGL_d(q); and • the stabiliser of each vertex in Γ_0 is isomorphic to the normaliser of a Singer cycle in PGL_d(q). Moreover, Γ′_0 and Γ_0 are generated by their d subgroups which are the stabilisers of the vertices of the standard chamber in ∆. In fact, the stabiliser of each vertex in Γ′_0 is always contained in a finite subgroup of G isomorphic to PGL_d(q). However, for the vertex stabilisers of Γ_0 the situation is trickier. If (p, d) = 1, then the stabiliser of each vertex in Γ_0 is indeed contained in a finite subgroup of G isomorphic to PGL_d(q). On the other hand, as we discuss in Section 3.2, if p divides d, then the stabiliser of each vertex in Γ_0 intersects a finite subgroup of G isomorphic to PGL_d(q) in a subgroup of index p^a, where d = p^a·b and (p, b) = 1. We then construct lattices in PSL_d(F_q((t))), where we identify the group PSL_d(F_q((t))) with a subgroup of PGL_d(F_q((t))). Our notation continues from Theorem 1.
In particular, in Section 5 we give the precise structure of the vertex stabilisers in Λ 0 and Λ ′ 0 , and we describe the cases in which these lattices can be generated by their vertex stabilisers. Since the centre of SL d (F q ((t))) is finite and fixes ∆ pointwise, if Γ is any lattice in PSL d (F q ((t))) then the full pre-image of Γ under the canonical epimorphism is a cocompact lattice in SL d (F q ((t))). We thus obtain lattices in SL d (F q ((t))) as well. Of course if (d, q − 1) = 1, then the centre of SL d (F q ((t))) is trivial, and so, for example, ). Our original motivation was to find cocompact lattices of minimal covolume in SL 3 (F q ((t))). For this, it was natural to consider vertex stabilisers which are Singer cycles or normalisers of Singer cycles, since these are the vertex stabilisers of the cocompact lattices of minimal covolume in SL 2 (F q ((t))) (see [L1,LW]) and more generally in topological rank 2 Kac-Moody groups G over F q (see [CT], where the minimality result holds under the conjecture that cocompact lattices in such G do not contain p-elements). In Section 6.1 below, we show that a lattice Γ < SL d (F q ((t))) is cocompact if and only if it does not contain any p-elements. This analogue of Godement's Compactness Criterion will not surprise experts, but we were not able to find it in the literature. In Section 6.2, we are able to use this criterion to show that when (3, q − 1) = 1 and p = 2, the lattice Γ 0 is a cocompact lattice in SL 3 (F q ((t))) of minimal covolume. We also show that when (3, q − 1) = 1 and p = 3, Γ ′ 0 is a maximal lattice in SL 3 (F q ((t))), and that when (3, q − 1) = 1 and p = 3, Γ 0 is a maximal lattice in SL 3 (F q ((t))). We conclude the discussion of covolumes with a conjecture about the cocompact lattice of minimal covolume in SL 3 (F q ((t))) when (3, q − 1) = 1 and p is odd. Finally, in Section 7, we discuss how our results answer some open questions from the work of Essert [E]. For example, Theorem 2 implies that for all q such that (3, q − 1) = 1, the group SL 3 (F q ((t))) contains a lattice which acts simply transitively on the set of panels of each type in ∆. To obtain the lattices Γ ′ 0 and Γ 0 in Theorem 1, we use a construction of Cartwright and Steger from [CS], which generalises work of [CMSZ]. This construction gives cocompact lattices Γ < Γ in the automorphism group Aut( A) of a certain algebra A, such that Aut( A) is isomorphic to PGL d (F q ((t))). The lattice Γ acts simply transitively on the vertex set of ∆, and Γ = HΓ where H is a finite group which is the stabiliser in Γ of a vertex of ∆. We review and slightly extend this construction in Section 3. Our treatment applies to any cyclic Galois extension rather than just the extension of finite fields F q d ⊇ F q . In Section 3.3 we choose an explicit isomorphism Aut( A) → PGL d (F q ((t))) and so move our discussion explicitly into PGL d (F q ((t))). We also show that H is isomorphic to the normaliser of a Singer cycle S in PGL d (q). For expository reasons, we then divide the remaining proof of Theorems 1 and 2 between the case d = 3, in Section 4, and the cases d > 3, in Section 5. For all d ≥ 3, we define Γ ′ 0 and Γ 0 to be the subgroups of Γ generated by suitable Γ-conjugates of S or H, respectively. Since Γ is a discrete subgroup of PGL d (F q ((t))), it is immediate that Γ ′ 0 and Γ 0 are discrete. Using geometric arguments, we then show that Γ ′ 0 and Γ 0 act cocompactly on ∆, hence are cocompact lattices. 
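(For later reference: the covolume series quoted earlier in this introduction was flattened in extraction. Typeset, with A a set of vertices representing the Γ-orbits, it reads

\[ \mu(\Gamma \backslash G) \;=\; \sum_{a \in A} \lvert \mathrm{Stab}_{\Gamma}(a) \rvert^{-1}, \]

a finite sum if and only if Γ is cocompact.)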
The main additional ingredient in the proof of Theorem 2 is our determination in Section 3 of the intersection of H with PSL 3 (F q ((t))). This intersection is also used to show that, for certain values of d and q, in fact Γ 0 = Γ ∩ PSL d (F q ((t))) or Γ ′ 0 = Γ ∩ PSL d (F q ((t))). Preliminaries We briefly recall some definitions and results, and fix notation. 2.1. Singer cycles and projective spaces. The following definitions and results are taken from [CdR]. Let q be a power of a prime p and let V be the vector space F d q , for d ≥ 2. A cyclic subgroup S of GL d (q) that acts simply transitively on the set of non-zero vectors of V is called a Singer cycle of GL d (q). Its generator s is an element of GL d (q) of order (q d − 1) and so |S|= q d − 1. The image of a Singer cycle of GL d (q) in PGL d (q) under the canonical epimorphism is called a Singer cycle of PGL d (q). The intersection of a Singer cycle S of GL d (q) with SL d (q), that is, S ∩ SL d (q), is called a Singer cycle of SL d (q). Its image under the canonical epimorphism from SL d (q) onto PSL d (q) is called a Singer cycle of PSL d (q). A Singer cycle of PGL d (q) or of SL d (q) has order q d −1 q−1 , and a Singer cycle of PSL d (q) has order q d −1 (q−1)δ where δ = (d, q − 1). Note that a Singer cycle of PGL d (q) acts simply transitively on the set of 1-dimensional subspaces of V , and hence acts simply transitively on the set of (d − 1)-dimensional subspaces of V as well. We denote by PG(n, q) the projective space of dimension n = d − 1 over the finite field F q . Recall that the set of points of PG(n, q) is the set of 1-dimensional subspaces of V , and the set of lines is the set of 2-dimensional subspaces of V . Thus in particular, a Singer cycle of PGL 3 (q) acts simply transitively on both the set of points and the set of lines of the projective plane PG(2, q). If (3, q − 1) = 1, the order of a Singer cycle of PSL 3 (q), q 3 −1 q−1 , coincides with the order of a Singer cycle of PGL 3 (q). It follows immediately that in this case, if we identify PSL 3 (q) with a subgroup of PGL 3 (q), the Singer cycles of PSL 3 (q) and PGL 3 (q) coincide. On the other hand, if 3 divides q − 1 (that is, (3, q − 1) = 3 = 1), the order of a Singer cycle of PSL 3 (q) is q 3 −1 3(q−1) and so this subgroup cannot act transitively on the q 2 + q + 1 points of the projective plane PG (2, q). In fact, a simple application of Orbit-Stabiliser Theorem shows that even the normaliser of a Singer cycle of PSL 3 (q) cannot act transitively on the points of PG(3, q). Moreover, for large enough q, the only p ′ -subgroups of PSL 3 (q) that act transitively on the points of PG (2, q) are Singer cycles and their normalisers and only when (3, q − 1) = 1. This follows immediately from an inspection of the maximal subgroups of PSL 3 (q) that are provided by a result of Hartley and Mitchell (Theorem 6.5.3 of [GLS3]). Hence for large enough q, if 3 divides (q − 1) there are no p ′ -subgroups of SL 3 (q) that act transitively on the set of points of PG(2, q). 2.2. Buildings of type A n . We assume basic knowledge of buildings, and extract from [CMSZ] and [CS] the facts that we will need. A reference for this theory is [R2]. We also recall the Levi decomposition of a vertex stabiliser in SL d (F q ((t))) or PSL d (F q ((t))). Let ∆ be the buildingà n (K, ν) on which G(K) acts, where K = F q ((t)), as in the introduction. and two lattices L and L ′ are said to be equivalent if L ′ = La for some a ∈ K × . 
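(A brief numerical aside on the Singer-cycle orders of Section 2.1, added for illustration: take d = 3 and q = 4, so that 3 divides q − 1 and δ = (3, q − 1) = 3. Then a Singer cycle of PGL_3(4) has order

\[ \frac{q^3-1}{q-1} = \frac{63}{3} = 21 = q^2+q+1, \]

matching the number of points of PG(2, 4), while a Singer cycle of PSL_3(4) has order

\[ \frac{q^3-1}{(q-1)\,\delta} = \frac{63}{9} = 7, \]

so it cannot act transitively on the 21 points, in line with the discussion above.)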
The vertices of ∆ are the equivalence classes of lattices in K d . The group G = PGL d (F q ((t))) acts transitively on the vertex set of ∆, so that the stabiliser of the equivalence class represented by O d is Thus we may identify the vertex set of ∆ with the set of cosets G/P 0 . For g ∈ GL d (F q ((t))), we denote the image of g in PGL d (F q ((t))) by g. The type of the vertex gP 0 is ν(det(g)) (mod d). Let v 0 be the vertex of ∆ identified with the trivial coset of P 0 . Then v 0 is the vertex of type 0 in the standard chamber of ∆. For i = 1, . . . , d − 1, the vertex v i of type i in the standard chamber is a coset of the form g i P 0 where g i ∈ GL d (F q ((t))) has entries in O, and ν(det(g i )) = i. The set of all vertices adjacent to v 0 corresponds to the elements of the projective space PG(n, q), and moreover we may choose the types so that for each i = 1, . . . , d − 1, the vertices neighbouring v 0 of type i correspond to the i-dimensional subspaces of V = F d q . The action of each g ∈ PGL d (F q ((t))) on ∆ induces a permutation of the set of types of the form i → i + c (mod d), where c = ν(det(g)). Any automorphism of ∆ which induces a permutation of types of the form i → i + c (mod d), for some c, is said to be type-rotating. In particular, a type-rotating automorphism fixes either no type or all types. We will need the following decomposition of vertex stabilisers, which is a special case of a result for topological Kac-Moody groups in [CR]. and q is a power of a prime p. Let v be a vertex of the building ∆ associated to G. Then the stabiliser of v in G has Levi decomposition Generalisation of Cartwright-Steger construction We first in Section 3.1 describe the basics of cyclic algebras, following Pierce [P]. We then in Section 3.2 extend the construction of [CMSZ] and [CS] to general cyclic extensions, using invariant language. For brevity, we will refer to the construction in [CMSZ] and [CS] as the Cartwright-Steger construction. Finally in Section 3.3 we restrict to the case of finite fields and recall or prove facts that will be useful for our constructions of lattices in Sections 4 and 5 below. 3.1. Basic definitions and properties. Let E ⊇ K be a cyclic Galois extension of degree d, σ ∈ Gal(E/K) a generator and a ∈ K × an element. The cyclic algebra (E, σ, a) is generated as a ring by E and an extra element t, with E a subring so that the ring operations of E are retained in (E, σ, a). The relations involving t are The following are well-known properties of the cyclic algebras: (2) E is a maximal subfield of (E, σ, a); and (3) the elements 1, t, t 2 , . . . , t d−1 form a basis of (E, σ, a) over E. In particular, each cyclic algebra defines an element [ (E, σ, a)] in the relative Brauer group Br(E/K). Recall the definitions of the trace and the norm T, N : E → K: The norm image N (E × ) is a subgroup of K × . We also need the following properties [P]: The cyclic extension E ⊇ K gives rise to two further cyclic Galois extensions: the fields of rational functions E(Y ) ⊇ K(Y ) and the fields of Laurent series E((Y )) ⊇ K((Y )). One can think of them as Galois extensions with the same Galois group, so that σ acts on the coefficients while σ(Y ) = Y . The construction. The first cyclic algebra of interest to us is It is a division algebra, by property (5) [J, p.84]: the equation Comparing the highest terms, md = kd + n. Hence n must be divisible by d, to be a norm of some element. 
Since The second cyclic algebra of interest is It is isomorphic to the matrix algebra M d (K((Y ))) by (4). To observe this, let us note that the trace T : , and each consecutive term x n will be a solution of T (x n ) = f n (x 1 , . . . , x n−1 ) for a certain function f n of all the previously found terms. We would like to write an explicit isomorphism Ψ from A to a matrix algebra. Observe that in A for any a, b ∈ E((Y )) is an isomorphism from A to (E((Y )), σ, 1). The latter is known as the skew group algebra and admits an explicit isomorphism to the matrix algebra End K((Y )) (E((Y )) given by at j : b → aσ j (b). Composing these isomorphisms, we arrive at an explicit isomorphism given by ). We will abuse notation by denoting various restrictions of Ψ, for instance to A, by the same letter. On the level of multiplicative groups we have an injective homomorphism By the Skolem-Noether Theorem, every K(Y )-linear automorphism of A is inner, so we have another injective group homomorphism Now we are ready to introduce the Cartwright-Steger groups [CMSZ, CS]. Why is Γ a subgroup? To show this we choose a K(Y )-basis B of A consisting of the elements at m , m < d, a ∈ E. The basis B is also a K((Y ))-basis of A. Writing automorphisms in this basis gives an injective homomorphism [CS,p.129]. Thus, we can restrict the image of Φ to the special linear group: The map Θ is a semigroup homomorphism from a group, so its image consists of invertible elements: In essence, Θ is the Y -degree zero term of Φ: the basis B defined above gives an K-basis of A + . The basis B has a partial order coming from the degree of t in [at j ] = at j + Y −1 A 0 . Let T be the group of "unitriangular" transformations in this basis, that is, Finally, the "small" Cartwright-Steger group is (Since not all of T may be in the image of Θ, we should perhaps write that Γ = Θ −1 (T ) ∩ Im(Θ).) Proof. By definition of Γ, for some a, b i ∈ E × . Let us analyse the key equation If a = 0 then we immediately conclude that all σ(b i ) = 0. Hence all b i = 0 and we are done. If a = 0 then we conclude that all b 0 = 0. Then b 1 = 0. Recursively, all b i = 0 and we are done. To contemplate the difference between Γ and Γ, let us introduce another group H: as a set H consists of γ ∈ Aut(A) that are conjugations by at j , where a ∈ E and j < d. (3) HΓ is a subgroup of Γ and Γ is normal in HΓ. As recalled in Section 3.3 below, in the case of finite fields HΓ = Γ, which may or may not hold over arbitrary fields. This is an interesting question. Proof. Let us calculate in A, writing x ∼ y when x and y give the same conjugation in Aut(A). for all b and i if and only if a ∈ K and j = 0 if and only if γ = 1. This proves (2). Finally, it suffices to check that γΓγ −1 ⊆ Γ where γ is a conjugation by x, and x is either t or a ∈ E × . If β ∈ Γ, then γβγ −1 (y) = xβ(x −1 )β(y)(xβ(x −1 )) −1 . Note that elements of Γ are characterised by the fact that because there would not be enough powers of t to cancel all of the Y −j using t d = 1 + Y and produce at least an i-th power of t. as in the case of x = a. It would be useful for us to know how the image Ψ( Γ) intersects with PSL K((Y )) (E((Y ))). We can understand this for the image of H. By (K × ) k we denote the subgroup of the multiplicative group K × consisting of k-th powers. Let γ : A × → Aut(A) be the homomorphism assigning the conjugation by x to each x ∈ A × . Proposition 6. Let p be the characteristic of K. 
Denote by Ord p (m) the largest power of p that divides an integer m (or 1 if p = 0). Then Proof. The element Ψ(γ(at k )) is in PSL K((Y )) (E((Y ))) if and only if one can multiply Ψ(at k ) by a scalar matrix zI d , z ∈ K((Y )), so that the determinant of the product is 1. Now the product is a composition of four linear maps Here we use the fact that the determinant of the multiplication (b → ab) is the norm N (a). In particular, we see three norms, including N (z) = z d and N (X k ) = (1 + Y ) k . From Galois theory, we know that the action of σ on E((Y )) is conjugate to the permutation matrix of a cycle of length d that gives the last determinant. Thus, we just need a d-th root of (−1) k N (a)(1 + Y ) k in K((Y )). The free term of such a root is a d-th root of N ((−1) k a). Therefore it is necessary and sufficient to have d-th roots of both N ((−1) k a) and (1 + Y ) k . The existence of the former is equivalent to N ((−1) k a) ∈ (K × ) d , while the existence of the latter is equivalent to Ord p (k) ≥ Ord p (d). The last statement needs an explanation. Write d = Ord p (d)d ′ . Extracting a d ′ -th root of (1 + Y ) k can be done because d ′ is invertible in K: the equation can be solved recursively: x 1 is a solution of d ′ x 1 = k, and each consecutive term x n will be a solution of d ′ x n = f n (x 1 , . . . , x n−1 ) for a certain function f n of all the previously found terms. It remains to contemplate extracting of the p-th root in characteristic p: since can be done if and only if (1 + Y ) k is already a p-th power, that is, if and only if p divides k. 3.3. Application to the case of finite fields, and summary of useful results. While the algebraic properties of the construction in Section 3.2 above are upheld in any cyclic extension, we would like to move to its topological and metric properties. For this, from now on we assume that the extension E ⊇ K is a finite field extension F q d ⊇ F q with q = p a , p a prime. Proposition 7. Let E = F q d and K = F q . Then where δ is the greatest common divisor of d and (q − 1) (note that δ is a divisor of (q d − 1)/(q − 1)). Proof. Clearly at k ∼ bt m (with k, m < d) if and only if ab −1 ∈ K and k = m. Thus, we can compute the contributions to the index from a and from t separately. The powers of t of degrees Ord p (d), 2 Ord p (d), . . . , d − Ord p (d) are exactly those that produce elements of the subgroup. So, Ord p (d) is the contribution from t. The contribution from a is the index The first equality holds because The second equality holds since N is surjective and (K × ) d has index n in K × . Using the explicit expression for Ψ at (1) above, one can construct an explicit image of H in the locally compact, totally disconnected group G = PGL d (F q ((t))) under Ψ. Interestingly enough, if (p, d) = 1, one can see that Ψ(H) can be realised as a subgroup of PGL d (q) naturally embedded in ). However, if p | d, this is not possible and Ψ(H) ∩ PGL d (q) is a subgroup of index Ord p (d) in Ψ(H). This difference comes from the fact that in the former case X (a solution of N (X) = 1 + Y ) can be realised over F q , while in the latter case this is not possible. So far we have been working in Aut( A). However, it will now be convenient to switch our discussion explicitly into G = PGL d (F q ((t))). To avoid excessive notations, we identify Γ with its image Ψ( Γ) in G. From now on we call this image Γ. Likewise, we call Γ v , now in G, again by H (instead of using Ψ(H)). We now recall the facts about Γ that will be useful for us. 
Most of them can be derived from Section 3.2 but, as they already appear in [CS], we just restate them. We have: (1) Γ is a cocompact lattice of PGL d (F q ((t))); (2) Γ acts simply transitively on the set of vertices of the building ∆ associated to PGL d (F q ((t))); (3) H = Γ v for a vertex v of ∆; (4) |H|= q d −1 q−1 d; and (5) Γ = HΓ. We will now discuss the structure of H and some of its properties. Lemma 8. Let H = Γ v for a vertex v of ∆ the building associated to G = PGL d (F q ((t))). Then the following conditions hold: (2) H contains a normal cyclic subgroup S of order q d −1 q−1 where S is a Singer cycle of PGL d (q); (3) H ∼ = N PGL d (q) (S); and (4) if we identify PSL d (F q ((t))) with a subgroup of G, then . Proof. Part (1) follows immediately from the fact that H = Γ v , hence H ≤ G v , and the fact that ), as discussed in Section 2.2. For (2), using the notation of Proposition 5, let S be the image of mE × in PGL d (q). Obviously, S is a cyclic subgroup of H of order q d −1 q−1 . Now from the proof of (1) of Proposition 5, it follows that S indeed is normal in H. Moreover, as S is an abelian subgroup of PGL d (q) of order q d −1 q−1 , Proposition 2.2 of [CdR] implies that S is a Singer cycle of PGL d (q). To prove (3) Then there exists 1 = h ∈ H ∩ U v , an element of order p. It follows that [h, S] ≤ U v ∩ S = 1 since on the one hand h ∈ U v ⊳ G v and S ≤ G v , while on the other, h normalises S and (p, |S|) = 1. Thus h centralises S. Using calculations from the proof of Proposition 5(1) we observe that S is self-centralising in H. We have reached a contradiction that proves that H Now H contains a normal subgroup S ∼ = S which is a Singer cycle of G v , by Proposition 2.2 of [CdR]. Moreover, |H|= |N PGL d (q) (S)|. Therefore (3) holds. 4.1. Lattices in PGL 3 (F q ((t))). Recall the construction of the cocompact lattice Γ ≤ PGL 3 (F q ((t))) described in Section 3 above. As noted in Section 3.3(5) above, the lattice Γ is a product of a vertex stabiliser H of order 3(q 2 + q + 1), and a vertex-regular lattice Γ. By Lemma 8 above, H contains a Singer cycle S of PGL 3 (q). Denote by Γ ′ the subgroup of Γ which is the product of S and Γ. Then by construction, S is a vertex stabiliser in Γ ′ . (Since Γ ≤ Γ ′ ≤ Γ, the group Γ ′ is also a cocompact lattice in PGL 3 (F q ((t))).) Let v 0 , v 1 and v 2 be the vertices of the standard chamber of ∆, as in Section 2.2 above. For i = 0, 1, 2 let N i be the stabiliser of v i in Γ, and let S i be the stabiliser of v i in Γ ′ . Since Γ and Γ ′ act transitively on the vertices of ∆, we have that each N i ∼ = H and each S i ∼ = S. We now define Γ ′ 0 := S 0 , S 1 , S 2 to be the subgroup of Γ ′ generated by S 0 , S 1 and S 2 , and to be the subgroup of Γ generated by N 0 , N 1 and N 2 . Clearly Γ ′ 0 ≤ Γ 0 . We claim that Γ ′ 0 and Γ 0 are cocompact lattices in G = PGL 3 (F q ((t))). Recall from the introduction that Γ < G is a cocompact lattice in G if it is a discrete subgroup of G which acts cocompactly on ∆. Hence it suffices to show that Γ 0 is a discrete subgroup of PGL 3 (F q ((t))) and that Γ ′ 0 acts cocompactly on ∆. The following lemma is immediate, since by construction Γ 0 is a subgroup of the discrete group Γ ≤ PGL 3 (F q ((t))). To show that Γ ′ 0 acts cocompactly on ∆, we first consider the action of the groups S i which generate Γ ′ 0 . Lemma 10. For i = 0, 1, 2 and j = i − 1, i + 1 (mod 3), the group S i acts simply transitively on the vertices neighbouring v i of type j. Proof. 
From the discussion of Singer cycles in Section 2.1 and types in Section 2.2, the group S 0 acts simply transitively on the vertices neighbouring v 0 of type j, for j = −1, 1 (mod 3). Now Γ ′ consists of type-rotating automorphisms, since the Cartwright-Steger lattice Γ, which contains Γ ′ , consists of type-rotating automorphisms. By construction and the definition of type-rotating, for i = 1, 2 the group S i is the image of S 0 under conjugation by an element of Γ ′ which adds i (mod 3) to each type. Thus for i = 1, 2, the group S i acts simply transitively on the vertices neighbouring v i of type j = i − 1, i + 1 (mod 3). Proposition 11. For i = 0, 1, 2, the group Γ ′ 0 acts transitively on the vertices of type i in ∆. Proof. We will show that Γ ′ 0 acts transitively on the vertices of type 0 in ∆. The same argument will apply for types 1 and 2. It suffices to show that for each vertex w 0 of type 0, there is an element of Γ ′ 0 which takes w 0 to v 0 . We prove this by induction on the distance from w 0 to v 0 in the natural graph metric δ on the edges of ∆. Note that δ(w 0 , v 0 ) will always be an even integer since no two vertices of type 0 are adjacent. If δ(w 0 , v 0 ) = 2 we consider two cases. The first is when w 0 is adjacent to either v 1 or v 2 . By Lemma 10 above, S 1 and S 2 act transitively on the type 0 neighbours of v 1 and v 2 respectively, and so the claim follows in this case. Otherwise, w 0 is adjacent to some vertex s 0 v 1 or s ′ 0 v 2 where s 0 , s ′ 0 ∈ S 0 , since S 0 acts transitively on the vertices of types 1 and 2 which neighbour v 0 . Then s −1 0 w 0 is adjacent to v 1 or (s ′ 0 ) −1 w 0 is adjacent to v 2 , and we apply the argument from the first case. Now suppose that δ(w 0 , v 0 ) = 2k. Then there is a vertex w ′ 0 of ∆ of type 0 such that δ(w 0 , w ′ 0 ) = 2(k − 1) and δ(w ′ 0 , v 0 ) = 2. By the base case of the induction there is an element γ ∈Γ 0 such that γw ′ 0 = v 0 . But then δ(γw 0 , v 0 ) = δ(γw 0 , γw ′ 0 ) = δ(w 0 , w ′ 0 ) = 2(k − 1) so by inductive assumption there is a γ ′ ∈Γ 0 such that γ ′ γw 0 = v 0 , as required. Corollary 12. Γ ′ 0 acts cocompactly on ∆. Proof. By Proposition 11 above, Γ ′ 0 has finitely many (at most 3) orbits of vertices on ∆. Since ∆ is locally finite, this implies that Γ ′ 0 acts cocompactly. We have established the claim that Γ ′ 0 and Γ 0 are cocompact lattices in PGL 3 (F q ((t))). To finish the proof of Theorem 1 in the case d = 3, we further describe the actions of Γ ′ 0 and Γ 0 on ∆. Corollary 13. The action of Γ ′ 0 and of Γ 0 is type-preserving and transitive on each type of vertex in ∆. For i = 0, 1, 2, the stabiliser of v i in Γ ′ 0 is the group S i , and the stabiliser of v i in Γ 0 is the group N i . Proof. Each N i is a subgroup of the type-rotating group Γ and stabilises a vertex of type i, hence each N i fixes all types. It follows that Γ 0 and thus Γ ′ 0 is type-preserving. By Proposition 11, the action of Γ ′ 0 and thus of Γ 0 is transitive on each type of vertex of ∆. For i = 0, 1, 2, the stabiliser Since Γ 0 is discrete, it is immediate that Λ 0 is a discrete subgroup of PSL 3 (F q ((t))). Now Γ 0 acts cocompactly on ∆, so to show that Λ 0 act cocompactly on ∆ it suffices to show that Λ 0 is of finite index in Γ 0 . The group Γ 0 is finitely generated by torsion elements, since each N i is finite. Hence the restriction of det to Γ 0 has finite image. But the kernel of this restriction is Γ 0 ∩ PSL 3 (F q ((t))) = Λ 0 . Thus Λ 0 has finite index in Γ 0 , as required. 
We conclude that Λ 0 is a cocompact lattice in PSL 3 (F q ((t))). Our further discussion is divided into cases depending upon the value of q. We will establish the remaining claims of Theorem 2 and specify the relationship between our lattices and the Cartwright-Steger lattice Γ in Sections 4.2.1 and 4.2.2, then in Section 4.2.3 explain why, if (3, q − 1) = 1, we are not able to describe any more precisely the actions of Λ 0 and Λ ′ 0 . We can now specify the relationship between our lattice Γ 0 and the Cartwright-Steger lattice Γ, in this case. In this case, as (3, q − 1) = 1, we have by Proposition 7 and Lemma 8(4) above that H ∩ PSL 3 (F q ((t))) is equal to the Singer cycle S < H. By similar arguments to those in Section 4.2.1 above, it follows that Λ ′ 0 = Γ ′ 0 is a cocompact lattice in PSL 3 (F q ((t))) with action as described in Corollary 13 above. The proof of the following lemma is similar to that of Lemma 14 above. 4.2.3. Case 3 | (q − 1). In this case, by Lemma 8(4) above, H ∩ PSL 3 (F q ((t))) has order (q 2 + q + 1). Moreover, as H ∩ PSL 3 (F q ((t))) = H ∩ PSL 3 (F q [[t]]), H is a normaliser of a Singer cycle of PSL 3 (q). Thus as discussed in Section 2.1, H ∩ PSL 3 (F q ((t))) cannot act transitively on the set of points and the set of lines of the projective plane over F q . Hence the arguments used to prove Proposition 11 above cannot be applied. We do not know in this case whether Λ 0 or Λ ′ 0 acts transitively on the set of vertices of ∆ of each type. (Since Λ 0 and Λ ′ 0 are type-preserving cocompact lattices, we do know that they have finitely many orbits of vertices of each type.) Lattices in cases d > 3 As in the case d = 3, we first construct and establish the properties of lattices Γ ′ 0 and Γ 0 in PGL d (F q ((t))), then consider their intersections with PSL d (F q ((t))). Many arguments from the case d = 3 apply immediately for d > 3. (t))). For d > 3, the construction of the cocompact lattice Γ in PGL d (F q ((t))) described in Section 3 above appears in [CS]. As recalled in Section 3.3(5) above, the lattice Γ is a product of a vertex stabiliser H of order d q d −1 q−1 and a vertex-regular lattice Γ. Denote by Γ ′ the subgroup of Γ which is the product of Γ with the Singer cycle S < H guaranteed by Lemma 8 above. Then by construction, S is a vertex stabiliser in Γ ′ . Lattices in For i = 0, . . . , d − 1 let v i be the vertex of type i in the standard chamber, as in Section 2.2 above. Let N i be the stabiliser of v i in Γ and S i be the stabiliser of v i in Γ ′ . Then each N i ∼ = H and each S i ∼ = S. We define Γ ′ 0 := S 0 , . . . , S d−1 ≤ Γ ′ and Γ 0 := N 0 , . . . , N d−1 ≤ Γ. Clearly Γ ′ 0 ≤ Γ 0 . We claim that Γ ′ 0 and Γ 0 are cocompact lattices in PGL d (F q ((t))). As in the case d = 3, it suffices to show that Γ 0 is a discrete subgroup of PGL d (F q ((t))) and that Γ ′ 0 acts cocompactly on ∆, and the following lemma is immediate. The proof of the next lemma is the same as that of Lemma 10 above, after replacing 3 by d. Lemma 17. For i = 0, . . . , d − 1 and j = i − 1, i + 1 (mod d), the group S i acts simply transitively on the vertices neighbouring v i of type j. Compared with the proof of the corresponding result in the case d = 3, Proposition 11 above, the proof of Proposition 18 below requires some extra care in the base case of the induction. Proposition 18. For i = 0, . . . , d − 1, the group Γ ′ 0 acts transitively on the vertices of type i in ∆. Proof. We will show that Γ ′ 0 acts transitively on the vertices of type 0 in ∆. 
The same argument will apply for types i = 1, . . . , d − 1. It suffices to show that for each vertex w 0 of type 0, there is an element of Γ ′ 0 which takes w 0 to v 0 . We prove this by induction on the distance δ(w 0 , v 0 ) ∈ 2N. If δ(w 0 , v 0 ) = 2 we consider the following cases. (1) w 0 is adjacent to v 1 . By Lemma 17 above, S 1 acts transitively on the type 0 neighbours of v 1 , and so the claim follows in this case. (2) w 0 is adjacent to some vertex s 0 v 1 with s 0 ∈ S 0 . Then s −1 0 w 0 is adjacent to v 1 , and we apply the argument from Case (1). (3) w 0 is adjacent to v i where i ∈ {2, . . . , d − 1}. Then there is a vertex v ′ i−1 of type (i − 1) so that v i , w 0 and v ′ i−1 are mutually adjacent. Since S i acts transitively on the type (i − 1) neighbours of v i , we have that s i v ′ i−1 = v i−1 for some s i ∈ S i . Thus s i w 0 is adjacent to v i−1 . By repeating this argument, we obtain after finitely many steps that for some γ ∈ Γ 0 we have γw 0 adjacent to v 1 , and we may then apply the argument from Case (1). Then there is an s 1 ∈ S 1 such that s 1 v ′ 2 = v 2 , and hence s 1 s 0 v ′ i is a neighbour of v 2 of type i. By repeating this argument, we obtain that γv ′ i is a neighbour of v i−1 of type i, for some γ ∈ Γ ′ 0 . Then there is an s i−1 ∈ S i−1 such that s i−1 γv ′ i = v i . Thus s i−1 γw 0 is a neighbour of v i , and so we may apply the argument from Case (3). The inductive step is exactly as in the case d = 3. Corollary 19. Γ ′ 0 acts cocompactly on ∆. Proof. As in the case d = 3 (Corollary 12 above), this follows from the fact that Γ 0 acts on ∆ with finitely many orbits of vertices. We have established the claim that Γ ′ 0 and Γ 0 are cocompact lattices in PGL d (F q ((t))). To finish the proof of Theorem 1 in the case d > 3, we further describe the actions of Γ ′ 0 and Γ 0 on ∆. The proof of the following result is the same as for Corollary 13 above. Corollary 20. The action of Γ ′ 0 and of Γ 0 is type-preserving and transitive on each type of vertex in ∆. For i = 0, . . . , d − 1, the stabiliser of v i in Γ ′ 0 is the group S i , and in Γ 0 is the group N i . 5.2. Lattices in PSL d (F q ((t))). The proof in Section 4.2 above that when d = 3 the groups Λ 0 := Γ 0 ∩ PSL d (F q ((t))) and Λ ′ 0 := Γ ′ 0 ∩ PSL d (F q ((t))) are cocompact lattices in PSL 3 (F q ((t))) generalises immediately to the cases d ≥ 3. However, describing these intersections becomes a bit more complicated, due to the various numerical possibilities. We list the outcomes for various pairs of d and q in the next statement, which follows from Proposition 7 and Lemma 8 above. Recall that S i is a Singer cycle of PGL d (q), hence S i ∼ = C q d −1 q−1 , and is a proper subgroup of N i . Moreover, S i ≤ N i ∩ PSL d (F q ((t))). Hence Γ ′ 0 = Λ ′ 0 and Λ 0 is a proper subgroup of Γ 0 . (a) If p does not divide d, then The following relationships between the lattices Γ 0 and Γ ′ 0 and the Cartwright-Steger lattice Γ are implied by Lemma 21 above, together with similar arguments to those used in Lemmas 14 and 15 above. ). Minimality of covolumes In Section 6.1 we discuss whether cocompact lattices in the matrix groups we have been considering can contain p-elements. We then in Section 6.2 discuss minimality of covolumes of cocompact lattices in G = SL 3 (F q ((t))). 6.1. Cocompact lattices, do they contain p-elements? We begin by establishing an analogue for G = SL d (F q ((t))) of Godement's Cocompactness Criterion. 
This result, which was proved by Borel and Harish-Chandra [BHC] and independently by Mostow-Tamagawa [MT], states that for G a semisimple Q-algebraic group and Γ a lattice in G, Γ is cocompact if and only if Γ contains no non-trivial unipotent elements. An element of GL(n, C) is unipotent if all of its eigenvalues are equal to 1. We will use the general result contained in Proposition 23 below. A similar statement can be found in, for example, [GGPS,page 10]. The proof in [GGPS] requires a compact fundamental domain, that cannot be assured in our case. Hence, for the sake of completeness, we exhibit a variation of their argument here. The existence of a discrete cocompact subgroup will make the group G locally compact, but we still formulate the result for a topological group because local compactness is not used in the proof. Proposition 23. Let G be a topological group and Γ a discrete cocompact subgroup of G. If u ∈ Γ, then is a closed subset of G. Proof. Let g i ug −1 i , g i ∈ G, be a net converging to v ∈ G. Since Γ is cocompact, the set {g i Γ} admits a convergent subnet, so without loss of generality, g i Γ → gΓ. Thus, there exist such x i ∈ Γ that g i x i → g. Since g i ug −1 i ux i are elements of the discrete subgroup Γ, the net must stabilise, hence, x −1 j ux j = g −1 vg for some j, and so we arrive at v ∈ u G . It is an interesting question whether cocompact lattices in groups defined over a field of characteristic p contain p-elements. In [L1] Lubotzky uses Proposition 23 above to show that cocompact lattices in SL 2 (F q ((t))), where q = p a , contain no p-elements. In fact, this statement can be generalised in the following way. Proposition 24. Let G = SL d (F q ((t))) where q = p a with p prime and d ≥ 2. Let Γ be a lattice in G. Then Γ is cocompact if and only if Γ does not contain any elements of order p. Proof. First suppose that Γ is non-cocompact and let A be a set of vertices of the building for G which represent the orbits of Γ. Then by the remarks in the introduction, A is infinite and the series µ(Γ\G) = a∈A |Stab Γ (a)| −1 converges, hence Γ contains vertex stabilisers of arbitrarily large order. The Levi decomposition (Proposition 3 above) then implies that Γ must have elements of order p. For the converse, by Proposition 23 above, it is enough to show that if u ∈ G is a p-element then there is g ∈ G such that g k ug −k → I as k → ∞, where I is the identity matrix in G. So let u ∈ G be such that u p = I = u. Since we are working over a field of characteristic p, it follows that (u − I) p = 0 and thus u is a unipotent element of G = SL d (F q ((t))) (recall that by definition, unipotent elements are those with all eigenvalues equal to 1). Thus u is conjugate in G to a matrix with all 1s on the diagonal and all below-diagonal elements 0. Without loss of generality we may assume that u itself has all 1s on the diagonal and all below-diagonal elements 0. It is then not hard to construct a suitable diagonal matrix g ∈ G such that g k ug −k converges to I. For example, for d = 3, g can be taken to be the following matrix: The proof of Proposition 24 makes essential use of the fact that in SL d (F q ((t))), an element of order p is a genuine unipotent element (that is, is conjugate of a matrix with eigenvalues 1). However, one needs to be careful about cocompact lattices in other matrix groups! Let us look again at the Cartwright-Steger lattice Γ in PGL d (F q ((t))). As we saw, Γ = ΓH where H is a finite subgroup of PGL d (F q ((t))) of order d (q d −1) (q−1) . 
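(Referring back to the proof of Proposition 24: the displayed matrix g for d = 3 was lost in extraction. One matrix that makes the argument work, offered as an illustrative choice rather than necessarily the authors' own, is

\[ g = \begin{pmatrix} t & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & t^{-1} \end{pmatrix} \in \mathrm{SL}_3(\mathbb{F}_q((t))), \]

since conjugating an upper unitriangular u by g^k multiplies its above-diagonal entries by t^k or t^{2k}, which tend to 0 in the t-adic topology, so g^k u g^{-k} → I.)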
Suppose that p divides d (for example, if p = 3 = d). Then obviously H, and thus Γ, contains an element h ∈ H of order p. On the other hand, Γ is a cocompact lattice in PGL d (F q ((t))). What is going on? The answer comes from the fact that under the natural map GL d (F q ((t))) → PGL d (F q ((t))), h is the image of an element h ∈ GL d (F q ((t))) of infinite order. Hence, h is not "genuinely unipotent" and the proof of Proposition 24 above does not work. In fact the conjugacy class of h in PGL d (F q ((t))) is closed, so there is no contradiction with Proposition 23 above. 6.2. Minimality of covolumes. As discussed in the introduction, our original motivation was to find cocompact lattices of minimal covolume in SL 3 (F q ((t))), and this led us to considering vertex stabilisers which are Singer cycles or normalisers of Singer cycles. We now consider covolumes of cocompact lattices in the special case that G = SL 3 (F q ((t))) and (3, q − 1) = 1. Notice that in particular, SL 3 (F q ((t))) = PSL 3 (F q ((t))). By Theorem 2 and the remarks in the introduction, we have that Γ ′ 0 is a cocompact lattice in G of covolume Also, if p = 3, then Γ 0 is a cocompact lattice in G of covolume Now let Γ be any cocompact lattice in G = SL 3 (F q ((t))). Then by Proposition 24 above, each vertex stabiliser in Γ is a finite p ′ -subgroup of a vertex stabiliser in G. The Levi decomposition (Proposition 3 above) then implies that each vertex stabiliser in Γ is isomorphic to a p ′ -subgroup of SL 3 (q) = PSL 3 (q). We thus consider maximal p ′ -subgroups of PSL 3 (q), in Lemma 25 below. Note that since Γ is type-preserving, Γ has at least one orbit of vertices of each type i = 0, 1, 2. It follows that if |Stab Γ (v i )|≤ q 2 for each i, then µ(Γ\G) > µ(Γ ′ 0 \G) and so Γ is not a cocompact lattice of minimal covolume. Hence in the next statement we consider only maximal p ′ -subgroups of order greater than q 2 . A lattice Γ ′ ≤ G = SL 3 (F q ((t))) is said to be maximal if for every lattice Γ ≤ G such that Γ ′ ≤ Γ, in fact Γ ′ = Γ. It is clear that a cocompact lattice of minimal covolume must be a maximal lattice. In fact, the following is true. Proof. We give the proof for p ≥ 5. The proof for p = 3 is similar. Suppose that Γ is a lattice in G such that Γ 0 ≤ Γ. Then Γ is cocompact, since Γ 0 is cocompact. Since Γ is type-preserving and Γ 0 is transitive on each type of vertex, Γ is transitive on each type of vertex. By Lemma 25, the vertex stabilisers in Γ 0 are maximal p ′ -subgroups of PSL 3 (q). It follows that for i = 0, 1, 2 we have Stab Γ (v i ) = Stab Γ 0 (v i ) and hence µ(Γ\G) = µ(Γ 0 \G). Thus Γ = Γ 0 as required. For p ≥ 5, we have found a candidate besides Γ 0 for the cocompact lattice of minimal covolume. Let H 1 be the normaliser of a maximal split torus of PSL 3 (q). Using complexes of groups (see [BrH]), for p odd and (3, q − 1) = 1 we are able to construct a group Γ 1 which acts transitively on the set of vertices of each type in some building of typeà 2 (possibly exotic), so that each vertex stabiliser in Γ 1 is isomorphic to H 1 . However, for p ≥ 5 we do not know whether Γ 1 embeds in G = SL 3 (F q ((t))) as a cocompact lattice acting transitively on the set of vertices of each type in the building for G, with Stab Γ 1 (v i ) ∼ = H 1 for i = 0, 1, 2. (For p = 3, the whole group H 1 cannot be a vertex stabiliser, since it contains an element of order 3.) 
If there is such an embedding of Γ 1 , then by the same arguments as for Proposition 27, Γ 1 is a maximal lattice in G, and it will have a smaller covolume than Γ 0 : 1 |6(q − 1) 2 | = 3 6(q − 1) 2 = 1 2(q − 1) 2 < 1 q 2 + q + 1 . Hence, we would like to finish this section with the following question and conjecture. Relationship with the work of Essert Recall from the introduction that Essert [E] constructed cocompact lattices which act simply transitively on the set of panels of the same type in someà 2 -building, possibly exotic. We now conclude by resolving some open questions from [E]. To explain these questions, let ∆ be the buildingà 2 (K, ν), for some field K with discrete valuation ν, and let G = G(K) where G is in the set {PGL 3 , SL 3 , PSL 3 }. Suppose that Γ is a cocompact lattice in Aut(∆), meaning that Γ acts cocompactly on ∆ with finite stabilisers. Since G/Z(G) is not equal to Aut(∆), it is possible that Γ is not contained in G even though Γ acts on the building associated to G. On the other hand, since G/Z(G) is cocompact in Aut(∆), if Γ is a cocompact lattice in G, then Γ will be a cocompact lattice in Aut(∆). The Mostow-Margulis Rigidity Theorem (see [M]) implies that the group Γ cannot be a lattice in G(K) for two different fields K. With the exception of one lattice which is realised explicitly in the group SL 3 (F 2 ((t))) (see the Remark in [E,Section 5.2]), it is an open question in [E] whether the lattices constructed there act on any buildingà 2 (K, ν), and also whether they can be embedded in any G(K). We consider these questions in the case that K = F q ((t)). Let ∆ =à 2 (F q ((t)), ν). We first consider the lattice Γ ′ 0 ≤ PGL 3 (F q ((t))) constructed in Section 4.1 above. Since the vertex stabilisers of Γ ′ 0 are Singer cycles of PGL 3 (q), and Γ ′ 0 acts transitively on the set of vertices of each type in ∆, it follows that the lattice Γ ′ 0 acts simply transitively on the set of panels of each type in ∆. Thus the lattice Γ ′ 0 is of the form considered by Essert [E], and is contained in PGL 3 (F q ((t))) for all q. From the discussion above, it follows that for all q, there is a lattice in Aut(∆) acting simply transitively on the set of panels of the same type. Next suppose that (3, q − 1) = 1. We showed in Section 4.2 above that in this case, the lattice Γ ′ 0 is also contained in PSL 3 (F q ((t))) = SL 3 (F q ((t))). Hence for all q such that (3, q − 1) = 1, there is a lattice in SL 3 (F q ((t))) which acts simply transitively on the set of panels of the same type. Finally suppose that 3 | (q − 1). From the Levi decomposition (Proposition 3 above) and Proposition 24 above, if Γ is a cocompact lattice in SL 3 (F q ((t))), then the vertex stabilisers in Γ are isomorphic to p ′ -subgroups of SL 3 (q). However, when q is large enough and 3 | (q − 1), there is no p ′ -subgroup of SL 3 (q) which acts transitively on the points of the projective plane (see Section 2.1). Hence no vertex stabiliser in Γ can act transitively on the set of adjacent panels of the same type. Thus if q is large enough and 3 | (q − 1), there is no lattice Γ < SL 3 (F q ((t))) which acts (simply) transitively on the set of panels of the same type.
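(A note on the covolume displays of Section 6.2, which were flattened in extraction. Combining the covolume series from the introduction with the stabiliser orders given above (this is a reconstruction and should be checked against the published article), one gets, for (3, q − 1) = 1,

\[ \mu(\Gamma'_0 \backslash G) = \frac{3}{q^2+q+1}, \qquad \mu(\Gamma_0 \backslash G) = \frac{3}{3(q^2+q+1)} = \frac{1}{q^2+q+1}, \]

and the comparison with the hypothetical lattice Γ_1, whose three vertex stabilisers would each be isomorphic to H_1 of order 6(q − 1)^2, reads

\[ 3 \cdot \frac{1}{6(q-1)^2} = \frac{1}{2(q-1)^2} < \frac{1}{q^2+q+1}, \]

the inequality holding once q ≥ 5, in particular in the case p ≥ 5 considered there.)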
2012-06-23T04:14:02.000Z
2012-06-23T00:00:00.000
{ "year": 2012, "sha1": "297bbfbb4d23ad2fd4ed071442575e1834c2474a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "297bbfbb4d23ad2fd4ed071442575e1834c2474a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
262934730
pes2o/s2orc
v3-fos-license
Dual-Level Attention Based on Heterogeneous Graph Convolution Network for Aspect-Based Sentiment Classification We introduce a flexible HIN (Heterogeneous Information Network) framework to model user-generated comments. It can integrate various types of additional information and capture the relationships between them, reducing the semantic sparsity caused by a small amount of labeled data. It can also exploit the hidden network structure by propagating information along the graph. We then propose a dual-level attention-based heterogeneous graph convolutional network that learns the importance of different adjacent nodes and of different types of nodes to the current node. In doing so, we mitigate a shortcoming of most existing algorithms, namely that they ignore the network structure information between the words in a sentence and the sentence itself. Experimental results on the SemEval dataset demonstrate the validity and reliability of our model. I. INTRODUCTION Opinion mining and sentiment analysis have captured increasing attention in academic and industrial circles in recent years due to their wide use. With the increasing popularity of e-commerce services such as Yelp and Meituan, the number of user-generated reviews has grown dramatically, and many of these reviews have become important resources for producers seeking to improve product quality. We have also found that with the development of blockchain, a pioneering cryptocurrency technology, media and public opinion have impacted its development and use [1], [3], [16], [18]. A large number of IoT studies have likewise begun using sentiment analysis to optimize the user experience [2], [7], [17]. In recent years, aspect-based sentiment analysis has been proposed as a sub-task of sentiment analysis. It aims to identify the sentiment polarity towards a given aspect and provides more detailed feedback than traditional sentiment analysis [9]. Most existing methods leverage neural networks, such as the long short-term memory network (LSTM) and the recurrent neural network (RNN) [13], to extract contextual information and representations of aspect categories, which often leads to a mismatch between the sentiment polarity and the target aspect. Recently, with the wide use of attention mechanisms in NLP, many models employ attention to capture the semantic relationship between aspect features and context [10]. Combining attention models can further improve performance; however, these models ignore the network structure information between the words in a sentence and the sentence itself, and they can also lose accuracy because of the inherent noise introduced by the attention mechanism. In addition, previous studies did not consider how additional information could be incorporated to mine and enrich semantic information. To address these issues, we propose a dual-level attention mechanism based on a heterogeneous graph convolutional network for aspect-based sentiment analysis. II. RELATED WORK The goal of aspect-based sentiment analysis is to identify the specific sentiment word that is aimed at the target aspect, which is a fine-grained task in ABSA. The development of aspect-based sentiment analysis is introduced here in three stages.
Early studies mainly focused on training a classifier and feeding the feature vector of text into the classifier, then obtaining the classification results such as SVM (support vector machine) and some improved models of SVM. For instance, Wagner et al. [8] introduced the relationship between sentiment word and target aspect to assist in training an improved SVM classifier. Later on, the recurrent neural network (RNN) got more attention due to it being widely used in NLP [4]. Lots of studies applied RNN to aspect-based sentiment analysis to achieve better performance [5], [6], [13]. For example, Tang et al. and Ruder et al. [5] implemented a hierarchical bi-directional LSTM model to learn the sentences' contextual information. These RNN-based models can achieve better classification results because RNN has many advantages, e.g. the LSTM is better at extracting short-range dependencies among words in sentences. However, these RNN-based methods cannot extract potential correlations between sentiment words and aspect words that are relatively far away in complex sentences. Recently, lots of studies indicate that the introduction of attention mechanisms can alleviate the above-mentioned problems [6], [10]. A complex sentence may contain several aspects, each word in a sentence may be associated with one or more aspect terms, and a phrase in a sentence may convey sentiment information about a particular aspect term. By introducing the attention mechanism, we can capture the detailed sentiment features towards a specific aspect in the complex sentences. In particular, Wang et al. [10] proposed the ATAE-LSTM model which combines LSTM and the attention mechanisms. The aspect of embedding was used to calculate the attention weights. Ma et al. [6] proposed a model with bi-directional attention mechanisms for interactive learning context and attention weights of aspect terms respectively. Obviously, these models further improved the accuracy of sentiment analysis. Inherent noise, however, was introduced as a result of the attention mechanisms. To be more specific, the network structure information between the words in the comment text and the comment text itself was missed. III. DAHGCN MODEL Our method is divided into two steps. First, in order to address the semantic sparsity and the hidden network structure information, we propose a flexible HIN [14] framework. It can also integrate several additional information, which can greatly enrich the semantic information. Then, we propose the DAHGCN model which uses both type-level and node-level attention to analyzes the relationship between aspect term and sentiment term. A. HIN for User-generated Review We first present the HIN framework for modeling the review texts which can alleviate the semantic sparsity by integrating some additional information. The HIN is construct as G = (V, ε), which contains the review texts T = {t 1 , t 2 , . . . , t n }, aspect term A = {a 1 , a 2 , . . . , a k }, and sentiment term S = {s 1 , s 2 , . . . , s n } as nodes. Where V = T ∪ A ∪ S, ε is the set of edges between nodes which represent the relationship between two nodes. B. DAHGCN Model As shown in Figure 1, we propose a novel dual-level attention based on heterogeneous graph convolutional network for aspect-based sentiment analysis. It contains nodelevel attention and type-level attention. 
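(Before turning to the model details, a minimal sketch of the HIN construction of Section III-A, written under assumed names and not taken from the authors' code: the graph over review texts T, aspect terms A and sentiment terms S can be assembled as below, with edges recording which aspect and sentiment terms occur in each review and self-loops giving the A + I used later.)

# Minimal sketch (illustrative, not from the paper): build the HIN adjacency over
# review-text nodes, aspect-term nodes and sentiment-term nodes described above.
import numpy as np


def build_hin(reviews, aspect_terms, sentiment_terms):
    """Return the typed node list and a dense adjacency matrix with self-loops."""
    nodes = [("text", i) for i in range(len(reviews))]
    nodes += [("aspect", a) for a in aspect_terms]
    nodes += [("sentiment", s) for s in sentiment_terms]
    index = {node: k for k, node in enumerate(nodes)}

    adj = np.eye(len(nodes))  # self-loops, i.e. the A + I used by the GCN layer
    for i, review in enumerate(reviews):
        tokens = set(review.lower().split())
        for a in aspect_terms:
            if a in tokens:  # the review mentions this aspect term
                j = index[("aspect", a)]
                adj[index[("text", i)], j] = adj[j, index[("text", i)]] = 1.0
        for s in sentiment_terms:
            if s in tokens:  # the review contains this sentiment term
                j = index[("sentiment", s)]
                adj[index[("text", i)], j] = adj[j, index[("text", i)]] = 1.0
    return nodes, adj


if __name__ == "__main__":
    nodes, adj = build_hin(["great food but the service was dreadful"],
                           ["food", "service"], ["great", "dreadful"])
    print(len(nodes), int(adj.sum()))  # 5 nodes; 5 self-loops plus 8 directed edges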
We embed HIN into HGCN and then introduce the dual-level attention mechanisms mentioned earlier to compute the attention weights of different adjacent nodes and different types of nodes. HGCN(Heterogeneous Graph Convolution Network): In this paper, the HIN framework integrates two kinds of additional information. Due to the heterogeneousness among the different types of nodes, the HIN framework cannot be directly applied to traditional GCN (graph convolutional network) [15]. To address this issue, we introduce HGCN (heterogeneous graph convolutional network) which is an improved model of GCN. Usually, for a graph G = (V, ε), V and ε represent a set of nodes and a set of edges, respectively. It makes X ∈ R |V |×d represent the matrix of all nodes with their nodes, where |V | is the number of nodes and d is the dimension of the feature vectors. In a graph, due to the self-connected of each node, we set the adjacent matrix A = A + I and the layer-wise propagation of GCN is defined as follows: where σ is an active function, A is the normalize adjacency matrix of A, H (l) ∈ R |V |×d is the hidden state of all nodes in l th layer. Initially, H (0) = X and W (l) is a layerspecific trainable transformation matrix. HGCN considers the heterogeneity of all types of nodes and projects them into an implicit common space with their respective transformation matrices. The layer-wise propagation of HGCN is defined as follows: where A τ ∈ R |V |×|Vτ | is the submatrix of A and rows and columns represent all nodes and their neighboring nodes with type τ , respectively. The representation of the nodes H (l+1) is obtained by aggregating information from the features of their neighboring nodes H Type-level Attention. For a target node v, type-level attention can learn the weights of adjacent nodes of different types. Particularly, we first use the embedding vector h τ = v A vv h vv to represent the type τ , it represents the sum of adjacent nodes features with type τ . Then we calculate the type-level attention score. Where μ T is the attention vector of type τ , || is "concatenate". Then, we use the softmax function to normalize all attention scores to get the final attention weight. Node-level Attention. We introduce node-level attention to learn the importance of different adjacent nodes and reduce the weights of the noise information. If a target node v and its neighboring nodes v are given, we use the embedding vector h v of node v, embedding vector h v of node v and the type-level attention score of node as the input to calculate the node-level attention score. It is defined as follows: where v T is the attention vector of node v. Then we normalize the node-level attention scores with softmax function to the final attention weight. Finally, we add the dual-level attention into the HGCN. where B τ is the attention weights matrix of all nodes. we train the embedding into softmax for classification and obtain Z. We use the L 2 -norm as the loss function in our model training, it defined as, Where C is number of sentiment polarity category, D train is the set of review numbers for training, η is the L2regularization term, and Θ is the parameter set. A. Datasets and Comparison models We will use the SemEval competition dataset to evaluate the DAHGCN model at the sentence level of aspect-level sentiment classification. The data set contains user-generated reviews of restaurants. Each data set contains a target aspect, an aspect term, and the aspect-specific sentiment polarity. 
The data sets are detailed in Table I to make explicit use of the dependency relationships between words to directly spread emotional features from the syntactic context of aspect targets, and to classify aspect-level emotions [12]. B. EVALUATION AND ANALYSIS As the comparison results are shown in Table II, the accuracy of our DAHGCN method is far better than other models. Firstly, we analyze these algorithms from the attention mechanism. ATAE-LSTM is used attention mechanisms to learn the relationship between aspects and emotions, but it does not consider the relationship between sentences. DAHGCN adds node-level attention and can effectively learn the importance of adjacent nodes to the current node so it can get the connection between sentences. Secondly, we investigate from the perspective of HIN. It can be seen from the classification results of each data set in Table II, the accuracy of HGCN is generally better than that of ASGCN or ATAE-LSTM. Compared with other algorithms, we argue that the main reason is that HGCN uses HIN to further dig the semantic information of the text. In this way, its node feature representation is more accurate, so its results are better. In summary, DAHGCN adopts the HIN mechanism and improves the attention mechanism, hence why its result is the best among all algorithms. In order to have an intuitive understand how DAHGCN works, a case study is used. For instance, the sentence "Great food but the service was dreadful", it contains two aspects which may result in sentiment mismatch by using the model which combines attention and recurrent neural network. The other sentence "If the service is better, I will go back again." uses a word "if", bringing extra difficulty in detecting implicit semantics. For the first sentence, our model can easily supervise the sentiment words and enable the model to concentrate the aspect-specific sentiment word. But it is challenging for our model to collect the logical information in the second sentence. That is because the second sentence expresses negative feelings towards the service without any obvious negative words, hence why it is difficult to make an accurate prediction. V. CONCLUSION AND FUTURE WORK In this paper, we proposed novel dual-level attention based on heterogeneous graph convolutional network for aspectbased sentiment analysis. We first proposed a HIN framework that integrates two kinds of additional information (aspect term and sentiment term). Then we introduced a dual-level attention mechanism to learn the importance of different adjacent nodes and different types of nodes. The experimental results have shown that this method is superior to other methods.
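(Appendix-style sketch, added for concreteness: one possible reading of the HGCN propagation and type-level attention of Section III-B, in plain numpy. It is an interpretation under assumed shapes and names, not the authors' implementation; node-level attention would further reweight the individual entries of each A_tau, and training would minimise cross-entropy over the C sentiment classes plus the η-weighted L2 term mentioned above.)

# Sketch (illustrative): H^{(l+1)} = ReLU( sum_tau (alpha_tau ⊙ A_tau) H_tau^{(l)} W_tau^{(l)} ),
# where alpha_tau are per-node type-level attention weights.
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def hgcn_layer(H_all, A_by_type, H_by_type, W_by_type, mu_by_type):
    """One HGCN layer with type-level attention over the node types tau."""
    types = list(A_by_type)

    # Type-level attention: score each type by its aggregated neighbour features.
    scores = []
    for tau in types:
        h_tau = A_by_type[tau] @ H_by_type[tau]            # neighbour summary of type tau
        scores.append(np.tanh(np.concatenate([H_all, h_tau], axis=1) @ mu_by_type[tau]))
    alpha = softmax(np.stack(scores, axis=1), axis=1)       # shape (num_nodes, num_types)

    # Propagation: weight each type's contribution by its attention score.
    out = 0.0
    for k, tau in enumerate(types):
        weighted_A = alpha[:, k:k + 1] * A_by_type[tau]     # scale each row of A_tau
        out = out + weighted_A @ H_by_type[tau] @ W_by_type[tau]
    return np.maximum(out, 0.0)                             # ReLU


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, n_aspect, d_in, d_out = 4, 2, 8, 5
    H_text = rng.normal(size=(n, d_in))
    A = {"text": np.eye(n),
         "aspect": rng.integers(0, 2, size=(n, n_aspect)).astype(float)}
    H = {"text": H_text, "aspect": rng.normal(size=(n_aspect, d_in))}
    W = {tau: rng.normal(size=(d_in, d_out)) for tau in A}
    mu = {tau: rng.normal(size=(2 * d_in,)) for tau in A}
    print(hgcn_layer(H_text, A, H, W, mu).shape)            # (4, 5)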
2020-12-01T14:17:03.834Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "a7c421bf6211a72d310e5bcef6a199788900bb88", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/wcmc/2021/6625899.pdf", "oa_status": "GREEN", "pdf_src": "IEEE", "pdf_hash": "a7c421bf6211a72d310e5bcef6a199788900bb88", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237091572
pes2o/s2orc
v3-fos-license
Lengths of Irreducible and Delicate Words We study words that barely avoid repetitions, for several senses of"barely". A squarefree (respectively, overlap-free, cubefree) word is irreducible if removing any one of its interior letters creates a square (respectively, overlap, cube). A squarefree (respectively, overlap-free, cubefree) word is delicate if changing any one of its letters creates a square (respectively, overlap, cube). We classify the lengths of irreducible and delicate squarefree, overlap-free, and cubefree words over binary and ternary alphabets. Introduction In combinatorics on words, it's common to study repetitions in words. Three kinds of repetition are squares, overlaps, and cubes. A square is a word of the form XX, where X is a nonempty word; an example in English is "hotshots". An overlap is a word of the form xY xY x, where x is a letter and Y is a possibly empty word; an example in English is "alfalfa". Finally, a cube is a word of the form XXX, where X is a nonempty word; an example in English is "hahaha". A factor of a word is a contiguous subword. For instance, every cube contains an overlap as a factor, and every overlap contains a square as a factor. We say that a word is squarefree (respectively, overlap-free, cubefree) if none of its factors is a square (respectively, overlap, cube). It's natural to ask over which alphabets there exist arbitrarily long squarefree, overlapfree, or cubefree words. It's easy to see that every binary word of length at least 4 contains a square. At the same time, over 100 years ago, Thue [12,13] (see also [3]) proved that there are arbitrarily long overlap-free binary words and arbitrarily long squarefree ternary words. Thus, when studying squarefree words, we use a ternary alphabet, and when studying overlap-free and cubefree words, we use a binary alphabet, since these are the smallest alphabets for which the questions are interesting. Recently, there has been interest in studying words that are only barely squarefree, overlap-free, or cubefree, for several senses of "barely". To this end, Grytczuk, Kordulewski, and Niewiadomski [5] introduced the notion of extremality. Namely, they defined an extremal squarefree word to be a squarefree word such that inserting any letter from the alphabet into the word (possibly at the beginning or end) creates a square. The definition analogously extends to overlap-free and cubefree words. The same authors proved that there are infinitely many extremal squarefree ternary words. Mol and Rampersad [8] refined this result by determining for which lengths extremal squarefree ternary words exist; in particular, such words exist for all lengths at least 87. In the same vein, Mol, Rampersad, and Shallit [9] determined for which lengths extremal overlap-free binary words exist. While there are infinitely many such words, they do not exist for all sufficiently large lengths. It remains unknown whether there are extremal cubefree binary words. Harju [6] introduced the notion of irreducibility as a dual to extremality. First, we say a letter in a word is interior if it is not the first or last letter of the word. Then an irreducible squarefree word is a squarefree word of length at least 3 such that removing any one of its interior letters creates a square. The definition again analogously extends to overlap-free and cubefree words. 
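(To make the definitions above concrete, here is a brute-force sketch, in Python, of checkers for squares, overlaps and cubes, and for irreducibility and delicacy. It is an illustration rather than code from the paper, and is only practical for short words.)

# Brute-force checks for the repetition notions defined above.
def has_square(w):
    return any(w[i:i + n] == w[i + n:i + 2 * n]
               for n in range(1, len(w) // 2 + 1)
               for i in range(len(w) - 2 * n + 1))


def has_overlap(w):
    # an overlap xYxYx is a factor of length 2n + 1 with period n = |xY|
    return any(w[i:i + n + 1] == w[i + n:i + 2 * n + 1]
               for n in range(1, (len(w) - 1) // 2 + 1)
               for i in range(len(w) - 2 * n))


def has_cube(w):
    return any(w[i:i + n] == w[i + n:i + 2 * n] == w[i + 2 * n:i + 3 * n]
               for n in range(1, len(w) // 3 + 1)
               for i in range(len(w) - 3 * n + 1))


def is_irreducible(w, has_rep):
    """w avoids the repetition, but deleting any interior letter creates one."""
    return (len(w) >= 3 and not has_rep(w)
            and all(has_rep(w[:i] + w[i + 1:]) for i in range(1, len(w) - 1)))


def is_delicate(w, has_rep, alphabet):
    """w avoids the repetition, but changing any single letter creates one."""
    return (len(w) >= 1 and not has_rep(w)
            and all(has_rep(w[:i] + c + w[i + 1:])
                    for i in range(len(w)) for c in alphabet if c != w[i]))


if __name__ == "__main__":
    print(has_square("hotshots"), has_overlap("alfalfa"), has_cube("hahaha"))  # True True True
    print(is_irreducible("010010", has_overlap))  # True: the length-6 word used later in the proof of Theorem 1.1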
Notice that requiring the letter one removes to be interior is essential, because any squarefree, overlap-free, or cubefree word will remain so after removing the first or last letter. Harju proved that there is an irreducible squarefree ternary word of length n if and only if n ∈ {3, 6, 8, 9, 10, 11} ∪ {m | m ≥ 13}. We solve the analogous problems for irreducible overlap-free binary words and irreducible cubefree binary words with the following theorems, which are proved in Section 2.

As a third sense in which a word can be barely squarefree, overlap-free, or cubefree, we introduce the notion of delicacy. A delicate squarefree word is a nonempty squarefree word such that changing any one of its letters to another letter from the alphabet creates a square. The definition again analogously extends to overlap-free and cubefree words. We determine the possible lengths of delicate squarefree ternary words, delicate overlap-free binary words, and delicate cubefree binary words with the following theorems, which are proved in Section 3. In Section 4, we exhibit an infinite family of overlap-free binary words that are simultaneously extremal, irreducible, and delicate. Theorem 1.6. There are infinitely many simultaneously extremal, irreducible, and delicate overlap-free binary words. Finally, in Section 5, we conclude by introducing a natural generalization of delicacy and raising a question about it for further study.

Irreducible Words

Let µ be the binary morphism defined by µ(0) = 01 and µ(1) = 10, and let t = µ^ω(0) be the Thue-Morse word, which is known to be overlap-free [13]. For Theorem 1.1 we need the following lemma.

Proof. Thue [13] proved that for any word w, w is overlap-free if and only if µ(w) is. Since 010110t = µ(001t) and 101001101001t = µ(110110t), it suffices to know that 001t and 110110t are overlap-free, which was proven by Allouche, Currie, and Shallit [1].

We now prove Theorem 1.1, which says that there is an irreducible overlap-free binary word of length n if and only if n ∈ {6, 8, 9, 10} ∪ {m | m ≥ 12}.

Proof of Theorem 1.1. First, suppose n ∈ {6, 10}. In this case we use one of the following words: for length 6 the word 010010, and for length 10 the word 0100101101. Now suppose n ∈ {8, 9} ∪ {m | m ≥ 12}. Let t_k be the first 8k letters of t. We claim that t_k is irreducible overlap-free. Observe that T_0 = 01101001 and T_1 = 10010110 are irreducible overlap-free. Since t_k is a concatenation of copies of T_0 and T_1, it suffices to verify that T_0T_0, T_0T_1, T_1T_0, and T_1T_1 are irreducible overlap-free. (We have incidentally proven that t is irreducible overlap-free.)

We first prove the theorem when n ≢ 7 (mod 8). We denote the length of a word w by |w|. Let k be the largest integer such that |t_k| ≤ n. If n − |t_k| = 0, we're done. Otherwise, based on n − |t_k|, we choose the word of the desired length from the following table. The irreducibility of these words can be verified by removing the interior letters at the beginning one at a time and finding the overlaps in the resulting words. It follows from the lemma that these words are overlap-free.

Next, we prove the result when n ≡ 7 (mod 8) by giving a family of words that are irreducible overlap-free for n ≥ 39 and n ≡ 7 (mod 16), and a family of words that are irreducible overlap-free for n ≥ 15 and n congruent to 15, 23, or 31 modulo 32. The first family is obtained by removing the first 14 letters of t and then taking prefixes, while the second is obtained by removing the first 15 letters of t and then taking prefixes.
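As an illustration of the objects used in this proof, here is a short Python sketch, building on the checkers above, that generates prefixes of the Thue-Morse word t and tests the irreducibility and delicacy conditions. The helper names are mine, and the printout simply reports whether the stated properties of T_0, T_1, and t_k hold for small k rather than asserting them.

```python
def thue_morse_prefix(n):
    # Iterate the morphism mu(0) = 01, mu(1) = 10 from "0" until the prefix is long enough.
    w = "0"
    while len(w) < n:
        w = "".join("01" if c == "0" else "10" for c in w)
    return w[:n]

def is_irreducible(w, is_free):
    # Removing any one interior letter must create the forbidden repetition.
    return (len(w) >= 3 and is_free(w)
            and all(not is_free(w[:i] + w[i + 1:]) for i in range(1, len(w) - 1)))

def is_delicate(w, is_free, alphabet="01"):
    # Changing any one letter to a different letter must create the repetition.
    return (len(w) >= 1 and is_free(w)
            and all(not is_free(w[:i] + a + w[i + 1:])
                    for i in range(len(w)) for a in alphabet if a != w[i]))

T0, T1 = "01101001", "10010110"
print(is_irreducible(T0, is_overlap_free), is_irreducible(T1, is_overlap_free))
print(all(is_irreducible(x + y, is_overlap_free) for x in (T0, T1) for y in (T0, T1)))
for k in range(1, 5):  # t_k is the prefix of t of length 8k
    print(k, is_irreducible(thue_morse_prefix(8 * k), is_overlap_free))
```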
The words in these families are overlap-free since t is, so we only need to prove irreducibility. We prove irreducibility by induction on the length. The base cases can be verified.

For the first family, we now show that appending the next 16 letters to a word in the family creates another irreducible overlap-free word. Notice that the next 16 letters must be of the form z = X_1 T_i X_2, where X_1 ∈ {001, 110}, X_2 ∈ {01101, 10010}, and i ∈ {0, 1}. Not every combination of these is a possible value of z, however. First, z ≠ 001 T_0 01101 and z ≠ 110 T_1 10010 because they require z to be from a factor of the form T_0 T_0 T_0 or T_1 T_1 T_1, respectively. Further, z ≠ 001 T_0 10010 and z ≠ 110 T_1 01101 because they require z to be from a factor of the form T_0 T_0 T_1 or T_1 T_1 T_0, respectively, and such factors can only occur in t starting at an index congruent to 8 modulo 16. Thus, the only possibilities for z are 001 T_1 01101, 001 T_1 10010, 110 T_0 01101, and 110 T_0 10010. In the first two cases, the 5 letters preceding them must be 01101, and these suffixes of length 21 are irreducible overlap-free. Similar reasoning holds for the latter two cases, in which case the 5 letters preceding them must be 10010.

We now prove the induction hypothesis for the second family by showing that appending the next 32 letters to a word in the family creates another irreducible overlap-free word. The next 32 letters must be of the form z = X_1 T_i T_j T_k X_2, where X_1 ∈ {01, 10}, X_2 ∈ {011010, 100101}, and i, j, k ∈ {0, 1}. Again, not every combination of these is a possible value of z. First, any combination that requires z to be from a factor containing an overlap is impossible. Further, the combinations that require z to be from a factor of the form T_0 T_1 T_0 T_1 T_1 or T_1 T_0 T_1 T_0 T_0 are impossible because such factors can only occur in t starting at an index congruent to 16 modulo 32. Finally, t contains no factors of the form T_0 T_0 T_1 T_0 T_0 or T_1 T_1 T_0 T_1 T_1. This leaves 10 possibilities for z. For each possibility, we can deduce the 6 letters preceding them (either 011010 or 100101), and the resulting suffixes of length 38 are irreducible overlap-free.

For n ∉ {6, 8, 9, 10} ∪ {m | m ≥ 12}, a computer search shows there are no irreducible overlap-free binary words of length n.

For Theorem 1.2, we need the following lemmas due to Richomme. Notice that ϕ_2(0) and ϕ_2(1) are the reversals of ϕ_1(0) and ϕ_1(1), respectively. The images of all cubefree binary words of length 7 under ϕ_1 and ϕ_2 are cubefree, so by Lemma 2.2, ϕ_1 and ϕ_2 preserve cubefreeness. Further, ϕ_i(0), ϕ_i(1), ϕ_i(00), ϕ_i(01), ϕ_i(10), and ϕ_i(11) are irreducible cubefree for i ∈ {1, 2}, so applying ϕ_1 or ϕ_2 to a prefix of a cubefree binary word results in an irreducible cubefree binary word. Let w_1 and w_2 be the images of t under ϕ_1 and ϕ_2, respectively, and let w_{1,k} and w_{2,k} be the images of the first k letters of t under ϕ_1 and ϕ_2, respectively. Notice that |w_{1,k}| = |w_{2,k}|. Let k be the largest integer such that |w_{1,k}| ≤ n. If n − |w_{1,k}| = 0, we're done. Otherwise, based on n − |w_{1,k}|, we choose a word of the desired length from the following table.

Proof. We prove the stronger result that no prefix of w_1 or w_2 is a square. Suppose w_1 has a square prefix XX. Then |X| ≥ 8. But in w_1, 01100100 occurs only at the beginning of ϕ_1(0), so XX must be the image of a square prefix of t under ϕ_1, which contradicts Lemma 2.3. Similarly, suppose w_2 has a square prefix XX.
Then |X| ≥ 9. But in w_2, 010010110 occurs only at the beginning of ϕ_2(0), so we again have a contradiction.

Now suppose one of the words in the above table contains a cube XXX. Then XXX must start in the prefix (before w_{1,k}, w_{1,k−1}, or w_{2,k}), since w_1 and w_2 are cubefree. But one can verify that X also can't be entirely within the prefix, so X = PY, where P is a suffix of the prefix and Y is a prefix of w_1 or w_2. Thus, XXX = PYPYPY, and YPYPY is a prefix of w_1 or w_2, which contradicts the claim.

For n ∉ {10, 14, 18, 19, 20} ∪ {m | m ≥ 22}, a computer search shows there are no irreducible cubefree binary words of length n.

Proof. Berstel [2] proved that v is equivalently characterized by letting the i-th letter be the number of 0s between the i-th and (i+1)-th 1s in t. Thus, 2v is defined by letting the i-th letter be the number of 0s between the i-th and (i+1)-th 1s in 10t. Since 10t is overlap-free by a result of [1], 2v is squarefree.

We also need the following lemma due to Crochemore [4, Corollary 5]. Lemma 3.2. A ternary morphism preserves squarefreeness if and only if the images of all squarefree ternary words of length 5 are squarefree.

We now prove Theorem 1.3, which says that there is a delicate squarefree ternary word of length n if and only if n ∈ {5} ∪ {m | m ≥ 7}. The images of all squarefree ternary words of length 5 under ϕ are squarefree, so by Lemma 3.2, ϕ preserves squarefreeness. Further, ϕ(a) is delicate squarefree for a ∈ {0, 1, 2}, so applying ϕ to a prefix of a squarefree ternary word results in a delicate squarefree ternary word. Let w be the image of v under ϕ, and let w_k be the image of the first k letters of v under ϕ. Let k be the largest integer such that |w_k| ≤ n. If n − |w_k| = 0, we're done. Otherwise, based on n − |w_k|, we choose a word of the desired length from the following table.

n − |w_k|: word
1: 010210120102w_{k−1}
2: 02w_k
3: 102w_k
4: 0121w_k
5: 12021w_k
6: 012102w_k
7: 0212021w_k
8: 02120121w_k
9: 021012102w_k
10: 1202120121w_k

The delicacy of these words can be verified by changing the letters at the beginning one at a time and finding the squares in the resulting words. We prove that these words are squarefree with the following claims.

Proof. Suppose one of these words contains a square XX. Then XX must start in the prefix (before w), since w is squarefree. Further, one can verify that X must contain the factor 20120, which is a contradiction, since w does not contain this factor.

Proof. This follows from Lemma 3.1.

Proof. Suppose this word contains a square XX. Since w does not contain 12101 as a factor, XX is contained in 21w. But 21w is squarefree by Lemma 3.1, which is a contradiction.

For n ∉ {5} ∪ {m | m ≥ 7}, a computer search shows there are no delicate squarefree ternary words of length n.

We now prove Theorem 1.4, which says that there is a delicate overlap-free binary word of length n if and only if n ∈ {m | m ≥ 7}.

Proof of Theorem 1.4. If n = 9, we use the word 001011001. Now suppose n ∈ {7, 8} ∪ {m | m ≥ 10}. We use a construction for each residue class modulo 8. Based on the residue class, we use a prefix of t after removing the first k letters, where k is given by the following table.

n mod 8: k
0: k = 0
1: k = 7
2: k = 6
3: k = 13
4: k = 12
5: k = 3
6: k = 10
7: k = 1

As factors of t, these words are overlap-free. Their delicacy is shown by induction. The base cases can be verified. The induction hypothesis is simple because both T_0 and T_1 are delicate overlap-free.
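Continuing with the helpers sketched earlier, the construction behind Theorem 1.4 can be spot-checked directly. The table lookup and the reading of the construction (a prefix of length n of t with its first k letters removed) are my interpretation of the proof above, so the loop only prints what it finds rather than asserting it.

```python
# k chosen from the residue table above; n = 9 uses the special word 001011001.
K_BY_RESIDUE = {0: 0, 1: 7, 2: 6, 3: 13, 4: 12, 5: 3, 6: 10, 7: 1}

def theorem_1_4_word(n):
    if n == 9:
        return "001011001"
    k = K_BY_RESIDUE[n % 8]
    return thue_morse_prefix(k + n)[k:]   # drop the first k letters, keep length n

for n in range(7, 32):
    w = theorem_1_4_word(n)
    print(n, is_delicate(w, is_overlap_free), w)
```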
For n ∉ {m | m ≥ 7}, a computer search shows there are no delicate overlap-free binary words of length n.

We now prove Theorem 1.5, which says that there is a delicate cubefree binary word of length n if and only if n ∈ {20, 21, 22, 29, 33, 34, 35} ∪ {m | m ≥ 38}. The images of all cubefree binary words of length 7 under ϕ are cubefree, so by Lemma 2.2, ϕ preserves cubefreeness. Further, both ϕ(0) and ϕ(1) are delicate cubefree, so applying ϕ to a prefix of a cubefree binary word results in a delicate cubefree binary word. Let w be the image of t under ϕ, and let w_k be the image of the first k letters of t under ϕ. Let k be the largest integer such that |w_k| ≤ n. If n − |w_k| = 0, we're done. Otherwise, based on n − |w_k|, we choose a word of the desired length from the following table.

n − |w_k|: word
1: 00101001101001101011001w_{k−1}
2: 011001001100110110011001w_{k−1}
3: 0010100110100110101101001w_{k−1}
4: 00101001101001101011001010w_{k−1}
5: 001010011010011010110010110w_{k−1}
6: 0010100110100110101100101001w_{k−1}
7: 01100100110011011001100100110w_{k−1}
8: 001010011010011010110100101001w_{k−1}
9: 0010100110100110101100101001010w_{k−1}
10: 00101001101001101011001010011010w_{k−1}
11: 001010011010011010110010110011001w_{k−1}
12: 001010011010w_k
13: 1001010011010w_k
14: 001010011010011010110010100101001101w_{k−1}
15: 0010100110100110101100100110011011001w_{k−1}
16: 01100100110011011001100100110011011001w_{k−1}
17: 00101001101001101w_k
18: 100101001101001101w_k
19: 1101011001011001010w_k
20: 01101011001011001010w_k
21: 001010011010011010110w_k

The delicacy of these words can be verified by changing the letters at the beginning one at a time and finding the cubes in the resulting words. To prove that these words are cubefree, we need the following claim.

Claim. No prefix of w is an overlap.

Proof. We prove the stronger result that no prefix of w is a square. Suppose w has a square prefix XX. Then |X| ≥ 12. But in w, 011010110010 occurs only at the beginning of ϕ(0), so XX must be the image of a square prefix of t under ϕ, which contradicts Lemma 2.3.

Now suppose one of the words in the above table contains a cube XXX. Then XXX must start in the prefix (before w_k or w_{k−1}), since w is cubefree. But one can verify that X also can't be entirely within the prefix, so X = PY, where P is a suffix of the prefix and Y is a prefix of w. Thus, XXX = PYPYPY, and YPYPY is a prefix of w, which contradicts the claim.

Case II: 1 ≤ |w_1| ≤ 4. The first 8 letters of w_i are 01101001. If |w_1| = 1 and a = 0, this is equivalent to |w_1| = 0 and a = 0, so the result follows from Case I. Otherwise, inserting 0 or 1 into 01101001 creates an overlap.

Claim. w_i is irreducible overlap-free for all i.

Proof. We again verify the result for w_0 and then suppose i ≥ 1. Since w_i is a concatenation of copies of T_0 and T_1, it suffices to verify that T_0T_0, T_0T_1, T_1T_0, and T_1T_1 are irreducible overlap-free.

Claim. w_i is delicate overlap-free for all i.

Proof. We again verify the result for w_0 and then suppose i ≥ 1. Since T_0 and T_1 are delicate overlap-free, the claim follows. This concludes the proof of the theorem.

Generalizing Delicacy

We conclude by introducing a natural generalization of delicacy and raising a question about it for further study. A k-delicate squarefree word is a nonempty squarefree word such that changing between 1 and k of its letters to other letters from the alphabet creates a square. The definition again analogously extends to overlap-free and cubefree words.
2021-08-17T01:16:19.433Z
2021-08-15T00:00:00.000
{ "year": 2021, "sha1": "b403f7f61053f99beabec90b26282ff27c1e2ae4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f0fd9601bce04aad07484d331e8bf17975bc7ff2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
243793632
pes2o/s2orc
v3-fos-license
Influence of friction in the drawing a cylindrical part steel – part II

In this paper we analyze the variations of the deformations, of the flow stresses, and of the wall thickness in a simulation of the drawing process carried out under two conditions: without lubricant and with a liquid lubricant. The analysis presented aims to determine the influence of friction on the quality of the drawn steel B piece.

Introduction

It is known that lubrication influences the quality of a work-piece. This quality implies both a low surface roughness and the dimensional precision of the work-piece. An analysis of the state of stress and deformation of the material during processing, with or without lubrication, gives information on this quality. The simulation of the deep drawing process without wall thinning was performed with the finite element software Marc-Mentat [1]. The piece is cylindrical, with a flange and a flat base connected to the vertical wall. The blank is a disc made of the deep-drawing steel grade A5, considered here as steel B [2], with a diameter of 17 mm and a thickness of 0.4 mm ([3], [4]). To set up the simulation, the material behaviour had to be introduced through the points of the characteristic σ–ε curve (Figure 1) [1]; the characteristic curve used was built from data recorded in the tensile test (Figure 2) ([5], [7]). The form of the active elements is shown in Figure 3. They were considered rigid, linear elastic, with E = 2.1·10^5 N/mm^2 and Poisson's ratio ν = 0.3 [1]. Their dimensions are: punch diameter d_p = 7.7 mm, die diameter d_m = 8.5 mm, punch radius r_p = 2 mm, die radius r_m = 2.5 mm, height of the piece h = 5 mm, clearance of the active elements j = 0.4 mm, and admissible coefficient of deep drawing without thinning m = 0.56 ([3], [5]).

Figure 3. Meshing of the active elements, of the blankholder, and of the blank in finite elements.

Results of numerical simulation

The initial deep drawing simulation was performed under the following friction conditions: μ = 0.08 for the contact between the blank and the die and μ = 0.25 for the contact between the blank and the punch. To see the influence of friction on the process, the deep drawing simulation was repeated with μ = 0.22 at the contact between the blank and the die; the coefficient of friction between the blank and the punch was kept the same. The resulting deformations are shown in Figure 4. The location of the largest values is justified by the deformation history of that portion of the material: to get there, the material passed from the flat blank shape through a tension–compression state with dominant compression in the connection area between the flange and the wall of the piece, then through a tension–compression state with dominant stretching in the wall of the piece, and finally reached this strongly hardened area. The high level of deformation is explained by the formation of the cylindrical walls. The values of the flow stress are at their maximum throughout the piece, with a very small exception in the inner part of the cylindrical wall (Figure 5). Compared with other types of steel [6], these stresses have lower values due to the very good plasticity of A5 steel. The force–stroke diagram of the punch is shown in Figure 6. The punch force required to deform A5 steel is in accordance with its good plasticity; we also see the correlation of this diagram with both the variations in deformations and the flow stresses.

Variation of wall thickness in longitudinal section

From Figure 7a one can deduce that there is a bulging of the bottom of the piece.
It is due to a residual bending moment caused by the bending and straightening of the material as it passes over the die radius, and by the stretching corresponding to the formation of the part wall. This moment determines the final curving, which represents a form of elastic springback, because the material at the base of the punch is in contact only with the punch radius. From Figure 7b it is observed that the thinning increases from the point of connection with the bottom radius of the punch over part of the connected area, after which it begins to decrease slightly to the right of this area, so that the thickness reaches the nominal value at the entry into the die radius; the material then thickens towards the flared end portion of the work-piece.

The force–stroke diagram

The force–stroke diagram of the punch is shown in Figure 10. It is noted that the maximum force is lower than when the lubricant is used, because the part breaks during processing.

Figure 10. The force–stroke diagram for the steel B piece, without lubricant.

Variation of wall thickness in longitudinal section

The variation of wall thickness in the longitudinal section is shown in Figure 11. Thinning of the material is evidenced in the flat bottom. In the case of this steel, the wall thickness decreases abruptly at the beginning of the bottom punch radius. The large distance between nodes suggests a loss of continuity of the material, which is in accordance with the maximum deformation values corresponding to that area. Then the thickness starts to increase slightly, reaching the nominal value near the exit of the die radius; the material then thickens up to the flared end portion of the piece.

Figure 11. Variation of thickness in longitudinal section, without lubricant, for the steel B piece.

Conclusions

Comparing the latest results (corresponding to the drawing process conducted without lubricant) to those outlined in section 3.1, for steel B (grade A5), the following conclusions result:
- increasing the coefficient of friction leads to scrap;
- the manufacturing quality of the drawn steel piece cannot be discussed, because the criterion of physical continuity of the material is not met.
2020-08-01T00:31:24.182Z
2018-07-15T00:00:00.000
{ "year": 2018, "sha1": "ae693a701c91e197d1c7bdf540295df080fba9c2", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.21279/1454-864x-18-i1-072", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "459fdfd6ff491231111a41e62ccaac5c7c284170", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
265156925
pes2o/s2orc
v3-fos-license
Burden of Ischemic Heart Disease and Its Attributable Risk Factors in North Africa and the Middle East, 1990 to 2019: Results From the GBD Study 2019 Background The North Africa and Middle East (NAME) region has one of the highest burdens of ischemic heart disease (IHD) worldwide. This study reports the contemporary epidemiology of IHD in NAME. Methods and Results We estimated the incidence, prevalence, deaths, years of life lost, years lived with disability, disability‐adjusted life years (DALYs), and premature mortality of IHD, and its attributable risk factors in NAME from 1990 to 2019 using the results of the GBD (Global Burden of Disease study 2019). In 2019, 0.8 million lives and 18.0 million DALYs were lost due to IHD in NAME. From 1990 to 2019, the age‐standardized DALY rate of IHD significantly decreased by 33.3%, mostly due to the reduction of years of life lost rather than years lived with disability. In 2019, the proportion of premature death attributable to IHD was higher in NAME compared with global measures: 26.8% versus 16.9% for women and 18.4% versus 14.8% for men, respectively. The age‐standardized DALY rate of IHD attributed to metabolic risks, behavioral risks, and environmental/occupational risks significantly decreased by 28.7%, 37.8%, and 36.4%, respectively. Dietary risk factors, high systolic blood pressure, and high low‐density lipoprotein cholesterol were the top 3 risks contributing to the IHD burden in most countries of NAME in 2019. Conclusions In 2019, IHD was the leading cause of death and lost DALYs in NAME, where premature death due to IHD was greater than the global average. Despite the great reduction in the age‐standardized DALYs of IHD in NAME from 1990 to 2019, this region still had the second‐highest burden of IHD in 2019 globally. regional and national variations. 6One of the highest age-standardized IHD prevalence and incidence rates is observed in the North Africa and Middle East (NAME) region. 2,4,6AME is an economically diverse region, where countries are at different stages of development with a diverse distribution of wealth and resources, and remarkable socioeconomic disparities.Besides the ongoing epidemiologic transition in NAME with evolving patterns of fertility, death, life expectancy, and leading causes of death, socioeconomic disparities worsen the access to preventive measures to control noncommunicable diseases (NCDs). 7,8][10] IHD seems to affect younger individuals across NAME countries compared with other regions. 8,10Consequently, the age-standardized disability-adjusted life year (DALY) rates for IHD are considerably higher than the global picture, and IHD is estimated to cause 26% of all deaths in NAME. 4,6,118][9] Therefore, updated estimates of regional and national IHD burdens are essential in establishing benchmarks for decisions in policy making and resource allocation.It is also important to evaluate the role of demographic changes, sex differences, risk factor patterns, and national trends to inform timely and effective strategies.Nevertheless, to the best of our knowledge, no updated study has reported detailed epidemiologic estimates with time comparisons for IHD within countries in this region. 8,11,12As part of the GBD (Global Burden of Diseases, Injuries, and Risk Factors study 2019), this investigation was conducted to provide an update on the imposed burden of IHDrelated death and morbidity in NAME. 
METHODS Data Source Epidemiological data from the GBD was obtained from the Institute for Health Metrics and Evaluation to evaluate the epidemiology of the Institute for Health Metrics and Evaluation and its attributable risk factors in NAME between 1990 and 2019.The GBD is a multinational collaboration that reports comprehensive and systematic estimations for 369 diseases and injuries in 204 countries.The study was initiated in 1990 and has been updated regularly ever since with the addition of new data sources and methodological revisions to enable comparisons across time.2][3][4] The GBD study is approved by the University of Washington Institutional Review Board (STUDY00009060).Informed consent forms were not deemed necessary as the GBD uses deidentified collective data.The data used in this study are accessible via the Global Health Data Exchange (http:// ghdx.healt hdata.org/ gbd-2019) and the GBD Compare webpage (https:// vizhub.healt hdata.org/ gbdcompa re/ ).The reporting of the present study complies with Guidelines for Accurate and Transparent Health Estimates Reporting. 13 CLINICAL PERSPECTIVE What Is New? What Are the Clinical Implications? • There are some control plans for IHD in place in some NAME countries, including multisectoral and interdisciplinary committees for control of noncommunicable diseases and their risk factors, and expansion of catheterization laboratories and fibrinolytic injection facilities for the acute management of IHD; although some of them proved effective in reducing the IHD burden according to this study, they are yet to be adopted and evaluated by other countries in the region.6][17][18] The Institute for Health Metrics and Evaluation uses a 4-level hierarchy to classify risk factors.The sociodemographic index (SDI) is a summary measure of country development that incorporates income per capita, educational attainment level, and the total fertility rate, and is presented in quintiles. 19efinition of the NAME region is ambiguous and varies between institutions and studies.GBD defines 7 super-regions on the basis of epidemiologic similarity and geographic closeness, including high income; Latin America and Caribbean; sub-Saharan Africa; NAME; South East Asia, East Asia, and Oceania; Central Europe, Eastern Europe, and Central Asia; and South Asia.This analysis focused on the GBD NAME region, composed of 21 countries: Afghanistan, Algeria, Bahrain, Egypt, Iran, Iraq, Jordan, Kuwait, Lebanon, Libya, Morocco, Oman, Palestine, Qatar, Saudi Arabia, Sudan, Syrian Arab Republic, Tunisia, Turkey, United Arab Emirates, and Yemen. 20,21 Estimation Framework The burden of IHD was determined via standard epidemiologic measures including prevalence, incidence, death rates, years of life lost (YLLs), years lived with disability (YLDs), and DALYs measured with regard to age, sex, country, and SDI, with comparisons over time.Incidence and prevalence were based on a broad range of population-representative data.GBD employs a highly standardized method to estimate death due to different causes, which uses vital registration based on International Classification of Disease (ICD) codes 1,22 and verbal autopsy data as inputs to the Cause of Death Ensemble Model to produce time trends. 1,2,23LLs were calculated as the difference between the maximum country-specific life expectancy and the age at death.YLDs were the product of the prevalence and disability weights of IHD.DALYs were produced as the sum of YLLs and YLDs. 
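To make the burden measures defined above concrete, the underlying arithmetic reduces to a few lines. The sketch below uses invented numbers purely for illustration; the actual GBD estimation is age-, sex-, and location-specific and model-based.

```python
# Minimal sketch of the burden arithmetic described above (illustrative only).
def yll(deaths, age_at_death, reference_life_expectancy):
    # Years of life lost: deaths times the remaining standard life expectancy at death.
    return deaths * max(reference_life_expectancy - age_at_death, 0.0)

def yld(prevalent_cases, disability_weight):
    # Years lived with disability: prevalence times the disability weight.
    return prevalent_cases * disability_weight

def daly(yll_total, yld_total):
    # DALYs are the sum of YLLs and YLDs.
    return yll_total + yld_total

# Hypothetical example: 1,000 IHD deaths at age 60 against an 85-year reference
# life expectancy, plus 50,000 prevalent cases with a disability weight of 0.08
# (both figures are assumptions for illustration).
example_yll = yll(1_000, 60, 85)        # 25,000 years of life lost
example_yld = yld(50_000, 0.08)         #  4,000 years lived with disability
print(daly(example_yll, example_yld))   # 29,000 DALYs
```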
Statistical Analysis

In this study, we reported incidence, prevalence, deaths, premature deaths, YLLs, YLDs, and DALYs of IHD for NAME and its countries from 1990 to 2019. Moreover, we explored the impact of changes in age structure, population growth, and variations in incidence rate on the incidence of IHD using decomposition analysis. Briefly, expected IHD cases in 2019 were estimated using 2 hypothetical scenarios: first, applying the 1990 age-specific IHD incidence rates to the 2019 population size; and second, applying the 1990 age structure and age-specific IHD incidence rates to the 2019 population size. The difference between the 2 scenarios was considered as the contribution of population aging. Differences between the second scenario and the actual IHD incidence in 2019 were considered as the contribution of population growth. Finally, any remaining changes in IHD incidence were attributed to alterations in age-specific IHD incidence rates [24]. Data are reported as point estimates (95% uncertainty interval). All statistical analyses and visualizations were done using R software version 4.0.4 (R Foundation for Statistical Computing, Vienna, Austria).

Disability-Adjusted Life Years

In 2019, 18.0 million DALYs (61.3% men) were lost due to IHD in NAME. From 1990 to 2019, the age-standardized DALY rate of IHD significantly decreased by 33.3% (95% uncertainty interval, −41.4 to −25.0) and reached 4158.9 per 100 000 population (3650.7-4751.7) in NAME (Table 1; Figure 1). The corresponding worldwide figures were −28.6% (−33.3 to −24.2) and 2243.5 (2098.7-2385.0), respectively. Although NAME had the highest worldwide age-standardized DALY rate of IHD in 1990, the steady decrease of this measure in the study period resulted in convergence with the global average. Noticeably, this decreasing trend was consistent across all 21 countries of NAME, ranging from a 10.5% reduction (−30.2 to 16.8) in Libya to a 69.1% decrease (−75.1 to −60.7) in Bahrain; nevertheless, it did not reach statistical significance for Egypt, Libya, Saudi Arabia, Syrian Arab Republic, Tunisia, and Yemen (Table S1; Figure S1). Egypt had the highest and Turkey had the lowest age-standardized DALY rate of IHD in NAME in 2019 (Figure 2). The decreasing trend of age-standardized DALYs mostly arises from the reduction of YLLs rather than YLDs. From 1990 to 2019, the contribution of YLDs to the age-standardized DALY rate attributed to IHD was <4% for all NAME countries (Figure 3).

Mortality Rate

In 2019, approximately 799 000 deaths were attributed to IHD in NAME (Table 1; Figure 1). From 1990 to 2019, the age-standardized death rate of IHD decreased across NAME countries, with rates varying from a 7.2% (−28.4% to 22.9%) decrement in the Syrian Arab Republic to a 64.9% (−71.4% to −55.9%) decrease in Bahrain; however, the decreasing trend was not statistically significant for Egypt, Libya, Morocco, Saudi Arabia, Syrian Arab Republic, Tunisia, and Yemen. Similarly, the decrease in age-standardized YLL rate ranged from 11.0% (−31.1% to 16.9%) in Libya to 69.9% (−76.0% to −61.3%) in Bahrain, with no statistically significant difference in Egypt, Libya, Saudi Arabia, Syrian Arab Republic, Tunisia, and Yemen (Table S1; Figure S1). Syrian Arab Republic had the highest and Kuwait had the lowest age-standardized death rate of IHD in NAME in 2019 (Figure 2).
Premature Death In 2019, the share of premature IHD death was 26.8% and 18.4% for women and men, respectively, in the NAME region, compared with 16.9% and 14.8%, accordingly, at the global level (Table 2).The greatest reduction of IHD premature death from 1990 to 2019 was recorded by Turkey (55.6% for women and 37.8% for men) followed by Iran (48.3% for women and 21.6% for men) and Lebanon (39.0% for women and 33.5% for men) in NAME.In contrast, the United Arab Emirates (64.8% for women and 14.9% for men) and Saudi Arabia (60.4% for women and 48.7% for men) showed a dramatic increase in the share of IHD premature deaths.A similar pattern was observed considering premature YLLs due to IHD (Table 2).From 1990 to 2019, Afghanistan demonstrated the widest sex inequality in IHD premature death, with 102.6% and 3.4% increase in the share of IHD premature death in men and women, respectively.In Palestine, the share of IHD premature death decreased by 10.1% in women from 1990 to 2019, while it increased by 31.5% in men (Table 2). Morbidity In S1; Figure S1).In 2019, Iran had the highest and Turkey had the lowest age-standardized prevalence rate of IHD in NAME (Figure 2). Incidence and Decomposition Analysis In 2019, 2.6 million incident cases (60.2% men) of IHD occurred in NAME.From 1990 to 2019, the agestandardized incidence rate of IHD remarkably decreased by 9.0% (−10.3% to −7.5%) and reached 613.9 (555.8-675.2) in NAME in 2019 (Table 1, Figure 1).The variation of this measure during the study period ranged from a significant 31.4% (−34.9% to −27.4%) decrease in Turkey to a significant 11.4% (6.1%-17.4%)increase in Oman (Table S1).In 2019, Iran had the highest and Turkey had the lowest age-standardized incidence rate of IHD in NAME (Figure 2).The number of incident cases of IHD grew by 135.1% in NAME from 1990 to 2019.According to the decomposition analysis, this increase was attributed to population growth (76.4%),age structure change (80.4%), and incidence rate change (−21.7%).Population growth and age structure change contributed to the increased number of incident cases of IHD in all NAME countries except for Afghanistan and Sudan; nonetheless, the incidence rate change varied from −80.3% in Turkey to 65.7% in the United Arab Emirates from 1990 to 2019 (Table S2). 
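The decomposition analysis reported above can be illustrated with a small numerical sketch. The populations and rates below are invented, and the exact assignment of the three components follows one common reading of the two-scenario construction described in the Methods (growth measured against the 1990 cases, aging as the difference between the two scenarios, and the remainder attributed to rate change), so that the three contributions sum exactly to the total change in cases.

```python
import numpy as np

# Hypothetical populations (thousands) and IHD incidence rates (per person-year)
# for three broad age groups; all numbers are made up for illustration.
pop_1990  = np.array([6000.0, 3000.0, 1000.0])
pop_2019  = np.array([8000.0, 6000.0, 3000.0])
rate_1990 = np.array([0.0005, 0.0040, 0.0150])
rate_2019 = np.array([0.0004, 0.0035, 0.0130])

cases_1990 = float(pop_1990 @ rate_1990)
cases_2019 = float(pop_2019 @ rate_2019)

# Scenario 1: 1990 age-specific rates applied to the 2019 population.
scenario_1 = float(pop_2019 @ rate_1990)
# Scenario 2: 2019 population size redistributed to the 1990 age structure,
# still with the 1990 age-specific rates.
structure_1990 = pop_1990 / pop_1990.sum()
scenario_2 = float(pop_2019.sum() * (structure_1990 @ rate_1990))

growth_contrib = scenario_2 - cases_1990   # population growth
aging_contrib  = scenario_1 - scenario_2   # change in age structure
rate_contrib   = cases_2019 - scenario_1   # change in age-specific rates

total_change = cases_2019 - cases_1990
assert abs(total_change - (growth_contrib + aging_contrib + rate_contrib)) < 1e-9
print(growth_contrib, aging_contrib, rate_contrib)
```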
Risk Factors From 1990 to 2019, metabolic risk factors, followed by behavioral and environmental/occupational risks, contributed the most to the number of IHD DALYs attributed to risk factors in the NAME region.In 2019, 15.9 (13.7-18.6),12.3 (10.5-14.4),and 6.9 (5.7-8.2) million IHD DALYs were attributed to metabolic, behavioral, and environmental/occupational risks, respectively, in NAME region (Table S3).The age-standardized DALY rate of IHD attributed to risk factors significantly decreased for combined all risk factors by 32.5% (−40.7% to −24.2%), metabolic risks by 28.7% (−37.1% to −18.9%), behavioral risks by 37.8% (−45.6% to −29.6%), and environmental/occupational risks by 36.4% (−44.7% to −27.7%) (Table S4).Considering the age-standardized DALY rate of IHD attributable to level 4 of risk factors, high systolic blood pressure, high low-density lipoprotein cholesterol, and high body mass index remained among the top 5 risk factors of IHD in NAME from 1990 to 2019.Although the age-standardized DALY rate attributable to high systolic blood pressure and high lowdensity lipoprotein cholesterol decreased by −28.9% (−38.3% to −18.7%) and − 32.9% (−41.4% to −24.0%), respectively, the age-standardized DALY rate attributable to high fasting plasma glucose increased by 31.0%(9.1%-55.6%).Notably, the attributable burden of smoking significantly decreased by 41.5% (−49.6% to −32.6%) during the study period.Considering level 2 of risk factors, dietary risks, high systolic blood pressure, and high low-density lipoprotein cholesterol were the top 3 risks contributing to age-standardized DALYs and death rates in most countries of NAME in 2019 (Figure S2). DISCUSSION In 2019, IHD was the leading cause of burden of disease and death in NAME and accounted for 18 million DALYs and 0.8 million deaths in this region.Noticeably, people in the NAME region were more likely to die prematurely due to IHD compared with the rest of the world.Although NAME had the greatest age-standardized DALY rate of IHD in the world in 1990, the burden of IHD remarkably decreased from 1990 to 2019.Despite this great reduction, NAME still has the second rank of the burden of IHD globally, after Central Europe, Eastern Europe, and the Central Asia super-region.From 1990 to 2019, the age-standardized rates of DALYs, YLLs, and YLDs due to IHD significantly decreased; however, the reduction of DALYs arise mostly from a decrease in YLLs or death, rather than YLDs or morbidity of IHD.The decomposition analysis of the IHD incidence revealed that the inherent incidence change accounted for a >20% reduction in the incidence of IHD.Despite major reductions in the DALYs attributed to risk factors in NAME, metabolic risk factors and dietary risks remained to be addressed in IHD control. NAME had the greatest burden of IHD in 1990, and despite a considerable reduction of its burden, NAME remained as one of the most alarming regions for IHD compared with others.In this study, we found that the reduction in death is the main driver for the decrease in DALYs of IHD in NAME, and YLDs of IHD decreased significantly but far less than YLLs of IHD.Generally, this finding underscores the greater importance of preventive and therapeutic measures rather than rehabilitation facilities in the region; nevertheless, each country should tailor this recommendation on the basis of its resources and epidemiologic profile of IHD such as YLLs/YLDs ratio and trend. 25here are 2 widely accepted strategies to prevent IHD, population-based and individual-based. 
26,27The presence of an extensive primary health care network enables the health system to implement populationbased measures for risk factor control and, consequently, a more equal reduction of the IHD burden.The Individual-based approach, however, depends on sophisticated costly health care personnel and results in a more diverse reduction of the IHD burden. 28lthough each of these policies requires certain infrastructures and human resources, they can be adopted by NAME countries on the basis of their health system profile.For instance, the extensive primary health care network of Iran can be leveraged for expanding primary and secondary preventive measures to control IHD risk factors. 9However, tackling NCDs, including IHD, requires multilevel approaches to implement tailored action plans for the control of these conditions.Some NAME countries, including Iran, 29,30 Turkey, 31,32 and the United Arab Emirates, 33 developed customized strategies to tackle NCDs, which may have lessons for other countries in the region.These strategies included the establishment of NCD monitoring programs as well as multisectoral collaboration between government, ministries of education and health, municipalities, and academia. 34Moreover, tax implementation on tobacco products in some NAME countries might lead to the observed significant reduction of the attributable burden of smoking in the study. 34Finally, the World Health Organization's best buys for control of NCDs should be evaluated in terms of cost-effectiveness and adopted on the basis of health system priorities, especially in low-and middle-income countries. 35he proportion of IHD premature death decreased in NAME from 1990 to 2019; however, this reduction was not consistent across NAME countries.7][38] Adoption of premature death as part of the Sustainable Development Goal target 3.4 by the United Nations highlights its public health priority. 39It should be noted that we defined sexspecific premature death for IHD in this study, and this definition is not the same as the definition of the United Nations for NCDs, albeit both definitions share mutual concepts.Given the higher mortality rate of young adults, especially women, in the setting of acute IHD, [40][41][42][43] improved health care access and quality of care, 5,44 particularly in the acute setting, along with enhancement of medical education and health care workers' awareness on unusual signs and symptoms of IHD in the young population may address the high rate of IHD premature death in the NAME region.6][47] In addition, access to coronary artery bypass grafting procedures, which showed cost-effectiveness in NAME countries, should be improved to diminish the IHD burden. 48,49owever, the establishment of these treatment facilities is resource-intensive, imposing a financial burden on the health system.This should be factored into public health policymaking and tailored on the basis of the health priorities and health expenditures of each country.Notably, expanding therapeutic infrastructures may cut the burden of IHD in the short term; nevertheless, implementing nationwide preventive programs, especially for high-risk patients such as those with a family history of IHD or patients with hereditary dyslipidemia, may be more costeffective as a long-term policy. 26,27he NAME region demonstrated a remarkable improvement in the age-standardized DALY rate of IHD attributed to risk factors, which is comparable with the global measure, 32.5% versus 28.8%. 
1,2This decrease might play an important role in the significant reduction of the IHD burden in this region; however, much remains to be done.Metabolic risks remained the major contributor to IHD in NAME.These risks need to be reduced by multiple risk factor interventions that have shown effectiveness in low-and middle-income countries. 50nother strategy should focus on enhancing awareness and effective treatment of cardiovascular risk factors.][53] The predominance of metabolic and dietary risks contributing to IHD calls for expanding social media campaigns, a cost-effective tool, to encourage people to adopt a more healthy lifestyle, including a healthy diet and increased physical activity. 35][56] Our findings provide a general epidemiologic picture of IHD in NAME countries.These data inform public health policymakers to adopt customized and cost-effective strategies according to their available infrastructure and resources to control IHD; nonetheless, disparities across demographic subgroups of age, sex, race, and ethnicity should be addressed. 5Between 1990 and 2019, we found a remarkably greater increase in IHD premature death in Afghan men (>100%) compared with women (<5%), and an increase in IHD premature death in men compared with a decrease in women in Palestine.These sex disparities may arise from poor quality of data sources including vital registrations, which in turn introduces uncertainty into epidemiologic modeling.However, they may imply a true inequality that should be further delineated and addressed.8][59][60][61] This method involves dimension reduction strategies to combine death, DALYs, YLLs, YLDs, incidence, and prevalence to provide an epidemiologically plausible index representing the care quality of IHD. 5 Therefore, the Quality of Care Index holds promise to assess the disparities in the quality of care of IHD across demographic subgroups in NAME and other regions using GBD. Among NAME countries, Turkey had one of the lowest burdens of IHD in the region from 1990 to 2019 and successfully managed to cut the IHD burden by >50%.This achievement correlates and may originate from the successive growth of the gross domestic product per capita and SDI of Turkey during the study period. 19,62In addition to the control of NCDs, 31,32 Turkey established percutaneous coronary intervention facilities across the country and is actively monitoring the management of IHD by conducting nationwide studies and acute myocardial infarction registries. 63Reforming the national health system of Turkey may also be a factor in the significant reduction of the IHD burden.While Iran had the highest incidence and prevalence of IHD in NAME from 1990 to 2019, its DALYs and death rates due to IHD were far less than other NAME countries.Although the higher prevalence of IHD in this country might be the result of higher-quality screening and diagnostic strategies resulting in identifying more cases, it may also reflect inadequate primary prevention strategies for controlling risk factors before they result in a clinically identifiable IHD. 64The significantly lower DALYs and death rate might also partially arise from better diagnosis strategies, resulting in diagnosis in earlier stages and therefore milder IHD, although it might be partially a reflection of better secondary prevention of IHD in Iran. 
9,25In this country, availability and affordability of acute care for IHD were improved in recent years, 45 and national efforts were directed at a multisectoral approach to address NCDs 29,30 ; nevertheless, these efforts aimed at the primary prevention of IHD should be strengthened to reduce the incidence of IHD in future years. 64The dramatic increase in the share of IHD premature death in the United Arab Emirates and Saudi Arabia is alarming and necessitates an appropriate public health response.These increases may be due to their increase in gross domestic product and the bell-shaped association between SDI and IHD death. 4,11,622][3][4] Incompleteness or even lack of vital registrations, verbal autopsy, and other health data sources are major drawbacks for the estimation of IHD burden in NAME.These limitations may be more exaggerated in this region due to the ongoing wars and conflicts in NAME.Investment in health data management systems and other infrastructures is highly encouraged to improve the quality of input data for IHD modeling.While we interpreted the trends of the IHD burden in the context of current IHD control plans in NAME countries to highlight the possibly successful health policies, we cannot establish causal relationships between the policies and trends in this study.Future studies are warranted to evaluate the downstream effects of specific IHD control policies. CONCLUSIONS IHD was the leading cause of death and lost DALYs, accounting for 0.8 million deaths and 18 million DALYs in NAME in 2019.The age-standardized DALY rate of IHD decreased by >30% in NAME from 1990 to 2019; however, NAME still has the second rank in the burden of IHD globally.People in NAME were more likely to die prematurely due to IHD compared with the global average.While the age-standardized DALY rate of IHD attributed to risk factors significantly decreased from 1990 to 2019, metabolic and dietary risks are still major setbacks for the control of IHD.There are some control plans for IHD in place in some NAME countries, including multisectoral and interdisciplinary committees for control of NCDs and their risk factors, and expansion of catheterization labs and fibrinolytic injection facilities for the acute management of IHD.Although some of them proved effective in reducing the IHD burden according to this study, they are yet to be adopted and evaluated by other countries in the region.Health inequalities, and sex, ethnicity, and age disparities should be on the agenda of future research in NAME to ensure uniform improvement of the IHD burden in the region. Figure 1 . Figure 1.Time trend of age-standardized incidence, prevalence, death, and DALY rates of IHD in NAME from 1990 to 2019.DALY indicates disability-adjusted life years; IHD, ischemic heart disease; and NAME, North Africa and Middle East. Figure 2 . Figure 2. Ranking of NAME countries in terms of age-standardized incidence, prevalence, death, and DALY rates of IHD from 1990 to 2019.DALY indicates disability-adjusted life years; IHD, ischemic heart disease; and NAME, North Africa and Middle East. Figure 3 . Figure 3. Age-standardized DALY rate of IHD and the contribution of YLLs and YLDs in NAME countries from 1990 to 2019.DALY indicates disability-adjusted life year; IHD, ischemic heart disease; NAME, North Africa and Middle East; YLDs, years lived with disability; and YLLs, years of life lost. Table 1 . 
Age-Standardized Incidence, Prevalence, Death, DALY, YLL, and YLD Rates of IHD and Their Percentage Change From 1990 to 2019 in NAME. Data in parentheses are 95% uncertainty intervals. DALY indicates disability-adjusted life years; IHD, ischemic heart disease; NAME, North Africa and Middle East; YLD, years lived with disability; and YLL, years of life lost.

The variation of the age-standardized YLD rate during the study period ranged from a significant 26.9% (−32.7% to −21%) decrease in Turkey to a significant 23.4% (15.3%-32.4%) increase in Oman. Comparably, the percentage change in the age-standardized prevalence rate from 1990 to 2019 varied across countries (Table S1; Figure S1).

Table 2. Share of Premature Deaths and YLLs due to IHD and Their Percentage Change From 1990 to 2019 in the World, NAME, and NAME Countries. IHD indicates ischemic heart disease; NAME, North Africa and Middle East; and YLLs, years of life lost.
2023-11-15T06:17:29.621Z
2023-11-13T00:00:00.000
{ "year": 2024, "sha1": "dbdb5e448d06bb85f1e3f7c3d0c9aab713e7ad5a", "oa_license": "CCBYNCND", "oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.123.030165", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "749302a94c089b94dd3004c6ce964c3c3a1aa9c6", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265059764
pes2o/s2orc
v3-fos-license
Occluded Face Recognition based on Deep Learning

Compared with traditional sparse-representation and occlusion-dictionary methods, deep learning-based face recognition methods are being used more and more widely in the field of face recognition. However, in practice, face recognition results are greatly influenced by illumination intensity, shooting angle, occlusion by masks and sunglasses, and other factors. This paper therefore discusses face recognition under occlusion. To address the problems of large facial pose changes and local occlusion, respectively, an offset network and a weight network are introduced into the convolutional neural network. In the following, the recognition accuracy obtained by introducing the offset network, the accuracy obtained with the weight network, and the accuracy obtained by combining the two are compared with the traditional face recognition model VGG16.

Introduction

Face recognition has been an active and important research topic in computer vision and biometrics over the past decade [1]; face recognition technology is widely used in criminal identification, company attendance, community management, and station and customs monitoring. Face recognition models such as linear discriminant analysis, the Gaussian mixture model, and the support vector machine have greatly promoted progress in frontal face recognition [2], and traditional face recognition technology has become increasingly mature, achieving high accuracy under normal conditions. However, the accuracy of traditional algorithms is affected by objective factors such as shooting angle, illumination intensity, mask occlusion, and eye occlusion, which degrade recognition accuracy in the presence of a face mask. Against the background of disease transmission, masked face recognition can increase efficiency while helping to maintain social security and to monitor masked robbery and theft in the community. Therefore, improving existing masked face recognition algorithms can reduce the cost of epidemic prevention and public security, and can also serve as a reference for other related fields.
For occlusion of face recognition, academic has been studied for many years, in recent years, the emergence of deep learning technology gradually replaced the traditional, such as sparse representation and occlusion dictionary processing method, in 2011, Zhaohua Chen and Rui Min use the principal component analysis and improve the support vector machine method detection, and then use block based weighted local binary mode only handle the occlusion face area.[3] In 2014, A Morelli Andres proposed a face recognition algorithm for occlusion detection based on compression sensing, which extracts recognition information by excluding occlusion regions during the recognition process.[4] In 2018, Weitao Wan proposed a MaskNet model which can set higher weights for the hidden units of the non-occluded part of the face and lower weights for the hidden units of the occluded part of the face.The experimental results show that the MaskNet model can effectively improve the robustness of the convolutional neural network model in occluded face recognition.[5] In conclusion, the predecessors on improving recognition accuracy breakthrough generally to reduce the occlusion part of the weight value and through the mapping function correction face, this study will combine the two into a new recognition network optimization, design a new loss function detection model accuracy, in order to improve the algorithm under the condition of face with occlusion recognition accuracy. Multi-angle Face Recognition When face Angle changes, it will not only cause some misses of facial features, but also the feature vector after face coding change.At present, there are the following ideas to solve the problem of face angle: 3D reconstruction of the face, and identification of facial information, which is costly, which is commonly used in deep learning.The face recognition method based on poses estimation estimates the face pose, so as to perform face alignment, which can solve the problem of face offset to a certain extent.The face recognition method based on facial key points is to detect the features of the face that has not changed due to rotation, and it is also a commonly used method in deep learning. And this paper will adopt the attitude of positive, face Angle changes after the coded vector and face attitude after coding the vector difference between the vector, and this paper will use the offset vector to represent the vector difference, the vector difference is the connection between offset face and positive face, vector of different pose face plus offset vector can get the vector of positive face.There is a mapping relationship between the vector difference between the offset face and the front pose face.The frontal pose face can be regarded as an offset face with an offset angle of zero.Therefore, we designed the offset network function to use the fitting ability of the deep neural network to learn this mapping function [6].By observing the face images of different poses in reality, we can find that even in the case of the posture face, we can still capture a part of the face information, but the information has geometric deformation relative to the positive image. 
The generalization ability of the deep learning model is greatly affected by the distribution of data sets.It is difficult for the model to learn the accurate depth features when the face attitude changes greatly.Therefore, the model should not only be able to output the corresponding vector difference according to the face offset Angle, but also apply this method to the future attitude recognition task.So before training face recognition model need to establish a vector compensation mechanism, assuming that there is a complex function y=a (x), x represents the vector of posed face, and y represents the vector difference between offset face and positive face, the offset network mechanism is to learn a (x).The learning process of the offset network is the process of continuously assigning learning tasks to the network, and the network continuously generates corresponding mapping functions to complete the learning. In this paper, the training method is based on supervised learning.The output vector of the offset network is the weighted sum of the output vector of the convolution layer and also the linear combination of the coding layer output, so the output of the ReLu function in the offset network can be applied to many nonlinear models, the ReLu function is a piecewise linear function that treats negative values as 0 while leaving positive values unchanged.At the same time, compared with other activation functions, so it does not have the problem of gradient disappearance. When two images of the same person are encoded by the same coding layer, X and x are output, and then the two establish a vector compensation mechanism through the offset network, and the offset network outputs A and a to achieve the effect of X+A=a+x.During the learning process, the model will gradually learn the compensation function of the vector difference to realize face recognition in different poses. The offset network will be directly stitched after the convolutional neural network and the batch normalization layer.In the case of supervised learning, the vectors difference of the front image and the side image can be learned without affecting any performance of the original model.And this learning result can be applied to multi-classification tasks of face data. Face Local Occlusion Recognition For facial mask, glasses, shadow occlusion problem, this paper adopts the method of introducing a weight network, through the smaller weight value of occlusion area to reduce the influence on recognition results, this requires the establishment of a weight allocation model in face recognition to combine the coding vector after passing the coding layer with the weight vector after the allocation.Face recognition result comes after the coding layer coding vector and allocate the combination of the weight vector.In this paper, whether the learning of the compensation vector or the weight vector, the underlying idea of the network is based on the twin network. 
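Before the weight network is developed further below, a minimal sketch may help fix ideas about the offset (compensation) head just described. The layer sizes, the module names, and the use of a mean-squared consistency loss are assumptions of this sketch rather than details given by the paper; only the relation X + A = x + a between two embeddings of the same identity comes from the text.

```python
import torch
import torch.nn as nn

class OffsetNetwork(nn.Module):
    """Small fully connected head, spliced after the face encoder, that predicts
    a compensation (offset) vector for an encoded face."""
    def __init__(self, dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, embedding):
        return self.net(embedding)

def offset_consistency_loss(offset_net, x_front, x_pose):
    # x_front, x_pose: encoder outputs for a frontal and a posed image of the
    # same identity. Training pushes X + A toward x + a, as described above.
    a_front = offset_net(x_front)
    a_pose = offset_net(x_pose)
    return torch.mean((x_front + a_front - (x_pose + a_pose)) ** 2)

# Toy usage with random tensors standing in for encoder outputs.
net = OffsetNetwork(dim=512)
x_front, x_pose = torch.randn(8, 512), torch.randn(8, 512)
loss = offset_consistency_loss(net, x_front, x_pose)
loss.backward()
```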
When learning the weight vector, two face images are generally needed.At least some data of the triplet input data pair can meet the randomly selected front image as the reference sample.At the same time, the local occluded image of the same category is used as the positive sample.Each kind of other faces feature vector distance close and approximate overlap, at this time due to the reference samples and shade samples are different, the latter because shade will lose part of the facial information, and the feature vector coded by the coding layer is also useless.The role of the weight network is to emphasize the similarity in the shade part, weaken the influence of local occlusion on identification results. Weight vector essence is the allocation of different parts of the feature vector, the reference sample and local occlusion of the positive samples for reference samples and samples after the convolution layer will be N * N feature matrix, the reference sample matrix is defined as X, shielding sample defined as x, the two matrices exist local different because of occlusion, and the weight vector of different parts of the two matrix characteristics gives different weight w.The smaller w is, the smaller the effect of the feature on the result.Thus, different weight vectors Y and y are output.For the weight vector output vector Y and y, the vector y has eliminated the invalid features of the occluded part, and assigned 0 to the value of some features in the vector, but the output vector Y does not assign 0 to any feature value.So, in order to ensure that the two images except for the same feature value, here defined vector z, each dimension in the vector z value, are the smallest vector of Y and y in the same dimension, the eigenvectors of the unconcluded part of the two images can be made the same, and the difference in the occluded part can be minimized.Then the vector z is matrix multiplied with the original encoded vector, making F (x) =x * z, f (x) = x * z, and making the two equal.In this learning process, the weight vector can gradually learn the real weight, and the learning results of the model will be used for the classification task in future tests. Model Training In this paper, LFW [7] faces database and private data are used for result verification and analysis.The training set is divided into: (1) 1989 categories from LFW, with 1 to 10 images in each category.(2) 200 category labels come from the private data training set, in which 75 different categories are used as the test set, and the rest are used as the training set.There are about 10 images in each category in the private data set, and the pose faces data with different angles and facial occlusion data such as sunglasses are added to the private data set.Due to the diversity of LFW datasets in terms of partial occlusion and illumination, even in the same category of labels, the differences between different face images are very large, and there is only one image in some categories.In the private data, many pose images and partial occlusion images from different angles are used, such as wearing a hat, wearing sunglasses, and hairstyle occlusion.These data make the training set a face recognition data set that is currently difficult and challenging. 
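The weight-network mechanism described above (per-feature weights, the element-wise minimum z of the two weight vectors, and matching x·z against X·z) can be sketched in the same style. The sigmoid output range and the mean-squared matching loss are assumptions of this sketch, not choices stated by the paper.

```python
import torch
import torch.nn as nn

class WeightNetwork(nn.Module):
    """Predicts a per-feature weight vector for an encoded face; features that
    are unreliable under occlusion should receive weights close to zero."""
    def __init__(self, dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
            nn.Sigmoid(),   # weights in [0, 1] (an assumption of this sketch)
        )

    def forward(self, embedding):
        return self.net(embedding)

def masked_match_loss(weight_net, x_ref, x_occ):
    # x_ref: embedding of the un-occluded reference image; x_occ: embedding of
    # the partially occluded positive sample of the same identity.
    w_ref = weight_net(x_ref)
    w_occ = weight_net(x_occ)
    z = torch.minimum(w_ref, w_occ)            # shared, occlusion-aware weights
    return torch.mean((x_ref * z - x_occ * z) ** 2)

# Toy usage with random tensors standing in for encoder outputs.
net = WeightNetwork(dim=512)
x_ref, x_occ = torch.randn(8, 512), torch.randn(8, 512)
loss = masked_match_loss(net, x_ref, x_occ)
loss.backward()
```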
Model Training

In this paper, the LFW [7] face database and private data are used for result verification and analysis. The training set consists of: (1) 1989 categories from LFW, with 1 to 10 images in each category; and (2) 200 category labels from the private data set, of which 75 categories are used as the test set and the rest as the training set. Each category in the private data set contains about 10 images, and pose data at different angles as well as occlusion data such as sunglasses were added to it. Because of the diversity of the LFW data in terms of partial occlusion and illumination, the differences between face images are very large even within the same category label, and some categories contain only one image. The private data contain many pose images and partially occluded images from different angles, such as wearing a hat, wearing sunglasses, and hairstyle occlusion. These data make the training set a difficult and challenging face recognition data set.

Data Set Selection

The training data used in this paper are extracted from the LFW and private data sets by random sampling with replacement. Each set of training data retains the data of 125 different people from the private data set, and six percent of the total number of categories is extracted from the LFW data set; the training set is then further extracted from these data. The test data are also generated from the LFW and private data sets: a total of 100 different people were selected, 25 of them from LFW, with a certain proportion of frontal face data, and the other 75 from the author's hand-made private data set, whose images are mainly faces in different poses (side, slope) and faces under occlusion (sunglasses, hats, etc.). These data are combined into sample pairs, and 3000 samples are then extracted from them as a test set, of which frontal face data account for about 25% and posed faces at different deflection angles together with occluded faces account for about 75%; some of the samples are therefore difficult data.

For the training of the overall face model, the test data are likewise generated from LFW and private data: a total of 100 different people were selected, 50 from LFW and 50 from the private data. These data contain frontal faces as well as photos with different poses or partial occlusion; they are paired into samples, and 3000 samples are extracted as the test set.

The training of the overall face model includes three important parts: the custom spliced convolutional network, the offset network for the vector difference of the side face, and the adaptive weights for different local occlusions. The offset network is essentially the same as the weight network; both are composed of fully connected neural networks, with the ReLU function added to the neurons to improve applicability. Since pose offset and facial occlusion may exist at the same time in practice, both must be handled when the feature vector is processed. Assuming that the two exist at the same time, the algorithm in this paper first compensates the original face vector and then carries out the weight allocation. Defining the original vector as a, the compensation vector as y, and the weight vector as w, this process can be expressed as H(x) = w(a + y). The network then performs a Gaussian distance calculation on the output H(x); the distances among vectors of the same category are shortened, and vice versa.
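A minimal sketch of this combined step is given below. It is not the authors' implementation: reading the paper's "Gaussian distance" as a squared Euclidean distance is an assumption, and all vectors are placeholders.

```python
# Minimal NumPy sketch of the combined step H(x) = w(a + y): the encoding is first
# compensated by the offset network and then re-weighted by the weight network.
# Interpreting the "Gaussian distance" as a squared Euclidean distance is an
# assumption, and all vectors below are placeholders.
import numpy as np

rng = np.random.default_rng(2)
DIM = 128

a = rng.normal(size=DIM)             # original encoding of the probe face
y = rng.normal(scale=0.1, size=DIM)  # compensation vector from the offset network
w = rng.uniform(size=DIM)            # weight vector from the weight network

H = w * (a + y)                      # compensated, re-weighted encoding

gallery = rng.normal(size=DIM)       # encoding of a gallery face
distance = float(np.sum((H - gallery) ** 2))
print(f"distance to gallery encoding: {distance:.3f}")
```

During training, this distance would be reduced for pairs of the same identity and increased otherwise.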
Performance Evaluation Method

According to the above data extraction method, images of about 250 different people, about 2000 images in total, are extracted as training data. Among these images, those from the private data set are fixed, while those from the LFW data set are obtained by sampling with replacement; because only a small proportion of LFW is drawn, it is difficult for the same data to reappear across extractions.

When testing the network proposed in this chapter, the offset network and the weight network should, in theory, be used together to deal with the various complex situations in the data set. To verify the contribution of each network, the two networks are first tested separately, so as to judge the effect of each on the recognition of difficult data. Finally, the two are placed together in an ordinary VGG [8] network and tested with data extracted by the method described above; in the comparison experiments, the test data obtained are saved once drawn, to ensure that the different network models are tested on the same data.

In the test process, the sample images are fed into the trained model to obtain the face encoding vectors, and the cosine similarity between two feature vectors is then calculated to measure how similar they are. When the similarity is greater than a given threshold, the two images are judged to be the same person; when it is below the threshold, they are judged to be different people.

Analysis of Test Results

Adding the offset network proposed in this paper to the model improves the performance of the original network, and adding the weight network improves performance as well. The relationship between the number of training rounds and accuracy after adding the offset network is shown in Table 1, and the corresponding result after adding the weight network is shown in Table 2. After training both networks, the accuracy of the method in this paper and of the plain VGG network is shown in Table 3. It can be seen that introducing the offset network and the weight network into the VGG16 model effectively improves the accuracy of the VGG network, which can be increased by 5 percentage points in complex settings such as multiple poses and occlusion.

Summary

On the basis of the VGG network, this paper proposes a method to improve the accuracy of face recognition in the case of multiple poses and occlusion, and it significantly improves the accuracy of VGG-based face recognition under occlusion, although it only obtains a rough result for the identification of difficult data. Subsequent research will explore measurement schemes for different vectors and train multiple models to further improve the face recognition accuracy of the model in complex situations.

The face recognition model proposed in this paper also has some shortcomings. The offset network and the weight network are both fully connected networks; trying more complex networks (such as adding convolutional or residual structures) may bring further performance improvements. In addition, the premise for normal use of this method is that RGB images are captured in a natural environment; in a dark environment, the images captured by an infrared camera may not be directly usable, which will have a certain impact on normal face recognition. In future research work, we will consider designing a more compatible algorithm to improve the universality of the method so that it can handle face images in different complex environments, which is an important problem that this algorithm hopes to solve.

Figure 1. Schematic diagram of the offset vector.

Table 1. Model accuracy for different numbers of training rounds (with the offset network added).

Table 2. Model accuracy for different numbers of training rounds (with the weight network added).

Table 3. Comparison of the training accuracy of the proposed network and VGG16.
2023-11-09T16:13:55.739Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "5aea638df9474b2cafcd71a9ad1f0d6c8d239abf", "oa_license": "CCBY", "oa_url": "https://drpress.org/ojs/index.php/fcis/article/download/13134/12774", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4eec8193b4dd588a0b92c8cd609471f4e0d1eb92", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
265254956
pes2o/s2orc
v3-fos-license
Predictive value of serum creatinine and total bilirubin for long-term death in patients with ischemic heart disease: A cohort study Background Ischemic heart disease (IHD) has a high mortality in the population. Although serum creatinine (Cr) and serum total bilirubin (TBil) are rapid and readily available biomarkers in routine blood tests, there is a lack of literature on the prognostic value of combined Cr and TBil tests for IHD. This study aimed to evaluate a combined equation based on Cr and TBil to predict the long-term risk of death in IHD and to find indicators sensitive to the prognosis of IHD patients. Method In this study, 2625 patients with IHD were included, and the combined value and combined equations of Cr and TBil were obtained by logistic regression analysis based on Cr and TBil collected at the time of admission. Patients were divided into four groups according to the quartiles of the combined value. COX proportional hazard regression model was used to analyze the risk factors for long-term death in IHD patients. Receiver operating characteristic (ROC) curves were used to evaluate the prognostic effect of Cr, TBil and combined value on long-term death events. Results Logistic regression analysis was performed for long-term death events with Cr and TBil as independent variables, and the logit regression model was Logit(P) = 0.0129×TBil+0.007×Cr-0.417. Multifactorial Cox regression analysis showed that high values of the equation were independent risk factors for long-term death events (all-cause death: HR 1.457, 95% CI 1.256–1.689, P<0.001; cardiovascular death: HR 1.452, 95% CI 1.244–1.695, P<0.001). Combined Cr and TBil value are more valuable in predicting long-term death (AUC: 0.609, 95% CI 0.587–0.630, P<0.001). Conclusion Combined Cr and TBil assay is superior to single biomarkers for predicting long-term death in patients with IHD. High values of the equation are independent predictors of long-term death and can be used to identify patients at high risk for IHD. Introduction The prevalence of ischemic heart disease (IHD) in the population is growing by the day, and IHD can cause severe cardiac complications and has a worldwide trend of increasing mortality [1].Screening for sensitive and specific biological indicators, timely and accurate identification of high-risk patients, accurate assessment of their clinical prognosis, and early intervention are important to improve the prognosis of IHD patients. Myocardial energy demand and coronary blood flow imbalance determine IHD.Most commonly, atherosclerotic lesions result in coronary artery obstruction or stenosis, blocked coronary circulation flow, and inadequate myocardial blood supply.In addition, coronary microvascular dysfunction, inflammation, and vasospasm contribute to the multifaceted and complex pathophysiological mechanisms of IHD [2,3].Studies suggest that high mortality in IHD may be the result of worsening metabolic risk factors, particularly high body mass index (BMI), diabetes, hypertension, high cholesterol, and renal insufficiency [4]. 
Elevated serum creatinine (Cr) levels are associated with an increased risk of IHD.Studies have shown that Cr may accelerate atherosclerosis by increasing the extent and number of atherosclerotic lesions, increasing the number and modification of low-density lipoprotein particles, and vascular inflammation, which can lead to IHD [5].Bilirubin is the end product of heme degradation and is an endogenous oxidant in the body.Heme oxygenase (HO) can maintain the dynamic balance of bilirubin content in the body by regulating the synthesis and catabolism of bilirubin [6].It has been shown that acute myocardial ischemia can activate stress processes in the body, producing oxygen radicals and oxidants that significantly increase the activity of HO-1 and eventually lead to elevated serum total bilirubin (TBil).Increased HO-1 activity in acute myocardial infarction corresponds to increased TBil levels, with a significant positive correlation between the two [7].In conclusion, both Cr and TBil are closely related to metabolic risk factors of IHD. Cr and TBil are rapid and readily available biomarkers in routine blood tests, but there are no literature reports on the prognostic guidance of combined Cr and TBil tests for IHD.Therefore, in this study, for the first time, the two were combined into a simplified equation to assess whether this combined value can predict long-term death in IHD patients, to find a simple and reliable adjunct to assess and predict clinical prognosis in IHD patients for early intervention to improve prognosis. Research design and study population The study was designed in strict accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [8].This study complied with the Declaration of Helsinki and was approved by the Ethics Committee of the First Affiliated Hospital of Xinjiang Medical University (approval number S220722-25).As this was a retrospective cohort study, follow-up was conducted by telephone.Written informed consent was obtained from individuals for the publication of any potentially identifiable images or data included in this study. Our research is a single-center, retrospective cohort study.The study population consisted of 3010 consecutive IHD patients admitted to the Heart Center of the First Affiliated Hospital of Xinjiang Medical University from January 2010 to December 2020.All cases met American Heart Association guidelines for the diagnosis and treatment of IHD [9].Inclusion: Diagnosis of ischemic heart disease on admission and all of the following conditions are present: (1) Angina pectoris or equivalent symptoms during rest, manual labor, or relief with nitroglycerin. (2) The electrocardiogram on admission showed obvious signs of myocardial ischemia, and the results of the exercise test suggested the presence of myocardial ischemia. Data collection All patients had a detailed medical history documented by a professionally trained investigator who collected general information about the patients through electronic and paper medical records, including age, sex, body mass index (BMI), systolic and diastolic blood pressure at admission, history of smoking, history of alcohol consumption, history of hypertension, history of diabetes, history of cardiovascular disease, history of percutaneous coronary intervention (PCI), admission status (myocardial infarction, heart failure, arrhythmia), laboratory tests, and echocardiography. 
Laboratory indices and cardiogram tests Peripheral venous blood samples were collected from all patients early in the morning on the second day of admission on an empty stomach, and complete routine blood and biochemical tests were performed.The fully automated hematology analyzer XE-5000 (Sysmex, Japan) was applied to determine hemoglobin, white blood cell count, neutrophil count and platelet count.Blood Cr, fasting blood glucose and serum triacylglycerol, TBil, total cholesterol, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C) and albumin levels were measured with a fully automated biochemical analyzer VITROS 5600 (Johnson & Johnson, USA).All patients underwent echocardiography within 24 h of admission by a specialized sonographer applying an EPIQ 7C (PHILIPS, The Netherlands) color Doppler ultrasound diagnostic instrument to measure left ventricular end-diastolic diameter (LVEDD) and left ventricular ejection fraction (LVEF). Definition of clinical outcomes and follow-up process The primary outcome of this study was death from any cause, and the secondary outcome was death of cardiovascular origin.Cardiovascular deaths were recorded as deaths related to myocardial infarction, congestive heart failure, sudden cardiac death, or arrhythmias.Follow-up information was collected by trained follow-up personnel who contacted the patient or the patient's family through telephone follow-up.Death cases were obtained by confirming death certificates with patients' families.All data were matched to hospital records.All patients were followed up by telephone from September 2022 to October 2022, of whom 2625 (87.2%) provided verbal consent and completed telephone follow-up. Statistical analysis Statistical analyses of the data in this study were performed using R software (The R Foundation; http://www.r-project.org;version 4.2.1).First, binary logistic regression analysis was calculated for Cr and TBil in the sample to obtain the combined value of Cr and TBil and the combined regression equation.Patients were divided into four groups according to quartiles of combined value and baseline characteristics were compared between groups.Normality tests for all variables in this study yielded that all variables were non-normally distributed, with continuous variables expressed as medians (interquartile spacing) and categorical variables expressed as frequencies (percentages).The Mann-Whitney U test was used for all continuous variables and the chi-square test was used for all categorical variables.Kaplan-Meier survival curves were established to assess the survival rates of different groups, and the log-rank test was used to compare whether there was a significant difference in survival rates between these groups.The hazard ratios (HRs) and 95% confidence intervals (CIs) between Cr, TBil, combined value, and the incidence of all-cause death and cardiovascular death were calculated separately using Cox proportional hazards regression models.In the calculation of HRs for combined value, group 1 (Combine �0.236) was used as a reference.For these models, we used both unadjusted and adjusted models.Firstly, model 1 was unadjusted for any confounding factors.Second, in model 2 we adjusted for gender, age, BMI, smoking, drinking, hypertension, and diabetes.we further adjusted for albumin, LVEF, fasting plasma glucose, triglyceride, total cholesterol, and the covariates of model 2 (model 3).Finally, we also adjusted for admission of myocardial infarction, heart failure, 
arrhythmia, past PCI, and the covariates of model 3 (model 4). Moreover, a restricted cubic spline (RCS) analysis was performed to reflect the dose-response relationship between the combined value and the risk of the two primary outcomes. Finally, the area under the curve (AUC), sensitivity, specificity and best cutoff values of Cr, TBil and the combined value were obtained by receiver operating characteristic (ROC) curve analysis to assess their predictive efficacy for long-term death events. In all analyses, a two-sided P < 0.05 was considered statistically significant.

Comparison of baseline characteristics

Among the cases between January 2010 and December 2020, we selected 3010 patients with IHD. Of these patients, 385 were excluded according to the exclusion criteria. A total of 2625 patients with IHD (1974 men and 651 women) with a mean age of 65 years were finally included in this study; 1611 patients experienced all-cause death, of which 1472 were cardiovascular deaths. A flow chart of the study design is depicted in Fig 1. We derived the combined value for Cr and TBil by binary logistic regression analysis using all-cause death as the dependent variable and determined the combined equation for the combined value (Table 1). We compared baseline characteristics between groups based on the quartiles of the combined values, with value ≤0.236 as group 1, 0.236 < value ≤0.381 as group 2, 0.381 < value ≤0.592 as group 3, and value >0.592 as group 4 (Table 2).

As shown in Table 2, group 4 contained the oldest IHD patients among the four groups (63.00 [54.00, 72.00] vs. 65.00 [56.00, 73.00] vs. 67.00 [59.00, 75.00] vs. 68.00 [59.00, 76.00], P < 0.001), and their median left ventricular end-diastolic internal diameter (interquartile range) was the same as in group 3 and greater than in the remaining two groups (60.00 [54.00, 64.00] vs. 61.00 [56.00, 66.00] vs. 62.00 [57.00, 67.00] vs. 62.00 [57.00, 67.00], P < 0.001). BMI also differed significantly among the four groups (P < 0.001). The white blood cell count, uric acid, fasting glucose, lactate dehydrogenase, and creatine kinase levels were significantly higher in group 4 than in the other groups and showed a gradual increase from group 1 to group 4 (P < 0.01); the platelet count, total cholesterol, HDL cholesterol, and albumin levels of group 4 were lower than in the other groups (P < 0.001), and its triglyceride levels were similar to group 3 and lower than in the other two groups (P = 0.002). Group 4 also had significantly higher rates of male sex, hypertension, diabetes mellitus, all-cause death, and cardiovascular death compared with the other groups, while its arrhythmia rate was similar to group 3 and higher than in the other two groups (P < 0.001). There were no statistical differences between the four groups in systolic blood pressure, diastolic blood pressure, history of smoking, history of alcohol consumption, history of PCI, admission conditions (myocardial infarction and heart failure), and LDL cholesterol.
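For illustration, the following sketch computes the combined Cr/TBil value and the quartile group assignment described above, using the fitted coefficients reported in the text, Logit(P) = 0.0129×TBil + 0.007×Cr − 0.417, and the quartile cut-points 0.236, 0.381 and 0.592. Treating the "combined value" as this linear predictor, the laboratory units, and the example patient values are assumptions made only for illustration.

```python
# Sketch of the combined Cr/TBil value and quartile grouping, using the fitted
# coefficients reported in the text: Logit(P) = 0.0129*TBil + 0.007*Cr - 0.417.
# Treating the combined value as this linear predictor, the laboratory units, and
# the example inputs are assumptions, not details confirmed by the study.
def combined_value(tbil, cr):
    return 0.0129 * tbil + 0.007 * cr - 0.417

def combined_group(value):
    # Quartile cut-points taken from the grouping given in the text.
    if value <= 0.236:
        return 1
    if value <= 0.381:
        return 2
    if value <= 0.592:
        return 3
    return 4

v = combined_value(tbil=15.0, cr=80.0)  # hypothetical admission values
print(f"combined value = {v:.3f}, group = {combined_group(v)}")
```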
Discussion This study shows for the first time that combined value of Cr and TBil exhibit a stronger risk relationship with long-term death events in patients with IHD compared with single factors, and that the combination of the two better predicts the occurrence of long-term death in patients with IHD.Our study showed that patients with higher levels of both Cr and TBil had higher long-term death, although after adjusting for numerous confounding factors.The pathophysiologic mechanism of IHD is more complex than a single, simple causal event.It can develop from coronary microvascular dysfunction, inflammation, or vasospasm, but more often it is generally an obstruction or stenosis of the coronary arteries due to atherosclerotic lesions that obstruct coronary circulation flow and inadequate myocardial blood supply, which in turn develops into IHD.It has been shown that in the general population, elevated plasma Cr levels are associated with an increased risk of early death from ischemic heart disease, whereas low levels of glomerular filtration rate were not significantly correlated [5,10].Mechanistic studies suggest that creatinine may contribute to the progression of ischemic heart disease through several pathways, including increasing the extent and number of atherosclerotic lesions, increasing the number of low-density lipoprotein particles, and vascular inflammation [11,12].Elevated Cr levels may alter protein turnover, and proteinuria leads to the retention of cholesterol from the degradation of chylomicron and very low-density lipoprotein cholesterol particles in plasma [13], and the accumulation of residual cholesterol can inhibit interleukin-10 production.Produced mainly by activated helper T cell subsets Th2 cells, B cells, and monocytes/macrophages, interleukin-10 inhibits the prototypical proinflammatory transcriptional nuclear factor-kB, which in turn inhibits inflammatory cytokine production, inhibits matrix-hydrolyzed metalloproteinases, attenuates tissue factor expression, promotes a shift in lymphocyte phenotype toward Th2 cells, and overall suppresses the inflammatory response [14,15].IL-10 also has a potential anti-atherosclerotic effect, as IL-10 inactivates local macrophages and T lymphocytes and can modulate the local inflammatory response [16,17]. High levels of Cr are often accompanied by decreased renal function in patients, and chronic kidney disease is a common co-morbidity in patients with IHD and is associated with worse short-and long-term clinical prognosis [18][19][20], and patients with chronic kidney disease exhibit accelerated atherosclerosis and an increased risk of multivessel coronary artery disease [21], which may explain the increased risk of death in IHD patients with high levels of Cr.Susanne et al. found in animal experiments that chronic renal failure accelerated the process of atherosclerosis and that mice with chronic renal failure had higher plasma total cholesterol concentrations than control mice, a difference that likely contributed to their accelerated atherosclerosis formation [11].Julio et al. performed a mortality risk analysis of 122 patients with ischemic cardiomyopathy and similarly obtained a trend of higher mortality associated with higher Cr levels [22].Joachim et al. 
showed that in patients with coronary artery disease, Cr excretion rates were strongly associated with their mortality, and they found that lower Cr excretion rates corresponded to a higher risk of death (HR:2.30(95% CI: 1.51-3.51),P = 0.001), after adjusting for various confounders [23]. In addition to the pathophysiological mechanisms analyzed above, oxidative stress and inflammatory responses play a large role in the development of the IHD disease process.Hemoglobin is highly reactive in its unbound form.In the free state, highly reactive heme accelerates the production of reactive oxygen species and lipid oxidation, thus increasing the risk of cardiovascular disease [24,25].Heme oxygenase (HO) cleaves the pro-oxidant heme on the α-methylene bridge to form biliverdin, carbon monoxide, and ferrous iron.The biliverdin is then reduced to bilirubin by biliverdin reductase.Thus, HO has antioxidant and antiinflammatory effects [26,27].A study by Ozturk et al. of 782 patients with acute coronary syndrome found a significant positive correlation between TBil levels and troponin levels at admission.They hypothesized that HO-1 is a stress-inducing enzyme and that the increase in HO-1 activity during acute myocardial infarction corresponds to elevated TBil levels [28].Okuhara et al. found a significant positive correlation between HO-1 enzyme levels and TBil levels at admission in patients with acute myocardial infarction, further supporting the argument of Ozturk et al. and also suggesting that elevated serum TBil reflects HO-1 activation [7].Elevated levels of the HO-1 enzyme represent the presence of high-intensity oxidative stress and inflammatory response in the body, and high levels of HO-1 correspond to high levels of TBil.Therefore, we hypothesize that the degree of HO-1 activation may reflect the intensity of the inflammatory response to myocardial injury during the course of IHD and that high levels of TBil suggest an increased risk of adverse cardiovascular events. Halit et al. reported that patients with acute myocardial infarction with impaired blood flow had higher bilirubin levels than the group with normal blood flow, suggesting that the more severe the degree of atherosclerosis and the higher the post-infarction HO-1 enzyme activity, the more marked the increase in bilirubin levels, and the degree of increase was related to the severity of the lesion [29].Huang et al. showed that serum TBil levels were associated with higher acute myocardial infarction group mortality (OR: 2.35, 95% CI: 1.15-4.77,P<0.05) [30].Chung et al. followed up on 1,111 patients with ST-segment elevation infarction in-hospital and 12 months postoperatively.The results showed that the incidence of adverse cardiovascular events and cardiac mortality were higher in the hyperbilirubinemic group than in the hypobilirubinemic group [31].This suggests that oxygen radicals and various oxidants produced by oxidative stress injury may damage the organism under stressful conditions more than the anti-inflammatory and antioxidant effects of HO-1 on the organism, and high levels of TBil instead show a strong correlation with poor prognosis. In summary, not only one system or one pathophysiological mechanism is involved in the disease process of IHD, and a single inflammatory biomarker or oxidative stress alone is not sufficient to elucidate the entire pathophysiological process involved in IHD.On the contrary, combining multiple markers can provide a more comprehensive picture of the developmental process of IHD. 
In the present study, the area under the ROC curve values for the combined equations predicting mortality, although not very high, was still higher than the AUC values predicted by Cr and TBil alone, respectively.More importantly, multifactorial COX regression analysis showed that high values of the equation were independent predictors of out-of-hospital long-term death events.Despite adjusting for numerous confounders, the combined value of Cr and TBil exhibited a high mortality risk correlation compared with the separate metrics.This suggests that combining Cr and TBil is a stronger predictor of long-term death events than the markers alone.And since Cr and TBil, two rapidly available biomarkers commonly used in routine blood tests, are suitable for most hospitals due to their low cost and ease of detection, we still suggest that the combined use of Cr and TBil is of particular clinical importance for predicting long-term prognosis in patients with IHD. Advantages and limitations The strength of this study is the use of two biomarkers that are readily available in the clinical setting combined with regression equations to obtain a highly correlated and more sensitive indicator of the risk of death from IHD.However, the present study still has some limitations.This study is a single-center trial with the limitations of a retrospective cohort design.Plasma Cr and TBil levels vary over time, change with diet, and are affected by medications; therefore, a single measurement provides an insensitive indicator and should be compared to repeated measurements.This study did not directly measure oxidative stress-related indicators and HO-1 enzyme activity, nor did it include various inflammatory markers such as C-reactive protein, calcitonin gene, and interleukins.More basic experimental studies are needed to validate the exact role of Cr and TBil in the long-term prognosis of patients, in addition to determining the exact role of Cr and TBil. Conclusion This study showed that the combined Cr and TBil assay is superior to single biomarkers for predicting out-of-hospital long-term death events in patients with IHD.High values of the equation are independent predictors of out-of-hospital long-term death events and can be used to identify patients at high risk for IHD and accurately predict their clinical prognosis for early intervention. Fig 1 . Fig 1.The flow chart illustrates the inclusion and exclusion criteria for patients in this study, and the entire process of data collection and follow-up.https://doi.org/10.1371/journal.pone.0294335.g001 Fig 3 . Fig 3. Restricted cubic spline (RCS) plot of the association between the combined value of Cr and TBil with the risk of long-term death in IHD patients.(A) All-cause death; (B) Cardiovascular death.Blue lines represent references for HRs, and blue areas represent 95% confidence intervals.The model was adjusted for gender, age, BMI, smoking, drinking, hypertension, diabetes, albumin, left ventricular ejection fraction, fasting plasma glucose, triglyceride, total cholesterol, admission of myocardial infarction, heart failure, arrhythmia, and past percutaneous coronary intervention.https://doi.org/10.1371/journal.pone.0294335.g003
2023-11-18T05:09:32.365Z
2023-11-16T00:00:00.000
{ "year": 2023, "sha1": "2c487e710ac17ef52a3d08ddab8af04f4203dc9b", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "2c487e710ac17ef52a3d08ddab8af04f4203dc9b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261018404
pes2o/s2orc
v3-fos-license
An improved blood hemorrhaging treatment using diatoms frustules, by alternating Ca and light levels in cultures Hemorrhage control requires hemostatic materials that are both effective and biocompatible. Among these, diatom biosilica (DBs) could significantly improve hemorrhage control, but it induces hemolysis (the hemolysis rate > 5%). Thus, the purpose of this study was to explore the influence of Ca2+ biomineralization on DBs for developing fast hemostatic materials with a low hemolysis rate. Here, CaCl2 was added to the diatom medium under high light (cool white, fluorescent lamps, 67.5 µmol m−2 s−1), producing Ca-DBs-3 with a particle size of 40–50 μm and a Ca2+ content of Ca-DBs-3 obtained from the higher concentration CaCl2 group (6.7 mmol L−1) of 0.16%. The liquid absorption capacity of Ca-DBs-3 was 30.43 ± 0.57 times its dry weight; the in vitro clotting time was comparable to QuikClot® zeolite; the hemostatic time and blood loss using the rat tail amputation model were 36.40 ± 2.52 s and 0.39 ± 0.12 g, which were 40.72% and 19.50% of QuikClot® zeolite, respectively. Ca-DBs-3 showed no apparent toxicity to L929 cells (cell viability > 80%) and was non-hemolysis (the hemolysis rate < 2%). This study prepared Ca-DBs-3 with a rapid hemostatic effect and good biocompatibility, providing a path to develop diatom biosilica hemostatic materials. Supplementary Information The online version contains supplementary material available at 10.1007/s42995-023-00180-3. Introduction Uncontrolled bleeding is a leading cause (nearly 40%) of death in military and civilian trauma (Eastridge et al. 2011;Sauaia et al. 1995;Teixeira et al. 2007). Rapid application of bleeding control can effectively prevent bleeding-related death (Kragh et al. 2009). Therefore, developing efficient hemostatic agents is critical. QuikClot ® (QC) is an inorganic zeolite that can powerfully treat fatal bleeding and save lives (Alam et al. 2005;Rhee et al. 2008). QC is considered to be one of the fastest and the least bleeding hemostatic agents in commercial products with many advantages, including stability, portability, and non-biological toxicity, and it has been used in the emergency treatment of bleeding in wars, traffic accidents, and other emergency situations (Li et al. 2013;Wright et al. 2004a). However, QC has an exothermic effect when used to treat bleeding, which will almost certainly result in secondary injury to the wound (Wright et al. 2004b). Therefore, safer fast hemostatic materials still need to be developed. Diatom biosilica (DBs) has been found to stop bleeding rapidly and has no exothermic effect (Feng et al. 2016;Luo et al. 2021;Tramontano et al. 2020;Uthappa et al. 2018Uthappa et al. , 2019. Diatoms are unicellular eukaryotic algae found in almost all types of aquatic environment, including marine and freshwaters (Armbrust 2009;Falciatore and Bowler 2002) and are extremely diverse, with over 1 × 10 5 known species (Mann and Droop 1996). Diatoms have an inorganic biosilica cell wall called a frustule, which is primarily composed of amorphous silica and distributed with numerous micro-nano pores (Dobrosielska et al. 2020;Hildebrand and Lerch 2015;Kröger and Brunner 2014;Zurzolo and Bowler 2001). A high porosity will result in a Edited by Jiamei Li. 1 3 high absorption rate, which can rapidly agglutinate coagulation factors and accelerate the occurrence of coagulation cascade reactions (Li et al. 2012;Rhee et al. 2008). 
Although DBs could rapidly cease bleeding, they have a hemolytic effect (the hemolysis rate > 5%) (Feng et al. 2016). Chitosan of marine origin is widely used in biomedical materials research (Deng et al. 2020). To decrease the hemolysis rate, chitosan was used to coat the surface of DBs and was found to be effective (Feng et al. 2016). However, chitosan will obstruct the porous structure of DBs, reducing their capacity for liquid absorption. Additionally, the chitosan-DBs mixture is an organic-inorganic composite material, and chitosan is sticky and will adhere to the wound, tearing the granulation tissue and causing allergies (Wang et al. 2021a, b). Recently, our research group added CaCl 2 to the diatom culture medium and obtained an inorganic Ca-biosilica that was relatively pure (Li et al. 2018). Ca-biosilica's hemolysis rate was less than 2%, implying that adding Ca 2+ to DBs could reduce the hemolysis rate. However, the relationship between CaCl 2 concentration and the amount of Ca 2+ biomineralization is not well understood. Application properties are based on the material's physical properties. The physical structure of post-harvested DBs can be changed by treating them with chemical reagents, such as SnO 2 and hydrofluoric acid (Weatherspoon et al. 2007;Zhang et al. 2011). Chemical reagents can modify the physical properties of DBs. However, these procedures are complex, costly, and leave harmful residue. Thus, gentler modifications are needed. Since the frustule is prepared in the laboratory via the culture of living diatoms, the culture medium and culture conditions will affect the growth and biomineralization of diatoms, such as the frustules' morphology and structure. Wei Li et al. found high light stimulated diatom's growth and made diatom smaller (Li et al. 2023). Yanyan Su et al. investigated the effect of varying light intensities on the diameter of the frustule and discovered that as light intensity increased, the diameter of the frustule decreased (Su et al. 2015). Lulu Wang et al. described three types of frustules with a rapid hemostatic effect and the hemostatic effect increased with decreasing frustule size (Wang et al. 2019). To develop hemostatic materials that are effective and biocompatible, it is worthwhile to investigate the effects of light intensity and CaCl 2 on the frustule microstructure. In this study, we investigated the effect of 4.05, 40.5, and 67.5 µmol m −2 s −1 (cool white, fluorescent lamps) on diatom growth and studied the impact of CaCl 2 on diatoms under 67.5 µmol m −2 s −1 . The microstructure, biomineralized Ca 2+ content, liquid adsorption capacity, and hemostatic properties of the frustules were investigated to establish a relationship between the frustule's physical and chemical properties and hemostatic efficacy. Effect of white light intensity on diatoms The effect of light intensity on diatom's carrying capacity was studied at three intensities (cool white, fluorescent lamps: 4.05, 40.5, 67.5 µmol m −2 s −1 , Supplementary Fig. S1). Diatoms continued to multiply under the light of 4.05, 40.5, and 67.5 µmol m −2 s −1 during the 7-day incubation period, and the maximum carrying capacity of diatoms were reached at 6.98 × 10 5 , 8.07 × 10 5 and 8.86 × 10 5 cells L −1 , respectively. The number of diatoms increased with increasing light intensity (P < 0.05), indicating that the increase in light intensity favored the cells division of diatoms in the range of 4.05-67.5 µmol m −2 s −1 . 
The growth curves did not follow the typical sigmoidal curves because the initial diatom inoculations were overly dense. However, the results and findings are still useful. In future optimization experiments: (1) diatoms will be acclimated to the light conditions in an exponential phase acclimation culture for 10 generations; (2) the initial inoculum density of diatoms will be reduced to establish the typical sigmoidal growth curves; and (3) growth rate and carrying capacity will be calculated separately to guide industrial applications based on the exponential and steady-state growth periods of the diatoms.

A BET analysis was conducted to better understand the effect of light intensity on the microstructure of DBs (Supplementary Fig. S3; Table S1). DBs-67.5 had the largest specific surface area (7.26 m2 g−1) and a total pore volume of 0.016 cm3 g−1. As the light intensity increased from 4.05 to 67.5 µmol m−2 s−1, the specific surface area and total pore volume of the frustules increased. The average pore diameter was around 8-10 nm and the Barrett-Joyner-Halenda (BJH) pore diameter was 2.39-2.41 nm; neither was affected by the light intensity.

EDXS, FTIR, and XRD were used to determine the elemental composition, chemical groups, and crystal form of the frustules, respectively. Elements such as C and O were detected in all samples; however, Fe was detected only for DBs-67.5 (Fig. 1A). The higher light intensity may be responsible for the appearance of Fe. The FTIR curves of DBs-4.05, DBs-40.5, and DBs-67.5 were similar (Fig. 1B). The peaks at 471 cm−1, 796 cm−1, and 1097 cm−1 all correspond to Si-O-Si; the peak at 959 cm−1 corresponds to Si-OH; the peak at 1637 cm−1 corresponds to amide I (Zając et al. 2015); and the peaks at 3100-3600 cm−1 represent O-H. XRD analysis revealed that DBs-4.05, DBs-40.5, and DBs-67.5 were composed of amorphous silica (Fig. 1C).

When exposed to white light at 4.05, 40.5, and 67.5 µmol m−2 s−1, the DBs were similar in disk morphology, tertiary pores, surface clusters, and crystallization, but differed in diatom growth rate, particle size, specific surface area, total pore volume and ion biomineralization. An earlier study found that the smaller the frustule size, the more potent the hemostatic effect (Wang et al. 2019). Thus, 67.5 µmol m−2 s−1 white light was chosen to investigate Ca2+ biomineralization on DBs, because this intensity produced the smallest frustule particle size, the largest specific surface area, the largest total pore volume, differences in ion biomineralization, and the fastest diatom growth rate.

Effect of CaCl2 on diatoms

CaCl2 was added to the diatom medium at concentrations of 1.675, 3.35, and 6.7 mmol L−1 to determine its effect on diatom growth (Supplementary Fig. S4). Within 0-8 days, no significant difference in diatom number was observed between DBs (control), Ca-DBs-1, and Ca-DBs-2. Within 0-2 days, no significant difference was found in diatom number between DBs and Ca-DBs-3, but from day 2 to day 8, Ca-DBs-3 had a significantly lower diatom number than DBs (P < 0.01). At concentrations ranging from 1.675 to 3.350 mmol L−1, CaCl2 exhibited no detectable inhibitory effect on the diatom's carrying capacity. However, at the higher concentration (6.7 mmol L−1), CaCl2 reduced the diatom's carrying capacity. Frustule formation is an essential step in the division of diatom cells, and a CaCl2 concentration as high as 6.7 mmol L−1 may affect the frustule biomineralization process and the diatoms' cell division rate.
The growth curve did not follow the typical sigmoidal curves due to the initial cultures were too dense. But these results and findings remain informative. In our future optimization experiment, diatoms will be acclimated to the Ca 2+ conditions in an exponential phase acclimation culture for 10 generations. The initial cultures will be reduced to establish the typical sigmoidal growth curves. And a two-way ANOVA will be used to analyze the effects of light and Ca 2+ on the growth rate and carrying capacity of diatoms for direct adaptation to industrial production. SEM and TEM were used to examine the surface morphology and pore structure of DBs (control) and Ca-DBs (Ca-DBs-1, Ca-DBs-2, and Ca-DBs-3). Girdle bands connected the epitheca and the hypotheca of DBs and Ca-DBs Field et al. 1998;Smetacek 1999). DBs and Ca-DBs were disc-shaped with a dense radial arrangement of graded circular/hexagonal and micro-nano pores ( Supplementary Fig. S5). The average diameter of DBs and Ca-DBs was 40-50 μm; the maximum aperture was 1-1.5 μm; the second aperture was 200-250 nm; the minimum aperture was 50-100 nm. According to the BET analysis, the specific surface area of DBs and Ca-DBs was around 6-7 m 2 g −1 ; the average pore diameter was 8-10 nm; the BJH pore diameter was around 2 nm; the total pore volume was around 0.015 cm 3 g −1 . DBs and Ca-DBs had similar specific surface areas and pore diameters (Supplementary Fig. S6; Table S2). The morphology and pore size of DBs and Ca-DBs are identical, indicating that CaCl 2 did not affect the frustule's shape or size. XRD analysis revealed that DBs and Ca-DBs (Ca-DBs-1, Ca-DBs-2, and Ca-DBs-3) were composed of amorphous silica (Fig. 2C), implying that DBs and Ca-DBs could be considered safe biomedical materials (Monich et al. 2017). Figure 2D illustrates the capacity of DBs and Ca-DBs (Ca-DBs-1, Ca-DBs-2, and Ca-DBs-3) to adsorb SBF. The gauze was as a control. The liquid absorption capacity of DBs and Ca-DBs was three times that of gauze (P < 0.01). DBs and Ca-DBs had 30 times their dry weight in liquid absorption capacity. DBs and Ca-DBs showed no significant difference. SEM, TEM, FTIR, XRD, and the liquid absorption capacity of DBs and Ca-DBs (Ca-DBs-1, Ca-DBs-2, and Ca-DBs-3) were comparable. Due to the biomineralization of Ca 2+ in Ca-DBs-3, DBs was used as a control for analyzing the coagulation and hemolysis of DBs and Ca-DBs-3. In vitro whole blood clotting time The clotting time of DBs and Ca-DBs-3 was measured in vitro using whole blood ( Supplementary Fig. S7A). Ca-DBs-3 had the shortest clotting time (163.80 ± 5.00 s). The hemostatic ability of Ca-DBs-3 is comparable to that of QuikClot ® zeolite. Ca-DBs-3 had a greater hemostatic effect than DBs (P < 0.05). Ca-DBs-3's superior blood clotting ability benefits from many ways. First, the large micro-nano pores in Ca-DBs-3 have a high capacity for absorbing liquid, which allows for the rapid agglutination of coagulation factors and the acceleration of the coagulation cascade reaction (Na et al. 2011;Rhee et al. 2008). Second, Ca-DBs-3 contains large negatively charged polar silanol groups that interact actively with blood cells, promoting the formation of blood clots (Slowing et al. 2006). Third, the Ca 2+ on the surface of Ca-DBs-3 acts as a coagulation factor IV, promoting blood coagulation further (Ratnoff and Potts 1954). Supplementary Fig. S7B depicts the SEM observation of blood clots. The blank control group's erythrocytes had a typical disc shape, forming fibrin in the blood clot. 
Erythrocytes were adsorbed on the surface of the DBs with fibrin surrounded in the blood clot. Ca-DBs-3 adsorbed erythrocytes and fibrin networks were found in blood clots. The production of fibrin is an important step in blood coagulation. Fibrin acts as a net for blood cells, converting blood from a liquid to a solid state, thereby promoting blood clotting (Butenas and Mann 2002). Coagulation cascade activation pathway The PT test primarily indicates the status of the exogenous coagulation system, whereas the aPTT test mainly shows the activity and function of endogenous coagulation factors (Kamal et al. 2007). For the PT, neither DBs nor Ca-DBs-3 exhibited significant differences from the control (Supplementary Fig. S7C). For the aPTT, both DBs and Ca-DBs-3 significantly reduced the reaction time ( Supplementary Fig. S7D, P < 0.01), which was roughly 70%-80% shorter than the control, and Ca-DBs-3 was markedly faster than DBs (P < 0.01). The PT and the aPTT assays demonstrated that DBs and Ca-DBs-3 primarily promoted blood coagulation via the endogenous coagulation pathway. Negatively charged polar silanol groups on the surface of DBs and Ca-DBs-3 can activate coagulation factors (XII and XI) and bind to cofactors (prekallikrein and HWK-kininogen), increasing endogenous coagulation (Pourshahrestani et al. 2016). Additionally, the Ca 2+ on the surface of Ca-DBs-3 was coagulation factor IV, enhancing the endogenous coagulation pathway (Chen et al. 2015). Viscoelasticity analysis of blood clots TEG was used to assess the integrity of whole blood coagulation and guide the treatment of bleeding (Mohamed et al. 2017) (Supplementary Fig. S8). The R value for Ca-DBs-3 was 37% and 59% of the control and DBs, respectively (P < 0.05). The K value for Ca-DBs-3 was 27% and 68% of the control and DBs, respectively (P < 0.05). The angle for Ca-DBs-3 was 198% and 122% of the control and DBs, respectively (P < 0.05). No significant difference was found in MA between DBs and control, but Ca-DBs-3 was significantly greater than control by 18.4% (P < 0.05). Both DBs and Ca-DBs-3 have the ability to stimulate blood coagulation start, expansion, and spread, with Ca-DBs-3 having a larger effect than DBs. The coagulationpromoting effect of Ca-DBs-3 is due to its microstructure. First, Ca-DBs-3 has a thick micro-nano layered pore structure and a high liquid absorption capacity, allowing it to rapidly agglutinate coagulation components and activate the coagulation cascade reaction (Na et al. 2011). Second, Ca-DBs-3 are silicon-based materials having a strong negative charge polarity silanol group, which aids the reaction at the blood cell interface, resulting in enriched blood cells and fibrin networks that form more tightly linked blood clots (Slowing et al. 2006). Third, the Ca 2+ on the surface of Ca-DBs-3 is coagulation factor IV, which strengthens blood coagulation even more (Ratnoff and Potts 1954). Cytotoxicity and hemolysis rate of DBs and Ca-DBs-3 DBs and Ca-DBs-3 were co-cultured in vitro with L929 cells to assess the cytotoxicity. The cytotoxicity increased as the sample concentration increased between 0.625 and 10 mg mL −1 (Fig. 3). After 24 h of culture, both DBs and Ca-DBs-3 exhibited mild cytotoxicity with over 80% of cells surviving. DBs and Ca-DBs-3 may be cytotoxic because negatively charged silanol groups interact with cells at the interface. The cell viability increased as the culture time increased. 
The cell viability remained greater than 100% at 72 h, indicating DBs and Ca-DBs-3 were cytocompatible. Hemolysis rate primarily reflects free hemoglobin concentration in plasma following complete contact with blood. Hemolysis criteria for biomaterials can be classified as nonhemolysis (hemolysis rate < 2%), mild hemolysis (hemolysis rate 2%-5%), or hemolysis (hemolysis rate > 5%) (Huang et al. 2016;Song et al. 2019). The hemolysis rate of DBs and Ca-DBs-3 is shown in Fig. 4. For DBs, the hemolysis rate increases as the DB's concentration increases from 0.3125 to 10 mg mL −1 . The hemolysis rate of DBs was less than 2% in the concentration range of 0.3125-2.5 mg mL −1 , 4.51 ± 0.18% in 5 mg mL −1 , and 13.15 ± 0.39% in 10 mg mL −1 , respectively. The hemolysis rate for Ca-DBs-3 was always less than 2% in 0.3125-10 mg mL −1 . Ca-DBs-3 had superior blood compatibility than DBs. The numerous negatively charged polar silanol groups on the surface of DBs and Ca-DBs-3 interact with erythrocytes and thus may lead to hemolysis. Ca-DBs-3's hemolysis rate was lower than DBs, which may be related to its incorporation of Ca 2+ . Ca 2+ may decrease Ca-Bs-3's hemolysis rate by weakening its negative charge polarity. Hemostatic in vivo The hemostatic time and blood loss of DBs and Ca-DBs-3 in vivo were determined using a rat tail amputation model (Fig. 5). Ca-DBs-3 had the shortest hemostatic time (36.40 ± 2.52 s) and the least blood loss (0.39 ± 0.12 g). The in vivo hemostasis time of Ca-DBs-3 was 40.72% of QuikClot ® zeolite and 53.37% of DBs, and the blood loss of Ca-DBs-3 was 19.50% of QuikClot ® zeolite and 33.05% of DBs. The control group had gelatinous blood clots from the cross-section of the tail, whereas DBs and Ca-DBs-3 had no blood adsorption and stopped bleeding completely (Fig. 5D). The in vivo hemostatic test further supported the excellent hemostatic ability of Ca-DBs-3. Ca-DBs-3 possessed a stronger procoagulant effect than DBs. First, rich micro-nano pores endow Ca-DBs-3 with an exceptional capacity for liquid absorption, facilitating the rapid agglutination of coagulation factors and the initiation of the coagulation cascade (Na et al. 2011). Second, Ca-DBs-3 contains abundant negatively charged polar silanol groups that interact with blood cells, promoting the formation of blood clots (Slowing et al. 2006). Third, Ca 2+ contained in Ca-DBs-3 is the coagulation factor IV, which aids in the coagulation reaction's progression (Ratnoff and Potts 1954). Conclusions In this work, CaCl 2 was added to the diatom medium under 67.5 µmol m −2 s −1 (cool white, fluorescent lamps) to obtain Ca-DBs-3. Ca-DBs-3 was 40-50 μm in diameter and had hierarchical micro-nano pores: the first-order aperture was 1-1.5 μm; the second-order aperture was 200-250 nm; the third-order aperture was 50-100 nm. While the lower CaCl 2 concentration had no apparent effect on the diatom's carrying capacity, the higher CaCl 2 concentration of 6.7 mmol L −1 inhibited the diatom's carrying capacity significantly. The higher CaCl 2 concentration influenced the frustules' biomineralization, and Ca 2+ was biomineralized in Ca-DBs-3 at a content of 0.16%. Ca-DBs-3 had the shortest hemostatic time (36.40 ± 2.52 s) and the least bleeding loss (0.39 ± 0.12 g) in the rat tail amputation model, which were 40.72% and 19.50% of QuikClot ® zeolite, respectively. 
Additionally, Ca-DBs-3 exhibited superior liquid absorption capacity, showed no apparent toxicity toward L929 cells (cell viability > 80%), and demonstrated good blood compatibility (hemolysis rate < 2%). In conclusion, Ca-DBs-3 has the potential to be developed into a rapid hemostatic material, laying the groundwork for developing frustule-based hemostatic materials and for the study of coagulation mechanisms.

Conditions for diatom cultivation

The diatom Coscinodiscus sp. (CCAP 1013/11) was provided by the Ocean University of China's Key Laboratory for Marine Genetics and Breeding. Diatoms were cultured at 21 °C in 300 mL of high-pressure sterilized natural seawater (30 PPT) supplemented with F/2 (Guillard 1975), on a 12 h:12 h light-dark cycle, in a light incubator (GXZ-280B, Ningbo Jiangnan Instrument Factory, China). The lights were cool white fluorescent lamps. Diatoms were acclimated in the exponential phase by serial transfers for 14 days prior to the light and calcium chloride experiments. In the first cultivation stage, three light groups were set to 4.05, 40.5, and 67.5 µmol m−2 s−1 with identical initial inoculation densities; the diatoms were cultured for seven days, and 1 mL of diatom suspension was collected at the same time each day for cell counting. In the second stage of cultivation, CaCl2 was added to the seawater at three concentrations (1.675, 3.35, 6.7 mmol L−1) under 67.5 µmol m−2 s−1, and the group without CaCl2 served as the control. The number of diatoms was counted every 48 h, and the diatoms were harvested on the last day of cultivation.

Frustule preparation and characterization

Diatoms were filtered onto a 500-mesh Nitex mesh and then soaked in a solution containing 2 mol L−1 HCl and 30% H2O2 (V_HCl:V_H2O2 = 1:1) until white precipitates (frustules) formed. The frustules were collected on the 500-mesh Nitex mesh, washed three times with deionized water, and then vacuum dried for 24 h at 60 °C. Finally, the frustules of the light intensity group (DBs-4.05, DBs-40.5, and DBs-67.5) and of the CaCl2 concentration group (DBs, Ca-DBs-1, Ca-DBs-2, and Ca-DBs-3) were obtained.

Scanning electron microscopy (SEM, JSM-6010LA, JEOL, Japan) and transmission electron microscopy (TEM, H-9500, Hitachi, Japan) were used to examine the surface morphology and pore structure of the frustules. The surface elements of the frustules were determined using energy-dispersive X-ray spectroscopy (EDXS, SEM, JSM-6010LA, JEOL, Japan). The BET surface areas and pore diameters were determined with a gas sorption analyzer (Autosorb-IQ, Konta, USA). Fourier transform infrared spectroscopy was used to analyze the chemical groups of the frustules (FTIR, 5700, Nicolet, USA), and a powder X-ray diffractometer was used to determine their crystallinity (XRD, SmartLab, Rigaku, Japan).

The capacity of DBs, Ca-DBs-1, Ca-DBs-2, and Ca-DBs-3 to absorb liquid was determined in vitro using simulated body fluid (SBF) (Dai et al. 2010; Saxena et al. 2008). The liquid absorption capacity was determined using Eq. (1):

(1) Capacity for liquid adsorption (in times) = (W_wet − W_dry) / W_dry,

where W_dry and W_wet denote the weights of DBs and Ca-DBs before and after soaking in SBF, respectively.
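As a small illustration of Eq. (1), the sketch below computes the absorption capacity from a dry and a wet weight; the example weights are hypothetical and merely chosen to give a value close to the roughly 30.4-fold capacity reported for Ca-DBs-3.

```python
# Sketch of the liquid absorption calculation in Eq. (1):
# capacity (in times of dry weight) = (W_wet - W_dry) / W_dry.
# The example weights are placeholders, not measurements from the study.
def absorption_capacity(w_dry_g, w_wet_g):
    return (w_wet_g - w_dry_g) / w_dry_g

print(f"{absorption_capacity(w_dry_g=0.050, w_wet_g=1.571):.2f} times its dry weight")
```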
In vitro evaluation of blood coagulation

The in vitro coagulation test was conducted following established methods (Behrens et al. 2014). Gauze, QuikClot® zeolite, DBs, and Ca-DBs-3 (5 mg) were placed in 2 mL centrifuge tubes and incubated at 37 °C, respectively. Whole blood was extracted from the heart of a New Zealand white rabbit and injected into a tube containing 3.8% sodium citrate; the volume ratio of 3.8% sodium citrate to whole blood was 1:9. Whole blood (1 mL) was added evenly to the centrifuge tubes containing the samples, 100 μL of CaCl2 solution (0.2 mol L−1) was then added, and the clotting time (i.e., the time for the blood to clot completely) was recorded. The tubes were rotated 180° every 10 s to determine whether the blood had clotted. The blank control was whole blood without samples. Blood clots were washed three times with phosphate-buffered saline (PBS, pH 7.4) and fixed in 2.5% glutaraldehyde for 4 h. Finally, the blood clots were dehydrated with a gradient ethanol series (30%, 50%, 70%, 90%, 100%) and observed by SEM after supercritical drying.

A semiautomatic coagulation analyzer (TS6000, MD PACIFIC, China) was used to determine the prothrombin time (PT) and the activated partial thromboplastin time (aPTT) (Wang et al. 2021a, b). Whole blood was extracted from the heart of a New Zealand white rabbit and injected into a tube containing 3.8% sodium citrate. The blood was centrifuged at 3000 rpm for 15 min to obtain platelet-poor plasma (PPP). For the PT test, PPP (100 μL) was incubated at 37 °C for 180 s before being added to 100 μL of PT reagent containing the samples. For the aPTT test, aPTT reagent (100 μL) and PPP (100 μL) were co-incubated at 37 °C for 180 s, and 100 μL of CaCl2 (0.025 mol L−1) containing the samples was then added. The control test was conducted without samples.

A thrombelastography analyzer (TEG 5000, Haemonetics Corporation, USA) was used to determine the viscoelasticity of the blood clots (Wang et al. 2021a, b). Whole blood was extracted from the heart of a New Zealand white rabbit and injected into a tube containing 3.8% sodium citrate. The TEG calibration temperature was set to 37 °C. CaCl2 (20 μL, 0.1 mol L−1) was added into the TEG cup, followed by 340 μL of blood samples (5 mg mL−1). Blood without samples was the blank control. The TEG test yielded four parameters: R (the time interval between the initial reaction and the formation of a measurable clot), K (the time interval between R and the construction of a clot with a specified hardness), angle (the rate of clot formation), and MA (the maximum amplitude of the shear elasticity of the clot).

In vitro biocompatibility evaluation

L929 cell viability was determined using CCK-8 (Gao et al. 2016). L929 cells were cultured at 37 °C in 5% CO2 and seeded into 96-well plates (1 × 10^4 cells well−1). After 12 h of cultivation, the initial medium was replaced with medium containing DBs or Ca-DBs (10, 5, 2.5, 1.25, 0.625 mg mL−1). Cells cultured in the absence of samples served as the control. Incubation intervals of 24, 48, and 72 h were used. At the end of each culture period, CCK-8 solution (10 μL well−1) was added to the wells and incubated at 37 °C for 4 h. A microplate reader (Sunrise, TECAN, Switzerland) was used to determine the optical density of the supernatant at 450 nm, and the percentage of surviving cells was calculated using Eq. (2).
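The body of Eq. (2) is not reproduced in the text above, so the sketch below assumes the usual CCK-8 convention of expressing viability as the ratio of sample to control optical density at 450 nm; the readings shown are placeholders.

```python
# Assumed form of Eq. (2) for CCK-8 cell viability (the equation body is not given above):
# viability (%) = OD_sample / OD_control * 100, with optical densities read at 450 nm.
def cell_viability_percent(od_sample, od_control):
    return od_sample / od_control * 100.0

print(f"{cell_viability_percent(od_sample=0.91, od_control=1.05):.1f} %")
```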
Whole blood was drawn from the heart of a New Zealand white rabbit and placed into a tube containing heparin sodium (heparin sodium:whole blood = 1:9). The blood was then diluted with saline (saline:blood = 5:4). Diluted blood (20 μL) was added to 1 mL of sample suspension and thoroughly mixed. The mixture was incubated at 37 °C for 1 h in a water bath. Following incubation, the tubes were centrifuged at 2000 rpm for 5 min, and the optical density of the supernatant at 545 nm was measured with a microplate reader (Synergy HT, BioTek, USA). The positive and negative controls were distilled water and saline, respectively. The hemolysis rate (%) was calculated using Eq. (3):

Hemolysis rate (%) = (OD_s − OD_n) / (OD_p − OD_n) × 100, (3)

where OD_s, OD_p, and OD_n are the optical densities of the supernatants of the sample, distilled water, and saline groups, respectively.

In vivo hemostasis assay

Sprague Dawley rats (SD, 200-250 g, seven weeks old) were used for the tail amputation hemostasis experiment (Wang et al. 2019). Rats were anesthetized using a 10% chloral hydrate solution (0.005 mL g−1). The tail was cut at 50% of its length, and 100 mg of the target material (gauze, QuikClot® zeolite, DBs, or Ca-DBs-3) was immediately applied to the wound. The time to hemostasis and the amount of blood loss were recorded.

Statistical analysis

Data were expressed as mean ± standard deviation. One-way ANOVA was performed using Microsoft Excel 2019 MSO (16.0.14228.20216) 64-bit. The "n" in the figure legends represents the number of repetitions of the data. P < 0.05 was considered a significant difference.
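The two ratios defined in the methods above (Eq. (1) for liquid absorption capacity and Eq. (3) for hemolysis rate) can be checked with a few lines of code. The sketch below is illustrative only: the weights and optical densities are hypothetical placeholders, not measurements from this study.

```python
def liquid_absorption_capacity(w_dry, w_wet):
    """Eq. (1): absorption capacity (in times) = (W_wet - W_dry) / W_dry."""
    return (w_wet - w_dry) / w_dry

def hemolysis_rate(od_sample, od_positive, od_negative):
    """Eq. (3): hemolysis (%) relative to distilled water (positive) and saline (negative) controls."""
    return (od_sample - od_negative) / (od_positive - od_negative) * 100

# Hypothetical values for illustration only.
print(f"Absorption capacity: {liquid_absorption_capacity(5.0, 32.5):.1f} times")
print(f"Hemolysis rate: {hemolysis_rate(0.08, 1.20, 0.05):.2f} %")
```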
Thymoquinone Inhibits the Migration and Invasive Characteristics of Cervical Cancer Cells SiHa and CaSki In Vitro by Targeting Epithelial to Mesenchymal Transition Associated Transcription Factors Twist1 and Zeb1

Cervical cancer is one of the most common gynecological malignant tumors worldwide, for which chemotherapeutic strategies are limited due to their non-specific cytotoxicity and drug resistance. The natural product thymoquinone (TQ) has been reported to target a vast number of signaling pathways in carcinogenesis in different cancers, and hence is regarded as a promising anticancer molecule. Inhibition of epithelial to mesenchymal transition (EMT) regulators is an important approach in anticancer research. In this study, TQ was used to treat the cervical cancer cell lines SiHa and CaSki to investigate its effects on EMT-regulatory proteins and cancer metastasis. Our results showed that TQ has time-dependent and dose-dependent cytotoxic effects, and that it also inhibits the migration and invasion processes in different cervical cancer cells. At the molecular level, TQ treatment inhibited Twist1 and Zeb1 expression and increased E-Cadherin expression. A luciferase reporter assay showed that TQ decreases Twist1 and Zeb1 promoter activities, indicating that Twist1 and Zeb1 might be direct targets of TQ. TQ also increased cellular apoptosis to some extent, but the apoptotic genes/proteins we tested were not significantly affected. We conclude that TQ inhibits the migration and invasion of cervical cancer cells, probably via Twist1/E-Cadherin/EMT and/or Zeb1/E-Cadherin/EMT, among other signaling pathways.

Introduction

Cervical cancer, also known as invasive cervical carcinoma, is one of the most common gynecologic malignant tumors worldwide [1-3], representing a serious threat to female health. Annually, about 530,000 new cases of cervical cancer are documented [4].

Thymoquinone Inhibits Cervical Cancer Cell Growth, Migration, and Invasion

To investigate the effects of TQ on cancer cell growth, migration and invasion, the cellular indexes were evaluated by real-time cell analysis, which showed that TQ at a dose of 5 µM or more can inhibit growth, migration and invasion in both CaSki and SiHa cells (Figure 1A). We then used a CCK-8 cell viability assay, which showed that TQ exerts cytotoxic activity on both CaSki and SiHa cells in a dose- and time-dependent manner (Figure 1B). After 12 h of TQ treatment there was no clear effect on SiHa cells, but significant effects were found after 24 h of treatment, and likewise after 36 and 48 h (p < 0.05). In CaSki cells, however, dose-dependent effects were already apparent after 12 h of TQ treatment, and likewise after 36 and 48 h (p < 0.05). These results indicate that TQ treatment at a dose of 5 µM or more for 24 h or longer has significant cytotoxic effects on CaSki and SiHa cells.

Thymoquinone Induces Apoptosis in Cervical Cancer Cell Lines

To evaluate whether TQ activity is related to programmed cell death, we measured the percentage of apoptotic cells in TQ-treated CaSki and SiHa cells. Annexin V and PI double staining can discriminate between apoptotic and necrotic cells. Here, flow cytometric analysis showed that TQ increases the apoptosis rate in both CaSki and SiHa cells, whereas the proportion of necrotic cells was reduced after treatment with TQ. The results show that increasing the exposure dose enhances the level of apoptotic cells (Figure 2).
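Flow-cytometric apoptosis rates such as those in Figure 2 are conventionally derived from the Annexin V/PI quadrant counts (early apoptotic = Annexin V+/PI−, late apoptotic = Annexin V+/PI+). The short sketch below only illustrates that arithmetic; the event counts are hypothetical and the gating scheme is the usual convention rather than a detail stated in this paper.

```python
def apoptotic_fraction(early, late, live, necrotic):
    """Percentage of apoptotic cells = (early + late apoptotic events) / total events * 100."""
    total = early + late + live + necrotic
    return 100.0 * (early + late) / total

# Hypothetical event counts from one acquisition of 10,000 cells.
print(f"{apoptotic_fraction(early=900, late=600, live=8200, necrotic=300):.1f} % apoptotic")
```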
Thymoquinone Regulates EMT-Associated Genes/Proteins in Cervical Cancer Cells CaSki and SiHa

The mRNA and protein expression levels of EMT-associated genes/proteins, namely Twist1, Snail1, Slug, Zeb1, E-Cadherin, N-Cadherin, MMP-9 and Vimentin, as well as the anti-apoptotic and pro-apoptotic proteins Bcl-2, Bax, PARP, Caspase-3 and Caspase-9, were investigated in TQ-treated and non-treated cells. Both CaSki and SiHa cells were treated with 5 µM and 10 µM of TQ for 24 h, and total RNA was then extracted from the cells for quantitative RT-PCR (qRT-PCR), while DMSO-treated cells were used as control. The qPCR analysis showed that TQ treatment inhibits Twist1 and Zeb1 expression and increases E-Cadherin expression in both CaSki and SiHa cell lines (Figure 3A). TQ also affected Snail1, Slug, Vimentin and MMP9 in CaSki cells, but the results were not consistent in SiHa cells. N-Cadherin expression was unaffected. Bax and Bcl-2 also remained unaffected (Figure 3A). PARP, Caspase-3 and Caspase-9 protein levels in CaSki and SiHa cells were likewise nearly unaffected (Figure 2C).

For the study of protein-level expression of the EMT-TFs, CaSki and SiHa cells were treated with 5 µM and 10 µM of TQ for 36 h, and proteins were extracted for western blot analysis, while DMSO-treated cells were used as control. The western blot analysis showed that TQ treatment down-regulates Twist1 and Zeb1 proteins and up-regulates E-Cadherin in both CaSki and SiHa cell lines (Figure 3B).
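The paper reports relative mRNA levels (Figure 3A) without spelling out the quantification formula; the widely used 2^(−ΔΔCt) method, which normalizes a target gene to a housekeeping gene, is one common way such fold changes are obtained. The sketch below assumes that method — the Ct values and the reference gene are hypothetical placeholders, not data from this study.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression (treated vs. control) by the 2^(-ddCt) method."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** (-(d_ct_treated - d_ct_control))

# Hypothetical Ct values: a Twist1-like target against a beta-actin-like reference.
print(round(fold_change(26.4, 17.1, 24.9, 17.0), 2))  # < 1 indicates down-regulation
```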
Thymoquinone Directly Targets the Twist1 and Zeb1 Genes

To investigate whether TQ directly targets the Twist1/Zeb1 genes, a luciferase reporter assay was performed. The Twist1 and Zeb1 reporter genes were transfected, with or without TQ treatment, into the SiHa cell line, and luciferase activity was measured 48 h after transfection. The results showed that TQ dose-dependently decreases Twist1 and Zeb1 promoter activity (relative light units, RLU; Figure 4A), indicating that the Twist1 and Zeb1 promoters might be directly affected by TQ.

Effects of Thymoquinone on Twist1 Promoter Methylation in Cancer Cells

To further investigate whether promoter methylation affects Twist1 expression as an epigenetic mechanism, methylation assays of the CpG islands in the Twist1 promoter of cervical cancer cells were performed by pyrosequencing. The results, shown in Figure 4B for CaSki and Figure 4C for the SiHa cell line, indicate that proximal promoter methylation of the Twist1 gene was slightly increased by TQ treatment (5 µM for 24 h) (quantitative data in Figure 4D). Thus, promoter methylation of the Twist1 gene might be one of the mechanisms of Twist1 down-regulation by TQ. However, we did not test the effects of TQ on Zeb1 promoter methylation in cervical cancer cell lines because this assay was not available.
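The per-site methylation percentages behind Figure 4B-D follow directly from the pyrosequencing readout described in the Methods, where each CpG is read as a C/T polymorphism after bisulfite conversion (methylated cytosines stay C, unmethylated ones read as T). The sketch below only illustrates that calculation; the read counts are hypothetical.

```python
def methylation_percent(c_reads, t_reads):
    """% methylation at one CpG site = C / (C + T) * 100 after bisulfite conversion."""
    return 100.0 * c_reads / (c_reads + t_reads)

# Hypothetical C/T read counts at three CpG sites of the Twist1 proximal promoter.
sites = {"CpG_1": (12, 88), "CpG_2": (18, 82), "CpG_3": (15, 85)}
for name, (c, t) in sites.items():
    print(name, f"{methylation_percent(c, t):.1f}% methylated")
```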
Discussion

Epithelial-mesenchymal transition (EMT) plays an important role in cancer metastasis. The transcription factors Twist1, Snail1, Slug and Zeb1 play vital roles in the initiation of the EMT process [18-22]. Studies have revealed that abnormal expression of EMT-TFs is associated with the metastatic process [9,17]. Cervical cancer, which is the second most common gynecological malignant tumor in females worldwide, has a high morbidity in China, and resistance to chemotherapy is a major obstacle to the effective treatment of cancers, including cervical cancer [23,24]. The acquisition of EMT features has been proposed as a key contributor to chemoresistance in cancer cells. Hence, it is crucial to obtain better insights into the mechanisms underlying the induction of EMT and to explore novel approaches to improve drug sensitivity in cervical cancer patients [25,26]. In the current study, we found that TQ has time-dependent and dose-dependent cytotoxic effects on cervical cancer cell lines. Moreover, TQ dose-dependently inhibited the migration and invasion processes in cervical cancer cells. The anticancer and antimetastatic activities of TQ have previously been reported in certain cancers by other studies [9-12]. However, the mechanisms of the antimetastatic role of TQ are extremely complex and still obscure, and very few studies have specifically explored the effects of TQ on cervical cancer metastasis. In this study, we found that, at the molecular level, TQ decreases the expression of Twist1 and Zeb1 and increases the expression of E-Cadherin at both the mRNA and protein levels. Our previous study reported that TQ inhibits metastasis via downregulation of Twist1 and upregulation of E-Cadherin in metastatic breast cancer cell lines [8,9]. Here we report for the first time the effectiveness of TQ in controlling cell growth and metastasis in cervical cancer cell lines via regulation of Twist1 and E-cadherin expression. Moreover, Zeb1 has also been identified as a new target for potential TQ therapy in cervical cancer cells.
Evidence shows that Twist1 decreases the expression of E-cadherin [19,27-29], and the lack of E-cadherin can in turn induce the expression of Twist1; this forms a positive feedback loop that keeps cells in a mesenchymal state and thereby induces EMT. As in breast cancer cells [9], in the SiHa and CaSki cervical cancer cell lines we also found by luciferase assay that Twist1 promoter activity and expression were decreased by TQ. This indicates that Twist1 might be a direct target of TQ. Over the past few years, Zeb1 has increasingly been considered an important contributor to the progression of malignancies including endometrial cancer, breast cancer, lung adenocarcinoma, and cervical cancer. Li et al. [30] found that downregulation of Zeb1 expression might reduce the proliferation and motility of cervical cancer cells. Besides, Zeb family factors (Zeb1 and Zeb2) promote EMT by repressing E-cadherin expression [19,23,31,32]. Thus, the results of our study, linked to previous studies, indicate that TQ inhibits the migration and invasion of cervical cancer cells, probably via Twist1/E-Cadherin/EMT and/or Zeb1/E-Cadherin/EMT signaling pathways. In addition, flow cytometric analysis showed that TQ increases the apoptotic rate in cells. However, the expression of PARP, Caspase-3, Caspase-9, Bax and Bcl-2 in CaSki and SiHa cells was nearly unaffected by TQ, indicating that the effects of TQ on CaSki/SiHa apoptosis might involve other mechanisms. TQ has previously been reported by other studies to induce apoptosis via a number of mechanisms of action, such as modulation of the p53 pathway, the NF-κB pathway, and ROS generation [9,33-35]. Even though the literature supports the hypothesis of a role for DNA methylation in the control of Twist1 expression, the differences induced by TQ treatment in cervical cancer cells were very small. Regulation of EMT and EMT-TFs probably involves different pathways [36,37], as well as other epigenetic mechanisms. It has recently been reported that many long non-coding RNAs (lncRNAs) are pivotal regulators in the oncogenesis and progression of cervical cancer [38]. For example, Ji et al. [39] reported that HOTAIR, a lncRNA, is able to sponge miR17-5p, and Battistelli et al. [40] demonstrated that this lncRNA is involved in the repression of E-cadherin expression in EMT, a typical signature observed in cancer cells, as also reported by us in this study. In this regard, lncRNAs could be regulated after TQ treatment; indeed, we previously demonstrated that co-delivery of TQ and miR-34a, a small non-coding RNA molecule, is able to inactivate EMT signaling by directly targeting Twist1 and Zeb1 [13]. Thus, it could be hypothesized that TQ treatment could also be responsible for the reversal of EMT through their downregulation. Future studies should be performed to validate this hypothesis.

Cell Culture and Thymoquinone Treatment

Human cervical cancer cell lines CaSki and SiHa were cultured in RPMI-1640 medium (Thermo Fisher Scientific, Waltham, MA, USA) with 10% fetal bovine serum (FBS) (Pan Biotech, Bavaria, Germany). TQ was purchased from Sigma-Aldrich (St. Louis, MO, USA) and suspended in dimethyl sulfoxide (DMSO). Different concentrations of TQ were used to treat the cancer cell lines, while DMSO was used as control.

Cell Viability Assay

Cell viability was examined by CCK-8 assay (Beyotime, Jiangsu, China).
Briefly, in a 96-well cell culture plate, 5 × 10^4 cells (in 100 µL of medium) were cultured per well and, after overnight incubation, were treated with various concentrations of TQ (1, 5, 10, 20 and 40 µM) for 12 h, 24 h, 36 h and 48 h, respectively. At the end of the incubation periods, 10 µL of CCK-8 reagent was added to each well and kept at room temperature for 1 h. The absorbance (optical density) was then recorded at 450 nm in a microplate spectrophotometer (Multiskan GO, Thermo Scientific, Ratastie, Finland). The color intensity (OD value) reflected the proportion of live cells.

Cell Growth, Migration and Invasion Assays

A real-time cell analyzer (xCELLigence RTCA DP, Roche, Penzberg, Germany) was used for the real-time analysis of the cell migration, invasion and growth indexes [14,15]. For the cell growth index, 100 µL of cell suspension (5 × 10^4 cells/mL) was seeded into each well of the 16-well E-plate. CIM plates were used for the analysis of cell migration and invasion, where the lower-chamber wells were filled with a chemotaxis inducer (10% serum-supplemented medium), and 100 µL of cell suspension (5 × 10^4 cells/mL) in serum-free medium was added to the wells of the upper chamber. For the cell invasion assay, the membrane of the CIM plate was pre-coated with Matrigel (354277, BD Biosciences, Sparks, MD, USA) at a 1:40 dilution in 1× PBS before the cells were seeded. After a certain period of cell growth (usually 4 h, as indicated in the figures), TQ at different concentrations (1-10 µM) was added to the wells. The process of cell migration and invasion was monitored every 30 min until the experimental endpoint.

RNA Extraction, RT-PCR and qPCR Analysis

After TQ treatment for 24 h, cellular total RNA was extracted using the RNAsimple Total RNA kit (Cat No: DP419, TIANGEN, Beijing, China), following the manufacturer's guidelines. The RNA concentration was measured using an ND-2000 UV/Vis spectrophotometer (NanoDrop, Wilmington, DC, USA), and the final concentration was set at 150 ng/µL for cDNA synthesis (reverse transcription, RT-PCR). In a 20 µL RT reaction system, 4 µL of 5× RT buffer, 2 µL of dNTPs, 1 µL of random primer, 1 µL of Rev. Ace (enzyme, purchased from TOYOBO, New York, NY, USA and BIOBRK, Chengdu, China), 0.5 µL of super RI, 0.5 µL of RT-enhancer, 4.5 µL of RNase-free water and 6.5 µL of RNA (150 ng/µL) were mixed. The reaction was completed in a thermocycler (Mastercycler Gradient, Eppendorf, Germany) with the following steps: 10 min at 30 °C, 30 min at 42 °C, 5 min at 99 °C, 5 min at 4 °C, followed by a final hold at 16 °C. The synthesized cDNAs were then diluted by adding 80 µL of ddH2O and used as templates for quantitative PCR (qPCR) to measure the expression levels of Bcl-2 and Bax (anti-apoptotic and pro-apoptotic proteins) and Twist1, Snail1, Slug, Zeb1, Vimentin, E-Cadherin, N-Cadherin and MMP9 (major metastasis-associated EMT-TF proteins) [16,17]. The sequence-specific fluorescence-labeled probes and primers for TaqMan qPCR were matched by the Universal Probe Library Center (Roche) [8,14]. The primer sequences for the investigated genes are presented in Table 1 (primer sequences for qPCR used for mRNA isolated from human cervical cancer cells).

Protein Extraction and Western Blot Analysis

After TQ treatment for 36 h, cellular proteins were extracted using EBC lysis buffer [14]. Proteins were then separated by vertical polyacrylamide gel electrophoresis and transferred to a nitrocellulose membrane.
The membrane was blocked in 5% milk (in 1× TBST) at 4 °C for 1 h and then incubated with the primary antibody solution at 4 °C for 12 h with gentle shaking. The membrane was then washed three times with TBST and incubated with a horseradish peroxidase-tagged secondary antibody for 2-4 h at room temperature with gentle shaking. The membrane was again washed three times with TBST, and the protein bands were visualized after the chemiluminescence reaction using a digital imaging system (Universal Hood II, Bio-Rad Lab, Segrate, Italy) [8,14]. The primary antibodies used in this study were anti-Twist1 (Abcam), anti-Zeb1 (Cell Signaling Technology, Danvers, MA, USA), anti-PARP (#9532, Cell Signaling Technology), anti-Caspase-3 (#9665, Cell Signaling Technology), anti-Caspase-9 (#9508, Cell Signaling Technology), anti-E-cadherin (Cell Signaling Technology), anti-beta actin (Beyotime Biotechnology, Jiangsu, China), and anti-HSP70 (Cell Signaling Technology). Corresponding to the primary antibodies, anti-mouse (Bioworld Technology, Dublin, OH, USA) and anti-rabbit (Beyotime Biotechnology, Jiangsu, China) antibodies were used as secondary antibodies. The comparative level of protein expression was measured by analyzing the visualized protein bands using ImageJ software (National Institutes of Health, Rockville, MD, USA).

Luciferase Reporter Assay

Then, 60%-confluent SiHa cells in 12-well plates were transfected with 100 ng of the pGL3-hTwist1-Luc or pGL3-hZeb1-Luc promoter/reporter plasmid, without or with the indicated concentrations of TQ (0, 1, 2, 4, 8, 16 µM). TQ was also applied to control cells (transfected with the pGL3-Basic-Luc reporter plasmid), and Twist1 and Zeb1 promoter activities were measured using a luciferase assay system (Promega, Madison, WI, USA). The relative luciferase activity, expressed as relative light units (RLU), was determined using a 3010 luminometer (BD Monolight, Franklin Lakes, NJ, USA) two days after transfection.

Measurement of Cellular Apoptosis

Apoptosis was detected by Annexin V binding using the FITC Annexin V Apoptosis Detection Kit (BD Pharmingen, Sparks, MD, USA). CaSki and SiHa cells (10^6 cells/mL) were plated and incubated overnight prior to being treated with different concentrations of TQ (0, 5, 10 µM). The cells were harvested, washed with PBS, re-suspended in 1× Annexin V binding buffer, and stained with 5 µL of Annexin V and 5 µL of PI for 10 min at room temperature in the dark. The distribution of cell populations in the different quadrants was detected using a BD FACSCalibur cell analyzer.

Twist1 Gene Methylation Assay

CaSki and SiHa cells were treated with TQ (5 µM) for 24 h, and DNA was extracted using the TIANamp genomic DNA kit (TianGen, Beijing, China). The PCR products from bisulfite-treated genomic DNA samples were analyzed by pyrosequencing technology in order to quantify site-specific methylation. The Qiagen bisulfite kit was used for the treatment of genomic DNA, and the primers used for the amplification of the Twist1 gene promoter by PCR were as follows: F: 5'-GGGAGAGATGAGATATTATTTATTGTGT-3'; R: 5'-CTCCTCCCAAACCATTCAA-3'. The sequencing was performed using the sequencing primer (5'-AGGAGGGGAAGGAAA-3'), as described previously [8].
Each site is analyzed as a C/T polymorphism, and the percentage of methylation is displayed in a small colored box just above each CpG site, where 100% denotes a fully methylated C, 0% denotes an unmethylated C, and intermediate C/T percentages denote partial methylation of the genomic DNA.

Statistical Analysis

Data were analyzed by one-way ANOVA followed by post-hoc comparisons using SPSS v. 20 (IBM, New York, NY, USA) and MS-Excel 2010 (Microsoft, Washington, DC, USA). Results are presented as mean ± SD. p < 0.05 was considered statistically significant.

Conclusions

Our findings suggest that TQ markedly inhibited the proliferation of cervical cancer cells in a time-dependent and dose-dependent manner and suppressed the migration and invasion of cancer cells. Targeting EMT-TFs such as Twist1 and Zeb1 might be the possible mechanism of action of TQ in controlling metastasis in cervical cancer. This study indicates that TQ is a possible chemotherapeutic agent against cervical cancer; however, for the further development and establishment of TQ as a clinical drug, clinical investigations are necessary.

Conflicts of Interest: The authors declare no conflict of interest.
The effects of milk thistle on hepatic fibrosis due to methotrexate in rat.

BACKGROUND: Extracts of milk thistle (MT), Silybum marianum, have been used as medical remedies since the time of ancient Greece. Methotrexate is a potentially hepatotoxic drug. OBJECTIVES: To clarify the hepatoprotective effects of MT against methotrexate-induced liver injury. MATERIALS AND METHODS: From January 2010 to April 2010, 30 male rats were recruited into three 10-rat subgroups at Tabriz University of Medical Sciences. Normal saline was injected intraperitoneally in the first group (A; the controls); intraperitoneal methotrexate plus oral MT extract were administered to the second group (B); and intraperitoneal methotrexate alone was given to the third group (C). Pre- and post-intervention measurements of serum parameters were carried out every 15 days. After six weeks, the rats were decapitated and histopathological evaluation of the liver was performed. RESULTS: Serum liver enzymes (AST, ALT), alkaline phosphatase, total and direct bilirubin, creatinine and BUN were measured on days 0, 15, 30 and 45. They were significantly higher in group C compared with the other two groups. Serum albumin was lowest in group C animals. There were no significant differences between groups A and B. The mean ± SD fibrosis score using a semi-quantitative scoring system (SSS) was 1.25 ± 0.46, 1.40 ± 0.52 and 6.70 ± 0.82 in groups A, B and C, respectively (p < 0.001). CONCLUSIONS: MT extract can effectively prevent methotrexate-induced liver dysfunction and fibrosis in rats.

Background

Methotrexate (MTX) is a potent hepatotoxic agent. This drug is effective in various cancers and immunologic disorders. It is used frequently in rheumatoid arthritis and psoriasis. When used without follow-up, this drug has many side effects, such as hepatotoxicity and bone marrow suppression. MTX accumulates in the liver and is hepatotoxic. It seems that folic acid can reduce MTX side effects, but this is not completely clarified. Clinicians use the drug frequently, so they would like to reduce its side effects, especially its hepatotoxic effects (1). It has been shown that milk thistle (Silybum marianum) has protective effects against hepatotoxicity (2). The objective of this study was to clarify the effect of MT on MTX-induced hepatotoxicity in an animal model. In animal models, it has been shown that MT prevents atherosclerotic plaque formation in the aorta. It has also been shown that cisplatin and cyclosporine side effects were reduced when MT was administered in mice (3,4). Reports showed that silymarin promotes DNA polymerase activity, stabilizes cell membranes, inhibits free radicals and increases glutathione concentration, so it could protect the liver from hepatotoxic agents. Silibinin is able to stimulate the activity of DNA-dependent RNA polymerase I and causes an increase in rRNA synthesis; it accelerates the formation of intact rRNA polymerase, with resultant formation of new hepatocytes (5). Silymarin inhibits the lipoxygenase cycle and the formation of leukotrienes and free radicals in mouse Kupffer cells, so inflammation may be reduced (6). MT has been used as a treatment for some 2000 years and is mentioned as a hepatoprotective agent (7). MT is found in many areas all around the world and is cultivated in the northern and southern parts of Iran. This drug is absorbed via the gastrointestinal tract; the maximum blood level is reached after 2-4 hours. The half-life of the drug is six hours. About 80% of MT is secreted into the bile, and its bioavailability depends on its formulation (8).
Silybin is the most effective agent in MT and is known as an antioxidant and hepatoprotective agent. Its concentration in bile is 60 times greater than in the blood. Silymarin has various cardiovascular effects (9). Silymarin inhibits liver enzymes such as gamma-glutamyl transpeptidase (GGT), alanine transaminase (ALT) and aspartate transaminase (AST) in rats (5). This drug blocks hepatic fibrosis due to biliary obstruction in mice. In one study, a formulation of silymarin extract (Legalon) was used in 2637 patients with chronic liver disease for eight weeks, and the liver enzymes decreased remarkably in 88% of patients. Side effects were seen in less than 1% of patients (10). Silymarin is widely used in poisoning with the Amanita fungus and reduces mortality significantly (60%-80%). The effects of silymarin on alcoholic liver damage are controversial, but in a controlled double-blind clinical trial this drug improved liver enzyme levels and histopathologic liver features after four weeks in alcoholic hepatotoxicity (11). In one study, silymarin reduced mortality in patients with alcoholic cirrhosis after four years. In another study, however, silymarin could not reduce hepatic mortality in cirrhotic patients (12). The effects of silymarin on hepatic damage due to hepatitis are also controversial. In a double-blind clinical trial, 20 patients with chronic active hepatitis received 240 mg of a silybin complex (silipide) twice a day for seven days; the GGT level was significantly reduced (13). In another study, 29 patients with viral hepatitis were treated with silymarin and 28 patients received placebo; serum bilirubin, AST and ALT levels were significantly reduced in the treatment group, but in a further study of 151 patients with viral hepatitis this drug could not improve the clinical condition (14). Silymarin has other therapeutic effects. It reduces LDL cholesterol levels and atherosclerotic plaque formation in rabbits and mice (15). This agent has some mild side effects, such as allergic reactions in sensitive patients. In animal models, silymarin at higher doses has no side effects. Long-term use of this drug was safe. Silymarin is also safe in pregnancy, lactation and children (12).

Objectives

This study tries to determine the hepatoprotective effects of MT against MTX-induced hepatotoxicity.

Materials and Methods

In an experimental study, 30 rats (weighing 250-300 g) were used. The rats were kept in the animal house for one week and had access to water and food ad libitum. Temperature was kept at 37 °C. After one week, the rats were randomly divided into three equal groups: group A rats received normal saline (600 mg/kg); group B received MTX (100 μg/kg) intraperitoneally and silymarin (600 mg/kg) orally; and group C rats received MTX (100 μg/kg) intraperitoneally alone. We used 1 mL of MTX (1000 mg/10 mL) and diluted it in 99 mL of normal saline, and then 1 mL of this product was diluted in 9 mL of normal saline (100 μg/mL MTX). It was injected using insulin syringes. MT was administered orally by a special syringe for rats four hours later. This study was done in the Education Development Center of Tabriz University of Medical Sciences, Tabriz, Iran. The study was conducted under the supervision of a zoologist and a pharmacist experienced in these kinds of animal studies. MT seeds were obtained from different areas along the Aras river in East Azarbayjan, north-west Iran.
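The two-step MTX dilution described above can be verified with simple arithmetic; the short sketch below just re-traces those numbers and is not part of the original protocol.

```python
# Step 0: commercial MTX stock, 1000 mg in 10 mL.
stock_mg_per_ml = 1000 / 10                      # 100 mg/mL

# Step 1: 1 mL of stock into 99 mL normal saline (100 mL total).
step1_mg_per_ml = (1 * stock_mg_per_ml) / 100    # 1 mg/mL

# Step 2: 1 mL of the step-1 solution into 9 mL normal saline (10 mL total).
step2_mg_per_ml = (1 * step1_mg_per_ml) / 10     # 0.1 mg/mL

print(step2_mg_per_ml * 1000, "ug/mL")           # 100.0 ug/mL, matching the stated 100 ug/mL
```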
In the pharmacognosy laboratory of the pharmacy faculty, dry seeds were milled and processed with hexane; the silymarin and its flavonolignans were extracted using the succilating method. After solvent evaporation, the amount of total flavonolignans of silymarin was measured by spectrophotometry. Extracted materials were fractionated using SPE (Sep-Pak) cartridges and a methanol-water mixture. The solvent of the obtained fractions was evaporated, and the product was used in this study. The Tabriz University of Medical Sciences Ethical Committee approved this study. All animals received humane care according to the criteria outlined in the "Guide for the Care and Use of Laboratory Animals" prepared by the National Academy of Sciences and published by the National Institutes of Health (NIH publication 86-23, revised 1985). For biochemical studies, blood samples were taken before the intervention and 15, 30 and 45 days post-intervention. Serum samples were examined for alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), blood urea nitrogen (BUN), creatinine (Cr), bilirubin (Bil), and albumin (Alb). Every morning (8-9 am), MTX was injected; MT was given orally by special syringe four hours later. Six weeks after the intervention, the rats were killed, and their livers were fixed in 10% formalin and studied for histopathologic changes. The samples were scored using a semi-quantitative scoring system (SSS) (16) as below:
• < 2: normal;
• 2-6: mild fibrosis;
• 6-10: moderate fibrosis;
• > 10: severe fibrosis.
The data were analyzed by SPSS for Windows ver. 17. The continuous variables were first tested to see if they were normally distributed. Descriptive statistics, including the mean and standard deviation (SD), were calculated for all variables. One-way ANOVA or the Kruskal-Wallis test was used to compare the means of three or more groups. Pairwise comparisons of the study groups were performed using Tukey's HSD and the Mann-Whitney U test as post hoc tests. A p < 0.05 was considered statistically significant.

Results

Biochemical parameters were measured before the intervention and 15, 30, and 45 days after the intervention. The creatinine level was significantly different among all the studied groups (p < 0.001) except between groups A and B. Thus, the creatinine level was lower in rats treated with MT (Table 1). BUN was also significantly different among the studied groups (p < 0.001); the BUN level was lower in rats treated with MT (Table 1). ALT was also significantly different among all groups (p < 0.001) except between groups A and B (p = 0.211); the ALT level was lower in group B (Table 1). The AST level was also significantly different among all studied groups (p < 0.001) (Table 1). The total bilirubin level was significantly different among all groups (p < 0.001), except between groups A and B, where the difference was not remarkable; the total bilirubin level was lower in rats treated with MT (Table 1). The direct bilirubin level was significantly different among all studied groups (p < 0.001) (Table 1). The albumin level was significantly different among the studied groups (p < 0.001) (Table 1). Bilirubin was lower and albumin was higher in group B rats. ALP also differed significantly among the studied groups (p = 0.01); the difference between groups A and C was significant (p = 0.01), but not between the other pairs of groups (Table 1). It must be emphasized that we did not measure prothrombin time. The mean ± SD SSS score was 1.25 ± 0.49 in group A, 1.40 ± 0.52 in group B and 6.70 ± 0.82 in group C rats.
The score was significantly higher in group C (p < 0.001); fibrosis was more severe in rats exposed to MTX without MT extract. The mean SSS score was significantly different among the studied groups. The difference between paired groups was only significant between groups A and C (p < 0.001) and between groups B and C (p < 0.001) (Figure 1).

Discussion

In this study, the protective effects of MT on MTX-induced liver damage in rats were investigated. We found that the mean levels of ALT, AST, ALP and bilirubin in rats that received MTX plus MT were significantly lower than in animals that received only MTX. The difference in the studied parameters between the MTX plus MT group and the control group (which received only normal saline) was not significant. Up to now, various studies have revealed protective effects of MT against hepatic damage (17). In these studies, it was shown that MT extract reduced the treatment period of acute and chronic hepatitis (18). Protective effects of MT in fatty liver, cirrhosis, viral hepatitis, ischemic liver damage and cancer have been shown previously (19). In our study, the SSS score in the MTX plus MT group was also significantly lower than that in the MTX group, which reflects the protective effects of MT. Buzzelli and colleagues showed that administration of MT extract to 20 patients with chronic active hepatitis reduced serum liver enzyme, ALP and bilirubin levels after seven weeks (20). In our study, liver enzymes and bilirubin were reduced in rats that received MT. Giese showed that MT extract has antihepatitic effects (21). Dhiman also emphasized the protective effects of MT on hepatic disease (22). Mayer revealed protective and curative effects of MT on viral hepatitis (23). Rambaldi and colleagues showed these protective and curative effects in viral and alcoholic hepatitis (24). Jacobs et al. emphasized the protective effects of MT extracts in hepatic disease (25). Other investigators, such as Gassileth (2008), Ross (2008), Raina (2008), and Ramakrishnan (2008), also emphasized the efficacy of MT extract in liver disease and its safety (7,26-28). Protective mechanisms of MT extract in liver disease are mentioned in different studies (29-31). Some of them are antioxidant, anti-lipid peroxidation, anti-fibrinolytic, anti-inflammatory and immunomodulatory effects, induction of cell formation, glutathione inhibition, reduction of leukotrienes, reduction of tumor promoters and P450 inhibition. We found that the mean BUN and creatinine levels were lower in the group that received MT extract, so reno-protective effects are also proposed (32). In animal studies, this drug prevented atherosclerotic plaque formation in the aorta (33). Previous studies revealed that silymarin prevents acetaminophen and tetrachloromethane hepatotoxic effects (22). Reports showed that MT may promote DNA polymerase activity, stabilize cell membranes, inhibit free radicals and increase glutathione concentration, so it protects the liver against hepatotoxic agents (34). Promotion of DNA polymerase leads to rRNA synthesis and hepatocellular regeneration. By increasing glutathione concentration, it stabilizes superoxide dismutase and glutathione peroxidase (18). Different animal studies showed that silymarin, as premedication, protects hepatocytes against viruses, chemical agents, fungal toxins and alcohol. This drug protects animals against the fatal effects of Amanita toxins (12,35). Silymarin premedication prevents the hepatotoxic effects of halothane, thallium tetrachloride and acetaminophen in animal studies (7).
Silymarin inhibits liver enzymes such as gamma-glutamyl transpeptidase (GGT), ALT and AST in rats (36). In one study, silymarin reduced mortality in patients with alcoholic cirrhosis after four years; on the other hand, in another study silymarin could not reduce hepatic mortality in cirrhotic patients (35). In our study, it was shown that MT extract protects the liver against the hepatotoxic effects of MTX: liver enzymes (AST, ALT and ALP), bilirubin and albumin remained unchanged. MT extract also prevented kidney injury due to MTX in rats: BUN and creatinine levels remained unchanged. MT extract significantly prevents liver damage due to MTX in rats, so similar human studies are required. Studies on the effect of MT extracts on drug-induced kidney injury are therefore warranted. Similar results would justify the use of MT extract as a prophylactic agent against hepatic side effects in patients receiving MTX. Financial support: None declared. Conflict of interest: None declared.
Cigarette Smoking and Human Gut Microbiota in Healthy Adults: A Systematic Review

The intestinal microbiota is a crucial regulator of human health and disease because of its interactions with the immune system. Tobacco smoke also influences the human ecosystem, with implications for disease development. This systematic review aims to analyze the available evidence, until June 2021, on the relationship between traditional and/or electronic cigarette smoking and intestinal microbiota in healthy human adults. Of the 2645 articles retrieved from PubMed, Scopus, and Web of Science, 13 were included in the review. Despite differences in design, quality, and participants' characteristics, most of the studies reported a reduction in bacterial species diversity and decreased variability indices in smokers' fecal samples. At the phylum or genus level, the results on bacterial abundance in smokers and non-smokers are very mixed, with two exceptions. Prevotella spp. appears significantly increased in smokers and former smokers but not in electronic cigarette users, while Proteobacteria showed a progressive increase in Desulfovibrio with the number of pack-years of cigarette smoking (p = 0.001) and an increase in Alphaproteobacteria (p = 0.04) in current versus never smokers. This attempt to systematically characterize the effects of tobacco smoking on the composition of gut microbiota gives new perspectives on future research in smoking cessation and on a possible new use of probiotics to counteract smoke-related dysbiosis.

Introduction

The pivotal role of the gut microbiota is now an unquestionable scientific assumption [1-3]. Several studies have demonstrated that it significantly contributes to maintaining the physiological equilibrium of the mucosal microenvironment, and it also interacts intimately with the intestinal immune system [1-8]. In particular, the microbiome is considered the "new" biomarker of human health because of its fundamental role in maintaining normal body physiology while developing and educating the immune system [1]. Indeed, the intestinal microbiota maintains mucosal integrity, regulates the absorption of ingested food, and exerts competitive inhibition by preventing invasion or colonization by other potentially pathogenic microorganisms [2]. Microbial products, such as short chain fatty acids (SCFAs) and polysaccharide A, modulate immune homeostasis and the local immune response towards a pro-inflammatory or anti-inflammatory status [3]. The clinical importance of the microbiota in maintaining homeostasis in the human body is clear, particularly considering its involvement in a wide spectrum of human diseases; it is therefore important to carefully examine the interaction between smoking and microbiota in the development of intestinal and systemic diseases. In 2021, Gui et al. reported that tobacco smoking has been associated with significant changes in gut bacterial taxa [24]. Indeed, smoking implies the intake of more than 7000 toxic substances that could play a role in gut microbiota composition; however, research to identify the specific influence of these toxic substances on gut microbiota is still ongoing. Even electronic cigarette (e-cigarette) users are exposed to toxic substances, which can modify the inflammatory human response. In particular, the in vitro study by Lee et al.
found that "exposure of endothelial cells to e-liquid, conditioned media induced macrophage polarization into a pro-inflammatory state, eliciting the production of interleukin-1β (IL-1β) and IL-6, leading to increased ROS" [25]. This systemic pro-inflammatory status might also have an impact on the gut microbiota composition, as suggested by available studies on the impact of e-cigarette use on animals' gut microbiota and on oral microbiota composition in humans [26,27]. To date, the effects of smoking on gut microbiota have not been systematically evaluated, especially in humans. The aim of this systematic review is to analyze the available evidence concerning the relationship between cigarette smoking and human intestinal microbiota, in order to contribute to the characterization of the gut microbiota profile of healthy smokers and to highlight its potential impact on the host health status. Materials and Methods We conducted this systematic review in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [28,29]. It has been registered with the International Prospective Register of Systematic Reviews (PROSPERO registration number: CRD42021169423). Identification of Studies We identified our MeSH terms and developed the search strategy using the PICO process (POPULATION-INTERVENTION-COMPARISON-OUTCOME) [30]. We searched all the available literature published until 7 June 2021 on three electronic databases: PubMed, Scopus, and Web of Science. The search was conducted using the following keywords: (smok* OR cigarette* OR tobacco OR e-cig* OR "electronic cigarette" OR vaporizer*) AND ((microbio* OR bacteria* OR microbial OR flora OR microflora) AND (gut OR intestinal)) AND (English[lang]). Eligibility Criteria All study designs (systematic review, randomized controlled trial, cohort study, casecontrol study, cross-sectional study, narrative review) on healthy adults with an age range of 18-65 years, no gender difference, and only tobacco smokers and e-cigarette users were considered. We evaluated only the intestinal microbiota collected on fecal samples and analyzed with genome sequencing of rRNA 16S. The search was limited to the English language. Grey literature and studies considering second-hand smoke, air pollution, and upper airway microbiota were excluded. Variability Indices The primary aim was to assess the abundance of Phyla, the Phyla's ratio, and the species' variability measured through any variability index (mathematical measure) for alpha diversity and beta-diversity indices [31]. Briefly, the Shannon index provides a statistic of diversity species assuming all species are represented in a sample and that they are randomly sampled, while the Simpson and Pielou indices are dominance indicators providing the description of species distribution [31,32]. Moreover, Sobs, Chao1, and Heip indices are mainly sensitive to the variation of rare species, could indicate rare OTUs [33]. Beta-diversity indices, such as Bray-Curtis dissimilarity or UniFrac, were used to evaluate the different structures of the communities between samples, both considering samples' phylogeny (weighted UniFrac) and evaluating the presence/absence of genera in the samples (unweighted UniFrac) [34]. Primary and Secondary Level Screening Three authors independently screened for relevance a total of 1217 articles by titles and abstracts using Jabref [35]. The first level of screening was based on the inclusion and exclusion criteria. 
In the second level of screening, studies indicated as relevant were subsequently reviewed as full-text. Disagreements were solved with third-party consultation. The authors reached a consensus for all included studies.

Data Extraction

Data were extracted using a standardized extraction table in Microsoft Excel and verified for completeness and accuracy by all authors. We collected information on study characteristics (author, country, year of publication, study design); methods of study (setting, population characteristics, timing of tobacco exposure); outcomes (abundance of phyla, variability index, phyla ratios); and the main results.

Quality Assessment

We assessed the methodological quality of the included studies by using the following scales. The "Methodological index for non-randomized studies" (MINORS) [36] was used for non-randomized studies; it is composed of eight items for non-randomized studies and four more items in the case of comparative studies, and it is based on a scoring system from zero to two, where zero is "not reported", one is "reported but inadequate", and two is "reported and adequate". The global ideal score is at least 16 for non-comparative studies and 24 for comparative studies. The Joanna Briggs Institute Critical Appraisal tool [37] was used for cross-sectional studies; it consists of a scoring protocol from one to eight, based on the presence, absence, unclear reporting, or non-applicability of each item. Studies were considered of good/high quality when a total score of 5/8 was reached in the quality assessment, whereas a lower score was classified as poor quality. All studies considered cigarette smoking as a source of tobacco, except for two in which both cigarette smoking and e-cigarette smoking were considered [39,41]. In their controlled prospective study, Biedermann et al. analyzed stool samples of healthy smoking human subjects undergoing controlled smoking cessation during a 9-week observational period compared with two control groups, consisting of ongoing smoking and nonsmoking subjects [47].

General Characteristics of the Studies

The main features of the included studies are summarized in Table 1 (legend: N = total sample size; F = number of females and M = number of males; age given as range or mean ± SD; characteristics of subgroups).

Diversity Analysis

The results of the selected studies are summarized in Table 2. A statistically significant reduction of the Shannon index among tobacco smokers was shown in four studies [39,41,45,48], and just one study found a significant reduction of the Pielou index [45]. A statistically significant reduction of the Shannon index was also found in e-cigarette users in the study by Curtis et al. [39]. However, a decreasing trend of the Shannon index, both among tobacco [28,44] and among e-cigarette smokers [41], was found in studies that did not produce statistically significant results. On the other hand, it is interesting to note that Biedermann et al. found an increase in alpha diversity after smoking cessation [47]. When beta diversity was considered, Biedermann et al. found a statistically significant difference in the UniFrac distance in subjects undergoing smoking cessation, comparing the time points prior to and after the smoking cessation intervention [47]. Another study found similar results between tobacco smokers and non-smokers [39].
These results were confirmed by Lee et al., who showed statistically significant beta diversity, using Jaccard-based diversity analysis, between former smokers and current smokers and between never smokers and current smokers [48]. Finally, Chen et al. found that tobacco use showed a trend toward association with the microbiota using UniFrac distance [38].

Methodological Quality of the Studies

The quality assessment for all included studies is summarized in Supplementary Tables S1 and S2 and in Supplementary Figure S1. In general, according to the Joanna Briggs Institute Critical Appraisal tools, the quality of the cross-sectional studies included in the review was good, since 6 out of 7 studies obtained a score higher than 5/8, while just 1 scored 4/8; all studies satisfied items 4, 5, 7 and 8 of the JBI tool. Item 6, concerning the application of strategies to deal with confounding factors, though identified by all authors, was the most neglected. The quality of the only controlled prospective study, according to the "Methodological index for non-randomized studies" (MINORS), was moderate, scoring 17 out of 24 points.

Cigarette Smokers, Electronic Cigarette Users, Former-Smokers, and Never-Smokers

The cross-sectional studies of Kato et al. and Nolan-Kenney et al. found a significant increase in Proteobacteria (at the genus level) in the smokers' samples [40,43]. Specifically, a progressive increase in Desulfovibrio DNA, related to the number of pack-years of cigarette smoking (p = 0.001) [40], and in Alphaproteobacteria [43] were found. A significant increase in Bacteroides was found by Ishaq, Lee, Zhang, Lin, and Harakeh et al. [44-46,48,49], in contrast with a significant decrease found by Curtis et al. (valid for both tobacco and e-cigarette smokers), Stewart, and Biedermann et al. [39,41,47]. Concerning the Bacteroidetes phylum, the behavior of Prevotella is unexpected: according to the results of Curtis' team, Prevotella seems to behave differently depending on the tobacco source, with a significant increase in tobacco smokers and a significant decrease in e-cigarette smokers [39]. A significant increase in Prevotella in smokers in comparison to controls was also found by Stewart et al. and Prakash et al. [41,42]. (Symbols used in Table 2: ↑ = increase; ↓ = decrease; * = statistically significant; ↔ = no differences between groups; /// = not reported.) In particular, Stewart et al. analyzed tobacco and e-cigarette smokers, finding two different profiles at the genus level: increased Prevotella (p = 0.006) and decreased Bacteroides in tobacco smokers [41]. Specifically, Prevotella had significantly increased relative abundance in tobacco smokers compared to controls (p = 0.008) and e-cigarette users (p = 0.003), but no difference between e-cigarette users and controls (p = 0.99) was found. Meanwhile, Bacteroides showed significantly decreased relative abundance in tobacco smokers compared to controls (p = 0.017) and e-cigarette users (p = 0.003), but no difference between e-cigarette users and controls (p = 0.684). In 2013, in a prospective controlled study on smoking cessation, Biedermann et al. reported that smokers showed lower proportions of Firmicutes and Actinobacteria than non-smokers, which tended to increase after smoking cessation. At the same time, the proportions of Proteobacteria and Bacteroidetes, higher in smokers than in non-smokers, tended to decrease after smoking cessation [47].
The results of Lee et al., Shima et al., and Lin et al. are in accordance with these phylum-level shifts [46,48,50]. Specifically, Lee's team found a significantly lower proportion of Firmicutes in current smokers than in former smokers and never-smokers (the proportion increasing progressively from current smokers to never-smokers, p = 0.015) and a significantly higher proportion of Bacteroidetes in current smokers than in former and never-smokers (decreasing from current smokers to never-smokers, p = 0.047) [48]. Smokers before and after Smoking Cessation Intervention The study by Biedermann et al. is the only controlled prospective study in this systematic review [47]. Analyzing the results of this eight-week smoking cessation intervention, they found an increase in sequences from Firmicutes and Actinobacteria and a simultaneous decrease in the Proteobacteria and Bacteroidetes fractions after smoking cessation. These changes were observed exclusively in the intervention group, particularly between the screening phase (t1) and four weeks after smoking cessation (t2) [47]. Statistical significance was found for the increase in Firmicutes (p = 0.027) and Actinobacteria (p = 0.014) as well as for the decrease in Proteobacteria (p = 0.041) between t1 and t2, but not for the decrease in Bacteroidetes (p = 0.109). These changes were enhanced at t3, eight weeks after smoking cessation, even though the composition of phyla between t2 and t3 remained strikingly similar with the exception of Bacteroidetes, pointing to both a relatively rapid (within four weeks) and durable (over the eight-week interval) effect of smoking cessation on microbial composition. In contrast, in the control groups there was no significant change in the microbial composition at the phylum level. Alpha diversity was shown to be substantially higher four weeks after smoking cessation compared to the samples obtained while smoking. After eight weeks, there was still a tendency towards increased diversity levels compared to baseline. In the control groups, both diversity indices were relatively stable. Finally, UniFrac distance was analyzed as a measure of the difference in phylogenetic lineages between environments. Biedermann's team found the largest UniFrac distance in subjects undergoing smoking cessation when comparing t1 with t2 and t3, whereas no difference was observed in either the intervention or the control groups after the intervention (t2 vs. t3). Discussion This review was aimed at evaluating the available evidence on the interaction between cigarette smoking and intestinal microbiota of healthy humans. Although the examined studies differed in design, quality, and participants' characteristics, it is of concern that the majority of them reported lower levels of bacterial species diversity in smokers' fecal samples. This evidence is in accordance with previous results obtained analyzing oral and gut microbiome, coming from animal and human models [24,51]. Despite the limited number of dedicated studies, even the use of e-cigarettes seems to be associated with a low gut microbiota variability [39,41]. Conversely, inconsistent results were reported for Firmicutes and Bacteroidetes at the phylum or genus level. In particular, the genus Bacteroides was reported to be mainly represented in smokers by four studies [44-46,49] and in non-smokers by two other studies [39,41]. Prevotella spp. was found to be highly abundant in cigarette smokers [39,41,42] but lower in e-cigarette smokers [41].
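For the genus-level contrasts among tobacco smokers, e-cigarette users, and controls summarized above, a typical analysis would combine an omnibus non-parametric test with pairwise comparisons. The following sketch uses invented Prevotella relative abundances purely as a placeholder example.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical Prevotella relative abundances (%) per subject in three groups.
groups = {
    "tobacco": np.array([12.1, 15.4, 9.8, 14.2, 13.5, 11.9]),
    "e-cigarette": np.array([4.2, 3.8, 5.1, 2.9, 4.6, 3.3]),
    "control": np.array([5.5, 6.1, 4.9, 5.8, 6.4, 5.2]),
}

# Omnibus non-parametric test across the three groups.
h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Pairwise Mann-Whitney U tests (a multiple-comparison correction, e.g. Bonferroni,
# would be applied in a real analysis).
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        u, p_pair = mannwhitneyu(groups[names[i]], groups[names[j]], alternative="two-sided")
        print(f"{names[i]} vs {names[j]}: U = {u:.1f}, p = {p_pair:.4f}")
```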
It should be noted that these results could have been affected by confounding factors. Only four studies adjusted their results for identified confounders [40,42,43,50]. The underlying mechanism linking cigarette smoking with intestinal microbiota dysbiosis is largely unknown. Several compounds and mechanisms have been proposed that may regulate this interaction [24]. Cigarette smoke contains many toxic substances, including polycyclic aromatic hydrocarbons (PAHs), aldehydes, nitrosamines, and heavy metals, which are inhaled into the lungs. These substances may reach the gastrointestinal tract and induce microbiota dysbiosis via different mechanisms, such as antimicrobial activity or regulation of the intestinal microenvironment [24,52]. Exposure to smoke components can benefit some bacteria populations by elevating the intestinal pH or decreasing the production of organic acids, enabling some species to thrive, and cause intestinal microbiota dysbiosis [53,54]. Changes in the concentration of bacteroides, which normally constitute about 25% of all gut microbiota and provide amino acids and vitamins from dietary proteins, seem to modulate gut production of amino acids (serotonin, catecholamines, glutamate), with a possible role in the alteration of vagal nerve transmissions to the brain [55]. Polycyclic aromatic hydrocarbons, which result mainly from the thermal cracking of organic resources and incomplete burning of organic material at low temperatures, may cause various diseases due to their toxicity, mutagenicity, and carcinogenicity. Intestinal microbiota can transform these compounds into non-hazardous or less toxic substances through fermentation [56]. However, evidence suggests that excessive ingestion of these substances may significantly alter the diversity and abundance of the intestinal microbiota, causing moderate inflammation and increasing the penetrability of intestinal mucosa [57]. Cigarette smoke contains high levels of toxic volatile organic compounds (VOCs), such as benzene. Some studies have shown that benzene may alter the overall structure of intestinal microbiome [58]. Acetaldehyde, a low-molecular-weight aldehyde, is a highly reactive substance that may cause different diseases, such as liver injury and gastrointestinal cancers. Many intestinal bacteria can convert acetaldehyde into ethanol through fermentation, which can lead to the overgrowth of relevant bacteria species [59]. Furthermore, acetaldehyde increases the permeability of the intestinal tract, allowing microorganisms and endotoxin to cross the intestinal mucosal barrier. Acetaldehyde also induces endotoxemia, with subsequent injuries to liver and other organs, intestinal inflammation, and rectal carcinogenesis [60]. In addition, acetaldehyde and reactive oxygen species induce neutrophil infiltration and consequent release of tissue-damaging compounds, which cause translocation of intestinal microbiota [61]. The main toxic gases contained in tobacco smoke enter into blood through alveolar exchange, which affects O 2 transport, decreases blood pH, and induces systemic inflammation and diseases. Exposure to carbon monoxide in particular alters the intestinal microbiome by favoring bacterial species that express molecules involved in iron acquisition [62]. 
Moreover, cigarette smoke contains heavy metals (such as cadmium, arsenic, chromium, iron, mercury, nickel) which may be ingested and cause intestinal microbiota dysbiosis affecting the transport, oxidative, and inflammatory status of gut epithelium [63,64]. The human gut microbiome has a pivotal role in regulating inflammatory pathways taking part in the so-called gut-brain and gut-lung axes and there is evidence that pulmonary disorders may be implicated in the development of intestinal diseases. Patients with chronic lung diseases, whose pathogenesis is strictly related to cigarette smoking, have a higher prevalence of intestinal diseases, such as Intestinal Bowel Disease and Intestinal Bowel Syndrome [65,66]. Nicotine, or its metabolites, reduces gut microbial diversity and it worsens the symptoms in patients with Crohn's disease [67]. There is much evidence that gut-residing microorganisms interact with the immune system, linking gut dysbacteriosis with inflammation progression and tobacco-related illnesses (e.g. asthma, COPD). Furthermore, tobacco is a well-known factor related to the release of inflammatory cytokines, which are a milestone for the development of diseases, such as cancer [1,3,68]. Similarly, the vapor of e-cigarettes seems to contribute to exposure to toxic aldehydes (e.g., formaldehyde and acrolein) released by thermal decomposition of the major vehicle components of e-cigarette e-liquids (propylene glycol and glycerol) and flavorings [69]. Dysbiosis of intestinal microbiota is also closely associated with skin diseases, such as acne, psoriasis, and atopic dermatitis. Cigarette smoking may lead to intestinal microbiota dysbiosis through the skin-gut axis. Skin inflammation might contribute to intestinal disorders through immunologic regulations and shifts in the microbiota composition [70]. For all these reasons, research on humans is needed to better clarify these mechanisms and to provide possible methods to counteract their effects after smoking cessation. This review has limitations. First, selected studies show important differences in sociodemographic characteristics (two studies enrolled only males) and smoke exposure of participants (mainly self-reported), which limited the comparison and may affect the consistency of results. Furthermore, the studies differed in quality, and the main quality item involved was related to the lack of strategies to take account of the confounding factors, which weakened the strength of the findings. In particular, only a few studies considered the possible interference of diet on smoking-related effects on gut microbiota composition. However, this review represents the first attempt to characterize, systematically, the effects of tobacco smoking on gut microbiota composition in healthy humans and it opens new perspectives for future research about strategies of smoking cessation and the possible role of probiotics to counteract smoke-related dysbiosis. Conclusions The evidence shows that intestinal microbiota dysbiosis is closely associated with intestinal and extra-intestinal diseases. Smoking seems to alter gut microbiota composition, inducing dysbiosis. However, the mechanisms by which the smoke toxicants alter human intestinal microbiota are not yet clearly defined, as well as the influence of the type of cigarettes (traditional and electronic) and the conditions of smoking (indoor/outdoor, active/passive smoking, amount of cigarettes/day, etc.) on the impact of these substances. 
Maintaining the balance of intestinal microbiota represents a new possibility for therapeutic approaches to smoking-related diseases. Further research is needed in this direction. Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Multicriticality of the (2+1)-dimensional gonihedric model: A realization of the (d,m)=(3,2) Lifshitz point Multicriticality of the gonihedric model in 2+1 dimensions is investigated numerically. The gonihedric model is a fully frustrated Ising magnet with the finely tuned plaquette-type (four-body and plaquette-diagonal) interactions, which cancel out the domain-wall surface tension. Because the quantum-mechanical fluctuation along the imaginary-time direction is simply ferromagnetic, the criticality of the (2+1)-dimensional gonihedric model should be an anisotropic one; that is, the respective critical indices of real-space (\perp) and imaginary-time (\parallel) sectors do not coincide. Extending the parameter space to control the domain-wall surface tension, we analyze the criticality in terms of the crossover (multicritical) scaling theory. By means of the numerical diagonalization for the clusters with N\le 28 spins, we obtained the correlation-length critical indices (\nu_\perp,\nu_\parallel)=(0.45(10),1.04(27)), and the crossover exponent \phi=0.7(2). Our results are comparable to (\nu_{\perp},\nu_{\parallel})=(0.482,1.230), and \phi=0.688 obtained by Diehl and Shpot for the (d,m)=(3,2) Lifshitz point with the \epsilon-expansion method up to O(\epsilon^2). Such an anisotropic criticality is realized by the d-dimensional Ising model fully-frustrated within the m-dimensional subspace. The problem is that a naive computer simulation for the equilateral cluster does not yield adequate finite-size scaling. Rather, one has to adjust the shape of the cluster (that is, the system sizes of each subspace L ,⊥ ) so as to fix the following scaled ratio to a constant value; Here, the index z denotes the dynamical critical exponent, which characterizes the anisotropy. The significant point is that the exponent z itself is an unknown parameter, and it has to be determined through some preliminary analyses. After that, one is able to perform large-scale simulations. So far, the case of (d, m) = (3, 1), namely, the axial-nextnearest-neighbor-Ising model, has been studied extensively by means of the Monte Carlo method [9,10,11]. The simulation results are in agreement with the above-mentioned field-theoretical considerations as well as the series-expansion results [12,13]. In this paper, we consider the case of (d, m) = (3,2). For that purpose, we investigate the ground-state phase transition of the gonihedric model in 2 + 1 dimensions. The gonihedric model is a fully frustrated Ising magnet with the finely tuned plaquette-type (four-body and plaquette-diagonal) interactions, for which the domain-wall surface tension vanishes; so far, the classical version has been studied in detail [14,15,16,17,18]. Making a contrast to the frustrated magnetism within the real space (⊥), the quantum fluctuation along the imaginary-time direction ( ) is simply ferromagnetic, and the ground-state criticality should be an anisotropic one. In Fig. 1 (a), we present a schematic phase diagram of the (2 + 1)-dimensional gonihedric model subjected to the transverse magnetic field Γ and the frustration j; we explain the details in Sec. II. The multicritical point at j = 1, where the magnetism is fully frustrated, is our main concern. In order to simulate the (2 + 1)-dimensional gonihedric model, we utilize the numericaldiagonalization method. This approach may have the following advantages. 
First, we implemented Novotny's method [19] to represent the Hamiltonian-matrix elements; this method is readily applicable to the quantum-mechanical system as well [20]. Owing to this method, we are able to treat an arbitrary number of spins N = 8, 12, . . . , 28 constituting the d = 2 cluster; note that conventionally, the number of spins is restricted within N(= L 2 ) = 9, 16, 25, . . . . Such an arbitrariness allows us to make a systematic finite-size scaling analysis. Second, the diagonalization method is free from the slowing-down problem; this problem becomes severe for such a frustrated magnetism, deteriorating the efficiency of the Monte Carlo sampling. Last, the constraint L z ⊥ /L → 0 [Eq. (1)] is always satisfied, because the system size along the imaginary-time direction is infinite L → ∞; note that the system size along the imaginary time corresponds to the inverse temperature L = 1/T → ∞. In fairness, it has to be mentioned that our research owes its basic idea to the following pioneering studies. First, an equivalence between the (2 + 1)-dimensional fully frustrated magnetism and the (d, m) = (3, 2) Lifshitz point was argued field-theoretically in Refs. [21,22]. Second, in Ref. [23], the biaxial-next-nearest-neighbor Ising model in d = 3 was studied with the Monte Carlo method. It was reported that the Lifshitz (multicritical) point collapses at zero temperature. On the contrary, the gonihedric model has an extra tunable parameter κ. Setting κ ≥ 2, we attain desirable multicriticality as depicted in Fig. 1 (a). The rest of this paper is organized as follows. In Sec. II, we explain the (2+1)-dimensional gonihedric model. To elucidate the underlying physics, we make an overview of the classical gonihedric model in d = 3. In Sec. III, we present the simulation results. The simulation scheme is explained in the Appendix. In Sec. IV, we present a summary and discussions. A. Quantum gonihedric model in d = 2 As mentioned in the Introduction, we propose the (2+1)-dimensional gonihedric model as a realization of the (d, m) = (3, 2) Lifshitz point. To be specific, we consider the Hamiltonian with the coupling constants J 1 = κ, J 2 = −κ/2, and J 3 = (1 − κ)/2. Here, the operators {σ α i } denote the Pauli matrices placed at the square-lattice points i. The summations ij , ij , and [ijkl] run over all possible nearest-neighbor, next-nearest-neighbor (plaquette diagonal), and plaquette-four-body spins, respectively. The transverse magnetic field Γ controls the amount of quantum fluctuations. At a certain Γ c , a ground-state phase transition may occur. As mentioned above, the gonihedric model has finely tuned coupling constants {J i }, which cancel out the domain-wall surface tension. Actually, the domain-wall energy of the gonihedric model (apart from the off-diagonal term −Γ i σ x i ) admits a geometric representation E = n 2 + 4κn 4 [14]. Here, n 2 denotes the number of points where two domain walls meet at a right angle (domain-wall undulation), and n 4 is the number of points where four domain walls meet at a right angle (self-intersection point). That is, the parameter κ controls the self-avoidance of the domain walls with the bending elasticity unchanged. (Notably enough, the interfacial energy lacks the surface-tension term. Accordingly, the domain-wall undulations are promoted, giving rise to a peculiar type of criticality.) The gonihedric model has a tunable parameter κ with the zero surface tension maintained. 
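The Hamiltonian referred to as Eq. (2) above did not survive the text extraction. As a hedged reconstruction, a form consistent with the interaction terms and coupling constants quoted in the text (nearest-neighbor, plaquette-diagonal, and plaquette four-body couplings plus the transverse field) would be

```latex
% Hedged reconstruction of Eq. (2); the sign convention follows the standard
% gonihedric form and should be checked against the original manuscript.
\begin{equation*}
  H \;=\; -J_{1}\sum_{\langle ij\rangle}\sigma^{z}_{i}\sigma^{z}_{j}
          \;-\; J_{2}\sum_{\langle\langle ij\rangle\rangle}\sigma^{z}_{i}\sigma^{z}_{j}
          \;-\; J_{3}\sum_{[ijkl]}\sigma^{z}_{i}\sigma^{z}_{j}\sigma^{z}_{k}\sigma^{z}_{l}
          \;-\; \Gamma\sum_{i}\sigma^{x}_{i},
  \qquad J_{1}=\kappa,\quad J_{2}=-\kappa/2,\quad J_{3}=(1-\kappa)/2 .
\end{equation*}
```

With this fine-tuned ratio of couplings, the domain-wall surface tension cancels for any value of κ, so κ remains a freely tunable self-avoidance parameter.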
This redundancy is an advantage over other frustrated magnetisms such as the biaxial-next-nearest-neighbor Ising model. We survey the regime κ ≥ 2, where we observed a clear indication of the Lifshitz-type criticality. In this paper, we extend the above-mentioned parameterization space. That is, introducing a new controllable parameter j, we investigate the parameter space Note that at j = 1, the parameter space, Eq. (3), reduces to the above-mentioned one (original gonihedric model). Owing to the extension, the magnetic domain wall now acquires a finite domain-wall surface tension ∝ 1 − j. In other worlds, in terms of this extended parameter space, we identify the Lifshitz point as a multicritical point; see the phase diagram in Fig. 1 (a). This viewpoint was proposed in Ref. [16], where the authors investigate the criticality of the classical d = 3 gonihedric model with the cluster-variation method. In the next section, we will overview the properties of the classical gonihedric model, which may be relevant to the present study. The (d, m) = (3, 3) criticality may be realized by the ternary mixture [24] of water, oil and surfactant [25,26,27,28]; actually, a crossover from the d = 3-Ising universality to an exotic one was reported in Refs. [29,30]. We present a schematic phase diagram of the (classical) d = 3 gonihedric model in Fig. 1 (b) [16,31]. The Hamiltonian of the classical d = 3 gonihedric model is given by (The Ising-spin variables {S i } are placed at the d = 3 lattice points.) We notice that the phase diagram resembles that of the quantum gonihedric model; the discrepancy as to j ↔ −j is merely due to the difference of parameterization, and the subspace −j = 1/4 corresponds to the fully-frustrated gonihedric model. A few remarks on the phase diagram follow: First, the Lifshitz point at −j = 1/4 is identified as an end-point of the critical branch (−j < 1/4) belonging to the d = 3-Ising universality. In fact, the multicritical (crossover) scaling theory applies successfully [16,31] to clarifying the nature of the Lifshitz point. (Direct numerical simulation at −j = 1/4 appears to be rather problematic [32].) We will accept this cross-over viewpoint as for the quantum gonihedric model. Second, in Refs. [17,18], it was reported that for small κ < 0.5, the multicritical point becomes a discontinuous one, accompanied with pronounced hysteresis. In particular, at κ = 0, the model reduces to the so-called p-spin model [33], which is notorious for its slow relaxation to the thermal equilibrium (metastability). We found that a similar difficulty arises in the quantum gonihedric model. Hence, we devote ourselves to the large-κ regime such as κ ≥ 2, where we observed a clear indication of the Lifshitz-type criticality. Last, the phase boundary separating the lamellar and ferromagnetic phases is (almost) vertical. This feature ensures that the multicritical point is located at −j = 1/4. The quantum gonihedric model possesses this property as shown in the next section. Actually, this is the most significant benefit of the parameterization scheme, Eq. (3). III. NUMERICAL RESULTS In this section, we present the numerical results. Our aim is to estimate the critical indices (ν ⊥ , ν ) and φ. As mentioned in the Introduction, we utilize Novotny's method to diagonalize the Hamiltonian (2) numerically. We explain the technical details in the Appendix. By means of this method, we simulated finite clusters with N ≤ 28 spins. 
The linear dimension of the cluster L is given by the formula because the N spins constitute a d = 2 cluster. A. Finite-size scaling of the critical branch: d = 3-Ising universality In this section, we survey the critical branch j < 1; see Fig. 1 (a). We show that the criticality belongs to the ordinary d = 3-Ising universality class. This finding provides a foundation for the subsequent analyses with the crossover-scaling theory. In Fig. 2, we plot the Roomany-Wyld approximate beta function [34] with the excitation energy gap ∆E N (Γ) for the system size N. Here, we fixed the selfavoidance parameter κ = 2, and varied the frustration as j = −1. Basically, the critical branch depicted in Fig. 1 (a) follows from this analysis; afterward, we determine the critical point Γ c more precisely. The slope of the beta function at Γ = Γ c yields an estimate for the inverse of the correlation-length critical exponent, 1/ν. In Fig. 2 Actually, we consider this crossover behavior rather in detail in the following sections. [35]. The approximate critical point Γ c (L 1 , L 2 ) is determined by the zero point of the beta function. That is, it satisfies the equation From the least-squares fit to the data in Fig. 3, we obtained the critical point Γ c = 7.073(55) in the thermodynamic limit L → ∞. We make use of Γ c in the following scaling analyses. B. End-point singularity of the critical amplitude The above analysis indicates that the multicriticality at j = 1 is merely an end-point singularity of the ordinary d = 3-Ising critical branch. That is, the crossover-scaling theory should apply to clarifying the nature of the multicritical point. In this section, we consider the singularity of the critical amplitude of ∆E beside the multicritical point. The amplitude G ± is defined by the relation The amplitude exhibits the singularity with the crossover exponent φ. (As mentioned in the Introduction, the exponent ν denotes the critical index along the imaginary-time direction.) The variable ∆ stands for the distance from the multicritical point Here, we postulated that the multicritical point locates at j = 1, and we justify this claim in Sec. III D. The above formula is a straightforward consequence of the crossover-scaling Actually, this relation provides a definition of the crossover exponent φ. To begin with, we determine the critical amplitude G + . In Fig. 4, we plot the scaled Similarly, we determined G + for various values of j and κ = 2, 4, and 6. In Fig. 5, we plotted the amplitude G + for ∆(= 1 − j) with the logarithmic scale. [In the cases of κ = 2, 4, and 6, we read off G + from the scaling plot at the scaling regime (Γ − Γ c )L 1/ν = 15, 40, and 60, respectively. In the case of κ = 2, we omitted the data of N = 16 for its rather insystematic behavior particularly for small ∆.] In the plot, we also presented a slope (dotted line) of G + ∝ ∆ 0.6 . We observe a signature of the power-law singularity with the exponent (ν − ν)/φ ≈ 0.6 as ∆ → 0. Hence, we confirm that the crossover behavior (9) is realized in the vicinity of the multicritical point. In fact, from ν = 0.63020 (12) [35] and the present results, Eqs. (19) and (16), obtained in Sec. III C, we arrive at the slope fairly consistent with the above observation. With use of G + calculated in this section, we crosscheck the validity of the critical indices obtained in the following section. C. Finite-size-scaling analysis of (ν ⊥ , ν ) and φ In this section, we make an analysis of each critical exponent with use of the crossover scaling, Eq. (11). 
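The displayed equations of this section (the definition of the amplitude G±, the distance ∆ from the multicritical point, and the crossover-scaling forms) are likewise missing from the extraction. Assuming the standard multicritical (crossover) scaling ansatz, a schematic reconstruction consistent with the exponents quoted in the text reads

```latex
% Hedged reconstruction of the crossover-scaling relations (cf. Eqs. (8)-(11) of the text);
% the exact forms in the original manuscript should be consulted.
\begin{align*}
  \Delta E &\simeq G_{\pm}\,|\Gamma-\Gamma_{c}|^{\nu},
      &\Delta &\equiv 1-j,\\
  \Delta E(\Gamma,\Delta) &\approx L^{-z}\,
      g\!\bigl((\Gamma-\Gamma_{c})L^{1/\nu_{\perp}},\,\Delta L^{\phi/\nu_{\perp}}\bigr),
      &z &= \nu_{\parallel}/\nu_{\perp},\\
  U &\approx f\!\bigl((\Gamma-\Gamma_{c})L^{1/\nu_{\perp}},\,\Delta L^{\phi/\nu_{\perp}}\bigr),
      &G_{+} &\sim \Delta^{(\nu_{\parallel}-\nu)/\phi}\ \ (\Delta\to 0).
\end{align*}
```

Here ν ≈ 0.6302 denotes the d = 3-Ising correlation-length exponent governing the critical branch; inserting the multicritical estimates ν∥ ≈ 1.04 and φ ≈ 0.7 reproduces the slope (ν∥ − ν)/φ ≈ 0.6 of the dotted line G+ ∝ ∆^0.6 in Fig. 5. With these forms in hand, the individual exponents can be extracted in turn.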
First, we consider the Binder parameter with the magnetization M = N i=1 σ z i . [Note that the simulation was not done right at the Lifshitz point; we calculated the data in the vicinity of the Lifshitz point (crossover scaling). Hence, the ferromagnetic order parameter M is still of use in the data analysis.] The symbol . . . denotes the expectation value at the ground state. According to the crossover-scaling theory, the Binder parameter obeys the formula (Here, we made use of the fact that the Binder parameter is scale-invariant at the critical point.) As noted in the Introduction, the index with the subscript ⊥ denotes the critical exponent within the real space. In Fig. 6, we present the crossover-scaling plot, (Γ−Γ c )L 1/ν ⊥ -U, with κ = 2 and ∆L φ/ν ⊥ = 8. Here, we set the scaling parameters ν ⊥ = 0.45 and φ = 0.7, where we found the best data collapse. Surveying κ = 4 and 6 as well, we arrive at the estimates ν ⊥ = 0.45(10), and φ = 0.7(2). Second, we consider the energy gap ∆E. The energy gap obeys the crossover-scaling with the dynamical critical exponent z. In Fig. 7, we present the crossover-scaling plot, Through z = ν /ν ⊥ , the above results lead to ν = 1.04 (27). (19) Let us address a remark. As mentioned in the above section, the indices, Eqs. (16) and (19), are consistent with the end-point singularity of G + , indicating the self-consistency of the present analyses. D. Phase transition between the ferromagnetic and lamellar phases The above analysis stems from the proposition that the multicritical point locates at j = 1; in other words, the phase boundary separating the ferromagnetic and lamellar phases is (almost) vertical. In this section, we justify this proposition. (Actually, this feature was confirmed in the case of the classical d = 3 gonihedric model [16].) In Fig. 8, we plot the ground-state energy per unit cell E 0 /N with the system sizes N = 8, 12, . . . , 28 for κ = 2 and Γ = 0.6; namely, we surveyed the regime slightly below the multicritical point. We observe a distinct signature of the first-order phase transition around j ≈ 1, where the slope of E 0 /N changes rather abruptly (level crossing). The transition point seems to converge into the regime 0.9 ≤ j c ≤ 1 as N → ∞. Noticeably enough, the transition point is close to j = 1. We argue this behavior more in detail: First, the data E 0 /N in j < j c (ferromagnetic phase) appear to reach the thermodynamic limit, whereas in j > j c (lamellar phase), the plots are still scattered insystematically. Possibly, the incommensurability of the lamellartype structure (periodicity of the domain walls) causes such an irregularity. Surveying the cases of κ = 2, 4, and 6, we found that the data of N = 8, 16, and 24 are rather robust against this incommensurability effect. Hence, we conclude that the transition point locates within 0.9 ≤ j c ≤ 1. Last, we found that such a slight deviation of j c from j = 1 is negligible in the sense that the influence is less than the error margins. In other worlds, the parameterization, Eq. (3), is sensible to explore the multicriticality in terms of the cross-over scaling; this point was noted in the case of the classical gonihedric model [16]. IV. SUMMARY AND DISCUSSIONS We investigated the criticality of the (2 + 1)-dimensional gonihedric model, Eq. (2), with the extended parameter space, Eq. (3). This extended parameter space allows us to survey the criticality in terms of the crossover-scaling theory; see Fig. 1 (a). We employed Novotny's method to diagonalize the Hamiltonian. 
With use of this method, we treated an arbitrary They also provided the convergence-accelerated results with the [1/1] Padé method; (ν ⊥ , ν ) = (0.482, 1.230) and φ = 0.688. Our simulation data support their claim. Lastly, let us make a few comments on the advantages of the diagonalization approach. First, the numerical diagonalization is free from the slowing-problem problem, which deteriorates the efficiency of the Monte Carlo sampling for the frustrated magnetism. Second, we do not have to worry about the constraint (1). The constraint is always satisfied, because the system size along the imaginary-time direction is infinite. However, the diagonalization method suffers from the severe limitation as to the available system sizes. In this paper, we surmount this difficulty with the aide of Novotny's method, which allows us to treat a variety of system sizes N = 8, 12, . . . , 28 sufficient to manage systematic finite-size scaling. MENTS: QUANTUM NOVOTNY'S METHOD In this Appendix, we explain the simulation scheme. As mentioned in the Introduction, we applied the Novotny method [19] to diagonalizing the Hamiltonian (2). Novotny's method allows us to construct the Hamiltonian-matrix elements systematically for the cluster with an arbitrary (integral) number of spins N = 8, 12, . . . , 28; note that conventionally, the number of spins is restricted within N(= L 2 ) = 9, 16, 25, . . . . Originally, Novotny's method was formulated for the classical Ising model (transfer-matrix formalism) [19]. In Ref. [20], it was extended to adopt the quantum-mechanical interaction (Hamiltonian formalism). Here, we follow the notation of Ref. [20], and make a slight extension to incorporate the plaquette-type interactions; see Eq. (A5). Before we commence a detailed discussion, we explain the basic idea of Novotny's method. In Fig. 9, we present a schematic drawing of a finite-size cluster for the d = 2 gonihedric holds. We decompose the Hamiltonian into two components The First, we consider the diagonal component H (D) . We propose the following formula [20] Here, the component H(v) is a diagonal matrix, which describes the vth-neighbor interaction among the N-spin alignment. The diagonal elements are given by Here, the matrix T denotes the plaquette-type interaction between the arrays {σ i } and {τ i }; The operator P denotes the translational operator, which satisfies P |{σ i } = |{σ i+1 } ; here, we imposed the periodic-boundary condition. Note that the operator insertion of P v in Eq. Lastly, we consider the off-diagonal component H (O) . The matrix element is given by The expression is quite standard, because the component H (O) simply concerns the individual spins, and has nothing to do with the connectivity among them. The above formulas complete our basis to simulate the Hamiltonian (2) (classical) gonihedric model, Eq. (4), with κ = 1 [16,31]; here, the parameter T denotes the temperature. The phase diagram is essentially the same as that of the quantum-mechanical model; the discrepancy j ↔ −j is due to the difference of parameterization.
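Novotny's construction itself is more involved (the matrix elements are generated through translation operators so that an arbitrary number of spins N can be treated), but the generic workflow of the diagonalization approach, assembling a sparse Hamiltonian and extracting the lowest levels with a Lanczos-type solver, can be sketched as follows. This is a minimal transverse-field Ising chain illustration, not the gonihedric Hamiltonian and not the code used in the paper.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

# Minimal illustration (NOT Novotny's construction): build the Hamiltonian of a small
# transverse-field Ising chain, H = -J sum sz_i sz_{i+1} - Gamma sum sx_i, as a sparse
# matrix and extract the ground-state gap by Lanczos diagonalization.
sx = csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))
sz = csr_matrix(np.array([[1.0, 0.0], [0.0, -1.0]]))

def site_operator(op, site, n_sites):
    """Embed a single-site operator at position `site` in an n-site chain."""
    ops = [identity(2, format="csr")] * n_sites
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = kron(out, o, format="csr")
    return out

def tfim_hamiltonian(n_sites, j_coupling, gamma):
    dim = 2 ** n_sites
    h = csr_matrix((dim, dim))
    for i in range(n_sites):
        nxt = (i + 1) % n_sites  # periodic boundary condition
        h = h - j_coupling * site_operator(sz, i, n_sites) @ site_operator(sz, nxt, n_sites)
        h = h - gamma * site_operator(sx, i, n_sites)
    return h

h = tfim_hamiltonian(n_sites=10, j_coupling=1.0, gamma=1.0)
energies = eigsh(h, k=2, which="SA", return_eigenvectors=False)
gap = abs(energies[1] - energies[0])
print(f"two lowest energies: {sorted(energies)}, gap = {gap:.4f}")
```

The energy gap obtained in this way is the quantity that enters the finite-size and crossover-scaling analyses discussed above.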
Modeling COVID-19 with Human Pluripotent Stem Cell-Derived Cells Reveals Synergistic Effects of Anti-inflammatory Macrophages with ACE2 Inhibition Against SARS-CoV-2 Dysfunctional immune responses contribute critically to the progression of Coronavirus Disease-2019 (COVID-19) from mild to severe stages including fatality, with pro-inflammatory macrophages as one of the main mediators of lung hyper-inflammation. Therefore, there is an urgent need to better understand the interactions among SARS-CoV-2 permissive cells, macrophage, and the SARS-CoV-2 virus, thereby offering important insights into new therapeutic strategies. Here, we used directed differentiation of human pluripotent stem cells (hPSCs) to establish a lung and macrophage co-culture system and model the host-pathogen interaction and immune response caused by SARS-CoV-2 infection. Among the hPSC-derived lung cells, alveolar type II and ciliated cells are the major cell populations expressing the viral receptor ACE2 and co-effector TMPRSS2, and both were highly permissive to viral infection. We found that alternatively polarized macrophages (M2) and classically polarized macrophages (M1) had similar inhibitory effects on SARS-CoV-2 infection. However, only M1 macrophages significantly up-regulated inflammatory factors including IL-6 and IL-18, inhibiting growth and enhancing apoptosis of lung cells. Inhibiting viral entry into target cells using an ACE2 blocking antibody enhanced the activity of M2 macrophages, resulting in nearly complete clearance of virus and protection of lung cells. These results suggest a potential therapeutic strategy, in that by blocking viral entrance to target cells while boosting anti-inflammatory action of macrophages at an early stage of infection, M2 macrophages can eliminate SARS-CoV-2, while sparing lung cells and suppressing the dysfunctional hyper-inflammatory response mediated by M1 macrophages. Introduction The infection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has already caused more than 5.4 million Coronavirus Disease-2019 (COVID-19) cases internationally (https://google.org/crisisresponse/covid19-map). Most COVID-19 patients show mild to moderate symptoms of fever, dry cough, fatigue and diarrhea, however, approximately 15% of con rmed cases progress to severe pneumonia, acute respiratory distress syndrome (ARDS) or multi-organ failure (Guan et al., 2020). The progression from mild to severe disease or death is principally attributed to dysfunctional immune responses (Mehta et al., 2020;Wang et al., 2020) together with viral damage of target cells. Given the lack of an effective vaccine or medication, a thorough understanding of immunological features caused by SARS-CoV-2 is critically important for studying viral pathobiology and therapeutic development. Alveolar macrophages (AMs) are key sentinel cells for host defense in the respiratory system, producing cytokines and chemokines that are crucial components of innate immunity and mediators of immunopathology (Allard et al., 2018). The polarization of macrophages confers a heterogeneous function and plasticity depending on the duration of stimulation and microenvironment, which are discrete phenotypes associated with different in ammatory responses, typically termed the M1φ /pro-in ammatory and M2φ /anti-in ammatory macrophages (Gomez Perdiguero et al., 2015;Wynn et al., 2013). 
The distinction is known to be oversimpli ed, with macrophage dynamic activities spread along the M1-M2 phenotypic spectrum (Bian, 2020;Shapouri-Moghaddam et al., 2018). However, in general, M1φ destroys pathogens by producing a large number of pro-in ammatory cytokines such as IL-1β, TNFα, IL-6 and IL18. In contrast, M2φ exhibits higher activity in phagocytosis against pathogens and for anti-in ammation (Mills, 2015;Murray, 2017). Recent studies (Liao et al., 2020;Xu et al., 2020) on immunity of COVID-19 patients indicate that the cells damaged by SARS-CoV-2 infection induced innate in ammation in the lungs that is largely mediated by pro-in ammatory macrophages and granulocytes. In addition to local damage, the pro-in ammatory macrophages release cytokines/chemoattractants and prime adaptive immune cell responses, which in some cases lead to dysfunctional immune responses and cytokine storm, followed by respiratory and even multi-organ failure . These studies imply a crucial role for macrophages in the progress of SARS-CoV-2 infection; a deeper understanding of the interactions among targeted cells, macrophages and SARS-CoV-2 could offer new ideas to help combat this deadly contagious disease. The current most widely used model for SARS-CoV-2 research is the African green monkey derived Vero cells, which are very limited for modeling human disease. Although primary macrophages are more functionally or phenotypically representative of native macrophages in the tissue from which they are derived, they are di cult to obtain, proliferate slowly, and are often poorly characterized (Jobe et al., 2017). In this study, we generated lung cells and macrophages paired from the same cell origin, human pluripotent stem cell (hPSC) lines. This strategy overcomes a common concern about histocompatibility when studying human immune cells with other cell types, and provides theoretically unlimited cell resources for reliably modeling and studying immunology of macrophages and human lungs during SARS-CoV-2 infection. Our results using this platform demonstrate a potential therapeutic strategy through a combination of boosting anti-in ammatory macrophages and intervention of viral entry, to control SARS-CoV-2 infection at the immune defense-based protective phase while circumventing the in ammation-driven damaging phase. Results Macrophage involved at the severe stage of COVID-19 To better understand how macrophages impact COVID-19 progression, we compared immune cells and in ammatory factors in lung tissues obtained from autopsies of COVID-19 patients or healthy donors. First, histological changes in lung tissues from COVID-19 patients were examined. Compared to healthy lung tissues, this revealed extensive necrotizing bronchiolitis with necrotic bronchial epithelial cells and severe alveolitis with atrophy and desquamation, displayed in the lumen of the patient's lung ( Figure 1A). Of note, pulmonary hemorrhagic infarct with abundant in ammatory in ltration (arrow heads) were extensively present through the whole alveoli and bronchial regions ( Figure 1A). Recently, it was reported that proin ammatory FCN + monocyte-derived macrophages were mainly present and FABP4 + alveolar macrophages were greatly reduced in the bronchoalveolar lavage uid from patients with severe COVID-19, whereas mild and moderate cases were characterized by the presence of highly clonally expanded CD8 + T cells (Liao et al., 2020). Therefore, we examined if macrophages were dominantly present in the diseased patient's lung. 
Immunostaining against pan macrophage marker CD68 showed abundant macrophages were extensively distributed through the whole lung tissue with aggregated phenotypes (Figure 1B), in agreement with the above-cited report. However, macrophages are multifaceted and distinct functions of macrophages highly depends on polarization, characterized generally as M1/pro-in ammatory or M2/anti-in ammatory macrophages. We thus further examined M1 macrophage marker CD80, and M2 macrophage marker CD163 ( Figures 1C-F). The results revealed that cells positive for either CD80 or CD163 were both aberrantly represented in the patient's lung tissue (Figures 1C-F). Indeed, CD68 + , CD80 + and CD163 + macrophage populations were signi cantly expanded in the patient's lung tissue, suggesting expansion of both M1 and M2 macrophage populations in severe disease. We also examined several cytokines that are mainly produced by macrophages and found key pro-in ammatory cytokine IL-6 was intensively expressed in the lumen of the patient's lung tissue ( Figure 1G). Taken together, the data supports a need to further examine the roles of M1 and M2 macrophages in COVID-19 progression. Co-culture of lung cells and macrophages derived from hPSCs To further investigate the interaction among macrophages, lung cells and SARS-CoV-2, we established a co-culture model using cells derived from the same hPSC line (RUES2 or H1), which provide a genetically de ned background for immune study. (Figures S3A-D). ACE2, the putative SARS-CoV-2 receptor, and TMPRSS2, the co-effector for viral entry (Hoffmann, 2020), were detected in AT2, AT1 and ciliated cells, in clusters 0, 2, 3 ( Figures S3E-F). The immunostaining results further validated that ACE2 is mainly co-expressed with SP-B or pro-SP-C in AT2 cells, and FOXJ1 in ciliated cells ( Figure S2B), consistent with results previously reported in primary human lung tissues (Ziegler, 2020). Next, the hPSC-induced lung cells (iLung) and macrophages (iMφ) were plated and cultured together in a 1:1 ratio (Figure 2A), similar to the ratio of lung cells and macrophages in distal bronchial or alveolar regions in human lung (Kyle J. Travaglini, 2020). The iLung was derived from the hPSC lines carrying a Doxycycline-inducible GFP reporter gene, which allowed the distinction of iLung and iMφ in live cultures ( Figure 2B). A signi cantly lower number of GFP + iLung were observed after four-day co-culture with iMφ of M1 phenotype (iM1φ), than seen in the co-culture with iMφ of M2 phenotype (iM2φ) or control 293T cells ( Figure 2C). The scRNA pro les further revealed decreased expression of proliferation-associated genes MKI67 and TOP2A and increased expression of apoptosisrelated genes TP53, CASP3, BAX, MCL1, in the iLung co-cultured with iM1φ, but not in co-cultures with iM2φ ( Figure S6D). These results were in alignment with the phenotype of pro-in ammatory activities of iM1φ, as scRNA-seq data detected a set of proin ammatory factors, IL1B, IL18, STAT1, FCN1, CXCL9, CXCL10, CXCL11, CXCL16, CCL2 highly expressed in iM1φ ( Figure 2F-G, S5B-C). In contrast, iM2φ mainly expressed anti-in ammatory factors or immunoregulatory genes such as TGM2, APOE, A2M, CCL13, CCL26 and TREM2 ( Figure 2F-G, S5B-C). 
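The marker-gene contrasts summarized above (pro-inflammatory factors such as IL1B and IL18 enriched in iM1φ, and immunoregulatory genes such as TGM2, APOE and TREM2 in iM2φ) were obtained from the Seurat-based pipeline described in the Methods. A minimal, analogous sketch with scanpy is shown below; the input file name and the "macrophage_state" label are placeholders rather than objects from the original study.

```python
import scanpy as sc

# Hypothetical AnnData with a "macrophage_state" label (iM1 / iM2) from the co-cultures;
# the file name and the obs column are placeholders, not from the original study.
adata = sc.read_h5ad("cocultured_macrophages.h5ad")

sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Rank genes that distinguish iM1 from iM2 macrophages (Wilcoxon rank-sum test).
sc.tl.rank_genes_groups(adata, groupby="macrophage_state", groups=["iM1"],
                        reference="iM2", method="wilcoxon")

top = sc.get.rank_genes_groups_df(adata, group="iM1").head(20)
print(top[["names", "logfoldchanges", "pvals_adj"]])

# Inspect a few of the pro-/anti-inflammatory markers highlighted in the text.
sc.pl.dotplot(adata, var_names=["IL1B", "IL18", "CXCL10", "TGM2", "APOE", "TREM2"],
              groupby="macrophage_state")
```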
Gene Ontology (GO) enrichment analysis comparing iM1φ and iM2φ revealed overactivation of differential signaling pathways such as pro-in ammatory IFNγ, type I IFN, and neutrophil activation in iM1φ; antiin ammatory and tissue damage-repair process of RNA catabolic process, protein co-localization to endoplasmic reticulum in iM2φ ( Figures S6B, C). Similar phenotypes were observed in the iLung co-cultured with THP-1, an established monocyte line, upon activation of M1 or M2 phenotype ( Figure 2C). The results indicate that activation of M1-macrophage was su cient to create a toxic environment for the iLung even in the absence of viral infection. Immune response of macrophages following SARS-CoV-2 infection To model the immune response of macrophages to SARS-CoV-2 infection on lung cells, virus was introduced to the co-culture system ( Figure 3A). As a rst step to measure effects of macrophages on viral entry into lung cells, we used a SARS-CoV-2 pseudoentry virus, in which the backbone of a VSV-G pseudo-typed ΔG-luciferase virus carries the SARS-CoV-2 spike protein incorporated in the surface of the viral particle (Nie et al., 2020;Whitt, 2010). High luciferase activity was readily detected in iLung 24 hours after viral infection at MOI = 0.01, but not in iMφ or 293T in the co-culture (293T cells were used as a co-culture control, based on our preliminary data and previous report that the permissiveness of 293T to SARS virus is low (Wenhui Li, 2003)) ( Figure 3B), and immunostaining con rmed that the viral luciferase protein was co-localized with ACE2 + cells in the iLung cultures ( Figure S7B). Since the luciferase gene was expressed after the virus entered host cells, the luciferase activity correlated to the amount of viral entry host cells. Luciferase activity was markedly decreased in the co-cultures of iLung with all three lines of macrophages, iMφ, THP-1 and U937; no signi cant difference was found between hPSC-derived iM1φ or iM2φ, indicating they have the similar inhibitory effects on viral infection ( Figure 3B, Figure S7A). The results were further validated by immunostaining study that substantial decrease of luciferase protein was detected in iLung cells co-cultured with iMφ, compared to those co-cultured with 293T ( Figure S7A). The potential of iMφ to inhibit viral replication and spreading was next studied by infection with a patient-derived SARS-CoV-2 virus in the co-cultures. After 24 hours incubation with the SARS-CoV-2 virus (USA-WA1/2020, MOI = 0.01), a signi cant decrease of viral protein was observed in the co-culture of iLung and iMφ, compared to the co-culture of iLung and 293T. Strikingly, most SARS-CoV-2 virus SARS-N protein was detected in the M2-iMφ when co-cultured with iLung, while in contrast, substantial levels of SARS-N protein was detected in iLung cells in the co-cultures using M1-iMφ or 293T ( Figure 3D). The ndings suggest that phagocytosis activity of M2-iMφ functioned as protection for iLung from viral infection. Several approaches were taken to thoroughly examine the immune response following iMφ on SARS-CoV-2 infection. First, a cohort of cytokines and in ammatory factors that are known to be important for innate or adaptive immune responses were pro led, in the culture medium 24 hours after infection with the SARS-CoV-2 pseudo virus. Increased levels of IFNγ, IL-6, and IL-18 were found in the co-cultures of iLung with M1-iMφ, while these were decreased in the co-cultures of iLung with M2-iMφ ( Figure 3C). 
To further characterize at the transcriptomic level the response of iLung and iMφ following viral infection, scRNA-seq was performed on the cocultures with SARS-CoV-2 pseudo virus infection and the analysis revealed that a set of anti-in ammatory factors and anti-viral activity related genes, such as CCL26, CCL13, ISG15, IFITM2 and IFITM3, were clearly upregulated when cultures contained M2-iMφ ( Figure 4A and C, Figures S8A). In contrast, pro-in ammatory factors, such as IL-6, S100A8/A9, LYZ and TLR4 were highly expressed when the cultures contained M1-iMφ. ( Figure 4A, C, Figures S8A). Gene enrichment analysis comparing iM1φ and iM2φ revealed over-activation of differential signaling pathways such as neutrophil degranulation and antigen processing and presentation, regulation of T cell mediated cytotoxicity in iM1φ; granulocyte chemotaxis, response to interferon−gamma as well as phagocytosis in iM2φ ( Figures 4 E and F). Moreover, IL10 signaling related genes such as IL10RA, IL10RB, STAT3, SOCS3, TIMP1 and IRS2 were enriched in iM2φ, suggestive of anti-in ammatory macrophages ( Figure S8D). The above results demonstrate a differential immune response of iM2φ versus iM1φ upon viral entry into host cells, as iM2φ increased phagocytosis activity and released antiin ammatory factors, while iM1φ increased antigen-presenting activity and released pro-in ammatory factors. Correlating with the above phenotypes, up-regulation of cell growth arrest or death-related genes, such as GAS6, BTG2, PDCD6, CCAR1, TP53I11, TP53INP1, and activation of programmed death signaling pathways as well as higher mitochondrial genes,MT−CYB,MT−CO1, MT-CO2, MT-ND1 ( Figure S8B and C), were detected in the co-cultures with iLung with iM1φ, but not with iM2φ ( Figure S8B). Previous studies by us (Yuling Han, 2020) and others (Conti et al., 2020) suggested that lung cells display selfimmune defense after SARS-CoV-2 infection, releasing proin ammatory factors, such as CXCL2, CCL2, CXCL3 and IL1A, as well as BCRC3, AADAC, and ATPB4. The GO and KEGG analysis in our current co-culture based data suggest that upregulation in pathway networks including leukocyte chemotaxis NF-κB signaling, IL-17 signaling, viral protein interaction with cytokine-cytokine receptor, and response to type I interferon, combined with the pro-in ammatory reaction of M1 macrophages, could lead to further pulmonary in ammation and damage ( Figure S8E). Moreover, the scRNA-seq pro ling data further validated the immunostaining results showing that few if any iLung cells in the co-culture with M2-iMφ displayed detectable viral gene expression, in contrast to a signi cantly higher number of iLung cells in the co-culture with M1-iMφ ( Figure 4B). Most infected AT2 cells and ciliated cells were also found in the co-culture with M1-iMφ, indicating a stronger protective effect on iLung cells by M2-iMφ ( Figure 4D). Altogether, these ndings suggest that activation of pro-in ammatory macrophages can aggravate lung cell damage, beyond the destruction by viral infection; in contrast, activation of anti-in ammatory macrophages provides a protective effect for lung cells from viral infection. Blockage of ACE2 enhances elimination of SARS-CoV-2 by macrophages Several studies (Tay et al., 2020) on mild or recovered COVID-19 cases indicated that in a healthy immune response, neutralizing antibodies produced in these individuals can block viral infection, followed by alveolar macrophages recognizing the neutralized viruses and clearing them by phagocytosis. 
We sought to model this process using an ACE2 blocking antibody to inhibit virus entry to target cells, thus decreasing the viral loads ( Figure 5A), to test if this enhances phagocytosis activity of macrophages. As expected, incubation with ACE2 blocking antibody two hours prior to infection of SARS-CoV-2 pseudo virus, reduced markedly the luciferase activities in co-cultures of iLung with either M1 or M2-iMφ, although the decrease of luciferase signal was most pronounced in the cocultures with M2-iMφ ( Figure 5B, Figure S9A). Immunostaining results validated that luciferase protein dramatically decreased in the iLung cells co-cultured with iMφ, compared to those co-cultured with 293T ( Figure S9A). Similarly, immunostaining results from the experiments performed using SARS-CoV-2 virus further revealed that that most SARS-CoV-2-N protein was found in the M2-iMφ, but not in the iLung cells, while the N protein was clearly found in iLung cells in the coculture with M1-iMφ or 293T ( Figure 5C). These results demonstrated that an early intervention of viral infection by blocking ACE2 in target cells can increase the clearance of virus by macrophages, especially synergizing with the phagocytosis activity of M2macophages to further provide protection for target cells and reduce the damage by in ammatory factors produced by M1macrophages. Discussion The study of human host-immune systems with pathogens has depended historically on the use of animal models, largely due to limited cell resources derived from human tissues. Immune research on COVID-19 is limited by the types of models available for study. Recently, a transgenic mouse strain(McCray et al., 2007) has been made with human ACE2 expression regulated by human cytokeratin-18 promoter, but the ACE2 expression in human is more complex than that in the mice. Another model is ferret (Kim et al., 2020), which can be infected with SARS-CoV-2, but does not develop hyper-in ammation in the lung. Recent advances in stem cell biology, especially the technology to differentiate human pluripotent stem cells (hPSCs) into functional immune cell types, provide a rigorous human system for studying immunology and therapeutics. In this report, we describe a new cell co-culture system in which the immune cells, speci cally monocytes/macrophages, and lung lineage cells are produced by directed differentiation of hPSCs. Several key features make the human cell model an ideal system for studying immunology of SARS-CoV-2. The model contains the host cells and immune cells from the same hPSC lines, avoiding concern of histocompatibility, while it can provide abundant numbers of cells with a genetically de ned background for robust mechanistic or therapeutic studies. The innate immune response mainly mediated by macrophages or granulocytes, responding to tissue damage caused by SARS-CoV-2 infection, likely contributes to acute respiratory distress syndrome (ARDS) that is characterized by the rapid onset of widespread in ammation in lung and subsequent respiratory failure . Our study in COVID-19 patient samples validated a correlation of macrophages and the disease, showing a heavy in ltration of pro-in ammatory macrophages in tissue samples from distal lung regions with high levels of in ammatory cytokine IL-6 in the severe cases. The macrophage and lung cell co-culture model combined with single cell transcriptomics was then applied to interrogate the differential immune responses of proor anti-in ammatory macrophages following SARS-CoV-2 infection. 
We discovered that pro-and anti-in ammatory macrophages both have similar capacity to eliminate the virus in the context of a moderate viral load. However, the immune reaction of proin ammatory macrophages following SARS-CoV-2 pseudo-virus infection led to more damage on lung cells and secretion of a set of in ammatory factors including IL6, IL18 and CXCL10 that are known to be mediators in dysfunctional immune responses and cytokine release syndrome (CRS). In contrast, anti-in ammatory macrophages protected lung cells from viral infection, and diminished pulmonary in ammation by phagocytosis and production of anti-in ammatory factors related in IL10 signaling. Finally, inhibiting viral entry in target cells using an ACE2 blocking antibody, diminished viral infection and enhanced the elimination of viruses. In particular, the intervention on viral entry can synergize with the phagocytosis and antiviral activity of macrophages, resulting in a more pronounced clearance of virus and protection of target cells. The para n-embedded lung tissues were acquired from the department of pathology in the 3 rd people's hospital of Shenzhen, China. They recently reported the pathological changes of lungs from a 66-year-old male died in critical COVID-19 infection(Weiren Luo). The patient developed respiratory failure and septic shock during the treatment and was done with transplant. Informed consent was obtained from the patient and family. The diagnosis of COVID-19 pneumonia was based on the "Coronavirus Pneumonia Prevention and Control Plan" (7th edition) newly issued by the National Health Commission, China (Commission, 2020). Nasopharyngeal swabs were collected and COVID-19 was detected by real-time polymerase chain reaction. Infection was de ned as at least two positive test results. Surgical informed consent was obtained and the study was approved by IRB in the third People's hospital of Shenzhen. . For macrophage differentiation, at day -2, hESCs were digested into single-cell suspension by 1 mg/ml Accutase (Stemcell Technologies) and plated onto Matrigel-coated culture dishes at a density of 2× 10 4 cells/cm 2 in mTeSR1 medium with 5uM Y27632 (MedchemExpress). After 24 h, Y27632 was withdrawn from the medium and cells were cultured for another 24 h. At day 0, cells were rstly induced by macrophage differentiation basal medium (SFD-M) which is RPMI 1640 medium supplemented with 2% B27 (Thermo Fisher Scienti c), 1% L-GlutaMAX-I and 50 μg/ml ascorbic acid (Sigma Aldrich) and 10 ng/ml BMP4 (R&D Systems) for 24 h. Afterward, the medium was changed to SFD-M medium containing 10 ng/ml BMP4 and 2 μM GSK3 inhibitor CHIR99021 (Cayman Chemical) for another 48 h. At day 3, cells were replated onto Matrigel-coated dishes at a density of 4 × 10 4 cells/ cm 2 in SFD-M medium with 50 ng/ml VEGF (R&D Systems) and 10 ng ng/ml FGF2 (R&D Systems) for 48 h. At day 5, the medium was replaced with basal medium with 50 ng/ml VEGF, 10 ng ng/ml FGF2 and 10uM TGFβ inhibitor SB431542 (R&D Systems) for another 72 h. At day 8-10, oating cells were collected and medium was changed and supplemented with 50ng/ml M-CSF and 10ng/ml IL3 (R&D Systems) for another 3-5 days. From day 11-13 onward, the medium was changed to SFD-M medium with 50 ng/ml M-CSF for 3 days. All differentiation steps were cultured under normoxic conditions at 37 ℃, 5% CO 2 . The protocol details are summarized in Figure S4A. 
All embryonic stem cell studies were approved by the Institutional Review Board (IRB) at the University of Chicago, or by the Tri-Institutional ESCRO committee (Weill Cornell Medicine, Memorial Sloan Kettering Cancer Center, and Rockefeller University). hPSC monocyte polarization hPSC-derived CD14 + cells were plated on tissue culture plates at a density of 2x10 4 cells/cm 2 in SFD-M medium supplemented with 50 ng/mL M-CSF. After 2 days of culture, monocytes differentiated into M0 macrophages and polarized to M1 or M2 macrophages. For macrophages polarization, 100ng/mL LPS (Sigma-Aldrich) and 10ng/mL IFNγ (R&D Systems) were added for M1 induction, or 20 ng/m IL-4 (R&D Systems) was added for M2 induction in SFD-M medium supplemented with 50 ng/mL M-CSF, respectively. These cells were cultured for another three days before examination for expression of the M1 or M2 makers. Giemsa Staining Differentiating day 11-13 monocytes/macrophages were xed on slides using Cytospin, followed by staining using Wright-Giemsa Stain (Sigma-Aldrich) according to the manufacturer's instructions. Immunohistochemical staining Histological study of lung tissues was performed on para n-embedded sections as previously described (Li et al., 2014). For immunohistochemical staining, para n-embedded sections were depara nized and incubated with primary antibodies at 4°C overnight and secondary antibodies at room temperature for 1h. Primary antibodies and secondary antibodies are described in the supplementary Table. Nuclei were counterstained by Hoechst 33342 (Sigma). positive cells in lungs were randomly counted from For FACS analysis, cells were resuspended in a FACS buffer (PBS with 0.1 % BSA and 2.5 mM EDTA). The cell suspension was then stained with PE-conjugated CD43 (Biolegend, clone MEM-59), APC-conjugated CD34 (BD, clone 581) to detect hematopoietic stem/progenitor cells (HSPC). PE-conjugated CD68 (Biolegend, clone Y1/82A), APC-conjugated CD11b (Biolegend, clone ICRF44), FITC-conjugated CD14 (Biolegend, clone HCD14) were used to detect monocyte/macrophages. Basically, cells were incubated with antibodies for 30 minutes at 4°C, followed with washed and suspended in 0.1% BSA/PBS buffer. PE and APC lters were then used to detect cells double positive for CD43 and CD34 or CD68 and CD11b by signal intensity gating, FITC and APC were used to detect cells double positive for CD14 and CD11b. Negative controls stained with control IgG instead of primary antibodies were always performed with sample measurements. Flowcytometry machine of BD FACSAria II and software of Flowjo were mainly used to collect and analyze the owcytometry data. SARS-CoV-2-Pseudo-Entry Viruses Recombinant Indiana VSV (rVSV) expressing SARS-CoV-2 spikes was generated as previously described (Nie et al., 2020;Whitt, 2010;Zhao et al., 2017). HEK293T cells were grown to 80% con uency before transfection with pCMV3-SARS-CoV2-spike (kindly provided by Dr. Peihui Wang, Shandong University, China) using FuGENE 6 (Promega). Cells were cultured overnight at 37°C with 5% CO 2 . The next day, the media was removed and VSV-G pseudotyped ΔG-luciferase (G*ΔG-luciferase, Kerafast) was used to infect the cells in DMEM at an MOI of 3 for 1 hr before washing the cells with 1X DPBS three times. DMEM supplemented with 2% FBS and 100 I.U. /mL penicillin and 100 μg/mL streptomycin was added to the infected cells and they were cultured overnight as described above. 
The next day, the supernatant was harvested and clarified by centrifugation at 300 × g for 10 min before aliquoting and storing at −80°C. To assay pseudo-typed virus infection, cells were seeded in 96-well plates. Pseudo-typed virus was added at an MOI of 0.01. At 2 hpi, the infection medium was replaced with fresh medium. At 24 hpi, cells were harvested for luciferase assay or immunohistochemistry analysis. For liver and lung organoids, organoids were seeded in 24-well plates, pseudo-typed virus was added at an MOI of 0.01, and the plates were centrifuged at 1200 × g for 1 hour. At 24 hpi, organoids were fixed for immunohistochemistry or harvested for the luciferase assay following the Luciferase Assay System protocol (E1501, Promega). Single-cell sequencing of hPSC-derived lung cells. Single-cell capture, reverse transcription, cell lysis, and library preparation were performed using the Single Cell 3′ version 3 kit and chip according to the manufacturer's protocol (10x Genomics, USA). Single-cell suspensions were generated by dissociating the cultured RUES2 cells with 0.05% Trypsin/0.02% EDTA for 10-15 min, followed by passing through a 40 µm strainer. The single-cell suspension was obtained by sorting the dissociated cells as flow cytometry singlets. The cell count was adjusted to 1000-2000 cells per µl to target an estimated capture of 8000 cells. Six input wells were used. Sequencing was performed on a NovaSeq 6000 with read lengths of 28 for read 1 and 91 for read 2. The sequencing data were primarily analyzed with the CellRanger pipeline v3.0.2 (10x Genomics, USA). In particular, raw fastq data were generated by CellRanger mkfastq; a custom reference genome was built by integrating the virus and luciferase sequences into the 10x pre-built human reference (GRCh38 v3.0.0) using CellRanger mkref. Alignment of the raw reads to the custom reference genome, removal of duplicated transcripts using the unique molecular identifiers (UMIs) and assignment to single cells were performed using CellRanger count. Briefly, we used the Seurat 3.1.4 R package for data analysis and visualization (Butler et al., 2018). Cells in the Seurat object were required to have at least 200 and at most 6,000 unique molecular identifiers (UMIs), and genes detected (UMI count > 0) in fewer than two cells were removed. In addition, cells were excluded if more than 10% of sequences mapped to mitochondrial genes. In total, 5,080 cells from the sample passed these quality filters. Following the package suggestions, we used a linear model to mitigate the variation stemming from the number of detected unique molecules per cell. The differentially expressed genes were found by the "vst" method and the top 3,000 differentially expressed genes were selected for PCA analysis. We used an elbow plot to determine the number of PCs. Twenty PCs were used for each group of cells. The clustering resolution was set at 0.2. For the co-culture analysis, macrophages and lung cells were re-clustered and re-analyzed, respectively. Macrophages were integrated using the first 20 dimensions of the PCs and the clustering resolution was set at 0. DATA AVAILABILITY scRNA-seq data are available from the GEO repository database with accession number GSE150708 (hPSC-derived lung cells, co-culture of macrophages and lung cells derived from hPSC, co-culture of macrophages and lung cells derived from hPSC in SARS-CoV-2 infection). Competing Financial Interests The authors have no financial conflicts of interest.
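The quality-control and clustering thresholds described above were applied in Seurat (R). Purely as an illustration of those same thresholds, a minimal sketch in Python with scanpy might look as follows; the input path is a placeholder, the Leiden clustering used here is not identical to Seurat's procedure, and the normalisation steps are not an exact reproduction of the original analysis.

import scanpy as sc

# Placeholder path to CellRanger count output; the original analysis loaded this into Seurat (R)
adata = sc.read_10x_mtx("cellranger_count/outs/filtered_feature_bc_matrix")

# Cell and gene filters matching the reported thresholds
sc.pp.filter_cells(adata, min_counts=200)     # at least 200 UMIs per cell
sc.pp.filter_cells(adata, max_counts=6000)    # at most 6,000 UMIs per cell
sc.pp.filter_genes(adata, min_cells=2)        # drop genes detected in fewer than two cells
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None, log1p=False, inplace=True)
adata = adata[adata.obs["pct_counts_mt"] < 10].copy()   # exclude cells with >10% mitochondrial reads

# Variable-gene selection, PCA and clustering roughly mirroring the reported settings
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=3000)
sc.tl.pca(adata, n_comps=20)
sc.pp.neighbors(adata, n_pcs=20)
sc.tl.leiden(adata, resolution=0.2)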
Figure 1 Macrophages were highly involved at the severe stage of COVID-19. (A) H&E (hematoxylin and eosin) staining of the bronchial or alveolar region in a healthy or severe COVID-19 case. Pulmonary hemorrhagic infarct (denoted by arrowheads). (B) Immunohistochemistry (IHC) using an antibody against CD68 revealed macrophages with an aggregated phenotype and enlarged nuclei in COVID-19 lung, compared to those in healthy lung. (C) Immunofluorescence (IF) staining of healthy or COVID-19 distal lung tissues using antibodies against CD68 (pan-macrophage marker) and CD80 (M1 macrophage marker). (D) Quantification of CD68+ or CD80+ macrophages in healthy or COVID-19 distal lung tissues. (E) IF staining of healthy or COVID-19 distal lung tissues using antibodies against CD68 and CD163 (M2 macrophage marker). (F) Quantification of CD68+ or CD163+ macrophages in healthy or COVID-19 distal lung tissues. (G) IF staining of healthy or COVID-19 distal lung tissues using antibodies against CD68 and IL-6. Scale bar = 100 µm in all images in Figure 1. Data are presented as mean ± STDEV. P values were calculated by unpaired two-tailed Student's t test. **P < 0.01, ***P < 0.001, and ****P < 0.0001. The effects of macrophages in combination with ACE2 blockage on SARS-CoV-2 infection. (A) Schematic of the experimental flowchart for the co-cultures. (B) The ACE2-blocking antibody was applied two hours prior to the addition of the virus, and the luciferase activity of the co-cultures of lung cells and M1 or M2 macrophages (iMφ or THP-1) or 293T cells (control) was measured in mock or SARS-CoV-2 pseudo-entry virus-infected conditions at 24 hpi (MOI = 0.01). P values were calculated by unpaired two-tailed Student's t test. ***P < 0.001, ****P < 0.0001. (C) The ACE2-blocking antibody was applied two hours prior to the addition of the virus, and IF staining was performed on the co-cultures of iLung cells and iM1φ, iM2φ, or 293T cells, in mock or SARS-CoV-2 virus-infected conditions at 24 hpi (MOI = 0.01), using antibodies detecting SARS-CoV-2 NSP14 protein, CD80 or CD206. iLung cells expressed GFP. Scale bar = 100 µm. Supplementary Files: a list of supplementary files is associated with this preprint.
2020-08-26T05:10:20.550Z
2020-08-20T00:00:00.000
{ "year": 2020, "sha1": "e1c8ebe41c95aefbf9a2c8bbe2df4903075f5e23", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-62758/latest.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "358c9940b52690d843763d92eebceafa7ab7687c", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
51730462
pes2o/s2orc
v3-fos-license
Complexity Study of a Single Particle Under q-Deformed Potentials We have studied the variation of the position-space statistical complexity measure defined by López-Ruiz, Mancini, and Calbet as the product of the exponential of the Shannon information entropy and the disequilibrium, by using the 1D-normalized probability densities derived from solutions of the Schrödinger equation corresponding to the q-deformed harmonic oscillator and q-deformed Morse potentials. An analysis of the numerical results in terms of the Shannon information entropy, disequilibrium and complexity measure is presented. In the q-deformed harmonic oscillator, the q-dependence of the complexity shows a minimum point for all excited energy levels. In the case of the q-deformed Morse potential, the complexity decreases with increasing q for the investigated diatomic molecules. I. INTRODUCTION Complexity measures are being increasingly employed in order to understand the behavior of systems encountered in several disciplines of scientific inquiry. It is well known that it is very difficult to give a universal definition of complexity valid for all systems. Therefore, different measures of complexity have been proposed in the literature [1]. A few examples are (i) algorithmic complexity [2,3], (ii) a measure of the self-organization capacity of a system [4], and (iii) Crutchfield and Young's complexity [5]. For a finite many-particle system, complexity may be regarded as a measure of its internal order/disorder, which can be represented by the corresponding information entropy together with the distance from equilibrium, called the disequilibrium. In the context of the electronic structure of atoms and molecules, a suitable measure, the so-called C_LMC, has been proposed by López-Ruiz, Mancini and Calbet (LMC) [6,7] to analyze the statistical complexity. This measure has been widely applied to many problems in the literature [7][8][9]. Indeed, C_LMC allocates a multiplicative role to the measure of distance from equilibrium in conjunction with the information entropy to define a measure for the complexity of a finite system. In most previous studies focused on quantum systems, the wave function of a particle has been used to obtain quantities such as the information entropy, disequilibrium and complexity. It is known that a Gaussian-type potential leads to a Gaussian wave function, which strongly satisfies the Heisenberg inequality. However, a non-Gaussian-type potential leads to a more non-exponential wave function which does not strongly satisfy the Heisenberg inequality. In this case, instead of the Heisenberg inequality, a different one such as the Bialynicki-Birula and Mycielski (BBM) inequality [10] is often preferred. The BBM inequality is an information-theoretic inequality which can be applied to many different problems. In the complexity discussion of quantum systems, the form of the wave function is very important and may reveal different aspects of the complex behavior of these systems. One source of the non-exponential form is q-deformation of the wave function, which can emerge from non-linear interactions and non-Markovian memory effects within the system, or from interactions between the system and its surrounding environment. A true understanding of the effect of deformed potentials can shed light on the deep physics of the complexity of several real systems. q-deformed potentials have been considered in physics to discuss many problems.
For example, the q-deformed hyperbolic potentials were first proposed by Arai [11,12] and have found several applications in various fields of physics and chemistry. They are used for modeling and describing electronic conductance in disordered metals and doped semiconductors [13], the phonon spectrum in ⁴He [14], and the oscillatory-rotational spectra of diatomic [15] and multi-atomic molecules [16]. Furthermore, q-deformation of the Morse potential has been investigated in [17][18][19]. Some recent works include the investigation and comparison of the energy spectra of the q-oscillator and a Morse-like anharmonic potential in Ref. [20], and the derivation of the exact normalized wave functions for the q-deformed screened Coulomb Hulthén potential in Ref. [21]. The aim of this work is to relate the complexity of a quantum system to its energy level n and to the potential deformation parameter q. Therefore, in this letter we consider two simple q-deformed potentials and analyze the complexity of the system for these potentials. The outline of the article is as follows: the definition and meaning of the López-Ruiz, Mancini and Calbet complexity measure are presented in Sec. 2. Afterwards, in Sec. 3, C_LMC is applied to the probability distribution of a single particle under the q-deformed harmonic oscillator and q-deformed Morse potentials. Finally, the significant results and remarks are summarized in Sec. 4. II. COMPLEXITY MEASURE OF LÓPEZ-RUIZ, MANCINI AND CALBET The LMC complexity measure is defined for continuous systems as [7] C_LMC = e^(S_x) D_x, (1) where S_x and D_x are the Shannon information entropy and the disequilibrium, respectively. These are defined in one dimension as S_x = −∫ ρ(x) ln ρ(x) dx (2) and D_x = ∫ ρ(x)² dx, (3) where ρ(x) is the probability density given in terms of the wavefunction as ρ(x) = |ψ(x)|². The above expressions are obtained by taking the continuous limit of the corresponding expressions for a discrete probability distribution, S = −Σᵢ pᵢ ln pᵢ and D = Σᵢ (pᵢ − 1/N)², where pᵢ is the probability of occupying the state i, and N is the total number of accessible states in position space. As a reminder, this definition of complexity is valid only for position space. However, a complexity measure for momentum space can also be obtained by applying a Fourier transformation to the probability distribution expressed in position space. On the other hand, according to the LMC definition one can relate complexity to predictability P. LMC can be used to assess whether a system is fully ordered by using P = 1 − C_LMC. For instance, in the statistical limit C_LMC becomes zero for both totally ordered (perfect crystal) and totally disordered (random gas) systems. An ideal gas is totally random and one can predict its randomness. On the other hand, a perfect crystal is also totally predictable, because it can be constructed from a unit cell and a set of symmetry operations. It is generally accepted that C_LMC in Eq. 1 is a well-known statistical measure of complexity for ergodic systems. In this study, we show that C_LMC is also a novel candidate measure for identifying the complexity of q-deformed quantum systems. Therefore, throughout the paper we discuss the complexity arising from the q-deformed harmonic and Morse potentials by using Eqs. 1-3. III. COMPLEXITY STUDY OF Q-DEFORMED POTENTIALS A. q-Deformed harmonic oscillator We consider, as a first example, the q-deformed quantum harmonic oscillator. The q-oscillator is described by the Hamiltonian H = (ω/2)(a₊a₋ + a₋a₊), where ω is the oscillator frequency. In the complexity study of the q-oscillator, ω is set to 1.
a₊ and a₋ are the creation and annihilation operators, satisfying the commutation relation a₋a₊ − q a₊a₋ = 1, with the deformation parameter q taking values in the interval (0, 1). The action of the annihilation and creation operators on the states is given by a₋|n⟩ = √([n]_q) |n − 1⟩ and a₊|n⟩ = √([n + 1]_q) |n + 1⟩, where [n]_q = (1 − qⁿ)/(1 − q), which satisfies the limits lim_(q→1) [n]_q = n and lim_(q→0) [n]_q = 1. The wave function of the q-oscillator is more complex than that of the simple quantum oscillator. However, if the calculation steps in Ref. [22] are repeated, the normalized wave functions can be obtained (Eq. 9), where α = i(log q)/2. In that formula, [n]_q! is the q-factorial, defined as [n]_q! = (q; q)_n/(1 − q)ⁿ, where (q; q)_n is the q-Pochhammer symbol. On the other hand, the energy eigenvalues of the q-oscillator are given by [22] E_n = (ω/2)([n]_q + [n + 1]_q). We note that, in the limit q → 1, the energy eigenvalues reduce to E_n = ω(n + 1/2) of the ordinary quantum harmonic oscillator. However, under a small perturbation from unity (q = 1 − ε), the energy spectrum becomes quadratic in n and the energy eigenvalues can be approximated accordingly (Eq. 12). For ε = 0, Eq. 12 reduces to the eigenvalue of the harmonic oscillator. We note that the q-oscillator has a nonlinear spectrum and satisfies the Heisenberg uncertainty relation for ∆x∆p [22] for all states, with q in the interval (0, 1). The uncertainty product of the q-deformed quantum harmonic oscillator is less than that of the ordinary quantum harmonic oscillator. Applying a simple numerical procedure to the related equations, with the help of Eq. 9, the probability distribution ρ(x) of the q-oscillator for different energy levels can be computed. Here we present the results for n = 1, 2, 5, 6 to make the discussion quantitative. The related numerical results for these excited states for different q values are given in Figs. 1a-d. As can be seen from the figures, the q-oscillator shows very interesting behavior depending on both the excited state and the q-parameter. The n-dependence of the shape of the probability distribution is generally known and corresponds to the solution of the quantum harmonic oscillator, i.e., q = 1. Here we note that the number of peaks in the probability distribution increases with n. In the case of q = 1, the most probable position for the lower states is very different from the classical harmonic oscillator, which spends more time near the boundaries of its motion. But as the quantum number increases, the probability distribution becomes more like that of the classical oscillator; this tendency to approach classical behavior for high quantum numbers is called the correspondence principle. This is the first important point for understanding the solution of the quantum harmonic oscillator. We will return to this point in the discussion below. On the other hand, in the numerical study, we found that the q-dependence of the q-oscillator is very remarkable. For q = 1 the solution gives the standard ρ(x) of the quantum harmonic oscillator. This is a known case, as mentioned above. However, as the q parameter decreases from 1 to 0, which indicates that the deformation increases, ρ(x) becomes more localized in both figures. This behavior is the second important point. Turning back to the figures, we can see from Figs. 1a and 1b that the shape of the probability function has a similar characteristic form for some q values. However, for very small q values this behavior of ρ(x) changes dramatically. For instance, at q = 0.001, the probability surprisingly becomes localized at x = 0. We may say that q drives the oscillator from quantum mechanical behavior towards the classical Gaussian distribution.
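As a point of reference for the q → 1 limit mentioned above, the following minimal numerical sketch (an illustration only; the grid, the choice of states and the variable names are ours, not the authors') evaluates ρ(x) for the undeformed harmonic oscillator eigenstates and then applies Eqs. 1-3 by direct numerical integration. For n = 0 it reproduces the known Gaussian value C_LMC = √(e/2) ≈ 1.166.

import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def rho_harmonic(n, x):
    # |psi_n(x)|^2 for the undeformed oscillator (hbar = m = omega = 1)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    Hn = hermval(x, coeffs)                                    # physicists' Hermite polynomial H_n(x)
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return (norm * Hn * np.exp(-x**2 / 2.0))**2

x = np.linspace(-15.0, 15.0, 20001)
for n in (0, 1, 2, 5, 6):
    rho = rho_harmonic(n, x)
    rho /= np.trapz(rho, x)                                    # guard against numerical drift
    integrand = np.zeros_like(rho)
    pos = rho > 0
    integrand[pos] = rho[pos] * np.log(rho[pos])
    S = -np.trapz(integrand, x)                                # Shannon entropy, Eq. 2
    D = np.trapz(rho**2, x)                                    # disequilibrium, Eq. 3
    C = np.exp(S) * D                                          # C_LMC, Eq. 1
    print(f"n={n}: S={S:.4f}  D={D:.4f}  C_LMC={C:.4f}")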
For q = 1, there are no memory effects in the system, which behaves completely quantum mechanically. However, when q decreases, memory effects emerge in the system. We note that the source of the memory effect is the correlation between the peaks of the wave functions, which might arise from the broken symmetry of the harmonic oscillator potential with increasing q-deformation. Similar behavior can be seen in the case of higher excited states, for example n = 5 and 6. The oscillating form of ρ(x) clearly changes depending on the q values. When q decreases from 1 to 0.1, the form gradually becomes narrower and the oscillating behavior of ρ(x) between the two edges decreases and vanishes, so that the oscillating form evolves into a straight oscillating plateau. In the case of n = 1, 2, 5, 6, for the small value q = 0.001, as can be seen from Figs. 1a-d, localization of the probability appears, i.e., the probability distribution becomes more Gaussian. We note that the q-dependent behavior of ρ(x) is a very interesting result which probably emerges from the deformation of the external potential. From these results it can be concluded that when q approaches zero, the probability of the corresponding wave function of the quantum q-oscillator becomes localized at the origin, which might be a result of memory or non-Markovian effects. So far, we have discussed the probability distribution and eigenenergies of the q-oscillator. Now we can analyze the disequilibrium, information entropy and complexity properties of the q-oscillator. Based on Eqs. 1-3, these quantities can be numerically obtained for the ground and several excited states depending on q. All numerical results are given in Fig. 2. As can be seen from Fig. 2a, the ground-state information entropy is constant and independent of q. However, it has different shapes for all excited states. For example, the information entropy increases and smoothly reaches a saturation for small n values as q increases from 0 to 1. However, this behavior changes rapidly for higher n values. The information entropy increases as q increases and, after a critical saturation, it decreases again between q = 0.8 and 1.0. This behavior appears depending on the n values. Therefore, we see a smooth transition from quantum to classical behavior in the information entropy. This is a novel result. This important result shows that q plays an important role as a memory or non-Markovian effect on the system, which drives the system from a quantum to a classical one. In Fig. 2b the q-dependence of the disequilibrium is given for different excited states. As can be seen from Fig. 2b, the disequilibrium of the ground state is independent of q and takes a large value. However, the disequilibrium of the q-oscillator decreases with increasing q and n values, which means that the system passes from quantum to classical behavior with increasing q and n. On the other hand, after attaining a certain minimum value in the region q = 0.7-0.9, the disequilibrium is found to increase as q decreases below 0.8. For a given value of q, the disequilibrium is found to decrease with increasing quantum number of the excited state. The amount of decrease is larger near the minima of the disequilibrium-versus-q plot. According to Fig. 2c, with reference to the ground state, for a given excited state the complexity of the q-oscillator has a very interesting and rather complicated behavior.
For all excited states, the complexity decreases with increasing q; however, it starts to increase again above a critical minimum q value. This complicated behavior of the complexity is caused by the competition between the disequilibrium and the information entropy. We can conclude that the system has a critical transition in complexity. Around values of q in the neighbourhood of 0.9, where the information entropy exhibits a maximum and the disequilibrium exhibits a minimum, the statistical complexity attains an approximately similar value for all excited states. Below this bunching point of the statistical complexity for the excited states, the C_LMC vs q curves are found to cross over, with an inversion of the relative ordering of the quantum number n of the excited states. Beyond the bunching region, as q decreases, for each excited state the statistical complexity goes through a minimum at a certain q. The depth of the minimum is found to decrease with decreasing quantum number of the excited state. With a further decrease in q beyond the minimum of the statistical complexity, the C_LMC values are found to increase, finally culminating at a common value at the lowest q. Another interesting result for the q-oscillator is the following: for low n values the entropy takes low values, but the disequilibrium is large; consequently, the complexity also takes large values for low n values. Finally, in order to summarize the numerical changes, the Shannon information entropy, disequilibrium and complexity values obtained by using Eqs. 1-3 are given in Table I. B. q-Deformed Morse potential After quantifying the complexity of the q-oscillator, we consider, as a second example, the q-deformed Morse potential [17][18][19][20]. The general form of the q-deformed Morse potential is expressed in Ref. [23], with α = a r_e, x = (r − r_e)/r_e, and V_1 and V_2 the repulsive and attractive terms, respectively. Here r_e is the equilibrium position of the nuclei, a is a constant related to the range of the potential and D_e is a measure of the depth of the potential well at the equilibrium distance. This form of the q-deformed Morse potential contains the well-known diatomic Morse potential as a special case. The wave function of a particle in the q-deformed Morse potential can be obtained after several steps as in Ref. [18] (Eq. 15), where N_n is the normalizing factor and L_n^α(x) are the generalized (associated) Laguerre polynomials. In this work, we numerically computed the normalization constant N_n using Mathematica. On the other hand, the energy eigenvalues of the q-Morse potential are given by Eq. 16, where λ and E_0 are expressed in terms of the potential parameters and µ = m_1 m_2/(m_1 + m_2) is the reduced mass of the diatomic molecule. As can be seen from Eq. 16, the form of the energy is clearly different from that of the classical Morse potential. For example, the energy levels corresponding to the q-deformed Morse potential are bounded from above through q, and the maximum level number is restricted by the inequality n_max ≤ λq − 1/2. Here, we also investigate the q-dependence of the complexity, disequilibrium and information entropy for the q-Morse potential. However, in the present case, we compute and discuss these quantities using well-known experimental parameters of the HCl and H₂ molecules, which are given in Table II. We note that, in the numerical procedure, we defined q = 0.35 as a lower bound. Below this lower bound of q, numerical errors appear in the computation. Therefore, we study the q-dependence between 0.35 and 1.0.
On the other hand, the lower bound on q leads to an upper bound on the number of possible excited states. For example, by using these values, in our work for q = 0.35 the n_max value is found to be 8 and 5 for the HCl and H₂ molecules, respectively. The change of the spatial probability distribution with the q-deformation parameter is presented for the HCl and H₂ molecules in Figs. 3 and 4, respectively. As can be seen from the figures, the probability distribution of the Morse potential has an asymmetric shape. For q = 1 the peaks take large values; however, when the q parameter decreases, ρ(x) becomes much more spread out over position space. This change can be seen more easily in excited states, as shown in Figs. 3b and 4b for the state n = 4. Here we compute the Shannon information entropy, disequilibrium and complexity by using solutions of the q-deformed Morse potential for the HCl and H₂ diatomic molecules. Using the parameters in Table II, we compute all quantities. The numerical results are given in Figs. 5 and 6 for several excited states of the HCl and H₂ molecules, respectively. As can be seen from the figures, there is no complicated behavior in the entropy, disequilibrium and complexity as in the q-harmonic oscillator. For both molecules, the entropy S and complexity C smoothly decrease with q, whereas the disequilibrium D smoothly increases with q. FIG. 5: Shannon information entropy (a), disequilibrium (b) and complexity (c) for the HCl molecule at energy levels n in [1, 7]. FIG. 6: Shannon information entropy (a), disequilibrium (b) and complexity (c) for the H₂ molecule at energy levels n in [1, 5]. Now we can compare some results for the q-deformed Morse potential with those for the q-oscillator. When comparing the results, we can see that for low n values the entropy takes low values while the disequilibrium is large. These results are compatible with those of the q-oscillator. However, the complexity of the q-Morse potential takes low values for low n values, unlike the q-oscillator. IV. CONCLUDING REMARKS When q-deformation is applied to a quantum harmonic oscillator, it is found that the probability distribution becomes more classical rather than quantum mechanical as q approaches zero. In the q-deformed harmonic oscillator, as q decreases, ρ(x) becomes more localized. The q-deformation forces the q-oscillator to behave like a classical oscillator without spreading the probability distribution, while preserving the discreteness of the energy. In other words, in the limit of increasing deformation, as q approaches 0, the probability distribution of a single particle collapses into a Gaussian form. The q-deformation reduces the Heisenberg uncertainty product ∆x∆p, reflecting a gradual loss of quantumness. The entropy, disequilibrium and complexity values are independent of q for the ground state of the q-oscillator. However, q-deformation causes a q-dependent change in the entropy, disequilibrium and complexity calculated for the excited states. q-deformation affects the oscillator differently depending on its energy level n. For instance, the minima of the complexity shift to higher q values as the quantum number n increases. The q-oscillator takes the same position-space complexity value, independent of its energy, when q is set to 0.9. The complexity measure of the probability distribution of an electron in a q-deformed diatomic Morse potential decreases with increasing q. Meanwhile, when q is increased, the entropy decreases and the disequilibrium increases.
When the q-dependence of the position-space probability distributions for the q-harmonic and q-Morse potentials is compared, it is seen that q-deformation (as q approaches zero) causes localization in the q-oscillator by squeezing ρ(x), whereas it causes a spreading of ρ(x) under the q-Morse potential. V. ACKNOWLEDGMENTS KDS is grateful to The Scientific and Technological Research Council of Turkey (TÜBİTAK) for a visiting scientist award under its 2221 program, grant number 1059B211601794, and acknowledges with thanks the support received under the Emeritus Scientist scheme, C.S.I.R., New Delhi.
2018-06-25T10:30:53.000Z
2018-06-25T00:00:00.000
{ "year": 2018, "sha1": "95e70e95bfcfd72662f19814f8be9de2bd4201b8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "95e70e95bfcfd72662f19814f8be9de2bd4201b8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
237352946
pes2o/s2orc
v3-fos-license
A closed loop gradient descent algorithm applied to Rosenbrock's function We introduce a novel adaptive damping technique for an inertial gradient system which finds application as a gradient descent algorithm for unconstrained optimisation. In an example using the non-convex Rosenbrock's function, we show an improvement on existing momentum-based gradient optimisation methods. Also, using Lyapunov stability analysis, we demonstrate the performance of the continuous-time version of the algorithm. Using numerical simulations, we consider the performance of its discrete-time counterpart obtained by using the symplectic Euler method of discretisation. I. INTRODUCTION Recent advances in the field of deep learning have rekindled interest in gradient-based optimisation algorithms. Often we find that the training loss for machine learning models depends on the nature of such algorithms. This motivates us to study these algorithms from a control systems perspective. While studying such algorithms, we encounter unconstrained optimisation problems of the form min_x f(x), where the gradient of the cost function f satisfies the Lipschitz condition ‖∇f(y) − ∇f(x)‖ ≤ L‖y − x‖ ( ) for all x, y. A sufficient condition for learning the gradient of such a cost function is to take a small enough step-size s, typically s ≤ 1/L. Thus, we consider methods which require first-order gradient knowledge to obtain the minima. However, the performance of such methods is heavily dependent upon the cost function's spectral condition number, its overall geometry, the presence of saddle points, and local minima [ ]. * Mr. Subhransu S. Bhattacharjee is a student at the Australian National University, Canberra, Australia. Please direct all queries to Subhransu Bhattacharjee at u @anu.edu.au. † Dr. Ian R. Petersen, FAA is a professor at the College of Engineering and Computer Science, Australian National University, Canberra, Australia. One such method is Nesterov's accelerated gradient descent algorithm, given by x_k = y_{k−1} − s∇f(y_{k−1}), y_k = x_k + ((k − 1)/(k + 2))(x_k − x_{k−1}), with a reported convergence rate of O(1/k²) [ ] for a convex cost function f. In continuous time this algorithm takes the form [ ] Ẍ + (α/t)Ẋ + ∇f(X) = 0. Su et al. show in their paper [ ] how the possible values of the damping coefficient α affect the rate of convergence of this algorithm. For convex functions and α ≥ 3, the rate of convergence of ( ) is of the order O(1/t²). Though there is a lack of mathematical literature explaining the acceleration of this algorithm, recent advances show a variety of convergence rates obtained by re-scaling the gradient flow, which corresponds to a Bregman Lagrangian [ ]. While this does provide some insight, all derivations in the class of inertial gradient systems like ( ) are based on realising Lyapunov functionals, which themselves rely on the geometry of the cost function to approximate an upper bound on the convergence rate. These algorithms in continuous time can be regarded as open loop systems, which show variation in their dynamic evolution in discrete time depending upon the order of discretisation used. The Nesterov scheme, for example, takes multiple forms depending upon the discretisation method used [ ]. Notwithstanding the difficulty in explaining the phenomenon of acceleration in the Nesterov and Heavy-ball [ ] schemes, a number of studies [ ] have shown that momentum definitely plays a central role in the acceleration of such optimisation algorithms. It should be noted that the continuous-time analysis of such systems is achieved by using high-resolution ordinary differential equation approximations with small step sizes.
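For concreteness, the discrete Nesterov iteration described above can be written out in a few lines; the following sketch is our illustration (with an arbitrarily chosen ill-conditioned quadratic as the test function), not code from the paper.

import numpy as np

def nesterov(grad, x0, s, iters):
    # Nesterov's accelerated gradient method with momentum factor (k - 1)/(k + 2)
    x_prev = np.array(x0, dtype=float)
    y = x_prev.copy()
    for k in range(1, iters + 1):
        x = y - s * grad(y)                             # gradient step from the look-ahead point
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)    # momentum extrapolation
        x_prev = x
    return x_prev

# Illustrative convex quadratic f(x) = 0.5 * x^T A x with condition number 100
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
print(nesterov(grad, [1.0, 1.0], s=1.0 / 100.0, iters=5000))   # tends towards the minimiser at the origin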
The literature on first-order momentum-based methods shows that, in the deterministic setting, such methods reveal a stabilising effect in their transient phases. Recent results indicate that these momentum methods admit an attractive invariant manifold on which the dynamics reduce to a gradient flow [ ]. In this paper, we will discuss a particular method within a relatively new class of gradient descent schemes which are closed loop in nature and show qualitatively better numerical performance in continuous time compared to an open loop approach like Nesterov's scheme. Our study begins with understanding the problem generalised as Ẍ + γẊ + ∇f(X) = 0, ( ) where γ is the damping coefficient. We will consider the construction of γ as a feedback control problem to optimise the dynamics for rapid convergence. To test the performance of our method, we use Rosenbrock's function (a test-bench for global optima-seeking) as the cost function (see Figure ). Rosenbrock's function is a non-convex function which is known for its hard-to-find minimum. This global minimum exists at (1, 1), located within a large valley, making the optimisation computationally hard. Rosenbrock's function is given as f(x, y) = (1 − x)² + 100(y − x²)². As mentioned earlier, the performance of an optimisation scheme is characterised by the spectral condition number of the Hessian of the cost function calculated at its minimum. The spectral condition number of a matrix is given as κ = λ_max/λ_min, where λ signifies the eigenvalues [ ]. At its minimum, the spectral condition number of the Hessian of Rosenbrock's function is calculated to be κ = 2508 [ ]. This indicates that the system is ill-conditioned and most first-order gradient-based methods would require a large number of iterations to find its minimum. In the continuous-time setting, it has been found that the problem can only be solved by considering implicit methods which can handle such stiff systems [ ]. First-order gradient algorithms, when applied to Rosenbrock's function, perform quite poorly in discrete time; i.e., they are unable to find the minimum within a practical runtime. While ADAM [ ] and other hybrid algorithms are able to solve this problem, they do so at a high computational cost; i.e., it takes them a large number of iterations to minimise Rosenbrock's function, and they often converge slowly due to the stiffness of the system, as shown in Figure (Rosenbrock's function [ ]). This paper is divided into two main sections to deal with the continuous and the discrete-time analyses of the proposed algorithm. A. Motivation The design of the proposed algorithm was inspired by a physical understanding of the dynamical system ( ), which involves control using its momentum [ ]. This has also been motivated by a recent paper by Attouch. In [ ], closed-loop control is considered using multiple scenarios with a damping coefficient of the form γ = r|ẋ|^(p−2), where p and r are positive constants (control parameters) and ẋ is the velocity. However, the simulation results we obtained for this algorithm were not particularly satisfactory for the minimisation of Rosenbrock's function. We make two particular observations: 1) The damping function did not adequately stabilise the system over long intervals for various values of p and r. This inspired us to use the control parameter r as t (time) and p = 4. 2) The lack of stability of the system over large intervals of time led us to look at ( ) as a linear time-variant system and to perform a corresponding pole placement.
The linearised ODE for the method ( ) applied to Rosenbrock's function is equivalent to the form Ẍ + γẊ + ∇²f(X*)(X − X*) = 0, where the Hessian ∇²f of Rosenbrock's function at the minimum X = X* is ∇²f(X*) = [802, −400; −400, 200]. Please note that, in this paper, all simulations and results have been compiled in the MATLAB™ and Simulink™ environments. In all graphs where the X-axis is not explicitly labelled, it denotes time in seconds. Note that this implies that, in the vector form, ‖Ẋ‖²₂ becomes the inner product of the velocity vector with itself. Hence, we look at the eigenvalues of the linear time-variant system matrix of ( ) to understand its convergence rate [ ]. For this purpose, we define a new system with state y = e^(ηt) x for the underlying linearised state-space system ẋ = Ax. ( ) This leads to the redefined system ẏ = ηe^(ηt)x + e^(ηt)ẋ, which in matrix form can be written as ẏ = (A + ηI)y. ( ) If this redefined system is stable, the minimum of the absolute real parts of the eigenvalues of the system ( ) indicates an exponential convergence rate of at least η. Based on this, we choose the value η = 1. Thus we arrive at our hybrid gradient descent optimisation method, which we shall refer to as the whiplash inertial gradient optimisation method ( ): Ẍ + (1 + t⟨Ẋ, Ẋ⟩)Ẋ + ∇f(X) = 0. A block diagram for this algorithm is shown in Figure . The motivation for this nomenclature can be found in subsection II-C.
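As a quick numerical check of the conditioning claim quoted above, the short snippet below (our illustration, not the authors' code) evaluates the Hessian of Rosenbrock's function at (1, 1) and its spectral condition number, which indeed comes out close to κ = 2508.

import numpy as np

# Hessian of f(x, y) = (1 - x)^2 + 100*(y - x^2)^2 evaluated at the minimiser (1, 1)
x, y = 1.0, 1.0
H = np.array([[2.0 - 400.0 * (y - x**2) + 800.0 * x**2, -400.0 * x],
              [-400.0 * x, 200.0]])
eigs = np.linalg.eigvalsh(H)
print(H)                         # [[802, -400], [-400, 200]]
print(eigs.max() / eigs.min())   # spectral condition number, approximately 2508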
While all starting speeds achieve convergence (unlike the Nesterov scheme which must be started atẊ(0) = 0 [ ] [ ] to ensure convergence) it should be noted that we start the system at an arbitrary xed velocity oḟ X(0) = −1000 which shows rapid convergence for all initial conditions . If one looks at Figure , where we have analysed the system's damping coe cient γ = 1 + t Ẋ ,Ẋ , it shows a sharp rise followed by an abrupt fall for high starting speeds (which ensures faster convergence). This phenomenon replicates the physical process of the whiplash e ect, and hence motivated its nomenclature. Further research will be required to study the e ect of various starting speeds for this system. We hypothesise that (as explained by Kovachki and Stuart in their paper [ ]) this e ect might be responsible for the rapid stabilisation of the system in the transient phase. A. Discretisation The discretization that is used is the semi-implicit or symplectic Euler method [ ]. Using a discrete-time step s and sampling of t ≈ k √ s, we obtain a two-state estimate of the acceleration and velocity as shown below: ( ) Now, we modify ( ) to add a xed mass to the system, which up until this point has been considered to be of unit magnitude. The choice of mass that we shall make is m = 1 √ s . This idea of introducing this mass in inertial gradient ow methods while discretising them has been inspired from the idea of selective mass scaling in nite element methods [ ], where the iterative process can be scaled by choosing an e ective mass. The rationale behind this is since the discrete-time method depend heavily on the step-size, they take much longer to attenuate for smaller step-sizes. Hence, to counter this e ect, we may introduce such a xed mass, which scales the dynamics, depending on the step-size. Upon making these substitutions and modi cation to ( ), we obtain ( ) We can re-write ( ) using ( ) as: ( ) Now, we consider the symplectic approximation for the Lyapunov stable system [ ] such that ẋ(t) → 0 as t → ∞. This implies that v k → 0 as k → ∞. Therefore, we introduce z k = x k − x k−1 = √ sv k−1 . For all asymptotic analyses, there is no practical di erence between the sequences z k and v k as We may consider this as two transforms. First as a scaling of the system, followed by a backward recursion: ( ) This trick simpli es our system's updates while keeping intact the geometry of the dynamical system and does not change the global nature of the system's convergence . This particular choice of design for the algorithm simpli es the computation and makes discrete-time analyses of convergence considerably easier. We nally have the consolidated scheme as: B. Algorithm This discrete-time scheme ( ) can be translated to the following algorithm using a step-size s and n iterations, with initial starting point x 0 and nal point x n (1). Unlike prior gradient descent algorithms, which are capable of minimising Rosenbrock's function, this algorithm does not use any hyper-parameters. Instead, it uses a simple two-step assignment to update the discretetime damping in every iteration. The zeroth step ( ) of the iteration x 1 is a gradient descent step [ ] which assigns the initial momentum for the rst iteration as We have veri ed this claim using numerical results. The code is available on the licensed repository: https://github.com/ SubhransuSekharBhattacharjee-/Whiplash.git Algorithm The whiplash gradient descent algorithm Input: ∇f (x), n, s, x 0 : Initialise: C. 
C. Numerical Results Numerical results for the whiplash gradient descent algorithm applied to Rosenbrock's function are promising. As explained previously, for the stiffness of the system to be taken into account, we need a step-size of no more than 10⁻⁵; with a larger step-size, the algorithm is unable to learn the gradient of the cost function and picks up momentum without correcting the damping. For a sufficiently small step-size, the whiplash gradient descent algorithm successfully found the minimum of Rosenbrock's function for all initial conditions. We show a few examples in Figure . A plot of the momentum growth (Figure ) shows a saturation effect. This indicates that the optimisation of Rosenbrock's function has been achieved over the given time interval. Figure shows the nature of the trajectory as it approaches the minimum. IV. CONCLUSIONS AND FUTURE WORK From the above results, we can see that the proposed whiplash gradient descent algorithm is capable of fast optimisation of Rosenbrock's function. We constructed this algorithm using a non-linear controller motivated by the nature of the momentum-control structure. However, it must be understood that this controller might not be optimal, even for Rosenbrock's function. This is because, unlike linear systems, where we could predict the results from clear theoretical motivations, we do not have any such tools of analysis for the non-linear case. Thus, as a direction for future research, we will need to reconsider the classical Lur'e problem for the absolute stability of the entire class of inertial gradient systems, involving a feedback path that contains a sector-bounded non-linearity [ ]. We will also need further research to understand the theoretical and practical limitations of closed-loop control for the generalised inertial gradient system. Furthermore, we need to study the effect of variation in starting speed and the hypothesis regarding the stabilisation effect. Deriving upper bounds for the rates of convergence, using Lyapunov arguments, will be another direction for research. For that, we may consider a Lyapunov argument of the form ( ), where f* denotes the optimal value of the cost function.
2021-08-31T01:16:13.073Z
2021-08-29T00:00:00.000
{ "year": 2021, "sha1": "d0680d4f362d8717daf88f4c9939d6080cfca767", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2108.12883", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d0680d4f362d8717daf88f4c9939d6080cfca767", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
210914810
pes2o/s2orc
v3-fos-license
Geological Heritage, Geotourism and Local Development in Aggtelek National Park (NE Hungary) We examine how geoconservation and geotourism can help the local development of an economically underdeveloped karst area. First, we briefly present the geoheritage of Aggtelek National Park, which largely overlaps the area of the Aggtelek Karst. The area is built up predominantly of Triassic limestones and dolomites. It is a typical temperate zone, medium mountain karst area with doline-dotted karst plateaus and tectonic-fluvial valleys. Besides caves, the past history of iron mining also enriches its geoheritage. Aggtelek National Park was set aside in 1985. The caves of Aggtelek Karst and Slovak Karst became part of the UNESCO World Heritage in 1995 due to the high diversity of cave types and morphology. Socially, the area of the national park is a disadvantaged border region in NE Hungary. Baradla Cave has always been a popular tourist destination, but visitor numbers fell significantly after 1985. Tourism is largely focused on Baradla Cave, and thus it can be considered “sensu lato” geotourism. Reasons for the changes in visitor numbers are discussed in this paper. Tourist motivations, the significance of geotourism and other tourism-related issues were explored in our study by questionnaire surveys and semi-structured interviews. Furthermore, the balance of geoconservation versus bioconservation is also examined. Finally, the relationship of geotourism, nature protection and local development is discussed. We conclude that the socio-economic situation of the Aggtelek Karst microregion is relatively better than that of the neighbouring regions, and this relative welfare is due to the existence of the national park and Baradla Cave. Introduction Tourism is one of the most quickly growing sectors in the world economy (UNWTO 2019). Within this sector, the proportion of nature-based tourism is also growing (Kuenzi and McNeely 2008;Liu et al. 2016), and geotourism as a branch of nature-based tourism is also becoming more and more important (Dowling and Newsome 2006;Dowling 2011). Tourism to national parks is on a rising trend at the global level (Balmford et al. 2009;Stemberk et al. 2018), although the scenarios vary over time and space. For example, total visitor numbers at national parks in the USA moved on a remarkable upward trend from 1945 to 1987, followed by stagnation with some fluctuations until 2013 and then a sharp increase again after that (NPS 2019). In Hungary, the total number of registered national park visitors rose from 0.9 million to 1.6 million in the period 2005 to 2017 (Pádárné Török 2018). We would note here that only tourists who visit ecotourism facilities such as visitor centres or caves are registered in Hungary. Five Hungarian national parks (Balaton-felvidéki, Bükk, Duna-Ipoly, Körös-Maros, Hortobágy) saw significant increases in visitor numbers during that period, while five other national parks (Aggtelek, Duna-Dráva, Fertő-Hanság, Kiskunság, Őrség), including Aggtelek National Park presented in this paper, experienced only a slight increase with occasional declines (Pádárné Török 2018). Various factors can affect visitor numbers, including the economic crisis in general, higher fuel prices (Stevens et al. 2014), visitor opinions, government policies and national park characteristics (Stemberk et al. 2018). The values and philosophy of national parks have undergone several modifications over the last century and a half (Frost and Hall 2009). 
In the early days, conserving pristine nature was the main goal, but the exploitation of tourism potential was also a significant aspect. The preservation of wildlife was added to the goals in the second step, especially when national parks were established on large African territories. The principles of ecological integrity and biodiversity were only formulated after World War II. The preservation of cultural landscapes and historical heritage, as well as the promotion of scientific research and education, were also mainly added to the objectives after World War II. Today, it is often experienced that bioconservation is more pronounced and receives significantly higher financial support than geoconservation (Brilha 2002;Crofts 2018;Gordon et al. 2018;Stepišnik and Trenchovska 2018). Protected areas including national parks are often located in relatively sparsely populated and less developed areas, and it is common that they are situated along national borders (Butler and Boyd 2000;Mose 2007;Frost and Hall 2009). As a result of this, it has been possible to better preserve the natural environment in these circumstances. Consequently, the idea that nature protection in otherwise disadvantaged areas should contribute to local development logically came into light (Mose 2007). Municipalities located in national parks have a higher income than those located elsewhere (Stemberk et al. 2018). This idea can be valid also for places where geoconservation is in the focus (Ateş and Ateş 2019). Former mining areas are typical for this situation as they are significant from a geoconservation point of view and at the same time they are socially depressed zones (Evans 2005). Recently, geoparks have become the flagships of geoconservation, and sustainable development is one of the main aims of geoparks (Zouros and McKeever 2004;McKeever and Zouros 2005;Farsani et al. 2011;Lazzari and Aloia 2014;Han et al. 2018). In many cases, karst terrains are economically underdeveloped areas (Telbisz et al. 2014(Telbisz et al. , 2015(Telbisz et al. , 2016a(Telbisz et al. , 2019, but they have varied attractions from the perspective of geotourism (Dowling and Newsome 2006). As Cigna and Forti (2013) stated, caves are the most important geotouristic features in the world. In this paper, we examine Aggtelek National Park (ANP, Hungary), which was established on a well-known karst terrain, situated in a socially depressed area. Our first aim is to briefly present the geological heritage of ANP. Second, we aim to explain the changes in visitor numbers and understand tourist motivations and attitudes using statistical data, tourist questionnaires and interviews with managers and local stakeholders. We also intend to estimate the proportion of pure geotourists and general geotourists (in the meaning of Božić and Tomić 2015). Third, our goal is to examine the relationship of bioconservation (biological research) and geoconservation (earth science research) in the case of ANP. Finally, our most important aim is to demonstrate that geotourism and nature protection may have a significant impact on local socio-economic development. Location of the Study Area ANP lies in NE Hungary (Fig. 1) at the border between Hungary and Slovakia. ANP was set aside in 1985 to protect the karst terrain and its caves. The northern border of ANP coincides with the national border, and on the other side, in Slovakia, there is also a national park, the "Slovak Karst National Park". The area of ANP largely overlaps the area of Aggtelek Karst. 
Historically, Aggtelek Karst (in Hungarian: Aggteleki-karszt) and Slovak Karst (in Slovakian: Slovenský kras) are hilly and middle mountainous areas, which belong to Gömör-Torna/Gemer-Turňa Karst. Altogether, they are parts of the Inner West Carpathians (Less 2000). Geological and Geomorphological Settings Aggtelek Karst is mostly famous for its caves, especially Baradla Cave, which is often called as "Aggtelek Dripstone Cave" in Hungarian popular literature, as it is located next to Aggtelek village. As Baradla Cave always had an open, visible entry, it has been known and used since prehistorical times. On the other hand, Domica Cave in Slovakia was explored in 1926 and the connection between these two caves was explored in 1932, so since that time, we can speak about the Baradla-Domica cave system. Besides this cave system, there are many other caves with varied types, morphology and depositions. Aggtelek Karst is one of Hungary's geologically most diverse areas despite its relatively small size, considering both stratigraphy and tectonics (Less 2000). From a geotectonic viewpoint, the local Silicicum-Aggtelek and Meliata units are parts of the Inner West Carpathians and belong to the Adria-derived nappes and Meliata Ocean remnants of the ALCAPA Mega-Unit (Schmid et al. 2008). Aggtelek Karst is built up mainly of Late Permian to Jurassic sedimentary nappe stacks (Kövér et al. 2009), in which Triassic carbonate rocks are the most important (Fig. 2). These rocks were deposited on the carbonate platforms of the Neotethys Ocean. The most widely distributed formation is the Upper Triassic Wetterstein Formation, which consists of well karstifiable limestones, and to a lesser extent, of less karstifiable dolomites (Less 2000). Furthermore, Gutenstein, Steinalm, Halstatt, Dachstein and Pötschen Limestones also occur in the area. In the Jurassic, the ocean became deeper, and thus carbonate deposition was halted. At the end of this period, the subduction of the oceanic crust took place, but a smaller part of the oceanic crust was obducted on the continental crust, and therefore some pieces of the former oceanic crust can be found on the surface near Meliata (in Slovakia; Gaál and Bella 2005). As the northern part of the area was slightly uplifted, the Triassic carbonate sediments slid towards the south on the plastic Permian evaporites, and nappe stacks were formed. In the Cretaceous, the north-south compression resulted in folding structures, and a second generation of nappes was created (Less 2000). The third important tectonic phase affected the area in the Oligocene, when mainly horizontal deformations occurred. The previous tectonic boundaries were revived, and traces of these horizontal faults can be still recognized in the present topography as W-E oriented valleys. In the Miocene, most of present-day Hungary was flooded by the Pannonian Sea, which later gradually decreased in size and became a lake. The lake penetrated into the lower terrains of the carbonate area, and lacustrine sediments were deposited (Less 2000). Karstification began in the area in the second part of the Miocene period with a subtropical climate and at a relatively low elevation (Zámbó 1998). Finally, the area north of the karst terrain started to uplift in the Pliocene, and the whole karst area had a large-scale, north-south sloping surface, which is a dominant characteristic of the present-day topography (Gyuricza and Sásdi 2009;Telbisz 2011). 
The rivers flowing southwards from the northern mountains created a pediplain and covered the area with fluvial sediments (gravels), but later on-due to the uneven uplift of tectonic blocks-the uplifted segments became karst plateaus from which the fluvial deposits were gradually eroded, and between them, valleys (gorges or wider, gentler valleys) were formed, and karst processes gained dominance (Zámbó 1998;Móga 1999;Telbisz 2011). As a result of these tectonic processes, the Aggtelek Karst is characterized by 350 to 600 m a.s.l. high flat summits and small area plateaus, whereas the plateaus of the northern Slovak Karst are larger and higher. The mosaic-like topography and the rivers (streams) flowing from the northern mountains made karst development highly diverse. Plateaus are mostly dotted by solution dolines with different densities (1-30 dolines/km 2 ; Telbisz et al. 2016b). Dolines are typically arranged into rows along dry valleys. The genesis of valleys can be explained partly by tectonic reasons and partly by the earlier drainage network (Zámbó 1998;Telbisz 2011). The valleys of the plateaus are dry, whereas the valleys between the plateaus have active streams or rivers. Plateaus are covered by soil and vegetation, and thus bare karren are rare, and rounded subsoil karren are typical, but on steep plateau slopes, especially with S-SW exposition, some spectacular karrenfields were formed partly due to human impact (Zámbó 1998). At the bottom of the karst plateaus, along the contact of karst and non-karst areas stream sinks or springs are found depending on the topographic situation. Naturally, the karst area is poor in lakes, but in some plugged stream sinks or dolines, small lakes can be found (e.g. Red Lake between Aggtelek and Jósvafő villages). Methods Several types of data have been used in our analysis. First, we acquired statistical data to briefly present the demographic and social situation of the Aggtelek microregion. Second, we examined the changes in visitor numbers at Baradla Cave. Visitor statistics are available for a period of more than 100 years. Third, we conducted semi-structured interviews with ANP managers, mayors of the neighbouring settlements and external experts, who performed research within ANP or know the area as nature protection officials. Finally, a questionnaire survey was performed with locals and tourists. Due to space limitations, the questionnaires with locals are not evaluated in this paper. A questionnaire survey is a widely applied method to examine visitors' perceptions, characteristics, motivations and attitudes about development priorities. For example, surveys have been used in relation to Natura 2000 protected areas (Pietrzyk-Kaszyńska et al. 2012), national parks (Trakolis 2001;Papageorgiou and Kassioumis 2005), geosites (Zgłobicki and Baran-Zgłobicka 2013;Štrba 2019) and caves (Kim et al. 2008). In our survey, we used two, A4sized forms with 26, mostly closed-ended questions ranging from basic demographic data through the mode and motivation of tourism to some potential development-related questions. In order to obtain some information about foreign tourists as well, the forms were also created in English and Slovakian with some minor adaptations. The semi-structured interviews were carried out in several phases in 2018, whereas most of the questionnaires were completed by tourists in July 2018. The location of the mass survey was basically near Baradla Cave entrances (in Aggtelek and Jósvafő, Fig. 
1), but a smaller proportion of questionnaires were completed at other settlements. The tourists could fill out the forms by themselves or with the help of assistants. In connection with the relative importance of geological versus biological values, we performed a bibliographic analysis. We searched publications related to the word "Aggtelek" in the largest Hungarian journal database (https://matarka.hu/) and in Scopus (https://www.scopus.com) and analysed the thematic distribution and temporal changes of these publications. Geological Heritage of Aggtelek National Park A complete geosite inventory has not yet been undertaken in Aggtelek National Park. However, we briefly list here the elements of the local geoheritage. Caves are clearly the most significant elements, but exokarst landforms and geological type sections are also abundant. There are 24 geological type sections at the Aggtelek Karst (representing mostly Triassic formations), and one unique feature is that seven protected type sections are found underground, in Baradla Cave. In addition, mining history makes the local geo-image even more colourful. Caves of the Area There are around 1200 caves in the 200 km 2 area of ANP. Caves formed by meteoric waters are the most abundant, including inflow caves, outflow caves and through caves. Branchwork caves (Palmer 1991) are the most common type. They are generally rich in different types of speleothems. In addition to the most common speleothems such as soda straws, stalactites and stalagmites, one can also find helictites, coralloids and bulbous forms (Fig. 3). Some plateaus (e.g. Alsó Hill) are extremely rich in shaft caves (known locally as "zsomboly") due to the nearly vertical, well karstifiable limestone beds. Vecsembükki-zsomboly is a typical example and is the third deepest cave in Hungary (236 m depth). In the eastern part of the karst, in Esztramos Hill, thermal caves are also found (e.g. Rákóczi Cave). The passage pattern and deposition forms of these caves are entirely different from those previously discussed (Takácsné Bolner 1998). These caves were first explored by iron and rock mining because they did not have natural entrances at that time. Mining ceased in the area in 1996, but there are still many former mining passages in the hill, and they are now under restoration to make them usable for a modern, experience-focused mining museum. In other parts of the karst, some caves with natural entrances have long been known, but many new caves were discovered in the decades after World War II, when systematic investigation and scientific methods such as water tracing were applied. Béke Cave, Meteor Cave, Kossuth Cave or Szabadság Cave can be mentioned as the most important post-war discoveries. The Baradla-Domica Cave System The longest (25.5 km long) and most diverse cave in the area is the Baradla-Domica cave system (Gruber and Gaál 2014;Fig. 4). According to Ford and Williams (1989), it is a typical example of multi-storey through caves. On the Hungarian side, small streams flowing on covered karst terrain reach the karst contact and sink into Baradla Cave. The natural entrance of the cave is found at the edge of Aggtelek village. Due to the openness of the cave, it was inhabited in prehistorical times. The oldest undisputed findings are from the Neolithic Period, when the so-called "Bükk Culture" people settled in the cave around 5000 BC (Székely 1998;Holl 2007). 
In addition to artefacts (ceramics, bone and stone tools), these people left a special print in the cave, as many dripstones were coloured to grey or black from the torches and fires they used. However, when they left the cave, it remained uninhabited for several millennia and thus younger speleothem layers coated these dark layers (Gradziński et al. 2007). The main branch of the cave is spacious and has many large rooms; it is roughly horizontal, which makes walking easy. It is also rich in speleothems and thus everything is in place for an ideal show cave. Scientific exploration of the cave began at the end of the eighteenth century. The first map of the thenknown cave was drawn in 1794 based on a careful survey, and is thus thought to be the first engineering cave map in the world (Szvoboda 1998). Scientific research has continued since then, at varying intensities. Present-day investigations focus on cave genesis theories (Gyuricza and Sásdi 2009;Veress and Unger 2015;Bosák et al. 2004;Bella et al. 2019), hydrogeologic studies (Borbás et al. 2011;Gruber et al. 2012), exploration for new passages at the lower, inundated levels, speleobiology and speleothems (Zámbó et al. 2002;Galbács et al. 2011;Czuppon et al. 2018). Today, cave depositions are studied principally because of their palaeoclimatic significance (Demény et al. 2017). The outflow of the cave is found at the end of the 7-km long main branch near Jósvafő village, but the terminal passages are narrow and partly inundated, so there was no natural entrance to the cave from this side. However, in order to reach the inner cave passages from that side, an artificial tunnel was created near Jósvafő village and another one about midway between Aggtelek and Jósvafő villages (next to Red Lake). On the Slovakian side, the Domica Cave is another story. In historical times it had no natural entrance, and thus it remained unknown until 1926. However, explorers soon realized that Neolithic people had inhabited this cave. Consequently, it must have had a natural entrance that time, but it was closed naturally after the Neolithic Period (Gruber and Gaál 2014). The connection between Domica and Baradla is a very narrow, stream passage because there are some less soluble beds in-between the well karstifiable limestone layers. Previously, this passage was entirely flooded and was hard to get through. This section was first explored from the Hungarian side in 1932. One peculiar feature of the Baradla-Domica cave system is that the state border can be crossed below the surface. In the twentieth century, this crossing was closed with an iron gate for political reasons. Since 2007, it is free to move from one country to the other, but the narrow, flooded section in the Hungarian side is passable only by cavers. The state border can be relatively easily reached from the Domica side. Exokarst Landforms as Potential Geosites First, we mention the karren as the smallest karst features. The karrenfield formed at the edge of Aggtelek village on the hillslope of Tó-hegy is the most spectacular form of its kind, where bare and soil-covered karren can be seen. In addition, Aggtelek Lake, a plugged former stream sink is also found at the foot of this hillslope. A further advantage of this site from a geotouristic viewpoint is its proximity to the main tourist centre of Aggtelek. Sinkholes are abundant throughout the area, as there are ca. 1100 dolines in Aggtelek National Park (Telbisz 2001). 
Stream sinks are also numerous: typical examples are the Zombor-lyuk, Nagy-Ravasz-lyuk and Kis-Ravasz-lyuk near Aggtelek village, which drain water to Baradla Cave, but many other good examples could be mentioned. As for larger landforms, we can mention the Jósvafő-fennsík (plateau), a large doline-dotted, slightly uplifted depression surrounded by higher plateaus. It is considered to be a paleo-polje and is a very interesting site from the viewpoint of landform development (Bella et al. 2016). Finally, springs are also important geosites. They have high discharge fluctuations with occasional floods, and travertine depositions are generally formed in the outflowing streams. The Baradla Cave was first protected in 1940. Since 1961, all caves in Hungary are protected ex lege, and this also applies to Baradla Cave. In 1978, due to the geological, geomorphological and speleological values, the surface area of the karst became a "landscape protected area", which is a nature protection category in Hungary below the national park level. In 1979, the area became a Biosphere Reserve (Szvoboda 1998). ANP was set aside in 1985. It is very important to mention that it was the first Hungarian national park which was especially created for the protection of a karst terrain i.e. geoconservation was the focus from the beginning. We would also note that Bükk National Park (Hungary) founded in 1977 also has significant karst areas, but it is a more complex landscape, and karst protection was only one of the reasons but not the Telbisz (b, c, e) primary aim when that national park was created. In 1995, the caves of Aggtelek Karst and Slovak Karst were inscribed on the UNESCO World Heritage List as a transboundary property, because of the high diversity of temperate karst cave morphology, a fact that also underlines the outstanding geological values in ANP. Cave Tourism and Tourism Infrastructure in Aggtelek National Park Tourism in the Baradla Cave goes back more than 200 years (Gruber and Gaál 2014). In the nineteenth century, it was known as the longest cave in Europe, and thus many Hungarians and foreigners came here to visit including famous poets and powerful people, who are documented in the guest books of the cave. In Hungary, the popular name "Aggtelek Dripstone Cave" is deeply rooted in public awareness, because it is the main example in the schools when caves, speleothems or karst processes are taught. The significance of this fact is also reflected in our survey (see below). Briefly, the Baradla Cave is regarded as a "must see" in the Hungarian context. At present, there are short show cave tours starting from Aggtelek village (1-km length) and from Red Lake to Jósvafő village (2.3 km), but a long trip along the main branch from Aggtelek to Jósvafő (6.7 km) is also available for tourists. The Rákóczi Cave at Esztramos Hill is another show cave with several ladders, and the national park also provides adventure cave tours to some other caves (Kossuth Cave, Meteor Cave, Vass Imre Cave, Béke Cave). However, the overwhelming majority of tourists visit only the Aggtelek part of the Baradla Cave. Tourism facilities are just presented very briefly here. As for surface hikes, there is a well-marked system of hiking paths in the national park. For education purpose, there are ten education trails, concentrated mostly (but not exclusively) around Aggtelek and Jósvafő villages. The education trails have leaflet guides in three languages. 
However, only three of them can be called geotrails, because the others focus rather on plants, animals and historical heritage. There is one visitor centre at the Vörös-tó (Red Lake) entrance of Baradla Cave, but it does not have a modern exposition and most of the visitors do not come to this site. Another very small museum is found at the Jósvafő entrance of the Baradla Cave, focusing on caving and karst hydrogeology through the life of Hubert Kessler, a famous hydrogeologist, who conducted several explorations and introduced innovations around Aggtelek in the first half of the twentieth century. Furthermore, there are three small education centres maintained by the national park, which provide programmes predominantly for school children. Accommodation possibilities are limited: there are two small hotels (one in Aggtelek and one in Jósvafő), and there is a tourist hostel with a campground in Aggtelek village. Visitors staying for several days generally reside in private guesthouses or rooms. Social Situation of the Aggtelek Karst Microregion Settlements As for the microregion, the Aggtelek Karst has always been a sparsely inhabited area due to the relatively harsh natural settings of the karst terrain, and its population stagnated or even slightly decreased during the nineteenth century. In the twentieth century, one can observe a moderate increase until 1970, but since that time there has been a strong downward trend (Telbisz et al. 2015;Fig. 5). The reasons are both natural decrease and emigration, since there is a lack of employment opportunities in the microregion. The border position and the distance from major transport routes further amplifies these trends. There are 21 settlements in the immediate vicinity of the national park, and two villages (Aggtelek and Jósvafő) are directly located in the area of ANP. The largest of these (Bódvaszilas village) has 1101 inhabitants (data from 2011), but most settlements have only around 100 people, and there are some micro-villages with only some tens of inhabitants. Since the political transition in 1990, only three settlements (Szin, Tornanádaska, Szalonna) have increased their populations. This increase is due to the growing proportion of Roma people in these villages as they have a higher birth rate. Aggtelek village has had a stagnating population since 1990, and all other settlements have decreasing populations. The closest larger city, Miskolc, is located 60 km from Aggtelek village, which is not such a great distance in many countries, but here, in the karst area, almost everyone feels that Aggtelek is "far from everywhere" and is the "outback" of Hungary. Temporal Changes in Cave Tourism Since the Baradla Cave has long been the most important tourist destination in the microregion, and practically all tourists arriving here visit the Cave, its visitor numbers well characterize the tourism of the area from the beginning of the twentieth century until now (see Tózsa 1996). Based on these data (Fig. 6), it is stated that the main period of local tourism growth occurred from 1950 to 1975. This was followed by a 10-year-long peak, with about 250,000 visitors a year. Thereafter, a decrease took place in several steps. The period of decline started just 3 years after the foundation of the national park. However, the reason of the decline is not the foundation of the national park, but rather the fact that this was the time when Hungarians could get a "world passport" in the final years of the communist regime since 1988. 
This opened the world to them, and Hungarians began to travel to international tourist destinations instead of the traditional domestic locations. The fact remains that the foundation of the national park was not able to prevent the decline in tourism. A decade later, when the caves were recognized as World Heritage, the decline in visitor numbers halted for a while, but it is difficult to judge the role of this title in that process. Furthermore, the decline has continued since the early 2000s. In 2007, Hungary became a member of the Schengen zone, meaning that border crossings became entirely free, which may have boosted local cross-border tourism. Nevertheless, the data show just the opposite. Due to the even more open borders, the wealthy travelled even more to international destinations, whereas the poorer classes could afford less and less travel even within the country, especially when the 2008 economic crisis hit Hungary. Our survey demonstrates that the wealthier western part of Hungary (the so-called Transdanubia) accounts for relatively few tourists to Aggtelek, and the area of attraction of Aggtelek is practically restricted to eastern Hungary, though the capital city, Budapest, is still an important source of visitors travelling to Aggtelek (Fig. 7). Finally, the downward trend came to an end some years ago, and a slight increase was even observed in the number of tourists. At present, however, it is not possible to say whether this marks the start of a new uptrend or is just a temporary development.

Lessons from the Tourist Questionnaires

In this section, we analyse some characteristics of Aggtelek tourism, including tourists' motivations and the role of geotourism. The tourist questionnaires were answered by 380 persons, of whom 44 people (11%) completed the Slovakian or English language version. This corresponds to the national park's own measurements, i.e. that about 10-15% of the tourists are foreign. Nonetheless, it should also be mentioned that ethnic Hungarians living in neighbouring or other countries also completed the Hungarian version, and so the number of people with residence in another country is 72 (20%). General data about the respondents are presented in Table 1. Some general information on the tourists: as in the case of other nature-based tourist attractions, there is a high proportion of one-day tourists (59%), and this raises certain problems. Moreover, there is a high seasonality in tourist numbers, which is also a disadvantage. On the other hand, 72% of tourists are recurrent visitors, which is a rather positive characteristic. As the primary aim of tourists is to visit the Baradla Cave, we may call all visitors sensu lato "geotourists" (Dowling and Newsome 2006; Hose 2008), because they view a geologic value and, thanks to information boards and cave guides, they are also "educated" in a certain sense. Using the questionnaire survey, we examined the awareness of geotourism, and also the geotourist identity. In response to the question "Have you ever heard the expression: 'geotourism'?", 62% answered 'yes'. However, we believe that this proportion is too high, and it is the result of the fact that people generally do not admit their ignorance. As a kind of check, we included a simple question, "Do you know the meaning of the word 'KARST'?", to test the basic geologic knowledge of visitors. 50% said 'yes', and a significant proportion of them described the meaning of the word relatively correctly.
Finally, the last question in this issue was "Are you to some extent a 'geotourist'?", to which 20% answered 'yes'. Thus, geotourism unambiguously exists at Aggtelek Karst region even sensu stricto. However, ANP managers do not consider geotourism an important issue, and although they are aware of this notion, they think it negligible for ANP, except of course the cave-related issues, which were considered as highly important by all interviewed managers. They also noted that there are no (human) resources in ANP to promote the issue of geotourism, and possibly to start a process aimed at creating a geopark. At this juncture, we would note that there are examples of this approach in Hungary, when a national park plays a significant role in the professional support of a geopark. This is true for the existing Bakony-Balaton UNESCO Global Geopark and for the Bükk Region Geopark (Baráz et al. 2018), which is presently in development. The tourist motivations were examined using several questions. The Hungarian and foreign language questionnaires had some minor differences in the optional answers (there were some additional answer options in the Hungarian version). For the question "Why did you personally choose this site?" (Fig. 8), the most popular answer among Hungarians was: "I wanted to see the famous Aggtelek Dripstone Cave" (49%), and 33% checked the answer "I'm generally interested in caves". The option "I wanted an adventurous tour" was only selected in 17% of the cases, which means that adventure is not so important, but is an existing viewpoint in the motivations of Baradla visitors. In the Hungarian version, there was a question about the values of the landscape (Fig. 9). Of the answers given, "caves" was overwhelmingly the most frequent (93%) before "forests" (63%) and "surface karst" (55%), underlining that tourists visit ANP because of its geological/geomorphological values. As for the foreign questionnaires, 68% of tourists checked the answer "I'm interested in caves", and 27% selected "I'm interested in karst terrains". The demand for surface hiking is clearly much lower, as only 29% of all respondents answered that they also plan a surface hike. By comparison, a surprisingly high 91% answered that they consider education trails slightly or very important. Internationally, research at several locations has shown how important it was for tourists if an area was declared a national park or a UNESCO World Heritage (Reinius and Fredman 2007). In our questionnaire, there were two questions about this ("Was it important for you that Aggtelek is a National Park?" and "Was it important for you that the Aggtelek caves are part of the World Heritage?") and three optional answers (not at all/slightly/very importantly) to each question (Fig. 10). For more than half of the Hungarian tourists (52%) the "national park" title is "not at all" important, whereas this answer is much less frequent (16%) in case of foreign tourists. The "World Heritage" title is more important for all groups, but foreign tourists selected the "very important" option much more frequently (42%) than Hungarian visitors (only 25%). In addition to tourists' motivations, it is essential to know how tourists get information about ANP (Fig. 11). A remarkable result of our survey is that 49% of Hungarian tourists chose "school education" as an information source! (This option was only in the Hungarian questionnaire.) 
This means that public education still has a very significant role in (geo)tourism, and second, that it is a key question for Aggtelek to remain in the curriculum of elementary schools for the future as well. In this context, there is also a terminological question: the official/ speleological name of the cave is Baradla, but the public knows it-primarily from school-as the "Aggtelek Dripstone Cave". Our survey demonstrates that this popular name is of great significance, and besides the official name it should be used in the future as well, because people can link it to the settlement of Aggtelek, which helps the tourism of ANP. The second most important information source is "personal relations" (40%), whereas "internet in general" option is only the third (37%), though this latter answer was significantly more popular in the English version (63%), but less frequent in the Slovak version (25%). Quite surprisingly, "social media (e.g. Facebook)" was the least important information source (8%) among the fixed answers. The Balance of Geo-versus Bioconservation As mentioned above, ANP was especially created for the protection and management of karst and caves. In the interviews with ANP managers, they clearly expressed that the national park's two most important aims, geoconservation and bioconservation, are of equal importance. The third in the order of aims is landscape protection (Fig. 12). In most Hungarian national parks, bioconservation is (much) more important than geoconservation, and thus we can say that in the special case of Aggtelek, the equality of the above aims is basically in agreement with the intention of the founders. The interviews with ANP managers clarified that during everyday operations, biology related activities and land management receive a higher proportion of the budgetary and human resources. However, the importance of the caves is acknowledged by all of the managers. The budget of large projects funded by different organizations (EU or state) occasionally surpass the base budget of the national park and they may shift the balance either towards geoconservation or towards bioconservation (or land management). To see the proportion of geoconservation in terms of funding, we present some data from ANP. In the period 2007-2013, the ANP received EUR 10.6 million of funding from the EU, of which EUR 2 million was spent on abiotic goals. In the period 2014-2020, ANP received EUR 13.8 million, of which EUR 3.1 million was spent on abiotic goals, i.e. the proportion of geoconservation in these projects is around 20%. Even if it is the smaller part, it is very significant, and while bioconservation needs a lot of everyday activities, in geoconservation the different measures may have longer-term results, and so this distribution of financing is in agreement with geoconservation and other goals according to ANP managers. Here we briefly mention some of the geoconservation related tasks supported by these EU projects. For instance, parts of the former iron mine passages in Esztramos Hill were stabilized, which makes it possible to reach the spectacular caves found in this hill (e.g. Rákóczi Cave). When more passages become safe, Esztramos Hill will become suitable for the presentation of the mining history of this area in an adventurous and authentic way. Another project supported the reconstruction of lighting in several show caves using up-to-date LED technology. 
A third project made it possible to thoroughly survey the main branch of Baradla Cave using 3D LiDAR technology. Béke Cave, a fascinating stream cave discovered in the 1950s, is available only for special groups. However, it had to be closed for several years due to high CO2 concentration. Ventilation was improved by cleaning and widening certain passages within the framework of a project, and thus the CO2 concentration was lowered to a safe level. Geological type sections throughout the karst area were also cleared of overgrown vegetation. So, these projects have an impact on geotourism and also on local earth science research (such as water tracing, microclimate or speleobiology). The interviews with external experts included mostly earth scientists or cavers, so it is less surprising that most of them expressed the opinion that geoconservation should have a slightly higher emphasis in ANP. Some of the interviewees even felt that bioconservation is a bit over-emphasized in ANP, and as a symbolic fact, they mentioned that in the logo of ANP there is a salamander instead of a cave or a more closely cave-related animal (Fig. 13). Nonetheless, most of them acknowledged that geoconservation and bioconservation are well balanced in ANP, and deviations from equilibrium are only small. In order to judge the scientific output of geologic/geographic versus biologic research in ANP, we carried out a search in the largest Hungarian journal database (https://matarka.hu/) and found 422 publications related to the word "Aggtelek". Two hundred and thirty-five of them were published in scientific journals and 187 in popular magazines. After a thematic categorisation, we concluded that biology (48%) and earth sciences (39%) are dominant among the scientific publications, but popular articles are more diverse, including historical, touristic, archaeological, architectural, agrarian and folklore topics, in addition to 25% geology/geography and 19% biology related papers. The temporal changes presented in Fig. 14 are also interesting. Early on, there were only earth science papers, and biological articles appeared only in the second half of the twentieth century, but one can observe an abrupt increase in biological papers in the 1980s, surpassing the number of geo-publications even cumulatively. However, as for the international output, it is observed that according to the Scopus database, geosciences are still dominant if Aggtelek is the topic, because 61% of Aggtelek-related publications come from earth sciences, in contrast to 31% from biological research.

Regional Development and ANP

In the official documents (founding document, rules of organization and operation) of ANP, regional development is not mentioned at all, since it is generally not among the official aims of nature protection organizations. Against this background, ANP managers put regional development in last place in the order of national park goals (Fig. 12, in which aims were scored from 1: not important to 5: most important). Nevertheless, during the interviews many of them expressed that ANP should and can do certain things in favour of local development, especially in a microregion which faces several social problems (emigration, ageing, unemployment). In the following, we present how ANP impacts the socio-economic situation of the surroundings. First of all, this occurs in its role as an employer.
In the microregion, the national park is the largest employer. At present, ANP officially has 130 permanent employees. About twothirds of their salaries are covered by the state budget, but the remaining one-third must be produced by the national park itself. The self-generated incomes of the national park mainly originate from cave tourism, area-based agricultural subsidies and various project funds. In addition to the permanent employees, another 110 people were employed by the national park in the so-called "Public Work Scheme". The public employment system is a special form in Hungary, its main task is to activate long-term unemployed people and to prevent permanent job seekers from exiting working life. It is mainly for people with low education and no professional skills, living in regions where market employment possibilities are limited. The Ministry of Interior offer temporary employment to these individuals by financing the direct expenses on their employment. National parks had the possibility to employ these people in the framework of this system. However, national parks were abruptly excluded from this system in July 2018, and thus public workers at ANP lost their employment, resulting in some tensions. Beyond permanent or fixed-term employment, ANP has contracts with local entrepreneurs, which means an additional 300 people who make money at least partly due to the presence of the national park. These numbers are remarkable in a microregion, where settlements of some 100 inhabitants are typical. The proportion of permanent employees within the population is high only in Aggtelek and Jósvafő settlements. Tourism is also an economic sector dependent on ANP, and it provides employment for another 150-200 people, who work in accommodation business or commerce. Therefore, a very high proportion of local people are directly or indirectly affected by the presence of ANP. In addition to the aforementioned positive effects, some local people mentioned certain negative effects, too, which are generally due to protection related limitations (e.g. prohibition of wood or other plant collection in the protected area; complicated authorization procedures for any investment). Furthermore, there is a conflict with local people about the share of the profit from cave tourism. Some people from Aggtelek and Jósvafő emphasize that the cave was visited even before the national park was created, and that there were more visitors in those days than now. In addition, in those days, the settlement had a more significant share from the cave business. A further complicating factor is that both Aggtelek and Jósvafő would like to get a certain part of the profit, because the Baradla Cave has tourism access sites in both villages. However, the present situation is that the management of the cave and also the profit of cave tourism is under the authority of ANP. Nevertheless, the national park uses it for the benefit of local people as well, but some of the local people do not acknowledge this. Probably, the benefits for the settlements should be communicated more effectively. Furthermore, ANP is unquestionably a guarantee for the protection and professional management of the Baradla Cave. Based on the interviews, it is perceivable that the philosophy of ANP is in a transitional phase now, due to both personal and deeper socio-economic reasons. 
Previously, the general principle was that ANP, as an organization with relatively significant human and financial resources (in a local context), should provide certain operation sectors within the microregion including accommodation and other tourist institutions as well. However, in the present situation when the state budget is decreased year by year, and thus human resources are also cut, ANP can undertake less, and tries to focus only on direct nature protection issues, while leaving the economic development (in tourism and other sectors) for local entrepreneurs. Nonetheless, local entrepreneurship is rather weak in the microregion for several reasons and entrepreneurs do not have enough capital. Discussion Based on the above aspects, we can compare the decreasing visitor numbers of ANP during the last three decades to the worldwide growing general tourist trend (UNWTO 2019; Kuenzi and McNeely 2008). First, we note that the general growing trend masks many individual cases. In fact, when either caves or karstic national parks are considered, there are examples for quickly growing visitor numbers as well as for stagnating or declining trends (Spate and Spate 2013). Some examples for quickly growing cave visitor numbers are Postojna Cave in Slovenia (Šebela et al. 2015) or Mogao Grottoes in China (Jinshi 2014). The preliminary data of Spate and Spate (2013) show that in Western Europe or in Slovakia, the visitor numbers tended to stagnate or fall in the twenty-first century. Gessert et al. (2018) published an analysis of show cave visitor numbers in Slovakia. Their dataset clearly indicates that there was a decline in cave tourism after 2008, that is due to the financial crisis and the change of Slovakia's currency to the euro. Furthermore, they found that the proximity of other cultural and natural facilities, the quality and amount of services, and the overall economic situation also affect cave visitor numbers. The situation in Hungary is similarly complex. Using official visitor numbers data received from the State Secretariat for Environmental Affairs (Ministry of Agriculture), it is observed that most caves had more visitations in the 1970s and 1980s than now. In the twenty-first century, the only cave with a significant increase in visitor numbers is the Tapolca Lake Cave, which at present attracts more tourists than the Baradla Cave despite the fact that the Baradla Cave is much larger and richer in speleothems. Tapolca Lake Cave is found in the Balaton-felvidék National Park and in the Bakony-Balaton UNESCO Geopark. Besides the good management and the possibility of a short underground boat trip, the popularity of this cave is definitely due to the fact that it is close to the western Hungarian Balaton Lake, which is a very popular tourist resort. Similar results were mentioned by Bao and Zhang (2006), who found that cave visitor numbers are dependent on nearby tourist attractions. Spate and Spate (2013) enumerate several potential factors which can affect the number of cave visitors. They categorize the factors as international (e.g. oil prices, disease outbreaks), national (e.g. Gentle Revolution in Slovakia) or local (e.g. weather events, roadworks). As for karstic national parks visitations, the increase of tourism can be observed in certain cases (Mari and Telbisz 2018), for instance in the Croatian national parks (Petrić and Mandić 2014). 
In some cases, the rise in visitor numbers is even too fast, putting an extra burden on nature, such as in Krka National Park (NPKRKA 2018) or in Plitvice Lakes National Park (UNESCO World Heritage Committee 2017). By contrast, there are national parks at "remote" locations, where karst features and show caves are not enough to attract people to the area. The neighbouring Slovak Karst National Park can be mentioned as an example (Clarke et al. 2001; Nolte 2004). In Hungary, the situation is varied (Pádárné Török 2018): Bükk National Park, which has significant karst areas, has been able to increase its visitor numbers since 2005. The partly karstic Balaton-felvidék National Park and Duna-Ipoly National Park, which are near tourist hotspots (Balaton Lake and Budapest, respectively), have also seen significant growth in visitor numbers. By contrast, Duna-Dráva National Park, which also includes a karstic area, has been characterized by stagnating visitor numbers since 2005. This latter national park lies in the southern part of Hungary, in a region where socio-economic conditions are generally weaker. So, considering the above examples, we believe that in the case of ANP, visitor numbers are admittedly affected by the fact that Aggtelek is far from the popular tourist destinations. However, it is possible that well-designed marketing incorporating geotourism elements could partially improve the situation. More generally speaking, it is concluded that visitor numbers are significantly influenced by broader socio-economic factors, and the scientific value of a cave or the efforts of the local stakeholders may occasionally be less effective. The "feeling of remoteness" at Aggtelek mentioned by many interviewees is certainly an interesting fact. The possible reasons are the following: transport inconveniences (bad road conditions in some places, infrequent public transport), the rural and traditional character of the area, the hilly and forested landscape, the weak socio-economic development in general, and the lack of services (e.g., specialized shops). In general, managers and mayors consider this "feeling of remoteness" a disadvantage, because it results in fewer tourists and makes socio-economic development more difficult. However, the "feeling of remoteness" can also be considered a value, as expressed by some tourists and even by some local people, who believe that the "quiet, rural landscape" is a positive characteristic. As for the depopulation of the study area, we can compare it to European and Hungarian trends. In general, rural depopulation is a widespread and thoroughly studied phenomenon in Europe and on other continents alike. On a European scale, most rural settlements have decreasing populations, but the exact proportions vary. In Hungary, the proportion of shrinking settlements (81%) was among the highest in the EU in the period 2001-2011, while in Slovakia, this figure was much lower (42%; ESPON 2017). In Hungary, 31% of settlements are classified as rural settlements (based on Hungarian Central Statistical Office data). The mean annual population change in this category was −0.8% for the period 1970-1990, and −0.06% for the period 1990-2011. For the studied settlements of the Aggtelek Karst, these values were −1.4% and −0.6% respectively, which means that this microregion is more seriously impacted by depopulation than other rural areas in Hungary.
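As an illustration of the "mean annual population change" figures quoted here, the short sketch below computes a geometric mean annual rate between two census counts; both the choice of formula and the population figures are assumptions made for illustration, not data or methods taken from the paper.

```python
# Worked example of a "mean annual population change" metric, computed here as a
# geometric mean annual rate between two census counts. The population figures
# are hypothetical and are NOT data from the Aggtelek Karst study.

def mean_annual_change(pop_start: float, pop_end: float, years: int) -> float:
    """Return the mean annual population change in percent over the given period."""
    return ((pop_end / pop_start) ** (1.0 / years) - 1.0) * 100.0

if __name__ == "__main__":
    # e.g. a village shrinking from 500 to 375 inhabitants between 1990 and 2011
    print(f"{mean_annual_change(500, 375, 21):.2f}% per year")  # about -1.36
```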
Furthermore, several settlements experienced a mean annual decrease of −3% to −5% during these periods, underlining their critical demographic situation. The composition of Aggtelek tourists is somewhat different from the typical composition usually observed at geotouristic or ecotouristic sites. As Allan (2011), Zgłobicki and Baran-Zgłobicka (2013) and Štrba (2019) demonstrated, "geotourists are predominantly young to middle-aged, well-educated, and preferring internet as their primary source of information". Allan (2011) used onsite questionnaires in Australia and Jordan, whereas Zgłobicki and Baran-Zgłobicka (2013) and Štrba (2019) conducted online surveys focusing on SE Poland and Slovakia, respectively. In our ANP survey, middle-aged people are the most frequent. People with a university degree are the majority, but their proportion is not as high as in the samples of Zgłobicki and Baran-Zgłobicka (2013) or Štrba (2019). One reason for this is that many people come to Aggtelek for recreation with family from nearby areas, where the proportion of highly educated people is lower. Another difference is that the main information source is not the internet, but school education and personal relations. This can be explained by the same factors. Since the proportion of recurrent tourists is high, we can agree with the opinion of Allan et al. (2015), who wrote that "retaining the first time tourists or geotourists, is more effective than promoting the geosites to new tourists". One view often articulated in the literature is that cooperation with local people is indispensable for the effective operation of a national park (Butler and Boyd 2000; Papageorgiou and Kassioumis 2005; Frost and Hall 2009; Pietrzyk-Kaszyńska et al. 2012). In the ANP interviews, both sides (ANP managers and mayors) clearly expressed the need for cooperation. However, a systematic consultation process or forum that could serve as a basis for such cooperation does not exist. Therefore, coordination between ANP managers and local leaders is rather informal and ad hoc.

Conclusions

The geoheritage of Aggtelek Karst is highly diverse due to the exokarst and especially the endokarst features. These natural values are complemented by anthropogenic factors, such as the remains of mining history. Geotourism in the broader sense is of primary importance in ANP due to the Baradla Cave. However, it is present rather in an "anonymous" form, and geotourism is not part of the strategic thinking of the national park. The demand for geotourism in the strict sense is less significant, but it does exist. The more explicit expression of geotourism would be important in order to build further tourism links. Moreover, it is also recommended that ANP should view itself as part of a larger touristic destination, which includes a larger region in NE Hungary and also incorporates the areas on the other side of the state border. The relationship between ANP and Slovak Karst National Park is good, but the linguistic and cultural opportunities arising from the Hungarian people living in Slovakia could be more effectively utilized in tourism development. A coordinated (geo)touristic management would be preferable for both countries, but it is still to be developed. In ANP, everybody (managers, mayors, local people, external experts) agrees that tourism should be developed to higher levels, and that the harmful effects of tourism are negligible at present.
The ANP managers hope that income from tourism can to some extent replace the decreasing state revenues, whereas local people expect more employment from tourism. It seems that in the values and operation of ANP, geo- and bioconservation are balanced by consensus, and this balance should be preserved by all means. As one of the experts said, geoconservation cannot be realized without bioconservation and vice versa. This principle should be recommended to other karstic national parks as well. Finally, it is concluded that the existence of ANP is of particular importance for local people. In spite of all difficulties, the look of the villages, the employment possibilities, and the general socio-economic situation are relatively better in ANP than in the neighbouring hilly areas, which are similar in relief and in "remoteness" but are not built up of karstifiable rocks, and therefore have no special geosites in their areas and consequently are not protected by a national park. Thus, we conclude that (geo)conservation in the case of ANP has a positive impact on local development, but it is not enough to solve deeply rooted social problems. For geotourists, however, Aggtelek National Park is a perfect destination with its varied landforms and caves.
Delayed Newcastle disease virus replication using RNA interference to target the nucleoprotein

Each year millions of chickens die from Newcastle disease virus (NDV) worldwide, leading to severe economic and food losses. Current vaccination campaigns have limitations, especially in developing countries, due to elevated costs, the need for trained personnel for effective vaccine administration, and the functional cold-chain network required to maintain vaccine viability. These problems have led to heightened interest in producing new antiviral strategies, such as RNA interference (RNAi). RNAi methodology is capable of substantially decreasing viral replication at the cellular level, both in vitro and in vivo. In this study, we utilize microRNA (miRNA)-expressing constructs (a type of RNA interference) in an attempt to target and knock down five NDV structural RNAs: the nucleoprotein (NP), phosphoprotein (P), matrix (M), fusion (F), and large (L) protein genes. Immortalized chicken embryo fibroblast cells (DF-1) that transiently expressed miRNA targeting NP mRNA showed increased resistance to NDV-induced cytopathic effects, as determined by cell count, relative to the same cells expressing miRNA against alternative NDV proteins. Upon infection with NDV, DF-1 cells constitutively expressing the NP miRNA construct had improved cell survival up to 48 h post infection (h.p.i.) and decreased viral yield up to 24 h.p.i. These results suggest that overexpression of the NP miRNA in cells, and perhaps in live animals, may provide resistance to NDV.

Introduction

Virulent strains of Newcastle disease virus (NDV) are the causative agent of Newcastle disease (ND), a devastating disease of poultry worldwide [1]. Since highly virulent NDV strains cause up to 100% mortality in infected flocks, adequate control of ND is vital to ensure healthy, productive poultry populations [2]. While vaccination campaigns are routinely practiced, they are severely limited by elevated costs, the need for trained personnel for adequate administration, and the requirement to maintain vaccine thermostability during transport [3]. These problems are heightened in developing countries, where vaccine costs become excessive in subsistence farming settings [3–5]. Such limitations have led researchers to explore new avenues to control problematic pathogens, such as NDV, and to generate new, sustainable antiviral strategies [3,5,6]. Ribonucleic acid interference (RNAi) is a naturally occurring intracellular process found in most organisms in which gene expression is controlled through silencing of specific messenger RNAs (mRNAs) [7]. RNAi pathways can be activated by several routes, including microRNA (miRNA), small interfering RNA duplexes (siRNA) and short hairpin RNA (shRNA). These mechanisms of gene silencing are evolutionarily conserved and can silence mRNA at multiple stages of expression, including transcription, post-transcription, and translation [8,9]. In the miRNA pathway, double-stranded miRNAs are processed by the protein Drosha, further modified by Dicer, and the product is then integrated into the RNA-induced silencing complex (RISC) [10]. This multi-protein complex then unwinds the double-stranded RNA and retains a single guide strand, which directs the knockdown of target sequences based on Watson-Crick base pairing [8,11].
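To make the base-pairing step above concrete, the following minimal Python sketch derives a guide strand as the reverse complement of a hypothetical 21-nt target region and locates a perfectly complementary site in a transcript; the sequences are illustrative placeholders, not the NDV targets used in this study.

```python
# Minimal sketch of the Watson-Crick relationship between an RNAi guide strand
# and its mRNA target. The 21-nt target below is hypothetical and is NOT one of
# the NDV sequences designed in this study.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def find_target_site(guide: str, transcript: str) -> int:
    """Return the 0-based position at which the guide is fully complementary to
    the transcript (i.e. the transcript contains the guide's reverse
    complement), or -1 if no perfect match exists."""
    return transcript.find(reverse_complement(guide))

if __name__ == "__main__":
    target_site = "AUGGCUUCUAGCGGUCCUGAA"        # hypothetical 21-nt mRNA region
    guide = reverse_complement(target_site)      # antisense strand retained by RISC
    transcript = "GGGAAA" + target_site + "CCCUUU"
    print("guide (5'->3'):", guide)
    print("match position:", find_target_site(guide, transcript))
```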
In the laboratory, synthetic vectors can be used to express miRNA, siRNA or shRNA, and they have been widely used in the scientific community as a means to decrease viral yields by lowering the amount of viable mRNA that encodes viral proteins, or of other RNA intermediates that are necessary for viral replication [12–14]. RNAi approaches have been successfully used in studies of respiratory syncytial virus (RSV), avian influenza virus (AIV), parainfluenza virus, and coronavirus (including severe acute respiratory syndrome virus) [12–14]. Alvarez et al. reported a reduction in RSV viral concentrations induced by siRNA without inducing off-target pro-inflammatory effects, a potential problem [15]. In similar research, suppression of AIV replication was achieved by using shRNA to knock down expression of the viral polymerase, leading to reduced bird-to-bird transmission of the virus [16]. These studies suggest RNAi may be highly effective in disrupting viral replication and decreasing expression of virus genes [17]. These approaches may now be applied to NDV. NDV is classified as Avian Paramyxovirus serotype 1 (APMV-1), and is an enveloped virus with a negative-sense, single-stranded RNA genome of approximately 15 kb, which encodes six structural proteins, from 3′ to 5′: nucleoprotein (NP), phosphoprotein (P), matrix protein (M), fusion protein (F), hemagglutinin-neuraminidase (HN), and the large polymerase protein (L) [18,19]. Using pre-miRNA to activate the cellular RNAi pathway, a miRNA can be used to target the messenger RNA of NDV structural proteins, leading to the degradation of the transcripts and inhibiting viral replication [11]. Furthermore, by coupling miRNA expression with a lentiviral (LV) delivery system, it is possible to create stable cell populations that constitutively express the miRNA sequence [8]. The use of LV vectors to incorporate exogenous genetic material into the host genome also lends itself to the possibility of creating transgenic animals capable of germline transmission of the transgene [20]. An LV delivering a miRNA that can induce resistance to NDV could be delivered to a donor bird at various stages, resulting in an animal with an endogenous antiviral defense against NDV [6,20]. This approach could lay the basis for a functional, preventative antiviral strategy that does not require the use of additional prophylaxis in chickens. In this study, we attempted to determine whether constitutive expression of miRNA sequences targeting the mRNA of five of the structural NDV proteins in chicken embryo fibroblast cells (DF-1) would lead to decreased viral yield after infection, and/or resistance against NDV cytopathic effects.

Virus

The NDV strain LaSota-Virulent (LS-V) was obtained from the Southeast Poultry Research Laboratory (SEPRL) repository. LS-V is a virulent strain derived from the non-virulent LaSota wild-type by site-directed mutagenesis of the F gene cleavage site, and it has an intracerebral pathogenicity index (ICPI) of 1.69 [21]. LS-V stock was produced as follows [22]. Briefly, virus (100 µL) was propagated in the chorioallantoic cavity of 9–10-day-old embryonating specific pathogen free (SPF) eggs (SEPRL White Leghorn SPF flock). Dead eggs, or eggs surviving after 5 days of incubation, were chilled at 4 °C for 24 h, and HA-positive allantoic fluid was extracted, pooled, clarified by centrifugation (5,000 rpm for 10 min), and divided into 1 ml aliquots in cryovials, which were stored at −80 °C.
Virus stock was titered in DF-1 cells in 96-well plates, and the titer expressed as 50% tissue culture infectious dose (TCID50)/ml. For the purposes of this study, multiplicity of infection (MOI) calculations were carried out using the viral titer expressed in TCID50.

miRNA design

Sequences were designed based on the BLOCK-iT™ Pol II miR RNAi Expression Vector Kit (Invitrogen (Grand Island, CA) CAT# K4938-00) guidelines, and aimed against the transcribed sequences (mRNA) of five NDV genes: one each for NP, P, M and F, and three for the L gene (L1, L2, L3), for a total of seven miRNA sequences. Sequences were designed based on the consensus alignment of multiple NDV strains representing the most commonly circulating NDV genotype II. A genotype II strain representative (such as LS-V) was used since it had been characterized extensively by our group [21,23]. An additional sequence (scramble, SCR) with no identity to known chicken or NDV genes was used as a control in all downstream experiments. The complete sequences of the miRNAs are provided in Table 1. No miRNA was designed for the HN protein due to the lack of highly conserved sequences for this protein.

Production of expression constructs

For each of the eight sequences, double-stranded oligonucleotides (ds oligos) containing the engineered pre-miRNA cassettes were reconstituted by annealing two HPLC-purified single-stranded oligos (custom made, Integrated DNA Technologies). The oligonucleotides were designed according to the Invitrogen manual guidelines (BLOCK-iT™ Pol II miR RNAi Expression Vector Kit): from 5′ to 3′, the top oligo contained a 5-nucleotide (nt) overhang for ligation into the vector, a 21-nt reverse target sequence (Table 1), a 19-nt spacer (terminal loop), and a 19-nt sense target sequence with an internal 2-nt deletion (inner loop). The bottom oligo consisted of the reverse complement of the top sequence, with a 5′ overhang and no 3′ overhang. Annealing of the ds oligos was verified by ethidium bromide gel electrophoresis. Each resulting ds oligo was ligated into the pcDNA™ 6.2-GW/± EmGFP-miR vector. Ligated vectors were used to transform chemically competent E. coli cells (TOP10, Invitrogen (Grand Island, CA) CAT# C4040-03) following the manufacturer's protocol. Transformed cells were grown on LB agar plates with 100 µg/mL blasticidin (Gibco CAT# A11139-03) for selection.

Production of lentivirus

Previously verified expression constructs were subjected to a Rapid BP/LR recombination reaction per the manufacturer's protocol (Invitrogen's Gateway® Technology; CAT# K4938-00) in order to transfer the pre-miRNA cassette to the pLenti6/V5-DEST destination vector (Invitrogen (Grand Island, CA) CAT# V496-10), which is used for lentiviral packaging. Plasmids were transformed into One Shot® Stbl3™ Chemically Competent E. coli (Invitrogen (Grand Island, CA) CAT# C7373-03) according to the manufacturer's protocol. Transformed cells were selected on LB agar plates supplemented with 100 µg/mL ampicillin (Sigma CAT# A0166). Transformant cells were grown in LB broth supplemented with 100 µg/mL ampicillin for DNA extraction using MINI prep (Qiagen (Valencia, CA) CAT# 12123) or MIDI prep (Qiagen (Valencia, CA) CAT# 12143) kits. Ampicillin-resistant colonies were screened by electrophoretic banding upon double enzymatic restriction digestion with XhoI and AflII nucleases to assess correct insertion of the pre-miRNA cassette (New England BioLabs (Ipswich, MA) CAT# R0156; CAT# R0520S). DNA sequencing was then used to confirm the restriction digestion results.
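As a rough illustration of the ds-oligo layout described under "Production of expression constructs", the sketch below assembles a top and bottom strand from a hypothetical 21-nt target. The overhang, loop and target sequences are placeholders rather than the actual BLOCK-iT kit sequences or the NDV targets of Table 1; real oligos should follow the kit manual.

```python
# Sketch of assembling a pre-miRNA ds-oligo following the layout described above
# (5' overhang + 21-nt antisense target + terminal-loop spacer + 19-nt sense
# target carrying an internal 2-nt deletion). All sequences are placeholders,
# not the BLOCK-iT kit sequences or the NDV targets used in the study.

DNA_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(dna: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return dna.translate(DNA_COMPLEMENT)[::-1]

def build_top_oligo(target_21nt: str, overhang: str, loop: str) -> str:
    antisense = reverse_complement(target_21nt)                 # 21-nt reverse target
    sense_with_deletion = target_21nt[:8] + target_21nt[10:]    # 19 nt, internal 2-nt deletion
    return overhang + antisense + loop + sense_with_deletion

def build_bottom_oligo(top: str, overhang: str) -> str:
    # Reverse complement of the top strand minus its 5' overhang, prefixed with
    # the bottom strand's own (placeholder) 5' overhang.
    return overhang + reverse_complement(top[len(overhang):])

if __name__ == "__main__":
    target = "TGCTGACTGAACCTGATCGAA"                    # hypothetical 21-nt target
    top = build_top_oligo(target, overhang="TGCTG", loop="GTTTTGGCCACTGACTGAC")
    bottom = build_bottom_oligo(top, overhang="CCTG")
    print("top:   ", top)
    print("bottom:", bottom)
```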
Confirmed lentiviral vectors were used to package lentiviruses and to produce virus stocks. Production of lentiviruses was conducted following the ViraPower™ Lentiviral Expression System (Invitrogen (Grand Island, CA) CAT# 1165651) protocol using the 293T producer cell line (Invitrogen (Grand Island, CA) CAT# R700-07). Produced lentiviral populations were concentrated using PEG-it™ Virus Precipitation Solution (5X) following the manufacturer's protocol (System Biosciences (Mountain View, CA) CAT# LV810A-1). Lentivirus stock was aliquoted into 1 ml cryovials and stored at −80 °C.

Efficacy of lentiviral vectors (transient expression) against NDV cytolytic challenge

In order to evaluate the efficacy of the designed miRNA cassettes in protecting against NDV cytolytic effects, DF-1 cells were transfected with the lentiviral plasmids (able to express the miRNA cassette) and subsequently infected with LS-V to assess the extent of cell death. DF-1 cells were plated at 5 × ...

Transduction of DF-1 cells

2 × 10⁵ DF-1 cells were plated into 6-well plates; 24 h later, cells were exposed to 500 µL of 1X Polybrene (Sigma CAT# H9268) solution in fibroblast medium. Previously frozen lentiviral stocks for the NP and SCR targets were thawed on ice and gently mixed. Lentiviruses were diluted 1:10 in 500 µL fibroblast medium, gently mixed by pipetting, and then added to each well. Plates were incubated for 72 h at 37 °C in a humidified 5% CO2 incubator. After 72 h, the medium was changed to fibroblast medium containing blasticidin (10 µg/mL).

FACS sorting

Since in the lentiviral constructs the EmGFP gene is co-cistronic with the miRNA cassette, transduced cells underwent two rounds of clonal sorting for GFP in order to produce cell populations expressing high levels of the miRNAs. Briefly, transduced DF-1 cells were clonally sorted using a Beckman Coulter MoFlo XDP, based on the highest level of expression of the EmGFP reporter system (530/40 BP filter), into one well of a 96-well plate, and expanded in fibroblast medium. Upon expansion, cells were sorted a second time using the same criteria and culture method. After two passages, blasticidin (10 µg/mL) was added to the fibroblast medium for selection. In this way, two stably transduced, highly fluorescent DF-1 cell populations expressing the miRNAs for NP and SCR were produced.

Viral challenge of transduced cells

Stably transduced DF-1 cells containing miRNA for the NP and SCR targets and naïve DF-1 cells were plated at 8 × 10⁵ cells/well into 6-well plates, with three technical replicates for each group. 24 h later, cells were infected with LS-V at an MOI of 0.01 (the MOI calculation was based on counting naïve DF-1 cells in extra wells). Cells were infected with LS-V NDV for 1 h at 37 °C in modified fibroblast medium containing only 1% FBS, as previously described. Post infection, cells were washed twice with PBS and then returned to fibroblast medium. To assess viral growth in transduced cells, 200 µL of supernatant were collected at 1, 12, 24 and 72 h.p.i. and replaced with 200 µL fresh medium each time. At 72 h.p.i., phase and fluorescent images were collected, and viable cells were counted with a Nexcelom Bioscience Cellometer Auto T4 using trypan blue (Sigma CAT# T8154) dye exclusion. The amount of virus in the collected supernatant was assessed by limiting dilution in DF-1 cells in 96-well plates and expressed as TCID50/ml according to the Spearman-Karber method.

Table 1 List of sequences used in miRNA design (columns: gene target; sequence of miRNA). Bold indicates directional overhangs for ligation into the expression construct.
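For reference, the following sketch shows one common formulation of the Spearman-Karber TCID50 estimate mentioned above; the dilution scheme and the fractions of CPE-positive wells are hypothetical example numbers, not titration data from this study.

```python
# Minimal sketch of a Spearman-Karber TCID50 calculation. The dilution scheme
# and positive-well fractions below are hypothetical example data.
import math

def tcid50_spearman_karber(log10_start_dil: float, log10_dil_step: float,
                           positive_fractions: list[float]) -> float:
    """Return TCID50 per inoculation volume.

    positive_fractions: proportion of CPE-positive wells at each dilution,
    ordered from the least to the most dilute, and spanning 100% down to 0%.
    """
    # log10(50% endpoint dilution) = log10(first dilution)
    #                                - step * (sum of positive fractions - 0.5)
    log10_endpoint = log10_start_dil - log10_dil_step * (sum(positive_fractions) - 0.5)
    return 10 ** (-log10_endpoint)  # titer per volume inoculated per well

if __name__ == "__main__":
    # eight 10-fold dilutions starting at 10^-1; fraction of positive wells each
    fractions = [1.0, 1.0, 1.0, 0.875, 0.5, 0.125, 0.0, 0.0]
    titer = tcid50_spearman_karber(log10_start_dil=-1, log10_dil_step=1,
                                   positive_fractions=fractions)
    print(f"~10^{math.log10(titer):.2f} TCID50 per inoculum volume")  # ~10^5.00
```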
Statistical analysis Means from multiple groups in the experiment (both for cell count or virus titer per time point) were analyzed by ANOVA with Tukey post hoc test. When only two groups were compared, twosample t-test was performed. For all tests, significance was reported at the level of P 0.05. Transient expression of the NP miRNA construct leads to reduced cytopathic effects and increased cell survival in NDV challenged cells To evaluate the ability of miRNA constructs to targeting and knockdown NP, P, F, L and M mRNA, miRNA constructs expressing miR-NP, miR-P, miR-F, miR-L1, miR-L2, miR-L3 and miR-M were individually transfected into DF-1 cells. To control for potential off target effects, a scramble miRNA (miR-SCR) was also transfected into DF-1 cells. All constructs contained an EmGFP reporter to determine if cells were successfully transfected. At 0 h, phase images showed an intact monolayer consisting of healthy DF-1 transfected cells (Fig. 1AeD, IeL). Expression of EmGFP in transfected cells showed DF-1 cells are capable of being transfected and can successfully express constructs (Fig. 1EeH, (Fig. 2AeD, IeL). However, DF-1 cells transfected with miRNA targeting the NP mRNA (miR-NP) were able to maintain their monolayers up to 72 h.p.i before displaying CPE ( Fig. 2A). Cell counts also confirmed that targeting the NP mRNA could attenuate cell death triggered from NDV infection indicated by the significant increase (up to a 15 fold increase) in cell survival at 72 h.p.i (Fig. 3). Considering these observations, subsequent experiments were conducted exclusively using the miR-NP construct as a potential viral knockdown target. Enhancing and challenging transduced DF-1 cells To determine the potential of the miR-NP construct to convey long term protection at the cellular level, DF-1 cells were transduced with miR-NP and miR-SCR constructs that had been packaged in LV. To isolate a homogeneous population of cells that highly express pLV-shNP and pLV-shSCR, transduced DF-1 cells underwent two rounds of FACS sorting to isolate cells that were highly GFP positive. Due to the EmGFP gene and miRNA cassette being cocistronic, cells expressing the EmGFP should also express the miRNA product. After stable cultures of post-sorted cells were established, each of the DF-1 transduced cell populations (pLV-miR-NP and pLV-miR-SCR) and a naïve control cell line (a nontransduced cell line) were challenged with LS-V at MOI 0.01. Phase contrast images at 48 and 72 h.p.i showed similar results to transfection results (Fig. 4AeF). pLV-miR-NP cultures retain an intact monolayer at 48 h.p.i ( Fig. 4A; as indicated by arrow) while the SCR control and naïve DF-1 cultures show substantial syncytia formation and destruction of the monolayer (Fig. 4BeC; as indicated by arrowheads). However, by 72 h.p.i additional CPE were apparent in all of the cultures including pLV-miR-NP as demonstrated by the overwhelming presence of syncytia and little to no visibly healthy cells (Fig. 4D). Further characterization of NDV resistance of pLV-miR-NP was evaluated by determining viral titers in supernatant collected after LS-V infection (MOI 0.01) at 0, 12, and 24 h.p.i. pLV-miR-NP resulted in significantly (p < 0.05) lower NDV viral titers at both 12 and 24 h time compared to pLV-miR-SCR (Fig. 4G). Taken together, these results suggest that DF-1 cells transduced with pLV-miR-NP are capably of decreasing the amount of NDV viral replication following in vitro viral challenge with a velogenic strain of NDV. 
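The group comparisons reported here (one-way ANOVA with a Tukey post hoc test, or a two-sample t-test when only two groups are compared, at P ≤ 0.05) could be reproduced along the following lines. The replicate cell counts below are placeholders, since the actual values are only shown in Fig. 3; scipy and statsmodels are assumed as the statistics libraries.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical 72 h.p.i. viable-cell counts (three technical replicates per group).
counts = {
    "miR-NP":  [4.1e5, 3.8e5, 4.4e5],
    "miR-SCR": [2.6e4, 3.1e4, 2.2e4],
    "naive":   [2.4e4, 2.9e4, 2.0e4],
}

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(*counts.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey HSD post hoc test to identify which pairs of groups differ
values = np.concatenate([np.asarray(v, dtype=float) for v in counts.values()])
labels = np.repeat(list(counts.keys()), [len(v) for v in counts.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Two-sample t-test when only two groups are compared
t_stat, p_t = stats.ttest_ind(counts["miR-NP"], counts["miR-SCR"])
print(f"t-test miR-NP vs miR-SCR: t = {t_stat:.2f}, p = {p_t:.4f}")
```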
Discussion In this study we demonstrated that knockdown of the NP NDV viral mRNA in DF-1 cells could lead to decreased cell death and reduced titers (up to a 2-log decrease compared to pLV-miR-SCR control) during early stage infection with highly virulent LaSota NDV. Knockdown of the NP protein resulted in delayed viral replication when compared to the scramble control. In unsegmented, negative strand RNA viruses, such as NDV, the NP protein plays a significant role in the replication and transcription of NDV [24]. Specifically, NP, together with both the P and L proteins, interacts with the genomic RNA to form the ribronucleoprotein (RNP) which is the template for RNA synthesis. In this protein complex, NP encapsulates the RNA genome allowing proper function of the NDV polymerase [25,26]. NP knockdown lowers the amount/ viability of RNP and, by disrupting this essential lifecycle step, it is reasonable to suspect that knocking down the NP transcripts within infected cells resulted in the observed delay of viral replication, as shown by another group working with NDV [17]. A similar study conducted with AIV (an orthomyxovirus) also used RNAi to target several of AIV's proteins and reported knockdown of NP was notably effective in limiting production of the virus compared to other targets [27]. Furthermore, other influenza studies have also targeted the NP protein using RNAi and observed a decrease in the amount of the reciprocal viral mRNA, virion RNA and its complementary RNA [28]. The NP protein has able been indicated as a target for administering antiviral drugs [29]. Results such as these suggest that NP may be a prime target for controlling and limiting viral replication NDV and similar viruses. pLV-miR-NP transduced DF-1 cells were observably healthier than control cells types up to 72 h after challenge with LS-V. However, these cells were unable to survive long term in culture with cells at 72 h.p.i., showing significant cell death and syncytia formation by at this time point. While long-term survival was not observed in vitro, it is possible that delaying the rate of infection can ultimately lead to improved animal survival. Commonly, a standard challenge dose between 10 5 and 10 6 EID50 is used experimentally to induce 100% infection and clinical signs in chickens [18]. Based on our observation that pLV-miR-NP led to a 2fold decrease in viral titers in vitro, in theory, a transgenic bird containing our construct may require an increased amount of virus to generate an infection. It is also reasonable to consider that a decreased amount of viral titers detected could translate to a reduction in viral shedding of infected birds. Such is the case in a study completed by Lyall et al. exploring the amount of viral shedding in transgenic chickens expressing an shRNA against the polymerase of AIV [16]. Lyall describes that after viral challenge with a highly pathogenic strain of AIV, efficiency of transmission of the virus to other transgenic birds as well as non-transgenic animals was mitigated as assessed by histopathology and immunohistochemistry [16]. While some birds died in the study, the decrease of viral transmission can translate into a reduction of viral propagation to birds in close contact, decreasing the spreading of the disease and substantially contributing to outbreak control. With the success of infectivity studies with AIV, it is conceivable that similar results could be obtained with NDV challenge studies in transgenic birds. 
Other researchers have had success using the NP protein as an antiviral target, utilizing RNAi methodology to inhibit NP expression and reduce viral titers in culture; however, this work was done in non-avian Vero cells [30]. It is more beneficial to conduct such studies in an avian cell type, as the NDV virus then does not have to adapt to a non-native cell type, which may lead to mutations that are not naturally found in chickens. In addition, the study of RNAi approaches in avian cultures is likely to be more representative with respect to the pathophysiology of NDV. Yue et al. also showed that shRNA targeting of NP in chicken embryonic fibroblasts led to knockdown of viral NP mRNA [17]. However, that study failed to examine the effect of constitutive expression of the NP shRNA and only examined transient expression. Understanding the effect of continued, long-term expression is a key component if this technology is ever to be translated to use in live animals in a production setting. In this study, the miRNA sequences were designed based on conserved regions among genotype II representatives. This genotype was selected because of the extensive characterization of LS-V (a representative of genotype II), both in vivo and in vitro. In order to provide protection against the many NDV genotypes circulating worldwide (eighteen described so far [31]), other conserved genomic regions, in close proximity to the one used here, could be used to generate a broadly protective effect. For instance, this could be accomplished by deploying chained pre-miRNAs that deliver multiple miRNA species, thereby targeting multiple conserved regions at the same time.
Conclusion
pLV-miR-NP constructs constitutively expressed in DF-1 cells led to attenuated CPE and reduced LS-V titers after NDV viral challenge. This study suggests that future transgenic animal studies are warranted. Such studies would likely result in animals with endogenous resistance to NDV infection, with potential benefits for small village farmers as well as large-scale international poultry operations. With widespread adoption of resistant transgenic birds, the production of resistant animals may one day be even more cost effective than the use of standard vaccine programs. Traditional vaccine programs carry the costs of transportation, logistics and infrastructure to maintain a cold chain, labor costs to administer vaccines, and the cost of birds lost to ineffective vaccines. In addition, the use of miRNA NDV-resistant birds in economically impoverished countries and rural areas where vaccines are not readily available could be paradigm changing by providing food and financial security.
A Contrastive Approach to Multi-word Extraction from Domain-specific Corpora In this paper, we present a novel approach to multi-word terminology extraction combining a well-known automatic term recognition approach, the C--NC value method, with a contrastive ranking technique, aimed at refining obtained results either by filtering noise due to common words or by discerning between semantically different types of terms within heterogeneous terminologies. Differently from other contrastive methods proposed in the literature that focus on single terms to overcome the multi-word terms' sparsity problem, the proposed contrastive function is able to handle variation in low frequency events by directly operating on pre-selected multi-word terms. This methodology has been tested in two case studies carried out in the History of Art and Legal domains. Evaluation of achieved results showed that the proposed two--stage approach improves significantly multi--word term extraction results. In particular, for what concerns the legal domain it provides an answer to a well-known problem in the semi--automatic construction of legal ontologies, namely that of singling out law terms from terms of the specific domain being regulated. Introduction Terminology extraction is a central field of research for a number of Knowledge Management applications, such as Ontology Learning, Text Mining, Information Retrieval, etc. Starting from the assumption that terms unambiguously refer to domain-specific concepts, a number of different methodologies has been proposed so far to automatically extract domain terminology from texts. Generally speaking, the term extraction process consists of two fundamental steps: 1) identifying term candidates (either single or multi-word terms) from text, and 2) filtering through the candidates to separate terms from non-terms. To perform these two steps, term extraction systems make use of various degrees of linguistic filtering and, then, of statistical measures ranging from raw frequency to Information Retrieval measures such as Term Frequency/Inverse Document Frequency (TF/IDF) (Salton et al., 1988), up to more sophisticated methods such as the C-NC Value method (Frantzi et al., 1999), or lexical association measures like log likelihood (Dunning, 1993) or mutual information. Others make use of extensive semantic resources (Maynard et al., 1999), but as underlined in Basili et al. (2001b), such methods face the hurdle of portability to other domains. Another interesting line of research is based on the comparison of the distribution of terms across corpora of different domains. Under this approach, identification of relevant term candidates is carried out through inter-domain contrastive analysis ( (Penas et al., 2001;Chung et al., 2004;Basili et al., 2001a) ). Interestingly enough, this contrastive approach has so far been applied only to the extraction of single terms, while, multi-word terms' selection is based upon contrastive weights associated to the term syntactic head. This choice is justified by the assumption that multiword terms typically show low frequencies making contrastive estimation difficult (Basili et al., 2001a). On the contrary, we aim at focusing our attention on the extraction of multi-word terms, which have been demonstrated to cover the vast majority of domain terminology (85% according to Nakagawa et al. (2003)); for this reason, we believe that they have to be considered independently from the head. 
Aware of the problem of data sparseness of multi-word terms, we propose a two-stage approach where we firstly extract a shortlist of well-formed and relevant candidate multi-word terms, and secondarily we apply a contrastive method against the selected terms only. The proposed methodology has been tested on Italian text collections belonging to two different domains, presenting different degrees of complexity: the Art History domain and the Legal domain. The latter appears to be quite challenging because of the acknowledged difficulties in discerning law terms from terminology of the regulated domain (Lame, 2005;Lenci et al., 2009). General extraction method The multi-word term extraction methodology we propose here is based on a combination of "termhood" measures, assessing the likelihood of being a valid technical term, and contrastive methods. In particular, multi-word term extraction is carried out by identifying candidate multi-word terms in an automatically POS-tagged and lemmatized text, which are then weighted with the C-NC value, currently considered as the state-of-the-art method for terminology extraction. The ranking of identified multi-word terms is then revised on the basis of a contrastive score calculated for the same terms with respect to corpora testifying general language usage. The main novelty of the proposed approach lies in the fact that, differently from previous studies, here the contrastive analysis is applied to previously identified multi-word terminology, with the aim of further filtering it. Starting from the assumption that domain relevant multi-words are unique elements, separate from single terms, we rather prefer basing multi-word extraction on their concrete frequency of occurrence in corpora. Such an approach becomes particularly useful when the domain text collection also includes particularly frequent common words which make the final result noisy or, more crucially, when the resulting terminology is highly heterogeneous as in the case of legal texts. In the following sections we describe, in 2.1., the multi-word candidates extraction process, and in 2.2. the subsequent contrastive ranking process. Multi-word term extraction In this section, we discuss the candidate extraction process, that makes use of: i) linguistic filters; ii) stoplist; iii) statis-tical filters (C-NC Value). Linguistic filters The linguistic filters operate on the automatic POS-tagged and lemmatized text, making use of different kinds of linguistic feature. The POS-tagged text, obtained with the tagger described in Dell'Orletta (2009), is searched for on the basis of a set of POS patterns encoding morphosyntactic templates of candidate complex terms covering the main nominal modification types. Specifically, for each multi-word term to be identified in texts, we used POS-restrictions constraining the start-token and finaltoken POSs, but also the internal-tokens POSs. Since we were interested in nominal "chunks", which consist of nouns, adjectives and prepositions (Justeson et al., 1995), we use linguistic filters that accept only those kind of part of speech. Specifically, we identify sequences of allowed POS patterns in order to cover most of the Italian morphosyntactic multi-words structures, using the following pattern: Noun+(Prep+(Noun|ADJ)+ |Noun|ADJ)+ The choice of linguistic filters affects the precision and the recall of the output list, e.g a restrictive filter will have a positive effect on precision and a negative effect on recall (Basili et al., 2001a). 
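A minimal sketch of how such a POS-pattern filter might be applied to a POS-tagged, lemmatized token stream is given below. The one-letter tag codes (N = noun, A = adjective, P = preposition, O = other), the regular-expression encoding of the pattern and the enumeration of shorter prefixes (so that nested candidates are also retained for the later C-value computation) are illustrative choices of ours, not the authors' implementation; the maximum candidate length is the domain-dependent constraint discussed next.

```python
import re

# One letter per token; the pattern mirrors  Noun (Prep (Noun|Adj)+ | Noun | Adj)+
TERM_PATTERN = re.compile(r"N(?:P[NA]+|N|A)+")

def candidate_spans(pos_tags, max_len=4):
    """Yield (start, end) token spans whose POS sequence matches the pattern
    and whose length does not exceed the domain-dependent limit."""
    tag_string = "".join(pos_tags)
    for match in TERM_PATTERN.finditer(tag_string):
        start, end = match.span()
        # emit shorter prefixes too, so nested candidates are kept
        for stop in range(start + 2, min(end, start + max_len) + 1):
            if tag_string[stop - 1] != "P":   # a candidate cannot end in a preposition
                yield start, stop

# Lemmatized toy sentence: "produzione artistico di qualità" followed by a verb.
tokens = ["produzione", "artistico", "di", "qualità", "aumentare"]
tags   = ["N", "A", "P", "N", "O"]
for s, e in candidate_spans(tags):
    print(" ".join(tokens[s:e]))
```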
In our method, we use a filter which also constrains the maximum number of words of which a complex term can be made. In fact, we operate on the candidate terms' length (l) as one of the main linguistic constraints to be ruled. We believe that such a measure is to be considered as domain-dependent, being related to the linguistic peculiarities of the specialised language we are dealing with. Stoplist At this stage, linguistically filtered candidate multi-word terms are screened by using a multi-word preposition stoplist; in order to extract domain-specific multi-word prepositions, this list is obtained with a first run of the same termextraction procedure operating on the corpus from which we are going to extract multi-word terms. To extract prepositional candidates, this method uses, specifically, the linguistic pattern (Noun+Prep+Noun). Resulting multi-word prepositions won't be considered as a start or end element of a term: in this way, non-sense terms such as sensi della legge lit. 'senses of the law are avoided due to the overlapping with the multi-word preposition ai sensi di, 'by law'. With these types of constraints (linguistic filtering, terms' length and stoplist filtering) the typology of multi-word term candidates is anyway quite varied, ranging from terms such as ricerca artistica 'artistic research', Ministro dei Beni culturali 'Minister of Cultural Heritage' to piano di gestione di bacino 'management plan of the basin'. Statistical filters based on C-NC Value As a statistical filter, we use the C-NC Value measure as described in Frantzi et al. (1999) and Vintar (2004). The C-Value method aims at bringing out those terms which tend to occur as nested terms, then, the NC-Value incorporates context information to the C-Value, aiming at improving term extraction in general. C Value. The C-Value calculates the frequency of a term and its subterms. If a candidate term is found as nested, the C-Value is calculated from the total frequency of the term itself, its length and its frequency as a nested term; while, if it is not found as nested, the C-Value, is calculated from its length and its total frequency. Given the candidate term t , and being |t| its length, the C-Value of t is given as: where f (t) is the frequency of t in the corpus, T t is the set of terms that contain t, P (T t ) is the number of candidate terms in T t , and b∈Tt f (b) is the sum of frequencies of all terms in T t . NC Value. The NC-Value measure (Frantzi et al., 1999) aims at combining the C-Value score with the context 1 information. A word is considered a context word if it appears with the extracted candidate terms. The algorithm extracts the context words of the top list of candidates (context list) 2 , and then calculates the N-Value on the entire list of candidate terms. The higher the number of candidate terms with which a word appears, the higher the likelihood that the word is a context word and that it will occur with other candidates. If a context word does not appear in the extracted context list, its weight for such term is zero. Formally, given w as a context word, its weight will be: weight(b) = t(w) n where t(w) is the number of candidate terms w appears with, and n is the total number of considered candidate terms; hence, the N-Value of the term t will be w∈Ct f t (w) * weight(w), where f t (w) is the frequency of w as a context word of t, and C t is the set of distinct context words of the term t. 
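For concreteness, the C-value computation just described, and its combination with the context-based N-value into the final NC-value (with α = β = 0.5, as stated next), can be sketched as follows. The sketch follows the standard formulation of Frantzi et al. (1999); the toy frequencies and context counts are invented for illustration.

```python
import math
from collections import Counter

def c_values(term_freq: Counter) -> dict:
    """C-value: term_freq maps each candidate multi-word term (tuple of lemmas)
    to its frequency in the domain corpus."""
    scores = {}
    for t, f_t in term_freq.items():
        # longer candidates that contain t as a nested sub-term
        containers = [b for b in term_freq
                      if len(b) > len(t)
                      and any(b[i:i + len(t)] == t for i in range(len(b) - len(t) + 1))]
        if not containers:
            scores[t] = math.log2(len(t)) * f_t
        else:
            nested_freq = sum(term_freq[b] for b in containers)
            scores[t] = math.log2(len(t)) * (f_t - nested_freq / len(containers))
    return scores

def nc_values(c_vals: dict, term_contexts: dict, context_counts: dict,
              alpha: float = 0.5, beta: float = 0.5) -> dict:
    """NC-value = alpha * C-value + beta * N-value, with
    N-value(t) = sum over context words w of f_t(w) * weight(w)
    and weight(w) = number of candidate terms w appears with / n."""
    n = len(c_vals)
    weights = {w: len(terms) / n for w, terms in context_counts.items()}
    return {t: alpha * c + beta * sum(f_tw * weights.get(w, 0.0)
                                      for w, f_tw in term_contexts.get(t, {}).items())
            for t, c in c_vals.items()}

# Toy data (invented counts).
freqs = Counter({("bene", "culturale"): 30,
                 ("ministro", "di", "bene", "culturale"): 8,
                 ("arte", "contemporaneo"): 25})
cv = c_values(freqs)
term_contexts = {("bene", "culturale"): {"museo": 4, "norma": 2},
                 ("arte", "contemporaneo"): {"museo": 6}}
context_counts = {"museo": {("bene", "culturale"), ("arte", "contemporaneo")},
                  "norma": {("bene", "culturale")}}
print(nc_values(cv, term_contexts, context_counts))
```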
Finally, the general score, NC-Value, will be: where, in our model, α and β are set empirically (α = 0.5 and β = 0.5). Multi-word terms contrastive ranking The list of multi-word terms extracted at the processing stage described in section 2.1. is then ranked by resorting to a contrastive method. Differently from the other contrastive methods ( (Basili et al., 2001a;Penas et al., 2001;Chung et al., 2004;Kozakov et al., 2004)) that are applied only to single terms for avoiding the multi-word terms' sparsity problem, we apply the contrastive function directly to complex terms. However, being aware of such a problem, we overcome the sparsity issue by splitting the process into two different steps. First we select well-formed, relevant multi-words, having significant distributional tendencies; afterwords we apply the contrastive function only to these pre-selected multi-word terms. With this procedure we focus, firstly, on the retrieval of valid technical terms, and secondarily on domain pertinence, in two distinct but consequent moments. In what follows we start describing the approach used by Basili et al. (2001a) (section 2.2.1.). Then, we describe a new approach where Basili's approach is applied directly on multi-word terms showing that, multi-word terms can be treated as autonomous entities (section 2.2.2.). In this method, we differ from other TF-IDF-like approaches since we obtain a modular structure which allows us to use different functions both for multi-word extraction, and for contrastive ranking, according to the specific tasks. In this case, we focus on the double domain terminology problem, as described in 3.2., and, for this purpose, we propose a new contrastive function aiming at distinguishing double domain terminology (described in 2.2.3.). Basili et al. (2001a) proposed a Contrastive method, henceforth referred to as Contrastive Selection via Heads (CSvH), where the selection of multi-word terms in the target domain is done according to contrastive information related to their head. The CSvH method can be divided in two steps: Contrastive Selection via Heads • single candidate terms are selected using a contrastive function based on their distribution in the target and contrastive corpora; • the single weighted terms are the heads of multi-word terms and the multi-word term scores are calculated by multiplying the head contrastive value with the frequency of the multi-word term in the target domain. The contrastive function used by Basili et al. (2001a) is a TF-IDF inspired measure. However instead of Inverse Document Frequency, they used Inverse Word Frequency (IWF): where, st is a candidate single term, N is the size of the contrastive corpus and F (st) is the frequency of st in all domain corpora. In the same way as the TF-IDF measure, the TF-IWF measure takes into account the frequency of the candidate term st in the target domain to avoid the penalization of high frequency terms. Therefore, the contrastive function is: where f i (st) is the frequency of st in the target domain i. In the second step of the CSvH method, multi-word terms are weighted by multiplying the contrastive value of their head with the frequency of the term in the target domain. Hence, the contrastive weight (Cw) of the multi-word term t in the domain i is defined as: where f i (t) is the frequency of the term t in the target domain i and w i (h(t)) is the contrastive weight of the term's head. 
Term Frequency Inverse Term Frequency The Term Frequency Inverse Term Frequency (TFITF) method is a variant of Basili et al. (2001a). Differently from CSvH, the contrastive function is applied directly on a list of previously selected candidate multi-word terms. In our work we use the multi-word extraction process described in Section 2.1. for obtaining the list of candidate multi-word terms. Given the set of multi-word terms T extracted from the target domain i, the TFITF value of term t ∈ T is: where f i (t) is the frequency of t in the target domain i and IW F (t) is defined: F (t) is the frequency of t in all domain corpora and N is: Contrastive Selection of Multi-word terms Starting from the assumption that multi-word terms are less frequent than single terms, we introduce a new Contrastive method, called Contrastive Selection of multi-word terms (CSmw), particularly suitable for handling variation in low frequency events. As in the TFITF method, the CSmw statistical weight is assigned directly to multi-word terms. The CSmw function is based on an arctangent function of this form: where K is a coefficient. This function presents two interesting features: • the presence of an asymptote in the point (0, π/2), • the higher the coefficient K the faster the knee of the function gets closer to the asymptote. Therefore, given the set of multi-word terms T extracted from the target domain i and a set of contrastive domains C, we defined the coefficient K as: Where t ∈ T , K(t) is the coefficient of t, F c (t) is the sum of the frequencies of t in the contrastive corpora and N c is the sum of the frequencies of all elements of T in the contrastive corpora. More formally and K(t) has the property that when F c (t) increases, K(t) decreases and vice-versa. Hence the statistical function is: where f i (t) is the frequency of t in the domain corpus. This function guarantees three fundamental properties for tackling our tasks, given two terms t1 and t2: Finally, we moderated the positive effect of the low frequency of t in the contrastive corpora (F c (t)) by multiplying the argument of the arctangent for the logarithm of the frequency of t in the domain corpora (log(f i (t))). So the CSmw function is: Figure 1 illustrates the CSmw function. Given a target domain D and three terms extracted from D (d1, d2, d3) with three different frequencies in a contrastive domain C (F c (d1) = c1 = 10000, F c (d2) = c2 = 1000, c3 = 100), Figure 1 shows the CSmw contrastive function as the number of occurrence of d1, d2, d3 in the target domain changes. Case studies The term extraction methodology described above has been tested in two case studies carried out in the History of Art and Legal domains. The Art History corpus has been collected by a domain expert and includes texts representative of different artistic periods, for a total of 326,066 tokens. The legal corpus is constituted by a collection of European legal texts of 394,088 word tokens concerning the environmental domain; this corpus will be hereafter referred to as "Environmental Corpus". As a general contrastive corpus we used the PAROLE Corpus (Marinelli et al., 2003), made up of about 3 million words and including Italian texts of different types (newspapers, books, etc.). Extraction of domain specific terminology from an Art History corpus In this case study we used the Art History corpus as the target domain corpus and the PAROLE Corpus as the contrastive corpus. 
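For concreteness, the two contrastive scores used in these case studies can be sketched as below. Since the formulae themselves were lost in the extracted text, this is a reconstruction from the surrounding description: TFITF as f_i(t) · IWF(t) with a logarithmic inverse frequency, and CSmw as an arctangent (asymptote at π/2) whose coefficient K(t) is inversely proportional to the contrastive-corpus frequency and whose argument is moderated by log f_i(t). The exact normalisations may differ from the authors' implementation.

```python
import math

def tfitf(f_domain: int, f_all_domains: int, contrastive_corpus_size: int) -> float:
    """TFITF(t) = f_i(t) * IWF(t), with IWF(t) taken as log(N / F(t))."""
    return f_domain * math.log(contrastive_corpus_size / max(f_all_domains, 1))

def csmw(f_domain: int, f_contrastive: int, n_contrastive_total: int) -> float:
    """Reconstructed CSmw score: arctangent of the domain frequency, scaled by
    K(t) = N_c / F_c(t) and moderated by log(f_i(t)); bounded above by pi/2."""
    k = n_contrastive_total / max(f_contrastive, 1)   # rarer in the contrast -> larger K
    return math.atan(math.log(f_domain) * f_domain * k)

# Two terms with the same domain frequency but different contrastive frequencies
# (toy counts): the term that is rare in the contrastive corpus scores higher.
print(csmw(f_domain=3, f_contrastive=2, n_contrastive_total=10_000))
print(csmw(f_domain=3, f_contrastive=5_000, n_contrastive_total=10_000))
```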
We selected a top list from the candidate term list ranked on C-NC Value score (2.1.3.), which was obtained by setting an empirically defined threshold: i.e. the first 600 terms of the ranked list were selected. Such a selected list turned out to include domain-specific terms (e.g. pittura italiana, 'Italian painting') but also opendomain ones (e.g. ente locale, 'local authority'). The final term list is represented by the top list of 300 terms ranked according to the contrastive score: such a list includes domain-specific terms only, without noisy common words. It should be noted that the two thresholds for top lists' cutting as well as the maximum term length can be customized for domain-specific purposes through the configuration file 3 . As it was discussed in Section 2.1.1., the length of multi-word terms is dramatically influenced by the linguistic peculiarities of the domain document collection. We empirically tested that for the Art History domain multi-word terms longer than 4 tokens introduce noise in the acquired term list. Table 1 contains a fragment of the acquired list of 300 multi-word terms we obtained following the contrastive approach described in 2.2.. Artistic Multi-words movimento artistico (artistic movement) figura umano (human figure) arte contemporaneo (contemporary art) produzione artistico (artistic production) pittore italiano (Italian painter) mostra online (online exhibition) percorso espositivo (exhibition path) collezione privato (private collection) arte italiano (Italian art) bene culturale (cultural heritage) Extraction of domain specific terminology from a legislative corpus The second case study has been carried out on the legal domain which poses the further challenge of the highly het-erogeneous nature of the extracted terminology, typically including legal terms as well as terms of the domain being regulated. So far, term extraction applied to legal domain corpora results in a hybrid term glossary, including terminology of mixed nature. We believe that the proposed approach can be of some help in discriminating legal terms from regulated-domain terms, a crucial topic that -to our knowledge -has never been tackled in the terminology extraction literature. This has been achieved by iterating the contrastive process more than once against two contrastive corpora of different nature. As it is illustrated in Figure 2, the Environmental corpus we exploited in this second case study has been contrasted, first, against the open-domain PAROLE Corpus and, then, against a legal corpus belonging to a domain other than the environmental one. This latter corpus, of 74,210 word tokens, containing European law texts on consumer protection, will be hereafter generically referred to as "Legal Corpus". Similarly to the Art History case study, from the C-NC Value ranked terms' list, we selected a top list 4 , thus obtaining a shortlist of 600 either legal (e.g. norma europea, 'European norm'), environmental (e.g. emissione di gas a effetto serra, 'emission of greenhouse gases') or opendomain terms (e.g. direttore generale, 'director-general'). Afterwards, we firstly contrasted a top list of 600 multiword terms against the PAROLE Corpus, in order to reduce the noise deriving from highly frequent common words. Then, we contrasted a top list of 300 environmental-legal multi-word terms against the Legal Corpus, obtaining a final list of 300 terms ranked on the contrastive score (as described in 2.2.3.). 
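The double contrast just described can be read as two rank-and-cut passes over the candidate list, as in the sketch below. The cut-off sizes (600 and 300) are those reported above; the frequency dictionaries and the scoring function (for instance the csmw sketch given earlier) are assumed inputs rather than part of the original description.

```python
def contrast_and_cut(terms, freq_domain, freq_contrastive, score_fn, keep):
    """Rank `terms` by a contrastive score against one contrastive corpus and
    keep only the `keep` highest-ranked candidates."""
    n_c = sum(freq_contrastive.values()) or 1
    ranked = sorted(terms,
                    key=lambda t: score_fn(freq_domain[t],
                                           freq_contrastive.get(t, 0), n_c),
                    reverse=True)
    return ranked[:keep]

def double_contrast(cnc_top600, f_env, f_parole, f_legal, score_fn):
    """600 C-NC candidates -> 300 after contrast with the open-domain PAROLE
    corpus (common-word noise removed) -> the same 300 re-ranked against the
    consumer-protection Legal corpus, so that environmental terms rise to the
    top of the final list and law terms sink to the bottom."""
    stage1 = contrast_and_cut(cnc_top600, f_env, f_parole, score_fn, keep=300)
    return contrast_and_cut(stage1, f_env, f_legal, score_fn, keep=300)
```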
Also in this case, it should be noted that all thresholds for top lists' cutting have been empirically defined after several experimental tests. This double contrast was aimed at discerning, in the input list, terms belonging to the two different target domains, namely environmental and legal terms: whereas the former were expected to be found at the top of the final list ranked according to the contrastive score, the latter were expected at the bottom. In this case we empirically tested that in corpora of environmentallegal texts, relevant domain-specific information is carried by multi-word terms longer than those occurring in the Art History texts; for this reason the maximum multi-words' length has been set at 6 tokens. It is the case of both legal terms, such as e.g. testo della disposizione essenziale del diritto, 'text of the law essential provision', and terms which belong to the regulated domain, such as e.g. inquinamento atmosferico trasfrontaliero a grande distanza, 'long distance atmosferic transfrontier pollution'. Tables 2 and 3 report two fragments of the 300 multi-word term list we obtained by iterating the contrastive process. In particular, Table 2 contains the first 10 terms of the final list while Table 3 shows the last 10 terms. Interestingly enough, our initial hypothesis seems to be proved: the top of the final list as reported in Table 2 contains environmental terms, while the legal terms can be found at the bottom (see Table 3). These results will be discussed more in detail in Section 4.. Environmental terms sostanza pericoloso (hazarous substance) salute umano (human health) sviluppo sostenibile (sustainable developement) principio attivo (active ingredient ) inquinamento atmosferico (air pollution) valore limite di emissione (emission limit value) effetto serra (greenhouse effect) rifiuto pericoloso (hazardous waste) corpo idrico (water body) cambiamento climatico (climate change) Legal terms funzionamento di mercato interno ( functioning of national market) disposizione essenziale di diritto interno (essential internal provision of national law) diritto nazionale (national law) disposizione nazionale (national provision) diritto interno (national law) norma nazionale (national rule) disposizione legislativo (legislative measure) responsabile di formulazione (formulator) legislazione comunitario (community legislation) disposizione comunitaria (community provision) Evaluation The evaluation of the acquired multi-word term lists was carried out by adopting similar evaluation criteria for the two case studies even though partially different extraction methodologies have been exploited. General evaluation criteria The multi-word term lists extracted in the case studies described in 3.1. and in 3.2. have been evaluated both against gold-standard resources and through manual validation by domain experts. These two different evaluation types were specifically aimed at dealing with two general issues of multi-word terms evaluation: i) the considered reference resources have a good coverage of domain specific single terms, but they do not have a proper coverage of domain-specific complex terms (e.g. scena di genere, 'genre works'); ii) many terms cannot be easily unambiguously categorized as belonging to a specific domain. As it will be discussed in Section 4.3., ii) is often the case of those terms that occur in legal documents but refer to objects or concepts of the real world, regulated by the law; e.g. 
terms such as rifiuto pericoloso 'dangerous waste' or inquinamento atmosferico 'atmosferic pollution' label environmental concepts which typically occur in environmental-specific laws. Consequently, they are included in both environmental and legal terminological re- Evaluation results of the Art History case study The first phase in the evaluation of the 300 multi-word term lists, extracted from the Art History Corpus, was carried out by automatically comparing the acquired list against a Art glossary 5 . Afterwards, the results of this first evaluation phase have been manually validated by a domain expert. Eventually, we obtained four lists of 300 validated terms, further divided in 30-term groups which show domainspecific terms' distribution. Contrastive-based methods have in general better performances in extracting domainspecific multi-words. Table 4, in fact, reports the amount of domain specific terms for each group. Even though, the four extraction methods have similar results in the first group, the CSmw method has the best Sub-TOT, with 124 artistic terms out of 150 candidate terms. TFITF approach extracts 119 artistic terms out of 150, having a slightly better performance than the CSvH. This result witnesses that a better extraction of multi-word terms can be carried out by applying TFITF measure, directly on complex terms (see Section 2.2.), instead of on single terms. The CSvH method gets the higher total number of artistic multi-word terms, but these terms are uniformly spread on the entire range. On the contrary, the CSmw function shows considerable better results in the top list, being able to discriminate artistic terms on top. As well, the TFITF function is able to group artistic specific domain terms at the top of the list, maintaining anyway good scores on the entire list. Figure 3 shows the trends of the four functions in retrieving artistic terms. Finally, it is interesting to notice that, the CSmw method, acting directly on multi-words, turns out to extract those terms which are not only domain-specific terms, but also domain-specific terms for the analyzed text; on the other hand, although CSvH extracts good domain specific terms, these terms are not necessarily relevant in the considered text. It is the case of arte concettuale ('conceptual art') which is an artistic term with high rank in CSvH, but with very low frequency in the analyzed text. line 7 , including 1,800 terms, was used for the legal domain. According to the general evaluation criteria, we compared the four multi-word term lists, extracted following the NC-Value, the CSvH, the TFITF and the CSmw approaches, against the two aforementioned gold standard resources. Afterwards, the term lists have been manually validated by a legal and an environmental expert. Table 5 reports the amount of environmental (referred to as Env) and legal (referred to as Leg) terms for each 30-term groups we computed. As we can see, the CSmw method is able to distinguish clearly environmental terms from legal terms. In the first group we see 19 environmental terms against 5 legal terms; in the last: 22 legal terms, and no environmental terms. This trend is pointed out in Fig. 4, where the divergent lines show the different distributions of environmental and legal terms. The central zone of the chart, with lines crossing each other, shows a twilight zone of terms which contains both environmental and legal terms and terms that can refer to both domains (such as politica ambientale, 'environmental policy'). Fig. 
5 sketches the absolute value of the difference between environmental and legal terms for 7 http://www.simone.it/newdiz Figure 5: Absolute value of the difference between environmental and legal terms with CSmw every group. The continuous line shows the CSmw trend, while the dashed one shows the TFITF trend, and in both lines the bold part refers to predominance of environmental terms. As we can see, the two peaks at the extremities, due to high differences in values, point out the function's success in distinguishing double domain terminology. The CSvH method turned out not to be suitable for this task, since this method cannot deal with double domain terminology by discerning different term types. In the first group of terms, as we can see from Table 6, the function seems to respect the general trend extracting more environmental than legal terms. But setting the usual threshold at 300, the proportion of environmental terms is still higher than legal term. For this reason, in order to find a turning point of this trend, where the legal terms would have been more than the environmental ones, we keep analyzing sample groups around 600 terms. At this point we see that there is still a stable ratio between terms belonging to the two different domains. We stop our evaluation where the list becomes too noisy for being analyzed. A possible explanation is that, since the CSvH method extracts multi-word terms from the single head term previously acquired, it extracts all complex terms which share the same single head term, including complex terms which are not relevant for that particular text. Namely, it could be the case that both principio attivo 'active ingredient' and principio di sussidiarietà 'principle of subsidiarity' were extracted since they share the single head term, i.e. principio 'principle'. However, we cannot discriminate that the first one belongs to the environmental domain while the second one to the legal domain. Conclusion In this paper we presented a novel approach to multi-word terminology extraction combining a well-known automatic term recognition approach, the C-NC value method, with a contrastive ranking technique, aimed at refining obtained results either by filtering noise due to common words or by discerning between semantically different types of terms within heterogeneous terminologies (as in the legal case). In the framework of this study, two new contrastive functions have been proposed, called TFITF and Contrastive Selection Multi-words function, which turned out to be particularly suitable for handling variation in low frequency events, typically represented by multi-word terms. The proposed methodology has been tested in two case studies carried out in the History of Art and Legal domains respectively. The evaluation of achieved results showed that the proposed two-stage approach improves significantly multi-word term extraction results. For what concerns the legal domain, the proposed approach provides an answer to a well known problem in the semi-automatic construction of legal ontologies, namely that of singling out law terms from terms of the specific domain being regulated; as a matter of facts, ontology learning efforts in the legal domain mainly focus on the latter (Francesconi et al., 2010). Current directions of reseach include: i) the definition of new functions for an in depth analysis of the 'twilight zone' described in Section 4.3. 
as part of the "double terminology" extraction task, and ii) the use of this approach to identify neologisms through a comparative analysis of diachronic corpora of newspaper texts.
A Coordination Strategy between PV Generators and Storages based on Droop Control in AC Microgrids In an islanded AC microgrid, the coordination strategy between generations and storages is often designed to ensure the stability and economy of the system. In this article, for an AC microgrid dominated by photovoltaic (PV) generators, a strategy based on droop control is proposed to realize the generation-storage coordination. PV generators change their active and reactive power output by catching the changes in bus frequency and voltage, which can prevent storages from over-charged and make the bus voltage more stable. Storages are controlled by a droop algorithm based on state-of-charges (SOCs) to achieve SOCs balance among multiple storages. When the SOCs are too low, secondary loads (SLs) will be removed through a loads management, and the rest primary loads (PLs) will be powered entirely by PVs, which can prevent storages from over-discharged. Simulation based on Matlab successfully verifies the proposed coordination strategy. Introduction With the rapid development of the society, the traditional power grids have more and more disadvantages. In recent years, the construction of AC microgrids with renewable energy sources (RESs) has become an important strategy in various countries. AC microgrids are the main form of microgrids, they can effectively integrate distributed generations, energy storages and loads in the distribution network. Figure 1 shows a typical AC microgrid structure. When connected to utility, AC bus is supported by the utility, the PV generators, wind turbines (WTs) and fuel cells all output maximum power. In island mode, the AC bus is supported by storages, RESs will be controlled as a power source. Considering the intermittence and volatility of RESs [1], unbalanced power flow may lead to overcharge or over discharge of the storages. And with the diversification of the loads, when the reactive power demand between storages and loads cannot be met, the AC bus voltage (ACBV) will exceed the limit, at that time, generations are required to provide reactive power to protect microgrids. In addition, the SOCs between storages are often unbalanced, which resulting in low efficiency. According to these conditions, a coordinated control strategy between storages and generations is necessary. Considering there are lots of PVs access in the distribution network, the islanded AC microgrids dominated by high-density PVs will be the main research object of this paper. The rest of this paper is organized as follows. In section 2, some related research works will be introduced. In section 3, the coordinated control strategy will be described in detail. In section 4, simulation results based on Matlab are presented. In section 5, the conclusion of this article will be obtained. Related research works Some researchers have proposed their own control methods. Lu et. al. proposed a modified droop control based on SOCs for the storages, SOCs were added into the P-f droop coefficients as their denominators, so the storages with higher SOCs would have higher power output, the storages with lower SOCs would have lower power output, finally they achieved SOCs balancing [2]. But this study only focuses on the cooperative control among multiple storages, the control of generations is not discussed. Park et. al. 
took the RESs generations into consideration, they proposed a strategy of using multiple storages and multiple generations to improve the reliability of microgrids with all generations operated at maximum power point tracking (MPPT) mode [3]. However, the cooperative control method between storages and generations is not designed in this strategy. Urtasun et. al. put forward a distributed generation-storage cooperative control strategy, pointing out that if a storage was overcharged, its output voltage or input current would exceed the limit, therefore, using the voltage and current as the main basis of inverter frequency, when the storage was over-charged, the AC bus frequency (ACBF) would rise, PVs could decrease their output power by capturing this signal to prevent storage from over-charged, to prevent storage from over-discharged, when the SOCs of the storage were too low, diesel engine would be started [4]. This strategy can effectively prevent storage from overused, but it only considers a single storage, without considering the SOCs balancing in the case of multiple storages. Sorouri et. al. proposed a microgrid frequency control (MFC) method based on multiple storages, for the multiple parallel storages, using SOC-f droop control to generate the inverter reference frequency, when the SOCs increased, the ACBF raised, when the SOCs decreased, the ACBF dropped, then a PV power control (PVPC) method was used to capture this signal, which let PVs output power was adjustable to prevent the storages from over-charged or over-discharged [5]. However, this method only focuses on the SOCs, the maximum and minimum output power of the storages are not considered, so there is a possibility of unbalance in power output, at the same time, PVPC ignores the problems of reactive power support. Further more, L.Dí az et. al. proposed a PVs reactive power support method under voltage control mode (VCM) [6], but it is no uesful in current control mode (CCM), it means PVs unable to support reactive power when they operate in MPPT. After that, Moondee et. al. proposed a modified V-Q droop control for PVs in the case of grid connection [7], which provided ideas for solving this problem. In order to understand the research works intuitively, these related methods are summarized in table 1. In order to solve these problems, this paper proposes a generation-storage coordination strategy based on droop control. In islanded mode, multiple storages support AC bus through P-f and Q-V droop control, they can realize the reasonable distribution of output power by designing different droop coefficients. Since the SOCs of different storages may be different, a reference frequency offset algorithm based on SOCs is introduced to the P-f droop control, the higher SOCs will lead to the larger offset, therefore the storages with higher SOCs are slower to charge or faster to discharge than the storages with lower SOCs, finally they achieve a SOCs balance. If SOCs are too high, PVs will enter the power limit mode instead of MPPT mode to prevent storages from over-charged. When the SOCs are low, the PVs will be in the MPPT mode. However, if SOCs continue to decrease beyond a threshold, loads management will automatically cut off the SLs, leaving only PLs to prevent storages from over-discharged. Considering the reactive power flow, a reactive power support algorithm based on dq transform is designed on PVs to realize the reactive power support to AC bus. 
In these ways, the coordination between PV generators and storages can be realized, the following goals will be achieved: First, storages can reasonably distribute the power output, no overuse, and their SOCs are balanced. Second, PVs can automatically adjust the output power according to the bus signals, they will provide active power support and reactive power support for the bus. Third, the loads in the AC microgrids will be designed and managed reasonably to ensure the stable operation of the system. Generation-storage coordination strategy The coordination strategy will be divided into two parts for storages and PV generators respectively. The storages through a SOCs based droop control will be controlled as voltage sources to maintain the ACBV, and the PV generators through an adjustable power control will be controlled as power sources to provide active and reactive power support. ACBF can be divided into two regions according to its height, named high frequency region (HFR) and low frequency region (LFR), PV generators operate in power limit mode and MPPT mode at HFR and LFR respectively, as shown in figure 2. SOCs based droop control for storages There are multiple storages connected on the AC bus. The control block diagram for any i-th storage is shown in figure 3. Considering the limitations of P-f and Q-U droop control [8], this article assumes the line is inductive. As shown in figure 3, the strategy is divided into outer loop and inner loop. Outer loop is based on traditional P-f and Q-V droop control. Active power P and reactive power Q can be calculated by sampling the three-phase voltage and the three-phase current , three-phase voltage reference amplitude can be obtained by using Q-V droop method. Due to the difference of SOCs of different storages, the reference frequency rise ∆ based on SOCs is introduced, which can get the final reference frequency with the P-f droop method together. and will provide the three-phase voltage reference for the inner loop after passing a three-phase voltage generator. Figure 3. SOCs based droop control for i-th storage. Inner loop is based on three-phase dq transformation, a proportional-integral (PI) controller is designed to be a voltage controller to make the output voltage and track their reference value and accurately. A proportional controller is designed to be a current controller to increase the damping of the system. Inner loop finally generates three-phase inverter pulse width modulation (PWM) waves to the converter. According to this strategy, the inverter reference frequency of i-th storage can be expressed as: is the reference value of ACBF, which is limited between the bus maximum frequency and minimum frequency , f is the output of P-f droop control, ∆f is the reference frequency rising, * is the rated frequency of AC bus, , is a droop coefficient, ∆, is a rising coefficient, is the active power output, is the SOCs of the battery. The design of the droop coefficient , is related to the capacities of the storage, it can be set as: Where , and , are the maximum active power output and maximum active power input of the storage respectively. The detailed expression of ∆, is: (3) It can be seen from equation (3) all the rise coefficients should be consistent in an AC microgrid. For any one storage, its SOCs can be calculated as: Where , =0 is the initial SOCs of the storage, is its capacity, and is its output current. 
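A compact sketch of the per-storage computation implied by equations (1)–(4) is given below. Since the subscripted symbols were lost in extraction, the droop-coefficient design (the full charge/discharge power range mapped onto the allowed bus frequency band), the pairing of the frequency limits and the numerical value of the common SOC rise coefficient are assumptions; the power ratings and initial SOCs are those used later in the simulation section.

```python
from dataclasses import dataclass

@dataclass
class Storage:
    soc: float          # state of charge, 0..1
    capacity_ah: float  # battery capacity (assumed value)
    p_out_max: float    # maximum discharge power, W
    p_in_max: float     # maximum charge power, W

F_NOM = 50.0                 # Hz, rated bus frequency
F_MIN, F_MAX = 49.7, 50.5    # Hz, assumed pairing of the simulation frequency limits
DELTA_K = 0.2                # Hz per unit SOC; same for all units, value assumed

def droop_coefficient(s: Storage) -> float:
    # Assumption: map the whole charge/discharge power range onto (F_MAX - F_MIN).
    return (F_MAX - F_MIN) / (s.p_out_max + s.p_in_max)

def reference_frequency(s: Storage, p_out: float) -> float:
    """P-f droop with an SOC-based offset: for a given power, a higher SOC gives a
    higher reference frequency, so that unit discharges more (or charges less)."""
    f_ref = F_NOM - droop_coefficient(s) * p_out + DELTA_K * s.soc
    return min(max(f_ref, F_MIN), F_MAX)

def update_soc(s: Storage, current_a: float, dt_h: float) -> None:
    """Coulomb counting: SOC falls while the storage discharges (current > 0)."""
    s.soc -= current_a * dt_h / s.capacity_ah

s1 = Storage(soc=0.80, capacity_ah=100, p_out_max=5600, p_in_max=1400)
s2 = Storage(soc=0.65, capacity_ah=100, p_out_max=7000, p_in_max=2100)
print(reference_frequency(s1, p_out=2000), reference_frequency(s2, p_out=2000))
```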
According equation (1) to equation (4), for active power, its output value with lower SOCs is smaller or its input value is larger, and its output value with higher SOCs is larger or its input value is smaller, finally SOCs balancing between storages will be realized. After that, the size of power output or input will only depend on storages own capabilities. IOP Conf. Series: Earth and Environmental Science 461 (2020) 012086 IOP Publishing doi:10.1088/1755-1315/461/1/012086 5 The reference amplitude of inverter voltage can be expressed as: is limited between the highest peak-to-peak value of the phase voltage and the lowest peak-to-peak value of the phase voltage, * is the rated peak-to-peak voltage of the bus, , is the droop coefficient, is the reactive power exchanged by the converter. The design of the droop coefficient , is related to the reactive power exchange capability of the converter, it can be set as: Where , and , are the maximum positive reactive power and the maximum negative reactive power of the converter respectively. Considering the relationship between active power and reactive power exchange of the converter, they should always meet: 2 + 2 ≤ 2 (7) Where is the apparent power of the converter. According equation (5) to equation (7), when the converter can provide enough reactive power, ACBV will not exceed the limit, but in fact, due to the limited apparent power of the converter, the supply of reactive power will be restricted after thinking to meet the active power output first. To solve this problem, the converters connected to the PVs can provide reactive power support. It will be described in detail in next section. Adjustable power control for PV generators According to figure 3, when SOCs are too high, ACBF will enter the HFR. When the reactive power supply by storages is insufficient, ACBV will exceed the limit. PV generators can adjust their active and reactive power output by catching these two signals to ensure the system stability. For any m-th PV, the control block diagram is shown in figure 4. Where m is any m-th PV, is the ACBF, is the threshold to enter HFR, , is the output power when PV operates in MPPT mode, , is regulation coefficient, it can be set as: Combining figure 2 and equation (8), when ACBF is located in LFR, PV operates in MPPT mode which can ensure economy. When ACBF is located in HFR, PV will operate in power limit mode, and its active power output is negative correlation of the frequency, the higher the frequency is, the lower the output power is. When the frequency is higher than , the reference output power will be zero. Since the reactive power exchange needs to be considered, two thresholds of the ACBV are set to and . When the bus voltage is lower than or higher than , it means the system has a high demand for reactive power. In this case, PV converter will be asked to support reactive power. According to figure 4, the reference value of converter reactive power is expressed as: Where , is the component of the inverter voltage on d-axis in dq transformation, and are the maximum positive reactive power and the maximum negative reactive power of the converter respectively. Their relationship with the active power should also meet equation (7). 
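The frequency-dependent active power reference of equation (8) and Figure 2 can be sketched as a piecewise-linear curve: full MPPT power in the LFR, linear curtailment once the bus frequency enters the HFR, and zero output above the upper frequency limit. The two threshold values are taken from the simulation section (50.3 Hz and 50.5 Hz); the symbol names are ours, since the original symbols were lost in extraction.

```python
F_HFR = 50.3   # Hz, bus frequency above which PVs leave MPPT (simulation value)
F_TOP = 50.5   # Hz, bus frequency at which the PV active power reference reaches zero

def pv_active_power_reference(f_bus: float, p_mppt: float) -> float:
    """Active power reference of one PV unit as a function of bus frequency."""
    if f_bus <= F_HFR:                    # LFR: run at the maximum power point
        return p_mppt
    if f_bus >= F_TOP:                    # storages nearly full: stop generating
        return 0.0
    # HFR: output falls linearly as the frequency (i.e. the storages' SOC) rises
    return p_mppt * (F_TOP - f_bus) / (F_TOP - F_HFR)

for f in (50.0, 50.3, 50.4, 50.5):
    print(f, pv_active_power_reference(f, p_mppt=2500.0))
```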
, and , are control coefficients, they can be set as: From equation (8) to equation (11), we can know the outer loop can make PV generator automatically adjust the active power output and reactive power output according to ACBF and ACBV, PV can provide reactive power support for AC bus while preventing the storages from over-charged. The inner loop control of PV is mainly to achieve accurate tracking of active power and reactive power. The active power output of the PV depends on the P-V characteristic curve. By using the perturbation and observation (P&O) method to control the output voltage of PV, the output power can be controlled to achieve active power tracking. Based on instantaneous reactive power theory of three-phase circuits, we can get: Because this coordination strategy is a master-slave mode, for PV generators, the values of and are little changed. At this time, for any given and , the unique and can be get according to equation (12). The decoupling algorithm designed with this characteristic can realize the decoupling from to , and the reactive power tracking can be realized by controlling . Characteristics of the coordination strategy First of all, the analysis starts from active power characteristics. When all PV generators are running in MPPT, the total active power output is set as , , which satisfies: (13) Set the total active power consumption of PLs and SLs are and respectively, when they meet: In this situation, it is considered that the system is initially located in LFR. PV will operate in MPPT mode, storages will be continuously charged, SOCs will continue to rise, ACBF will continue to rise and finally enter HFR. Then, PVs will operate in power limit mode, their active power output will continue to decrease, the charging power of the storages will continue to decrease until it is zero. Finally all the active power of the loads will only be provided by PVs, SOCs of all the storages are equal and there is no power exchange. At this time, the loads should meet: Combined equation (1), equation (8) and equation (15), under the same initial conditions, in the steady state, it can be concluded that the smaller the total active power consumption of the loads, will have the lower PVs active power output, the higher bus frequency and the higher SOCs balancing state, vice versa. When the total loads meet: In this case, considering the system is initially located in LFR, the PVs will always run in the MPPT mode, the storages will continue to discharge, and SOCs will continue to decrease. To prevent storages from over-discharged, a loads management mechanism is introduced to set a minimum threshold value for SOCs. When any SOC falls to the threshold value, SLs will be cut off, and the total active power consumption of the rest PLs should be less than , , then, the system can be analyzed as before. Next, considering the reactive power characteristics, ACBV can be expressed as , and according to equation (5), reactive power exchange for any i-th storage can be expressed as: Let the total reactive power consumption of loads be , when it meets: At this time, the reactive power of the loads will be supported by storages, and the reactive power flow at PVs is zero. When does not meet equation (18), the reactive power of loads will be shared by storages and PVs together. 
In order to protect the system, the total reactive power consumption of the loads should satisfy equation (19), where the bounds are the maximum negative reactive power of the PVs and of the storages and the maximum positive reactive power of the PVs and of the storages respectively.

Simulation based on Matlab
In this section, a four-node AC microgrid coordination simulation is carried out in Matlab, with two storages and two PV generators connected. The parameters are designed as follows: the rated ACBF is 50 Hz, with frequency thresholds of 49.7 Hz, 50.3 Hz and 50.5 Hz; the rated ACBV is 311 V, with voltage thresholds of 300 V, 305 V, 315 V and 320 V. The PLs are 4 kW, -1.2 kVar, and the SLs are 6 kW, -4.1 kVar. The MPPT power of the first PV (PV_1) is 2.5 kW and the MPPT power of the second PV (PV_2) is 3.5 kW; the maximum positive and negative reactive power provided by the two PV generators are the same, 7 kVar and -7 kVar respectively. The maximum active output power of the first storage (Storage_1) is 5.6 kW and its maximum active input power is 1.4 kW; the maximum active output power of the second storage (Storage_2) is 7 kW and its maximum active input power is 2.1 kW; their maximum positive and negative reactive output power are the same, 6 kVar and -6 kVar respectively. The initial SOCs of the two storages are 80% and 65% respectively. The related coefficients can be calculated from these parameters. The simulation steps are as follows: first, the two storages support the AC bus through DC/AC conversion, and then the two PVs are connected. After the bus voltage is stable, the PVs start the decoupling algorithm to provide reactive support for the bus; at the same time, they capture the ACBF to determine their own operation mode. When any SOC is detected to be too low, the system cuts off the SLs to prevent the storages from being over-discharged. The total simulation time is 80 seconds, and the results are shown in figure 5.
Figure 5. Simulation results.
From figure 5 (d), it can be seen that when the simulation starts, the ACBF is in the LFR and both PV generators operate in MPPT mode. Since the total load exceeds the available PV power (equation (16)), the two storages continue to discharge, the SOCs continue to decline, and the ACBF continues to decrease. When the SOC of Storage_2 is detected to reach the minimum threshold (20%), the SLs are cut off, and the total power consumed by the remaining PLs is less than the total MPPT power of the PVs. Because the ACBF is still in the LFR, the PVs still operate in MPPT mode, and the extra power is used to charge the storages, so their SOCs rise and the ACBF continues to rise. After a period of time, when the ACBF is in the HFR, the PVs operate in the power-limit mode, their active output power decreases as the ACBF rises, and the charging power of the two storages decreases. In the steady state, the two storages have no power exchange with the microgrid, their SOCs are balanced, and the total power of the PLs is supported only by the PVs. From figure 5 (e) to figure 5 (g), it can be seen that there is a large reactive power exchange in the system at the beginning of the simulation. When the bus is stable, if the PVs do not provide reactive power support, the ACBV exceeds its upper threshold; after the reactive power support algorithm is enabled at 6 s, the PVs provide reactive power continuously according to the ACBV, and the bus voltage deviation decreases. After the SLs are removed, the demand for reactive power decreases, the ACBV stays between the two thresholds, and the PVs no longer provide reactive power. Since the reactive power control coefficients of the PVs are the same, the reactive power provided by the two PV generators is the same, and the same holds for the storages.
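To see how the steady state reported above comes about, the short calculation below solves for the bus frequency at which the derated PV output exactly covers the remaining PL demand, using the simulation parameters; the linear derating form and the function name are assumptions carried over from the earlier sketch, not the paper's exact equation (8).

# Assumed linear power-limit characteristic for the aggregated PVs in the HFR:
# p(f) = P_MPPT * (f_max - f) / (f_max - f_h) for f_h <= f <= f_max.
P_MPPT = 2.5e3 + 3.5e3      # total MPPT power of PV_1 and PV_2 (W)
f_h, f_max = 50.3, 50.5     # HFR entry and zero-output frequencies (Hz), assumed
P_PL = 4.0e3                # primary loads left after the SLs are cut off (W)

def pv_total(f):
    if f <= f_h:
        return P_MPPT
    if f >= f_max:
        return 0.0
    return P_MPPT * (f_max - f) / (f_max - f_h)

# Steady state: no storage exchange, so the PV output must equal the PL demand.
f_ss = f_max - (P_PL / P_MPPT) * (f_max - f_h)
print(round(f_ss, 3), round(pv_total(f_ss), 1))   # about 50.367 Hz, 4000.0 W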
Conclusion
In this paper, a control strategy is proposed to realize generation-storage coordination based on droop control. Through the SOC-based storage droop control, the SOCs of the storages can be balanced. With the adjustable power control for PV generators, the PVs can automatically adjust their active power output according to the ACBF to prevent the storages from being over-charged, and they can also automatically adjust their reactive power output according to the ACBV to provide reactive power support for the bus and keep the bus voltage stable. When the SOCs become too low due to continuous discharge of the storages, the SLs are cut off through load management and the remaining PLs are powered entirely by the PVs, which prevents the storages from being over-discharged. A simulation in Matlab verifies the proposed control strategy.
Experience with sonidegib in patients with advanced basal cell carcinoma: case reports Sonidegib is a Hedgehog signalling pathway inhibitor approved for use in patients with advanced basal cell carcinoma (BCC) not eligible for surgery or radiotherapy. This report describes clinical experience with sonidegib in two patients with locally advanced BCC (one with a tumour adjacent to the right eye and the other with a tumour associated with the left ear) and in one patient with Gorlin syndrome. Two of the patients had recurrent and intractable tumours. Treatment with sonidegib 200 mg/day led to remission in both patients with locally advanced BCC within 7 months and to a reduction in the size and number of lesions after 4 months in the patient with Gorlin syndrome. Adverse effects reported in these patients were cramps, alopecia, ageusia and weight loss, all of which were mild and consistent with the known toxicity profile for sonidegib. Sonidegib has an important role to play in the effective treatment of challenging cases of advanced BCC. In parallel, a need remains to improve management protocols for patients with advanced BCC, particularly through earlier intervention and a multidisciplinary team approach. Introduction Non-melanoma skin cancer is the fifth most commonly occurring cancer in men and women globally. 1 Basal cell carcinoma (BCC) accounts for the majority (~80%) of nonmelanoma skin cancer cases, 2,3 and the incidence has been increasing worldwide since the 1970s. 3,4 People with white skin are most susceptible to BCC, with sun-exposed areas, such as the head and neck, being particularly vulnerable. 2,5 As BCC tumours are generally slow growing, the condition is highly treatable if detected early and managed appropriately. 3 Metastatic BCC (mBCC) is extremely rare (0.003-0.55% of cases) 2 but, when it does occur, the prognosis is poor. 3 Large, aggressive or recurrent BCC tumours, and those which penetrate deeper into the skin or associated tissues, are referred to as locally advanced BCC (laBCC). laBCC poses several therapeutic challenges: the tumours may not be amenable to radiation therapy, surgery may be impractical due to the risk of morbidity, loss of function or disfigurement, and risk of recurrence is high. 3 Until relatively recently, mBCC and laBCC were considered untreatable conditions, with palliative care being the only option. The Hedgehog (Hh) signalling pathway is involved primarily in embryonic development and control of gene activation. Normally suppressed in adults, aberrant activation of the Hh pathway through gene mutations or excessive expression of Hh signalling molecules can lead to development of certain cancers, including BCC and Gorlin syndrome. 'Smoothened' (SMO), a transmembrane protein and main transducer of the Hh signalling pathway, initiates a signalling cascade that increases the expression of glioma-associated oncogene transcription factors. By binding to SMO, Hh pathway inhibitors (HPIs) prevent downstream activation of Hh pathway signalling. 6,7 Vismodegib was approved in 2012 for treatment of adults with mBCC or laBCC not eligible for surgery or radiotherapy. 8 Subsequent approval of sonidegib in 2015 provided an alternative option for laBCC patients. 9,10 European consensus guidelines recommend that patients with mBCC or laBCC be offered treatment with vismodegib or sonidegib. 11 Although the compounds share the same mechanism of action, 12,13 their pharmacokinetic profiles differ. 
Notably, sonidegib has a large volume of distribution, as indicated by steady-state concentrations sixfold higher in skin than in plasma, whereas vismodegib is confined mainly to the plasma and extracellular spaces. 10,14,15 The efficacy of sonidegib was established in the phase II, multicentre, randomized, double-blind BOLT (Basal Cell Carcinoma Outcomes with LDE225 [sonidegib] Treatment) study in adults with histologically confirmed laBCC or mBCC not amenable to curative surgery or radiotherapy. [16][17][18][19] Cut-off for the final analysis was 42 months. 19 The primary endpoint was the objective response rate (ORR) by central review (ORR-CR), defined as the proportion of patients with complete or partial response assessed by the BCC-modified Response Evaluation Criteria in Solid Tumors (mRECIST). ORR by investigator review (ORR-IR) was a secondary endpoint. In the final analysis, the ORR-CR was 56.1% and the ORR-IR was 71.2% as per mRECIST in patients with laBCC. 20,21 A pre-planned sensitivity analysis compared these efficacy outcomes with those obtained using the composite RECIST criteria (≥30% reduction in externally visible tumour or radiographic dimension, or complete ulceration resolution) applied in the ERIVANCE study of vismodegib. 21,22 Based on these less stringent RECIST criteria, efficacy outcomes with sonidegib were even higher, with an ORR-CR and ORR-IR of 60.6% and 74.2%, respectively. 20 Irrespective of the criteria used to assess response, long-term positive responses to sonidegib 200 mg/day were similar for patients with aggressive or non-aggressive histological subtypes of laBCC. 20,23 The safety profile of sonidegib was consistent throughout the course of the BOLT study. Median duration of exposure to sonidegib 200 mg/day was 11 months, with 24% of patients having ≥20 months' exposure. Adverse effects (AEs) were mainly grade 1/2 in severity. The most common AEs (≥25% of patients) were muscle spasms, alopecia, dysgeusia, nausea, elevated creatine kinase, weight decrease, fatigue and decreased appetite. Grade 3/4 treatment-related AEs and serious treatment-related AEs were reported in 32% and 5% of patients, respectively. Common AEs requiring treatment interruption or a dosage reduction were elevated creatine kinase, nausea, vomiting, diarrhoea and elevated lipase. Most AEs were manageable and reversible with dose interruptions, with no overall impact on efficacy. 19 Given that the strict adherence to protocol required in randomized controlled trials can limit generalizing the results to external populations, interventions must also be evaluated under real-world conditions. 24 Herein, we present clinical experience with sonidegib in three patients with advanced BCC.
Patient consent
All data presented in this article have been de-identified to ensure patient confidentiality. Patient consent was not required.
Case 1
A biopsy in January 2018 confirmed a diagnosis of BCC with an infiltrative pattern. The patient underwent maxillofacial surgery and was referred for radiation therapy (80 kV X-rays at a daily dose of 300 cGy, reaching a total dose of 4800 cGy) ending on 8 February 2018. She achieved a complete clinical remission. In October 2019, the tumour recurred and the patient was referred for Mohs micrographic surgery. Due to the COVID pandemic, she declined transfer to mainland Spain, which delayed the consultant oncological dermatology assessment until March 2020.
At assessment, an erythematous and crusty plaque of approximately 16 × 8 mm located in the immediate vicinity of the caruncle of the right eye was evident. No adenopathies, masses or megaly were palpated. No significant laboratory abnormalities were identified, and magnetic resonance imaging (MRI) was negative for intraorbital invasion. Sonidegib 200 mg/day was prescribed, beginning 26 August 2020, based on published evidence of its efficacy in laBCC. Tumour evolution is shown in Figure 2. A response was evident within the first month of treatment. By 19 March 2021 (7 months after treatment commencement), the patient had achieved a complete clinical response. After confirming the clinical response by control biopsy, sonidegib was discontinued on 27 May 2021 (after 9 months). In February 2022, 9 months after treatment discontinuation, the patient remained in remission. Throughout treatment, the patient experienced mild toxicity consisting of grade 1 cramping, ageusia and weight loss. Three months after discontinuing sonidegib, all AEs had resolved.
Case 2
A 69-year-old man was referred for a tumour located in the left ear. The tumour had developed over 6 years, growing slowly until it occupied about two-thirds of the ear, with occasional bleeding. A biopsy indicated metatypical BCC with a mixed pattern (nodular and infiltrative). The patient had no comorbidities and was not receiving any chronic medication. Examination revealed a plaque of 4 × 3 cm in the helix and antihelix of the left ear, poorly delimited, with retraction of the pinna. The plaque was infiltrated, with clinical evidence of cartilage involvement. The clinical diagnosis was BCC ulcus rodens. In November 2020, the patient began treatment with sonidegib 200 mg/day. Improvement was evident after 2 months and, by 6 months, a complete clinical response was achieved (Figure 3). A biopsy including cartilage indicated no evidence of tumour. Sonidegib was well tolerated by this patient, with grade 1 alopecia being the only reported AE. Due to good tolerability, and in the presence of a complete clinical and histological response, sonidegib treatment was continued for 6 months after complete clinical response.
Case 3
A 38-year-old man with a long-term history of Gorlin syndrome (nevoid basal cell carcinoma syndrome) had been in dermatological follow-up since childhood. He had no comorbidities and was not receiving any chronic medication. From the age of 8 years, the patient had been treated for multiple BCCs by surgery, cryotherapy and electrocoagulation. Most lesions were located on the face and scalp. Mohs surgery was required on some occasions to treat infiltrative pattern lesions. Oral surgery was performed as needed to remove several mandibular keratocysts. Due to an acceleration of BCCs at the time of consultation, the patient was treated with vismodegib 150 mg/day from November 2019 to October 2020. Treatment response was good, with complete disappearance of lesions. AEs were mild: grade 2 alopecia and grade 1 cramps. Three months after stopping vismodegib, the lesions began to recur. In May 2021, the patient presented with more than 50 lesions on his face and scalp, mostly pigmented BCCs of 4-6 mm, and treatment with sonidegib was started. In September 2021, after 4 months of sonidegib treatment, there was a decrease in the number and size of the lesions (Figure 4). The only side effect the patient experienced was grade 1 alopecia and self-reported changes in hair texture.
Laboratory tests performed during follow-up (at 1 month and every 2 months thereafter) showed no abnormalities. After 6 months of treatment with sonidegib, a complete clinical response in all lesions was observed. With the patient's agreement, treatment was discontinued with a plan to reassess clinically every 3-6 months and reintroduce sonidegib intermittently between rest periods as necessary. The patient tolerated sonidegib better than vismodegib. He was satisfied with sonidegib treatment with no need to implement every-other-day dosing.
Clinical overview
Periocular localization of BCC (Case 1) with lesion development adjacent to and invasion through the caruncle is appropriately diagnosed as laBCC. The patient's response to sonidegib was notably rapid (within 1 month), consistent with that described in a man with laBCC of the nuchal region who had a 95% reduction in tumour size 3 months after starting treatment with sonidegib 200 mg/day, 25 and in a 71-year-old man with clinically important tumour regression within 2 months of starting sonidegib. The patient with metatypical BCC (mixed nodular and infiltrative pattern) of the left ear (Case 2) also showed an excellent response to sonidegib, achieving a complete clinical response after 6 months of treatment. Numerous other case reports or case series have documented clinical improvement or clearance of laBCC lesions, including complete responses, with sonidegib 200 mg/day most often within a few months of treatment start. [28][29][30][31][32][33][34] There are also anecdotal reports of the effectiveness of sonidegib for locally advanced basosquamous carcinomas, 31 locally advanced anal and rectal BCC, 35 and in combination with fractionated radiation for recurrent advanced BCC of the head and neck. 36 Case 3 shows the promising effectiveness of sonidegib for the treatment of Gorlin syndrome, including in patients with disease recurrence after receiving vismodegib, which is supported by the collective clinical experience of a multidisciplinary expert panel pointing to optimal responses to HPIs in these patients. 37 Italian investigators reported the case of an 89-year-old woman with Gorlin syndrome who had been treated successfully with vismodegib but had to discontinue treatment due to severe asthenia; all lesions relapsed after discontinuation. After 3 months of treatment with sonidegib 200 mg/day, partial re-epithelization and tumour shrinkage were observed in all target lesions, with no AEs. At 6 months, there was further improvement and complete healing of lesions on the face, again without AEs. 38 This outcome, together with Case 3 herein, shows that lesion recurrence after HPI discontinuation following response does not constitute resistance. As such, rechallenge with a drug of the same class should be considered before switching to a completely different treatment such as second-line immunotherapy. The usefulness of switching between HPIs was also reported in a case involving an 87-year-old man with inoperable laBCC involving the sinuses, nasal cavity and brain. Sonidegib 200 mg/day in combination with itraconazole pulse dosed at 100 mg/day (2 weeks on, 2 weeks off) was effective whereas previous vismodegib had proved inadequate. Vismodegib successfully reduced tumour size by 70% over 3 months when tumour involvement was limited to the nasal cavity and sinuses, but the effects diminished over time. Vismodegib was discontinued and the patient received radiation therapy (total dose of 70 Gy).
Two years later, BCC recurred and, despite vismodegib treatment for 6 months, the lesions progressed. Pembrolizumab was tried without success, and the tumour progressed into the brain. Sonidegib/itraconazole combination therapy led to significant improvement after 3 months. After approximately 8 months, the intracranial lesion was no longer visible on MRI and the intranasal and sinus lesions were stable and improved. 39 Treatment resistance in patients receiving HPIs may be due to the development of SMO mutations, which impair effective binding. The efficacy of an alternative HPI in this setting may depend on the specific mutation, the binding location of the drug and whether the mutation produces a conformational change influencing drug binding. Identifying biomarkers of resistance or response would be useful to target HPIs to patients most likely to benefit. 40 Follow-up of patients with laBCC is of considerable interest to establish whether full lesion clearance can be achieved and whether tolerability is maintained during continued treatment. Our experience suggests that clinical follow-up every 3-6 months during maintenance treatment is appropriate. After discontinuation, consideration can be given to reintroducing sonidegib upon the appearance of multiple new lesions. An alternative approach is to continue sonidegib at reduced dosing (every other day or twice a week) as maintenance therapy. For patients with chronic Gorlin syndrome, a suitable treatment protocol may be sonidegib 200 mg/day until lesion clearance, alternating with no treatment during periods of remission. Irrespective of the condition being treated, any decisions regarding long-term management should be discussed and agreed with the patient. Telemedicine can be a useful adjunct tool for patients who are able to provide digital photos of their lesions of suitable standard for comparison with previous images. 29 Two of our patients (Cases 2 and 3) reported alopecia as the sole AE to sonidegib, and both opted to continue treatment. The remaining patient (Case 1) experienced mild cramps, ageusia and weight loss, all of which resolved once treatment was discontinued. These AEs are within the established toxicity profile of sonidegib. 9 In the BOLT study, AEs were common but rarely serious in patients receiving sonidegib. At final analysis at 42 months, alopecia (grade ≤2) had been reported in 49% of patients receiving sonidegib 200 mg/day. 19 HPIs may induce alopecia by interfering with the transition of follicles from the telogen phase of hair shedding to the anagen growth phase. 41 The most common grade 3-4 AEs associated with sonidegib 200 mg/day in the BOLT study were elevated creatine and lipase (each in 6% of patients). 19 Elevated creatine levels are also associated with vismodegib, 42 suggesting that this is a class effect. No abnormal laboratory data were recorded in any of the three cases presented. Most AEs associated with HPI treatment are thought to result from inhibition of the Hh signalling pathway in normal tissue. Although generally not severe, AEs can be persistent, reducing patients' quality of life and necessitating treatment interruption or discontinuation. Indeed, it has been suggested that some patients with laBCC who discontinue HPI therapy due to AEs do so because their intolerance for AEs begins to outweigh the extent of clinical improvement. Healthcare professionals need to be aware of this possibility and manage patients accordingly in order to facilitate continued treatment, if appropriate. 
19,41 With respect to sonidegib, a retrospective case series of 20 patients 43 and a post hoc analysis of the BOLT study 44 found that dose adjustments (e.g. dose reductions, alternate day dosing) or treatment delays were practical solutions to reduce the need for treatment discontinuation with no detriment to sonidegib efficacy. Prior to initiating HPI therapy in patients with laBCC receiving statin therapy, it may be prudent to stabilize the patient on a low dose of a hydrophilic statin (e.g. rosuvastatin) not metabolized by CYP 3A4 to reduce the risk of muscle-related AEs. 45 The use of concurrent superficial radiotherapy at the time of maximal therapeutic effect of an HPI led to a high clinical response rate with minimal toxicity in a retrospective review of 12 patients with laBCC treated with vismodegib or sonidegib, and merits further investigation in well-controlled clinical trials. 46
Conclusions
Collectively, case reports/case series of sonidegib use in clinical practice support its effectiveness in patients with advanced BCC, as demonstrated in the BOLT study. [16][17][18][19][20] Although based on only three cases, and with treatment ongoing in two of these cases at the time of case submission, the clinical experience with sonidegib presented herein demonstrates its ability to induce tumour remission in patients with laBCC or Gorlin syndrome, including recurrent and intractable cases. AEs recorded with sonidegib in these patients were mild and manageable, consistent with its known safety profile. Importantly, strategies such as dose reductions and treatment interruptions to manage patients unable to tolerate dosing schedules or with suspected treatment-related AEs do not appear to undermine sonidegib efficacy and may improve patient outcomes by preventing treatment discontinuations. Overall, sonidegib appears to be a useful addition to the therapeutic armamentarium for patients with advanced BCC, showing efficacy in patients who have previously failed on vismodegib. Despite the wealth of high-quality evidence regarding therapeutic options for laBCC, delays in disease management are not uncommon in these patients, even in centres with active multidisciplinary teams. As such, continued efforts are required to define and implement optimal care for patients with advanced BCC.
Contributions: Conception and design: all authors; clinical data: CS-G, GP-P, AM-D and RFdMC; critical review of the manuscript: all authors. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole and have given their approval for this version to be published.
Disclosure and potential conflicts of interest: SP has received honoraria as speaker or for participating in advisory boards from Regeneron, Roche, Sanofi and Sun Pharma. CS-G has received honoraria for lectures, presentations, speakers bureaus, manuscript writing or educational events from Sun Pharma; and payment for expert testimony from Sun Pharma. GP-P has received honoraria as speaker or for participating in advisory boards from Amgen and Sun Pharma. AM-D has no conflicts of interest to declare.
RFdMC has received consulting fees and/or honoraria for expert testimony from Sun Pharma; honoraria for lectures, presentations, speakers bureaus, manuscript writing or educational events from Kyowa, MSD and Takeda; support for attending meetings and/or travel from Kyowa, Sun Pharma and Takeda; participation on a Data Safety Monitoring Board or Advisory Board for Sun Pharma; member of non-melanoma skin cancer expert committee. The International Committee of Medical Journal Editors (ICMJE) Potential Conflicts of Interests form for the authors is available for download at: https://www.drugsincontext.com/wp-content/uploads/2022/05/dic.2022-3-8-COI.pdf
Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research Psychiatric research is often confronted with complex abstractions and dynamics that are not readily accessible or well-defined to our perception and measurements, making data-driven methods an appealing approach. Deep neural networks (DNNs) are capable of automatically learning abstractions in the data that can be entirely novel and have demonstrated superior performance over classical machine learning models across a range of tasks and, therefore, serve as a promising tool for making new discoveries in psychiatry. A key concern for the wider application of DNNs is their reputation as a “black box” approach—i.e., they are said to lack transparency or interpretability of how input data are transformed to model outputs. In fact, several existing and emerging tools are providing improvements in interpretability. However, most reviews of interpretability for DNNs focus on theoretical and/or engineering perspectives. This article reviews approaches to DNN interpretability issues that may be relevant to their application in psychiatric research and practice. It describes a framework for understanding these methods, reviews the conceptual basis of specific methods and their potential limitations, and discusses prospects for their implementation and future directions. INTRODUCTION Psychiatric disorders are common and a leading cause of disability worldwide. Substantial research has been done in the field, but major questions about their causes, treatment, prediction, and prevention remain unanswered. In part because mental phenomena and their disorders are inherently multidimensional and reflect complex dynamic processes, psychiatric research comprises a unique set of challenges that have not been tractable to date using conventional approaches. The validity of psychiatric constructs and their measurements and the interplay between and within bio-psycho-social factors to determinants might not be readily describable by heuristic knowledge or by simple models of dynamics currently established. Despite tremendous efforts, overall progress in understanding and treating psychiatric illnesses has been modest in the past decades. The emergence of Big Data and recent developments in machine learning (ML) might provide a venue to tackle some of the challenges. Deep neural networks (DNNs) (1,2), a specific type of ML model, could be particularly helpful in some cases. DNN models are inspired by biological brains, using artificial neurons (e.g., mathematical analog to biological neurons) as units and, with those, building a network by wiring a large number of units together in specific ways. Two unique theoretical properties make them particularly appealing to psychiatric research, namely the capability of finding and mapping more complex patterns in data compared to other models, and the ability to automatically learn important and, at times, novel aspects of information through sequential data transformations (e.g., "representation learning"). Empirically, in the field of healthcare, they have already achieved groundbreaking progress in various applications: for example, drug discovery (3), protein folding (4), and clinical risk prediction (5). There is also published work on using representation learning to potentially enhance the validity of psychiatric taxonomy (6). However, it is also known that DNNs possess a set of lingering issues that remain to be improved. 
For example, there is increasing awareness of the challenge of model interpretability (7)(8)(9)(10)(11)(12)(13)(14)(15). Complex ML models, such as DNNs, are sometimes referred to as "black box" models because their mechanisms of making decisions are not explicitly accessible to human cognition. In the context of psychiatric research, model interpretability is desirable for the following reasons: (1) for clinical applications, building trust between the model and stakeholders is fundamental for adoption of the tool. Trust is directly related to the level of understanding of the inner working of the model (e.g., "knowing why"). It is known that DNNs are capable of making accurate predictions based on "peripheral" features or noise but that contain no heuristic or scientific meaning other than statistical correlation with the labels. In this context, the models could be more vulnerable to adversarial attacks and noise when applied to out-of-distribution data (16,17). For example, one can build a well-performing classifier for diagnosis of depression based on internal data that is, in fact, less reliable when applied to real-world data. With appropriate model interpretation, researchers and clinicians can make better judgments about whether the model is trustworthy in a given scenario, supported by their expert knowledge. (2) Model interpretation helps to identify critical aspects of the data (e.g., the underlying biological mechanism as shown in neuroimaging tools) and could help the progress of science in both better understanding the subject matter and improving the model. (3) For psychiatry in particular, a significant proportion of clinical decisions are made by jointly considering objective conditions and subjective considerations-for example, preferences to choose over a certain medication side effect profile vs. another, etc. Knowledge about how the model makes decisions allows the flexibility to adjust to additional human preferences and value judgments not readily incorporated in each instance. (4) Finally, on the legal side, model interpretability is explicitly stated as a requirement by the General Data Protection Regulation set by the European Union (18). Efforts to improve the interpretability of complex ML models has been an active area of research, and several recent reviews have addressed recent developments in this area for an ML audience (7)(8)(9)(10)(11)(13)(14)(15). In this article, we aim to summarize some of these issues in the context of their potential application to psychiatric research. Starting from a brief introductory sketch of DNNs, we then discuss general considerations regarding DNN interpretation methods; the current status of available interpretation methods; and their limitations, implementations, and possible future directions. Our main goal is not to provide an exhaustive review, but to introduce basic principles and emerging approaches to DNN interpretability that may provide context for psychiatric researchers interested in applying these methods. On the other hand, ML researchers interested in mental health research might also find this article helpful. In this paper, we discuss interpretability for supervised learning as most interpretation methods were developed under this context, but many of the methods can be generalized to semisupervised learning as well. Figure 1 may serve as a guide to aid readers in navigating the conceptual flow of this paper. Basics of DNNs DNNs belong to the broader class of neural networks (NNs) (1,2). 
As mentioned, the basic unit of an NN is an artificial neuron, which is a simple simulation of biological neurons. Biological neurons take input from other neurons, form an action potential, and then output signals to subsequent neurons via synapses. Artificial neurons are connected in an analogous way, and synaptic strengths are designated by numeric weights, with higher weights indicating a stronger connection. The action potential is simulated by a nonlinear "activation function," which typically shows a drastic change in output value once the input value exceeds a certain threshold. NNs are typically composed of "layers" of artificial neurons, which take a signal from their counterparts in the preceding layer and output to the next after the aforementioned transformation; typically, there is no connection between neurons in the same layer. In real-world applications, the number of neurons in each layer is usually large (starting from the order of hundreds to tens of thousands depending on the design). To further link NNs to other statistical models, it is noteworthy that logistic regression can be expressed as a simple case of NN: it is an NN with only two layers (input and output), with all inputs linked to a single output cell and a logistic activation function applied at the output cell. A DNN is a specific case of the general class of NNs such that it has at least three layers in its structure: an input layer, an output layer, and at least one layer in between, designated as a "hidden" layer. What makes DNNs unique are the hidden layers; because each layer includes a step of linear and nonlinear transformation, hidden layers make DNNs "compositional" in nature (e.g., functions of functions), which is shown to greatly increase the patterns that can be expressed by the model (19). On the other hand, it is in part the existence of these hidden layers in NNs that has contributed to concerns about their interpretability.
Common DNN Architectures
Currently, there are three common types of DNN architectures, namely (1) feed-forward NN, (2) convolutional neural networks (CNNs) (20), and (3) recurrent neural networks (RNNs) (21) (Figure 2). These structures can be used as building blocks (in the form of layers) for a more complicated DNN in flexible ways as long as model training is computationally feasible.
FIGURE 1 | Conceptual flow chart connecting ideas and articles reviewed and discussed in this paper. Each block between a set of arrows corresponds to a particular section of the paper. Numbers in parenthesis indicate relevant referenced articles.
Feed-forward NNs are the basic type of DNNs in which the batch of neurons within each layer is connected to, and only to, those in the previous and the following layers. Information is propagated from the input layer through a sequence of hidden layers to the output in a straightforward fashion. CNNs are motivated by imitation of biological vision systems and have been widely adopted for (but not limited to) computer vision and image-related tasks, such as reading pathology slides or brain images. The aim is to simulate the hierarchical nature of neurons in the visual cortex. The input neurons, usually representing pixels of an image, are typically connected to a smaller group of neurons that act as "filters," which scan through the entire image. The filter processes the entire input image by moving one or several pixels at a time, which, in the end, outputs a filter-transformed version of the image.
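To make the link between logistic regression and a minimal neural network concrete, the following sketch (NumPy, with illustrative weights and names that are not from the reviewed papers) computes the same forward pass both as a logistic regression and as a two-layer network with a logistic activation at the output cell, and then extends it with one hidden layer.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])                  # one example with three input features

# Logistic regression: inputs connect directly to a single output cell.
w, b = np.array([0.8, -0.3, 0.1]), 0.05
p_logreg = sigmoid(x @ w + b)

# The same model written as a two-layer NN (input layer -> output cell).
layers = [(w.reshape(3, 1), np.array([b]))]
a = x
for W, bias in layers:
    a = sigmoid(a @ W + bias)
print(a.item(), float(p_logreg))                # identical probabilities

# Adding one hidden layer makes it a (minimal) deep network: the hidden
# units learn intermediate representations between input and output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # 4 hidden units -> 1 output
hidden = np.tanh(x @ W1 + b1)                   # nonlinear activation
p_deep = sigmoid(hidden @ W2 + b2)
print(p_deep.item())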
The sliding nature of the scan is why the term "convolution" was coined, as it operates analogously to the convolution operation in mathematics. The main characteristic of the CNN is the "weight sharing" of the filter, in which the parameters defining the filter are fixed throughout scanning of the whole image. Filters are capable of learning to recognize meaningful abstracts (e.g., representations) of the image that are directly correlated with the task at hand. For example, to identify a table, representations such as sharp edges or a flat surface might be captured automatically. An RNN is motivated by the fact that there are forms of data in which observations may not be independent, such as time-sequential or text data. An RNN by design takes into account dependencies over the sequence by taking in inputs sequentially and allowing information contained in the hidden layer of the previous step to enter that of the following.
FIGURE 2 (caption, partial) | Here, each circle represents a layer of neurons. Information from the hidden layer of the previous step is allowed to enter the following step. X: input layer; H: hidden layer; Y: output layer. (C) A convolutional NN with two convolutional layers, two pooling layers, and two feed-forward layers. At the first step, the convolutional filter transforms the input image into six "feature maps" (e.g., images transformed by the filter). The feature maps are then summarized by pooling, which reduces the dimension of the feature map, usually by taking the maximum values of smaller regions (i.e., 3 × 3) that cover the whole feature map and combining them with spatial relations preserved to produce a new feature map. This procedure is repeated two times and then the network is connected to a two-layer feed-forward network to derive the final output.
Variants of basic RNNs, such as long-short term memory (LSTM) (22) and gated recurrent unit (GRU) (23), differ in the way information is allowed to propagate over time. These are motivated by addressing known issues in vanilla RNNs when transmitting information across a larger number of time steps and have shown improved performance in various tasks (24,25).
Attention Mechanism
Having discussed the three basic architectures of NNs, we turn our attention to computational processes that have been developed to further improve learning by DNNs. An "attention mechanism" was first described by Bahdanau et al. (26) as a novel component of a DNN-based machine translation model [machine translation being a type of task in natural language processing (NLP)]. Its idea is to let the model learn, for each input example, which parts of the input information it should emphasize, in the form of weights applied to specific inputs (the larger the weight, the greater the emphasis). Implementing an attention mechanism with DNNs of various types provides powerful model improvements and can yield state-of-the-art performance across many NLP tasks (27). In fact, application of an attention mechanism is not confined to modeling texts, but rather is natural to any DNN model assuming a sequential data structure. Attention mechanisms have also been adopted for modeling imaging data, although they have not been as prevalent as they are for sequential data (28,29). Because an attention mechanism is an indicator of importance, it could provide a venue to understanding model decisions, which are discussed in a later section.
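As a toy illustration of why attention weights can serve as importance indicators, the sketch below (NumPy; the matrices, dimensions and names are illustrative assumptions) computes attention over a short input sequence and prints the weight placed on each position. The scaled dot-product form is used here for brevity; the original Bahdanau et al. mechanism is an additive variant, but the interpretive idea, weights that sum to one over input positions, is the same.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attended output and the attention weights.

    Each row of the weight matrix sums to 1 and indicates how strongly a
    query position attends to every input position, giving a per-example,
    inspectable measure of importance.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over input positions
    return weights @ V, weights

rng = np.random.default_rng(1)
seq_len, d_model = 5, 8                  # e.g., 5 tokens of an interview transcript
X = rng.normal(size=(seq_len, d_model))  # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out, attn = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(np.round(attn[0], 3))   # how much the first position attends to each of the 5 positions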
Learning Model Parameters-Gradients and Back-Propagation In simpler models, such as linear regression, model parameters can sometimes be estimated with a closed form, fixed solution. For complex ML models, the parameters are usually difficult to solve directly, and one would have to rely on numerical approximations to obtain their estimates. An important class of these methods is based on updating the parameter to be learned with "gradients" during model training. In a heuristic sense, gradients express how model behavior would change with respect to a small change in a certain parameter or input value (e.g., the respective partial derivative) at the given value. Gradients are calculated for DNNs for all parameters using a technique called "back-propagation" (30), which is essentially applying a chain rule in calculus to derive derivatives with regard to the objective function (e.g., the function on which the model is being optimized) for every parameter in the model, taking advantage of the compositional nature of DNNs. Since gradient values corresponds to how model output would change by a subtle change in the input, they can also be used as a means for model interpretation. DATA TYPES, DNN MODELING, UTILITIES, AND EXAMPLES OF MODEL INTERPRETATION IN PSYCHIATRIC RESEARCH The scope of psychiatric research is massive and involves a variety of data types and structures, upon which model interpretation also depends. For this reason, we briefly discuss the structure of commonly seen data types in psychiatric research, how to model them efficiently by DNNs for a given research question, and why model interpretation could be beneficial. It is important to note that DNNs are inherently flexible, and there is no presumption of a "definitive" way to fit a particular type of data. In many cases, data can be fit well with more than one method or a mixture of methods. Neuroimaging Data Electroencephalogram (EEG) (31,32), event-related potential (ERP) (33), magnetic resonance imaging (MRI) (34), functional magnetic resonance imaging (fMRI) (35), and positron emission tomography (PET) (36) are imaging tools that are commonly used in psychiatric research. These techniques generate images that are either static (MRI) or time-varying (EEG, ERP, fMRI, PET). Static images are mostly suitable to be fit with CNN-based models (37). When time is taken into account, the time series can either be modeled with a CNN (31) or a mixture of CNN and RNN (35). In the case of neuroimaging, interpretations of the DNN models applied could shed light on the underlying brain structure or mechanism that corresponds to a measured phenotype or any other metrics of interest. An example is the EEG classifier and its interpretation described in Ke et al. (31). Omics and Molecular-Level Data Genetics and other-omics data are important in psychiatric studies as most psychiatric disorders are at least partially heritable (38,39). Although genetic coding information is sequential by physical structure, the exact dynamics of interplay between genetic and other molecular-level information is largely undetermined, less the typical structure of the DNA-RNAprotein cascade. Therefore, in the context of functional genomics, one might be inclined to model such assuming the least on data structure, such as a deep feed-forward network. For example, in Wang et al. 
(40), the authors employ a model based on the deep Boltzmann machine (41), the probabilistic analogue of feed-forward NN, to classify cases versus noncases for several psychiatric disorders using integrated-omics data. In this particular work, the authors derive model interpretation to a certain degree by assigning the hidden nodes to inherit heuristic meanings from observed nodes using a defined rule to construct linkage pathways between genotypes and phenotypes. More sophisticated model interpretation methods would allow additional biological insight to be drawn, for example, the contribution of a particular gene to the phenotype of interest in a certain cell type. Clinical and Epidemiological Data Clinical and epidemiological data range from those focusing more on individuals (i.e., more in-depth data collected from relatively fewer subjects)-such as interview records in the forms of text of videos or comprehensive questionnaires-to those that collect data from a larger group of subjects but possibly less indepth per subject, such as electronic health records (EHR), health insurance claims databases, and cohort data (42)(43)(44)(45). In most cases, these data are heterogeneous in structure. For example, EHR can contain quantified as well as text data. An optimal choice of model class depends on the actual data type involved and the study question. For example, if one is analyzing text scripts, then a model with an attention mechanism might be appropriate. If one is building a risk-prediction model for a certain phenotype, a feed-forward NN might be plausible due to minimization of prior assumptions to the data. On the other hand, derivation of model interpretation could be particularly crucial for models that are developed for clinical applications (e.g., decision support system) given the natural tendency for one to learn the basis for decisions made, not to mention those involving medical considerations that might involve risks and benefits. For example, one might build a suicide risk-prediction model to stratify those with higher or lower risk. Both the clinician and patient would then be inclined to be informed why, in a particular situation, the patient was classified as such. Behavioral Data Broadly speaking, any data that reflect human behaviors would be potentially informative to psychiatric research. For example, data collected from mobile phones and Facebook use may provide clues to depression and anxiety (46,47). In Dezfouli et al. (6), data collected through a bandit task is used to classify patients with bipolar disorder versus controls. Video or audio recordings of patients may also be used for modeling tasks, such as phenotyping. Again, the optimal model structure and interpretation method depend on the specific data collected. Interpretation of these models could facilitate a better understanding of the roles of behavioral features relative to the phenotype of interest. PRELIMINARY ISSUES AROUND MODEL INTERPRETABILITY In this section, we briefly discuss some of the background issues of which to be aware around model interpretability. "Interpretability" Is Not a Precisely Defined Term One of the recurring themes in the ML interpretability literature is the constant efforts toward a universally accepted definition of the term "interpretability" (8,13,14). Despite efforts (7)(8)(9)(10)(11)(12)(13)(14), thus far, there has not been an established consensus as how interpretability should be best defined in the context of ML. 
In our view, the question of interpretability can be approached from two different perspectives: (1) the perspective of science, in which a precise, formulated definition is required, and (2) the perspective of the interpreter, which arises from the psychological need to construct meaning out of things. Although most previous works discuss the issue from the former (7-14), we begin our discussion with the latter. In this paper, we define interpretability as the capability of a subject matter to be faithfully translated into a language available and a meaning sensible to the interpreter. By "faithfully, " we emphasize alignment with science despite the definition being human-centered. Last, we avoid using commonly used synonyms of "interpretability" to minimize confusion. The Importance of Context in Interpretation Discussions around model interpretability are often focused on its mathematical aspects. When applied to a specific task, however, it requires an added step of translating mathematical model components to the actual substantives. There is a substantial amount of contextual subtlety in psychiatry that cannot be readily extracted from quantified data, making this step particularly critical for psychiatric researchers. Thus, it is important for them to work closely with ML specialists to start from the design phase of a model and make valid and meaningful interpretations that naturally align with the contextual need. The General Accuracy-Interpretability Trade-Off DNNs are not alone in being tagged the "black box" property. Model classes that are usually deemed "interpretable" mathematically can become opaque when translated into context as model size grows. For example, it is reasonable to state a linear regression model containing fewer terms to be easily interpretable. However, it is less clear if a linear regression with thousands of dimensions and a collection of higher order interaction terms can be called interpretable; the model parameters would retain the same interpretation, but constructing a heuristic explanation from the subject matter becomes hard. The same applies to single decision trees when the tree grows deeper. Ensemble tree-based ML models, such as random forests (48) and gradient boosting (49), as well as support vector machines (50) and their variants are among the best performing non-DNN ML models by accuracy metrics. However, they also are less directly interpretable compared to decision trees and logistic regression, and methods were developed to help better understand how these models make decisions (51). As discussed in later sections, many of these methods can be applied to any ML models (i.e., model-agnostic) and, thus, can be applied to DNNs. Interpreting DNNs comes with a unique set of challenges. DNNs, unlike other models, consist of hidden layers in which automatic feature learning occurs, and one would be inclined to know the actual workings (i.e., the transformations taking place and the correlation between arbitrary layers) in these nodes in addition to the relationship between inputs and outputs. Also, as introduced previously, the structure of DNNs can vary, and the delicate information flow (i.e., via gradients, weights, and transformations) make them intrinsically a more complex subject to study. That said, there do exist tools to help us understand their mechanisms to an extent as we discuss in the following sections. 
GENERAL PROPERTIES OF APPROACHES FOR DNN INTERPRETATIONS Before we introduce specific methods, we first categorize them into a structure consisting of two important dimensions as previously described (51): (1) the classes of models to which the method is applicable (i.e., model-specific vs. model-agnostic) and (2) the scope of data at which the method looks (i.e., local vs. global). Model-Specific vs. Model-Agnostic Methods Model-specificity means that the interpretation method at hand can only be applied to a certain class of models. On the other hand, model-agnostic methods are applicable to any ML models in general. Model interpretation is carried out by inspecting components of a given model. Some components are universal to all models (i.e., inputs and outputs), and some are specific to certain structures of models; the same applies to the corresponding interpretation methods. For example, feature importance for tree-based methods-such as random forest or gradient boosting-are calculated from the number of split nodes involved for each feature and can only be carried out to models with the corresponding structure (52). DNNs are unique in ways that they are structurally compositional, followed by delicate calculations in gradients, and sometimes incorporate an attention mechanism. Accordingly, methods utilizing these structures would then be specific. In contrast, methods that involve direct manipulation of common structures, such as model inputs, are generally model-agnostic. Local vs. Global Interpretations An interpretation method can provide either summarized information about model behavior for each respective feature regardless of its value (i.e., global) or information about model behavior around the neighborhood of a specific data point (i.e., local, which may be data for a single patient or a single image). The decision between global vs. local interpretations needs to be made with respect to the context of the application. For example, the former might be more suitable when a model is applied to determine the strength of a relationship between a certain predictor and population-level (e.g., aggregated) outcomes, and the latter might be preferable when the goal is to inform rationales for modeled decision making for a specific patient. Many of the DNN-specific interpretation methods are local because most DNN-specific components behave differently in accordance with their value at model evaluation. Table 1 summarizes selected interpretation methods and their characteristics along the axes of locality and specificity to models. In psychiatry, researchers may be working withomics/molecular data, cohort or EHR data, free text, imaging or magnetic/electrophysiological data, behavioral records, questionnaires, or time series signals. To bridge this interest from psychiatry in interpretation of their use of DNN models, we introduce specific interpretation methods in three categories, namely (1) methods applicable to data of any type (hereafter referred to as "general inputs"), (2) visualization techniques for medical imaging data, and (3) utilizing an attention mechanism for model interpretation with free text data (26). Interpretation Methods Applicable to General Inputs Interpretation methods under this class utilize model components common to all DNNs, thus making them applicable in most research contexts. 
For example, one might be interested in building a risk-prediction model for a disorder of interest with a mixture of different data types (e.g., quantified clinical measurements, text data, imaging data, or genetic data) as predictors. The following methods allow interpretation of each predictor regardless of its data type. Permutation Feature Importance Scores Permutation feature importance score is a model-agnostic and global method (48, 77). The idea is to permute values of each predictor one at a time and evaluate performance metrics of models in which values of each predictor are permuted against those of the model in which the original input is used. Predictors contributing to a larger drop in model performance are given a higher importance score. In the case of DNNs, it is preferable to retrieve permutation importance scores from test data (instead of training data). The first reason is due to computing time; to run on training data would require retraining the model the number of times equal to the number of features, which is, in some cases, not computationally feasible. A second reason is that researchers are generally interested in generalizing the model to data outside of training. Given that DNNs tend to over-fit during training, the interpretation methods that rely on importance scores might just capture noise that contributed to over-fitting (51). An issue permutation importance scores possess and share with other methods involving singling out a particular predictor and then either performing shuffling or extrapolation on that predictor is when the predictor of interest is correlated with other predictors, which would result in making inferences with unrealistic data points or biased results (53). Another issue, as discussed in (59), is that permutation methods may underestimate the importance of features that have saturated their contribution to the output. Partial Dependence Plot (PDP) A PDP (49) is a model-agnostic and global interpretation method. It intuitively plots one or two predictors of interest on one axis and the output on the other axis, averaging out the effects of other predictors over their respective marginal distributions. Despite its simplicity, PDP is also known to produce biased results when predictors are correlated; it represents a commonly violated assumption of pair-wise independence among predictors (54). It also becomes increasingly difficult to visualize information with a large number of predictors (55). Individual Conditional Expectation (ICE) ICE plots the predictor of interest against the outcome in the same way that PDP does (56). However, it differs from PDP in that ICE plots a graph for each example while holding all other predictors constant at their observed values. Although ICE gives more detailed information on interactions between predictors than PDP (56), because configurations of other predictors are not collapsed to average values, it is similarly prone to bias when predictors are correlated; the plot may end up in regions where the combination of input values are improbable in such cases (51,54). Local Interpretable Model-Agnostic Explanations (LIME) LIME is a model-agnostic and local interpretation method (57), which produces interpretations for specific examples. Local behavior of complex functions can be reasonably approximated by a simpler function, such as the first-or second-order approximation. 
In the same vein, LIME approximates the actual prediction model locally by training a model that is deemed interpretable (i.e., a linear model). The LIME procedure first converts the input data from its original form into a set of "interpretable representations." Using text data as an example, in current practice, one might first transform "word embeddings" (78), which are vectorized representations of words and by itself incomprehensible to humans, to binary indictors for whether or not a particular word is present. Then, LIME constructs a "neighborhood data set, " which includes the example of interest, and a number of data points sampled close to that specific example in the interpretable representation space. After sampling, the neighborhood data are converted back to original features and run through the model to be explained, which, in turn, generates prediction for these inputs. The explanation model is then trained supervised on the labels generated by the actual prediction model using the interpretable representations as predictors. It is trained based on optimizing metrics that would encourage (1) closeness between the results coming from the explanatory model and the actual prediction model and (2) simplicity of the explanation model. Neighboring data points closer to the actual example of interest are given higher weights. Although LIME is conceptually intuitive, two general issues should be considered: (1) The procedure of finding the neighboring sample points in LIME are defined arbitrarily, and the generated neighborhood data set may include data points that would rarely occur in real-world settings and are also at risk of over-weighting them (51). (2) It has been shown that LIME explanations may not be robust when attempting to explain nonlinear models (58). For example, in DNNs, attributions of predictors can vary significantly for neighboring data points, which is unfavorable. Therefore, although the idea of LIME is appealing, there are questions waiting to be solved, and researchers should remain cautious when applying this method. Gradient-Based Methods Gradient-based methods are mostly DNN-specific and local (62, 67-72, 79, 80). They take advantage of the fact that the compositional nature of DNNs allows the use of backpropagation (mentioned in section DNNs in a nutshell), which enables efficient calculations of the gradients. Intuitively speaking, the greater the gradient for a predictor, the more important it is for model output at the input value of interest. Because the gradient for a certain predictor varies across values and usually interacts with other predictors, these methods provide local explanations. Also, because gradients are calculated along the path of the whole DNN model, gradient-based methods can be applied between any two layers of the model. In this class, integrated gradients and its variant (68,80) is based on integrating gradients on a linear path from a reference value, chosen by prior knowledge, to the actual value of a predictor of interest for a certain data point. For example, for an imaging data set, the reference value could be zeroes for each color channel for each pixel. This method avoids the pitfall of using vanilla gradients such that it avoids assigning zero attribution to a predictor when the gradient at that data point is zero, but the output does, in fact, change when the value of this predictor changes from reference to the actual value (68). 
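To make the idea concrete, a minimal sketch of integrated gradients is shown below in PyTorch. This is an illustrative reconstruction, not the reference implementation from (68) or any particular library; the toy model, the all-zeros baseline, and the number of interpolation steps are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Approximate integrated gradients for a single example x (1-D tensor)."""
    # Interpolate between the baseline (reference value) and the actual input.
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)   # (steps, 1)
    path = baseline + alphas * (x - baseline)               # (steps, n_features)
    path.requires_grad_(True)

    # Sum the target-class outputs so one backward pass yields gradients at all path points.
    model(path)[:, target_class].sum().backward()

    avg_grad = path.grad.mean(dim=0)                         # average gradient along the path
    return (x - baseline) * avg_grad                         # attribution per predictor

# Toy usage: a small classifier over 10 hypothetical predictors.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(10)
baseline = torch.zeros(10)                                   # reference value (a modeling choice)
attributions = integrated_gradients(model, x, baseline, target_class=1)
print(attributions)
```

Completeness (the attributions summing to the difference between the model output at the input and at the baseline) holds only approximately here, because the integral is replaced by a coarse Riemann sum over the interpolation steps.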
Although most gradient-based methods are theoretically applicable to any DNNs, most of these were originally developed for imaging data to visualize locations in images important for modeling. Some gradient-based methods are specific to CNNs, which are widely adopted for imaging data modeling. Visualization methods used for imaging data ("saliency maps") are discussed in a later section. Deep Learning Important FeaTures (DeepLIFT) DeepLIFT is a DNN-specific and local interpretation method (59). Like integrated gradients (68), DeepLIFT attributes the importance of each predictor by comparing the model prediction using an actual predictor value to a reference value. However, instead of actually integrating gradient values within the range of interest, DeepLIFT defines "multipliers" as its base building block, which is a simple averaged corresponding change in the output by changing the input of interest from a reference value to the actual value of the data point in question (the "contribution") and, therefore, can be perceived as a fast approximation to integrated gradients (59). The overarching guidance of DeepLIFT is that the contributions of all predictors are linearly added to give the total change in the output [i.e., the "summation-to-delta" property (59)]. The multipliers follow a chain rule-like property analogous to that of gradients in calculus and back-propagation. In an attempt to preserve the summation-to-delta property, total contribution is allocated to each input and then further separated into positive and negative compartments within each input. This way, DeepLIFT avoids misinterpretations that might arise from cancellation of numeric values with different signs. Note that, because the multipliers possess a chain rule-like property, DeepLIFT can assess contributions between any two arbitrary layers of neurons as other gradient-based methods can. It is not recommended (60) to use DeepLIFT in models in which multiplicative interactions occur [e.g., LSTMs (22)] because the summation-to-delta property is lost. As a heuristic explanation, problems may arise in cases in which approximating gradients across a range of input using its average value is inappropriate. SHapley Additive Explanations (SHAP) SHAP (61) is a local interpretation method. It can be either model-agnostic or model-specific, depending on which variation is being used. It builds on Shapley values from cooperative game theory (51). A Shapley value is by itself a metric to calculate feature attribution. The idea of Shapley values is that all features "cooperate" to produce the model prediction. In its classical form, the Shapley value is calculated as the weighted average of the change in modeled prediction comparing a model with and without a given predictor across all possible configurations (presence or absence) of the other predictors. Because this approach requires repeated assessment of model performance for a large number of iterations, it is computationally intensive and infeasible for DNNs. The SHAP framework starts with the observation that many of the feature attribution methods (e.g., LIME and DeepLIFT) can be categorized under a common class of "additive feature attribution models" for model interpretation. Then, within this additive model class, a unique solution to the explanatory model-the one that uses Shapley values as its coefficients to generate interpretations-would satisfy a set of favorable mathematical properties, such as accuracy in approximations (61).
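Before turning to the efficient approximations that SHAP introduces in the next paragraph, the classical, brute-force definition can be made concrete with a toy sketch. This is purely illustrative: the "absent" predictors are simulated by substituting a baseline value, which is itself a modeling assumption, and the exponential cost over feature subsets is exactly why this form is infeasible for DNNs.

```python
import math
from itertools import combinations

def shapley_values(predict, x, baseline):
    """Brute-force Shapley values for one example.

    predict  : callable mapping a list of feature values to a scalar prediction
    x        : observed feature values
    baseline : reference values used to simulate 'absent' features
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
            for subset in combinations(others, size):
                present = set(subset)
                with_i = [x[j] if (j in present or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in present else baseline[j] for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy usage with a transparent "model" so the result is easy to verify.
predict = lambda v: 2.0 * v[0] + 1.0 * v[1] - 0.5 * v[2]
print(shapley_values(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0]))
# For a linear model, feature j receives w_j * (x_j - baseline_j): roughly [2.0, 2.0, -1.5]
```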
SHAP then introduces several efficient methods to obtain the Shapley value solutions (in contrast to the classical approach mentioned in the above paragraph) to additive feature attribution models. For example, Kernel SHAP is a model-agnostic method combining LIME and Shapley values; Deep SHAP (61, 81) is a DNN-specific method combining DeepLIFT and Shapley values, making use of the compositional nature of DNNs to improve computational efficiency in obtaining Shapley value approximations. Compared to general cases of LIME and DeepLIFT, SHAP interpretations provide an additional theoretical guarantee of several favorable properties, grounded by established proofs originating from game theory (61). However, as noted in Alvarez-Melis et al. (58), SHAP may also be vulnerable to the nonrobustness problem observed in LIME. Visualization of Imaging Data It is natural to use visualization techniques to make sense of imaging data. In psychiatry, these data are primarily generated through the neuroimaging techniques described in the section "Interpretability" is not a precisely defined term. As a hypothetical example of the utility of model visualization tools, say a researcher is interested in building a classifier for schizophrenia based on a set of imaging data, such as fMRI scans. Once the classifier is built, these techniques allow the researcher to highlight which particular areas of an image would be primarily responsible for a case-control classification. The main idea of the perturbation-based approach is to remove or occlude a particular part of the input and observe the change in model prediction. Methods within this class differ in how the optimal areas to be perturbed are chosen and how the "change" in model prediction is assessed. One advantage of this approach is that we are then measuring the actual change of model output by intervening on the input of interest, instead of merely measuring the association between the output and input (62). However, due to the greater quantity of computation needed to implement this approach, these methods in most cases require longer computation times (66) and are less widely used in the literature compared to gradient-based methods. As mentioned, numerous gradient-based methods were originally proposed to create saliency maps. These explanation methods include the use of raw gradients [referred to as "vanilla" gradients (71)], guided back-propagation (69), deconvolutional networks (79), input × gradient (72), integrated gradients (68), grad-CAM (67), and guided grad-CAM (67). With vanilla gradients, the gradient for each feature (e.g., each pixel) is calculated as a local importance measure. Deconvolutional networks and guided back-propagation differ from vanilla gradients in the way the nonlinear transformations are handled. Input × gradient calculates a score based on the product of the gradient and the value of the feature. Grad-CAM is specific to models comprising a convolutional layer (CNN) and produces a feature importance heat map based on the product between the global average of gradients and the values of each feature for each channel of the convolutional layer of interest. Guided grad-CAM combines guided back-propagation and grad-CAM to enhance the spatial resolution of the original grad-CAM. Given the popularity and plethora of saliency maps, Ancona et al.
(60) investigated whether or not these methods satisfy two criteria: (1) sensitivity to model parameter randomization and (2) sensitivity to data randomization. In this context, sensitivity means whether and how much the output of the explanatory model would change if either parameter was randomly shuffled or data was randomly shuffled and the model was trained on the permutated labels. In their work, gradient-based methods are compared along with one perturbation-based method (65) and an edge detector (e.g., an algorithm that always illustrates the borders within an image). In the case in which parameters are randomized, the prediction model still preserves some capability to process information, using its structure as a prior. If a method is not sensitive to this permutation, then the explanation would not facilitate debugging the model, which is related to parameter learning. In the case in which labels are permuted, the relationship between the predictors and the labels based on the data-generation process is lost, and the model remembers each permutated example by "memorizing" it with over-fitting. If a method is insensitive to randomizing the labels, it implies the explanation generated does not depend on the data-generation process recorded by the model and, therefore, cannot explain the model from this perspective (60). In addition, Kindermans et al. (82) note that saliency maps can change their explanation when a transformation has no effect on how the model makes the decision, which suggests that these methods, although informative, still preside over robustness issues. Aside from saliency maps, which are a local method, we may also visualize models trained on imaging data using a global metric to obtain a global interpretation. For example, in Ke et al. (31), the authors built an online EEG classifier for depression, in which they measured and visualized the information entropy (i.e., the amount of uncertainty contained in a particular random variable) of the activation matrix of each EEG channel. Because entropy correlates with the possible amount of information the model can utilize during learning, such a measure can serve as an indicator of importance for the model to make decisions. Interpreting Models With the Attention Mechanism As mentioned previously, the attention mechanism was originally developed for applications in NLP. Language is the primary form of thought expression, carrying both contextual and syntactical information indispensable if one intends to learn about the mental state of another person. Indeed, in current psychiatric practice, assessments, diagnoses, and the majority of psychotherapies are carried out mostly by conversations or interviews in various forms as well as observations made and recorded by the therapist as clinical notes. Because the majority of such information is recorded in the form of text, NLP, the automation of text data analysis, is naturally an appealing component for psychiatry research. In recent years, DNN-based NLP has undergone significant progress and has achieved state-of-the-art performance across many NLP tasks (27), which can be translated for use in psychiatric research. Recent progress began with the invention of distributed word representations, such as word2vec and GloVe (78,83), which record co-occurrence information of words in vectorized forms. These techniques allow dimension reduction as well as a form of transferring prior knowledge of text into downstream models. 
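As a small illustration of how such pretrained vectors can be loaded and inspected, the sketch below uses the gensim library and one of its downloadable GloVe models. The specific model name, the probe word, and the use of gensim at all are choices made here for brevity, not a recommendation taken from the literature cited above.

```python
# Minimal sketch: loading pretrained GloVe vectors and inspecting their geometry.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")        # downloads the vectors on first use

vec = wv["insomnia"]                            # a 100-dimensional vector
print(vec.shape)

# Nearest neighbours by cosine similarity reflect co-occurrence statistics learned
# from a large corpus; this is the "prior knowledge" that can be transferred into a
# downstream DNN, e.g., by initialising its embedding layer with these vectors.
print(wv.most_similar("insomnia", topn=5))
```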
A further breakthrough occurred with the invention of the attention mechanism (26), which attempts to build weights that reflect which part of the input is more important for decision making. Although originally invented for NLP, models with attention mechanism are not confined to applications with text data, but they can also fit any sequential and imaging data as well, making these models highly relevant to psychiatric research. By design, it is natural to think that attention weights provide information on how decisions are being made. For example, through visualization, Clark et al. (73) systemically analyze attention layers of a landmark NLP model ["BERT" (27)] by first comparing behaviors across different layers as a trend and then focusing on behaviors of each attention head (e.g., a single set of attention values derived in an attention layer), during which they find certain attention heads were specialized in finding syntactic relations. The authors then probe the combined action of the attention heads within a single layer, and train supervised models based on labels of the location of the actual syntactic head of interest, using attention weights as predictors, showing the attention values are indeed predictive of the outcome. Last, they perform cluster analysis of all the attention heads in the model and show that heads in the same layer tend to be more proximate. Interpreting through attention weights is not free of problems. For example, Jain et al. show that (74), (1) although perturbation-and gradient-based methods are consistent to a degree between their interpretations, attention-based methods yield interpretations that correlate more weakly to those two approaches; (2) changes in prediction outcome upon permutation of attention weights are modest in many cases; and (3) it is not impossible to find another set of attention weights that are quite different from the original while fixing other parts of the model, and the predictions are unchanged. These suggest that attention weights might not play the main "causal" role in making modeled decisions as it intuitively suggests. Alternative approaches are created considering the issues. Ghaeini et al. (75) propose "attention saliency, " which, instead of looking at attention per se, visualizes a score defined by calculating the absolute value of the derivative of the model output with respect to the unnormalized attention weight and show that the attention saliency score provide more meaningful interpretation compared to vanilla attention weights on a natural language inference task. Instead of exploring attention weights, Aken et al. (76) takes advantage of the position-preserving nature of a BERT model in which the number of positions is constant across layers, and thus, the output of each layer can be perceived as a transformation of the input at the same position. They analyze the tokens produced by each attention layer from the BERT model, probe their properties with specific tasks, perform principle component analysis, and visualize clusters of token outputs at each layer. DISCUSSION AND FUTURE DIRECTIONS In this paper, we review existing methods for DNN model interpretation that are suitable for most commonly collected types of data in contemporary psychiatric research. We also discuss a substantial proportion of research questions that can be addressed using ML approaches. Indeed, the compositional nature and flexibility of DNNs carry both a blessing and a curse. 
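A brief sketch of how attention weights can be pulled out of a pretrained transformer for this kind of inspection is given below, using the Hugging Face transformers library. The model name, the example sentence, and the particular layer and head examined are arbitrary choices for illustration, not the analysis protocol of the studies cited above.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

text = "The patient reports feeling hopeless and exhausted."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each of shape (batch, heads, seq_len, seq_len)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
layer, head = 8, 3                      # arbitrary layer/head, chosen only for illustration
attn = outputs.attentions[layer][0, head]

for i, tok in enumerate(tokens):
    target = attn[i].argmax().item()    # token this position attends to most strongly
    print(f"{tok:>12s} -> {tokens[target]}")
```

Visualizing or clustering such weights, as in the analyses described above, starts from exactly this kind of per-layer, per-head tensor.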
Although DNNs bring out numerous breakthrough performances in a wide and growing variety of tasks, such complex systems are by nature more difficult to interpret thoroughly. In fact, our scientific body is just in the beginning stages of understanding some of their mathematical guarantees and why they works so well on many problems. For example, Poggio et al. (84) recently showed that the reason deep networks generalize well is partially explained by the fact that the gradient flow of a normalized network is intrinsically regularized and prove that approximation power of deep networks is superior to that of shallow networks under particular hierarchical compositional data structures. That being said, in real application, a sufficient interpretation for a particular instance does not necessarily involve a fully detailed understanding of all the mechanics. As summarized in this review, many of the interpretation methods utilize information that is most proximate to either the input or the output, and despite some mathematical properties yet to be met, these methods do yield interpretations that could serve a variety of purposes. Making DNNs more interpretable is a fast-moving field on both the theoretical and engineering sides. To facilitate application and ease the burden of engineering for researchers, many methods discussed in this paper are published with their respective code libraries. At the moment of the preparation of this paper, libraries such as PyTorch Captum (85), tf-explain (86), and Google LIT (87) further simplify the process by providing compilations of off-the-shelf, easy-to-use implementations of the methods in common frameworks. It is noteworthy that, although the advent of new tools provides convenience, it is necessary to explicitly describe the underlying metric [i.e., what quantity is actually being visualized; for example, information entropy is measured in (31)] when applying these tools or visualizing for interpretations because different metrics provide measurements to different constructs, and their properties should be transparent and open to interrogation whenever necessary. Many of the interpretability metrics are either a statistic or derived from a model apart from the model to be explained, and each may have its own issues in satisfying some desirable conditions (57-59, 68, 82, 88). Newer methods improve previous ones in accordance to some axioms, for example, integrated gradient improves on vanilla gradient in being sensitive when a specific neuron is saturated (68). New criteria are being iterated alongside new methods, and it is unsettled what would consist a standard set of desirable properties. This is also complicated by the fact that some of the properties are specific to the approach of interpretation. For example, "summation to delta" is specific to gradient-based methods. As new approaches are developed, new criteria specific to novel designs might be required. That said, it is desirable that at least a set of general properties, such as robustness as proposed by Alvarez-Melis (58)-e.g., interpretations for data sufficiently close should also be similarshould converge and be agreed upon. It is well known that current DNN models by themselves are not entirely robust (i.e., sensitive to small perturbations in input data) (58), and their interpretation may be nonrobust to artifacts as well (62). As raised in Alvarez-Melis et al. (58), it is an open question if the method for interpretation should be required to be robust when the model itself is not. 
Indeed, it has not been directly investigated how desirable properties correlate between the model and its interpretation method. One potential direction for both modeling and interpretation is to capture more invariant structure (e.g., invariant to noise or certain transformations) in the data, for example, conceptbased methods, i.e., approaches that either carry out predictions or can be interpreted by human-understandable concepts, such as ConceptSHAP (89,90). Heuristically, finding or imposing invariant structures into the inner working of a model indeed should improve robustness. Nevertheless, if a model is expected to deliver supra-human performances, then it might not be always reasonable to expect the model to be fully interpretable using concepts that are readily understandable to humans. An additional related issue arising from the nature of DNNs is the nonuniqueness of solutions, i.e., different sets of parameter estimations can be derived through repeatedly training the same model with the same data under slightly different conditions (e.g., initialization or hyper-parameters) as current optimization methods of DNNs can find different local minima across attempts (91). Selecting one particular solution undeterministically and automatically by the training algorithm among a set of possible solutions without a clear indication as to why that particular solution is chosen can by itself be seen as a violation of interpretability because it is then not clear why the algorithm would choose to do so. Therefore, an additional future direction to enhance DNN interpretability might be working toward model solutions that are more unique, possibly through ways such as improved data denoising, feature extraction, and representation learning that are more disentangled in order to smooth the landscape of the loss function. One last noteworthy consideration is DNN interpretation in the context of few-shot of transfer learning (92), which seeks to enhance model performance under the constraint of small sample sizes by utilizing information not explicitly or directly related to the current task (i.e., injection of prior knowledge in various possible forms). Although not as pervasive as conventional methods in the meantime in psychiatric research, few-shot learning methods are in a rapid phase of development and is particularly of interest to the psychiatric research setting due to possible difficulties in case or label collection. At the time of the making of this paper, to the author's best knowledge, there have not yet been published articles that formally discuss the interpretation methods reviewed here in the context of fewshot learning. That said, heuristically, it is obvious that the meaning of derived interpretations can change based on the way few-shot learning is performed. In the case in which few-shot learning is done through data augmentation, the impact might be less likely to be significant, as the structure of the model, the initialization of parameters, the hypothesis space, and the amount of information used during model training are mostly identical to a non-few-shot setting. However, when few-shot learning is done with a change in some of the aforementioned conditions, the amount of information utilized during training can drastically decrease, and the pathway on which parameter values change over the course of training (i.e., gradients) can be shortened and altered. 
In these cases, the meaning given by the interpretation methods is then no longer "marginal" (i.e., relative to noninformative initiation) but conditional on the given known prior. In practice, depending on the architecture of the fewshot learning model, interpretation methods may have to be specifically tailored to the given model to perform well. As an example, in a recent work from Karlinsky et al. (93), the authors propose a few-shot learning model for image classification and show that vanilla GradCAM fails to provide visualization for some of the modeled examples, in which a back-projection map designed as an integral part of their model performed nicely. In conclusion, with the current tools for interpretation, the "black box" of DNNs can have light shed on it and be inspected to an extent, and further improvements are constantly being made. After all, psychiatry itself is a very complex field, which implies mathematical models describing the patterns that have emerged in the field that would also be complicated and difficult to interpret. To solve this long-standing challenge, we must all be equipped to deal with and embrace the complexity when necessary. DNNs may act as a set of tools to help us discover patterns in psychiatric phenomenon that cannot be found otherwise. The various interpretation methods described above translate such discoveries into clinically meaningful and actionable findings. Alongside efforts to construct large and multidimensional data sets, a new wave of exciting exploration in psychiatric research awaits. AUTHOR CONTRIBUTIONS Y-hS is the sole author of this article and contributed to conception of idea, collection and summarizing materials, and the writing of the article.
Collaborative smartphone experiments for large audiences with phyphox We present methods to implement collaborative experimentation with smartphone sensors for larger audiences as typically found at Universities. These methods are based on the app"phyphox", which is being developed by the authors, and encompass simple data collection via web forms as well as a new network interface for"phyphox", allowing to collect real-time experiment data from an audience on-site or easy data submission for remote participants. Examples are given with practical considerations derived from first implementations of this method in a lecture hall with 350 undergraduate students as well as a global experiment to determine the Earth's axial tilt with smartphones. Introduction Ever since smartphones have become ubiquitous among students, the sensors in these devices have been used for experimentation in science education [1,2,3,4,5]. Allowing students to discover the world with their own measuring devices is not only considered to be a refreshingly unusual variation of student experimentation, but it is also a free and readily available chance to do quantitative measurements with digitally acquired data and digital data analysis. This aspect becomes even more relevant if dedicated measuring equipment is not available due to limited resources or because of disproportionate logistical requirements. The latter in particular applies to larger courses in higher education. In the context of a lecture or its accompanying exercise courses, this organized experimentation quickly becomes impractical and for this reason student experimentation is rarely realized in the context of a lecture so far. In contrast, experimental assignments using the student's own devices are an engaging alternative to purely mathematical assignments [6,7]. Surprisingly, while smartphones are mostly known and widely used because of their connectivity and networking capabilities, smartphone experimentation so far rarely takes advantage of this. Aside from few dedicated apps that aim at data collection for citizen science projects [8,9], typical smartphone experiments only use the data collected on a single device or require a manual export and subsequent merging of the data in a separate analysis tool. In contrast, the phone's connectivity could be used for more accessible and engaging collaborative experimentation across a large audience. In this paper we report on a new network interface for the education-centric sensor app "phyphox" [10] with examples for different learning situations and scenarios in which this new interface has been used. These include real-time collaborative experiments in a lecture hall with hundreds of undergraduate university students and a real-time collaborative experiment with users around the globe in an informal learning setting. The paper is structured to incorporate additional challenges or peculiarities with each example, starting with (a) an example for manual data collection not yet using the new interface to demonstrate how experimentation in a lecture context can be used to flip the experiment experience for physics students. Then (b) the new interface is introduced, followed by (c) an example on how it is used in the same lecture to transform the concept into a real-time experience for students. In examples (a) to (c) the physics students were asked to use their smartphones as an ocillating balance to determine the unknown mass of different objects [11]. 
We then discuss with example (d) the requirement to filter data from incorrectly conducted experiments to go on to (e) an example of data collected from even less reliable experimentators as users across the globe collect data to determine Earth's axial tilt. Finally, we present (f) the requirements to implement the network interface in other courses. a. Flipping the class room with a web form In order to perform a collaborative experiment with hundreds of students without a function to directly submit experiment data from within the experiment app, a simple web form can be used. In contrast to other methods like collecting results via email, a dedicated form forces students to enter their data in a given machine processable format, allowing for easy scaling to hundreds of students without an increased effort to manually merge differently formatted emails. We employed this solution in the lecture "experimental physics 1" of the winter term 2019/2020 for about 350 first semester students aiming for a Bachelor's degree in physics at the RWTH Aachen University. The assignment was part of the compulsory exercises that accompany the lecture, but this particular assignment was considered optional, allowing the students to earn bonus points which would allow them to leave out a different assignment. The idea was to flip the typical experiment experience known from most lectures on experimental physics. Instead of letting the lecturer derive a theory and confirm it with an experiment on stage, we aimed to let the students conduct the experiment without prior knowledge of its expected outcome and let the lecturer explain the data in a later lecture. A good experiment for this is a simple gravity pendulum as its construction only requires household items available to most of the students. The students were instructed to build a pendulum using their smartphones, a piece of string and a small plastic bag or paper roll to hold the phone. We specifically asked them to use a swing-like suspension with two or four strings as seen in figure 1a instead of a single string to avoid an additional rotation about the axis of the single string. This way, the pendulum experiment configuration included in phyphox can be used to precisely determine the pendulum frequency. Figure 1. a) Example setup for a home-built smartphone pendulum. b) Screenshot of the translated submission form for the students. The original form was in German. c) Plot of pendulum frequency as a function of distance from pendulum axis to center of mass. A total of 195 data points submitted by 65 student groups from multiple years are plotted as well as the expected behavior of a mathematical pendulum. Few data points corresponding to larger constructions of up to 5 m (done in a stairwell) were left out in favor of scaling the axis such that the other data points can be well distinguished. The assignment was done by students in groups of two or three and each group should repeat the experiment for three different lengths of string. The resulting data pairs of length and frequency were collected with a web form (figure 1b) that was integrated in the lecture's digital script which in turn was embedded in the lecture's virtual room on the University's learning management system. The experimental results were due before the lecture reached the topic of oscillators, so the lecturer, was able to use the student's data in this particular lecture. 
After deriving and solving the equation of motion for a mathematical pendulum, he could compare the theoretical expectation with the collective data of the students, mimicking the scientific method and using the students' unbiased experimental data. This procedure replaced the part of the lecture when typically an experiment with very few data points was conducted on stage. We received data sets from 49 student groups, who each submitted three pairs of pendulum length l and measured frequency f. While most pendulums were limited to a string length of less than 2 m, some setups in stairwells featured string lengths of up to 5 m, exhibiting a strong motivation for the students. As shown in figure 1c, despite few outliers, the data set generally reproduces the theoretically expected behavior of 2πf = √(g/l) very well. b. Implementation of a network interface for automated data collection Although this approach had worked reliably and was well received by the students since we had first used it in winter term 2016/2017, there are some drawbacks to using a web form separate from the experimentation app. In particular, collecting the data in one tool and then entering it in another one makes the procedure impractical for a real-time application. It also introduces an unnecessary hurdle, especially if used in informal learning settings where users are not familiar with an accompanying website like a digital lecture script. Therefore, we decided to implement a generic network interface in our data acquisition app "phyphox" [10] to allow for an easy exchange of data by the students. Phyphox is ad-free, open-source and specifically designed for science education. It is available for free on Android and iOS and features a fully documented file format that allows educators to design their own specific experiment configurations including choice of multiple smartphone sensors, experiment-specific data analysis, custom layout of graphs and visualizations, informative texts and user inputs like buttons or text boxes for numeric values. Every experiment configuration listed in the main menu of phyphox (figure 2a) is defined in this XML-based format. Educators can modify these or create entirely new experiment configurations specific for their classes or courses. These configurations can then be shared with students using QR codes (figure 2b). They can also be permanently integrated into the main menu of the locally installed phyphox app in a custom category and with a specific icon (figure 2c), thus allowing lecturers to build a section in phyphox that is specific to their course. Figure 2. Series of screenshots of phyphox on Android. a) Main menu of phyphox after a clean installation. b) Menu to add new experiment configurations, in particular by scanning a QR code. c) After scanning a QR code, another configuration is available in the main menu. The blue color and the "x" as an icon are customizable, as are titles and labels. The new network interface is also designed to be defined in this XML format. Since it is supposed to be open and compatible with as many server structures and protocols as possible, it allows one to freely define which data is being sent and how data received from the network should be handled. Sensor data as well as user input or the result from the data analysis defined in the XML file can be sent. Conversely, data received from the network can be analyzed or displayed. Also, metadata like the make and model of the device, the phyphox version or a uniquely generated user id can be submitted. To protect the users' privacy, phyphox will inform the user about the data sources used in an experiment and that data will be submitted to a network service.
Critical data sources like the microphone or GPS are indicated separately as well as critical metadata like the unique user id, which is unique only for a single service address to avoid tracking across services. A URL to a privacy policy can be added in the XML definition to inform the user about how the data is being processed. At the moment, only a static server address can be configured and HTTP and MQTT are implemented as available protocols with variants like GET or POST methods for submission via HTTP and JSON or CSV payloads for MQTT. Phyphox and the XML format are designed such that new protocols can easily be implemented and non-static server addresses (for example discovered via mDNS) can be supported in the future. For details about the exact configuration and current state of the network interface, please refer to the documentation on phyphox.org [12] or its snapshot for the time of this article in the supplementary material. c. Collaborative live experiments in a lecture hall The new network interface allowed us to do another oscillator experiment similar to the pendulum experiment in the same physics course but as a real-time experiment. In the context of being the last lecture before winter break we handed out a small clear plastic bag, a spring and chocolate bars to pairs of students. With these, students could construct a spring oscillator by putting the phone into the bag and attaching the bag to the spring (figure 3a). The chocolate bar could be used to modify the mass of the oscillator with an additional 100 g per bar or fractions thereof that can easily be achieved thanks to the chocolate bar's sections. The chocolate adds to the phone's weight, which the students could measure with scales in the lecture hall or simply research online. Of course, the additional mass of the chocolate bars can be replaced by other low cost material like a set of appropriate metallic nuts or shims. Figure 3. a) Example for a spring oscillator made from a spring and a plastic bag. b) A lecture hall participating in a live network experiment with collaborative results being presented on the main projector. c) Screenshot of the experiment configuration used in the collaborative spring experiment. Note that there is no submission button, but that results are being submitted periodically in this example. The goal of the collaborative experiment was to generate a frequency over mass plot to determine the spring constant for the identical springs used by the students. In order to collect the data, a QR code was given to the students with a phyphox experiment configuration specific for this event. This configuration allowed the students to enter the current mass of their oscillator and recorded data from the accelerometer, which is continuously analyzed to determine the oscillation frequency from an autocorrelation (figure 3c). Since the students could not easily press a button to submit data while the phone oscillates on the spring, the configuration was set up to periodically submit the current frequency to a server every ten seconds. A simple PHP script on the server was used to store the incoming data into a plain text file.
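The on-phone analysis itself is defined in the phyphox experiment configuration, but the underlying idea can be sketched in a few lines of Python. The snippet below is a simplified reconstruction under assumptions (uniform sampling, a single acceleration component, no windowing), not the actual analysis shipped to the students; it also computes the peak ratio that the next paragraph uses as a quality filter.

```python
import numpy as np

def frequency_and_quality(accel, rate):
    """Estimate oscillation frequency (Hz) and a 0..1 quality measure from an
    acceleration trace sampled at `rate` Hz, using the autocorrelation."""
    a = accel - accel.mean()
    ac = np.correlate(a, a, mode="full")[len(a) - 1:]   # autocorrelation for lags >= 0
    ac = ac / ac[0]                                      # normalize: zero lag == 1

    i = 1
    while i < len(ac) - 1 and ac[i] > 0:                 # skip past the zero-lag peak
        i += 1
    peak = i + int(np.argmax(ac[i:]))                    # dominant non-zero-lag peak

    quality = float(ac[peak])    # close to 1 for a clean harmonic oscillation, low for noise
    freq = rate / peak if peak > 0 else 0.0
    return freq, quality

# Example: a 2 Hz oscillation sampled at 100 Hz with a little noise.
t = np.arange(0, 10, 0.01)
trace = np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.randn(t.size)
print(frequency_and_quality(trace, rate=100))            # approximately (2.0, 0.95)
```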
To avoid duplicates and meaningless data points from handling the phone, the ratio of the maxima in the autocorrelation were used to get a measure for the quality of the data, which allowed the server to reject unusable data and to only keep a single data set per user id per oscillator mass. Finally, the real-time experience of the experiment was achieved by a Python script running on the server, which periodically ran through the collected data to generate the frequency over mass plot. The result was displayed and updated every few seconds on the main projector of the lecture hall (figure 3b). Multiple variants of data representation were available, including a plot with a model fit to extract the spring constant, allowing the lecturer to switch between views and discuss the results as he moderated the ongoing experiment. d. Filtering requirements for large data sets As the automated submission through the network interface along with the automated data analysis and presentation from the Python script allows to scale an experiment for large audiences while still being able to get real-time results, filtering incoming data becomes an important aspect. With an increasing number of participants the probability for erroneous data sets increases, which might interfere with the automated data analysis as fits are distorted by extreme outliers or the visualization might become unusable with erroneous data masking the actual measurement or throwing off the formatting as automated axes scale to extreme values. As larger audiences tend to be more anonymous, these erroneous data points can be intentional attempts by students to test the limits of the experiment, but in most cases simple mistakes can be sufficient to generate problematic data points. While we did not notice any intentional attempts to challenge our scripts, we noticed two common mistakes that highlight the importance to filter incoming data and the need for a suitable experimental design to avoid simple mistakes. The first problem was anticipated and had been accounted for in the design of the experiment. As the phyphox configuration submits derived quantities of the sensor data every ten seconds without confirmation from the students, this will naturally include frequencies derived from noise while handling the phone and partially recorded oscillations if the analysis interval overlaps with starting or stopping the oscillator. These situations could reliably be detected by evaluating the ratio of the peak of the autocorrelation used to determine the frequency to the maximum of the autocorrelated data. For proper harmonic oscillations, this ratio will be close to 1.0 and any ratio below 0.75 is rejected. In the real-time analysis in the lecture for each user id only the result with the highest ratio was taken into account for each mass entered. Additionally, the accepted frequency range was 0.1 Hz to 4 Hz and the mass entered by the students was limited to a range of 50 g to 1 kg. While we did not see any noise from handling the phone passing this filter, we did not take into account simply forgetting to update the mass after varying it. The expected result of decreasing frequency with increasing mass can clearly be seen in the student data, but there is a significant number of additional seemingly chaotic data points in the bottom left corner influencing the fit. These data points occur if a student changes the mass of the oscillator without entering the new value into phyphox. 
The resulting measurement passes all filters, but the frequency will be associated with the wrong mass and even substitute the former correct measurement if a high ratio for the autocorrelation maxima is achieved. As the students start with only their phone as the mass of the pendulum and usually add additional mass later, this mistake will systematically move data points from the correct high frequencies with low masses to the bottom of the graph as the same low mass is now associated with the lower frequency of an oscillator that actually has a higher mass. After repeating this oscillation balance experiment with a group of 50 first year physics teacher students during an experimental physics lecture in January 2019 at the University of Leipzig and observing the same mistake, we decided to group all data submissions for each student with the same mass and only take into account a single submission that was submitted in the middle of a series of identical mass values. The idea is that in most cases, the forgotten mass update would eventually be corrected by the students, so the wrong mass values would be at the end of the previous data series. Taking a value from the middle avoids early values that might include handling, but also cuts off contributions from a modified mass before its values has been entered. The other values have been discarded, but are still plotted as open circles in figure 4. The effectiveness of this strategy is supported by the observation that most outlying data points get eliminated this way. Note, that there are plenty of open circles at reasonable frequency/mass combinations that are discarded as well. These are additional valid measurements within a series submitted by the same user id and are therefore duplicates from the same individual experiment. e. Global experiment to determine Earth's axial tilt Network-based collaborative smartphone experiments can be pushed to a global level and can produce astonishing results. If these experiments are performed with experimentators who are not part of a physics course, the requirements for proper filtering algorithms increase. We conducted such an experiment on 21st December 2019, which was the day of the winter solstice, with the goal to trace the Sun's path across the sky and thereby determine Earth's axial tilt. To do this, we published a call to phyphox users through our website and social media channels to load a prepared experiment configuration (figure 5b) into phyphox. This experiment was designed to determine and submit the Sun's position as seen from the user's location multiple times throughout the day. To do so, the users have to align their phone with the Sun such that its y axis (long side of the phone) is pointing directly at it. This can easily be achieved by observing the shadow of the phone and turning it slowly until the shadow becomes as small as possible, matching the cross-section of the phone. The experiment configuration that we provided then uses the magnetometer as a compass to determine the Sun's azimuth and the accelerometer to determine its altitude by considering the orientation at which Earth's acceleration occurs in the accelerometer's frame of reference. The experiment configuration would also determine the user's location which is submitted along with a unique user id and the determined azimuth and altitude when the user presses a button. All submissions are collected on our server and a Python script was used to derive the Sun's path from these submissions. 
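As an illustration of the geometry involved, the altitude part of such a submission could be reconstructed from the raw accelerometer reading roughly as below. This is a simplified sketch under assumptions (the phone is at rest, its y axis points exactly at the Sun, and the accelerometer reports the reaction to gravity as a vector pointing "up", as on Android); it is not the actual phyphox configuration, and the azimuth additionally requires tilt-compensated magnetometer data.

```python
import numpy as np

def sun_altitude_deg(ax, ay, az):
    """Sun altitude above the horizon, from an accelerometer reading (m/s^2)
    taken while the phone's y axis points at the Sun."""
    g = np.sqrt(ax**2 + ay**2 + az**2)          # magnitude of the measured 'up' vector
    # The component of 'up' along the y axis equals g * sin(altitude).
    return np.degrees(np.arcsin(ay / g))

# Example: phone tilted so that its y axis points 30 degrees above the horizon.
g0 = 9.81
print(sun_altitude_deg(ax=0.0,
                       ay=g0 * np.sin(np.radians(30)),
                       az=g0 * np.cos(np.radians(30))))   # -> approximately 30.0
```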
However, the error rate we observed in the raw data is very high, which we attribute to users pressing the submit button out of curiosity without aligning their device with the sun, miscalibrated magnetometers or measurements near magnetic fields and misunderstandings of the experiment instructions like pressing the button after the phone has been moved away from its aligned position. In order to filter these results in an unbiased way, our script takes into account data points that span the period of one hour. For each group of data points, the script then determines an average location for the Sun and removes the farthest outlier from that result. The remaining points are used again for a new average and removal of another outlier, which is iteratively repeated until 50% of the data points in the period have been removed. This method favors the data points in the middle of the one hour period and ideally the data points at the beginning and the end of the period will be removed, too. Such a long period still helps to stabilize the averaging until the worst outliers have been filtered while still yielding a much better temporal resolution than one hour. Therefore, this process has been applied repeatedly to one hour intervals that are shifted by five minutes each. The result can be seen in figure 5a with several interesting features. As black points represent the location of a contributing user and red crosses the derived location of the Sun for each five minute shift, it becomes clear that we mostly have contributions from the northern hemisphere tracing the Sun that travels above a southern latitude. Unfortunately, there was some heavy overcast over Europe and parts of North America on that day, which made contributions from users in Central or Northern Europe very difficult and which might also have reduced the number of contributions from North America. Still, aside from few deviations, the filtered Sun positions form a clear path and a fit (blue line) yielded an average latitude of 23.03° ± 0.07°, which matches the actual value of 23.4° surprisingly well, with only a little systematic skew to the North which we attribute to the fact that almost all measurements were done from the North. With such an unbalanced distribution of users we expect that any systematic deviation in the execution of the experiment, like for example a slight raising of the phone to view the screen, also translates into a North-South deviation. We have made the raw data with reduced user location accuracy and replaced user ids available for students to work with on our website [13] and it is also available in the supplementary material of this article. f. Implementation for other courses As demonstrated in these examples, the network interface of phyphox is very versatile and can be adapted for many experiments. Although phyphox is open source (GNU General Public Licence v3) and the app itself could be modified by any user with sufficient skills, there is no need to do so as all examples can be realized within our XML format defining which sensors are read, which mathematical operations to apply to the data and how to transmit it. The file format and the network interface are fully documented on the project's website. However, using this interface for a custom experiment in another science course still requires some technical knowledge as the phyphox project cannot offer hosting services for this type of experiment.
Therefore, a server to receive the data needs to be set up to at least store incoming data and some basic knowledge of the used protocols (HTTP or MQTT at the time of this article) is necessary. Depending on the target audience, scale and duration of the experiment, this should also include sufficient IT security understanding to not expose the server and the experiment data to attackers. Still, we assess that this type of experiment can be set up in a trusted network environment by anyone with basic web hosting knowledge using PHP. An example is provided in the documentation and supplementary material. Conclusion and outlook Combining the data from smartphone experiments that are being conducted by multiple learners can open up many exciting and engaging new learning experiences. Integrating a network interface into an app that is used to access the sensors can effectively automate the entire process, especially when paired with automated data analysis and an automated process to present the data. The latter are not strictly necessary, but it allows for an immediate and conclusive feedback and we are convinced that seeing the result developing in real time is an important part of this concept to engage students. The biggest problem at the moment is a relatively high technical hurdle for the educators as a server and the technical knowledge to set it up are required. The best solution would be offering this concept as a ready-to-use service, but this would require funding for staff and hardware. The next best are examples that should allow teachers in higher education to find an assistant who can use these as templates to set up such an experiment. Once the technical conditions are met, this concept is very scalable and lifts almost any limits to the number of participants and their location. This allows for collaborative experimentation in remote learning situations, for a community of casual learners around the globe or simply to transform the experience of an entire lecture hall from the purely passive observation of a demonstration experiment to active participation.
Perceptions of Maternal Discrimination and Pregnancy/Postpartum Experiences Among Veterinary Mothers Objective: To describe perceptions of maternal discrimination and to begin to understand patterns around timing of starting families, infertility, and post-partum depression among veterinary mothers. Design: Cross-sectional questionnaire with closed and open-ended questions posted to a social media platform “Moms with a DVM.” Sample: 1,082 veterinary mothers in the United States. Procedures: An online questionnaire was administered regarding perceived discrimination, inequities in the work-place due to pregnant or maternal status, desired accommodations, timing of pregnancy(ies), fertility issues, and postpartum experiences. Results: At least one form of perceived discrimination was reported by 819 (75.7%) respondents (M = 2.6, SD = 2.1, range 0–10). Specifically, 789 (72.9%) reported maternal discrimination. Over half of the sample (n = 632, 58.4%) reported at least one instance of perceived inequity in the workplace due to status as a mother (M = 1.23, SD = 1.4, range 0–5). A majority (906, 83.7%) reported that their career had “definitely” or “maybe” affected the timing of their children. One hundred eighty-nine respondents (17.5%) experienced at least one miscarriage, and 192 (17.6%) used fertility treatment due to difficulty conceiving. Postpartum depression was diagnosed in 181 respondents (16.7%), and 353 (32.6%) reported symptoms consistent with postpartum depression but did not seek medical care. Of 953 participants who needed accommodations for breastfeeding and/or pumping while at work, 130 (13.6%) reported excellent accommodations, 454 (47.6%) adequate, 258 (27.1%) inadequate, and 111 (11.6%) had no accommodations provided. Conclusions and Clinical Relevance: Participants reported experiences of perceived maternal discrimination, as well as inequities and lack of support services due to status as a mother. These results highlight the need for attention and changes to ensure veterinarians have supportive and sustainable career options. INTRODUCTION In the last 60 years, veterinary medicine has shifted from a maledominated (nearly 90%) to a mostly female-dominated (about 80%) profession (1). Despite these demographic changes, female veterinarians are still paid less than their male counterparts, have a higher debt to income ratio (2) and experience gender discrimination (3). The number of women in the United States becoming mothers has increased over the last 20 years and in 2016, 86% of women were also mothers by the end of their childbearing age (4). A majority (51%) of working women in the United States say that having children has "made it harder for them to advance" in their career compared to 16% of men (4). In particular, women in science, technology, engineering, and math (STEM) careers specifically experience discrimination and face challenges navigating parenting and demanding careers. One recent study found that young parents were more likely to leave full time employment in STEM careers compared to their non-parent counterparts and that mothers left at twice the rate of fathers (5). A recent survey of physician mothers found that 66.3% perceive gender discrimination and 35.8% perceive discrimination based on their pregnant or maternal status at work (6). Navigating the challenges of parenting and pursuing a veterinary career contributes to overall wellness among veterinary professionals. 
Previous research by the authors shows that parental support by veterinary schools and training programs is lacking and that many trainees perceive that having children during their training years (veterinary school, internships and residency training programs) is not feasible (7,8). There is currently no data regarding maternal discrimination and the effects it may have on veterinarian mothers. The goal of this research was to explore perceived discrimination among veterinary mothers in the United States and was modeled after a study of physician mothers (6) to compare experiences with a similar population. An additional goal was to look at baseline data on decisions to start a family, infertility and post-partum depression in veterinary mothers in order to inform and direct future research in this area. Study Design and Overview This study was cross-sectional in design and used an online anonymous questionnaire, composed of both closed and openended questions, that was posted to a social media platform closed group "Moms with a DVM." Questions were designed to mirror data presented in the Journal of the American Medical Association (JAMA) (6) that investigated perceived rates of discrimination among physician mothers so we could compare their results to the experiences of veterinary mothers. Additional questions about infertility, workplace accommodations for parenting, and postpartum depression were added. Participants were eligible if they were over the age of 18 years, identified as a mother or pregnant, and had received a DVM or equivalent degree and lived in the United States. The research was reviewed and granted exempt status from the Tufts University Social, Behavioral, and Educational Research Institutional Review Board. The survey was administered by Qualtrics and was posted to the group three times between Nov 28 and Dec 10, 2018, with additional posts to sub-groups in the same time-frame. "Moms with a DVM" had over 10 thousand members at that time with approximately 200 new posts per day. Survey Inclusion criteria selected participants who were members of the group "Moms with a DVM" who were over 18 and who self-identified as pregnant or a mother. The questionnaire was composed of closed-ended questions to obtain the following data: demographic information, number and age of children, level of post-veterinary training obtained, type of current employment, whether participants had "ever felt discriminated against" based on 11 factors: their gender, maternal status, being pregnant or breast-feeding, taking maternity leave, race, ethnicity, age, sexual orientation, mental health status, or physical disability [derived from Adesoye et al. (6) study assessing maternal workplace discrimination in physicians]. Additionally, participants were asked about inequities in the workplace due to their maternal status: pay or benefits not equal to peers, not fairly considered for promotion or senior management, treated with disrespect by support staff, held to a higher performance standard than peers, and not included in administrative decision making (6). Participants were asked to select the top three workplace changes that would be most important "to you as a mother" from a set list. 
Options included: more flexible weekday schedule, higher pay, longer paid maternity leave, option to work part-time, support with home services, childcare onsite, backup childcare, option to not work on weekends, more vacation days, option to not take on-call, flexibility to work from home, additional support for breastmilk pumping, more sick days, and other (6). In addition, participants were asked about support and accommodations for breast-feeding or pumping, a question about how career choices influenced timing of pregnancy(ies), and if mothers experienced any infertility issues or post-partum depression. Finally, there was a space for open comments on any aspect of maternal discrimination. Data Analysis Descriptive statistics and frequencies were calculated using statistical software 1 . To evaluate associations between demographic variables and maternal discrimination, adjusted logistic regression models were used to estimate odds ratios and 95% confidence intervals adjusting for age and race/ethnicity (6). For sexual orientation and race/ethnicity, descriptive categories were collapsed into binary variables for the regression analysis since sample sizes in the non-majority individual categories were low (see Table 1). Qualitative data collected in the open comments section were managed using a qualitative data analysis software tool 2 . Responses were sorted into themes where each response could be tagged in as many thematic categories as appropriate. RESULTS A total of 1,160 respondents participated in the survey. There was a response rate of approximately 10% based on the total number of members in the group. Four surveys were removed for not meeting inclusion criteria, and 74 were removed for incomplete quantitative data (only participants with complete questionnaires were retained), leaving an analytic sample of 1,082 participants. Age of the participants ranged from 24 to 71 years old, M = 36.3, SD = 5.1; demographic characteristics are listed in Table 1. Of the 1,082 respondents, 819 (75.7%) reported experiencing at least one form of perceived discrimination (M = 2.6, SD = 2.1, range 0-10), see Figure 1. There was overlap between maternal and non-maternal discrimination, with 317 (29.3%) participants reporting both types. Likelihood of experiencing maternal discrimination did not vary significantly by the demographic variables, although veterinarians who worked in large animal practice were more likely to have experienced discrimination (Table 1). Over half of the sample (n = 632, 58.4%) reported experiencing at least one instance of perceived inequity in the workplace due to status as a mother (M = 1.23, SD = 1.4, range 0-5) (Figure 2). Specifically, 346 (32%) reported not being included in administrative decision making, 312 (28.8%) reported having pay or benefits not equal to peers, 289 (26.7%) were treated with disrespect by support staff, 206 (19.0%) felt they were held to a higher performance standard than peers, and 179 (16.5%) felt they were not fairly considered for a promotion or senior management position due to their status as a mother. A majority of the sample (906, 83.7%) reported that their career had "definitely" or "maybe" affected the timing of their children. Maternal age at the time of first child ranged from 18 to 44 years (M = 31.2; SD = 3.7). With regard to fertility, 189 (17.5%) of the sample experienced at least one miscarriage, and 192 (17.6%) used fertility treatment due to difficulty conceiving.
During the postpartum period, 181 (16.7%) experienced diagnosed postpartum depression, and 353 (32.6%) reported symptoms but no diagnosis, yielding a total of nearly 50% of the study population who experienced symptoms of postpartum depression. See Table 2. Open-Ended Responses There were a total of 269 meaningful responses to the open-ended question inviting comments on maternal discrimination or challenges in the workplace due to status as a parent. Comments that included "none," "N/A" or an incomplete thought were excluded. Comments that illustrate the range of responses for each category are provided in Table 3. Sixty-three responses (23.4%) were coded as "sexist, discriminatory or disrespectful comments made by staff due to maternal or pregnant status." There were 54 responses (20.1%) regarding pay or promotion status. Of these responses, 20/54 (37.0%) described losing a job due to maternal or pregnancy status, 14/54 (25.9%) described pay or status (full-time vs. part-time) being negatively impacted by maternal or pregnancy status, 10/54 (18.5%) said their promotion status was negatively impacted based on pregnancy or maternal status, 10/54 (18.5%) described being discriminated against during an interview process due to future or current maternal or pregnancy status, and 10/54 (18.5%) said they were not hired for a job due to pregnant or maternal status. There were 53 comments (19.7%) on issues of time pressure related to childcare and working status; 22/53 (41.5%) described difficulties around lack of a flexible schedule related to securing childcare, 18/53 (33.9%) described lack of ability to take time off to care for sick children, and 11/53 (20.8%) described other types of challenges around childcare and working. Forty-six respondents (17.1%) commented on lack of adequate leave time and/or pay. Thirty-six respondents commented on lack of appropriate time (20/36; 55.6%) or lack of appropriate space (14/36; 38.9%) for pumping. Sixteen respondents (5.9%) commented on safety issues during pregnancy; 12/16 (75%) said they had inadequate accommodations and 4/16 (25%) said they felt unsafe during their pregnancies. Five respondents (1.9%) said they regretted their choice to be veterinarians and/or were actively looking to leave the profession. Eighteen (6.7%) had positive comments and 38 (14.1%) were categorized as "other." DISCUSSION In this anonymous survey of veterinarians who are also mothers, the vast majority (about 75%) reported experiencing at least one type of perceived discrimination, with nearly 73% of respondents reporting discrimination based on their maternal status. In addition, more than half of respondents reported perceived inequity based on their maternal status. Although these responses targeted a specific social media group, the group can be subjectively described as an inclusive, supportive and diverse group of women who offer support and advice on a wide range of topics, both professional and personal. These data were from a small group of women who likely have an interest in this topic; however, the responses indicate that maternal discrimination and other issues for veterinary mothers are problematic, deserve additional research with more robust methodology and should prompt discussion of systemic institutional changes in the profession. Given that the veterinary profession is now largely made up of women (1), the widespread perceived discrimination likely has far-reaching and long-lasting impacts for the profession.
As has been demonstrated in the human medicine literature (6), perceived discrimination may impact rates of burnout, retention and career satisfaction in addition to impacting earning power. The overall frequency of perceived discrimination among veterinarian mothers was similar to that in a comparable survey of physician mothers: 75.7% of veterinarians and 77.9% of physicians experienced discrimination of any type (6). However, in our study, 72.9% of veterinarians reported perceived maternal discrimination as compared to 35.8% of physician mothers responding to a similar survey (6). Discrimination based on gender demonstrated a reverse pattern, with 39.1% of veterinarians reporting perceived discrimination and 66.3% of physician mothers (6). One possible explanation is that the higher percentage of women in veterinary medicine as compared to human medicine (in 2017, 80.5% of matriculating veterinary students were women, compared to 50.7% of medical school students) (1,9) influences the prevalence of gender discrimination. Compared to veterinary medicine, in which the first published papers exploring the social and cultural implications of the increasingly female workforce began to emerge in the late 90s (10) and the first paper focusing on parenting was published in 2018 (7), attention to the struggle of female physicians dates back to the late 70s (11) and attention to the struggle physician mothers face as they balance dual roles (parenting and being a physician) dates back to the late 90s (12). It is possible that the human medical profession has dedicated more attention to this issue dating further back, which has resulted in increased awareness and in lower rates of perceived maternal discrimination in physicians as compared to veterinarians. Regardless of the differences between perceived maternal discrimination among veterinary and physician mothers, the high prevalence of perceived discrimination in the workplace in both populations is significant and warrants attention as the professions work to improve wellness. The top three ranked accommodations desired by veterinary mothers were flexibility in the workday schedule, longer paid maternity leave, and childcare onsite. According to a recent survey of veterinarians by DVM 360, 64% of women and 42% of men would take less pay for more flexibility in working hours, highlighting the importance of flexibility in the workforce (13). Our results suggest that employers could improve job satisfaction by prioritizing flexibility for parents in the workplace. More research is needed into the types of flexibility that are desired by parents (i.e., being able to leave for an extended lunch break to visit a child, taking a weekday off as needed, revisiting the schedule yearly as parenting roles change with age) and into the feasibility and management systems that can be applied to provide flexibility. This may differ by workplace setting, and these data are skewed toward small animal veterinarians. Additional research to further describe accommodations desired and possible in different settings would be needed to help guide any future recommendations. Nearly 84% of respondents reported that timing of children was definitely or maybe influenced by their career choices. Recent literature found similar results among veterinary surgeons and found that women delay childbearing for longer than men (14). Given that the profession is predominantly made up of women and childbearing age overlaps with veterinary training and early career building phases for most people, this is unsurprising.
In this study, over 30% of respondents said they had experienced at least one miscarriage, which is higher than nationally reported rates of 8-20% (15). Reasons for the higher rates are unknown, but delaying pregnancy due to career choices and/or lack of accommodations and unsafe workplace environments may be contributing factors, as it is widely accepted that veterinarians face numerous hazards to reproductive health in the workplace (16). This study also showed higher rates of fertility treatment (17.6%) as compared with national rates (12%) (15), and higher rates of self-reported post-partum depression (over 30% in this study as compared with about 10% reported by CDC) (17,18). However, subclinical depression is underexplored, and should be an important component of future research in this area. Infertility has previously been shown to evoke distress, anxiety, and feelings of failure, loss and pain (19). This initial survey of veterinary mothers indicates that rates of infertility, and as a result stress associated with infertility, may be higher among the veterinary profession, contributing to recent literature and commentary on mental health in the veterinary profession. Additional data are needed to determine if this is true across more diverse samples of female veterinarians. The higher rate of fertility treatment observed among our sample may be associated with intentional delays in starting a family among the profession due to the perception that it is not feasible to do both at the same time (7); however, more research is needed to determine the drivers of fertility treatment among veterinary women, as well as the financial burden of fertility treatment on a profession known to be plagued by high student debt upon graduation.
Table 3 Representative comments in response to the open-ended question regarding maternal discrimination or challenges in the workplace due to status as a parent (comment type, representative comment(s), and number of comments).
- Sexist, discriminatory or disrespectful comments due to maternal or pregnant status (63 comments): "Office manager commented that we should only hire male vets in the future so they don't leave to start a family." "I have had clints choose other Drs since I am not as a available after office hours. I devote that time to family." "A client actually told me she was appalled I chose to be a mom and a vet. She felt I couldn't do that as a vet since my primary duty should be to my patients as a vet and not my kids."
- Pay or promotion negatively impacted or loss of job or not hired due to maternal or pregnancy status (54 comments): "I was fired from my last job 2 days before returning from maternity leave. I was replaced by the doctor I recommended to cover my maternity leave. I had been the only associate at practice for 7 years and no problems or anything other than praise until I announced my pregnancy. I watched 4 support staff get fired while pregnant or on maternity leave prior to me being fired." "I was not considered for partner even though I was a high producer and had a large client base. When I asked my boss for consideration he flat out to my face said no because I chose the family track." "At an interview, a male owner told me that I could never be a good vet and a good mom."
- Difficulties around lack of a flexible schedule related to securing childcare (22 comments): "My Chiefs of Staff were fine with schedule modifications for employees to care for their own pets, yet considered it unfair if I needed to leave at a certain time to meet the school bus or worked fewer nights than the other associates (even though I had reduced pay due to these scheduling necessities to provide care for my child)."
- Lack of ability to take time off to care for sick children (18 comments): "The few times my child has been sick, I have been unable to care for her adequately due to lack of support from my job to help find coverage."
- Other types of challenges around childcare (11 comments): "I requested to move my lunch break to the afternoon to pick up my kids from school, am so was told that I was 'stealing company time' when I was simply moving the hour provided to me for lunch."
- Lack of adequate leave time and/or pay (46 comments): "The biggest struggle as a mother was the length of maternity leave: only 6 weeks and unpaid. I work at a small practice, so being short a vet is tough for my coworkers, but 6 weeks was not enough time home with my baby!"
- Lack of appropriate time or lack of appropriate space for pumping (36 comments): "I'm having problems finding the time to pump as I'm not allowed to block out time, and when we get busy, that ends up dropping to the way side. I am also expected to answer phones and write charts while I pump, and therefore can never get a good letdown like I get at home, so I end up engorged and sore at the end of every day. My staff sees me pumping as an inconvenience and gets huffy when I ask them to finish things up while I go pump." "I was shamed for pumping at work. I was told it was disgusting and reprimanded for washing my pumping equipment at work after pumping." "I pumped in a supply closet with chemotherapeutic waste!"
- Inadequate safety accommodations (16 comments): "Unsafe radiation practices continued although I requested they end (rads taken without warning while unshielded people were in the way)."
- Regretted their choice to be veterinarians and/or were actively looking to leave the profession (5 comments): "I am actively seeking to leave the profession. The stress, lack of adequate pay, and time I am required to spend away from my children is not worth it. By the time I can get home from my job, there is minimal to no time to interact with my children." "I am seeking to completely leave veterinary medicine. It has been detrimental to my mental, financial, and physical health."
- Positive (18 comments): "I was working in a corporate hospital while pregnant and pumping, and I was treated with respect and given the time I needed. My short-term disability and generous PTO helped pay for most of my 12 weeks maternity leave." "I have been very lucky to have a supportive male boss who allowed me with no complaints 3 months of unpaid maternity leave, pumping accommodations, and the freedom to pick the days and hours I wanted to work part time. It has made returning to work very manageable and he has beyond earned my loyalty as an associate to stay indefinitely with the practice."
- Other (38 comments): "My work was supportive - more invasive comments from clients." "Previous employer (equine private practice) asked that I give a three-year verbal commitment to not having a baby when I joined the practice."
Veterinarians who worked in large animal practice were more likely to have experienced discrimination than veterinarians in other specialties. A recent study found that among veterinary surgeons, large animal private practitioners worked longer hours and had the most on-call responsibility, and that women earned less than men in this field even after adjusting for all relevant covariates (20).
In another study of veterinary surgeons, the same group found that women in large animal practice were less likely to be married, in a domestic partnership, and to have children compared to women in small animal practice (14). Collectively these findings indicated that there are differences in work-culture regarding gender dynamics among subspecialties in veterinary medicine, and that issues surrounding gender equity and maternal discrimination warrant closer attention-and provide an opportunity for meaningful intervention-across the profession. Women in veterinary medicine (14,20) and STEM professions in general are adversely affected in terms of their earning power and having children may widen the gap. "Even mothers who remain in the professional workforce full time encounter stereotypes painting them as less competent than equally qualified men and childless women, and face salary penalties and career barriers even while contributing the same dedicated work" (5). Maternal discrimination and lack of perceived support for veterinarians who also are parenting contributes to the mental health load and stress of many. This survey was a convenience sample administered through a Facebook group and limitations include a lack of diversity among respondents, possible selection bias and small sample size. Additional studies are needed to determine if these data are replicable in a larger population of veterinary mothers in the US. Despite these limitations, the high frequency of perceived discrimination among veterinarian mothers should be considered when thinking about the future of the profession and how to support current veterinarians. Recently, an article with a description of parental leave policies during medical training was published and included a call to action in the medical profession (21). The results from this study and prior related work (7,8) support the need for similar recommendations in the veterinary profession and indicate that veterinarians want changes. Qualitative comments from participants in this survey said "I feel like we are still in the dark ages. I faced discrimination when all three [of my] children were born and it has continued. My children were referred to as parasites. My maternity leaves were considered hardships for my co-workers. The other women without children I work with are resentful and have continued to insinuate I don't work as hard [as they do] due to my children." "During veterinary school one of the doctors in the clinic during fourth year told me that I could choose to be a mother or a doctor, but I couldn't do both effectively. She was a woman. I'll never forget how that statement made me feel as I already had two children. It was terribly deflating." The real changes needed to accommodate all veterinarians who also wish to be parents and have work-life balance are far reaching and require commitment at all levels of training and employment. In order to continue to attract top level talent and to create successful long-term careers, the professional organizations should consider implementing changes that support veterinary mothers (and fathers). The findings from this study support the need for future research in this area to further encourage changes to the profession that support veterinarian mothers and fathers as well as to further describe the ways in which maternal and gender discrimination impact the profession and how changes can be incorporated into veterinary medicine in a sustainable way. 
DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author, pending IRB approval.
2020-03-07T14:08:15.239Z
2020-03-06T00:00:00.000
{ "year": 2020, "sha1": "8913ba884ed73da9283d671bca602724741bf559", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2020.00091/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8913ba884ed73da9283d671bca602724741bf559", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
272817506
pes2o/s2orc
v3-fos-license
MetRec: A dataset for meter classification of Arabic poetry In this data article, we report a dataset related to the research titled "Meter Classification of Arabic Poems Using Deep Bidirectional Recurrent Neural Networks" [2]. The dataset was collected from a large repository of Arabic poems, the Aldiwan website [1]. The data collection was done using a Python script that scrapes the website to find the poems and their associated meters. The dataset contains the verses and their corresponding meter classes. Meter classes are represented as numbers from 0 to 13. The dataset can be highly useful for further research aimed at improving the field of Arabic poem meter classification. Subject Artificial Intelligence Specific subject area Natural Language Processing Type of data Text How data were acquired The data was collected by scraping the Aldiwan website [1]. The website is a large repository of Arabic poetry. Data format Raw text containing poems' verses along with their associated classes, numbered as integers in the range of 0 to 13 representing the 14 poem meters. Parameters for data collection The Aldiwan website [1] was scraped to collect poems along with their meters. 14 meters were collected. Description of data collection Python code was created to web-scrape the public webpages of the website. We developed a Python script to collect these poems. Poem meters were also collected and converted to integers between 0 and 13. Each class indicates a specific meter. Data source location The source of the data are the public web pages of the Aldiwan website [1]. Value of the Data • Arabic poetry is an important part of Arab heritage. The process of identifying Arabic poem meters is not straightforward. This dataset can be used for the purpose of automating this process. • The dataset can be considered as a benchmark for Arabic poetry classification. • Researchers can use the dataset to investigate further directions in the field. One possible research interest where this data can be useful is meter-based poem generation. • To the best of our knowledge, there exists no public dataset for Arabic poems with their associated meters. Data Description The data is divided into two sets: training and testing. The training set is stored in the file named 'train.txt'. It includes 47,124 rows. The file contains rows formatted as follows: each row contains the verse and its meter id separated by a space; the row starts with the meter id, followed by the verse. The verse consists of two parts separated by a special character, '#'. Meters are encoded as class numbers from 0 to 13. These numbers refer to the order of these meters in the 'lables.txt' file. Table 1 shows the meter labels and their corresponding names. The testing set is in the file named 'test.txt'. It includes 8,316 verses with their corresponding meters. The test set is formatted in a similar way to the training set. The total number of rows in the full dataset is 55,440. In Fig. 1, we show the distribution of meters in the training set, while in Fig. 2, we show the distribution in the test set.
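To make the row format just described concrete, here is a minimal parsing sketch in Python (the language the authors report using); the function name, the sample row, and the handling shown are illustrative assumptions rather than code from the published dataset.

```python
# Minimal sketch: parse one MetRec-style row of train.txt / test.txt.
# Row layout assumed from the description above: "<meter_id> <first_half>#<second_half>".
# The function name and the sample row are illustrative placeholders only.
def parse_metrec_row(row: str):
    meter_id, verse = row.strip().split(" ", 1)    # the meter id comes first, then the verse
    first_half, second_half = verse.split("#", 1)  # the two verse parts are separated by '#'
    return int(meter_id), first_half.strip(), second_half.strip()

if __name__ == "__main__":
    sample = "7 first-half-of-verse#second-half-of-verse"  # placeholder row, not taken from the dataset
    print(parse_metrec_row(sample))  # -> (7, 'first-half-of-verse', 'second-half-of-verse')
```

Reading a whole split would then simply be a loop over the lines of the corresponding file, applying this function to each row.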
Experimental Design, Materials and Methods The data was collected from the Aldiwan website [1]. The website has a large collection of Arabic poems. The number of Arabic poem meters is sixteen in total. However, not all meters are commonly used in Arabic poetry. Some of the meters are heavily used while others are less common. Additionally, there are two meters that are extremely rare; some Arabic scholars consider them non-existent [3]. We did not include them in the dataset. The distribution of the meters in the training dataset is illustrated in Fig. 1. Three meters have less than 1200 verses, while most of the meters have more than 4000 verses. Fig. 2 shows the distribution on the test set. The meter class '12', corresponding to the Hazaj meter, has the least number of poem verses, with 1033 verses in the training set and 168 verses in the test set, whereas meter class '7', corresponding to the Ramal meter, has the most poem verses (4281 verses) in the train set, and meter class '11', corresponding to the Wafer meter, has the most poem verses (4281 verses) in the test set. Table 1 Meter label and its associated name. The website from which we collected the dataset classifies the poems according to their meters. All poem titles and links belonging to the same meter are accessible through a stand-alone web page. We scraped the website by accessing each meter page first to get its name and poem links. Each poem is then accessed to scrape its verses. The programming language used for this task is Python with the requests library.
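As a rough illustration of the collection procedure described above (meter index page, then poem pages, then verses), the sketch below uses the requests library together with an HTML parser; the base URL, CSS selectors, and function names are hypothetical placeholders, since the authors' actual script and the site's markup are not reproduced here.

```python
# Hedged sketch of the scraping flow described in the text: meter page -> poem pages -> verses.
# All URLs and HTML selectors below are hypothetical placeholders, not the authors' code.
import requests
from bs4 import BeautifulSoup  # assumes an HTML parser is used alongside requests

BASE_URL = "https://example.org"  # placeholder for the poetry site's base URL

def scrape_meter_page(meter_page_url: str):
    """Return (meter_name, poem_links) scraped from one meter index page."""
    html = requests.get(meter_page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    meter_name = soup.select_one("h1").get_text(strip=True)        # placeholder selector
    poem_links = [a["href"] for a in soup.select("a.poem-link")]   # placeholder selector
    return meter_name, poem_links

def scrape_poem_verses(poem_url: str):
    """Return the list of verse strings found on one poem page."""
    html = requests.get(poem_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [v.get_text(strip=True) for v in soup.select("div.verse")]  # placeholder selector
```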
2020-11-05T09:08:59.212Z
2020-11-04T00:00:00.000
{ "year": 2020, "sha1": "22e7c2cac5f6d810f3777a0d966f79caabdd3910", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "79ca35ab05a0f0526053a4b321046d00361a8607", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
266413131
pes2o/s2orc
v3-fos-license
TRIBOLOGICAL IN METAL FORMING PROCESS AND THE USE OF BIO LUBRICANT AS METAL FORMING LUBRICANT: A REVIEW Abstract Due to its low scrap, high production rates, and the higher yield strength of cold-formed products after the forming operation, cold metal forming stands out as the most beneficial method of metal forming. Metal forming is one of the major manufacturing processes, and the use of lubricant in it is extremely important. When referring to lubricants, the term "bio-lubricant" refers to a stock that is both renewable and biodegradable. Fats and oils, which are derived from fatty acids, may be reacted with alcohols to create esters. It is crucial for the economy, people, and the environment that lubricants be developed and used efficiently. For this article, we scoured the literature and analyzed the most up-to-date research on the topic of metal forming process improvement and the use of vegetable oil as a metal forming lubricant. This review is intended as a case study demonstrating tribological analysis in the metal forming process and the potential of using bio lubricant in metal forming. In the future, new oil blends manufactured from new and conventional oil sources will be offered to the market for a variety of economic and health reasons. INTRODUCTION According to DIN 8580 from the German Institute for Standardization, metal forming is "manufacturing by plastic (permanent) alteration of the shape of a solid body while retaining both its weight and its cohesiveness". Depending on the forming mechanism (tensile, compressive, bending, shearing), the part to be formed (bulk, sheet), the time dependence (time-independent and time-dependent processes such as extrusion and upsetting), or the forming temperature (cold, warm, hot forming), metal forming processes can be broken down into several distinct categories. The metal is softer and more malleable when hot formed, where recrystallization assists deformation, while strain hardening can increase the strength of a cold-formed product. By eliminating thermal issues such as oxidation and distortion, cold forming may also allow for greater geometric precision and a better surface finish.
Forging is a common manufacturing process because it can be used both for high-volume manufacturing and for the creation of prototypes [1]. Forging is used in a wide range of industries to create a wide range of components, including balls for rolling bearings, small bolts, pins, gears, cam and crankshafts, shafts, axles, holding hooks, flanges, hand tools, aircraft landing structures, turbine blades, some medical instrument components such as surgical blades, etc. The need for micro-components has been rising in a wide variety of sectors recently. Making these kinds of intricate components requires the advancement of micro-manufacturing techniques, making this a critical concern. Metal forming is a promising method of producing metallic components because of its advantageous features, such as high productivity, low cost, and excellent quality of the formed parts [2]. Over the last two decades, researchers have dedicated a great deal of time and energy to studying the size effect on deformation behaviour in the forming process and to improving the process across the board. Having a comprehensive understanding of these studies is crucial for advancing metal forming as a means of supporting component design and development and for bringing this micro-manufacturing technique into the industrial world [3]. Some researchers have started to combine numerical analysis with experimental research. The finite element method is a very helpful way for researchers to predict what will happen under extreme conditions. Closed die forging is the main metal forming process for the mass production of middle-size or small forged parts [4]. Given the mechanical nature of the deformation in the closed die forging process, defining the material's behaviour in reaction to process factors, including billet shape, final forged component shape, and applied stress, is crucial. In addition, controlling the process parameters so that the optimum product is obtained is the goal of the manufacturing operation. Closed die forging processes, in contrast to other types of forging process, can create a large number of unique components. Because the direction of material flow and the grain structure of the material can be regulated, forged components are robust and durable. As a result, forgings are a reliable option for demanding and essential uses [3] [4].
Process factors, such as die shape and material behavior during deformation, as well as friction and the material flow properties in the mold cavity, are crucial in process design. It is also crucial to choose the right die materials, temperature, speed, lubrication, and machinery. There has been a significant rise in the use of design software systems in all stages of the forging production process [5]. Finite element techniques in conjunction with validation methods (such as mathematical validation, backward tracing, artificial intelligence, experimental validation and automatic control algorithms) are used to simulate and validate metal forming processes. In order to find an optimal die or billet shape, mathematical validation has been predominant [6]. Validation techniques are utilized to optimize the forging process for a certain objective function, to improve product quality and to save power, material, time and cost. This study used a finite element approach to simulate the forging process, with the goal of better comprehending the friction behaviour of the palm oil lubricant, the metal flow, and the stress-strain distribution in the forged product. Besides that, this research also analyses the process in terms of its independent and dependent factors. The use of vegetable oil as a lubricant is of growing interest in the industrial sector [7]. This is due to the fact that natural oils and fats are approved for use in the food, lubricant, and biodiesel industries due to their unique chemical characteristics and properties [8]. Green chemistry and the use of renewable raw materials have a bright future, and the use of environmentally friendly lubricants will play a crucial part in both of these fields. In general, vegetable oil has characteristics that are quite distinctive, offering a variety of options accessible in conventional petrochemistry [9]. Using vegetable oil as a lubricant in internal combustion engines has been shown to have the potential to lower carbon monoxide and hydrocarbon emissions [10]. The concern over the environmental impact of using petroleum-based products has grown in conjunction with the rate of mineral oil resource depletion around the world [13]. The petroleum base stocks used in most industrial lubricants today make them harmful to the environment and make it difficult to dispose of the products. Because vegetable oils with a high melting point are not simple organic compounds, the sample's consistency at any given temperature might range from completely solid to completely liquid to a mixture of the two [12][13]. Vegetable oils are flexible lubricants that have seen extensive application. High-oleic vegetable oils are a viable alternative to traditional mineral oil-based lubricants and synthetic esters [14]. Using vegetable oil as an automotive lubricant is a step in the right direction since it is inexpensive, clean, sustainable, biodegradable, non-toxic, and ecologically beneficial [15]. In conclusion, the metal forming process is one of the vital areas to research, and the presence of lubricant is essential for the process. However, compared to the study of its usage as an engine oil, the investigation of the use of bio lubricant in metal forming is still in its infancy. This review discusses the types, tribological analysis, and usage of bio lubricant in the metal forming process in a comprehensive manner.
METAL FORMING Metal forming is a large set of manufacturing processes in which the material is deformed plastically to take the shape of the die geometry. The tools used for such deformation are called die, punch, etc., depending on the type of process. The first forging processes, in which people produced tools, utensils and weapons, are estimated to date back to about 7000 BC, as mentioned by Canter [16]. There are two broad types of metal forming, namely bulk deformation [17] and sheet metal working [18], as shown in Figure 1. Common applications of metal formed products are components for automobiles, machine tools, construction, transportation and many more. Nowadays, metal forming processes may be applied to a wide variety of materials, including ceramics and polymers. The industry's rapid evolution has resulted in simpler, quicker, and more consistently cutting-edge procedures. A recent improvement to the fabrication department of Atlas Manufacturing, a precision sheet metal fabricator and stamper in the United States, was the adoption of many new laser and CNC punch machines. As a means of increasing the stamping division's speed and adaptability, they concentrated on shortening the amount of time it takes to switch between dies. As a result, the company developed new products and hand tools that reduce die-swap times by 50-70%. Worker efficiency also increases, as workers can focus on making many parts with less setup time [19]. Forging is one of the most well-known methods for shaping metals. It started with the hammer and anvil, though the introduction of water power to the production and working of iron in the 12th century allowed the use of large trip hammers or power hammers that exponentially increased the amount and size of iron that could be produced and forged easily [20]. Afterward, as the technology advanced, forging was used to produce cannon and rifle parts. Today, forging is used in different industries for the manufacture of a variety of parts such as balls for rolling bearings, small bolts, pins, gears, cam and crankshafts, shafts, axles, holding hooks, flanges, hand tools, aircraft landing structures, turbine blades, some medical instrument components such as surgical blades, etc. As in metal forming generally, there are different classifications for forging, such as in terms of temperature (hot, isothermal, warm and cold forging) [21], die (open, closed die) [22], and shape (compact shapes, disk shapes, long shapes) [23]. Figure 2 shows typical bulk deformation and sheet metal working processes used in industry. Figure 2 Example of bulk deformation and sheet metal working [25] Figure 3 illustrates the general forging process, which consists of a top die, a bottom die and a billet in the middle between both dies. Forging is the process in which the workpiece (billet) is compressed inside the chamber to form the desired shape through the die [26][27]. The finished product forms strictly according to the die pattern. As mentioned earlier, forging has two types of process, namely cold forging and hot forging. Classification of Operation Forging and extrusion are the two main processes that are frequently used in the development of lubricants when studying the tribological performance of metal forming processes. The cold forming process is strongly advised for lubricants with a low melting point, since low temperature metal forming procedures do not necessitate heating the material.
Forging Process The forging process results in discrete components being produced. It is possible to regulate the flow of metal and the grain structure, resulting in components with excellent mechanical characteristics (higher strength and toughness). As a result, they may be utilized reliably in high-stress and sensitive applications. Open die forging, impression die forging, and closed die forging are the three primary types of forging. Figure 4 Open Die Forging (a) Ideal with no Friction (b) with Friction [28] As seen in Figure 4, open die forging normally entails inserting a solid cylindrical workpiece between two flat dies and compressing it to reduce its height [28]. This process is commonly referred to as simple upsetting. Under ideal conditions, a solid cylinder deforms as shown in Figure 4(a). In reality, the specimen takes on the form of a barrel, as seen in Figure 4(b). In most cases, barreling is generated by friction forces at the die-workpiece interfaces, which act to prevent the outward flow of material at these interfaces. In closed-die forging, the workpiece acquires the form of the die cavities (impression) as it is compressed between the closing dies. A typical example is shown in Figure 5, where some of the material flows radially outwards and forms flash [28]. A significant level of pressure is applied to the flash as a result of its large length-to-thickness ratio. These pressures in turn mean high friction resistance to material flow in the radial direction in the flash gap. Because high friction encourages the filling of the die cavities, the flash has a significant role in the flow of material in impression die forging. Figure 5 Closed Die Forging [28] Because dies have a significant impact on the quality of the finished product, extra attention must be given to the design of the dies and the selection of the die materials. As seen in Figure 6, closed die forging necessitates larger forging loads, which means that forging equipment with a larger capacity than for other forging techniques is required. Extrusion Process Extrusion began with the extrusion of main pipes in the nineteenth century and was regarded as a relative newcomer in the industry, despite the fact that it is now one of the most well-known metal forming techniques. This is because, before 1930, the extrusion chamber could not be developed to correctly withstand the high temperature and pressure required for the extrusion of steels. It is now frequently used for both metals and polymers. Railings for sliding doors, window frames, aluminum ladder frames, rods, tubes, and numerous other solid and hollow parts are examples of commonly extruded metal products. Extrusion of plastics can also produce sheets, films, and wire coatings. Figure 7 General extrusion process The general extrusion process, which has a chamber, die, ram, dummy block, and billet as its main components, is clearly shown in Figure 7. Workpieces are forced through a die during the extrusion process [98]. One long continuous product is generated from the extruded material, which closely resembles the die pattern. Extrusion is a continuous operation, and the finished product, also known as extrudate, is then trimmed to the necessary lengths. Figure 8 shows the typical extrusion load curve, whose pattern is very different from that of the forging process, since the extrusion load exhibits a steady-state region.
Figure 8 Punch load versus punch displacement curves in forward rod extrusion [127] Although the cold extrusion technique requires higher loads and more expensive lubrication, and is limited to simpler shapes and less deformation, it offers additional benefits. The extruded item becomes stronger and harder due to strain hardening. Additionally, cold forming is important for producing metal components when a directional strength attribute, arising from the grain direction in the metal, is required. Accurate geometric tolerances and net-shaped features can also be produced, together with a superior surface finish [30]. Researchers specializing in cold extrusion have studied the technique using a variety of industrial lubricants, including mineral oil-based ones and alternative lubricants such as wheat flour, powdered soap, vegetable oil, and palm oil [30] [36]. Friction in Metal Forming The term "friction" is often used to describe the force that opposes the motion of two or more sliding objects. There are two types of friction that occur naturally: dry surface friction and lubricated surface friction [29]. Friction and lubrication play critical roles in several manufacturing processes, including forging, sheet metal forming, rolling, and extrusion. In metal-forming operations, friction influences both the forming forces and the stability of the deformed workpiece. A number of investigations of the role of stress at various stages of deformation have been undertaken [30][31][32]; these examinations often focused on how to determine the friction coefficient. Today, the finite element approach is the basis for a wide variety of computer programmes used in industry with the goal of optimizing the metal-forming process. Lack of information about the precise numerical value of the contact friction coefficient, however, may drastically restrict the accuracy of such analyses. Evidence from a number of studies suggests that the coefficient of friction may be calculated using only data on the typical material flow during a deformation phase [33]. This information, in turn, provides the expertise needed to automate a process and to design an effective die that avoids material defects and failures during the deformation phase.
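To make the link between the friction coefficient and the forming force concrete, the following sketch estimates the open-die upsetting force of a cylindrical billet using the standard slab-method approximation F ≈ Y·πr²·(1 + 2μr/3h); this textbook formula and the numerical values below are illustrative assumptions, not results taken from the studies cited in this review.

```python
# Hedged sketch: slab-method estimate of open-die upsetting force for a cylinder,
# showing how a larger friction coefficient raises the required forming load.
# F = Y * pi * r^2 * (1 + 2*mu*r / (3*h)) is a standard textbook approximation;
# the flow stress and geometry below are illustrative values only.
import math

def upsetting_force(flow_stress_mpa: float, radius_mm: float, height_mm: float, mu: float) -> float:
    """Approximate upsetting force in kN at the current billet geometry."""
    area_mm2 = math.pi * radius_mm ** 2
    friction_factor = 1.0 + (2.0 * mu * radius_mm) / (3.0 * height_mm)
    force_n = flow_stress_mpa * area_mm2 * friction_factor  # MPa * mm^2 = N
    return force_n / 1000.0  # convert N to kN

if __name__ == "__main__":
    for mu in (0.05, 0.1, 0.3):  # well lubricated -> poorly lubricated (illustrative values)
        print(f"mu = {mu:.2f}: F ~ {upsetting_force(200.0, 25.0, 30.0, mu):.0f} kN")
```

Even this simple estimate shows the trend discussed above: the same billet requires a noticeably larger press load as the friction coefficient at the die-workpiece interface increases.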
This technique of disk compression is considered a difficult method to perform in practice, since direct force measurement, knowledge of material properties and a special measuring tool are needed if the investigation is carried out at a high temperature.This method was therefore modified in an attempt to estimate the coefficient of friction for processes of metal forming [39][40], by following the same material flow characteristic theory but by modifying the specimen from cylindrical to ring geometry.This testing method was later referred as the test for ring compression. Amontons, [41] made the most detailed approach to the modem ideas on friction about hundred years ago.Almost a century later, Coulomb, [42] largely established the basic laws of friction and while recognizing that adhesion could play a part in friction, he considered the interaction of surface roughness to be the major factor.Tabor, [43] provided the general vital picture of the current understanding of the frictional mechanism.Tabor has illustrated three fundamental elements involved in the sliding motion of unlubricated solids: 1) The real contact area between the rough surfaces 2) Form and strength of the bond formed at the contact interface 3) The way in which the material in and around the contacting regions is sheared and ruptured during sliding From the definition of the friction coefficient, it is easy to understand the importance of these three elements; (1) Where; Q -Tangential force needed to shear the junction between the contacting surfaces F -External normal force. The actual contact load, P, in the true area of contact is different from F by the amount of the intermolecular forces acting between the surfaces in contact.These forces are referred to as adhesion forces, Fs, and hence. (2) (3) Via the broader issue of contacting rough surfaces, the contact load, P, is connected to the actual area of contact.Fs, or adhesion force, measures how strongly two surfaces are stuck together at an interface [44]. In metalworking theory, slip boundary conditions have several forms.They are constant kinds of friction and are referred to as the Coulombic and Tresca parameters, respectively [45][46].For the case of a Coulombic boundary condition, it is assumed that the frictional shear stress, τ, is directly proportional to the normal stress, p.The adherence results in the presence of adequate lubricant, the effect of adhesion, Fs in equation ( 3) can be neglected, therefore, these are referred to as the parameters Coulombic and Tresca, also known as constant forms of friction. (4) However, in metal forming processes, the interface pressure, p, can reach a multiple of the yield strength of material.Thus, the linear relationship between τf and P in -Coulomb's model is not valid at high contact pressure levels because the shear stress, τf, cannot exceed the shear strength, k, of the deformed material that is normally workpiece.Therefore, the coefficient of friction becomes meaningless when μp exceeds τf.Thus, to avoid this limitation of Coulomb's model, the shear friction model was proposed by Orowan [47].In this model, as shown in Figure 9, the frictional shear stress, τf, at low pressure is proportional to the normal pressure such as Coulomb's model, however it equals to the shear strength, k, at high pressure. 
For a cylindrical specimen subjected to uniaxial deformation, the values of τ and P would typically be derived from the radial coordinate, r, which is centered on the axis of the specimen. The Tresca boundary condition, in contrast, defines the wall traction as a function of the shear flow stress, τf, thus τ = m·τf (5), where m is known as the interface shear or friction factor. The least value, in the fully lubricated case where the normal wall stress induces plastic flow, is given by the uniaxial yield stress, σy, or the flow stress, σf. The maximum value of the coefficient is then given by the ratio of the shear yield stress, τy, to the flow stress, σf; hence, using the von Mises criterion, μ = τy/σf = 1/√3 ≈ 0.577, so that μmax ≈ 0.577 or m = 1. Several studies have expanded the approach of approximating the friction equations to achieve a better estimate of the material flow [38][48]. According to Tan [48], the method known as the general friction law gives a better solution in finite element analysis. The frictional stress, τ, is described by τ = f·α·τf (6), where f is the frictional factor, α is the ratio between the real and apparent contact areas, and τf is the shear flow stress. The above approximation in the estimation of p falls under the limit theorems of plasticity theory, more specifically the lower and upper bound theorems [49]. This approach is valid for a steady state problem. Different approaches rest on different assumptions (see Table 1). Based on experimental findings, multiple studies were performed to model the friction stress as a function of the distance from the deformation zone. The main difference between these theories, as shown in Table 1, is the assumption about the type of friction (viscous, or sticking and slipping) and how the frictional stress is distributed over the tool/workpiece interface. In practice, these techniques are rarely used by current FEM codes, which instead generally attach a friction model to the whole tool/workpiece interface interaction zone and for the entire process (Table 2). Many researchers have conducted metal forming tests using the Tresca shear friction model, as shown in Table 2, such as the work done by Harikrishna et al. [58], who carried out an FEM simulation analysis of the ring compression test using stationary and rotating dies under constant (Tresca) shear friction (TSF). It appears that the Tresca shear friction model correlates well with experimental results, and the findings also resemble those of Yahaya et al. [59]. Coulomb friction (CFC) is another of the most well-known friction models utilised in metal forming processes. In a study by Sofuoglu and Rasty [60], who used the ring compression test to measure the friction coefficient, the Coulomb friction model was found to give a good interpretation as a simulation model, and the experimental behaviour could be approximated simply using the simulation. That investigation reached the same conclusions as Robinson et al. [61], who used the CFC model, and the outcomes are comparable. There have also been a few researchers who have advocated for the employment of both friction models, i.e.
the Tresca shear friction and the Coulomb shear friction, as explored by Zhang et al. [62], to study the connection between these two friction descriptions. The findings showed that the ratio (k) of the Coulomb friction coefficient to the Tresca friction factor changes depending on whether the contact is lubricated or dry. Under lubricated conditions, the ratio k may be characterized by a parabolic function, whereas under dry conditions it can be described by an exponential function. Lubrication in Metal Forming Process Friction, lubrication, and wear between moving surfaces are the focus of tribology. The significance of tribological factors in bulk metal forming is generally recognized as influencing the life of the tool, the product flow during forming, the load applied, the lubricant's relationship to the process elements and the consistency of the surface finish. There are a few different lubricants available for use in metal-forming processes. Water-based lubricants, synthetic oils, and low-viscosity mineral oils are all viable choices for low press-working. Phosphate and other additive and low-friction coatings are often employed in elevated forming processes [63]. The effectiveness of lubricants in forging and other metal forming operations has been investigated by several prior researchers [64][65][66]. These researchers have not only studied the effects of different lubricants on the deformation process, but have also monitored other factors such as specimen configuration, coefficient of friction, change in material characteristics, and normal and shear stress. There are three types of metal contact during lubrication in the process, namely the hydrodynamic, mixed and boundary regimes of lubrication, as shown in Figure 10 [28]. There have been many iterations of lubricants created and tested for use in the manufacturing sector. Mineral oil, oleic acid, and stearate soap are some of the most often used lubricants [67][68][69]. Researchers have employed a broad variety of lubricants, each of which created a unique lubricated interface state and, hence, a unique coefficient of friction. To this end, researchers have been working on improving lubricant technology and creating novel lubricant formulas. In order to make conventional lubricants suitable for use in a metal working process, chemical additives were added to either lower or replace the quantities of active components. In 2001, Rao et al. [70] showed that boric acid worked well as a lubricant for aluminum sheets. Its performance is equivalent, throughout a broad range of forming parameters, to that of other commonly used lubricants such as polytetrafluoroethylene (PTFE), molybdenum disulfide (MoS2), graphite, and oleic acid.
Figure 10 Regimes of lubrication

It has been recognized that the efficacy of lubricants used in metal forming depends on many variables, such as the thickness of the lubricant, its viscosity, the original contact surface, the magnitude of the load, the chemical reaction between the contact surfaces, and the movement speed [71][72][73]. Lubrication is the process of providing the wearing interface with air, liquid or solid powder that serves as a film medium or promotes chemical transformation into a film material. Different lubrication regimes can be related to the mechanism that is most significant for lubrication in metal forming. Lubrication is controlled by different physical and chemical factors in each regime. As a result of minor changes in lubricant and workpiece materials, different regimes may occur depending on temperature, speed, surface roughness and geometry. Figure 10 describes and highlights the three major regimes concerned.

There are four primary lubrication mechanisms (i.e. dry, boundary, mixed-film and hydrodynamic) observed in metal forming processes. As shown in Figure 11, the Stribeck curve illustrates the onset of these various types of lubrication as a function of lubricant viscosity, η, sliding velocity, v, and normal pressure, p [75].

In a dry state, there is no lubrication between the two surfaces, hence friction is increased. This is an ideal state to achieve only in a few forming processes (e.g. hot rolling of plates or slabs and non-lubricated extrusion of aluminum alloys). When two solid surfaces are in very close proximity to one another, surface interaction between mono- or multi-molecular layers of lubricant and the solid asperities becomes the dominant contact mechanism [76]. For most stamping, forging, and hydroforming operations, boundary lubrication is the most common lubrication state. It is also common for sheet metal forming to involve mixed-film lubrication; in this case, the micro-peaks of the metal surface experience boundary lubrication conditions and the micro-valleys of the metal surface become filled with the lubricant. In metal forming, hydrodynamic lubrication is a rare occurrence that only happens under very particular temperature and velocity circumstances (e.g. sheet rolling operations).

The Stribeck curve is a plot of friction as it relates to speed, load and viscosity. The friction coefficient, f, is on the vertical axis. The horizontal axis presents a function combining the other variables, ηN/P, where N is the relative speed, η is the viscosity and P is the force on the contact [77]. The Stribeck curve is closely related to the concept of tribology, the science and technology of interacting surfaces in relative motion, which includes understanding and applying the concepts of lubrication, wear and friction. Numerous studies have examined the impact of lubricant viscosity in metal forming processes. Higher-viscosity mineral oil was investigated by Kawai et al. [78] under lubricated and dry frictional circumstances. In the dry frictional condition, the metal surface was polished smooth, in contrast to the lubricated state, where the frictional surface increased by over 26 times; however, no seizure was observed throughout the extrusion travel. In their plane strip drawing of modified aluminium sheets, Bech and Eriksen [79] found that higher interface pressure gradients (decreasing pressure towards the exit) lead to higher friction when viscosity is lower.
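As a rough illustration of how the Stribeck grouping ηN/P discussed above separates the regimes, the Python sketch below computes the parameter for a few assumed operating points and assigns a nominal regime. The threshold values and operating points are purely illustrative assumptions; real transition values depend on surface roughness, geometry and the specific process.

def stribeck_parameter(eta, speed, load):
    """Hersey/Stribeck grouping eta*N/P used on the horizontal axis of the Stribeck curve.
    eta: dynamic viscosity (Pa.s), speed: relative sliding speed (m/s), load: contact pressure (Pa)."""
    return eta * speed / load

def nominal_regime(h):
    # Thresholds are illustrative only; actual transitions depend on roughness and geometry.
    if h < 1e-9:
        return "boundary"
    elif h < 1e-7:
        return "mixed"
    return "hydrodynamic"

# Assumed operating points: (viscosity Pa.s, speed m/s, contact pressure Pa)
cases = [(0.05, 0.01, 200e6),   # slow, heavily loaded cold forming
         (0.05, 0.5,  50e6),    # faster, moderate pressure
         (0.20, 2.0,  10e6)]    # light pressure, high speed (e.g. sheet rolling)

for eta, v, p in cases:
    h = stribeck_parameter(eta, v, p)
    print(f"eta={eta} Pa.s, v={v} m/s, p={p/1e6:.0f} MPa -> eta*v/p = {h:.2e}, regime: {nominal_regime(h)}")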
Loads tend to diminish as long as oil remains in the contact, as pointed out by Shirizly and Lenard [80]. Lee et al. [81] used a variety of drawing oils in their analysis to show that when the viscosity of the lubricant decreased, the coefficient of friction increased. In addition, the surface roughness may go from very smooth to very rough when the coefficient of friction is significant.

For the purpose of cold rolling low carbon steel strips, Dick and Lenard [82] conducted an experiment utilizing three different commercial oil-in-water emulsions. Unexpectedly, they discovered that oil viscosity had minimal to no impact on the loads. There may have been less strain with greater roll roughness as a result of the increased speed. Golshokouh et al. [83] utilized palm oil with a higher viscosity to test the impact of employing a vegetable oil as a hydraulic fluid on the efficiency of a hydraulic system. The increase in oil viscosity was shown to be responsible for an improvement in volumetric efficiency over the ageing period. In addition, vegetable oil degrades more quickly than mineral oil, leading to a marked rise in viscosity as the oil ages.

A study by Syahrullail et al. [84] identified that a high-viscosity palm oil-based lubricant, RBD palm, has a good chance of lowering the friction and extrusion force. This is because the lubricant is capable of reducing the frictional constraint in cold metal forming to a degree similar to mineral-based lubricating oil. However, the existence of RBD palm stearin in a solid state at room temperature affects the surface condition of the workpiece, as a coarser roughness condition is observed in cold extrusion. Likewise, Norhayati et al. [85] discovered that the extrusion load and frictional constraint were reduced even when the experiment was conducted on the redesigned surface of the taper die. Hafis et al. [30], after testing a semi-solid mineral oil, observed that the load could be lowered by applying the proper amount of lubricant. Different oil viscosities were tested on a textured track surface and compared to a smooth track surface in a study by Sudeep et al. [86]. According to the results, an oil with a higher viscosity may reduce the friction value of the textured track surface.

Nature of Material Flow

Analyzing the properties of a product or material under test has always relied on experimental methods. The same can be said of the metal forming process, where this approach has been widely adopted by researchers from all around the world. Extrusion load, sliding velocity, force, metal flow pattern, and the influence of temperature are the most often analyzed variables in metal forming. The work includes research into the nature of the material flow both before and after the metal is formed.

In most cases, the size of the extracted metal is larger than the final product dimensions required for fabrication. Therefore, metal forming processes such as forging, extrusion, or rolling are necessary to deform the thick rod and bring it down to the appropriate size. There is no doubt that this procedure consumes a lot of resources and necessitates expensive equipment. Therefore, it is essential to understand the optimal forming loads required to produce the necessary substantial deformation.
Several studies up to this point have focused primarily on the study of forming loads in metal forming processes. Lakshmipathy and Sagar [87] looked at how the direction of die grinding marks affected friction in lubricated open die forging. The workpiece and the die were made from commercially pure aluminium and H11 steel, respectively. It was discovered that the forging loads and friction could be reduced by using a criss-cross grinding pattern between two sets of dies instead of a single, unidirectional grinding path.

Kim and Kim [88] performed friction and wear studies to learn how sliding velocity and normal load impact the tribological properties of a diamond-like carbon (DLC) coating for machine components. Sliding velocities ranged from 0.0625 m/s to 2 m/s, normal loads ranged from 6.1 kN to 49 kN, and temperatures were kept constant during all experiments against AISI 52100 steel balls. Higher sliding velocity and higher normal load both lead to lower friction coefficients. A higher sliding velocity also causes wear rates to rise, eventually reaching a maximum.

Figure 12 shows the schematic of the four different types of flow in the extrusion process. When friction is not present at the container and die contacts, flow pattern S corresponds to the extrusion of a homogeneous material. Flow pattern C, which results in a non-uniform temperature distribution in the billet, reflects heterogeneous material qualities of the billet: closer to the container wall, the material experiences more shear deformation and develops a larger dead-metal zone. The existence of friction with homogeneous materials during the extrusion process results in flow patterns A and B. Unlike flow pattern A, in which friction acts only at the die surface, flow pattern B has friction at both the container and die interfaces. As a result, flow pattern A represents indirect extrusion well, whereas flow pattern B is representative of direct extrusion. Because a longer dead-metal zone develops in flow pattern B, shear deformation also increases there.

Surface Finish and Precision

Tests for surface finish and precision are required to investigate product quality. All stages of material extraction and manufacturing are vulnerable to the effects of friction and wear [90]. Several factors must be taken into account in order to lessen friction, wear, and surface roughness. Numerous studies up to this point have revealed factors linked to such issues. Geiger et al. [91] state that characterizing and qualifying the surface of the workpiece during metal forming depends on a firm grasp of the trapping behavior of the liquid lubricant and the contact behavior of asperities at the tool/workpiece interface. Therefore, a proper lubricant choice may aid in mitigating the significant impact of such defects. Figure 13 shows the sample analysis for surface finish carried out by Aiman et al. [3], [6], who studied the effect on the workpiece surface of lubrication with different derivatives of palm oil.
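Surface-finish comparisons such as the one in Figure 13 are usually summarized by amplitude roughness parameters. The Python sketch below computes two of the most common ones, the arithmetic mean roughness Ra and the root-mean-square roughness Rq, from a sampled surface profile; the profile values are invented purely for illustration and do not come from the cited studies.

import math

def roughness_parameters(profile_um):
    """Compute Ra (arithmetic mean deviation) and Rq (RMS deviation) of a
    sampled 2D surface profile given in micrometres."""
    n = len(profile_um)
    mean_line = sum(profile_um) / n               # reference mean line
    deviations = [z - mean_line for z in profile_um]
    ra = sum(abs(d) for d in deviations) / n
    rq = math.sqrt(sum(d * d for d in deviations) / n)
    return ra, rq

# Invented profile heights (um) along the drawing direction, for illustration only
profile = [0.12, 0.35, -0.20, 0.05, -0.42, 0.30, 0.18, -0.15, 0.02, -0.25]
ra, rq = roughness_parameters(profile)
print(f"Ra = {ra:.3f} um, Rq = {rq:.3f} um")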
Figure 14 Correlation coefficient between coefficient of friction and roughness parameters under lubricated conditions [92]

Surface roughness of the die is a crucial factor influencing friction, which plays a vital role in metal forming operations, as proposed by Menezes et al. [92]. Figure 14 shows typical roughness parameters and their correlation coefficients with the coefficient of friction, used to analyze the surface of the metal. Friction has a significant impact on the contact area between die surfaces and on the drawing direction, as determined by Costa and Hutchings [93] in research that set out to quantify these effects. Another method of surface analysis is based on surface topography, as shown in Figure 15, which provides more detail on the surface behavior after the deformation process. Since cold forming was selected as the metal forming procedure for this investigation, it was crucial to look into how these factors affect the forming process itself. The longer the billet is, the more friction it encounters as it advances through the taper die, leading to a greater compression force [94]. While some friction is required for metal forming operations, too much may cause die wear, which in turn can disrupt metal flow and lead to product flaws [95][96]. Many researchers are devoting time and energy to finding solutions to the problems of friction and wear in materials processing because they are so important to the efficient and safe extraction and primary processing of raw materials.

MATERIALS AND LUBRICANTS SAMPLE TEST

Testing the viscosity and density of a lubricant is crucial for categorizing each lubricant according to its unique qualities. A fluid's viscosity is linked to its density, with a greater density potentially resulting in a more viscous fluid. The temperature of the fluid also plays a role in its viscosity; a decrease in viscosity may occur at higher temperatures. Gases, on the other hand, behave differently, with viscosity tending to rise as temperature increases.

A variety of tests, including tensile tests, hardness tests, and heat treatments, are often performed on experimental materials before they are put through the rigors of the actual experiment. Such characterization is useful for interpreting findings from the other testing parameters and should be employed for that purpose. In order to determine whether or not a material can resist a certain stress level before breaking, a tensile test must be performed. The ductility of a material may be evaluated by calculating its elongation at break and its strength under tensile stress. Without such data, the material is at risk of fracturing or rupturing when it is exposed to overload. Metals must have certain characteristics before they can be formed, and hardness is one of them. Hardness testing, a destructive method, plays a key role in choosing the appropriate materials for the tooling and the workpiece. The tooling material, in order to deform a workpiece into the correct shape, must be harder than the workpiece. Offering compliant products that satisfy the final consumer helps lessen the likelihood of product failure.

Several researchers have used the same tool hardness values in their analyses. The hardness of dies used in metal-forming operations is summarized in Table 3.
Softer workpiece materials, including pure aluminum and aluminum alloys, were employed and tested with a variety of tool steels. From the data in the table, it is clear that the normal range of tooling hardness for metal forming operations, and the cold extrusion process in particular, is between 48 and 66 HRC. Furthermore, because of the high levels of pressure and wear that dies are exposed to, these materials must provide adequate protection against wear, high fatigue strength, and high compression strength [97].

The qualities of the formed metals may differ from one another, depending on their intended use. Steel beams used for supporting loads, for instance, are very sturdy. The steel used to make a wall frame must have some ductility so that it can be bent into the right shape without breaking, but it must also be sufficiently strong to prevent cracking under stress. Based on the findings of prior research, many methods of employing lubricant, testing methodology, and friction analysis have been examined in the metal forming process. Mineral oil and molybdenum disulfide were the two types of lubricant samples used most often in metal forming. However, some researchers have begun looking into the possibility of utilizing biodegradable oil as a substitute for commercial metal forming oil, because commercial metal forming oil is made from mineral oil, which is harmful to the environment. Additionally, palm oil is an intriguing candidate as a metal forming lubricant, although part of its fractionated product has so far been evaluated only through experimental methods.

Method Approach and Analysis

The open forging test, the extrusion test, and the closed forging test are the typical types of testing performed in the metal-forming process. These tests are well known in the scientific community [105]. Most of the test methods involve the use of lubricant throughout the trial. Experimentation is the major technique in some of the method approaches; however, finite element modelling (FEM) software is now widely used and is a well-known tool that researchers can use to help them analyze the metal-forming process. Most of the lubricated samples were analyzed with a focus on frictional behavior at the interface, and most friction correlations are based on only a single form of frictional interaction. The investigation of the various kinds of friction that occur during the metal-forming process helps researchers to build the most appropriate model and to observe how the various kinds of friction affect the examination of lubricated samples. Table 4 summarizes the methodology and analysis used in previous studies. The ring compression test is one of the best-known metal forming tests and is based on an open forging condition [4], [23], [106], [68]. Figure 16 shows the typical experimental set-up of the ring compression test as proposed by Zhang et al. [4], [23], where the analysis is based on the deformation of the ring diameter. Extrusion is another forming method used by some researchers. Examples of extrusion approaches are the double cup extrusion process, which Lee et al. [105] explored, and the cold forward extrusion process, which Nurul et al. [23] investigated. Figure 17 shows the typical extrusion experimental set-up for both approaches.
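Since the ring compression test infers friction from how the inner diameter of the ring changes as the ring is compressed, a minimal Python sketch of that bookkeeping is shown below. It only computes the percentage height reduction and internal-diameter change that are normally read against published friction calibration curves; the dimensions are assumed for illustration rather than taken from the cited experiments.

def ring_test_deformation(h0, hf, di0, dif):
    """Percentage reduction in height and percentage change of the internal
    diameter of a compressed ring (a standard 6:3:2 geometry is often used).
    A decrease of the internal diameter indicates high interface friction,
    an increase indicates low friction."""
    height_reduction = 100.0 * (h0 - hf) / h0
    id_change = 100.0 * (di0 - dif) / di0   # positive = bore shrinks = higher friction
    return height_reduction, id_change

# Assumed measurements in mm (illustrative only): outer 12, inner 6, height 4 ring
before = dict(h0=4.0, di0=6.0)
after_low_friction  = dict(hf=2.8, dif=6.4)   # bore grew: well lubricated
after_high_friction = dict(hf=2.8, dif=5.1)   # bore shrank: poorly lubricated

for label, after in (("low friction", after_low_friction), ("high friction", after_high_friction)):
    dh, ddi = ring_test_deformation(before["h0"], after["hf"], before["di0"], after["dif"])
    print(f"{label}: height reduction = {dh:.1f} %, internal diameter change = {ddi:+.1f} %")

In practice these two percentages are plotted against friction calibration curves generated by FEM or upper-bound analysis to read off the friction factor m or coefficient µ.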
Since the sample can be shaped according to the model required, the closed forging process is one of the most flexible metal forming processes, with a very wide range of approaches. Figure 18 displays some findings from the study by Aiman et al. [3][6] that used a closed forging process, in which the process involves a mould for the workpiece.

Lubrication Sample Derived from Bio-based Lubricant

A few researchers have tried to design a more sustainable lubricant that may be utilized to replace mineral oil-based lubricants. Tatematsu and his colleagues [107] investigated the use of beef fat in comparison to mineral oil-based lubricants in 2018, and the results suggest that beef fat reduced friction in ring compression tests, where the deformation of the workpiece reached its maximum. However, the results were limited to the friction and the stress distribution in the workpiece; information on the surface behaviour of the workpiece was not discussed clearly.

When discussing lubricants made from renewable resources, vegetable oil is often cited as one of the most significant sources currently available. In the context of metal forming, Abdulquadir and Adeyemi [108] have discussed how crucial it is to employ lubricants made from bio-based materials. In their research, several vegetable oils (palm kernel, groundnut, shea butter and red palm oil) were selected as metal forming lubricants and compared to a black soap sample as a benchmark lubricant. The results show that some vegetable oil samples, namely red palm oil and shea butter oil, gave lower friction than black soap. The lower friction for both of these oils may be due to differences in the free fatty acid composition of the lubricants, as discussed by Aiman et al. [33], who noted that a higher content of unsaturated fatty acid in the lubricant provides more bonds to break during metal-to-metal contact, which can reduce the friction. Campen et al. [122] confirmed that this double bond causes oleic acid (unsaturated) to take a cis-configuration, which bends the molecules and makes it hard for them to adopt a linear molecular configuration. Unsaturated fatty acids are therefore far less efficient in forming tightly packed monolayer soap films. The less compact packing density causes the fatty acid chain molecules to have less attraction to metal surfaces. Meanwhile, the molecules of palmitic and stearic acid, which are saturated fatty acids, have a great capacity to pack tightly and effectively on metal surfaces [123][124].
Other research using vegetable oil-based lubricants in metal forming was done by Syahrullail et al. [111][112], [125], who used palm stearin as a metal forming lubricant; this line of work was later developed by Nurul [113][114], who used another type of palm oil-based lubricant together with a modified mould die to find the most appropriate mould design for reducing the compression force and improving the extrusion process. Their research indicates that palm oil-based lubricants have the potential to be utilized as metal forming lubricants, since the friction coefficient of the palm oil-based lubricant is somewhat lower than that of the mineral oil-based one [112]. This finding was confirmed by Nurul in 2015 and 2016, where palm stearin showed better performance than VG 460 and VG 95 (viscosity-grade mineral oils) [113][114]. At room temperature, RBD palm stearin exists in a semi-solid state, and it completely transforms into a liquid at 40°C; this physical condition makes the extrusion process easier and reduces the required force. It is also worth noting that PMO VG460 is thicker than PMO VG95 and may move more slowly during the extrusion process. The modification of the taper die during the research shows an improvement in the extrusion process, where the extrusion load decreased significantly for the taper die with micro pits [126] for both types of lubricant (vegetable and mineral oil-based).

CONCLUSION

This article provides a thorough analysis of current advancements in the field of metal forming, including tribology in metal forming, method approaches for metal forming testing, and the use of bio-lubricants as metal forming lubricants. According to this assessment, the metal forming process involves a variety of testing methods, the majority of which are experimental. To improve the scientific study, finite element analysis is required to observe the stress-strain within the workpiece under various levels of lubrication and to assess friction during compression. There is still a dearth of theoretical knowledge of the many lubrication mechanisms, despite the fact that some of these mechanisms have been documented in the scientific literature as being responsible for increased tribological performance. In order to have a complete understanding of the lubricating mechanism of the metal forming process, more research studies, both theoretical and experimental, are required.
Most metal-forming lubricants have been mineral oil-based, and the introduction of bio-lubricants has attracted the attention of scientists as a means of creating a manufacturing process that is more environmentally friendly. From this review, a few researchers have started to use bio-lubricants, derived from animals and plants, in their research. According to the findings of the reviewed studies, several of the bio-lubricants have the potential to be used as metal forming lubricants because their long polar fatty acid chains and molecules are able to provide protection on the contact surface, resulting in less wear and friction. Besides that, the presence of unsaturated fatty acid is also able to reduce the contact impact between the workpiece and the packed alkyl chains, which interact through the cumulative short-range van der Waals forces that exist between neighbouring groups. A greater amount of close-packed material leads to improved affinity on the metal surface; the unsaturated fatty acid has a double bond on its ninth and tenth carbon atoms, which separates it from saturated fatty acids. Nevertheless, most of the testing techniques for bio-lubricants have been restricted to experimental testing only. In order to conduct in-depth research on vegetable oils, additional methodologies, such as the finite element technique, are required.

Figure 3 General forging process
Figure 6 Typical load-displacement curve for closed-die forging [28]
Figure 9 Relationship between contact pressure and frictional shear stress
Figure 12 Schematic of the four different types of flow in extrusion, S: frictionless; A: friction at die surface; B: friction at both container and die surfaces; C: more friction at container wall with a more extended dead-metal zone [89]
Table 1 (fragment) Friction models, stress distributions, main assumptions and applications: Coulomb model, dry slipping over the whole tool/workpiece interface, friction stress τ directly proportional to local normal pressure p, mainly used for cold metal forming due to its simplicity [54]; shear (Siebel) model, slipping over the whole tool/workpiece interface, τ proportional to the shear flow stress (σ is the yield stress), the most popular model owing to its simplicity [54]; sticking model, sticking over the whole interface between tools and workpiece, used for hot metal forming or unlubricated cold forming of "soft" materials
Table 2 Friction models normally applied in FEM
Table 3 Summary of tooling hardness for metal forming processes' dies
Table 4 Reports on lubricant and analysis for metal forming processes
Effect of Chitosan Treatments and Vacuum Packaging on the Shelf Life of Spangled Emperor Lethrinus nebulosus Fillets Stored in Refrigerator

The effect of vacuum packaging (VP) on the quality changes of Spangled emperor (Lethrinus nebulosus) treated with or without chitosan during refrigerated storage of 12 days was investigated. Treatments included the following: control (untreated samples stored in air), VP (untreated, stored under vacuum packaging), FV (treated with chitosan film, stored under vacuum packaging) and CV (treated with chitosan coating, stored under vacuum packaging). Coating or wrapping samples with chitosan prior to VP remarkably delayed the growth of the total viable count and the psychrotrophic count. Production of total volatile base nitrogen and trimethylamine in FV and CV Spangled emperor samples was significantly lower than in untreated samples at day 12 of storage. A correspondingly lower rate of increase in thiobarbituric acid and free fatty acid values was obtained in Spangled emperor coated with chitosan and stored under vacuum packaging. Vacuum-packaged samples coated with chitosan showed significantly (P < 0.05) smaller changes in color values than uncoated samples. Therefore, Spangled emperor treated with chitosan and stored under vacuum packaging had the lowest losses in quality during refrigerated storage. In addition, there was no significant difference between coating and film in reducing the bacteriological, physicochemical and color parameters.

Introduction

The Spangled emperor (Lethrinus nebulosus) is one of the valued fish species in the Persian Gulf, which, due to its high nutritional quality and excellent sensory properties, is preferred by customers in the south of Iran. Because this species is consumed domestically, it is very important to extend its shelf life, which is normally quite limited when kept refrigerated. In this case, correct methods of packaging can help to preserve food for a longer time. Vacuum packaging (VP) is one of the natural preservation methods used to delay degradation and maintain the quality of products for longer [26]. VP is widely used as a supplement to ice or refrigeration to decrease the supply of oxygen to the aerobic bacteria in the flesh and thus extend the shelf life of the product [2]. Chitosan has been used in seafood products to inhibit the growth of bacteria in fish stored in the refrigerator and to retard the oxidation of unsaturated fatty acids in fish muscle before vacuum packaging [10,19,24,33]. Chitosan, a linear polysaccharide of randomly distributed β-(1-4)-linked D-glucosamine and N-acetyl-D-glucosamine, is a biocompatible polysaccharide obtained from deacetylation of chitin. An edible coating is a thin layer of edible material formed as a coating on a food, while an edible film is a preformed thin layer which, once formed, can be placed on or between food components [6]. In the food industry, chitosan coatings have been used successfully because of advantages such as edibility, biodegradability, aesthetic appearance and barrier properties, being non-toxic and non-polluting, as well as acting as carriers of food additives (i.e. antioxidants, antimicrobials). Therefore, these coatings can retain the quality of raw, frozen and processed foods, including fish items, by preventing bacterial growth and delaying lipid oxidation. Major changes occur in the proximate, microbiological, chemical and sensory composition of fish fillets during storage in the refrigerator [30].
Because the Spangled emperor is consumed domestically and exported in large quantities, it is very important to extend the shelf life of this fish during refrigerated storage. No research has yet determined the shelf life of Lethrinus nebulosus during chilled storage. Although the shelf life of Lethrinus nebulosus could be extended by VP, VP alone is still questionable as a means of ensuring the quality and safety of the food. This limited success of VP has led to the processing of seafood and seafood products prior to packaging. Thus, the aim of this study was to assess the effect of chitosan (coating and film) under VP conditions on the quality of Lethrinus nebulosus fillets during refrigerated storage.

Sample Preparation and Storage Condition

Lethrinus nebulosus with an average weight of 500 g were caught with gill nets in the Persian Gulf, Khorramshahr, Iran, in July 2016. Fish were placed in crushed ice with a fish/ice ratio of 1:3 (w/w) and transported to the fish processing laboratory within 2-3 h after catch. They were washed with tap water and two fillets were obtained from each fish after heading and gutting.

Preparation of Coating Solution and Chitosan-Based Edible Films

Chitosan solution was prepared with 1% (w/v) chitosan (Sigma Chemical Co., medium molecular weight, viscosity 200-800 cP) in 1% (v/v) acetic acid [21]. To achieve complete dispersion of the chitosan, the solution was stirred at room temperature until it dissolved completely. Glycerol was added at 0.75 mL/g as a plasticizer and the solution was stirred for 10 min [21]. All films were obtained by casting 100 mL of film-forming solution on a non-stick surface (16 × 27 cm) and drying at ambient temperature (20°C) until a firm but still adhesive surface was obtained. After evaporation, the films were peeled off the plates. Fillet samples were randomly assigned to four treatment lots: a control lot (uncoated, stored in air); a second lot packaged under VP; a third lot wrapped with chitosan film prior to VP; and a fourth lot immersed for 30 s in chitosan solution, after which the fillets were removed and allowed to drain for 2 h at ambient temperature (20°C) in order to form the edible coating. All samples were stored at 4 ± 1°C for 12 days. Microbiological, physicochemical, color and sensory analyses were performed at 3-day intervals to determine the overall quality of the fish.

Bacteriological Analysis

Bacteriological counts were determined by homogenizing 10 g of sample in 90 mL of 0.85% NaCl solution. Further decimal dilutions were prepared from this dilution and plated on the appropriate media. Total viable aerobic bacterial counts were determined by the pour plate method, using plate count agar (PCA, Merck, Darmstadt, Germany). The inoculated plates were incubated at 37°C for 2 days for total viable counts, and at 10°C for 7 days for psychrotrophic counts. All counts were expressed as log10 CFU/g [27].

Chemical Analyses

Determination of Total Volatile Base Nitrogen (TVB-N)

TVB-N of the muscle was determined according to the method proposed by Goulas and Kontominas [9]. Ten grams of meat were homogenized with 2 g of MgO and 300 mL of distilled water, and seven drops of anti-foam agent and some boiling stones were added. The blend was heated for 45 min until the volume of the boric acid receiving solution reached 150 mL. The boric acid contained methyl red indicator, which was initially red due to acidity and gradually turned green as the distillate made it alkaline. Finally, the boric acid solution that had absorbed the distilled volatile bases was titrated with 0.1 N sulfuric acid until an onion-skin colour was reached.
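The paper does not state the conversion from titration volume to TVB-N explicitly, so the short Python sketch below uses the conventional steam-distillation relation TVB-N (mg N/100 g) = (V_sample − V_blank) × N × 14 × 100 / W as an assumed, illustrative calculation, with invented titration volumes.

def tvbn_mg_per_100g(titrant_ml, acid_normality=0.1, sample_g=10.0, blank_ml=0.0):
    """Assumed conventional conversion for steam-distillation TVB-N:
    mg N/100 g = (V_sample - V_blank) * N * 14 * 100 / sample mass,
    where 14 is the milliequivalent weight of nitrogen (mg/meq)."""
    return (titrant_ml - blank_ml) * acid_normality * 14.0 * 100.0 / sample_g

# Invented titration volumes (mL of 0.1 N H2SO4) for illustration only
for day, v in [(0, 1.4), (6, 2.0), (12, 2.6)]:
    print(f"day {day}: TVB-N = {tvbn_mg_per_100g(v):.1f} mg N/100 g")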
Determination of pH

The pH measurement was carried out using a Metrohm model 713 pH meter. Fish muscle (2 g) was homogenized thoroughly with 10 mL of distilled water and the homogenate was subjected to pH determination according to the method of Masniyom et al. [17].

Determination of TMA

One hundred grams of fish muscle were deproteinized as described previously and the filtrate was collected. TMA was assayed as the picrate salt by colorimetry. One milliliter of filtrate and 3 mL of water were added to a test tube. Three other test tubes received 1, 2 and 3 mL of a standard TMA solution (concentration = 0.01 mg/mL) and 3, 2 and 1 mL of distilled water, respectively. The TMA stock solution was prepared by adding 1 mL of HCl to 0.682 g of TMA and making up to a final volume of 100 mL with distilled water. To prepare the TMA standard solution, 1 mL of the stock solution was mixed with 1 mL of HCl and diluted to 100 mL with water. A fifth tube containing 4 mL of water was used as a blank for colorimetry. One milliliter of a 20% formaldehyde solution, 10 mL of toluene and 3 mL of saturated potassium carbonate solution were placed in each of the five tubes. The 20% formaldehyde solution was prepared as follows: 100 g of magnesium carbonate and 1 L of commercial formaldehyde (40%) were shaken and filtered, and 100 mL of this stock solution were diluted to 200 mL with water. The five tubes were shaken vigorously for 40 s; 8 mL of the toluene phase were then transferred to a tube containing 0.2 g of anhydrous sodium sulfate and shaken until dehydrated. Five milliliters of the dehydrated toluene phase were mixed in another tube with 5 mL of picric acid working solution, prepared by diluting 1 mL of a picric acid stock solution to 100 mL with toluene. The picric acid stock solution was prepared by dissolving 2 g of picric acid in 100 mL of toluene. Absorbance was read on a spectrophotometer at a wavelength of 410 nm [1].

Determination of Thiobarbituric Acid (TBA)

The thiobarbituric acid (TBA) value was determined following the method of Siripatrawan and Noipha [32]. Ten grams of homogenized sample were mixed with 97.5 mL of distilled water and 2.5 mL of 4 N HCl. The mixture was heated by steam distillation. Five milliliters of distillate were added to 5 mL of thiobarbituric acid reagent containing 0.02 M TBA in 90% glacial acetic acid and incubated in boiling water for 35 min. After cooling, the absorbance of the pink solution was measured at 538 nm using a spectrophotometer. The constant 7.8 was used to calculate the TBA number using Eq. (1), and the TBA value is expressed as mg malonaldehyde/kg sample:

TBA (mg malonaldehyde/kg) = 7.8 × A538    (1)

Determination of Free Fatty Acid (FFA)

The free fatty acid content was determined in the lipid extract by Woyewoda's method according to Eq. (2), with results expressed as % of oleic acid [35]:

FFA (% oleic acid) = [(V2 - V1) × N × 28.2] / W    (2)

where N = normality of the NaOH, V2 = mL of NaOH for the sample, V1 = mL of NaOH for the blank, and W = weight (g) of lipid.
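As a quick numerical companion to Eqs. (1) and (2), the Python sketch below converts an absorbance reading into a TBA value and a titration result into % FFA. The input values are invented for illustration, and the 28.2 oleic acid equivalent factor follows the standard convention rather than a value quoted explicitly in this paper.

def tba_value(absorbance_538):
    """Eq. (1): TBA (mg malonaldehyde/kg sample) = 7.8 * A538."""
    return 7.8 * absorbance_538

def ffa_percent_oleic(v_sample_ml, v_blank_ml, naoh_normality, lipid_g):
    """Eq. (2): FFA (% oleic acid) = (V2 - V1) * N * 28.2 / W.
    28.2 is the conventional oleic acid equivalent factor (assumed here)."""
    return (v_sample_ml - v_blank_ml) * naoh_normality * 28.2 / lipid_g

# Invented example readings for illustration only
print(f"TBA = {tba_value(0.055):.2f} mg malonaldehyde/kg")            # A538 = 0.055
print(f"FFA = {ffa_percent_oleic(2.1, 0.2, 0.05, 2.5):.2f} % oleic")  # 2.5 g lipid, 0.05 N NaOH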
Color Measurements

A Minolta Chroma Meter CR400 (Minolta, Osaka, Japan) was used for color measurements. Colors were expressed as CIELAB coordinates. In this system, L* represents the color lightness on a 0-100 point scale from black to white; a* is the position between red (+) and green (-); and b* is the position between yellow (+) and blue (-). The color intensity is expressed by the chroma value (C*ab), while the hue angle (h°ab) corresponds to the name of the color as found in its pure state on the spectrum. These values were calculated according to the formulae C*ab = (a*^2 + b*^2)^(1/2) and h°ab = arctan(b*/a*).

Statistical Analysis

All measurements were replicated three times for each lot and mean values ± standard error were reported in each case. Analysis of variance (ANOVA) and Duncan's test, with a level of significance of P < 0.05, were run in SPSS to evaluate the significance of differences among mean values.

Results and Discussion

Bacteriological Analysis

The initial total viable count (TVC) and psychrotrophic count (PTC) of the control sample were 2.8 log10 CFU/g and 3.93 log10 CFU/g, respectively, indicating a high quality of the fish fillets [31]. TVC and PTC values in all samples treated with chitosan (coating and film) and VP did not exceed the maximal permissible limit at day 9 of the storage trial (Fig. 1a, b), while the PTC of the FV samples did not reach this limit by the end of the 12-day storage period. The microbiological acceptability limit is 7 log CFU/g for freshwater and marine species fit for human consumption [12]. The samples coated and wrapped prior to VP had the lowest initial bacterial counts, which might be attributed to the immediate antimicrobial effect of chitosan [13,15,21,23]. Chitosan might show a synergistic effect with VP on the inhibition of total viable counts and psychrotrophic bacterial counts in Lethrinus nebulosus. The antimicrobial effect of chitosan has been ascribed to the positively charged NH3+ group of the glucosamine monomer in chitosan molecules, which interacts with negatively charged macromolecules on the microbial cell surface, leading to leakage of intracellular constituents of the microorganisms; moreover, the mechanism of action of chitosan appears to be related to disruption of the lipopolysaccharide layer of the outer membrane of gram-negative bacteria [23], as well as to its function as a barrier against oxygen transfer [13]. This result is in agreement with Günlü and Koyun [10], who reported that sea bass treated with chitosan and stored under VP had the lowest mesophilic aerobic bacterial count, compared with the control and VP, during refrigerated storage. Tsiligianni et al. [33] observed that swordfish samples treated with chitosan and stored under VP showed reduced microbial growth. These results suggest that chitosan coating and wrapping prior to VP decreased the microbial counts of the FV and CV samples during refrigerated storage.

Total Volatile Base Nitrogen (TVB-N)

Figure 2 shows the variation of the TVB-N value of Spangled emperor during storage. The initial TVB-N varied from 17.20 mg N/100 g to 24.66 mg N/100 g. The TVB-N level increased gradually with storage time in all samples (P < 0.05), but the rate of increase varied with treatment. TVB-N usually comprises trimethylamine, dimethylamine, ammonia and other volatile bases, which impart characteristic off-flavors to fish [9]. TVB-N compounds are products of bacterial spoilage (e.g. by S. putrefaciens and P. phosphoreum) and of autolytic and endogenous enzymes, and TVB-N is used as an index to assess the keeping quality and shelf life of seafood products [5]. A level of 25 mg N/100 g muscle has been considered the highest acceptable level [14], and levels above 30-35 mg N/100 g muscle indicate that the fish is decomposed and inedible [4].
The TVB-N level of the CV samples remained below 35 mg N/100 g muscle, indicating that the fish fillets maintained a good quality during storage. From these results, it was found that using a chitosan coating prior to VP resulted in a more rapidly reduced bacterial population, a decreased metabolic capacity of the bacteria, or both [33]. At the end of storage, the TVB-N value of the control was higher than that of the other treatments (P < 0.05). The TVB-N value of FV was higher than that of CV; it can therefore be concluded that coating is more effective than film in controlling the TVB-N of the fillets. These effects may be attributed to the inhibitory activity of chitosan on microbial growth: edible coatings act as antimicrobial materials and thereby affect the TVB-N value. Several authors [8,21,23,34] have reported that the antimicrobial activity of a chitosan coating solution is greater than that of a film structure, owing to greater migration of the antimicrobial agents. The longer acceptable storage period of chitosan-treated samples compared with untreated samples may have been due to lower counts of the microbes that break down compounds such as trimethylamine oxide (TMAO), peptides and amino acids [11], resulting in a decrease in the basic nitrogen fraction [18]. Overall, chitosan coating prior to VP appeared to be the most effective treatment in limiting TVB-N during storage.

pH

Changes in the pH of Lethrinus nebulosus muscle during storage are presented in Fig. 3. The initial pH of the fish samples was between 6.96 and 7.21. During storage, the pH values increased gradually, presumably due to the accumulation of basic compounds generated both by autolytic processes driven by endogenous enzymes and by microbial enzymatic action [20], although it could also be associated with the increase in bacterial counts, especially psychrotrophic bacterial counts. The pH values of the VP and/or chitosan-treated (VP, CV and FV) samples were lower than those of the control, owing to the property of chitosan of inhibiting the growth of bacteria, yeasts and moulds [29]. Furthermore, chitosan pretreatment prior to VP could minimize microbial growth.

Trimethylamine (TMA)

Changes in TMA are shown in Fig. 4. The initial TMA value of the control samples was 7.25 mg N/100 g, which increased to 16.27 mg N/100 g by the end of the storage period. The present study revealed that the TMA value of control Spangled emperor fillets increased during storage, whereas chitosan coating and chitosan film prior to VP retarded the decomposition of trimethylamine N-oxide caused by bacterial spoilage and enzymatic activity. This reduction in TMA production when using chitosan-coated and chitosan-wrapped fish samples has also been reported elsewhere [10,33]. Acceptability limits of TMA differ among fish species: sea bass (5 mg N/100 g) [16], sardines (5-10 mg N/100 g) [22], hake (12 mg N/100 g), and 10-15 mg N/100 g as a general limit for fish [3]. Such variation in the limit values may be related to the fish species, season, initial bacterial count and storage conditions [3].

Thiobarbituric Acid (TBA)

The TBA index has been widely used as an indicator of the degree of lipid oxidation. TBA values of fish stored in the refrigerator are presented in Fig. 5. At day 0, the TBA values of all samples were between 0.29 and 0.55 mg malonaldehyde/kg muscle. The TBA values of all samples increased as the storage time increased (P < 0.05). The increase in TBA during storage may be attributed to partial dehydration of the fish and interaction of the lipids with atmospheric oxygen [14]. There was an increase in the TBA values of CV and FV on day 6 and again a dip on day 9.
The decline in TBA values was probably due to the reaction of malonaldehyde with various other constituents of the muscle. There was no significant difference between the control and the treatments. This result suggests that oxidation of lipids in the fish samples could be minimized by the use of chitosan, probably due to its antioxidant activity as well as its low oxygen permeability. It has been reported that the antioxidant mechanism of chitosan may involve chelation of metal ions and/or combination with the lipids of the meat during storage [15]. Furthermore, chitosan coatings and films are known to be good barriers to oxygen permeation [28]. In addition, using a combination of chitosan and VP can reduce the degree of lipid oxidation in fish tissue. A TBA value of 5 mg malonaldehyde/kg muscle is considered an acceptable limit, while fish may still be consumed up to a level of 8 mg malonaldehyde/kg [27].

Free Fatty Acid (FFA)

Both the primary and secondary oxidation products were assessed to account for the complexity of the lipid oxidation process. The initial FFA value ranged from 0.95 to 1.56% of oleic acid (Fig. 6). A gradual increase in FFA formation was observed in all samples, due to hydrolysis of phospholipids and triglycerides [25]. The FFA value of the control samples was significantly higher than that of the treated samples (P < 0.05). There was no significant difference between coating and film in reducing the FFA of the fillets. As concluded from the TBA values, chitosan coatings and films protect Spangled emperor fillets and so reduce the production of free fatty acids. It has been shown [25] that FFA undergo further oxidation to produce low molecular weight compounds that are responsible for the off-flavor and undesirable taste of fish and fish products. This study showed that VP can reduce the FFA content in samples treated with chitosan.

Color

The effects of chitosan and vacuum packaging on the changes in color of Lethrinus nebulosus during refrigerated storage are shown in Fig. 7. The appearance of a food product is an important parameter to the consumer, from the point of view of both acceptability and preference. Surface color is influenced by both muscle structure characteristics and pigment concentrations [7]. Color values, including the lightness (L*), redness (a*) and yellow-blue (b*) coordinates of the control and of the samples stored under VP with and without chitosan pretreatment during storage at 4°C, are shown in Fig. 7. The a* and L* values of all samples gradually decreased during refrigerated storage. On the other hand, the b* values of Lethrinus nebulosus fillets increased as storage time increased (P < 0.05), reflecting an evolution toward grey-blue tones. The fillets treated with chitosan film had significantly higher b* values than fillets treated with chitosan coating. Coating did not affect the redness (a*), lightness (L*), chroma (color intensity, C*ab) or hue values of Spangled emperor fillets. The application of a coating or film during storage barely altered the lightness and a* values (P < 0.05) of the Spangled emperor. Chroma (C*ab) and hue values decreased with storage time, indicating a reduction in color intensity. Color loss in fish fillets during storage might be attributed to lipid oxidation, oxidation of proteins bearing haem groups (haemoglobin and myoglobin), non-enzymatic browning reactions between lipid oxidation products and the amine groups of proteins, and microbial spoilage [32].
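The chroma and hue values discussed here follow directly from the measured a* and b* coordinates via the formulae given in the Color Measurements section. The short Python sketch below applies them to a couple of invented coordinate pairs; the numbers are illustrative and are not data from this study.

import math

def chroma_hue(a_star, b_star):
    """CIELAB chroma C*ab = sqrt(a*^2 + b*^2) and hue angle h(ab) in degrees,
    using atan2 so the angle falls in the correct quadrant."""
    chroma = math.hypot(a_star, b_star)
    hue_deg = math.degrees(math.atan2(b_star, a_star)) % 360.0
    return chroma, hue_deg

# Invented a*, b* pairs for illustration only
for a, b in [(5.2, 8.1), (2.4, 10.6)]:
    c, h = chroma_hue(a, b)
    print(f"a*={a}, b*={b} -> C*ab={c:.2f}, hue={h:.1f} deg")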
The greatest rate of decrease in the a* value was found in the control sample. The highest increases in L* and b* values were also observed in the control sample. From these results, the degree of color change caused by VP could be lowered with chitosan pretreatment, especially by coating the Lethrinus nebulosus fillets in chitosan. In the present experiment, the better color retention might have resulted from the well-known antioxidant property of chitosan. No previous study was found on the color changes of Spangled emperor fillets treated with chitosan during refrigerated storage.

Conclusion

The combination of vacuum packaging and chitosan treatment effectively retarded the TVB-N and TMA values and inhibited the growth of total viable and psychrotrophic bacteria during refrigerated storage. Therefore, to extend the shelf life and delay the deterioration of fresh Spangled emperor fillets during refrigerated storage, chitosan coating prior to vacuum packaging is the most appropriate treatment. The coatings and films also showed an antioxidant effect, since TBA and FFA values were lower than those of the control samples at the end of storage. There was no significant difference between coating and film in reducing the TBA of the fillets or the bacterial contamination. Therefore, chitosan coating or film prior to vacuum packaging provides a type of active packaging that can be utilized as a safe preservative for fish under refrigerated storage.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.