UPLC-Q-TOF-MS, network analysis, and molecular docking to investigate the effect and active ingredients of tea-seed oil against bacterial pathogens

Objective: This research aimed to probe the antibacterial effect and pharmacodynamic substances of tea-seed oil (TSO) through ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF/MS) analysis, network analysis, and molecular docking.

Methods: The major chemical components in the methanol-extracted fractions of TSO were analyzed by UPLC-Q-TOF-MS. Network pharmacology and molecular docking techniques were integrated to investigate the core components, targets, and potential mechanisms of action through which TSO exerts its antibacterial properties. To evaluate the inhibitory effects, the minimum inhibitory concentration and the diameter of the bacteriostatic circle were determined for the potential active ingredients and their equal-ratio combinatorial components (ERCC) against Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, and Candida albicans. Moreover, the active constituents within TSO were quantified by high-performance liquid chromatography (HPLC).

Results: The methanol-extracted fractions contained a total of 47 chemical components, predominantly unsaturated fatty acids and phenolic compounds. Network pharmacology and molecular docking analyses revealed that several components, including gallocatechin, gallic acid, epigallocatechin, theophylline, chlorogenic acid, puerarin, and phlorizin, can interact with critical core targets such as serine/threonine protein kinase 1 (AKT1), epidermal growth factor receptor (EGFR), mitogen-activated protein kinase 14 (MAPK14), heat shock protein 90 alpha family class A member 1 (HSP90AA1), and estrogen receptor 1 (ESR1). Furthermore, these components can modulate the phosphatidylinositol 3-kinase/protein kinase B (PI3K-AKT), estrogen, MAPK, and interleukin 17 (IL-17) signaling pathways, thereby exerting antibacterial effects. In vitro validation trials found that seven components, namely gallocatechin, gallic acid, epigallocatechin, theophylline, chlorogenic acid, puerarin, and phloretin, displayed substantial inhibitory effects on E. coli, S. aureus, P. aeruginosa, and C. albicans, and are typically present in tea oil, with a total content ranging from 15.87 to 24.91 μg·g−1.

Conclusion: The outcomes of this investigation have the potential to expand our knowledge concerning the utilization of TSO, furnish a theoretical framework for the exploration of antibacterial drugs and cosmetics derived from naturally occurring TSO, and establish a robust groundwork for the development and implementation of TSO products within clinical settings.
Introduction
Tea-seed oil (TSO) is extracted from the seeds of Camellia oleifera Abel. It is rich in unsaturated fatty acids, theasaponins, tea polyphenols, vitamins, squalene, and carotenoids, and is typically used as food (Zhang et al., 2022; Hu et al., 2022). In addition, TSO possesses medicinal value, is widely used as an injection material and ointment base, and is currently indexed in the Chinese Pharmacopoeia (2020 version) (Xu et al., 2021). Modern pharmacological studies have revealed that TSO possesses antibacterial (Duan et al., 2021; Zhang et al., 2021), anti-inflammatory (Duan et al., 2021), and antioxidant effects (Wang et al., 2022a), and it is used clinically with excellent efficacy to treat conditions such as dermatitis, pressure ulcers, burns, neonatal gluteal erythema, diaper dermatitis, and pediatric diaper rash induced by bacterial and fungal infections (Zou and Yi, 2013; Cheng et al., 2014). TSO has been used as a matrix excipient in the formulation of cosmetic and medicinal products owing to its favorable attributes, including antimicrobial properties and compatibility with the skin (Wang and Wu, 2018), as well as its ability to enhance the penetration of fat-soluble substances (Wang et al., 2022b; Duan et al., 2022). Nevertheless, the specific antibacterial components of TSO have yet to be identified, impeding its widespread application in the fields of medicine and cosmetics (Shi et al., 2020). Consequently, a comprehensive and rigorous investigation into the chemical composition and pharmacological effects of TSO is imperative to furnish substantial scientific evidence for its effective utilization and further development.

The chemical composition and pharmacological activity of TSO vary depending on the method used for extraction, including mechanical pressing and solvent extraction. A prior study found that solvent-extracted TSO produced a stronger antibacterial effect than pressed TSO. Interestingly, we also found that the pretreatment of medicinal TSO, including deacidification, decolorization, and deodorization, considerably influences its antibacterial activity, with deacidification having the greatest effect (Duan et al., 2022). This indicates that the composition of TSO is closely associated with the extraction methodology, which in turn affects its pharmacological properties. The primary purpose of deacidification is to remove free fatty acids and enhance the stability of TSO. However, many acidic small biological molecules, such as phenolic acids and organic acids, are also removed, which may be the main reason for the diminished antibacterial activity. Therefore, the antibacterial effect of TSO can be investigated by analyzing the changes in its composition during deacidification. In this study, we employed ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF-MS) to qualitatively ascertain the primary chemical constituents in the methanol-extracted fractions of TSO. Additionally, network pharmacology and molecular docking techniques were integrated to investigate the fundamental components, targets, and potential mechanisms of action through which the functional constituents exert their antibacterial properties. Drawing from these preliminary findings, the antibacterial effects of the potential active ingredients and their equal-ratio combinatorial components (ERCC), as well as the contents of the active ingredients in TSO, were determined. We anticipate that this study will provide guidelines for the development, utilization, and clinical application of TSO.
Sample preparation and standard solution
The Camellia oleifera seeds were collected from the Xiangchun planting base (113.5 °E, 28.3 °N) in Liuyang, Hunan Province, in October 2019. The samples were deposited at the Herbarium, College of Pharmacy, Hunan University of Chinese Medicine (Changsha, China), under voucher numbers YCZ-1 through YCZ-10 (Figure 1). Camellia oleifera seeds (10 g) were ground into powder, sieved through a 60-mesh sieve, and refluxed with 50 mL of n-hexane at 90 °C for 1 h. The clear solution was filtered and collected, the residue was extracted again with a further 50 mL of n-hexane under the same conditions, the clear solutions were combined, and the solvent was evaporated at 50 °C under a vacuum of 0.1 MPa to obtain TSO.

Prior to the experiment, the preparation method was optimized to determine the optimal conditions. TSO samples (10.0 g) were weighed and ultrasonicated for 10 min in 30 mL of methanol. The supernatant was concentrated under reduced pressure at 50 °C until free of solvent, weighed to calculate the yield of methanol-extracted TSO (3.69%), and then dissolved in a suitable amount of methanol to a final volume of 10 mL. Each standard component was dissolved in methanol and further diluted with methanol to a suitable concentration. Appropriate amounts of the single standard substances were combined to prepare the equal-ratio combinatorial components (ERCC), in which the ratio of gallocatechin, gallic acid, epigallocatechin, theophylline, chlorogenic acid, puerarin, and phlorizin was 14:6:13:4:1:1:1. Before use, all solutions were stored at −4 °C in the dark.

UPLC-Q-TOF-MS analysis of the chemical ingredients of tea-seed oil
UPLC-Q-TOF-MS qualitative analysis
A Waters Acquity™ UPLC system coupled with a Xevo™ QTof mass spectrometer (Waters, United States) was used for UPLC-Q-TOF-MS analysis, and MassLynx 4.1 was used to process the data. Detection was performed in both electrospray positive and negative ion modes using MSE scanning and corrected for accurate mass using the ESI-L Low Concentration Tuning Mix (G1969-8500). The primary full-scan mass range was m/z 100-1200 with a resolution of 30,000. The desolvation gas was N2, the drying gas flow rate was 6.8 L·min−1, the capillary voltage was 4.0 kV, the fragmentor voltage was 110 V, and the sheath gas temperature was 350 °C. Secondary mass spectral data were acquired using data-dependent scanning, with the three most intense ions from the primary scan selected for collision-induced dissociation.

Ingredient identification and analysis
A sample solution of TSO was analyzed using the UPLC-Q-TOF-MS detection system. Based on these data, the elemental compositions and molecular formulas of the ingredients in TSO were deduced. Databases such as ChemSpider, ChemicalBook, and TCMSP were consulted to confirm the structural formulas of the compounds.
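As a worked illustration of the ERCC ratio arithmetic described in the sample-preparation section, the short sketch below converts the 14:6:13:4:1:1:1 ratio into per-component amounts. The 10 mg target total is a hypothetical value chosen for illustration only, not a figure from this study.

```python
# Hypothetical worked example of the ERCC ratio arithmetic (illustrative only;
# the 10 mg target total mass is an assumption, not a value from the study).
ratios = {
    "gallocatechin": 14, "gallic acid": 6, "epigallocatechin": 13,
    "theophylline": 4, "chlorogenic acid": 1, "puerarin": 1, "phlorizin": 1,
}
target_total_mg = 10.0                      # assumed total mass of the mixed standards
unit = target_total_mg / sum(ratios.values())
for name, r in ratios.items():
    print(f"{name:>17s}: {r * unit:.3f} mg")
```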
Exploration of the antibacterial effect of tea-seed oil by network pharmacology
The action targets of the primary components of TSO were retrieved and predicted using the TCMSP (https://old.tcmsp-e.com/tcmsp.php), PubChem, and SwissTargetPrediction websites and imported into the UniProt online database to convert the targets into the corresponding gene IDs. The GeneCards and DisGeNET databases were used to locate disease gene targets using the keywords "bacteriostat" and "bacterial infection," which were filtered and identified. Intersection targets were obtained with Venny 2.1.0, and the common targets were imported into the STRING database to construct a protein-protein interaction network diagram; Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses of the common targets were performed in R to draw the GO and KEGG plots.

Molecular docking between the potential compounds and the core predicted targets
Genes with a betweenness centrality greater than 20 and a degree greater than 10 were selected as the core target genes. Docking between the core target proteins and the small molecules, i.e., the main active ingredients of TSO, was performed using LeDock software, and the docking results were visualized and refined using PyMOL software.

Determination of minimal inhibitory concentration (MIC)
All model bacterial strains, Escherichia coli (E. coli, ATCC No. 44102), Staphylococcus aureus (S. aureus, ATCC No. 26003), Pseudomonas aeruginosa (P. aeruginosa, ATCC No. 90271), and Candida albicans (C. albicans, ATCC No. 10231), were purchased from Nanjing Bianzhen Biotechnology Co., Ltd. (Nanjing, China). Following Li et al. (2022), MIC testing was performed using the broth microdilution method. E. coli, S. aureus, and P. aeruginosa (1 × 10^5 to 5 × 10^5 CFU·mL−1) were inoculated in nutrient broth medium, and C. albicans was inoculated in modified Martin medium; the samples were diluted in a twofold gradient from 500 μg·mL−1 to 3.9 μg·mL−1, and the bacterial suspensions and diluted samples were combined at a ratio of 1:10. The 96-well plates were incubated at 37 °C for 24 and 48 h, and the absorbance at 600 nm was measured using a microplate reader. In triplicate experiments, the lowest concentration that inhibited bacterial growth by 100% was defined as the minimum inhibitory concentration.

Determination of the diameter of the bacteriostasis circle
The diameters of the bacteriostatic circles of the samples were determined using the hole-punch method of Mohamed et al. (2022), with modifications. Briefly, the bacteria were evenly distributed on the surface of the agar plate, and a well with an inner diameter of 6 mm was punched into the middle of the plate. After half an hour of incubation at 37 °C, 0.1 mL of the sample solution dissolved in ethanol was added to each well. Ethanol was used as the negative control, amoxicillin as the positive control for S. aureus, flupironic as the positive control for E. coli and P. aeruginosa, and fluconazole as the positive control for C. albicans. After 24-48 h of incubation, the inhibition zone formed around the well was measured, and the test was repeated three times.

Statistical analyses
Statistics were performed using SPSS 25.0 software, and data are presented as mean ± SD based on three independent experiments. p < 0.05 indicates a significant difference.
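As a concrete illustration of the broth-microdilution readout described in the MIC section above, the sketch below derives the twofold dilution series and selects the lowest concentration with no measurable growth. The OD600 growth threshold and the example readings are assumptions for illustration, not measurements from this study.

```python
# Minimal sketch of MIC determination from a twofold dilution series (illustrative only;
# the growth threshold and example OD600 readings below are assumed, not measured data).
def dilution_series(start=500.0, end=3.9):
    """Twofold serial dilutions from `start` down to `end` (in ug/mL)."""
    series, c = [], start
    while c >= end:
        series.append(round(c, 2))
        c /= 2
    return series

def mic(concentrations, od600_sample, od600_blank, threshold=0.05):
    """Lowest concentration whose background-corrected OD600 shows no growth."""
    inhibited = [c for c, od in zip(concentrations, od600_sample)
                 if od - od600_blank <= threshold]
    return min(inhibited) if inhibited else None   # None: no tested dose inhibits growth

concs = dilution_series()                                    # 500, 250, ..., 3.91 ug/mL
fake_od = [0.04, 0.04, 0.05, 0.06, 0.31, 0.52, 0.55, 0.57]   # hypothetical readings
print(concs)
print("MIC (ug/mL):", mic(concs, fake_od, od600_blank=0.02))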
Results

UPLC-Q-TOF-MS analysis of the chemical ingredients of tea-seed oil
UPLC-Q-TOF-MS analysis of the methanol-extracted fraction of tea-seed oil (TSO-M) was performed, and the base peak chromatograms (BPCs) of the sample under positive and negative ion detection modes were superimposed and compared, as displayed in Figure 2. More compounds were detected in the positive ion mode than in the negative ion mode, and 47 compounds were identified in TSO-M, as shown in Table 1. These primarily comprised unsaturated fatty acids such as oleic, linoleic, palmitic, stearic, behenic, and arachidic acids, as well as phenolic compounds such as phloretin, puerarin, chlorogenic acid, catechin derivatives, tocopherols, and quinic acid. TSO-M also contains components such as theophylline, lutein, and carotene.

Exploration of the antibacterial effect of tea-seed oil by network pharmacology
Target screening of active ingredients of tea-seed oil
The active ingredients in TSO, including gallocatechin, gallic acid, epigallocatechin, theophylline, chlorogenic acid, puerarin, and phlorizin, were screened for predicted targets using the TCMSP, PubChem, and SwissTargetPrediction websites; the targets were imported into the UniProt database for correction and then deduplicated, resulting in a total of 164 potential targets.

Common targets and protein interaction network of the antibacterial activity of tea-seed oil ingredients
A total of 25 common targets were identified from the intersection of the 164 potential principal-component targets and 819 antibacterial targets (Figure 3A). These 25 common targets were imported into the STRING database to construct a PPI network (Figure 3B). A "TSO active ingredient-core target" network diagram was constructed in Cytoscape by importing the active ingredients and their targets (Figure 3C). The results revealed that several active ingredients can correspond to one target, while one active ingredient can also act on multiple targets, reflecting the multi-component, multi-target character of the antibacterial effects of TSO.

GO and KEGG pathway enrichment analysis
GO enrichment analysis of the common targets was performed using R software. The biological process analysis yielded a total of 1368 entries, principally related to the cellular stress response, regulation of apoptotic signaling pathways, and inflammatory signaling pathways. A total of 58 entries pertained to cellular component, with a predominant focus on cellular vesicles, organelles, outer membranes, extracellular membranes, and nuclear envelopes. Additionally, a total of 111 entries were obtained for molecular function, primarily centered on protein phosphatase binding, ubiquitin protein ligase binding, tyrosine kinase activity, and transcriptional regulation. These findings suggest a potential correlation between the antibacterial effects of the major components of TSO and their ability to modulate signaling pathways, alter cellular responses to external stress, and affect protein modifications. The top 10 entries with the smallest p-values (p < 0.05) were selected for the bar charts shown in Figure 4.
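The target-intersection and enrichment-ranking steps described above amount to simple set and sorting operations. The sketch below illustrates them with invented target names and p-values; the gene symbols and numbers are placeholders, not results from this study.

```python
# Illustrative sketch of the intersection and top-10 ranking steps described above.
# Gene symbols and p-values are placeholders, not data from this study.
component_targets = {"AKT1", "EGFR", "MAPK14", "HSP90AA1", "ESR1", "PTGS2", "TP53"}
disease_targets   = {"AKT1", "EGFR", "MAPK14", "HSP90AA1", "ESR1", "IL6", "TNF"}

common = component_targets & disease_targets          # analogous to the Venn intersection
print(f"{len(common)} common targets:", sorted(common))

# Rank enrichment entries by p-value and keep the top hits with p < 0.05.
go_entries = [("response to oxidative stress", 1.2e-6),
              ("regulation of apoptotic signaling pathway", 3.4e-5),
              ("inflammatory response", 8.9e-4),
              ("cell adhesion", 0.12)]
top = sorted([e for e in go_entries if e[1] < 0.05], key=lambda e: e[1])[:10]
for term, p in top:
    print(f"{term}: p = {p:.2e}")
```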
KEGG pathway enrichment was screened at p < 0.05, and the top 30 ranked pathways were selected to create a bubble plot, as displayed in Figure 4. The results demonstrated that the enriched pathways included the phosphatidylinositol 3-kinase/protein kinase B (PI3K-AKT), estrogen, MAPK, and interleukin 17 (IL-17) signaling pathways, implying that TSO components may exert antibacterial effects through multiple pathways and targets.

Results of molecular docking between the potential compounds and the core predicted targets
Seven potential bacteriostatic components of camellia oil were selected for molecular docking with six core protein targets (Figure 5), and the strength of small molecule-protein binding was evaluated based on the magnitude of the binding energy. In general, a binding energy below −4.25 kcal·mol−1 indicates some interaction between the ligand and the receptor protein, a binding energy below −5.0 kcal·mol−1 indicates favorable binding activity, and a binding energy below −7.0 kcal·mol−1 indicates potent binding between receptor and ligand. The docking data were summarized as a heat map (Table 2), in which the shade of color represents the strength of the interaction; the darker the color, the lower the binding energy and the stronger the binding. The results illustrate that several small molecules, such as gallocatechin, chlorogenic acid, epigallocatechin, gallic acid, and puerarin, have good affinity for the core targets. Using PyMOL, we visualized the docked small molecule-protein complexes, as shown in Figure 5. Gallocatechin forms hydrogen bonds with the amino acid residues ALA-202, THR-206, HSD-207, PHE-210, THR-212, ASN-382, and TRP-387 in PTGS2.

The diameter of the bacteriostasis circle of potential compounds
The diameters of the bacteriostatic circles of the potential compounds are shown in Figure 6 and Table 3. The antibacterial activity of each sample was assessed by the size of its bacteriostatic circle. The blank control (ethanol) had no antibacterial effect, indicating that the solvent caused no interference, whereas the seven components and the ERCC group all showed some inhibitory effect on the four strains. The inhibition zones of the ERCC and phlorizin were relatively larger than those of the other components, implying that the ERCC and phlorizin have better antibacterial effects.

Determination of the contents of active ingredients from tea-seed oil by HPLC
Calibration curves
Calibration curves were established by measuring the mixed standard solutions at seven different concentrations. The calibration curves presented in Table 5 show excellent linearity, with coefficients of determination R² > 0.999 within the detected range.
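Calibration curves of this kind are ordinary least-squares fits of peak area against concentration, and the relative standard deviation (RSD) used in the following validation section is the sample standard deviation divided by the mean. The sketch below shows both calculations with invented numbers; the concentrations, peak areas, and repeat-injection values are placeholders, not the data behind Table 5.

```python
# Illustrative least-squares calibration fit and RSD calculation; all numbers are
# invented placeholders, not the data behind Table 5 or the validation section.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])          # ug/mL, assumed levels
area = np.array([10.4, 20.1, 41.0, 101.5, 203.0, 405.2, 1012.8])  # assumed peak areas

slope, intercept = np.polyfit(conc, area, 1)                      # area = slope*conc + intercept
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"area = {slope:.3f} * conc + {intercept:.3f},  R^2 = {r2:.5f}")

# Relative standard deviation (RSD) as used for precision/repeatability:
reps = np.array([101.2, 100.8, 101.5, 100.9, 101.1, 101.4])       # assumed repeat injections
rsd = reps.std(ddof=1) / reps.mean() * 100
print(f"RSD = {rsd:.2f}%")
```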
Method validation
The precision, stability, and repeatability of the method were tested and analyzed. Under the chromatographic conditions, an identical 10 µL aliquot of mixed standard solution was injected six times consecutively and the RSDs were calculated. The method exhibited relative standard deviations (RSDs) ranging from 0.75% to 1.90%, indicating its suitability for quantitative analysis. In the stability trial, the mixed standard solution was tested at room temperature at various time points (0, 4, 6, 8, 10, 12, and 24 h). Based on the RSD values, which ranged from 1.5% to 1.9%, the sample solutions were stable over a 24-h period. To ascertain the repeatability of the method, six independently prepared solutions from the same batch were analyzed. The RSD values for gallocatechin, gallic acid, epigallocatechin, theophylline, chlorogenic acid, puerarin, and phloretin were 0.46%, 0.81%, 0.34%, 0.72%, 1.30%, 1.05%, and 1.35%, respectively, indicating a high level of repeatability. To verify the accuracy of the method, the seven standard substances were precisely added to the TSO samples, and the spiked samples were prepared using the above-mentioned method. Based on peak areas, the average recovery (n = 6) ranged from 99.91% to 102.35%, with RSDs ranging from 0.39% to 2.70%. Chromatograms of the mixed standard solution (A), a sample solution (YCZ-5) (B), and the blank control (C) are presented in Figure 7.

Discussion
Camellia oleifera Abel., a notable oilseed tree native to China, is recognized, together with the olive of the Mediterranean coast and the oil palm of Southeast Asia, as one of the three major woody oilseed tree species globally (Wu et al., 2020). The liquid oil derived from the seeds of Camellia oleifera Abel., referred to as tea-seed oil (TSO), consists primarily of unsaturated fatty acids, with oleic acid comprising up to 80% and linoleic acid approximately 8% (Chang et al., 2020; Liu et al., 2021). Additionally, TSO is abundant in squalene, phytosterols, and vitamin E (Liu et al., 2022). Long-term consumption of TSO is beneficial to human health, as it is the highest-quality edible vegetable oil recommended by the World Health Organization (Wang et al., 2018). In addition to its edible value, TSO is commonly employed as a matrix excipient in cosmetic and pharmaceutical formulations because of its remarkable affinity for the skin and its capacity to facilitate drug penetration (Zhu, 2017). TSO also exhibits notable potential for inhibiting sebum secretion and improving the metabolic state of the skin (Li et al., 2010). Although clinical investigations have demonstrated the efficacy of topical TSO against diverse pathogenic microorganisms responsible for skin ailments, further research is warranted to elucidate its antibacterial constituents and underlying mechanisms (Zhang et al., 2022). In this study, UPLC-Q-TOF-MS, network pharmacology, and molecular docking were combined to explore the antibacterial functional ingredients of TSO and thereby support its further development.
The integration of environmentally sustainable principles into product development enables the use of natural materials for the creation of novel pharmaceuticals and cosmetics (Espro et al., 2021). Moreover, natural products exhibit a commendable safety record and possess diverse biological properties (Burdock and Wang, 2017), thereby presenting prospects for enhancing the efficacy and safety of pharmaceuticals and cosmetics (Hill and Fenical, 2010). Nevertheless, the intricate nature of natural raw materials and the multitude of disease targets pose obstacles to the advancement of drugs and drug-related products (Al-Awadhi and Luesch, 2020). Consequently, recent research has underscored the need for innovative methodologies that modify the existing paradigms of functional ingredient discovery and enhance our understanding of their mechanisms of action. In our study, the chemical constituents of TSO were identified using UPLC-Q-TOF-MS. Network pharmacology and molecular docking techniques were concurrently employed to elucidate the potential bacteriostatic components of TSO and their underlying mechanisms of action. Subsequently, the antibacterial efficacy of seven potential active constituents was validated, and their concentrations in ten varieties of TSO were quantified. These discoveries hold significance for future investigations aimed at enhancing the pharmacodynamic properties of TSO through structural modification of its active constituents. The chemical compounds present in TSO have the potential to act synergistically, leading to the identification of active antibacterial ingredients suitable for combination therapy (Zhang et al., 2022). In light of this, we introduced the concept of the ERCC, which is based on the average proportions of the inhibitory components found in ten different types of TSO, with the aim of investigating the antibacterial activity of the ERCC and exploring the synergistic effects of TSO's chemical components. The results demonstrated that the ERCC had a better antibacterial effect than any single component, implying that multi-component action is more efficient.

This study has some limitations that must be acknowledged. First, the complex structures of certain ingredients may have hindered their identification, and the TCM chemical databases are continuously being enhanced and supplemented, so some ingredients may remain unidentified. Furthermore, the contribution of the lipophilic, penetration-enhancing effect of TSO to its antibacterial action and the underlying mechanism remain unclear. While this study presents encouraging findings, further research is necessary to elucidate the dose-effect relationship underlying the antibacterial efficacy of TSO and its ingredients.

Conclusion
This study employed UPLC-Q-TOF-MS to identify 47 chemical ingredients in the methanol-extracted fractions of TSO. Notably, seven components, namely gallocatechin, gallic acid, epigallocatechin, theophylline, chlorogenic acid, puerarin, and phloretin, demonstrated substantial inhibitory effects on E. coli, S. aureus, P. aeruginosa, and C. albicans.
The underlying mechanism behind these effects is likely linked to the PI3K-AKT, estrogen, MAPK, and IL-17 signaling pathways. Moreover, the active ingredients in ten distinct varieties of TSO were quantified by HPLC. The outcomes of this investigation have the potential to expand our knowledge regarding the utilization of TSO, furnish a theoretical framework for the exploration of antibacterial drugs and cosmetics derived from naturally occurring TSO, and establish a robust groundwork for the development and implementation of TSO products within clinical settings.

FIGURE 2 Base peak chromatograms (BPCs) of the sample solutions. BPC of the TSO-M sample solution in the positive ion mode (A) and the negative ion mode (B); BPC of the blank sample solution in the positive ion mode (C) and the negative ion mode (D).
FIGURE 3 The prediction results of network pharmacology. (A) Venn diagram of the active component targets of TSO and the targets related to the antibacterial effect. (B) PPI network of the antibacterial effects of TSO. (C) The "drug-component-target" network of the antibacterial effects of TSO.
FIGURE 4 GO and KEGG enrichment analysis of the potential antibacterial targets of TSO.
FIGURE 5 Molecular docking of the antibacterial core components and key targets of TSO.
TABLE 1 Ingredients identified in TSO based on UPLC-Q-TOF-MS (positive ion mode).
TABLE 2 Binding energies of the core components and key targets of TSO.
TABLE 3 The diameters of the bacteriostasis circles of the potential compounds.
TABLE 5 Calibration curves of the seven reference compounds.
TABLE 6 The contents of the seven compounds in different kinds of TSO.
Author contributions: Validation. ZW and S-XL: Writing - reviewing and editing. All authors contributed to the article and approved the submitted version.
TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation

We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by simply writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.

Introduction
Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs). The target MRs can be defined according to a wide variety of formalisms. These include linguistically motivated semantic representations designed to capture the meaning of any sentence, such as λ-calculus (Zettlemoyer and Collins, 2005) or abstract meaning representations (Banarescu et al., 2013). Alternatively, for more task-driven approaches to semantic parsing, it is common for meaning representations to represent executable programs such as SQL queries (Zhong et al., 2017), robotic commands (Artzi and Zettlemoyer, 2013), smart phone instructions (Quirk et al., 2015), and even general-purpose programming languages like Python (Yin and Neubig, 2017; Rabinovich et al., 2017) and Java (Ling et al., 2016).

Because of these varying formalisms for MRs, the design of semantic parsers, particularly neural network-based ones, has generally focused on a small subset of tasks: in order to ensure the syntactic well-formedness of generated MRs, a parser is usually specifically designed to reflect the domain-dependent grammar of MRs in the structure of the model (Zhong et al., 2017; Xu et al., 2017). To alleviate this issue, there have been recent efforts in neural semantic parsing with general-purpose grammar models (Xiao et al., 2016; Dong and Lapata, 2018). Yin and Neubig (2017) put forward a neural sequence-to-sequence model that generates tree-structured MRs using a series of tree-construction actions, guided by a task-specific context-free grammar provided to the model a priori. Rabinovich et al. (2017) propose abstract syntax networks (ASNs), where domain-specific MRs are represented by abstract syntax trees (ASTs, Fig. 2 Left) specified under the abstract syntax description language (ASDL) framework (Wang et al., 1997). An ASN employs a modular architecture, generating an AST using specifically designed neural networks for each construct in the ASDL grammar.

Inspired by this existing research, we have developed TRANX, a TRANsition-based abstract syntaX parser for semantic parsing and code generation. TRANX is designed with the following principles in mind:

• Generalization ability. TRANX employs ASTs as a general-purpose intermediate meaning representation, and the task-dependent grammar is provided to the system as external knowledge to guide the parsing process, thereby decoupling the semantic parsing procedure from the specificities of individual grammars.
• Extensibility. TRANX uses a simple transition system to parse NL utterances into tree-structured ASTs (Figure 1: Workflow of TRANX). The transition system is designed to be easy to extend, requiring minimal engineering to adapt to tasks that need to handle extra domain-specific information.

• Effectiveness. We test TRANX on four semantic parsing (ATIS, GEO) and code generation (DJANGO, WIKISQL) tasks, and demonstrate that TRANX is capable of generalizing to different domains while registering strong performance, outperforming existing neural network-based approaches on three of the four datasets (GEO, ATIS, DJANGO).

Methodology
Given an NL utterance, TRANX parses the utterance into a formal meaning representation, typically represented as a λ-calculus logical form, a domain-specific language, or a general-purpose programming language (e.g., Python). In the following description we use Python code generation as a running example, where a programmer's natural language intents are mapped to Python source code. Fig. 1 depicts the workflow of TRANX. We will present more use cases of TRANX in § 3.

The core of TRANX is a transition system. Given an input NL utterance x, TRANX employs the transition system to map the utterance x into an AST z using a series of tree-construction actions (§ 2.2). TRANX employs ASTs as the intermediate meaning representation to abstract over the domain-specific structure of MRs. This parsing process is guided by the user-defined, domain-specific grammar specified under the ASDL formalism (§ 2.1). Given the generated AST z, the parser calls the user-defined function AST_to_MR(·) to convert the intermediate AST into a domain-specific meaning representation y, completing the parsing process. TRANX uses a probabilistic model p(z|x), parameterized by a neural network, to score each hypothesis AST (§ 2.3).

Modeling ASTs using ASDL Grammar
TRANX uses ASTs as the general-purpose, intermediate semantic representation for MRs. ASTs are commonly used to represent programming languages, and can also be used to represent other tree-structured MRs (e.g., λ-calculus). The ASDL framework is a grammatical formalism for defining ASTs. See Fig. 1 for an excerpt of the Python ASDL grammar. TRANX provides APIs to read such a grammar from human-readable text files.

An ASDL grammar has two basic constructs: types and constructors. A composite type is defined by the set of constructors under that type. For example, the stmt and expr composite types in Fig. 1 refer to Python statements and expressions, respectively, each defined by a series of constructors. A constructor specifies a language construct of a particular type using its fields. For instance, the Call constructor under the composite type expr denotes function call expressions, and has three fields: func, args, and keywords. Each field in a constructor is also strongly typed, specifying the type of value the field can hold. A field with a composite type can be instantiated by constructors of the same type. For example, the func field above can hold a constructor of type expr. There are also fields with primitive types, which store values. For example, the id field of the Name constructor has the primitive type identifier and is used to store identifier names, while the field s in the Str (string) constructor holds string literals. Finally, each field has a cardinality (single, optional ?, and sequential *), denoting the number of values the field holds.
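To make the types/constructors/fields terminology concrete, here is a minimal sketch of how an ASDL fragment such as the Call, Name, and Str constructors could be represented in code. The class names and the hand-written grammar excerpt are simplifications for illustration (for example, Name is shown with only its id field); this is not the actual TRANX grammar API.

```python
# Minimal sketch of ASDL types, constructors, and typed fields with cardinalities.
# Simplified for illustration; not the actual TRANX grammar reader.
from dataclasses import dataclass
from typing import List

@dataclass
class ASDLField:
    name: str
    type: str          # composite type (e.g., "expr") or primitive type (e.g., "identifier")
    cardinality: str   # "single", "optional" (?), or "multiple" (*)

@dataclass
class ASDLConstructor:
    name: str
    fields: List[ASDLField]

# expr = Call(expr func, expr* args, keyword* keywords) | Name(identifier id) | Str(string s)
EXPR_CONSTRUCTORS = [
    ASDLConstructor("Call", [ASDLField("func", "expr", "single"),
                             ASDLField("args", "expr", "multiple"),
                             ASDLField("keywords", "keyword", "multiple")]),
    ASDLConstructor("Name", [ASDLField("id", "identifier", "single")]),
    ASDLConstructor("Str",  [ASDLField("s", "string", "single")]),
]
for c in EXPR_CONSTRUCTORS:
    print(c.name, "->", [(f.name, f.type, f.cardinality) for f in c.fields])
```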
An AST is then composed of multiple constructors, where each node on the tree corresponds to a typed field in a constructor (except for the root node, which denotes the root constructor). Depending on the cardinality of the field, a node can hold one or multiple constructors as its values. For instance, the func field with single cardinality in the ASDL grammar in Fig. 1 is instantiated with one Name constructor, while the args field with sequential cardinality has multiple child constructors.

Figure 2 caption (excerpt): Purple squares denote fields with sequential cardinality. Grey nodes denote primitive identifier fields. Fields are labeled with the time steps at which they are generated. Right: the action sequence used to construct the AST. Each action is labeled with its frontier field n_ft. APPLYCONSTR actions are represented by their constructors.

Transition System
Inspired by Yin and Neubig (2017) (hereafter YN17), we develop a transition system that decomposes the generation procedure of an AST into a sequence of tree-constructing actions. We now explain the transition system using our running example. Fig. 2 Right lists the sequence of actions used to construct the example AST. At a high level, the generation process starts from an initial derivation AST with a single root node and proceeds according to a top-down, left-to-right traversal of the AST. At each time step, one of the following three types of actions is invoked to expand the opening frontier field n_ft of the derivation:

APPLYCONSTR[c] actions apply a constructor c to the opening composite frontier field that has the same type as c, populating the opening node using the fields in c. If the frontier field has sequential cardinality, the action appends the constructor to the list of constructors held by the field.

REDUCE actions mark the completion of the generation of child values for a field with optional (?) or multiple (*) cardinality.

GENTOKEN[v] actions populate an (empty) primitive frontier field with a token v. For example, the field f7 in Fig. 2 has type identifier and is instantiated using a single GENTOKEN action. Fields of string type, like f8, whose value may consist of multiple tokens (only one is shown here), can be filled using a sequence of GENTOKEN actions, with a special GENTOKEN[</f>] action terminating the generation of token values.

The generation completes once there is no frontier field on the derivation. TRANX then calls the user-specified function AST_to_MR(·) to convert the generated intermediate AST z into the target domain-specific MR y. TRANX provides various helper functions to ease the process of writing conversion functions. For example, our example conversion function to transform ASTs into Python source code contains only 32 lines of code. TRANX also ships with several built-in conversion functions to handle MRs commonly used in semantic parsing and code generation, like λ-calculus logical forms and SQL queries.

Computing Action Probabilities p(z|x)
Given the transition system, the probability of an AST z is decomposed into the probabilities of the sequence of actions used to generate it:

$$p(z \mid x) = \prod_t p(a_t \mid a_{<t}, x).$$

Following YN17, we parameterize the transition-based parser p(z|x) using a neural encoder-decoder network with augmented recurrent connections that reflect the topology of ASTs.
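The following is a minimal, self-contained sketch of the three-action transition system described above. The toy grammar, the constructor names, and the example action sequence are hypothetical simplifications for illustration; this is not the TRANX implementation.

```python
# Sketch of the transition system (APPLYCONSTR, REDUCE, GENTOKEN with a special
# "</f>" terminator for string fields). Grammar and example are hypothetical.
from dataclasses import dataclass, field as dc_field
from typing import List, Optional, Union

# constructor name -> list of (field_name, field_type, cardinality)
GRAMMAR = {
    "Expr": [("value", "expr", "single")],
    "Call": [("func", "expr", "single"), ("args", "expr", "multiple")],
    "Name": [("id", "identifier", "single")],
    "Str":  [("s", "string", "single")],
}

@dataclass
class Field:
    name: str
    type: str
    cardinality: str
    values: List[Union["Node", str]] = dc_field(default_factory=list)
    closed: bool = False

    def open(self) -> bool:
        if self.closed:
            return False
        if self.type == "string":              # stays open until GENTOKEN["</f>"]
            return True
        if self.cardinality == "single":
            return len(self.values) == 0
        return True                            # optional/multiple stay open until REDUCE

@dataclass
class Node:
    constructor: str
    fields: List[Field]

def make_node(constructor: str) -> Node:
    return Node(constructor, [Field(n, t, c) for n, t, c in GRAMMAR[constructor]])

def frontier_field(node: Node) -> Optional[Field]:
    """Left-most open field under a top-down, left-to-right traversal of the derivation."""
    for f in node.fields:
        for v in f.values:
            if isinstance(v, Node):
                sub = frontier_field(v)
                if sub is not None:
                    return sub
        if f.open():
            return f
    return None

def apply_action(root: Node, action: tuple) -> None:
    f = frontier_field(root)
    kind = action[0]
    if kind == "APPLYCONSTR":                  # expand a composite frontier field
        f.values.append(make_node(action[1]))
    elif kind == "GENTOKEN":                   # fill a primitive frontier field with a token
        if action[1] == "</f>":
            f.closed = True                    # terminate a multi-token string field
        else:
            f.values.append(action[1])
    elif kind == "REDUCE":                     # close an optional/multiple-cardinality field
        f.closed = True

# Hypothetical derivation for an utterance like "call open on the string 'file.csv'":
root = make_node("Expr")
actions = [("APPLYCONSTR", "Call"),
           ("APPLYCONSTR", "Name"), ("GENTOKEN", "open"),
           ("APPLYCONSTR", "Str"), ("GENTOKEN", "file.csv"), ("GENTOKEN", "</f>"),
           ("REDUCE",)]                        # REDUCE closes the sequential args field
for a in actions:
    apply_action(root, a)
print("generation complete:", frontier_field(root) is None)
print(root)
```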
Encoder
The encoder is a standard bidirectional Long Short-Term Memory (LSTM) network, which encodes the input utterance x of n tokens into contextual vector representations $\{\mathbf{h}_i\}_{i=1}^{n}$. The decoder is also an LSTM network, with its hidden state $\mathbf{s}_t$ at each time step given by

$$\mathbf{s}_t = f_{\text{LSTM}}([\mathbf{a}_{t-1} : \tilde{\mathbf{s}}_{t-1} : \mathbf{p}_t],\, \mathbf{s}_{t-1}),$$

where $f_{\text{LSTM}}$ is the LSTM transition function and $[:]$ denotes vector concatenation. $\mathbf{a}_{t-1}$ is the embedding of the previous action; we maintain an embedding vector for each action. $\tilde{\mathbf{s}}_t$ is the attentional vector defined as in Luong et al. (2015),

$$\tilde{\mathbf{s}}_t = \tanh(\mathbf{W}_c[\mathbf{c}_t : \mathbf{s}_t]),$$

where $\mathbf{c}_t$ is the context vector retrieved from the input encodings $\{\mathbf{h}_i\}_{i=1}^{n}$ using attention.

Parent Feeding
$\mathbf{p}_t$ is a vector that encodes information about the parent frontier field n_ft on the derivation; it is the concatenation of two vectors: the embedding of the frontier field n_ft, and $\mathbf{s}_{p_t}$, the decoder state at which the constructor of n_ft was generated by an APPLYCONSTR action. Parent feeding reflects the topology of tree-structured ASTs and gives better performance when generating complex MRs like Python code (§ 3).

Action Probabilities
The probability of an APPLYCONSTR[c] action with embedding $\mathbf{a}_c$ is

$$p(a_t = \text{APPLYCONSTR}[c] \mid a_{<t}, x) = \text{softmax}(\mathbf{a}_c^{\top} \mathbf{W} \tilde{\mathbf{s}}_t), \qquad (1)$$

where REDUCE is treated as a special APPLYCONSTR action. For GENTOKEN actions, we employ a hybrid approach of generation and copying, allowing out-of-vocabulary variable names and literals (e.g., "file.csv" in Fig. 1) in x to be directly copied to the derivation. Specifically, the action probability is defined as the marginal probability

$$p(a_t = \text{GENTOKEN}[v] \mid a_{<t}, x) = p(\text{gen} \mid a_{<t}, x)\, p(v \mid \text{gen}, a_{<t}, x) + p(\text{copy} \mid a_{<t}, x)\, p(v \mid \text{copy}, a_{<t}, x).$$

The binary probabilities $p(\text{gen} \mid \cdot)$ and $p(\text{copy} \mid \cdot)$ are given by $\text{softmax}(\mathbf{W}\tilde{\mathbf{s}}_t)$. The probability of generating v from a closed-set vocabulary, $p(v \mid \text{gen}, \cdot)$, is defined similarly to Eq. (1). The probability of copying the i-th word in x is defined using a pointer network (Vinyals et al., 2015):

$$p(x_i \mid \text{copy}, a_{<t}, x) = \text{softmax}(\mathbf{h}_i^{\top} \mathbf{W} \tilde{\mathbf{s}}_t).$$

Datasets
To demonstrate the generalization and extensibility of TRANX, we deploy our parser on four semantic parsing and code generation tasks.

Semantic Parsing
We evaluate on the GEO and ATIS datasets. GEO is a collection of 880 U.S. geography questions (e.g., "Which states border Texas?"), and ATIS is a set of 5,410 inquiries about flight information (e.g., "Show me flights from Dallas to Baltimore"). The MRs in the two datasets are defined as λ-calculus logical forms (e.g., "lambda x (and (state x) (next to x texas))" and "lambda x (and (flight x dallas) (to x baltimore))"). We use the pre-processed datasets released by Dong and Lapata (2016) and the ASDL grammar defined in Rabinovich et al. (2017), as listed in Fig. 3.

Code Generation
We evaluate TRANX on both general-purpose (Python, DJANGO) and domain-specific (SQL, WIKISQL) code generation tasks. The DJANGO dataset (Oda et al., 2015) consists of 18,805 lines of Python source code extracted from the Django Web framework, with each line paired with an NL description. Code in this dataset covers various real-world use cases of Python, like string manipulation, I/O operations, and exception handling. WIKISQL (Zhong et al., 2017) is a code generation task for a domain-specific language (SQL).
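As a small numerical illustration of the generate/copy marginal for GENTOKEN actions defined above, the sketch below computes the gate, vocabulary, and copy distributions with random weights. Shapes, variable names, and the example source tokens are assumptions for demonstration, not the TRANX implementation.

```python
# Illustrative sketch of the hybrid generate/copy probability for GENTOKEN actions.
# Weights and tokens are random/assumed; this is not the TRANX implementation.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8                    # attentional vector size
n_src, vocab = 5, 10     # source length, closed vocabulary size

s_tilde = rng.normal(size=d)          # attentional vector at the current step
H = rng.normal(size=(n_src, d))       # encoder states h_1..h_n
E = rng.normal(size=(vocab, d))       # token embeddings for the closed vocabulary
W_gate = rng.normal(size=(2, d))      # scores the gen-vs-copy decision
W_gen = rng.normal(size=(d, d))
W_copy = rng.normal(size=(d, d))

p_gate = softmax(W_gate @ s_tilde)            # [p(gen|.), p(copy|.)]
p_gen_vocab = softmax(E @ (W_gen @ s_tilde))  # p(v | gen, .) over the closed vocabulary
p_copy_pos = softmax(H @ (W_copy @ s_tilde))  # p(x_i | copy, .) over source positions

# Marginal probability of emitting token v: generate it from the vocabulary and/or
# copy it from every source position where it occurs.
src_tokens = ["open", "file.csv", ",", "r", ")"]
def gentoken_prob(v, vocab_index=None):
    p = p_gate[0] * p_gen_vocab[vocab_index] if vocab_index is not None else 0.0
    p += p_gate[1] * sum(p_copy_pos[i] for i, tok in enumerate(src_tokens) if tok == v)
    return p

print(gentoken_prob("file.csv"))              # out-of-vocabulary: copy only
print(gentoken_prob("open", vocab_index=3))   # in-vocabulary: generate + copy
```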
It consists of 80,654 examples of NL questions (e.g., "What position did Calvin Mccarty play?") and annotated SQL queries (e.g., "SELECT Position FROM Table WHERE ...").

Tab. 1: Exact-match accuracy (Methods / GEO / ATIS):
  ZH15 (Zhao and Huang, 2015): 88.9 / 84.2
  ZC07 (Zettlemoyer and Collins, 2007): 89.0 / 84.6
  WKZ14 (Wang et al., 2014): 90.4 / 91.3
  Neural network-based models:
  SEQ2TREE (Dong and Lapata, 2016): 87.1 / 84.6
  ASN (Rabinovich et al., 2017): [values not recovered]

Tab. 2: Accuracy on DJANGO:
  (Ling et al., 2016): 31.5
  SEQ2TREE (Dong and Lapata, 2016): 39.4
  NMT (Neubig, 2015): 45.1
  LPN (Ling et al., 2016): 62.3
  YN17 (Yin and Neubig, 2017): 71.6
  TRANX (w/o parent feeding): 72.7
  TRANX (w/ parent feeding): 73.7

Extending TRANX for WIKISQL
In order to achieve strong results, existing parsers, like most models in Tab. 3, use specifically designed architectures to reflect the syntactic structure of SQL queries. We show that the transition system used by TRANX can be easily extended for WIKISQL with minimal engineering, while registering strong performance. First, we define a simple ASDL grammar following the syntax of SQL (Fig. 4). We then augment the transition system with a special GENTOKEN action, SELCOLUMN[k]. A SELCOLUMN[k] action is used to populate a primitive column_idx field in the Select and Condition constructors of the grammar by selecting the k-th column of the table. To compute the probability of SELCOLUMN[k] actions, we use a pointer network over column encodings, where the column encodings are given by a bidirectional LSTM network over the column names of the input table. This can be implemented simply by overriding the base Parser class in TRANX and modifying the functions that compute action probabilities.

Results
In this section we discuss our experimental results. All results are averaged over three runs with different random seeds.

Code Generation
Tab. 2 lists the results on DJANGO, where TRANX achieves state-of-the-art accuracy. We also find that parent feeding yields a +1 point gain in accuracy, suggesting the importance of modeling parental connections in ASTs with complex domain grammars (e.g., Python). Tab. 3 shows the results on WIKISQL. We first discuss our standard model, which only uses information from column names and does not use the contents of input tables during inference, as listed in the top two blocks of Tab. 3. We find that TRANX, with just simple extensions to adapt to this dataset, achieves impressive results and outperforms many task-specific methods. This demonstrates that TRANX is easy to extend to incorporate task-specific information while maintaining its effectiveness. We also extend TRANX with a very simple answer-pruning strategy, where we execute the candidate SQL queries in the beam against the input table and prune those that yield empty execution results. Results are listed in the bottom two blocks of Tab. 3, where we compare with systems that also use the contents of tables. Surprisingly, this (frustratingly) simple extension yields significant improvements, outperforming many task-specific models that use specifically designed, heavily engineered neural networks to incorporate information from table contents.

Conclusion
We present TRANX, a transition-based abstract syntax parser. TRANX is generalizable, extensible and effective, achieving strong results on semantic parsing and code generation tasks.
Intersection of Epigenetic and Metabolic Regulation of Histone Modifications in Acute Myeloid Leukemia

Acute myeloid leukemia (AML) is one of the most lethal blood cancers, accounting for close to a quarter of a million annual deaths worldwide. Even though genetically heterogeneous, all AMLs are characterized by two interrelated features: blocked differentiation and high proliferative capacity. Despite significant progress in our understanding of the molecular and genetic basis of AML, the treatment of AMLs with chemotherapeutic regimens has remained largely unchanged in the past 30 years. In this review, we will consider the role of two cellular processes, metabolism and epigenetics, in the development and progression of AML and highlight the studies that suggest an interconnection of therapeutic importance between the two. Large-scale whole-exome sequencing of AML patients has revealed the presence of mutations, translocations, or duplications in several epigenetic effectors such as DNMT3, MLL, ASXL1, and TET2, oftentimes co-occurring with mutations in metabolic enzymes such as IDH1 and IDH2. These mutations often result in impaired enzymatic activity, which leads to an altered epigenetic landscape through dysregulation of chromatin modifications such as DNA methylation, histone acetylation, and histone methylation. We will discuss the role of the enzymes that are responsible for establishing these modifications, namely histone acetyltransferases (HATs), histone methyltransferases (HMTs), histone demethylases (KDMs), and histone deacetylases (HDACs), and also highlight the merits and demerits of using inhibitors that target these enzymes. Furthermore, we will tie in the metabolic regulation of co-factors such as acetyl-CoA, SAM, and α-ketoglutarate that are utilized by these enzymes and examine the role of metabolic inhibitors as a treatment option for AML. In doing so, we hope to stimulate interest in this topic and help generate a rationale for the consideration of the combinatorial use of metabolic and epigenetic inhibitors for the treatment of AML.

INTRODUCTION
All cancers are a result of highly proliferative cells, but an accompanying blockade in differentiation is particularly pronounced in certain leukemias such as AML. The process of differentiation typically induces two fundamental cellular programs: halting of proliferation and acquisition of a permanent cellular identity. The former entails dramatic metabolic state changes, and the latter invokes chromatin-based epigenetic dynamics. Notably, the enzymology of mitochondrial and chromatin reactions is well appreciated to be biochemically related, potentially enabling their dual regulation. Accordingly, we hypothesize that pro-differentiation therapeutic strategies, even in non-APL AMLs, may be highly effective if both metabolic and chromatin pathways are targeted simultaneously. To date, differentiation therapy with all-trans retinoic acid (ATRA) and arsenic trioxide (ATO) has been curative in the promyelocytic AML subtype (1), which provides a promising precedent. We will highlight the epigenetic and metabolic pathways that are dysregulated in AML and that affect the chromatin modifications critical for orchestrating differentiation and locking in the differentiated cellular state. Furthermore, we will discuss the numerous clinical trials that have evaluated the efficacy of metabolic and epigenetic inhibitors and also highlight the combinatorial use of these inhibitors, which has provided some promising results.
METABOLIC REWIRING IN AML
Rapidly proliferating tumor cells exert a vastly different nutrient demand on their microenvironment in comparison to the surrounding benign tissue. Metabolic rewiring in cancer cells is largely responsible for generating this distinct nutrient demand and has been a topic of extensive research for the past few decades (2). One of the first landmark discoveries of oncogenic metabolic dysregulation was made during the 1920s, when Otto Warburg observed increased glycolysis by tumor cells even under normoxic conditions (3). This switch in glucose metabolism from the primarily mitochondrial tricarboxylic acid cycle to cytosolic anaerobic glycolysis allows cancer cells to maintain low levels of reactive oxygen species (ROS) while also ensuring an adequate supply of biomolecules to facilitate tumor growth (4,5).

In the blood, these programs are more complicated. In both AML and its normal hematopoietic counterparts, metabolic states are dynamic, in part due to the hierarchical cell identity programs that characterize each. Healthy hematopoietic stem cells (HSCs), which reside in the typically oxygen-poor bone marrow, are usually quiescent and non-proliferative. These features are consistent with their observed anaerobic glycolysis and low production of ROS (6). However, upon commitment to differentiation, these cells enter the cell cycle and transition to oxidative phosphorylation. Interestingly, this situation is reversed in AML: the lowly proliferative leukemia stem cells (LSCs) respire, whereas the highly proliferative AML blasts elevate glycolysis (7). Notably, altered metabolism is one of the strikingly few differences between HSCs and disease-sustaining LSCs, suggesting its importance in AML cell identity (8). This unique metabolic feature of LSCs may be of significant therapeutic relevance. Inhibition of Bcl-2 impedes oxidative phosphorylation in LSCs and selectively eliminates this population in AML (9). While the Bcl-2 inhibitor Venetoclax is currently being tested as a single agent in clinical trials (10), recent studies have also shown that Venetoclax in combination with the DNMT inhibitor Azacytidine has more striking effects in targeting LSCs in AML patients (11)(12)(13). This synergistic effect implies that metabolic and epigenetic programs are intertwined in AML. This intriguing relationship, and the possibility of targeting the two in combination, warrants further exploration of this rising area.

EPIGENETIC REPROGRAMMING IN AML
Epigenetic programs are similarly awry in AML compared to normal hematopoiesis. In AML, epigenetic modifiers are strongly overrepresented among recurrently mutated driver genes, and mounting evidence suggests their critical role in enforcing the differentiation-arrested cell fate. Biochemical evidence suggests that the altered metabolic and epigenetic programs of AML may not be independent of one another. A number of biomolecules produced during glucose metabolism are further utilized as cofactors by enzymes that act as "epigenetic effectors" and regulate the expression of numerous tumor suppressors and oncogenes (Table 1) (14). The best-studied example of a metabolic dysregulation that directly alters the chromatin state in a subset of AMLs is mutation of the isocitrate dehydrogenase enzymes (IDH1 and IDH2). Excitingly, IDH1/2 inhibitors have shown great promise in reversing the differentiation block and improving patient prognosis (15)(16)(17).
However, we propose that the mitochondria-chromatin connection can be exploited in cases beyond IDH1/2-mutant AMLs. We derive this view in part from models of normal myeloid cell differentiation programs. Numerous studies have demonstrated the interaction between metabolic dynamics and epigenome regulation in classical cell fate models such as macrophage activation and trained immunity [reviewed in (18)]. Upon classical macrophage activation by LPS stimulation, a switch from oxidative phosphorylation to glycolysis occurs rapidly, producing a buildup of lactate, which has been shown to directly antagonize class II HDACs (19). In contrast, in trained-immunity models of beta-glucan-treated monocytes, glutaminolysis-induced fumarate buildup leads to inhibition of KDM5-family demethylases and sustained H3K4me3 levels at open chromatin (20). We propose that programs such as these, perhaps in altered or dysfunctional form, are also directly relevant to AML biology, as differentiation blockade and cell fate dysregulation are hallmarks of this disease. Below, we discuss current literature highlighting pathways that have shown therapeutic promise, along with new pathways that could be targeted for the future development of therapeutics against AML.

HISTONE ACETYLATION
Histone acetylation is an enzymatically regulated post-translational modification (PTM), under the control of at least 17 acetyltransferases (HATs) that require an acetyl-CoA cofactor and at least 18 deacetylases (HDACs) that require an NAD+ or zinc cofactor (21). Several lines of evidence suggest that the activities of HATs and HDACs are coupled to the overall metabolic state of a cell. A higher rate of glycolysis in pluripotent cells relative to differentiated lineages accounts for both the increased availability of acetyl-CoA and increased histone acetylation (22). On the other hand, non-APL AML blasts generally have reduced acetylation of histone H3 (23) and histone H4 compared to normal white blood cells (24).

EPIGENETIC DYSREGULATION OF HISTONE ACETYLATION
Some of the most dramatic examples of impaired acetylation regulation involve chromosomal rearrangements that fuse HAT enzymes to other genes. Multiple rare AML subsets contain genetic fusions with HAT partners (25). For instance, the chimeric transcripts of t(11;16)(q23;p13.3), t(8;16)(p11;p13), and t(8;22)(p11;q13) AMLs code for MLL-CREBBP, MOZ-CREBBP, and MOZ-EP300 fusions, respectively. Interestingly, the MOZ-CREBBP fusion is a more potent activator of NF-κB-dependent promoters than either wild-type protein alone (26) and synergizes with hyperactivated FLT3 to promote NF-κB target transcription. The relevance of HATs to hematological malignancies is not limited to cases where the enzymes are fusion partners. The proliferation of MLL-AF9 and NUP98-HOXA9 driven leukemias depends on the acetyltransferase activity of MOF, such that MOF mutants incapable of binding acetyl-CoA, owing to a domain deletion or amino acid substitution, have impaired colony formation efficiency (27). However, the relevance of HATs in AML can extend beyond acetyltransferase activity. For example, dissociation of the protein interaction between Myb and EP300 by Celastrol represses Myb-activated gene targets independently of the inhibition of EP300 acetyltransferase activity (28). Accordingly, the combined effect of inhibiting both EP300 enzymatic activity and its interaction with Myb has been shown to induce synergistic myeloid differentiation in in vitro models.
These studies highlight the importance of, and distinction between, the enzymatic activities and the scaffolding functions of HATs in promoting AML. Disruption of histone acetylation homeostasis may have context-specific effects in AML. Decreasing histone acetylation with the HAT inhibitor C646 (Figure 1B), which preferentially targets the catalytic activity of CREBBP and the related acetyltransferase EP300 (also known as KAT3B), significantly reduced the proliferation of 8 out of 10 tested AML tissue culture cell lines (29). Surprisingly, however, increasing histone acetylation with the HDAC inhibitor valproic acid yields similar effects, potently inducing differentiation of AML1/ETO-driven Kasumi-1 cells. Notably, the equivalent dose of valproic acid does not significantly impair the proliferation of normal murine hematopoietic lin− progenitors (30). These differing findings suggest that the balance of HAT and HDAC activity, rather than total acetylation prevalence, is a critical component of AML malignancy.

While the poor selectivity and bioavailability of HAT inhibitors have prevented their advance to the clinic (31), HDAC inhibitors (HDACi) have shown therapeutic efficacy. The FDA has recently approved multiple HDAC inhibitors for the treatment of T cell lymphomas and multiple myeloma, including the pan-HDACis Vorinostat, Panobinostat, and Belinostat, and the class I-specific HDACi Romidepsin. Several studies have evaluated the efficacy of HDACi in AML with promising results (32). In an in vivo model of t(8;21)-driven AML, administration of Panobinostat triggered proteasomal degradation of the pathogenic AML1/ETO fusion protein, leading to terminal myeloid differentiation and an excellent survival benefit in mice (33). Unfortunately, the activity of Panobinostat was limited to this subset of AML, prompting the search for novel combinatorial therapies. However, phase I/II studies using a combination of Panobinostat [10-40 mg orally, thrice weekly for seven doses (34) or 20 mg orally for six doses (35)] with Azacitidine failed to report significant improvement in the overall survival of patients and were also associated with severe Panobinostat-related side effects of fatigue, nausea, syncope, and somnolence. The toxicities associated with pan-HDAC inhibition warrant the development of more selective HDACi with improved benefit-risk profiles (36).

METABOLIC REGULATION OF HISTONE ACETYLATION
Unlike phosphorylation, which is impervious to changes in cellular levels of ATP due to the overabundance of this small molecule, the levels of histone acetylation are intricately linked to the nuclear pool of acetyl-CoA (Figure 1A) (37), which is largely maintained by the ATP citrate lyase (ACLY) enzyme that utilizes glucose-derived citrate as its substrate (38). The PI3K/Akt/mTOR pathway is a master regulator of cellular glucose intake (39) and also activates ACLY to maintain acetyl-CoA in the nucleus under nutrient starvation conditions (40). Multiple cancers exploit this pathway to confer a growth advantage under limited nutrient conditions, making the PI3K/Akt/mTOR axis an attractive therapeutic target (41). Constitutive activation of PI3K/Akt/mTOR signaling is observed in >60% of AML patients and is correlated with aggressive disease progression as well as shorter overall survival times (42,43). Rapamycin and its analogs (rapalogs), which were initially approved as immunosuppressive drugs to prevent organ transplant rejection, have found significant use as inhibitors of the mTORC1 complex in AML.
Initial in vitro studies demonstrated promising cytostatic effects upon rapamycin administration in K562, U937, HEL, HL60, KG1, and KG1a AML cell lines (44). However, only moderate clinical effects have been observed with rapalogs alone, owing to the complex interplay between the PI3K, Akt, and mTOR proteins leading to rapid development of drug resistance (43,45). To overcome these limitations, simultaneous inhibition of multiple nodes of the PI3K/Akt/mTOR axis has been the focus of the next generation of dual inhibitors (46). Dactolisib (NVP-BEZ235) is a dual inhibitor of PI3K and mTOR signaling that was shown to block the clonogenic ability and proliferation of multiple primary AML cells and cell lines at low nanomolar doses (47). Along with establishing the safety and efficacy of the new dual inhibitors, multiple phase I/II trials are also ongoing to evaluate the efficacy of combining rapalogs with chemotherapeutic agents (48,49). Even though single-drug therapies have so far been the major focus in the attempt to restore the balance of histone acetylation in secondary and refractory AML, the most promising therapeutic leads have come from the simultaneous inhibition of HDACs and the mTOR/Akt axis. Entinostat, which primarily inhibits HDAC1 and HDAC3, partially inhibits Akt/mTOR signaling in HL60 AML cells. However, the combination of Entinostat with the mTOR inhibitor Everolimus yields excellent synergism in inducing differentiation and apoptosis in myeloblasts (50). More recently, HDAC3 was found to directly interact with and regulate the Akt protein in leukemic cells. Accordingly, the HDAC3-specific inhibitor RGFP966 (Figure 1B), which sensitizes MLL/AF9 AMLs to Doxycycline and Cytarabine, could also be useful in preventing drug resistance, as it suppresses Akt activation (51). The study did not combine HDAC and mTOR/Akt inhibitors over the long term to examine the mechanism of drug resistance development, but similar findings in other cancers point toward a general mechanism of overcoming drug resistance associated with the combination of HDACi and mTOR inhibitors (52).
HISTONE METHYLATION
Post-translational histone methylation of lysine and arginine residues is well-documented and arguably more complex in regulation and function than histone acetylation (Figure 2A). While a lysine can only acquire a single acetyl group, it can acquire up to three methyl groups, and an arginine can acquire one or two methyl groups arranged in a symmetric (i.e., one methyl group per side chain amine) or asymmetric (i.e., both methyl groups on the same side chain amine) manner (53). Furthermore, unlike histone acetylation, which is generally associated with active gene expression, histone methylation is associated with either active or repressed gene expression, depending on the site and degree of methylation (54). The methyl donor used by all histone methyltransferases (HMTs) is S-adenosyl methionine (SAM), which is derived chiefly from the condensation of methionine and ATP, and is one of the links between the metabolic state of a cell and its epigenetic landscape (55).
EPIGENETIC DYSREGULATION OF HISTONE METHYLATION
Of all the known HMTs, KMT2A, or the MLL methyltransferase, has the most well-documented role in the development of acute leukemia. MLL translocations are observed in ∼10% of all AML cases (Figure 2B) and are often associated with poor prognosis (56).
To date, 135 different MLL rearrangements have been identified in acute leukemias, but almost 90% of the MLL rearrangements observed in AML patients involve the translocation partners AF6/9/10, ELL, ENL, and SEPT6 (57). Downstream upregulation of HOXA9 and MEIS1 is a common feature of a large majority of these MLL fusions and is essential for driving leukemogenesis (58). Adding to this complexity, the regulation of the Hox cluster of genes in MLL fusion-driven leukemias is strongly associated with H3K79 methylation, a mark associated with transcriptional elongation, rather than with the canonical MLL substrate H3K4 (59). Hence, inhibition of Dot1L, the only known H3K79 methyltransferase in humans, is a promising therapeutic strategy against AML, although recently concluded phase I studies have shown modest effects of a highly specific Dot1L inhibitor, Pinometostat (EPZ-5676) (60). The H3K4 mono- and di-methylation-specific, FAD-dependent histone demethylase LSD1 (also known as KDM1A) has also been found to be crucial for both normal hematopoiesis and AML malignancy (61). While LSD1 is ubiquitously expressed in all tissues, up to 32% of CML patients and 56% of myelodysplastic syndrome (MDS) patients have elevated levels of the protein in bone marrow biopsies (62). Additional genome-wide knockdown screens have revealed that the growth of AML cell lines, relative to other non-AML cancer types, is especially sensitive to LSD1 depletion (63). In support of the central role of LSD1 in AML, pharmacological inhibition of LSD1 with GSK-LSD1 and ORY-1001, both of which irreversibly inactivate the enzyme, leads to a significant reduction in the proliferation of various AML tissue culture lines and an increase in the survival of murine disease models (64,65). However, the essential nature of LSD1 in ensuring normal differentiation of hematopoietic progenitors (66) and the ability of tranylcypromine (Parnate)-based LSD1 inhibitors to cross the blood-brain barrier result in severe side effects when these inhibitors are used at high doses as a monotherapy. A new generation of reversible inhibitors that selectively target LSD1's interaction with other proteins such as CoREST could show therapeutic benefits without the neurological side effects (67). Alternatively, combination of low-dose LSD1 inhibition with inhibitors of metabolic pathways may provide another avenue to induce therapeutic differentiation in myeloblasts. Among other methylation regulators, the methyltransferase EZH2, a subunit of Polycomb repressive complex 2 (PRC2), is frequently mutated in myeloid malignancies. EZH2 is responsible for di- and tri-methylation of Lys27 of histone H3 to establish a repressive chromatin state at promoters of important hematopoietic transcription factors such as HOXA9 (68). Although loss-of-function mutations in EZH2 primarily result in myelodysplastic/myeloproliferative neoplasms (MDS/MPN) (69), the catalytic activity of EZH2 was also shown to be essential for rapid AML progression (70). Hence, inhibition of EZH2 catalysis has been explored as a therapeutic strategy in AML, first with pan-methyltransferase inhibitors such as 3-Deazaneplanocin A (DZNep) in combination with the HDAC inhibitor Panobinostat (71), and more recently with specific EZH1/2 inhibitors such as OR-S1/S2 (72) and UNC1999 (73). Unfortunately, many EZH2 inhibitors fail to show robust effects in vivo as a monotherapy compared to the current standard
of care chemotherapy, prompting the combination of multiple epigenetic inhibitors as a treatment option for AML (74).
(Figure 2B caption: Example of a chromosomal translocation event between chromosomes 11 and 9 that leads to the gene fusion of MLL and AF9. The fusion occurs at various junctions, but generally leads to transcription and translation of an MLL-AF9 fusion protein; note that the reciprocal AF9-MLL protein can also be generated but is not shown. The fusion protein can coordinate with the DOT1L methyltransferase or the MOF acetyltransferase to epigenetically regulate numerous gene targets, for example genes in the HOXA cluster, that directly and indirectly promote tumorigenesis.)
However, this work needs to proceed with caution, as EZH2 has also been reported to help prevent the development of resistance against multiple drugs (75). The various methylation states of arginine residues in histones add further complexity to the histone methylome. Three classes of arginine methyltransferases (PRMTs) have been identified in mammals so far: Class I generates ω-N^G,N^G-asymmetric dimethylarginine (ADMA), Class II catalyzes ω-N^G,N′^G-symmetric dimethylarginine (SDMA), and Class III stops at the production of ω-N^G-monomethylarginine (MMA). The first identified arginine methyltransferase, PRMT1, has been shown to directly associate with a variety of oncogenic fusion proteins. Shia et al. demonstrated that the interaction of PRMT1 with RUNX1 fusion proteins mediates transcriptional activation by increasing promoter H4R3me2a at multiple genes regulating proliferation (76). Subsequently, Cheung et al. found that several MLL and non-MLL fusion proteins, such as MOZ-TIF2, require interaction with PRMT1 and other epigenetic modifiers such as the H3K9me3 demethylase KDM4C to drive their oncogenic transcriptional programs (77). The H4R3 residue plays additional roles in leukemogenesis through its symmetric dimethylation by PRMT5. PRMT5 has been suggested to influence the transcription of critical genes in AML. Tarighat et al. found that PRMT5 activates the FLT3 gene and suppresses miR-29b, resulting in increased proliferation of myeloblasts (78). Intriguingly, chemical inhibition of PRMT5 using EPZ015666 (GSK3235025) partially reversed the differentiation blockade of MLL-rearranged AMLs and slowed disease progression in vivo (79). The safety, pharmacokinetics, and pharmacodynamics of these newly developed PRMT5 inhibitors, including EPZ015666, are currently being investigated in clinical trials and hold promise as an alternative line of attack against various cancers.
METABOLIC REGULATION OF HISTONE METHYLATION
Similar to acetylation, small changes in the cellular levels of SAM can differentially affect the activity of various histone methyltransferases (80,81). Human ESCs/iPSCs, which are known to rely on glycolysis for their energy production needs, have been shown to depend on high levels of SAM to maintain pluripotency and proliferation (82). Although the global dynamics of metabolism-responsive histone lysine and arginine methylation have not been examined, Lys4 of H3 displays a strong association with metabolism. In an elegant study by Mentch et al., a direct and reversible relationship between H3K4me3 and methionine availability was observed in human cell lines and in mice. SAM depletion under methionine restriction induced loss of promoter H3K4me3 at oncogenes such as AKT1 and MYC, leading to their downregulation. However, feedback to one-carbon metabolism pathways eventually allowed for partial restoration of H3K4me3 at several promoters (83).
This compensation is most likely achieved through decreased consumption of SAM by alternate pathways and by downregulation of certain H3K4 demethylases. Consequently, this suggests that, in order to effectively modulate H3K4me3 levels at promoters, a combination of metabolic regulation of SAM and inhibition of H3K4-specific HMTs or demethylases is required. Indeed, a domain-focused CRISPR screen has recently shown that knocking out the catalytic domains of certain HMTs and histone demethylases has a profound effect on the survival of murine MLL-AF9/Nras G12D AML cells (54). Lysine demethylases are equally adept at responding to the metabolic state of a cell and are known to rely on two types of cofactors: either flavin adenine dinucleotide (FAD) in the case of LSD1/LSD2, or 2-oxoglutarate (α-ketoglutarate), oxygen, and Fe(II) iron in the case of the Jumonji domain-containing demethylases (84). The role of α-ketoglutarate in relation to leukemogenesis is perhaps the most well-studied example of metabolic dysregulation and has been extensively discussed in several reviews (85,86). Additionally, many JmjC-containing histone demethylases (JARID1B, JMJD1A, JMJD2B, and JMJD2C) show increased expression under hypoxic metabolic conditions as a result of direct HIF-1 binding to their promoters, compensating for their decreased catalytic activity under limited O2 conditions (87). A more direct role of KDM4A in sensing cellular oxygen was recently proposed when its catalytic activity demonstrated a graded response to hypoxia in U2OS cells (88). On the other hand, prolonged hypoxia had no effect on the protein levels of LSD1 but reduced its catalytic activity through a gradual decrease in cellular FAD levels (89). This finding makes sense, as O2 is essential for the reoxidation of FADH2 back to FAD after a round of catalysis by LSD1. However, the presence of more than 100 flavoproteins inside a cell makes it difficult to isolate the epigenetics-related effects of FAD metabolism in cancer development. Nonetheless, new reports challenging the long-held view of FAD synthesis being restricted to the mitochondria and cytosol have opened the possibility that nuclear pools of FAD can have localized effects on the demethylation activity of LSD1/LSD2 (90). How these effects play out in the hypoxic microenvironment of the bone marrow (0.5-4% O2) still needs to be investigated, as clues from these metabolic studies will provide new and exciting pathways that can be targeted in conjunction with well-studied epigenetic pathways as future therapeutics in leukemias.
DISCUSSION
Thus far, we have reviewed evidence demonstrating how metabolic states can directly influence chromatin-based epigenetic programs in AML. The shared biochemistry of mitochondrial and chromatin regulation is common to all eukaryotic cells. However, the rapid tissue turnover in the blood requires orchestration of rapidly changing cell identity programs, creating unique mitochondrial dynamics. From the quiescence of HSCs, to the proliferative bursting of progenitors, to cell cycle exit upon terminal differentiation, blood cells can pass through markedly different epigenetic and metabolic states in a short period of time. However, to become leukemic, these cells must not only elevate proliferative potential, but must also disable differentiation programs.
We propose the possibility that altered metabolic dynamics may help simultaneously confer both of these features upon AML cells, as metabolic alterations not only enable the rapid proliferation of blasts, but also directly skew the epigenetic programs that regulate differentiation. This scenario would be distinct from solid tumors, in which the metabolic switch to aerobic glycolysis is primarily thought to enable rapid proliferation. Indeed, it is possible that metabolic alterations in fact represent a key mechanism through which the AML differentiation block is constructed. One prediction of this hypothesis is that experimental manipulation of cellular metabolic programs in normal blood cells in vivo may impair their differentiation programs in a manner that resembles the AML differentiation block. There is support for this prediction. For example, a mouse knockout study investigated the blood phenotypes resulting from loss of the mitochondrial phosphatase PTPMT1, which is highly expressed in HSCs (91). Hematopoietic loss of PTPMT1 hampered oxidative phosphorylation, elevated glycolysis, and decreased oxygen consumption. Most intriguingly, blood-specific ablation of PTPMT1 markedly arrested myeloid differentiation programs and induced a large buildup of HSCs and phenotypic multipotent progenitors (MPPs). Additionally, in an elegant study that isolated the stage of maturation from myeloid progenitors to granulocytes, Lee et al. (92) found that inhibition of the key nutrient sensor mTORC1 blocked terminal differentiation and allowed continuous growth of GMPs in vivo. Such studies demonstrate that metabolic alterations in healthy myeloid cells can induce an AML-like differentiation block. It will be of interest to see how exactly the corresponding alterations in chromatin regulators may contribute to the gene expression program driving the observed differentiation arrest. Finally, the connection between metabolism and chromatin regulation in AML may have therapeutic significance. Epigenetic regulators have emerged as attractive therapeutic targets for AML, and we have highlighted several inhibitors of chromatin factors that have shown potential in AML models. However, as with most cancer therapeutics, these drugs may work better in combination than as monotherapies. We propose that it would be of particular interest to explore AML drug combinations that pair an epigenetic inhibitor with a metabolic inhibitor. Our rationale is that metabolic inhibitors will have profound influences on chromatin regulators, perhaps opening up marked epigenetic vulnerabilities. For example, if a metabolic inhibitor decreases the pool of nuclear acetyl-CoA, cells may struggle to maintain proper histone acetylation levels, which in turn could make them exquisitely vulnerable to HAT inhibitors, as any further impairment of histone acetylation could be intolerable. The number of possible combinations between metabolic and chromatin regulators is large, but advances in screening methodology, both genetic and small molecule, are starting to make such endeavors feasible.
AUTHOR CONTRIBUTIONS
AD and MB conceptualized the idea and wrote the majority of the manuscript. BZ and FY helped write parts of the manuscript and prepared the figures.
FUNDING
Work in the Blanco laboratory is supported by a grant from the NIH (1K22CA214849-01). AD and BZ are supported by the NIH grants 5R35CA210104-02 and 5R21AI130737-02.
Application of a MALDI-TOF analysis platform (ClinProTools) for rapid and preliminary report of MRSA sequence types in Taiwan
Background The accurate and rapid preliminary identification of the types of methicillin-resistant Staphylococcus aureus (MRSA) is crucial for infection control. Currently, however, expensive, time-consuming, and labor-intensive methods are used for MRSA typing. By contrast, matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is a potential tool for preliminary lineage typing. The approach has not been standardized, and its performance has not been analyzed in some regions with geographic barriers (e.g., Taiwan Island). Methods The mass spectra of 306 MRSA isolates were obtained from multiple reference hospitals in Taiwan. The multilocus sequence types (MLST) of the isolates were determined. The spectra were analyzed for the selection of characteristic peaks by using the ClinProTools software. Furthermore, various machine learning (ML) algorithms were used to generate binary and multiclass models for classifying the major MLST types (ST5, ST59, and ST239) of MRSA. Results A total of 10 peaks with the highest discriminatory power (m/z range: 2,082–6,594) were identified and evaluated. All the single peaks revealed significant discriminatory power during MLST typing. Moreover, the binary and multiclass ML models achieved sufficient accuracy (82.80–94.40% for binary models and >81.00% for multiclass models) in classifying the major MLST types. Conclusions A combination of MALDI-TOF MS analysis and ML models is a potentially accurate, objective, and efficient tool for infection control and outbreak investigation.
INTRODUCTION
Since their emergence in the 1960s, methicillin-resistant Staphylococcus aureus (MRSA) infections have been a major health care concern worldwide (Chen & Huang, 2014;Walter et al., 2015;Wang et al., 2010). Many epidemiological studies have revealed that different multilocus sequence types (MLST) present specific characteristics such as virulence gene profiles (Recker et al., 2017;Schuenck et al., 2012;Wang et al., 2009, 2010, 2012). Understanding the evolution of MRSA lineages and the origin of infection is crucial in outbreak investigation. Molecular typing methods, such as pulsed-field gel electrophoresis and MLST, are highly expensive and labor-intensive for epidemiological studies. Hence, the application of these methods in clinical practice is limited (Struelens et al., 2009). Sequence-based typing methods have been widely used over the past decade, and they provide adequately high resolution for confirming transmission. However, in regions with limited medical resources and financial constraints, they remain relatively impracticable (Harris et al., 2013;Koser et al., 2012;Schwarze et al., 2018). Recently, matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) has been used in many clinical microbiology laboratories. This method can be used to identify bacterial species effectively and rapidly (Ge et al., 2016). The peptide or protein MS fingerprint of each bacterium can be generated and stored in a bacterial library for species identification. In addition, MALDI-TOF MS also provides an alternative solution to molecular typing methods (e.g., MLST) (Lartigue, 2013;Lu et al., 2012) and has the potential to offer lineage typing up to the subspecies level.
Studies that have adopted the MALDI-TOF MS approach (Lasch et al., 2014;Sauget et al., 2017;Ueda et al., 2015) have reported varying results; this may be due to several reasons. First, the predominant MRSA lineages in different areas are different, and the discriminatory power of MALDI-TOF might therefore differ when compared between different MRSA lineages. Second, the bacterial MS fingerprints of isolates from one area may not match those of isolates from other areas. Third, in many of the published works, the MALDI-TOF mass spectra have been assessed manually. An objective, standardized, and automated protocol has not been widely applied thus far (Camoez et al., 2016). Finally, the data obtained from MALDI-TOF mass spectra are relatively complicated and may be analyzed by a wide variety of bioinformatics tools. A manual approach cannot ensure a consistently high-quality output, because of the potential for large interindividual or intraindividual variation in the interpretation of the data. Therefore, a reliable analysis platform is necessary to ensure comparability between reports in clinical practice. Although Staphylococcus protein A typing provides performance comparable to MLST typing at lower cost and in less time, our study adopted MLST typing because we aim to demonstrate the possibility of implementing our method beyond S. aureus (Crisostomo et al., 2001;O'Hara et al., 2016). To validate the use of MALDI-TOF mass spectra in classifying MLST types of MRSA in Taiwan, we used the ClinProTools software for analyzing MALDI-TOF mass spectra to generate classification models of MRSA lineages. Accordingly, clinical microbiology laboratories may rapidly provide preliminary typing reports of MRSA, which may be further confirmed using a sequence-based method, thus enabling clinical practitioners to exclude an outbreak or transmission in time.
Bacterial lineages
This study included 306 convenience, non-duplicate MRSA lineages isolated from multiple reference hospitals in Taiwan, mainly through the Surveillance of Multicenter Antimicrobial Resistance in Taiwan (SMART) program (Wang et al., 2012). The SMART program consecutively collected MRSA isolates from ten medical centers throughout Taiwan from March to August 2003 (Ho et al., 2010). All the lineages in this study were convenience samples which had been recovered from blood cultures. All duplicate isolates were removed from the study. After the cultures were procured from bacterial banks, tests were performed again to confirm the characteristics of each lineage. The identification of S. aureus was based on colony morphology, microscopic examination, a coagulase test, a catalase test, and MALDI-TOF mass spectra.
Bacterial identification through MALDI-TOF MS
Fresh bacterial colonies that had been grown on blood agar plates for 24 h were picked up and smeared onto a MALDI steel target plate, forming a thin film of colonies. Next, 1 µL of 70% formic acid was introduced onto the film and dried at room temperature. Subsequently, 1 µL of the matrix solution (i.e., 50% acetonitrile containing 1% α-cyano-4-hydroxycinnamic acid and 2.5% trifluoroacetic acid) was introduced onto the film again. The sample-matrix was dried at room temperature before being analyzed through MS for data acquisition. Mass spectrum analysis was performed using a MicroFlex LT mass spectrometer (Bruker Daltonik GmbH, Bremen, Germany) in linear positive mode, and the analyzed mass range was 2,000-20,000 Da.
For each sample, 240 laser shots (at a frequency of 20 Hz) were collected, and a Bruker Daltonics Bacterial Test Standard (Bruker Daltonik GmbH, Bremen, Germany) was used for calibration and as the control in linear positive mode. The procedures were conducted according to the manufacturer's instructions, which have been detailed in previous studies (Ge et al., 2016;Lu et al., 2012). The mass spectrum results from the MALDI Biotyper 3.1 software (Bruker Daltonik GmbH, Bremen, Germany) were compared with those in the database and assigned scores. Spectra with scores >2 were further selected for peak signal analysis. The lineages were randomly divided into batches, and the analyses were conducted on different days to avoid a possible batch effect.
MALDI-TOF MS spectra analysis
MALDI-TOF mass spectra of the MRSA lineages were fed into the ClinProTools software (version 3.0, Bruker Daltonik GmbH, Bremen, Germany) in batches. The data preprocessing steps, including baseline subtraction, smoothing, and recalibration, were set as default for all analyses (Bruker Daltonik GmbH, 2011;Camoez et al., 2016;Zhang et al., 2015). ClinProTools is a widely used software package developed by Bruker (Bruker Daltonik GmbH, Bremen, Germany). It has been used in MALDI-TOF data analysis for MRSA lineage typing (Camoez et al., 2016;Zhang et al., 2015). Characteristic peaks among the various MLST types were selected and sorted through several statistical tests, including the t-test, analysis of variance (ANOVA), the Wilcoxon or Kruskal-Wallis (W/KW) test, and the Anderson-Darling (AD) test. A P-value of 0.05 was set as the cutoff. If P was <0.05 in the AD test, a characteristic peak was selected if the corresponding P value in the W/KW test was also <0.05. When P was ≥0.05 in the AD test, a characteristic peak was selected if the corresponding P value in ANOVA was <0.05 (Stephens, 1974).
Generation and validation of classification models
Classification models of the major MLST types (ST5, ST59, and ST239) were generated using the machine learning (ML) algorithms in ClinProTools, namely QuickClassifier (QC), Supervised Neural Network (SNN), and Genetic Algorithm-K Nearest Neighbor (GA-KNN). The description and settings of the ML models are detailed in the ClinProTools user manual (Bruker Daltonik GmbH, 2011). All the peaks in the spectra were used in model generation. The W/KW test was used to sort peaks during selection. For GA-KNN, the GA was used as a method for supporting feature selection, where the maximum number of best peaks was set to 30 and the maximum number of generations was set to 50. The numbers of nearest neighbors evaluated in the GA-KNN algorithm were 1, 3, and 5-7 for each binary classification. To avoid overfitting, we used fivefold cross-validation to obtain an unbiased statistical measurement of performance. Accordingly, the data were split into five subsets in a randomized manner. Each subset served iteratively as the validation set for the model trained on the remaining four subsets. Classification accuracy was obtained from the average of the five evaluations. Consequently, the bias of overfitting could be avoided using fivefold cross-validation.
Statistical analysis
The AD test is used to test for a normal distribution of peak intensity in ClinProTools. When P was <0.05 in the AD test (i.e., the data distribution did not follow a normal distribution), the W/KW test was used as the statistical method to select discriminative peaks.
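To make the peak-selection rule just described concrete, the following is a minimal sketch in Python; it is not part of the ClinProTools workflow, it assumes the per-isolate intensities of a single peak have already been exported and grouped by MLST type, and it uses scipy's Anderson-Darling routine, which reports critical values rather than a P value, as an equivalent normality check.

```python
# Sketch of the AD-test / W-KW-test peak-selection rule described above.
# Assumption: `groups` holds exported intensities of one peak, one array per MLST type.
import numpy as np
from scipy import stats

def is_discriminative(groups, alpha=0.05):
    pooled = np.concatenate(groups)
    # Anderson-Darling normality check: reject normality if the statistic
    # exceeds the 5% critical value (scipy reports no P value for this test).
    ad = stats.anderson(pooled, dist="norm")
    crit_5pct = ad.critical_values[list(ad.significance_level).index(5.0)]
    if ad.statistic > crit_5pct:
        # Non-normal distribution: use the Kruskal-Wallis (W/KW) test.
        _, p = stats.kruskal(*groups)
    else:
        # Approximately normal distribution: use one-way ANOVA.
        _, p = stats.f_oneway(*groups)
    return p < alpha

# Illustrative example with simulated intensities for three MLST types.
rng = np.random.default_rng(0)
groups = [rng.lognormal(mean=m, sigma=0.3, size=50) for m in (1.0, 1.4, 2.0)]
print(is_discriminative(groups))
```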
In the W/KW test, if P was <0.05, a rank-based multiple test procedure was used for post hoc analysis to conduct paired comparisons of the non-normally distributed data and to calculate simultaneous confidence intervals with Tukey-type contrasts (Konietschke, Hothorn & Brunner, 2012). For evaluating the performance of the various ML models, accuracy, sensitivity, and specificity were used as the metrics: Accuracy = (TP + TN)/(TP + TN + FP + FN), Sensitivity = TP/(TP + FN), and Specificity = TN/(TN + FP), where TP, TN, FP, and FN represent the number of true positives, true negatives, false positives, and false negatives, respectively.
Characteristic peaks for discrimination among various MLST types
The characteristic peaks were sorted using the corresponding P-values obtained in the W/KW test because the AD test revealed P < 0.05 (Table 1). The top 10 characteristic peaks, sorted using the W/KW test results, were selected for further statistical evaluation. The 10 peaks ranged from m/z 2,082 to 6,594 (Table 1). All 10 selected peaks revealed P < 0.000001 in the W/KW test, thus indicating that they were informative and discriminative peaks in the MLST type classification. In Table 1, the distribution of these peaks over the various MLST types was further evaluated. High expression levels of m/z 2,430 and 3,893 and a low expression level of m/z 3,877 indicated the fingerprint of ST5; low expression levels of m/z 2,416, 2,980, and 3,893 indicated the fingerprint of ST59; high expression levels of m/z 2,416 and 2,880 and low expression levels of m/z 3,277 and 3,893 indicated the fingerprint of ST239. Based on the 10 informative peaks, the odds ratios of different ST-pairs were calculated to determine the association between the peaks and the respective ST-pairs (Table S1). Furthermore, the distribution of the isolates was plotted according to the peak intensities of m/z 3,277 (x-axis) and m/z 6,594 (y-axis) (Fig. 1). The peaks at m/z 3,277 and 6,594 were the top two characteristic peaks among the ST types (Fig. 1). The scatter plots may be seen as projections of a high-dimensional feature space whose dimensions are the m/z peaks. The average intensities of the 10 peaks of these lineages are further illustrated in Fig. 2. These results demonstrate the specific characteristics and patterns of peaks in the different MLST types. Specifically, the distributions of the ST59 lineages and ST239 lineages showed satisfactory separation (Fig. 1); ST5 lineages could also be distinguished from ST239 lineages based on their distribution (Fig. 1). By contrast, ST5 lineages and ST59 lineages could not be satisfactorily discriminated using information from the peaks m/z 3,277 and 6,594 only (Fig. 1). Similarly, the lineages of the minor ST types were intermixed with the major ST types (i.e., ST5, ST59, and ST239) on the scatter plot (Fig. 1).
ML models for classification of various MLST types of MRSA
The various MLST types showed specific patterns of peak expression, as presented in Table 1 and Figs. 3-5. To generate a comprehensive and objective classification, ML algorithms were used. In the ST5 binary classification models, the peaks at m/z 3,877 and m/z 3,893 showed satisfactory discriminative power (Fig. 3). The QC algorithm attained the highest cross-validation values (94.40%, Table 2). In the ST59 binary classification models, the peaks at m/z 2,980 and m/z 2,416 also showed satisfactory discriminative power (Fig. 4). The GA-KNN algorithm attained the highest cross-validation values among all the algorithms, showing the highest performance when the number of nearest neighbors was set to 5 (85.00%, Table 2).
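As a rough illustration of the fivefold cross-validated KNN classification and the metrics defined above, the sketch below uses scikit-learn in place of the proprietary GA-KNN workflow (no genetic-algorithm peak selection is performed); the feature matrix X of peak intensities and the binary labels y are assumed to have been prepared from the exported spectra, and all names here are illustrative.

```python
# Fivefold cross-validated KNN binary classifier (e.g., ST239 vs non-ST239),
# reporting mean accuracy, sensitivity, and specificity over the folds.
# Stand-in for ClinProTools' GA-KNN: no genetic-algorithm feature selection here.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

def cross_validate_knn(X, y, k=7, n_splits=5, seed=0):
    accs, sens, specs = [], [], []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[train_idx], y[train_idx])
        tn, fp, fn, tp = confusion_matrix(y[test_idx], clf.predict(X[test_idx])).ravel()
        accs.append((tp + tn) / (tp + tn + fp + fn))   # accuracy
        sens.append(tp / (tp + fn))                    # sensitivity
        specs.append(tn / (tn + fp))                   # specificity
    return np.mean(accs), np.mean(sens), np.mean(specs)

# Illustrative example: 306 isolates, 10 selected peak intensities each (simulated).
rng = np.random.default_rng(1)
X = rng.normal(size=(306, 10))
y = rng.integers(0, 2, size=306)   # 1 = ST239, 0 = non-ST239 (simulated labels)
print(cross_validate_knn(X, y))
```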
For the binary classification of ST239 vs non-ST239, the peaks at m/z 3,277 and m/z 6,553 showed moderate discriminative power (Fig. 5). The GA-KNN algorithm showed higher performance than the other algorithms when the number of nearest neighbors was set to 7 (82.80%, Table 2). In this study, multiclass models for classifying an isolate into one of four lineages were also designed. Generally, the GA-KNN algorithm outperformed the other algorithms; more specifically, this algorithm was suitable for detecting ST239 (Table 3). The GA-KNN algorithm could successfully detect the ST5, ST59, and ST239 lineages with accuracies above 81.00% (Table 3).
(Figure 4 caption: Scatter plot of the ST59 and non-ST59 isolates. The peaks at m/z 2,416 and m/z 2,980 served as the x- and y-axes, respectively. The intensities of the characteristic peaks are expressed in arbitrary intensity units. The ellipses represent 95% confidence intervals of the peak intensities for ST59 (red ellipse) or non-ST59 (green ellipse). ST59: red crosses, non-ST59: green circles.)
DISCUSSION
In this study, the major ST types (i.e., ST5, ST59, and ST239) of bloodstream MRSA in Taiwan could be classified through ML-based MALDI-TOF mass spectra analysis with high accuracy. The analysis was conducted using a standardized system (i.e., ClinProTools) to obtain objective and consistent results. Moreover, the ST types of MRSA could be predicted accurately through MALDI-TOF mass spectra analysis before additional molecular typing methods (e.g., MLST) are applied. Biomarker peaks of various MRSA ST types were discovered through MALDI-TOF mass spectra analysis using the ClinProTools software. Some of the biomarker peaks have been previously reported (Camoez et al., 2016;Josten et al., 2013;Sauget et al., 2017;Zhang et al., 2015) and were validated in this study (namely m/z 3,277, 3,877, 3,893, 6,553, and 6,594), whereas some of them have not yet been widely validated (namely m/z 2,082, 2,416, 2,430, 2,880, and 2,980). Nevertheless, reports on the MS characteristics of MRSA are inconsistent (Lasch et al., 2014;Sauget et al., 2017;Ueda et al., 2015). Although the accuracy of typing is sufficient in individual studies, the general patterns of specific lineages are not available (Sauget et al., 2017).
(Figure 5 caption: Scatter plot of the ST239 and non-ST239 isolates. The peaks at m/z 3,277 and m/z 6,553 served as the x- and y-axes, respectively. The intensities of the characteristic peaks are expressed in arbitrary intensity units. The ellipses represent 95% confidence intervals of the peak intensities for ST239 (red ellipse) or non-ST239 (green ellipse). ST239: red crosses, non-ST239: green circles. DOI: 10.7717/peerj.5784/fig-5)
Currently, bacterial lineages obtained from geographically diverse areas cannot be clearly discriminated using MALDI-TOF MS. However, more robust characteristic patterns of the various types may be available in regions with geographic barriers (e.g., Taiwan Island) than in regions without these barriers. The characteristic patterns of specific lineages may be the result of the disseminated lineages in local areas.
(Table 2 notes: QC, QuickClassifier; SNN, supervised neural network; GA, genetic algorithm; KNN, K-nearest neighbor. The number following "GA-KNN" indicates the number of nearest neighbors used in the model; Selected peaks: number of peaks selected by the model. Accuracies are expressed as mean ± standard error.)
By contrast, establishing a localized solution by using an appropriate method of
interpreting mass spectra may be more practical and crucial than establishing a generalized pattern. The isolates in this study were obtained from multiple reference hospitals in Taiwan. Consequently, the isolates used in the study can represent the molecular characteristics of MRSA in the local region. The localized molecular characteristics of MRSA may be sufficiently useful for clinical practice in regions with geographic barriers (e.g., Taiwan Island). Currently, clinical microbiology laboratories generally use MALDI-TOF MS for bacterial identification because of its advantages in accuracy and effectiveness over traditional biochemical methods (Lartigue, 2013). Before analytical measurement using MALDI-TOF MS, a protein extraction process is necessary. In-tube extraction and the direct deposit method are two common extraction methods used in clinical microbiology laboratories. In-tube extraction provides more purified intracellular components than the direct deposit method does, which results in high-quality MALDI-TOF mass spectra and low noise. By contrast, the direct deposition of bacteria onto a steel plate is considered a less labor-intensive and more rapid preanalytical process than in-tube extraction. The time for the entire process and the turnaround time of MALDI-TOF MS can be considerably reduced by using the direct deposit method. Consequently, considering its relevance to routine practice, the direct deposit method was evaluated in this study. Knowledge of the bacterial molecular type of MRSA is crucial when performing epidemiological studies on bacterial outbreaks. In this study, specific peaks were identified for the major clonal lineages of MRSA. The peaks near m/z 3,277, 3,877, 3,893, 6,553, and 6,594 identified in this study have been reported in previous studies as characteristic peaks for discriminating MRSA clonal complexes (Camoez et al., 2016;Josten et al., 2013;Wolters et al., 2011). Zhang et al. (2015) reported the highest expression of the peaks at m/z 3,277 and 6,554 in ST59. The peak at m/z 6,594 was recognized as the SA1452 protein and as a biomarker of the clonal complex 8 and USA-300 lineages; both of these MRSA lineages are prevalent in communities and hospitals (Boggs, Cazares & Drake, 2012;Josten et al., 2013;Wolters et al., 2011). In this study, the peak at m/z 6,594 was characteristic for ST239, which was the most prominent ST type of bloodstream MRSA and a prominent HA-MRSA lineage (Table 1). By contrast, some of the characteristic peaks have not previously been reported and validated as discriminative peaks (i.e., the peaks at m/z 2,082, 2,416, 2,430, 2,880, and 2,980) (Sauget et al., 2017); however, they play a crucial role in MLST type classification. For example, the peak at m/z 2,980 was a distinguishing peak for the ST59 MRSA lineages (Fig. 4), which is one of the characteristic ST types of CA-MRSA in Taiwan (Huang & Chen, 2011). Moreover, the peaks at m/z 2,416 and 2,430 were noted as characteristic peaks in ST5 and ST239. In the QC models, the peak at m/z 2,416 was shown to represent psm-mec, which is strongly associated with SCCmec III and VIII (Queck et al., 2009). Briefly, although the peak at m/z 2,416 is a commonly expressed peak in MRSA, its expression level, in addition to its presence, may serve as an informative feature in the classification of MRSA lineages.
Furthermore, for identifying subtle differences in the MALDI-TOF mass spectra for preliminary reporting of AST or subspecies results, not only single characteristic peaks but also specific combinations of characteristic peaks may be beneficial. Single MS peaks provided some characteristics of each major ST type (Table 1). Additional integration of these characteristic peaks by using ML models may generate a more comprehensive and robust result for ST typing of MRSA (Table 2) than that generated using single peaks. Consequently, the accuracy of subspecies classification using a combination of peaks may be higher and more resistant to variations in analysis than that using a single peak, because of the more comprehensive interpretation. Zhang et al. (2015) reported the successful use of ML models (in ClinProTools) for analyzing MALDI-TOF mass spectra to classify various MLST types of MRSA. High performance of the binary models (including ST5, ST45, ST59, and ST239 binary classification models) was described. However, the high performance might have resulted from overfitting, because only one specific combination of training sets and validation sets was used for performance evaluation. This study had several limitations. Firstly, our study did not adopt nucA polymerase chain reaction or other sequencing methods to exclude S. argenteus, which has previously been identified as S. aureus using molecular typing (Thaipadungpanit et al., 2015;Tong et al., 2015). The MRSA isolates were collected from an island with geographic barriers, thus resulting in a relatively simple ST distribution. Possibly, our method might be unsuitable in other countries with a wide variety of bacterial lineages. We thus recommend that researchers who intend to adopt our approach train and validate their own models using regional MRSA lineages. Another limitation of this study is that the MRSA isolates are all from samples previously collected under the SMART program. This indicates that the models may not exhibit sufficient accuracy when applied to MRSA newly isolated in the future. Moreover, only six MLST types (namely ST5, ST45, ST59, ST239, ST241, and ST573) of MRSA were included for model training and validation. Based on the study design, the ML models can be used for detecting the major lineages of MRSA in Taiwan only; they cannot be used for detecting new clones that were not included in model training. Consequently, the models are designed to be used for reporting preliminary lineage information in outbreak investigation. When a new clone emerges in the future, the ML models could be tuned and updated using newly collected datasets. Our study has demonstrated that, in regions with limited medical resources, a MALDI-TOF-based MLST typing model may serve as a valuable method for timely intervention in the transmission or outbreak of MRSA.
ACKNOWLEDGEMENT
This manuscript was edited by Wallace Academic Editing.
Tradecraft of the Avars’ metalworking – manufacturing of iron axes and a special multi-metallic method used for belt accessories
ABSTRACT Metallographic analyses were performed on several types of early medieval iron axes (hammers) and on a piece of a belt set, found in Hungary, using optical microscopy and SEM-EDS. The examinations focused on defining structural constituents and determining their distribution and grain size. Inclusions were also investigated. On the basis of the results, traces and characteristics of different technological forging methods could be detected. The examined axes were presumably forged from a piece of inhomogeneous iron without folding, kept at high temperature for a longer period, with the edges forged multiple times but not intensely. The belt accessories were covered by an iron oxide layer; however, the complex investigation revealed that these belt ornaments are made of various metals. Sandwich-type iron-tin plates, thin iron wires, as well as brass and bronze plates have been used in the product. We were able to reconstruct the steps of the production process.
Introduction
The Avar Empire was a significant military power in the Carpathian Basin between the 6th and 9th centuries. Metal finds recovered at archaeological sites of Avar graves and settlements are remarkable signs of iron and bronze working cultures. Pannonian workshop sites from the second half of the Avar period prove that considerable ironworking activities were carried out in the Avar Empire (Gömöri, 2000: 234-240). In recent years, a number of selected everyday iron tools (knives, nails, needles, etc.) from Avar settlements and metallurgical workshops were examined by the Archaeometallurgical Research Group of the University of Miskolc (ARGUM). Nevertheless, up to the present, only one of the Avar-age axes, an L-shaped cleaver, has been analysed (Piaskowski, 1974: 121-122, 128), so we decided to investigate more types and subtypes: a hammer axe, a T-shaped cleaver, and a double-headed war hammer. Another important part of this study is a detailed metallographic analysis of a piece of a belt set of the so-called Kölked-Feketekapu type. The name of this type of belt set derives from the area of the cemeteries where most of them have been found. A blacksmith's belt set from Grave 80 of Kölked-Feketekapu B was described by Orsolya Heinrich-Tamáska (2006: 22). Attila Kiss dated Grave 80, which was furnished with goldsmith and blacksmith tools, to between the end of the 6th century and the first third of the 7th century (Kiss, 2001: 335, 345). He dated the belt set type to the early Avar age, noting that all of the datable parallels of these belt sets made with similar techniques came from this period (Kiss, 1996: 213). Heinrich-Tamáska drew attention to the fact that the Kölked-Feketekapu-type belt sets can only be dated from the second third of the 7th century, because of their analogy with the multi-part belt sets with silver inlay (Tauschierung), which have been classified into form group A of horizon (Zeitschicht) 3 of Rainer Christlein (Christlein, 1966: 49;Heinrich-Tamáska, 2006: 22).
Beyond the formal matching, she argues that the close relationship between the two types is shown by the fact that they were basically made from iron, that their distribution was restricted to East Pannonia, and that the few parallels outside the Carpathian Basin are all known from Germanic cemeteries of southern Germany (Heinrich-Tamáska, 2005: 32, 167-168, Karte 1;Heinrich-Tamáska, 2006: 22).
Materials and methods
The examined archaeological findings
Double-headed war hammer from Előszállás (Fejér County, Hungary)
Double-headed war hammers differ from the axes in their lack of a blade; there are butts on both sides of the socket. Recently, researchers have considered them weapons (Szücsi, 2014: 126-127). The item from Grave 2014/25 of Előszállás-Öreghegy (Figure 1, 1) can be described with the code V-G-V (Szücsi, 2014). Its length is 13.8 cm, the diameter of the eye is 2.7 × 2.3 cm, the butts are 5 cm and 6 cm, and the weight is 126 g. It can be dated from the middle of the 8th century until the middle of the 9th century (Török et al., 2016: 31).
T-shaped cleaver with butt and hammer axe from Úrhida (Fejér County, Hungary)
The T-shaped cleaver with butt from Grave 87 (Figure 1, 2) can be described with the code 8a-I-I (Szücsi, 2014). Its length is 13.5 cm, the cutting edge is 9 cm, the diameter of the eye is 2.3 × 2.3 cm, the butt is 5 cm, and the weight is 159 g. The form is of Late Antique origin, but it appeared again in the burials of the Carpathian Basin (Szücsi, 2014: 131) in the second half of the Middle Avar period (ca. 670-680). Based on its peculiar shape and on subsequent West European depictions (a miniature in the St. Gallen manuscript from the 9th century and the Bayeux Tapestry from the 11th century), we can consider that it was used as a working tool (Cs. Sós, 1955: 215-216).
Belt accessory from Nagyvenyim (Fejér County, Hungary)
From Grave 1 of Nagyvenyim-Munkácsy Str.-Fűzfa Str. we recovered a multi-layered iron-copper alloy belt set with cell work (cloisonné) consisting of a main strap end, two shield-shaped belt mounts, and eight small strap ends (Szücsi, 2015: 16, 24-25, 51;Török & Kovács, 2015: 66). Two pieces from the set can be seen in Figure 2. There is no evidence of inlays in the cells (Heinrich-Tamáska, 2006: 34). This belt is of the so-called Kölked-Feketekapu type, i.e., a multi-part belt set with cell work. These items can be dated from the second third of the 7th century, and they are most likely local products. The workshop can presumably be localized around Kölked-Feketekapu (Baranya County, Hungary), due to the frequent occurrence of this type in the region and to a grave of a blacksmith equipped with such a belt set (Heinrich-Tamáska, 2006: 22;Kiss, 1996: 213).
Analytical methods and the aims of the study
Figure 1 shows the three iron artefacts on which the metallographic analyses were carried out by ARGUM. The microstructure was studied with a Zeiss AxioImager optical microscope and a Zeiss EvoMa10 scanning electron microscope equipped with EDAX energy dispersive spectroscopy to perform elemental analysis. The investigation focused on defining structural constituents and determining their distribution and grain size. Moreover, the chemical compositions, structures, and shapes of the inclusions were also examined. Another aim of this study is to reveal the production techniques of forging, heating, and cooling. The finds were sampled by cutting and embedding the fragments in epoxy resin.
The examined section surfaces, which are marked on the photographs of the artefacts in Figure 1, were ground and mechanically polished. Diamond paste (3 µm) was used in the final polishing step. After polishing, the samples were etched in a 2% nital solution. A particular concern of this paper is a thorough analysis of a multi-layered belt accessory from the Avar Period (Figure 2). The SEM-EDS investigation was carried out on multidirectional sections: (1) SEM-micrographs were taken from the spot on the narrow side (A-A) of the artefact (marked with 1 in Figure 2); (2) SEM-micrographs were taken of the cross-section marked with the B-B line in Figure 2; and (3) the surface was examined after being polished from the rounded end (C-C) up to the line marked with point 2 in Figure 2. Besides identifying the several metals of which the artefact was made and analysing its microstructure, an additional goal of the analysis was to reconstruct the steps of the production process of these exclusive belt ornaments.
Iron axes
Detailed and complex metallographic studies on such special Avar axes and war hammers are relatively rare; however, some extensive surveys provide information on the essential characteristics of medieval axes and outline the general results of the examinations (Pleiner 1967;Piaskowski 1989). Only one axe from the Avar Period has been analysed until now: an L-shaped cleaver from Grave 125 of Környe. Jerzy Piaskowski established that it had been made from low-phosphorus, low-carbon iron (Piaskowski, 1974: 128). It was later case-hardened, which means that it had been covered with charcoal and then heated to temper the blade (Piaskowski, 1974: 122-123). Thus, the cleaver could be used as a heavy-duty working tool and as a weapon as well. Some interesting samples of Norwegian and Danish axes from the 10th-12th centuries AD are presented by Buchwald (2005: 307-310). These axes are of a common Norwegian type deriving from the late Viking Age (Rygh 1885: Fig. 561). The structure of the examined samples is pearlitic-ferritic with 0.3-0.6% carbon. Inside this structure, fayalite laths in a glass matrix could be examined as slag inclusions. No phosphorus was detected (Buchwald 2005: 308-309). In medieval times, general techniques were applied in making axes, adzes, picks, etc., as demonstrated by Pleiner (2006: 208-209). These technologies for manufacturing, for example, Slavic battle-axes in the 9th century involved overlapping or punching the shaft-hole and scarf-welding the cutting edge.
Double-headed war hammer (DHW, Figure 1, A-A and B-B)
Two samples have been taken from the double-headed war hammer. The mosaic image taken from the whole section (A-A) revealed an extremely inhomogeneous microstructure (Figure 3). The centre area of the section consists of large (170 µm) equiaxed ferrite grains. Towards the edge of the cross-section, the size of the ferrite grains decreases (28 µm); additionally, small pearlitic colonies appear. It can be observed that an increase in carbon content causes a large change in grain size. High carbon content could be found on both sides of the examined section. Near the surface a fully pearlitic and a ferritic-pearlitic microstructure could be observed; moreover, secondary cementite could be detected in the fully pearlitic area. The carbon content varies over a large range (∼0-1.1 wt%) between the fully ferritic area and the parts with secondary cementite and pearlite. However, a sudden change in carbon content cannot be observed between adjacent areas; the ferritic-pearlitic regions show a continuous transition.
This suggests that the raw material was inhomogeneous, mainly with a carburised surface. During the forging process a more or less banded structure was formed. The micrographs show a transformed structure without any trace of plastic deformation. This indicates that the structure was formed from austenite: hot forging was performed at a high temperature while the full cross-section was in the austenitic region of the iron-carbon system. Based on the micrographs, moderate cooling of the war hammer after forging can also be concluded. Near the surface of the A-A section a series of inclusions was observed. The microstructure and the chemical composition, examined by SEM-EDS, suggest that these inclusions arose from the smelting process, although a relatively high amount of Ca was also detected (Figure 4, Table 1). The microstructure of the slag shows a typical glassy matrix in which fayalite (2FeO·SiO2) particles are embedded. Based on the chemical composition and micrographs, the amount of glassy phase is too small for all of the Ca to be present in the glassy phase, so the particles are probably a complex olivine instead. Buchwald & Wivel demonstrated a similar microstructure of a fayalite-glass slag inclusion in a ferritic-pearlitic iron (1998: 75, Fig. 2). In pearlitic-ferritic iron with fayalite-glass inclusions, the more ferritic the structure of the metal, the lower the ratio of the glassy phase in the inclusion, as demonstrated by Buchwald (2005: 162-163, Fig. 160). The microstructures of the inclusions within the examined section fit with Buchwald's observation. Fig. 4 shows a SEM-EDS micrograph of an inclusion in the A-A section of the double-headed war hammer. Light grey particles (probably fayalite-based complex olivine) are embedded in a dark grey glassy matrix. The other sample was taken from the material around the shaft-hole (B-B, point 1). The mosaic OM-image reveals a similar banded, heterogeneous microstructure (Figure 5). The shaft-hole is also strongly corroded; however, a pearlitic band with high carbon content can be observed in the centre of the sample. Small-grained (10 µm) ferritic-pearlitic areas are situated next to this band. The small grain size suggests a higher cooling rate after hot forging than can be supposed on the basis of the examination of the cross-section demonstrated above. The transition from the pearlitic band to the other areas is also smooth due to carbon diffusion. This also proves that the war hammer was made from a single bloom. Inclusion analyses were also carried out during the SEM-EDS examination of the sample from the metal around the shaft-hole. These inclusions have microstructures and chemical compositions (Table 1) similar to those examined in the sample taken from the cross-section of the war hammer.
T-shaped cleaver (TSC, Figure 1, C-C, D-D)
Two samples were also taken from the T-shaped cleaver (C, D). Figure 6 shows the C-C section of the artefact (see Figure 1). In the OM-image, mainly small-grained ferrite and pearlite can be seen. The assumed carbon content is 0.3-0.35 wt% based on the ratio of ferrite and pearlite. The C-C section is almost homogeneous, but a series of inclusions and a thin pearlitic band can be observed. The thin pearlite band mentioned above is not a characteristic feature of the microstructure; the existence of this band could be accidental. The nature of the structure is similar to that seen at the shaft-hole of the double-headed war hammer.
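The 0.3-0.35 wt% carbon estimate quoted above can be roughly reproduced with a simple lever-rule calculation from the pearlite fraction seen in the micrograph. The sketch below is illustrative only: the eutectoid composition (~0.76 wt% C) and the carbon solubility in ferrite (~0.02 wt% C) are taken from the standard iron-carbon diagram, not from the cited analyses.

```python
# Lever-rule sketch: estimate bulk carbon content from the measured pearlite fraction.
# Constants are assumed from the standard Fe-C diagram, not from the paper itself.
C_EUTECTOID = 0.76   # wt% C in pearlite (eutectoid composition)
C_FERRITE = 0.02     # wt% C dissolved in ferrite

def carbon_from_pearlite_fraction(f_pearlite):
    """f_pearlite: area fraction of pearlite estimated from the micrograph (0-1)."""
    return C_FERRITE + f_pearlite * (C_EUTECTOID - C_FERRITE)

# A pearlite fraction of roughly 0.4 gives about 0.32 wt% C,
# consistent with the 0.3-0.35 wt% estimate quoted above.
print(round(carbon_from_pearlite_fraction(0.40), 2))
```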
More bands can be observed in the section. The bands are revealed either by the microstructure (the ratio of ferrite and pearlite, and thus the carbon content, changes slightly) or by a series of elongated inclusions. The dimensions of the inclusions show the direction of the plastic deformation. This shows that the material was hammered in the top-bottom direction (see the C-C section). The same can be concluded from the dimensions of the object. Two kinds of inclusions can be identified by the SEM-EDS examination (Figure 7). One type is an iron-silicate slag (Figure 7, Point 1), probably originating from smelting, with a high Al content and only slightly higher K and Mn contents than in the inclusions from the double-headed war hammer. However, the microstructure of this type of inclusion is similar to that of the inclusions within the material of the war hammer; presumed fayalite-type olivine particles can also be found in a glassy matrix. The inclusions of the other type (Figure 7, Point 2) have a relatively homogeneous structure and a slightly lower Fe content but a higher Si content. This type of inclusion mostly appears near the surface, arranged in a line in the direction of forming (see Fig. 6), while the heterogeneous inclusions are widely dispersed in the material. Based on this observation, it is possible that this kind of homogeneous inclusion originates from smithing. However, this question cannot be settled definitively using only optical microscopy and SEM-EDS. The observed microstructure confirms that in this case a hot forging process was applied as well. However, the small grain size of the ferrite suggests enhanced cooling. Normal air cooling is assumed, as in the case of the war hammer, but the lower thickness of the object could have caused the enhanced cooling. Figure 8 shows a mosaic OM-image taken from the D-D section of the T-shaped cleaver. A similar banded structure can be observed as in the case of the double-headed war hammer. The upper part of the cross-section is mainly built up of ferrite; however, the lower part consists of ferritic-pearlitic and fully pearlitic material. A small-grained ferritic structure (10-12.5 µm), formed by the phase transformation from austenite, is also the result of enhanced cooling. This is also shown by the Widmanstätten ferrite in the ferritic-pearlitic band. In this case, the diffusion zones between the bands with different carbon content are more clearly visible. This suggests that the whole cleaver was made from a single bloom, as in the case of the war hammer. In this section, single-phase inclusions can be found arranged in a line. The chemical compositions of these inclusions are similar to those examined in section C-C. The Al content is slightly higher, but the Fe/Si ratio is much lower (0.31) than in the inclusions from section C-C (0.95 and 1.23).
Hammer axe (HA, Figure 1, E-E)
The E-E section of the hammer axe has also been studied (Figure 9). In the mosaic OM-image all the microstructural parts can be observed, as in the case of the double-headed war hammer. In the upper left corner a large-grained (100-150 µm) ferritic area can be seen; in the lower left corner the dark area is mainly pearlitic. In another separate area, a small-grained ferritic-pearlitic structure (ferrite grain size 25 µm) can also be seen in the micrograph.
This large difference originates from the large difference in carbon content. The microstructure also suggests a hot-forging technology applied in the austenitic temperature range. The strong similarity between the examined artefacts points to a common technological method of production. Further evidence for the similar processing is the banded-like structure. It is also suggested that the whole artefact was made from one bloom and from one smelting process. Series of inclusions can also be discovered in this cross-section. In the fully ferritic area a large group of inclusions can be seen. The microstructure and chemical composition of a typical inclusion, examined by SEM-EDS, are demonstrated in Figure 10. The morphology, the position and the microstructure show that the inclusions of this artefact are also related to the smelting process. A similar distorted slag inclusion with fragmented fayalite laths is demonstrated by Buchwald (2005: 121, Fig. 120). The relatively high Mn-content of these inclusions (7-8 wt%) is a similar characteristic (Török et al. 2015: 232).

Belt accessory (Figure 2, on the left side)

Three examinations, as listed above, were performed on this artefact using SEM-EDS. Examination (1) was practically a local analysis without polishing. It distinguished an iron-rich area with more than 6 wt% copper and lead and 1.54 wt% zinc, and a copper-based alloy area (a spot) with 5.11 wt% zinc and 3.91 wt% tin. Figure 11 shows a SEM-micrograph taken from the spot on the narrow side of the artefact (marked with point 1 in Figure 2); it shows the iron-rich area and the copper-based alloy area. This preliminary study reveals that the belt accessory basically contains two different kinds of metallic materials: an iron-based and a copper-based alloy. The SEM-micrographs made during examination (2) of the cross-section (marked with the B-B line in Figure 2) show that the belt accessory actually consists of several layers and metals. The base material is an iron sheet plated with Sn, like a sandwich-type tin-iron-tin tape (Figure 12). The thickness of the iron sheet between the thin Sn layers is relatively stable (350-400 µm). The material between the copper-based formations is an iron-oxide in a specific structure containing 3-4 wt% copper and 2-3 wt% lead (Figure 14, Point 6). The material of the tin gaps between the brass columns is also an iron-oxide with a number of Cu- and Pb-inclusions (Figure 14, Point 4). Figures 13 and 14 show the sizes of the brass formations and the distances between the formations and between the brass columns (thin gaps with squared corners); the chemical compositions (by EDS) can be found in Table 2 (Figure 13: SEM-micrograph of the copper-based formations; Figure 14: SEM-micrograph of the brass columns and the gap between them). The examination of the cross-section of the belt accessory revealed a varied and detailed material structure, which encouraged us to make further investigations. Before examination (3), the surface of the artefact was polished from the rounded end (C-C) up to the line marked point 2 in Figure 2. A specific structure was found: largely corroded rings made of thin iron wire and additional straight iron wires laid between the rings can be seen. Traces of brazing with brass are visible along the iron wires and rings (the light grey areas) (Figures 15 and 16).
Iron axes

The examined Avar axes have similar metallographic characteristics: they are made of non-alloyed iron with an inhomogeneous structure and have a relatively low average carbon content (0.1-0.5 wt%). The centres of the examined cross-sections have a mainly ferritic structure with a relatively large grain size, and the amount of the lamellar pearlite phase increases towards the sides. For the cleaver with the T-shaped blade, however, we did not find exactly the same structure as in the other two examined artefacts, which could be the result of the inhomogeneity of the iron bloom. All examined axes were presumably forged from a piece of inhomogeneous iron without folding. They were kept at high temperature (in the austenitic region) and the edges were forged multiple times, but not with extreme intensity. The forging process ended with moderate air cooling. The EDS analysis of the examined inclusions indicated iron-silicate based slag without a phosphorus content worth mentioning. Most of the examined inclusions may originate from the smelting process in the bloomery furnace. Some particular characteristics could be detected in the chemical compositions of the analysed inclusions of each artefact: DHW - high Ca-content; TSC - high Al-content, slightly higher K-content and detectable Ti-content; HA - high Mn-content, medium Al-content and the highest Fe/Si ratio. There were two ways of obtaining the shape, as mentioned above: the bending method and the punching method. In the bending method, a long narrow metal sheet was bent at the middle to encompass a roundish metal rod; the socket was formed from the loop obtained in this way, and the bent metal sheet was hammered and shaped into a blade (Pleiner 2006: 209; Szabó 1954: 139). In the punching method, the smith formed the raw material into a suitable shape and size and then punched it at the place of the eye. After the socket had been shaped, he formed the blade and the butt (Pleiner 1962: 229, Fig. 45; Pleiner 2006: 209). The advantage of this method was that there were no joined (welded) segments on the axe which could split later during use. The results of the metallographic examinations showed that the hammer axe, the T-shaped cleaver and the double-headed war hammer investigated in our study were made by the second method.

Belt accessory

The results of the SEM-EDS examination, carried out on multidirectional sections of a piece of the Avar belt set, revealed that these ornaments were made of various kinds of metals. This multi-phase production process, which is unique not only among the Avars' but also among early medieval methods, was quite complicated, in spite of the relatively small size of these artefacts. The assumed production technology is as follows: (1) A 1.5-2 cm wide iron sheet was formed, which was afterwards plated with tin (Figure 17.a). (2) On the iron sheet surface, iron rings were placed in three lines, and thin straight iron wires were laid between the lines. The tin plating made it easier to braze the iron wires to the surface (Figure 17.b). (3) Thin brass sheets were fitted along the iron rings and between the straight iron wires (Figure 17.c). (4) During heating, these brass sheets melted and functioned both as an ornament and as a braze material with good adhesion to the iron. Part of the tin flowed away, and the other part either fused into the material of the brass or remained on the iron sheet as a very thin layer.
Of course, the centuries of corrosion modified the original structure (Figure 17.d).
5,532.8
2017-12-15T00:00:00.000
[ "Materials Science", "History", "Engineering" ]
The Pros and Cons of Online Competitive Gaming: An Evidence-Based Approach to Assessing Young Players' Well-Being

This research addresses a lack of evidence on the positive and negative health outcomes of competitive online gaming and esports, particularly among young people and adolescents. Well-being outcomes, along with mitigation strategies, were measured through a cross-sectional survey of Australian gamers and non-gamers aged between 12 and 24 years, and parents of the 12–17-year-olds surveyed. Adverse health consequences were associated with heavy gaming, more so than light/casual gaming, suggesting that interventions that target moderated engagement could be effective. It provides timely insights into an online gaming landscape that has rapidly evolved over the past decade, and particularly during the COVID-19 pandemic, to include the hyper-connected, highly commercialized and rapidly growing online gaming and esports sector.

[For a review, see Boyle et al. (2011) and Halbrook et al. (2019).] Video game playing is associated with stronger cognitive abilities and certain positive neurological effects (Nuyens et al., 2019), particularly when gaming is balanced with physical activity (Halbrook et al., 2019). Social factors associated with video gaming, based mainly on samples of young people with internet gaming disorders (or high gaming frequency), include poorer social skills, lower educational attainment, behavioral problems, and fewer friends offline, but can also include increased social networks from online friends (e.g., Lobel et al., 2017; Van Den Eijnden et al., 2018). Psychological and physiological effects of participation in online gaming have attracted the attention of policy makers, health practitioners, and the community alike, fuelling a need to provide evidence-based research. A lack of governance in esports potentially compounds well-being issues associated with gaming. Unlike traditional sports, control of the content and accessibility of gaming titles rests largely with game publishers (Hollist, 2015; Hall Wilcox, 2017). This research is particularly important during the current COVID-19 pandemic, which has seen an escalation in online gaming participation due to isolation restrictions (Forrester, 2020). While past research has focussed upon physical health outcomes associated with video gaming among adolescents and young adults, none has examined outcomes associated with different forms of gaming participation in the new gaming era, and their potential differential health outcomes across different consumer segments, including heavy users, casual users and non-gamers. The aim of this research is therefore to address a deficit of empirical knowledge on the well-being outcomes associated with gaming competitively, whether recreationally or intensively, among young people. Unlike traditional sports, in which athletes who spend long hours training are praised for their dedication, spending an excessive amount of time gaming could be considered unhealthy (American Psychiatric Association, 2013; World Health Organisation, 2018). The weight of research, however, suggests an optimal well-being gamer profile reflecting more recreational, or casual, engagement, in contrast to heavy engagement with gaming (e.g., Longman et al., 2009; Halbrook et al., 2019). Moreover, it is possible that this optimal gamer profile may actually lead to better well-being outcomes relative to non-gamers.
We therefore test the following hypotheses in the context of competitive online gaming in our study:
H1: That casual gamers will exhibit less harmful well-being outcomes than heavy gamers.
H2: That non-gamers will exhibit less harmful well-being outcomes than heavy gamers.
H3: That casual gamers will exhibit less harmful well-being outcomes than non-gamers.
To test these hypotheses, we undertook a cross-sectional online survey of young Australian gamers and non-gamers aged between 12 and 24 years and parents of minor participants.

Participants and Procedure

An online survey was conducted via opt-in "research only" online panels. The in-scope population for the survey was residents of the state of Victoria, Australia who are parents of 12-17 year olds, their children aged 12-17 years old (hereafter referred to as minors), and young adults aged 18-24 years old. Panelists were recruited via a blend of print media, online marketing initiatives, direct mail, social media platforms, affiliate partnerships, personal invitations, and a range of other ad-hoc initiatives. Respondents received a nominal incentive for their participation in line with panel guidelines. The survey was conducted from 31 May 2019 to 11 June 2019 and received university ethical clearance. Responses from the parents and minors were compared for consistency; however, we mostly report on competitive online gaming outcomes based on responses collected directly from the young gamers themselves. The final sample included 905 respondents comprising parents of minors (n = 316, 65.2% female), minors aged 12-17 years (n = 184, 37.5% female), and young adults aged 18-24 years (n = 405, 69.6% female).

Measures

The survey captured the extent of gaming behaviors, the contexts in which online competitive games are played, attitudes toward online competitive gaming, general health/lifestyle measures, and demographic information (see Table 1 for a summary of measures). The survey took ∼10 min to complete. The number of panelists who responded to the survey invitation as a proportion of total invitations was 12.4%, which is an acceptable rate for online surveys conducted via non-probability panels (Pennay et al., 2018).

Table 1. Summary of measures.
• Overall life satisfaction: respondents' general life satisfaction, measured using an 11-point, single-item scale ranging from 0 = "completely dissatisfied" to 10 = "completely satisfied" in response to the statement "Thinking about your own life and your personal circumstances, how satisfied are you with your life as a whole?", originally developed by Diener et al. (1985) and well-validated (Lucas and Donnellan, 2012; Jovanović, 2016). Using the reported population average from the VicHealth Indicators Survey (Victoria State Government, 2015, p. 29) for adults (M = 7.80), responses above 7 on the life satisfaction scale were coded as "high," and those below the state average were coded as "low."
• Social connection with others: level of agreement with the statement "I feel connected with others" on a 6-point Likert scale ranging from 1 = "strongly disagree" to 6 = "strongly agree" (Nicholson and O'Halloran, 2019). In the version of the survey completed by children, a pictorial representation of social connection using overlapping circles was used.
• Medical consultation linked to gaming: respondents indicated whether they had ever (within their lifetime) visited a health expert in relation to a health condition they considered to be linked to their gaming (including any visits to a physiotherapist, optometrist, psychologist, or general practice doctor in relation to a physical or mental health issue).
• Physical activity: the number of days in a typical week on which rigorous activity was engaged in for at least 1 h, derived from a single-item physical activity measure (O'Halloran et al., 2020); the types of physical activities in which respondents engaged; and the number of hours spent sitting down on a typical weekday/weekend.
• Soft drink consumption: an ordinal variable ranging from none to more than five cups per day.
• Alcohol consumption: whether an alcoholic drink had been consumed in the 12 months prior; the frequency of heavy drinking (five or more standard drinks in a single session), rated on a 5-point scale ranging from never to daily; and the occurrence of underage drinking, based on the responses given by both parents reporting on their child's drinking habits and children reporting on their own drinking habits.
• Incidence of smoking: frequency of smoking, rated on a 5-point scale ranging from never to daily.
• Trouble sleeping: the incidence of experiencing trouble sleeping, recorded on a 5-point ordinal scale ranging from never to three to four times per week.
• Bullying as a result of gaming: respondents were asked whether they had experienced any bullying, at any point in time, either online (e.g., on social media, during gaming sessions) or in person at school or at work; examples of bullying were provided in the survey. The extent of bullying was dichotomised into high/low levels for those who had experienced bullying, with low frequency defined as "less than once or twice" and high frequency as "more than every few weeks" within the 4 weeks prior to completing the survey.
• Mitigation strategies (parents): a simple binary response to whether each of a list of mitigation strategies was being used (e.g., setting time limits on the child's gaming, ensuring physical activity, encouraging open lines of communication).
• Gaming addiction: the Gaming and Addiction Scale for Adolescents (adapted; Lemmens et al., 2009). Responses were recorded on a scale from 1 = never to 5 = very often with respect to the incidence of various behaviors associated with gaming addiction (e.g., "Have you ever thought about playing a game for most of the day?").
Data Analysis

Respondents were first grouped based on a screening question: "[Has your child/Have you] played an online multiplayer game in the last 3 months?". Those who answered "yes" were coded as "gamers" (minors n = 150, young adults n = 250), and those who answered "no" as "non-gamers" (minors n = 68, young adults n = 155). Only gamers were asked gaming-related questions, while all participants were asked questions about their health and personal demographics. Within the gaming cohort, responses were divided into subgroups based on their reported daily gaming frequency to examine associations between gaming extent and health and well-being. Individuals were classified as light/casual gamers if they reported 1-2 h or less of gaming time on a general weekday, and as heavy/frequent gamers if they reported more than 2 h. Participants who reported they had not played an online competitive game in the previous 6 months were classified as non-gamers. Gamers were also asked to indicate at which time of the day they played. For all gamers, the most common times were between 6 and 10 pm. The second most common time of day to play for minors was between 4 and 6 pm, while for young adults it was between 10 pm and midnight.
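As a concrete illustration of the grouping rules just described, the sketch below shows how respondents might be classified and how life satisfaction might be dichotomised against the VicHealth benchmark. The field names and example records are hypothetical, not taken from the study's dataset, and the coding of the benchmark is a simplified reading of the rule stated above.

```python
# Illustrative sketch of the grouping logic described in the text.
# Field names and example records are hypothetical; thresholds follow the rules
# stated above (<= 2 h/weekday = light/casual, > 2 h = heavy/frequent; life
# satisfaction above 7 coded "high" against the VicHealth adult mean of 7.80).

VICHEALTH_ADULT_MEAN = 7.80  # reported population average (0-10 scale)

def classify_gamer(played_recently: bool, weekday_hours: float) -> str:
    """Assign a respondent to non-gamer, light/casual or heavy/frequent."""
    if not played_recently:
        return "non-gamer"
    return "light/casual" if weekday_hours <= 2 else "heavy/frequent"

def code_life_satisfaction(score: int) -> str:
    """Simplified dichotomisation of the 0-10 score against the benchmark."""
    return "high" if score > 7 else "low"

# Hypothetical respondents
respondents = [
    {"id": 1, "played": True,  "weekday_hours": 1.5, "life_sat": 8},
    {"id": 2, "played": True,  "weekday_hours": 4.0, "life_sat": 6},
    {"id": 3, "played": False, "weekday_hours": 0.0, "life_sat": 7},
]

for r in respondents:
    group = classify_gamer(r["played"], r["weekday_hours"])
    sat = code_life_satisfaction(r["life_sat"])
    print(f"id={r['id']}: group={group}, life satisfaction={sat}")
```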
A mixture of primarily cross-tabulations using chi-square tests for differences in proportions, and t-tests and one-way analysis of variance (ANOVA) for variables measured continuously, was used to determine statistically significant differences in the health and well-being outcomes of the different gamer groups.

Gaming Frequency and Differences Between Cohorts

We conducted quantitative comparisons within each of the cohorts of minors and young adults based on their gaming classification: light/casual gamers, heavy/frequent gamers and non-gamers. Overall life satisfaction, social connection, physical activity, diet, sleep quality, and other activities related to school or work were assessed. In the minor cohort (n = 184), 63% (n = 115) were light/casual gamers, 19% (n = 35) were heavy/frequent gamers and 18% (n = 34) were non-gamers. In the young adult cohort (n = 405), 48% (n = 194) were light/casual gamers, 14% (n = 56) were heavy/frequent gamers and 38% (n = 155) were non-gamers (see Figure 1).

Overall Life Satisfaction and Social Connection

The levels of overall life satisfaction were first compared between gamer types using an ANOVA for the differences in means between groups. For minors, the results are inconclusive, as the assumption of equal variances is not upheld in this cohort: Levene's test indicated unequal variances [F(2, N = 177) = 5.17, p < 0.01], although Welch's test, which is more robust to unequal variances, shows no significant difference between groups, F(2, N = 58.2) = 1.33, p = 0.21. For young adults, Levene's test shows equal variances, F(2, N = 400) = 0.71, p = 0.49, and the ANOVA reveals a significant difference in life satisfaction between gamer types, F(2, N = 400) = 1.24, p < 0.05. Heavy/frequent gamers report the highest level of overall life satisfaction among young adults, scoring an average of 7.46 (SD = 2.08) on the 11-point scale, compared to 6.71 (SD = 1.83) for casual/light gamers and 6.94 (SD = 1.88) for non-gamers. The reported population average from the VicHealth Indicators Survey for adults is 7.80 (Victoria State Government, 2015, p. 29). Using this estimated population average as a basis for dichotomising the responses on overall life satisfaction into "high" and "low" does not lead to a substantially different interpretation of the data for minors, although it does permit a more robust statistical test, as the minimum required cell counts in the contingency table are exceeded (zero cells have an expected count <5 in both the minor and young adult data). Among minors, the proportions of those reporting a low/high level of life satisfaction did not differ significantly between gaming groups, χ²(2, N = 184) = 3.60, p = 0.16. For young adults, the interpretation of the dichotomised data does not change relative to our interpretation of the full-scale data. We find heavy/frequent gamers are more likely to be satisfied in life, with 59% reporting high life satisfaction compared to 32 and 41% of casual/light gamers and non-gamers respectively, with the difference in these proportions being statistically significant, χ²(2, N = 405) = 0.22, p < 0.05 (see Figure 2).
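The testing workflow used throughout this section (Levene's test for equal variances, a one-way ANOVA or Welch-type test as appropriate, and a chi-square test on the dichotomised outcome) can be illustrated as follows. The data are randomly generated stand-ins, not the survey responses, and the group sizes are only loosely matched to those reported above.

```python
# Illustrative analysis pipeline: Levene's test, one-way ANOVA (or a Welch-type
# fallback), and a chi-square test on a dichotomised outcome. All data are
# simulated stand-ins, not the survey data analysed in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
light = rng.normal(6.7, 1.8, 194)   # simulated life satisfaction scores (0-10)
heavy = rng.normal(7.5, 2.1, 56)
non   = rng.normal(6.9, 1.9, 155)

# 1) Check the equal-variance assumption.
lev_stat, lev_p = stats.levene(light, heavy, non)

# 2) Classic one-way ANOVA if variances look equal; otherwise a Welch t-test
#    between two groups stands in here (Welch's one-way ANOVA is not provided
#    directly by scipy.stats).
if lev_p >= 0.05:
    f_stat, p_val = stats.f_oneway(light, heavy, non)
    test_used = "one-way ANOVA"
else:
    f_stat, p_val = stats.ttest_ind(heavy, light, equal_var=False)
    test_used = "Welch t-test (heavy vs light)"

# 3) Chi-square test on the dichotomised ("high"/"low") outcome; counts are
#    illustrative only.
high_low_counts = np.array([[62, 132],   # light: high, low
                            [33, 23],    # heavy
                            [64, 91]])   # non-gamers
chi2, chi_p, dof, _ = stats.chi2_contingency(high_low_counts)

print(f"Levene p = {lev_p:.3f}; {test_used}: stat = {f_stat:.2f}, p = {p_val:.3f}")
print(f"chi-square({dof}) = {chi2:.2f}, p = {chi_p:.3f}")
```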
The levels of social connection were compared between gamer types using an ANOVA and t-tests for differences in means between groups. No significant differences in the mean levels of social connection between gamer types were found in either cohort. The level of social connection reported by minors averaged 4.73 (SD = 0.92) for light/casual gamers, 4.42 (SD = 1.37) for heavy/frequent gamers and 4.52 (SD = 1.25) for non-gamers on a 6-point scale; these means did not differ significantly, F(2, N = 179) = 1.26, p = 0.29. The mean levels of social connection reported by young adults were 4.34 (SD = 1.03) for light/casual gamers, 4.34 (SD = 1.39) for heavy/frequent gamers and 4.33 (SD = 1.07) for non-gamers on the 6-point scale; these means also did not differ significantly, F(2, N = 401) = 1.26, p = 0.99.

Physical Activity

Using a one-way ANOVA, no significant differences were found between gamer types in either cohort with respect to physically active days. The number of physically active days reported by minors was on average about 3 days, F(2, N = 144) = 0.76, p = 0.47. The number of physically active days reported by young adults was also on average about 3 days, F(2, N = 326) = 1.28, p = 0.28. In relation to minors' inactivity, however, there were significant differences in the average hours spent sitting down, with heavy/frequent gamers sitting on average for 3.84 h (SD = 1.88) on weekends, compared to light/casual gamers and non-gamers who reported sitting for 2.95 and 2.85 h (SD = 1.62 and SD = 1.28) respectively, F(2, N = 159) = 5.02, p < 0.001. The differences in inactivity for minors on weekdays were also significant, though less so than on weekends, with heavy/frequent gamers sitting on average for 3.66 h (SD = 1.04), compared to light/casual gamers and non-gamers who reported sitting for 2.95 and 3.15 h (SD = 1.27 and SD = 1.26) respectively, F(2, N = 159) = 5.02, p < 0.05. For the young adult cohort, there were no significant differences in the average number of hours spent sitting on either weekdays or weekends, with F(2, N = 395) = 0.16, p = 0.98 for weekdays, and F(2, N = 392) = 0.14, p = 0.77 for weekends (Figure 3).

Soft Drink, Alcohol Consumption, and Smoking

In the young adult cohort, heavy/frequent gamers were more likely to be heavy consumers of soft drink compared to both light/casual and non-gamers, with 64% of heavy/frequent gamers reporting drinking more than 1 cup of soft drink per day, compared to 47 and 39% in the casual/light and non-gamer cohorts respectively. The differences in these proportions amongst minors approached statistical significance, χ²(2, N = 182) = 5.02, p = 0.08. This is driven by the high percentage (74%) of non-gamers who report drinking <1 cup of soft drink per day; however, the difference between the two gaming groups is minimal, as summarized in Figure 4. Among young adults, heavy/frequent gamers reported a significantly higher frequency of engaging in harmful drinking, with 60% reporting drinking more than five standard drinks on a more than monthly basis, compared to 50 and 40% in the casual/light and non-gamer groups respectively. The differences in these proportions were statistically significant, χ²(2, N = 345) = 6.45, p < 0.05. For minors, the incidence of alcohol consumption was too low to conduct any robust statistical tests between gaming groups, so we conclude it is unlikely that there is an association between gaming and alcohol consumption in minors. We checked for consistency between the minors' reports and their parents' reports with respect to their children's smoking habits and alcohol consumption. We must consider that these measures are self-reported, which may affect the accuracy of this measurement.
From our sample, 10 minors reported having used a tobacco product at least once, with 14% of heavy/frequent gamers reporting having smoked compared to only 4% of casual/light gamers; no non-gamers reported smoking. As the difference in these proportions is statistically significant, χ²(2, N = 183) = 7.49, p < 0.05, further research into the incidence of smoking among minors who are heavy/frequent gamers is warranted. Our data further suggest that the incidence of smoking increases into adulthood, as 48% of heavy/frequent gamers report having smoked tobacco compared to 28% of casual/light gamers and 22% of non-gamers, χ²(2, N = 403) = 13.7, p < 0.05.

Sleep Quality

The levels of trouble sleeping were compared between gamer types using an ANOVA for the differences in means between groups. The assumption of homoscedasticity is satisfied for both cohorts. For minors, there are differences in reported sleep quality depending on gaming extent, F(2, N = 179) = 3.67, p < 0.05. Among minors, heavy/frequent gamers report the most trouble sleeping, with a mean of 3.63 (SD = 2.08), which corresponds to trouble sleeping on an almost weekly basis. For casual/light gamers the mean value was 2.74 (SD = 1.80) and for non-gamers it was 2.52 (SD = 1.95), both corresponding to experiencing trouble sleeping about once a month. No significant differences in sleep emerged for the young adult group, with the average level reported corresponding to experiencing some trouble sleeping about once every 1-2 weeks.

Bullying Related to Online Gaming

The reported occurrence of bullying among minors reveals that 16.9% have experienced some form of bullying related to online gaming. This figure is 15.1% among young adults. The frequency of bullying was compared between gamer types using an ANOVA for the differences in means between groups. The assumption of homoscedasticity is satisfied for both cohorts. For minors, heavy/frequent gamers reported being bullied on average "every few weeks", M = 3 (SD = 1.18), compared to casual/light gamers reporting being bullied closer to only "once or twice", M = 1.90 (SD = 0.94). These differences are statistically significant in the cohort of minors, F(2, N = 22) = 5.71, p < 0.05. The frequency of bullying between gamer groups was not significantly different in young adults, F(2, N = 53) = 0.56, p = 0.46. Non-gamers were not asked whether they had experienced bullying related to online competitive gaming.

Parental Mitigation Strategies

Parents were asked about mitigation strategies they might be using to reduce the extent of their child's gaming. The response format was a simple binary response to whether each of a list of mitigation strategies was being used (e.g., the setting of time limits on their child's gaming, ensuring physical activity, encouraging open lines of communication). We conducted paired-samples analyses using data from only those parents for whom a corresponding minor could be matched in the data (n = 184). Some mitigation strategies used by parents were found to be significantly associated with less harmful reported behaviors by minors. These include parents setting time limits, which was associated with lower reported daily gaming of between 1 and 2 h by minors whose parents set limits, compared to between 3 and 4 h per day by those whose parents do not, F(1, 132) = 10.12, p < 0.05. Parents ensuring their children are physically active was associated with minors reporting a greater number of days on which they are physically active, F(1, 125) = 7.92, p < 0.05.
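A sketch of the matched parent-minor comparison just described is given below. The column names and example rows are hypothetical, and the use of pandas/scipy is an assumption for illustration, not the authors' actual analysis code; the logic simply compares minors' reported weekday gaming hours between families that do and do not set time limits.

```python
# Illustrative sketch of the paired parent-minor comparison described above.
# Column names and example rows are hypothetical.
import pandas as pd
from scipy import stats

pairs = pd.DataFrame({
    "parent_sets_time_limits": [1, 1, 1, 0, 0, 0, 1, 0],
    "minor_weekday_hours":     [1.5, 2.0, 1.0, 3.5, 4.0, 3.0, 2.5, 4.5],
})

with_limits = pairs.loc[pairs["parent_sets_time_limits"] == 1, "minor_weekday_hours"]
without_limits = pairs.loc[pairs["parent_sets_time_limits"] == 0, "minor_weekday_hours"]

# Two-group one-way ANOVA, analogous to the F(1, .) tests reported in the text.
f_stat, p_val = stats.f_oneway(with_limits, without_limits)
print(f"with limits: {with_limits.mean():.1f} h, without: {without_limits.mean():.1f} h")
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")
```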
Problematic Gaming

With only a few exceptions within the young adult cohort, heavy/frequent gamers reported significantly higher levels of problematic gaming compared to light/casual gamers. For example, amongst gamers who "[...] thought about playing a game for most of the day," 24% were heavy/frequent gamers while only 10% were light/casual gamers. This difference in proportions was statistically significant, χ²(1, N = 136) = 17.66, p < 0.001. A similar pattern persists through all of the gaming and addiction scale items in the minor cohort. The exceptions within the young adult cohort are the items: "Sometimes, you play games to forget about real life?", "Others have unsuccessfully tried to reduce your game use?", "Have you felt bad when you were unable to play?" and "Have you chosen to spend more time gaming instead of spending time with others in person?". For these items, there were no significant differences between gamer types (heavy/frequent vs light/casual gamers) in the young adult cohort. As a more general measure of health-related impacts of online competitive gaming, we also asked gaming respondents to indicate whether they had ever (within their lifetime) visited a health expert in relation to a health condition they considered to be linked to their gaming. Amongst children, 10 individuals, representing 7.5% of the total population of children, reported having sought medical advice related to their gaming, 80% of whom were heavy/frequent gamers compared to 20% light/casual gamers. Of those who did not indicate ever having sought medical advice related to their gaming, 43.1% were heavy/frequent gamers compared to 56.9% light/casual gamers, and the differences in these proportions are statistically significant for children (χ² = 5.07, df = 1, p < 0.05). In the young adult cohort, 19 individuals, representing 8.7% of the total population, indicated having sought medical advice for a health concern connected to their gaming, 73.7% of whom were heavy/frequent gamers compared to 26.3% light/casual gamers. Of those who had not sought health advice, the proportions are more evenly distributed, with 47.2% heavy/frequent gamers and 52.8% light/casual gamers. The differences in these proportions are statistically significant in the young adult cohort (χ² = 4.85, df = 1, p < 0.05).

SUMMARY

Based on these results, both hypotheses 1 and 2 were supported. Casual gamers and non-gamers reported less harmful well-being outcomes (e.g., less sedentary time on weekends for minors, lower soft drink and alcohol consumption, lower proportions of smokers, and less reported trouble sleeping) compared to heavy gamers. On the other hand, hypothesis 3 was not supported, as no notable differences in health and well-being outcomes between casual gamers and non-gamers emerged.

DISCUSSION

The aims of this research were to gain insight into the positive and negative well-being outcomes associated with online competitive gaming among young players, in addition to identifying suitable mitigation strategies. Overall, our findings suggest that gaming engagement in moderation is preferable for health across the minor and young adult cohorts. In fact, casual/light gamers reported similar health and well-being outcomes to non-gamers. There are similarities and differences between the two cohorts (minors and young adults) with respect to the specific associations between heavy gaming and adverse well-being outcomes.
In both cohorts, there is a higher likelihood for heavy gamers to be heavy consumers of soft drink, to have smoked at least once, and to have seen a health professional for gaming-related health problems. Amongst young adults, heavy gamers were more likely to engage in heavy drinking, which may be due to their exposure to higher rates of alcohol advertising (Kelly and Van der Leij, 2020). In regard to bullying, the only significant difference was found in the minor cohort, where heavy gamers reported more frequent bullying than light gamers. Another surprising finding was that in the young adult cohort, heavy gamers had significantly higher life satisfaction. Minors who were heavy gamers were more likely to have difficulties sleeping and spent the most time sitting, whilst minors who were light gamers spent the least time sitting (even less than non-gamers). For both cohorts, our findings also reveal that non-gamers were not necessarily more physically active than gamers. This suggests that non-gamers likely spend their sedentary time on other activities. Results indicate that parental mitigation strategies were effective in relation to determining whether a minor was a casual or heavy gamer. The monitoring strategy of parents setting time limits on their children's gaming is shown to be associated with a reduced number of hours gaming on weekdays according to responses given by both parents and minors, but was not associated with reducing the number of hours minors gamed on weekends. Specifically, minors whose parents impose time limits report closer to 1-2 h of weekday gaming vs. 3-4 h for minors whose parents did not impose limits. In addition, minors whose parents make sure they do physical activity report significantly more engagement in physical activity. It should be noted that there are some limitations to this study, including the self-reported and cross-sectional nature of the survey method, which requires a high degree of self-awareness and insight that young people may not necessarily have well-developed. The path from casual gaming to heavy gaming is also not identified due to the cross-sectional design, but would be of interest for future research. Measurement of casual gaming may also underestimate health outcomes associated with gaming, as some of these outcomes are related more generally to screen time, which may not include gaming in isolation. It may therefore be important in future research to consider other screen-based activities in association with online competitive gaming. The small sample size across the cohorts and the sub-categorization into gamer types may have diminished the power to conduct robust analyses for some variables. As this research was only intended as an initial snapshot of the current gaming landscape, several items were adapted from more extensive and validated scales due to length restrictions. Future research is warranted, extending these findings, to gain a causal understanding of the relationships identified, such as the role of mitigation strategies, parental monitoring, parental engagement in gaming and the optimal ways to prevent heavy gaming tendencies, whilst not completely restricting gaming engagement. This is particularly important for those young people who have experienced lockdown during the COVID-19 pandemic, when rates of online gaming have increased due to the limited number of social and recreation activities external to the home (King et al., 2014; Victoria State Government, 2019).
Our findings in relation to competitive online gaming align with earlier research which investigates the protective role of parental media monitoring more broadly (Padilla-Walker et al., 2018). Examination of structural aspects of the popular games identified is also needed to gain insight into possible cues driving heavier gaming behavior. As younger cohorts of millennial parents emerge, who are themselves avid gamers, it will be critical to educate them about the influence they may have upon their offspring. Further survey evidence could examine heavy gaming cohorts in a more granular way, given the negative health outcomes that have been revealed as associated with this cohort. Positive outcomes of casual gaming, including what appears to be social connection, self-esteem, and well-being, need to be emphasized and examined further, perhaps through observation and in-depth interview studies of gamers, in addition to cross-cultural replication. Another avenue for future research may concern how those who engage in a casual/light extent of gaming balance other screen and non-screen based activities, compared to frequent/heavy gamers and those who do not engage with gaming but may engage in other screen-based activities (e.g., excessive social media usage). While previous research has found health outcomes associated with gaming behaviors (e.g., Rosen et al., 2014; Boyle et al., 2016), there is limited research which examines lifestyle and well-being associated with gaming, especially in children. In addition, the evolving, highly connected and competitive gaming context associated with esports also warrants examination, along with the significant "passive" engagement through streaming.

CONCLUSION

This study provides useful initial insight into the online gaming behaviors and associated well-being outcomes of gaming among minors and young adults. It addresses a gap in knowledge of gaming well-being outcomes in the current age of gaming-mediated entertainment and socialization, and illuminates both behaviors and protective health strategies that ensure positive engagement in gaming. Given the burgeoning participation, commercial growth and professionalization of the gaming industry, this research is timely and relevant in providing empirical evidence of both the positive and adverse health associations with gaming, and typical gaming behaviors. Our results demonstrate differential trends associated with gaming frequency and age cohorts, and this study is among the first to examine the health, lifestyle and social outcomes of online gaming among young gamers. Given the growing participation in playing and streaming competitive online gaming and esports, particularly during the COVID-19 pandemic, further research is needed to monitor gaming behavior and well-being outcomes and to build on research such as this to establish causative links between gaming behavior and its outcomes on health and well-being.

DATA AVAILABILITY STATEMENT

Raw data will be supplied by the authors upon request.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the University of Queensland Human Ethics Clearance. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
6,757.8
2021-05-10T00:00:00.000
[ "Economics", "Law" ]
Multi-material micro-electromechanical fibers with bendable functional domains

The integration of increasingly complex functionalities within thermally drawn multi-material fibers is heralding a novel path towards advanced soft electronics and smart fabrics. Fibers capable of electronic, optoelectronic, piezoelectric or energy harvesting functions are created by assembling new materials in intimate contact within increasingly complex architectures. Thus far, however, the opportunities associated with the integration of cantilever-like structures with freely moving functional domains within multi-material fibers have not been explored. Exploited extensively in micro-electromechanical system (MEMS) technology, electro-mechanical transduction from moving and bendable domains is used in a myriad of applications. In this article we demonstrate the thermal drawing of micro-electromechanical fibers (MEMF) that can detect and localize pressure with high accuracy along their entire length. This ability results from an original cantilever-like design where a freestanding electrically conductive polymer composite film bends under an applied pressure. As it comes into contact with another conducting domain, placed at a prescribed position in the fiber cross-section, an electrical signal is generated. We show that by a judicious choice of materials and electrical connectivity, this signal can be uniquely related to a position along the fiber axis. We establish a model that predicts the position of a local touch from the measurement of currents generated in the 1D MEMF device, and demonstrate an excellent agreement with the experimental data. This ability to detect and localize touch over large areas, curved surfaces and textiles holds significant opportunities in robotics and prosthetics, flexible electronic interfaces, and medical textiles.

Introduction

The recent development of fiber processing technologies has enabled the fabrication of fibrous structures with increasingly complex functionalities. In particular, the thermal drawing process used to fabricate optical fibers has experienced a series of breakthroughs that have extended the range of cross-sectional architectures and materials that can be integrated in fibers [1][2][3]. Thermal drawing traditionally consists of fabricating a macroscopic version of the targeted fiber out of thermoplastic or glassy materials that can be plastically deformed at high viscosities over a relatively large temperature window. The preform is fabricated at the macroscopic scale, which enables us to assemble materials and realize architectures with submillimeter feature sizes in straightforward ways. Fiber pulling results in a drastic reduction of cross-sectional dimensions, bringing materials together at the microscopic scale, while expanding uniformly along kilometers of fiber length (see figures 1(a) and (b)). The first significant breakthroughs in optical fiber processing, which contrasted with conventional step-index solid core silica fibers, were the fabrication of 1D and 2D photonic crystal fibers [4][5][6][7]. This marked the beginning of a deeper interest, not only in the functionality of the fabricated fibers and in particular the engineering of their optical properties, but also in the materials and the physical processes at play behind the fabrication technique. Photonic crystal fibers integrate hollow micro-channels with diameters down to sub-micrometer feature sizes that are reproduced uniformly along kilometers of fiber length.
Bragg mirror fibers exhibit a periodic structure of concentric layers of high-refractive-index glasses and low-index polymers with thicknesses down to a few tens of nanometers, again reproduced along extended lengths of fibers. The ability to fabricate such remarkable structures with a simple and low-cost process stems in large part from a controlled interplay between viscosity and surface tension. The understanding of the viscous flow and surface science at play in this approach has also led to the design and fabrication of fibers with new materials. It was shown in particular that crystalline materials, if encapsulated well within cavities with high-viscosity boundaries, can be integrated and flow as a low-viscosity melt during the fiber pulling down to the micrometer scale before capillary breakup [8]. This has been exploited in optics for the fabrication of solid core polycrystalline semiconductor fibers [9,10]. It has also led to the design of optoelectronic fibers that can not only guide light but also exhibit a variety of novel functionalities, such as optical [11][12][13][14], heat [15] or chemical sensing [16,17], piezoelectric actuation [18,19], surface emitting fiber lasers [20,21], advanced optical probes [22,23] or field effect and phase change based devices [24,25]. Polymer fibers with electrically conducting domains can also be used in optical imaging systems [26], or in purely electronic functions, such as touch sensing [27,28] or capacitors [29]. So far, however, all novel fiber designs have relied on materials organized in intimate contact to deliver a specific functionality. The concept of electromechanical transduction from freely moving functional domains within multi-material fibers has not been exploited. This approach is used in a myriad of configurations in micro-electromechanical system (MEMS) technology, such as cantilever-based devices for micro-sensors and actuators. The ability to integrate such advanced systems within extended lengths of flexible fibers can bring a breadth of novel opportunities for functional fibers and fabrics. Here, we demonstrate for the first time a micro-electromechanical fiber (MEMF) device that exploits the bending of a freestanding electrically conductive polymer sheet (figure 1). Under mechanical pressure, the conducting sheet can bend and be brought in contact with another electrically conducting domain, generating an electrical signal that can reveal the pressure. We show that such fibers enable the detection and localization of a pressure point along the entire fiber length with sub-millimeter resolution. Beyond the simplicity and scalability of the fabrication process, MEMF devices are the first 1D systems that can sense and localize pressure and touch without the need for 2D grids, at very low energy consumption, and with such a high resolution. This paves the way towards novel advanced fibers and fabrics capable of functionalizing large-area surfaces and textiles with pressure mapping capabilities.

Preform and fiber fabrication

The fabrication approach and fiber architecture of the MEMF device are shown in figure 1. We start by machining a thermoplastic plate, here polysulfone (PSU), into an L-shaped cross-sectional structure (figure 1(a)). For the conducting material that will deliver the desired electronic function, we choose a carbon-black-loaded polycarbonate (CPC) composite.
CPC has been exploited for its electrical conductivity, and in particular its linear resistance, and for its compatibility with the thermal drawing process [30]. Indeed, the thermoplastic matrix ensures compatibility with the thermal drawing process at a glass transition temperature close to that of PSU, while the carbon black filler provides a percolated path for sufficient electrical conductivity. A CPC bus is placed on the long edge of the L-shaped PSU, while a thin CPC sheet is positioned above it on the short edge, as shown in figure 1(a). A Teflon plate is at the same time machined and positioned so as to support both CPC domains during their hot pressing against the PSU construct to fabricate the preform. Hot pressing is performed in a vacuum at a temperature of 220 °C in a specially designed laboratory press (Lauffer Pressen UVL 5.0, Maschinenfabrik Lauffer GmbH & Co. KG, Germany). After the preform is consolidated, the Teflon part is mechanically removed (figure 1(a)). The assembly is subsequently thermally drawn in a custom-made draw tower at a set temperature of 260 °C that enables the co-drawing of both PSU and CPC (figure 1(b)). A feeding speed of 1 mm min−1 and a drawing speed between 0.1 m min−1 and 1 m min−1 were used. As shown in figures 1(b) and (c), and as discussed below, the thermal drawing results in an extended length of flexible ribbons that maintain the exact cross-sectional shape of the initial preform.

Fiber structure characterization

Scanning electron microscopy (SEM) was used to image the fiber cross-sectional architectures. The sample was coated with a 10 nm carbon film before being transferred into the vacuum chamber. The SEM images were taken with a Zeiss Merlin field emission SEM (Zeiss, Göttingen, Germany) equipped with a GEMINI II column operating at 2.0 kV with a probe current of 150 pA.

Electro-mechanical fiber response

The fiber response to touch was characterized by measuring the electrical response to the local application of pressure. An electromechanical universal testing machine (UTS) from Walter + Bai AG (Series LFM-125 kN) was used to vertically move a flat rod (4 mm wide) that was placed in contact with the top side of the fiber. The testing machine enabled steps as small as 1 µm for this vertical motion. The CPC bus and sheet were individually connected to a metal wire at both ends of the 85 cm long fiber piece using silver paint. An Agilent E3612A DC power supply was used to apply a voltage of 10 V, and the currents i_0 and i_L were measured, as shown in figure 2 and explained in more detail below, with a Keithley 2450 Sourcemeter and a Keithley 6517B Electrometer. The noise current was measured by recording over one hour the highest current measured between i_0 and i_L for a pressure point close to the contact, at a DC applied voltage of 10 V. From a statistical analysis we could extract the standard deviation of the current fluctuations, which we defined as our noise current. For the conservative assessment of the device spatial resolution we derive below, the highest noise we could measure was around i_N = 0.1 nA. To measure the time response of the device and obtain a first assessment of its robustness under many bending cycles, we adapted a Dynamic Mechanical Analysis set-up (TA Instruments DMA Q800) to apply a pressure at 200 Hz on the fiber. We used the DMA in compression mode with a 25 µm amplitude, fixing the fiber between two soft PDMS plates of 1 mm in thickness.
The fiber was connected to an oscilloscope with a 1 V DC voltage applied.

Fiber piano device

To demonstrate the ability of the fiber to be employed as a 1D touch sensor, a 'piano device' was fabricated by connecting a fiber piece to a voltage divider circuit to extract the electrical resistance, and hence the position of the touch. This circuit was combined with an Arduino Leonardo microcontroller, which was programmed to match different positions of touch on the fiber to pitches played by a piezo buzzer. The piece of music 'Ode To Joy' (Beethoven) was played and a video recorded (see supplemental information (stacks.iop.org/JPhysD/50/144001/mmedia)).

Viscosity measurement

The viscosity of CPC was measured with a Rheometer AR2000 (TA Instruments) in a flow procedure, with the shear rate set at 2.5 s−1 and the temperature varying from 220 °C to 280 °C.

Fiber fabrication

In figures 1(c) and (d) we show that an extended length of highly flexible ribbons is produced with a cross-section that remarkably maintains the initial architecture of the preform. All dimensions are rescaled by the same draw-down ratio α, a quantity defined as the ratio of the width (or height) of the initial preform to the width (or height) of the ribbon. Note that, by a simple mass conservation principle, the ribbon length scales as α² times the length of the initial preform, highlighting the scalability of the process. The particular microstructure of the ribbon cross-section is shown in figure 1(d) with an SEM micrograph of the MEMF cross-section. The two CPC domains are clearly visible, separated by a micro-cavity that allows the thin upper CPC sheet to bend when pressure is applied and to recover its initial horizontal position upon removal of the mechanical excitation. This mechanical behavior enables us in turn to exploit such an architecture for pressure sensing. As the CPC layer touches the CPC bus underneath, an electrical connection occurs and a current can flow from one CPC domain to the other, signaling the local pressure.

Pressure localization

In figures 2(a) and (b) we show a schematic of such a deformation as well as the equivalent electrical circuit under consideration. When a potential difference is applied at one fiber end, one quickly realizes that the current generated will depend upon the position along the ribbon axis (x-axis in the schematic). Indeed, the CPC film and bus act as linear resistors, and the further away from the applied potential, the higher the equivalent resistance of the circuit. If the potential is applied at a position x = 0, as shown in figure 2(b), the resistances of the CPC top film, R_f(x), and of the bottom bus, R_b(x), are simply given by R_f(x) = ρ_CPC·x/S_f and R_b(x) = ρ_CPC·x/S_b, where ρ_CPC is the resistivity of CPC, which was measured to be quite uniform along the fiber length and equal to around 1 Ω·m, and S_f and S_b are the cross-sectional surface areas of the CPC film and bus respectively. Note that we consider in this article that the width of the pressure applied along the fiber axis is very small compared to the fiber length, so that the position x of an applied pressure is well defined. This measurement alone would not, however, be sufficient to extract both the presence and the position of any pressure applied to the electro-mechanical ribbon. Depending on the pressure intensity, the contact resistance R_c between the CPC film and bus can vary. We hence propose another circuit configuration that enables us to measure two different currents from which the position can be specified regardless of the applied pressure.
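To make the draw-down scaling and the linear-resistor picture concrete, the short sketch below evaluates both relations numerically. Only the resistivity (about 1 Ω·m) is taken from the text; the preform and fiber dimensions and the CPC cross-sections are illustrative assumptions.

```python
# Illustrative numbers for the draw-down scaling and the linear-resistor model.
# Only rho_cpc (~1 ohm·m) comes from the text; the geometric values are assumed.

rho_cpc = 1.0          # CPC resistivity, ohm·m (measured value quoted in the text)
preform_width = 25e-3  # m, assumed preform width
fiber_width = 0.8e-3   # m, assumed ribbon width
preform_length = 0.2   # m, assumed preform length

alpha = preform_width / fiber_width          # draw-down ratio
fiber_length = alpha**2 * preform_length     # mass conservation: length scales as alpha^2

S_film = 0.8e-3 * 20e-6   # m^2, assumed cross-section of the thin CPC film
S_bus = 0.3e-3 * 0.3e-3   # m^2, assumed cross-section of the CPC bus

# Resistance of each domain between the contact point at x and the end at x = 0:
# R_f(x) = rho * x / S_film, R_b(x) = rho * x / S_bus.
x = 0.4  # m, example contact position
R_f = rho_cpc * x / S_film
R_b = rho_cpc * x / S_bus

print(f"alpha = {alpha:.0f}, fiber length ~ {fiber_length:.0f} m")
print(f"R_f(0.4 m) ~ {R_f/1e6:.0f} Mohm, R_b(0.4 m) ~ {R_b/1e6:.1f} Mohm")
```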
In figure 2(c), we show the equivalent circuit for this approach, where we add a connection to the CPC bus at the other extremity from the applied voltage. We can measure independently the two currents flowing in parallel, i_0(x) and i_L(x). Taking their ratio β(x) = i_0(x)/i_L(x), which for the linear resistances above reduces to R_b(L − x)/R_b(x) = (L − x)/x, yields a quantity that depends only on the contact position and not on the contact resistance. To verify our reasoning that the ratio β is indeed independent of the applied pressure, we plotted in the graph of figure 2(c) the measured i_L current and the ratio β as a function of the position of the probe that pushes down on the ribbon. At a position of 0 µm, the pressure is just high enough for the two CPC domains to touch each other. This lower limit of pressure sensing corresponds to a force of around 0.3 N, or a pressure of around 50 kPa, considering a surface area of 4 mm (probe width) times 1.5 mm (fiber width). As the probe is brought down and its position increases from 0 to 4 µm, a higher pressure results and hence a lower R_c, increasing the current i_L(x, R_c), as seen in the graph. Measuring i_0(x, R_c) at the same time and plotting the ratio β shows, however, that this ratio remains unchanged as the pressure is increased. The applied pressure, knowing the probe size, can then be evaluated from a calibration with the respective R_c. This can be done over a narrow band of applied pressure, between around 0.3 N and 0.6 N for the particular fiber tested, as R_c saturates quickly with increasing applied force (see figure 2(c)).

Spatial resolution

To assess the spatial resolution of our device, we adopt a very conservative approach and define the noise current as the highest noise measured (worst-case scenario), i_N = 0.1 nA, and consider this the maximum fluctuation for both i_0 and i_L for all the measurements made at each pressure position. The uncertainty over the ratio β is again assessed conservatively by considering that it lies between the extreme values obtained when i_0 and i_L are each shifted by ±i_N. Considering the maximum noise value (i_N) and the data collected for i_L at every position x, we find the value of ∆x to be always below 0.5 mm. This conservative evaluation of the resolution can be experimentally verified by recording the currents over an extended period of time (10 min in this case) for two positions of the probe spaced by a distance above this conservative value, say 0.75 mm. In figure 3(a) we plotted the histogram of such a measurement, which shows the number of counts for a certain value of the current versus the position of the pressure applied. The standard deviation of each measurement gives the noise of the system. Clearly, the distance between the average currents for the two different positions is much greater (around 40 times greater) than their standard deviation, meaning that the two positions can be separated. This means that the position of the center of an excitation can be known with such precision if the width of this excitation is very small compared to the fiber length, or if the type of excitation is known (such as the touch of a finger, see the discussion part below). If the excitation has a large unknown width, the system can be adapted to measure both the center position and width of the excitation, very similarly to the reconstruction of two different pressure points, as we discuss below. Note also that simple noise management techniques could lead to even better resolutions, potentially in the sub-hundred-micrometer range.
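A sketch of how a touch position could be recovered from the two measured currents, together with the conservative resolution estimate described above, is given below. The current values are hypothetical, and the relation β = i_0/i_L = (L − x)/x follows from the current-divider reading of figure 2 given above; only the fiber length and the noise level come from the text.

```python
# Localizing a touch from the two bus currents, assuming the current-divider
# relation beta = i0/iL = (L - x)/x, i.e. x = L/(1 + beta). The current values
# are illustrative; only i_N ~ 0.1 nA and L = 85 cm are taken from the text.

L = 0.85          # m, fiber length (85 cm piece, as in the experiments)
i_N = 0.1e-9      # A, conservative noise current from the text

def locate(i0: float, iL: float) -> float:
    """Touch position x (in m) from the two measured currents."""
    beta = i0 / iL
    return L / (1.0 + beta)

def resolution(i0: float, iL: float) -> float:
    """Conservative position uncertainty: shift both currents by +/- i_N."""
    x_lo = locate(i0 + i_N, iL - i_N)
    x_hi = locate(i0 - i_N, iL + i_N)
    return abs(x_hi - x_lo)

i0, iL = 1.2e-6, 2.8e-6   # A, hypothetical measured currents
print(f"x = {locate(i0, iL)*1e3:.1f} mm, dx ~ {resolution(i0, iL)*1e3:.2f} mm")
```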
Dynamic response

The inset of figure 3(b) shows the voltage response over time for a fiber exposed to a cyclic load in compression, as described in the methods section. In the main graph we plotted the experimental recording for part of the cycle (black squares) and the exponential fit (red curve). We could extract a response time of 57 µs. We also noted no drift, change of response or damage to the fiber after 10⁴ cycles, highlighting the robustness of the fiber device.

Multiple pressure points

The analysis we performed to localize a pressure point revealed that, assuming a small pressure area, localization simply amounts to measuring independent currents that can resolve an equation with two unknowns, namely the pressure value (linked to the contact resistance) and the position. It is straightforward to expand this approach to multiple excitations, particularly when the pressure is high enough that the contact resistance saturates and affects the measurement less than the probe size. In such a configuration, two pressure points bring only two unknowns to the system (their positions), which the two current measurements can be used to resolve. Instead of repeating the simple circuit analysis done previously, here we take the approach of calibrating a 12 cm long fiber subjected to one or two simultaneous pressure points. We measured and recorded the generated currents i_0 and i_L for one and two pressure points spaced by 5 mm (mimicking an application with a finger touch of around 10 mm in width). After this calibration, we subjected the fiber to the two events represented in figure 3(c) by black curves, with an arbitrary pressure value of 1 at the contact point. On top, we show an excitation by a single pressure point, and below an event with two pressure points. In blue we indicate the reconstructed localized pressure on the fiber for these two events using the calibrated values. The system could sense whether one or two touches were present, and the detected positions correspond very well with the excitation to be resolved. We performed a series of similar experiments and found that the calibration was robust and that a resolution of ±3 mm could be achieved in resolving the positions of the centers of two excitations.

Discussion

The ability to realize a fiber device with such a complex microstructure is firstly a result of the particular attributes associated with the drawing process. To maintain the cross-sectional shape during thermal drawing, it is essential to pull the fiber at a high viscosity to avoid any thermal reflow, deformation or capillary break-up of the thin CPC sheet and square bus. To understand why the particular structure we demonstrate here can be maintained and does not collapse or deform, a rather intuitive method is to compare a theoretical characteristic time associated with the reflow to the time the material spends at high temperature in the neck-down region. In figure 4(a) we show a schematic of the thermal drawing process where the neck-down region is highlighted in red. This region defines where the polymers start to flow and deform, until the structure reaches the final fiber dimensions at the output of the furnace. The processing time in the neck-down region is given by t_Df = ∫ from z_1 to z_2 of dz/v(z) (see figure 4(a)), where v(z) is the velocity along the z axis, considered here to be only a function of z. At z = z_1, the velocity corresponds to the imposed down-feed speed v_Df, the speed at which the preform is fed into the furnace. At z = z_2, the velocity is the drawing speed, the speed at which the fiber is pulled.
Discussion The ability to realize a fiber device with such a complex microstructure is firstly a result of the particular attributes of the drawing process. To maintain the cross-sectional shape during thermal drawing, it is essential to pull the fiber at a high viscosity to avoid any thermal reflow, deformation or capillary break-up of the thin CPC sheet and square bus. To understand why the particular structure we demonstrate here can be maintained and does not collapse or deform, a rather intuitive method is to compare a theoretical characteristic time associated with the reflow to the time the material spends at high temperature in the neck-down region. In figure 4(a) we show a schematic of the thermal drawing process where the neck-down region is highlighted in red. This region defines where the polymers start to flow and deform, until the structure reaches the final fiber diameter at the output of the furnace. The processing time in the neck-down region is given by t_Df = ∫_{z1}^{z2} dz / v(z) (see figure 4(a)), where v(z) is the velocity along the z axis, considered here to be only a function of z. At z = z₁, the velocity corresponds to the imposed down-feed speed v_Df, the speed at which the preform is fed into the furnace. At z = z₂, the velocity is the drawing speed, the speed at which the fiber is pulled. In between, the velocity changes gradually as the radius R(z) of the cone is reduced, with a simple relationship between the two quantities that results from volume conservation: v(z) R(z)² = v_Df R₁², where R₁ is the radius of the preform at z = z₁. In figure 4(a) we also show a picture of the MEMF preform-to-fiber region obtained by interrupting the draw. The tip of the structure is slightly bent due to an elastic bounce-back when the fiber is cut. Nevertheless, one can extract a discrete approximation of the function R(z) that enables us to estimate the dwelling time t_Df to be around 10³ s. A characteristic reflow time can also be estimated from simple theoretical analysis and experimental measurement. First, a dimensional analysis [8,31] enables us to write a characteristic time τ = C η λ / γ from the viscosity, surface tension and feature size, where C is a dimensionless number, η is the viscosity (in Pa·s), λ a characteristic dimension (in meters) and γ the surface tension (in N·m⁻¹). Such a formula has been shown to be valid for the capillary break-up of thin films [31] as well as for the reflow of periodic textures at the surface of a polymer (C = 1/π) [32]. The specific dynamics of a thermal drawing experiment again requires the definition of a weighted characteristic time, since the viscosity and feature sizes are functions of the position z in the furnace (we neglect the change of surface tension with temperature). In figure 4(b) we show the measurement of the CPC viscosity versus temperature. We also measured the maximum temperature inside the furnace (230 °C) and extrapolate a temperature profile for thermal drawing: T(z) = 230 − 14300·z². We can then express the reflow time as a weighted average over the neck-down region; with a length scale λ of 100 microns at the fiber level, and using the same drawdown ratio as above, we obtain an estimate of τ_R ≈ 10⁴ s, an order of magnitude higher than the dwelling time. Even with the approximations made, it is apparent that the time scale associated with any reflow or break-up of the CPC domains is greater than the time it takes to go from the preform to a fiber shape. Reflow or capillary break-up does not have time to occur, ensuring that at these length scales the cross-sectional architecture is maintained.
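To make the dwelling-time versus reflow-time comparison concrete, the following numerical sketch evaluates t_Df = ∫ dz/v(z), with v(z) obtained from volume conservation, together with a dwell-weighted reflow time built from τ = Cηλ/γ. The neck-down profile R(z), the viscosity law η(T) and every numerical value below are illustrative placeholders, not the measured data of the paper.

```python
# Minimal numerical sketch of the dwelling-time vs reflow-time comparison.
# The neck-down profile R(z), the viscosity law eta(T) and all numbers below are
# illustrative assumptions, not the measured values from the paper.
import numpy as np

v_df = 1e-3 / 60.0          # down-feed speed, assumed 1 mm/min (in m/s)
R1 = 12.5e-3                # assumed preform radius at z1 (m)
z = np.linspace(0.0, 0.10, 500)          # neck-down region, assumed 10 cm long
R = R1 * np.exp(-z / 0.02)               # assumed smooth neck-down profile R(z)

# Volume conservation: v(z) * R(z)^2 = v_df * R1^2
v = v_df * (R1 / R) ** 2
t_dwell = np.trapz(1.0 / v, z)           # dwelling time t_Df = integral dz / v(z)

# Characteristic reflow time tau = C * eta * lambda / gamma, weighted along z
gamma = 0.03                             # assumed surface tension (N/m)
C = 1.0 / np.pi                          # value quoted for surface-texture reflow
T = 230.0 - 14300.0 * z**2               # extrapolated temperature profile (deg C)
eta = 1e5 * 10 ** ((230.0 - T) / 40.0)   # assumed viscosity-temperature law (Pa.s)
lam = 100e-6 * (R / R[-1])               # feature size scales with the local radius
# Dwell-time-weighted average of tau(z) over the neck-down region
tau_reflow = np.trapz(C * eta * lam / gamma / v, z) / t_dwell

print(f"dwelling time ~ {t_dwell:.0f} s, characteristic reflow time ~ {tau_reflow:.0f} s")
```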
The thermal drawing of such complex structures enables us for the first time to detect and localize pressure along a single functional fiber. Previous work has reported a capacitive approach that requires a conductive probe (or a finger) to press and then slide on the fiber to extract a location [27]. Other approaches have relied on fiber grids to localize an excitation [13,28,33]. The approach we demonstrate here has several advantages compared to existing configurations. First, it does not require a conducting probe to sense and localize pressure, nor does it require a fiber mesh. This drastically reduces the number of contacts to be made to the fibers, hence improving integration. The accuracy with which one can localize pressure is far beyond what is needed in most practical applications, given the very low noise of the system. Moreover, we showed that a single fiber can sense and localize two pressure points simultaneously applied along its length, which was not possible with previous configurations. By integrating more electrodes or more contacts along the fiber length, a greater number of independent currents could be measured and hence complex pressure distributions could be extracted. It is also interesting to note that if we could localize two pressure points of small width, it would also be possible to extract the center and width of an excitation of large extent. This could be very interesting for medical applications, such as the pressure ulcer prevention discussed below, where large-area pressure on the body could be sensed and localized precisely using a system that is simple to fabricate, integrate and use. Moreover, the device is an open circuit at rest, consuming very little energy for its operation, in contrast to other capacitive or piezoresistive devices. The ability to have a single fiber capable of detecting and localizing pressure in a straightforward and easy-to-interface way can have an impact in many fields of application, such as healthcare, smart textiles and entertainment. Note that we refer to 'detecting' pressure, even though over a small range of applied pressures we showed that we could also measure its intensity. This range is relatively small in the current configuration, but could be improved with the more advanced designs discussed below. For several applications, such as flexible and foldable keyboards, or ergonomic textile-integrated electronic interfaces, detecting pressure points regardless of the intensity is sufficient. In the medical field, the ability to cover a very large surface area, such as hospital bed sheets or chairs, enabled by the fiber format, could be very useful for the prevention of conditions such as pressure ulcers. These are hard-to-treat conditions associated with extended periods of immobility and pressure against the skin, which fabric-integrated MEMF devices could monitor and help prevent. In figure 4(c) we show a picture of a device that highlights an example of the advanced functionality that a single all-polymer MEMF can provide. As we show in the movie provided as supplemental information, we can assign different notes to different locations on the fiber via different frequencies from a piezoelectric buzzer, fabricating the first 'piano fiber'. With the ability to sense more than one pressure point, more complex operations could be realized in the field of flexible electronic interfaces. The mechanical behavior of the CPC film in MEMF ribbons has proven to be very robust. More accurate characterization of the mechanical robustness over a large number of bending cycles is underway, as is the use of thinner freestanding sheets to further improve the MEMF response to pressure, its bandwidth and its resilience. One difficulty that can be envisioned is the open structure of the cantilever-like design. An encapsulated system with a thin membrane that would protect the fiber from external debris or liquid (when washed if integrated in a textile, for example) could ensure better protection of the functional parts. In figure 4(d) we show the cross-section of a fabricated fiber with an encapsulated design where a thin PSU wall was left to enclose the fiber cross-section. Tens of meters of such fiber were produced with a height down to 300 µm and a PSU enclosing layer as thin as 5 µm. This fiber can perform exactly the same functionalities as the cantilever-like fiber, but due to the rigidity of the PSU film and the mechanical excitation, the structure failed after a few thousand cycles. We are looking into the thermal drawing of softer materials for the next generation of touch-sensing fiber devices, which could exhibit the same advanced pressure localization capabilities but with an encapsulated design.
Finally, given the flexibility of the design at the preform level, many other configurations with thinner films, smaller gaps between the two conducting domains, or the integration of several pressure-sensing elements in a single fiber or ribbon can be realized. This could result in advanced and robust fibers capable of measuring, with high sensitivity, not only the intensity but also the location, direction and nature (shear versus compression) of an arbitrary mechanical excitation. Conclusion In conclusion, the thermal drawing process was used to fabricate a polymer-based micro-structured ribbon with domains that can move upon mechanical excitation. This movement can bring two conducting composites into contact and trigger an electrical signal that is exploited to sense and localize the applied pressure. We established a configuration and a model to extract not only the presence but also the position along the fiber axis of the applied pressure, regardless of its intensity, and at a submillimeter resolution. We showed that these fiber devices were unaltered after 10⁴ loading cycles at 200 Hz, that they could recognize and localize two pressure points, and that they exhibited a response bandwidth of close to 20 kHz. An important aspect of the thermally drawn MEMF devices is the scalability with which fiber length, and hence also surface area, can be produced. While several strategies have been proposed to sense and sometimes map pressure using piezo-resistive, piezo-electric or capacitive approaches, very few share the simplicity and low cost of the thermal drawing process. This is key for applications that require the functionalization of very large surface areas. Ongoing research further investigates the mechanical attributes of MEMF devices. Other designs highlighted in the discussion, with thinner freestanding sheets, softer materials or encapsulated architectures, are also under investigation, paving the way towards novel functionalities, such as controlled release from partly closed cavities, or advanced functional surfaces for electronic skin applications.
6,850.2
2017-03-07T00:00:00.000
[ "Engineering" ]
Analog quantum simulation of gravitational waves in a Bose-Einstein condensate We show how to vary the physical properties of a Bose-Einstein condensate (BEC) in order to mimic an effective gravitational-wave spacetime. In particular, we focus on the simulation of the recently discovered creation of particles by real spacetime distortion in box-type traps. We show that, by modulating the speed of sound in the BEC, the phonons experience the effects of a simulated spacetime ripple with experimentally amenable parameters. These results will inform the experimental programme of gravitational wave astronomy with cold atoms. Introduction Quantum simulators [1] were originally conceived by Feynman as experimentally amenable quantum systems whose dynamics would mimic the behaviour of more inaccessible systems appearing in Nature. Together with the exploration of this visionary insight in countless quantum platforms at an increasingly accelerated rate, recent years have witnessed the birth of alternative approaches to quantum simulation. For instance, a quantum simulator can be used to emulate a phenomenon predicted by a well-established theory but very hard to test in the laboratory, such as Zitterbewegung [2]. Going a step further, it can also be used to materialise an artificial dynamics that has never been observed in Nature while being theoretically conceivable [3,4], or even to emulate the action of a mathematical transformation [5]. Einstein's theory of general relativity [6] predicts the existence of gravitational waves [7], namely perturbations of the spacetime generated by accelerated mass distributions. Since sources are typically very far from Earth, the theory predicts that the amplitude of gravitational waves reaching our planet is extremely small and thus finding experimental evidence of their existence is a difficult task. Indeed, the quest for the detection of these spacetime distortions [8] has been one of the biggest enterprises of modern science and the focus of a great amount of work during the last decades, both in theory and experiment. Recently, some of the authors of this manuscript have proposed a novel method of gravitational wave detection based on the generation of particles in a Bose-Einstein condensate produced by the propagation of the gravitational ripple [9]. Detection is carried out through a resonance effect, which is possible because the range of frequencies of typical gravitational waves is similar to that of the Bogoliubov modes of a BEC confined in a box-like potential. The gravitational wave is then able to hit a particle creation resonance, in a phenomenon resembling the Dynamical Casimir Effect [10,11]. Due to the frequencies involved, this effect is completely absent in optical cavities. The use of our technique would relax some of the most daunting demands of other programmes for gravitational wave detection, such as the use of highly massive mirrors and km-long interferometer arms. However, due to the extremely small amplitude of the gravitational waves when they reach the Earth, their detection is always challenging, since it requires an experimental setup extremely well isolated from possible sources of noise. It would be of great benefit for the experiment if the amplitude of the spacetime ripples were larger. In this paper, we show how to realise a quantum simulation of the generation of particles by gravitational waves in a BEC. We exploit the fact that the Bogoliubov modes of a trapped BEC satisfy a Klein-Gordon equation on a curved background metric.
The metric has two terms [12,13,14,9], one corresponding to the real spacetime metric and a second term, corresponding to what we call the analogue gravity metric, which depends on BEC parameters such as velocity flows and energy density. While in [9] we analyse the effect of changes in the real spacetime metric, in this case we consider the manipulation of the analogue gravity [1,15] metric, assuming that the real spacetime is flat. Since in this case the experimentalist is able to manipulate the parameters of the condensate artificially, we are able to simulate spacetime distortions with a much larger amplitude, as if the laboratory were closer to the source of the gravitational ripples. We show that with realistic experimental parameters, a physically meaningful model of gravitational wave can be simulated with current technology. The paper is organised as follows. First we review the effects of a real gravitational wave on the Bogoliubov modes of the BEC. Then we consider the case in which there are non-zero initial velocity flows in the BEC, showing that there is always a reference frame in which we can modulate the speed of sound in such a way that the phonons experience an effect analogous to the one produced by the propagation of a real spacetime wave. Finally, for the sake of simplicity we assume that there are no velocity flows in the BEC and we show that the gravitational wave can be simulated with current technology. Gravitational waves in a BEC The metric of a gravitational wave spacetime is commonly modelled by a small perturbation h_µν to the flat Minkowski metric η_µν, i.e. g_µν = η_µν + h_µν [7], where c is the speed of light in vacuum and we consider Minkowski coordinates (t, x, y, z). In the transverse traceless (TT) gauge [7], the perturbation corresponding to a gravitational wave moving in the z-direction can be written in terms of h₊(t) and h×(t), the time-dependent perturbations in the two different polarisations. Later on we will restrict the analysis to 1-dimensional fields, where the line element takes a simple form. We are interested in simulating this particular spacetime in a BEC. To this end we use the description of a BEC on a general spacetime metric following references [12,13]. This description stems from the theory of fluids on a general relativistic background [12] and thus is valid as long as the BEC can be described as a fluid [13]. In the superfluid regime, a BEC is described by a mean-field classical background Ψ plus quantum fluctuations Π. These fluctuations, for length scales larger than the so-called healing length, behave like a phononic quantum field on a curved metric. Indeed, in a homogeneous condensate, the massless modes of the field obey a Klein-Gordon equation in which the d'Alembertian operator depends on an effective spacetime metric g_ab (with determinant g) given in [12,13,14]. The effective metric is a function of the real spacetime metric g_ab (which in general may be curved) and of background mean-field properties of the BEC such as the number density n₀, the energy density ρ₀, the pressure p₀ and the speed of sound c_s = c√(∂p/∂ρ). Here p is the total pressure, ρ the total density and V_a is the 4-velocity of the flow in the BEC. In the absence of background flows, V_a = (c, 0, 0, 0), and the effective metric simplifies accordingly. In the absence of a gravitational wave, the real spacetime metric is g_ab = η_ab, where η_ab has been defined in Eq. (2).
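The TT-gauge expressions that this paragraph refers to did not survive extraction; for reference, the standard textbook forms are reproduced below. The metric signature and the placement of factors of c are assumed conventions and may differ from the paper's own.

```latex
% Standard TT-gauge expressions assumed here for concreteness; the paper's own
% conventions (signature, factors of c) may differ slightly.
\[
h_{\mu\nu}(t) =
\begin{pmatrix}
0 & 0 & 0 & 0\\
0 & h_+(t) & h_\times(t) & 0\\
0 & h_\times(t) & -h_+(t) & 0\\
0 & 0 & 0 & 0
\end{pmatrix},
\qquad
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}.
\]
% Restricting to one spatial dimension (the x direction) and the + polarisation,
% the line element reduces to
\[
ds^2 = -c^2\,dt^2 + \bigl(1 + h_+(t)\bigr)\,dx^2 .
\]
```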
Therefore, in flat real spacetime, the effective metric of the BEC phononic excitations is conformally flat. Ignoring the conformal factor (which can always be done in 1D, or when it is time-independent), we notice that the metric is the flat Minkowski metric with the speed of light replaced by the speed of sound c_s. By considering a rescaled time coordinate t′ = (c/c_s)t, we recover the standard Minkowski metric ds² = −c²dt² + dx² in the rescaled coordinate. This means that the phonons live on a spacetime in which, due to the BEC ground-state properties, time flows in a different fashion and excitations propagate accordingly. The real spacetime metric of a gravitational wave is given by Eq. (1), and from it one obtains the effective metric for the phonons. For simplicity, we considered a quasi one-dimensional BEC, for which the line element is conformal to the corresponding 1D gravitational-wave line element. In [9], it is shown that the propagation of the gravitational wave generates particles in a BEC confined in a box-like trap. This particle creation is characterised by the Bogoliubov coefficients β_mn = −(φ̄_m, φ_n*), where φ̄_m and φ_n are the m and n mode solutions of Eq. (5) for the metrics in Eq. (10) and Eq. (9), respectively. In particular, if we model the gravitational wave by a sinusoidal oscillation of frequency Ω that matches the sum of the frequencies of a certain pair of modes m and n, the number of particles grows linearly in time. We refer the reader to [9] for more details. Quantum simulation The aim of this section is to show how to obtain a line element similar to the one in Eq. (11) by manipulating the parameters of the BEC while the real spacetime is assumed to be flat. Going back to the effective metric of Eq. (7) and restricting ourselves to 1D, we perform a coordinate transformation [13] to new coordinates (χ, ζ). Next, we choose the velocity profile such that in this new coordinate system the BEC is at rest, i.e., v_χ = c and v_ζ = 0; the velocity profile in the original coordinate system needed in order to do this follows accordingly. The line element given by the metric in Eq. (7) then takes a form in which we allow the speed of sound c_s and the density ρ to depend on the coordinates. We make another coordinate transformation (this time only in χ), introducing a coordinate τ; here, we denote by c_s0 the constant value of the speed of sound in the absence of any manipulation of the BEC parameters. Since we still have the freedom to specify the velocity of sound or the density of the BEC, we choose to let the density be constant, so that ρ = ρ₀, and we fix the velocity of sound accordingly, where ℓ is a constant with units of distance; this fixes χ as a function of τ. Modelling the oscillation of the +-polarisation of the gravitational wave in the standard way as h₊(t) = A₊ sin(Ωt), expanding the root in the integral in powers of h₊ and keeping only terms linear in A₊, Equation (18) takes a friendlier form, up to first order in A₊. With this, we can solve Eq. (17) for the velocity of sound in terms of τ (substituting Eq. (20) into Eq. (17)), again up to first order in A₊. Now that we have obtained an explicit expression for the velocity of sound as a function of the coordinate τ, it is natural to ask about the significance of the constant parameter ℓ. For this, we see that in the absence of a simulated gravitational wave, Equation (17) becomes ℓ = χ₀ ρ₀ c/c_s0(χ₀), for a particular value χ = χ₀. A convenient choice is ℓ = 2cτ₀, so that the velocity of sound can be expressed as c_s(τ₀) = c_s0 for τ = τ₀.
Hence, ℓ is represented by the hyperbola c²t² − x² = ℓ²c_s0/(ρ₀c) in the Minkowski coordinate system. A final coordinate transformation is made, defining ξ such that dξ² = ℓ²dζ²/(1 + ζ²), thus transforming the line element of Eq. (15) into the desired form. The case with no background flows. Experimental implementation In the last section, we have shown how the speed of sound can be modified to mimic a gravitational wave in a coordinate system (τ, ξ) in which there are no background flows. Now, we will consider for simplicity that this coordinate system is the lab frame (t, x) and relate our results directly to experimental parameters. As we have already seen above Eq. (9), and assuming again a 1D spacetime, if v_t = c and v_x = 0, then the line element is conformal to the flat form with c replaced by c_s. If we now allow the speed of sound to depend on t, the corresponding line element is conformal to the same expression with c_s(t). Therefore, if the speed of sound varies in time in the appropriate way, we recover, up to first order in A₊, the gravitational-wave line element, and the experimental task is to modulate the speed of sound accordingly. In a weakly interacting condensate [16], the speed of sound is c_s = √(gρ/m), where ρ is the density and m the atomic mass, and the coupling strength is g = 4πℏ²a/m, where a is the scattering length. It is well known [17] that a can be modulated in time by using the dependence of the scattering length on an external magnetic field around a Feshbach resonance. The aim is thus to achieve a suitable time-dependent scattering length, because then, up to first order, the speed of sound is modulated in the required way. To this end, we exploit the dependence of the scattering length on an external magnetic field, a(B) = a_bg (1 − ω/(B − B₀)), where a_bg is the background scattering length, B₀ is the value of the magnetic field at which the Feshbach resonance takes place, and ω is the width of the Feshbach resonance. Considering a time-dependent magnetic field, and with a little algebra, we can write the resulting modulation of the speed of sound to first order in δB. So, if we identify the corresponding terms, we are simulating a gravitational wave, whose amplitude A₊ is set by the modulation depth and whose frequency Ω is the frequency of the simulated wave. Taking experimental values for B₀ and ω [17], and assuming that we can control the magnetic field on a 0.1 G scale (so we can take δB ≃ B(0) ≃ 0.1 G), we estimate the amplitude of the simulated wave. This is much larger than the amplitude A₊ ≃ 10⁻²⁰ of the gravitational waves that we expect to see on Earth, due to the fact that the Earth is far from typical sources of gravitational waves. Therefore it is interesting to think of the physical meaning of the gravitational waves that can be simulated with our techniques. For instance, the amplitude and frequency of the + polarisation of the gravitational wave generated by a Sun-Earth-like system can be written in terms of Newton's gravitational constant G, the masses m and M of the Earth and the Sun respectively, the distance r between the two bodies and the distance R between the detector and the centre of mass of the system, which is assumed to be much larger than the wavelength λ of the gravitational wave, R ≫ λ. Thus, if we consider the real masses of the Sun and the Earth, a distance between them of r = 10 m and R = 10⁷ m, we find Ω in the kHz range (which is very convenient for generating phonons in the BEC) and the desired value of A₊. In [9] it is predicted that the changes in the covariance matrix of the Bogoliubov modes induced by gravitational waves of typical amplitude A₊ < 10⁻²⁰ are in principle detectable.
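A small numerical sketch of the experimental recipe just described: the magnetic field is modulated around a Feshbach resonance, the scattering length follows a(B) = a_bg(1 − ω/(B − B₀)), and the speed of sound follows c_s = √(gρ/m) with g = 4πℏ²a/m. All numerical values (atomic species, density, resonance parameters, drive frequency) are illustrative assumptions, and the identification of the relative modulation with roughly A₊/2 is only indicative of the scheme, not the paper's exact relation.

```python
# Illustrative sketch: modulating the speed of sound via a Feshbach resonance.
# Uses the standard weakly-interacting-BEC and Feshbach-resonance expressions
# (c_s = sqrt(g*rho/m), g = 4*pi*hbar^2*a/m, a(B) = a_bg*(1 - w/(B - B0)));
# all numerical values below are placeholders, not the paper's parameters.
import numpy as np

hbar = 1.054571817e-34
m = 1.44e-25                 # assumed atomic mass (~87Rb), kg
rho = 1e20                   # assumed condensate number density, m^-3
a_bg = 5.3e-9                # assumed background scattering length, m
B0 = 1007.4e-4               # assumed resonance position, T (1007.4 G)
w = 0.2e-4                   # assumed resonance width, T (0.2 G)
B_offset = 1008.4e-4         # assumed static field, 1 G above resonance
dB = 0.1e-4                  # 0.1 G modulation scale, as quoted in the text
Omega = 2 * np.pi * 5e3      # assumed kHz-range drive frequency, rad/s

def speed_of_sound(B):
    a = a_bg * (1.0 - w / (B - B0))          # Feshbach dependence a(B)
    g = 4 * np.pi * hbar**2 * a / m          # coupling strength
    return np.sqrt(g * rho / m)              # c_s = sqrt(g*rho/m)

t = np.linspace(0, 2e-3, 2000)
B = B_offset + dB * np.sin(Omega * t)        # small sinusoidal field modulation
cs = speed_of_sound(B)
cs0 = speed_of_sound(B_offset)

# Relative modulation of c_s; in the scheme above this plays the role of ~A_+/2.
A_plus_estimate = 2 * np.max(np.abs(cs - cs0)) / cs0
print(f"c_s0 = {cs0*1e3:.2f} mm/s, effective simulated amplitude ~ {A_plus_estimate:.1e}")
```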
Therefore, the changes generated by a simulated wave of much larger amplitude should be observable in an experiment with current cold-atom technology. Conclusions We have shown how to generate an artificial gravitational wave spacetime for the quantum excitations of a BEC. In the case in which there are no initial background flows, we show that the simulated ripple is obtained through a modulation of the speed of sound in the BEC. In the laboratory, this can be achieved with current technology by exploiting the dependence of the scattering length on the external magnetic field around a Feshbach resonance. With realistic experimental parameters, we find that the simulated gravitational waves can resonate with the Bogoliubov modes of the BEC. The amplitude of these artificial ripples is much larger than the typical amplitude expected for gravitational waves reaching the Earth, due to the fact that the Earth is very far from typical sources. Thus our simulated waves would mimic the waves generated by sources much closer to the BEC. This feature would enhance the effects of the ripple in the system, facilitating their detection. The experimental test of our predictions would be a proof-of-concept of the generation of particles by gravitational waves and would pave the way for the actual observation of real spacetime ripples in a BEC. More generally, our low-cost Earth-based tabletop experiment will inform the whole programme of gravitational wave astronomy.
3,795.4
2014-06-13T00:00:00.000
[ "Physics" ]
On the relationship between α connections and the asymptotic properties of predictive distributions In a recent paper, Komaki studied the second-order asymptotic properties of predictive distributions, using the Kullback–Leibler divergence as a loss function. He showed that estimative distributions with asymptotically efficient estimators can be improved by predictive distributions that do not belong to the model. The model is assumed to be a multidimensional curved exponential family. In this paper we generalize the result, assuming as a loss function any f divergence. A relationship arises between α connections and optimal predictive distributions. In particular, using an α divergence to measure the goodness of a predictive distribution, the optimal shift of the estimative distribution is related to α-covariant derivatives. The expression that we obtain for the asymptotic risk is also useful for studying the higher-order asymptotic properties of an estimator, in the mentioned class of loss functions. Introduction The main goal of this work is to provide distributions that are close, in the sense of an f divergence, to an unknown distribution belonging to a curved exponential family P. We measure the closeness by D_f(p(x; u), p(x; x_{1∼N})) = ∫ f( p(x; x_{1∼N}) / p(x; u) ) p(x; u) μ dx, where f is a smooth, strictly convex function that vanishes at 1. In order to choose p, we could try to find the distribution that minimizes (1), uniformly in u, among 'all probability distributions' equivalent to p. Since there are some technical problems in giving a structure of differentiable manifold to this infinite-dimensional space, we follow the procedure suggested by Komaki (1996) and try to solve the problem only for distributions belonging to a finite-dimensional model containing P. We construct this model by enlarging P in orthogonal directions. We shall see that, for large samples, there is a special direction such that the improvement on the estimative density is maximum if and only if this direction belongs to the tangent space associated to the enlarged model. The solution does not change if we add more orthogonal directions, and in this sense we can consider the problem solved in the infinite-dimensional space of all probability distributions equivalent to p. For simplicity, we shall work with α divergences D_α (for their use in statistical inference, see Amari (1985, Chapter 3)), i.e. f divergences with a particular choice of f. In the final remark, we extend the results to any f divergence. The enlarged model Let E be an n-dimensional full exponential family, i.e. a family of probability functions p(x; θ) that are densities with respect to some σ-finite reference measure μ, with θ ranging over an open subset of Rⁿ. We consider the model P to be an (n, m)-curved exponential family of E, m < n. Let l_α(x; u) be the so-called α representation of p(x; u) (Amari 1985, p. 66).
From now on, the index α will be used to denote everything that regards the α representation of geometric quantities. The tangent space T_u of P at u is identified with the vector space spanned by the vectors d_a l_α(x; u), a = 1, …, m, which are the components of what we call the α-score function. The first and second derivatives of l_α(x; u) are related to those of l(x; u) = log p(x; u) = l_1(x; u), and we have that the inner product of the vectors d_a l_α and d_b l_α does not depend on the α representation; it is the (a, b) component of the Fisher information matrix g_ab. In the sequel, we omit the subscript α in the inner product and in the expectation, since it will be clear from the representation used. We indicate by g^ab the inverse of g_ab and use the repeated-index convention. Following Amari et al. (1987), we can construct a vector bundle on P by associating to each point p(x; u) ∈ P a linear space H_u. Attached to different points we have different but isomorphic Hilbert spaces. In order to see this, let p = p(x; u) and q = p(x; u′) be two different points of P and consider the transformation I_u^{u′}. It is easy to see that I_u^{u′}(h) ∈ H_{u′}, since ∫ q^{(1+α)/2} I_u^{u′}(h) μ dx = 0 and the norm ∫ q^α {I_u^{u′}(h)}² μ dx is finite. Moreover, I_u^{u′} is linear, its inverse is the corresponding transformation I_{u′}^{u} and, by (2), it is bounded. I_u^{u′} is then a continuous linear bijection, i.e. an isomorphism between H_u and H_{u′}. The aggregate H(P) = ∪_{u∈U} H_u constitutes Amari's Hilbert bundle. It is necessary to establish a one-to-one correspondence between H_u and H_{u′}, when p(x; u) and p(x; u′) are neighbouring points, in order to express the rate of variation of a vector field as an element of the Hilbert bundle. If we move in the direction d_a l_α and h_u ∈ H_u, then in general d_a h_u does not belong to H_u. However, if h is a smooth vector field, in the sense that we can interchange the integral and the derivative, we can define the α-covariant derivative in H, and the α-covariant derivative in P is the projection of ∇^α(H). These connections coincide with the α connections defined by Amari (1985, p. 38). We use the superscripts m and e respectively for the −1 and 1-covariant derivatives. Let M be any regular parametric model containing P. We can consider on M the coordinate system (u, s), where u^a, a = 1, …, m, is the old coordinate system on P and s^I, I = m + 1, …, r, r > m, are new coordinates on M. We suppose that s = 0 for the points in the original manifold P and that u and s are orthogonal in P. The tangent space to the enlarged model M is now spanned by the vectors d_a l_α(x; u, s), a = 1, …, m, and d_I l_α(x; u, s), I = m + 1, …, r. Predictive distributions We consider predictive distributions p(x; u_N(x), s(x)), with s(x) = O_p(N⁻¹), where u_N(x) is a smooth, asymptotically efficient estimator, and hence first-order equivalent to the maximum-likelihood estimator. For fixed x, both depend on N only through x.
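Several of the defining formulas in this section were lost in extraction. For orientation, the standard definitions from Amari (1985) that the text presupposes are reproduced below; the normalisation and sign conventions are assumed and may differ in detail from the source.

```latex
% Standard definitions from Amari (1985), assumed here since the source formulas
% were lost in extraction; sign and normalisation conventions may differ.
\[
l_{\alpha}(x;u) =
\begin{cases}
\dfrac{2}{1-\alpha}\, p(x;u)^{(1-\alpha)/2}, & \alpha \neq 1,\\[1ex]
\log p(x;u), & \alpha = 1,
\end{cases}
\qquad\text{($\alpha$ representation)}
\]
\[
D_{\alpha}(p,q) = \frac{4}{1-\alpha^{2}}\left(1 - \int p^{(1-\alpha)/2}\, q^{(1+\alpha)/2}\, \mu\,\mathrm{d}x\right),
\qquad \alpha \neq \pm 1 ,
\]
% with the Kullback-Leibler divergences recovered in the limits alpha -> +-1.
```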
Predictions and α connections For each N, u_N is a map u_N: E → P, since x can be identified with the point in E having expectation parameters η_i = x_i. Then, u_∞ is also a map from E to P and we can associate with u_N a family of ancillary (n − m)-dimensional submanifolds of E, A = {A(u)}, where A(u) = u_∞⁻¹(u). In some discrete cases, even though the exponential model is regular, x could correspond to the expectation parameters of a point in E with a probability different from one. However, since this probability goes to one exponentially in N, we can consider a modification of x, say x*, such that x* = x + o_p(N⁻²) and x* are the expectation coordinates of some point in E. Then, all the results could be rewritten in terms of x* instead of x. Following Amari (1985, p. 128), it can be shown that u_∞ is consistent if and only if every p(x; u) ∈ P is contained in the associated submanifold A(u), and u_∞ is asymptotically first-order efficient if and only if A(u) is orthogonal to P at u. On the other hand, since the corresponding limit holds in distribution, the results still hold for u_N. If we introduce a coordinate system v^κ, κ = m + 1, …, n, on each A(u), every point in the full exponential family containing P is uniquely determined by a pair (u, v). It is convenient to fix v = 0 for the points in P. We denote by indices a, b, c, … ∈ {1, …, m} the coordinates u in P, by κ, λ, μ, … ∈ {m + 1, …, n} the coordinates v in A(u), and by α, β, γ, … ∈ {1, …, n} the new coordinates w = (u, v) in E. Since u_N is asymptotically efficient, g_aκ(u) = 0. Indices i, j, … ∈ {1, …, n} are used to denote the natural parameters θ in E, and indices I, J, K, … ∈ {m + 1, …, r} the coordinates s that we add to enlarge the model P. By the coordinate system we choose on M, g_aI(u) = 0. Under these assumptions, we have the following theorem. Theorem 3.1. The average α divergence from the true distribution p(x; u_0) to a predictive distribution p(x; u_N(x), s(x)) is given by expression (6), where all the quantities are evaluated at u_0, u = u(E(x)), s = s(E(x)), and ∇^α_a is the a component of the general covariant derivative of a tensor with respect to the α connection. Proof. Only an outline of the proof is given; see Corcuera and Giummolè (1996) for detailed calculations. Since, from the definition of f_α, and s(x) = O_p(N⁻¹), the expansion of an α divergence from p(x; u_0) to p(x; u_N, s) is given in (8), where ũ = u_N − u_0. The brackets [ ] refer to the sum of a number of different terms obtained by permutation of the free indices. The mean value of D_α is then computed. The mean squared error of u_N can be written in terms of x̃_i = x_i − d_iψ, and we can easily calculate the moments of x. By using geometrical properties of curved exponential families, and by (11) and (10), with u = u(E_{u_0}(x)), further simplifications follow. If we substitute (11) and (12) in (9), we can finally write (13). By (4), we also have that s = s(E_{u_0}(x)). By (11) and (10), we can now use (13)–(17) and (7) to calculate each term of (8). With some further calculations we obtain the result. □ From (6) we can obtain a decomposition of the average α divergence from the true distribution to any predictive one into two parts: the first term in (18) depends on the choice of the estimative distribution and the other on the shift orthogonal to the model P.
It is well known that the problem of choosing a second-order efficient estimator u_N(x) does not, in general, have a unique solution. On the other hand, the following theorem solves the problem of the choice of the optimal shift orthogonal to the model. Theorem 3.2. The optimal choice of s^I(x), with respect to an α divergence, is given, up to order N⁻¹, by (19), where u_N(x) is any asymptotically efficient estimator. Proof. It is easy to see, by taking the derivative of (18) with respect to s, that the minimum value of the asymptotic risk corresponds to (20). The result follows by (4). □ Let us now define, for a, b = 1, …, m, the vectors h_ab; they are, by definition, orthogonal to the original model P. Moreover, they belong to H_u. The following theorem explains the important role that they play in our analysis. Theorem 3.3. The difference in average α divergence from the true distribution, between the estimative distribution p(x; u_N(x)) and the optimal predictive distribution p(x; u_N(x), s_opt(x)), is maximal if and only if the vector g^ab h_ab belongs to the linear space spanned by the h_I. In this case, the optimal predictive distribution is given by (21). Proof. By (20) and the definition of H^α_abI, we have that, substituting (19) in (18), the difference E_{u_0}{D_α(p(x; u_0), p(x; u_N(x)))} − E_{u_0}{D_α(p(x; u_0), p(x; u_N(x), s_opt(x)))} depends only on the projection of g^ab h_ab on the linear space spanned by the h_I. Thus, it is maximal if and only if g^ab h_ab is included in this space, and its maximal value is given in (23). In this situation, by (19), (22) and (20), we obtain (24), and the result follows by substituting (24) in (3). □ Remark. Including the vector g^ab h_ab in the enlarged model allows us to attain the best improvement on the estimative distribution. For any regular parametric model M containing P and g^ab h_ab we obtain the same optimal predictive distribution. In this sense, (21) gives a predictive distribution that can be considered optimal among all probability distributions equivalent to p. In the case when P itself is a full exponential family, we can write (21) in a simpler form. Note that for α = 1 there is no correction, i.e. we do not move out of the full exponential model. Moreover, for α = −1 we obtain exactly the same result as Vidoni (1995, p. 858, equation (3.1)). Example 3.1. We consider m-dimensional multivariate distributions N(μ, I_m), where μ = (μ_i), i = 1, …, m, is unknown. We have that g_ij(μ) = δ_ij and Γ^α_ijk(μ) = 0, for all α. Now let x(l), l = 1, …, N, be independent observations from N(μ, I_m) and μ̂ = μ̂_N(x) be any estimator for the mean vector μ. We thus have that the optimal predictive distribution can be written in closed form. For α = −1, it coincides, up to order N⁻¹, with the result of Barndorff-Nielsen and Cox (1994, p. 318). By (23), we can calculate the difference in average α divergence between the estimative distribution and the predictive distribution, which does not depend on μ̂, the efficient estimator used. Now let μ̂ be the James–Stein estimator for μ. We can use (6) with s = 0 to compare the two estimative distributions obtained respectively from the maximum-likelihood estimator μ̂_mle = x̄ and the James–Stein estimator: E_μ{D_α(p(x; μ), p(x; μ̂_mle))} − E_μ{D_α(p(x; μ), p(x; μ̂))}.
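A small Monte Carlo illustration of the comparison in Example 3.1, for the Kullback–Leibler case (α = −1), where the divergence between two unit-covariance normals is ‖μ̂ − μ‖²/2. The dimension, sample size and true mean are arbitrary choices made for the illustration, not values from the paper.

```python
# Illustrative Monte Carlo check of Example 3.1: compare the estimative
# distributions N(mu_mle, I) and N(mu_js, I) under the KL divergence
# (the alpha = -1 case), for which KL(N(mu, I) || N(m, I)) = ||m - mu||^2 / 2.
# Sample size, dimension and the true mean are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
m_dim, N, reps = 10, 20, 20000
mu = np.full(m_dim, 0.5)                      # true mean vector (assumed)

kl_mle, kl_js = [], []
for _ in range(reps):
    x = rng.normal(mu, 1.0, size=(N, m_dim))
    xbar = x.mean(axis=0)                     # maximum-likelihood estimator
    # James-Stein shrinkage of the sample mean towards the origin
    shrink = max(0.0, 1.0 - (m_dim - 2) / (N * np.sum(xbar**2)))
    mu_js = shrink * xbar
    kl_mle.append(0.5 * np.sum((xbar - mu) ** 2))
    kl_js.append(0.5 * np.sum((mu_js - mu) ** 2))

print(f"average KL risk, MLE:         {np.mean(kl_mle):.4f}")
print(f"average KL risk, James-Stein: {np.mean(kl_js):.4f}")
```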
Remark. Let us consider an f divergence D_f as a loss function. Without loss of generality, we can suppose that f″(1) = 1. Theorem 3.1 can be easily generalized to this case by putting α = 2f‴(1) + 3 and by substituting the coefficient (α − 11)(α − 1)/32 of the term Q_abcd g^ab g^cd/N² by β = (f⁽⁴⁾(1) − 2f‴(1) − 4)/8. In fact, in the expansion of D_f, the first- and second-order terms remain unchanged. The coefficient of the third-order term can be written in the same form with α = 2f‴(1) + 3, and the coefficient β is calculated from the fourth-order term. Since H_u is a closed linear subspace of L²(p^α μ), it is a Hilbert space. It is easy to see that T_u ⊂ H_u and that the inner product defined on T_u is compatible with that in H_u.
3,610.6
1999-02-01T00:00:00.000
[ "Mathematics" ]
Is Incompatibilism compatible with Fregeanism? This paper considers whether incompatibilism, the view that negation is to be explained in terms of a primitive notion of incompatibility, and Fregeanism, the view that arithmetical truths are analytic according to Frege’s definition of that term in §3 of Foundations of Arithmetic, can be held together. Both views are attractive in their own right, in particular for a certain empiricist mind-set. They promise to account for two philosophical puzzling phenomena: the problem of negative truth and the problem of epistemic access to numbers. For an incompatibilist, proofs of numerical non-identities must appeal to primitive incompatibilities. I argue that no analytic primitive incompatibilities are forthcoming. Hence incompatibilists cannot be Fregeans. The Problem of Negation and Negative Truth Some philosophers find negation problematic.It is not difficult to appreciate why.Nothing really corresponds to negation.Nowhere do you encounter negativity: you do not perceive that the sky is not green, that there is no beer in the fridge, that this Riesling is not dry, that this is box does not weigh 5kg.You encounter just what is the case, not also what is not the case.What you see is that the sky is blue, you check what is in the fridge and there is only a bottle of wine, you taste the sweetness of the wine, you weigh the box and the scales' indicator comes to rest at 3kg. There is only what there is, not also what there is not.So how can we speak truly about the world using negative propositions? The problem of negation or negative truth has been acutely felt by empiricists.For words to be meaningful, they have to denote something positive, as all that we perceive is positive.Thus the meanings of negative expressions must be derivative of and stem from the meanings of positive ones, and negative truths must be secondary to and explained in terms of positive truths.Hobbes expresses this thought in his Elements of Philosophy: The positive names are prior to the negative ones, because, unless the former existed beforehand, there could be no use of the latter.(Hobbes 2000, Part 1, Chapter 2, §7) Locke concurs and writes that negative or privative words cannot be said properly to belong to, or signify no ideas: for then they would be perfectly insignificant sounds; but they relate to positive ideas, and signify their absence.(Locke 1979: Book III, Chapter 1, §4) Ayer, grappling with the distinction between negative and affirmative statements, concludes that, although negative statements cannot be reduced to affirmative ones because the former are less specific than the latter, logically a negative statement […] can be verified only through the truth of some more specific statement which entails it; a statement which will itself, by contrast, be counted as affirmative.(Ayer 1952, 815) Ayer continues, drawing attention to a metaphysical aspect of his conclusion, that in the same way we can account for the inclination that many people have towards saying that reality is positive.The explanation is that any information which is provided by a less specific statement will always be included in the information provided by some more specific statement.(Ibid.) Ayer describes this inclination quite neutrally, which indicates that, although a particularly natural component of the empiricist line of thought, the view is attractive also before other metaphysical backgrounds. 
Incompatibilism One attempt at explaining negative truth or negation in terms of positive notions is almost immediately forthcoming.If the sky is blue, then it is not green, because being blue excludes it from being green; if the fridge is full of wine, its contents exclude bottles of beer from being in it; the sweetness of the wine excludes it from being dry; if something weighs only 3kg, this excludes it from weighing 5kg.Negation can be explained in terms of what things are and what properties exclude each other or which properties are incompatible with each other.For 'a is not F' to be true, it suffices for a to have a property G which is incompatible with F. The puzzle dissolves, because negation is not a primitive concept, but one that is explained in terms of incompatibility. Demos offers an extended discussion of the problem of negation and its solution in terms of a primitive notion of incompatibility in an empiricist setting.According to Demos, "a negative proposition constitutes description of some true positive proposition in terms of the relation of opposition which the latter sustains to some other positive proposition" (Demos 1917, 194), where opposition is the notion of incompatibility introduced in the last paragraph.More recently, Huw Price has argued in a similar spirit that "the apprehension of incompatibility [is] an ability more primitive than the use of negation" (Price 1990, 226).Price, like Demos, proposes to explain negation in terms of incompatibility: It is appropriate to deny a proposition P (or assert ~P) when there is some proposition Q such that one believes that Q and takes P and Q to be incompatible.(Ibid.231) I call the view that negation is to be explained in terms of a primitive notion of metaphysical incompatibility incompatibilism. 1 Let's put some more flesh on incompatibilism.Russell (1951, 297) reports that Wittgenstein once refused to accept that there was no hippopotamus in a lecture room in Cambridge.Neither is there a hippopotamus in the 1 Price appeals to a further primitive in his explanation of negation, namely a primitive speech act of denial.The crucial thought, however, is that negation is based on incompatibility.Negation, according to Price (2019, 6), is needed only for pragmatic reasons, to enable speakers to register explicitly and to convey to other speaker that they consider two propositions to be incompatible.Similarly, Rumfitt (2000) appeals to a notion of incompatibility, albeit between speech acts, rather than propositions, in his bilateral account of logic.Restall (2005, 6ff), too, appeals to a notion of incoherence, that of asserting and denying the same proposition.room I am in now.Cheyne and Pigden explain that the "great big positive fact (or collection of facts)" the room as it actually is makes it true that there is no hippopotamus in it.Their "claim is that the existence of this fact [...] 
necessitates or makes true the proposition that there is no hippopotamus in the room" (Cheyne and Pigden 2006, 255). Had there been a hippo in the room, that fact would not have existed. Containing intact furniture, books on shelves, an unscathed philosopher etc. is incompatible with a room containing a hippo. The things or facts there are suffice to explain negative truths. As another example, suppose Theaetetus is not flying, but sitting next to Socrates. Then the big fact (or collection of facts) that we can roughly characterize as Theaetetus as he actually is necessitates the truth of [Theaetetus is not flying]. Veber, too, emphasises that very large, positive facts are the truthmakers of negative truths. If the truth of Q is incompatible with the truth of P then P will entail Not-Q and thus P's truthmaker will function as Not-Q's truthmaker as well. Provided that every negative truth is entailed by some set of positive truths with positive truthmakers, negative truths can be made true by positive facts. (Veber 2008, 82) That neither the Great Wall of China nor a golf ball is in my coffee cup is due to certain positive facts. In the first case, "that the cup has certain dimensions and that the Wall has certain dimensions are metaphysically incompatible with the Wall being contained in the cup" (Veber 2008, 83). The dimensions of the cup and the Great Wall of China are positive facts. Concerning the golf ball and the cup, "truths about the distribution of air (or coffee) molecules inside the cup" and what the golf ball is made of are incompatible with the golf ball being in the cup. Golf balls are made of "rubber or hard plastic" and that "an air (or coffee) molecule is located in a certain place at a certain time is incompatible with a molecule of rubber or hard plastic being there" (Veber 2008, 83). Thus, only positive facts and what they are incompatible with are needed as the truthmakers of the negative truth that there is no golf ball in my coffee cup. The view that negation can be explained in terms of incompatibility is interesting, well motivated and attractive. To put it more sharply into focus and to illustrate the advantages of the incompatibilist view, let's compare it briefly with the solution that Russell favoured at some point in his thinking: that there are negative facts. 2 Against Negative Facts Russell argued that there are two kinds of facts.
Let us suppose, for the sake of illustration, that x has the relation R to y, and z does not have the relation S to w.Each of these facts contains only three constituents, a relation and two terms; but the two facts do not have the same form.In the one, R relates x and y; in the other, S does not relate z and w.It must not be supposed that the negative fact contains a constituent corresponding to the word "not."It contains no more constituents than a positive fact of the correlative positive form.The difference between the two forms is ultimate and irreducible.We will call this characteristic of a form its quality.Thus facts, and forms of facts, have two opposite qualities, positive and negative.(Russell 1919a, 4) Russell argued that Demos' view has no methodological advantage, and in fact some disadvantages, over the view that there are negative facts, and that it is circular, if the aim is to avoid negative notions, as incompatibility is itself negative (Ibid., 5f).Not many philosophers are satisfied with the view that there are negative facts.Even Russell himself was not entirely convinced: in The Philosophy of Logical Atomism (Russell 1919b, 42), Russell is less committal and merely asks his audience to consider the possibility that there are negative facts in addition to positive ones.Demos aimed to explain negative true propositions without having to introduce negative facts.According to Demos, the reason why such a view must not be entertained is the empirical consideration that strictly negative facts are nowhere to be met with in experience, and that any knowledge of a negative nature seems to be derived from perception of a positive kind.(Demos 1917, 189) A congenial view is expressed by Grzegorczyk in a paper giving an interpretation of intuitionist logic as the logic of scientific research: the atomic sentences of the language are established as true or otherwise by a method of enquiry, while the compound sentences are not a product of experiment, they arise from reasoning.This concerns also negations: we see that the lemon is yellow, we do not see that it is not blue.(Grzegorczyk 1964, 596) The negations of sentences are not verified directly, but their verification involves reasoning. 3 Another reason to reject the existence of negative facts is that for each positive fact there are uncountably many negative facts.There would be, besides the facts of the contents of my room, also the negative facts that there is no hippo in it, no rhinoceros, no blackbird, no giraffe, besides the facts of its location, there would also be the negative facts that it is not in Madrid, not in Paris, not in Berlin, not in Warsaw etc.That is just too many facts.It is a demand of ontological economy that if the phenomena can be explained without appeal to negative facts, then this is what we should do.Arguably, an account such as Cheyne's and Pigden's or Veber's succeeds in doing precisely that, and so the existence of negative facts should be rejected. 
Another reason against accepting negative facts is the following.Suppose the cat is on the mat.Then that fact can be said to be located where the cat and the mat are.But suppose the cat is not on the mat.Where is the negative fact located?Where the cat is?Where the mat is?Where both are?Neither answer is particularly attractive.Negative facts do not appear to be located anywhere.But something that is not located in space presumably also cannot enter the causal nexus of the world.Negative facts would then not do any causal work and have no causal effects on the world, and as such, on plausible metaphysical assumptions about causality and the physical world, they would not be part of it.Negative facts would serve no purpose in the world, but they were introduced as supposedly on a par with positive facts, which undoubtedly serve a purpose. 4 Incompatibilism is a view as general as the problem it aims to solve.The problem of negative truth isn't one exclusively for empiricists.Some philosophers may simply share the sentiments Mumford expresses, 3 I owe the reference to Grzegorczyk to a referee for this journal. 4For further development of this argument, see Molnar (2000, 76ff).according to whom negative facts, to which negative truths would appear to correspond, are too mysterious to be taken seriously.['Everything that exists is positive'] has almost a ring of aprioricity about it.How can these facts exist and be negative?Indeed, how can any existent really be negative?(Mumford 2007, 49) Mumford's description is, just like Ayer's quoted earlier, quite neutral, which indicates that the problem is not specifically tied to a correspondence theory of truth either.Existence itself seems to be essentially positive.Nothing negative exists.The problem of negative truth is a very ancient and general one.A closely related problem, the problem of how there can be false speech or thought, posed itself already to Parmenides, who warns us that never shall this be forcibly maintained, that things that are not are, but you must hold back your thought from this way of enquiry, nor let habit, born of much experience, force you down this way, by making you use an aimless eye or an ear and tongue full of meaningless sound: judge by reason the strife-encompassed refutation spoken by me.(Kirk, Raven and Schofield 1983, 248, Fragment 294) Parmenides concludes: "What is there to be said and thought must needs be: for it is there for being, but nothing is not" (Ibid., 247, Fragment 293).But then 'false speech' or 'false thoughts' are meaningless and neither speech nor thought at all.The ancient problem of falsity received profound systematic treatment and conceptual clarification by Plato.In the middle section of the Sophist, the Eleatic Stranger is lead to commit "the patricide of father Parmenides" and to "insist by brute force both that that which is not somehow is, and then again that that which is somehow is not" (Plato 1997, 241d).In the Euthydemos, Socrates encounters two sophists who deny the possibility of false thoughts and disagreement. 5Plato's vivid presentation of the perplexities surrounding negation, falsity and negative truth challenges philosophers of any background to address the problem.The issues discussed here apply to a wider range of positions than just some forms of empiricism.For the purposes of illustration, however, I confine consideration to empirically minded philosophers. 
6 Fregeanism Reference to and knowledge about numbers is also something many philosophers have found problematic, maybe even more so than negation. This, too, is a problem that is particularly acute for empiricists, for whom reference and knowledge must ultimately be explained in terms of sense perception and causal relations between speakers or thinkers and objects referred to or known about. We do not experience numbers in sense perception and we cannot stand in causal relations to them, as they are abstract objects. So how can we refer to them, let alone know anything about them? Maybe empiricists are even forced to admit that there are no numbers at all, which makes the ubiquity, usefulness and applicability of propositions apparently about them even more of a mystery. Frege, although himself not touched by empiricist worries, formulated an attractive starting point for a solution. The logicist view that arithmetical truths are analytic opens up prospects for explaining how we manage to refer to numbers even though they are abstract objects by explaining numerical identities in terms of one-to-one correlations, or even of explaining away reference to numbers altogether. According to the characterisation of numerical identity that Frege attributes to Hume in Foundations of Arithmetic §63, the number of Fs equals the number of Gs if and only if there is a one-to-one correspondence between the Fs and the Gs. Letting # abbreviate 'the number of' and ∃! 'there is exactly one', what is often called Hume's Principle has the following formalisation: 6 There are of course also empirically minded philosophers who have no problem with negation. I have already mentioned Russell. Aristotle, too, has no qualms about appealing to negation in the formulation of the most certain and fundamental principle in Metaphysics Γ.3, that "the same attribute cannot at the same time belong and not belong to the same subject in the same respect" (Aristotle 1985). Mill also belongs to this group: "When the positive name is connotative, the corresponding negative name is connotative likewise; but in a peculiar way, connoting not the presence but the absence of an attribute. Thus, not-white denotes all things whatever except white things; and connotes the attribute of not possessing whiteness. For the non-possession of any given attribute is also an attribute, and may receive a name as such; and thus negative concrete names may obtain negative abstract names to correspond to them" (Mill 1882, 41f). For opposition to the incompatibilist account of negation, see Armstrong (2004, 55ff), Kürbis (2019, Ch. 4), Molnar (2000), Taylor (1952, 1953). The last paper is a response to a paper of Ayer's on negation quoted earlier. For a commentary on Molnar's paper, see Kürbis (2018). For ease of exposition, we can additionally require that ∀x∀y(Rxy → Fx & Gy), so that R is a relation the domain of which are the Fs and the range of which are the Gs.
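The symbolisation announced in the preceding paragraph did not survive extraction; the following is the standard second-order rendering of Hume's Principle, using # for 'the number of' and ∃! for 'there is exactly one' as introduced above. The exact notation of the original may differ.

```latex
% Standard second-order statement of Hume's Principle, using # for 'the number of'.
% The precise symbolisation in the source may differ; this is the usual rendering.
\[
\#F = \#G \;\leftrightarrow\; \exists R\,\bigl[\forall x\,(Fx \rightarrow \exists! y\,(Gy \wedge Rxy))
\;\wedge\; \forall y\,(Gy \rightarrow \exists! x\,(Fx \wedge Rxy))\bigr]
\]
```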
It became possible for the first time to combine the basic tenet of empiricism with a satisfactory explanation of the nature of logic and mathematics.(Carnap 1963, 46f) Hale and Wright developed Frege's thoughts in a direction which, although they themselves may not be motivated purely by empiricist worries either, can plausibly be appropriated by empiricists.Their aim is to explain how statements of a given kind can be understood as involving reference to abstract objects and can yet remain, at least in principle, humanly knowable, given that the objects they concern are outside space and time and in consequence can stand in no sort of epistemologically relevant, causal relations to human knowers.[...] A statement of numerical identity---in the fundamental case, a statement of the kind: the number of Fs = the number of Gs---is true, if true, in virtue of the very same state of affairs which ensures the truth of the matching statement of one-to-one correspondence among concepts, and may be known a priori if the latter may be so known.(Hale and Wright 2002, 118f) Despite Frege's own nonchalance regarding epistemological concerns, logicism provides philosophers reluctant to posit a special faculty of the mind to account solely for our capacity of reference to and knowledge about numbers, be it Kantian or Gödelian intuition, with an attractive account of how we, as physical beings situated in space and time, nonetheless manage to have epistemic access to numbers. In Foundations of Arithmetic, Frege also provided appealing definitions of a priori, a posteriori, synthetic and analytic: It is necessary to find a proof [of a proposition] and to follow it down to the primitive truths.If in that process all that is met with are the general logical laws and definitions, then the truth is analytic [...] If, however, it is not possible to give a proof without appealing to truths which are not of the general logical kind, but are related to a special field of knowledge, then the sentence is synthetic.For a truth to be a posteriori we require that its proof cannot proceed without appealing to facts, i.e. to unprovable truths without generality that contain statements about specific objects.If, on the other hand, it is possible to give a proof from purely general laws that can neither be proved nor stand in need of proof, then the truth is a priori.(Frege 1990, §3) For want of a better term, I shall call the view that arithmetical truths are analytic in Frege's sense Fregeanism.The terminology is not supposed to suggest that Fregeanism incorporates all of Frege's philosophy.It is only a thesis on the nature of mathematical truths and the definitions of a priori, a posteriori, synthetic and analytic.In my terminology, Frege is of course a Fregean, but Fregeans need not accept all of Frege's views.The most promising way of spelling out Fregeanism is to count Hume's Principle as analytic, but philosophers who accept Hume's Principle as analytic need not be Fregeans in my terminology, if they do not accept Frege's definition of analyticity. 
Fregeanism is independent of empiricism. However, as Carnap's position or an empiricist Neo-Fregeanism are well motivated, for the purposes of this essay I am interested in an empirically minded Fregean. I do not require my empiricist to reject the existence of abstract objects outright, but only that he does not accept their existence lightly: a philosopher who demands a strong argument, ideally a proof, before accepting the existence of a particular kind of abstract object, and hence who does not just accept that there are numbers, but demands that this must be established.

Incompatibilist Fregeanism

Fregeanism and incompatibilism deserve and have received serious consideration. They are initially plausible and provide promising ways of accounting for philosophically puzzling phenomena, especially in a broadly empiricist setting. Some philosophers may wish to accept both views. I will argue that, attractive though it is, this position is problematic. 7

Footnote 7: My aim is to map out logical space and assess the general prospects for combining two views, while avoiding the details of how any particular philosopher might combine them. The possibility of combining incompatibilism and Fregeanism has not attracted much attention in the literature. However, Neil Tennant accepts both, logicism and incompatibilism (see Tennant 1987, 1999, 2009). Various members of audiences to whom I presented this paper have expressed sympathy for the combination. I'll say a few words about Tennant in a later footnote. Although Tennant's approach is attractive and elegant, discussing it in more detail here would distract from what is at issue. His explanations of concepts of arithmetic may strike some readers as problematic for reasons independent of my concerns in this paper, as he appears to define the concept 'the number of' and '0' at the same time.

Consider 'a is red and green all over'. By Frege's definition, it is neither synthetic nor analytic, neither a priori nor a posteriori, as it is not true: being red is incompatible with being green all over. Only true propositions are classified by Frege's definition: false propositions do not have proofs, and to classify a proposition, it is necessary to find a proof of it, says Frege. That is slightly unusual, but it is merely a slightly unusual use of terminology. We can amend the definition by stipulating that false propositions belong to the same categories as their negations.

The axioms of logic are a priori. Axioms of logic are propositions which can neither be proved nor do they stand in need of proof (from something else), while at the same time they are proved from purely general laws: they are their own one-step proofs. The same can be said of 'Being red is incompatible with being green all over'. It is a primitive, general law expressing a truth that anyone who has mastered the concepts 'red' and 'green' is in a position to recognise. Thus it is a priori. But its (one-step) proof is related to a special field of knowledge, namely colours, so it is synthetic. 8 Arguing indirectly, 'Being red is incompatible with being green all over' cannot be anything but synthetic a priori. It is not a posteriori, as it does not contain reference to specific objects. It does not follow from only general logical laws and definitions, so it is not analytic. Assuming every truth can be classified by Frege's definitions, it is synthetic a priori.
If establishing the incompatibility of F and G appeals to a special field of knowledge concerning the properties F and G, then 'Being F is incompatible with being G' is synthetic a priori.Frege's definitions assume that if there is a proof of a proposition, there is one in which every step is made explicit according to the axioms of the system and the additional assumptions necessary to derive the proposition.If negation is defined in terms of incompatibility, any such fully analysed proof of a proposition ~A must appeal to propositions about incompatibilities.If these propositions are synthetic, ~A itself is synthetic. A Fregean can employ an axiomatisation of logic in which negation is primitive.The incompatibilist needs to adopt one in which incompatibility is primitive.Proofs in second-order logic plus Hume's Principle remain valid for the incompatibilist Fregean, but they require analysis into more basic steps where any appeal to negation is replaced by an appeal to incompatibility.As by the Fregean definition fully analysed proofs are decisive for establishing whether a proposition is analytic or synthetic, a priori or a posteriori, although arithmetical propositions certainly remain a priori, because the newly analysed proofs will only appeal to purely general laws that can neither be proved nor stand in need of proof, to ensure that they remain analytic, the incompatibilist Fregean needs to avoid appeal to propositions that refer to a specific field of knowledge.Incompatibilism is motivated by examples such as 'Being red is incompatible with being green', which involve properties of physical objects.These would not do for arithmetic, as arithmetic is not tied to the existence of colours.An incompatibilist could extend the account of negation to arithmetic by appealing to primitive incompatibilities involving the numbers, such as 'Being identical to 1 is incompatible with being identical to 2'.However, these appeal to a special field of knowledge, namely the numbers, and thus any proposition proved by appeal to them would be synthetic.Thus this route is not open to the Fregean incompatibilist.The fundamental idea of Frege's logicism was that names referring to numbers are not primitive, but defined in purely logical terms.In other words, the Fregean incompatibilist must assume that there are propositions of the form 'Being F is incompatible with being G' which are analytic, i.e. that there are purely logical properties that are incompatible with each other. Are there Analytic Incompatibilities? 
As numerical identities are explained in terms of Hume's Principle, we might expect numerical non-identities to be provable on the basis of incompatibilities involving one-to-one mappings.Let's consider an example of the kind Frege uses to motivate his account.Suppose you're laying the table.You map the knives and forks one-to-one onto each other and attempt to map them one-to-one onto the plates.You fail and one plate is left over.You have discovered that the forks and knives are equinumerous, but that the plates are not equinumerous to them.Trying to express the non-identity 'The number of plates is not identical to the number of knives' in terms only of what things are and incompatibility, we could say that being that left over plate is incompatible with being mapped onto a knife and fork.Generalising, attempting to map Fs and Gs one-toone onto each other leads sometimes to success, sometimes to frustration.If the number of Fs is not identical to the number of Gs, attempting to map the Fs one-to-one onto the Gs will always leave some Gs or Fs out. Arithmetic cannot be based on an activity of mapping, anymore than it can be based on the activity of laying the table.If we appeal to a mental faculty of carrying out such mappings or mathematical constructions in the abstract, it looks as if we once more appeal to a special field of knowledge, so that propositions about incompatibilities between sizes of sets turn out to be synthetic.The incompatibilist Fregean should follow a similar path to Frege's and use the example as purely heuristic to motivate a general account suitable for the foundations of arithmetic.Following this line of thought, the incompatibilist Fregean needs to specify purely logical primitive incompatibilities between sizes of sets that can be appealed to in establishing numerical non-identities. Let's assume that there are more Gs than Fs.Then for any one-to-one relation R with the Fs as domain, for every F, there is exactly one G such that R relates them, but there are some Gs which are not identical to any of those that are related by R to an F. The incompatibilist Fregean needs a general characterisation of one-to-one relations that map the Fs into but not onto the Gs in terms of incompatibility and without using negation.It must apply to all cases in which there is no one-to-one relation between Fs and Gs.Only then can we expect to be able to prove that such incompatibilities hold, independently of being able to carry out certain constructions or not.It is not enough to say that assuming there to be a oneto-one correlation entails two incompatible statements: what these might be is precisely the question we are trying to answer.The incompatibilities we are looking for need to be general, so we cannot rely on some characterisation involving the particular natures of the Fs and the Gs.It would be too general to lay down that 'Being one of the Fs is incompatible with being one of the Gs', which is true if there are as many green as there are red things.We might try the following: If for any relation R, R's being a one-to-one mapping onto the Fs is incompatible with R's being a one-toone mapping onto the Gs, we can conclude that there is no one-to-one mapping of the Fs onto the Gs and that the number of Fs is not identical to the number of Gs.This, though, is not an incompatibility that can simply be appealed to in a proof: it is itself the kind of thing that stands in need of proof. 
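To make explicit what has to be re-expressed, the condition that a given one-to-one relation R maps the Fs into but not onto the Gs can be written out in the usual negation-involving notation (given here only for comparison, using the symbols F, G and R already in play; R's injectivity is presupposed, as in the text):

∀x(Fx → ∃!y(Gy & Rxy)) & ∃y(Gy & ¬∃x(Fx & Rxy))

That is, every F is related by R to exactly one G, and some G is left over. The second conjunct is carried entirely by the negative existential '¬∃x', and it is precisely this that the incompatibilist Fregean must replace by an appeal to incompatibility without smuggling negation back in.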
Let's go back to the heuristic point that some Gs are 'left over' by any oneto-one mapping R of the Fs into the Gs.Being one of those Gs is incompatible with R mapping an F to it.This isn't good enough, as we cannot always indicate the Gs, but it shows that we need to draw a general distinction between two kinds of Gs: between those such that R maps some F to them and the others.The problem the incompatibilist faces is that they cannot use negation, as we normally would, to draw general distinctions.The most obvious differentiation between the two kinds of Gs is that one kind of G is such that R relates an F to them, while the other Gs are not of that kind, but that makes use of negation.We might try the following: being one of the Gs to which R relates an F is incompatible with being one of the other Gs.But that still requires a specification of a way of establishing the otherness of those Gs, and besides, what could 'being other' mean other than 'not being identical to any of those'.As a final attempt, for any oneto-one relation R, there is a G such that being the value of R for an F is incompatible with being it.But even waiving worries about what 'being incompatible with being it' might mean, the problem remains of how to establish in general that this is the case for a given G.9 To solve these difficulties, the incompatibilist Fregean might introduce a further notion: difference.We can then say there are some Gs which are different from those Gs such that R relates an F to them.Doing so is of course to admit that incompatibility alone is insufficient, as a further primitive is needed for a satisfactory theory.More importantly, however, there is a crucial difference between difference and incompatibility.We have introduced 'difference' merely to avoid using 'not': it has no further content than 'not identical'.By contrast, incompatibility is a rich and interesting notion: there is an attempt at giving it content independently of our interest in negation.The metaphysics of colours gives rise to some of them being incompatible with each other.Other properties exhibit a similar phenomenon.Difference, on the other hand, appears to have no other content than non-identity and as an additional primitive it is just 'not identical' rewritten into one word.The move of adding a primitive notion of difference is rather desperate.It is either ad hoc or a thinly veiled appeal to negation. 10 Contrary to expectation, one-to-one correspondences are not a promising source of analytic incompatibilities.But maybe there are others.Frege accepted that there are two logical objects, the True and the False, so that T = F is a logical falsehood.However, such an approach is not congenial to an incompatibilist: if there are such objects, we might as well define negation in terms of them rather than incompatibility.It may be that an incompatibilist can accept the existence of these two logical objects, but then the burden of proof is clearly on the incompatibilist to provide such an account and establish its superiority over an account that begins with truth and falsity. 
Footnote 10: One might even go further, as suggested by a referee, and observe that the statement that R is a one-to-one correspondence between the Fs and the Gs involves an implicit appeal to negation: R maps different Fs to different Gs, and to say that x and y are different is to say that x and y are not identical, which appeals to negation. Thus right from the start, a logicism building on Hume's Principle is incompatible with incompatibilism. However, an incompatibilist like Tennant would deny that the concept of one-to-one correspondence implicitly appeals to negation, as negation is not appealed to in Tennant's rules for one-to-one correspondence: those rules are entirely positive.

There is a more general point here. The use of classical truth tables is not congenial to the incompatibilist account. Classical truth tables appeal to independently given notions of truth and falsity. 'A' is false if and only if '~A' is true, hence anyone finding negation problematic will find falsity problematic, too. The incompatibilist aims to explain negation in terms of incompatibility: ~A is true if and only if there is some true proposition incompatible with A. The same explanation will work for falsity, using the former equivalence. So on the incompatibilist account, falsity is to be explained, just like negation, in terms of incompatibility. Besides, Price observes that giving the meaning of negation in terms of its truth table also depends on a primitive notion of incompatibility, as it "clearly depends on our knowing that truth and falsity are incompatible" (Price 1990, 226). Nonetheless, incompatibilism is not biased against classical logic. The references to Grzegorczyk and Tennant in the current paper may suggest that an incompatibilist view is more congenial to intuitionist, rather than classical, negation. There are, however, also incompatibilists who have no qualms about accepting classical logic. Price is one of them. Demos, Cheyne, Pigden and Veber express no hesitations about classical logic. Peacocke (1987, 163f) argues that his explication of the meaning of negation in terms of primitive incompatibility validates double negation. Brandom (2008, 126f) is a further example of a classicist incompatibilist.

According to an influential generalised treatment of negation discussed by Dunn, the negations of propositions are evaluated in terms of a primitive incompatibility relation ⊥ between states, situations or possible worlds. Intuitively, "⊥ is to be thought of as a kind of incompatibility relation, i.e., [s] ⊥ [t] means that [s] asserts something which [t] denies" (Dunn 1993, 332). One might try to appropriate this explanation to the present case in the search for analytic incompatibilities, and say that [s] ⊥ [t] holds in case [s] and [t] contain propositions that are metaphysically incompatible. This, however, still leaves the crucial question unanswered. ~p will only count as analytically true at a world if the incompatibility relation amongst worlds may hold as a matter of analytically incompatible propositions being asserted at each world. So unless analytic incompatibilities are forthcoming independently of the definition of when the negation of a proposition is true at a world, so that we can say that there are cases where one world asserts a proposition that is analytically incompatible with a proposition that another asserts, the definition is not going to produce analytically true negations. 11
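For orientation, the evaluation clause standardly associated with this kind of incompatibility semantics (formulations differ in detail between presentations, so this is only the generic form) runs:

~A is true at a state s if and only if every state t at which A is true is incompatible with s, i.e. s ⊥ t.

On this reading, whether ~p comes out analytically true turns entirely on whether the relevant incompatibilities between states can themselves be certified as analytic, which is just the question pressed in the main text.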
Another option for an analytic incompatibility might be 'Everything is identical to everything'. For Frege, at least, this is a logical falsehood that can be formulated without using negation. It is false because there are at least two objects, the True and the False. Even better, Hume's Principle entails that there are infinitely many objects. But this is not a suitable answer for an incompatibilist Fregean. The reason why 'Everything is identical to everything' is logically false is that there are at least two different objects. Hume's Principle only entails the existence of infinitely many objects if we have a means of expressing that there are different objects, and the proof appeals to negation in the definition of 0 as the number of things equinumerous to the non-self-identical ones. Even if we contrived a new concept 'being incompatible with being itself', this still leaves the question of how to secure that being equinumerous to the objects falling under that concept is not equinumerous to the number of things falling under the concept 'identical to 0'. 12 As argued, adding a primitive notion of difference to secure this is unconvincing.

As a final attempt, one might observe that in second order logic it is possible to express logical falsehoods without using negation, as the falsum constant ⊥ is definable as ∀p.p, and that it is possible to prove that there are at least two different concepts or properties, one under which everything falls and one under which nothing falls, so that 'All concepts are identical' or 'All properties are identical' can serve as an analytic falsehood that does not appeal to negation. The crux here, however, as before, lies with 'different'. 'All concepts are identical' or 'All properties are identical' is absurd only if there are two different concepts or properties, that is to say, two concepts or properties that are not identical. Besides, to say that there is a concept under which nothing falls blatantly appeals to negation. That all propositions are true is also absurd only if there are at least two different propositions, one true and one false, or one incompatible with the other. 13 The former may be true as a matter of logic, but it relies on the notion of difference, hence negation, and besides, it appeals once more to independently given notions of truth and falsity, which, as argued, is no good for the incompatibilist Fregean. The latter option just reiterates the problem: those two incompatible propositions would have to be analytically incompatible to be of use to the incompatibilist Fregean, and we have not been able to find any such propositions.

Conclusion

No analytic incompatibilities are forthcoming. The conclusion suggests itself that the only propositions that are analytically incompatible are analytic propositions and their negations. 14
But this is no good for the incompatibilist Fregean, who aims to define negation in terms of incompatibility and is in need of analytic incompatibilities for the foundations of mathematics. So it looks very much as if incompatibilism is incompatible with Fregeanism. I conclude that Fregean incompatibilism, if not incoherent, has tricky questions to answer. The burden of proof is certainly on the Fregean incompatibilist to make the case that the position is tenable. Of course, it would be possible to adopt different definitions of 'analytic' and 'synthetic'. But that would not change the fact that much of arithmetic on an incompatibilist account would turn out to be synthetic according to Frege's definition. And hasn't Frege himself given good reasons against taking arithmetic to be synthetic according to his definition? 15

Footnote 15: [...] Theaetetus is not flying]. For if Theaetetus were flying this fact would not exist. Thus positive facts constituting what Theaetetus is doing necessitate negative truths about what he is not doing. (Ibid., 259) Negative truth is explained in terms of the things there are and what they exclude or with what they are incompatible.
9,332.4
2019-02-21T00:00:00.000
[ "Philosophy" ]
Big Data Analytics: Towards a Model to Understand Development Equity for Villages in Indonesia

The aim of this paper is to design a prototype model that can be used to better understand development equity for villages in terms of public monitoring and evaluation. In designing the model, the research reviewed several techniques of big data analytics as well as the alignment of strategic business objectives and technology. The prototype model was also tested using several types of data. Although some obstacles were found, as also reported in the reviewed literature, this paper presents a prototype model that can guide researchers and practitioners in understanding ways to capture public monitoring. Furthermore, information systems researchers could use this prototype model in further research to gain a deeper understanding of the role of big data analytics for development, particularly in developing countries.

Introduction

The village government in Indonesia is the unit of government nearest to society and is directly involved in government programs for several basic development areas. Because of its importance, the Indonesian government has established a specific regulation on village governance, Act No. 6 of 2014 [1]. Nevertheless, there are issues in controlling village development managed by village governments. Reports of village misappropriation, including corruption in the use of village funds, still occur [2]. By using IT as an enabler, public control of village development, particularly by the village community over their own village, is expected to make a positive contribution to village development in Indonesia. A technology that has the potential to meet this expectation is big data analytics.

Although big data analytics is perceived to have strategic potential benefits for developing countries [3], there are challenges in the implementation phase, such as data access and privacy, human resource capacity, IT infrastructure, IT management, and financial capabilities [4][5][6][7]. Furthermore, some developing countries face issues with the digital divide and with analytical processes such as methodology, interpretation accuracy, analytical methods and anomaly detection [6,7]. This has motivated this study to further explore the creation of a prototype model to address those challenges.

In the attempt to address this situation, the paper is organised as follows. Section 1 describes the background and issues related to the topic. Section 2 presents several related studies which highlight possible technologies related to big data for development and techniques of big data analytics. Section 3 describes a way to address those problems by designing a prototype model that captures specific public information and serves as a tool to support public control over village development. Before developing the model, in order to create alignment between strategic objectives and technology, we mapped several strategic objectives as stated in the Act (Act No.
6, 2014) to technology, in this case big data analytics [8]. This process included mapping the objectives to critical success factors, critical information, and big data analytics solutions. The model presented in Section 3 is tested using several types of data, namely call data records (CDR) and Twitter data. Several previous studies suggest that this kind of service model could provide additional benefits by presenting real-time information [3,9]. Following this, Section 4 presents and discusses the results of the study in addressing the challenges. Finally, a summary ends the paper as the final section.

The use of big data analytics has several potential areas for development in the developing world. According to the results of the Bellagio big data workshop in 2014, which brought together several activists, researchers and data experts, those potential areas include advocating and facilitating; describing and predicting; facilitating information exchange; and promoting accountability and transparency [3]. The term later used for these purposes is big data for development, coined by UN Global Pulse [7].

Recently there have been several research efforts and applications related to big data for development using various data sources such as social media [9], cell phone usage [10][11][12], digital transactions [13], online news media [9,14], and administrative records [15]. Some of these applications use public datasets, which have become critical elements of big data for development [7]. The combination of the data capture process and information management used in those studies is adopted in the prototype model.

The data analysis used in this paper refers to big data analytics as part of business intelligence and analytics (BIA), which has been an important topic for researchers and industry for more than two decades [16][17][18]. Big data analytics is described as analysis techniques applied to data sets so large and complex that they require unique data storage, management, analysis, and visualization technologies [3,17,19], which are also used in the prototype model in this paper.
Prototype design

Figure 1 shows the design of the prototype model. As previously mentioned, two types of data are used: short message service (SMS) data from CDR and Twitter data. There are several reasons for using these types of data. Firstly, Indonesia has the largest number of active Twitter users among developing countries [20]. This shows that netizens in Indonesia often express their opinions and critiques on current issues, including those related to village development, using social media. Secondly, although social networking can be used as a source of data, there are issues related to the digital divide in Indonesia, and the evidence shows that Twitter data are only valid in major cities. Therefore, we used SMS data from CDR as a second source of data, as SMS is used more widely in rural areas in Indonesia. Finally, these two types of data are considered sufficiently representative for the prototype model; however, it is possible to use other types of data as long as they have already been filtered into clean data. After retrieving these types of data, the next step is the anonymization process as part of privacy protection. Anonymizing the CDR data is done by deleting the cellular ID, while the user ID is erased from the Twitter data (a minimal sketch of this step is given after the mapping list below). Following this step is the analysis process shown in Figure 2, which includes information analysis and topic models. There are at least two experimental environment models with different methods for testing: virtualization and a physical cluster. However, in a real situation there will be a third environment, cloud computing, using the same technical methods as the physical cluster model. Figure 3 shows these three models.

Results

As mentioned before, we carried out the mapping of business objectives to the technology we used. Table 1 shows the results of this mapping process. From this result, we made a classification of the needs for big data analytics. The classification shows that the prototype model is directed to provide the following information (last rows of Table 1, each giving the objective, the corresponding critical information, and the monitoring and reporting solution):

[...] achievement of accountable public services in the village; monitoring and reporting system for public services in the village.

vii) Improve the socio-cultural resilience of village communities in order to realize village communities that are able to maintain social unity as part of national security; the existence of activities to improve the social resilience of village culture; monitoring and reporting system for improving village resilience.

viii) Promote the economy of rural communities and overcome the national development gap; the existence of activities that can promote the economy of rural communities; system for monitoring and reporting economic activities and/or the use of village funds for village development.

ix) Strengthen village society as the subject of development; the existence of activities that make village society the subject of development; system for monitoring and reporting village development activities.
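A minimal sketch of the anonymization step referred to above might look as follows. It is illustrative only: the field names (timestamp, text, user_id, geo) and the choice to pseudonymize rather than simply drop the Twitter user ID are assumptions made for the example, not the schema or policy of the actual prototype.

```python
import csv
import hashlib

def pseudonymize(value, salt="village-prototype"):
    """Replace an identifier with a one-way hash so that records remain
    linkable while the original cellular/user ID is no longer stored."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def anonymize_cdr(in_path, out_path):
    """Drop the cellular ID from CDR/SMS records, keeping only the fields
    assumed to be needed by the later text-analysis step."""
    with open(in_path, newline="", encoding="utf-8") as fin, \
         open(out_path, "w", newline="", encoding="utf-8") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=["timestamp", "text"])
        writer.writeheader()
        for row in reader:
            writer.writerow({"timestamp": row["timestamp"], "text": row["text"]})

def anonymize_tweet(tweet):
    """Erase the Twitter user ID (here replaced by a pseudonym) and keep
    only the text and coarse geo-location used by the topic models."""
    return {
        "user": pseudonymize(str(tweet.get("user_id", ""))),
        "text": tweet.get("text", ""),
        "geo": tweet.get("geo"),
    }
```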
Based on the analysis and design results, the data capture process from the data sources began. Initial data capture from Twitter was done for 5 days with the initial query word 'desa' (which means village). The total initial data obtained from Twitter with geo-location Indonesia was 28,000 units. In contrast, we had difficulties accessing CDR data from the telco operators. The cellular service providers who owned the data were bound by government policy prohibiting access to the data. As a result, we obtained only five different types of CDR data, including calling and SMS data. We used those data to extract the structure and create a model of the data filters in the prototype model.

The testing process shows that anonymizing the Twitter data takes up to 180 min. We also found that during the data capture process, particularly in rural areas, there is a possibility of losing the connection to the Internet. This indicates that infrastructure availability and reliability are crucial in an actual implementation. After the anonymization process is finished, the resulting data are converted into text form for classification of information using topic models based on latent Dirichlet allocation (LDA) [21]; the functions we used are based on online learning for latent Dirichlet allocation [22] (a minimal sketch is given at the end of this section). Some modifications were made in order to make these functions suitable for the research purpose.

The result of the LDA shows that several topics match the information classification in the analytics process. Figure 4 shows the percentage of topics that fit the result of Table 1. Nevertheless, 32% of the data were not relevant to the strategic objectives. Therefore, care while filtering the data is considered necessary. The final results of the information from the prototype model can then be directed to specific information needs, such as public sentiment on village development, as undertaken by UN Global Pulse to understand food price crises [9], as well as other purposes summarized in the Bellagio big data workshop [3].
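The topic-classification step can be approximated with an off-the-shelf online LDA implementation. The sketch below uses scikit-learn's LatentDirichletAllocation with the online (mini-batch) variational Bayes learning method, which follows the same online-learning approach as the functions cited above; the toy corpus, topic count and batch size are placeholders rather than the study's configuration.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for the anonymized Twitter/SMS texts about villages.
docs = [
    "dana desa digunakan untuk perbaikan jalan",
    "pelayanan publik di desa semakin baik",
    "dugaan korupsi dana desa dilaporkan warga",
]

# Bag-of-words representation of the documents.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Online (mini-batch) variational Bayes, as in online LDA.
lda = LatentDirichletAllocation(
    n_components=3,            # number of topics (placeholder)
    learning_method="online",  # online learning variant
    batch_size=2,
    random_state=0,
)
doc_topic = lda.fit_transform(X)

# Show the top words per topic so they can be matched against Table 1.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```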
Conclusion and future work

The research described in this paper shows that there are obstacles to using big data analytics for villages in Indonesia, such as data access, data privacy, IT infrastructure, the digital divide, and anomaly detection. In spite of those obstacles, we found a relation between the village management objectives regulated by the Government of Indonesia and the topics of public discussion on the Internet as we captured the data. This shows that a big data analytics solution could serve as a tool to understand development equity for villages and provide public control and evaluation functions. However, it is important to note that future work on big data analytics for villages in Indonesia will need higher-level exploration and longer research periods to obtain stronger evidence of validity and effectiveness. The prototype model can also be compared with and combined with the current model of monitoring and reporting used by village governments. For instance, the Government of Indonesia has provided a model of a manual reporting system. By comparing and combining both the analytics results and the reporting information, we can get a new perspective on the report in terms of public monitoring and evaluation. This would be easier if the domain desa.id (a domain type for villages in Indonesia) were adopted for both models. This may lead to another model for a deeper understanding of equitable development for villages.

Fig. 4. Percentage of topics that fit the result of the mapping objectives.

Table 1. Mapping the objectives to achieve the alignment of technology.
2,454.2
2018-04-23T00:00:00.000
[ "Computer Science" ]
ppBAM: ProteinPaint BAM track for read alignment visualization and variant genotyping

Abstract

Summary: ProteinPaint BAM track (ppBAM) is designed to assist variant review for cancer research and clinical genomics. With performant server-side computing and rendering, ppBAM supports on-the-fly variant genotyping of thousands of reads using Smith–Waterman alignment. To better visualize support for complex variants, reads are realigned against the mutated reference sequence using ClustalO. ppBAM also supports the BAM slicing API of the NCI Genomic Data Commons (GDC) portal, letting researchers conveniently examine genomic details of vast amounts of cancer sequencing data and reinterpret variant calls.

Availability and implementation: BAM track examples, tutorial, and GDC file access links are available at https://proteinpaint.stjude.org/bam/. Source code is available at https://github.com/stjude/proteinpaint.

Introduction

Mutation detection by next-generation sequencing is becoming mainstream in both cancer research and clinical diagnosis. Common practice includes running variant callers such as GATK (McKenna et al. 2010) for detecting variants. Manual review is essential to ensure variant calling accuracy by visually examining read alignment over mutations, especially for indels and complex mutations with alignment inconsistencies. Current BAM visualization tools such as IGV (Robinson et al. 2017), pileup.js (Vanderkam et al. 2016), BamView (Carver et al. 2013), the UCSC Genome Browser BAM track (Fujita et al. 2011), and BamSnap (Kwon et al. 2021) provide limited support for manual review of variant calls (Supplementary Table S1), primarily because they visualize reads against only the reference genome and lack support for complex mutations. Here, we present the ProteinPaint BAM track (ppBAM), a web-based BAM visualization tool based on the ProteinPaint platform (Zhou et al. 2021), designed for accurate and efficient variant review. ppBAM leverages ProteinPaint's server-side computing and rendering to be performant for analyzing variant calls from deep sequencing data. It is well-suited to support clinical genomics efforts such as early detection of pathogenic complex mutations and is easily integrated into local or cloud-based clinical genomics workflows to assist manual review. The cancer research community can also benefit from ppBAM through its support of the NCI Genomic Data Commons (GDC) BAM slicing API (Heath et al. 2021).

Improved variant genotyping by ppBAM

Typical variant calling results provide only the number of reads supporting the reference and alternative alleles, and do not report reads matching neither allele or uninformative reads with equal similarity to both alleles. As an improvement, ppBAM divides reads into up to four groups against a variant by aligning each read to both the reference and alternative alleles using a Rust implementation of the Smith–Waterman algorithm (Köster 2016) (Supplementary Fig. S1) that can process thousands of reads on-the-fly (see Supplementary Tutorial). The first two groups are reads supporting the alternative and reference alleles in the case of single-allele variants. In the case of multi-allele variants, multiple groups can be created, one for each of the alternative alleles (Supplementary Fig. S2). The next group contains reads not supporting either allele, which may be due to a wrong base call at the variant region or to the presence of a different alternative allele. The last group contains ambiguous reads with equal similarity to both alleles.
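The grouping logic can be illustrated with a small sketch that uses Biopython's local pairwise aligner in place of the Rust Smith–Waterman implementation used by ppBAM; the scoring parameters, the score margin and the "supports neither" threshold are arbitrary illustrative choices, not the tool's actual settings.

```python
from Bio import Align

# Local (Smith-Waterman-style) aligner with simple illustrative scoring.
aligner = Align.PairwiseAligner()
aligner.mode = "local"
aligner.match_score = 1
aligner.mismatch_score = -1
aligner.open_gap_score = -2
aligner.extend_gap_score = -1

def classify_read(read_seq, ref_allele_seq, alt_allele_seq, margin=2):
    """Assign a read to one of four groups by comparing its local alignment
    scores against the reference-allele and alternative-allele sequences."""
    ref_score = aligner.score(ref_allele_seq, read_seq)
    alt_score = aligner.score(alt_allele_seq, read_seq)
    best = max(ref_score, alt_score)
    if best < 0.6 * len(read_seq):           # poor fit to both: supports neither
        return "none"
    if abs(ref_score - alt_score) < margin:  # equally similar: uninformative
        return "ambiguous"
    return "alt" if alt_score > ref_score else "ref"

# Toy example: a 4-bp deletion relative to the reference.
ref = "ACGTACGTGGGGACGTACGT"   # reference-allele sequence around the variant
alt = "ACGTACGTACGTACGT"       # alternative (deletion) allele sequence
read = "CGTACGTACGTACG"        # read spanning the deletion junction
print(classify_read(read, ref, alt))   # expected to print "alt"
```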
Reads from each group are displayed separately for visual comparison (Supplementary Fig. S3). Realignment of reads supporting alternative allele Reads supporting the complex indels can be challenging to comprehend when viewing their alignments on the reference genome, hindered by alignment inconsistencies between reads which otherwise have the same read sequences in the variant region. To solve this problem, the ppBAM allows the user to realign a group of reads against a mutated reference genome which can clearly display read support for a complex mutation. The realignment is done using multiple-sequence alignment tool Clustal Omega (Sievers and Higgins 2018). Visualization of cancer genome sequencing results from NCI Genomic Data Commons ppBAM supports access to human cancer genome sequencing data hosted in NCI GDC. Given a sample or file ID, ppBAM will list BAM files from the sample, as well as somatic mutations of the same sample by querying the GDC API. Users can either select a mutation, or enter a genomic region or custom mutation to view the read alignments from the GDC BAM file. On-the-fly genotyping is performed on a mutation for users to review the read support ( Supplementary Fig. S4). ppBAM genotyping resolves alignment inconsistencies in a TP53 intragenic deletion case At an 18-bp deletion in TP53, ppBAM classifies out of a total of 1853 reads, 654 reads as supporting reference allele, 959 as supporting alternative allele, 62 as not supporting either allele, and 178 as ambiguous (Supplementary Fig. S3). Reads classified as supporting alternative allele displays alignment inconsistencies including mismatches, softclips, insertions, or combinations of these ( Fig. 1a and b). ppBAM verifies these reads are indeed supporting the deletion event by realigning to the deletion allele (Fig. 1c). ppBAM also reveals a large number of ambiguous reads which are attributed to a sequence duplication (GCAGCGC) flanking the deletion, making these reads uninformative for genotyping ( Supplementary Fig. S5). ppBAM genotyping pinpoints a wrong variant call A wrong variant call is easily flagged by the variant genotyping feature of ppBAM. When visualizing reads alignment for a previously published complex mutation in cancer genome, ppBAM displays only 3 reads supporting the mutation and 106 reads supporting neither reference nor alternative allele, indicating wrong variant call ( Supplementary Fig. S6a). Manual correction to the variant increases the number of reads to 102 supporting the alternative allele ( Supplementary Fig. S6b) and is further confirmed by realignment ( Supplementary Fig. S6c). Summary ppBAM track is an efficient web tool to support read alignment visualization and variant review, using on-the-fly genotyping. Users can use it standalone to access rich BAM resources in NCI GDC or private BAM files using local ppBAM instance, or integrate it into any web-based clinical genomics workflow to assist variant review. Supplementary data Supplementary data are available at Bioinformatics online. Conflict of interest None declared. Funding This work was supported by a contract from Leidos and NCI (20X068F3), and by American Lebanese Syrian Associated Charities (ALSAC).
1,394.4
2023-05-01T00:00:00.000
[ "Computer Science", "Biology", "Medicine" ]
Colonic Fermentation Promotes Decompression sickness in Rats Massive bubble formation after diving can lead to decompression sickness (DCS). During dives with hydrogen as a diluent for oxygen, decreasing the body’s H2 burden by inoculating hydrogen-metabolizing microbes into the gut reduces the risk of DCS. So we set out to investigate if colonic fermentation leading to endogenous hydrogen production promotes DCS in fasting rats. Four hours before an experimental dive, 93 fasting rats were force-fed, half of them with mannitol and the other half with water. Exhaled hydrogen was measured before and after force-feeding. Following the hyperbaric exposure, we looked for signs of DCS. A higher incidence of DCS was found in rats force-fed with mannitol than in those force-fed with water (80%, [95%CI 56, 94] versus 40%, [95%CI 19, 64], p < 0.01). In rats force-fed with mannitol, metronidazole pretreatment reduced the incidence of DCS (33%, [95%CI 15, 57], p = 0.005) at the same time as it inhibited colonic fermentation (14 ± 35 ppm versus 118 ± 90 ppm, p = 0.0001). Pre-diveingestion of mannitol increased the incidence of DCS in fasting rats when colonic fermentation peaked during the decompression phase. More generally, colonic fermentation in rats on a normal diet could promote DCS through endogenous hydrogen production. another gas than hydrogen as the diluent. It could directly increase inert gas load during hyperbaric exposure and contribute to bubble formation during the decompression phase. The aim of this study was to assess whether stimulating colonic fermentation in fasting rats by feeding them mannitol (MNL) before the dive would increase the incidence of DCS. Inhibiting this fermentation pathway by administering oral metronidazole (MTZ) was also tested. After a rapid decompression, DCS signs and cell counts (platelets, leukocytes) were compared in rats force-fed with MNL and controls, either conditioned with MTZ or not. Results Weight. Weights were similar six days before compression in both groups (384 ± 36.5 g with MTZ versus 406 ± 37 g; NS). Weight loss between force-feeding time and removal from the hyperbaric chamber was no different whether rats were force-fed with MNL or water (-0.5 ± 2.2% versus -0.6 ± 2.2%; NS). Hydrogen and carbon dioxide in exhaled air. In the group without MTZ, more hydrogen was exhaled one hour before compression by rats force-fed with MNL than those force-fed with water (118 ± 90 ppm versus 3 ± 6 ppm; p = 0.0001). In MNL-fed rats, the increase was significantly lower among those treated with MTZ (14 ± 35 ppm versus 118 ± 90 ppm; p = 0.0001). In water-fed rats, MTZ had no effect on hydrogen exhalation (Fig. 1). In the group without MTZ, carbon dioxide production was the same one hour before compression in rats force-fed with MNL as in those force-fed with water (9.9 ± 4.1 ml.min −1 versus 8.8 ± 2.5 ml.min −1 ; NS). In MNL-fed rats, MTZ intake was not accompanied by a significant change in carbon dioxide production (9.9 ± 5.0 ml.min −1 versus 9.9 ± 4.1 ml.min −1 ; NS). Behavior and clinical observations. In the group without MTZ, the incidence of DCS of all types (death, neurological signs, labored breathing) was significantly higher in rats force-fed with MNL than in those force-fed with water (80% ± [56,94] versus 40% ± [19,64]; p = 0.0098) (Fig. 2 Finally, no deaths, neurologic signs or labored breathing were found in the sham pressurization group. Blood cell counts. 
In the sham pressurization group, there was no significant variation from force-feeding at the time of removal from the chamber, and no variation difference in hematological parameters between rats force-fed with MNL and those force-fed with water: platelets (-4.8 ± 18.2% versus -3.8 ± 16.5%; NS), white blood cells (-6.9 ± 37.4% versus + 5.1 ± 22.0%; NS) or hematocrit (-1.9 ± 21.7% versus + 9.2 ± 30.6%; NS). In the group without MTZ, a significantly greater decrease was observed in platelet count at the time of removal from the chamber in rats force-fed with MNL compared to those force-fed with water (-17.1 ± 11.0% versus -5.0 ± 33.7%; p = 0.0358) (Fig. 3). Prior MTZ intake did not affect the platelet count in either rats force-fed with MNL (-14.1 ± 25.2% versus -17.1 ± 11.0%; NS) or in rats force-fed with water (-3.6 ± 12.6% versus -5.0 ± 33.7%; NS). The white blood cell count was slightly lower at the time of removal from the chamber in rats force-fed with MNL with no prior MTZ intake (-40.0 ± 30.0%; p = 0.0781), which was not seen in either rats force-fed with water (3.8 ± 65.4%; NS) or those force-fed with MNL after MTZ treatment (-6.3 ± 51.3%; NS). However, variation differences between these subgroups were not significant. Hematocrit at force-feeding time was not different between groups with and without MTZ (44.6 ± 6.6% versus 47.3 ± 7.4%; NS). Hematocrit variation between force-feeding time and the time of removal from the chamber was not different between rats force-fed with MNL and those force-fed with water (-0.4% ± 30 versus 21 ± 1.6. 1%; NS). Arterial blood gases. In the group without MTZ, there was no significant difference in pH between rats force-fed with MNL and those force-fed with water (7.38 ± 0.05 versus 7.37 ± 0.07; NS) or in partial carbon dioxide pressure (44.1 ± 6.5 mmHg versus 45.8 ± 7.4 mmHg; NS). Discussion We have shown in rats that ingesting MNL four hours before a dive-so that peak hydrogen production coincides with decompression-increases the incidence of DCS according to both clinical criteria (death, neurological signs, labored breathing) and in terms of reduction of platelet and white blood cell. Prior MTZ treatment reduces the risk of DCS to a level comparable to that we found in rats that did not ingested MNL. Several hypotheses may explain these observations. Previous studies have shown that bubble formation and the incidence of DCS are highly dependent on body weight in rats 15,16 . Protocols using rats with a body weight above 350 g led to severe neurological symptoms and death. In our study, weights were similar in both groups and subgroups. Dehydration is known to promote DCS 17 . Mannitol or 1.2.3.4.5.6-hexanehexol (C 6 H 14 O 6 ) is a polyol ("sugar alcohol"). Following ingestion, it is not digested in the stomach or small intestine but is broken down in the large intestine through bacterial fermentation. We also know that 10% or 20% MNL solutions are hypertonic. High doses of MNL may have a laxative effect with subsequent fluid transfer through the gut wall 18 leading to dehydration. Greater weight loss was not observed in rats force-fed with MNL compared to those force-fed with water. In addition, no significant differences were observed in hematocrit variation between rats force-fed with MNL and those force-fed with water. The MNL laxative effect was not sufficient to cause dehydration as a result of loss in feces. The amount of hydrogen measured in exhaled air reflects the rate of colonic bacterial fermentation 14 . 
We have shown that ingesting MNL before diving significantly increases fermentation and exacerbates the risk of DCS. If MTZ is administered before force-feeding, fermentation is inhibited and the risk of DCS drops back down to baseline. To prove that the risk of DCS is related to bacterial fermentation, it has to be demonstrated that MTZ does not itself affect the risk of DCS. Several studies have shown that taking oral MTZ could have beneficial effects, not only local effects on inflammatory bowel disease 19 but also systemic effects in subjects like burn victims 20 . This beneficial effect is not merely a result of MTZ's antibiotic activity but also because of its antioxidant effect. This antioxidant effect may be related to modulation of the gut microbiota, specifically by stimulating growth of the population of bifidobacteria in the caecum 21 . In this study, no specific effect of MTZ was observed on the incidence of DCS so it seems that colonic bacterial fermentation during decompression after MNL ingestion promotes DCS. In addition, preliminary tests suggest that some non-fasting rats have a high spontaneous fermentation rate in the absence of MNL ingestion. Colonic bacterial fermentation could therefore be a general risk factor for DCS in rats, even those on a normal diet. Hypercapnia (high partial carbon dioxide pressure [PaCO 2 ] in the blood) is known to exacerbate the risk of DCS 22 . Colonic MNL fermentation is accompanied by the formation of carbon dioxide; some of this is eliminated via the anus but the rest crosses the intestinal barrier and diffuses throughout the body before being eliminated via the respiratory system. Before pressurization, there was no more CO 2 in the air exhaled by rats force-fed with MNL than in that exhaled by those force-fed with water. In addition, there was no PaCO 2 difference between rats force-fed with MNL and those force-fed with water. The higher incidence of DCS in rats force-fed with MNL cannot therefore be explained by hypercapnia. A fraction of the hydrogen generated by MNL fermentation in the large intestine diffuses across the gut wall into the body and this endogenous hydrogen could act exacerbate the risk of DCS. It could directly increase the inert gas load during hyperbaric exposure and form bubbles during decompression. Before diving, it could also help to seed the initial formation of inert gas bubbles from nuclei-the micro-bubbles that are at the origin of desaturation events 2 . Intraperitoneal administration of H 2 -rich saline 24 hours before a dive may have a protective effect against DCS in rats 23 . Recently, it was shown that successive deep dives impair endothelial function and enhance oxidative stress in humans 24 . Owing to its ability to rapidly diffuse across membranes, molecular hydrogen (H 2 ) could reach and react with cytotoxic reactive oxygen species (ROS) and thus protect against oxidative damages. H 2 has been recognized as a potential free radical scavenger which can selectively reduce the hydroxyl radical (OH•) and peroxynitrite anion (ONOO-) 25 . Ohsawa et al. provided evidence that inhaled H 2 had anti-oxidative and anti-apoptotic activities that protect the brain against ischemia-reperfusion injury and stroke 25 . Others have shown that H 2 inhalation protects rabbits against spinal cord ischemia-reperfusion injury 26 . 
Thus, while increasing the inert gas load at the time of decompression appears to be deleterious, the place of endogenous hydrogen in the prevention and treatment of DCS remains to be explored. Conclusions Colonic fermentation of MNL ingested before diving increases the incidence of DCS in fasting rats when the peak fermentation rate coincides with the decompression phase. More generally, colonic fermentation in rats receiving a normal diet without any MNL could be sufficient to increase the risk of DCS. The generation of hydrogen by fermentation and its subsequent diffusion through the body could increase this risk. In humans, whether colonic fermentation during decompression promotes DCS remains to be assessed. Different foodstuffs are associated with differential fermentation rates so pre-dive dietary modification could cut down the risk of experiencing DCS. It is also known that the intestinal microbiota-including its fermentation capacity-can be modulated with the administration of pre-or pro-biotic agents. Finally, it might be worth investigating the effect of administering substances that absorb intestinal gases just before a dive. Materials and Methods Study population. All procedures involving experimental animals were in line with European Union rules (Directive 86/609) and French law (Decree 87/848). This study was approved by the IRBA Ethics Committee. The investigator (NV) has been accredited (Number 83.6) by the Health and Safety Directorate of our Department, as required by French rules R.214-93, R-214-99 and R.214-102. Only male Sprague-Dawley rats (Charles River Laboratories, France) were used in this experiment in order to avoid fluctuations due to female hormone cycles. Rats were kept at 22 ± 1°C in a 12:00/12:00-h light/dark cycle (lights on at 7:00 a.m.) with food (Global Diet 2018, Harlan, Italy) and water ad libitum. Before experiments, rats were housed in an accredited animal care facility. A total of 93 rats were exposed to compressed air to induce decompression stress and bubble formation. The rats were randomly divided into two groups, the first one (n = 41) was treated for six days before pressurization with MTZ, administered as a solution contained 250 mg MTZ and 6 g sucrose per 100 ml; the expected daily MTZ intake was 40 mg per rat or 100 mg.kg −1 . This dose is higher than the dose providing a significant decrease in pulmonary hydrogen excretion after lactulose ingestion in humans and animals [27][28][29] . The second group (n = 52) was given a control solution containing 6 g sucrose per 100 ml. Twelve animals in this second group were subjected to sham pressurization (with no pressure increase in the chamber). Each group was further randomly divided into an experimental subgroup and a control subgroup (Fig. 4). Four hours before pressurization, rats in the experimental subgroup were force-fed with a bolus of 2.5 ml of 20% MNL solution (0.5 g or 0.125 g.100 g −1 ) while the control subgroup was force-fed with 2.5 ml of water, so that decompression coincided with the peak of hydrogen exhalation. Indeed we determined in a preliminary test that exhaled hydrogen peaked 4-6 hours after MNL ingestion (Fig. 5), while the peak was observed 8 hours after lactulose ingestion in gnotobiotic rats 30 . 
The dose was also determined from preliminary tests in which a rise in exhaled hydrogen was observed quite similarbut with less inter individual variability -to the one obtained in non-fasting rats force-fed with 2.5 ml of water (135 ± 47 ppm versus 76 ± 126 ppm; NS) (see supplementary information). Hyperbaric procedure. Batches of 8 freely-moving rats in two different cages (4 per cage) were subjected to a hyperbaric protocol in a 200-liter hyperbaric chamber fitted with three ports for observation. Each batch consisted of a heterogeneous sample of rats which had been treated with MTZ or not and had been pre-fed with either MNL or water. We used a staged decompression, capable of causing ischemic impairments of the spinal cord with variable severity in the rat 31,32 . This model therefore makes it possible to move closer to the DCS observed in humans. In fact, in recreational diving medullary DCS is frequent and, in most cases, occurs in the absence of a fault in procedure or a fast ascent to the surface 8 . Compression was induced at a rate of 100 kPa.min −1 to a pressure of 1000 kPa (90 msw) and then maintained for 45 minutes in air. At the end of the exposure period, the rats were decompressed down to 200 kPa at a rate of 100 kPa.min Compressed air was generated using a diving compressor (Mini Verticus III, Bauer Comp, Germany) coupled to a 100-liter hyperbaric chamber at 3.10 4 kPa. The oxygen analyzer was based on a MicroFuel electrochemical cell (G18007 Teledyne Electronic Technologies/Analytical Instruments, USA). Water vapor and CO 2 produced by the animals were respectively captured with Seccagel (relative humidity: 40-60%) and soda lime (< 300 ppm captured by the soda lime), respectively. Gases were mixed by an electric fan. The day-night cycle was respected throughout. The temperature inside the hyperbaric chamber was measured using a platinum-resistance temperature probe (Pt 100, Eurotherm, France). All these variables were controlled by a dedicated computer. Hydrogen and carbon dioxide in exhaled air. The hydrogen in the exhaled air derives from the fermentation of carbohydrate in the colon. The carbon dioxide derives from metabolic processes as well as fermentation processes in the gut. To measure hydrogen and carbon dioxide in exhaled air, we assumed that the ventilation rate was constant over time and the same in all rats, i.e. 225 ml.min −1 : a resting value of 27.27 ± 2.39 ml.min −1 .100g −1 (mean standard deviation) has been documented for Sprague-Dawley rats 33 . For each measurement, rats were individually introduced at surface pressure into a clean, dry polyvinyl chloride (PVC) cylinder (internal diameter 75 mm, length 200 mm, internal volume 883 mL). Both sides were sealed by plastic disks with holes. Air was insufflated at one side and collected at the other one. Air was supplied by an aerator (Rena Air 200 ® , France) at a constant rate of 225 ml.min −1 (checked by the rise of a bubble in an inverted 100 ml test-tube, pierced at the bottom). After 5 min to allow the mixing of the gases inside the cylinder (taking into account the dead space not occupied by the rat), hydrogen (ppm) and carbon dioxide (percentage) were measured in the air coming out of the cylinder. Hydrogen was measured using a portable expired hydrogen analyzer (Gastrolyser ® , Bedfont Scientific Ltd, UK). Carbon dioxide was measured using a portable gas exchange analysis system (Cosmed ® K4b 2 , Italy). 
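Under the stated assumptions of a constant flushing flow of 225 ml.min−1 and negligible hydrogen in the inlet air, the measured outlet concentration converts directly into a hydrogen output rate; for example, for the mean peak of 118 ppm recorded in mannitol-fed rats:

H2 output ≈ 225 ml.min−1 × 118 × 10−6 ≈ 0.027 ml.min−1

This back-of-the-envelope conversion is given only to show how the ppm readings relate to whole-body hydrogen production; it is not a value reported in the study.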
Data were recorded four hours before compression just before force-feeding and one hour before compression. Behavior and clinical observations. At the end of the decompression, each rat was put into an individual cage and assessed at will for 30 minutes by observers who were unaware of the animal's pre-compression conditioning. The following symptoms were considered as manifestations of DCS: labored breathing, paresis or difficulty moving (including limping, failure to maintain balance, sideways gait, falling, difficulty getting up after a fall) or death. Rats were considered as having a paresis when they didn't reach grade 4 of the "Motor Score" presented by Gale et al. 34 . Times of onset were also recorded. Problems with fore or rear limbs were classified as being due to neurological DCS. Blood cell counts. Blood cell counts were carried out in an automatic analyzer (ABCvet ® , SCIL, France) on samples taken 4 hours before the dive and then again 30 minutes after surfacing. Red cells, leukocytes and platelets were counted in 20 μ l samples taken from the tip of the tail and diluted in an equivalent volume of 2 mM EDTA (Sigma, France). A sample of arterial blood was taken from each rat by direct aortic puncture and blood gases were measured using an i-STAT System (Abbott Laboratories, Illinois, USA). Numerical results were expressed as median interquartile range for quantitative variables, and percentage confidence interval [95%] for dichotomous variables. A contingency table was used for independence and association tests, coupled with a Fisher Exact or Chi 2 test of significance. Differences between two groups were analyzed using a Mann-Whitney test and a Wilcoxon test was used for matched comparisons within groups. A difference was considered as significant for p-values < 0.05.
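As an illustration of the contingency-table and rank-based analyses described here, the comparison of DCS incidence between mannitol-fed and water-fed rats (80% versus 40% in the group without MTZ) can be run with SciPy. The 2 x 2 counts below are reconstructed from the reported percentages assuming roughly 20 rats per subgroup, and the platelet figures are toy values; neither corresponds to the study's raw data.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Illustrative 2 x 2 table: rows = force-feeding (mannitol, water),
# columns = outcome (DCS, no DCS). Counts assume ~20 rats per subgroup,
# matching the reported incidences of 80% and 40% (not the exact raw data).
table = [[16, 4],
         [8, 12]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_value:.4f}")

# The same module covers the unpaired comparisons of continuous variables
# (e.g. platelet-count changes) with the Mann-Whitney U test.
mnl_platelet_change = [-17.5, -20.0, -15.0, -18.0, -14.0]   # toy values, %
water_platelet_change = [-5.0, -2.0, -8.0, -4.0, -6.0]      # toy values, %
u_stat, p_mw = mannwhitneyu(mnl_platelet_change, water_platelet_change)
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_mw:.4f}")
```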
4,333
2016-02-08T00:00:00.000
[ "Biology" ]
Bio-process performance, evaluation of enzyme and non-enzyme mediated composting of vegetable market complex waste Vegetable markets have become major sources of organic waste. When such waste is diverted to landfills, it not only increases the landfill loading but also contributes to greenhouse gas emissions. Of the many technologies available for handling such a large waste stream, composting has proven very effective for decades. Enzyme- and non-enzyme-mediated aerobic composting of vegetable market complex waste (VMCW) have been investigated. Conventional composting techniques, although capable of handling large quantities of waste, are found to consume more time, which is a clear disadvantage. In the present investigation, the pre-cultured seed inoculum used for vegetable market complex waste shortened the typical composting period from 45 to 9 days for the first time. Rapid size and volume reduction of VMCW was also witnessed. The organic degradation of VMCW was observed as 42% (82 ± 2.83% to 40.82 ± 0.61%), with a volume reduction from 0.012 m3 to 0.003 m3 within 9 days. Enriched nutrient (NPK) levels of the compost bio-fertilizer were recorded as 0.91% w/w, 0.5% w/w and 1.029% w/w, respectively. Compost maturity observed through X-ray diffraction (XRD) analysis of the manure confirmed the conversion of the crystalline structure of the compost particles to an amorphous form and the mineralization of organic matter during composting. Thus, the fermented pre-cultured seed inoculum favored an enhanced nutrient level with a shortened composting time. Moisture content, temperature, C/N ratio, pH and coliform bacteria were chosen as parameters to control the quality, stability and efficiency of the composting process 8,[14][15][16][17][18] . It has been reported that microbial decomposition of organic matter occurs in the thin liquid (hygroscopic water) film around the surface of the particles enriched by water. During composting, the thin films of water surrounding individual particles are observed to dry off, making the microorganisms decomposing the organic matter inactive. Thus, investigators concluded that the hygroscopic water around the waste particles is a deciding factor in determining the efficiency of the composting process 19 . Temperature is reported to be a governing factor having an impact on the microbial activity during the composting process 20,21 . Temperatures below 20 °C and above 60 °C contribute to slowing down the composting process. Having observed the influence of high temperature and reduced moisture content on the efficiency of composting, investigators 22,23 incorporated aeration not only to achieve optimal control of temperature and moisture content but also to maintain the oxygen level 20,21,24,25 . Traditional anaerobic and aerobic decomposition methods were reported to achieve mature compost in several months [26][27][28] . Recently developed techniques based on aerobic decomposition reduced the composting period to about 4-5 weeks 29,30 . Further reduction of the time was attempted by a few other researchers, who concluded that seed inoculums could increase the microbial population and produce the desired enzymes, enhancing the conversion of organics and reducing odorous gas emissions 31,32 . However, little research on enzyme activation and biokinetic modelling of aerobic composting has been reported.
Therefore, for the first time, an attempt has been made in formulating a precultured seed inoculum that can degrade VMCW by aerobic composting. To achieve size and volume reduction of VMCW by aerobic composting in a reduced time of seven days, a composter incorporated with shredding, aeration and mixing of VMCW has been designed, fabricated and used for the present study. This compact, easy to operate compacter for degradation of VMCW by aerobic composting is a novel attempt that not only reduces the size and volume of the waste but digest it in lesser time making the process viable. Further, the biokinetic modelling 33 of the VMCW using the newly fabricated composter as also been investigated. Sample collection and shredding. Stratified random sampling technique was followed in order to eliminate the areas of non-uniform properties or concentrations which were identified and stratified. Thereafter, simple stratified random sampling technique was followed, so as to obtain a representative homogenous sample. The random sampling was done, where the vegetable market complex waste (VMCW) such as tomato, carrot, cabbage, radish, Ridge gourd, bottle gourd, brinjal, chayote, and beetroot were collected in mixed form and pooled further into a composite homogenized sample for the enhancement of composting processes. Materials and methods Mechanical shredder for VMCW breakdown. The VMCW shredding process was carried out in two stages. Primarily, the collected VMCW sample was broken into manageable pieces approximately 1.5 to 2 cc, which was then fed into a secondary shredder for further size reduction approximately to 0.5-0.6 cc. Aerobic Composting Reactor specifications. The composting reactor prototype as per design depicted in Fig. 1a is made up of SS316, with the sizing of 45 × 30 × 60 in cm (L × B × H) and equipped with a manual stirring lever ( Fig. 1b) with continuous aeration system through the air pump (capacity 230 V TID-75-P) (Fig. 1c) to maintain uniform homogenous mixing. The air enters the composting vessel ( Fig. 1b) through the nozzle (Fig. 1c) and it was distributed through perforated holes in order to achieve uniform distribution of air throughout the vessel. Preparation and feeding of an enzyme into the composting reactor. An attempt has been made to prepare an enzyme (pre-cultured seed inoculum) using Jaggery and curd. Gabhane et al. 34 have investigated the compost using different additives/catalyst and concluded that high microbial enzymatic activity, organic matter degradation, bulk density, quality of finished compost including gradation, were observed in the Jaggery additives. In addition to this, Partanen et al. 35 , have reported that the acceleration and the mature end product of composting was achieved by the addition of curd. The investigation on composting of municipal solid waste with jaggery and curd as seed inoculum was prepared by mixing 200 g of Jaggery and 10 g of curd with 1 L of water and pre-cultured for 5 days at room temperature under static condition. Bulking agent (rice husk) was added for increasing the surface area of the substrate which could increase the microbial activity 36 . Eventually, the bulking agent was sieved out at the end of the composting process which can be used as seed compost since it has a greater number of bacteria attached. Operation of aerobic bio composter. A sample of 12 kg (wet weight) Vegetable Market Complex Waste (VMCW) was taken and cut into small pieces of 0.5 cm in length using the mechanical shredder. 
The shredded vegetable market complex waste and seed inoculum with rice husk in the ratio of 16:1 was transferred into the composting reactor. The sample was collected on a daily basis at different depths before turning peddles and stored in a plastic container to monitor the composting process at room temperature. The parameters such as pH, TS, VS, volume reduction and temperature (at different depths) of the VMCW were noted at every 24 h. were characterized with total solids (TS) (APHA, 2540B), volatile solids (VS) (APHA, 2540G) and fixed solids (FS) (APHA, 2540E) using standard methods for the examination of water and wastewater 37 . The moisture content of the sample was measured after oven-dried at 105 °C for overnight 37 . The oven-dried sample was ground using pestle and mortar and then used for further analysis. The organic matter was calculated from the ash after igniting the sample (dry weight) in a muffle furnace at 550 °C for 2 h 37 . The pH of the compost was determined immediately upon collection of samples from the reactor by adding distilled water under the condition of solid to water ratio (1:10 w/v) 9 . The temperature in the bioreactor was measured manually during sample collection using a glass thermometer. CHNS analysis. The elemental compositions such as carbon (C), hydrogen (H), nitrogen (N) and sulphur (S) of each organic waste were determined using CHNS analyzer (Model Euro vector 3000 series, Italy) (ASTM D-5291) available in CATERS CSIR-CLRI, Chennai, India and quantified based on the dry weight. The total and volatile solids were estimated using standard methods for the examination of water and wastewater 37 . An aliquot of the sample was oven-dried at 105 °C and kept in a desiccator prior to analysis then known weight of the dried VMCW was taken into an aluminium boat and the sample initial weights were noted. Thereafter, the boats were loaded in the CHNS analyzer and increased peak, the calibration graph was used to quantify the CHNS 38 . The oxygen content was calculated by the difference between VS and the total sum of C, N, H, S 39 . The calorific energy value (CEV) was estimated using Eq. (1) proposed by Channiwala et al. 40 . where, CHNO and A represents a percentage of carbon, hydrogen, nitrogen, oxygen and ash on dry weight basis. XRD analysis. The overall structural changes during the decomposition of wastes were studied using X-Ray diffraction method. Prior to analysis, the sample was dried at 50 °C for 24 h and was ground to fine powder. The spectra were recorded on Bruker D8 Powder XRD instrument using the source Copper K alpha (CSIR-CLRI, Chennai). Microbial growth rate in the composter. 1 g (wet weight) of composting sample was taken from the composting reactor at 24 h interval and serially diluted with 9 mL of sterile distilled water up to 10 −9 dilution. Then 1 mL of the 10 −2 to 10 −8 the diluted sample was dispensed into a sterile petri dish with a molten Nutrient agar and gently swirls to solidify. The solidified plates were stored for 24-48 h. The microbial growth was expressed in CFU/g dry weight of total solids. Kinetics. Substrate concentration (S), and the consumption rate of the substrate (r) were observed during the composting period. The reciprocals of reaction rate (1/r) and Substrate concentration (1/S) were computed using www.nature.com/scientificreports/ a Lineweaver-Burk plot. 
In the Lineweaver-Burk plot, obtained by linear regression of 1/r against 1/S, the reciprocal of the y-axis intercept gives the value of K 3 , whereas K m is obtained from the slope of the line (slope = K m /K 3 ). Results and discussion The aerobic composting was performed using novel pre-cultured seed inoculums in addition to the aeration and peddling system. The evaluation of the size and volume reduction over the composting period revealed the effect of the jaggery- and curd-based pre-cultured inoculum, which is detailed in the following sections. Characteristics of the VMCW. The composition of VMCW collected from the Tambaram Municipality was quantified as 72% organics, 18% plastics, 4% paper and 6% miscellaneous material, based on completely source-segregated, collected and weighed samples ( Fig. 2a). Since it was segregated at source, the VMCW taken for the aerobic composting operation contained more than 80% organic waste ( Fig. 2b), and a near-neutral pH of 6.8 was observed, which favors microbial activity. The initial total solids content was observed in the range of 19-22% (Fig. 2b). The initial volatile solids content of 82.11 ± 2.83% (Fig. 2b) illustrated a high fraction of organics in the vegetable market complex waste, consistent with results reported by Pellera et al. 41 . The empirical formula was calculated as C 16 H 25 NO 4 , as reported for primary sedimentation tank sludge 42 , which indicates that the waste contains a low degradable/non-degradable nitrogen content, reflected in the higher C/N ratio at the end of the experiment (Table 1). Effects of moisture and temperature. Initial decomposition was carried out by mesophilic microorganisms that thrived at temperatures of 29.7 to 36 °C. Due to fine grinding of the VMCW, a high moisture content (80.22 ± 0.34%) was observed initially, which reduced the O 2 uptake level and lowered the biodegradability potential of the VMCW, consistent with results reported by Kalamdhad et al. 43 . In order to reduce the impact of the moisture content, the bulking agent rice husk was mixed with the VMCW, which further improved the surface area and dryness for microbial growth inside the reactor. Eventually, matured compost with a reduced moisture content of 18.37 ± 0.29% (9th day) was obtained at the end of the composting process (Fig. 3a) 44 . The decreasing trend in moisture content was attributed to the temperature and metabolic activity of the VMCW, whereas the sample had a high moisture content in the initial phase of composting, reducing the oxygen supply. However, the reactor itself recovered from this drop in oxygen supply. The temperature was measured at various depths (Fig. 4a), and every depth reached 54 ± 9.27 °C on the third day, demonstrating that the heap had reached the thermophilic stage. The observed rapid reduction of moisture content (64%) on the 3rd day was attributed to the heat acquired by the system through the metabolic and enzymatic activity of the microorganisms during protein degradation (Fig. 4a). This was confirmed by a decreased C/N ratio, the degradation rate (VS 6.6%/day) and the reduction rate (volume reduction 3.2 L/day) [45][46][47][48] . Eventually, a temperature drop was observed at the final stage, after which the temperature remained steady, attributed to the insufficient remaining bulk volume (3 L) and the loss of the excess heat liberated earlier; this indicates the completion of the aerobic composting process and the production of matured manure. Effects of pH on composting.
pH is one of the most influential parameters in the biochemical metabolic reactions of solid waste; it governs the ability of microorganisms to degrade complex substances and increases the solubility of mineral elements during composting. An initial pH of 6.8 ± 0.08 (Fig. 3b) was observed for the VMCW, which enhances the growth of microorganisms and can lead to high volume reduction, as supported by Khan et al. 49 and Zorpas et al. 50 . Further, the decrease of the reactor pH to 6.23 on the 2nd day was attributed to rapid particulate hydrolysis by the high microbial population of 2.9 × 10 6 CFU/g (Fig. 8) and the enzymatic activity of aerobic microbial metabolism, in addition to the high temperature of 53 °C on the 2nd day, which increased the hydrogen ion concentration. Similarly, reports have revealed that the disintegration of larger-sized substrate inhibits the oxygen supply, which can lead to anaerobic conditions that cause acid accumulation in the substrate and reduce the pH. Subsequently, a gradual increase in pH to 7.8 was observed as the days elapsed (3rd to 9th day) (Fig. 3b), which was attributed to proteinaceous degradation and the formation of ammonium bicarbonate, contributing to an alkalinity that persisted until the end of the experiment. This favored adequate growth of microorganisms under moderately alkaline conditions. In support of this, Gabhane et al. 34 observed a pH decrease to 5.5 on the 3rd day followed by an increase to 8.5 that lasted until the end of the matured-manure production period. Effects of carbon-nitrogen ratio on composting. The maximum and minimum C/N ratios of 17.39 and 2.35 were observed on the 5th and 9th days, respectively (Fig. 3c), attributed to the low and high complexity of carbohydrate and protein compound degradation. The study focused on the effect of the pre-cultured seed inoculum on the compost and its maturity (C/N ratio) (Fig. 3c). The C/N ratio was slightly reduced (11.95) during the thermophilic phase (3rd day), attributed to a high rate of degradation of carbohydrates while the aeration was disturbed by the hydrolysis. This was in line with the study reported by Chai et al. 51 . Consequently, the slow degradation of protein compounds into ammonia reduced the nitrogen content, and an increasing trend in the C/N ratio up to 17.39 was therefore observed until the 5th day, which correlated well with the high volume reduction (56%) on the 5th day (Fig. 3d). This was consistent with results reported by Awasthi et al. 52 . Subsequently, the fall in the C/N ratio could be regarded as being caused by microbial respiration due to the rapid uptake of available organic carbon by microorganisms. Thus, large amounts of CO 2 were liberated, leading concomitantly to an increase in the proportion of total nitrogen of the medium from 3.88% to 10.39% (Fig. 4b), which was consistent with the results of Karak et al. 53 and Awasthi et al. 52 . Eventually, the minimum carbon (24.35%) and maximum nitrogen (10.33%) contents on the 9th day (Fig. 4b) gave a C/N ratio lower than the initial one, evidencing thorough aerobic degradation and maturity of the manure, consistent with the maximum volume reduction of 77% (Fig. 3d) 54 . Volume and organic strength reduction in compost. The volumetric reduction of VMCW depends on the microbial degradation of the organic content available for the composting reaction and the moisture level inside the composter.
In enzymatic pre-cultured seed inoculum-based composting, the reduction in size of the material depends on the degree of hydrolysis of the substrate by the bacteria replenishing the enzymes, which break down the complex polymeric organic materials into simpler ones. The volume of the compost material decreased day by day with increasing composting time (Fig. 3d), from 0.012 m 3 on the 0th day to 0.003 m 3 on the 9th day. The low initial rate of volume reduction (0.0002 m 3 /day on the 1st day and 0.0006 m 3 /day on the 2nd day) was attributed to the insufficient oxygen supply for aerobic microbes resulting from the high initial moisture in the substrate, which reduced the air supply at the top and middle surfaces of the composter. This was consistent with the results reported by Makan et al. 55 . A high volume reduction was recorded on the 3rd day (0.0032 m 3 /day), which was attributed to the initial settlement of VMCW in the reactor due to the reduced moisture content and was further confirmed by the high microbial activity on the 3rd day (3.6 × 10 6 CFU/g) (Fig. 8), consistent with the volatile solids reduction beyond the 3rd day. Thereafter, gradual volume reduction was observed with elapsed time. Eventually, no volume reduction was observed on the 8th and 9th days of the composting period, as the reduction depends entirely on the volume of voids in the waste; furthermore, attainment of the shrinkage limit of the waste due to the reduced moisture content, along with its conversion to matured manure, meant that no further volume reduction occurred. A decreasing trend in VS was noticed from the 2nd to the 8th day (Fig. 3d) during the mesophilic and thermophilic phases of composting, and this change in the volatile solids content during composting was used to assess the mineralization and decomposition of organic matter 56 . Eventually, the VS content reached a relatively stable value on the 9th day, which reveals that the substrate conversion had reached a stable phase and the compost was matured. It was observed that 82 ± 2.83% of the TS was volatilised during the composting process, and this loss contributed to the degradation of organic compounds ( Table 2) 57 . A high correlation (R 2 = 0.862) (Fig. 5a) between the percentage of volatile solids and the percentage of volume reduction was observed for VMCW, indicating the stabilization of the organic content into manure, which is normally mineralized to nitrogen (N), phosphorus (P) and potassium (K) in addition to the other components of the organic mass in the volume of the composting reactor. The energy loss during the conversion of waste into manure was observed to be 5600 kJ/kg of TS, attributed to the decomposition of carbohydrates and protein compounds. In support of this, the organic carbon decreased by 29.375%, which was consistent with the observed volatile solids reduction of 40.6 ± 0.61%. This confirms the utilization of carbon for aerobic degradation and mineralization to mature compost, further supported by the high correlation (R 2 = 0.913) between volatile solids and CEV (Fig. 5b). Moreover, the calorific energy value (CEV) of the manure was observed to be 19,450 kJ/kg of DM, evidencing its maturity, which was further confirmed by the reduction of the moisture content to less than 15% on the 9th day. This was consistent with the result of 17,600 kJ/kg reported by Stainforth et al. 58 for wheat straw.
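The CEV values quoted above were estimated from the ultimate (CHNS) analysis using the correlation of Channiwala et al. cited in the methods. As a hedged illustration, the sketch below applies the published Channiwala-Parikh form of that correlation; the coefficients are taken from the literature and are an assumption here, since Eq. (1) is not reproduced in this text, and the input percentages are placeholders rather than the measured values.

```python
def cev_kj_per_kg(c, h, o, n, s, ash):
    """Higher heating value (kJ/kg, dry basis) from an ultimate analysis.

    Assumed form of Eq. (1), the Channiwala-Parikh correlation:
    HHV = 349.1*C + 1178.3*H + 100.5*S - 103.4*O - 15.1*N - 21.1*A
    with all inputs in wt% on a dry-weight basis.
    """
    return 349.1 * c + 1178.3 * h + 100.5 * s - 103.4 * o - 15.1 * n - 21.1 * ash

def oxygen_by_difference(vs, c, h, n, s):
    """Oxygen content estimated as volatile solids minus C, H, N and S (wt%)."""
    return vs - (c + h + n + s)

# Placeholder composition (wt%, dry basis), for illustration only
c, h, n, s, ash, vs = 45.0, 6.0, 3.9, 0.3, 18.0, 82.0
o = oxygen_by_difference(vs, c, h, n, s)
print(round(cev_kj_per_kg(c, h, o, n, s, ash)), "kJ/kg")
```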
In contrast, Raclavska et al. 59 reported that maturation and stabilization, involving the conversion of ammoniacal nitrogen to nitrate nitrogen and further transformations, took 14 weeks, with a calorific energy value of the manure of 10,000 kJ/kg DM at a moisture content of 12%. This is in line with the results reported by many researchers, who evaluated CEVs ranging from 7,000 to 20,000 kJ/kg. For instance, a CEV of 8,092 kJ/kg for bio-solids and wood chips was observed by Ekinci et al. 60 . Similarly, Steppa et al. 61 reported 9,000-11,000 kJ/kg for organic waste, which was lower than that reported by Sobel and Muck 62 for poultry droppings (12,800 kJ/kg), whereas CEVs of 3,649 kJ/kg for paper mill sludge and poultry manure compost and 5,111 kJ/kg for straw and poultry manure compost have also been reported 63 . XRD spectra of crystallised mature manure. Diffraction patterns were recorded using a goniometer. X-ray diffraction spectra of VMCW at different maturation stages (0th and 9th day) are shown in Fig. 6. It was seen that the number of intense peaks present in the spectrum of the 0th-day sample diminished towards the 9th day of the composting cycle, while less intense peaks increased towards the later phases of composting. This demonstrates the transformation of the solid waste from crystalline to amorphous during the degradation process, showing that cellulose and other polysaccharide compounds were degraded. This is consistent with the results for the humification and maturation cycle of manure-straw compost reported by Hu et al. 64 . Effects of the enzyme on the mineral nutrients of matured compost. The concentrations of mineral nutrients such as nitrogen (N), phosphorus (P) and potassium (K) were observed as 0.918%, 0.5% and 1.029%, respectively, for the matured manure produced within the reduced composting time of 9 days. The NPK concentrations in the compost manure obtained on the 9th day were comparable with literature values (Fig. 7) for composting manure nutrients produced over 45-75 days. This reveals that the high activity of the jaggery-based seed inoculum converts the complex solid-waste molecules into nutrients within the short period of 9 days. In support of this, Gabhane et al. 34 reported N, P and K values of 0.9%, 0.28% and 0.0487%, respectively, and noted that the availability of these nutrients was higher with the jaggery-based enzyme than with other additives based on fly ash, phosphogypsum, lime and polyethylene glycol. Similarly, Saha et al. 65 reported that the N, P and K values in MSW compost were 0.9%, 0.27% and 0.55%, respectively. Kinetic modelling of enriched microbial composting. To analyze the effect of the pre-cultured seed inoculum on the VMCW composting process, the total microbial growth counts of the composting system were examined. The maximum microbial growth of 3.47 × 10 6 CFU/g was observed on the 4th day (Fig. 8), whereas in the normal composting process the activity of the microbial consortia is delayed by their acclimatization to the vegetable waste, which increases the composting time. This can lead to odour production and maggot formation. The sharp lag phase observed on the 2nd day was attributed to the high adaptability of the microbes and the decomposition of the substrate, which was confirmed by the acidic pH of 6.2.
Further, the exponential and declining microbial growth phases observed evidenced the short period of acclimatization and maturation (Fig. 8). In addition, the exponential growth phase was prominent on the 3rd day, and a stationary phase was observed until the 5th day; the shortened stationary phase was attributed to the efficient utilization of enzymes. Moreover, the microbial growth coincided well with the high VS and volume reduction and the optimum temperature (Fig. 3a), pH (Fig. 3b) and C/N ratio (Fig. 3c). A declining phase was depicted afterwards; the microbial activity occurs in the thin films of water surrounding the substrate, and once this hygroscopic water dried up the microbial activity was arrested at the end of the experiment, consistent with the temperature ( Fig. 3a), pH (Fig. 3b), C/N ratio (Fig. 3c) and volume reduction (Fig. 3d). The pre-cultured seed inoculum composting process showed two stages, with the microbial population as the limiting factor in the former stage and the substrate concentration in the latter. Initially, sufficient organic matter was available to be decomposed but the microbial population was insufficient, which limited the composting process. Thereafter, the microbial concentration increased while the organic matter concentration decreased, which reduced the heat generated because the low metabolic activity was insufficient to maintain the composting temperature, so that the substrate became the limiting factor for the composting process. The fundamental microbial kinetics were plotted and analysed to examine the efficiency of inoculation during the aerobic composting process, with reference to Xi et al. 66 . The organic matter was plotted against composting time and the reaction rate (r) was determined by drawing tangents to the resulting curve. The reciprocals of the reaction rate (1/r) and the organic matter (1/S) were computed (Fig. 9). The Michaelis-Menten constant (K m ) and the limiting-velocity reaction rate constant (K 3 ) for the composting reactor were 81.06 and 0.15, respectively. The reaction rate (r) of the aerobic composting was very high, especially on the 8th day, when it was observed to be 26.71%/day, depicting a high degradation potential of the VMCW. Hence, the addition of enzymes achieved rapid degradation of organic waste during composting under aerobic conditions.
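The kinetic constants quoted above follow from the Lineweaver-Burk linearization described in the methods (1/r regressed against 1/S). A minimal sketch of that regression is given below; the substrate and rate values are invented placeholders rather than the measured series, and the constants are simply recovered from the fitted slope and intercept.

```python
import numpy as np
from scipy.stats import linregress

# Placeholder observations: substrate concentration S (% organic matter)
# and consumption rate r (%/day); not the measured data.
S = np.array([70.0, 60.0, 50.0, 40.0, 30.0])
r = np.array([12.35, 11.25, 10.00, 8.57, 6.92])

# Lineweaver-Burk linearization: 1/r = (Km/K3) * (1/S) + 1/K3
fit = linregress(1.0 / S, 1.0 / r)
K3 = 1.0 / fit.intercept   # limiting-velocity rate constant
Km = fit.slope * K3        # Michaelis-Menten constant

print(f"K3 = {K3:.2f} per day, Km = {Km:.1f}, R^2 = {fit.rvalue**2:.3f}")
```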
Conclusion The aerobic composting duration was considerably shortened to 9 days from the traditional composting period of 45 days, using a novel pre-cultured seed inoculum enzyme obtained from jaggery and curd. The organic degradation of VMCW was 42% (82 ± 2.83% to 40.82 ± 0.61%), with a volume reduction from 0.012 m3 to 0.003 m3 within 9 days. Enriched nutrient (NPK) levels of the compost bio-fertilizer were recorded as 0.91% w/w, 0.5% w/w and 1.029% w/w, respectively. This was evident from the increased microbial population in the plate counts, with a lag phase of microbial growth on the 2nd day accompanied by an acidic pH and lower growth (CFU/mL). The shortened log phase on the 5th day was attributed to the optimum temperature, C/N ratio and pH, coinciding with the high VS and volume reduction. The observed bio-kinetic constants based on the microbial count and substrate concentration were recorded as a Km of 81.06 and a K3 of 0.15, coinciding with the size and volume reduction. The enriched nutrients confirmed the reduced composting period, and this was further confirmed by the XRD analysis: compost maturity observed through the X-ray diffraction (XRD) examination of the manure confirmed the conversion of the crystalline structure of the compost particles to an amorphous form and the mineralization of organic matter during composting. The low K3 and the improved microbial composition depicted an enhanced composting process aided by the pre-cultured bacteria/enzymes, making it a better alternative for VMCW degradation that can reduce the land requirement. In this way, the fermented pre-cultured seed inoculum supported an enhanced nutrient level within a shortened composting time.
6,398
2020-11-13T00:00:00.000
[ "Environmental Science", "Biology" ]
Polarized emission of quantum dots in microcavity and anisotropic Purcell factors We study the polarization properties of quantum dot (QD) emission coupled with the fundamental cavity modes. A rotation of the polarization axis and a change of the polarization degree are observed as the coupling is varied. To explain this observation, we derive an analytical model considering the polarization misalignment between the QD dipole and the cavity mode field. Our model also provides a new approach to extract the anisotropic Purcell factors by analyzing the polarization of the detected quantum dot emission coupled to the cavity mode, which paves the way to develop high-efficiency polarized single photon sources. Introduction The system of single semiconductor quantum dots (QDs) coupled to cavities paves the way for the exploration of cavity quantum electrodynamics (CQED) effects [1]. In the past decade, many CQED phenomena in such systems, including the Purcell effect in the weak coupling regime [2] as well as the vacuum Rabi splitting in the strong coupling regime [3,4], have been observed and investigated extensively due to their potential applications in quantum information science [5,6]. Weak exciton-photon coupling produces a pronounced modulation of the spontaneous emission (SE) rate of the QDs through the Purcell effect [7]. The magnitude of this modulation, namely the ratio between the QD's SE rate with ($1/\tau_{cav}$) and without ($1/\tau_{0}$) the cavity, can be quantified by the following expression: $\tau_{0}/\tau_{cav} = \frac{3Q\lambda_c^3}{4\pi^2 V_{eff}} \cdot \frac{\delta\omega_c^2}{\delta\omega_c^2 + 4\Delta^2} \cdot \frac{|\vec{E}(\vec{r})|^2}{|\vec{E}_{max}|^2} \cdot \frac{|\vec{d}\cdot\vec{E}|^2}{|\vec{d}|^2 |\vec{E}|^2}$ (1), where the first of the four terms on the right side is the Purcell factor F P . Q is the quality factor of the cavity mode, V eff is the effective mode volume, and λ c is the wavelength of the cavity mode. Moreover, δω c is the cavity-mode linewidth, and the second term is a Lorentzian function of the detuning Δ. The last two terms are the mismatches between dipole and field in space and polarization, respectively. Micropillars, owing to their directional radiation pattern, exhibit a high extraction efficiency of photons and thus become a promising candidate for accomplishing efficient single photon devices [8] and further for generating indistinguishable photons [9], both of which are at the basis of quantum cryptography [10,11]. Single photons with a specific polarization are often used to encode qubit information in quantum computation and cryptography [10,12]. On the other hand, single photon sources with variable polarization are quite appealing for avoiding the loss caused by external polarization-resolved components. To control the polarization properties of the emitted photons, an anisotropic geometry is introduced to the micropillars [13]. Such an anisotropic cross section lifts the twofold degeneracy of the fundamental modes into a pair of linearly polarized modes. A continuous polarization control of single QD emission has been achieved by changing the coupling of the QD with the cavity mode (CM) [14,15]. However, a complete experiment/model of the polarization properties, such as the polarization degrees and angles, of QD emission controlled by the Purcell effect is still absent. In this paper, we investigated the polarization-resolved spectroscopy of the QD emission coupled to the CM, both experimentally and theoretically. In experiment, the polarization axis of QD emission is continuously changed with the detuning between QD and CM, as well as the polarization degree.
An analytical model considering the coupling between two cavity modes and the exciton state has been proposed to quantitatively explain not only our observations on QDs' polarization behavior but also those in the previous reports [14,15]. Moreover, since the Purcell factor is a figure of merit for characterizing cavity-based single photon devices, developing a reliable approach to extract the Purcell factor is requisite and various methods have been demonstrated previously. The most intuitive way to probe the Purcell effect is to directly employ Eq. (1). By comparing the QD lifetime at and far from resonance with the cavity mode, the Purcell factor can be determined [16]. There are other signatures reflecting the Purcell effect. For example, as the SE of the QD exciton becomes faster near resonance due to the Purcell effect, the pump power required to saturate the optical transitions will be increased. The faster emission rate also enhances the emission intensity at saturation. Therefore, measuring the pump rate where the QD emission intensity saturates [2] or the QD emission intensity at saturation [17,18] could quantitatively illustrate the Purcell effect. Alternatively, by measuring the polarization-resolved emission of QDs, we provide a new approach for extracting the anisotropic Purcell factors of the microcavity. Experimental method, results and discussions The sample studied in this work is fabricated from a planar microcavity grown by molecular beam epitaxy. The microcavity consists of a one λ GaAs cavity surrounded by alternating AlAs and GaAs layers of 24/20 pairs as the bottom/top distributed Bragg reflector (DBR). One layer of InAs QDs is grown at the antinode of cavity and the area density of QDs is approximately 1 × 10 9 cm −2 . The pillars are fabricated first by conventional lithography using negative photoresist (AZ 5214E) and following by the inductively coupled plasma (ICP) etching. The photoresist (PR) is then patterned into groups of circles by photo-lithography and serves as a mask in the following chloride-based ICP etching. After etching and defining the pillars, the residual PR is removed by O 2 plasma ashing. A scanning electron micrograph (SEM) of pillar with diameter of 3 μm is shown in the inset of Fig. 1. Emission spectra are taken in a low-temperature micro-photoluminescence (μ-PL) setup. A 632.8nm line of HeNe laser is focused onto the pillars through an objective lens with a numerical aperture (NA) of 0.5. The PL signals are analyzed by a spectrometer consisting of a 0.55-m monochromator and a liquid-nitrogen-cooled charge-coupled detector (CCD) camera, which provides a spectral resolution of 80 μeV. The polarization-resolved μ-PL is accomplished with the mounting of a rotating half-wave plate and a fixed linear polarizer in front of the monochromator. All the following data are taken with a half-waveplate rotating in the steps of 3° from 0° to 180°such that the PL intensity of the whole polarization plane can be mapped. By applying this polarization-resolved spectroscopy, we can also analyze the fine structure splitting of exciton with an accuracy of 5μeV. Temperature-dependent PL spectra from the studied 3-μm-diameter micropillar with quality factor Q = 7,600 are recorded in Fig. 1. To obtain an authentic quality factor, we excite the pillar with high pump power for feeding a broad and smooth background sources to the cavity. The estimated maximal Purcell factor F P is 14 for the studied micropillar with V eff = 0.87 μm 3 . 
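The quoted maximal Purcell factor can be cross-checked from Q and V eff using the standard expression F P = 3Q(λ c /n)^3 / (4π^2 V eff). The sketch below does this; the emission wavelength and GaAs refractive index are assumptions not stated at this point in the text, so the result is only an order-of-magnitude check.

```python
import math

Q = 7600                 # quality factor quoted in the text
V_eff = 0.87             # effective mode volume (um^3) quoted in the text
wavelength_nm = 930.0    # assumed InAs QD emission wavelength
n_gaas = 3.46            # assumed refractive index of the GaAs cavity

lam_um = wavelength_nm / n_gaas / 1000.0   # wavelength in the material (um)
F_P = 3.0 * Q * lam_um**3 / (4.0 * math.pi**2 * V_eff)
print(f"F_P ~ {F_P:.0f}")   # ~13, consistent with the quoted value of 14
```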
By varying the temperature, we can control the coupling of the QD with the CM. The energy state of the QD is brought into resonance with the CM at T = 35.2 K and the Purcell enhancement results a pronounced increase in the PL intensity at resonance. The energy crossing at resonance indicates a weak coupling regime for this QD-microcavity system. The fundamental cavity mode HE 11 of circular micropillars is twofold polarization degenerate. However, this degeneracy is lifted due to a slight asymmetry (with ellipticity~0.02, referring to the inset of Fig. 2(a)) in the cross section of the studied micropillar, which is caused during the fabrication process [13]. The energy splitting between the two linearly polarized modes of our pillar is about 10 μeV as shown in Fig. 2(a). The polarization angles of these two linearly polarized modes are determined by measuring the emission intensity in the whole polarization plane. As shown in Fig. 2(b), one of the split modes is polarized along 102° and emits stronger PL intensity than the perpendicular one. Above measurements are performed at T = 10 K. For the target spectral line of the QD in Fig. 1, we cannot confidently identify it as an exciton or a charged exciton since it has no observable fine structure splitting. As shown in Fig. 3(a), the polarization of its emission intensity is measured at T = 10 K. We have observed a certain degree of polarization up to 0.2. Such polarization anisotropy was found to vary from dot to dot and ranged from 0 to 0.5 in our sample. Similar observations were reported previously by other groups for InAs or InGaAs QDs [19][20][21][22]. Besides, from the investigations of polarization anisotropy on tens of individual QDs, we found that the polarization axes were not aligned along the crystallographic axes. The optical anisotropy misaligned with respect to the crystallographic axes is attributed to the strong valence-band mixing which is due to the shape and strain anisotropies [23,24]. We show the polarization orientations of the QD and of the CM at detuning Δ = −0.85 meV together in Fig. 3(a). An included angle θ d exists between the polarization plane of the QD and the CM. Under the detuning Δ = −0.85 meV and Δ = −0.2 meV, we observed a change in the polarization direction as shown in Fig. 3(b). We thus depict the polarization axis and the polarization degree of QD emission as a function of detuning in Fig. 4. As seen in Figs. 4(a) and 4(b), both the polarization axis and the polarization degree change with the detuning. According to Eq. (1), it is expected that the misalignment between the polarization axis of QD and CM could play a role in the observed change of polarization. Thus a model which takes the misalignment between the QD and the CM into consideration is needed to quantitatively explain this observation. Theory for anisotropic Purcell effect To illustrate the polarization-dependent Purcell effect, an analytical model is derived as follows. When the QD is pumped with a pumping rate r, it emits photons via either cavity mode or leaky modes. The SE rate of exciton coupled with the leaky modes is denoted γ x . In addition, the exciton emission rate coupled to the cavity mode is modified by the Purcell effect and can be expressed as Γ = γ X F eff L(Δ), where F eff denotes the effective Purcell factor taking the spatial and polarization mismatch between the dipole and the mode field into account. The photons emitting from cavity and leaky modes are then collected by the polarization-resolved μ-PL setup. 
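As an aside on the analysis of such analyzer scans, the polarization degree P l and anisotropy axis θ D that enter the model below are typically obtained by fitting the measured intensity over the 3°-step scan to a cos(2θ) dependence. The following is a generic sketch of that fit on synthetic data (an assumed analysis, not the authors' code):

```python
import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, i0, p, theta0_deg):
    """I(theta) = I0 * (1 + P * cos(2*(theta - theta0))) for partially polarized light."""
    d = np.radians(theta_deg - theta0_deg)
    return i0 * (1.0 + p * np.cos(2.0 * d))

# Synthetic analyzer scan, 0..180 deg in 3 deg steps (placeholder data)
theta = np.arange(0.0, 181.0, 3.0)
rng = np.random.default_rng(1)
counts = malus(theta, 1000.0, 0.2, 102.0) + rng.normal(0, 10, theta.size)

popt, pcov = curve_fit(malus, theta, counts, p0=[counts.mean(), 0.1, 90.0])
i0, p_l, theta_d = popt
print(f"P_l = {p_l:.2f}, polarization axis = {theta_d % 180:.1f} deg")
```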
The PL intensity from the desired QD collected by the detector can be given by [25] $I_{det}(\Delta, r, \theta) = \eta_{cav} I_{cav}(\Delta, r, \theta) + \eta_{leak} I_{leak}(\Delta, r, \theta)$ (2), where we assume that the QD emission comes from an exciton consisting of a singlet state. The PL intensity is a function of the detuning Δ, pumping rate r and analyzer axis θ. The efficiencies with which the cavity mode and the leaky modes are coupled to the detector are denoted η cav and η leak , respectively. The PL of the QD emitted through the leaky modes keeps its polarization properties [23] and has the expression $I_{leak}(\Delta, r, \theta) = p_X \gamma_X [1 + N\cos^2(\theta - \theta_D)]/A$ (3), where A is the intensity integral over the whole polarization plane and is given as (2 + N)π. N is a parameter depending on the linear polarization degree of the QD: N = 2P l /(1-P l ) if the polarization degree of the QD emission is P l . θ D is the direction of the polarization anisotropy. p X is the steady-state excitonic population and can be derived from the rate equations; it depends on the detuning Δ and pumping rate r but not on the analyzer axis θ. The PL from the cavity mode is $I_{cav}(\Delta, r, \theta) = p_X \gamma_X F_{eff} L(\Delta)/B$ (4), where B is again the intensity integral over the whole polarization plane but is given as π. In order to investigate the impact of the misalignment between the QD and the CM on the PL, the inner product of the dipole and field orientations is separated from the effective Purcell factor and is given as a function of cos 2 (θ-θ c ), where θ c is the polarization angle of the single cavity mode. By doing this, we have F eff = Fcos 2 (θ-θ c ). F is the Purcell factor involving only the spatial mismatch. This expression manifests that the Purcell effect selectively enhances the SE rate of photons of the same polarization. When the analyzer axis is chosen to align along the CM, the effective Purcell factor reaches a maximum, so that the detected PL intensity along this axis is dominated by the photons emitted into the CM. On the contrary, a zero effective Purcell factor occurs in the direction perpendicular to the CM, which leads to a leaky-mode-dominated PL intensity. By putting Eqs. (3) and (4) into Eq. (2), the detected PL intensity can be written in a closed form (Eq. (5)), where α = η cav /η leak and all the terms related to N have been replaced with P l . To illustrate the Purcell effect on the polarization properties of the detected PL described by Eq. (5), we plot the PL intensity of a QD coupled to a single mode in a polar diagram by letting αF = 20, as shown in Fig. 5(a), for θ d = 0°, 45° and 90° at several detunings, where θ d = θ C -θ D . Here we consider a QD which has a polarization degree of 0.1 at large detuning (off resonance). For each included angle, the polarization degree is modified as the coupling of the QD with the CM is changed and reaches a maximum at resonance, where the polarization degree of the detected PL is defined by (I max -I min ) /(I max + I min ). What is worth noting is that for θ d = 45° the polarization axis rotates with the detuning. This rotation can be quantitatively determined as follows. The polarization axis of the detected PL can be found by setting the derivative of the detected PL intensity to zero, where the intensity maximum or minimum occurs. We then obtain the polarization axis θ det of the detected PL (Eq. (6)), where the parameter V = αF/P l is introduced for clarity. θ det depends on the included angle θ d as well as on the detuning Δ. Figure 5(b) shows the polarization axis of the detected PL as a function of detuning for θ d = 0°, 45° and 90°. A rotation of the polarization axis is predicted for θ d = 45°, but no rotation would occur for θ d = 0° and 90°. We also plot the polarization axes for different V values when θ d = 45° in Fig. 5(c).
It is seen that a larger V value initiates an earlier starting point of the rotation because of the stronger Purcell effect. Further discussion of the parameter V will be given later. The above discussion focuses on the case of QD emission coupled to a single cavity mode. In fact, for a micropillar with a circular cross section, there is a twofold degeneracy of the fundamental cavity mode. The QD dipole is coupled to both degenerate modes, which are polarized perpendicular to one another. So let us consider a more general case where the degeneracy of the fundamental mode is lifted, which occurs in imperfect or elliptical micropillars. In this case, the Purcell-enhanced SE rate Γ is given as $\Gamma = \gamma_X [F_H \cos^2(\theta - \theta_C) L(\Delta_H) + F_V \sin^2(\theta - \theta_C) L(\Delta_V)]$ (7), where F H and F V are the respective Purcell factors of the two linearly polarized modes, and Δ H and Δ V are the respective energy detunings between the QD and the two polarized CMs. Note that the sine-squared term (rather than cosine-squared) after F V arises from the 90° difference between the two orthogonal CMs. The general expression for the detected PL of a QD coupled to both modes is written as Eq. (8); here we have taken out the factors independent of the analyzer axis and kept only the polarization-dependent terms. For a circular micropillar, since the fundamental mode is twofold degenerate, Δ H = Δ V and F H = F V . Therefore, the cavity-coupled exciton emission rate Γ in Eq. (7) has no polarization dependence. This means that the polarization properties of the detected emission will not be modified by the Purcell effect. We now discuss three typical cases of elliptical micropillars with different ellipticities and hence different mode splittings. (1) The imperfection in micropillars will slightly split the twofold degenerate fundamental mode, which is exactly the case for our fabricated pillar. Since the mode splitting is rather small compared with the detuning range of interest, the relation Δ H ≈ Δ V ≡ Δ holds. Thus Eq. (8) reduces to Eq. (9), where κ = F H /F V . Again, a zero derivative of the detected PL is used to derive the polarization axis. The polarization axis takes the form of Eq. (6) but now with V = αF V (κ-1)/P l . θ d is explicitly given by the difference between the observed polarization axes of the QD and the CM at large detuning. The fitting results are plotted in Fig. 6(a). A good fit to the polarization axis (upper panel in Fig. 6(a)) yields a V value of 18.2 ± 1.2, where V is the only free parameter of the fit. The polarization degree of the detected PL intensity, defined by (I max -I min ) /(I max + I min ), can be derived from Eqs. (6) and (9) (Eq. (10)). To determine the unknown κ value, we begin with the case Δ ≈ 0 (on resonance) of Eq. (9). Given that α is large due to the poor coupling between the leaky modes and the detector, the final term dominates over the former terms. At resonance, the CM emission has a clear dependence on the polarization, as shown in Fig. 2(b). Moreover, the polarization properties of the CM emission are always preserved even when the detuning is varied. According to the recent understanding of CM emission [26], on entering the Purcell enhancement regime, i.e. the detuning region of interest, the CM is weakly coupled to the target QD and emits photons that are mostly contributed by the target QD. That is to say, the polarization dependence of the CM emission indicates a polarization dependence of the Purcell enhancement experienced by the QD dipole. Therefore, comparing the dominating term in Eq. (9) with the polarization of the detected PL emission at Δ ≈ 0, κ can be determined as 1.34.
The measured polarization degree compares well with the curve predicted by Eq. (10) with P l = 0.4, as shown in the lower panel of Fig. 6(a). [Fig. 6(b) shows the comparison of the polarization degree between our calculated curve and the measured data taken from [15], for which the splitting between the X- and Y-polarized cavity modes is about 500 μeV; Fig. 6(c) shows the comparison between our calculated curves and the measured data taken from [14], where QD1 (black symbols), QD2 (red symbols) and QD3 (green symbols) respectively have polarization degrees of 0, −0.35 and 0.25 at large detuning, the solid lines of the corresponding colors are calculated with P l CM = 0.8, and the blue dashed line is plotted with P l = 0 and P l CM = 1.] (2) Δ H < Δ V and Δ V -Δ H > δω c . For an elliptical micropillar with a small ellipticity, the mode splitting is a few times the cavity linewidth. This case was demonstrated experimentally in [15]. In such micropillars, the polarization of the QD emission can be switched between two orthogonal polarizations by changing the coupling of the QD with the two linearly polarized CMs. In that work, the splitting between the two orthogonally polarized modes is about 500 μeV. The studied QD had isotropic polarization, i.e. P l = 0, at large detuning, and the polarization degree was defined by (I X -I Y ) /(I X + I Y ). According to Eq. (8), the polarization degree of the detected PL can be derived (Eq. (11)); the index H (V) corresponds to the polarization Y (X) defined in [15]. By exploiting the fact that the X-polarized mode has a higher quality factor, we obtain the conditions α V F V > α H F H and a smaller cavity linewidth for the X-polarized mode. Taking the parameter values α H F H = 15, α V F V = 60, and cavity linewidths of 0.13 for the X-polarized mode and 0.15 for the Y-polarized mode, the curve predicted by Eq. (11) matches the data points measured in [15], as shown in Fig. 6(b). In addition, due to the higher quality factor of the X-polarized mode, the polarization degree of the QD emission at resonance with the X-polarized mode is higher than that at resonance with the Y-polarized mode. (3) Δ H << Δ V and Δ V -Δ H >> δω c . This is the case for micropillars with a large ellipticity. In this case, since the mode splitting is far larger than the cavity linewidth, the mode far from the QD state can be safely neglected. We thus obtain the polarization degree of the detected PL (Eq. (12)). This formula predicts a highly polarized emission at resonance, with a polarization degree close to unity. The blue dashed line in Fig. 6(c) shows the curve of the polarization degree for P l = 0 according to this formula. The experimental result of [14], in which an elliptical micropillar with an ellipticity of 0.6 was studied, is also plotted in Fig. 6(c). The blue dashed line obviously deviates from the data at resonance. We attribute this deviation to a partially polarized CM, as observed in [14]. Taking this into account, the detected PL is rewritten as Eq. (13), where the second term in the bracket denotes the PL contribution from the orthogonal polarization and ξ is a fraction; the factor 1/(ξ + 1) ensures normalization of the integral over all polarizations for the last term. The polarization degree is then rewritten as Eq. (14), where P l CM = (1-ξ)/(1 + ξ) is the polarization degree of the CM. Considering a partially polarized CM with P l CM = 0.8, the polarization degrees of the detected PL for QD1, QD2, and QD3 are plotted in Fig. 6(c) by letting P l = 0.25, 0, and −0.35 and αF = 100, 160 and 150, respectively. It is clear that all the calculated curves compare well with the experiments.
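To make the qualitative behavior of the single-mode case concrete, the following is a simplified numerical sketch of the detected-PL model: leaky-mode emission with polarization degree P l along θ D plus a cavity contribution weighted by αF·L(Δ)·cos²(θ − θ C). It is a toy re-implementation under stated assumptions, not the paper's exact Eqs. (5), (6) and (10), and the parameter values are illustrative only.

```python
import numpy as np

def detected_intensity(theta_deg, delta, alpha_F=20.0, linewidth=0.2,
                       P_l=0.1, theta_D=0.0, theta_C=45.0):
    """Toy model: leaky-mode emission plus a Purcell-weighted cavity term."""
    th = np.radians(theta_deg)
    lorentz = linewidth**2 / (linewidth**2 + 4.0 * delta**2)        # L(Delta)
    leaky = (1.0 - P_l) + 2.0 * P_l * np.cos(th - np.radians(theta_D))**2
    cavity = alpha_F * lorentz * np.cos(th - np.radians(theta_C))**2
    return leaky + cavity

theta = np.linspace(0.0, 180.0, 361)
for delta in (1.0, 0.3, 0.0):   # detuning values, in the same units as the linewidth
    I = detected_intensity(theta, delta)
    axis = theta[np.argmax(I)]
    degree = (I.max() - I.min()) / (I.max() + I.min())
    print(f"detuning {delta:+.1f}: axis ~ {axis:5.1f} deg, degree ~ {degree:.2f}")
```

Running this shows the axis swinging from the QD direction towards the cavity-mode direction, and the polarization degree rising, as the detuning approaches zero, mirroring the trend described for θ d = 45°.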
In short, our analytical model can be used to quantitatively explain all the data observed to date. Extraction of anisotropic Purcell factors A further prospect of the polarization-resolved PL measurement on the QD-cavity system is propounded as described below. When the polarization direction of the QD is oriented at an angle with respect to that of the CM, it is predicted and demonstrated that a rotation of the polarization axis occurs when the coupling of the QD with the CM is changed. The rotation of the polarization axis of the detected PL seen in Fig. 6 is modeled by Eq. (6) with a given included angle and a fitting parameter V = 18.2 ± 1.2, as mentioned above. According to the formula V = αF V (κ-1)/P l , the Purcell factor F V can be obtained if α, κ, and P l are known. Actually, κ and P l have already been determined in the previous section. The value of α, which is the ratio between the collection efficiencies of the CM and the leaky modes for the detection system, can be quantitatively determined as follows. We follow the method proposed in [25] and implement pump-power-dependent measurements on the investigated micropillar. The integrated PL intensities of the QD against pump power under various detunings are recorded. Note that in this measurement we removed the waveplate and the polarizer in front of the monochromator so that the PL emission of all polarizations is entirely collected by the detector. We show the experimental results at the detunings of resonance and far from resonance in Fig. 7. Note that the curve for the off-resonance case has been rescaled for clarity. The ratio ε between the total detected PL intensity on resonance and far from resonance is given by Eq. (15). For a pump power well below the one required to saturate the PL emission, the excitonic population is given as $p_X^{below} = r/(\gamma_X + \Gamma(\Delta))$. For a pump power well above saturation, the excitonic population is approximately saturated. Comparing the PL intensities at low pump power at resonance and far from resonance, the ratio ε becomes Eq. (16), and the ratio ε for high pump power is given by Eq. (17). The Purcell factor can be extracted by dividing Eq. (16) by Eq. (17). Substituting the obtained Purcell factor back into Eq. (17), α is immediately obtained. We thus obtain the Purcell factor and, in turn, α. [Fig. 7 shows the integrated PL intensity as a function of pump power on resonance (black crosses) and far from resonance (red crosses); the circled data are used to extract the ratio α.] Such a large value of α confirms a better collection of photons from the cavity mode than from the leaky modes and is comparable to the value obtained in [25]. Since α is determined by the pillar geometry and the collection setup (numerical aperture), it should be identical for pillars of the same diameter measured in the same setup. Therefore, with P l = 0.4 and κ = 1.34, we obtain F V = 1.6 ± 0.5 and F H = κF V = 2.1 ± 0.7. The difference in the Purcell factors between the two polarized CMs could be caused mainly by the spatial mismatch term in Eq. (1), as they have almost the same quality factor. To further confirm the obtained anisotropic Purcell factors, we performed another measurement which is widely used in QD-micropillar systems [2,25]. The pump power required to saturate the QD emission was recorded as a function of detuning, as shown in Fig. 8. We fit the curve with (1 + FL(Δ)) 1/2 [25] and obtain the Purcell factor F = 1.8 ± 0.2. It is not surprising that the Purcell factor obtained from this approach lies between the two anisotropic Purcell factors, since this approach measures the average one.
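The cross-check just described, fitting the saturation pump power versus detuning with (1 + F·L(Δ))^(1/2), can be sketched as follows. The data points below are synthetic placeholders, and the Lorentzian linewidth is treated as a free parameter, which may differ from the authors' exact fitting procedure; the fit uncertainties are discussed in the next paragraph.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_sat(delta, p0, F, dw):
    """Saturation pump power vs detuning: P(delta) = P0 * sqrt(1 + F * L(delta))."""
    L = dw**2 / (dw**2 + 4.0 * delta**2)
    return p0 * np.sqrt(1.0 + F * L)

# Synthetic placeholder data (detuning in meV, power in arbitrary units)
delta = np.array([-0.8, -0.5, -0.3, -0.15, 0.0, 0.15, 0.3, 0.5, 0.8])
rng = np.random.default_rng(2)
power = p_sat(delta, 1.0, 1.8, 0.15) * (1 + rng.normal(0, 0.02, delta.size))

popt, _ = curve_fit(p_sat, delta, power, p0=[1.0, 1.0, 0.2])
print(f"fitted Purcell factor F = {popt[1]:.2f}")
```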
Note that the error listed here does not consider the intensity fluctuations [25]. At last, we would like to mention another possibility to extract the Purcell factor. Back to aforementioned case that Δ H <<Δ V , where the QD is coupled with single cavity mode in an elliptical pillar. The polarization degree of detected PL emission against the detuning is modeled by Eq. (14). For various QDs owning different polarizations, P l can be explicitly assigned by measuring the polarization of QD emission at very large detuning. Thus the free parameter of this fit is the product of α and F. Evidently we can obtain the Purcell factor with high accuracy if α can be correctly found out. To find α, instead of implementing pump-power dependent measurement, one could possibly compare the cavity emission with the addition of a pin-hole, which is of the similar size with the micropillar, as close as possible to the micropillar to that with the removal of pin-hole. Conclusion In conclusion, we propose an analytical model which quantitatively explains the rotation of the polarization angle and the change of the polarization degree of detected PL as a function of detuning, which is observed in our experiment. Our model can be generalized to fit previous works. Moreover, we provide an alternative approach based on analyzing the polarization of detected PL to probe the anisotropic Purcell factors, which is beyond the scope of time-resolved and pump-rate dependent measurements. One should bear in mind that the relatively large error in determining the ratio α is still an obstacle to correctly derive a Purcell factor. A proper way such as the addition of pin-hole to the setup is required to be implemented in order to find correct α.
6,384
2014-01-27T00:00:00.000
[ "Physics" ]
Resveratrol promotes sensitization to Doxorubicin by inhibiting epithelial‐mesenchymal transition and modulating SIRT1/β‐catenin signaling pathway in breast cancer Abstract Breast cancer is one of the leading fatal diseases for women worldwide, and patients who cannot have surgery typically have to rely on systemic chemotherapy to extend their survival. Doxorubicin (DOX) is one of the most commonly used chemotherapeutic agents against breast cancer, but acquired resistance to DOX can seriously impede the efficacy of chemotherapy, leading to poor prognosis and recurrence of the cancer. Resveratrol (RES) is a phytoalexin with pharmacological antitumor properties, but its underlying mechanisms in the treatment of DOX‐resistant breast cancer are not clearly understood. We used cell viability assays, cell scratch tests, and transwell assays combined with Western blotting and immunofluorescent staining to evaluate the effects of RES on chemoresistance and the epithelial‐mesenchymal transition (EMT) in adriamycin‐resistant MCF7/ADR breast cancer cells, and to investigate the underlying mechanisms. The results showed that treatment with RES combined with DOX effectively inhibited cell growth, suppressed cell migration, and promoted cell apoptosis. RES reversed the EMT properties of MCF7/ADR cells by modulating the connection between SIRT1 and β‐catenin, which provides a hopeful therapeutic avenue to overcome DOX‐resistance and thereby prolong survival in breast cancer patients. Chemotherapy in such cases, besides often failing to prolong survival, can bring about undesirable effects for tumor patients. 5 Thus, we sought a chemical sensitizing agent and side-effect attenuator within the chemotherapy regimen for breast cancer that could alleviate DOX-resistance in a versatile approach, and we then explored the molecular mechanism underlying this result as well. Many phytochemicals have been found to suppress the growth of cancer cells without affecting normal cells, to exhibit chemoprophylactic effects, and to promote the sensitivity of cancer cells to DOX. 6 Resveratrol (RES), a polyphenol compound extracted from peanut, grape (red wine), tiger cane, mulberry, and other plants, has attracted increasing attention because of its varied health benefits. 7,8 RES plays a key role in aging, neurological dysfunction, cardiovascular disease, and inflammatory diseases. It has also been recognized to have antitumor and chemopreventive activity in connection with lung, gastric, prostate, and breast cancer. 9,10 Furthermore, RES was reported to have a synergistic effect with DOX in gastric cancer cells and to reverse multidrug resistance in acute myeloid leukemia cells. 11,12 Thus, combined treatment with DOX and RES might be a potentially promising chemotherapy approach for breast cancer, but the relevant mechanisms remain to be explored. Recent studies have indicated that the occurrence and development of the epithelial-mesenchymal transition (EMT) might be a crucial factor in the cell migration and DOX-resistance of cancers. 13,14 However, whether RES acts on the DOX-resistance and metastasis of breast cancer cells remains to be further studied, and the transformation of EMT in this process should be investigated as well. In the present study, adriamycin-resistant MCF7/ADR breast cancer cells exhibited enhanced aggressive migratory characteristics and an EMT phenotype.
RES successfully alleviated cell migration and increased synergistic sensitivity to DOX through reducing EMT processes and regulating the correlation between silent mating type information regulation 2 homologue 1 (SIRT1) and β-catenin. Our work suggests new avenues for use of RES as a potential effective adjuvant drug in breast cancer treatment to overcome DOX-resistance as well as tumor metastasis. Further investigation of the potential mechanisms for RES use with DOX and in prospective clinical trials is warranted. Doxorubicin and resveratrol used in this study were purchased from Sigma (St. Louis, MO, USA). DOX was scale-diluted with phosphate buffered saline (PBS) buffer while RES was diluted in dimethylsulfoxide (DMSO) solvent, and then was stored at −80℃. MG132 and cycloheximide (CHX) were obtained from Selleck (Houston, TX, USA), both were diluted in DMSO solvent in a stock concentration. Lentivirus infection was operated as protocol from Genechem described. Stable cell lines were generated in 24-well plates with serum-free medium. MCF7/ADR cells were transfected with shSIRT1 lentivirus at the infection MOI ≥ 90 for 24 hours. Then cells were cultured in the medium with 10% FBS and continuously cultured for another 6 days followed by selection with G418 (Invitrogen, Carlsbad CA, USA) at 500 μg/mL. | Cell Counting Kit-8 assay To detect cell proliferation ability, we used cell Counting Kit-8 (CCK8, DOJINDO, Japan). First, about 1 × 10 4 cells contained in 100 μl medium were seeded in each well of 96well plates. After cultivation for 24 hours, cells were treated with or without different doses of DOX or/and RES at different time points, respectively. For investigation, cells were incubated with 10 μL/well CCK-8 solution for 2 hours, then the number of viable cells were measured and calculated by the absorbance at 450 nm. | Cell scratch test Cells were seeded into 6-well plates and reached a confluence in 24 hours, then a 100 μL pipette tip was used to make a single scratch. After washed with PBS, cells were incubated with FBS-free culture medium alone or containing DOX and/ or RES. Olympus IX71 inverted microscope was used to capture images of the scratches at different time points (0, 24 and 48 hours). | Transwell migration assay Cell migration was investigated by transwell migration assay using transwell inserts with an 8 μm pore filter (BD bioscience, SanJose, CA, USA). Cells were with or without DOX and/or RES for 48 hours and then trypsinizated to cell suspension. Next, serum-free medium contained with 4 × 10 4 cells were seeded into the upper chamber while complete medium was added to the lower chamber. After incubation for 24 hours, the cells were fixed with paraformaldehyde for 15 minutes and stained with crystal violet (Beyotime Biotechnology, Beijing, China) for 10 minutes. Next, the cells on the upper surface of the membrane were wiped off, and cells on the lower membrane were surveyed by Leica microscope. Migration ability of cells was evaluated by the average number of migrated cells from four random fields. | Colony formation assay After untreated or treated with DOX and/or RES for 48 hours, cells were seeded into each well of 6-well culture plates with a density of 500 cells/well. After cultured in drug-free culture for another 14 days, cells were fixed with paraformaldehyde for 15 minutes and stained with crystal violet for 10 minutes to visualize colonies. Only ≥50 cells regarded as positive colonies were counted and compared. 
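As a brief aside on the CCK-8 readout described earlier in this section, absorbance values are usually converted to a relative viability before dose-response comparisons. The short sketch below shows that conversion; the blank and control normalization is a common convention assumed here rather than something stated in the Methods, and the OD values are purely illustrative.

```python
# Sketch: converting CCK-8 absorbance (OD450) readings to percent viability
# relative to the untreated control, after blank subtraction (assumed convention).
import numpy as np

def relative_viability(od_treated, od_control, od_blank):
    """Percent viability relative to the untreated control, blank-corrected."""
    return 100.0 * (np.asarray(od_treated) - od_blank) / (od_control - od_blank)

# illustrative OD450 values for one DOX concentration series (triplicate means)
od_blank   = 0.08                       # medium + CCK-8, no cells
od_control = 1.25                       # untreated cells
od_dox     = [1.10, 0.95, 0.70, 0.42]   # increasing DOX doses

print(relative_viability(od_dox, od_control, od_blank))
```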
| TUNEL assay For TUNEL assay, the cells were washed twice with PBS and fixed with formaldehyde for 20 minutes, then were permeabilized with 0.1% TritonX-100 for 10 minutes. After being washed in PBS for 5 minutes, cells were incubated with TdT reaction mixture at 37℃ in a dark humidified chamber for 60 minutes. Next, the cells were immersed in 2 × SSC buffer for 15 minutes to stop the reaction. After being washed with PBS, the cell nuclei were counterstained with DAPI (Beyotime Biotechnology). Finally, images were captured using a fluorescence microscope (TCS-SP2, Leica, Germany). | Immunofluorescence technique Cells were seeded in cofocal dishes and treated with the corresponding ways. Cells were fixed with paraformaldehyde for 30 minutes and then with 0.2% TritonX-100 for 10 minutes. After being washed with PBS, cells were blocked by normal goat serum supplemented with 10% albumin from bovine serum for 30 minutes. Then cells were incubated with the primary antibodies including anti-Vimentin (diluted to 1:500), E-cadherin (diluted to 1:200), and anti-β-catenin (diluted to 1:50) overnight at 4℃. After being washed with PBS, cells were incubated with anti-rabbit secondary antibody conjugated to Alexa-488 fluorescence (Invitrogen) for 2 hours at the room temperature. At last, the nuclei were stained with DAPI for 15 minutes and intracellular localization of target proteins were observed with a fluorescence microscope. | Co-immunoprecipitation assay Antibodies against SIRT1 and β-catenin were used to be incubated with protein A/G beads (ThermoFisher Scientific, MA, USA) for 6 hours, respectively. After being washed with TBS for three times, the antibodies conjugated beads were incubated with the cell lysates at 4℃ overnight. Then the beads were washed with TBS and centrifuged at 19 000 g for 5 minutes. After discarding the supernatant, the equivoluminal SDS buffer was added into the beads. Lastly, the beads were boiled for 5 minutes and the target proteins were detected by Western blotting. | Western blot analysis Cultured cells were lysed in RIPA buffer (Beyotime Biotechnology) directly and the concentration was determined by BCA Protein Assay Kit (Beyotime Biotechnology). Proteins with the same concentration were segregated on SDS-PAGE gels and transferred onto PVDF membranes (Millipore, Danvers, MA, USA). After blocked by 5% skim milk, the membrane was incubated with the primary antibodies at 4℃ overnight. The next day, the membrane was washed with TBS-T buffer and then incubated with appropriate secondary antibodies at 37℃ for 2 hours. Finally, the samples were detected by the ECL system (ThermoFisher). | Statistical analysis Data were expressed as means ± SD from at least three independent experiments. SPSS 19.0 software was used to perform statistical analysis. Student's t test was performed to evaluate the differences between individual groups. P values <0.05 were considered to be statistically significant and graphs were created with GraphPad Prism 5.0 software. | Effects of DOX and RES on breast cancer cells We detected the chemical sensitivity of MCF7 and MDA-MB-231 cells to DOX and RES treatment by CCK8 assay, respectively. Concentration gradient of DOX was from 0 to 10 μg/mL. The survival rate of MCF7 cells was inhibited by DOX, and the inhibition rate increased along with the increase in treatment time and concentration ( Figure 1A). However, DOX did not inhibit the survival of MDA-MB-231 cells in a dose-and time-dependent manner until its concentration reached 4 μg/mL. 
Besides this, the survival rate of MDA-MB-231 cells was still as high as 45% after a 7-day treatment with 2 μg/mL DOX, whereas MCF7 cells showed only 15% ( Figure 1B). Cells were then treated with RES at concentrations from 12.5 to 200 μmol L−1. Similarly, RES significantly inhibited the survival of MCF7 cells in a dose- and time-dependent manner, whereas RES had no obvious suppressive effect on MDA-MB-231 cells until its concentration exceeded 50 μmol L−1 ( Figure 1C). Consistent with this, the 7-day survival rate of MDA-MB-231 cells remained over 80% when treated with 25 μmol L−1 RES and about 60% with 50 μmol L−1 ( Figure 1D). | DOX-resistant MCF7/ADR cells exhibited an enhanced migratory phenotype As both DOX and RES have obvious inhibitory effects on MCF7 cells, we selected MCF7 cells and MCF7/ADR cells as suitable cell models to investigate the effects of RES on DOX-resistance in breast cancer. CCK8 assays showed that MCF7/ADR cells exhibited no significant change upon treatment with different concentrations of DOX, while MCF7 cells showed a visible decrease in cell viability ( Figure 2A). After treatment with a low dose of DOX (4 μg/mL) for 48 hours, MCF7 and MCF7/ADR cell nuclei were stained with DAPI. Morphological changes, including nuclear condensation and nuclear fragmentation, occurred in MCF7 cells, while no changes occurred in MCF7/ADR cells ( Figure 2B). Meanwhile, colony formation assays confirmed that MCF7 cells grew more slowly than MCF7/ADR cells under treatment with 4 μg/mL DOX ( Figure 2C). These results suggested that MCF7/ADR cells retained resistance to DOX while MCF7 cells were sensitive to it. Next, we investigated the relationship between the DOX-resistance of MCF7/ADR cells and their enhanced migratory phenotype. We assessed cell migration by the cell scratch test and transwell assay, and both results confirmed that the migration capacity of MCF7/ADR cells was greater than that of MCF7 cells ( Figure 2D-E). | MCF7/ADR cells showed significant EMT characteristics As cell migration and drug resistance are related to the EMT phenotype, we examined the EMT markers E-cadherin, N-cadherin, Vimentin, and β-catenin by Western blot ( Figure 3A) and immunofluorescence (IF) ( Figure 3B). The expression of the epithelial-specific marker E-cadherin was lower in MCF7/ADR cells, while the mesenchymal markers N-cadherin, Vimentin, and β-catenin increased significantly compared with MCF7 cells, indicating that the enhanced migration ability of MCF7/ADR cells was possibly due to a transformation in EMT phenotype. | RES combined with DOX effectively inhibited proliferation and migration of MCF7/ADR cells To investigate the effect of DOX combined with RES on the proliferation of MCF7/ADR cells, we treated cells with different concentrations of DOX and RES for 48 hours. The inhibitory effect in the combined group was significantly stronger than that in either single-drug group. By calculating the combination index (CI) of each group, we found that the CI of each combination group was always less than 1, with an average value of 0.45, indicating an obvious synergistic effect (Table 1). Thus, we treated cells with 4 μg/mL DOX or 50 μmol L−1 RES, either alone or in combination, to assess the cytotoxicity of RES on MCF7/ADR cells ( Figure 4A). In addition, we analyzed whether combined treatment could suppress the colony formation of MCF7/ADR cells.
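As an aside on the combination index reported above (Table 1), a minimal sketch of how a CI of this kind is computed is given below. The text does not state which CI formalism was used, so this assumes the common Chou-Talalay definition (CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism), and the single-agent equi-effective doses in the example are invented purely for illustration.

```python
# Sketch of a Chou-Talalay-style combination index. In practice the single-agent
# doses Dx producing the same effect level are read off fitted dose-effect curves.
def combination_index(d_dox, d_res, dx_dox, dx_res):
    """CI = d1/Dx1 + d2/Dx2 for a combination (d1, d2) matching effect level x."""
    return d_dox / dx_dox + d_res / dx_res

# illustrative numbers: the combination 4 ug/mL DOX + 50 umol/L RES reaches an
# inhibition level that would hypothetically require ~20 ug/mL DOX or ~160 umol/L RES alone
ci = combination_index(d_dox=4.0, d_res=50.0, dx_dox=20.0, dx_res=160.0)
print(f"CI = {ci:.2f}")   # ~0.51, i.e. synergistic under these assumed doses
```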
The results showed that combined treatment with DOX and RES had the strongest inhibitory effect on colony formation; also, RES treated alone had an inhibitory effect while DOX-treated group had no effects on that ( Figure 4B). The effect on cell apoptosis induced by DOX (4 μg/mL) and/or RES (50 μmol L −1 ) treatment was measured by TUNEL assay. It showed that there were no apoptotic cells observed in DOX-treated alone group while RES treated alone could trigger cell apoptosis. Meanwhile, combined treatment with 4 μg/mL DOX and 50 μmol L −1 RES induced cell apoptosis maximally ( Figure 4C). Next, the effect of RES on cell migration was detected by cell scratch test and transwell assay. In the same way, we treated cells with 4 μg/mL DOX or 50 μmol L −1 RES, either alone or in combination. Results of the cell scratch test suggested that DOX treatment alone had no significant effect on cell migration while RES decreased cell migration significantly in MCF7/ADR cells. Combination of DOX and RES treatment achieved a remarkably stronger migration-inhibitory effect than RES or DOX treated alone ( Figure 4D). Transwell assay also showed DOX treatment alone did not influence the transmembrane ability of MCF7/ADR cells whereas RES treated alone or combined with DOX, significantly decreased cell number that migrated through the membrane ( Figure 4E). To sum up, combined treatment with DOX and RES effectively inhibited cell proliferation, promoted cell apoptosis, and suppressed cell migration of MCF7/ADR cells. | RES reversed EMT phenotype of MCF7/ADR cells and promoted the expression of SIRT1 Previous results indicated that MCF7/ADR cells showed enhanced migratory ability associated with EMT phenotype. Thus, we explored whether the effect of RES on MCF7/ADR cells was through transforming EMT phenotype. We treated MCF7/ADR cells with or without 4 μg/ mL DOX or/and 50 μmol L −1 RES, then we investigated the expressions of EMT-related proteins by Western blot and IF technology. Increased expressions of Vimentin, N-cadherin, and β-catenin were observed in DOX-treated alone group while the treatment with RES alone decreased the expressions of Vimentin, N-cadherin, and β-catenin significantly compared to untreatment. Notably, in DOX and RES combined treatment group, RES antagonized DOXinduced upregulation of Vimentin and N-cadherin as well as β-catenin via Western blots ( Figure 5A). Furthermore, RES decreased expressions of Vimentin, N-cadherin, and β-catenin in a concentration-and time-dependent manner ( Figure 5B-C). Similar findings on expressions of Vimentin and β-catenin were observed by IF staining, and expression of E-cadherin was upregulated significantly in RES-treated group ( Figure 5D). These findings revealed that RES possibly changed the expressions of EMT-related molecules to resist DOX-induced EMT, leading to prohibit the migration ability and the growth of MCF7/ADR cells. Thus, to clarify in which way RES modulated DOX-resistance and enhanced EMT phenotype in MCF7/ADR cells was necessary. Because RES has been reported as the activator of SIRT1, we detected the expression of SIRT1 by Western blots. The result showed that RES induced an evident increased expression of SIRT1 ( Figure 5E), which was also upregulated along with the increase in concentration and time ( Figure 5F-G). The protein level of SIRT1 in MCF7/ADR cells was lower than that in MCF7 cells was detected ( Figure 5H). 
When MCF7 cells were treated with DOX of different concentration gradients (0, 2, 4, 6, 8, 10 μg/mL) at different time points (0, 3, 6, 12, 24 hours), the protein level of SIRT1 was always upregulated ( Figure 5I-J), reminding us of SIRT1's inhibitory effect on DOXresistance. To determine the SIRT1's role in RES-mediated inhibiting effects on MCF7/ADR cells, we generated MCF7/ADR-shSIRT1 cell line using shSIRT1 lentivirus to test cell proliferation, cell apoptosis, and cell migration after being treated with DOX and RES. Our results showed that shSIRT1 treatment significantly reversed inhibiting effects of RES on MCF7/ADR cells ( Figure S2). | RES regulated SIRT1/β-catenin signaling pathway in MCF7/ADR cells To illustrate how RES reverses EMT, we focused on β-catenin which occupied a decisive position in proliferation, apoptosis, and migration of tumor cells. Previous results from Western blots showed that RES decreased the expression of β-catenin, and promoted SIRT1 expression significantly. We sought to explore further whether SIRT1 modulated the expression of βcatenin in RES-treated cells. To do this, we detected decreased expression of β-catenin in SIRT1-overexpressed MCF7/ADR cells by Western blots and IF staining ( Figure 6A-B), which implicated an inverse relationship between the expressions of SIRT1 and β-catenin. To elucidate the mechanism by which SIRT1-repressed expression of β-catenin, we sought to test whether SIRT1 could affect the stability of β-catenin protein. It was known that β-catenin was degraded through ubiquitinmediated proteolysis in various cancers. Thus, we detected the role of SIRT1 on the degradation and ubiquitination of β-catenin in MCF7/ADR cells. We found that the protein level of β-catenin was restored when SIRT1-overexpressed MCF7/ ADR cells were treated with proteasome inhibitor MG132 ( Figure 6C). In addition, the half-life of β-catenin protein was significantly reduced in SIRT1-overexpressed MCF7/ADR cells compared with the empty vector group ( Figure 6D). These results similarly indicated that SIRT1 might inhibit the protein level of β-catenin through ubiquitin-mediated proteolysis. Indeed, the ubiquitination level of β-catenin protein was increased in SIRT1-overexpressed MCF7/ADR cells ( Figure 6E). It has been reported that phosphorylation of β-catenin was required for its ubiquitination and degradation by proteasome. Thus, we analyzed the phosphorylation of β-catenin when SIRT1 was overexpressed in MCF7/ADR cells, and observed a marked increase ( Figure 6F). In addition, Co-IP assay was performed and indicated an interaction between SIRT1 and β-catenin ( Figure 6G). These data together demonstrated that SIRT1 induced β-catenin degradation by promoting ubiquitinmediated proteolysis in MCF7/ADR cells. At the same time, we used MCF7/ADR-shSIRT1 cell line to confirm the important role of SIRT1/β-catenin pathway in RES-treated cells. We treated MCF7/ADR or MCF7/ADR-shSIRT1 cells with 4 μg/mL DOX and 50 μmol L −1 RES, then we detected the expressions of EMT-related proteins by Western blot. Being treated with shSIRT1 antagonized RES-induced deregulation of Vimentin and N-cadherin as well as β-catenin ( Figure 6H), which showed that RES reversed EMT in a SIRT1/β-catenin pathway dependent manner. | DISCUSSION In the present study, we demonstrated for the first time that RES reversed DOX-resistance, inhibited migration capacity of breast cancer DOX-resistant cells by modulating EMT phenotype and SIRT1/β-catenin pathway. 
We have shown that RES upregulated the expression of SIRT1, which consequently suppressed β-catenin and thereby led to its destruction. We describe a new mechanism underlying the degradation of β-catenin and suggest that this molecular action of RES may be exploited against DOX-resistance in breast cancer. For patients with advanced and metastatic cancers who have no opportunity for surgery, chemotherapy is considered the most effective therapy for alleviating symptoms and improving survival. DOX is a commonly used chemotherapeutic drug that has been criticized for severe side effects and acquired drug resistance, as well as for promoting the development of EMT. 15,16 EMT is a biological transformation of epithelial cells to mesenchymal cells through which cells lose cell polarity and cell-cell connections and thereby acquire more migratory and invasive properties. 17 EMT is an important pathological process for the migration and invasion of tumor cells derived from epithelial cells. 18 It not only promotes metastasis of cancer cells but also enhances the development of DOX-resistance. 19 EMT was reported to induce the overexpression of chemoresistance-related genes and thus lead to multidrug resistance in breast cancer. 20,21 DOX-induced EMT was also detected in gastric cancer cells and was inhibited along with the suppression of the β-catenin signaling pathway. 22 In addition, a selective inhibitor of β-catenin effectively inhibited EMT progression and enhanced the chemosensitivity of HER-2-positive gastric cancer cells to lapatinib, which targets HER-2. 23 A recent report suggested that, under treatment with cyclophosphamide, primary prostate tumor cells showed an EMT phenotype and chemoresistance, exhibited by reduced cell proliferation and increased expression of multidrug resistance genes. 24 All these data indicate that EMT occupies an important position in chemoresistance and promotes metastasis in cancers. In this study, MCF7/ADR cells were confirmed to have acquired DOX-resistance and enhanced migration ability, as shown in Figure 2. At the same time, higher expression of mesenchymal cell markers such as β-catenin, N-cadherin, and Vimentin, as well as lower expression of the epithelial cell molecule E-cadherin, was observed in MCF7/ADR cells compared with MCF7 cells (Figure 3). To overcome the DOX-resistance of breast cancer cells and alleviate its adverse effects, we used RES, an eminent phytochemical with various health benefits, in combined treatment with DOX. In gastric cancer MGC803 cells, RES served as a chemosensitizer to DOX and inhibited cell cycle progression by targeting PTEN, a known negative regulator of the PI3K/Akt pathway; PTEN has also been regarded as able to reverse drug resistance mediated by EMT. 11 In breast cancer, RES was reported to inhibit cell proliferation and increase the cellular influx of DOX. 25 Consistent with these previous studies, our data revealed that RES effectively inhibited cell proliferation and promoted cell apoptosis in MCF7/ADR cells.
F I G U R E 6 RES regulated the SIRT1/β-catenin signaling pathway in MCF7/ADR cells. (A) MCF7/ADR cells overexpressing SIRT1 were harvested for Western blot analysis of β-catenin (n = 3, **P < 0.01). (B) IF assay was used to detect the expression of β-catenin in MCF7/ADR cells overexpressing or not overexpressing SIRT1; β-catenin was stained in green and SIRT1 in red (Bar = 40 μm). (C) MCF7/ADR cells were transfected with SIRT1, then treated with 10 μmol L−1 MG132 for 6 h. Cells were harvested for Western blot analysis of β-catenin (Vector: control cells without overexpression of SIRT1, n = 3, **P < 0.01). (D) MCF7/ADR cells were transfected with SIRT1, then treated with 10 μg/mL CHX for different times (0, 1, 3, 6, 12, 24 h). Cells were harvested for Western blot analysis of β-catenin (Vector: control cells without overexpression of SIRT1, n = 3, *P < 0.05, **P < 0.01). (E) MCF7/ADR cells were transfected with SIRT1, then treated with 10 μmol L−1 MG132 for 6 h. Cells were harvested and Western blot analysis was performed to detect the ubiquitination level of β-catenin (Vector: control cells without overexpression of SIRT1). (F) MCF7/ADR cells were transfected with SIRT1, then Western blot analysis was performed to analyze the phosphorylation of β-catenin at the Ser33/37/Thr41 sites (Vector: control cells without overexpression of SIRT1, n = 3, **P < 0.01). (G) Co-IP assay indicated an interaction between SIRT1 and β-catenin. Protein A/G beads were incubated with anti-β-catenin or anti-SIRT1 antibodies for 6 h, respectively, followed by incubation with cell lysate overnight. Western blot analysis was then performed to detect the expression of SIRT1 and β-catenin, respectively. (H) MCF7/ADR or MCF7/ADR-shSIRT1 cells were treated with 4 μg/mL DOX and 50 μmol L−1 RES, then the expression of EMT-related proteins including Vimentin, N-cadherin, and β-catenin was investigated by Western blot analysis (n = 3, *P < 0.05, **P < 0.01).
Furthermore, RES suppressed the growth of breast cancer cells without affecting the mammary epithelial cell line MCF10A ( Figure S1). Moreover, we discovered that RES could suppress cell migration and alter the expression of EMT-related proteins, suggesting that RES had an inhibitory effect on EMT in MCF7/ADR cells (Figure 4). Resistance to DOX mediated by EMT in cancers is regulated by various signaling pathways, among which β-catenin signaling drew our attention. β-catenin was reported to regulate EMT-related proteins and promote the expression and activity of ZEB1, which in turn promotes DNA damage repair and DOX-resistance. 26 In the present study, β-catenin was found to be highly expressed in MCF7/ADR cells, which confirmed its important role in the DOX-resistance of breast cancer. Based on this, we found that RES downregulated the expression of β-catenin in a time- and dose-dependent manner in MCF7/ADR cells. RES is a well-known activator of SIRT1, which was reported to inhibit proliferation and migration through SIRT1-mediated posttranslational modification of PI3K/Akt signaling in hepatocellular carcinoma cells. 27 Another report revealed that SIRT1 promotes EMT in colorectal cancer by regulating Fra-1. 28 Byles V et al also reported that SIRT1 induces EMT by cooperating with EMT transcription factors and enhances prostate cancer cell migration and metastasis. 29 Thus, whether SIRT1 is an EMT-promoting factor or a suppressor needs to be further explored. Chu et al 30 reported that SIRT1 was upregulated in drug-resistant cancer cell lines and patients' tumor samples and increased the expression level of multidrug resistance protein 1, pointing to a role for SIRT1 in promoting resistance to chemotherapy drugs. However, Deng et al 31 found that the expression of SIRT1 was lower in prostate cancer, bladder cancer, ovarian cancer, and glioblastoma when compared with normal tissues.
Thus, SIRT1 can function either as a promoter or a suppressor of chemotherapy resistance, depending on the tumor type, cellular background, or microenvironment. In our results, the protein level of SIRT1 in MCF7/ADR cells was lower than that in MCF7 cells, suggesting an inhibitory effect of SIRT1 on DOX-resistance. Moreover, the protein level of SIRT1 was consistently upregulated under treatment with DOX ( Figure 5). Concordantly, our data suggested that RES activated the expression of SIRT1 in MCF7/ADR cells in a time- and dose-dependent manner ( Figure 5). Because EMT is associated with abnormal activation of the canonical Wnt/PI3K/Akt pathway, which is correlated with the stabilization of β-catenin, 32,33 we speculated that SIRT1 mediates β-catenin signaling to reverse EMT and consequently inhibits cell migration in RES-treated MCF7/ADR cells. In the results shown in Figure 6, we established an inverse link between SIRT1 and β-catenin and showed that upregulation of SIRT1 affected the expression level and stability of β-catenin in MCF7/ADR cells. When SIRT1 was overexpressed, the phosphorylation level of β-catenin increased, as did the ubiquitin-mediated proteolysis of β-catenin. Our data suggest that RES reverses DOX-resistance by upregulating SIRT1 and thereby downregulating β-catenin in MCF7/ADR cells.
Legume Lectin FRIL Preserves Neural Progenitor Cells in Suspension Culture In Vitro In vitro maintenance of stem cells is crucial for many clinical applications. The stem cell preservation factor FRIL (Flt3 receptor-interacting lectin) is a plant lectin extracted from Dolichos lablab and, in our previous studies, has been found to preserve hematopoietic stem cells in vitro for a month. To investigate whether FRIL can preserve neural progenitor cells (NPCs), it was supplemented into serum-free suspension culture media. FRIL made NPCs grow slowly, induced cell adhesion, and delayed neurosphere formation. However, FRIL did not initiate NPC differentiation according to immunofluorescence and semiquantitative RT-PCR results. In conclusion, FRIL could also preserve neural progenitor cells in vitro by inhibiting both cell proliferation and differentiation. INTRODUCTION Stem cells are unique cells that retain the ability to divide and proliferate throughout postnatal life to provide progenitor cells that can progressively become committed to producing specialized cells. Many cytokines have been reported as inducers of stem cell proliferation and differentiation; one of them is FL (Flt3 ligand), which can prolong the maintenance of primitive human cord blood cells in stromal-free suspension cultures [1][2][3][4]. However, FL-containing cultures still cause the loss of repopulating capacity of primitive cells [5][6][7][8][9][10]. We have extracted and identified a new legume lectin from hyacinth bean [11]. It is a ligand of the receptor tyrosine kinase Flt3, which is strictly expressed in hematopoietic and neural cell lines [12][13][14]. Gabriella reported a new flt3 ligand that has the ability to preserve hematopoietic progenitor cells in vitro for 4 weeks [15]. The gene and protein sequences of our protein are quite similar to those of Gabriella's protein [11], and our protein is also capable of long-term in vitro preservation of hematopoietic progenitor cells. In line with previous reports [15][16][17][18][19], we also named our protein FRIL and found that FRIL could maintain hematopoietic stem cells in the G0/G1 phase for at least 4 weeks. Low activation of FRIL's receptor Flt3 is considered the key mechanism through which FRIL preserves cells. Further experiments showed that FRIL inhibited hematopoietic stem/progenitor cell (HSPC) apoptosis by upregulating p53, reduced the activation of MAPK and the phosphorylation of STAT5, disturbed the formation of the AP-1 complex, and ultimately suppressed the expression/activation of cell cycle proteins [20][21][22]. Over the last decade, neural stem cell research has provided penetrating insights into plasticity and regenerative medicine. Stem cells have been isolated from both the embryonic and the adult nervous system [23,24]. Although progress has been made in understanding fundamental stem cell properties with the help of extrinsic signaling factors, the processes that determine cell fate or that maintain a cell's primitive state are still unknown. As a follow-up experiment to find out whether FRIL can preserve neural progenitor cells as it does HSPCs, we first identified the isolated neural progenitor cells (NPCs) and their flt3 expression levels, then supplemented FRIL into different NPC suspension culture media, observed cell morphological changes, and analyzed cell growth and cell surface marker expression; finally, we used semiquantitative RT-PCR to verify the changes in gene expression.
2 Clinical and Developmental Immunology NPCs isolation and cell culture E13.5 SD rats (experimental animals department, Peking University, Beijing, China) were sacrificed, then fetal embryonic telencephalon and mesencephalon were dissected under the dissecting microscope, after digestion in the sterilized tube, cells were seeded in 25 cm 2 flask for general culture and passage. Meanwhile statistically supplemented EGF(20 ng/mL), bFGF(20 ng/mL), and FRIL(300 μg/mL, 1 : 1000) into basic culture media(98% DMEM/F12 (1 : 1) and 2% B27/N2) to form eight groups of culture media (control, EGF, bFGF, FRIL, EGF&bFGF, EGF&FRIL, bFGF&FRIL, EGF&bFGF&FRIL). After second passage, equal numbers of NPCs were seeded with these different culture media. Furthermore, some NPCs were seeded in differentiation media which contains 5% fetal bovine serum. Culture media comprised a low percent of glucose and did not contain mannose which may exerted negative effect on FRIL's function [15]. Identification of NPCs and expression of flt3 on NPCs P2 neurospheres were transferred into poly-l-lysine (pll) precoated 24-well plate, incubated for 8 hours at 37 • C, and then stained by nestin for immunofluorescence for the purpose of NPCs identification. Meanwhile, some other neurospheres were treated with HRP conjugated FRIL in order to find out the proportion of flt3 expressed on NPCs. Cell counting assay Cells were harvested at day 7 following seeding, resuspended with PBS after digestion and centrifuge, and then seeded on 96-well plate together with 100 μL solution and 10 μL CCK-8(Cat. no. CK04, Dojindo Laboratories Co, Japan.) per well. Plate was incubated at 37 • C, for 4 hours, then the absorbance at 450 nm was measured using microplate reader with a reference wavelength at 650 nm(Bio-Rad Model 550). Data was analyzed by SAS 8.2 software. Morphological observation P3 NPCs were harvested and digested into single cell, cells were counted and seeded into 25 cm 2 flasks with a desity of 5 × 10 4 per mL. Each flask was supplemented with only one media described above, and NPC morphology was observed everyday. Immunofluorescence assay NPCs were seeded into pll precoated 24-well plate and incubated at 37 • C for 8 hours, fixed with 4% paraformaldehyde for 20 minutes, washed with PBS 5 minutes three times, 0.3% Triton X-100 20 minutes, PBS 5 minutes three times, then were incubated with primary antibody for 2 hours at 37 • C, washed, and incubated with secondary antibody for 30 minutes at 37 • C. Primary antibodies that were used TIFF files using AlphaImager3300 (Serial no. 981878, Alpha Inc, China.). Gel band quantitation analysis was performed via analysis tools of AlphaImager3300. NPCs identification, flt3 expression, and neural progenitor cell growth assay Primer (P0) cells began to form regular spheres on day 2 after dissection (Figure 1(a)), although some cell death was evident by the presence of cell fragments. More than 98% of the P2 cells were nestin-positive, indicating they were NPCs (Figures 1(c), 1(d)). While more than 95% of cells were positive for FRIL-HRP stain, indicating that most NPCs expressed flt3 protein (Figure 1(b)). FRIL delayed formation of neurospheres and induced cell adhesion When seeded, NPCs formed spheres later in media with FRIL than those without FRIL. Although there were neurospheres in FRIL containing media, these spheres were smaller than those formed in media without FRIL (Figures 3(b), 3(d), 3(f), 3(h)). 
Especially in FRIL medium, there were many single/two cells and some small neurospheres which mostly had 3 or 4 cells (Figure 3(b)). After P3 NPCs were seeded, there was no difference on cell growth in media with/without FRIL from day 1 to day 4 after seeding. However, after day 4, cells in FRIL containing media adhered to the flask surface ( Figure 3). And this phenomenon took place again when NPCs were cultured after passage. NPCs cultured in differentiation media showed difference from day 3 to day 7 after seeding. Cells in serum media seemed to begin to grow on day 3 and covered the bottom of well/flask on day 6/day 7, while cells in FRIL and serum containing media still spoted on the bottom or just formed some colonies. However, from day 7 on cells in FRIL and serum-containing media reached confluency. Most cells showed glial-like morphology. 1∼3 neuron-like cells per well existed in serum-containing media, while 13∼21 cells were present in FRIL and serum-containing media (data not shown). FRIL reduced GFAP expression while maintained NSE expression We immunostained cells, nestin for NPCs, GFAP for astrocytes, NSE/tubulin for neurons, and O4 for oligodendrocytes. Fewer FRIL treated NPCs differentiated into astrocytes than those not treated by FRIL, despite the fact that they had adhered to the plates (Figure 4). Oligodendrocytes were not found. NSE positive cells were not found in culture media containing EGF and EGF&FRIL too. The number of NSE positive cells in FRIL containing media was slightly higher than that without FRIL, although NSE positive rates were small. In addition, tubulin positive cells were not found in any media. Semiquantitive RT-PCR analysis affirmed FRIL inhibition in glial differentiation On the basis of previous findings [25][26][27], we chosed 5 genes, namely, GFAP and Notch (an indicator of GFAP), NeuroD1, NeuroD2, NeuroD3, and Ngn3 (which serves as upstream regulator of NeuroD) to examine gene expression of cells in eight different cell culture media. GFAP and nestin were expressed in each group, nestin expression was similar, while GFAP expression in FRIL containing groups was lower than those without FRIL ( Figure 5). Notch was not found in control and serum-treated groups, and its expression was quite opposite to that of GFAP. Neuronal gene expression was weak, there was no NeuroD3 expression, NeuroD2 was only expressed in 3 media, although expression in NeuroD3 and Ngn3 was much higher in FRIL and serum media than in serum media. Although FRIL might has no function on neuronal differentiation, these RT-PCR results suggested that FRIL inhibited neural progenitor cells from autodifferentiating into glials. DISCUSSION 98.6% nestin positive cells showed that to some extent all cells at P2 were NPCs. 95.7% flt3 expression confirmed the similarities between NPCs and HSCs, although this contradicted to Brazel's findings in which flt3 was expressed only in NPCs in dorsal root ganglions [12]. The similarities between NPCs and HSCs [28,29], especially flt3 expression, gave us hints on FRIL-NPCs study. Previous studies showed that the plant lectin FRIL was successfully extracted from Dolichos lablab using a mannose affinity column [11,15]. Although mannose or αmethyl α-D-mannoside were reported exerting a negative effects, FRIL had the ability of preserving HSCs in vitro for a long time [21,22]. 
In this study, we reported that FRIL can also preserve neural progenitor cells, as it does hematopoietic stem/progenitor cells: it slowed NPC aggregation and growth and, on the other hand, inhibited glial autodifferentiation of NPCs, although it might not affect their neuronal autodifferentiation. At present, the most widely studied neural differentiation pathways are Notch, Shh, and others [29,30]. Interpreted against these pathways, both the neuronal and glial RT-PCR results are consistent with the immunofluorescence findings. Whatever developmental processes FRIL is involved in, the mechanism is still unclear, and further research into the signaling pathways is needed. Building on our previous findings that FRIL can affect the activities of MAPK, STAT5, and p53, mechanistic studies of FRIL in neural cells, which may be more complicated, could be performed. Stem cells have been reported to be inducible to differentiate into neural cells; however, cytokines that effectively control these cells are still inadequate. FRIL's ability to functionally preserve hematopoietic and neural cells may significantly extend the range and time available for manipulating these cells. Experiments are underway to find out whether FRIL can improve applications in neural stem cell transplantation, the treatment of neurodegenerative diseases, and cell therapy, and FRIL's ability to functionally select, preserve, and synchronize hematopoietic cell populations could also provide useful hints.
Determinants of modern family planning methods in Ethiopia: A community-based, cross-sectional mixed methods study In 2019, Ethiopia had a total fertility rate of 4.2 births per woman, with the rates varying significantly across regions. The Federal Ministry of Health of Ethiopia announced "Ethiopia FP 2020" to address the high fertility rate, aiming to reduce it to 3.0 by 2020. This study aimed to identify the determinants of the use of modern family planning services in the Amhara, Oromia, and Somali regions. A community-based, cross-sectional mixed methods study was conducted, using quantitative and qualitative surveys. The quantitative survey data were subjected to binary logistic regression analyses. Participants included 4117 married men and women aged 15-65 years. This study found that participants in Oromia were 8.673 times more likely to use modern family planning methods than those in Somali. Participants in Amhara were 5.183 times more likely to use modern family planning methods than their Somali counterparts. Women, married respondents, and recipients of media messages were more likely to have experience with family planning. Family planning discussions with health extension workers and health professionals played a significant role in modern family planning use. These findings show that establishing a family planning strategy that considers the sociocultural characteristics of each region helps address regional contexts. Everyone in the Somali region, especially husbands and religious leaders, must be educated in family planning, and funds should be made available to deploy modern methods there. Introduction The world's population has grown by 80 million people every year since 2000, exceeding 7.7 billion in 2019 [1]. Global population growth is expected to be concentrated in sub-Saharan Africa, which is home to the world's poorest countries, making population control in the region important. In particular, among the 20 countries with the highest population growth rates, 19 are located in sub-Saharan Africa [1]. The total fertility rate in developed countries, as defined by the Organisation for Economic Co-operation and Development (OECD), has decreased to 1.7, while in sub-Saharan Africa it has increased to 4.6 [1]. In 2017, to address the growing population problems in developing countries, the UK Department for International Development, the Bill & Melinda Gates Foundation, and the United Nations Population Fund (UNFPA) invited recipient countries, international organizations, civilian groups, private organizations, and research institutes that were conducting family planning projects to hold an international conference and set common goals for worldwide population control [2,3]. In particular, they decided to support the right of an additional 120 million women of childbearing age in developing countries, including sub-Saharan Africa, to decide for themselves whether, when, and how many children to have. It was predicted that this would potentially prevent 100 million unwanted pregnancies, 50 million abortions, and 3 million maternal and child deaths [2]. According to the UNFPA report, family planning can reduce the total fertility rate and improve the health of mothers and newborns. In addition, it is expected to improve the economic level of families by increasing opportunities for women's social and economic activities and, in turn, lead to the economic development of countries [4].
In 2019, Ethiopia, the 12th most populous country in the world, had a total fertility rate of 4.2. It is expected to be the 8th most populous country in the world and 5th among developing countries by 2050 [1]. However, in Ethiopia, one in four people fail to meet their family planning needs [5]. Ethiopia faces polarization of the population between cities and local provinces, along with high total fertility rates and low family planning rates [6]. The difference in the total fertility rate between the Somali region, an area with high fertility, and Addis Ababa, an area with low fertility, was 5.4 in 2016 [5]. The Federal Ministry of Health (FMOH) of Ethiopia announced "Ethiopia FP 2020" to address the high fertility rate, with the goal of reducing the fertility rate to 3.0 by 2020 [7]. In March 2019, SDG Performance Funds, a meeting of the Ethiopian government, FMOH, and aid agencies in Ethiopia, discussed the budget allocation for family planning policies. The budget for Ethiopian family planning efforts has been a problem every year since the discussion started in 2012 [8]. The reason for this is that family planning resources were allocated in proportion to regions' populations. However, there was much disagreement, as the project tried to curb an increase in the population. Accordingly, the meeting noted the importance of a sound basis for budget distribution by region and emphasized the need for research on family planning projects at the regional level [9,10]. This study aims to identify the determinants of the use of modern family planning methods in three regions of Ethiopia. Study design This study used a community-based, cross-sectional mixed methods design. We selected the three largest out of the nine regions in Ethiopia: Amhara, Oromia, and Somali. The Amhara region is located in the northwestern part of Ethiopia, the Oromia region (which has the largest ethnolinguistic group in Ethiopia) is located in the central and western parts of the country [11], and the Somali region is located in the east and southeast. The target group was married women and men living with a partner such as a husband, wife, or cohabitee in the Amhara, Oromia, and Somali regions of Ethiopia. Sample size and data collection This study used two kinds of data: quantitative and qualitative. The sample size for the quantitative survey was determined using a double population proportion formula with the assumption of a 95% confidence level and 5% margin of error using G*Power software (version 3.0.10; Franz Faul, Kiel University, Kiel, Germany). The total sample size was estimated to be 2494 married couples, including a 10% non-response rate. Respondents in the three regions were selected using the multistage cluster sampling method. First, we selected towns in each region: six towns in Amhara, four in Oromia, and four in Somali. The six towns of the Amhara region included Gozamin, Bahirdar, Dessie, Debrebirhan, Debremarkos, and Dessie; Oromia's four towns included Jima, Adama, Chiro, and Assela; and Somali's four towns included Jijiga, Gode, Siti, and Kebridehar. Second, we randomly selected a rural area and an urban area in each target town. Third, for homogeneity, we randomly selected 20-40 married couples per Kebele (the smallest administrative unit) in the Amhara, Oromia, and Somali regions.
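To make the sample-size calculation mentioned above more concrete, the sketch below implements the standard two-population-proportion formula. The proportions and statistical power assumed by the authors are not reported, so the inputs here are placeholders; with the authors' actual inputs, the result should approximate what G*Power reports.

```python
# Sketch of the two-population-proportion sample-size formula (per-group n),
# with hypothetical inputs: p1, p2 and power are assumptions, not study values.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)          # two-sided 95% confidence
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n = n_per_group(p1=0.40, p2=0.47)          # hypothetical modern-FP use rates
print(n, ceil(n * 1.10))                   # per-group n, then inflated 10% for non-response
```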
The questionnaire used in this study was adapted from the USAID Demographic and Health Survey 2016 in Ethiopia [5].This structured questionnaire was first developed in English by reviewing diverse studies on each region.Thereafter, the questionnaire was translated into three languages: Aaharic, Affan Oromo, and Somali by experts who fluently speak both English and these local languages.The data collection was conducted through face-to-face interviews by enumerators from May 10 to May 31, 2017.We used the Open Data Kit survey application on a tablet PC.We obtained data from 4688 respondents living in Amhara, Oromia, and Somali regions, with a response rate of 93.9%.Subsequently, we excluded 571 respondents' answers due to missing or censored data.The final sample for this study consisted of 1634 people from the Amhara region, 2383 from the Oromia region, and 100 from the Somali region. We conducted qualitative in-depth interviews with the key informants living in the target regions.The questionnaire was developed based on the "Qualitative Study on Family Planning" by UNFPA."Religious and community leaders helped us select the target respondents from whom we randomly selected 24 (8 per region) for key informant interviews. Variables The dependent variable was family planning practice experience.To measure this, we asked the respondents, "Have you (or your spouse) ever used any modern family planning methods (Pills, IUCD, Injectable, Implants, Emergency contraceptives, Foam, Condom, Vasectomy) to delay or stop a pregnancy?"A response of "Yes" was coded as "1" and other responses were coded as "0."This question was taken from the Demographic and Health Survey 2016 (DHS 2016) by USAID [5]. The independent variables were socio-demographic characteristics including region, age, sex, religion, occupation, and marital status.We also selected family planning variables including a family planning discussion with partner (spouse, friend, family, health professional, health extension workers [HEW], religious leader, or community leader) and exposure to family planning messages. The qualitative interview questionnaire had nine themes and 29 questions.We used seven themes: 1)fertility intention, 2)knowledge, perceptions & beliefs about contraceptive methods, 3) experience of contraceptive use, 4) reasons for discontinuing the use of the method, 5) reasons for non-use of contraceptive, 6) reasons for unwanted pregnancy and seeking induced abortion, and 7) future intention on contraceptive use and enabling factors. Data analysis The quantitative data were subjected to binary logistic regression analysis to identify the determinants of modern family planning methods across the three regions.This analysis was performed using SPSS Statistics 24.0. The qualitative data were recorded using a digital recorder, and audio data were transcribed and then translated into English, saved as an MS Word file, and exported into Atlas-ti software for coding and categorization.Every item of the qualitative dataset was translated, and appropriate codes related to the evaluation objectives were assigned.Similar codes were grouped under the same category and a thematic content analysis was conducted.The themes were categorized into three types of reason: knowledge, interruption, and religion. Ethical approval Ethical approval was obtained from the Research Ethics Committee of the School of Public Health and the Institutional Review Board of the College of Health Science, Addis Ababa University (No. 
032/17/SHP).Moreover, we obtained a support letter from the FMOH, regional health bureaus, and the study districts' health officers. We obtained written informed consent from all participants for both the quantitative and qualitative surveys.We especially emphasized the respondents' right to refuse the interview.Couples were interviewed at the same time in separate places to avoid contaminating the data due to sensitive information being passed between spouses during the interviews. Quantitative analysis: Binary logistic regression in Oromia were 8.673 times more likely to use modern family planning methods compared to those from the Somali region (AOR = 8.673, 95% CI: [5.160-14.581],p < .001).Moreover, those in Amhara were 5.183 times more likely to use modern family planning methods compared to those from the Somali region (AOR = 5.183, 95% CI: [3.147-8.538],p < .001)(Table 2).Respondents who discussed family planning with their spouse were 2.426 times more likely to use modern family planning methods than those who did not.Family planning discussions with HEWs (2.430 times) or a health professional (1.806 times) were significant factors in using modern family planning methods. Respondents who received family planning messages through the media were 1.210 times more likely to use modern family planning methods than those who did not (AOR = 1.210, 95% CI: [1.043-1.404],p < .05). Qualitative analysis A total of 24 key informants participated in the study.Religious leaders, community leaders, HEWs, family planning (FP) service providers, and heads of health centers assisted with the qualitative interviews of this study. Some of the points that emerged from the qualitative interview respondents belonging to different religions are listed below.The results showed that family planning practices could be divided into three focal points: knowledge, interruption, and religion. Knowledge Some informants reported that they were aware of the importance of family planning for the health of both the mother and the newborn child.However, they said that opposition from their husband, spouse, or family stopped them from putting family planning strategies into practice. "Child-spacing benefits both the mother's and child's health, but often husbands don't like their wives when they have few children, and as a result they often marry a second wife to get many children.But I believe that child-spacing is good; for example, children who are born spaced are physically stronger than those who are born in succession" [Somali, female, non-user]. On the other hand, many respondents did not use family planning because of incomplete or incorrect knowledge.Some respondents reported that family planning practices would cause a physical disease (such as skin disease, back pain, etc.), illness, or decreased sexual feeling. "… I am male, but I hear people saying once a woman uses this family planning, she will never give birth again as her reproductive organ will be closed.In cases where she lacks adequate food, the pills cause damage to her belly, skin discoloration ('madiat') on her face, etc.These are some of the doubts or fears in our community."[Oromia,male, non-user]. 
"… The drug results in weight gain and has a sensation of burning.It also makes the women aggressive and results in quarrels between husband and wife.It decreases love by reducing the sexual feelings of females; they also complain about pain during intercourse.I know many people who divorced because of this problem…"[Amhara, male, Orthodox religious leader]. Interval in childbearing Some respondents stopped using family planning services because they faced problems like pain, disease, headaches.The following quote is from an interview with a 30-year-old woman who used family planning for 7 years after the birth of her second child: "…After the second child, she again used an injectable for some time but stopped using it because she thought that it caused aggression, headaches, an internal burning sensation, and her menses did not flow…"[Amhara, female, orthodox]. There were no answers from the Oromia and Somali regions on this topic. Religion Most respondents who did not practice family planning for religious reasons were Muslims.This is because the Muslim community considers family planning practices such as condoms, implants, UIDs, etc., as taboo under Sharia law.Therefore, Muslims have a low rate of practicing family planning.The following interviews were conducted with Muslims in the Amhara, Oromo, and Somali regions: "She aspires to have at least 10 children because she believes that having more children is more prestigious in the community and she will be respected more by her husband and his relatives.On the other hand, she believes that modern family planning methods are forbidden by Sharia and she has a plan to space her children using the calendar method and breast feeding, which are acceptable in Islam."[Somali,female, Muslim]. "….God said multiply and replenish the earth…children are gifts from God… when every child is born, he/she comes with his/her own fate/luck…"[Amhara, female, Orthodox]. "I believe that the only one who makes every decision of the household including family size and family planning use [is the man] since he is entitled under Sharia law."[Oromia, male, Muslim]. Discussion In this study, the respondents' experience rate of modern family planning methods was 68.9% on average, with 75.7% in Amhara, 65.0% in Oromia, and 33.0% in Somali.The respondents discussed family planning with their spouses the most (87.5%),followed by friends (29.9%), health professionals (19.6%),HEWs (8.5%), religious leaders (2.4%), family (0.7%), and community leaders (0.7%).This concurs with the extant DHS data which found that people discussed family planning mostly with their spouses (72.8%) [5].On the basis of region, family planning was discussed with spouses at the rate of 87.5% in Amhara, 88.9% in Oromia, and 54.0% in Somali region.According to DHS 2016, the rate in Amhara was 75.8%, 11.7% lower than in this study, and the rate in Oromia was 72.8%, or 16.1% lower than in this study.However, in Somali region (62.6%), the rate was 8.6% higher than in this study. 
The quantitative analysis showed that the rate of usage of modern family planning methods was significantly different for each region in Ethiopia.The use of modern family planning methods in Oromia was 8.673 times higher compared to the Somali region (AOR = 8.673, 95% CI: [5.160-14.583],p < .001)and in Amhara was 5.183 times higher compared to the Somali region (AOR = 5.183, 95% CI: [3.147-8.538],p < .001).These results were associated with Somali's total fertility rate of 7.2, twice that of Amhara (3.7) and more than 1.5 times that of Oromia (5.4) [11].According to the quantitative and qualitative analyses, because 98.7% of the local residents in the Somali region followed Islam and family planning (contraception) is prohibited under Sharia law, it is rarely used there.In particular, the use of condoms, implants, pills, and IUDs for family planning is prohibited, while only the calendar method and breastfeeding is permitted.In fact, the Quran clearly states, "Bring a lot of children for the prosperity of Islam", so there are limitations in implementing family planning policies for Muslims [12][13][14]. According to a study by Tigabu [13] on religious views on contraception in western Ethiopia, there is a belief that if a female Ethiopian Muslim dies with an implant, a female contraceptive device, in her body, her soul cannot go to heaven. On the other hand, there were some opinions that ran contrary to our qualitative findings and the findings of previous studies.One Muslim woman responded that family planning was necessary because she perceived that her children were born healthy because of family planning, which controlled the age gap of her children.This demonstrates that, even though women themselves recognize the need for family planning, they fail to implement it for religious reasons.There is a 3.4% level of use of family planning in Somali region, where 98.7% of the population is Muslim, while 79.0% of women are aware of the importance of family planning [5].There are, thus, many women who cannot practice family planning (contraception) for religious and doctrinal reasons, despite their wishes.Traditionally, Muslim husbands have all the decision-making power.A previous study on the use of condoms in Ethiopia found that because family planning in Muslim families was decided by the husband, women faced many limitations in using/practicing modern family planning methods on their own [15,16]. this study, the proportion of women who decided on family planning after consultation with their husband was 54.0% in the Somali region, lower than in Amhara (87.5%) and Oromia (88.9%).Therefore, in the Somali region, it is important to educate religious leaders and heads of the family (e.g., husbands) about the importance of family planning (contraception) to increase the rate of family planning practice, thereby lowering the total fertility rate.In fact, after conducting educational sessions on family planning with Muslim religious leaders and men in three groups, it was found that their perceptions changed, which lowered the fertility rate and gestational disease rates [17,18]. 
In Ethiopia, family planning projects such as education for health professionals, HEWs, and religious leaders; media education; and the distribution of brochures are carried out by various organizations including international organizations and NGOs, research institutes, and civic groups [8,[19][20][21].Among them, the Small, Happy, and Prosperous family in Ethiopia, conducted by the Korea International Cooperation Agency, was identified as the most effective intervention for delivering family planning educational messages through the media [22,23].However, the Somali region has the lowest message exposure (radio: 4.8%, TV: 6.6%), so there are limitations in delivering family planning education content through the media [5].Therefore, family planning education in the Somali region should involve 1) supplying equipment to send messages (to improve message exposure), and 2) providing direct education to religious leaders, husbands, and medical professionals.The details are as follows. First, providing equipment (radios, TVs) to listen to messages will help improve message exposure in the Somali region and increase the effectiveness of education delivered through the media.In this study, the number of people with experiences of receiving family planning messages through the media (exposure experience) was 1.12 times higher (AOR = 1.120, 95% CI: [1.043-1.404],p < .05)than the number of people with no experience of using family planning services.In particular, the transmission of messages has proved to be an effective and widely used intervention in previous studies [23,24]. Second, education to increase awareness of family planning is necessary for family decision-makers and certain influential groups such as husbands, health professionals, and HEWs.In this study, the use of modern family planning methods depended on with whom these discussions were held.Those that discussed whether to use modern family planning methods with HEWs, which had the highest influence, were 2.430 times (AOR = 2.430, 95% CI: [1.698-3.476],p < .001)more likely to use the services compared to those that did not, and this result was statistically significant.This is because Ethiopian females were worried about getting scolded by their husbands.They therefore relied on and consulted the most with HEWs who were of the same gender and not a family member [25].The second most influential group to discuss using modern family planning methods were spouses; the use of modern family planning methods after a discussion with a spouse increased by 2.426 times (AOR = 2.426, 95% CI: [1.968-2.990],p < .0001)as compared to the absence of a discussion.The third most influential group was health professionals, where the use of family planning services was 1.448 times higher (AOR = 1.488, 95% CI: [1.185-1.869],p < .001)after a discussion with them. 
On the other hand, discussions with friends, religious leaders, and community leaders were not significantly related to the use of modern family planning methods. Usage of these services increased when respondents consulted a decision-maker or an influential health professional, but in some cases false information prevented people from using them. According to the interviews in the qualitative survey, a male respondent heard from his neighbors that using modern family planning methods would block the female reproductive system and prevent his wife from future childbirth, and he cited this as the reason for not using it. This shows that incorrect knowledge among major decision-makers such as husbands can act as a barrier to family planning.
One limitation of this study was the discrepancy between the responses to the qualitative and quantitative surveys. The respondents of the qualitative survey were interviewed regardless of whether they had participated in the quantitative survey. To minimize errors in the quantitative and qualitative analytical results, there needs to be consistency in the respondents [26]. However, as we could not conduct corresponding qualitative surveys for all 4117 respondents of the quantitative survey, the qualitative respondents were selected via conditional extraction. Moreover, the Somali region sample had only 100 respondents, because Muslim women avoided talking with outsiders and did not want to participate in the survey as it was about family planning. In future studies, it is necessary to minimize errors in the analysis by using the same respondents in the qualitative and quantitative surveys. Moreover, this study was conducted in three out of nine regions of Ethiopia: Amhara, Oromia, and Somali. In future studies, it is necessary to minimize errors by conducting surveys in all nine regions.
Conclusion
This study aimed to identify the determinants of the use of modern family planning methods in Ethiopia using a mixed-methods research design based on both quantitative and qualitative surveys. This study was conducted in the three largest regions of Ethiopia: Amhara, Oromia, and Somali. Compared to the Somali region, the use of modern family planning methods was significantly higher in the Amhara region (5.183 times higher) and the Oromia region (8.673 times higher). The reason, confirmed through the qualitative research, was that 98.7% of those in the Somali region were Muslims and there were limits on using modern family planning methods due to the absolute right of husbands to decide on family planning, the prevailing religious doctrine, and a lack of knowledge. In particular, although the results showed that people were aware of the importance of family planning, for religious reasons it was not possible for them to use it. In addition, the use of modern family planning methods differed significantly depending on the person consulted. When participants consulted with their husband on whether to use modern family planning methods, usage increased by 2.426 times (a significant difference) compared to those that did not. When they consulted with a religious leader, usage increased by 1.806 times (not statistically significant). In addition, those that listened to a family planning message through the media had an increased usage of modern family planning methods by 1.210 times (a significant difference).
Thus, it is necessary for Ethiopia to establish a family planning strategy that takes into consideration the cultural and social characteristics of each region. In particular, the Somali region needs to educate husbands and religious leaders about the importance of family planning. As there exists a shortage of media equipment, supplying it or organizing a common radio/TV spot is necessary to enable access to family planning messages.
Ethics approval and consent to participate
Ethical approval was obtained from the Research Ethics Committee of the School of Public Health and the Institutional Review Board of the College of Health Science, Addis Ababa University (No. 032/17/SHP). We obtained written informed consent from all participants for both the quantitative and qualitative surveys. In particular, this study includes women aged 15-19 who are mothers with children. In the case of women aged 15-19, we received consent from their parents. Because the questionnaire included family planning (contraception), sexual issues, sexual knowledge, intercourse experience, etc., parents' consent was obtained because the content may be inappropriate for women under 19 years. We especially emphasized the respondents' right to refuse the interview. Couples were interviewed at the same time in separate places to avoid contaminating the data through sensitive information being passed between spouses during the interviews.
Consent for publication
Not applicable.
Table 1 Respondent characteristics by region.
Based on gender, women were 1.764 times more likely to use modern family planning methods than men (AOR = 1.764, 95% CI: [1.502-2.072], p < .001). Respondents whose occupation was housework were 0.746 times as likely to use modern family planning methods as the no-work group (AOR = 0.746, 95% CI: [0.566-0.983], p < .05); conversely, those who were unemployed were 1.34 times more likely to use modern family planning methods compared to houseworkers. Daily workers were 1.531 times more likely to use modern family planning methods than respondents who were unemployed (AOR = 1.531, 95% CI: [0.999-2.346], p < .001). Married respondents were 2.277 times more likely to use modern family planning methods than respondents who lived with a partner but were not married.
Table 2 Binary logistic regression analysis of family planning (FP) experiences in the Amhara, Oromia, and Somali regions.
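The adjusted odds ratios quoted above and in Table 2 come from a binary logistic regression whose coefficients are exponentiated. As a rough illustration of that calculation (not the study's actual data or model specification), the following sketch fits such a model on made-up survey records with statsmodels; all variable names, categories, and values are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data, not the study's dataset): fit a binary
# logistic regression and report exponentiated coefficients as adjusted odds
# ratios (AOR) with 95% confidence intervals, as in Table 2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "uses_fp":        rng.integers(0, 2, n),   # 1 = has used modern family planning
    "region":         rng.choice(["Somali", "Amhara", "Oromia"], n),
    "discussed_hew":  rng.integers(0, 2, n),   # discussed FP with a HEW
    "media_exposure": rng.integers(0, 2, n),   # heard an FP message on radio/TV
})

# Somali as the reference category mirrors the comparisons quoted in the text.
model = smf.logit(
    "uses_fp ~ C(region, Treatment(reference='Somali')) + discussed_hew + media_exposure",
    data=df,
).fit(disp=False)

aor = np.exp(model.params)      # adjusted odds ratios
ci = np.exp(model.conf_int())   # 95% CI on the odds-ratio scale
print(pd.DataFrame({"AOR": aor, "CI_low": ci[0], "CI_high": ci[1],
                    "p": model.pvalues}).round(3))
```

Each AOR is read relative to the chosen reference level (here the Somali region, or the "no" category of a binary covariate), which is how the 8.673 and 5.183 figures for Oromia and Amhara are to be interpreted.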
5,898.2
2022-06-01T00:00:00.000
[ "Economics" ]
Improved maize reference genome with single-molecule technologies
An improved reference genome for maize, using single-molecule sequencing and high-resolution optical mapping, enables characterization of structural variation and repetitive regions, and identifies lineage expansions of transposable elements that are unique to maize. Supplementary information: The online version of this article (doi:10.1038/nature22971) contains supplementary material, which is available to authorized users.
research, which will enable increases in yield to feed the growing world population. The current assembly of the maize genome, based on Sanger sequencing, was first published in 2009 (ref. 3). Although this initial reference enabled rapid progress in maize genomics 1, the original assembly is composed of more than 100,000 small contigs, many of which are arbitrarily ordered and oriented, markedly complicating detailed analysis of individual loci 6 and impeding investigation of intergenic regions crucial to our understanding of phenotypic variation 7,8 and genome evolution 9,10. Here we report a vastly improved de novo assembly and annotation of the maize reference genome (Fig. 1). On the basis of 65× single-molecule real-time (SMRT) sequencing (Extended Data Fig. 1), we assembled the genome of the maize inbred line B73 into 2,958 contigs, in which half of the total assembly is made up of contigs larger than 1.2 Mb (Table 1, Extended Data Fig. 2a). The assembly of the long reads was then integrated with a high-quality optical map (Extended Data Fig. 1, Extended Data Table 1) to create a hybrid assembly consisting of 625 scaffolds (Table 1). To build chromosome-level super-scaffolds, we combined the hybrid assembly with a minimum tiling path generated from the bacterial artificial chromosomes (BACs) 11 and a high-density genetic map 12 (Extended Data Fig. 2b). After gap filling and error correction using short sequence reads, the total size of the maize B73 RefGen_v4 pseudomolecules was 2,106 Mb. The new reference assembly has 2,522 gaps, of which almost half (n = 1,115) have optical map coverage, giving an estimated mean gap length of 27 kb (Extended Data Fig. 2c). The new maize B73 reference genome has 240-fold higher contiguity than the recently published short-read genome assembly of maize cultivar PH207 (contig N50: 1,180 kb versus 5 kb) 13. Comparison of the new assembly to the previous BAC-based maize reference genome assembly revealed more than 99.9% sequence identity and a 52-fold increase in the mean contig length, with 84% of the BACs spanned by a single contig from the long-read assembly. Alignment of chromatin immunoprecipitation followed by sequencing (ChIP-seq) data for the centromere-specific histone H3 (CENH3) 14 revealed that centromeres are accurately placed and largely intact. Several previously identified 15 megabase-sized misoriented pericentromeric regions were also corrected (Extended Data Fig. 3a, b). Moreover, the ends of the chromosomes are properly identified on 14 out of the 20 chromosome arms based on the presence of tandem telomeric repeats and knob 180 sequences (Extended Data Fig. 3a, c). Our assembly made substantial improvements in the gene space, including resolution of gaps and misassemblies and correction of the order and orientation of genes. We also updated the annotation of our new assembly, resulting in consolidation of gene models (Extended Data Fig. 4).
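The contiguity comparison above is phrased in terms of contig N50 (1,180 kb versus 5 kb). As a generic reminder of what that statistic measures, here is a minimal N50 sketch on made-up contig lengths (the values are illustrative, not those of either assembly).

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L contain at least
    half of the total assembled bases."""
    lengths = sorted(contig_lengths, reverse=True)
    total = sum(lengths)
    running = 0
    for length in lengths:
        running += length
        if 2 * running >= total:
            return length
    return 0

# Toy example (illustrative lengths in bp):
print(n50([1_200_000, 900_000, 500_000, 120_000, 30_000]))  # -> 900000
```

Half of the assembled bases lie in contigs at least this long, which is why collapsing more than 100,000 short contigs into a few thousand long ones raises the N50 so dramatically.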
Newly published full-length cDNA data 4 improved the annotation of alternative splicing by more than doubling the number of alternative transcripts from 1.6 to 3.3 per gene (Extended Data Fig. 5a), with about 70% of genes supported by the full-length transcripts. Our reference assembly also vastly improved the coverage of regulatory sequences, decreasing the number of genes exhibiting gaps in the 3-kb region(s) flanking coding sequence from 20% to < 1% (Extended Data Fig. 5b). The more complete sequence enabled notable improvements in the annotation of core promoter elements, especially the TATA box, CCAAT-box, and Y patch motifs (Supplementary Information). Quantitative genetic analyses have shown that polymorphisms in regulatory regions explain a substantial majority of the genetic variation for many phenotypes 7,8, suggesting that the new reference will markedly improve our ability to identify and predict functional genetic variation. After its divergence from Sorghum, the maize lineage underwent genome doubling followed by diploidization and gene loss. Previous work showed that gene loss is biased towards one of the parental genomes 3,16, but our new assembly and annotation instead suggest that 56% of syntenic sorghum orthologues map uniquely to the dominant maize subgenome (designated A, total size 1.16 Gb), whereas only 24% map uniquely to subgenome B (total size 0.63 Gb). Gene loss in maize has primarily been considered in the context of polyploidy and functional redundancy 16, but we found that despite its polyploidy, maize has lost a larger proportion (14%) of the 22,048 ancestral gene orthologues than any of the other four grass species evaluated to date (Sorghum, rice, Brachypodium distachyon and Setaria italica; Extended Data Fig. 6). Nearly one-third of these losses are specific to maize, and analysis of a restricted high-confidence set revealed enrichment for genes involved in biotic and abiotic stresses (Extended Data Table 2), for example, NB-ARC domain disease-resistance genes 17 and the serpin protease inhibitor involved in pathogen defence and programmed cell death 18. Transposable elements were first reported in maize 19 and have since been shown to have important roles in shaping genome evolution and gene regulatory networks of many species 20. Most of the maize genome is derived from transposable elements 3,21, and careful study of a few regions has revealed a characteristic structure of sequentially nested retrotransposons 21,22 and the effect of deletions and recombination on retrotransposon evolution 23. In the annotation of the original maize assembly, however, fewer than 1% of long terminal repeat (LTR) retrotransposon copies were intact 24. We applied a new homology-independent annotation pipeline to our assembly (Extended Data Table 3). To understand the evolutionary history of maize LTR retrotransposons, we also applied our annotation pipeline to the sorghum reference genome, and used reverse transcriptase protein domain sequences that were accessible owing to the improved assembly of the internal protein-coding domains of maize LTR retrotransposons to reconstruct the phylogeny of maize and sorghum LTR retrotransposon families. Despite a higher overall rate of diversification of LTR transposable elements in the maize lineage, consistent with its larger genome size, differences in LTR retrotransposon content between genomes were primarily the result of marked expansion of distinct families in both lineages (Fig. 2).
Maize exhibits tremendous genetic diversity 25, and both nucleotide polymorphisms and structural variations have important roles in its phenotypic variation 10,26. However, genome-wide patterns of structural variation in plant genomes are difficult to assess 27, and previous efforts have relied on short-read mapping, which misses the vast majority of intergenic spaces where most rearrangements are likely to occur 10. To investigate structural variation at a genome-wide scale, we generated optical maps (Extended Data Table 1) for two additional maize inbred lines: the tropical line Ki11, one of the founders of the maize nested association mapping (NAM) population 28, and W22, which has served as a foundation for studies of maize genetics 29. Owing to the high degree of genomic diversity among these lines, only 32% of the assembled 2,216 Mb map of Ki11 and 39% of the 2,280 Mb W22 map could be mapped to our new B73 reference via common restriction patterns (Table 2, Fig. 3a and Extended Data Fig. 7). The high density of alignments across and near many of the exceedingly retrotransposon-rich centromeres reflects the comparatively low genetic diversity of most centromeres in domesticated maize 15 and illustrates the ability of the combined optical mapping/single-molecule sequencing methodology to traverse large repeat-rich regions. Within the aligned regions, approximately 32% of the Ki11 and 26% of the W22 optical maps exhibited clear evidence of structural variation, including 3,408 insertions and 3,298 deletions (Table 2). The average indel size was approximately 20 kb, with a range from 1 kb to over 1 Mb (Fig. 3b). More than 90% of the indels were unique to one inbred or the other, indicating a high level of structural diversity in maize. As short-read sequence data are available from both Ki11 and W22 (ref. 10), we analysed 1,451 of the largest (> 10 kb) deletions and found that 1,083 were supported by a clear reduction in read depth (Fig. 3c). The confirmed deletions occurred in regions of low gene density (4.4 genes per megabase compared to a genome-wide average of 18.7 genes per megabase). One-third (83 out of 257) of the genes missing in Ki11 or W22 lack putative orthologues in all four grasses (rice, sorghum, Brachypodium and Setaria), consistent with previous data 30. Although maize is often considered to be a large-genome crop, most major food crops have even larger genomes with more complex repeat landscapes 2. Our improved assembly of the B73 genome, generated using single-molecule technologies, demonstrates that additional assemblies of other maize inbred lines and similar high-quality assemblies of other repeat-rich and large-genome plants are feasible. Further high-quality assemblies will in turn extend our understanding of the genetic diversity that forms the basis of the phenotypic diversity in maize and other economically important plants. Online Content: Methods, along with any additional Extended Data display items and Source Data, are available in the online version of the paper; references unique to these sections appear only in the online paper.
Methods
No statistical methods were used to predetermine sample size. The experiments were not randomized, and investigators were not blinded to allocation during experiments and outcome assessment. Whole-genome sequencing using SMRT technology. DNA samples for SMRT sequencing were prepared using maize inbred line B73 from NCRPIS (PI550473), grown at the University of Missouri.
Seeds of this line were deposited at NCRPIS (tracking number PI677128). Etiolated seedlings were grown for 4-6 days in ProMix at 37 °C in darkness to minimize chloroplast DNA. Batches of ~ 10 g were snapfrozen in liquid nitrogen. DNA was extracted following the PacBio protocol 'Preparing Arabidopsis Genomic DNA for SizeSelected ~ 20 kb SMRTbell Libraries' (http://www.pacb.com/wpcontent/uploads/2015/09/SharedProtocol PreparingArabidopsisDNAfor20kbSMRTbellLibraries.pdf). Genomic DNA was sheared to a size range of 15-40 kb using either Gtubes (Covaris) or a Megarupter device (Diagenode), and enzymatically repaired and converted into SMRTbell template libraries as recommended by Pacific Biosciences. In brief, hairpin adapters were ligated, after which the remaining damaged DNA fragments and those without adapters at both ends were elimi nated by digestion with exonucleases. The resulting SMRTbell templates were sizeselected by Blue Pippin electrophoresis (Sage Sciences) and templates rang ing from 15 to 50 kb, were sequenced on a PacBio RS II instrument using P6C4 sequencing chemistry. To acquire long reads, all data were collected as either 5 or 6h sequencing videos. Construction of optical genome maps using the Irys system. Highmolecular mass genomic DNA was isolated from 3 g of young ear tissue after fixing with 2% formaldehyde. Nuclei were purified and lysed in embedded agarose as previously described 31 . DNA was labelled at Nt.BspQI sites using the IrysPrep kit. Molecules collected from BioNano chips were de novo assembled as previously described 32 using 'optArgument_human' . De novo assembly of the genome sequencing data. De novo assembly of the long reads from SMRT Sequencing was performed using two assemblers: the Celera Assembler PBcR -MHAP pipeline 33 and Falcon 34 with different parameter settings. Quiver from SMRT Analysis v2.3.0 was used to polish base calling of contigs. The three independent assemblies were evaluated by aligning with the optical genome maps. Contamination of contigs by bacterial and plasmid genomes was eliminated using the NCBI GenBank submission system 35 . Curation of the assembly, including resolution of conflicts between the contigs and the optical map and removal of redundancy at the edges of contigs, is described in the Supplementary Information. Hybrid scaffold construction. To create hybrid scaffolds, curated sequence con tigs and optical maps were aligned and merged with RefAligner 32 (P < 1 × 10 −11 ). These initial hybrid scaffolds were aligned again to the sequence contigs using a less stringent P value (1 × 10 −8 ), and those contigs not previously merged were added if they aligned over 50% of their length and without overlapping previously merged contigs, thereby generating final hybrid scaffolds. Pseudomolecule construction. Sequences from BACs on the physical map that were used to build the maize v3 pseudomolecules were aligned to contigs using MUMMER package 36 with the following parameter settings: 'l(minimum length of a single match) 100 c(the minimum length of a cluster of matches) 1000' . To only use unique hits as markers, alignment hits were filtered with the following parameters: 'i(the minimum alignment identity) 98 l(the minimum alignment length) 10000' . Scaffolds were then ordered and oriented into pseudochromo somes using the order of BACs as a guide. For quality control, we mapped the SNP markers from a genetic map built from an intermated maize recombinant inbred line population (Mo17 × B73) 10 . 
Contigs with markers not located in pseudochromosomes from the physical map were placed into the AGP (A Golden Path) using the genetic map. Further polishing of pseudomolecules. Raw pseudomolecules were subjected to gap filling using PBJelly (maxTrim = 0, minReads = 2) and polished again using Quiver (SMRT Analysis v2.3.0). To increase the accuracy of the base calls, we performed two lanes of sequencing on the same genomic DNA sample (library size = 450 bp) using an Illumina 2500 rapid run, which generated about 100-fold 2 × 250 paired-end (PE) data. Reads were aligned to the assembly using BWA-MEM 37. Sequence error correction was performed with the Pilon pipeline 38, after aligning reads with BWA-MEM 37 and parsing with SAMtools 39, using sequence and alignment quality scores above 20. Annotation. For comprehensive annotation of transposable elements, we designed a structural identification pipeline incorporating several tools, including LTRharvest 40. The MAKER-P pipeline was used to annotate protein-coding genes 46, integrating ab initio prediction with publicly available evidence from full-length cDNA 47, de novo assembled transcripts from short-read mRNA sequencing (mRNA-seq) 48, isoform sequencing (Iso-Seq) full-length transcripts 14, and proteins from other species. The gene models were filtered to remove transposons and low-confidence predictions. Additional alternative transcript isoforms were obtained from the Iso-Seq data. Further details on annotations, core promoter analysis, and comparative phylogenomics are described in the Supplementary Information. Structural variation. Leaves were used to prepare high-molecular-mass DNA, and optical genome maps were constructed as described above for B73. Structural variant calls were generated based on alignment to the B73 v4 reference chromosomal assembly using the multiple local alignment algorithm (RefSplit) 32. A structural variant was identified as an alignment outlier 32,49, defined as two well-aligned regions separated by a poorly aligned region with a large size difference between the reference genome and the map or by one or more unaligned sites, or alternatively as a gap between two local alignments. A confidence score was generated by comparing the non-normalized P values of the two well-aligned regions and the non-normalized log-likelihood ratio 50 of the unaligned or poorly aligned region. With a confidence score threshold of 3, RefSplit is sensitive to insertions and deletions as small as 100 bp (events smaller than 1 kb are generally compound or substitutions and include label changes, not just spacing differences) and to other changes such as inversions and complex events, which could be balanced. Insertion and deletion calls were based on an alignment outlier P-value threshold of 1 × 10 −4. Insertions or deletions that crossed gaps in the B73 pseudomolecules, or that were heterozygous in the optical genome maps, were excluded. Considering the resolution of the BioNano optical map, only insertions and deletions larger than 100 bp were used for subsequent analyses. To obtain high-confidence deletion sequences, sequencing reads from the maize HapMap2 project 8 for Ki11 and W22 were aligned to our new B73 v4 reference genome using Bowtie2 (ref. 51). Read depth (minimum mapping quality > 20) was calculated in 10-kb windows with a step size of 1 kb. Windows with read depth below 10 in Ki11 and 20 in W22 (sequencing depths for Ki11 and W22 were 2.32× and 4.04×, respectively) in the deleted region were retained for further analysis.
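The read-depth confirmation of large deletions described above (10-kb windows with a 1-kb step, keeping windows whose depth falls below a line-specific threshold) can be sketched roughly as follows; the per-base depth array, window parameters, and thresholds are placeholders standing in for the actual pipeline outputs rather than the published code.

```python
import numpy as np

def low_depth_windows(per_base_depth, window=10_000, step=1_000, max_depth=10):
    """Return start positions of windows whose mean coverage is below max_depth.

    per_base_depth: 1-D array of per-base coverage over a candidate deletion
    (mapping-quality filtering is assumed to have been done upstream).
    """
    kept = []
    for start in range(0, max(len(per_base_depth) - window + 1, 1), step):
        if per_base_depth[start:start + window].mean() < max_depth:
            kept.append(start)
    return kept

# Toy example: a 30-kb region whose middle 10 kb has almost no coverage.
depth = np.concatenate([np.full(10_000, 25), np.full(10_000, 1), np.full(10_000, 25)])
print(low_depth_windows(depth)[:5])   # windows overlapping the low-coverage stretch
```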
Data availability. Raw reads, genome assembly sequences, and gene annotations have been deposited at the NCBI under BioProject number PRJNA10769 and BioSample number SAMN04296295. PacBio whole-genome sequencing data and Illumina data were deposited in the NCBI SRA database under accessions SRX1472849 and SRX1452310, respectively. The GenBank accession number of the genome assembly and annotation is LPUQ00000000. A genome browser, including genome feature tracks, and an FTP site are available from Gramene: http://ensembl.gramene.org/Zea_mays/Info/Index. All other data are available from the corresponding author upon reasonable request.
3,901.8
2017-06-01T00:00:00.000
[ "Agricultural and Food Sciences", "Biology", "Environmental Science" ]
Hydrodynamization in hybrid Bjorken flow attractors Hybrid fluid models, consisting of two sectors with more weakly and more strongly self-interacting degrees of freedom coupled consistently as in the semi-holographic framework, have been shown to exhibit an attractor surface for Bjorken flow. Retaining only the simple viscid fluid descriptions of both sectors, we find that, on the attractor surface, the hydrodynamization times of both subsectors decrease with increasing total energy density at the respective point of hydrodynamization following a conformal scaling, reach their minimum values, and subsequently rise rapidly. The minimum values are obtained when the respective energy densities are of the order of the inverse of the dimensionful inter-system coupling. Restricting to attractor curves which can be matched to glasma models at a time set by the saturation scale for both $p$-$p$ and Pb-Pb collisions, we find that the more weakly coupled sector hydrodynamizes much later, and the strongly coupled sector hydrodynamizes earlier in $p$-$p$ collisions, since the total energy densities at the respective hydrodynamization times of these sectors fall inside and outside of the conformal window. This holds true also for phenomenologically relevant solutions that are significantly away from the attractor surface at the time we match to glasma models. E Different initial ratios of subsector energy densities 44 1 Introduction One of the most remarkable discoveries in non-equilibrium dynamics recently has been that quantum many-body systems can hydrodynamize far away from equilibrium [1][2][3][4][5][6][7][8].Hydrodynamization refers to the stage of dynamical evolution in which the expectation values of the energy-momentum tensor and conserved currents start following the constitutive relations of hydrodynamics to a very good approximation.In fact, hydrodynamization can happen in both weakly coupled and strongly coupled systems even when they are far away from equilibrium and have large pressure anisotropies [9].Theoretically, hydrodynamization can be formalized in terms of a hydrodynamic attractor in phase space which is built out of a resummation of the divergent series of the late time effective hydrodynamic expansion such that the time-dependent energy-momentum tensor approaches that of the attractor for any arbitrary initial condition [3,7,[10][11][12][13][14][15][16][17][18][19][20][21], see [22] for a recent review.The hydrodynamic attractor in diverse conformal systems, irrespective of whether they are weakly or strongly coupled (e.g., a kinetic system or a holographic strongly interacting field theory), are quasi-universal in the sense that the evolution is almost identical when the phase space is described by suitable variables.The study of the hydrodynamic attractor has been extensively carried out in the context of the Bjorken flow to learn about the hydrodynamization of the quark-gluon plasma (QGP) produced in heavy-ion collisions (see e.g.[23,24] for recent reviews), although such a simplified set-up captures the longitudinal expansion only and not the transverse flow. To understand the hydrodynamization of the QGP, one needs to follow its evolution from the earliest stages, where a perturbative description in the forms of the glasma effective theory [25] and subsequently kinetic theory [26,27] is applicable, to later epochs when the soft gluons form a strongly coupled thermal bath.The semi-holographic framework, briefly reviewed in Sec. 
2, allows for simple phenomenological constructions where the evolution of both perturbative (weakly self-interacting) and non-perturbative (strongly self-interacting) degrees of freedom can be described simultaneously in a consistent way [28][29][30][31][32][33][34]. In the simplest model, one retains only two effective metric couplings, and reduces both sectors to viscous fluids, each following the Müller-Israel-Stewart (MIS) [35,36] description with different amounts of viscosity. The effective metric couplings permit the exchange of energy and momentum, while the total energy-momentum tensor, which can be constructed readily from the subsector energy-momentum tensors, is conserved in the physical background (Minkowski) metric. For further simplicity, the fluids describing both sectors are assumed to be conformal. The full system deviates from conformality, however, since the effective metric couplings (denoted by γ and γ̃ = rγ) have mass dimension −4. We assume them to be given in terms of a saturation scale that sets the scale for the energy densities of the QGP when perturbative descriptions tend to break down. The existence of hydrodynamic attractors in such hybrid fluid models has been established in [37]. These attractor curves form a two-dimensional surface in the four-dimensional phase space such that for any initial condition the evolution in phase space approaches a curve on this attractor surface after some time. At late time, the full system can be described as a single fluid with an equation of state and transport coefficients that depend on two constant parameters which label the curves on the attractor surface. A feature of the attractor surface is an initial evolution that is reminiscent of the so-called bottom-up thermalization scenario of Ref. [38], for at sufficiently early time, the energy density of the perturbative sector always dominates over that of the strongly self-interacting non-perturbative sector. (Footnote: …holographic, we are only using an effective fluid description for the strongly coupled non-perturbative sector with the expectation that our simplification should be sufficient to learn about hydrodynamization. The explicit dual holographic description will be necessary to understand the role of irreversible energy transfer to a dynamical black hole, which is expected to be a slow process determined by the coupling between the two sectors given the results obtained in [34].) This "bottom-up-like" feature is a consequence of the effective fluid description of the semi-holographic setup, which however can be expected to be a good approximation to a full semi-holographic treatment only at τ ≳ γ^{1/4}. Nevertheless, it is a robust feature of the resulting hybrid attractor itself, where the dominance of the weakly coupled sector at earlier times is determined by an exponential factor with an exponent involving only the viscosities and relaxation times of the two sectors. Apart from the overall energy scale, the ratio of the respective energy densities at a conveniently chosen early reference time τ_0 turns out to be a useful parameter for the curves which span the attractor surface, even though τ_0 is outside the phenomenologically relevant range. In this paper, we systematically study hydrodynamization in these hybrid hydrodynamic attractors following preliminary observations in [37] indicating that one can obtain significant new insights into small vs large system collisions (high-multiplicity p-p or A-p vs.
A-A heavy-ion collisions [39,40]) by a detailed exploration of the attractor surface. In the application to such high-energy collisions, we consider only times τ ≳ γ^{1/4}, since a fluid description cannot be expected to be applicable at arbitrarily small times. In particular, our systematic study of hydrodynamization on the attractor surface suggests a novel explanation for rapid hydrodynamization of the softer degrees of freedom in small-system collisions, as summarized below. The general characterization of the hydrodynamization times of the two components on the hybrid attractor surface in our simplified model is as follows. Let us denote the hydrodynamization times of the perturbative and non-perturbative (holographic) sectors as τ_hd and τ̃_hd, respectively. When the energy density at the hydrodynamization times is in the regime E(τ_hd), E(τ̃_hd) ≪ γ^{-1}, we find that the hydrodynamization times of both sectors follow conformal behaviors characterized by constants p and p̃, respectively. Generically, due to the bottom-up-like feature discussed above, the energy density of the perturbative sector is dominant at some reference time τ_0 ≪ γ^{1/4}. For such cases, we find that p ≈ 0.63, which is approximately the same value as in usual conformal attractors (which we obtain in the decoupling limit). Therefore, the hydrodynamization time of the perturbative sector in the conformal regime is both universal and identical to that of conformal attractors studied earlier in the literature. Furthermore, p̃ for the strongly self-interacting sector depends only on the ratio of energy densities of the two subsectors at early time. As this ratio of the energy densities of the perturbative to holographic sectors at a reference time of (say) τ_0 = 0.001 γ^{1/4} is increased from 1:1 to 1000:1, the constant p̃ increases from 0.64 to 2.1. The hydrodynamization time of the relatively soft sector is thus sensitive to initial conditions, and can be significantly larger than in usual conformal attractors even in the conformal window. Remarkably, for fine-tuned initial conditions on the attractor surface, where the holographic sector has the dominant energy at a fixed early reference time, the roles in the conformal window are reversed. The p̃ of the holographic sector is then universally ≈ 0.63, as in the usual conformal attractors, while p of the perturbative sector is determined by the ratio of the energy densities at the initialization time. We can thus deduce that the hydrodynamization time of the sub-dominant sector increases because it is forced to share its energy with the dominant sector prior to hydrodynamization, while the dominant sector is dynamically driven to behave in a universal manner. On the attractor surface, the hydrodynamization times of both subsectors take their minimal values when the respective energy densities are close to γ^{-1}. For larger E(τ_hd), E(τ̃_hd), the hydrodynamization times of both sectors increase rapidly and do not follow any scaling behavior like that in the conformal window.
For pondering potential phenomenological consequences of these general features of the hybrid attractor, we interpret solutions with lower total energy densities at the matching time τ i ∼ γ 1/4 as corresponding to small systems (high-multiplicity p-p or A-p collisions) [39,40] as opposed to A-A heavy-ion collisions.To quantify our results, we identify the energy scale of the intersystem coupling γ −1/4 with the saturation scale Q p-p s of p-p collisions, and set various initial conditions such that the total energy density ∝ Q 4 s at τ ≈ Q −1 s , with Q s in A-A systems chosen from typical values in the IPsat and bCGC models [41].Generically, we find: • The total energy densities at hydrodynamization time of the perturbative sector (E(τ hd )) are within the conformal window approximately for both p-p and Pb-Pb collisions.Since E(τ hd ) is smaller in p-p collisions, naturally the hydrodynamization time of the perturbative sector is significantly larger in p-p collisions as readily follows from the scaling behavior of τ hd in the conformal window. • The total energy densities at hydrodynamization time of the holographic sector (E( τ hd )) are outside the conformal window for both p-p and Pb-Pb collisions, i.e.E( τ hd ) > γ −1 .Since E(τ hd ) is smaller in p-p collisions, the hydrodynamization time of the softer sector is smaller in p-p collisions than in the case of Pb-Pb collisions.This follows from the growth of τ hd with E( τ hd ). Furthermore, we have established that these results remain valid for generic phenomenologically relevant initial conditions which can be matched to the glasma models at time scales of the order the saturation scale for both p-p and Pb-Pb type collisions.For a large class of such initial conditions, our model exhibits an attractor of a system with both weakly and strongly coupled components that is driving the systems towards hydrodynamic behavior well before the onset of hydrodynamics.Our simple hybrid model therefore provides a new perspective on how collective flow can emerge from the softer components even in small system collisions.The organization of the paper is as follows.In Sec. 2, we review our set-up of two fluids undergoing Bjorken flow and coupled mutually via the democratic effective metric coupling [32,33].We further discuss the method to discern the attractor surface in this hybrid system.In Sec. 3, we provide more analytic details of early time behavior on the attractor surface, before turning our attention to the late-time hydrodynamic behavior.In both cases, we expand the discussion beyond our earlier results in [37].Finally in Sec. 4, we systematically study the hydrodynamization on the attractor surface and connect to the phenomenology of p-p and Pb-Pb collisions. 
The hybrid action A minimalist set-up required to obtain our hybrid fluid model involves only two leading democratic effective metric couplings [32,33].The action formalism for the full theory describing the perturbative and non-perturbative degrees of freedom coupled via the mutual democratic effective metric coupling is [33] Above S and S denote the Wilsonian effective actions for the perturbative and nonperturbative sectors respectively, with respective degrees of freedom ψ and ψ, and living in respective effective metrics g µν and gµν , while the physical metric of the full theory is G µν (to be set to Minkowski metric eventually).Both of these sectors inhabit the same topological space, and they are superficially invisible to each other as they experience different background metrics, but those are determined by the respective opposite system.If S is strongly self-interacting and admits a holographic description, then it is the on-shell action of an extra-dimensional gravitational theory whose boundary metric should be identified with g.In what follows, we will not need any explicit description of both S and S. Since both S and S are individually diffeomorphism-invariant in the respective background metrics g and g, we obtain the two Ward identities where the energy momentum tensors are and ∇ and ∇ are built out of g and g, respectively. The interaction term S int in (2.1) relates these two different background metrics in terms of the actual physical background metric G ultralocally, as follows.We readily see that the two effective metrics appear in the full action as auxiliary fields, and extremizing the action with respect to them leads to imposing The above yield the following coupling equations where γ and γ ′ are the two dimensionful couplings with mass dimension −4. , where Λ int is a suitable scale separating perturbative and non-perturbative phenomena (and is also larger than the relevant scale of observation).In the context of heavy-ion collisions, Λ int can be identified with a saturation scale Q s determining the initial conditions.It is evident from (2.5) that higher order effective metric couplings will be suppressed by powers of (T /Q s ) 4 , where T is an effective temperature determining the scales of the stress tensors. We can compute the energy momentum tensor of the full system by varying the action (2.1) with respect to the physical background metric G µν , which yields In the first equation above, all indices for subsystem variables are lowered using the respective background metrics, while that of the full system (on the left hand side) is lowered using the physical background metric.It can be explicitly checked that the coupling equations (2.5) and the two Ward identities (2.2) indeed imply the local conservation of the full energy-momentum tensor in the physical background metric, i.e. 
The mutual effective metric coupling thus leads to a full energy-momentum tensor (2.6) which can be obtained ultra-locally from the subsystem energy-momentum tensors alone, and does not require any further microscopic input.This ensures that a coarse-grained hydrodynamic description of the full energy-momentum tensor in terms of those of the subsystems.It implements the hypothetical Wilsonian perspective that the coarse-grained perturbative description should determine the complete non-perturbative description at any scale.Therefore, assuming that S and the inter-system couplings can be derived from S, we can state that the coarse-grained descriptions of the two subsystems are related,2 and the full hydrodynamic description can be constructed in terms of the subsystem hydrodynamic variables.A detailed description will follow soon. In most set-ups, the full system is solved in a self-consistent iterative procedure [31,34,42].At the first step of iteration, the effective metrics are identified with the background metric G µν , and the two systems are solved independently with given initial data.In the next step, the effective metrics of the two subsystems are updated via the coupling equations (2.5) using the respective subsystem stress tensors obtained from the previous iteration.The subsystems are solved once again with the same initial conditions.The iterations are thus continued with fixed initial conditions until they converge implying that the full energy-momentum tensor is conserved in the physical background metric. This iterative procedure was first proposed in the context of heavy-ion collisions with glasma-like initial conditions for the perturbative sector and an empty AdS-like initial conditions for the holographic sector [30].Subsequent works [31,34,42] have shown that the convergence of the semi-holographic iterative procedure can not only be achieved, but to a very good accuracy over three to four iterations only.For example, the convergence of the iterative procedure has been demonstrated with non-trivial initial conditions for a scalar field in the boundary coupled to the bulk [34,42].Note that since both systems are marginally deformed via their effective metrics in each step of the iteration, the iterative procedure is well defined in the full quantum theory, however we will not discuss this further here. Here, we will restrict ourselves to a large N approximation in both sub-sectors which ensures factorization of expectation values of multi-trace operators into those of the single trace operators like the energy-momentum tensor, and thus ignore stochastic hydrodynamic phenomena.Furthermore, we will see that the iterative procedure is also not necessary in the hydrodynamic limit -the coupling equations and the dynamical equations can be evolved simultaneously ensuring conservation of the full energy-momentum tensor in the physical background metric automatically. 
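The self-consistent iteration described above — solve each sector in a fixed effective metric, update the effective metrics through the coupling equations (2.5), and repeat with the same initial data until the subsystem stress tensors stop changing — can be summarized schematically as below. The callables solve_subsystem and update_metrics, the array-valued stress tensors, and the tolerance are hypothetical placeholders, not part of any published implementation.

```python
import numpy as np

def solve_coupled(init_1, init_2, background_metric,
                  solve_subsystem, update_metrics, tol=1e-8, max_iter=20):
    """Schematic fixed-point iteration of the semi-holographic coupling scheme.

    solve_subsystem(initial_data, metric) -> stress-tensor history (hypothetical)
    update_metrics(t1, t2, background_metric) -> (g1, g2), i.e. each effective
    metric rebuilt from the other sector's stress tensor via the coupling equations
    """
    g1 = g2 = background_metric        # first step: both sectors see the physical metric
    t1 = t2 = None
    for _ in range(max_iter):
        t1_new = solve_subsystem(init_1, g1)   # same initial conditions every iteration
        t2_new = solve_subsystem(init_2, g2)
        if t1 is not None and max(np.max(np.abs(t1_new - t1)),
                                  np.max(np.abs(t2_new - t2))) < tol:
            return t1_new, t2_new              # converged: full T^{mu nu} is conserved
        t1, t2 = t1_new, t2_new
        g1, g2 = update_metrics(t1, t2, background_metric)
    raise RuntimeError("semi-holographic iteration did not converge")
```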
Hybrid fluid model
Although the above discussion included an action, it is not necessary for describing the evolution of the system, which in the hydrodynamic limit is governed by the effective conservation laws (2.2) and the algebraic coupling equations (2.5). The only required inputs are the energy-momentum tensors of each sector, t^{μν} and t̃^{μν}, determined via the constitutive relations, and the physical background metric G_{μν}. Since the hydrodynamic limit is found in many theories (we have in mind a strongly coupled holographic theory for one subsystem and a weakly coupled kinetic theory for the other), we can consider the simpler case in which the two subsystems are two different relativistic fluids. The respective four-velocities are normalized as u^μ g_{μν} u^ν = −1 and ũ^μ g̃_{μν} ũ^ν = −1 (2.10). The dissipation tensors, Π^{μν} and Π̃^{μν}, are taken to be orthogonal to their respective four-velocities, Π^{μν} u_μ = 0 and Π̃^{μν} ũ_μ = 0 (2.11). The two fluids are assumed to have dissipative dynamics dictated by Müller-Israel-Stewart (MIS) theory [35,36,43] (see footnotes 3 and 4 below), namely Eqs. (2.12), where η and η̃ are the subsystem viscosities. Note that we have truncated at first order, and since we are working with conformal subsystems, we have set the bulk viscosities to zero. It is worthwhile to underline that although the two subsystems are conformal, the complete system (2.6) will in general not be conformal, T^{μν} g^{(B)}_{μν} ≠ 0. We will see that there is an emergent conformality for large proper time. It is worthwhile to mention that the full system also has emergent conformality at thermal equilibrium at high temperature (with respect to the scale of the intersystem coupling) [33].
Explicit equations for boost-invariant Bjorken flow
We now provide the explicit form of the complete set of equations, (2.2), (2.5) and (2.12). We work in Milne coordinates, τ = √(t² − z²), x, y, and ζ = arctanh(z/t), where x and y are the transverse coordinates and z is the longitudinal coordinate of the expanding system. In these coordinates, the Minkowski metric assumes the form (2.13), which is the physical background metric for the full system. The Bjorken flow is boost-invariant, and therefore in Milne coordinates all physical variables depend only on the proper time τ. Motivated by the form of the background metric, we make the boost-invariant ansatz (2.14) for the effective metrics. The four-velocities then have only non-vanishing τ components, u^μ = (u^τ, 0, 0, 0) and ũ^μ = (ũ^τ, 0, 0, 0) (2.15).
Footnote 3: MIS theory with the Weyl-covariant version of the convective derivative, as in Eqs. (2.12), is just a consistent truncation of BRSSS theory in which certain non-linear transport coefficients are taken to vanish.
Footnote 4: The more weakly and the more strongly coupled sectors are also simply assumed to have higher and lower specific viscosity, respectively, as well as correspondingly shorter and longer relaxation time scales. While not pursued here, one could also consider including second-order derivatives of the shear tensor of the strongly coupled sector, as in [44,45], so as to make the dynamics of the latter more similar to the holographic dynamics, which is governed by quasinormal modes. (We thank Michal Heller for this remark.)
Here we consider the dissipation tensors, Π^{μν} and Π̃^{μν}, to be symmetric, transverse, and traceless, i.e., g_{μν} Π^{μν} = 0 and g̃_{μν} Π̃^{μν} = 0. These conditions imply that there is only one independent component of Π^{μν} for each sector, which we call Π^η_η = −ϕ (the longitudinal non-equilibrium pressure), following the convention in [12]; the resulting explicit form of the dissipation tensors is given in (2.17). Then, explicitly, the non-vanishing components of the conservation equations (2.2) and the MIS equations (2.12) yield the first-order evolution equations (2.18)-(2.21), and the algebraic coupling equations (2.14) yield the six relations (2.22)-(2.27), where we have introduced r ≡ −γ′/γ. We see that in the decoupling limit, namely γ = 0, the right-hand sides of the coupling equations vanish and the effective metrics reduce to the usual background Milne metric (2.13), in which the two fluids evolve independently. It is clear that the above system actually has four dynamical variables, namely the energy densities and the non-equilibrium pressures (ε, ε̃, ϕ, ϕ̃), which follow the first-order equations (2.18), (2.19), (2.20) and (2.21). The six effective metric variables, a, ã, b, b̃, c, c̃, can be eliminated by determining them as functions of the four dynamical variables by solving the six algebraic equations (2.22)-(2.27). The evolution of the full system can thus be readily determined numerically for given initial values of (ε, ε̃, ϕ, ϕ̃). In what follows, we parametrize the transport coefficients in the two conformal subsystems, and the respective MIS relaxation times, via the dimensionless quantities C_η, C̃_η, C_τ and C̃_τ. In the simulations that follow, we usually choose values motivated by holographic descriptions and by kinetic theories, respectively. This marks the tilded sector as more strongly coupled than the untilded one. We keep this choice fixed unless noted otherwise. In the inviscid Minkowski case [33], the condition of UV completeness restricts the value of r to r > 1. Moreover, it was found in the inviscid case that there is a second-order phase transition for 1 < r ⪅ 1.11, which we discuss in Appendix B. Note that the coupling γ is the only dimensionful scale in our framework. Further, we work with the dimensionless anisotropy parameter χ = ϕ/(ε + P) (and its tilded analogue). The conservation and MIS equations can then be rewritten in terms of these variables, where the prime denotes a derivative with respect to proper time, e.g. ε′ = ∂_τ ε. The full energy-momentum tensor (2.6) is then specified by the total energy density, total transverse pressure and longitudinal pressure given in (2.39).
Determining the attractor surface
As discussed above, the hybrid fluid system has a four-dimensional phase space with the energy densities and non-equilibrium pressures, (ε, ε̃, ϕ, ϕ̃), as the dynamical variables. It was shown in [37] that the full system has a two-dimensional attractor surface. For any initial condition (ε, ε̃, ϕ, ϕ̃)(τ_0), the system evolves towards a particular curve on the attractor surface. The curves which compose this attractor surface are characterized by dynamical evolutions in which the dimensionless anisotropies χ and χ̃ do not diverge as we trace back in time to the moment of collision, τ = 0. Furthermore, at early times they approach the constant values given in (2.40), with the dimensionless viscous parameters σ_1 and σ_2 defined in (2.41); see [15] for similar analytic behavior of the attractor.
Figure 1. Anisotropies χ of one particular attractor solution (thick lines) and four neighboring trajectories (thin lines). The less viscous (strongly coupled) system is displayed by blue curves, the more viscous (more weakly coupled) system by red curves, and the total system by orange curves. From [37], where more plots are given for this particular solution.
In Fig. 1 we show the evolution of the anisotropies in the different sectors and in the total system for one particular attractor solution and neighboring trajectories (from [37], where more plots are given for this particular solution). In order to determine the attractor curves on the attractor surface, we proceed as follows. We initialize at a time τ_1 and choose the initial energy densities and non-equilibrium pressures, (ε, ε̃, ϕ, ϕ̃)(τ_1). We then solve the coupling equations (2.22)-(2.27) at the initial time to determine the metric components at the initial time. Subsequently, we evolve the system backwards in time via (2.18), (2.19), (2.20) and (2.21), and tune the values of (ϕ, ϕ̃)(τ_1), with (ε, ε̃)(τ_1) fixed, such that χ and χ̃ tend to constants at early times following (2.40). Having thus found an attractor curve to a satisfying level of precision (in this paper, 10^{-15}), we then evolve the system forward in time to study the late-time approach to hydrodynamics.
Mapping the attractor surface at early and late times
Although in the application to Pb-Pb and p-p collisions only times τ ≳ γ^{1/4} are relevant phenomenologically, we shall begin by mapping out the attractor surface for arbitrary times. Even when the early-time behavior of the attractor surface is an insufficient approximation to a full semi-holographic treatment (let alone actual high-energy collisions), its constituent lines can be usefully parametrized by their values at some early reference time τ_0 ≪ γ^{1/4}.
Early time behavior: dominance of the weakly coupled sector
The attractor surface has a universal scaling behavior in the limit τ → 0, which is determined in terms of the two dimensionless viscous parameters σ_1 and σ_2 defined in (2.41). Crucially, we need the two conditions stated in Eq. (3.1). The relations σ_{1,2} < 1 are imposed by causality [46]. The two relations above imply that σ_2 > σ_1, as should be the case if the second subsystem is more strongly coupled than the first. The stricter lower bound on σ_2 is necessary for reasons described below. We first note that, with the above conditions, the anisotropies χ and χ̃ go to σ_1 and σ_2, respectively, at τ = 0 on the attractor surface, as we see in Fig. 2. Moreover, we find numerically that ε goes to zero while ε̃ goes to a constant, namely γ^{-1}(√r − 1/√r). With these numerical inputs, assuming (3.1), and by expanding the conservation, MIS and coupling equations in τ around τ = 0, we readily determine the universal early-time scaling behavior of the energy densities and the anisotropies, Eq. (3.2), and the corresponding scaling behavior of the components of the effective metrics, Eq. (3.3). Since the attractor surface is two-dimensional, there should be only two independent coefficients in these early-time expansions, which can be chosen to be k_2 and g_2; the latter determine the remaining coefficients. As a result, in the limit τ → 0, five identities, collected in (3.5), should hold on any curve on the attractor surface. Fig. 3 demonstrates the numerical check of these identities on an attractor curve. The stricter lower bound on σ_2 in Eq. (3.1) is necessary because the above scaling relations follow from solving the coupling equations, etc., with the assumption that ã vanishes as τ → 0. From the explicit scaling of ã in (3.3), it is clear that the lower bound is then self-consistent. Furthermore, it is also clear from (3.2) that χ̃ does not diverge as τ → 0, as a consequence of the lower bound on σ_2 and of σ_1 < 1. If instead of Eq.
(3.1) we impose 0 < σ_2 < 1 together with a lower bound of the form max(0, ·) on σ_1, we obtain that ã diverges as τ^{(5/3)σ_1 − (4/3)σ_2 − (1/3)} as τ → 0, and all the other scaling relations discussed above are also altered. Nevertheless, χ and χ̃ still reach σ_1 and σ_2 as τ → 0 on the attractor surface. However, the above condition involves a lower bound on σ_1 which cannot be motivated otherwise, while (3.1) can readily be satisfied with reasonable weak-coupling and strong-coupling parameters. Therefore, we do not investigate the above condition further.
Figure 3. Check of the identities (3.5) at early times.
Similar identities, given in (C.5), also hold for the disperser surface, and can be verified numerically to a very high accuracy. It is easy to see from the above how the physical energy densities of the two subsystems behave at early time. Since σ_2 > σ_1, it follows that the energy density of the weakly self-interacting sector is always dominant at sufficiently early times, leading to the universal feature of an energy distribution on the attractor surface which is reminiscent of the so-called bottom-up thermalization scenario in heavy-ion collisions [38]. Also note that E_1 always diverges as τ → 0 since σ_1 < 1. However, E_2 may or may not diverge in this limit, without contradicting either of the two conditions in (3.1), depending on the choice of parameters. While all initial conditions evolve towards specific curves on the attractor surface, they also get repelled from a disperser surface on which χ and χ̃ approach −σ_1 and −σ_2, respectively, in the limit τ → 0. On the disperser surface, the opposite of the energy distribution in bottom-up thermalization takes place, since both E_2 and E_2/E_1 diverge as τ → 0. In contrast to (3.2), γε goes to the constant √r − 1/√r while γε̃ vanishes as τ → 0 on the disperser surface. For more details, see Appendix C. One can also find two other such half-disperser surfaces, on which χ and χ̃ approach σ_1 and −σ_2, respectively, or −σ_1 and σ_2, respectively, as τ → 0. These can be analyzed in a similar manner. None of these three surfaces is relevant for understanding hydrodynamization with generic initial conditions. Crucially, γε̃(τ_0) is bounded from above for any choice of ε(τ_0)/ε̃(τ_0), as otherwise we do not obtain solutions. However, the energy densities and the total energy density can be arbitrarily large. This is similar to the equilibrium solutions studied in [33], where ε and ε̃ were bounded from above but the total energy density and pressure were unbounded. We can systematically correct the residual deviation from the attractor surface by iterative fine-tuning. At each step of the iteration, we fine-tune ϕ and ϕ̃ for each γε̃(τ_0) and ε(τ_0)/ε̃(τ_0) such that χ = σ_1 and χ̃ = σ_2 at progressively earlier times. In practice, the very first step of the iteration already gives a very precise estimate of the initial values of ϕ and ϕ̃ on the attractor surface for each γε̃(τ_0) and ε(τ_0)/ε̃(τ_0), and also of the upper bound on γε̃(τ_0) for each ε(τ_0)/ε̃(τ_0), if we initialize at τ_0 = 10^{-3} γ^{1/4} with our choices of parameters. We also find that the estimates of hydrodynamization times and other results reported below change only at the sub-percent level if we correct for the deviation from the attractor surface systematically in the iterative procedure.
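In the decoupling limit γ = 0, each sector reduces to a single conformal MIS fluid, and the attractor can be traced without the backward fine-tuning machinery described above. The sketch below is a simplified stand-in for that procedure: it initializes at a very early time with the anisotropy at the early-time fixed point of the equations written in the code (χ → √(C_η/C_τ) in this particular convention) and integrates forward, so that nearby initial data collapse onto the same curve. The normalization T = ε^{1/4}, the numerical values of C_η and C_τ, and the precise MIS truncation used here are illustrative assumptions, not Eqs. (2.18)-(2.21) of the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

C_ETA = 1.0 / (4.0 * np.pi)                    # illustrative "strongly coupled" values
C_TAU = (2.0 - np.log(2.0)) / (2.0 * np.pi)

def rhs(tau, y):
    """Single conformal MIS Bjorken fluid: y = (energy density, long. non-eq. pressure)."""
    eps, phi = y
    T = eps ** 0.25                            # conformal EoS in units where eps = T^4
    s = 4.0 * eps / (3.0 * T)                  # entropy density, s = (eps + P)/T
    eta, tau_pi = C_ETA * s, C_TAU / T
    deps = (-(4.0 / 3.0) * eps + phi) / tau
    dphi = -phi / tau_pi + 4.0 * eta / (3.0 * tau * tau_pi) - (4.0 / 3.0) * phi / tau
    return [deps, dphi]

def evolve(chi_ini, eps_ini=1.0e4, tau_ini=1.0e-3, tau_end=10.0):
    phi_ini = chi_ini * (4.0 / 3.0) * eps_ini  # chi = phi/(eps + P), P = eps/3
    sol = solve_ivp(rhs, (tau_ini, tau_end), [eps_ini, phi_ini],
                    rtol=1e-10, atol=1e-12)
    eps, phi = sol.y
    return phi[-1] / ((4.0 / 3.0) * eps[-1])   # anisotropy chi at tau_end

sigma = np.sqrt(C_ETA / C_TAU)                 # early-time fixed point of chi (this convention)
print("late-time chi, attractor initialization :", evolve(sigma))
print("late-time chi, generic initialization   :", evolve(0.0))
# The two values nearly coincide: forward evolution is attracted onto the same curve,
# which is the behavior that the backward fine-tuning in the text isolates for the
# coupled two-fluid system.
```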
Hydrodynamic expansion At late time, the full system can be mapped to a hydrodynamic derivative expansion with powers of τ −2/3 counting number of derivatives.The hydrodynamic expansion is determined only by two dimensionless parameters, which could be chosen for instance as lim τ →∞ γ 2/3 ετ 4/3 and lim τ →∞ γ 2/3 ετ 4/3 . (3.9) (For reasons to be mentioned below, we will choose a different pair of parameters.)Each pair of these parameters determines a unique curve on the attractor surface.Thus any initial condition in the four-dimensional phase space evolves to a specific curve on the attractor surface labelled by a specific pair of parameters determining the late-time hydrodynamic expansion.On the attractor surface itself, the pair of parameters k 2 and g 2 determining the early time expansion should relate to the pair of parameters γε(τ 0 ) and ε(τ 0 )/ε(τ 0 ) giving the initialization at time τ 0 .In turn the latter pair of parameters should match uniquely to a pair of parameters giving the late-time hydrodynamic expansion.We postpone the discussion of the latter matching and turn now to the hydrodynamic expansion itself. The entire hydrodynamic expansion can be expressed in terms of the perfect fluid components which give the leading τ dependence of ε and ε at late time as follows with • • • denoting viscous corrections and τ 0 is an arbitrary moment of time.Note that at late time the two fluids decouple.The expansion dilutes the energy densities and therefore the mutual coupling via the effective metrics.As a result, the leading order perfect fluid expansions assume the usual forms for both systems.It is clear that the choice of τ 0 above is arbitrary.A different choice τ0 simply rescales ϵ 10 and ϵ 20 as follows: For the sake of convenience, we define dimensionless time: Note however that ω 1 and ω 2 are not independent time variables.In fact The hydrodynamic expansion in τ , obtained by solving all coupling equations, the two MIS equations and the two conservation equations, can be readily stated in terms of these variables as follows The above equations show all terms up to second order in derivatives.Defining we can invert (3.17) In terms of w 1 we can readily express the hydrodynamic expansion as a phase space curve With w 1 , w 2 , χ and χ as the phase space variables, the phase space curves representing the hydrodynamic expansion (or the attractor) are then given by the above expressions, χ(w 1 ), w 2 (w 1 ) and χ(w 1 ).As manifest from above, each curve is labelled by two dimensionless, time-reparametrisation invariant parameters, α and β. The hydrodynamic late-time expansion of the components of the effective metrics are: Remarkably, up to O(w −4 1 ) the volume factors are: We can readily compute the full system energy density which behaves as where We find that the full energy-momentum tensor can be mapped to the standard hydrodynamic Bjorken flow expansion (in the usual Milne metric) of a single fluid, but with effective equation of state and effective transport coefficients dependent on the parameters α and β. 
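As a reminder of where the τ^{-2/3} counting and the perfect-fluid forms quoted above come from, here is the textbook conformal Bjorken estimate, together with the standard first-order single-fluid Bjorken equation written in a common convention (we take this to be the generic form referred to in the next subsection; it is not quoted explicitly there).

```latex
\begin{aligned}
&\text{Ideal conformal Bjorken flow (each sector, late times):}\\
&\frac{d\epsilon}{d\tau} = -\frac{\epsilon+p}{\tau},\qquad p=\frac{\epsilon}{3}
\;\;\Rightarrow\;\; \epsilon(\tau)=\epsilon_{0}\Big(\frac{\tau_0}{\tau}\Big)^{4/3},\qquad
T\propto\epsilon^{1/4}\propto\tau^{-1/3},\\[2pt]
&w \sim \epsilon^{1/4}\tau \propto \tau^{2/3}
\;\;\Rightarrow\;\; \text{each further gradient order is suppressed by } w^{-1}\propto\tau^{-2/3}.\\[4pt]
&\text{Standard first-order (Navier--Stokes) Bjorken equation for a single fluid:}\\
&\frac{d\epsilon}{d\tau} = -\frac{\epsilon+p}{\tau} + \frac{\tfrac{4}{3}\eta+\zeta}{\tau^{2}}.
\end{aligned}
```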
We can explicitly map the full system to a single expanding fluid at late time and compare to the above discussion.For a single fluid, the evolution of the energy density in the Milne background metric takes the generic form (up to first order) where ζ and η are the bulk and shear viscosity, respectively.Furthermore, the trace of the (first-order) hydrodynamic energy-momentum tensor takes the form We can assume that with k, κ and κ constants.Solving (3.33) with inputs from (3.35) in the large proper time expansion, we obtain and also (3.34) We can determine the constants k, κ and κ simply by matching the above expansions to those obtained by solving our system of equations.We readily see that these constants depend on the specific curve on the attractor surface.Comparing Eq. (3.37) with Eq. (3.31), we explicitly find that the effective viscosity of the full system is which depends manifestly on which curve on the attractor surface the full system evolves to at late time.In fact κ can be obtained from (3.39): Similarly, the hydrodynamic expansion of our hybrid system yields that Tr(T ) = − 8τ We can immediately conclude that • and κ = 0, i.e. the effective bulk viscosity vanishes since the τ −2 term vanishes in (3.41). Since k and κ both depend on α and β, it's clear that the effective equation of state and the transport coefficients of the single fluid description of the full system depends on the specific curve on the attractor surface. In order to interpolate to hydrodynamics, it is useful to do a doubly logarithmic plot for χ, χ, ε and ε.Perfect hydrodynamics begins when the logarithmic plots of ε(τ ) and ε(τ ) become parallel to each other with slope −4/3, i.e. when they both reach the perfect fluid hydrodynamic tail 1/τ 4/3 , which is evident in Fig. 5 for the first-order hydrodynamic tails of ε and ε for the first and second attractors, respectively.First-order hydrodynamics begins when the logarithmic plots of χ(τ ) and χ(τ ) become parallel to each other with slope −2, i.e. when they both reach the hydrodynamic tail 1/τ 2 given by first-order hydrodynamics (note in the perfect fluid limit χ vanishes).This is shown in Fig. 5 for the first-order hydrodynamic tails of χ and χ for the first and second attractors, respectively.We can also match using leading order hydrodynamic expansion of χ ( χ): to a high accuracy. In Appendix D we illustrate how to understand the hydrodynamic gradient expansion and see how its Borel resummation can describe the attractor surface in terms of the parameters α and β, defined in (3.12) and (3.13). Late time approach to attractor: fluctuation analysis The approach to the attractor at late time can be simply understood by perturbing around the late-time hydrodynamic expansion computed above. The leading order linear fluctuations of energy densities, anisotropies and effective metric variables about the hydrodynamic expansion, i.e. the attractor solution at late time are defined as follows: and similarly for the tilded sector.Above τ := τ /τ 0 and δ ε(τ ), δϕ(τ ), etc. 
denote the fluctuations about the hydrodynamic expansion.We have truncated to the first few terms of the hydrodynamic expansions which can be found explicitly in Appendix A, because only these are sufficient to obtain the leading order behavior of the fluctuations.The fluctuations have two independent non-hydrodynamic modes capturing the missing information of the initial conditions in the four-dimensional phase space, and two hydrodynamic modes which span the two-dimensional attractor surface.Since the non-hydrodynamic modes, which do not admit a power series expansion τ −2/3 , tell us how a generic evolution relaxes towards the attractor surface, we will focus on finding these explicitly here.One can eliminate the fluctuations in six effective metric variables in terms of other four fluctuations (δε(τ ), δ ε(τ ), δϕ(τ ), δ φ(τ )) by using six metric coupling equations.This leaves us with four first-order coupled ordinary differential equations in four variables, which are given in Appendix A. One can further solve these equations until one is left with a pair of decoupled fourth-order differential equations for the anisotropy fluctuations.Depending on the relative values of parameters, the fluctuations, δϕ(τ ) and δ φ(τ ), have different leading order forms, which we detail below. In the case when C τ ϵ 20 1/4 ̸ = Cτ ϵ 10 1/4 , the two independent non-hydrodynamic solutions (at leading order) are: where m 1 , m 2 , n 1 , n 2 are constants, of which only two are independent: (3.54) will be the dominant exponential and give the relaxation towards the attractor.The powers of τ accompanying this exponential in (3.50) and (3.51) imply that δ φ(τ ) will decay faster towards the attractor surface than δϕ(τ ) in this case.This implies that the strongly coupled sector will hydrodynamize faster than the weakly coupled one.If (3.56) is the dominant mode.By an analogous argument, δϕ(τ ) will decay faster than δ φ(τ ) implying that the weakly coupled sector will hydrodynamize faster than the strongly coupled one. Full numerical solutions do confirm that indeed when (3.53) is realized, the strongly coupled sector hydrodynamizes faster and the reverse occurs when (3.55) holds.However, realizing (3.55) requires choosing large ratios of ε(τ 0 )/ε(τ 0 ).This is also unnatural because this requires extreme fine-tuning at early time, because of the bottom-up behavior on the attractor surface in which ε goes to a constant while ε vanishes as τ → 0. 
the two independent solutions to leading order are: where m 1 , m 2 , n 1 , and n 2 are arbitrary constants only two of which are independent: (3.60) Matching with late times Now we are prepared to study how parameters setting initial conditions on the Bjorken flow attractor surface map to α and β, which are defined in (3.12) and (3.13), and which determine the hydrodynamic expansion at late time.In particular, we find that they scale as a power of the total energy density on the attractor surface.For r = 2, the scaling of α and β is found to be α ∼ (γE(τ = γ 1/4 )) 3/2 , (3.61) where E is the total initial energy density.Using the following definitions of the total entropy density and the individual subsystem entropy densities, which are valid when both subsystems reach the perfect fluid limit: we can readily see from the hydrodynamic expansions discussed above that the individual system entropy densities behave at late time as which means that the total entropy scales like We can compare this to the pocket formula in [47] giving the final state particle multiplicity as a power of the total initial energy density for single conformal fluid attractors via (Sτ ) hydro ∼ (eτ ) 2/3 0 to see that the interactions due to the effective metric coupling in our hybrid model increases the scaling exponent from 2/3 to 3/4 when the total energy density is evaluated at time γ 1/4 .The pocket formula of [47] states that the total particle multiplicity per unit rapidity6 follows dN dη ≈ (Sτ ) hydro . (3.68) We would need more microscopic inputs to relate our results to particle multiplicities, and therefore it is beyond the scope of the present work.Interestingly, in the case of one fluid in Bjorken flow, the attractor solution has been shown to provide a maximum for dilepton production (while the repulsor/disperser represents a minimum) [48].It would be interesting to do a similar study in our context, however we would then need to replace the perturbative MIS sector with a kinetic theory.We can also study the trace of the full energy momentum tensor, which encodes the interaction energy.For late times, we find that the trace falls like the typical energy density, . The trace of the full energy momentum tensor as a function of the total energy density, which acts as a proxy for the effective temperature.This plot is for an initial energy density of the hard sector of γε = 0.8, ε/ε = 40.The green dot denotes the value at τ 0 = 10 −3 γ 1/4 and the red point is at τ = 10 +3 γ 1/4 .as follows from (3.31) and (3.41).An illustrative example for the full time-evolution of the interaction energy is shown in Fig. 6.We find that the interaction energy is non-vanishing only in the intermediate time-scales when both systems hydrodynamize, however it vanishes both at early and late times. Hydrodynamization on the attractor surface Figure 5 shows the existence of hydrodynamic tails of the attractor at a sufficiently late time, giving evidence of hydrodynamization in the hybrid attractor.This preliminary observation motivates a systematic study of the hydrodynamization times of the two sectors on the attractor surface. 
Hydrodynamization time Most commonly, hydrodynamization time is defined as the time when both the longitudinal and transverse pressure can be described by the first-order hydrodynamics up to a certain accuracy [6].For the hybrid hydrodynamic attractor, we define the hydrodynamization time of the hard (more weakly coupled) sector via and similarly for the strongly coupled (tilde) sector, so that τ hd and τhd denote the hydrodynamization times of the hard and soft sectors, respectively.This implies that the ratio of the departure of the anisotropic pressures (ϕ and φ respectively) from that given by first order hydrodynamics (ϕ 1st and φ1st respectively) to the respective isotropic parts of the pressures for both sectors is less than 10 percent after the corresponding hydrodynamization times.In general, the hydrodynamization time of perturbative sector τ hd is larger than that of the holographic sector τhd as shown in Fig 7, where the blue curve corresponds to the soft sector and the red one to the hard sector.Further, if we analyze the hydrodynamization time of the two components of the hybrid attractor by classifying the total energy density at the hydrodynamization time into three regions, one can find the following generic features of them, First, when the total energy density at the hydrodynamization time of each sector lies within the region, 0.0001γ −1 ⪅ E(τ hd ), E(τ hd )≪γ −1 (negative slope in the Fig 7), we have found that the hydrodynamization time of both the sectors follows conformal behaviours i.e. where p and p are constants.We note from Fig. 7 that the range of the energy density at hydrodynamization time in the conformal window is slightly larger in the weakly selfinteracting subsector.The value of p, p and the characteristic of the hydrodynamization time in this conformal window is determined based on which sector dominates at early time.For instance, with dominance of the perturbative sector at reference time τ 0 = 10 −3 γ 1/4 , we find p ≈ 0.63.(4.4)This is the same value (within numerical accuracy) that is obtained in the case of conformal attractors.We find that the hydrodynamization time of the perturbative sector in this conformal region is universal as shown in Fig 8 and the same as in conformal attractors studied earlier in the literature.However for the holographic sector as in Fig 9, the value of the p increases from 0.64 to 2.1 as the ratio of the energy densities of the individual sectors at some early time τ 0 = 0.001γ 1/4 increases from 1 : 1 to 1000 : 1.Thus the hydrodynamization time of the holographic sector shows sensitivity towards the initial ratio of the energy densities of the subsectors.Hence one finds that even in the conformal window the hydrodynamization time of the soft sector can be substantially larger than the conformal attractors. The doubly logarithmic plot of the scaled hydrodynamization time of the weakly coupled (hard) sector vs the total energy density at the τ hd shows universality and indistinguishability from other conformal attractors, when the hard sector is dominant at a reference time (here τ 0 = 0.001γ 1/4 ).The various color labels refer to different ratios of energy densities of the hard to the strongly coupled (soft) sector at the initial reference time.We note that the hydrodynamization time of the dominant hard sector is independent of these ratios. 
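The 10% criterion defined at the beginning of this subsection is simple to evaluate on sampled solutions; a sketch in Python is given below. The array names are placeholders, and the first-order reference ϕ_1st must be supplied from the gradient expansion of the previous section; in the conformal window the resulting times then follow τ_hd ∝ E(τ_hd)^{-1/4} up to the prefactor p, as discussed above.

```python
import numpy as np

def hydrodynamization_time(tau, phi, phi_first_order, p_iso, threshold=0.10):
    """First proper time after which |phi - phi_1st| / p_iso stays below
    `threshold` for all later sampled times (the 10% criterion above).
    All arrays are assumed sampled on a common, increasing grid `tau`."""
    ratio = np.abs(phi - phi_first_order) / p_iso
    bad = np.nonzero(ratio >= threshold)[0]   # indices where the criterion fails
    if bad.size == 0:
        return tau[0]                         # already hydrodynamized on this grid
    if bad[-1] == len(tau) - 1:
        return np.inf                         # never hydrodynamizes on this grid
    return tau[bad[-1] + 1]

# Applied separately to the hard-sector and soft-sector anisotropic pressures,
# this yields the two hydrodynamization times used in the figures of this section.
```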
However, with initial conditions where the energy density of the holographic sector is dominating on the attractor surface, the roles of conformal window in the subsector reverses i.e., p assumes the conformal value 0.63, thus showing universality as in Fig 10 and coinciding with the usual conformal attractors.Meanwhile from Fig 11, we find that the perturbative sector shows dependence on the initial energy densities of the subsectors and the value of the constant p increases with the increase in the initial energy density of the perturbative sector.This reversal in the behaviour of hydrodynamization time in the conformal window can be attributed to the fact that the sub-dominant sector is forced to share its energy with the dominating sector even before the hydrodynamization, while the dominating sector is dynamically driven to act universally. Next, when the energy densities at hydrodynamization times of both the sectors are close to γ −1 i.e., E(τ hd ), E(τ hd ) ∼ γ −1 the hydrodynamization times on the attractor surface takes minimum values as shown in Fig 7. The figure shows the doubly logarithmic plot of the scaled hydrodynamization time of the strongly coupled (soft) sector against the total energy density at the respective τhd , when the weakly coupled (hard) sector is dominant at an initial reference time (here τ 0 = 0.001γ 1/4 ).The various color labels refer to different ratios of energy densities of the hard to the soft sector at the initial reference time.Values of p for ratios from 1 : 1 to 1000 : 1 of the initial energy densities of the subsectors increases from 0.64, 0.76, 0.93, 1.07, 1.21, 1.76 to 2.1.These different values of p shows the dependency of τhd on the ratio ϵ(τ 0 ) : ε(τ 0 ). 10 -6 10 -5 10 -4 0.001 0.010 0.100 1 0.5 The figure shows the doubly logarithmic plot of the scaled hydrodynamization time of the strongly coupled (soft) sector against the total energy density at the respective τhd when the soft sector is dominant at the reference time τ 0 = 0.001γ 1/4 .The various color labels refer to different ratios of energy densities of the hard to the soft sector at the initial reference time.We note that the hydrodynamization time of the dominant soft sector is independent of these ratios.The plot shows the universal behaviour of the hydrodynamization time of the dominant sector in the conformal window, as the value of p is 0.63, and is the same as in Fig. 8 corresponding to the cases where the hard sector was dominant at initial reference time. When E(τ hd ) and E(τ hd ) are very large, the hydrodynamization times increase rapidly and do not show any scaling behaviour like that in conformal window.This corresponds to the positive slope region in Fig 7 .-29 - The doubly logarithmic plot of the scaled hydrodynamization time of the weakly coupled (hard) sector vs the total energy density at the τ hd when the soft sector is dominant at the reference time τ 0 = 0.001γ 1/4 .The various color labels refer to different ratios of energy densities of the hard to the soft sector at the initial reference time.We note that the hydrodynamization time of the subdominant hard sector depends on the ratio of the energy densities.Comparing with Fig. 9, where the hard sector was dominant at initial reference time, we observe that in both of these cases, the more dominant one sector is, the more delayed is the hydrodynamization of the sub-dominant sector. 
Rough phenomenological match to p-p and Pb-Pb collisions In order to contemplate possible phenomenological implications of these features, we interpret attractor curves with lower total energy densities at τ ∼ γ 1/4 as corresponding to small systems (high-multiplicity p-p or A-p collisions) as opposed to A-A heavy-ion collisions (Pb-Pb or Au-Au).To relate the two, we identify the energy scale of the intersystem coupling γ −1/4 with the saturation scale Q p-p s of p-p collisions, and we assume that E(τ ) ∝ Q 4 s at τ ∼ Q −1 s with large A-A systems having a larger Q s .In the IPsat and bCGC models studied in [41] Q s,A at large nucleon number A, impact parameter b = 0, and Feynman x parameter around 0.001 was found to be given roughly by For Pb-Pb collisions, where A 1/3 ≈ 6, we take simply as a typical value and get that at τ ∼ Q −1 s,P b the total physical energy density is E Pb-Pb (τ ∼ 0.63) ∼ 6.25 E p-p (τ ∼ 1) in units where γ = 1. The Tables 1 and 2 show the hydrodynamization times of a selection of attractor solutions of the accordingly defined p-p and Pb-Pb collisions for various ratios of energy densities of the two subsectors at the conveniently chosen early reference time τ 0 = 10 −3 γ 1/4 , which we use to parametrize the attractor surface.1. Hydrodynamization times for p-p initial conditions parametrized by ε(τ 0 )/ε(τ 0 ) at a reference time τ 0 = 10 −3 ≪ τ i ∼ 1 (in units where γ = 1).Here w = E 1/4 1 τ , where E 1 is defined in (3.7), w hd = w(τ hd ), and similarly for the second subsystem.Note that for ε(τ 0 )/ε(τ 0 ) < 1, the hydrodynamization times for the hard (soft) sector are typically larger (smaller). The figures show a doubly logarithmic plot of the scaled hydrodynamization time for the perturbative sector (in the left) and holographic sector (in the right) against the total energy density at the respective τ hd and τhd .The black and the green dots are the hydrodynamization times for initial conditions which are physically realizable in the p-p and Pb-Pb collision as listed in Tables 1 and 2. The total energy density at hydrodynamization time for Pb-Pb collisions is scaled by a factor of 1/5 and 1/2 to fit in the same plot with p-p collision. 
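The "typical value" adopted above is not quoted in this passage, but the two numbers that follow are mutually consistent; the following is only an arithmetic consistency check, not a statement of the authors' actual choice.

```latex
\text{Units with } \gamma = 1\ (Q_{s,p\text{-}p}=1):\qquad
E \propto Q_s^{4}, \qquad \tau_i \sim Q_s^{-1},\qquad
\frac{E_{\mathrm{Pb\text{-}Pb}}}{E_{p\text{-}p}} \sim 6.25
\;\Longrightarrow\;
Q_{s,\mathrm{Pb}} \sim 6.25^{1/4} \approx 1.58,
\qquad
\tau_i^{\mathrm{Pb}} \sim Q_{s,\mathrm{Pb}}^{-1} \approx 0.63 .
```

These values reproduce E_Pb-Pb(τ ∼ 0.63) ∼ 6.25 E_p-p(τ ∼ 1) as quoted above.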
From these tables one finds that in our p-p collisions E(τ hd ) and E(τ hd ) are smaller, while the hydrodynamization time of the perturbative sector is larger and that of the holographic sector is smaller compared to the Pb-Pb collisions.This behaviour is reflected in Fig 12, where the E(τ hd ) and E(τ hd ) of the Pb-Pb collision is scaled by the factor of 1/5 and 1/2 to fit in the plot with p-p collision.Also one observes that this phenomenologically realizable hydrodynamization time of the perturbative sector lies in the conformal region (maximal value of γE(τ hd ) is about 0.1 for p-p and 0.6 for Pb-Pb collisions) while that of the non-perturbative sector lies in the non-conformal region for both p-p and Pb-Pb collisions (minimal value of γE(τ hd ) is about 0.5 for p-p and 0.9 for Pb-Pb collisions).These features of the hydrodynamization time of perturbative and holographic sector for the initial conditions of p-p and Pb-Pb collisions are captured in the Fig 7 as mentioned below (note also from this figure that the conformal windows for the two sectors are slightly different from each other): • In case of the perturbative sector for both p-p and Pb-Pb collisions, the total energy densities at the hydrodynamization time, E(τ hd ), lies approximately within the conformal region denoted by the black and the green dots in the figure.In addition, since the E(τ hd ) is minimal in p-p collisions, the hydrodynamization time of the perturbative sector τ hd is sufficiently larger which is evident from the scaling behaviour of τ hd in the conformal window. • For the non-perturbative sector, E(τ hd ) > γ −1 and lies outside the conformal region for both p-p and Pb-Pb collisions as shown in the figure.Further due to the small total energy density, E(τ hd ), in p-p collisions, the hydrodynamization time τhd of the soft sector of p-p collision is smaller than that of Pb-Pb collisions, as indicated from the growth of τhd with E(τ hd ). Note from the Tables 1 and 2 that for smaller ratios of the hard to soft energy densities at reference time (ε(τ 0 )/ε(τ 0 ) ≲ 1), the hydrodynamization times for hard/soft sectors are typically larger/smaller, in both p-p and Pb-Pb collisions.For details on the evolution of energy densities in these scenarios, see Appendix E. 
Relevance of results for general initial conditions The above study of hydrodynamization, in the phenomenological context of small and large system collisions, has been restricted to the hydrodynamic attractor surface.However, the attractor surface is not necessarily relevant at τ ∼ Q −1 s , where we use conditions such as (4.6) to match the evolution with a more microscopic description, such as the glasma effective theory, via the total energy density.A natural question then is whether our results for hydrodynamization in large vs small system collisions remain valid for generic evolutions that may be far away from the attractor surface at a glasma matching time τ i ∼ Q −1 s .We have explored this issue by studying evolutions in the context of both p-p and Pb-Pb type collisions where the anisotropies of both hard and soft sectors are far from the attractor values around τ i ∼ Q −1 s , retaining the constraints on the total energy densities used in Sec.4.2, which are E Pb-Pb (τ i ∼ 0.63) ∼ 6.25 and E p-p (τ i ∼ 1) ∼ 1 respectively (in units γ = 1), in order that we could match with glasma models at the time where the latter just cease to be applicable and need to be replaced by kinetic-like descriptions.(Recall that we have identified γ with Q −4 s for p-p collisions.)Numerically, this has been implemented by initializing with values different from known attractor solutions both before and after τ i ∼ Q −1 s , and evolving forwards/backwards in time.While it is hard to explore the full four-dimensional phase space, initializing with different anisotropies (i.e.ϕ and φ) away from the attractor surface typically produce maximal deviations from the attractor surface around τ i ∼ Q −1 s .As shown in Fig. 13, in the Pb-Pb collision scenario, the convergence to the attractor happens much earlier than hydrodynamization. 7The hydrodynamization times for the hard/soft sectors for Pb-Pb collision type scenarios are identical to that of the corresponding attractor curves to which the evolution converges.The same is also true for the hard sector in p-p collision type scenario as shown in Fig. 13.However, for the soft sector in p-p collisions, as shown in Fig. 13, the convergence to the attractor is simultaneous with hydrodynamization, and therefore the hydrodynamization times in off-attractor evolutions do not coincide with that of the corresponding attractor curve, and can be larger than the latter by typically up to 50 percent (gray column in Fig. 13(d)). Nevertheless, we note that even the largest hydrodynamization time for the off-attractor evolution in p-p type collision type scenario is smaller than the hydrodynamization time 7 Because of the nonlinearity of the dynamics, one cannot give a precise initial rate of approach to the attractor as this depends strongly on the specific solution.In the example shown in Fig. 13, the rate Γinit = |d ln(χ − χattractor)/dτ | at initial times γ −1/4 τi = 1 and 0.63, respectively, for p-p and Pb-Pb, is, however, always much larger than τ −1 hd , for both, the hard and the soft sectors.Close to the attractor, Γinit/τ −1 hd ∼ 60 for the hard sector in both p-p and Pb-Pb, and Γinit/τ −1 hd ∼ 20 (30) for the soft sector in p-p (Pb-Pb).Further away from the attractor, this ratio varies between 40 and 130 (20 and 50) for the hard (soft) sector of Pb-Pb, and between 10 and 70 (2 and 300) for the hard (soft) sector of p-p. 
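The approach rate Γ_init defined in the footnote can be extracted from a sampled off-attractor evolution by simple finite differences; a minimal sketch follows, with array names as placeholders.

```python
import numpy as np

def approach_rate(tau, chi, chi_attractor):
    """Initial approach rate Gamma_init = |d ln(chi - chi_attractor)/d tau|,
    estimated by finite differences on a sampled off-attractor evolution."""
    log_delta = np.log(np.abs(chi - chi_attractor))
    rate = np.abs(np.gradient(log_delta, tau))   # handles non-uniform tau grids
    return rate[0]                               # value at the matching time tau_i

# Comparing approach_rate(...) with 1/tau_hd for the hard and soft sectors gives
# the ratios Gamma_init * tau_hd quoted in the footnote above.
```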
.Left: Generic evolutions of the dimensionless anisotropy which deviate from the attractor surface in Pb-Pb collisions for both hard sector (top) and soft sector (bottom).In all cases, the glasma matching condition E Pb-Pb (τ i ∼ 0.63) ∼ 6.25 (in units where γ = 1) is satisfied (see Sec. 4.2) at the glasma matching time τ i ∼ 0.63 = Q −1 s,P b which is indicated as a black dash-dotted vertical line.All evolutions converge to the corresponding attractor curve which is marked in bold red (for hard) and bold blue (for soft), and this occurs much earlier than hydrodynamization in both cases.The hydrodynamization time for the soft sector is marked as a vertical blue dotted line, while that for the hard sector is beyond the range of the plot.Right: The same for p-p collisions, where the glasma matching condition E p-p (τ i ∼ 1) ∼ 1 is satisfied (see Sec. 4.2) at the glasma matching time τ i ∼ 1 = Q −1 s,p indicated as a black dash-dotted vertical line.All evolutions converge to the corresponding attractor curve which is marked in bold red (for hard) and bold blue (for soft).In the case of the hard sector, the convergence to the attractor occurs also much before hydrodynamization time, but in the soft sector of p-p collisions, the convergence to the attractor is simultaneous with hydrodynamization.The hydrodynamization times for various evolutions are spread over a range marked as a gray column which includes the blue dotted vertical time corresponding to the attractor (the corresponding range in the Pb-Pb soft case is negligibly small).Note that the hydrodynamization time for the soft sector in Pb-Pb collisions is larger than the maximum value for that of the soft sector in p-p collisions. in Pb-Pb scenario.The latter is determined by the attractor.Therefore, we conclude that the hydrodynamization of the soft sector in p-p type collisions occurs earlier than that in Pb-Pb type collisions in generic evolutions even if they deviate significantly from the attractor surface at τ ∼ γ 1/4 . Thus the insights gleaned from the study of the attractor surface should apply also for more generic phenomenologically relevant initial conditions.Particularly, the facilitation of hydrodynamization of the soft sector due to smaller energy densities in small system collisions compared to that in large system collisions (since both the hydrodynamization times lie outside of the conformal window) should be valid for generic phenomenologically relevant initial conditions. Discussion The main physical insights which we have gained from the Bjorken flow attractor of our hybrid fluid model is how hydrodynamization may work in an asymptotically free gauge theory with both weakly interacting and strongly interacting degrees of freedom.In our model we represent those by two fluid components with viscosities differing by one order of magnitude, coupled in accordance with the semi-holographic framework developed in [28][29][30][31][32][33][34].This model has only one dimensionful energy scale, namely γ −1/4 , which sets the inter-system coupling.We have studied the hydrodynamic attractor of this system and we have shown analytically that at very early time, τ ≪ γ 1/4 , the energy density in the weakly coupled sector always dominates over that of the strongly coupled sector with an exponent which is determined by the transport coefficients and the relaxation times of the two sectors.This feature, which is reminiscent of the bottom-up thermalization scenario of Ref. 
[38], is, however, only relevant physically when the system is initialized correspondingly early; when in the context of heavy-ion collisions γ −1/4 is identified with the saturation scale Q s , no such hierarchy arises at τ ∼ Q −1 s , where we make contact with glasma descriptions, although the weakly coupled sector is typically the dominant one. The hydrodynamic attractor of our hybrid fluid model is given by a (phase-space) surface of attractor lines that can be characterized by the ratio of the energy densities in the two sectors at some arbitrary early reference time together with the overall energy, and we have determined the resulting hydrodynamization times in the two sectors for a wide range of these parameters. When the total energy densities at the respective hydrodynamization times (E(τ hd ), E(τ hd )) is less than γ −1 , both sectors hydrodynamize as in a conformal hydrodynamic attractor (τ hd ∝ E(τ hd ) −1/4 and τhd ∝ E(τ hd ) −1/4 ) up to a pre-factor p. Remarkably, the dominant sector (which is the perturbative sector unless we fine-tune the initial conditions) always hydrodynamize in a universal way, with the pre-factor p being the same as in a decoupled conformal hydrodynamic attractor.The inter-system coupling does not affect p for the dominant sector, however the pre-factor for the sub-dominant (typically strongly interacting) sector increases with the ratio of energy densities at the initialization time -the higher the dominance of the dominant sector, the more delayed is the hydrodynamization of the sub-dominant sector.When the energy densities at the respective hydrodynamization times increases beyond γ −1 , the hydrodynamization times of both sectors increase rapidly after reaching their minimum values. Another relevant insight of our model is that although the full system behaves as a single fluid at late time, its effective equation of state and effective transport coefficients depend on which curve on the attractor surface that the system converges to, and is thus process dependent.Although for the equation of state the process-dependence is subleading, the effective shear viscosity depends on how the full energy is finally shared between the two subsectors. We also have gained new insights with potential relevance for heavy-ion collision experiments.Setting initial conditions motivated by saturation scenarios, we see that the total energy density at the hydrodynamization time of the strongly coupled sector E(τ hd ) lies outside the conformal window, while that corresponding to the time of hydrodynamization of the perturbative sector, namely E(τ hd ), lies within the conformal window.It follows from our general results that while the perturbative sector hydrodynamizes later, the nonperturbative strongly interacting sector hydrodynamizes earlier in p-p collisions compared to Pb-Pb collisions on the attractor surface.This is because the hydrodynamization times for the soft sector in both cases lie outside of the conformal window, and the total energy density is smaller in the case of p-p collisions than in Pb-Pb collisions at the respective hydrodynamization times.Thus we get a new perspective on how collective flow develops in small system collisions. 
Furthermore, we have shown that these results remain valid for generic phenomenologically relevant initial conditions with energy densities matched to the glasma models at time scales of order the saturation scale.In Pb-Pb collisions, the off-attractor evolutions for both sectors converge to the attractor surface much earlier than hydrodynamization, while in p-p collisions this happens only for the hard sector.In these cases, the attractor surface determines the hydrodynamization times.However, the approach to the attractor is simultaneous with hydrodynamization for the soft sector in p-p collsions, and the hydrodynamization times show typically up to 50 percent departure from the corresponding attractor value.Nevertheless, our conclusion that the hydrodynamization time of the soft sector in p-p collisions is smaller than in Pb-Pb collisions remains true even for evolutions which deviate significantly from the attractor surface at the time scales when we match with glasma models. We therefore conclude that the hybrid attractor of our two component system with a more weakly and a more strongly interacting subsector is relevant for a range of phenomenologically relevant initial conditions well before the onset of hydrodynamics.The hybrid fluid model thus provides a potentially useful model for describing regimes with different importance of weakly and strongly coupled physics that will be interesting to confront with experiment.It would be interesting in particular to explore whether the different off-equilibrium evolution of the soft, more strongly interacting IR sector in small systems could lead to characteristic differences between p-p and Pb-Pb collisions.In the future, we hope to study a fuller semi-holographic set-up with a kinetic theory coupled to a five-dimensional geometry governed by Einstein's equations.This is necessary for confronting our approach with experimental data, also for the study of dilepton production [48,49] and hadronization [47] on the attractor surface. Another interesting direction of study is to understand how stochastic statistical and quantum fluctuations, which are suppressed in the large N limit, affect the hydrodynamic attractor.This needs to be understood separately for both weakly interacting and strongly interacting theories.In our hybrid set-up, these fluctuations are needed for equi-partition of energy between the perturbative and holographic sectors and thus for complete thermalization (we will discuss this more in an upcoming work).In the context of the attractor, this may lead the Bjorken attractor surface to collapse to a single curve where the ratio of physical subsystem temperatures goes to one asymptotically, over a long time-scale.Such fluctuations could be enhanced if the system passes near a critical point, 8 and therefore such a study could be interesting for the search of a critical endpoint in heavy-ion collisions. 
The equations of the fluctuations are given by δϕ B Phase transition As was explored in the inviscid case in [33], the coupling equations as presented exhibit a phase transition depending on the value of r.Thermal equilibrium is best explored in the Minkowski background.For the purposes of this section ε = 3P, P = nT 4 1 , and where the last relation is the requirement of thermal equilibrium, which means that the physical system temperature, T , can be expressed in terms of the individual subsystem temperature via Further, we introduce the light cone velocity v = a/b, and ṽ = ã/ b, (B.3) which represents the effective causal light cone of each individual subsystem.Interactions between the two systems change the effective light cone velocity.Note that c > v, ṽ > 0, where c = 1 is the speed of light. light cone velocity in one system is multivalued, while in the other it is not.Of course, it is worthwhile to repeat that we are not in thermal equilibrium, which means we are not exactly mapping to the physical temperature T , but instead to an effective temperature.), as studied in [33].The blue line denotes the Bjorken evolution with increasing τ going from the green to red dot (right to left on the plot).Left: in the case r = 2, the green dot is for γε(τ 0 = 0.0001) = 0.348, while the red dot denotes τ = 20, the time we choose to stop the simulation.Right: below the phase transition, for r = 1.01 the Bjorken system is initialized at the green dot with γε(τ 0 = 0.01) = 0.1.14.As before, the evolution in the τ coordinate follows from the green to red dot (right to left).The upper solid purple line denotes the more viscous system, while the lower solid blue line is the weakly coupled system.The initial conditions are identical for both systems for each r and with initially no dissipation ϕ(τ 0 ) = φ(τ 0 ) = 0 at τ 0 = 1, i.e. top: r = 2 and γε = γ ε = 0.348, middle: r = r c with γε = γ ε = 0.248, and finally bottom: r = 1.2 with γε = γ ε = 0.3, while the transport coefficients are given by (2.30). The five identities analogous to those in Eq.We also find that the other six coefficients of the early time expansion can be determined in terms of k 1 and g 1 in consistency with the fact that the disperser surface is two-dimensional. In Fig. 16, the early time behaviors of χ, χ, ε and ε in a disperser solution have been shown, and in Fig. 17 D Borel resummation of the hydrodynamic expansion The full system at late time behaves as a single fluid with the transport coefficient being determined by which attractor curve on the 2D attractor surface the system goes to as discussed above.The curves on the attractor surface are parameterized by the dimensionless parameters, α and β (defined in (3.12) and (3.13)) which govern the late time hydrodynamic expansion and thus the effective transport coefficients of the full system.Here we aim to understand the hydrodynamic gradient expansion and see how its Borel resummation can describe the attractor surface in terms of these two parameters. We will expand the anisotropies, at late time in powers of inverse proper time, x = Let's use the notation ξ i with i = 1, 2 and χ 1 = χ and χ 2 = χ.At arbitrarily high order the coefficients of these two series expansions r i,n for χ i show factorial growth following indicating the series is asymptotic in nature with zero radius of convergence, which is seen in subsystems in the top panels of Fig. 18.The parameters ξ i,0 and γ i for the weak and strong systems depend on α and β. 
To promote the divergent series to a well-defined function the series needs to have finite radius of convergence.This is successfully done by adopting the standard technique of Borel resummation [54].The Borel transform of the series is given by r i,n are the coefficients from the series (D.1).This Borel transformed function has a finite radius of convergence about the origin in the complex ξ plane (i.e. the Borel plane) and is related to the original series via Laplace transform.However the function possesses poles in the complex plane as a consequence of the finite radius of convergence.Hence to perform the integral one has to do analytic continuation of the function χ B,i (ξ).Since we only have access to a finite number of terms, we will need to approximate the integrand, which we do via Padé approximant χ B,i (ξ) ≈ P N,M,i (ξ) = N i=0 n i ξ i 1 + M j=1 m j ξ j (D.3) where the coefficients of the Padé approximant are fixed to agree with series coefficients (D.2).The Padé series can go up to arbitrary orders with a constraint N + M = k, where k is the order up to which Borel transformed anisotropy is truncated.Most commonly, one considers the symmetric case with N = M = k/2.In our case we have taken the order k to be as high as 500 with precision set to thousand digits.The singularities of the function χ B,i (ξ) appear as concentrated poles of the Padéapproximant in the Borel plane and they depend on the dimensionless parameter α and β.In Fig. 18, we show the pole structure of both sectors in the Borel plane.The poles start from a point nearest to the origin, namely ξ 1,0 and ξ 2,0 for χ B (ξ) and χB (ξ) respectively, and accumulate along the real axis giving an image of branch cut as in the case of decoupled MIS [12].The starting value of the branch cut is proportional to the inverse of the slope of the coefficients shown in Fig. 18 and can be identified with the non-hydrodynamic mode that decays exponentially at late-times.From the analysis in Sec.3.4 we readily see that [12,16,54] χ i (x) = 1 x C e −ξ/x P N,M,i (ξ)dξ (D.4) where C is the contour from 0 to ∞. However the presence of singularity in χ B,i (ξ) introduces complex ambiguities implying that the integration depends on the choice of contour C. The choice of the contour actually determine the Stokes coefficients appearing in the trans-series that encode the full details of the initial conditions.Appropriate choices of these Stokes coefficients (contour of integration) correspond to evolution on the attractor surface.In our case, the contours of integration can be chosen to be the straight lines given by two different angles for χ and χ, respectively, as shown in Fig. 18.We have two Stokes coefficients corresponding to the two exponential decay modes discussed in Sec.3.4 and these give the two missing data (other than the parameters α and β appearing in the hydrodynamic expansion) which specify the initial conditions fully in the four-dimensional phase space. 
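The Borel–Padé procedure described above is straightforward to script; below is a hedged sketch using mpmath, following (D.2)–(D.4) but at much lower order and precision than the order-500, thousand-digit computation quoted in the text. The coefficient list r, the expansion variable x, and the ray angle theta (which fixes the contour and hence encodes the Stokes data) are placeholders to be supplied from the actual series.

```python
from mpmath import mp, mpf, fac, pade, polyval, quad, exp, inf

mp.dps = 60   # modest working precision for illustration

def borel_pade_resum(r, x, theta=0.3):
    """Borel-Pade resummation of an asymptotic series chi(x) ~ sum_n r[n] x^n.
    r     : list of series coefficients r_{i,n} (placeholders here)
    x     : value of the expansion variable (small at late times)
    theta : angle of the integration ray in the Borel plane; tilting it off the
            positive real axis avoids the poles/branch cut accumulating there."""
    k = len(r) - 1
    N, M = k // 2, k - k // 2                         # near-symmetric Pade, N + M = k
    b = [mpf(c) / fac(n) for n, c in enumerate(r)]    # Borel-transform coefficients
    p, q = pade(b, N, M)                              # ascending-order coefficients
    P = lambda xi: polyval(p[::-1], xi) / polyval(q[::-1], xi)
    ray = exp(mp.mpc(0, theta))                       # unit vector along the contour
    # chi(x) = (1/x) * integral over the ray of exp(-xi/x) * B(xi) dxi,
    # written here with the substitution xi = x * t * ray (needs |theta| < pi/2):
    return quad(lambda t: exp(-t * ray) * P(x * t * ray) * ray, [0, inf])
```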
E Different initial ratios of subsector energy densities While at sufficiently early times all attractor solutions have a dominant weakly coupled component, different initial conditions lead to different ratios of energy densities at some initial reference time, chosen here as τ0 = 0.001γ^{1/4}. In Figure 19 we display the evolution of the energy densities in the subsectors and the total energy density for different initial ratios, comparing the two sets of initial conditions we tentatively associated with p-p and Pb-Pb collisions in Sect. 4.2. For a ratio E1 : E2 less than 4:1, the energy density in the holographic sector overtakes the one in the perturbative sector for a stretch of time, whereas in the opposite case it never gains dominance. At sufficiently large time, the "perturbative" sector becomes again dominant in each case; although this is reminiscent of the transition to a weakly coupled hadron gas after hadronization, we consider our model as a toy of heavy-ion collisions only in the deconfined early stages. Figure 1. Anisotropies χ of one particular attractor solution (thick lines) and four neighboring trajectories (thin lines). The less viscous (strongly coupled) system is displayed by blue curves, the more viscous (more weakly coupled) system by red curves, and the total system by orange curves. From [37], where more plots are given for this particular solution. Figure 3. Check of the identities (3.5) at early times. Similar identities given in (C.5) also hold for the disperser surface, and can be verified to a very high accuracy numerically, as evident from Fig. 17. Figure 7. The figure shows a doubly logarithmic plot of the hydrodynamization time for the hard (in red) and the soft sector (in blue) against the total energy density at the respective τ hd and τhd for the ratio ϵ(τ0) : ε(τ0) = 5 : 1. The hydrodynamization time of the hard sector is longer than that of the soft sector. The black and green dots correspond to the hydrodynamization times for initial conditions which are physically realizable in p-p and Pb-Pb collisions. Figure 9. The figure shows the doubly logarithmic plot of the scaled hydrodynamization time of the strongly coupled (soft) sector against the total energy density at the respective τhd, when the weakly coupled (hard) sector is dominant at an initial reference time (here τ0 = 0.001γ^{1/4}). The various color labels refer to different ratios of energy densities of the hard to the soft sector at the initial reference time. The values of p for ratios of the initial subsector energy densities from 1 : 1 to 1000 : 1 increase through 0.64, 0.76, 0.93, 1.07, 1.21, 1.76 to 2.1, showing the dependence of τhd on the ratio ϵ(τ0) : ε(τ0).
Figure 14. Light cone velocity as a function of temperature for the inviscid Bjorken and Minkowski cases of identical subsystems for two different values of r. The orange dashed curve is the result of working directly in the Minkowski background (B.4), as studied in [33]. The blue line denotes the Bjorken evolution with increasing τ going from the green to red dot (right to left on the plot). Left: in the case r = 2, the green dot is for γε(τ0 = 0.0001) = 0.348, while the red dot denotes τ = 20, the time we choose to stop the simulation. Right: below the phase transition, for r = 1.01 the Bjorken system is initialized at the green dot with γε(τ0 = 0.01) = 0.1.
Figure 15. Light cone velocity as a function of temperature for the viscous Bjorken and Minkowski cases for a variety of r. The orange dashed curve is the inviscid case in the Minkowski background as shown in Fig. 14. As before, the evolution in the τ coordinate follows from the green to red dot (right to left). The upper solid purple line denotes the more viscous system, while the lower solid blue line is the weakly coupled system. The initial conditions are identical for both systems for each r and with initially no dissipation ϕ(τ0) = φ(τ0) = 0 at τ0 = 1, i.e. top: r = 2 and γε = γε̃ = 0.348, middle: r = r_c with γε = γε̃ = 0.248, and finally bottom: r = 1.2 with γε = γε̃ = 0.3, while the transport coefficients are given by (2.30).
18,865
2022-11-10T00:00:00.000
[ "Physics" ]
Computational Analysis of Complex Population Dynamical Model with Arbitrary Order This paper considers the approximation of solution for a fractional order biological population model. The fractional derivative is considered in the Caputo sense. By using Laplace Adomian decomposition method (LADM), we construct a base function and provide deformation equation of higher order in a simple equation.The considered scheme gives us a solution in the form of rapidly convergent infinite series. Some examples are used to show the efficiency of the method. The results show that LADM is efficient and accurate for solving such types of nonlinear problems. Introduction Nowadays in most of the research areas, the importance of fractional differential equations has been increased due to its wide range of applications in real world problems.In different scientific and engineering categories such as chemistry, mechanics, and physics applications of fractional calculus can be found.It can be also used in control theory, optimization theory, image processing, economics, and so on [1][2][3][4][5].Mathematical models of fractional order are very important to study natural problems.As is known, the nature of the trajectory of the fractional order derivatives is nonlocal, describing that the fractional order derivative possesses memory effect features and any dynamical or physical system related to fractional order differential operators has a memory effect, which shows that the future states depend on the present and past states.The limitations of Caputo fractional derivative are removed while introducing Caputo-Fabrizio derivative; see [6].This new approach was used by Singh et al. to model the dynamics of computer viruses, particularly in those cases where the physical processes do not bear plasticity, fatigue, damage, and electromagnetic hysteresis effects.In view of the great significance of new fractional derivative, a fractional order model of chemical kinetic system with full memory effect has been studied very well.The authors examined the existence and uniqueness of the solutions for chemical kinetic system of arbitrary order by using the fixed-point theorem. System of nonlinear differential equations are very important for mathematicians, engineers, and physicists, because in most of physical systems the input is not proportional to output in nature.In addition to the study of simple nonlinear differential equations, exact solution of the nonlinear evolution equation also plays an important role in the study of some nonlinear problems.For the exact solution there are many approaches such as Hirota's method, Darboux transformation, and Painleve expansions. In last few years, more alternative numerical methods have been used for solving both linear and nonlinear problems of physical interest including homotopy perturbation method (HPM) [7,8], Adomian decomposition method (ADM) [9,10], and homotopy analysis method (HAM) [11].Due to various applications of Lienard's equation, Kumar et al. 
proposed a numerical algorithm based on fractional homotopy analysis transform method to study fractional order Lienard's equation, while recently in 2017 the authors used -homotopy analysis method and Laplace transform approach to explore some aspects of the FitzHugh-Nagumo equation of fractional order.Laplace decomposition method was adopted in [12,13] to deal with some other nonlinear problems while homotopy perturbation transform method is selected in [14] to explore the approximate solution.Very recently Laplace transform combined with homotopy analysis method is used to produce effective method called homotopy analysis transform method (HATM) [15,16].Its main purpose is to handle nonlinear physical problems.Baleanu et al. [17] have solved the fractional order optimal control problems.For the solution of system of Schrodinger-Korteweg-de Vries equation, Golmankhaneh has used the homotopy perturbation method [18].In [19], the authors presented the following nonlinear fractional order biological population model using homotopy analysis transform method: with initial condition as where / = , 2 / 2 = 2 , and 2 / 2 = 2 .Further, V represents density of the population and denotes supply of the population due to birth and death rate.This nonlinear fractional order biological population model is obtained by replacing the first-order derivative term in the corresponding biological population model by fractional order , where 0 < ≤ 1.The derivative is considered in Caputo's sense.The parameter involved in fractional derivative shows various responses.If = 1, the fractional order biological model reduces to standard biological model.In [20,21], the authors discussed such models while exploring their different aspects.Promoting the above work, we solve model (1) by using LADM. with the given initial condition Since the proposed model is highly nonlinear and contains nonlinear terms V 2 and V (+) for + ̸ = 1, therefore, we decompose V 2 and V (+) in terms of Adomian polynomials as where Preliminaries In order to assist the readers, here in this section we recall some fundamental definitions and results from fractional calculus. Definition 2 (see [22]).The Riemann-Liouville fractional order integral operator of order > 0 of function () is given as In case of (, ) then the Riemann-Liouville fractional order integral operator of order > 0 is given by 0 (, ) = (, ) . ( The Riemann-Liouville fractional order integral [22] is given by provided that integral on the right side is point wise defined on (0, ∞). Definition 4 (see [19]).The Laplace transform of the Caputo derivative is given as Definition 5 (see [23]).The Mittag-Leffler function in term of power series is defined as LADM for Biological Model This section is devoted to the general procedure of the LADM for solving (3) with given initial conditions.To apply Laplace transform to model (3) we proceed as with the given initial condition From definition of Laplace transform on both sides of (3), we have Using given initial conditions yields that Therefore, after decomposing nonlinear terms in terms of Adomian polynomials and considering the unknown solutions V = ∑ ∞ =0 V , (18) can be written as Comparing terms on both sides, we get After taking Laplace inverse transform of system (21), we get . . . Applications In this section we present some application of LADM to solve biological population model. 
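Before turning to the examples: the decomposition of the nonlinear terms above into Adomian polynomials follows the standard construction A_n = (1/n!) dⁿ/dλⁿ N(Σ_k λᵏ V_k)|_{λ=0}. Assuming that standard formula, a small sympy sketch generates the first few polynomials for the quadratic nonlinearity N(V) = V² appearing in the model.

```python
import sympy as sp

def adomian_polynomials(N, n_terms):
    """Standard Adomian polynomials A_0..A_{n_terms-1} for a nonlinearity N(V):
    A_n = (1/n!) d^n/d lam^n N(sum_k lam^k V_k), evaluated at lam = 0."""
    lam = sp.Symbol('lambda')
    V = sp.symbols(f'V0:{n_terms}')                  # V0, V1, ..., V_{n_terms-1}
    series = sum(lam**k * V[k] for k in range(n_terms))
    return [sp.expand(sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(n_terms)]

# For N(V) = V**2, the quadratic nonlinearity of the population model:
for n, A in enumerate(adomian_polynomials(lambda v: v**2, 4)):
    print(f"A_{n} =", A)
# A_0 = V0**2, A_1 = 2*V0*V1, A_2 = 2*V0*V2 + V1**2, A_3 = 2*V0*V3 + 2*V1*V2
```

Each A_n depends only on V_0, ..., V_n, which is what makes the term-by-term recursion of the LADM scheme possible.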
Example 6. Consider the fractional order biological model (23) from [21], with the given initial condition. Upon using the proposed method on (23), comparing terms on both sides, and then taking the inverse Laplace transform, we obtain the successive iterates (25), and so on. The series solution follows by summing these terms; in closed form the solution is given by (27), which is the exact solution. Putting the fractional order equal to 1 in (27), we recover the classical solution. Example 7. Consider the fractional order biological model (29) from [21], with the given initial condition. Applying the Laplace transform on both sides yields (31); after using the inverse Laplace transform on both sides of (31), we obtain the iterates, and so on. The series solution of problem (29) and its closed form follow; considering the fractional order equal to 1, we get the classical solution. Example 8. Consider the following fractional order biological model, corresponding to the given initial condition. Applying the Laplace transform on both sides and then inverting, we obtain the iterates, and so on. The series solution of problem (36) follows, and hence the closed form of the solution; considering the fractional order equal to 1, we get the classical solution (42). Conclusion and Discussion In this article, the LADM has been applied successfully to obtain the exact solution of the generalized fractional order biological model with given initial conditions. From the numerical plots given in Figures 1, 2, and 3 for the considered examples, we see that the procedure is efficient for obtaining approximate or exact solutions of fractional order partial differential equations at various fractional orders. In the present cases we have in fact arrived at the exact solutions of the corresponding nonlinear FPDEs. In the 1980s George Adomian formulated a novel, powerful scheme for solving nonlinear functional equations. Afterward, this method became known in the literature as the Adomian decomposition method (ADM). The technique is based on splitting the solution of a system of differential equations into a series of functions; every term of the series is obtained from a polynomial generated by the expansion of an analytic function into a power series. This is an effective tool for solving systems of differential equations appearing in physical problems. Over the last 20 years, the Adomian decomposition approach has been applied to obtain formal solutions to a wide class of stochastic and deterministic problems involving algebraic, differential, integrodifferential, differential-delay, integral, and partial differential equations. Compared to other existing numerical schemes, the main advantage of this method is that it does not require perturbation or linearization for exploring the dynamical behavior of complex dynamical systems. The LADM has likewise been used extensively to provide analytical solutions of nonlinear equations and to solve differential equations of fractional order.
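Returning to the examples above: their particular closed forms are not spelled out here, but the kind of check they imply ("which is the exact solution", "putting the fractional order equal to 1") is easy to reproduce symbolically. As a purely hypothetical illustration, with an assumed source term f(V) = V and assumed initial condition √(xy) that are not necessarily those of Examples 6-8, sympy confirms that V = √(xy) eᵗ solves the classical (integer-order) population model.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', positive=True)
V = sp.sqrt(x * y) * sp.exp(t)        # hypothetical closed-form solution

# Residual of V_t = (V^2)_xx + (V^2)_yy + V with f(V) = V (assumed for illustration)
residual = sp.diff(V, t) - sp.diff(V**2, x, 2) - sp.diff(V**2, y, 2) - V
print(sp.simplify(residual))          # -> 0, so V solves the classical model
print(V.subs(t, 0))                   # -> sqrt(x*y), the assumed initial condition
```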
Further, in this paper the obtained results have been compared with the analytical solution and with the results obtained by the Adomian decomposition method [20] and the homotopy analysis method. In [20], the authors show that ADM generally does not converge when applied to highly nonlinear differential equations. The proposed method compares favorably with HAM, HPM, and VIM because it needs no auxiliary parameters to form the decomposition equation and no perturbation, as those methods require. It is simple and does not consume extra memory in the way a Tau-collocation method does. It also improves on direct ADM: we first apply the Laplace transform and then decompose the nonlinear terms in terms of Adomian polynomials, whereas direct Adomian decomposition involves a particular integral that often creates computational difficulties. The main advantage of this technique is that it avoids complex transformations such as index reductions and leads to a simple, general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems. The method also offers a simple way to control the convergence region of the series solution through a proper choice of parameters. The results show that LADM is a very efficient and powerful method for finding analytical solutions of nonlinear differential equations.

Figure 3: Numerical plots of approximate solutions of Example 8 at various fractional orders, using the parameter values 0.1 and 10.
2,297.4
2018-01-03T00:00:00.000
[ "Mathematics", "Biology", "Computer Science" ]
A survey on software test automation return on investment, in organizations predominantly from Bengaluru, India Software industry has adopted automated testing widely. The most common method adopted is graphical user interface test automation for the functional scenarios to reduce manual testing and increase the repeatability. Reducing execution time and redundancy to achieve quality “go/no go decisions” provides rational for the executive management to allocate funds to adopt automation and invest in the setup including people, process, and tools to achieve faster time to market. There are a variety of practices engaged by testers, like frameworks, tools, methods, procedures, models, and technologies to achieve automation. Nonetheless, the actual effectiveness in terms of return on investment (ROI) is not known, though there are various formulas to calculate ROI of test automation. The factors that determine the ROI are maturity of test automation, purpose, or intent, picking right tests for automation, knowledge of the tester to derive test coverage, domain expertise, defining right metrics like defects found by automation runs, cost versus reduction of time, labor versus quality of scripts, repeatability, and results. These factors form the base of the questions designed for the survey. The paper presents a survey and analysis to understand the ROI of test automation from industry test professionals from both product and services organizations. Introduction Executives in the commercial IT industry strive to automate all test cases to save time, get repeatability, identify defects early, increase quality, and cut cost. 1 The nature of software testing activity is a support function, and hence, it is treated as cost center. Therefore, the inclination to automate all test suites to minimize the cost is widely common.2 100% automation is rarely achieved in the organization 3 because of various impeding factors. In a survey of industry professionals, only a meager 6% think that 100% automation is possible. 2 Often the first step to automation by software companies is to embrace functional test automation, which involves the graphical user interface (GUI). This takes considerable investment, and the challenges are plenty. 4 There are a lot of research and case studies that describe various approaches to increase benefits and reduce cost. There is still some way to go. 5 The important factor where the management underestimates the cost is when it comes to maintenance of the suite. 6 The IT services industries provide automation services to the software product organizations. This is one of their mainstream businesses. While the product organizations do own automation teams, it is largely viewed as an activity that can be outsourced. The success depends on achieving repeatability, uncovering defects, and faster execution while keeping their maintenance cost low. This has been the nemesis in the SDLC cycle. There are numerous research works in this area on automation tools, methodologies, comparison of frameworks, return on investment (ROI) computation on using certain tools over other tools, and low code-no code automation. But there is no real answer in terms of ROI on automation for the organization. The method proposed here is to create a survey questionnaire and gather the answers from the industry professionals and analyze the results, with the current pay scale (from PayScale 7 data as of 2021) and derive the real scenario in the industry. 
Also, from the data collected, the expectations of the services and the product industry on ROI of automation are illustrated using correlation. The paper elicits review of literature on automation, its activities, frameworks, tools, ROI, scope of IT industry, and so on. Problem description, research method, sampling distribution mechanism, survey design, survey validity, survey execution and data collection method, response quality, PayScale data for automation engineers, data analysis, response variation, observations, results, and conclusion are discussed. Review of literature The systematic review of literature discusses the various aspects, factors and benefits of test automation, and the works of researchers in terms of ROI yield. Efficiency, effectiveness, and precision are the three main factors that determine the success of automation in any organization. Efficiency means optimized test coverage, execution time, write once run anywhere 3 and maintenance, 6 effectiveness means the right tests that are beneficial to the customer's usage, 8 and precision means able to repeat the correct results. 9 The review of literature is carried out to present the various researches that are already published. They are classified and referenced according to the various categories that are highlighted in italics. These served as a base for designing the survey questionnaire. There are a lot of generic automation framework for different applications and References 10-14 provide the, research that is published in this area. Web automation framework is an action-related framework and it is UI driven. References 15-18 provide the research that is published in this area. Device automation framework has been gaining popularity with device farms and device automation frameworks to enable mobile device automation frameworks. [19][20][21][22][23] Choosing the right GUI tool with accessibility, federated, and industry case studies are discussed in References 2, 24-31. Automation frameworks based on GUI Models are expected to ease the automation activity. [32][33][34][35] Dynamic test generation methodologies like codeless automation and auto generation of tests have been in research and being adopted by the industry professionals. [36][37][38][39] Model-based automated test generation is used for high coverage, and keyword-driven GUI tests are user-friendly. [40][41][42] UML-based test generation for textual and visual patterns for generating tests is used for functional test coverage. 43 With DevOps, the adoption of continuous integration and development became imperative for automated code deployments. [44][45][46][47] The same is now extended as QAOps for continuous verification for executing automated regression test suites. [48][49][50] Unit test cases provide the code coverage and they are even automated and autogenerated unit based on models. [51][52][53][54][55][56][57] Behavior-driven test for graphical user interface tests using Cucumber and Gherkin is widely adopted for mobile GUI automation. 58 ML/AI test framework, the recent researches present ways to create machine language-based automation of tests. [59][60][61][62][63][64][65] Low-code automation is the latest adoption by the industry for building blocks of code that can be easily assembled to quickly build workflows and functional scenarios and use cases. The main advantage is it saves time because the blocks are reusable. The same concept is adopted for testing for non-technical, citizen testers to write automation scripts. 
[66][67][68] Reusable libraries framework and reusable test libraries reduce maintenance effort and maximize test coverage. [69][70][71][72][73] The services organizations employ a test factory mechanism where it is persona-based automation. An experience report on industry practice is discussed. 74 The product code is refactored every now and then to ensure scale, technology adoption, and optimization. Similarly, there is research for identifying tech debt in test automation and applying software design principles for test automation development. [75][76][77] The criteria for when and what to automate are decided by the factors eligibility, priority, cost, coverage, end user usage, quality, maintainability, and time. [78][79][80] Certainly, there are various impediments for automation, and the factors contributing to hindrance of a robust test automation suite are discussed in Reference 81. Automation framework challenges typically are usability and metrics like test coverage, test effectiveness, usability, and defects found.4,82 It is interesting to note that automation practice is researched by region. Industry practice adopted by professionals from various regions is presented in References 83-85. Again automation practice by industry verticals and its study on different organizations are highlighted in References 86-88. Similarly, automation practice by infrastructure, for example, DevOps-related software test automation, is discussed in Reference 50. Benefits and limitations of automation are measured by cost, quality, time, effort, maintainability, and return on investment (ROI). 8,89 With the microservices, serving different clients and applications running on different multitenant servers rises a need for distributed test agent framework to enable cross platform frameworks to function. [90][91][92] Automation metrics define the usefulness of the automation and measure the quality of the product with cost and time. 93 Maintenance involves effort and the ease of maintaining the automated suite. 6,11,94 Automation complexity like greedy-based test suite reduction methods 95,96 is mostly still at the research level. Automation test coverage ascertains the quality of tests that yield maximum benefits. [97][98][99] Automation is also used as decision support system to support go and no go decision for production code promotions. 100,101 Automation effectiveness is measured by the number of defects found and quality of software under test. 102,103 Automated suite optimization helps in reduction of tests to minimize effort and time and results in overall quality for the software under test. 104,105 Automation ROI estimation helps to justify the time and material spent on the automation suite with respect to quality. [106][107][108][109][110] Problem description Every software company, with faster to market strategies in place, strives to adopt automation to provide quality by investing in resources like tools, frameworks, infrastructure, consultation, and people. The average testing spends in the last 5 years according to Statista are 28% of the overall IT spend. 111 Also, the estimated market size for test automation by the year 2026 is $60.4 billion. 112 One of the main focuses of automation is to achieve customer-and user-centric test coverage 103 with repeatability and reduced time, catch defects early, and save cost. Unfortunately, achieving ROI requires knowledge in test coverage, maintenance of suite, and setting automation goals. 
The current trends in automation are all about tools and number of tests automated. Continuing with this prevents the focus on the test coverage, test optimization, test effectiveness, etc. Developing more informed KPIs for test automation could help the software organizations to measure the cost and saving to derive the ROI on automation spend. The proposed paper aspires to present the current understanding of industry professionals on test automation and what might be the present ROI for the software organizations from the sample. For collection of data, a specifically designed survey questionnaire is created to capture present usage and metrics and circulated via electronic medium targeted to the current industry professionals who practice/manage test automation. The Conclusion section presents the results with required charts and analysis. Research method The research method adopted is the survey research method because it requires to collect data from various software organizations on their current practice, observations, and results. This approach collects the qualitative and quantitative primary data by self-reported measures, on the understanding of test coverage and actual automation coverage from a sample of industry professionals from their respective organizations. The secondary data for cost of tools are referred from the top-used tools website. [113][114][115] The secondary data for cost of test automation professional is collected from PayScale website. 7 The experimental data is derived from the primary data to correlate unit test coverage to functional coverage, automation ROI from the perspective of industry test professionals who work for product organizations versus services organizations. The analysis of data from industry practice professionals presents analysis for the questions-"is there an ROI for automation?", "Is there a difference in expectations-product versus services" because it is a cost for product organizations and revenue for services organization. Survey design As stated in the problem statement that IT spends 28% of total IT budget on testing activities with a major push toward test automation. 111 The survey questions are designed in such a way the operations of the test activities are questioned with a focus on concepts like test coverage, test effectiveness, test strategy, knowledge of test professionals, tools used, cost of test automation, and type of industry ( Table 1). The actual survey questions are given in the appendix-1 and the actual survey responses are given in appendix-2. Scope of IT industry This section depicts the target organizations and the survey execution ( Figure 1). The software companies registered with Ministry of Corporate Affairs (MCA.gov), India, and have presence in Bengaluru, including the international software organizations represent the target population of the survey. The survey was a paid survey and researcher selffunded the research. Survey execution and data collection methodology The survey questions are added to a Google Form. The sample selected are the test professionals from the software industry from this region. The form was circulated to a. email list of industry professional, b. shared via WhatsApp to list of phone numbers of industry professionals belong to this region, and c. to Bengaluru-based testing groups on LinkedIn. The phone number was an optional field; if the respondent wanted to be paid, they chose to provide the phone number and was paid INR 50, for each successful response. 
The survey duration was from 23 March 2021 to 29 April 2021. Survey validity The survey was sent to 118 industry professionals, and 54 responses are received from the target commercial software organizations from the target region. Quality of survey questions The design of survey concentrates on concept versus practice-centric questions to draw the response toward the ROI, as explained in the Survey Design section. Some of the respondents conveyed their appreciation on quality of questions verbally, as well as in the comments section of appendix-2. Few respondents did require help in understanding the concepts to answer the survey. The tone of the questions is maintained neutral to eliminate any bias where most of the questions could be answered descriptive and if they are multiple choice, there was a choice for open-ended answer. Quality of response The responses when tabulated are found to be 95% good to do further data analysis. The 5% of data cleanup had to be done in the following areas like a. 15 Categorization of responses The responses are primarily categorized into product and services organizations based on the organization name mentioned by the respondents. This helps to understand aspects of testing activities from different point of views from the industry professionals. If they are part of product organizations, then they are the owners of quality for the product. If they are part of services organization and provide engagement to support testing activities of a product organization, they act as support function. The other major contributor to the difference in point of view is it is an expenditure/investment for a product organization, whereas for services, it is this activity that brings revenue. Since the motivation and goals may vary depending on the engagement model they belong to, the responses are categorized into responses from industry professionals working in product organizations and services organizations. The one-way ANOVA results ( Figure 2) notice that p-value is 0.93, p > 0.05, and hence, the means are equivalent between the groups. Also, F-value < Fcrit value. Hence, accept the null hypothesis. Data analysis, observations, and inferences Response variation 31 numbers of responses are from respondents working in product organizations, 23 numbers of respondents are from Figure 1. Survey execution and data collection methodology. services organizations, and 2-3 numbers of respondents are from the same organization often large but different divisions. The (Figure 3) charts show the organization type and the years of experience from the survey responses. For our research purposes, the IT education and IT banking are custom solutions and mostly employ people for their own back-office purposes-hence the classification as product. Interestingly, the product organization has more experienced professionals than the services industry. This may be due to the engagement model of the services sector to be profitable, where the professionals are trained and deployed in a client organization (typically a product organization). 112 Average of factors from respondents The averages of all the factors collected from the respondents are stated below ( Table 2). The averages are shown with and without outliers for comparison purposes. The standard deviation in the case of number of bugs found, years of experience, and team count are high because of the nature of the mix of organization types. The rest of the data is good with less deviation. 
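A one-way ANOVA of the kind used to compare the two respondent groups can be reproduced with a few lines of SciPy. The sketch below uses made-up ROI figures for the two groups purely to illustrate the test; the actual survey responses are in the appendices.

```python
from scipy import stats

# Illustrative (not the actual survey) ROI-yield responses, in percent,
# split by the respondent's organization type.
product_roi  = [30, 45, 50, 26, 40, 35, 51, 38]
services_roi = [28, 47, 33, 42, 36, 50, 31]

f_stat, p_value = stats.f_oneway(product_roi, services_roi)

# With p > 0.05 the null hypothesis of equal group means is retained,
# mirroring the paper's reading of Figure 2 (p = 0.93, F < F-crit).
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```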
It is observed that the test coverage, bugs found, and ROI yield are low. Besides, the maintenance (fix of script failures and change of scripts due to application changes) is high at 22%-30%. In the coming sections, each factor is analyzed separately, in combination with related factors to further examine them to deduce if the above factors have any impact on the ROI yield. Number of defects versus automation team count The following Figure 4 shows that irrespective of the team count, the number of defects found is mostly below 10 and concentrated near 5. Notice that few professionals from services have quoted beyond 10. Return on investment yield from overall responses The ROI yield, as quoted by the industry professionals, is plotted alongside the test coverage, and failures due to application changes and script failures. The maintenance is the overhead and contributes to the robustness of the suite. Failures are main factors that contribute to the stability of the automated suite. Test coverage contributes to the confidence of the suite. Hence, they are important in determining the ROI. The below charts compare the factors with and without outliers. As observed in Figure 5, the data for outliers shows that where the test coverage is high and the failures are low, the ROI is more, especially the outliers Verizon with 300% and other with 150%. Their primary suite is API based and not GUI based and hence is treated as outliers. As observed in Figure 6, the data without outliers shows that the trend of ROI range is between 26% and 51%. There are a few interesting observations: a. Where the failures are low and test coverage is high, the ROI is also high and vice versa and b. few industry professionals have quoted otherwise. Let us further dissect the data in the next section to understand if the organization category or any other factor is a contributor to this. A mathematical computation on correlation is shown in Figure 7 to support the above observations. Note that there is an overall positive correlation between test coverage and ROI yield and negative correlation between ROI yield and failures due to app changes, script failures. Clarity in understanding of "return on investment on test automation" by respondents This section aims to see if the understanding of the industry professionals is in the right direction, especially with categories, namely, product and services. Figures 8 and 9 call out the difference in responses from product and services professionals. Figure 8 represents the responses from professionals from product organizations and Figure 9 represents the responses from the services organizations. The observations from the sample are tabulated in Table 3. The table presents the results of the statistical analysis from the data with various combinations to understand the responses better from the initial aim of the concepts used for the survey design. This table represents the difference between inference of sample collected from industry professionals from product and services organizations. A mathematical computation on correlation is shown in Figure 11 to support the above observations. For product, there is a positive correlation of ROI with coverage, defects, unit tests, team count, years of exp, % of suite automated, and 95% pass achieved and a negative correlation of ROI with failures on both app changes and script issues. 
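The correlations summarized in Figures 7 and 11 are plain Pearson correlations between the response columns. A minimal pandas sketch is shown below; the column names and values are placeholders standing in for the actual survey fields, chosen only to illustrate the computation.

```python
import pandas as pd

# Hypothetical response table; columns are placeholders for the survey fields.
df = pd.DataFrame({
    "roi_yield":        [26, 51, 40, 30, 48, 35],
    "test_coverage":    [55, 80, 70, 50, 78, 60],
    "fail_app_changes": [30, 10, 15, 28, 12, 22],
    "fail_script_bugs": [25,  8, 12, 26, 10, 20],
})

# Pearson correlation of each factor with the reported ROI yield.
print(df.corr(method="pearson")["roi_yield"].drop("roi_yield"))
# Expected pattern from the paper: positive for coverage, negative for both failure rates.
```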
For services, there is a negative correlation of ROI with team count and failures and interestingly failures due to app changes are positively correlated. From the response it is found that ∼82% mentioned GUI suite is not productive without unit tests ( Figure 12) and ∼57% mentioned GUI suite is not productive without unit tests ( Figure 12). These unit test cases help to generate technical wealth. 124 Finally, the responses are sorted to get the responses with high (>70%) ROI. The following Figure 13 highlights the gap in clarity on test automation. In the case of product, low failures, high coverage, more scripts automated, and high pass percentage contribute to high ROI. In the case of services, few responses with high failures, not known coverage, and average number of scripts automated cannot contribute to high ROI, but then the respondents have quoted high ROI. Figure 9. Clarity in understanding of ROI on test automation-services organizations. Table 3. Results from various statistical analyses of the responses. Professionals from product organization Professionals from services organization ∼64% know the % unit test for the application under test ( Figure 8). ∼57% know the % unit test for the application under test (Figure 9). ∼57% mentioned GUI suite is not productive without unit tests ( Figure 12). When the responses are sorted to get the responses with low (<30%) ROI, the following Figure 14 highlights the gap in clarity on test automation from both product and services respondents. Tools and licensing cost data The paid tools has more adoption in product organizations, though the usage percentage is about 5% and the cost ranges from $2000 125 to $20,000 126 per year based on the tools that are used. But predominantly, open source tools are preferred with in-house frameworks depending on code base and other operations based on release cycles, that vary from one team to another. 127 Figure 15 shows overall open source is predominantly adopted for automated testing by both product and services, although services professionals use commercial tools like UiPath 128 and QTP/UFT. 129 Figure 16 shows the overall tool dispersion against the tool dispersion between the product and services industry. PayScale data For computing the ROI, the average salary per year for the test professional commonly titled as SDET (software engineer in test) 7 in the region of Bangalore is given in Table 4. * The data from PayScale.com 7 is taken as on 25 th May 2021 and $1=₹72.78 on the same day. Average investment based on sample versus ROI yield Average team size from the sample = 10. The PayScale data mentioned in this paper are from industry professionals employed by organizations. Hence, the amount mentioned is based on salaries drawn by them. If a services organization is engaged for consulting, the costs vary, and typically, the consulting engagements are higher than the salaries provided to employees in a range of about 20%-40%. So, these costs would go up by 20%-40%. Typically, QA services are rarely tiered and listed on websites. But QA Mentor, a testing services organization, provides some view into a pricing structure. 130 Adding to this, the tools and licenses section of this paper highlights the cost of commercial tools. 113,114 Hence, if these tools are used, then the cost may be incremented anywhere from $2000 to $20,000. Conclusion The data from the samples presented from the point of view of respondents are from product or services organizations. 
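The investment arithmetic described in the preceding section (an average team of ten, the Table 4 salaries, optional tool licences, and a 20%-40% consulting uplift) reduces to a few lines of code. The sketch below is a hedged illustration only: the salary, saving, and uplift figures are placeholders rather than the PayScale values, and the simple (savings − investment) / investment definition of ROI is an assumption, not the paper's formula.

```python
def automation_roi(team_size, annual_salary_usd, tool_license_usd=0.0,
                   consulting_uplift=0.0, annual_savings_usd=0.0):
    """Very simple ROI estimate: (savings - investment) / investment."""
    people_cost = team_size * annual_salary_usd * (1.0 + consulting_uplift)
    investment = people_cost + tool_license_usd
    return (annual_savings_usd - investment) / investment

# Placeholder figures only: team of 10 (sample average), a nominal $15,000/yr SDET
# salary, a $5,000/yr commercial tool licence, a 30% consulting uplift, and assumed
# savings from reduced manual-testing effort.
roi = automation_roi(team_size=10, annual_salary_usd=15_000,
                     tool_license_usd=5_000, consulting_uplift=0.30,
                     annual_savings_usd=260_000)
print(f"Estimated ROI: {roi:.0%}")   # -> 30%
```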
In the real-world scenario, the services industry mostly deploys their professionals in the product organizations for aiding the test automation in the product organizations as that is the business model on which they operate. From the analysis of this paper, the observations deduce that the product and services show a difference in their respective point of views, with respect to test automation. In reference to Statista, the average spend on testing from overall IT budget is 28% for the last 5 years, but the analysis of the sample collected from different organizations for this paper highlights the areas (presented in Table 3) that contribute to low ROI on test automation. The common denomination is the difference in understanding the perspectives of automation (product and services organizations) and its goals (way to automate vs automating the right test). The results show us unit tests and the right automation helps to generate technical wealth that aids in faster, cleaner, and quality deliveries with low maintenance. Dimensions of expertise, project automation, and quality goals are indirectly considered with the experience. Expanding this may provide more insights. For future work, dimensions of expertise, automation goals, metrics, etc. may be added to measure before and after experiments, on the actual testing experience from the organizations that have low ROI yield. Experiments could be to set the automation goals and metrics and align with the right understanding and then measure and optimize the ROI continuously. Acknowledgments We are grateful to all the respondents who have filled in the questionnaire for automation ROI. Author Contributions Survey and analysis. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article.
5,650.4
2021-01-01T00:00:00.000
[ "Computer Science", "Business" ]
Repositório ISCTE-IUL —Non-orthogonal multiple access (NOMA) concatenated with multiple-input multiple-output (MIMO) or with massive MIMO, has been under scrutiny for both broadband and machine-type communications (MTC), even though it has not been adopted in the latest 5G standard (3GPP Release 16), being left for beyond 5G. This paper dwells on the problems causing such cautiousness, and surveys different NOMA proposals for the downlink in cell-centered systems. Because acquiring channel state information at the transmitter (CSIT) may be hard, open-loop operation is an option. However, when users clustering is possible, due to some common statistical CSI, closed-loop operation should be exploited. The paper numerically compares these two operating modes. The users are clustered in beams and then successive interference cancellation (SIC) separates the power-domain NOMA (PD-NOMA) signals at the terminals. In the precoded closed-loop system, the Karhunen-Lo`eve channel decomposition is used assuming that users within a cluster share the same slowly changing spatial correlation matrix. For a comparable number of antennas the two options perform similarly, however, while in the open-loop downlink the number of antennas at the BS is limited in practice, this restriction is waived in the precoded systems, with massive MIMO allowing for a larger number of clusters. I. INTRODUCTION Non-orthogonal multiple access (NOMA) has been much studied for next generation wireless systems, including 5G, when dense networks are envisaged due to its ability to further enhance the overall spectral efficiency [1,2,3].In contrast to orthogonal multiple access (OMA), in NOMA all users are superimposed in the time, frequency, or the code domains and then separated by means of successive interference cancellation (SIC) or parallel interference cancellation (PIC), and achieve all points in the capacity region of the multiple access channel (MAC) region [4].In power-domain NOMA (PD-NOMA) the signals are separated at the receivers taking in consideration their different power levels, although other types of NOMA exist; a range of other categories of NOMA are described in [5]: scrambling-based NOMA, spreading-based NOMA, codingbased NOMA, and interleaving-based NOMA.In addition to those, NOMA based on the partitioning of lattices [6] have also proved well even when the channel gains of the users in the same cluster are similar [7,8,9,10], which is a requirement for a good performance in PD-NOMA.Both multiple-input multiple-output (MIMO) schemes compared in this paper use PD-NOMA, overwhelmingly the most studied type of NOMA.Clustered MIMO-NOMA has also been studied for millimeter waves ranging from 28 GHz up to 73 GHz [11], exhibiting a superior sum-rate than OMA, as initially suggested by [3].In [12] multiple analog beams are formed to further create more NOMA groups and therefore increase performance for any angular distribution of the users' positions. 
NOMA was included in long-term evolution (LTE) Release 14, under the name multi-user superposition transmission (MUST) [13], multiplexing two users, and despite the information-theoretic foundations of NOMA [14], its practical application has been postponed by 3GPP for the 5G standard, after having been analyzed during the 3GPP technical tasks.The authors in [15] show some reasons underpinning that decision.They show how NOMA only outperforms multi-user MIMO (MU-MIMO) when the system loading (defined in respect to the length of the quasi-orthogonal sequences used in a spreadingbased or coding-based NOMA system) gets larger.However, at lower overloading factors, the use of several slots (or resources in general) by a spreading-based or coding-based NOMA makes it less spectrally efficient than MU-MIMO.NOMA only outperforms MU-MIMO at high signal-to-noise ratio (SNR), due to the inherent interference-limitation in MU-MIMO, but even so with almost negligible gain (around 1 dB). In the case of PD-NOMA, the number of users supported in the power-domain is always very low, typically two or three users, due to SIC error propagation.The issue of fair power allocation is prone to a discussion over the definition of fairness in the context of NOMA (see [16] and references therein).In [17] it was proposed a power allocation policy based on stochastic geometry to take into account the distribution of the users' location.While most papers analyze NOMA in a singlecell scenario, the problem in real multi-cell scenarios raised the problem of how to associate users to a cell while maximizing the system's sum-rate.This problem is dealt with in [18], applying matching-theoretic algorithms.The multi-cell scenario can be enhanced by considering different types of service requirements in different cell and a power allocation algorithm that takes in consideration different types of data traffic is proposed in [19].The benefits of PD-NOMA over OMA highly depend on the differences between the channel gains to each user and a well-designed system should have the option of switching to a OMA at times.The authors in [20] recently bridged this gap by proposing a utility cost that takes in consideration the costs and gains associated to each MAC mode such that the system can opt between them.When device-to-device (D2D) communication exists in a cellular communication environment, NOMA can be used to multiplex the communication from one transmitting terminal to two receiving devices [21].While the number of users possible to multiplex in PD-NOMA is extremely limited (2 or 3 users only), the concatenation of PD-NOMA with orthogonal frequency division multiplexing (OFDM) largely increases the number of multiplexed users in the overall system.The problem then becomes the one of user grouping and power allocation, for which simple greedy algorithms can perform quite decently [22]. 
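The two-regime behaviour of a two-user PD-NOMA link with SIC, discussed above, can be illustrated with a few lines of NumPy. The sketch below superimposes two BPSK symbols with power coefficients satisfying α₁² + α₂² = 1 over an AWGN channel and lets the strong-channel user cancel the high-power signal before detecting its own. It is a toy single-antenna model, not the clustered MIMO setup analysed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a1, a2 = np.sqrt(0.25), np.sqrt(0.75)        # power coefficients, a1^2 + a2^2 = 1
s1 = rng.choice([-1.0, 1.0], n)              # user 1: good channel, low power
s2 = rng.choice([-1.0, 1.0], n)              # user 2: poor channel, high power
x = a1 * s1 + a2 * s2                        # superimposed PD-NOMA signal

snr_db = 10.0
sigma = np.sqrt(10 ** (-snr_db / 10))
y1 = x + sigma * rng.standard_normal(n)      # received at user 1 (unit channel gain)

s2_hat_at_1 = np.sign(y1)                    # detect the high-power symbol first
s1_hat = np.sign(y1 - a2 * s2_hat_at_1)      # SIC: subtract it, then detect own symbol

print("user-1 BER after SIC:", np.mean(s1_hat != s1))
```

Errors made in the first detection step propagate into the second, which is the SIC error propagation that limits PD-NOMA to two or three superimposed users in practice.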
When MIMO-NOMA is considered, the natural approach is to spatially cluster users which, from a MU-MIMO precoding point of view, behave as one virtual-user [23,24].Subsequently, the messages to each user in a cluster are separated by SIC.This requires channel state information at the transmitter (CSIT), which can be challenging to obtain, and [25] shows how to refine the quality of the CSIT.In the scenario of machine-type communications (MTC) [26], where terminals are particularly simple and energy-constrained, CSIT is even harder to attain.Moreover, user grouping is also hard given the combinatorial nature of the problem.An optimal solution to the user-selection problem to form NOMA groups (which are then differentiated in some orthogonal domain) is given in [27], but only for singleantenna BS and single-antenna users, and only for groups with two users.Obtaining CSIT with a massive MIMO base station (BS) is even more challenging, due to the sheer number of channels; a technique to mitigate intra-cluster pilot contamination has been proposed in [28]. This paper looks at the two most important MIMO-NOMA downlink setups using clusters of users: the first system operating in open-loop (analyzed in Section II) and the second in closed-loop using precoding at the BS (analyzed in Section III).Both schemes were respectively proposed by Ding et al. in [29] and [30].The former system has also been analyzed in [31] in terms of its information-theoretic achievable rates.The uncoded system requires the number of antennas at the terminals to be equal or greater than the number of antennas at the BS in order to take advantage of the null space that the extra dimensions permit, which constitutes a strong limitation on the number of clusters.Both schemes are assessed and compared in this paper not from an information-theoretic point of view, as typical in the NOMA literature [31,32], but rather via numerical simulation. A. System Model Consider a downlink multi-user open-loop MIMO transmission with M antennas at the BS and N ≥ M antennas at each user, similar to the one in [29,31], where users are grouped in M clusters of K users, multiplexed with PD-NOMA (see Fig. 
1).The BS transmits x = Ps, where P is the M × M precoding matrix, which in the open-loop system corresponds to an identity matrix, given that there is precoding at the BS, and therefore no CSIT is needed at the BS.The transmitted vector s ∈ C M ×1 is constructed as: where s m,k ∈ C is the BPSK or QAM symbol to be transmitted to the k-th user in the m-th cluster and the coefficient α 2 m,k ∈ [0, 1] defines the power allocation for the k-th user in the m-th cluster.This system can be seen as a multi-user MIMO (MU-MIMO) (also known as the broadcast channel [6]), where each cluster plays the role of an aggregated virtual-user, and later the information to each user within each cluster is distilled from the NOMA symbol detected by the cluster.The set of power coefficients is selected such that K k=1 α 2 m,k = 1 [29].In the worst case, a user within a cluster will have to decode K − 1 signals from other users with higher power allocation coefficients than its own.The signal received at the k-th user in the first cluster is: where H 1,k ∈ C N ×M is the Rayleigh flat-fading matrix from the BS to the k-th user in the first cluster and n 1,k is the unit power additive white Gaussian noise vector for k-th user in the first cluster.The noise is taken from an independent circularly symmetric complex Gaussian distribution.i.e., n 1,k ∼ CN (0, σ 2 n ) ∈ C 1×K .The channel matrix for the first user in the first cluster, is denoted as H 1,1 ∈ C N ×M .Linear detection at each terminal is made by multiplying the incoming signal (2) by the detection vector, leading to: where v 1,k H denotes the Hermitian transpose of v 1,k , and w l is an indicator vector.This relation can be expanded, knowing that at the first cluster one is interested only in the sum α where sm ∈ C is the contribution of cluster m to the s vector.The aim is to eliminate the inter-cluster interference for any i = m.The matrix Hi,k ∈ C N ×M −1 is constructed by removing the m-th column of the matrix H m,k .The problem can now be rewritten as: where must belong to a space that is orthogonal to Hi,k .Let us expand the matrix Hm,k into its SVD decomposition for the case M = N : where U i,k is a unitary matrix: and λ has the diagonal form: Note that (9) has a zero row at the bottom (even if M = N ) because after removing a column from H m,k to create Hi,k , the matrix becomes tall and thus rank-deficient.In general, there will be (M − N ) + 1 rows of zeros in the matrix of singular values.One can see that the column highlighted in (8) (which is a matrix in the general case), Ũi,k ∈ C N ×(N −M +1) , does not contribute to Hi,k since it is multiplied by the row of zeros (or a zero fat matrix in general), thus spanning a space orthogonal to Hi,k .Next, one projects the h m,ik column onto the orthogonal space using the projection matrix which is equally applied by all the users in the m-th cluster, eliminating the inter-cluster interference because (10) fulfils the requirement established in (5).Consequently, N ≥ M antennas are needed at each user, otherwise the Hi,k matrix becomes fat rather than tall and there is no orthogonal space spanned by the columns of U i,k in (8).Without loss of generality, focusing on the first cluster, the channel gains of the different users in the first cluster should be ordered such that 1,k .Note that this ordering happens within each cluster, and all clusters are statistically identical.Zero-forcing (ZF) detection is then applied at each terminal in the cluster: leading to a sum of the intended NOMA signal for that 
cluster perturbed by a noise term. For SIC detection to be possible with BPSK, the following constraint is imposed: for users 1 ≤ k ≤ K in the m-th cluster, even though it disregards fairness.One follows the rule α 2 m,k−1 = 0.5 × α 2 m,k .The rule follows the geometric progression of ratio 1/2 deprived from its first term with k = 0, the value of N k=1 (1/2) k tends to 1 as N tends to infinity, and the restriction ( 12) is naturally fulfilled.A similar strategy was proposed in the context of visible light communications (VLC) using decaying factors 0.3 and 0.4 instead of 0.5 [33]. B. Performance A two-user case PD-NOMA with null-space based MIMO is assessed with BPSK and different QAM modulation schemes in Figures 2, 3 and 4, with M = 2, N = 3, and K = 2 in all cases.Subsequently, a five-users case with BPSK is also assessed. The well-known two regimes of PD-NOMA emerge, depending on the SNR.Consider user 1 the one with the lowest power allocation (i.e., the one with larger channel gain).At low SNR, user 1 can incorrectly detect the signal with the larger power coefficient and propagate the error, incorrectly decoding its own signal.At high SNR, user 2 still has to cope with the extra degradation imposed by the interference from the signal to user 1, yielding a poorer performance.In Figures 2 and 4 user 2 clearly outperforms user 1 in the low SNR regime.As expected, when using higher modulation schemes at user 2, that user's performance is degraded.Interestingly, when users 1 and 2 respectively apply BPSK and 16-QAM, the two regimes do not appear in Fig. 3 because at low SNR the errors arising at initial detection stage in user 1 are not significant to corrupt the BPSK detection of user 1. The robustness of the system is chiefly defined by the relations between the power coefficients.In Figures 2 and 3, α 1 = 1 /4 and α 2 = 3 /4 were used to compare with the results in Fig. 1 in [29].In Fig. 4, one has α 1 = 1 /17 and α 2 = 16 /17.Comparing Fig. 2 with Fig. 1 in [29], one observes that the SER is bounded by the outage probability. For the five user case, with M = 2 and N = 2, the users are ordered such that user 1 has the best channel and user 5 the worst.In Fig. 5, one can find that users with higher power allocation coefficients have a better (lower) SER at low SNR and then worse performance at high SNR, exhibiting the same dual-regime.The α m,k were defined by the set {1, 2, 4, 8, 16}, normalized by With six users and the same power allocation rule α m,1 becomes too small, and user 1 gets a SER > 0.5 for SNR = 10 dB, showcasing the limitations of PD-NOMA. A. 
System Model Consider a scenario similar to the previous one, but now with a massive-MIMO BS with M antennas transmitting to users equipped with N antennas.The users are also grouped into L clusters, each of which with K users, all with different channel matrices, however, they all share the same spatial correlation matrix R l .In such cases one can apply the Karhunen-Loève channel decomposition [34,35], according to which the k-th user in the l-th cluster can have its channel matrix written as: where G l,k ∈ C N ×N denotes a fast fading complex Gaussian matrix, Λ l ∈ C M ×M is a diagonal matrix that contains the eigenvalues of R k and U l ∈ C M ×M is a matrix that contains the eigenvectors of R l , meaning that since that a correlation matrix is always symmetric.However, R l only has r l non-zero eigenvalues, with r l being the rank of R l .Therefore, Λ l is of the form: and thus can be reduced to a r l × r l matrix, turning G l,k a N × r l matrix and U l a r l × M matrix.Obtaining CSIT for the fast fading matrix G l,k may often be difficult.Because R l is a slowly-changing channel correlation matrix, its estimation at the BS is easier to obtain.The BS sends a precoded M × 1 NOMA vector with superimposed symbols where s l,k is the symbol for the k-th user in the l-th cluster, α l,k is the power coefficient for the k-th user in the l-th cluster. The number of effective BS antennas for each cluster is Ml = (M − r l (L − 1)) and, P l is the M × Ml precoding matrix of the l-th cluster. T is the Ml × 1 precoding vector that has a 1 in the l-th position.The k-th user in the l-th cluster therefore receives where n l,k is the noise at the k-th user in the l-th cluster. Looking at (17), P l needs to satisfy the following constraint to eliminate inter-cluster interference: Since H is always a fat matrix (and thus it always has some non-zero nullspace), then Using a P l given by ( 19), the inter-cluster interference is removed and (17) becomes Without loss of generality, looking at user k = 1 in the first cluster (l = 1) of a system with K = 2 users, (20) leads to: The information to all users is carried by the Ml × 1 vector: which imposes a limit of Ml to the number of clusters.This vector is then multiplied by the matrix G 1,1 Λ 1 2 1 U 1 P 1 whose dimensions are N × M , and whose elements will be denoted as c n, m.Disregarding noise, this can be written as: Lowest alpha, best channel Highest alpha, worst channel which again, as in the previous closed-loop model, is the intended NOMA mixture for the cluster, added to a noise term. B. Performance A system with a massive array with M = 50 antennas at the BS and terminals with N = 3, which is the same number of antennas considered at the terminals in the openloop setup.Comparing Figures 6 and 7 with Figures 2 and 3, also with two PD-NOMA users and the same modulations, one can see that the performances very similar.This is because the channel models of ( 11) and ( 16) are in fact equivalent in terms of end-to-end SNR per user.To understand why this happens one needs to revisit equations (11) and (24) and note that G 1,1 Λ 1 2 1 U 1 = H 1,1 (Karhunen-Loève decomposition) and that v m,k = P 1 = 1.Hence, both equations are in fact equivalent in terms of the ratio between the signal power and the noise power in each of these ZF schemes, when averaging over several channel realizations. IV. 
COMPARISON AND CONCLUSIONS While both systems are equivalent in terms of performance, the open-loop scheme cannot support a massive array at the BS because it is limited by the number of receive antennas that the terminals can fit, whereas in the closed-loop model an increasing number M of antennas at the BS can lead to an arbitrarily large number of clusters. However, one should note the trade-off imposed by higher-rank correlation matrices, which force a reduction in either the number of clusters or the number of effective transmit antennas per cluster. It is also worth mentioning that a system designer should not only optimize the power coefficients but also consider assigning different modulations to the users. The correlation matrix can have rank as large as r_l = N, so considering for example N = 8 antennas at the receivers and M = 128 at the BS, it is possible to support L = 15 clusters, with M̃_l = 128 − 8 × (15 − 1) = 16 effective transmit antennas per cluster. In this example, the closed-loop system can almost double the number of NOMA clusters possible in open-loop, which would be L = N = 8. Notably, with single-antenna terminals (N = 1), keeping M = 128 and the same M̃_l = 16 antennas per cluster, one could support L = 113 clusters.

Figure 2: Open-loop with two users, both using BPSK (curves labelled from the lowest power coefficient/best channel to the highest power coefficient/worst channel).
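The inter-cluster interference cancellation used in the open-loop scheme of Section II amounts to projecting each user's received signal onto the left null space of the channel matrix with the own-cluster column removed, followed by zero-forcing on the remaining effective channel. A minimal NumPy sketch of that construction for one user is given below; the matrix names follow the paper only loosely and the Rayleigh channel draw is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2, 3                     # BS antennas, user antennas (N >= M required)
m = 0                           # index of this user's cluster

# Rayleigh flat-fading channel from the BS to this user.
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

H_tilde = np.delete(H, m, axis=1)          # channel with the own-cluster column removed
U, s, Vh = np.linalg.svd(H_tilde)
U_null = U[:, H_tilde.shape[1]:]           # left singular vectors orthogonal to H_tilde

P = U_null @ U_null.conj().T               # projector that nulls inter-cluster interference
h_eff = P @ H[:, m]                        # effective channel of the own cluster

x = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # one NOMA mixture per cluster
y = P @ (H @ x)                                            # received, after projection

# Zero-forcing on the effective channel recovers only the own-cluster mixture.
x_m_hat = (h_eff.conj() @ y) / (h_eff.conj() @ h_eff)
print(np.allclose(x_m_hat, x[m]))          # True in the noiseless case
```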
4,585
0001-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Rational Design of a Novel Smart Mobile Communication System for Arabian Deaf and Dumb The deaf impairment is among the most substantial health problem worldwide, that can lead to various personal, economical, and social crisis. Therefore, it is critical for developing an efficient way to facilitate communication between deaf-dumb impaired and normal people. Herein, we have rationally designed a new digitally computerized and mobile smart system as an efficient communication tool between deaf impaired and normal Arabian people. This is based on two main steps, including creating a digital output for the hand gestures using gloves flex sensors equipped with a three-axis accelerometer that is controlled using a microcontroller. The digital results are compared to that in a words-based “database”, where Arabs use expressions not alphabet in their communication. The second step is translation or conversion the outputs of the first stage into written texts and voices. The newly developed system allows Arabian deaf to translate words of ordinary people into gestures using a speech recognition system with an impressive accuracy over 90 % without the needing for a webcam, colored gloves, and/or online translator. The presented system can be used on any android or windows. Introductions According to the world health organization, there is more than 360million deaf people around the world [1], and it is expected to increase significantly in the future, owing to the uncontrolled growth in the chronic diseases. To this end, there are at least 10 % of Arabian are deaf impaired (i.e., Saudi Arabia 23.5 % and Egypt 16 % of the population ). Various efforts are dedicating to solve this critical issue and development an efficient way to facilitate communication between deaf impaired and ordinary people [2][3][4][5][6][7]. Regarding this, most deaf and dumb people around the world are using the American Sign Language (ASL) or Spanish. [8] They should be aware of English as they use 26 signs for English alphabets to spell words from the English language. [9][10][11][12] Many articles used a webcam placed in front of an impaired person and wearing collared rings in fingers, alphabets signs are detected and translated into speech [13]. A disadvantage of these systems that it needs ones to stand in front of the camera all time, time consumption, and the person cannot manipulate the data by himself [14]. Some systems used laptop camera and capture image for the gesture made by the right hand at one feet distance from the camera and background without any other objects. The accuracy was good with different light intensity [15]. Others used video frames, detect hand shape, and use classification system [16]. Research is employed to overcome communication problems using smart gloves that depend on flexible resistance and accelerometer for sensing the tilt of hand [17][18][19]. The sign is collected to make words and display them as text or produce a voice. Wireless system for signals is done with the help of RF transceiver [20,21]. This makes the system more comfortable than others. The letter sign is sent via 30 meters wireless module to PC where data for the stored signed letter exist, compared, and then letters or word are displayed [9,22]. To overcome using cables, gestures image processing is introduced based on image technique, hand segmentation, point, and color matching with a database, conversion to text was done [23,24]. 
Other study tracks hand motion and detects its gravity center, which provides the speed of any conducting time pattern [25][26][27]. Comparing with the stored data, the sign is recognized. Some systems used Neural Network for gesture recognition. It based on using hand sign images for a neural network (NN) training [28][29][30]. NN combined with Fuzzy systems gave good accuracy in sign recognition and classification [31]. Other system extracts signs taken from videos. The feature was extracted from the image as hand contour then the corresponding alphabet or meaning was described. [32][33][34] The data gloves sensor technique is more advantages than the image-based one and became a promising tool for communication [17,35,36]. These hand gestures can play a significant role in different fields as Robotics and Automatic control [22,37]. Some projects used capacitive touch sensor, and others mostly used flexible resistance. However, the limitation of data and systems affects the efficiency and speed of the process. For Arab people, as letters are different from the English alphabet where words with the same letters have different meanings, pronunciations, and signatures. There is a crucial need to develop a new system for Arabs deaf people. Inspired by this, herein, we rationally designed a new smart mobile system for effective communication between visually impaired and healthy people. This is successfully achieved using tow min stages involving using gloves flex sensors equipped with a threeaxis accelerometer that is controlled using a microcontroller for generating digital outputs for the hand gestures followed by conversion these outputs into written texts and voices with an accuracy of 90 %. Our newly developed system is mobile, low-cost, digital, fast, and easy to handle without the needing for a webcam, colored gloves, and online translator. Moreover, the system can promptly convert the outputs into texts instead of alphabets/letters. Finally, the newly designed system can be easily integrated on any android or windows system. Material and Methods In this paper, the communication problem of deaf and dumb people who speak Arabic and cannot express themselves could be solved. The proposed system divided into two stages. First, translates gesture of deaf people into words and voice. Flexible sensors, accelerometer and mega2560 as a microcontroller based on the ATmega2560 was used to receive data and transmit them to the laptop The second stage; translate words of an ordinary person into sign language using speech library in c#.. To design the smart gloves, we have used flex sensors and accelerometer fitted on a glove as well as a wireless speech recognition system. Smart Gloves Design In this stage, gloves are used to capture the hand gesture made a person and converted into speech and text. Flexible sensors fitted on gloves, accelerometer on the wrist is used for motion capture. Bending degree of Finger gestures are calculated into voltage terms using voltage divider rule. The microcontroller is used for analog to digital conversion of data from flex sensors. Then digitized data are matched with that stored data. Block diagram of proposed work is shown on Fig.1. Flexible Resistance A flexible resistance was used to capture the person's fingers motion, as shown in Fig.2. It is a normal resistance with the ability to change with bending. It is approximately 22.5 k ohm without bending and approximately 75.6 k ohm with maximum bending. 
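The flex-sensor reading chain is a plain voltage divider followed by the microcontroller's ADC. The sketch below reproduces that arithmetic in Python using the resistance range quoted above (about 22.5 kΩ flat to 75.6 kΩ fully bent) and the 20 kΩ divider resistor the paper pairs it with; the 5 V supply and 10-bit ADC resolution are assumptions typical of an Arduino Mega, not values stated in the text.

```python
def divider_voltage(r_flex_ohm, r_fixed_ohm=20_000.0, vcc=5.0):
    """Output voltage of the flex-sensor voltage divider (fixed resistor on the output side)."""
    return vcc * r_fixed_ohm / (r_flex_ohm + r_fixed_ohm)

def adc_counts(voltage, vcc=5.0, bits=10):
    """Approximate reading of a 10-bit ADC such as the ATmega2560's."""
    return round(voltage / vcc * (2**bits - 1))

for label, r in [("flat", 22_500.0), ("fully bent", 75_600.0)]:
    v = divider_voltage(r)
    print(f"{label:>10}: {r / 1000:.1f} kohm -> {v:.2f} V -> ADC {adc_counts(v)}")
# flat: ~2.35 V (ADC ~481); fully bent: ~1.05 V (ADC ~214)
```

Any reading between those two extremes can then be mapped to a bending degree per finger before the gesture is matched against the stored signs.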
The output voltage is calculated for different Resistances, and a voltage divider with R2=20 k ohm is used. Flexible Resistances were used for the five fingers in a glove and then connected them to Arduino mega analog pins. Accelerometer The accelerometer detects the moving directions of the hand. As hand can make a specific shape, but the orientation gives different meaning, it is important to detect hand tilting. Analog output is converted into digital using Arduino Inertial measurement unit (IMU) shown in Fig.3, which was used to determine orientation degree. Wireless Connection A wireless connection was used to make the person comfortable and provides an easy movement from one place to another without any impediments due to the absence of wires. Fig. 5 shows the used prototype, while Fig. 6 shows the wireless connection. Arduino mega and processing An SQL server database was used to store signs data and process them. First Stage At first stage, the flex resistors capture motions then, a voltage divider used to convert the change in resistance into voltage. After that, the Arduino ADC is provided with the angle and motion of a person's hand captured by mpu9150; then the wireless send the data from both gloves via the laptop to be analyzed. Finally, the output is introduced in the form of image and word. The system could be simplified in the flow chart shown in Fig.7. Flexible resistance and mpu9150 circuit shown in Fig.8. The circuit shows that the flex res from f1 to f5. The analog inputs takes the values of f1, f2, f3, f4, f5 to j1 while j2 is the power source and reset. The wireless TX, RX connected to the j5. For interrupt connected to j9 and the Arduino TX, RX connected to j6. Second stage: Transferring voice into text As normal people speak, the voice is processed and translated into text, and the corresponding sign is retrieved from database.so, deaf people can read. This stage was done using the system speech library to c#. A speech recognition application will typically perform the following basic operations: 1. Initialize the speech recognizer. 2. Create a speech recognition grammar. 3. Load the grammar into the speech recognizer. 4. Register for speech recognition event notification. 5. Create a handler for the speech recognition event. After the voice recognized the picture for the meaning was appeared as shown in Fig.9, the word is displayed in both Arabic and English. A database contained all the words recognized by the program in Arabic and English words is used. The system also allows the user to add new words as a binary sign code. A notepad is shown in Fig.10 displays the code for some words. As one means that this finger is flatted and used while zero means that it does not use this one. B for back, F for forward, R for right and u for up. Results Ten persons with different hand size and different voice pitch performed sign language. They started the test for words and sentences. The accuracy was 71 to 82 % while it was 82 to 92 % for persons with big hand size. It was recognized that as gloves good fitted and suitable to hand size, the error decreased. Accuracy computation for transferring the gloves signs into voice and text was 89% on average, so as the speech recognition and translation into the sign was high accuracy. When the training increases the error decrease. Conclusion The accuracy was relatively high; the accuracy was 92.5 % with a person who used good fitted gloves. Most of the cases achieved accuracy of at least 75 %. 
With good preparation, the highest accuracy was over 85%. The results show good performance and a fast response, with the result appearing in under one second after each gesture. Increasing the dataset could improve the results further, and a hybrid of methods may enhance the performance. The system is better than other systems that depend on alphabet-by-alphabet sign conversion, as this one relies on word and sentence sign translation. Another advantage is that it is easy for a deaf and dumb user to build a personal dataset of different signs: the user makes the sign, specifies the position of each finger, and stores it in the database. The work is prepared to be used as a mobile application and could be used to improve the quality of life and the means of communication for this category of people.
Funding: This work is supported by the Qatar University Grant (IRCC179).
Authors' Contributions: The authors contributed equally to the work. Dr. Amal designed the sensors, Dr. Kamel arranged the data, and Prof. Aboubakr supervised the work.
Conflicts of Interest: There are no conflicts to declare.
Fig.1: Block diagram illustrating our newly developed system.
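To make the binary sign-code scheme described in the second stage concrete, here is a small illustrative Python sketch (our own, not the authors' code) of how a stored finger code plus an orientation letter could be matched to a word; the specific codes and words are hypothetical examples, not entries from the authors' database.

```python
# Illustrative sketch (hypothetical codes): match a glove reading to a stored sign entry.
# Each stored sign is a 5-bit finger code (1 = finger flat/used, 0 = not used)
# plus an orientation letter: B = back, F = forward, R = right, U = up.

from typing import Dict, Optional, Tuple

SignKey = Tuple[str, str]  # (finger code, orientation)

# Hypothetical sign database; a real system would load these from SQL Server / the notepad file.
SIGN_DB: Dict[SignKey, Dict[str, str]] = {
    ("11000", "U"): {"ar": "سلام", "en": "hello"},
    ("11111", "F"): {"ar": "شكرا", "en": "thank you"},
    ("10001", "R"): {"ar": "نعم",  "en": "yes"},
}

def encode_fingers(bend_fractions, threshold: float = 0.5) -> str:
    """Turn five normalized bend values into the 1/0 finger code (low bend = flat = 1)."""
    return "".join("0" if b > threshold else "1" for b in bend_fractions)

def lookup_sign(bend_fractions, orientation: str) -> Optional[Dict[str, str]]:
    """Return the Arabic/English words for a reading, or None if the sign is unknown."""
    return SIGN_DB.get((encode_fingers(bend_fractions), orientation.upper()))

if __name__ == "__main__":
    reading = [0.1, 0.2, 0.9, 0.8, 0.7]   # two fingers flat, three bent
    print(lookup_sign(reading, "u"))       # -> {'ar': 'سلام', 'en': 'hello'}
```

New words could be added by storing additional (code, orientation) keys, mirroring the paper's option of letting users register their own signs.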
2,636.2
2019-06-21T00:00:00.000
[ "Computer Science" ]
Modification of Fox-Wolfram Moments for Hadron Colliders
Collisions of composite particles impose an arbitrary boost in the longitudinal direction on a given event. This implies that the centre-of-mass frame at hadron colliders is undetermined for processes with missing energy in the final state. This motivates the modification of the Fox-Wolfram moments such that the moments for a given event are identical when viewed in the lab frame or the centre-of-mass frame of the beam. The resulting moments are invariant under rotations in the plane transverse to the beam and boosts parallel to the beam. These moments are then used to demonstrate improved signal separation in the channel where the Higgs decays to two b-quarks while being produced in association with a vector boson.
Introduction
The complexity of the final states being studied at hadron colliders motivates the use of topological variables. The topological variables being used at the Large Hadron Collider (LHC) were often developed in the context of electron-positron colliders. This is true of the Fox-Wolfram moments (FWMs). The FWMs have been applied in the context of the ATLAS and CMS experiments [1][2][3] but are not widely used. In contrast, the B-factories, such as Belle and BaBar, use the FWMs extensively to partition phase space [4,5]. In particular, the FWMs are used in the algorithms to suppress the continuum background [6]. The FWMs partition phase space in a way which is natural for B-factories but not always natural for hadron colliders. The FWMs correspond to a decomposition of the event's phase space into Fourier modes on the surface of a sphere. In the context of hadron colliders, it would be desirable for a harmonic analysis to be invariant under Lorentz boosts parallel to the direction of the beam. Incorporating and evaluating this change will be the subject of this submission.
Overview of Fox-Wolfram Moments
The FWMs are defined as [7][8][9]:

$$H_l = \frac{4\pi}{2l+1} \sum_{m=-l}^{l} \left| \sum_i \frac{|\vec{p}_i|}{\sqrt{s}}\, Y_l^m(\theta_i, \phi_i) \right|^2, \qquad (2.1)$$

where $Y_l^m$ are the spherical harmonics, $\sum_i$ is the sum over all reconstructed objects or particles, $\sqrt{s}$ is the centre-of-mass energy of the collision, and $\theta_i$, $\phi_i$ and $p_i$ are the i-th object's spherical coordinates and momentum in the event's centre-of-mass frame. The moments can be written in terms of the angular distance between each pair of final-state objects:

$$H_l = \sum_{i,j} \frac{|\vec{p}_i||\vec{p}_j|}{s}\, P_l(\cos\Omega_{ij}), \qquad (2.2)$$

using the addition formula for the spherical harmonics,

$$\sum_{m=-l}^{l} Y_l^m(\theta_i,\phi_i)\, Y_l^{m*}(\theta_j,\phi_j) = \frac{2l+1}{4\pi}\, P_l(\cos\Omega_{ij}),$$

where $P_l$ is the Legendre polynomial of order $l$ and $\Omega_{ij}$ is the angular distance between particles $i$ and $j$. In this form, the invariance under rotation is made manifest as the dependence on the arbitrary axis in equation 2.1 disappears. The moments are typically normalised to the zeroth moment; this corresponds to a uniform rescaling for electron-positron colliders but is more significant in hadron colliders. Intuitively, the Fox-Wolfram moments describe how compatible the event topology is with each of the spherical harmonics. The utility of the FWMs arises because the moments form an orthogonal basis and are invariant under the set of rotations, SO(3). The rotational symmetry reflects the symmetry of the particle collision in the event's centre-of-mass frame. In effect, this means that any rotation of a given event will not change the moment (up to and excluding detector effects). Similarly, orthogonality removes the redundancy between different moments. When applied correctly, this allows the most important features of an event to be reduced to the lower order moments, with higher order moments describing features of the event dependent on finer resolutions.
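As an illustration of equation 2.2, the following Python sketch (our own illustration, not code accompanying the paper) computes the first few Fox-Wolfram moments from a list of particle three-momenta and normalises them to the zeroth moment as described above; the toy input momenta are invented for the example.

```python
# Illustrative sketch: Fox-Wolfram moments H_l from particle three-momenta (toy data).
# H_l = sum_{i,j} |p_i||p_j|/s * P_l(cos Omega_ij), then normalised to H_0.
import numpy as np
from scipy.special import eval_legendre

def fox_wolfram_moments(momenta: np.ndarray, lmax: int = 8) -> np.ndarray:
    """momenta: (N, 3) array of px, py, pz in the chosen frame. Returns H_l/H_0 for l = 0..lmax."""
    p = np.linalg.norm(momenta, axis=1)                      # |p_i|
    unit = momenta / p[:, None]
    cos_omega = np.clip(unit @ unit.T, -1.0, 1.0)            # cosine of the angle between each pair
    weights = np.outer(p, p)                                  # |p_i||p_j|; the 1/s factor cancels in H_l/H_0
    moments = np.array([np.sum(weights * eval_legendre(l, cos_omega)) for l in range(lmax + 1)])
    return moments / moments[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_event = rng.normal(size=(6, 3)) * 30.0                # six toy "objects" (GeV)
    print(np.round(fox_wolfram_moments(toy_event, lmax=4), 3))
```

Because the moments are normalised to H_0, the 1/s factor in equation 2.2 cancels and only the relative momenta of the objects matter.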
The FWMs do not contain enough information to reconstruct the energy density because all information about phases has been removed. They do, however, allow the reconstruction of the energy-density correlation function.
Limitations of the Fox-Wolfram Moments
In hadron collider experiments, the Fox-Wolfram moments no longer describe the symmetry of the detector and are no longer orthogonal. The limitation of hadron colliders is that the centre-of-mass frame of the event cannot be accurately reconstructed for many processes, particularly processes with final states containing missing energy or in which jets are mismeasured. In e+e− colliders, the centre-of-mass energy of the collision is known and the event can always be boosted from the lab frame into the centre-of-mass frame. By contrast, hadron colliders can only determine the transverse missing energy, so for final states with large missing energy the event's four-vector sum cannot be reconstructed. The use of FWMs in this context implicitly assumes that the event is produced at rest in the lab frame. This motivates the creation of a new set of moments that, unlike the Fox-Wolfram moments, are invariant under longitudinal boosts rather than rotations in the polar angle, θ. Both sets of moments are invariant under rotations in the transverse plane. For brevity, the new moments will be referred to as the Hadron Collider Moments (HCMs). A further weakness of the FWMs is that the orthogonality condition for the spherical harmonics,

$$\int_0^{2\pi}\!\!\int_0^{\pi} Y_l^m(\theta,\phi)\, Y_{l'}^{m'*}(\theta,\phi)\, \sin\theta\, \mathrm{d}\theta\, \mathrm{d}\phi = \delta_{ll'}\delta_{mm'}, \qquad (3.1)$$

no longer holds because of the incomplete coverage of the detectors. For example, the ATLAS detector's inner tracker has a coverage in rapidity of |η| < 2.5, corresponding to θ ≈ 40° missing out of 360°. Over this reduced range, the integral in equation 3.1 no longer equals $\delta_{ll'}\delta_{mm'}$, and hence the spherical harmonics no longer form an orthogonal basis. This limitation also applies to experiments where the centre-of-mass frame can be reconstructed. This problem is identical to that faced by the Cosmic Microwave Background (CMB) mapping experiments, where the all-sky spectrum is expressed as a Fourier-Legendre series. In CMB experiments, the galactic plane must be excluded (again approximately 40° of 360°). To remedy this, a simple orthogonalisation procedure can be adopted, identical to that used by the CMB experiments [10]. This limitation is not present in the HCMs.
Symmetries of the Lab Frame in Hadron Colliders
The natural symmetry of an event in the centre-of-mass frame is SO(3), the rotations of R³. By contrast, the natural symmetry of an event in the lab frame of a hadron collider experiment is SO(2)_T × SO(1,1)_β, where SO(2)_T refers to rotations in the plane transverse to the beam and SO(1,1)_β refers to the Lorentz boosts parallel to the beam. This motivates the use of the standard coordinate system of hadron colliders, built from the transverse momentum, the azimuthal angle φ and the rapidity, where the rapidity, y, is defined as

$$y = \frac{1}{2}\ln\!\left(\frac{E + p_z}{E - p_z}\right).$$

The rapidity is often approximated by the pseudorapidity,

$$\eta = -\ln\!\left(\tan\frac{\theta}{2}\right),$$

which is valid in the limit that the masses of the particles are negligible relative to their energies. At the LHC this is valid for leptons but may not be valid for jets, which often have much larger masses. Similarly, the transverse mass, m_T, is often approximated by the transverse momentum, p_T, in this limit. This coordinate system has the advantage that the symmetries of the system are manifest, with the invariance of both ∆η and ∆φ under longitudinal boosts and rotations about the transverse plane.
It is also useful to define an invariant distance in this coordinate system: where ∆γ is the angle between the two radii in η-φ space. This contrasts with 2.2 where the distance on a great circle, Ω, is given by: Proposed Moments The FWMs are equivalent to a Fourier series on the surface of a sphere, also known as a Legendre series. Equation 4.4 describes a cone with radius ∆R. In this case, it is necessary to take a Fourier series on the radial component of a cylinder, also known as a Fourier-Bessel series. Together with γ, the polar angle of R in (y, φ) space, this forms a set of cylindrical coordinates with the metric: The new set of moments can be calculated for the new symmetry by solving the Laplacian for the metric in equation 5.1. Alternatively, the Fourier expansion can be rewritten in terms of γ and R and the moments inferred. This can be seen from the partial wave expansion: where J m is the m th Bessel function. The partial wave expansion can be used to rewrite the Fourier series as: As with the FWMs, the HCMs describes the Fourier modes of the density correlation function and not the density. Using the equation 5.3, a set of modes can be constructed which are analogous to equation 2.1: The infinite sum in this equation limits the usefulness of this form. The infinite sum can be removed by applying the addition theorem for Bessel functions, where ∆R 2 1,2 = R 2 1 + R 2 2 − 2R 1 R 2 cos γ i,j . The resulting equation has the dependence on an arbitrary axis removed and is analogous to equation 2.2: The coefficients of this expansion, k l , are defined by the boundary conditions. Using the Dirichlet boundary condition that the energy-density correlation function becomes zero outside of the detector acceptance region, the coefficients, k l , corresponds to the l th zero of J 0 . For convenience, u 0 is set to zero. The moments are normalised to the zeroth HCM in the same manner as the FWMs. The dependence on the arbitrary axes disappears and the symmetries become manifest. Using these new moments, an event can be expected to give the same value whether they are measured in the lab or centre-of-mass frame. Information about the particle identifications can be incorporated into both the FWMs and the HCMs by splitting up the summand as follows: This allows the moments to be divided into several moments consisting of each of these summands: S l = S l,lepton×lepton + S l,jets×jets + 2 × S l,leptons×jets . 2 is a Fourier-Bessel series which can also be interpreted as the Hankel transform on a discrete interval. The Hankel transform is linked to the Abel transform by the Projection-Slice theorem [12]. In this theorem, the Hankel transform is equivalent to the Abel transform view in Fourier space: The Abel transform of a function, f (r), can be written in the following form: This allows the moments to also be interpreted as examining, in Fourier space, the projection of the density correlation function in (η, φ) space along parallel lines of constant φ. Application to Associated Production of H → bb As a simple test of the utility of the HCMs relative to the FWMs, the moments are applied to the all-hadronic final state of the H → bb produced by Higgs-strahlung from a W -or Z-boson. This process was chosen because of the complexity of its final state, and it is a new channel which offers the potential to increase the explored final states of the process H → bb at the LHC. One of the main backgrounds for this signal is the production of tt and tt + jets. 
The moments are tested for their discriminating power against this background. Simulation and Monte-Carlo Generation The simulated events used to test the new moments are generated using the MadGraph [13] and Pythia8 [14] simulation framework with the ATLAS detector simulated by Delphes [15] with proton-proton collisions at √ s = 13 TeV. A filter is placed on the jets which corresponds to a 17 GeV cut on jet p T at the parton level and a 20 GeV cut was used at reconstruction level. The background sample includes up to three additional jets. The final state requires at least two jets which are tagged as coming from the decay of a b-quark. Figure 1: The distribution of feature improtance from the random forest when testing the separation of the all-hadronic final state for the signal H → bb, produced in association with a W -or Z-boson, from the backgrounds consisting of top quark pair production with additional jets. The distribution of feature importances is shown for the FWMs (blue) and the HCMs (green) with the moments arranged in decreasing importance. The feature importance in each tree is defined as the normalised reduction in node impurity brought by all splits on that feature in the tree. The overall features importance is the average feature importance across all trees. For brevity, only the first eight moments were calculated for both the FWMs and HCMs. The corresponding index for each moment is shown on the x-axis. Quantifying Separation The separation of these two processes is quantified by the integrated area under the receiver operating characteristic (ROC) curves. In addition to the ROC curves, a simple random forest [17] is constructed to obtain the separation which includes the effect of correlations. A random forest was chosen for this task because it is able to more fully explore the full phase space provided by these variables in a less biased way. At each decision node in the decision tree, the best cut from a random subset of the input variables is performed; by contrast a classic decision tree is usually deterministic. Even when boosted, a decision tree will tend to look fairly uniform and it will consistently cut on a few dominant moments in the first few decision nodes. This prevents the full phase space of the moments being fully explored. The feature importance is generated by the random forests and consists of the normalised reduction in node purity brought by all splits on that feature averaged across all trees in the random forest (with the sum of all scores normalised to one) [17]. The distribution of feature importances for the first eight moments of the FWMs and HCMs are shown in figure 1. Figure 1 shows greater separation between signal and background when using the HCMs over the FWMs for the first eight moments. The moment which demonstrates the most separation for the HCMs are shown in Figure 3 and the feature importance and ROC curves for the HCMs and FWMs are shown in 1 and 2 respectively. The HCMs, in particular the eighth moment, show consistently better performance and separation than the FWMs. This is reflected by both the random forest's ranking of feature importance and the ROC curves. Analysis The use of the FWMs and the HCMs are not mutually exclusive. Both provide information based on two different limits. The FWMs approximate the final state as being produced at rest while the HCMs assume that the boost is arbitrary. 
For the very heavy final mass states, the FWMs are expected to improve because heavier mass states are generally very close to rest in the rest frame of the two colliding protons. Conclusion The inability to determine the centre-of-mass frame of the collision in events with missing energy causes the centre-of-mass to only be determinable in the transverse plane. This causes the standard rotational symmetries implicit in the Fox-Wolfram moments to no longer hold. Hence, the same underlying event topology can lead to different Fox-Wolfram moments due to the varying longitudinal boost. To remedy this, a new set of topological moments are proposed for hadron colliders which reflect the underlying symmetry of the lab frame. The use of HCMs and FWMs is complementary with both moments representing limiting cases. The HCMs can be incorporated into analyses in the same way as the FWMs are; individual moments can be used as cuts or inputs for a multi-variable analysis. The distribution for the eighth order HCM (l = 8) for the all-hadronic final state for the signal H → bb (blue), produced in association with a W -or Z-boson, from the backgrounds consisting of top quark pair production with additional jets (red). This moment was shown to have the most separation of the HCMs (for l < 8).
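As a rough illustration of the kind of computation the proposed moments involve, the sketch below (our own illustration, not the authors' code) evaluates a Bessel-weighted pair sum over transverse momenta in (η, φ) space, taking k_l proportional to the l-th zero of J_0 as described in the text; the normalisation, the acceptance scale r_max and the toy kinematics are assumptions, and the exact conventions of the HCMs in the paper may differ.

```python
# Illustrative sketch of a Bessel-weighted pair sum in (eta, phi) space, in the spirit of the
# proposed Hadron Collider Moments; normalisation and conventions are assumptions, toy input data.
import numpy as np
from scipy.special import j0, jn_zeros

def hadron_collider_moments(pt, eta, phi, lmax=8, r_max=5.0):
    """pt, eta, phi: 1-D arrays of object kinematics. Returns moments for l = 1..lmax,
    normalised to the plain pT-weighted pair sum; r_max is an assumed acceptance scale."""
    pt, eta, phi = map(np.asarray, (pt, eta, phi))
    deta = eta[:, None] - eta[None, :]
    dphi = np.angle(np.exp(1j * (phi[:, None] - phi[None, :])))   # wrap Delta-phi into (-pi, pi]
    dr = np.hypot(deta, dphi)                                      # invariant distance Delta-R
    weights = np.outer(pt, pt)
    zeros = jn_zeros(0, lmax)                                      # k_l: l-th zero of J_0
    norm = np.sum(weights)                                         # boost/rotation invariant reference
    return np.array([np.sum(weights * j0(k * dr / r_max)) for k in zeros]) / norm

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pt = rng.uniform(20, 80, size=6)
    eta = rng.uniform(-2.5, 2.5, size=6)
    phi = rng.uniform(-np.pi, np.pi, size=6)
    print(np.round(hadron_collider_moments(pt, eta, phi, lmax=4), 3))
```

Because only Δη and Δφ enter the pair distances, the returned values are unchanged under longitudinal boosts and rotations in the transverse plane, which is the symmetry the moments are designed around.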
3,585.4
2015-08-13T00:00:00.000
[ "Physics" ]
Assessing the impact of ecotourism on livelihood of the local population living around the Campo Ma’an National Park, South Region of Cameroon 1 Department of Forestry, Faculty of Agronomy and Agricultural Sciences, University of Dschang, P. O. Box 222, Cameroon. 2 Institute of Agricultural Research for Development (IRAD), Bertoua, P. O. Box.203, Cameroon. 3 Department of Geography, Faculty of Letters and Human Sciences, University of Maroua, Cameroon. 4 Department of Geography and Planning, Faculty of Arts, University of Bamenda, P.O. Box.39, Bambili, Cameroon. INTRODUCTION Ecotourism could redress the unending dilemma of conflicting goals between conservation and livelihood sustenance in/around protected areas (PAs) (Lambi et al., 2012;Ayivor et al., 2013;Fanuel, 2014;Clements et al., 2014;Ajonina et al., 2014;Tchamba et al., 2015;Moshi, 2016;Rainforest Foundation, 2016 Sama and Molua, 2019). It is spotlighted in academic literature as a strategy for economic development within communities, based on potential economic and social benefits which it can generate, at the same time ensuring natural resource protection (Mulindwa, 2007;Sunita, 2013). Unfortunately, many ecotourism projects viewed as successful reveal insignificant change in existing resource-use practices for the local population, bringing in just modest supplement to local livelihoods (Kiss, 2004). Ecotourism impart skills of the local population, exposing them to different employment opportunities like receptionists, housekeeping, cooks, gardeners, dancers, watchmen and guides (Ogutu, 2002;Isaac and Conrad-J, 2012;Harilal and Tichaawa, 2018).There is further capital injection into the local economy from ecotourism as visitors spend money on indigenous food and souvenir shopping (Ogutu, 2002;Sunita, 2013). In Kenya, ecotourism is contributing significantly to livelihoods of the Porni's community, with over US$5300 paid yearly as revenue for community land leased, and between US$500 and 1200 as entrance fees and bed charges from tourists; and the generated revenue is used for different community livelihood initiatives like school construction, payment of hospital bills and boreholes maintenance (Ogutu, 2002). In Ghana, it has created opportunities for rural communities to earn income, through the creation of poverty alleviation ecotourism related jobs for the conservation of local ecosystem and culture (Ghana Tourism Authority, 2010). Ecotourism projects in and around National Parks (NP) have aided to establish micro credit schemes, granting 'soft loans' for establishment of local retail businesses (Nkengfack, 2011;Isaac and Conrad-J, 2012). In Cameroon, ecotourism activities around NP generate a little bit of extra income for the rural population as a non-extractive activity, but it is inadequate and not good enough as alternative livelihood, nor a real support for community development projects (Tieguhong, 2008;Nkengfack, 2011;Balgah and Nfor, 2017;Harilal and Tichaawa, 2018;Sama and Molua, 2019). As such, they have to carry on extra activities like farming or entrepreneurship to make ends meet (Nkengfack, 2011;Harilal and Tichaawa, 2018). In some communities around PAs, such as the Douala Edea Wildlife Reserve, ecotourism is not a livelihood option as they are not directly involved in it (Nkengfack, 2011;Harilal and Tichaawa, 2018). 
From the aforementioned insights, it was found that most studies carried out on the impacts of ecotourism on livelihood sustenance around PAs in Cameroon and Africa focused mainly on the role played by ecotourism towards livelihood enhancement, while neglecting the factors affecting the impact of ecotourism on the local population's livelihood. Within this context, this study sought to assess the impacts of ecotourism on the livelihood of the local population living adjacent the Campo Ma'an National Park, South Region of Cameroon, with emphasis laid on the local population's perception of the role played by ecotourism in livelihood sustenance as well as the factors affecting the impact of ecotourism on the local population's livelihood. Study area This study was carried out in nine communities located in the southern section of the Campo Ma'an National Park, South Region of Cameroon (Figure 1). These communities were selected both at the coast and further inland. Climate parameters that are annual rainfall and temperature averages are respectively 2800 mm and 25°C (Mbenoun et al., 2017). The average annual rainfall reduces as we get further away from the coast to the hinterlands; with an average annual rainfall of about 2800 mm recorded in Campo close to the coast and 1670 mm in Nyabissan in the Ma'an area located further inland (Tchouto, 2006;PNCM, 2014). Hydromorphic and ferralitic soils are the most dominant types of soil in the study area. Hydromorphic soils are trapped within river valleys and lowlands; and ferralitic soils which are yellowish or reddish in color and developed on very acidic parent rocks. The vegetation of the Campo-Ma'an National Park and its environs consist of the dense Guineo-Congolese evergreen forest. The vegetation generally present is the Biafran type rich in Caesalpinacae with more than 60 species (Plan Commune de Développement (PCD, 2012). The main existing exploitable species are: Iroko (Millitia sp.), Bubinga (Guibourtia ehie), Tali (Erythrophleum ivorense), Doussie (Afzelia bipindensis), white Doussie (Afzelia Pachyloba), Red Eyoum (Dialium bipendensis) etc. and certain non-wood forest products used in handicrafts (rattan, raffia and its derivatives, bamboo and certain vines). The rest of the vegetation around the houses is made up of plantations and food fields, fallows and fruit trees. This dense forest is inhabited by different wildlife species like Chimpanzees, gorillas, forest elephants, giant pangolins, marine turtles, forest tortoise, monkeys, duikers and many others. The drainage network of the CMNP and its environs is drained by two Sub-Basins (Ntemand Lobé) or watersheds which are the drainage of the Atlantic Ocean Basin (Ajonina et al., 2010;Anonymous, 2014;PNCM, 2014). Sampling design This study made use of three main sampling techniques: purposive sampling technique, random sampling technique and the snowball sampling technique. Purposive sampling was used to select the study area, Campo Ma'an National Park (CMNP). The CMNP was purposively chosen for two main reasons: firstly, it is the pilot site for ecotourism in Cameroon; and secondly, it has been working on a management plan that has projected ecotourism as the main strategic tool for livelihood and conservation around protected areas. 
After the CMNP was purposively chosen, reconnaissance trips were made to the area to assess the different stakeholders involved in ecotourism activities in and around the CMNP, and especially to obtain some general information on the target population (local communities) living around the park. Following the reconnaissance surveys, focus group discussions and key informant interviews were undertaken in the different communities. Snowball sampling was employed to select the participants for the focus group discussions and key informant interviews. The focus group discussions and key informant interviews permitted the researcher to identify the communities to target during the household surveys. Following the focus group discussions and key informant interviews with the different stakeholders involved in the management of ecotourism in and around the CMNP, household surveys were carried out in the different chosen communities around the CMNP. The household surveys were done using the simple random sampling technique.
Data collection procedure
Primary data were collected through household surveys, key informant interviews, focus group discussions and direct field observations. Household surveys were carried out by administering semi-structured questionnaires. A total of 124 semi-structured questionnaires (open- and close-ended questions) were administered to the local population. Respondents targeted for this survey were those involved in one of the following activities around the Park: agriculture, fishing, hunting, harvesting of NTFPs, tourism and trade. The questionnaires were structured to capture the socio-economic characteristics of the population and the impact of ecotourism on the livelihood of the local population living adjacent to the CMNP. The socio-economic parameters of the local population vital to this study were: community, age, gender, main occupation, secondary occupation, level of education, ethnic group, marital status, time spent in the community, and number of children. In each sampled community, a local informant helped to translate the questions into the local dialect and the responses into French. Key informant interviews and focus group discussions were used to collect general information on the capacity of ecotourism to improve the livelihood of the local population living adjacent to the CMNP. Information collected through key informant interviews, focus group discussions and direct field observations was mainly used to ascertain the truthfulness of responses obtained during the household surveys.
Data analysis
Information obtained through the household survey was used for descriptive and inferential statistical analyses. Data were encoded into Microsoft Excel and analysed with the Statistical Package for Social Sciences (SPSS 19.1). The chi-square test, Spearman rank correlation and binary logistic regression analyses were run. The chi-square test statistic (Equation 1) and the Spearman correlation (Equation 2) were used to determine the non-cause-effect relationship existing between socio-economic variables and the impact of ecotourism on the livelihood of the local population. For a 2 × 2 classification such as gender against perceived benefit, the chi-square test statistic is

$$X^2 = \frac{n\,(ad - bc)^2}{(a+b)(c+d)(a+c)(b+d)}, \qquad (1)$$

where a is the frequency of males benefiting from ecotourism, b is the frequency of males not benefiting from ecotourism, c is the frequency of females benefiting from ecotourism, d is the frequency of females not benefiting from ecotourism, and n = a + b + c + d is the total number of respondents.
The Spearman rank correlation coefficient (Equation 2) is computed as

$$\rho = 1 - \frac{6\sum d_i^2}{n(n^2 - 1)}, \qquad (2)$$

where n is the number of pairs of values of the variables X and Y, d_i is the difference obtained by subtracting the rank of Y_i from the rank of X_i, and $\sum d_i^2$ is the sum of the squared values of d_i. The binary logistic (BNL) regression model (Equation 3) was used to evaluate the cause-effect relationship existing between socio-economic variables and the impact of ecotourism on the livelihood of the local population:

$$\ln\!\left(\frac{\hat{Y}}{1 - \hat{Y}}\right) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k, \qquad (3)$$

where Ŷ is the predicted probability that ecotourism improves livelihood, 1 − Ŷ is the predicted probability that ecotourism does not improve livelihood, and the X's are the independent variables such as gender, age, occupation, level of education, marital status, etc.
Socio-economic characteristics of respondents
Analysis of the socio-economic data revealed that the main socio-economic attributes were: community, age, gender, main occupation, secondary occupation, level of education, ethnic group, time spent in the community, marital status, and number of children (Table 1). With respect to community, most of the respondents lived in Ebianemeyong (19%) and the fewest in Mvini (4%). With respect to age, most of the respondents were between 31 and 45 years old (42%). There was also a relatively high proportion of respondents aged above 46 years (35.5%), while respondents aged 30 years or less were the least represented in the study population (23%). Pertaining to gender, most of the respondents were male (66%); only 34% were female. With regard to main occupation, the respondents were mostly farmers (43.5%), and the least represented occupation was driving (1%). With respect to secondary occupation, the single largest group of respondents had no secondary occupation (28%); however, 72% were engaged in some secondary occupation. As for level of education, most of the respondents had either primary (38%) or no formal education (24%), with few having secondary education. In the case of ethnic group, most of the respondents were from the Mvae ethnic group (35%) and the least represented ethnic group was the Beti (7%). With regard to time spent in the community, most respondents (58%) had spent over 10 years in the community. Pertaining to marital status, most of the respondents (67%) were married, and 3% were divorced. In terms of the number of children, the largest group of respondents (13%) had 3 to 5 children.
Local population's perception of the impact of ecotourism around the CMNP on livelihood
For 65% of the riparian population of the CMNP, ecotourism activities in and around the CMNP did not contribute towards improving their livelihood, and only 35% perceived the contrary.
Non-cause-effect and cause-effect relationship between independent variables and impact of ecotourism on local population's livelihood around the CMNP
The chi-square test statistics indicated the existence of a non-cause-effect relationship between the different independent variables and the impact of ecotourism on the livelihood of the local population (Table 2). The coefficients of the Spearman rank correlation indicated the existence of direct and indirect non-cause-effect relationships between the independent variables and the perceived impact of ecotourism on the local population's livelihood (Table 3).
However, there is a statistically significant direct non-cause-effect relationship between perceived impacts of ecotourism on livelihood and independent variables like community (rho = 0.200; p<0.05); gender (rho = 0.307; p<0.05); main occupation (rho = 0.218; p<0.05); level of education (rho = 0.287; p<0.05); and ethnic group (rho = 0.263; p<0.05). Meanwhile a statistically significant indirect non-causeeffect relationship was found to exist between perceived impacts of ecotourism on livelihood and independent variables like secondary occupation (rho = -0.204; p<0.05); and number of children (rho = -0.200; p<0.05). Independent variables like age, time spent in the community and marital status had a statistically nonsignificant direct and indirect relationship with perceived impacts of ecotourism on livelihood. Cause-effect relationship between independent variables and impact of ecotourism on livelihood of the local population living around the CMNP Coefficients of the binary logistic regression model indicated the existence of a cause-effect relationship between independent variables and the perceived impact of ecotourism on livelihood (Table 4). From the coefficients of the logistic regression model, two independent variables had a statistically significant direct cause-effect relationship with perceived impact of ecotourism on livelihood. These two independent variables were gender (β = 1.218; p<0.05); and level of education (β = 0.442; p<0.05). The classification table of the logistic regression model indicated the observed and predicted statistics for the perceived impacts of ecotourism on the livelihood of the local population (Table 5). From the percentage of the perceived impacts correctly classified, it was found that upwards of 74.2% of the perceived impacts of ecotourism on the livelihood of the local population was correctly classified. This goes to show that the predictions of the model are powerful enough to be taken seriously. Local perception of ecotourism impact on livelihood The local population living around the CMNP perceived that ecotourism activities contribute very little to their livelihood. This is explained by the fact that, income from ecotourism has been paid to conservation management. All revenue goes in to the coffers of the government; in turn, the communities benefit no basic social amenities like water, schools etc. This result corroborates those of Harilal and Tichaawa (2018) who have also found the same similar result for the rural population around the Douala-Edea Wildlife Reserve. This is also supported by Das and Chatterjee (2015)'s finding, which portrays that the local communities involved in ecotourism activities benefit little or nothing. These findings could also be attributed to the fact that, ecotourism activities around national parks prohibit the local population from carrying out any activity within the park limits. Activities around the park such as farming, hunting, and collection of NTFPs are also regulated. These protection and regulation in and around the park respectively have probably resulted in an increase or stabilization in fauna species like buffaloes, elephants, chimpanzees and gorillas (Tchamba et al., 2015). Such growth in fauna population propels human-wildlife conflicts resulting from the destruction of many farmlands in the communities by wildlife. This makes the local population to see ecotourism activities as a threat to their livelihood rather than an asset. 
This could also be attributed to the fact that the management of the parks for ecotourism is based on a top-down approach. This approach relegates the local population to the background, reducing their ability to participate in ecotourism activities. The lack of participation in the ecotourism sector causes the local population to lose the benefits coming from ecotourism and to lose interest in the activity. On the contrary, Isaac and Conrad-J (2012) found that the local population sees ecotourism as significantly contributing to livelihood improvement in their different communities. In the same vein, Ogutu (2002) indicated that huge financial benefits from ecotourism are used to develop social facilities like schools and hospitals in the local community of Porini. These positive findings could be attributed to the fact that the local population is directly involved in the management of the protected area. Thus, there is a bottom-up approach to management which enables the local population to participate actively in management, thereby reaping enormous benefits from ecotourism activities.
Determinants of local population's perception of the impact of ecotourism on livelihood
Although different studies carried out in Cameroon and Africa have generally reported that ecotourism activities around protected areas impact the local population either favorably or unfavorably (Ndenecho, 2009; Kimbu, 2010; Nkengfack, 2011; Isaac and Conrad, 2012; Conrad-J et al., 2013; Sunita, 2013; Kimengsi, 2014; Cheung, 2015; Harilal and Tichaawa, 2018), little has been done to examine in an in-depth and holistic manner the factors influencing the impacts of ecotourism on the local population living around protected areas in general and national parks in particular. This study, through the use of appropriate inferential statistics, is one of the first to unearth the factors influencing the impacts of ecotourism on the local population living around the CMNP, South Region of Cameroon. From the coefficients of the logistic regression model, only two independent variables (gender and level of education) show a statistically significant relationship with the impacts of ecotourism on the livelihood of the local population. These two variables had a statistically significant direct cause-effect relationship with the impact of ecotourism on the local population's livelihood. For gender, the finding indicates that males benefit more from ecotourism activities than females. This result is similar to that of Kimengsi et al. (2019), who acknowledge that men participate more in ecotourism activities than women within the Western Highland region of Cameroon. This is in contrast with the findings of Tran and Walter (2014), who noticed that there was almost parity in the level of involvement in ecotourism activities based on gender. The latter explain that there was a more equitable division of labor, resulting in self-confidence and new leadership roles performed by women. Irandu and Shah (2014) have found that it is necessary for ecotourism packages to be more inclusive of both women and men, owing to the high positive impacts of women's involvement, such as formal and informal employment, economic independence, and decision-making.
In the CMNP, the main ecotourism activities for the local population were; tour guides, porter, clearing of tracks to touristic attractions in the forest, and gorilla tracking. These are mainly activities which are tedious, requiring energy and courage to engage in them especially in forest communities. This most likely explains the low rate of women's participation in ecotourism activities in the area. In coastal communities like Campo beach and Ebodje where the main site visited is the beach, the females are more involved in ecotourism activities than the other communities. In Ebodje, some women gain extra income when a tourist visits their community as they are used as cleaners and receptionists in the eco-lodges and in some rare cases as cooks. These ladies are often paid less than men as their activities are considered less strenuous and less risky. For level of education, the finding indicates that as level of education increases, the benefits obtained from ecotourism activities equally increase and vice versa. This could be attributed to the fact that more educated persons have the possibility of serving as interpreters as well as tour guides, making them to have more benefits from ecotourism activities than their less educated counterparts. The result is in line with the findings of Kimengsi et al. (2019), who have demonstrated that, high level of education provides multiple choices to community members to participate in high valued and high income ecotourism activities, enhancing livelihood survival practices of the person with such human capital. Liu et al. (2020) acknowledge that, those with low level of education, lack the technical skills to improve on their livelihood strategies in rural areas of Mongolia. Tran et al. (2018) report that, education plays a pivotal role in households' livelihoods choices, with income inequality reflected in level of educational attainment in the North west region of Vietnam. In Central Nepal, educational level of different households' head significantly impacted rural households positively in implementing diverse subsistence strategies with ecotourism inclusive (Khatiwada et al., 2017). In the CMNP, 63% of the population has ended up in the primary school. Implicitly, the benefit of the local population for livelihood sustenance is minimal. This also depicts the inability of the community to actually develop and manage ecotourism projects successfully. For these communities to actually benefit from ecotourism, there is the need for partnership with public and private enterprises, capable of developing veritable ecotourism ventures, that can provide employment to many persons in the communities based on their levels of educational attainment. Thus, the impact of ecotourism on the livelihood of the local population living around the CMNP is influenced by a plethora of factors with the two most important being gender and level of education. Conclusion To conclude, the local population living around the Campo Ma'an National Park, South Region of Cameroon perceived that ecotourism contributes little to their livelihood. Several factors affect the impact of ecotourism on the livelihood of the local population. The factors plausibly affecting the impact of ecotourism on the local population's livelihood were community, gender, main and secondary occupations, level of education, ethnic group, and number of children. 
The factors having a direct causal effect on the impact of ecotourism on the local population's livelihood were gender and level of education. Policy makers therefore need to factor in these variables when formulating policies geared towards enhancing the benefits of ecotourism for the livelihood of the local population living around the CMNP. For ecotourism to work as a livelihood option in the CMNP, policies should take into consideration gender-sensitive activities that can reduce the disequilibrium in ecotourism employment, as well as encourage genuine public-private partnerships.
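As an illustration of the binary logistic model in Equation 3, the short Python sketch below uses entirely hypothetical survey data (not the CMNP data set) to fit the probability that a respondent perceives a livelihood benefit from ecotourism as a function of gender and level of education, the two significant predictors reported above.

```python
# Illustrative sketch of Equation 3 with hypothetical data (not the CMNP survey data):
# logit(P(benefit)) = b0 + b1*gender + b2*education.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 124                                             # same sample size as the study, but data are simulated
gender = rng.integers(0, 2, n)                      # 0 = female, 1 = male
education = rng.integers(0, 4, n)                   # 0 = none, 1 = primary, 2 = secondary, 3 = higher

# Simulate perceived benefit with assumed positive effects of gender and education (illustration only).
logit = -2.0 + 1.2 * gender + 0.45 * education
benefit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([gender, education]))
model = sm.Logit(benefit, X).fit(disp=False)
print(model.params)          # fitted b0, b1 (gender), b2 (education)
print(model.pred_table())    # classification table analogous to Table 5
```

The simulated effect sizes are chosen only to mirror the sign and rough magnitude of the reported coefficients (β ≈ 1.2 for gender, β ≈ 0.44 for education), so the fitted model reproduces the qualitative pattern that male and more educated respondents are more likely to report a benefit.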
5,016.2
2020-05-31T00:00:00.000
[ "Economics" ]
Bayesian Orthogonal Least Squares (BOLS) algorithm for reverse engineering of gene regulatory networks Background A reverse engineering of gene regulatory network with large number of genes and limited number of experimental data points is a computationally challenging task. In particular, reverse engineering using linear systems is an underdetermined and ill conditioned problem, i.e. the amount of microarray data is limited and the solution is very sensitive to noise in the data. Therefore, the reverse engineering of gene regulatory networks with large number of genes and limited number of data points requires rigorous optimization algorithm. Results This study presents a novel algorithm for reverse engineering with linear systems. The proposed algorithm is a combination of the orthogonal least squares, second order derivative for network pruning, and Bayesian model comparison. In this study, the entire network is decomposed into a set of small networks that are defined as unit networks. The algorithm provides each unit network with P(D|Hi), which is used as confidence level. The unit network with higher P(D|Hi) has a higher confidence such that the unit network is correctly elucidated. Thus, the proposed algorithm is able to locate true positive interactions using P(D|Hi), which is a unique property of the proposed algorithm. The algorithm is evaluated with synthetic and Saccharomyces cerevisiae expression data using the dynamic Bayesian network. With synthetic data, it is shown that the performance of the algorithm depends on the number of genes, noise level, and the number of data points. With Yeast expression data, it is shown that there is remarkable number of known physical or genetic events among all interactions elucidated by the proposed algorithm. The performance of the algorithm is compared with Sparse Bayesian Learning algorithm using both synthetic and Saccharomyces cerevisiae expression data sets. The comparison experiments show that the algorithm produces sparser solutions with less false positives than Sparse Bayesian Learning algorithm. Conclusion From our evaluation experiments, we draw the conclusion as follows: 1) Simulation results show that the algorithm can be used to elucidate gene regulatory networks using limited number of experimental data points. 2) Simulation results also show that the algorithm is able to handle the problem with noisy data. 3) The experiment with Yeast expression data shows that the proposed algorithm reliably elucidates known physical or genetic events. 4) The comparison experiments show that the algorithm more efficiently performs than Sparse Bayesian Learning algorithm with noisy and limited number of data. Background High-throughput technologies such as DNA microarrays provide the opportunity to elucidate the underlying complex cellular networks. There are now many genome-wide expression data sets available. As an initial step, several computational clustering analyses have been applied to expression data sets to find sets of co-expressed and potentially co-regulated genes [1][2][3][4][5]. As a next step, there have been efforts to elucidate gene regulatory networks (GRN) embedded in complex biological systems. 
A growing number of methods for reverse engineering of GRN have been reported as follows: Boolean networks [6,7], Bayesian networks [8][9][10], Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNe) [11], linear models [12], neural networks [13], methods using ordinary differential equations [14,15], a sparse graphical Gaussian model [16], a method using a genetic algorithm [17], a method using Sparse Bayesian Learning (SBL) algorithm [18,19], and etc. In the reverse engineering of GRN, essential tasks are developing and comparing alternative GRN models to account for the data that are collected (Figure 1). There are two levels of inference involved in the task of data modeling process [20]. The first level of inference is fitting one of models to the data with an assumption that our chosen model is true. The second level of inference is the task of model comparison. It is desired to compare the alternative models with the help given by the data, and give some level of preference to the alternative models. Thus, the reverse engineering method should be used as a framework for fitting several different GRN models to the data to compare the models. For instance, there are several GRN modeling studies in which the reverse engineering algorithm could be applied: 1) system of ordinary differential equation (ODE) [15], 2) Dynamic Bayesian networks based methods (DBN) [10,18], 3) a linear stochastic differential equation for a transcriptional regulatory network [14], and etc. It is noted that these three models can be represented by linear systems. By a "linear system", we mean a system represented as a linear equation in matrix form such as Eq. 2 in methods section. Microarrays have been used to measure genome-wide expression patterns during the cell cycle of different eukaryotic and prokaryotic cells. The review paper of Cooper and Shedden [21] presents various published microarray data sets, which have been interpreted as showing that a large number of genes are expressed in a cell-cycle-dependent manner. From this point of view, it is assumed that the underlying GRN is dependent upon cellcyclic time. Therefore, this study uses the DBN because it can represent how the expression levels evolve over time. In this paper, we address two main challenges in reverse engineering with linear systems and present a novel algorithm to overcome these difficulties. Firstly, reverse engineering of GRN will be computationally less challenging task if significantly large amount of experimental data is available. However, this is limited due to the expensive cost of microarray experiments. This problem makes the reverse engineering of GRN to be underdetermined, which means that there is substantially greater number of genes than the number of measurements. Secondly, reverse engineering of GRN with linear systems is ill conditioned because small relative changes in design matrix E in Eq. 2 due to the noise make substantially large changes in the solution. Therefore, the reverse engineering algorithm named as Bayesian orthogonal least squares (BOLS) is developed to overcome these difficulties. The BOLS method is created by combining three techniques: 1) Orthogonal Least Squares method (OLS) [22], 2) second order derivative for network pruning [23], and 3) Bayesian model comparison [20]. We evaluate the BOLS method by inferring GRN from both synthetic and Yeast expression data. 
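To make the linear-system formulation concrete, the following Python sketch is our own simplified illustration, not the authors' Matlab implementation: it builds a DBN-style design matrix from a time series and selects regulators for one target gene by greedy forward least squares with a BIC-like score, standing in for the OLS, network-pruning and Bayesian model comparison steps described above; the variable names and the scoring rule are assumptions.

```python
# Simplified illustration (not the authors' BOLS code): unit-network inference for one target gene
# from a time series X (T x K), using the DBN-style linear system Y_i ~ E * w, where E holds
# expression at time t and Y_i the target's expression at t+1. Greedy forward selection with a
# BIC-like score stands in for the OLS + pruning + Bayesian model comparison of the paper.
import numpy as np

def fit_unit_network(X: np.ndarray, target: int, max_regulators: int = 4):
    E, y = X[:-1, :], X[1:, target]          # design matrix and time-shifted target
    n, K = E.shape
    selected, best_score = [], np.inf
    while len(selected) < max_regulators:
        best_candidate = None
        for j in range(K):
            if j in selected:
                continue
            cols = selected + [j]
            w, res, *_ = np.linalg.lstsq(E[:, cols], y, rcond=None)
            rss = res[0] if res.size else np.sum((y - E[:, cols] @ w) ** 2)
            score = n * np.log(rss / n + 1e-12) + len(cols) * np.log(n)   # BIC-like penalty
            if score < best_score:
                best_score, best_candidate = score, j
        if best_candidate is None:           # no candidate improves the penalised fit: stop
            break
        selected.append(best_candidate)
    return selected, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K = 20, 50                            # few time points, many genes (underdetermined regime)
    X = rng.normal(size=(T, K))
    X[1:, 0] = 0.8 * X[:-1, 3] - 0.6 * X[:-1, 7] + 0.05 * rng.normal(size=T - 1)
    print(fit_unit_network(X, target=0))     # ideally recovers regulators [3, 7]
```

In the paper the model comparison is carried out with the Bayesian evidence P(D|H_i), which includes an Occam factor, rather than a BIC penalty; the sketch only mirrors the overall fit-then-prune-then-compare structure of a unit network.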
We provide a performance comparison between BOLS and one of the state-of-the-art reverse engineering methods, the SBL algorithm [24]. The SBL algorithm has recently been used in GRN studies with linear systems [18,19]. For the evaluation with Yeast expression data, we validate the inferred GRN using information from a database that contains large data sets of known biological interactions.
Figure 1: The Bayesian Orthogonal Least Squares algorithm could be used as a framework for gene regulatory study, including the collection and modeling of data.
Results and discussion
Case study 1: In silico experiment
Our in silico experiment follows the methodology for the generation of synthetic expression data sets for DBN systems used in Rogers and Girolami [19]. We generate synthetic networks using a power-law distribution. To create a network structure, we decompose the entire network into a set of small networks that are defined as unit networks and proceed one unit network at a time. Figure 2 presents a unit network consisting of a target gene and a list of genes as regulators. It should be noted that there is no requirement for the network to be acyclic. All created unit networks are combined to create a whole GRN. The combination of all (or selected) unit networks is a straightforward process based on the definition of a graph (see Methods section). This unit network approach is similar to the approach adopted in Bayesian network based methods [9,25] and the SBL based method [19]. For each target gene, we sample the number of genes (m_i) regulating this target gene from the approximate power-law distribution

$$P(m_i = m) = \frac{m^{-\eta}}{Z},$$

where the normalization constant is given by

$$Z = \sum_{m = m_{\min}}^{m_{\max}} m^{-\eta}.$$

Following Rogers and Girolami [19], Wagner [26], and Rice et al. [27], the constant η is set to 2.5. m_max (or m_min) is the maximum (or minimum) number of allowed regulator genes in a unit network, respectively. The condition m_max << K (the number of genes) ensures the sparseness of the synthetic network. Note that m_max is set to 4 for all experiments. More details on creating the synthetic networks can be found in the supplementary information of Rogers and Girolami's study [19]. In this study, the Matlab code from Rogers and Girolami's study for creating synthetic networks is used, which is available from [28]. We first randomly generate the synthetic networks unit network by unit network and combine them into a whole GRN, and then we generate the synthetic expression data with the randomly generated synthetic GRN. Using these synthetic expression data, we infer the network with BOLS and the simulated data set. It should be noted that BOLS and the generation method of the synthetic networks do not share information, because the generation and inference of networks are completely separate processes. In this experiment, the synthetic expression data are generated based on the DBN using Eq. 1, in which the expression data evolve over time. However, as the simulation process continues over time, the expression data diverge by constantly either increasing or decreasing. Thus, we collect a single time point only after the expression data have been simulated for a certain period of time (from t = 0 to t = T) to avoid expression levels becoming too high. We proceed one time point at a time.
For a single time point, the generation of expression data is started with initial synthetic data. For each gene, the initial condition is assigned with random number between 0 and 1. With given initial condition, we simulate the expression data for each gene i from t = 0 to t = T using Eq. 1. We take the measure with t = T-1 for design matrix E and with t = T for Y i in Eq. 2. We repeat these process N times to collect N data points and apply the reverse engineering algorithms to reconstruct the synthetic network. The schematic of unit network Figure 2 The schematic of unit network. (a) Input unit network consisting of target gene Y i and all other genes as regulator candidates. (b) Output unit network consisting of target gene Y i and its most probable regulators. We make the performance comparison between BOLS and SBL methods to show the efficiency of BOLS method. The SBL method is one of the state-of-the-art algorithms, which has been recently applied to GRN studies with linear systems [18,19]. We use the Matlab code of SBL algorithm that is available from [29]. At first, we investigate the effect of the number of data points (N = 20, 40, 60, 80, 100) and noise (ε = 0.01, 0.05, 0.1) on the performance using synthetic GRN and expression data with fixed number of genes (K = 100). Sensitivity and complementary specificity are used as measures for the performance of the algorithm [9,19,27]. We compute the sensitivity = TP/(TP + FN) and the complementary specificity = FP/(TN + FP), where TP is the number of true positive, FN is the number of false negative, FP is the number of false positive, and TN is the number of true negative. In other words, the sensitivity is the proportion of recovered true positive interactions and the complementary specificity is the proportion of false positive interactions. To investigate the variability of test results, we have run both algorithms 20 times with same control parameters, i.e. the number of data points, noise, and etc. For each run, new random synthetic network has been created. We find that the sensitivity and the complementary specificity are constant over 20 experiments with small variability (see Table 1). The systematic effect of noise and the number of data points on the performance can be analyzed from Table 1. As the number of data points increases with the fixed number of genes and noise level ε, the performance of both algorithms increase. As noise level ε increases, the performance of both algorithms decreases. For the number of data points ≥ 80, the sensitivity of SBL is slightly greater than BOLS and the complementary specificity of SBL is significantly greater than the ones of BOLS. It means that BOLS algorithm produces significantly smaller proportions of false positive interactions than SBL algorithm. It should be noted that results from SBL algorithm with N = 20, 40, and 60 are not available because the Matlab code of SBL algorithm dose not run when the number of data points N is relatively low. SBL algorithm includes the Cholesky factorization of their Hessian matrix that is required to be positive definite. Note that the Hessian matrix is consisted of design matrix and hyperparameters [24]. When the number of data points is relatively smaller than the number of genes in the data set, this Hessian matrix becomes non positive definite. Our experiments show that SBL algorithm is not suitable for the reverse engineering with limited number of data points and it doesn't even run with significantly limited number of data points. 
On the other hand, BOLS algorithm produces relatively small proportion of FP interactions with limited number of data points. It should be noted that Rogers and Girolami [19] generate 2R expression levels for each gene in each knock-out experiment-R in the normal (wild type) system and R in the knock-out (mutant) system. In a network of K genes, in which each is knocked out individually, they have 2RK data points. Since the evaluation of BOLS with Rogers and Girolami's knockout approach [19] is beyond the scope of the objectives of our study, i.e. the underdetermined problem using DBN model, we provide the comparisons between BOLS and SBL based on their approach as Additional file 1. We use the receiver operating characteristic (ROC) analysis [19] to characterize the trade-off between the proportions of true and false positive interactions with limited number of data points (N < 20 and K = 100) in Figure 3. This ROC analysis shows that BOLS algorithm produces the solution with extremely small proportion of false positive interactions when the number of data points is extremely small. We also analyze the effect of the number of gene K with limited number of data points N = 10. In Figure 4, it is shown that the performance of BOLS algorithm decreases as K increases from 100 to 300. Therefore, it can be concluded that performance of the proposed algorithm is dependent on K, N, and ε. From the experiments, it is shown that BOLS produce solutions with significantly low complementary specificity regardless of K, N, and ε, because Bayesian model selection scheme is efficiently enough to discover the optimal solution. Since we do not have any information on noise in the data, we completely over-fit the data to DBN model using OLS as a first step. Then, we remove the unnecessary inferred parameters (the inferred parameters that are related with "noise") to obtain the optimal solution by a trade-off between minimizing the natural complexity of inferred GRN and minimizing the data misfit. As the complexity of inferred GRN decreases with network pruning process, we use the Bayesian model selection to select the most optimal solution. It should be noted that the Bayesian model selection includes Occam's factor, which automatically suppresses the tendency to discover spurious structure in data. Thus, we can say that BOLS is efficient to infer GRN with significant small portion of FP interactions with the noisy and limited number of data set for DBN. In Figure 5, we present an example showing that the performance of BOLS increases as the network pruning step proceeds. We first generate the synthetic networks of 50 genes (N) and then simulate 20 data points (K) with this networks and noise level ε = 0.1. We concentrate on an output unit-network with highest evidence value logP(D|H i ) for evaluation. It should be reminded that as the network pruning continues the number of inferred interactions in the unit network decreases. Figure 5b shows that the number of errors (FP+FN) decreases, as the network pruning proceeds. The complementary specificity also decreases along the network pruning ( Figure 5c). On the other hand, the sensitivity remains constant, which is equal to 1 (Figure 5d). It means that the over-fitted solu-tions after OLS step contain only TP and FP interactions (no FN interactions). It is observed that the number of errors and the complementary specificity converge at 0 as the network pruning process proceeds, in which the unit network has the highest evidence logP(D|H i ) ( Figure 5a). 
Therefore, it can be concluded that the OLS method can cope with underdetermined problems using noisy data, provided that the method is combined with the network pruning process and Bayesian model selection techniques. The BOLS algorithm should be run K times, producing K unit networks, which are combined to build the whole network. Each unit network is assigned with P(D|H_i). For each unit network, we compute the number of errors = FN + FP. The relationship between P(D|H_i) and the number of errors for each unit network is shown in Figure 6. The number of errors decreases as the number of data points increases on the synthetic data. Unit networks with higher P(D|H_i)s are more accurate than those with lower P(D|H_i)s. This signifies that unit networks with higher P(D|H_i)s can be trusted with higher confidence to be correctly reconstructed. Thus, when low numbers of data points and extremely high numbers of genes are given, the algorithm should be able to recover a partially correct network from the unit networks having relatively high P(D|H_i)s. It should be noted that the BOLS algorithm does not provide confidence levels among the interactions inside a unit network. However, it can be noticed from Figure 6 that many unit networks with relatively high evidence values have zero incorrectly inferred interactions. Therefore, the evidence values for unit networks are effective for locating unit networks without FP or FN interactions.

Case study 2: Application of BOLS to Saccharomyces cerevisiae data

To evaluate our algorithm for reverse engineering of GRN, we use the microarray data from Spellman et al. [30], in which they created three different data sets using three different synchronization techniques. We focus our analysis on the data set produced by the α-factor arrest method. Here we concentrate our study on the expression data set of 20 genes known to be involved in cell cycle regulation [31]. Thus, we have a data set with 20 genes and 17 data points in this experiment. The output unit networks obtained from this data set are presented in Table 2, including the P(D|H_i) of each unit network; there are 107 interactions inferred by BOLS. Among those interactions, 45 of them are identified as physical or genetic interactions from the BioGRID database [32]. Later in this section, we provide the logical basis for why some of the unidentified interactions might be possible physical or genetic events. BioGRID is a freely accessible database including physical and genetic interactions for Saccharomyces cerevisiae, available at [33]. The output in Table 2 shows that the interactions with higher P(D|H_i)s have a higher likelihood of being known physical or genetic interactions identified from the BioGRID than the interactions with lower P(D|H_i)s: 1) among the elucidated interactions with P(D|H_i) > the 66th percentile of all P(D|H_i)s, 23 are identified as physical or genetic interactions from the BioGRID database; 2) among the elucidated interactions with the 66th percentile > P(D|H_i) > the 33rd percentile, 14 are identified as physical or genetic interactions; 3) among the elucidated interactions with P(D|H_i) < the 33rd percentile, 8 are identified as physical or genetic interactions. This can be explained from the previous simulation experiment showing that BOLS has fewer false positive interactions with relatively high P(D|H_i) than with low P(D|H_i).
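A minimal sketch of this percentile grouping is shown below; the data layout (a list of unit networks, each carrying a log-evidence value and interactions flagged as confirmed or not in BioGRID) is an assumed representation for illustration only.

```python
import numpy as np

def confirmed_counts_by_evidence_tercile(unit_networks):
    """unit_networks: list of (log_evidence, interactions), where interactions is a
    list of (regulator, target, confirmed) tuples, with confirmed True if the pair
    is recorded in BioGRID.  Returns confirmed-interaction counts per evidence tercile."""
    evidences = np.array([ev for ev, _ in unit_networks])
    p33, p66 = np.percentile(evidences, [33, 66])
    counts = {"top": 0, "middle": 0, "bottom": 0}
    for ev, interactions in unit_networks:
        bucket = "top" if ev > p66 else ("bottom" if ev < p33 else "middle")
        counts[bucket] += sum(1 for _, _, confirmed in interactions if confirmed)
    return counts
```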
It is noted that the overall values of P(D|H_i) in Table 2 are relatively small compared to the ones in Figure 6. It should be noted that the number of genes is 20 in Table 2, whereas it is 100 in Figure 6. From Figure 6, it is also noted that the overall values of P(D|H_i) decrease as the number of data points decreases. It is also found that the overall values of P(D|H_i) decrease as the noise level increases. Therefore, the overall values of P(D|H_i) can be relatively different depending on the number of genes, the noise level, and the number of data points. We pool both physical and genetic interactions from BioGRID to validate the output interactions in this experiment. The rationale for this pooling can be described as follows. Several proteins join together to form a multi-protein complex having certain functions or regulating other proteins. For example, the SCF complex consists of Skp, Cullin, and F-box proteins, and promotes the G1-S transition by targeting G1 cyclins and the Cln-Cdk inhibitor Sic1 for degradation. From Breeden's [34] review study, it is known that cell cycle regulated complexes have at least one sub-unit that is regulated at the transcript level. This means that certain protein complexes might regulate other complexes with a time delay. With only expression data sets, we only have information about which proteins are present at a certain time. In terms of efficiency and logical order, it is assumed that the cell only makes proteins when they are needed. If the proteins are made all the time, the cell could be inefficient in an environment without the substrates of the protein [21]. From these points of view, there are two possible cases to be considered when two certain proteins are present: 1) the two proteins form a multi-protein complex by interacting with each other; 2) one protein might form a complex with some other proteins, and this complex might regulate the other protein, which could also form a protein complex. Therefore, those two types of interactions can be pooled together to validate the output interactions. For example, the unit network of Cdc28 includes regulators identified as having known interactions from the BioGRID, such as Swi5 [53] and Swi6 [53,72]. Cdc28 is a catalytic subunit of the main cell cycle cyclin-dependent kinase (CDK), which alternatively associates with G1 cyclins (Cln1, Cln2, and Cln3) and G2/M cyclins (Clb1, Clb2, and Clb4) that direct the CDK to specific substrates. Hct1 (Cdh1) and Cdc20 are cell cycle regulated activators of the anaphase-promoting complex/cyclosome (APC/C), which direct ubiquitination of mitotic cyclins. One of Cdc28's Gene Ontology definitions gives further evidence that Cdc28 might have associations with Hct1 and Cdc20, because Cdc28 is involved in the progression from G1 phase to S phase of the mitotic cell cycle. Sic1 is an inhibitor of Cdc28-Clb kinase complexes that controls the G1/S phase transition, preventing premature S phase and ensuring genomic integrity. Among the list of genes inside the Cdc28 unit network, there are several genes not identified as having physical or genetic interactions with Cdc28 from the BioGRID: Clb6, Mbp1, Mcm1, Cdc34, Cdc53 and Skp1. However, the definitions of these genes from the BioGRID give enough evidence that some of them interact indirectly with Cdc28. For example, Clb6 is a B-type cyclin involved in DNA replication during S phase, which activates Cdc28 to promote initiation of DNA synthesis. Clb6 also has a role in the formation of mitotic spindles along with Clb3 and Clb4. Thus, Clb6 indirectly regulates Cdc28 along with Clb4.
Cdc53, Cdc34 and Skp1 also indirectly regulate Cdc28 through Sic1. Together with an F-box protein, they form SCF complexes, whose structural protein is called cullin. The SCF promotes the G1-S transition by targeting G1 cyclins and the Cln-Cdk inhibitor Sic1 for degradation. For example, Mbp1 and Clb5 have an indirect interaction. Mbp1 is a transcription factor involved in the regulation of cell cycle progression from G1 to S phase, which forms a complex with Swi6 that binds to the MluI cell cycle box regulatory element in promoters of DNA synthesis genes. It is already identified that Swi6 regulates Clb5. Thus, Mbp1 indirectly regulates Clb5 through Swi6.

Figure 5. The changes of performance of BOLS as the network pruning step proceeds. The simulation experiment is done with N = 50, K = 20, and ε = 0.
Figure 6. The relationship between the evidence value P(D|H_i) and the number of errors for unit networks, where K = 100, ε = 1.
Table 2 note: (X) indicates that the regulator gene is identified as having known interactions with its target gene from the BioGRID database, which is a freely accessible database of physical and genetic interactions available at [33].

We also evaluate both BOLS and SBL using the same data set as in Table 2 to compare the efficiency of BOLS with SBL. For direct comparison, we plot the GRNs inferred by BOLS and SBL (Figure 7). There are 107 inferred interactions (45 of them identified as known interactions from the database) by BOLS (Figure 7a) and 116 interactions (38 of them known interactions) by SBL (Figure 7b). Figure 7c shows that 62 interactions are inferred by both BOLS and SBL, of which 32 are known interactions. Figure 7d shows 45 interactions inferred only by BOLS, of which 13 are identified as known interactions from the database. On the other hand, Figure 7e shows 54 interactions inferred only by SBL, of which only 6 are known interactions from the database. It is reasonable to assume that complete information on the biological interactions of Yeast is not yet available from the database. It is believed that further depositions of information concerning Yeast interactions are still on the way towards a more complete understanding of the underlying complex cellular networks of Yeast. Based on the currently available information from the database, we can say that the SBL algorithm infers a GRN with relatively more complexity and fewer identified known interactions than BOLS. Therefore, based on our evaluation experiments with synthetic and Yeast expression data, it is sufficient to conclude that SBL produces more over-fitting solutions (i.e. more FP solutions) than BOLS.

Conclusion

In the evaluation of BOLS using synthetic data, it is shown that the proposed BOLS algorithm is able to reconstruct networks using a very limited number of experimental samples. In this study, we assume that there is a significantly limited number of data points and that the noise level in the data is not known. This is a common situation in expression data analysis. To handle these difficulties, we adopt a decomposition strategy to break down the entire network into a set of unit networks. This decomposition turns the inference of a whole GRN into several separate regressions.
Figure 7. The inferred GRNs by both BOLS and SBL are compared with the same expression data used in Table 2.

Thus, if we have an extremely small number of data points, our method cannot provide 100% correct solutions, but it provides each unit network with P(D|H_i), which can be used as a confidence level. The unit network with a higher P(D|H_i) has a higher confidence of being correctly inferred (Figure 5). Previously, Basso et al. [11] validated their ARACNe algorithm using a 19-node synthetic network. With a sample size of 350, the sensitivity and complementary specificity are approximately 80% and 20%, respectively. The inferred interactions from their method contain approximately 20% false positive and false negative interactions, respectively. With our method, it is possible to locate false positive or false negative interactions with the P(D|H_i)s, which is a unique property of the BOLS algorithm. Our in silico experiment shows that the performance depends on the number of data points, the number of genes, and the noise level. Further study will be required to investigate the relationship between these parameters and the performance so that the BOLS algorithm can be generally applied to microarray data with any number of genes, data points, and noise level. Another evaluation is conducted with the Yeast Saccharomyces cerevisiae data of 20 genes, which are involved in cell-cycle regulation. In the output network, there is a noticeable number of interactions that are identified as physical or genetic interactions from the literature. There are also several interactions that are not identified from the literature. However, it is shown that the definitions of these genes from the BioGRID database give enough evidence that some of them have indirect interactions. Thus, this experiment shows that the BOLS algorithm is able to elucidate a remarkable number of known physical or genetic events. For both evaluation experiments with synthetic and Yeast expression data, we compare the performance between the BOLS and SBL algorithms. The SBL algorithm [18,19,24] is a general Bayesian framework to obtain sparse solutions utilizing linear models. This method is known as a type-II maximum likelihood method [24], in which the solutions are obtained by maximizing the marginal likelihood. On the other hand, BOLS utilizes Bayesian model selection, which is an extension of maximum likelihood model selection, in which the posterior is obtained by multiplying the best-fit likelihood by the Occam's factor. From both of our evaluation experiments, it is concluded that BOLS produces sparser solutions with fewer FPs than SBL does.

Gene regulatory network model and Unit networks

To study GRN, we choose a system of DBN as our GRN model. This model is described by

e_i(t+1) = Σ_{j=1}^{K} w_ij e_j(t) + ξ_i(t), i = 1, ..., K. (1)

Here N is the number of data points, K is the number of genes in the data, and ξ_i(t) is a noise at any time. The e_i(t)s are the levels of mRNAs at any given time t, which influence the expression levels of the genes. The value w_ij describes the interaction strength between the j-th gene and the i-th gene. This model has the first order Markov property, which means that the future expression e_i(t+1) is independent of the past expression level e_i(t-1) given the present expression level e_i(t). As briefly mentioned in the previous section, it is required that the data set is reorganized into a linear system to reverse engineer GRN using the BOLS algorithm.
This GRN model can be easily rewritten into a linear system as

Y_i = E w_i + n_i, (2)

where Y_i contains the measurements of gene i at the later time point, E is the design matrix of expression levels at the earlier time point, w_i = [w_i1, ..., w_iK]^T, and n_i is a noise term. It should be noted that the expression levels e_i(t)s in both Eq. 1 and Eq. 2 are the same noisy ones. Eq. 1 describes that the current expression levels e_i(t)s are determined by the previous ones e_i(t-1) and the noises ξ_i(t-1), and that the expression levels evolve over time based on the GRN. Hence, ξ_i(t) is a noise added into the e_i(t)s during the generation of synthetic or real expression levels based on the DBN model. Once the "noisy" expression levels are available, we consider Eq. 2 for reverse engineering of the GRN. Because the given expression levels e_i(t)s in Eq. 2 are noisy, we should have a condition such that |Y_i - Ew_i| = |n_i| > 0. If n_i = 0, we will have over-fitting solutions, in which the model fitting oscillates widely so as to fit the noise. Thus, we can say that n_i is a noise related to the "data misfit" or the "confidence interval" on the best-fit parameters. On the other hand, ξ_i(t) is a noise related to the "generation" of the expression levels. With a limited number of such noisy data points, the resulting linear systems are underdetermined and ill conditioned. Thus, we are motivated to develop a novel strategy to overcome these problems. We decompose the entire network into a set of small networks that are defined as unit networks. Each unit network consists of one particular target gene and its regulator genes. The unit network is used as input and output of the BOLS algorithm. Figure 2a presents an input unit network that includes all genes in the data set as regulator candidates. Figure 2b presents an output unit network that contains the most probable regulator genes of the target gene. Thus, we can decompose the GRN into several separate regressions and apply the BOLS algorithm to each unit network. Therefore, for a GRN with K genes, we will run the algorithm K times, producing a unit network each time. It should be noted that we can use all unit networks to create the whole GRN, which consists of two finite sets, a set of nodes (genes) and a set of edges (interactions), such that each edge connects two nodes. With all (or selected) unit networks, the generation of the GRN can easily be generalized by constructing a K × K graph matrix G = [g_i,j], with binary element g_i,j = 1 if gene j is inferred as a regulator of gene i in the corresponding unit network, and g_i,j = 0 otherwise. Furthermore, this matrix induces a GRN, in which nodes correspond to genes and an edge joins node i and node j if and only if g_i,j = 1. For each edge, we store the information of the unit network where it belongs. The information of these sets is easily obtainable from all unit networks. Thus, the combination of all unit networks for creating the whole GRN is an easy and straightforward procedure.

Bayesian orthogonal least square algorithm

As briefly described in the Background section, reverse engineering with a linear system with limited data has to overcome two difficulties. In this section, we describe our efforts to overcome these challenges by developing the BOLS algorithm. The system is referred to as underdetermined when the number of parameters is larger than the number of available data points, so that standard least squares techniques break down. This issue can be addressed with the OLS [22] method, which involves decomposition of the design matrix into two matrices using Gram-Schmidt orthogonalization theory as

E = XU,

where E_(N-1)×K = [e_1, e_2, ..., e_K], X_(N-1)×K = [x_1, x_2, ..., x_K], and U_K×K is a triangular matrix with 1's on the diagonal and 0's below the diagonal. Let us say that w is the regression parameter inferred with E and g is the regression parameter inferred with X. It is noted that g and w satisfy the triangular system g = Uw.
The computational procedure of the Gram-Schmidt method is described as

x_1 = e_1,
u_mk = (x_m^T e_k)/(x_m^T x_m), 1 ≤ m < k,
x_k = e_k − Σ_{m=1}^{k-1} u_mk x_m, k = 2, ..., K.

Because x_i and x_j (i ≠ j) are orthogonal to each other, the sum of squares of Y is defined as

Y^T Y = Σ_{i=1}^{K} g_i^2 x_i^T x_i + n^T n,

and the variance of Y_i is defined as

Y^T Y/(N-1) = Σ_{i=1}^{K} g_i^2 x_i^T x_i/(N-1) + n^T n/(N-1).

It is noticed that (x_i^T x_i) g_i^2/(N-1) is the variance of Y_i which is contributed by the regressors and n^T n/(N-1) is the noise (or unexplained) variance of Y_i. Hence, (x_i^T x_i) g_i^2/(N-1) is the increment to the variance of Y_i contributed by w_i, and the error reduction ratio due only to x_i can be defined as

[NError]_i = g_i^2 x_i^T x_i / (Y^T Y).

This error term provides a simple and efficient measure for seeking a subset of significant regression parameters in a forward-regression way. The regressor selection procedure from Chen et al. [22] is summarized as follows: 1) At the first step, for 1 ≤ i ≤ K, compute g_1^(i) = e_i^T Y/(e_i^T e_i) and [NError]_1^(i) = (g_1^(i))^2 e_i^T e_i/(Y^T Y), find [NError]_1^(i_1) = max{[NError]_1^(i), 1 ≤ i ≤ K}, and select x_1 = e_{i_1}. 2) At the j-th step, where j ≥ 2, for 1 ≤ i ≤ K, i ≠ i_1, ..., i ≠ i_{j-1}, orthogonalize e_i against the previously selected regressors, compute the corresponding g_j^(i) and [NError]_j^(i), and select the candidate x_j that maximizes [NError]_j^(i), where u_mj = u_mj^(i_j), 1 ≤ m < j. 3) The OLS is terminated at the K_s-th step when 1 − Σ_{j=1}^{K_s} [NError]_j < ρ, where 0 < ρ < 1 is a chosen tolerance. We assume that we do not have any information about the noise level in the data, so that we completely over-fit the data to the model using the OLS method with ρ << 1 (ρ = 1.0e-3 in this study). Then we reduce the unnecessary parameters to deal with the ill-conditioned problem by using second order derivatives for network pruning and the Bayesian model comparison framework. We can obtain the optimal solution by trading off between the complexity of the model and the data misfit [20]. We start this procedure using an extremely small value for the data misfit by completely over-fitting the data to the model. As the complexity of the model is reduced, i.e. the number of effective parameters is reduced, the value of the data misfit is increased. The optimal complexity of the model for the "true solution" is decided using a Bayesian model comparison frame that assigns a preference to the model H_i with a certain complexity, a Bayesian evaluation called the evidence P(D|H_i). The evidence is obtained by multiplying the best-fit likelihood by the Occam's factor,

P(D|H_i) ≃ P(D|g_MP, H_i) × P(g_MP|H_i) (2π)^(K/2) det^(-1/2) A,

where P(D|g_MP, H_i) corresponds to the best-fit likelihood, P(g_MP|H_i)(2π)^(K/2) det^(-1/2) A corresponds to the Occam's factor, A = −∂^2 logP(g|D, H_i)/∂g^2, and g_MP represents the most probable parameters g. The Occam's factor is equal to the ratio of the posterior accessible volume of H_i's parameter space to the prior accessible volume, or the factor by which H_i's hypothesis space collapses when the data are collected [20]. The model H_i can be viewed as consisting of a certain number of exclusive sub-models, of which only one is chosen when the data are collected. The Occam's factor is a measure of the complexity of the model that depends not only on the number of parameters in the model but also on the prior probability of the model. Therefore, the over-fitting solution can be avoided by using the Bayesian model comparison frame because the Bayesian Occam's factor assures getting the optimal complexity of the model. See MacKay [20] for more details of the Occam's factor. With second order derivatives for network pruning [23], we can select the parameters to be eliminated first. Our goal here is to find a set of parameters whose deletion causes the least increase of the cost function C,

C = β/2 (Y − Xg)^T (Y − Xg) + α/2 (g^T g),

where α and β are hyper-parameters that measure the complexity of the model and the data misfit, respectively.
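Before moving on to the pruning and evidence steps, a minimal sketch of the forward OLS selection described above is given below. It assumes the standard error reduction ratio and stopping rule associated with Chen et al. [22]; the function name, the classical Gram-Schmidt orthogonalization against already selected regressors, and the numerical guard are illustrative choices.

```python
import numpy as np

def forward_ols(E, Y, rho=1e-3):
    """Greedy forward orthogonal least squares: select regressors by their
    error reduction ratio until 1 - sum(ratios) < rho."""
    N, K = E.shape
    YtY = float(Y @ Y)
    selected, ortho, ratios = [], [], []
    while len(selected) < K:
        best = None
        for i in range(K):
            if i in selected:
                continue
            x = E[:, i].copy()
            for q in ortho:                        # orthogonalize against chosen regressors
                x -= (q @ E[:, i]) / (q @ q) * q
            xtx = float(x @ x)
            if xtx < 1e-12:                        # numerically dependent column, skip
                continue
            g = float(x @ Y) / xtx
            err = g * g * xtx / YtY                # error reduction ratio of candidate i
            if best is None or err > best[0]:
                best = (err, i, x)
        if best is None:
            break
        err, i, x = best
        selected.append(i); ortho.append(x); ratios.append(err)
        if 1.0 - sum(ratios) < rho:                # stopping criterion
            break
    return selected, ratios
```

With ρ set to a very small value such as 1.0e-3, essentially all usable regressors are retained, reproducing the deliberate over-fitting that serves as the starting point for the pruning and Bayesian model comparison steps.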
The iterative formulae to estimate α and β for a given data set and model structure will be shown later in this section (Eq. 6 and 7). Using the second order derivative method for network pruning, we can derive the saliency equation as

S_j = g_j^2 / (2 [A^{-1}]_jj), (3)

where [A^{-1}]_jj is the j-th diagonal component of the inverse matrix of A, A = ∂^2 C/∂g^2, and v_j is the unit vector in parameter space whose j-th dimension is equal to one and whose remaining dimensions are equal to zero. It should be noted that A and A^{-1} are diagonal matrices because of the decomposition of the design matrix E by Gram-Schmidt orthogonalization theory. With Eq. 3 we can select the parameters whose elimination produces the least increase of the cost function C. With a Bayesian frame [20] we can compare alternative models when our model structures keep changing with the network pruning method. Let us say we have a data set D = [Y, X], where Y = [y(1), y(2), ..., y(N)]^T is the target data set, X = [x_1, x_2, ..., x_K] is the N × K design matrix, and x_i = [x_i(1), x_i(2), ..., x_i(N)]^T is each column of X. The regression parameters we want to infer are g = [g_1, g_2, ..., g_K]^T. The log posterior probability of the data D, given α and β, can be derived [20] as

logP(D|α, β, H_i) = −α/2 g_MP^T g_MP − β/2 (Y − X g_MP)^T (Y − X g_MP) − 1/2 log det A + K/2 log α + N/2 log β − N/2 log 2π, (5)

where the subscript MP denotes Most Probable. The evidence P(D|H_i) can be obtained if we marginalize the probability defined in Eq. 5 over the hyper-parameters α and β. Before the estimation of the evidence P(D|H_i), we have to find the most probable values of the hyper-parameters α and β. The differentiation of Eq. 5 over α and β and rearrangement gives formulae for the iterative re-estimation of α and β [20],

α = γ / (g^T g), (6)
β = (N − γ) / ((Y − Xg)^T (Y − Xg)), (7)

where γ = N − αTrace(A^{-1}), g = A^{-1} X^T Y, N is the number of data points, and K is the number of variables (genes). To rank alternative structures (or complexities) of the model in the light of the data set D, we evaluate the evidence by marginalizing the posterior probability P(D|α, β, H_i) over α and β, P(D|H_i) = ∫∫ P(D|α, β, H_i) P(α, β) dα dβ. We have very little prior information about α and β. When the available prior information is minimal, the learning process is often started with an objective prior probability. This uninformative prior probability is referred to as a "vague prior" for a parameter with a range from 0 to ∞, which is a flat prior [87]. This prior probability can be left out when we compare alternative models. With the prior available, we can marginalize the posterior P(D|α, β, H_i). The marginalization of P(D|α, β, H_i) over α and β can be estimated using a flat prior and Gaussian integration [20],

P(D|H_i) ≃ P(D|α_MP, β_MP, H_i) 2π σ_logα|D σ_logβ|D,

where σ_logα|D and σ_logβ|D are the error bars on logα and logβ, found by differentiating Eq. 5 twice: σ_logα|D^2 ≈ 2/γ and σ_logβ|D^2 ≈ 2/(N − γ). In this study, we create a novel reverse engineering algorithm for linear systems with K genes using the three techniques described above. The algorithm is run K times so that all genes in the data set are considered as a target gene at least once.

Additional file 1. Comparing the performance between BOLS and SBL using the data set generated based on Rogers and Girolami's study - Supplementary Information. This description provides the comparison of performance between BOLS and SBL using the synthetic data generated by Rogers and Girolami [19].

The algorithm of BOLS for a unit-network construction is summarized below.
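The following is a minimal, self-contained sketch of that unit-network construction, under stated assumptions: it uses a ridge-regularized least-squares fit in place of the orthogonalized design, the standard evidence-framework updates for α and β, and a Gaussian-approximation log evidence; the function name, the fixed re-estimation count, and the numerical guards are illustrative, so this is a reconstruction of the scheme described above rather than the authors' exact implementation.

```python
import numpy as np

def bols_unit_network(X, Y, n_reestimation_iters=50):
    """Sketch of one unit-network fit: start from the fully over-fitted solution,
    repeatedly delete the parameter with the smallest saliency, re-estimate the
    hyper-parameters (alpha, beta), and keep the model whose approximate log
    evidence log P(D|H_i) is largest."""
    N, K = X.shape
    active = list(range(K))                 # indices of regressors still in the model
    best_log_ev, best_model = -np.inf, {}
    while active:
        Xa = X[:, active]
        alpha, beta = 1e-6, 1.0
        for _ in range(n_reestimation_iters):           # iterative re-estimation of alpha, beta
            A = beta * (Xa.T @ Xa) + alpha * np.eye(len(active))
            g = beta * np.linalg.solve(A, Xa.T @ Y)      # most probable parameters
            gamma = len(active) - alpha * np.trace(np.linalg.inv(A))
            resid = Y - Xa @ g
            alpha = gamma / max(float(g @ g), 1e-12)
            beta = max(N - gamma, 1e-12) / max(float(resid @ resid), 1e-12)
        # Gaussian-approximation log evidence: best-fit likelihood times the Occam factor.
        _, logdetA = np.linalg.slogdet(A)
        log_ev = (-beta / 2 * float(resid @ resid) - alpha / 2 * float(g @ g)
                  - 0.5 * logdetA + len(active) / 2 * np.log(alpha)
                  + N / 2 * np.log(beta) - N / 2 * np.log(2 * np.pi))
        if log_ev > best_log_ev:
            best_log_ev = log_ev
            best_model = {idx: w for idx, w in zip(active, g)}
        # Saliency-based pruning: drop the parameter whose deletion increases C least.
        Ainv = np.linalg.inv(A)
        saliency = g ** 2 / (2.0 * np.diag(Ainv))
        active.pop(int(np.argmin(saliency)))
    return best_log_ev, best_model          # {regulator index: inferred weight}
```

Running this once per target gene i with Y set to Y_i yields K unit networks, which can then be assembled into the K × K graph matrix G described earlier.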
Exploring the ketogenic diet’s potential in reducing neuroinflammation and modulating immune responses The ketogenic diet (KD) is marked by a substantial decrease in carbohydrate intake and an elevated consumption of fats and proteins, leading to a metabolic state referred to as “ketosis,” where fats become the primary source of energy. Recent research has underscored the potential advantages of the KD in mitigating the risk of various illnesses, including type 2 diabetes, hyperlipidemia, heart disease, and cancer. The macronutrient distribution in the KD typically entails high lipid intake, moderate protein consumption, and low carbohydrate intake. Restricting carbohydrates to below 50 g/day induces a catabolic state, prompting metabolic alterations such as gluconeogenesis and ketogenesis. Ketogenesis diminishes fat and glucose accumulation as energy reserves, stimulating the production of fatty acids. Neurodegenerative diseases, encompassing Alzheimer’s disease, Parkinson’s disease are hallmarked by persistent neuroinflammation. Evolving evidence indicates that immune activation and neuroinflammation play a significant role in the pathogenesis of these diseases. The protective effects of the KD are linked to the generation of ketone bodies (KB), which play a pivotal role in this dietary protocol. Considering these findings, this narrative review seeks to delve into the potential effects of the KD in neuroinflammation by modulating the immune response. Grasping the immunomodulatory effects of the KD on the central nervous system could offer valuable insights into innovative therapeutic approaches for these incapacitating conditions. Introduction The ketogenic diet is characterized by a significant reduction in the consumption of carbohydrates and a greater intake of fats and proteins, this type of diet determines a metabolic state called "ketosis", in which fats are used as the main energy source instead of carbohydrates (1).Many recent studies have identified how the ketogenic diet can induce potential benefits in reducing the risk of certain pathologies such as type 2 diabetes (2), hyperlipidemia (2), heart disease (3) but also cancer (4).Therefore, KD therefore consists of a high intake of lipids, a moderate consumption of a good number of proteins and a low intake of carbohydrates.Generally, KD can thus be divided into the various percentages of macronutrients, i.e. 
fats which can vary from 60-90% (usually 70-75%) of the entire energy intake, carbohydrates less than 50 g per day (generally represented by 5-10% of total kcal) and proteins with a range from 1.0-1.2 to 1.7 g per kg of body weight (represented by 20% of daily kcal).The main purpose of carbohydrates is to provide energy for various tissues in the body (5).However, when their intake is limited to levels below 50 g/day, subsequent insulin secretion decreases significantly, inducing a catabolic state.Due to this process, glycogen stores are used to the point of exhaustion, triggering a series of metabolic changes, including gluconeogenesis and ketogenesis (6).During ketogenesis, insulin secretion is low due to low blood glucose levels via feedback mechanism, resulting in a reduction in the accumulation of fat and glucose as energy reserves, also triggering further hormonal changes that contribute to an increase of the use of fats as an energy source and subsequent production of fatty acids (7).Neurodegeneration is defined as a pathological condition that mainly affects the brain's environment at neuronal level.Neurodegenerative diseases are a large group of neurological disorders that affect specific regions of the CNS, having various clinical and pathological characteristics.Conditions in this subgroup include Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD), and Huntington's disease (HD).Although the underlying mechanisms are different, such as the different protein aggregates or genetic variations, they share the hallmark of chronic neuroinflammation (8).Initially, the molecular mechanisms that were recognized as underlying these pathologies were mainly focused on anatomical changes including, as cited previously, protein aggregation such as amyloid beta plaque (Ab), neurofibrillary tangles (NFTs) and neuronal damage.Subsequently, the presence of possible immune-related proteins in patients with AD was reported for the first time among the causes of neurodegeneration.The causes and pathogenesis are complex and diverse, mainly including protein aggregations, mutations, and infections.Following numerous studies, it has been suggested that the chronic activation of the immune response in the CNS and the presence of high levels of inflammatory factors play a notable role in the emergence and pathogenesis of neurodegenerative diseases (9).Cerebral neuroinflammation is fundamentally characterized by the activation and proliferation of some main immune cells forming part of the CNS such as microglia and astrocytes, subsequently accompanied by the regulation and release of inflammatory mediators (10).Currently, studies on possible therapies are still ongoing and the development of effective therapeutic strategies is essential.Recently, emerging evidence has underlined and highlighted both the potential pathophysiological and clinical benefits of KD in neurodegenerative diseases, indicating it as a possible treatment (11).The protective power of KD is associated with the creation of ketone bodies (KB), fundamental players and factors of this type of diet.In fact, over the years, studies on the neuroprotective and inflammation-modulating effects associated with KD have increased, increasingly deepening research in this field, as well as in other types of pathologies.For about 100 years, KD has been the protagonist of numerous publications in the PubMed search engine, inserting the words "ketogenic diet AND neurodegenerative 
disease", with about 259 results from 1924 to 2024 (in progress), noting an increase especially in the last decades starting from the 2010s, with around 245 results (Figure 1).In light of these described concepts, and analyzing articles from the last decade, although the complete mechanisms of the influence of KD for the treatment of neurodegenerative diseases remains to be analyzed with further research, the objective of this review was to systematically summarize the information and findings present in scientific literature to support the use of KD as a possible therapeutic approach, in particular for neurodegenerative diseases, or discuss its potential adverse effects. Ketogenic diet The ketogenic is a dietary approach which was first used by Russell Wilder as a non-pharmacological treatment for epilepsy in 1921 (12), subsequently coining the term "ketogenic diet".Through observations, it was noted that this type of dietary regime reduced the frequency and intensity of epileptic seizures in a subgroup of patients characterized by epilepsy who followed this dietary approach (13), in fact for almost a decade, before the use and discovery of specific antiepileptic drugs, the ketogenic diet was considered as one of the best therapeutic options for the treatment of pediatric epilepsy (14).This diet subsequently became popular around 1970s, becoming a potential treatment for various conditions (15), and in weight loss interventions, demonstrating its short-term effectiveness (16).In fact, through the significant reduction in the consumption of carbohydrates and a greater intake of fats and proteins, this type of diet determines a metabolic state called "ketosis", in which fats are used as the main energy source instead of carbohydrates.Many recent studies have identified how the ketogenic diet can induce potential benefits in reducing the risk of certain pathologies such as type 2 diabetes, hyperlipidemia, heart disease but also cancer (17).Therefore, KD therefore consists of a high intake of lipids, a moderate consumption of a good number of proteins and a low intake of carbohydrates.Generally, KD can thus be divided into the various percentages of macronutrients, i.e. fats which can vary from 60-90% (usually 70-75%) of the entire energy intake, carbohydrates less than 50 g per day (generally represented by 5-10% of total kcal) and proteins with a range from 1.0-1.2 to 1.7 g per kg of body weight (represented by 20% of daily kcal) (18) (Figure 2). 
Considering a diet characterized by 2000 kcal/day, the total daily intake of carbohydrates would be approximately 20-50 grams per day (18).The main purpose of carbohydrates is to provide energy for various tissues in the body.However, when their intake is limited to levels below 50 g/day, subsequent insulin secretion decreases significantly, inducing a catabolic state.Due to this process, glycogen stores are used to the point of exhaustion, triggering a series of metabolic changes, including gluconeogenesis and ketogenesis (19, 20).During ketogenesis, insulin secretion is low due to low blood glucose levels via feedback mechanism, resulting in a reduction in the accumulation of fat and glucose as energy reserves, also triggering further hormonal changes that contribute to an increase of the use of fats as an energy source and subsequent production of fatty acids (21).Once produced, fatty acids are used to produce ketone bodies (the main metabolites of the ketogenic diet), initially in the form of Acetoacetate (AcAc) and subsequently converted into beta-hydroxybutyrate (BHB) and Acetone, determining the appearance of a metabolic state which is called "nutritional ketosis", ketotic metabolism that will be maintained until the body is deprived of carbohydrates.Nutritional ketosis can generally be considered "safe" as the production of ketone bodies appears to be in moderate concentrations (approximately 5.0 mmol/l) (22) which have no ability to significantly influence blood pH.Naturally, nutritional ketosis is clearly different from ketoacidosis, which is a serious and life-threatening condition for the individual, with excessively high levels of ketone bodies, resulting in blood acidosis (23).Ketone bodies, once synthesized following the ketosis process, can be used as the main source of energy especially for the main vital organs such as the heart, muscle tissue, kidneys but above all the brain (23,24).In fact, in this case, ketone bodies have the ability, due to their structure, to cross the blood-brain barrier with the aim of providing alternative energy to the brain, as it is clearly dependent on glucose.Therefore, ketone bodies can play a crucial role in providing energy to the body during periods of low carbohydrate intake such as in the ketogenic diet, allowing the body to maintain energy balance (24).These basic mechanisms, characterizing this type of diet, could be the basis of the treatment of some types of neurodegenerative diseases to exploit the ability of ketone bodies to modulate the complexes implicated in some neurological diseases.Naturally, these mechanisms are still a source of discussion and research in this regard, which will therefore be explored in depth in this review. 
Neurodegenerative diseases are a large group of neurological disorders that affect specific regions of the CNS, having various clinical and pathological characteristics. Conditions in this subgroup include Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD), and Huntington's disease (HD). Although the underlying mechanisms are different, such as the different protein aggregates or genetic variations, they share the hallmark of chronic neuroinflammation (25). Initially, the molecular mechanisms that were recognized as underlying these pathologies were mainly focused on anatomical changes including, as cited previously, protein aggregation such as amyloid beta plaque (Ab), neurofibrillary tangles (NFTs) and neuronal damage. Subsequently, the presence of possible immune-related proteins in patients with AD was reported for the first time among the causes of neurodegeneration. In recent years, microglial activation was identified as one of the main features of neurodegenerative diseases (26, 27). Therefore, numerous studies and efforts in the field of research have led to a revaluation of the role of inflammation in neurodegeneration, as a triggering and determining cause of the development of neurodegenerative diseases, resulting in immune signalling which may not only be a consequence of protein aggregation in the brain, but also the cause of the accumulation itself in the early stages of the pathological process (28, 29). The immune system, in fact, performs numerous functions in maintaining tissue homeostasis, including for example the elimination of possible pathogens but also the recovery of injuries (30, 31). Generally, the immune response resolves and repairs the tissue injury beneficially and eliminates the infection present. Despite this, in some cases, the ability to reduce and turn off a possible inflammatory stimulus is lacking; therefore, these resolving mechanisms could be insufficient, resulting in chronic inflammation, which leads to the release of neurotoxic factors and an increase in the disease. The causes that determine the birth and duration of chronic inflammation can be multiple, for example depending on endogenous and genetic factors (32, 33), environmental factors (systematic infection, intestinal dysbiosis, senescence processes, nutritional lifestyle) (34, 35), but also a possible genetic susceptibility, for example mutations of apolipoprotein E4 (APOE4), an allele associated with an increased risk of late-onset AD (36, 37). Furthermore, recent studies affirm how a group of bioactive lipids, specialized pro-resolving lipid mediators (SPMs), could be induced to promote the resolution and recovery of inflammation. Failure to turn off inflammation would lead to reduced production of SPMs, resulting in an increase in chronic inflammatory diseases (38). Furthermore, another fundamental key concept to explore is the microglial role in neurodegenerative disease; in fact, the inflammatory receptors present on the surface of immune cells, in particular glial cells, generally act as sensors to detect possible changes in the homeostasis of the environment. Microglia, macrophages resident in the brain environment, and astrocytes are in fact considered the immune cells that are mainly involved in neuroinflammation processes (39). Microglia present proinflammatory or neuroprotective roles in the CNS based on the physiological response following possible external stimuli; depending on their phenotype, they can be distinguished into the M1 phenotype of the pro-inflammatory type and the M2 phenotype of the anti-inflammatory type. M1 type microglia, activated by nuclear factor kB (NF-kB) but also by STAT3 signal
transducers, generate proinflammatory cytokines, and in particular the interleukins IL-1b, IL-6, IL-12, IL-23, cyclooxygenase (COX)-1, COX-2, ROS and nitric oxide (NO) (39).However, in the case of the M2 antiinflammatory phenotype, when activated they release neurotrophic factors, such as IL-4, IL-10, IL-13 and transforming growth factor-b (TGF-b) (40).Similar to microglia, even astrocytes if activated present a double phenotype (A1 and A2) in order to respond to external pathological agents and therefore contribute to the neuroinflammatory process, resulting in the production of proinflammatory cytokines (such as IL-1b, TNF-a, IL-6 and NO) or anti-inflammatory cytokines (such as IL-4, IL-10, IL-13 and TGFb) via the STAT3 pathway (40).Microglial activation is also determined by Molecular Patterns associated with DAMP or PAMP Pathogens (41), which lead to the activation of the signal by activating transcription processes and inducing the secretion of inflammatory mediators which in turn amplify further inflammation.Generally, glial cells, if activated, should eliminate possible pathogens by inducing an inflammatory resolution process to eliminate DAMPs or PAMPs, reducing the inflammatory response.However, in some cases, immune cells may not be able to resolve this inflammation which acquires a chronic character, causing neuronal toxicity and favouring protein aggregation.Chronic inflammation leads to increased release of proinflammatory cytokines, including interleukin-6 (IL-6), tumor necrosis factor a (TNF-a), adipokines, and Monocyte chemoattractant protein-1 (MCP -1) (42) which also decrease good mitochondrial function, stimulate the production of ROS (43).Consequently, these inflammatory mediators cause damage at the blood-brain barrier (BBB), entering the brain, inducing persistent chronic neuroinflammation and subsequently causing neurodegeneration (42).For example, in AD, immune cells and therefore microglia, if activated in a state of chronic inflammation, could be involved in the birth of the Ab plaque by increasing its secretion, inducing the expression of interferon-induced transmembrane protein 3 (IFITM3) which improves the aggregation of b-amyloid plaque (44,45).This state of inflammation would facilitate a link between the initial aggregation of Ab and the development of "tau" aggregates with the formation of harmful clusters in the brain that fail to perform their structural stabilizing functions (46,47).The accumulation of this type of tangles would lead to a further loss of neuronal function in the nerve fiber and, as a final process, to cellular apoptosis.In PD pathology, however, proinflammatory cytokines released by microglial inflammation could induce the phosphorylation of asynuclein, a key protein in maintaining the production of neurons in the brain, at serine 129 (Ser129) (48), thus promoting degradation of a-synuclein itself (49).Similarly, TDP-43 aggregates, recognized as the main pathogenic factor of ALS, following persistent inflammation, could invade mitochondria, releasing mtDNA and inducing inflammation in neurons through activation of the cGAS/ STING pathway (50).Therefore, inflammation at the microglial and neuronal level could have a close relationship with the development of these pathologies. 
The neuroprotective role of the ketogenic diet in inflammation and neurodegeneration: mechanisms and therapeutic potential Over the years, this diet has been evaluated from various aspects, in an ever more in-depth manner, especially in the field of various pathologies; however, the main primary use, since its discovery, mainly includes neurological diseases having initially been used in epilepsy.The effect of the ketogenic diet on neurological diseases has been widely described in various publications highlighting its effects on neuroprotection, neuroinflammation and neuroplasticity, neurotransmitter function and changes in cellular energy and metabolism among other aspects (51).Neurodegenerative diseases, such as Alzheimer's disease (AD) and Parkinson's disease (PD), but also Huntington's disease (HD) and amyotrophic lateral sclerosis (ALS), are pathologies that are characterized by the reduction and loss progressive and chronic progression of neuronal structure and functions (52,53).The causes and pathogenesis are complex and diverse, mainly including protein aggregations, mutations, infections.Following numerous studies, it has been suggested that the chronic activation of the immune response in the CNS and the presence of high levels of inflammatory factors play a notable role in the emergence and pathogenesis of neurodegenerative diseases (54).Cerebral neuroinflammation is fundamentally characterized by the activation and proliferation of some fundamental immune cells forming part of the CNS such as microglia and astrocytes, subsequently accompanied by the regulation and release of inflammatory mediators (55, 56).Currently, studies on possible therapies are still ongoing and the development of effective therapeutic strategies is essential.Recently, emerging evidence has underlined and highlighted both the potential pathophysiological and clinical benefits of KD in neurodegenerative diseases, indicating it as a possible treatment option (57).The protective power of KD is associated with the creation of ketone bodies (KB), fundamental players and factors of this type of diet.Following dietary carbohydrate restriction, once KB reach plasma concentrations greater than 4 mmol/L, a shift from the use of glucose as the main brain source of energy to KB occurs (58).KB have a particularity: they can be absorbed into the brain tissue via monocarboxylate transporters (MCT) on microvascular endothelial cells and astrocytes; the transport process by MCTs depends on circulating KB concentrations (57,59).MCTs are a family of transporters which are linked to protons which passively transport possible metabolic substrates, such as lactate, pyruvate and therefore KB.Four isoforms are the main ones, including MCT1 (SLC16A1), MCT2 (SLC16A7), MCT3 (SLC16A8) and finally MCT4 (SLC16A3), each having affinity for characteristic substrates (60).MCTs are ubiquitously expressed in the brain, as they are the exclusive transporters for KB (38,41).The main ketone bodies produced in the liver are acetoacetate (AcAc), betahydroxybutyrate (BHB) and acetone (61).Acetoacetate is reduced to BHB by an enzyme "b-hydroxybutyrate dehydrogenase 1" (BHD1); both BHB and AcAc can be transported into the bloodstream or to their respective target organs via MCT transporters (60), while acetone, is produced from a small portion of acetoacetate and then metabolized in the liver as a waste product.Under normal physiological conditions, both BHB and AcAc are excreted at the renal level, while acetone at the pulmonary level (51, 
62).MCT1 has a relatively low affinity for BHB, mainly in endotheliocytes and astrocytes of the blood-brain barrier (BBB) (63), while MCT2, with high affinity for BHB, is present almost exclusively at the postsynaptic level of neurons (64) but also with low affinity in astrocytes (65).It is also interesting to underline that according to recent studies FFA can cross the BBB and actively participate in ketogenesis in astrocytes (66).Based on this mechanism, thanks to numerous field research, it has been possible to observe how many neurodegenerative diseases are characterized by disorders of glucose metabolism at the neuronal level.Extremely important it follows that in case of brain trauma, and therefore with the help of ketosis, the number of MCT channels in brain cells increases and allows BHB to be catalyzed and metabolized (67,68).Furthermore, it has been seen that the application of KD could reduce the demyelination of the myelin sheath of nerve fibers, the death of oligodendrocytes (cells responsible to produce myelin) and the degeneration of axons which could be caused by glucose deficiency (69).Furthermore, ketone bodies and in particular BHB can contribute to the reconstruction of the respiratory chain, as the energy derived from ketone bodies can be assimilated even in the event of damage to the first complex of the respiratory chain (70,71).In this regard, the caloric deficit typical of KD would show a neuroprotective potential, increasing the number of neuroprotective factors, including brain-derived neurotrophic factor (BDNF) and glial cell line-derived neurotrophic factor (GDNF), neutrophin-3 (NT-3) and molecular chaperones (71).Caloric deficiency would also improve mitochondrial function, reducing the production of reactive oxygen species (ROS) and showing a consequent anti-inflammatory potential given by the inhibition of the activities of cyclooxygenase-2 (COX-2) and inducible nitric oxide synthase (iNOS), as well as by blocking the synthesis of IL-1b, IL-2, IL-4, IL-6, TNFa.and NFkB (72,73).In this case, the effect of KD was demonstrated in reducing microglial and neuritis activation via inhibition of COX-2 and PPARg.The condition of ketosis and production of ketone bodies could promote brain autophagy through the activation of sirtuin 1 (SIRT1) and hypoxia-induced factor 1a (HIF-1a) and the inhibition of the mTORC1 complex, preventing neurodegenerative disorders by eliminating protein aggregates or damaged mitochondria (74,75).For example, studies have shown that KD could reduce the number of beta-amyloid plaques and other metabolic byproducts in brain tissue (76).The KD and the ketone bodies derived from it would finally determine the restoration of the integrity of the BBB following the increase in connexin-43 (Cx43) useful in the construction of the BBB and the MCT and GLUT transporters (glucose transporters) (77,78).KD could determine the polarization of microglia, upon activation, from M1 to M2 by modifying the inflammatory environment by reducing the production of proinflammatory cytokines, including TNF-a, IL-1b and IL-6 and molecular patterns such as TLR4 and NF-kB, and upregulating the production of anti-inflammatory cytokines such as TGF-b (79).The result is therefore a broad and favourable spectrum of KD effects (Table 1), although the mechanisms still need to be verified and better researched. 
Once absorbed into the brain, ketone bodies can play various neuroprotective roles in the CNS.Mainly, since KB are produced following a carbohydrate and dietary restriction with consequent inhibition of glycolysis, this would improve insulin sensitivity and therefore glucose tolerance, often related to the onset of agerelated conditions (80,81).Secondly, KB could compensate for possible mitochondrial dysfunctions in neurons and especially in glial cells by improving mitochondrial respiratory functions (according to a recent study BHB regulates "nicotinamide adenine dinucleotide" by decreasing the NAD+/NADH ratio) and increasing ATP production (82).Furthermore, increased mitochondrial function could result in blocking complex I (CI) in the etiological mechanism of PD (83), and similarly alleviating the production of reactive oxygen species (ROS) resulting in reduced inflammatory responses through various pathways.KB production may also be able to prevent neuronal apoptosis via the sirtuin (SIRT)-1 signalling pathway, i.e.KB may reduce SIRT-1mediated hyperacetylation of the transcription factor p53 and consequently down -regulating proapoptotic protein (BAX) and up-regulating anti-apoptotic proteins (Bcl-2 and Bcl-xL) in animal models (84).Furthermore, KB are particularly involved in neuroinflammation processes at the glial level (85).Resident glial cells of the central nervous system (microglia and astrocytes) mediate neuroinflammation processes by regulating the production of inflammatory cytokines, ROS, and chemokines (86).Based on the progress of the inflammatory response, neuroinflammation is divided into acute and chronic.Generally, an effective inflammatory response allows the elimination of invading pathogens, promoting the repair of damaged tissues (87).However, an uncontrolled neuroinflammation process could induce a chronic inflammatory response, characterized by hyperactivation of microglia and the uncontrolled production of neurotoxic factors but also by neuronal and glial cell destruction, which together would accelerate the deterioration of the initial condition (88).Microglia and therefore, are immune cells mainly involved in neuroinflammation (89,90), considered as resident macrophages in the brain, play a fundamental role in neuroinflammation (91).Microglia exhibit pro-inflammatory or neuroprotective roles in the CNS based on their phenotype (proinflammatory M1 phenotype and anti-inflammatory M2 phenotype) (92).M1-type microglia (classical-type activation) are activated via pro-inflammatory nuclear factor kB (NF-kB) and signal transducers of transcription 3 (STAT3) signalling pathways to generate proinflammatory cytokines, such as example interleukin (IL)-1b, IL-6, IL-12, IL-23, cyclooxygenase (COX)-1, COX-2, ROS and nitric oxide (NO).While M2 type microglia (alternative type activation), are activated with the aim of releasing neurotrophic factors, such as IL-4, IL-10, IL-13 but also transforming growth factor-b (TGF-b) (93).In contrast, astrocytes, also part of glial cells, can regulate the brain environment, maintaining neurotransmitter homeostasis, controlling synapse formation, and forming distinctive perivascular channels to eliminate potentially neurotoxic agents.Being very similar to microglia, they too can be activated into A1 phenotype and A2 phenotype to respond to pathological insults and participate in the neuroinflammatory process (94).Over time, it has been stated that the neuroinflammatory process certainly contributes to the birth and pathological 
progression of various neurodegenerative diseases (95), as, being neuroimmune cells, microglia and astrocytes modulate the inflammatory response, releasing proinflammatory or anti-inflammatory cytokines (95). BHB, in this context, promotes the microglial branching typical of M2 alternative-type activation, with anti-inflammatory properties, by acting on protein kinase B (Akt)-small RhoGTPase (96). Currently, there is growing evidence suggesting that KD and KB play neuroprotective roles by regulating central and peripheral inflammatory mechanisms (60). In addition to being substrates for glial protective purposes, KBs can also be active as mediators of intracellular signalling, as they participate in intracellular signalling cascades by regulating neuroinflammation directly or indirectly (97). Hydroxycarboxylic acid receptor 2 (HCA2), which can also be called G-protein-coupled receptor (GPR) 109A, is a receptor abundantly present in microglia and macrophages, but also in dendritic cells. Its activation determines a greater action by COX-1, which in turn generates prostaglandin D2 (PGD2), a factor alleviating neuroinflammation (98). BHB, being a specific ligand of HCA2, may be able to activate it starting from a concentration of approximately 0.7 mmol/L, as in human nutritional ketosis (99, 100). Furthermore, BHB, by binding to HCA2, also inhibits the production of proinflammatory cytokines and enzymes via the NF-kB pathway in activated microglia pretreated with BHB and lipopolysaccharide (LPS) (101). The activation of HCA2 by BHB would also facilitate the reduction of endoplasmic reticulum stress, the activity of the NLRP3 inflammasome and finally the levels of IL-1b and IL-18 (102). Furthermore, KB can alter the balance of the release of inhibitory and excitatory neurotransmitters, crucial for long-term neuronal survival; it has in fact been demonstrated that AcAc inhibits the uptake of glutamate into presynaptic vesicles, decreasing excitotoxicity (103). Similarly, BHB attenuates the excessive autophagic degradation that enhances neuronal damage (104). It is interesting to consider that 3-Hydroxy-3-Methylglutaryl-CoA Synthase 2 (HMGCS2), i.e. the control enzyme in ketogenesis, would induce the autophagic clearance of the b-amyloid precursor protein, which is mediated by the metabolism of AcAc and by the mTOR signalling pathway, suggesting that HMGCS2, and therefore AcAc, inhibits the possible production and deposition of amyloid b (Ab) (105). Finally, as initially mentioned, KD and therefore KB can improve insulin sensitivity and the glycemic profile (106, 107); in this case BHB and AcAc slow down insulin glycation, the accumulation of advanced glycation end products (AGEs) (108, 109) and liposomal lipid peroxidation, thus preventing microglial apoptosis (110). Therefore, it can be hypothesized that the anti-inflammatory effects of KB could reduce the development of neurodegenerative diseases in multiple aspects (Table 2 and Figure 3).
Discussion
Neuroinflammation and immune activation represent crucial components in the pathogenesis of various neurological disorders, including neurodegenerative diseases. The emerging role of the ketogenic diet (KD) in modulating these processes has sparked considerable interest and debate within the scientific community. One aspect of neuroinflammation involves the activation of microglia, the resident immune cells of the central nervous system (CNS). Microglia play a dual role in neuroinflammation, exhibiting both pro-inflammatory (M1 phenotype) and anti-inflammatory (M2 phenotype) responses. Chronic activation of pro-inflammatory microglia contributes to the release of neurotoxic factors, exacerbating neuronal damage and accelerating disease progression. On the other hand, the activation of anti-inflammatory microglia promotes tissue repair and neuroprotection. Recent studies have suggested that the ketogenic diet may exert profound effects on microglial activation and polarization. Ketone bodies, particularly beta-hydroxybutyrate (BHB), have been shown to modulate the microglial phenotype (66), shifting it towards the anti-inflammatory M2 phenotype. This polarization may be mediated through various mechanisms, including the inhibition of NF-kB signaling and the activation of hydroxycarboxylic acid receptor 2 (HCA2) (99, 100). By promoting the M2 phenotype, ketone bodies may attenuate neuroinflammation and mitigate neuronal damage in neurodegenerative diseases. Furthermore, the ketogenic diet has been implicated in reducing the production of pro-inflammatory cytokines and chemokines, such as interleukin-1b (IL-1b) and tumor necrosis factor a (TNF-a) (102). These inflammatory mediators play pivotal roles in propagating neuroinflammation and exacerbating neuronal injury. By dampening the release of these pro-inflammatory factors, the ketogenic diet may exert anti-inflammatory effects within the CNS, thereby mitigating the pathological processes associated with neurodegenerative diseases. Moreover, ketone bodies have been shown to modulate other aspects of the immune response, including the regulation of astrocyte function and the promotion of the resolution of inflammation. Astrocytes, like microglia, play crucial roles in maintaining CNS homeostasis and responding to pathological insults. The ketogenic diet may influence astrocyte activation and polarization, thereby exerting additional effects on neuroinflammation and immune activation (30, 31). While accumulating evidence suggests a potential role for the ketogenic diet in modulating neuroinflammation and immune activation, several questions and challenges remain. For instance, the optimal composition and duration of the ketogenic diet required to elicit therapeutic effects in neurodegenerative diseases are yet to be fully elucidated. Moreover, the precise mechanisms underlying the immunomodulatory effects of ketone bodies within the CNS warrant further investigation. Overall, the immunomodulatory effects of the ketogenic diet (KD) represent a fascinating area of research with potential implications for various health conditions, including neurological disorders, metabolic diseases, and inflammatory conditions. One of the primary mechanisms by which the KD exerts its immunomodulatory effects is the inhibition of inflammatory pathways. Ketone bodies, particularly beta-hydroxybutyrate (BHB), have been shown to inhibit the activation of the NLRP3 inflammasome (103), a key mediator of inflammation. By dampening NLRP3 activation, BHB reduces the production of pro-inflammatory cytokines such as interleukin-1b (IL-1b) and interleukin-18 (IL-18) (102), thereby attenuating the inflammatory response. The KD has been shown to modulate the function of various immune cells, including macrophages, T cells, and dendritic cells. Ketone bodies can influence the polarization of macrophages towards an anti-inflammatory M2 phenotype, which is associated with tissue repair and resolution of inflammation. Additionally, the KD may promote the expansion of regulatory T cells (Tregs), which play a critical role in maintaining immune tolerance and suppressing excessive inflammation. Oxidative stress is closely linked to inflammation, and the KD has been shown to reduce oxidative stress through multiple mechanisms. Ketone bodies possess antioxidant properties and can scavenge reactive oxygen species (ROS) (86), thereby mitigating oxidative damage to cells and tissues. By reducing oxidative stress, the KD may help attenuate inflammation and prevent tissue injury. Furthermore, emerging evidence suggests that the KD can modulate the composition and function of the gut microbiota, which plays a crucial role in regulating immune function. By promoting the growth of beneficial bacteria and suppressing the growth of pathogenic microbes, the KD may exert indirect immunomodulatory effects through its impact on the gut microbiota. Metabolic dysregulation is intricately linked to inflammation, and the metabolic changes induced by the KD may contribute to its immunomodulatory effects. By promoting ketogenesis and shifting cellular metabolism away from glycolysis, the KD alters cellular signaling pathways involved in inflammation and immune activation. In addition to its systemic immunomodulatory effects, the KD has been shown to exert specific effects on neuroinflammation within the central nervous system (CNS). Ketone bodies can penetrate the blood-brain barrier and modulate microglial activation, thereby reducing neuroinflammation and providing neuroprotection in various neurological disorders. Overall, the immunomodulatory effects of the ketogenic diet are multifaceted and involve complex interactions between metabolic, inflammatory, and immune pathways. In conclusion, the ketogenic diet holds promise as a potential therapeutic strategy for mitigating neuroinflammation and immune activation in neurodegenerative diseases. By targeting microglial and astrocyte function, as well as modulating inflammatory signaling pathways, the ketogenic diet may offer novel avenues for the treatment of these devastating disorders. However, further research is needed to fully understand the complex interplay between the ketogenic diet and neuroinflammation, paving the way for the development of effective therapeutic interventions (111).
FIGURE 1 Number of publications on "ketogenic diet AND neurodegenerative disease" from 2010 to 2024 (in progress).
TABLE 2 Role of ketone bodies involved in neuroinflammation and neurodegeneration.
7,632.8
2024-08-12T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Disbalancing Envelope Stress Responses as a Strategy for Sensitization of Escherichia coli to Antimicrobial Agents
Disbalancing envelope stress responses was investigated as a strategy for sensitization of Escherichia coli to antimicrobial agents. Seventeen isogenic strains were selected from the KEIO collection with deletions in genes corresponding to the σE, Cpx, Rcs, Bae, and Psp responses. Antimicrobial activity against 20 drugs with different targets was evaluated by disk diffusion and gradient strip tests. Growth curves and time-kill curves were also determined for selected mutant-antimicrobial combinations. An increase in susceptibility to ampicillin, ceftazidime, cefepime, aztreonam, ertapenem, and fosfomycin was detected. Growth curves for Psp response mutants showed a decrease in optical density (OD) using sub-MIC concentrations of ceftazidime and aztreonam (ΔpspA and ΔpspB mutants), cefepime (ΔpspB and ΔpspC mutants) and ertapenem (ΔpspB mutant). Time-kill curves were also performed using 1xMIC concentrations of these antimicrobials. For ceftazidime, 2.9 log10 (ΔpspA mutant) and 0.9 log10 (ΔpspB mutant) decreases were observed at 24 and 8 h, respectively. For aztreonam, a decrease of 3.1 log10 (ΔpspA mutant) and 4 log10 (ΔpspB mutant) was shown after 4–6 h. For cefepime, 4.2 log10 (ΔpspB mutant) and 2.6 log10 (ΔpspC mutant) decreases were observed at 8 and 4 h, respectively. For ertapenem, a decrease of up to 6 log10 (ΔpspB mutant) was observed at 24 h. A deficient Psp envelope stress response increased E. coli susceptibility to beta-lactam agents such as cefepime, ceftazidime, aztreonam and ertapenem. Its role in repairing extensive inner membrane disruptions makes this pathway essential to bacterial survival, so that disbalancing the Psp response could be an appropriate target for sensitization strategies.
INTRODUCTION
Since antimicrobial resistance is increasing worldwide, new targets (Dickey et al., 2017;Recacha et al., 2017;Cattoir and Felden, 2019) need to be sought, either to find new antimicrobial families or to increase the susceptibility of bacterial populations (Laxminarayan et al., 2013). Envelope stress responses are important pathways for bacterial survival in the presence of stressors, including antimicrobials (Guest and Raivio, 2016;Hersch et al., 2020), and their alteration could be proposed as a strategy for weakening bacteria. The Gram-negative envelope is composed of the inner membrane (IM), the periplasm, containing a thin peptidoglycan (PG) layer, and the outer membrane (OM). This envelope provides Gram-negative bacteria with protection against external environmental agents, including antibiotics (Silhavy et al., 2010). The σE, Cpx, Rcs, Bae and Psp systems are the main envelope stress response pathways in Gram-negative bacteria for restoring homeostasis to cells with induced envelope damage and are activated in different ways (Guest and Raivio, 2016;Mitchell and Silhavy, 2019). The σE response detects perturbations in outer membrane (OM) or lipopolysaccharide (LPS) biogenesis through interactions between either the exposed C-terminus of misfolded outer membrane proteins (OMPs) and the DegS periplasmic protease, or between the anti-anti-σ factor RseB and periplasmic LPS molecules, respectively.
These both initiate a regulated intramembrane proteolysis cascade ultimately leading to the liberation of σE from a membrane-bound anti-sigma factor and the upregulation of adaptive factors, including chaperones, proteases, membrane biogenesis proteins, and a set of small RNAs that downregulate OMP production (Ades, 2004;Ruiz and Silhavy, 2005;Valentin-Hansen et al., 2007;Lima et al., 2013;Flores-Kim and Darwin, 2014;Kim, 2015). The Cpx response is regulated by the CpxA sensor kinase and response regulator CpxR. Envelope stresses causing protein misfolding, and adhesion, inactivate the inhibitor CpxP, trigger CpxA-mediated phosphorylation of CpxR, and altered expression of protein foldases and proteases, respiratory complexes, transporters, and cell wall biogenesis enzymes that impact resistance to a number of antibiotics, particularly aminoglycosides (Raivio, 2014). The Rcs response is regulated by a two-component phosphorelay consisting of two inner membrane (IM)-associated sensor kinase molecules, RcsC and RcsD, together with a cytoplasmic response regulator, RcsB. Multiple environmental parameters and conditions leading to a weakened envelope activate RcsC and/or RcsD, which together catalyze the phosphorylation of RcsB, leading to changes in the expression of genes associated with capsule production, motility, virulence, biofilm formation, and other envelope proteins (Majdalani and Gottesman, 2005;Huang et al., 2006). The Rcs pathway has been linked to resistance to a number of microbially and host produced antimicrobials including beta-lactam antibiotics, cationic antimicrobial peptides and bile (Hirakawa et al., 2003;Erickson and Detweiler, 2006;Laubacher and Ades, 2008;Farris et al., 2010;Farizano et al., 2014). The Psp response is activated by changes linked to the aberrant localization of OM secretin complexes and other conditions that disrupt the IM, including the dissipation of the proton motive force. These signals are transduced through changes in interactions between a set of Psp proteins that ultimately lead to the liberation of the PspF transcription factor from the inhibitor PspA and the upregulated production of a limited set of adaptive factors capable of fostering endurance and survival (Flores-Kim and Darwin, 2014, 2016). Finally, the Bae response is controlled by the two-component system made up of the sensor kinase BaeS and its cognate partner BaeR. This pathway is activated by antimicrobial compounds made by plants, animals, and microbes, as well as metals, and can stimulate resistance to broad classes of these substances, primarily, it appears, through the regulation of the multidrug RND efflux pumps AcrD and MdtABC, together with the common OM component TolC (Baranova and Nikaido, 2002;Raffa and Raivio, 2002;Cordeiro et al., 2014;Lin et al., 2014). The aim of this study was to investigate the effect of alteration of the envelope stress response pathways of the σE, Cpx, Rcs, Bae, and Psp systems on sensitization to antimicrobial agents targeting the bacterial cell wall, protein, RNA, DNA or folic acid synthesis.
Antimicrobial Susceptibility Testing
Antimicrobial susceptibility was determined by disk diffusion (Oxoid®, United Kingdom) and gradient strip tests (Liofilchem®, Italy), using CLSI reference methods (Clinical and Laboratory Standards Institute, 2016). Any mutant-antimicrobial combination with a halo size that differed by more than 3 mm by disk diffusion from the wild-type (E. coli BW25113) was selected for the gradient strip test.
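As a purely hypothetical illustration of the screening rule just described (the halo values and mutant-drug pairs below are invented for the example; the study's actual measurements are in Supplementary Table 3), the selection step can be expressed as a simple filter:

```python
# Hypothetical illustration of the screening rule: keep for gradient-strip testing
# any mutant-antimicrobial combination whose disk-diffusion halo differs from the
# wild type (BW25113) by more than 3 mm. All numbers here are invented.
wild_type_halo = {"ceftazidime": 25, "aztreonam": 27, "gentamicin": 20}   # mm
mutant_halos = {
    ("pspA", "ceftazidime"): 29,
    ("pspA", "gentamicin"): 21,
    ("pspB", "aztreonam"): 31,
}

selected = [
    (mutant, drug)
    for (mutant, drug), halo in mutant_halos.items()
    if abs(halo - wild_type_halo[drug]) > 3
]
print(selected)   # combinations carried forward to gradient strip tests
```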
Growth Curve Assays
Growth curves were performed for mutant-antimicrobial combinations with a decrease of MIC determined by gradient strip tests. Psp mutants (except ΔpspF) were tested against the beta-lactam agents listed in Table 1 despite not showing decreases in MIC value. After overnight culture in Mueller-Hinton broth (MHB) at 37 °C, bacterial suspensions were diluted to achieve an OD625nm of 0.1 (ca. 10^8 CFU/mL), then diluted 10^-4-fold in MHB medium containing sublethal concentrations (0.5xMIC and 0.25xMIC).
Time-Kill Curve Assays
To show the effect of alteration of the stress response pathways on bacterial viability, time-kill curve assays were performed with the ΔpspA, ΔpspB and ΔpspC mutants. MHB with 1xMIC concentrations of ceftazidime (CAZ), cefepime (FEP), ertapenem (ETP), ampicillin (AMP), and aztreonam (ATM) was used. Antimicrobial concentrations were relative to the MICs for strains harboring unmodified stress responses (wild-type). Growth in drug-free broth was evaluated in parallel as a control. Cultures were incubated at 37 °C with shaking at 250 rpm. An initial inoculum of 10^5 CFU/mL was used in all experiments; bacterial concentrations were determined at 0, 2, 4, 6, 8, and 24 h by colony counting.
Statistical Analysis
All statistical analyses were performed using GraphPad Prism 6 software. The Student's t-test was used for statistical evaluation when two groups were compared. Differences were considered significant when P < 0.05.
Sensitization of E. coli to Antimicrobial Agents Determined by Disk Diffusion and Gradient Strip Test
Twenty antimicrobials were tested by disk diffusion (Supplementary Table 3) in the initial screening (340 mutant-drug combinations were tested). The Psp response was the most sensitized stress pathway, with 22.5% of drug-gene deletion combinations affected, followed in descending order by the Rcs (18%), Bae (17.5%), Cpx (13.7%), and σE (2.5%) responses. Each mutant-antimicrobial combination that showed sensitization by gradient strips was tested with growth curves to analyze bacterial growth after short and long incubation periods in the presence of the antimicrobials cited above, to confirm the previous results. The generalized sensitization of Psp mutants to beta-lactam agents (Table 1) led to growth curve assays even though no changes in MIC values were observed. The ΔpspB mutant showed clear sensitization, with differences for aztreonam, ceftazidime, cefepime and ertapenem relative to wild-type. After 24 h, no growth was observed in the presence of aztreonam at 0.5xMIC (optical density, OD value 0.32) (p < 0.01, compared to wild-type, OD value 0.08) (Figures 1D-F); a decrease in OD was observed in the presence of ceftazidime at 0.25xMIC (OD value 0.33) (p < 0.0001, compared to wild-type BW25113, OD value 0.5) and at 0.5xMIC (OD value 0.16) (p < 0.05, relative to wild-type, OD value 0.3) (Figures 1M-O), and also in cefepime at 0.5xMIC (OD value 0.11) (p < 0.05, relative to wild-type, OD value 0.21) (Figures 1D-F). After 8 h of incubation, a decrease in OD was also observed at 0.25xMIC of cefepime (OD value 0.13) (p < 0.0001, compared to wild-type, OD value 0.25) (Figures 1G-I) and at 0.5xMIC of ertapenem (OD value 0.14) (p < 0.0001, relative to wild-type, OD value 0.29) (Figures 1P-R).
The ΔpspA and ΔpspC mutants showed no growth at 0.5xMIC of aztreonam (OD value 0.10) (p < 0.01, compared to wild-type, OD value 0.27) (Figures 1A-C) until 12 h, and no growth was observed at 0.5xMIC of cefepime (OD value 0.08) (p < 0.0001, relative to wild-type, OD value 0.30) (Figures 1J-L) at 24 h, respectively. No significant decrease in growth was observed for other mutant-antimicrobial combinations, and a paradoxical effect was observed with aztreonam (Supplementary Figures 2-4).
The Impact of Psp Response Alteration on Bactericidal Activity of Beta-Lactam Antimicrobials
The Psp mutants (ΔpspA, ΔpspB, and ΔpspC) were selected due to their significant sensitizing effect to antimicrobials compared to the wild-type and were therefore used for time-kill assays to study cell viability in the presence of 1xMIC concentrations of the selected beta-lactam agents. A bactericidal effect was observed for the ΔpspA and ΔpspC mutants in the presence of AMP, with drops of up to 1 log10 (p < 0.01) at 8 h (Figures 2A,B). Of note, a bacteriostatic effect was observed with the rest of the antimicrobials evaluated. At 1xMIC of CAZ and ATM for the ΔpspA, ΔpspB, and ΔpspC mutants, reductions of 2.2 log10 (p < 0.0001), 0.9 log10 (p = 0.117, ns) and 2.5 log10 (p < 0.0001), respectively, were observed at 6-8 h for the first agent (Figures 2C-E), maintaining the growth delay at 24 h, and drops of 3.1 log10 (p < 0.01), 4 log10 (p < 0.01) and 3.9 log10 (p < 0.01), respectively, were observed at 4-6 h for the second drug (Figures 2J-L). At 1xMIC of ETP, reductions of 6 log10 (p < 0.01) for the ΔpspB mutant and 1.7 log10 for the ΔpspC mutant were observed relative to wild-type (BW25113), using a low initial inoculum. No differences in cell viability loss were observed for the ΔpspB mutant in the presence of ampicillin or for the ΔpspA mutant in the presence of ertapenem and cefepime (Supplementary Figures 5A-C).
DISCUSSION
Apart from the search for new drugs, new strategies are also necessary to prevent the emergence of resistance and extend the life of antimicrobial agents. Envelope stress responses are a set of coordinated physiological mechanisms that sense envelope damage or defects and trigger transcriptome alterations to mitigate this stress. In general terms, these pathways are focused on outer membrane stress (σE response), inner membrane stress (Cpx and Psp responses), damage through exposure to toxic molecules (Bae response) and alterations in outer membrane permeability, changes in peptidoglycan biosynthesis and defects in lipoprotein trafficking (Rcs response) (Mitchell and Silhavy, 2019). The IM disruptions that activate Psp tend to be more severe than those required to activate Cpx, the former being extensive disruptions that result in the loss of proton motive force (van der Laan et al., 2003;Maxson and Darwin, 2004;Becker et al., 2005), which could explain the greater effect on sensitization in strains deficient in this response. Previous studies have evaluated the effect of mutations on the envelope stress response through deletion of certain genes (Nicoloff et al., 2017) or overactivation of responses (McEwen and Silverman, 1980;Cosma et al., 1995;Danese et al., 1995), with sensitization to antimicrobials infrequently used in clinics (rifampicin or bacitracin) (Nicoloff et al., 2017) when RseA was deleted.
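For readers who want to reproduce the arithmetic behind log10 reductions and p-values like those reported above, the following is a hedged, illustrative sketch only: the CFU counts are invented, and the authors' own analysis used GraphPad Prism rather than this code.

```python
# Hypothetical illustration (not the authors' script): computing a log10 reduction
# from time-kill CFU counts and comparing two groups with Student's t-test.
import numpy as np
from scipy import stats

# Example CFU/mL counts at a given time point (replicates); values are made up.
wild_type = np.array([2.1e7, 1.8e7, 2.4e7])
psp_mutant = np.array([1.5e3, 2.2e3, 1.1e3])

def log10_reduction(treated, reference):
    """Mean log10 drop of the treated counts relative to the reference counts."""
    return np.log10(reference.mean()) - np.log10(treated.mean())

print(f"log10 reduction: {log10_reduction(psp_mutant, wild_type):.1f}")

# Student's t-test on log-transformed counts, mirroring the paper's p < 0.05 criterion.
t_stat, p_value = stats.ttest_ind(np.log10(psp_mutant), np.log10(wild_type))
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```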
In the present study, we evaluated a putative strategy consisting of the disbalancing of envelope stress responses in the presence of antimicrobials from different families, including cell wall-disturbing agents (penicillins, cephalosporins, carbapenems, aztreonam, colistin, and fosfomycin), protein synthesis inhibitors (aminoglycosides, tetracyclines, and chloramphenicol), RNA synthesis inhibitors (rifampin), DNA synthesis inhibitors (fluoroquinolones) and folic acid synthesis inhibitors (sulfonamides and trimethoprim). It is important to highlight the clinical relevance of our set of selected antimicrobials for the treatment of infections caused by Gram-negative bacteria and the wide spectrum of targets covered, listed above. Gene deletions that affected sensitization to antimicrobials involved the Rcs response (ΔrcsD: aztreonam and fosfomycin), the Bae response (ΔbaeR: aztreonam) and the Psp response (ΔpspA: ceftazidime, cefepime and ertapenem; ΔpspB: ceftazidime and ertapenem; ΔpspC: ampicillin, ceftazidime, aztreonam). Beta-lactams constituted 83% of the antimicrobials to which strains were sensitized by gradient strip test, and the cell wall was the target in 100% of them. Alteration of the Psp response was the envelope stress pathway with the greatest effect on sensitization in the presence of antimicrobials, as demonstrated by growth curves and time-kill curve assays. Various components are involved in the Psp response, notably PspB (an inner membrane protein). Under activating conditions, PspB and PspC interact with PspA (the PspF inhibitor), which releases PspF (the response regulator) (Yamaguchi et al., 2013), which interacts with RNA polymerase to increase psp gene transcription (Jovanovic et al., 1996;Lloyd et al., 2004). Specifically, deletion of the pspA, pspB and pspC genes had the greatest impact on cell viability and bacterial growth in the presence of beta-lactam antimicrobials, mainly ampicillin, aztreonam, cefepime, ceftazidime and ertapenem, enhancing the bactericidal effect of this family of agents. Another important aspect is that the target of these antimicrobials in the cell wall of E. coli is primarily PBP3 (ampicillin, cefepime, ceftazidime, aztreonam) and PBP2 (ertapenem), which are involved in cell division and whose inhibition leads to filamentation and the formation of spherical cells, respectively (Hayes and Orr, 1983;Bush and Bradford, 2016;Rodvold et al., 2018). Nevertheless, the underlying interaction between these agents and the Psp response proteins is unknown. We could hypothesize that the double damage to the bacterial envelope (inner membrane damage due to psp gene deletion and the effect of beta-lactam antimicrobials acting on the PBP proteins) triggers a sensitization effect, reducing bacterial growth and increasing antimicrobial lethality. In general terms, the effect on antimicrobial sensitization in the tested mutants was moderate but consistent; however, it could be interesting to test other essential genes (rpoE, degS, rseP, cpxQ, igaA) involved in envelope stress responses, although these were not available in the KEIO collection. The emergence of innovative therapeutic strategies, in combination with more conventional approaches, is advancing our understanding of interactions between microbiota, host and pathogenic bacteria. Questions that remain to be answered include how disruption of the envelope stress response could impact not only harmful bacteria, but also healthy ones, causing microbiota impairment and associated disorders, such as C.
difficile infection (Bäumler and Sperandio, 2016). In conclusion, a defective Psp envelope stress response increases E. coli susceptibility to beta-lactam antimicrobials, and this is particularly remarkable with aztreonam, cefepime, ceftazidime and ertapenem. The role of this system in repairing extensive disruptions to the inner membrane makes this pathway essential to bacterial survival. Its use as a potential target for bacterial sensitization deserves in-depth evaluation.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
VF, SD-D, and AG-D performed the lab assays. ER interpreted the data and wrote the manuscript with input from all coauthors. JR-M and ER designed and guided the execution of all experiments. FD-P, ÁP, and JR-M supervised the project and contributed to the interpretation of the results and valuable discussion. All authors contributed to the article and approved the submitted version.
3,707.2
2021-04-07T00:00:00.000
[ "Biology", "Medicine" ]
Numerical Study on Time Dependent Maxwell Nanofluid Slip Flow over Porous Stretching Surface with Chemical Reaction The recent study deals with the numerical analysis of an unsteady Maxwell nanofluid flow passing through a porous stretch surface with slip boundary condition with chemical reaction effect using Buongiorno's mathematical model. The water-based fluid and the gold nanoparticles (Au) are preferred for this study. The similarity transformations are applied to renovate the governing model equations into a set of ordinary non-linear differential equations. The solutions of the coupled non-linear dimensionless equations are numerically decoded using the Nachtsheim-Swigert shooting method together with the Runge-Kutta iterative technique for various values of the flow control parameters. In addition, the built-in function bvp4c of MATLAB is used to enhance the consistency of numerical results. The numerical results are graphically demonstrated and narrated from the physical point of view for the non-dimensional velocity, temperature and concentration profiles, as well as the local coefficient of skin friction, Nusselt number and Sherwood number for different parameters of materials, such as the volume fraction parameter, Deborah number, unsteadiness, slip, stretching, suction and chemical reaction parameters. It is witnessed that the heat transfer rate is widely controlled by the Deborah number for Au-water nanofluid. The outcomes of this analysis clearly indicate the considerable influence of the suction imposed in the model. INTRODUCTION The low thermal conductivity of traditional heat transfer fluids like water, engine oil and ethylene glycol is a grave limitation in developing the performance, efficiency and solidity of modern engineering equipments. Nanofluids engineered by stably suspending and uniformly dispersing a small amount of nano-sized (between 1 and 100 nm in diameter) ultrafine metallic, nonmetallic or ceramic particles in ordinary heat transfer fluids are the newly invented heat transport fluids containing thermal conductivity to a great extent at low concentration than traditional fluids. This idea of colloidal suspension in regular fluid, as a concept of nanofluid, was first developed by Choi and Eastman (1995). Recent researchers have identified that the replacement of conventional coolants with nanofluids may be advantageous for improving the competence of heat transfer in the nuclear space and engineering, chillers, domestic refrigerators/freezers; and cooling of engine and microelectronics (Sridhara and Satapathy, 2011). As well nanofluids are used as the antibacterial agent in textile industry, water disinfection, medicine and food packaging (Hajipour et al., 2012;Uddin et al., 2016). Moreover, the electromagnetic nanoparticles used to control and operate the nanofluids through magnetic force are playing an important role in biomedical purposes compared to other metal particles, Chiang et al. (2007). Due to the mounting demand for encompassing extraordinary characteristics of providing unique physical and chemical properties nanofluids are receiving significant interest of many scientists and researchers, Emerich and Thanos (2006), Ma et al. (2006), Buongiorno et al. (2008) and Kuznetsov and Nield (2010). As a part of these researches, Buongiorno (2006) composed a precise model to investigate the heat transfer by convection in nanofluids taking into account the effects of Brownian motion and thermophoresis diffusion. 
The research on the boundary layer of non-Newtonian fluid flow is the primary focus for the last few decades because of the multiplicity of its practical purposes in technology and industrial processes, for instance in geophysics, petroleum, biomedical and chemical industries, Mushtaq et al. (2016). The characteristics of the non-Newtonian fluid flow are quite unusual compared to Newtonian fluids. The momentum equations of these fluids are particularly nonlinear and the Navier-Stokes equations undoubtedly are not adequate to illustrate the flow of non-Newtonian fluids, Mustafa and Mushtaq (2015). Almost all rheological complex fluids, for example, industrial and biological fluids such as polymer solutions, paints, pulps, fossil fuels, molten plastics, liquid foods, jams and blood demonstrate the non-linear alliance between deformation and stress rates (Bandelli, 1995;Fetecau and Fetecau, 2003). The non-Newtonian fluid models are developed by the researchers involving the constitutive equations tending to be highly nonlinear and are able to predict the rheological characteristics, Tan and Masuoka (2005). These models are mostly categorized into three classes: rate-, differential-and integral-type fluids. The Maxwell fluid model enclosed with viscoelastic material is the simplest subclass of rate-type non-Newtonian fluids. This model is capable to elucidate the characteristics of the relaxation time effect of fluids where the differential-type fluids fail to predict, Gallegos and Mart´ınez-Boza (2010). A revolutionary work done by Harris (1977) portraying 2D flow of Maxwell fluids is promoting the researchers to investigate more possibilities. Following this route, Fetecau et al. (2010) explored the viscoelastic fluid flow by way of fractional Maxwell model subjected to time dependent shear stress. The Maxwell fluid model is reduced into the simple Navier-Stokes relation in the absence of the effect of relaxation time (Sochi, 2010). Very recent, a number of researchers investigated a choice of parametric effects like Joule heating; heat source and thermal radiation on Maxwell fluid flow passing above a stretching sheet (Abbas et al., 2006;Aliakbar et al., 2009;Abel et al., 2012;Nadeem et al., 2014). The natural porous media are wood, beach sand, limestone, bile duct, sandstone, small blood vessels, gall bladder with stones and the human lung, Nield and Bejan (2006). Hayat and Qasim (2010) explained a pathological situation, for example, the fatty cholesterol distribution in the lumen of coronary artery is equivalent to the fluid flow in a porous surface. Metallic nanoparticles are promising vectors of drug administration to the brain. The common metals used for the administration of drugs in nanoparticles are gold, silver and platinum, due to their biocompatibility. These metallic nanoparticles are used because of their large surface area at volume ratio, chemical and geometric tuning and endogenous antimicrobial properties, Mandal (2017). The silver cations released by the silver nanoparticles can bind to the cell membrane of negatively charged bacteria and increase the permeability of the membrane, thus allowing foreign chemicals to enter the intracellular fluid. The metal nanoparticles are synthesized chemically using reduction reactions, Kumar et al. (2018). For example, silver nanoparticles conjugated to the drug are created by reducing silver nitrate with sodium borohydride with an ionic drug compound. 
Niu and Li (2014) expressed the fact that the active drug ingredient binds to the surface of silver, stabilizes the nanoparticles and prevents the aggregation of the nanoparticles. With an objective to describe the flow of blood as a rheological fluid containing nanoparticle drugs that react destructively through blood vessels in the presence of an electromagnetic field, the analysis of unsteady two-dimensional laminar flow of chemically reacted Gold (Au)-water nanofluid passing through the porous stretching surface with the slip boundary condition is executed in this article by applying the Maxwell rheological fluid model. MATHEMATICAL MODEL This study was conducted 4 Months before, PhD research work, Dept. of Applied Mathematics, University of Dhaka, Dhaka-1000, Bangladesh. In the present model, a two-dimensional time dependent flow of an electrically conducting non-Newtonian Maxwell nanofluid with magnetic field and chemical reaction effects passing a semi-infinite stretching porous surface with slip boundary condition is considered. If elastic stress is applied to a non-Newtonian fluid, the resulting strain will be time dependent characterized by the relaxation time. The constitutive equation for a Maxwell fluid (Fetecau and Fetecau, 2003) is: (1) where the Cauchy stress tensor is denoted by . And the extra stress tensor S satisfies: (2) In which is the viscosity, is the relaxation time and the Rivlin-Ericksen tensor is defined through: (3) The relaxation time for Maxwell fluid is considered by , where is the initial value at . Moreover, the nanofluid as a mixture of the water and Au nanoparticles is discussed here. The surface is saturated with nanoparticles, that is, the nanoparticles mass flux is taken to be zero at the surface. To explain the physical configuration, the Cartesian coordinate system is introduced such a way that axis is measured along the plate surface and axis is perpendicular to it, shown in Fig. 1. The plate surface is assumed to be detained for due to a time dependant magnetic field of strength applied normal to the surface. Also the electromagnetic strength is used as and is the time dependent chemical reaction. It is important to note here that, the expressions for are valid only for time unless . Taking the above observations into account and using the Buongiorno's nanofluid model incorporating the combined effect of thermophoresis and Brownian diffusions (Kuznetsov and Nield, 2010;Buongiorno, 2006), the equations for mass, momentum, thermal energy and nanoparticles concentration for a Maxwell fluid are: Considering the velocity slip proportional to the local shear stress, the governing equations are associated with the boundary conditions: is the slip velocity, is the surface temperature and is the concentration assumed to vary both along the surface and with time. Relations among the physical properties of nanofluids (Uddin et al., 2016;Maxwell, 1873) are: Here the nanoparticles volume fraction is represented by . Also and are the density, thermal conductivity, thermal diffusivity, heat capacitance and electrical conductivity respectively. And the suffices f, s and nf represent the base fluid, solid nanoparticles and nanofluid respectively. Here is the nanoparticle shape factor for thermal conductivity and for spherical shape of nanoparticles is defined by Maxwell (1873). The thermo-physical properties of base fluid water and different nanoparticles are given in Table 1, Mustafa and Mushtaq (2015). 
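The display equations (1)-(3) referred to above did not survive text extraction. For orientation only, the standard rate-type (upper-convected) Maxwell relations that such equations conventionally take are sketched below in LaTeX; the symbol choices (T for the Cauchy stress tensor, p the pressure, I the identity tensor, S the extra stress, mu the viscosity, lambda_1 the relaxation time, A_1 the first Rivlin-Ericksen tensor, L the velocity gradient) are assumptions and may differ from the authors' exact notation.

```latex
% Assumed standard form of the Maxwell constitutive relations (Eqs. (1)-(3)):
\mathbf{T} = -p\,\mathbf{I} + \mathbf{S},
\qquad
\mathbf{S} + \lambda_1\!\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}t}
      - \mathbf{L}\,\mathbf{S} - \mathbf{S}\,\mathbf{L}^{\mathsf{T}}\right) = \mu\,\mathbf{A}_1,
\qquad
\mathbf{A}_1 = \mathbf{L} + \mathbf{L}^{\mathsf{T}}, \quad \mathbf{L} = \nabla \mathbf{V}.
```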
To describe transport mechanisms in nanofluids, it is significant to make the equations dimensionless using similarity transformation. The main outcomes of making the equations dimensionless are: • To understand the controlling flow parameters of the system • To get rid of dimensional limitations (Uddin et al., 2016). Form the above point of view, the governing equations can be reduced to dimensionless forms, using the following similarity transformations (Madhu et al., 2017): Using the above transformations Eq. (4) is satisfied and the Eq. (5)-(7) are reduced to: where at the time-dependent reaction rate = A destructive reaction = That there is no reaction = A constructive/generative reaction described by Prasannakumara et al. (2017) The physical attentions in the existing study are the skin friction coefficient Cf, the local Nusselt number Nu and the Sherwood number (Sh) defined as: NUMERICAL METHODS Equations (9)-(11) combined with the boundary conditions (12) are solved numerically using the Runge-Kutta method with the Nachtsheim-Swigert shooting technique (Nachtsheim and Swigert, 1965) for various values of the parameters nanoparticle volume fraction parameter , Deborah number , magnetic field parameter , unsteadiness parameter , slip parameter , stretching parameter , suction parameter , electromagnetic field parameter , Prandtl number (Pr), Eckert number (Ec), thermophoresis parameter (Nt), Brownian motion parameter (Nb), Lewis number (Le) and chemical reaction parameter (R). The step size is taken as and the tolerance criteria are set to 10 −6 . In Figure 2 is giving significant agreement to validate the numerical results attained by both shooting method and bvp4c function. But still for more contentment, it is necessary to certify our codes by comparing with some published articles of similar nature. For this purpose, we have analyzed the numerical values of local skin friction coefficient for the models investigated by several researchers. A remarkable agreement of these results with other models can be seen in Table 2. RESULTS AND DISCUSSION First, the numerical calculations of the velocity, temperature, concentration and heat transfer profiles are executed for water-based nanofluids using 5% volume fraction containing different solid nanoparticles (Ag, Cu, Al2O3, TiO2 and Au) one by one, in Fig. 3 to 8. It is observed in Fig. 3 that Au-water has lower velocity compare to other nanofluids (Ag, Cu, Al2O3 and TiO2water) and it has higher temperature as well as concentration profile ( Fig. 4 and 5). At the same phase, the heat transfer rate of Au-water nanofluid is higher shown in Fig. 7. And friction (Fig. 6) and mass transfer (Fig. 8) rate are lower. As a higher heat mass transfer coefficients are exposed in Fig. 9 to 14, respectively, both in the presence and absence of magnetic field effect. Figure 9 exerts the decreasing velocity along the surface due to the raise of unsteadiness parameter A for both cases, M = 0 and 3. This indicates an associated reduction of the momentum boundary layer thickness near the surface and away from the surface. Simultaneously, it is also revealed that the magnetic field parameter M produces a retarded force that slower the velocity of the flow. Furthermore, the steady case of the model is characterized by taking . It is observed from Fig. 10 that the temperature decreases significantly with the increasing unsteadiness parameter and the magnetic field parameter effect is very negligible. 
This means that the less heat is transferred from the fluid to the surface and hence, the temperature decreases. It provides an important message that the cooling rate is faster for higher values of the unsteady parameter, whereas cooling may take longer during steady flows for both cases, M = 0 and M = 0.3. Figure 11 represents the declining effect of unsteadiness parameter A on the solute distribution, both in the presence and absence of magnetic field effect. The mass transfer from the nanofluid to the surface is decelerated due to the increasing unsteadiness parameter which results lower concentrated fluid. The results are significantly similar to Madhu et al. (2017). When the elastic stress is applied to the non-Newtonian fluid, the time during which the fluid achieves its stability is the relaxation time, which is greater for highly viscous fluids. The Deborah number β deals with the fluid relaxation time to its characteristic time scale. Here β = 0 gives the result for Newtonian viscous incompressible fluid. The fluid with a small Deborah number exhibits liquid-like activities but large Deborah number communicates with solidlike materials able to conduct and retain heat better. Therefore, it is observed physically that gradually increasing the Deborah number can increase the fluid viscosity, which enhances resistance to flow and, as a result, the hydrodynamic boundary layer thickness reduces for Maxwell fluid, as shown in Fig. 12 for both steady and unsteady motion. These results are showing good agreement with Sadeghy et al. (2005). Figure 13 shows very negligible effect of the Deborah number on the thermal boundary layer for both cases . Thus, the heat transfer rate at the stretching surface increases with increasing β. From this analysis, it can be concluded that the elastic force endorses the heat transfer in Maxwell nanofluid. The higher values of β on concentration are more pronounced for steady motion, in Fig. 14. Thus the mass transfer rate at the stretching surface decreases with increasing β. Figure 15 to 17 depict the suction effect on momentum, energy and concentration distributions, respectively. From these figures it is clear that the momentum as well as energy distributions are decreasing function of suction showing good agreement with Nayak (2016) for both slip and no slip boundary surface. Physically, the imposed starts to decrease away from the surface with the higher suction. Moreover, an increase in stretching parameter significantly enhances the flow velocity (magnitude) for both slip and no slip boundary surfaces, seen in Fig. 18. But the temperature and concentration profiles decrease with the increase of stretching parameter (Fig. 19 and 20). The effects of magnetic field on momentum for both steady and unsteady cases are demonstrated in Fig. 21. It is observed that an increase in the magnetic parameter M monotonically reduces the velocity profiles. This is the result of the effect of the magnetic field imposed on an electrically conductive fluid, which generates a drag force called Lorentz force against the flow direction along the surface to slow down velocity. In steady case, this situation is more significant. This is in accordance with the fact that the magnetic field is responsible for reducing the velocity of fluid flow. The current results in Fig. 
22, similar to the findings carried out by Narayana and Gangadhar (2015), is that the velocity decreases as the slip Figure 23 demonstrates the effects of chemical reaction on species diffusion for both steady and unsteady cases. It is exerted that higher destructive reaction rate parameter enhances the concentration. However, concentration has opposite effects for generative chemical reaction parameter . The effect of chemical reaction parameter is more outstanding in steady case. Physically, the larger values of destructive chemical reaction parameter deal with higher rate of destructive chemical reaction which deliberates the infected species more efficiently and hence the solutal distribution increases, Hayat et al. (2017). However, an opposite situation is observed for generative chemical reaction parameter. Figure 24 exposes the result in concentration in response to a variation in Brownian motion and thermophoresis parameters. It is found that the increasing values of the Brownian motion parameter cause the thickness of the concentration boundary layer to decrease (Sithole et al., 2017), that is, the concentration boundary layer thickness is larger with a lower Brownian diffusion. Therefore, the Brownian motion parameter facilitates diffusion of nanoparticles in the concentration boundary layer. Furthermore, the reverse environment is created by thermophoresis parameter . Finally, from the point of view of physical interest, the skin friction coefficient is useful to estimate the total frictional drag exerted on the surface. The Nusselt Number is used to characterize the heat flux from a heated solid surface to a fluid. The Sherwood number is used in mass-transfer operation. The effect of several variables on the skin fraction coefficient the local Nusselt number and the local Sherwood number are arranged in the Table 3. CONCLUSION As a final point, the results obtained from the effects of various physical parametric effects on the characteristics of flow, heat and mass transfer are investigated and discussed extensively. The major outcomes drawn from the study of the present model can be summarized as follows: • The heat transfer rate of Au-water is comparatively higher and the friction and mass transfer rates are lower. • The friction rate is going lower for the increasing unsteadiness parameter, while the cooling rate and the mass transfer rate are much faster for higher unsteadiness parameter, whereas cooling may take longer during steady flows. • The hydrodynamic boundary layer thickness is reduced for Maxwell fluid. • Imposed suction can rule the momentum, energy and concentration distributions significantly. • The magnetic field and slip velocity are liable for reducing the velocity of fluid flow. • The concentration distributions are very much affected by chemical reaction and Brownian motion parameters.
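As a rough, self-contained illustration of the boundary-value strategy described in the Numerical Methods section (shooting/Runge-Kutta cross-checked with MATLAB's bvp4c), the sketch below solves only the classical reduced stretching-sheet similarity problem f''' + f f'' - (f')^2 = 0 with SciPy's solve_bvp, an open-source counterpart of bvp4c. It is not the authors' code, and it omits the Maxwell (Deborah number), magnetic, unsteadiness, thermal and concentration terms of Eqs. (9)-(12); those equations would be appended to the same state vector in practice.

```python
# Illustrative sketch only: reduced stretching-sheet similarity problem
# f''' + f f'' - (f')^2 = 0,  f(0)=0, f'(0)=1, f'(inf)=0.
import numpy as np
from scipy.integrate import solve_bvp

eta_inf = 10.0  # truncated "infinity" of the similarity variable

def rhs(eta, y):
    # y[0] = f, y[1] = f', y[2] = f''
    return np.vstack((y[1], y[2], y[1]**2 - y[0]*y[2]))

def bc(y0, yinf):
    return np.array([y0[0], y0[1] - 1.0, yinf[1]])

eta = np.linspace(0.0, eta_inf, 200)
y_guess = np.zeros((3, eta.size))
y_guess[1] = np.exp(-eta)          # crude initial guess for f'
y_guess[0] = 1.0 - np.exp(-eta)    # and for f

sol = solve_bvp(rhs, bc, eta, y_guess, tol=1e-6)
print("skin-friction-like quantity f''(0) =", sol.sol(0.0)[2])
```

For this reduced problem the exact solution is f = 1 - exp(-eta), so f''(0) should come out close to -1, which gives a quick sanity check on the solver setup before the full coupled system is attempted.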
4,249
2020-02-15T00:00:00.000
[ "Engineering", "Materials Science", "Physics", "Chemistry" ]
Entanglement, partial set of measurements, and diagonality of the density matrix in the parton model
To study quantum properties of the hadron wavefunction at small x, we derived the reduced density matrix for soft gluons in the CGC framework. We explicitly showed that the reduced density matrix is not diagonal in the particle number basis. The off-diagonal components are usually ignored in the conventional parton model. We thus defined the density matrix of ignorance by keeping only the part of the reduced density matrix which can be probed in a limited set of experimental measurements. We calculated the von Neumann entropy for both the reduced and ignorance density matrices. The entropy of ignorance is always greater than the entanglement entropy (computed from the reduced density matrix) of gluons. Finally, we showed that the CGC reduced density matrix for soft gluons can be diagonalized in a thermal basis with Boltzmann weights, suggesting thermalization of new quasi-particle states which we dubbed entangolons.
Introduction
This work was motivated by the paradox raised in Ref. [1]. On one hand, the proton as a quantum object is in a pure state and is described by a completely coherent wavefunction with zero entropy. On the other hand, in high energy experiments, when probed by a small external probe, it behaves like an incoherent ensemble of (quasi-free) partons. Such an ensemble carries a nonvanishing entropy. Reference [1] suggested that the origin of this entropy is entanglement between the degrees of freedom one observes in DIS (partons in the small spatial region of the proton) and the rest of the proton wavefunction, which are not measured in the final state and therefore play the role of an "environment". Formally, this can be formulated in terms of the reduced density matrix ρ_r = Tr_env |P〉〈P|, where |P〉 is the proton wave function and the partial trace is taken over the unobserved degrees of freedom. The entropy of the parton model is then identified with the von Neumann entropy of the reduced density matrix, S_E = −Tr[ρ_r ln ρ_r]. Most of the quantities measured in DIS are related to the average number of particles or multi-parton momentum distributions 〈a†(k1)a(k1)...a†(kn)a(kn)〉. These quantities do not provide access to the off-diagonal elements of the density matrix in the number operator basis. This leads to the fundamental problem that one cannot reconstruct the full density matrix from the available experimental data. This lack of knowledge of the actual density matrix of the system can be characterized by an entropy. We will dub this entropy "the entropy of ignorance". Among the family of all density matrices that reproduce the same experimental observables, we defined the matrix that has zeros on all off-diagonal elements as the density matrix of ignorance (the operation of dropping the off-diagonal elements of the density matrix can be considered as an example of a quantum-to-classical channel). The associated entropy S_I has the property S_I ≥ −Tr[ρ_r ln ρ_r] (see e.g. Refs. [2,3]). The entropy of ignorance is exactly equal to the Boltzmann entropy of the classical ensemble of partons with the probability distribution defined by the diagonal matrix elements.
The CGC density matrices
The Color Glass Condensate (CGC) effective theory describes scattering at high energy (see Refs. [4][5][6][7][8] for reviews). According to the CGC, the valence ("hard") partons can be treated as static classical sources of the small-x "soft" gluons.
This partial separation of degrees of freedom can be formalized by |ψ〉 = |s〉 ⊗ |v〉, where |v〉 is the state vector characterizing the valence degrees of freedom and |s〉 is the soft fields in the presence of the valence source. Despite appearances, the state is not of a direct product form, since the soft vacuum depends on the valence degrees of freedom. In the leading perturbative order, the CGC soft gluon state is |s〉 = C|0〉, with the coherent operator C built from φi(k) ≡ a†i(k) + ai(−k), where the trace operation (tr) is over all colors. The background field b_i^a is determined by the valence color charge density ρ. For simplicity we restrict our consideration only to the leading order contribution in the charge density ρ, which captures only the longitudinal polarization of the soft gluons. We do not expect our results to change if higher order corrections are included. The valence wave function |v〉 is customarily modeled following the McLerran-Venugopalan (MV) model [9,10], where N is the normalization factor and the parameter µ² determines the average color charge density. Now we can formally define the hadron density matrix. Integrating out the valence degrees of freedom, we obtain the reduced density matrix ρ_r for the soft gluons. To extract the ignorance density matrix ρ_I in the gluon number basis, we need the basis vectors, where c is the color index, and we only consider longitudinal polarization. Since a mode with momentum k mixes only with the mode with momentum −k, due to the fact that ρ*_a(k) = ρ_a(−k), we introduce N_c = n_c + m_c. Furthermore, ρ_r is a direct product of density matrices in fixed transverse momentum sectors due to translational invariance; see [11] for details. The matrix elements of the density matrix for a given value of momentum q and given color c were computed explicitly. We thus explicitly demonstrated that the off-diagonal components are non-zero! This is one of the main results of this paper. If we ignore the off-diagonal elements, we obtain the ignorance density matrix in the gluon number representation.
Quantum Entropies
Using the reduced density matrix, the entropy of entanglement can be readily calculated. It reproduces the known result obtained in Ref. [12]. There is no analytic expression for the entropy of ignorance. Hence, we compare the results numerically in Fig. 1. According to the ratio S_I/S_E, the dominant contribution to their difference is at small q, where states with more than one particle cannot be ignored, while at large q only the vacuum state survives and the two density matrices are approximately the same, hence the ratio approaching 1.
Thermal quasi-particle basis
It is amusing that the entanglement entropy can be rewritten in a thermal form with f = (exp(βω) − 1)^(−1) and βω = 2 ln[ q/(2gµ) + √(1 + q²/(4g²µ²)) ]. This suggests that there exists a quasi-particle basis in which the reduced density matrix is diagonal and thermal (i.e. all eigenvalues of the density matrix are powers of the same number). We named these quasi-particles entangolons. At small momentum, βω ≈ q/(gµ); we can thus identify the effective temperature of entangolons with gµ. At large momentum, the quasi-particle distribution function coincides with the Weizsäcker-Williams distribution of gluons, f ≈ g²µ²/q².
Conclusions
Our explicit calculation confirms that, within the CGC framework, the reduced density matrix for soft gluons is not diagonal in the parton/particle number basis. Neglecting the off-diagonal elements results in the loss of information and thus in larger entropy.
We introduced a new form of entropy associated with this loss and called it the entropy of ignorance. It is greater than the entanglement entropy, which is supposedly related to the entropy of final-state particles. In addition, by analysing the entanglement entropy, we hypothesized that the reduced density matrix is directly diagonal in some quasi-particle basis, with the eigenvalues having the form of Boltzmann weights.
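A small numerical cross-check of the thermal rewriting quoted above (not part of the paper; g*mu is set to one as an arbitrary momentum unit) confirms the two limits stated in the text, beta*omega ~ q/(g*mu) at small q and the Weizsacker-Williams tail f ~ g^2 mu^2/q^2 at large q:

```python
# Illustrative check of f = 1/(exp(beta*omega) - 1) with
# beta*omega = 2*ln( q/(2 g mu) + sqrt(1 + (q/(2 g mu))**2) ) = 2*arcsinh(q/(2 g mu)).
import numpy as np

g_mu = 1.0  # g*mu chosen as the momentum unit (arbitrary)

def beta_omega(q):
    return 2.0 * np.arcsinh(q / (2.0 * g_mu))

def f_occupation(q):
    return 1.0 / np.expm1(beta_omega(q))

for q in (0.01, 0.1, 10.0, 100.0):
    print(f"q={q:7.2f}  beta*omega={beta_omega(q):8.4f}  f={f_occupation(q):10.4e}  "
          f"small-q ref q/(g*mu)={q/g_mu:8.4f}  large-q ref (g*mu/q)^2={(g_mu/q)**2:10.4e}")
```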
1,745.8
2021-07-22T00:00:00.000
[ "Physics" ]
Response of Two Rosa sp. to Light Quality in Vitro
In this study, dark and various light qualities (white, red, green, and blue) were applied to evaluate their effects on the growth characteristics, chemical content, and callus characteristics of Rosa damascena Mill. and Rosa hybrida L. Explants (single-node and shoot tips) were cultured on MS media supplemented with sucrose, agar, and plant growth regulators (Kin 0.5 mg/l and IBA 1 mg/l for the whole-plant formation experiment, or Kin 1 mg/l with IBA 0.5 mg/l for the callus experiment) and incubated in a growth chamber. The results of the whole-plant formation experiment showed variation in growth characteristics between the two types of Rosa: green and white light gave the highest rates of shoot growth compared with dark, while red and green light gave the highest rates of root growth compared with dark. Rosa growing in white or green light has a higher content of chlorophyll, as well as of carbohydrates and proteins, in shoot and root parts compared with dark. The results of the callus experiment showed variation in callus characteristics (shape, texture, color, and weight); the highest fresh weights (1.83 and 1.99 g) and dry weights (0.207 and 0.233 mg) of callus of Rosa damascena and Rosa hybrida, respectively, were obtained in dark conditions, compared with the lowest in blue light.
Introduction: Roses are among the most familiar and popular flowers in the world [1], distributed in Asia, Europe, North America, and the Middle East. For their flowers and fruits, these deciduous shrubs are often cultivated in gardens [2]. Subtropical regions of the Northern Hemisphere are home to more than 100 species of the genus Rosa [3], a woody perennial belonging to the Rosaceae [4], which has more than 18,000 cultivars and 200 different species [5]. Roses are among the most important ornamental plants and are most often used for aromatic and medicinal purposes [6]. Traditional uses of Rosa flower extracts include use as an astringent stomachic, an agent for promoting blood flow and reducing blood stasis and therefore regulating menstruation, an antitoxin, and a treatment for mastitis and diabetes; they are also used to prepare Rosa tea extracts, which are widely regarded as cosmetic and dietary products [1]. Rosa has a long history of usage in China as a food and medicine; Rosa petals have been used in teas, cakes, and flavoring extracts [4]. According to [7], the leaves, roots, flowers and fruits of Rosa are rich sources of chemical compounds, both primary and secondary metabolites. Seeds, cuttings, layering, and grafting are all viable methods for growing roses. While these techniques of rose propagation are laborious and time-consuming, seed propagation frequently yields variability. Therefore, it is necessary to develop effective techniques for the quicker growth of roses [8]. In more recent years, tissue culture procedures have been used to regulate and commercially propagate roses [9].
Plant growth in vitro depends entirely upon artificial light sources as an alternative to solar energy, which should provide light in the appropriate regions of the electromagnetic spectrum for photomorphogenic reactions and photosynthetic metabolism. Regulating light quality, photoperiod and irradiance enables the production of desired plant characteristics. Light conditions are important environmental factors that significantly affect plant growth and development. Light quality (light source), quantity (intensity), and duration (photoperiod) are three principal characteristics that affect plant growth and functional component formation through photosynthesis. Light is the most important environmental factor affecting photosynthesis and, therefore, how well the plant grows [10]. Light quality regulates cell metabolism, morphogenesis, gene expression, and other physiological processes [11]. Plant growth is affected by the interaction between the level of endogenous plant hormones and light quality [12]. Therefore, this research aims to choose the optimal light source and determine its effects on the growth characteristics, chemical content, and callus characteristics of Rosa damascena Mill. and Rosa hybrida L.
Materials and methods: Experiments were carried out in the tissue culture laboratory of the Biology Department, Science College, Al-Qadisyah University, during the period from August 2021 to June 2022, to study the effect of dark and various light qualities, white light (W, 420 nm), blue (B, 460 nm), green (G, 560 nm) and red (R, 660 nm), on the growth characteristics, the chemical content of shoot and root, and the callus characteristics of Rosa damascena Mill. and Rosa hybrida L. using tissue culture techniques.
- Sterilization of explants: Explants (single-node and shoot tips) were washed under running tap water to remove adherent soil; the leaves were then removed, and the explants were sprayed with 70% ethyl alcohol for 1 minute and soaked for 20 min in commercial bleach NaOCl (8%) containing 2 drops of Tween-20 to aid wetting. Finally, in a laminar flow cabinet, the explants were washed with sterilized distilled water 3 times for 5 minutes to remove any side effects of sterilization and maintain their viability [13].
- Preparation of MS medium: Media for Rosa cultures contained 4.44 g/l Murashige & Skoog medium and 30 g/l sucrose, to which the growth regulators were added: 0.5 mg/l kinetin and 1 mg/l indole butyric acid (IBA) for the whole-plant experiment, or 1 mg/l kinetin and 0.5 mg/l IBA for the callus experiment. The pH of the MS medium was adjusted to 5.7 ± 1 using HCl (0.1 N) or NaOH (0.1 N) before adding 7 g/L agar; a hot-plate magnetic stirrer was used to homogenize the components of the medium at a temperature of 70 °C for 15 minutes, and the medium was then distributed into sterile culture jars (35 ml capacity), each containing 6 ml of medium. After that, the medium was sterilized by autoclave at 121 °C and 1.05 kg/cm2 (15-20 psi) for 20 minutes.
- Culture of explants: Sterilized explants (single-node or apical meristem), 0.5 cm long, were cultured on MS medium in a laminar flow cabinet and incubated in the growth room at a temperature of 25 ± 1 °C under dark or the various light qualities (white, red, green, and blue) with a 16/8-hour photoperiod (light on/off). Cultures were maintained in the same media for about 30 days and then transferred into fresh media of the same composition.
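As a hypothetical convenience (not part of the published protocol), the medium recipe described above can be scaled to any batch volume; the per-litre quantities below are taken directly from the text, and the helper function is purely illustrative:

```python
# Hypothetical helper: scaling the MS-medium recipe to an arbitrary batch volume.
# Quantities follow the text: 4.44 g/L MS salts, 30 g/L sucrose, 7 g/L agar,
# plus kinetin/IBA in mg/L depending on the experiment.
recipe_per_litre = {
    "MS salts (g)": 4.44,
    "sucrose (g)": 30.0,
    "agar (g)": 7.0,
    "kinetin (mg)": 0.5,   # whole-plant experiment; use 1.0 for the callus experiment
    "IBA (mg)": 1.0,       # whole-plant experiment; use 0.5 for the callus experiment
}

def scale_recipe(volume_ml: float) -> dict:
    """Amounts needed for `volume_ml` of medium (the study used 6 ml per 35-ml jar)."""
    factor = volume_ml / 1000.0
    return {component: round(amount * factor, 3) for component, amount in recipe_per_litre.items()}

# e.g. enough medium for 50 jars at 6 ml each:
print(scale_recipe(50 * 6))
```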
The results were taken after 4 weeks. At the end of the incubation period, the growth parameters calculated were the mean shoot and root lengths, the numbers of branches, leaves and roots, the fresh and dry weights of shoot and root, the shoot content of chlorophyll a, chlorophyll b and total chlorophyll, the shoot and root contents of carbohydrates and protein, and the characteristics of the callus (color, texture, and shape) together with the fresh and dry weight of the callus [14].
- Experimental design: Data analysis was carried out using a completely randomized design (CRD) in a 2×5 factorial arrangement (Rosa type and light quality) with 10 replications per treatment. When the treatment effect was significant, the Revised Least Significant Difference (RLSD) was applied to compare means at the 0.05 probability level [15].
Results: The statistical analysis in Table 1 below shows that the lighting factor has a significant effect on all the growth characteristics of Rosa; white light gave the highest rate of explant emergence in vitro (100%) for both types of plants compared with the other treatments. As for the growth indicators of the explants, the highest shoot length, number of branches and leaves, and fresh and dry weight of the shoot were recorded under green light, followed by white light (Figure 1), while the highest root length, number of root branches, and fresh and dry weight of the root were recorded under red light, followed by green light (Figure 2). The results in Table 2 indicate that explants of both types of plants growing in white light have a higher content of photosynthetic pigments (chlorophyll a, chlorophyll b, and total chlorophyll), as well as of carbohydrates and proteins in the shoot and root parts, compared with the lowest values in explants growing in the dark. The results in Table 3 show significant differences in the characteristics of the callus (color, texture, and shape) under the different lighting conditions (Figure 3). The highest fresh and dry weights of callus were reached in explants growing in the dark or in white light, and the lowest in blue light, where the callus grew slowly and became more compact and yellowish green or whitish green. The highest fresh weight (1.83 g) and dry weight (0.207 mg) of Rosa damascena callus were obtained in dark conditions, and the highest fresh weight (1.99 g) and dry weight (0.223 mg) of Rosa hybrida callus were also obtained in dark conditions, with significant differences among all treatments.
Discussion: Light spectra cover different wavelengths; photosynthetically active radiation (PAR), which falls between 400 nm and 700 nm, comprises the wavelengths most crucial for plant growth [16]. Therefore, the different growth indicators obtained under the different light conditions are due to the physiological responses of the plants and to metabolic processes, which are reflected in the growth of shoot and root. It has also been pointed out that light is an environmental factor with a significant influence on the ability of the explant to absorb plant hormones, especially cytokinins, and that it reduces the side effects of the high concentrations of auxin and cytokinins added to the medium. Thus, choosing the optimal light quality was significant for explant development [17].
The light quality- or intensity-dependent changes in morphogenesis and plant physiology are regulated by phytohormones [18][19]. They result from the interaction between light signals from the environment and the biosynthesis of the endogenous phytohormones auxin and cytokinin, which directly or indirectly affect the growth of shoot and root [20]. Red light induces chlorophyll biosynthesis so that plants can photosynthesize, while blue light is important for plant morphology [21]. The proportion of red light being four times that of blue light perhaps encourages the plant to photosynthesize and induces plant growth, which agrees with the results reported by [22], where light quality either induced or inhibited shoot and root growth. According to [23], the role of red light in inducing photosynthesis has been widely accepted, and [24] explain that the red wavelengths are readily absorbed by the photosynthetic pigments; moreover, the optimum absorption peak of chlorophyll is in the red. Blue light, in turn, plays significant roles in plants, including stomatal function, gas exchange, and water relations [25]. Therefore, red and blue light may be useful for photosynthesis and thus for the growth and development of shoots and roots [26]. Plants reflect green light, which is why they appear green, and green light has therefore been thought to be of little use for plant growth because it decreases photosynthesis. However, exposing plants to green light induces stem elongation, which indicates that it is useful [27]. Although leaves absorb about 90% of red or blue light, they still absorb about 75% of green light [28]. Green light also affects plant physiology and morphology, including leaf growth, early stem elongation, and stomatal conductance [27][22]. In general, callus with active growth and a friable structure dedifferentiates easily, and light intensity has a strong influence on the texture, color, shape, and physiological state of the callus; the callus growth induced in darkness may be explained by increased natural auxin levels under darkness or lower light intensity [29]. In addition, the oxidation of phenolic compounds is induced by light exposure [30][31]; therefore, the callus showed a higher browning rate and a darker browning color under higher light intensity, which was reflected in the growth and development of the callus and explants in vitro.

Conclusion: Based on the results of this research, we note that exposing the explants to green light plays an important role in increasing many indicators of shoot growth, while red light gave the best results for root growth. As for the callus characteristics, they were best when the explants were incubated in the dark.

Table 1: Effect of light conditions on the studied characteristics of the two Rosa species after 4 weeks of culture (R.L.S.D. at the 0.05 level).

Table 2: Effect of light conditions on the chemical contents of the two Rosa species after 4 weeks of culture.

Table 3: Effect of light conditions on the callus characteristics of the two Rosa species after 4 weeks of culture.
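The statistical procedure described in the Methods (a completely randomized 2×5 factorial of Rosa species and light quality with 10 replications, followed by RLSD mean comparisons at the 0.05 level) can be illustrated with a short analysis sketch. The snippet below is a minimal, hypothetical Python example using pandas and statsmodels; the file name and the column names (species, light, shoot_length) are invented for illustration, and the classical Fisher LSD formula is used as a stand-in for the revised LSD procedure cited in [15], which is not reproduced here.

```python
# Minimal sketch of a 2x5 factorial CRD analysis with an LSD-style mean comparison.
# Assumed column names and data file are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

df = pd.read_csv("rosa_growth.csv")  # columns assumed: species, light, shoot_length

# Two-way ANOVA for the 2 (species) x 5 (light: dark, white, blue, green, red) factorial
model = ols("shoot_length ~ C(species) * C(light)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# If the light effect is significant, compare light means with an LSD-type test
mse = model.mse_resid                 # error mean square from the ANOVA
df_err = model.df_resid               # error degrees of freedom
r = df.groupby("light").size().min()  # replications per light level (10 in the paper)
t_crit = stats.t.ppf(1 - 0.025, df_err)
lsd = t_crit * np.sqrt(2 * mse / r)   # least significant difference at alpha = 0.05

means = df.groupby("light")["shoot_length"].mean().sort_values(ascending=False)
print(f"LSD(0.05) = {lsd:.3f}")
for a, ma in means.items():
    for b, mb in means.items():
        if a < b:
            diff = abs(ma - mb)
            print(f"{a} vs {b}: diff = {diff:.3f} -> "
                  f"{'significant' if diff > lsd else 'ns'}")
```

The same pattern applies to the other measured traits (root length, pigment contents, callus weights) by changing the response column.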
2,723
2023-10-30T00:00:00.000
[ "Agricultural and Food Sciences", "Biology", "Environmental Science" ]
Tunable entanglement distillation of spatially correlated down-converted photons We report on a new technique for entanglement distillation of the bipartite continuous variable state of spatially correlated photons generated in the spontaneous parametric down-conversion process (SPDC), where tunable non-Gaussian operations are implemented and the post-processed entanglement is certified in real-time using a single-photon sensitive electron multiplying CCD (EMCCD) camera. The local operations are performed using non-Gaussian filters modulated into a programmable spatial light modulator and, by using the EMCCD camera for actively recording the probability distributions of the twin-photons, one has fine control of the Schmidt number of the distilled state. We show that even simple non-Gaussian filters can be finely tuned to a ~67% net gain of the initial entanglement generated in the SPDC process. Introduction Entanglement represents correlations among quantum systems that cannot be explained by classical local models [1], and it is at the core of quantum information science. For instance, it is necessary for entanglement-based quantum cryptography [2,3], for teleportation of quantum states [4], for quantum dense coding [5], and for Bell tests of quantum non-locality [6,7]. In photonic experiments, a suitable source of entangled photons is the spontaneous parametric down conversion (SPDC) process. SPDC is widely used to generate pairs of correlated photons that can be entangled in several degrees of freedom [8,9]. For instance, the spatial entanglement of photons generated in the SPDC process has been used for fundamental studies of quantum mechanics [10,11], enhanced quantum imaging [12][13][14], and generation of high-dimensional quantum states [15][16][17][18]. Since the transverse momentum and position of single photons are continuous variables (CVs), one can use the spatially correlated down-converted photons for CV quantum information processing [19][20][21][22]. Studies regarding the quantification of spatial entanglement in SPDC have been reported and one way to quantify the spatial entanglement is by using the Schmidt decomposition technique [23][24][25][26]. However, analytical computation of the Schmidt number K is experimentally demanding since it requires a full state reconstruction [27,28]. The estimation of K can be greatly simplified by noting that the spatial state of down-converted photons, under typical illumination procedure, is well approximated by a pure state [29]. In this case, K can be obtained only by measuring the marginal distributions of the down-converted photons at the near-field (image) and far-field planes of the source. Therefore, the degree of spatial entanglement depends on the aforementioned distributions and, thus, modifications of them through the use of local spatial filters allow the entanglement manipulation [30]. In this paper, we report on a novel experimental technique for entanglement distillation of the continuous variable (CV) spatial state of photons created in the SPDC process. Using tunable non-Gaussian local operations, and measuring in real time the marginal distributions, we developed a toolbox which allows a real-time, fine-tuning of the distilled entanglement. The filtering is achieved by using a spatial light modulator (SLM), which is a programmable liquid crystal display that can be used to control the amplitude and/or the phase of an incident beam. 
They have been used in several quantum information experiments [27,28,31,32], showing the practicality for generating and manipulating d-dimensional quantum states encoded into the transverse linear momentum of single photons. The real-time recording of images is done by an Electron Multiplying Charge Coupled Device (EMCCD). The EMCCD camera is based on a technology consisting of a CCD image sensor capable of detecting single-photon level intensities with high frame rates, low readout noise, and with high quantum efficiency [33]. For this reason EMCCD cameras have been used in several recent quantum information and quantum imaging experiments [11,14,34]. Although SPDC is a widely employed source of entangled photons, the advantage of the combined use of tunable spatial filters and real-time recording of twin-photon marginal distributions for entanglement distillation has not been explored up to date. Our technique can find applications in recent applied research, where the propagation of spatially entangled photons through turbulent channels is considered. As the entanglement may deteriorate due to imperfections of the channel [35][36][37][38], the receiver might need to distillate the received state for quantum information processing. We also envisage that our technique can be used to generate entangled Bell states of orbital angular momentum modes with very high quantum numbers [39][40][41][42][43]. Spatial entanglement of the down-converted photons SPDC is a nonlinear optical process in which a laser beam with frequency ω 3 and wave vector k 3 pump a nonlinear birefringent crystal, in such a way that it generates two photons with frequencies ω 1 and ω 2 and wave vectors k 1 and k 2 . If we assume the paraxial regime, a classical monochromatic continuous-wave pump beam, and the degenerate case, the spatial quantum state of the down-converted photons can be written as [9,44] q j represents the transverse momentum q for the j-th photon, namely q = k − k z . The function v(q) is the angular spectrum of the pump beam. Typically, this function is a Gaussian distribution, namely v(q) ∝ exp(−σ 2 |q| 2 /4), where σ is the beam waist. g(q) ∝ sinc L 4k 3 |q| 2 is the phasematching function associated to the process, where L is the crystal length, and k 3 = |k 3 | is the wavenumber of the pump beam. The phase-matching function implies that the state of Eq. (1) is a non-Gaussian CV-state [45,46]. It can be rewritten in the transverse position domain through the Fourier transform as [29,47] where W(x) and G(x) are the Fourier transform of v(q) and g(q), respectively. The spatial CV-state given by Eqs. (1) and (2) is entangled for any value of σ, k 3 and L [23]. In order to compute the degree of spatial entanglement, we use the Schmidt number K. This number is defined as the inverse of the purity of the marginal state ρ m of a bipartite state ρ 12 , namely K = 1/Tr(ρ 2 m ). However, a complete knowledge of the bipartite state ρ 12 is required to calculate the Schmidt number. Although this knowledge can be gathered through quantum state tomography, it would require one to perform many measurements in different bases to reconstruct the state [48]. Instead, we resorted to a recently proposed strategy to compute the Schmidt number associated to the spatial state of the down-converted photons [29]. 
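For reference, the block below writes out the conventional momentum-space form of the two-photon amplitude implied by the definitions of v(q) and g(q) above. It is the standard expression found in the SPDC literature (cf. the review cited as ref. [9]) under the stated paraxial, degenerate, CW-pump assumptions, and is offered only as a reading aid for Eqs. (1) and (2), not as a verbatim reconstruction of them.

```latex
% Conventional momentum-space biphoton amplitude for degenerate, collinear, CW-pumped SPDC
% (paraxial approximation); sigma is the pump waist, L the crystal length, k_3 the pump wavenumber.
\Phi(\mathbf{q}_1,\mathbf{q}_2) \propto
  v(\mathbf{q}_1+\mathbf{q}_2)\, g(\mathbf{q}_1-\mathbf{q}_2),
\qquad
v(\mathbf{q}) \propto \exp\!\left(-\frac{\sigma^{2}\lvert\mathbf{q}\rvert^{2}}{4}\right),
\qquad
g(\mathbf{q}) \propto \operatorname{sinc}\!\left(\frac{L\,\lvert\mathbf{q}\rvert^{2}}{4k_{3}}\right)
```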
Assuming the state is pure, this strategy is based on measuring the probability distributions of the transverse momentum q j , and the transverse position x j , of one of the twin-photons associated to the state described by Eqs. (1) and (2). The Schmidt number is given by where the probability distributions for the j-th photon are Under typical experimental conditions, i.e., considering a thin crystal and a wide pump beam, one has that σ L K . Thus, Eqs. (4) and (5) can be simplified by considering , and one obtains that Note that due to the symmetry of the state, the probability distributions are the same for both down-converted photons. Therefore, the Schmidt number K is given by Since the spatial entanglement depends on the widths and amplitude of both marginal probability distributions, it is possible to manipulate the amount of entanglement by adding a spatial filter at the image plane of the source for both down-converted photons [30]. Considering a local transmission function F(x), the two-photon state after the spatial filter is given by Depending on the shape of the filter, we can increase the spatial entanglement, despite losing part of the photons sent. This is the protocol known as entanglement distillation. In our case, the spatial filter is implemented using a programmable SLM, and the modified marginal probability distributions are recorded in real time using the EMCCD camera. Please note there are other approaches to certifying the spatial entanglement of down-converted photons such as the use of separability criteria for entangled CV-states [49][50][51]. Nonetheless, in the case of spatially correlated SPDC photons, these criteria require the coincidence rate measurements in two conjugate planes and this is experimentally demanding [52,53]. Experiment The experimental setup is depicted in Fig. 1. Alice uses a continuous-wave, single-mode laser operating at 355 nm to pump, a β-barium-borate type-I (BBO-I) nonlinear crystal to generate collinear down-converted photons. The maximum laser power is 350 mW. The nominal pump beam waist is 600 µm, and the crystal length is 4.5 mm. After the BBO crystal, high-quality dichroic mirrors are used to remove the remaining pump beam, transmitting only the downconverted photons. The generated photons are sent to Bob through the transmission channel. In our case, this channel corresponds to a free space separation between Alice and Bob, composed of two lenses, L 1 and L 2 , both with 7.5 cm of focal length, in a 4-f configuration. To test our entanglement distillation protocol, Bob uses a transmissive spatial light modulator (SLM) working in amplitude-mode to perform the spatial filtering. The SLM is composed of an array of 800 × 600 pixels (px), which are 32 [µm] wide, sandwiched between cross-aligned polarizers. Bob places the SLM at the propagation mode of both down-converted photons. Even though it acts on both photons, the applied spatial filter is local because the total transmission function is given by F(x 1 , x 2 ) = F(x 1 )F(x 2 ). This filter allows the manipulation of the probability distribution of the transverse position of x j since it is located at the image plane of the source (see Fig. 1). As it is well known, the modulation of the intensity on the near-field plane of the BBO-I crystal corresponds to a modification on the transverse position distributions of the twin-photons [21,47]. Even though the spatial state of the down-converted photons is non-Gaussian [45,46], it can be approximated by a Gaussian one [9,14,21,23]. 
Therefore, it is natural to consider a non-Gaussian spatial filter since it is impossible to implement an entanglement distillation protocol under Gaussian states using Gaussian operations [54]. Thus, we consider the local non-Gaussian filter given by This function, for each transverse direction x, y, is composed of a sum of two Gaussian functions, where ±d is their offset position, and a is their width. The filter given by Eq. (10) To experimentally monitor in real-time the manipulation of the spatial entanglement after the photons cross the filter, an EMCCD camera is used to record the marginal probability distribution of the transverse position and momentum of a down-converted photon. This measurement is done at the image plane (near-field) and the Fourier plane (far-field) of the filter, respectively. In particular, we used a Princeton ProEM 512 camera, with a resolution of 512 × 512 px. Each camera's pixel has a width equal to 16 µm. In the case of measuring at the near-field plane, we resort to the use of the lens L 3 with a 7.5 cm of focal length. When L 4 (15 cm focal length) is placed instead of L 3 , the camera is located at the Fourier plane of the SLM. An interference filter placed at the front of the EMCCD camera was used to select the degenerated down-converted photons at a wavelength of 710 nm with 5 nm of bandwidth. It is worth to note that in our scheme the purity of the post-selected states is guaranteed such that Eq. (3) is still valid. For instance, decoherence effects generating a mixed spatial state would arise if the single-photon spatial degree of freedom entangles with other degrees of freedom, e.g. polarization, time-bin, etc. Nevertheless, in our case this is avoided by using polarization filters, spectral filters, and the repetition rate of the SLM in large timescales compared with the coherence time of the down-converted photons [55]. Thus, avoiding spatial-polarization, spatial-frequency and spatial-temporal correlations, respectively. Results To compute the Schmidt number of the entangled generated states, we initially fit a surface to each image recorded by the EMCCD at the near-and far-field planes. According to the state given by Eq. (9), and the filter function of Eq. (10), the fitting surface must be a sum of four Gaussian functions for the near-field plane images and a sinc function for the far-field plane images Each integral in Eq. (3) is calculated based on the obtained parameters of the fitted surfaces (i.e. α i , β i , γ i , δ i , i , a, b, c, x 0 , and y 0 ). That is, the Schmidt number K for the post-selected states can be calculated directly using these parameters. Measuring the initial spatial entanglement To properly estimate the efficiency of our technique for the spatial entanglement distillation, we started by measuring the initial spatial entanglement of the down-converted photons generated at the crystal. For this purpose no filter was addressed in the SLM, and it was set to a "blank configuration" corresponding to the maximal transmittance setting. The recorded images and the fitted ones are shown in Fig. 2. The presence of an unwanted vertical diffraction pattern in the recorded image at the far-field plane can be spotted. It is caused by non-modulated light scattered on periodic pixel structure of the SLM and, therefore, it corresponds to background noise and does not affect/impact on the estimation of the degree of entanglement of the distilled states. 
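Before turning to the measured values, the action of a two-peak filter of the form of Eq. (10) on a double-Gaussian toy state can be simulated numerically. The sketch below is a simplified one-dimensional illustration in Python: the widths, grid, and pixel-to-metre conversion are invented for the example, the Schmidt number is obtained from a discretized singular-value decomposition rather than from Eq. (3), and the code is not the authors' simulation used for Fig. 3.

```python
# Toy simulation: Schmidt number of a double-Gaussian two-photon amplitude,
# before and after a local two-peak (non-Gaussian) filter such as Eq. (10).
# All parameter values are illustrative; this is not the authors' code.
import numpy as np

def schmidt_number(psi):
    """Schmidt number K = 1/sum(p_i^2) from the SVD of the discretized amplitude."""
    s = np.linalg.svd(psi, compute_uv=False)
    p = s**2 / np.sum(s**2)          # Schmidt coefficients (probabilities)
    return 1.0 / np.sum(p**2)

# 1D transverse grid (a 1D cut keeps the example small; the real state is 2D per photon)
n = 800
x = np.linspace(-3e-3, 3e-3, n)      # metres
x1, x2 = np.meshgrid(x, x, indexing="ij")

# Double-Gaussian approximation of the SPDC amplitude:
# wide envelope in (x1 + x2), narrow position correlation in (x1 - x2)
sigma_plus, sigma_minus = 600e-6, 40e-6   # illustrative widths
psi0 = np.exp(-(x1 + x2)**2 / (4 * sigma_plus**2)) \
     * np.exp(-(x1 - x2)**2 / (4 * sigma_minus**2))

def two_peak_filter(x, a, d):
    """Sum of two Gaussians of width a, offset +/- d (cf. the filter used on the SLM)."""
    f = np.exp(-(x - d)**2 / (2 * a**2)) + np.exp(-(x + d)**2 / (2 * a**2))
    return f / f.max()

F = two_peak_filter(x, a=8 * 32e-6, d=20 * 32e-6)   # px -> metres using 32 um pixels
psi_filtered = psi0 * F[:, None] * F[None, :]       # local filter F(x1) F(x2)

print("K before filtering:", schmidt_number(psi0))
print("K after filtering :", schmidt_number(psi_filtered))
```

Scanning a and d in this toy model reproduces the qualitative behaviour discussed below: the filtered state can carry a larger Schmidt number than the unfiltered one, at the cost of a reduced transmission probability.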
From the recorded images, we obtain an initial Schmidt number of K0 = 842 ± 40. The error was extracted from the computational routine that quantifies the goodness of the fitted surfaces. This result is in good agreement with the predicted Schmidt number of Ktheo = 834.05, obtained by inserting the experimental values of σ, L, and λ3 into Eq. (8).

Experimental spatial entanglement distillation

We consider two main scenarios with different spatial filter configurations for the spatial entanglement distillation protocol: (i) in the first case, d is fixed at 20 px while a varies from 4 to 21 px, and (ii) d varies between 0 and 21 px while a is fixed at 7 px. These values of d and a were chosen after simulating the effect of the non-Gaussian filters and noting that the post-selected entangled states would in some cases show values of K greater than K0. The simulation results are shown in Fig. 3. In Fig. 3(a) we show the ratio K/K0 of the post-selected states as a and d vary. In Fig. 3(b) we show the success probability Psucc of the protocol, defined as the probability that the down-converted photons are transmitted through the filter addressed on the SLM, conditioned on their initial state of Eqs. (1) and (2). The Schmidt number K was computed for each configuration adopted. The corresponding results for the first scenario are shown in Fig. 6(a) and are given explicitly in Table 1. The results for the second scenario are shown in Fig. 6(b) and given in Table 2.

Table 1. Schmidt number K of the post-selected states in the first scenario (d = 20 px) as a function of the filter width a.

a (px)   K      ΔK
4        397    65
5        558    54
6        745    38
7        1071   40
8        1404   31
9        1238   56
10       1031   48
11       902    15
12       825    7
13       779    12
14       753    12
16       717    12
18       699    12
20       695    12
22       694    13

The dashed blue line in Figs. 6(a) and 6(b) corresponds to the initial entanglement K0. The black dots correspond to the experimental results obtained with different non-Gaussian filters. The pink areas represent confidence bounds computed using the Monte Carlo method and the uncertainties of the σ and L values. As mentioned above, σ = 600 µm and L = 4.5 mm; the corresponding errors are ∆σ = 13 µm and ∆L = 0.3 mm, which are a consequence of systematic errors in the measuring devices used. Therefore, in the Monte Carlo method, the computation of K for each non-Gaussian filter was done by randomly drawing σ and L values within σ ± ∆σ and L ± ∆L. Note the agreement between the experimental data and the expected values within the confidence bounds. In Fig. 6(a) one can observe an increase over the initial entanglement when 6 < a < 12 px. The most entangled post-selected state is obtained when a = 8 px; in this case, K = 1404 ± 31, which corresponds to a ∼67% net gain over the initially measured spatial entanglement. Fig. 6(b) shows that, with a = 7 px, entanglement distillation occurs when d ≥ 18 px, with no apparent upper bound within the observed range. Larger values of d were not tested because the probability of the photons successfully passing through the filter becomes smaller, making the number of detected photons smaller as well. In turn, a smaller number of detected photons produces poorer signal-to-noise ratios, as evidenced by the larger error bars of the experimental data as d grows. This observation suggests that keeping d constant while searching for a good value of a yields better results for the entanglement distillation than the other way around.

Conclusion

We report on a toolbox technique that allows one to increase the amount of spatial entanglement of the down-converted photons.
It is based on the implementation of tunable local non-Gaussian filters encoded onto programmable spatial light modulators, while the post-processed distilled entanglement is measured in real-time using an EMCCD single-photon sensitive camera to register the probability distributions of the transverse position and momentum of the down-converted photons. We showed that by finely tuning simple non-Gaussian filters one can already obtain a ∼67% net gain of the initial spatial entanglement generated at the SPDC process. As entanglement is the core resource for quantum information processing, this procedure should find applications in several different scenarios. For instance, in quantum cryptography schemes where OAM correlated photons are propagated through the turbulent atmosphere and then used to establish a secret key for secure communications [56,57]. Other CV schemes, such as computation based on continuous-variable cluster states might also benefit from this technique [58][59][60].
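The Monte Carlo confidence bounds quoted for Figs. 6(a) and 6(b) can be sketched as follows. The snippet assumes a user-supplied function schmidt_number_theory(sigma, L) implementing the paper's Eq. (8), which is not reproduced here, and it assumes uniform draws within the quoted systematic-error intervals; both the function name and the sampling choice are illustrative guesses rather than the authors' procedure.

```python
# Sketch of a Monte Carlo confidence-bound estimate for K under uncertainty in sigma and L.
# schmidt_number_theory is a placeholder for the paper's Eq. (8); it is not defined here.
import numpy as np

def monte_carlo_bounds(schmidt_number_theory, sigma=600e-6, d_sigma=13e-6,
                       L=4.5e-3, d_L=0.3e-3, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # Draw sigma and L uniformly within the quoted systematic-error intervals
    sigmas = rng.uniform(sigma - d_sigma, sigma + d_sigma, n_samples)
    lengths = rng.uniform(L - d_L, L + d_L, n_samples)
    ks = np.array([schmidt_number_theory(s, l) for s, l in zip(sigmas, lengths)])
    # Report a central value and a confidence band for K
    return np.median(ks), np.percentile(ks, [2.5, 97.5])
```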
4,174.8
2018-05-17T00:00:00.000
[ "Physics" ]
Three-Phase Distribution Transformer Connections Modeling Based on a Matrix Operation Method in Phase Coordinates

This paper proposes a matrix operation method for modeling the three-phase transformer in phase coordinates. Based on decoupling theory, the 12x12 primitive admittance matrix is first obtained from the coupling configuration of the windings. Under the condition of asymmetric magnetic circuits, and according to the boundary conditions of the transformer connections, transformers in different connections can be modeled by the matrix operation method starting from the primitive admittance matrix. Another purpose of this paper is to explain the differences between the phase-coordinate and positive-sequence impedance parameters of transformers. Numerical test results on the IEEE-4 test system show that the proposed method is valid and efficient.

Introduction

Distribution systems are naturally unbalanced. With the rapid development of distributed generators and the wide use of electric vehicles, the unbalanced condition of distribution systems is becoming increasingly severe because of single-phase supplies and loads [1,2]. There is therefore a great need to promote the research and analysis of unbalanced distribution systems. For unbalanced distribution systems, however, this unbalanced nature makes it difficult to generate the decoupled (1-2-0) sequence networks for analysis. Consequently, it is direct and convenient to employ phase (a-b-c) coordinates for the analysis and solution of the unbalanced distribution system [3]. Three-phase transformers have many connections and different neutral-point states. In modern distribution system analysis, transformer models play an important role in power-flow analysis and short-circuit studies. Therefore, it is necessary to study a new approach for modeling three-phase transformer connections in phase coordinates within a unified matrix analysis. Several representative approaches for modeling three-phase distribution transformer connections in admittance matrix form by phase coordinates have been proposed in [4]-[10]. Reference [4] developed an approach from KCL and KVL that was able to generate the 6x6 matrix for different transformer connections by grouping single-phase transformer symmetrical lattice equivalent circuits as the basic units. That paper also described the transformer model relationship between phase coordinates and component (sequence) coordinates. However, the inter-phase coupling was not considered in this approach. Later, the improved models proposed in [5]-[7] were derived from this approach of assembling single-phase transformer equivalent circuits into different connections. Another representative approach generated a primitive matrix from the six-coil equivalent circuit of the transformer in the (YN, yn0) connection [8]-[9]. The main characteristic of this approach was that the models were obtained from product relations between the primitive matrix and the incidence matrix for different connections, although the description was not accurate in that the left and right incidence matrices were taken to be the same. Reference [10] presented a method for handling matrix singularity in the use of transformer power-flow models, and the modified augmented nodal analysis (MANA) [11] was proposed, which enriched the model application of transformers. There were several flaws in the previous approaches for modeling three-phase transformers by phase coordinates.
For one thing, the phase self-impedances and mutual-impedances come from the transformer positive impedances directly, though the phase impedances are much closed to the positive impedances. For another, the most of models described the parameters of 6 coins by 6x6 dimension to 8x8 dimension matrix in the models for transformer, only consider the injection currents at a-b-c phase between the primary and the secondary sides, but not all the currents. In this case, the models did not enable to cover the use of both in power-flow and short circuit calculations under the asymmetric magnetic circuits. The three-phase AC transmission theory used to be generated from the single-phase models of the system equipment by the AC circuit theory based on the symmetrical characteristics of the three-phase voltages and currents, which the three-phase symmetry is the characteristic of traditional power system analysis. And the unavoidable asymmetry conditions of the transformers are even more serious than the lines. The purpose of this paper is to model the three-phase transformers in different connections by phase-coordinates by matrix operation method. The main work for studying is show as follows: • Generated the primitive admittance matrix from the coupling windings by decoupling method. •Modeling the transformers in different connections by matrix operation method based on the asymmetric magnetic circuits. • Method for obtaining phase-coordinate modified parameters for the models of the transformers. Methods /experimental The aim of this paper is to solve the problem for winding connections modeling of three-phase transformer by phasecoordinates based on matrix operation method. Firstly, we introduced the steps to obtain transformer admittance matrix by matrix operation method. Then we analyzed the modeling for the connections of three-phase distribution transformer, and also we analyzed the differences of the impedance parameters between phasecoordinates and sequence-coordinates. Finally, we verified the effectiveness of the modeling methods by simulation. Modeling methodology In this section, the modeling approach for a transformer is described by the matrix operation method from the 12x12 dimension primitive admittance matrix. The complex variables and the values of parameters are given in per-unit system. Firstly, the coupling configuration used prefers to describe and analyze the coupling phenomenon by comparing with the two circuit topologies for a single-phase double-winding transformer. And then the phasecoordinates construction methodology for the three-phase model is described by the following steps. • Definition of the transformer primitive admittance matrix P Y . • Definition of the transformer admittance matrix T Y by the matrix operation method in different connections . Connections Legend Fig.1 shows the main steps of the conceptual scheme of the construction methodology [12]. Firstly, the primitive matrix is generated from the coupling windings shown as the inner blue frame. And then, according to the boundary conditions of the connections at the windings, the transformers are modeled by the matrix operation method. [Z], Z Impedance matrix, element of impedance matrix. [Y] or Y, y Admittance matrix, element of admittance matrix. Primitive admittance matrix P Y The two kinds of single-phase equivalent circuits are shown in Fig.2. Fig.2 (a) is the T-configuration, and Fig.2 (b) is the coupled configuration. 
In Fig.2, "A", "X" are the two buses at the primary side, "a", "x" are the two buses at the secondary side respectively. 1 2 , V V ) stand for the node voltages at the windings. In Fig.2 (a), 1 Z , 2 Z are the windings selfimpedances, and jwM is the mutual-impedance between windings. w is the angular acceleration. In Fig.2 is the noload admittance. The circuit relationship in Fig.2 (a) (in p.u system) can be given by 1 1 1 There is potential difference between Bus "X" and "x" in transformer testing experiment. The T-configuration fails to express the electrical characteristics of the transformer, although the calculation enables to equipotential potential. When running in symmetrical operation, the two buses (X, x) are the same at "zero" potential point, so they can be connected in the form of equipotential. Generally, the two buses are not the same at the zero potential point when they run in three-phase asymmetric operation. In order to reflect the potential offset phenomenon of neutral point and consider the various connections of transformers, threephase transformers for modeling should adopt the form of Fig.2 Consequently, there is no electrical connection between Bus X and Bus x in Fig.2 (b), which can indicate the potential difference between the two buses. The floating phenomenon without buses grounding and the different connections of the transformer enable to explain as well. While the T-configuration equivalent circuit of transformer in Fig.2 (a) utilizes the equipotential characteristics at the symmetrical operation, which is not suitable for asymmetrical operation analysis. The application of Fig.2 (b) at Fig.1, the three-phase transformer contains 6 coins and 12 buses (A-B-C buses and X-Y-Z buses at the primary side; a-b-c buses and x-y-z buses at the secondary side) shown as Fig.3, and the generalized primitive model can be given by a 12x12 matrix using the decoupling methodology. The branch current equation can be given as where   , which presents the impedance relationship of the transformer, and its admittance is The node voltage equation of the transformer can be shown as 12 where 12 12 , which is the full primitive model of the transformer in Y-bus form. Transformer admittance matrix T Y According to different connections, we obtain the admittance models based on the matrix operation method from the derivation of the initial admittance matrix P Y . The equation (4) presents the boundary conditions of the transformer in YN,d11 connection. The admittance matrix can be calculated by the matrix operation method according to the connected relationship of the transformer in YN,d11 connection. Steps to obtain transformer admittance matrix by matrix operation method can be shown as follows: 1) YN connection of primary side: Bus X, Y and Z are grounding, and the rows and columns of them are retained in the primitive admittance matrix Y P . 2) d11 connection of secondary side: 3) Preserve three-phase voltage variables, and delete Buses (X, Y, Z, x, y, z) by needed, which can be obtained a 6x6 standard matrix (shown as Fig.5) to a 8x8 matrix by retention of neutral buses. Based on matrix operation method instead of scanning the branch, the models of the transformer are derived from the relationship of the connections, which directly forms the nodal admittance matrix. The analysis method can be used to three-winding transformers as well. In accordance with the above rules, we can obtain a 12x12 incidence matrix C. 
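A rough numerical sketch of this construction is given below: it builds a toy primitive admittance for six magnetically coupled windings, assembles the 12x12 bus-level matrix, and then applies the boundary conditions of a grounded wye-wye (YN,yn0) connection by deleting the grounded buses. The admittance values, the branch-incidence route to the 12x12 matrix, and the connection handled are all illustrative assumptions; this is not the authors' matrix operation code, and it does not reproduce the specific YN,d11 incidence matrix C of Eq. (4).

```python
# Toy sketch: primitive admittance of six coupled windings -> 12x12 bus matrix ->
# 6x6 phase-domain model of a YN,yn0 (grounded wye-wye) connection.
# All admittance values and the connection handling are illustrative assumptions.
import numpy as np

# Branch (winding) primitive admittance: 6 windings, with mutual coupling.
# Winding order: primary A-X, B-Y, C-Z, then secondary a-x, b-y, c-z.
ys  = 10.0 - 100.0j   # illustrative self admittance of each winding (p.u.)
ym1 = -9.9 + 99.5j    # primary-secondary mutual on the same core limb
ym2 = 0.5 - 2.0j      # inter-phase mutual (asymmetric magnetic circuit), lumped here
Yb = np.full((6, 6), ym2, dtype=complex)
np.fill_diagonal(Yb, ys)
for k in range(3):                 # strong coupling between windings on the same limb
    Yb[k, k + 3] = Yb[k + 3, k] = ym1

# Branch-to-bus incidence A (6 branches x 12 buses),
# bus order: A B C X Y Z a b c x y z; each winding runs from its "from" bus to its "to" bus.
frm = [0, 1, 2, 6, 7, 8]           # A, B, C, a, b, c
to  = [3, 4, 5, 9, 10, 11]         # X, Y, Z, x, y, z
A = np.zeros((6, 12))
for k in range(6):
    A[k, frm[k]] = 1.0
    A[k, to[k]] = -1.0

Y12 = A.T @ Yb @ A                 # 12x12 bus-level primitive admittance matrix

# YN,yn0: X, Y, Z, x, y, z are solidly grounded, so their rows/columns are removed,
# leaving the 6x6 phase-coordinate model on buses A, B, C, a, b, c.
keep = [0, 1, 2, 6, 7, 8]
Y_phase = Y12[np.ix_(keep, keep)]
print(np.round(Y_phase, 3))
```

For ungrounded or delta-connected windings, the grounded-bus deletion would be replaced by bus merging followed by elimination (Kron reduction) of the internal nodes, which is the role the incidence matrix C plays in the matrix operation method.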
The Y-bus model relationship of the transformer in YN,d11 connection between the C and Y P can be given by 12 12 [ ] The model in the equation (5) is the complete full transformer admittance model. Generally, we need to retain the related parameters of the three-phase voltage variables. Repeat Step 3, we can enable to obtain a 6x6 matrix (to a 8x8 matrix containing neutral buses). Modeling of three-phase transformers In this section, the generalized modeling methodology is applied to represent the three-phase constructions of transformers. The aim is to demonstrate the matrices of the models in derivation process. All the three-phase two winding transformers are defined in magnetic circuits of asymmetry configurations in this paper, and symmetric configuration is a special kind of asymmetric configurations. There are the magnetic circuits connecting closely of the transformer in three-phase three-limb core, besides the magnetic coupling of the primary and secondary windings of each phase, as well as the magnetic circuits coupling of the different phase windings as shown Fig.6. The effects of the coupling of the inter-phase windings are obvious when the transformer runs asymmetrically. Fig.6 Three-phase two winding transformer in magnetic circuit asymmetry configuration (the transformer in three-phase three-limb core) Considering mutual inductance between the windings, the branch current equation for conveniently analyzing mutual inductance is Bus X and x, Bus Y and y, Bus Z and z are checked in unequal potentials, and those buses do not connect together. The self-impedance of each winding is nearly equal in noload test, and the relationship of the self-impedance can be given as According to the nodal voltage equation (6) and equation (7), the nodal admittance matrix can be shown by 12x12 dimensions as 1 where m y is the mutual admittance between the primary and secondary sides of windings on the same iron-core. m y is the mutual admittance among different phases primary (secondary) sides of windings on the different ironcores. m y is the mutual admittance among different phases from the primary sides to secondary sides of windings on different iron-cores. s y is the self-admittance on the primary sides or the secondary sides of windings. The Bus X, Y, Z and Bus x, y, z of transformer can be connected according to the connection of transformer to express the corresponding voltages in neutral points, based on the method of Section 2. 1) YN,yn0 connection The network topology of the transformer in YN,yn0 connection can be shown as Fig.7. And its 6x6 Y-bus matrix can be given by the matrix operation method as The network topology of the transformer in the equation (10) can be shown as Fig.8. 2) YN,d11 connection In Fig.9, the connected topology of the transformer in YN,d11 connection is presented. For the transformer in the asymmetric magnetic circuits (such as the transformer in three-phase three-limb cores), the node admittance matrix is full because of the coupling among the windings. The boundary condition is given by the equation (4). And the Buses X, Y, Z are grounded. We enable to obtain the 6x6 Y-bus matrix by the matrix operation method retaining the three-phase variables, shown as According to different magnetic circuits, the Y-bus model can be given from analyzing the equation (11): l) Ignoring mutual-inductances among the three phases, the model is able to be seen as the connection by three single-phase transformers assembling. 
The 6x6 admittance matrix can be given as The network topology of the transformer in the equation (12) can be shown as Fig.10. 3) Plane magnetic circuit layout: , and the admittance matrix can be given by Method obtaining the admittances by phasecoordinates References in this paper attempt to convert directly the parameters of symmetry test into phase-coordinate parameters. In principle, unsymmetrical static three-phase equipment fails to form decoupled 1-2-0 sequence circuits. As a result, the phase-coordinate parameters converted by the decoupled 1-2-0 parameters are approximate values. Compensation method enables to reduce the errors, but it cannot be equivalent. In order to distinguish between nominal values in this section, variables and symbols representing per-unit values are marked "*" in the subscript. In this section, the admittance matrix in the equation (6) needs to analyze and calculate. Impedances obtained in the symmetry Considering the symmetry of structural parameters for three-phase Transformer, the relationship of the equation (6) in per-unit system enables to be expressed by I I I I I z z z is the mutual-admittance between the secondary windings. (1) No-load test: The test enables to obtain the two groups of data, and they are no-load current I 0 and no-load loss P 0 .The relationships of parameters in no-load state can be given by where y 0* is the no-load admittance; z 0* is no-load impedance; g 0* is the no-load conductance; b 0* is the no-load susceptance; S TN is the transformer's rated capacity. The no-load impedance can be expressed by (2) Short circuit test: The test enables to obtain the two groups of data, they are the percentage of short-circuit voltage V s % and short circuit power loss P k . The relationships of parameters at short-circuit state can be given by where R T* is the transformer winding resistance; * T z is the leakage reactance; x T* is the winding reactance. And the impedance in short-circuit state can be given by The equation (17)-(20) deduced under the condition of the symmetrical currents or voltages, the above parameters stand for the positive sequence parameters. The steps for extrapolating the phase-coordinate parameters can be shown as follows: Considering the same values of leakage reactance on the primary side and secondary side, the relationship (in value system) can be shown by Z  are the leakage reactances on the primary and secondary sides as Fig.2 (b) shown. The basic parameters in the symmetry in no load test can be given by The reluctances of transformer's magnetic circuit are mainly from the air gap between iron cores. According to the principle of magnetic circuit of phase separation test method, there is the relationship shown as follows: I I I I I jx z Fig.11 (a). While the relationships for our modified parameters are shown in Fig.11 (b), which enable to explain the work in this section intuitively. Impedances and admittances obtained in the asymmetry The magnetic circuits of the three-phase three-limb core transformer can be shown in Fig. 12. The branch equation (6) enables to express the relationships of the transformer conveniently. Considering the relations of the linear circuits without magnetic saturation, the impedance parameters in the equation (6) can be obtained by the open circuit test. Fig. 13 shows the coupling relations between the primary side and the secondary side. Fig.12). (2) Simulation calculation The IEEE 4 node feeder test system network is shown in the Fig. 14. 
The parameters of the test system are as follows: transformer ratio is 12.47(kV):24.9 (kV), and the parameters of transformer and line present in [13]. The tolerance for calculation is 10 -5 for testing. The unbalanced loads in bus 4 of the test system are 1250kW, 1800kW, 2375kW, and the power factors are -0.85, -0.9 and -0.95 respectively (Complex power of load marked as S 1 ). There are 6 types and parameters of the transformers for testing by power flow calculations. Table.2 presents the types and the parameters of the transformers. In Table.2, The admittance matrix in No.1 and No.2 groups of the transformers employ the method in [8] and [12]. And the rest of the transformers use the method in this paper. Tab Discussion The calculated results of No.1 groups at Tab.3 and Tab.4 are the same as [13].Comparing with the results in No.1 groups of Tab.3 and Tab.4, The results of No.2 groups are merely smaller, but less errors considering no-load parameters. While both the admittance matrix parameters in the two groups of transformers are calculated by the method in [8] and [12], which the parameters essentially are the positive sequence ones. The method employed in Section 5 of this paper is used to calculate the three-phase admittance matrix parameters, and the experiments also show the differences in the test. Comparing with the previously results, the results in No.3 group and No.4 group at Tab.3 and Tab.4 are a little smaller and closer in the same condition. Analysis from the physical definition, the true phase admittances are different from the positive sequence ones due to the demagnetization by three-phase symmetrical currents. And the results of different magnetic circuits are given in Tab. 3 and Tab.4 as well. Those results also are different from each other. In Table.5, comparing with the No.1 group, c-phase voltages are increasing gradually by the changes of loads in No.6 group, due to the increased unbalances of loads' currents. Comparing with symmetric component parameters, the effect of node voltages is more different by the phasecoordinates parameters. Consequently, the three-phase symmetrical currents have the demagnetization, which make the phase-impedances slightly smaller than the synthetic impedances (positive sequence impedance). In addition, the mutual-impedances effect significantly by asymmetry. Conclusions In this paper, the authors propose the phase-coordinate modeling method by a 12x12 primitive full admittance matrix to structure the 6x6 admittance matrix (extending to 8x8 matrix by retention of 2 neutral points) for three-phase transformers in different connections from asymmetric magnetic circuits by the matrix operation method. The phase-coordinate impedances have been corrected by considering the magnetization of three-phase symmetrical currents. There are the advantages shown as follows: •Modeling the transformers in different connections and asymmetric magnetic circuits; • Real phase-coordinates parameters , which enable calculations accurately. Finally, the matrix operation method also enables to use for modeling three-phase three-winding transformers usefully, besides the three-phase double-winding transformers. And the method applies to model the nonstandard transformers as well.
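The no-load and short-circuit test relations used above to obtain the per-unit parameters can be illustrated with the standard textbook conversions below. The function and the sample test data are hypothetical, and the expressions are the usual forms of these relations rather than a reproduction of the paper's Eqs. (17)-(20).

```python
# Sketch of the standard per-unit parameter extraction from transformer test data.
# Usual textbook relations for no-load and short-circuit tests; numbers are illustrative only.
import math

def transformer_pu_parameters(S_N_kVA, I0_percent, P0_kW, Vs_percent, Pk_kW):
    """Return per-unit no-load (g0, b0, y0) and series (R_T, x_T, z_T) parameters."""
    # No-load test: shunt (magnetizing) branch
    y0 = I0_percent / 100.0                    # no-load admittance magnitude (p.u.)
    g0 = P0_kW / S_N_kVA                       # no-load conductance (p.u.), iron losses
    b0 = math.sqrt(max(y0**2 - g0**2, 0.0))    # no-load susceptance (p.u.)
    # Short-circuit test: series (leakage) branch
    z_T = Vs_percent / 100.0                   # leakage impedance magnitude (p.u.)
    R_T = Pk_kW / S_N_kVA                      # winding resistance (p.u.), copper losses
    x_T = math.sqrt(max(z_T**2 - R_T**2, 0.0)) # leakage reactance (p.u.)
    return {"g0": g0, "b0": b0, "y0": y0, "R_T": R_T, "x_T": x_T, "z_T": z_T}

# Example with made-up test data for a 6000 kVA unit
print(transformer_pu_parameters(S_N_kVA=6000, I0_percent=0.8, P0_kW=9.0,
                                Vs_percent=5.5, Pk_kW=33.0))
```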
4,474.4
2020-12-14T00:00:00.000
[ "Engineering", "Physics" ]
Use it or lose it

Blocking the activity of neurons in a region of the brain involved in memory leads to cell death, which could help explain the spatiotemporal disorientation observed in Alzheimer's disease.

Alzheimer's disease is a neurodegenerative disorder that affects approximately 55 million people globally (Gauthier et al., 2021). One serious symptom of the disease is a tendency to become disorientated both in space and in time. Researchers attribute this, in part, to impairments in the medial temporal lobe of the brain, which includes the hippocampus and the entorhinal cortex. These two regions are involved in the formation and retrieval of memories; in particular, they are important for making spatial memories for navigation, including memories of place, time, head direction and environmental borders (see for example review by Knierim, 2015). To encode new memories and retrieve past ones, neurons in the network formed by the entorhinal cortex and the hippocampus have to maintain their plasticity - that is, their ability to form new connections between cells and remove ones which are rendered obsolete - throughout life. In mouse models of Alzheimer's disease, however, the role of hippocampal and entorhinal cells in learning and memory is impaired (Rechnitz et al., 2021; Ying et al., 2022). As these regions are highly interconnected, early-stage corruption in the entorhinal region is further amplified downstream in the hippocampus. This leads to a vicious cycle that may impact behavior and cause spatial and temporal disorientation. The specific mechanisms leading to a deterioration of the brain's neural networks in Alzheimer's disease are unclear. Similarly, it is unknown how this leads to memory loss. However, the impairment of neural networks within the entorhinal cortex often appears early in the progression of the disease (Stranahan and Mattson, 2010). Now, in eLife, Joanna Jankowsky and colleagues at Baylor College - including Rong Zhao, Stacy Grunke and Caleb Wood as joint first authors - report on how a neuronal population in layer II of the entorhinal cortex contributes to this impairment (Zhao et al., 2022). Their experiments show that blocking the activity of these neurons ultimately leads to increased cell death in this population.

Zhao et al. used a mouse model thought to undergo the same disruption to cell activity observed in Alzheimer's disease. However, in this model, the impairment is not induced by the amyloid-beta plaques or neurofibrillary tangles characteristic of the disease. This is interesting because, even though it had previously been established that neurons die during Alzheimer's disease, this was usually attributed to a vicious cycle of increased amyloid release driven by increased synchronization between cells causing hyperactivity (Busche and Konnerth, 2015; Zott et al., 2019). Instead, Zhao et al. show that, in their mouse model, cell deterioration and death in the entorhinal cortex are driven by silencing of specific neuronal activity. Zhao et al. also demonstrate that, in their mouse model, a competitive process between active and inactive cells in the entorhinal cortex precedes neurodegeneration. This type of competition is similar to what is seen in infantile neuronal plasticity during development, when neural circuits are refined by selecting neurons and neural pathways depending on their levels of activity. Following this refinement process, which was previously thought to end soon after birth, some neurons degenerate while others persist. The extension of infantile neuronal plasticity in entorhinal cortex cells into adulthood may act as a double-edged sword: on the one hand, the plasticity allows these cells to form and modulate memories throughout life; on the other hand, the cells are more vulnerable to malfunction and death through competition.

Sensory information from multiple modalities (i.e. tactile, olfactory, auditory etc.) converges into layers II and III of the entorhinal cortex, which form the major input sources for two regions in the hippocampus, called the dentate gyrus and CA3. These areas of the brain are part of a pathway that ultimately terminates in another hippocampal region known as CA1 (Witter et al., 2000). It is therefore suggested that the cell impairment observed in the entorhinal cortex by Zhao et al. resembles an isolated deficit that can occur in Alzheimer's disease. This impairment potentially forms an early seed for the later deterioration of neural networks and brain regions downstream, mainly in the hippocampus (Figure 1; Cacucci et al., 2008; Rechnitz et al., 2021; Roy et al., 2016). This may lead to the memory deficits and disorientation observed in Alzheimer's patients. The findings of Zhao et al. hold promise for potential new treatment strategies targeting the cell population in layer II of the entorhinal cortex early in Alzheimer's disease. These interventions have the potential to slow down the cascade of events leading to the onset of the condition.

Figure 1. (A) In healthy mice, the tri-synaptic pathway begins in the entorhinal cortex (MEC, pink) and terminates in the CA1 region (grey) of the hippocampus. A prominent population of stellate cells (dark blue) can be found in layer 2 (L2, pink) of the entorhinal cortex. First, signals travel from layer 2 of the entorhinal cortex through the perforant path (dark blue arrow) to the dentate gyrus (DG, light blue) and the CA3 (red). Signals then travel from the dentate gyrus to the CA3 through the mossy fibers (light blue arrow). From the CA3, signals are transmitted through axons known as Schaffer collaterals (red arrows) to the CA1. This pathway is known to be essential for the formation of spatial memory and navigation. The inset shows how a healthy mouse, in which this pathway is working correctly, can navigate a maze. (B) In mouse models of Alzheimer's disease, the results of Zhao et al. suggest that the stellate cells in the entorhinal cortex degenerate and corrupt the information.
1,310.4
2023-01-24T00:00:00.000
[ "Biology", "Psychology" ]
Nesting and reproduction of Pachycondyla striata ( Formicidae : Ponerinae ) in urban areas : an ant that offers risk of accidents It was conducted a research, in urban areas, on the nesting habits and reproductive period of Pachycondyla striata, a species of ant that stings painfully. The study was motivated by the frequent reports of accidents in the city of São Paulo. The reports are more common during the reproductive seasons of the species, when the winged females sting the population, since they enter the houses of the people attracted to light. Although anaphylaxis for P. striata has not been reported yet, other close species may cause anaphylaxis, which makes important to understand their biology in order to take management and control measures. Fourteen green areas in the city of São Paulo, Brazil, were monitored in the search for the species and their nests that were found in 64.3% of the areas. The nests are located around the trunk base of the trees, between the roots that protrude from the ground, under the rocks and through the cracks and crevices on the sidewalks. The spatial distribution of the nests is random. The reproductive period of P. striata was monitored from April 2012 to November 2013, through passive collection and laboratory colony. The nuptial flights occur during the cooler and drier months of the year, between July and September. Introduction Among invertebrates, ants are the most speciesrich and ecologically dominant animals of all the eusocial insects (HÖLLDOBLER; WILSON, 1990), and under certain circumstances some species may cause negative impacts, from altering an ecosystem by interfering with the mutualistic relationships, to becoming a medical or agricultural problem (HOLWAY et al., 2002). Pachycondyla striata Fr.Smith is a Ponerinae Neotropical ant species, distributed in South America between the parallels 10°S and 35°S.It is present in Brazil, Bolivia, Paraguay, Uruguay and North Argentina, and colonies have been found since the sea level until 1,300 meters in altitude (MACKAY;MACKAY, 2010).Medeiros and Oliveira (2009) excavated nests, and recorded from five to six interconnected chambers located between 5 and 80 cm beneath the ground surface, with two or eight entrances beneath the leaf litter (from 20 to 80 cm apart from each other), but with the ant traffic occurring only through a single main entrance.Colonies can be polygynous and Acta Scientiarum.Biological Sciences Maringá, v. 37, n. 3, p. 337-344, July-Sept., 2015 monogynous (RODRIGUES et al., 2011).It is considered a generalist predator ant, feeding on insect carcass, fruits, and hunting other arthropods (GIANNOTTI; MACHADO, 1994).The workers and alate reproductive forms are relatively large, from 13.2 to 16.7 mm (KEMPF, 1961).This species was already registered in Brazil in the Restinga Forest (PASSOS; OLIVEIRA, 2003), in the Eucalyptus forests (FONSECA; DIEHL, 2004), in the Atlantic Forest (ROSUMEK et al., 2008;MEDEIROS;OLIVEIRA, 2009), in Cerrado (SOARES et al., 2010) and in the urban area (SILVA-MELO; GIANNOTTI, 2010).Workers and queens are able to sting effectively, due to their predator habits (SILVA-MELO; GIANNOTTI, 2011), and they can be a threat to public health. 
Ants do not lose their stings after stinging their prey, and some venoms are neurotoxic, cytolytic, or may even cause both effects (HÖLLDOBLER; WILSON, 1990), including the Ponerinae ants.The efficacy of the sting as a defense weapon of the insects is based on the toxic properties and the pronounced allergenic effect of the secretion produced by the venom gland (ORTIZ; MATHIAS, 2006).Ponerinae ants include over 1,600 species (BOLTON et al., 2006), many of which are well known for their potent sting, which may cause anaphylactic shock.As an example, Brachyponera chinensis (Emery) (=Pachycondyla chinensis) is an Old World species that was introduced into the New World, that has a painful sting that can induce severe allergic reaction (MACKAY; MACKAY, 2010;GUÉNARD;DUNN, 2010). Pachycondyla striata may be a threat to human health due to the stings of the workers or alate females.Citizens report the accidents to the health technicians in the Municipality of São Paulo that, in turn, seek information in the research institutes.In this context, this study aimed to register where the P. striata nests are located, how their spatial distribution are characterized, and the reproductive period of the species.Such data are expected in order to inform the general public about this species and to be the bases for the management, control, and for the preventive measurements against this Ponerinae ant. Material and methods The study was conducted in the city of São Paulo (23°33'S; 46°37'W; 800 m in altitude), located in Southeast Brazil.The city has a humid subtropical climate (Cwa), according to the Köppen classification, characterized by a dry winter and a rainy summer, presenting annual precipitation of 1,376.2mm and annual average temperatures between 17 and 24 o C (CEPAGRI, 2014).São Paulo is within the Atlantic Forest biome, a world hotspot characterized as a mosaic of plant physiognomies that range from dense, open and mixed rainforests.It is the biggest city of Latin America and one of the five biggest metropolitan areas of the world (IBGE, 2000).The city is almost completely urbanized with many buildings that offer suitable conditions for several synantropic species, including ants (RIBEIRO et al., 2012;PIVA;CAMPOS, 2012). Survey of Pachycondyla striata nests Fourteen urban green areas in the city were visited in the period from April 2012 to April 2013 (Figure 1) for the search of P. striata nests and collections of workers.Depending on the size of the studied area, each area was visited once or twice in the period.One person did the searches for the nests at random, from 10h00 am up to 4h00 pm, uninterruptedly.When workers were seen foraging, they were followed until their return to the nests.They were then collected with forceps and placed into vials in 90% alcohol for further identification, following Kempf (1961) and Mackay and Mackay (2010).All nest locations were described, and the data are shown as being of presence/ absence of nests in the area. Spatial distribution of nests The spatial distribution of the nests of P. 
striata was determined in the park of the Instituto Biológico (IB) (Figure 1), a research institute, in October 2013.The evaluated area comprises a coffee crop with 1,500 plants, lawns, gardens with flowers, bushes, trees, palm trees, and buildings that house laboratories, administrative units and a library.The park is not open to public visitors as the other surveyed areas.The gardens, lawns and buildings are surrounded by paved areas in order to allow the traffic of cars and people. The area was subdivided into 25 blocks of 10 x 36 m in order to facilitate the counting of nests.Every garden, lawn, tree root, rocks and crevices on the paved area were carefully searched for nests, by one person, in each block, which was daily visited for five consecutive days, from 10h00 am to 15h00 pm.When the nests were found, due to the flux of ant workers, their positions were plotted on the map of the park for further evaluation.Only the nests with visual activity of ants were included in this study in order to avoid the inclusion of abandoned nests, or to count different entrances of the same nest, once this species uses only one main entrance for the ant traffic (MEDEIROS; OLIVEIRA, 2009).In order to evaluate the spatial distribution of the nests, it was used the Dispersion Index (I), and to calculate it, the estimated rate of variance (S 2 ) and the mean number of nests were taken (LUDWIG; REYNOLDS, 1988;KREBS, 1989).For an aggregate distribution I = S 2 / x >1, for a uniform distribution I = S 2 / x <1, and for a random distribution I = S 2 / x =1.In order to test the significance of the value I observed, the chi-square test was used at a 5% significance level. Reproductive period Three sites were chosen to study the reproductive period, one park and two buildings.Together, they totaled 19 months of observation. The chosen park was located at the Instituto Biológico (Figure 1).Two light traps for collecting nocturnal reproductive forms were placed (model Luiz de Queiroz, using 15 W lamps with black light blue bulbs) at a height between two and seven meters from the soil in order to intercept the reproductive forms of P. striata in the different heights.One Malaise trap for collecting diurnal alates was also placed near the light traps, on the soil. The insects captured were removed weekly from the traps from August 2012 to November 2013.The light traps had photoelectric cells to turn them on at sunset and off at sunrise.The insects attracted to the light traps were preserved in 1% formaldehyde solution in the field and taken to the laboratory, where they were thus, placed in 90% ethanol for subsequent sorting and identification.The gender and number of ants were evaluated.The voucher material is stored in the Coleção Entomológica Adolph Hempel (CEAH) of the Instituto Biológico. In a building (hereinafter referred to as building A), another light trap was placed at a height of 45 meters, on the last floor, which had an open area.The light trap was left there from April 2012 to November 2013.The insects were weekly collected, and the ants were separated from the other taxa for identification.The Building A was chosen for the study due to the several complaints about the Ponerinae alate ants, mainly on the last floors, stinging the employees who worked in the building. 
In the other building (hereinafter referred to as building B), the owner of the apartment on the last floor, at 80 meters in height, agreed to notify the authors of this research whenever nuptial flights of the ants were observed on his property. The owner had reported to the Department of Health of the city of São Paulo (at SUVIS, Campo Limpo) the presence of alate ants flying around and stinging members of his family. According to the biologists from the Department of Health, the complaints occurred annually, from 2010 to 2012, in the drier season, from July to August, at which point they requested the assistance of the Instituto Biológico in order to identify the ant species and understand this biological event. The species was P. striata. An artificial rearing of recently mated P. striata females was also implemented in order to register the exact time of alate production. Sixty females were collected on August 30, 2012, taken alive to the laboratory, and placed in two terrariums of 50 x 30 x 40 cm, with 30 females in each terrarium. Ants were provided with soil and water and fed daily with Tenebrio molitor Linnaeus (Coleoptera: Tenebrionidae), bee larvae, and blackberries. The rearing was observed daily for 12 months to record the emergence of workers and reproductive forms. Meteorological records for the study period were obtained from the National Institute of Meteorology - Mirante de Santana station (INMET), in São Paulo. Results and discussion Survey of Pachycondyla striata nests Workers of P. striata forage individually or in pairs on the soil, and they were easily found walking among plants and on the paved spaces of the surveyed areas. Tandem running, the habit of two workers walking in a pair, is a common behavior in the genus Pachycondyla, used to lead nestmates to new localities such as rich food sources or new nest sites (TRANIELLO; HÖLLDOBLER, 1984). Workers and their nests were found in nine of the 14 surveyed green areas, mainly around the base of tree trunks, among roots that project from the ground, under rocks, and in the crevices of the sidewalks. They were never found directly on the lawns and gardens. Medeiros and Oliveira (2009) and Silva-Melo and Giannotti (2010) also reported that nests were located in shaded areas near trees of medium and large size, in places with few herbaceous plants, on the campus of the State University of São Paulo (UNESP) in the city of Rio Claro, Brazil. In five parks (35.7%), Parque Trianon, Parque da Independência, Parque da Água Branca, Parque da Luz, and Parque Villa Lobos (Figure 1), neither workers nor nests of this ant species were found. In these monitored green areas, the ground was cleaned weekly to remove fallen leaves, with the exception of the Parque Trianon, where a huge layer of leaf litter was present. The litter layer hindered the search for nests, but even in the cleanest gardens and lawns no nests were found. The Parque da Independência, Parque da Água Branca, and the Parque da Luz had the most compacted soils, due to the grass and to the use of the lawns by park visitors for resting and outdoor activities, which may interfere with the establishment of the ant species. In the Parque da Água Branca, several free-ranging birds, such as chickens, ducks, and peacocks, are present; they scratch the ground all day, which may limit nesting by P.
striata. The Parque Villa Lobos was built on a swampy area; the soil is often waterlogged, with puddles accumulating on the lawns and gardens, so the environment is clearly not suitable for the establishment of the ants. The most common species in this park was Solenopsis invicta Buren, with few other ant species observed. The Solenopsis workers were collected and identified through the mitochondrial gene cytochrome oxidase I (COI) (GUSMÃO et al., 2010). Spatial distribution of nests Twenty-six nests of P. striata were found in the sampled area (0.9 ha) in the park of the Instituto Biológico, most of them among tree roots and in the crevices of the sidewalks around the buildings, as were the nests found in the other areas surveyed in this study. The locations of the nests corroborate the observations made by Silva-Melo and Giannotti (2010) and Medeiros and Oliveira (2009), who reported nests in shady areas and in close proximity to live trees. The spatial distribution of the nests was random (I = 1.04, χ² = 24.96, d.f. = 24) throughout the area. Based on such a distribution, when searching for nests of P. striata, every tree root, every crevice in the sidewalks, and the surroundings of human constructions should be inspected. Search efforts can be concentrated in these places, so that the open soil does not need to be directly investigated. Reproductive period The light traps captured 1,348 alates of P. striata at building A (Figure 2). At the park of the Instituto Biológico (IB), the Malaise trap did not capture any ant, and the light traps captured only five specimens, in August 2013. The poor efficiency in capturing specimens at the IB was due to the time of the nuptial flights, from 10h00 am to 1h00 pm, and to the occurrence of the mating flights at high altitudes. Since the Malaise trap was placed on the ground, it was not efficient in capturing the insects, or the nocturnal flights were not common, which explains the low capture by the light traps. Additionally, as the light trap in building A was placed on a terrace at a height of 45 meters, males and females gathered in the place during the day and were attracted by the lamp during the night, which explains the efficiency of the capture in this latter case. In the first year of the study, flight activities occurred only in late August/early September. In the second year, they occurred from July to September, but peaks occurred in late August/early September, during the winter, the driest and coldest season in southeast Brazil, declining sharply after September 18th and ceasing during the last week of September (Figure 2). According to Wolda (1986, 1992), the timing of peaks may vary from year to year. Males were captured more frequently than females, at a ratio of 1.6♂:1♀ (826♂:522♀). It is not possible to infer whether the higher number of males captured is an artifact of the light trap design or whether the male numbers are indicative of the mating strategies of this species. According to the published records, two types of mating behaviors seem to predominate in ants (HÖLLDOBLER; BARTZ, 1985; HÖLLDOBLER; WILSON, 1990): (i) males that fly in large swarms or gather in central places where females can choose among males, and (ii) females that call males. In this latter behavior, flights tend to be asynchronous and may last for many months, with occasional males flying out to search for calling females.
The reproductive period varies among ant species and geographical regions, and a huge amount of energy is spent to produce the reproductive forms, males and females (HÖLLDOBLER; WILSON, 1990). Synchronism in the production of alate specimens in nearby colonies is essential for successful reproduction. Such synchronism can be triggered by abiotic factors such as temperature, rainfall and luminosity (KASPARI et al., 2001). In the city of Rio Claro, 180 km away from the city of São Paulo, Silva-Melo and Giannotti (2010) registered a maximum of 80 males when a nest of P. striata was excavated. Based on this information, the high number of reproductive forms captured in our study indicates a synchronization of the nuptial flights of several nests in the city of São Paulo in the same period. In both buildings, A and B, the reproductive forms were observed for consecutive days in the same period in which the alate forms were captured in the light traps, from August to the end of September. In the artificial rearing boxes, 45 (75%) of the captured females died. The 15 remaining females produced only males in April 2013, but both sexes emerged from August to September. In the same period, the nuptial flights were recorded in the city of São Paulo. Although we did not measure the workers and alate specimens, they seemed to be as large as reported by Kempf (1961). The first workers emerged in November 2012, three months after the establishment of the artificial rearing. The alate forms of P. striata already deposited in the CEAH (Table 1) were collected at different altitudes and latitudes in the southeast region of Brazil, as in this study. The data showed that this species is highly synchronous in its nuptial flight time, except in the municipality of Ouro Fino, Minas Gerais State, where the reproductive forms were captured in October. Mackay and Mackay (2010) have also recorded dealate females (that is, females that had already been inseminated) in this month in Brazil. Two additional episodes of alate records occurred in April and May of 2012 and 2013. Four males were captured in building A and, in the same period, the artificial rearing colony also produced males. Other surveys should be conducted in other regions of Brazil in order to determine whether the time of reproduction is similar to that observed in this study. Conclusion Nests of Pachycondyla striata were found in nine of the 14 monitored green areas in São Paulo, São Paulo State, located around the base of tree trunks, among their roots, under rocks, and in crevices in the sidewalks. The spatial distribution of the nests is random. The nuptial flights occur from July to September, with swarming from 10h00 am to 1h00 pm, at an altitude between 45 and 80 m above the ground. Such data can help to locate the nests for the control of the species and to prevent flying ants from entering households during the mating periods. Figure 2. Reproductive period of Pachycondyla striata in the city of São Paulo, with data on temperature and rainfall.
4,484.2
2015-07-01T00:00:00.000
[ "Biology" ]
LCP-dropout: Compression-based Multiple Subword Segmentation for Neural Machine Translation In this study, we propose a simple and effective preprocessing method for subword segmentation based on a data compression algorithm. Compression-based subword segmentation has recently attracted significant attention as a preprocessing method for training data in Neural Machine Translation. Among them, BPE/BPE-dropout is one of the fastest and most effective method compared to conventional approaches. However, compression-based approach has a drawback in that generating multiple segmentations is difficult due to the determinism. To overcome this difficulty, we focus on a probabilistic string algorithm, called locally-consistent parsing (LCP), that has been applied to achieve optimum compression. Employing the probabilistic mechanism of LCP, we propose LCP-dropout for multiple subword segmentation that improves BPE/BPE-dropout, and show that it outperforms various baselines in learning from especially small training data. Introduction 1.Motivation Subword segmentation has been established as a standard preprocessing method in neural machine translation (NMT) [1,2].In particular, byte-pair-encoding (BPE)/BPE-dropout [3,4] is the most successful compression-based subword segmentation.We propose another compression-based algorithm, denoted by LCP-dropout, that generates multiple subword segmentations for the same input; thus, enabling data augmentation especially for small training data. In NMT, a set of training data is given to the learning algorithm, where a training data is a pair of sentences from the source and target languages.The learning algorithm first transforms the given each sentence into a sequence of tokens.In many cases, the tokens correspond to words in the unigram language model.The extracted words are projected from a high-dimensional space consisting of all words to a low-dimensional vector space by word embedding [5], which enables us to easily handle distances and relationships between words and phrases.The word embedding have been shown to boost the performance of various tasks [6,7] in natural language processing.The space of word embedding is defined by a dictionary constructed from the training data, where each component of the dictionary is called vocabulary.Embedding a word means representing it by a set of related vocabularies. Constructing appropriate dictionary we tackle in this study is one of the most important tasks.Here, consider a simplest strategy that uses the words themselves in the training data as the vocabularies.If a word does not exist in the current dictionary, it is called an unknown word, and the algorithm decides whether or not to register it in the dictionary.Using a sufficiently large dictionary can reduce the number of unknown words as much as desired; however, as a trade-off, overtraining is likely to occur, so the number of vocabularies is usually limited to 16k and 32k.Therefore, the subword segmentation has been widely used to construct a small dictionary with high generalization performance [8][9][10][11][12]. Related works Subword segmentation is a recursive decomposition of a word into substrings.For example, let the word 'study' be registered as a current vocabulary.By embedding other words 'studied' and 'studying', we can learn that these three words are similar.However, each time a new word appears, the number of vocabularies grows monotonically. 
On the other hand, when we focus on the common substrings of these words, we can obtain a decomposition, such as 'stud y', 'stud ied', and 'stud ying' with the explicit blank symbol ' '.Therefore, the idea of subword segmentation is not to register the word itself as a vocabulary but to register its subwords.In this case, 'study' and 'studied' are regarded as known words because they can be represented by combining subwords already registered.These subwords can also be reused as parts of other words (e.g., student and studied), which can suppress the growth of vocabulary size. In the last decade, various approaches have been proposed along this line.SentencePiece [13] is a pioneering study based on likelihood estimation over unigram language model, having high performance.Since maximum likelihood estimation requires a quadratic time in the size of training data and the length of longest subword, a simpler subword segmentation [3] based on BPE [14,15], which is known as one of fastest data compression algorithms, and therefore has many applications, especially in information retrieval [16,17] has been proposed.BPE-based segmentation starts from the state where a sentence is regarded as a sequence of vocabularies where the set of vocabularies is initially identical to the set of alphabet symbols (e.g., ASCII characters).BPE calculates the frequency of any bigram, merges all occurrences of the most frequent bigram, and registers the bigram as a new vocabulary.This process is repeated until the number of vocabularies reaches the limit.Thanks to the simplicity of the frequency-based subword segmentation, BPE runs in linear time in the size of input string.However, frequency-based approach may generates inconsistent subwords for same substring occurrences.For example, 'impossible' and its substring 'possible' are possibly decomposed into undesirable subwords, such as 'po ss ib le' and 'i mp os si bl e', depending on the frequency of bigrams.Such merging disagreement can also be caused by misspellings of words or grammatical errors.BPE-dropout [4] proposed a robust subword segmentation for this problem by ignoring each merging with a certain probability.It has been confirmed that BPE-dropout can be trained with higher accuracy than the original BPE and SentencePiece on various languages. Our contribution We propose LCP-dropout: a novel compression-based subword segmentation employing the stochastic compression algorithm, called locally-consistent parsing (LCP) [18,19], for improving the shortcomings of BPE.Here, we describe an outline of original LCP.Suppose we are given an input string and a set of vocabularies, where similarly to BPE, the set of vocabularies is initially identical to the set of symbols appearing in the string.LCP randomly assigns the binary label for each vocabulary.Then, we obtains a binary string corresponding to the input string where the bigram '10' works as a landmark.LCP merges any bigram in the input string corresponding to a landmark in the binary string, and adds the bigram to the set of vocabularies.The above process is repeated until the number of vocabularies reaches the limit. 
By this random assignment, it is expected that any sufficiently long substring contains a landmark. Furthermore, we note that two different landmarks never overlap each other. Therefore, LCP can merge bigrams appropriately, avoiding the undesirable subword segmentations that occur in BPE. Using this characteristic, LCP has been theoretically shown to achieve almost optimal compression [19]. The mechanism of LCP has also been applied mainly to information retrieval [18,20,21]. A notable feature of the stochastic algorithm is that LCP assigns a new label to each vocabulary for each execution. Owing to this randomness, the LCP-based subword segmentation is expected to generate different subword sequences representing the same input; thus, it is more robust than BPE/BPE-dropout. Moreover, these multiple subword sequences can be regarded as data augmentation for small training data in NMT. LCP-dropout consists of two strategies: landmarks obtained by random labeling of all vocabularies, and dropout of bigram merges depending on their rank in the frequency table. Our algorithm requires no segmentation training beyond the counting used by BPE and the labeling used by LCP, and it uses standard BPE/LCP at test time; it is therefore simple. With various language corpora, including small datasets, we show that LCP-dropout outperforms the baseline algorithms: BPE/BPE-dropout/SentencePiece. Background We use the following notations throughout this paper. Let A be the set of alphabet symbols, including the blank symbol. A sequence S formed by symbols is called a string. S[i] and S[i, j] are the i-th symbol of S and the substring from S[i] to S[j], respectively. We assume a meta symbol '−' not in A to explicitly represent each subword in S. For a string S over A ∪ {−}, a maximal substring of S containing no − is called a subword. For example, S = a − b − a − a − b / a − b − a − ab contains the subwords in {a, b} / {a, b, ab}, respectively. In subword segmentation, the algorithm first separates all the symbols in S by the meta symbol. When a − b is merged, the meta symbol is erased and the new subword ab is added to the vocabulary, i.e., ab is treated as a single vocabulary. In the following, we describe previously proposed subword segmentation algorithms, namely SentencePiece (Kudo [13]), BPE (Sennrich et al. [3]), and BPE-dropout (Provilkov et al. [4]). We assume that our task in NMT is to predict a target sentence T given a source sentence S; these methods, including our approach, are not task-specific. SentencePiece SentencePiece [13] can generate different segmentations for each execution. Here, we outline SentencePiece in the unigram language model. Given a set of vocabularies V, a sentence T, and the probability p(x) of occurrence of each x ∈ V, the probability of a partition x = (x_1, ..., x_n) of T is P(x) = Π_i p(x_i). The optimum partition x* for T is obtained by searching for the x that maximizes P(x) over all candidate partitions x ∈ S(T). Given a set of sentences D as training data for a language, the subword segmentation for D can be obtained through maximum likelihood estimation of the likelihood L = Σ_s log ( Σ_{x ∈ S(X^(s))} P(x) ), with P(x) as a hidden variable, by using the EM algorithm, where X^(s) is the s-th sentence in D. SentencePiece was shown to achieve significant improvements over the method based on word sequences. However, this method is rather complicated because it requires a unigram language model to predict the probability of subword occurrence, the EM algorithm to optimize the lexicon, and the Viterbi algorithm to create segmentation samples.
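To illustrate the unigram-model scoring behind SentencePiece, the following sketch finds the maximizing partition x* of a sentence by dynamic programming, given a table of subword probabilities p(x); the EM training of p(x) is omitted, and the function name and the toy probability table are illustrative assumptions, not the reference implementation.

import math

def best_segmentation(sentence, subword_prob, max_len=8):
    # Viterbi search: best[i] = (max log-probability of sentence[:i], backpointer j).
    n = len(sentence)
    best = [(-math.inf, -1)] * (n + 1)
    best[0] = (0.0, -1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            piece = sentence[j:i]
            if piece in subword_prob and best[j][0] > -math.inf:
                score = best[j][0] + math.log(subword_prob[piece])
                if score > best[i][0]:
                    best[i] = (score, j)
    if best[n][0] == -math.inf:
        return None  # no partition of the sentence exists with this vocabulary
    pieces, i = [], n
    while i > 0:
        j = best[i][1]
        pieces.append(sentence[j:i])
        i = j
    return list(reversed(pieces)), best[n][0]

# Example with a made-up probability table:
# best_segmentation("ababb", {"a": 0.3, "b": 0.3, "ab": 0.2, "abb": 0.2})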
BPE and BPE-dropout BPE [14] is one of the practical implementations of Re-pair [15], which is known as the algorithm with the highest compression ratio. Re-pair counts the frequency of occurrence of all bigrams xy in the input string T. For the most frequent xy, it replaces all occurrences of xy in T such that T[i, i+1] = xy with some unused character z. This process is repeated until there is no more frequent bigram in T. The compressed T can be recursively decoded by the stored substitution rules z → xy. Since the naive implementation of Re-pair requires O(|T|²) time, a complex data structure is needed to achieve linear time. However, this is not practical for large-scale data because it consumes Ω(|T|) space. Therefore, we usually split T = t_1 t_2 ... t_m into substrings of constant length and process each t_i by the naive Re-pair without the special data structure; this variant is called BPE. Naturally, there is a trade-off between the size of the split and the compression ratio. BPE-based subword segmentation [3] (called simply BPE) determines the priority of bigrams according to their frequency and adds the merged bigrams as vocabularies. Since BPE is a deterministic algorithm, it splits a given T in only one way. Thus, it is not easy to generate multiple partitions as in stochastic approaches (e.g., [13]). Therefore, BPE-dropout [4], which ignores the merging process with a certain probability, has been proposed. In BPE-dropout, for the current T and the most frequent xy, for each occurrence i satisfying T[i, i+1] = xy, merging xy is dropped with a certain small probability p (e.g., p = 0.1). This mechanism makes BPE-dropout probabilistic and generates a variety of splits. BPE-dropout has been reported to outperform SentencePiece in various languages. Additionally, BPE-based methods are faster and easier to implement than likelihood-based approaches. LCP Frequency-based compression algorithms (e.g., [14,15]) are known not to be optimal from a theoretical point of view. Optimum compression here refers to a polynomial-time algorithm whose output size |A(T)| is bounded with respect to that of an optimum solution, where A(T) is the output of the algorithm for the input T and A*(T) is an optimum solution. Note that computing A*(T) is NP-hard [22]. On certain strings, frequency-based merging produces pathological segmentations; since such pathological merging cannot be prevented by frequency information alone, frequency-based algorithms cannot obtain asymptotically optimum compression [23]. Various linear-time and optimal compressions have been proposed to improve this drawback. LCP is one of the simplest optimum compression algorithms. The original LCP, like Re-pair, is a deterministic algorithm. Recently, the introduction of probability into LCP [19] has been proposed, and in this study we focus on the probabilistic variant. The following is a brief description of the probabilistic LCP. We are given an input string T = a_1 a_2 ... a_n of length n and a set of vocabularies V. Here, V is initialized as the set of all characters appearing in T. The algorithm then (1) assigns a binary label L(a) ∈ {0, 1} to each vocabulary a ∈ V uniformly at random, (2) locates every occurrence of the landmark '10' in the label string induced by T, (3) merges each bigram a_i a_{i+1} whose labels form a landmark, and (4) sets V = V ∪ {a_i a_{i+1}} and repeats the above process.
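To make the two merging rules just described concrete, the following sketch contrasts one frequency-based merge step with optional dropout (BPE/BPE-dropout style) and one landmark-based merge round (probabilistic LCP style); the function names are illustrative assumptions, not the original implementations.

import random
from collections import Counter

def bpe_merge_step(tokens, vocab, p_drop=0.0):
    # Merge the most frequent adjacent pair; with p_drop > 0, individual
    # occurrences are skipped with that probability (BPE-dropout behavior).
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens, vocab
    (a, b), _ = pairs.most_common(1)[0]
    vocab.add(a + b)
    out, i = [], 0
    while i < len(tokens):
        if (i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b
                and random.random() >= p_drop):
            out.append(a + b)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out, vocab

def lcp_merge_round(tokens, vocab):
    # Assign a random binary label to every vocabulary item and merge every
    # bigram whose labels read '10' (a landmark); landmarks never overlap.
    label = {a: random.randint(0, 1) for a in vocab | set(tokens)}
    out, i = [], 0
    while i < len(tokens):
        if (i + 1 < len(tokens)
                and label[tokens[i]] == 1 and label[tokens[i + 1]] == 0):
            merged = tokens[i] + tokens[i + 1]
            vocab.add(merged)
            out.append(merged)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out, vocab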
The difference between LCP and BPE is that BPE merges bigrams with respect to their frequencies, whereas LCP pays no attention to them. Instead, LCP merges based on the randomly assigned binary labels. The most important point is that any two occurrences of '10' never overlap. For example, when T contains a trigram abc, there is no possible assignment allowing (ab)c and a(bc) simultaneously. By this property, LCP can avoid the problem that frequently occurs in BPE. Although LCP theoretically guarantees almost optimum compression, as far as the authors know, this study is the first application of LCP to machine translation. 3 Our Approach: LCP-dropout BPE-dropout allows diverse subword segmentations for BPE by ignoring bigram merges with a certain probability. However, since BPE is a deterministic algorithm, it is not trivial to generate various candidate bigrams. In this study, we propose an algorithm that enables multiple subword segmentation for the same input by combining the theory of LCP with the original strategy of BPE. Algorithm description We define the notations used in our algorithm. Let A be an alphabet and ' ' the explicit blank symbol not in A. A string formed from A is called a word, denoted by x ∈ A*, and a string s ∈ (A ∪ { })* is called a sentence. We also assume the meta symbol '−' not in A ∪ { }. With it, a sentence x is extended so that all possible merges can be represented: let x̄ be the string of all symbols in x separated by −, e.g., x̄ = a−b−a−b−b for x = ababb. For strings x and y, if y is obtained by removing some occurrences of − from x̄, then y is said to be a subword segmentation of x. After merging a − b (i.e., a − b is replaced by ab), the substring ab is treated as a single symbol. Thus, we extend the notion of bigram so that its two components may be vocabularies longer than a single symbol. For a string of the form s = α_1 − α_2 − ... − α_n such that each α_i contains no −, each α_i − α_{i+1} is defined to be a bigram consisting of the vocabularies α_i and α_{i+1}. Example run Table 1 presents an example of subword segmentation using LCP-dropout. Here, the input X consists of the single sentence ababcaacabcb. The hyperparameters are (v, ℓ, k) = (6, 5, 0.5). First, the set of vocabularies is initialized to the set of symbols appearing in the input; the remaining steps of the example are described in the caption of Table 1. Framework of neural machine translation Figure 1 shows the framework of our transformer-based machine translation model with LCP-dropout. The Transformer is the most successful NMT model [25]. The model mainly consists of an Encoder and a Decoder. The Encoder converts the input sentence in the source language into a word embedding (Emb_i in Figure 1), taking into account the positional information of the characters. Here, the notion of word is extended to that of subword in this study. The subwords are obtained by our proposed method, LCP-dropout. Next, the correspondences within the input sentence are acquired as attention (Multi-Head Attention). Then, normalization is performed through a feed-forward network formed by a linear transformation, activation by the ReLU function, and another linear transformation. These processes are performed in N = 6 layers for the Encoder. The Decoder receives the candidate sentence generated by the Encoder and the input sentence for the Decoder. Then, it acquires the correspondence between those sentences as attention (Multi-Head Attention). This process is also performed in N = 6 layers. Finally, the predicted probability of each label is calculated by a linear transformation and the softmax function.
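Before moving to the experimental setup, here is a minimal sketch of the LCP-dropout procedure described in this section: random binary labels define landmark bigrams, only the top-k most frequent landmark bigrams are merged in each inner step, the inner loop stops once ℓ vocabularies have been collected, and the whole process restarts with fresh labels to produce multiple segmentations Y_1, Y_2, ... of the same input. The function name, the interpretation of k as a fraction of landmark bigrams, and the termination safeguards are assumptions made for illustration only.

import random
from collections import Counter

def lcp_dropout(tokens, v, ell, k_frac, max_rounds=10):
    """Generate multiple segmentations of the same token list.
    v: total vocabulary limit, ell: per-segmentation threshold (ell <= v),
    k_frac: fraction of landmark bigrams kept per inner step (assumed reading of k)."""
    base_vocab = set(tokens)
    vocab = set(base_vocab)
    segmentations = []
    for _ in range(max_rounds):
        seg, local_vocab = list(tokens), set(base_vocab)
        while len(local_vocab) < ell and len(seg) > 1:
            label = {a: random.randint(0, 1) for a in local_vocab | set(seg)}
            landmarks = Counter((a, b) for a, b in zip(seg, seg[1:])
                                if label[a] == 1 and label[b] == 0)
            if not landmarks:
                continue  # resample labels until at least one landmark appears
            keep = {pair for pair, _ in
                    landmarks.most_common(max(1, int(k_frac * len(landmarks))))}
            out, i = [], 0
            while i < len(seg):
                if i + 1 < len(seg) and (seg[i], seg[i + 1]) in keep:
                    out.append(seg[i] + seg[i + 1])
                    local_vocab.add(out[-1])
                    i += 2
                else:
                    out.append(seg[i])
                    i += 1
            seg = out
        segmentations.append(seg)
        vocab |= local_vocab
        if len(vocab) >= v:
            break
    return segmentations, vocab

# e.g. lcp_dropout(list("ababcaacabcb"), v=6, ell=5, k_frac=0.5)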
4 Experimental Setup Baseline algorithms The baseline algorithms are SentencePiece [13] with the unigram language model and BPE/BPE-dropout [3,4]. SentencePiece takes the hyperparameters l and α, where l specifies how many best segmentations for each word are produced before sampling and α controls the smoothness of the sampling distribution. In our experiment, we use (l = 64, α = 0.1), which performed best on different data in the previous studies. Table 1: Example of multiple subword segmentation using LCP-dropout for the single sentence x = ababcaacabcb with the hyperparameters (v, ℓ, k) = (6, 5, 0.5), where the meta symbol − is omitted and L is the label of each vocabulary assigned by LCP. The resulting subword segmentations are Y_1 = ab−abc−a−a−c−abc−b and Y_2 = ab−ab−ca−a−ca−b−c−b. BPE-dropout takes the hyperparameter p: during segmentation, at each step, some merges are randomly dropped with probability p. If p = 0, the segmentation is identical to the original BPE; if p = 1, the algorithm outputs the input string itself. Thus, the value of p can be used to control the granularity of the segmentation. In our experiment, we use p = 0 for the original BPE and p = 0.1 for BPE-dropout, which gave the best performance. Data sets, preprocessing, and vocabulary size We verify the performance of the proposed algorithm on a wide range of datasets with different sizes and languages. Table 2 summarizes the details of the datasets and hyperparameters. These data are used to compare the performance of LCP-dropout and the baselines (SentencePiece/BPE/BPE-dropout) with the appropriate hyperparameters and vocabulary sizes shown in [4]. Before subword segmentation, we preprocess all datasets with the standard Moses toolkit, except that for Japanese and Chinese the subword segmentations are trained almost from raw sentences, because these languages have no explicit word boundaries and thus the Moses tokenizer does not work correctly. Based on recent research on the effect of vocabulary size on translation quality, the vocabulary size is adjusted according to the dataset size in our experiments (Table 2). To verify the performance of the proposed algorithm on small training data, we use News Commentary v16, a subset of WMT14, as well as KFTT. In addition, we use the large training data of WMT14. The number of training steps is set to 200,000 for all data. In training, pairs of sentences of the source and target languages were batched together by approximate length. As shown in Table 2, the batch size was standardized to approximately 3k for all datasets. Model, optimizer, and evaluation NMT was realized by the seq2seq model, which takes a sentence in the source language as input and outputs a corresponding sentence in the target language [24]. The Transformer [25] is an improved seq2seq model and the most successful NMT architecture. In our experiments, we used OpenNMT-tf [26], a Transformer-based NMT toolkit, to compare LCP-dropout and the other baseline algorithms. The parameters of OpenNMT-tf were set as in the experiments of BPE-dropout [4]. The batch size was set to 3072 for training and 32 for testing. We also use the regularization and optimization procedure described in BPE-dropout [4]. The quality of machine translation is quantitatively evaluated by the BLEU score, i.e., the similarity between the translation result and the reference. It is calculated from the number of matching n-grams. Let t_i and r_i (1 ≤ i ≤ m) be the i-th translation and reference sentences, respectively.
where N is a small constant (e.g., N = 4) and BP_BLEU is the brevity penalty applied when |t_i| < |r_i|, with BP_BLEU = 1 otherwise. In this study, we use SacreBLEU [27]. For Chinese, we add the option --tok zh to SacreBLEU. Meanwhile, we use character-based BLEU for Japanese. Estimation of hyperparameters for LCP-dropout First, we estimate suitable hyperparameters for LCP-dropout. Table 3 summarizes the effect of the hyperparameters (v, ℓ, k) on the proposed LCP-dropout. This table shows the details of multiple subword segmentation using LCP-dropout and the BLEU scores for the language pair English (En) and German (De) from News Commentary v16 (Table 2). The threshold k controls the dropout rate and contributes to the multiplicity of the subword segmentation. The results show that k and ℓ affect the learning accuracy (BLEU). The best result is obtained with (k, ℓ) = (0.01, v/2). This can be explained by the results in Table 4, which shows the depth of the executed inner loop of LCP-dropout, i.e., the number of random {0, 1} label assignments to vocabularies; for ℓ = v/2, the reported value is the average depth before the outer loop terminates. The larger this value is, the more likely it is that longer subwords will be generated. However, unlike BPE-dropout, the value of k alone is not enough to generate multiple subwords. The proposed LCP-dropout guarantees diversity by re-initializing the subword segmentation, controlled by ℓ (ℓ < v). Using this result, we fix (k, ℓ) = (0.01, v/2) as the hyperparameters of LCP-dropout. Comparison with baselines Table 5 summarizes the main results. We show BLEU scores for News Commentary v16 and KFTT; the En–De results are the same as in Table 3. In addition to these languages, we add French (Fr), Japanese (Ja), and Chinese (Zh). For each language, we show the average number of multiple subword sequences generated by LCP-dropout. For almost all datasets, LCP-dropout outperforms the baseline algorithms. For the hyperparameters of BPE-dropout and SentencePiece, we use the best values reported in the previous studies. Table 5 also shows the effect of alphabet size on subword segmentation. In general, the Japanese (Ja) and Chinese (Zh) alphabets are very large, containing at least 2k symbols even if we restrict them to those in common use. Therefore, the average word length is short, and capturing subword semantics is difficult. For these cases, we confirmed that LCP-dropout obtains higher BLEU scores than the other methods. Table 5 also presents the BLEU scores for a large corpus (WMT14) for the translation De → En. This experiment shows that LCP-dropout cannot outperform the baselines with the hyperparameters we set. This is because the ratio of the vocabulary sizes (v, ℓ) to the dropout rate k is not appropriate. As data supporting this conjecture, it can be confirmed that the multiplicity on the large dataset is much smaller than that on the small corpora (Table 5). This is caused by the reduced number of repetitions of label assignments, as shown in Table 6 compared to Table 4. The results show that the depth of the inner loop is significantly reduced, which is why enough subword sequences cannot be generated. Table 7 presents several translation results. The 'Reference' row represents the correct translation for each case, and the BLEU score is obtained from the pair of the reference and each translation result. We also show the average length of each reference sentence, indicated by 'ave./word'. These results show the characteristics of successful and unsuccessful translations by the two algorithms in relation to the length of words.
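As a reference point for the evaluation metric used above, the following is a simplified corpus-level BLEU sketch (modified n-gram precisions up to N = 4 combined geometrically, with a brevity penalty when the translations are shorter than the references); it is an illustrative approximation with crude smoothing, not the SacreBLEU implementation actually used in the paper.

import math
from collections import Counter

def bleu(translations, references, N=4):
    # translations, references: lists of token lists, paired by index.
    log_prec_sum = 0.0
    for n in range(1, N + 1):
        match, total = 0, 0
        for t, r in zip(translations, references):
            t_ngrams = Counter(tuple(t[i:i + n]) for i in range(len(t) - n + 1))
            r_ngrams = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
            match += sum(min(c, r_ngrams[g]) for g, c in t_ngrams.items())
            total += max(sum(t_ngrams.values()), 1)
        # Geometric mean of the modified n-gram precisions (with a floor to avoid log 0).
        log_prec_sum += math.log(max(match, 1e-9) / total) / N
    trans_len = sum(len(t) for t in translations)
    ref_len = sum(len(r) for r in references)
    # Brevity penalty: applied only when the translation is shorter than the reference.
    bp = 1.0 if trans_len >= ref_len else math.exp(1 - ref_len / max(trans_len, 1))
    return bp * math.exp(log_prec_sum) * 100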
Considering subword segmentation as a parsing tree, LCP produces a balanced parsing tree, whereas the tree produced by BPE tends to be deeper along a certain path. For example, for a substring abcd, LCP tends to generate subwords like ((ab)(cd)), while BPE generates them like (((ab)c)d). In this example, the average subword length of the former is shorter than that of the latter. This trend is supported by the experimental results in Table 7, which show the average length of all subwords generated by LCP/BPE-dropout for real datasets. Due to this property, when the vocabulary size is fixed, LCP tends not to generate subwords of comparable length, because it decomposes a longer word into excessively short subwords. Figure 2 shows the distributions of sentence length for English, where the sentence length denotes the number of tokens in a sentence. BPE-dropout is a well-known fine-grained segmentation approach; the figure shows that LCP-dropout produces even more fine-grained segmentations than the other three approaches. Therefore, LCP-dropout is considered to be superior in subword segmentation for languages consisting of short words. Table 5, which includes the translation results for Japanese and Chinese, also supports this characteristic. 6 Conclusions, Limitations, and Future Research Conclusions and limitations In this study, we proposed LCP-dropout as an extension of BPE-dropout [4] for multiple subword segmentation by applying a near-optimum compression algorithm. The proposed LCP-dropout can properly decompose strings without background knowledge of the source/target language by randomly assigning binary labels to vocabularies. This mechanism allows generating consistent multiple segmentations for the same string. As shown in the experimental results, LCP-dropout enables data augmentation for small datasets, where sufficient training data are unavailable, such as for minor languages or limited domains. Multiple segmentation can also be achieved by likelihood-based methods. After SentencePiece [13], various extensions have been proposed [28,29]. In contrast to these studies, our approach relies on a simple linear-time compression algorithm. Our algorithm does not require any background knowledge of the language, unlike word-replacement-based data augmentation [30,31], where some words in the source/target sentence are swapped with other words while preserving grammatical/semantic correctness. Future research The effectiveness of LCP-dropout was confirmed mainly for small corpora. Unfortunately, the optimal hyperparameters obtained in this study did not work well for a large corpus. Moreover, the learning accuracy was found to be affected by the alphabet size of the language. Table 7: Examples of sentences translated by LCP-dropout (k = 0.01) and BPE-dropout (p = 0.1), with the reference translation, for News Commentary v16. We show the average word length (ave./word) for each reference sentence as well as the average subword length (ave./subword) generated by the respective algorithms for the entire corpus. We also show the BLEU scores between the references and translated sentences as well as their standard deviations (SD). Reference (ave./word = 5.00): 'Even if his victory remains unlikely, Bayrou must now be taken seriously.' LCP-dropout (BLEU 84.5): 'While his victory remains unlikely, Bayrou must now be taken seriously.' BPE-dropout (BLEU 30.8): 'Although his victory remains unlikely, he needs to take Bayrou seriously now.'
Reference (ave./word = 5.38): 'In addition, companies will be forced to restructure in order to cut costs and increase competitiveness.' LCP-dropout (BLEU 12.4): 'In addition, restructuring will force firms to save costs and boost competitiveness.' BPE-dropout (BLEU 66.8): 'In addition, businesses will be forced to restructure in order to save costs and increase competitiveness.' Future research directions include an adaptive mechanism for determining the hyperparameters depending on the training data and the alphabet size. In the experiments in this paper, we considered word-by-word subword decomposition. On the other hand, multi-words are known to violate the compositionality of language. Therefore, by treating multi-words as longer words and performing subword decomposition on them, LCP-dropout can be applied to language processing tasks related to multi-words. In this study, subword segmentation was applied to machine translation. To improve the BLEU score, there are other approaches, such as data augmentation [32]; incorporating LCP-dropout with them is one interesting direction. In this paper, we handled several benchmark datasets with major languages. Recently, machine translation of low-resource languages has become an important task [33]; applying LCP-dropout to this task is also important future work. Although the proposed LCP-dropout is currently applied only to machine translation, we plan to apply our method to other linguistic tasks, including sentiment analysis, parsing, and question answering, in future studies. Subroutine of LCP-dropout: 1: assign L : V → {0, 1} randomly; 2: FREQ(k) ← the set of top-k frequent bigrams in Y of the form α − β with L(αβ) = '10'; 3: merge all occurrences of α − β in Y for each α − β ∈ FREQ(k); 4: add all the vocabularies αβ to V. Table 1 (continued): V = {a, b, c}; for each α ∈ V, a label L(α) ∈ {0, 1} is randomly assigned (depth 0). Next, all occurrences of '10' in L are found, and the corresponding bigrams are merged depending on their frequencies. Here, L(ab) = L(ac) = 10, but only a − b is a top-k bigram assigned '10', so a − b is merged into ab. The resulting string is shown at depth 1 over the new vocabulary V_1 = {a, b, c, ab}. This process is repeated while |V_m| < ℓ for the next m. The condition |V_2| = 5 terminates the inner loop of LCP-dropout, and the subword segmentation Y_1 = ab−abc−a−a−c−abc−b is generated. Since |V(Y_1)| < v, the algorithm generates the next subword segmentation Y_2 for the same input. Finally, we obtain the multiple subword segmentations Y_1 = ab−abc−a−a−c−abc−b and Y_2 = ab−ab−ca−a−ca−b−c−b for the same input string. Figure 1: Framework of our neural machine translation model with LCP-dropout. Figure 2: Distribution of sentence length. The number of tokens per sentence produced by LCP-dropout tends to be larger than for the others: BPE, BPE-dropout, and SentencePiece. Table 2: Overview of the datasets and hyperparameters. The hyperparameter v (vocabulary size) is common to all algorithms (baselines and ours); the others (ℓ and k) are specific to LCP-dropout only. Table 3: Experimental results of LCP-dropout for News Commentary v16 (Table 2) w.r.t. the specified hyperparameters, where the translation task is De→En. Bold indicates the best score. Table 4: Depth of label assignment in LCP-dropout.
For each threshold k ∈ {0.01, 0.05, 0.1}, the En and De columns indicate the number of multiple subword sequences generated for the corresponding language. The last two values are the BLEU scores for De → En with ℓ = v and ℓ = v/2 for v = 16k, respectively. Table 5: Experimental results of LCP-dropout (denoted by LCP), BPE-dropout (denoted by BPE), and SentencePiece (denoted by SP) on the various languages in Table 2 (small corpora: News Commentary v16 and KFTT; large corpus: WMT14), where 'multiplicity' denotes the average number of sequences generated per input string. Bold indicates the best score. Table 6: Depth of label assignment for the large corpus.
6,808.2
2022-02-28T00:00:00.000
[ "Computer Science" ]
Continuum extrapolation of quarkonium correlators at non-zero temperature In the investigation of in-medium modifications of quarkonia and for determining heavy quark diffusion coefficients, correlation functions play a crucial role. For the first time we perform a continuum extrapolation of charmonium and bottomonium correlators in the vector channel based on non-perturbatively clover-improved Wilson fermions in quenched lattice QCD. Calculations were done at 4 different lattice spacings, spatial extents between 96 and 192, aspect ratio from 1/6 to 1/2, for 5 temperatures between $T/T_c = 0.75$ and $T/T_c = 2.2$. We interpolate between different quark masses to match to the same vector meson mass over different lattice setups. Afterwards we extrapolate the renormalized correlators to the continuum. While we find a strong temperature dependence for charmonium, bottomonium states are only slightly affected at higher temperatures. Introduction In the investigation of properties of the quark gluon plasma (QGP), which is created at heavy ion collisions, quarkonium bound states are of great interest. Due to the color Debye screening, these states are expected to melt at a certain dissolution temperature and, therefore, suppression of J/ψ yields at heavy ion collisions compared to p-p-collisions can be a signal for the existence of a quark gluon plasma [1]. Indeed, J/ψ-suppression has been found at all three large colliders SPS, RHIC and LHC and also Υ suppression has been reported at the LHC [2][3][4][5][6][7]. Nevertheless, for a better theoretical interpretation of the experiments, one needs more information about the in-medium modification of quarkonium bound states. In addition, measurements at RHIC and LHC revealed a non-zero elliptic flow for heavy quarks [8][9][10]. This indicates a relaxation towards thermal equilibrium and gives evidence to a collective motion of heavy quarks. For a better understanding of these hydrodynamic effects, knowledge about the transport properties of heavy quarks within the medium is needed. Especially, the heavy quark diffusion coefficient D is of interest, as it is related to the energy loss of a heavy quark in the medium. So far leading order perturbative calculations are not in agreement with the experimental data [11] and next-to-leading order calculations show that one can expect large corrections by higher orders [12]. Therefore, first principle studies, such as lattice QCD, are necessary to understand the nonperturbative effects in these quantities. Both, in medium modification as well as transport properties are encoded in the spectral function of quarkonia. Unfortunately, the spectral function and the corresponding Euclidean correlator are related by an integral and a lot of details of the spectral function are averaged away. Moreover, due to the limitation of the τ-direction, only few lattice points are available. Therefore, very fine lattices as well as good statistics are needed to extract structures of the spectral function. So far, there were several attempts using very fine quenched lattices, though no continuum extrapolation has been realized [13][14][15]. In [16] a continuum extrapolation for the heavy quark momentum diffusion coefficient based on a color electric correlator has been carried out. However, this approach is limited to the heavy quark mass limit and as this analysis is not based on meson correlators, in-medium properties of bound states cannot be extracted by this approach. 
For the first time, we have carried out a continuum extrapolation for charmonium and bottomonium correlation functions in quenched lattice QCD. In this paper we show the details and steps that have been used to perform the continuum extrapolation. A similar analysis for light quarks can be found in [17,18]. Meson correlation functions and the spectral function The main object of interest in this study is the Euclidean mesonic correlation function, defined as the expectation value of the product of a mesonic current with its conjugate, where the mesonic current is J_H(τ, x) = ψ̄(τ, x) Γ_H ψ(τ, x). Here, ψ are the fermion fields and Γ_H = γ_5, γ_µ, 1, γ_5γ_µ corresponds to the meson channel we are looking at (pseudo-scalar (P), vector (V), scalar (S), and axial-vector (AV), respectively). In this study, we perform the continuum extrapolation for the example of the vector channel; except for the renormalization, the procedure works analogously for the other channels. For results in the pseudoscalar channel, see [19]. The above correlator is projected to finite momentum using the lattice Fourier transformation, with n = τ, x = (x, y, z) and p = (p_x, p_y, p_z) for temporal correlators, or n = z, x = (x, y, τ) and p = (p_x, p_y, p_τ) for spatial correlators, respectively. In the following we restrict ourselves to p = 0 and, except for the mass determination, to temporal correlators. The temporal correlator is connected to the spectral function via an integral with the standard thermal kernel cosh(ω(τ − 1/(2T)))/sinh(ω/(2T)). The spectral function contains all information on the medium properties of mesons. Whereas much information, like dissociation temperatures, can be extracted from the deformation of the spectral functions at higher temperatures, the diffusion constant D can be extracted directly from the vector spectral function. Thereby, the index of the spectral function stems from the µ-index of Γ_V = γ_µ, and χ_q is the quark number susceptibility, obtained from the temporal component G_V^00 of the vector correlator. While this component is in principle constant in τ due to current conservation, lattice cut-off effects are noticeable at short τ-ranges. Therefore we extract χ_q from G_V^00(τT = 0.5). Since the quark number susceptibility is affected by the same renormalization as G_V, it is used to circumvent the uncertainties of the renormalization constants: renormalization-independent ratios are formed by dividing all correlators by the quark number susceptibility at T = 2.25 T_c. In order to remove the exponential drop of the correlator, we normalize all correlators by the free (non-interacting) correlator evaluated at a quark mass m_q. The choice of m_q is arbitrary in this case, as this normalization is only temporary and will be removed after the final continuum extrapolation. In the following we use m_q = 1.5 GeV for charmonium and m_q = 5 GeV for bottomonium. Lattice setup The correlators were measured on quenched configurations generated with the standard Wilson gauge action on large isotropic lattices. All configurations are separated by 500 sweeps; the setup follows [20], with updated coefficients to include a wider β-range [19]. The correlators have been calculated using non-perturbatively clover-improved Wilson fermions [21]. We computed the correlators for six different κ-values for all lattices except the N_σ = 120 lattice, where we used four different κ-values. The κ-values have been roughly distributed between bottomonium and charmonium, using the vector screening masses from the spatial correlator in the deeply confined phase at 0.75 T_c for the tuning. The screening masses have been determined by averaging over the masses from correlated two-state fits for different fit intervals.
As a fit Ansatz we use G(n_τ) = A_1 cosh(m_1 (n_τ − N_τ/2)) + A_2 cosh(m_2 (n_τ − N_τ/2)). For the fit intervals, we fixed the upper limit to N_σ/2 − N_σ/24 and gradually increased the lower limit. Afterwards we averaged over all fit results that lay within a plateau of the screening mass and of the corresponding χ²/d.o.f. of the fit (see Figure 1). Due to the high cost of calculating the correlators, the quark mass tuning is not optimal. This will be corrected by a quark mass interpolation later on. For the finest lattice, we used four additional quark sources for the two κ-values closest to bottomonium and charmonium to increase the statistics at these important data points. The left plot shows the correlators for the κ-values closest to charmonium. We find a very strong mass dependence: even for small differences in the corresponding screening mass, we get a rather large deviation of the correlators. In the right plot we show the correlators whose measured screening mass lies closest to the J/ψ mass. As can be seen, the ordering of the correlators does not match the ordering of the lattice sizes. This makes it impossible to perform a continuum extrapolation based directly on the lattice correlators. As this effect stems from the slight uncertainties of the quark mass tuning, we have to make sure that all correlators are matched to exactly the same quark mass. This is realized by interpolating between the correlators corresponding to different screening masses. To do so, we plot the correlators at each τT separately against the measured screening mass from the T = 0.75 T_c lattice (see right plot of Figure 3). Note that this means that also for higher temperatures we use the screening mass from the lowest-temperature lattice. Interpolating to the same vector meson mass The actual interpolation is done by fitting the data with an exponential of a quadratic function of the screening mass, G(τT; m/T) = exp(a (m/T)² + b (m/T) + c), with the parameters a, b, c determined separately at each τT. Then, the new interpolated correlators are computed by inserting the same reference mass m_ref/T for each τT and each lattice size into the above equation (vertical line in the right plot of Figure 3). As the temperature also differs slightly between the lattices, we choose m/T from the finest lattice as the reference mass for bottomonium and charmonium. Because of the large differences between the correlators of different masses, we did not use all six available masses. Instead, we used only the closest four correlators for charmonium and the closest three for bottomonium. In principle this method can be performed for each configuration individually. However, due to fluctuations between the individual measurements, this gives larger errors. Instead, we used bootstrap samples of size 1000 for the mass interpolation. When the choice of the samples is the same for each τT, the bootstrap method preserves the correlations within the correlator. For the three coarsest lattices, the data points arise from B-spline interpolations to match the data points of the finest lattice at each τT. Right: the mass-interpolated lattice correlators and the corresponding continuum extrapolation. We also show the data point for T²/χ_q,cont. at zero distance. In both plots, the correlators have been normalized with the free correlator defined in equation 7, with a quark mass m_q = 1.5 GeV, and with the quark number susceptibility χ_q at T = 2.25 T_c.
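A minimal sketch of this mass interpolation step, assuming the correlator data are stored as an array over (mass, τT), fitting an exponential of a quadratic at each τT and evaluating the fit at the reference mass; array shapes, names, and the starting values are assumptions for illustration, and the repetition over the 1000 bootstrap samples is only indicated in the comments.

import numpy as np
from scipy.optimize import curve_fit

def exp_quadratic(m, a, b, c):
    # G(tauT; m) modeled as exp(a*m^2 + b*m + c) at fixed tauT.
    return np.exp(a * m**2 + b * m + c)

def interpolate_to_reference_mass(masses, correlators, m_ref):
    """masses: screening masses m/T for the measured kappa values (1D array).
    correlators: array of shape (n_masses, n_tau) with G(tauT) for each mass.
    Returns the correlator interpolated to the common reference mass m_ref.
    In the full analysis this function would be applied to every bootstrap sample."""
    n_tau = correlators.shape[1]
    interpolated = np.empty(n_tau)
    for it in range(n_tau):
        y = correlators[:, it]
        p0 = (0.0, -1.0, np.log(max(y.mean(), 1e-30)))  # crude starting values
        popt, _ = curve_fit(exp_quadratic, masses, y, p0=p0)
        interpolated[it] = exp_quadratic(m_ref, *popt)
    return interpolated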
With this method we are now able to calculate the correlator for any meson mass between charmonium and bottomonium and can retune the correlators for all lattice sizes after the actual computation; therefore, we can compensate for uncertainties that stem from slight errors in the tuning of the quark masses. Extrapolation to the continuum Having performed the mass interpolation on the three coarsest lattices (since we use m_ref/T from the finest lattice, we do not need to interpolate the finest lattice), we end up with well-ordered correlators (right plot in Figure 4) and are ready to perform the continuum extrapolation. This is realized using a linear extrapolation in 1/N_τ² on each bootstrap sample (see left plot of Figure 4). The three coarsest lattices have been B-spline interpolated in order to estimate all correlators at the distances available on the finest lattice. Due to the cut-off effects at short distances, we restricted the extrapolation to values larger than τT = 0.08. The final results for all temperatures can be found in Figure 5. We find that the correlators agree at short distances τT for almost all temperatures; only for charmonium at T = 2.25 T_c do we find deviations. Nevertheless, for larger distances we see clear temperature effects for charmonium. On the contrary, for bottomonium we find temperature effects only at T = 2.25 T_c, while for lower temperatures the bound state seems to be independent of the temperature. Conclusion For the first time, we were able to perform a continuum extrapolation of temporal Euclidean vector meson correlation functions for charmonium and bottomonium in quenched lattice QCD. To do so, we interpolated the correlators to exactly the same vector meson mass and then performed a continuum extrapolation linear in 1/N_τ². Within the temperature range T = 0.75 T_c − 2.25 T_c, we observe strong temperature effects for charmonium, but only slight effects for bottomonium at T = 2.25 T_c. The continuum-extrapolated correlators are now ready for further investigations: using qualitatively motivated fit Ansätze, one can determine which parts of the spectral function contribute to the temperature dependence and which do not. Furthermore, statistical Bayesian methods, such as Stochastic Analytical Inference (SAI) [22,23], the Stochastic Optimization Method (SOM) [24], or the Maximum Entropy Method (MEM) [25], can be used to extract details of the spectral function. These methods have already been tested on the lattice data of the finest lattice [26] and will be applied to the actual continuum correlators.
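As an illustration of the extrapolation step described in this section, the following sketch performs, at each τT, a linear fit in 1/N_τ² and reads off the intercept as the continuum value; it assumes the correlators have already been mass-interpolated and evaluated at common τT values, and in the actual analysis it would be repeated on every bootstrap sample. Names and array shapes are assumptions for illustration.

import numpy as np

def continuum_extrapolate(n_tau_values, correlators):
    """n_tau_values: temporal extents N_tau of the lattices (e.g. [24, 32, 48, 64]).
    correlators: array of shape (n_lattices, n_points), mass-interpolated and
    B-spline matched to common tauT values. Returns the 1/N_tau^2 -> 0 limit."""
    x = 1.0 / np.asarray(n_tau_values, dtype=float)**2
    n_points = correlators.shape[1]
    continuum = np.empty(n_points)
    for ip in range(n_points):
        slope, intercept = np.polyfit(x, correlators[:, ip], 1)  # linear in 1/N_tau^2
        continuum[ip] = intercept  # value at 1/N_tau^2 = 0, i.e. the continuum limit
    return continuum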
2,951.2
2017-10-24T00:00:00.000
[ "Physics" ]
The Coin Passcode : A Shoulder-Surfing Proof Graphical Password Authentication Model for Mobile Devices The Next Generation Swift and Secured Mobile Passcode Authenticator Swiftness, simplicity, and security is crucial for mobile device authentication. Currently, most mobile devices are protected by a six pin numerical passcode authentication layer which is extremely vulnerable to Shoulder-Surfing attacks and Spyware attacks. This paper proposes a multi-elemental graphical password authentication model for mobile devices that are resistant to shoulder surfing attacks and spyware attacks. The proposed Coin Passcode model simplifies the complex user interface issues that previous graphical password models have, which work as a swift passcode security mechanism for mobile devices. The Coin Passcode model also has a high memorability rate compared to the existing numerical and alphanumerical passwords, as psychology studies suggest that human are better at remembering graphics than words. The results shows that the Coin Passcode is able to overcome the current shoulder-surfing and spyware attack vulnerability that existing mobile application numerical passcode authentication layers suffer from. Keywords—Mobile graphical password; multi-elemental passcode; shoulder-surfing proof passcode; mobile authentication model I. INTRODUCTION Authentication technology is crucial to the integrity and confidentiality of smart mobile device users, especially when many important features such as banking and finance are easily accessible through mobile applications.Current mobile security mechanisms use the four or six pin numerical passcodes which are easily remembered, while providing a swift security authentication for the users. However, this security mechanism has its flaws when it faces modern attackers who can easily guess or shoulder surf for the password combinations.There are several other user authentication mechanisms such as the alpha-numerical passwords and the pattern drawing lock, which are also prone to shoulder surfing attacks.The two-factor authentication can be easily compromised when the first level protection of the mobile devices is vulnerable. The Coin Passcode Graphical Password Authentication mechanism is a concept of the Cognitive biometric authentication which uses the hybrid scheme graphical password authentication mechanism.This paper is structured in six sections, including the Introduction Section, Related Works, The Coin Passcode Mobile Graphic Authentication Model, Security Analysis and Usability Metrics, Discussion and Conclusion. II. RELATED WORKS The related works of other existing and researched graphical password authentication model are discussed under this chapter. A. Cognitive Biometrics There are several different biometric authentication types including physiological biometrics and cognitive biometrics authentication.Under the Cognitive biometric authentication methods [1], user behaviors are identified using mobile phone sensors, through activities such as gestures and walking patterns.The input patterns are used as a means of behavior authentication.Several researchers have [2] analyzed the key input mechanics and patterns used by the users when they press the on-screen buttons to type on the phone. 
Another research conducted by Giuffrida [3], allows Cognitive Authentication when the user types the password, where the movement and touch of the screen are analyzed and authenticated.According to Stanciu [4], this method is effective enough to protect the system from statistic attack.An improved version [5] of this method of authentication uses the combination of four aspects which are time, pressure, size and acceleration obtained from the sensor of the device when a user types in his password. Besides password input, Cognitive authentication also includes drawing of patterns [6].Recognition of shape drawing patterns [7] to authenticate a user is a strong and easy way to protect the user against password peeking attacks.Another kind of Cognitive authentication method [8] is through the user"s pattern of walking.In this method, [9] users wear a movement tracker attached to their waist to track the user"s continuous movement data.Besides that, a type of Cognitive biometrics authentication measures the usage pattern and geographical location of regular usage of a user with his smartphone.This method [10] measures the userphone interaction activity such as the application usage, www.ijacsa.thesai.orglocation, communication and motion to detect anomaly intruder usage scenarios. It is suitable for mobile devices to implement a Cognitive biometric authentication such as the graphical password authentication as there are external costs involved in purchasing devices with sensors.Graphical Password Authentication is an authentication method in the Cognitive Biometric Authentication category which is suitable for implementation in mobile devices compared to existing numerical passcodes. Passwords in the form of graphics [11] are secure alternatives to numerical and text-based passwords, where the users are required to select pictures for authentication instead of keying in texts.Passwords in graphical format are much easier to remember compared to text-based passwords.Some studies of psychology [12] have identified that the human brain is way better in memorizing and recognizing visualized information such as pictures, compared to information in the form of text or speech.Pictures are increasingly used for the purpose of security compared to mere texts, as the range of texts and numbers is limited in comparison to pictures which are infinite. B. Recognition-Based Techniques One of the graphical password authentication technique is called the Recognition-Based Technique.For this technique, symbols, icons or images are selected by the users in a series as a password during registration, [13] where the users have to identify the same pictures they have selected during the authentication period.Based on Figure 1, Dhamija and Perrig [14], introduced a method of authentication using predefined images.Through this method, users are required to select their pre-selected pictures which they have defined during their registration from a set of random images to get authenticated by the system.However, this method is vulnerable to shoulder surfing attack. 
Another example of recognition-based graphical authentication is called Passface™ as shown in Figure 2.This technique [15] will display nine faces on the screen and require the user to choose their pre-selected faces in four rounds, choosing one pre-selected face per round.In addressing the shoulder surfing issue with a graphical recognition authentication method, Haichang Gao, Xiyang Liu, and Ruyi Dai [16] introduced a shoulder surfing prevention method using invisible pattern drawing by swiping gestures to select a sequence of predefined images instead of tapping.An image chain in a story is used to remember the picture sequences to provide the user with the authentication.This method is less likely to be considered as it is vulnerable to shoulder surfing attacks as it is considerably easier to be identified compared to numbers. C. Recall-Based Techniques Another technique for Graphical Password Authentication is the Recall-Based Technique, which is based on pure recall and requires the users to recreate the graphics without any given tips or assisting reminders.However, users may find it hard to recall their password with this technique even though it is more secure than recognition-based technique.A technique called Syukri, by Ali Mohamed Eilejtlawi [17] requires the user to make a signature with a drawing with a stylus pen or mouse during registration, and authentication will be based on the same signature drawing. A similar technique based on recall is enhanced with cues, where users are required to recreate a graphical password with the assistance of tips to enhance the accuracy of the password, where images will be provided to the users in which they must select specific points in the pictures in the right sequence.An example of this technique is a method introduced by S. Chiasson, P.C. van Oorschot, and R. Biddle [18], where the next picture on sequence is shown depending on the point of the previous click by the user.Every picture shown next to the previous picture based on a coordinate function of the point of click by the user of the current picture.A wrong selection of a point will cause the next picture to be shown wrongly, which prevents the attacker from guessing the password without knowing the right point for clicking.An example is shown in Figure 3. D. Hybrid Scheme A combination of multiple graphical password authentication method forms a hybrid scheme.The hybrid scheme is proposed by researchers in addressing the issues with limitation in every graphical authentication technique like shoulder-surfing attacks, hotspot issues, and much more.H. Zhao and X. Li [19] introduced an example of a hybrid scheme, which is a text-in-graphic password authentication scheme -S3PAS in short, to counter shoulder-surfing attacks. A combination of texts and graphics can resist shoulder surfing attacks, hidden cameras and spyware.The method of registration requires the user to choose "k", an original string text password.In the login authentication, the user has to look for the pre-defined password in the image, which will form an invisible triangle named the "passtriangle", and the user must then click in the region inside the invisible triangle to gain access as shown in Figure 4. ChoCD is a hybrid graphical authentication system called ChoCD proposed by Radhi, R. A., Mohd, Z. J. 
[20].ChoCD is a system which allows the user to sign in with a User ID and a graphical password as shown in Figure 5.The system is implementable in both desktop and smart phones.The system authenticates a user in three ways, from the first step based on choice selection to the second step based on clicks and thirdly based on drawings.The scheme only allows the authenticated user to be able to recognize the passwords through graphic, clicking positions and drawing patterns.Users should be able to remember the pattern when the images are shown. III. THE COIN PASSCODE MOBILE GRAPHICAL PASSWORD AUTHENTICATION MODEL The Coin Passcode Graphical Password Authentication Model is a hybrid graphical password mobile authentication scheme.It is relatively swift to be inputted as a mobile device authentication mechanism compared to a four or six pins numerical passcode.The key feature of this model is its unique multi-elemental buttons which are resistance against shoulder surfing attacks and brute force attacks for mobile devices. The identity verification of a smart device user will be done through the validation of a set of Coin Passcode Graphical Passwords Keypair Authentication process, where the user will initially register a set of Coin Passcode graphical password patterns to be remembered, and by inputting the correct sequence of coin passcode, patterned graphical passwords would authenticate the identity of the user based on their cognitive knowledge.This can prevent an unauthorized user from getting access to the mobile device from just spying. A. The Coin Passcode Structure The Coin Model Graphical Password Authentication uses the concept of multi-elements found in the structure of any currency coin.In coins from different countries, there is always a combination of different symbols, numerical values, and some wordings.As with the concept of coins, the Coin Model Graphical Password Authentication uses the element of colors, numerical values, and icons to form unique coin passcodes as shown in Figure 6.The colour codes are added in the Coin to assist color-blind users. There are a total of 10 icons, ten numbers and ten colours used as the elements of the Coin Passcode.The list of the element items is illustrated in Figure 7.The icons are obtained from Google Material Icons for Android Development. C. The Coin Passcode Registration To strengthen the complexity of the Coin Password, a minimum password standard is set.Each of the three elements must be present in the Coin Password Combination at least twice, resulting in a combination of six coins in a passcode sequence with all three different hidden elements. The Coin Passcode limits the user to place precisely six Coin Passcodes elements in the sequence during registration.The registering of the Coin Passcodes requires one secret hidden element item from each sequence to be initialized by the user during registration.For example, if the hidden element of the first Coin Password chosen is the color element "Yellow", then the color "Yellow" would be the key to the first Coin Password, ignoring the other two elements in the first Coin Password, which are the randomized numbers and icons as shown in Figure 10. The registration algorithm limits the user to select all three different element items in the first three coin-passcodes element selection by removing an element to be selected after each selection.The algorithm then loops again for the last three coin-passcodes selections with the same limitation. D. 
The Coin Passcode Authentication Algorithm

The Coin Graphical Password Authentication is designed so that only the authorized user knows which of the three elements in each coin is the hidden element he or she registered, whether it is the color, the number, or the icon. An example of a registered Coin Passcode sequence is shown in Figure 11: the first coin has the secret numerical element "Three", the second coin the secret color element "Yellow", and the third coin the secret icon element "Car"; the remaining three secret elements are the number "Five", the color "Blue", and finally the icon "Flower", respectively. The authentication system then matches the Coin Passcode inputs against the registered Coin Passcode elements and sequence while ignoring the remaining public elements in each of the six Coin Passcode inputs. Any attempt that selects coins without the registered elements in the right sequence results in a failed identity authentication.

An object array is used to store the user's coin passcode login input, which is matched against the registered coin passcode object array to check whether the login input contains the registered credentials in the right sequence for authentication.

A. Usability Metrics and Security Analysis

Several experiments were conducted with a group of 50 students to carry out the security analysis and usability metrics for the Coin Passcode against other similar mobile authentication models, including the Numerical Passcode, Alphanumerical Password, and Passfaces™. The experiments cover the usability metrics of login time and password memorability, and the security analysis of shoulder surfing attack, password guessing attack, and brute-force attack for each of the mentioned authentication models. Table 1 summarizes the security of the different password schemes against several password attack methods: "Y" (Yes) means the scheme is vulnerable to that form of attack, while "N" (No) means the scheme is secure against it.

B. Password Complexity Comparison

The Coin Passcode Authentication Model, which consists of three elements in each coin, creates a cognitive authentication link between the user and the authentication system, where only the right user knows the secret element and sequence he or she set, leaving everyone else confused about the password.

C. Shoulder Surfing Attack and Spyware Attack

A shoulder surfing attack uses direct observation, or recording with video cameras such as high-resolution surveillance equipment or hidden cameras, to obtain a user's credentials. A spyware attack occurs when malware is installed on a user's device to record the user's credential input, with the information sent back to the attacker for exploitation. Both of these attacks can easily obtain and exploit a user's numerical or alphanumerical password, or Passfaces™ credentials, by directly observing the password input pressed by the user. However, the Coin Passcode is resistant to this type of attack.
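The matching step described above lends itself to a compact illustration. The following Python sketch is a hypothetical rendering of that logic, not the authors' implementation: the Coin record, the (element, value) representation of the registered secret, and the function name authenticate are assumptions introduced for the example. Only the rule itself, comparing the registered secret element at each position while ignoring the two public elements, comes from the text.

```python
from dataclasses import dataclass

# A displayed coin always shows one value of each element type; the registered
# secret picks out exactly one of the three. (Representation is illustrative.)
@dataclass(frozen=True)
class Coin:
    icon: str
    number: int
    color: str

# Registered passcode: for each of the six positions, the secret element type
# and its value (the example sequence described in the text above).
registered = [
    ("number", 3), ("color", "yellow"), ("icon", "car"),
    ("number", 5), ("color", "blue"), ("icon", "flower"),
]

def authenticate(selected_coins, registered):
    """True if, at every position, the selected coin carries the registered
    secret value for the registered element type; the other two (public)
    elements on each coin are ignored."""
    if len(selected_coins) != len(registered):
        return False
    return all(getattr(coin, element) == value
               for coin, (element, value) in zip(selected_coins, registered))

# Example login attempt: the shuffled keypad means the public icon/number/colour
# shown on each selected coin differ between sessions, but the secret matches.
attempt = [
    Coin(icon="home", number=3, color="red"),
    Coin(icon="star", number=8, color="yellow"),
    Coin(icon="car", number=1, color="green"),
    Coin(icon="bell", number=5, color="black"),
    Coin(icon="key", number=7, color="blue"),
    Coin(icon="flower", number=2, color="white"),
]
print(authenticate(attempt, registered))  # True
```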
An experiment is conducted with a group of 50 students, where a set of passwords for different authentication models, each with an equal password length of six items is pressed in front of the students through a big screen, with each button pressed at five-second intervals.The students are then asked to retype or reselect the shoulder surfed passwords.The result of the shoulder surfing attack experiment is shown in Figure 13.The numerical passcode and alphanumerical password are seen to have a high rate of shoulder surfing success due to their vulnerability to this attack method.The Passfaces™, however has a lower success rate as it requires a certain recognition and memorability of the level of the faces used to reselect the right one.The Coin Passcode can be observed to have zero success rate of shoulder surfing attack.This is because having multiple graphical elements in each input of the Coin Passcode would make shoulder surfing attack and spyware attack meaningless, as students do not get any direct password information from just observing the Coin Passcode combination inserted by the user.It is designed so that it is impossible for a shoulder surfer to know which secret element out of the three was the one being chosen by the user in a single input button. Memorability is the measurement of the extent to which the users can remember the password after a period.A password memorability experiment is conducted for each of the four password authentication models.The test is conducted with a group of 50 students, where each student is given a similar set of passwords of the same password length of six for each password model.The students were given five minutes to memorize each password and were then shown a 3 minutes video to simulate an extended period of idle time.After the video ended, the students were asked to produce the same password in one-minute.The result of the experiment is shown in Figure 14. The experiment result shows that the Coin Passcode has the highest memorability success rate followed by the numerical passcode, the alphanumerical password and lastly, the Passfaces™. Based on the experiment, it was much easier to remember the Coin Passcode because the secret elements used are straightforward elements like colours, numbers, and icons, which can form a story-like chain of keywords such as "3 blue cars, 5 red bikes", as compared to remembering numbers, words or faces which have no direct meaning or connection to the tester.The experiment found that unfamiliar faces are hard to remember after a period of idle time, even though it is also a form of graphical password. D. 
Login Time Login Time refers to the time taken for users to log into the authentication system using their credentials.An experiment is conducted to analyze the login time for the four authentication models.The test is conducted with a group of 50 students, where each student is given a set of similar passwords of the same length of six for each password model.The students are then asked to reproduce the same passwords five times, and each login attempt time was recorded.The result of the experiment is shown in Figure 15.The Coin Passcode has slightly longer login time compared to Numerical Passcode and Alphanumerical Password of the same length even after five attempts.This is because the positioning of the numerical passcode and alphanumerical password are fixed, which the test user can simply memorize and get used to from each increased attempt.However, the positioning of the Coin Passcode elements is randomized and shuffled in each attempt, which is designed to confuse the shoulder surfing attacker, causing a longer login time for the test users.The Passfaces™ takes a much longer login time compared to the other password models due to the low memorability of the unfamiliar faces which requires the test user to take time to confirm the faces. E. Password Guessing and Dictionary Attack Password Guessing is a kind of brute force attack which uses knowledges or hints gained from the password owner.Each of the password models mentioned in the analysis are vulnerable to this attack when the user leaves certain hints or information about their password exposed to the attacker.This attack cannot be avoided and can only be prevented through security awareness and training. A dictionary attack is conducted using a list of frequently used words or number patterns to crack the password efficiently.However, this only applies to the existing Numerical Passcodes and Alphanumerical Passwords due to the reason that these passwords often contain phrases that are predictable and highly used statistically.The Coin Passcode Authentication Model and the Passfaces™ is not applicable for dictionary attack, because these two authentication models does not contains text or words that can be prepared in dictionary attack. V. DISCUSSIONS Most graphical passwords currently available are mostly proven to be more secure and resistant to several cybersecurity attacks compared to existing numerical and alphanumerical passwords.However, these graphical passwords are mostly available only in the field of research, education and theoretical discussion, and are rarely implemented practically.It may be due to several poor usability factors such as low memorability, high login time, and non-user friendly or nonmobile friendly interfaces, compared to the existing numerical and alphabetical password authentication methods. 
The proposed Coin Passcode is shown to have higher password complexity than its closest counterpart, the numerical passcode model. Even though the alphanumerical password model has a higher password complexity, it is still not a completely secure password mechanism because of its vulnerability to shoulder surfing attacks. The Coin Passcode is designed to overcome the shoulder surfing vulnerability and is currently tailored to swift mobile authentication, greatly enhancing password complexity compared to its nearest comparison. An even higher password complexity could be achieved if the Coin Passcode's multi-elemental concept were applied to an alphanumerical-style input.

The memorability of the Coin Passcode is also a key benefit, owing to its straightforward elemental attributes, which can be formed into a chain of story-like keywords that other existing password models lack. People are more likely to remember a story formed by visuals than by numbers or letters. The login time for the Coin Passcode is not as fast as that of the existing password models because of the randomization and element shuffling in the Coin Passcode model. However, this can be regarded as prioritizing security over performance.

VI. CONCLUSION

In conclusion, the Coin Passcode is able to overcome the shoulder-surfing and spyware attack vulnerabilities that existing mobile application numerical passcode authentication layers suffer from. It is shown that a multi-elemental passcode for a mobile login interface can prevent direct-observation password attacks, while at the same time providing higher password complexity against brute-force and password guessing attacks. It is the combination with the behavioral context unique to each person that makes this multi-elemental passcode a stronger mobile password interface.

The authors believe in the real potential of graphical passwords to benefit the swift authentication mechanisms of current smart mobile devices, in terms of both usability and security. This motivated us to propose the Coin Passcode Graphical Password Mobile Authentication Model, in the hope of overcoming these challenges by combining simplicity in usage with complexity in security for mobile developers as well as mobile users. However, there are still limitations in the current design of the Coin Passcode that can be addressed in the future for the betterment of mobile security. One of them is the lack of encryption for the coin passcode input and stored passcodes, as the elements are currently stored purely in plaintext and can easily be modified via a code injection attack. The multi-elemental input concept should also be explored in password domains other than mobile security layers, such as network security and banking security.
Based on the calculations below, the Coin Passcode model is much more resistant to brute-force attacks than Numerical Passcodes and Passfaces™, but weaker than the Alphanumerical Password because of the differences in the number of elements. The complexity comparison chart of the Coin Passcode and the Numerical Passcode can be seen in Figure 12. It takes 729 million attempts to brute-force a triple-elemental Coin Passcode in the right sequence and find the right combination of secret element values. This makes the password far more difficult to guess, given the huge difference in the number of possible passcode combinations.

TABLE I. PASSWORD ATTACK COMPARISON TABLE
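As a quick sanity check of the figure quoted above, the 729 million number follows if each coin offers 30 candidate secret values (10 icons + 10 numbers + 10 colors) and the sequence has six coins. The short calculation below is illustrative only; the element counts are the ones stated earlier in the paper, and the alphanumeric comparison assumes a six-character mixed-case alphanumeric password.

```python
# Keyspace sizes discussed above (illustrative; assumes an attacker must guess
# both the secret element value and its position in the six-coin sequence).
ELEMENT_VALUES_PER_COIN = 10 + 10 + 10   # 10 icons + 10 numbers + 10 colours
SEQUENCE_LENGTH = 6

coin_passcode_space = ELEMENT_VALUES_PER_COIN ** SEQUENCE_LENGTH
numeric_pin_space = 10 ** SEQUENCE_LENGTH                 # six-digit numerical passcode
alphanumeric_space = (26 + 26 + 10) ** SEQUENCE_LENGTH    # six-char mixed-case alphanumeric

print(coin_passcode_space)   # 729_000_000 -> the "729 million attempts" quoted above
print(numeric_pin_space)     # 1_000_000
print(alphanumeric_space)    # 56_800_235_584 -> why alphanumeric remains larger
```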
Direct Observation of the Dynamics of Self-Assembly of Individual Solvation Layers in Molecularly Confined Liquids Confined liquids organize in solidlike layers at the liquid-substrate interface. Here we use force-clamp spectroscopy AFM to capture the equilibrium dynamics between the broken and reformed states of an individual solvation layer in real time. Kinetic measurements demonstrate that the rupture of each individual solvation layer in structured liquids is driven by the rupture of a single interaction for 1-undecanol and by two interactions in the case of the ionic liquid ethylammonium nitrate. Our results provide a first description of the energy landscape governing the molecular motions that drive the packing and self-assembly of each individual liquid layer. Understanding the structural properties of liquids at the solid interface is fundamental to many applications in the fields of nanotribology, wetting, or molecular biophysics [1].When confined between two flat surfaces separated by a few nanometers, liquids exhibit properties that cannot be described by the continuum theories based on van der Waals and electrostatic interactions characterizing the bulk properties.Instead, the molecules forming the liquid order into well-defined solvation layers, giving rise to oscillatory forces described by radial distribution functions with a periodicity of about one molecular diameter [1]. The discrete oscillation forces were first experimentally measured by pioneering surface force apparatus (SFA) experiments, in which liquids are confined between two opposing macroscopic flat mica plates of R ∼ 10 μm [2].These observations were later complemented by atomic force microscopy (AFM) measurements, where the confined liquid is compressed by the AFM tip, with a much smaller contact radius (R ∼ 10 nm) [3].In the latter case, the deflection of the AFM cantilever is measured as it approaches a rigid, flat substrate [typically highly oriented pyrolytic graphite (HOPG) or mica] at constant velocity.The layered nature of the probed liquids is hallmarked by the presence of discrete jumps in the resulting forcedistance curves.Each discontinuity or jump occurs when an individual layer of confined liquid is suddenly squeezed out from below the AFM tip [4].Crucially, the distance between two consecutive jumps informs on the precise orientation of each molecular layer with respect to the surface.This experimental approach has revealed invaluable information about the molecular packing properties of a variety of chemically distinct liquids, encompassing nonpolar liquids with spherical structure such as octamethylcyclotetrasiloxane (OMCTS) [5], linear and branched alkanes [6,7], polar liquids such as short alcohols [8], a variety of ionic liquids [9,10], and even water [11,12], all exhibiting a priori similar force-induced rupture mechanisms.This progress notwithstanding, the reverse process encompassing the reversible reformation of each individual molecular layer, together with the quantification of the number of interactions involved in the rupture and reformation processes, remained completely elusive. 
In this Letter, we make use of force-clamp spectroscopy AFM to directly capture the force-induced rupture and reformation of each of the individual solidlike ordered layers populating the alcohol-HOPG and ethylammonium nitrate (EAN)-mica interfaces. These measurements enable us to map out the complete energy landscape of a confined liquid, providing (sub)molecular insight into the mechanisms underpinning the rupture and reformation dynamics of packing and self-assembly of an individual liquid layer according to its particular chemical structure.

Using a homemade AFM spectrometer [13], we measured individual force-distance curves [Fig. 1(a)] on 1-undecanol, whereby the cantilever was approached towards (red trace) and, subsequently, retracted from (blue trace) a HOPG surface at a constant velocity of 200 nm s−1. Upon approaching the surface, the cantilever penetrated 5 well-defined 1-undecanol layers (red arrows), each of them requiring an exponentially higher force to be indented [Fig. 1(b)]. As the cantilever was withdrawn from the sample, each previously broken layer was reversibly reformed (blue arrows). The distribution of breakthrough forces can be fitted with an exponentially decaying function [4] (where d is the thickness of the confined layers and τ stands for the decay length), further confirming the periodic packing properties of 1-undecanol. The average jump-in distance (∼3 Å) is significantly shorter than the 1-undecanol length (∼14.7 Å) [14], thus suggesting an almost flat layering of the 1-undecanol molecules on the HOPG surface. These results are in full agreement with previous x-ray [15], SFA [14], force [8], and imaging [16] AFM studies. The direct observation of the discrete reformation events in the force-extension traces [Fig. 1(a), blue arrows] suggests that the rupturing process is reversible and that, by holding the applied force constant throughout the experiment at an "average" set-point value between the rupturing and reformation force, the equilibrium dynamics between the broken and reformed states should be captured. In fact, applying a constant force of 430 pN with the force-clamp setup displayed bistability [Fig. 2(a)]; layer 3 hopped between the broken and reformed states in real time, with a concomitant change in length Δl = 3.3 ± 2 Å, corresponding to the length of an individual layer (Fig. S2 in the Supplemental Material [18]). Changing the applied force to lower (410 pN) and higher (450 pN) values dramatically altered the equilibrium between the reformed and broken states. The thermodynamics of the reformed ⇄ ruptured transition can be evaluated by measuring the relative time abundance of the reformed state, N_ref/N_total, for each distinct constant force value according to the Boltzmann distribution, where ΔG is the difference in free energy between the broken and reformed states and Δl is the physical length separating them. Fitting this equation to the experimental data corresponding to the rupture and reformation of layer 3 yields the associated ΔG and Δl values. Notably, the measured value of Δl is consistent with the thickness of one individual 1-undecanol layer and in agreement with the results obtained from force-extension (Fig. S1, [18]) and force-clamp (Fig. S2, [18]) measurements. The force-dependent rates of breakthrough and reformation have been measured by extracting the dwell time for each individual transition occurring at a fixed force (Fig. S3 [18]), assuming that, as a first approximation, the distribution of dwell times at a particular force follows a single exponential (Fig. S4, [18]).
Our observations demonstrate that, both for the rupturing and the reformation processes, the measured rates depend exponentially on the applied force following a simple Bell formalism [19], where α(F) is the rate of the process as a function of force, A_0 the preexponential factor, ΔE stands for the height of the energy barrier, Δx is the width of the energy barrier governing the process (normally known as the "distance to the transition state"), and N is the number of bonds (or interactions) that withstand mechanical force. This approach has been used to interpret a wide variety of single-molecule pulling experiments [13,20,21] for which N = 1. Figure 2(c) shows the graph plotting ln[α(F)] as a function of the pushing force, for both the rupturing (red) and reformation (blue) processes. Fitting Eq. (3) for both processes results in Δx_rupture/N = 2.5 ± 0.1 Å for the rupturing process and Δx_reformation/N = 0.5 ± 0.1 Å for the reverse reforming reaction. By assuming an arbitrary value A_0 = 4 × 10^9 s−1 [22], we obtain ΔE = 44 kT. Dividing the expressions for the rates of rupturing, α_rup, and reformation, α_ref, leads to Eq. (4). Fitting Eq. (4) to the experimental data, and using the previously obtained value of Δl = 2.6 ± 0.2 Å, we obtain ΔG = (31 ± 2) kT and N = 0.9 ± 0.1. Remarkably, the ΔG values obtained through kinetic and thermodynamic measurements are in close agreement, thus strongly suggesting that the hopping process occurs through a single barrier. Most importantly, our experiments revealed that N ≈ 1 (by definition N must be an integer ≥ 1). This has fundamental implications, entailing that the applied force is transmitted through an "individual" interaction within a very narrow space. These conclusions complement recent findings suggesting that the force does not distribute through the whole cantilever tip in contact with the liquid (which would imply that a larger number of bonds are effectively broken, yielding a large N) but rather through the last microasperities of the AFM cantilever tip [23]. The same approach used to uncover the kinetic and thermodynamic properties of layer 3 was extended and applied to layers 1 and 2 (Figs. S6-S8 in the Supplemental Material [30]). Combined, these results allowed us to reconstruct the energy landscape of the first confined layers of 1-undecanol in the 1D direction perpendicular to the graphite surface [Fig. 2(d)]. Interestingly, and as expected in light of the large hysteresis between the breakthrough and reformation events observed in Fig. 1(a), the layer closest to the surface does not exhibit signatures of dynamic equilibrium between the ruptured and reformed states. Moreover, in this case, N is an order of magnitude bigger than in the rest of the confined layers (Table SI, [18]). The markedly different behavior of this first layer suggests that the underlying solid substrate is likely to have a large effect on the conformation and dynamics of the layer.
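The Bell-type analysis above can be illustrated with a short numerical sketch. The equations referenced as Eqs. (2)-(4) are not reproduced in this extract, so the snippet assumes the commonly used form α(F) = A_0 exp[-(ΔE - FΔx/N)/kT], under which a linear fit of ln α versus F has slope Δx/(N kT); the force and rate values below are invented placeholders, not the measured data.

```python
import numpy as np

kT = 4.11e-21   # thermal energy at ~298 K, in joules (k_B * T)

# Hypothetical rates (s^-1) at constant pushing forces (pN); in the experiment
# these come from single-exponential fits to the dwell-time distributions.
force_pN = np.array([410.0, 420.0, 430.0, 440.0, 450.0])
rupture_rate = np.array([0.8, 1.5, 2.9, 5.4, 10.2])
reform_rate = np.array([9.5, 7.8, 6.4, 5.3, 4.4])

def bell_slope(force_pN, rate):
    """Fit ln(rate) vs force; under alpha(F) = A0*exp(-(dE - F*dx/N)/kT)
    the fitted slope equals dx/(N*kT)."""
    slope, intercept = np.polyfit(force_pN * 1e-12, np.log(rate), 1)  # force in N
    return slope, intercept

s_rup, _ = bell_slope(force_pN, rupture_rate)
s_ref, _ = bell_slope(force_pN, reform_rate)

dx_over_N_rupture = s_rup * kT      # metres; positive: pushing speeds up rupture
dx_over_N_reform = -s_ref * kT      # metres; reformation is slowed by the push
print(dx_over_N_rupture * 1e10, "Angstrom")
print(dx_over_N_reform * 1e10, "Angstrom")
```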
The dynamic properties of a long, nonbranched alcohol, such as 1-undecanol, lying flat on a HOPG surface set the ground for studying the molecular self-assembly mechanism of chemically more complex solvents. Because of their sterically mismatched anion-cation pairs, ionic liquids exhibit fascinating properties that lie in between those of a liquid and a solid [9,10]. In this context, ethylammonium nitrate (EAN) has revealed oscillatory forces in the vicinity of a flat mica substrate. Using constant velocity experiments [Fig. 3(a)], we resolve discrete solvation layers at the EAN-mica interface, in agreement with previous SFA [24] and AFM results [9,25]. This suggests that the EA+ and the nitrate ions facing the bulk liquid are perpendicularly adsorbed onto the mica substrate. Because of the similar forces between consecutive solvation layers in EAN [Fig. 3(a)], applying a constant pushing force of 50 pN to the EAN-mica interface results in an equilibrium situation where the system hops between three well-defined states (layers 1-2) in discrete jumps of ∼5 ± 2 Å [Fig. 3(b)]. Slightly changing the applied force to lower (40 pN) or higher (70 pN) values dramatically shifts the equilibrium properties of the system. The relative time abundance of each state as a function of the applied force is shown in Fig. 3(c). Fitting Eq. (2) for each state provides a direct measurement of the associated ΔG and Δl values for the studied layers 1 and 2. Crucially, the obtained Δl values are ∼5 Å, thus corresponding to the actual measurement obtained from force-extension [Fig. 3(a) and Fig. S9 in [18]] and force-clamp (Fig. S10, [18]) experiments. As before, the dwell times for each individual transition can be used to compute the cumulative probability density function for the rupturing and reformation processes, the time course of which can in both cases be captured by a single exponential [Fig. 3(d)]. The resulting rates at each particular force are fitted to Eq. (4),
yielding in this case N = 2.0 ± 0.8 ≈ 2 [Fig. 3(e)]. The observed N = 2 indicates that 2 interactions are disrupted in parallel, most likely suggesting that the ions have to be dislocated in pairs in order to avoid configurations where two ions of the same sign are in close contact. While qualitatively similar, AFM solvation force measurements quantitatively differ from SFA results [23]. The most important difference lies in the massively different (∼10^6-fold) contact radius, which affects the liquid confinement area and hence the mechanisms of liquid squeeze-out. Briefly, the nanometer confinement below the AFM tip has allowed measurement of solvation forces in branched liquid molecules lacking molecular symmetry [6], and the packing properties can be analysed even for surfaces that are not atomically flat (such as self-assembled monolayers, SAMs) [23], both being important prerequisites in SFA measurements [1]. Moreover, the large temperature effect of the solvation forces observed with AFM [26] and the lack of correlation between the measured forces and the tip radius [27] (indicating that the nanometer-scale microasperities of the tip dominate the short-range interaction) suggest novel molecular mechanisms underlying the squeezing out of liquids confined in the gap between the cantilever tip and the substrate [23]. These observations have recently been complemented by AFM and STM imaging [16,28,29], directly demonstrating the semisolid behavior of the liquid layers. Our results using force-clamp spectroscopy add a new dimension to the study of solvation phenomena, providing precise quantification of the equilibrium rupture-reformation dynamics, which allows, for the first time, the reconstruction of the solvation free-energy landscape. Moreover, these experiments provide new insights into the (molecular) mechanisms underlying liquid disruption: (i) using the Bell formalism, borrowed from antigen-antibody and single-molecule mechanics pulling experiments, we gained access to the number of key interactions disrupted (and reformed) under force. (ii) For 1-undecanol, molecules lie parallel to the HOPG surface. Our experiments reveal that a unique interaction (N = 1) needs to be broken to induce the rupture of the layer. We speculate that the pore created in the structure of the confined liquid induces the lateral shift of a whole row of molecules pushed to the sides [Fig. 4(a)]. However, our experiments cannot tell whether the lateral displacement corresponds to 1 or more molecular positions. (iii) In the case of EAN, molecules are oriented upright along the mica surface normal. The EA+ moiety measures 3 Å, while the nitrate measures 1.7 Å [30]. Since the barrier is highly symmetric (Δx = 2.5 Å), and probably located in the plane where electrostatic interactions take place, we speculate that there is a slight molecular mismatch between the positions of contiguous ion pairs in order to better accommodate their charges. Our experiments show that in this case N = 2.
Since N only corresponds to the number of molecules or interactions being disrupted in parallel by the tip, we hypothesise that individual anion-cation pairs need to be displaced together in tandem [Fig. 4(b)], which is easy to rationalize given their electrostatic nature. These results demonstrate the surprisingly short-range dependence of the interaction between the last atom of the AFM probe and the solvation layer. Altogether, our experiments directly capture the molecular motions involved in the rupturing and self-assembly processes of individual solidlike liquid layers and highlight the single-molecule nature of the squeeze-out mechanism when the confinement area lies in the nanometer realm.

FIG. 1 (color online). Solvation layers in 1-undecanol revealed by constant velocity AFM. (a) Force vs distance plot of the AFM tip approaching towards (red line) and, subsequently, retracting from (blue line) the HOPG surface in 1-undecanol at constant velocity, showing the rupture and reversible reformation of a total of 5 confined layers. (b) Rupturing and reforming forces (n = 90 individual trajectories) for the first four measurable solvation layers.
Role of Survivin in Bladder Cancer: Issues to Be Overcome When Designing an Efficient Dual Nano-Therapy Bladder cancer is the 10th most diagnosed cancer, with almost 10 M cancer deaths last year worldwide. Currently, chemotherapy is widely used as adjuvant therapy after surgical transurethral resection. Paclitaxel (PTX) is one of the most promising drugs, but cancer cells acquire resistance, causing failure of this treatment and increasing the recurrence of the disease. This poor chemotherapeutic response has been associated with the overexpression of the protein survivin. In this work, we present a novel dual nano-treatment for bladder cancer based on the hypothesis that the inhibition of survivin in cancer cells, using a siRNA gene therapy strategy, could decrease their resistance to PTX. For this purpose, two different polymeric nanoparticles were developed to encapsulate PTX and survivin siRNA independently. PTX nanoparticles showed sizes around 150 nm, with a paclitaxel loading of around 1.5%, that produced sustained tumor cell death. In parallel, siRNA nanoparticles, with similar sizes and loading efficiency of around 100%, achieved the oligonucleotide transfection and knocking down of survivin expression that also resulted in tumor cell death. However, dual treatment did not show the synergistic effect expected. The root cause of this issue was found to be the cell cycle arrest produced by nuclear survivin silencing, which is incompatible with PTX action. Therefore, we concluded that although the vastly reported role of survivin in bladder cancer, its silencing does not sensitize cells to currently applied chemotherapies. Introduction Bladder cancer is the 10th most diagnosed cancer with an estimated 440,000 new cases worldwide in 2020, accounting for a 5-year prevalence of up to 1.7 M people [1,2]. The most common histologic type of bladder cancer is urothelial carcinoma, which was formerly known as transitional cell carcinoma (TCC), due to the fact that the transitional cells located at the outside layer of the bladder are the ones transformed into cancer cells [3]. Although current diagnosis methods are highly invasive, and bladder cancer remains complex and difficult to identify [4,5], it is well known that all types of bladder cancer start in the inner lining of the bladder but from different cell types: urothelial cells (95% of bladder cancer), squamous cells (4%) and glandular mucus cells (1%). Approximately 75% of patients are non-muscle-invasive TCC and have a 5-year survival rate between 88% and 98% [2,6]. The other 25% of patients diagnosed with TCC are muscle-invasive in stages between 1 and 4. Depending on the stage of the muscle-invasive TCC the survival rate can range between 80% in the 5 years after diagnosis until 5% of survival with cancer in stage 4 [1]. Synthesis of P polymer nanoparticles encapsulating PTX: Nanoparticles (named PTX-NP) were prepared according to a modified nanoprecipitation method published before [16]. Briefly, 20 mg of P polymer dissolved in 600 µL of acetone and 400 µL of PTX (1 mg/mL PTX in acetone) were mixed together, to form the diffusing phase. This phase was then added to dispersing phase, 20 mL of water, by means of a syringe controlled by a syringe pump (KD Scientific), positioned with the needle directly in the medium, under a magnetic stirring of 700 rpm and at room temperature and with a flux of 50 µL/min. 
The resulting NPs suspension was then centrifuged in Amicon centrifugal filters at 6000 rpm two times, first 20 min and then for another 30 min. The filtered NPs were resuspended with 1 mL of mQ H 2 O. Synthesis of siRNA pBAE nanoparticles: Two different types of pBAE-NPs were synthesized: with and without protein bromelain (PB) coating. Additionally, two different nucleic acids were encapsulated: siRNA and pGFP as reporter genes. NPs encapsulating siRNA had a final concentration of siRNA of 0.03 mg/mL and the different polymer: siRNA ratios used were 25:1, 50:1, 100:1, 150:1, 200:1 and 300:1 (w/w). For pGFP-nanoparticles, the polymer: DNA ratio used was 50:1 (w/w) and the final concentration of plasmid was 0.06 mg/mL. Polymers used to prepare both types of NPs were of C32 type, and different combinations were used: C32-CR3, C32-CK3, C32-CR3:C32-CK3, C32-CK3:C32-CH3 and C32-CR3:C32-CH3, named as follows: R, K, RK, KH and RH, with the following protocol we previously described [23]. Briefly, siRNA NPs were prepared by mixing equal volumes of siRNA at 0.03 µg/µL with polymers at different concentrations, depending on the polymer: siRNA ratio, in NaAc buffer solution (25 mM, pH 5.5). When the encapsulated genetic material was pGFP, the procedure was the same but with a concentration of 0.06 µg/µL. Then, siRNA was added over polymer solution and was mixed by pipetting, followed by vortexing for 5 s and was incubated at room temperature for 10 (especially for pGFP) or 30 min. When the complexes had coating of PB, different dilutions of PB were prepared, as previously described [27]. To create a coating of PB, siRNA was diluted the same way previously described and 2 µL of R(100 µg/µL) was diluted in 48 µL of NaAc buffer solution (25 mM, pH 5.5) to obtain a final concentration of 6 µg/µL (the concentration was doubled compared to previously due to the fact that is half the volume). After mixing and vortexing for 5 s of the mixture of siRNA and polymer, 50 µL of PB with the corresponding concentration were added carefully, followed by vortexing for 5 s and were incubated at room temperature for 10 (specially for pGFP) or 30 min. Determination of nucleic acid encapsulation by electrophoretic mobility shift assays: The capacity of NPs to encapsulate siRNA at different polymer ratios was studied with the electrophoretic mobility of polymer: siRNA complexes, which was measured on agarose gels (2.5% of agarose w/v) in Tris-Acetate-EDTA (TAE) buffer containing ethidium bromide. The electrophoresis mixture was added into the cubed and the gel was allowed to solidify for 20 min. Then the electrophoresis support was placed into the TAE 1× bath. Finally, Pharmaceutics 2021, 13, 1959 4 of 19 samples were loaded and were run for 1 h at 80 V (Apelex PS 305, France). Finally, siRNA bands were visualized by UV irradiation. Determination of NPs size and polydispersity: Particle size and surface charge measurements were determined by dynamic light scattering (DLS) at room temperature with a Zetasizer Nano ZS (Malvern Instruments Ltd., Worcestershire, UK, 4-mW laser) using a wavelength of 633 nm. Correlation functions were collected at a scattering angle of 173 • , and particle sizes were calculated using the Malvern particle sizing software (DTS version 5.03). The value was recorded as the mean +/− standard deviation of three measurements and each measurement was determined from the average of 20 cycles in a disposable plastic cuvette. The size distribution was given by polydispersity index. 
The zeta potentials of complexes were determined from the electrophoretic mobility by means of the Smoluchowski approximation. The zeta potential of samples was determined in triplicate from the average of 10 cycles of an applied electric field. In this case, 1 mL of the previous complexes were added into zeta potential cuvette. PTX loading efficiency: Freeze-dried NPs loaded with PTX were dissolved in acetonitrile and the amount of entrapped drug was detected by Ultra Performance Liquid Chromatography (UPLC) (Waters ACQUITY UPLC H-Class). A reverse-phase BEH C18 column (1.7 µm 2.1 × 50 mm) was used. The mobile phase consisted of a mixture of acetonitrile and water (60:40 v/v) and was delivered at a flow rate of 0.6 mL/min. PTX was quantified by UV detection (λ = 227 nm, Waters TUV detector). Drug content was expressed as drug content (D.C. % w/w); represented by Equation (1). For each sample, the mean value was recorded as the average of three measurements. The results were expressed as mean ± S.D for two replicates. Equation (1) In vitro cellular transfection of pBAE-NPs: For immunofluorescence experiments, siRNA F AF546b was used. Cells were grown over a sterile cover slip (gelatine at 0.1% coating for 20 min) in a 12-well plate. Cells were seeded at 200,000 cells/well and incubated overnight to 80% confluence. Cells were washed with PBS 1× and siRNA complexes were added diluted in Mccoy's minimum medium at a final concentration of 16 pmol of siRNA/well. Then, cells were incubated for 2 h at 37 • C in 5% CO 2 atmosphere. All the transfections and controls were performed in triplicate. For flow cytometry experiments, the experiments were performed equally but scaled down to 96 well plates, and pGFP was used instead. For Western blot analysis, on the contrary, the experiment was scaled up to 6-well plates. Cytotoxicity analysis by MTT assay: Performed as we reported previously [16,24]. Fluorescent microscopy to determine nanoparticle uptake: After desired time, cells were washed with PBS 1× and then formalin 10% was added during 20 min at RT. Afterward, cells were washed twice with 1000 µL of PBS 1× and 100 µL of Triton-X-100 0.1% was added in order to allow the permeabilization of the cells. After 30 min cells were washed again twice with PBS 1× and were incubated with DAPI 1:10,000 in PBS 1× for 5 min. Finally, cells were washed three more times with PBS 1× for 5 min. The covers were prepared with mounting medium and were ready to be seen under fluorescence light. Fluorescence was analyzed with the corresponding filter with the fluorescence Zeiss Axiovert 200 M microscope. ImageJ was used for the quantification of the fluorescent signals, according to recommended protocol [28]. In brief, relative quantification (CTCF values) was performed by normalizing the regions of interest of the transfected cells to the black regions as background. Survivin expression by Western blot analysis: (1) Cell lysis by aspirating media and cells were washed with warm PBS 1×. Then, cells were scraped, collected on Eppendorf tubes and centrifuged at 1500 rpm for 2 min at 4 • C. The pellets were dissolved and incubated with lysis buffer (RIPA reagent 1× and 1:200 Protein inhibition cocktail) for 20 min on ice. Next, centrifugation of lysate at 10.000 rpm for 10 min was performed and supernatants were stored at −20 • C in aliquots of 20 µL. (2) Protein quantification Pharmaceutics 2021, 13, 1959 5 of 19 by BCA, following distributor instructions. 
It was necessary 30 µg of total protein for survivin protein study. (3) SDS-PAGE Gel preparation and running. Running gels: 15% acrylamide. Stacking gels: 6.1 mL of mQH 2 O, 2.5 mL of solution C (0.5 M Tris-HCl), 1.3 mL of solution A, 100 µL of solution D, 10 µL of TEMED and 50 µL of solution G. The samples had added loading buffer and 25 µL of sample was loaded in the gel. Gels were bathed with electrophoresis buffer (7.5 g Tris-basic, 39 g Glycine, 2.5 SDS and 50 mL of mQH 2 O) and run at 150 V (constant). (4) Transfer of the proteins to a PVDF membrane using the XCell IITM Blot Module from Biorad. Pre-wetting of the PVDF membrane in 100% methanol for 30 s, drain and equilibrate with transfer buffer (3.03 g Tris-basic, 14.4 g glycine, 200 mL methanol). The transfer run for 2 h at 40 V imbibed in transfer buffer. (5) Blocking and detection (actin + surviving). After the transfer, the membranes were incubated at room temperature for 2 h in an orbital shaker with blocking buffer (PBS 1×, 0.1% Tween and 5% non-fat powdered milk). Primary antibodies were resuspended in blocking buffer (Mouse anti-actin 1:2000; goat anti survivin 1:1000) and then were incubated with the membrane overnight at 4 • C in an orbital shaker. Next, the membranes were washed out with washing buffer three times for 10 min. The secondary antibody was resuspended in PBST (PBS 0.1% (v/v) Tween 20) (Goat anti-rabbit HRP 1:2000; Rabbit anti-mouse HRP 1:10,000) and it was incubated with the membrane. Next, the membrane was washed 3 times with PBST for 10 min, and HRP was detected by chemiluminescence with Luminata TM forte. Then, the membrane was revealed using ImageQuant LAS 4000 mini (GE Healthcare Life Science). Survivin intracellular localization by immunofluorescence: After the same treatment explained before for cell uptake, incubation with the primary antibody (dilution 1:100) previously described against survivin was made. The secondary antibody was goat anti-rabbit Alexa 488 at a dilution of 1:1000 A final washing step was performed with PBS 1× and DAPI staining was carried out as previously described. The mounting was made with mounting solution and the samples were studied under Zeiss microscope. Cell cycle analysis by flow cytometry: Cell media after transfection were aspirated and cells were washed with warm PBS 1×. Then, cells were trypsinized and collected in Eppendorf tubes and centrifuged at 1000 rpm for 5 min. The pellet was washed with PBS 1×. Cells were centrifuged again at 1000 rpm for 5 min and pellet was resuspended with a solution of 70% of cold ethanol. For propidium iodide staining cells were centrifuged at 1000 rpm for 5 min and the ethanol was decanted. Cells were washed with PBS 1× and centrifuged again at 1000 rpm for 5 min. A mixture of 0.1% (v/v) Triton X-100 (Sigma) in PBS with 2 mg of RNasa A and 200 µL of propidium iodide 1 mg/mL was prepared. Cells were resuspended with this mixture at a concentration of 1 × 10 6 cell/mL for 15 min at 37 • C, 30 min at RT or at 4 • C overnight. Cell cycles were analyzed with flow cytometry (BD Fortessa). Statistical analysis: Data represent the mean ± SD from at least three independent determinations. The significance of differences between more than three samples was analyzed by one-way ANOVA and post hoc test, whereas the significance between two samples was analyzed by the Mann-Whitney U test using the GraphPad Prism software ver. 6.0 (GraphPad Software, San Diego, CA, USA), and a p-value less than 0.01 was considered statistically significant. 
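The statistical workflow just described (one-way ANOVA with a post hoc test for comparisons of three or more groups, the Mann-Whitney U test for pairwise comparisons, and a significance threshold of p < 0.01) can be reproduced with standard scientific Python tools. The sketch below is only an illustration with made-up viability values; the original analysis was performed in GraphPad Prism.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Made-up % viability data for three treatment groups, three replicates each.
control = rng.normal(100, 5, 3)
ptx_np = rng.normal(60, 5, 3)
sirna_np = rng.normal(70, 5, 3)

# Three or more groups: one-way ANOVA (a post hoc test such as Tukey HSD,
# available as stats.tukey_hsd, would follow to locate which pairs differ).
f_stat, p_anova = stats.f_oneway(control, ptx_np, sirna_np)

# Two groups: Mann-Whitney U test, as used for pairwise comparisons above.
u_stat, p_mw = stats.mannwhitneyu(control, ptx_np, alternative="two-sided")

alpha = 0.01  # significance threshold used in the paper
print(f"ANOVA p = {p_anova:.4f}, significant: {p_anova < alpha}")
print(f"Mann-Whitney p = {p_mw:.4f}, significant: {p_mw < alpha}")
```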
Formulation and Characterization of PTX-Nanoparticles

In order to achieve a sustained and prolonged effect of PTX after its administration, it was encapsulated inside polymeric particles composed of the proprietary P polymer [16,17]. The physicochemical characterization of these particles showed sizes of around 148 nm, with negative surface charges and 1.4% drug loading (Table 1 summarizes the results), indicating that they were appropriate for parenteral administration, as expected. In fact, these nanoparticles had already been prepared, but including a targeting peptide to cross the blood-brain barrier, to treat glioblastoma cells [16,17,29].

Setting Up the pBAE Formulation for the Efficient Encapsulation of siRNAs

In our group, we have extensive expertise using our proprietary oligopeptide end-modified pBAE nanoparticles for the encapsulation of different kinds of nucleic acids [22-24,30]. In this work, we aimed to encapsulate the anti-survivin siRNA using the C32 polymer family (the most hydrophilic one), for use in bladder cancer treatment. We selected five different oligopeptide combinations that had worked efficiently in previous applications [23], and we started by assessing the required N/P ratio to encapsulate a scrambled siRNA corresponding to the two anti-survivin siRNAs designed (see details in the Methods section). As shown in Figure 1, 10/1 and 25/1 ratios were not sufficient for the encapsulation of the siRNA; at least a 50/1 ratio was required to achieve an encapsulation higher than 60% of the siRNA. For further experiments, 100/1 and 150/1 ratios were selected as a compromise between efficient encapsulation and the lowest amount of polymer needed, the siRNA, as the active compound, being the one to be maximized in the formulation.

Design of Experiments (DoE) for the Selection of pBAE Types

After the first EMSA analysis of encapsulation efficiency, the number of possible combinations was still high, since none of the conditions could be discarded. For this reason, a design of experiments (DoE) [31,32] was performed with the objective of rationally establishing the conditions for the most efficient nanoparticle synthesis. It was required to reduce the variables studied, so R and RK polymer combinations were selected for further experiments. In the next table (Table 2), the summary of the levels of the factors selected for the DoE is presented. These factors were combined, resulting in eight experiments (see Tables S1 and S2). The outputs selected for the analysis were: (1) nanoparticle size; (2) PDI; and (3) storage stability in time. As shown in Figure 2, for nanoparticle size, the contribution of the polymer type was key, while it was not for PDI and time stability. Incubation time was very important to maintain the PDI and the stability over time, outputs to which the N/P ratio and the siRNA concentration also contributed.
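For readers who want to see how eight runs can cover six factors, the sketch below lays out a 2^(6-3) fractional factorial in Python. This is a generic illustration, not the design actually used in the study: the factor names follow the parameters discussed in the text, but the paired levels are placeholders except where a value is stated explicitly (for example, the selected 0.03 mg/mL siRNA concentration and the RK polymer), and Table 2 of the original work defines the real design.

```python
from itertools import product

# Six factors at two levels each (levels are illustrative placeholders; the
# actual levels are those listed in Table 2 of the study).
factors = {
    "siRNA_conc_mg_mL": (0.03, 0.06),
    "polymer": ("R", "RK"),
    "buffer_mM": (12.5, 25),
    "temperature_C": (25, 37),
    "incubation_min": (10, 30),
    "NP_ratio": (100, 150),
}

names = list(factors)

# 2^(6-3) resolution-III fractional factorial: three base factors span the
# 8 runs, the remaining three are generated as products of the base columns.
runs = []
for a, b, c in product((-1, +1), repeat=3):
    coded = {names[0]: a, names[1]: b, names[2]: c,
             names[3]: a * b, names[4]: a * c, names[5]: b * c}
    # Map coded -1/+1 levels back to the real factor levels.
    runs.append({k: factors[k][0 if v < 0 else 1] for k, v in coded.items()})

for i, run in enumerate(runs, 1):
    print(f"Experiment {i}: {run}")
```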
Once the most influential parameters were identified and the non-influential factors were discarded, the experiments with the expected size and PDI (size < 300 nm, PDI < 0.5) and with the smallest variation before and after incubation were further studied in terms of size and PDI, as well as their stability over 60 min after preparation, to finally select the level of each factor (Table S2). These results are plotted in Figure 3. As clearly shown, experiment 1 is the one showing the smallest size, which was maintained after the incubation time, so the levels of the factors corresponding to this experiment were set as the most appropriate ones: 0.03 mg/mL siRNA concentration, RK polymer combination, 25 mM buffer, preparation at 25 °C, 30 min incubation, and 100/1 N/P ratio.

Modifying the Surface of the Particles to Enhance Their Stability

Although the most stable formulation was chosen in the DoE, one-hour stability would clearly not be enough for the clinical application of these particles. Consequently, and based on our previous studies, we selected the protease bromelain (PB) to coat the nanoparticles and provide them with higher stability. Interestingly, this protein has been described as able to cross mucosal barriers, which is required for the envisaged local intravesical delivery [33]. As shown in Figure 4, we were able to coat the particles without significantly modifying their characteristics in most conditions. In addition, the stability of the nanoparticles over 2 h was maintained when we added the highest concentration of PB, and neither the size, the PDI, nor the surface charge varied significantly. For these reasons, any of the concentrations could have been chosen and, consequently, we decided to use the highest one (0.33 mg/mL) as the PB concentration for the final formulation.
(Figure 4: statistical test comparing each condition with nanoparticles without the coating, at initial or final times; * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001.)
In Vitro Assays of pBAE-NPs Uptake and Transfection Efficiency
In vitro assays were performed with two bladder cell lines, selected to cover different subtypes of bladder cancer, as usually done in other antitumor studies [34]: one growing as a monolayer of homogeneously distributed adherent cells (T24), and the other forming clusters simulating adherent tumorspheres (RT4; see Figure S1). While RT4 represents adherent cells from a cell papilloma, T24 are adherent cells from a transitional cell carcinoma.
First, the capacity of the selected PB-coated RK pBAE nanoparticles to penetrate cells was qualitatively assessed by fluorescence microscopy, using a non-coding fluorescently labeled siRNA (F-siRNA). As shown in Figure 5, PB-coated RK pBAE nanoparticles achieved high penetration in both cell lines, especially in RT4. These were surprising results since, growing in these clusters, RT4 cells were expected to be more difficult to penetrate due to the tight junctions between cells. Further, the same nanoparticle type was prepared but encapsulating pGFP as a reporter plasmid, with the aim of studying nanoparticle transfection efficiency. As shown in Figure 6, both cell lines were efficiently transfected by the nanoparticles. Nevertheless, in this experiment the trend shown in the uptake study was not observed. Here, the transfection levels were higher using the positive control, but it must be taken into account that Lipofectamine, although it can efficiently transfect in vitro cultures, cannot be used in vivo due to toxicity issues. Comparing both cell lines, transfection was higher in T24, which could mean that, although particles seem to penetrate more in RT4 cells, the penetration may be superficial, with nanoparticles accumulated only in the border cells, not allowing the transfection of the inner ones.
In Vitro Antitumor Efficacy of PTX-NPs Monotherapy
Next, we tested, by means of colorimetric metabolic assays, the functionality of the engineered nanoparticles on bladder cell models. First, we checked the efficacy of PTX-NPs in comparison to free PTX. As shown in Figure 7A, the free PTX IC50 is strongly dependent on the cell line. As expected, the viability of T24 cells decreased as the concentration of PTX was increased up to 600 nM; from this concentration, the loss of viability was kept at 20%. Thus, T24 cells were very sensitive to PTX, with an IC50 around 25 nM. For RT4 cells, the IC50 was much higher, around 300 nM. The decrease in viability was not as abrupt as in T24 cells and the viability observed was higher, from 45% to 60%; these cells showed significant resistance to PTX. This remarkable difference in viability between both cell types could be explained by their differential sensitivity to PTX.
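For context, an IC50 such as those quoted above is typically obtained by fitting a dose-response curve to viability-versus-concentration data. The sketch below only illustrates that idea: the data points are made-up placeholders (the real measurements appear in Figure 7A), and the four-parameter logistic fit with scipy is our own choice, not necessarily the analysis performed in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical viability (%) vs. PTX concentration (nM); placeholders only.
conc = np.array([1, 5, 10, 25, 50, 100, 300, 600], dtype=float)
viability = np.array([98, 90, 75, 52, 38, 30, 24, 21], dtype=float)

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Initial guesses: plateau values from the data, IC50 near the mid-range.
p0 = [viability.min(), viability.max(), 25.0, 1.0]
params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.1f} nM (hill slope {hill:.2f})")
```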
As is widely known, PTX is a drug that attacks the mitosis phase, preventing cells from dividing [16,17,35]. Hence, it will be more effective in highly replicating cells. Accordingly, T24 cells have a doubling time of 20 h, whereas RT4 cells duplicate every 40 h. This important delay in replication time could explain why the antitumor effect of PTX in RT4 cells after 3 days was less evident than in T24 cells, which replicated in half of the time. With the established IC50 values, PTX-NPs were directly tested in different concentration ranges for each cell line. In addition, for RT4 cells, NPs were incubated for longer times to try to increase the effect. As shown in Figure 7B,C, the encapsulation of PTX inside the particles did not hamper its capacity to kill tumor cells. Nevertheless, while it significantly increased the mortality in RT4 cells, for T24 the effect was the opposite: the encapsulation of PTX resulted in lower mortality. The higher viability of T24 cells treated with encapsulated PTX, as compared to naked PTX, was expected and attributed to the sustained release of PTX from the P nanoparticles, as previously described [16]. When T24 cells were treated with 25 nM of encapsulated PTX, they showed 64% cell viability, in comparison to the 53% achieved with the same concentration of free PTX. Accordingly, it was decided that PTX-NPs would be used at 25 nM, in combination with siRNA-NPs, in order to be able to detect a potential synergistic effect. Regarding RT4 cells, we decided to study the cytotoxic effect of PTX after 6 days, because of their slow replication rate and the high viabilities achieved in the IC50 assay after 3 days of treatment (Figure 7A). As shown in Figure 7C, encapsulated PTX induced a higher cytotoxic effect than free PTX, which can again be attributed to the sustained drug release. As we previously published [16], it takes some days to release PTX from the NPs, so we hypothesize here that the concentration of PTX accessible to cells after 6 days of treatment with the initial single dose is higher for encapsulated PTX than for the naked drug.
In Vitro Antitumor Efficacy of pBAE-NPs Monotherapy
Next, we studied the antitumor efficacy of pBAE-NPs encapsulating two different anti-survivin siRNAs (see structure in Figure S2), selected from a bibliographic search [20,21,36,37].
Before assessing the capacity of the particles to produce tumor cell death, we confirmed by Western blot that the siRNAs downregulated the survivin gene in these specific tumor cell lines. As shown in Figure 8A and quantified in Figure 8B, transfection of cells with either of the two siRNAs against survivin induced a decrease in protein expression when compared to the negative control (untreated cells), especially for siRNA-1 in T24 cells, as expected from the previous bibliography [38]. For the RT4 cell line, both siRNAs tested achieved a similar silencing efficiency, although not as high as the one obtained for T24 cells (Figure 8B). Next, the antitumor capacity was studied over time for T24 cells, since it was the cell line showing the highest silencing effect (Figure 9A). As expected from the silencing experiments, siRNA-1 achieved a higher killing effect, showing the highest decrease in cell viability (around 50%) after 3 days. The smaller effect at longer times could be explained because, after long periods of time, cells that have not been killed are able to replicate again, hence increasing the cell culture viability. It could also be attributed to the degradation of the siRNA after the 3-day incubation time, which means that repeated doses could be required for the clinical use of the intended therapy. These data were in agreement with the previous bibliography: Yang et al., for example, showed a 30% decrease in cell viability and proliferation of T24 bladder cancer cells in vitro over 3 days when using the same survivin siRNA-1 oligonucleotide [35]. Interestingly, the efficacy of the present treatment was higher thanks to the use of pBAE NPs with the PB coating. In addition, other reports, such as the study of Grdina et al., needed a much higher concentration (100 nM vs. the 16 pM used in the present work) to obtain similar cell killing efficiencies in models of colon carcinoma [34]. Taking these results into account, we studied the viability of RT4 cells only at the 3-day time point (Figure 9B). Again, siRNA-1 achieved a more potent effect on killing tumor cells than siRNA-2. It is worth noting that, comparing the effect on both cell lines, the antitumor effects observed in T24 cells were higher than those for RT4 cells (40% vs. 60% survival). This was expected according to the silencing results previously observed (Figure 8). Consequently, in the following studies only siRNA-1 was used.
In Vitro Efficacy of the Dual Therapy
Once the efficacy of both monotherapies was confirmed, we aimed to design a dual combination therapy, since we hypothesized a synergistic effect could take place.
After 3 days of treatment with 25 nM PTX-NPs and/or 16 pM siRNA-1 pBAE-NPs, the viability of T24 cells treated with the combination treatment was around 25%, whereas cells only treated with PTX or only transfected with the anti-survivin siRNA-1 had 60% and 45% viability, respectively. Thus, the combined treatment produced a statistically significant decrease in cell viability compared to both treatments administered separately (Figure 9C). For RT4 cells, the experimental setup was the same, with the exception of the PTX dose, which was adjusted to 300 nM. It is worth remarking that the dose of PTX differs between cell lines because it was adjusted according to the results of the monotherapies above (see Figure 7). As clearly seen (Figure 9D), there is a trend towards an increased effect when combining both therapies, although the effect, as already seen for the monotherapies, is not as potent as for T24 cells. Nevertheless, for both cell types the viability decrease was observed even with the scrambled siRNAs used as a negative control. Consequently, the mortality could not be attributed to the selective silencing of survivin, but to an unspecific effect of the pBAE-NPs. We hypothesized that this effect could be explained by the fact that the addition of PTX right after the incubation with pBAE-NPs could prevent cells from recovering from the transient toxicity associated with pBAE transfection (produced by a very high local concentration). In order to check this hypothesis, PTX-NPs were added to cells 24 h after the siRNA treatment and cell viability was assessed 3 days later. Although in this case a very slight decrease in viability was observed with the anti-survivin siRNA-1 pBAE-NPs, this decrease was not significant in any of the cell lines tested. Therefore, although we may be reducing the unspecific toxicity mentioned above, the results were not conclusive.
Influence of the Cell Cycle Arrest on Dual Therapy Efficacy
At this point, we wondered why both treatments seemed to work when used as monotherapies, but not when combined. Thus, we performed a deeper study of survivin expression. In previous studies, the nuclear and cytoplasmic survivin isoforms were reported to have an almost identical structure that could not be differentiated by siRNAs or by antibodies [36], although they have different functions [37]. While the nuclear isoform might control cell division and proliferation, the cytoplasmic presence of survivin may be associated with cell survival and apoptosis [39]. The expression of both isoforms is not equal in all bladder cancer cells; different studies showed that nuclear expression of survivin was present in only 60% of the TCC studied [40,41]. Taking these data into account, the expression of survivin was assessed in the T24 and RT4 cell lines by fluorescence microscopy. As shown in Figure 10A, survivin expression in T24 cells was spread over the whole cell volume, including nuclear localization, while RT4 survivin expression was found preferentially accumulated in the cytoplasm. Therefore, we hypothesized that we were only blocking the expression of a nuclear isoform of survivin. This would stop the cell cycle, as described previously [9], impairing the cytotoxic effect of PTX because it can only kill dividing cells. Accordingly, we performed an analysis of the cell cycle (see details in Figures S3 and S4 and a summary in Figure 10B,C) and we observed that the inhibition of survivin produced a different cell cycle effect depending on the cell line.
In fact, only in T24 cells was the G2/M stage significantly increased after 2 and 3 days of treatment with anti-survivin siRNA-1. According to the previous literature [42], survivin regulates the cell cycle, with overexpression in the G2/M stage. This is because T24 survivin is mostly located in the cell nucleus. Altogether, these data explain why the combination of PTX and siRNA against survivin did not induce a specific synergistic effect in T24 cells. In the case of RT4 cells, their cell cycle was not influenced by the treatment (Figure 10C), which can be explained by the cytoplasmic localization of survivin in this cell line (Figure 10A). The decrease in viability observed previously (Figure 9), where RT4 cells showed 60% viability after anti-survivin siRNA-1 transfection, could have been produced by the inhibition of cytoplasmic survivin, which induces apoptosis [21,43]. Previously, we had studied the amount of survivin expressed by RT4 and T24 cells in a Western blot assay. As shown in Figure 8, the levels of survivin expression in RT4 cells were much higher than those of T24 cells. We hypothesize that this fact might be the reason why no synergistic effect was observed when PTX was combined with the siRNA treatment in RT4 cells: the silencing of survivin could be enough to produce an increase in cell apoptosis, but not enough to induce a decrease in chemoresistance against PTX.
Discussion
Bladder cancer remains among the ten most common cancers worldwide and clinical guidelines have not improved notably in recent years [1,2]. For this reason, innovative therapeutic strategies are still a medical need. In this context, we aimed to develop a dual therapy consisting of a chemotherapeutic drug combined with a gene-targeted therapy. The chemotherapeutic drug selected was paclitaxel, due to its extended use for bladder cancer, among others. However, a major problem in the long-term efficacy of paclitaxel and other chemotherapeutics is the development of drug resistance, related to worse survival rates. Many studies have indicated that chemoresistance is induced by the overexpression of a set of genes related to the apoptotic route. This is the main reason why the rationale for a combined therapy based on gene silencing is important [44]. Among these genes, survivin is attracting great attention as one of the most relevant. It is an inhibitor of apoptosis protein (IAP) involved in many cellular responses to stress, present in different subcellular compartments. Survivin is hardly detected in healthy adult cells, while it is overexpressed in fetal and tumor tissue [10,18]. Its relationship with the development of a wide variety of cancers, such as colon carcinomas, breast cancer, retinoblastoma, sarcomas and leukemias, has been clearly proven [9,10,36,38].
Survivin overexpression is associated not only with chemoresistance but also with radioresistance, tumor growth, migration, aggressiveness and unfavorable clinical outcomes; where DNA damage takes place, survivin expression is increased, thus resulting in a decrease in apoptosis [9,18,20,38]. Consequently, many strategies to downregulate its expression have appeared, and several studies demonstrated that the downregulation of survivin mRNA is associated with decreased tumor growth and sensitization to radiation and chemotherapeutic agents [42,45]. One of the most relevant and efficient ways to downregulate genes is the use of small interfering RNA (siRNA), a type of short double-stranded RNA that can specifically downregulate survivin expression [18,44]. Nevertheless, oligonucleotides such as siRNA still show an important bottleneck: their stability in biological media is compromised by the presence of nucleases, so a physical barrier between them and the biological media is required [46,47]. Therefore, it is clear that combining survivin inhibitors with paclitaxel would be a promising alternative, improved when using a nanomedicine strategy. Here, we propose this combination through the controlled delivery of both monotherapies, the paclitaxel drug plus survivin gene therapy, encapsulated in proprietary polymeric nanoparticles to achieve a synergistic effect killing cancer cells. Polymeric nanoparticles are used as the required technology to control the delivery of the active principles as well as to cross biological membranes [20]. Firstly, PTX was encapsulated in the P polymer (see structure in Figure S5A, SI) [16]. These nanoparticles were previously used in our group for the treatment of glioblastoma multiforme, in a study where, thanks to the addition of a targeting peptide to the polymer, the particles efficiently crossed the blood-brain barrier and achieved a reduction of tumor growth and an increase in animal survival [16]. Here, since we aim for intravesical administration, the addition of the peptide is not required for this local route, which is extremely advantageous in terms of therapeutic costs. These modified nanoparticles were synthesized, and their characterization confirmed that they were appropriate for the intended use (Table 1). Secondly, we synthesized poly(beta aminoester) nanoparticles for the encapsulation of siRNAs (see structure in Figure S5, SI). These are also proprietary polymers from our group, long studied for the encapsulation of nucleic acids by us [22-24,48] and others [49-51], due to their advantageous properties in terms of reduced toxicity, which enables the administration of higher doses and, consequently, enhanced efficacy in gene transfection. Although previous studies had already used pBAE nanoparticles for the encapsulation of siRNAs [15,23,30,52,53], and some encapsulated survivin siRNAs [54], two novelties stand as important here. On the one hand, a design of experiments (Figures 2 and 3) was used to select the methodological conditions for the formation of the nanoparticles; as far as we know, this is the first time that a rational method for the selection of these parameters was used to set up a formulation based on pBAEs, which is advantageous in terms of time saving and efficiency of design. On the other hand, the composition of the nanoparticles enables intravesical delivery [27,55].
To achieve this, after a first study setting up the composition of the particles (Figures 1-4), we selected the C32 pBAE backbone, including 50% arginines and 50% lysines as terminal oligopeptides, with a coating of the protein bromelain, which enables the crossing of mucosal barriers [27,55]. An important point to highlight is the high plasma membrane penetration in both cell lines tested, especially in RT4 cells, which grow forming clusters that were described as highly restrictive to transfection (Figure 5). When used as monotherapies, both treatments showed high efficacy as antitumor therapies, tested in two cellular models of bladder cancer, representative of the papillary carcinoma (RT4) and carcinoma in situ (T24) cancer subtypes. The expected effect of PTX was confirmed by these in vitro studies, which also served to establish the IC50 used for the combination therapy. In the case of the pBAE-NPs, when they encapsulated either of the anti-survivin siRNAs, a decrease in cell viability was also observed, especially with siRNA-1 (Figure 9). However, the levels of mortality achieved by survivin silencing were far from those achieved when using paclitaxel. Although one could think that it is not worthwhile to develop these kinds of gene therapies if they cannot surpass the effects of traditional chemotherapies, the high toxicity produced when chemotherapies such as paclitaxel are delivered must be taken into account. Therefore, despite their great efficacy in vitro, doses must be adjusted downwards to avoid these side effects. In contrast, since siRNAs directly target survivin expression and survivin is only overexpressed in tumor tissue, the activity will be more localized to tumors. Additionally, the encapsulation of both active ingredients in nanoparticles and the local delivery of the therapy will synergistically contribute to avoiding side effects. Unfortunately, although it seemed clear that the combined therapy would be better, the results were unexpected: the synergistic effect was not observed. As already commented in the Results section, this can be attributed to the subcellular mechanism of action of paclitaxel together with the survivin isoforms. We found that T24 showed a preferential survivin nuclear localization, usually attributed to the regulation of mitosis [18], while for RT4 it was more cytoplasmic (Figure 10). This cytoplasmic expression has been associated with advanced-stage tumors with chemoresistance [9]. This localization played an unexpected role in paclitaxel action. The drug, killing only dividing cells, was not able to synergize with the survivin silencing therapy in T24 cells, since the downregulation of survivin produced a cell cycle arrest only in T24 cells (Figure 10). In RT4 cells, the synergistic effect of the dual therapy should have been produced, as was found for other types of tumor cells with cytoplasmic expression of survivin [9]. Consequently, another factor must play a role. The lack of a synergistic effect could be attributed to the initially higher survivin expression in RT4 cells as compared to T24 and to the low doses of survivin siRNAs used. Therefore, although paclitaxel killed the cells, survivin inhibition was not enough to achieve the expected synergistic effect. In future studies, higher doses of survivin siRNA could be tested, if they are not toxic, to check whether the envisaged synergistic effect is observed, at least in cells with a preferential expression of survivin in the cytoplasm.
Accordingly, a modification of the established dosing pattern could also be envisaged: while here we administered the silencing therapy first to sensitize tumor cells, in previous reports the inverse pattern was followed [10]. Another alternative that could be feasible for future studies would be the silencing of other proteins involved in cell growth and apoptosis control, such as the mammalian target of rapamycin (mTOR) or Wnt proteins, both inducers of survivin overexpression [18]. This is proof of the importance of specific cell mechanisms when setting up new combination therapies for cancer, currently a widespread practice. For this reason, we recommend performing the efficacy studies of any combination therapy using more than one cell line, together with basic-science studies of the subcellular mechanisms involved, since, as we demonstrated here, even when using similar cell lines, the results of the same therapy can be very different.
Conclusions
In summary, we designed a dual antitumor therapy for bladder cancer, based on paclitaxel encapsulated in P-polymer nanoparticles and survivin siRNA loaded into poly(beta aminoester) nanoparticles. Our results suggest that both nanoparticles were efficiently formed, with properties that could enable their intravesical administration, thus reducing invasiveness and avoiding the non-selective intravenous route. Each monotherapy was able to significantly reduce the growth of two bladder tumor cell lines in vitro. Nevertheless, the combination therapy did not enhance the tumor cell killing. This unexpected effect was further studied and attributed to the presence of nuclear vs. cytoplasmic survivin isoforms, combined with the cell cycle arrest produced by nuclear survivin, which is incompatible with the paclitaxel mode of action. Therefore, we could not confirm our hypothesis: survivin silencing does not sensitize bladder cell lines to paclitaxel.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/pharmaceutics13111959/s1. Table S1. DoE planning: set-up of the experiments performed combining the selected factors (7) and giving 2 levels to each of them; Table S2. DoE results: initial (0 min) and final (60 min) size and PDI of the resulting nanoparticles in all 8 experiments of the design study; Figure S1. Cell micrographs: A, RT4 and B, T24 cell lines; C, schematic representation of the bladder tumor types; Figure S2. A, Schematic representation of the survivin pre-mRNA and location of each target sequence in survivin mRNA (GenBank accession no. NM_001168.1); B, nucleotide sequences of the siRNAs (Paduano F. 2006); Figure S3. T24 cell cycle interference of anti-survivin siRNAs: study over time of the cell cycle phases as a function of the treatment administered to the cells; Figure S4. RT4 cell cycle interference of anti-survivin siRNAs: study over time of the cell cycle phases as a function of the treatment administered to the cells; Figure S5. Polymers used to engineer both monotherapies: A, P polymer structure; B, pBAE structure. The P polymer was used for the encapsulation of PTX, while the pBAE was used for survivin siRNA encapsulation. Both polymers are proprietary to our research groups. In both cases, although not mentioned here, we have a library of polymers with slight modifications to give added functionalities to the resulting nanoparticles, such as hydrophobicity, active targeting and stability.
11,501
2021-11-01T00:00:00.000
[ "Medicine", "Engineering", "Materials Science" ]
Lane Line Detection System Based on Improved Yolo V3 Algorithm
Aiming at the problems of low accuracy and poor real-time performance of the Yolo v3 algorithm in lane detection, a lane detection system based on an improved Yolo v3 algorithm is proposed. Firstly, according to the inconsistent vertical and horizontal distribution density of lane line pictures, the pictures are divided into S * 2S grids. Secondly, the detection scale is extended to four detection scales, which is more suitable for small-target detection such as lane lines. Thirdly, the convolution layers in the original Yolo v3 algorithm are reduced from 53 to 49 to simplify the network. Finally, parameters such as the cluster center distance and the loss function are improved. The experimental results show that, when the improved detection algorithm is used for lane line detection, the average detection accuracy (mAP) is 92.03% and the processing speed is 48 fps. Compared with the original Yolo v3 algorithm, detection accuracy and real-time performance are significantly improved.
1. Introduction
In recent years, with the rapid development of China's automobile industry and the continuous progress of artificial intelligence technology, by 2020 vehicles with assisted-driving and automatic-driving functions accounted for more than 50% of the total number of vehicles on the market [1]. The development of the intelligent automobile industry can not only effectively increase the safety of driving, but also alleviate problems such as ecological damage caused by traffic congestion. In the field of intelligent vehicles, the automatic detection of lane markings on the road is the technical basis for vehicle-assisted driving and an important part of the perception module in driverless vehicles [2]. At present, the main lane detection methods at home and abroad include: extracting road features by machine vision, establishing a road model for detection, and multi-sensor fusion detection [3]. The method of extracting road features by machine vision mainly uses machine vision technology to classify the gray-value and color features of lane lines; after learning, the lane lines on the road can be detected. Because the gray value, color and other features in the image are often affected by external conditions such as light intensity and shadow, this method of lane line detection is easily disturbed by environmental changes. The method of establishing a road model for detection is to first build a two-dimensional or three-dimensional image model of the road, and then compare the image in the established model with the road in the photo to be detected, so as to detect the lane line. The application scope of this detection method is relatively small, as it is only applicable to roads matching known templates; in addition, the algorithm requires a large amount of computation during detection, with problems such as poor real-time performance. The method of multi-sensor fusion detection is to detect the lane line by means of high-definition cameras, UAV aerial photography, GPS, radar and other fused sources [4]. This method has a high cost and is difficult to apply in large-scale practical lane line detection systems [5].
In view of the above characteristics, advantages and disadvantages of lane line detection technology at home and abroad, this paper proposes a lane line detection system based on an improved Yolo v3. The system is mainly composed of a data preparation module, a learning and training module and a lane line detection module. After testing, and compared with traditional machine learning algorithms such as R-CNN and R-FCN, this detection method shows better robustness, and its operation time and other aspects are significantly improved.
Algorithm principle
Yolo is a real-time target detection algorithm proposed by Redmon in 2016. The detection speed of this algorithm can reach 45 frames per second, and its mAP value is more than twice that of other real-time detection algorithms. The algorithm is widely used for real-time object detection in cutting-edge technologies such as autonomous driving [6]. Yolo v1 treats detection as a regression problem. First, the input image is resized to a uniform 448 * 448 pixels, then convolved, and finally the target is detected by the fully connected layer. The algorithm divides the input image into S * S grids; if the center point of the detected target falls into a grid cell, that cell is responsible for detecting the target. The parameters to be predicted for each grid cell include the predicted values of B bounding boxes, the confidence Conf of each bounding box, and the probability Pr(class i | object) of C conditional categories. Formula 1 gives the number of parameters Num to be predicted per image [10].
Num = S*S*(B*(4+1)+C) (1)
Each bounding box includes four predicted values: x, y, w and h, where (x, y) are the coordinates of the center point of the bounding box, and w and h are its width and height. The position and size of the bounding box can be determined from these four values. The confidence Conf of each bounding box reflects whether the target is included in the bounding box and the accuracy of its prediction; its definition is shown in formula 2.
Conf = Pr(Object) × IoU(Pred, Truth) (2)
In formula 2, IoU(Pred, Truth) refers to the matching degree between the prediction box and the real bounding box. Its definition is shown in formula 3, where area(Boxt ∩ Boxp) is the area of the intersection of the prediction frame and the real frame, and area(Boxt ∪ Boxp) is the area of their union.
IoU(Pred, Truth) = area(Boxt ∩ Boxp) / area(Boxt ∪ Boxp) (3)
When the prediction frame completely coincides with the real frame, IoU(Pred, Truth) = 1; when the two do not overlap at all, IoU(Pred, Truth) = 0. Pr(Object) refers to the probability that the bounding box contains the target: when the target is included, Pr(Object) = 1; when it is not, Pr(Object) = 0. The conditional class probability Pr(class i | object) is defined on the condition that the grid cell contains a detection target, and each grid cell only predicts the probability of a specific category of target. Its definition is shown in formula 4 [12].
Pr(class i | object) = Pr(class i) / Pr(object) (4)
In formula 4, Pr(class i) is the probability that the detection target belongs to a specific class, and Pr(object) is the probability that the detection target is included in the bounding box.
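To make formulas 2 and 3 concrete, the following minimal Python sketch (our own illustration, not code from the paper) computes the IoU between a predicted and a ground-truth box and the resulting confidence:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def confidence(pr_object, pred_box, truth_box):
    """Formula 2: Conf = Pr(Object) * IoU(Pred, Truth)."""
    return pr_object * iou(pred_box, truth_box)

# Example: a predicted lane-line box vs. a ground-truth box (made-up coordinates).
print(confidence(1.0, (50, 100, 80, 300), (55, 110, 78, 290)))
```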
Yolo v2 is a faster real-time target detection algorithm launched at the end of 2016 on the basis of Yolo v1. After testing, the mAP value of Yolo v2 is 76.8% at a processing speed of 67 frames per second and 78.6% at a processing speed of 40 frames per second [13]. Compared with Yolo v1, the main change of Yolo v2 is to reduce the image input into the network: the original 448 * 448 pixel input image is adjusted to 416 * 416 pixels. In addition, the basic network layer is also adjusted. Yolo v2 uses darknet-19 as the classification network; the darknet-19 network is composed of 19 convolution layers and 5 max-pooling layers. After testing, and compared with the Darknet network adopted by Yolo v1, the target classification accuracy (mAP) of darknet-19 is improved by about 3.2% [15]. Yolo v3 (You Only Look Once v3) is a single-stage target detection algorithm proposed in early 2018. It not only maintains the operation speed of the algorithm, but also improves the prediction accuracy, and it is a popular real-time target detection algorithm at present. The network structure model of Yolo v3 is shown in Figure 1 [18].
Figure 1. Network structure model of Yolo v3.
Compared with the previous versions of Yolo, the main differences of Yolo v3 include: using the darknet-53 basic network model (106 layers in total, including 53 convolution layers); using independent logistic classifiers instead of the previous softmax function; and using a method similar to FPN for multi-scale feature prediction, whereas the previous version, Yolo v2, could only apply a 1×1 detection kernel on a single feature map. The most prominent feature of Yolo v3 is that it can detect targets at three sizes: when the feature map size is 13 * 13 it is used to detect larger targets, when the feature map size is 26 * 26 it is used to detect medium-sized targets, and when the feature map size is 52 * 52 it is used to detect smaller targets. Therefore, it provides the ability to adapt the detection network to the size of the target [19]. The detection strides of the three feature maps with different sizes in Yolo v3 are 32, 16 and 8, respectively. For input pictures of 416 * 416 pixels, the first detection layer is located at the 82nd layer of the darknet-53 network; because its detection stride is 32, the feature map of this layer has a resolution of 13 * 13. The second detection layer is located at the 94th layer of the darknet-53 network; because its detection stride is 16, the feature map of this layer has a resolution of 26 * 26. The third detection layer is located at layer 106 of the darknet-53 network; because its detection stride is 8, the feature map of this layer has a resolution of 52 * 52 [22].
Algorithm related parameters
The loss function is an important parameter in the Yolo v3 algorithm, which is used to indicate the inconsistency between the predicted value and the real value in the model. The training of the model aims to reduce the value of the loss function; the smaller its value, the higher the robustness of the model. Commonly used loss functions include the mean square error loss function, the cross-entropy loss function, etc. The definition of the loss function L in the Yolo v3 algorithm is shown in formula 5; it is composed of three parts: the bounding box coordinate prediction error L_coord, the bounding box confidence error L_conf and the target classification prediction error L_class.
L = L_coord + L_conf + L_class (5)
The bounding box coordinate prediction error L_coord represents the error between the predicted and real bounding box positions; its definition is shown in formula 6. λ_coord is the weight coefficient of the coordinate error, S² indicates that there are S² grid cells in the picture, and B is the number of predicted bounding boxes per grid cell. I_ij^obj indicates whether the j-th prediction box of the i-th grid cell contains a target: if a target is included, I_ij^obj = 1, otherwise I_ij^obj = 0. (x_i, y_i, w_i, h_i) are the abscissa and ordinate of the center point and the width and height of the real bounding box in the i-th grid cell, and (x̂_i, ŷ_i, ŵ_i, ĥ_i) are the corresponding values of the predicted bounding box [25].
L_coord = λ_coord Σ_{i=0..S²} Σ_{j=0..B} I_ij^obj [(x_i − x̂_i)² + (y_i − ŷ_i)² + (w_i − ŵ_i)² + (h_i − ĥ_i)²] (6)
The definition of the confidence error L_conf is shown in formula 7. λ_noobj is the weight penalty coefficient, and C_i and Ĉ_i respectively represent the real and predicted confidence; the meaning of the other parameters is the same as in formula 6.
L_conf = − Σ_{i=0..S²} Σ_{j=0..B} I_ij^obj [C_i ln Ĉ_i + (1 − C_i) ln(1 − Ĉ_i)] − λ_noobj Σ_{i=0..S²} Σ_{j=0..B} I_ij^noobj [C_i ln Ĉ_i + (1 − C_i) ln(1 − Ĉ_i)] (7)
The definition of the target classification prediction error L_class is shown in formula 8. There are C types of targets to be detected, with c = 1, 2, 3, ..., C; p_i(c) and p̂_i(c) respectively represent the real and predicted probabilities of a target of type c in the i-th grid cell, and the meanings of the other parameters are the same as in formula 6 [26].
L_class = − Σ_{i=0..S²} I_i^obj Σ_{c=1..C} [p_i(c) ln p̂_i(c) + (1 − p_i(c)) ln(1 − p̂_i(c))] (8)
Improved Yolo V3 algorithm
3.1 Improvement of network model
Firstly, in the conventional Yolo v3 algorithm the network model is darknet-53 and the input image is divided into S * S grids. When the Yolo v3 algorithm is used for lane line detection, the shape of a lane line in an image is short in the transverse direction and long in the longitudinal direction. Therefore, in order to increase the grid detection density of the image in the longitudinal direction, the algorithm is improved to divide the image into S * 2S grids; this network structure is more suitable for a lane line detection system. Secondly, aiming at the problem that lane line targets have different sizes under different road conditions, which results in a poor detection effect, the three detection scales of the Yolo v3 algorithm are extended, combining the idea of the FPN algorithm, to four detection scales: 13 * 13, 26 * 26, 52 * 52 and 104 * 104. After the improvement, each detection scale not only obtains the details of the bottom layers of the network, but can also be used alone for prediction; the improved detection scales are more suitable for a lane line detection system than the original algorithm. Finally, the Yolo v3 algorithm uses the darknet-53 network as its main framework. When applied to lane line detection, the problem is that the network has a large receptive field but insufficient spatial resolution, which makes it easy to miss targets when the lane line is not clear. To solve this problem, the convolution layers in the Yolo v3 algorithm are reduced from 53 to 49 (named darknet-49), simplifying the network structure and reducing the loss of low-dimensional information. The network performs four upsampling operations to obtain feature maps at four scales, which improves the detection performance. The improved network structure is shown in Figure 2.
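To illustrate the two structural changes just described (the S * 2S grid and the four detection scales), the following minimal Python sketch, which is our own illustration and not the paper's code, computes the feature-map sizes for a 416 * 416 input and shows which grid cell would be responsible for a given lane-line point:

```python
# Sketch of the improved grid/scale arithmetic for a 416 x 416 input.
INPUT_SIZE = 416
STRIDES = (32, 16, 8, 4)          # four detection scales -> 13x13, 26x26, 52x52, 104x104

def feature_map_sizes(input_size=INPUT_SIZE, strides=STRIDES):
    return [(input_size // s, input_size // s) for s in strides]

def responsible_cell(cx, cy, s=13, input_size=INPUT_SIZE):
    """Return the (row, col) of the S x 2S cell containing a target centre (cx, cy) in pixels."""
    cell_w = input_size / s          # S columns horizontally
    cell_h = input_size / (2 * s)    # 2S rows vertically (denser in the longitudinal direction)
    return int(cy // cell_h), int(cx // cell_w)

print(feature_map_sizes())                   # [(13, 13), (26, 26), (52, 52), (104, 104)]
print(responsible_cell(cx=200.0, cy=310.0))  # which cell is responsible for this lane-line point
```

Doubling the number of rows only changes the indexing of the responsible cell, which is why the modification suits elongated, nearly vertical targets such as lane lines.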
3.2 Improvement of algorithm parameters
3.2.1 Cluster center distance parameter
The anchor parameter refers to a prediction frame with a fixed size. In the process of target detection, the size of the prediction frame directly affects the accuracy and speed of the algorithm, so it is a very important parameter. In the original Yolo v3 algorithm, the K-means clustering algorithm is used to obtain the anchor parameters. In this method, the different feature attributes in the cluster center distance formula are treated with equal weight, so the differing influence of each attribute is not considered; in addition, if there are noise points or isolated points in the sample, these points have a great impact on the calculation results and cause serious deviation. In the lane line detection system, in order to solve this problem of determining the cluster center distance parameter, a method is proposed to replace the Euclidean distance in the original algorithm with a distance based on the intersection-over-union ratio between the sample frame and the cluster center. Its definition is shown in formula 9, where box is the sample prediction frame, cen is the cluster center, and IoU is the intersection-over-union ratio between them, computed as in formula 3.
d(box, cen) = 1 − IoU(box, cen) (9)
The comparison of the actual detection effect between the improved method and the original Yolo v3 algorithm is shown in Table 1: the average accuracy and average speed of the improved method are higher than those of the original Yolo v3 algorithm. This shows that the improved cluster center distance is more suitable for the detection of small targets such as road lane lines.
Table 1. Comparison of detection results of cluster center parameters by different methods.
Loss function
The loss function is used to estimate the inconsistency between the predicted value and the real value in the model; it is the basis for the network's evaluation of the false detection rate and reflects the convergence performance of the network model. The loss function in the Yolo v3 algorithm consists of three parts, as shown in formula 5, in which the bounding box coordinate prediction error adopts the mean square error form, while the bounding box confidence error and the target classification prediction error adopt the cross-entropy loss form. When calculating the loss function in the Yolo v3 algorithm, the bounding box coordinate prediction error, the bounding box confidence error and the target classification prediction error are activated by the logistic function. The function expression is shown in formula 10 and its derivative in formula 11.
f(x) = 1 / (1 + e^(−x)) (10)
f'(x) = f(x)(1 − f(x)) (11)
The characteristics of the logistic function are that it is monotonic and continuous and its value is bounded, so the data will not diverge during network propagation; the characteristic of its derivative is that its value is less than 1. When the input value of the network is large, the derivative value becomes very small, so the error of the calculated loss function becomes larger and the convergence of the network model becomes worse. In order to solve these problems, the focal loss function is proposed to replace the logistic function in the original algorithm. The expression of the focal loss function is shown in formula 12, where p is the output value of the logistic function, (1 − p_t)^γ is the adjustment factor, α is the weight coefficient for the target category (0 <= α <= 1), γ is the focusing coefficient (γ >= 0), and y is the label value of the target.
FL(p) = −α (1 − p_t)^γ log(p_t), with p_t = p when y = 1 and p_t = 1 − p otherwise (12)
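As an illustration only, and not the paper's implementation, a binary focal loss of the form given in formula 12 can be written as:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss, formula 12: FL = -alpha * (1 - p_t)**gamma * log(p_t)."""
    p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
    p_t = p if y == 1 else 1.0 - p    # probability assigned to the true label
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy example (p_t close to 1) is strongly down-weighted compared with a hard one.
print(focal_loss(0.9, 1), focal_loss(0.1, 1))
```

The (1 − p_t)^γ factor suppresses the contribution of well-classified samples, so hard, small targets such as faint lane lines dominate the gradient during training.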
The bounding box coordinate prediction error, bounding box confidence error and target classification prediction error of the improved loss function are shown in formulas 13, 14 and 15, respectively. After adopting the improved adjustment factor, the receptive range for small targets such as lane lines is effectively increased and the detection accuracy is improved.
Lane detection system design
Based on the improved Yolo v3 network structure, the lane detection system is mainly composed of a data preparation module, a learning and training module and a lane line detection module. The implementation process is shown in Figure 3. The data preparation module mainly completes the collection, marking, screening and preprocessing of lane line samples. The samples are collected by installing a high-definition camera on a car and taking lane line photos along the way. Data marking mainly refers to marking the lane lines in the pictures, including white solid lines, white dotted lines, yellow solid lines, yellow dotted lines and other lane lines. The main work in the screening and preprocessing stage is to select high-quality pictures from the marked pictures and preprocess them into the format required by the Yolo v3 neural network. The system uses nearest neighbor down-sampling as the main picture preprocessing method; the flow chart of the nearest neighbor down-sampling algorithm is shown in Figure 4.
Experimental environment
When the system was tested, in order to improve the computing speed and reduce the running time, the following computer configuration was adopted: the CPU is an Intel i7-10700F, the graphics card is an NVIDIA GTX 1060, the memory size is 16 GB, the operating system is Ubuntu, and the programming language is Python.
Experimental data set
In target detection, the selection of the experimental data set and the feature annotation are very important, and their accuracy directly affects the effect and speed of training. During the test, a total of 500 photos containing road lanes in cities were selected and grouped into 400 learning samples and 100 test samples. After the learning samples are preprocessed by the nearest neighbor down-sampling method, the lane lines in the pictures are marked. The marking method is to annotate the learning sample pictures manually using the professional data set annotation software labelme and then save them as JSON format files.
Experimental results
The marked lane line pictures are fed into the improved Yolo v3 network for training. In the training stage, the size of the pictures used is 416 * 416 pixels. During the training process, several important index parameters of the algorithm are dynamically recorded. The evolution of the average loss function L is shown in Figure 5. It can be seen from Figure 5 that at the beginning of training the loss function value is about 1.8; as the number of training iterations increases, the loss value converges downward, and when the number of iterations reaches about 20,000 the loss value is around 0.1, achieving the expected effect.
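For reference, the nearest neighbor down-sampling used in the data preparation module above can be sketched as follows. This is our own minimal illustration (assuming an image stored as a nested list of pixel rows), not the implementation used by the system:

```python
def nearest_neighbor_downsample(image, out_h, out_w):
    """Down-sample a 2-D image (list of pixel rows) by nearest-neighbour selection."""
    in_h, in_w = len(image), len(image[0])
    row_scale, col_scale = in_h / out_h, in_w / out_w
    return [
        [image[int(r * row_scale)][int(c * col_scale)] for c in range(out_w)]
        for r in range(out_h)
    ]

# Example: shrink a toy 4x4 "image" to 2x2 by picking the nearest source pixel.
toy = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(nearest_neighbor_downsample(toy, 2, 2))   # [[1, 3], [9, 11]]
```

Each output pixel simply copies the nearest source pixel, which keeps sharp lane-line edges at a low computational cost.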
Performance comparison of different algorithms
The lane line detection system using the above improved Yolo v3 model performs well in detection accuracy, mAP value, detection time, missed detection rate and so on. The performance parameters of the algorithm are compared with those of the traditional Yolo v3, R-CNN and Fast R-CNN, as shown in Table 2. It can be seen from Table 2 that the lane line detection system based on the improved Yolo v3 model outperforms the other three algorithms in all aspects.
Conclusion
Aiming at the problem that traditional lane detection algorithms cannot achieve both detection accuracy and detection speed, a detection system based on an improved Yolo v3 algorithm is proposed in this paper. The main improvements include the following aspects:
1. According to the inconsistent vertical and horizontal distribution density of lane line images, it is proposed to divide the images into S * 2S grids to improve the vertical detection density.
2. The detection scale is extended to four detection scales: 13 * 13, 26 * 26, 52 * 52 and 104 * 104, which is more suitable for the detection of small targets such as lane lines.
3. The convolution layers in the original Yolo v3 algorithm are reduced from 53 to 49, which simplifies the network and improves the system performance.
4. Parameters such as the cluster center distance and the loss function are improved, making them more suitable for the lane line detection environment.
The actual test shows that the improved algorithm has good detection performance on flat roads, but when roads have large slopes the detection is easily affected. Therefore, solving the problem of lane line detection in large-slope scenes will be the focus of future research.
5,087.6
2021-10-13T00:00:00.000
[ "Computer Science" ]
Cytotoxicity evaluation of dental and orthodontic light‐cured composite resins
Abstract
Introduction: The aim of this study was to determine the cytotoxicity of light‐cured composite resins (Clearfil ES‐2, Clearfil ES Flow, Filtek Supreme XTE, Grengloo, Blugloo, Transbond XT, and Transbond LR), then to assess leachable components in contact with human gingival fibroblasts (GFs) and to quantify detected bisphenol A (BPA). Methods: Light‐cured composite resin discs were immersed for 24 hours in gingival fibroblastic medium (n = 3 for each product) and in control medium (n = 2 for each product) contained in a plate. Cytotoxicity of the products (n = 95) was determined by measuring cell viability with the MTT assay after reading the optical densities of the plates. The analysis of leachable components was done by gas phase chromatography and mass spectrometry (GC–MS) and detected BPA was quantified. The limit of quantification was 0.01 μg/mL. Statistical analyses were performed using IBM SPSS Statistics 20, and Kruskal–Wallis and Mann–Whitney U‐tests were applied. Results: Cell viabilities were between 85 and 90%. Many chemical compounds, including triethylene glycol dimethacrylate (TEGDMA) and BPA, were identified. The average BPA concentrations were 0.67 μg/mL ± 0.84 in the control medium and 0.73 μg/mL ± 1.05 in the fibroblastic medium. Filtek Supreme XTE presented the highest concentration of BPA, with 2.16 μg/mL ± 0.65, and Clearfil ES Flow presented the lowest, with 0.25 μg/mL ± 0.35. No BPA was detected with Transbond XT and Transbond LR. Clearfil ES Flow, Filtek Supreme XTE, Grengloo and Transbond LR presented residual TEGDMA. Conclusions: Light‐cured composite resins are slightly cytotoxic towards GFs and release many components, including BPA and TEGDMA. Clinical precautions should be taken to decrease the release of these monomers.
INTRODUCTION
Bisphenol A (BPA) based dental resins are commonly used in preventive and reparative dentistry and in orthodontics. The composite resins used in dentistry are complex polymers containing a variety of monomers, initiators, activators, stabilizers, plasticizers and other additives. Two monomers are mainly used: bisphenol A diglycidyl dimethacrylate (Bis-GMA) and triethylene glycol dimethacrylate (TEGDMA) (Bationo et al., 2019). BPA is never present in a pure state; it is used as a raw material for the formulation of Bis-GMA (Perez-Mondragon et al., 2020). In recent years, the increasing presence of polymers in the oral cavity has raised questions about the safety of resin matrix components. Despite their increasing popularity, it is worrying that composite resins can be toxic owing to the components they can release (Reichl et al., 2008). The toxicity of resin-based materials is due to residual monomers as well as to degradation products linked to the activity of salivary esterases. Elution of BPA may result from impurities left after synthesis of the resin, due to incomplete polymerization or degradation of the resin (Schmalz, Preiss, & Arenholt-Bindslev, 1999; Van Landuyt et al., 2011). The literature confirms the toxicity of BPA: BPA is an endocrine disruptor with potential toxicity in vitro (Wetherill et al., 2007) and in vivo (Richter et al., 2007). Composite resins are extensively used as restorative materials because of esthetic demands and concerns over the adverse effects of mercury from amalgam (Bakopoulou, Triviai, Tsiftsoglou, & Garefis, 2006).
In orthodontics, composite resins are the materials of choice to bond orthodontic accessories to dental enamel (Bishara et al., 2007; Paschos et al., 2009). Regarding dental treatments, it is advantageous to maintain maximal tissue vitality and cytotoxic reactions must be prevented, which requires dental compounds to be screened before they are used clinically (Murray, Godoy, & Godoy, 2007). Dental and orthodontic light-cured composite resins whose composition includes BPA derivatives and TEGDMA were studied. The aim of this study was to determine the cytotoxicity of light-cured composite resins in contact with human gingival fibroblasts (GFs), to assess leachable components and to quantify detected BPA in the fibroblastic medium and the control medium.
Preparation of resin discs
Discs 10 mm in diameter and 1 mm thick were prepared from dental and orthodontic light-cured composite resins (Table 1). The discs were cured at the top surface for 20 seconds using a BA Optima 10 LED curing light (light intensity 1,000-1,200 mW/cm2 and wavelength 420-480 nm).
Cytotoxicity testing
Fibroblasts were cultured from gum operative waste obtained after dental extraction in a patient who signed a consent form. The explants are rinsed with phosphate-buffered saline and then laid on their connective side in a 100 mm2 Petri dish. The dish is kept semi-open under laminar flow for 30 minutes so that the explants can adhere to the culture surface. RPMI 1640 culture medium is instilled onto the explants to prevent them from drying out and to maintain cell viability. About 10 mL of RPMI 1640 culture medium supplemented with 10% fetal bovine serum, 1% Penicillin (10,000 U/mL)/Streptomycin (10 mg/mL) and 1% L-Glutamine (200 mM) is poured into the dish, which is then stored in a humidified incubator at 37 °C under 5% CO2 in air. The culture medium is replaced every two days until confluence. Each resin disc was immersed in 400 μL (ISO 10993-12:2012 [ISO, 2012] standard for medical-device testing in biologic systems) of fibroblastic medium (n = 3 for each product) or control medium (n = 2 for each product) contained in a 12-well plate and kept for 24 hours in a humidified incubator at 37 °C under 5% CO2 in air. 100 μL of fibroblastic medium (n = 95) were then transferred onto 96-well plates. About 10 μL (5 mg/mL) of MTT solution were added to each well and the plates were incubated for 3 hours. The MTT solution was then removed and 100 μL of dimethyl sulphoxide were added to each well. The plates were shaken and the optical densities (OD) were measured in a plate reader (EPOCH) at a wavelength of 570 nm. The fibroblastic medium was used as cell control (n = 7). Cell viability was calculated using the formula (Vande Vannet, Mohebbian, & Wehrbein, 2006): Cell viability = (OD test group / OD cell control) × 100.
Gas phase chromatography and mass spectrometry analysis
The eluates of the incubation solutions (fibroblastic medium and control medium) were extracted by solid phase extraction (NH2 cartridge) and then analyzed by gas phase chromatography and mass spectrometry (GC-MS) (Agilent 6890 Series - Agilent 7673). The control medium was pure cell culture medium; samples immersed in this medium were used as the control group. A capillary column 30 m in length, with an internal diameter of 320 μm and a film thickness of 0.25 μm, was used with helium carrier gas at a flow rate of 1.2 mL per minute.
The column temperature program was set as follows: initially, 80 C for 1 minute, increasing to 150 C at a rate of 20 C per minute and then increasing to 280 C for 2 minutes at a rate of 10 C per minute. The injector temperature was 280 C and the transfer line was 280 C. Data were acquired by scan mode and selected ion monitoring (SIM) mode and were processed with MSD ChemStation software. BPA calibration curve and response factor were computed with reference BPA and caffeine as internal standard. Linear correlation with efficiency of 0.996 was obtained between BPA amount and corresponding peak area. BPA was quantified after his identification. The limit of quantification was 0.01 μg/mL. lowest OD was observed with Blugloo (0.543). There was significant difference between the studied groups (p < .05). | GC-MS analysis Many chemical compounds were identified in analyzed resin materials by GC-MS (Table 3) including BPA and TEGDMA. XT in different studies. Two research groups found it to be noncytotoxic (Ahrari, Tavakkol Afshari, Poosti, & Brook, 2010;Angiero et al., 2009) and one group mentioned it as less cytotoxic compared to the dual cured orthodontic adhesives (Jagdish et al., 2009). GFs are used for cytotoxicity testing because they are close to dental restorative materials in oral cavity and are more clinically relevant. GFs are also sensitive cells that can be easily isolated and grown in normal culture medium (Hensten-Pettersen & Helgeland, 1981). It has been shown that monomer release from composite resins is completed within 24 hours whence the 24-hours exposure time of GF to composite resins (Ferracane & Condon, 1990). Therefore, most of toxic effects of composite resins occur within first 24 hours. The polymerization and conversion of monomer in organic matrix of resin-based adhesive materials is rarely complete and this seems to be responsible for most of reported adverse effects such as cytotoxicity, allergy and inflammatory potential (Borelli et al., 2017;Goldberg, 2008). Several studies show that dental adhesives are cytotoxic for GF (Huang, Tsai, Chen, & Kao, 2002;Szep, Kunkel, Ronge, & Heidemann, 2002). It is mainly the residual adhesive monomers that cause gingival inflammation and irritation (Gioka et al., 2005). To avoid undesirable side effects of adhesives on gingival tissue, it is preferable to polymerize it quickly after it has been applied ( van Gastel, Quirynen, Teughels, Coucke, & Carels, 2007). Malkoc, Corekci, Ulker, Yalcin, and Sengun (2010) found significant similarities in resin matrix of five different photopolymerizable orthodontic composites when assessing the ingredient of tested materials. However, Transbond XT contained Bis-EMA. Bis-EMA monomer showed analogous cytotoxic effect to that of TEGDMA (Geurtsen et al., 1998). Transbond XT cytotoxicity could therefore be explained by the presence of Bis-EMA in its matrix. The mechanism of cytotoxicity induced by TEGDMA on human fibroblasts has been studied by Stanislawski et al. (2003). TEGDMA and HEMA monomers are cytotoxic towards gingival cells and are probably responsible for allergies to these materials. In addition, unbound monomers promote bacterial growth particularly the microorganisms involved in dental caries formation (Goldberg, 2008). The addition of hydrogen peroxide (whitening component) even at low doses, potentiates toxicity of TEGDMA and UDMA opposite human gingival and pulp fibroblasts but remains without effect on Bis-GMA and HEMA (Reichl et al., 2008). 
Even completely light-cured, HEMA is not fully linked; a part could be released and therefore an allergic reaction is possible. HEMA has been shown to be able to pass through the dentin tubules and end up in pulp tissue. Several cases of allergic reactions are reported in literature (Bryant, 2016). TEGDMA, Bis-GMA and UDMA have toxic effects on GF and HaCaT cells. Bis-GMA is the most toxic and UDMA the least toxic (Moharamzadeh, Van Noort, Brook, & Scutt, 2007). The potential toxicity of various associated monomers seems to be greater than toxicity of each monomer studied individually. Cytotoxicity on GF is not the same depending on monomer and can be hierarchized as follows: HEMA<TEGDMA<UDMA<Bis-GMA (Reichl et al., 2006). BPA elution could result from impurities left after resin synthesis, first due to incomplete polymerization, and later due to degradation of resins (Schmalz et al., 1999;Van Landuyt et al., 2011). Studies on BPA have focused on hormonal activity (Chao et al., 2012). BPA concentrations >0.01 mmol/L are noted to have effect on estrogen (Kita et al., 2009). Komurcuoglu, Olmez, and Vural (2005) and Polydorou, Trittler, and Hellwig (2007) Blue test and evaluated released monomer both before and after resin polymerization using HPLC. The results showed the role of polymerized resin in determining cytotoxic effect of orthodontic resins and suggested that differences in chemical composition of resin matrix appeared to be much more related to decrease in cell viability than amount of monomer released from orthodontic resins. | CONCLUSION Light-cured composite resins are slightly cytotoxic opposite GFs and release many components including BPA and TEGDMA. Clinical precautions should be taken to decrease the release of these monomers. Resin-based materials that are used in dentistry should be harmless to oral tissues, so they should not contain any leachable toxic and diffusible substances that can cause some side effects.
Extracting high-order cosmological information in galaxy surveys with power spectra The reconstruction method was proposed more than a decade ago to boost the signal of baryonic acoustic oscillations measured in galaxy redshift surveys, which is one of key probes for dark energy. After moving the observed overdensities in galaxy surveys back to their initial position, the reconstructed density field is closer to a linear Gaussian field, with higher-order information moved back into the power spectrum. We find that by jointly analysing power spectra measured from the pre- and post-reconstructed galaxy samples, higher-order information beyond the $2$-point power spectrum can be efficiently extracted, which generally yields an information gain upon the analysis using the pre- or post-reconstructed galaxy sample alone. This opens a window to easily use higher-order information when constraining cosmological models. (P pre ) because the reconstruction restores the linear signal reduced by the non-linear evolution. The higher order statistics such as B, the bispectrum, induced by the non-linear evolution are, in turn, reduced. Thus we can extract more cosmological information encoded in the linear density field from P post than P pre . On the other hand, the non-linear field contains information on small-scale clustering such as galaxy biases, which can provide better constraints on cosmological parameters by breaking the degeneracy between them. For the pre-reconstructed sample, this information can be extracted by combining the power spectrum with higher-order statistics. However, the power spectrum and higher order statistics such as the bispectrum are correlated, reducing our ability to estimate cosmological parameters. If we instead consider the post-reconstructed power spectrum, the covariance between the power spectrum and the bispectrum is reduced, and we can extract the information more efficiently. In this work, we show that the same improvement can be achieved by a joint analysis of P pre , P post and P cross (the cross-power spectrum between the pre-and post-reconstructed density fields). Due to the restored linear signal in the reconstructed density field, P post is decorrelated with P pre on small scales, which are dominated by the non-linear information. On these scales, the combination of P pre , P post and P cross has a similar ability to extract cosmological information as the combination of P post or P pre with the bispectrum because we are able to use the linear information in P post and higher-order information in P pre separately. Let us rewrite the non-linear over-density field as δ = R + ∆, where R is the over-density field after reconstruction, which is closer to the linear field. It is then straightforward to express P pre , P cross in terms of P RR (= P post ), P ∆∆ (the power spectrum of ∆) and P ∆R (the cross-power spectrum between ∆ and R). Using perturbation theory 14 , we can show that, at the leading order, P ∆∆ contains the integrated contribution from the bispectrum of squeezed-limit triangles while P R∆ contains the integrated contribution from the trispectrum (T) of folded/squeeze-limit quadrilaterals (those quadrilaterals with zero area and two-by-two sides equal). In this fashion when combining P pre and P cross with P post , we are essentially adding in higher-order signal, thus naturally gaining information. 
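The decomposition δ = R + Δ introduced above implies simple relations among the measured spectra. The following is a schematic sketch consistent with that description (Dirac delta functions and volume normalisations are omitted):

```latex
% With \delta = R + \Delta:
\begin{align}
P_{\rm pre}   &= \langle \delta\,\delta \rangle = P_{RR} + 2P_{R\Delta} + P_{\Delta\Delta},\\
P_{\rm cross} &= \langle \delta\,R \rangle      = P_{RR} + P_{R\Delta},\\
P_{\rm post}  &= \langle R\,R \rangle           = P_{RR},
\end{align}
% so that P_{R\Delta} = P_{\rm cross} - P_{\rm post} and
% P_{\Delta\Delta} = P_{\rm pre} - 2P_{\rm cross} + P_{\rm post}.
```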
Note that in order to match the information obtained by adding these two extra statistics, it is not enough to consider the bispectrum signal of the pre-reconstructed field, but both bispectrum and trispectrum signals. For this reason the information content of P pre + B is different from that contained in P pre + P cross + P post . However, it is important to note that higher-order information that reconstruction brings is only a part from the total contained in the full bispectrum and trispectrum data-vectors. This is why a full analysis using P + B + T will always provide more information. However, such an analysis is not very practical because of the size of the full data-vector and the computational time typically required to measure B and especially T directly. In this paper we show that P pre + P cross + P post is an efficient alternative for extracting the relevant information from higher-order statistics for cosmological analyses. To demonstrate, we perform an anisotropic Lagrangian reconstruction (see Methods for details) on each realisation of the MOLINO galaxy mocks 15 , which is a large suite of realistic galaxy mocks produced from the Quijote simulations 16 at z = 0. We then use these mocks to calculate the data covariance matrix and derivatives numerically for a Fisher matrix analysis 17 using the measured multipoles (up to = 4) of P pre , P post and P cross on the parameter set Θ ≡ Panel a in Figure 1 shows the measured power spectra monopole, and we see that P cross decreases dramatically with scale compared to P pre and P post . This indicates a decorrelation between P pre and P post below quasi-nonlinear scales (k 0.1 h Mpc −1 ), which is largely due to the difference in levels of nonlinearity in P pre and P post . From the original data vector {P pre , P post , P cross }, we can construct their linear combinations, P R∆ , P ∆∆ , P δ∆ defined as Panel b in Figure 1 show these power spectra. As discussed above, these power spectra involving ∆ contain the information of part of the high-order statistics such as bispectrum and trispectrum. The correlation matrix for the monopole of power spectrum and bispectrum (only the correlation with the squeezed-limit of B 0 is visualised for brevity) is shown in Figure 2. It is seen that P pre 0 highly correlates with B 0 , confirming that the bispectrum is induced by nonlinearities. In contrast, P post 0 weakly correlates with B 0 , or with P pre 0 and P cross 0 on nonlinear scales (e.g., at k 0.2 h Mpc −1 ). This, however, does not mean that P post is irrelevant to the bispectrumit actually is a mixture of P pre and certain integrated forms of the bispectrum and trispectrum information 9,14 . Therefore by combining P post with P pre and P cross , one can in principle decouple the leading contribution in the power spectrum, bispectrum and trispectrum. The integrated form of the bispectrum information dominates P ∆∆ , which strongly correlates with B 0 , as shown in Supplement Figure 2. The fact that P post barely correlates with B 0 implies that the information content in P post combined with B 0 may be similar to that in P post combined with P pre and P cross , which is confirmed to be the case by the results from the Fisher analysis presented below. 
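The linear combinations P_RΔ, P_ΔΔ and P_δΔ referred to above (their defining equations did not survive extraction) follow directly from the δ = R + Δ decomposition sketched earlier. A minimal sketch, assuming the measured monopoles share a common k-grid, is:

```python
import numpy as np

def derived_spectra(P_pre, P_post, P_cross):
    """Build the Delta-related spectra from the measured pre/post/cross power spectra,
    assuming delta = R + Delta and P_post = P_RR."""
    P_R_Delta     = P_cross - P_post                   # carries integrated bispectrum/trispectrum terms
    P_Delta_Delta = P_pre - 2.0 * P_cross + P_post
    P_delta_Delta = P_pre - P_cross
    return P_R_Delta, P_Delta_Delta, P_delta_Delta
```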
The cumulative signal-to-noise ratio (SNR) for power spectrum multipoles (up to = 4), bispectrum monopole, and various data combinations are shown in Supplement Figure 3, in which we can see that the joint 2-point statistics, P all , is measured with a greater SNR than that of P post or P + B 0 , which may mean that P all can be more informative than P post or P + B 0 for constraining cosmological parameters. To confirm, we then project the information content in the observables onto cosmological parameters using a Fisher matrix approach. Contour plots for (logM 0 , σ 8 ) derived from different datasets with two choices of k max (the maximal k for the observables used in the analysis) are shown in Figure 3 (more complete contour plots are shown in Supplement . The smoothing scale is set to be S = 10 h −1 Mpc when performing the reconstruction. The degeneracies between parameters using P pre , P post and P cross are generally different, because P pre , P post and P cross differ to a large extent in terms of nonlinearity on small scales. This is easier to see in Supplement Figure 4, in which contours for the same parameters are shown for observables used in several k intervals. The contours derived from P pre and P post generally rotate as k increases because of the kick-in of nonlinear effects, which affects P pre and P post at different levels on the same scale. This significantly improves the constraint when these power spectra are combined, labelled as P all , which is tighter than that from the traditional joint power spectrum-bispectrum analysis (P pre + B 0 ). Interestingly, P all can even win against P post + B 0 in some cases, demonstrating the To further quantify our results, in Figure 4 we compare the square root of the Fisher matrix element for each parameter, with and without marginalising over others, derived from P all and P pre + B 0 , respectively, with two choices of k max . For k max = 0.2 h Mpc −1 , we see that the Fisher information for each parameter (panel a: without marginalising over others) derived from P all is identical or even greater than that in P pre + B 0 . In other words, combining all power spectra we can exhaust the information in P pre +B 0 . After marginalising over other parameters, panel b shows that the uncertainty on each parameter gets redistributed due to the degeneracy. The ratios for the HOD parameters are all greater than unity especially for logM 0 and σ logM , demonstrating the power of our new method on constraining HOD parameters. The information content for cosmological parameters in P pre + B 0 is well recovered by using P all , although the recovery for M ν is relatively worse. The overall trend for the case of k max = 0.5 h Mpc −1 is similar, although the advantage of using P all over P pre +B 0 gets degraded to some extent. However, P all is still competitive: it almost fully recovers the information for the HOD parameters in P pre + B 0 with or without marginalisation, and largely wins against P pre + B 0 after marginalisation. Regarding the cosmological parameters, P all recovers all information in P pre + B 0 before the marginalisation, although the recovery is slightly worse for M ν . After marginalisation when the uncertainties are redistributed, the constraint from P all is generally worse than P pre + B 0 , especially for M ν . The 68% CL constraints on each parameter fitting to various datasets are shown in Table 1. 
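The projection step described here compares unmarginalised and marginalised 1σ uncertainties derived from the same Fisher matrix. A minimal sketch, using a placeholder positive-definite matrix rather than the matrices computed from the mocks, is:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 12))
F = A @ A.T + 12.0 * np.eye(12)                        # placeholder positive-definite Fisher matrix

unmarginalised = 1.0 / np.sqrt(np.diag(F))             # other parameters held fixed
marginalised   = np.sqrt(np.diag(np.linalg.inv(F)))    # other parameters marginalised over
print(marginalised / unmarginalised)                   # ratios >= 1: marginalisation inflates errors
```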
To quantify the information gain, we evaluate the Figure- where F denotes the Fisher matrix and N p is the total number of free parameters. For the ease of comparison, for cases with different k max , we normalise all the quantities using the corresponding one for P pre . As shown, for k max = 0.2 h Mpc −1 , (FoM) P all is greater than all others, namely, it is larger than (FoM) Ppre and (FoM) Ppost by a factor of 2.7 and 1.7, respectively, and it is even greater than (FoM) Ppost+B 0 by ∼ 13%. For k max = 0.5 h Mpc −1 , (FoM) P all is also more informative than (FoM) Ppre and (FoM) Ppost by a factor of 2.1 and 1.5, respectively, and is the same as (FoM) Ppre+B 0 , but is less than (FoM) Ppost+B 0 by ∼ 10% in this case. To highlight the constraining power on cosmological parameters, we also list FoM cos , which is the FoM with all HOD parameters fixed. It shows a similar trend as FoM Θ : P all is the most informative data combination for k max = 0.2 h Mpc −1 , but it is outnumbered by P pre + B 0 and P post + B 0 by 13% and 30%, respectively, for the case of k max = 0.5 h Mpc −1 . As demonstrated in this analysis, a joint analysis using P pre , P post and P cross is an efficient way to extract high-order information from galaxy catalogues, and in some cases, P all is more informative even than P post + B 0 , which is computationally much more expensive. In this example, the k-binning for P and B are different, namely, ∆k(B) = 3k f ∼ 0.019 hMpc −1 ∼ 1.9 ∆k(P ) where k f denotes the fundamental k mode given the box size of the simulation. We have checked that using a finer k-binning for B only improves the constraints marginally 19 , namely, the FoM can only be raised by ∼ 10% when ∆k(B) is reduced from 3k f to k f , which is largely due to the strong mode-coupling in B as shown in Figure 2. Such a fine binning is not practical anyway as, for example, using ∆k(B) = k f up to k = 0.5 h Mpc −1 , we end up with more than 50, 000 data points to measure for B 0 . Note that the MOLINO mock is produced at z = 0, where the nonlinear effects are the strongest. At higher redshifts, the density fields are more linear and Gaussian, thus we may expect less gain from our new method. This can be seen from panel a of Supplement Figure 9, in which the correlation between P pre and P post at various redshifts is shown. As expected, P pre and P post are more correlated at higher redshifts, e.g., the correlation approaches 0.95 at z = 5 around k ∼ 0.3 h Mpc −1 , which implies that almost no information gain can be obtained at such high redshifts. As argued previously, the decorrelation at lower z is due to the fact that P pre and P post contain different levels of nonlinearity, as illustrated in panel b, thus are complementary. Also, the Alcock-Paczyński (AP) effect 20 , which is a geometric distortion due to the discrepancy between the true cosmology and the fiducial one used to convert redshifts to distances, is irrelevant at z = 0 21 . As studied 22 , the AP effect can make the small-scale bispectrum more informative for constraining the standard ruler than the power spectrum (P pre ), thus it is worth revisiting the case in which P post and P cross are added to the analysis. To further demonstrate the efficacy of our new method, we perform another analysis at a higher redshift using P and B 0 from an independent set of mocks: a suite of 4000 high-resolution N -body mocks (512 3 particles in a box with 500 h −1 Mpc a side) produced at z = 1.02. 
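The definition of the FoM is garbled in the extracted text above. A common convention in this literature, assumed here only for illustration, is FoM = (det F)^{1/(2 N_p)}, normalised to the P_pre case as described:

```python
import numpy as np

def fom(F):
    """Assumed Figure-of-Merit convention: det(F)**(1/(2*N_p))."""
    n_p = F.shape[0]
    sign, logdet = np.linalg.slogdet(F)   # numerically stable determinant
    return np.exp(logdet / (2.0 * n_p))

# relative FoM as quoted in the text: fom(F_all) / fom(F_pre)
```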
This allows us to include the AP effect when performing the BAO and RSD analysis. This test confirms that P pre , P post and P cross are complementary for constraining cosmological parameters, and that P all contains almost all the information in P combined with B 0 , which is consistent with our findings from the MOLINO analysis (see Methods and Supplement Figures 12-15 for more details). Stage-IV redshift surveys including the Dark Energy Spectroscopic Instrument (DESI) 23 , Euclid 24 and the Prime Focus Spectrograph (PFS) 25 will release galaxy maps over a wide range of redshifts with an exquisite precision. As long as the distribution of a tracer in a given redshift range is not too sparse, namely, the number density is not lower than 10 −4 h 3 Mpc −3 so that a reconstruction can be efficiently performed 26 , the novel method presented in this work can be directly applied to extract high order statistics for constraining cosmological parameters from 2-point measurements, which is computationally much more efficient to perform. Since the reconstruction will be performed anyway for most ongoing and forthcoming galaxy surveys to improve the BAO signal, our proposed analysis can be performed at almost no additional computational cost. Additional work is required to build a link between cosmological parameters to the full shape of power spectra for a likelihood analysis, and this is challenging using perturbation-theory-based models on (quasi-) nonlinear scales, especially for the reconstructed power spectrum and the cross power spectrum. However, the model-free approaches including the simulation-based emulation 27-29 , can be used for performing the P all analysis down to nonlinear scales, in order to extract the cosmological information from the power spectra to the greatest extent. The estimation of the covariance matrix and derivatives The 15, 000 MOLINO mocks for the fiducial cosmology are designed for accurately estimating the covariance matrices of the galaxy clustering observables, including the power spectra and bispectra. In addition, a separate set of the MOLINO mocks are produced for estimating the derivatives with respect to cosmological parameters (including the HOD ones) using the finite difference method (see Supplement Information for details). For this purpose, 60, 000 galaxy mocks are constructed at 24 cosmologies that are slightly different from the fiducial one 15 . Since the data covariance matrices and the derivatives are all evaluated numerically using mocks, it is important to ensure that the result derived from the Fisher matrix approach is robust against numerical issues, as argued in 1-3, 15 . We therefore perform numerical tests to check the dependence of our Fisher matrix calculation on the number of mocks, and find that the marginalised uncertainties of all the concerning parameters are well converged given the number of mocks available. The details are presented in the Supplement Information. 1.2 The 4000 high-resolution N -body mocks at z = 1.02 To confirm our findings from the MOLINO mocks, we perform an independent mock test on a suite of 4000 high-resolution N -body simulations with 512 3 dark matter particles in a L = 500 h −1 Mpc box at z = 1.02 11 . The fiducial cosmology used for this set of mocks is consistent with the Planck 2015 4 observations. 
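The two numerical ingredients described in this subsection, the mock-based data covariance and the finite-difference derivatives, can be sketched as follows; array shapes and names are illustrative only.

```python
import numpy as np

def mock_covariance(obs):
    """obs: (N_mock, N_data) observable measured on the fiducial mocks."""
    return np.cov(obs, rowvar=False)

def central_difference(obs_plus, obs_minus, dtheta):
    """obs_plus / obs_minus: (N_mock, N_data) measured at theta_fid +/- dtheta."""
    return (obs_plus.mean(axis=0) - obs_minus.mean(axis=0)) / (2.0 * dtheta)
```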
The COLA mocks at multiple redshifts To investigate how the decorrelation between P pre and P post varies with redshifts, we perform Note that the information content in the reconstructed power spectrum is the same no matter whether the RSD is kept or not during the reconstruction process, and we have numerically confirmed this by performing the analysis with the isotropic reconstruction 13 , in which the RSD is removed using the fidicual f and b used for producing the mocks. Measurement of the power spectrum multipoles The multipoles (up to = 4) of both the pre-and post-reconstructed density fields are measured using an FFT-based estimator 9 implemented in N-body kit 10 . The shot-noise, which reflects the discreteness of the density field, is removed as a constant for the monopole of the auto-power. The k-binning is ∆k = 0.01 hMpc −1 for both the MOLINO and N -body mocks. Care needs to be taken when measuring the cross-power spectrum between the pre-and post-reconstructed density fields, since the raw measurement using the FFT-based estimator is contaminated by a scale-dependent shot-noise: on large scales, the post-reconstructed field resembles the unreconstructed one, making the cross-power spectrum essentially an auto-power, thus it is subject to a shot-noise component. On small scales, however, the shot-noise largely drops because the two fields effectively decorrelate. To obtain a measured cross-power spectrum whose mean value reflects the true power spectrum in the data such that no subtraction of the noise component is required, we adopt the "half-sum (HS) half-difference (HD)" approach 11 . We start by randomly dividing the catalog into two halves, dubbed δ 1 and δ 2 , and the corresponding reconstructed density fields are R 1 and R 2 , respectively. Let and Then HS (R) contains both the signal and noise, but HD (R) only contains the noise. Hence the cross-power spectrum estimator is, The scatter ofP cross around the mean value allows for an estimation of the covariance matrix, which is a 4-point function 12 The noise is anisotropic, and thus it affects even for multipoles with = 0 13 . Since a change in HOD parameters can result in a change in the number density of the galaxy sample and thus affect the shot-noise, the shot-noise can in principle be used to constrain the HOD parameters. We therefore perform an additional Fisher projection with the shot-noise kept in the spectra, and find that the constraints on HOD parameters can be improved in general, but the constraint on cosmological parameters is largely unchanged (see the P SN all column in Table 1). Measurement of the bispectrum monopole We measure the galaxy bispectrum monopole, B 0 , for all of the mock catalogs using the publicly available pySpectrum package 1, 15 . Galaxy positions are first interpolated onto a grid using a fourth-order interpolation scheme and then Fourier transformed to obtain δ(k). Afterwards B 0 is estimated using where δ D is the Dirac delta function, V B is the normalization factor proportional to the number of triplets that can be found in the k 1 , k 2 , k 3 triangle bin, and B SN 0 is the Poisson shot noise correction term. Triangle configurations are defined by k 1 , k 2 , k 3 , and for the MOLINO mocks, the width of the bins is ∆k = 3k f , where k f = 2π/(1000 h −1 Mpc), and for the N -body mocks, ∆k = 0.02 hMpc −1 . 
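One plausible reading of the half-sum/half-difference estimator used above for the cross-power spectrum is sketched below, assuming HS(x) = (x1 + x2)/2 and HD(x) = (x1 − x2)/2 for the two random halves, so that HD cancels the signal but keeps the correlated shot noise; the exact estimator used in the paper may differ in detail, and `power(a, b)` stands for any FFT-based cross-power measurement.

```python
def hs(x1, x2):
    return 0.5 * (x1 + x2)

def hd(x1, x2):
    return 0.5 * (x1 - x2)

def cross_power_noise_free(power, delta1, delta2, R1, R2):
    # subtract the noise-only spectrum of the half-differences from the
    # signal-plus-noise spectrum of the half-sums
    return power(hs(delta1, delta2), hs(R1, R2)) - power(hd(delta1, delta2), hd(R1, R2))
```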
An AP test performed on the MOLINO mocks Although the AP effect plays no role for the MOLINO mock since it is produced at z = 0 21 , we perform a test by isotropically stretching the scales and angles using pairs of AP parameters calculated at a non-zero redshift. This gives us an idea about whether this artificial and exaggerated AP effect can change the main conclusion of this work that the cosmological information content in P all is almost the same as or more than that in P pre + B 0 . In practice, we use the (α || and α ⊥ ) pairs computed at z eff = 0.5 and 1.0 respectively to stretch the wave numbers along and across the line of sight directions, and repeat the analysis. As shown in the Supplement Table, this added 'artificial' AP effects can generally tighten the constraint, but the relative constraints from P all and P pre + B 0 are largely unchanged, meaning that the main conclusion of this paper remains the same if the AP effect is taken into account. An AP test on the N -body mocks We perform a Fisher matrix analysis 17 on the AP parameters using the 4000 N -body mocks. Part of the observables (the power spectrum monopole) are shown in Supplement Figure 12. From panel a we see that the amplitude of P cross decreases dramatically with scales, indicating a decorrelation between P pre and P post below quasi-nonlinear scales, which is confirmed by the correlation coefficient (the normalised covariance) plotted in panel b. This decorrelation, which is not caused by the shot noise given the negligible noise level in the mocks, is a clear evidence of the complementarity among the power spectra. The cumulative signal-to-noise ratio (SNR) is shown in panel a of Supplement Figure 13, in which we see that P all is more informative than P pre , and that P + B 0 has slightly higher SNR on small scales. Constraints on the isotropic BAO parameter We first perform an AP test on the isotropic dilation parameter α iso , which is defined as the ratio of the true spherically-averaged scale of the standard ruler to the fiducial one. This dilation parameter depends on cosmological parameters, and can be constrained using the monopole of the power spectrum and bispectrum. The wavenumber k gets dilated by α iso due to the AP effect, thus the observables are, where T denotes the type of P 0 , namely, T = {pre, post, cross}, and the parameters A 0 and A B are used to parameterize the overall amplitudes of power spectrum monopole and bispectrum monopole, respectively. The free parameters are {α iso , ln A 0 , ln A B }. The derivative with respect to α iso is evaluated semi-analytically as Then the constraint on α iso is derived after marginalising over the amplitudes A 0 and A B , and it is shown in panel b of Supplement Figure 13. The FoM of α iso shows up step-like features due to the BAO feature, as previously discovered 22 , and P all offers the greatest FoM, until overtaken by P + B 0 at k max 0.37 hMpc −1 . 
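Assuming the observable enters as P_0(α_iso k), the semi-analytic derivative at α_iso = 1 reduces to k dP_0/dk, which can be evaluated from a spline of the measured monopole, for example:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def dP0_dalpha_iso(k, P0):
    """dP/dalpha_iso at alpha_iso = 1, assuming the observable is P0(alpha_iso * k)."""
    spline = CubicSpline(k, P0)
    return k * spline(k, 1)   # first derivative of the spline, times k
```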
The anisotropic AP test We use the first three even multipole moments to assemble the two-dimensional power spectrum, i.e., The bispectrum is similarly assembled using the first three even multipoles with m = 0 14 , which are the most informative ones 15 , i.e., The wave-number k i and the cosine of the angle to the line-of-sight µ i are stretched by two dilation parameters α ⊥ and α || due to the AP effect 16,17 , The power spectrum multipoles (the index for the type is omitted for brevity) and bispectrum monopole including the AP effect are respectively given as, The free parameters are {α ⊥ , α || , ln A , ln A B }, where A ( = 0, 2, 4) denotes the overall amplitudes of the power spectrum multipoles, and A B is the amplitude of the bispectrum monopole. The derivatives with respect to the parameters α ⊥ and α || are evaluated numerically by where O ∈ {P , B 0 } denotes the observables, and the step size ∆α i = 0.01. Then the constraints on α ⊥ and α || are derived after marginalising over the amplitudes A and A B . The FoM for α ⊥ , α || is shown in panel c of Supplement Figure 13, and it shows a similar trend as FoM(α iso ). The contour plot for α ⊥ , α || with k max = 0.4 hMpc −1 is shown in Supplement Figure 14, further highlighting the strong constraining power of P all in comparison to that of P + B 0 . A joint BAO and RSD analysis on the N -body mocks In addition to α ⊥ , α || , we add one more parameter to the analysis, which is ∆v, the parameter describing the change of velocities along the line of sight. This parameter mimics the change of the linear growth rate on large scales, but it also changes the velocity of particles coherently on small scales. We compute the derivatives with respect to ∆v numerically. The projection onto the parameters, shown in Supplement Figure 15, demonstrates the advantage of performing a joint analysis using P pre , P post and P cross . On large scales, P pre and P post are both determined by the linear density field, making the power spectra highly correlated. As shown in panels c 1 and c 5 , the contours derived from P pre and P post have similar orientations and we do not gain by combining them. For k > 0.15 h Mpc −1 , the correlation between P pre and P post decreases as the pre-reconstructed density field is dominated by the non-linear field while P post still retains the correlation with the linear density field. The contours shown in lines in panel c 7 , which are derived from power spectra in the k range of [0.2, 0.25] h Mpc −1 , are almost orthogonal to each other, making the constraint from the combined spectra, as illustrated in the shaded region, significantly tightened. On smaller scales, the post-reconstructed density field is also dominated by the non-linear field and the orientations of the contours are again aligned and the complementarity on smaller scales weakens. This shows that the level of nonlinearity in the power spectrum determines the degeneracies between parameters. Since P pre and P post are affected by different levels of nonlinearities on a given scale, which gives rise to different degeneracies, a joint analysis using both P pre and P post (and P cross ) can yield a better constraint by breaking the degeneracies. Data Availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. 
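The stretching of (k, μ) by the two dilation parameters described in the anisotropic AP test can be sketched with one common convention (a Ballinger-type mapping with F = α_∥/α_⊥); the exact convention and normalisation adopted in the paper may differ.

```python
import numpy as np

def ap_dilate(k_fid, mu_fid, alpha_perp, alpha_par):
    """Map fiducial (k, mu) to true (k, mu) under the AP distortion (assumed convention)."""
    F = alpha_par / alpha_perp
    factor = np.sqrt(1.0 + mu_fid**2 * (1.0 / F**2 - 1.0))
    k_true = (k_fid / alpha_perp) * factor
    mu_true = (mu_fid / F) / factor
    return k_true, mu_true
```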
The convergence of the Fisher matrix The Fisher matrix is formed as follows, where D and C are the data vector storing the numerical derivatives, and the data covariance matrix, respectively, thus the convergence of F relies on the convergence of D and C −1 against the number of mocks. We therefore check the convergence of the marginalised uncertainties of cosmological parameters, as D and C −1 are estimated using different number of mocks. Panel a in Supplement Figure 1 demonstrates the excellent convergence of F against C −1 (with D estimated using all the mocks), with the correction factors applied to remove the bias 18 . We then check the convergence of F against D (with C −1 estimated using all the mocks). We pay particular attention to the Fisher element for the neutrino mass, because it is recommended to estimate using four pieces of the power spectrum (or bispectrum), shown in Eq. (16) 16 , instead of two for other parameters, thus F Mν Mν may be the most affected Fisher element by numerical issues, if there are any. where M ν should be sufficiently small so that the finite difference approaches the derivative. In practice M ν is set to 0.1 eV for the Molino mocks. Note that the way the terms are assembled in Eq. (16) is to remove the second order terms in the Taylor series. However, it is not the only arrangement that is free from the second order terms so may not be optimal for removing the numerical noises. The most general form for ∂ S ∂Mν that is second-order terms free is, and λ is a free parameter. The scheme in Eq. (16) corresponds to λ = −1/12, and the three-term scheme shown in 16 corresponds to λ = 0. One can, in principle, tune λ to optimise the convergence of the Fisher matrix. The optimisation finds that λ ∼ −1/4, which effectively removes the S(M ν ) term. This means that M ν = 0.1 eV for P or B may cause numerical instabilities thus the final expression we use for ∂ S ∂Mν is, The convergence of σ θ against number of mocks is illustrated in panels b and c in Supplement Figure 4. Contour plots derived from the MOLINO mock. The 68% CL contour plots for (σ 8 , logM 0 ) and (σ 8 , σ logM ) derived from various combinations of spectra measured from the MOLINO mock in several k ranges. The 68% CL contour plot for α ⊥ and α || derived from various data combinations with k max = 0.4 h Mpc −1 .
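For concreteness, the forward finite-difference schemes commonly used in this literature for the neutrino-mass derivative (a central difference is not possible since M_ν ≥ 0) can be written as below; the λ-parametrised family discussed above interpolates between such choices. These are standard textbook schemes, not necessarily the exact coefficients of Eq. (16).

```python
def d_dMnu_three_point(S0, S1, S2, dM):
    """Second-order forward difference using S at M_nu = 0, dM, 2*dM."""
    return (-3.0 * S0 + 4.0 * S1 - S2) / (2.0 * dM)

def d_dMnu_four_point(S0, S1, S2, S4, dM):
    """Forward difference using S at M_nu = 0, dM, 2*dM, 4*dM (dM = 0.1 eV in the text)."""
    return (S4 - 12.0 * S2 + 32.0 * S1 - 21.0 * S0) / (12.0 * dM)
```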
Rice yield prediction and optimization using association rules and neural network methods to enhance agribusiness Objectives: This study aims to implement data analytics andmachine learning approaches to rice data and to establish association rules on fixed attributes and their correlations for yield prediction of crops. Methods: The data of rice crop is collected fromdistrict Larkana as per defined parameters: area, production, yield, temperature, rainfall, humidity and wind speed. The pre-processing operations are applied on prepared dataset to execute data analytics and machine learning algorithms. The processed data are then input into an Apriori algorithm for generating association rules. Neural network model is used to perform optimization on resulted association rules. Findings: Minimum support and confidence values equal to 3 and 80 respectively were set using Apriori algorithm on prepared rice dataset and obtained 88 association rules. Among them, results of 28 associated rules predicted `High Yield Production'. Neural network model is experimented to optimize the predicted yield of district Larkana through which overall network performance of 97.8% is calculated. Previously, rice yield data of Larkana were not statistically and digitally predicted and investigated. Application : Group of effective and well-built association rules of yield prediction are core outcome of this study which will be helpful for researchers, farmers and government officials to improve the productivity of rice crop. Introduction Crop yield prediction is helpful in food production and management. Its role in the country's economy is inescapable (1) . Pakistan is a well-known agriculture based country. The agriculture sector of Pakistan contributes approximately 23.4% in its economy. Majority of the people are living in rural areas and purely dependent on agricultural activities (2) . Among the crops, rice has great importance because Pakistan has 4 th position across the world, among the Rice exporters. Province of Sindh is the main contributor with approximately 25% rice produce in various districts especially in Larkana. https://www.indjst.org/ But most of the farmers are practicing traditional crop production methods instead of advanced technology and smart farming. The rapid changes in weather condition, results in the low crop productivity so that essential measures should be considered for yield prediction. The current challenges and threats faced by the farmers are uncertain due to rapid changing conditions of weather, water shortages, poor irrigation facilities, uncontrolled cost and accurate yield prediction (3) . Therefore, it is necessary for the farmers to be equipped with smart farming particularly crop yield prediction using statistical and computational techniques. A research study proves that the yield crop prediction would facilitate farmers to make precautionary actions to improve productivity. Early prediction is possible through collection of previous experience of farmers, weather conditions and other influencing factors, by storing them in a large database with variety of parameters such as area, temperature, rainfall, humidity and wind speed. Modern Technology is able to provide a lot of information on agricultural-related activities, through which necessary information can be extracted and sorted using data analyzing techniques. 
The big data analytics and other applications are used in this purpose to increase the agricultural yield production and reduce the expenses of the farming by taking appropriate decisions. Various machine learning techniques such as prediction, classification, regression and clustering are available for weather forecasting and crop yield prediction (4) . In this study, the rice data of district Larkana is collected from the Agriculture Statistics Department, Islamabad. These data are experimented to predict the yield using the Association Rule based approach. The Artificial Neural Network (ANN) Feed Forward Back propagation method of supervised learning are used to perform optimization on resulted association rules for decision-making. Literature review Data science is being utilized by most researchers in different areas of research including agriculture. Surya (5) carried a survey pertaining to the various factors of agriculture using the techniques of big data analytics and suggested that the implementation of big data analytics in agriculture gives better results. Furthermore, crop yield prediction is mainly preferred by the researchers using the Data Mining Techniques (DMTs). Rao (6) has used DMTs to envisage yield of crop in India. Many data mining algorithms are reported in literature to acquire the association rules. Some famous algorithms are AprioriTid, Apriori and Eclat implemented on different variables to find the association. The DMTs are also used by Ramakrishna (7) for agriculture land soil and reported the improvement in crop productivity. Specifically, an Apriori association rules were used to generate the advisory reports that facilitate decisions that support crop rotation, fertilizer requirements and harvesting procedures to the farmers. Supriya (8) used DMTs to analyze soil datasets for prediction of crops and developed complete system using K-Nearest Neighbor and Naive Bayes approaches for predicting the crop yield. Additionally, Zingade (9) presented an android system that used data analytics techniques i.e. linear regression to predict the most profitable crop in the current weather and soil conditions. The temperature, rainfall, soil parameters and past year production are used as selected attributes. The classification, clustering and association rule DMTs are analyzed by Chouhan (10) for analyzing the crops production. During the literature review it is found that ANN is commonly used by the researchers for yield prediction and optimization. The ANN and statistical techniques are compared by Paswan (11) . Pandey (12) made prediction of potato yield crop using Generalized Regression Neural Network (GRNN) and a Radial Basis Function Neural Network (RBFNN). The leaf area index, biomass and plant height were used as input data parameters, while the yield of potato set as output dataset to train and test the NN. Author concluded that GRNN is a better predictor than RBFNN on the basis of quick learning capability and lower spread constant parameters. Niedbala (13) used ANN with Multi-Layer Perceptron (MLP) and presented three neural models for prediction of winter rapeseed yield. The temperature, precipitation and information about mineral fertilization are used to make the prediction of yield. The forecasting quality of the models is verified by determination of forecast errors. Similarly, Mathieu (14) used NN classifier for forecasting extreme corn yield losses caused by weather extremes. 
The average temperature and standardized precipitation evapo-transpiration index are used as set predictors. Moreover, Khaki (15) used deep NN to predict yield, check yield, and yield deference of corn hybrids from genotype and environment data. The study concludes that environmental factors had a greater effect on the crop yield than genotype. Research methodology Some specific and predefined steps are always required to finish the chosen research project. Figure 1 depicted the core research methodology adopted to accomplish the task of rice yield prediction and optimization. Selection of appropriate variables is the first task of this research project in which yield is selected as dependent variable and area, temperature, rainfall, humidity, wind speed and rice production are considered as independent variables. The required data is particularly collected on the basis of mentioned independent and dependent variables. The datasets are separately prepared for preprocessing the needed operations and further analytical experiments. https://www.indjst.org/ This research project is reported in two sections one deals with the yield prediction and other one is related with the optimization of predicted rice yield. The association rules based approach along with Apriori algorithm is used for rice yield prediction and various association rules are generated and analyzed using Frontline Excel Solver V2019. This software tool offers the facility of Analytic Solver through which association rules are automatically generated for further process. Furthermore, Matlab offers various optimization opportunities including the tool of ANN hence, NN model is proposed and implemented on the received yield prediction outcomes for optimization. It is better to develop own programming code with considerable parameters for prediction and obtain more accurate results but this process takes more time so that the built-in tool of Matlab is used in this research project. All manually collected and published data were transferred into spreadsheet/computerized file, so that rice dataset can be input to the machine for the development of association rules. The Rice is Kharif Crop, its sowing in Larkana District starts in the month of June and harvesting starts in the November month of every year. Hence, average value of maximum temperature, average monthly amount of rainfall, average monthly relative humidity and average monthly wind speed of June to November of every year is taken for further preprocessing. The prepared datasets from the year 2010 to 2015 are given in Table 1. Data collection and processing Machine language algorithms process the data, when it is not in continuous form of data. The discretization is the process that allows making the values in limited number of possible states. The output of discretization is the file containing the ordered https://www.indjst.org/ and discrete values. The developed dataset is based on several attributes having continuous data. In this stage, discretization is performed on continues data of all attributes of rice crop dataset of Larkana District. The attributes: area, temperature, rainfall, humidity, wind speed, production and yield are discretized into least, average, and utmost on particularly fixed threshold value with preferred attributes. The set threshold values for all variables are given in Table 2. 
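The discretisation step described above can be expressed in a few lines; the thresholds below are placeholders standing in for the values listed in Table 2, and the column names are illustrative.

```python
import pandas as pd

thresholds = {"temperature": (30.0, 35.0), "rainfall": (5.0, 20.0)}   # (low_cut, high_cut), hypothetical

def discretise(df):
    out = pd.DataFrame(index=df.index)
    for col, (lo, hi) in thresholds.items():
        out[col] = pd.cut(df[col],
                          bins=[-float("inf"), lo, hi, float("inf")],
                          labels=["least", "average", "utmost"])
    return out
```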
Association Rules Association Rules is one of the incredibly important concepts of machine learning commonly used by the researchers to solve the problems pertaining with big data analytics including crop yield prediction. The association rules sufficiently help to uncover all the relationships between the items from huge generated databases. For this, Apriori algorithm is used for getting empirical results. Moreover, association rules are if-then statements that help to show the probability of relationships between data items within large data sets stored in databases. An association rule has two parts: an antecedent (if) and a consequent (then), both of which are a list of items. An antecedent is an item found within the data, while a consequent is an item found in combination with the antecedent. Various metrics are in place to help us understand the strength of association between antecedent and the consequent. Experiments with Apriori algorithm The searching of widespread patterns is vital and Apriori algorithm allows significantly doing the task specifically among the items which are already stored in the developed database. This algorithm is utilized effectively and association rules were successfully and digitally produced with chosen item sets. Hence, the pre-processed and discretized rice dataset of district Larkana is given as input to this preferred algorithm for getting the frequent item sets and strong association rules. The process of this algorithm is started by creating the basic item set with the association of single item with all available item sets. On the basis of the candidate set of item the most useful sets of items are created by shortening all the items whose support computations do not fully satisfy the least value of support threshold. Hence, the standard item set consequently acquired items that support values are continuously increasing. Furthermore, strong association rules obtained from the experiments are examined to comprehend the correlation surrounded by the variables i.e. area, temperature, rainfall, humidity, wind speed, production and yield. The established list of 88 various association rules auto generated by using Apriori algorithm on Larkana Rice Dataset. The auto generated list of association rules is divided into two parts; Tables 3 and 4 presented 1 to 44 and 45 to 88 rule lists respectively. Both tables contain Rule id, A-support, C-support, Support, Confidence, Lift-ratio, Antecedent and Consequent https://www.indjst.org/ attributes. The attribute Rule id identifies the rules in the table and Support, Confidence, Lift-ratio, Antecedent and Consequent are core parameters, while values of A-support, C-support parameters is not the scope of this research so that is not intensely discussed. Measurements of support and confidence levels The results are obtained using an Apriori algorithm contains the list of association rules. It is tried to achieve maximum possible combinations and stronger association rules among the items in the rice dataset by setting the minimum support and confidence values. The item sets are denoted with the variable X and the support is represented as supp(X). The X is always described as the proportion of the all possible transactions across the all data sets that hold the calculated item set. After computing the various support levels, it is a correct time to calculate the level of confidence using Eq.1 given below. 
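The equations referenced here as Eq. 1 and Eq. 2 did not survive extraction; the definitions they denote are, in standard form, the usual support, confidence and lift metrics, sketched below for transactions represented as sets of discretised items (e.g. {"area=least", "yield=utmost"}).

```python
def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(X, Y, transactions):
    # Eq. 1 (assumed): confidence(X -> Y) = supp(X U Y) / supp(X)
    return support(X | Y, transactions) / support(X, transactions)

def lift(X, Y, transactions):
    # lift(X -> Y) = confidence(X -> Y) / supp(Y)
    return confidence(X, Y, transactions) / support(Y, transactions)
```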
It is noted here that the computational process of supp(X∪Y) actually represents the support for the occurrences of the both transactions X and Y. Furthermore, confidence is usually interpreted as the probability estimation P (Y|X) through which left and right sides of the obtained rules are calculated. Then, the left side of the rule is calculated using the Eq.2 given below to observe support percentage of the variable X and Y. Only two types of support values were found while the experimenting. Figure 2 presented the support values adjacent to the rules. The highest support value is 4 and the least support value is 3. Surprisingly, rule numbers 3, 9, 11, 27, 28 and 30 have the highest support value which is 4 and remaining rules holds the least support value, hence the minimum support value is set with 3. Fig 2. Calculated support levels Similar to the calculation of Support, the confidence value is calculated to observe the relation between the rules as well as fixed variable. Figure 3 presents the calculated confidence values with all the generated rules. The least confidence value of 80 is observed with the rules 9, 15 and 28. All the other rules represent the maximum confidence value which is 100. Keeping the facts, minimum confidence value equals to 80 is fixed for the analysis. Predicted rice yield Tables 3 and 4 presented the outcomes of the association rules specifically in terms of the support and confidence level of the rules. In many places, the value of temperature, wind speed and other variables are very high as compared to variable yield. Hence, the purified list of those rules having yield at high level is separated for realizing the impact of other variables on the dependent variable yield. The generated list of 28 strong filtered association rules that predict the Rice Yield at maximum level, with their Antecedent, Consequent, Support and Confidence values is given in Table 5. The value of support attribute shows the participation of item set given in Antecedent of each rule. The value of confidence expresses the support for occurrences of transactions where both https://www.indjst.org/ Antecedent and Consequent appears true. Basically, the value of confidence tells the probability of how many times the rule has become correct using the provided dataset. While the value of lift-ratio shows the ratio of observed support to the Antecedent and Consequent were found independent. Neural network c lassification Various machine learning approaches have been used for crop yield prediction as well as optimization. The NN approach has great importance and gives acceptable level accuracy hence, this approach is used for optimization of rice yield data. With the help of the neural network, the process of the data optimization is performed on the extracted information such as Area, Temperature, Rainfall and the Humidity against the Yield and Production which are the targeted parameters. Optimization performed through the neural network using the feed forward back propagation method uses the supervised learning. However, the application of neural network is used in another way by Gandhi (16) and Khaki (17) for crop yield prediction. It is considered that the algorithm is provided with samples of inputs and outputs. Based on the supervised learning, the network calculates the error. The major goal of this algorithm is that it is used to adjust the input weights in order to reduce the error rate so that process of learning and training data is achieved by NN. 
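A minimal sketch of the feed-forward back-propagation model just described, using scikit-learn in place of the Matlab toolbox used in the study; the architecture (5 inputs, 10 hidden neurons, 2 targets) and the learning rate of 0.001 are those stated in the text, while the data arrays here are placeholders, not the Larkana dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(6, 5)   # area, temperature, rainfall, humidity, wind speed (placeholder rows)
y = np.random.rand(6, 2)   # yield, production (placeholder targets)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000, learning_rate_init=0.001)
model.fit(X, y)
print(model.score(X, y))   # coefficient of determination on the training data
```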
Figure 4 is the depiction of developed neural network for optimization of the predicted rice yield. The developed network is based on the 5 input and 2 output values. Area, Temperature, Rainfall, Humidity and Wind Speed are the factor effecting yield and production performance. The remote sensing data is also used by Putri (18) with specific variables. Thus, these five variables are selected as input variables while Yield and Production are set as targeted values to train the network. Furthermore, ten hidden neurons are fixed as hyper parameters for developed network to get the accurate and reliable results. Some researchers argued that convolutional neural network is more appropriate than the back propagation network due to the time dependency (19) . In this project, time may be not considered in that way because it is the time period of the rice cropping so that the proposed neural network is sufficient enough for such kind of systems. The training and testing phase performed using a dataset made in excel work sheet later imported in Matlab programming language by creating 2 arrays of 5x6 and 2x6 as shown in Figure 5. https://www.indjst.org/ Using the above mentioned values the network developed on a method known as Feed Forward Back propagation Method. Data was optimized to generate the best output of Yield and Production. However, during the training process the best output is generated at the 7 th epoch by the network out of 1000 which is shown in Figure 6. The batch training performed during which, samples pass through the learning algorithm before updating the weights at the learning rate of 0.001/epoch. While the Figure 7 showing the gradient ration generated by network which is 264971.27 up to the 7 th epoch. The validation check is also shown in the same figure which is a total of 6 in 7 epochs. The selected dataset is used for training as well testing of network. Experimented results showed that admirable outcomes were achieved during training and testing phase. In Figure 8 it is demonstrated that best validation performance results achieved by the network are 648 at the 1 st epoch with respect to its Mean Square Error (MSE). Also, the graph depicted in the Figure 9 shows training as well testing performance against received MSE. While the overall network performance shows that 99.8% received during the network training and overall optimized results achieved are 97.8%. Discussion The already available rice data of Larkana district is experimented using Apriori algorithm by considering the parameters: area, temperature, rainfall, humidity, wind speed, production and yield data of six years. It is noted that the last two variables are https://www.indjst.org/ fixed as targeted variables. The rules that predict the rice yield with satisfactory level of confidence are described in Table 6. After careful analysis of the rules and predicted rice yield, it is realized that many factors directly affect the yield of rice crop. For instance, rule number 3 defines that if the area is low then yield is high this means that the farmer is not able to provide the sufficient water to the rice crop. On the other hand, with the reflection of rule number 9, when the parameter temperature is high as compared to normal level then yield is also high because low temperature reduces the production of rice. The rule 11 provided the surprising results belonging with the humidity because the medium percentage of this factor gives maximum rice yield. 
But, rule number 23 tells us that low area and medium level humidity is also a cause of high yield. The analysis is consciously realized with the rule number 25 in a way that if area is low and wind speed is high then yield is high. We have some parameters having directed relation such as area and production for example, rule number 30 defines if area is low and production is medium then yield is high. The rice yield is increased when the temperature is soaring and the humidity is average (see rule number 33). The rule 41 is characterized in a way that when both factors humidity and production are at appropriate levels https://www.indjst.org/ then yield is high. With the focusing of rule 51, it is found that if area, temperature and wind speed are touching the high level then yield is increase at acceptable level. The rule number 55 defines that if area is low and temperature is high then production is medium and yield is high but when the area is low and temperature is high and production is medium then yield is utmost showed by rule 56. It is also examined that using the rule number 59 if the area is stumpy and humidity is medium then production is medium but farmers received high level of yield. Additionally, rule 61 described that the low area of cultivation with average humidity and production gives maximum rice yield. It is seldom happening and come to know with the rule number 64 that if the area is least and the other factors like wind speed and production is high and medium respectively then farmers can also get maximum rice yield. When the rule 66 was examined, it was observed that the area of cultivation may be low but if the wind speed is high and the production is average then we can obtain high level of yield. The farmers are upset due to the low area reserved for rice cultivation but rule number 73 defines that if the temperature and wind speed is high then yield will be high with the condition of medium production. Similarly, by applying rule number 74, the problem of area is same and the wind speed is high then it is possible to increase the temperature and yield except production. Moreover, it is attractively observed with rule 76 that when the area is low but wind speed and temperature is at maximum level then yield is high but the production is medium. Another rule having number 77 is examined carefully and found that if area is low and temperature is high and production is medium then wind speed and yield both are high. Most of the farmers crop rice in low area due to unavailability of the proper land. They can get yield high when rule 79 is successfully executed. This rule holds good if wind speed is high, production is medium and temperature is high. We have also experimented rule number 84 that defines if the area is low and temperature as well as wind speed are high then yield is high but the production is medium. During the discussion with farmers and experts, it is found that low wind speed decreases the rice yield therefore with the consideration of rule 84 high level wind speed is beneficial for rice yield. Summary and Conclusion The aim of this study is to share yield predictions made on previous valuable rice data using analytical techniques and machine learning algorithms. The Apriori algorithm is implemented on https://www.indjst.org/ rice crop dataset of district Larkana. The required data is collected from the different associated departments. 
The association-rule approach showed that Rule 9, Rule 20, Rule 33, Rule 51, and others clearly result in "yield is high". The rice yield predictions optimized using the neural network provide a high yield value that can be followed by farmers, growers, and government officials concerned with the rice crop, so that these personnel can take better decisions based on past trends and on the forecast values of the correlated attributes discussed in this study. The expected outcome is a bumper crop that will strengthen the country's economy by increasing rice production and also protect farmers from monetary losses. In future work, the proposed neural network for optimization of the rice yield of district Larkana will be compared analytically with the results obtained from advanced linear and statistical models.
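As a schematic illustration of the feed-forward network configuration described in the results above (five inputs, ten hidden neurons, two outputs), the sketch below uses scikit-learn rather than MATLAB; the data values, column choices, and stopping settings are hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical six-year records: [area, temperature, rainfall, humidity, wind_speed]
X = np.array([
    [120, 34.0, 85, 60, 12],
    [110, 32.5, 90, 65, 10],
    [140, 35.0, 70, 55, 14],
    [100, 31.0, 95, 70,  9],
    [130, 33.5, 80, 58, 13],
    [125, 34.5, 75, 62, 11],
], dtype=float)
# Targets: [yield, production]
Y = np.array([
    [3.2, 380], [2.9, 350], [3.6, 420],
    [2.7, 330], [3.4, 400], [3.3, 395],
], dtype=float)

X_scaled = StandardScaler().fit_transform(X)

# One hidden layer with ten neurons, trained by gradient descent
# (the text reports a learning rate of 0.001 and up to 1000 epochs).
model = MLPRegressor(hidden_layer_sizes=(10,), solver="sgd",
                     learning_rate_init=0.001, max_iter=1000,
                     random_state=0)
model.fit(X_scaled, Y)
print("Training MSE:", np.mean((model.predict(X_scaled) - Y) ** 2))
```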
5,478.2
2020-04-02T00:00:00.000
[ "Computer Science" ]
HPV16 E2 could act as down-regulator in cellular genes implicated in apoptosis, proliferation and cell differentiation Background Human Papillomavirus (HPV) E2 plays several important roles in the viral cycle, including the transcriptional regulation of the oncogenes E6 and E7, the regulation of the viral genome replication by its association with E1 helicase and participates in the viral genome segregation during mitosis by its association with the cellular protein Brd4. It has been shown that E2 protein can regulate negative or positively the activity of several cellular promoters, although the precise mechanism of this regulation is uncertain. In this work we constructed a recombinant adenoviral vector to overexpress HPV16 E2 and evaluated the global pattern of biological processes regulated by E2 using microarrays expression analysis. Results The gene expression profile was strongly modified in cells expressing HPV16 E2, finding 1048 down-regulated genes, and 581 up-regulated. The main cellular pathway modified was WNT since we found 28 genes down-regulated and 15 up-regulated. Interestingly, this pathway is a convergence point for regulating the expression of genes involved in several cellular processes, including apoptosis, proliferation and cell differentiation; MYCN, JAG1 and MAPK13 genes were selected to validate by RT-qPCR the microarray data as these genes in an altered level of expression, modify very important cellular processes. Additionally, we found that a large number of genes from pathways such as PDGF, angiogenesis and cytokines and chemokines mediated inflammation, were also modified in their expression. Conclusions Our results demonstrate that HPV16 E2 has regulatory effects on cellular gene expression in HPV negative cells, independent of the other HPV proteins, and the gene profile observed indicates that these effects could be mediated by interactions with cellular proteins. The cellular processes affected suggest that E2 expression leads to the cells in to a convenient environment for a replicative cycle of the virus. Background Human Papillomavirus (HPV) is a small DNA virus that infects squamous epithelia performing a life cycle closely related to the differentiation program of the target cells [1]. HPV of the high-risk group (HR) as types 16 and 18, are associated with cervical cancer (CC) development while the low risk group as types 6 and 11, only with benign lesions. The HPV-HR E6 and E7 gene products are oncoproteins, since E6 binds to p53 inducing its degradation and blocking its function as tumor suppressor, while E7 binds proteins members of the "pocket" family as Rb, blocking its union to the transcription factor E2F and inducing the transcription of genes necessary for the transition towards S phase of the cell cycle [2]. The E6 and E7 gene expression is regulated in early stages of the viral infection by the E2 virus protein. This protein also plays several important roles in the viral cycle, since it regulates the replication of the viral genome together with E1 protein [3] and participates in the viral genome segregation through the cellular mitosis by its association with the cellular protein Brd4 [4]. During CC progression, the HPV genome is frequently integrated into cellular chromosomes loosing the expression of E2 and driving to an uncontrolled expression of E6 and E7, being this fact a critical step in cellular transformation [5][6][7]. 
Evidences indicate that E2 protein can regulate negative or positively the activity of several promoters of cellular genes, although the precise mechanism of this regulation is not yet well understood. For example, HPV-HR E2 protein negatively regulates the expression of β4-integrin gene [8], as well as the activity of the promoter of hTERT [9]. On the other hand, E2 has a positive regulation on the expression of several cellular genes including p21 [10], involucrin [11], and SF2/ASF [12] with an incomplete knowledge of the mechanism; however, it is believed that also involves its interaction with cellular proteins such as Sp1 [10], the transcription factor C/EBP [11], or TBP and components of the basal transcription machinery [12]. It has been demonstrated that the expression of E2 affects important cellular processes as cellular proliferation or death [13][14][15]. These effects are mainly mediated by its interaction with p53 [16,17] and possibly with TBP-associated factor 1 (TAF1), which regulates the expression of several genes that modulate cell cycle and apoptosis [18]. These interactions could induce changes in the expression of genes involved in these processes. All the above mentioned reports have focused on analyzing the effects of E2 on particular promoters and very specific biological processes; therefore in this study our aim was to identify in a comprehensive way cellular genes and biological processes regulated by HPV16 E2. Using an adenoviral vector we expressed the HPV16 E2 gene in C-33A cells and analyzed the cellular gene expression profile generated by microarrays hybridization; ontological analysis indicated several pathways and cellular processes altered by HPV16 E2 expression. Cell lines and culture conditions The HEK293 and C-33A cell lines were obtained from ATCC. HEK293 cells were cultured in Dulbecco's modified Eagle's medium (DMEM), supplemented with 10% fetal bovine serum (FBS), penicillin (100 units/ml) and streptomycin (100 μg/ml). The C-33A cell line was cultured in Dulbecco's modified Eagle's medium: Nutrient Mixture F12 (DMEM-F12) supplemented with 10% Newborn Bovine Serum. Both cell lines were maintained in an humidified atmosphere at 37°C and 5% CO 2 . Recombinant Adenovirus and Infection The construction of the replication deficient recombinant Adenovirus containing the E2 gene from HPV16 controlled by the cytomegalovirus promoter (CMV) was carried out with the Adeno-X Expression System (Clontech, Inc). The gene E2 was amplified by PCR using the primer forward 5'-TTCGGGATCCATGGAGACTCTT TGCCAACG-3' and the primer reverse 5'-ATCCGAA TTCTCATATAGACATAAATCCAGTAG-3' using as a template the plasmid pcDNA3-E2. The corresponding amplicon was cloned in the pShuttle plasmid (Clontech, Inc) using the EcoRI and KpnI restriction sites. The generated pShuttle-E2 was digested with the restriction enzymes PI-SceI and I-CeuI to obtain the expression unit, and then clone it in the correspondent restriction sites of the pAdenoX plasmid (Clontech, Inc) generating the vector pAdenoX-E2. A pAdenoX-empty vector was also built, incorporating the PI-SceI-I-CeuI fragment from the pShuttle plasmid. This vector allowed us to generate an Adenovirus that does not contain expression cassette, denominated empty Adeno (Ad-empty). The recombinant viruses were generated by transfection into HEK293 cells with Lipofectamine Transfection Reagent (Invitrogen). 
The viral particles were propagated in HEK293 cells and purified using the system Adeno-X Mini Purification Kit (Clontech, Inc), following the instructions of the provider. The Adenovirus titer was obtained by immunocytochemistry following the protocol reported by Bewig [19]. For the infection of C-33A with the recombinant Adenoviruses, 800,000 cells were seeded in DMEM-F12 with 10% Newborn Bovine Serum and maintained at 37°C and 5% of CO 2 during 24 hrs. The cell cultures were then incubated with 500 moi (multiplicity of infection) of either AdE2 HPV16 or Ad-Empty during 1.5 hrs in serum free DMEM-F12, in order to allow the virus adsorption. The viral stock was then removed away and the infection continued during 48 hrs in DMEM-F12 with 1.5% newborn bovine serum. To evaluate the infection efficiency, viral DNA was extracted using the Hirt method [20] and this material used as a template to amplify by PCR a 287 bp fragment of the Adenovirus 5 genome, using as a primer forward: 5'-TAAGCGACGGATGTGGCAAAAGTGA-3' and as a reverse 5'-CGTTATAGTTACGATGCTAGAGATT-3'. RT -PCR Total RNA from the recombinant Adenovirus infected cells was obtained using the Trizol reagent (Invitrogen) following the indications of the provider. One μg of RNA was used to synthesize cDNA using M-MLV reverse transcriptase (Promega). This cDNA was used as a template to perform a PCR reaction using as primer forward 5'-TTCGGGATCCATGGAGACTCTTTGC-CAACG-3' and as reverse 5'-ATCCGAATTCTCATA-TAGACATAAATCCAGTAG-3' amplifying a 1098 bp fragment corresponding to the full length HPV16 E2 gene. We used also a primer forward 5'-CTGTGGA CCGTGAGGATA-3 and a reverse 5'-CTGTTGGGCA-TAGATTGTT-3' to amplify by PCR a 750 bp fragment of the Ad-5 Hexon gene. Hybridization and analysis of microarrays data 10 μg of total RNA were used for cDNA synthesis and labeling with SuperScript II kit (Invitrogen), using in a first array dUTP-Cy3 incorporation for Non-infected cells (N.I.) and dUTP-Cy5 for Ad-empty; and in a second array dUTP-Cy3 for Ad-empty and dUTP-Cy5 for AdE2; Fluorophorus incorporation efficiency was analyzed measuring absorbance at 555 nm for Cy3 and 655 nm for Cy5. Similar quantities of fluorophorus labeled cDNA were hybridized on the oligonucleotides collection 50-mer Human10K from MWGBiotech Oligo Bio Sets (Germany). Images of the microarrays were acquired and quantified in the scanner ScanArray 4000 using the QuantArray software from Packard BioChips (USA). A first analysis of the images and their data were performed using the Array-Pro Analyzer software from Media Cybernetics (USA). Data were then normalized and analyzed with genArise software (Institute of Cellular Physiology, UNAM) and genes with a Z score ≥ 1.8 or ≤ -1.8 were considered with altered expression. An ontological analysis was performed with selected data using PANTHER classification system (Protein ANalysis THrough Evolutionary Relationships). Quantitative Reverse Transcription PCR (RT-qPCR) Total RNA from recombinant Adenovirus infected cells was obtained using Trizol reagent (Invitrogen) following the indications of the provider; 1 μg of RNA was used to synthesize cDNA with M-MLV reverse transcriptase (Promega) and this material used for relative quantification of the selected genes obtained from the microarrays analyses, by qPCR using the commercial kit ABSOLUTE qPCR SYBR Green Mix (Abgene) following the recommendations of the provider. The evaluation of the mRNA levels of the gene of constitutive expression GAPDH was used to normalize. 
Amplicons quantification was performed by double delta Ct (ΔΔCt) method. The primer sequences used were: for GAPDH as forward 5'-CATCTCTGCCCCCTCTGCTGA-3' and as reverse 5'-GGATGACCTTGCCCACAGCCT-3'; for N-MYC as forward 5'-TACCTCCGGAGAGGACACC-3' and as reverse 5'-CTTGGTGTTGGAGGAGGAAC-3'; for JAG1 as forward 5'-CTTCAACCTCAAGGCCAGC-3' and as reverse 5'-CTGTCAGGTTGAACGGTGTC-3'; and for MAPK13 as forward 5'-ATGTCTTCACCC-CAGCCTC-3' and as reverse 5'TCCTCACTGAAC TCCATCCC3'. E2 expression in AdE2 infected C-33A cells In order to obtain reliable data on the modifications induced by HPV16 E2 on the cellular gene expression profile, a recombinant adenoviral vector was used, since these vectors are able to efficiently infect cells from epithelial origin and the presence of a very strong promoter (MLP from HCMV) on its expression unit, guarantees a high efficiency of transgene expression. The infection of C-33A cells with the recombinant Adenovirus AdE2 was observed in almost 90% of the treated cell population (not shown). The evaluation of the E2 expression level in cells was carried out by RT-PCR 48 hours after infection, observing a very high amount of E2 mRNA in the infected cells ( Figure 1). Differences in the transcriptional profiles In order to identify the modifications induced on the cellular gene expression profile by HPV16 E2 gene, the expression profiles of AdE2 C-33A infected cells versus the expression profile of Ad-empty infected cells were obtained. To discriminate the effect that the adenoviral vector by itself had on the cellular gene expression, we also compared the expression profile of Ad-empty infected cells against that one from non-infected cells (mock). Data analysis obtained indicates that HPV16 E2 up-regulates 581 genes and down-regulates 1048 genes (Additional file 1, Table S1 and Additional file 2, Table S2). Table 1 shows a list of the 50 genes with higher induction (with a Z score >2.1) and Table 2 those 50 genes highly down-regulated (with a Z score <2.9) for the expression of E2. These results indicate that HPV16 E2 has preferably a negative effect on cellular gene expression. Fold Change refers to differences in Z score values compared to expression in non-infected cells, analysis was perform using genArise software. Changes in gene expression with a cut-off >3.1 was used and the p-value was calculated by Student's t-test. Fold Change refers to differences in Z score values compared to expression in non-infected cells, analysis was perform using genArise software. Changes in gene expression with a cut-off < -6.0 was used and the p-value was calculated by Student's t-test. Gene Ontology analysis performed in PANTHER classification system (Protein ANalysis Through Evolutionary Relationships) [21] indicates that WNT pathway is the most severely affected by the expression of E2, since we found 28 genes down-regulated and 15 up-regulated. Interestingly, this pathway is a convergence point of genes involved in the regulation of several cellular processes, including apoptosis, cell proliferation and differentiation. We found modifications in the expression of genes that play an important role in regulating these processes, like the down-regulation of EGR2 and CASP9, both involved in apoptosis; the up-regulation of CCNA and down-regulation of RHOA, both involved in cell proliferation; and the down-regulation of some of the type I keratins, markers of epithelial cell differentiation such as keratin 14, 24 and 34. 
Moreover, we found that a large number of genes from pathways such as PDGF, angiogenesis and cytokines and chemokines mediated inflammation, are also altered for expression of HPV16 E2 (Tables 3 and 4). Validation of the microarrays data The validation of the results obtained with the microarrays analysis was performed by Real Time RT-qPCR, evaluating the mRNA levels of some of the genes negatively regulated by E2: MYCN, JAG1 and MAPK13. This approach was taking as basis recent results about validation of microarray experiments [22]. These genes were selected because they have a key role in some of pathways altered by HPV16 E2, such as apoptosis, cell cycle and keratinocyte differentiation [23][24][25]. Figure 2 shows the results of RT-qPCR for the selected genes among the AdE2 infected C-33A cells and those Ad-empty infected. The results of RT-qPCR from the selected genes correlate with the observations obtained with the microarrays analysis, indicating a very trusty landscape of the modifications on cellular gene expression induced by HPV16 E2. Discussion In this work we report the modification of the gene expression profile induced by the expression of HPV16 E2. We used the C-33A cell line to study the changes in the expression level of 10,000 human transcripts when the HPV16 E2 is expressed. In C-33A cells there is not evidence of HPV infection and they represent a convenient model to study the effect of E2 on cellular gene expression, without the involvement of another viral gene. Traditionally it has been considered that the effects observed in the regulation of cellular genes when the protein E2 is expressed in cervical carcinoma derived cell lines is due to the repression of the expression of the viral oncogenes E6 and E7 [2,[26][27][28]; however, in this work we showed that HPV16 E2 induces changes in the expression of cellular genes, independently of the regulation of the viral oncoproteins E6 and E7. The present study showed that HPV16 E2 importantly alters the expression profile of cellular genes, preferentially in a negative way, although a large number of genes were up-regulated. It is well known that E2 protein suppress the activity of papillomavirus promoters by binding to low-affinity binding sites, leading to the displacement of cellular binding factors [8,[29][30][31]. A similar scenario has been proposed for several cellular promoters, since in cultured primary keratinocytes it has been observed that HPV8 E2 represses the transcriptional activity of the β4 integrin promoter, due in part to its binding to a specific E2 binding site on the promoter and leading to displacement of at least one cellular DNA binding factor. However, growing evidence indicates that protein-protein interactions could be even more significant for E2mediated transcriptional regulation of cellular genes since has been shown that E2 protein from several papillomavirus physically and functionally interact with a variety of cellular regulatory transcription factors, including Sp1, C/EBP, CBP/p300 and p53 [10,11,16,32,33]. The interaction with Sp1 is apparently one of the most relevant for transcriptional regulation of cellular genes by E2 since it is involved in the downregulation of the hTERT promoter by HPV18 E2, but also the interaction with this transcription factor plays an important role in the transcriptional activation of several cellular promoters, including p21 by HPV18 E2 [10] or SF2/ASF by HPV16 E2 proteins [12]. 
Even when E2 shows cooperative activation with a variety of sequence-specific DNA binding factors such as AP1, USF, TEF-1, NF1/CTF, and C/EBP [11,[34][35][36][37], a direct interaction between E2 and these cooperation partners has only been shown for HPV18 E2 with C/ EBP, in the transactivation of the involucrin promoter [11], suggesting the transcriptional cooperation may also occur without a direct binding of these cellular proteins with E2. The analysis of our data indicated that HPV16 E2 negatively regulates a higher number of genes (1048 genes) than those positively regulated (581 genes) (Additional file 1, Table S1 and Additional file 2, Table S2). In agreement with results previously reported [10,11], we found that HPV16 E2 up-regulate the expression of involucrin and cyclin-dependent kinase inhibitor 1A (p21) genes. However, we observed these genes up-regulated at a level below the established cutoffs for our analysis, indicating the relevance of the data set provided by our study for understanding the role of HPV E2 as a regulator of cellular gene expression. Although we do not rule out the possibility that several genes are regulated by a direct interaction of E2 with specific sequences in the particular promoters, the global effect observed suggest that it could be the consequence of the interaction of E2 with several cellular transcription factors such as Sp1 (apparently the most important). E2 protein could destabilize protein-protein interactions between Sp1 and co-activators resulting in the negative regulation of the transactivation function of Sp1 itself, or E2 bound on the promoter via Sp1 may promote the recruitment of transcriptional co-activators such as p300/CBP and pCAF, leading to the transactivation of cellular promoters [38,39]. On the other hand, some HPV E2 proteins have been shown to interact with TBP and a number of components of the basal transcription machinery [18,[40][41][42][43][44][45], regulating the recruitment of the pre-initiation complex and affecting both viral and cellular gene expression. Previous works in our group have demonstrated that E2 protein interacts and cooperates with TAF1 in the activation of E2-dependent viral promoters [18,40]. An analysis of our results indicates that 55 genes regulated by E2 have a natural regulation for TAF1 (data not shown). Transregulation of specific cellular promoters could be also dependent on levels of the E2 protein in cells, since high levels of HPV16 E2 are known to result in inhibition of cell growth and promotion of apoptosis probably because with higher E2 levels, cellular metabolism may be compromised, leading to a reduced ability of cellular factors to control expression of several cellular genes. However, even we observed an abundant expression of E2, cell viability and different metabolic aspects of the cells (such as cell death) were not affected in a period of 72 hours post-infection with the AdenoE2 virus (data not shown). This allows us to assume that the observed modulation of cellular gene expression by E2 is not the consequence of induced quiescence or apoptosis, thus the mechanisms of gene expression regulation by E2 only implicate its transcriptional regulatory properties, strongly influenced by its interaction with cellular proteins. 
As expected, the results of the microarray analysis showed that HPV16 E2 affect a variety of cellular pathways (Tables 3 and 4), some of them altered in early stages of cervical cancer development, when E2 is still expressed before the integration of the viral genome into cell chromosome. Interestingly we observed that the expression of a high number of genes on the WNT-pathway is modified for the expression of E2. In the last few years it has been reported that WNT-pathway is activated by the expression of E6 and E7 viral oncogenes [46][47][48]. However, our observations suggest that E2 is also targeting this pathway probably with different consequences than the induced by the viral oncogenes, since a tight regulation and controlled coordination of the WNT signaling cascade is required to maintain the balance between proliferation and differentiation. Recently it has been proposed that essentially all cellular information -i.e. from other signaling pathways, nutrient levels, etc. -is funneled down into a choice of coactivators usage, either CBP or p300, by their interacting partner betacatenin (or catenin-like molecules in the absence of beta-catenin) to make the critical decision to either remain quiescent, or once entering cycle to proliferate without differentiation or to initiate the differentiation process [49]. Since CBP and p300 are also interactors for E2, the function of the WNT-pathway could be deeply modified by the low availability of the coactivators when the viral protein is present. The control of this pathways in the viral cycle could have biological consequences as in the case of the regulation of cell proliferation, since the induction of Cyclin A expression by E2, orchestrated with a negative regulation of RhoA, known inhibitor of the cell proliferation, would allow the entry into the S phase of cell cycle [50][51][52]. Similarly, E2 expression could be also involved in apoptosis regulation since it negatively regulate genes involved in this process, such as caspase 9 (CASP9) [53], whose product is an effector of cell death. In the same way, EGR2 [54,55] is negatively regulated bringing as a consequence the inhibition of cytochrome c releasing it from the mitochondria. Interestingly, several genes mainly expressed in keratinocytes from the basal layers of stratified epithelia, such as type I keratins (keratin 14, 24 and 34) [56][57][58], were down-regulated in cells expressing E2 suggesting that the process of cell differentiation could be also regulated by this viral product. Conclusion In conclusion, our results in this work demonstrate that HPV16 E2 has a regulatory effect on cellular gene expression independently of the viral oncoproteins E6 and E7. The analysis data presented in this study demonstrates that E2 predominantly induces a downregulation of gene expression. The gene profile observed in E2 expressing cells suggests that E2 could induce these changes by its interactions with ubiquitous cellular proteins such as Sp1. Several genes involved in pathways altered in early stages of cervical cancer, such as CASP9 and EGR2 involved in apoptosis and MYC-N, CCNA and RhoA involved in cell proliferation, were altered by HPV16 E2 expression. The cellular processes affected suggest that E2 expression leads to the cells in to a convenient environment for a replicative cycle of the virus. Additional material Additional file 1: Up-regulated genes in C-33A cells by HPV16 E2 expression. Z score values represent the change in gene expression related to non-infected cells. 
Fold Change refers to differences in Z score values compared to expression in non-infected cells. Analysis was performed using genArise software; genes with a change in expression of Z score ≥ 2.0 were considered altered, and the p-value was calculated by Student's t-test. Additional file 2: Down-regulated genes in C-33A cells by HPV16 E2 expression. Z score values represent the change in gene expression relative to non-infected cells. Fold Change refers to differences in Z score values compared to expression in non-infected cells. Analysis was performed using genArise software; genes with a change in expression of Z score ≤ -2.0 were considered altered, and the p-value was calculated by Student's t-test.
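A minimal sketch of this kind of Z-score cut-off filtering of differential expression (the gene names and values below are hypothetical, and this is not the genArise implementation):

```python
import pandas as pd

# Hypothetical per-gene Z scores comparing AdE2-infected vs. Ad-empty cells.
zscores = pd.Series({
    "MYCN": -3.4, "JAG1": -2.6, "MAPK13": -2.2,
    "CCNA": 2.3, "RHOA": -2.1, "KRT14": -2.8,
    "GAPDH": 0.1, "ACTB": -0.4,
})

cutoff = 2.0  # same style of threshold as in the additional files (+/- 2.0)
up_regulated = zscores[zscores >= cutoff]
down_regulated = zscores[zscores <= -cutoff]

print("Up-regulated:\n", up_regulated)
print("Down-regulated:\n", down_regulated)
```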
5,297.6
2011-05-20T00:00:00.000
[ "Biology", "Medicine" ]
Hybrid Beamforming for Secure Multiuser mmWave MIMO Communications Secure communication is critical in wireless networks as the networks are prone to eavesdropping from unintended nodes. To address this challenge, physical layer security (PLS) is being employed to combat information leakage. In this paper, we present the performance evaluations based on the secrecy rate and the secrecy outage probability for multiuser multiple-input multiple-output (MIMO) millimeter-wave (mmWave) communications by employing hybrid beamforming (HBF) at the base station, legitimate users and eavesdropper. Using a 3-dimensional mmWave channel model and uniform planar antenna arrays (UPA), we employ artificial noise (AN) beamforming to jam the channels of eavesdropper and to enhance the secrecy rate. The transmitter uses the minimum mean square error (MMSE) precoder to mitigate multiuser interference for the secure MIMO mmWave systems. It is shown that the overall system performance highly depends on the power allocation factor between AN and the signal of legitimate users. I. INTRODUCTION Over the last decade, many advancements have been made in the field of wireless communications. Among the major technology enablers being explored for wireless networks at the physical layer (PHY), a great deal of attention has been focused on mmWave communications, massive MIMO antenna systems and beamforming techniques [1]. These enablers bring to the forefront great opportunities for enhancing the performance of beyond-5G (B5G) networks, with respect to throughput, spectral efficiency, energy efficiency, latency and reliability. Due to its open nature, wireless communication is prone to information leakage to unintended users. To address this challenge, the concept of PLS is being introduced and explored for wireless communication systems. Along this line, the studies such as [2], [3] have shown that the secrecy of a wireless communication system can be enhanced by PLS methods, in contrast to the conventional cryptographic methods that are more computationally complex. The PLS via beamforming techniques is being explored for mmWave MIMO systems while they are largely limited to single user (SU) MIMO systems or multiuser (MU) multipleinput single-output (MISO) systems [4]- [7]. To the best of our knowledge, an extension to the multiuser (MU) MIMO case for mmWave communications has not been studied in the open literature. This work, therefore, presents the secure multiuser MIMO systems with multiple streams per user in the mmWave communications. The remainder of this paper is organized as follows. The mmWave channel model is described in Section II. The proposed scenario for the secure multiuser MIMO mmWave communications is given in detail in Section III. The simulation results are provided in Section IV, followed by conclusions in Section V. Notations: We use the following notations throughout the paper. The uppercase bold letter A is matrix, the lowercase bold letter a is vector and the lowercase letter a is scalar. || · || F represents the Frobenius norm while |·| indicates the determinant. (·) −1 , (·) T and (·) H denotes the inverse, transpose and conjugate transpose operator, respectively. II. MILLIMETER-WAVE CHANNEL MODEL We consider a three-dimensional (3D) statistical spatial channel model (SSCM) [8] for the mmWave MIMO system. 
The mmWave channel model, composed of the line-of-sight (LOS) and non-LOS (NLOS) components, is given in Eq. (1), following [9]; there, H LOS denotes the LOS component of the channel. The other component of the mmWave channel in Eq. (1) is the NLOS component H NLOS, in which C and S c denote the number of clusters and the number of subpaths in each cluster, respectively; ρ denotes the power portion of each cluster, and α represents the instantaneous complex coefficient of each subpath. Moreover, ϕ and θ indicate the azimuth and elevation angles, respectively. The array factor is denoted by a and corresponds to either the angle of arrival (AoA) at the receiver or the angle of departure (AoD) at the transmitter. For the MIMO system, the UPA antenna configuration is considered at both the transmitter and receiver sides. For the UPA, the array response can be evaluated as in [9] through the phase terms Ψ 1 and Ψ 2, where M and N are the numbers of antennas along the horizontal and vertical axes, respectively. The wavelength is denoted by λ, and the inter-element spacings between two adjacent antenna elements along the horizontal and vertical axes are denoted by d x and d y, respectively. III. PROPOSED SCENARIO We propose a small-cell (pico, femto, etc.) MU-MIMO mmWave communication system. In the proposed scenario, we consider a single-cell downlink system with one base station (BS) equipped with N T antennas serving K legitimate users, each equipped with N R antennas. This scenario, in which each legitimate user communicates with the BS over both LOS and NLOS links, is well suited to small-cell networks. We assume that one passive eavesdropper is assigned to each legitimate user and tries to gain information from that user's channel; to make this assumption hold, each legitimate user and its eavesdropper may share some of their AoDs, as shown in Figure 1. In addition, each node in the cell (BS, legitimate users, and eavesdropper) employs the HBF architecture shown in Figure 2. Since HBF is considered at both the transmitter and receiver sides, the digital precoder is F DB ∈ C N RF ×Ns and the analog precoder is F AB ∈ C N T ×N RF, where N RF and N s are the number of RF chains and the number of data streams at the transmitter, respectively. It is assumed that the channel state information (CSI) of the eavesdropper is not known at the BS, since the eavesdropper is a passive node that wants to hide its presence from the BS. In this case, we utilize AN beamforming to jam the channel of the eavesdropper. The transmit signal including the AN precoder is then formed such that F AN is the AN precoder, s ∈ C Ns×1 is the transmit symbol with E{ss H } = I Ns, z is the artificial noise drawn from CN (0, 1), and φ is the power allocation factor between s and z. Following the transmit HBF, we consider an HBF scheme at the receiver side as in Figure 2, such that the analog combiner is W AB ∈ C N R ×M RF and the digital combiner is W DB ∈ C M RF ×ns, where M RF and n s are the number of RF chains and the number of data streams at the receiver, respectively. For the kth legitimate user, the received signal is given as in [10], where P k is the received power, H k ∈ C N R ×N T is the channel between the BS and the kth legitimate user, and n k is complex additive white Gaussian noise (AWGN) with zero mean and variance σ 2 k.
W DB,k and W AB,k are the M RF × n s digital combiner and the N R × M RF analog combiner for the kth legitimate user, respectively. In the proposed scenario, the generalized N T × N RF analog precoder and N RF × N s digital precoder are denoted by F AB and F DB, while F AB,k and F DB,k indicate the N T × M RF analog precoder and the M RF × n s digital precoder for the kth legitimate user (where M RF and n s are selected from N RF and N s for the corresponding user), respectively. We note that reliable communication is performed under the constraint KM RF ≤ N RF. Considering the worst-case scenario, it is assumed that the eavesdropper is able to eliminate the interference among the users; hence the signal received by the eavesdropper for the kth legitimate user is given accordingly. Fig. 2 shows the hybrid beamforming architecture at the BS, the legitimate users, and the eavesdropper. For the HBF scheme, the first consideration is to find the analog precoder and combiner in such a way that the effective channel is maximized [11]. This optimization problem is non-convex and computationally complex for the optimal search of the analog precoder and combiner [12]. In order to convert it into a convex problem, a codebook-based analog beamformer design can be used, in which F AB,k and W AB,k are chosen from the corresponding AoDs and AoAs, respectively. In this work, we construct the analog precoders and combiners according to Algorithm 1 as given in [13]. After finding the analog beamformers, the next step is to design the digital beamformers F DB,k and W DB,k for each legitimate user. For the digital precoder, we utilize the MMSE precoder to mitigate the interference among users and streams. Firstly, the generalized effective channel, including the effective channel of each legitimate user, is formed, where H is the concatenated effective channel matrix [10] with dimension KM RF × M RF. Secondly, the M RF × KM RF MMSE precoder is defined, where F DB = [F DB,1 F DB,2 . . . F DB,K ] consists of the K legitimate-user precoders and β is the regularization factor, equal to 1/σ 2. In order to satisfy the transmit power constraint ||F AB,k F DB,k || 2 F = 1, each legitimate-user precoder is normalized. Considering the HBF scheme, we propose to find the AN precoder from the null space of the combined analog and digital precoders of the legitimate users by using the singular value decomposition F = ŪΣ̄V̄ H, where F = [(F AB,1 F DB,1 ) . . . (F AB,K F DB,K )]; the AN precoder is then formed from the singular vectors associated with the null space of F. It is important to note that we set M RF = n s to reduce the required time and computation. After finding the digital precoders and the AN precoder, the digital combiner for the kth legitimate user can be obtained, and the corresponding digital combiner of the eavesdropper is obtained in the same way, where W AB,e,k is determined by using Eq. (10) based on the channel of the eavesdropper. It is important to note that Eq. (17) also represents the worst case in terms of secrecy, since it is assumed that the eavesdropper knows all the precoding matrices belonging to the legitimate users. With all of these matrices, the signal-to-interference-plus-noise ratio (SINR) is evaluated for the kth legitimate user in Eq. (18) and for the corresponding eavesdropper in Eq. (19).
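Before turning to the rate expressions, a minimal numerical sketch of the MMSE digital precoder and the null-space AN precoder construction described above (numpy assumed; the random placeholder channels, dimensions, and regularization are illustrative, the analog stage is folded into the effective channel, and this is not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nt, ns = 3, 16, 1          # users, BS antennas, streams per user
sigma2 = 0.1                  # noise variance; beta = 1/sigma2 as in the text

# Hypothetical effective channels (one ns x Nt block per user), standing in for
# W_AB,k^H H_k of the paper; stacked into the concatenated effective channel.
H = np.vstack([rng.standard_normal((ns, Nt)) + 1j * rng.standard_normal((ns, Nt))
               for _ in range(K)])                                   # (K*ns) x Nt

# MMSE-style precoder: F = H^H (H H^H + beta * I)^{-1}, then per-column normalization.
beta = 1.0 / sigma2
F = H.conj().T @ np.linalg.inv(H @ H.conj().T + beta * np.eye(K * ns))  # Nt x (K*ns)
F = F / np.linalg.norm(F, axis=0, keepdims=True)

# AN precoder from the null space of the combined user precoders, via SVD.
U, S, Vh = np.linalg.svd(F, full_matrices=True)
F_AN = U[:, K * ns:]          # Nt x (Nt - K*ns), orthogonal to every user precoder

print(np.linalg.norm(F.conj().T @ F_AN))   # numerically ~0: AN orthogonal to the precoders
```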
Then, the data rate can be calculated for the kth legitimate user as, and the data rate of corresponding eavesdropper can be given as, Finally, the average secrecy rate is defined by, where R s k is the average secrecy rate of the kth legitimate user and [x] + max{0, x}. The average secrecy sum rate is also given as, Another important metric for measuring the secrecy is the outage probability. A secrecy outage probability for the kth legitimate user is calculated as, where R th is the given threshold secrecy rate. IV. SIMULATION RESULTS We illustrate the simulation results in terms of the secrecy rate and the secrecy outage probability. The average signalto-noise-ratio (SNR) of the kth legitimate user is SNR = P k The key simulation parameters are given in Table I. In the Figures 3 and 4, only one stream is assigned to each legitimate user as M RF = n s = 1. Since there is only one RF chain at the legitimate users and eavesdropper side, only analog beamforming can be considered instead of the hybrid structure for the case of single stream. For the Figure 3, the average secrecy rate is shown through different optimum power factor for the different number of users and SNR values. It is observed that the power factor has great importance on the average secrecy rate per user. Therefore, the optimum power factor is nearly chosen as φ = 0.9 for low number of users and low SNR (noise-limited) regime. On the other hand, the power factor of AN signal becomes more important for the low SNR regime when the user density becomes high. Besides, in the high SNR regime, the power of AN signal needs to be increased to reach higher secrecy rates especially when the number of users is high. Since adding the AN signal to the system causes a leakage to the legitimate users, it can compensate at the high SNR (interference-limited) regime. In the Figure 4, the performance of the average secrecy sum rate is shown for the different number of users with In the Figure 5, the effect of power factor for multiple streams case is observed in different SNR regimes. In the 2streams case, the power factor is determined adaptively. For the mmWave communications system with 15 legitimate users, the optimum power factor is set as 0.3 for the high SNR regime while it is determined as 0.9 for the low SNR regime. When the number of streams increases, more power for the legitimate users is required to improve secrecy performance. Figure 6 gives the simulation results based on the secrecy sum rate for different number of users with their optimum power allocation values. It is shown that the secrecy sum rate is improved when the number of user increases for the case of 2-streams. In the Figure 7, the effect of the number of data streams on the average secrecy sum rate is shown by setting the optimum power factors. The performance results indicate that as the number of streams increases, the average secrecy sum rate decreases. It is based on the assumption that the eavesdropper can receive messages from the AoDs of legitimate users but not from the LOS link, increasing data streams makes possible the information leakage from legitimate users to eavesdropper. As a result, the eavesdropper can gain more mutual information about the channel of each legitimate user when the number of streams is high. V. CONCLUSIONS We have provided the performance evaluations based on the secrecy rate and secrecy outage probability for the singlecell MU-MIMO with multiple streams downlink mmWave communication system. 
On one hand, the hybrid beamforming scheme is applied for all nodes in the cell and MMSE precoder is used to mitigate the interference between legitimate users. On the other hand, the AN precoder is designed to increase the secrecy rate. We have provided extensive simulation results for the different number of users and streams. It has been shown that the secrecy of the overall system is improved by adjusting the power allocation factor depending on the number of streams and the SNR region. As future work, the scenario will be extended to a multicell scenario for ultradense networks.
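As a minimal numerical illustration of the secrecy-rate and secrecy-outage definitions used in this paper, R_s = [R_user − R_eve]^+ and outage = Pr(R_s < R_th); the SINR samples below are hypothetical and do not reproduce the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, R_th = 10_000, 1.0          # Monte Carlo samples and threshold rate (bits/s/Hz)

# Hypothetical per-trial SINRs for a legitimate user and its eavesdropper.
sinr_user = rng.exponential(scale=20.0, size=n_trials)
sinr_eve = rng.exponential(scale=2.0, size=n_trials)

R_user = np.log2(1.0 + sinr_user)
R_eve = np.log2(1.0 + sinr_eve)

R_sec = np.maximum(0.0, R_user - R_eve)          # [x]^+ = max{0, x}
print("average secrecy rate:", R_sec.mean())
print("secrecy outage prob.:", np.mean(R_sec < R_th))
```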
3,427.2
2020-08-01T00:00:00.000
[ "Computer Science" ]
Cell Polarity, Epithelial-Mesenchymal Transition, and Cell-Fate Decision Gene Expression in Ductal Carcinoma In Situ Loss of epithelial cell identity and acquisition of mesenchymal features are early events in the neoplastic transformation of mammary cells. We investigated the pattern of expression of a selected panel of genes associated with cell polarity and apical junction complex or involved in TGF-β-mediated epithelial-mesenchymal transition and cell-fate decision in a series of DCIS and corresponding patient-matched normal tissue. Additionally, we compared DCIS gene profile with that of atypical ductal hyperplasia (ADH) from the same patient. Statistical analysis identified a “core” of genes differentially expressed in both precursors with respect to the corresponding normal tissue mainly associated with a terminally differentiated luminal estrogen-dependent phenotype, in agreement with the model according to which ER-positive invasive breast cancer derives from ER-positive progenitor cells, and with an autocrine production of estrogens through androgens conversion. Although preliminary, present findings provide transcriptomic confirmation that, at least for the panel of genes considered in present study, ADH and DCIS are part of a tumorigenic multistep process and strongly arise the necessity for the regulation, maybe using aromatase inhibitors, of the intratumoral and/or circulating concentration of biologically active androgens in DCIS patients to timely hamper abnormal estrogens production and block estrogen-induced cell proliferation. Introduction Ductal carcinoma in situ (DCIS), also known as intraductal carcinoma, is the most common type of noninvasive breast cancer in women [1]. From the mid-1970s, the incidence of DCIS has sharply increased, primarily because of the adoption of radiographic screening for invasive carcinoma. Currently, it accounts for approximately 25% of newly diagnosed breast cancer cases [1]. DCIS is a nonobligate precursor to invasive breast cancer, and for this reason some members of the 2009 US National Institutes of Health DCIS consensus conference proposed to remove the word "carcinoma" from the term DCIS [2]. Nevertheless, experimental studies have shown the presence of carcinoma precursor cells in DCIS lesions [3][4][5], and clinical evidence indicates that approximately 50% of cases will progress to invasive breast cancer if untreated [6,7]. As a result, the malignant nature of DCIS remains debated, primarily because of the limited knowledge of DCIS arising and development. In fact, while several genome-wide studies have compared the gene profiles of DCIS and invasive breast cancer, very few studies have investigated and recognized the molecular alteration that characterize DCIS with respect to normal tissue [8][9][10][11]. DCIS is defined as an abnormal proliferation of transformed mammary epithelial cells within the closed environment of a duct, likely in response to microenvironment alterations including hypoxia and nutrient deprivation [4,12]. Among the processes early affected during mammary cells transformation, those involved in the establishment and maintenance of epithelial cell identity and tissue specificity are of particular relevance. In fact, epithelial mammary cells are characterized by an asymmetric distribution of cytoplasmic and membrane proteins, termed apicobasolateral cell polarity, essential for a correct cell-cell adhesion and the formation of an epithelial sheet. 
As a result of an epithelial to mesenchymal transition (EMT), during neoplastic transformation, cell polarity and epithelial morphology are early lost polarized and immotile epithelial cells acquire a fibroblastlike morphology and increased cell motility [13]. EMT process can be induced by a variety of signaling pathways among which the main and best characterized is that involving transforming-growth factor-β (TGF-β). TGF-β is a multifunctional cytokine and a powerful tumor suppressor that governs many aspects of mammary epithelial cells physiology and homeostasis [14]. Under abnormal microenvironment conditions; however, some mammary epithelial cells may acquire resistance to TGF-β, circumvent its cytostatic effect and tumor suppressive activity, and activate EMT [15,16]. Recent studies have demonstrated that EMT may generate cells with stemness-like properties, especially in the transitioning mammary epithelial cell compartment [17,18]. Therefore, a interrelationship among EMT (TGF-β mediated), disruption of the mechanisms deputed to cell polarity and adhesion control, and acquisition of stemnesslike features can be assumed already in DCIS [19,20]. Taking advantage from the only microarray dataset publicly accessible at the ArrayExpress web site, we investigated the pattern of expression of a selected panel of genes involved in TGF-β-mediated EMT or associated with epithelial cells identity (i.e., cell polarity and apical junction complex) and cell-fate decision in a series of DCIS and corresponding patient-matched histologically normal (HN) epithelium [11]. As the whole-gene expression profile of patient-matched atypical ductal hyperplasia (ADH) was also available, we further compared DCIS and ADH profiles to verify the hypothesis according to which breast cancer progression is a multistep process involving a continuum of changes from normal phenotype through hyperplastic lesions, carcinoma in situ, and invasive carcinoma [21]. Materials. As reported in the original paper [11], patient-matched samples (HN, ADH, and DCIS) were isolated via laser capture microdissection from surgical specimens of 12 preoperative untreated patients with an ER-positive (immunohistochemically evaluated) sporadic breast cancer. Gene expression was determined by using the Affymetrix Human Genome HG-U133A GeneChip; corresponding microarray dataset was publicly available at the ArrayExpress web site (http://www.ebi.ac.uk/arrayexpress/) with the Accession number E-GEOD-16873. Statistical Analysis. As some genes are recognized by more than a single probe set, each of which characterized by an individual specificity and sensitivity that differently contribute to gene expression value, a gene expression mean value was calculated after weighting each probe-set for its own sensitivity and specificity score. Specifically, each expression value (already log 2 transformed in the original dataset) was multiplied for the semi sum of sensitivity and specificity scores of the corresponding probe set. Given the patient-matched samples study design, all statistical analyses were performed considering a regression model for repeated measures with random effect, and the differential gene expression was evaluated by t-test on regression coefficients. To correct for multiple testing, the false discovery rate (FDR) was used [40]. To evidence latent variables accounting for genes correlations, a factor analysis was applied [41] in the following three comparisons: DCIS and paired ADH, DCIS and paired HN, and ADH and paired HN. 
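Before continuing with the factor-analysis details, a minimal sketch of one reading of the probe-set weighting and the Benjamini–Hochberg FDR step described above (Python rather than the original R code; the probe-set scores, expression values, and p-values are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical probe sets mapping to genes, with log2 expression values and
# sensitivity/specificity scores; each value is weighted by (sens + spec) / 2.
probes = pd.DataFrame({
    "gene":        ["ESR1", "ESR1", "CD24"],
    "log2_expr":   [8.2, 7.6, 9.1],
    "sensitivity": [0.9, 0.7, 0.8],
    "specificity": [0.8, 0.6, 0.9],
})
probes["weighted"] = probes["log2_expr"] * (probes["sensitivity"] + probes["specificity"]) / 2
gene_expr = probes.groupby("gene")["weighted"].mean()
print(gene_expr)

# Benjamini-Hochberg FDR adjustment of per-gene p-values from the paired tests.
def bh_fdr(pvals):
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]     # enforce monotonicity
    adj = np.empty_like(ranked)
    adj[order] = np.clip(ranked, 0, 1)
    return adj

print(bh_fdr([0.001, 0.02, 0.03, 0.4]))
```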
The number of retained factors was selected according to the scree test [42]. To facilitate the interpretation of the factors, varimax rotation was applied. Loading values lower than 0.3 were not considered. All analyses were performed using the open-source software R 2.11.1 and the HDMD package (http://www.R-project.org). Results and Discussion Genes found differentially expressed (P < 0.05) between DCIS and HN or ADH and HN are reported in Table 1. Specifically, 47 of the 172 selected genes were found differentially expressed between DCIS and HN (11 with an estimated FDR < 0.01) and 28 were found differentially expressed between ADH and HN (only one with an estimated FDR < 0.01). Notably, 24 of the 28 genes found differentially expressed between ADH and HN were found differentially expressed (in a similar manner) also between DCIS and HN. The persistence of this "core" of genes, dysregulated in a similar manner in both invasive breast cancer precursors, seems to support the hypothesis of ADH as the direct precursor of DCIS. In fact, in agreement with the proposed multistep process, DCIS showed an increased number of differentially expressed genes with respect to ADH. With respect to normal tissue, both DCIS and ADH showed the overexpression of ESR1, coding for the estrogen receptor; CD24, coding for a mucin-like cell-adhesion molecule positively associated with a terminally differentiated luminal phenotype [43,44]; GATA3, coding for a transcription factor involved in mammary gland morphogenesis [35,36]; CLDN7, EPCAM, and TJP3, coding for tight junction components; and CDC42 and TIAM1, coding for two small GTPase family members involved in cell polarity and apical junction complex formation. Concomitantly, both breast cancer precursors showed the underexpression of the EGFR gene, whose expression is generally negatively associated with ESR1 expression, and of KRT5, KRT14, and KRT17, all coding for cytokeratins associated with a basal phenotype. On the whole, this pattern of expression clearly indicates that DCIS and ADH are both characterized by a terminally differentiated luminal phenotype. Since all specimens were derived from patients with an ER-positive ductal carcinoma, it is conceivable that the establishment of an estrogen-dependent phenotype, in response to estrogens present in the microenvironment, is a very early event in the tumorigenic process. Such a finding is in agreement with the model for breast cancer development proposed by Dontu et al. [45], according to which ER-positive cancers should derive from transiently amplifying ER-positive progenitor cells. Having escaped proliferation control as a consequence of genetic and epigenetic alterations in genes involved in cell-fate decision, these ER-positive progenitor cells should generate cells constitutively expressing the estrogen receptor. Once established in ADH, this terminally differentiated luminal phenotype seems to consolidate in DCIS, as demonstrated by the presence, among the genes exclusively expressed in DCIS, of KRT18 and KRT19 (coding for luminal-associated cytokeratins) and DLG1, DLG3, and MPP5 (coding for cell polarity complex components). With respect to both tumor precursors, histologically normal tissue expressed genes coding for some transcription factors involved in EMT (SNAI2 and TGFBR3) and cell-fate decision (FOXC1 and JAG2), and for stemness-associated features (ABCG2).
This finding is more evident considering the 47 genes differentially expressed between DCIS and normal tissue among which we found some other genes involved in the TGF-β-mediated EMT (AKT3, ID4 and TGFBR2), and cell-fate decision and self-renewal (NOTCH4 and PROM1). Such an apparently paradoxical finding, that is, the positive association between histologically normal tissues and EMT-and stemness-related genes (more likely expected in transformed tissues) is not surprising since we observed a similar behavior in an independent cases series composed of primary breast cancers and corresponding patient-matched normal tissue (submitted paper) and in normal pleura with respect to pleural mesothelioma [46]. In agreement with the physiological remodeling of the mammary gland, such a finding should support the persistence of resident stem/progenitor cells in normal tissue [47,48]. Factor analysis, applied to investigate the latent variables intrinsically associated with the selected 172 genes, corroborated these findings and highlighted some other interesting interrelations. In agreement with the results provided by tpaired test, factor analysis indicated that in both precursors (Figure 1(a) for DCIS and Figure 2(a) for ADH), the first factor (F1) was principally characterized by genes associated with an estrogen-dependent epithelial phenotype. In fact, within the genes with a positive loading value on F1, we found those coding for hormone steroid receptors (AR and ESR1) and transcriptional coactivator for steroid and nuclear hormone receptors (NCOA1), for tight (EPCAM, MAGI1, MAGI2, MPDZ, TJP1, TJP2 and TJP3) and adherens (CTNND1, PFN1 and PVRL2) junction components, and for small GTPase family members involved in epithelial cell polarization processes (CDC42 and RHOA). In addition, we found some genes associated with cell-fate decision: BMI-1, coding for a member of Polycomb group required to maintain the transcriptionally repressive state of many genes throughout embryo development [49] and adult tissues differentiation including mammary gland [50]; FDZ4 and FDZ6, two members of the frizzled gene family involved in β-catenin-signaling transduction and intercellular transmission of polarity information in differentiated tissues [51]. Finally, probably associated with the adaptation of DCIS and ADH to hypoxic stress caused by the unbalance between cell proliferation and oxygen supply, we also found JAM2, JAM3, VEGFA, VEGFB, and VEGFC, all genes involved in endothelial cell proliferation. The second factor (F2) identified by factor analysis (Figure 1(b) for DCIS and Figure 2(b) for ADH) was conversely characterized by the positive loading value of several genes coding for mesenchymal markers (EGF, EGFR, KRT5, KRT6B, KRT14 and KRT17), and the negative loading value of GATA3, the gene coding for the transcription factor driving the luminal morphogenesis of the mammary gland [35,36]. In addition, F2 was characterized by the presence of some genes coding for proteins playing a critical role in cell-fate decision and cell-renewal (ALDH1A3, FOXC1, NOTCH2, PROM1, and SOX9) [53]. Notably, when applied to ADH subgroup, factor analysis identified CYP19A1 as included in the panel of genes characterizing the first factor. 
This gene encodes cytochrome P450 (better known as aromatase), the enzyme that catalyzes the conversion of circulating androstenedione to estrone or of testosterone to estradiol, and its presence supports the hypothesis of a very early activation of an autocrine production of estrogen. Furthermore, the concomitant presence of CYP19A1, AR, ESR1, and NCOA1 should provide a transcriptomic confirmation of the clinical evidence that high androgen levels may have a detrimental effect on breast carcinogenesis and progression, owing to a persistent local estrogen production that incessantly stimulates epithelial cell proliferation [54][55][56]. When applied to the DCIS subgroup, factor analysis seems to indicate a consolidation of this estrogen dependence, as suggested by the additional presence of PGR, the gene coding for the progesterone receptor, whose expression is under estrogenic control. Conclusions Elucidating the initial steps of breast tumorigenesis is of paramount importance to allow an even earlier diagnosis and, consequently, an adequate treatment strategy aimed at preventing the malignant transformation of preneoplastic alterations. This is of particular importance for DCIS because of its high incidence [1] and its propensity to progress to invasive breast cancer if untreated [6,7]. The experimental evidence accumulated so far has clearly indicated that at the basis of the neoplastic transformation of mammary epithelial cells there are the loss of apicobasal epithelial cell identity and the acquisition of a functional mesenchymal morphology. Therefore, we investigated the pattern of expression of a selected panel of genes associated with epithelial cell identity (i.e., cell polarity and the apical junction complex) or involved in TGF-β-mediated EMT and cell-fate decision in a series of DCIS and corresponding patient-matched normal tissue. In addition, we compared the DCIS profile with that of patient-matched ADH to investigate the hypothesis according to which breast cancer progression is a multistep process involving a continuum of changes from the normal phenotype through hyperplastic lesions, carcinoma in situ, and invasive carcinoma [21]. Statistical analysis seems to support this hypothesis, because it identified a "core" of genes, mainly associated with a terminally differentiated luminal phenotype, differentially expressed in both precursors with respect to the corresponding normal tissue. Notably, these alterations in gene expression did not result in a progressive mesenchymal transition but rather in a terminally differentiated luminal phenotype, in agreement with the model according to which ER-positive invasive breast cancer derives from ER-positive progenitor cells [45]. The constitutive expression of ER should make ADH- and DCIS-forming cells able to exploit the proliferative stimulus induced by estrogens, whereas the establishment of an autocrine production of estrogens, through androgen conversion, should provide an additional selective advantage. The detrimental effect of such continuous estrogen stimulation is corroborated by the observation that all patients included in the present study developed an invasive ER-positive ductal carcinoma.
Experimental evidence supporting the hypothesis that androgen conversion may be involved in DCIS development and progression has been provided by a recent study in which the intratumoral concentrations of estradiol and 5α-dihydrotestosterone (DHT), and the expression of some sex steroid-producing enzymes, including aromatase, were evaluated in DCIS specimens [57]. The study clearly demonstrated that the aromatase expression level was significantly higher in DCIS than in nonneoplastic tissue, suggesting a self-sustaining process adopted by DCIS. Taken together, these findings provide transcriptomic confirmation that, at least for the panel of genes considered in the present study, ADH and DCIS are part of a multistep tumorigenic process, and they strongly raise the need for the regulation, perhaps using aromatase inhibitors, of the intratumoral and/or circulating concentration of biologically active androgens in DCIS patients, in order to promptly hamper abnormal estrogen production and block estrogen-induced cell proliferation [58]. There is no doubt that the present in silico study suffers from the limitation common to the majority of studies involving gene expression profiling, that is, the lack of validation, at the protein level, of the modulations observed at the mRNA level. In fact, it is well known that mRNA transcript levels do not always reflect protein expression. However, the immunohistochemical data provided in The Human Protein Atlas largely confirm the differential gene expression that we observed between histologically normal and cancerous tissue. For instance, when we considered the 11 genes with an estimated FDR < 0.01 (Table 1, in bold), we found that the protein expression of normal breast (glandular cells) and breast cancer tissue mainly paralleled the differential mRNA expression we observed in our dataset, making our preliminary findings more reliable and worthy of further investigation.
Authors' Contribution
D. Coradini and P. Boracchi contributed equally to this paper.
Holographic Models for QCD in the Veneziano Limit
We construct a class of bottom-up holographic models with physics comparable to that expected from QCD in the Veneziano limit of large N_f and N_c with fixed x = N_f/N_c. The models capture the holographic dynamics of the dilaton (dual to the YM coupling) and a tachyon (dual to the chiral condensate), and are parametrized by the real parameter x, which can take values within the range 0 ≤ x < 11/2. We analyze the saddle-point solutions and draw the phase diagram at zero temperature and density. The backreaction of flavor on the glue is fully included. We find the conformal window for x ≥ x_c, the QCD-like phase with chiral symmetry breaking at x < x_c, as well as Efimov-like saddle points. By calculating the holographic beta functions, we demonstrate the "walking" behavior of the coupling in the region near and below x_c.
Introduction
Holographic techniques have been used recently in order to understand the strong dynamics of gauge theories. QCD is one of the obvious targets of such a program, but strongly coupled gauge theories may also emerge in other contexts, namely in physics beyond the Standard Model (non-perturbative electroweak symmetry breaking is an example), as well as in condensed matter systems. An interesting mild generalization of QCD involves SU(N_c) YM coupled to N_f Dirac fermions transforming in the fundamental representation (which we will still call quarks). This is a theory that can be studied in the (N_c, N_f) plane. As usual, simplifications arise in the large-N_c limit. The standard 't Hooft large-N_c limit [1] takes N_c → ∞ keeping N_f and λ = g²_YM N_c finite. In this limit, the effects of the quarks are suppressed by powers of N_f/N_c → 0, and it therefore corresponds to the "quenched" limit. In particular, interesting dynamical effects driven by the presence of the quarks, such as the conformal window and exotic phases at finite density, are not expected to be visible in the 't Hooft large-N_c limit. In [2] Veneziano introduced an alternative large-N_c limit, in which N_c → ∞ and N_f → ∞ with x = N_f/N_c and λ = g²_YM N_c kept fixed (1.1), in order to make the chiral U(1) anomaly visible to leading order in the 1/N_c expansion. This is the large-N_c limit we will study in this paper. There are several interesting issues that are accessible in the Veneziano limit.
• The "conformal window" with an IR fixed point. The window extends from x = 11/2 to smaller values of x, and includes the Banks-Zaks (BZ) weakly coupled region as x → 11/2 [3].
• The phase transition at a critical x = x_c from the conformal window to theories with chiral symmetry breaking in the IR.
• A transition region near and below x_c, where the theory is expected to exhibit "walking" behavior. The theory flows towards the IR fixed point but misses it, ending up with chiral symmetry breaking, so that the coupling constant varies slowly over a long range of energies.
The point x = x_c is the point of a quantum phase transition where the theory passes from an IR conformal theory (above x_c) to a theory with a non-trivial chiral condensate (below x_c). Such transitions were termed conformal phase transitions in [6], and have recently been argued to be due to the fusion of a UV and an IR fixed point [7]. The scaling of the condensate is similar to that of the 2D Berezinskii-Kosterlitz-Thouless (BKT) transition [8], and is also known as Miransky scaling [9]. Several such quantum phase transitions have been described recently in holographic theories in [10,11,12].
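Since Miransky/BKT scaling recurs throughout the paper, it may help to recall its generic form. The following is a schematic sketch rather than the paper's own expressions; the numerical prefactors are model-dependent assumptions. Miransky scaling means that, as x approaches x_c from below, the condensate is exponentially suppressed with a square-root singularity in the exponent,
\[
\frac{\sigma}{\Lambda_{\rm UV}^{3}} \;\sim\; \exp\!\left(-\frac{K}{\sqrt{x_c - x}}\right),
\]
and in holographic realizations it typically arises when the bulk mass of the scalar dual to the condensate violates the Breitenlohner-Freedman bound, m²ℓ² < −4, the exponent being controlled by 1/√(−m²ℓ² − 4), which vanishes as the bound is saturated at x = x_c.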
The location of the lower edge of the conformal window is determined by nonperturbative dynamics. Several estimates of the value of x_c, and of the critical value of N_f at finite N_c, have been put forward using different methods [13,14,15], and the boundary of the conformal window is also being studied actively on the lattice (see, e.g., [16]). The walking region near x_c, as well as the value of x_c itself, have been of interest for a while due to their potential relevance for the realization of walking technicolor [17]. Technicolor has been used as a generic name for non-perturbative electroweak symmetry breaking mimicking the one induced by QCD [18]. Although non-perturbative effects in a new strongly coupled gauge theory can induce electroweak symmetry breaking, generating the quark and lepton masses is an extra problem. A new interaction (extended technicolor) is usually invoked to generate the requisite couplings. However, their magnitude (and therefore the Standard Model masses) is controlled by the dimension of the scalar operator that breaks the electroweak symmetry. In free field theory its dimension is 3, as it is a fermion bilinear. However, for the SM masses to have realistic values the dimension must be reduced to at least around 2, i.e., the anomalous dimension of the operator should be at least one. This is indeed expected to be generated by a "walking" theory [13,19]. Notice that the coupling between the technicolor sector and the Standard Model may decrease the dimension substantially [20]. There are issues that so far have made such non-perturbative approaches appear to be in conflict with data, like the value of the S parameter [21]. It was argued that in cases where the strongly coupled theory is near a conformal transition the S parameter can be quite different, potentially evading the experimental constraints [23]. (Recent studies that extrapolate from the Banks-Zaks region suggest that the modification of the S parameter near the conformal window may be modest [22].)
A theory that can be compared is N = 1 supersymmetric QCD with N_f flavors. The ground states of this theory have been found by Seiberg [29], and we understand several issues associated with its low-energy dynamics, including Seiberg duality. Such a theory already gives several important hints on the structure expected in non-supersymmetric QCD [30]. Defining again x = N_f/N_c, we have the following regimes:
• At x = 0 the theory has confinement, a mass gap, and N_c distinct vacua associated with the spontaneous breaking of the leftover R-symmetry Z_{N_c}.
• At 0 < x < 1, the theory has a runaway ground state.
• At x = 1, the theory has a quantum moduli space with no singularity. This reflects confinement with chiral symmetry breaking.
• At x = 1 + 1/N_c, the moduli space is classical (and singular). The theory confines, but there is no chiral symmetry breaking.
• At 1 + 2/N_c < x < 3/2, the theory is in the non-abelian magnetic IR-free phase, with the magnetic gauge group SU(N_f − N_c) IR free.
• At 3/2 < x < 3, the theory flows to a CFT in the IR. Near x = 3 this is the Banks-Zaks region, where the original theory has an IR fixed point at weak coupling. Moving to lower values of x, the coupling of the IR SU(N_c) gauge theory grows. However, near x = 3/2 the dual magnetic SU(N_f − N_c) theory is in its Banks-Zaks region, and provides a weakly coupled description of the IR fixed-point theory.
• At x > 3, the theory is IR free.
The region 3/2 < x < 3 is comparable to the conformal window that is expected in (non-supersymmetric) QCD. Indeed, the IR coupling of the original gauge theory becomes stronger as x decreases, but for x above 3/2 a new set of IR states becomes weakly coupled, namely the magnetic gluons and quarks. These states have been interpreted as the ρ mesons and their supersymmetric avatars [31]. They are massless and weakly coupled in this region. The regime 1 + 2/N_c < x < 3/2 does not seem to have an analogue in QCD. The IR theory is again a non-abelian gauge theory, therefore trivially scale invariant in the far IR, but also free. It was suggested already in [32,33] that the presence of a conformal window in N = 1 supersymmetric QCD is associated with the violation of the Breitenlohner-Freedman (BF) bound. A recent attempt to describe related physics was made in [34].
Bottom-up models for QCD in the quenched approximation
To construct a bottom-up holographic model for QCD in the Veneziano limit we need to first understand pure YM. The simplest bottom-up model for pure YM in four dimensions is the hard-wall model, first introduced in [35]. Despite its simplicity it could capture a few qualitative features of the strong interaction. More sophisticated models accounted for the running of the YM coupling constant, therefore incorporating the dilaton into the gravitational action [36,37]. By simply adjusting a dilaton potential they could exhibit many of the properties of large-N_c YM, including confinement, a mass gap, asymptotically linear trajectories, and realistic glueball spectra at zero temperature. Moreover, they fared rather well at finite temperature [38], and after the tuning of two phenomenological parameters in the dilaton potential [39], they could agree with lattice data both at zero and at finite temperature [40]. The properties of IHQCD at finite temperature were further explored in [41]. Alternative Einstein-dilaton models exhibiting a crossover rather than a first-order deconfining transition, and matching YM finite-temperature dynamics, were also developed in [42]. Such bottom-up models were used to compute transport properties of YM, like the bulk viscosity and the diffusion properties of heavy quarks [43]. Backgrounds having an IR fixed point or a "walking" region, where the system flows close to a fixed point, were studied within the IHQCD model in [15,44,45,46]. In these studies the fixed point was introduced via the input beta function, without proper modeling of the dynamics of the quarks, even though N_f/N_c was large. To go beyond YM a new ingredient is needed, namely the flavor branes. An important field in this context is the one dual to the order parameter for chiral symmetry breaking, a complex bifundamental field T. In the hard-wall [25] and soft-wall [47] models for mesons, such a field was added using a quadratic action, and chiral symmetry breaking proceeded by giving such a field a vev by hand. In [48] it was remarked that, as the flavor sector of gauge theories in string theory arises from D-brane-antibrane pairs, the bifundamental field T could naturally be identified with the brane-antibrane tachyon field that had been studied profusely (around flat space) in string theory by Sen and others [49]. The non-linear action proposed by Sen [50] could therefore be used as a well-motivated starting point in order to study the holographic dynamics of chiral symmetry breaking.
Several general features of this approach were explored in [48], • Chiral symmetry breaking is dynamical and is induced/controlled by the tachyon Dirac-Born-Infeld (DBI) action. • Confining asymptotics of the geometry were shown to trigger chiral symmetry breaking. • The Sen DBI tachyon action induces linear Regge trajectories or mesons. • The Wess-Zumino (WZ) terms of the tachyon action, computed in string theory [51]- [53], produce the appropriate flavor anomalies, include the axial U (1) anomaly and η ′ -mixing, and implement a holographic version of the Coleman-Witten theorem. In the context above, the analysis was done in the quenched approximation: the flavor sector does not backreact on the metric and dilaton. Similar results were also obtained by considering tachyon condensation in the Sakai-Sugimoto model [54]. In [55] an implementation of these ideas was performed by choosing a concrete confining background, that is simple and asymptotically AdS. This was the Kuperstein-Sonnenschein background, [56], with a constant dilaton and an AdS 6 soliton. In this background the tachyon DBI action was analyzed with the following results • The model incorporates confinement in the sense that the quark-antiquark potential computed with the usual AdS/CFT prescription confines. Moreover, magnetic quarks are screened. • The string theory nature of the bulk fields dual to the quark bilinear currents is readily identified: they are low-lying modes living in a brane-antibrane pair. • Chiral symmetry breaking is realized dynamically and consistently, because of the tachyon dynamics. The dynamics determines the chiral condensate uniquely a s function of the bare quark mass. • The mass of the ρ-meson grows with increasing quark mass, or, more physically, with increasing pion mass. • By adjusting the same parameters as in QCD (Λ QCD , m ud ) a good fit can be obtained of the light meson masses. Holographic models in the Veneziano limit To construct V-QCD we will put together the experience from IHQCD and the tachyon implementation in the quenched approximation. Putting the two together we will see that, under reasonable assumptions, we obtain a phase diagram which is qualitatively in agreement with what to expect from QCD in the Veneziano limit. Moreover we will verify that changes of the bulk tachyon and dilaton potentials that are mild give the same qualitative physics. In this sense we can state confidence in our results. The bulk action we will consider is with λ the 't Hooft coupling (exponential of the dilaton φ) and the flavor action is To find the vacuum (saddle point) solution we must set the gauge fields A L,R µ to zero, as they are not expected to have vacuum expectation values at zero density. We also take the tachyon field T to be diagonal and suppressed the WZ terms as they also do not contribute to the vacuum solution. The pure glue potential V g has been determined from previous studies, [39] and we will use the same here. The tachyon potential V f (λ, T ) must satisfy some basic properties, that are determined by the dual theory or general properties of tachyons in string theory: (a) To provide the proper dimension for the dual operator near the boundary (b) To exponentially vanish like log V f ∼ −T 2 + · · · for T → ∞. The function h(λ) captures the transformation from the string frame to the Einstein frame in five dimensions and will be chosen appropriately. 
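For concreteness, the two pieces of the bulk action referred to above are commonly written in the following schematic form; this is a sketch under common V-QCD conventions, and the normalizations and the precise arguments of the potentials are assumptions rather than quotations of the original equations (1.2), (1.3):
\[
S_g = M^3 N_c^2 \int d^5x\,\sqrt{-g}\,\Big[R - \tfrac{4}{3}\,\frac{(\partial\lambda)^2}{\lambda^2} + V_g(\lambda)\Big],
\qquad
S_f = -\,M^3 N_c N_f \int d^5x\;V_f(\lambda,T)\,\sqrt{-\det\!\big(g_{ab} + h(\lambda)\,\partial_a T\,\partial_b T\big)},
\]
with the total action S = S_g + S_f. The flavor term is weighted by x = N_f/N_c relative to the glue term, which is why its backreaction survives, and must be kept, in the Veneziano limit.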
As with IHQCD, we will arrange that the theory is logarithmically asymptotically AdS, and we will implement the two-loop β-function plus the one-loop anomalous dimension for the chiral condensate. Although the geometrical picture is not expected to be reliable near the boundary, the renormalization group (RG) flows that emerge are reliable at least in the IR. The UV boundary conditions we choose can be thought of as a convenient way of anchoring the theory in the UV. We can always define a finite cutoff and evolve the theory from there into the IR. We first analyze the fixed points of the bulk theory. Choosing a potential that implements the Banks-Zaks fixed point, the fixed point exists for a range of the parameter x. We will make choices where this is the whole range, 0 < x < 11/2. At such a fixed point the dilaton is constant and the tachyon vanishes identically. We have also checked that choices of potential for which the fixed point exists only for x* < x < 11/2 have qualitatively similar physics. We define appropriate β-functions for the YM coupling and the quark mass. We then rewrite the equations, following [36], as first-order equations that specify the flow of the couplings, as well as non-linear first-order equations that determine the β-functions in terms of the potentials that appear in the bulk action (1.2), (1.3). We can calculate the dimension of the chiral condensate in the IR fixed-point theory from the bulk equations. We find that it decreases monotonically with x for reasonably chosen potentials. It crosses the value 2 at x = x_c, where x_c corresponds to the end of the conformal window as argued in [7]. We make the following observations, which are relevant for technicolor studies:
• The lower edge of the conformal window x_c lies in the vicinity of 4. Requiring the holographic β-functions to match those of QCD in the UV, we find quite generally that x_c ≈ 4, which is in good agreement with other estimates [13,14,15].
• The fact that the dimension of the chiral condensate at the IR fixed point approaches two (and the anomalous dimension approaches unity) as x → x_c is in line with the standard expectation from field theory approaches [13,19]. It is also to a large extent independent of the details of the model.
It is important to stress that in the full analysis of this paper the backreaction of the flavor sector on the glue sector is fully included. This is very important for the "walking" region, in the vicinity of x = 4, where we expect the backreaction to be important. Indeed, we do not expect to see a conformal phase transition in the quenched limit of QCD. Apart from x, there is a single parameter in the theory, namely m/Λ_QCD, where m is the UV value of the (common) quark mass. For each value of x, we solve the bulk equations with fixed sources corresponding to fixed m and Λ_QCD, and determine the vevs so that the solution is "regular" in the IR. The notion of regularity is tricky even in the case of IHQCD (pure glue), as there is a naked singularity in the far IR. For the dilaton this has been resolved in [36,38]. For the tachyon the notion of regularity is different and has been studied in detail in [55]. Implementing the regularity condition in the IR and solving the equations from the IR to the UV (this has been done mostly numerically), there is a single parameter that determines the solutions as well as the UV coupling constants and vevs, and this is a real number T_0 controlling the value of the tachyon in the IR. This reflects the single dimensionless parameter m/Λ_QCD of the theory.
For different values of x and m we find the following qualitatively different regions:
• When x_c ≤ x < 11/2 and m = 0, the theory flows to an IR fixed point. The IR CFT is weakly coupled near x = 11/2 and strongly coupled in the vicinity of x_c. Chiral symmetry is unbroken in this regime (this is known as the conformal window).
• When x_c ≤ x < 11/2 and m ≠ 0, the tachyon has a non-trivial profile, and there is a single solution with the given source, which is "regular" in the IR.
• When 0 < x < x_c and m = 0, there is an infinite number of regular solutions with a non-trivial tachyon profile, and a special solution with an identically vanishing tachyon and an IR fixed point.
• When 0 < x < x_c and m ≠ 0, the theory has vacua with a nontrivial profile for the tachyon. For every non-zero m, there is a finite number of regular solutions that grows as m approaches zero.
In the region x < x_c where several solutions exist, there is an interesting relation between the IR value T_0 controlling the regular solutions and the UV parameters, namely m. This is determined numerically, and a relevant plot describing the relation between m and T_0 at fixed x is in Fig. 5 (left). As m and −m are related by a chiral rotation by π, we can take m ≥ 0. The solutions are characterized by the number of times n the tachyon field changes sign as it evolves from the UV to the IR. For all values of m there is a single solution with no tachyon zeroes. In addition, for each positive n there are two solutions which exist within a finite range 0 < m < m_n, where the limiting value m_n decreases with increasing n, and one solution for m = 0. In particular, for large enough fixed m, we find that only the solution without tachyon zeroes exists. For m ≠ 0, out of all regular solutions, the "first" one, without tachyon zeroes, has the smallest free energy. The same is true for m = 0, namely the solution with a non-trivial tachyon without zeroes is energetically favored over the solutions with positive n, as well as over the special solution with identically vanishing tachyon, which appears only for m = 0 and would leave chiral symmetry unbroken. Therefore, chiral symmetry is broken for x < x_c. The multiplicity of regular solutions is closely related to the regime where the IR dimension of the chiral condensate is smaller than 2, and to the associated Efimov vacua. They seem to be associated with the fixed-point theory that here exists for all values of x but is not reachable by flowing from the UV of QCD for x < x_c. On the other hand, the presence of a fixed-point theory in the landscape of possible theories does not seem necessary for the appearance of multiple saddle points. Indeed, in [55], which employed the quenched approximation and where no such fixed points exist, a second saddle point was found that provided a regular tachyon solution. It was verified, however, that this second saddle point was perturbatively unstable, as meson fluctuations were tachyonic. In the region just below x_c we find Miransky or BKT scaling for the chiral condensate: as x → x_c, the condensate is suppressed as exp(−K̂/√(x_c − x)), up to normalization conventions. For x ≥ x_c, let m_IR(x) be the mass of the tachyon at the IR fixed point and ℓ_IR(x) the IR AdS radius. The coefficient K̂ is then fixed in terms of these quantities at the edge of the conformal window. The construction of the holographic V-QCD model opens the road to addressing several interesting questions. 1. The calculation of the spectrum of mesons and glueballs. This is in principle a straightforward, albeit tedious, exercise [57].
In the Veneziano limit, mixing is expected between glueballs and mesons to leading order in 1/N c . This will affect the 0 ++ glueball that will mix with the 0 ++ flavor-singlet σ-mesons. On the other hand the 2 ++ glueballs, the 1 −− and 1 ++ vector mesons and the 0 +− mesons do not mix, with the exception of the flavor singlet 0 +− meson (analogous to η ′ ) that will mix with the 0 +− glueball due to the axial anomaly. A particularly interesting question here is the behavior of the mass of the lightest 0 ++ state (the technidilaton, [58]) as x → x c . 2. The structure and phase diagram of the theory at finite temperature, [59]. In the quenched approximation [55] the restoration of chiral symmetry was seen above the (first order) deconfinement transition. The expected structure is not clear here and several options exist. 3. The calculation of the energy loss of heavy quarks in a quark-gluon plasma with non-negligible percentage of quarks. 4. The construction of the baryon states in this theory and the calculation of their properties. 5. The structure of the phase diagram at finite density and the search for exotic phases namely color superconductivity and color-flavor locking. It is plausible that the setup may provide a model for high-T c superconductors by interpreting the x parameter as a "doping" parameter. The reason is that x controls the IR dimension of the "Cooper pair" associated with a quark-antiquark boundstate, charged under the axial charge. As x decreases, the IR dimension of this operator decreases, and the bound state becomes more and more deeply bound. At x = x c , there is an onset of "axial" superconductivity (at zero temperature), that persists down to x = 0. At finite temperature, this picture suggests that the system might resemble the overdoped regime of strange metals, with x = x c the start of the superconduction dome and x = 0 the optimal doping. The connection between the value of x and doping in real systems may not be so far fetched as the changes in the system associated with the change of carriers, is accompanied by a change in the effective number of flavors of strongly interacting effective degrees of freedom. The structure of this paper is as follows. In Sec. 2 we give a brief review on QCD in the Veneziano limit and its phase structure. In Sec. 3 we review the IHQCD model, and discuss the earlier results on mass spectra and the phase structure at finite temperature. Adding the flavor branes is discussed in detail in Sec. 4. The V-QCD model is finally introduced in Sec. 5. Analysis of the model is started by studying the fixed points in Sec. 6. We go on transforming the equations of motion (EoMs) to equations for the holographic beta functions, and discuss their UV/IR asymptotics and solutions in Sec. 7. In Sec. 8 we analyze the background, in particular how the UV expansions map to perturbation theory of QCD, and where the edge of the conformal window appears. We also construct and present the numerical solutions for the background in the physically interesting regions. In Sec. 9 we find the vacua with lowest free energy and check that they support the expected phase diagram. In Sec. 10 we study the system near but below the conformal window, and show that the chiral condensate, as well as many other observables, obey the BKT scaling law. In particular, we check the scaling by comparing numerical results to formulas, that are derived analytically. Finally, we conclude and summarize the main results in Sec. 11. 
Technical details are presented in Appendices A-H.
QCD in the Veneziano limit
The conventional large-N_c limit of QCD involves a large number of colors, N_c → ∞, but a fixed number of flavors, N_f finite [1]. In this limit, fermion loops are suppressed, and the dominant diagrams are classified by the genus of the associated Riemann surface. There is an alternative large-N_c limit in QCD, in which N_f → ∞ as well, with x = N_f/N_c and the 't Hooft coupling kept fixed (2.1). This was first introduced by Veneziano in [2] in order to have the QCD axial anomaly, of order O(N_c N_f), appear at leading order in the large-N_c expansion. This alternative large-N_c limit is very interesting in order to preserve important effects due to quarks. In the conventional 't Hooft limit such effects are subleading, and this is known as the quenched limit for flavor. Many efforts have been made in the last few years to consider unquenched flavor, in order to estimate the contribution of quarks to the physics of the quark-gluon plasma. Such efforts are summarized in the recent review [60]. The following effects are not easily visible in the conventional 't Hooft limit:
• The "conformal window" with a non-trivial fixed point, which extends from x = 11/2 to smaller values of x. The region x → 11/2 has an IR fixed point while the theory is still weakly coupled, as was analyzed by Banks and Zaks [3].
• It is expected that at a critical x_c the conformal window will end, and for x < x_c the theory will exhibit chiral symmetry breaking in the IR. This behavior is expected to persist down to x = 0. For x > x_c the IR theory is a CFT, at strong coupling that progressively becomes weak as x → 11/2.
• Near and below x_c, there is the transition region to conventional QCD IR behavior. In this region the theory is expected to be "walking", so that the theory flows towards the IR fixed point but misses it, ending up with chiral symmetry breaking. The approach to the fixed point involves a slow variation of the YM coupling constant over a long range of energies. This has been correlated with a nontrivial dimension for the quark mass operator near two, rather than three (the free-field value).
• The existence of this "walking" region makes the theory extremely interesting for applications to strong-coupling solutions of the hierarchy problem (technicolor).
• New phenomena are expected to appear at finite density, driven by strong coupling and the presence of quarks. These involve color superconductivity [4] and color-flavor locking [5].
To discuss the structure expected as a function of the finite ratio x defined in (2.1), we write the two-loop QCD β-function with N_f (non-chiral) flavors in the fundamental representation, expressed in terms of the 't Hooft coupling. The Banks-Zaks region is x = 11/2 − ε with ε ≪ 1 and positive [3]. There we obtain a fixed point of the β-function that is trustable in perturbation theory, as λ* can be made arbitrarily small. The infrared fixed point has properties that are computable in perturbation theory. In particular, the low-lying operators consist of the conserved stress tensor, of Tr[F²], which is now slightly irrelevant, and of the L, R currents, which are still conserved, with the exception of the U(1) axial current, whose conservation is broken by the anomaly.
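The explicit two-loop expressions referenced above read, in standard conventions and keeping only the leading terms in the Veneziano limit; this is a hedged sketch (with λ = g²_YM N_c; scheme conventions may shift the normalization of λ, and the expressions are not quoted from the original):
\[
\mu\frac{d\lambda}{d\mu} = -\,b_0\,\lambda^2 - b_1\,\lambda^3 + \mathcal{O}(\lambda^4),
\qquad
b_0 = \frac{2}{3}\,\frac{11-2x}{(4\pi)^2},
\qquad
b_1 = \frac{2}{3}\,\frac{34-13x}{(4\pi)^4}.
\]
For x slightly below 11/2 one has b_0 > 0 and b_1 < 0, so the perturbative Banks-Zaks fixed point sits at
\[
\lambda_* = -\frac{b_0}{b_1} = \frac{(4\pi)^2\,(11-2x)}{13x-34},
\]
which can indeed be made arbitrarily small as x → 11/2.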
The mass operator ψ̄_L ψ_R now has a dimension slightly smaller than three, as attested by its perturbative anomalous dimension, γ_m = 3 C_F g²/(8π²) + O(g⁴); at large N_c this becomes γ_m = 3λ/(16π²) + O(λ²). One can still perturb this theory by the U(N_f)-invariant mass operator (assuming all quarks have the same mass), and the theory is then expected to flow to the trivial (QCD-like) theory in the IR. It is believed that there is also a value x_c with 0 < x_c < 11/2 such that for x < x_c the theory flows to a trivial theory (with a mass gap) in the IR, with chiral symmetry breaking and physics isomorphic to that of standard YM. For 11/2 > x > x_c, the theory is expected to flow to a non-trivial IR fixed point, and chiral symmetry is expected to remain unbroken, as happens in the BZ region. Generically, the IR theory is strongly coupled, except in the region x → 11/2, where the fixed-point theory is weakly coupled (Banks-Zaks fixed point). For x > 11/2 the theory is IR free.
A step back: Bottom-up models for large-N_c YM
The holographic dual of large-N_c Yang-Mills theory, proposed in [36], is based on a five-dimensional Einstein-dilaton model, whose action contains the Einstein-Hilbert term, a dilaton kinetic term, a dilaton potential, and a Gibbons-Hawking boundary term (a schematic form is recalled after this paragraph); M_p denotes the five-dimensional Planck scale, N_c the number of colors, and K the extrinsic curvature of the boundary appearing in the Gibbons-Hawking term. (Similar models of Einstein-dilaton gravity were proposed independently in [42] to describe the finite-temperature physics of large-N_c YM; they differ in the UV, as there the dilaton corresponds to a relevant operator instead of the marginal case we study here, and the gauge coupling e^Φ asymptotes to a constant instead of zero.) The effective five-dimensional Newton constant is G_5 = 1/(16π M_p³ N_c²), and it is small in the large-N_c limit. Of the 5D coordinates {x_i, r}, i = 0,...,3, the x_i are identified with the 4D space-time coordinates, whereas the radial coordinate r roughly corresponds to the 4D RG scale. We identify λ ≡ e^Φ with the running 't Hooft coupling λ_t ≡ N_c g²_YM, up to an a priori unknown multiplicative factor, λ = κλ_t (this relation is well motivated in the UV, although it may be modified at strong coupling, see [36]; the quantities we will calculate do not depend on the explicit relation between λ and λ_t). The dynamics is encoded in the dilaton potential V(λ) (with a slight abuse of notation, we denote by V(λ) the function V(Φ) expressed in terms of λ ≡ e^Φ). The small-λ and large-λ asymptotics of V(λ) determine the solution in the UV and the IR of the geometry, respectively. For a detailed but concise description of the UV and IR properties of the solutions the reader is referred to Section 2 of [38]. Here we will only mention the most relevant information:
1. For small λ, V(λ) is required to have a power-law expansion of the form V(λ) = (12/ℓ²)(1 + v_1 λ + v_2 λ² + ...). The value at λ = 0 is constrained to be finite and positive, and sets the UV AdS scale ℓ. The coefficients of the other terms in the expansion fix the β-function coefficients for the running coupling λ(E). If we identify the energy scale with the metric scale factor in the Einstein frame, denoted by e^A below, we obtain the explicit map between these coefficients and the perturbative β-function coefficients given in [36].
2. For large λ, confinement and the absence of bad singularities (for a description of the notion of "bad versus good singularities" and their resolution the reader is referred to [61]) require an asymptotic behavior of the schematic form V(λ) ∼ λ^{2Q}(log λ)^P as λ → ∞, for an appropriate range of Q and P. In particular, the values Q = 2/3, P = 1/2 reproduce an asymptotically linear glueball spectrum, m_n² ∼ n, in addition to confinement. We will restrict ourselves to this case in what follows.
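For orientation, the Einstein-dilaton action referred to at the beginning of this subsection is commonly written as follows; this is a sketch assuming standard IHQCD conventions, and the normalization of the dilaton kinetic term in particular is an assumption rather than a quotation of the original:
\[
S = M_p^3 N_c^2 \int d^5x\,\sqrt{g}\,\Big[R - \tfrac{4}{3}(\partial\Phi)^2 + V(\Phi)\Big]
\;+\; 2 M_p^3 N_c^2 \int_{\partial M} d^4x\,\sqrt{h}\,K,
\]
with λ ≡ e^Φ and V the dilaton potential discussed above.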
In [36], the single phenomenological parameters of the potential was fixed by looking at the zero-temperature spectrum, i.e. by computing various glueball mass ratios and comparing them to the corresponding lattice results. The masses are computed by deriving the effective action for the quadratic fluctuations around the background, [62] and subsequently reducing the dynamics to four dimensions. The glueball spectrum is obtained holographically as the spectrum of normalizable fluctuations around the zero-temperature background. In IHQCD the relevant fields are the 5D metric, one scalar field (the dilaton), and one pseudoscalar field (the axion that is subleading in N c ). As a consequence, the only normalizable fluctuations above the vacuum correspond to spin 0 and spin 2 glueballs (more precisely, states with J P C = 0 ++ , 0 −+ , 2 ++ ), each species containing an infinite discrete tower of excited states. We only compare the mass spectrum obtained in our model to the lattice results for the lowest 0 ++ , 0 −+ , 2 ++ glueballs and their available excited states. These are limited to one for each spin 0 species, and none for the spin 2, in the study of [63], which is the one we use for our comparison. This provides two mass ratios in the CP-even sector and two in the CP-odd sector. The glueball masses are computed by first solving numerically Einstein's equations, and using the resulting metric and dilaton to setup an analogous Schrödinger problem for the fluctuations, [36]. The results for the parity-conserving sector are shown in Table 1, and are in good agreement with lattice data for N c = 3. Unlike the various mass ratios, the value of any given mass in AdS-length units (e.g. m 0++ ℓ) does depend on the choice of integration constants in the UV. Therefore its numerical value does not have an intrinsic meaning. However it can be used as a benchmark against which all other dimension-full quantities can be measured (provided one always uses the same UV boundary conditions). On the other hand, given a fixed set of initial conditions, asking that m 0++ matches the physical value (in MeV) obtained on the lattice, fixes the value of ℓ hence the energy unit. The holographic renormalization of such Einstein-dilaton theories is quite intricate as the AdS boundary conditions on the dilaton is unusual (φ → −∞ near the boundary). It has been derived recently in [64]. Finite temperature In the large N c limit, the canonical ensemble partition function of the model just described, can be approximated by a sum over saddle points, each given by a classical solution of the Einstein-dilaton field equations: where S i are the euclidean actions evaluated on each classical solution with a fixed temperature T = 1/β, i.e. with euclidean time compactified on a circle of length β. There are two possible types of Euclidean solutions which preserve 3-dimensional rotational invariance. In conformal coordinates these are: 1. Thermal gas solution, with r ∈ (0, ∞) for the values of P and Q we are using; 2. Black-hole solutions, with r ∈ (0, r h ), such that f (0) = 1, and f (r h ) = 0. In both cases Euclidean time is periodic with period β o and β respectively for the thermal gas and black-hole solution, and 3-space is taken to be a torus with volume V 3o and V 3 respectively, so that the black-hole mass and entropy are finite 8 . The black holes are dual to a deconfined phase, since the string tension vanishes at the horizon, and the Polyakov loop has non-vanishing expectation value. 
On the other hand, the thermal gas background is confining. The thermodynamics of the deconfined phase is dual to the 5D black-hole thermodynamics. The free energy, defined as is identified with the black-hole on-shell action; as usual, the energy E and entropy S are identified with the black-hole mass, and one fourth of the horizon area in Planck units, respectively. The thermal gas and black-hole solutions with the same temperature differ at O(r 4 ): where G and C are constants with units of energy. As shown in [38] they are related to the enthalpy T S and the gluon condensate trF 2 : (3.10) Although they appear as coefficients in the UV expansion, C and G are determined by regularity at the black-hole horizon. For T and S the relation is the usual one, For G the relation with the horizon quantities is more complicated and cannot be put in a simple analytic form. However, as discussed in [38], for each temperature there exist only specific values of G (each corresponding to a different black hole) such that the horizon is regular. At any given temperature there can be one or more solutions: the thermal gas is always present, and there can be different black holes with the same temperature. The solution that dominates the partition function at a certain T is the one with smallest free energy. The free energy difference between the black hole and thermal gas was calculated in [38] to be: For a dilaton potential corresponding to a confining theory, like the one we will assume, the phase structure is the following [38]: 1. There exists a minimum temperature T min below which the only solution is the thermal gas. 2. Two branches of black holes ("large" and "small") appear for T ≥ T min , but the ensemble is still dominated by the confined phase up to a temperature T c > T min 3. At T = T c there is a first order phase transition to the large black-hole phase. The system remains in the black-hole (deconfined) phase for all T > T c . The holographic mode has also been confronted successfully with recent lattice data [40] at finite temperature, [39]. Adding Flavor A number N f of quark flavors can be included in our setup by adding space-time filling "flavor-branes". In this case they are pairs of space-filling D 4 − D 4 branes. To motivate the setup it is important to revisit the low-dimension operators (dimen-sion=3) in the flavor sector and their realization in string theory. At the spin-zero level we have the (complex) mass operatorψ dual to a complex scalar transforming as (N f ,N f ) under the flavor symmetry U (N f ) R × U (N f ) L . At the spin-one level we have the two classically conserved currents , or the anti-D branes, A µ L . The bifundamental scalar T , on the other hand, is the lowest mode of the D−D strings, compatible with its quantum numbers. Its holographic dynamics is dual to the dynamics of the chiral condensate. This is precisely the scalar that in a brane-antibrane system in flat space is the tachyon whose dynamics has been studied profusely in string theory, [50]. It has been proposed that the non-linear DBI-like actions proposed by Sen and others are the proper setup in order to study the holographic dynamics of chiral symmetry breaking, [48]. This dynamics was analyzed in a toy example, [55], improving several aspects of the hard [25], and soft wall models, [47]. We will keep referring to T as the "tachyon", as it indeed corresponds to a relevant operator in the UV. 
The tachyon dynamics is captured holographically by the open string DBI+WZ action, which schematically reads, in the string frame, where the DBI action for the D −D pair is Here T is the tachyon, a complex N f × N f matrix. A L,R µ are the world-volume gauge fields of the U (N f ) L × U (N f ) R flavor symmetry, under which the tachyon is transforming as the (N f ,N f ), a fact reflected in the presence of the covariant derivatives 9 transforming covariantly under as well as the field strengths F L,R = dA L,R − iA L,R ∧ A L,R of the A L,R gauge fields. λ ≡ e Φ = N c e φ is as usual the 't Hooft coupling. We have also used the symmetric trace (≡ Str) prescription although higher order terms of the non-abelian DBI action are not known. It turns out that such a prescription is not relevant for the vacuum structure in the flavor sector (as determined by the classical solution of the tachyon) neither for the mass spectrum. The reason is that we may treat the light quark masses as equal to the first approximation and then in the vacuum, T = τ 1 with τ real, and this is insensitive to non-abelian ramifications. Expanding around this solution, the non-abelian ambiguities in the higher order terms do not enter at quadratic order. Therefore, for the spectrum we might as well replace Str → T r. The WZ action on the other hand is given by 10 : where M 5 is the world-volume of the D 4 -D 4 branes that coincides with the full space-time. Here, C is a formal sum of the RR potentials C = n (−i) 5−n 2 C n , and F is the curvature of a superconnection A. Note also that str in (4.7) stands for supertrace and not symmetric trace. It acts on the space of D andD branes and is defined in Appendix C of [48]. In terms of the tachyon field matrix T and the gauge fields A L and A R living respectively on the branes and antibranes, they are (We will set 2πα ′ = 1 and use the notation of [52]): The curvature of the superconnection is defined as: Note that under (flavor) gauge transformation it transforms homogeneously In [48] the relevant definitions and properties of this supermatrix formalism can be found. By expanding we obtain where Z 2n are appropriate forms coming from the expansion of the exponential of the superconnection. In particular, Z 0 = 0, signaling the global cancelation of 4-brane charge, which is equivalent to the cancelation of the gauge anomaly in QCD. Further, as was shown in [48] This term provides the Stuckelberg mixing between T r[A L µ − A R µ ] and the QCD axion that is dual to C 3 . Unlike the 't Hooft limit, in the Veneziano limit this mixing happens at leading order in 1/N c , [2]. Dualizing the full action we obtain 10 This expression was proposed in [51] and proved in [52,53] using boundary string field theory and where we have set the tachyon to its vev T = τ 1 . This term is invariant under the reflecting the QCD U (1) A anomaly. It is this Stuckelberg term together with the kinetic term of the tachyon field that is responsible for the mixing between the QCD axion and the η ′ . In terms of degrees of freedom, we have two scalars a, ζ and an (axial) vector, A A µ . We can use gauge invariance to remove the longitudinal components of A A . Then an appropriate linear combination of the two scalars will become the 0 −+ glueball field while the other will be the η ′ . The transverse (5D) vector will provide the tower of U (1) A vector mesons. The next term in the WZ expansion couples the baryon density to a one-form RR field C 1 . 
There is no known operator expected to be dual to this bulk form. However its presence and coupling to baryon density can be understood as follows. Before decoupling the N c D 3 branes, its dual form C 2 couples to the U (1) B on the D 3 branes via the standard C 2 ∧ F B WZ coupling. This is dual to a free field, the doubleton, living only at the boundary of the bulk. Once we add the probe D 4 +D 4 branes the free field is now a linear combination of A B and an N f /N c admixture of A V originating on the flavor branes. The orthogonal combination is the baryon number current on the flavor branes and it naturally couples to C 1 . Therefore the C 1 field is expected to be dual to the topological baryon current at the boundary. Finally the form of the last term requires some explanation. By writing Z 6 = dΩ 5 we may rewrite this term as F 0 ∼ N c is nothing else but the dual of the five-form field strength. This term then provides the correct Chern-Simons form that reproduces the flavor anomalies of QCD. Its explicit form in terms of the gauge fields A L,R and the tachyon was given in equation (3.13) in [48]. The action as described is based on the flat space Sen action for the D −D braneantibrane system. In the presence of curvature and other non-trivial background fields, like the dilaton we expect corrections to the DBI action. Such corrections may affect the tachyon potential as well as the kinetic terms of the vectors and the tachyon. Some generic properties are expected to remain though, and these include the tachyonic nature of the scalar near the AdS boundary and the exponential asymptotics of the potential at large T . The bottom-up models At x = N f Nc = 0 the IHQCD model, [36], is described by the action with λ the 't Hooft coupling (exponential of the dilaton) and a potential that has the following asymptotics. At finite x we must add the flavor sector. For the vacuum structure, and with all masses of the quarks being equal it is enough to add the U (1) part of the tachyon DBI action, where we have set the gauge fields to zero. The total action is S = S g + S f . Note that the overall sign of the DBI action is negative. The function V g and its asymptotics has been discussed in detail in [36]. We will consider it known, and when needed we will use the form that was in agreement with YM data, [39]. The tachyon potential V f (λ, T ) should satisfy some basic principles. For flat space D-branes, V s ∼ 1 λ e −µ 2 T 2 . In our case, near the boundary, where T → 0, λ → 0, we expect, in analogy with V g , a regular series expansion with V 0,1 (λ) having regular power series expansions in λ. As we will see later, the functions V 0,1 (λ) may be mapped into the perturbative β-functions for the gauge coupling constant, and the anomalous dimension of the quark mass operator. Near the condensation point, T → ∞ we expect the potential V f to vanish exponentially. This is based on very general arguments due to Sen that guarantee that the brane gauge fields disappear beyond that point. Finally the function h(λ, T ) was introduced to accommodate the fact that the action in (5.3) is written in the Einstein frame. In flat space, this factor is unity in the string frame but becomes nontrivial (h ∼ λ − 4 3 ) in the Einstein frame. The equations of motion Collecting the action of the glue and flavor sectors together, (5.5) We shall take the following Lorentz-invariant Ansatz for the metric: In our Ansatz the warp factor A, the scalar λ and the tachyon T depend only on the radial coordinate r. 
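For concreteness, a standard Lorentz-invariant Ansatz of the type described above is the following sketch (in conformal coordinates; signature and coordinate conventions are assumptions here):
\[
ds^2 = e^{2A(r)}\left(dr^2 + \eta_{\mu\nu}\,dx^\mu dx^\nu\right),
\qquad \lambda = \lambda(r), \qquad T = T(r),
\]
with the boundary at r = 0 and the warp factor e^{A(r)} playing the role of the energy scale of the dual theory, as discussed below.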
The Einstein equations take the form: where T g ab and T f ab are the energy momentum tensors of the glue and flavor sectors, respectively. These equations translate into where primes stand for r-derivatives. For x = 0 these equations agree with [36]. Finally, the equations of motion for the dilaton and the tachyon are given by: Conformal fixed-point solutions There are solutions to the equations above that are conformal, and are related to the fixed points of an "effective potential". To find them we must set λ ′ = T ′ = 0, and λ ′′ = T ′′ = 0 in the equations (5.10), (5.11) which imply that we must be at a critical point of the effective potential, For any solution λ * , T * of the above conditions, we obtain an AdS 5 space with 12 as is obvious from equations (5.8) and (5.9). When V f has the standard dependence on the tachyon [50], there are two solutions to the condition ∂ T V eff = 0, namely T = 0 (chiral symmetry unbroken) and T → ∞, (chiral symmetry broken). • T = 0. In this case the second condition of extremality (6.1) is ∂ λ (V g (λ)−xV f (λ, 0)) = 0. In the UV, λ → 0, this potential is constructed to emulate the perturbative βfunction, and therefore has a free-field theory fixed point. This is the UV theory. In the IR, it will also have a fixed point at λ = λ * . In the BZ region λ * ≪ 1. A priori there are two possibilities. We will discuss these two options later on in this paper. • T → ∞. In this case as lim T →∞ V f (λ, T ) = 0 we obtain that the second extremality condition is ∂ λ V g = 0. This is equivalent to the existence of a fixed point in large-N YM theory. It has been argued however in [36] that this is not true. Therefore, there is no fixed point with T → ∞. Holographic β-functions In holographic theories we may define non-perturbative β-functions that capture the dependence of coupling constants on the RG scale. This concept was developed and used first in [36], in order to explore the physics of Einstein dilaton theories and their holographic relation to YM theory. In particular in [36] it was shown that the β function is intimately related to the generalized superpotential. It was shown later in [38] that such defined β-functions are indeed related to the quantum breaking of scale invariance, and appear in the trace of the stress-tensor. Such relations were shown in full generality in [64] and have been confirmed also in [65]. To define the β-functions we need a notion of energy scale. At the two-derivative level, such a function is the scale factor e A , and indeed near the AdS boundary it can be identified as the energy scale. It remains always a decreasing function, and becomes zero in the ultimate IR. It can therefore be taken as the energy scale in the whole of the bulk space. We therefore define the β-function and "anomalous" dimension γ as 11 The equations of motion provide equations for the β-functions. To obtain them we must convert radial derivatives to derivatives with respect to A, Notice that, as we shall see later, it is the ratio γ(λ, T )/T (rather than γ(λ, T )) which corresponds closely to the anomalous dimension of the quark mass in QCD. Excluding the extra T in the definition simplifies many of the equations below. which can in principle be solved for A ′2 e 2A as a function of λ, V g , V f , h, β, γ. In general, the solution is not unique. 12 Similarly from (5.8) we obtain Then (5.10) and (5.11) become where we eliminated A ′′ by using (7.4). 
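For reference, the definitions underlying this system, stated in words above (the energy scale is identified with e^A, so that radial derivatives are traded for derivatives with respect to A), can be written schematically as
\[
\beta(\lambda,T) \equiv \frac{d\lambda}{dA}, \qquad
\gamma(\lambda,T) \equiv \frac{dT}{dA}, \qquad
\frac{d}{dr} = A'(r)\,\frac{d}{dA};
\]
this is a sketch of the notation rather than a quotation of the original equations.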
This is a system of two first-order partial differential equations for β, γ, with inputs V f , V g , h. The equations are highly non-linear. Setting x = 0, the system reduces to In this case ∂β ∂T = 0 and (7.7) can be rewritten as These equations are similar to those in the probe (quenched) limit. Note also that although the equations of motion are linear in x, the β-system (7.5,7.6) is non-linear in x, with the understanding that A ′2 e 2A is to be eliminated using (7.3). The UV Fixed point Near the boundary where λ, T → 0, we expect that γ → 0, so that equation (7.3) becomes According to the discussion of Sec. 3, the potentials are expected to be regular at λ = 0, We also take the following Ansatz for the beta functions: Inserting these into the equations (7.5,7.6), we find = 0 (7.14) To leading order the solutions around the UV fixed point are In order for T to have the proper UV dimension we must have γ 0 = −1. In the massless case, T is dominated by the vev, and we have to choose γ 0 = −3. These solve Eq. (7.14) if As we shall point out below, the combination γ(λ, T )/T is mapped to the anomalous dimension of the quark mass in QCD (2.8). Remarkably, the solution (7.12) is consistent with QCD perturbation theory: γ(λ, T )/T has a series expansion in λ. The leading term is fixed according to the UV dimension, and the correction terms are identified with the anomalous dimension. Confining IR asymptotics In the IR, a confining asymptotic has the property that A ′2 e 2A → ∞. There are two possibilities for the tachyon: 1. If T = 0, γ = 0 and then V g → V g − xV f (T = 0). In this case 2. T → ∞ in the IR, V f → 0 exponentially, and we obtain the pure YM case in the IR. Here If we assume that V f (λ, 0) does not grow faster than V g (λ) in the IR, the solution in both cases is the same as in pure YM (x = 0), [36], This non-perturbative β-function indicates that the scaling dimension of the dual operator tr(F 2 ) in the IR is ∆ = 5 2 read from its linear term. However this does not imply there is a scaling regime in the IR, as the metric is far from AdS. The tachyon equation becomes In the IR, for a class of potentials γ → −∞ and 1 → 0 as V f vanishes exponentially while γ increases polynomially with increasing T . We also expect hV g to be approximately constant and 1 − β 2 9λ 2 → 3 4 . We also expect ∂ log h ∂T → 0 and β ∂γ ∂λ to be subleading. Therefore the leading terms in equation (7.20) are expected to be If we define δ = 1 γ , δ satisfies a linear equation with T * large enough so that we are in the IR regime. For γ we obtain Note that from the whole one-family of solutions above, only one is diverging for large T and this is the only reliable solution. For a tachyon potential of the form V f T = e −aT 2 the diverging solution is For the type of potentials we will be using hV g / √ log λ approaches a constant which we denote by b, and V f λ ∼ λ 2 so that Note that if we assume a more elaborate tachyon potential of the form V f = V f 0 (λ)e −a(λ)T 2 with a(λ) increasing with λ in the IR then the asymptotic behavior of γ changes and it vanishes in the IR. To capture this behavior from (7.20) we take γ → 0 in the IR and We also expect that the derivative terms γ ∂γ ∂T and β ∂γ ∂λ are subleading. The two leading terms in (7.20) are then expected to be The solution behaves for large T as where b is defined as above. Some simple β functions We may engineer a β function that interpolates between the one-loop perturbative YM β-function, and the non-perturbative one in (7.19). 
We could also have a γ function interpolating between an operator with dimension ∆ UV in the UV, and ∆ IR in the IR: The flow equations can be integrated to 7.31) Such β functions can be converted into potentials V g , V f , via the equations (7.5), (7.6). Numerical solutions for the β and γ functions The equations (7.3), (7.5), and (7.6) for the β and γ functions can be solved numerically for fixed potentials by combining solutions evaluated along the RG flow, as detailed in Appendix A. Fig. 1 shows the results for various values of x. We used the potentials of scenario I from Appendix C and required that the solutions flow to the good IR singularity, as explained in Appendix A. Notice that the values x = 3.9 and 4.2 were chosen to lie slightly above and below the edge of the conformal window, which is at x c ≃ 3.9959 for the potentials used in the plots. The plots of the beta functions show a smooth transition as T evolves from small ≪ 1 to large 1 values, reflecting the expectation of Sec. 6. For small tachyon, the beta function has a fixed point corresponding to the maximum of the effective potential V eff , which moves to lower values of λ as x is increased, whereas for large T the λ-dependence of the beta function approaches the Yang-Mills form. The γ functions show a transition between the small and large T regions as well. For large T and λ, the solutions agree with Eq. (7.27). For small T , the structure is richer. For x = 2 the γ tends to a constant value as T → 0 (except for very small λ) so that γ(λ, T )/T , which is plotted in Fig. 1, diverges. This behavior is pushed for larger λ as we increase x to 3.9, and has disappeared for x = 4.2, and the gamma function is instead linear in T in accordance with Eq. (7.12). These changes are related to the tachyon changing sign during the flow. If the tachyon has a zero, we hit the T = 0 line before reaching the UV singularity. Then γ approaches a constant value as T → 0, and Eq. (7.12) does not apply. As will be discussed in detail below, tachyon zeroes are indeed expected for x < x c . Notice also that γ/T tends to −1 as λ → 0 with fixed T at least for small T , reflecting the expected value of γ 0 in Eq. (7.12). Generic properties of the background We start by discussing the symmetries and integration constants of the equations of motion of V-QCD, (5.8)-(5.11). We shall assume the exponential Ansatz for the potential of the tachyon DBI action. Then the background EoMs have the following symmetries The first symmetry can be used to fix the value of the UV AdS radius ℓ, and is usually associated with the units of energy in the boundary theory. The second one will be used to fix the normalization of h in the UV. The third one is essentially different from the first two, since it does not involve the potentials. It will therefore remain as a true symmetry of the background solutions. We may choose a set of independent equations of motion which contains one first order and two second order differential equations. Therefore, their general solution includes five integration constants. These can be identified as the coefficients of the UV expansions of the fields as follows (assuming that the solution has the standard UV singularity with λ → 0, see Appendix D). The tachyon UV expansion has the usual free constants related to the normalizable and non-normalizable solutions, identified as the quark mass m and the vacuum expectation value σ of theqq operator, respectively. 
In close analogy, the solution for λ involves two constants, identified as the UV scale Λ = Λ UV of the expansions, and another constant which we will define in Section 9, related to the gluon condensate and the free energy of the system. The fifth integration constant is simply the location of the UV singularity, which we can take to be r = 0 by the translation symmetry of Eq. (8.4). We shall require that the system has a repulsive, "good" kind of IR singularity, which fixes the values of the condensates (σ and the gluon condensate) in terms of m and Λ UV . In addition, we still have the scaling symmetry of Eq. (8.4), which can be used to vary the units of all constants, and reflects the corresponding scale transformation of the dual field theory. Therefore, the single parameter which characterizes all nontrivially distinct physical backgrounds (for fixed potentials and, in particular, fixed x = N f /N c ) is the ratio of the "source" coefficients m/Λ UV . For the choices of the functions h, V g , and V f of interest to us, the tachyon typically decouples from the other fields asymptotically in the UV and in the IR. The UV and IR asymptotics are discussed in detail in Appendices D and E, and we shall repeat only the main features of the physically interesting possibilities here. The physically relevant UV asymptotics are restricted. As pointed out in [36], the fields A and λ can be expanded in −1/ log r at the UV boundary r = 0 in the probe limit. Similar expansions work also at finite x = N f /N c . The tachyon is required to vanish linearly in r, T (r) ≃ mr (or faster if m = 0), in the UV. Taking ǫ = −1/ log r → 0, the tachyon T ≃ m exp(−1/ǫ) vanishes exponentially while A and λ have power-like behavior in ǫ. Since, in addition, the functions h and V f must be regular in the UV (see Sec. 8.2 below), the tachyon can be set to zero in the leading order action. We find that A and λ satisfy their probe limit equations of motion, but with the dilaton potential V g replaced by V eff (λ) = V g (λ) − xV f (λ, T = 0), which also verifies that the UV expansions of λ and A have the same form as in the probe limit. We shall only discuss cases where the tachyon indeed decouples asymptotically in the IR. This is the case if we take a tachyon potential having an exponential T dependence, V f (λ, T ) ∝ exp(−aT 2 ), and the tachyon has a power-law (or faster) divergence as r → ∞. This is indeed what is suggested by tachyon condensation in string theory. Therefore, the flavor part of the action is exponentially suppressed in the IR. Consequently, the tachyon decouples asymptotically from A and λ, and their asymptotic expansions in the IR have exactly the same form as in the probe limit, and are determined by the potential V g (λ). In summary, even though all fields couple nontrivially for general values of the coordinate, the probe limit description will be valid in the UV and IR. In particular, this guarantees that the interpretation of the integration constants is the same for finite x as in the probe limit. The decoupling of the tachyon in the IR leads to the system having similar "good" IR singularities as in the probe case. An important difference with respect to the probe limit discussion is that the potentials which characterize the backgrounds in the UV and IR regions will be, in general, qualitatively different.
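A minimal numerical sketch may help here (the potentials below are toy forms with illustrative coefficients, not the scenario-I functions of Appendix C): the IR fixed point discussed next is simply the maximum λ * (x) of V eff (λ) = V g (λ) − xV f 0 (λ), and it moves to smaller λ as x is increased.

from scipy.optimize import minimize_scalar

def V_g(lam):
    # toy glue potential: monotonically increasing in lambda
    return 12.0 * (1.0 + 0.5 * lam + 0.1 * lam**2)

def V_f0(lam):
    # toy flavor potential at T = 0: grows faster than V_g at large lambda,
    # so that V_eff = V_g - x*V_f0 has a maximum for every x (cf. Appendix C)
    return 1.2 * (1.0 + 0.6 * lam + 0.15 * lam**3)

def lambda_star(x):
    """Location of the maximum of V_eff(lambda) = V_g(lambda) - x*V_f0(lambda)."""
    res = minimize_scalar(lambda lam: -(V_g(lam) - x * V_f0(lam)),
                          bounds=(1e-6, 50.0), method="bounded")
    return res.x

for x in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"x = {x:.1f}  ->  lambda_* ~ {lambda_star(x):.3f}")

With these toy choices the maximum exists for all x shown, mirroring the behavior of the scenario-I potentials quoted below.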
As discussed above in Section 6, the potential V eff (λ) = V g (λ)− xV f (λ, T = 0) which controls the UV behavior will be chosen such that it admits a fixed point (extremum of the potential) at least for large values of x as required by the Banks-Zaks analysis. The fixed point of V eff will play an important role in the dynamics in the intermediate region between UV and IR. In particular, for identically vanishing tachyon, the background simply flows from the UV fixed point at λ = 0 to the IR fixed point at finite λ. Adding, a tiny quark mass (or chiral condensate), the solution in the UV region will not be changed drastically. However, no matter how small the quark mass is, the tachyon will eventually become large and start coupling to λ and A, which will drive the system away from the IR fixed point. We can make two observations. First, the special case of zero tachyon will be essentially different from all other solutions. This solution has vanishing quark mass and chiral condensate, and is therefore identified as the solution corresponding to conserved chiral symmetry. We will discuss this special case separately below. Second, there is a possibility to have solutions which come very close to the IR fixed point, but are eventually driven away by the increasing tachyon. Such solutions will be identified with quasiconformal or "walking" dynamics of the dual field theory, where the coupling constant remains approximately fixed over a large range of energies. We shall see below how the phase structure of QCD in the Veneziano limit, which includes a quasiconformal region, arises in our class of models. Matching UV behavior with the QCD β-functions We shall now discuss the most important links of the potential functions to the physics of the dual field theory. We start by an analysis of the UV region, where the behavior of the system can be mapped to the QCD β-functions [36] as already discussed in Section 7. In particular, in the probe limit (x → 0 limit), the UV behavior is controlled by V g (λ), that has the expansion Here V 0 > 0 can be freely chosen and it fixes the AdS scale for x = 0 as V 0 = 12/ℓ 2 0 . The other coefficients can be mapped to the Yang-Mills β-function. At one-loop order we have [36] V g (λ) = 12 where b 0 YM is the one-loop coefficient of the β-function, from which V 1 can be solved. Moreover, as discussed above, at finite x the UV behavior is similar to the probe limit, but the role of V g is taken by the potential V eff (λ) = V g (λ) − xV f (λ, T = 0). Therefore, we take and the relation to the QCD β-function reads where b 0 is the leading coefficient of the β-function in the Veneziano limit. Similarly to V 0 , the coefficient W 0 can be freely chosen, and the other coefficients can be solved from Eq. 8.8. However, there are constraints: the AdS scale must remain positive for all 0 < x < 11/2, and W 0 should also be positive (see Appendix C). These boil down to In addition, as pointed out in Section 7, we can map the UV behavior of the tachyon action to the (anomalous) dimensions of the quark mass and the chiral condensate of the dual field theory. For definiteness, we parametrize V f (λ, T ) = V f 0 (λ) exp(−a(λ)T 2 ) and assume that h depends only on λ. Then the dimension of the quark mass constrains the UV behavior of h(λ) and a(λ). A detailed analysis is done in Appendix D.1.1. Remarkably, assuming that the potential functions have analytic expansions at λ = 0 is consistent with QCD perturbation theory. 
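For orientation, the standard perturbative quantities entering this matching are, at one loop and in the Veneziano limit (λ denotes the 't Hooft coupling; these are the usual QCD values, quoted here only as a reference point for the identifications made below),

b 0 = (2/3) (11 − 2x)/(4π) 2 ,   γ 0 = 3/(4π) 2 ,

so that asymptotic freedom requires x < 11/2, and both b 0 and the fixed-point coupling vanish in the Banks-Zaks limit x → 11/2.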
Indeed, requiring the dimension of the quark mass to approach one in the deep UV fixes the leading terms as 10) and the next-to-leading coefficient h 1 can be matched with the one-loop anomalous dimension of the quark mass γ m (λ). By using Eq. (D.10) from Appendix, we obtain −γ 0 = 9 8 4 3 where γ 0 is the leading coefficient of the anomalous dimension, −d log m/d log µ = γ m (λ) = γ 0 λ+· · · . Notice that, for example, the non-normalizable term in the tachyon solution (D.9) reads after this identification where the logarithmic correction is consistent with the one-loop solution for the running quark mass in QCD. Condensate dimension and the edge of the conformal window The most important new feature of the system discussed in this article is the description of the phase diagram of QCD as a function of x = N f /N c in the Veneziano limit. We shall now indicate which potentials lead to the desired structure. This constraint to a large extent independent of the one discussed above that was set by the UV expansions. The phase structure is linked to the dimension of the chiral condensate at the IR fixed point which is found at the maximum of V eff = V g − xV f 0 in the conformal window (in the limit of small tachyon background). We denote the value of the coupling at the fixed point by λ * so that From the action (5.5) we can calculate the IR AdS scale To calculate the dimension we need the mass of T in the IR fixed point, which can be extracted from the action (5.3) and reads . has a certain dependence on x. It must start at a value smaller than 4 at the end of the BZ region (which is guaranteed as the standard UV boundary conditions fix it to 3 there), grow as x decreases, and become 4 at some value of x so that the BF bound [66] is saturated [7]. Indeed, the solutions near the edge of the conformal window stabilize such that the critical x c , defining the location of the conformal phase transition for massless quarks, is determined by We shall discuss why this is the case in detail later on. Note that in addition to the explicit dependence on x in the factor xV f 0 , Eq. (8.16) depends on x through λ * due to the definition of Eq. (8.13). We stress that the saturation of the BF bound means that for theories near the critical x c , ∆ IR → 2, or in other words, the anomalous dimension of the quark mass at the fixed point γ * → 1. That is, our model reproduces the standard assumption for the energy dependence of the chiral condensate near the edge of the conformal window. Recall that this is extremely important for realizations of walking technicolor. If the potentials are matched with the UV physics of QCD, the expression (8.16) can be further simplified. Fixing the tachyon mass in the deep UV we obtain a(0)/h(0) = 3/(2ℓ 2 ). In the limit T → 0, the beta function β(λ) = dλ/dA satisfies a first order differential equation which depends on V eff (λ), as can be immediately concluded by comparing our action to the probe limit one [36] for T = 0. In terms of the phase function X, defined by Solving this, we can parametrize the potential in terms of the β-function: The IR dimensions now satisfy where in the second line we used the fact that β(λ * ) = 0. If we do the matching with perturbative QCD results in the UV, the β-function can be identified as the QCD βfunction in the Veneziano limit. 
Then the exponential factor is small in the sense that it is roughly proportional to b 0 λ * where both the one-loop coefficient of the β-function b 0 and the coupling at the fixed point λ * vanish in the BZ limit x → 11/2. Therefore we expect that ∆ IR (4 − ∆ IR ) depends on x mostly through the functions h and a. In the two-loop approximation of the β-function we obtain an explicit expression for this combination. In Fig. 2 we plot the dependence of ∆ IR (4−∆ IR ) on x for the two scenarios of potential choices of Appendix C. The solid lines give the results for the value W 0 = 12/11 used in the Appendix, and the dashed lines show the sensitivity of the result to the choice of W 0 as this parameter is varied over its allowed range. In all cases, ∆ IR (4 − ∆ IR ) intersects the value of 4, shown as the horizontal dotted line in the plot. This suggests that the class of potentials that has the desired phase structure quite generally includes those that are matched with the UV behavior of QCD. Moreover, the critical value of x is found within a quite narrow band starting around 3.7 (see Eq. (8.23)), and the largest source of uncertainty is the choice of W 0 . We stress that there is no strict bound on the allowed values of x c , but the numbers in Eq. (8.23) rather give the expected magnitude for the variation of x c within natural potential choices. It is also instructive to show the dependence of the anomalous dimension of the quark mass at the IR fixed point on x (see Fig. 3). Within our model the anomalous dimension is defined by γ * = ∆ IR − 1 where ∆ IR is the smaller of the two roots. The result for the potentials of scenario I of Appendix C is shown as the solid blue line in Fig. 3, and as in Fig. 2, the dashed lines show the maximal variation as the parameter W 0 is varied over its allowed range. Our result is very similar to the prediction obtained by calculating the anomalous dimension from Dyson-Schwinger equations in the rainbow approximation [67], and evaluating the result at the zero of the perturbative two-loop β-function (2.3) (dotted red curve of Fig. 3). As in our model, the conformal window ends at the point where γ * reaches one and becomes complex in this approach. The deviation from the simple perturbative estimate (dot-dashed magenta curve), which was obtained by using the two-loop anomalous dimension (2.8) instead, is also small. However, an all-orders β-function [68] (long-dashed green curve) predicts much smaller values of γ * for low values of x. Constructing the background solutions We will select concrete potentials for evaluating the background numerically. These potentials must satisfy the constraints of the previous two subsections in order to produce the desired phase structure of QCD in the Veneziano limit as well as the perturbative UV physics. In addition, the system needs to have an acceptable IR singularity where the tachyon diverges, which adds extra requirements on the large λ behavior of the potentials. However, as we shall demonstrate briefly, different choices for the IR behavior do not change the qualitative features of the background. The construction of explicit potentials is detailed in Appendix C, where also the IR behavior is fixed by using the generic analysis of Appendix E. For clarity, we repeat the final result here. The glue potential of scenario I in Appendix C reads V g (λ) = 12 + (44/(9π 2 )) λ + (4619/(3888π 4 )) λ 2 √(1 + log(1 + λ/(8π 2 ))) / (1 + λ/(8π 2 )) 2/3 (8.24) We shall use this choice when calculating the background numerically, unless stated otherwise.
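A quick numerical sanity check of this potential is useful (a minimal sketch, assuming the form of Eq. (8.24) as quoted above; the IR exponent in the comments is the λ 4/3 √log λ growth required of V g for a good, confining IR singularity, cf. Appendix C and [36]):

import numpy as np

def V_g(lam):
    lam0 = 8.0 * np.pi**2
    return (12.0
            + 44.0 / (9.0 * np.pi**2) * lam
            + 4619.0 / (3888.0 * np.pi**4) * lam**2
              * np.sqrt(1.0 + np.log(1.0 + lam / lam0))
              / (1.0 + lam / lam0)**(2.0 / 3.0))

# UV: V_g(0) = 12, and the slope at the origin matches the one-loop coefficient
eps = 1e-6
print(V_g(0.0), (V_g(eps) - V_g(0.0)) / eps, 44.0 / (9.0 * np.pi**2))

# IR: V_g / (lambda^(4/3) sqrt(log lambda)) slowly approaches a constant
for lam in (1e2, 1e4, 1e6, 1e8):
    print(lam, V_g(lam) / (lam**(4.0 / 3.0) * np.sqrt(np.log(lam))))

The first printout verifies the UV normalization and slope, and the second shows the slow approach to the λ 4/3 √log λ asymptotics.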
Recall that only three of the equations of motion (5.8)-(5.11) are independent. We choose a set of three second-order equations for the numerical calculations, and treat the remaining first order equation as an extra constraint. As usual, we choose boundary conditions that satisfy the constraint, which is then automatically satisfied for all r. When solving the equations numerically, it turns out that shooting from the IR is numerically stable. We shall first discuss backgrounds where the tachyon has a nontrivial profile. We fix the boundary conditions in the deep IR by using the asymptotic IR expansions at the "good" IR singularity at r = ∞ (see Appendix E.2.2). For the functions given above, the asymptotics becomes that of Eqs. (8.29). Here we already used the translation symmetry of Eq. (8.4); the solution (for fixed x) depends nontrivially on only one free parameter T 0 in the IR, as is characteristic for a good IR singularity. To continue, we choose an IR cutoff R IR such that the tachyon is large, and therefore decoupled from the fields A and λ. With the above choices of potentials, a sufficient value turns out to be T (R IR ) = 70: with this choice, the tachyon has decoupled to a good precision, and R IR /R is large enough for the asymptotic expansions to work. We use the asymptotic expansions to fix the values of A, λ, T , A ′ , and T ′ at r = R IR , and solve λ ′ from the constraint (the first order EoM). The solution is then obtained by numerically solving the set of second order equations of motion toward the UV, until some of the UV asymptotics described in Appendix D are reached. The obtained UV behavior is depicted in Fig. 4 (left) as a function of the only remaining free parameters, T 0 and x. For comparison, we also present the same plot for the scenario II of Appendix C on the right hand side, where T 0 is replaced by the parameter r 1 of Eq. (C.21). For clarity, we shall only refer to the variable T 0 in the discussion below. Because of the invariance of the Lagrangian under T → −T , scanning over positive T 0 is enough to catalog all possible solutions. We shall explain the notation and the results here, and discuss at a qualitative level how the structure arises, whereas the details are discussed in Appendix F. We shall also show concrete examples of the backgrounds as well as their β- and γ-functions along the holographic RG flow below. The solid blue line represents a change in the UV asymptotics. The standard UV asymptotics of Appendix D.1.1 is obtained left of the blue line, whereas right of the blue line the solution "bounces back" at finite λ, as discussed in Appendix D.1.2. In the bounce back region the β-function evaluated along the RG flow becomes zero at finite λ, after which λ starts growing toward the UV, and the standard UV singularity is not reached. Left of the blue solid curve, the "standard" tachyon UV expansion of Appendix D.1.1 defines the quark mass and the chiral condensate, which are not defined right of, or on, the blue line. The red dashed line has m = 0. This line exists only for small x and terminates at a critical value x c ≃ 3.9959 (in scenario I), which matches with the definition of Eq. (8.17). We stress that the solution with zero quark mass does not exist for x ≥ x c . Right of the red dashed line, the quark mass takes positive values and is monotonic in T 0 . The contours give the quark mass, obtained by fitting the deep UV behavior of the tachyon solution to the expansion of Appendix D.
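The workflow just described (fix boundary data at an IR cutoff from an asymptotic expansion, integrate toward the boundary, and read off the analogues of the quark mass and the condensate from the UV behavior) can be illustrated on a drastically simplified toy: a single tachyon-like field in a fixed AdS 5 background with a cubic self-interaction, rather than the full coupled system (5.8)-(5.11). The equation, the cutoffs, and all numbers below are purely illustrative.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y):
    T, dT = y
    # toy EoM in fixed AdS_5: T'' - (3/r) T' + (3/r^2) T - (1/r^2) T^3 = 0,
    # chosen so that the near-boundary behavior is T ~ m r + sigma r^3
    return [dT, (3.0 / r) * dT - (3.0 / r**2) * T + (1.0 / r**2) * T**3]

def shoot(T_ir, r_ir=10.0, r_uv=1e-3):
    # crude boundary data at the IR cutoff, standing in for the IR expansions
    return solve_ivp(rhs, (r_ir, r_uv), [T_ir, T_ir / r_ir],
                     dense_output=True, rtol=1e-10, atol=1e-12)

def extract_m_sigma(sol):
    # fit T(r) ~ m r + sigma r^3 on a near-boundary window
    r_fit = np.geomspace(1e-3, 1e-2, 40)
    T_fit = sol.sol(r_fit)[0]
    coeffs, *_ = np.linalg.lstsq(np.column_stack([r_fit, r_fit**3]), T_fit, rcond=None)
    return coeffs

for T_ir in (0.5, 1.0, 2.0):
    m, sigma = extract_m_sigma(shoot(T_ir))
    print(f"T(r_IR) = {T_ir:3.1f}  ->  m ~ {m:+.4f}, sigma ~ {sigma:+.4f}")

In the full problem the corresponding fit is performed against the logarithmically corrected UV expansion of Eq. (D.9) rather than against the pure powers used here.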
The backgrounds in the contoured region will be identified as the physical ones, having lowest free energy. The value of the mass goes to zero when the solid blue (for x > x c ) or the red dashed (for x < x c ) curves are approached from within the contoured region, as shown in Fig. 5, which is not evident from Fig. 4 due to limited resolution. Between the blue solid and red dashed curves, the quark mass is small, but depends on x and T 0 in a complicated manner, as we shall discuss below. The right hand plot in Fig. 4, which was obtained after modifying the potentials in the IR, shows similar qualitative features as the left hand plot. This is the case because the structure seen in Fig. 4 arises from the behavior of the solutions in the UV region and close to it, which is analytically tractable and almost independent of the change of the potentials, if, e.g., we keep the quark mass fixed (see the discussion in Appendix F). However, the mapping to the IR asymptotics (in particular to T 0 or r 1 ) is completely different in the two scenarios, which causes the differences between the plots. Let us then discuss how the structure of Fig. 4 arises from the background solutions (see Appendix F for a detailed analysis). The main point is that the closer we are to the thick blue curve (when approaching the curve from the right), the closer the background is to reaching the IR fixed point before the tachyon finally grows large and drives the system away from it. Therefore the backgrounds near the blue curve will be quasiconformal, or "walking", so that the coupling λ is approximately constant over a large range of energies. The mass dependence can then be understood by studying the tachyon EoM and in particular the tachyon mass at the IR fixed point. Notice that as the red dashed curve of solutions with zero quark mass ends on the blue curve as x → x c , quasiconformal behavior is expected in this region. To understand the behavior near the blue and red curves it is important to plot the UV parameter, m, versus the IR parameter T 0 . We show this in figure 5 for x < x c (left) and x ≥ x c (right). (The mass here is given in IR units, i.e., we actually plot mR = m/Λ IR , since we fixed the scale R of the IR expansions to unity in the numerics. Moreover, as m and −m are related by a chiral rotation by π, which is reflected in the background solutions in the symmetry T → −T , the negative mass solutions can be mapped to positive mass ones, and we can take m ≥ 0.) For x ≥ x c , there is a unique saddle point (regular classical solution) for each value of the quark mass. The situation for x < x c is more complex and reflects the fact that the chiral condensate operator violates the BF bound at the (potential) IR fixed point. (If, as mentioned in Sec. 6, the fixed point exists only for x ≥ x * , the structure is expected to be the one described here at least for x * < x < x c , and the structure in the region x ≤ x * can be analyzed numerically.) We see from the left of figure 5 that for each m > 0, there is a finite number of regular classical solutions, which we label by an integer n. The qualitative shape of the left plot of figure 5 can be understood from the fact that for x < x c , T ≪ 1, and λ near the fixed point value λ * , the approximate solution of the tachyon is given by T ∼ r 2 sin [k log r + φ]. Note that the constant k is fixed for fixed x, but the normalization and φ are determined by the boundary conditions and IR regularity. Therefore the tachyon starts at the boundary, evolves into the sinusoidal form for a while, and in the end diverges. Different solutions differ in the region in which they are sinusoidal, and it is this region that controls their number of zeros. This is explained in more detail in Appendix F.
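The origin of this sinusoidal form is the standard relation between bulk masses and operator dimensions in the (approximately AdS) walking region. Writing the linearized tachyon as T ∼ r ∆ , the two exponents become complex precisely when the combination −m T 2 ℓ IR 2 = ∆ IR (4 − ∆ IR ) plotted in Fig. 2 exceeds 4:

∆ ± = 2 ± iν ,  ν = √( ∆ IR (4 − ∆ IR ) − 4 ) ,  so that  T (r) ≈ r 2 [ c 1 cos(ν log r) + c 2 sin(ν log r) ] ∝ r 2 sin(ν log r + φ) ,

i.e., k = ν in the notation above. This makes explicit why k is fixed once x (and hence λ * ) is fixed, while the overall normalization and the phase φ are the constants determined by the boundary conditions and IR regularity.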
For the solutions with high label n, the tachyon changes sign several times before diverging in the IR. As we move to the left toward the vertical blue line in Fig. 5 (left), a new zero in the tachyon solution appears every time the mass curve crosses the horizontal axis. For m = 0 we expect an infinite number of regular solutions, one for each positive integer n ≥ 1. The presence of several such solutions reflects the violation of the BF bound, and mirrors the Efimov minima seen in other contexts (see [7,10]). This agrees also with similar recent observations in [12]. The hint of such multiple regular solutions/saddle points was seen already in [55], which treated the flavor sector in the quenched approximation. Indeed, a second regular solution was seen beyond the dominant one. In that case a calculation of the spectrum of mesons in this second solution indicated that this saddle point was unstable, as the spectra were tachyonic. It is interesting to point out that the presence of the Efimov-like tower of regular solutions is not tied uniquely to the existence of an IR fixed point solution in the landscape of the bulk theory that violates the BF bound. Even modifications of the potentials that do not allow this IR fixed point may still have the Efimov tower. The qualitative reason is that once the bulk theory has a regime that is near critical, this is enough to trigger the presence of such multiple saddle points. This is indeed the case in [55], where in the quenched approximation a second solution exists even though there is no such IR fixed point. We have checked that exactly the same happens here in the probe limit x → 0, where the fixed point is absent, if the potentials of scenario I are used. Moreover, for scenario II the full Efimov tower remains even in the probe limit. There is a more detailed analysis of the regular solutions in Appendix F. The comparison of the free energies of the various regular solutions is made in section 9. Background solutions at vanishing quark mass To summarize the results of the above analysis, we identified the solutions of the contoured region of Fig. 4 as the physical ones. For x < x c we found solutions for all m ≥ 0, whereas for x ≥ x c we found that m > 0, so that the solution for m = 0 was absent. We will now discuss how the background solutions vary as we move around Fig. 4 (left). We will start with the massless case, where an additional special background also exists (for all x), for which the tachyon is identically zero. Solution with identically vanishing tachyon Let us start by analyzing the special solution with vanishing tachyon, which is not included in Fig. 4. In the absence of the tachyon, the system is otherwise the one studied in [36], but with the dilaton potential V g replaced by V eff = V g − xV f 0 . This potential is guaranteed to have a maximum, corresponding to an IR fixed point, in the Banks-Zaks region since we matched it with the QCD β-function. For the choice of Eqs. (8.24)-(8.28) the fixed point actually exists for all 0 < x < 11/2. There is a single solution to the equations of motion that reaches the IR fixed point, described in Sec. E.3 of Appendix E. It is similar to the backgrounds studied in [44] where a β-function inspired by supersymmetry was used. Similar backgrounds were also studied at finite temperature in [45].
We identify this special solution as the background corresponding to the chiral symmetry conserving phase, as the vev σ vanishes. It is easy to construct the background numerically by shooting from the vicinity of the IR fixed point, e.g., by using the expansions of Appendix E.3. We plot the background for x = 2 and 4 in Fig. 6, where we fixed the scale R of the IR expansions (E.48) and (E.49) to one. The geometry is expected to asymptote to AdS both in the UV and in the IR, reflecting the flow from the IR fixed point to the standard UV one. Actually A is very closely linear in log r for all r, so that the deviation from AdS is not visible in the plots. A similar observation was made in [44]. While the solution with vanishing tachyon exists for all x, we will show in the next section that the other massless solution, which involves a nontrivial tachyon profile and therefore chiral symmetry breaking (red dashed line in Fig. 4), has lower free energy whenever it exists, i.e., for x < x c . Therefore, this background, which corresponds to a field theory flowing to an IR fixed point, is the physical one (for massless quarks) only for x ≥ x c , and x c is indeed the edge of the conformal window. Solutions having nontrivial tachyon dependence The backgrounds having vanishing quark mass lie on the red dashed line in Fig. 4. We plot the corresponding background as a function of r for a few values of x in Fig. 7. As we matched with the IR expansions and chose their scale R to be unity, the IR scale is approximately fixed to O(1). The change of the background as the critical value x c ≃ 3.9959 is approached is best visible in the solutions of λ (the solid red curves). The dependence of the solutions on x meets the expectations from field theory. For x = 2 the solution is "running": λ has a simple and smooth dependence on log r. As x is increased to 3, a small distortion appears, which becomes better visible for x = 3.5. The solution of λ is developing a plateau at λ ≃ 25, as it approaches a fixed point. Indeed for x = 3.9 the coupling constant λ "walks": it takes an approximately constant value as r changes by a few orders of magnitude. Such walking backgrounds have been studied in the context of IHQCD by using a model beta function in the probe limit in [46]. We also note that A (dashed blue curves) depends linearly on log r up to the IR region so that approximately A ≃ − log(r/ℓ), and the metric is thus very close to the AdS one, even over the quasiconformal region where λ walks. The tachyon (dotted magenta curves) is small and decoupled from the evolution of A and λ in the UV and in the walking region. It becomes O(1) (the curve crosses zero) only after the coupling has already started to diverge. This agrees with our expectation that the UV behavior, the behavior in the walking region, and in particular the phase structure are basically independent of the choices of IR behaviors of the potentials and the form of the tachyon action for large T . We also show the UV and IR expansions of the various fields, given in Eqs. (D.2), (D.9) of Appendix D, and in Eqs. (8.29), respectively, as thin lines where possible. These lines are often poorly visible since they overlap with the background. In the running region (x = 2) the full solution can be obtained to a good approximation by interpolating between the expansions. As x increases, the region of validity of the UV expansions is pushed to smaller log r, and neither UV nor IR expansions work in the walking region that grows as x → x c .
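The walking behavior can be mimicked by an elementary toy flow (the β-function below is an illustrative Miransky-type form with parameters c, λ * , and δ chosen purely for display, not the V-QCD β-function): when the flow narrowly misses a zero, λ lingers near λ * over a range of A that grows like 1/√δ, the same mechanism that underlies the BKT/Miransky scaling discussed in Sec. 10.

import numpy as np
from scipy.integrate import solve_ivp

c, lam_star = 1.0, 2.0

def flow(A, y, delta):
    lam = y[0]
    # toy beta function with a near-miss of a fixed point at lam = lam_star
    return [-c * ((lam - lam_star)**2 + delta)]

def diverged(A, y, delta):
    # stop once lambda has clearly run off toward the IR
    return y[0] - 50.0
diverged.terminal = True

for delta in (1e-2, 1e-4, 1e-6):
    A_end = -(10.0 / np.sqrt(delta) + 100.0)
    sol = solve_ivp(flow, (0.0, A_end), [0.5], args=(delta,),
                    events=diverged, dense_output=True, rtol=1e-8, atol=1e-10)
    A = np.linspace(0.0, sol.t[-1], 20000)
    lam = sol.sol(A)[0]
    walking = A[np.abs(lam - lam_star) < 0.1]
    print(f"delta = {delta:.0e}:  walking range of A ~ {walking.max() - walking.min():9.1f}"
          f"   vs  pi/(c sqrt(delta)) = {np.pi / (c * np.sqrt(delta)):9.1f}")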
It is also illustrative to discuss the behavior of the system in terms of the β-functions. For this we recall that in the absence of the tachyon, potentials and beta functions are linked according to Eqs. (8.18), and (8.19). By using this fact we can quantitatively estimate the validity of the tachyon decoupling in the UV and in the IR that was discussed above. First, recall that V f (λ, T ) vanishes exponentially for large T . Therefore, in accordance with the discussion of Sec. 6, the behavior of λ and A is described by V g (λ) − xV f 0 (λ) (V g (l)) in the UV (IR) where T → 0 (T → ∞). Eq. (8.19) gives directly the approximate beta function in the UV, whereas in the IR we must replace V eff (λ) → V g (λ) as in the Yang-Mills case [36]. We show in Fig. 8 the β-functions corresponding to the UV (dashed blue curves) and IR (dotted magenta curves) potentials, obtained by solving Eq. 8.19, and compare them to dλ/dA evaluated along the RG flow of the numerical solution (red solid curves) for various values of x. First, notice that the x dependence of the effective β-function dλ/dA is as expected from Fig. 7: for x = 2 it is qualitatively similar to the Yang-Mills β-function, and as x approaches x c we find a typical quasiconformal behavior where the fixed point is almost reached at a finite value of the coupling. As λ → 0 the effective β-function dλ/dA matches very well with the expectation from the tachyon decoupling (the dashed blue curves). Similarly, toward the IR (λ → ∞) the asymptotics of the red curves are similar to the magenta ones, which were obtained by taking λ → ∞. However the convergence towards the decoupling limit (blue curves in the UV, and magenta ones in the IR) is slower in the IR than in the UV. The main reason for this is understood by studying Fig. 7. Since we plot the β-functions as functions of the coupling λ, and λ diverges in the IR much faster than the tachyon, the tachyon decoupling as λ → ∞ takes place slowly. This is confirmed in Fig. 9 by plotting the factor exp(−aT 2 ) of the tachyon potential, which controls the tachyon decoupling, as a function of λ. Indeed this factor approaches the constant value of one quickly in the UV, whereas the convergence to zero in the IR is slower. Notice the clear similarity in the λ dependencies of this factor and the β-function dλ/dA in Fig. 8. Finally we plot the effective gamma function along the RG flow against λ and log r in Fig. 10. When the quark mass is zero, γ/T approaches −3 in the deep UV (see Appendix D.1.1). In the UV region γ/T is approximately independent of x and increases with λ until it reaches −2 near the value λ ≃ λ c where the fixed point develops as x → x c . This is in line with the discussion of the preceding sections. In particular, when plotted as a function of log r we see that γ/T is close to −2 in the walking region, corresponding to the saturation of the BF bound. We have checked that this behavior gets more pronounced as we choose values of x even closer to x c so that a plateau near the value −2 develops in the left hand plot of Fig. 10. Backgrounds at generic quark masses We now discuss the solutions of Fig. 4 for generic quark masses. As pointed out above, the massive solutions (the contoured region left of the blue and dashed red lines in Fig. 4) exist for all values of x. Except for the modified UV asymptotics of the tachyon, no new classes of qualitatively different backgrounds with respect to the massless case are found. 
However, adding a mass introduces a new scale to the system, which affects the background in a different way depending on its size and the value of x. 23 This effect may be illustrated by studying the ratio of the UV and IR scales of the background, which measures how close to the IR fixed point the system comes. We define the scales in term of the UV and IR expansions, i.e., Λ UV = Λ in Eqs. (D.2) and Λ IR = 1/R in Eqs. (8.29), which we have fixed to one. The dependence of Λ UV /Λ IR on the quark mass in UV units m/Λ UV and on x is depicted in Fig. 11, and meets the expectations from field theory. The left-hand plot shows the ratio as a function of x for various choices for the quark mass. We see that there is a qualitative difference between the regions with x < x c and x > x c . For x < x c , chiral symmetry breaks spontaneously even for m = 0, and there is some range of small masses where the background is essentially independent of m. This is best seen on the right hand plot, where Λ UV /Λ IR levels for small masses for values of x below the critical line (lowest curves). When the mass grows large enough (essentially larger than the scale of the spontaneous symmetry breaking), it starts to fix the IR scale directly, and Λ UV /Λ IR decreases. For x > x c there is no spontaneous chiral symmetry breaking, and the IR scale is determined smoothly by the value of the mass. From the log-log plot on the right we see that the dependence of Λ UV /Λ IR on m is a power-law. Naively one could expect that the IR scale is directly given by the quark mass, m ∼ Λ IR , so that Λ UV /Λ IR ∝ (m/Λ UV ) −1 (which is also the result one gets by approximating T (r)/ℓ = mr and using the arguments of Appendix F). The nontrivial energy dependence of the quark mass modifies the power from −1 to smaller values. The free energy We now analyze the free energy for zero quark mass. In this case we identified two distinct solutions, one with identically vanishing tachyon and the other with nontrivial tachyon background. We shall show that the latter one is energetically favorable in the region where it exists (x < x c ). We start with the generic definition of free energy for our action. The free energy is given by the on-shell Euclidean action plus counterterms (which we will not need in this article). From (5.5) the Euclidean action takes the form: By using the Einstein equations (5.7) we can eliminate R, which leads to 23 See [69] for an analysis within a different framework. Next, one can solve V g from (5.8) and (5.9): and inserting this into (9.3) the on-shell action can be integrated: Here ǫ and r 0 are the UV and IR cutoffs, respectively. In all the backgrounds which we consider here the contribution from the IR vanishes, so we will drop that term. Following [36] the Gibbons-Hawking boundary term is given by where we have used that [36] K = 4e −A A ′ . We obtain the following expression for the free energy of the system: The free energy difference of the m = 0 backgrounds The free energy calculated above is obviously divergent as ǫ → 0, and needs to be regularized. However, different solutions with the same UV boundary conditions (the quark mass m and the UV scale Λ UV ) have the same divergent terms as well as counterterms and differ only through a finite term which can be extracted from the UV expansions. This is the case in particular for the two backgrounds having zero quark mass, one with vanishing tachyon and the other with a nontrivial tachyon solution. 
We now discuss in detail how the free energy difference between these two backgrounds can be obtained, by expanding all quantities as series at the UV singularity r = 0. Since the leading UV free energy behaves as 1/ǫ 4 , corrections O(r 4 ) to the behavior of A and λ will possibly contribute to the finite terms. As discussed above, in the UV the tachyon decouples from the equations of motion for A and λ. For m = 0, the leading corrections to these equations due to the tachyon are suppressed by T 2 or e −2A T ′2 , i.e., by O(r 6 ). Therefore the coupling to the tachyon does not affect the free energy directly in the massless case, and we can set it to zero. The finite contribution to the free energy can be studied by writing A(r) = A 0 (r) + r 4 A 1 (r) + O(r 6 ) (9.8) λ(r) = λ 0 (r) + r 4 λ 1 (r) + O(r 6 ) in close analogy to Eqs. (3.9), where A i (r) and λ i (r) now have series expansions in 1/ log r at r → 0. The expansions for A 0 and λ 0 are given in Eqs. (D.2). Inserting these as well as suitable Ansätze for A 1 and λ 1 in the EoMs of A and λ, and expanding up to O(r 4 ), we find A 1 (r) = Â [ 1 − 19/(12 log(rΛ)) + · · · ] (9.9) where Λ is the same UV scale that appears in the expansions of A 0 and λ 0 , and  is a free parameter. It is recognized as an integration constant of the EoMs that did not appear in the leading order (O(r 0 )) UV expansions. Two solutions having the same Λ and m = 0, but different IR behavior, will have different Â, as its value is not fixed by the EoMs asymptotically in the UV. Inserting the results for A 1 and λ 1 in the expression (9.6) for the free energy, we find the relation of Eq. (9.10), where ∆E and ∆ are the differences between the two solutions in the free energy and in the constant Â, respectively. It is useful to also study the corresponding variation in the β-function. We will actually use the phase variable X of Eq. (9.11). We may again write X(r) = X 0 (r) + r 4 X 1 (r) + O(r 6 ) . (9.12) Substituting the expansions (9.8), (9.9) in the definition of X, we identify the explicit form of X 1 , Eq. (9.13). By using the logarithmic expansion from above, the result can be expressed in terms of λ, Eq. (9.14). Finally, we recall that when the tachyon has decoupled, X satisfies a first order differential equation determined by the potential (cf. Eqs. (8.18) and (8.19)). Indeed it is easy to check that the result (9.14) is consistent with this equation, and that  is the integration constant which parametrizes all solutions of the differential equation for a given potential. The remaining task is to extract the coefficients  from the chiral symmetry breaking (with nontrivial tachyon profile) and conserving backgrounds (with identically vanishing tachyon) which have zero quark mass, and then use the formula (9.10) to calculate the free energy difference. Extraction of the coefficients is done by studying the variation X 1 of the phase function, and details are given in Appendix G.1. We find that the chiral symmetry breaking solution is the energetically favorable one in the region where it exists, i.e., for 0 < x < x c . We plot the free energy difference (setting M 3 N c 2 V 4 = 1) in Fig. 12 (solid blue curve). Notice that ∆E approaches zero both as x → 0 and as x → x c . In the case x → 0, we do expect that the effect vanishes, as indeed the number of flavors, which controls the backreaction of the tachyon, vanishes. One expects a linear dependence ∆E ∝ x: since the background configurations behave smoothly as x → 0, the energy difference arises due to the explicit x dependence in the action and due to its linear effect on the background. The case x → x c will be discussed in the next section.
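In practice the extraction proceeds through the phase function X 1 (Appendix G.1); as a simpler illustrative variant of the same bookkeeping, the O(r 4 ) coefficient can also be read off by subtracting the leading UV behavior and fitting the residual. The sketch below does this on synthetic data: the "leading" form is a toy stand-in for the full expansions of Eqs. (D.2), and all numbers are illustrative.

import numpy as np

Lambda, ell, A_hat_true = 1.0, 1.0, 0.37

def A_leading(r):
    # toy stand-in for A_0(r): AdS plus a slowly varying logarithmic correction
    return -np.log(r / ell) + 0.1 / np.log(r * Lambda)

def A_background(r):
    # "numerical" background: leading part + A_hat * r^4 (+ a mock higher-order term)
    return A_leading(r) + A_hat_true * r**4 + 0.2 * r**6

r = np.geomspace(1e-3, 3e-2, 200)
residual = A_background(r) - A_leading(r)
A_hat_fit = np.sum(residual * r**4) / np.sum(r**8)   # least squares for residual ~ A_hat r^4
print(f"extracted A_hat ~ {A_hat_fit:.4f}   (input value {A_hat_true})")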
For x c < x < 5.5 only one solution with zero quark mass exists (the one with T ≡ 0), as discussed in Sec. 8, so there is no need for comparison. There are also solutions with zero quark mass where the tachyon has one or more zeroes (see Appendix F). We have verified that these solutions have larger free energies than the one without a tachyon zero. The red dashed line in Fig. 12 is the free energy difference between the background with vanishing tachyon and the one with a nontrivial tachyon solution having one zero. The energy difference between the T ≡ 0 solution and the solutions having more than one tachyon zeroes is even smaller. Therefore, for x < x c and m = 0, the standard tachyon solution has the lowest free energy and the chirally symmetric one, T = 0, has the largest free energy. All other undulating solutions have free energies that are between these two (and closest to the T = 0 solution). For |m| > 0, the standard tachyon solution has the lowest free energy and the nonstandard ones, have higher free energy. BKT scaling of the chiral condensate We shall now argue that the chiral condensate ∝ σ obeys the BKT [8] or Miransky [9] scaling behavior, as x approaches the critical value x c where the solution ceases to exist. This argument will be supported with numerical results in section 10.3. The ratio of the UV and the IR energy scales will show similar scaling. This behavior is known to arise both in Dyson-Schwinger [9,19] and holographic approaches [7,11,12,46]. Indeed, our analysis has many similarities with both Dyson-Schwinger and earlier holographic approaches, and in particular with the recent study [12] of a related model. We shall not give a precise proof but only sketch how the scaling arises. It is enough to study the region near the UV where the tachyon is small T (r) ≪ 1, so the tachyon decouples from the EoMs of λ and A. We will neglect the logarithmic corrections to the tachyon (see Appendix D.1.1) which play no role in the scaling argument. In the deep UV, where the coupling is small λ ≪ 1, the tachyon behaves as As r increases T stays small, and λ starts to approach the fixed point value λ = λ * which maximizes V g (λ) − xV f 0 (λ). The behavior of λ and A in this region is given by the T = 0 asymptotics of Sec. E.3: where δ is a positive parameter defined in Eq. (E.51). This approximation is valid for some intermediate region r UV ≪ r r IR , where at the scale r IR the tachyon finally becomes O(1), and drives the system away from the IR fixed point. We choose x near the critical value, 0 < x c − x ≪ 1. The tachyon IR mass was calculated above in Sec. 8: . Keeping formally x fixed while varying λ * , the right hand side will become equal to four as λ * reaches a critical value λ c : G(λ c , x) = 4. The scaling behavior will appear as λ * → λ c from above, and then also x → x c . We expand around this point: For 0 < λ * − λ c ≪ 1, we obtain The tachyon solution becomes When λ moves away from the fixed point, while moving towards the UV, it will at some point become smaller than λ c so that the asymptotic solution (10.9) fails. However, as we shall see, the leading scaling originates from the region where λ * −λ ≪ λ * −λ c , and we need the solution for smaller λ only to make contact with the deep UV behavior and the definition of σ. Even for λ λ c , κ(λ c − λ) is small for almost the whole range of r with r UV < r. Therefore, within the this region, it is sufficient to neglect κ(λ c − λ) and approximate T (r) ∝ r 2 . 
Further, we introduce an intermediate scale r̃ where λ * − λ ∼ λ * − λ c , and write the approximation as valid for r UV ≪ r ≲ r̃ , λ ≲ λ c (10.10), where we use r̃ instead of r UV as the reference value of the logarithm for later convenience. For r ≳ r IR there is no obvious way to write a good approximation for the tachyon solution. However, as we shall see, such an approximation is not necessary for finding the scaling behavior. The scaling behavior can be found by matching the tachyon solutions in the different regions. First, we require that the solutions in neighboring regions agree where their domains of validity overlap. Notice that σ is not a free parameter here, but it will later be fixed by the matching procedure. Further requiring approximate continuity at r ≃ r̃ we find the corresponding matching condition. The remaining task is to match with the unknown IR behavior at r ≃ r IR . First, r IR was defined as the scale where the tachyon becomes O(1) and drives the system away from the fixed point, so T (r IR ) ∼ 1, which fixes r IR in terms of σ (Eq. (10.14)). Finally, we need to match the solution to T ′ (r IR ), which is O(1/r IR ), since the tachyon EoM is apparently regular in this region. Notice that we must indeed fix this number to have a solution that asymptotes to the "good" singularity in the IR. The good IR asymptotics have one free parameter, the normalization of the tachyon in the IR. This is however already determined by requiring T (r IR ) ∼ 1. Therefore, the argument of the sine function in (10.11) is basically fixed to a given O(1) number at r = r IR , which gives the desired BKT scaling, with K positive. Finally, we notice that the connection between r̃ and r UV can be obtained from (10.3) by using the definition of r̃. This results in a power law, which can be neglected as a subleading correction to the exponential scaling. Taking this into account, we obtain the scaling of the ratio r IR /r UV . The inverse of this scaling result is expected to hold for any ratio of IR and UV energy scales independently of their precise definitions. By using Eq. (10.14) we then find the scaling result for σ. Notice that x c and λ c were defined by G(λ * (x c ), x c ) = 4 and G(λ c , x) = 4, respectively, so that λ * = λ c at x = x c . Since G is smooth in this region, λ * − λ c could readily be replaced by x c − x in the results above (after rescaling K). In the above discussion we neglected several subtleties. These issues are analyzed in Appendix H. In particular, after a more careful analysis, we are able to find explicit results for K and K̂. As a final remark, we stress that the above sketch was to a large extent independent of the details of the model. In particular, we did not need any information on the nonlinear terms in the tachyon EoM and on how the IR boundary conditions are fixed. Scaling of the free energy Let us then study the scaling of the free energy difference ∆E between the solutions without and with chiral symmetry breaking (at zero quark mass), which was studied numerically in Sec. 9. Linearizing the EoMs for A and λ as in Eq. (9.8), one might expect the analysis of Sec. 9 to carry over directly. However, as we have pointed out, in the walking region r UV ≲ r ≲ r IR the tachyon is O(r 2 ), and therefore the tachyon contributions O(r 4 ) do couple to A 1 and λ 1 . Consequently, we expect that, e.g., ∆A 1 (r) ∼ σ 2 r UV 2 for r UV ≲ r ≲ r IR (10.22), where ∆A 1 (r) denotes the difference in A 1 between the solutions with intact and broken chiral symmetry. We expect that to a good enough approximation, the size of A 1 equals the coefficient  defined in Eq. (9.9) even in the walking region where the UV series does not converge.
Therefore, we obtain ∆Â ∼ σ 2 r UV 2 (10.23) and finally the scaling result for the free energy difference follows from Eq. (9.10). Comparison with numerical results We now compare the analytic results above to numerical solutions. In Fig. 13 we plot the tachyon (left) and the coupling λ (right) as functions of log r in an extreme walking case with x = 3.992 such that x c − x ≃ 0.004. The numerical tachyon solution with zero quark mass (blue thick curve on the left) was obtained by gluing together the various solutions described in Appendix G.2. We compare the solution to the analytic approximations of Eqs. (10.2), (10.10) and (10.11), shown as thin green dotted, magenta dashed, and solid red curves, respectively. The parameters of these curves were chosen such that σ has the extracted value (see Appendix G.2), r UV = 1/Λ UV with Λ UV obtained by fitting λ to its UV expansion, and r̃ is the value where λ reaches λ c . The parameters λ c , λ * and κ were calculated directly from the potentials, and the phase φ was given an arbitrarily small value. The agreement between the approximation and the full numerical solution is remarkably good. We have also compared the expected scaling to the values of the chiral condensate that were extracted from the background (see Sec. 8) as detailed in Appendix G.2. The results for various values of x are the dots in Fig. 14. The solid line is a fit to the BKT scaling behavior; the analytic estimate for K̂ obtained in Appendix H agrees with the fitted result 6.8 within the precision of the fit. We have checked that using only a few of the data points with the highest x brings the fitted value of K̂ closer to the analytical one. In Fig. 15 (left) we compare the ratio of the scales Λ UV = Λ and Λ IR = 1/R, as defined by the UV and IR expansions of the background, respectively, to a BKT scaling fit with K̂ ≃ 3.4. We also checked that the scaling of Eq. (10.14) is satisfied to a high precision in Fig. 15 (right). Conclusions We analyzed a novel class of holographic models (V-QCD), which reproduces the main features of QCD in the Veneziano limit of large N f and N c with x = N f /N c fixed. V-QCD is on one hand based on a successful holographic model of Yang-Mills (YM) theory, termed improved holographic QCD (IHQCD). IHQCD contains a dilaton coupled to a five-dimensional gravity background. Its characteristic feature is the holographic renormalization group flow of the YM coupling constant, identified as the exponential of the dilaton, as a function of the energy scale, identified roughly as the inverse of the bulk coordinate. On the other hand, the model builds on earlier work on including matter in holographic models via flavor branes in the quenched approximation, i.e., neglecting the backreaction of the brane on the dilaton and the background metric. In particular, we use the tachyon Dirac-Born-Infeld (DBI) action originally introduced by Sen. Putting together these two frameworks, the dynamics in the Veneziano limit is modeled by a system of a dilaton and a tachyon coupled to five-dimensional gravity. The dilaton action is fixed as in IHQCD. For the tachyon we consider a generalized DBI action, where the dilaton dependence is parametrized in terms of a few potentials, which are a priori unknown. We may then constrain the unknown potentials, among other methods, by requiring that the UV physics implements that of (perturbative) QCD and that the solutions are regular in the IR. The essential observation for uncovering the dynamics of the system is the identification of the effective potential.
It involves terms from both the dilaton and DBI actions, and takes the role of the dilaton potential of IHQCD. For large x, the perturbative Banks-Zaks IR fixed point can, and must, be implemented through the effective potential. Even further away from the Banks-Zaks region, the solutions can continue to flow to the fixed point in the IR. Interestingly, the fixed point can also be "screened" by the tachyon dynamics, such that the theory comes very close to it and the coupling almost freezes, but eventually starts running again in the deep IR. Such backgrounds are termed quasiconformal or "walking" ones. In this article we carried out a detailed analysis of the backgrounds and the zero-temperature phase structure of V-QCD. Our main results are as follows: • We generalize the holographic RG flow of IHQCD to include the evolution of both the dilaton and the tachyon, which are controlled by the holographic β- and γ-functions, respectively. Remarkably, for potentials that are analytic in the UV, the interplay of the dilaton and the tachyon automatically results in the anomalous dimension of the quark mass having physically reasonable UV asymptotics, i.e., a power series in the 't Hooft coupling. • If we require that the UV expansions of the potentials capture the essential QCD physics, and choose potentials that join smoothly with their UV expansions, the V-QCD phase diagram is always the physically relevant one. That is, for zero quark mass we find a phase transition at x = x c . The conformal window, where the backgrounds have an IR fixed point, extends from x = x c up to the maximal value x = 11/2 where asymptotic freedom is lost. Below x = x c , chiral symmetry is broken, and the theory has similar behavior in the deep IR as the QCD we have observed in Nature. • The edge of the conformal window is stabilized such that the dimension of the quark mass at the IR fixed point approaches two (the anomalous dimension approaches one) as the edge is approached. Under reasonable assumptions for the potentials, we find values of x c within a narrow band around the value x c = 4. A high value of the anomalous dimension is important for applications to (walking) technicolor. • Below x c but close to the edge of the conformal window, we find quasi-conformal or walking backgrounds. • Backgrounds for (any) nonzero quark mass exist. In the conformal window the quark mass triggers chiral symmetry breaking, and below x c introducing the quark mass affects the scales of the theory in a physically reasonable manner. • We verify that the standard vacua which give the above physical phase structure also have the lowest free energies. They have monotonically increasing or identically vanishing tachyon profiles, when the chiral symmetry is broken or intact, respectively. For x < x c we also found a tower of unstable "Efimov" vacua, for which the tachyon can change its sign arbitrarily many times. • Finally, as x approaches x c from below for the backgrounds with zero quark mass, we show that the chiral condensate, as well as the length of the energy range over which the background stays close to the fixed point, obey the characteristic Miransky or BKT scaling law. A. General solutions for β and γ It is not difficult to construct numerically β and γ which solve the partial differential equations (7.5) and (7.6). We can use the fact that they were derived from a set of ordinary differential equations.
More precisely, the equation have the form where F β and F γ are independent of the partial derivatives of the β-and γ-functions (and the dependence on potentials was left implicit). Now we can apply a standard method for solving first order partial differential equations (PDEs), which in this case applies even for a pair of PDEs. We first look for curves along which the system reduces to ordinary DEs. Not surprisingly, such curves coincide with the holographic RG flow. That is, we can implicitly define such family of curves, by requiring that each curve (λ(A), T (A)), where A parametrizes the curve, satisfies (A.4) Further, the equations along the curves are essentially in one-to-one correspondence with the system (5.8)-(5.11). Indeed, the original system of DEs can be formally recovered be eliminating the β and γ functions by using Eqs. (A.3). Therefore, any solution of the original system satisfies the PDEs along the curve (λ(r), T (r)) that it defines. As the final step of the method, we notice that the PDEs only depended on the derivatives of β and γ along the curves. The derivatives perpendicular to the curves can be freely chosen. Therefore, any (continuously parametrized) family of curves which satisfies (A.3) will define a solution to the PDEs in some region of the (λ, T )-plane. That is, the general solution to the PDEs is given by the planes that the solutions of Eqs. (5.8)-(5.11) draw in (β, λ, T ) and (γ, λ, T )-spaces as the boundary conditions to the equations are varied in an arbitrary manner. Remarkably, the amount of degrees of freedom of the solution matches with the expectation for a system of two first order PDEs, for which the solution should depend on two arbitrary functions. Since β, γ, λ, and T are all invariant under the symmetry of (8.4), the number of integration constants in the system (5.8)-(5.11) which are relevant for the solution of PDEs is three. A generic one-dimensional family of these parameters, and consequently a generic solution to the PDEs, can be defined by giving the dependence of two of these in terms of the third one which makes two arbitrary functions. We have demonstrated how the boundary conditions of the PDEs are mapped to those of the original set of EoMs. Now the analysis of the solutions of the EoMs (see Appendices D and E) suggests at least two natural ways to choose special one-parameter families of the boundary conditions, and to define special solutions to the PDEs. 1. Require that the solutions to EoMs end in the "good" IR singularity of Sec. E.2.2 (with varying quark mass). 2. Require the more generic IR singularity of Sec. E.2.1 and keep the quark mass fixed. (In the second case, the "good" IR singularity is expected to arise as some limit of the more generic IR behavior so that it appears at a boundary of the region in (λ, T )-space where the β and γ functions are defined.) B. A coordinate transformation It turns out that it is convenient to solve the system numerically using A as a coordinate instead of r. Therefore, we present the system after this transformation: where the dots are derivatives with respect to A and we defined C. Examples of explicit potential choices We will construct explicit examples of potentials, which give physically reasonable backgrounds. We consider an Ansatz for V f of the form and assume that h does not depend on T . 
First, we expect in the UV The potentials must also produce the good kind of IR singularity discussed in Appendix E.2.2 with P = 1/2 [36], which constrains the asymptotics of V g to (assuming that V f 0 plays no role in the IR for the chiral symmetry breaking solution as tachyon and/or λ diverge so that exp(−a(λ)T 2 ) tends to zero). In addition, the IR behavior of V f 0 should be chosen such that the fixed point (maximum) of V g (λ) − xV f 0 (λ), which is guaranteed to be present at large x → 11/2 if we fix the small λ series of the potential appropriately, continues to exists up to sufficiently low x. This is most easily achieved if V f 0 diverges faster than V g as λ → ∞, so that the fixed point exists for all x. Another possibility, mentioned in Sec. 6, is that the fixed point exist only for x ≥ x * , where x * > 0 is relatively small. We have checked that choosing such potentials does not change any results at qualitative level, and do not discuss this choice further here. We also require that the potentials are analytic at λ = 0. A simple Ansatz that meets these requirements and involves free coefficients up to two-loop order is Notice that the AdS radius must be well defined for all x up to x = 11/2, which sets an upper bound for W 0 for given V 0 . We also expect V f to be positive at small λ. 27 Therefore, we take However, as discussed in Appendix D.1.1, the sum of the anomalous dimensions of the quark mass and the chiral condensate is not equal to 4 if W 0 = 0 (then δ = 1 in Appendix D.1.1). Therefore, we discard this option. Notice also that if we saturate the upper limit with W 0 = 2 11 V 0 , the AdS radius diverges in the Banks-Zaks limit x → 11/2 unless we choose an x-dependent V 0 . In addition, we can fix the coefficients V i , W i by mapping to the field theory β-functions as (C.10) Here the (scheme independent) QCD β-function in the Veneziano limit with vanishing quark masses up to two-loop order are . Setting V 0 = 12 and W 0 = 12/11, which lies in the middle of the allowed region of Eq. (C.8), we obtain (C.12) Further, we choose λ 0 = 8π 2 to prevent the higher order terms in the UV expansion of the potentials from growing unnaturally large. In addition, we need to choose the functions a(λ) and h(λ) appropriately. As discussed in Section 8 and in Appendix D.1.1, we must have Here the coefficient h 1 can be matched with the one-loop anomalous dimension of the quark mass, which reads in the Veneziano limit By matching with Eq. (D.10) from Appendix, we obtain − 3 (4π) 2 = 9 8 4 3 The IR behavior of the functions h(λ) and a(λ) is linked to the tachyon behavior in the IR for the solution which breaks chiral symmetry. They are discussed in detail in Appendix E.2.2, where essentially only two different cases, which are consistent with the good IR singularity, are found. These possibilities are produced by the following choices. I We can choose the function a(λ) to be constant, and the function h(λ) to have powerlaw IR asymptotics: II The other choice is practically a generalization of the first one. It has a simple form of h(λ), but more complicated a(λ): so that ρ = 4/3 and σ = 2/3 in Appendix E.2.2. The extra term proportional to h 2 was added to make σ positive. We choose its coefficient to be small, h 2 = 1/λ 2 0 . The tachyon behaves as for large r. Here r 1 is a free parameter. We have checked that both scenarios lead to qualitatively similar results. In the numerical calculations we use, for definiteness, the choice I, unless stated otherwise. D. 
UV behavior In this Appendix we shall discuss the UV behavior of the system (5.8)- (5.11) in general. This analysis should be compared to that carried out in [36,38] in the absence of the tachyon backreaction. Recall that apart from the two degrees of freedom of the transformation (8.4), the solutions contain three integration constants. In the discussion below, the degrees of freedom refer to these three "nontrivial" constants. We shall not discuss the most general behavior of the solutions, but make some physically motivated assumptions. In particular, we restrict ourselves to the potentials V g and V f which are bounded as λ → 0, and which are smooth at any finite λ. In general we are interested two types of potentials: ones that start from a constant value at λ = 0, are monotonic as λ increases, and approach +∞ as λ → ∞, and ones that start at a constant value at λ = 0, increase until they reach a maximum at some λ = λ * , and thereafter monotonically decrease to −∞ as λ → ∞. This should be kept in mind while reading the analysis below, as some of the arguments below may fail for more generic potentials, even though no assumptions are listed explicitly. Further, we mostly restrict to effective β-functions β eff = dλ/dA which are negative. Near the standard UV singularity, the tachyon must behave schematically as T (r) ∼ mr + σr 3 whereas the other fields have a logarithmic dependence on r. Therefore the tachyon decouples asymptotically as r → 0, and we may analyze its UV behavior by first solving A and λ with T = 0 and then by analyzing the tachyon EoM for this background. Asymptotic behavior of A and λ We take Then the (leading) UV expansions of A and λ can be written as Notice that they contain no free parameters (in addition to Λ). In fact, after using the equations of motion there is one degree of freedom left in the coefficients of the above expansion, but as it turns out, this freedom can be eliminated by rescaling Λ. We have removed this extra parameter by requiring that the coefficient of the 1/(log rΛ) 2 term in the expansion of λ vanishes. Tachyon UV asymptotics We take and parametrize where δ is a nonnegative integer. Here the leading coefficient of h/a was already fixed in order to have the correct UV mass of the tachyon [48]. We further assume that It is enough to study the linear terms in the tachyon EoM, which read From this one could expect that the solution has the form T (r) ∼ mr (1 + O (1/ log r)) + σr 3 (1 + O (1/ log r)) , (D. 8) i.e., the logarithmically suppressed corrections to the EoM show up as logarithmically suppressed corrections to the functions. However, this is not the case: an Ansatz for the solution which assumes this kind of corrections fails. The correct asymptotics reads where C i and D i are known functions of δ, ξ, h 1 , V 1 , V 2 , W 1 , and W 2 , which we suppressed for brevity. The "surprising" logarithmic power corrections in Eq. (D.9) can actually be identified as the nontrivial running of the quark mass and the condensate in the UV, which arises as their anomalous dimensions are different from zero. To make this explicit, we calculate the gamma function T ′ /A ′ in the UV. For m = 0 it is dominated by the linear tachyon solution: whereas for m = 0 we find (D.11) The next-to-leading terms in (D.10) and (D.11) are mapped to the one-loop anomalous dimensions of the quark mass and the chiral condensate in QCD, respectively. Since they should add up to zero, we must have δ + ξ = 0. The easiest way to satisfy this is to take δ = 0 = ξ. 
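For orientation, the corrected asymptotics of Eq. (D.9) can be written schematically in terms of the two linearly independent solutions T m and T σ quoted in Appendix G.2 (the exponent C of the logarithm is fixed by the one-loop anomalous dimension data; the subleading coefficients C i and D i are suppressed here, as in the text):

\[
T(r) \;\simeq\; m\,\ell\, r\,\bigl(-\log(\Lambda r)\bigr)^{C}\Bigl[1+\mathcal{O}\bigl(1/\log(\Lambda r)\bigr)\Bigr]
\;+\; \sigma\,\ell\, r^{3}\,\bigl(-\log(\Lambda r)\bigr)^{-C}\Bigl[1+\mathcal{O}\bigl(1/\log(\Lambda r)\bigr)\Bigr].
\]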
In particular, the expression h(λ) = λ −4/3 with ξ = −4/3, which was found in the probe limit [36], does not work, since δ was required to be an integer to ensure that the β-functions have power series with integer powers at λ = 0. Finally, it is easy to verify that the UV expansion presented here match with those of Sec. 7.1, which were derived by using the holographic beta functions. D.1.2 A bounce back at finite λ A bounce back may take place when the potential V eff (λ) = V g (λ) − xV f (λ, T = 0) has a maximum at some λ = λ * signaling the presence of an infrared fixed point, and V ′ (λ) < 0 for λ > λ * . If the tachyon is sufficiently small, the effective β-function dλ/dA hits zero, and becomes positive when the system is evolved toward the UV. Therefore the coupling has a finite minimum (> λ * ) and the above "standard" UV singularity at λ = 0 is not reached. All fields are analytic at the point where λ ′ = 0. The bounce back behavior is found in the white regions of Fig. 4. Examples of the β and γ-functions evaluated along the RG flow for the bounce back scenario are shown as the dotted curves in Fig. 16 in Appendix F. D.2 Special case: UV fixed point at finite λ We have identified one special UV singularity, which is found as a limiting case between the two first generic behaviors discussed above (the blue curve of Fig. 4). In this case the asymptotic solution is expected to depend on two integration constants. The solution terminates as the β-function dλ/dA approaches zero at the maximum λ * of the effective potential V eff (λ) = V g (λ)−xV f (λ, T = 0). The singularity is found at a fixed value of r = r * where A diverges and λ approaches λ * from above. Examples of the β and γ-functions evaluated along the RG flow with this UV fixed point are shown as the thick blue curves of the middle and right columns of Fig. 16 in Appendix F. Depending on the value of the tachyon mass at the fixed point (see Sec. 8.3), the asymptotics may be written in two different forms. Let us recall the definition of the dimension ∆ at the fixed point: . When x > x c the right hand side of the definition is smaller than 4 so that there are two real roots ∆ = ∆ ± . The geometry approaches the AdS one near the fixed point, where ∆ − is the smaller root, ℓ 2 * = 12/V eff (λ * ), we chose r * = 0, and the constants A 0 , λ 0 , and T 0 satisfy two constraints which can be solved from the EoMs (5.8)-(5.11). There are two free parameters which can be taken to be the coefficients of the tachyon solutions with the dimensions ∆ ± . The solution associated to ∆ + will appear at the next-to-leading order only if we choose the coefficient T 0 of the above solution to vanish. E. IR behavior In this Appendix we discuss the IR behavior of the system (5.8)-(5.11) in general. As above, we restrict to certain quite simple potentials and to cases where the β-function dλ/dA takes negative values in the vicinity of the IR singularity. The IR structure is much richer than the UV one mostly because there are much less obvious constraints on the potentials. We shall discuss here both generic and special singularities. Of these the most interesting ones for us will be the the special ones that depend only on one free parameter (excluding trivial reparametrization symmetries). They can be identified as the "good", or fully repulsive, IR singularities. Examples of such singularities are identified below in Sec. E.2.3 partially based on the analysis of Sec. E.2.2. 
E.1 Generic cases There are two generic IR "singularities" which in fact do not involve divergences of any of the fields. Therefore, they also appear independently of the details of the potentials to a large extent. E.1.1 Divergence of the derivative of the tachyon A typical, generic IR behavior is similar to what was found in the probe limit in [36], where the tachyon goes to a constant value but its derivative diverges. Indeed, the Ansatz solves the equations of motion (5.8)-(5.11) quite in general. We do not present the rather complicated constraint equations which follow for the constants in the expansions, but it is not difficult to check that the solution has three independent integration constants and is thus indeed generic. Since none of the fields diverge at r = r * , it is natural to take all the potentials to be analytic at the point of expansion, and the solution is expected to exist to a large extent independently of the choices for them. Notice also that there is no real singularity at r = r * : one can make a coordinate transformation such that all fields are analytic in the vicinity of this point. A natural choice that realizes this is to use T as the coordinate. Of course, this leads to all fields being double-valued functions of r, with the two branches having the same absolute value of T 1 but opposite signs. If this is allowed, it is not hard to find analytic solutions which, for example, start at a UV singularity at λ = 0, bounce back at a point where the tachyon derivative diverges, and return to another singularity at λ = 0. E.1.2 A bounce back as dλ/dA → 0 There is also a three-dimensional space of solutions where the coupling reaches a maximum value, and then starts to decrease with decreasing A so that the β-function dλ/dA becomes positive. All fields are analytic in r at the point where the β-function is zero. We have not checked how the solutions continue to evolve in the region of positive β-function. E.2 Special IR singularities In addition to the generic IR behaviors discussed above, we have identified several true IR singularities, where the fields A and λ diverge. Note that above, the divergence of the tachyon derivative was proportional to the parameter T 1 , which could take both positive and negative values. Solutions with arbitrary small |T 1 | also exist, as well as the limit T 1 → 0 where the solution, which in general ends in the divergence of A and λ rather than T . Therefore we expect that spaces of generic solutions with positive and negative T 1 , respectively, will be separated by a subspace of solutions with singularity in the IR. The behavior of the system in this case depends strongly on the asymptotics of the potentials. However, the tachyon often decouples asymptotically from λ and A, in particular if the tachyon diverges in the IR, which is the expected behavior for physically relevant singularities [48,55]. Here we shall assume that the decoupling takes place, since completely general classification of the singularities seems daunting. As the tachyon decouples, the classification of singularities for A and λ follows earlier studies [36,38]. We shall review the results here for clarity. Assuming that the tachyon tends to T 0 in the IR, the effective potential that drives the metric and the coupling in the IR is where T 0 can be infinite in which case V IR (λ) = V g (λ). We parametrize as λ → ∞. The equations of motion are Eqs. (5.8) and (5.9) with T (r) ≡ T 0 so that T ′ (r) = 0. 
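The parametrization of the large-λ asymptotics left implicit above is presumably the power-law-times-logarithm form used in [36,38], written in terms of the exponents Q and P that organize the classification in the following subsections:

\[
V_{\mathrm{IR}}(\lambda) \;\sim\; V_0\, \lambda^{2Q}\, (\log\lambda)^{P} , \qquad \lambda \to \infty .
\]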
There are two types of singularities (see [38]): "generic" ones where in the IR, and "special" ones where X → − 3 4 Q. E.2.1 Generic metric singularity The generic singularities exist for Q ≤ 4/3 and depend on one free parameter. The system is solved by the Ansatz is the "conformally invariant" distance from the singularity, λ 0 is the free parameter, and the dropped terms are suppressed by 1/ log δr. Plugging this in the equations of motion, For Q = 4/3 the singularity exists if P < 1. In this case the asymptotics reads (E.13) E.2.2 Special metric singularity This kind of singularity exist for 0 < Q < 4/3 so that the asymptotic value −4Q/3 of X lies between zero and one. We need to require that the potential is asymptotically positive, V 0 > 0. The solution does not involve any integration constants in addition to the ones linked to the reparametrization symmetry, which suggest that when combined with a proper tachyon solution, "good" IR asymptotics can be identified. If 2/3 < Q < 4/3 we find a singularity at finite value r * of r: where again δr = (r * − r)/R. Here R and r * are the integration constants which reflect the reparametrization symmetry, but no other free parameters appear. If 0 < Q < 2/3 similar formulas hold for r → ∞: where nowr = (r − r 0 )/R. If Q = 2/3, and P < 1, there is a singularity at r = ∞. The asymptotic solution reads As pointed out in [36], this special case produces a good match with the IR physics of QCD, in particular if we choose P = 1/2. We will confirm below that the potentials of the tachyon action can be chosen such that the tachyon asymptotics also meets all requirements known to us, and the produced singularity is of the "good" kind. We will use these singularities in our analysis, and the above formulas will be used to fix the IR boundary conditions for the numerical solutions. If Q = 2/3, and P > 1, we find a singularity at finite value r = r * of the coordinate. The asymptotics transforms to whereᾱ = 1/(P − 1) and A 0 is related to R as in Eq. (E.21). Finally, for Q = 2/3 and P = 1 the metric factor A diverges exponentially as r → ∞, E.2.3 Tachyon behavior To complete the analysis, one should insert each of the above asymptotics to the tachyon EoM and check what the tachyon asymptotics is for various choices of the potentials V f and h, and start looking for the "good" kind of singularities. Once the potentials are fixed, one can check if a solution, which is consistent with the assumption that the tachyon decouples, indeed exists. If it does, it can depend on one or two additional parameters. For the good, fully repulsive singularities the number of free parameters (excluding those related to the reparametrization symmetry) is equal to one, i.e., we must have a special metric singularity combined with a one-parameter tachyon asymptotics. In addition we should require that the tachyon diverges in the IR, since that kind of solutions have bulk flavor anomalies similar to those of QCD [48,55]. Here we shall restrict to the special metric IR singularity with Q = 2/3 and P < 1, since it is expected to include the most interesting cases due to additional constraints from confinement and excitation spectra [36]. We parametrize With this parametrization, the tachyon EoM reads 30) and the primes are derivatives with respect to r. Notice that the last two terms vanish if a(λ) is constant. 
We consider generic power-law asymptotics of the potentials at large λ, and introduce a shorthand notation for the asymptotic behavior in (E.18)-(E.20): where we set r 0 = 0. Then the leading behavior of the coefficients F i at large r is For σ = 0 the leading terms of F 5 and F 6 , given above, become zero. If a(λ) is constant for all λ, these terms actually vanish identically. If a(λ) only asymptotes to a constant value, the leading behavior of F 5 and F 6 is determined by the next-to-leading term in the asymptotics of a(λ). These will contribute in some particular cases, as we discuss below. • σ > 0 and ρ < 4/3. The tachyon EoM (E.27) is dominated by the terms ∝ T , T ′ T 2 , which have the coefficients F 2 and F 5 , respectively. Solving the tachyon from these terms gives the asymptotics Substituting this back to the full equation of motion, we see that with the above constraints for σ and ρ it is indeed a solution. Further, the factor exp −a(λ)T 2 vanishes double-exponentially, which confirms the decoupling of the tachyon from the other fields. In addition to the trivial reparametrization symmetry, the only free parameter of the asymptotic solution is T 0 , which suggest that the singularity is of the "good" kind. However, tachyon solutions that are regular in the IR have bulk flavor anomalies which differ from those of QCD [48,55]. Therefore, we discard this option. • σ > 0 and ρ = 4/3. The same terms continue to dominate, but the asymptotic changes. We find instead where r 1 is a free parameter. One can again check that this is indeed a solution, and that the tachyon decouples from A and λ. The terms proportional to T ′2 T and T ′3 T 2 are suppressed only by r −α , but taking them into account results in a trivial factor multiplying the equation of motion, so that the solution in Eq. (E.41) is unchanged for any value of α. The asymptotics has only one free parameter, and the tachyon diverges as r → ∞, so this solution is acceptable. • σ ≤ 0 and ρ < 4/3 − σ. We find two different cases. The asymptotics is qualitatively similar to (E.40), but arises in a slightly different way. The leading terms are those proportional to T ′ and T , corresponding to coefficients F 1 and F 2 , respectively, as well as the double derivative term T ′′ . This term being leading, one might expect that the asymptotics contains two free parameters, and is thus not of the good kind. This is indeed the case for τ > 2 + σ. For τ ≤ 2 + σ only one parameter family of the asymptotic solutions is consistent with the other terms being subleading. However, since the tachyon becomes constant in the IR, we have the aforementioned problem with flavor anomalies, and hence we shall discard this solution in any case. • σ = 0, ρ = 4/3, and τ < 10/3. The leading terms are typically those proportional to T ′3 and T ′2 T . However, if a(λ) is not constant, there is an extra constraint from the next-to-leading term in the expansion of a(λ). If, for example, a(λ) = a 0 + a 1 /λ + · · · , we need to require α > 1. Assuming that all constraints are met, we find the exponential behavior where T 0 is a free parameter and This special case is the asymptotics that was discussed in the probe limit in [36], and is acceptable. By inserting the expressions for A c and λ c from Eqs. (E. 18), (E.20), and (E.21), the coefficient simplifies to • σ = 0, ρ = 4/3, and τ > 10/3. The leading terms are proportional to T ′ and T , which results in the tachyon vanishing asymptotically. 
In this case the factor exp −a(λ)T 2 goes to one rather zero, which suggest that the correct physical picture, as discussed in the main text, cannot be achieved, even though the tachyon apparently decouples in the IR. • σ < 0 and ρ = 4/3 − σ. The leading terms are again proportional to T ′ and T . For τ < 10/3 − σ/2, the solution is exponentially increasing, The factor exp −a(λ)T 2 vanishes in the IR limit if α < 1, which is the expected behavior, so the solution is acceptable. For α > 1 or τ > 10/3 − σ/2 the factor goes to one instead, and it seems that the correct physical picture cannot be obtained. The borderline case α = 1 has either behavior depending on the values of other parameters. In all the remaining cases, in particular for large ρ, the asymptotic solution of Eq. (E.27) oscillates with increasing frequency as r → ∞. Therefore, the tachyon is apparently not decoupled from the other fields, and the singularity discussed in the subsection does not exist. In summary, we found good solutions only if ρ = 4/3 and σ ≥ 0, or ρ = 4/3 − σ and σ < 0. The latter case is included for 0 < α < 1, and the former case we found some extra constraints at the endpoint σ = 0, which are detailed above. In the numerics we shall fix P = 1/2 so that α = 2, and use potentials with ρ = 4/3 and σ ≥ 0, see Appendix C. E.3 Singularity at the IR fixed point with T ≡ 0 As discussed in the main text, some of the solutions with identically vanishing tachyon are expected to correspond to field theories where the chiral symmetry is conserved. Potentials for Yang-Mills theory were discussed extensively in [36,38]. Therefore, we will only consider the case of potentials V eff (λ) = V g (λ) − xV f (λ, 0) which have a (single) maximum at some λ = λ * , interpreted as an infrared fixed point. In the absence of the tachyon the space of solutions is one-dimensional. We identify two distinct one-parameter families of solutions: one where the solution bounces back to smaller couplings as the β-function dλ/dA goes to zero (a special case of the bounce-back solutions discussed above in Sec. E.1.2) and another where the β-function asymptotes as dλ/dA ∼ −3λ (a special case of the solutions discussed in Sec. E.2.1). These families are separated by a single solution where the β-function terminates at zero at the maximum of the potential λ = λ * . The limiting solution has a singularity at r = ∞ where A diverges and λ approaches λ * from below. We expand the potential around λ = λ * as where V 2 is negative. The equations of motion are solved by where A 0 and δ are related to the IR AdS radius and the derivative of the β-function, respectively, by and we also find (E.52) E.3.1 Generalization to T ≪ 1 A generalization of the T ≡ 0 case, which is of high interest to us, is where the tachyon mass and the condensate are very small in the UV, such that the tachyon remains small (|T | ≪ 1) even as λ approaches the fixed point (λ * − λ ≪ 1). However, for any nonzero tachyon profile, the tachyon will eventually become O(1) at some high r = r IR , and drive the flow away from the fixed point. Setting r 0 = 0, the above formulas (E.48) and (E.49) then hold in the limit R ≪ r ≪ r IR (with the understanding that depending on the value of δ, the next-to-leading terms may be affected by the tachyon solution). This approximation is useful in the quasiconformal, or "walking" region of backgrounds for x below, but close to the edge of the conformal window at x c . 
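For reference, the expansion of the effective potential around the fixed point, left implicit in Sec. E.3 above, is presumably the quadratic one (λ * is a maximum, so the linear term vanishes and V 2 < 0), with the associated IR AdS radius as quoted in Appendix D.2:

\[
V_{\mathrm{eff}}(\lambda) \;\simeq\; V_{\mathrm{eff}}(\lambda_*) \;+\; \tfrac{1}{2}\, V_2\, (\lambda - \lambda_*)^2 , \qquad
\ell_*^2 = \frac{12}{V_{\mathrm{eff}}(\lambda_*)} .
\]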
Notice that R is the scale where the coupling starts to deviate significantly from its fixed point value λ * , and may be therefore identified as the UV scale of the theory. Therefore we shall denote R = r UV in this case. The tachyon profile can also be derived in this region. Inserting the solutions of Eqs. (E.48) and (E.49) into the tachyon EoM (5.11), and taking |T | ≪ 1, we get There are two kinds of solutions depending on the value of the tachyon mass term. As in Sec. 8.3 we can define the dimension ∆ . If the combination on the right hand side is less than 4, which corresponds to x > x c we find two real roots ∆ ± . In this case the tachyon solution is for r IR ≪ r ≪ r IR , where ∆ − is the smaller root (unless we tuned the boundary conditions such that the quark mass is very small, in which case the solution with ∆ − → ∆ + needs to be included). For x < x c we have two complex roots ∆ ± = 2 ± ik instead, and the tachyon behaves as This oscillating solution is the root of the rich structure of backgrounds found for x < x c in Sec. 8, which are also discussed below in Appendix F. F. Structure of the background as a function of x and T 0 In this Appendix we will explain in detail how the phase structure of the background solutions seen in Fig. 4, and in particular the region with nearly conformal behavior, arises. The structure in the plots is linked to the transition of the system from the UV region, where the tachyon is small and the background is characterized by the potential V eff (λ) = V g (λ) − xV f 0 (λ), to the IR region, where the tachyon is large, and the background is characterized by V g (λ). First, recall that V eff (λ) has a maximum at some λ = λ * which depends on x. For x → 0 we find from Eqs. (8.24) and (8.26) that λ * → ∞, whereas for x → 11/2 we obtain λ * → 0. This maximum suggests a presence of an IR fixed point of the β-function for the coupling λ. However, for a nontrivial tachyon profile the fixed point is not reached. When approaching λ = λ * from the UV, the solution is driven away from the fixed point as soon as the tachyon becomes large, T ∼ O(1), and the system enters the region where all EoMs are nontrivially coupled. The point where this happens, is controlled by the normalization of the tachyon in the IR, i.e., the value of T 0 (Assuming potentials of scenario I). The larger T 0 , the smaller is the value of λ where the tachyon decouples. The solid blue curve in Fig. 4 is the critical value of T 0 where the tachyon decouples at λ = λ * . Actually, precisely at this curve the UV asymptotics is that of Sec. D.2, and the solution ends at the fixed point. If T 0 is smaller than the critical value, the tachyon will become small ≪ 1 during the flow toward the UV when we still have λ > λ * , so that the beta function corresponding to the effective potential V eff is positive. As the tachyon decouples, the beta function flow approaches that defined by V eff . Therefore, to the the region with "standard" UV behavior is not reached at all, but the solution bounces back at a finite value of the coupling (see also Fig. 16 below) where the beta function (evaluated along the RG flow) crosses zero. The strong dependence of the blue curve on x is explained by the dependence of λ * on x: for example as x → 0, the fixed point moves to large values of λ * , and the critical tachyon IR value T 0 required for avoiding the bounce back approaches zero. 
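To make the oscillation mechanism used below concrete, recall the standard relation between the tachyon mass at the fixed point and the dimensions ∆ ± (this is the content of the definition of ∆ in Sec. 8.3), together with a schematic form of the solution of Eq. (E.56); the normalization and the node-counting estimate are our own sketch rather than quotes from the text:

\[
\Delta_\pm = 2 \pm \sqrt{4 + m_{\mathrm{IR}}^2 \ell_{\mathrm{IR}}^2} \,, \qquad
\Delta_\pm = 2 \pm i k \quad \text{for } m_{\mathrm{IR}}^2 \ell_{\mathrm{IR}}^2 < -4 \ \ (x < x_c) ,
\]

so that in the quasiconformal regime r UV ≪ r ≪ r IR the small tachyon behaves schematically as

\[
T(r) \;\propto\; r^{2}\, \sin\!\bigl(k \log(r/r_{\mathrm{UV}}) + \phi\bigr) ,
\]

and therefore develops roughly n ≈ (k/π) log(r IR /r UV ) zeroes in this range.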
The argument above is not rigorous, as it involves the location of the decoupling of the tachyon which is not defined precisely. However, one should notice that as the blue curve is approached from above, the system is on the verge of reaching an IR fixed point so that the coupling freezes, i.e., it evolves very slowly for a large range of r. Meanwhile, the tachyon grows relatively fast with r. Therefore, the value of λ where the tachyon decoupling takes place becomes more and more precisely defined as the blue curve is approached from above. This can also be seen in the numerical examples below and in Section 8. To understand how the red dashed curve arises, we need to study the tachyon solutions in the UV region. Below the red curve the tachyon solution develops a zero, as is required by the negative value of the quark mass. This zero actually appears in the region, where (the absolute value of) the tachyon is still small. If we continue on the plot toward lower values of T 0 , the quark mass becomes again zero, and then positive extremely close to the solid blue curve. This happens as the tachyon develops a second zero in the UV. We can continue further, and find solutions with an arbitrary number n of zeroes (which are very hard to construct numerically). See Fig. 5 (left) in the main text for the qualitative behavior of the quark mass as the blue curve is approached. The oscillating behavior of the tachyon in the UV is linked to the violation of the BF bound: when ∆ IR (4 − ∆ IR ) is smaller than 4, the solutions for ∆ IR are complex, which results in the oscillations of the tachyon solution. We plotted the squared mass of the tachyon at the IR fixed point in Fig. 2, where the solid thick blue curve corresponds to the present choice of parameters. As we approach the solid blue curve from within the contoured region in Fig. 4, the background can be approximated near the fixed point as discussed above in Sec. E.3.1. By the definition of Eq. (8.17), the BF bound is violated at the fixed point for x < x c . Therefore, as we approach the blue curve from above in this region and the system is about to develop a fixed point, the tachyon necessarily oscillates as soon as values of λ close enough to λ * are reached. In this case, the tachyon is well approximated by the solution of Eq. (E.56), where k is fixed by the potentials, but φ and T 0 are free parameters, which will be fixed by the boundary conditions. For x > x c , the BF bound is not violated at the fixed point, and therefore no oscillations are expected, and the tachyon dependence is as in Eq. (E.55). The (uppermost) curve of zero quark mass (the red dashed one in the plots) essentially limits the region of oscillating tachyon: for example on the left hand plot above this curve quark mass is positive (no tachyon zeroes) and below it the mass is negative (one tachyon zero), see the mass dependence in Fig. 5. Notice that the red curve must therefore join the blue curve at x = x c . As we approach the fixed point keeping x, the range r UV ≪ r ≪ r IR where Eq. (F.1) holds grows without limit. 28 Here r UV is the scale where the coupling starts to deviate significantly from its fixed point value λ * as we follow the flow toward the UV. According to Eq. (F.1), the number of encountered tachyon zeroes is We may take one step further, and find the scaling of the quark mass and the chiral condensate as r IR /r UV → ∞ and n → ∞. This can be done by matching the "intermediate" tachyon solution (F.1) with the UV (and IR) solutions (see Sec. 
10 and Appendix H where we do the matching procedure more carefully for x → x c ). The tachyon is supposed to become large at r ∼ r IR , so T 0 ∼ 1. On the other hand at r ∼ r UV we enter the standard UV region, where roughly T ∼ mr + σr 3 . Matching in the UV gives the typical sizes for m and σ for large n: In particular, the maximal masses for which solutions with n tachyon nodes exist, or in other words the sizes of the bumps in Fig. 5 (left), numbered from right to left, must obey this scaling law. As the mass scale vanishes exponentially for n → ∞, for any fixed m = 0 there are only finite number of backgrounds as n is limited from above, whereas for m = 0 we find an infinite tower of solutions. At this point is good to remind that the solution with no tachyon nodes (n = 0) always has smaller the free energy than the solutions with n > 0 (see Sec. 9). Also, the n = 0 solution is not found in the scaling region where the system is close to having a fixed point in general. For m = 0 this solution (red dashed curve of Fig. 4 enters the scaling region (which is close to the solid blue curve in the same figure) only in the limit x → x c , which will be discussed in detail in Sec. 10. We conclude with one more observation on the location of the curve of the vanishing quark mass. The above discussion was relying on the dimension ∆ IR at the fixed point, and therefore we could argue how the tachyon behaves in the limit where the background comes arbitrarily close to the fixed point, i.e., as we approach the blue solid curve from above in Fig. 4. However, we can also present rough qualitative arguments on the behavior of the curve further away from the solid blue curve. Because the tachyon is in any case small in the interesting region, we can read directly from the linearized tachyon EoM whether it oscillates or not. The EoM reads Recall that the tachyon is decoupled in this region and evolves independently of the other fields. Assuming decoupling the coefficients are essentially independent of T 0 and can be solved directly from the potential V eff (λ) = V g (λ) − xV f 0 (λ). A is quite well approximated by A ≃ − log r, 29 the second term in the square brackets of the coefficient of T ′ is small, and therefore the essential term is, as in the case of the fixed point, the ratio a(λ)/h(λ), which increases with λ. As λ grows, at some critical λ c the ratio becomes large enough, and the tachyon starts to oscillate. In terms of the gamma function, this means that γ/T reaches the value of approximately −2. The oscillations do not take place if tachyon grows large already for λ < λ c so that nonlinear terms contribute in Eq. F.5. Therefore, for the limiting solutions, the tachyon becomes O(1) roughly at λ = λ c . As the start of the oscillations means that the quark mass goes to zero, this mechanism also fixes the location of the curve with zero quark mass (red dashed curve in Fig. 4). In summary, the solid blue curve of Fig. 4 is stabilized by the tachyon growing large at λ = λ * , whereas the dashed red curve is stabilized by the tachyon growing large at λ = λ c . The two curves meet when λ c = λ * , which gives an alternative way to formulate the definition of x c . It is interesting to compare the picture in our model to that arising in the Dyson-Schwinger approach in the rainbow approximation (see, e.g., [67]). Also in this framework it is useful to define two values of the coupling, corresponding to λ * and λ c above. 
The definitions are analogous to ours: λ * is the zero of the β-function, and λ c is the value of λ where the anomalous dimension of the chiral condensate reaches unity (so that ∆ of Sec. 8.3 equals two). Indeed, the latter definition corresponds to saturating the BF bound in the present approach, which in the vicinity of the IR fixed point means the start of the tachyon oscillations, matching our definition of λ c . Similarly as in our model, λ c = λ * at the edge of the conformal window. A similar description of the conformal transition was also found in the holographic model of [12]. F.1 Numerical analysis We illustrate the analysis above by studying the background numerically near the red dashed and blue solid curves of Fig. 4. We choose as reference values x = 2, 3.9 and 4.25, which have qualitatively different behavior. For zero mass, they correspond to field theories with running, walking and IR conformal behavior of the coupling constant, respectively. Fig. 16 shows, for the above values of x, the quark mass in IR units (left column), the variation of the β-function (middle column), and the variation of the gamma function (right column) as we scan over a range of T 0 which includes the blue and red dashed curves of Fig. 4. (In Fig. 16, dotted, dashed and solid curves are β-functions for backgrounds with a bounce back towards the IR, with m < 0, and with m > 0, respectively. The limiting cases between these behaviors are given by the thick blue solid curves, which terminate at a UV fixed point, and the thick red dashed ones, which have m = 0. The magenta dotted curve is the β-function when the tachyon is completely decoupled, solved from Eq. (8.19). In the right column, the gamma functions 1/T × dT /dA are shown as T 0 is varied, with the lines marked as for the β-functions.) For x = 2 (top row) the quark mass (left column) is negative for a wide range of T 0 between the zero mass solution (dashed red vertical line) and the critical value of T 0 where the UV behavior of the solution changes such that the mass is no longer defined (solid blue vertical line). The mass curve oscillates as we approach the vertical blue line of critical T 0 , as depicted in Fig. 5, but the oscillations lie too close to the line to be resolved at the scale of T 0 used here. As x is increased to 3.9 (middle row), the zero mass solution is driven very close to the critical T 0 . For x = 4.25 (bottom row), the solutions with negative quark mass, and in particular the one with zero mass, have disappeared. In the middle column we plot as the thin black curves the β-function as T 0 is varied over the range of the left-hand plots with a constant step size. The thick blue curve is the limiting β-function that always terminates at a UV fixed point and corresponds to the blue curve of Fig. 4. For x = 2 all β-functions with positive quark mass (black thin solid curves) are running, including the zero mass solution (red dashed thick curve), whereas all walking solutions have negative quark mass (thin dashed curves; as mentioned above, there are also positive quark mass solutions which basically overlap with the thick blue curve, but our resolution is not enough to resolve these). The running curves are the lowest ones and all of them basically overlap. The dotted thin lines are the β-functions with bounce-back behavior in the UV, which occur for values of T 0 that are smaller than the critical one. Increasing x to 3.9 (middle row), the zero mass solution moves into the walking region, and approaches the critical one marked with the thick blue curve. As x is further increased (bottom row), the zero mass solution disappears by joining the blue curve.
Similar, but slightly more complicated behavior is seen for the gamma functions (right column). The dashing and coloring have the same meaning as for the β-functions. Notice that γ/T asymptotes to −3 as λ → 0 for the zero mass solution (red thick dashed curve), whereas for the solutions with a finite mass it asymptotes to −1, as expected. The solutions with negative quark mass (thin dashed curves) have a zero of the tachyon, which shows up as a pole in γ/T . The main change in the plots as x increases from 2 (top) to 4.25 (bottom) is the movement of the solution with the UV fixed point (blue thick curve) towards smaller λ, as all solutions it "crosses" change drastically. Otherwise the gamma functions (i.e., those solutions left of the blue curve) are roughly independent of x. Finally, we comment on the m → 0 limit and also discuss Fig. 11 in this same limit. For x < x c the background converges smoothly to the m = 0 one with chiral symmetry breaking in this limit, as also seen from the β- and γ-functions in Fig. 16 (top and middle rows). Taking m → 0 for x < x c in Fig. 11 defines the enveloping curve of the fixed mass curves, which diverges for x = x c . (We shall discuss the m = 0 case in more detail in Sec. 10 below.) For x > x c , the scale ratio Λ UV /Λ IR diverges for m → 0 (see Fig. 11), and we can consider two different limits. If we keep Λ UV fixed as m → 0, the IR scale is driven to infinity and the background converges pointwise to the one with identically vanishing tachyon and an IR fixed point, discussed in Sec. 8.5.1. If we instead keep the IR scale fixed, the background converges towards the one having a UV fixed point of Sec. D.2 (see the β- and γ-functions shown with the thick blue line on the bottom row of Fig. 16). Notice that the backgrounds with vanishing tachyon and x < x c are not connected to the backgrounds having a finite quark mass by any limiting procedure, which is in line with our expectation that they are unphysical. Indeed, we shall show in Section 9 that these solutions have larger free energy than the ones with nontrivial tachyon and chiral symmetry breaking. G. Extracting UV coefficients from numerical solutions G.1 Extracting free energy differences As pointed out in Section 8, we need to check numerically which one of the two solutions with vanishing quark mass, the one with chiral symmetry breaking or the one without, minimizes the free energy. In order to do this we need to extract the quantity defined in Eq. (9.9) (for fixed Λ) for both solutions and then calculate the energy difference through Eq. (9.10). The relevant corrections are highly suppressed, ∼ r 4 , in the region which is under perturbative control (log(rΛ) is small). Extracting this quantity directly from the numerical solutions of A and λ is practically impossible, since it is difficult to require the two solutions to have the same Λ to a high enough precision. Therefore we study variations in X, which is invariant under scalings of r. In principle we could match the numerically extracted variation of X to the correction term in Eq. (9.14). This is doable (except for values of x very close to the critical one, x c ≃ 3.9959), but a large uncertainty in the extracted value still remains. Therefore we proceed as follows. We substitute X 0 + X 1 in equation (8.19), where X 1 is treated as a small perturbation.
From the linearized equation, we can solve X 1 exactly: where X C is a constant. So, if X 0 is known, this equation gives X 1 in Eq. (9.12) to all orders in log(rΛ). Since the UV expansion of X 0 is also known, we can use that to expand X 1 and to calculate the relation between the constants and X C by using Eq. (9.13). A straightforward calculation giveŝ To obtain the free energy difference between two solutions, we calculate numerically the variation of X and match with Eq. (G.1), where we use either of the two solutions to evaluate X 0 and the integral numerically. Since X 1 given by Eq. (G.1) is a good approximation already at small r (log(rΛ) does not need to be small) we can obtain the difference ∆X C between the solutions to a good precision. The value is then used to calculate numerically ∆ and ∆E through equations (9.10) and (G.2). G.2 Extracting the chiral condensate at m = 0 Let us then discuss how the value of the chiral condensate can be extracted from a numerical solution to the differential equations. Recall that our method for constructing the backgrounds requires tuning the normalization of the tachyon in the IR (T 0 ) such that the quark mass vanishes. This procedure is however limited by the numerical precision of the solution, and therefore exactly zero quark mass cannot be obtained. As it turns out, this makes direct extraction of the condensate value difficult, since the approximately linear term ∼ mr of the tachyon dominates over the cubic solution ∼ σr 3 in the deep UV. This is particularly problematic as x approaches x c , since the ratio of the UV and IR energy scales grows, pushing the asymptotic UV region to smaller r while the condensate value ∝ σ decreases. Therefore, we employ a subtraction procedure. In the UV, the tachyon solution (with the mass m 1 as small as can be achieved) can be written as T 1 (r) = m 1 T m (r) + σ 1 T σ (r) . (G. 3) The UV expansions of the two (approximately) linearly independent solutions T m ≃ ℓr(− log(Λr)) C and T σ (r) ≃ ℓr 3 (− log(Λr)) −C are given in Appendix D.1.1. We will remove the remaining mass term by subtracting another solution T 2 (r) = m 2 T m (r) + σ 2 T σ (r) (G. 4) which is calculated at the same value of x but for different T 0 such that m 2 is about the same order as m 1 . Since the mass terms dominate at very small r, it is easy to extract m 2 /m 1 to a high precision, and construct 31 If σ is analytic at m = 0, σ i = σ (0) + σ (1) m i + σ (2) m 2 i + · · · , (G. 6) we obtain T δ (r) = σ (0) 1 − m 1 m 2 + σ (2) m 1 (m 1 − m 2 ) + · · · T σ (r) . (G.7) If the masses m i are sufficiently small, the quadratic correction ∝ σ (2) as well as higher order terms can be neglected, and the difference T δ is proportional to σ (0) to a good precision. The reliability of the subtraction procedure can be tested by varying, say, m 2 and checking that this does not affect the result. Now σ (0) can extracted from T δ in a straightforward manner. However the result still has quite large error bars in particular for x close to x c , because of numerical uncertainty in the deep UV. We do an additional trick to remove this problem. The extracted T δ extends close enough to the UV singularity to ensure that the tachyon is decoupled from A and λ. On the other hand, we can obtain a decoupled tachyon solution with zero quark mass by solving first the background functions A and λ with T ≡ 0, inserting these in the tachyon EoM, and solving that by shooting from the UV. 
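For completeness, the subtraction combination used here (presumably Eq. (G.5) in the original numbering, which is missing from the extracted text) follows directly from Eqs. (G.3) and (G.4):

\[
T_\delta(r) \;\equiv\; T_1(r) - \frac{m_1}{m_2}\, T_2(r)
\;=\; \Bigl(\sigma_1 - \frac{m_1}{m_2}\,\sigma_2\Bigr)\, T_\sigma(r) ,
\]

which, upon inserting the expansion (G.6), reproduces Eq. (G.7).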
We can basically extend this decoupled solution as close to the UV singularity as we want. Matching the two solutions in the region where both are reliable, we can easily extend T δ up to ∼ r −100 in the UV. Finally, to extract σ (0) we consider the expansion of the tachyon at small λ, which is obtained by using Eqs. (D.9) and (D.2). The coefficients V i and h 1 were defined in Eqs. (D.4), and Λ = Λ UV is the scale of the UV expansions. We match this expansion with the extended T δ , and extrapolate to λ = 0 to obtain log(σ (0) /Λ 3 ). As a final remark, let us note the free-energy scaling result for the solutions having several tachyon zeroes. Following the arguments of Sec. 10.2, we see that the energy difference ∆E between the solution with vanishing quark mass and the solution with n tachyon zeroes scales as in Eq. (H.11). Since we have numerically checked that ∆E is positive, this result verifies that the n = 0 solution has the lowest free energy for x → x c .
41,017.2
2011-12-06T00:00:00.000
[ "Physics" ]
Face Mask Detection Using Deep Convolutional Neural Network and MobileNetV2-Based Transfer Learning The rapid spreading of Coronavirus disease 2019 (COVID-19) is a major health risk that the whole world is facing for the last two years. One of the main causes of the fast spreading of this virus is the direct contact of people with each other. There are many precautionary measures to reduce the spread of this virus; however, the major one is wearing face masks in public places. Detection of face masks in public places is a real challenge that needs to be addressed to reduce the risk of spreading the virus. To address these challenges, an automated system for face mask detection using deep learning (DL) algorithms has been proposed to control the spreading of this infectious disease e ff ectively. This work applies deep convolution neural network (DCNN) and MobileNetV2-based transfer learning models for e ff ectual face mask detection. We evaluated the performance of these two models on two separate datasets, i.e., our developed dataset by considering real-world scenarios having 2500 images (dataset-1) and the dataset taken from PyImage Search Reader Prajna Bhandary and some random sources (dataset-2). The experimental results demonstrated that MobileNetV2 achieved 98 % and 99 % accuracies on dataset-1 and dataset-2, respectively, whereas DCNN achieved 97 % accuracy on both datasets. Based on our fi ndings, it can be concluded that the MobileNetV2-based transfer learning model would be an alternative to the DCNN model for highly accurate face mask detection. Introduction Coronavirus is the latest evolutionary virus that has taken over the world in just a few months. It is a type of pneumonia that was initiated at the beginning of December 2019 near Wuhan City, Hubei Province, China, while, on 11th March 2020, it was declared as a world pandemic by the World Health Organization (WHO) [1]. According to WHO statistics, till 24 February 2021, more than 111 million people were affected by the virus and about 2.46 million deaths were reported [2]. The most common symptoms of Coronavirus are fever, dry cough, and tiredness among many others. It mainly spreads through close direct contact of people with respiratory drops of an infected person generated through coughs, sneezes, or exhales. Since these droplets are too dense to swing in the air for long distances and quickly fall on floors or surfaces, therefore, it also spreads when individuals touch the impaired surfaces with the virus and touch back to their face (e.g., eyes, nose, and mouth) [3]. The WHO has declared a state of emergency all over the world and has developed some emergency precautionary measures to limit the spread of the virus, i.e., washing hands regularly with soap for the 20 s, using sanitizers, keeping distance, regularly disinfecting the surfaces, using disposable tissues while coughing or sneezing, and the most importantly wearing of face masks in public places [4,5]. Like controlling the community spread of the SARS virus effectively during the SARS epidemic in the year 2003 [6], wearing community-wide face masks has also been proven very effective in controlling the widespread of Coronavirus [7][8][9][10][11]. Due to effective controlling the respiratory droplets, the wearing of masks has become a prominent feature of the COVID-19 response. For instance, the efficiency of N95 and surgical masks in blocking virus transmission (through blocking the respiratory droplets) is 91% and 68%, respectively [12,13]. 
Wearing face masks can effectively interrupt airborne viruses and particles so that certain contaminants cannot reach the respiratory system of another person [14]. The worldwide scientific cooperation has improved dramatically due to the outbreak of Coronavirus and is searching for new tools and technologies to fight this virus. One such technology that can be used is artificial intelligence (AI). It can track the spread of the virus quickly, can recognize high-risk patients, and can potentially control the pandemic in real-time [3]. It is also beneficial in early predicting infection by the analysis of the previous patient's data, which in turn can reduce the mortality risks because of the virus. As already discussed that wearing face masks is the most effective protective measure against Coronavirus transmission, however, ensuring the wearing of face masks in public places is a difficult task for the government and the relevant authorities. Luckily, AI as a tool (by using machine learning (ML) or deep learning (DL) algorithms) can help ensure the wearing of face masks in public places just by detecting face masks in real-time with the help of an already installed camera network (surveillance camera network or any other). It is an easy method to manage the people in the society, to maintain social distancing, and to make sure that everyone has worn a face mask. Because of the importance of face mask detection in public places, here in this study, we have demonstrated the use of two popular DL-based architectures, i.e., DCNN and advanced CNN based on "transfer learning," i.e., Mobile-NetV2 for effective face mask detection. To evaluate the performance of the employed DL architectures, two different datasets have been used, i.e., (1) our own developed face mask detection dataset having 2500 images (dataset-1) and (2) the dataset taken from PyImage Search Reader Prajna Bhandary and some random sources (dataset-2). Finally, the results achieved by the algorithms on both datasets have been compared. Moreover, dataset-1 was collected from Karakoram International University, Pakistan, keeping in view the limitations of datasets under different real-world scenarios. In the future, this technology can be employed in real-time applications that require face mask detection for safety reasons due to the COVID-19 pandemic. This project can be integrated with embedded systems for applications in offices, schools, airports, and public places to ensure public safety guidelines. Related Work Loey et al. [15] introduced a face mask detection model that works on deep transfer learning and classical ML classifiers (classical ML classifiers refer to the ML algorithms that work on handcrafted extracted and engineered features from the input data). They used the Residual Neural Network (ResNet 50) algorithm for feature extraction. The extracted features were then used to train three classical ML algorithms, i.e., Support Vector Machine (SVM), Decision Tree (DT), and Ensemble Learning (EL). Three different face mask datasets have been used in the study for the investigation, i.e., (i) Real-World Masked Face Dataset (RMFD), (ii) Simulated Masked Face Dataset (SMFD), and (iii) Labeled Faces in the Wild (LFW) dataset. Finally, the trained classifiers were tested for possible face mask detection. During the testing experiment, the SVM classifier achieved the highest detection accuracies as compared to DT and EL classifiers. 
In RMFD and SMFD, it achieved 99:64% and 99:49% detection accuracies, respectively, while, in the case of LFW, it achieved 100% detection accuracy. Militante and Dionisio [16] developed an automatic system to detect whether a person wears a mask or not and if the person does not wear a mask the system generates an alarm. To develop their system, the authors used the VGG-16 architecture of CNN. Their system achieved overall 96% detection accuracy. In the future, the authors decide to make a system that will not only detect whether a person is wearing a mask or not but will also detect a physical distance between each individual and will sound an alert if the physical distancing is not followed properly. Rahman et al. [17] built a framework that gathers data from the IoT (Internet of Things) sensors of the smart city network (where all the public places are monitored with Closed-Circuit Television (CCTV) cameras) and detects whether an individual wears a mask or not. Real-time video footage from CCTV cameras of the smart city is collected for extracting facial images from it. The extracted facial images of people with and without masks are then used to train CNN architecture. Finally, the trained CNN architecture is used to distinguish people with and without facial masks. One advantage of using the CCTV network of the smart city is that when the system detects people without wearing masks, the information is sent through the city network to the concerned authority of the smart city to take appropriate action. The overall dataset used in this work is 1539 images including 858 images with masks and 681 images without masks. 80% of data was used for training, and the rest of the data was used for testing. The system achieved overall 98.7% accuracy on previously unseen data for distinguishing people with and without masks. [18] introduced a model using a DL algorithm to detect whether a person wears a mask or not in public areas. To do this, they used the MobileNetV2 image classification method, which is a pretrained method. In this experiment, the authors used two datasets, i.e., (1) RMFD, taken from Kaggle, and (2) dataset collected from 25 cities of Indonesia using CCTV cameras, traffic lamp cameras, and shop cameras. Both the datasets were used to train their model. The trained model achieved 96% and 85% detection accuracies on the test sets of these two datasets, respectively. Sandler et al. [19] present a method for automatically detecting whether someone wears a mask or not. They designed a transfer learning-based method with Mobile-NetV2 for detecting masks in images and also in video streaming. Their system achieved 98% detection accuracy at a dataset of 4095 images. According to the authors, their model can work on devices with minimal computational capability and can process on real-time image data. Oumina et al. [20] developed a system by combining pretrained DL models such as Xception, MobileNetV2, and VGG19 for the extraction of features from the input images. After extracting features, they used different ML classifiers such as SVM and k-nearest neighbor (k-NN) for the classification of extracted features of images. They used a total of 1376 images of two classes (i.e., with mask and without mask). The experimental results show that the combination of MobileNetV2 with SVM achieved the highest classification accuracy of 97.11%. 
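As a concrete illustration of the pipeline used in [15, 20] (a pretrained CNN acting purely as a feature extractor, followed by a classical classifier such as an SVM), here is a minimal sketch in Keras and scikit-learn; the directory names, image size, and classifier settings are illustrative assumptions rather than the settings reported in those papers:

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Pretrained backbone used as a frozen feature extractor (no classification head).
backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(224, 224, 3))

def extract_features(directory):
    """Run every image in `directory` (one subfolder per class) through the backbone."""
    gen = ImageDataGenerator(preprocessing_function=preprocess_input)
    flow = gen.flow_from_directory(directory, target_size=(224, 224),
                                   batch_size=32, class_mode="binary", shuffle=False)
    features = backbone.predict(flow, verbose=0)
    return features, flow.classes

# "train_dir" and "test_dir" are placeholder paths to with-mask / without-mask folders.
X_train, y_train = extract_features("train_dir")
X_test, y_test = extract_features("test_dir")

# Classical ML classifier trained on the deep features, as in [15, 20].
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
print("SVM accuracy:", accuracy_score(y_test, clf.predict(X_test)))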
Related to the above-mentioned literature, in this research we have used two DL architectures for face mask detection, i.e., DCNN and transfer learning-based MobileNetV2. The performance of these architectures is evaluated on our own collected dataset as well as on the dataset collected from PyImage Search Reader Prajna Bhandary and some random sources. The purpose of evaluating the performance of these two architectures on two different datasets is to compare their performance and to see how well the models perform on our own collected dataset. Dataset. This research performed its experiments on two different datasets. The first is our own collected dataset, which was collected at Karakoram International University, Pakistan, considering real-world scenarios. Images of each individual were taken both with and without a face mask. To increase the size of the dataset (for achieving better model performance), some augmentation techniques (such as rotating, zooming, and blurring) were applied to the collected images. The final version of our dataset included 2500 images labeled as with mask and without mask. The second dataset used in our experiments was taken from the PyImage Search Reader dataset developed by Mikolaj Witkowski and Prajna Bhandary (available at Kaggle) and some images from random sources. This dataset contains 4436 images belonging to two classes (i.e., with mask and without mask). Mikolaj and Prajna created this dataset by taking normal images of faces and then applying a custom computer vision Python script to add a face mask to them, thus creating an artificial dataset that is used as the with-mask class. Details of the images included in both datasets are provided in Table 1. Furthermore, Figures 1 and 2 show some pictures taken from our own collected dataset and the dataset developed by Prajna Bhandary. Methodology of the Proposed Study. In this study, we consider two DL architectures, i.e., DCNN and transfer learning-based MobileNetV2. To evaluate the performance of these two models, two different datasets have been used. For convenience, these datasets were named dataset-1 and dataset-2, respectively. Dataset-1 contains 2500 and dataset-2 contains 4000 with- and without-mask images (refer to Figures 1 and 2 for a few sample images taken from dataset-1 and dataset-2), respectively. Each dataset is split into two groups, one for training the models and the other for testing them. In the case of MobileNetV2, 80% of the data in each dataset was used for training, whereas the remaining 20% was used for testing the model. In the case of DCNN, 90% of the data in each dataset was used for training, and the remaining 10% was used for testing the model. A data augmentation technique was used to increase the amount of data by making slight changes such as resizing, zooming, and rotating the images. This technique helps to reduce the problem of overfitting during training of the model. We resized images to 100 × 100 pixels, rotated images by up to 40 degrees, and zoomed images using a 0.2 zoom-in factor. A schematic diagram of DCNN and MobileNetV2 for face mask detection is presented in Figure 3.
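A minimal sketch of the augmentation settings just described (100 × 100 resizing, 40-degree rotation range, 0.2 zoom factor) using Keras' ImageDataGenerator; the directory name, batch size, and validation split are illustrative assumptions:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters mirroring the ones quoted above:
# rescale to [0, 1], rotate by up to 40 degrees, zoom by a 0.2 factor.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    zoom_range=0.2,
    validation_split=0.2,   # 80/20 train/validation split (as used for MobileNetV2 here)
)

# "data_dir" is a placeholder for a folder with one subdirectory per class
# (with_mask / without_mask); images are resized to 100 x 100 on the fly.
train_flow = train_datagen.flow_from_directory(
    "data_dir", target_size=(100, 100), batch_size=32,
    class_mode="binary", subset="training")
val_flow = train_datagen.flow_from_directory(
    "data_dir", target_size=(100, 100), batch_size=32,
    class_mode="binary", subset="validation")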
Deep Convolutional Neural Network (DCNN). A DCNN is not just a deep neural network with many hidden layers; rather, it is a deep network that mimics the way the human brain's visual cortex processes and recognizes images [21]. A given input image is processed by assigning relevant weights (learnable parameters) to various parts of the image and then making distinctions between the various characteristics. Compared with other classification methods, a DCNN requires substantially less preprocessing and is less time-consuming. While traditional techniques require filters to be crafted manually, a DCNN can learn to create these filters with sufficient training. The DCNN architecture that we used in our research is depicted in Figure 4. It consists of five layers, including convolutional layers, average-pooling layers, and one fully connected layer. Each convolutional layer convolves its input with a kernel of the corresponding size, and after every convolution a Rectified Linear Unit (ReLU) activation function is applied. ReLU is used for filtering the information that propagates forward through the network. After every convolution, an average-pooling operation takes place. The fully connected layer, also known as the classification layer, includes the flattening process. Flattening converts the matrix obtained after the last average-pooling layer into a single-column matrix that is fed to the final output layer. At the final output layer, a "softmax" activation function is used that predicts a multinomial probability distribution. MobileNetV2. MobileNetV2 is an architecture developed by Google that is pretrained on 1.4 million images from 1000 classes [19]. It is an advanced DCNN architecture that performs well on mobile devices. With MobileNetV2, we do not have to train the model from scratch; we only change the last output layers according to our domain. The architecture of MobileNetV2 is based on its previous version (i.e., MobileNetV1). To preserve information, it introduced a new structure named the "inverted residual" block. To counter the problem of information being destroyed in convolution blocks by nonlinear layers, it applies the technique of Depthwise Separable Convolution (DSC) together with a linear bottleneck layer [22]. Figure 5 shows the basic architecture of MobileNetV2. Evaluation Metrics. The performance of the classification models on testing data was evaluated using the accuracy (Equation (1)), precision (Equation (2)), recall (Equation (3)), specificity (Equation (4)), F1-score (Equation (5)), and kappa coefficient (Equation (6)). The kappa coefficient is a measure of agreement between the predicted and true values in the testing datasets. The value of kappa can range from 0 to 1. If the value of kappa is 0, there is no agreement between the predicted and actual values, and if the value of kappa is 1, then the predictions and the actual values are identical. Thus, the higher the value of kappa, the more accurate the classification. Moreover, the random (chance) accuracy for binary classification can be calculated from the marginal class frequencies (Equation (7)). Experimental Results and Analysis. On two separate datasets of images, extensive experiments were conducted to evaluate the performance and effectiveness of the suggested models. On dataset-1, Figure 6 shows the MobileNetV2 model's training and validation curves. It shows that over 20 epochs, the training and validation accuracy achieved by MobileNetV2 are 99% and 98%, respectively. Similarly, Figure 7 provides the training and validation plots of the MobileNetV2 model on dataset-2. Figure 7 indicates that over 20 epochs, the training and validation accuracy achieved by MobileNetV2 are 99% and 99%, respectively.
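For reproducibility, a minimal sketch of the transfer-learning setup described in the MobileNetV2 subsection above (freeze the pretrained base and replace only the final output layers); the head size, optimizer, learning rate, and sigmoid output are illustrative assumptions rather than values reported by the authors:

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models, optimizers

# Pretrained MobileNetV2 base (ImageNet weights), used without its classification head.
base = MobileNetV2(weights="imagenet", include_top=False, input_shape=(100, 100, 3))
base.trainable = False  # keep the pretrained filters fixed; only the new head is trained

# New output layers adapted to the two-class (mask / no-mask) problem.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_flow / val_flow are the generators from the augmentation sketch above;
# the paper trains MobileNetV2 for 20 epochs.
# model.fit(train_flow, validation_data=val_flow, epochs=20)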
Hence, comparing Figures 6 and 7, the MobileNetV2 model achieved equal training accuracy on both datasets, but higher validation accuracy on dataset-2 as compared to dataset-1. Figure 8 presents the training and validation loss curves of the MobileNetV2 model on dataset-1, which show that over 20 epochs the training and validation losses are 5%. Similarly, Figure 9 provides the training and validation loss curves of the MobileNetV2 model on dataset-2. It shows that over 20 epochs, the training and validation losses acquired by MobileNetV2 are 5% and 4%, respectively. Hence, the MobileNetV2 model achieved comparable training losses on both datasets, whereas the validation loss was higher on dataset-1 than on dataset-2. Figures 8 and 9 further show that there is little gap between the training and validation loss curves, which indicates that the employed model converged well on the datasets and there was no problem of overfitting. (Figure 14: training/validation accuracy on dataset-1 and dataset-2 using DCNN and MobileNetV2. Figure 15: results on test images with and without a face mask.) Hence, the DCNN model performed equally well on dataset-1 and dataset-2. Figures 12 and 13 describe the training and validation loss curves of the DCNN model achieved during the experiments, plotted over 50 epochs using dataset-1 and dataset-2, respectively. Both graphs indicate that on both datasets the DCNN model achieved minimal and equal training and validation losses, i.e., 0.5% and 1%, respectively. Figures 12 and 13 further show that there is little gap between the training and validation loss curves, which indicates that the DCNN model converged well during training and validation and that no overfitting occurred. By comparing the training and validation accuracy as well as the training and validation losses of the MobileNetV2 and DCNN models, it can be concluded that the MobileNetV2 model achieved higher accuracy and lower losses than DCNN on both datasets. Figure 14 provides the overall comparative summary of the training and validation accuracy achieved by the MobileNetV2 and DCNN models. It indicates that for both datasets, the MobileNetV2 model achieved higher accuracy than the DCNN model. Furthermore, in the case of dataset-2, the accuracy achieved by the MobileNetV2 model is better than on dataset-1. Tables 2 and 3 provide the classification reports of MobileNetV2 and DCNN on dataset-1 and dataset-2, respectively. Both models achieved high evaluation scores for accuracy, precision, recall, F1-score, specificity, and kappa coefficient, together with low error rates, on both datasets. These values indicate that both models performed well on both datasets. However, from the overall experimental results, it can be concluded that even with less data, the MobileNetV2 model can provide better accuracy than DCNN. This is because MobileNetV2 models are already trained on a large amount of data and we do not need to train them from scratch; we only have to change the last two layers of the MobileNetV2 model according to our problem. Moreover, the results further show that in the case of our collected dataset (dataset-1), both models performed well. This is because we collected our dataset with real face masks and without face masks and did not create the masked images artificially. However, in the case of dataset-2, the dataset was generated with artificial masks.
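Per-class reports of the kind summarized in Tables 2 and 3 can be produced with standard tooling; the short sketch below is illustrative rather than the authors' script, and assumes arrays of true and predicted labels from the held-out test split.

```python
# Illustrative computation of the reported metrics (assumed tooling, not the authors' code).
from sklearn.metrics import classification_report, cohen_kappa_score, confusion_matrix

y_true = [0, 1, 1, 0, 1]   # placeholder ground-truth labels (0 = without mask, 1 = with mask)
y_pred = [0, 1, 0, 0, 1]   # placeholder model predictions

print(classification_report(y_true, y_pred, target_names=["without_mask", "with_mask"]))
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("specificity:", tn / (tn + fp))
print("kappa:", cohen_kappa_score(y_true, y_pred))
```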
Furthermore, our models succeeded in detecting real-time images with and without a mask, with the accuracy displayed on the images as shown in Figure 15. Conclusion COVID-19 is a fast-spreading virus that has been threatening human health, world trade, and the economy. Its high mutation and spreading rate made the situation difficult to control. Taking precautionary measures may reduce the spread of this virus, and one of the most important measures is to wear a face mask in public places. Therefore, in this study, a deep learning-based approach has been applied to detect face masks automatically. The learning models, i.e., a Deep Convolutional Neural Network (DCNN) and a transfer learning-based MobileNetV2 model, have been evaluated on two different datasets. The datasets consist of our own collected dataset containing 2500 images of individuals with and without masks (dataset-1) and a second dataset (dataset-2) from PyImage Search Reader Prajna Bhandary and some random sources. The comparative results show that MobileNetV2 achieved 98% and 99% classification accuracy on dataset-1 and dataset-2, respectively, whereas the Deep Convolutional Neural Network achieved 97% accuracy on both datasets. The main contribution of the study is the development of our own face mask detection dataset with 2500 images, which were collected from Karakoram International University, Pakistan, keeping in view the limitations of existing datasets under different real-world scenarios. In the future, we will increase the size of the dataset by embedding real-time video streams into it to detect face masks in real time. Data Availability The data used in this research can be obtained upon request to the corresponding authors. Conflicts of Interest The authors declare that they have no conflicts of interest.
4,947.6
2022-05-04T00:00:00.000
[ "Computer Science" ]
Complete genome sequence data of tropical thermophilic bacterium Parageobacillus caldoxylosilyticus ER4B Parageobacillus caldoxylosilyticus, previously identified as Geobacillus caldoxylosilyticus, is a thermophilic Gram-positive bacterium which can easily withstand growth temperatures ranging from 40 °C to 70 °C. Here, we present the first complete genome sequence of Parageobacillus caldoxylosilyticus ER4B, which was isolated from an empty oil palm fruit bunch compost in Malaysia. Whole genome sequencing was performed using the PacBio RSII platform. The genome size of strain ER4B was around 3.9 Mbp, with a GC content of 44.31%. The genome consists of two contigs, in which the larger contig (3,909,276 bp) represents the chromosome, while the smaller one (54,250 bp) represents the plasmid. A total of 4,164 genes were successfully predicted, including 3,972 protein coding sequences, 26 rRNAs, 91 tRNAs, 74 miscRNAs, and 1 tmRNA. The genome sequence data of strain ER4B reported here may contribute to the current molecular information of the species. It may also facilitate the discovery of molecular traits related to thermal stress, thus expanding our understanding of the acclimation or adaptation towards extreme temperature in bacteria. Specifications: Subject – Biology; Specific subject area – Microbiology and Genomics; Type of data – Table, Image, Figure; How data were acquired – Whole genome sequence of Parageobacillus caldoxylosilyticus ER4B was obtained using PacBio RSII; Data format – Raw and Analyzed; Parameters for data collection – Pure culture of strain ER4B was grown in Lennox Broth (LB) at its optimal growth temperature of 64 °C and the genomic DNA was extracted when the culture reached mid-log phase; Description of data collection – The genomic DNA was sequenced using PacBio RSII, while subsequent genome assembly and annotation were done using Canu (v1.6). Value of the Data • The data from this work represent the first complete and gapless genome of Parageobacillus caldoxylosilyticus, as the four other genome sequences from the same species deposited in NCBI GenBank are draft genomes. • The complete whole genome sequence of Parageobacillus caldoxylosilyticus ER4B could provide valuable information about thermal adaptation in the bacterium, particularly at high growth temperature. • Comparative genomics can also be carried out using this genomic data against the genomes of different strains, or even different species, and this will contribute to the further development and understanding of the molecular basis of thermal adaptation in different bacteria. • The data can be very useful for scientists and students working in the fields of microbiology, genomics, and biotechnology in extremophiles, especially thermophiles. Data Description We present the whole genome sequence of P. caldoxylosilyticus ER4B, which was obtained from PacBio RSII. P. caldoxylosilyticus ER4B was previously isolated from an oil palm empty fruit bunch compost, and it grows optimally at 64 °C in Lennox broth (LB). The genome features of strain ER4B are summarized in Table 1. The assembled genome is approximately 3.96 Mbp in size and comprises two contigs, where the larger contig represents the chromosome and the smaller contig represents the plasmid. Subsequent genome annotation revealed that a total of 4,164 genes were successfully predicted from both chromosome and plasmid, including 3,972 protein coding genes and 192 non-coding RNAs. Among all the predicted CDS, 2,933 are genes with known functions, whereas 1,039 are categorized as hypothetical genes.
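The headline assembly figures above (two contigs, roughly 3.96 Mbp in total, 44.31% GC) are the kind of summary that can be recomputed directly from the assembly FASTA. The following is a small illustrative sketch; the file name is a placeholder, not a path from this dataset.

```python
# Illustrative recomputation of basic assembly statistics from a FASTA file
# ("ER4B_assembly.fasta" is a placeholder name).
from Bio import SeqIO

total_len = 0
gc_count = 0
contigs = []
for rec in SeqIO.parse("ER4B_assembly.fasta", "fasta"):
    seq = str(rec.seq).upper()
    contigs.append((rec.id, len(seq)))
    total_len += len(seq)
    gc_count += seq.count("G") + seq.count("C")

print("contigs:", contigs)            # expect chromosome ~3,909,276 bp and plasmid ~54,250 bp
print("total size (bp):", total_len)  # ~3.96 Mbp
print("GC content (%):", round(100 * gc_count / total_len, 2))  # ~44.31
```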
The position of each CDS and RNA gene can be better visualized in the genome map in Fig. 1. (Fig. 1. Genome map of ER4B constructed using DNAPlotter. From the outer track: the 1st track represents total annotated genes, the 2nd track forward CDS, the 3rd track reverse CDS, the 4th track rRNA, the 5th track tRNA, the 6th track miscellaneous RNA (miscRNA), the 7th track tmRNA, the 8th track the GC plot, and the last track the GC skew. The major tick mark interval was set at 1/10th of the overall genome size, which is 396,352 bp, so 0 represents both the beginning and the end of the sequence.) The whole genome sequence of strain ER4B was utilized to construct a clearer and more accurate evolutionary relationship with other bacterial whole genomes closely related to Parageobacillus and Geobacillus through PhyloSift. Fig. 2 clearly depicts that P. caldoxylosilyticus CIC9 is the closest strain to ER4B, followed by the other strains from the same species. This reconfirmed the identity of strain ER4B, as it is strongly affiliated with other P. caldoxylosilyticus strains. It is also noteworthy that Parageobacillus genomosp. appeared to be closer to P. caldoxylosilyticus as compared to the other three species Parageobacillus toebii, Parageobacillus thermoglucosidans, and Parageobacillus thermantarcticus. Besides, Fig. 2 also showed that Parageobacillus and Geobacillus were separated at the main node, forming two distinct clades between the two genera as proposed in a previous study [1]. This clustering suggests that PhyloSift is able to provide higher phylogenetic resolution and better taxonomy assignment in phylogenetic analysis as compared to the more congruent single-gene phylogenetic analysis. The annotated genome was further classified into orthologous groups based on function. 3,819 of the annotated genes were successfully classified into one of the COG categories. As depicted in Fig. 3, 35.41% of the annotated genes were classified into the "Metabolism" major category, followed by "Poorly characterized" with 28.37%, "Cellular processes and signaling" with 19.44%, and "Information storage and processing" with 16.79%. While there was a total of 26 functional COG categories, no eggNOG-annotated genes were found to be categorized under RNA processing and modification (A), general function prediction only (R), extracellular structures (W), or nuclear structure (Y). From Fig. 3, it was also clear that most of the annotated genes (28.37%) fell in category S, as these genes encode hypothetical or novel proteins whose functions were not readily assigned. (Fig. 3. Functional distribution of genes within the P. caldoxylosilyticus ER4B genome classified by clusters of orthologous groups (COG). COGs in the red box refer to the major category "information storage and processing"; the green box to "cellular processes and signaling"; the blue box to "metabolism"; and the yellow box to "poorly characterized".) Among the remaining genes with assigned functions, 8.96% of the genes, which have functions related to amino acid transport and metabolism, were classified into category E, making E the category with the second highest abundance of genes. The third highest category is L with 6.70%, followed by K (5.84%), C (5.79%), Z (5.79%), and G (5.66%). Similar to other thermophilic bacteria, strain ER4B is constantly exposed to high growth temperatures.
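The COG breakdown reported above can be tallied from the eggNOG-mapper output. The sketch below is illustrative only; the file name and the COG_category column (single-letter codes per gene) are assumptions about the annotations table rather than details given in the article.

```python
# Illustrative tally of COG major categories from an eggNOG-mapper annotations table
# (file name and the exact column name are assumptions).
import pandas as pd

MAJOR = {
    "Information storage and processing": set("JAKLB"),
    "Cellular processes and signaling":   set("DYVTMNZWUO"),
    "Metabolism":                         set("CGEFHIPQ"),
    "Poorly characterized":               set("RS"),
}

df = pd.read_csv("ER4B.emapper.annotations", sep="\t", comment="#")
letters = [l for cats in df["COG_category"].dropna() for l in cats if l.isalpha()]

counts = {name: sum(l in group for l in letters) for name, group in MAJOR.items()}
total = sum(counts.values())
for name, n in counts.items():
    print(f"{name}: {100 * n / total:.2f}%")
```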
Although COG group O occupied only 3.04% of the annotated genes, it is important for the survival of strain ER4B at high temperatures, as many of the heat stress related proteins were categorized in this group. The genome of strain ER4B was found to harbour several genes encoding heat-related proteins, including GrpE, GroEL, GroES, DnaJ, DnaK, and ClpB [2,3,4], as shown in Table 2. These heat shock proteins work in conjunction with one another to prevent protein aggregation at high temperature. Besides, various features responding to heat-induced stress, such as general stress proteins, DNA SOS response proteins, and oxidative stress proteins (Table 2), can also be found in the genome of this bacterium. These proteins would trigger stress responses to prevent or mitigate the cellular damage caused by heat stress, and are thus crucial in contributing to the thermophilicity of strain ER4B. Interestingly, several copies of cold shock protein B (CspB) were also found in the genome of this thermophilic bacterium [5]. Experimental Design, Materials and Methods The genomic DNA of strain ER4B was prepared from cells in the exponential growth phase. DNA extraction was carried out using the Qiagen DNeasy® Blood & Tissue kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocols, with several optimizations (personal communication, Yong Sheau Ting) to maximize both the quality and quantity of the genomic DNA extracted. The complete genome of strain ER4B was sequenced using the PacBio RSII instrument (Pacific Biosciences, Menlo Park, CA, USA). Single Molecule Real Time (SMRT) sequencing was conducted using 20 kb SMRTbell templates and the DNA Polymerase Binding Kit P6-V2 on the PacBio RSII system. The raw sequencing data were then processed through read correction, trimming, and de novo assembly using Canu v1.6 [6]. Subsequently, the assembled genome was annotated using Prokka v1.12 [7], and the complete genome map was constructed using DNAPlotter [8]. The annotated genome of strain ER4B was then used for the construction of the phylogenetic tree using PhyloSift [9]. Furthermore, the annotated genome was distributed into clusters of orthologous groups (COGs) based on functional annotation using eggNOG-mapper [10]. Ethics Statement This work did not involve any animals or human subjects. The manuscript represents the authors' original work, which has not been published elsewhere. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships which have or could be perceived to have influenced the work reported in this article.
2,055.2
2021-12-01T00:00:00.000
[ "Environmental Science", "Biology" ]
The relative fitness of drug resistant Mycobacterium tuberculosis: a modelling study of household transmission in Lima, Peru The relative fitness of drug resistant versus susceptible bacteria in an environment dictates resistance prevalence. Estimates for the relative fitness of resistant Mycobacterium tuberculosis (Mtb) strains are highly heterogeneous; many are derived from in-vitro experiments and few studies have estimated relative fitness in the field. Measuring fitness in the field allows us to determine how the environment (including factors such as host genetics, TB treatment regimen etc.) influences the ability of bacteria to survive, be transmitted and cause secondary cases of disease. We designed a household-structured, stochastic mathematical model to estimate the fitness costs associated with multi-drug resistance (MDR) carriage in Mtb strains in Lima, Peru over a 3-year period. By fitting the model to data from a large prospective cohort study of TB disease in household contacts we estimated the fitness, relative to susceptible strains with a fitness of 1, of MDR-Mtb strains to be 0.33 (95% credible interval: 0.17-0.54) or 0.39 (0.26-0.58), if only transmission or progression to disease, respectively, was affected by MDR. The relative fitness of MDR-Mtb increased to 0.57 (0.43-0.73) when the fitness cost was modelled to influence both transmission and progression to disease equally. We found the average relative fitness of MDR-Mtb circulating within households in Lima, Peru between 2010 and 2013 to be significantly lower than that of susceptible Mtb circulating at the same time and location. If these fitness levels do not change, then existing TB control programmes are likely to keep MDR-TB prevalence at the current levels in Lima, Peru. Introduction Mycobacterium tuberculosis (Mtb) is a highly prevalent bacterium, thought to infect just under a quarter of the world's population (1). Treatment of tuberculosis (TB) disease is not simple, and drug-susceptible tuberculosis (DS-TB) requires a multiple drug regimen taken for at least 6 months (2). Multidrug-resistant tuberculosis (MDR-TB) treatment regimens are significantly longer, cause serious side effects and are very expensive (3). Whilst currently 5% of all TB cases globally are estimated to be MDR-TB (2), predicting the future burden of DS- and MDR-TB is essential for TB control programmes. One key parameter that determines the future prevalence of drug resistant TB is the relative fitness of drug resistant Mtb strains as compared to drug susceptible Mtb strains (4-7). Fitness is a complex, environment-dependent trait that can be defined as the ability of a pathogen to survive, reproduce, be transmitted and cause secondary cases of disease. These abilities are affected by multiple environmental factors such as a host's genetics, the current TB treatment regimen and other risk factors for transmission, which are all time-varying. The importance of this parameter has been highlighted by several mathematical models which show how even small changes in its value can predict widely varying future levels of MDR-TB burden (4-6, 8, 9). Thus, gaining environment-dependent, accurate estimates of fitness is of critical importance. Within Mtb, it has been shown that the appearance of drug resistance mutations affects fitness (10-12). These previous studies have shown that resistant Mtb is, usually, less fit than susceptible Mtb under a range of fitness definitions: either by demonstrating a lower growth rate in vitro (e.g.
(13)), less progression to disease after inoculation in guinea pigs (e.g. (14)) or a lower chance of causing secondary cases of disease (e.g. (12,15)). The latter definition is important for epidemiological predictions of burden, whilst the first provides the potential underlying biological cause. The epidemiological fitness of a Mtb strain can be split into an ability to (1) cause secondary infections (transmission) and (2) cause subsequent active disease (progression). For example, resistant Mtb may be transmitted equally as well, but subsequent disease rates in those infected may be lower or less severe. For Mtb this split is especially pertinent due to the importance of the latent, non-infectious, stage of disease. Also highly important for Mtb is the spatial location of transmission (16). Few studies have considered the critical influence of household structure on transmission of Mtb. To our knowledge, no studies have considered the spread of drug-resistant tuberculosis in the context of a householdstructured stochastic mathematical model. The difference in definitions of fitness and corresponding experimental data makes translation from data analysis to predictive mathematical modelling difficult. Here, we tackle this problem by fitting a mathematical model to a detailed data set on the transmission of Mtb strains collected in a large cohort study of households undertaken in Lima, Peru between 2010 and 2013 (17). We derive estimates of fitness in this specific setting with different fitness definitions (either effects on transmission and/or progression to disease) and test the robustness of these estimates under a range of assumptions. These parameters will allow for better predictions of future MDR-TB levels and an improved understanding of MDR-TB spread. Results Fit to the data Model structures 1-3 could all replicate the data from the household study ( Figure 2). The MCMC trace and density plots of the posterior distributions are shown in SI Text. Parameter estimates The estimates of the external force of infection for DS-and MDR-TB were similar across the three models (Table 3, Figure 3). The per capita transmission rate of DS-TB within households was also similar across the three models. The relative fitness of MDR-Mtb was similar for Model 1 and 2, but increased in Model 3, as might be expected as in this third model the reduction in fitness is applied to two rates. For Model 1, that is assuming a resistance phenotype affects transmission, the relative fitness of MDR-Mtb was estimated to be 0.33 (95% CI: 0.17-0.54) vs. DS-Mtb with a fitness of 1. In Model 2, where a resistance phenotype affected disease progression, a similar relative fitness was estimated: 0.39 (0.26-0.58). If both rates were affected, then the relative fitness of MDR-Mtb was estimated to be 0.57 (0.43-0.73) (Table 3, Figure 3). Comparing the external force of infection for DS-vs. MDR-TB we found that the ratio of the two was around 0.5 (median estimate 0.42 / 0.54 / 0.55 from the three models). This single value for the external force of infection (foi) represents a complex set of processes (contact patterns, length of infectiousness etc.) and so cannot be used to determine relative fitness. However, the ratio is in the range that supports our estimates of the relative fitness from the internal household model. Probability of remaining free from tuberculosis We explored the probability of remaining free from tuberculosis as was presented from the original study ( Figure 2 in (17)). 
By comparison we had highly similar dynamics to the study (SI text Figure S5) Scenario analysis Our five scenarios gave very similar estimates for the relative fitness of MDR-Mtb (a range of medians from 0.22 -0.41, SI text). This suggests that the estimates of relative fitness are robust to: increasing the initial proportion of households that were initially infected with latent MDR-Mtb from 2% to 10% (in the pre-study), setting TB incidence to high or low levels (see SI text for parameter details), extending the initial run-in period from 10 to 30 years or removing the saturation of transmission within households. Discussion Our results suggest that the average relative fitness of MDR-Mtb strains in circulating in households in Lima, Peru in 2010-2013 was substantially lower than that of drug susceptible strains (~40-70% reduction). When the effect of resistance was measured as an effect on transmission only, then the relative fitness of MDR-Mtb strains was lower than if the effect of resistance lowered only the progression rate to disease. When a resistance phenotype was assumed to affect both transmission and progression to disease rates, then the relative fitness of MDR-TB strains was higher at ~60%. These costs to resistance carriage were observed despite the longer infectious period of drug-resistant TB arising from delayed diagnosis and treatment. The strengths of this study are that we were able to fit a stochastic household-level model to detailed location-specific data, accounting for accurate distributions of both household size and study follow-up time. We were also able to differentiate between internal and external transmission, matching the resistance typing data from the household study (17). This model and its MCMC fitting algorithm can be applied to other settings and then used as the basis for predictions of future levels of DS-and MDR-TB. In particular, this novel way of estimating fitness costs, by fitting dynamic transmission models to resistance-specific incidence data could be used for other TB prevalent settings or for other bacteria. Furthermore, the estimates given can be directly translated into dynamic transmission models for prediction whilst previous estimates, for example of differences in growth rates have less clear epidemiological translations. Our modelling analysis is limited by homogeneity -of both hosts and strains. The characteristics of the DS-and MDR-TB contacts under consideration in the underlying household study were highly similar (17). Thus, as our estimate is of a relative fitness we believe that including host differences in our model may have had little effect on our relative results. Strain heterogeneities however, mean that our result is (potentially) an average across many different drug resistant strains. It is known that differences in resistance and compensatory mutation combinations result in a diversity in fitness across strains (13). This diversity is highly important for predictions of MDR-TB levels in the future (18). Our estimate must therefore be taken as a population average in Lima, at a certain time and indicative of the mean fitness rather than an indicator of the range of potential fitness in the population. If one highly fit MDR-TB strain were to emerge (or were already present), then future prevalence predictions based on our (mean) estimate could be an underestimate. We also assumed that transmission of the strains was internal if the resistant phenotype was the same as the index, but external if different. 
We made this same assumption for both susceptible and resistant strains. Hence, our estimates would only be affected if we thought that a different percentage of infection for susceptible strains was occurring outside the household than for resistant strains and at the moment we cannot determine this. Our Model 1, where a transmission effect is assumed, is the most similar to previous models of MDR-TB transmission (6,9,19). However, our MDR-TB fitness predictions are at the lower end of the range seen previously (10). This may reflect the situation in Peru where there is a strong tuberculosis control infrastructure with a well-developed MDR-TB treatment program and a growing economy. These two factors may have combined to limit the spread of MDR-TB and hence prevent the adaptation of MDR-TB to a higher fitness. At the bacterial level, compensatory fitness mutations that could influence the ability of drug resistant Mtb strains to spread may not have emerged or not been allowed to spread. Calibrating the model to other settings would help clarify this issue. Alternatively, it may be that our estimates are providing, for the first time, a better direct translation of fitness from epidemiological data to a transmission model parameterisation. There is a paucity of evidence for whether differences in TB disease prevalence in general are due to infection or progression to disease (20). In particular, for resistant strains it is unclear where the effect of becoming resistant should be applied in the natural history of tuberculosis infection. Both Snider and Teixeira (21,22) demonstrated similar levels of tuberculin skin test (TST) conversion among MDR-and DS-TB household contacts but lower levels of disease in contacts of those with MDR-TB. This was also seen in a recent study in children (23), whilst a higher prevalence of TST positivity was found in household contacts of MDR-TB patients than contacts of newly diagnosed TB patients in Viet Nam (24). This evidence combines to suggest that the fitness cost to resistance, if any, was to be observed on the progression to disease. We make this assumption in our Model 2, where the hypothesis is that those with active TB disease, whether due to resistant or susceptible bacteria, have a similar bacterial load and hence ability to transmit successfully. However, once successfully established in a new host, resistant bacteria may be less able to combat the immune system and establish a disease state. This has been assumed in a previous model of HIV and MDR-TB interaction (25). The relative fitness estimates from Model 1 and 2 are similar, whilst the relative fitness estimates from Model 3 are higher, potentially due to the effect being on two processes and hence multiplicative. To determine which model structure is more appropriate, data on infection as well as disease status in contacts of index cases would be required. As mentioned above, TST surveys have found similar rates of infection in contacts of DS-TB and MDR-TB cases (21, 23) but lower rates of disease in contacts of MDR-TB cases, suggesting that Model 2 may be the most biologically appropriate choice (where progression to disease but not transmission rates are affected by becoming resistant). Previous models have assumed that resistant strains could become more fit (i.e. have a relative fitness greater than 1), whilst we capped the relative fitness of the resistant strains at 1, due to the data from the household cohort (17). 
Our posterior parameter distributions for the estimated relative fitness parameter (reflected in the 95% CI for f, see SI text) suggest that this is a valid assumption for the resistant strains circulating at this time in Lima. Importantly, all our estimates are of "relative" fitness, and therefore should be robust to changes in natural history assumptions as these would affect both drug susceptible and resistant strain transmission. Future work will include adding in detail on host and strain heterogeneity to the model. However, as described above the former characteristics distribution was similar across DS-and MDR-TB households whilst the latter requires data that is currently unavailable. Data collection of strain heterogeneity along with active contact tracing and an understanding of where and from whom transmission occurs would drastically improve our understanding of fitness and hence improve estimates of future MDR-TB levels. As we were fitting to a specific household-based study we modelled the community effect on transmission as external force of infection rather than explicitly including a community compartment as has been done for previous models of TB spread (26). Exploring the external infection methods and potential changes in this force of infection over time (i.e. making it dynamic) would allow for models that can predict levels of MDR-TB in Lima. Importantly, this model provides a key estimate of a parameter that is needed for many existing mathematical model structures, where only a single "resistant" strain is included. Future predictive transmission modelling using our relative fitness estimates are likely to suggest that if treatment objectives are maintained and this fitness measure remains constant, that MDR-TB prevalence will remain under control in Lima in the short term. In conclusion, if the fitness cost of drug resistance in Mtb is exerted on the rate of progression to active disease rather than transmission, we estimate that the relative fitness is between 30-40% relative to drug susceptible strains (at 100%). Importantly this paper provides direct transmission model estimates, using a novel method, of the relative fitness levels of drug resistant Mtb strains. If these fitness levels do not change, then the existing TB control programmes are likely to keep MDR-TB prevalence at their current levels in Lima, Peru. These estimates now need to be gained for Mtb in other settings and the values used in models to explore future global burden. Data The details of the study and participants can be found in (17). Briefly, 213 and 487 households were recruited with an index case of diagnosed MDR-or DS-TB respectively during 2010 to 2013. Households were followed up for variable periods of time up to a maximum of 3 years (SI text Figure S1). During the study households were visited every 6 months, and household contacts were monitored for TB disease. It was found that 35/1055 (3.32%, 95% CI [2.32, .4.58]), of the MDR-TB contacts, and 114/2356 (4.84%, 95% CI [4.01, 5.78]), of the DS-TB contacts developed TB disease, suggesting that DS-TB has higher fitness. There were no significant differences between cohorts by HIV status, age, gender or household size (17). The specific data used to calibrate the model was 1) the incidence of MDR-TB and 2) DS-TB in households with an index DS-TB case and 3) the incidence of MDR-TB and 4) DS-TB in households with an index MDR-TB case ( Table 1). 
We assumed that transmission of the strains was internal if the resistant phenotype was the same as the index, but external if different. The percentages of incident cases with resistance profiles matching the index was used to multiply the incidence levels accordingly. Model structure The mathematical model was a standard two-strain dynamic TB model ( Figure 1), with transmission modelled at the level of the household. A Gillespie stochastic simulation algorithm in R (27) was developed using the R package "GillespieSSA" (28). Using a stochastic transmission model was important as the model was implemented independently in households where the small populations mean stochastic effects are highly important. We assumed that saturation of transmission could occur and hence scaled our transmission rate by the size of the household (number of people), assuming households have the same ventilation level (or at least that this did not vary by index case Mtb resistance status) and within-household homogeneous mixing (29). This assumption means that in households with more people, household members are assumed to have lower individual chance of infection from an active disease case than in smaller households, due to decreased exposure. This has been observed for another airborne pathogen, influenza (30) and was explored in sensitivity analysis. All natural history parameters were taken from the literature, are listed in Table 2 and the dynamics explained in the legend to Figure 1. Four parameters were estimated from the data ( Table 1): (1) the per capita transmission rate of DS-TB within households (β S ), (2) the relative fitness of MDR-Mtb strains vs. DS-Mtb strains (f) expressed as an effect on transmission or progression or both, and the external (to households) force of infection (foi) of (3) DS-TB foi s and (4) MDR-TB foi r . Three model formulations Resistant strains were allowed to have an equal or lower fitness relative to susceptible strains. The mechanisms behind this reduction were estimated to affect two different rates: the transmission rate, the rate of progression to disease, or both ( Figure 1). We assumed that the fitness of the resistant strains could not rise above that of susceptible strains due to the data from the household cohort (17). Model 1 (transmission fitness cost model) assumed that fitness costs directly affected the number of secondary infections by reducing the transmission parameter for MDR-Mtb (0 < f 1 < 1, f 2 = 1, Figure 1). This is the standard assumption for the effect of resistance on fitness for transmission dynamic models of Mtb (6,9,19) and other pathogens (31). Model 2 (progression fitness cost model) assumed that although MDR-TB transmission occurred at the same rate as DS-TB, there is a fitness cost to progression to disease (f 1 =1, 0 < f 2 < 1, Figure 1). Model 3 assumed that there was a fitness cost to both transmission and progression, and that the cost was the same for both processes (0 < f 1 = f 2 < 1, Figure 1). We could not explore a Model with fitness affecting both processes at differing levels as we did not have data on levels of infection. Without this data, a model with high transmission fitness cost but low progression cost would be equally as likely as a model with a low transmission fitness cost but a high progression cost and hence would be uninformative. 
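To make the model structure concrete, the following is a heavily simplified Python re-sketch of the within-household stochastic simulation; the study itself used R and the GillespieSSA package, and the states, rates and parameter values here are placeholders chosen only to illustrate how the fitness multipliers f1 (transmission) and f2 (progression) enter the rates.

```python
# Simplified two-strain household Gillespie simulation (illustrative only; the study
# used R/GillespieSSA and a richer natural-history structure than shown here).
import numpy as np

def simulate_household(N, t_max, beta_s, f1, f2, foi_s, foi_r,
                       sigma=0.05, gamma=0.7, seed=None):
    rng = np.random.default_rng(seed)
    # state: U uninfected, Ls/Lr latent, As/Ar active disease (DS / MDR)
    U, Ls, Lr, As, Ar = N - 1, 0, 0, 1, 0   # one active DS index case (use Ar=1 for MDR-index)
    t, cases = 0.0, {"DS": 0, "MDR": 0}
    while t < t_max:
        rates = np.array([
            (beta_s * As / N + foi_s) * U,        # DS infection (household, scaled by N, + external)
            (f1 * beta_s * Ar / N + foi_r) * U,   # MDR infection, transmission scaled by f1
            sigma * Ls,                           # DS progression to active disease
            f2 * sigma * Lr,                      # MDR progression, scaled by f2
            gamma * (As + Ar),                    # detection / cure back to latency (lumped)
        ])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        event = rng.choice(len(rates), p=rates / total)
        if event == 0:
            U -= 1; Ls += 1
        elif event == 1:
            U -= 1; Lr += 1
        elif event == 2:
            Ls -= 1; As += 1; cases["DS"] += 1
        elif event == 3:
            Lr -= 1; Ar += 1; cases["MDR"] += 1
        else:  # cure one active case, chosen proportionally to strain prevalence
            if rng.random() < As / (As + Ar):
                As -= 1; Ls += 1
            else:
                Ar -= 1; Lr += 1
    return cases

# e.g. simulate_household(N=5, t_max=3.0, beta_s=2.0, f1=0.4, f2=1.0,
#                         foi_s=0.01, foi_r=0.005, seed=1)
```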
Note that fixing either f 1 or f 2 equal to one is the same as ignoring this parameter altogether and leaving the multiplied rate at its background level as they are both scalar constant parameters with no units. Model simulation The model initially sampled 700 household sizes from the distribution of household sizes in the trial (32), and initial conditions of latent infection taken from data (2, 33) (SI text). The model was then simulated for 10 years with a sampled set of the four unknown parameters (pre-study period). A random time point from over this 10-year period in which there was at least one active case with the same sensitivity as the initial case in the household (i.e. DS-TB or MDR-TB) was taken to be the time the household entered the study and the active index case was detected. This allows for simulation of changes in latency in the household and provides initial conditions dependent upon each parameter sample. The above randomly sampled time point of entry to the study was taken to be the initial conditions for the simulation of the model that was fitted to the household study (17) (study period). The same values of the four unknown parameters was used as in the pre-study period and the simulation time for each household was randomly sampled from the distribution of follow-up times in the study (SI text Figure S1). The only parameter that changed, to match the altered patient care in the study, was the case detection rate which increased for the study period from the WHO estimates to a screen occurring every 6 months ( Table 2). The TB incidence from the model was calculated by determining the total number of new cases of active TB in all 700 households over the follow-up time, and dividing this by the total number of follow-up years in these households. The total number of follow-up years was a product of the number of household members and the follow-up time for the household taking into account any deaths over this time. We assumed that no-one left the households other than by death (natural or due to TB). For a detailed overview of the process see SI text Figure S2. Model fitting Approximate Bayesian Computation (ABC) was paired with Markov chain Monte Carlo (MCMC) methods to estimate the four unknown parameters (34). All other parameters were kept fixed at their baseline value ( Table 2). The summary statistic used was the TB incidence from the model falling within the 95% CI for all four TB incidence measures from the data. Uniform priors were assumed for all four parameters (Table 1). To estimate the standard deviation required for the MCMC for the four unknown parameters, Latin Hypercube Sampling (LHS) from the prior ranges was initially used (Stage A). The empirical standard deviation from the accepted fits was then used as proposal distribution of a Metropolis-Hastings MCMC sampler (Stage B), used to estimate posterior probabilities of the parameters. We used the sampled trajectories to consider the probability of remaining free of tuberculosis from the model output and compare the general trends to the data ( Figure 2 from (17)). Scenario analysis A scenario analysis was used to explore the sensitivity of Model 1 results to key natural history parameters. Firstly, we changed the initial proportion of the population latently infected with MDR-Mtb from 2% to 10%. A full sensitivity analysis of the parameters kept fixed in the model fits was not possible due to limitations imposed by computation time. 
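The ABC-MCMC step described above (accept a proposed parameter set only when the simulated incidences fall inside all four observed 95% CIs) can be sketched as follows; simulate_incidence stands in for the household simulation, and the priors, proposal standard deviations and CI bounds are placeholders rather than the study's values.

```python
# Schematic ABC-Metropolis-Hastings sampler for the four fitted parameters
# (illustrative; simulate_incidence(), priors and CI bounds are placeholders).
import numpy as np

rng = np.random.default_rng(0)
PRIOR_LO = np.array([0.0, 0.0, 0.0, 0.0])   # beta_s, f, foi_s, foi_r lower bounds (assumed)
PRIOR_HI = np.array([5.0, 1.0, 0.1, 0.1])   # uniform prior upper bounds (assumed)
CI_LO = np.array([3.0, 0.1, 1.5, 0.05])     # placeholder 95% CI lower bounds for the
CI_HI = np.array([6.0, 0.6, 3.5, 0.40])     # four observed incidence measures

def simulate_incidence(theta):
    # placeholder standing in for the stochastic household model; returns four incidences
    beta_s, f, foi_s, foi_r = theta
    return np.array([beta_s + foi_s, f * beta_s + foi_r, 50 * foi_s, 5 * foi_r])

def summary_ok(theta):
    inc = simulate_incidence(theta)
    return np.all((inc >= CI_LO) & (inc <= CI_HI))

def abc_mcmc(theta0, proposal_sd, n_iter=5000):
    chain, theta = [], np.array(theta0, float)
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, proposal_sd, size=theta.size)
        in_prior = np.all((prop >= PRIOR_LO) & (prop <= PRIOR_HI))
        # with uniform priors and a symmetric proposal, the MH ratio reduces to an
        # indicator: accept only if the proposal is inside the prior and matches the data
        if in_prior and summary_ok(prop):
            theta = prop
        chain.append(theta.copy())
    return np.array(chain)
```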
Instead of a full sensitivity analysis, to determine which further scenarios to explore, we identified the parameters most correlated with TB incidence in our model, and hence most likely to affect our model fit and parameter estimates. To determine these parameters, we used LHS to choose 10,000 parameter sets from (uniform) prior distributions for all parameters (Table 2). We then ran Model 1 with these 10,000 parameter sets and determined the parameters that were statistically significantly correlated with any of the four TB incidence outputs (Kendall correlation, p < 0.01). These parameters were then used to design two scenarios: one with the combination of these parameters at their prior values which gave the highest TB incidence and one with the combination which gave the lowest TB incidence. We also increased our 10-year initial run-in period for the population to 30 years and explored the impact on the estimates. Furthermore, we explored removing the assumption of saturating household transmission (the per capita transmission rate was then not dependent on household size). (Figure 1 legend: a two-strain model of Mtb was used. Uninfected people become infected at a rate dependent on the number of active cases (dynamic transmission). Once infected, the majority of people (85%) are assumed to enter a Latent slow (LS / LR) state. The remainder enter a rapid progression (Latent fast, LFS / LFR) state which has a higher rate of progression to active disease (AS / AR). Resistance mutations are acquired during active disease. Those with active disease recover to the Latent slow state via treatment or natural cure. The fitness cost to resistance is assumed to affect the rate of transmission (f1) or the rate at which those latently infected with MDR-TB progress to active disease (f2). Only the effect of f1 on primary transmission is highlighted here, but reinfection is also decreased. f1 and f2 are set at 1 or allowed to vary between 0 and 1 in the three separate models. Table 1 caption: Fitted parameters with description, data used for fitting, prior distributions and any differences by model structure. All parameters are fitted to the TB incidence data from the household (HH) study (17). The three models have different assumptions around the effect of decreased fitness, with f being either f1 (affects transmission rate) or f2 (affects progression to disease rate); see Figure 1.) Detailed overview of simulation A detailed overview of all the stages used in the simulation is provided in Figure 2. The model initially sampled 700 household sizes from the distribution of household sizes in the trial [1]. 213 of these had an initial MDR-TB case, 487 an initial DS-TB case. Tuberculin skin test (TST) prevalence surveys across Lima have found 52% (95% CI: 48-57%) to be infected with Mtb [2]. Hence, the number of cases initially latently infected was sampled from a normal distribution with mean 0.5 and standard deviation of 0.1. Informed by the TB prevalence in Lima, it was assumed that initially 98% of these latent infections were with DS-TB strains and 2% with MDR-TB strains in all households [2]. This proportion was varied in scenario analysis. Random sampling from a binomial distribution, with this 98% DS-TB, determined the distribution of latent DS-TB and MDR-TB cases across the 700 households. The proportion of latent cases that were "latent fast" cases (Figure 1) was taken to be 3%, to reflect that although the proportion of new infections that are fast latent is 15%, over time these will change state more rapidly than latent slow.
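The screening step just described (Latin hypercube samples from the priors, then a Kendall correlation of each parameter against the simulated incidence) could be sketched as follows; run_model, the parameter names and the bounds are placeholders, not values taken from the study.

```python
# Illustrative LHS + Kendall-tau screening of natural-history parameters
# (run_model(), parameter names and bounds are placeholders).
import numpy as np
from scipy.stats import qmc, kendalltau

names = ["p_fast", "protection", "prop_infectious", "prog_fast"]   # assumed labels
lo = np.array([0.05, 0.10, 0.20, 0.10])
hi = np.array([0.30, 0.50, 0.80, 1.00])

def run_model(params):
    # placeholder standing in for the stochastic household model; returns one incidence value
    p_fast, protection, prop_inf, prog_fast = params
    return 100 * (p_fast * prog_fast * prop_inf) / (0.1 + protection)

sampler = qmc.LatinHypercube(d=len(names), seed=1)
theta = qmc.scale(sampler.random(n=10_000), lo, hi)
incidence = np.array([run_model(row) for row in theta])

for j, name in enumerate(names):
    tau, p = kendalltau(theta[:, j], incidence)
    flag = "significant" if p < 0.01 else "not significant"
    print(f"{name}: tau={tau:.2f}, p={p:.3g} ({flag})")
```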
Probability of remaining free from tuberculosis We compared the probability of remaining free from TB in our model to that presented in the original study (Figure 2 in [3]). We had highly similar dynamics to those in the main study (Figure 5). The total follow-up time of MDR-TB contacts was 1,425 person-years (mean follow-up time per MDR-TB contact 494 d, standard deviation 199 d), during which 35 second cases arose, equating to an incidence of 2,456 per 100,000 contact follow-up person-years. The total follow-up time of drug-susceptible tuberculosis contacts was 2,620 person-years (mean follow-up time per drug-susceptible tuberculosis contact 406 d, standard deviation 189 d), during which 114 second cases arose, equating to an incidence of 4,351 per 100,000 contact follow-up person-years (multivariate analysis, HR 0.56, 95% CI 0.34-0.90, p = 0.017; Fig 2). Scenario analysis results The parameter estimates for the five scenarios are given in Table 1 and Figure 19. Our first scenario analysis explored increasing the initial proportion of households that were initially infected with latent MDR-Mtb from 2% to 10% (in the pre-study). Fitting the four unknown parameters revealed that this increased MDR-Mtb latency proportion had very little impact on the estimates. Our correlation analysis revealed four parameters (other than the four unknown parameters) to be correlated with TB incidence: the proportion of (re)infected individuals which progress to "latent fast" (p), the protection from developing active TB upon re-infection (), the proportion of new active cases which directly become infectious (d) and the progression rate of latent fast individuals to active disease (pf). The second scenario set these four parameters to (p, , d, pf) = (0.25, 0.25, 0.75, 0.9) (high TB incidence) and the third (low TB incidence) to (0.08, 0.45, 0.25, 0.1). These second and third scenarios affected the estimates for the external force of infection and the per capita transmission rate, as would be expected given the nature of the change in the natural history parameters. However, the estimates for the relative fitness (f) remain relatively consistent with our initial parameter set in Model 1, at approximately 0.30. Scenario 3 has a lower mean fitness at 0.22. The fourth scenario extended the initial run-in period from 10 to 30 years. All parameter estimates are similar to those of Model 1, including the relative fitness. The 95% credible intervals are larger, as would be expected from the larger initial variation that comes from taking initial conditions from a 3× longer run-in. The fifth scenario removed the saturating household effect. The parameter estimates from this were also highly similar to the main analysis, except for the per capita transmission parameter, which was lower, reflecting the change to the model structure (no longer divided by household size). (Figure: parameter estimates of foi_s, foi_r and f for Model 1 (M1) and scenarios S1-S5.)
6,913.2
2017-10-11T00:00:00.000
[ "Biology" ]
Nuclear forces from EFT: Recent developments Nuclear forces are considered based on chiral perturbation theory with and without explicit ∆-isobar degrees of freedom. We discuss the subleading corrections to chiral three-nucleon forces in the ∆-less formalism, which contain no additional free parameters. In the formalism with explicit ∆-isobar we present the complete next-to-next-to-leading order analysis of isospin-conserving and next-to-leading order analysis of isospin-violating nuclear forces. The perturbative expansion of nuclear forces in the ∆-full case is shown to have much better convergence compared with the ∆-less theory, where the ∆-resonance is integrated out and is encoded in certain low-energy constants. 1 Nuclear forces in chiral effective field theory One of the most important questions in nuclear physics is how the nucleons, as the constituents of nuclei, interact with each other. Already in 1935 Yukawa made an attempt to explain the nature of the nucleon-nucleon interaction by a virtual meson exchange between nucleons [1]. Later the discovery of pions and heavier mesons laid the foundation for the development of highly sophisticated phenomenological models for nuclear forces which were motivated by the original idea of Yukawa. In the two-nucleon sector they give a perfect description of the experimental data at the cost of often more than 40 parameters fixed from the fit to data. In the three-nucleon sector it is probably not feasible to follow a similar phenomenological path. The rich spin-isospin structure allows many more possibilities to parameterize the three-nucleon force, leading to many more unknown constants which can hardly be fixed from the experimental input. In order to improve our understanding of nuclear forces we need to learn how they are connected to the underlying theory of strong interactions, Quantum Chromodynamics (QCD). In the above mentioned phenomenological models this connection is obviously missing. From the QCD point of view, nuclear forces are given as residual interactions between hadrons described by quark-gluon dynamics. However, in the low-energy sector far below the chiral-symmetry-breaking scale Λχ ∼ 1 GeV, quarks and gluons are not the most efficient degrees of freedom for the description of nuclear processes. Nuclear forces in this region are largely driven by chiral Goldstone boson dynamics, which appears due to spontaneous and explicit breaking of chiral symmetry in QCD. In the SU(2) scenario the Goldstone bosons can be interpreted as pions, which get their mass due to the explicit chiral symmetry breaking by the small up and down quark masses. Since pions are Goldstone bosons, their interactions vanish with vanishing four-momenta. This fact allows us to construct an effective field theory (EFT) with nucleons and pions as degrees of freedom and the same underlying symmetries of QCD, in the form of a perturbative expansion in low external momenta and the pion mass divided by the hard scale Λχ ∼ 1 GeV [2,3]. In this systematic, order-by-order improvable approach called chiral perturbation theory (ChPT) we get both a direct connection to QCD and a description in efficient hadronic (rather than quark) degrees of freedom by construction. In the pure meson and one-nucleon sectors, ChPT has been used to calculate various processes like form factors, pion-nucleon scattering, Compton scattering etc. (see [4] for a recent review). In the two- and more-nucleon sector an additional problem appears which does not allow one to use ChPT as just a perturbation theory.
Almost two decades ago we learned from the seminal paper of Weinberg [5] that diagrams with two (or more) nucleon cuts violate the power counting of ChPT. As a solution to this problem he suggested defining an effective potential which by construction does not have any two (or more) nucleon cuts and thus can be calculated perturbatively to a desired order using chiral EFT. In order to describe a given nuclear process we need to solve the Lippmann-Schwinger, Faddeev or Faddeev-Jakubowsky equations numerically (with the previously constructed effective potential as input) in the two-, three-, four- and more-nucleon sectors, respectively. This path has been followed in the last two decades by several groups. The chiral effective nucleon-nucleon (NN) potential has been constructed up to next-to-next-to-next-to-leading order (N3LO). At this order 24 unknown low-energy constants (LECs) coming from short-range NN contact interactions have been fitted to the proton-neutron scattering phase shifts obtained from the Nijmegen partial wave analysis (PWA) [6,7]. At N3LO the chiral effective NN potential describes the Nijmegen PWA phases almost as well as other phenomenological potentials. The three-body forces have been analyzed numerically only up to N2LO. At this order there appear two additional short-range LECs which have been fitted, e.g., to the binding energy of 3H and the S-wave doublet neutron-deuteron scattering length. At N2LO the description of three-body observables is in most cases satisfactory. The results for the differential cross section of elastic neutron-deuteron scattering, e.g., are in good agreement with experimental data. On the other hand there are some observables which are rather poorly described at this order even at low energies. One example is the so-called symmetric space-star configuration in the neutron-deuteron breakup reaction. In this configuration, the plane in the CMS spanned by the outgoing nucleons is perpendicular to the beam axis, and the angles between the nucleons are 120°. At Elab = 15 MeV theoretical calculations based on both phenomenological and chiral nuclear forces are unable to describe the data (see recent reviews [9,10] for extensive discussion). There is, however, hope of improvement once N3LO three-body forces are included. 2 N3LO chiral three-nucleon force An interesting point about the N3LO contributions to chiral three-nucleon forces is the absence of additional LECs at that order. Their potential appearance is prevented by the underlying symmetries of QCD. The rich spin-isospin structure of the N3LO contributions also makes them promising for resolving the discrepancies in the description of some until now poorly described three-nucleon observables. The N3LO contributions can be divided into two parts. The first part is given by long-range contributions, which include the following: – Two-pion exchange (2π) graphs (see graph (a) in Fig.
1) which have been considered by Ishikawa and Robilotta [12] using the so-called infrared regularization method and by our group [13] in the framework of unitary transformations combined with the Heavy Baryon formalism. – Two-pion-one-pion exchange (2π − 1π) graphs, which are visualized by graph (b) in Fig. 1. – Three-pion exchange, so-called ring diagrams, which are visualized by graph (c) in Fig. 1. Analytic expressions for all these contributions can be found in [13]. The second part is given by shorter-range contributions, which are visualized by graphs (d) and (e) in Fig. 1, and the corresponding 1/m corrections (with m the nucleon mass). Their construction has by now been finished and will be published elsewhere. It is important to note that the construction of the N3LO forces is unique only modulo unitary transformations. In the static limit, however, the natural choice of unitary transformation is entirely dictated by the renormalizability requirement of the three-nucleon forces [14]. One can even speculate that in general an effective potential in the static limit is unique once we additionally require its renormalizability. 3 Chiral EFT with ∆-isobar degrees of freedom In the standard chiral EFT discussed so far only pions and nucleons are treated as dynamical degrees of freedom. All other resonances, like e.g. the ∆ or ρ, are integrated out and their contributions are encoded in the LECs. In this scheme the pion four-momenta Q and mass Mπ are treated as a soft scale and the mass difference ∆ = m∆ − mN between the lightest baryon resonance, the ∆, and the nucleon as a hard scale, which is assumed to be of the same size as the chiral symmetry breaking scale Λχ ∼ 1 GeV: Q ∼ Mπ ≪ ∆ = 293 MeV. (1) One can still argue that scales of the order of ∼ 300 MeV can be treated as a soft scale. In this case one has to introduce the ∆ degrees of freedom explicitly into the theory and enlarge the set of expansion parameters by the delta-nucleon mass difference ∆/Λχ: Q ∼ Mπ ∼ ∆ ≪ Λχ. (2) This expansion is known in the literature as the small scale expansion (SSE) [15]. Integrating out the ∆ degrees of freedom often leads to an enlargement of the values of the LECs. This happens, e.g., with the ππNN LECs c3 and c4, which are saturated by the ∆-isobar and are known to be unnaturally large. In the theory with explicit ∆, however, they have natural size. Therefore the unnatural enlargement of some ci's in the delta-less theory can be explained by large ∆-contributions which get absorbed by the ci. The appearance of unnaturally large LECs in an EFT can spoil its convergence. This can be seen in the chiral effective potential: the subleading-order contributions to the chiral 2π-exchange potential in the ∆-less theory appear to be larger than the leading one. From this point of view, the explicit inclusion of the ∆-isobar is well motivated: one can expect the LECs to be of natural size and the SSE of chiral nuclear forces to possess natural convergence.
4 Nuclear forces with explicit ∆-isobar The consideration of chiral nuclear forces with explicit ∆i Nuclear forces in chiral effective field theory One of the most important questions in nuclear physics is how the nucleons as the constituents of nuclei interact with each other.Already 1935 Yukawa made an attempt to explain the nature of the nucleon-nucleon interaction by a virtual meson exchange between nucleons [1].Later the discovery of pions and heavier mesons laid the foundation for the development of highly sophisticated phenomenological models for nuclear forces which were motivated by the original idea of Yukawa.In the two-nucleon sector they give a perfect description of the experimental data at the cost of often more than 40 parameters fixed from the fit to data.In the three-nucleon sector it is probably not feasible to follow a similar phenomenological path.The rich spin-isospin structure allows much more possibilities to parameterize the three-nucleon force leading to much more unknown constants which can hardly be fixed from the experimental input. In order to improve our understanding of nuclear forces we need to learn how they are connected to the underlying theory of strong interactions, Quantum Chromo Dynamics (QCD).In the above mentioned phenomenological models this connection is obviously missing.From the QCD point of view, nuclear forces are given as residual interaction between hadrons described by quark-gluon dynamics.However, in the low-energy sector far below the chiral-symmetry-breaking scale Λ χ ∼ 1 GeV, quarks and gluons are not the most efficient degrees of freedom for the a e-mail<EMAIL_ADDRESS>of nuclear processes.Nuclear forces in this region are largely driven by chiral Goldstone boson dynamics which appears due to spontaneous and explicit breaking of chiral symmetry in QCD.In the SU(2) scenario the Goldstone bosons can be interpreted as pions which get their mass due to the explicit chiral symmetry breaking by the small up and down quark masses.Since pions are Goldstone bosons, their interactions vanish with vanishing fourmomenta.This fact allows us to construct an effective field theory (EFT) with nucleons and pions as degrees of freedom and the same underlying symmetries of QCD in form of a perturbative expansion in low external momenta and pion mass divided by the hard scale Λ χ ∼ 1 GeV [2,3].In this systematic, order-by-order improvable approach called chiral perturbation theory (ChPT) we get both a direct connections to QCD and a description in efficient hadronic (rather than quark) degrees of freedom by construction. In the pure meson and one-nucleon sectors, ChPT has been used to calculate various processes like form factors, pion-nucleon, Compton scattering etc. 
(see [4] for a recent review). In the two- and more-nucleon sectors an additional problem appears which does not allow one to use ChPT as just a perturbation theory. Almost two decades ago we learned from the seminal paper of Weinberg [5] that diagrams with two- (or more-) nucleon cuts violate the power counting of ChPT. As a solution to this problem he suggested defining an effective potential which by construction does not have any two- (or more-) nucleon cuts and thus can be calculated perturbatively to a desired order using chiral EFT. In order to describe a given nuclear process we need to solve the Lippmann-Schwinger, Faddeev or Faddeev-Yakubovsky equations numerically (with the previously constructed effective potential as input) in the two-, three-, four- and more-nucleon sectors, respectively. This path has been followed in the last two decades by several groups. The chiral effective nucleon-nucleon (NN) potential has been constructed up to next-to-next-to-next-to-leading order (N3LO). At this order 24 unknown low-energy constants (LECs) coming from short-range NN contact interactions have been fitted to the proton-neutron scattering phase shifts obtained from the Nijmegen partial wave analysis (PWA) [6,7]. At N3LO the chiral effective NN potential describes the Nijmegen PWA phases almost as well as other phenomenological potentials.

The three-body forces have been analyzed numerically only up to N2LO. At this order there appear two additional short-range LECs which have been fitted, e.g., to the binding energy of 3H and the S-wave doublet neutron-deuteron scattering length. At N2LO the description of three-body observables is in most cases satisfactory. The results for the differential cross section of elastic neutron-deuteron scattering, for example, are in good agreement with experimental data. On the other hand there are some observables which are rather poorly described at this order even at low energies. One example is the so-called symmetric space-star configuration in the neutron-deuteron breakup reaction. In this configuration, the plane in the CMS spanned by the outgoing nucleons is perpendicular to the beam axis, and the angles between the nucleons are 120°. At Elab = 15 MeV theoretical calculations based on both phenomenological and chiral nuclear forces are unable to describe the data (see the recent reviews [9,10] for extensive discussion). There is, however, hope of improvement once N3LO three-body forces are included.

N3LO chiral three-nucleon force

An interesting point about the N3LO contributions to chiral three-nucleon forces is the absence of additional LECs at that order. Their potential appearance is simply prevented by the underlying symmetries of QCD. The rich spin-isospin structure of the N3LO contributions also makes them promising candidates to resolve the discrepancies in the description of some until now poorly described three-nucleon observables.

The N3LO contributions can be divided into two parts. The first part is given by long-range contributions which include the following:
- Two-pion exchange (2π) graphs (see graph (a) in Fig. 1), which have been considered by Ishikawa and Robilotta [12] using the so-called infrared regularization method and by our group [13] in the framework of unitary transformations combined with the Heavy Baryon formalism.
- Two-pion-one-pion exchange (2π − 1π) graphs, which are visualized by graph (b) in Fig. 1.
- Three-pion exchange, so-called ring diagrams, which are visualized by graph (c) in Fig. 1.
Analytic expressions for all these contributions can be found in [13]. The second part is given by shorter-range contributions, which are visualized by graphs (d) and (e) in Fig. 1, and the corresponding 1/m corrections (with m the nucleon mass). Their construction has by now been finished and will be published elsewhere. It is important to note that the construction of the N3LO forces is unique only modulo unitary transformations. In the static limit, however, the natural choice of unitary transformation is entirely dictated by the renormalizability requirement of the three-nucleon forces [14]. One can even speculate that in general an effective potential in the static limit is unique once we additionally require its renormalizability.

Chiral EFT with ∆-isobar degrees of freedom

In the standard chiral EFT discussed so far only pions and nucleons are treated as dynamical degrees of freedom. All other resonances, like e.g. the ∆ or the ρ, are integrated out and their contributions are encoded in the LECs. In this scheme pion four-momenta Q and the pion mass Mπ are treated as a soft scale, and the mass difference ∆ = m∆ − mN between the lightest baryon ∆-resonance and the nucleon as a hard scale, which is assumed to be of the same size as the chiral symmetry breaking scale Λχ ∼ 1 GeV:

Q ∼ Mπ ≪ ∆ = 293 MeV.    (1)

One can still argue that scales of the order of ∼ 300 MeV can be treated as a soft scale. In this case one has to introduce the ∆ degrees of freedom explicitly into the theory and enlarge the set of expansion parameters by the delta-nucleon mass difference ∆/Λχ:

Q ∼ Mπ ∼ ∆ ≪ Λχ.    (2)

This expansion is known in the literature as the small scale expansion (SSE) [15]. Integrating out the ∆ degrees of freedom often leads to an enlargement of the values of the LECs. This happens, e.g., with the ππNN LECs c3 and c4, which are saturated by the ∆-isobar and are known to be unnaturally large. In the theory with explicit ∆, however, they have natural size. Therefore the unnatural enlargement of some ci's in the delta-less theory can be explained by large ∆-contributions which get absorbed into the ci. The appearance of unnaturally large LECs in an EFT can spoil its convergence. This can be seen in the chiral effective potential: the subleading-order contribution to the chiral 2π-exchange potential in the ∆-less theory appears to be larger than the leading one. From this point of view, the explicit inclusion of the ∆-isobar is well motivated: one can expect the LECs to be of natural size and the SSE of chiral nuclear forces to possess a natural convergence.

Nuclear forces with explicit ∆-isobar

The consideration of chiral nuclear forces with an explicit ∆-isobar started more than one decade ago. Ordonez et al. [16] and Kaiser et al.
[17] worked out the leading ∆-resonance contributions to the chiral nuclear forces. They have shown that the large N2LO contributions to the 2π-exchange potential in the standard chiral EFT are shifted to NLO, such that the dominant contribution to the nuclear forces appears already at NLO in the ∆-full scenario. In the standard chiral EFT three-nucleon forces start to contribute at N2LO. This changes in a ∆-full theory, where the first three-nucleon contribution starts already at NLO and is given by the well-known Fujita-Miyazawa force [19,20]. Recently we calculated the subleading ∆-resonance effects [18] which contribute to the N2LO chiral nuclear forces. An interesting point is that at N2LO there are no corrections to the three-nucleon forces coming from ∆-isobar excitations: all possible NNN∆ contact interactions are forbidden by the Pauli exclusion principle. The forces constructed out of building blocks with such a contact interaction disappear after antisymmetrization. So the only non-zero ∆-resonance contribution to the three-nucleon forces up to N2LO is given by the Fujita-Miyazawa force [21].

In order to discuss numerical results for the ∆-full forces up to N2LO we need to fix all the unknown constants which appear at this order. These are the leading and subleading combinations of the pion-nucleon-delta coupling constants, hA and b3 + b8, respectively, and the ci, i = 1, 2, 3, 4, from the πN sector. We fixed them in two different fits to the S- and P-wave threshold parameters of pion-nucleon scattering calculated up to second order in the SSE. The diagrams contributing to this order are shown in Fig. 2. In the first fit we take the large-Nc value hA = 3gA/(2√2) ≈ 1.35, where gA = 1.27 is the nucleon axial vector coupling, and fitted all other constants to various threshold parameters. In the second fit we used hA = 1.05, which has been extracted from the Heavy-Baryon ∆-width analysis in the static limit [15] and is consistent with the quark model relation.

Numerical values for the various LECs determined in this way are given in Table 1. Note that the constants c3 and c4 are strongly reduced in the ∆-full theory, which is consistent with our previous considerations. With these constants we found for all effective NN potentials a much better convergence compared with the potentials in the delta-less theory (see [18] for an extensive discussion). The same behavior can be seen in peripheral partial waves (see, e.g., the 3F3 and 3F4 partial waves within the ∆-full and ∆-less theories in comparison with the Nijmegen and Virginia Tech PWA in Fig. 3). Note that although the convergence in the ∆-full theory is much better than in the ∆-less one, the overall N2LO results in both formulations are very similar.
Leading isospin-breaking ∆-isobar contributions To estimate the size of isospin-breaking effects we also studied isospin-breaking contributions to chiral NN forces within the ∆-full theory up to next-to-leading order.In this presentation we concentrate only on charge symmetry breaking (CSB) contributions to nuclear forces.For the full discussion see [23] and the forthcoming publication [24].The leading CSB contributions to the nuclear force in the ∆-full theory are proportional to the nucleon-and ∆mass splittings.At leading order there are electromagnetic and strong isospin-breaking contributions to the ∆-mass splittings.While both of them contribute to the equidistant splitting δm 1 ∆ in the ∆ quartet, the non-equidistant splitting δm 2 ∆ is of a pure electromagnetic origin.To estimate the values of the ∆-mass splittings we proceeded in two ways.In the first fit we used the most recent Particle Data Group values for m ∆ ++ = 1230.80± 0.30 MeV, m ∆ 0 = 1233.45± 0.35 MeV together with the average mass m ∆ = 1233 MeV.With this input we get for the ∆-mass splittings Alternatively, instead of using m ∆ = 1233 MeV, we employed the quark model relation [25] where δm N is the nucleon mass splitting.The results for the ∆-mass splittings determined in this way appear to be consistent with the first determination: Having determined the values for the ∆-mass splittings we studied the CSB contributions to the nuclear forces.In Fig. 4 we show CSB two-pion-exchange contributions to the two-nucleon potential in configuration space.The contributions due to the nucleonmass splitting δm N for the potentials V III S ,T appear to be similar in the ∆-less and ∆-full theories.In the central potential V III C , however, we see sizeable differences.In all potentials we observe strong cancellations between the δm Nand δm 1 ∆ -contributions which lead to significantly weaker V III C,S ,T potentials in the ∆-full theory. Subleading isospin-breaking ∆-isobar contributions In order to study the convergence of the isospin-breaking nuclear forces within SSE we need to calculate them at least up to next-to-leading order.We concentrate here again on the two-pion-exchange part of CSB forces and refer to forthcoming publication [24] for the full discussion.At NLO there appear three unknown constants: one of them is β which is a combination of the LECs accompanying the leading isospin-breaking πNN vertex [11].Two other unknown constants which we call β 1 ∆ and β 2 ∆ are given by a particular combination of the LECs accompanying the leading isospin-breaking πN∆ vertex [24].For our current considerations we set all the three constants to zero.Similar to LO there are two contributions to CSB forces at NLO which are proportional to nucleon-and ∆-mass splittings δm and δm 1 ∆ , respectively.In Fig. 
5 we show the individual contributions to CSB forces at NLO (right panel) compared with the corresponding forces at leading order (left panel). The NLO corrections to the CSB forces appear to be of natural size. In all the potentials, both at LO and at NLO, we observe strong cancellations between the contributions proportional to δm and δm^1_∆. In all CSB forces at NLO, however, one can see that the δm contributions are almost a factor of 2 larger than the δm^1_∆ contributions. This leads to stronger V^III_{C,S,T} potentials even in the ∆-full theory. The entire effect which leads to weaker CSB forces at leading order in the SSE, and which can be interpreted as a higher-order effect from the resonance-saturation point of view in the ∆-less theory, is completely compensated at NLO in the SSE. This can be clearly seen from Fig. 6, where we show the results for CSB forces up to NLO both in the ∆-full (left panel) and ∆-less theory (right panel). Similar to the isospin-conserving case, one can see that the overall results in both theories, with and without explicit ∆ degrees of freedom, are remarkably close. Only the convergence of the nuclear forces is improved once the ∆ is explicitly taken into account.

Summary

We discussed the long- and shorter-range N3LO contributions to chiral three-nucleon forces. No additional free parameters appear at this order. There are five different topology classes which contribute to the forces. Three of them describe long-range contributions, which constitute the first systematic corrections to the leading 2π exchange that appears at N2LO. The other two contributions are of shorter range and include, in addition to an exchange of pions, also one short-range contact interaction and all corresponding 1/m corrections. The requirement of renormalizability leads to unique expressions for the N3LO contributions to the three-nucleon force (except for the 1/m corrections). We presented the complete N2LO analysis of the nuclear forces with explicit ∆-isobar degrees of freedom. Although the overall results in the isospin-conserving case are very similar in the ∆-less and ∆-full theories, we found a much better convergence in all peripheral partial waves once the ∆-resonance is explicitly taken into account. The leading CSB contributions to nuclear forces are proportional to the nucleon- and ∆-mass splittings. There appear strong cancellations between the two contributions, which at leading order yield weaker V^III potentials. This effect is, however, entirely compensated at subleading order, such that the results in the theories with and without explicit ∆ degrees of freedom are remarkably similar.

We are looking forward to numerical studies of N3LO three-nucleon forces and implementations of the N2LO forces in the ∆-full EFT in future NN partial-wave analyses.

Fig. 1. Various topologies that appear in the three-nucleon force at N3LO. Solid and dashed lines represent nucleons and pions, respectively. Shaded blobs are the corresponding amplitudes. The long-range part of the three-nucleon force considered in this paper consists of (a) 2π exchange graphs, (b) 2π-1π diagrams and the ring diagrams (c). The topologies (d) and (e) involve four-nucleon contact operators and are of shorter range.

Fig. 4. CSB two-pion exchange potential. The left (right) panel shows the results obtained at leading order in chiral EFT with explicit ∆-resonances (at subleading order in chiral EFT without explicit ∆ degrees of freedom). The dashed and dashed-double-dotted lines depict the contributions due to the ∆- and nucleon-mass differences δm^1_∆ and δmN, respectively, while the solid lines give the total result.

Fig. 5. CSB two-pion exchange potential. The left (right) panel shows the results obtained at leading order (at subleading order) in chiral EFT with explicit ∆-resonances. The dashed and dashed-double-dotted lines depict the contributions due to the ∆- and nucleon-mass differences δm^1_∆ and δmN, respectively, while the solid lines give the total result.

Table 1. Determinations of the LECs from S- and P-wave threshold parameters in πN scattering based on the Q2 fits with and without explicit ∆'s. LECs used as input are marked by a star. Also shown are the values determined in Ref. [22] from fit 2 at Q3 without explicit ∆'s (the errors given are purely statistical and do not reflect the true uncertainty of the LECs). The LECs ci and b3 + b8 are given in GeV−1.
Nonlinear Color-Metallicity Relations of Globular Clusters. X. Subaru/FOCAS Multi-object Spectroscopy of M87 Globular Clusters We obtained spectra of some 140 globular clusters (GCs) associated with the Virgo central cD galaxy M87 with the Subaru/FOCAS MOS mode. The fundamental properties of GCs such as age, metallicity and $\alpha$-element abundance are investigated by using simple stellar population models. It is confirmed that the majority of M87 GCs are as old as, more metal-rich than, and more enhanced in $\alpha$-elements than the Milky Way GCs. Our high-quality, homogeneous dataset enables us to test the theoretical prediction of inflected color$-$metallicity relations (CMRs). The nonlinear-CMR hypothesis entails an alternative explanation for the widely observed GC color bimodality, in which even a unimodal metallicity spread yields a bimodal color distribution by virtue of nonlinear metallicity-to-color conversion. The newly derived CMRs of old, high-signal-to-noise-ratio GCs in M87 (the $V-I$ CMR of 83 GCs and the $M-T2$ CMR of 78 GCs) corroborate the presence of the significant inflection. Furthermore, from a combined catalog with the previous study on M87 GC spectroscopy, we find that a total of 185 old GCs exhibit a broad, unimodal metallicity distribution. The results corroborate the nonlinear-CMR interpretation of the GC color bimodality, shedding further light on theories of galaxy formation. INTRODUCTION Globular clusters (GCs) are an old simple stellar population (SSP) with a small internal dispersion in age and chemical composition, and provide powerful clues to help understand the formation histories of their host galaxies. The exciting discovery of GC color bimodality in many luminous early-type galaxies (e.g., Geisler et al. 1996;Kundu & Whitmore 2001;Larsen et al. 2001;Peng et al. 2004;Harris et al. 2006;Peng et al. 2006;Lee et al. 2008;Jordán et al. 2009;Sinnott et al. 2010;Liu et al. 2011;Blakeslee et al. 2012;Brodie et al. 2012;Pota et al. 2013;Kartha et al. 2014;Richtler et al. 2015;Cho et al. 2016;Kartha et al. 2016;Caso et al. 2017;Harris et al. 2017;Choksi et al. intrinsic metallicity and its proxy, colors, holds the key to the color bimodality phenomenon. Paper I demonstrated that conversions to colors from the unimodal metallicity distribution of old GCs (> 10 Gyr) can be naturally achieved through nonlinear CMRs. Nonlinear relations between [Fe/H] and colors are also reported observationally (Peng et al. 2006;Blakeslee et al. 2012;Chies-Santos et al. 2012;Cho et al. 2016). The nonlinear-CMR scenario was scrutinized in a subsequent series of papers (Yoon et al. 2011a(Yoon et al. ,b, 2013Kim et al. 2013;Chung et al. 2016;Kim & Yoon 2017;Lee et al. 2019;Lee et al. 2020, hereafter Papers II, III, IV, V, VI, VII, VIII, and IX). In Papers II and IV, the shapes of CMRs according to different colors were examined for the GC systems of M87 and M84, respectively. They showed that the u-band-related colors in particular, are useful diagnostics of the underlying metallicity distribution function. Different combinations of ugz bands from Hubble Space Telescope (HST) multiband photometry yielded histograms of varying morphologies, which is consistent with the nonlinear-CMR assumption. Paper III focused on the analysis of GC metallicities and their link to the host galaxy. It was shown that when transformed through nonlinear CMRs, GCs and halo field stars share similar metallicity distribution shapes. 
This suggests a connected origin of GCs and halo field stars, contrary to previous views. Papers V and VII examined the spectra of old, spheroidal GCs in the M31 and NGC 5128, respectively. The observed distributions of high-quality spectroscopic absorption-line indices of M31 GCs (Caldwell et al. 2011) and NGC 5128 GCs (Beasley et al. 2008;Woodley et al. 2010) were reproduced successfully using the nonlinear metallicityto-index transformation, exactly analogous to the view that the nonlinear metallicity-to-color transformation is responsible for the photometric color bimodality of GCs. A further theoretical spectroscopic study, Paper VI, showed that the predictions from the Yonsei Evolutionary Population Synthesis (YEPS) models (Chung et al. 2013;Chung et al. 2017) exhibit nonlinearity in the Ca II triplet (CaT ; 8498, 8542, and 8662 A) index versus metallicity plane. The CaT is a wellknown metallicity indicator for stellar populations and is used to derive metallicities of extragalactic GCs assuming a linear relation between CaT and metallicity (e.g., Foster et al. 2010;Brodie et al. 2012;Usher et al. 2015). They also showed that the conversion via nonlinear CaT −metallicity from a unimodal metallicity distribution reproduces the observed CaT distribution well. More recently, two theoretical photometric studies also reinforced the nonlinearity scenario for the GC color bimodality. Paper VIII showed that the individual color distributions of 78 GC systems in early-type galaxies from the ACS Virgo and Fornax Cluster Surveys are correctly reproduced based on the nonlinear-CMR scenario. Paper IX showed that the difference between blue and red GCs in M49, M60, M87, and NGC 1399 in terms of the radial surface number density profile arises naturally from the nonlinear metallicity-to-color conversion together with the observed radial metallicity gradient of the GC systems. This paper is the 10th in the nonlinear-CMR series and adopts the same line of analysis in exploring and determining more precise forms of GC CMR. We use a homogeneous set of high-quality M87 GC spectra with wide metallicity ranges, obtained with the Subaru/FOCAS MOS mode. M87 is a cD galaxy at the center of the Virgo cluster harboring a population of ∼ 10 4 GCs. The previous spectroscopic study of its GC system (Cohen et al. 1998, hereafter C98) reported ages and metallicities for ∼ 100 GCs. Our sample GCs are located similarly within 10 ′ of galactic radius and have nine GCs in common with the C98 catalog. C98 spectroscopic GCs are brighter than ours by 0.5−1.0 mag in the V band. Recently, Villaume et al. (2019) reported a new metallicity measurement on M87 GCs via full-spectrum SSP model fitting using the C98 GC in which they obtained systematically higher metallicities compared to the previous findings on M87 (see the discussion). The paper is outlined as follows. Section 2 presents the observation and reduction of the data used in the study. Section 3 describes the measurements of radial velocities and selection of member GCs. Section 4 provides the measurement of Lick/IDS absorption-line indices (Worthey 1994;Worthey & Ottaviani 1997, hereafter W94 and W97, respectively), followed by the inferred age, metallicity, and α-element abundance of the M87 GCs in Section 5. In Section 6, we present the derived CMRs of the clusters. The implications of our findings are discussed in Section 7. 
OBSERVATIONS AND DATA REDUCTION The spectroscopic GC targets are selected from the photometric data on the M87 GC candidates obtained with Suprime-Cam on the 8.2 m Subaru telescope (Tamura et al. 2006a;Tamura et al. 2006b). In Figure 1, the V versus V − I color−magnitude diagram (CMD; upper panel) and the color histogram (lower panel) show that the selected clusters cover the typical color range of GCs (0.8 < V − I <1.4) and exhibit a clear color bimodality. Figure 2 shows the locations of the GCs as blue (V − I < 1.1) and red (V − I > 1.1) filled circles. In the inner target fields (< 10 ′ ) where GC density is very high, we prioritized the red clus-ters with V = 21.5 − 22.0 mag because, at larger distances, color bimodality is less clear due to the decreasing contribution of red GCs. The magnitude range is chosen in order to avoid the possible effect of the known blue tilt of the M87 GC system at V ≤ 21.5 (Strader et al. 2006). The phenomenon of a lack of bright blue clusters is commonly observed in massive elliptical galaxies (Ostrov et al. 1993;Dirsch et al. 2003;Harris et al. 2006;Mieske et al. 2006;Strader et al. 2006;Harris et al. 2009;Mieske et al. 2010). We performed spectroscopy on a total of 157 M87 GC candidates with the Multi-Object Spectrography (MOS) mode of the Faint Object Camera and Spectrograph (FOCAS) on the Subaru telescope on 2007 March 16 − 19. We used a configuration of a 300B grism and L600 filter resulting in a dispersion of 1.35Å/pixel and a spectral coverage of 3700 − 6000Å. Object spectra were obtained through 0. ′′ 5 slits, rendering an overall spectral resolution of R ∼ 800 or 5.4Å, which is calculated using the spectral dispersion (1.35Å/pixel) and the FWHM of a sky line 4.0 pixel. Seeing was on average 1 ′′ , ranging between 0. ′′ 9 and 1. ′′ 5 over the four nights. FOCAS has a circular 6 ′ field of view and the four masks contained between 28 and 46 objects ( Table 1). The typical exposure time for the M87 fields was 1800 s, which adds up to the total integration time of ∼ 22 hr (Table 1). In addition, we observed a set of calibration stars including two flux standard stars (Feige34 and HZ44), three RV standard stars (HD68874, HD69148, and HD160952), and 26 Lick standard stars with a long-slit mode. Dome flats and arc frames for both long-slit and MOS observations were taken in between exposures for object spectra. We used basic spectroscopic data reduction procedures (bias subtraction, cosmic-ray removal, flatfielding, field distortion correction, flexure effect correction, wavelength calibration, sky subtraction, spectrum extraction, extinction correction, and flux calibration) using the FOCASRED package. We applied 3 pixel on-chip binning in the spatial direction and normalized the spectra by the median flux density at 4900 − 5100 A. Finally, to further explore the CMRs of M87 GCs, we have also cross-identified our spectroscopic targets with the M − T 2 Washington photometry data. We obtained M and T 2 images of M87 GCs with the Mosaic II CCD imager of the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO). The Mosaic II CCD consists of eight 2048 × 4096 array with a pixel scale of 0.27, yielding a field of view of 36 ′ × 36 ′ . Images were preprocessed through IRAF 1 using the MSCRED package (Valdes 1998). The standard data reduction was then performed, and the photometry of the objects was obtained using the DAOPHOT II/ALLSTAR package (Stetson 1987). 
We used standard stars by Landolt (1992) and Stetson (2000) for the photometric calibration. Eight standard fields, each containing 15-25 standard stars with a wide color range, were taken at various airmasses. Reddening correction and astrometry were done using the reddening maps by (Schlegel et al. 1998) and the USNO-B1.0 catalog (Monet et al. 2003), respectively. RADIAL VELOCITY MEASUREMENT Among our selected sample cluster candidates, we now identify bona fide cluster members through the measurement of RVs of the candidate GCs. RVs of the 157 GC candidates are measured by the Fourier cross-correlation technique (Tonry & Davis 1979), using the FXCOR task in the IRAF RV package. We fit the continuum of the spectra using the spline fit with 3σ clipping, then subtract the fit from the original spectra of GC candidates. For the reference spectra required for this method, we observed three velocity standard stars of spectral types G8III during the observing run. Then the continuumsubtracted spectra of GC candidates are cross-correlated with those of RV stars. Each velocity standard star is separately used, yielding three independent values of RVs for GCs. The derived velocity values using three different standard stars are consistent with each other within 1σ, and much smaller than the individual measurement errors. Because the cross-correlation technique is rather sensitive to the noisy part of an input spectrum, which can smear the correlation signal, the choice of wavelength range to be cross-correlated is important. We choose the wavelength region of 4000−5500Å for the task in order to avoid the relatively low signal-to-noiseratio (S/N) region of our object spectra below ∼ 4000 A, and a strong sky emission-line region around 5600Å. This range includes 15 − 18 Lick features. In obtaining the final velocity values for GCs, the cross-correlated range of wavelengths was adjusted on an individual object basis. Given the consistency between the velocity values derived with different velocity standard stars, the final RVs of GCs are measured using the RV standard star HD 68874 based on the quality of the standard star data. The uncertainties of the RVs vary with the mask fields due to the S/N variation of the fields. The uncertainties range mostly from 35 − 73 km s −1 , with the typical error of ∼ 61 km s −1 within 10 ′ fields. The typical velocities of M87 GCs are known to be 200 − 2550 km s −1 based on the previous kinematic studies of the system (C98; Hanes et al. 2001). Because objects with RV < 200 km s −1 are almost definitely foreground Galactic stars, we consider the velocity criterion of 200 < RV < 2550 km s −1 for the GCs associated with M87. Except for obvious contamination (emission-line galaxies and/or AGN) and objects whose spectra are useless due to bad columns, we find that 130 GCs satisfy this criterion. The measured velocities of all cluster candidates are presented in Table 2 along with the corresponding magnitude and color information by Tamura et al. (2006a); Tamura et al. (2006b) in the fourth and fifth columns. The histogram of RVs of the confirmed GCs is shown in Figure 3. The RV distribution of M87 GCs has double peaks, and the clusters are distributed roughly symmetrically around the mean velocity of M87 GCs, 1299 km s −1 . This value is in good agreement with the known RV of M87 (1307 km s −1 , C98; Hanes et al. 2001;Strader et al. 2011, hereafter S11) at a distance of 16.1 Mpc. Some of the GCs have previous measurements by C98 and S11. 
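As a concrete illustration of the cross-correlation step, the following minimal Python sketch resamples an object spectrum and a zero-velocity template onto a common logarithmic wavelength grid, removes a simple continuum, and converts the peak lag of the correlation function to a velocity. It is only a schematic of the Tonry & Davis (1979) technique, not the FXCOR implementation used here (which employs spline continuum fitting with 3σ clipping); the cubic continuum fit, the grid step, and the input arrays are illustrative assumptions.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def radial_velocity(wave, flux, twave, tflux, dlog=1e-4):
    """RV of an object spectrum relative to a zero-velocity template,
    from the peak of their cross-correlation on a log-wavelength grid.
    Wavelength arrays are assumed to be in Angstroms and ascending."""
    # Common log10(wavelength) grid over the overlap region
    lo = max(np.log10(wave.min()), np.log10(twave.min()))
    hi = min(np.log10(wave.max()), np.log10(twave.max()))
    logw = np.arange(lo, hi, dlog)

    # Resample and subtract a simple low-order pseudo-continuum
    f_obj = np.interp(logw, np.log10(wave), flux)
    f_tpl = np.interp(logw, np.log10(twave), tflux)
    f_obj -= np.polyval(np.polyfit(logw, f_obj, 3), logw)
    f_tpl -= np.polyval(np.polyfit(logw, f_tpl, 3), logw)

    # Cross-correlate; the lag of the peak is the shift in log-wavelength
    ccf = np.correlate(f_obj, f_tpl, mode="full")
    lag = np.argmax(ccf) - (len(logw) - 1)

    # Convert the log-wavelength shift to a velocity (positive = redshift)
    return C_KMS * (10.0 ** (lag * dlog) - 1.0)
```

In practice the correlated wavelength range would be restricted to 4000-5500 Å, as described above, before calling such a routine.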
Figure 4 illustrates these GCs in comparisons between each set of measurements. The left panel shows nine GCs in common with the C98 catalog, and the right panel shows 39 GCs in common with the new velocity measurements by S11. The velocities of these common GCs are in general consistent. The dotted line in each panel represents the least-squares fit to the data, and the rms of both fits are found to be comparable to the measurement errors. One exception is F233 (RV = 962 km s −1 ) which has measurements in all three catalogs. The velocity value for this cluster by C98 is 1762 km s −1 . The cause of this discrepancy is unknown. Our value for this cluster is in good agreement with the more recent measurement by S11. The Lick/IDS system of absorption-line indices provides a reference frame in which one can derive metallicities, ages, and abundance ratios for stellar populations. The Lick/IDS system was originally created by Burstein et al. (1984), and has been continuously revised and improved over the years (W94; Trager et al. 1998). The system has also been tried and developed in SSP modeling, with their variations with stellar age, metallicity, and α-element abundance considered (e.g., Bruzual & Charlot 2003;Thomas et al. 2003;Chung et al. 2013). The observed Lick index values are compared to the SSP Lick indices to estimate ages, metallicities, and [α/Fe] ratios. Figure 5 shows three representative spectra of GCs with RV = 0 km s −1 and the marked locations of the absorption features (gray solid lines). The metallicity [Fe/H] of the spectra increases from the top to bottom panel. Following the wavelength-dependent Lick spectral resolution, we smoothed our spectra (red solid lines) to match the original Lick resolutions (W97). All indices and errors were measured using the C ++ program Indexf (Cardiel et al. 1998). In this study, we adopt the passband and pseudo-continuum definitions by W94 and W97. The object spectra and error spectra were provided as inputs for Indexf Lick index measurements. Indexf generates Lick index uncertainties by performing 100 Monte Carlo simulations; indices are measured on the Poisson noise added spectra, and the 1σ standard deviation of the measured index is taken as the Lick index uncertainty. In Figure 6, we calibrate our spectra to the Lick system by comparing our measured indices for the 26 Lick standard stars to the Lick/IDS measurements. The offsets between the two measurements were then applied to the Lick indices of our 130 M87 GCs listed in Table 2. The measured Lick indices and their uncertainties are given in Tables 3 and 4, respectively. In addition to the above Lick indices, we have also combined individual indices to create a more robust composite index, such as (Thomas et al. 2003) and We show the comparisons of our Lick index measurements with those by C98 in Figure 7. Among the 150 GCs obtained by C98, 10 GCs are also on our slit masks and the Lick indices were successfully measured for 9 of these. After excluding one GC with inconsistent RV values, Lick indices of eight common GCs are compared. While Mgb shows good agreement between the two data sets, the discrepancies are apparent for Fe and Balmer indices. We note that the Lick indices by C98 are measured using the older definitions by Burstein et al. (1984), Faber et al. (1985) and Gorgas et al. (1993), while ours adopts the definitions by Trager et al. (1998). This may be partly responsible for some of the larger rms values. 
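For readers who wish to reproduce the index bookkeeping, the sketch below measures an atomic Lick index as an equivalent width against a linear pseudo-continuum defined by the blue and red side bands, estimates its uncertainty with a small Monte Carlo loop over the error spectrum (in the spirit of Indexf), and forms the composite indices. The Mg b bandpasses and the [MgFe]′ and ⟨Fe⟩ combinations shown are the commonly used Trager et al. (1998) and Thomas et al. (2003) definitions and are assumptions here, to be checked against the definitions actually adopted; the code is a schematic, not the Indexf implementation.

```python
import numpy as np

# Nominal Lick Mg b bandpasses in Angstroms (assumed Trager et al. 1998 values)
MGB = dict(blue=(5142.625, 5161.375),
           band=(5160.125, 5192.625),
           red=(5191.375, 5206.375))

def lick_ew(wave, flux, bands):
    """Equivalent width (Angstrom) of an atomic Lick index."""
    def band_mean(w1, w2):
        m = (wave >= w1) & (wave <= w2)
        return wave[m].mean(), flux[m].mean()
    xb, fb = band_mean(*bands["blue"])
    xr, fr = band_mean(*bands["red"])
    m = (wave >= bands["band"][0]) & (wave <= bands["band"][1])
    # Linear pseudo-continuum through the blue and red side-band means
    cont = fb + (fr - fb) * (wave[m] - xb) / (xr - xb)
    return np.sum((1.0 - flux[m] / cont) * np.gradient(wave[m]))

def lick_ew_mc(wave, flux, err, bands, nsim=100, seed=0):
    """Index value and 1-sigma Monte Carlo uncertainty from the error spectrum."""
    rng = np.random.default_rng(seed)
    sims = [lick_ew(wave, flux + rng.normal(0.0, err), bands)
            for _ in range(nsim)]
    return lick_ew(wave, flux, bands), np.std(sims)

def mgfe_prime(mgb, fe5270, fe5335):
    """[MgFe]' composite index; the Thomas et al. (2003) form is assumed."""
    return np.sqrt(mgb * (0.72 * fe5270 + 0.28 * fe5335))

def mean_fe(fe5270, fe5335):
    """<Fe> composite index, assumed to be the mean of Fe5270 and Fe5335."""
    return 0.5 * (fe5270 + fe5335)
```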
AGE, METALLICITY, AND ELEMENTAL ABUNDANCE In this section, we explore the stellar population properties-ages, metallicities, and elemental abundance ratios-for our sample GCs via various methods and check for consistency between the results. In Section 5.1, we first examine the general ranges and trends of GC age, metallicity, and abundances in the index−index diagnostic diagrams by comparing the observations to the SSP model grids. Then, in Section 5.2, we obtain the purely empirical metallicity values, which are independent of models, using the most recent comprehensive catalog of Milky Way globular clusters (MWGCs, Kim et al. 2016). These MWGCs are old and of solar abundance ratios. Next, in Section 5.3, in order to assign ages, metallicities, and elemental abundance ratio values to each individual GC we adopt the method of multi-index fitting to a set of SSP models via χ 2 minimization described by Proctor et al. (2004). The derived values in Section 5.3 are used in the analysis for the rest of the paper. Absorption-line Diagnostic Plots The index−index diagnostics provide a simple, qualitative picture on the distributions of ages, metallicities, and elemental abundance ratios of a GC system. In Figure 8, we present plots with selected indices as a function of the metallicity-sensitive composite index [MgFe] ′ (Equations 1). The combination of indices containing the three Balmer indices Hβ, Hγ A , and Hγ F represent the age−metallicity distributions of the clusters. The superimposed grids are the model predictions from the Yonsei Evolutionary Population Synthesis (YEPS) model (Chung et al. 2013) To estimate the abundance ratios [α/Fe] of the clusters, we choose tracers of α-elements (Mgb) and ironpeaked elements ( Fe , Equations 2) (lower-right panel). α-elements and more than two-thirds of iron-peak ele-ments are produced by Type II and Type Ia supernovae (SNe), respectively. By virtue of the distinct characteristics of each SN type in terms of timing and duration of events, the ratio of [α/Fe] provides clues to the star-formation history of the host. Enhanced [α/Fe] (∼ 0.3) is commonly found for GCs in early-type galaxies (Worthey et al. 1992;Matteucci 1994;Trager et al. 2000;Thomas et al. 2005), indicating rapid star formation. The observational points are denoted with the symbols as before, and the 12 Gyr model GCs with varying [α/Fe] ratios of 0.0 (dotted line), 0.3 (solid line), and 0.6 (dashed line) are overlaid. The majority of M87 GCs lie within the range of [α/Fe] = 0.0−0.3 which is consistent with the previous findings on the extragalactic GC system for early-type galaxies. At the low-metallicity end, it is difficult to draw a conclusion on the distribution of [α/Fe] due to convergence of the model grids as well as somewhat large observational uncertainties. We next examine the abundance pattern of carbon and nitrogen through a set of index−index diagnostics of M87 GCs using CN 2 and G4300 indices ( Figure 9). In the left panel, the CN 2 distribution of M87 GCs are shown as light-gray filled squares, and the clusters with higher S/N are plotted as dark gray squares. Although there is a large scatter of M87 data compared to the MWGCs (open squares; Kim et al. 2016), a general trend with metallicity is obvious. The overplotted model grids represent the varying [α/Fe] ratios of solar (gray) and 0.3 dex (black), respectively. CN 2 exhibits systematically stronger index strengths for both MWGCs and M87 GCs across the full metallicity range with respect to the model predictions. 
This indicates carbon and/or nitrogen overabundances for the two GC systems, the trend of which seems to be common in typical GC systems (Puzia et al. 2005;Cenarro et al. 2007;Beasley et al. 2008;Woodley et al. 2010). In the right panel, G4300 is known to be mainly sensitive to carbon (Tripicco & Bell 1995) and shows a smaller offset between model predictions and the observation. The CN-strong features observed in GCs are ascribed to several possible sources, including pollution of gas by intermediate-mass AGB stars (Cottrell & Da Costa 1981;D'Antona & Ventura 2007), early enrichment achieved by the stellar wind of fast-rotating massive stars (Maeder & Meynet 2006;Prantzos & Charbonnel 2006;Decressinet al. 2007), and difference in surface gravity (Mucciarelli et al. 2015;Lee 2016;Gerber et al. 2018), opacity (MacLean et al. 2018) and stellar mass function (Renzini 2008;Kim & Lee 2018). The nitrogen overabundance may also be attributed to the two generations of the constituent stars of GCs (Carretta et al. 2008;D'Ercole et al. 2008). In this scenario, second-generation stars are formed from the material preenriched by supernovae explosions. Because the observed G4300 indices with higher S/N do not seem to stray far from the models, we can assume that carbon enhancement is not likely. Hence, overenrichment of nitrogen in M87 GCs may be a source of the significant discrepancy between the observation and model. Due to large uncertainties, we cannot conclusively determine the abundance pattern of M87 GCs, but the overall trends of CN 2 and G4300 against the metal-sensitive index [MgFe] ′ indicate an overabundance in carbon and nitrogen. Empirical Metallicities We employ a simple, model-independent approach to assigning metallicities to individual M87 GCs in which MWGCs are used as metallicity calibrators. It is noted in Beasley et al. (2008) and Strader & Brodie (2004) that, although mainly tied to the Zinn & West (1984) metallicity scale, there is no metallicity scale universally agreed upon. In this study, we refer to our empirically derived metallicity as [M/H] which may not strictly reflect either [Fe/H] or overall metallicity ([Z/H] (see Beasley et al. (2008) for a more detailed discussion). We use spectroscopic data on 53 MWGCs (Kim et al. 2016) which are calibrated onto the Lick system and thus directly comparable to our M87 GC data. We adopt the method that closely follows that of Beasley et al. (2008). In order to derive empirical metallicities, we obtain correlations between metallicities provided by the Harris catalog (Harris 1996) and the measured Lick indices of 53 MWGCs (Kim et al. 2016). To better define the relationships that exhibit a hint of nonlinearity, we fit the multiorder polynomial to the data using orthogonal distance regression (ODR;Jefferys et al. 1988). In most cases, the correlations are characterized by second-order polynomial, except for Hβ and CN indices, which are better defined by higher-order correlations. Figure 10 illustrates the resulting fits for the 18 Lick indices and the fitted coefficients are listed in Table 5. Nine indices are chosen for the derivation of empirical metallicities based on both the goodness of the ODR fits on the MWGCs and the index measurement qualities for M87 GCs. These indices include Fe4383, Fe4531, Hβ, Fe5015, Mgb, Mg 2 , Fe5270, Fe5335, and Fe5406. As shown previously in Figure 8, young GCs cannot be readily separated due to the possible effect of hot horizontal-branch stars present in the GCs. 
Taking this into consideration, to examine the properties of the old members of the M87 GC system exclusively, we leave out four clusters that are found to be ≤ 8 Gyr in all three Balmer−[MgFe] ′ planes ( Figure 8). As the spectroscopic ages of M87 GCs and stellar population are in general old ( > 12 Gyr, Cohen et al. (1998);Kuntschner et al. (2010)), we assume the rest are old GCs. For 83 old GCs with sufficient quality (10 < S/N < 40) for an abundance analysis (Figure 11), we obtain empirical metallicities [M/H] using the fitted coefficients for the chosen indices and list the values in Table 3. Multi-index Fits via χ 2 Minimization For a more quantitative analysis on the metallicities, ages, and [α/Fe] of M87 GCs, we perform χ 2 multiindex fitting with the YEPS SSP models (Chung et al. 2013). This technique, introduced and explained in detail by Proctor et al. (2004), simultaneously fits a set of Lick indices to the model grids of [Fe/H], ages, and [α/Fe] until χ 2 is minimized. The method has the advantage in that by using multiple absorption-line indices, it is more robust against individual index uncertainties. We adopt the same method of rejecting the deviant indices and recalculating the χ 2 statistics as described in Proctor et al. (2004). Note that we exclude several indices (e.g., CN 1 , CN 2 , Ca4227 and G4300) from the initial fitting process because of their inconsistency with respect to the models. We determined 13.5 Gyr to be the best-fit model age parameter to represent the old clusters in M87 and thus fixed the model as such in our multi-index fitting process to obtain the metallicities [Fe/H] and abundance ratios [α/Fe]. We perform a χ 2 routine with the same 83 GCs as the previous section. The [Fe/H] values are compared to the empirical metallicities derived in Section 5.2 ( Figure 12) and found to be in good agreement with each other. Because the empirical [M/H] is set by MWGCs, we might expect offsets for GCs with [α/Fe] higher than MWGCs' abundance ratios (see the bottom-right panel of Figure 8). GCs with higher α-element abundance ratios ([α/Fe] > 0.5) are denoted with gray squares and they exhibit larger inconsistencies in the metal-poor region. These GCs are excluded from the least-squares fits to the data. The derived values of ages, metallicities, and abundance ratios for 87 GCs in this section are used in the analysis for the rest of the paper. 6. TESTS FOR NONLINEAR-CMR SCENARIO FOR GC COLOR BIMODALITY Color−Metallicity Relations Our high-quality, homogeneous dataset enables us to test the theoretical prediction of inflected CMRs. This figure is similar to Figure 2 of Paper II where the model CMR was explored for a combined heterogeneous data set of GCs in M87, M49, and the MW, but it now uses the by far more homogeneous dataset of M87 GCs only. We find that the CMRs of M87 GCs in both metallicity−(V − I) and metallicity−(M − T 2) planes display wavy features where slopes of the relations change at around [Fe/H] = −1. The observed significant inflection along the optical CMRs supports the notion that the nonlinear CMRs between intrinsic metallicity and its proxy, colors, holds the key to the color bimodality phenomenon. The effect of this nonlinear conversion from metallicities to colors will be demonstrated in Section 6.3. 
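Returning briefly to the multi-index fit of Section 5.3, the core of the procedure is a grid search; the following sketch shows the χ² minimization over a model grid at fixed age, with the YEPS grid array and the index list left as placeholders and the iterative rejection of deviant indices used by Proctor et al. (2004) omitted. It is a schematic of the approach, not the exact implementation.

```python
import numpy as np

def chi2_fit(obs, err, model_grid, feh_axis, afe_axis):
    """Return the ([Fe/H], [alpha/Fe]) grid point minimizing
    chi^2 = sum_i ((obs_i - model_i) / err_i)^2 at a fixed model age.

    obs, err           : (n_index,) measured Lick indices and 1-sigma errors
    model_grid         : (n_feh, n_afe, n_index) SSP index predictions
    feh_axis, afe_axis : grid coordinates of the model predictions
    """
    chi2 = np.sum(((model_grid - obs) / err) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return feh_axis[i], afe_axis[j], chi2[i, j]
```

In the actual analysis the grid would hold the YEPS predictions at the adopted 13.5 Gyr age, and the rejection-and-refit loop would wrap this call for each cluster.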
Metallicity Distribution Functions Here we combine our metallicities derived with the SSP model-fitting method with C98 metallicities in order to have a better understanding of the intrinsic metallicity distribution function (MDF) shape of the M87 GC system. The left panel of Figure 14 illustrates the transformation of C98 metallicities to our M87 metallicities. The C98 catalog comprises bright GCs, which occupy regions similar to as our sample within 10 ′ of the galactic radius ( Figure 2). The solid line represents the orthogonal least-squares fit, and the derived transformation is given by In the right panel, the distribution of the M87 GC metallicities from this study is shown as a red hatched histogram. The blue hatched histogram displays the resulting metallicity distribution of C98 GCs with high-S/N C98 shifted to the same scale as that of ours. The gray filled histogram represents the combined metallicities of Subaru and Keck spectroscopy. The combined list of M87 GCs contains 98 higher-S/N GCs from C98 and 87 GCs from our study. For the GCs present in both catalogs, we take the values from our study. This is one of the largest spectroscopic data on M87 GCs and of quality that allows reliable abundance study on the system. A Gaussian mixture model (GMM) test on the observed MDFs (Table 7) gives a high probability of the distribution being a unimodal Gaussian one (P (χ 2 ) = 0.246). Color Distribution Functions In Figure 15 we simulate the M87 GC colors and compare them with the observations. The top row presents the observations and the best-fit model predictions (13.5 Gyr and [α/Fe] = 0.3) for the CMRs of M87 GCs. The models trace the features present in the observations well. In order to demonstrate the nonlinear-CMR hypothesis at work, we carry out the conversion process from metallicities to colors via our theoretical metallicity−color relations. For an MDF, we make a simple assumption of a single Gaussian distribution of 10 6 model GCs (top row, along the y-axes) to avoid small number statistics. The assumed mean [Fe/H] and σ([Fe/H]) of −0.85 and 0.5 are similar to those of observed distribution in Figure 13. In the second row, we plot the color−magnitude diagrams of 5000 randomly selected model GCs for the V − I and M − T 2 colors as an additional aid to visualize the simulated color distributions. We take observational uncertainties into account in the simulations. The inflection in our theoretical [Fe/H]−color relations has the effect of projecting the equidistant metallicity intervals onto wider color intervals, causing scarcity in the color domain near the quasi-inflection point on each [Fe/H]−color relation. As a result, the division into two groups of model colors are visible, which agrees with the observations ( Figure 1). The resultant color histograms (third row) show bimodal distributions with stronger blue peaks, and with the paucity at quasi-inflection points translated to the dip positions for both colors. Therefore, the theoretical prediction and the observed distributions of GC colors (bottom row) are consistent with each other. This result is also in good agreement with that of Paper II on the general behaviors of M87 GCs. In order to quantify the visual impression of the distributions, we perform a GMM test on the simulated and observed color histograms. Table 7 presents the resulting GMM outputs. Bimodality is preferred for color distributions of the observed and simulated GCs (P (χ 2 ) = 0.001). 
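The projection experiment described above can be reproduced schematically in a few lines: draw a single Gaussian MDF, push it through an inflected metallicity-to-color relation, add photometric scatter, and compare one- and two-component Gaussian mixtures. The tanh-shaped relation and its coefficients below are illustrative stand-ins for the YEPS model CMR, and the BIC comparison is only a proxy for the GMM test statistics quoted in Table 7.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Unimodal MDF similar to the observed one (mean -0.85, sigma 0.5 dex)
feh = rng.normal(-0.85, 0.5, 100_000)

def cmr(feh):
    """Illustrative inflected [Fe/H] -> (V-I) relation: a steep segment
    around a quasi-inflection point near [Fe/H] ~ -1 embedded in a
    shallower overall trend (stand-in for the model CMR)."""
    return 1.05 + 0.08 * (feh + 0.85) + 0.20 * np.tanh(3.0 * (feh + 1.0))

vi = cmr(feh) + rng.normal(0.0, 0.03, feh.size)   # add photometric scatter

# Compare one- and two-component Gaussian mixtures in color space
x = vi.reshape(-1, 1)
bic1 = GaussianMixture(1, random_state=0).fit(x).bic(x)
bic2 = GaussianMixture(2, random_state=0).fit(x).bic(x)
print(f"BIC(1 comp) = {bic1:.0f},  BIC(2 comp) = {bic2:.0f}")
# A lower two-component BIC indicates a bimodal color distribution even
# though the input metallicity distribution is a single Gaussian.
```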
The fraction of GCs assigned to the blue group is slightly higher in simulations. We add vertical dotted lines in Figure 15 at the blue and red peaks for the observed histograms (fourth row) to better guide the eye for a comparison with the simulation (third row). We attribute the slight offset in red peak locations for M − T 2 between the observed and simulated color histograms to several contributing factors, such as different choices of model ingredients involved. Hence, we put more weight on the consistent histogram morphology between the observations and simulations. SUMMARY AND DISCUSSION We have examined the fundamental stellar population properties of M87 GCs and reported spectroscopic metallicity estimates using the homogeneous spectro-scopic data. We have adopted empirical and theoretical methods in estimating the individual GC metallicity values and found them consistent with each other (Figure 12). The empirical metallicity was obtained using the most recent comprehensive catalog of MWGCs (Kim et al. 2016). The theoretical metallicity was based on multi-index fitting to a set of SSP models via χ 2 minimization described by Proctor et al. (2004). The nonlinear-CMR scenario for GC color bimodality has been tested in the previous series of studies (Papers I ∼ IX) using multiple heterogeneous data sets of extragalactic GC systems, both photometric and spectroscopic, and found to be successful in explaining the bimodality phenomenon. Here we have used homogeneous spectroscopic data of the M87 GC system to continue to examine and test the nonlinear-CMR scenario. We have shown that M87 GCs exhibit nonlinearity in CMRs ( Figure 13). We have also shown that the MDF of the M87 GC system is characterized by a broad, unimodal distribution (Figure 14). In order to strengthen the statistical significance of the MDF, we have combined the metallicity measurements of our study and those of C98 with high S/N, making one of the largest catalogs of spectroscopically derived metallicity on M87 GCs. This combined distribution of metallicities derived with the spectroscopic data is consistent with the GC MDFs of several elliptical galaxies inferred from various colors through nonlinear CMRs (Papers II, III, and IV). Finally, the observed bimodal color distributions are reproduced using our nonlinear metallicity-to-color transformation scheme (Figure 15), bringing our results in good agreement with the findings of the previous studies. Recently, Villaume et al. (2019) reported a new metallicity measurement on M87 GCs. Via full-spectrum SSP model fitting, they obtained a bimodal MDF with systematically higher metallicities compared to the previous findings on M87 (C98, Peng et al. (2006)). Their Figure 10 compares the M87 CMR with the literature (Peng et al. (2006)) and the MWGCs, and they found that the shapes of CMRs differ significantly. Villaume et al. suggested that the difference in the metal-poor region might be due to the age−metallicity degeneracy or the effect of blue horizontal-branch stars. Given that many extragalactic GCs are found to be old (e.g., C98; Forbes et al. 2001;Puzia et al. 2005) and that the YEPS models employ a realistic treatment of horizontalbranch stars in stellar population modeling, we do not consider them major causes for the discrepancy. We suspect the distinct CMRs might be the result of a different approach to deriving metallicities. Fahrion et al. 
(2020) compare GC metallicities of galaxies in the Fornax cluster obtained with different techniques and various sets of constraints in their appendix. For example, the CMRs obtained by full-spectrum fitting and by using line-strength indices clearly exhibit offsets, with GCs having different metallicities at fixed colors (see their Figure A.1). In the conventional scenarios of GC formation, blue GCs come from the accretion of low-mass neighboring galaxies. However, whether the accretion of metal-poor GCs is solely responsible for blue GCs is still an open question. A piece of evidence against the accretion scenario is the observed sharp blue peaks of giant elliptical galaxies. The mean colors of GC systems correlate tightly with the host galaxies' mass (Larsen et al. 2001; Strader & Brodie 2004; Peng et al. 2006). Such a correlation suggests that the accreted galaxies would have had to span a narrow mass range to form the sharp blue peak, which is unlikely. Further counterevidence is the significant fraction of blue GCs in galaxies in lower-density and isolated regions (Peng et al. 2006; Cho et al. 2012), where galaxies have few accompanying galaxies and thus it is difficult to acquire sufficient metal-poor GCs via accretion. Moreover, in our study, a unimodal shape of the GC MDF is obtained in the inner main body of M87 (< 6.3 Reff), where accreted GCs should be less populated than in the remote halo (see Paper III for further discussion). Current hierarchical models of galaxy formation predict that the emergence of a massive galaxy involved a large number of protogalaxies, which may leave little room for the existence of merely two GC subpopulations in galaxies such as M87. In these models, the chemical evolution in the early stage of galaxy formation took place in a rather quasi-monolithic manner, which naturally produces a broad, unimodal MDF of GCs. We conclude from our results in this study, together with the evidence offered from our series of observational and theoretical studies (Papers I ∼ IX), that in the case of massive giant galaxies like M87, GCs are formed on a relatively short timescale via early, vigorous star formation (e.g., Paper III, Chiosi et al. 2014).

Figure 12 caption (fragment): comparison of metallicities derived empirically (y-axis; following Beasley et al. 2008) and by using SSP models (x-axis; Proctor et al. 2004). The solid and dotted lines represent the orthogonal least-squares fits to the black squares and the one-to-one relations, respectively. Gray squares are the GCs with higher α-element abundance ratios ([α/Fe] > 0.5).

Table notes: The full table is available in the online version. (a) The resulting significance of the GMM test expressed as a P-value. (b) The number ratios between the blue and red GC groups. (c) The means and standard deviations of the blue and red GC groups.
Rapid high performance liquid chromatographic method for determination of clarithromycin in human plasma using amperometric detection: application in pharmacokinetic and bioequivalence studies. A rapid, sensitive and reproducible HPLC method using an amperometric detector was developed and validated for the analysis of clarithromycin in human plasma. The separation was achieved on a monolithic silica column (MZ-C8, 125 × 4.0 mm) using acetonitrile-methanol-potassium dihydrogen phosphate buffer (40:6:54, v/v), with pH of 7.5, as the mobile phase at a flow rate of 1.5 mL/min. The assay enables the measurement of clarithromycin for therapeutic drug monitoring with a minimum quantification limit of 20 ng/mL. The method involves a simple protein precipitation procedure and analytical recovery was complete. The calibration curve was linear over the concentration range of 0.1-6 μg/mL. The coefficients of variation for inter-day and intra-day assay were found to be less than 6%. This method was used in bioequivalency and pharmacokinetic studies of the test (generic) product, 2 × 500 mg clarithromycin tablets, with respect to the reference product.

Introduction
Clarithromycin (Figure 1) is a semi-synthetic macrolide antibiotic derived from erythromycin with similar actions and uses. Because of its good antimicrobial activity against a wide range of Gram-positive and Gram-negative organisms, clarithromycin is used to treat respiratory tract infections and skin and soft tissue diseases. Clarithromycin is rapidly absorbed from the gastrointestinal tract. Oral bioavailability of clarithromycin is about 55% and food has no effect on clarithromycin absorption (1, 2). Several analytical methods have been developed for determination of clarithromycin in human plasma. Some of those described HPLC methods utilizing positive-ion electrospray ionization coupled to tandem mass spectrometry (LC-MS/MS) detection (3-6). These methods are very sensitive but they are not available for most laboratories because of their specialty requirements and financial reasons. In addition, some of the described HPLC methods utilized ultraviolet (UV) and electrochemical detectors (ECD) (7-12). Their limits of quantification were not sensitive enough for pharmacokinetic studies, and their sample preparations required derivatization or extraction steps and were time consuming. The present study describes a rapid and validated HPLC method using a monolithic column with amperometric detection, which enables the determination of clarithromycin with good accuracy at low drug concentrations in plasma. Separation was performed on a reversed-phase column. The sample preparation only involves a simple one-step protein precipitation and no evaporation step is required. We also demonstrate the applicability of this method for bioequivalency and pharmacokinetic studies in humans.

Chemicals
Clarithromycin and roxithromycin were supplied by Dorsa Daru Pharmaceutical Company (Dorsa Daru, Tehran, Iran). Clarithromycin is available as an oral tablet containing 500 mg of clarithromycin and other inactive ingredients. HPLC-grade acetonitrile and methanol and all other chemicals were obtained from Merck (Darmstadt, Germany). Water was obtained by double distillation and purified additionally with a Milli-Q system.

Instruments and chromatographic conditions
The chromatographic apparatus consisted of a model Wellchrom K-1001 pump, a model Rheodyne 7125 injector and an amperometric detector (+1.250 V, sensitivity 100 nA) connected to a model Eurochrom 2000 integrator, all from Knauer (Berlin, Germany). The separation was performed on a Chromolith Performance (MZ-C8, 125 × 4.0 mm) column from Merck (Darmstadt, Germany). The mobile phase consisted of acetonitrile-methanol-potassium dihydrogen phosphate buffer (40:6:54, v/v), with pH of 7.5, at a flow rate of 1.5 mL/min. The mobile phase was prepared daily and degassed by ultrasonication before use. The mobile phase was not allowed to recirculate during the analysis.

Sample preparation
To 500 µL of plasma in a glass-stoppered 15 mL centrifuge tube were added 50 µL of roxithromycin as internal standard (20 µg/mL) and 500 µL of acetonitrile. After mixing (30 s), the mixture was centrifuged for 10 min at 4000 rpm. Then 30 µL of the supernatant was injected into the liquid chromatograph.

Biological samples
Twelve healthy male volunteers were included in this study. The study protocol was approved by the Ethics Committee of Shahid Beheshti University of Medical Sciences and written informed consent was obtained from the volunteers. Clarithromycin was administered in a double dose of 500 mg to the volunteers after overnight fasting. Plasma samples were collected at several intervals after dosing and then frozen immediately at -20 °C until assayed.

Stability
The stability of clarithromycin was assessed for spiked plasma samples stored at -20 °C for up to two months and at ambient temperature for at least 24 h. The stability of stock solutions stored at -20 °C was determined for up to one month by injecting appropriate dilutions of the stocks in distilled water on days 1, 15 and 30 and comparing their peak areas with a fresh stock prepared on the day of analysis. Samples were considered stable if the assay values were within the acceptable limits of accuracy and precision.

Plasma calibration curves and quantitation
Blank plasma was prepared from heparinized whole-blood samples collected from healthy volunteers and stored at -20 °C. After thawing, 50 µL of clarithromycin working standards were added to yield final concentrations of 0.1, 1, 2, 3, 4, 5 and 6 µg/mL. 50 µL of internal standard solution was added to each of these samples. The samples were then prepared for analysis as described above. Calibration curves were constructed by plotting the peak area ratio (y) of clarithromycin to the internal standard versus clarithromycin concentration (x). A linear regression was used for quantitation.

Precision and accuracy
The precision and accuracy of the method were examined by adding known amounts of clarithromycin to pooled plasma (quality control samples). For intra-day precision and accuracy, five replicate quality control samples at each concentration were assayed on the same day. The inter-day precision and accuracy were evaluated on three different days.

Limit of Quantification (LOQ) and recovery
For the concentration to be accepted as the LOQ, the percent deviation from the nominal concentration (accuracy) and the relative standard deviation must be within ± 10% and less than 10%, respectively, considering at least five times the response compared to the blank response. The analytical recovery for plasma at three different concentrations of clarithromycin (1.4, 2.8 and 4.2 µg/mL) was determined. Known amounts of clarithromycin were added to drug-free plasma and the internal standard was then added.
The relative recovery of clarithromycin was calculated by comparing the peak areas for extracted clarithromycin from spiked plasma and a standard solution of clarithromycin in water/methanol containing internal standard with the same initial concentration (six samples for each concentration level).

Pharmacokinetic analysis
Clarithromycin pharmacokinetic parameters were determined by noncompartmental methods. The elimination rate constant (K) was estimated by least-squares regression of the plasma concentration-time data points lying in the terminal log-linear region of the curves. Half-life (T1/2) was calculated as 0.693 divided by K. The area under the plasma concentration-time curve from time zero to the last measurable concentration at time t (AUC0-t) was calculated using the trapezoidal rule. The area was extrapolated to infinity (AUC0-∞) by addition of Ct/K to AUC0-t, where Ct is the last detectable drug concentration. Peak plasma concentration (Cmax) and time to peak concentration (Tmax) were derived from the individual subject concentration-time curves.

Results and Discussion
Under the chromatographic conditions described, the clarithromycin and internal standard peaks were well resolved. Endogenous plasma components did not give any interfering peaks. Separation was performed on a reversed-phase column, which allows easy optimization of the chromatographic conditions to obtain the desired resolution in a short time. Owing to the use of the MZ-C8 column, much faster separations are possible compared to traditional packed chromatographic columns. Accordingly, the chromatographic elution step is completed in a short time (less than 6 min) with high resolution. Figure 2 shows typical chromatograms of blank plasma in comparison to spiked samples analyzed for a pharmacokinetic study. The average retention times of clarithromycin and roxithromycin were 4.8 and 5.7 min, respectively. The peaks were of good shape and completely resolved from one another at therapeutic concentrations of clarithromycin. In our method, sample preparation involves protein precipitation and no evaporation step is required. Protein precipitation became more efficient with increasing volumes of acetonitrile. However, greater volumes of acetonitrile diluted the sample, thereby affecting the sensitivity of the assay. To improve the sensitivity, a 1:1 ratio of acetonitrile to plasma was chosen for sample preparation. Under this condition, the majority of the protein was precipitated, and clarithromycin and the internal standard were free of interference from endogenous components in plasma. The calibration curve for the determination of clarithromycin in plasma was linear over the range 0.1-6 µg/mL. The linearity of this method was statistically confirmed. For each calibration curve, the intercept was not statistically different from zero. The correlation coefficients (r) for the calibration curves were equal to or better than 0.999. The slopes of the plasma standard curves in the nine different preparations were practically the same (the CVs were less than 2% for the slopes of the plasma standard curves). For each point of the calibration standards, the concentrations were recalculated from the equation of the linear regression curves. The mean linear regression equation of the calibration curve for the analyte was y = 0.669x + 0.0014, where y was the peak area ratio of the analyte to the internal standard and x was the concentration of the analyte. The analytical recovery for plasma at three different concentrations of clarithromycin was determined.
Known amounts of clarithromycin were added to drug-free plasma in concentrations ranging from 1.4 to 4.2 µg/mL. The internal standard was added and the absolute recovery of clarithromycin was calculated by comparing the peak areas for extracted clarithromycin from spiked plasma and a standard solution of clarithromycin in methanol containing internal standard with the same initial concentration. The average recovery was 95.5 ± 0.58% (n = 6) and its dependence on concentration was negligible. The recovery of the internal standard, roxithromycin, was almost complete (98.9%) at the concentration used in the assay (20 µg/mL). Using the amperometric detection method, the limit of quantification (LOQ), as previously defined, was 20 ng/mL for clarithromycin. This is sensitive enough for drug monitoring and other purposes such as pharmacokinetic studies. We assessed the precision of the method by repeated analysis of plasma specimens containing known concentrations of clarithromycin. As shown in Table 1, the coefficients of variation were less than 2.2%, which is acceptable for the routine measurement of clarithromycin. Stability was determined for spiked plasma samples under the conditions described previously. The results showed that the samples were stable under the mentioned conditions. The aim of our study was to develop a rapid and sensitive method for the routine analysis of biological samples in pharmacokinetic and bioequivalence studies of clarithromycin. This method is well suited for routine application in the clinical laboratory because of the speed of analysis and the simple extraction procedure. Over 300 plasma samples were analyzed by this method without any significant loss of resolution. No change in column efficiency or back pressure was observed over the entire study time, thus proving its suitability. In this study, plasma concentrations were determined in twelve healthy volunteers, who each received 1000 mg of clarithromycin. The mean value of Cmax was 4.2 µg/mL. The mean value of AUC was 39.6 µg·h/mL. Figure 3 shows the mean plasma concentration-time profile of clarithromycin. The derived pharmacokinetic parameters of the 12 healthy volunteers are summarized in Table 2. These pharmacokinetic parameters are in good agreement with those found previously (1, 2).

Conclusion
A rapid and simple HPLC method has been described for the analysis of clarithromycin in plasma. Using the MZ-C8 column, the chromatographic elution step is completed in a short time (< 6 min) with high resolution. In addition, the use of a simple one-step sample preparation instead of more complex extraction procedures makes this method suitable for pharmacokinetic and bioequivalence studies of clarithromycin in humans.
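The noncompartmental calculations described in the pharmacokinetic analysis section above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the sampling times and concentrations below are hypothetical, and the terminal log-linear phase is simply taken as the last few sampling points.

```python
import numpy as np

def noncompartmental_pk(t, c, n_terminal=3):
    """Noncompartmental PK parameters from a concentration-time profile.

    t : sampling times (h); c : plasma concentrations (ug/mL).
    K is the terminal elimination rate constant from a log-linear fit,
    AUC0-t uses the trapezoidal rule, AUC0-inf adds Ct/K.
    """
    t, c = np.asarray(t, float), np.asarray(c, float)
    cmax, tmax = c.max(), t[np.argmax(c)]
    # log-linear regression on the terminal points; the slope is -K
    slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
    K = -slope
    t_half = 0.693 / K
    auc_t = float(np.sum((c[1:] + c[:-1]) * np.diff(t) / 2.0))  # trapezoidal rule
    auc_inf = auc_t + c[-1] / K                                 # extrapolation to infinity
    return dict(Cmax=cmax, Tmax=tmax, K=K, t_half=t_half,
                AUC0_t=auc_t, AUC0_inf=auc_inf)

# Hypothetical profile after a 1000 mg oral dose (illustrative numbers only)
times = [0.5, 1, 2, 3, 4, 6, 8, 12, 24]
conc  = [0.8, 2.1, 3.9, 4.2, 3.8, 2.9, 2.1, 1.1, 0.25]
print(noncompartmental_pk(times, conc))
```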
2,702
2013-03-11T00:00:00.000
[ "Biology", "Chemistry" ]
Valuation: From the Discounted Cash Flows (DCF) Approach to the Real Options Approach (ROA)
There exists a gulf between market prices and traditional valuation approaches such as Discounted Cash Flows (DCF), a fact that neither academics nor practitioners could continue ignoring. Recently, a complementary approach has taken a foothold in the valuation world. Building on the DCF approach yet going further in the sense of incorporating flexibility in management investment decisions, and taking advantage of the advances in option pricing theory, the real options approach (ROA) has become the alternative to capital budgeting and, lately, to corporate valuation. Empirical evidence shows that ROA explains actual prices better than DCF approaches, and nowadays there is no question that, from a theoretical point of view, ROA is a much more appealing concept than passive NPV. However, its acceptance by practitioners has been very slow due to the complexity of real options pricing. It took more than twenty years for Discounted Cash Flows approaches (DCF), mainly the NPV approach, to replace Years to Payback and other ancient methods for capital budgeting and corporate valuation purposes. Yet this long wait proved to be worthwhile since, after positioning itself as the most widely used approach to valuation, its reign seemed endless. Still in the nineties it continued to be the main approach, despite a gulf between market prices and DCF values, a fact that neither academics nor practitioners could continue ignoring. Although it is true that part of that difference was caused by a market bubble formed around Internet stocks, which pushed up prices of growth stocks mainly, it is also true that once the bubble burst in 2000, market valuations remained higher than their initial level. A study by Ernst and Young² estimates that only 25% of market capitalization at the time of the study was based on cash flow anticipated in the next five years, in a sample that included both growth and value stocks. Certainly, new approaches to valuation needed to be put in place. During the Internet madness of the nineties, very peculiar approaches to valuation were suggested -and implemented!- in a wide range that included computing multiples of Market Price to Revenue up to multiples of the number of visitors to the Internet firm's web page. For a while, the market forgot about the essentials of a firm's value, such as its ability to generate future cash flows to shareholders -the basis for DCF- or to create value in general. Simultaneously, a much more appealing approach -from a theoretical point of view- was taking a foothold in the valuation world. Building on the DCF approach yet going further in the sense of incorporating flexibility in management investment decisions and taking advantage of the advances in option pricing theory, the real options approach (ROA) has become the alternative approach to capital budgeting and, lately, to corporate valuation. It is precisely the fact that ROA is based on a properly estimated NPV that explains the perception of practitioners that, more than a revolutionary solution, it is "an evolutionary process to improve the valuation of investments and the allocation of capital, thereby increasing shareholders value" (Triantis and Borison, 2001, p.10). But before discussing how ROA builds on and complements the DCF approach, particularly the passive NPV, I present an overview of the different traditional methods which have been used for capital budgeting and corporate valuation.
Examining these traditional methods will show the reasons why we need to evolve to a more comprehensive approach to valuation, which, I will argue, is ROA.
² Campbell, J. and C. Knoess. "How to Build a FutureWealth Company, Ernst and Young's Point of View on Value on the New Economy". http://www.ey.com/GLOBAL. Cited in Boer (2002).
Before the long reign of the Net Present Value (NPV) approach, managers based their decisions about undertaking projects on simple rules such as the Payback rule. In order to apply it, the Payback period of a project is computed by counting the number of years it takes before the cumulative forecasted cash flow equals the initial investment. If this period does not exceed the cutoff date considered appropriate for the project, according to management, the project is accepted. This rule has been criticized because it ignores all cash flows after the cutoff date and gives equal weight to all cash flows before this date (Brealey and Myers, 2000). In other words, this rule not only ignores the time value of money -although some managers used the discounted-payback version of it- but also leads to the rejection of projects with low cash flows in the beginning, even though future cash flows would fully compensate for the initial investment. Compared to other methods, at least the Payback rule is based on cash flows. Some other commonly used methods for corporate valuation are based on the firm's balance sheet information, i.e. the book value, the liquidation value, and the replacement cost. Since this information obeys accounting rules, in most cases it is not a good estimate of the firm's value, as discussed next. The book value is the net worth of a company as shown in the balance sheet. However, this value depends on the selection of a depreciation method for fixed assets and on the fact that the price of these assets is the historical price, not the current one. The Book Value to Market Value ratio often indicates that accountants are missing something in their books, since the market consistently values companies well above their book values. It is true that assets are registered at their historical prices, but that argument can tell only part of the story. Today, there is no question that Book Value is an inadequate method to value firms, and its usefulness is confined to providing a floor for the true value. Another measure that sets a floor to the value of the firm is the liquidation value. It "represents the amount of money that could be realized by breaking up the firm, selling its assets, repaying its debt, and distributing the remainder to the shareholders" (Bodie, Kane, Marcus, 2002). If the previous measures establish a floor to the firm's value, the replacement cost provides a cap for it, because when the market value of a firm is above the replacement cost, it would be better for investors to replicate the firm. In consequence, the ratio of market price to replacement cost -known as Tobin's q- should tend to one. However, there is much evidence that this ratio is frequently above one, e.g. Lindenberg and Ross (1981). These authors propose market share, concentration of the market, and barriers to entry as possible factors explaining why some firms exhibit a q greater than one.
EVA: Searching for the Economic Value
A somewhat different measure of the value of a project or a firm is the Economic Value. In the nineties, Stern Stewart reintroduced the Economic Value Added (EVA) concept, which is computed as the net operating profit after taxes (NOPAT) minus an appropriate charge for the opportunity cost of all capital invested in the enterprise. Economic Value is, then, the sum of the EVAs added by the enterprise in each successive year. EVA is an estimate of the amount by which earnings exceed or fall short of the required minimum rate of return which shareholders and lenders might earn by investing in alternative securities of comparable risk (Boer, 2002). Put in different words, it is the difference between the return on assets (ROA) and the cost of capital, multiplied by the capital invested by the firm (Bodie, Kane, Marcus, 2002). EVA is an important and popular measure of a firm's performance, but its application remains more at this level than as a valuation method.

The Discounted Cash Flows Approach: passive NPV, IRR, and Risk-Weighted Cost of Capital
This valuation approach applies to both project and corporate valuation. It is based on the free cash flows the project or entity will generate in the future, which are defined as the net cash flows to shareholders after future investments. This approach improves over the previous methods in the sense that it measures the ability to generate cash and, as such, it gives a better estimate of the shareholder's wealth created by the project or entity. After computing the free cash flows, they are discounted at an appropriate rate equal to the opportunity cost of capital, and the initial investment required by the project is deducted to obtain the Net Present Value (NPV) of the project:

NPV = Σ_{t=1}^{T} E(FCFt) / (1 + k)^t − I      (1.1)

where: E(FCFt) is the expected free cash flow of the project at time t; k is the opportunity cost of capital; I is the required initial investment. The "Net Present Value (NPV) is the single most widely used tool for large investments made by corporations" currently (Copeland, Antikarov, 2001, p.56). These authors cite evidence showing that the use of this approach in corporate America increased from 19% to 86% of the firms surveyed in a period of less than twenty years -1959 to 1978-. There are two equivalent ways of computing the passive NPV. The first one is discounting the free cash flows to equity at the cost of equity, where these free cash flows are equal to:

FCF to equity = Net Income + Depreciation - Capital Expenditures

A second way to calculate the passive NPV is discounting the free cash flows from operations at the weighted average cost of capital. These free cash flows are the after-tax cash flows the entity would have if it had no debt and may be computed in the following manner:

Earnings Before Interest and Taxes (EBIT) - Taxes on EBIT + Depreciation - Capital Expenditures = FCF from Operations

The corresponding discount rates, that is, the cost of equity and the weighted average cost of capital, will be explained shortly. Once the passive NPV is computed, the rule says that managers should undertake the project if it has a positive passive NPV. A related approach, also based on discounted cash flows, is the internal rate of return (IRR) of the project, also known as the discounted cash flows (DCF) rate of return. This is the discount rate which makes the passive NPV equal to zero. A project would be accepted if its IRR is greater than the opportunity cost of capital.
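A minimal sketch of the passive NPV and IRR rules just described; the cash flows, discount rate, and helper names below are purely illustrative assumptions, not taken from the text.

```python
def npv(rate, cashflows, initial_investment):
    """Passive NPV: discounted expected free cash flows minus the initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1)) - initial_investment

def irr(cashflows, initial_investment, lo=-0.99, hi=10.0, tol=1e-8):
    """IRR by bisection: the discount rate that makes the passive NPV zero."""
    f = lambda r: npv(r, cashflows, initial_investment)
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the lower half
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Illustrative project: invest 100 today, expect free cash flows of 30 for five years
fcf, I, k = [30, 30, 30, 30, 30], 100.0, 0.12
print(f"NPV at k=12%: {npv(k, fcf, I):.2f}")   # accept if > 0
print(f"IRR: {irr(fcf, I):.2%}")               # accept if IRR > k
```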
Although this approach provides a rule to accept or reject investments, it does not answer the question of what their value is, as the NPV approach does. In addition to being a method based on cash flows, the NPV approach also improves over the previous methods by accounting for both the time value of money and risk aversion, combined through the risk-adjusted discount rate. The risk-adjusted discount rate, k, is the sum of the risk-free interest rate, which accounts for the time value of money, and a risk premium, Ψ, used to compensate for the risk associated with the project (Trigeorgis, 1986). The firm's average opportunity cost of capital may be used to discount the expected project's free cash flows if the riskiness of the project is similar to that of the firm; otherwise, the marginal opportunity cost of capital for the particular project should be used. Frequently a constant discount rate is used, but the riskiness of the cash flows from the project may vary through time, suggesting that a risk-adjusted discount rate appropriate for each period should be used instead of a constant one. The question remains of how to compute this risk-adjusted discount rate. If the passive NPV is computed by discounting the free cash flows to equity, the cost of equity shall be estimated. For this purpose, the Capital Asset Pricing Model (CAPM) is the most common approach:

E(r_i) = r_f + β_i [E(r_m) − r_f],  with β_i = Cov(r_i, r_m) / σ²_m      (2)

where: r_i is the return on asset i; r_f is the risk-free rate; β_i is the beta of asset i; r_m is the return on the market portfolio; σ²_m is the variance of the market portfolio's returns. The CAPM considers an investor who may diversify risk through a portfolio comprising all the securities in the market. This investor requires a certain risk premium from each security according to its marginal contribution to the riskiness of the market portfolio. However, the total risk of each asset involves two different components: a systematic risk, which affects all the securities in the market and is non-diversifiable since it depends on the correlation between the asset's return and the market portfolio's return as measured by its beta; the other component is the non-systematic risk, which depends on characteristics or factors that affect each individual security and, therefore, can be diversified away. In consequence, the investor should demand a risk premium from the asset to compensate for its systematic risk only. If its beta remains constant through the entire life of the project, a constant k may be used to discount its cash flows. However, frequently that is not the case since, as Myers and Turnbull (1977) point out, a project's beta depends on its life, the growth rate of its expected cash flows, the pattern of the expected cash flows over time, the characteristics of any individual underlying components of these cash flows, the procedure by which investors revise their expectations of cash flows, and the relationship between forecast errors for the cash flows and those for the market return. The security whose beta is going to be used as the beta for the project should match the project in all these aspects. Otherwise, when the project's beta varies through time, a generalized risk-adjusted discount rate form of NPV with a varying k should be computed, as opposed to the single risk-adjusted discount rate NPV from equation (1.1):

NPV = Σ_{t=1}^{T} E(FCFt) / Π_{s=1}^{t} (1 + k_s) − I      (3)

The other way to compute the passive NPV is discounting the operating free cash flows instead of the free cash flows to equity.
In this case, the weighted average cost of capital (WACC) should be used as the risk-adjusted discount rate. The WACC is the weighted average of the after-tax marginal costs of capital, which includes the costs of both sources of capital: equity and debt. The previous analysis regarding the use of a constant or a varying risk-adjusted discount rate applies here as well. Finally, a last consideration shall be made when applying the NPV approach: "if NPV is calculated using an appropriate risk-adjusted discount rate, any further adjustment for risk is double counting" (Myers, 1976); i.e. adjusting both the free cash flows and the discount rate for risk would be a mistake. This leads us to different approaches to valuation which handle uncertainty by focusing mainly on the risk associated with future cash flows instead of adjusting the discount rate for risk. These approaches are Sensitivity Analysis and Monte Carlo Simulation. Free cash flows are usually the result of forecasting a number of variables such as unit product price, quantity of sales, growth, salvage value, etc. A sensitivity analysis tries to measure the change in the NPV as a result of a change in one of these variables, keeping the others constant. The main objective is to identify the variables to which the NPV is more sensitive and, hence, which ones are more determinant of the investment's risk. Once these variables are identified, management should be especially careful forecasting them. Although sensitivity analysis provides more information for management to make decisions, it could eventually mislead it, since it assumes that all variables are independent and not serially correlated. In the first case, when some variables are not independent, changing one variable while keeping the correlated variables constant will lead to a wrong estimate. The same happens when variables are serially correlated, since a forecast error in one period will affect its value in future periods, generating a wrong estimate of the NPV in the end. One way to get a better estimate is by changing combinations of variables which may be correlated instead of changing just a single variable. However, there is a better methodology, known as the Monte Carlo simulation method, which considers all possible combinations of variables. The Monte Carlo method estimates NPV by generating random samples of the main variables which determine the project's cash flows, based on their probability distributions. Firstly, it models the project with a set of mathematical equations and identities describing the relationships among the different variables involved. Secondly, it specifies a probability distribution for each variable, empirically or subjectively, and the correlation among variables. Finally, it draws a random sample based on the defined distribution for each variable and computes the NPV for each sample. All these sample NPVs generate a probability distribution from which the expected NPV may be estimated, as well as its standard deviation, along with other statistics. The previous approach provides a probability distribution of NPV, not cash flows. If these NPVs are correctly computed, they should have been discounted at the risk-free rate. Discounting them at a risk-adjusted rate would be accounting twice for risk, according to Myers (1976), quoted above.
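A compact sketch of the Monte Carlo procedure just described: model the cash flows as functions of a few uncertain variables, draw random samples from assumed distributions, and build a distribution of NPVs. The distributions, parameter values, and the single discount rate used here are hypothetical illustrations, and the correlations among variables are ignored for simplicity.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_npv(n_sims=10_000, years=5, rate=0.10, investment=100.0):
    """Monte Carlo NPV: sample price, quantity growth and cost, compute one NPV per draw."""
    # Assumed distributions for the key value drivers (illustrative only)
    price = rng.normal(10.0, 1.5, size=(n_sims, 1))        # unit price
    growth = rng.normal(0.03, 0.02, size=(n_sims, 1))      # annual quantity growth
    unit_cost = rng.normal(6.0, 1.0, size=(n_sims, 1))     # treated as independent here
    years_idx = np.arange(1, years + 1)
    quantity = 10.0 * (1 + growth) ** years_idx            # quantity path per draw
    fcf = (price - unit_cost) * quantity                   # free cash flow per year
    discount = (1 + rate) ** years_idx
    return (fcf / discount).sum(axis=1) - investment       # one NPV per simulated path

npvs = simulate_npv()
print(f"expected NPV: {npvs.mean():.2f}, std: {npvs.std():.2f}, "
      f"P(NPV < 0): {(npvs < 0).mean():.1%}")
```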
On the other hand, applying the Monte Carlo method yields a probability distribution of the NPV, which is hard to associate with the price of an asset in a competitive market, the original question in valuation. Due to this critique, Brealey and Myers (2000, p.275) recommend using simulation "not to generate a distribution of NPVs. Use it to understand the project, forecast its expected cash flows, and assess its risk. Then calculate NPV the old-fashioned way, by discounting expected cash flows at a discount rate appropriate for the project's risk". Adding sensitivity analysis and simulating cash flows to compute NPV certainly improve the passive NPV approach to valuation. However, all these NPV approaches present a serious flaw, since they assume, from the beginning of the project, a commitment to certain investments which will be made at fixed points in time. In the real world, management may wait until more information is gathered, e.g. about the product or the market, before committing more money to the project. As time passes and more information is available, uncertainty decreases and management may make a better decision. This is the main critique of the NPV approach, which has led to the search for a better valuation method that incorporates the flexibility management has to change the initial plans at a future time. Two approaches have been developed for this purpose: Decision Tree Analysis (DTA) and the Real Options Approach (ROA).

Decision Tree Analysis (DTA)
DTA recognizes the interdependencies between the initial and the subsequent investment decisions. It allows management to map out all the different alternatives it will have at different points in time, conditional on the state of nature. In that way, it allows analyzing complex sequential investment decisions when uncertainty is resolved at discrete points in time, not only at the initial one as is the case in the NPV approach. In order to apply DTA, a tree is built where each branch represents a state of nature at a certain point in time. At each node, management should choose the alternative which maximizes the risk-adjusted expected NPV. In this way, this approach incorporates flexibility by recognizing that future decisions will depend on the state of nature at that point in time; that is, the decision need not be made today when management is deciding whether to undertake the project or not. This initial decision is the only one to which management is committed. All this makes DTA appealing as a valuation method, but applying it in the real world is very hard. In practice, when a complex reality is being portrayed, the tree becomes really cumbersome. The tree branches at each period, for each state of nature, and for each decision that may be taken. For all these reasons, the tree grows exponentially. Restricting DTA to a minimum number of cases may help but, by doing so, managers may incur an oversimplification which can lead to an inadequate decision. Yet this is not the main critique of DTA. The most important flaw of this approach is the use of a constant discount rate along the entire tree. By doing this, it assumes that uncertainty is resolved continuously at a constant rate over time, when the tree has been built considering discrete events; therefore, different discount rates should be used for each period.
As Trigeorgis (1986) points out, some authors try to solve this problem by discounting cash flows at the risk-free rate and then deciding on the basis of the distribution of NPV. This solution is inconsistent with building a tree forward using the actual probabilities and the expected rate of return and then moving backwards discounting at the risk-free rate. The main conclusion from the previous analysis of sensitivity analysis, simulation, and DTA is that "although these methodologies have been helpful in improving management's understanding of the structure of the investment decision, nevertheless stop short of offering a manageable, consistent solution" (Trigeorgis, 1986, p.63). On the other hand, even though DTA goes one step forward by incorporating flexibility, it is hard to apply in the real world. Next, I will present a more recent valuation methodology, known as the real options approach (ROA), which overcomes all the critiques made of the previous methods.

From the NPV approach to ROA
The reign of the DCF, particularly the NPV approach, has lasted for decades up to our time. However, as Boer (2002) remarks, "the era of discounted cash flows came to an end, ..., in part because of a growing crisis in valuation. In effect, while the concept of basing value on cash flows was by itself intellectually impregnable, the marketplace was rejecting it. Sky-high valuations were accorded to companies with promising intellectual property but minuscule or negative short-term cash flow. At the same time, investments in "value" stocks, those in companies with steady and reliable cash flow, underperformed the market for a decade". Clearly, part of the story may be explained by the presence of a bubble in the stock market, but even after it burst in 2000, valuations remained at a much higher level after the crisis. Possible explanations for the NPV approach's failure are that it does not take into account important determinants of value such as intellectual capital, market power, and real options, hence resulting in undervalued projects or corporations. A study by Ernst and Young mentioned previously estimates that only 25% of current market capitalization is based on cash flows anticipated in the next five years, in a sample that includes both growth and value stocks. The other 75% of value may be explained by the aforementioned factors. Intellectual capital and market power are more relevant for growth stocks, whereas real options are relevant for both value and growth stocks. The literature on real options has grown rapidly in recent years. Much of it focuses on capital budgeting, but it can also be applied to corporate valuation. A real option is "the right but not the obligation to take an action (e.g. deferring, expanding, contracting, or abandoning) at a predetermined cost called the exercise price, for a predetermined period of time -the life of the option" (Copeland and Antikarov, 2001, p.5). This concept of real options has been developed as an analogy to financial options, where the underlying asset is a real investment opportunity. The theory of option pricing is based on no-arbitrage equilibrium, where it is possible to replicate the option payoffs by building portfolios of traded securities; hence, to avoid arbitrage, the value of the option should be the same as the value of the replicating portfolio. The main caveat about the use of no-arbitrage equilibrium for real options valuation is that investment opportunities or projects are not traded.
This difficulty may be overcome in the same way as it was done in the DCF approach, that is, by identifying a twin security for the project which is traded in financial markets and has the same risk characteristics. Then, this twin security's required rate of return -computed based on the CAPM- can be used as the appropriate discount rate. However, even if this twin security exists -which may not always be the case-, the fact that the project is a non-traded asset demands an adjustment of the drift rate by a risk premium Ψ. Hence, risk-neutral valuation can be used, replacing the drift rate α by an adjusted drift rate α* as follows:

α* = α − Ψ      (4)

This risk-neutral valuation may be used for a rich variety of real options. One of the most frequently encountered in projects is the real option to defer (D) the decision of making additional investments until, e.g., there is more information about product demand. This option can be assimilated to a Call option, where its value is the present value of the expected payoff from the option. This payoff is the maximum of the underlying asset's value -at the expiration of the option- minus the exercise price, or zero:

D = E*[Max(S_T − X, 0)] e^{−rT}      (5)

where:
• E* denotes "risk-neutral" or equivalent Martingale expectations.
• S_T is the value of the underlying asset at the time of expiration of the option. For real options the underlying is the real investment opportunity.
• X is the exercise price, which is the investment required to exercise the option.
• T is the time to expiration of the option.
• r is the risk-free interest rate used for discounting under risk-neutral valuation.
Finally, the volatility of the underlying asset's value is measured by its standard deviation σ.
Some other real options, e.g. the option to abandon (A) a project, may be assimilated to a Put option instead. The value of such an option is also the present value of its expected payoff, but the payoff is computed as the maximum of the exercise price -at the expiration of the option- minus the value of the underlying asset, or zero:

A = E*[Max(X − S_T, 0)] e^{−rT}

If the decision to defer may be made at different points in time before the option's expiration, D can be considered an American option. The same can be said for the abandonment option if management may make such a decision at different times before expiration. By exercising this option, management can abandon a project after it has been initiated, recovering only its scrap value. The costs saved are the exercise price in this case, whereas the difference between the initial value of the project and the scrap value is the underlying asset's value. In addition to the option to defer or to abandon, other real options frequently mentioned in the literature are:
• The Shutdown Option: it is the management's option to shut down a plant when the output price falls below variable cost. McDonald and Siegel (1985) model this option in a way that the decision to shut down may be made costlessly. The option payoff at each period will be the maximum of revenue minus costs, or zero, since the firm may choose between producing and shutting down. The final value of the firm is the present value of the expected option payoff.
• The Option to Expand: similar to a call option where, by making an additional investment I' -the exercise price for this option-, management can expand the scale of a project by a certain amount proportional to S: S'. The payoff from this option will be Max(S' − I', 0) and the value of the option will be the present value of its expected payoff.
[Table 1 residue: factors whose increase raises the value of the real option, including the volatility σ.]
• The Option to Contract: it is the opposite of the option to expand.
Management may decide to contract the scale of the project by a certain percentage, to (1 − S"), thereby saving certain planned expenditures I", e.g. reducing advertising or the size of the plant if the demand for the product is not as strong as initially predicted. This type of real option is similar to a put option where the option payoff is Max(I" − S"·S, 0) and the option value is the present value of its expected payoff.
• The Option to Switch: it gives management the flexibility to switch, e.g., technologies or outputs. It can be seen as a put option on the value of the investment opportunity under its current use, with an exercise price equal to its value under its alternative use.
• Growth Option: it emerges when the project is designed in phases, e.g. discovery, product trials, production. In this case, additional investments will be made only if the discovery phase is successful. It is similar to a call option, and the timing of those additional investments is uncertain and conditional on the discovery. The value of a growth option (G) will be:

G = E*[Max(S_T − X, 0)] e^{−rT}

Recently, an approach to valuation based on real options has been suggested. This Real Options Approach (ROA) estimates the value of a project as the sum of its static or passive NPV and the value of the real options the project offers to management. Under ROA, a project or investment opportunity should be undertaken only if

ROA Value = Passive NPV + Real Option Value > 0

Hence, according to ROA, just using the passive NPV frequently undervalues projects. Another consequence of using ROA is that sometimes projects with negative NPV may be undertaken, when the value of the real options they offer exceeds the negative NPV. ROA corrects the NPV approach estimate in two ways. Firstly, by taking into account real options, it adds value to the project by introducing asymmetry in its cash flows. This implies that project cash flows cannot be calculated in a single "mean" scenario, even if the underlying probability distributions are symmetric (Kulatilaka and Marcus, 1992). Also, while the NPV approach fails to recognize the "strategic value" of a project resulting from its interdependencies with future, follow-up investments and from competitive interaction, ROA does take these interdependencies into account, usually resulting in higher estimates than those based on the NPV approach (Trigeorgis, 1988). Under ROA, higher uncertainty and more time before undertaking the project may increase the value of the project, contrary to what happens in the passive NPV approach. As was shown above in Table 1, the value of the real option is greater since managerial flexibility becomes more valuable in these cases. The final project's value will be higher when the real options value increases by more than the NPV decreases due to greater uncertainty or time to undertake the project. Like DTA, ROA recognizes the flexibility management has to avoid negative states of nature by deferring decisions until more information is gathered about the project's future cash flows. However, finding the right discount rate for the expected cash flows is difficult under DTA as well as under DCF, as discussed above. Precisely, an advantage of ROA over these methods is that it uses financial option pricing theory to value real options. This theory has proved to be helpful in valuing real options, although it should be applied with some caution since there are some differences between financial and real options.
An example of these differences is the stochastic nature of the exercise price for real options, since the value of the investment necessary to exercise the option may change as time passes. Another difference is that the underlying asset's value may jump and its volatility may not be constant until expiration. These characteristics make numerical methods, e.g. Monte Carlo, more suitable for valuing real options than the standard analytical methods, such as the Black-Scholes or the binomial model, which are frequently used for financial option pricing. On the other hand, although there is an extensive theoretical literature on ROA, the number of empirical studies on how this approach performs is much more limited. Paddock, Siegel, and Smith (1988) perform a study on bids to develop oil leases using a government discounted cash flows model and a deferral option. With either of these models they are able to explain only half of the actual winning bids. Adding the deferral option could not explain the high prices that were paid, which the authors attribute to a "winner's curse" problem. Bailey (1991) uses ROA with a shutdown option to explain the prices of seven palm and rubber plantations. He finds that in six out of seven cases ROA explains the actual stock prices better than a DCF model, and the difference between these two models is statistically significant in two of those cases. Quigg (1993) analyzes 2,700 land transactions in Seattle based on ROA, incorporating an option to wait to develop the land. She runs regressions of property prices on building and lot sizes, building height and age, and dummy variables for location and season. She also calculates the price assuming the option would be exercised by building when the ratio of the building's price to the development cost was greater than one plus the market rate. After that, actual transaction prices are regressed against the option value and the regression value, and she finds that the option model fit is higher than the regression model's in nine out of fifteen cases. Additionally, when the option premium is added to the multiple regression, it is significant in fourteen out of fifteen cases. More empirical evidence supporting ROA is shown in Bulan (2001), who examines the implications of real option models with irreversible investment for the relationship between firm investment and uncertainty. Using data on 2,467 U.S. firms, she finds support for the predictions of ROA in the sense that firms delay investment during times of greater uncertainty. On the other hand, Moel and Tufano (2000) perform a study on 285 developed North American gold mines based on ROA, including a switching option -between operating and shutdown modes-. Their results are consistent with a real options model where mine closings depend on the price of gold, its volatility, operating costs and closing costs. Tong and Reuer (2004) search for evidence on whether firms actually capture option value from their investments. They estimate the proportion of a firm's value accounted for by growth options and link the growth option value to corporate investments that have been commonly viewed as providing valuable growth options. The empirical analysis examines internal and external corporate development activities of a panel of 293 manufacturing firms during 1989-2000. The results indicate that investments in research and development and in joint ventures contribute to growth option value, and that investments in tangible capital and in acquisitions have no effect in general.
Finally, Clark, Gaddad, and Rousseau (2004) look at divestitures by 144 UK firms listed on the LSE from 1985 to 1991 and investigate whether and how accurately investors price the firm's option to abandon assets in exchange for their exit value. Theory prices this real option as an American-style put. The empirical implications are that investors do price the abandonment option, but that they price it imperfectly because the exit price is private information. Not surprisingly, this empirical evidence shows that ROA explains actual prices better than DCF approaches. However, in most cases it does not fully explain the difference between actual prices and the passive NPV, and I suggest two possible explanations for it. It could be the case of a failure to properly define the stochastic process of the underlying asset, which is crucial in applying option pricing, e.g. assuming a diffusion process when the actual one is a jump-diffusion process (Maya, 2003). Another possible explanation is that, sometimes, other factors need to be considered to determine the value of the firm, e.g. intellectual capital and market power. In Maya (2004) I propose a Creative Destruction - Real Options Approach which incorporates all three determinants of value: real options, intellectual capital, and market power, to explain actual stock prices of start-up firms in growth industries. Nowadays there is no question that, from a theoretical point of view, ROA is a much more appealing concept than passive NPV. However, its acceptance by practitioners has been very slow, mainly due to the difficulties in understanding and applying option pricing theory. Although extending option theory to real investments was first proposed more than twenty years ago, it was only in the last three to five years that the concept really attracted the interest of large companies and consultants. In a survey of thirty-four companies in seven industries, Triantis and Borison (2001) found that the firms which have shown interest in ROA are those that operate in industries where large investments with uncertain returns are commonplace, e.g. oil, gas, and life sciences. In many cases, these are industries which have undergone major restructuring that makes traditional approaches to valuation less helpful, e.g. electric power. Finally, they tend to be engineering-driven industries where sophisticated analytical tools have been widely used. Surprisingly, the financial industry has shown less interest. According to Miller and Park (2002), to date mostly natural resources, utility, and R&D managers are using ROA, due to the accessibility of publicly traded prices to proxy option parameters. They cite the Busby and Pitts (1997) study, where questionnaires were sent to the Finance Directors of all firms in the FT-SE 100 Index. This study found that 50% of them recognized options within their business, with most options being growth or abandonment options. 35% of the respondents considered options as highly or extremely important in influencing investment decisions; however, more than 75% did not have procedures for valuing real options, confirming that the main obstacle to ROA's use in the real world has been the complexity of real options pricing, in spite of its widely recognized theoretical appeal.

Conclusions
It took more than twenty years for the DCF approaches, mainly the NPV, to replace Years to Payback and other ancient methods for capital budgeting and corporate valuation purposes.
Yet this long wait proved to be worthwhile since, after positioning itself as the most widely used approach to valuation, its reign seemed endless. Still in the nineties it continued to be the main approach, despite a gulf between market prices and DCF values, a fact that neither academics nor practitioners could continue ignoring. The NPV approach is based on the free cash flows the project or entity will generate in the future, which are defined as the net cash flows to shareholders after future investments. It improves over those methods based on balance sheet information in the sense that it measures the ability to generate cash and, as such, it gives a better estimate of the shareholder's wealth created by the project or entity. Additionally, it accounts for both the time value of money and risk aversion, combined through the risk-adjusted discount rate. Adding sensitivity analysis and simulating cash flows to compute NPV certainly improves the passive NPV approach to valuation. However, all these NPV approaches present a serious flaw, which is that they assume, from the beginning of the project, a commitment to certain investments which will be made at fixed points in time. In the real world, management may wait until more information is gathered, e.g. about the product or the market, before committing more money to the project. As time passes and more information is available, uncertainty decreases and management may make a better decision. A much more appealing approach -from a theoretical point of view- has taken a foothold in the valuation world. Building on the DCF approach yet going further in the sense of incorporating flexibility in management investment decisions, and taking advantage of the advances in option pricing theory, the real options approach (ROA) has become the alternative approach to capital budgeting and, lately, to corporate valuation. ROA corrects the passive NPV approach estimates in two ways. Firstly, by taking into account real options, it adds value to the project by introducing asymmetry in its cash flows. Also, while passive NPV fails to recognize the "strategic value" of a project resulting from its interdependencies with future, follow-up investments and from competitive interaction, ROA does take these interdependencies into account, usually resulting in higher estimates than those based on NPV. Empirical evidence shows that ROA explains actual prices better than DCF approaches. However, in most cases it does not fully explain the difference between actual prices and the passive NPV, and I suggest two possible explanations for it. It could be the case of a failure to properly define the stochastic process of the underlying asset, which is crucial in applying option pricing, e.g. assuming a diffusion process when the actual one is a jump-diffusion process. Another possible explanation is that, sometimes, other factors need to be considered to determine the value of the firm, e.g. intellectual capital and market power. Nowadays there is no question that, from a theoretical point of view, ROA is a more appealing concept than passive NPV. However, its acceptance by practitioners has been very slow, mainly due to the difficulties in understanding and applying option pricing theory.
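To make the decision rule concrete, here is a minimal sketch of valuing a deferral option by risk-neutral Monte Carlo as described earlier: the project value is simulated under the adjusted drift α* = α − Ψ (supplied directly as an assumption), the payoff Max(S_T − X, 0) is discounted at the risk-free rate, and the result is combined with the passive NPV following the rule ROA Value = Passive NPV + Real Option Value. All parameter values and the lognormal dynamics are hypothetical illustrations, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(7)

def deferral_option_value(S0, X, T, sigma, r, drift_star, n_sims=100_000):
    """Risk-neutral Monte Carlo value of the option to defer (a call on the project).

    S0 : present value of the project's expected cash flows (the underlying).
    X  : investment required to exercise (exercise price).
    drift_star : adjusted drift alpha* = alpha - psi (assumed given).
    """
    # Terminal project value under an assumed lognormal (GBM) process
    Z = rng.standard_normal(n_sims)
    ST = S0 * np.exp((drift_star - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - X, 0.0)          # Max(S_T - X, 0)
    return np.exp(-r * T) * payoff.mean()     # discount expected payoff at r

# Illustrative numbers only
S0, X, T, sigma, r = 100.0, 110.0, 2.0, 0.35, 0.05
passive_npv = S0 - X                          # invest-now NPV: -10 (reject under the NPV rule)
option = deferral_option_value(S0, X, T, sigma, r, drift_star=r - 0.02)
print(f"passive NPV: {passive_npv:.1f}, deferral option: {option:.1f}, "
      f"ROA value: {passive_npv + option:.1f}")   # undertake if ROA value > 0, per the text's rule
```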
9,521.4
2004-10-15T00:00:00.000
[ "Economics", "Business" ]
Combining Multifractal Analyses of Digital Mammograms and Infrared Thermograms to Assist in Early Breast Cancer Diagnosis
We used a 1D wavelet transform modulus maxima (WTMM) method to analyze the temporal fluctuations of breast skin temperature recorded with an infrared (IR) camera from a panel of patients with breast cancer. This study shows that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in the region of the malignant tumor in cancerous breasts. Then, we applied the 2D WTMM method to analyze the spatial fluctuations of breast density in the X-ray mammograms of the same patients. Compared to the correlated roughness fluctuations observed in the healthy areas, some clear loss of correlations is detected in malignant tumor foci. These physiological and architectural changes in the environment of malignant tumors, detected in both thermograms and mammograms, open new perspectives in computer-aided multifractal methods to assist in early breast cancer diagnosis. We also discuss our computational approach and its potential to assist in early breast cancer diagnosis, progression and treatment.

INTRODUCTION
According to the World Health Organization, breast cancer is one of the most common causes of cancer deaths among women worldwide. Clinical studies have demonstrated that patients' survivability increases if breast anomalies are detected at early stages [1]. Although X-ray mammography is currently the most effective way of early breast cancer detection, it remains difficult to interpret due to high tissue variability and to two-dimensional projection effects [2,3]. The miss rate in mammography is increased in the dense breasts of young women [4,5]. Criticism of the use of screening mammography due to over-diagnosis has made clinicians use additional modalities such as ultrasound, magnetic resonance imaging, IR imaging, etc. Since the original observation by R. Lawson [6] that the skin temperature over a malignant tumor is higher than in its neighborhood, possibly resulting from an abnormal increase in metabolic activity and vascular circulation in the tissues beneath [7-10], IR thermography has been considered a promising non-invasive screening method for breast cancer detection [11-13]. However, the suitability of IR imaging for routine screening has been highly questioned [14], because of insufficient sensitivity for the detection of deep lesions and limited knowledge of the relationship between the skin surface temperature distribution and thermal diseases. This growing wave of criticism of breast cancer screening techniques has been strengthened by a similar movement criticizing the increasing use of computer-aided detection (CAD) methods, which were shown to decrease specificity [15] and to increase recall rates of healthy women [16-19], resulting in an increase in unnecessary stress in women. Note that most of these CAD methods [20-24] were designed for texture analysis and feature extraction with the prerequisite that background roughness fluctuations of normal breast mammograms are homogeneous and uncorrelated. In this paper, we review the results of recent and ongoing analyses of dynamic infrared thermograms and X-ray mammograms based on wavelet-based multifractal methodologies [25-27]. Application of dynamic IR imaging as a diagnostic tool is based on the detection of variations in the temperature rhythms generated by the cardiogenic and vasomotor frequencies [28,29].
Human cardiogenic frequency levels lie between 1 and 1.5 Hz and vasomotor frequencies between 0.1 and 0.2 Hz, with higher harmonic frequency rhythms. But, beyond intensity differences in these rhythms between normal and tumor tissues, there is much more to learn from dynamical IR imaging of breast cancer. Using a 1D wavelet-based multiscale analysis of the temperature intensity fluctuations about these physiologic perfusion oscillations, we propose to characterize the multifractal properties of these temperature fluctuations as a novel discriminating method that can be implemented in early screening procedures to identify women with a high risk of breast cancer [25,26]. The main premise in our investigation of mammograms [27] was related to the suggestion that healthy dense and fatty tissues indeed display persistent long-range correlations and anti-correlations in mammogram spatial fluctuations, respectively [30,31]. Using a 2D wavelet-based multifractal methodology, we recently observed some clear loss of correlations in the mammogram area surrounding the malignant tumor, which strongly correlates with regions of anomalous monofractal (as compared to healthy multifractal) temperature fluctuations previously detected in dynamic IR thermograms [27]. This loss of complexity in temperature time-series and mammographic subimages is likely to be the signature of physiological and architectural alterations of the environment of malignant breast tumors. Besides their potential clinical impact, these results open new perspectives in the investigation and understanding of the cellular mechanisms that underlie the loss of tissue homeostasis, cancer initiation and progression.

The 1D Wavelet Transform (Time-Frequency Analysis)
The WT is a space-scale analysis which consists in expanding a signal in terms of wavelets that are constructed from a single function, the "analyzing wavelet" ψ, by means of translations and dilations. The WT of a function f(t) is defined as [37]:

T_ψ[f](t₀, a) = (1/a) ∫ f(t) ψ((t − t₀)/a) dt,

where t₀ and a (> 0) are the time and scale (inverse of frequency) parameters, respectively. By choosing a wavelet whose (n + 1) first moments are zero (∫ t^m ψ(t) dt = 0, 0 ≤ m ≤ n), one makes the WT microscope blind to order-n polynomial behavior, a prerequisite for multifractal fluctuation analysis [54-57].

The 2D Wavelet Transform (Space-Scale Analysis)
With an appropriate choice of the analyzing wavelet, one can reformulate Canny's multiscale edge detection [58] in terms of a 2D wavelet transform [37]. The general idea is to start by smoothing the discrete image data by convolving it with a filter and then to compute the gradient of the smoothed image. Let us consider two wavelets that are, respectively, the partial derivatives with respect to x and y of a 2D smoothing function φ(x, y): ψ₁(x, y) = ∂φ(x, y)/∂x and ψ₂(x, y) = ∂φ(x, y)/∂y. The WT of I(x, y) with respect to ψ₁ and ψ₂ has two components and therefore can be expressed in vectorial form [37]:

T_ψ[I](b, a) = ( T_{ψ₁}[I](b, a), T_{ψ₂}[I](b, a) ) = a⁻² ∫ ψ₁,₂((x − b)/a) I(x) d²x,

from which we can extract the modulus and argument of the WT:

M[I](b, a) = |T_ψ[I](b, a)| = ( T_{ψ₁}² + T_{ψ₂}² )^{1/2},   A[I](b, a) = arg( T_{ψ₁} + i T_{ψ₂} ).

The Wavelet Transform Modulus Maxima Method
The wavelet transform modulus maxima (WTMM) method consists in computing the WT skeleton defined, at each fixed scale a, by the local maxima L(a) of the WT modulus M(·, a). In 1D, these WTMM are disposed on curves connected across scales, called maxima lines, which define the WT skeleton [54-56, 59, 60]. In 2D, these WTMM lie, at a given scale, on connected chains called maxima chains.
When considering the points along these maxima chains where $\mathcal{M}(\cdot, a)$ is locally maximum, we define the so-called WTMMM $\mathcal{L}(a)$. The WTMMM are then linked through scales to form the WT skeleton [31,[61][62][63]. Along these maxima lines the WTMM (resp. WTMMM) behave as $a^{h(t)}$ (resp. $a^{h(\mathbf{x})}$), where $h(t)$ (resp. $h(\mathbf{x})$) is the Hölder exponent characterizing the singularity of $s$ (resp. $I$) at time $t$ (resp. position $\mathbf{x}$). The multifractal formalism amounts to characterizing the relative contributions of each Hölder exponent value via the estimate of the $D(h)$ singularity spectrum, defined as the fractal dimension of the set of points $t$ (resp. $\mathbf{x}$) where $h(t)$ (resp. $h(\mathbf{x})$) $= h$. This spectrum can be obtained by investigating the scaling behavior of partition functions defined in terms of the WT coefficients [31,[54][55][56][57]:

$$Z(q, a) = \sum_{l \in \mathcal{L}(a)} \mathcal{M}(l, a)^{q} \sim a^{\tau(q)}, \qquad q \in \mathbb{R}. \qquad (6)$$

Then, from the scaling function $\tau(q)$, $D(h)$ is obtained by a Legendre transform:

$$D(h) = \min_{q}\big(q\,h - \tau(q)\big).$$

In practice, to avoid instabilities in the estimation of the singularity spectrum $D(h)$ through the Legendre transform, we instead compute the following expectation values [31,[54][55][56][57]:

$$h(q, a) = \sum_{l \in \mathcal{L}(a)} \ln \mathcal{M}(l, a)\; \widehat{W}(q, l, a) \quad \text{and} \quad D(q, a) = \sum_{l \in \mathcal{L}(a)} \widehat{W}(q, l, a)\, \ln \widehat{W}(q, l, a),$$

where $\widehat{W}(q, l, a) = \mathcal{M}(l, a)^{q} / Z(q, a)$ are the equivalent of the Boltzmann weights in the analogy that links the multifractal formalism to thermodynamics [56]. Then, from the slopes of $h(q, a)$ and $D(q, a)$ vs $\ln a$, we get $h(q)$ and $D(q)$ and, consequently, the $D(h)$ singularity spectrum as a curve parametrized by $q$. Monofractal versus Multifractal Functions The $\tau(q)$ data are fitted by the quadratic approximation

$$\tau(q) = -c_0 + c_1 q - \frac{c_2}{2} q^{2}.$$

The corresponding singularity spectrum has a characteristic single-humped shape, $D(h) = c_0 - (h - c_1)^2 / 2c_2$, where $c_0$ is the fractal dimension of the support of the singularities of $s$ (resp. $I$), $c_1$ is the value of $h$ that maximizes $D(h)$, and $c_2$, the so-called intermittency coefficient [64], characterizes the width of the $D(h)$ spectrum. Homogeneous (mono)fractal functions, i.e. functions with singularities of unique Hölder exponent $H$, are characterized by a linear $\tau(q)$ curve with $h(q) = c_1 = H$ ($c_2 = 0$). The $D(h)$ singularity spectrum then reduces to a single point, as the signature of no change in WTMM statistics across scales [31, 39, 54-57, 61-63, 65, 66]. A nonlinear $\tau(q)$ is the signature of nonhomogeneous functions exhibiting multifractal properties, meaning that the Hölder exponent $h(t)$ [54][55][56] (resp. $h(\mathbf{x})$ [31,[61][62][63]) is a fluctuating quantity that depends on $t$ (resp. $\mathbf{x}$). Then $D(h)$ has a single-humped shape ($c_2 > 0$) that reflects some change in WTMM statistics across scales [31,55,63,65]. Experimental Protocols In this work [25][26][27], we analyzed the thermograms of 33 patients (from 37 to 83 years old) of the Perm Regional Oncological Dispensary with breast cancer (invasive ductal and/or lobular cancer) and 14 healthy volunteers (from 23 to 79 years old) [67]. Tumors (1.2 to 6.5 cm in size) were found at 1 to 12 cm depths. Both breasts of each patient and volunteer were imaged with an IR camera (FLIR SC5000) at 50 Hz frame rate during the 10 min immobile patient phase. Each image set contained 30 000 image frames. The mammographic procedure involved two X-ray images of both compressed breasts of each patient, referred to as the medio-lateral oblique (MLO) and the craniocaudal (CC) views. Spatial resolution of images was 200 pixels per 1 cm. For mammogram analysis, our database includes mammograms collected from 30 of the mentioned 33 patients with breast cancer. WTMM Multifractal Analysis of Breast Infrared Thermograms We grouped single-pixel temperature time-series into 8×8 pixel² squares covering the entire breast [25,26].
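Before turning to the thermogram results, the canonical estimation of $h(q)$ and $D(q)$ from the WT skeleton described above can be sketched in a few lines. The snippet below is an illustration under stated assumptions (a synthetic, monofractal-like set of maxima moduli), not the code used in [25-27].

```python
# Minimal sketch of the canonical WTMM estimator:
# Z(q,a) = sum_L M^q,  W_hat = M^q / Z,  h(q,a) = sum W_hat*ln M,  D(q,a) = sum W_hat*ln W_hat,
# with h(q) and D(q) read off as slopes versus ln a.
import numpy as np

def canonical_wtmm(maxima_moduli, scales, q_values):
    """maxima_moduli: one array of positive WTMM values M(l, a) per scale a."""
    ln_a = np.log(scales)
    h_q, D_q = [], []
    for q in q_values:
        h_a, D_a = [], []
        for M in maxima_moduli:
            Mq = M ** q
            w = Mq / Mq.sum()                       # Boltzmann weights W_hat(q, l, a)
            h_a.append(np.sum(w * np.log(M)))
            D_a.append(np.sum(w * np.log(w)))
        h_q.append(np.polyfit(ln_a, h_a, 1)[0])     # slope vs ln a
        D_q.append(np.polyfit(ln_a, D_a, 1)[0])
    return np.array(h_q), np.array(D_q)

# toy skeleton: moduli scaling as a^0.5 with lognormal scatter (monofractal-like)
rng = np.random.default_rng(1)
scales = np.geomspace(4, 256, 20)
maxima_moduli = [a ** 0.5 * np.exp(0.2 * rng.standard_normal(300)) for a in scales]
q_values = np.linspace(-2, 5, 15)
h_q, D_q = canonical_wtmm(maxima_moduli, scales, q_values)
print(np.round(h_q[[0, 7, 14]], 2))   # nearly q-independent h(q) ~ 0.5, i.e. monofractal-like
```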
The reported results are averaged over the 64 temperature time-series in these 8×8 subareas. We analyzed single-pixel temperature time-series taken from the 8×8 pixel² squares covering the patients' entire breasts. As expected, these time-series generally fluctuate at a higher temperature when recorded in the tumor region of a cancerous breast (Fig. 1a) than in a symmetrically localized square on the contralateral unaffected breast (Fig. 1b), or on a healthy breast (Fig. 1c). As commonly done for noise signals [38,55] and as previously experienced when analyzing rainfall time-series [68], we applied the WTMM method to the cumulative sums of these temperature time-series, using the second-order compactly supported version of the Mexican hat wavelet [26,69] (hence singularities with a possibly negative Hölder exponent $-1 < h < 0$ become singularities with $0 < h_{\mathrm{cum}} = h + 1 < 1$ in the cumulative signal). We found that the partition functions $Z(q, a)$ (Eq. (6)) display nice scaling properties for $q$ values from 1 to 5, over a range of time-scales limited to [0.43 s, 2.30 s] for linear regression fit estimates in a logarithmic representation. The $\tau(q)$ so obtained are well approximated by quadratic spectra (Fig. 1d). For the cancerous breast of patient 20, $\tau(q)$ is nearly linear, as quantified by a very small value of the intermittency coefficient (Fig. 1e). In contrast, the $\tau(q)$ spectra obtained for the contralateral unaffected breast of patient 20 and the right healthy breast of volunteer 14 are clearly nonlinear (Fig. 1d). This hallmark of multifractal scaling is confirmed by the computation of the corresponding $D(h)$ spectra in Fig. 1e, which both display a single-humped shape [26]. The results of our comparative wavelet-based multifractal analysis of (cumulative) temperature fluctuations over the 33 cancerous breasts, the 32 contralateral unaffected breasts (no right breast for patient 6) and the 28 volunteer healthy breasts are shown in Fig. 2 [25,26]. In Fig. 2a, the corresponding histograms of $c_1$ values extend over a rather wide range, $0.6 \lesssim c_1 \lesssim 1.8$, but turn out to be quite similar, with mean values $c_1 = 1.066 \pm 0.002$ (cancerous), $1.104 \pm 0.002$ (contralateral unaffected) and $1.103 \pm 0.002$ (healthy). In contrast, the intermittency parameter $c_2$ has a definite discriminatory power. The histogram for cancerous breasts ($c_2 = 0.045 \pm 0.001$) is definitely shifted towards smaller values relative to the ones for contralateral unaffected breasts ($c_2 = 0.056 \pm 0.001$) and normal breasts of healthy volunteers ($c_2 = 0.058 \pm 0.001$) (Fig. 2b). Small values of $c_2$ dominate in cancerous breasts compared to normal breasts, confirming that the cancerous breasts are enriched in squares where temperature fluctuations display significantly reduced multifractal properties. This justifies that we considered $c_2 = 0.03$ as the threshold below (resp. above) which a square was qualified as monofractal (resp. multifractal) (Figs. 3d-3f). According to the proposed classification criteria, the 8×8 pixel² squares covering breast thermograms were color coded in the following way: squares with $c_2 < 0.03$ were shown in black, squares with $c_2 > 0.03$ in white, and squares without scaling in gray. A high percentage of squares in the cancerous breast displays monofractal temperature fluctuations with small intermittency coefficient values (49.7% in Fig. 3a), while only a few such squares are found in the contralateral unaffected breast (7.7% in Fig. 3b) and in the healthy volunteer breast (11% in Fig. 3c).
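In practice, the $c_2$-based classification of the 8×8 squares used above reduces to a quadratic fit of $\tau(q)$ followed by thresholding. The following minimal sketch (toy $\tau(q)$ values, not patient data, and the 0.03 threshold taken from the text) illustrates that step.

```python
# Minimal sketch: log-normal quadratic fit tau(q) = -c0 + c1*q - (c2/2)*q^2,
# then classification of a square as monofractal (c2 < 0.03) or multifractal (c2 > 0.03).
import numpy as np

def fit_c1_c2(q, tau):
    # np.polyfit returns [a2, a1, a0] for a2*q^2 + a1*q + a0
    a2, a1, a0 = np.polyfit(q, tau, 2)
    return -a0, a1, -2.0 * a2          # c0, c1, c2

def classify_square(c2, threshold=0.03):
    return "monofractal" if c2 < threshold else "multifractal"

# toy example: a nearly linear tau(q) (weak intermittency)
q = np.linspace(-1, 5, 13)
tau = -1.0 + 1.07 * q - 0.5 * 0.01 * q**2
c0, c1, c2 = fit_c1_c2(q, tau)
print(round(c1, 3), round(c2, 3), classify_square(c2))   # -> 1.07 0.01 monofractal
```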
Consistently, in the thermograms of the contralateral unaffected and volunteer's normal breasts, a majority of squares (89.4% and 65%, respectively) manifests multifractal scaling with an intermittency coefficient $c_2 > 0.03$ (Figs. 3b and 3c). Note that 43.1% of the squares in the cancerous breast also display multifractal temperature fluctuations, as observed for normal breasts (Fig. 3a). These squares indeed cover regions of the breast that are far from the tumor area (left upper quadrant), which is mostly covered by monofractal squares. Let us point out that, for each breast, a small percentage (<20%) of (gray) squares was removed from our analysis because of lack of scaling. A comparative statistical analysis of monofractal and multifractal square distributions in breast thermograms confirmed that the percentage of monofractal (black) squares covering cancerous breasts (26.8 ± 3.5%) is about twice as high as the ones covering contralateral unaffected breasts (13.1 ± 2.3%) and breasts of healthy volunteers (11.3 ± 2.2%). This excess is compensated by a smaller percentage of multifractal (white) squares in cancerous breasts (56.9 ± 4.4%) than in contralateral unaffected breasts (68.5 ± 3.8%) and normal breasts of healthy volunteers (69.4 ± 4.3%) (Table 1). We compared the percentages of monofractal squares in cancerous and contralateral breasts [26] and found that 25 patients have a higher proportion of monofractal squares in the cancerous breast. Eighteen cancerous breasts have a percentage of monofractal squares greater than the mean value (26%). Among the other 15 cancerous breasts, 7 have a smaller percentage of monofractal squares, but these squares are well localized in the tumor region. The remaining 8 cancerous breasts were detected as false negatives due to a small percentage of monofractal squares and their distant location with regard to the tumor region. Four of them correspond to rather deep tumors (6 cm, 7 cm, 8 cm and 12 cm depth) in breasts with massive adipose tissue, which can explain the absence of significant changes in skin temperature dynamics. Importantly, in five of the contralateral unaffected breasts a high (>26%) percentage of monofractal squares was detected, which could be an early sign of pathology and thus requires further check-ups. As a control, we reproduced this comparative analysis on both normal breasts of the 14 healthy volunteers. Only 4 volunteers manifested a high (>26%) percentage of monofractal squares, while in most of them this value did not exceed 10% [26]. (Figure caption: Breast-wide multifractal analysis of skin temperature temporal fluctuations. As estimated from the $\tau(q)$ spectra computed with the WTMM method (Fig. 1d), the 8×8 pixel² squares covering the entire breast are color coded (a-c) and represented as dots in the $(c_1, c_2)$ plane (d-f).) To evaluate the efficiency of the test based on the multifractal analysis of breast infrared thermograms, we computed sensitivity (Se) and specificity (Sp). The results reported in Table 2 for our rather small set of patients, namely 76% sensitivity and 86% specificity, are quite encouraging for future applications to larger cohorts of females with breast cancer. WTMM Multifractal Analysis of X-ray Mammograms We divided each mammogram view into 360×360 pixel² squares spanning 18×18 mm² [27]. From the central 256×256 pixel² part of the WT skeleton, we computed the partition functions $Z(q, a)$
(Eq. (6)), which were found to display scaling properties over the range of scales $1 \le \log_2 a \le 3$, corresponding to [0.7 mm, 2.8 mm], for linear regression fit estimates in a logarithmic representation. Within the range of $q$ from 1 to 3, statistical convergence was achieved and the so-obtained $\tau(q)$ spectrum was found to be remarkably linear (Fig. 4e) for a great majority of squares in both the cancerous and contralateral unaffected breasts, as the signature of monofractal roughness fluctuations [31,61]. We systematically ran our 2D WTMM algorithm for all mammographic views in our database. We confirmed the result of our preliminary study [30] that normal tissue regions display monofractal scaling properties ($c_1 = H$, $c_2 = 0$), as characterized by Hurst exponent values $H < 0.5$ in fatty areas, which look like anti-persistent self-similar random surfaces, and $H > 0.5$ in dense areas, which exhibit persistent long-range correlations in mammographic roughness fluctuations [31,61]. Importantly, our multiscale method provides an objective way to identify areas with some clear loss of correlations in roughness fluctuations ($H \simeq 0.5$). We extended our wavelet-based multifractal analysis of CC and MLO mammographic images to the entire sets of 256×256 pixel² squares that cover each breast of the 30 patients. As illustrated in Figs. 4a-4d for patient 20, we color-coded each of these squares according to the monofractal diagnosis obtained with the 2D WTMM method. Besides confirming the existence of areas of correlated spatial roughness fluctuations, our methodology reveals that peculiar areas of uncorrelated roughness fluctuations are mainly found in the neighborhood of malignant tumors and strongly correlate with the regions of anomalous monofractal (as compared to healthy multifractal) temperature fluctuations previously detected in dynamic IR thermograms (Fig. 3a). We computed the percentages of monofractal uncorrelated H = 0.5 squares in the mammograms of the two breasts of the 30 patients with breast cancer. Good agreement was observed between the percentages of uncorrelated H = 0.5 squares in the CC and the corresponding MLO mammograms of both breasts of each patient (Fig. 5a). Obviously, the percentage of uncorrelated H = 0.5 squares in the cancerous mammograms is much larger than in the opposite breast mammograms. When comparing the two breasts of patients with breast cancer (Fig. 5b), we definitely evidenced an important asymmetry, with a significant excess of H = 0.5 squares in the mammograms of the cancerous breast compared to the contralateral unaffected breast. This is a very promising indication that the loss of correlations in the mammographic spatial density fluctuations can be used as an indicator of malignancy. We correlated our findings in the IR thermograms and X-ray mammograms of the 30 patients (Fig. 5c). It is clear that the majority of patients have consistently large numbers of "risky" uncorrelated H = 0.5 squares in the mammograms and of "risky" monofractal squares in the thermograms of their cancerous breast as compared to their contralateral unaffected breast. We further investigated the spatial distribution of uncorrelated H = 0.5 squares over the entire breast, defining an H = 0.5 cluster as an aggregation of H = 0.5 squares sharing a common edge [27]. Isolated H = 0.5 squares were considered as cluster-singlets. As reported in Table 3, the mean size of a cluster in a cancerous breast is larger than in the contralateral unaffected breast.
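The cluster statistics just introduced (aggregations of H = 0.5 squares sharing a common edge) can be computed by standard connected-component labelling; the sketch below illustrates this with a toy map of per-square Hurst exponents, and the ±0.05 tolerance around H = 0.5 is an assumption for demonstration rather than the study's criterion.

```python
# Minimal sketch: flag "uncorrelated" squares (H close to 0.5), group them into
# edge-sharing clusters (4-connectivity) and report the percentage and cluster sizes.
import numpy as np
from scipy import ndimage

def h05_clusters(hurst_map, tol=0.05):
    risky = np.abs(hurst_map - 0.5) <= tol            # assumed tolerance around H = 0.5
    labels, n = ndimage.label(risky)                   # 4-connectivity by default
    sizes = ndimage.sum(risky, labels, index=range(1, n + 1))
    return 100.0 * risky.mean(), sizes                 # % of risky squares, cluster sizes

# toy map: mostly correlated squares (H ~ 0.65) with one uncorrelated patch
rng = np.random.default_rng(2)
H = 0.65 + 0.05 * rng.standard_normal((12, 12))
H[4:7, 5:9] = 0.5
pct, sizes = h05_clusters(H)
print(round(pct, 1), sorted(sizes, reverse=True)[:3])  # % risky squares, largest cluster sizes
```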
This clustering effect is even more evident for the size of the largest H = 0.5 square that is twice bigger for the cancerous breasts than for the opposite ones. These findings suggest that uncorrelated H = 0.5 squares conglomerate in rather big clusters in cancerous breasts likely defining "risky" area that can be used as a new indicator of malignancy. CONCLUSIONS By combining a 1D WTMM multifractal analysis of IR temperature time-series and a 2D WTMM multifractal analysis of X-ray mammograms, we have revealed some change in both temporal and spatial scaling properties in the region of the malignant tumor in cancerous breasts. This is a clear indication of some physiological alterations and architectural disorganization of the "niche" surrounding breast tumor that may be the signature of a transition from an antitumorigenic to a tumorigenic microenvironment underlying the process of oncogenic transformation, tissue invasion and metastasis invasion during cancer progression [70][71][72][73]. Despite the exploratory nature of this preliminary study involving a rather small cohort of females who all went through surgery to remove the histologically confirmed malignant tumor, the obtained results are quite encouraging in the perspective of generalizing our approach to the analysis of longitudinal studies. This will allow us to quantify the statistical performance (sensitivity, specificity) of our computational approach and its potentiality to assist in early breast cancer diagnosis, progression and treatment. ACKNOWLEDGMENTS This work was supported by INSERM, ITMO Cancer for its financial support under contract PC201201-084862 "Physiques, mathématiques ou sciences de l'ingénieur appliqués au Cancer", the Russian Foundation for Basic Research (grant No. 16-41-590235) and the Maine Cancer Foundation. The study reported in this article was conducted according to accepted ethical guidelines involving research in humans and/or animals and was approved by an appropriate institution or national research organization. The study is compliant with the ethical standards as currently outlined in the Declaration of Helsinki. All individual participants discussed in this study, or for whom any identifying information or image has been presented, have freely given their informed written consent for such information and/or image to be included in the published article.
5,042.8
2016-08-02T00:00:00.000
[ "Medicine", "Physics" ]
Preliminary investigation with multiple methods of the permafrost state around a rapidly changing lake in the interior of the Tibet Plateau. Changes of the lakes in high-altitude regions of the Tibet Plateau influence the state of the surrounding permafrost. Due to the climate warming and wetting trend, extreme events including lake outbursts have occurred more frequently. In 2011, an outburst event occurred at the Zonag Lake; this event changed the water distribution in the basin, leading to a rapid expansion of the tailwater lake, named the Salt Lake. However, the construction of a drainage channel at the Salt Lake ended the expansion process, and the shrinkage of the lake started in 2020. To investigate the permafrost state around the Salt Lake, multiple methods, including the drilling of boreholes, an unmanned aerial vehicle survey and ground penetrating radar detection, have been applied. By integrating these multi-source data, the thermal regime, topography and spatial distribution of the permafrost around the Salt Lake were analyzed. The results show that the permafrost state around the Salt Lake is related to the distance from the lake water. The permafrost table appears 90 m away from the Salt Lake and is interrupted by a nearby thermokarst lake at 220 m. The ground temperature in the natural field is 0.2 °C lower than the temperature beneath the lake at a depth of −5 m. Introduction The Tibet Plateau (TP), as the major distribution area of high-altitude permafrost in the world, has experienced increased sensitivity to climate change in recent years [1][2][3]. According to the IPCC AR6, the warming rates of the Qinghai-TP and the North and South Poles are significantly higher than the global average [4,5]. Severe climate changes can trigger regional extreme events in this region, including extreme precipitation, high-temperature weather, and lake outbursts [6][7][8][9]. The occurrence of these extreme events highlights the vulnerability of the permafrost environment on the plateau and has significant impacts on engineering and human activities distributed across the plateau [10][11][12].
Various methods have been used to investigate the permafrost state on TP, including drilling boreholes, ground penetrating radar (GPR) and electrical resistivity tomography (ERT) [13][14][15].Based on the 72 boreholes drilled in Beiluhe Basin, Lin et al [16] analyzed the variability in near-surface ground temperatures in a warming permafrost region and the results showed that surface temperatures were strongly controlled by slope aspect.By collecting the GPR data together with the ground temperature, Luo et al [17] indicated that the rising of the permafrost limit in Xidatan region is due to global warming, hydrologic processes and the anthropic disturbances.Combining with the ERT survey, drilling exploration and ground temperature monitoring, You et al [18] investigated the icing near the tower foundation along the Qinghai-Tibet Power Transmission Line and concluded that freezing the soil around the tower foundation or lowering the groundwater table may mitigate the re-occurrence of ground icing.In recently years, the unmanned aerial vehicle (UAV) has been developed and was used in the field survey on the TP widely.By analyzing the images collected by the UAV, Chai et al [19] extracted the type and damage ratio of the Qinghai-Tibet Highway pavement distress.Combining the UAV with the thermal infrared remote sensing technology, Luo et al [20] evaluate the thermal dynamics of permafrost slopes along the Qinghai-Tibet Engineering Corridor and indicated that the influence of Qinghai-Tibet Highway on permafrost slopes is more pronounced than Qinghai-Tibet Railway and electric towers. Known as the 'Asia Water Tower' and the 'Third Pole' , the TP is a major lake distribution area of China.The statistical data shows that lakes on the TP accounts for 51.4% in area and 39.2% in number among all lakes in China [21].Driven by the continuous warming and wetting process, the lakes on the TP have experienced rapid changes in areas, water levels and volumes during the past decades [22].In 14 September 2011, the Zonag Lake in the interior of the TP has suffered an outburst due to heavy rainfall [23][24][25].The meteorological data showed that the precipitation reached 20.5 and 19.4 mm in 17 and 21, August, which was much higher than the daily average precipitation at 1.31 mm.This outburst event triggered the expansion of downstream lakes, including Kusai Lake, Haidingnor Lake, and Salt Lake, which in turn had significant impacts on the surrounding permafrost environment and may threaten the safety of the engineering infrastructures nearby [26].The permafrost around the Zonag Lake has become more developed due to the shrinkage of lake area [27,28].While as the Tailwater lake in the basin, it has been proved that the expansion procedure of the Salt Lake accelerated the permafrost degradation [29].Additionally, previous researches indicated that if the Salt Lake continued to expand, another lake outburst in the basin may occurred [30].To control the expansion of the Salt Lake and address the potential risks, several measures including the construction of a drainage channel has been proposed and applied [31,32].The artificial channel transformed the endorheic basin into the exorheic basin and make the Zoang-Salt Lake basin be a part of the Yangtze River. 
The previous researches have provided sufficient data about the permafrost state around the drainage lakes.In 1978, a tundra lake in western Arctic coast was drained, the ice wedge began to growth at a rate of 3 cm/a and ceased within about 12 years [33,34].In northwestern Alaska, Nitze et al [35] identified 192 lakes has been drained and predicted the intensification of permafrost degradation.Morgenstern et al [36] analyzed the thermokarst lakes in north Siberia and indicated that the drainage causes the sediment refreeze.Most previous studies focused on the permafrost state around the thermokarst lakes which are generally small.In this study, the borehole data near the Salt Lake has been collected since 2020 and a GPR line was settled from lake to the natural field.In addition, the orthomosaic image and digital surface model (DSM) were collected by the UAV.Based on these data, the permafrost state around the Salt Lake was exhibited spatially and temporally and the mechanisms of the interaction between the Salt Lake and permafrost was analyzed. Study area The study area is located in the northeastern section of Hoh Xil in the inner TP with the Kunlun Mountains located in the north.Influenced by the climate and Quaternary glacial conditions, the soil in the study area is mainly composed by gravelly sandy soil and accordingly are difficult to hold up rich vegetation.With the vegetation coverage less than 30%, Stipa grass is the dominant plant.The permafrost is well-developed in the study area (figure 1(a)).Based on the borehole drilling data, the active layer thickness is between 2 and 3 m.The permafrost thickness is between 30 m and 60 m with the temperature higher than −1 • C. Beneath the permafrost table, the ice content in study area ranges from 30% to 70%.The climate of the study area was semi-arid, with the mean annual air temperature (MAAT) ranges from −7.0 • C to −3.0 • C and the yearly precipitation in this area ranges from 150 mm to 450 mm.During the past decades, the MAAT and the precipitation has shown an increasing trend with a rate of 0.33 • C/10a and 23.4 mm/10a [37].In 2019, an artificial drainage channel was excavated in the east of the Salt Lake to constrain the lake expansion.In this study, the monitoring of the permafrost state and the field investigations was mainly focused on the edge of the Salt Lake where the artificial drainage channel was constructed (figures 1(b) and (c)). 
The GPR technique The working principle of GPR is to use antennas to transmit and receive high-frequency electromagnetic waves (EWs) in order to detect the characteristics and distribution of the material inside a geophysical object. When the EWs propagate in the underground medium, they are reflected at interfaces between materials with different electrical properties. From the differences in the waveform, amplitude and arrival time of the received EWs, the nature and distribution of the underground medium can be inferred. The common offset method is a widely utilized technique for data acquisition in GPR surveys. During the acquisition process, the transmitting antenna (T) and receiving antenna (R) move in unison along the survey line, and the propagation velocity of the EWs in the medium is

$$v = \frac{C}{\sqrt{\varepsilon}},$$

where $C$ is the velocity of EWs in vacuum (3 × 10⁸ m s⁻¹) and $\varepsilon$ is the dielectric constant of the medium. Previous research has shown that the velocity of EWs in permafrost varies with position and is influenced by the permafrost conditions, including ice content, grain size and soil moisture [39][40][41]. However, considering that the measurement error between this approximation and the average velocity is only 1-13 cm, the average velocity was used in processing the GPR data; this propagation velocity was calibrated by comparison with manual frost-probing data in the study area. In this study, the XPRT Crossover CO730 two-dimensional GPR (Impulse Radar, Sweden) was used in the field survey at the Salt Lake, Hoh Xil. The GPR system has shielded antennas with frequencies centered around 70 and 300 MHz to measure different depths at different precisions. The parameters of the data acquisition and processing procedure are shown in table 1. To guarantee the accuracy of the UAV results, an experimental flight was conducted 5 km away from the study area (figure 3(a)). At the experimental site, 30 points were positioned with the Global Navigation Satellite System Real-Time Kinematic (RTK) technique and used to correct the UAV results (figure 3(b)). The thirty RTK points in the experimental area were divided into two sets: the experimental points, coded from 1 to 15, were mainly used to calculate the altitude difference between the RTK measurements and the DSM image (figure 3(c)). The altitude difference of these points ranged from 59.78 to 51.26 m and the average difference was 50.66 m. The test points, coded from 16 to 30, were then corrected by subtracting the average difference from the DSM image of the UAV (CUAV, corrected UAV). By comparing the CUAV data with the RTK points, the difference was controlled within a range of −0.07 m to 0.5 m, with an average of 0.19 m (figure 3(d)). Based on these results, the average difference between the UAV image and the RTK points obtained from the experimental area was used to correct the DSM data in the study area.
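The two-step DSM correction just described amounts to removing a constant offset estimated on the experimental RTK points and then checking the residuals on the independent test points. The sketch below illustrates that procedure with placeholder altitudes, not the surveyed values.

```python
# Minimal sketch of the DSM correction: estimate the mean DSM-minus-RTK offset on the
# experimental points, subtract it, and evaluate residuals on the independent test points.
import numpy as np

def correct_dsm(dsm_exp, rtk_exp, dsm_test, rtk_test):
    offset = np.mean(dsm_exp - rtk_exp)                # average altitude difference
    residuals = (dsm_test - offset) - rtk_test          # CUAV minus RTK at test points
    return offset, residuals

# toy data: a DSM biased high by ~50 m plus decimetre-level noise (placeholder values)
rng = np.random.default_rng(3)
rtk_exp, rtk_test = np.full(15, 4425.0), np.full(15, 4425.0)
dsm_exp = rtk_exp + 50.7 + 0.2 * rng.standard_normal(15)
dsm_test = rtk_test + 50.7 + 0.2 * rng.standard_normal(15)
offset, res = correct_dsm(dsm_exp, rtk_exp, dsm_test, rtk_test)
print(round(offset, 2), round(res.mean(), 2), round(res.max(), 2))
```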
Ground temperature monitoring around the Salt Lake Seven boreholes with a depth of 20 m were drilled from the lake to the natural field around the Salt Lake in September 2020. Each borehole contains 30 thermistor temperature sensors (TTS), which have a working range from −40 °C to 40 °C, with a temperature resolution of 0.01 °C-0.005 °C under negative temperature conditions and of 0.01 °C-0.03 °C under positive temperature conditions. In the first 10 m beneath the surface, the interval between TTS was 0.5 m, increasing to 1 m between depths of 10 m and 20 m (figure 4(a)). All the TTS were connected to the data logger by cables buried beneath the surface (figure 4(b)). The data logger was powered by solar energy to ensure continuous monitoring, and the ground temperature was collected every 6 h (figure 4(c)). The topography around the Salt Lake obtained by the UAV By processing the images obtained from the drones, the topography around the Salt Lake is presented in the orthomosaic image (figure 5(a)) and the DSM image (figure 5(d)). The orthomosaic image shows that the surface soil in the area immediately surrounding the Salt Lake is relatively moist. As the distance from the lake increases, however, small grasses begin to appear on the ground surface, together with large amounts of white salt crystals exuding from the ground. The white salt crystals, which might have been left in the soil when the area was submerged by the lake water, decrease as the distance increases further. By using the threshold segmentation method on the green band of the UAV image, the changes of the surface soil along the Salt Lake can be visualized more clearly (figures 5(b) and (c)). The DSM image reveals the altitude information around the Salt Lake. There is an elevation difference between the Salt Lake and the diversion channel: at the junction between the Salt Lake and the diversion channel, the elevation is 4425.4 m, while at the center of the diversion channel, about 370 m away, the elevation is 4423.2 m, 2.2 m lower than the junction point. By superimposing the GPR line and the line of the ground temperature boreholes (1# to 7#, ordered from the natural field to the Salt Lake) on the orthomosaic image and the DSM image, the spatial relationship between these lines and the topography can be seen. Located to the south of the Salt Lake, a thermokarst lake lies 145 m away from the Salt Lake and 37 m away from the GPR line (figures 5(a) and (c)). Additionally, the distance between the line of ground temperature boreholes and the GPR line is about 10 m.
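The threshold segmentation on the green band mentioned above can be illustrated with a minimal sketch; the thresholds and the toy image below are assumptions for demonstration only, not the values used for figures 5(b) and (c).

```python
# Minimal sketch: classify the green band of an orthomosaic into three classes
# (e.g. moist soil, vegetated ground, bright salt crust) with two fixed thresholds.
import numpy as np

def segment_green_band(green, thresholds=(80, 160)):
    """Return an integer class map: 0 below, 1 between, 2 above the two thresholds."""
    classes = np.zeros(green.shape, dtype=np.uint8)
    classes[green >= thresholds[0]] = 1
    classes[green >= thresholds[1]] = 2
    return classes

# toy 8-bit green band with a bright (salt-crust-like) stripe on the right
rng = np.random.default_rng(4)
green = rng.integers(40, 120, size=(100, 100)).astype(np.uint8)
green[:, 60:] = rng.integers(170, 220, size=(100, 40))
seg = segment_green_band(green)
print(np.bincount(seg.ravel()))     # pixel counts per class
```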
The result of the ground temperature around the Salt Lake The ground temperature data collected since 2020 reveal the time series of the permafrost thermal regime. Located in the natural field, borehole 1# shows the changes of the ground temperature: the isothermal line of −0.5 °C started at a depth of −6 m in November 2020 and decreased to −6.3 m (figure 6(a)). The corresponding ground temperature record for borehole 2# is also presented in figure 6. The spatial distribution of permafrost around the Salt Lake The permafrost spatial distribution can be observed by the GPR. The processed GPR data at the two frequencies (70 MHz and 300 MHz) are capable of reflecting the permafrost state at different depths, from the shoreline of the Salt Lake to the natural field 290 m away (figures 5 and 7). In the 70 MHz image, ground ice is distributed within the first 50 m of the GPR line, extending to a depth of over 10 m. As the distance from the Salt Lake increases, the permafrost table appears in the image and becomes clearer and clearer. However, where the GPR line approaches the thermokarst lake, between 220 m and 250 m, the permafrost table disappears from the image (figures 5(c) and 7(a)). The 300 MHz image presents the detailed variation of the permafrost table along the Salt Lake. The permafrost table, at a depth of approximately −2.5 m, is divided into two parts: the first part extends from 90 m to 220 m and the second from 250 m to 290 m (figure 7(b)). When the GPR data are analyzed together with the topography image in figure 5(a), it is evident that the permafrost table mainly exists at a certain distance from surface water bodies such as lakes. The changes in area of the Salt Lake in recent years Climate change has significant implications for lake dynamics [42,43]. Based on satellite data, the rapid lake expansion in the endorheic basin of the TP since 2000 has been investigated, and the potential driving factors, including climate change, glacier melting and permafrost degradation, have been discussed [44]. By examining the annual changes in lake area, level, and volume from the 1970s to 2015, Zhang et al [45] indicated that the lake volume decreased slightly from the 1970s to 1995 and then increased after 1996. Between 1956 and 2007, the lakes experienced an area decline, and close to half of the area difference was caused by persistent drainage in Old Crow Flat, Alaska [43]. Located in the continuous permafrost region of the TP, the Salt Lake has also experienced area changes over different periods. Figure 8(a) shows the boundary changes of the Salt Lake since 2010. From the shorelines of the lake in different years, the lake expanded mainly in the southwest direction, and several small lakes around the original lake were swallowed up. A notable example is that a small lake located on the west side of the Salt Lake in 2019 had already become part of the lake by 2020. From 2020 to 2021, the boundary of the Salt Lake exhibited a shrinkage trend instead of the expansion of the years before. Figure 8(b) focuses on the area changes of the Salt Lake since 2000. The changes in area can be divided into five stages according to the different expansion rates. The first stage showed a slow expansion of the area between 2000 and 2010, before the outburst of the Zonag Lake. During the period of the lake outburst (stage 2, 2010-2012), however, the area of the Salt Lake expanded at a significantly more rapid rate. During stage 3, from 2012 to 2018, the Salt Lake was still in an expansion period, with a relatively slow growth rate compared to stage 2.
In stage 4, from 2018 to 2020, the growth rate accelerated and the area of the Salt Lake reached its peak. In contrast to the previous stages, stage 5 has exhibited a shrinkage of the area since 2020. These expansion rates in the different periods of the Salt Lake indicate the different impacts of climate change (stage 1), the upstream lake outburst (stage 2 to stage 4) and human intervention (the construction of the drainage channel, stage 5) on the lake area change, of which the upstream lake outburst dominated the area change over the past years. The permafrost thermal regime around the Salt Lake As an excellent heat carrier, surface water can bring a large amount of heat and cause rapid deepening of the active layer, warming of the permafrost layer and even development of taliks. By setting sensors in a thermokarst lake in the Beiluhe Basin and analyzing the heat transport process, Lin et al [46] indicated that the permafrost beneath the lake bottom is completely thawed and has changed into a through talik, and that the continuous heat absorption leads the underlying permafrost to degrade in the lake area near the shore. The thermal regime at the lake bottom along the Qinghai-Tibet Engineering Corridor has also shown that the thermal exchange process between the lake bed and the underlying soil layers can lead to the thawing of permafrost [47]. Based on the data collected by the ground temperature sensors at different depths and positions, the permafrost thermal regime around the Salt Lake is presented in figure 9. The ground temperature at −5 m indicates that the maximum ground temperature appears at the Salt Lake (7#) and the minimum temperature at the farthest natural-field borehole (1#). Due to the shrinkage of the Salt Lake, the ground temperature from 3# to 5# shows a decreasing trend, while an increasing trend is shown at 1#, 2#, 6# and 7# (figure 9). The ground temperature around the Salt Lake shows that the permafrost has experienced a warming process. However, the ground temperature at 3# to 5# at a depth of −5 m shows that the permafrost in this area has experienced a cooling process. Considering the lake area changes in recent years, it is evident that the shrinkage of the Salt Lake led to the development of permafrost in the surrounding area. The mechanisms of interaction between the Salt Lake and permafrost The variation in lake area usually affects the development of permafrost. The observed data indicate that the depth of the permafrost table around the shrinking Zonag Lake changed from −4.9 m to −5.7 m between 2012 and 2014, and the simulation results predict that the permafrost layer will keep increasing in the future [48]. By comparing the ground temperature data collected at the Zonag Lake and the Salt Lake, Liu et al [27] pointed out that the permafrost layers showed opposite states: the permafrost around the Zonag Lake developed, while degradation occurred around the Salt Lake. Before the outburst of the Zonag Lake in 2011, the area of the Salt Lake remained stable. Ground ice was distributed beneath the edge of the lake and a permafrost layer surrounded the lake (figure 10(a)). After the outburst, the Salt Lake received most of the incoming water and expanded rapidly. Based on the field observation data, the water level rose by over 8 m, at an average rate of 2.7 m yr⁻¹ [24].
Since the lake has a depth of over 25 m, the heat brought by the lake water accelerated the degradation of the surrounding permafrost and of the ground ice beneath the lake [27,49] (figure 10(b)). The construction of the drainage channel transformed the Salt Lake from an endorheic lake into an exorheic lake and resulted in the shrinkage of the Salt Lake [26,31]. As a consequence, the shrinkage exposed the previously submerged soil, and the state of the permafrost surrounding the lake has divided into two modes: the permafrost beneath the newly exposed soil is in a cooling process, while the permafrost beneath the lake water is still in a warming process (figure 10(c)). Conclusions Drainage and outburst events of thermokarst lakes have occurred widely in permafrost environments during the past decades. With continuing climate change, many large lakes situated in permafrost regions face the same risk of outburst or drainage. In this study, the Salt Lake, with an area exceeding 200 km², in the interior of the TP, was selected as an example. Due to upstream lake overtopping and human intervention, the lake has undergone rapid changes in area and has influenced the surrounding permafrost. Multiple methods, including UAV survey, GPR and borehole temperature observation, have been utilized comprehensively to investigate the permafrost state around the lake. The UAV images provide topographic information around the Salt Lake and aided in interpreting the GPR data. The GPR showed that the permafrost table appears 90 m away from the lake and is interrupted by a thermokarst lake at 220 m. In contrast to the UAV and GPR data, which mainly focus on spatial information, the borehole temperature observations reflect the permafrost thermal condition from a temporal perspective. During the observation period from October 2020 to May 2022, the permafrost originally located within the lake (3# to 5#) at a depth of −5 m showed a cooling process caused by the shrinkage of the lake. By integrating these methods, the permafrost state could be analyzed spatially and temporally. Although the utilization of multiple methods can reflect the permafrost condition from various perspectives, the limitations of these techniques, such as the offset of the UAV images, cannot be neglected. The integration and application of these technologies should be further strengthened in future research. Figure 1. The overview of the study area: (a) the distribution of permafrost on the TP and the location of the study area [37, 38]; (b) the Landsat satellite image (2021) showing the state of the Salt Lake; (c) the Sentinel satellite image (2022) showing the coverage of the UAV and the line of GPR in this study. Figure 2. The working principle of GPR: (a) the common offset method of GPR (T: transmitter antenna, R: receiver antenna); (b) the velocity of EW in different media. Figure 3. The correction of the UAV images: (a) the distance between the study area and the experimental area; (b) the 30 RTK points divided into experimental points and test points; (c) the difference between the UAV image and the experimental RTK points; (d) the difference between the corrected UAV (CUAV) image and the test RTK points. Figure 4. The ground temperature profile around the Salt Lake: (a) the placement of 30 TTS in each borehole; (b) the ground temperature boreholes connected to the data logger by cables (photograph captured on 27 June 2022); (c) the data logger collecting the ground temperature data.
Figure 5. The topography around the Salt Lake obtained by the UAV: (a) the threshold segmentation image around the Salt Lake; (b) the threshold segmentation image of the natural field; (d) the DSM image around the Salt Lake. Figure 7. The processed GPR images at different frequencies around the Salt Lake: (a) 70 MHz; (b) 300 MHz. Figure 8. The area changes of the Salt Lake: (a) the boundary changes since 2010; (b) the annual area of the Salt Lake since 2000. Figure 10. The interaction between the changes of the Salt Lake and permafrost: (a) the original state; (b) the permafrost state when the lake was in an expansion period; (c) the permafrost state when the lake was in a shrinkage period. Table 1. The parameters of the GPR. Table 2. The parameters of the UAV in the flight and the processing periods.
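As a supplement to the GPR processing summarized in table 1, the velocity-depth conversion underlying the radargram interpretation can be sketched as follows; the dielectric constant and two-way travel time used here are illustrative assumptions, not the calibrated field values.

```python
# Minimal sketch: EW velocity v = C / sqrt(epsilon), and reflector depth d = v * t / 2
# for a two-way travel time t (illustrative numbers only).
import numpy as np

C = 3.0e8                       # EW velocity in vacuum, m/s

def ew_velocity(epsilon):
    return C / np.sqrt(epsilon)

def depth_from_twt(twt_ns, epsilon):
    v = ew_velocity(epsilon)                    # m/s
    return v * (twt_ns * 1e-9) / 2.0            # metres

# e.g. an assumed frozen-ground dielectric constant of 5 and a 40 ns reflection
print(round(ew_velocity(5.0) / 1e8, 2), "x10^8 m/s")
print(round(depth_from_twt(40.0, 5.0), 2), "m")   # ~2.7 m, of the order of the permafrost-table depth
```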
5,629.4
2023-10-05T00:00:00.000
[ "Environmental Science", "Geology" ]
Decoding the Dopamine Signal in Macaque Prefrontal Cortex: A Simulation Study Using the Cx3Dp Simulator Dopamine transmission in the prefrontal cortex plays an important role in reward based learning, working memory and attention. Dopamine is thought to be released non-synaptically into the extracellular space and to reach distant receptors through diffusion. This simulation study examines how the dopamine signal might be decoded by the recipient neuron. The simulation was based on parameters from the literature and on our own quantified, structural data from macaque prefrontal area 10. The change in extracellular dopamine concentration was estimated at different distances from release sites and related to the affinity of the dopamine receptors. Due to the sparse and random distribution of release sites, a transient heterogeneous pattern of dopamine concentration emerges. Our simulation predicts, however, that at any point in the simulation volume there is sufficient dopamine to bind and activate high-affinity dopamine receptors. We propose that dopamine is broadcast to its distant receptors and any change from the local baseline concentration might be decoded by a transient change in the binding probability of dopamine receptors. Dopamine could thus provide a graduated ‘teaching’ signal to reinforce concurrently active synapses and cell assemblies. In conditions of highly reduced or highly elevated dopamine levels the simulations predict that relative changes in the dopamine signal can no longer be decoded, which might explain why cognitive deficits are observed in patients with Parkinson’s disease, or induced through drugs blocking dopamine reuptake. Introduction The dopamine signal in the prefrontal cortex (PFC) is crucial for working memory as well as for reinforcement learning [1][2][3]. Manipulations of the dopamine level in monkey PFC, by blocking the dopamine receptors or increasing the dopamine levels, have revealed that there is an optimal level for the performance of cognitive tasks involving working memory. Too high or too low levels of dopamine in PFC are detrimental for cognitive performances of monkeys [4][5][6]. Anatomical studies of dopaminergic projections from the midbrain to striatal and cortical areas suggest that dopamine is not released at synapses, but is released into the extracellular space so allowing it to bind to receptors remote from the release sites [7,8]. Thus a critical aspect of dopamine transmission is the spatiotemporal change of the extracellular dopamine concentration in relation to the firing activity of the midbrain dopaminergic neurons. Differences in the expression level of the dopamine re-uptake transporter and the density of dopaminergic axons innervating subcortical or cortical areas, predict a different spatiotemporal dynamic between those target areas. Models of the dopamine signal in the densely innervated striatum indicate that the timing of dopamine release in relation to glutamatergic synaptic activity can provide the selectivity of dopamine as a reinforcement signal [9,10]. Consistent with these predictions, a bidirectional interaction of activated NMDA-and dopamine D1 receptor has been observed experimentally. When dopamine acts at the D1 receptors, it increases the NMDA conductance. The activated NMDA receptors in turn increase the number of available D1 receptors on the membrane of spines [11][12][13]. 
This reciprocal interaction has further been proposed to underlie the modulatory effect of dopamine on synaptic plasticity and learning in neuronal networks [14]. To simulate dopamine transmission in PFC, we used parameters taken from published data and from our own measurements of immunohistochemically labelled dopaminergic axons of layer 3 in prefrontal area 10 [15]. Prefrontal area 10 (frontopolar cortex) is active during highly complex cognitive tasks [16,17]. The simulations reveal that even at baseline firing levels there is enough dopamine to bind to receptors anywhere in the neuropil. We provide a working hypothesis of how changes in dopamine concentration relative to the baseline level could serve as a relative teaching signal for working memory representation and reward-related learning. Tonic Firing to Steady State Level At a tonic background firing of 5.6 Hz and a re-uptake constant of 1.5 s⁻¹, the average dopamine concentration reached at steady state was 26 nM (Std: 10.5 nM, Range: 9.5 nM-250 nM). The steady state level lies in the measured range of dopamine concentration in monkey prefrontal cortex [18]. Figure 1A shows the simulation volume with all randomly located release sites. Due to the sparse distribution of dopamine release sites, the concentration varies by a factor of 25. Figure 1B shows a 2-D cross-section through the cubic volume at steady state. The local heterogeneity of dopamine concentration is also shown in Figure 1C, where the dopamine concentrations are plotted along an arbitrary line drawn through the volume. The dopamine concentration at steady state varies from 9.5 nM up to 250 nM (boxplot, Figure 1D), which reveals that at the baseline levels maintained during tonic release, the dopamine receptors in their high affinity state are activated everywhere in the volume. Change after Bursts and Depressed Activity To simulate the change in activity after an unexpected or omitted reward, we examined three different simulation conditions: an increase in firing up to 15 Hz for 150 ms followed by a pause of 0 Hz for 150 ms, an increased firing up to 26 Hz for 150 ms, and a depressed activity of 0 Hz for 150 ms. As the number of dopaminergic neurons projecting to PFC with synchronous, phasic firing is not known, we varied the proportion of release sites responding to phasic activation between 10%, 25% and 50%. All the other release sites maintained tonic release. The measured values of all simulation conditions are summarized in Table 1. Dopamine Concentration at Specific Distances to Release Sites The change in dopamine concentration over time was measured at three different measurement points: 1, 2 and 5 μm (red, yellow resp. green line) away from every release site with phasic activity. The mean value (solid lines) with its standard deviation (dashed lines) is plotted over time for the respective simulation protocols in Figure 2. The graphs show the amount of dopamine available for the receptors to bind at different distances to a release site. Thus, the simulation predicts that even 5 μm away from release sites with phasic activity, there is a considerable transient change in dopamine concentration after a change in firing frequency. Local Heterogeneity When we look at the changes in dopamine concentration along a sample line in the simulation volume (Figure 3), we see the local variability in dopamine level as a result of all the release sites with different activity at various distances.
The transient local change in dopamine level is variable and not necessarily most pronounced at the sites with the highest absolute dopamine level. For every simulation condition (with 50% synchronised release), the change of dopamine concentration over the entire volume is plotted in Figure 4 (resolution of measurement 1 μm³). The relative change in dopamine level varies strongly across the volume. For example, the dopamine concentration increases by 14.4% on average over the entire volume. Very close to a release site with phasic activity the dopamine concentration reaches more than double its baseline level, and yet approximately 5% of the volume has a relative increase of less than 1% of its baseline level (see Table 1). In summary, the tonic and phasic release of dopamine results in a heterogeneous pattern with local 'hot-spots' of increased dopamine levels, yet all the protocols result in values above the effective concentration required to bind D1 receptors in their high affinity state everywhere in the neuropil. Altered Dopamine Release or Uptake Highly reduced density of dopamine release sites. In primate models of Parkinson's disease, dopamine depletion is typically induced by treatment with the selective neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). It has been reported that the dopamine level and innervation density in prefrontal areas of MPTP-treated monkeys are reduced by 40-74% [19,20]. Our unpublished observations of reduced dopamine innervation in area 10 of MPTP-treated monkeys indicate that MPTP effects on the density of release sites can be even more severe. We assumed a depletion of 70% of release sites in our model, which resulted in a steady state level of dopamine concentration of 8 nM, assuming the innervation density decreased to 0.61 × 10⁻⁴ release sites/μm³. A simulation condition with phasic firing of 15 Hz for 150 ms and depressed activity of 0 Hz for 150 ms was simulated. Figure 5 summarizes the dopamine concentration in the Parkinsonian-like condition. The dopamine level is very low: during tonic firing 42% of the volume receives less than 5 nM and only 22% of the volume reaches more than 10 nM (Figure 5A-D). If the effective concentration for the D1 dopamine receptors in the PFC is 10 nM, this would mean that most of the volume does not receive sufficient dopamine to activate the high affinity receptors (Figure 5D). The mean absolute change in dopamine concentration after phasic activity is only 1.4 nM (Figure 5E and Table 1) and, though the relative change is on average almost 10% (Figure 5F and Table 1), as the steady state level is very low, the dopamine concentration in the entire volume stays low even after phasic release (Figure 5D). The difference in extracellular dopamine concentration between the two conditions, with a normal or a highly reduced number of release sites, is shown in Figure 6. The histogram in Figure 6A shows the distribution of the local concentration (without outlier values) at steady state, during tonic release, for the two conditions. Sampling from a random line within the volume also shows the decrease in extracellular dopamine concentration in comparison to the normal condition (Figure 6B). Drug-induced block of dopamine re-uptake. The dopamine re-uptake transporter is blocked by drugs such as amphetamine, methylphenidate (Ritalin) or benzoylmethylecgonine (cocaine) [21], and we tested in our simulation volume a condition with a completely blocked re-uptake of dopamine.
Our simulation does not consider the biochemical and pharmaceutical dynamics of the transporters and drugs, and we are aware that drugs do not lead to a complete block of the transporters. However, the simulation can be refined by adjusting the depressed uptake dynamics. If the re-uptake is blocked completely and the tonic release activity is kept constant, the dopamine concentration increases on average by 39 nM per second and would reach μM levels after 20-25 s (Figure 7). The change in local dopamine concentration is plotted at the three measurement points: 1, 2 and 5 μm. Table 1. The absolute values of the dopamine concentration at steady state (tonic firing) and after phasic firing (t = 150 ms), the absolute difference in dopamine concentration between tonic and phasic firing, and the relative difference in dopamine concentration after phasic release, listed as mean with standard deviation (std). Discussion By simulating dopamine diffusion in prefrontal area 10 we could show that non-synaptic release of dopamine leads to local heterogeneity of dopamine concentration due to the low density of the release sites. During the phasic release of dopamine the 'hot-spots' enlarge and the overall dopamine level increases. Our results address important questions about how dopamine neurons encode their signal, how this signal is decoded by the recipient neurons in the PFC and, also, by what means cell assemblies and individual synapses are modulated. Sparse Innervation and Local 'Hot-spots' of Increased Dopamine Levels In contrast to the striatum, where any given structure of the neuropil lies within a radius of 1 μm of a dopaminergic bouton [22], the nearest neighbour distance between potential dopamine release sites in area 10 of macaque PFC is on average 9 μm [15]. This sparse innervation in primate neocortex raises the question of whether there is a restricted sphere of neuropil within which the dopamine concentration is effective and, if so, whether the composition of the neuropil within this sphere is special or different from the neuropil distant from a release site. The effective radius for dopamine to bind to the high affinity receptor after a single release event in the striatum has been predicted to be around 7-8 μm [23] or even more than 20 μm [24]. Based on the tonic and phasic activity of several release sites, we examined the local variation of dopamine concentration within the extracellular space. In line with the prediction of the effective radius after a single release event, our simulations showed that the tonic activity of all the release sites generates a dopamine concentration high enough to activate all the dopamine receptors in their high affinity state anywhere in the neuropil. Even though there are 'hot-spots' of high dopamine concentration centred on release sites, the dopamine signal is broadcast with sufficient concentration to all sites of the neuropil. There are no regions where the dopamine concentration is too low to be effective.
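The figures quoted in this section are mutually consistent, as a simple well-mixed balance shows. The sketch below is a back-of-the-envelope cross-check, not the Cx3Dp diffusion model, and assumes a constant volume-averaged release flux.

```python
# Well-mixed sanity check: dC/dt = R - k*C gives C_ss = R/k at steady state, and C(t) = R*t
# when re-uptake is blocked (k = 0). R is back-calculated from the reported values.
k = 1.5                         # re-uptake constant, s^-1
C_ss = 26.0                     # reported volume-averaged steady state, nM
R = k * C_ss                    # implied mean release flux, nM/s
print(R)                        # 39.0 nM/s, the rise rate reported when re-uptake is blocked
t_to_1uM = 1000.0 / R           # time to accumulate ~1 uM with blocked re-uptake
print(round(t_to_1uM, 1), "s")  # ~25.6 s, consistent with the 20-25 s quoted above
```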
Micromolar concentrations of dopamine would have the consequence that the receptors would be bound to dopamine even in their resting conformation (low affinity state). The local variation of dopamine concentration in our cortical simulation stays below micromolar levels, which implies that the receptors in their low affinity state never bind dopamine, but in their high affinity state do bind with a variable probability according to a decrease or increase in local dopamine concentration. Working Hypothesis of Decoding the Dopamine Signal From the predictions of our simulation, we propose it is not the absolute value of dopamine, but the relative change of dopamine level that is detected against the baseline level of dopamine maintained by the tonic firing. Given that during tonic firing dopamine reaches a sufficient concentration to be bound by receptors in their high-affinity state, an increase or decrease in the concentration of dopamine principally affects the binding probability of the dopamine receptors and eventually the number of activated dopamine receptors. The crucial signal for the recipient site is hence not the absolute level, but whether there is more or less dopamine available relative to the local baseline level. Physiological recording of dopaminergic neurons indicate that they have three firing patterns: tonic firing as background activity, bursts of firing after the occurrence of an unexpected reward, and depressed activity of the neurons after an omission of an expected reward [25]. The prediction error associated with the firing pattern of dopaminergic neurons would be reflected in the change in dopamine concentration from its baseline level. A phasic increase or decrease of extracellular dopamine due to phasic firing or absence of release would change the binding probability of the available receptors, and thus the number of activated dopamine receptors. In this way the dopamine signal can be encoded and decoded in a graduated manner in relation to its baseline signal. Dynamic Balance between the Affinity States The two-state model of G-protein coupled receptors proposes that there is a dynamic balance between the high and low affinity state [26]. The equilibrium constant between the conformational states can alter the probability of a receptor being in the high or low affinity state. Richfield and colleagues [27] described the rodent striatum as having 80% of the D1R in their low affinity state and only 20% in their high affinity state. We propose that the relative proportion of dopamine receptors in their high or low affinity states is locally adjusted in order to account for the local heterogeneity of dopamine concentration and thereby to ensure an optimal detection of any change relative to the steady state level at any site in the neuropil. There are several other ways to regulate the total number of receptors: more receptors recruited on the membrane, less inactivation of receptors, or restricted lateral diffusion of receptors within the membrane. This suggests a high degree of flexibility on the part of the recipient synapse to adapt to the prevailing dopamine concentration by regulating the number of dopamine receptors in the high-affinity state. Condition of Altered Dopamine Signalling Our simulation of the Parkinsonian-like condition predicts that when only 30% of release sites are active, a detectable and effective dopamine concentration is provided only for receptors close to the release sites and not for the entire volume. 
On the other hand, when drugs block re-uptake the resultant increase in dopamine level to micromolar concentrations means that dopamine receptors can be occupied in their low affinity state. The dopamine receptors would then be bound to their ligand in both affinity states and no transient change in dopamine concentration would be detected as a change in binding probability of the dopamine receptors. For both very low numbers of release sites, or blocked re-uptake, a graduated dopamine signal would no longer be possible. Our simulation thus explains the inverted U-shaped dose-dependency of dopamine efficacy [5]. This relationship has direct relevance for the cognitive impairments, especially in working memory, observed in Parkinson's disease and with drugs that block dopamine re-uptake. Combination of Activity State and Change in Dopamine Level Ensures Specific Action A reciprocal interaction has been described between the NMDA and D1 receptors [11,13,28]. NMDA receptor activation increases the recruitment of D1 receptors and thus the sensitivity to dopamine. On the other hand, D1 receptor activation enhances the excitatory postsynaptic current mediated by the NMDA receptors. There is thus a bidirectional interaction between the two receptors and reciprocal signal enhancement. This synergy between the dopamine D1 receptor and NMDA receptor is hypothesized to be an important mechanism for synaptic plasticity [14,29]. Put in relation to the results of our simulation, this observation suggests a mechanism whereby the broadcast dopamine signal can nevertheless act specifically. The simulation shows that dopamine can affect any synapse in the neuropil; however, the reciprocal interaction between activated NMDA and D1 receptors implies an enhanced action of dopamine only on glutamatergic synapses that are coincidently active during increased dopamine release. Bursts of firing are associated with an unexpected reward and the increase in dopamine would thus enhance currently active cell assemblies. The combination of the activity state of the recipient synapse and the change in dopamine concentration could give rise to the specific action of dopamine for synaptic plasticity. With this view the role of dopamine as a teaching signal is optimally provided by a non-synaptic signal, broadcast to all synapses in the neuropil and yet acting specifically due to dopamine receptors and their interactions. The Role of Dopamine Receptors in Signal Transmission In the current study we were primarily interested in examining the spatiotemporal dynamics of the dopamine signal in the prefrontal cortex, where the dopaminergic innervation is very sparse. However, these investigations could be extended within the simulation framework of Cx3Dp to include detailed neuronal structures with synaptic connectivity into the simulation volume, which would allow an exploration of the dynamics as well as the plasticity of cell assemblies in relation to a change in dopamine concentration associated with a 'teaching' signal. Our simulations could also be extended to refine theoretical models of the effect of the dopamine signal on working memory representations. We propose a central role of the dopamine receptors in the transmission of dopamine: dopamine is provided to all synapses in the volume and acts on all structures in the neuropil that express the receptors necessary to receive the signal. Active synapses in a given cell assembly could thus be modulated selectively.
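The coincidence argument above can be phrased as a three-factor update rule: plasticity requires joint pre- and postsynaptic activity (standing in for NMDA-receptor coincidence detection) and a dopamine level elevated relative to the tonic baseline (standing in for D1 activation). The snippet below is a purely conceptual sketch of that reading, not a model used in the paper; all numbers are illustrative.

```python
def three_factor_update(w, pre, post, da, da_baseline, lr=0.01):
    """Weight change gated by coincident activity (pre*post) and by the relative
    change of dopamine with respect to its tonic baseline."""
    return w + lr * pre * post * (da - da_baseline) / da_baseline

w0, baseline, phasic = 0.5, 60e-9, 200e-9   # illustrative tonic and phasic dopamine levels

print(three_factor_update(w0, pre=1.0, post=1.0, da=phasic,   da_baseline=baseline))  # potentiated
print(three_factor_update(w0, pre=0.0, post=1.0, da=phasic,   da_baseline=baseline))  # unchanged
print(three_factor_update(w0, pre=1.0, post=1.0, da=baseline, da_baseline=baseline))  # unchanged
```

Only the synapse that is active while the local dopamine level is elevated above baseline is strengthened, which is the sense in which a broadcast signal can act as a specific teaching signal.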
[Figure 4 caption (panels A-C): The relative increase of the dopamine concentration after phasic activity is plotted for the different protocols as percentage of steady state. doi:10.1371/journal.pone.0071615.g004] Cx3Dp, the Simulation Tool The simulation of the diffusion was performed using Cx3Dp, a simulation framework that provides a three-dimensional space to study biological processes with physical rules (http://www.ini.uzh.ch/amw/seco/cx3d/). It was originally developed to model cortical growth and development [30] but was well suited to the present study. Diffusion of chemical factors is a basic function integrated into Cx3Dp and is applied to different biological growth processes. For the simulation of dopamine diffusion in cortex it was necessary only to adjust the diffusion process with specific parameters for uptake and release of dopamine. The Simulation Volume The diffusion of dopamine is simulated in a cubic space representing an idealized block of cortical area 10, layer 3, with a volume of about 2.6 × 10⁵ µm³ (64 µm on each side). In Cx3Dp, the space is divided into discrete compartments (1 µm³). Diffusion has been implemented with Fick's law as a first-order Euler step function with finite element methods across the compartments. The detailed composition of the neuropil was approximated by assuming the extracellular fraction of the volume was 23% with a tortuosity of 1.6 [31]. The Release Sites The density of release sites was measured on immunohistochemically stained tissue of monkey area 10 [15]. As these axons only very rarely formed synapses, putative release sites were defined as regions along the axon where vesicles clustered. These clusters were located principally in boutons. A density of 0.0018 vesicle-filled profiles/µm² was determined from ultramicrographs in layer 3. Considering an average bouton diameter of 0.9 µm, a 3-D density of 0.0002/µm³ was obtained. This volume thus contained 52 randomly located release sites with an average nearest neighbour distance of 9 µm. As the arbor of a single dopaminergic axon projecting to the cortex has not yet been characterised and the average interbouton distance or bouton density on a single dopamine axon is not known, we assumed release occurred independently at all sites. Dopaminergic cells have a tonic background firing rate of 5.3 ± 1.5 Hz [32]. Phasic firing of 15-26 Hz and depressed firing of 0 Hz (pause) were observed after an unexpected reward or the omission of a reward [3,33]. This was captured in the simulation by phasic activity of 10, 25 or 50% of release sites in synchrony (Table 2). The remaining release sites remained at the tonic firing level. No measurement of the release probability of dopamine in primate cortical areas was available in the literature at the time we performed our simulation, so we assumed a release probability of 0.5. Very recently, however, the release probability of dopaminergic neurons of mouse VTA has been reported to be on average 0.25, with a range of 0 ± 0.02 to 0.73 ± 0.01 [34]. Our assumed value was thus within their measured range, though higher than their average measure. The Uptake Sites Immunohistochemical studies of dopaminergic receptors and transporters indicated that they are expressed distant to putative release sites or synapses [7,35-37]. The low expression level of dopamine transporters and comparisons of the dopamine signal between striatum and cortex led to the hypothesis that the uptake rate in the PFC was ten times slower than in the striatum [36].
The striatal uptake rate has been described to be 20 s⁻¹ [23] and we set the cortical uptake rate to 1.5 s⁻¹. For the purpose of the simulation the dopamine receptors and transporters were uniformly distributed in the space, and thus uptake was modelled to occur uniformly throughout the volume. We were interested in the activation of the dopamine receptors D1 and D2 in relation to the dopamine concentration during tonic background activity and after the transient phasic release. D1 and D2 of the striatum are thought to be in high or low affinity states [27]. The affinity states of the G-protein coupled dopamine receptors change with their conformational states and have values in the low nanomolar range for the high affinity state (effective conformation) and a few micromolar for the low affinity state (resting conformation) [38]. There are no measurements of the affinity states and their dynamic balance between the effective and resting state for the dopamine receptors of the primate prefrontal cortex, so we used the same values as assumed for the striatum in our cortical model. All numeric values of the applied parameters and the corresponding references are given in the table below (Table 3).
[Table 2 caption: All the simulations had a tonic firing of 5.6 Hz and release probability of 0.5 during the first 4 s. For the following phasic activity, different firing rates were applied. For every condition A-D, 10, 25 or 50% of the release sites showed phasic or depressed activity. doi:10.1371/journal.pone.0071615.t002]
Table 3. List of parameters applied in the simulation of dopamine diffusion and their references.
Density of potential release sites: 2/10000 µm³ [15]
Volume of simulation: 64 × 64 × 64 µm³
α, extracellular space fraction: 0.23 [31]
Vesicle volume: 6.5 × 10⁻²⁰ l [39], [23]
Cv: [23,42]
D1 receptor affinity: <10 nM (high affinity), 1-2 µM (low affinity); 70% in low affinity state [23,27,43]
D2 receptor affinity: <10 nM (high affinity), 1-5 µM (low affinity); 20% in low affinity state [23,27,43]
doi:10.1371/journal.pone.0071615.t003
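The release-diffusion-uptake scheme described in this section can be condensed into a short explicit-Euler sketch. This is an illustration only, not the Cx3Dp implementation: the free diffusion coefficient of dopamine and the vesicular dopamine content are assumptions (they are not given in the text above), boundaries are taken as periodic, and the per-vesicle concentration step is derived from the quoted vesicle volume together with an assumed 0.5 M vesicular concentration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters from the Methods/Table 3 where stated; values marked "assumed" are not from the text.
N, dx      = 64, 1.0                 # 64 x 64 x 64 grid of 1 um^3 compartments
alpha      = 0.23                    # extracellular volume fraction
tortuosity = 1.6
D_free     = 760.0                   # um^2/s, free dopamine diffusion coefficient (assumed)
D_eff      = D_free / tortuosity**2  # effective diffusion in tortuous neuropil
k_uptake   = 1.5                     # 1/s, cortical uptake rate
n_sites    = 52                      # release sites in the volume
f_tonic    = 5.6                     # Hz, tonic firing rate
p_release  = 0.5                     # release probability per spike
vesicle_l  = 6.5e-20                 # l, vesicle volume (Table 3)
c_vesicle  = 0.5                     # mol/l, vesicular dopamine concentration (assumed)
# concentration jump (in nM) when one vesicle empties into the extracellular part of a voxel
dC_vesicle = vesicle_l * c_vesicle / (alpha * 1e-15) * 1e9

dt    = 2.0e-4                       # s, below the explicit-Euler stability bound dx^2/(6*D_eff)
steps = int(1.0 / dt)                # simulate 1 s of tonic activity

C = np.zeros((N, N, N))                                 # dopamine concentration per voxel, nM
sites = tuple(rng.integers(0, N, size=(3, n_sites)))    # random release-site voxels

def laplacian(c):
    """Discrete 3-D Laplacian (Fick's law, finite differences, periodic boundaries)."""
    return sum(np.roll(c, s, axis=a) for a in range(3) for s in (-1, 1)) - 6.0 * c

for _ in range(steps):
    fired = rng.random(n_sites) < f_tonic * p_release * dt       # stochastic tonic release
    np.add.at(C, tuple(ix[fired] for ix in sites), dC_vesicle)
    C += dt * (D_eff / dx**2 * laplacian(C) - k_uptake * C)      # diffusion + uniform uptake

print(f"volume-averaged dopamine: {C.mean():.1f} nM, peak 'hot-spot': {C.max():.1f} nM")
```

With these assumptions the volume-averaged level settles in the tens of nanomolar with local 'hot-spots' around release sites, qualitatively the regime reported in the Results above.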
5,752.6
2013-08-12T00:00:00.000
[ "Biology", "Computer Science" ]
Quenching the CME via the gravitational anomaly and holography In the presence of a gravitational contribution to the chiral anomaly, the chiral magnetic effect induces an energy current proportional to the square of the temperature in equilibrium. In holography the thermal state corresponds to a black hole. We numerically study holographic quenches in which a planar shell of scalar matter falls into a black hole and raises its temperature. During the process the momentum density (energy current) is conserved. The energy current has two components, a non-dissipative one induced by the anomaly and a dissipative flow component. The dissipative component can be measured via the drag it exerts on an additional auxiliary color charge. Our results indicate strong suppression very far from equilibrium. Anomaly induced transport phenomena have been in the focus of much research in recent years [1,2]. The prime example is the so-called chiral magnetic effect (CME) [3]. The CME has a component due to the gravitational contribution to the chiral anomaly. In the presence of a magnetic field B and at temperature T, chiral fermions build up an energy current [4,5] J_ε = 32π² T² λ B. (1) In the background of gauge and gravitational fields chiral fermions have the anomaly (2). In holography the gravitational anomaly is implemented via a mixed Chern-Simons term. Space-time is curved in the additional holographic direction and this is what generates (1) from the mixed Chern-Simons term [6]. Perturbative non-renormalization has been shown in [7,8]. The relation of (1) with the gravitational contribution to the chiral anomaly has also been derived in a model-independent manner combining hydrodynamic and geometric arguments in [9,10]. An additional constraint on effective actions consistent with (1) stems from considerations based on global gravitational anomalies [11,12]. Transport signatures induced by (1) have recently been reported in the Weyl semimetal NbP [13]. So far anomaly induced transport has mostly been studied in a near-equilibrium setup in which local versions of temperature and chemical potentials can be defined. An application of anomaly induced transport is the quark gluon plasma created in heavy ion collisions [14]. There the magnetic field is extremely strong. It is also very short lived and might have already decayed before local thermal equilibrium is reached [15]. One therefore needs a better understanding of how anomalies induce transport far out of equilibrium. Holography is an extremely efficient tool to study both anomalous transport phenomena and the out-of-equilibrium dynamics of strongly coupled quantum systems.
Previous studies of anomalous transport in holographic quenches focused on the pure U(1)³ anomaly [16,17]. This motivates us to study anomalous transport induced by the gravitational anomaly in a holographic quantum quench. To develop intuition we first consider anomalous hydrodynamics [18-20] in a spatially homogeneous magnetic field. The anomalous transport effect we want to monitor is the generation of an energy current in the magnetic field (1). We start with an initial state at temperature T_0 and heat the system up to a final temperature T. In a relativistic theory the energy current J_ε^i = T^{0i} is the same as the momentum density P^i = T^{i0}. Momentum is a conserved quantity and cannot increase if it is not injected into the system. On the other hand the anomaly induced energy current does increase, and this increase must be balanced by a collective flow not included in (1). The increase in the non-dissipative anomalous energy current will be exactly counterbalanced by a dissipative contribution at any given moment. If we introduce a uniform but very light density of impurities carrying some additional charge, drag will generate a convective current proportional to the density of impurities [32]. This current measures how much energy current is generated via (1). We thus need to consider a system with two U(1) charges. The first one has the mixed gravitational anomaly (2) and carries the magnetic flux. The second one is an anomaly-free auxiliary U(1) charge that serves to monitor the build-up of the current (1) as the temperature changes. We now consider the hydrodynamics of the system. Since all spatial gradients vanish, the constitutive relations simplify; the specific form of the anomalous transport coefficients σ_B, σ_B and σ_B,X depends on the hydrodynamic frame choice [22]. In Landau frame, note that despite J_X^µ having no anomaly it does have a nontrivial chiral magnetic transport coefficient. This makes the effect due to dragging manifest. We assume that the system is neutral with respect to the anomalous charge, i.e. µ = ρ = 0. As initial condition we take J = J_X = 0 and the energy current to be given by (1). Thus the energy current is fixed at any given moment, where we work in the linear response regime, i.e. we assume λB to be a small perturbation compared to the energy density and pressure, such that u^µ ≈ (1, v). Solving for the velocity v and using the constitutive relation for the current J_X we find that it is given by equation (10). This equation determines the current built up due to drag at any given moment if the system undergoes a slow, near-equilibrium time evolution such that an instantaneous temperature T can be defined. It is independent of the choice of Landau frame. For a generic conformal field theory p = KT⁴ and ε = 3p, and we obtain the equilibrium benchmark curve (11) for the normalized current j_X. If a system undergoes near-equilibrium evolution, j_X must at any given moment lie on that curve. Deviation from (11) will be a benchmark for far-from-equilibrium behavior. We will now consider a holographic model that allows us to implement the physics described in a non-equilibrium setting. The action of our model is given in (12). In addition to gravity and a scalar field φ, it involves two gauge fields A_M and X_M with field strengths F = dA and F_X = dX. Only gauge transformations of A are anomalous. The Chern-Simons term is the holographic implementation of the mixed chiral-gravitational anomaly.
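For orientation, the mixed gauge-gravitational Chern-Simons term referred to here is conventionally written in the holographic anomalous-transport literature in the following form; the precise normalization of λ is model dependent and should be read as an assumption rather than as the paper's own definition:

S_CS = λ ∫ d⁵x √(-g) ε^{MNPQR} A_M R^A_{BNP} R^B_{AQR}

In this convention it is the curvature of the bulk space-time along the holographic direction, entering through the Riemann tensors above, that turns the term into the boundary energy current (1).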
We do not include possible pure gauge Chern-Simons terms since we want to isolate the effects of the gravitational anomaly. We set from now on 2κ 2 = 1. The dual field theory will be brought out of equilibrium by abruptly varying a coupling. These type of processes are known as quantum quenches [23]. Holographically, the leading mode of bulk fields at the AdS boundary are interpreted as a couplings for the dual field theory [24,25,30]. We will perform a quench on the coupling associated to the scalar φ . For simplicity we have chosen φ to be massless and neutral under the gauge fields A M and X M . We consider the following simple boundary profile for its leading mode Since (12) is invariant under global shifts of φ , the Hamiltonians before and after the quench will be equivalent. The energy density induced by the quench is a function of both its amplitude η and time span τ. In order to study the transport properties, it is enough to solve the equation of motion to first order both in the Chern-Simons coupling and the magnetic field. Besides, we will work in the decoupling limit of large charge q for the anomaly free gauge field. Equivalently we can work at q = 1 and treat X µ perturbatively to first order. In that case its dynamics does not backreact onto the other sectors of the theory. With these simplifications, the equations of motion reduce to where At zero order in a λ -expansion the gauge fields do not backreact on gravity and the scalar, and thus can be ignored when solving their leading dynamics. We parameterize the zeroorder solution of the metric as Time reparameterizations are used to fix δ (t, 0) = 0, such that the Minkowski metric is reproduced at the AdS boundary. Field theories at thermal equilibrium are dual to black hole backgrounds. Our initial geometry will be a Schwarzschild black hole, for which f = 1−π 4 T 4 0 z 4 and δ = 0. Around t = 0 the quench takes place, creating an extra energy density on the boundary theory and equivalently, a matter shell that enters from AdS boundary into bulk. The shell subsequently undergoes gravitational collapse and is absorbed by the original black hole. The resulting black hole represents the final equilibrium state with T >T 0 . Similar holographic setups have been extensively studied in the last years [26][27][28][29]. The gauge fields A m and X M satisfy the same equation at zero order in λ . However the different roles they should fulfill select different leading solutions. We want A M to induce a constant background magnetic field on the boundary theory and no charge density. Choosing the magnetic field to point in the x 3 direction, this is achieved by the simple solution F 12 =B with B constant throughout the bulk. Contrary we wish X M to induce a non-vanishing charge density and no boundary field strength, which leads to The expectation value of its associated current is given by The integration constant ρ X is the desired charge density. At linear order in λ , the magnetic field induces a nondiagonal component in the metric where c is an integration constant. In the initial and final state the bulk scalar field stress tensor vanishes. We fix the integration constant by demanding that the initial state reproduces (1), which implies c = 8π 2 T 2 0 . In our conventions the energy current can be read off the asymptotic metric expansion as g 03 = 1 4 T 03 z 2 + · · · [31]. Neither the scalar field and nor anomalous gauge field receive a correction at order λ . 
However the off-diagonal component of the metric backreacts on X M and generates an entry parallel to the magnetic field. It is governed by the equation Although X 3 is sourced by the Chern-Simons term even in the initial state, it leads to a vanishing J 3 X . It is the subsequent evolution what generates a boundary current parallel to the magnetic field. We also note that due to the fact that J 3 X is treated in the decoupling limit it reacts immediately to the drag. We focus first on fast quenches. These are processes whose time span is small in units of the inverse final temperature, 2τT < 1. Fig.1a shows the result of two such processes which the same initial temperature T 0 and time span τ but different final temperatures. A comparison with the benchmark curve (11), plotted in the lower inset of Fig.1a, rules out a possible near equilibrium description. In hydrodynamic process the current j X attains the maximum when the temperature reaches T m /T 0 = √ 2, at which it takes the value 1/4. For higher temperatures the equilibrium current decreases, reflecting the large inertia of a hot medium. One of the processes in Fig.1a has been tuned to reach T m , while the other generates a higher temperature T /T 0 =3. In both cases j X exhibits a maximum before stabilizing. This contradicts (11), which would predict a monotonic growth for the process whose final temperature is T m , and shows that the evolution is far from equilibrium. Moreover the current at the maximum is larger than 1/4 in the first quench (blue) and smaller in the second (red), which again disagrees with (11). When the currents of both processes are normalized to one at the final equilibrium state, their profiles in the rescaled time tT are very similar. The current overshoots around 4% its final value before attaining it, as can be seen in the upper inset of Fig.1a. The time span of a fast quench also has very small impact on the evolution of the current. This is illustrated in the inset of Fig.1b, which shows three processes with the same final temperature T /T 0 = 2.5 and different time span. It is interesting to compare the evolution of j X with that of another important observable, the energy density. The energy density builds up during the quench and attains its final value as soon as the quench ends. In terms of the holographic model the energy pumping into the system happens while the time derivative of the scalar at the AdS boundary is nonvanishing. The profile (13) is within 3% of its final value at t/τ ≈ 1.75. We observe in Fig.1 that instead j X equilibrates at tT ≈ 1.2. Hence the energy density and the current generated in fast quenches have independent equilibration timescales, τ and 1/T respectively. This provides an alternative and simple way to discard a near equilibrium evolution, since no single effective temperature can describe both observables. At the geometrical level the different equilibration timescales have further implications. While the energy density only depends on the bulk total mass, the anomaly free current must be sensitive to the interior geometry close to the emerging horizon. Consistently with this, j X turns out to mainly build up as equilibrium is approached. The body of Fig.1b describes a very fast quench with τT 0 =0.007. The purple curve gives the time profile of the scalar field at the asymptotic boundary (13). The time interval when the quench occurs has been highlighted in blue. Notably, the current is practically zero in this example even sometime after the quench has finished. 
This clearly shows that the anomaly free current does not react to the initial far from equilibrium state, but to the onset of the equilibrium. A central conclusion of our study is that anomalous transport properties related to the gravitational anomaly are very much linked to the system being at or close to thermal equilibrium. We wish to now study the transition from fast to slower quenches. Fig.2a shows the last stages in the evolution of the current for several processes with the same initial temperature and time span. For the sake of comparison we normalize the final current to one, and take the span of the quench instead of the final temperature as time unit. Processes with 2τT < 0.5 behave as explained above. When 2τT ≈ 0.65 the maximum before equilibration starts to weaken out, and it practically disappears for 2τT ≈ 1. Processes with 2τT 1 exhibit a monotonic growth of the current to its equilibrium value. We will refer to them as intermediate quenches. We have highlighted the time span of the quench, up to the moment when φ 0 is within 3% of its final value. The transition between fast and intermediate quenches happens when time scale set by the final temperature approaches that of the quench. It is important to stress that, in general, intermediate quenches do not admit either a near equilibrium description. Indeed when the final temperature is higher than T m , the absence of a transient max-imum signals out of equilibrium dynamics. For the parameters associated to Fig.2a we have 2τT m = 0.5, such that processes with final temperature below T m behave as fast quenches. 2ΤT Finally we consider slow quenches. First we consider process that are slow with respect to the final temperature and can be expected have a sizable final period of near equilibrium evolution. The hydrodynamic current described by the benchmark curve (11) decreases with rising temperature when T > T m . In this case, a final period of near-equilibrium evolution implies that j X must exhibit a transient maximum. In Fig.2b we investigate quenches with 2τT >2 for the same initial conditions as in Fig.2a. The red curve coincides in both plots for the sake of comparison. We observe that a maximum reappears with distinctive characteristic from that exhibited by fast quenches. Contrary to them the maximum of the quotient j X / j Xeq increases with the temperature, and can be attained even before the quench is half way through. It is natural that the larger τT the earlier the system enters the near-equilibrium regime, whose onset is qualitatively signaled by the maximum of the current. In the previous processes τT 0 = 0.175, such that they are slow with respect to the final temperature but fast with respect to the initial one. This constrains when the near equilibrium regime sets in and hence whether the current can reach 1/4, the maximum of the equilibrium curve (11). Keeping the same τT 0 , Fig.3a shows evolutions with very high final temperature. Since we are interested in the maximal value of the current, j X has not been rescaled as we did in Fig.2. Instead of approaching 1/4, the maximum turns out to slowly decreases with increasing final temperature. This shows that the near equilibrium regime in processes with τT 0 = 0.175 can only be associated with temperatures well above T m . In order for the complete evolution to be hydrodynamic, the quench needs to be also slow with respect to the initial temperature. We plot in Fig.3b processes with fixed T /T 0 = 2.6 and growing time span. 
The maximum of the current now tends to 1/4 as expected. We have checked that, consistently, the current builds up in a monotonic way for these slow processes whose final temperature is equal to or smaller than T_m. We have studied holographic quantum quenches of the CME induced by the gravitational anomaly. A rich phenomenology arises depending on the time scale of the quench. [Figure 3 caption. Left: Evolution of j_X in processes with τT_0 = 0.175 and very high final temperature. Right: Processes with fixed initial and final temperatures T/T_0 = 2.6 and growing τ.] If the quench is fast with respect to the initial and final temperatures, the evolution is far from equilibrium until the final exponential approach to stabilization. The current builds up late in the time evolution and slightly overshoots before it achieves its equilibrium value. In equilibrium the anomalous conductivity is proportional to the square of the temperature. The main motivation of this work was to analyze what activates the anomalous conductivity out of equilibrium, where there is no notion of temperature. It could have been governed by the energy density, which in equilibrium is also measured by the temperature. In this case the current should have reacted as soon as energy is injected into the system. Our result on fast quenches shows that this is not the case. Rather, the system has to evolve closer to equilibrium to build up the anomalous current. Intermediate quenches leave the far-from-equilibrium stage while they are reaching the final state, resulting in a monotonic growth of the current. Processes which are slow with respect to the final temperature but fast with respect to the initial one have a finite period of near-equilibrium evolution. This extends to the complete evolution for large τT_0. We set c = 8π²T_0² such that our initial current is given by (1). Furthermore we note that for this value of c, g_{03} vanishes at the horizon at the initial temperature T_0. Equation (22) for the anomaly-free gauge field reduces to a first integral with d another integration constant. Requiring X_3 to be regular at the horizon imposes d = 0 in the initial state, which turns out to yield a vanishing J_X^3. Regularity of X_3 is preserved by the time evolution. Therefore in the final state with T > T_0 we find d = 8λB π⁻² (T_0² − T²)/T⁴. This leads to the final equilibrium current. To recover equation (10), note that the energy density can be read off the expansion of the metric, when put in Fefferman-Graham coordinates, as ε = 3π⁴T⁴, and that ε = 3p.
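As a quick numerical cross-check of the near-equilibrium benchmark used throughout (a sketch, not the authors' numerics): with the final-state result d ∝ (T_0² − T²)/T⁴, the normalized equilibrium current can be written, up to an overall constant absorbed into j_X (an assumed normalization), as a function of x = T/T_0; its maximum indeed sits at x = √2 with value 1/4, as quoted in the text.

```python
import numpy as np

def j_eq(x):
    """Normalized equilibrium current vs x = T/T0, up to an overall constant."""
    return (x**2 - 1.0) / x**4

x = np.linspace(1.0, 4.0, 300001)
i = int(np.argmax(j_eq(x)))
print(f"max j_eq = {j_eq(x[i]):.4f} at T/T0 = {x[i]:.4f}")   # ~0.2500 at ~1.4142 = sqrt(2)
```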
4,842.6
2017-09-25T00:00:00.000
[ "Physics" ]
AlexNet, AdaBoost and Artificial Bee Colony Based Hybrid Model for Electricity Theft Detection in Smart Grids Electricity theft (ET) is an utmost problem for power utilities because it threatens public safety, disturbs the normal working of grid infrastructure and increases revenue losses. In the literature, many machine learning (ML), deep learning (DL) and statistical based models are introduced to detect ET. However, these models do not give optimal results due to the following reasons: curse of dimensionality, class imbalance problem, inappropriate hyper-parameter tuning of ML and DL models, etc. Keeping the aforementioned concerns in view, we introduce a hybrid DL model for the efficient detection of electricity thieves in smart grids. AlexNet is utilized to handle the curse of dimensionality issue while the final classification of energy thieves and normal consumers is performed through adaptive boosting (AdaBoost). Moreover, class imbalance problem is resolved using an undersampling technique, named as near miss. Furthermore, hyper-parameters of AdaBoost and AlexNet are tuned using artificial bee colony optimization algorithm. The real smart meters’ dataset is used to assess the efficacy of the hybrid model. The substantial amount of simulations proves that the hybrid model obtains the highest classification results as compared to its counterparts. Our proposed model obtains 88%, 86%, 84%, 85%, 78% and 91% accuracy, precision, recall, F1-score, Matthew correlation coefficient and area under the curve receiver operating characteristics, respectively. I. INTRODUCTION Electricity has a greater influence on our daily lives. Several electronic devices, communication devices and modern electric vehicles are dependent on electricity. The government and private companies distribute electricity to various enterprises, agencies and end users. However, electricity losses often occur in transmission and distribution lines. These losses are classified as technical losses (TLs) and non-technical losses (NTLs) [1], [2]. TLs happen because of short circuits in transformers, electric shocks, lethal fires, etc. Whereas, NTLs emerges because of energy theft (ET), direct hooking, unpaid bills, meter tampering, meter The associate editor coordinating the review of this manuscript and approving it for publication was Yang Li . hacking, etc. Among them, ET is a major source of NTLs. The study conducted by the northeast group in 2015 stated that the world faces loss of $89.3 billion yearly due to NTLs [3]. In Pakistan, the power utility companies lose major portion of their total energy supply per year due to ET. The line losses of Peshawar electric supply company (PESCO) are decreasing every year by 2%, but in the year 2018-19, these losses increased up to 36.2% [4]. Moreover, these losses are not limited to underdeveloped countries but they also affect the economy of developed countries as well. According to a report, the United States and the United Kingdom accounted for loss of $10.5 and £175 billion per annum because of ET, respectively [5]. Advanced meter infrastructure (AMI) is a new concept, which is introduced in traditional power grids. It consists of smart meters, sensors, computing devices and modern communication technologies, which are used to perform two-way communication between consumers and utilities. AMI is also responsible to collect data about electricity consumption (EC), prices at current time and the status of power grids. 
The involvement of the Internet in AMI opens numerous ways for attackers to remotely hack the smart metering system and steal electricity. Due to the above-mentioned issue, the detection of NTLs is an important need for the current era. In this regard, the researchers introduced various techniques to deal with NTLs [6]. These techniques are listed as follows. 1. State based: In state based techniques, various hardware components, such as sensors and transformers are integrated with smart meters to detect NTLs. These techniques perform well, however, extra monetary cost is required to maintain the embedded devices [7]. 2. Game theory: In game theory based techniques, two players are involved: power utility and electricity thief. Both players play a game with each other for achieving the equilibrium state. These techniques are not feasible because formulating a suitable utility function is a tedious task [8]. 3. Data driven: These techniques get the attention of the research community because they only require data for their training. The integration of smart meters produces a substantial amount of data. Therefore, the researchers propose various data driven techniques to perform electricity theft detection (ETD) in power grids. These techniques belong to machine learning (ML), meta learning, ensemble learning and deep learning (DL) [9]. In this era, these techniques become popular because they only demand data for training. The solutions proposed in the existing work of ETD have the following limitations: (i) In ETD, the imbalance data is one of the main problems in which the records of one class is more than the second class. It raises the problem of poor generalization and overfitting because the classification model skews towards the larger class and ignores the smaller class. (ii) Due to the massive volume of EC data, the diversity and dimensionality of data are increased to a greater extent. The high dimensional data mostly misleads classifiers during the accurate ETD. (iii) The presence of several non-malicious factors increases the misclassification rate of supervised learning models. These factors also become the reason of high false positive rate (FPR). The classification model gets confused and mistakenly classifies the normal consumers as abnormal. The major contributions of this study are listed below. • An AlexNet model is employed to perform feature extraction, which selects the most suitable patterns from the high dimension EC data. • The class imbalance problem is resolved using an undersampling technique, named as near miss (NM). • An ensemble learning adaptive boosting (AdaBoost) model is exploited to perform the classification of malicious and non-malicious energy consumers. • The hyper-parameter tuning of the proposed ETD model is done using artificial bee colony (ABC), which belongs to the family of the meta heuristic algorithms. • The effectiveness of the proposed model is checked using suitable indicators, such as precision, accuracy, recall, F1-score, Matthew correlation coefficient (MCC) and area under the curve receiver operating characteristics (AUC-ROC). The remaining manuscript is organized as follows: Section II describes the related work while Section III explains the identified problems. The proposed system model is discussed in Section IV. Section V presents the simulation results and the conclusion is presented in Section VI. II. 
RELATED WORK This section presents the explanation of the existing tools and techniques, which are already proposed in the literature to handle NTLs. These techniques are mostly based on data driven solutions. In [10], the authors employ convolutional neural network (CNN) for performing ETD. The effectiveness of the model is checked using suitable undersampling and oversampling techniques. The authors work on ensemble approaches to differentiate between normal and malicious samples. The main purpose of their proposed model is to reduce bias and variance. In [11], the authors introduce a CNN, gated recurrent unit and particle swarm optimization (CNN-GRU-PSO) based hybrid model to detect NTLs. CNN-GRU extracts abstract and temporal features from the EC data while PSO is leveraged for hyper-parameters' optimization. The underline focus of the proposed solution is to reduce the high FPR because electric utilities have a low budget for onsite inspection. The authors introduce a binary black hole algorithm (BHA) for handling the curse of dimensionality issue [12]. BHA performs well as compared to other metaheuristic techniques. The BHA algorithm is a novel and evolutionary algorithm in which multiple stars participate and the star having the highest fitness value is selected for black hole. In the proposed study, the stars represent features. The features that have a large influence on the detection accuracy are selected and the remaining ones are discarded. In [13], the authors develop a hybrid DL model to detect NTLs in power grids. The hybrid model is the combination of GRU and GoogleNet. Both 1D daily and 2D weekly EC data are used to train the model. Moreover, the simulations confirm that the hybrid model performs better than the stateof-the-art models. The authors of [14] employ long shortterm memory (LSTM) for NTL detection in the EC data. LSTM captures the long-term tendency from the EC data and avoids the effects of non-malicious factors. Moreover, random undersampling boosting (RUSBoost) is used for data balancing and classification. The RUSBoost model first performs under sampling of majority class and then applies an ensemble boosting strategy to perform final classification of energy thieves. Similarly, the authors of [15] utilize a combined DL-boosting model for efficient detection of NTLs. In the proposed work, feature extraction is carried out through the DL model and final classification is performed using a boosting model. In [16], the authors develop a combined ETD model, which integrates both CNN and random forest (RF) for accurate results. The CNN model performs efficient extraction of potential features while RF is utilized for the classification purpose. In [17], a hybrid DL model is designed, which uses the advantages of both LSTM and CNN models. The proposed hybrid model efficiently extracts temporal correlations and hidden patterns from the consumers' profiles and performs ETD accordingly. In [18], a framework is developed by combining maximum overlap discrete wavelet packet transform (MODWPT) and RusBoost algorithms. The former is employed to capture time-related patterns from the EC data. While latter is utilized to differentiate between normal and malicious EC patterns. In [19], bidirectional GRU (BiGRU) and kernel principal component analysis (KPCA) are exploited to identify fraudulent electricity consumers. In the proposed study, BiGRU is utilized for the classification purpose and KPCA is leveraged to perform feature extraction. 
The authors of [20] propose a deep neural network, which combines an ensemble technique and a DL model. The DL model initially resolves the high dimensionality problem and then selects the relevant prominent features. Afterwards, the selected features are passed to the ensemble model for the final detection of fraudulent electricity consumers. The ETD results prove the efficacy of the proposed model over the compared models. In [21], the authors introduce a clustering based deep model for detecting energy thieves in power grids. The proposed technique utilizes density based clustering approach and builds different clusters, which contain similar EC records. The EC samples which do not lie in any cluster are declared as energy thieves. Similarly, in [22], an ETD model is designed with the help of gradient boosting models. The boosting model increases the ETD accuracy to a greater extent. Moreover, the class imbalance problem is resolved by introducing synthetic theft attacks. The authors claim that the motive of electricity theft is to consume less energy as compared to the normal consumers. Furthermore, supervised learning methods require labeled data to learn patterns from the EC data. However, it is a labor intensive and challenging task to collect labeled data. Due to this reason, unsupervised learning models like auto encoders attract the attention of research community. In [23], the authors introduce a stacked sparse denoising autoencoder (SDAE) for the accurate detection of fraudulent electricity consumers. The results confirm that the 'unsupervised learning models perform well as compared to the supervised learning models in case of less availability of labeled data. Moreover, they introduce some noise and sparsity in the regular autoencoder to enhance its robustness against intelligent attacks. Furthermore, a meta-heuristic technique, termed as PSO, is used to find optimal hyper-parameters of SDAE. By doing this, the performance of SDAE is improved to a greater extent. In [24], the authors propose a novel technique to detect malicious patterns in the EC data. The proposed model is developed for smart homes where statistical and machine learning techniques are integrated to detect the anomalous EC patterns. Similarly, the authors of [25] design a robust ETD model to detect the following types of attacks: zero days, shunt and double tapping. In the proposed model, a pattern recognition technique is used for capturing the fraudulent EC patterns. In [26], the authors develop a novel technique for ETD by combining a clustering method with a statistical technique to explore the EC profiles of consumers. In the proposed work, the correlation among EC patterns is identified by maximum information coefficient method. Meanwhile, anomalous electricity consumers are detected by the density based clustering method. In [27], a new ensemble method is introduced to identify the anomalous patterns from the EC data. The proposed method efficiently detects real time anomaly from the EC data and overcomes the high FPR accordingly. In [28], a finite mixture model is leveraged for the soft clustering while GA is employed for the parameter tuning. The results confirm that the proposed work is an efficient solution for ETD. III. PROBLEM ANALYSIS Electricity thefts are harmful to the power grids. They not only increase the revenue losses, but also generate fatal electricity shocks. 
In the past, electricity thefts were captured through onsite field inspection, which is a labor intensive and financially expensive task. To overcome these problems, data driven based ML and DL techniques are employed for ETD. However, these techniques face the problems of overfitting, curse of dimensionality and imbalanced proportion of classes [13], [17], [22]. Moreover, a high VOLUME 10, 2022 FPR problem occurs when the dataset is imbalanced and the classifier has limited abilities to learn long-term temporal patterns [29]- [31]. Keeping the above issues in view, a hybrid model is developed to perform ETD in power grids. Moreover, it is important to mention that this article is the extension of our conference paper [32]. IV. PROPOSED METHODOLOGY This section provides the in depth discussion of the proposed ETD model. Fig. 1 shows the working mechanism of the model. Furthermore, different components of the model are explained below. A. DATA PREPROCESSING The underlying purpose of preprocessing is to give a standardized structure of the EC data before training and testing of the model. In this stage, various data preprocessing operations are performed, such as filling missing values, handling outliers and normalizing data. The description of each operation is described below. 1) HANDLING MISSING VALUES In this work, the real-time smart meter dataset is used that is collected through onsite inspection. Therefore, it contains missing and noisy values. In most cases, missing values in data are denoted by not a number (NaN). If data with missing values is fed into ML or DL models, it decreases their performance. In the existing studies, different approaches are introduced to cope with the missing values. In this work, a linear interpolation (LI) method is utilized to fill the missing values [33]. The LI method takes the average of the next day and the previous day EC values and fills the missing values accordingly. 2) REMOVING OUTLIERS The outlier is a value that has significantly different behavior from other observations or values in a dataset. The presence of outliers disturbs the performance of classification models. In the proposed work, three-sigma rule of thumb is exploited to handle the outliers present in the EC profiles of consumers [33]. 3) NORMALIZING DATA VALUES DL models poorly performed on the diverse range of values. Moreover, these values negatively affect the learning process of DL models and push them towards the gradient exploding problem. In this regard, we use min-max normalization to scale the values of features between 0 and 1 [33]. B. SOLVING UNEVEN DISTRIBUTION OF CLASSES The uneven distribution of classes is a critical problem being faced when performing ETD. In real world scenarios, normal electricity consumers are more than the abnormal consumers. The less availability of abnormal consumers creates the problem of imbalance data distribution, which adversely affects the performance of supervised learning models. Furthermore, the classification models are skewed towards the larger class and overlook the smaller class, which generates false results. Therefore, in this study, an undersampling technique is utilized to resolve the problem of imbalance data. Synthetic Minority Oversampling Technique (SMOTE) is an oversampling technique, which is mostly used to handle uneven distribution of class samples [29] and [30]. It takes samples of both classes as input and generates synthetic samples of the minority class until both classes have equal distribution. 
However, this sampling technique raises an overfitting problem because of creating duplicate records. So, we use an under-sampling technique, termed as NM, to solve the class imbalance problem. This technique intelligently eliminates instances from the larger class to equalize the distribution of both classes. It eliminates those points, which are near the decision boundary. The working of NM is illustrated in Fig. 2. Moreover, the steps of NM are defined below. 1) First, it calculates the distance between all points of minority and majority classes. 2) Then, it selects those instances, which have least distance to the smaller class. The n number of instances will be eliminated. 3) If there are m instances in the minority class, then the algorithm will return m * n instances of the majority class. C. PROPOSED DEEP LEARNING BASED ENSEMBLE MODEL In this work, a hybrid model is developed for ETD that integrates the benefits of both AlexNet and AdaBoost. The AlexNet model extracts latent and dense characteristics from the EC profiles of consumers. Whereas, the final ETD is carried out through AdaBoost. In [16], the authors introduce a combined model by integrating CNN and RF in a sequential manner. They prove that the combined models give more satisfactory results than the standalone ML and DL models. Therefore, after getting motivation from [16], we develop a hybrid model to perform ETD in power grids. The proposed model combines two modules. The first module contains AlexNet model, which is used to learn features. The AlexNet model is a variant of CNN, which overcomes its limitations and gives better performance results. While the second module consists of an AdaBoost classifier, which takes extracted features of the AlexNet model as input and differentiates between normal and malicious patterns. The performance of AdaBoost is increased by enhancing the number of decision trees. Moreover, the ABC algorithm is practiced to select suitable hyper-parameters for both AlexNet and AdaBoost models. The description of each module is given below. 1) AlexNet MODEL AS FEATURE EXTRACTOR The EC data is collected through smart meters. It often contains missing and noisy values. The noise in the EC data may be in terms of anomalies, outliers, overlapping and redundant records, missing records, inconsistent EC values, etc. These noises should be handled, otherwise, the proposed ETD model produces false results and increases the FPR to a greater extent. Therefore, in this work, we adopt essential preprocessing techniques to handle the noises. The anomalies and outliers are handled through three sigma rule, missing values are filled through LI and the inconsistent values are tackled through normalization. Moreover, the irrelevant and noisy features are discarded by the AlexNet model. The AlexNet model auto selects the relevant features and reduces the effects of noises to a minimal level. In addition, the selection of suitable EC features is a necessary task towards performing efficient ETD. Therefore, we utilize AlexNet to extract hidden and dense features from the consumers' profiles. It was designed to reduce the shortcomings of the traditional models of that time like LeNet [34]. The architecture of AlexNet is similar to the LeNet model. However, it has more number of filters as compared to the LeNet model. It comprises of convolution, pooling and fully connected layers. 
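Before the individual layer types are described, a minimal sketch of such an AlexNet-style feature extractor is given below (assuming tf.keras). The layer counts, filter sizes and the 147 × 7 weekly input shape are illustrative assumptions, not the ABC-tuned hyper-parameters of the paper; the flattened/dense output is what would be handed to the AdaBoost classifier.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_feature_extractor(input_shape=(147, 7, 1), feature_dim=128):
    """AlexNet-style CNN over 2-D weekly consumption matrices (illustrative sizes)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 1)),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 1)),
        layers.Dropout(0.5),            # dropout in place of explicit regularization
        layers.Flatten(),               # flattened output used as the extracted feature set
        layers.Dense(feature_dim, activation="relu"),
    ])

extractor = build_feature_extractor()
weekly = np.random.rand(4, 147, 7, 1).astype("float32")   # 4 consumers' weekly matrices
features = extractor.predict(weekly, verbose=0)           # features passed on to AdaBoost
print(features.shape)                                      # (4, 128)
```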
Convolution layers are used to attain abstract and latent features while pooling layers are helpful to get high level features, which reduce the curse of dimensionality issues. Moreover, dropout layers are used in place of regularization techniques to control the overfitting problem. However, dropout layers increase the training time of the AlexNet model. The basic architecture of the AlexNet model is illustrated in Fig. 3. Moreover, the explanation of each component is given below. a: CONVOLUTION LAYER Convolution layers are an important part of CNN. These layers apply two dimensional (2D) filters on input images to extract the high level features that have high relevancy with predictive labels. Moreover, these layers are also used to obtain features from one dimensional (1D) and three dimensional (3D) data. In this study, the EC data is converted into 2D weekly data. In the literature, some studies show that it is possible to extract high level features from EC data if we align it according to weeks. 2D filters are moved on the input feature vector and different feature maps are created accordingly. These feature maps are the high level representation of 2D input data. Moreover, the optimal values of these filters are attained using stochastic gradient descent method, which gives better results as compared to handcrafted filters. Furthermore, Fig. 4 shows the architecture of convolution layer. The mathematical representations of the steps used in the convolution layers are as follows. Dim (input shape) = (n h , n w ) where n h denotes features and n w depicts observations. Filter = (f w , f h ) where f w and f h represent width and height of the filter, respectively. The convolution operation between input shape and filters is described by the equation given below [35]. Conv(inputshape, filter) The learnable process in the convolution is described as [11]. where X ft t is the output feature map after the application of filter f t . W ft t and b ft t refer to the learnable parameters, which describe the weight and bias factors. σ represents the rectified linear unit (ReLU) activation while * is a dot product between filters and features. b: POOLING LAYER Convolution layers are very helpful because they extract the low-level features from the EC data. These layers extract the precise location of features. As a result, a minor change in the input data creates a different feature map. These changes happen due to cropping, rotation and flipping of input images. Researchers use signal processing techniques to decrease the resolution size of feature map. However, these techniques do not give good results and decrease the performance of DL models. Pooling layers are a new concept in the DL models. They are applied after performing the convolution operations. These layers diminish the spatial dimension of feature map without affecting the original resolution of input data. Moreover, they overcome the computational overhead issues. In the literature, several pooling operations are introduced by researchers. Among them, max-pooling performs better than the rest. Therefore, we adopt max-pooling strategy reduce the spatial dimensions of the EC data [36]. The max-pooling strategy only keeps the largest value from the specific area of a feature map and discards the rest of the values. Fig. 5 represents the working mechanism of the max-pooling operation. The mathematical representation of the max-pooling layer is as follows [35]. 
where y m shows the outcomes of max-pooling operations and R represents the set of real numbers. i and j denote the i th convolution layer and the j th neuron, respectively. functions create exploding and vanishing gradient problems, which disturb the performance of DL models. In this study, we use ReLU as an activation function after each convolutional layer. This function converts negative values into zeros and passes the remaining values to the upcoming convolutional layers without any changes. The research work shows that ReLU function overcomes exploding and vanishing gradients problems of DL models and enhances their performance. The mathematical equation used for ReLU function is given below [33]. d: DROPOUT LAYER DL models are sensitive to overfitting with a few training examples of datasets. Ensemble DL models are introduced to overcome this issue. However, these models have their own disadvantages and are difficult to maintain the training of multiple models. A single neural network can be simulated with different architectures by introducing the concept of dropout layers. These layers deactivate some neurons of hidden layers to overcome the overfitting and poor generalization issues. The average activation rate of the dropout layer is from 0 to 1. However, the literature shows that DL models give optimal results when the dropout rate is set at 0.5 where half number of neurons are deactivated during the training process. Moreover, Fig. 6 shows the activation and deactivation of neurons before and after the dropout operations. e: FLATTEN LAYER After performing above mentioned operations, feature maps are converted into 1D data and passed into a simple neural network to differentiate between normal and malicious EC patterns. However, in this study, output of flatten layer is considered as an extracted feature set, which is obtained after removing the overlapping and noisy values. This feature set is a more suitable representation of the EC data. Fig. 7 represents the working of flatten layer. f: FULLY CONNECTED LAYER This layer is the last layer of AlexNet [37]. It connects the neurons of preceding layers to the neurons of upcoming layers. Moreover, it extracts global features from the provided feature maps. It also compiles the result of the previous layer to perform final classification. The architecture of fully connected layer is shown in Fig. 8. Furthermore, the mathematical formula used for the fully connected layer is presented below. where x t represents input feature vector. W t and b t denote weight and bias factors, respectively. σ shows the ReLU function while f o represents the final output. ReLU is used after each convolution to avoid the vanishing gradient problem. The basic purpose of dropout layer is to solve overfitting and vanishing gradient problems. 2) AdaBoost MODEL AS CLASSIFIER Ensemble methods attract the attention of the research community because they win many Kaggle competitions. Kaggle is a data science community, which launches different competitions to solve real-world problems [38]. Ensemble methods are of two types: bagging and boosting. In former methods, multiple weak learners are initially trained on different ratios of data and then a voting mechanism is used to decide about the normal and malicious samples. RF is a bagging method, which trains multiple decision trees on data. The bagging methods give good results if all weak learners get diverse knowledge from the EC data. 
2) AdaBoost MODEL AS CLASSIFIER
Ensemble methods attract the attention of the research community because they win many Kaggle competitions. Kaggle is a data science community, which launches different competitions to solve real-world problems [38]. Ensemble methods are of two types: bagging and boosting. In the former, multiple weak learners are initially trained on different ratios of the data and then a voting mechanism is used to decide about the normal and malicious samples. Random forest (RF) is a bagging method, which trains multiple decision trees on the data. The bagging methods give good results if all weak learners get diverse knowledge from the EC data. However, their performance decreases when the results of the weak classifiers overlap. Contrarily, in boosting methods, different weak learners are integrated in a sequential manner to make a strong learner. The AdaBoost classifier is based on the boosting approach, where high weights are assigned to the wrong predictions of a weak learner and passed to the next weak learner for better classification. So, the extracted features of AlexNet are passed to the AdaBoost classifier, which uses an ensemble approach to differentiate between normal and malicious EC patterns. Through this approach, the performance of the standalone ML classifier is increased by reducing the loss function's value iteratively. The working mechanism of the AdaBoost classifier is shown in Fig. 9.
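A minimal scikit-learn sketch of this stage is given below. It assumes the `features` array produced by the extractor sketch above; the labels are synthetic placeholders, and the AdaBoost hyper-parameter values are arbitrary stand-ins for the values that the ABC search described in the next subsection is meant to find.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic labels for illustration; real labels mark honest (0) vs theft (1) consumers.
y = np.random.randint(0, 2, size=len(features))

X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.2, stratify=y, random_state=0)

ada = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=0)  # placeholder hyper-parameters
ada.fit(X_train, y_train)

print("AUC-ROC:", roc_auc_score(y_test, ada.predict_proba(X_test)[:, 1]))
```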
3) TUNING HYPER-PARAMETER OF THE PROPOSED MODEL THROUGH ENHANCED ABC ALGORITHM
In [39], the authors proposed a hybrid model by combining a Jaya optimization algorithm with an ensemble learning model. The model is introduced to identify the transient stability status of power systems. The Jaya algorithm has significantly increased the efficiency of the proposed model by finding the optimal hyper-parameters. Therefore, motivated by this work, we employ the ABC algorithm for hyper-parameters' tuning of the proposed ETD model. The ABC algorithm comes under the umbrella of metaheuristic techniques [40]. It was proposed by Karaboga in 2005 and is mostly used for optimization tasks. It mimics the working procedure of honey bees when they search for food. The collection of honey bees is called a swarm. In the ABC algorithm, there are three types of bees: employee, onlooker and scout. The employee bee initially searches the food sources using its own intelligence and memory power and then informs the onlooker bees about these sources. The onlooker bee finds the rich source of food through the information provided by the employee bee. The onlooker bee chooses those food sources which have good quality of fitness. The scout bee is picked from the onlooker bees. This bee tries to find new sources and abandons the old ones in favour of a rich food source [35]. The total numbers of solutions, onlooker bees and employee bees are equal in the swarm [36]. Fig. 10 illustrates the working of the ABC algorithm. The ABC algorithm is a well-known and powerful optimization algorithm. The selection of ABC is made because of its powerful exploration abilities and some unique properties compared with other optimization algorithms. These properties are as follows. The ABC algorithm has fewer control parameters as compared to other optimization algorithms like PSO and GA [40]. It efficiently handles stochastic cost objective functions. Furthermore, the convergence rate of ABC is fast due to the separate phases for exploitation and exploration as compared to other optimization techniques [41]. Moreover, the authors of [42], [43] exploit the ABC algorithm for hyper-parameters' tuning of their proposed deep learning models. The simulation results prove that the hyper-parameters selected by ABC greatly improve the performance results. Therefore, based on these unique properties and motivated by [42], [43], we utilize the ABC algorithm for hyper-parameters' tuning of the proposed ETD model. The hyper-parameters found by ABC improve the ETD performance to a great extent. Besides, it is important to mention that the traditional ABC algorithm has the problem of premature convergence, in which the algorithm gets stuck in the exploitation phase due to low diversity in the population. Therefore, in this work, an advanced variant of the ABC algorithm is used, proposed by [44]. In the modified version, a new control parameter, named the cognitive learning factor (CLF), is introduced to regulate the current position of a candidate solution in the employee and onlooker bee phases. This enhancement significantly reduces the problem of premature convergence in ABC. Through simulations, it is seen that the proposed model performs efficiently on the hyper-parameters selected by the enhanced ABC algorithm. Moreover, the working of the proposed ETD model is given in Algorithm 1.
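The sketch below is a compact, self-contained illustration of an ABC-style search with a cognitive learning factor; it is written only to make the employee, onlooker and scout phases tangible. The CLF update rule, the parameter names, the abandonment limit and the toy objective are assumptions on our part and do not reproduce [44] or Algorithm 1.

```python
import numpy as np

def abc_minimize(f, bounds, n_food=10, limit=20, iters=50, clf=1.5, rng=None):
    """Minimal ABC-style minimiser: f is the objective (e.g. a validation loss),
    bounds is a list of (low, high) per hyper-parameter, clf is the assumed
    cognitive learning factor scaling the position update."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    foods = rng.uniform(lo, hi, size=(n_food, dim))          # one food source per employee bee
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        k = rng.integers(n_food)
        while k == i:
            k = rng.integers(n_food)
        j = rng.integers(dim)
        phi = rng.uniform(-1.0, 1.0)
        cand = foods[i].copy()
        cand[j] = foods[i][j] + clf * phi * (foods[i][j] - foods[k][j])  # CLF-scaled move
        return np.clip(cand, lo, hi)

    def try_improve(i):
        cand = neighbour(i)
        fc = f(cand)
        if fc < fit[i]:
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                   # employee bee phase
            try_improve(i)
        quality = 1.0 / (1.0 + fit - fit.min())   # better (lower) loss -> higher quality
        probs = quality / quality.sum()
        for _ in range(n_food):                   # onlooker bee phase
            try_improve(rng.choice(n_food, p=probs))
        for i in range(n_food):                   # scout bee phase: abandon exhausted sources
            if trials[i] > limit:
                foods[i] = rng.uniform(lo, hi)
                fit[i] = f(foods[i])
                trials[i] = 0
    best = int(np.argmin(fit))
    return foods[best], fit[best]

# Toy stand-in for a validation-loss objective over (learning rate, dropout rate).
best_params, best_loss = abc_minimize(
    lambda h: (h[0] - 0.1) ** 2 + (h[1] - 0.5) ** 2,
    bounds=[(1e-4, 1.0), (0.0, 0.9)], iters=30, rng=0)
print(best_params, best_loss)
```

In a full pipeline the lambda above would be replaced by a function that builds the extractor and AdaBoost classifier with the candidate hyper-parameters and returns a validation loss or 1 minus AUC-ROC.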
V. SIMULATION RESULTS
In this section, the simulation results and discussion of the proposed and baseline models are provided. To perform the simulations, we use a real dataset of smart meters. The detail of the dataset is provided in Section V-A. Moreover, Python 3.0 and Google Colab are used for the implementation of the proposed and baseline models.
A. DATASET ACQUISITION
The dataset used in this work is released by the State Grid Corporation of China (SGCC), a well-known electric utility in China [33]. The dataset is publicly available on the Internet. It comprises the EC records of 42,372 consumers from Jan 1, 2014 to Oct 31, 2016. In the dataset, rows exhibit the overall records of electricity consumers and columns denote the daily electricity usage. Table 3 presents the explanation of the dataset.
B. PERFORMANCE METRICS
The suitable selection of performance metrics is a necessary task in the case of imbalanced classification. The AUC-ROC metric is a reliable indicator for imbalanced classification problems. It measures how well the model separates positive- and negative-class samples. The value of AUC ranges between 0 and 1. Moreover, it plots TPR and FPR at different classification thresholds. The mathematical formula used for AUC is given in [4], where S denotes the positive class and T represents the negative class. The classification model performs well when the value of AUC is 1. Similarly, if the value of AUC is 0.5, the model performs no better than random guessing. The F1-score is computed as $F1 = \frac{2 \times PRE \times REC}{PRE + REC}$, where PRE and REC denote precision and recall, respectively. Moreover, precision is another useful metric to measure the relevancy of the total positive predicted results. Similarly, recall calculates the number of truly predicted positive samples out of the actual class labels. In addition, the F1-score measures the balance between precision and recall for better evaluation of the model [36]. On the other hand, MCC is another suitable performance metric, which provides the correlation between positive and negative predictions [45]. The formula used to calculate the MCC score is described below [45]: $MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$, where TP, FP, FN and TN represent true positive, false positive, false negative and true negative, respectively. All of these outcomes are computed through a confusion matrix, which provides the summary of the overall correct and incorrect predictions of the classification model.
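For reference, the following sketch shows how these metrics could be computed with scikit-learn; the label and score arrays are synthetic placeholders rather than outputs of the proposed model.

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, confusion_matrix)

rng = np.random.default_rng(0)
y_test = rng.integers(0, 2, size=500)                                # synthetic ground truth (0 honest, 1 theft)
y_score = np.clip(y_test * 0.6 + rng.random(500) * 0.5, 0, 1)        # synthetic classifier scores
y_pred = (y_score >= 0.5).astype(int)

print("AUC-ROC  :", roc_auc_score(y_test, y_score))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
print("MCC      :", matthews_corrcoef(y_test, y_pred))
print("Confusion matrix (TN FP / FN TP):\n", confusion_matrix(y_test, y_pred))
```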
Table 2 presents the mapping of identified problems with suitable suggested solutions. Moreover, the problems, solutions and validations are labeled as L, S and V, respectively. L1 is about the class imbalance problem, which is solved through NM. In L2, the curse of dimensionality issue is discussed, which is handled through the AlexNet model. The overfitting problem is highlighted in L3, which is solved through generating synthetic theft data. In L4, the problem of high FPR is discussed, which is minimized through the proposed hybrid model.
D. BENCHMARK MODELS AND THEIR SIMULATION RESULTS
The description and the performance analysis of the baseline models are provided in this section.
1) SUPPORT VECTOR MACHINE
It is a popular classification technique that is widely used in ETD, mostly for binary classification tasks. The simulation results of the support vector machine (SVM) [46] using SMOTE, NM and the imbalanced data distribution are given in Table 4. The results exhibit that SVM performs better with NM than with SMOTE or the imbalanced data distribution. However, it underperforms compared with all other baseline models. The reason is that it draws n−1 hyperplanes for making the classification boundary, where n represents the number of features. The EC data is high dimensional, which increases the computational overhead of SVM to a great extent. Moreover, the convergence speed of the SVM model is greatly affected. Fig. 13 shows the AUC-ROC of SVM during the training and validation process.
2) LOGISTIC REGRESSION
Logistic regression (LR) is a well-known ML model, which is mostly used for binary classification tasks [47]. It is a probabilistic model. In LR, the final classification is performed using a sigmoid function.
3) CONVOLUTIONAL NEURAL NETWORK
It is a deep learning model, which is commonly practiced in ETD as a benchmark model [17], [48]. Firstly, CNN uses different convolutional and pooling operations to capture the prominent features from the consumption profiles of consumers. Subsequently, the classification is carried out accordingly. The ETD results of CNN are provided in Table 6.
E. PROPOSED AlexNet-AdaBoost-ABC
The performance analysis of the proposed hybrid model is presented in this section. Fig. 11 and Fig. 12 show the learning process of the model in terms of loss and accuracy. The blue curve denotes the training loss and the orange curve exhibits the testing loss. In the figures, the decrease in loss and increase in accuracy indicate that the hybrid model intelligently learns EC patterns. It is seen that the model converges quickly due to the self-learning of latent features. Table 7 and Table 8 describe the overall performance results of the AlexNet-ABC and the proposed model, respectively. Moreover, Fig. 13 illustrates the AUC-ROC of the proposed and benchmark models. The proposed model obtains the highest AUC-ROC of 0.91 due to the strong feature learning abilities of AlexNet and the integration of ABC for hyper-parameter tuning. The ABC attempts to find the suitable parameter set for both the AlexNet and AdaBoost models. By doing this, the convergence rate of the hybrid model is increased to a great extent. Moreover, the high value of AUC indicates that the model efficiently differentiates between the positive- and the negative-class samples. Furthermore, Fig. 14 and Fig. 15 show the performance results of the proposed and baseline models using the SMOTE and NM sampling techniques. From the figures, it is seen that the model performs well using NM as compared to SMOTE. The reason is that SMOTE generates overlapping samples after a certain threshold, which leads to the overfitting issue. However, NM intelligently removes the overlapping samples from the larger class. That is why it gives better results as compared to SMOTE. Similarly, Fig. 16 and Fig. 17 show the MCC scores of the proposed and baseline models using the NM and SMOTE sampling techniques, respectively. The proposed model obtains the highest MCC score using NM as compared to SMOTE. This implies that the model successfully maintains the correlation between positive and negative predicted classes. This achievement enables the application of the proposed model in real-world scenarios. Moreover, it is important to mention that the basic building block of our proposed ETD model is based on data-driven models [37]. These models perform more efficiently when the dataset is huge. In this work, a massive dataset of smart meters is leveraged to train the proposed ETD model. So, the model learns EC patterns more efficiently when a large number of EC records is used. As a result, the proposed ETD model easily deals with low or high volumes of data, which shows the scalability of the proposed ETD model. Furthermore, the training time of the proposed and benchmark models is shown in Fig. 18. The figure illustrates that the computational complexity of the proposed ETD model is greater than that of the baseline models except SVM. The reason is that the integration of ABC incurs extra time to find the optimal subset of hyper-parameters. However, there is a tradeoff between computational time and detection accuracy. The ETD results of the proposed model are better than its counterparts. So, in ETD, accurate detection of energy thieves is more important than computational time. Moreover, SVM has the highest computational overhead because it does not perform well on non-linear and high dimensional data. In SVM, n − 1 hyperplanes (where n represents the number of features) are drawn and only the hyperplane having the maximum margin from the support vectors is selected. Due to this reason, it has the highest computational overhead. In contrast, LR has the least execution time because it consists of a single neural network layer. However, the performance of LR is not satisfactory because it is unable to capture the complex non-linearity of the EC data. In addition, CNN consumes less execution time as compared to other deep models. However, it underperforms in terms of ETD accuracy. Similarly, the computational complexity of the AlexNet model is higher than that of CNN and LR because of the extra convolution and max-pooling layers.
VI. CONCLUSION AND FUTURE WORK
In this article, a hybrid deep learning model is introduced for the detection of malicious and non-malicious electricity consumers. The proposed model combines AlexNet, AdaBoost and the ABC optimization algorithm. The AlexNet model is employed to cope with the high dimensionality problem, whereas the AdaBoost model is utilized to differentiate between honest and dishonest electricity consumers. Moreover, the problem of imbalanced data distribution is handled using an undersampling technique, named NM. The NM technique intelligently equalizes the proportion of classes by minimizing the majority class samples. Furthermore, the ABC optimization algorithm is exploited for hyper-parameters' tuning of the proposed ETD model. The efficacy of the proposed model is ensured on a realistic smart meters' dataset that is taken from SGCC, a well-known electric utility in China. Afterwards, the appropriate performance metrics are adopted to measure the effectiveness of the proposed ETD model. The simulation results exhibit that the proposed ETD model outperforms the benchmark models. Our model achieves 88%, 86%, 84%, 85%, 78% and 91% accuracy, precision, recall, F1-score, MCC and AUC-ROC scores, respectively. Besides, the proposed model has some shortcomings. The model is trained only on the EC dataset without considering other non-sequential factors, which limits its ability to identify the accurate location of the energy thieves. Moreover, only low-sampling-rate EC data is considered, which limits the ability of the proposed model to capture more granular information about energy theft. So, in the future, we will consider high-sampling-rate EC data and other non-sequential data for the accurate identification of energy thieves.
Etiology and Treatment of Growth Delay in Noonan Syndrome Noonan syndrome is characterized by multiple phenotypic features, including growth retardation, which represents the main cause of consultation to the clinician. Longitudinal growth during childhood and adolescence depends on several factors, among them an intact somatotrophic axis, which is characterized by an adequate growth hormone (GH) secretion by the pituitary, subsequent binding to its receptor, proper function of the post-receptor signaling pathway for this hormone (JAK-STAT5b and RAS/MAPK), and ultimately by the production of its main effector, insulin-like growth factor 1 (IGF-1). Several studies regarding the function of the somatotrophic axis in patients with Noonan syndrome, and data from murine models, suggest that partial GH insensitivity at a post-receptor level, as well as possible derangements in the RAS/MAPK pathway, are the most likely causes for the growth failure in these patients. Treatment with recombinant human growth hormone (rhGH) has been used extensively to promote linear growth in these patients. Numerous treatment protocols have been employed so far, but the published studies are quite heterogeneous regarding patient selection, length of treatment, and dose of rhGH utilized, so the true benefit of GH therapy is somewhat difficult to establish. This review will discuss the possible etiologies for the growth delay, as well as the outcomes following rhGH treatment in patients with Noonan syndrome. INTRODUCTION Noonan syndrome (NS - MIM# 163950) is a congenital disorder with an estimated incidence of one in 1,000 to 2,500 live births (1,2). This estimate, however, may not be accurate since the condition may be underdiagnosed, particularly in mild cases, complicating the determination of its true frequency in the general population (3). The presence of familial cases is consistent with an autosomal dominant inheritance pattern, but sporadic cases are more frequent (4). This syndrome presents three cardinal characteristics: distinctive facial features, postnatal short stature and cardiac anomalies (5); these coexist with minor signs such as cryptorchidism, delayed puberty, bleeding disorders, thoracic abnormalities, skin disorders, variable degrees of neurocognitive delay and predisposition to myeloproliferative disorders. GROWTH DELAY IN NOONAN SYNDROME One of the cardinal features of Noonan syndrome is proportional growth retardation, which occurs in most patients (10,11) and represents the main cause of consultation to the clinician. These patients tend to have a birth length which is normal or slightly subnormal for gestational age. Cessans and co-workers found, in a molecularly characterized cohort of 386 NS patients, that those patients with PTPN11 and RAF1 pathogenic variants were shorter at birth (12). In contrast, Malaquias and co-workers did not find this relationship between genotype and birth length in a cohort of 127 NS patients (13). Interestingly, both studies reported a significantly higher frequency of prematurity in NS patients, which could be associated with polyhydramnios, frequently observed in NS (14). Concerning postnatal growth, it usually follows a channel that is below, but parallel to, the third percentile during infancy and childhood (12,13). Even though deregulation of the RAS/MAPK pathway may explain this scenario, other factors may be involved, such as feeding problems (15) or a history of cardiac surgery.
Patients with NS exhibit slow bone maturation and delayed puberty, which is associated with a limited pubertal growth spurt, leading to progressive growth retardation (16). A genetic correlation has been proposed, with patients with PTPN11, RAF1, and KRAS pathogenic variants showing more severely impaired growth than those patients with SOS1 variants (12,13). This may be explained by different magnitudes of RAS/MAPK activation for each genotype, and/or by the involvement of other signaling pathways related to RAS/MAPK (e.g. PI3K/AKT, JAK2/ STATb5). Finally, adult heights, in different populations, have been reported below the third percentile in most affected patients (4,17,18). The average adult height for men with Noonan syndrome is 162.5 cm, and for women is 153 cm. It should be mentioned that women with Noonan syndrome reach their adult height by the end of their second decade of life, whereas men reach this milestone at the beginning of their third decade, suggesting that they undergo a limited but prolonged pubertal spurt. Pathophysiology of Short Stature The pathophysiology of short stature in this condition remains poorly understood. Different potential mechanisms have been hypothesized, including growth hormone deficiency (GHD) (19)(20)(21), neurosecretory dysfunction (22) or mild GH resistance (23). The somatotrophic axis has been studied extensively in these patients, showing normal to elevated serum GH, with adequate responses to GH stimulation tests, but in some cases, they exhibit low mean GH concentrations during a 12-hour overnight profile. In addition, these patients show serum IGF-1 and ALS concentrations at the lower limit of the normal range, with IGFBP-3 concentrations usually within the normal range (20,24,25). The IGF-1 generation test, as reported by Bertelloni and co-workers in a well-defined pre-pubertal NS cohort with PTPN11 pathogenic variants, showed an impaired responsiveness to GH stimulation compared with patients with short stature and healthy children (26). This hormonal profile suggests that some of these patients may show evidence of partial GH insensitivity, possibly at a post-receptor level. Growth Regulation in Murine Models Several NS murine models have been developed (27). These models express pathogenic variants of Ptpn11 (D61G) (28)(29)(30), Sos1 (E846K) (31), Kras (V14I) (32), Rit1 (A57G) (33) and Raf1 (L613V) (34), in all tissues. All of them exhibit a normal birth length, but show reduced body length by weaning, which is concordant with the postnatal growth retardation observed in human NS patients. Regulation of growth has been investigated only in the murine model that expresses the pathogenic variants Ptpn11 D61G/+ (28)(29)(30). De Rocca and co-workers (24) showed that serum IGF-1 levels are significantly lower in NS mice until the sixth week of life, when serum IGF-1 reach normal levels. A similar pattern was observed for serum IGF binding protein-3 (IGF-BP3), so the authors suggested that this scenario might be restricted to the growing period. At a molecular level, Araki and co-workers (28) showed increased immunostaining of phospho-Erk (effector of RAS/ MAPK pathway) at the developing limb buds of this model. In mouse embryonic fibroblasts, however, there was no difference in Erk activation in response to growth factors between heterozygous (D61G/+) and homozygous (D61G/D61G) mice. 
In addition, De Rocca (29) observed strong Erk phosphorylation in the liver of NS animals compared with their WT littermates, and Araki (28) suggested that Erk activation in this murine model might be dependent on the cell context. Furthermore, De Rocca demonstrated that inhibition of Erk phosphorylation significantly increased IGF-1 blood levels and body length in NS mice. Tajan and co-workers (30) evaluated growth plate development in this NS murine model, and showed that the hypertrophic zone was smaller compared to WT animals. This finding correlated with a lower expression of transcripts associated with the hypertrophic stage, and with decreased alkaline phosphatase (ALP -chondrocyte differentiation marker) activity in NS chondrocytes. Moreover, increased basal and growth factor induced Erk1/2 phosphorylation was observed in NS hypertrophic chondrocytes. Interestingly, Tajan (30) showed that IGF-1 supplementation or inhibition of Erk phosphorylation partially corrects the growth retardation observed in NS mice. In addition, inhibition of Erk phosphorylation restored the length of the hypertrophic zones and ALP activity. Thus, Tajan hypothesizes that systemic IGF-1 deficiency is not the only mechanism responsible for poor growth plate development in NS. In addition, De Rocca et al. proposed that autocrine/paracrine IGF-1 production may be altered in bone, and thus contribute to the impaired growth observed in NS. Several authors have suggested a direct participation (IGF-1 independent) of the RAS/MAPK pathway on physiological growth plate development (35,36). Moreover, the important role of the RAS/MAPK pathway in growth plate development has been documented in achondroplasia and hypochondroplasia, the most common primary skeletal dysplasias in humans, which are caused by gain of function variants of the fibroblast growth factor receptor 3 (FGFR3) (37). It has been demonstrated that specific RAS/MAPK signaling inhibition in an achondroplasia murine model (Fgfr3 Y367C/+) leads to a significant recovery of bone growth (38). Data from murine NS models provide evidence that the growth retardation in NS, at least in those cases associated with a specific Ptpn11 variant, may be due either to partial GH insensitivity at a post-receptor level, and/or to an effect on RAS/ MAPK activation. This murine model (Ptpn11 D61G/+) not only reproduces several Noonan syndrome characteristics, but also allows for the understanding of the molecular process involved in the development of these features, in a tissue specific context. This has helped to clarify the effects of RAS/MAPK pathway deregulation in growth plate physiology. Further investigation of growth plate development, especially in those murine models with pathogenic variants associated with severe growth deficiency, as in the case of the murine model developed by Hernandez-Porras and co-workers (Kras +/V14I), would be very informative. IGF-1 Reduction and RAS/MAPK Hyperactivation An intriguing aspect regarding linear growth in NS is the possible molecular relationship between GH induced hyperactivation of the RAS/MAPK pathway and reduced generation of IGF-1. The clues appear to reside in the interplay between this pathway and the JAK2-STAT5b signaling pathway, which induces IGF-1 production after GH receptor activation (39). 
Although the inhibitory effect of SHP2 which is activated in NS, on JAK-STAT5b signaling has been shown in some cell lines (40), the Ptpn11 D61G/+ murine model does not reveal any change in STAT5b activity when stimulated with GH (29). Another issue to consider is the possible participation of the Src family of kinases (SFK), whose activation by GH is independent of the JAK2-STAT5b pathway, and results in induction of RAS/MAPK signaling (41,42). Further research will be necessary to better understand the interplay between these intracellular pathways, and to clarify the mechanisms whereby pathogenic variants in RAS/MAPK, which prolong the intracellular signal induced by GH, results in a reduced generation of IGF-1. GROWTH HORMONE TREATMENT FOR NOONAN SYNDROME Treatment with recombinant human growth hormone (rhGH) has been shown to accelerate growth in patients with NS, but the benefit of long-term therapy over adult height is still the subject of some debate. The US Food and Drug Administration approved the treatment of NS patients with rhGH in 2007, and recently the European Medicine Agency (EMA) approved it too. In several other countries, such as Brazil, Israel, The Philippines, South Korea and Switzerland, rhGH is also licensed for this indication (43). Most of the evidence regarding rhGH therapy in patients with NS originates from observational uncontrolled studies with relatively small numbers of subjects. Thus, very few controlled trials reporting adult height or near adult height in these patients have been published, but some longitudinal prospective trials and retrospective studies based on post-marketing studies are available. These data are difficult to compare, however, due to the heterogeneous treatment protocols employed, as well as the different cohort selection criteria, age at onset of therapy, rhGH dose employed, and duration of treatment. Some of the series published show a significant gain in adult height of approximately 1.4 ± 0.8 SDS (9.5 ± 5.4 cm) (44), but others have not confirmed these findings (45). The published studies can be classified into post-marketing observational studies, generally financed by the pharmaceutical industry, and prospective and retrospective clinical trials conducted in selected groups of patients. Short-Term Studies Several short-term studies with less than one year of rhGH therapy have reported a transient increase in height velocity and height SDS in NS patients, as described for other clinical conditions characterized by short stature (20,46,47). In addition, Siklar and co-workers reported the effect of rhGH treatment during three years in a group of 47 patients with NS (48). In this study, the height standard deviation score (HSDS) increased from -3.62 ± 1.14 to -2.85 ± 0.96, and this increase was significantly greater when compared with the patients who did not receive rhGH ( Table 1). Longer-term studies, such as those published by MacFarlane (49) and Lee (50), also reported an increase in height velocity and height SDS during rhGH therapy in patients with NS ( Table 1). The MacFarlane study is the only controlled trial reporting data from 31 children (23 treated and 8 untreated), which showed that after 3 years of rhGH therapy, the treated group gained an average of 3.3 cm more than the untreated group (49). 
In addition, Lee compared the responses to rhGH therapy in patients with NS and Turner Syndrome (TS), which were enrolled in the Nordinet International Outcome Study (IOS), and demonstrated that both groups of patients achieved a similar response after 4 years of rhGH therapy (50). In NS and TS patients, the 4-year adjusted DHSDS were 1.14 ± 0.13 and 1.03 ± 0.04, respectively. Based on untreated, disease-specific references, the four year adjusted DHSDS for NS and TS were 1.48 ± 0.10 and 1.79 ± 0.04 (p<0.0001) respectively, with DHSDS being greater in younger patients at the onset of treatment. Finally, Horikawa and co-workers investigated the long-term efficacy and safety of Norditropin ® in a 4 year randomized, double blind, multicenter trial in prepubertal patients with NS (51) ( Table 1). The patients treated were randomized 1:1 to receive rhGH at a dose of either 0.033 or 0.066 mg/kg/day. Height SDS increased from -3.24 to -2.39 in the low dose group (change in HSDS was 0.85), and from -3.25 to -1.41 in the high dose group (change in HSDS was 1.84) after 208 weeks of therapy (p<0.001). It should be noted that the authors did not observe any serious adverse events during therapy with rhGH. Adult Height/Near Adult Height Adult or near-adult height data have been reported in four studies, and there is additional information from four patients documented by Municchi and co-workers in 1995 (52). None of these studies included a control group, so the growth of the treated patients was compared with historical growth references for NS. Unfortunately, some of these references are from cross sectional studies without genetic confirmation of the condition, and with small numbers of patients. As observed in Table 2, age at start of treatment and duration of rhGH varied widely among these studies, whereas mean heights at the start of treatment were similar. These studies show a relatively large variation in the height gain observed during rhGH therapy (0.6 -2.0 SDS), with the best results reported in patients who started treatment at a younger age. More recent data suggest that an additional spontaneous height gain of 1.0 SDS may occur during the end of the second decade of life in girls with NS, with a further gain of 0.57 SDS at the start of the third decade in boys with NS (58). Therefore, the potential positive effects of a late pubertal growth spurt should be considered in the projections of adult height in patients with NS. Osio and co-workers reported adult height in 18 children with NS who were treated with rhGH during a mean period of 7.5 years (53). These authors reported an increase in adult height from −2.9 to −1.2 SDS after completion of rhGH treatment in this uncontrolled study ( Table 2). In addition, they observed a further height gain of 0.9 SDS in males and 0.5 SDS in females during late puberty, so these patients attained their adult height at a mean age of 19.5 years (range 17-21 years). Noordam and co-workers have also published adult height data in 29 patients with NS (55). The mean adult height SDS of these patients was -1.5 and 1.2 according to National and Noonan standards, respectively. The mean adult height for boys was 171 cm and 157 cm for girls indicating a final height gain of 1.3 SDS, which corresponds to approximately 9.5 cm ( Table 2). Linear regression analysis showed that age at start of puberty made the only statistically significant contribution to the gain in adult height in this series. 
In addition, Tamburrino and coworkers reported growth data in patients affected by RASopathies with a molecularly confirmed diagnosis. Adult height was reported in 33 patients, including 16 patients with GH deficiency (GHD). This study showed that long-term rhGH therapy accelerates growth in GHD subjects affected by RASopathies, normalizing adult height for Ranke standards, although most patients did not show the characteristic catch-up growth observed in patients with isolated GHD treated with rhGH (59). Recently Malaquias and co-workers, reported 42 patients (35 PTPN11+) who were treated with rhGH, and followed 17 until adult height. This study showed that PTPN11+ patients had a greater growth response than PTPN11-patients. Among the patients that reached adult height, AH-CDC SDS and AH-NS SDS were -2.1 ± 0.7 and 0.7 ± 0.8, respectively. The total increase in height SDS was 1.3 ± 0.7 and 1.5 ± 0.6 for normal and NS standards, respectively (60). Observational Post-Marketing Studies The KIGS program (sponsored by Pfizer) for NS patients from the United Kingdom, showed a mean increase in adult height of 0.8 SDS. This was a cohort of 10 very short patients (-3.0 SDS) treated for an average of 5.3 years, whose age at initiation of therapy was relatively late (20). A study by Raaijmakers and co-workers, reported near adult height in 24 patients after at least 4 years of treatment. The median age at the start of treatment was 10.2 years and the median duration of rhGH treatment was 7.6 years. Median gain in height was 0.61 SDS according to Tanner standards, and 0.97 SDS according to Noonan standards (54) ( Table 2). In addition, Romano and co-workers, evaluated the response to rhGH therapy in 252 patients by analyzing growth data from children with NS who were enrolled in the National Cooperative Growth Study (sponsored by Genentech). These authors reported near adult height in 65 patients, and the mean height gain was 1.4 SDS, which represents a mean gain of 8.9 cm for girls and 10 cm for boys (16) ( Table 2). These results were compared with children treated with rhGH for idiopathic growth hormone deficiency and Turner Syndrome. Children with NS monitored for at least 4 years had a significant increase in height SDS scores, and their height velocity was greater than in girls with Turner syndrome treated for the same period. The most recent study of the KIGS program, published in 2019, describes a cohort of 140 patients treated up to adult height ( Table 2). The males reached an adult height of 163.7 cm (-2.0 SDS), and the females 149.5 cm (-2.5 SDS), with an increase in 1.1 SDS for boys and 1.3 SDS for girls, after a mean duration of treatment of 6.8 and 6.3 years, respectively (56). Variables Correlated With Increased Growth Several authors (50,61) have shown that early initiation and longer duration of therapy are associated with a greater height gain. Thus, initiation of rhGH therapy at a relatively late age may explain the modest results observed by some authors (20). In addition, the duration of rhGh therapy before puberty and the height SDS at the onset of puberty are important contributors to near adult height in patients with NS. This suggests that growth optimization may be possible with earlier initiation and longer duration of rhGH therapy in patients with this condition (56,59). No significant correlation with rhGH dose nor with gender, however, has been observed in most of these studies (56,61). 
In addition, whether the clinical phenotype is moderate or severe does not appear to be a predictor of the response to rhGH therapy. Both phenotypes respond similarly to rhGH treatment despite the significantly higher mean GH levels observed in the severe phenotype (62). In some short-term studies, a correlation between the growth response and the genotype has been suggested, with a lower growth response in patients who carry a PTPN11 pathogenic variant. This finding has not been confirmed by other authors, however, who have not observed any differences in height gain, height velocity SDS, adult height and/or serum IGF-1 in NS patients with or without PTPN11 pathogenic variants (16,56). Serum IGF-1 has been used to monitor the response to rhGH therapy in NS, and usually rises during rhGH therapy in parallel with an increase in height SDS (61). Two studies have investigated the response to rhGH treatment in NS patients with GH deficiency. The study by Tamburrino and co-workers (59) showed that GH-deficient patients, treated with the doses of rhGH employed for classical GH deficiency, had a gain in height SDS that placed them within the normal range for NS, but not for the normal population. This suggests that GH deficiency is not the only explanation for the short stature observed in some of these patients, and that the doses of rhGH employed for patients with classical GH deficiency are insufficient to normalize height in NS. Zavras and co-workers (63) assessed the growth response following GH therapy (0.25 mg/kg/week) in five GH-deficient NS (NSGHD) patients and in five idiopathic GH-deficient (IGHD) patients. At the beginning of GH treatment, height velocity was significantly lower in NSGHD children than in IGHD children. During the first three years of rhGH therapy, the NSGHD patients showed a slight increase in height (from −2.71 SDS to −2.44 SDS) and in height velocity (from −2.42 SDS to −0.23 SDS), but these parameters remained significantly lower than in IGHD children. In addition, after five years of rhGH therapy, height gain was higher in IGHD children (mean 28.3 cm) than in NSGHD patients (mean 23.6 cm). Safety of rhGH Therapy The overall experience with rhGH therapy in most short children is relatively reassuring. However, some studies have shown an increased risk for cardiovascular events and second tumors in children with a primary tumor treated with rhGH during childhood and adolescence (64,65). Other publications, however, have not shown a significant increase in overall mortality in low-risk patients, such as those with isolated GH deficiency or idiopathic short stature. In patients with an inherent increased mortality risk, such as those with a predisposition to or harboring tumors, the increased mortality rates appear to be related to the underlying diagnosis rather than to rhGH therapy (66). We should state, however, that none of the published series with patients with NS has reported serious adverse effects during rhGH therapy. Key parameters, such as BMI and markers of carbohydrate metabolism, usually remain within the normal range during rhGH therapy (47). Two prospective studies specifically designed to evaluate cardiac anatomy and function after 1 and 4 years of rhGH therapy did not show any change in myocardial function or ventricular wall thickness (20,67). In addition, RAS/MAPK hyperactivation is a common feature of many types of somatic malignancies.
NS represents a cancer-prone syndrome and is associated with increased risks for childhood leukemia and solid tumors, as shown by Kratz and co-workers (68). Some genotypic variants of NS are associated with the development of neoplasms, such as substitutions at codons p.Asp61 and p.Thr73Ile in SHP2 (PTPN11), which are associated with myeloid leukemia (69). Even though there are a few cases described in the literature, most studies have not shown an increase in the incidence of neoplasms in patients with NS treated with rhGH. It should be stated, however, that there are no long-term studies specifically designed to address this outcome, so continuing surveillance is of paramount importance in NS patients. CONCLUSION Despite the recent description of new genes associated with Noonan syndrome, and the availability of several murine models with pathogenic variants of the RAS/MAPK pathway, the molecular pathophysiology of the growth delay observed in these patients has not been elucidated. One of the most intriguing questions is the possible molecular link between RAS/MAPK hyperactivation and IGF-1 expression, and its relationship with growth plate development. In addition, therapy with rhGH increases height velocity in patients with NS, but firm conclusions regarding the effects of this therapy on adult height are not available. Therefore, there is a need for large controlled clinical trials in patients with this condition, in order to accurately assess the effects of rhGH therapy on adult height. In addition, given the complexity of this disorder, in terms of the high prevalence of cardiac defects and the possible risks of malignancy, it is very important to maintain strict surveillance of these patients. AUTHOR CONTRIBUTIONS FR developed the Introduction and the Growth delay in Noonan syndrome sections. XG developed the Growth hormone treatment for Noonan syndrome section. FC developed the Conclusion section and performed section integration and manuscript revision. All authors contributed to the article and approved the submitted version.
Limiting Behaviour of Fr\'echet Means in the Space of Phylogenetic Trees As demonstrated in our previous work on ${\boldsymbol T}_{4}$, the space of phylogenetic trees with four leaves, the global, as well as the local, topological structure of the space plays an important role in the non-classical limiting behaviour of the sample Fr\'echet means of a probability distribution on ${\boldsymbol T}_{4}$. Nevertheless, the techniques used in that paper were specific to ${\boldsymbol T}_{4}$ and cannot be adapted to analyse Fr\'echet means in the space ${\boldsymbol T}_{m}$ of phylogenetic trees with $m(\geqslant5)$ leaves. To investigate the latter, this paper first studies the log map of ${\boldsymbol T}_{m}$, a generalisation of the inverse of the exponential map on a Riemannian manifold. Then, in terms of a modified version of the log map, we characterise Fr\'echet means in ${\boldsymbol T}_{m}$ that lie in top-dimensional or co-dimension one strata. We derive the limiting distributions for the corresponding sample Fr\'echet means, generalising our previous results. In particular, the results show that, although they are related to the Gaussian distribution, the forms taken by the limiting distributions depend on the co-dimensions of the strata in which the Fr\'echet means lie. Introduction The concept of Fréchet means of random variables on a metric space is a generalisation of the least mean-square characterisation of Euclidean means: a point is a Fréchet mean of a probability measure μ on a metric space (M, d) if it minimises the Fréchet function for μ defined by $F_\mu(x) = \frac{1}{2} \int_M d(x, y)^2 \, d\mu(y)$, provided the integral on the right side is finite for at least one point x. Note that the factor 1/2 will simplify some later computations. The concept of Fréchet means has recently been used in the statistical analysis of data of a non-Euclidean nature. We refer readers to Bhattacharya and Patrangenaru (2005), Bhattacharya and Patrangenaru (2014), Dryden et al. (2014), Dryden and Mardia (1998) and Kendall and Le (2011), as well as the references therein, for the relevance of, and recent developments in, the study of various aspects of Fréchet means in Riemannian manifolds. The Fréchet mean has also been studied in the space of phylogenetic trees, as motivated by Billera et al. (2001) and Holmes (2003). It was first introduced to this space independently by Bacak (2014) and Miller et al. (2015), both of which gave methods for computing it. Limiting distributions of sample Fréchet means in the space of phylogenetic trees with four leaves were studied in Barden et al. (2013), and the Fréchet mean was used to analyse tree-shaped medical imaging data in Feragen et al. (2013), while principal geodesic analysis on the space of phylogenetic trees, a related statistical issue, was studied in Nye (2011), Nye (2014), and Feragen et al. (2013). A phylogenetic tree represents the evolutionary history of a set of organisms and is an important concept in evolutionary biology. Such a tree is a contractible graph, that is, a connected graph with no circuits, where one of its vertices of degree 1 is distinguished as the root of the tree and the other such vertices are (labelled) leaves. The space $T_m$ of phylogenetic trees with m leaves was first introduced in Billera et al. (2001). The important feature of the space is that each point represents a tree with a particular structure and specified lengths of its edges in such a way that both the structure and the edge lengths vary continuously in a natural way throughout the space.
The space is constructed by identifying faces of a disjoint union of Euclidean orthants, each corresponding to a different tree structure. In particular, it is a topologically stratified space and also a C AT (0), i.e. globally non-positively curved, space (cf. Bridson and Haefliger 1999). A detailed account of the underlying geometry of tree spaces can be found in Billera et al. (2001) and a brief summary can be found in the Appendix to Barden et al. (2013). As demonstrated in Basrak (2010) and Hotz et al. (2013) for T 3 and in Barden et al. (2013) for T 4 , the global, as well as the local, topological structure of the space of phylogenetic trees plays an important role in the limiting behaviour of sample Fréchet means. These results imply that the known results (cf. Kendall and Le 2011) on the limiting behaviour of sample Fréchet means in Riemannian manifolds cannot be applied directly. Moreover, due to the increasing complexity of the structure of T m as m increases, the techniques used in Barden et al. (2013) for T 4 could not be adapted to derive the limiting behaviour of sample Fréchet means in T m for general m. For example, although the natural isometric embedding of T 4 is in 10-dimensional Euclidean space R 10 , it is intrinsically 2-dimensional, being constructed from 15 quadrants identified three at a time along their common axes. This made it possible in Barden et al. (2013), following Billera et al. (2001), to represent T 4 as a union of certain quadrants embedded in R 3 in such a way that it was possible to visualise the geodesics explicitly. That is, naturally, not possible for m > 4. The need to describe geodesics explicitly arises as follows. In a complete manifold of non-positive curvature, the global minimum of a Fréchet function would be characterised by the vanishing of its derivative. In tree space, as in general stratified spaces, such derivatives do not exist at non-manifold points. However, directional derivatives for a Fréchet function, which serve our purpose, do exist at all points and for all tangential directions. They are defined via the log map, which is a generalisation of the inverse of the exponential map of Riemannian manifolds, and is expressed in terms of the lengths and initial tangent vectors of unit speed geodesics. In this paper, we derive the expression for the log map using the geometric structure of geodesics in T m obtained in Owen (2011) and Owen and Provan (2011). As a result, we are able to establish a central limit theorem for iid random variables having probability measure μ that has its Fréchet mean lying in a top-dimensional stratum. In this case, our central limit theorem shows that sample means behave in a predictable way, similar to the case for arbitrary Euclidean data although with a more complex covariance structure. In particular, our result says that, as the size of a sample increases, the mean of that sample approaches the true mean of the underlying distribution in a quantifiable way. This means that we can use the mean of sample data as an approximation of the true mean as we would do with Euclidean data. This opens the door for work on confidence intervals, such as Willis (2016). Moreover, we take advantage of the special structure of tree space in the neighbourhood of a stratum of co-dimension one to obtain the corresponding results when the Fréchet mean of μ lies in such a stratum. 
In particular, we show that, in this case, the limiting distribution can take one of the three possible forms, distinguished by the nature of its support. Unlike the Euclidean case, the limiting distributions in both cases here are expressed in terms of the log map at the Fréchet mean of μ. This is similar to the central limit theorem for sample Fréchet means on Riemannian manifolds (cf. Kendall and Le 2011). Although it may appear non-intuitive, it allows us to use the standard results on Euclidean space. For example, in the top-dimensional case, the limiting distribution is a Gaussian distribution and so some classical hypothesis tests can be carried out in a similar fashion to hypothesis tests for data lying in a Riemannian manifold as demonstrated in Dryden and Mardia (1998) for the statistical analysis of shape. However, in the case of codimension one, the limiting distribution is non-standard and so the classical hypothesis tests cannot easily be modified to apply. Further investigation is required and we aim to pursue this, as well as the applications of the results to phylogenetic trees, in future papers. The remainder of the paper is organised as follows. To obtain the directional derivatives of a Fréchet function, we need an explicit expression for the log map that is amenable to calculation. This in turn requires a detailed analysis of the geodesics which we carry out in the next section using results from Owen (2011) and Owen and Provan (2011). The resulting expression (6) for the log map in Theorem 1 and its modification (9) are then used in the following two sections which study the limiting distributions for sample Fréchet means in T m ; Sect. 3 concentrates on the case when the Fréchet means lie in the top-dimensional strata, while Sect. 4 deals with the case when they lie in the strata of co-dimension one. In the final section, we discuss some of the problems involved in generalising our results to the case that the Fréchet means lie in strata of arbitrary co-dimension. The log map on a top-dimensional stratum The log map is the generalisation of exp −1 , the inverse of the exponential map on a Riemannian manifold. For a tree T * in T m the log map, log T * , at T * takes the form as T varies, where v(T ) is a unit vector at T * along the geodesic from T * to T and d(T * , T ) is the distance between T * and T along that geodesic. This is well defined since T m is a globally non-positively curved space, or C AT (0)-space (cf. Bridson and Haefliger 1999), and so this geodesic is unique. Note that for data in Euclidean space, since tangent vectors at different points may be identified by parallel translation. log T * (T ) would be represented by the difference T − T * of the two position vectors. Then, log T (T * ) would be its negative T * − T . However, although for convenience we shall embed T m in a Euclidean space, that embedding is not unique and there is no canonical relation between log T * (T ), which is a vector tangent to T m at T * , and log T (T * ) which is tangent at T . The log map can be thought of as a projection from the more complicated tree space onto a simpler Euclidean space, or more generally onto a Euclidean cone, that has minimal distortion around T * , with the amount of possible distortion increasing as trees get further from T * . Specifically, the log map is a bijection for trees in the orthant of T * . For trees outside of this orthant, multiple trees can be mapped to the same Euclidean point, including two trees with no edges in common. 
Thus, we generally cannot use information about the position of a point on the tangent space to say something about the inverse tree, so the log map is primarily a mathematical tool rather than something that has biological meaning. To analyse this log map further, we first recall some relevant aspects of the structure of trees and tree spaces. Apart from the roots and leaves of a tree, which are the vertices of degree 1 mentioned above, there are no vertices of degree two and the remaining vertices, of degree at least 3, are called internal. An edge is called internal if both its vertices are. A tree with m labelled leaves and unspecified internal edge lengths determines a combinatorial type. Then, $T_m$ is a stratified space with a stratum for each such type: a given type with k ($\leqslant m - 2$) internal edges determines a stratum with k positive parameters ranging over the points of an open k-dimensional Euclidean orthant, each point representing the tree with those specific parameters as the lengths of its internal edges. Note that, for this paper, we shall only consider the internal edges, so by 'edge' we always mean 'internal edge' and, to simplify the notation, we consider $T_{m+2}$, rather than $T_m$. The metric on $T_{m+2}$ is induced by regarding the identification of a stratum τ with a Euclidean orthant O as an isometry. Then each face, or boundary orthant of codimension one, of O is identified with a boundary stratum σ of τ. A tree of type σ is obtained from a tree of type τ by coalescing the vertices $v_1$ and $v_2$, of degree p and q, of the edge whose parameter has become zero, to form a new vertex v of degree p + q − 2. See Fig. 1. We are particularly interested in the top-dimensional strata. These are formed by binary trees, in which all internal vertices have degree 3. A binary tree with m + 2 leaves has m + 1 internal vertices and m internal edges, so that the corresponding stratum has dimension m. There are (2m + 1)!! such strata in $T_{m+2}$ (cf. Schröder 1870). For these strata, the boundary relation results in two adjacent vertices of degree 3 coalescing to form a vertex of degree 4. Since each vertex of degree 4 can be formed in 3 different ways, each stratum of co-dimension one is a component of the boundary of three different top-dimensional strata. Figure 2 shows an example of these strata in $T_4$. If a tree $T^*$ lies in a top-dimensional stratum of $T_{m+2}$, since such a stratum can be identified with an orthant O in $\mathbb{R}^m$, we may identify the tangent space to $T_{m+2}$ at $T^*$ with $\mathbb{R}^m$. Then, for each point $T \in T_{m+2}$, the geodesic from $T^*$ to T in $T_{m+2}$ will start with a linear segment in O, which determines an initial unit tangent vector $v(T) \in \mathbb{R}^m$ at $T^*$. Thus, we may identify the image of the log map defined in (1) with the vector $d(T^*, T)\, v(T) \in \mathbb{R}^m$. For example, the space $T_3$ of trees with three leaves is the 'spider': three half Euclidean lines joined at their origins. Denoting the length of the edge e of T by $|e|_T$, then $d(T^*, T) = \big||e|_{T^*} - |e|_T\big|$ if $T^*$ and T lie in the same orthant of $T_3$, and $d(T^*, T) = |e|_{T^*} + |e|_T$ otherwise. Thus, the log map for $T_3$ can be expressed explicitly as: $\log_{T^*}(T) = (|e|_T - |e|_{T^*})\, e$ if T and $T^*$ are in the same orthant, and $-(|e|_T + |e|_{T^*})\, e$ otherwise, where e is the canonical unit vector determining the orthant in which $T^*$ lies. Note that we abuse notation by calling the (single) internal edge e in all 3 trees, despite these edges dividing the leaves in different ways.
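The $T_3$ case above is simple enough to compute directly. The following Python sketch, written only for illustration, encodes the distance and log map formulas just given and approximates a sample Fréchet mean on the spider by a crude grid search over the three legs; it is not the algorithm of Bacak (2014) or Miller et al. (2015), and the sample values are arbitrary.

```python
import numpy as np

# A tree in T_3 is encoded as (orthant, length): which of the three legs of the
# 'spider' it lies on, and the length of its single internal edge.
def dist_T3(t1, t2):
    """Geodesic distance on T_3, following the two cases given in the text."""
    (o1, x1), (o2, x2) = t1, t2
    return abs(x1 - x2) if o1 == o2 else x1 + x2

def log_T3(t_star, t):
    """Coefficient of the unit vector e in log_{T*}(T)."""
    (o1, x1), (o2, x2) = t_star, t
    return (x2 - x1) if o1 == o2 else -(x1 + x2)

def frechet_function(candidate, sample):
    """F(x) = (1/2) * average squared distance to the sample (empirical measure)."""
    return 0.5 * np.mean([dist_T3(candidate, t) ** 2 for t in sample])

sample = [(0, 1.0), (0, 2.0), (1, 0.5)]                     # three sample trees
grid = [(leg, x) for leg in range(3) for x in np.linspace(0.0, 3.0, 301)]
mean_hat = min(grid, key=lambda c: frechet_function(c, sample))
print(mean_hat, log_T3(mean_hat, sample[2]))                # roughly (0, 0.83) and the log of one point
```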
The explicit expression for the log map for the space $T_4$ of trees with four leaves is already much more complicated than this and was derived in Barden et al. (2013). To obtain the expression for the log map at $T^*$ for the space $T_{m+2}$ of trees with m + 2 (m > 2) leaves, we first summarise without proofs the description, given in Billera et al. (2001), Owen (2011), Owen and Provan (2011) and Vogtmann (2007), of the geodesic between two given trees in $T_{m+2}$. When an (internal) edge is removed from a tree, it splits the set of the leaves plus the root into two disjoint subsets, each having at least two members, and we identify the edges from different trees that induce the same split. Each edge has a 'type' that is specified by the subset of the corresponding split that does not contain the root. For example, in the tree in Fig. 3a, the edge labelled $x_3$ has the edge-type {a, b}, while the edge labelled $x_1$ has the edge-type {a, b, c, d}. There are M possible edge-types. Two edge-types are called compatible if they can occur in the same tree, and $T_{m+2}$ may be identified with a certain subset of $\mathbb{R}^M$, each possible edge-type being identified with a positive semi-axis in $\mathbb{R}^M$. To make this identification explicit, we choose a canonical order of the edges by first ordering the leaves and then taking the induced lexicographic ordering of the sets of (ordered) leaves that determine the edges. Then, if a set of mutually compatible edge-types is given and O(·) denotes the orthant spanned by the corresponding semi-axes in $\mathbb{R}^M$, each point of that orthant represents a tree with the combinatorial type determined by the set, and $T_{m+2}$ is the union of all such orthants. (Fig. 3c: the geodesic between the trees corresponding to $u^*$ and u is marked with the dashed line; the $(-x_1, -x_2, x_3)$ octant does not exist in tree space, but the $(-x_2, x_3)$ quadrant does, so the geodesic is restricted to lying in the grey area and bends at the points p and q. Fig. 3d: the isometric embedding of the grey area in (c) into $V_2$; intuitively, this corresponds to "unfolding" the bends. $u^*$ and u are mapped to $v^*$ and v, and the Euclidean geodesic between $v^*$ and v in $V_2$ is contained in the grey area, and thus can be mapped back onto the geodesic in tree space.) For a set of edges A in a tree T, define $\|A\|_T = \big(\sum_{e \in A} |e|_T^2\big)^{1/2}$ and write |A| for the number of edges in A. For two given trees $T^*$ and T, let $E^*$ and E be their respective edge sets, or sets of non-trivial splits. Assume first that $T^*$ and T have no common edge, i.e. $E^* \cap E = \emptyset$. Then, the geodesic from $T^*$ to T can be determined as follows. Lemma 1 Let $T^*$ and T be two trees with no common edges, lying in top-dimensional strata of $T_{m+2}$. Then, there is an integer k, $1 \leqslant k \leqslant m$, and a pair (A, B) of partitions $A = (A_1, \ldots, A_k)$ of $E^*$ and $B = (B_1, \ldots, B_k)$ of E, all subsets $A_i$ and $B_j$ being non-empty, such that (P1) for each i > j, the union $A_i \cup B_j$ is a set of mutually compatible edges; (P2) $\|A_1\|_{T^*}/\|B_1\|_T \leqslant \|A_2\|_{T^*}/\|B_2\|_T \leqslant \cdots \leqslant \|A_k\|_{T^*}/\|B_k\|_T$; (P3) for each i, there are no non-trivial partitions $C_1 \cup C_2$ of $A_i$ and $D_1 \cup D_2$ of $B_i$ such that $C_2 \cup D_1$ is a set of mutually compatible edges and $\|C_1\|_{T^*}/\|D_1\|_T < \|C_2\|_{T^*}/\|D_2\|_T$. The geodesic is the shortest path through the sequence of orthants $C = (O_0, O_1, \ldots, O_k)$ and has length $d(T^*, T) = \big\{\sum_{i=1}^{k} (\|A_i\|_{T^*} + \|B_i\|_T)^2\big\}^{1/2}$. Note that $O_0$ is the orthant in which $T^*$ lies and that T is in $O_k$. These results, developed from Vogtmann (2007), are given in this form, though not in a single lemma, in Owen and Provan (2011, section 2.3), where the properties (P1), (P2) and (P3) are stated in identical terms.
The edge set for O_i is denoted by E_i in the statement of Theorem 2.4 there and the formula for the length of the geodesic is equation (1) in that statement. Following Vogtmann (2007) and Owen and Provan (2011), respectively, we call the orthant sequence C the carrier of the geodesic, and the pair of partitions (A, B) the support of the geodesic. In general, the integer k and the support (A, B) need not be unique. However, they are unique if all the inequalities in (P2) are strict (Owen and Provan 2011, Remark, p. 7) and, in this case, we shall refer to the carrier and support as the minimal ones. Under the above assumption, the integer k appearing in Lemma 1 is the number of times that the geodesic includes a segment in the interior of one orthant followed by a segment in the interior of a neighbouring orthant. Hence the constraints 1 ≤ k ≤ m: k = 1 implies that the geodesic goes through the cone point and k = m that it passes through a sequence of top-dimensional orthants. We can now give an isometric embedding C̃ in R^m of C ⊆ R^M, with T* mapped to u* = (u*_1, ..., u*_m) in the positive orthant, where the u*_i > 0 represent the lengths of the edges of T*, and with T mapped to u = −(u_1, ..., u_m) in the negative orthant, where the u_i > 0 are the lengths of the edges of T. Let (t*_1, ..., t*_m) be the coordinates of T*, ordered by the canonical ordering, given just before Lemma 1, that embeds T_{m+2} in R^M. Then, we can reorder the coordinates u*_i such that the edges in A_1 correspond to the first |A_1| positive semi-axes in R^m, the edges in A_2 correspond to the next |A_2| positive semi-axes in R^m, etc., while the edges in B_1 correspond to the first |B_1| negative semi-axes in R^m, the edges in B_2 correspond to the next |B_2| negative semi-axes in R^m, etc. By (P1), the edge sets B_1, ..., B_i, A_{i+1}, ..., A_k are mutually compatible for all 0 ≤ i ≤ k, implying that the images of these edges in R^m are mutually orthogonal, and so they determine an isometric embedding of O_i, defined by (3), and hence the required isometric embedding C̃ of C. Let π be the inverse of the permutation of the coordinates described above, so that π(u*) = (t*_1, ..., t*_m). (4)
Example 1 Figure 3c shows the embedded geodesic and minimal carrier between the trees T* and T (see Fig. 3a, b), which correspond to the points u* and u, respectively. The minimal support is the corresponding pair of partitions (A, B) of the edges of T* and T indicated there. For convenience, π is the identity permutation in this case. The minimal carrier consists of the all-positive octant determined by x_1 > 0, x_2 > 0 and x_3 > 0; the 2-dimensional quadrant formed by the positive x_3 and negative x_2 axes; and the all-negative octant.
For any 1 ≤ l ≤ m, let V_l be the subspace of R^l that is the union of the (closed) orthants P_i, i = 0, ..., l, where P_i = {x ∈ R^l : x_j ≤ 0 for j ≤ i and x_j ≥ 0 for j > i}. For the given T*, T, and corresponding k from Lemma 1, there are k + 1 orthants in the carrier of the geodesic between T* and T. If k = m (the intrinsic dimension of T_{m+2}), then the carrier C is isometric to C̃ = V_m, with O_i coinciding with P_i and with the geodesic from u* to u being a straight line contained in C̃. Otherwise, if k < m, the space C̃ is strictly contained in V_m, and some of the top-dimensional orthants of V_m may not correspond to orthants in tree space. Additionally, the geodesic between u* and u in C̃ will bend at certain orthant boundaries within the ambient space V_m. We now give an isometric embedding onto V_k of a subspace of C̃ containing the geodesic in V_m such that the image geodesic is a straight line.
The geodesic between u* and u passes through k orthant boundaries. At the ith orthant boundary, the edges in A_i, which have been shrinking in length since the geodesic started at u*, simultaneously reach length 0, and the edges in B_i simultaneously appear in the tree with length 0 and start to grow in length. The length of each edge in A_i changes linearly as we move along the geodesic and thus, since these lengths all reach 0 at the same point, the ratios of these lengths to each other remain the same along the geodesic. An analogous statement can be made for the lengths of the edges in B_i (cf. Owen 2011, Corollary 4.3). The basic idea behind the embedding into V_k is that, because the lengths of the edges in A_i, for any i, are all linearly dependent on each other, we can represent those edges in V_k using only one dimension, and analogously for the edges in B_i. More specifically, for 1 ≤ i ≤ k, let v*_i be the projection of u* on the orthant O(A_i). That is, the coordinates of v*_i are the lengths of the edges in A_i, ordered as chosen above. Similarly, let v_i be the vector whose coordinates are the lengths of the edges in B_i, in that order. Then, the geodesic between T* and T in C is piece-wise linearly isometric with the Euclidean geodesic between the vectors v* = (‖v*_1‖, ..., ‖v*_k‖) and v = −(‖v_1‖, ..., ‖v_k‖) in V_k and, hence, in R^k. In particular, the Euclidean distance between these two Euclidean points is the same as the distance between T* and T in C. Thus, we have the following result, the essence of which appears in Owen (2011, Theorem 4.10), to which we refer readers for a more detailed proof.
Lemma 2 For any given T* and T in T_{m+2} with no common edge and with T* lying in a top-dimensional stratum, there is an integer k, 1 ≤ k ≤ m, for which there are two vectors v*, v ∈ R^k, depending on both T* and T, such that the geodesic between T* and T is homeomorphic with, and piece-wise linearly isometric to, the (straight) Euclidean geodesic between v* and v, where v* lies in the positive orthant of R^k and v in the closure of the negative orthant.
For Example 1, k = 2, and thus the grey area shown in Fig. 3c is isometrically mapped to V_2, as shown in Fig. 3d. In the general case where T* and T have a common edge, say e, this common edge determines, for each of the two trees, two quotient trees T*_i and T_i, i = 1, 2, described as follows (cf. Owen 2011; Vogtmann 2007). The trees T*_1 and T_1 are obtained by replacing the subtree 'below' e with a single new leaf, so that e becomes an external edge. These two replaced subtrees form the trees T*_2 and T_2, with the 'upper' vertex of the edge e becoming the new root. Then, the geodesic γ(t) between T* and T is isometric with (γ_e(t), γ_1(t), γ_2(t)), where γ_e is the linear path from |e|_{T*} to |e|_T and γ_i is the geodesic from T*_i to T_i in the corresponding tree space. For this, we treat T_1 and T_2, the spaces of trees with no internal edges, as single points, so that any geodesic in them is a constant path. Assuming that T*_i and T_i have no common edge for i = 1 or 2, we may obtain, as above, a straightened image of each geodesic γ_i in V_{k_i}, with T*_i represented in the positive orthant and T_i in the negative one. Combining these with the geodesic γ_e, which is already a straight linear segment, we have an isometric representation of γ as a straight linear segment in R_+ × V_{k_1} × V_{k_2}.
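The straightening of Lemma 2 can be sketched as follows, reusing the helper norm from the sketch after Lemma 1 and the same illustrative data structures; this is a sketch of the construction for supports with no common edges, not a transcription of the paper's formulas.

```python
import numpy as np

def straighten(support_A, support_B, len_star, len_T):
    """Return (v_star, v) in R^k for a support with no common edges:
    the i-th coordinate of v_star is ||A_i||_{T*} (positive orthant) and
    the i-th coordinate of v is -||B_i||_T (closure of the negative orthant)."""
    v_star = np.array([norm(A_i, len_star) for A_i in support_A])
    v = -np.array([norm(B_i, len_T) for B_i in support_B])
    return v_star, v

# The tree-space distance then equals np.linalg.norm(v_star - v), matching
# geodesic_length() from the earlier sketch.
```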
In this case, the sequence of strata containing the tree space geodesic between T * and T is contained in the product of the carriers for the relevant quotient trees, together with an additional factor for the common edge. For example, if 0 < t 1 < t 2 < t 3 < t 4 < 1 and the geodesic γ 1 spends [0, , 1] in orthant P 3 , then the carrier for the product geodesic would be the sub-sequence of the full lexicographically ordered sequence of nine products. If T * and T have more than one common edge, then either T * 1 and T 1 , or T * 2 and T 2 , will have a common edge and we may repeat the process. Having done so as often as necessary, we arrive at a sequence of orthants determined by the non-common edges of T * and T . These we relabel O 0 to O k as in Lemma 1. If O −1 is the orthant determined by the axes corresponding to the common edges of T * and T , then the sequence is the carrier of the geodesic from T * to T . Similarly, the support for the tree space geodesic between T * and T is found by interleaving the partitions in the supports of the relevant quotient trees, so that property (P2) is satisfied in the combined support. The resulting partitions A and B are then preceded by the set A 0 = B 0 of axes corresponding to the common edges so that In this generalised context, the value k = 0 is now possible, implying that all edges are common to T * and T . In other words, they lie in the same orthant. Note that this presentation differs slightly from that in Section 1.2 of Miller et al. (2015) in that, by collecting all the common edges in a single member A 0 = B 0 of the support, we are implicitly suppressing the axiom (P3) for that set. Note that the maximum value of the number k, which is determined by the non-common edges of T * and T , is m − |A 0 | in the general case. Definition 1 We call k, the number of changes of orthant in the unique minimal carrier of the geodesic from T * to T , the carrier number k(T * , T ) of T * and T . The minimal carrier and support determine the corresponding u * , v * i , v * and v in a similar manner to the special case where there is no common edge between T * and T given in Lemma 2, modified to account for the common edges. For this, the first |A 0 | coordinates of u * will be the (A 0 ) T * and those of u will be +(B 0 ) T ; for k = k(T * , T ) and 1 i k, and v i is modified similarly; and v * and v have additional first coordinates v * 0 = (A 0 ) T * and v 0 = (B 0 ) T , respectively. Then, with this modification, the geodesic between T * and T is homeomorphic and piece-wise linearly isometric, with the (straight) Euclidean geodesic between v * and v, where v * lies in the positive orthant of R |A 0 | + × R k . This generalisation of Lemma 2 to the general case was obtained, with different notation, in Billera et al. (2001), Owen (2011) andVogtmann (2007). Then, the log map as defined by (1) can be expressed using these vectors as follows. where the ordering of the coordinates is that induced by the canonical ordering for R M . For T ∈ T m+2 , there are vectors v * and v in R is the number of common edges of T * and T , k is the carrier number k(T * , T ) and v * lies in the positive orthant of the corresponding space, and a linear map ρ such that . . , k, with the additional initial coordinates v * 0 when T * and T have common edges, where v * i are as defined by (5), and v be determined similarly. 
The piece-wise linear isometry that straightens the geodesic from T * to T , given in Lemma 2 for the special case as well as the above for the general case, has an inverse on the positive orthant in R |A 0 | + × R k . This inverse is given by and the identity on the |A 0 | initial coordinates, where e i is the (|A 0 |+i)th standard basis vector in R Although it is not expressed precisely as it is here, the idea for a more detailed derivation of this in this case is captured in Theorem 4.4 in Owen (2011), where χ is denoted by g 0 . Since (2), is also an isometry preserving the initial segments of the geodesics. It follows that Noting that the maps π and χ are linear and π •χ(v * ) = t * , the required result follows by taking ρ = π • χ . Figure 4 shows the log map for the tree T * for Example 1. Although T m+2 is C AT (0), log T * is not a one-to-one map. In particular, if T 1 and T 2 are two different trees such that k(T * , T 1 ) = k(T * , T 2 ) is not maximal, then it is possible that log T * (T 1 ) = log T * (T 2 ), as observed in the case of T 4 in Barden et al. (2013). As another example, consider two trees T * , T ∈ T m+2 with no common edges such that the geodesic between them passes through the cone point with a given length l. Then for any other tree T with a geodesic to T * of length l, passing through the cone point, we also have that log T * (T ) = log T * (T ). Fig. 4 The log map for tree T * in Example 1. The vector between u * and log T * (T ) is shown as a dashed line. It coincides with the geodesic between T * and T in the starting orthant, but then continues into the ambient space, while the geodesic must bend to remain in the tree space Recalling that each component of v * i and v i is, respectively, the length of an edge in A i and B i then, with some ambiguity in the ordering of the edges of T * , another equivalent way to express log T * is To derive the limiting distribution of sample Fréchet means, the ordering must be kept explicit and independent of T . Hence, we have to use the expression for the log map given by (8), even though it is not as transparent as this one. Note also that, although the definitions for both π and χ implicitly depend on the ordering we chose for the coordinates of u * , the composition π • χ is independent of that choice, and so the log map is well defined, as long as we chose the same ordering for u * for both π and χ . The minimal carrier that determines the maps π and χ as well as the vectors v * and v depends on both T * and T , although we have suppressed that dependence in the notation. However, there are only finitely many choices for the carrier number k(T * , T ) and the minimal support when T * is fixed and T varies within a given stratum of T m+2 . In particular, if k(T * , T ) remains constant in a neighbourhood of (T * , T ), then π and χ do not change for small enough changes in T * and T . It follows that there are only finitely many possibilities for the form (6) that π •χ takes when T varies in T m+2 . Here, by form, we mean the algebraic expression of log T * as a map. That is, by 'log T * (T 1 ) and log T * (T 2 ) taking the same form', we mean that they can be obtained using a single algebraic expression for log T * . Since the permutation π returns all the axes to their canonical order, this expression is determined by the partition A of the edges of T * , with the subsets of non-common edges possibly permuted. 
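Combining the pieces above, one consistent reading of the construction gives the following sketch of the log map in the edge coordinates of T*, before re-applying the permutation π; it reuses norm from the earlier sketch and should be read as an illustration under our assumed data structures, not as a verbatim rendering of the paper's equation (8).

```python
def log_map_coords(common, support_A, support_B, len_star, len_T):
    """Coordinates of log_{T*}(T), indexed by the edges of T*.

    common    : edges shared by T* and T (the set A_0 = B_0)
    support_A : partition (A_1, ..., A_k) of the remaining edges of T*
    support_B : partition (B_1, ..., B_k) of the remaining edges of T
    len_star, len_T : dicts mapping edges to their lengths in T* and T
    """
    coords = {}
    for e in common:                  # common edges behave Euclideanly
        coords[e] = len_T[e] - len_star[e]
    for A_i, B_i in zip(support_A, support_B):
        scale = (norm(A_i, len_star) + norm(B_i, len_T)) / norm(A_i, len_star)
        for e in A_i:                 # edges of T* shrink towards and past 0
            coords[e] = -len_star[e] * scale
    return coords
```

For a single pair of edges with no common edge this reduces to the explicit T_3 formula given earlier, and the Euclidean norm of the returned vector equals the geodesic distance d(T*, T).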
For example, in the case of T 4 , log T * only takes two possible forms, depending on whether the geodesic from T * to T passes through the cone point or not where the cone point, the origin in R M , represents the tree whose two edges have zero length. The two corresponding subsets of T 4 are, respectively, indicated by the unions of light and dark grey regions in Figure 3 of Barden et al. (2013) when T * is the tree corresponding to (x i , x j ). The different possibilities for the form (6) give rise to a polyhedral subdivision of tree space T m+2 , defined as follows. Definition 2 For a fixed T * lying in a top-dimensional stratum of T m+2 , the polyhedral subdivision of tree space T m+2 , with respect to T * , is determined by the possible forms that log T * can take: each polyhedron of the subdivision is the closure of the set of trees T that have a particular form for log T * (T ). We shall call each such top-dimensional polyhedron a maximal cell of the polyhedral subdivision and let D T * be the subset of T m+2 consisting of all trees that lie on the boundaries of maximal cells determined by the polyhedral subdivision with respect to T * . Note that, if the geodesics to T 1 and T 2 from T * pass through the same sequence of strata, then log T * (T 1 ) and log T * (T 2 ) take the same form. However, the converse is not always true. For example, it is possible that T 1 and T 2 lie in different strata, but in the same maximal cell. Hence, the definition of polyhedral subdivision of T m+2 defined here is similar to, but coarser than, the concept of 'vistal polyhedral subdivision' given in section 3 of Miller et al. (2015). This is due to the fact that, while A and B in the minimal support play a symmetric role for the geodesic between T * and T , their roles in the log map log T * are asymmetric. When T varies, as long as the corresponding partition A either is unchanged or, at most, its subsets corresponding to the non-common edges are permuted, the algebraic expression for log T * remains the same. This polyhedral subdivision varies continuously with respect to T * . If T lies in the interior of a maximal cell of the subdivision and T * , itself in a top-dimensional (open) stratum, varies in a small enough neighbourhood, then the support for T * and T is unique. Then, the derivative of the log map will be well defined. When T lies on the boundary of a maximal cell of the subdivision, but not on a stratum boundary, the possible supports for T * and T are those determined by the polyhedra to which that boundary belongs. However, all these supports give rise to the same geodesic between T * and T , as they must, since T m+2 is a C AT (0)-space, and among them will be the minimal support that we are assuming for our analysis. Moreover, in this case, there is at least one non-minimal support for T * and T with the property that, for the corresponding v * and v, v Recall that from Definition 1 that the carrier number counts the number of orthants that the geodesic from T * to T meets in a linear segment of positive length. It will become clear later that the set of trees T for which, for a given T * , the carrier number k(T * , T ) is less than its possible maximum m − |A 0 |, where |A 0 | is the number of common edges of T * and T , plays a role that distinguishes the limiting distributions of sample Fréchet means in the tree spaces from those in Euclidean space. Hence, we introduce the following definition. 
Definition 3 A point T ∈ T_{m+2} is called singular, with respect to a tree T* lying in a top-dimensional stratum, if the carrier number k(T*, T) of T* and T is less than m − |A_0|. The set of such singular points will be denoted by S_{T*}.
The following result describes the image, under log_{T*}, in the tangent space at T* of the set S_{T*}: although S_{T*} may be rather complex, its image is relatively simple.
Corollary 1 If T* ∈ T_{m+2} lies in a top-dimensional stratum, then the image, under log_{T*}, of the set S_{T*} of the singular points with respect to T* is contained in the union of finitely many hyperplanes, each of the form {x ∈ R^m : x_i t*_j = x_j t*_i} for some i ≠ j.
Proof The number of orthants in the minimal carrier of the geodesic from T* to T is less than m − |A_0| if and only if the dimension j_i of some vector v*_i is greater than one for some i ≥ 1. Then, χ maps the line determined by e_i in R^{|A_0|}_+ × R^k into the subspace of R^m that is the intersection of the co-dimension one hyperplanes x_{i′} u*_{j′} = x_{j′} u*_{i′} in R^m, where j_1 + ··· + j_{i−1} < i′ ≠ j′ ≤ j_1 + ··· + j_i and where the ordering of the coordinates u*_i, and hence of the x_i, is as in the minimal carrier. Then, applying the permutation π and using the same notation for the permuted x-coordinates, the result follows.
(Fig. 5 caption: The grey area is part of the hyperplane x_1 · u*_2 = x_2 · u*_1, which contains some of the singular points for the log map log_{T*} for Example 1, where T* corresponds to u* = (u*_1, u*_2, u*_3).) For example, see Fig. 5 for an illustration of one of the hyperplanes for Example 1.
To describe the limiting behaviour of sample Fréchet means, it will be more convenient to have a modified version of the log map at T*, obtained by adding the coordinate vector t* of T* to log_{T*}(T). In the present context, where T* lies in a top-dimensional stratum, this modified map sends T to π ∘ χ(v). Note that the map corresponding to this modified log map obtained in Barden et al. (2013), in the case of T_4, was expressed as the composition of a similarly defined map on Q_5, a simpler auxiliary stratified space, with a map from Q_5 to T_4. Instead of the log map, that map on Q_5 was expressed in terms of the gradient of the squared distance function. The relationship between the latter and the log map shows that the resulting expression in Barden et al. (2013) is equivalent to the one defined here. The derivation of the modified map from log_{T*} implicitly requires that the tangent space to T_{m+2} at T*, in which the image of log_{T*} lies, be translated to the parallel copy R^m at the origin, in which it makes sense to add the coordinate vector t*. As a result, for all trees in the same stratum as T*, the images of the corresponding modified maps will lie in this same subspace R^m.
Fréchet means on a top-dimensional stratum
Let μ be a probability measure on T_{m+2} and assume that the Fréchet function for μ is finite. The space T_{m+2} being CAT(0) implies that the Fréchet function for μ is strictly convex so that, in particular, the Fréchet mean of μ is unique when it exists. In this section, we consider the case when this mean, denoted by T*, lies in a top-dimensional stratum. For this, as in the previous section, we identify any tree in the stratum of T_{m+2} in which T* lies with the point in the positive orthant of R^m having the lengths of its internal edges as coordinates in the canonical order. In particular, T* = (t*_1, ..., t*_m).
First, we use the log map to give a necessary and sufficient condition for T * to be the Fréchet mean of μ as follows, generalising the characterisation of Fréchet means on complete and connected Riemannian manifolds of non-negative curvature. In particular, it shows that, when T is a random variable on T m+2 with distribution μ and T * is in a top-dimensional stratum, then T * is the Fréchet mean of μ if and only if T * is the Euclidean mean of the Euclidean random variable T * (T ). Lemma 3 Assume that the Fréchet mean T * of μ lies in a top-dimensional stratum. Then, T * is characterised by the following condition: Proof It can be checked that, since T * lies in a top-dimensional stratum, the squared distance d(T * , T ) 2 is differentiable at T * and its gradient at T * is −2 log T * (T ). Thus, as discussed in Barden et al. (2013) T * , lying in a top-dimensional stratum, is the Fréchet mean of a given probability measure μ on T m+2 if and only if Then, the required result follows by re-expressing the above condition for T * to be the Fréchet mean of μ in terms of T * given by (9). The derivation of the central limit theorem for Fréchet means in T m+2 requires the study of the change of T * as T * changes with T remaining fixed. For this we recall that, for a fixed T , the minimal support for the geodesic between T * and T determines a particular maximal cell, in which T lies, of the polyhedral subdivision with respect to T * . When the minimal support for the geodesic between T * and T is the same as that for T * and T , we shall say that the two resulting maximal cells correspond to each other. We have the following result on the derivative of T * with respect to T * , noting that the derivative of the map where, in particular, when l = 1, M † x 1 = 0. (5) prior to Theorem 1 and v i defined analogously for i = 1, . . . , k, where k = k(T * , T ), then the derivative of T * (T ) at T * , with respect to T * , is given by Lemma 4 Assume that T * ∈ T m+2 lies in a top-dimensional stratum. Then, for any fixed T ∈ T m+2 lying in the interior of a maximal cell of the polyhedral subdivision with respect to T * , T * (T ) is differentiable with respect to T * . Moreover, for such T , if v * i is as defined in where v i = v i and P T * , T denotes the matrix representing the permutation π defined by (4). Note that, for the sub-matrix v i M † v * i to be non-zero, v * i must be at least 2-dimensional and, by definition, v i is non-positive so that T must lie in S T * . In particular, if k(T * , T ) = m − |A 0 |, in other words, if the geodesic between T * and T is 'straight', then the derivative of T * (T ) at T * is zero. This could be seen directly: since, in that case, the tree space geodesic between T * and T would be a Euclidean geodesic between them. Then, log T * (T ) = t − t * so that T * (T ) = t independent of t * . Proof By the discussion preceding the lemma, the edges common to T * and T will make no contribution to the derivative. Since the polyhedral subdivision is continuous with respect to T * , it is sufficient to show that, whenT * is sufficiently close to T * , so that in particular T * andT * lie in the same top stratum and T lies in the interior of the corresponding maximal cells of the polyhedral subdivisions with respect to T * and T * , we have To show (12), it is sufficient to assume that T * and T have no common edge. Moreover, since π T * , T , and so P T * , T , is a linear map, its derivative is identical with itself. 
Hence, by applying the appropriate permutation to re-order theũ * and u corresponding to T * and T when necessary, it is sufficient to show that Since T lies in the interior of a maximal cell of the polyhedral subdivision of T m+2 with respect to T * , then v where all v i are negative. By continuity, all these strict inequalities hold when v * i is replaced byṽ * i if T * is sufficiently close to T * . Hence, T lies in the interior of a maximal cell of the polyhedral subdivision of T m+2 with respect to T * . Thus, the only difference between the expressions for T * (T ) and T * (T ) is that v * and v * i in the former are replaced byṽ * andṽ * i , respectively, in the latter. It follows that, in this case, the difference The required result follows by applying the first-order Taylor expansion to each subvector component and using the formula preceding the statement of the Lemma. If T lies on the boundary of a maximal cell of the polyhedral subdivision of T m+2 with respect to T * , each choice of maximal cell of the polyhedral subdivision with respect to T * will determine a support for the geodesic from T * to T . If we restrict the neighbouring T * to move from T * in a direction such that T lies in the corresponding maximal cell of the polyhedral subdivision with respect to T * , then the argument in the proof for Lemma 4 still holds. Thus, T * (T ) will have all directional derivatives, at T * , with respect to T * having similar forms to that given in Lemma 4. However, some different directions will require different choices of maximal cell of the polyhedral subdivision with respect to T * in which T lies. Thus, the directional derivative will have different forms and T * (T ) will not be differentiable. Lemma 4 enables us to obtain the limiting distribution of the sample Fréchet means of a sequence of iid random variables on T m+2 when the Fréchet mean of the underlying probability measure lies in a top-dimensional stratum as follows, recalling that D T * , defined in Definition 2, is the subset of T m+2 consisting of all trees that lie on the boundaries of maximal cells determined by the polyhedral subdivision with respect to T * . On one hand, the result shows that, in this case, the limiting distribution, being a Gaussian distribution, bears a certain similarity to that of the sample means of Euclidean random variables. On the other hand, recalling that the derivative of T * (T ) at T * is zero if T / ∈ S T * , it also shows that the role played by S T * in the limiting behaviour of the sample Fréchet means is reflected in the covariance structure of the Gaussian distribution, departing from the limiting distribution of the sample means of Euclidean random variables. Theorem 2 Let μ be a probability measure on T m+2 with finite Fréchet function and with Fréchet mean T * lying in a top-dimensional stratum. Assume that μ(D 1} is a sequence of iid random variables in T m+2 with probability measure μ and denote byT n the sample Fréchet mean of T 1 , . . . , T n . Then, where V is the covariance matrix of the random variable log T * (T 1 ), or equivalently that of T * (T 1 ), and assuming that this inverse exists, and where M T * (T ) is the m × m matrix defined by (11). Proof The main argument underlying the proof is similar to that of the proof in Barden et al. (2013) for T 4 , i.e. to express the difference between the Fréchet mean of the underlying probability measure and the sample Fréchet means in terms of the difference T * (T i ) − T * (T i ). 
However, the proof in Barden et al. (2013) relies on an explicit embedding that is only valid for T 4 . As a consequence of Lemma 4, we can now achieve this for any tree space. SinceT n is the Fréchet sample mean of T 1 , . . . , T n , then for sufficiently large n,T n will be close to T * a.s. (cf. Ziezold 1977) and, in particular, lie in the same stratum as T * . Thus, the above results (10) and (12) give 1} is a sequence of iid random matrices, the following theorem follows from the standard Euclidean result as in Barden et al. (2013). Recalling that M T * (T 1 ) = 0 for T 1 not lying in the singularity set of log T * , we see that the contribution to E[M T * (T 1 )] consists of all singular points of log T * . For m = 1, i.e. the case for T 3 , the only possible choice for k is k = 1 = m which implies that M T * (T ) ≡ 0, so that the above result for this special case is the same as that obtained in Hotz et al. (2013). For m = 2, i.e. the case for T 4 , the only possible case for T lying in the singularity set of log T * is when k = 1, which corresponds to the geodesic between T * and T passing through the origin and T * ( so that the above result for this case recovers that in Barden et al. (2013). Note that μ induces, by log T * , a probability distribution μ on the tangent space of T m+2 at T * . Then, the sample Fréchet means of μ are the standard Euclidean means so that the rescaled sample Fréchet means have the limiting distribution N (0, V ). However, the sample Fréchet means of μ are generally different from log T * (T n ), the log images of the sample Fréchet means of μ, and there is no closed expression for the relationship between the two. It is also interesting to compare the result of Theorem 2 with the limiting distributions for the sample Fréchet means on Riemannian manifolds obtained in Kendall and Le (2011). Both limiting distributions take a similar form, with the role played by curvature in the case of manifolds being replaced here by the global topological structure of the tree space. Fréchet means on a stratum of co-dimension one A stratum O( ) of co-dimension one corresponding to the set of mutually compatible edge-types arises as a boundary face of a top-dimensional stratum when one, and only one, internal edge of the latter is given length zero so that its two vertices are coalesced to form a new vertex of valency four. The four incident edges determine disjoint subsets A, B, C, X of leaves and root, where X contains the root. Then, an additional internal edge may be introduced to , namely α, β or γ that correspond, respectively, to the sets of leaves A ∪ B, A ∪ C or B ∪ C. This gives top-dimensional strata O( ∪ α), O( ∪ β) or O( ∪ γ ), all of whose boundaries contain the stratum O( ). Moreover, these are the only such top-dimensional strata. For example, in If A > B > C is the canonical order of the sets of leaves, then α < β < γ is the induced order of the edges and corresponding semi-axes and, if we write the coordinates of a tree T * in O( ) as (t * 2 , . . . , t * m ), we can write the coordinates of trees in the neighbouring orthants as (t * α , t * β , t * γ , t * 2 , . . . , t * m ) where precisely two of t * α , t * β and t * γ are zero, since the remaining m − 1 edge-types are common to all the trees involved in these three orthants and their common boundary component. Note however that, although the coordinates (t * α , t * β , t * γ ) and (t * 2 , . . . 
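Before turning to lower-dimensional strata, it may help to see the T_3 case concretely. The following Python sketch is an illustration in the spirit of Hotz et al. (2013) rather than code from this paper: it computes the Fréchet mean of a sample on the 3-spider by folding two legs onto the negative half-line of the third, so that the mean lies in the interior of a leg exactly when the corresponding folded sample mean is positive, and otherwise sticks to the cone point.

```python
import numpy as np

def spider_frechet_mean(sample):
    """Fréchet mean of points on the 3-spider T_3.

    sample: list of (leg, length) pairs with leg in {0, 1, 2}, length >= 0.
    Returns (leg, length); length 0 means the mean is the cone point.
    """
    legs = np.array([p[0] for p in sample])
    lengths = np.array([p[1] for p in sample])
    for a in range(3):
        # Fold the other two legs onto the negative half-line of leg a.
        folded = np.where(legs == a, lengths, -lengths)
        m = folded.mean()
        if m > 0:
            return (a, float(m))   # mean lies in the interior of leg a
    return (0, 0.0)                # 'sticky' mean at the cone point

print(spider_frechet_mean([(0, 3.0), (1, 1.0), (2, 0.5)]))  # on leg 0
print(spider_frechet_mean([(0, 1.0), (1, 1.0), (2, 1.0)]))  # cone point
```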
, t * m ) can be chosen in canonical order, the resulting sequence (t * α , t * β , t * γ , t * 2 , . . . , t * m ) will not in general be in canonical order. It is clear now that the tree space T m+2 is not locally a manifold at any tree in the strata of co-dimension one. However, the stratification enables us to define, at a tree in a stratum of positive co-dimension, its tangent cone (cf. Bridson and Haefliger 1999) to consist of all initial tangent vectors of smooth curves starting from that tree. Then, the tangent cone to T m+2 at a tree in a stratum of co-dimension one is an open book (cf. Hotz et al. 2013) with three pages extending each of the three strata and with the stratum of co-dimension one in which the tree lies being extended to form its spine. The definition of the log map (1) applies equally to a tree T * in a stratum σ of co-dimension one: if the geodesic from T * to T passes through one of the three strata whose boundary includes σ , the unit vector component of log T * (T ) is taken in the same direction in the page of the tangent book that corresponds to that stratum. The scalar component of the log map is still the distance between the trees. Similarly, the definition (9) for T * remains valid in this case. From now on, we assume that T * lies in a stratum O( ) of co-dimension one. Although the squared distance d(T * , T ) 2 is no longer differentiable at T * , it has directional derivatives along all possible directions. Hence, the condition for T * to be the Fréchet mean of a probability measure μ on T m+2 , i.e. the condition for T * to satisfy becomes that the Fréchet function for μ has, at T * , non-negative directional derivatives along all possible directions. To investigate the latter condition, we label the three strata joined at the stratum O( ), of co-dimension one, in which T * lies as the α-, βand γstrata and denote by log α T * , log β T * and log γ T * , respectively, the modifications of the map log T * that agree with log T * on the domains for which the image lies in the pages of the tangent book tangent to the α-, βand γ -strata, respectively, and are zero elsewhere. That is, for example, if T is such that log T * (T ) lies in the page of the tangent book tangent to the α-orthant 0 otherwise. Write e α , e β and e γ for the outward unit vectors in the tangent book at T * lying in the page tangent to the α-, βand γ -strata, respectively, and orthogonal to its spine, and define We also define log s T * to be the modification of log T * with respect to the spine of the tangent book, the tangent space to O( ), analogous to the above log i T * . Then, we have the following characterisation of T * in a stratum of co-dimension one to be the Fréchet mean of μ, in terms of the derivatives of the Fréchet function along the three directions orthogonal to the tangent space to O( ), as well as the Euclidean mean of log s T * (T ), where T is a random variable on T m+2 with distribution μ. Lemma 5 With the notation and definition above, a given tree T * in a stratum O( ) of co-dimension one is the Fréchet mean of a given probability measure μ on T m+2 if and only if and T m+2 Proof Recall that since T * ∈ O( ), the condition for T * to be the Fréchet mean of a probability measure μ on T m+2 is that the Fréchet function for μ has, at T * , non-negative directional derivatives along all possible directions. 
For any vector w at T * which is tangent to O( ), the non-negativity of the directional derivative along w can be expressed as T m+2 log s T * (T ), w dμ(T ) 0. Since −w also tangent to O( ) at T * , this inequality must be an equality for all such w, which gives (15). Hence, by linearity, the non-negativity of directional derivatives, of the Fréchet function for μ, at T * along all possible directions may be characterised by requiring the non-negativity of the directional derivatives along the e α , e β and e γ directions, together with (15). However, analogously to the deduction in Barden et al. (2013), it can be checked that the requirement for the directional derivative along each of the e α , e β and e γ directions to be non-negative is, respectively, equivalent to each of the inequalities (14). To see the relation between the inequalities (14) and the asymptotic behaviour of sample Fréchet means, we will use a folding map F α (cf. Hotz et al. 2013) that operates on the tangent book at T * . The map F α folds the two pages that are tangent to the βand γ -strata onto each other, so that they form the complement in R m of the closure of the page tangent to the α-stratum. Define F β and F γ similarly. Then, F α • log T * maps T m+2 to R m and, in fact, is the limit of log T * when T * tends to T * from the α-stratum. In addition, we modify the definition (7) of χ T * , T (e i ) to be π −1 T * , T (e α ) when, and only when, the v * i in (7) contains t * α and is 1-dimensional. With this modification and by noting that the argument leading to Lemma 2, as well as its result, still hold when T * lies in a stratum of co-dimension one, the results of Theorem 1 and Lemma 4 can be extended to obtain the expression for F α • log T * and its derivative, and the analogues with β or γ replacing α, when the necessary care is taken of which stratum is to contain the initial geodesic. Moreover, These observations lead to the following lemma which extends the results obtained in Hotz et al. (2013) for open books and in Barden et al. (2013) for T 4 and relates the Fréchet means of large samples avoiding a stratum to the strict-positivity of the derivative of the Fréchet function along the corresponding orthogonal direction to the tangent space to O( ). Lemma 6 Let T * be the Fréchet mean of a given probability measure μ on T m+2 , and lie in a stratum O( ) of co-dimension one. Assume that μ(D T * ) = 0, where D T * is defined in Definition 2, and that, at T * , I α < I β + I γ . If {T i : i 1} is a sequence of iid random variables in T m+2 with probability measure μ then, for all sufficiently random large n, the sample Fréchet meanT n of T 1 , . . . , T n cannot lie in the α-stratum. Proof SinceT n converges to T * a.s. as n tends to infinity (cf. Ziezold 1977) we only need to show that, for all sufficiently large n,T n cannot lie in the neighbourhood of T * , restricted to the α-stratum. Consider the probability measure μ α induced from μ by F α •log T * on the Euclidean space. Then, under the given conditions, it follows from (16) that the Euclidean mean of μ α lies on the open half of the Euclidean space complement to the page tangent to the α-stratum (cf. also Hotz et al. (2013)). Thus, for all sufficiently large n, the Euclidean mean of the induced random variables F α • log T * (T 1 ), . . . , F α • log T * (T n ), does not lie in the closed half of this Euclidean space where the page tangent to the α-stratum lies. 
This implies that, for all sufficiently large n, If it were possible that, for arbitrarily large n,T n lies in the α-stratum, we could obtain a contradiction. Firstly, noting the observations prior to the lemma and following the arguments of the proof for Lemma 4, for all sufficiently large n, we have where M T * (T ) is given by (11) and F α • T * = F α • log T * +T * . However, on the one hand, since 1 n n i=1 T n (T i ) =T n and sinceT n lies in the α-stratum, T n , e α > 0, so that While, on the other hand, it follows from T * , e α = 0 and from (17) that It can also be checked that where v α i = v i,s , if t α corresponds to a coordinate of v * i,s and if the dimension of v * i,s is greater than one, and v α i = 0 otherwise. Then, since v α i 0, for each i Equations (20) and (21) together imply that, for all sufficiently large n, the e α -component of the right hand side of (18) is negative, which contradicts (19). With the result of Lemma 6, we now have the limiting distribution of the sample Fréchet means on T m+2 given by the next theorem, which is the generalisation of the result for T 4 given in Theorem 2 in Barden et al. (2013). In particular, it shows that the limiting distribution can take any of four possible forms, all related to a Gaussian distribution, depending on the number of the strictly positive derivatives of the Fréchet function along the three directions orthogonal to the tangent space to O( ). For clarity, we have assumed in the following that the coordinates (t i , t 2 , . . . , t m ), i = α, β, γ , discussed at the beginning of the section are all in the canonical order, so that they give the coordinates for trees in each of the three strata. Otherwise, a further permutation of the coordinates, which we have suppressed, will be necessary to bring them into canonical order and so to validate the result. Theorem 3 Let T * in a stratum O( ) of co-dimension one be the Fréchet mean of a given probability measure μ on T m+2 . Assume that μ(D T * ) = 0, where D T * is defined in Definition 2. Let further {T i : i 1} be a sequence of iid random variables in T m+2 with probability measure μ and writeT n for the sample Fréchet mean of T 1 , . . . , T n . (a) If all three inequalities in (14) are strict then, for all sufficiently large n,T n will lie in the stratum O( ) and the sequence √ n{(t n 2 , . . . ,t n m ) − (t * 2 , . . . , t * m )} of the coordinates of √ n{T n − T * } on the spine will converge in distribution to N (0, A s V s A s ) as n → ∞, where V s is the covariance matrix of the random variable log s T * (T 1 ), A s = P s AP s , P s is the projection matrix to the subspace of R m with the first coordinate removed and A is as given in (13). (14) is an equality and the other two are strict then, for all sufficiently large n,T n will lie in the α-stratum and (b) If the first inequality in and A is as in (13) with t * 1 = 0. (c) If the first two inequalities in (14) are equalities and the third is strict then, for all sufficiently large n,T n will lie either in the α-stratum or in the β-stratum and the limiting distribution of √ n{T n − T * }, as n → ∞, will take the same form as that of (η 1 , . . . , η m ) above, where the coordinates ofT n are taken as (t n α ,t n 2 , . . . ,t n m ), respectively (−t n β ,t n 2 , . . . ,t n m ), ifT n is in the α-stratum, respectively the β-stratum. (d) If all the equalities in (14) are actually equalities, then we have the same result as in (a). 
Proof (a) By Lemma 6, when n is sufficiently large,T n must lie in the stratum O( ) of co-dimension one so that it has zero first coordinate, i.e.T n = (0,t n 2 , . . . ,t n m ). Noting that F α • log sT n = log sT n , the result (15) of Lemma 5 shows thatt n i , i = 2, . . . , m, are the respective coordinates of 1 n n i=1 F α • T n (T i ), the sample Euclidean mean of F α • T n (T 1 ), . . . , F α • T n (T n ). Then, a modification of the proof of Theorem 2 to restrict it to the relevant coordinates of {F α • T n (T i ) : i 1} gives the required We deduce from the assumed strict inequalities, from (15) and (16) and from Lemma 6 that T * is the Euclidean mean of F α • T * (T 1 ) and that, when n is sufficiently large,T n can only lie in the closure of the α-stratum, so that it has coordinatesT n = (t α n ,t n 2 , · · ·t n m ). WriteF Then,F α • T n (T ) lies in R m and, by (15),t n j , j = 2, . . . , m, are the respective coordinates of 1 n n i=1F α • T n (T i ). To see relationship between 1 n n i=1F α • T n (T i ) andt n α , we note that, ift n α > 0, where the first equality follows from the definition ofF α • T n (T ) and the second follows from Lemma 3 asT n lies in a top-dimensional stratum. Hence, On the other hand, ift α n = 0, thenT n lies in O( ) and Applying Lemma 5 and (16) to the empirical distribution centred on T 1 , . . . , T n with equal weights 1/n, we also have Thus,t n α = max 0, Now, similarly to the proofs of Theorem 2 and Lemma 6, the observations prior to Lemma 6 imply that where M T * (T ) is given by (11). Since the first coordinate of T * is zero, so too are the entries, except for the diagonal one, in the first row and column of M T * (T ) and so also are the corresponding entries in the matrix A. Moreover, noting the comments following Lemma 4 and the definition of M † prior to that lemma, we see that the first diagonal entry of M T * (T ) is always non-positive. Thus, the first diagonal entry of A must be positive, so that this is also the case for {I − 1 n n i=1 M T * (T i )} −1 , when n is sufficiently large. Thus, whent n α > 0, it follows from (23) and (24) that In particular, for all sufficiently large n, the first coordinate of the random vector given by the first term on the right is positive. Whent n α = 0, the above approximation still holds except for the first coordinate. In that case, (T n − T * )M T * (T i ), e α = 0, following from the form of M T * (T ) noted above, and by (24), for sufficiently large n, up to higher order terms, which is equivalent to , e α 0 up to higher order terms. Hence, for sufficiently large n, we have so that the required result follows from a similar argument to that of the proof for Theorem 2. (c) In this case, it follows from Lemma 5 that T * is the Euclidean mean both of F α • T * (T 1 ) and of F β • T * (T 1 ). Moreover, the integral I γ becomes zero and so, since the integrand is non-negative, the support of the measure on the tangent book at T * induced by μ is contained in the union of the leaves tangent to the αand β-strata together with the spine. (d) Noting that all integrands in (14) are non-negative, the three equalities will together imply that log T * (T 1 ) = log s T * (T 1 ) a.s., so that for i = α, β, γ On the other hand, if it were possible that, for arbitrarily large n,T n lies in one of the α-βor γ -strata, say the α-stratum, then T n − T * , e α > 0. 
On the other hand, sinceT and since, as noted in ( This contradiction implies that, for all sufficiently large n,T n must lie in the stratum O( ) of co-dimension one. Thus, the argument for (a) implies that, when the inequalities in (14) are all equalities, the central limit theorem for the sample Fréchet means takes the same form as that when the three inequalities are all strict. Similar to the note at the end of the previous section, one can also consider the distribution μ , induced by log T * from μ on the tangent book of T m+2 at T * . Then, one can apply the result of Hotz et al. (2013) to obtain the limiting distribution of the sample Fréchet means of μ . Again, although the limiting distribution obtained in this way retains the local topological feature of the space, the influence of the global topological structure is lost. More importantly, since there is no clear relationship between the sample Fréchet means of μ and μ , the limiting distribution for the former cannot be easily deduced from that for the latter. Strata of higher co-dimension The structure of tree space in the neighbourhood of a stratum of higher co-dimension is basically similar to, but in detail rather more complex than, that of a stratum of co-dimension one. For example, a stratum σ of co-dimension l, where 2 l m, corresponds to a set of m − l mutually compatible edge-types. It arises as a boundary (m − l)-dimensional face of a stratum τ of co-dimension l , where 0 l < l and when the internal edges of the trees in σ are a particular subset of m − l of the internal edges of the trees in τ . For this situation, we say that σ bounds τ and τ co-bounds σ . Recall from the previous section that the tangent cone to T m+2 at a tree T in σ consists of all initial tangent vectors to smooth curves starting from T , the smoothness only being one-sided at T . For simplicity assume, without loss of generality, that under the isometric embedding of T m+2 in R M all trees in σ have zero for their first l coordinates. Then, the tangent cone at T has a stratification analogous to that of T m+2 itself in the neighbourhood of T : for each stratum τ of co-dimension l that co-bounds σ in T m+2 there is a stratum (R l−l τ ) + × R m−l in the tangent cone at T , which may be identified with a subset of the full tangent space of R M at T , where R m−l is the (full) tangent subspace to σ at T and (R l−l τ ) + is the orthant determined by the edge-types that have positive length in τ but zero length in σ . For example, the cone point in T 4 is a stratum of co-dimension two. Its tangent cone can be identified with T 4 itself. This rather involved structure of the tangent cone results in a much more complicated description of the log map and, consequently, of its behaviour. Nevertheless, with the above conventions, it is possible to generalise our expression for the log map to this wider context and hence to obtain analogues of Theorem 1 as well as Lemma 4. These results can then be used to describe, in a fashion similar to those of Lemmas 5 and 6, certain limiting behaviour of sample Fréchet means when their limit lies in a stratum of higher co-dimension. For example, the limiting behaviour of sample Fréchet means in T 4 when the true Fréchet mean lies at the cone point has been studied in Barden et al. (2013). 
The picture given there is incomplete and, although those results can be further refined and improved, it is clear that a complete description of the limiting behaviour of sample Fréchet means in the wider context is still a challenge and the global topological structure of the space will play a crucial role.
GproteinDb in 2024: new G protein-GPCR couplings, AlphaFold2-multimer models and interface interactions
Abstract
G proteins are the major signal proteins of ∼800 receptors for medicines, hormones, neurotransmitters, tastants and odorants. GproteinDb offers integrated genomic, structural, and pharmacological data and tools for analysis, visualization and experiment design. Here, we present the first major update of GproteinDb, greatly expanding its coupling data and structural templates, adding AlphaFold2 structure models of GPCR-G protein complexes and advancing the interactive analysis tools for their interfaces underlying coupling selectivity. We present insights on coupling agreement across datasets and parameters, including constitutive activity, agonist-induced activity and kinetics. GproteinDb is accessible at https://gproteindb.org.
Introduction
G proteins are the major signal proteins of ∼800 receptors for hormones, neurotransmitters, tastants and odorants, including numerous drug targets (1-3). There are 16 human G proteins grouped in four families with distinct downstream signaling pathways: Gs (Gs and Golf), Gi/o (Gi1, Gi2, Gi3, Go, Gz, Gt1, Gt2, Ggust), Gq/11 (Gq, G11, G14 and G15) and G12/13 (G12 and G13). Despite their relatively low numbers, the G protein subtypes/families can mediate numerous distinct cellular and physiological responses, as receptors selectively engage different G protein profiles spatially and temporally (tissue and kinetic selectivity, respectively) (4-6). Furthermore, the same receptor, when bound to different ligands, can exhibit biased signaling via alternative G proteins or other signaling proteins, a mechanism allowing for the design of safer drugs by avoiding undesired G protein signaling causing adverse effects (7-9). Hence, it is of great importance to advance the fundamental structure-function understanding of G protein signaling and to explore new therapeutic mechanisms, including biased ligands and inhibitors of constitutively active cancer mutants (10).
GproteinDb is an online research platform first published in 2022 (11). In addition to a dedicated web site (https://gproteindb.org), its menu system is also available via GPCRdb (2), ArrestinDb (12) and the Biased Signaling Atlas (13). GproteinDb offers G protein couplings integrating three major datasets (6), crystal and cryo-EM structures with annotations extending the PDB (14), tailored tools to investigate GPCR-G protein interface interactions, and predicted selectivity determinants enabling mutagenesis experiments. Despite being relatively new, GproteinDb has already supported a breadth of original research studies spanning e.g. autoregulation of signaling (15), allosteric modulation (16) and binding sites (17), GPCR-G protein selectivity (18), as well as community guidelines for ligand bias (19) and a review on GPCR signaling (20).
Here, we present the first major update of GproteinDb, greatly expanding its structural templates, adding AlphaFold2 structure models of GPCR-G protein complexes and advancing the interactive analysis tools for their interfaces underlying coupling selectivity. These data and tools should serve to extend the use cases of this comprehensive research platform (Figure 1) across computational, structural, pharmacological and drug design applications.
G protein coupling data processing
This release excludes data from chimeric G proteins. For this reason, we only include Gq (wildtype) couplings from the Inoue TGFα shedding dataset (5) and chose not to include a new dataset from the Orlandi lab (21). For the Inoue TGFα shedding dataset we also applied a cutoff of log(Emax/EC50) ≥ 7.0, which was based on the largest agreement with couplings and non-couplings determined by both the Bouvier GEMTA and Martemyanov FreeGβγ-Nluc datasets. The new Lambert RGB-GDP datasets (22-24) were subjected to a cutoff of 0.009 (defined by outlier identification with a false discovery rate of 1%). The GEMTA biosensors label/measure distinct effectors of the Gi/o, Gq/11 and G12/13 families, and EMTA (Gs) labels Gα (4). To enable comparison across G protein families, we min-max normalized the Emax values from each of these four biosensors, setting the highest value among all GPCR-G protein pairs to 100. No normalization was needed for the other biosensors, for which the different G protein subtypes are measured with the same labeled molecules. For the two 5-HT7 receptor isoforms in the Martemyanov FreeGβγ-Nluc dataset (25), we used the full-length isoform, 5-HT7A, while excluding the truncated isoform, 5-HT7B. Furthermore, the heterodimer of GABA-B1/GABA-B2 was stored for GABA-B2, as GproteinDb does not yet capture receptor dimers. The MS Excel file containing all coupling datasets and their processing for import into GproteinDb can be downloaded via a link in the Couplings page.
Refining G protein crystal and cryo-EM structures
We updated our refined complex structures from (11) to use our AlphaFold2-Multimer complex models (next section). Hence, instead of using experimental structures of other receptors, the AlphaFold2-Multimer model of the same receptor is used as a swap-in template to fill in missing coordinates of Gα proteins and receptors. Each structure and swap-in template were superposed based on the backbones of four residues flanking both sides (one side for termini) of the segment missing coordinates. To avoid clashes, GPCR C-termini that were swapped in from the AlphaFold model were truncated to 10 residues. Finally, any residues with clashes or flanking swap-in sites were optimized using MODELLER (26). Mutated residues were reverted to wildtype. For chimeric Gα proteins, we used the subtype described by the authors when this referred to a single subtype. When the authors listed multiple wildtype Gα subunits, we used the construct sequence of G.H5 to determine the subtype. We mapped the residues of chimeric Gα constructs to wildtype sequences by implementing a BLAST-like sequence alignment. This resolved incorrect sequence mappings and increased the quality of structure refinement of the G proteins. The alignment of chimeras and wildtype Gα subunits can be found in FASTA format at https://github.com/protwis/gpcrdb_data/blob/master/g_protein_data/g_protein_chimeras_gapped.fasta.
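As a rough illustration of the coupling data processing described above, the following pandas sketch applies the log(Emax/EC50) ≥ 7.0 cutoff to rows from the Inoue TGFα shedding dataset and min-max normalises Emax within each biosensor so that the highest GPCR-G protein pair becomes 100; the column names, dataset label and table layout are assumptions for illustration, not the actual GproteinDb import format.

```python
import pandas as pd

def process_couplings(df):
    """Illustrative processing of a coupling table.

    Assumed columns: 'dataset', 'biosensor', 'receptor', 'g_protein',
    'log_emax_ec50', 'emax'.
    """
    # Cutoff for the Inoue TGF-alpha shedding data only: keep pairs with
    # log(Emax / EC50) >= 7.0 and treat the rest as non-couplings.
    is_inoue = df["dataset"] == "Inoue TGFa shedding"   # assumed label
    df = df[~is_inoue | (df["log_emax_ec50"] >= 7.0)].copy()

    # Min-max normalise Emax within each biosensor so that the highest value
    # among all GPCR-G protein pairs becomes 100.
    def minmax(s):
        lo, hi = s.min(), s.max()
        return 100 * (s - lo) / (hi - lo) if hi > lo else s * 0 + 100

    df["emax_norm"] = df.groupby("biosensor")["emax"].transform(minmax)
    return df
```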
Building GPCR-G protein complex structure models
We build models of GPCRs with the heterotrimeric G proteins that have been identified as the primary transducer from one of the quantitative coupling datasets in GproteinDb (1124 such models completed at the time of writing). Furthermore, we build complex models of every human Gα protein and human GPCR in GPCRdb (2). All models were built using AlphaFold2 version 2.3.1 with the Multimer option and Amber relaxation (27). Structural templates were included up to 1 January 2023. We employed a diverse set of metrics to assess model quality. Firstly, to evaluate the overall structural quality, we employed AlphaFold2's predicted version of the template modeling (TM) score. The TM-score is calculated by measuring pairwise distances between a superimposed predicted model and a reference structure. It refines itself through various superimpositions, reducing the influence of local variations and emphasizing topological similarity. The predicted version provided by AlphaFold2, known as pTM, is an estimate of the TM-score that uses a neural network to predict the Cα positioning errors for TM-score calculations. Secondly, we used the interface predicted TM-score (ipTM) to assess the quality of the interface between the GPCR and the Gα. The interfaces are evaluated by comparing residue interactions between protein chains. We established thresholds of pTM ≥ 0.5 and ipTM ≥ 0.2, both ranging from 0 to 1, with 1 signifying a highly accurate model. This choice was guided by our examination of the relative positioning of the Gα and GPCR, with emphasis on preventing loop knots. A pTM cutoff of ≥ 0.5 indicated good overall model quality and enabled us to adopt a more permissive ipTM (≥ 0.2). This approach ensured the expected positioning of the Gα relative to the GPCR. These cut-offs are consistent with the levels of significance of the TM scores identified in previous studies assessing protein fold similarity (28). Thirdly, we also assessed the relative positioning of Gα and GPCR using the predicted aligned error (PAE). The PAE is a matrix that represents the positional error of the Cα atoms of predicted models (29). Utilizing the data from this matrix, we calculated the PAE mean focusing on the section that scores the position of the Gα with respect to the GPCR. We excluded the long flexible C- and N-termini of the GPCRs due to the uncertainty they introduce. Furthermore, with PAE scores ranging from 0 to 31.75 Å, we established a cut-off of 26 Å for the PAE mean score of the compared residues (even if the ipTM score was satisfactory). Finally, we visualized the local per-residue consistency within models using predicted local Distance Difference Test (pLDDT) values (30), which span 0-100.
To increase model accuracy and avoid cross-chain clashes, it was necessary to truncate or delete very long receptor segments in some GPCR classes. For Class A GPCRs, the third intracellular loop (ICL3) was shortened to 14 residues. For Class B2 (Adhesion) GPCRs, we truncated 18 receptors directly after their GPS site (HL|T/S motif), in line with their self-activating mechanism where GPS cleavage causes the truncated N-terminal 'Stachel' sequence to activate the seven-transmembrane domain (31). Except for AGRA1, which has a short N-terminus, the N-termini of the remaining 14 class B2 receptors lacking a GPS site were also truncated just before the first transmembrane helix, as were those of Class C GPCRs (removing the large ligand-binding Venus flytrap domain (VFTD)).
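The quality thresholds above translate into a simple filter; the sketch below is an illustrative Python implementation of the pTM ≥ 0.5, ipTM ≥ 0.2 and interface PAE mean ≤ 26 Å cut-offs, with the layout of the AlphaFold2 output (a square PAE matrix and residue index lists) treated as an assumption.

```python
import numpy as np

def interface_pae_mean(pae_matrix, receptor_idx, galpha_idx):
    """Mean of the PAE sub-matrix scoring the position of the G alpha residues
    relative to the receptor residues; indices are assumed to be 0-based model
    positions with the long flexible receptor termini already excluded."""
    pae = np.asarray(pae_matrix)
    return float(pae[np.ix_(galpha_idx, receptor_idx)].mean())

def passes_quality_cutoffs(ptm, iptm, pae_mean):
    """pTM >= 0.5 (overall fold), ipTM >= 0.2 (interface) and an interface
    PAE mean of at most 26 Angstrom for the relative G alpha/GPCR placement."""
    return ptm >= 0.5 and iptm >= 0.2 and pae_mean <= 26.0
```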
Structure superposition tool A Structure superposition tool was developed by modifying and extending the code for a previously published similar tool for GPCRs (32) and implementing the Common Gα numbering of residues (33). GPCR-G protein interface interactions We expanded the interface interaction tools (Interface interaction & profiling page) to include our refined complex structures and new complex models. To enable detailed investigation of specific complexes, we developed a 'biflare' plot, a visualization tool for interface interactions in single complexes. It uses a customized logic for spatial amino acid placement. Gα residues are placed into an inner circle and are ordered based on segment and generic residue number. GPCR residues are positioned in an outer circle based on the average angle calculated across all the interacting Gα residues and the center of the inner circle. Javascript and the D3 library were used to design and develop the visualization of the plot. Python and the Django framework were used for data handling and processing. Application Programming Interface (API) We used the REST framework (https://www.django-rest-framework.org/) to implement API calls enabling programmatic retrieval of G protein data. New coupling data and Datasets page GproteinDb serves as a one-stop-shop for G protein couplings integrated from multiple datasets, which can now be browsed in a new page, Datasets. This release has added three new datasets (lab and biosensor): Martemyanov FreeGβγ-Nluc (25), Lambert RGB-GDP (22-24) and Inoue NanoBiT-G (5), respectively. While the new datasets cover fewer G protein subtypes (4-8 compared to 12-15), they represent all four G protein families. However, G15 couplings to GPCRs often differ substantially from other members of the Gq/11 family (4, 6, 25) and were not tested in the Inoue NanoBiT-G and 2019-2020 Lambert RGB-GDP publications. The Inoue NanoBiT-G couplings, like those previously included, represent log(Emax/EC50) values (6, 11). In contrast, the Martemyanov FreeGβγ-Nluc (25) and Lambert RGB-GDP (22-24) datasets bring the new parameters activation rate (s−1) and Econstitutive (efficacy from constitutively active receptors), respectively. Interestingly, the latter dataset spans 48 orphan GPCRs not activated by an agonist ligand (24) and 27 other, non-orphan receptors both with and without ligand. Measuring constitutive activity made it possible to profile the couplings of 22 of the 48 orphan receptors that lack agonists and are therefore not covered by previous datasets, which typically use the endogenous ligand. The increased coverage of couplings from other datasets, and a more stringent view on the biological system, led us to remove data for chimeric G proteins, reducing the Inoue TGFα shedding dataset to only Gq, which is wildtype.
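The biflare plot's spatial logic (Gα residues on an inner circle, each GPCR residue on an outer circle at the average angle of its interacting Gα residues) can be sketched as below. The actual tool is implemented in JavaScript with D3; this Python version only illustrates the placement rule, and the input formats are assumptions.

```python
import math

# Illustration of the placement rule only; the real plot is implemented in
# JavaScript/D3 and the input formats here are assumptions.
def place_biflare(galpha_residues, contacts, r_inner=1.0, r_outer=2.0):
    """galpha_residues: ordered list of G alpha residue labels.
    contacts: dict mapping each GPCR residue to the G alpha residues it touches."""
    n = len(galpha_residues)
    inner_angle = {res: 2.0 * math.pi * i / n for i, res in enumerate(galpha_residues)}
    outer_angle = {}
    for gpcr_res, partners in contacts.items():
        angles = [inner_angle[p] for p in partners]
        # circular mean of the partner angles (naive unit-vector average)
        x = sum(math.cos(a) for a in angles)
        y = sum(math.sin(a) for a in angles)
        outer_angle[gpcr_res] = math.atan2(y, x)
    inner_xy = {r: (r_inner * math.cos(a), r_inner * math.sin(a)) for r, a in inner_angle.items()}
    outer_xy = {r: (r_outer * math.cos(a), r_outer * math.sin(a)) for r, a in outer_angle.items()}
    return inner_xy, outer_xy
```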
New Biosensors page A new page Biosensors provides an overview of the biosensors used to determine the couplings deposited in GproteinDb ( 4-5 , 22 , 24 , 34-42 ).To make it easier to track the evolution of biosensors, we introduce a classification of biosensors by their type, which is defined based on the labeled molecules (column Biosensor type).For example, the biosensor type GG refers to G α -G βγ dissociation assays that arose in the mid 2000s (GABY) ( 35 ) and have been optimized and expanded to additional reporters and G protein subtypes in 2019 (NanoBiT-G) ( 5 ) and 2020 (TRUPATH) ( 37 ), respectively.To clarify what has been measured we list the measured process (e.g.G α -G βγ dissociation) and its downstream steps (after receptor binding).For further reference, we provide the original publication year and link. New Couplings page This release features a new Couplings page with unique layout and functionality.A new section of the data browser tabulates the number of supporting datasets for G protein family couplings to a given GPCR.This allows filtering all couplings (default) to a subset with experimental evidence from several sources.To show G protein coupling selectivity or promiscuity of GPCRs, we provide: (i) a G protein family count, (ii) the primary transducer family / subtype (marked 1 ), (iii) family rank orders (1 , 2 , 3 and 4 ) and (iv) family / subtype percent activity (relative primary transducer).This allows swift cross-dataset analysis of any GPCR's coupling profile or a primary transducer's set of receptors.Furthermore, the family rank orders contain the note 'nc' for experimentally determined non-couplings-enabling analyses that contrast sets of coupling versus non-coupling GPCR-G protein pairs, for example to discover determinants of coupling selectivity. Each GPCR has multiple rows -one for each dataset.However, non-redundantconsensus couplings can be shown by applying a filter "GproteinDb" in the "Lab" column.The GproteinDb row contains a mean across datasets in the family / subtype percent activity sections and use these to calculate consensus family rank orders (for receptors with only GtoPdb data this is just 1' or 2').This release presents alternative coupling parameters E constitutive (efficacy from constitutively active receptors), log(Emax / EC 50 ), and activation rate (s −1 ), which are defined first in the section 'Quantitative values'.Selected couplings (via filtering) can be downloaded as a MS Excel spreadsheet for further analysis.Furthermore, the Couplings page provides a link to the MS Excel file via which all couplings were imported to GproteinDb and containing additional raw data, formulas (for normalizations, cut-offs etc.) and statistical significances (SDs or SEMs) not shown online. Updated Coupling selectivity (tree) / (Venn) pages The Coupling selectivity (tree) page maps the G protein family couplings onto trees of receptors divided by class (separate trees) and classified by their ligand types and receptor families (share endogenous ligand) ( 11 ).The Coupling selectivity (Venn) page intersects G protein families and provides a list of receptors based on their coupling selectivity.To use GproteinDb's new coupling datasets, both pages now include all couplings datasets and an option to use all couplings or only those supported by the Guide to Pharmacology (GtoPdb) database or two or more other datasets. 
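The selectivity descriptors listed above (primary transducer, family rank order, 'nc' for experimentally determined non-couplings) can be derived from per-family percent-activity values along the following lines; the data layout and the zero threshold are assumptions of this sketch rather than the database's actual implementation.

```python
# Sketch of deriving the descriptors from per-family percent-activity values;
# the data layout and zero threshold are assumptions, not the database schema.
def rank_families(family_activity):
    """family_activity: e.g. {"Gs": 100.0, "Gi/o": 35.0, "Gq/11": 0.0, "G12/13": None}
    (None = not tested). Returns (primary_family, ranks) where non-couplings get 'nc'."""
    tested = {f: v for f, v in family_activity.items() if v is not None}
    couplers = sorted((f for f in tested if tested[f] > 0.0),
                      key=lambda f: tested[f], reverse=True)
    ranks = {f: i + 1 for i, f in enumerate(couplers)}
    for f in tested:
        if tested[f] <= 0.0:
            ranks[f] = "nc"  # experimentally determined non-coupling
    return (couplers[0] if couplers else None), ranks
```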
Structures and refined structures The Structures page is updated quarterly with new structures from the Protein Data Bank (structures from RCSB ( 43 ) and SIFTs data from PDBe ( 14)) enriched with annotation of the overall structure (publication date, experimental method, resolution), receptor (name, chain ID), ligand (name, type, Pub-Chem ID, modality), nanobodies, fusion proteins, G proteins (name and chain IDs of subunits), arrestins (name, chain ID), RAMPs and GRKs.The associated Structure coverage page overviews the availability of structures across G protein families, GPCRs and GPCR classes.There are currently 690 G protein structures, whereof 544 are structures of GPCR -G protein complexes.The structures are also available in a refined version which models-in missing segments (truncated / deleted from the construct sequence or not resolved in density map) and reverts mutated residues to wildtype.GproteinDb also provides a refined sequence of complex structures.Specifically, the residue numbering in chimeric G α subunits is mapped to wildtype sequence and gets assigned generic residue numbers with the common G protein residue numbering scheme ( 44 ). Structure models The Structure models page presents models of human G protein-GPCR complexes.The first publication of Gpro-teinDb contained homology models of GPCR-G protein complexes built using MODELLER ( 26 ).However, these models were limited in numbers and quality due to the relatively few complex templates at the time (see Discussion).This release features AlphaFold2-Multimer-based ( 27) structure models of 1121 GPCRs and heterotrimeric G proteins, including all primary transducers.Furthermore, we present 5595 models of the 16 human G α subunits and 425 human GPCRs in GPCRdb ( 2 ) spanning the GPCR classes (families) A ( Rhodopsin ), B1 ( Secretin ), B2 ( Adhesion ), C ( Glutamate ), F ( Frizzled ) and T ( Taste 2 ) (Figure 2 ).This represents 82% of all theoretical G protein-GPCR complexes and the remainder will be added in coming releases as they pass our quality filters (see Methods).Whereas not all complexes will occur biologically, it is difficult to definitely rule out unseen couplings as they depend on many factors e.g.ligand and tissue / cell type and having access to the models may still be useful to inform studies investigated for example GPCR-G protein selectivity ( 33 ,45 ).To allow users to select models of only validated complexes, the Structure models page features sorting and filtering based on GproteinDb's experimental coupling data. We provide several model quality metrics.The 3D viewer colors residue by pLDDT scores, which assess the local perresidue consistency within models using the predicted local Distance Difference Test ( 30 ).Furthermore, the quality is visualized in horizontal bars placing the model score within the respective range limits of three scores; pTM (overall model), ipTM (protein interface) and PAE mean (positioning of G α relative GPCR) ( 30 ).Together, this comprehensive system grants users a swift, yet in-depth perspective on the quality and reliability of the G protein-GPCR complex models. 
New structure superposition tool We present a new tool to superpose G protein-GPCR complex structures or models in GproteinDb. This tool is available from a dedicated page Structure superposition as well as within the pages Structures and Structure models (via the 'Superposition' button). The tool works by first selecting a reference structure and then additional structures/models for superposition, based on a custom set of Gα segments or residues (defined based on the Common Gα numbering (CGN) scheme (33)). Finally, the superposed structures/models are downloaded (in PDB format) for visualization in any software. G protein-GPCR interface interactions in a single structure The Structures and Structure models pages have data browsers that link to a dedicated page for the selected structure, refined structure or model. In this release, this page has been equipped with four visualizations of interface interactions (Figure 3). Firstly, a 'biflare' plot shows two concentric circles for G protein and GPCR residues, respectively, connected by differently dashed lines indicating their type of molecular interaction (Figure 3A). This plot can show many or fewer interactions based on strict and loose definitions, respectively (from (46), defined in a tool tip), and toggle between backbone/sidechain interactions or a single selected G protein or GPCR segment or residue. Furthermore, the biflare plot offers color-coding by interaction type, segments or amino acid properties. Secondly, a 3D viewer shows the structure, refined structure or model as an interactive cartoon representation where the interacting residues are labeled with their one-letter amino acid code and sequence number (Figure 3B). Interacting residues are depicted with sticks and line representation denoting interactions with strict and loose definitions, respectively. Thirdly, an interaction data browser offers sorting and filtering of interacting residues by generic residue number and interaction type (Figure 3C). These data can also be downloaded (MS Excel spreadsheet), enabling analysis in other software. Finally, a matrix (similar to Figure 3D) shows interacting Gα and GPCR residues (Y- and X-axes, respectively). Interactions (matrix cells) are colored by interaction type and amino acids are colored by property. Figure 3. Interface interactions in a single (A-C) or multiple (D) structure(s). (A) Biflare plot showing all residue-residue interactions in a Gα-GPCR complex (here the Go-5HT1A model). (B) 3D view labeling interacting residues with amino acid number (Gα orange, GPCR red). Residues are depicted with sticks and line representation denoting interactions with strict and loose definitions, respectively (from (46), defined in a tool tip). (C) Interaction browser featuring sorting and filtering of interactions based on sequence, generic residue numbers, segments and interaction types. (D) GPCR-G protein interface interaction matrix illustrating expanded data from refined structures and models. The matrix shows GPCR-G protein residue interactions for the same complex (Gs-Calcitonin (CT) receptor) across the unrefined structure (PDB: 5UZ7) (54), GproteinDb's refined structure modeling-in missing residues/atoms (5UZ7_refined) and the AlphaFold2-Multimer model (AFM_CALCR_Gs) (54). Of note, these display an increasing number of interface interactions; 14, 20 and 26, respectively.
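The increase in detected contacts from unrefined structure to refined structure to model (14, 20 and 26 in the example above) can be rationalized with a naive residue contact count: atoms missing from the deposited coordinates simply cannot register contacts. The sketch below uses a generic heavy-atom distance criterion as a stand-in for the published interaction definitions (46); the input layout and the 4.5 Å cutoff are assumptions.

```python
import numpy as np

# Generic heavy-atom distance criterion used here only as a stand-in for the
# published interaction definitions (46); input layout and 4.5 A cutoff are assumptions.
def count_interface_contacts(gpcr_atoms, galpha_atoms, cutoff=4.5):
    """Each argument maps a residue label to an (n_atoms, 3) coordinate array.
    Residues with missing (unmodeled) atoms contribute fewer possible contacts."""
    pairs = set()
    for r1, xyz1 in gpcr_atoms.items():
        for r2, xyz2 in galpha_atoms.items():
            d = np.linalg.norm(xyz1[:, None, :] - xyz2[None, :, :], axis=-1)
            if (d <= cutoff).any():
                pairs.add((r1, r2))
    return len(pairs)
```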
G protein-GPCR interface interactions across several structures The Interface interactions and profiling page features interactive analysis tools and visualizations of residue-residue interactions across several GPCR-G protein interfaces.The application of these tools has here been greatly expanded through the addition of our new structure models increasing the number of GPCR-G protein complexes that can be analyzed / visualized from 527 to > 6000.Furthermore, as many experimental structure complexes have poor density (330 out of 527 structures have a resolution worse than 3.0 Å) their interface residues are often missing sidechain atoms leading to lacking interactions.This is addressed in this update by adding the option to analyze refined structure complexes and AlphaFold2 complex models (Figure 3 D).This enables investigation of residue interactions with increased completeness on the residue and protein complex levels by combining any type of structural templates ( Type column under the Structure section).Furthermore, it makes it possible to assess how G α chimera or mini-G proteins ( 47 ) affects GPCR coupling and residue-level interactions. Application Programming Interface (API) G protein-related data can be programmatically accessed through our API at https:// gproteindb.org/services . Discussion The new Couplings page greatly facilitates exploration of GPCR -G protein selectivity by allowing filtering based on primary transducer, rank orders and percent activities.Importantly, these descriptors enable comparisons across all datasets, and replace the previous common parameter mean log(Emax / EC 50 ) across datasets, which may not be meaningful as the underlying biosensors differ substantially.For example, the biosensors measure different molecules and processes at downstream signaling steps ranging; 0: RGB and RGB-GDP , 1: GABY , TRUPATH and NanoBiT-G, 2: FreeG βγ-Nluc and EMT A, 3: GEMT A and 5: TGF α-shedding (see Biosensors page).To account for excessive signal amplification from the TGFα shedding biosensor, we applied a filter requiring log(Emax / EC 50 ≥ 7.0).Notably, this achieves an agreement of 94% for couplings and 82% for non-couplings between Gq and GPCRs compared to the EMT A / GEMT A and FreeG βγ-Nluc datasets (Figure 4 A).Our Biosensors page introduces a classification of biosensor types facilitating comparison and tracking of their evolution to improve signal-to-noise ratios and utility.Furthermore, it would be clarifying for the field to use more precise terminology distinguishing G protein coupling (i.e.receptor binding), activation (G α -G βγ dissociation), signaling (further downstream) and kinetics (e.g.activation rates). 
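The cross-dataset agreement figures quoted above (e.g., 94% for couplings and 82% for non-couplings after applying the log(Emax/EC50) ≥ 7.0 filter) can in principle be reproduced with small helpers like the ones below, where coupling calls are booleans per receptor and the reference is a majority vote over the other datasets. The layouts are illustrative assumptions, not the workbook's actual formulas.

```python
# Illustrative helpers for the kind of cross-dataset comparison described above;
# coupling calls are simple booleans per receptor and the layouts are assumptions.
def majority_call(calls_per_dataset):
    """calls_per_dataset: list of dicts GPCR -> True/False. A receptor shared by all
    datasets is called a coupling if at least two datasets support it."""
    votes = {}
    for dataset in calls_per_dataset:
        for gpcr, is_coupling in dataset.items():
            votes.setdefault(gpcr, []).append(is_coupling)
    return {g: sum(v) >= 2 for g, v in votes.items() if len(v) == len(calls_per_dataset)}

def agreement(test_calls, reference_calls):
    """Fraction of reference couplings and non-couplings reproduced by the test set."""
    shared = set(test_calls) & set(reference_calls)
    pos = [g for g in shared if reference_calls[g]]
    neg = [g for g in shared if not reference_calls[g]]
    agree_pos = sum(test_calls[g] for g in pos) / len(pos) if pos else float("nan")
    agree_neg = sum(not test_calls[g] for g in neg) / len(neg) if neg else float("nan")
    return agree_pos, agree_neg
```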
Leveraging the new coupling datasets, we analyzed to what extent GPCRs' primary transducers differ across datasets and parameters (constitutive activity, log(Emax/EC50) and activation rate (s−1)). We find that, of 18 GPCRs with couplings from constitutive activity, 17 (94%) have the same primary transducer G protein after agonist activation (Figure 4B). The primary G protein family is the same for 54 (68%) of 79 GPCR-primary transducer family pairs reported for 66 shared GPCRs in the GtoPdb, Bouvier GEMTA and Martemyanov FreeGβγ-Nluc datasets (Figure 4C, left). The GtoPdb and Bouvier GEMTA datasets, which focus on Emax and EC50, have a larger agreement with each other (85%) than with the kinetics-based dataset (Martemyanov FreeGβγ-Nluc; 77% and 71%, respectively). This may indicate that, as for ligands, potency/efficacy metrics and kinetic parameters cannot be assumed to correlate. Furthermore, the Bouvier GEMTA and Martemyanov FreeGβγ-Nluc datasets show a markedly lower agreement, 22%, on the subtype level (Figure 4C, right). Next, we investigated the G protein family profiles activated by the 66 shared GPCRs (79 combinations) in the GtoPdb, Bouvier GEMTA and Martemyanov FreeGβγ-Nluc datasets (Figure 4D). Here, only 0-2 couplings are unique to the kinetic assay, Martemyanov FreeGβγ-Nluc. Notably, 26%, 42% and 39% of Gi/o, Gq/11 and G12/13 family couplings are lacking only in GtoPdb and therefore represent new couplings supported by both other datasets. Of note, 54% of G12/13 couplings, which are underrepresented in GtoPdb (6), are unique to the Bouvier GEMTA dataset. As new coupling datasets emerge, future studies should further advance our understanding of how GPCR-G protein selectivity (profiles, rank orders and primary transducers) differs across biosensors and parameters, as well as across cells/tissues with different effector stoichiometries/concentrations. The first publication of GproteinDb contained homology models of 3081 GPCR-G protein complexes built using MODELLER (26). At the time, we were missing templates for, and could not model, several GPCR class/G protein family complexes. No homology models could be built of the G protein family G12/13 or class T GPCRs, and all class B2 (Adhesion) GPCRs had to be modeled based on class B1 templates. Furthermore, models could not be built of class B1/Gq/11, class B2/Gq/11, class C/Gq/11/Gs and class F/Gq/11/Gs complexes. This is addressed in this GproteinDb release, which used many more templates and AlphaFold2-Multimer to attempt models of all theoretical complexes of the 16 human Gα proteins with 425 human GPCRs in GPCRdb. So far, this represents 5595 out of 6800 theoretical human GPCR-Gα complexes, and more models will be added as they pass our quality filters (see Methods). This is a near-doubling of the number of structure models, greatly expanding the GPCR-G protein 'couplome' for which it is possible to formulate structure-based hypotheses across a plethora of studies involving, e.g., molecular dynamics, molecular mechanisms, mutagenesis, kinetics and drug design.
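The per-receptor comparison of primary transducers used for the percentages above reduces to counting identical assignments over the shared receptors. A minimal sketch, assuming each dataset is represented as a dict mapping a receptor name to its primary family or subtype:

```python
# Minimal sketch: fraction of shared receptors for which two datasets assign the
# same primary transducer (family or subtype); dict layout is an assumption.
def primary_transducer_agreement(primary_a, primary_b):
    shared = sorted(set(primary_a) & set(primary_b))
    if not shared:
        return float("nan"), 0
    same = sum(primary_a[g] == primary_b[g] for g in shared)
    return same / len(shared), len(shared)
```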
We focused on the Gα subunit, as it has the vast majority of receptor interactions, and it was not possible to compute high-quality models including Gβ and Gγ for the complete interactome. However, we will revisit this possibility as the AlphaFold2-Multimer algorithm continues to improve. We chose to model all theoretical G protein-GPCR complexes, as it is difficult to definitively rule out specific complexes that cannot form. For example, a complex not observed in one experiment may form in another experiment if changing the cell line or ligand (system and ligand bias, respectively). To help users assess the experimental support of complexes, we provide GPCR-G protein coupling data in the Structure models browser and individual pages of structure models. The new data and visualization of interface interactions tie into a stream of scientific studies investigating determinants of GPCR-G protein coupling selectivity or promiscuity (6, 33, 45, 48-49). They provide the opportunity to resolve missing sidechain interactions (due to lacking density) by analyzing refined structure complexes or modeled complexes. The visualization allows comparison of sequence (biflare plot in Figure 3A) and structure (3D view in Figure 3B) and dissection of unique and common interactions across receptors (matrix in Figure 3D). This may contribute to ongoing discussions in the field regarding to what extent GPCR-G protein selectivity is driven by differences in sequence (33), structure (50, 51), dynamic location and duration of intermolecular contacts at the periphery of the interface (45), tissues/cells with different protein stoichiometry (19), transient intermediate-state complexes (52) or cellular localization (53). Likely, several mechanisms act in concert. For example, recent analysis indicates that the volume of the G protein cavity, along with intracellular loops 2-3 (49), largely influences primary transducer coupling, whereas secondary transducers may bind in a noncanonical fashion not requiring the same structural features and may instead be more dependent on sequence and other factors (48). Figure 1. G protein data, analysis resources and data-driven experiment tools in the GproteinDb 2024 release. Figure 2. GproteinDb features AlphaFold2-Multimer-based (27) structure models of GPCRs and the heterotrimeric G protein (Gαβγ complex) that constitutes their primary transducer. Furthermore, GproteinDb builds models of the theoretical interactome spanning all 16 human Gα subunits and 425 human GPCRs in GPCRdb (2). The figure shows representative such models, whereof the following complexes are with heterotrimeric G proteins (Class A: all; Class B1: all; Class C: all except Golf, Gi3, Gt1, Gt2, Ggust, G12; Class T: all except G14, G12) and the remaining are Gα-GPCR complexes.
Figure 4. Agreement of couplings across datasets and parameters. (A) Intersection of Gq couplings for 50 shared GPCRs in the Bouvier GEMTA, Inoue TGF-α shedding and Martemyanov FreeGβγ-Nluc datasets. The large agreement was achieved by filtering (only) Inoue couplings requiring log(Emax/EC50) ≥ 7.0 to account for downstream signal amplification. Couplings and non-couplings were defined based on support from at least two out of three datasets (majority rule). (B) Intersection of constitutive and agonist-induced couplings from the Lambert RGB-GDP datasets. (C) Intersection of the primary transducers of 66 shared GPCRs (79 combinations) in the GtoPdb, Bouvier GEMTA and Martemyanov FreeGβγ-Nluc datasets. (D) Intersection of the receptor sets, among the 66 shared GPCRs, coupling to each G protein family from the GtoPdb, Bouvier GEMTA and Martemyanov FreeGβγ-Nluc datasets. (A-D) The underlying datasets and calculations of the above agreements are available in the file GPCR-G_protein_couplings.xlsx available from the Couplings page. Grey-scaling, numbers and percentages were obtained from VENNY 2.1 (https://bioinfogp.cnb.csic.es/tools/venny/index.html).
7,089.6
2023-11-24T00:00:00.000
[ "Computer Science", "Biology", "Medicine", "Chemistry" ]
How Frost Forms and Grows on Lubricated Micro- and Nanostructured Surfaces Frost is ubiquitously observed in nature whenever warmer and more humid air encounters colder than melting point surfaces (e.g., morning dew frosting). However, frost formation is problematic as it damages infrastructure, roads, crops, and the efficient operation of industrial equipment (i.e., heat exchangers, cooling fins). While lubricant-infused surfaces offer promising antifrosting properties, underlying mechanisms of frost formation and its consequential effect on frost-to-surface dynamics remain elusive. Here, we monitor the dynamics of condensation frosting on micro- and hierarchically structured surfaces (the latter combines micro- with nano- features) infused with lubricant, temporally and spatially resolved using laser scanning confocal microscopy. The growth dynamics of water droplets differs for micro- and hierarchically structured surfaces, by hindered drop coalescence on the hierarchical ones. However, the growth and propagation of frost dendrites follow the same scaling on both surface types. Frost propagation is accompanied by a reorganization of the lubricant thin film. We numerically quantify the experimentally observed flow profile using an asymptotic long-wave model. Our results reveal that lubricant reorganization is governed by two distinct driving mechanisms, namely: (1) frost propagation speed and (2) frost dendrite morphology. These in-depth insights into the coupling between lubricant flow and frost formation/propagation enable an improved control over frosting by adjusting the design and features of the surface. W ater is widely available in the atmosphere and in the environment. The interplay between the chemical properties of water and the thermal conditions of our planet results in the formation of clouds, rain, or frost. 1,2 Despite the ubiquitous nature of frost, understanding and controlling its formation 3 and propagation 4,5 still poses several challenges. Control of frosting is relevant for a range of industries: In the energy, 6 transportation, 7 or telecommunication 8 sectors, frost constitutes serious hazards when it forms on critical components of machines and devices, causing them to fail. To control frosting, a strong momentum of research in anti-icing technology 9−12 was recently generated. Surfaces which passively avoid, repel, or retard frost or ice formation were designed. 13−16 Lubricant-infused surfaces (LIS) were shown to resist frost formation more efficiently compared to dry micro/nanostructured or flat, chemically functionalized variants. 16 Lubricant-infused porous surfaces are characterized by solid three-dimensional (3D) structures, infused with a liquid lubricant. 17,18 The lubricant renders the surface slippery, resulting in a high mobility of contacting liquids. Capillary forces retain lubricants within the surface by virtue of the porous, structured geometry. 19 During condensation frosting, however, the lubricant in the porous structure of LIS appears to reorganize, leading to depletion of the structure. 20,21 Aiming to gain insight in the underlying dynamics, here, we monitor and model lubricant reorganization and frost propagation dynamics (space-and time-resolved) on micro-and hierarchically (combining micro-with nanofeatures) structured surfaces under moderately low subzero temperatures (−12°to −22°C ). Despite the widespread occurrence of frost, the understanding of its formation and growthparticularly on LIS remains elusive. 
The reasons are multifold: Insights into frosting are hampered by poor optical contrast between frost and lubricant and by the nonequilibrium nature of frost formation that involves multiple time and length scales. Mathematical modeling of the involved nonequilibrium processes requires the description of freezing, dendrite growth, propagation of frost bridges, and lubricant flow in mutual interplay. The first clue of lubricant reorganization was obtained by focused ion beam and scanning electron microscopy, revealing that frozen water droplets on LIS are covered by a thin layer of lubricant. 20 However, none of these techniques and investigations allow for insights into the coupled evolution of both frost and lubricant. To reach better insight, we have developed a setup that enables the in situ monitoring of lubricant reorganization and frost formation using laser scanning confocal microscopy (LSCM). Such a setup allows to discriminate between frost and lubricant. In particular, our setup provides the spatially and temporally resolved data of the formation and growth of frost, accompanied by quantifiable lubricant reorganization. The summary of our findings is that analogously to frost propagation on dry surfaces, frost bridges 4,22 form. A strong capillary pressure inside the frost structure induces lubricant flow during condensation frosting. Three-dimensional surface mapping by quantitative confocal microscopy provides a timeresolved evolution of lubricant during frosting. We then utilize a long-wave approximation 23 to model the experimentally quantified lubricant reorganization during the transient process of frost formation and propagation. We derive and solve the governing equations numerically to predict the height profile of the lubricant. The experimental data and numerical predictions for lubricant reorganization on micro-and hierarchically structured surfaces show excellent agreement. Notably, we find that lubricant reorganization and depletion is primarily affected by the frost propagation speed and dendritic frost geometry. RESULTS AND DISCUSSION Condensation frosting on lubricated micro-and hierarchically structured surfaces is temporally and spatially resolved by LSCM. As model surfaces (approximately 5 cm 2 large), we use regularly spaced micropillar arrays (square orientation) ( Figure 1). To analyze the influence of nanoroughness on condensation frosting, the lubricant distribution is monitored on bare micro ( Figure 1a) and hierarchically structured surfaces, characterized by both micro and nanoscopic features. Here, the hierarchical structure is facilitated by coating the micropillar arrays with a thin layer of silica nanoparticles ( Figure 1b). The cylindrical micropillars are fabricated by photolithography using an epoxy-based photoresist (SU-8, cf. Methods). Two variants of functionalized silica nanoparticles (amine and epoxy groups) are then deposited in a "layer-bylayer" method, via a two-step dip-coating procedure (cf. Methods). This creates a thin, homogeneous nanoparticle coating of approximately 1 μm thickness on the micropillar array. The average roughness on top of the micropillars increased from 3.4 ± 0.3 nm to 35.1 ± 1.7 nm, after the nanoparticle coat was added ( Figure S1). We then infuse the surfaces with 2 μL of fluorescence marked silicone oil (viscosity, η SiOil = 194 mPa s) as a lubricant. 
After leaving the surface overnight, homogeneous lubricant distribution in the pillars' interstices (approximately 10 μm height) with a thin layer on top of each pillar (below 1 μm) is verified by confocal microscopy. The LIS is placed on a cooling element in a sealed (frosting) cell and directly monitored simultaneously using a custom-built confocal microscope (Figure 1c, Figure S2, Methods). We cool down the surfaces to set point temperatures below the atmospheric freezing point of water ( Figure S3) in a dry atmosphere (<4% relative humidity, RH) which were then held constant for 10 min. To trigger condensation frosting (Figure 1d), we flow a humidified nitrogen carrier gas (30% RH at 18.1 ± 0.6°C, Figure S4, Methods) for 30 s into the cell. Thereafter, the cell is sealed. The long working distance of all objective lenses (100×/0.80:2 mm; 10×/0.4:3.1 images of hierarchical structure consisting of the micropillar array coated with silica nanoparticles on the entire surface. Highly magnified image (top right) of the particle coating shows that individual particles have diameters of <50 nm. The entire surface is evenly coated with nanoparticles, while the microstructure is retained (bottom). (c) Experimental setup: The surface is mounted on the cooling element and monitored through an objective lens (2.5×, 10×, 100×). A green (532 nm) and a blue (473 nm) laser are used to illuminate the sample. A humidified carrier gas (nitrogen) is introduced into the chamber. During the experiment, the temperature and humidity in the far field are recorded. (d) Schematics of condensation frosting (blue) on the infused (yellow) micropillar substrate. mm, 2.5×/0.07:9 mm) provides sufficient space between lens and sample to ensure evenly distributed humidity around the sample substrate. The laser illumination is non-invasive and does not interfere with the observed processes on the sample substrate (cf. SI S1). Condensation on Microstructured LIS. Approximately 10 s after introducing the stream of humidified nitrogen into the frosting chamber, the formation of distinct and individual droplets on micropillar tops becomes visible (dark spheroids) (Figure 2a,b). Vertical cross-sectioning through the surface shows condensed droplets wrapped in lubricant and the accompanied depletion of lubricant between the micropillars (Figure 2a). A 3D image reveals that the droplets (dark spots) are formed on top of the micropillars (Figure 2b). The thin lubricant film on top of the micropillars is permeated easily by atmospheric water vapor molecules via diffusion (D W/SiOil = 2 × 10 −9 m 2 /s) and accumulates at the micropillar−lubricant interface due to the finite solubility of water in silicone oil of up to 40 mM. 24 Here, stable nucleation sites for condensation are most likely due to sufficiently low energy sinks at this location (cf. SI S2). 25−28 After nucleation, the droplets grow in size and soon become visible under the microscope: Droplets appear as dark spheroids, surrounded by lubricant (yellow) (Figure 2a,b). At the contact line where the droplet, lubricant, and air meet, the surface stresses of the respective phases fulfill a force balance. 29 The vertical component of the droplet surface stress promotes the formation of annular wetting ridges 30 of silicone oil around condensing water droplets. Due to a positive spreading parameter, 31 S = γ sv − (γ sl + γ lv ) ≈ 12 mN/m 15 silicone oil spreads on water, leading to completely cloaked 32,33 droplet−air interfaces by lubricant. 
The lubricant layer around each droplet (ridge and cloak) slightly retards coalescence. 34 However, when the droplets grow, the layer thins until it becomes unstable and disintegrates, leading to coalescence. Eventually, only one single droplet (diameter ≈30 μm) remains on top of each micropillar (Figure 2c). Condensation on Hierarchical LIS. While the initial emergence of femtoliter droplets was similar on both surface types, the consecutive droplets growth phase deviates on hierarchically structured surfaces. The separating lubricant layer around droplets appears to be more stable. On hierarchically structured surfaces, not a single droplet but multiple sessile droplets remain on top of each pillar ( Figure 2d). The increased surface roughness introduces pinning sites for the droplets, leading to delayed coalescence. Hence, the nanoparticle coat prevents the coalescence of condensate droplets into a single one. To quantify the migration of lubricant during condensation, we monitor the film height in the array's interstices, that is, in a square domain of 40 μm × 40 μm, containing 4 half micropillars (field of observation) (Figure 2e (Figure 2e), from the initial average height, h 0 ≈ 10 μm within the first 100 s. The approximate square root scaling of ⟨h⟩(t) is independent of substrate temperature and field of observation. We rationalize lubricant depletion in the micropillars' interstices by the formation of wetting ridges and cloaking of the condensing droplets. The direct correlation between condensation and depletion is supported by the square root scaling of the latter, which is similar for diffusioncontrolled condensation (eq 6, Methods). Furthermore, the depletion rate is proportional to the set-point temperature of the substrate (viz. Figure 2e), which, again, is typical for diffusion-controlled condensation, where supersaturation (or here undercooling) determines the magnitude of the condensation rate. For hierarchically structured surfaces, the depletion dynamics changes slightly: While lubricant height evolutions show a similar dependence between set-point temperature and depletion rate (cf. Figure 2e), lubricant depletion deviates from the square root behavior (Figure 2f). We speculate that the delayed coalescence accompanied by a large number of smaller droplets causes the altered depletion dynamics. It should be noted that the wetting properties of the surface affects the location of preferred droplet nucleation, the number and size of condensed droplets, 25 and therefore frost formation and propagation. Mesoscopic Frost Formation and Propagation. At the subzero surface temperatures, prolonged exposure eventually results in frosting of condensate droplets. We observe formation and propagation of the lubricant-covered frost patches with a lower magnification objective (2.5×), providing a wider field of observation (Figure 3a,b). Placing the focal plane several μm above the micropillar base allows to accurately trace the frost. The higher focal plane also ensures that the lubricant within the micropillar array does not contribute to the integrated fluorescence intensity signal. Sometime after the initial 100 s, circular frost patches of increased fluorescence signal can be observed on the surfaces (Figure 3a). The patches propagate over the surface ( Figure 3b), with an average front speed of u ice = 1.4 ± 0.5 μm/s, Figure S6. 
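Two quick quantitative checks follow from this section: the spreading parameter S = γsv − (γsl + γlv) that decides whether the silicone oil cloaks the condensed droplets, and the square-root fit of the lubricant depletion curve ⟨h⟩(t). The sketch below assumes the interfacial tensions and the measured (t, h) arrays are supplied by the user; it is a minimal illustration, not the authors' analysis code.

```python
import numpy as np

# Two quick checks suggested by this section; inputs are user-supplied measurements
# and the sketch is not the authors' analysis code.
def spreading_parameter(gamma_sv, gamma_sl, gamma_lv):
    """S = gamma_sv - (gamma_sl + gamma_lv); S > 0 means the oil cloaks the droplet
    (the text quotes S of roughly 12 mN/m for silicone oil on water)."""
    return gamma_sv - (gamma_sl + gamma_lv)

def sqrt_depletion_rate(t, h, h0=10.0):
    """Least-squares prefactor k in h(t) = h0 - k*sqrt(t), the approximate scaling
    reported for lubricant depletion during the condensation stage."""
    t, h = np.asarray(t, dtype=float), np.asarray(h, dtype=float)
    s = np.sqrt(t)
    return float(np.sum(s * (h0 - h)) / np.sum(s * s))
```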
To understand the formation and propagation of frost, we reconsider the situation where we have one large water droplet on each micropillar top (Figure S7). Freezing is initiated by the formation of stable, heterogeneous nucleation sites (ice crystals) in the liquid droplets at the pillar-liquid interface. 35 In general, freezing is a random process due to the inherently stochastic nature of nucleation. 36 Hence, the droplets do not freeze simultaneously, but instead in a staggered manner. Vapor pressure over liquid water exceeds that over frozen droplets by a ratio of approximately 1.1 at −12 °C. 37 This generates a vapor flux along the vapor pressure gradient. Water vapor molecules attach at the interface of the frozen droplet toward air. Small spikes and edges on the interface (perturbations) cause the compression of the vapor field locally above the solid-vapor interface, yielding a higher supersaturation. This amplifies the growth of the small spikes and edges (Mullins-Sekerka type of instability), giving rise to the formation of frost dendrites. 38 Ice bridges grow out from the frozen droplet and past the spacing distance of micropillars, reaching adjacent liquid droplets. Upon contact with the ice bridges, adjacent droplets freeze immediately. 22 This process repeats, enabling a frost cluster to grow gradually. 4 Depletion Zones. Frost propagation is accompanied by capillary-induced suction of lubricant into the dendritic frost domains, as reflected by the high fluorescence intensity in the frost (Figure 3a,b). Further, directly in front of the propagating frost (2-3 pillars), the reflection signal (cyan) is noticeably more pronounced. This results from the pillars' top faces that were initially coated by an approximately 1 μm-thick layer of lubricant (Figure 2a). Plain SU-8 has a higher refractive index (nSU-8 ≈ 1.6) 39 compared to silicone oil (nSiOil ≈ 1.4). 32 Thus, a stronger reflection signal indicates lubricant-depleted zones. Depletion zones, however, do range further than only 2-3 pillars in front of the propagating frost, as we visualize with magnified (10× objective lens) top-view recordings of the frost front (Figure 3c, Movie S1). Here, the frost propagates from the left to the right side within the field of view. To the left of the dashed white line, the irregular yellow domains reveal the frost-covered areas. To the right, the pillars are still covered by a single droplet per pillar (cyan). Figure 3d shows the average fluorescence intensity within the dashed box of panel Figure 3c. Right in front of the frost front, the integrated fluorescence intensity signal passes a minimum and then gradually rises again further ahead of the frost. This reveals the presence of depletion zones, extending 100-200 μm in the direction normal to the frost front. Frost propagation ends when the condensed water droplets are entirely transformed into frost. The surface is then mostly covered with frost, leaving only small uncovered islands.
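The 100-200 μm depletion zones read off from Figure 3d can be quantified from an intensity line profile with a simple rule: find where, ahead of the frost front, the fluorescence signal falls below a fraction of its far-field value. The sketch below is a schematic recipe only; the front position, the 90% threshold and the far-field window are assumptions, not the authors' exact procedure.

```python
import numpy as np

# Schematic recipe only; the front position, the 90% threshold and the far-field
# window are assumptions, not the authors' exact procedure.
def depletion_zone_extent(x, intensity, x_front, frac=0.9, far_field_pts=20):
    """Extent (in units of x) of the low-fluorescence zone ahead of the frost front."""
    x, intensity = np.asarray(x, dtype=float), np.asarray(intensity, dtype=float)
    ahead = x > x_front
    far_field = intensity[ahead][-far_field_pts:].mean()   # signal far ahead of the front
    depleted = ahead & (intensity < frac * far_field)
    return float(x[depleted].max() - x_front) if depleted.any() else 0.0
```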
The final area covered by frost depends on the amount of vapor which is initially induced into the frosting chamber. Frost-Induced Lubricant Depletion on Microstructured LIS. To monitor the dynamics of lubricant depletion during frost propagation, we return to the microscopic field of observation, centered on four half-pillars (100× objective lens, 40 μm × 40 μm). On microstructured surfaces, the average lubricant height was then measured for 420 s at two characteristic locations (visualized by the red and the blue curves) (Figure 4a). One location was eventually covered with frost (blue curve), while the other remained uncovered (red curve). As the profiles of the blue and the red curve nearly overlap in the condensation phase, for t < 100 s, we deduce that early depletion is unaffected by the location of the field of observation. However, while the decrease of lubricant height of the red curve changes only marginally after 100 s, ⟨h⟩(t) may cross over into a steep decrease (blue curve). Considering the global process of frost propagation in Figure 3a, it becomes clear why frosting-induced depletion dynamics bifurcates into two distinctively different types, per red and blue curves. For the blue curve, the frost front approaches the field of observation, sucking the lubricant from the array into the frost dendrites. This becomes notable at the crossover time t co ≈310 s, at which the two linear slopes (Figure 4a dashed lines) before and after crossover intersect. At the last time point shown (t end = 420 s), the frost front arrives at the field of observation, and dendritic frost is clearly visible (cf. Figure 4b,c). Lubricant cloaks the frozen water, revealing the morphology of the frost features. For the red curve, only farrange lubricant depletion from a distant frost patch influenced the lubricant height. This led to significantly slower depletion dynamics. Eventually, the surface is only partially covered with frost patches because all condensed water has been converted into frost. The height depletion of the red curve stabilizes, as the field of observation resides at an uncovered region. Thus, after the condensation period of t > 100 s, the evolution of the average height ⟨h⟩(t) in Figure 4a depends on the field of observation, and the lubricant dynamics is governed by the proximity to frost patches. Frost-Induced Lubricant Depletion on Hierarchical LIS. To understand whether the two-step depletion depends on degrees of surface roughness, we compared measurements on both micro-and hierarchically structured LIS. Figure 4d,e shows the evolution of lubricant height on micro-(−22°C, purple) and hierarchically structured surfaces (−17°C, turquoise, and −22°C, light green). Qualitatively, the lubricant evolution on the two surface types did not differ. However, the crossover time t co is not identical across various measurements due to stochastic nucleation delay. After shifting the curve such that t co coincides for all measurement, the longterm depletion dynamics follows the same power law, h ∼ t 1.27 , Figure 4e. This hints that the long-term depletion dynamics does not depend on the roughness of the micropillars. As each set point temperature experiment was conducted on the same surface, we note that the surfaces' original filling could be restored, after removing the ice via melting and evaporation. Modeling Frosting Induced Lubricant Depletion on LIS. Next, we aim to understand the coupling between lubricant flow and frost formation/propagation. 
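Before turning to the model, the crossover time t_co and the long-time exponent quoted above can be extracted from the measured ⟨h⟩(t) curves roughly as sketched below (intersection of two fitted lines for t_co, log-log fit for the exponent). The split index between the early and late regimes is an assumption of this sketch, not part of the published procedure.

```python
import numpy as np

# Sketch of the extraction described for Figure 4a,e; the index at which the curve
# is split into 'early' and 'late' parts is an assumption of this sketch.
def crossover_time(t, h, split):
    """t_co as the intersection of straight lines fitted to the early (condensation)
    and late (frost-driven) parts of <h>(t)."""
    t, h = np.asarray(t, dtype=float), np.asarray(h, dtype=float)
    a1, b1 = np.polyfit(t[:split], h[:split], 1)   # early slope and intercept
    a2, b2 = np.polyfit(t[split:], h[split:], 1)   # late slope and intercept
    return (b2 - b1) / (a1 - a2)

def depletion_exponent(t_shifted, depletion):
    """Power-law exponent n in depletion ~ t^n from a log-log fit
    (the text reports n of about 1.27 for the long-term dynamics)."""
    t_shifted, depletion = np.asarray(t_shifted, float), np.asarray(depletion, float)
    mask = (t_shifted > 0) & (depletion > 0)
    n, _ = np.polyfit(np.log(t_shifted[mask]), np.log(depletion[mask]), 1)
    return n
```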
The frost dendrites (Figure 4c) form a dynamically arranging porous network into which the lubricant can wick. The wicking of the lubricant is facilitated by a higher capillary suction pressure exerted on the lubricant by the dendritic structure than that exerted by the micropillars. The resulting pressure difference due to capillary suction (eq 1) is governed by the differing capillary radii of the lubricant in the micropillar array, rMP, and in the dendritic network, rice. The capillary radius in the micropillar array is given by the geometry of micropillar spacing (rMP ≈ 10 μm, cf. Methods). For an effective suction flux of lubricant into the dendritic structures, rice has to be smaller than rMP. The surface tension of the lubricant (silicone oil) is denoted with γSiOil. The suction of lubricant into the dendritic structure generates its flow in the space between the micropillars. To model lubricant dynamics, we develop a theoretical framework of the flow during the frost propagation (cf. SI S5). To this end, we introduce a set of coordinates with the origin at the traveling front of the frost and x = [0,∞) being parallel to the horizontal plane. The micropillars are not directly considered but incorporated through an increased effective viscosity of the lubricant (ηSiOil = 2.9 Pa s, cf. SI S4), which is, therefore, considered continuous (Figure 5a). The initial lubricant height is taken to be the same as the height of the micropillars, h0 = 10 μm. The capillary number (Ca = uice ηSiOil/γSiOil), which relates the viscous to the interfacial forces, is based on the propagation velocity of the frost front, uice; here, ηSiOil is the dynamic viscosity of the lubricant. The small capillary number, Ca ≈ 2 × 10−4, enables a suitable long-wave approximation 23 for the lubricant flow (eq 2). The prefactor ∂κ/∂x is the average measure of the change of film curvature 40 when crossing the frost front. Considering mass conservation yields the evolution equation for the average lubricant height perpendicular to the frost domain (eq 3, cf. Methods). To determine the average height of the lubricant film, we solve eq 3 numerically (cf. SI S6). Computations are carried out by a well-resolved finite difference method. 41 The number of grid points was chosen to ensure convergence. Eq 1 enters as a boundary condition at the frosting front. We set rice to 6 μm, which yields a good agreement between experimental observations and the numerical results. The chosen value for rice also aligns well with optical observations of the frost structure (Figure 4b,c). Figure 5b shows a vertical cross section of the film profile. Consistent with experimental observations (cf. Figure 4d), we observe a lubricant-depleted zone just ahead of the frost front. The resulting average lubricant height, together with the corresponding vertical velocity profiles at representative locations, is sketched in Figure 5a and its insets. In the model, the lubricant moves toward the frost in the comoving frame with u = uice (Figure 5a), with a uniform velocity far away from the frost front. The frost domain generates a suction pressure per eq 1 due to the fine dendritic geometry. The suction into the frost domain generates a lubricant-depleted zone that precedes the frost front. This results in a dimple in the overall height profile (Figure 5a,b); this dimple leads to free-surface curvature gradients that affect the flow.
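Plugging in the quantities given above provides a quick consistency check of the capillary number and an order-of-magnitude estimate of the capillary pressure difference driving the wicking. The Laplace-type form 2γ(1/r_ice − 1/r_MP) used below is an assumed scaling for illustration (the paper's eq 1 may differ in its prefactor), and the script is not part of the published model.

```python
# Order-of-magnitude checks using the values quoted in the text. The Laplace-type
# form used for the pressure difference is an assumed scaling for illustration and
# is not necessarily identical to eq 1 of the paper.
gamma = 0.020     # N/m, surface tension of the silicone oil lubricant
eta = 2.9         # Pa s, effective lubricant viscosity in the micropillar array
u_ice = 1.4e-6    # m/s, average frost front speed
r_mp = 10e-6      # m, capillary radius of the micropillar array
r_ice = 6e-6      # m, capillary radius of the dendritic frost network

Ca = u_ice * eta / gamma                          # ~2e-4, matching the text
dp = 2.0 * gamma * (1.0 / r_ice - 1.0 / r_mp)     # ~2.7 kPa suction toward the frost
print(f"Ca = {Ca:.1e}, assumed capillary pressure difference = {dp:.0f} Pa")
```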
Toward the frost front, the curvature of the dimple induces a suction pressure, resulting in a higher lubricant flow. Passing the depletion zone, the curvature becomes positive, facilitating a backflow. Therefore, the width and the height of the depletion zone are governed by two independent effects: The growth speed of the frost, u ice , and the frost suction pressure. We discuss their respective effects on the height profile, starting with u ice . For larger values of u ice , the depletion zone is supplied by more lubricant and becomes more narrow and thicker. Additionally, the lubricant flux into the frost domain is also enhanced due to the higher availability of lubricant in the direct vicinity of the frost front. A strong suction pressure results in a deep and narrow depletion zone. Note that the suction pressure counteracts the capillary pressure within the micropillar array domain, per eq 1, so that the net lubricant flux can be tuned to effectively zero if the length scales characterizing the capillary structure of the micropillar array and the frost approach each other, that is, r MP ≈ r ice . To test whether the simulations quantitatively reproduce the experimental data, we compare the lubricant profiles between simulations and experiments ( Figure 5c). First, we transition the experimental results to the comoving frame and define x = x min + u ice (t end − t), so that x measures the distance from the moving front. Here, x min is chosen as the location in the comoving frame, where the depleted zone of the numerically calculated height (cf. Figure 5b) is minimal. The time t end is defined as the time at which the experimentally monitored lubricant film height is minimal (e.g., t end = 420 s in the blue curve, Figure 4a). Finally, we express the results relative to the maximum height ⟨h⟩ max of the numerical solution shown in the dashed box in Figure 5b. Figure 5c plots the numerical results together with the experimental data obtained from the red curve in Figure 4a and similar curves in repeat experiments. We find excellent agreement between experimental and theoretical results, indicating that the proposed model contains all the important ingredients needed to capture the relevant physical effects. CONCLUSIONS Confocal microscopy is a powerful tool for obtaining quantitative information about condensation and frost formation on lubricated surfaces, since it enables clear discrimination between the water/frost, lubricant, and the surrounding air. The formation of frost dendrites during condensation frosting induces a strong suction pressure. This leads to direct lubricant drainage and depletion. Drainage was essentially the same for micro-and hierarchically structured surfaces. Although detrimental at the first sight, we note that frost-induced lubricant depletion is reversible. Evaporation or sublimation of water restores the lubricant impregnated surface, accompanied by its characteristic features such as low friction. This is facilitated by the special properties of silicone oil, whose excellent spreading behavior often results in complete surface coverage. 17,32 During frost formation, lubricant drainage is coupled to the speed of the dynamically forming frost front, which continuously soaks up the lubricant. These two driving mechanisms (frost propagation and capillary suction) induce lubricant depletion during frosting on liquidinfused surfaces. 
Interestingly, for optimally robust antifrosting surfaces, this implies a contradicting strategy: (a) Reduce frost growth speed by increasing the spacing between micropillars; 4,22,42 and (b) increase capillary forces within the micropillar array by reducing the spacing between micropillars. We expect that such improved understanding behind the mechanism of condensation frosting on lubricated surfaces will foster the optimal design of next-generation frost-resistant lubricant-infused surfaces. METHODS Fabrication of Lubricant-Infused Surfaces. The rigid micropillar surface was manufactured by spin-coating an epoxy-based SU-8 photoresist (SU-8 5, MicroChem) on a glass slide (24 × 60 mm 2 , 170 ± 5 μm thickness, Menzel-Glaser). The glass slides were cleaned by acetone and subsequently activated by oxygen plasma under 300 W for 5 min. The SU-8 photoresist was then spin-coated (500 rpm for 5 s followed by 3000 rpm for 30 s, SÜSS MicroTec) on the glass slides. The coated slides were heated at 65°C for 3 min, 95°C for 10 min, and then at 65°C for 30 min, respectively. Subsequently, the samples were slowly cooled down within 2 h and exposed to UV light (mercury lamp, 350 W) under a photolithography mask for 14 s (masker aligner SÜSS MicroTec MJB3 UV400). To cross-link the photoresist, the samples were heated at 65°C for 1 min, 95°C for 3 min, and 65°C for 30 min and then cooled down slowly. Next, the samples were immersed in the SU-8 developer solution for 6 min, washed with isopropanol and deionized water, and then dried in air. The micropillar array on the glass slide was cut to a circular area of approximately 3 cm 2 . Thereafter, it was plasma cleaned and infused with 0.56 μL/cm 2 of lubricant liquid per substrate area. The wettability of flat, plasma cleaned SU-8 was measured by wetting experiments and characterized by an advancing contact angle of 12 ± 2°. Fabrication of Nanopatterned Hierarchical LIS. To achieve nanopatterned hierarchical LIS, an additional scale of nanoroughness was conferred to microstructured LIS. To achieve this, surfacefunctionalized nanoparticles were synthesized. The surface functionalization comprises two components: an epoxy terminated variant and an amine terminated variant. The epoxy terminated variant was synthesized by a methoxy-based sol−gel method, by stirring 1 g of fumed silica (Aldrich, 7 nm) in 50 mL of deionized water and 2.6 mL of (3-glycidyloxypropyl)trimethoxysilane (Aldrich, 99.9%) at 500 rpm, 20°C, 72 h. The amine terminated variant was synthesized by an ethoxy-based sol−gel method, by stirring 1 g of fumed silica (Aldrich, 7 nm) in 50 mL of toluene (200 ppm water) and 2.8 mL of aminopropyltriethoxysilane (Aldrich, 99.9%) at 500 rpm in a roundbottom flask, under reflux at 80°C for 72 h. Surfaces were prepared in excess reaction ratios, at 30 μmol/m 2 . Resulting colloidal solutions were then centrifuged at 10,000 rpm for 10 min and washed in their respective solvents (50 mL) for 3 cycles before being dried in a vacuum oven (50 mbar, 60°C) overnight. Thermogravimetric analysis revealed that nanoparticles are functionalized to ca. 10 w/w % in both instances (amine and epoxy variants). Both nanoparticle variants were dispersed (separately) in isopropanol (2 mg/mL) by magnetic stirring (500 rpm) for 24 h, followed by ultrasonication for 1 h. The surfaces (170 μm-thick glass slides decorated with micropillars) were first cleaned via oxygen plasma under 120 W for 2 min. 
The surfaces were then dipped into the amine-functionalized nanoparticle dispersion for 10 s. The surfaces were dried in ambient air for 2 h. Thereafter, they were dipped into the epoxy-functionalized nanoparticle dispersion, again for 10 s. The hierarchically coated surfaces were then dried overnight (24 h) before use. Prior to lubricant infiltration, the surface was plasma cleaned. The wettability of flat, plasma cleaned, nanoparticle-coated SU-8 was measured by wetting experiments and characterized by an advancing contact angle of below10°. Lubricant and Dye. For the lubricant, we used a silicone oil (vinyl terminated polydimethylsiloxane, Gelest; surface tension: γ SiOil ≈20 mN/m). 15 A fluorescence marker (Lumogen Red F300, BASF, excitation at 532 nm, emission at 610 nm) was added to the silicone oil. To enhance the solubility of the fluorescence dye in silicone oil, the fluorescence dye was initially dissolved in chloroform (chloroform, 99.8+%, Fisher Chemical). The dye-chloroform concentration was diluted down to c Lumogen Red/CHCl 3 = 0.1 mg/mL and ultrasonicated for1 min. The dye-chloroform solution was mixed with the silicone oil such that a Lumogen Red-silicone oil concentration of c Lumogen Red/SiOil = 0.1 mg/mL was received. The mixture was stirred for 5 min. Afterward, the mixture was exposed to 40°C and 50 mbar for 24 h under vacuum-assisted evaporation. We did not observe changes in the interfacial tension by the dye nor an accumulation of the dye at the interfaces. Humidity and Temperature Control. To control the temperature of the lubricant-infused surface, the sample was mounted on a cooling element within the frosting chamber (volume: 240 mL, Figure S2, Linkam, THMS600). The cooling element is closed-loop controlled within a temperature range of −196 to 600°C and a cooling rate up to 100 K/min. To control the chamber's humidity, two nitrogen gas lines were used: one water vapor enriched (humidified) line and another line with dry nitrogen gas. We utilized a water-bubbler system to enrich the first gas line with water. The sample was cooled down alongside continuous purging of the chamber using the dry steam at 10 L/min. The temperature was kept at the desired set point for 5 min before commencement of the experiment. Thereafter, the flow rate of the dry nitrogen gas line was set to 4 L/min and the flow rate of the humidified gas line to 2 L/min. Both lines are connected such that the respective gas streams mix. The mixed stream was introduced into the frosting chamber for 30 s. Laser Scanning Confocal Microscopy. The experiments were monitored with a custom-built, inverted laser scanning confocal microscope. The microscope was controlled with a LabVIEW program. The microscope has two illuminating lasers (Cobolt DLC 25; wavelength: blue 473 nm; green 532 nm). The laser beam is sent through a magnifying objective lens to the lubricant-infused surface. The objective lens (dry; Leica HC PL FLUOTAR 2.5×/0.07; Olympus UPIanSApo 10×/0.40; Olympus LMPIanFLN 100×/0.80) of the microscope is mounted on a piezo table to translate the sample within a domain of up to 200 μm in the vertical direction. The horizontal plane is sampled with a counter rotation scanner (Cambridge Technology, 215H Optical Scanner) which sweeps with a sampling rate of 7910 ± 15 Hz in one direction. The horizontal plane of view spans an area of 40 μm × 40 μm. The observed height was 20 μm. This spatial configuration allowed a recording frequency of 6.6 Hz. 
The combined reflection−fluorescence modes enable discrimination between the bulk liquid lubricant phase and the surface features on the lubricant-infused surface. Height Extraction. The captured point cloud of the substrate from the microscope was processed with the open-source package ImageJ and a custom MATLAB script. This reconstructs the spatial distribution of the lubricant film (Figure S8). The lubricant height h is measured in the space centrally between the micropillars. The field of observation allows a collection of 128 sampling points, which corresponds to a length of 70 μm, i.e., the distance from the center of one micropillar to the next. The points are given as a height profile h(x). This height profile is arithmetically averaged, which yields a scalar value, namely ⟨h⟩. The constants c0 and c1 are determined using the initial concentration (c0) in the whole domain, while the saturation concentration (c1) at the substrate depends on the set-point temperature. Eq 5 was fitted to the monitored relative humidity in the frosting chamber (Figure S4b), where m̅vapor is the molar mass of water vapor. Long-Wave Approximation. We consider incompressible lubricant flow during frost propagation and fluid mechanics conservation laws (cf. SI S5). The flow is characterized by the capillary number (Ca ≈ 2 × 10⁻⁴), and the Reynolds number is based on the front propagation speed, uice (Re = ρSiOil uice h0/ηSiOil ≈ 2.8 × 10⁻¹⁰). For such small values of Ca and Re, and considering the separation of length scales in the vertical and in-plane directions, it is appropriate to consider the problem within the long-wave approach (cf. Kondic 44 and references therein). This approach leads to the governing equations for the dependent variables (velocity (u, v)^T and pressure p) in steady state. We chose no-slip at y = 0 for the velocity and no-stress at y = ⟨h⟩ for the shear tensor. The pressure at y = ⟨h⟩ is given by the Laplace pressure p = γSiOil ∂κ/∂x. Since p ≠ f(y), eq 7 can be integrated twice, leading to eq 2, and eq 9 gives v. The evolution of the average film height ⟨h⟩ then follows from these relations. Note that the velocities (u, v)^T in eq 9 are evaluated at y = ⟨h⟩. By introducing (u, v)^T into eq 9 and considering steady-state solutions only (∂⟨h⟩/∂t = 0), we obtain eq 3. Supporting Information The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsnano.0c09152. AFM roughness measurements on micropillar tops. Detailed description of experimental setup. Humidity and temperature recordings in chamber atmosphere. Raw data of 3D LSCM scans/3D image processing/laser beam influence. Measurements of area and perimeter evolution of frost patch. LSCM top-down recordings of droplet condensation on micropillar tops. Discussion on energy sinks on micropillar tops. Discussion on diffusion-driven condensation. Effective viscosity of silicone oil in micropillar array. Derivation of long-wave approximation of reorganizing silicone oil under frost formation and discretization of governing equations (PDF). Movie S1: Top-down view of propagating frost front on lubricated impregnated microstructured surface (AVI). D.V. wrote the manuscript. All authors reviewed and approved the manuscript.
8,206.8
2021-03-01T00:00:00.000
[ "Materials Science" ]
Lipid Signaling in Ocular Neovascularization Vasculogenesis and angiogenesis play a crucial role in embryonic development. Pathological neovascularization in ocular tissues can lead to vision-threatening vascular diseases, including proliferative diabetic retinopathy, retinal vein occlusion, retinopathy of prematurity, choroidal neovascularization, and corneal neovascularization. Neovascularization involves various cellular processes and signaling pathways and is regulated by angiogenic factors such as vascular endothelial growth factor (VEGF) and hypoxia-inducible factor (HIF). Modulating these circuits may represent a promising strategy to treat ocular neovascular diseases. Lipid mediators derived from membrane lipids are abundantly present in most tissues and exert a wide range of biological functions by regulating various signaling pathways. In particular, glycerophospholipids, sphingolipids, and polyunsaturated fatty acids exert potent pro-angiogenic or anti-angiogenic effects, according to the findings of numerous preclinical and clinical studies. In this review, we summarize the current knowledge regarding the regulation of ocular neovascularization by lipid mediators and their metabolites. A better understanding of the effects of lipid signaling in neovascularization may provide novel therapeutic strategies to treat ocular neovascular diseases and other human disorders. Introduction Living organisms are composed of various organic compounds, including proteins, carbohydrates, nucleic acids, and lipids. Lipids play crucial roles in numerous cellular processes, acting as cellular structural components and biological barriers and regulating numerous signaling pathways [1,2]. The biological membranes of eukaryotes are amphiphilic sheaths consisting of a lipid bilayer, which acts as a cell barrier [3]. The lipid backbones, head groups, chain length, and position and number of carbon double bonds in fatty acyl chains vary immensely, contributing to the diversity of membranes [4]. Glycerophospholipids, sphingolipids, and sterols are major components of membrane lipids [5]. Additionally, bioactive lipid mediators produced from membrane lipids play pivotal roles in various biological processes and have been implicated in numerous disorders [6][7][8][9][10]. Mounting evidence implies that lipid signaling plays a crucial role in angiogenesis [11,12]. Signaling in Ocular Neovascularization Both vasculogenesis and angiogenesis are essential for the formation of vascular networks. Vasculogenesis is the process of blood vessel development by endothelial cells originating from mesoderm-derived progenitor cells [13]. Angiogenesis is the formation of new vessels in response Retinal neovascularization and choroidal neovascularization (CNV) are the most common vascular diseases of the eye. Retinal neovascularization diseases, such as RVO, ROP, and diabetic retinopathy, are primarily caused by chronic or severe retinal ischemia [36]. Retinal ischemia induces the production of intravitreal angiogenic factors [37], leading to the proliferation of retinal blood vessels. Pre-retinal neovascularization in the vitreous cavity can cause tractional retinal detachment and vitreous hemorrhage due to increased vascular vulnerability ( Figure 2). In severe retinopathy cases, pre-retinal neovascularization can lead to vision loss. Considering its importance in angiogenesis, VEGF is a promising therapeutic target for pathologic retinal neovascularization. 
Hence, intravitreal injection of VEGF-targeting agents is currently a standard treatment option for patients with visual impairments due to pathologic retinal neovascularization [38,39]. AMD is another common condition that can cause visual impairments. It is more prevalent in older people and in developed countries. In AMD, lesions often develop within the macula, the central area of the retina responsible for high-acuity vision. Early-stage AMD manifests as abnormalities in the retinal pigment epithelium (RPE) and drusen deposits without impaired visual acuity [40]. Lipoproteins and cholesterol represent major components of soft drusen deposits. Late-stage AMD manifests as exudative AMD or geographic atrophy. CNV is a hallmark of exudative AMD, causing subretinal hemorrhage and vascular leakage (Figure 1). Currently, intravitreal administration of VEGF-targeting agents is the most common treatment for exudative AMD [41][42][43]. The cornea is characterized by complete avascularity under physiological conditions to maintain the clarity required for visual acuity [44]. Under pathological conditions, including corneal infection, traumatic injury, ocular surface inflammation, and limbal stem cell deficiency, corneal neovascularization and sprouting from the pericorneal plexus can occur [45][46][47], potentially leading to vision loss. The new vessels can induce corneal opacification and impair vision. Patients with severe corneal neovascularization may require corneal transplantation. Various corneal neovascularization animal models have been established to investigate disease pathology and to test novel therapeutics for neovascularization in general as well as corneal neovascularization [48]; in most of these models, corneal neovascularization is induced by sutures, chemical treatments, or surgical implantation of hydron pellets with reagents containing angiogenic factors [49]. Although numerous efforts have been made to elucidate the mechanisms underlying ocular neovascularization diseases, the role of bioactive lipids remains elusive. In this review, we summarize current knowledge regarding the role of lipid signaling in ocular neovascularization. Glycerophospholipids in Ocular Neovascularization Glycerophospholipids are the most prevalent membrane lipids in mammalian cells. They are composed of a glycerol backbone, two long-chain fatty acids at the sn-1 and sn-2 positions of glycerol, and phosphoric acid esterified at sn-3 as a headgroup [50].
The different combinations of headgroups and fatty acids allow for over a thousand glycerophospholipid variants in mammalian cells. Based on their polar headgroups, glycerophospholipids can be classified into phosphatidic acids (PA), phosphatidylcholine (PC), phosphatidylethanolamines (PE), phosphatidylserines (PS), phosphatidylglycerol, and phosphatidylinositol (Figure 3) [51]. Other lysophospholipids include lysophosphatidic acid (LPA), lysophosphatidylserine (LysoPS), platelet-activating factor, and 2-arachidonoylglycerol, all of which are involved in various cellular responses by acting as signaling molecules [54]. Despite the low intracellular levels of LPA and LysoPS, they regulate important processes, including angiogenesis. Numerous human gene analyses and preclinical studies have investigated the roles of LPA and LPA receptors [68,69]. These studies have shown that LPA is essential for vascularization [70]. Notably, Atx−/− mice were not viable after E10.5 due to severe vascular defects resembling those in Vegf−/− mice [71]. Atx deletion disrupts the development of vascular networks in the embryo and yolk sac. Allantois explants from Atx−/− mice formed vessels, although the lack of ATX and LPA caused endothelial disassembly. These findings imply that the ATX-LPA axis plays a critical role in vascular formation by stabilizing the immature vascular network. Among the LPA receptors, LPA-Gα12/Gα13 signaling is reportedly responsible for this process, as a deficiency thereof results in a similar phenotype to that of Atx−/− mice [72]. A previous study using zebrafish revealed the requirement of ATX for vascular development, and that suppression of LPA-LPA1 and/or LPA-LPA4, both of which interact with Gα12/Gα13, was responsible for these vascular defects [73]. Similarly, in the absence of Rho-associated protein kinases (ROCK-1/2), which act downstream of LPA, mice exhibited impaired vascular remodeling in the yolk sac [74]. These findings highlight the vital role of LPA/LPA1 and LPA/LPA4 in vascular development.
In normal human cells, LPA facilitates angiogenesis by enhancing the production of VEGF, IL-8, MCP-1, and MMP-9 [75,76] in a Gαi/NF-κB-dependent manner, while VEGF upregulation requires the binding of LPA on LPA1 and LPA3 [77]. Moreover, LPA induces HIF-1α expression and nuclear translocation in various cancer cells by activating PI3K/Akt/mTOR and p42/p44 MAPK signaling pathways [78]. Of note, hypoxia or HIF-1α activation enhances LPA-driven cellular responses [79]. Additionally, LPA induces the expression of various cytokines. Notably, LPA promoted IL-6 expression by binding to LPA1 [80], whereas binding to LPA1, LPA2, or LPA3 induced IL-8 expression [81]. The LPA-mediated upregulation of these cytokines was dependent on NF-κB activation by PI3K/Akt and protein kinase C (PKC) signaling pathways. LPA/LPA1 and LPA/LPA3 signaling also upregulated the expression of numerous cell adhesion molecules [82]. LPA binding to LPA1 activated Rho and iNOS in vivo [83,84]. Therefore, it has become evident that LPA regulates angiogenesis in ocular tissues by activating the expression of various angiogenic factors. Nevertheless, the relevance of each LPA receptor in angiogenesis in different tissues remains unclear. Lysophosphatidic Acid and Ocular Neovascularization The biological roles of ATX and LPA in ocular tissues have been partly characterized. ATX is highly expressed in retina [85], and the ATX-encoding gene (ENPP2) has been identified as an RPE signature gene [86]. LPA is required for retinogenesis, promoting retinal growth cone collapse [87,88]. In human RPE cells, LPA was shown to regulate barrier integrity [89]. These findings imply involvement of the ATX/LPA axis in retinal function. A recent study assessing the role of LPA and its receptors in angiogenesis during retinal vascular development demonstrated that endothelial cells of mice deficient in LPA4 and LPA6 exhibited reduced filopodia and vessel sprouting, with a reduced number of blood vessels. LPA4/LPA6 coupling with Gα12/Gα13 was also shown to promote retinal angiogenesis, as well as the global vascularization described above, in mice by activating the Gα12/Gα13-Rho-ROCK axis in endothelial cells [90]. Gα12/Gα13-Rho-ROCK pathway activation inhibited DLL4/Notch signaling by activating Yes-associated protein (YAP) and transcriptional co-activator with PDZ-binding motif (TAZ) in endothelial cells at the vascular front. These findings imply that LPA4 and/or LPA6 facilitate tip cell formation and stalk cell proliferation in the retina, promoting angiogenesis ( Figure 4). Several clinical studies have also investigated the role of LPA in pathological ocular neovascularization. Intravitreal LPA concentrations were significantly elevated in patients with PDR (proliferative diabetic retinopathy) [91]. Intravitreal concentrations of LPA and PA were significantly associated with levels of the inflammatory biomarker vascular cell adhesion molecule-1 (VCAM-1). AGK levels were also higher in PDR patients, implying involvement of the LPA signaling pathway in the pathology of diabetic retinopathy ( Figure 3). A subsequent study confirmed the elevated levels of LPA and AGK in the vitreous fluid of PDR patients. On the other hand, ATX levels were significantly lower in PDR patients than in patients without diabetes, implying that AGK-LPA rather than ATX-LPA may be involved in the development and progression of PDR [92]. 
According to a study investigating LPA expression in RVO, levels of LPA and ATX were significantly higher in vitreous samples from patients with RVO than in samples from patients without RVO. LPA levels were significantly associated with visual acuity impairments and expression levels of MCP-1, VEGF-A, IL-6, and IL-8. Changes in central macular thickness secondary to RVO were also correlated with ATX expression. These findings imply that both LPA and ATX are involved in the pathogenesis of RVO [93]. Nonetheless, the implication of LPA in the pathology of CNV and AMD remains understudied. As LPC metabolism-related markers were higher in the serum of exudative AMD patients [94], it is likely that LPA is also involved in AMD pathology. Future in vivo and in vitro investigations are required to elucidate the biological roles of LPA in pathological ocular neovascularization conditions, in addition to the physiological vascular formation. Sphingolipids in Ocular Neovascularization Sphingolipids are one of the main components of the membrane lipid bilayers in eukaryotic cells. They are derived from palmitoyl-CoA and serine and contain a backbone of sphingoid bases, a fatty acid attached to the long-chain sphingoid base via an amide bond, and a headgroup [95]. In mammalian cells, the headgroup of sphingolipids is a phosphocholine or oligosaccharide. Sphingomyelin, a sphingolipid with phosphocholine as a polar headgroup, is a primary component of membrane microdomains called lipid rafts. Sphingomyelin is hydrolyzed by sphingomyelinases (SMase) into ceramides and phosphocholine [96]. Ceramide regulates various cellular processes, including apoptosis and senescence [97], and is a central player in sphingolipid metabolism because sphingosine is synthesized only from ceramide. Ceramidases, including acid ceramidase, neutral ceramidase, alkaline ceramidase 1 (ACER1), ACER2, and ACER3, catalyze the hydrolysis of ceramide to sphingosine [98], which is then phosphorylated by sphingosine kinases to produce sphingosine 1-phosphate (S1P) (Figure 5). Sphingolipids act as signaling molecules regulating inflammation, cell viability, and migration [99,100]. S1P has been implicated in both physiological and pathological neovascularization, as detailed in the next section. Sphingosine 1-Phosphate S1P is a bioactive lipid consisting of a long-chain sphingoid base and a phosphate polar headgroup. It is synthesized via the ATP-dependent phosphorylation of the hydroxyl group of sphingosine by sphingosine kinase-1 (SphK1) and SphK2 [101]. SphK1 is predominantly found in the cytosol adjacent to the plasma membrane, whereas SphK2 is located in the endoplasmic reticulum, nucleus, and mitochondria [102,103]. S1P interacts with specific G-protein-coupled receptors localized on the cell surface (S1P1-5), initiating autocrine, paracrine, or endocrine signaling. S1P receptors are coupled to Gαi/o, Gαq/11, and Gα12/13 (Figure 6) [104]. S1P1, S1P2, and S1P3 are ubiquitously expressed, whereas S1P4 is primarily expressed in lymphoid tissues, and S1P5 is expressed in the brain and spleen [105,106].
S1P levels are relatively higher in the blood than intracellularly; the concentration of serum S1P in healthy humans is approximately 1 µM [107]. S1P is stored in endothelial cells, erythrocytes, and thrombocytes [108,109]. Similar to LPA, S1P acts as an angiogenic factor, promoting embryonic vascular development. Sphk1/2−/− mice were devoid of S1P and embryonically lethal due to profound blood vessel defects between E9.5 and E13.5, leading to cranial hemorrhage and implying that S1P is essential for vascular development [110]. Among S1P receptors, S1P1 is considered the most important for the S1P-mediated effects in vascular development. S1pr1-deficient mice exhibited severe vascular smooth muscle defects and hemorrhage, which led to embryonic death [111]. Moreover, the deletion of S1pr2 and/or S1pr3 in S1pr1−/− mice elicited even more severe vascular maturation defects, leading to lethality.
S1pr1-3 triple knockout mice had a reduced number of branches and capillary networks, implying that the interplay of the signaling pathways induced by S1P1, S1P2, and S1P3 is also crucial for vessel development [112]. These reports collectively imply that the S1P/S1P1-3 axis is essential for embryonic vascular development. S1P is believed to promote neovascularization through the activation of angiogenic factors ( Figure 6). S1P binding to S1P2 promotes VEGF and MMP-2 activation [113]. S1P has been shown to prevent HIF degradation, promoting HIF-1α signaling independently of hypoxia [114]. Additionally, S1P has been shown to induce HIF-1α expression by activating MAPK and PKCβI in a Gαi/o-dependent manner [115]. Conversely, HIF-2α, a HIF-α subunit predominantly expressed in endothelial cells, cardiomyocytes, and glial cells [116], upregulated the expression of SphK1 by binding to SphK1 promoter [117]. These reports imply that S1P regulates both physiological and pathological angiogenesis by promoting the expression and activation of various angiogenic factors and that hypoxic cellular responses promote S1P expression. Additionally, S1P binding to S1P2 enhanced IL-8 secretion by activating p38 MAPK and extracellular signal-regulated kinase (ERK) 1/2 pathways [118,119]. S1P/S1P1 also modulates endothelial cellular junctions by inducing the translocation of VE-cadherin to adherens junctions and enhancing barrier integrity through Rho and/or Rac activation [120]. On the other hand, the S1P2/Rho/ROCK/PTEN axis as well as VEGF signaling promotes the phosphorylation of VE-cadherin [121]. As different S1P receptors may have opposing biological functions [122], the effects of S1P vary depending on the S1P receptor. Sphingosine 1-Phosphate and Ocular Neovascularization S1P is generated in the retina among other tissues [123], and the essential roles of S1P and S1P receptors in the retinal vascular formation have become apparent. S1pr1 deficiency in endothelial cells increased the number of tip cells and enhanced filopodia formation. Additionally, S1pr1 depletion in endothelial cells led to ectopic endothelial hyper-sprouting and subsequent endothelial hyperplasia without pericyte coverage in the retina, resulting in vessel defects and lack of mural cells. The loss of S1pr1 also impaired VE-cadherin stabilization at endothelial cell-cell junctions, resulting in vascular leakage and lethality [124]. These reports imply an important role for S1P1 in stabilizing sprouting angiogenesis in the retina. Consistently, S1pr1-3 triple knockout mice exhibited a disorganized retinal vascular endothelium with hyper-sprouting and a lack of vascular endothelial barrier and capillary lumens [125]. Moreover, S1pr1-3 deficient mice had insufficient blood perfusion in the retina and severe vascular structure defects due to impaired endothelial cell specialization. S1P has been shown to regulate vascular maturation partly by suppressing the expression of the transcriptional factor JunB in endothelial cells located behind the vascular front. These findings imply that S1P and S1P receptors (mainly S1P1) are required for endothelial cell specialization and vessel maturation in the retina independently of VEGF. In addition to their role in the physiological vascular development, S1P and S1P receptors have been implicated in pathological retinal neovascularization. 
A study using an oxygen-induced retinopathy (OIR) mouse model demonstrated that SphK2 overexpression promoted retinal angiogenesis under normoxic conditions [126], reducing the avascular retinal area and exacerbating pathological retinal neovascularization in hypoxia. Conversely, SphK2 deletion ameliorated retinal neovascularization and decreased the expression of VEGF and angiopoietin, highlighting the crucial role of SphK2/S1P in normal and pathological retinal angiogenesis. Similar to SphK2−/− mice, S1P2−/− mice lacked intravitreal pathological neovascular tufts, confirming the role of S1P2 in pathological retinal neovascularization [127]. However, in contrast to SphK2 deletion, S1pr2 deficiency significantly decreased the avascular retinal area and restored the formation of retinal vasculatures and capillary plexus, partly due to the negative regulation of eNOS. Considering that S1P1 is the most important S1P receptor regulating retinal vessel formation, these results imply that S1P is required for both physiological and pathological retinal angiogenesis and that S1P/S1P2 may be an essential regulator of pathological retinal neovascularization without affecting the retinal vascular morphology. Laser-induced CNV and sub-retinal fibrosis induction were alleviated by the intravitreal administration of anti-S1P antibodies [128]. Several humanized anti-S1P monoclonal antibodies repressed CNV in mouse models by inhibiting the IL-8-mediated lymphocyte trafficking [129]. Notably, S1P2 promoted CNV formation by regulating the production of angiogenic factors and inflammatory mediators as well as promoting barrier disruption [130,131]. Although the mechanisms underlying the S1P regulation in pathological neovascularization remain elusive, these findings pinpoint S1P signaling as a promising therapeutic target to suppress pathological retinal and choroidal neovascularization. Unfortunately, a phase II clinical trial found that the intravitreal administration of an anti-S1P antibody could not improve visual impairment in exudative AMD patients [132]. However, there is a possibility that other anti-S1P antibodies or drugs targeting a specific S1P receptor may have a therapeutic effect for exudative AMD. Yonetsu et al. [133] evaluated the role of S1P and S1P receptors using a corneal neovascularization rabbit model; they found that an S1P1-3 antagonist suppressed corneal neovascularization. Similarly, the S1P receptor modulator FTY720 attenuated corneal neovascularization and vascular leakage induced by VEGF or S1P [134]. Although it remains unclear which S1P receptor was responsible for these effects, these observations imply that S1P and S1P receptors are also involved in the pathogenesis of corneal neovascularization. The Role of Fatty Acids and Their Metabolites in Ocular Neovascularization Eicosanoids are fatty acids synthesized from C20 fatty acids, such as arachidonic acid (AA) and eicosapentaenoic acid (EPA), by fatty acid oxygenases (Figure 7) [135]. Docosanoids are derived from C22 fatty acids, including docosahexaenoic acid (DHA). Mediators derived from the ω-6 fatty acid AA include prostaglandins (PG), thromboxanes (TX), leukotrienes, and lipoxins [136]. AA is further metabolized by cyclooxygenase (COX)-1 and -2 to produce various bioactive lipids, including PGD2, PGE2, PGF2α, prostacyclin (PGI2), and TXA2 (also known as 2-series prostanoids), which are generally considered to have proinflammatory effects. In contrast, ω-3 fatty acid-derived prostanoids generated from EPA, including PGD3, PGE3, PGF3α, PGI3, and TXA3 (3-series prostanoids), exert anti-inflammatory effects. ω-6 Polyunsaturated Fatty Acids ω-6 polyunsaturated fatty acids (PUFAs) are considered a pathological angiogenic factor in ocular tissues. Of note, AA or its derivatives promote retinal vascular degeneration, and the inhibition of multiple 2-series prostanoid receptors repressed retinal and choroidal neovascularization in animal models [11,139,140]. The biological effects of PGE2 have been extensively investigated over many years. It is synthesized by COX from PGH2, which also produces PGD2, PGF2α, PGI2, and TXA2. PGE2 is a potent angiogenic lipid mediator exerting its effects through G protein-coupled receptors (EP1-4) via autocrine and paracrine signaling. EP1 is coupled to Gαq/11, modulating intracellular Ca2+ levels [141].
EP2 and EP4 are coupled with Gαs, activating adenylate cyclase, which subsequently produces cyclic AMP (cAMP). Hypoxia induced the expression of COX-2 and PGE2 in retinal Müller cells; PGE2 subsequently enhanced VEGF expression, likely via EP2 and/or EP4 [156]. These results imply that PGE2 is an important prostanoid associated with retinal neovascularization. Numerous studies have shown that PGE2 mediated retinal vascularization in a Gαi/o-dependent manner. Mice lacking EP3 exhibited impaired retinal embryonic angiogenesis due to DLL4/Notch signaling inhibition in endothelial cells [157]. The same study also showed that the endothelial-specific silencing of EP3 attenuated the recruitment of tip cells, implying that EP3 modulates developmental angiogenic processes in endothelial cells. EP4 has also been implicated in pathological angiogenesis. PGE2/EP4 axis inhibition suppressed pathological neovascularization in OIR and laser-CNV mouse models [158]. The importance of PGE2 in corneal neovascularization has also been reported. PGE2 levels were elevated in a corneal suture-injury mouse model; PGE2 exacerbated corneal neovascularization by promoting chronic inflammatory neovascularization [159]. It is worth noting that COX-2 inhibitors suppressed ocular pathological neovascularization in the retina, choroid, and cornea [160][161][162], indicating that COX-2, as well as PGE2, is involved in the pathogenesis of ocular neovascularization. ω-3 Polyunsaturated Fatty Acids In contrast to 2-series prostanoids, 3-series prostanoids have anti-angiogenic and anti-inflammatory effects that protect against several disorders [163]. EPA and DHA found in retinal microvessels are thought to have protective effects against retinal vascular diseases [164]. For instance, ω-3 PUFAs (EPA and DHA) inhibited the expression of TNF-α, IL-1β, VEGF, and cell adhesion molecules in the retina in response to hypoxia or angiogenic factors [165,166]. Similarly, bioactive compounds derived from EPA and DHA, including resolvin E1 (RvE1) and D1 (RvD1), suppressed retinal vascular diseases [167,168]. Oral supplementation with ω-3 PUFAs, including EPA and DHA, suppressed retinal obliteration and pathological neovascularization in an OIR mouse model by inhibiting proinflammatory cytokine secretion in retinal microglia [169]. RvD1 and RvE1 also inhibited neovascularization, implying that ω-3 PUFA supplementation is a promising prevention strategy for retinal vascular diseases. Additionally, peroxisome proliferator-activated receptor (PPAR), whose ligands include ω-3 PUFAs, reportedly has protective effects against pathological retinal and choroidal neovascularization [166,170,171], implying that PPAR agonists can be considered a therapeutic option. Furthermore, dietary intake of ω-3 PUFAs may protect against AMD. The prevalence of exudative AMD presenting with CNV was higher in people who did not consume fish, which are enriched in EPA and DHA [172]. Furthermore, ω-3 PUFA intake lowered the twelve-year incidence of AMD [173]. These findings strongly imply that ω-3 PUFAs have a protective effect against AMD.
Consistently, ω-3 PUFA supplementation suppressed laser-induced CNV in mice; it also decreased intravitreal concentrations of VEGF-A and alleviated exudative CNV in humans [174,175]. RvD1 and RvE1 attenuated corneal neovascularization induced by herpes simplex virus (HSV)-1 infection, manual suture, or implantation of pellets secreting angiogenic factors. Their anti-angiogenic effects were attributed to their ability to inhibit the infiltration of inflammatory cells and the secretion of cytokines such as VEGF-A, MMP-9, IL-1β, and TNF-α [176,177]. The findings of these studies indicate that ω-3 PUFAs are a promising therapeutic option for patients with ocular pathological neovascularization. Future Perspectives Numerous studies on membrane lipids and their metabolites demonstrate the crucial role of lipid signaling in ocular neovascularization, among other conditions. Glycerophospholipids, sphingolipids, and fatty acids have strong pro-angiogenic or anti-angiogenic effects by activating complex signaling circuits. Future studies are required to elucidate the mechanisms underlying the effects of lipid signaling in ocular neovascularization and other human disorders.
6,183.8
2020-07-01T00:00:00.000
[ "Medicine", "Biology" ]
A Remote Sensing Approach for Surface Urban Heat Island Modeling in a Tropical Colombian City Using Regression Analysis and Machine Learning Algorithms: The Surface Urban Heat Island (SUHI) phenomenon has adverse environmental consequences for human activities and biophysical and ecological systems. In this study, Land Surface Temperature (LST) from Landsat and Sentinel-2 satellites is used to investigate the contribution of potential factors that generate the SUHI phenomenon. We employ Principal Component Analysis (PCA) and Multiple Linear Regression (MLR) techniques to model the main temporal and spatial SUHI patterns of Cartago, Colombia, for the period 2001–2020. We test and evaluate the performance of three different emissivity models to retrieve LST. The fractional vegetation cover model using Sentinel-2 data provides the best results with R² = 0.78, while the ASTER Global Emissivity Dataset v3 and the land surface emissivity model provide R² = 0.27 and R² = 0.26, respectively. Our SUHI model reveals that the factors with the highest impact are the Normalized Difference Water Index (NDWI) and the Normalized Difference Built-up Index (NDBI). Furthermore, we incorporate a weighted Naïve Bayes Machine Learning (NBML) algorithm to identify areas prone to extreme temperatures that can be used to define and apply normative actions to mitigate the negative consequences of SUHI. Our NBML approach demonstrates the suitability of the new SUHI model with uncertainty within 95%, against the 88% given by the Support Vector Machine (SVM) approach. Introduction Urban expansion transforms natural areas into surfaces covered with concrete, asphalt, and buildings (highly impervious materials), reducing evapotranspiration and the cooling capacity of the air, which in turn exacerbates the impact of high surface temperatures in urban areas. Due to this urban growth, the climate in these areas becomes warmer than that of the surrounding suburban and rural regions, resulting in the phenomenon of Urban Heat Islands (UHI) [1]. The UHI refers to a phenomenon in which urban areas tend to have higher air or surface temperatures than their surroundings [2]. Traditionally, terrestrial observation methods, such as ground meteorological stations that record specific values of air temperature, have been used to model UHI [3]. The difference between air temperature measurements recovered from urban and rural meteorological stations is a direct method used to model UHI [4]. However, the high heterogeneity in urban areas makes temperature spatially diverse, making it difficult for a small number of stations to realistically represent the actual variability [5]. When the UHI phenomenon is monitored by remote sensing, it is referred to as Surface Urban Heat Island (SUHI), because the parameter considered here is the Land Surface Temperature (LST) rather than the air temperature. In this study, we also incorporate a weighted Naïve Bayes Machine Learning (NBML) algorithm to segment the geographical space into regions of different thermal intensity, an approach not explored in the previous literature. Understanding and quantifying urban temperatures in space and time is significantly relevant for city planners in defining policies that generate adaptation strategies in the face of adverse effects of SUHI. Here we study the application and assessment of modeling procedures that allow evaluating the contribution of various factors to SUHI.
For this purpose, a combination of PCA and MLR techniques applied along with Machine Learning algorithms is used to detect high thermal intensity patterns in the tropical Colombian Andean city of Cartago. Although SUHI is a derived quantity, expressed as the difference between urban and rural LST, the delimitation of thermal zones using LST ranges allows establishing comparisons with other zones, e.g., rural areas, and classifies the space into zones with greater or lesser thermal activity. In this study, the LST ranges are taken from Wang et al. [36], as they are based on statistical criteria, and they appear to conveniently reflect the LST differences of urban areas with their surroundings. The spatial patterns of the SUHI phenomenon can be represented through LST ranges, which, combined with the weights of the involved variables, are further classified using Machine Learning algorithms. The methodology suggested in this article establishes an effective method for assessing SUHI patterns locally, and attempts to draw several recommendations for planning sustainable urban development and for the regeneration of areas with thermal excesses. Study Area The city of Cartago is located in the south-west area of Colombia in the Andean region at an altitude of 917 m above sea level. It has an extension of 279 km² with moderate topographic relief. The geodetic coordinates of the city center are 4.75° N and 75.9° W. This area belongs to Valle del Cauca Department and is surrounded by the Cauca and La Vieja rivers. The climate in this area is tropical dry and the average air temperature is 23.8 °C, with an annual rainfall of 1578 mm. March is the warmest month with an average temperature of 24.3 °C, while October is the coldest with 23.3 °C. According to official reports, the urban population growth rate during 2001–2020 was 12.3%, while the rural population decreased by 44% [37]. The population density is 464 inhabitants per km². The appearance of new urban units (red oval areas in Figure 1) denotes the urban growth from the city center to the north-east, near the La Vieja River, as well as to the south-east and south-west. Temperatures in tropical zones show small changes throughout the year. According to official reports from the study area, the difference between the average temperature of the warmest month and that of the coldest month is 1 °C [38]. Data Data used for the study area (Figure 1a,b) were freely acquired from ESRI World Countries (https://hub.arcgis.com/, accessed on 20 October 2021). The base cartography for the construction of thematic maps and the topographic model is available at https://geoportal.dane.gov.co/, accessed on 20 October 2021, and https://geoportal.igac.gov.co/, accessed on 20 October 2021.
The primary information sources used are satellite Earth images from the Thematic Mapper (TM) instrument onboard the Landsat 5 and 7 satellites (L5TM and L7ETM+), and from the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) onboard the Landsat 8 satellite (L8OLI/TIRS). The images used in this study are sparsely distributed within the period 2001–2020, as shown in Figure 2. Each Landsat product contains separate spectral bands in GeoTIFF format and is referenced to the WGS84 datum in the UTM (18N) cartographic projection. L5TM, L7ETM+, and L8OLI VIS and NIR bands have a spatial resolution of 30 m, while for the TIR satellite instruments, the resolutions are 120, 60, and 100 m, respectively. We employ a total of 37 Landsat scenes (satellite path 009 and row 057), including 2 images of L5TM, 20 of L7ETM+, and 15 of L8OLI/TIRS. In addition to the Landsat products, 11 multispectral Level-2A atmospherically corrected images from the Sentinel-2 Multispectral Instrument (S2-MSI) were also used to extract the so-called Fractional Vegetation Cover (Fcover) biophysical variable. S2-MSI offers different spatial resolutions; the three visible and the near infrared bands have 10 m spatial resolution. The three Red Edge bands, an NIR band, and two SWIR S2-MSI bands have 20 m spatial resolution. These data are very appropriate for the retrieval of geophysical surface parameters. Meanwhile, the three other S2-MSI bands (coastal aerosol, water vapor, and SWIR-Cirrus) have a 60 m resolution. The reflectance S2-MSI products are freely available on the European Space Agency (ESA) DataHUB server (ESA, https://scihub.copernicus.eu/, accessed on 20 October 2021). Details on the retrieval of Fcover are given in Section 2.3.2. The estimation of LST from the Thermal Infrared Sensor (TIRS) is highly dependent on the intrinsic properties of the coverage, such as the emissivity of the land surface. The emissivity retrieval method based on the Fcover is very suitable due to its ease of application. Its performance has been shown in works such as Sobrino et al. [39] and Valor and Caselles [40]. Methods The proposed methodology comprises five processing steps: (1) data calibration, (2) extraction of contributing factors, (3) estimation of temperature and emissivity, (4) validation of temperatures, and (5) modeling of the SUHI phenomenon. These are described in the following sections. Data Calibration The conversion of image digital values to top of atmosphere radiance (LTOA) was carried out using the gain and offset parameters included in the product metadata file. We use the radiance models provided by the USGS website [41]. Subsequently, the images were corrected for atmospheric effects to minimize the radiance scattering and absorption errors caused by water vapor, dust particles, and aerosols.
We employ the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module of ENVI®, which incorporates the MODerate Resolution Atmospheric TRANsmission (MODTRAN) model [42]. Given the geographic location of the area of interest, the tropical atmospheric model is applied to the Landsat products. FLAASH solves the radiative transfer equation by determining the water vapor for each pixel in the image. Water vapor content (WVC) retrieval is not a straightforward solution for Landsat bands, so this parameter was taken from a standard atmospheric model. Regarding the aerosol concentration or aerosol optical depth (AOD), the dark vegetation reflectance algorithm of Kaufman et al. [43] was applied. Finally, all images were subset to fit the boundaries of the study area. Definition and Extraction of Contributing Factors Spectral indices such as NDBI, NDVI, and NDWI are used to examine the underlying properties of SUHI formation. Analytical expressions for these indices can be found in Zha et al. [44], Tucker [45], and Gao [46]. Moreover, the tasseled cap (TC) components (brightness, greenness, and wetness) are also computed [47]. The rationale for the selection of these biophysical indices is as follows.
• Energy exchange between latent and sensible heat is related to NDBI, since it detects impervious surfaces that reduce humidity and increase the average temperature of the environment [48].
• Temperature and vegetation maintain a spatially dependent relationship [49]. Vegetation reduces surface irradiation and increases humidity through physiological processes that allow energy exchange, while producing a cooling effect. In this sense, an index for measuring this photosynthetic activity is the NDVI.
• The presence of water bodies has a cooling effect on urban temperature [50]. In this scheme, the NDWI quantifies the water content in the vegetation, while suggesting a significant effect in reducing SUHI. Likewise, rivers play an important role as thermal regulators of urban climate, increasing the cooling potential through evaporation and facilitating airflow.
Given that the urban center is the main point for the development of socioeconomic activities, two additional variables were considered to describe the effect of proximity, i.e., the proximity to water bodies (PW) and the proximity to the urban center (PUC). A greater distance would imply a lower thermal intensity [51]. The proximity indices are computed by means of a Euclidean distance using the inverse distance weighting operator in ArcGIS® (https://esri.com/, accessed on 20 October 2021). The above indices constitute the contributing factors of our proposed SUHI model. To compute the emissivity values required to retrieve LST from Landsat thermal bands, a novel method is proposed through extracting the Fcover biophysical variable, although this information can be derived indirectly from NDVI, Leaf Area Index (LAI), or other biophysical variables [52][53][54]. Bacour et al. [55] proposed a robust procedure based on Neural Network training of the PROSAIL (PROSPECT leaf optical properties model and SAIL canopy bidirectional reflectance) model. This Fcover variable is implemented in ESA's Sentinel Application Platform (SNAP, https://step.esa.int/main/toolboxes/snap/, accessed on 20 October 2021) and requires S2-MSI images. Detailed descriptions of this scheme are available in Weiss and Baret [56].
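As an illustration of how the spectral contributing factors described above can be derived, the following minimal Python sketch computes the NDVI, NDWI, and NDBI from surface-reflectance bands already loaded as NumPy arrays; the band assignments, the small epsilon guard, and the assumption that all bands share a common grid are ours and do not reproduce the exact processing chain used in the study.

import numpy as np

def normalized_difference(band_a, band_b, eps=1e-6):
    # Generic normalized-difference index: (a - b) / (a + b).
    return (band_a - band_b) / (band_a + band_b + eps)

def spectral_indices(red, nir, swir1):
    # red, nir, swir1: 2-D NumPy arrays of surface reflectance on a common grid
    # (for Landsat 8, bands 4, 5, and 6, respectively).
    ndvi = normalized_difference(nir, red)    # vegetation greenness (Tucker)
    ndwi = normalized_difference(nir, swir1)  # vegetation water content (Gao)
    ndbi = normalized_difference(swir1, nir)  # built-up/impervious surfaces (Zha et al.)
    return ndvi, ndwi, ndbi

The proximity factors (PW and PUC) would be computed separately as Euclidean distance surfaces, as described in the text.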
The Fcover variable provides the emissivity values necessary to compute LST with the L8OLI/TIRS thermal band 10. Compared to traditional methods based on NDVI, this new approach for extracting the emissivity is suitable for thermal radiation models. Due to the temporal synchronization required between S2-MSI and L8OLI/TIRS images, this method is only applicable since 2015. Estimation of Land Surface Temperature and Emissivity Land surface temperatures are retrieved from L5TM, L7ETM+, and L8OLI/TIRS. For L8OLI/TIRS, only band 10 is used, since band 11 has large uncertainties, as reported by the USGS [57]. The consistency of the Landsat 5, 7, and 8 satellite thermal instruments in recovering LST was compared by Sekertekin and Bonafoni [58] and validated with in situ LST measurements. The RMSE values were 2.39 °C, 2.57 °C, and 2.73 °C, respectively, resulting in an average difference of 0.2 °C between the sensors. These uncertainty values are adequate for the purpose of this study. In Figure 3, our model to retrieve LST is presented in a flow chart. Temperatures are derived using the radiative components implemented by Barsi et al. [59] for single-channel algorithms. This method simulates the attenuation effects of the atmosphere that disturb the TIR signal. The radiative values can be used in atmospheric correction models, e.g., Equation (1), taking also into account the correction of spectral emissivity. In this equation, LTOA is the spectral radiance at the top of the atmosphere (registered by the sensor), τ is the atmospheric transmittance, ε is the spectral emissivity, LT is the spectral radiance of a black-body target of kinetic temperature T, and Lλ↑ and Lλ↓ are the upwelling atmospheric path radiance and the downwelling or sky radiance, respectively. Implementing Equation (1) requires the supply of adequate emissivity values for a suitable estimation of LST. Since different land covers emit thermal radiation differently, spectral emissivity corrections are necessary [60]. In this work, three emissivity models are tested to accurately estimate LST. First, the field-measured LSE (land surface emissivity) values are obtained from different authors, and are listed in Table 1. Then the emissivity data of the ASTER-GEDv3 product [61] were considered.
Then, the Fcover model of Valor and Caselles [40] is applied; this model allows the calculation of the emissivity in the Landsat 8 thermal band, considering the Fcover index and the minimum and maximum values of the emissivity in the corresponding spectral band. Finally, the three LST models are compared and validated. In this study, land use features are categorized into seven classes. These are water bodies, cropland, forest, low vegetation, bare soil, urban/densely built, and suburban/medium built. We applied this scheme following the land cover classes proposed by Park et al. [62]. Since impervious surfaces exhibit a large spectral variation [63], two classes are used to represent artificial surfaces: urban/dense and suburban/medium. These classes are particularly identified by their impact on emissivity. For this purpose, an object-based classification is carried out using Trimble's eCognition Developer software (https://geospatial.trimble.com/products-and-solutions/ecognition, accessed on 20 October 2021). The second emissivity dataset in this work is the ASTER Global Emissivity Dataset v3 (ASTER-GEDv3) [61] (https://emissivity.jpl.nasa.gov/aster-ged, accessed on 20 October 2021). This method was developed by the NASA Jet Propulsion Laboratory (JPL) as an algorithm based on temperature and emissivity separation along with an atmospheric correction model. More details can be found in Hulley and Hook [66]. The third emissivity model requires knowledge of the Fcover variable [40]. This method expresses the emissivity of a heterogeneous surface as a combination of the vegetation and bare-soil emissivities weighted by Fcover (Equation (2)). In this equation, εv = 0.985 and εg = 0.960 are the reference vegetation and bare soil emissivities, respectively, and dε is the cavity effect associated with the indirect radiance emitted due to internal reflections between the interfaces. Here, Fcover is obtained from S2-MSI Level-2A products (see Section 2.3.2). The procedure to retrieve the Fcover variable differs from the NDVI methods [67,68], and is presented as a novel alternative for thermal modeling with Landsat data. In tropical areas, vegetation dynamics do not exhibit abrupt changes throughout the year, which implies that Fcover lacks significant seasonal variations. For the L5TM and L7ETM+ thermal instruments, the emissivity model of Equation (3) [69] is used; this last method to obtain Fcover is based on the NDVI parameter. In this equation, εnonveg and εveg are the reference emissivity values for nonvegetated and vegetated areas, being 0.97 and 0.99, respectively [70]. In this work, the Fcover variable is recovered using the NDVI, as it effectively reflects the conditions of vegetation cover [42]. This is estimated by Equation (4). In this equation, NDVIs is the NDVI value of pure soil, and NDVIv is the NDVI of pure vegetation obtained from the NDVI image. This method is based on the Carlson and Ripley [69] model. Finally, the conversion from LTOA to LST is estimated by using the constants for sensor calibration and the inversion of the Planck equation [71]. Assessment of the Land Surface Temperature Retrieved from L8TIRS B10 Landsat-retrieved LST was verified with in situ measurements. Unfortunately, due to the Landsat overpass schedule, starting in 2001, it was impossible to undertake a validation by means of field surveys for the entire Landsat time series. Due to these inconveniences, the comparison was limited only to two overpasses of L8OLI/TIRS band 10.
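As a rough illustration of the retrieval chain discussed above (fractional vegetation cover, emissivity, single-channel atmospheric correction, and Planck inversion), the following Python sketch chains these steps for Landsat 8 band 10. The NDVI thresholds, the cavity term, and the atmospheric parameters (τ, L↑, L↓, which in the study come from the Barsi et al. single-channel approach) are placeholder assumptions rather than the study's calibrated values; K1 and K2 are the published band-10 calibration constants.

import numpy as np

# Published Landsat 8 TIRS band 10 calibration constants (from the product metadata).
K1_B10 = 774.8853   # W / (m^2 sr um)
K2_B10 = 1321.0789  # K

def fcover_from_ndvi(ndvi, ndvi_soil=0.2, ndvi_veg=0.7):
    # Carlson and Ripley-type fractional vegetation cover; the NDVI thresholds
    # are illustrative and would normally be taken from the NDVI image statistics.
    f = ((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)) ** 2
    return np.clip(f, 0.0, 1.0)

def emissivity_from_fcover(fcover, eps_veg=0.985, eps_soil=0.960, d_eps=0.0):
    # Vegetation/soil mixture emissivity; d_eps stands in for the cavity term,
    # which depends on surface geometry and is set to an illustrative value here.
    return eps_veg * fcover + eps_soil * (1.0 - fcover) + d_eps

def blackbody_radiance(l_toa, emissivity, tau, l_up, l_down):
    # Invert the standard single-channel radiative transfer equation,
    # L_TOA = tau * (eps * L_T + (1 - eps) * L_down) + L_up, for L_T.
    return (l_toa - l_up - tau * (1.0 - emissivity) * l_down) / (tau * emissivity)

def lst_celsius(l_surface, k1=K1_B10, k2=K2_B10):
    # Invert the Planck function for band 10 and convert to degrees Celsius.
    return k2 / np.log(k1 / l_surface + 1.0) - 273.15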
For the L5TM and L7ETM+ data, the Carlson and Ripley [69] results can be used as a reference. In situ temperatures were measured using 30 thermometers assembled with DS18B20 digital sensors. The direct calibration method was applied, which consists of recording the readings of the test and standard thermometers, the latter being preserved in an isothermal medium. This calibration procedure produced standard deviations of ±0.5 °C. The field survey consists of distributing the 30 devices, as shown in Figure 4. LST measurements coincident with the two L8OLI/TIRS overpasses were recorded on 22 January and 9 September 2019. Each device recorded the temperature values by means of a probe in contact with the ground surface. The ground sensors were placed in areas with homogeneous land cover to minimize the spatial thermal variation caused by different emissivity values. These records will be used to contrast the LST values derived from satellite measurements. In Section 3.1, we perform a sensitivity analysis for the three emissivity models. According to Rasul et al. [72], SUHI modeling consists of identifying the spatial variation in time of thermal features in urban areas. Here, through the combination of thermal images from remote sensing and sparse measurements in the field, our SUHI model employs the PCA to analyze space-time data. The PCA is a multivariate statistical technique that preserves the total variance of a dataset while reducing its dimensionality [73]. In this way, the PCA can retrieve the main spatial patterns of variability in a time series. The application of PCA provides a generalization of the changes that characterize the variability patterns in a time series of images [18]. Then, the impacts of the eight factors considered in this study are assessed using an MLR approach. The MLR technique is a parametric model that adjusts the relationship between explanatory variables, that is, the contributing factors, and the response variable, e.g., LST. The inclusion or elimination of predictors depends on the significance of these variables within the model, which is defined by a test hypothesis based on the coefficients associated with the response variable. When using MLR techniques, it is important to examine the key assumptions of autocorrelation, normality of residuals, and multicollinearity.
These factors determine the reliability of the model [74]:
• Autocorrelation of a variable represents its self-dependence and implies redundant information that makes the estimator lose efficiency. The Durbin-Watson statistic is used to measure autocorrelation [75].
• The normality of the residuals guarantees a satisfactory representation of the model.
• Multicollinearity occurs when the predictor variables are highly correlated. Multicollinearity increases the variance, causing instability of the regression and thus increasing the standard error [76]. Multicollinearity is measured with the Variance Inflation Factor (VIF).
Finally, outliers can also alter the modeling approach, causing problems with the regression assumptions [77,78], and these must be controlled or removed from the dataset. Here, our MLR analysis is an equation capable of describing the thermal intensity depending on the contributing factors. To verify the relative importance of each individual predictor of the LST model, a normalization procedure was previously performed to standardize the coefficients: we use the deviation from the mean, divided by the standard deviation of the LST response variable, which allows us to derive the standardized coefficients [79]. Subsequently, the contribution of each variable to LST is obtained by weighting the absolute value of each standardized coefficient, as sketched in the example below. The resulting weights are further used in the subsequent Machine Learning procedure that derives the multitemporal intensity of the SUHI model. This provides a technical basis for analyzing the factors that influence the thermal environment, which is of great significance for rational urban planning and sustainability. The methodological workflow in Figure 5 shows the spatiotemporal model followed to characterize the impact of environmental factors on the thermal changes. First, the multitemporal factors, such as LST, spectral indices, and other variables, are derived from the Landsat 2001-2020 dataset. Then, the PCA technique is applied to extract the main patterns of variability. Subsequently, all the variables involved are included in the MLR scheme to model the possible dependences on LST. The MLR is implemented with the software R Studio (https://rstudio.com/, accessed on 20 October 2021). Finally, the SUHI phenomenon is segmented into different zones depending on the thermal intensity. Thermal value ranges follow the categories of Wang et al. [36], which consider the average temperature of the land surface and its standard deviation (SD). Segmentation provides a definitive SUHI product that categorizes the urban environment according to specific conditions. Here we test two different Machine Learning methods for classification: Support Vector Machine (SVM) and Naïve Bayes Machine Learning (NBML). Both SVM and NBML methods have shown in previous research their robustness for the characterization of various types of geospatial data [80,81]. The SVM method defines a separating hyperplane in a higher-dimensional space that optimally classifies the data. This method is particularly useful for solving nonlinear relations [82], and is available as open-source software in Orfeo ToolBox (OTB) at https://www.orfeo-toolbox.org/, accessed on 20 October 2021. The NBML technique is based on the Bayes theorem for conditional probability and assumes independence between predictors, variables, or features [83].
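Returning to the coefficient standardization described above, the following sketch derives standardized coefficients and per-factor weights from an ordinary least-squares fit. The study implements its regression in R Studio; this Python version only illustrates the arithmetic, and the factor names, sample size, and synthetic data are assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative derivation of standardized coefficients and per-factor weights.
rng = np.random.default_rng(0)
names = ["NDBI", "NDVI", "NDWI", "PUC"]
X = rng.normal(size=(5000, 4))
lst = X @ np.array([0.48, 0.21, -0.61, -0.51]) + rng.normal(scale=0.3, size=5000)

model = sm.OLS(lst, sm.add_constant(X)).fit()

# Standardize each coefficient by the predictor/response dispersion ratio,
# then normalize the absolute values so the weights sum to one.
beta = model.params[1:]
beta_std = beta * X.std(axis=0) / lst.std()
weights = np.abs(beta_std) / np.abs(beta_std).sum()
for name, b, w in zip(names, beta_std, weights):
    print(f"{name}: standardized coefficient = {b:.2f}, weight = {w:.2f}")
```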
NBML is often referred to as the maximum a posteriori decision rule [84], and its code can be easily written in any programming language. NBML assigns the most likely class to a certain observation by estimating the probability density of the training classes [85]. An observation is classified into a certain class when the posterior probability reaches the maximum value, according to the following expression:

k(x) = argmax_k [ p(C_k) ∏_i p(x_i|C_k)^(w_i) ]

In this equation, k(x) is the maximum a posteriori class of the observation, C_k is the class label, p(C_k) is the prior probability for class C_k, p(x_i|C_k) represents the conditional probability distribution of x_i given C_k, and w_i is a particular weight applied to each factor. Usually, the independence assumption is not fulfilled, and the weighting of the features involved in the assignment process can satisfy the required assumptions [86]. Here, each feature or factor is affected by a particular weight w_i, where w_i denotes the weight value of the ith attribute, with values restricted to the range [0, 1]. In this work, the attributes are the contributing factors involved in the SUHI phenomenon, while the C_k classes are the seven temperature categories defined by Wang et al. [36]. These are described in Table 2.

Table 2. Range of LST intervals. T_S represents land surface temperature; T_a is the average land surface temperature; SD is the standard deviation.
Temperature grade: Range
Extreme high temperature (EHT): T_S > T_a + 2SD
High temperature (HT): T_a + SD < T_S ≤ T_a + 2SD
Sub-high temperature (SHT): T_a + SD/2 < T_S ≤ T_a + SD
Medium temperature (MT):

The prior p(C_k) and conditional probabilities p(x_i|C_k) are determined through a training process. Then, Equation (6) becomes its estimated form, in which p̂(C_k) and p̂(x_i|C_k) are estimates of the probability density functions (PDFs). These are derived from the frequency of their respective arguments in the training sample. Here, p̂(C_k) can also be estimated from a preliminary outcome of an SVM process. Equation (7) allows us to weight each environmental factor to generate the final SUHI product. The resulting map is generated according to the architecture shown in Figure 6, which is based on the NB decision rule. This approach categorizes the urban environment according to a specific condition and assigns a specific type of action based on each temperature category.
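The weighted maximum a posteriori rule can be sketched as below with Gaussian conditional densities per factor; the class labels, factor weights, and training data are hypothetical placeholders, not values from this study.

```python
import numpy as np

# Minimal sketch of the weighted maximum a posteriori (NB) rule described above.
rng = np.random.default_rng(1)
classes = ["EHT", "HT", "SHT", "MT"]
weights = np.array([0.21, 0.13, 0.52, 0.14])          # one weight per factor (w_i)

# Training sample: rows = pixels, columns = factors (NDBI, NDVI, NDWI, PUC)
X_train = rng.normal(size=(400, 4))
y_train = rng.integers(0, len(classes), size=400)

priors = np.array([(y_train == k).mean() for k in range(len(classes))])
means = np.array([X_train[y_train == k].mean(axis=0) for k in range(len(classes))])
stds = np.array([X_train[y_train == k].std(axis=0) + 1e-9 for k in range(len(classes))])

def log_gauss(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def classify(x):
    # log posterior per class: log p(C_k) + sum_i w_i * log p(x_i | C_k)
    log_post = np.log(priors) + (weights * log_gauss(x, means, stds)).sum(axis=1)
    return classes[int(np.argmax(log_post))]

print(classify(rng.normal(size=4)))
```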
This analytical procedure allows one to obtain a map that delimits the areas of different thermal intensities. The resulting areas are based on the spatiotemporal trends of the contributing factors, facilitating the management and application of measures to mitigate or adapt to the SUHI phenomenon.

Land Surface Temperature
Sensitivity analysis of the three emissivity models is performed prior to retrieving land surface temperatures from the 37 Landsat images; the results of the assessment for the LST retrieved from L8/TIRS B10 (described in Section 2.3.4) are presented here. Figure 7 shows the differences between the LST derived from the three models evaluated in this study (Fcover, ASTER-GEDv3, and LSE), compared with the in situ LST. In this figure, the minimum, maximum, median, and mean values are shown for (a) January 2019 and (b) September 2019. In both cases, the lowest differences correspond to the LST values from our Fcover emissivity model. The interquartile ranges show a narrower dispersion for the Fcover model compared to the ASTER-GEDv3 and LSE models. This feature is most obvious for the campaign in September 2019. The regression analysis between the LST from ground-based sensors and that of the L8OLI/TIRS band 10 is shown in Figure 8. The dark gray areas represent the 95% confidence boundaries, while the solid lines represent the line of best fit between the computed and in situ LST. The best determination coefficient is given by Fcover, with R2 = 0.78 and SD = 0.73 °C (Figure 8a).
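A minimal sketch of the in situ versus satellite comparison behind Figure 8, assuming 30 paired LST records; the values are synthetic and only illustrate how R2 and the residual SD are obtained.

```python
import numpy as np
from scipy import stats

# Illustrative comparison between ground records and satellite-derived LST.
rng = np.random.default_rng(2)
lst_in_situ = rng.uniform(22, 34, size=30)
lst_satellite = lst_in_situ + rng.normal(scale=0.7, size=30)

fit = stats.linregress(lst_in_situ, lst_satellite)
residuals = lst_satellite - (fit.intercept + fit.slope * lst_in_situ)
print(f"R2 = {fit.rvalue**2:.2f}, residual SD = {residuals.std(ddof=1):.2f} degC")
```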
For the other two cases, the coefficients are R2 = 0.27 and R2 = 0.26 for the ASTER-GEDv3 and LSE models, respectively.

Principal Component Analysis
The PCA was carried out using all contributing factors during the period 2001-2020 (37 images for each variable). Figure 9 shows the contribution to the total variance of each PCA component. We can observe that the first PCA components (PCA1) of T-cap Brightness and T-cap Wetness provide a lower contribution to the total variance, with 54% and 64%, respectively. On the other hand, the rest of the variables show larger patterns of variability captured by the first PCA component alone (above 75%). Regarding the explained variance (%) of the second principal component of the T-cap Brightness, it is observed that it still retains a large amount of variance (12%) when compared to the other factors. Since the goal of the PCA is to reduce the set of variables, in the case of the T-cap Brightness the original dataset cannot be strictly explained by the first principal component, as is the case for the remaining factors. This implies that further analysis should address the second or even third components of the T-cap Brightness. We employ the Jenks Natural Breaks grouping model [87] to identify the main groups and the inherent patterns that minimize the deviation of each class with respect to the mean value of the other groups. This method reduces the variance within classes and maximizes the variance between classes. In this scheme, we obtain four groups for each factor, representing the spatiotemporal trends between 2001 and 2020, as sketched below. Figure 10 shows the resulting maps, where the results for LST (Figure 10a) show the maximum concentration of temperature in densely populated areas, similar to the results of NDBI (Figure 10b). The LST results show gradual variations from low to high temperatures near the perimeter of urban areas. Regions with lower temperatures are mainly located in areas close to water bodies and dense vegetation.
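Jenks Natural Breaks minimizes within-class variance for one-dimensional data; the sketch below approximates it with 1-D k-means (the same criterion solved heuristically) on synthetic trend values, rather than the exact Fisher-Jenks algorithm used by dedicated implementations.

```python
import numpy as np
from sklearn.cluster import KMeans

# Approximate natural-breaks grouping of per-pixel trend values into four classes.
rng = np.random.default_rng(3)
trend = np.concatenate([rng.normal(-0.5, 0.1, 300), rng.normal(0.0, 0.1, 300),
                        rng.normal(0.6, 0.1, 300), rng.normal(1.4, 0.1, 300)])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(trend.reshape(-1, 1))
order = np.argsort(km.cluster_centers_.ravel())          # relabel classes from low to high
relabel = np.empty_like(order)
relabel[order] = np.arange(4)
groups = relabel[km.labels_]

for g in range(4):
    vals = trend[groups == g]
    print(f"group {g}: {vals.min():.2f} to {vals.max():.2f} (n = {vals.size})")
```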
Vegetation areas can be identified in the NDVI results (Figure 10c). The NDVI and T-Cap Greenness maps (Figure 10c,f) have high similarity, while the T-Cap Brightness and Wetness maps (Figure 10e,g) lack spatial correlation with the thermal phenomenon. In the next section, we analyze the spatial correlations in more detail.

Multiple Linear Regression
A large number of outliers were identified and removed in the brightness and humidity factors to avoid introducing "noise" into the MLR analysis. All residuals more than three standard deviations (3σ) from the mean value are considered outliers and thus removed. Since the NDVI and Greenness factors are highly redundant (R2 = 0.99), the latter was excluded. Concerning the Brightness factor, under the special circumstances observed in Section 3.2, the first two principal components only explain 66% of the total variance. Moreover, the low spatial correlation with LST (Figure 10) suggests excluding this variable. In addition, the p-value of the Fisher hypothesis test for the PW factor is larger than 0.1, and this factor was also removed. Finally, the retained explanatory variables are NDBI, NDVI, NDWI, and PUC, and the resulting MLR model is as follows:

LST_trend = 0.29 + 0.48 NDBI_trend + 0.21 NDVI_trend − 0.61 NDWI_trend − 0.51 PUC (8)

The regression analysis coefficients are shown in Table 3. In this table, Pr(>|t|) represents the probability of observing a value of the test statistic larger in magnitude than t. In our model, all p-values are below the significance level (0.05), which implies that NDBI, NDVI, NDWI, and PUC are statistically significant predictors and that the relationships between the independent and dependent variables are statistically significant. The model has a high coefficient of determination (R2 = 0.82), meaning that these variables explain 82% of the variability observed in the LST. Finally, to support the validity of the model, the following key assumptions were verified: autocorrelation, normality, and multicollinearity. The resulting values are given in Table 4.
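A minimal sketch of the diagnostics reported in Table 4 (Durbin-Watson, Kolmogorov-Smirnov on the residuals, and VIF), using an illustrative OLS fit with synthetic data; variable names and values are assumptions, not the study's results.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy import stats

# Illustrative diagnostics for an OLS model of LST trends; data are synthetic.
rng = np.random.default_rng(4)
names = ["NDBI", "NDVI", "NDWI", "PUC"]
X = rng.normal(size=(5000, 4))
y = X @ np.array([0.48, 0.21, -0.61, -0.51]) + 0.29 + rng.normal(scale=0.3, size=5000)

model = sm.OLS(y, sm.add_constant(X)).fit()
resid = model.resid

print("Durbin-Watson:", round(durbin_watson(resid), 2))
ks_stat, ks_p = stats.kstest((resid - resid.mean()) / resid.std(ddof=1), "norm")
print("K-S p-value for residual normality:", round(ks_p, 3))

exog = sm.add_constant(X)
for i, name in enumerate(names, start=1):
    print(name, "VIF =", round(variance_inflation_factor(exog, i), 2))
```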
In Table 4, we can appreciate that the NDBI and NDWI VIF values are greater than 10, thus exceeding the tolerance, which would normally imply that these two variables should be disregarded. However, as stated by Szymanowski and Kryza [88], variables that exceed this tolerance may still be retained if they improve a regression model. Moreover, these two predictors are very important variables in many UHI studies [89][90][91]. In the UHI study by Cruz et al. [92], after performing a multicollinearity test, explanatory variables with VIFs between 50 and 70 were selected for the multiregression analysis and were considered an important component for modeling this phenomenon. These are the reasons for maintaining NDBI and NDWI as explanatory variables in this study. The independence of the residuals was verified using the Durbin-Watson statistic (D-W), with a value of 2.0, which falls within the critical values 1.5 < D-W < 2.5, indicating the absence of autocorrelation. The normality of the residuals was verified by applying a Kolmogorov-Smirnov (K-S) test, which confirms the normal distribution. Figure 11 shows the scatter diagrams, the histograms, and the correlation values for each pair of explanatory variables in the model: histograms of the model variables (i.e., LST, NDBI, NDVI, NDWI, and PUC) are on the main diagonal, scattergrams between the model variables are below the main diagonal, and the corresponding Pearson correlations are above the main diagonal. In this figure, a good correlation of NDBI and NDWI with LST is observed. To verify our model assumptions, four scatterplots of residuals against fitted values are investigated. Figure 12 suggests that the data are randomly distributed around zero, with constant variability; there are no patterns indicating that the assumptions of the model are violated for this dataset. The contribution of each variable (w_i) was obtained through the standardized regression coefficients (ŵ_i), whose absolute values are used as weights. The resulting standardized regression coefficients and the contribution of the factors to the model are presented in Table 5. In this table, we can observe that the main contributing factors are the variables NDWI and NDBI, followed by NDVI and PUC. The derived weights are used in the next section to derive the SUHI model.

SUHI Modeling
The SUHI phenomenon depends on the properties of the land cover which, combined with their energy absorption capacity, produce a thermal increase on the surface and represent a threat to the thermal regime of urban ecosystems. Our modeling approach is based on segmentation through the identification of potential thermal areas. Here we employ the seven temperature zones defined by Wang et al. [36].
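The temperature grading itself reduces to simple thresholds on the LST mean and standard deviation; the sketch below assigns only the grades whose ranges are explicit in Table 2 and uses synthetic LST values.

```python
import numpy as np

# Minimal sketch of the threshold-based temperature grading used to define zones.
rng = np.random.default_rng(6)
lst = rng.normal(30.0, 2.5, size=(400, 400))       # synthetic LST, degrees C
t_a, sd = lst.mean(), lst.std()

grade = np.full(lst.shape, "other", dtype=object)
grade[(lst > t_a + sd / 2) & (lst <= t_a + sd)] = "SHT"
grade[(lst > t_a + sd) & (lst <= t_a + 2 * sd)] = "HT"
grade[lst > t_a + 2 * sd] = "EHT"

for g in ["EHT", "HT", "SHT", "other"]:
    print(g, int((grade == g).sum()), "pixels")
```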
The definition of the training areas is achieved with the LST variable (Figure 10a). The thermal ranges for the training process are those defined in Table 2. These ranges are based on the LST averages and standard deviations. We test both the SVM and NBML algorithms. The application of the NBML method requires estimating the conditional probability functions for each contributing factor. The Gaussian and Logistic probability density functions showed the best results for the respective training frequencies of observation/category. Each conditional probability was weighted according to Table 5. Moreover, the weighting capability of the NBML method allows taking into account the relevance of each factor for deriving the SUHI product; this feature is not possible with the SVM method. The segmentation results for both methods are shown in Figure 13. In both cases, the results were validated with the criteria established in Table 2. Table 6 reports that the Kappa indices for SVM and NBML are approximately 88% and 94%, and the overall accuracies are 88% and 95%, respectively. Figures 13 and 14 show the final SUHI map that categorizes the urban environment according to a specific SUHI state and assigns a specific type of action based on temperature. The proposed actions are: intervene, monitor, strengthen, and preserve. The intervene action is directly related to the SUHI areas exposed to the maximum thermal concentration. These areas need to be intervened in immediately and are considered an 'Extreme-high' class. The monitor action groups the 'High' and 'Sub-high' categories, and points to the SUHI areas that should be kept under observation and intervened in over the medium term. The strengthen action classifies the 'Medium' and 'Sub-medium' classes into SUHI areas that have shown a gradual increase in their thermal trend over time. The preserve action contains the 'Low' and 'Very-low' classes and comprises the SUHI areas that must be preserved.
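A minimal sketch of how the Kappa index and overall accuracy in Table 6 can be computed from validation labels; the seven category labels and the samples below are illustrative placeholders.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, accuracy_score, confusion_matrix

# Illustrative accuracy assessment for the segmented SUHI product.
rng = np.random.default_rng(5)
labels = ["EHT", "HT", "SHT", "MT", "SMT", "LT", "VLT"]   # hypothetical abbreviations
reference = rng.choice(labels, size=500)
predicted = np.where(rng.random(500) < 0.95, reference, rng.choice(labels, size=500))

print("overall accuracy:", round(accuracy_score(reference, predicted), 3))
print("Cohen's kappa   :", round(cohen_kappa_score(reference, predicted), 3))
print(confusion_matrix(reference, predicted, labels=labels))
```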
Sensitivity Analysis
The results of the January 2019 campaign (Figure 7a) suggest that the amplitude of the errors in recovering LST is similar between the models evaluated. The September 2019 measurement records (Figure 7b) indicate that the Fcover model provides the smallest deviation, with a mean error of 1.14 °C. This is very clear compared to the ASTER-GEDv3 and LSE models, which show mean deviations of 3.67 °C and 3.85 °C, respectively. As shown in the Results section, the Fcover model exhibited better overall performance, with a mean error of 1.33 °C. Data reported by Duan et al. [93] and Malakar et al. [94] showed differences between recovered LST and in situ LST for L5TM, L7ETM+, and L8OLI/TIRS of between 0.7 and 1.2 °C. Furthermore, we observed mean differences between 1.1 °C and 1.3 °C. Authors such as Chen and Zhang [14] and Liu and Li [95] have analyzed SUHI phenomena with similar differences. In this work, the Fcover model provided the smallest errors in LST recovery among all the tested schemes, and it is considered the most suitable for this kind of study.

Statistical Analyses
The PCA was applied to derive the time trend of each variable and to analyze the LST variation. Then, the main PCA component was employed in the MLR. We achieve a coefficient of determination of approximately R2 = 0.82. These results are in agreement with recent studies that have used regression models to quantify the impact of contributing factors on LST [16,96]. Moreover, the combination of these factors defines how the different types of land cover absorb heat; these absorptions manifest themselves in a corresponding increase in emissivity and surface temperature. Our findings confirm the results of earlier studies, such as those of Rasul et al. [72], who modeled the spatiotemporal trend of temperature data with the MLR method and provided robust results in determining SUHI areas. Regarding the conditions for ensuring the validity of our proposed approach, several considerations must be addressed. First, the multicollinearity of the predictor variables and their effect on the model need to be validated with the VIF. Our results show that two of the VIF parameters exceeded the value of 10, which would exclude two of the explanatory variables from the prediction model. However, NDBI and NDWI are the factors with the highest contribution, while NDVI and PUC have lower VIF values and contribute to a lesser extent. Strong correlations, 0.89 and −0.89, were found between LST and NDBI and NDWI, respectively. A similar correlation was found between NDVI and NDWI. It is important to note that removing highly correlated variables can benefit the overall result and simplify the approach. However, retaining highly contributing predictor variables such as NDBI and NDWI may indeed improve the prediction products, as noted in [88]. Although some collinearity was present, the Pearson correlation indices and the fulfillment of the independence and normality of the residuals denote a very reliable model. Regarding the direct relationships of LST with the different physical variables and contributing factors, a strong correlation was observed with NDBI, which represents building construction.
This justifies why impervious areas have a high heat retention capacity and a low water storage capacity, in turn reducing humidity. Previous studies have demonstrated strong correlations between LST and NDBI [97,98]. In contrast, a strong negative correlation was found between LST and NDVI/NDWI; note that temperature decreases follow increases in vegetation and humidity. The higher the vegetation cover, the lower the surface temperature becomes. The reason for this may be strongly related to the soil moisture content in vegetated areas, which alters the energy balance and modulates the effects of solar radiation. These results are in line with those obtained by Ibrahim and Rasul Faqe [99], who reported a strong negative correlation between these variables. Our results show urban planners that identifying the factors and their contribution to the SUHI phenomenon supports the definition of adaptation measures for thermal change in cities, allowing these measures to be aligned with other territorial planning priorities. A moderate correlation was found between LST and PUC, indicating that LST increases moderately with proximity to the urban center. This correlation may be related to the urban density distribution and road infrastructure, which, compared to the distance variable, are responsible for generating a complex structure that is not well represented by linear models. Bonafoni and Keeratikasikorn [100] also implemented a ring-based method and analyzed LST as a function of building density and proximity to urban centers. This issue is to be addressed in future research.

The SUHI Model
To reveal the multitemporal intensity of the SUHI phenomenon, two Machine Learning techniques were tested, the SVM and the NBML. Both algorithms performed satisfactorily, with Kappa indices of 89% and 93%, respectively. Better performance was observed for certain categories with the NBML algorithm (Figure 13b). Although both procedures are able to detect high-density urban areas affected by extremely high temperatures, NBML allows coupling criteria that assign individual weights to each class, increasing the quality of the results. Conventional NBML classifiers consider the model to be applicable when a Gaussian probability density function is present in the dataset [84]. Molina et al. [101] showed that the combination of the best-fit distribution model (not necessarily Gaussian) and the weights of each variable led to satisfactory results. The SUHI phenomenon is a complex system that occurs as a result of the interaction of various factors [102]. This interactivity, produced by anthropic effects, generates thermal imbalances requiring intervention, monitoring, strengthening, and preservation as a fundamental expression of the causes and effects in urban/rural ecosystems. Our results show that the highest temperatures are concentrated in the central area of the city and gradually decrease toward the periphery. The characterization of the space through the four proposed classes of actions (intervene, monitor, strengthen, and preserve) makes it possible to regulate the conditions that could mitigate the SUHI effect. The areas designated as intervene correspond to the center of the city and tend to have a higher population density and older buildings. It is recommended to replace black roofs with less thermally absorbent roofs that reduce solar energy absorption and increase energy savings, as suggested by Alshayeb and Chang [103].
Since these areas do not have appropriate physical space to create green areas, an alternative might be the use of road dividers to plant trees with large foliage and roots that do not weaken the existing infrastructure. An interesting measure that allows the reduction of anthropogenic heat is to restrict the transit of private vehicles and limit access to specific areas; private vehicle access can be substituted with public transportation or cycling. These measures have already been implemented in many locations. The areas identified as monitor should implement small tree-lined sites and natural corridors to refresh the space. Rainwater irrigation channeled through sewerage systems can be used as a contribution to the restoration of urban wetlands. The areas indicated as strengthen show less thermal intensity than the above areas and are associated with urban growth. Within this policy, the morphology of these areas should integrate green spaces that allow increased water infiltration and cooling [104]. In general, the use of highly reflective building materials is recommended to reduce the amount of solar radiation absorbed by the surface, for example, the cool pavements suggested by the U.S. Environmental Protection Agency [105]. Finally, areas marked as preserve have the highest vegetation cover and play an important role in the urban ecosystem. They reduce carbon dioxide emissions, becoming spaces that reduce the radiant load produced by various economic activities, and provide thermal regulation. In addition, they have great potential for ecotourism. Further development of this research can be undertaken by applying simulation techniques with Machine Learning algorithms that allow the integration of weights for the variables involved in the predictive model, and that allow the characterization of future thermal scenarios associated with the spatiotemporal trends of the explanatory variables. The interactions produced by biophysical factors and the geometric changes that are transforming cities make the relationships between objects and phenomena increasingly complex. In this sense, it would be very pertinent to further explore the classification of local climatic zones in tropical cities in countries such as Colombia, which are highly vulnerable to climate change.

Conclusions
The results of this work demonstrate that emissivity data have a large impact on the retrieval of LST. Here, LST is obtained from L8OLI/TIRS band 10 and LSE from Sentinel-2. Both sources are more accurate and homogeneous than traditional ground-based methods. Our innovative approach proposes quantifying the SUHI phenomenon from a set of contributing factors. We first employ the PCA to retrieve the main spatiotemporal variations in the initial data. Then, MLR is applied to integrate the dependencies and to analyze their impacts on the SUHI. According to our regression model, the most influential factors in the SUHI are NDWI with a contribution of 52%, NDBI with 21%, NDVI with 13%, and PUC with 14%. Finally, the integration of these predictors within the SVM and NBML approaches confirms the existence of coupling mechanisms among the variables. The satisfactory results of the NBML confirm the suitability of the proposed approach, with an overall accuracy of 95%. We expect to improve the results of the model with future upgrades associated with the structural complexity of the landscapes. The spatial variation of the SUHI points to an enhanced phenomenon towards areas of high urban density.
Our research demonstrates the suitability of Machine Learning Algorithms for mapping SUHI intensities, providing spatially explicit descriptions of urban heat distribution. The derived products are crucial for defining sustainable urban planning policies, as well as for adequate responses to thermal risks. These actions will in turn make it possible to define mitigation and adaptation strategies.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
UvA-DARE (Digital Academic Repository) Entanglement Wedge Reconstruction via Universal Recovery Channels In the context of quantum theories of spacetime, one overarching question is how quantum information in the bulk spacetime is encoded holographically in boundary degrees of freedom. It is particularly interesting to understand the correspondence between bulk subregions and boundary subregions in order to address the emergence of locality in the bulk quantum spacetime. For the AdS/CFT correspondence, it is known that this bulk information is encoded redundantly on the boundary in the form of an error-correcting code. Having access only to a subregion of the boundary is as if part of the holographic code has been damaged by noise and rendered inaccessible. In quantum-information science, the problem of recovering information from a damaged code is addressed by the theory of universal recovery channels. We apply and extend this theory to address the problem of relating bulk and boundary subregions in AdS/CFT, focusing on a conjecture known as entanglement wedge reconstruction. Existing work relies on the exact equivalence between bulk and boundary relative entropies, but these are only approximately equal in bulk effective field theory, and in similar situations it is known that predictions from exact entropic equalities can be qualitatively incorrect. We show that the framework of universal recovery channels provides a robust demonstration of the entanglement wedge reconstruction conjecture as well as new physical insights. Most notably, we find that a bulk operator acting in a given boundary region ’ s entanglement wedge can be expressed as the response of the boundary region ’ s modular Hamiltonian to a perturbation of the bulk state in the direction of the bulk operator. This formula can be interpreted as a noncommutative version of Bayes ’ s rule that attempts to undo the noise induced by restricting to only a portion of the boundary. To reach these conclusions, we extend the theory of universal recovery channels to finite-dimensional operator algebras and demonstrate that recovery channels approximately preserve the multiplicative structure of the operator algebra. I. INTRODUCTION The AdS/CFT correspondence is a duality or equivalence between a gravitational theory in (d þ 1)-dimensional asymptotically AdS spacetime and a conformal field theory with one less spatial dimension [1][2][3][4][5].Roughly speaking, the CFT lives on the boundary of the bulk AdS spacetime, and the quantum state of the boundary CFT is dual to the state of the quantum-gravity theory in the bulk.In the limit of a large number N of degrees of freedom (d.o.f.) and strong coupling in the CFT, the gravity side of the correspondence can be approximately described by classical Einstein gravity.Hence, AdS/CFT provides a theoretical laboratory to understand how classical spacetime emerges from microscopic d.o.f. and how quantum effects alter the physics, at least in a toy model of quantum gravity.Only certain CFT states correspond to classical geometries in the bulk, and this important class of states occupies much of our attention.For such states, there is an emergent spatial direction in the bulk theory, and understanding how locality in the bulk geometry arises in the boundary theory is a longstanding problem.One way to approach the problem is to ask which regions of the bulk are completely described by a given region of the boundary. 
The question above has been phrased in various forms over the years [6,7], and a particularly natural approach is to think operationally.Physical observables are described by Hermitian operators, so the goal becomes the identification of all local bulk operators that can be expressed in terms of boundary operators with support only on the boundary subregion.Such operators correspond to those observables in the bulk subregion whose physics can be completely reproduced using boundary observables in a boundary subregion.Significant progress has been made on this problem, starting with the so-called HKLL prescription [2], so named in honor of its authors Hamilton, Kabat, Lifschytz, and Lowe.The method works in certain cases but falls short in general; there are bulk operators that should be expressible on a particular boundary subregion that are inaccessible to the HKLL technique.An example is the operator corresponding to the area of the minimal surface in the bulk that calculates the entropy of the boundary subregion, the so-called Ryu-Takayanagi surface [8,9].For a single hemispherical region in empty AdS, HKLL and related techniques suffice to reconstruct bulk operators in terms of a boundary spherical region, but almost any more-complex scenario falls outside of the purview of HKLL. The problem of finding the dual to a boundary subregion is at the heart of the subject of bulk reconstruction: Given an operator in the bulk, can one find a representation of this operator acting on a subregion of the boundary?Recently, it has been proposed that for a given subregion A of the conformal boundary, any low-energy bulk operator acting on a spacetime region called the entanglement wedge of A can be reconstructed using only information in A [6,10,11].Roughly speaking, the entanglement wedge of a boundary subregion is the bulk spacetime region between the boundary subregion and its corresponding bulk minimal surface, the aforementioned Ryu-Takayanagi (RT) surface.More precisely, for time-symmetric situations, the entanglement wedge is the set of bulk spacetime points that, due to the bulk light-cone restriction, can be influenced only by the bulk spatial region between the RT surface and the boundary subregion (see below for a fully general definition).The entanglement wedge reconstruction conjecture was strengthened in Ref. [12] and established in tensor network toy models of holography [12][13][14].Very recently, it was proved in Refs.[15,16] under the condition that the bulk and boundary relative entropies are exactly equal. 
At large but finite N, within the framework of bulk effective field theory, one expects only approximate equality of the bulk and boundary relative entropies.When similar situations were studied in the quantum-information theory literature, it was found that algebraic consequences of exact entropic equalities often do not provide qualitatively correct predictions in the approximate case.An important and, in fact, closely related example is the exact saturation of strong subadditivity of the von Neumann entropy, which is known to imply that the underlying state is a quantum Markov chain [17].Such states have an orderly pattern of correlation in which different subsystems are independent of each other given an appropriate buffer between them.The associated algebraic structure slightly generalizes operator algebra quantum-error correction and is precisely the structure relevant in AdS/CFT [15].Near saturation of strong subadditivity, however, fails to imply that the state is nearly a quantum Markov chain state for systems of large Hilbert space dimension [18,19].As we discuss above, a large number of local d.o.f. is essential in AdS/CFT, so the failure of the approximate version of the implication in large Hilbert spaces represents a potentially serious issue for AdS/CFT.Indeed, the most direct generalization of the reconstruction theorem from Ref. [15] to approximate relative entropy equalities does not lead to the goal of full approximate quantum-error correction but rather to a significantly weaker structure known as a zero-bit code [20,21].Ignoring this distinction can lead to qualitatively wrong conclusions for code spaces whose dimension grows too fast in the large-N limit (e.g., code spaces containing a large number of black-hole microstates) [22].For mixed states in such code spaces, the entanglement wedge of a boundary region A may be strictly smaller than the bulk complement of the entanglement wedge of the complementary boundary region Ā even in the limit N → ∞.A naive application of Ref. [15] would incorrectly suggest that any operator in the larger latter region can be reconstructed in region A; in fact, only operators in the entanglement wedge of A can be reconstructed in a stateindependent way. In this article, we demonstrate the entanglement wedge reconstruction conjecture without assuming exact equality of bulk and boundary relative entropies.In doing so, we also provide an explicit formula for entanglement wedge reconstruction.Our analysis builds on recent results in quantum-information theory giving sufficient conditions to approximately reverse the effects of noise.We show that having access only to a subregion of the boundary is equivalent to throwing away or damaging a known part of the error-correcting code describing the bulk spacetime, and it is this correspondence which opens the door to the methods of universal recovery channels.A general quantum map known as a quantum channel N (i.e., a completely positive, trace-preserving map) is said to be reversible if there exists another quantum channel R known as the recovery channel, such that the composition R ∘ N acts as the identity on all states in the domain of N (i.e., R ∘ N ½ρ ¼ ρ).For example, all unitary operations are reversible, with the adjoint of the unitary acting as a recovery channel (since U † U ¼ 1), and quantum-errorcorrecting codes are designed around noise processes such that the noise can be reversed on the code subspace. 
When a channel N is reversible, it has been known for some time how to construct a recovery channel R [23].Exact reversibility will almost never be satisfied in practice, however, and for many applications one may require only that a channel be approximately reversible.For example, in approximate quantum-error correction, one requires only that the recovered state R ∘ N[ρ] be close to the input ρ up to some small tolerance.For a channel that is not exactly reversible, it is natural to ask whether or not there exists a recovery channel that works approximately in the above sense [24].This question has spurred a flurry of research and was answered only recently in Ref. [25], wherein it was shown that for any quantum channel N, there indeed exists an approximate recovery channel R such that R ∘ N[ρ] ≈ ρ for all ρ, with the quality of the approximation controlled by the behavior of the relative entropy under the action of the channel N.We review these recent results in the next section.In the context of AdS/CFT, there is a map from the bulk to the boundary, and a noisy quantum channel arises from tracing over a subregion of the boundary.Our ultimate goal is to recover from that noise process (i.e., recover from the loss of part of the boundary). The paper is organized as follows.We begin with a review of recent results on universal recovery channels, as well as a few basics of AdS/CFT and recent results in holography.We then apply the theory of universal recovery channels to the problem of entanglement wedge reconstruction in AdS/CFT, and we arrive at an explicit expression for a bulk operator recovered on the boundary.After discussing its salient structural properties, we sketch how our formula applies to AdS3-Rindler reconstruction, and we conclude with possible avenues for future research.In the Appendixes, we prove our entanglement wedge reconstruction result for arbitrary finite-dimensional algebras of observables, thereby obviating simplifying assumptions used in the main body of the paper.To do this, we extend the universal recovery results of Ref. [25] to finite-dimensional von Neumann algebras.Importantly, we also prove that approximate recovery channels automatically approximately preserve the multiplicative structure of the original bulk algebra, which ensures that correlation functions of bulk operators are reproduced by correlation functions of the boundary reconstructions of the individual operators, even if each operator is reconstructed using a different entanglement wedge.

II. BACKGROUND AND PRELIMINARIES
In this section, we review the theory of recovery channels and basics of AdS/CFT, as well as recent developments in each that we later use to establish our results.

A. Universal recovery channels
To develop intuition for quantum recovery channels, it is helpful to first set aside quantum mechanics and consider good old probability theory.Our goal is to reverse the effects of applying a stochastic map.In particular, given a stochastic map p(y|x) and an observation of y, try to infer x.One way to do this is to introduce a new stochastic map p(x|y) via Bayes's rule, which has the property that Σ_y p(x′|y) p(y|x) = δ_xx′ if the noise can be reversed.In situations where the noise cannot be perfectly reversed, Bayes's rule provides an excellent (and in many ways optimal) estimate of the input.Since it will prove useful in solving the quantum version of the problem, it is worth noting that we can trivially rewrite Eq.
(1) as a logarithmic directional derivative.In other words, the recovery channel p(x|y) can be expressed as the logarithmic directional derivative of the matrix p(y)p(x)^{−1} in the direction of the channel p(y|x). Let us now consider a noncommutative generalization of Bayes's rule.We want to find a quantum channel (i.e., a completely positive, trace-preserving map) that reverses the action of some input channel.Instead of a classical stochastic map, our goal is to reverse the action of a noisy quantum channel.To make the problem precise, consider two Hilbert spaces H_A and H_B.Let S(H_A) and S(H_B) represent the sets of density operators on systems A and B, respectively.A quantum channel N is then a map from S(H_A) to S(H_B).A simple example in which reversible channels play a starring role is quantum-error correction.In this case, N is the composition of encoding some d.o.f. in a code space H_A into a potentially larger Hilbert space, followed by a noise channel (wherein we lose certain d.o.f. or otherwise corrupt the encoded state).R is then a decoding map, and Eq. (3) corresponds to perfect quantum-error correction.One way to quantify the noisiness of a quantum channel is by comparing the distinguishability of input states and output states using the relative entropy.Under the action of a quantum channel N, the relative entropy between two states can never increase.This fact is known as the monotonicity of relative entropy or the data processing inequality:

D(ρ‖σ) ≥ D(N[ρ]‖N[σ]), (4)

where D(ρ‖σ) ≔ Tr ρ log ρ − Tr ρ log σ is the relative entropy between ρ and σ.If there were a recovery channel R such that (R ∘ N)(ρ) = ρ and (R ∘ N)(σ) = σ, monotonicity applied a second time to R would imply saturation of Eq. (4).In fact, the converse is also true: saturation implies that an exact recovery channel exists, and it can be written as

P_{σ,N}[X] = σ^{1/2} N*[ N[σ]^{−1/2} X N[σ]^{−1/2} ] σ^{1/2}, (5)

where N* denotes the adjoint of the channel N.Because P_{σ,N} does not depend on ρ, it can be used for all ρ saturating Eq. (4) [provided that σ is chosen to be full rank to ensure that the relative entropies in Eq. (4) are finite for all states ρ].P is an exact recovery channel often referred to as the Petz map.Since the condition is necessary and sufficient, failure to saturate Eq. (4) means that an exact recovery map R cannot exist.But what if N almost saturates Eq. (4)?In this case, does there exist an approximate recovery map, which would behave well in cases of near saturation, as does Bayes's rule in the stochastic case?Indeed, an approximate version of the recovery channel was developed by Junge et al. [25], who show that, for any ρ, σ ∈ S(H_A) and any quantum channel N, there exists a recovery channel R_{σ,N} such that

D(ρ‖σ) − D(N[ρ]‖N[σ]) ≥ −2 log F(ρ, R_{σ,N} ∘ N[ρ]), (6)

where F(ρ, σ) ≔ ‖√ρ √σ‖_1 is the fidelity.The inequality says that the fidelity between the recovered state and the original is controlled by the extent to which the channel saturates monotonicity of relative entropy [Eq. (4)], with perfect fidelity in the case of saturation.Importantly, there is no dependence on the dimension of the Hilbert space.Moreover, Junge et al. [25] gave a concrete expression for the channel R_{σ,N}, called the twirled Petz map [Eq. (7)], in which P_{σ,N} is the Petz map of Eq. (5) and β_0 is the probability density β_0(t) ≔ (π/2)[cosh(πt) + 1]^{−1}.The twirled Petz map is an example of a universal recovery channel, since Eq.
(6) holds for any channel N.When σ is full rank, both the Petz map and the twirled Petz map are trace-preserving completely positive maps (i.e., quantum channels).For a completely positive map N, the Choi operator is defined by Φ_N ≔ (id ⊗ N)[Φ], where |Φ⟩ = Σ_j |j⟩|j⟩ is an un-normalized maximally entangled state.By working with the Choi operator, we can rewrite the recovery channel R_{σ,N} in a form similar to Eq. (2).In the case of the recovery channel R_{σ,N}, the Choi operator can be expressed in terms of the complex conjugate of N[σ] and of σ^{−1}, the inverse of σ on its support.A proof can be found in Appendix B. This is the appropriate generalization of Bayes's rule to the noncommutative case.When the channel is reversible, the twirled Petz map R_{σ,N} reduces to the Petz map P_{σ,N}.In the classical case, both the Petz map and the twirled Petz map reduce to Bayes's rule.

B. AdS/CFT background
The AdS/CFT correspondence states that quantum gravity in d + 1 spatial dimensions is dual to a CFT in d spatial dimensions.There are two main dictionaries that describe the mapping between bulk and boundary quantities in the AdS/CFT correspondence: the differentiate dictionary [27,28] and the extrapolate dictionary [29].The differentiate dictionary relies on the equivalence between the partition functions of the bulk and boundary theories (Z_CFT = Z_grav).On the other hand, the extrapolate dictionary relies on the fact that local CFT operators living in the boundary theory can be expressed as the limit of appropriately weighted bulk fields as they are taken to the conformal boundary of the AdS spacetime.In particular,

O(x) = lim_{z→0} z^{−Δ} ϕ(x, z),

where O is a boundary field, Δ is the scaling dimension of O, and ϕ is a bulk field.With this equivalence, boundary correlation functions can be expressed as limits of bulk correlation functions.The HKLL procedure [2] uses the extrapolate dictionary and the bulk equations of motion to write a local bulk field operator ϕ(x, z) as a smearing of boundary operators O acting on the "strip" of boundary points that are spacelike separated from (x, z), as shown in Fig. 1.In particular,

ϕ(x, z) = ∫ dx′ K(x, z; x′) O(x′),

where the smearing function K can be computed using a mode-sum expansion.The choice of smearing function K is not unique, and there are different choices of K that will reproduce the same bulk field ϕ.For instance, bulk field operators ϕ(x, z) can be written using a smearing function that is supported only on a subset of the boundary, namely, the domain of dependence of a boundary subregion whose causal wedge contains the bulk point, as shown in Fig. 1.Using the HKLL procedure, one can find representations of a given bulk operator on different regions in the boundary.As observed in Ref. [30], this redundancy is a reflection of the quantum-error-correcting properties of AdS/CFT.If we fix a boundary region A, the entanglement wedge of A is defined to be the bulk domain of dependence of any achronal bulk surface bounded by A and the covariant RT minimal surface associated to A. For readers unfamiliar with the entanglement wedge, it is easily visualized on a fixed time slice: The entanglement wedge is the region in the bulk bounded by A and the RT surface of A. The full spacetime picture of the entanglement wedge is then the bulk domain of dependence of the region on a fixed time slice.The entanglement wedge reconstruction proposal asserts that any local bulk operator acting on the entanglement wedge of A has a representation with support only on A.
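Returning to the recovery channels of Section II A, the sketch below implements the Petz map of Eq. (5) and the Choi operator for a single-qubit amplitude-damping channel, and checks the monotonicity inequality (4); the channel, states, and parameters are illustrative, and the twirled map of Eq. (7) is not implemented.

```python
import numpy as np

# Illustrative one-qubit example: channel N, reference state sigma, Petz map
# P_{sigma,N}, relative entropy monotonicity, and the Choi operator of N.

def apply_channel(kraus, rho):
    return sum(k @ rho @ k.conj().T for k in kraus)

def adjoint_channel(kraus, x):
    return sum(k.conj().T @ x @ k for k in kraus)

def herm_fn(a, f):
    """Apply a scalar function to a Hermitian matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * f(w)) @ v.conj().T

def rel_entropy(rho, sigma):
    log = lambda m: herm_fn(m, lambda w: np.log(np.clip(w, 1e-12, None)))
    return float(np.real(np.trace(rho @ (log(rho) - log(sigma)))))

gamma = 0.3
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]]),
         np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]

sigma = np.array([[0.6, 0.1], [0.1, 0.4]])      # full-rank reference state
rho = np.array([[0.3, 0.2], [0.2, 0.7]])        # state to be recovered

n_sigma = apply_channel(kraus, sigma)
s_half = herm_fn(sigma, np.sqrt)
ns_inv_half = herm_fn(n_sigma, lambda w: 1.0 / np.sqrt(w))

def petz(x):
    # P_{sigma,N}[x] = sigma^(1/2) N*[ N[sigma]^(-1/2) x N[sigma]^(-1/2) ] sigma^(1/2)
    return s_half @ adjoint_channel(kraus, ns_inv_half @ x @ ns_inv_half) @ s_half

drop = rel_entropy(rho, sigma) - rel_entropy(apply_channel(kraus, rho), n_sigma)
recovered = petz(apply_channel(kraus, rho))
trace_dist = 0.5 * np.abs(np.linalg.eigvalsh(recovered - rho)).sum()
print("relative entropy drop (>= 0):", round(drop, 4))
print("trace distance after Petz recovery:", round(trace_dist, 4))

# Choi operator Phi_N = sum_{jk} |j><k| (x) N[|j><k|], i.e., (id (x) N)[|Phi><Phi|].
def choi(kraus, dim):
    phi = np.zeros((dim * dim, dim * dim), dtype=complex)
    for j in range(dim):
        for k in range(dim):
            e_jk = np.zeros((dim, dim), dtype=complex)
            e_jk[j, k] = 1.0
            phi += np.kron(e_jk, apply_channel(kraus, e_jk))
    return phi

phi_n = choi(kraus, 2)
print("Choi operator is PSD:", bool(np.all(np.linalg.eigvalsh(phi_n) > -1e-12)))
# Tracing out the channel output returns the identity for a trace-preserving map.
print(np.round(phi_n.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3).real, 6))
```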
The key input from holography is a result by Jafferis, Lewkowycz, Maldacena, and Suh (JLMS for short) in which they show that bulk and boundary relative entropies are approximately equal [16].The relative entropy between two "nice" states in the bulk and their associated states on the boundary are equal to leading order in 1/N.To be precise, let ρ and σ be two bulk states with the same semiclassical geometry, ρ_a and σ_a be their reduced density matrices on the entanglement wedge of A, ρ̃ and σ̃ be the corresponding boundary states, and ρ̃_A and σ̃_A be their reduced density matrices on region A. Jafferis et al. [16] then showed that

D(ρ_a‖σ_a) = D(ρ̃_A‖σ̃_A). (9)

At higher orders in 1/N, the equivalence in Eq. (9) is no longer well defined, since the choice of minimal surface used to define the entanglement wedge becomes state dependent.Crucially, AdS/CFT provides only a global map from bulk to boundary states (ρ ↦ ρ̃).Our approach to entanglement wedge reconstruction will be to construct a suitable local quantum channel mapping states in the entanglement wedge to states in the boundary region (ρ_a ↦ ρ̃_A).Only then can we interpret Eq. (9) as an approximate saturation of the monotonicity of the relative entropy for a quantum channel, so that Eq. (6) guarantees the existence of an approximate recovery map.It is the adjoint of this recovery map that we ultimately use for reconstruction.

III. ENTANGLEMENT WEDGE RECONSTRUCTION
We make the following assumptions: (1) The bulk Hilbert space contains a code subspace that is mapped via a quantum channel into the CFT Hilbert space.(2) The JLMS relative entropy condition holds to leading order in 1/N.In fact, the first condition is slightly relaxed in the most general version of our result (Theorem 4).Just for the purposes of illustration, we also pretend that the bulk and boundary Hilbert spaces admit simple tensor factorizations, since this is a familiar convention in the community.However, the reader is cautioned that this convention is not actually correct: even for free theories, the Hilbert space does not factorize, and the problem is only compounded for gauge theories.The proper approach is to work at the level of algebras of observables, without ever making a tensor product assumption.As such, we adopt the more general algebraic approach in the Appendix, proving all claims made in this paper rigorously at the level of finite-dimensional von Neumann algebras.
With these assumptions, let us formalize the problem. Let H_code be a code space with density operators S(H_code), and let H_CFT be the Hilbert space of a CFT with density operators S(H_CFT). There are many valid definitions of what it means to be a code space in AdS/CFT, but our results hold for any suitable definition. For instance, one can define the code space to be the set of all states formed by acting with a finite number of low-energy local bulk operators on the vacuum [15,30]. In this context, low energy means that the action of the operator does not change the bulk geometry appreciably. The AdS/CFT correspondence relates states in S(H_code) to states in S(H_CFT). We model this relationship by an isometry J: H_code → H_CFT embedding the code space into the CFT Hilbert space. Note that H_code can be identified with its image under the isometry J, resulting in the code subspace of previous works [15,31]. In general, one could consider an arbitrary quantum channel mapping states on the code space to states on the CFT Hilbert space. We prove our general result in Theorem 4 without requiring the mapping between code and CFT states to be an isometry.

We now partition the CFT into two regions A and Ā, and we partition the bulk into a and ā, where a is supported only on the entanglement wedge of A, as shown in Fig. 2. As discussed earlier, we assume that the JLMS condition holds for these regions. The problem of entanglement wedge reconstruction then amounts to constructing a boundary observable O_A supported only on A such that, for any bulk operator ϕ_a supported in the entanglement wedge a of A, the expectation values of ϕ_a in the bulk and of O_A on the boundary agree [Eq. (10)] for all ρ ∈ S(H_code) and for some small δ > 0. Using our new notation, Eq. (9) says that, for all ρ, σ, the bulk and boundary relative entropies agree up to ε [Eq. (11)], where ε is controlled by 1/N.

Note that in this paper we use the approximate equality of relative entropies in a and A in order to prove that operators in region a can be reconstructed in A. In contrast, the starting assumption in Ref. [15] was an exact equality between relative entropies in ā and Ā. As we discuss in the Introduction, an approximate version of this latter condition implies only that the zero bits of region a are encoded in region A, which is a significantly weaker condition than full entanglement wedge reconstruction [20-22].

Since the relative entropies approximately agree, one might expect that we can find a universal recovery channel that would undo the effect of the partial trace over Ā. However, there is an obstacle: a priori, (JρJ†)_A depends on the state ρ defined on the whole bulk, not just on the reduced state ρ_a on the entanglement wedge. In this form, the theory of recovery channels is not applicable. To overcome this challenge, we first restrict the recovery problem to special code states of the form ρ = ρ_a ⊗ σ_ā, where σ_ā is some fixed fiducial state. We thus obtain a quantum channel ρ_a ↦ (JρJ†)_A mapping states on the entanglement wedge to states on the boundary region. We then verify that the recovery map R obtained for this channel works in fact for all code states ρ, only increasing the error by a small amount, since Eq. (11) also implies that the CFT states corresponding to any ρ and its factorized version ρ_a ⊗ σ_ā are approximately indistinguishable on the boundary region A. The adjoint R* of the recovery channel then maps bulk operators ϕ_a supported in the entanglement wedge to boundary operators O_A supported on A and satisfying Eq. (10), thereby achieving entanglement wedge reconstruction.
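The strategy just described (encode with an isometry J, trace out Ā, recover with a Petz-type map) can be mimicked numerically on small random instances. The sketch below is purely illustrative and is not taken from the paper: the dimensions, the factorized choice of J, and the helper names channel_N, adjoint_N and petz_recovery are all hypothetical, and it uses the plain (untwirled) Petz map rather than the twirled version for simplicity.

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
d_a, d_abar, d_A, d_Abar = 2, 2, 4, 4      # bulk factors a, a-bar; boundary factors A, A-bar

def haar_isometry(d_out, d_in):
    """First d_in columns of a Haar-random unitary on C^{d_out}."""
    X = rng.normal(size=(d_out, d_out)) + 1j * rng.normal(size=(d_out, d_out))
    return np.linalg.qr(X)[0][:, :d_in]

# Factorized encoding J = V_A ⊗ V_Abar, chosen so that the toy channel below is exactly reversible.
V_A, V_Abar = haar_isometry(d_A, d_a), haar_isometry(d_Abar, d_abar)
J = np.kron(V_A, V_Abar)

def partial_trace_Abar(rho):
    """Trace out the A-bar tensor factor of an operator on H_A ⊗ H_Abar."""
    return np.trace(rho.reshape(d_A, d_Abar, d_A, d_Abar), axis1=1, axis2=3)

sigma_abar = np.eye(d_abar) / d_abar       # fixed fiducial state on a-bar
sigma_a = np.eye(d_a) / d_a                # reference state defining the Petz map

def channel_N(X_a):
    """Local channel N: X_a -> (J (X_a ⊗ sigma_abar) J†)_A, extended linearly to all operators."""
    return partial_trace_Abar(J @ np.kron(X_a, sigma_abar) @ J.conj().T)

def adjoint_N(Y):
    """Adjoint channel N*, computed from Tr[N*(Y) X] = Tr[Y N(X)] on a matrix-unit basis of H_a."""
    out = np.zeros((d_a, d_a), dtype=complex)
    for i in range(d_a):
        for j in range(d_a):
            E = np.zeros((d_a, d_a), dtype=complex); E[i, j] = 1.0
            out[j, i] = np.trace(Y @ channel_N(E))
    return out

def inv_sqrt_on_support(M, tol=1e-10):
    """M^{-1/2} restricted to the support of M (inverse taken only over nonzero eigenvalues)."""
    w, U = np.linalg.eigh(M)
    w = np.where(w > tol, 1.0 / np.sqrt(np.clip(w, tol, None)), 0.0)
    return U @ np.diag(w) @ U.conj().T

def petz_recovery(Y_A):
    """Petz map P_{sigma_a,N}(Y) = sigma_a^{1/2} N*( N[sigma_a]^{-1/2} Y N[sigma_a]^{-1/2} ) sigma_a^{1/2}."""
    Ns_inv = inv_sqrt_on_support(channel_N(sigma_a))
    return sqrtm(sigma_a) @ adjoint_N(Ns_inv @ Y_A @ Ns_inv) @ sqrtm(sigma_a)

M = rng.normal(size=(d_a, d_a)) + 1j * rng.normal(size=(d_a, d_a))
rho_a = M @ M.conj().T; rho_a /= np.trace(rho_a)
print(np.linalg.norm(petz_recovery(channel_N(rho_a)) - rho_a))   # ~0 for this factorized encoding

With a factorized J the recovery is exact; for a generic (non-factorized) isometry the recovery is only approximate, and the size of the residual error is the quantity that Eq. (6) relates to the difference of relative entropies in Eq. (11).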
In more mathematical detail, we first define the local channel N[ρ_a] := (J(ρ_a ⊗ σ_ā)J†)_A for all states ρ_a ∈ S(H_a) [Eq. (12)], where σ_ā is some fixed full-rank state. If we also choose a full-rank σ_a ∈ S(H_a), then we can use Eq. (7) to obtain a recovery channel R = R_{σ_a,N} whose recovery fidelity for each ρ_a ∈ S(H_a) is controlled by the difference of the relative entropies D(ρ_a‖σ_a) and D(N[ρ_a]‖N[σ_a]). However, by Eq. (11), this difference is at most of order ε, and therefore we conclude that the recovery channel R works with high fidelity (cf. Ref. [25], Corollary 6.1). By one of the Fuchs-van de Graaf inequalities [32], this fidelity bound can be converted into a bound on the trace distance between (R ∘ N)[ρ_a] and ρ_a for all ρ_a ∈ S(H_a) [Eq. (13)].

We now show that the channel R recovers the reduced state on the entanglement wedge for arbitrary code states ρ, not just for those of the form ρ = ρ_a ⊗ σ_ā. The key step is to bound the trace distance between the boundary reductions of ρ and of ρ_a ⊗ σ_ā, where the first inequality is Pinsker's inequality, and the second inequality is Eq. (11), with one state set to ρ and the other set to ρ_a ⊗ σ_ā. Therefore we obtain, for all ρ ∈ S(H_code), a bound of the same form as Eq. (13) with ρ_a the reduction of ρ to the entanglement wedge. Thus, we conclude that R recovers arbitrary bulk states in the entanglement wedge with high fidelity, as desired.

We now show that the adjoint of the map R solves the entanglement wedge reconstruction problem in the form of Eq. (10). Given a bulk operator ϕ_a supported in the entanglement wedge a of A, define O_A = R*[ϕ_a]. Then, for all ρ ∈ S(H_code), the difference of the corresponding one-point functions can be bounded, where the first inequality is Hölder's inequality and the second is Eq. (13). Thus, we have arrived at the desired approximate equality of one-point functions.

IV. CORRELATION FUNCTIONS

Given a set of n bulk operators {ϕ_a^{(i)}} acting on the entanglement wedge a of A, our result implies that we can calculate their n-point correlation function as the expectation value of the boundary operator obtained by reconstructing the composite bulk operator ϕ_a^{(1)}···ϕ_a^{(n)}. However, one might hope that the reconstructed operators approximately reproduce the bulk algebra, in the sense that the product of the individually reconstructed operators also reproduces the bulk correlation function [Eq. (14)]. When the bulk and boundary relative entropies are exactly equal, it is known that R* is an algebra homomorphism (Ref. [26], Proposition 8.4), so that Eq. (14) holds with equality. We prove in Theorem 4 in the Appendix that when the bulk and boundary relative entropies are only approximately equal, as in Eq. (11), then Eq. (14) still holds approximately, although the size of the error may grow quadratically with n. Furthermore, we show in Corollary 5 that the above result continues to be true even when different operators are reconstructed using the entanglement wedges a_i of different boundary regions A_i [Eq. (15)], where O_{A_i} is the reconstruction of ϕ^{(i)}_{a_i}, and R_{A_i} is the recovery map for boundary region A_i. This result is illustrated in Fig. 3.

V. AN EXPLICIT FORMULA

Our ultimate goal is an explicit formula for approximate entanglement wedge reconstruction. Thus, we want to calculate O_A = R*[ϕ_a] explicitly by using Eq. (7). Recall that the recovery channel R depends on our choice of σ_a and, through the channel N from Eq. (12), also on the choice of σ_ā.
The result is particularly satisfying when both σ_a and σ_ā are chosen to be maximally mixed. It is important to emphasize that this is just a convenient choice that we make in order to simplify our expressions. For an infinite-dimensional code space, such a choice would not be well defined, and instead another choice of average code state can be used. With this simplification, we find that, for all bulk operators ϕ_a with support in the entanglement wedge a, the reconstructed operator takes the form of Eq. (16), where H_A = −log (JτJ†)_A is the boundary modular Hamiltonian on subregion A associated with the maximally mixed state τ on the code subspace. The above expression makes it clear that the natural basis one should use for entanglement wedge reconstruction is the eigenbasis of the modular Hamiltonian. What is more, the recovery channel can be expressed in the form of a logarithmic directional derivative, as in Eq. (8); this yields Eq. (17), where we write H_A[ρ] := −log (JρJ†)_A for the boundary modular Hamiltonian on subregion A associated with a bulk state ρ. In other words, the boundary operator corresponding to ϕ_a can be computed as the response in the boundary modular Hamiltonian H_A to a perturbation of the maximally mixed code state in the direction of the operator ϕ_a. Equations (16) and (17) are explicit expressions for entanglement wedge reconstruction that are meaningful even when the bulk and boundary relative entropies are not exactly equal. When the relative entropies are exactly equal for all ρ, they reduce to the simple Petz map, which is equivalent to existing notions of operator algebra quantum-error correction, as applied to bulk reconstruction.

FIG. 3. We show in Eq. (15) that correlation functions of bulk operators can be computed by pushing each bulk operator to the boundary separately and computing the expectation value in the boundary theory. The bulk operators need not live in the same entanglement wedge. In this figure, the boundary is decomposed into four regions: A, B, C, and D. Regions AB and BC have a nontrivial intersection, and the bulk operators ϕ are localized to the regions as shown. We can use the recovery maps R*_AB and R*_BC to push the operators to AB and BC, respectively.

In Eq. (16), we first map our bulk operator ϕ_a to the entire boundary via ϕ_a ↦ J(ϕ_a ⊗ 1_ā)J†. One might wonder what the connection is between this mapping and the global HKLL reconstruction procedure [2] discussed in the preliminaries, which likewise produces an operator O_HKLL supported on the full CFT Hilbert space and satisfying ⟨ϕ_a⟩_ρ = ⟨O_HKLL⟩_{JρJ†}. The latter condition means that J† O_HKLL J = ϕ_a ⊗ 1_ā. Hence,

J(ϕ_a ⊗ 1_ā)J† = (JJ†) O_HKLL (JJ†).

The map JJ† is precisely the projection onto the code subspace. This means that, in order to compute the term J(ϕ_a ⊗ 1_ā)J† in our formula (16), we can leverage the global HKLL procedure to map the bulk operator to a global boundary operator, if we make sure to subsequently project the result onto the code subspace. Such a projection onto the code subspace is not necessarily complicated.
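As a small linear-algebra illustration of the last point, the snippet below shows that sandwiching a global boundary operator between copies of the code-subspace projector JJ† reproduces J(ϕ_a ⊗ 1_ā)J† whenever the global operator compresses correctly onto the code subspace. The dimensions, the stand-in operator O_global, and the extra "junk" term are all hypothetical choices made only for the demonstration.

import numpy as np

rng = np.random.default_rng(1)
d_code, d_cft, d_a, d_abar = 4, 16, 2, 2
X = rng.normal(size=(d_cft, d_cft))
J = np.linalg.qr(X)[0][:, :d_code]                   # a random (real, for simplicity) encoding isometry

phi_a = np.array([[0.0, 1.0], [1.0, 0.0]])           # an example bulk operator on H_a
phi_code = np.kron(phi_a, np.eye(d_abar))            # phi_a ⊗ 1_abar on the code space
P_code = J @ J.T                                     # projector onto the code subspace

# A stand-in "global" operator that compresses correctly onto the code subspace,
# plus arbitrary junk acting outside it (mimicking the freedom in the choice of O_HKLL).
junk = rng.normal(size=(d_cft, d_cft))
O_global = J @ phi_code @ J.T + (np.eye(d_cft) - P_code) @ junk @ (np.eye(d_cft) - P_code)

assert np.allclose(J.T @ O_global @ J, phi_code)                     # J† O_global J = phi_a ⊗ 1_abar
assert np.allclose(P_code @ O_global @ P_code, J @ phi_code @ J.T)   # projection yields J(phi_a ⊗ 1)J†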
In the Appendix, we work through an explicit calculation involving our reconstruction formula. The example is analogous to Rindler wedge reconstruction, although it is only strictly true for free fields. We consider a bulk operator ϕ_a in AdS₃ localized in the entanglement wedge of a boundary interval, and we choose a two-dimensional code space spanned by the states |0⟩ and ϕ_a|0⟩, where |0⟩ is the vacuum state. Using only our recovery formula and global HKLL, we find an expression for reconstructed bulk operators as a mode expansion supported on the Rindler wedge.

VI. DISCUSSION

The mapping of bulk operators to boundary subregions was recognized as a problem in operator algebra quantum-error correction in Ref. [30]. This important conceptual advance, along with the insight that bulk and boundary relative entropies agree [16], implies that any low-energy bulk operator in the entanglement wedge of a boundary region should be representable as an operator acting on that boundary region [15,16]. In this sense, a boundary region is dual to its entanglement wedge.

In this article, we use recent advances in quantum-information theory to provide a robust demonstration of this result that does not assume that the bulk and boundary relative entropies are exactly equal. In addition, we find a satisfyingly simple explicit formula for the boundary operator, namely, that it can be computed as the response of the boundary modular Hamiltonian of the subregion to a perturbation of the average code state in the direction of the bulk operator.

Our argument does not rely on the structural consequences implied by the exact equality of relative entropies assumed in the proof of the entanglement wedge reconstruction conjecture in Ref. [15]. That said, the argument that we present here still assumes the finite dimensionality of the associated von Neumann algebras. Most of our expressions can be applied formally even in the infinite-dimensional setting, but it would be a worthwhile project to try to rigorously extend our results to the infinite-dimensional case. It seems likely that additional hypotheses will be required in order to ensure the existence of the local channel that is instrumental in our argument.

In independent work [33], Faulkner and Lewkowycz arrived at a formula for entanglement wedge reconstruction which also involves modular flow. Their approach builds on the insights of Ref. [16] and uses the free-field physics of the bulk to argue that entanglement wedge reconstruction involves integrating the modular flow against a certain (generally unknown) kernel. It would be interesting to understand their results from the perspective of our framework; most likely, this requires further exploration of the consequences of the free-field assumption in the bulk, which we do not a priori need to assume in our approach.

… ρ ∈ S(A). [To any state ρ ∈ S(A), we may assign the positive normalized linear functional ϕ ↦ ⟨ϕ⟩_ρ, thereby connecting S(A) with the standard definition of states on a von Neumann algebra.]
In this way, we may lift standard definitions in quantum-information theory to finite-dimensional von Neumann algebras. For example, the trace norm difference ‖ρ − σ‖₁, the relative entropy D(ρ‖σ), and the fidelity F(ρ, σ) for ρ, σ ∈ S(A) can be defined in the usual way by ‖ρ − σ‖₁ := Tr|ρ − σ|, D(ρ‖σ) := Tr[ρ log ρ − ρ log σ], and F(ρ, σ) := ‖√ρ √σ‖₁, respectively, in agreement with their abstract definitions for von Neumann algebras. For a completely positive map N: P(A) → P(B), the adjoint or dual channel N*: B → A is defined by demanding that ⟨ϕ⟩_{N[ρ]} = ⟨N*[ϕ]⟩_ρ for all ρ ∈ S(A) and ϕ ∈ B. If N is trace preserving, then it is called a quantum channel (equivalently, the dual map is unital, i.e., N*[1] = 1). It is called a quantum operation if N is merely trace nonincreasing (equivalently, N*[1] ≤ 1). Consider a (unital) subalgebra A ⊆ B. For any state ρ ∈ S(B), we define its restriction ρ|_A as the unique element in S(A) such that ⟨ϕ⟩_{ρ|A} = ⟨ϕ⟩_ρ for all ϕ ∈ A; the assignment ρ ↦ ρ|_A defines a quantum channel. The inclusion map E_A: S(A) ⊆ S(B) is likewise a quantum channel, sometimes referred to as a state extension. (In the language of von Neumann algebras, it is the predual of a conditional expectation onto A.) Importantly, Eq. (A1) holds for all ρ ∈ S(A). Since E_A is just the inclusion map, it is immediate that Eq. (A2) holds for all ρ, σ ∈ S(A). Lastly, we show that there is a natural generalization of Stinespring's dilation theorem to quantum operations on finite-dimensional von Neumann algebras.

Lemma 1. Let N: P(A) → P(B) be a quantum operation on finite-dimensional von Neumann algebras. Then N admits a dilation in terms of an operator V: H_A → H_B ⊗ H_E (for some auxiliary Hilbert space H_E) with the appropriate normalization. Indeed, the induced map on density operators of the underlying Hilbert spaces is a quantum operation between states on Hilbert spaces and hence has a Stinespring dilation. However, it follows from Eq. (A1) that this dilation is compatible with the algebras, and thus we obtain Lemma 1. ▪

We refer to Refs. [26,31,35] for more detailed expositions of the theory of finite-dimensional von Neumann algebras.

APPENDIX B: APPROXIMATE RECOVERY MAPS FOR VON NEUMANN ALGEBRAS

We now extend the universal recovery result of Ref. [25] to finite-dimensional von Neumann algebras [cf. Eq. (6) herein]. The recovery map appearing there is a quantum operation defined in terms of the Petz recovery map and the probability distribution β₀(t).

Proof. By assumption, A ⊆ B(H_A) and B ⊆ B(H_B) for finite-dimensional Hilbert spaces H_A, H_B. We denote by E_A and E_B the corresponding state extension maps defined above. As in Lemma 1, we now consider the induced channel Ñ on density operators of the underlying Hilbert spaces. This is a quantum channel between density operators on Hilbert spaces, and hence Ref. [25], Theorem 2.1, is applicable. It states that the corresponding recovery bound holds for any pair of density operators ω, χ ∈ S(H) such that supp ω ⊆ supp χ. We now make the choice ω = E_A[ρ] and χ = E_A[σ]. Then, using Eqs. (A1) and (A2), it follows that D(ω‖χ) = D(ρ‖σ), D(Ñ[ω]‖Ñ[χ]) = D(N[ρ]‖N[σ]), R_{χ,Ñ} ∘ E_B = E_A ∘ R_{σ,N}, and hence F(ω, (R_{χ,Ñ} ∘ Ñ)[ω]) = F(ρ, (R_{σ,N} ∘ N)[ρ]). The lemma follows immediately. ▪

If a quantum channel N: S(A) → S(B) is exactly reversible, then it can be reversed by the Petz recovery map P = P_{σ,N} for any faithful state σ. In this case, P* is multiplicative in the sense that P*[ϕ′ϕ] = P*[ϕ′]P*[ϕ] (e.g., Ref. [26], Proposition 8.4).
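This exact multiplicativity of the Petz adjoint for a reversible channel is easy to verify numerically. The toy check below uses an isometry channel N(ρ) = VρV† (which is exactly reversible) and writes the Hilbert-Schmidt adjoint of the Petz map directly as P*(X) = N(σ)^{-1/2} N(σ^{1/2} X σ^{1/2}) N(σ)^{-1/2}; the dimensions, the names, and the choice σ = 1/d are illustrative assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(2)
d_in, d_out = 3, 5

# Isometry V : C^{d_in} -> C^{d_out}; the channel N(rho) = V rho V† is exactly reversible.
V = np.linalg.qr(rng.normal(size=(d_out, d_out)) + 1j * rng.normal(size=(d_out, d_out)))[0][:, :d_in]
N = lambda rho: V @ rho @ V.conj().T

sigma = np.eye(d_in) / d_in                        # faithful reference state defining the Petz map
sqrt_sigma = np.sqrt(1.0 / d_in) * np.eye(d_in)    # sigma^{1/2} for the maximally mixed sigma

def inv_sqrt_on_support(M, tol=1e-10):
    """M^{-1/2} on the support of M (zero eigenvalues are left untouched)."""
    w, U = np.linalg.eigh(M)
    w = np.where(w > tol, 1.0 / np.sqrt(np.clip(w, tol, None)), 0.0)
    return U @ np.diag(w) @ U.conj().T

Ns_inv = inv_sqrt_on_support(N(sigma))

def petz_adjoint(X):
    """Hilbert-Schmidt adjoint of the Petz map: P*(X) = N(sigma)^{-1/2} N(sigma^{1/2} X sigma^{1/2}) N(sigma)^{-1/2}."""
    return Ns_inv @ N(sqrt_sigma @ X @ sqrt_sigma) @ Ns_inv

X1 = rng.normal(size=(d_in, d_in)) + 1j * rng.normal(size=(d_in, d_in))
X2 = rng.normal(size=(d_in, d_in)) + 1j * rng.normal(size=(d_in, d_in))
# For a reversible channel, P*[X1 X2] = P*[X1] P*[X2] holds exactly (up to rounding error).
print(np.linalg.norm(petz_adjoint(X1 @ X2) - petz_adjoint(X1) @ petz_adjoint(X2)))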
If R is an arbitrary quantum operation that reverses N [i.e., (R ∘ N)[ρ] = ρ for all ρ], then it is still true that ⟨R*[ϕ₁] R*[ϕ₂] …⟩_{N[ρ]} = ⟨R*[ϕ₁ϕ₂…]⟩_{N[ρ]} = ⟨ϕ₁ϕ₂…⟩_ρ for all ρ ∈ S(A), or, equivalently, that N*(R*[ϕ₁] R*[ϕ₂] …) = N*(R*[ϕ₁ϕ₂…]). In fact, this is true even when a different (exact) recovery map is used for each operator ϕᵢ. We now prove that approximate reversibility implies approximate multiplicativity: correlation functions are reconstructed up to an error that grows at most quadratically with n, even when different recovery maps Rᵢ correcting different subalgebras Aᵢ of the original algebra A are used for each operator ϕᵢ.

Theorem 3. Let N: P(A) → P(B) and Rᵢ: P(B) → P(Aᵢ) be quantum operations on finite-dimensional von Neumann algebras, with Aᵢ ⊆ A, and ε > 0 such that, for each recovery channel Rᵢ, we have ‖Rᵢ ∘ N[ρ] − ρ|_{Aᵢ}‖₁ ≤ ε for all ρ ∈ S(A). Then the reconstructed correlation functions agree with the bulk ones up to an error of order n²ε [Eq. (B2)].

Since N and Rᵢ are quantum operations, ‖V‖, ‖Wᵢ‖ ≤ 1, where V and Wᵢ denote the corresponding dilation operators (cf. Lemma 1). By assumption, ‖N*(Rᵢ*[χ]) − χ‖ ≤ ε‖χ‖ for all operators χ ∈ Aᵢ. Hence, we can bound ‖Δ₁‖ ≤ ε‖ϕ‖² and ‖Δ₂‖ ≤ 2ε‖ϕ‖², and it follows that the corresponding error terms are controlled [Eq. (B3)]. As a result, we can compare the products of reconstructed operators term by term, where the first inequality is the triangle inequality, the second follows from the submultiplicativity of the operator norm, and the last inequality uses induction and Eq. (B3). Hence, we obtain Eq. (B4); finally, a similar estimate gives Eq. (B5). From Eqs. (B4) and (B5), we see that Eq. (B2) follows immediately by the triangle inequality. ▪

Although we prove this result for general quantum operations on finite-dimensional von Neumann algebras, the special case of ordinary quantum channels between density matrices of finite-dimensional Hilbert spaces follows immediately by considering Aᵢ = A = B(H_A) and B = B(H_B). Perhaps surprisingly, the result does not appear to have been previously known, even for a single recovery channel and for ordinary subspace quantum-error correction.

To conclude this section, we provide a differential formula for the Choi operator of the recovery map R, which is given in Eq. (8) above, where N* denotes the adjoint of the channel N, and the complex conjugate of N[σ] appears. First, recall the integral formula for the logarithmic directional derivative (see, e.g., Ref. [36], Lemma 3.4), where H_a = −log σ_a and H_A = −log J(E_a[σ_a])_A for some arbitrary fixed full-rank state σ_a ∈ S(M_a), with E_a the state extension map S(M_a) ⊆ S(M_code) from Eq. (A1).

Proof. We consider the "local" quantum channel built from J and the state extension E_a, where the first inequality is the familiar relation between trace norm and relative entropy, the second inequality is our assumption Eq. (C1), and the last identity is Eq. (A1). We now let R = R_{σ_a,N} denote the recovery map (B1) associated with some full-rank state σ_a ∈ S(M_a). With this choice of R, Eq. (C2) holds true. Moreover, Lemma 2 shows that the recovery fidelity is correspondingly bounded for all ρ ∈ S(M_code), by using our assumption Eqs. (C1) and (A1) a second time. Thus, F(ρ_a, (R ∘ N)[ρ_a]) ≥ 1 − ε/2, and using the Fuchs-van de Graaf inequality, the corresponding trace-norm bound follows. We obtain (i) from Eqs. (C3) and (C4) and the triangle inequality. This readily implies (ii). For (iii), we observe that the bound on correlation functions follows, where the first inequality is Eq. (C3), and the second inequality is Theorem 3.
▪ When the fiducial state σ_a ∈ S(M_a) is chosen to be the maximally mixed state, the map (C2) takes a particularly simple form. In this case, E_a[σ_a] = 1_code/d_code =: τ_code, where d_code := Tr[1_code]. We obtain the expression in Eq. (C5), where we recall that H_A = −log J[τ_code]_A is the modular Hamiltonian of the boundary region A associated with the maximally mixed code state. Equation (C5) can be rewritten as follows (Ref. [36], Lemma 3.4): we obtain Eq. (C6), where we introduce the notation H_A[ρ] := −log J[ρ]_A for the boundary modular Hamiltonian on subregion A associated with a bulk state ρ ∈ S(M_code). That is, the boundary operator R*[ϕ_a] can be found as the response of the boundary modular Hamiltonian to a perturbation of the fiducial bulk state in the direction of the bulk operator ϕ_a. If σ_a is not the maximally mixed state but is instead some arbitrary state, Eq. (C6) for R*[ϕ_a] no longer holds. However, using Eq. (B6) we can write a similar equation for the Choi operator of R* itself.

Finally, we show that correlation functions of bulk operators are preserved even when each operator is reconstructed using a different entanglement wedge.

Corollary 5. Let M_{a_i} ⊆ M_code and M_{A_i} ⊆ M_CFT be sets of finite-dimensional von Neumann algebras, and J: S(M_code) → S(M_CFT) a quantum channel such that, for each pair of algebras M_{a_i} and M_{A_i}, the JLMS condition (C1) holds for some ε > 0. Then Eq. (C7) holds, where the recovery maps R_{A_i} are defined by applying the explicit construction (C2) to the pairs of algebras M_{a_i} and M_{A_i}.

Proof. The proof is a simple consequence of applying Theorem 3 to condition (i) of Theorem 4. If we take J to be the encoding map (denoted as N in Theorem 3) and include the restriction onto each boundary subalgebra M_{A_i} as part of the corresponding recovery map R_{A_i}, then Eq. (C7) follows immediately. ▪

APPENDIX D: RINDLER WEDGE RECONSTRUCTION FROM GLOBAL RECONSTRUCTION FOR FREE FIELDS

In this Appendix, we work through an illustrative example by applying our entanglement wedge reconstruction formula in a problem motivated by AdS-Rindler reconstruction, using only global HKLL [2] as input. This example is strictly valid only for free-field theories, but we nevertheless use the language of AdS/CFT for familiarity; in short, we pretend that both bulk and boundary fields are free, and we comment on the difficulties that arise when the boundary field is only a generalized free field. Let A be the boundary region corresponding to a single Rindler wedge, a be the entanglement wedge of A, D_A be the boundary domain of dependence of A, and Ā be the complement of A. We show that local bulk operators in a can be represented as linear combinations of field operators for Rindler modes confined to region A. The subtlety for a true AdS/CFT calculation lies in the Rindler decomposition; in general, no such decomposition exists for generalized free fields.
Using Q_A, we can write N[|x⟩⟨y|] = (Q_A)^x ρ_{A,0} (Q_A†)^y, where x, y ∈ {0, 1}. In particular, we have that ρ_{A,1} = Q_A ρ_{A,0} Q_A† and N[|1⟩⟨0|] = Q_A ρ_{A,0}. We then rewrite Q_A ρ_{A,0} = ρ_{A,0}^{1/2} (ρ_{A,0}^{-1/2} Q_A ρ_{A,0}^{1/2}) ρ_{A,0}^{1/2}, where the term in the parentheses is still a simple sum of a_l and a_l† with various weights. Explicitly, it is

ρ_{A,0}^{-1/2} Q_A ρ_{A,0}^{1/2} = Σ_l [ f̃_{a,l} a_l e^{-πE_l} + f̃*_{a,l} a_l† e^{πE_l} + f̃_{b,l} a_l† + f̃*_{b,l} a_l ].

Using the approximate block diagonality of N[τ], we can write the corresponding expression involving ρ_{A,1}, although the reader is cautioned that we have not analyzed the quality of this approximation. The approximate block diagonality also implies an analogous relation involving ρ_{A,0}^{1/2}, and a similar argument applies to the other matrix element. Thus, the recovery channel is proportional to

R*[|1⟩⟨0|] = ∫ dt β₀(t) ρ_{A,0}^{it/2} (ρ_{A,0}^{-1/2} Q_A ρ_{A,0}^{1/2}) ρ_{A,0}^{-it/2},

and we note that the factors of 1/2 cancel out. The combined object ρ_{A,0}^{it/2} (ρ_{A,0}^{-1/2} Q_A ρ_{A,0}^{1/2}) ρ_{A,0}^{-it/2} is then

Σ_l [ f̃_{a,l} a_l e^{-πE_l + iπE_l t} + f̃*_{a,l} a_l† e^{πE_l − iπE_l t} + f̃_{b,l} a_l† e^{-iπE_l t} + f̃*_{b,l} a_l e^{iπE_l t} ].

Defining β̃₀(ω) := ∫ dt β₀(t) e^{iωt} and noting that β̃₀(−ω) = β̃₀(ω) by the symmetry of β₀(t), the recovery map acting on our operator takes the form of Eq. (D5), and there is an analogous expression for R*[|0⟩⟨1|]. Equation (D5) is our desired result. The bulk operator X = |1⟩⟨0| + H.c. can be reconstructed on the Rindler wedge using only Rindler mode operators. Moreover, only single-mode operators appear.

FIG. 1. (a) The HKLL procedure provides a way of writing bulk operators in terms of boundary operators living on a strip in the boundary consisting of all points that are spacelike separated from the bulk point. (b) The causal wedge HKLL procedure provides a way of expressing bulk operators in terms of boundary operators living only in the domain of dependence of a boundary region whose associated causal wedge contains the bulk point.

FIG. 2. (a) A bipartition of the boundary into a connected piece A and its complement Ā. In this case, the entanglement wedge of A coincides with the causal wedge of A, labeled by a in the figure; ā is then the complement of a. (b) A bipartition of the boundary into A and Ā such that A consists of two disconnected components. In the figure above, A spans just more than half of the boundary, and in this case the entanglement wedge of A is not simply the union of the causal wedges of each piece of A. In the bulk, a represents the entanglement wedge of A and ā is the complement of a.
11,424
0001-01-01T00:00:00.000
[ "Physics" ]
Comparative study of metaheuristics methods applied to smart grid network in Morocco Received Mar 9, 2019 Revised Jul 8, 2019 Accepted Nov 21, 2019 The economic dispatch problem of power plays a very important role in the exploitation of electro-energy systems to judiciously distribute power generated by all plants. The unit commitment problem (UCP) consists mainly in finding the minimum cost schedule for a set of generators by switching on or off each one over a given time horizon to meet the demand and satisfy different operational constraints, This research article integrates the crow search algorithm (CSA) as a local optimizer of Eagle strategy (ES) to solve unit commitment problem in smart grid system and economic dispatch of two electricity networks: a testing system 7 units and the Moroccan network.. The results obtained by ESCSA are compared with various results obtained in the literature. Simulation results show that using ES-CSA can lead to finding stable and adequate power generated that can fulfill the need of both the civil and industrial areas. INTRODUCTION Morocco is a country where the energy policy takes a capital importance in the same way as the emerging ones. The evolution of electricity demand is due to several factors. The evolution of the population, increases the domestic consumption, the modernization of lifestyles and the rural electrification also accentuate the dependence of the citizens in their daily life of the electrical network. Moreover, the country agrees and knows the advantage strong demand at the industrial level. Development plans and support industries have been launched and have generated foreign and local investment in many heavy energyintensive industries. The awareness of these issues has led to strategic investment and development laws that allow and support the diversity of the electricity system in Morocco. Projects launched at the national level for several years and focused on renewable energy confirm an energy transition. . They help reduce the dependence of the fossil fuel sector on huge fluctuations [1]. The electricity produced in Morocco is mainly produced by the National Office of Electricity and Drinking Water and three other independent electricity producers [2]: the wind power plant Abdelkhalek Torres (THEOLIA 50 MW), the energy Jorf Lasfar (JLEC 2020 MW) and basic electrical energy (EET 380 MW). Morocco adopted its with corresponding targets for 2020 in 2009 and renewed it in Paris at the end of 2015 with targets until 2030. The national energy strategy focuses on four main objectives: securing energy supply, controlling demand energy, generalize access to energy for all segments of the population at affordable and competitive prices and preserve the environment [3]. Smart grids are a set of technologies, concepts and approaches, allowing the integration the generation, transmission, distribution and use into one internet by full use of communications technology, advanced sensor measurement technology, computer technology, information technology, control technology, new energy technologies [4]. However, Smart Grid uses digital technology to control grid and choosing the best mode of power distribution to reduce energy consumption, reduce costs, increase reliability and also increase transparency in the network. Therefore, the system intelligent will have a significant impact in the fields of finance and economics of the power industry [5]. 
Although, the traditional network is a oneway network in which the electrical energy produced in power plants is channeled to consumers without information to create an automated and distributed network of advanced power supplies. The unit commitment problem plays a significant role in optimizing the cost of generating electrical power by planning production units based on the allocation of the production cost of each unit and the actual output power [6]. They involves scheduling the on/off states of generating units to minimize the operating cost for a given time horizon. The committed units must meet the systems fore-casted demand and spinning reserve requirement at minimum operating cost, subject to a large set of operating constraints. The UC problem, one of the most important tasks in short-term operation planning of modern power systems, has a significant influence on the secure and economic operation of power systems [7]. Optimal commitment scheduling cannot only save millions of dollars for power companies; it also ensures system reliability by maintaining the proper spinning reserve. The problem of economic dispatching is also applied in the integrated planning system of electrical energy production devices. Some methods have been published to solve the problem of economic dispatching and the optimal power flow. Researchers have published some methods to solve EDP and OPF problems. The direct method is precise and very simple but limited by the quadratic objective function [8]. The economic dispatch problem (EDP) is one of the power management tools that are used to determine real power output of thermal generating units to meet required load demand. The economic dispatch problem of power plays a very important role in the exploitation of electro-energy systems to judiciously distribute power generated by all plants [9]. The EDP results in minimum fuel generation cost, minimum transmission power loss while satisfying all units, as well as system constraints, [10][11]. The growing development of information networks and computer systems, power grids should also evolve in depth. Today, a substantial movement has begun to implement the smart grid industry around the world. Since with the creation of intelligent networks, it is possible to access the internal network from external spaces, it is also necessary to protect the data against unauthorized access and information. Therefore, a firewall must be used for information security. The firewall, based on existing security regulations, Given the discussions on passive defense topics and the importance of information security in smart grids systems. Although the firewall plays a major role in establishing security, and its proper installation, we must also take advantage of other security mechanisms to improve the security of the Smart Grid system [12]. The rest of this paper is organized as follows. Section 2 presents the smart grid system In morocco, Section 3 contains the problem formulation of the ED and UCP. Section 4 briefly presents the basics of ES and CSA. Section 5 briefly presents the case study . Section 6 provides the computational results.Finally, Section 7 outlines the conclusions. SMART GRID IN MOROCCO 2.1. Regulatory frameworks for the electricity sector Some years after its independence, Morocco took the decision to take on its own account a strategic sector of the economy such as security. Thus he oversaw the organization of the transmission and distribution of electrical energy. 
It has, however, promoted a regulatory framework without which the management of this sector would be impossible. Through a legal arsenal in the form of laws, dahirs and official decrees, stakeholders in the sector have been defined by specifying their responsibilities, their fields of intervention and their responsibilities [13][14]. Indeed, by Dahir 1-63-226 of August 5, 1963, the National Office of Electricity (ONE) was created in January of the same year under the administrative supervision of the Minister of Public Works. According to the evolution of the sector over the years this dahir has been completed and modified by the decrees and dahirs of the years 1994,2002,2006,2008,2015,2016. Except the restrictions of article 3 of the same dahir and renewable energies, managed by law 13-09, the ONE is responsible for the public service, the production of the transmission and the distribution of electrical energy [15]. Law N ° 48-15, appeared on July 7, 2016, gave birth to a national authority of regulation of the sector of electricity bearing the name ANRE. This law also prescribes the missions of the manager of the national electricity transmission network and the distribution network operators. In addition to the tasks assigned to it by Law No. 13-09, the National Transmission System Operator carries out its duties in accordance with the provisions of this law and the clauses of its specifications approved by regulation. The National Transmission System Operator is responsible for the operation, maintenance and development of the national transmission system and, where applicable, for its interconnections with the transmission networks of foreign countries. Similarly, he is responsible for [16]: Manage the flow of electrical energy on the national transmission grid,To ensure a realtime balance between production capacity and consumption needs, by making use of available production capacities and taking into account exchanges with other interconnected networks;Ensure the safety of the national transmission system, its stability, reliability and efficiency. Law No. 13-09 on renewable energies, promulgated by the Dahir No. 1-10-16 of 26 Safar 1431 (February 11, 2010) published in the Official Bulletin No. 5822 of the 1st Rabii II 1431 (March 18, 2010) put end to the monopoly of electric power generation by the national office of electricity and drinking water at least for renewable energies. Thus, by this law the production of electric energy from renewable energy sources is provided by the ONE, concurrently with legal entities of public or private law or natural persons, in accordance with the provisions of this law and texts. Taken for its application. Table 1 details the laws and regulation realted to RE that were issued since 2006 [17]. Table 1. Moroccan laws Another Moroccan initiative is the program of rural electrification that started in 1996. At that time, the rural electrification rate was 11% only, at the end of 2017 this rate attained 99.64%, Figure 1 presents how this rate kept increasing during these last years [18]. Moroccan electricity power industry Before the 1990's Morocco faced shortage in its electricity supply and part of the country experienced regular power cuts. It was then imperative to reform the electricity sector in order to mitigate the shortage issue.Independent Power Producers (IPPs) were authorized to privately generate electricity which was monopolized by ONEE before. 
The reform aimed at reducing electricity generation from oil and hydropower and shifting part of it to coal and natural gas. The reform set by Morocco did not cover only generation but was extended to distribution. The privatization of the distribution sector also played a part in this reform; Figure 2 details the situation of the Moroccan electrical industry in 2018 [18].

PROBLEM FORMULATION

The unit commitment problem would ideally be solved by exhaustively testing all possible solutions and choosing the best among them: all possible combinations of units meeting the load and reserve requirements would be tested, and the optimal solution with the minimum operating cost would be selected [19]. The unit commitment problem requires the output power of the production units, subject to the system constraints over a period T, and the on/off status of each unit at each time step. The most significant term of the operating cost of a thermal unit depends on the output power of the committed units [20]; the fuel cost FC_i is represented as a quadratic function of the output power in a given time interval in Equation (1), where a, b, c are the unit cost coefficients and P is the unit generating power. The calculation of the start-up cost (SC) depends on the treatment strategy for a thermal unit during its downtime periods, and the exponential cost curve shown in Equation (2) is its representation:

SC_i = σ_i + δ_i [1 − exp(−T_{off,i} / τ_i)]    (2)

where σ_i, δ_i and τ_i are the hot startup cost, the cold startup cost and the unit cooling time constant, respectively, and T_{off,i} is the time for which the unit has been turned off. The total production cost F is the sum of the operating, startup and shutdown costs for all the units, as illustrated in Equation (3), where N is the number of generating units, T is the number of load intervals at the estimated commitment, and SD is the shutdown cost. Some constraints must be taken into consideration to minimize F:
a. The power balance equation is given by Equation (4), where PD is the load demand and PL is the power loss of the system.
b. The hourly spinning reserve (R) is given by Equation (5).
c. The unit rated minimum and maximum capacities are given in Equation (6).
The initial conditions of each unit and the minimum up/down times (MUT/MDT) of the units are given by Equations (7) and (8), respectively, where Toff/Ton is the unit off/on time and Ut,i ∈ [0, 1] is the unit off/on status. The objective of the ELD problem is represented by Equation (9), subject to the equality and inequality constraints given by Equations (10) and (11), respectively:

Σ_{i=1}^{N} P_i = P_D + P_L    (10)

P_{i,min} ≤ P_i ≤ P_{i,max}    (11)

OVERVIEW OF EAGLE STRATEGY AND CROW SEARCH ALGORITHM

4.1. Eagle strategy

The eagle strategy is a two-stage optimization strategy presented in [21]. This algorithm mimics the behavior of eagles in nature. In fact, eagles use two different components to search for their prey: the first one is a random search performed by flying freely, and the second one is an intensive search to catch the prey once they see it. In this two-stage strategy, the first stage explores the search space globally by using a Lévy flight; if it finds a promising solution, then an intensive local search is employed using a more efficient local optimizer, such as hill-climbing or the downhill simplex method. Then, the two-stage process starts again with a new global exploration, followed by a local search in a new region. One of the remarkable advantages of such a combination is the balance between global search (which is generally slow) and a rapid local search.
Another advantage is that ES is a methodology or strategy, not a single algorithm: in fact, different algorithms can be used at different times and stages during the iterations. The main steps of the ES are outlined in Algorithm 1.

Algorithm 1 Eagle strategy
1: Objective function f(x)
2: Initialization and random initial guess x^{t=0}
3: While (stop criterion) do
4:   Global exploration by randomization (e.g., Lévy flights)
5:   Evaluate the objectives and find a promising solution
6:   Intensive local search via an efficient local optimizer
7:   If (a better solution is found) then
8:     Update the current best
9:   End if
10:  Update t = t + 1
11: End while

Crow search algorithm

The crow search algorithm (CSA) is a population-based stochastic search algorithm recently proposed in [22]. The CSA is a newly developed optimization technique to solve complex engineering optimization problems [23][24]. It is inspired by the intelligent behavior of crows; in particular, among the principles of CSA listed in [22], crows protect their caches from being pilfered with a certain probability. Following these assumptions, the core mechanism of the CSA consists of three basic phases, namely initialization, generation of new positions, and updating of the crows' memory. At first, the initial population of crows, each represented by an n-dimensional position vector, is randomly generated. At iteration iter, the position of crow i is specified by x^{i,iter} = [x_1^{i,iter}, x_2^{i,iter}, …, x_n^{i,iter}], and it is assumed that this crow has memorized its best experience so far in its memory m^{i,iter} = [m_1^{i,iter}, m_2^{i,iter}, …, m_n^{i,iter}]. To generate a new position, crow i randomly selects a crow j from the population and attempts to follow it to find the position of its hiding place (m_j). In this case, according to a parameter named the awareness probability (AP), two states may happen. State 1: Crow j does not know that crow i is following it; as a result, crow i will reach the hiding place of crow j. State 2: Crow j knows that crow i is following it; as a result, to protect its cache from being pilfered, crow j will fool crow i by going to another position within the search space. According to States 1 and 2, the position of the crows is updated as follows:

x^{i,iter+1} = x^{i,iter} + r_i × fl^{i,iter} × (m^{j,iter} − x^{i,iter}),  if r_j ≥ AP^{j,iter};
x^{i,iter+1} = a random position of the search space,  otherwise.    (12)

where r_i and r_j are uniformly distributed random numbers in [0, 1], fl^{i,iter} denotes the flight length of crow i at iteration iter, and AP^{j,iter} denotes the awareness probability of crow j at iteration iter. Finally, the crows update their memory as follows:

m^{i,iter+1} = x^{i,iter+1},  if f(x^{i,iter+1}) is better than f(m^{i,iter});
m^{i,iter+1} = m^{i,iter},  otherwise.    (13)

where f(·) denotes the objective function value. It is seen that if the fitness value of the new position of a crow is better than the fitness value of the memorized position, the crow updates its memory with the new position. The above process is repeated until a given termination criterion (itermax) is met. Finally, the best solution among the memories is returned as the optimal solution found by the CSA.
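As an illustration of the update rules in Eqs. (12) and (13), the sketch below implements a minimal CSA loop and applies it to a toy dispatch-style objective (quadratic fuel costs plus a penalty enforcing the power balance of Eq. (10)). Everything here is hypothetical: the function names, coefficient values, penalty weight and parameter settings are placeholders and are not taken from the paper, which additionally wraps CSA inside the eagle strategy.

import numpy as np

def crow_search(objective, lb, ub, n_crows=20, n_iter=300, fl=2.0, ap=0.1, seed=0):
    """Minimal Crow Search Algorithm; returns the best memorized position and its objective value."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_crows, dim))          # current positions
    mem = x.copy()                                        # memorized "hiding places"
    fmem = np.array([objective(p) for p in mem])
    for _ in range(n_iter):
        for i in range(n_crows):
            j = rng.integers(n_crows)                     # crow i picks a random crow j to follow
            if rng.random() >= ap:                        # Eq. (12), first branch: crow j is unaware
                x[i] = x[i] + rng.random() * fl * (mem[j] - x[i])
            else:                                         # Eq. (12), second branch: random position
                x[i] = rng.uniform(lb, ub, size=dim)
            x[i] = np.clip(x[i], lb, ub)
            fx = objective(x[i])
            if fx < fmem[i]:                              # Eq. (13): memory update
                mem[i], fmem[i] = x[i].copy(), fx
    best = int(np.argmin(fmem))
    return mem[best], fmem[best]

# Toy economic-dispatch objective: quadratic fuel costs plus a penalty for violating the power balance.
a = np.array([100.0, 120.0, 90.0])                        # made-up cost coefficients for 3 units
b = np.array([2.0, 1.8, 2.2])
c = np.array([0.002, 0.0025, 0.003])
demand = 300.0                                            # MW, illustrative only

def dispatch_cost(P, penalty=1e4):
    fuel = np.sum(a + b * P + c * P ** 2)                 # assumed quadratic form a + b*P + c*P^2
    return fuel + penalty * abs(P.sum() - demand)         # Eq. (10) enforced as a soft constraint

P_best, cost_best = crow_search(dispatch_cost, lb=np.zeros(3), ub=np.full(3, 200.0))
print(P_best.round(2), round(cost_best, 2))

In the full ES-CSA hybrid, a loop of this kind would play the role of the intensive local search inside the eagle strategy of Algorithm 1.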
The main steps of the CSA are outlined in Algorithm 2.

Algorithm 2 Crow Search Algorithm
1: Randomly initialize the position of a flock of (NP) crows in the search space
2: Evaluate the position of the crows
3: Initialize the memory of each crow
4: While (iter < itermax) do
5:   for i = 1 to NP do
6:     Randomly choose one of the crows to follow (for example, j)
7:     Define an awareness probability
8:     if (r_j ≥ AP^{j,iter}) then

CASE STUDY

In this section, we use our three methods to solve the economic dispatch and unit commitment problems in the smart grid system of the Moroccan network. This network consists of 112 nodes with 7 production units, 120 lines and 15 transformers [18].

RESULTS AND DISCUSSION

In this section, we present the results obtained with ES-CSA for solving the economic dispatch problem and the unit commitment problem and compare these results with those of the CSA [25] and ES [26]. A 7-unit power system is used to explore our idea of using ES-CSA to find the optimal set of power generation of the system. ES-CSA is used in this paper to solve the economic dispatch and unit commitment problems. The programs were developed in the MATLAB 7.9 environment. This network supplies a total load of 4902 MW. The loss ratios are calculated directly using the result of the power-flow calculation. The tuning parameters for ES-CSA are given in Table 2; Table 3 shows the parameters of the cost function of the 7 generators under study, while the matrix is the loss coefficient matrix of the 7-unit power system. The generating unit data of the test system are given in Table 4. From the results of Table 5, we notice that ES-CSA gives the same production cost, while CSA gives a slightly lower cost of $0.7/h; ES-CSA gives a good production cost with good accuracy. In the meantime, we examine the variation of the total fuel cost of the test system with the evolutionary generation number. For the different test systems, the convergence processes of the best solution over the 30 trials are shown in Fig. 4. From Fig. 4, it is easy to see that ES-CSA has satisfactory convergence and that the algorithm escaped from local optima in the later iterations. This shows that the stochastic searching mechanism of ES-CSA is efficient, and the proposed mutation strategies improved the performance of ES-CSA. In Figure 4, we show the convergence of the metaheuristic search process based on ES-CSA in both the best and average cases. To see the difference between our new approach and another known method, we compare the production cost found by ES-CSA to that found by ES [27]. In this case, we test the operation of ES-CSA on a simple network of 112 nodes with 7 production units. The total demand of the network is equal to 4902 MW, with the losses given by the loss coefficient matrix of the system. The simulation results are presented in Table 5.

CONCLUSION

In this paper, we proposed an eagle strategy based crow search algorithm (ES-CSA) to solve the unit commitment problem and the economic dispatch problem in the smart grid system of the Moroccan electricity network. For this purpose, we proposed a binary hybrid metaheuristic based on two recent nature-inspired methods, namely the Eagle Strategy and the Crow Search Algorithm. The hybrid approach was developed in MATLAB and tested on a network of 112 nodes. The results have shown that our new method combines the advantages of CSA and ES to give better performance with optimal results in all cases, while respecting the imposed constraints.
4,757.2
2020-03-01T00:00:00.000
[ "Engineering" ]
Influence of Angptl1 on osteoclast formation and osteoblastic phenotype in mouse cells Background Osteoblasts and osteoclasts play important roles during the bone remodeling in the physiological and pathophysiological states. Although angiopoietin family Angiopoietin like proteins (Angptls), including Angptl1, have been reported to be involved in inflammation, lipid metabolism and angiogenesis, the roles of Angptl1 in bone have not been reported so far. Methods We examined the effects of Angptl1 on the osteoblast and osteoclast phenotypes using mouse cells. Results Angptl1 significantly inhibited the osteoclast formation and mRNA levels of tartrate-resistant acid phosphatase and cathepsin K enhanced by receptor activator of nuclear factor κB ligand in RAW 264.7 and mouse bone marrow cells. Moreover, Angptl1 overexpression significantly enhanced Osterix mRNA levels, alkaline phosphatase activity and mineralization induced by bone morphogenetic protein-2 in ST2 cells, although it did not affect the expression of osteogenic genes in MC3T3-E1 and mouse osteoblasts. On the other hand, Angptl1 overexpression significantly reduced the mRNA levels of peroxisome proliferator-activated receptor γ and adipocyte protein-2 as well as the lipid droplet formation induced by adipogenic medium in 3T3-L1 cells. Conclusions The present study first indicated that Angptl1 suppresses and enhances osteoclast formation and osteoblastic differentiation in mouse cells, respectively, although it inhibits adipogenic differentiation of 3T3-L1 cells. These data suggest the possibility that Angptl1 might be physiologically related to bone remodeling. Background Bone is a living organ that undergoes remodeling throughout life. Bone remodeling is accelerated by the bone resorption by osteoclasts and subsequent bone formation by osteoblasts under the influences of growth factors, cytokines and/or chemokines, such as bone morphogenetic protein (BMP)-2 and stromal derived factor-1 [1,2]. Osteoclasts and osteoblasts are originated from granulocyte/macrophage-lineage hematopoietic stem cells and mesenchymal stem cells, respectively. In homeostatic balance, bone resorption and formation are balanced so that old bone is continuously replaced by new tissue to accommodate mechanical loading and strain [3]. The remodeling cycle consists of three consecutive phases: absorption, inversion, and formation. Angiopoietin like proteins (Angptls) contain eight secreted glycoproteins, namely Angptl1 to Angptl8, showing structural similarity to members of angiopoietin family proteins [4]. All Angptls have an amino-terminal coiled-coil domain, a linker region, and a carboxyterminal fibrinogen-like domain, and bind to the Tie2 receptor similarly to angiopoietin except for Angptl8 (without the carboxy-terminal fibrinogen-like domain). Angiopoietin and Angptls show high similarity among them [5]. Angptls are widely expressed in various tissues, including liver, cardiovascular and hematopoietic system, and play important roles in inflammation, lipid metabolism and angiogenesis [6]. Angiopoietin like protein-1 (Angptl1) with 491 amino acids has firstly been an isolated cDNA as encoding one of the novel angiopoietin like protein family members from human adult heart cDNA. Angptl1, also known as angioarrestin, was the first discovered member of the Angptl family, since it was identified as an anti-angiogenic factor that inhibits cell proliferation, migration and adhesion of endothelial cells [5,7]. 
Angptl1 inhibits vascular endothelial growth factor and basic fibroblast growth factor-induced proliferation of the endothelial cells and exerts the anti-apoptotic action through an induction of phosphorylation of extracellular-signal-regulated kinase 1/ 2 (ERK1/2) and protein kinase Bα in hepatocellular carcinoma [8]. Moreover, Angptl1 and Angptl2 have been reported to be involved in metabolic syndrome in human and rodents [9]. Lin et al. reported that Angptl1/2 enhances the formation of hematopoietic stem and progenitor cells through the interaction with notch receptor signaling in zebrafish [10]. These findings suggest that Angptl1 is related to wide variety of pathophysiological process in various diseases. Although angiopoietin and Angptls are known as angiogenic growth factors, the studies of Angptl1, Angptl2 and Angptl1/Angptl2-double knockout zebrafish and mice clearly demonstrated that Angptl2 and Angptl1 have angiogenic and anti-angiogenic function in endothelial cells, respectively, although only double knockout zebrafish showed the defects of endothelial development from embryos [11]. Although vessel formation and blood supply are important for the maintenance and activation of bone remodeling, it has not been reported about the roles of Angptl1 in bone metabolism elsewhere. We therefore investigated the effects of Angptl1 on osteoclast formation, osteoblastic differentiation and osteoblast phenotypes using mouse cell-lines and primary cultured cells. Ethics All animal experiments were performed according to the guidelines of the National Institutes of Health and the institutional rules for the use and care of laboratory animals at Kindai University. The procedures of animal experiments were approved by the Experimental Animal Welfare Committee of Kindai University (KAME-2020-011). The study was carried out in compliance with the ARRIVE guidelines. DNA construction For DNA construction, the coding region of murine Angptl1 was amplified by PCR with KAPA HiFi DNA Polymerase enzyme (KAPA Biosystems, Wilmington, MA, USA), using cDNA from mouse neonatal liver as a template and primers (Forward: ATAGAGGCTA GCATGAAGGCTTTTGTTTGGAC, Reverse: ATCAAC TCGAGTCAGTCAATAGGCTTGATCA), and then the resultant fragment was inserted into pcDNA3.1 (+) vector (Invitrogen, Thermo Fishers Scientific, Waltham, MA, USA) at the NheI/XhoI sites, and the DNA sequence was verified. Transfection For transient transfection, the expression vector was transfected into cells using jetPRIME reagent (Polyplustransfection SA, Illkirch, France) according to the manufacture's protocol. Six hours later, the cells were supplied with fresh αMEM, RPMI1640 or DMEM with high glucose containing 10% FBS. Forty-eight hours later, the transiently transfected cells were used for experiments. For stable transfection, empty vector or pcDNA3.1-Angptl1 vector was transfected into ST2 cells using jet-PRIME reagent, as described above. The cells were then subjected to the RPMI1640 supplemented with 10% FBS and 300 μg/ml G418 after 48 h transfection. The several colonies were obtained and the expression levels of Angptl1 were determined by quantitative reverse transcriptase-polymerase chain reaction (qRT-PCR) method. Osteoclast formation RAW 264.7 cells were seeded onto 96-well plate at 1000 cells/well. Empty vector-or Angptl1-transfected RAW 264.7 cells were cultured with αMEM containing 10% FBS and 50 ng/ml of RANKL to induce osteoclast differentiation for 5 days. 
For osteoclast formation in mouse bone marrow cells, cells were collected from tibias of 6 to 8week-old male C57BL/6 J mice. Osteoclast formation was induced in mouse bone marrow cells, as described previously [12]. Mouse bone marrow cells were cultured in αMEM with 10% FBS and 50 ng/ml M-CSF for 3 days. Then, osteoclasts were formed in αMEM with 10% FBS, 50 ng/ml M-CSF and 50 ng/ml RANKL for a further 4 days. The cells were fixed with 4% formaldehyde, and tartrate-resistant acid phosphatase (TRAP) staining were performed with TRAP Stain kit (WAKO Pure Chemicals) according to manufacturer's protocol. TRAP-positive multinuclear cells (MNCs) with three or more nuclei were counted as osteoclasts. qRT-PCR Total RNA was extracted from cells using TriSure reagent (Nippon Genetics, Tokyo, Japan) in accordance with the manufacturer's instructions and real-time PCR was performed as described [13]. Each PCR primer set is shown in Table 1. Expression levels of mRNA were normalized to the relative amount of a housekeeping gene, GAPDH or β-actin. Each experiment was performed in duplicate. Western blot Western blot analysis was performed, as described previously [12]. Cells were lysed with radio-immunoprecipitation assay buffer containing protease inhibitor cocktails (Millipore, Bedford, MA, USA) and phosphatase inhibitor cocktails (Wako Pure Chemicals). The lysate was sonicated and centrifuged at 10000 rpm for 10 min. The supernatant was collected and equal amount of protein aliquots were separated by electrophoresis on 4-20% FastGene gradient gels (Nippon Genetics, Tokyo, Japan). The proteins were transferred onto a polyvinyl difluoride membrane (Millipore). The membrane was blocked with 3% skim milk and subsequently incubated with anti-phosphorylated-p65, anti-p65 or anti-GAPDH antibodies followed by horse radish peroxidase-labeled antirabbit IgG (Cell Signaling Technology). The bands were visualized using ImmunoStar Zeta (Wako Pure Chemicals) and captured with Amersham Imager 680 (GE Healthcare, Tokyo, Japan). Alkaline phosphatase (ALP) activity Analysis of ALP activity in ST2 cells was performed, as described previously [14]. ST2 cells transiently expressing empty vector or Angptl1 were grown in 24-well plate. The confluent cells were stimulated with 200 ng/ml BMP-2 for 72 h. Then, the cells were lysed in distilled water and sonicated for 30 s on ice. After the lysates were centrifuged, ALP activity in the supernatant were analyzed using Lab assay ALP kit (Wako Pure Chemicals). The absorbance was measured at 405 nm by the Multiskan Go microplate spectrophotometer (Thermo Fishers Scientific). The protein levels in the supernatant were determined by Protein Assay BCA Kit (Wako Pure Chemicals). ALP activity was normalized to the protein levels and expressed as [units/ total protein (μg)]. Mineralization To test the effect of Angptl1 on mineralization, stably empty vector-or Angptl1-overexpressed ST2 cells were grown in RPMI/10%FBS-penicillin/streptomycin containing 300 μg/ml G418 and 200 ng/ml BMP-2 for 1 week to induce osteoblasts, and then changed the mineralization media supplemented with 300 μg/ml G418 for additional 10 days. To detect mineralization level, the cells were fixed with 4% formaldehyde and stained with alizarin red. For the quantitation of mineralization, the resultant cells were destained with cetylpyridinium chloride and the absorbance at 550 nm were measured by the Multiskan Go microplate spectrophotometer (Thermo Fisher Scientific). 
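For readers reproducing the relative-expression analysis described in the qRT-PCR subsection above, a common way to normalize qPCR data to a housekeeping gene (such as GAPDH or β-actin) is the 2^(−ΔΔCt) method. The short sketch below is illustrative only: the Ct values are invented, and the paper does not state which quantification model was used beyond normalization to the housekeeping gene.

def ddct_relative_expression(ct_target, ct_housekeeping, ct_target_ctrl, ct_housekeeping_ctrl):
    """Relative mRNA level by the 2^(-ddCt) method, normalized to a housekeeping gene (e.g., GAPDH)."""
    d_ct_sample = ct_target - ct_housekeeping              # normalize the sample to the housekeeping gene
    d_ct_control = ct_target_ctrl - ct_housekeeping_ctrl   # normalize the control to the housekeeping gene
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Invented example Ct values: a target gene in treated cells versus untreated control, both normalized to GAPDH.
print(ddct_relative_expression(ct_target=24.1, ct_housekeeping=17.9,
                               ct_target_ctrl=27.6, ct_housekeeping_ctrl=18.0))   # about 10.6-fold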
Mouse primary osteoblasts Primary osteoblasts were collected from the calvaria of new born C57BL/6 J mice, as described previously [15]. After mice were euthanized with excess isoflurane, calvaria was removed and cut into small pieces. The small pieces of calvaria were incubated four times with αMEM containing 1 mg/ml collagenase and 0.25% trypsin for 20 min at 37°C. Cells from second, third and fourth digestions were collected and cultured in αMEM with 10% FBS and 100 units/ml penicillin-100 μg/ml streptomycin. Adipogenic differentiation Confluent grown 3T3-L1 cells were transiently transfected with empty vector-or Angptl1 before the induction of adipogenic differentiation. Then, the medium was changed into high glucose DMEM-10%FBS supplemented with 10 μg/ml insulin, 500 μM 3-isobutyl-1-methyl-xanthine, and 1 nM dexamethasone for 2 days and then changed high glucose DMEM-10%FBS medium for additional 4 days as previously described [16]. The differentiated 3T3-L1 cells were fixed with 4% formaldehyde in phosphate-buffered saline for 30 min and stained with Oil red O solution according to the standard protocols. The stained cells were photographed under a microscope (BZ-700; Keyence, Osaka, Japan). Statistical analysis Data are presented as mean ± SEM. Comparisons between two groups were made with the Mann-Whitney U test. For parametric multiple comparison, one-way or two-way analysis of variance followed by the Dunnett test or Tukey-Kramer test was employed by GraphPad PRISM 7.00 software. Significance was defined as P < 0.05. Each experiment was performed independently three times. Effects of Angptl1 on osteoclast formation We first examined the effects of Angptl1 on the osteoclast formation. Transient overexpression of Angptl1 significantly decreased the number of TRAP-positive MNCs enhanced by RANKL in mouse monocytic RAW 264.7 cells, compared with empty vector-transfected cells (Fig. 1a). Moreover, Angptl1 overexpression significantly downregulated the mRNA levels of osteoclastic markers, such as TRAP and cathepsin K (Ctsk), enhanced by RANKL, in RAW 264.7 cells, although it seemed to suppress nuclear factor of activated T-cells, cytoplasmic 1 (NFATc1) mRNA levels enhanced by RANKL without significant difference (Fig. 1b). As shown in Fig. 2, Angptl1 significantly suppressed number of TRAP-positive MNCs as well as the mRNA levels of TRAP, Ctsk, NFATc1, matrix metalloproteinase (MMP)-9 and dendrocyte expressed seven transmembrane protein (DC-STAMP) enhanced by RANKL in mouse bone marrow cells. It did not affect the phosphorylation of p65 enhanced by RANK L in these cells. Effects of Angptl1 on osteoblasts We examined the effects of Angptl1 on osteoblastic differentiation using ST2 cells, mouse mesenchymal cell line. Transient overexpression of Angptl1 significantly enhanced Osterix mRNA levels and ALP activity enhanced by BMP-2 in ST2 cells, although it significantly reduced osteocalcin mRNA levels enhanced by BMP-2 ( Fig. 3a and b). Next, we established ST2 cells stably expressing Angptl1 to examine the effects of Angptl1 on mineralization. As shown in Fig. 3c, stable overexpression of Angptl1 enhanced mineralization in ST2 cells pretreated with BMP-2 in the presence of mineralization medium. On the other hand, transient overexpression of Angptl1 did not affect osteoblast phenotypes, such as the mRNA levels of Runx2, Osterix, ALP and osteocalcin in mouse osteoblastic MC3T3-E1 cells (Fig. 4a). 
Recombinant Angptl1 protein did not affect the mRNA levels of Runx2, Osterix, ALP and osteocalcin in mouse osteoblasts (Fig. 4b). Moreover, Angptl1 did not affect the mRNA levels of RANKL and osteoprotegerin (OPG) or the ratio of RANKL/OPG mRNA in mouse osteoblasts (Fig. 4c). Effects of Angptl1 on adipogenic differentiation We finally examined the effects of Angptl1 on adipogenic differentiation. 3T3-L1 cells, a mouse preadipocyte cell line, transiently overexpressing Angptl1 were differentiated into adipocytes in the presence of adipogenic medium for 4 days. Transient overexpression of Angptl1 significantly decreased the mRNA levels of peroxisome proliferator-activated receptor γ (PPARγ) and adipocyte protein-2 (aP2) enhanced by adipogenic medium in 3T3-L1 cells (Fig. 5a), as well as Oil Red O staining (Fig. 5b). Discussion The present study revealed that Angptl1 overexpression significantly inhibited the number of TRAP-positive MNCs as well as the mRNA levels of TRAP and Ctsk enhanced by RANKL in RAW 264.7 cells. Moreover, Angptl1 protein significantly suppressed the number of TRAP-positive MNCs as well as the mRNA levels of TRAP, Ctsk, NFATc1, MMP-9 and DC-STAMP enhanced by RANKL in mouse bone marrow cells. These findings indicate that Angptl1 suppresses osteoclast formation, which might lead to a suppression of bone resorption. Since the effects of Angptl1 on NFATc1 expression seemed to be smaller than those on TRAP and Ctsk in RAW 264.7 cells, Angptl1 might differentially affect several signaling pathways involved in osteoclast formation in mice. It has previously been reported that Angptl1 exerts anti-apoptotic activity through the phosphorylation of ERK1/2 and Akt in zebrafish endothelial cells [11]. Moreover, studies of Akt-deficient mice suggested that Akt signaling is partly related to osteoblastic bone formation and osteoclastic bone resorption in bone remodeling [17,18], although previous studies suggest that ERK1/2 and Akt signaling stimulates bone resorption in mice, in contrast to the effects of Angptl1 on osteoclast formation [19]. The roles of Angptl1 in osteoclast function have remained unknown. Further studies, such as pit formation assays using bone slices, will be necessary to clarify the roles of Angptl1 in bone resorption activity. Although MC3T3-E1 cells are preosteoblastic cells, which already possess osteoblastic characteristics such as high ALP activity and mineralization, ST2 cells are a stromal cell line that does not exhibit the osteoblast phenotype. BMP-2 can induce the differentiation of ST2 cells into osteoblastic cells. Fig. 2 Effects of Angptl1 on osteoclast formation in mouse bone marrow cells. a Mouse bone marrow cells were pre-cultured with 50 ng/ml of M-CSF for 3 days, and further cultured with 50 ng/ml of RANKL and M-CSF in the presence or absence of recombinant Angptl1 (100 ng/ml) for an additional 4 days. The number of TRAP-positive MNCs was counted. Data represent mean ± SEM (n = 6 in each group). **p < 0.01 by Tukey-Kramer test. b Mouse bone marrow cells were pre-cultured with 50 ng/ml of M-CSF for 3 days, and further cultured with 50 ng/ml of RANKL and M-CSF in the presence or absence of recombinant Angptl1 (100 ng/ml) for an additional 4 days. Then, total RNA was extracted for qPCR analysis of TRAP, cathepsin K (Ctsk), NFATc1, MMP-9, DC-STAMP or GAPDH. Data are expressed relative to the GAPDH mRNA value. Data represent mean ± SEM (n = 6 in each group).
Two independent experiments were performed. c Mouse bone marrow cells were pre-cultured with 50 ng/ml of M-CSF for 3 days, and then cultured with or without recombinant Angptl1 (100 ng/ml) for 1 h in the presence of 50 ng/ml of RANKL and M-CSF. Total proteins were extracted from the cells, and Western blotting analyses for phosphorylated-p65 (p-p65), p65 and GAPDH were performed. The images represent experiments performed independently 3 times. Cont; Control In the present study, Angptl1 overexpression significantly enhanced Osterix mRNA levels, ALP activity and mineralization in the presence of BMP-2 in ST2 cells, although it did not affect the mRNA levels of Runx2, Osterix, ALP and osteocalcin in mouse osteoblasts and MC3T3-E1 cells. These data suggest that Angptl1 enhances the differentiation of mesenchymal cells into osteoblasts at the early stage of osteogenic differentiation, leading to an enhancement of ALP activity and mineralization. In the present study, Angptl1 overexpression significantly suppressed osteocalcin mRNA levels enhanced by BMP-2 in ST2 cells, suggesting that Angptl1 affects osteoblastic differentiation at the early stage, but not the terminal stage, since osteocalcin has been considered a late-stage osteoblast differentiation marker [20]. Angptl2 has been well studied in glucose/lipid metabolism. Kadomatsu et al. demonstrated that Angptl2 possesses angiogenic effects, which are related to chronic inflammation in obesity and cancer metastasis [21][22][23]. Serum levels of Angptl2 have also been reported in type 2 diabetic and obese patients [24,25]. These in vitro and clinical studies suggested that Angptl2 is involved in the pathophysiology of metabolic syndrome, such as obesity, although no reports are available about the roles of Angptl1 in adipobiology. We therefore examined the effects of Angptl1 on adipogenic differentiation, since bone metabolism is regulated by the transdifferentiation of mesenchymal stem cells into osteogenic or adipogenic cells [26]. In the present study, Angptl1 overexpression significantly decreased the mRNA levels of PPARγ and aP2 as well as the formation of lipid droplets during adipogenic induction in the mouse preadipocyte cell line 3T3-L1, suggesting that Angptl1 suppresses adipogenic differentiation in mouse cells. Taking into account the enhancement by Angptl1 of the early-stage differentiation of mesenchymal cells into osteoblasts, Angptl1 may induce transdifferentiation from the adipogenic lineage into the osteogenic lineage during bone remodeling or in pathological states such as diabetes and obesity. Bone modeling and remodeling processes participate in bone repair after fractures or bone defects [27]. Chondrogenesis, bone formation and bone resorption, as well as vessel formation and inflammation, are related to this bone repair process. Tanoue et al. evaluated Angptl2 expression and function in chondrocyte differentiation and subsequent endochondral ossification in mice, and showed that Angptl2 functions as an extracellular matrix protein in chondrocyte differentiation and subsequent endochondral ossification [28]. Therefore, Angptl1 might participate in the bone repair process by affecting chondrogenesis and the bone restoration and remodeling phases, although our preliminary study showed that an elevation in Angptl1 mRNA levels was not evident at the injured sites after femoral bone defect in mice (data not shown). Fig. 3 Effects of Angptl1 on osteoblastic differentiation.
a Total RNA was extracted from ST2 cells transiently transfected with empty vector or Angptl1 and cultured with vehicle or BMP-2 (200 ng/ml) for 72 h. Then, qRT-PCR for Runx2, Osterix, osteocalcin, Angptl1 or GAPDH was performed. Data are expressed relative to the GAPDH mRNA value. Data represent mean ± SEM (n = 6 in each group). *p < 0.05, **p < 0.01 by Tukey-Kramer test. ## p < 0.01 by Mann-Whitney U test. b Cell lysates were extracted from ST2 cells transiently transfected with empty vector or Angptl1 and cultured with vehicle or BMP-2 (200 ng/ml) for 72 h. Then, ALP activity was measured as described in Methods. Data represent mean ± SEM (n = 6 in each group). *p < 0.05, **p < 0.01 by Tukey-Kramer test. c ST2 cells stably transfected with empty vector or Angptl1 (#1, #2) were treated with BMP-2 (200 ng/ml) and G418 (300 μg/ml) for 1 week, then cultured with the mineralization medium (10 mM β-glycerophosphate and 50 μg/ml ascorbic acid) and G418 (300 μg/ml) for an additional 10 days in the absence of BMP-2. The quantitation of mineralization was performed from the absorbance at 550 nm, as described in Materials and Methods. Data are expressed as the ratio to the absorbance in empty vector-overexpressing ST2 cells. Data represent mean ± SEM (n = 8 in each group). **p < 0.01 by Dunnett test. The lower panel shows representative images of mineralization. Fig. 4 Effects of Angptl1 on osteoblast phenotype. a Total RNA was extracted from subconfluent MC3T3-E1 cells transiently transfected with empty vector or Angptl1. Then, qRT-PCR for Runx2, Osterix, ALP, osteocalcin, Angptl1 or GAPDH was performed. Data are expressed relative to the GAPDH mRNA value. Data represent mean ± SEM (n = 8 in each group). ## p < 0.01 by Mann-Whitney U test. b Mouse osteoblasts were cultured in the presence or absence of recombinant Angptl1 (100 ng/ml) for 24 h, and total RNA was extracted. Then, qRT-PCR for Runx2, Osterix, ALP, osteocalcin or GAPDH was performed. Data are expressed relative to the GAPDH mRNA value. Data represent mean ± SEM (n = 6 in each group). Cont; Control. c Mouse osteoblasts were cultured in the presence or absence of recombinant Angptl1 (100 ng/ml) for 24 h, and total RNA was extracted. Then, qRT-PCR for RANKL, OPG or GAPDH was performed. Data are expressed relative to the GAPDH mRNA value. Data represent mean ± SEM (n = 6 in each group). Cont; Control The roles of Angptl1 in chondrogenesis and bone repair still remain unknown. Conclusions In conclusion, the present study revealed for the first time that Angptl1 overexpression suppresses osteoclast formation enhanced by RANKL in RAW 264.7 cells and mouse bone marrow cells. Moreover, it enhanced Osterix expression, ALP activity and mineralization induced by BMP-2 in ST2 cells, although it suppressed adipogenic differentiation of 3T3-L1 cells. This is the first report on the roles of Angptl1 in bone. Our data suggest the possibility that Angptl1 might enhance osteoblastic bone formation and reduce osteoclastic bone resorption in mice, which might lead to an increase in bone mass and make Angptl1 a potential target for bone metabolic disorders such as osteoporosis. Fig. 5 Effects of Angptl1 on adipogenic differentiation. a Total RNA was extracted from 3T3-L1 cells transiently transfected with empty vector or Angptl1 and cultured with or without adipogenic medium (10 μg/ml insulin, 500 μM 3-isobutyl-1-methyl-xanthine and 1 nM dexamethasone) for 6 days. Then, qRT-PCR for PPARγ, aP2, Angptl1 or β-actin was performed. Data are expressed relative to the β-actin mRNA value. Data represent mean ± SEM (n = 6 in each group).
*p < 0.05, **p < 0.01 by Tukey-Kramer test. b 3T3-L1 cells transiently transfected with empty vector or Angptl1 were cultured with or without adipogenic medium for 6 days. Then, the cells were stained with Oil Red O. Representative images are shown. Scale bar, 50 μm. At least three independent experiments were performed. Further in vivo studies using Angptl1-deleted mice are required to clarify the roles of Angptl1 in bone metabolism in the future.
5,129.6
2020-12-14T00:00:00.000
[ "Medicine", "Biology" ]
Quantum nonlinear spectroscopy via correlations of weak Faraday-rotation measurements The correlations of fluctuations are key to studying fundamental quantum physics and quantum many-body dynamics. They are also useful information for understanding and combating decoherence in quantum technology. Nonlinear spectroscopy and noise spectroscopy are powerful tools to characterize fluctuations, but they can access only very few among the many types of higher-order correlations. A systematic quantum sensing approach, called quantum nonlinear spectroscopy (QNS), was recently proposed for extracting arbitrary types and orders of time-ordered correlations, using sequential weak measurement via a spin quantum sensor. However, the requirement of a central spin as the quantum sensor limits the versatility of QNS, since usually a central spin interacts only with a small number of particles in proximity and the measurement of single spins needs stringent conditions. Here we propose to employ the polarization (a pseudo-spin) of a coherent light beam as a quantum sensor for QNS. After interacting with a target system (such as a transparent magnetic material), the small Faraday rotation of the linearly polarized light can be measured, which constitutes a weak measurement of the magnetization in the target system. Using a Mach-Zehnder interferometer with a designed phase shift, one can post-select the effects of the light-material interaction to be either a quantum evolution or a quantum measurement of the material magnetization. This way, the correlated difference photon counts of a certain number of measurement shots, each with a designated interference phase, can be made proportional to a certain type and order of correlations of the magnetic fluctuations in the material. The analysis of the signal-to-noise ratios shows that the second-order correlations are detectable in general under realistic conditions and that higher-order correlations are significant when the correlation lengths of the fluctuations are comparable to the laser
spot size (such as in systems near the critical points). Since the photon sensor can interact simultaneously with many particles and the interferometry is a standard technique, this protocol of QNS is advantageous for studying quantum many-body systems. I. INTRODUCTION The correlations of fluctuations are key to studying the foundations of quantum mechanics (such as via the Bell inequalities [1,2] and the Leggett-Garg inequalities [3]) and the dynamics and thermodynamics of quantum many-body systems [4,5]. They are also useful information for characterizing the noises and environments in quantum technology [6][7][8][9]. The second-order correlations have been widely studied, such as in linear optics, transport, thermal response, magnetic response, and neutron scattering. Higher-order correlations are usually ignored under the assumption that they are weaker than second-order correlations, or that they can be factorized into second-order correlations by Wick's theorem when the fluctuations are Gaussian (which is often the case in macroscopic systems far away from critical points) [9]. However, there are important scenarios where the irreducible higher-order correlations are significant. One such case is the critical fluctuations near phase transitions [10,11]. Recent studies also show the importance of high-order correlations in quantum sensing [12][13][14], in mesoscopic quantum systems [15][16][17], and in spin systems under external noises [18]. The characterization of higher-order correlations is thus desirable [19][20][21]. There is a rich structure in the higher-order correlations of quantum fluctuations [22]. Corresponding to different orderings of quantum operators, there are in general $K!$ independent correlations among $K$ different quantities $\hat B_1, \hat B_2, \ldots, \hat B_K$, given by, e.g., $\langle \hat B_{\sigma(1)} \hat B_{\sigma(2)} \cdots \hat B_{\sigma(K)} \rangle$ (with $\sigma$ denoting an element of the permutation group $S_K$ and $\langle \cdots \rangle$ denoting the average over the system density operator). For $K$ quantities at different times (specifically, $\hat B_1, \hat B_2, \ldots, \hat B_K$ at $t_1, t_2, \ldots, t_K$ correspondingly, with $t_1 \le t_2 \le \cdots \le t_K$), most of these correlations are the so-called out-of-time-order correlators (OTOCs), which do not directly affect the dynamics of the system. There are $2^{K-1}$ independent time-ordered correlations, corresponding to different arrangements of the operators $\hat B_k$ on the two branches of the contour evolution (i.e., on the left or right side of the system density operator $\rho$). The case of $K = 2$ has been well studied in the form of Keldysh-Kadanoff-Baym Green's functions [23,24]. The time-ordered higher-order correlations can be formulated in an intriguing basis using the commutators or anti-commutators between the operators. We introduce the superoperators $\mathcal B^{\pm}_k$ at time $t_k$, with $\mathcal B^{+}_k$ corresponding to an anticommutator, defined as $\mathcal B^{+}_k \rho \equiv (\hat B_k \rho + \rho \hat B_k)/2$, and $\mathcal B^{-}_k$ corresponding to a commutator, defined as $\mathcal B^{-}_k \rho \equiv (\hat B_k \rho - \rho \hat B_k)/i$. The $2^{K-1}$ types of time-ordered $K$th-order correlations are $C^{\eta_1 \eta_2 \cdots \eta_K} = \mathrm{Tr}\big[\mathcal B^{\eta_K}_K \cdots \mathcal B^{\eta_2}_2 \mathcal B^{\eta_1}_1 \rho\big]$, where $\rho$ is the initial state of the system and $\eta_k \in \{+, -\}$. Note that if $\eta_K = -$, the correlation always vanishes since the trace of a commutator is zero. For $\eta_K = +$, the correlations can also be written as $C^{\eta_1 \cdots \eta_{K-1} +} = \mathrm{Tr}\big[\hat B_K \, \mathcal B^{\eta_{K-1}}_{K-1} \cdots \mathcal B^{\eta_1}_1 \rho\big]$. This rich structure of correlations is important in the dynamics of quantum systems [20] but has been largely unexplored due to the limitations of conventional spectroscopy [5,12,18,[25][26][27][28].
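To make the notation concrete, a worked second-order (K = 2) example is given below. It follows directly from the superoperator definitions above and is a standard identity rather than an equation quoted from this paper, with the operators understood in the interaction (Heisenberg) picture.

```latex
% K = 2: the two independent time-ordered correlations
% C^{\eta_1 \eta_2} = Tr[ B^{\eta_2}_2 B^{\eta_1}_1 \rho ] reduce to familiar objects:
\begin{align}
  C^{++}(t_1, t_2) &= \tfrac{1}{2}\big\langle \{\hat{B}(t_2), \hat{B}(t_1)\} \big\rangle
  && \text{(symmetrized noise correlation)} \\
  C^{-+}(t_1, t_2) &= \tfrac{1}{i}\big\langle [\hat{B}(t_2), \hat{B}(t_1)] \big\rangle
  && \text{(commutator, linear-response type)}
\end{align}
```

Conventional noise spectroscopy accesses the first of these and linear response theory accesses the second; the point of QNS is that all $2^{K-1}$ such combinations become accessible at every order K.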
Conventional nonlinear spectroscopy [25], using classical probes, measures only one of the $2^{K-1}$ types of correlations, namely $C^{--\cdots-+}$, corresponding to the $(K-1)$-th order nonlinear susceptibility, while noise spectroscopy usually measures only $C^{++\cdots+}$ [5,12,18,[25][26][27][28]. It was recently proposed that arbitrary types and orders of time-ordered correlations can be extracted by a systematic quantum sensing approach, coined quantum nonlinear spectroscopy (QNS), using the correlations of sequential weak measurements [29] via a spin quantum sensor [19]. This scheme has been adopted in experiments to single out quantum signals from classical noises [14,30]. The quantum sensor interacts with the target system in a sequence of short periods, and the projective measurement of the sensor constitutes a weak measurement of the target. In the interaction picture, the coupled sensor-target system evolves by the quantum Liouville equation $\partial_t \rho = -i[\hat V, \rho]$ (hereafter the Planck constant $\hbar$ is set to unity), where $\hat V = \hat S \otimes \hat B$, with $\hat S$ being a sensor operator and $\hat B$ a target operator. For a small period of interaction $\tau$, starting from an initial state $\rho = \rho_s \otimes \rho_B$ (with $\rho_{s/B}$ denoting the sensor/target state), the coupled system evolves to $\rho(t_j + \tau) \approx \rho_s \otimes \rho_B + \tau\, \mathcal S^{+}\rho_s \otimes \mathcal B^{-}_j \rho_B + \tau\, \mathcal S^{-}\rho_s \otimes \mathcal B^{+}_j \rho_B$ (1). The physical meanings of the two interaction-induced terms on the r.h.s. of Eq. (1) are clear: 1. Considering $\mathrm{Tr}\,\mathcal S^{+}\rho_s = \langle \hat S \rangle$, the first term corresponds to the target evolving (indicated by the commutator $\mathcal B^{-}_j \rho_B$) under a quantum force $\hat S$ from the sensor. 2. Similarly, the second term is the evolution of the sensor under the quantum force $\hat B$ from the target, that is, the back-action. After the interaction, the measurement of a physical quantity $\Lambda_j$ of the quantum sensor at time $t_j$ leads to a reduced density matrix of the target of the form $\rho_B \propto \mathrm{Tr}_s[\hat\Lambda_j \rho_s]\, \rho_B + \tau\, \mathrm{Tr}_s[\hat\Lambda_j \mathcal S^{+}\rho_s]\, \mathcal B^{-}_j \rho_B + \tau\, \mathrm{Tr}_s[\hat\Lambda_j \mathcal S^{-}\rho_s]\, \mathcal B^{+}_j \rho_B$. The correlation of the physical quantities $\{\Lambda_1, \Lambda_2, \ldots, \Lambda_K\}$ measured at times $t_1, t_2, \ldots, t_K$ can then be written as the trace of the sequential application of such measurement-conditioned superoperators on $\rho_B$. The initial state and measurement basis of the sensor can be chosen for the pre- and post-selection of the effect of the sensor-target interaction in each measurement period. For example, by choosing the initial state of the sensor $\rho_{s,j}$ and the sensor quantity to be measured $\Lambda_j$, one can make $\mathrm{Tr}_s[\hat\Lambda_j \rho_{s,j}] = 0$ and $\mathrm{Tr}_s[\hat\Lambda_j \mathcal S^{-}\rho_{s,j}] = 0$, so that the weak measurement yields an evolution of the target, that is, $\mathcal B^{-}\rho_B(t_j)$. This way, the QNS realized by sequential weak measurement via a quantum sensor can extract arbitrary types and orders of the time-ordered correlations in a quantum system. However, the requirement of a central spin as the quantum sensor limits the versatility of the QNS, since usually a central spin interacts only with a small number of particles in proximity and the measurement of single spins demands stringent conditions. The polarization (a pseudo-spin) of a coherent light beam can be an alternative to a central spin as a quantum sensor [12,31]. The light beam as a quantum sensor has the advantage of a large coherence length (so that it can interact with a quantum many-body system) and is easy to prepare, control and detect. The pseudo-spin of light has been routinely used in noise spectroscopy of spin systems [26,32].
In this paper, we present a scheme of QNS using the polarization of a coherent light beam as a quantum sensor. The interaction with the target system (such as spins in a transparent material) results in a small Faraday rotation of the linearly polarized light. The measurement of the Faraday rotation constitutes a weak measurement of the spins. The Faraday rotation can be measured with a Mach-Zehnder interferometer with a phase shift chosen such that the effect of the light-material interaction is selected to be either a quantum evolution or a quantum measurement of the magnetization of the spins (corresponding to a commutator or anti-commutator between the magnetization and the density operator of the spins). As a result, the correlated photon counts of a sequence of measurement shots, each with a designated phase shift, can be made proportional to a designated type and order of correlation of the spin fluctuations. Analysis of the signal-to-noise ratio for the measurement of correlations shows that the scheme is feasible under realistic conditions. II. SCHEME A. Basic idea We employ the Mach-Zehnder interferometer, in which the polarization of a light beam can be measured in a designated basis by choosing a certain relative phase shift between the two arms (see Fig. 1). A sequence of linearly polarized laser pulses, described by the coherent state |α, H⟩⟨α, H| (with H/V denoting horizontal/vertical polarization), is applied to a target system. The light polarization acts as the quantum sensor, coupled to a spin system (the target). The interaction with the spins leads to a Faraday rotation of the light polarization. After the interaction, the light beam is separated into two arms by a polarizing beam splitter (PBS), such that the component with horizontal polarization is reflected and the component with vertical polarization is transmitted. A phase shift ϕ is then applied in one arm (e.g., the reflected beam), and the two arms are then mixed through a 1:1 beam splitter (BS). The difference between the photon counts in the two output directions is recorded to measure the light polarization, in a basis that is selected by choosing the phase shift ϕ. As the light polarization is entangled with the target spins, the measurement of the light polarization constitutes an indirect measurement of the target spins. When the interaction between the light polarization and the spins is weak, i.e., the Faraday rotation is small, the indirect measurement of the spins is a weak one. The correlation among a sequence of difference photon counts corresponds to a certain correlation in the target spin system. B. Light polarization as a pseudo-spin-1/2 sensor We can define a pseudo-spin-1/2 using Jones vectors. In this formalism, we write the horizontal (H) and vertical (V) linear polarizations, the diagonal (D) and anti-diagonal (A) linear polarizations,
and the two circular polarizations as the corresponding Jones vectors. In analogy with a spin-1/2, we define three photon pseudo-spin operators $\hat S_1$, $\hat S_2$ and $\hat S_3$ in terms of the photon-number differences in the H/V, D/A and R/L bases, respectively, where $\hat a_\alpha$ is the photon annihilation operator for the polarization α and $\hat n_\alpha \equiv \hat a^\dagger_\alpha \hat a_\alpha$ is the photon number operator. The pseudo-spin operators satisfy the same commutators as the spin operators, namely $[\hat S_a, \hat S_b] = i\epsilon_{abc}\hat S_c$ (5). But they do not have the same anti-commutators as a spin-1/2. C. Single shot of measurement Let us first consider a single shot of measurement. In the interaction picture, the evolution of the coupled system of photons and spins is determined by the quantum Liouville equation $\partial_t\rho = -i[\hat V, \rho]$, with the spin-photon interaction $\hat V = \hat S_3 \otimes \hat B$, where the target operator $\hat B$ is an effective field proportional to the magnetization of the spins along the light propagation direction. The magnetization induces opposite frequency shifts for the left and right circularly polarized photons and hence opposite phase shifts, which causes the Faraday rotation of the linear polarization. We assume the pulse duration τ is small, that is, the Faraday rotation angle $\|\hat B\tau\| \ll 1$ and $\hat B(t) \approx \hat B(t+\tau)$. After the interaction for a short period of time τ, starting from the initial state $\rho(t) = \rho_s \otimes \rho_B$, the coupled system evolves to the state $\rho(t+\tau) \approx \rho_s \otimes \rho_B + \tau\,\mathcal S^{+}_3\rho_s \otimes \mathcal B^{-}\rho_B + \tau\,\mathcal S^{-}_3\rho_s \otimes \mathcal B^{+}\rho_B$ (8). The physical meaning of the term $\tau\,\mathcal S^{-}_3\rho_s \otimes \mathcal B^{+}\rho_B$ in Eq. (8) is the pseudo-spin rotating about the 3rd axis under the field $\hat B$ from the target. To select this effect, we can prepare the pseudo-spin initially in an eigenstate of $\hat S_1$ and then measure the photon polarization in the basis of $\hat S_2$, that is, in the basis of the linear polarizations D and A. Indeed, if the sensor initial state is chosen as $\rho_s(t_j) = |\alpha, H\rangle\langle\alpha, H|$ (with $|\alpha, H\rangle \equiv \exp(\alpha\hat a^\dagger_H - \alpha^*\hat a_H)|\mathrm{vacuum}\rangle$) and the observable to be measured as $\Lambda_j = \hat S_2$, we have $\mathrm{Tr}_s[\hat S_2\rho_s(t_j)] = 0$, $\mathrm{Tr}_s[\hat S_2\,\mathcal S^{+}_3\rho_s(t_j)] = \tfrac12\langle\{\hat S_2, \hat S_3\}\rangle = 0$, and $\mathrm{Tr}_s[\hat S_2\,\mathcal S^{-}_3\rho_s(t_j)] = \tfrac1i\langle[\hat S_2, \hat S_3]\rangle = \tfrac12|\alpha|^2$. Therefore, the measurement of $\hat S_2$ induces a superoperator on the bath proportional to $\mathcal B^{+}\rho_B$ (9). The physical meaning of the other term $\tau\,\mathcal S^{+}_3\rho_s \otimes \mathcal B^{-}\rho_B$ in Eq. (8) is the quantum force $\hat S_3$ from the sensor driving the evolution of the target. To select this effect, we should measure the pseudo-spin component $\hat S_3$. Using the same initial state $\rho_s(t_j) = |\alpha, H\rangle\langle\alpha, H|$ and algebra similar to the derivation of Eq. (9), we get $\mathrm{Tr}_s[\hat S_3\rho_s(t_j)] = 0$ and $\mathrm{Tr}_s[\hat S_3\,\mathcal S^{-}_3\rho_s(t_j)] = 0$, while $\mathrm{Tr}_s[\hat S_3\,\mathcal S^{+}_3\rho_s(t_j)] = \langle\hat S_3^2\rangle \neq 0$, so the measurement of $\hat S_3$ induces a superoperator on the bath proportional to $\mathcal B^{-}\rho_B$ (10). Using the scheme illustrated in Fig. 1, we can measure either $\hat S_2$ or $\hat S_3$ by designing the phase shift in the reflected path. Interference occurs between the transmitted path and the reflected path. The 1:1 BS transforms the input photon operators $\hat a$ and $\hat b$ into the output operators $\hat c$ and $\hat d$ (the standard relations are sketched after this subsection). For the measurement of $\hat S_3$, we set the phase shift such that $e^{i\phi} = 1$, and therefore the input annihilation operators are $\hat a = \hat a_H$ and $\hat b = \hat a_V$. The outputs then correspond to the two circular-polarization modes, and the difference photon count $\Lambda \equiv \lambda_d - \lambda_c$ shown in Fig. 1 becomes $\hat a^\dagger_R\hat a_R - \hat a^\dagger_L\hat a_L \equiv \hat S_3$. Similarly, for the measurement of $\hat S_2$, we set the phase shift such that $e^{i\phi} = i$, so that the input annihilation operators are $\hat a = \hat a_H$ and $\hat b = i\hat a_V$; the outputs then correspond to the D and A modes, and the difference count measures $\hat S_2$.
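The beam-splitter relations referenced above are not reproduced in the text, so the block below sketches one standard convention (a symmetric 1:1 beam splitter) that reproduces the stated outcomes. The specific signs and phases are assumptions for illustration, not equations taken from the paper.

```latex
% Assumed conventions: Jones vectors and a symmetric 1:1 beam splitter.
\begin{align}
  |D/A\rangle &= \tfrac{1}{\sqrt{2}}\big(|H\rangle \pm |V\rangle\big), \qquad
  |R/L\rangle = \tfrac{1}{\sqrt{2}}\big(|H\rangle \mp i\,|V\rangle\big), \\
  \hat{c} &= \tfrac{1}{\sqrt{2}}\big(\hat{a} + i\,\hat{b}\big), \qquad\;\;
  \hat{d} = \tfrac{1}{\sqrt{2}}\big(i\,\hat{a} + \hat{b}\big).
\end{align}
```

With $e^{i\phi} = 1$ (inputs $\hat a = \hat a_H$, $\hat b = \hat a_V$) the outputs are the circular modes $(\hat a_H \pm i\hat a_V)/\sqrt{2}$, so the difference count gives $\hat S_3$; with $e^{i\phi} = i$ (inputs $\hat a = \hat a_H$, $\hat b = i\hat a_V$) the outputs are the D/A modes $(\hat a_H \pm \hat a_V)/\sqrt{2}$, so the difference count gives $\hat S_2$, consistent with the text.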
D. Correlations of sequential weak measurements The correlation of a sequence of K shots of measurement is the correlated difference photon count $G^{(K)} \equiv \langle\Lambda_1\Lambda_2\cdots\Lambda_K\rangle$. Using Eqs. (9) and (10), we find that the correlated photon count difference $G^{(K)}$ is proportional to a certain type of correlation in the target system, $G^{(K)} \propto C^{\eta_1\eta_2\cdots\eta_K}$, where $\eta_j = +$ or $-$ if the $j$th shot of measurement is chosen as $\Lambda_j = \hat S_2$ or $\hat S_3$, respectively. For example, choosing $\Lambda_2 = \hat S_2$ and $\Lambda_1 = \hat S_3$, we obtain $G^{(2)} \propto C^{-+}$, and choosing $\Lambda_4 = \hat S_2$, $\Lambda_3 = \hat S_3$, $\Lambda_2 = \hat S_3$ and $\Lambda_1 = \hat S_2$, we get $G^{(4)} \propto C^{+--+}$. This way we can obtain arbitrary types and orders of correlations just by recording the correlated photon count differences from the interferometer. III. SIGNAL-TO-NOISE RATIO We calculate the signal-to-noise ratio (SNR). We consider the shot noise [33] of the photon counts and neglect technical noises such as laser fluctuations and imperfections in beam splitting, phase shifting, and photon detection, since these factors depend strongly on the specific systems employed in experiments. The SNR is calculated for L repeated sequences of K shots of measurement for obtaining a K-th order correlation $G^{(K)}$. A. SNR for first-order correlations The signal ⟨S⟩ for a first-order correlation $G^{(1)}$ is the average difference photon count between the two outputs. The variance of the signal is $\langle S^2\rangle - \langle S\rangle^2$; since $\langle S\rangle^2$ is much smaller than $\langle S^2\rangle$, the variance can be simplified as $\langle S^2\rangle$. Using Eq. (8) and keeping the leading-order terms, we get $\langle S^2\rangle \approx |\alpha|^2$. Therefore, the SNR for first-order correlations in the shot-noise limit is $\mathrm{SNR} \approx \sqrt{L}\,G^{(1)}/|\alpha|$. B. SNR for K-th order correlations Similarly, the signal for measuring a K-th order correlation given by Eq. (12) is $\langle S\rangle = G^{(K)}$. In the leading order, the variance is $\prod_{k=1}^{K}\langle\Lambda_k^2\rangle$, where the subscript in $\Lambda_k$ labels the shot of measurement. By straightforward calculation, we get $\langle\Lambda_k^2\rangle \approx |\alpha|^2$ in the leading order. Thus, the variance of measuring a K-th order correlation is approximately $|\alpha|^{2K}$. Therefore, the SNR for a K-th order correlation is $\mathrm{SNR} \approx \sqrt{L}\,G^{(K)}/|\alpha|^{K}$. C. Estimation of the signal strength We consider a target system containing an ensemble of spins. We assume that the laser pulse length is much greater than the thickness of the sample (D) and that the laser pulse duration τ is much shorter than the timescale of the spin dynamics. In this case, all the spins $\{\hat J_i\,|\,i = 1, 2, \ldots, N_s\}$ within the laser spot interact simultaneously with the photons. The Faraday rotation θ(t) is proportional to the magnetization $\hat J(t) \equiv N_s^{-1}\sum_i \hat J^z_i$ along the light propagation direction (the z-axis) [31], with the proportionality set by a coupling constant g. The K-th order correlation of the photon counts is accordingly proportional to the K-th order correlation of the spin magnetization. When the spins are uncorrelated, each spin contributes to the correlation independently, and therefore the K-th order correlation of the $N_s$ spins is suppressed by powers of $1/N_s$. The resulting SNR depends on $N_\mathrm{ph} = |\alpha|^2$, the average number of photons per laser pulse, on the spin density $n_s$ of the target system, and on the area A of the laser spot on the sample. To ensure that the higher-order correlations (for K > 2) are observable, the laser should be strong enough and the light beam should be well focused. To evaluate the feasibility of the measurement, we consider LiHoF$_4$ [34] as a specific material system. This material is a transparent Ising spin system with $\langle(\hat J^z_i)^2\rangle = 8^2$ and presents interesting quantum phase transitions of magnetization [35]. Using the data in Ref.
[34], the coupling constant for a laser with a wavelength of about 435 nm (near-resonant with the band gap of the material) is g ≈ 20 rad/cm. The density of spins in LiHoF$_4$ is $n_s \approx 1.39\times10^{28}$ cm$^{-3}$. Considering a laser pulse with a focus size of ∼$10^{-4}$ cm, a number of photons $n_\mathrm{ph} \sim 10^{14}$, and a sample of 1 cm thickness, we estimate the SNR. To get SNR > 1, the number of measurement shots should be $L \gtrsim 10^{5}$ for measuring the second-order correlations, which is quite feasible. For measuring the fourth-order correlations (K = 4), however, the number of measurement shots would have to be $L > 10^{49}$, which is impossible for any practical experiment. This difficulty in measuring the higher-order correlations arises from the fact that uncorrelated or non-interacting spin ensembles typically exhibit negligible higher-order correlations. An interesting scenario is that the higher-order correlations in interacting spin ensembles become significant near the critical points, where the spins fluctuate collectively with a diverging correlation length ξ. The spins within a region of size ξ can be taken as a large spin (∼ $n_s\xi^3 J$). Thus, considering the collective fluctuations, the SNR for measuring the correlations is correspondingly enhanced. When the system is close to the critical point, such that the correlation length is comparable to the size of the laser focus spot, the measurement of higher-order correlations becomes feasible. Thus, the scheme presented in this paper provides an approach to studying the effects of higher-order correlations in interacting spin systems near the critical point. IV. CONCLUSION In summary, we propose an approach to selectively extracting arbitrary types and orders of quantum correlations via weak Faraday-rotation measurements. Unlike the single-spin sensors used in existing schemes [13,14,19,30,36], which interact with only a few spins in proximity, the quantum sensor employed in this scheme, namely the optical polarization as a pseudo-spin, can couple uniformly to a quantum many-body system. We have analysed the signal-to-noise ratio to show the feasibility of the scheme. While the detection of second-order correlations of fluctuations suffices to characterize quantum many-body systems far away from critical points, the higher-order correlations could be important near the critical points. When the systems approach the critical points, the collective fluctuations have diverging correlation lengths. The optical detection of higher-order correlations is feasible when the correlation lengths are comparable to the focus size of the laser beam. The scheme proposed in this paper belongs to the category of quantum nonlinear spectroscopy [14] for its capability of extracting the rich structure of higher-order quantum correlations, which are inaccessible to conventional nonlinear optical spectroscopy [25]. Quantum nonlinear spectroscopy is not only a new tool for studying quantum many-body physics, but also offers new approaches to quantum science and technology, such as the study of quantum foundations (higher-order Leggett-Garg inequalities [3] and Bell inequalities [1,2]), optimal quantum control [6,7,37,38], and quantum sensing [13,14,30].
FIG. 1. Schematic of a Mach-Zehnder interferometer for detecting arbitrary correlations of a quantum spin system using a polarized light beam sensor. The difference photon count Λ is detected to measure the polarization of the light after the interaction with the target system. A designated phase shift ϕ is induced by a variable optical delay to select the basis of polarization measurement.
5,366.2
2023-09-01T00:00:00.000
[ "Physics" ]
Metallic oxide–graphene composites as a catalyst for gas-phase oxidation of benzyl alcohol to benzaldehyde A new catalyst (FAG) composed of Fe, Al, and graphene (G) was prepared for catalyzing the reaction of benzyl alcohol (BA) to benzaldehyde (BD). The catalyst was characterized by XRD, XPS, SEM, and BET analyses. The catalytic performance of FAG for oxidation of BA to BD was studied by comparing with the catalysts ferrous oxide (Fe), ferrous oxide doped with aluminum (FA), and ferrous oxide doped with graphene (FG). The effects of amount of graphene additive, temperature, and ratio of Fe and Al were also studied. The results show that the catalyst FAG has a great specific surface area of 80.40 m2 g−1 and an excellent catalytic performance for the reaction of BA to BD: the conversion of BA and yield of BD significantly increased, and the selectivity of BD reached 87.38%. Introduction Selective oxidation of alcohols such as benzyl alcohol (BA) to the corresponding aldehydes involving the cleavage of O-H bonds followed by α-C-H bond activation is a typical process in fine chemicals industry. In the past several years, several catalysts have been used for the selective oxidation of alcohols, such as noble metal coordination compounds (Pd, Ru, Au, Pt) [1][2][3], non-noble metal oxides [4,5], organometallic compounds [6,7], inorganic oxidants (manganese dioxide, Sarett reagent, Jones reagent) [8,9], heteropolyacid catalysts [10,11], and ionic liquids [12]. The oxidation activity of catalysts of these noble metal nanoparticles is higher for catalyzing BA. The metal nanoparticles have particular chemical and photoelectric properties, but they can be easily combined to form a big particle due to the electrostatic interaction between the particles. Thus, the exposure level of active site decreases, and its activity also decreases. Because of the size effect, aggregation of metal particles during reaction, and decrease in the exposure level of active site, the stability and life of catalyst is not satisfactory. Loading a noble metal to a carrier, such as loading Ru to TiO 2 , ZrO 2 , and carbon [13,14] and loading Au to Al 2 O 3 , SiO 2 , and TiO 2 [15,16], can increase the metal particle dispersity, prevent the carbon deposition and favor the contact of reactant with the active site, and further increase the stability and catalytic activity of catalyst. However, noble metal particle catalysts are expensive and suffer from low utilization and difficulty in recycling. Over the years, many efforts have been made to prepare non-noble metal catalysts [6,17]. In the past decades, carbon-based materials including graphene, carbon nanotubes, and nanodiamonds have attracted much attention because of their high specific surface area, high conductivity, feasible surface modification, and eco-friendly continuous large-scale preparation [18][19][20][21][22]. Among them, graphene with a hexagonal honeycomb structure, two-dimensional (2D) nanocarbon material stacked from carbon atoms by sp 2 hybridization has a unique conjugate electronic structure, i.e., πelectronic structure, that promotes strong interactions with various reactants, especially compounds with a benzene ring [23]. Graphene, a carbon-based material with a single layer of sp 2 -bonded carbon atoms arranged in a hexagonal honeycomb structure, mainly contains a graphite skeleton, defects, and oxygen-containing groups. The oxygencontaining groups can be a weak acid, and it can catalyze acid-base reactions. 
The O in the oxygen-containing group acts as an intermediate oxygen species and catalyzes the reaction with a stronger oxidizing ability. The defects in graphene not only activates the oxygen molecules, but also decomposes the reducing agent, such as hydrazine hydrate [23]; thus, they can catalyze simple reduction reactions. Yang et al. [24] used graphene as a catalyst and aqueous hydrogen peroxide solution as an oxidizing agent to catalyze the oxidation of benzene to phenol. The results show that graphene has good selectivity of phenol, i.e., 99%, and the conversion of benzene is 18%. The catalyst has a good lifetime and does not deactivate even after seven times of circulation. In the past few years, graphene and doped and modified graphene have been used as a nonmetal catalyst to catalyze the oxidation of BA to benzaldehyde (BD), thus oxidizing BA to BD with high selectivity. Wang et al. [25] used nitrogen-doped graphene to catalyze the oxidation of BA to BD at 80 °C temperature and 0.1 MPa pressure, achieving a BA conversion of 12.8% and BD selectivity of up to 100%. They proposed that the graphite type nitrogen (sp 2 N) is the active site of catalytic reaction. Dynamic study shows that when nitrogen-doped graphene is used as a catalyst, the activation energy of reaction is 56.1 kJ mol −1 , close to one of the traditional noble metal loaded catalyst, 51.4 kJ mol −1 . Cheng et al. [26] used boron-doped graphene as a metal-free catalyst in the gasphase oxidation of BA to BD and achieved excellent performance with selectivity of BD of up to 99.2%. Using graphene as a carrier to form a composite with metal particles, the combination of metal nanoparticles can be decreased. The nanoparticles can decrease the interaction between graphene layers. The metal matrix particles used in the composites of metal nanoparticles with graphene are mainly noble metals Pt and Au, and some transition metals and their oxides. The composites showed excellent performance compared to nanocomposite metal particles. Huang et al. [27] prepared a Pt-graphene composite using a soft chemical method. This method is simple, and the graphene structure is not damaged. The composite exhibited excellent electrochemical activity and good toxic tolerance. Zheng et al. [28] prepared a Ag-graphene composite where the Ag particles were uniformly dispersed on graphene, and a capacitance electricity of up to 220 F g −1 was achieved. The Ag-graphene composite has a good application prospect in the field of electrode energy storage and conversion. Jiang et al. [29] prepared a Fe-graphene nanomaterial using supercritical CO 2 fluid method. The particle diameter is 2 nm. and the Fe nanoparticles were dispersed on graphene with a high degree of dispersion. The Fe-graphene nanomaterial was used as a catalyst in dehydrogenation reaction of lithium aluminum tetrahydride and exhibited good catalytic activity. Without changing the reaction path, the dehydrogenation temperature of Fe-graphene composite was about 40 ℃ lower than before, and hydrogen with 4.5% mass fraction can be completely decomposed in 6 min at 100 °C. Moma et al. [30] synthesized Al/ Fe pillared clay from natural bentonite clay and used as a catalyst in the catalytic wet-air oxidation of phenol. It was found to be highly active and stable with superior properties such as surface areas, basal spacing, high porosity, and thermal stability. In this study, an Al-Fe composite doped with graphene (FAG) was evaluated for the gas-phase oxidation of BA to BD. 
The catalyst was characterized by XRD, XPS, SEM, and BET analyses, and the effects of additive amount of graphene, Al/Fe ratio, and preparation conditions temperature, pressure, and space velocity were evaluated. Preparation of catalyst Graphite oxide (GO, 96.0%, mass fraction) was supplied by Jining Leader nano Technology Co., LTD., Shandong, China. GO was prepared from natural graphite using a modified Hummers method [31,32]. The composites of ferric oxide with graphene (FG) and Fe-Al oxide with graphene (FAG) were synthesized using supercritical (subcritical) fluid method in an autoclave at 300, 350, and 400 °C for 1 h. In a supercritical solvent with high super saturation, uniform conditions were rapidly attained, and highly crystallized nano-sized oxide particles were obtained. This technique has the advantages of simple and short time for preparation, and it has been extensively used to successfully synthesize various metal oxides with fine control over chemical and phase compositions, size, morphology, and structure [33]. In a typical procedure for FAG preparation, GO was added to alcohol in an ultrasonic cleaner for 1 h while the GO was dispersed in alcohol uniformly. Then, ferric nitrate and aluminum nitrate were added, and the mixture was stirred for 2 h while they mixed well and then placed in an autoclave at a set temperature for 1 h. The obtained products were then washed with alcohol and water to eliminate the unreacted materials while the products had no color. Finally, the products were dried at 60 °C for 24 h. Analytical equipment A D8 Advance X-ray generator (Bruker AXS Company, Germany) with Cu Kα1 radiation was used to identify the structure and purity of products, The X-ray intensity was measured over a 2θ diffraction angle from 10° to 90° with a step size of 0.02°/min. An S-4800 scanning electron microscope (SEM) (Hitachi Ltd., Japan) and S-Twin transmission electron microscope (TEM) (TECNAI G 2 F20, FEI Company, USA) were used to observe the morphologies of catalysts. The SEM and TEM images were obtained at an accelerating voltage of 10 and 200 kV, respectively. The surface area of catalyst was analyzed using an Auto sorb-iQ system (Quanta chrome, USA) using Brunauer-Emmett-Teller (BET) method and nitrogen adsorption-desorption isotherms at a liquid nitrogen temperature of 77 K. The surface composition of synthesized materials was analyzed by X-ray photoelectron spectroscopy (XPS) (ESCALAB250, Thermo Electron Corporation, USA). The conversion and selectivity was analyzed by GC7900 gas chromatography (Shanghai Techcomp Instrument Ltd.) and Agilent 1260 liquid chromatography (Agilent Technologies Inc. USA). Measurement of catalytic performance The oxidation performance of catalyst was evaluated experimentally in a gas-phase fixed-bed reactor with 100 mg of each catalyst. The reactor was heated under 40 mL min −1 nitrogen at a heating velocity of 10 K min −1 until the set reaction temperature reached. The feed comprising BA and oxygen (n O2 /n BA = 2) in nitrogen (n N2 / n BA = 6.2) was added to the reactor at a total flow rate of 40 mL min −1 . Oxidation reactions were studied under relatively gentle conditions (493, 523, and 553 K). The liquidphase product obtained after cooling through a cryogenic circulation tank was sampled periodically and analyzed by high-performance liquid chromatography (HPLC, Agilent 1260, USA) equipped with an Agilent Zorbax SB-C18 column. 
Methyl BA was added as the internal standard to the collected liquid product for qualitative and quantitative analyses. The mobile phase was chromatographic-grade acetonitrile and ultrapure water (30:70, 40 °C). The reaction tail gas was monitored using an online gas chromatograph (Techcomp GC7980, China) equipped with a thermal conductivity detector (TCD) and a Porapak-Q column. Blank experiments without the carbon catalyst showed that the reaction conversion was negligible (< 0.2%). Figure 1 shows the XRD patterns of the catalysts, including graphene (G), ferrous oxide (Fe), ferrous oxide doped with graphene (FG), ferrous oxide doped with aluminum (FA), and FAG made from GO, aluminum nitrate, ferric nitrate, and alcohol (Table 2). Pattern (a) shows two broad diffraction peaks at 24.6° (d = 0.34 nm) and 43.0° (d = 0.21 nm) for G. Cheng et al. [26] reported an intense and sharp reflection centered at 11° for GO, corresponding to the c-axis spacing [25]. The characteristic peak of GO disappeared after thermal processing in an Ar atmosphere at 1173 K for 2 h, confirming that most of the oxygen-containing functional groups in GO were detached under the supercritical condition of high temperature and high pressure. The graphene lamellae restacked, and the interlayer spacing was reduced [32,34]. Thus, successful conversion of GO to graphene was achieved [35]. This also indicates that GO cannot be completely and uniformly exfoliated by interlayer expansion along the c-axis direction at such a temperature, leading to the formation of multilayer graphene nanosheets [25]. Characterization of catalysts The Fe catalyst is composed of Fe3O4 and Fe2O3 (Pattern (b)). When G is added, the diffraction peaks do not change, indicating that G does not lead to a change in the phase composition of the metal oxide. The small peak at 24.6° confirms the successful formation of the composite of G with the metal oxide (Pattern (c)). When aluminum was added, no distinct diffraction peaks of aluminum phases appeared (Pattern (d)), but XPS verified that aluminum is present (Fig. 2). When FA was combined with G, Fe3O4 became the main phase with a small amount of Fe2O3, indicating that the phase composition changed under the combined effect of aluminum and G (Pattern (e)). Using the Debye-Scherrer method, the particle diameter of FAG was found to be 16 nm, compared with 21 nm for FA and ~39 nm for both Fe and FG, indicating that FAG decreases the mutual friction between metal particles and also decreases the size of the nanoparticles. The XPS survey spectra show a weak N 1s signal in the three patterns because the samples were exposed to the atmosphere and adsorbed nitrogen from the air (Fig. 2). Figure 3 shows the high-resolution spectra of samples G and FA with an Fe to Al ratio of 2:1 and of the FAG sample with an Fe, Al, and G ratio of 2:1:0.32. Figure 3a shows that most of the oxygen in graphene is detached after treatment under the supercritical condition of high temperature. The oxygen content is only 13.41% (Table 1). The oxygen-containing functional groups mainly exist in three forms, C-O, C=O, and O-C=O (Fig. 3a). The carbon in graphene is mainly combined with the oxygen of the oxygen-containing functional groups. The amount of C-C sp2 in graphene is 37.39% (Table 1). The amounts of Fe and Al in the samples FA and FAG do not differ much after treatment under the supercritical condition. In the sample FAG, most of the carbon corresponds to the graphite in graphene, 30.37%, and ketone carbonyl is the main form of the oxygen-containing functional groups, 6.77% (Table 1).
The main forms of Fe in the sample FA are Fe2O3 and Al2FeO4, and those in the sample FAG are also Fe2O3 and Al2FeO4 (Fig. 3b, e). Al in the FA sample mainly exists in three forms, Al/AlOx, Al2O3, and Al(OH)3, and Al in the FAG sample mainly exists in two forms, Al2O3 and Al/AlOx (Fig. 3c, f). Figure 4 shows the SEM images of G, Fe, FG, FA, and FAG. Figure 4a shows a lamellar structure with mist-like wrinkles on the surface for the G sample obtained under the supercritical condition. Thus, the GO sample can be detached and exfoliated into a lamellar structure [36]. Figure 4b shows that the Fe particles are uniformly distributed and exist in clusters. When Fe is combined with graphene, the particles take the form of cubes and spheres (Fig. 4c). The image of FA shows severe particle agglomeration (Fig. 4d), and the image of FAG shows large blocks and a lamellar structure (Fig. 4e). The BET specific surface area was obtained from the nitrogen adsorption-desorption isotherms measured at a temperature of −196 °C (Fig. 5). In the nitrogen adsorption-desorption isotherms, an obvious hysteresis loop was observed between 0.45 and 0.98 p/p0 (Fig. 5a). Since different scales are used in Fig. 5a, the specific surface areas cannot be compared directly there. Figure 5b shows that the specific surface area of G obtained under the supercritical condition is 30.76 m2 g−1 and that of Fe is 10.85 m2 g−1. When Fe is doped with G, the specific surface area increases to 16.18 m2 g−1 (FG), and that of FA is 41.07 m2 g−1. When FA is doped with G, the specific surface area significantly increases to 80.40 m2 g−1, indicating that the specific surface area can be effectively increased by adding G to FA. When the same amount of G was added to Fe without Al, the specific surface area did not increase much, from 10.85 to 16.18 m2 g−1. When Al is present in the catalyst, the specific surface area increases much more, up to 80.40 m2 g−1. Thus, it is concluded that when Al, Fe, and G coexist, the specific surface area can be significantly increased. Table 2 shows the compositions of the G, Fe, FA, FG, and FAG samples prepared with graphite oxide, aluminum nitrate, and ferric nitrate. Figure 6b shows that the graphene prepared under the supercritical condition has the highest selectivity for BD, 89.7%; however, its conversion ability is low, 16.9% (Fig. 6a). The conversion ability of the Fe catalyst prepared under the supercritical condition is much higher than that of graphene, but its selectivity is low, 40.5%. The selectivity of the FA catalyst is also low. When the catalyst is combined with graphene, the selectivity obviously increases, by about 1.8 times compared with the Fe and FA catalysts. For the FA composite with graphene, the selectivity further increases to 87.38%, approximately equal to that of graphene. The FAG sample has the largest specific surface area among the five catalysts, and the dispersity of the metal oxide particles is increased. The particle diameter of Fe3O4 in this sample is the smallest, so more active sites are exposed during catalysis, leading to an excellent catalytic performance.
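The conversion and selectivity figures quoted above follow from the standard mole-balance definitions; the short sketch below illustrates the calculation. The numerical inputs are made-up placeholders, not data from this study, and the internal-standard calibration that yields the mole amounts is assumed to have been done separately.

```python
def conversion_and_selectivity(n_ba_in, n_ba_out, n_bd_out):
    """Standard mole-balance definitions for benzyl alcohol (BA) oxidation.

    n_ba_in  : moles of BA fed
    n_ba_out : moles of unreacted BA found in the product
    n_bd_out : moles of benzaldehyde (BD) formed
    """
    conversion = (n_ba_in - n_ba_out) / n_ba_in      # fraction of BA converted
    selectivity = n_bd_out / (n_ba_in - n_ba_out)    # fraction of converted BA that became BD
    yield_bd = conversion * selectivity              # overall BD yield
    return conversion, selectivity, yield_bd

# Hypothetical HPLC results (mmol), quantified against the internal standard
x, s, y = conversion_and_selectivity(n_ba_in=10.0, n_ba_out=6.5, n_bd_out=3.0)
print(f"conversion = {x:.1%}, selectivity = {s:.1%}, yield = {y:.1%}")
```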
When oxidants (O 2 , H 2 O 2 (aq) exist in the catalyst, the Al-O-Fe structure increases the reaction rate during oxidation [37]. Table 3 shows the samples made with graphite oxide, aluminum nitrate, and ferric nitrate with a mass ratio of 0.04:1:2, 0.08:1:2, 0.16:1:2, and 0.32:1:2, respectively. As shown Fig. 7, the conversion of BA slightly increased with the additive amount of graphene (Fig. 7a), but the selectivity of BD is obviously increased with the additive amount of graphene (Fig. 7b), where the crystalline phases of the sample are Fe 3 O 4 and G (Fig. 1e). The graphene additive, which has a single layer of sp 2 -bonded carbon atoms arranged in a hexagonal honeycomb structure, contains a graphite skeleton, defects, and oxygen-containing groups, prevents FA to form a big particle between the particles due to the electrostatic interaction, and remains the FA particle dispersity. The O in the oxygen-containing group acts as an intermediate oxygen species and catalyzes the reaction with a stronger oxidizing ability.The oxidation reaction rate of BA mainly depends on two steps: One is the reaction rate controlling step, i.e., oxidation step of BA, and another is the activation of substrate BA and the activation rate of molecular oxygen. Yang et al. [38] studied the activation process using electrochemistry and divided the oxidation reaction into two half-reactions, i.e., electrooxidation of BA and electro-reduction of oxygen, indicating that FeO x can obviously increase the activation ability of oxygen and thus increasing the catalytic activity. This is consistent with our results. Effect of Al/Fe ratio To evaluate the effects of ratio of Al to Fe, four samples were prepared with Al to Fe ratio of 0:1, 1:1, 1:2, and 1:3 ( Table 4). As shown in Fig. 8, both the conversion rate of BA and selectivity of BD increased after the catalysts were doped with aluminum compared to without aluminum (Al/Fe = 0:1). Because of the synergistic effect between Al and Fe, the catalytic ability increased [39]. As shown in Fig. 8b, the selectivity of BD is the best at a 1:2 ratio of Al to Fe, although the value is not stable at 280 °C. The conversion ability of BA at 280 °C is also good (Fig. 8a). Effect of temperature, pressure, and space velocity Two samples with content shown in Table 5 were evaluated at the preparation temperature of 300 and 400 °C. In fact, the effect of temperature is not obvious for the conversion of BA (Fig. 9). The selectivity of BD increased with the catalyst preparation temperature, especially at a reaction temperature of 400 °C. Temperature can make GO completely and uniformly exfoliated by the interlayer expansion along the c-axis direction [25]. The interaction between graphene layers decreased. The oxygen-containing groups is more active, selectivity increased. Under supercritical condition, three samples were prepared by adjusting the amount of absolute ethyl alcohol to control the reaction pressure ( Table 6). As shown in Fig. 10a, the effect of preparation pressure for the conversion of BA is not obvious, but the selectivity of BD obviously increased with the preparation pressure when the pressure is higher than 10 MPa (Fig. 10b). The increase in preparation pressure promotes graphene exfoliation, and the metal particles are uniformly dispersed on graphene, thus increasing the specific area. Three samples were prepared under the same preparation conditions to evaluate the effects of space velocity on catalytic ability (Table 7). As shown in Fig. 
11, in the case of a low space velocity of 3000 mL h−1 g−1, the conversion of BA increased with the reaction temperature, but the selectivity of BD decreased with the reaction temperature, indicating that a low space velocity does not favor BD selectivity at a high reaction temperature. A low space velocity leads to a long contact time between the reaction feed and the catalyst, so that over-oxidation occurs and produces byproducts such as benzoic acid. Thus, the selectivity of BD decreases. For the reactions run at space velocities of 6000 mL h−1 g−1 and 12,000 mL h−1 g−1, the selectivity of BD increased with the reaction temperature, but a high space velocity is not favorable for the conversion of BA or the yield. Conclusions In this study, the physicochemical properties of the FAG catalyst were studied. Using FAG as a catalyst for the oxidation of BA to BD, the catalytic activity for the conversion of BA and the selectivity for BD were also studied. The effects of the additive amount of graphene, the ratio of Al to Fe, the preparation temperature, the preparation pressure, and the space velocity on the FAG catalytic performance were examined. The following conclusions can be drawn from the results of the study. 1. The addition of graphene to the FA catalyst decreases the mutual friction between metal particles, thus decreasing the size of the nanoparticles and increasing the specific surface area of the catalyst. In the FAG sample, most of the carbon is graphite carbon, 30.37% (at.), and ketone carbonyl is the main form of the oxygen-containing groups. 2. With the increase in the additive amount of graphene, the particle size decreased, the dispersion of the metal oxide nanoparticles increased, and the catalyst specific surface area increased. 3. The addition of Al obviously promotes the conversion of BA and the selectivity of BD. The selectivity of BD is affected by the Al to Fe ratio; when the ratio of Al to Fe is 1:2, the selectivity of BD is the best. 4. The preparation temperature has little effect on the catalyst specific surface area; thus, its effect on the conversion of BA is not obvious. The selectivity of BD increases with increasing preparation temperature. 5. With the increase in the preparation pressure, the extent of exfoliation increases and the metal particles are uniformly dispersed on graphene, favoring an increase in the catalyst specific surface area. The preparation pressure does not affect the conversion of BA. When the preparation pressure reaches 13.8 MPa, the selectivity of BD is doubled. 6. A low space velocity leads to a long contact time between the reaction feed and the catalyst and causes over-oxidation, so the selectivity of BD decreases. A higher space velocity is not favorable for the conversion of BA or the yield. Conflict of interest There are no conflicts of interest to declare.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
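A note on the metrics used throughout the catalytic evaluation above: conversion, selectivity, and yield are simple ratios of inlet and outlet molar amounts. The sketch below illustrates that bookkeeping; the function names and the molar values (nominally GC-derived) are illustrative assumptions, not data from this study.

```python
# Illustrative bookkeeping for conversion/selectivity/yield of a partial
# oxidation run (benzyl alcohol BA -> benzaldehyde BD). Values are made up.

def conversion(n_ba_in: float, n_ba_out: float) -> float:
    """Fraction of benzyl alcohol consumed."""
    return (n_ba_in - n_ba_out) / n_ba_in

def selectivity(n_bd_out: float, n_ba_in: float, n_ba_out: float) -> float:
    """Fraction of the converted BA that ended up as benzaldehyde."""
    return n_bd_out / (n_ba_in - n_ba_out)

def yield_bd(n_ba_in: float, n_ba_out: float, n_bd_out: float) -> float:
    """Yield = conversion * selectivity."""
    return conversion(n_ba_in, n_ba_out) * selectivity(n_bd_out, n_ba_in, n_ba_out)

if __name__ == "__main__":
    n_in, n_out, n_bd = 1.00, 0.62, 0.33   # mol, hypothetical inlet/outlet amounts
    print(f"conversion  = {conversion(n_in, n_out):.1%}")
    print(f"selectivity = {selectivity(n_bd, n_in, n_out):.1%}")
    print(f"yield       = {yield_bd(n_in, n_out, n_bd):.1%}")
```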
SYNTHESIS OF Ni(OH)2, SUITABLE FOR SUPERCAPACITOR APPLICATION, BY THE COLD TEMPLATE HOMOGENEOUS PRECIPITATION METHOD

How to Cite: Kovalenko, V., Kotok, V. (2021). Synthesis of Ni(OH)2, suitable for supercapacitor application, by the cold template homogeneous precipitation method. Eastern-European Journal of Enterprise Technologies, 2 (6 (110)), 45–51.

Introduction

Supercapacitors (SC) are modern chemical current sources (CCS). SC are widely used to start various electric motors (for example, in electric vehicles and pumping units) and as starter CCS in cars with internal combustion engines. SC are also used in uninterruptible power supplies for computers, medical equipment, and critical infrastructure facilities. The best characteristics are possessed by hybrid supercapacitors with high charge-discharge rates. As a result, on the Faraday electrode of the hybrid supercapacitor, the electrochemical reaction proceeds on the surface and in the thin surface layer of the active substance particles. Therefore, special requirements are imposed on the active substance [1, 2]. The substance of the Faraday electrode should mainly consist of particles with a high specific surface area, having nano- and submicron sizes. Nickel hydroxide is widely used to make the Faraday electrode of hybrid supercapacitors. Ni(OH)2 is used both individually [3], in the form of a nanoscale [4] or ultrafine powder [5], and in the form of a composite with carbon nanomaterials (graphene oxide [6], carbon nanotubes [7]). A large number of methods for obtaining Ni(OH)2 and layered double nickel hydroxides on its basis have been developed [8].
Chemical precipitation methods include direct synthesis (adding an alkali solution to a Ni2+ salt solution) [9] and reverse synthesis (slowly adding a nickel salt solution to an alkali solution) [10, 11], as well as a high-temperature two-step synthesis [12] or the sol-gel method [13]. For the synthesis of nickel hydroxides, electrochemical methods are also used [14, 15], including continuous production in a slit diaphragm electrolyzer [16, 17]. A template for the synthesis of Ni(OH)2 in an aqueous solution must have certain characteristics: water solubility, chemical affinity for nickel compounds, and the ability to form a 3D matrix in the solution. PEG6000 is used as a water-soluble template [36]. Polyvinyl alcohol (PVA) is a promising template. PVA is used as a porosity-controlling agent in the synthesis of mesoporous alumina [37] and of hydroxyapatite crystals (together with sodium dodecyl sulfate) [38]. PVA is also used in the preparation of MFI zeolite [39], of MgO for removing dyes from wastewater [40], of single-layer [41, 42] and multilayer [43] hydroxide coatings, and of 3D-structured macroporous oxides and hierarchical zeolites [44], as well as to improve the adhesion of coatings on the ITO surface [45]. In [46], the high efficiency of the cellulose ether Culminal C8465 as a template was shown. α-Ni(OH)2 obtained by template homogeneous precipitation exhibits high electrochemical activity in supercapacitors. The main disadvantage is the high energy consumption for maintaining a high synthesis temperature. However, template synthesis has a significant drawback - during synthesis, the template remains in the resulting substance and must be removed. The complexity of template removal led to the emergence of a whole "template-free" direction, i.e., template-free synthesis of ultradispersed systems, for example, the preparation of calcium titanate [47] and zeolite ZSM-5 [48]. Template-free synthesis is used to obtain nanosized Ni(OH)2 [49, 50]. When used in a supercapacitor, the presence of a template in the composition of the active substance can lead to partial blocking of the particle surface and other side effects. From this point of view, the template must be removed as completely as possible from the active substance, in particular from Ni(OH)2. Various methods are used to remove templates and surfactants, including ozone oxidation [51]. In the manufacture of a pasted supercapacitor electrode, a binder is introduced into the composition to prevent shedding of the active mass [29]. But the binder is an electrochemically inert material, which reduces the specific capacity of the active mass. In [52], the negative effect of a high residual amount of template on the supercapacitor properties of Ni(OH)2 was shown. However, another strategy is promising - the use of the template as an internal binder at the stage of forming a pasted-type electrode. The possibility of implementing this strategy was shown in [53]. It is necessary to point out a significant drawback of the homogeneous precipitation method - the need to use high synthesis temperatures (80-95 °C) for the hydrolysis of urea or a similar substance. Heating significantly increases the cost of Ni(OH)2 and increases its crystallinity. Homogeneous precipitation of nickel hydroxide at room temperature is limited by the low rate of hydrolysis of urea (or similar compounds). However, research in this direction has not been carried out.
The aim and objectives of the study The study aims to assess the possibility of performing cold template homogeneous precipitation and to study the characteristics of the resulting Ni(OH) 2 . This will reduce energy consumption during production. To achieve the aim, the following objectives were set: -visually reveal the course of the first proposed longterm synthesis of nickel hydroxide by the method of template homogeneous precipitation at room temperature and qualitatively evaluate the formed Ni(OH) 2 ; -if the process of cold homogeneous precipitation is successful, quantitatively study the structural and electrochemical characteristics of the obtained nickel hydroxide to assess the prospects of the proposed method. For nickel hydroxide, two structural modifications are described [18]: β-form (Ni (OH) 2 , crystal structure of brucite) and α-form (3Ni(OH) 2 •2H 2 O, crystal structure of hydrotalcite). β-Ni(OH) 2 has high cycling stability and is widely used in the positive electrode of alkaline batteries and supercapacitors. α-Ni(OH) 2 , compared to β-Ni(OH) 2 , is much more electrochemically active and can be used more effectively in hybrid supercapacitors. Therefore, the development and optimization of methods for obtaining electrochemically highly active α-Ni(OH) 2 are urgent problems. Literature review and problem statement The method and conditions of synthesis directly determine the micro-and macrostructure of Ni(OH) 2 particles, which determines the electrochemical characteristics of nickel hydroxide. For effective use in supercapacitors, Ni(OH) 2 must have certain characteristics [19], in particular, the α-structure with optimal crystallinity, submicron size [5], and nanosize [4,20] particles. During the formation of nickel hydroxide, the nucleation rate is much higher than the crystal growth rate. Because of this, Ni(OH) 2 is formed by a two-stage mechanism [21]: a very fast first stage of the formation of a primary amorphous particle and a slow second stage of crystallization (aging) of the primary particle. As a result, a hydrophilic matrix precipitate is formed containing a large amount of included mother liquor. During vacuum filtration, the precipitate particles are compressed, and during the subsequent drying, they are sintered. As a result, the particle size increases significantly and the specific surface area decreases. This can be prevented by using two strategies: 1) the use of synthesis methods with a reduced nucleation rate and an increased crystal growth rate; 2) introduction of special substances in the reaction solution to prevent aggregation and primary particle growth. The first strategy is implemented using the homogeneous precipitation method [22]. The essence of the method consists in the formation of OHprecipitating ions directly over the entire volume of the solution as a result of the reaction of thermal hydrolysis of substances containing amino groups (urea [23,24], hexamethylenetetramine [25]). Homogeneous precipitation can be carried out from aqueous solutions, from solutions with mixed solvents [26], or non-aqueous solutions, for example, based on ionic liquids [27]. To obtain ultrafine or nanosized particles, homogeneous precipitation of Ni(OH) 2 is carried out at temperatures up to 150-180 °С. For the same purpose, microwave heating is used [28]. The second strategy involves the use of surfactants [29,30] or templates. In the latter case, template synthesis is carried out, i.e., the formation of a substance in a matrix (template). 
As a rule, this method is used to form coatings, in particular electrochromic Ni(OH)2 films [31], tripolyphosphate coatings [32], or to form an electrode directly on the surface of nickel foam [33]. In this case, the resulting films have a composite structure similar to polymer composites [34], and there is no need to remove the template from them. In [35], the possibility of using water-soluble templates for the preparation of Ni(OH)2 was shown. The most promising approach is a combination of both strategies - template homogeneous precipitation.

1. Method of obtaining the main sample of nickel hydroxide

To synthesize the starting Ni(OH)2, we used the method of template homogeneous precipitation from a 1 M solution of nickel nitrate containing 1 M urea in the presence of the high-molecular-weight soluble template Culminal C8465 at a concentration of 0.5 wt %. In contrast to the method described in [53], the synthesis was carried out at room temperature (20-30 °C) for 6 months. After synthesis, the precipitate was filtered under vacuum, dried at 75 °C for a day, ground, soaked in distilled water for 24 hours, filtered, and re-dried under the same conditions.

2. Study of the characteristics of nickel hydroxide samples

To study the crystal structure of the obtained nickel hydroxide, we used X-ray phase analysis (XPA) with a DRON-3 X-ray diffractometer (Russia). XPA parameters: Co-Kα radiation in the 2θ range of 10-90° with a scan rate of 0.1°/s. Scanning electron microscopy (SEM) was used to study the morphology of the nickel hydroxide particles (electron microscope 106-I (SELMI, Ukraine)). To study the electrochemical characteristics of the synthesized Ni(OH)2, the following methods were used: 1) cyclic voltammetry - the study was carried out in the potential range of 0-500 mV (relative to the SCE) at a sweep rate of 1 mV/s, and when constructing the voltammograms the potentials were recalculated to the NHE (normal hydrogen electrode); 2) galvanostatic charge-discharge cycling in the supercapacitor mode - charge and discharge were performed at i = 10, 20, 40, 80, and 120 mA/cm2, with 5 charge-discharge cycles at each current density. The discharge curves were used to calculate the specific capacities Csp (F/g) and Qsp (mA⋅h/g). Charge-discharge cycling in the supercapacitor mode was carried out twice: primary - for a freshly made electrode, and repeated - for the same electrode after drying and storage for 12 hours. Both methods were carried out in a special electrochemical cell SEC-2 (USSR) using an electronic potentiostat-galvanostat Ellins R-8 (RF). For the measurements, we made binder-free working electrodes (without the introduction of an external binder [52, 53]). The composition of the active mass was nickel hydroxide (84 wt %) and GAK-2 graphite (16 wt %). A 6 M KOH solution was used as the electrolyte, a Ni grid as the counter electrode, and Ag/AgCl (saturated) as the reference electrode.

1. Proceeding of the long-term synthesis of nickel hydroxide

Visual observation confirmed that the first implementation of template homogeneous precipitation at room temperature did proceed. The first particles of nickel hydroxide formed 32 days after the start of the synthesis, which indicates the presence of an induction period. As a result of the 6-month synthesis at room temperature, nickel hydroxide of an unusual type was formed - a fine crystalline precipitate of Ni(OH)2.
It should be noted that, under normal conditions, the synthesized Ni(OH) 2 is a hydrophilic precipitate. 2. Study of the characteristics of the obtained nickel hydroxide The results of X-ray phase analysis (Fig. 1) show that the sample of nickel hydroxide obtained by cold template homogeneous precipitation is α-Ni(OH) 2 with low crystallinity. The cyclic voltammogram (Fig. 3) shows that the obtained sample of nickel hydroxide exhibits the characteristics of α-Ni(OH) 2 , however, the first charge peak is not clearly pronounced. Fig. 4 shows the specific capacities of the nickel hydroxide sample. The data of the charge-discharge curves reveal that the capacity of the freshly made electrode (primary cycling - Fig. 4, row 1) at a low current density of 10 mA/cm 2 is very low, less than 1 F/g. With an increase in the cycling current density to 80 mA/cm 2 , the capacity increases significantly. A further increase in the cycling current density leads to a decrease in specific capacities. Repeated cycling after drying the electrode and holding for 12 hours shows a sharp increase in specific capacities at a low current density of 10 mA/cm 2 . However, as the current density increases, the capacity decreases even faster than for the freshly made electrode. Discussion of the study of the characteristics of nickel hydroxide obtained by cold template homogeneous precipitation Visual observation shows that when the template homogeneous synthesis is carried out at room temperature, the hydroxide precipitate begins to form 30 days after the start of the synthesis. After the end of 6 months of the process, the resulting nickel hydroxide is a fine precipitate that does not have the usual hydrophilic character for Ni(OH) 2 . This indicates that cold template homogeneous precipitation proceeds according to a one-stage mechanism. At room temperature, the rate of hydrolysis, and, accordingly, the rate of formation of OHions is low. Consequently, no high supersaturation is created and the rate of crystal growth should exceed the rate of nucleation. In this case, Ni(OH) 2 with high crystallinity should be formed. However, the data of X-ray phase analysis (Fig. 1) show the formation of low-crystalline α-Ni(OH) 2 . This contradiction is explained by the influence of the water-soluble template. The scanning electron microscopy image (Fig. 2) revealed the formation of aggregates of particles of increased size. Culminal C8465 as a template, most likely, in addition to the formation of a 3D matrix, interacts with the synthesized hydroxide and exhibits the properties of a binder, causing particle aggregation, especially at high concentrations, which is consistent with the data reported in [52,53]. The freshly made electrode shows very low capacities at a current density of 10 mA/cm 2 , 0.9 F/g, and 0.35 mA⋅h/g. With an increase in the current density to 20 mA/cm 2 , the specific capacity increases 88.9 times, up to 80 F/g. A further increase in the current density leads to a decrease in the specific characteristics; at a current density of 120 mA/cm 2 , the specific capacities are 20.4 F/g and 10.2 mA⋅h/g. This behavior clearly indicates the primary blocking of the particle surface by the template, which is located in the agglomerates of particles. With an increase in the current density, the agglomerates of particles disintegrate into smaller components, which leads to a sharp increase in specific capacities. These data are consistent with previously published data [19,52]. 
However, with a further increase in the current density, the specific capacitances decrease. This is the result of two factors. On the one hand, as shown in [19], an increase in the current density should lead to an increase in the specific capacity. On the other hand, a "binder-free" electrode was made for the galvanostatic charge-discharge cycling, i.e., no external binder was used in the manufacture of the active mass. Probably, as the current density increased and the particle aggregates disintegrated into smaller components, the binding properties of the template turned out to be insufficient. As a result, part of the active mass may peel from the nickel-foam current collector with a loss of electrical contact. In this case, the decrease in specific characteristics indicates a stronger effect of partial peeling of the active mass.

Fig. 4. Specific capacity of the Ni(OH)2 sample obtained by low-temperature homogeneous precipitation: a - Csp, F/g; b - Qsp, mA⋅h/g; 1 - primary cycle; 2 - repeated cycle after drying for 12 hours.

The electrode was re-cycled after drying it at room temperature and holding for 12 hours (Fig. 4, row 2). At a current density of 10 mA/cm2, the specific capacities were 135 F/g and 69 mA⋅h/g. The increase in capacity compared with primary cycling was 150-fold and 197-fold, respectively. Such a significant increase in specific characteristics is explained precisely by the decomposition of the agglomerates of hydroxide particles into smaller components and the resulting increase in the active surface. Drying the electrode in air leads to carbonation of the alkali in the pores of the active mass. The molar volume of the carbonate is greater than the molar volume of the alkali. As a result, the agglomerates of nickel hydroxide particles are destroyed, the active surface increases and, accordingly, so do the specific characteristics. However, an increase in the volume of the active mass can lead to partial exfoliation of the active mass with a decrease in the specific capacity. This effect was confirmed experimentally (Fig. 4): at a cycling current density of 120 mA/cm2, a specific capacity of 11 F/g was obtained, which is only 53.9 % of the specific capacity during primary cycling (20.4 F/g). Thus, the effect of swelling of the active mass upon drying was compounded by the insufficient binding properties of the template in the "binder-free" electrode. In general, the binding characteristics of the Culminal C8465 template in the "binder-free" electrode proved insufficient, which leads to a decrease in the specific characteristics with an increase in the current density. Additional studies are needed to clarify the required amount of external binder. A previously undescribed effect was revealed: the specific capacity of nickel hydroxide obtained by template synthesis increases upon drying the electrode after primary cycling. Additional studies of this effect, with the introduction of an alkali solution during the preparation of the active mass, are needed.

Conclusions

1. Synthesis of nickel hydroxide has been carried out using the previously undescribed method of cold template homogeneous precipitation. The water-soluble high-molecular-weight compound Culminal C8465 has been used as a template; the synthesis has been carried out for 6 months at a temperature of 20-35 °C.
A fine powder of nickel hydroxide has been obtained. 2. Using X-ray phase analysis and scanning electron microscopy, it has been shown that during cold template homogeneous precipitation, low-crystalline α-Ni(OH) 2 with agglomerated spherical particles is formed. The study of the electrochemical characteristics of the "binder-free" electrode with the synthesized nickel hydroxide sample has revealed high specific characteristics. The process of disintegration of agglomerates of particles into smaller ones with an increase in specific capacity has been revealed. The maximum specific capacities have been 80 F/g and 38 mA⋅h/g. However, the insufficiency of the binding properties of the template residues has been shown, and the conclusion has been drawn that an external binder should be introduced. The previously undescribed effect of a significant increase in the specific capacity during drying of an alkali-impregnated electrode has been revealed. The effect is due to the disintegration of particle agglomerates due to the swelling of the active mass during alkali carbonization. The maximum capacity reached has been 135 F/g and 69 mA⋅h/g. It has been concluded that the use of the revealed effect is promising for neutralizing the passivating effect of template residues in the composition of nickel hydroxide during template synthesis.
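As a practical aside on the galvanostatic data reported above, the specific capacities Csp (F/g) and Qsp (mA⋅h/g) are commonly obtained from the constant-current discharge branch as Csp = I·Δt/(m·ΔV) and Qsp = I·Δt/m. A minimal sketch of that calculation is given below; the current, discharge time, voltage window, and active mass are assumed values, not measurements from this work.

```python
# Hypothetical example: specific capacity of a Faraday electrode from a
# constant-current discharge step. C_sp = I*dt/(m*dV); Q_sp = I*dt/(3600*m).

def specific_capacitance(i_ma: float, dt_s: float, dv_v: float, m_g: float) -> float:
    """Specific capacitance in F/g from current (mA), discharge time (s),
    discharge voltage window (V) and active mass (g)."""
    return (i_ma / 1000.0) * dt_s / (dv_v * m_g)

def specific_capacity(i_ma: float, dt_s: float, m_g: float) -> float:
    """Specific capacity in mA*h/g."""
    return i_ma * (dt_s / 3600.0) / m_g

if __name__ == "__main__":
    # assumed numbers: 10 mA over 1 cm^2, 0.02 g active mass,
    # 80 s discharge over a 0.5 V window
    print(f"C_sp = {specific_capacitance(10, 80, 0.5, 0.02):.1f} F/g")
    print(f"Q_sp = {specific_capacity(10, 80, 0.02):.1f} mA*h/g")
```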
DIRECT ESTIMATION OF THE RELATIVE ORIENTATION IN UNDERWATER ENVIRONMENT While ccuracy, detail, and limited time on site make photogrammetry a valuable means for underwater mapping, the establishment of reference control networks in such settings is oftentimes difficult. In that respect, the use of the coplanarity constraint becomes a valuable solution as it requires neither knowledge of object space coordinates nor setting a reference control network. Nonetheless, imaging in such domains is subjected to non-linear and depth-dependent distortions, which are caused by refractive media that alter the standard single viewpoint geometry. Accordingly, the coplanarity relation, as formulated for the in-air case does not hold in such environment and methods that have been proposed thus far for geometrical modeling of its effect require knowledge of object-space quantities. In this paper we propose a geometrically-driven approach which fulfills the coplanarity condition and thereby requires no knowledge of object space data. We also study a linear model for the establishment of this constraints. Clearly, a linear form requires neither first approximations nor iterative convergence scheme. Such an approach may prove useful not only for object space reconstruction but also as a preparatory step for application of bundle block adjustment and for outlier detection. All are key features in photogrammetric practices. Results show that no unique setup is needed for estimating the relative orientation parameters using the model and that high levels of accuracy can be achieved. INTRODUCTION Relative orientation enables the estimation of a minimal set of parameters that are necessary to establish coplanarity among corresponding points between two views (Mullen, 2004). Its simplified form, which requires no knowledge of object space coordinates, makes it useful as a means to obtain scene reconstruction, up to a similarity transformation, without setting a reference control network (Stewenius et al., 2006;Pollefeys and Van Gool, 1997;Hemayed, 2003;Hartley and Zisserman, 2003). These advantages become paramount in underwater environments where reference control networks are difficult to establish and where local ones become the more practical solution. Accordingly, the use of relative orientation has been applied in a diverse set of applications, including: seabed mapping, archaeological surveys, marine biology studies, vehicle navigation, or industrial equipment inspection, to name only a few (Eustice et al., 2005;Allotta et al., 2015;Ricci et al., 2015;Teixeira et al., 2016;Ludvigsen and Sørensen, 2016;Pergent et al., 2017;Herkül et al., 2017). Establishing a relative orientation in underwater environments is challenged, however, by limited visibility due to scattering and absorbance of light, and by refraction that alters light rays trajectory from their standard 'collinear' path and ultimately the coplanarity relation (Menna et al., 2017). The problem of recovering 3-D geometry in the presence of refraction has been studied extensively in recent years. The majority of approaches rely on standard in-air models, assuming that the pinhole camera model with distortion can compensate for refraction (e.g., Pizarro et al., 2003;Singh et al., 2007;Shortis, 2015). 
These models have proved applicable, but analysis of the actual image-point correction due to refraction shows that for close-range imaging, the actual aberration is depth dependent and three dimensional (Sedlazeck and Koch, 2012; Shortis, 2015). Modeling the effect of imaging through refractive media has shown that the standard single viewpoint perspective (SVP) model no longer applies in this setting and introduces systematic errors that are proportional to the distance between the entrance pupil and the flat port (Menna et al., 2017). To handle these effects, recent studies considered explicit modeling of the refraction effect. As an example, Chaudhury et al. (2015) implemented a ray-tracing based model that expresses the underlying refractive geometry as an extension of projective geometry. The authors assumed, however, that refraction occurs only at the camera center, allowing them to estimate geometrically the relationship between the observed image point and the one that would have been obtained had refraction not occurred. The case of a fixed interface and a moving camera was considered by Chen et al. (2011), where the authors assumed that the imaging system is integrated with an inertial measurement unit (IMU) so that the pitch and yaw angles of the camera are known. The authors then proposed a closed-form solution to the absolute orientation estimation, yet required the vertical displacement of the camera to be known in advance. A setup of two cameras embedded in different watertight housings was considered in Kang et al. (2012). The authors neglected the glass thickness and considered the image plane and the glass interface to be parallel. They then estimated the air medium thickness for each imaging system and the five relative pose parameters in a non-linear optimization procedure. Telem and Filin (2013) derived a coplanarity constraint based on the varifocal representation. The authors demonstrated that, because of refraction, the linear relation between views does not hold when imaging underwater, leading to a non-linear and inhomogeneous coplanarity constraint. Aiming to accommodate the thick glass interface found in deep-sea high-pressure housings, Jordt et al. (2016) proposed a ray-tracing refractive structure-from-motion (SfM) framework that extends previously proposed frameworks. The authors relied on the coplanarity constraint to estimate the relative camera motion between two successive views while using a non-linear refractive bundle adjustment. The proposed refractive SfM required a good initialization, especially for the two-view pose estimation case, and relied on a computationally demanding genetic optimization. Furthermore, the model exhibited sensitivity to noise, especially when using a thin glass interface or when the distance between the perspective center and the interface was small. Pose and calibration models for an underwater stereo imaging system were proposed by Zhang et al. (2018), where the authors remodeled the underwater imaging system in terms of a light-field parametrization and proposed a forward projection error function that was minimized in a non-linear optimization fashion. The literature shows that, because of refraction, underwater imaging suffers from depth-dependent, non-linear distortions and that the imaging system does not follow the standard SVP model.
To handle that, recent research followed ray-tracing based principles and incorporated robust optimization strategies, proposed to algorithmically reduce the measurement noise, or used hardware-driven solutions. While exerting high computational demand, they still fall short of obtaining satisfactory levels of accuracies and are often supplemented by elaborate setups, or require an a priori knowledge of part of the camera pose parameters. In this paper, we propose a geometrically driven model that accounts for the refraction effect and constitutes an alternative formulation of the relative orientation and the coplanarity constraint. We show that the pose parameters can be estimated linearly and directly, with no need for first approximations or iterations. The linearity is achieved by separating the relative pose from the underwater-related system parameters where the refraction effect is manifested. The advantages of our model are the following: firstly, the relative pose parameters can be estimated linearly; secondly, and contrary to the in-air case, the model allows to estimate the scale of the system; and thirdly, 3-D data can be extracted with no need to establish an explicit reference frame. The model is analyzed by simulated experiments and in actual underwater conditions. Evaluation is of the inner and geometric accuracy, both are analyzed by the reconstructed point-sets. Results show that a high-level of accuracy is reached while facilitating a flexible orientation strategy for modeling in underwater environments. Varifocal model We consider an imaging system made of a camera shielded by a flat-port housing. The image formation model is of a ray that propagates from an in-water point until it refracts at the port glass-interface and then propagates through the air until it reaches the image plane while passing through the perspective center (Fig. 1). The system is defined by the camera's focal length, f , the distance from the housing interface to the perspective center, F, and the refractive indices. Snell's law of refraction suggests that the refracted ray, the normal to the interface surface, and the incoming-and outgoing-rays, all lie on the same plane ( Fig. 1). As the refracted rays pass through the perspective center, all planes of refraction revolve about the optical axis as long as it coincides with the normal to the Figure 1: Varifocal configuration, the dashed line follows the direction of the underwater ray until intersecting the system axis, where kF is the perspective center offset, α is the angle of refraction; and β is the angle of incidence (µ1 and µ2 the refractive indices of air and water, respectively). interface. Hence, the axiality of the system. Axiality is also a property of the varifocal model, but here the position of the perspective center is modified along the optical axis to maintain the collinearity of the incoming ray. Axial camera modeling -Modifying the imaging system with respect to the interface (Fig. 1), F becomes the principal distance, and coordinates are given by: are the housing system coordinates for a given image plane coordinates, xi and yi. The collinearity form in this system can be written as: where Xi is the 3-D coordinate of a point in object space; ri is the i-th row of the rotation matrix R; tx, ty, and tz are the camera position in the camera reference frame; λi is a scale factor; and ki: is the principal distance adaptation to the refraction effect, where α is the angle of incidence (Fig. 
1); and µ = µ1/µ2 is the refractive index ratio between the indices of air and water. Deviation from parallelism -The derivation of Eq. (2) assumed that the optical axis and the normal to the interface coincide. This assumption fails to hold when the image plane is not parallel to the housing interface. In such a case all the planes of refraction still pass through the perspective center. They do not coincide however with the optical axis, rather with the normal to the interface, which is common to all planes of refraction. The offset between the normal to the interface and the optical axis is an image related term which is defined by two rotation angles, one off the optical axis and another about it. Telem and Filin (2010) have shown that the rotation about the z-axis is strongly correlated with the exterior orientation angles, thus ab-sorbed, and cannot be estimated independently. Consequently, the transformation between the two systems can be approximated by the use of a single rotation about the y-axis. Hence, the modification becomes: where Ry (η) is the rotation matrix from the image plane to the interface by an angle η. Thus, the modified expression for k becomes: where x is the image point coordinate, and M3 = r3 ⊗ r3 is the outer product of the third row in Ry (η) in which the refraction effect is accounted for. As offsetting the perspective center should be performed in reference to the system in which the normal to the interface acts as the optical axis, both image and object-space related vectors should transform to this system, followed by a modification to the perspective center: The glass interface effect on the system was not considered to that point, but it was demonstrated by Telem and Filin (2010) that for a standard underwater imaging system it introduces a lateral shift which is smaller by an order of magnitude or more than the one caused due to angular refraction. This shift can be reduced to a constant value, factored by the glass thickness. Hence, it is absorbed by F . Plücker coordinates of a 3-D line To establish the relative orientation we define the ray's direction using Plücker representation of lines. We show that such a representation facilitates a linear estimation of the camera pose parameters. We represent the projected image rays as 3-D lines using Plücker coordinates. Of the definitions for a Plücker 3-D line we use the vector form (Förstner and Wrobel, 2016). Let A and B be homogeneous coordinates of two 3-D points defining a line. The Plücker line representation is given by: where Ai, Bi is the i th ordinate of the respective point. L can be split into two 3-vectors a and b, where, a and b are the direction and moment of the 3-space line, respectively. To define a proper line in space these two sub vectors must satisfy the constraint: Considering two 3-space lines, L and L , a coplanarity relation between them is obtained if and only if they fulfill the following constraint: where, W is a 6 × 6 permutation matrix. Transformationsa metric transformation defined by a rotation matrix R and a translation vector t, acts on a point X by: Accordingly, Plücker line coordinates are transformed by, where, [t] × is a skew symmetric matrix (Förstner and Wrobel, 2016). Underwater relative orientation Considering, L and L as lines that represent the rays projected from two cameras meeting in a point in object-space, Eq. (10) can be written as: Describing a ray by a point v that lies on it and a unit vector directionx. 
The Plücker coordinates of the two lines corresponding to image pair correspondence i are given by Eq. (14). In our case, x̂i is the direction defined by the corrected perspective center vi = [0, 0, −kiF] and its corresponding housing system point xi (Eq. 1) in the first image. Setting the point on the line to be vi, Eq. (14) can be rewritten in terms of the varifocal model. As the last element of the Plücker rays is zero, the 6 × 6 transformation matrix in Eq. (13) can be reduced to a 5 × 5 matrix, yielding the constraint of Eq. (17), where L̃i and L̃'i are 5-length vectors. Eq. (17) can also be written in the form of Eq. (18). Eq. (17) provides a form from which the relative pose parameters can be estimated linearly. For that purpose, we use the singular value decomposition (SVD; Golub and Van Loan, 1983) based solution for Er, which yields satisfactory results. However, we propose a two-step approach, an alternative solution scheme that obtains better estimates. First, the rotation matrix is estimated from E = [t]×R, which is computed from the first element in Eq. (18). Second, a non-linear refinement of these estimates is computed directly from the complete form of Eq. (18). To compute the scale, we fix the refined rotation matrix in Eq. (18) and compute the translation vector, t. As t is the only unknown and Eq. (18) is a non-homogeneous set of equations (unless R = I, meaning no motion), t can be computed using a linear least-squares adjustment, with no scale ambiguity.

EXPERIMENTS

Evaluation of the model was performed by studying two scenarios, one in which the pose parameters were estimated and compared to ground truth, and another in which the reconstruction error was evaluated. The evaluation was carried out using simulations with settings that resemble actual underwater imaging conditions and using real-world experiments. For both the real and the simulated experiments, the performance of the proposed model was characterized by the following metrics: (i) the angle difference between the rotation axes of R and R̂ (Eq. 19a); (ii) the direction difference between t and t̂ (Eq. 19b); and (iii) the scale error between t and t̂ (Eq. 19c). The parameters R, t are the ground-truth information used for the simulations, and R̂, t̂ are the estimated equivalents. Note that all these matrices and vectors are defined in absolute scale rather than up to scale, as in the in-air case.

Synthetic simulations

To test the pose parameter estimation model, we consider an imaging system with a standard frame camera and a flat-port housing. The settings consisted of a 3000 × 2000 pixel frame camera with a 7.5 µm pixel size; a 24 mm lens; µglass = 1.5; µwater = 1.333; F = 50 mm; and η = 1°. Normally distributed additive random noise with a standard deviation ranging between σ = [0.1, 1] pixels was introduced to the image coordinates. The first experiment tested the estimation accuracy for rotation, translation, and scale in the presence of σ = ±0.5 pixels of Gaussian noise. Histograms of the estimation accuracy for the different metrics (Eqs. 19a, 19b and 19c) are plotted in Fig. 2.

Figure 2: Histograms of the estimation accuracy using 30 corresponding points and based on 500 randomly simulated tests. In all tests, Gaussian noise with σ = ±0.5 pixels was introduced to the image measurements.

The second experiment tested the quality and stability of the camera pose parameters (position and orientation) and of the scale estimation in the presence of Gaussian noise. Here the measurement noise varied from 0.1 to 1 pixels.
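To make the Plücker-based constraint above concrete, the sketch below builds two rays through a common object point, evaluates the coplanarity residual of Eq. (10), and then perturbs one ray direction by about one degree, standing in for the angular error that unmodelled refraction would introduce. The poses, point, and perturbation are synthetic assumptions, not values from the experiments.

```python
import numpy as np

# Numerical illustration of the Plücker coplanarity constraint L^T W L' = 0:
# two rays meeting at a common object point satisfy it, while a ray whose
# direction is perturbed (e.g. by unmodelled refraction) does not.

def plucker(point_on_line, direction):
    """Plücker coordinates (a | b): unit direction and moment of a 3-D line."""
    a = np.asarray(direction, float)
    a = a / np.linalg.norm(a)
    b = np.cross(point_on_line, a)
    return np.hstack([a, b])

def coplanarity_residual(L1, L2):
    """L1^T W L2, with W the 6x6 permutation swapping direction/moment blocks."""
    W = np.zeros((6, 6))
    W[:3, 3:] = np.eye(3)
    W[3:, :3] = np.eye(3)
    return float(L1 @ W @ L2)

if __name__ == "__main__":
    X  = np.array([0.4, -0.2, 3.0])     # common object point (m), assumed
    c1 = np.array([0.0, 0.0, 0.0])      # camera 1 centre
    c2 = np.array([0.3, 0.05, 0.0])     # camera 2 centre (assumed baseline)

    L1 = plucker(c1, X - c1)
    L2 = plucker(c2, X - c2)
    print(f"intersecting rays : {coplanarity_residual(L1, L2):+.2e}")   # ~0

    # Perturb the second ray direction by ~1 degree, mimicking the angular
    # error introduced when flat-port refraction is left unmodelled.
    bad_dir = (X - c2) + np.linalg.norm(X - c2) * np.array([0.017, 0.0, 0.0])
    L2_bad = plucker(c2, bad_dir)
    print(f"perturbed ray     : {coplanarity_residual(L1, L2_bad):+.2e}")  # != 0
```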
The accuracy estimates are discussed only for σ = ±0.5, other levels are plotted in the designated graphs in Fig. (3). The application of the proposed model for that noise level yielded an angular error of ±0.012 • . The scale estimation accuracy was 5.5e −4 . The reprojection error shows a linear trend with the increase of the noise level (Fig. 3), an indication of the stability of the model. In all experiments convergence was reached within three iterations, one for the estimation of the approximated values and two others for the estimation by the optimal form in Eq. (18). Real world -Open sea experiment In addition to the synthetic evaluation, another test was performed under real-world conditions in open sea. This environment provided less controlled and different settings. Here a Nikon D70s with a fixed 24 mm lens shielded by an IKELITE housing, and two 2-D rigid plates were immersed in water, both are used for testing the quality of the pose estimation. A dry calibration phase was performed while the camera was in the housing (Elnashef and Filin, 2019), then, a wet calibration was carried out using a total of five images. The a posteriori standard deviation of the adjustment wasσ0 = ±0.42 pixels. F was estimated with an accuracy of ±0.07 mm, ±1e − 6 for µ, and ±0.05" for η. The system parameters were then fixed in the next stage. We used ORB key-point detector for feature detection, then a matching between correspondence was performed to find corresponding points between the two images shown in (Fig. 4). Note that, the two images used in the relative orientation estimation were different than those used in the calibration stage. Before approaching the non-linear estimation, approximate pose parameters were computed. To obtain them, we estimated the essential matrix, E = [t] × R using the housing system coordinates. Hence, we solved for E =x T i Ex i , which offers adequate approximation for Eq. (18). We applied the standard 5-point and the RanSac algorithms (Stewenius et al., 2006) to extract the parameters. With these values, we then optimally minimized the geometric error, by directly solving for the three rotations angles (ω, φ, κ) and two translation (tx, ty) (Eq. 18). Finally, to estimate the translation vector with scale, we first computed the optimal rotation matrixR from the estimated angles (Eq. 18). Then, fixing it in Eq. (18), we solved only for the three translation parameterst = [tx, ty, tz], using a standard least-squares solution. Testing 3-D point measurement accuracy -We evaluated the application of these parameters when applied to the measurement of 3-D object-space points using the varifocal model proposed by Elnashef and Filin (2019). Measurements were performed on tilted test-plate (with respect to the camera) by two images. The evaluated measures included coplanarity, collinearity, parallelism and orthogonality of the measured grid lines, and measurement of the grid dimensions. All 63 corners of the 7 cm-spaced, 42×56 cm, grid (Fig. 4) were measured. The first test evaluated the coplanarity of the mapped points, and a plane that was fitted to them yielded a ±0.15 mm deviation. The collinearity test for points lying on straight-lines, and as a consequence the removal of the underwater effect that bends them, yielded a mean deviation of ±0.21 mm and a maximal deviation of 0.36 mm from collinearity. Computation of the angles between parallel and orthogonal lines shows an angular error of 47" and 27" in parallelism and orthogonality, respectively. 
These results are equivalent to deviations of 0.54 mm and 0.42 mm with respect to the actual lengths on the target (56 cm and 42 cm, respectively). Such deviations are comparable in accuracy to the ones reached in our other experiments.

CONCLUSIONS

This paper proposed a geometrically driven, scaled relative-orientation model for the underwater environment. Its main advantage lies in the preservation of the coplanarity constraint with no need for explicit knowledge of object point coordinates, while also separating the pose parameters from the refraction-related ones. Therefore, our model enables the estimation of the underwater relative orientation parameters together with the scale. The experiments demonstrated that the estimation of the orientation parameters yielded high levels of accuracy, robustness to first approximations, and robustness to high levels of noise. The linear estimation form requires 17 point correspondences, which may limit its applicability in image pairs with a low number of correspondences and would increase the computational demand in the presence of a high fraction of outliers. This was alleviated, however, by the direct estimation methodology proposed in the paper, which required only 5 points and allowed us to solve for the pose parameters in an optimal manner by minimizing the reprojection error. We showed that by applying our proposed relative orientation model, in which the scale is also estimable, it was possible to obtain high estimation and reconstruction accuracy in close-range underwater imaging with no domain knowledge given.

ACKNOWLEDGMENT

This work was supported in part by the Israel Ministry of Science, Technology and Space grant # 3-12487.
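For readers re-implementing the evaluation, the accuracy measures referred to above as Eqs. (19a)-(19c) can be computed along the following lines. This is a sketch of one plausible reading of those metrics (rotation error as the angle of the relative rotation, translation error as the angle between translation vectors, scale error as the relative difference of their norms); the ground-truth and estimated values below are assumptions.

```python
import numpy as np

# One plausible implementation of the pose evaluation metrics of Eq. (19).

def rotation_error_deg(R, R_hat):
    """Angle (deg) of the relative rotation between ground truth and estimate."""
    cos_theta = (np.trace(R.T @ R_hat) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def translation_direction_error_deg(t, t_hat):
    """Angle (deg) between the ground-truth and estimated translation directions."""
    c = np.dot(t, t_hat) / (np.linalg.norm(t) * np.linalg.norm(t_hat))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def scale_error(t, t_hat):
    """Relative error of the recovered baseline length (scale)."""
    return abs(np.linalg.norm(t_hat) - np.linalg.norm(t)) / np.linalg.norm(t)

if __name__ == "__main__":
    R = np.eye(3)                                   # assumed ground truth
    t = np.array([0.30, 0.05, 0.00])
    ang = np.radians(0.012)                         # a slightly perturbed estimate
    R_hat = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                      [np.sin(ang),  np.cos(ang), 0.0],
                      [0.0,          0.0,         1.0]])
    t_hat = t * 1.0005 + np.array([0.0, 0.0, 1e-4])
    print(f"rotation error    : {rotation_error_deg(R, R_hat):.4f} deg")
    print(f"translation error : {translation_direction_error_deg(t, t_hat):.4f} deg")
    print(f"scale error       : {scale_error(t, t_hat):.2e}")
```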
The Pro-Gly or Hyp-Gly Containing Peptides from Absorbates of Fish Skin Collagen Hydrolysates Inhibit Platelet Aggregation and Target P2Y12 Receptor by Molecular Docking

Previous studies found that collagen hydrolysates of fish skin have antiplatelet activity, but the responsible components remained unknown. In this study, eleven peptides were isolated and identified in the absorbates of Alcalase-hydrolysates and Protamex®-hydrolysates of skin collagen of H. Molitrix using a reverse-phase C18 column and HPLC-MS/MS. Nine of them contained a Pro-Gly (PG) or Hyp-Gly (OG) sequence and significantly inhibited ADP-induced platelet aggregation in vitro, which suggested that the PG(OG) sequence is the core sequence of collagen peptides with antiplatelet activity. Among them, OGSA showed the strongest inhibitory activity against ADP-induced platelet aggregation in vitro (IC50 = 0.63 mM), and OGSA inhibited thrombus formation in rats at a dose of 200 μmol/kg bw with no risk of bleeding. The molecular docking results implied that the OG-containing peptides might target the P2Y12 receptor and form hydrogen bonds with the key sites Cys97, Ser101, and Lys179. As the sequence PG(OG) is abundant in the collagen amino acid sequence of H. Molitrix, the collagen hydrolysates of H. Molitrix might have great potential for development as dietary supplements to prevent cardiovascular diseases in the future.

Introduction

In the last decade, with the increase in the aging population, the incidence and mortality of cardiocerebrovascular diseases (CCVd) have continued to rise in China. Researchers estimate that there are 290 million CCVd patients in China and believe that the disease has become the primary threat to the health of the elderly [1]. Thrombus formation is the pathological basis of CCVd, and its key factors are the activation, aggregation, and adhesion of platelets [2,3]. Platelet activation is a process in which platelets in a resting state release granules, adhere, and aggregate under the stimulation of activating agents such as collagen [4], ADP [5], and thrombin [6,7]. Platelet activation plays a decisive role in physiological hemostasis. However, excessive activation of platelets is involved in many pathological processes, such as atherosclerosis, thrombosis, tumor metastasis, and inflammation [8]. In recent years, a variety of potent antiplatelet drugs have been designed to inhibit platelet aggregation, such as the thromboxane A2 (TXA2) inhibitor aspirin [9] and the P2Y12 receptor antagonists clopidogrel and prasugrel [10]. However, long-term antiplatelet drug treatment may have a variety of side effects, including bleeding, gastrointestinal damage, allergic reactions, headaches, and liver damage [11,12]. Compared with antiplatelet drugs, preventing and suppressing thrombotic diseases through the diet has unique advantages: fewer side effects and lower cost. A large number of studies have reported that various natural antiplatelet compounds in food can inhibit platelet aggregation, including coumarins from the plant family Rutaceae [13], mangiferin [14], polyphenols from tomato [15], resveratrol [16], and quercetin [17]. Collagen is the structural protein of the extracellular matrix. It is abundant in skin, bone, and cartilage tissues, accounting for about 30% of the total protein in animals [18].
In recent years, the separation and identification of bioactive peptides from collagen hydrolysates, such as antioxidant peptides [19,20], anti-inflammatory peptides [21], and antiosteoporosis peptides [22], has attracted the attention of researchers. Previous studies reported that the peptides, such as DGEA, FGN, and GPR, derived from type I collagen in pig skin or fish skin have antiplatelet activity [23][24][25]. In our previous study, thirteen-monthold mice were administered with collagen hydrolysates from H. Molitrix skin for 2 months. The collagen hydrolysates significantly decreased the level of platelet release indicators in the plasma, including PF4, granule membrane protein (GMP)-140, -thromboglobulin, and serotonin [26]. Later, several antiplatelet peptides were identified from H. Molitrix collagen hydrolysate after simulated gastrointestinal digestion and intestinal absorption, including OG, OGE, PGEOG, and VGPOGPA, one of which, OGE, was also confirmed to inhibit thrombosis in vivo [27]. Molecular docking is a computer-based structure simulation method that is widely used in drug discovery and development. Since its first appearance in the mid-1970s, it has proven to be an important tool to help reveal how a compound interacts with its target protein [28]. In this study, molecular docking was used to reveal the target receptor proteins and binding sites and the bonds formed by their ligand-target interactions. In short, the purpose of this study was to identify the antiplatelet peptides from the absorbates of Alcalase-hydrolysates and Protamex ® -hydrolysates of the skin collagen of H. Molitrix and assess their antiplatelet activity. Furthermore, the target receptors of these peptides on platelets were simulatively screened by molecular docking. Preparation of Collagen Peptides (CPs) Gelatin from the H. Molitrix skin was obtained according to the previously described method [26]. The gelatin was enzymatically hydrolyzed by Alcalase for 4.0 h (AH), or Protamex ® for 8.0 h (PH). Simulated Gastrointestinal Digestion The in vitro gastrointestinal digestion of collagen peptide has been described in a previous report with a minor modification [27]. During simulated gastric digestion, pepsin (a ratio of enzyme to substrate of 1:50, w/w, and pH 2.0) was added to CPs (a concentration of 4%, w/v) and incubated at 37 • C for 2 h. During intestinal digestion, the mixture from simulated gastric digestion was adjusted to pH 7.0 with 2 M NaOH. Pancreatin (1:50, enzyme/CPs ratio, w/w) was further added to hydrolyze the mixture at 37 • C for 2 h and was stirred continually during the digestion progress. Finally, the mixture was heated in boiling water for 10 min to inactivate the enzymes. The digested solution was centrifuged at 8000 rpm for 15 min. Finally, the supernatant was freeze dried and stored at −80 • C in the dark for further Caco-2 treatment. Uptake and Transport by Caco-2 Cells The Caco-2 cell culture method has been described in a previous report [29]. Caco-2 cells were seeded and cultivated in 25 cm 2 plates, and the cells were incubated in an atmosphere of 5% CO 2 with DMEM containing 10% FBS, 1% nonessential amino acids, 100 U/mL penicillin, and 0.1 mg/mL streptomycin. When Caco-2 cells covered 80% to 90% of the culture area of the flask, cells were dislodged using trypsin. Then, Caco-2 cells were seeded and cultivated at a density of 1 × 10 5 /mL on the upper side of the 6-well Transwell cell culture plate (AP). The medium was replaced every 2 days. 
After 21 days of incubation, a transepithelial electrical resistance (TEER) value of 800 Ω cm 2 or higher indicated that the cells were differentiated in monolayers [30]. Lyophilized polypeptide samples were dissolved in HBSS at a dose of 15 mg/mL. The monolayer membrane was gently rinsed with preheated HBSS for 2-3 times, 1.5 mL HBSS solution was added to the upper (AP) side of the cell model, and 3 mL HBSS solution was added to the lower (BL) side. Then, the cell model was placed in a cell incubator and balanced at 37 • C for 30 min. After that, the solution on the AP side was removed and 1.5 mL sample was added. The sampled plates were placed in a cell incubator for 2 h. The solution on the BL side was collected for all experiments below. Separation of the AH Absorbate (AHA) and PH Absorbate (PHA) Each absorbate was desalted through a C18-SPE column (ODS-A, YMC Co., Kyoto, Japan) before separation. The absorbate was dissolved with an appropriate amount of deionized water and loaded on the column. Then, the column was washed with deionized water, 50% ethanol aqueous solution. The elution flow rate was 1.0 mL/min, and detection wavelength was 220 nm. The peptide components were collected after 10 min. After desalination, the sample was loaded on the ODS-A universal inverse-phase C18 column for separation. Elution conditions are as follows: 0-15 min, distilled water; 15-25 min, 10% methanol; 25-35 min, 30% methanol; 35-45 min, 50% methanol. The flow rate was 1.0 mL/min, and the detection wavelength was 220 nm. The HD-A chromatographic processing system was used to record the atlas and collect the fractions. Each fraction was concentrated by rotary evaporation at 45 • C then freeze dried and stored at −80 • C. Peptide Sequencing Using LC-MS/MS The peptides were identified by capillary high-performance liquid chromatographyelectrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS) analysis using the Thermo Fisher Scientific™ HPLC (Ultimate 3000) coupled to an Orbitrap Elite™ Hybrid Ion Trap-Orbitrap Mass Spectrometer (Thermo Fisher Scientific (China) Co., Ltd., Shanghai, China) according to previously reported methods [29]. The mass spectrometer was operated in the positive mode with the capillary voltage of 3.6 kV and the source temperature of 100 • C. The scanning m/z range was 200-1500 in the MS mode and 50-1500 in the MS/MS mode. The MS/MS data were processed using Thermo Xcalibur software (Thermo Fisher Scientific, San Jose, CA, USA) in combination with manual de novo sequencing. The spectra of hydrolysates were processed to MS (mass spectrum) and MS/MS, respectively, by Bruker Daltonics Data Analysis 4.0 (Bruker Daltonics Inc., Billerica, MA, USA). Analysis results were confirmed using protein database from National Center for Biotechnology Information (NCBI, https://www.ncbi.nlm.nih.gov/protein, accessed on 17 September 2018). Validated by Peptide/Protein MS Utility program (http://prospector.ucsf.edu/, accessed on 20 September 2018). Peptides were synthesized using the method of Fmoc solid-phase (GL Biochem Ltd., Beijing, China). The purity of all peptides used in the study was greater than 98% determined by HPLC. Antiplatelet Activity Assay In Vitro Blood was taken from the abdominal aorta of healthy male SD rats and mixed with 3.8% sodium citrate (anticoagulant: blood = 1: 9, v/v). Then, the solution with equal volume of PBS was centrifuged at 50× g for 10 min at 23 • C. The supernatant was centrifuged under the same conditions again. 
After that, the supernatant was centrifuged at 750× g for 10 min at 23 °C. The upper layer was platelet-poor plasma (PPP), and the precipitate was platelets. The platelets were resuspended in plasma, and the platelet concentration was adjusted to 2-3 × 10^8/mL to obtain platelet-rich plasma (PRP). A 270 µL aliquot of PRP was placed in the test cup and preincubated at 37 °C for 5 min. For the models induced by collagen or thrombin, 30 µL of Tyrode's buffer (model group), peptides (2, 4 mg/mL, sample groups), the tripeptide RGD (2, 4 mg/mL, positive control group 1), or aspirin (1.5 mM, positive control group 2) was added to the test cup and incubated at 37 °C for 5 min. Then, the activator collagen (50 µg/mL) or thrombin (0.5 U/mL) was added to measure the platelet aggregation rate at 300 s. For the ADP-induced model, 30 µL PPP (model group), peptides (sample group), the tripeptide RGD, or aspirin (positive control group) was added to the test cup and incubated at 37 °C for 5 min. Finally, the activator ADP (100 µM) was added to measure the platelet aggregation rate at 300 s [31]. Eight-week-old male SD rats (180 ± 5 g, SPF grade) were purchased from Beijing Vital River Experiment Animal Technology Co., Ltd. (Beijing, China). All rats were housed in cages (6-8 rats per cage) and were allowed free access to a normal diet and water. The animal room was maintained at a temperature of 23 ± 2 °C and a relative humidity of 50 ± 5% and was artificially illuminated with a 12 h light/dark cycle. After adapting to the environment for 6 days, the rats were grouped by body weight into four groups (n = 5/group): the normal group (N, normal saline), the high-dose peptide group (HP, 300 µmol/kg bw), the low-dose peptide group (LP, 200 µmol/kg bw), and the positive control group (clopidogrel, 100 µmol/kg bw). All groups were administered orally.

Ferric Chloride-Induced Arterial Thrombosis Model

One hour after administration, the rats were anesthetized with 2% sodium pentobarbital (0.25 mL/100 g bw). The neck skin was removed, and the muscle layer was cut open along the midline. Avoiding the lymphatic vessels and glands, the left carotid artery was exposed over about 1 cm. The carotid artery was encircled with a filter paper strip (1 cm × 0.5 cm) soaked with 10% FeCl3 solution, the wound was covered with gauze soaked with normal saline, and this was kept in place for 15 min. After that, the filter paper strip was removed and the vessel was left for a further 40 min. After clamping the two ends of the blood vessel with hemostatic tweezers, blood was taken from the abdominal aorta [32,33]. The blood was mixed with 3.8% sodium citrate (9:1, v/v) for anticoagulation, gently mixed, and centrifuged for 15 min at 1500× g. The upper light-yellow plasma was collected and stored at −80 °C for later coagulation function tests. The blood vessel segment that had been wrapped in the filter paper strip (about 1 cm) was cut off, the residual blood inside was absorbed with filter paper, the overall wet weight was recorded, and the vessel was weighed again after the thrombus was removed.

Bleeding Assay and Determination of Thymus Index and Spleen Index

After blood collection, the blood was mixed with 3.8% sodium citrate (9:1, v/v) for anticoagulation. After being gently mixed, the blood was centrifuged for 15 min at 1500× g. The upper plasma was collected and stored at −80 °C.
APTT and PT assay: 130 µL of rat plasma was loaded into the Automated Coagulation Analyzer (HEMAVET950, Erba Diagnostics Inc., London, UK), and the PT and APTT values were read from the instrument. TT assay: 200 µL of rat plasma was kept at 37 °C for 5 min; then, 200 µL of TT reagent was added to the system and the clotting time was recorded.

Determination of Thymus Index and Spleen Index
After the rats were sacrificed, the thymuses and spleens were collected and weighed (mg) to obtain the thymus index (TI) and spleen index (SI). The TI and SI were calculated according to the following equation: TI or SI (mg/g) = organ (thymus or spleen) weight/body weight.

Molecular Docking
The interaction modes between the antiplatelet peptides and the receptors (P2Y12, PLC β2, PLC β3) were evaluated using molecular docking. The 2D structures of the peptides were prepared with ChemBioDraw Ultra 14.0 (PerkinElmer Co., Ltd., Waltham, MA, USA) and converted to 3D format with ChemBio3D Ultra 14.0 (CambridgeSoft Corporation, USA). The X-ray crystal structures of P2Y12 (PDB ID: 4PXZ), PLC β2 (PDB ID: 2ZKM), and PLC β3 (PDB ID: 4GNK) were downloaded from the RCSB protein database. Preparation steps such as adding polar hydrogens and deleting water molecules were carried out with modules of the Sybyl-X 2.0 software (Tripos Software, Inc., Berkeley, CA, USA). Docking simulations used the triangle matcher placement algorithm, with the ΔGbind score and force field applied as the refinement process. The top-scoring docking poses were chosen for the final ligand-target interaction analysis via the Surflex-Dock module of Sybyl-X 2.0 [34–36].
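The thymus and spleen index equation given in the section above is a simple organ-to-body-weight ratio; a minimal sketch in Python, with hypothetical weights (the mg/g units follow the equation as stated, and the example numbers are not from the paper):

```python
def organ_index(organ_weight_mg: float, body_weight_g: float) -> float:
    """Thymus or spleen index (mg/g) = organ weight (mg) / body weight (g)."""
    return organ_weight_mg / body_weight_g

# Hypothetical example: a 200 g rat with a 480 mg thymus and a 650 mg spleen.
print(organ_index(480, 200))  # thymus index: 2.4 mg/g
print(organ_index(650, 200))  # spleen index: 3.25 mg/g
```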
Effect of Alcalase-Hydrolysates (AH) and Protamex®-Hydrolysates (PH) on Inhibiting Platelet Aggregation Induced by Collagen, Thrombin, and ADP In Vitro
Alcalase protease and Protamex® protease were used to hydrolyze the skin collagen of H. molitrix to obtain different hydrolysates (AH and PH). The antiplatelet activity of the hydrolysates was evaluated in platelet aggregation models with three inducers (collagen, thrombin, and ADP). According to the results (Figure 1), AH significantly inhibited the platelet aggregation caused by all three inducers (** p < 0.01), whereas PH showed significant antiplatelet activity only in the ADP-induced platelet aggregation model (** p < 0.01).

Separation and Identification of Antiplatelet Peptides from Collagen Hydrolysates after Simulated Gastrointestinal Digestion and Intestinal Absorption
After simulated gastrointestinal digestion and intestinal absorption, the absorbates of the collagen hydrolysates were separated to identify the antiplatelet peptides. Four fractions were isolated from the AHA and PHA, respectively, named A1, A2, A3, and A4, and P1, P2, P3, and P4, as shown in Figure 2A,B (AHA/PHA: the absorbate of the collagen hydrolysate prepared by Alcalase/Protamex®). At a dose of 4 mg/mL, all four fractions of AHA had a significant inhibitory effect on ADP-induced platelet aggregation, and fraction A1 had the highest inhibition rate. For PHA, fractions P2 and P3 had a significant inhibitory effect on ADP-induced platelet aggregation, and P3 had the highest inhibition rate (Figure 2C). The fractions with high bioactivity (A1, A4, and P3) were then used for further identification of antiplatelet peptides by HPLC-ESI-MS/MS. A total of 11 peptides from A1, A4, and P3 were identified by amino acid sequencing: A1 (OG, OGSA, PGPK, PGKP, PGPQ, PGQP, GTOGT, EGPAGPA), A4 (OGOMG, VVGOKG), and P3 (PGHH). Among them, the primary and secondary mass spectra of OGSA in A1, OGOMG in A4, and PGHH in P3 are shown in Figure 3.

Effect of the Identified Peptides on Inhibiting Platelet Aggregation Induced by Collagen, Thrombin, and ADP In Vitro
The 11 identified peptides were then synthesized by Fmoc solid-phase synthesis. According to the platelet aggregation inhibition results (Table 1 and Figure S1), in the collagen-induced model only PGPQ showed an appreciable platelet aggregation inhibition rate (65.6 ± 14.1%); the others showed weak inhibition or even the opposite effect. In the thrombin-induced model, EGPAGPA showed an appreciable platelet aggregation inhibition rate (53.9 ± 2.1%), while the others showed no or weak inhibition. Interestingly, all 11 identified collagen peptides showed high inhibition rates of platelet aggregation in the ADP-induced model, including four OG-containing peptides (OG, OGSA, GTOGT, and OGOMG), five PG-containing peptides (PGPK, PGKP, PGPQ, PGQP, and PGHH), and EGPAGPA and VVGOKG. These results suggest that the identified antiplatelet peptides inhibit platelet aggregation through the ADP pathway. Moreover, the OG-containing peptides generally showed higher antiplatelet activity than the PG-containing peptides. The peptide with the highest antiplatelet activity was the tetrapeptide OGSA, whose IC50 was 0.63 mM, significantly lower than that of the dipeptide OG (1.42 mM) (Figure 4A).
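The inhibition rates and IC50 values reported above can be derived from raw aggregation readings. The exact definitions and fitting procedure are not stated in this excerpt, so the sketch below assumes the usual conventions: inhibition (%) = (aggregation of the model group − aggregation of the sample group)/aggregation of the model group × 100, and an IC50 estimated by fitting a four-parameter logistic curve to inhibition versus concentration. All numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition_rate(agg_model: float, agg_sample: float) -> float:
    # Percent inhibition of aggregation relative to the vehicle (model) group.
    return (agg_model - agg_sample) / agg_model * 100.0

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical dose-response data for one peptide (concentration in mM vs. % inhibition).
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
inhib = np.array([10.0, 22.0, 45.0, 63.0, 78.0, 88.0])

params, _ = curve_fit(four_pl, conc, inhib, p0=[0.0, 100.0, 0.6, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.2f} mM")
```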
Tetrapeptide OGSA Inhibited Thrombus Formation In Vivo
OGSA was selected as a representative peptide for evaluating the antithrombotic effect in vivo because of its outstanding inhibition of platelet aggregation in vitro. The FeCl3-induced arterial thrombosis model in rats was used to evaluate the antithrombotic activity of OGSA. OGSA significantly inhibited thrombus formation at doses of 200 and 300 µmol/kg body weight (bw) (Figure 4B). The thrombus weight of the high-dose OGSA group was similar to that of the clopidogrel-treated group (93 µmol/kg bw, positive control). The side effects of OGSA (such as bleeding risk and immune response) were further evaluated in vivo [37]. The prothrombin time (PT), activated partial thromboplastin time (APTT), and thrombin time (TT) of rats after oral administration of OGSA were determined to assess its potential bleeding risk. The results showed that OGSA (200 and 300 µmol/kg bw) did not significantly affect PT, APTT, or TT (Figure 4C), whereas clopidogrel (93 µmol/kg bw) significantly increased TT (p < 0.01). The spleen index (SI) and thymus index (TI) of all rats were also determined. Neither the low dose (200 µmol/kg bw) nor the high dose (300 µmol/kg bw) significantly affected TI or SI (Figure S2), indicating that OGSA had few side effects on immune function in rats.

Molecular Docking of Antiplatelet Peptides with the P2Y12 Receptor
Molecular docking is a simulation method that can help researchers reveal target receptor proteins, binding sites, and the bonds formed by ligand-target interactions [38]. In this paper, we used protein- and ligand-based virtual screening techniques to identify the target protein by comparative evaluation. The in vitro platelet aggregation inhibition results suggested that the identified collagen peptides inhibit platelet aggregation through the ADP-activated signaling pathway. The receptor protein of ADP (P2Y12) and the nodal proteins in the signaling pathway (PLC β2 and PLC β3) were therefore docked with the PG(OG) peptides to identify the target protein. The nine antiplatelet peptides mentioned above and four antiplatelet peptides containing the PG(OG) sequence from previous studies (OGE, PGEOG, VGPOGP, and PGE) [27] were used for molecular docking with AutoDock software. The binding free energy (ΔGbind) is commonly used to describe the interaction between ligands and receptors, and its value is negatively correlated with the strength of binding [39]. The ΔGbind values with P2Y12, PLC β2, and PLC β3 are shown in Table 2: the smaller the ΔGbind, the stronger the binding between the receptor and the ligand. Correlation analysis between ΔGbind and the antiplatelet activities (IC50 values) was carried out using Pearson correlation; the correlation between the ΔGbind of the peptides docked to the P2Y12 receptor and their antiplatelet activities was the strongest (Pearson correlation coefficient, 0.802). From this result we speculated that the antiplatelet peptides containing the PG(OG) sequence might target the P2Y12 receptor, which should be confirmed by further experiments.
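The correlation reported above (Pearson r = 0.802 between ΔGbind and antiplatelet activity) is a direct application of Pearson's correlation coefficient; a minimal sketch using scipy.stats.pearsonr, with hypothetical ΔGbind/IC50 pairs rather than the values from Table 2:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical docking energies (kcal/mol; more negative = stronger binding)
# and IC50 values (mM) for a set of PG(OG)-containing peptides.
delta_g_bind = np.array([-6.62, -6.10, -5.85, -5.40, -5.10, -4.75])
ic50 = np.array([0.62, 0.80, 1.10, 1.90, 2.60, 3.50])

r, p_value = pearsonr(delta_g_bind, ic50)
print(f"Pearson r = {r:.3f}, p = {p_value:.3g}")
# A positive r means weaker binding (less negative ΔGbind) accompanies a higher IC50,
# i.e., lower antiplatelet activity, consistent with the interpretation in the text.
```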
(Table 2 notes: a, peptides containing the PG(OG) sequence reported in a previous study [27]; b, IC50, the half-maximal inhibitory concentration; c, PLC β2 and PLC β3 are the nodal proteins in the Gq-PLCβ signaling pathway, and P2Y12 is the membrane surface receptor of ADP on platelets.)

The possible binding pocket of the peptides (Figure 5A) was determined on the surface of the P2Y12 receptor using the Multi-Channel Surfaces module of Sybyl-X 2.0 software (Tripos Corporation, USA). The Pearson correlation coefficient between the ΔGbind of the peptides binding in pocket 1 and their antiplatelet activities (IC50 values) was the largest (0.612), suggesting that peptides containing the PG(OG) sequence may bind to the P2Y12 receptor in binding pocket 1 (Figure 5B). The molecular docking scores of the 13 antiplatelet peptides containing PG(OG) sequences with the P2Y12 receptor are shown in Table S1. The docking results were mainly evaluated by the total score and the crash and polar values. The total score of 12 peptides was higher than 6.00, indicating that their binding is stable. OGE had the highest antiplatelet activity among the 13 peptides (IC50 = 0.62 mM) and the lowest ΔGbind with the P2Y12 receptor (−6.62 kcal/mol) (Table 2). The docking results showed that OGSA could bind to the receptor via H-bond interactions with the amino acid residues Asn159, Cys97, Ser101, Arg93, and Lys179 in receptor pocket 1 (Figure 5C,D). OGE could bind to the receptor via H-bond interactions with the amino acid residues Met152, Cys97, Ser101, Asn191, and Lys179 (Figure 5E,F). In addition, the molecular docking results of PGPQ, which has lower antiplatelet activity in vitro (IC50 = 3.56 mM), are also shown (Figure 5G,H). PGPQ could bind to the P2Y12 receptor via H-bonds with the amino acid residues Arg93, Cys175, Ser101, Asn191, Tyr259, and Lys280. Across the docking results for all 13 peptides, it is worth noting that the PG(OG)-containing peptides with stronger antiplatelet activity form hydrogen bonds with the sites Cys97, Ser101, and Lys179 in pocket 1 of P2Y12; thus, the activity of these antiplatelet peptides might depend on the key sites Cys97, Ser101, and Lys179.

Discussion
Previous experiments showed that collagen hydrolysate treated with mixed enzymes (Papain, Trypsin, Alcalase), after simulated gastrointestinal digestion and intestinal absorption, can significantly inhibit platelet aggregation [27]. In this study, Alcalase and Protamex® were used separately to prepare collagen hydrolysates instead of the previous enzyme mixture. The Alcalase hydrolysate (AH) showed antiplatelet activity under all three induction conditions (collagen, thrombin, ADP) in vitro, whereas the Protamex® hydrolysate (PH) only inhibited ADP-induced platelet aggregation, indicating that collagen hydrolysates prepared with different enzymes contain different bioactive compounds. AH therefore has greater potential than PH for development as a dietary supplement enriched in antiplatelet peptides. Ten antiplatelet peptides were identified from the absorbate of AH.
It is worth noting that 9 of the 11 identified antiplatelet peptides contain the PG(OG) sequence. Considering the other four OG-containing antiplatelet peptides reported previously [27], we speculated that PG(OG)-containing collagen peptides may commonly have antiplatelet activity in vitro. In the platelet aggregation inhibition assays of the identified peptides, the results across the different inducer models (collagen, thrombin, ADP) showed that the inhibition rates of all 11 peptides in the ADP-induced model (39.8 ± 1.9% to 96.4 ± 2.0%) were markedly higher than in the collagen- or thrombin-induced models. In the latter two models, only a few peptides inhibited platelet aggregation in vitro, and some peptides even showed the opposite effect. Therefore, the antiplatelet mechanism of these peptides is likely related to the signaling pathway by which ADP activates platelets. After the platelet release response is activated, the ADP stored in the platelets is immediately released and activates other platelets. P2Y1 receptors are sparse on rat platelet surfaces; thus, the key membrane receptor in the ADP pathway that causes platelet aggregation is P2Y12 [40]. After activation of the P2Y12 receptor, the Gi protein bound to the receptor inhibits adenylate cyclase (AC), resulting in a decrease in the level of cyclic adenosine monophosphate (cAMP). This reduces the phosphorylation of vasodilator-stimulated phosphoprotein (VASP), which promotes platelet aggregation [4]. The Gq protein bound to the receptors P2Y1, PAR1 (the thrombin receptor), and TP (the TXA2 receptor) activates the phospholipase C (PLC) β family (the Gq-PLCβ signaling pathway). Gq-PLCβ signaling leads to an increase in the level of Ca2+, which also causes platelet aggregation [40]. Thus, the membrane surface receptor P2Y12 and the downstream signaling proteins PLC β2 and PLC β3 were selected as candidates for identifying the target protein by molecular docking with the antiplatelet peptides containing the PG(OG) sequence. As the antiplatelet activity of a peptide depends on its binding capacity for the target protein, we identified the target protein according to the Pearson correlation coefficient, i.e., the correlation between the ΔGbind of these peptides binding to each candidate protein and their antiplatelet activities. The results indicated that the antiplatelet activity of the peptides was most strongly related to their binding capacity for the P2Y12 receptor (Pearson correlation coefficient, 0.802), suggesting that the P2Y12 receptor is the target protein of the PG(OG)-containing peptides. Furthermore, by comparing their binding sites in pocket 1 of P2Y12, we found that the key binding sites responsible for the recognition of the PG(OG) peptides might be Cys97, Lys179, and Ser101. In a previous molecular docking study of the ADP-receptor agonist 2-MeSADP [41], the hydroxyl group on the core structure of the compound formed hydrogen bonds with five sites (Cys97, Lys179, Thr163, Asn159, and His187) in pocket 1 of the P2Y12 receptor. Two of these binding sites (Cys97 and Lys179) are the same as the binding sites of the PG(OG)-containing peptides with the P2Y12 receptor. We therefore speculated that the OG-containing peptides may competitively inhibit the binding of ADP to the P2Y12 receptor in pocket 1, thereby inhibiting ADP-induced platelet aggregation.
Moreover, the positions of the identified peptides in the collagen amino acid sequence of Hypophthalmichthys molitrix are listed in Figure S3 and Table S2 (Hyp is the hydroxylated Pro). The identified sequence PG is especially abundant in the skin collagen of H. molitrix: there are 117 and 99 PG sequences in the α1 and α2 chains of type I collagen, respectively. Therefore, collagen hydrolysates from H. molitrix may have great potential for the development of dietary supplements to prevent cardiovascular diseases in the future.

Conclusions
In conclusion, this study found that AH has greater potential than PH for the development of dietary supplements enriched in antiplatelet peptides. Ten antiplatelet peptides were identified from the absorbate of AH, and nine peptides contained the PG(OG) sequence. Additionally, all PG(OG)-containing collagen peptides inhibited platelet aggregation in vitro, suggesting that PG(OG)-containing peptides may commonly have antiplatelet activity. Furthermore, the target protein of the PG(OG)-containing antiplatelet peptides was identified as the P2Y12 receptor by molecular docking. The key binding sites of PG(OG) in pocket 1 of the P2Y12 receptor might be Cys97, Ser101, and Lys179, which may contribute to the antiplatelet activity of these peptides. In addition, the identified sequence PG(OG) is abundant in the skin collagen of H. molitrix. These newly found insights highlight the potential application of hydrolysates of skin collagen from H. molitrix as dietary supplements to prevent cardiovascular diseases by inhibiting platelet aggregation. In future studies, however, the target proteins of the antiplatelet peptides containing PG(OG) sequences should be further verified experimentally, for example with the drug affinity responsive target stability (DARTS) approach and Western blotting.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/foods10071553/s1. Figure S1: Inhibition rates and IC50 values of the 11 identified peptides on platelet aggregation in vitro; Figure S2: The effect of different doses of OGSA and clopidogrel on the thymus index and spleen index in vivo (mean ± S.D., n = 5); Figure S3: Amino acid sequence of type I collagen of H. molitrix skin; Table S1: Molecular docking scores of 13 antiplatelet peptides with the P2Y12 receptor; Table S2.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
7,418.6
2021-07-01T00:00:00.000
[ "Biology", "Chemistry" ]
Dynamic community detection reveals transient reorganization of functional brain networks across a female menstrual cycle

Sex steroid hormones have been shown to alter regional brain activity, but the extent to which they modulate connectivity within and between large-scale functional brain networks over time has yet to be characterized. Here, we applied dynamic community detection techniques to data from a highly sampled female with 30 consecutive days of brain imaging and venipuncture measurements to characterize changes in resting-state community structure across the menstrual cycle. Four stable functional communities were identified, consisting of nodes from visual, default mode, frontal control, and somatomotor networks. Limbic, subcortical, and attention networks exhibited higher than expected levels of nodal flexibility, a hallmark of between-network integration and transient functional reorganization. The most striking reorganization occurred in a default mode subnetwork localized to regions of the prefrontal cortex, coincident with peaks in serum levels of estradiol, luteinizing hormone, and follicle stimulating hormone. Nodes from these regions exhibited strong intranetwork increases in functional connectivity, leading to a split in the stable default mode core community and the transient formation of a new functional community. Probing the spatiotemporal basis of human brain–hormone interactions with dynamic community detection suggests that hormonal changes during the menstrual cycle result in temporary, localized patterns of brain network reorganization.

INTRODUCTION
The application of network science techniques to the study of the human brain has revealed a set of large-scale functional brain networks that meaningfully reorganize both intrinsically and in response to external task demands (Bassett & Sporns, 2017). One technique, dynamic community detection (DCD), has emerged as a powerful tool for conceptualizing and quantifying changes in mesoscale brain network connectivity patterns by identifying sets of nodes (communities) with strong intracommunity connections (Newman, 2006), enabling identification of communities that persist or change over time. DCD complements other statistical approaches used in fMRI data analysis by identifying when functionally coupled brain regions undergo sufficiently large changes in connectivity to warrant reassignment to separate functional communities. Additionally, this method provides an interpretable summary of whether strongly connected sets of brain regions undergo transient, but significant, changes that could be missed when time-averaging data within and between sessions. This method is particularly suited for examining relationships between brain dynamics and physiological variables that vary over relatively short timescales, such as sex hormone fluctuations over the human menstrual cycle. A typical cycle, occurring every 25–30 days, is characterized by significant rises in estradiol (∼8-fold; here, estradiol refers to 17β-estradiol, the major form of estrogen in mammals) and progesterone (∼80-fold), both of which are powerful neuromodulators that have a widespread influence on the central nervous system (Galea, Frick, Hampson, Sohrabji, & Choleris, 2017).
Converging evidence from animal studies has established sex hormones' influence on regions supporting higher order cognition, including the prefrontal cortex (PFC) and hippocampus (Frick, 2015; Wang, Hara, Janssen, Rapp, & Morrison, 2010). Within these regions, for example, estradiol enhances spinogenesis and synaptic plasticity while progesterone largely abolishes this effect (Hara, Waters, McEwen, & Morrison, 2015; Woolley & McEwen, 1993). Importantly, sex hormone receptors are expressed broadly throughout the cerebellum and cerebrum, suggesting that whole-brain effects might be observed beyond the regions targeted in these preclinical studies. Human neuroimaging studies have demonstrated that sex hormones influence brain activity across broad regions of cortex (Berman et al., 1997; Jacobs & D'Esposito, 2011). Further, a handful of studies have demonstrated that menstrual cycle stage shapes resting-state functional connectivity (rs-fc; Arélin et al., 2015; Lisofsky et al., 2015; Petersen, Kilpatrick, Goharzad, & Cahill, 2014; Weis, Hodgetts, & Hausmann, 2019). However, these studies typically involve group-based cross-sectional comparisons or sparse-sampling (two to four time points) designs that are unable to capture transient day-to-day relationships between sex hormones and functional brain dynamics, and this relatively low temporal resolution has led to inconsistencies in the literature (Hjelmervik, Hausmann, Osnes, Westerhausen, & Specht, 2014). Therefore, new approaches are needed that can address these spatial and temporal limitations, as doing so will provide novel perspectives on human brain-hormone interactions.

Recently, Pritschet et al. (2020) applied a "dense-sampling" approach (an experimental paradigm in which phenotypic measures are repeatedly collected within an individual over time; Laumann et al., 2015; Poldrack, Laumann, & Koyejo, 2015) to a naturally cycling female who underwent 30 consecutive days of brain imaging and venipuncture to capture rs-fc variability over a complete menstrual cycle (Figure 1). The authors found that day-to-day fluctuations in estradiol were associated with widespread increases in rs-fc across the whole brain, with progesterone showing an opposite, negative relationship. Using time series modeling and graph theoretical analysis, they also found that estradiol drives variation in topological network states, specifically within-network connectivity of default mode and dorsal attention networks. These findings have important implications for the field of network neuroscience, where dense-sampling, deep-phenotyping approaches have emerged to aid in understanding sources of intra/inter-individual variability in functional brain networks over days, weeks, months, and years (Gratton et al., 2018; Poldrack et al., 2015).

Figure 1. 28andMe dataset. (A) Subject LP (naturally cycling female, age 23) participated in a month-long "dense-sampling" experimental protocol to provide a multimodal, longitudinal dataset referred to as 28andMe. For 30 consecutive days, the subject completed assessments of diet, mood, and sleep, provided blood samples to examine serum hormone concentrations, and underwent a 10-minute resting-state fMRI scan. (B) For each resting-state scan, functional connectivity matrices were constructed by calculating the pairwise mean magnitude-squared coherence between regions. The result is a 415 × 415 × 30 data structure, in which each entry indicates the coherence between two nodes on a given day. (C) The brain was parcellated into 415 regions that were assigned to one of nine networks based on previously identified anatomical and functional associations (Schaefer et al., 2018). Colors indicate regional network membership. In a follow-up experiment, the participant repeated the procedures while on a hormonal regimen (0.02 mg ethinyl-estradiol, 0.1 mg levonorgestrel, Aubra, Afaxys Pharmaceuticals), which she began 10 months prior to the start of data collection (Taylor et al., 2020).
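Figure 1B describes building each day's 415 × 415 connectivity matrix from pairwise magnitude-squared coherence between parcellated time series. A minimal Python sketch of that computation follows; the frequency band over which coherence is averaged, the Welch segment length, the repetition time, and the small number of regions in the demo are assumptions for illustration, not values taken from the paper.

```python
import numpy as np
from scipy.signal import coherence

def coherence_matrix(ts, fs, fmin=0.01, fmax=0.1, nperseg=64):
    """ts: array of shape (n_regions, n_timepoints). Returns a symmetric
    (n_regions, n_regions) matrix of band-averaged magnitude-squared coherence."""
    n = ts.shape[0]
    mat = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(ts[i], ts[j], fs=fs, nperseg=nperseg)
            band = (f >= fmin) & (f <= fmax)
            mat[i, j] = mat[j, i] = cxy[band].mean()
    return mat

# Hypothetical demo: 20 regions (the real data have 415) over a 10-minute scan
# sampled at an assumed TR of 0.72 s (fs ~ 1.39 Hz, ~833 volumes).
rng = np.random.default_rng(0)
daily_ts = rng.standard_normal((20, 833))
conn = coherence_matrix(daily_ts, fs=1 / 0.72)
print(conn.shape)  # (20, 20); stacking 30 such matrices gives the n x n x 30 structure
```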
Pritschet and colleagues' approach identified node-averaged trends in rs-fc changes within canonical functional networks across the cycle, but questions remain regarding whether and where functional reorganization takes place between large-scale networks. As changes in edge weight can result in the formation of functional "communities" not captured by traditional rs-fc methods, complementary approaches are needed to characterize trends in brain connectivity at intermediate spatial and temporal scales. Examining mesoscale networks has further revealed fundamental principles of functional brain networks, such as the modular, integrated architecture underpinning flexible task performance (Bertolero, Yeo, & D'Esposito, 2015; Khambhati, Sizemore, Betzel, & Bassett, 2018). Additionally, a better understanding of mesoscale connectivity may provide an avenue for improving personalized medicine by increasing the efficacy of targeted therapeutic interventions (Gu et al., 2015).

Here, we applied DCD to examine whole-brain dynamics in relation to sex hormone fluctuations across a menstrual cycle. Our results reveal that a stable set of "core" communities persists over the course of a menstrual cycle, primarily consisting of nodes belonging to distinct a priori defined functional-anatomical networks, namely visual, somatomotor, attention, default mode, and control networks. Though these core communities were largely stable, nodes belonging to limbic, subcortical, attention, and control networks changed community affiliation (referred to as flexibility) at higher rates than expected compared with a null hypothesis. DCD also identified a transient split of the default mode network (DMN) core into two smaller subcommunities concurrent with peaks in estradiol, luteinizing hormone (LH), and follicle stimulating hormone (FSH) levels defining the ovulatory window. This community split was driven by strong increases of within-network integration between prefrontal nodes of the DMN, which subsided immediately after the ovulatory window. The default mode, temporoparietal, limbic, and subcortical networks also exhibited significantly increased flexibility during ovulation, suggesting a role for estradiol, LH, and FSH in regulating localized, temporary changes in regional connectivity patterns. Importantly, this reorganization was not present in a follow-up study in which the same participant was placed on an oral hormonal regimen that prevented ovulation. Taken together, while a large degree of functional brain network stability was observed across the menstrual cycle, peaks in sex hormones resulted in temporary brain network reorganization, suggesting that sex hormones may have the ability to rapidly modulate rs-fc on shorter timescales than previously documented.

RESULTS
A single female underwent brain imaging and venipuncture for 30 consecutive days.
For each session, the brain was parcellated into 400 cortical regions from the Schaefer atlas and 15 subcortical regions from the Harvard-Oxford atlas (Figure 1C), and 415 × 415 functional association matrices were constructed via magnitude-squared coherence (Schaefer et al., 2018). Dynamic community detection was applied to these data, revealing a stable set of communities that persist over the course of a menstrual cycle. However, significant transient changes in community structure occurred within the default mode network during the ovulatory window, concomitant with peaks in estradiol, luteinizing hormone, and follicle stimulating hormone.

Stable Functional Cores Persisted Over the Course of One Menstrual Cycle
The degree to which functional brain network connectivity changes over the course of a human menstrual cycle has yet to be fully characterized. Here, dynamic community detection (also referred to as multislice or multilayer modularity maximization; Bassett et al., 2013) consistently identified four functional communities that were largely stable in a naturally cycling female over 30 consecutive days. In this context, "community" refers to a set of nodes whose intraset connections are significantly stronger than would be expected when compared with an appropriate null model. A representative example of this consensus temporal community structure (the community designation that best matches the output of 150 runs of the nondeterministic community detection algorithm) is shown in Figure 2C. This structure was conserved over a range of community detection parameter values that, roughly speaking, must be defined to set the "spatial" and "temporal" resolutions of community identification (see the Methods section for a detailed description). Across all temporal resolutions considered here, consensus community partitions with a spatial resolution parameter 0.97 ≤ γ ≤ 1.015 possessed exactly four communities (a partition is the set of community assignments for all nodes resulting from dynamic community detection). For the standard parameter choice (temporal and spatial resolution parameters both set to 1), the four identified communities had distinct compositional characteristics. These communities were largely bilaterally symmetric, with analogous brain regions in each hemisphere assigned to the same community 71% of the time. The four communities correspond roughly to a visual core, a default mode core, a control core, and a somatomotor-attention core. The compositions of these four communities are shown in Figure 3A. The composition value was calculated by summing the total number of instances in which a node belonging to an a priori functional-anatomical network (Schaefer et al., 2018) also belonged to the community identified in the consensus community partition. The core communities identified here were named based on the highest representation of nodes belonging to a priori functional networks. The visual core was 80% composed of visual network nodes and had a median size of 52 nodes per day.

Figure 2. Dynamic community detection identified changing modular structure over time at multiple scales. (A) A toy network example illustrates the dynamic community detection algorithm. For each time point, every node is assigned to a community so as to maximize the strength of intracommunity connections relative to intercommunity links while also taking community assignments over time into account (Eq. 1). In this case, three communities are identified and denoted by color.
(B) To assess temporal structure in the 28andMe resting-state fMRI data, community assignments were calculated for a range of parameter values. In this procedure, two parameters, ω and γ, specify the temporal and spatial scales of analysis, respectively. After performing 150 runs of the community detection algorithm for each parameter combination, the statistical significance of each community partition relative to a random null model was calculated. The color of each entry in the heat map indicates the proportion of communities at that parameter combination that are significant at the p < 0.05 level. (C) Consensus partition structure varied according to the choice of resolution parameters. The example network community structure (left) changes at each time point, with node community assignment given by color on the y-axis and time indicated on the x-axis. For three different parameter combinations (outlined in red, blue, and green, respectively), the consensus partitions varied in the total number of communities identified, ranging from 4 to 15, with more communities identified when the temporal resolution was low and the spatial resolution was high.

The default mode core consisted of 56% DMN nodes and approximately 10% each of control, limbic, and temporoparietal network nodes, and contained a median of 133.5 nodes per day. The control core consisted of 48% control and 28% dorsal attention network nodes and contained a median of 133 nodes per day.

Figure 3. Dynamic community detection uncovered stable cores across a complete menstrual cycle. (A) Four core communities (y-axis) were consistently identified in the 28andMe dataset across spatial and temporal resolution parameter values. For these parameter combinations, the compositions of the visual, default mode, control, and somatomotor-attention network cores are shown as a heat map, with color corresponding to the percentage of nodes in a community belonging to a functional-anatomical network. (B) The four networks that constituted the hubs of the core communities possessed stable pairwise connectivity between nodes across days. Scatterplots show the day-to-day correspondence between edge weights for all of the nodes of the somatomotor, default mode, temporoparietal, and visual networks on days t and t + 1. These network edges had Pearson correlation coefficients of 0.379, 0.573, 0.590, and 0.538, respectively. (C) The subcortical, limbic, and dorsal attention networks exhibited the highest median node flexibility. Top: Normalized flexibility values for each node over the entire cycle are plotted as points, with color indicating network affiliation. Thick horizontal lines on box plots indicate median values. A flexibility value of 1 indicates that a node changes community assignment at each possible time point, whereas a value of 0 indicates that the node never changes community assignment. Bottom: A 95% cutoff value is calculated using the flexibility values for each node over all 150 community detection runs. For each functional-anatomical network, the blue bar indicates the number of nodes belonging to that network which have flexibility values above the cutoff threshold. The red bars indicate the proportion of nodes in each network that surpass the cutoff value (i.e., the value for each blue bar is normalized by the number of nodes in the network). Once again, limbic, subcortical, dorsal attention, and control networks contained the highest proportion of highly flexible nodes.
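The Figure 2 caption refers to a quality function (Eq. 1) that is not reproduced in this excerpt. Assuming the authors follow the standard multilayer modularity formulation of Mucha et al. (2010), as cited through Bassett et al. (2013), the quantity being maximized takes the form

Q = \frac{1}{2\mu} \sum_{ijsr} \left[ \left( A_{ijs} - \gamma_s \frac{k_{is} k_{js}}{2 m_s} \right) \delta_{sr} + \delta_{ij}\,\omega_{jsr} \right] \delta(g_{is}, g_{jr}),

where A_{ijs} is the coherence between nodes i and j on day (layer) s, k_{is} is the strength of node i in layer s, 2m_s is the total edge weight in layer s, \gamma_s is the spatial resolution parameter, \omega_{jsr} is the interlayer coupling linking node j between layers s and r (the temporal resolution parameter), g_{is} is the community assignment of node i in layer s, \delta is the Kronecker delta, and 2\mu normalizes by the total edge weight across layers. This is a sketch of the commonly used formulation, not a verbatim reproduction of the paper's Eq. 1.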
Finally, the somatomotor-attention core was composed of 53% somatomotor, 27% salience/ventral attention, and 13% dorsal attention network nodes and had a median size of 97 nodes per day. Importantly, for all parameter combinations in which four communities were detected, the composition of these communities was consistent (Supporting Information). These community partitions were also stable across the entire menstrual cycle. Specifically, 315 of the 415 nodes (75.9%) did not change community affiliation across the 30-day experiment. Taken together, these results suggest the presence of a stable solution to the dynamic community detection algorithm and a reliable coarse-grained community architecture present in the data. In several functional-anatomical networks, there was little to no modification of network architecture over time; for instance, greater than 85% of nodes in each of the somatomotor, default mode, temporoparietal, and visual networks did not change community affiliation over the entire menstrual cycle. The strong day-to-day correlations between edge weights in these networks (Figure 3B) reinforce the existence of these stable cores.

Functional-Anatomical Networks Exhibited Distinct Patterns of Flexibility
Though network community structure was stable over a complete menstrual cycle when classifying nodes into four communities, specific nodes did change community affiliation at levels above chance when modifying the sensitivity of the community detection algorithm. Specifically, when γ, the spatial resolution parameter, was increased, the dynamic community detection algorithm subdivided the four core communities into smaller communities, providing a finer grained classification of subnetwork structure. At an intermediate parameter combination (ω = 0.9, γ = 1.055), 10 communities significant at the p < 0.05 level were identified over the course of the experiment, as visualized in Figure 2C (blue outlines). The subsequent analysis uses community partitions at this parameter combination, but the results were consistent across a range of neighboring parameter values (Supporting Information). This "higher resolution" partition revealed trends in functional organization over time that were not observable with coarser partitions. First, inspecting the median flexibility value, or the proportion of times a node changed community affiliation out of the total possible number of changes, demonstrates that functional-anatomical networks possessed distinct flexibility distributions (Figure 3C, top). The limbic, subcortical, dorsal attention, and control networks were significantly overrepresented in terms of highly flexible nodes relative to a random null hypothesis (Figure 3C, bottom). The largest fine-scale community reorganization occurred on experiment Day 22 and persisted until Day 24 (Figure 4A). Across these days, 65 nodes split from the default mode core community to transiently form a small, strongly connected community. This was one of only two large-scale reorganization events detected during the experiment; the other occurred on Day 9, when 59 nodes (yellow in Figure 4A) changed community affiliation. All 59 nodes involved in this event also changed community affiliation on Day 22.
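Node flexibility, as defined above (the proportion of possible time steps at which a node changes community assignment), can be computed directly from a node-by-day matrix of community labels produced by dynamic community detection. A minimal sketch, using a hypothetical label matrix rather than the study's consensus partitions:

```python
import numpy as np

def node_flexibility(assignments: np.ndarray) -> np.ndarray:
    """assignments: (n_nodes, n_days) integer community labels.
    Returns, for each node, the number of label changes divided by the
    number of possible changes (n_days - 1)."""
    changes = assignments[:, 1:] != assignments[:, :-1]
    return changes.sum(axis=1) / (assignments.shape[1] - 1)

# Hypothetical example: 415 nodes over 30 days, drawn from 10 communities.
rng = np.random.default_rng(1)
labels = rng.integers(0, 10, size=(415, 30))
flex = node_flexibility(labels)
print(flex.mean(), flex.min(), flex.max())  # 0 = never changes, 1 = changes every day
```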
Interestingly, 31 (48%) of the nodes in the community that emerged on Day 22 belonged to the DMN, 12 nodes (19%) belonged to the temporoparietal network, and 9 (14%) were limbic regions, as defined by functional-anatomical atlases (Jenkinson, Beckmann, Behrens, Woolrich, & Smith, 2012; Schaefer et al., 2018; Figure 5A). The functional-anatomical network memberships of the node-node pairs exhibiting the strongest increases in coherence (top 5%) indicated that enhanced connectivity between DMN nodes drove this community split, as opposed to DMN nodes being "converted" to a new community via increased connectivity to non-DMN regions (Supporting Information). More specifically, nodes within prefrontal regions belonging to DMN subnetwork B drove this reorganization event, as 104 of the 466 (22%) strongest increases in coherence occurred between nodes in this subnetwork, despite DMN subnetwork B containing only 32 nodes (8% of the total nodes).

Figure 4. Fine-grained community partitioning revealed a bifurcation in the default mode core during ovulation. (A) When the spatial resolution parameter (which alters the size of communities identified by dynamic community detection) was increased from the standard value, the four core communities identified previously were subdivided into smaller subcommunities (reproduced from Figure 2C). Here, a split in the default mode core community (light blue) appeared at Day 22 (red), concomitant with ovulation and a spike in sex hormones. This community (red) rejoined the default mode core on Day 26. For illustrative purposes, only the consensus partition for one parameter value is shown, but this trend was consistent across nearby parameter combinations (Supporting Information).

Figure 5 (caption excerpt). The edges that exhibited large weight changes from Day 21 to Day 22 (top 5% of changes, left) were predominantly within-network connections between DMN nodes (104/466). Examining subnetwork structure reveals that all of the strongly enhanced connections between nodes in the DMN belonged to subnetwork B, indicating that this subnetwork, which consists of regions in prefrontal cortex, drove the default mode core community bifurcation at ovulation.

Network Reorganization Timing Coincided With Peaks in Hormone Levels During Ovulation
Global flexibility was higher (Wilcoxon rank-sum test, p < 0.05) during ovulation (Days 23-25) than during the early follicular or luteal phases. Specifically, global mean flexibility during the ovulatory window was 0.10, whereas flexibility during the follicular and luteal phases was 0.05 and 0.04, respectively. The flexibility of individual brain regions during these phases is shown in Figure 4B. Note that while several nodes exhibit high flexibility across all three phases, global flexibility and network-specific mean flexibility are still relatively low (as seen in Figures 3C and 6A) because the majority of nodes rarely change community affiliation. Mean flexibility of each network over a five-day sliding window is depicted in Figure 6A. The DMN, temporoparietal, subcortical, and limbic networks exhibited peaks in flexibility at Day 23 of the experiment, coincident with the peaks in estradiol, LH, and FSH, which are hallmark signals of the ovulatory window (Figure 6B).
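The five-day sliding-window network flexibility just described, and its rank correlation with hormone levels reported in the next section (Spearman correlations, Bonferroni corrected), can be sketched as follows. How windows are aligned with the hormone series and the exact correction scheme are assumptions made for illustration; all data are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def sliding_mean_flexibility(assignments, window=5):
    """assignments: (n_nodes, n_days) community labels for one network's nodes.
    Returns the mean flexibility within each full window of consecutive days."""
    changes = (assignments[:, 1:] != assignments[:, :-1]).astype(float)
    n_windows = assignments.shape[1] - window + 1
    return np.array([changes[:, s:s + window - 1].mean() for s in range(n_windows)])

# Hypothetical data: 91 DMN nodes over 30 days, plus a daily estradiol series.
rng = np.random.default_rng(2)
dmn_labels = rng.integers(0, 10, size=(91, 30))
estradiol = rng.gamma(2.0, 50.0, size=30)

flex = sliding_mean_flexibility(dmn_labels)                     # 26 windows for 30 days
est_win = np.array([estradiol[s:s + 5].mean() for s in range(len(flex))])

rho, p = spearmanr(flex, est_win)
n_networks = 9                                                  # Bonferroni over nine networks
print(f"Spearman rho = {rho:.2f}, corrected p = {min(p * n_networks, 1.0):.3g}")
```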
Figure 6. Community reorganization was temporally localized to ovulation. Changes in community assignment (A) were coordinated and closely tracked the timing of spikes in estradiol concentrations (B). Default mode, limbic, subcortical, and temporoparietal networks exhibited peaks in flexibility on Day 23, indicating brain-wide functional reorganization during the ovulatory window. These same networks also exhibited elevated flexibility between Days 5 and 10, during the secondary estradiol peak. The pattern of flexibility shown here corresponds to the network reorganization observed for dynamic community detection performed with the parameter combination ω = 0.9, γ = 1.055 (blue outline in Figure 2). Here, flexibility is calculated over a five-day sliding window.

To determine whether the bifurcation of the default mode core community was significantly associated with sex hormones, we compared functional-anatomical network flexibility values with serum hormone levels. To assess the temporal relationship between network flexibility values and sex hormones, correlations between each pair of time series were calculated. The default mode, limbic, salience/ventral attention, somatomotor, subcortical, and temporoparietal networks had significant Spearman rank correlations greater than 0.6 with estradiol (where a maximum value of 1 indicates perfect rank correlation and 0 indicates no correlation; Bonferroni-corrected at p < 0.05). No other significant positive network flexibility-hormone correlations were identified (Supporting Information). Next, to determine whether these reorganization events are uniquely related to the intrinsic hormonal dynamics that occur across a menstrual cycle, we conducted an identical analysis on a follow-up dataset in which the same individual repeated the daily protocol (30 consecutive days of sampling) one year later. During this follow-up study, the participant was placed on a hormonal regimen that disrupted endogenous sex hormone production and prevented ovulation from occurring (see Pritschet et al., 2020; Taylor et al., 2020). Under this regimen, DCD identified the same four stable cores found in the original experiment, but no large-scale reorganization was observed (Supporting Information).

DISCUSSION
In this study, we applied DCD to data from a densely sampled female who underwent 30 consecutive days of brain imaging and venipuncture to investigate the extent of intrinsic spatiotemporal functional reorganization over a menstrual cycle. We identified four stable community cores across the cycle, represented here as visual, somatomotor, default mode, and control network cores; the strongest exception to this stability occurred simultaneously with peaks in estradiol, LH, and FSH. During this event, we observed a transient reorganization of the DMN core into a newly formed community, as well as increases in nodal flexibility among prefrontal, limbic, and subcortical nodes. A nearly identical reorganization event occurred during the secondary peak in estradiol. Together, our results suggest that the interplay between the nervous and endocrine systems over a menstrual cycle results in temporary, localized patterns of brain network reorganization. These results highlight DCD as a new avenue for investigating the intricate relationship between sex hormones and human brain dynamics.
Dynamic Community Detection Characterizes Network-Specific Functional Stability Across a Menstrual Cycle
Dense-sampling, deep-phenotyping studies offer new ways to investigate intra/inter-individual variability in functional brain networks by identifying features of rs-fc that are stable traits within an individual or that change in conjunction with biological factors and state-dependent variables (Gratton et al., 2018; Poldrack et al., 2015). Recent dense-sampling studies have shown that frontoparietal regions/networks exhibit high degrees of intra-individual rs-fc stability while also being characteristically unique across individuals, suggesting that these higher order regions may be especially critical for uncovering individual differences in brain function and improving personalized medicine (Gratton et al., 2018; Horien, Shen, Scheinost, & Constable, 2019). Our findings provide new insight into the ongoing exploration of stability within functional brain networks. With the exception of limbic and subcortical networks, network nodes were highly stable, changing affiliations fewer than 10% of the time on average (Figure 3C). Therefore, our results align with previous research suggesting a high degree of network stability in resting-state networks in individuals over time (Gratton et al., 2018; Hjelmervik et al., 2014; Horien et al., 2019; Poldrack et al., 2015). In contrast to this overall stability, several highly flexible nodes were identified. Control subnetwork C, encompassing posterior cingulate cortex/precuneus regions, was the most flexible functional subnetwork identified, with 10 of the 12 nodes exhibiting significantly higher than expected flexibility (Supporting Information). Limbic and subcortical networks displayed intermediate levels of flexibility. Regions from these systems receive input from and project to many cortical areas and are implicated in functions such as sensorimotor integration via the cortico-basal ganglia-thalamo-cortical loop; therefore, the high degree of flexibility observed here may reflect the tendency of these systems to serve as relays between functionally segregated communities. Particular changes in rs-fc were significantly related to the sharp rises in sex hormones seen during ovulation. Here, we observed a spatially specific transient reorganization of the DMN, during which nodes from the temporoparietal, limbic, subcortical, and default mode networks split from the default mode core to form a short-lived community for three days before rejoining the original core community. Notably, a nearly identical event occurred on Day 9 of the experiment, when 59 of the 65 nodes that changed community affiliation during ovulation merged with the default mode core. This event occurred concurrent with a secondary peak in estradiol and involved networks (DMN, temporoparietal, limbic, subcortical) whose flexibility values were significantly associated with estradiol levels, further implicating hormone-specific modulation of functional connectivity between these networks. Using time-lagged analyses, Pritschet and colleagues reported that within-network connectivity of the DMN was regulated by previous states of estradiol. Here, we expand on this finding and identify a subnetwork of the DMN that is likely driving this reorganization.
Regions constituting this new community are located in PFC, an area exquisitely sensitive to sex steroid hormones (Shanmugan & Epperson, 2014) where, for instance, nearly 50% of pyramidal neurons in the dorsolateral PFC (dlPFC) express ER-alpha (Wang et al., 2010). Importantly, this coordinated reorganization was not observed in a follow-up experiment in which typical hormone production patterns were disrupted (Supporting Information). Together, this presents the possibility that endocrine signaling may, in part, regulate intrinsic brain dynamics within the frontal cortex. Neurobiological Interpretations of Sex Hormones on PFC Function Cross-species investigations have established estrogen's ability to shape the PFC (Galvin & Ninan, 2014;Hara et al., 2016;Jacobs & D'Esposito, 2011;Jacobs et al., 2017;Shanmugan & Epperson, 2014). In rodents, estradiol increases fast-spiking interneuron excitability in deep cortical layers (Clemens et al., 2019); in nonhuman primates, estradiol treatment increases dendritic spine density in dlPFC neurons (Hao et al., 2006) and this potentiation is observed only if the treatment is administered in the typical cyclical pattern observed across a menstrual cycle. Human brain imaging studies have also implicated estradiol in enhancing the efficiency of PFC-based circuits. In cycling women performing a working memory task, PFC activity is exaggerated under low estradiol conditions and reduced under high estradiol conditions (Jacobs & D'Esposito, 2011). Similarly, when estradiol declines across the menopausal transition, working memory-related PFC activity becomes more exaggerated despite no differences in task performance (Jacobs et al., 2017). Examining rs-fc across the cycle, Petersen and colleagues found that women in the late follicular stage (encompassing the ovulatory window) showed increased coherence within DMN and executive control networks compared with those in luteal stages (Petersen et al., 2014). Our findings extend this body of work by demonstrating that PFC nodal flexibility tracks significantly with sharp shifts in estradiol, which may support the brain's ability to reorganize at the mesoscale level. While future studies are needed to establish a mechanistic link between endocrine signaling and large-scale network reorganization, we present two possible neurobiological interpretations. One mechanism of action may be through estradiol's interaction with the dopaminergic system. The PFC is innervated by midbrain dopaminergic neurons that enhance the signal-to-noise ratio of PFC pyramidal neurons and drive cortical efficiency (Williams & Goldman-Rakic, 1995). In turn, estradiol enhances dopamine release and modifies the basal firing rate of dopaminergic neurons, potentially having the ability to alter mesoscale network integration. Second, although coherence is robust to changes in the hemodynamic response (Sun, Miller, & D'Esposito, 2004), sex hormones influence cerebrovascular function (Krause, Duckles, & Pelligrino, 2006;Krejza, Rudzinski, Arkuszewski, Onuoha, & Melhem, 2013). Therefore, the observed changes in rs-fc across the cycle could be due to changes in perfusion rather than alterations in neural activity. Important differences in network stability emerged between naturally cycling and oral hormonal contraceptive conditions. Under naturally cycling conditions, the largest reorganization event occurred during the ovulatory window. 
Although estradiol levels were comparable in Study 2 (Supporting Information), estradiol was decoupled from LH and FSH, progesterone was reduced by 97%, and ovulation did not occur. Therefore, hormone-related changes in DMN subnetwork reorganization might only be present when shifts in endogenous hormones occur in a coordinated fashion. Future studies comparing endocrine states of women over several cycles will help establish the robustness of these differences.

Implications for Cognition and Disease
Several studies have begun utilizing DCD to relate "task-free" and "task-based" functional network reorganization to cognitive performance. High levels of nodal flexibility have been associated with enhanced performance on working memory tasks (Braun et al., 2015), improved learning of a motor task (Bassett et al., 2011), and visual cue learning (Gerraty et al., 2018). Further, sensory regions typically participate in a small number of functional networks during various tasks, whereas "hub" regions in frontal cortex, including precuneus and posterior cingulate gyrus, participate in multiple functional networks (van den Heuvel & Sporns, 2013), indicating that network-specific temporal reconfiguration of functional connectivity has implications for a wide variety of cognitive functions (Mattar, Cole, Thompson-Schill, & Bassett, 2015). Highly flexible nodes were identified in precuneus and posterior cingulate gyrus, with changes in community affiliation occurring simultaneously with sharp peaks in estradiol levels, raising the possibility that hormonal fluctuations could be associated with and facilitate task-based network reorganization. For instance, if high levels of estradiol increase nodal flexibility among hub regions in the PFC, one might predict that performance on PFC-dependent tasks will improve. Further, pregnancy, a period of profound neuroendocrine change, leads to long-lasting gray matter reductions within DMN regions (Hoekzema et al., 2017). The capacity for the brain to fluctuate between integrated and segregated states at rest allows for rapid and efficient transitions to various task states (Kabbara, Falou, Khalil, Wendling, & Hassan, 2017; Shine et al., 2016; Zalesky, Fornito, Cocchi, Gollo, & Breakspear, 2014). Therefore, future work examining whether task-based functional brain networks undergo transient changes in flexibility and community structure, both across the menstrual cycle and during other hormonal transition periods, will be imperative. Examining how large-scale brain networks are disrupted in clinical populations can enhance our understanding of complex neurological disorders (Hallquist & Hillary, 2019), and studies have begun utilizing DCD methods to characterize the spatiotemporal basis of how networks reconfigure across diseases such as epilepsy and autism spectrum disorder (Braun et al., 2016; Martinet et al., 2020). Here, using similar methods, we demonstrate that hormone fluctuations are associated with significant reorganization of the DMN and increased flexibility among several brain networks. Notably, differences in DMN rs-fc emerge among individuals with depression (Greicius et al., 2007) and Alzheimer's disease (Buckner et al., 2009), two conditions that display a sex-skewed prevalence towards women (Nebel et al., 2018). Using the MyConnectome Project, Betzel and colleagues provided evidence that increased network flexibility is associated with positive mood (Betzel, Satterthwaite, Gold, & Bassett, 2017).
Here, network flexibility was highest during the ovulatory window, followed by an immediate decline back to network stability in the luteal phase of the cycle, coincident with traditional rises in negative affect (e.g., premenstrual syndrome; Rubinow & Schmidt, 2006). Although we did not identify a relationship between mood and network flexibility within this participant, cycle-dependent brain network reorganization could play a role in psychiatric conditions observed in some women, particularly those suffering from premenstrual dysphoric disorder. Further, a decline of sex hormones to chronically low states occurs in postreproductive years, decades prior to diagnoses of Alzheimer's disease. Therefore, modeling time-varying community structure in conjunction with endocrine status could shed light on neurological disorders that display prominent sex differences. If fluctuations in sex hormones lead to greater network flexibility, and those in turn shape the brain and behavior (Betzel et al., 2017), hormone therapy that mimics the transient rise and fall of estradiol could provide a line of treatment for individuals experiencing cognitive symptoms in the transition to menopause and/or for those with a heightened risk for dementia.
Limitations and Future Directions
The following limitations should be taken into consideration. First, this study involved densely sampling a single female over one complete menstrual cycle, hindering our ability to generalize these findings to other individuals. Therefore, it is critical for this approach to be extended to a larger and more diverse set of women to establish the consistency of these results while accounting for individual differences. Second, we used a well-established group-based atlas to improve generalizability beyond a single-subject design (Schaefer et al., 2018). However, group-based atlases risk loss of individual-level specificity and could overlook meaningful reconfigurations in parcellations (Salehi et al., 2020). Future work using an individual-derived atlas is needed to confirm whether these results are stable across various analytic pipelines. Third, an ongoing debate in network neuroscience surrounds test-retest reliability and what constitutes a "substantial" amount of data per individual. While some studies suggest that more than 20 min of data per individual is needed (Gratton et al., 2018), others contend that shorter durations of sampling (5-15 min) are sufficient to achieve reliability (Birn et al., 2013; Chen et al., 2015). Repeating this experiment under longer scanning durations (>10 min per day) will be critical for exploring the degree of network stability across the menstrual cycle. Finally, this work considers only between-session rather than within-session network reconfiguration because of the aforementioned concerns about test-retest reliability. However, as previous studies have found meaningful shifts in flexibility across shorter time scales (Braun et al., 2015; Telesford et al., 2016), a natural extension to this work will be to examine within-session network reorganization across the cycle in larger samples of women.
CONCLUSION
In sum, we demonstrate that resting-state functional connectivity is largely stable within an individual over the course of a complete menstrual cycle. The largest exception to this stability occurs around the ovulatory window, during which peaks in sex hormones result in temporary patterns of brain network reorganization largely localized within areas of the default mode network.
Historically, brain-level phenomena resulting from hormone fluctuations have been treated as an unwanted source of variance in population studies and, consequently, studies of this relationship are sparse and underpowered. This work demonstrates that dynamic network methods can reveal important, transient effects of sex hormones that may be overlooked by traditional approaches and provides a novel template for examining the nature of human brain-endocrine relationships.
28andMe Experimental Protocol
Data were collected and preprocessed as reported in Pritschet et al. (2020); the methods are briefly reproduced here. The participant was a right-handed Caucasian female, aged 23 years for the duration of the study. The participant had no history of neuropsychiatric diagnosis, endocrine disorders, or prior head trauma. She had a history of regular menstrual cycles (no missed periods, cycle occurring every 26-28 days) and had not taken hormone-based medication in the 12 months prior to Study 1. The participant gave written informed consent and the study was approved by the University of California, Santa Barbara, Human Subjects Committee. The participant underwent daily testing for 30 consecutive days, with the first test session determined independently of cycle stage for maximal blindness to hormone status (Study 1). The participant began each test session with a daily questionnaire (9:00 a.m.) followed by a time-locked blood sample collection at 10:00 a.m. (±30 min). Endocrine samples were collected, at minimum, after 2 hr of no food or drink consumption (excluding water). This was followed by a 1-hr MRI session (11:00 a.m.) consisting of structural and functional MRI sequences. Of note, the participant refrained from consuming caffeinated beverages before each test session. One year later (Study 2), the participant repeated the procedures while on a hormonal regimen (0.02 mg ethinyl-estradiol, 0.1 mg levonorgestrel, Aubra, Afaxys Pharmaceuticals), which she began 10 months prior to the start of data collection. A licensed phlebotomist inserted a saline-lock intravenous line into the dominant or nondominant hand or forearm daily to evaluate hypothalamic-pituitary-gonadal axis hormones, including serum levels of gonadal hormones (17β-estradiol, progesterone, and testosterone) and the pituitary gonadotropins luteinizing hormone (LH) and follicle stimulating hormone (FSH). One 10-ml blood sample was collected in a vacutainer SST (BD Diagnostic Systems) each session. The sample clotted at room temperature for 45 min until centrifugation (2,000 g for 10 min) and was then aliquoted into three 1-ml microtubes. Serum samples were stored at −20°C until assayed. Serum concentrations were determined via liquid chromatography-mass spectrometry (for all steroid hormones) and immunoassay (for all gonadotropins) at the Brigham and Women's Hospital Research Assay Core. Initial preprocessing was performed using the Statistical Parametric Mapping 12 software (SPM12, Wellcome Trust Centre for Neuroimaging, London) in MATLAB. Functional data were realigned and unwarped to correct for head motion, and the mean motion-corrected image was coregistered to the high-resolution anatomical image. All scans were then registered to a subject-specific anatomical template created using Advanced Normalization Tools (ANTs) multivariate template construction. A 5-mm full-width at half-maximum (FWHM) isotropic Gaussian kernel was subsequently applied to smooth the functional data.
Further preparation for resting-state functional connectivity was implemented using in-house MATLAB scripts. Global signal scaling (median = 1,000) was applied to account for fluctuations in signal intensity across space and time, and voxelwise time series were linearly detrended. Residual BOLD signal from each voxel was extracted after removing the effects of head motion and five physiological noise components (cerebrospinal fluid + white matter signal). Motion was modeled using a Volterra expansion of translational/rotational motion parameters, accounting for autoregressive and nonlinear effects of head motion on the BOLD signal. All nuisance regressors were detrended to match the BOLD time series. Functional network nodes were defined based on a 400-region cortical parcellation and 15 regions from the Harvard-Oxford subcortical atlas. For each day, a summary time course was extracted per node by taking the first eigenvariate across functional volumes. These regional time series were then decomposed into several frequency bands using a maximal overlap discrete wavelet transform. Low-frequency fluctuations in wavelets 3-6 (0.01-0.17 Hz) were selected for subsequent connectivity analyses. Finally, we estimated the spectral association between regional time series using magnitude-squared coherence: this yielded a 415×415 functional association matrix each day, whose elements indicated the strength of functional connectivity between all pairs of nodes (FDR-thresholded at q < 0.05).
Dynamic Community Detection and Analysis
A multilayer tensor (415 × 415 × 30) was constructed from the association matrices described above for network analysis. Each layer corresponded to the strictly positive, weighted, FDR-thresholded rs-fc association matrix for the corresponding day of the experiment. Interlayer links were added only between adjacent layers. Communities in resting-state connectivity were identified by maximizing the multislice modularity

Q = \frac{1}{2\mu} \sum_{ijlr} \left[ \left( A_{ijl} - \gamma \frac{k_{il} k_{jl}}{2 m_l} \right) \delta_{lr} + \delta_{ij}\,\omega \right] \delta(g_{il}, g_{jr}),

where μ is the total edge weight in the network, i and j index nodes in slices l and r, A is the adjacency matrix containing edge weights between nodes and slices, γ is the structural resolution parameter, k_il is the strength of node i in slice l, m_l is the total nodal strength in slice l, δ is the Kronecker delta, ω is the temporal resolution parameter, and g is the community assignment index (Bassett et al., 2013). Community assignments that maximize modularity were determined 150 times over a grid of parameter values (γ, ω) = [0.97, 1.07] × [0.8, 1.5] using the genlouvain function from Jeub et al. in MATLAB 2019a (Jeub, Bazzi, Jutla, & Mucha, 2011). From these community assignments, the consensus partition for each parameter combination was determined using the consensus_similarity function from the Network Connectivity Toolbox (NCT, http://commdetect.weebly.com/). Node flexibility is defined as the proportion of times a node changes community assignment out of all possible opportunities to change its assignment. Thus, a flexibility value of 1 indicates that a node changes community membership at every time step and a value of 0 indicates that it never changes communities. Partition significance, node flexibility, and persistence were also calculated using functions from the NCT (Bassett et al., 2011). Cross-covariance values were calculated and statistical tests were performed using built-in MATLAB functions.
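To make the connectivity and flexibility definitions above more concrete, the following minimal sketch computes a magnitude-squared-coherence association matrix from regional time series and the per-node flexibility across daily community labels. It is not the study's MATLAB pipeline: the sampling rate, window length, band handling, and the function names (coherence_matrix, node_flexibility) are assumptions chosen purely for illustration.

```python
# Minimal sketch, assuming: `ts` is a (415 x T) array of regional time series
# for one session and `labels` is a (415 x 30) array of daily community labels.
# fs, nperseg, and the band limits are placeholders, not values from the study.
import numpy as np
from scipy.signal import coherence

def coherence_matrix(ts, fs=1.0, fmin=0.01, fmax=0.17, nperseg=64):
    """Magnitude-squared coherence between all node pairs, averaged over a band.
    fs should be set to 1/TR of the acquisition."""
    n = ts.shape[0]
    fc = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(ts[i], ts[j], fs=fs, nperseg=nperseg)
            band = (f >= fmin) & (f <= fmax)
            fc[i, j] = fc[j, i] = cxy[band].mean()
    return fc

def node_flexibility(labels):
    """Fraction of adjacent-layer transitions in which a node changes community;
    labels has shape (nodes, layers)."""
    changes = labels[:, 1:] != labels[:, :-1]
    return changes.mean(axis=1)
```

In this convention a node that switches community between every pair of adjacent sessions has flexibility 1, and a node that keeps the same label for all 30 sessions has flexibility 0, matching the definition given above.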
Head motion was low (< 130 μm), was not significantly associated with hormone concentrations (all pairwise Pearson correlations p > 0.05 after Bonferroni correction), and was nearly identical between Days 22 and 24 of the experiment (when reorganization occurred), suggesting that head motion is not a confounding factor when considering community reconfiguration. On the day of the experiment with the fewest connections (Day 26), the network had an edge density of 0.9317 (i.e., 93.17% of possible edges had nonzero values), and the median density was 0.9713. This represents a 4% difference in density, and density was not significantly correlated with hormone levels, so we do not believe the community detection algorithm was biased by disparities in edge density.
9,591.4
2020-06-30T00:00:00.000
[ "Biology" ]
Investigation of Dark and Bright Soliton Solutions of Some Nonlinear Evolution Equations
In this paper, the generalized Kudryashov method (GKM) is used to find the exact solutions of the (1+1) dimensional nonlinear Ostrovsky equation and the (4+1) dimensional Fokas equation. Firstly, we obtain dark and bright soliton solutions of these equations using GKM. Then, we remark on the results found using this method.
Introduction
During the past years, soliton solutions have been a considerably important issue in biophysics, geophysical sciences, chemical kinetics, optical fibers, space technology, electricity, elastic media, and several other topics in nonlinear sciences. In recent years, many authors have presented methods to find the soliton solutions of NLEEs, such as the (G'/G)-expansion method [1], the exp-function method [2], the tanh method [3], and many more. In this work, GKM [4] will be used to find exact solutions of the (1+1) dimensional nonlinear Ostrovsky equation, given in Eq. (1). Eq. (1) is a model for weakly nonlinear surface and internal waves in a rotating ocean. This equation was introduced by Vakhnenko and Parkes [5]. They showed that Eq. (1) is completely integrable by the inverse scattering method [6]. Then, some authors used various methods to find travelling wave solutions of this equation [7][8][9][10][11]. Secondly, we handle the (4+1) dimensional Fokas equation, Eq. (2). This equation was obtained by Fokas by expanding the Lax pairs of the integrable Kadomtsev-Petviashvili (KP) and Davey-Stewartson (DS) equations to some higher-dimensional nonlinear wave equations [12]. The importance of Eq. (2) suggests that the idea of complexifying time can be considered in the context of modern field theories via the existence of integrable nonlinear equations in four spatial dimensions involving complex time [12]. The layout of this paper is organized as follows. In Sec. 2, we give basic facts of GKM. In Sec. 3, we implement GKM for the (1+1) dimensional nonlinear Ostrovsky equation and the (4+1) dimensional Fokas equation. In Sec. 4, we report the results obtained using this method. Step 1. We consider a nonlinear evolution equation, Eq. (3), and apply the traveling wave transformation u(x, t) = u(ξ), where ξ is the wave variable and c is an arbitrary constant. Eq. (3) is then reduced to a nonlinear ordinary differential equation

P(u, u', u'', u''', \ldots) = 0,   (5)

where the prime denotes differentiation with respect to ξ. Step 2. Suggest that the exact solutions of Eq. (5) can be found in the rational form

u(\xi) = \frac{\sum_{i=0}^{N} a_i Q^i(\xi)}{\sum_{j=0}^{M} b_j Q^j(\xi)},   (6)

where Q = Q(ξ) is an auxiliary function. We highlight that the function Q is a solution of Eq. (7). Taking Eq. (6) into account, we gain the corresponding expressions for the derivatives of u. Step 3. The solution of Eq. (5) can be expressed as in Eq. (11). To compute the values M and N in Eq. (11), which are the pole orders for the general solution of Eq. (5), we proceed comparably to the classical Kudryashov method by balancing the highest order nonlinear terms in Eq. (5), and we can thereby establish a relation between M and N and find their values. Step 4. Substituting Eq. (6) into Eq. (5) yields a polynomial R(Q) of Q. Equating the coefficients of R(Q) to zero, we get a system of algebraic equations. Solving this system, we can identify c and the coefficients a_0, a_1, a_2, ..., a_N and b_0, b_1, b_2, ..., b_M. Thus, we gain the exact solutions of Eq. (5).
Application of GKM to the (1+1) Dimensional Nonlinear Ostrovsky Equation and the (4+1) Dimensional Fokas Equation
In order to solve Eq. (1), we use the travelling wave transform given in Eq. (13), where c is an arbitrary constant. Substituting Eq. (13) into Eq. (1), we get the reduced equation, Eq. (14). After Eq. (1) has been turned into Eq. (14), substituting Eqs.
(6) and (9) into Eq. (14) and balancing the highest order nonlinear terms in Eq. (14), the following relation between M and N is acquired. The exact solution of Eq. (1) is then obtained as in Eq. (16). Substituting Eq. (16) into Eq. (11), we get dark soliton solutions of Eq. (1). In an effort to find travelling wave solutions of Eq. (2), we use the transformation via the wave variables given in Eq. (20), where the wave parameters are arbitrary constants. Substituting Eq. (20) into Eq. (2), we get the reduced equation, Eq. (21). After Eq. (2) has been turned into Eq. (21), substituting Eqs. (6) and (9) into Eq. (21) and balancing the highest order nonlinear terms in Eq. (21), the relation between M and N is acquired. The exact solution of Eq. (2) is then obtained as in Eq. (23). Inserting Eq. (23) into Eq. (11), we get bright soliton solutions of Eq. (2) of sech and csch type in the variables (x, y, z, w, t). Embedding Eq. (26) into Eq. (11), we get a dark soliton solution of Eq. (2) in the variables (x, y, z, w, t). Remark. If we compare the exact solutions of Eq. (1) and Eq. (2) with those reported by other authors, our solution (17) is similar to solution (4.15) in [7]. Also, our solution (24) is similar to solution (10) in [15]. To our knowledge, the other solutions of Eq. (1) reported here are new and cannot be traced in the former literature.
Conclusion
In this paper, we obtain dark soliton solutions of the (1+1) dimensional nonlinear Ostrovsky equation and the (4+1) dimensional Fokas equation using GKM. The applications show that this method can successfully solve these equations. Thus, we can conclude that this method not only has a significant role in studying NLEEs, but is also highly effective in yielding analytical solutions of NLEEs.
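As a small, concrete check of the machinery described above, the following sketch verifies a commonly used GKM auxiliary function symbolically and assembles the rational ansatz for small pole orders. The specific choice Q(ξ) = 1/(1 + e^ξ), the auxiliary ODE Q' = Q² − Q, and the illustrative orders N = 2, M = 1 are assumptions based on standard presentations of the method; the paper's own Eqs. (6)-(7) are not reproduced in the text above.

```python
# A minimal symbolic check of the auxiliary function commonly used in the
# generalized Kudryashov method (GKM). The choice Q(xi) = 1/(1 + exp(xi)) and
# the Riccati-type ODE Q' = Q**2 - Q are assumptions based on the standard GKM.
import sympy as sp

xi = sp.symbols('xi', real=True)
Q = 1 / (1 + sp.exp(xi))          # assumed auxiliary function

# Verify that Q satisfies Q' = Q**2 - Q (the usual GKM auxiliary ODE).
residual = sp.simplify(sp.diff(Q, xi) - (Q**2 - Q))
print(residual)                    # -> 0

# Rational GKM ansatz u(xi) = sum(a_i Q^i) / sum(b_j Q^j) with illustrative
# pole orders N = 2, M = 1 (not the orders determined in the paper).
N, M = 2, 1
a = sp.symbols('a0:%d' % (N + 1))
b = sp.symbols('b0:%d' % (M + 1))
u = sum(a[i] * Q**i for i in range(N + 1)) / sum(b[j] * Q**j for j in range(M + 1))
print(sp.simplify(u))
```

Substituting such an ansatz into the reduced ODE and collecting powers of Q is what produces the algebraic system mentioned in Step 4, from which the coefficients and the wave speed follow.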
1,303.4
2018-01-01T00:00:00.000
[ "Physics", "Mathematics" ]
The Response of Higher Education Institutions to Global, Regional, and National Challenges
Higher education, like many other sectors, is challenged by the global and local socio-economic dynamics influencing our lives, economy, environment, and lifestyle. For instance, currently existing jobs and skills are expected to be replaced by super-fast, artificially intelligent, cloud-based employees. Traditional higher education systems and institutions are eagerly pursuing transformation to cope with current and future demands in skills, teaching and learning, research, technology, and funding, amongst other demanding factors of our world. This paper showcases the University of Bahrain and presents how the University is transforming to address the global, regional, and national challenges it is facing today. This paper describes the key pillars and the key performance indicators of the Transformation Plan 2016–2021 of the University of Bahrain to respond to those challenges. The Transformation Plan is inspired by the Bahrain Economic Vision 2030. Furthermore, the Plan is aligned with the National Higher Education Strategy 2014–2024 and the National Research Strategy 2014–2024. The Transformation Plan is also in alignment with the national endeavors to achieve the United Nations Sustainable Development Goals 2030.
Introduction
Higher Education Institutions (HEIs) face numerous challenges in the 21st century as economic, environmental, demographic, political, labor market, technological, and social and health issues are rapidly changing. HEIs, therefore, need to be responsive to the opportunities and challenges set by these issues, which are shaping the higher education sector and the future of employment. Universities need to have unique responses to these factors in order to maintain their competitiveness and quality of education. The competitiveness of institutes of higher education will depend upon their responsiveness to these variables of change. The University of Bahrain, established in 1986, is the national flagship university of the Kingdom of Bahrain with an enrollment of more than twenty-eight thousand five hundred (28,500) students and over eight hundred (800) faculty members. It comprises nine different colleges offering ten doctoral, twenty-six masters, and forty-seven bachelors programs, in addition to eight postgraduate and fourteen associate diplomas. In 2016, the University of Bahrain launched a strategic plan designed to address the challenges of the 21st century, namely the Transformation Plan 2016-2021. The Plan was developed to incorporate the goals of the Bahrain Economic Vision 2030 (2008) of striving to develop an efficient and effective government, a robust knowledge economy that benefits the people of the Kingdom, and a just and thriving society.
It further integrates the themes of the National Higher Education Strategy 2014-2024 of enhancing the overall quality of higher education in Bahrain by:
• graduating job-ready students, professionally and personally, and enabling them to fulfil their potential and contribute to society;
• aligning Bahrain's higher education sector to meet current and future regional and national priorities;
• improving the linkages between higher education, vocational and continuing education to provide equitable and strategic access;
• leveraging the newest trends in education technology to leapfrog Bahrain's higher education sector;
• creating an entrepreneurship ecosystem for students in Bahrain; and
• leveraging research to enhance the overall competitiveness of Bahrain's economy.
The Plan also incorporates the objectives of Bahrain's National Research Strategy, launched in 2014, to:
• participate in the establishment of a national research governance infrastructure;
• improve public awareness and understanding of research and innovation;
• address national research priorities and strengthen the university's research capacity; and
• strengthen the integration of the university's programs with industry, international research institutions, and entities focused on Bahrain's economic and social priorities.
The Plan is further in alignment with the kingdom's endeavors to achieve the UN Sustainable Development Goals 2030 (UNESCO, 2017). This paper presents the University of Bahrain as a case study to discuss the ways and means in which the University is addressing these challenges through its Transformation Plan. The paper discusses the key pillars of the Plan, as well as the key performance indicators used to assess its progress towards its goals.
Transformation Plan
As new technologies emerge, universities must rethink the way learning and teaching are conducted. Universities not only need to be reactive to the ever-changing external environment, but also to become proactive pioneers providing cutting-edge solutions that address regional and global challenges. Today's educators must meet the needs of a new generation of learners aspiring to thrive and contribute in an increasingly interconnected, complex world. This provides new challenges for academics and demands that universities constantly evolve and innovate to meet the changing needs of academia, employers, and millennial learners. The traditional operations of universities are currently challenged as higher education institutions lose pace with dynamically changing demands. The rapidly changing external environment led by technology is transforming learning, teaching, and research. Students and the wider society require year-round access to high quality programs and flexible modes of delivery. The regional economic context of the Gulf Cooperation Council (GCC) and the challenges arising from the demands to develop and foster the private sector are also challenges that the University must respond to by focusing on producing high-quality, skilled human capital. Gaining international research impact is achieved by targeted and innovative approaches to research through international collaboration. The impact of the University as the national university goes far wider than teaching and research.
The graduates of the University are the number one choice of employers in Bahrain, the University is ranked in the top 500 globally for employer reputation according to the 2019 QS World University Rankings, and it has a flock of alumni who are leading nationally, regionally, and internationally. Economic growth and community expectations drove the University of Bahrain to rethink its approach not just toward students but also to include faculty and staff members, industry, society and stakeholders in terms of brand, marketing, and international profile. The Transformation Plan aims to build a bridge to the future and provide a strategic roadmap to ensure that its graduates are sufficiently prepared and equipped to contribute to the social and economic growth of Bahrain by creating the next generation of leaders, influencers, entrepreneurs and innovators. The Transformation Plan has seven key pillars, which are as follows: 1. World-class Learning and Teaching: to transform learning and teaching philosophy to be responsive to changes in technology, labor market needs and the national and economic priorities. 2. Leading Edge Human Capital: to develop highly knowledgeable and skilled human capital by offering programs that will act as the catalyst for student success. 3. Research with National and Regional Impact: to concentrate on research that will contribute to national and regional priorities in areas such as renewable energy and water and food technologies. 4. A Dynamic, Innovative, and Entrepreneurial Environment: to become a competitive, efficient and entrepreneurial organization that thrives through developing an innovation culture through its people, policies and systems. 5. Local Engagement and International Reputation: to enhance local engagement and international reputation by becoming a driver for collaboration and partnerships. 6. Bahrain's Economic Diversification and Growth: to bridge the skills gap and place the university as the leading talent pool in the region. 7. A Transformative Environment: to transform the university to host a modern, inspiring and technology-led campus. The Transformation Plan focuses on internationalization to: create global citizens, develop relations with industry in order to be more innovative and entrepreneurial, widen its academic specialties, promote lifelong learning, and produce research that has regional and international impact. The Plan focuses on a number of areas, including establishing a Unit for Teaching Excellence and Leadership to advance teaching methods, develop innovative pedagogy, and develop the teaching skills of faculty. Technology is employed not only to enhance learning and teaching, but also to create a smart campus populated with skilled and able professionals. Faculty and staff members alike are equipped with the critical digital skills, which has been enforced by the development of a Digital Literacy Certificate for all members. Sustainable Development Goals and Innovation The pressure of economic growth against failing oil prices and diminishing resources means that economies and organizations of the GCC need to be more dynamic, agile, and productive than ever before. The GCC region faces dwindling natural resources, stretched public funding, and increasing populations. The population is increasingly shifting to living in cities which consequently is putting excess strain on infrastructure of many cities. The population of the GCC has doubled over the past 20 years to reach 51 million in 2015. 
The GCC is one of the most highly urbanized parts of the world with 85% of its populations living in cities today, expected to rise to 90% by 2050 (PWC, 2017). These issues are compounded by the environmental challenges that the GCC currently faces. The Middle East and North Africa (MENA) region as a whole currently faces significant environmental issues, water shortages, drought areas, air pollution, climate change, and rising energy consumption. The lack of access to sufficient clean water threatens in many ways mankind, and can lead to the spread of disease. Water scarcity and pollution threaten agriculture and food production. The Arab Forum for Environment and Development produced a report discussing these challenges to be faced by the region in the next ten years (AFED, 2017). Considering these regional environmental issues, universities in the region need to comply with international and environmental requirements, including policies towards reducing the carbon footprint by moderating carbon emissions and energy consumption, controlling the waste generated, smarter use of air conditioning, promoting recycling as good practice, and introducing transport regulations on student and faculty car usage (endorsing public transport). Moving forward, it is critical that universities must take responsibility for their environmental footprints and aspire to integrate environmental management good practice into daily business. Higher education must aim for eco-friendly campuses by refurbishing, designing, and managing existing and future campuses. In 2016, the University of Bahrain began to subscribe to the Green Metric ranking, ranked 307 globally in 2017 (Green Metric, 2017), in order to monitor its own progress and its key performance indicators towards becoming an environmentally friendly campus (Hamzah, Alnaser, & Alnaser, 2018). As Bahrain's population increases, it is clear that technology is being used in the development of smart towns with energy efficient housing and roads designed to minimize traffic congestion through the use of interconnected traffic signals. The greatest potential savings are in energy consumption. Total household electricity spending in the MENA region is expected to reach approximately $250 billion by 2025. Technology such as connected thermostats and remotely controlled lighting could save around 10-15%, in addition to saving time through automated tasks. Technology plays a significant role in innovation with the potential to address many of the regional challenges. Large investments in innovation, such as the partnership with Amazon Web Services, are testament to the growing importance of innovation on the business and national agendas of countries within the MENA region. Promptly, the University has established one of the first Amazon Academies in the region. The rise of the Fourth Industrial Revolution has created talent mismatches, with 65% of employers and 59% of job seekers in the MENA region believing that a skills gap exists (Bayt.com, 2017). Clearly there is a disconnect between the requirements of new jobs driven by technology and the skill sets of graduates. The Fourth Industrial Revolution is disrupting the way work is being done, how public services are delivered and how economies are being shaped. We live in permanent disruption where the mantra is to innovate or become obsolete. 
According to the Future of Jobs and Skills in the Middle East and North Africa report, (World Economic Forum, 2017) the skills relevant in industry include complex problem solving, critical thinking, creativity, people management, coordinating with others, judgement and decisionmaking, service-oriented, negotiation, and cognitive flexibility. Numerous employer surveys suggest that the majority of students graduating from the education system lack these skills. Graduates need to develop the relevant skills that allow them to compete in a competitive and dynamic job market, as one of the biggest challenges in the region is youth unemployment as a result of rapid change in labor market requirements. Universities must, therefore, revise how they operate, and investment in technology, teaching and career advice is critical to ensure skilled graduates and a productive labor market. The Future of Jobs and Skills in the Middle East and North Africa analysis (World Economic Forum, 2017) found that, by 2020, 21% of core skills in the countries of the GCC will be different compared to skills that were needed in 2015. The report listed the top ten skills needed for 2020 and beyond. Heading this list are skills that universities do not typically prioritize, such as: problem-solving, creativity, teamwork, and people management. Therefore, a new way of delivering higher education is required, a paradigm shift. Digital and related skills are predominately required by employers. The Transformation Plan adopts the premise that all students and teachers should have digital literacy as a minimum requirement. Teaching and learning should be repositioned to encourage inquiry-based learning and to encourage project participation. Assessment of students should focus on application of knowledge and skill development, not just passing exams. The University has responded by investing in upgrading teaching and curriculum development, in addition to re-designing student assessment, putting more emphasis on internships and industry engagement to shape learning experiences for students. As some economic sectors shrink, others will grow, especially those powered by technology. The rise of digital labor markets will mean that graduates must not only have core digital skills but also must have entrepreneurial skills to become social marketers and have a range of communication and softs skills that sets them apart. Universities are required to move to a multi-disciplinary approach or even transdisciplinary to allow students to explore creativity and nonlinear learning patterns that represent the digital age. Science, Technology, Engineering, and Mathematics (STEM) education is vital. In a world populated with super-fast and smart computers, however, STEM coupled with creativity can be considered as even more critical, STEAM (STEM + ARTS), the fusion of science and technology with creativity and liberal thinking. Innovation led by technology can aid the region in addressing many of the sustainability challenges currently faced. It can enable countries to ensure that infrastructure meets the demands of society and consumers. It can potentially tighten standards on consumption, especially energy. It can help to make it easier setting up businesses and to incubate startups. Universities must not only embrace technology but also use it effectively to help create innovative solutions that will help to solve many of our sustainability issues. Here the role of technology transfer is vital. 
Taking academic research and using technology to commercialize is one way to deploy solutions on a wider and faster scale. The impact of this is reflected in the 2017-18 Innovation Pillar from the World Economic Forum's Global Competitiveness Index which shows Bahrain's improvement in 7 out of 8 indicators, and driving this culture of innovation are the universities, significant improvements in innovation, capacity, quality of research institutes, and university/industry collaboration, as shown by the World Economic Forum Innovation pillar (World Economic Forum, 2018). As a result, Bahrain is emerging as a hub for Fintech entrepreneurship, powered by its skilled, labor market ready graduates, with large corporations such as Amazon Web Services and Huawei setting up their regional headquarters in Bahrain to enable them to tap into this rich dynamic potential. The Transformation Plan has provided a real opportunity to examine the systems, architecture and infrastructure needed for universities to contribute to the economies, public services and societies of the Kingdom of Bahrain. As technology continues to change lives it is also redefining how universities can create a lasting impact with their students. New Areas of Research and Innovation The area of renewable energy is a critical issue for both the MENA region and globally. GCC countries have seen rapid economic diversification and have become major energy consumers in their own right. Regional electricity consumption is growing at almost 8% a year, meaning generating capacity has to be doubled every decade. Gulf countries will require 100 GW of additional power over the next 10 years to meet demand. Renewable energy offers Gulf countries a proven, home-grown path to reducing CO 2 emissions. GCC countries are in the top 14 per capita emitters of carbon dioxide in the world (PWC, 2017). Renewables offer a financially viable way to change that. Renewable energy in the Middle East is therefore offering real potential for large-scale development projects. Middle East economies are now beginning to turn to new, more sustainable means of meeting their nations' increasing consumption. Faced with volatile oil prices and international demand, the move into alternative energies is only going to increase. As projects in the Gulf take off, demand for both investment and the best energy professionals is expected to be high, especially in solar energy, in which the region has an obvious advantage. The University must be at the leading edge of this shift and also the drive to clean energy though research and expertise, which is extended to an array of renewable energy sources such as wind, solar, waste and geothermal. Changes in the energy sector on a global basis, which include regulatory pressures for green generation, a push to harness energy in the most efficient way, supply constraints and the ever-growing demand by consumers for lower cost, are currently driving a rise in the development of new technologies with the ultimate aim of providing a regional and global clean energy network that is more robust and secure than ever before. Innovative technologies such as smart meters and battery storage are becoming an everyday feature for the commercial and residential customer. From data analytics to virtual power plants, there are big changes taking place in the way energy is distributed and consumed. As the pace of change accelerates, governments, universities, and the private sector will be under pressure to spread their investment. 
The University, as the leading research institute in Bahrain and one of the regional leaders in the area of clean energy and water resources, has a critical role in supporting the government to achieve the national renewable and energy efficiency targets. The University's ambition is to go beyond targets, and through sustainable partnerships, develop innovative solutions that have an impact on the environment, society, and the economy. The key deliverables of current collaborations are: • Capacity Building, through establishing University of Bahrain renewable energy labs, and preparing technicians and researchers. • Research activities in the areas of energy and renewable energy, and water desalination and treatment. • Training and workshops to disseminate the knowledge to society and those interested in the field of renewable energy from the private and public sectors. Smart Future One of the main components of the smart cities are the higher education institutions, and their role in human capital development and research and development to contribute to the advancement of ever smarter cities. The rise in smart cities or digital cities has been powered by certain trends, with around 68 per cent of the world's population expected to be urban residents by 2050 (UN DESA, 2018). The challenge to build more smart cities has become urgent. With the advent of digital technology and big data, changes are exponential, be it in public transportation, citizen services, or the way businesses are run. Physical infrastructure from transportation systems to buildings, factories, and entire supply chains will sequentially be connected via the Internet. The Internet of Things means small computing devices are interweaved in the fabric of our environment. Services can be delivered through new platforms such as connected cars, smart homes, and connected public spaces, which ultimately leads to a better quality of life. Smart cities will also require smart utilities. Sustainable energy consumption and 'green' energy production at home is becoming a new lifestyle. Today's hyperconnected consumer expects a reduced environmental footprint while still enjoying seamless services and ease of use, improving their quality of life through fully digitized processes that give them complete control over every aspect of their lives. According to the International Data Corporation (IDC), 30 billion 'things' will be connected by 2020 (IDC, 2013). Everything from cars and appliances, to lights and temperature controls, will be connected in an interoperable network that will give consumers unprecedented control and choice over their use of their energy through the internet of things. The scale and pace of change is unrelenting. However, it's a challenge that academia must play a key part in and also move at speed to keep pace with rapid transformative change led by technology. The University, through research, human capital developed, organizing Smart Cities Symposiums, and industry collaboration, is leading the drive for academia to understand the implications and opportunities for higher education. This can be achieved through the creation of research platforms, and extended collaborations. Through collaborations and partnerships, the University also expects to create awareness about the future prospects of Smart Cities and serve as a platform to exchange ideas and learning through international case studies and best practice. 
The University is actively engaged in promoting a smarter Bahrain through its consultancy work, generating ideas and continuity in the form of publications, and solving industry problems. The University is preparing its graduates to work in current and future markets. Recently, the students participated in numerous local and global hackathons, competitions, projects, etc. Also, in 2019, the Kingdom of Bahrain hosted the Amazon Web Services Summit. The Summit included the first artificial intelligence hackathon, where 26 students from the University competed to provide solutions to real-world challenges. The focus of participation in these kinds of competitions and hackathons is to develop the students' digital skills and improve their ability to innovate and invent.
Discussion and Conclusion
In 2016, the University of Bahrain embarked on a challenging Transformation Plan. The Plan was developed to improve the impact of the University locally, regionally, and internationally. Correspondingly, the University identified several key factors that will contribute to the successful implementation of the Transformation Plan, namely:
• Transformed Governance: in its pursuit of excellence, the University needed to modernize its decision-making and governance. A review of its organizational structure was conducted to optimize and streamline processes.
• Research Excellence: the University focused not only on improving the number of publications and citations, but also on the impact of research locally, regionally, and internationally. The University established research partnerships with international partners to build the national capacity in topics such as renewable energy and water security. This approach is expected to establish the University of Bahrain as a flagship center of excellence in glocal research challenges and topics.
• Promotion Reform: academic promotion is linked to many aspects of institutional excellence. The University improved the planning, the transparency of the process, and the awareness of faculty members of the criteria. This resulted in improving the focus on individual faculty members and collectively improving the institutional output in areas related to promotion criteria.
• Financial Sustainability: the sustainability of the operations and services of the University is highly linked with its ability to self-generate income from a diverse set of sources. This is envisaged to gradually improve the funding of projects, initiatives, activities, services, and academic programs as revenues are re-invested and systematically used.
In conclusion, the future of higher education institutions is highly linked to socio-economic dynamics. It is, therefore, necessary that higher education institutions develop challenging strategies that comprise objectives to support their pursuit of excellence and address the future challenges and needs of learners, the labor market, society, and the economy.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material.
If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
5,646.2
2020-01-01T00:00:00.000
[ "Education", "Economics" ]
A non-parametric peak finder algorithm and its application in searches for new physics We have developed an algorithm for non-parametric fitting and extraction of statistically significant peaks in the presence of statistical and systematic uncertainties. Applications of this algorithm for analysis of high-energy collision data are discussed. In particular, we illustrate how to use this algorithm in general searches for new physics in invariant-mass spectra using pp Monte Carlo simulations. Introduction Searches for peaks in particle spectra is a task which is becoming increasingly popular at the Large-Hadron collider that focuses on new physics beyond TeV-scale.Bump searches can be performed either in single-particle (such as p T distributions) or invariant-mass spectra.For instance, searches for new particles decaying into a two-body final state (jet-jet, gamma-gamma, etc.) and multi-body decays are typically done by examining invariant masses of final-state objects (jets, leptons, missing transverse momenta, etc.).For example, assuming seven identified particles (jets, photons, electrons, muons, taus, Z-bosons, missing p T ), a search can be made for parent particles decaying into 2, 3, or 4 daughter particles.This leads to 322 unique daughter groups.Thus, the task of analyzing such invariant-mass combinations becomes rather tedious and difficult to handle.Considering a "blind" analysis techniques for scanning many channels [1], any cut variation increases the number of channels that need to be investigated.Finally, similar challenges exist for automatic searches for new hadronic resonances combining tracks [2]. The task of finding bumps is ultimately related to the task of determining a correct background shape using theoretical or known cross sections.However, a theory can be rather uncertain in the regions of interest, difficult to use for background simulation or entirely nonexistent.Even for a simple jet-jet invariant mass, finding an analytical background function that fits the QCD-driven background spanning many orders in magnitude and which can be used to extract possible excess of events due to new physics requires a careful examination.Attempts to fit two-jet and threejet invariant masses have been discussed in CMS [3,4] and ATLAS [5] papers; while both experiments have reached the necessary precision for such fits using initial lowstatistics data, the used analytical functions are rather different and have many free parameters.This task becomes even more difficult considering multiple channels (invariant-mass distributions) with various cuts or detector-selection criteria (like b-tagging).Each such channel requires a careful selection of analytical functions for background fit and adjustments of their initial values for convergence of a non-linear regression while determining an expectantly smooth background shape.A fully automated approach to searches for new physics has been discussed elsewhere [6]. 
One technically attractive approach is to find a non-parametric way to extract statistically significant peaks without a prior assumptions on background shapes.Such approach is popular in many areas, from image processing to studies of financial market, where a typical peak-identification task is reduced to data smoothing in order to create a function that attempts to approximate the background.The smoothing can be achieved using the moving average [7], Lowess [8], SPlines [9] algorithms.Statistically significant deviations from smoothed distributions can be considered as peaks.Such technique is certainly adequate for the peak extraction, but it does not pursue the goal of peak identification with a correct treatment of statistical (or systematic) uncertainties. The closest peak-search approach to high-energy case has been developed for studies of γ-ray spectra where the usual features of interest are the energies and intensities of photo-peaks.Several techniques have been developed, such as those based on least squares [10], second differences with least-squares fitting [11], Fourier transformation [12], Markov chain [13], convolution [14] (just naming a few).While such approaches are well suited for counting-type observables, they typically focus on narrow peaks on top of small and often flat-shaped background.In high-energy collisions, a typical standard-model background distribution has a falling shape spanning many orders of magnitude in event counts.A typical example is jet-jet invariant masses used for new particle searches [3,5].For such spectra, the most interesting regions are the tails of the exponentially suppressed distributions where a new highp T physics may show up.This means that there should be rather different thresholds to statistical noise, depending on the phase-space region, and as the result, a correct treatment of statistical and systematic uncertainties is obligatory.Unlike the γ-ray spectra where peaks are rather common and subject of various classification techniques, peaks in high-energy collisions are rather rare.As a consequence, relatively little progress has been made to develop a non-parametric fitting technique for highenergy physics applications where an observation of peaks is typically a subject for searches for new physics rather than for peak-classification purposes. The above discussion leads to the need for a non-parametric way of background estimation together with the peak extraction mechanism which can be suited for high-energy collision distributions, such as invariant masses.The algorithm should be able to take into account the discrete nature of input distributions with their statistical (and possibly systematic) uncertainties. Non-parametric peak finder algorithm Due to the reasons discussed above, the program called Non-Parametric Peak Finder (NPFinder) was developed using a numerical, iterative approach to detect statistically significant peaks in event-counting distributions.In short, NPFinder iterates through bins of input histograms and, using only one sensitivity parameter, determines the location and statistical significance of possible peaks.Unlike the known smoothing algorithms, the main focus of this method is not how to smooth data and then extract peaks, but rather how to extract peaks by comparing neighboring points and then calling what is left over the "background".Below we discuss the major elements of this algorithm and then we illustrate and discuss its limitations and possible improvements. 
For each point i in a histogram, the first-order derivative α_i is found taking into account possible (statistical and/or systematic) uncertainties. This is done by calculating the slope between two points including their experimental uncertainties: if point i + 1 is lower than point i, the upper error is used, while if point i + 1 is higher than point i, the lower error is used. This is done in order to always be on the conservative side while reducing statistical noise. Mathematically, this can be written as

\alpha_i = \frac{(y_{i+1} \pm \delta y_{i+1}) - y_i}{x_{i+1} - x_i},   (1)

where the uncertainty δy_{i+1} is taken with negative sign for y_{i+1} > y_i, and with positive sign otherwise. The uncertainty need not be symmetric, but for simplicity we assume that it is, as this is usually the case for uncertainties of a statistical nature. The derivatives are averaged by calculating a running average for any given position N:

\bar{\alpha}_N = \frac{1}{N} \sum_{i=1}^{N} \alpha_i.   (2)

The algorithm triggers the beginning of a peak if the local derivatives satisfy

\delta\alpha_{N+1} = \alpha_{N+1} - \bar{\alpha}_N > \Delta \quad \mathrm{and} \quad \delta\alpha_{N+2} = \alpha_{N+2} - \bar{\alpha}_N > \Delta,   (3)

where ∆ is a free positive parameter. When the above conditions are true, NPFinder registers a possible peak and begins classifying the next points as part of the peak. The running average of Eq. 2 is not accumulated for the points which belong to a possible peak. ∆ is the only free parameter which specifies the sensitivity of the peak finding. Typically, it should be close to 1, and decreasing it to a smaller value will increase sensitivity to the peaks (and likely will increase sensitivity to statistical fluctuations). NPFinder continues to walk over data points until δα_{N+1} and δα_{N+2} are both negative, which signifies that the maximum of the peak has been reached. The double condition in Eq. 3 is used to reinforce the peak-search robustness. When this condition is met, NPFinder exits the peak and adds an equal number of points to the right side of the peak center. The requirement of having the same number of points implies that the peaks are expected to be symmetric (which is typically the case). For steeply falling distributions, as in the most common case for high-energy physics, this assumption usually means that we somewhat underestimate the peak significance. After detecting all peak candidates, NPFinder iterates through the list of possible peaks in order to form a background for each peak. This is achieved by performing a linear regression of points between the first and last points in the peak, i.e. applying the function y_i = m x_i + b, where m and b are the slope and intercept of the linear regression, which in this case is rather trivial as it is performed via the two points only. It should be noted that the linear regression is also performed taking into account uncertainties:

m = \frac{(y_2 + \delta y_2) - (y_1 + \delta y_1)}{x_2 - x_1},   (4)

where y_1 is the first point of the peak, y_2 is the final point of the peak, and δy_1 and δy_2 are their statistical uncertainties, respectively. Here the statistical uncertainties are added in order to always be on the conservative side in the estimation of the background level under the peak. The intercept parameter is then b = y_1 + δy_1 − m x_1. It should be mentioned that the technique of peak finding considered above is somewhat similar to that discussed for γ-ray applications [11]. But there are several important differences: the peaks can have arbitrary shapes (not only a Gaussian form), no fitting or smoothing procedure is used, and errors on data points are included at the iteration step.
Finally, NPFinder uses the background points to calculate the statistical significance of each peak in a given histogram. This is done by summing up the differences r_i of the original points in a peak with respect to the calculated background points, and then dividing this value by its own square root. For a given peak, the significance can be approximated by

\sigma \simeq \frac{\sum_i r_i}{\sqrt{\sum_i r_i}},   (5)

where the sum runs over all points in the peak. The algorithm runs over an input histogram or graph, builds a list of peaks, and estimates their statistical significance. A typical statistically significant peak in this approach has σ > 5-7. The first peak is usually ignored as it corresponds to the kinematic peak of background distributions. Below we illustrate the above approach by generating fully inclusive pp collision events using the PYTHIA generator [15]. The required integrated luminosity was 200 pb^-1. Jets are reconstructed with the anti-k_T algorithm [16] with a distance parameter of 0.6 using the cut p_T > 100 GeV. Currently, this jet algorithm is the default at the ATLAS and CMS experiments. Then, the dijet invariant-mass distribution is calculated and NPFinder is applied using the parameter ∆ = 1. As expected, no peaks with σ > 5 were found. Finally, a few fake peaks were generated using Gaussian distributions with different peak positions and widths. The peaks were added to the original background histogram. Figure 1 shows an example with 3 peaks generated at 1000 GeV (20 GeV width, 200000 events), 1500 GeV (50 GeV width, 30000 events) and 2800 GeV (40 GeV width, 1200 events). The algorithm found all three peaks and gave correct estimates of their positions, widths and approximate statistical significance using the input parameter ∆ = 1. It should be noted that the peak statistical significance of the proposed non-parametric method might be smaller than that calculated using more conventional approaches, such as those based on a χ² minimisation with appropriate background and signal functions. This can be due to the assumption of a symmetric form of the extracted peaks, the linear approximation for the background under the peaks, and the way in which the uncertainty is incorporated in the peak-significance calculation. An influence of experimental resolution can also be an issue [17], which can only be addressed via correctly identified signal and background functions. Such drawbacks are especially well seen for the highest-mass peak shown in Figure 1, where a statistical fluctuation to the right of the peak pulls the background level up compared to the expected falling shape. Given the approximate nature of the statistical significance calculations, which only serve to draw the attention of analyzers who need to study the found peaks in more detail, the performance of the algorithm was found to be reasonable. It should be noted that there is a correlation between the peak width and the input parameter ∆: the detection of broader peaks typically requires a smaller value of ∆ (which can be as low as 0.2).
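To make the stepwise description above easier to follow, here is a minimal Python sketch of an NPFinder-like scan. It is not the published NPFinder code: the trigger test, the simplified walk to the peak maximum, and all names (find_peaks_np, delta) are assumptions and simplifications based on the textual description of Eqs. (1)-(5).

```python
# Illustrative sketch of the peak-finding logic described in the text above.
import numpy as np

def find_peaks_np(x, y, dy, delta=1.0):
    """Scan a binned spectrum (x, y) with uncertainties dy and return
    candidate peaks as (first_bin, last_bin, significance) tuples."""
    peaks, slopes = [], []           # slopes: running list of off-peak derivatives
    i, n = 0, len(x)
    while i < n - 2:
        # Conservative slope: shift y[i+1] by its uncertainty towards y[i].
        sign = -1.0 if y[i + 1] > y[i] else 1.0
        alpha = (y[i + 1] + sign * dy[i + 1] - y[i]) / (x[i + 1] - x[i])
        if slopes:
            avg = np.mean(slopes)                     # running average of Eq. (2)
            sign2 = -1.0 if y[i + 2] > y[i + 1] else 1.0
            alpha2 = (y[i + 2] + sign2 * dy[i + 2] - y[i + 1]) / (x[i + 2] - x[i + 1])
            if (alpha - avg) > delta and (alpha2 - avg) > delta:   # trigger, Eq. (3)
                start, j = i, i + 1
                while j < n - 1 and y[j + 1] >= y[j]:
                    j += 1                            # simplified walk to the maximum
                end = min(n - 1, j + (j - start))     # mirror an equal number of points
                # Straight-line background through the conservative endpoints, Eq. (4).
                m = ((y[end] + dy[end]) - (y[start] + dy[start])) / (x[end] - x[start])
                b = y[start] + dy[start] - m * x[start]
                excess = np.sum(y[start:end + 1] - (m * x[start:end + 1] + b))
                if excess > 0:
                    peaks.append((start, end, excess / np.sqrt(excess)))  # Eq. (5)
                i = end                               # resume scan after the peak
                continue
        slopes.append(alpha)                          # accumulate only off-peak slopes
        i += 1
    return peaks
```

For a dijet spectrum one would call, for example, find_peaks_np(bin_centers, counts, np.sqrt(counts), delta=1.0) and then inspect any reported candidates in more detail, in the spirit of the approximate significance discussed above.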
In conclusion, a peak-detection algorithm has been developed which can be used for extraction of statistically significant peaks in event-counting distributions taking into account statistical (and potentially systematic) uncertainties.The method can be used for new physics searches in high-energy particle experiments where a correct treatment of such uncertainties is one of the most important issues.The non-parametric peak finder has only one free parameter which is fairly independent of input background distributions.The algorithm was tested and found to perform well.The code is implemented in the Python programming language with the graphical output using either ROOT (C++) [18] or jHepWork (Java) [19].The code example is available for download [20]. Figure 1 : Figure 1: Invariant mass of two jets generated with the PYTHIA Monte Carlo model.Several peaks seen in this figure were added using Gaussian distributions with different widths and peak values (see the text).The peaks are found using the NPFinder algorithms which also estimates their statistical significances as discussed in the text.
2,987.6
2011-10-17T00:00:00.000
[ "Physics" ]
Design and Development of an Intelligent Skipping Rope and Service System for Pupils Regular physical activity (PA) contributes to health, growth and development in childhood, and it is essential for children to achieve appropriate PA levels (PAL). However, most children around the world fail to comply with the recommended PAL requirements. Rope skipping, as a highly accessible, enjoyable, and affordable physical activity for students, has been regarded by educators as a sustainable afterschool activity to promote students' physical fitness. The booming development of smart fitness product design and the advent of exergames have brought new possibilities for physical education and rope skipping: personalized guidance, intuitive and interesting feedback, and visualized exercise data analysis; there is much room for optimization. In this study, an intelligent skipping rope and its service system were studied for primary school students (aged 7–12) who are starting to get involved in this sport. First, user needs, product functions, and system requirements were summarized by conducting observations and user interviews. Then, a prototype of the hardware and the software interface was designed based on the analysis of the user research. Next, a usability test of the interactive prototype was carried out, and optimization was finally made based on the feedback of the usability evaluation. The final system design includes combined innovations in software and hardware with the intention to increase children's participation in physical activity and assist them in skipping rope in the right way with proper equipment and programs. Introduction In 2018, the World Health Organization (WHO) launched a new global sport initiative called More Active People for a Healthier World. It is recommended that all children and adolescents should engage in at least 60 min a day of moderate- to vigorous-intensity physical activity (MVPA) [1]. However, more than 80% of students do not conform to the current guidelines for sports activities, and it is urgent to strengthen the implementation of effective policies and programs to increase the physical activity of adolescents [2]. Rope skipping-based sports interventions can improve students' participation in sports activities inside and outside the school and their overall physical quality [3]. Rope skipping is a whole-body exercise: the arms need to keep rotating, and the body should coordinate well with the rhythmic repetition of vertical jumping [4]. Previous studies suggested that school rope skipping has a positive impact on the psychological and physiological factors of adolescents [5,6]. In China, with the promulgation and implementation of the outline of the national fitness plan, the students' physique standard and the national fitness plan (2016-2020) of the State Council, rope skipping has been widely promoted, and the "rope skipping plan" has been advocated by the Social Sports Guidance Center of the General Administration of Sport of the People's Republic of China [7,8]. As an essential component of the physical fitness test in school, there is a distinct and exclusive scoring criterion for skipping rope in terms of the number of skips in one minute. For instance, a boy in Grade 2 passes the test only if he skips rope more than 25 times in one minute, and the more he skips, the higher the score he gets.
Such a teaching approach can teach students the basic skills of rope skipping in a relatively short time and build up distinct assessment criteria, but at the same time there remain problems and limitations: the single scoring criterion has turned rope skipping into a new test rather than an interesting activity that cultivates children's interest in physical activity and develops self-efficacy by improving skills at their own pace. Researchers use interactive experience design methods to stimulate children's sports activities, such as combining sports and games to develop digital games [9]. Games can create stimulating experiences. By applying game elements such as rules, goals and rewards in non-game environments, children can be effectively encouraged to participate in sports activities [10][11][12]. Game-led learning styles and activities should be applied to stimulate students' interest in physical activity and develop habits of physical activity [13]. With technological advances, the ability to track users' full-body movements in three dimensions, accurately measure reaction times and accelerations, and capture the speed and force of the users' movements has expanded the prospects of integrating exergames into schools and homes to promote healthy physical activity in children and decrease childhood obesity [14]. One approach that has shown some success in promoting PA in children consists of PE lessons with the Wii [15]. In another study in Latino children, an exergame named Dance Dance Revolution (DDR) was incorporated into an afterschool program. Outcomes indicated that the goal-setting strategy of exergames was effective for PA promotion among children [16]. With the increase in available smartphone applications, it is very important to document the process of designing and developing applications to enhance the credibility of mobile healthcare devices [17]. With the support of recent technological progress, the development of smart wearable systems (SWS) for health monitoring and fitness mobile applications is now booming [18,19]. Mobile health (mHealth) refers to the use of mobile devices (such as mobile phones and personal digital assistants) to provide health information and medical services for users [20]. High levels of engagement with mobile technology offer the opportunity to take advantage of the potential of mobile health (mHealth) interventions [21], including real-time data acquisition and feedback, flexibility, heart rate monitors, step counters, exercise programs and coaching guidance, and lower participant burden in meeting people's health care needs [22,23]. Studies have confirmed the potential of mHealth apps and wearables to increase physical activity and improve health outcomes, such as the management of cardiovascular disease, obesity, and type 2 diabetes [24]. At present, many studies can be found in the literature focused on the development of mobile applications in health services [17,25-27]. However, despite the thriving market for smart fitness products and mHealth [28], most products in China are oriented toward adults, such as Keep, FitTime and Mint Health. Their main functions are to guide scientific fitness online, arrange reasonable training plans and cycles, record the user's sports training status through big data, and then make personalized fitness programs for users. In contrast, existing intelligent fitness product design for children and adolescents is relatively lacking.
Considering the behavioral and psychological characteristics of children and adolescents, specific concerns and restrictions should be taken into account when designing smart fitness products [7]. To encourage primary school students to perform daily rope skipping, most schools in China still use paper record cards or sports videos of daily attendance to supervise students in completing daily exercise. This way is simple and direct, but it is lacking in fun, which is not in line with the psychological characteristics of children at this age. Therefore, the purpose of this study is to analyze, design, and implement a new rope skipping service system to help primary school students participate in rope skipping activities, and to motivate them to skip for fitness and fun. The general idea of the study was as follows: 1. target user research; 2. product positioning; 3. hardware design and software design; 4. interactive prototype usability testing and optimization. The rest of the paper is organized as follows: Section 2 presents the formative research and the procedure of the field study; Section 3 presents the user research results revealed over the research process; the design of the system is described in Section 4, which is complemented with a usability test and optimization of the final prototype; Section 5 presents important findings and limitations of this study; and, finally, the conclusions and future lines are drawn in Section 6. Participants The target users of this study are primary school students aged 7-12 years old. At primary school, they are introduced to skipping rope and gradually develop skipping skills. For this age group, children are in a changing array of contexts (school, home, or with peers); they need guidance and motivation from the outside world in order to acquire the skills correctly and to persevere with this exercise. This is why it is important to consider not only the needs of the pupils themselves but also those of the parents and schools behind them when researching user needs. A total of 19 individuals, including 13 pupils and 6 parents, participated in this study. The analysis of the sample of 13 pupils is shown in Appendix A. All participants had previous learning experience in rope skipping, with the shortest learning time being 1 month and the longest learning time being 5 years. Three participants exercised at home and the remaining 10 exercised outdoors. Three of the participants had experience in using smart fitness products such as fitness bands. The analysis of the sample of 6 parents is shown in Appendix B. All participants had experience in teaching their children to skip rope. The frequency of accompaniment in the table represents the number of times per week the parent now accompanies his/her child to jump rope. Data Collection and Analysis Observations and semi-structured interviews were conducted with the participants in three neighborhoods in Hangzhou and Shanghai, China, between October 2020 and January 2021. The observations were centered on behaviors during rope skipping, while the interviews focused on attitudes and habits towards this exercise. Qualitative data were collected, including participants' opinions and experiences, as well as quantitative data relating to daily workout habits such as duration and frequency of exercise. The detailed procedure of the user research is shown in Appendix C.
(1) Observation The purpose of the observation was to observe how rope skipping ability, exercise location (indoors or outdoors) and accompaniment (alone, with parents or with peers) affect pupils' workout behaviors. It was mainly focused on three aspects: children's behaviors before, during and after skipping rope; parents' behaviors, including the way they taught their children to skip rope and their methods of recording children's workouts; and interactions between parents and children, as well as children and their peers. The observations were carried out in the community and in participants' homes (Figure 1). Behaviors of pupils and their accompanists, and the communication and interactions between them, were captured in text and photographs. The average session duration was 30 min. (2) Semi-structured interview The focus of the interview was not only on rope skipping itself, but also covered aspects such as daily routine, interests and attitudes towards smart fitness products. In each pupil's interview, 15 questions covering 3 aspects were asked, as shown in Table 1. Pupils were asked about their daily exercise habits and goals, how they learned to skip rope and how much they knew about this exercise. They were also asked to talk about their favorite activity in PE class, their preferred games and their experience in using smart fitness products such as fitness apps or fitness bracelets. In each parent's interview, 11 questions covering 3 aspects were asked, as shown in Table 2. Parents were asked to talk about the problems their children met in learning to skip rope, how school and parents motivated children to take part in sports or other activities, and their attitudes (expectations and worries) towards smart fitness products for children. Each interview was recorded with the permission of the participants. Table 1. List of the questions used in interviews with the pupils. Skipping rope: 1. When did you learn to jump rope and who taught you? 2. How often and how long do you skip rope per week? 3. What kind of skipping rope do you use? How do you feel about it? 4. How is skipping rope taught at school? 5. Do you know other skipping skills in addition to the traditional two-feet basic jump? Are you interested? 6. What is your goal in skipping rope? Daily life (physical activity and sedentary behaviours): 1. What is your favourite activity in PE? 2. What other physical activities do you usually do? 3. Do you play sports with your parents? 4. Are there any incentives at home or at school? 5. How much time do you spend on writing homework every day? 6. Do you have any smart products in your home? How often do you use them? What do you do with them? 7. Have you ever used learning games? What do you think about them? Experience with smart fitness products: 1. Have you ever used any smart fitness products? 2. Do you know exergames? Have you ever tried one? Table 2. List of the questions used in interviews with the parents. Skipping rope: 1. Have you ever taught your child skipping rope? 2. Is there a school assignment for afterschool skipping rope practice? How is the completion checked? 3. What is your attitude towards daily skipping rope practice? What about your child's attitude? 4. Do you go skipping with your child? 5. Do you keep an eye on the number of rope jumps your child makes? Daily life (physical activity and sedentary behaviours): 1. Do you pay attention to your child's daily physical activity? 2. Would you take incentives to motivate your child? 3. How long does your child spend on screen time every day? 4. What is your opinion about learning games? Attitudes towards smart fitness products: 1. Have you ever used any smart fitness products with your child? 2. Do you know exergames? What do you think about them? User Research Results Based on Kairy et al. (2021) [29], an interpretive description methodology was used to carry out the qualitative analysis in this study. The interviews were recorded and transcribed in a naturalized way, omitting oral expressions such as "um" and "ah" [30]. All transcripts were anonymized and recorded in Excel. Data collection and preliminary analysis of the observations and interviews were carried out concurrently. Results were continuously discussed by the members of the research team [31]. Accordingly, it can be seen that the pupils need guidance and motivation in the development of rope skipping skills. At school, they learn from their teachers. After school, they learn and make progress through interaction with family and friends. Specifically, the following conclusions can be drawn: From the pupils' aspect: (1) Most teaching stops at the traditional two-feet basic jump The traditional two-feet basic jump, which is to jump on two feet consistently, is the most basic and easiest way to skip rope. For most pupils, according to our interviews, it is the first and only skill they learned.
One mother mentioned in the interview that once her child had learned the basics of skipping rope, the exercise goal turned to increasing speed, as the only factor determining performance in school tests is the number of skips in one minute. This is also related to the different teaching styles of physical educators. In the interview, a third-grade boy said that his PE teacher had once shown them the cross-jump skill and he thought it was interesting. However, a second grader from another school said that the way he practiced skipping rope in PE class was usually a one-minute timed jump. (2) Physical health awareness is lacking In the observation, only 3 of 13 participants started skipping rope with warm-up and stretching exercises. The children said that they would do warm-up exercises with the teacher in PE class at school, but they did not know that it was essential to do warm-up exercises before skipping rope. (3) Gamification brings fun Students often use the word "fun" to predict or evaluate the worth of activities in which they engage. To find out the criteria of "fun" among pupils, the interviews focused on the attractive elements in pupils' daily life, and "game" turned out to be the "magic word" to motivate students. Teachers are now using gamification apps to assist them with teaching. The interviews revealed that English teachers in primary school are now using game-based apps to help students correct pronunciation and complete listening exercises. The combination of learning tasks and games has proved to be attractive to pupils, as a parent said that her child always chose to finish the homework assigned on the app first after school. (4) Social support brings motivation In physical activity, children may receive support from their parents, teachers and peers. Their support exerts a positive effect on children and motivates them to continue to participate in physical activity. Support can take the form of verbal praise; a second-grade boy recalled an impressive memory that his PE teacher once praised him in front of the whole class for the progress he had made in the skipping rope test. In addition, it was observed that many parents would accompany their children to practice skipping rope in the community open area. As the child kept skipping rope, the parents would constantly give verbal encouragement and would also motivate the child to keep skipping by telling them the number of skips they had completed. This was especially true for children who had not yet mastered the skill. They kept tripping over the rope, which would make them frustrated. At this time, affirmation and encouragement from parents would motivate them to persevere. Support also includes material rewards. One father said that his first-grade son was reluctant until he promised to buy him an airplane chess set if he could keep practicing skipping rope for a week. School teachers have also set up reward rules to encourage pupils to participate actively in class. A third grader said that if he performed well in class, his teacher would award him a sticker, and he was working hard to collect 10 stickers so as to exchange them for a gift from his teacher. From the parents' aspect: (1) Parents need teaching guides In the interview, parents mentioned that they would teach their children how to skip rope in advance to prepare them for primary school, and it was always hard for children to get the rhythm of skips at the beginning.
Some parents noted that, at this time, they would search the internet for teaching videos which introduced several pre-exercises to help children grasp the rhythm. Some parents even go so far as to send their children to skipping rope training classes. News reports indicated that skip rope classes were quite popular, and some parents spent over CNY 10,000 a year for children to learn to skip rope. Skipping rope is taught not only at school, but also at home, and parents need teaching guides to better teach their children to skip rope. (2) Recording is necessary It was observed that parents took photos or videos while their children were skipping rope, which, according to the parents, "is a school requirement to check the completion of daily exercise". Other ways of recording included writing the number of rope skips on a paper card. In the parents' opinion, this was also one of the homework assignments. One parent also said that the daily record gave her an idea of how her child progressed in skipping rope. (3) Parents' worry about smart products When it comes to children's use of smart products, such as mobile phones or iPads, most parents said that they strictly controlled their children's screen time, especially on school days. As for the use of exergames, parents expressed that they would prefer their children to get outside and play sports rather than work out in front of a screen. Other parents were concerned that too many game features might lead to addiction. Persona Human-centered design is a design paradigm whose questions, insights and activities focus on the people the product, system or service is intended for [32]. Persona, the "hypothetical archetype of real users" [33], is a modelling tool in this paradigm to capture specific user needs and generate more creative and human-centered solutions [34]. In order to clarify the needs of pupils and their parents, three pupil personas and one parent persona were established according to the user research (Figure 2). Problems and expectations of pupils were grouped into three personas based on their current skipping rope abilities and exercise goals. The persona of parents was created according to the typical problems and parental expectations reflected in the user research. Lastly, the common needs of pupils and parents were summarized.
Product Positioning To ensure the accuracy and accessibility of exercise data acquisition and the integrity of system functionality, the system uses a combination of software and hardware in an intelligent skipping rope. Considering the different requirements of children and parents during rope skipping, the software is equipped with two terminals: the mobile phone terminal is for the parent and the smartwatch terminal is for the child. The functional layout of the software on the two terminals differs slightly according to their different demands. The final intelligent skipping rope service system of this study includes the following functional innovations: (1) The intelligent skipping rope service system is composed of hardware products and software products. The hardware products are portable and easy to use. The interface of the software is intuitive and simple. (2) The hardware products of the intelligent skipping rope service system consist of a smartwatch and an intelligent skipping rope. They support multichannel acquisition of exercise data and health condition. (3) The hardware products of the intelligent skipping rope service system also bring about interactions between terminals and between users, which improves the user experience and increases the frequency of use. (4) The software of the intelligent skipping rope service system can generate personalized exercise plans, provide skill guidance, integrate and analyze users' daily exercise data, and provide multifaceted feedback to encourage users to skip rope for fitness and fun. Combined with the existing basic research on smart fitness products and the target user research, user requirements were summarized and translated into five core functions (Figure 3).
Exercise data acquisition: In the process of rope skipping, the data to be collected include the number of skips, tripping times, heart rate and duration of exercise. Tripping times can reflect whether the skipping posture is correct and whether the coordination between hands and feet has a good beat. Additionally, heart rate offers a more objective look at exercise intensity. Exercise data analysis: Based on the acquired exercise data, processing and analysis of the data can offer users a clearer picture of their exercise. This is especially true for the pupils, for whom highly visualized data analysis can intuitively reflect their progress. They can see how many skips they have accomplished and how fast they have become, and be more motivated. Additionally, data analysis can help users realize their weaknesses and make adjustments to their plan accordingly. For instance, if a user's tripping times remain at a high level, it means he or she might not yet grasp the rhythm and needs more basic pre-exercise. Exercise data sharing: As an after-school physical activity, the sharing of pupils' exercise data between home and school can help schools better supervise pupils' daily physical activity and rope skipping ability. Adjustments to teaching programs can be made based on these data. Compared with the current recording methods, software-based data flow is more effective and convenient. Exercise guidance: Exercise guidance includes not only step-by-step instructions, but also personalized, long-term advice for improving motor skills. In schools, rope skipping is taught in classes and the content is determined by the average level of skipping, leaving the needs of individuals who are behind or ahead of the collective unmet. The guidance can be in the form of text, video, animation and audio tracks; audio tracks that set the skipping pace are a useful training technique used by competitive speed jumpers. Interesting and effective incentives: Factors that affect physical activity levels (PALs) in children include psychological aspects such as self-efficacy and motivation, and sociological aspects including support from school, parents and friends [35]. Studies have found that children became unwilling to participate in PA when there was little enjoyment or "fun" and, by comparison, tended to continue to participate when a high level of enjoyment was experienced during PA [35]. Csikszentmihalyi's Flow Model (1975) described fun in a simple way, as "the balance between skill and challenge".
To make physical activity "fun" and attractive is to emphasize intrinsic aspects such as personal improvement, skill development, optimal challenge, personal goals, opportunities to get involved and constructive feedback rather than extrinsic aspects such as winning [36]. Hardware and Software Product Design The hardware and software parts of the system were designed based on the above user demand analysis and functional positioning. Hardware Product Design With the aim to implement portability and comfortability to encourage children's use, as well as to implement the functional connection within the whole system, the appearance and circuit implementation of the intelligent skipping rope was designed as follows. Hardware Appearance Design The design of the hardware's appearance is shown in Figure 4 and the internal structure of the intelligent skipping rope is shown in Figure 5. Sustainable use: The handles of the intelligent skipping rope were designed ergonomically with sufficient length so as to fit well in the hand during the child's growth. The length of the skipping rope could also be easily adjusted to suit the child's changing height. Color: The orange color of the intelligent skipping rope, according to colour psychology, would evoke feelings of excitement, enthusiasm and energy [37]. Material: The soft rubber surface of the handle enhances grip comfort and absorbs perspiration well. Interesting and effective incentives: Factors that affect physical activity levels (PALs) in children include psychological aspects such as self-efficacy and motivation, and sociological aspects including support from school, parents and friends [35]. Study found that children became unwilling to participate in PA when there was little enjoyment or "fun", and by comparison, they tended to continue to participate when a high level of enjoyment was experienced during PA [35]. Csikszentmihalyi's Flow Model (1975) described fun in a simple way, as "the balance between skill and challenge". To make physical activity "fun" and attractive is to emphasize intrinsic aspects such as personal improvement, skill development, optimal challenge, personal goals, opportunities to get involved and constructive feedback rather than extrinsic aspects such as winning [36]. Hardware and Software Product Design The hardware and software parts of the system were designed based on the above user demand analysis and functional positioning. Hardware Product Design With the aim to implement portability and comfortability to encourage children's use, as well as to implement the functional connection within the whole system, the appearance and circuit implementation of the intelligent skipping rope was designed as follows. Hardware Appearance Design The design of the hardware's appearance is shown in Figure 4 and the internal structure of the intelligent skipping rope is shown in Figure 5. Sustainable use: The handles of the intelligent skipping rope were designed ergonomically with sufficient length so as to fit well in the hand during the child's growth. The length of the skipping rope could also be easily adjusted to suit the child's changing height. Color: The orange color of the intelligent skipping rope, according to colour psychology, would evoke feelings of excitement, enthusiasm and energy [37]. Material: The soft rubber surface of the handle enhances grip comfort and absorbs perspiration well. 
Modular Implementation The overall acquisition-analysis-feedback framework of the intelligent skipping rope service system is as follows (Figure 6): Figure 6. Acquisition-analysis-feedback framework. (1) Skipping rope data acquisition module The basic need for data acquisition is supported by the gyroscope sensor in the intelligent skipping rope and the heart rate sensor on the smartwatch terminal. The data collected by the intelligent skipping rope are transmitted to the smartwatch terminal through Bluetooth for further processing. (2) Data reception and processing module The system uses the smartwatch terminal as the data reception and further-processing unit to ensure the feasibility of real-time data viewing. The data reception and processing module receives the real-time data collected by the skipping rope data acquisition module and simultaneously calculates and integrates the required values of number of skips, tripping times, heart rate and duration of exercise. The data are uploaded to the server and stored in the cloud when the measurement is complete. (3) Cloud storage and analysis module To ensure the consistency of the data on different terminals and to realize the sharing of exercise data between family and school, each measurement of the user is stored in the cloud. The server side implements the functions of data storage and analysis. In addition to measurement data tables, the server side is also responsible for the maintenance of user tables. The measurement data tables record users' measurement results. The user tables record the basic information of each user, such as gender, height, weight, current skipping rope ability and workout goals, which serves as a reference for plan formation. (4) Display and interaction module During a workout, the exercise data, such as number of skips and time duration, are displayed in real time on the screen of the smartwatch terminal. On the mobile phone terminal, users can access more detailed exercise analysis and historical measurements. Considering the interactive needs between parents and children, as well as teachers and students, the system uses web services for communication. In addition, cross-screen interactions are implemented using the web service and the sensor on the smartwatch, which can recognize the user's hand motion. The overall multi-terminal skipping rope service system framework is shown in Figure 7.
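To make the data reception and processing module above more concrete, the following is a minimal sketch of how the smartwatch side could aggregate incoming rope events into the values listed (number of skips, tripping times, average heart rate and duration). The event format, field names and function are hypothetical illustrations under the stated architecture, not the actual firmware of the system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RopeEvent:
    """One event received over Bluetooth from the rope's gyroscope module.
    Field names are assumptions for illustration only."""
    timestamp_s: float   # seconds since the workout started
    full_rotation: bool  # True if the rope completed a rotation (a skip)
    tripped: bool        # True if the rotation was interrupted (a trip)

def summarize_workout(events: List[RopeEvent], heart_rates: List[int]) -> dict:
    """Aggregate raw events into the values the module is described as computing:
    number of skips, tripping times, average heart rate and workout duration."""
    skips = sum(1 for e in events if e.full_rotation and not e.tripped)
    trips = sum(1 for e in events if e.tripped)
    duration = events[-1].timestamp_s - events[0].timestamp_s if events else 0.0
    avg_hr = sum(heart_rates) / len(heart_rates) if heart_rates else 0
    return {"skips": skips, "trips": trips,
            "duration_s": round(duration, 1), "avg_heart_rate": round(avg_hr)}

# Example: a few seconds of hypothetical data
events = [RopeEvent(0.0, True, False), RopeEvent(0.8, True, False),
          RopeEvent(1.7, False, True), RopeEvent(3.0, True, False)]
print(summarize_workout(events, heart_rates=[96, 104, 110]))
```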
Software System Interface Design Based on the product positioning, the functional flow chart of the software system is shown in Figure 8. First Round of the Interface Design (1) Functional implementation Plan formation-First-time users enter the Plan interface on the mobile phone terminal. The interface then jumps to a plan formation page, where users are required to enter basic information such as gender, grade, height and weight, as well as information about their current rope skipping ability level, exercise frequency and duration, and exercise goals. Skipping rope standards vary for children of different ages, and the system generates a personalized plan for the user based on the user's input and exercise goals (skip faster or learn more skipping skills). For sustainable use, the plan can be modified at any time as the user's skipping level changes (as shown in Figure 9).
Workout-During the workout, the interface on the mobile phone terminal is mainly for parental use and includes video demonstrations and textual posture instructions to guide the children (Figure 10). The children's operation during the workout is on the smartwatch terminal, where the user can click the button to start. The interface visually shows the real-time number of skips and the time left through the time circle (Figure 11). Exercise data feedback-After completing the workout, the interface on the smartwatch terminal displays the number of skips and the workout duration (Figure 12). More detailed exercise data analysis is available on the mobile phone terminal, including the number of skips, tripping times, average heart rate and speed graphs. Users can click the toggle button in the upper left corner to switch from daily data analysis to weekly analysis to see the total amount of workout and the data changes over the week. By swiping left and right, the user can view historical data analysis.
Figure 12. Exercise data analysis. (2) Incentive feedback system Self-evaluation-After the workout, users select an emotion stamp on the smartwatch terminal to evaluate how they feel about the exercise. The stamp they choose is recorded in the exercise log interface on the mobile phone terminal. Gamification feedback-A tree-planting game is introduced to visualize children's efforts and motivate them to persevere in exercising: after exercising, the watch displays the "exercise energy" accumulated through the workout (Figure 13). In the My Garden interface on the mobile phone terminal, the user can consume "exercise energy" to "water" the plant by shaking the smartwatch. The virtual sapling grows and eventually bears fruit. Each fruit can be exchanged for a reward from the parents, who can enter the content of the reward in the corresponding input box. Once a tree has matured, the user can unlock the next planting scenario by swiping left and right. Parental feedback-When the user has finished the workout, a notification appears on the mobile phone to notify parents of the child's completion. After the parent has checked their child's exercise data and clicked the confirm button in the exercise log interface, the smartwatch receives a star sticker to remind the child that the parents have given "me" a "thumb-up".
School feedback-In order to implement home-school exercise data sharing, the user can enter the personal homepage interface, click the "My Class" button, enter the class account and join the class group. After checking the data, the teacher can choose to give a certain student a "thumb-up" as encouragement; the smartwatch then receives a red flower sticker to remind the child. Friend interaction-There is a friend interface on both the mobile phone terminal and the smartwatch terminal, where users can view their friends' workouts for the day and choose to give them a "thumb-up". If the user receives a "thumb-up" from a friend, a reminder appears on the smartwatch. To add a new friend, the user enters the friend interface, clicks the "add a friend" button and then shakes the skipping rope. Two skipping ropes can recognize each other within a specific distance and, accordingly, the account information appears on the screen of the smartwatch. Users click "confirm" to add each other as friends. Software Interface Usability Testing and Results After the low-fidelity interactive prototype was finished, a usability test was conducted for the software system. To ensure usability, the following standards should be met: the procedures and operations of the application must be efficient, easy to remember, easy to learn, subjectively pleasant for users, and must produce minimal errors. Seven participants, including three adults and four primary school students, took part in the test. Before the test, users were introduced to the basic usage of the application. During the test, the users were required to complete 8 tasks through low-fidelity interactive prototypes. Their actions, words and expressions were recorded. Problems in the interaction, design and functionality of the software were identified in terms of the following 8 metrics: successful task completion, critical errors, non-critical errors, error-free rate, time on task, subjective measures, and likes, dislikes and recommendations [38]. The specific task schedule for usability testing is shown in Appendix D. Based on the results of usability testing, the software interface design was optimized. The problems that needed to be addressed were the following: • Task 1 and Task 3.
The icon of My Garden was misleading: users mistook it for the Homepage. Additionally, the order of the tab arrangement did not correspond to the order of use, since users assumed the first tab on the left was the interface to start with. • Task 2. In the process of forming a plan, users couldn't make changes to the content previously filled in. • Task 4. Users expressed a need to record additional workouts after completing the daily scheduled exercise. • Task 5. By simply displaying the heart rate value, users couldn't determine for themselves whether their heart rate was within the normal range. • Task 7. Users couldn't quickly understand what the My Garden interface was for. They neither knew how to obtain the water to care for the plant nor what would happen if they kept watering the plant. Optimized Interface Based on the feedback from the user testing, the optimized interface design is as follows: Main interface: According to the analysis results of Task 1 and Task 3, the order of the tabs was adjusted in accordance with the sequence of exercise data analysis-My Garden (gamification feedback)-settings. The icon of My Garden was also substituted to improve identification accuracy (Figure 14). In the plan formation interface, a back button was added in the upper left corner for users to jump back and revise the previous options (from the feedback of Task 2). Based on the analysis of Task 5, an evaluation of heart rate was added to help users better control their exercise intensity. In addition, to offer users an intuitive overview of the My Garden interface, a description was added when users enter this interface for the first time. The description explains the gamification feedback system and the meaning of each icon in the interface (from the feedback of Task 7). Additional features: Based on user feedback from Task 4, a new feature called Challenge, which offers an additional workout as an alternative aside from the plan, was added in the smartwatch terminal interface (Figure 15). Users can specify the time duration or the number of skips to begin with. Data analysis for additional workouts can also be accessed in the data analysis interface.
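The heart-rate evaluation mentioned above (added after Task 5) is described only as helping users judge whether their heart rate is in a normal range. The sketch below shows one common way such a check could be implemented, using the conventional age-based maximum heart rate estimate (220 minus age); the zone boundaries, function name and example values are assumptions for illustration, not the rule actually used by the system.

```python
def heart_rate_zone(heart_rate_bpm: int, age_years: int) -> str:
    """Classify an exercise heart rate against an age-based estimate of the
    maximum heart rate (220 - age). Zone boundaries here are illustrative."""
    hr_max = 220 - age_years
    ratio = heart_rate_bpm / hr_max
    if ratio < 0.50:
        return "light"       # below typical exercise intensity
    if ratio < 0.70:
        return "moderate"    # a comfortable range for sustained skipping
    if ratio < 0.85:
        return "vigorous"    # high intensity; short bouts only
    return "too high"        # prompt the child to slow down and rest

# Example: a 9-year-old skipping with an average heart rate of 140 bpm
print(heart_rate_zone(140, 9))  # -> "moderate"
```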
User Requirements of General Importance The intelligent skipping rope service system provides exercise guidance in the form of a personal workout plan for students with different skipping abilities. For example, students who are just starting to learn to skip rope need effective pre-exercises and detailed instructions to help them master the correct skipping posture and grasp the rhythm for skipping. On the other hand, students who want to improve their skipping speed need a scientific and effective training program to reach this end. As for students who are interested in other ways of skipping, they need an expanded introduction to other freestyle skipping skills. In addition, exercise guidance enhances students' physical health awareness, guiding them to form the habit of warming up and stretching. To bring "fun" to the physical activity of rope skipping and to motivate children to develop physical activity into a life-long hobby, this study established a multifaceted incentive system which includes self-evaluation and support from parents, peers and school. Additionally, considering space constraints, the fact that most children skip rope outdoors and the concerns of parents about exergames, gamification features are applied in post-exercise feedback rather than throughout the entire exercise process. Implications for Design Research In the design process, designers should always take into consideration the different needs of children and parents. Given the different user scenarios, the service system designed in this research requires a combined use of a smartwatch terminal and a mobile phone terminal, the former of which is oriented towards pupils, while the latter is mainly for parental use. To implement modular functions and improve maintainability, the service system uses web services to realize communication between the mobile terminals and the server side. While the intelligent skipping rope acquires exercise data, the software part of the intelligent skipping rope system supports daily exercise plan guidance and an exercise data log for the users. At the same time, it provides gamification feedback to visualize the amount of exercise and motivate users to keep exercising. In addition, it has a data sharing function which can connect family and school; the feedback from parents, teachers and peers serves as a source of motivation. Study Limitations and Future Directions Several aspects limit the generality of the current research results, which can provide a reference for future research. Firstly, the number of users participating in this study was limited, and more participants would improve the effectiveness of the study.
Secondly, the results of this study conducted in China may be limited to a Chinese cultural background, and future studies should consider people from different cultural backgrounds. Furthermore, although qualitative approaches gave useful insights in the first phase of designing and developing the intelligent skipping rope and service system, the usability test result of the prototype remains imprecise. Future studies should continue to perfect the function of the system, improve the user experience, and explore the use of the system in a real-life setting. Conclusions Primary school students are in a critical period of physical growth. Rope skipping, as a whole-body coordinated activity, can provide students with cardiovascular fitness, strength, and conditioning. With the rapid rise of the obesity rate in school-aged children and the proposal of health-enhancing physical activity (HEPA), rope skipping is an ideal fitness activity to promote health and fitness in students. The booming development of smart fitness product design and the advent of exergames have brought new possibilities for physical education regarding rope skipping. This research took an intelligent skipping rope service system for pupils as the research object and aimed to design a new intelligent skipping rope and a complete service system. This research summarized users' needs through observation and semi-structured interviews, established personas, and proposed a combination of software and hardware based on product positioning. The interactive prototype was tested through usability evaluation, and optimization was then carried out according to the feedback. The final intelligent skipping rope service system design, in the combination of hardware and software products, implemented the acquisition, analysis, and sharing of the exercise data. In addition, the features of personalized plan formation with exercise guidance and the incentive system, which included self-evaluation, support from parents, peers and school as well as gamification feedback, made improvements in educating pupils with the knowledge, skills, and confidence to enjoy healthy physical activity. In order to determine the effect of the intelligent skipping rope service system on pupils' PA levels, self-efficacy, enjoyment, and psychological needs satisfaction, more experiments based on larger sample groups should be conducted as future work. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Conflicts of Interest: The authors declare no conflict of interest.
13,395.2
2021-07-29T00:00:00.000
[ "Engineering", "Education", "Computer Science" ]
Computationally efficient flux variability analysis Background Flux variability analysis is often used to determine the robustness of metabolic models in various simulation conditions. However, its use has been somewhat limited by the long computation time compared to other constraint-based modeling methods. Results We present an open source implementation of flux variability analysis called fastFVA. This efficient implementation makes large-scale flux variability analysis feasible and tractable, allowing more complex biological questions regarding network flexibility and robustness to be addressed. Conclusions Networks involving thousands of biochemical reactions can be analyzed within seconds, greatly expanding the utility of flux variability analysis in systems biology. Background Flux balance analysis (FBA) [1,2] is concerned with the following linear program (LP)

    max c^T v   subject to   S v = 0,   v_l ≤ v ≤ v_u,   (1)

where the matrix S is an m × n stoichiometry matrix with m metabolites and n reactions and c is the vector representing the linear objective function. The decision variables v represent fluxes, with v ∈ V ⊆ ℝ^n, and the vectors v_l and v_u specify lower and upper bounds, respectively. The constraints Sv = 0, together with the upper and lower bounds, specify the feasible region of the problem. Flux variability analysis (FVA) [3] is used to find the minimum and maximum flux for reactions in the network while maintaining some state of the network, e.g., supporting 90% of the maximal possible biomass production rate. Applications of FVA for molecular systems biology include, but are not limited to, the exploration of alternative optima of (1) [3], studying flux distributions under suboptimal growth [4], investigating network flexibility and network redundancy [5], optimization of process feed formulation for antibiotic production [6], and optimal strain design procedures as a pre-processing step [7,8]. Let w represent some biological objective such as biomass or ATP production. After solving (1) with c = w, FVA solves two optimization problems for each flux v_i of interest,

    max v_i  and  min v_i   subject to   S v = 0,   w^T v ≥ g Z_0,   v_l ≤ v ≤ v_u,   (2)

where Z_0 = w^T v_0 is the optimal objective value of (1) and g is a parameter which controls whether the analysis is done w.r.t. suboptimal network states (0 ≤ g < 1) or to the optimal state (g = 1). Assuming that all n reactions are of interest, FVA requires the solution of 2n LPs. While FVA is clearly an embarrassingly parallel problem and is therefore ideally suited for computer clusters, this note focuses on how FVA can be run efficiently on a single CPU. A multi-CPU implementation of fastFVA can be done in the same way as for FVA, i.e., by distributing subsets of the n reactions to individual CPUs. It is expected to give almost linear speedup for sufficiently large problems. Implementation A direct implementation of FVA iterates through all the n reactions and solves the two optimization problems in (2) from scratch each time by calling a specialized LP solver, such as GLPK [9] or CPLEX (IBM Inc.). At iteration i, i = 1, 2, ..., n, all elements of c are zero except c_i = 1. Since the only difference between each iteration is a change in the objective function, i.e., the feasible region does not change, solving the LPs from scratch is wasteful. Each time an LP is solved, the solver has to spend some effort in finding a feasible solution. Once a feasible solution is found, the solver then proceeds to locate the optimum. The small changes in the objective function suggest that, on average, the optimum for iteration i does not lie far away from the optimum for iteration i + 1.
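To make the 2n-LP structure of (1) and (2) concrete, here is a minimal, illustrative Python sketch of the direct (from-scratch) FVA loop using scipy.optimize.linprog. It is not the fastFVA implementation (which is C++ code driving GLPK or CPLEX with warm starts, as discussed next); the function and variable names are ours, and every LP is solved from scratch.

```python
import numpy as np
from scipy.optimize import linprog

def naive_fva(S, v_lo, v_hi, w, g=0.9):
    """Direct FVA: solve 2n LPs from scratch (no warm starts).

    S          : (m, n) stoichiometric matrix
    v_lo, v_hi : length-n lower/upper flux bounds
    w          : length-n biological objective (e.g. biomass reaction)
    g          : fraction of the optimal objective Z0 to maintain
    """
    S = np.asarray(S, dtype=float)
    w = np.asarray(w, dtype=float)
    m, n = S.shape
    bounds = list(zip(v_lo, v_hi))
    b_eq = np.zeros(m)

    # Problem (1): linprog minimizes, so maximize w^T v by negating w.
    fba = linprog(-w, A_eq=S, b_eq=b_eq, bounds=bounds, method="highs")
    Z0 = -fba.fun

    # Problem (2): add w^T v >= g*Z0, written as -w^T v <= -g*Z0.
    A_ub = -w.reshape(1, -1)
    b_ub = np.array([-g * Z0])

    v_min, v_max = np.empty(n), np.empty(n)
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=b_eq,
                     bounds=bounds, method="highs")
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=b_eq,
                     bounds=bounds, method="highs")
        v_min[i], v_max[i] = lo.fun, -hi.fun
    return v_min, v_max
```

This naive loop already reproduces the flux ranges; the speedups discussed next come from reusing the previous optimal basis (warm starts) and disabling presolve, neither of which a generic interface such as linprog exposes.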
With Simplex-type LP algorithms, this property can be exploited by solving problem (1) from scratch and then solving the subsequent 2n problems of (2) by starting from the last optimum solution each time (warm starts). It should be noted that the default behavior of some Simplex-type solvers is to use warm starts when a sequence of LPs is solved within the same application call. However, current implementations of FVA do not make use of this option (cf. [10]). Furthermore, for increased efficiency, model preprocessing (presolving) should be disabled after solving the initial problem P. Given a value of 0 < g ≤ 1, fastFVA performs the following procedure:
  Set up problem (1), denote it by P
  Solve P from scratch to get v_0 and Z_0
  Add the constraint w^T v ≥ g Z_0 to P
  for i = 1 to n
      Let c_i = 1 and c_j = 0, ∀ j ≠ i
      Maximize P, starting from v_{i-1}, to get v_i and Z_i
      maxFlux_i = Z_i
Once all maximization problems have been solved, the minimization problems are solved in the same way, starting from v_0 = v_n. An important difference between the various LP solvers available is their ability to exploit multiple core CPUs or multi-processor CPUs to increase performance. The GLPK solver, for example, is a single threaded application. When running on a quad-core machine with hyperthreading enabled, the CPU load is only at 12-13%. On multi-core machines, a significant speedup can often be achieved by simply running multiple instances of fastFVA, each working on a different subset of the n reactions. The fastFVA package runs within the Matlab environment, which will facilitate the use of fastFVA by users less experienced in programming. In addition, many biochemical network models can be imported into Matlab using the Systems Biology Markup Language (SBML) and the COBRA toolbox [10]. The fastFVA code is written in C++ and is compiled as a Matlab EXecutable (MEX) file (additional file 1). Matlab's PARFOR command is used to exploit multi-core machines. Two different solvers are supported, the open-source GLPK package [9] and the industrial strength CPLEX solver from IBM. Results and Discussion We evaluated the performance of fastFVA on six biochemical network models ranging from approx. 650 up to 13,700 reactions (Table 1, additional file 2). Performance evaluation Four metabolic networks and two versions of a genome-scale network of the transcriptional and translational (tr/tr) machinery of E. coli were used for testing the fastFVA code (Table 1). The biomass reaction was used as an objective in the metabolic models, while the demand of the ribosomal 50S subunit was used as the objective in the tr/tr models. In all cases, flux distributions corresponding to at least 90% of optimal network functionality were sought. The fastFVA code was tested on a DELL T1500 desktop computer with a 2.8 GHz quad core Intel i7 860 processor with hyperthreading enabled and Windows 7. Running times The running times are given in Table 2, where fastFVA is compared to the direct implementation of FVA found in the COBRA toolbox [10]. The observed speedup is significant, ranging from 30 to 220 times faster for GLPK and from 20 to 120 times faster for CPLEX. The minimum and maximum flux values obtained with fastFVA were essentially identical to the values obtained with the direct approach (data not shown). Other uses of fastFVA The fastFVA code can be used to compute the flux spectrum [11], a variant of metabolic flux analysis, simply by setting g = 0 in (2).
The α-spectrum [12], which has been used to study flux distributions in terms of extreme pathways, can also be computed with fastFVA. In this case, the parameter g in (2) is set to zero and the S matrix is replaced by a matrix P containing the extreme pathways as its columns. Conclusions With this efficient FVA tool in hand, new questions can be addressed to study the flexibility of biochemical reaction networks in different environmental and genetic conditions. It is now possible to design computational experiments requiring hundreds or even thousands of FVAs. Availability and requirements The fastFVA package is freely available at http://notendur.hi.is/ithiele/software/fastfva.html together with precompiled binaries for Linux and Microsoft Windows. The fastFVA code runs under Matlab and relies on third-party solvers to solve linear optimization problems. Two such solvers are supported, the open source GLPK [9] and the industrial strength CPLEX (IBM Inc.). The fastFVA code is written in C++ and is compiled as a Matlab EXecutable function (MEX). It is released under the GNU LGPL. Additional material Additional file 1: This file contains the C++ source code, the precompiled binaries, an example of how to use fastFVA, and scripts for carrying out the experiments described above. Additional file 2: This file contains the six metabolic networks used in the experiments.
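Since FVA is embarrassingly parallel, the Background above suggests distributing subsets of the n reactions across CPUs (fastFVA itself relies on Matlab's PARFOR for this). Purely as an illustration of that splitting idea, and not of the fastFVA code, the following sketch dispatches the same scipy-based per-reaction LPs from the earlier snippet across worker processes; the names are ours, and on Windows the call to parallel_fva must sit under an `if __name__ == "__main__":` guard.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from functools import partial
from scipy.optimize import linprog

def flux_range(i, S, b_eq, A_ub, b_ub, bounds):
    """Worker: return (i, min flux, max flux) for reaction i over the FVA region."""
    c = np.zeros(S.shape[1])
    c[i] = 1.0
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=b_eq,
                 bounds=bounds, method="highs")
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=b_eq,
                 bounds=bounds, method="highs")
    return i, lo.fun, -hi.fun

def parallel_fva(S, v_lo, v_hi, w, g=0.9, workers=4):
    """Distribute the 2n LPs of problem (2) over `workers` processes."""
    S, w = np.asarray(S, float), np.asarray(w, float)
    bounds, b_eq = list(zip(v_lo, v_hi)), np.zeros(S.shape[0])
    # Solve problem (1) once to obtain Z0.
    Z0 = -linprog(-w, A_eq=S, b_eq=b_eq, bounds=bounds, method="highs").fun
    task = partial(flux_range, S=S, b_eq=b_eq,
                   A_ub=-w.reshape(1, -1), b_ub=np.array([-g * Z0]), bounds=bounds)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Chunks of reaction indices are handed to the worker processes.
        results = list(pool.map(task, range(S.shape[1]), chunksize=64))
    v_min, v_max = np.empty(S.shape[1]), np.empty(S.shape[1])
    for i, lo, hi in results:
        v_min[i], v_max[i] = lo, hi
    return v_min, v_max
```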
1,933.4
2010-09-29T00:00:00.000
[ "Computer Science", "Biology" ]
On the global well-posedness of axisymmetric viscous Boussinesq system in critical Lebesgue spaces The contribution of this paper is focused on the global existence and uniqueness topic, in the three-dimensional case, for the axisymmetric viscous Boussinesq system in critical Lebesgue spaces. We aim at deriving results analogous to those recently obtained in \cite{Gallay,Gallay-Sverak} for the classical two-dimensional and three-dimensional axisymmetric Navier-Stokes equations. Roughly speaking, we show that if the initial data $(v_0,\rho_0)$ is axisymmetric and $(\omega_0,\rho_0)$ belongs to the critical space $L^1(\Omega)\times L^1(\mathbb{R}^3)$, where $\omega_0$ is the initial vorticity associated to $v_0$ and $\Omega=\{(r,z)\in\mathbb{R}^2:r>0\}$, then the viscous Boussinesq system has a unique global solution. INTRODUCTION The state of a moving stratified fluid in three dimensions under the Boussinesq approximation, taking into account the friction forces, is described by the distribution of the divergence-free fluid velocity v(t, x) at position x and time t, the scalar function ρ(t, x), which designates either the temperature in the context of thermal convection or the mass density in the modeling of geophysical fluids, and the pressure p(t, x), which relates v and ρ through an elliptic equation. This leads to the Cauchy problem for the Boussinesq system (B µ,κ ), which has been studied in several critical spaces, up to BMO −1 . It should be noted that these types of solutions are globally well-posed in time for initial data sufficiently small with respect to the viscosity, except in the two-dimensional case, see [37,41]. In a similar way, especially in dimension two, the system (B µ,κ ) was tackled by numerous authors in various functional spaces and for different values of the parameters κ and µ. For this connected subject, we refer to some selected references [25,27,28,30,36,39,44]. Before discussing theoretical results on the well-posedness topic for the viscous Boussinesq system (B µ,κ ) in three dimensions, let us first notice that the topic of global existence and uniqueness for (NS µ ) in the general case is still an open problem in PDEs. It is therefore incumbent upon us to seek a subclass of vector fields which in turn leads to some conserved quantities, and so to a global well-posedness result. Such a subclass involves rewriting (NS µ ) in vorticity formulation by applying the "curl" to the momentum equation, where the vorticity is defined by ω = ∇ × v. Thus, we have: ∂ t ω + v · ∇ω − µ∆ω = ω · ∇v. (1.1) Here, the triplet ( e r , e θ , e z ) represents the usual frame of unit vectors in the radial, azimuthal and vertical directions, with the notation e r = (x 1 /r, x 2 /r, 0), e θ = (−x 2 /r, x 1 /r, 0), e z = (0, 0, 1), and we restrict attention to axisymmetric vector fields without swirl, v = v r (t, r, z) e r + v z (t, r, z) e z . For these flows the vorticity ω takes the form ω = ω θ e θ , with ω θ = ∂ z v r − ∂ r v z . Taking advantage of div v = 0, the velocity field can be determined explicitly in the half-space Ω = {(r, z) ∈ R 2 : r > 0} by solving an elliptic system under the homogeneous boundary conditions v r = ∂ r v z = 0. The differential system (1.3) is the well-known axisymmetric Biot-Savart law associated to (NS µ ), see Section 2.1. A few computations show that the stretching term ω · ∇v reduces to (v r /r) ω θ e θ and that ω θ evolves according to ∂ t ω θ + v · ∇ω θ − (v r /r) ω θ = µ(∆ − 1/r 2 )ω θ , (1.4) with the notations v · ∇ = v r ∂ r + v z ∂ z and ∆ = ∂ 2 r + (1/r) ∂ r + ∂ 2 z . By setting Π = ω θ /r, we discover that Π satisfies ∂ t Π + v · ∇Π − µ(∆ + (2/r) ∂ r )Π = 0. (1.5) Since the dissipative operator (∆ + (2/r) ∂ r ) has a good sign, the L p -norms are bounded in time, that is, for t ≥ 0, ‖Π(t)‖ L p ≤ ‖Π 0 ‖ L p , p ∈ [1, ∞]. (1.6) Under this pattern, M.
Ukhoviskii and V. Yudovich [43], independently O. Ladyzhenskaya [35] succeed to recover (NS µ ) globally in time, whenever v 0 ∈ H 1 and ω 0 , ω 0 r ∈ L 2 ∩ L ∞ . This result was relaxed later by S. Leonardi, J. Màlek, J. Necȃs and M. Pokorný for v 0 ∈ H 2 and weakened recently in [1] by H. Abidi for v 0 ∈ H 1 2 . The majority of aforementioned results are accomplished within the framework of finite energy solutions. For the solutions with infinite energy, in particular in two dimensions many results dealing with the global well-posedness problem have been obtained by numerous authors. Particularly, worth mentioning that Giga, Miyakawa and Osada have been established in [22] that (NS µ ) admits a unique global solutions for initial vorticity is measure. Lately, M. Ben-Artzi [3] has shown that (NS µ ) is globally well-posed whenever the initial vorticity ω 0 belongs to critical Lebesgue space L 1 (R 2 ) who proposed a new formalism based on elementary comparison principles for linear parabolic equations. While, the uniqueness was relaxed later by H. Brezis [6]. Thereafter, this result was improved by I. Gallagher and Th. Gallay [17], where they constructed a solutions globally in time, under the assumption that ω 0 is a finite measure. For more details about this subject we refer the reader to the references [11,18,21]. More recently, the global well-posedness problem for (1.4) was revisited in three-dimensional by Th. Gallay and V.Sverák who established in [20] that (NS µ ) posseses a unique global solutions if the initial velocity is an axisymmetric vector field and its vorticity lying the critical space L 1 (Ω). In addition, they were extending their results in more general case, i.e., ω 0 is a finite measure, where Ω is half-plane Ω = {(r, z) ∈ R 2 : r > 0, z ∈ R} endowed with the product measure drdz. Actually, their paradigm uses specifically the standard fixed point method to show in particular the local well-posedness for the system (1.4) under the form ∂ t ω θ + div ⋆ (vω θ ) = µ ∆ − 1 r 2 ω θ combined with the special structure of the vorticity, in particular, the axisymmetric Biot-Svart law, where div ⋆ f = ∂ r f r + ∂ z f z in general case. They showed that the local solutions often constructed can be extended to the global one by exploiting some a priori estimates for ω θ in various norms. We point us that the uniqueness topic for initial vorticity is measure was done by the same authors under some smallness condition for the ponctual part. Furthermore, they provide also an asymptotic behavior study for positive vorticity with a finite impulse. Apropos of (B µ,κ ), the global regularity in dimension three of spaces has received a considerable attention. As shown in [13], for κ = 0 R. Danchin and M. Paicu investigated that (B µ,κ ) is well-posed in time for any dimension in the framework of Leray's and Fujita-Kato's solutions, excpect in three-dimensional and under also smallness condition, the system (B µ,κ ) is also global. Next, in the axisymmetric case, H. Abidi, T. Hmidi and S. Keraani were establishing in [4] that (B µ,κ ) is globally well-posed by rewriting it under vorticity-density formulation: Consequently, the quantity Π = ω θ r solving the equation They assumed that v 0 ∈ H 1 (R 3 ), Π 0 ∈ L 2 (R 3 ), ρ 0 ∈ L 2 ∩L ∞ with supp ρ 0 ∩(Oz) = / 0 and P z (supp ρ 0 ) is a compact set in R 3 , especially to dismiss the violent singularity of ∂ r ρ r , with P z being the orthogonal projector over (Oz). Those results are improved later by T. Hmidi and F. 
Rousset in [29] for κ > 0 by removing the assumption on the support of the density. Their strategy is deeply based on the coupling between the two equations of the system (1.9) by introducing a new unknown which called coupled function (see, Section 2). In the same way, more recently P. Dreyfuss and second author [14] treated the system in question, where they replaced the full dissipation by an horizontal one in all the equations, and proved the global wellposedness if the axisymmetric initial data In the same direction, in [34] the second and third authors, succeed to solve (B µ,κ ) globally in time for κ = 0 and axisymmetric initial data , by essentially combining the works of [1,13,29]. In the present paper, we are interested to conduct the same results recently obtained in [20] for the viscous Boussinesq system (B µ,κ ) expressed by the following vorticity-density formulation.   9) for initial data (ω 0 , ρ 0 ) in the critical Lebesgue space L 1 (Ω) ×L 1 (R 3 ), with the following notations Let us denote that the spaces L 1 (Ω) and L 1 (R 3 ) are scale invariant, in the sense for any (ω 0 , ρ 0 ) ∈ L 1 (Ω) × L 1 (R 3 ) and any λ > 0. This is derived from the fact is a symmetry of (1.9). At this stage, we are ready to state the main result of this paper. To be precise, we will prove the following theorem. be an axisymmetric initial data, then the system (1.9) for κ = 1 admits a unique global mild solution. More precisely, we have: (1.10) (1.11) Furthermore, for every p ∈ [1, ∞], there exists some constantK p (D 0 ) > 0, for which, and for all t > 0 the following statements hold. A few comments about the previous Theorem are given by the following remarks. Remark 1.2. By axisymmetric scalar function we mean again a function that depends only on the variable (r, z) but not on the angle variable θ in cylindrical coordinates. We check obviously that the axisymmetric structure is preserved through the time in the way that if (v 0 , ρ 0 ) is axisymmetric without swirl, then the obtained solution is it also. Remark 1.3. The hypothesis ω θ is in L 1 (Ω) doesn't implies generally that the associated velocity v is in L 2 (Ω) space. Consequently, the classical energy estimate is not available to derive a uniform bound for the velocity. The proof is organized in two parts. The first one cares with the local well-posedness topic for (1.9) in the spirit of Gallay and Svérak [20]. We make use of fixed point-method for an equivalent system (1.9) on product space equipped with an adequate norm with the help of the axisymmetric Biot-Savart law and some norm estimates between the velocity and vorticity. But in our context, we should deal carefully wih the additional term ∂ r ρ r which contributes a singularity over the axe 5 (Oz). The remedy is to hide this term by exploiting the coupling structure of the system (1.9) for κ = 1 and introducing a new unknown functions Γ and Γ in the spirit of [29] by setting Γ = ω θ r − ρ 2 , and Γ = rΓ. A straightforward computation shows that Γ and Γ solve, respectively (1.14) (1. 15) In fact, in the second part, we shall investigate some a priori estimates for the different variables, in order to derive the global regularity for the system in question. Significant properties of the new unknowns, such as the maximum principle, are gained in this transition and Γ evolves a similar equation and keeps the same boundary conditions than Π in the case of the axisymmetric Navier-Stokes without swirl, see (1.5). 
As consequence, the function Γ (and eventually Γ after some technical computations) satisfies the boundedness estimate as in (1.6) which will be crucial in the process of deriving the global regularity of our solutions. For the reader's convenience, we provide a brief headline of this article. In section 2, we briefly depict the framework that exists regarding the axisymmetric Biot-Savart law. Many results could be spent in explaining this framework in detail, in particular, the relation between the velocity vector field and its vorticity by means of stream function. Along the way, we recall some weighted estimates which will be helpful in the sequel. Afterwards, we focus in the linear equation of (1.9) and some characterization of their associated semigroup, in particular the L p → L q estimate as in two-dimension space. Section 3, mainly treats the local well-posedness topic for the system (1.9). The main tool is the fixed point argument on the product space combined with a few technics about the semigroup estimates. In section 4, we investigate some global a priori estimates by coupling the system (1.9) and introducing the new unknowns Γ and Γ. Considering these latest quantities will be a helpful to derive the global existence for the equivalent system (1.9) and consequently the system (B µ,κ ). SETUP AND PRELIMINARY RESULTS In this section we recall some basic tools which will be employed in the subsequent sections. In particular, we devellop the Biot-Savart law in the framework of axisymmetric vector fields, and we study the linear equation associated to the system (1.9), usually specialized to the local existence. The tool box of Biot-Savart law. Recalling that in the cylindrical coordinates and in the class of axisymmetric vector fields without swirl the velocity is given by v = (v r , 0, v z ) with v r and v z are independtly of θ −variable, ω θ its vorticity defined from Ω into R by ω θ = ∂ z v r − ∂ r v z and the divergence-free condition divv = 0 turns out to be In this case, it is not difficult to build a scalar function Ω ∋ (r, z) → ψ(r, z) ∈ R which called axisymmetric stream function and satisfying Consequently, one obtains that ψ evolves the following linear elliptic inhomogeneous equation It is evident that L is an elliptic operator of second order, then according to [42], L is invertible with an inverse L −1 . Consequently, the last boundary value problem admits a unique solution given by where the function F : (0, ∞) → R is expressed as follows. Since, F cannot be expressed as an elementary functions, but it contributes some asymptotic properties near s = 0 and s = ∞ listed in the following proposition. For more details about the proof, see [15,42]. Proposition 2.1. Let F be the function defined in (2.4), then the following assertions are hold. Thus in view of (2.3), Ψ takes the form 7 with K can be seen as the kernel of the last integral representation. The last formula together with (2.1) claim that there exists a genuine connection between the velocity and its vorticity, namely, axisymmatric Biot-Savart law which reads as follows Here, with the notation and (2.8) A worthwhile property of the kernals K r and K z are given in the following result. For more details about the proof, see [20]. Now, we state the first consequence of the above result, in particular the L p → L q between the velocity and its vorticity, specifically we establish. Proposition 2.3. 
Let v be an axisymmetric velocity vector associated to the vorticity ω θ via the axisymmetric Biot-Savart law (2.6). Then the following assertions are hold. (2.11) Proof. (i) Combining (2.6) and (2.9), we get The last two integrals of the right-hand side are be seen as a singular integral. So, by hypothesis (ii) Let R > 0, then in view of (2.9) we have. 8 where Then by easy computations achieve the estimate. In the axisymmetric case the weighted estimates practice a decisive role to bound some quantities like r α v in Lebesgue spaces for some α. Now, we state some of them which their proofs can be found in [15,20]. Characterizations of semigroups associated with the linearized equation. We focus on studying the linearized boundary initial value problem associated to the system (1.9) and we state some properties of their semipgroups. Specifically, we consider in the product space Ω × R 3 , with Ω = {(r, z) ∈ R 2 : r > 0} is the half-space by prescribing the homogeneous Dirichlet conditions at the boundary r = 0 for ω θ variable. For (ω 0 , ρ 0 ) ∈ L 1 (Ω) × L 1 (R 3 ), the solution of (2.13) is given explicitely by where (S 1 (t)) t≥0 and (S 2 (t)) t≥0 being respectively the semigroups or evolution operators associated to the dissipative operators (∆ − 1 r 2 ) and ∆. Such are characterized by the following explicit formulae, namely we have. (2.15) Proof. We assume that (ω θ , ρ) solving (2.13), then a straigthforward computations claim that (ω, ρ), with ω = ω θ e θ satisfying the usual heat equation ∂ t ω − ∆ω = 0 and ∂ t ρ − ∆ρ = 0 in R 3 with initial data (ω(0, ·), ρ(0, ·)). Therefore, for every t > 0 we have (2.16) We will develop each term in the cylindrical basis ( e r , e θ , e z ) by writing x = (r cos θ , r sin θ , z) and x = ( r cos θ , r sin θ , z), hence the first equation of (2.16) takes the form , thus we have (2.18) To treat I 1 , we set α = θ − θ 2 then we have Then the last estimate becomes Similarly, Combining the last two estimates and plug them in I 1 we reach the desired estimate. 10 For the second equation in (2.16) we express the density formula in ( e r , e θ , e z ) basis (2.19) Setting The same variable α = θ − θ 2 allows us to write Plug I 2 in (2.19), we get the result. This ends the proof of the Proposition. The following Proposition provides some asymptotic behavior of the functions N 1 and N 2 near 0 and ∞, which will be fundamental in the sequel. Proposition 2.6. Let N 1 , N 2 : (0, ∞) → R be the functions defined in (2.15). Then the following statements are hold. Note that lim t↑0 II 2 = 0, so the behavior of N 1 near 0 comes from II 1 . Hence, let us deal with II 1 , we insert the Taylor expansion of the function ζ → 1 √ 1−tζ 2 in the integral of II 1 to obtain . It is straightforward to show that Consequently, lim t↑0 II 1 = 1. Combining all the previous quantities, we find the asymptotic behavior of II 1 near 0, that is, By derivation of II 1 , we find the behavior of N ′ 1 . (ii) The Mac Laurin's expansion of the function α → e − sin 2 α t at 0 is given by . After an easy computations we achieve the estimate. (iii) To prove this assertion, setting y = sin α √ t in N 2 and we split the integral into two parts, one has We follow the same steps as N 1 . For the second integral in right-hand side, we have Let us observe that the last estimate goes to 0 as t ↑ 0, so the asymptotic behavior of N 2 near 0 comes only from the first integral. 
To be precise, it is clear that t → 1 √ 1−ty 2 is bounded function whenever 0 < y < 1 Thus, the expansion of the function x → (1 − x) − 1 2 for x = ty 2 enuble us to write (iv) Using the fact sin α ≃ α near 0, then we get We set y = α √ t , clearly that y ↑ 0 as t ↑ ∞ and the power expansion of the function e y near 0 yields the asymptoyic expansion, whereas N ′ 2 is a direct derivative of N 2 expansion. Some consequences of the previous Proposition are listed in the following remark. (i) It should be noted that the functions t → N 1 (t) and t → N 2 (t) are decreasing over ]0, ∞[, but the proof seems very hard. Other nice properties of (S i (t)) t≥0 , with i = 1, 2, in particular the estimate L p → L q are given in the following result. Here, div ⋆ f = ∂ r f r + ∂ z f z (resp. div f = ∂ r f r + ∂ z f z + f r r ) stands the divergence operator over R 2 (resp. the divergence operator over R 3 in the axisymmetric case). Proof. (i) We follow the proof of [20] with minor modifications, for this aim let (r, z), ( r, z) ∈ Ω, we will prove the following worth while estimates      1 4πt . (2.23) We distinguish two cases r ≤ 2r and r > 2r. (ii) By definition for every (r, z) ∈ Ω, we have After an integration by parts, it happens and We proceed by the same manner as above. The fact that the functions N 1 , N ′ 1 and t → t α N 1 (t),t → t α N ′ 1 (t) are bounded, see Remark 2.7, one finds and | f z ( r, z)|d rd z. 14 Together with Young's inequality, we obtain (2.21). (iii) Let (r, z) ∈ Ω, then we have The two terms III 3 and III 4 ensue by the same argument as in (ii). It remains to treat the term III 5 in the following way The fact that (·/r r) 1/2 N 2 (·/r r) is bounded guided to By pluging the last estimate in (2.25) and combine it with (2.24), it follows Then a new use of Young's inequality leading to the result. To close our claim, it remains to establish that R + ∋ t → S 1 (t) (resp. R + ∋ t → S 2 (t)) is continuous on L p (Ω) (resp. on L p (Ω)). We restrict ourselves only for (S 1 (t)) t≥0 . Let ω 0 ∈ L p (Ω) and define its extension on R 2 by ω 0 which equal to 0 outside of Ω. Thus, in view the change of variables r = r + √ tϑ and z = z + √ tγ, the statement (2.14) takes the form where ϒ(t, r, z, ϑ , γ) = 1 + √ tϑ r Taking the L p −estimate of (2.26), then with the aid of the following Minkowski's integral formula in general case Now, we must establish that ϒ(t, r, z, ϑ , γ) L p (Ω) → 0 as t ↑ 0. To do this, let r > 0 and r + √ tϑ > 0. Writting Therefore On the other hand, it is clear to verify that 1 tϑ ) goes to 1 as t ↑ 0. Thus, Lebesgue's dominated convergence asserts for (ϑ , γ) ∈ R 2 that ϒ(t, r, z, ϑ , γ) L p (Ω) → 0 when t ↑ 0. A new use of Lebesgue's dominated convergence, we finally deduce lim t↑0 S 1 (t)ω 0 (r, z) − ω 0 (r, z) L p (Ω) → 0, (2.27) which accomplished the proof. In the spirit of Proposition 3.5 in [20], another weighted estimates for the linear semigroup (2.14) are shown in the following proposition, the proof of which can be done by the same reasoning as in the previous proposition, We end this section by recalling the following classical estimate on the heat kernel in dimension three, the proof of which is left to the reader. (2.30) LOCAL EXISTENCE OF SOLUTIONS We will explore the aformentioned results and some preparatory topics in the previous sections, we shall scrutinize the local well-posedness issue for the system (1.9). 
For this reason, we rewrite it in view of the divergence-free condition in the following form The direct treatment of the local well-posedness topic for (3.1) in the spirit of [20] for initial data (ω 0 , ρ 0 ) in the critical space L 1 (Ω) × L 1 (R 3 ) contributes many technical difficulties. This motivates to add the following new unknown ρ rρ which solves We remark that ρ satisfies the same equation as ω θ with additional source term and their variations are in Ω. To achieve our topic we will handle with the following equivalent integral formulation. In order to analyze the above system, we will be working in the following Banach spaces. equipped with the following norms Now, our task is to prove the following result. admits a unique local solution satisfying Proof. We will proceed by the fixed point theorem in the product space X T = X T × X T × Z T equipped by the norm Notice that by definition, we have For t ≥ 0, define the free part (ω lin (t), ρ lin (t), ρ lin (t)) = S 1 (t)ω 0 , S 1 (t)(rρ 0 ), S 2 (t)ρ 0 , where S 1 (t), S 2 (t) is given in Proposition 2.5. In accordance with the (i)-Proposition 2.8, it is not difficult to check that for (ω 0 , ρ 0 ) ∈ L 1 (Ω) × L 1 (R 3 ), we have for T > 0 and sup 0<t≤T t 1/4 ρ lin (t) On the other hand, the fact that together with (2.28) stated in Proposition 2.9, we further get Combining (3.5), (3.6) and (3.7) to obtain that (ω lin , ρ lin , ρ lin ) ∈ X T . Next, define the following quantity which will be useful in the contraction. We claim that Λ(ω 0 , ρ 0 , T ) → 0 when T ↑ 0. To do this, we employ the fact ( . Then for every ε > 0 and every On account of (i)-Proposition 2.8 we write Multiply the both sides by t 1/4 and taking the supremum over (0, T ] to get Thus, by setting and let T (resp. ε) goes to 0, one deduces Now, we multiply the both sides by t 3/8 and taking the supremum over (0, T ] to deduce Similarly, by putting we shall obtain that lim Now, we are ready to contract the integral formulation (3.3) in X T . Doing so, define for (ω θ , ρ, ρ) ∈ X T the map We aim at estimating Due to the similarity of the first two lines of (3.16), we will restrict ourselves to analyse the first and the third ones. For dτ. 19 Thanks to (2.10), it follows that We show next how to estimate t 0 S 1 (t −τ)∂ r ρ(τ)dτ in L 4 3 (Ω). In view of Proposition 2.9 for α = 0 and β = 3 4 , we get The above estimates combined with (3.8) provide the following inequality As explained above, the estimate of t 0 S 1 (t − τ)div ⋆ v(τ) ρ(τ) dτ can be done along the same lines, so we have we deduce that Let us move to estimate the last line in (3.16). Under the remark div(vρ) = v r r ρ + div ⋆ (vρ), we write dτ (3.20) 20 So, for the first term, we shall apply (2.28) stated in Proposition 2.9 for α = 3 4 and β = 2 to get The second term of the r.h.s. in (3.20), will be done by a similar way as above, but we employ (2.29) in Proposition 2.9 for α = 3 4 and β = 1, one may write Under the conditions 1 The details can be done by following the same approach of Lemma 5.1 from [20] concerning the Navier-Stokes equations, but in our case we have an additional term ∂ r ρ which its integral vanishes over Ω. 
22 we shall obtain (3.27) We recall that ρ evolves the same equation as ω θ , so we have (3.28) Finally, to claim similar estimate for J p (ρ, T ), first we write Under the additional condition on p, q 1 , q 2 (3.29) Plugging (3.29) in (3.27) and (3.28) for q = 4 3 , and by denoting we deduce, that Now, to cover all the rang p ∈ ( 4 3 , ∞), we proceed by the following bootstrap algorithm: • For q 1 = q 2 = 4 3 we obviously check that U p (T ) → 0 as T → 0 for all 1 < p < 3 2 . • Next, by taking q 1 = q 2 sufficiently close to 3 2 , we obtain the same result, for all p < 9 5 . • For q 1 = q 2 = 8 5 , the estimate in question holds for all p < 2. • Taking q 1 sufficiently close to 2, the result follows for all p < 3 2 q 2 and for all q 2 < 2. • Finally, we define the sequence p n by p 0 = 4 3 and p n sufficiently close to 3 2 p n−1 , by induction, we find that p n is sufficiently close to ( 3 2 ) n p 0 . Hence, letting n goes to ∞, we can cover all the rage p < ∞, and thus we obtain Our last task of this section is to reach the continuity of the solution stated in (1.10) and (1.11) of the main Theorem 1.1. For this aim, we briefly outline the continuity of ω θ , the rest of quantities can be treated along the same lines. So, we will show that To do so, let 0 < t 0 ≤ t < T ⋆ (t 0 close to 0 for p = 1), so we have (3.31) The first term (free part) is derived by the same manner as in (2.27), that is to say, (3.32) Concerning the second term in the r.h.s of (3.31), (2.21) in Proposition 2.5 provides By virtue of the following interpolation estimate, see, Proposition 2.3 in [20], we have for some which is sufficient to obtain Let us move to the last term of (3.31) which we distinguish two cases for p. For p ∈ (1, ∞), (2.29) stated in Proposition 2.9 for α = 0 and β = 1 p yielding (3.34) and the fact that ρ ∈ L ∞ (0, T * ); L p (R 3 ) ensures that For the case p = 1, we will work with Γ instead of ω θ to avoid the source term ∂ r ρ. The fact rρ L 1 (Ω) = ρ L 1 (R 3 ) leading to so, the continuity of ω θ (·) L p (Ω) relies then on the continuity of Γ(·) L p (Ω) and ρ(·) L 1 (R 3 ) . On the one hand, seen that the equation ofΓ governs the same equation to that of ω θ , but without the source term ∂ r ρ, hence we follow then the same appraoch as above to prove that On the other hand, ρ solve a transport-diffusion equation, for which the continuity property is well-known to hold, thus we skip the details. Therefore Combining the last estimate with (3.32), (3.33) and (3.35), we achieve the result. GLOBAL EXISTENCE To reach the global existence for the local solution often formulated in sections 3, we will establish some a priori estimates in Lebesgue spaces. For this target, be a solution of the integral formulation (3.3) and so does (ω θ , ρ) to the differential equation (3.1) associted to initial data (ω 0 , ρ 0 ) ∈ L 1 (Ω) × L 1 (R 3 ), where T * denotes the maximal time of existence. Our basic idea is to couple the system (3.1) by introducing the new unknown Γ = Π − ρ 2 following [29] with Π = ω θ r . Some familiar computations show that Γ obeys (4.1) For our analysis, we need to introduce again the unknown Γ rΓ = ω θ − ρ 2 , which solves ∂ t Γ + div ⋆ (v Γ) − (∆ − 1 r 2 ) Γ = 0 if (t, r, z) ∈ R + × Ω, Γ |t=0 = Γ 0 . (4. 2) The role of the new function Γ (resp. Γ) for the viscous Boussineq system (B µ,κ ) is the same that Π (resp. ω θ ) for the Navier-Stokes equations (NS µ ). For this aim, it is quite natural to treat carefully the properties of Γ and Γ. 
The starting point of our analysis says that Γ enjoys the strong maximum principle. We will prove the following. Generally if Γ 0 changes its sign, we procced as follows: we split Γ(t) = Γ + (t) − Γ − (t), where Γ ± solves the following linear equation with the same velocity ∂ t Γ ± + v · ∇Γ ± − (∆ + 2 r ∂ r )Γ ± = 0 if (t, x) ∈ R + × R 3 , Γ ± |t=0 = max(±Γ 0 , 0) ≥ 0. (4.6) Arguiging as above to obtain that Γ ± satisfies (4.4). Thus we have: . If Γ 0 = 0, we distinguish that Γ 0 > 0 or Γ 0 < 0. For this two cases the last inequality is strict and consequently (4.4) is also strict. Therefore, t → Γ(t) L 1 (R 3 ) is strictly decreasing for t = 0, and analogously we deduce that is strictly decreasing over [0, T ]. Now, we state a result which deals with the asymptotic behavior of the coupled function Γ in Lebegue spaces L p (R 3 ). Specifically, we have. Proposition 4.3. Let ρ 0 , ω 0 r ∈ L 1 (R 3 ), then for any smooth solution of (4.1) and 1 ≤ p ≤ ∞, we have Thanks to the well-known Nash's inequality in general case We prove (4.8) for p = 2 n with nonnegative integers n by induction. Assume that (4.8) is true for q = 2 k with k ≥ 0, and let p = 2 k+1 . Combined with (4.11) Thus we have Hence, integrating in time le last inequality yields After a few easy computations, we derive the following By setting C p = 3p 8C 3 2p C q , then (4.8) remains true for p = 2 k+1 . Let us observe that which means that C ∞ is independtly of p. Letting p → ∞, we deduce that For the other values of p, we proceed by complex interpolation to get , combined with (4.12), so the proof is completed. Next, we recall some a priori estimates for ρ−equation in Lebesgue spaces. To be precise, we have.
8,244.6
2020-02-01T00:00:00.000
[ "Mathematics" ]
Magnetite Alters the Metabolic Interaction between Methanogens and Sulfate-Reducing Bacteria It is known that the presence of sulfate decreases the methane yield in anaerobic digestion systems. Sulfate-reducing bacteria can convert sulfate to hydrogen sulfide, competing with methanogens for substrates such as H2 and acetate. The present work aims to elucidate the microbial interactions in biogas production and assess the effectiveness of electron-conductive materials in restoring methane production after exposure to high sulfate concentrations. The addition of magnetite led to a higher methane content in the biogas and a sharp decrease in the level of hydrogen sulfide, indicating its beneficial effects. Furthermore, the rate of volatile fatty acid consumption increased, especially for butyrate, propionate, and acetate. Genome-centric metagenomics was performed to explore the main microbial interactions. The interaction between methanogens and sulfate-reducing bacteria was found to be both competitive and cooperative, depending on the methanogenic class. Microbial species assigned to the Methanosarcina genus increased in relative abundance after magnetite addition, together with their butyrate-oxidizing syntrophic partners, in particular those belonging to the Syntrophomonas genus. Additionally, Ruminococcus sp. DTU98 and other species assigned to the Chloroflexi phylum were positively correlated with the presence of sulfate-reducing bacteria, suggesting DIET-based interactions. In conclusion, this study provides new insights into the application of magnetite to enhance anaerobic digestion performance by removing hydrogen sulfide and fostering DIET-based syntrophic microbial interactions, and it unravels the intricate interplay of competitive and cooperative interactions between methanogens and sulfate-reducing bacteria, influenced by the specific methanogenic group. INTRODUCTION Anaerobic digestion (AD) is a complex process involving diverse microbiota that degrade a heterogeneous mixture of compounds, producing methane (CH 4 ). In brief, the hydrolysis of complex organic matter leads to the formation of simpler molecules that undergo a primary fermentation to yield acetate, carbon dioxide (CO 2 ), hydrogen (H 2 ), and formate, which are then secondarily used to obtain CH 4 . 1 The fine balance standing at the root of methanogenesis can be disturbed either by inhibition, such as high ammonia or volatile fatty acid concentrations, or by competition, such as the one between methanogens and sulfate-reducing bacteria (SRB). 2,3 In anaerobic bioreactors, the presence of sulfur compounds such as protein, sulfate, thiosulfate, and sulfite leads to the formation of highly toxic and corrosive hydrogen sulfide (H 2 S), but there is still no definitive answer to the removal of this compound. 4 Sulfate (SO 4 2− ) is reduced to H 2 S by SRB, which consume the same substrates used by methanogens, including acetate, and electron donors, including formate and H 2 , thereby creating a competitive behavior and ultimately limiting biomethane production. 5 Methanogenesis based on indirect interspecies electron transfer (IIET), comprising interspecies hydrogen transfer and/or interspecies formate transfer, is a rate-limiting step, since the highly energetic electron donors, such as formate and H 2 , are produced exclusively when their concentration is kept low. Additionally, SRB outcompete methanogens since sulfate reduction is energetically more favorable compared to CO 2 reduction.
6An alternative to IIET-based methanogenesis is a syntrophic mechanism based on direct interspecies electron transfer (DIET) where exoelectrogenic bacteria can directly provide electrons to electrotrophic archaea. 7t has been previously assessed that in the methanogenic DIET-based process, electrically conductive pili (e-pili) associated with multiheme c-type cytochromes (MHCs) play a fundamental role. 8,9These membrane-bound conductive structures are extensively involved in the extracellular transfer of electrons at the biotic−abiotic interfaces both in bacteria and archaea. 8,10,11However, it was recently revealed that the identification of DIET-capable microorganisms is not restricted to the presence of e-pili and MHCs. 12In fact, specific electron conductive structures external to the microorganisms, known as conductive materials (CMs), have the potential to enhance the electroactive properties of anaerobic biofilms. 13,14Examples of such conductive materials are humic acids, metallic ion content in the membrane, and biofilm polymers.However, the molecular mechanism of electron transfer in these systems is not yet fully understood.Examples are provided by humic compounds that undergo multiple reduction and oxidation steps, thereby transferring electrons from one microbial cell to another, while magnetite can form wires connecting the cells and allowing the electron flow, without the requirement of redox reactions. 12,15A classification can be proposed for carbon-based CMs such as biochar, granular activated carbon (GAC), carbon nanotubes (CNTs), and non-carbon-based CMs such as magnetite (Fe 3 O 4 ) and stainless steel. 16As a non-carbon-based iron oxide, magnetite is a ferrimagnetic mineral usually found in the form of Fe 3 O 4 that improves electron transfer ability by replacing the OmcS proteins distributed in the e-pili. 17Additionally, the position of the iron atom in the magnetite molecule can provide conductive properties exploitable for electron transfer between syntrophs; this can result in an increased degradation rate of organic matter during AD. 18There is overwhelming evidence that magnetite addition in methanogenic DIET-based systems has multiple positive effects, including, for instance, an increase in the CH 4 production rate, a reduction of the lag phase, and more efficient removal of many toxic intermediate compounds (e.g., benzoate, phenanthrene). 19,20However, to our knowledge, the effect of magnetite on the competition between SRB and methanogens is still to be clearly elucidated as well as the impact of this compound on the H 2 S removal and the overall dynamics occurring in complex microbiota. 
The central aim of this work was to elucidate how the introduction of magnetite as a conductive material would promote DIET mechanisms, thereby influencing the intricate interactions between methanogens and SRB.Consequently, the addition of magnetite was expected to have significant effects on the competitive dynamics between methanogens and SRB, as well as on the removal of H 2 S, ultimately impacting the overall composition and behavior of the microbiota during anaerobic digestion processes.For disclosing these interactions, two continuously stirred tank reactors (CSTRs) were monitored during long-term operation.At steady-state conditions, sulfate (SO 4 2− ) shocks were applied to stress the methanogenic population and thereby decrease the methane yield.Subsequently, magnetite was added as a conductive material to potentially promote DIET.Time series metagenomic analyses were applied to appreciate the microbiome evolution.A better comprehension of the interactions between SRB and methanogens is provided in the context of AD and the effect of magnetite on the microbial community is revealed. Inoculum, Feedstock, and Experimental Setup. The inoculum used in this study was obtained from the Hashøj Biogas plant (Denmark), stocked in 5 L plastic tanks, and rapidly transferred to the laboratory.It was sieved with a 5 mm sieve to remove large fibers and avoid blocking of tubes and then sparged with gaseous N 2 for 30 min to ensure an anaerobic environment.Source-separated organic waste was collected in the form of municipal biopulp from HCS A/S Transport & Spedition (Glostrup, Denmark) and used as feedstock.Biopulp was sieved with a 5 mm diameter mesh, diluted with distilled water to a final concentration of 56 gVS/ L, then stored at −20 °C, and thawed at 4 °C before use.The characteristics of the inoculum and the feedstock are reported in Table S1 (Data S1). 
Two identical lab-scale CSTRs (R-ctrl and R-mag) of 1.8 L working volume (2.3 L total volume) were set up. Both CSTRs consisted of the reactor vessel equipped with a magnetic stirrer, an influent bottle with a stirrer to ensure substrate homogeneity, a peristaltic pump for feeding, an effluent bottle, an electrical heating jacket, and a water-displacement gas meter. The reactors were operated under mesophilic (37 ± 1 °C) conditions to maintain the same working temperature as that of the biogas plant. The hydraulic retention time (HRT) was set at 23 days by a daily supply of 70 mL of diluted biopulp, leading to a constant organic loading rate (OLR) of 2.30 gVS/L-reactor·day. The experimental period was divided into four phases, and the reactors were maintained under the same conditions in the first three phases, while in the last phase magnetite was added only in one of the two CSTRs: P1 (days 0−44); P2 (days 45−68); P3 (days 69−109); and P4 (days 110−197). The reactors were running at a steady state (less than 10% methane production variation for at least ten consecutive days) throughout phase P1. At the beginning of P2, Na 2 SO 4 (sodium sulfate suitable for HPLC, LiChropur, 99.0−101.0%, Sigma-Aldrich) was added as a single pulse to reach 0.6 g SO 4 2− /L in both CSTRs. Subsequently, the CSTRs were fed with Na 2 SO 4 -rich biopulp to maintain the sulfate content at the desired value. The second shock occurred at P3, when the Na 2 SO 4 level was increased to 1.2 g of SO 4 2− /L in both CSTRs and feedstocks. In P4, the content of SO 4 2− was kept constant and magnetite (Iron(II, III) oxide powder, <5 μm, 95%, Sigma-Aldrich) was added progressively (day 110 to 3.3 g/L, day 115 to 6.6 g/L, and day 120 to 10 g/L) only in R-mag for 12 days to reach a final concentration of 10 g/L. During this period, both reactors were manually fed from day 110 to day 122, and on days 110, 112, 114, 120, and 122, 3.6 g of magnetite were added in R-mag. In order to add SO 4 2− and magnetite, the feeding through the pumps was stopped, and the reactors were manually fed using a syringe containing diluted biopulp mixed with the compound of interest. The manual feeding was performed with an 8 h gap within the same day and a 16 h gap between one day and the next. The use of the syringe allowed anaerobic conditions to be maintained while manually feeding. 2.2. DNA Extraction, Shotgun Sequencing, and Genome-Centric Metagenomics. Samples of 15 mL were collected at the end of each phase for DNA extraction, named R-mag P1, R-ctrl P1, R-mag P3, R-ctrl P3, R-mag P4, and R-ctrl P4. Genomic DNA was isolated and purified using the DNeasy PowerSoil kit (QIAGEN GmbH, Hilden, Germany) following the manufacturer's protocols with minor modifications. 21 Genomic DNA quality and quantity were determined using a Multiskan Sky Microplate Spectrophotometer (operated with Thermo Scientific SkanIt Software 5.0, Thermo Fisher Scientific) and a Qubit fluorometer (Life Technologies, Carlsbad, CA). Illumina libraries were prepared with the Nextera DNA Flex Library Prep kit (Illumina, Inc., San Diego, CA) and sequenced on the Illumina NovaSeq platform, producing 150 bp paired-end reads, at the NGS sequencing facility of the Biology Department (University of Padova, Italy). 2.3. SEM Analysis and Qualitative Sulfur Detection. The wet catalytic oxidation reaction between magnetite and dissolved H 2 S was assessed by performing an independent experiment in sterile conditions.
35Basal anaerobic (BA) medium was prepared according to Angelidaki and Sanders, 36 50 mL was added to a 200 mL bottle and purged with N 2 for 15 min before autoclaving.Magnetite and H 2 S were added to the bottles at two different S/Fe ratios: 1:9 to simulate the reactor's state (R-mag, c[SO 4 2− ] = 1.2 g/L, c[Fe 3 O 4 ] = 10 g/ L) and 1:18 to test the excess concentration of magnetite.In addition, one control bottle without magnetite was set up to assess the dissolution of H 2 S in the liquid phase.The volume of H 2 S added per bottle was kept constant (160 mL, at standard temperature and pressure), and the amount of magnetite was changed to examine the different conditions.After the gas addition, the pressure inside the bottles was around 1.6 bar (Table 1).Pressure was monitored in triplicate at different time intervals over 24 h (immediately after H 2 S addition, after 2 and 24 h) using HD2124.2manometer (Delta OHM).At the end of the experiment, magnetite was collected from the bottle, placed on support for SEM analysis, and dried in a desiccator.SEM images were taken at the DTU Nanolab (National Centre for Nano Fabrication and Characterization) with an FEI Quanta FEG 200 microscope at 10 kV with a spot size of 3.5 and a working distance of about 10 mm.EDX analysis had the same parameters as the SEM analysis and was detected by Oxford Instruments 80 mm2 X-Max Silicon drift detector, MnKα resolution at 124 eV to evaluate the presence of sulfur compounds associated with the magnetite. Analytical Methods. The American Public Health Association (APHA) standard methods were followed to measure total solids (TS), volatile solids (VS), and volatile suspended solids (VSS). 37Biogas composition was analyzed with a gas chromatograph (GC-TRACE 1310, Thermo Fisher Scientific) equipped with a thermal conductivity detector (TCD) and Thermo (P/N 26004−6030) Column (30 m length, 0.320 mm inner diameter, and film thickness 10 μm) with helium as carrier gas (detection limit 0.0001%).TVFA (propanol, butanol, hexanol, acetate, propionate, isobutyrate, butyrate, isovalerate, valerate, hexanoic acid) concentrations were measured using an Agilent 7890A gas chromatograph (Agilent Technologies, US) equipped with a flame ionization detector (FID) and SGE capillary column (30 m length, 0.53 mm inner diameter, film thickness 1.00 μm) with helium as the carrier gas.The injection volume was 1 μL, and every sample was analyzed in duplicate to ensure the precision and reproducibility of the measures (quantification limit, 1 mg/ L).The injector temperature was kept at 150 °C and the detector temperature was at 220 °C.The initial temperature of the column oven was held at 45 °C for 3.5 min, increased to 210 °C at a ramping rate of 15 °C/min, and then held for 4 min at 210 °C.The concentration (obtained in mg/L) of each detected VFA was summed up to obtain the TVFA concentration.pH trend was monitored using FiveEasy Plus Benchtop FP20 (Mettler Toledo, CH).H 2 S accumulation in the headspace of the reactors was measured with Geotech BIOGAS 5000 portable gas monitor (detection limit 1 ppm, QED Environmental Systems Ltd., U.K.).In particular, 100 mL of biogas was collected from the headspace and diluted with 700 mL of N 2 into a gasbag, and the mixed gases were passed into the instrument for measurement. 
2.5.Statistical Analysis.Descriptive statistics were conducted for all variables and mean values, and standard deviations were calculated.Wilcoxon signed-rank test has been performed to compare the values between the two reactors, and the Mann−Whitney U test was performed to compare the data of each reactor in the different phases.MAGs were classified into subgroups according to their 2-fold change within and between the different samples to select the groups of MAGs to be compared (Data S4 and S5): Pathways and single enzymes enriched in selected groups of MAGs were identified by performing comparisons with 2− concentration boosted SRB metabolism, resulting in more disturbed conditions for the methanogens and increased process instability.Indeed, SRB consume the same substrates as methanogens and a potential enrichment in their activity can hamper the methanogenesis. 39evertheless, the average percentage of CH 4 in the biogas did not experience major fluctuations, maintaining average values around 62% ± 1% in both reactors.After the Na 2 SO 4 addition, H 2 S started increasing from 500 ± 200 ppm to an average of 5000 ± 200 ppm (Figure 1c).The sudden rise suggested that SRB were substantially favored in their metabolic activity and potentially in their relative abundance, in accordance with findings reported in the literature. 40The pH was stable throughout the whole experiment, with values of 7.5 ± 0.1 and 7.4 ± 0.1 for R-mag and R-ctrl, respectively (Figure 1d). The total VFA (TVFA) present in the inoculum was initially rapidly consumed, with a subsequent gradual depletion observed during phase P1 (Figure 1b).In P2−P3, immediately after the shocks, an initial rise in TVFA concentration was noticed, in particular regarding the acetate (Figure 1, heatmap).This result can be attributed to a primary shock to the methanogens, who lately adapted to the new condition, as supported by the decreased TVFA of P3.Moreover, despite the identical conditions in which the two reactors were running, both propionate and acetate appeared to accumulate more in R-ctrl compared to R-mag (p = 0.02, Figure 1, heatmap).These variations may arise due to improper mixing during the first inoculum addition or stochastic events, leading to a different evolution of the microbial communities inside the two reactors.It is important to highlight that, while CH 4 yield was decreasing throughout P2 and P3, TVFA were not accumulating, except in the initial periods of both phases.This evidence can be associated with the improved metabolism of SRB that was favored by the presence of SO 4 2− and was consuming TVFA. 
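The reactor comparisons reported here and in the next subsection rely on the tests named in Section 2.5: a Wilcoxon signed-rank test between the two reactors and a Mann−Whitney U test between phases of the same reactor. Purely as an illustration of those two scipy.stats calls — the arrays below are placeholders, not measurements from this study — such a comparison could be sketched as follows.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

# Placeholder daily methane yields (mL CH4/gVS); NOT data from the study.
r_mag_p4 = np.array([330, 328, 335, 341, 329, 333, 338])
r_ctrl_p4 = np.array([295, 301, 289, 298, 294, 300, 296])
r_mag_p3 = np.array([312, 308, 315, 310, 305, 318, 309])

# Paired comparison between the two reactors on matched days.
w_stat, w_p = wilcoxon(r_mag_p4, r_ctrl_p4)

# Unpaired comparison of one reactor across two phases.
u_stat, u_p = mannwhitneyu(r_mag_p4, r_mag_p3, alternative="two-sided")

print(f"Wilcoxon signed-rank: W={w_stat:.1f}, p={w_p:.3g}")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.3g}")
```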
Magnetite Addition Allowed the Recovery of Methane Yield and Resulted in H2S Removal. Magnetite supplementation in R-mag during P4 led to a significant recovery (10 ± 1%, p = 0.049) in the methane yield (332 ± 19 mL CH4/gVS) compared with P3, and the yield was significantly higher (p = 1.6 × 10−11) than in R-ctrl (277 ± 16 mL CH4/gVS) throughout the period (Figure 1a). Once magnetite was added in R-mag, the average CH4 content increased to 65 ± 1%, reaching 75% as the maximum level.42−44 In parallel, the methane yield also increased in R-ctrl; however, the average along P4 (295 ± 16 mL CH4/gVS) was 6% lower (p = 5.6 × 10−6) than the average value of the previous phase (314 ± 34 mL CH4/gVS). This evidence might be explained by a potential adaptation of the methanogens to the increased SO4 2− concentration after the destabilization period that occurred during P3. During P4, the TVFA content sharply decreased, by 90% in R-mag and by 86% in R-ctrl (Figure 1b). The drop might be associated with the manual feeding, which was less precise than the calibrated pumps. Nevertheless, the TVFA had already shown a declining trend from day 97 until the end of P3, as commonly happens in long-term operating reactors where the initial TVFA accumulated in the inoculum are consumed over time.45,46 In R-mag, TVFA decreased from 826 ± 6 to 161 ± 10 mg/L between the beginning and the end of phase P4 (Figure 1b). Specifically, acetate and propionate showed the largest decreases (Figure 1b). In R-ctrl, TVFA dropped to a lesser degree from the beginning to the end of phase P4 (Figure 1b). The TVFA decrease was more pronounced in R-mag than in R-ctrl (p = 0.00015). The results indicate that magnetite addition exacerbated the difference observed during the SO4 2− shock and enhanced the overall microbial metabolism, in particular the acetogenic activity from propionate and butyrate, and the methanogenesis.47
H2S measurements after magnetite addition evidenced a massive decrease of H2S levels in R-mag, which reached 16 ± 8 ppm in less than 7 days, while in R-ctrl the levels remained the same (5000 ± 200 ppm) as in the previous phase. Considering that the microbiological insights could not explain the changes in the H2S profile, the precipitation of H2S in the form of zerovalent sulfur or sulfur compounds due to the presence of magnetite was evaluated.48−50
An independent test was performed to assess wet catalytic oxidation (Table 1). The expected value for H2S dissolution in water at temperatures between 30 and 40 °C and a pressure of 2.0 bar is reported to be between 2.0 and 3.0 g H2S/kg H2O.51
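A rough ideal-gas check of what this solubility range implies for the batch bottles is sketched below; the 150 mL headspace, 35 °C temperature, and treatment of the medium as pure water are our assumptions, so the numbers are indicative only.

```python
R = 0.08314          # L*bar/(mol*K)
T = 308.15           # K, ~35 degC (assumed)
V_headspace = 0.150  # L: 200 mL bottle minus 50 mL medium (assumed)
water_kg = 0.050     # kg of medium, treated as pure water
M_H2S = 34.08        # g/mol

# Moles of H2S added: 160 mL at standard temperature and pressure
n_added = 0.160 / 22.414  # ~7.1e-3 mol

for solubility in (2.0, 3.0):  # g H2S per kg water (literature range above)
    n_dissolved = min(solubility * water_kg / M_H2S, n_added)
    dP = n_dissolved * R * T / V_headspace  # bar of overpressure relieved
    print(f"{solubility} g/kg water -> ~{dP:.2f} bar relieved by dissolution")

# ~0.5-0.75 bar: dissolution alone cannot explain much larger pressure drops,
# so further decreases point to precipitation on the magnetite.
```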
Given the parameters of the experiment, the expected dissolution should have reduced the overpressure from 1 to 0.5 bar. After 2 h from the H2S addition, the pressure had decreased both in the batch containing a S/Fe ratio of 1:9 and in the one with a S/Fe ratio of 1:18, reaching values considerably lower than the control and the calculated values (Table 1). The results demonstrated that H2S precipitated when magnetite was present in the batches.
To further confirm this hypothesis, EDX analysis was performed on the magnetite extracted from the batch bottle with a S/Fe ratio of 1:9. Sulfur was detected in the analyzed sample, mixed with the magnetite (Figure 2). It is worth mentioning that EDX analysis cannot discriminate among different sulfur compounds. Therefore, it was not possible to distinguish whether the detected sulfur was in the form of zerovalent sulfur or FeS2 complexes. The precipitation process requires O2 in order to reoxidize Fe(II) to Fe(III).52 In the CSTRs operating under anaerobic conditions, the continuous flow replaced reduced magnetite without requiring further oxidation. Moreover, reductive metabolic reactions requiring electrons (e.g., CH4 and H2S production) may have supported magnetite reoxidation, as hypothesized by previous research in which ferrous iron was suggested as an alternative electron donor in redox reactions occurring through long-distance electron transport.53
Microbial Community Composition and Dynamics: Unraveling the Effect of Magnetite. A genome-centric approach applied to the shotgun reads of the six samples allowed the reconstruction of 145 metagenome-assembled genomes (MAGs), with an average read alignment of 80% across all samples. The MAGs recovered in the present study were compared with those previously reported in a comprehensive MAG database in which a range of samples have been analyzed using a binning approach (http://microbial-genomes.org/).94 After clustering the MAGs at 95% average nucleotide identity, only 70 of the 145 MAGs identified in the present study represented species already deposited in the "global AD database", confirming that more than half of the species presented here were entirely new. After filtration according to the MIMAG standards,54 the remaining 108 MAGs had an average read alignment of 70% and were selected for further analysis, including relative abundance in the samples and variation in terms of fold change after magnetite addition in R-mag. The taxonomic investigation allowed the assignment of 12 MAGs at the species level and 26 MAGs at the genus level (Data S2), confirming the presence of a high fraction of uncharacterized species.
Influence of SO4 2− and Magnetite on the Most Abundant Taxa. Taxonomic analysis showed that 19 and 50 of the 108 filtered MAGs were assigned to the Bacteroidetes and Firmicutes phyla, respectively (Data S3). Overall, the relative abundance of the phylum Firmicutes was not particularly affected under the different conditions. Nonetheless, at the family level, the presence of magnetite exerted a substantial influence on the relative abundance of Syntrophomonadaceae, which increased from 2.3% in P3 to 6.7% in R-mag during P4, while it remained stable in R-ctrl throughout the entire experiment, ranging from 1.9 to 2.7% (Data S3). This evidence suggested that members of the Syntrophomonadaceae family might be involved in DIET, as previously hypothesized,55 even though the mechanism is still to be clarified. The involvement of some Firmicutes species in polysaccharide degradation is a well-known property in many natural and engineered environments.56,57 The high relative abundance of Firmicutes can be attributed to their capability to degrade the polysaccharides and oligosaccharides that constitute a considerable fraction of the biopulp.56,57 The plasticity of the AD microbiome is particularly evident in the top layers of the food chain (e.g., in the hydrolytic step), and this is determined by the high functional redundancy.58,59 Accordingly, it is highly possible that the species reported in the cited literature are different from those identified in the present study but are still involved in the same functional process. The functional annotations showed the presence of genes involved in carbohydrate metabolism (Data S2). In particular, Firmicutes sp. DTU61 and Firmicutes sp. DTU70 carried the gene encoding β-fructofuranosidase (EC 3.2.1.26), which catalyzes the formation of D-fructose and D-glucose 6-phosphate from sucrose 6-phosphate. In addition, Firmicutes sp. DTU61 and Firmicutes sp. DTU100 showed the presence of the gene encoding the 1,4-α-glucan branching enzyme (EC 2.4.1.18), which is involved in the degradation of starch. This evidence confirms the involvement of Firmicutes spp. in the hydrolysis phase, specifically in carbohydrate degradation. Bacteroidetes were markedly affected by the SO4 2− addition, since their relative abundance decreased from 15−20% to 10% in both reactors (Data S3), and then increased to 20% in phase P4 both in R-ctrl and in R-mag. These results suggest that Bacteroidetes adapted to the new condition and that magnetite addition seemed to have no specific effect on them (Figure 3). The observed trend is in line with previous studies, which found that a lower TVFA concentration favors the establishment of Bacteroidetes during mesophilic AD.60,61
The Candidatus Cloacimonetes phylum, represented by only two MAGs, was highly abundant in the initial inoculum, reached almost 20% of the microbial community after the SO4 2− shock (Data S3), and then extensively declined during P4 in both reactors. According to the literature, this phylum appears to be active mainly during the initial step of cellulose hydrolysis or in the primary fermentation of hydrolysis products.62 Metagenomic analysis revealed the presence of two of the three genes of the propionyl-CoA metabolism (KEGG module M00741) in Candidatus Cloacimonetes sp. DTU2, suggesting its potential involvement in propionate oxidation. This evidence is supported by the literature identifying the presence of the oxidative propionate degradation pathway in the Candidatus Cloacimonetes genome.63
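The phylum- and family-level shifts described in this section reduce to summing per-MAG relative abundances by taxon and comparing phases. The pandas sketch below illustrates that bookkeeping on an invented table; the column names and values are placeholders, not the contents of Data S3.

```python
import pandas as pd

# Invented per-MAG relative abundances (%); the real values are in Data S3.
mags = pd.DataFrame({
    "taxon":    ["Firmicutes", "Firmicutes", "Bacteroidetes", "Spirochaetes"],
    "R_mag_P3": [1.5, 0.8, 10.0, 6.0],
    "R_mag_P4": [4.0, 2.7, 20.0, 6.2],
})

by_taxon = mags.groupby("taxon")[["R_mag_P3", "R_mag_P4"]].sum()
by_taxon["fold_change_P4_vs_P3"] = by_taxon["R_mag_P4"] / by_taxon["R_mag_P3"]
print(by_taxon)
```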
Spirochaetes and Synergistetes phyla were represented by seven and six MAGs, respectively (Data S3). In R-ctrl, the H2S-rich environment promoted Spirochaetes growth, with their relative abundance increasing from 3.7% in P1 to 8.1% in P4 (Data S3). A synergistic model between Spirochaetes and SRB has previously been proposed, and Spirochaetes spp. have been described as sulfur-oxidizing bacteria capable of detoxifying H2S-rich environments using oxygen or nitrate as electron acceptors.64 Nevertheless, the results showed that the removal of H2S in the reactors was complete and fast, indicating that Spirochaetes spp. alone could not remove the large amount of H2S present in the CSTRs. Nitrate can be present in traces in AD systems, and the low amounts could explain the slow process related to H2S oxidation using nitrate.65 These findings are in accordance with a previous study that highlighted the limited and slow removal of H2S by Spirochaetes spp.64 Accordingly, magnetite played the major role in the oxidation of H2S. In addition, members of the Spirochaetes phylum can degrade polysaccharides, providing SRB with fermentation products such as acetate.66 Synergistetes increased in R-mag from 0.9 to 4%, while this trend was not observed in R-ctrl, where their relative abundance remained stable at around 2.5% during the entire process. Members of the Synergistetes are commonly described as amino acid-fermenting bacteria, and some can act as syntrophic acetate oxidizers.67−69 Both activities can provide substrates to methanogens, including, for example, H2, CO2, and acetate, suggesting that magnetite addition promoted the syntrophic behavior of Synergistetes.
Metabolic Pathways Enriched under SO4 2− Shock and Magnetite Supplementation. An unbiased analysis was performed to identify statistically enriched metabolic pathways in MAGs having higher relative abundance in selected samples (R-mag in P4 vs R-ctrl in P4). When comparing the MAGs enriched by more than 2-fold in R-mag (group 1) with those depleted (fold change <2, group 2), a variety of metabolic pathways were found to be enriched (Data S4). In particular, the degradation and biosynthesis of carbohydrates and amino acids were more frequent in group 1 (p-values ranging from 0.02 to 0.04). A similar trend was found for genes involved in the metabolism of cofactors and vitamins (0.01 < p < 0.05, Data S4). These findings indicate that magnetite promoted the activity of taxa operating during hydrolysis and fermentation and, in general, improved the overall microbial community growth and metabolism. Confirming what was observed during CSTR monitoring, genes associated with methane production were enriched in group 1, underlining that magnetite enhanced archaeal growth (Data S4). Besides metabolic pathways, attention should also be paid to transport systems. The results highlighted that the twin-arginine translocation gene, tatA, was enriched in group 1 (p = 0.028), as were different solute transport systems (p ranging from 0.01 to 0.04, Data S4).
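The gene-level comparisons behind these p-values are handled internally by enrichM; conceptually, the copy number of each gene (or module) is compared between the two MAG groups with a Mann−Whitney U test, roughly as in the sketch below (invented counts, not the content of Data S4).

```python
from scipy.stats import mannwhitneyu

# Copy numbers of a single gene (e.g., tatA) across the MAGs of each group;
# invented values -- the real per-MAG annotations correspond to Data S4/S5.
group1 = [1, 2, 1, 1, 2, 1, 0, 1]  # MAGs enriched >2-fold in R-mag
group2 = [0, 0, 1, 0, 0, 1, 0, 0]  # MAGs depleted (fold change <2)

u_stat, p_value = mannwhitneyu(group1, group2, alternative="greater")
print(f"U = {u_stat}, p = {p_value:.3f}  # small p: gene enriched in group 1")
```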
Prior research established that tat genes encode a protein transport system present in many bacterial species and involved in the biogenesis of bacterial electron transfer chains.70 Bacterial electron transfer chains interact directly with solute transport systems, coordinating energy production and solute transport.71,72 For instance, the respiratory chain proton gradient couples to other transport systems for nutrient uptake and ion transport. Furthermore, certain bacteria possess electron transfer chains that are coupled to the transport of metal ions or other specific substrates, enabling them to thrive in specific environments or under particular nutritional conditions.71,72 It can be hypothesized that magnetite stimulated the growth of Tat-encoding species, and that this resulted in an enhancement of the solute transport systems, allowing the internalization of compounds that are further metabolized in catabolic and anabolic pathways. Nevertheless, it should be taken into account that this analysis is based on metagenomic data, and a more comprehensive metabolic analysis should be carried out as further confirmation. When comparing MAGs with fold change >2 in R-mag and MAGs with fold change >2 in R-ctrl, the results showed that genes within the nucleotide, carbohydrate, methane, and cofactor and vitamin metabolisms (0.02 < p < 0.05, Data S5), as well as transport systems (0.01 < p < 0.05, Data S5), were enriched in the group belonging to R-mag, underlining once more the positive effect of magnetite on the AD process (Figure 4).
3.4. Microbial Dynamics of SRB and Methanogens: Syntrophism and Competition. The archaeal community was represented by five MAGs, of which two belonged to the Methanosarcina genus (Methanosarcina mazei DTU56 and Methanosarcina sp. DTU101) and three belonged to the Methanoculleus genus (Methanoculleus bourgensis DTU16, Candidatus Methanoculleus thermohydrogenotrophicum DTU57, and Methanoculleus sp. DTU121). It is well documented in the literature that members of the genus Methanosarcina (e.g., Methanosarcina barkeri) can perform DIET by cooperating with syntrophic partners.73
H2S measurements after the SO4 2− addition suggested a strong activity of SRB. Nonetheless, among all of the MAGs that presented at least one gene associated with sulfate reduction pathways, only one could be clearly defined as an SRB. Peptococcaceae sp. DTU26 had two of the three genes of the dissimilatory sulfate reduction pathway. This finding is in line with prior research identifying SRB within this taxon.55,74 Both methanogens and SRB can establish syntrophic interactions potentially based on DIET with syntrophic acetate-oxidizing bacteria (SAOB), syntrophic propionate-oxidizing bacteria (SPOB), and syntrophic butyrate-oxidizing bacteria (SBOB).75 Among the retrieved MAGs, 13 were identified as SBOB, eight were defined as SAOB, and only one was detected as SPOB (Data S2).
3.4.1. Methanogens and SRB: From Cooperation to Competition. Analysis of the biochemical and microbiological parameters suggested a potential competition between SRB and methanogens. Since both groups can consume acetate, H2, and CO2 for their metabolism, and sulfate reduction is thermodynamically favored over methanogenesis, SRB can outcompete methanogens.2,5 Magnetite addition could help electrotrophic archaea to thrive and restore their metabolic activity and relative abundance.19,42 The results indicated that both Methanosarcina spp. were clearly negatively affected by the addition of SO4 2−. After the second SO4 2− shock (P3), M. mazei DTU56 and Methanosarcina sp. DTU101 experienced a decrease in relative abundance ranging from 4- to 5.5-fold in both reactors. When magnetite was added in R-mag (P4), both increased in abundance with fold changes of 3 and 9, respectively, while they decreased by 5.5-fold in R-ctrl. Genes for cytochrome c biosynthesis (ccmA-C, ccmE, and ccmF; Data S2) were found in M. mazei DTU56, suggesting its potential involvement in DIET. On the contrary, M. bourgensis DTU16 and C. Methanoculleus thermohydrogenotrophicum DTU57 were favored by the presence of SO4 2−, as confirmed by their relative abundance during P3 and P4. These findings revealed that SRB can have opposite interactions with acetoclastic and hydrogenotrophic methanogens (competitive and cooperative, respectively; Figure 5). A possible explanation is that SRB can use the Wood−Ljungdahl pathway in reverse to completely oxidize acetate to H2 and CO2,76 and they can also obtain electrons to produce hydrogen by increasing hydrogenase expression.77 The ability of SRB to produce H2 and CO2 might have helped methanogens that rely on hydrogenotrophic methanogenesis (e.g., Methanoculleus spp., Figure 5). Concurrently, acetate consumption by SRB may have hindered acetoclastic methanogens competing for the same substrate. Magnetite addition favored Methanosarcina spp. able to perform DIET, providing them the opportunity to compete more efficiently with SRB. Thereby, in the context of a competitive-cooperative relationship between SRB and the different methanogens, the observed fold change trends of Methanosarcina spp. and Methanoculleus spp. under the different conditions in P3 and P4 can be explained.
With reference to potential SRB, both Prevotella sp. DTU28 and Peptococcaceae sp. DTU26 increased after magnetite addition in R-mag, with fold changes of 7.6 and 5.3, respectively, while in R-ctrl only Prevotella sp. DTU28 increased, with a fold change of 2.3. In addition, genes for cytochrome c biosynthesis (ccmA, ccmE, and ccmF) were identified in Peptococcaceae sp. DTU26. SRB participation in DIET has already been reported by a previous study, which evidenced the ability of SRB to directly accept electrons through a c-type cytochrome in the cell's outer membrane for sulfate reduction (Table 2).77
3.4.2. DIET-Based Interactions between SRB/Methanogens and Syntrophic Partners. Syntrophic acetate-oxidizing bacteria (SAOB, e.g., Syntrophaceticus schinkii DTU119 (Lee et al., 2016)) and syntrophic butyrate-oxidizing bacteria (SBOB, mainly represented by the genus Syntrophomonas)81 showed an increased relative abundance in R-mag after magnetite addition (Data S2). While previous studies reported a rise in the relative abundance of SAOB and SBOB after magnetite addition, the mechanism behind the stimulatory effect is still unclear.82,83 In general, the presence of e-pili and cytochromes is often associated with the DIET mechanism.8,10 Indeed, the outcome of the metagenomic analysis evidenced that 12 MAGs, including, for example, M.
mazei DTU56 and Peptococcaceae sp.DTU26, encoded the genes for cytochrome c biosynthesis, advising their potential involvement in DIET (Data S2).Magnetite can help the syntrophic interactions, establishing an electrical network that mediates long-range extracellular electron transfer. 84−,87 Under these circumstances, electron shuttles are required to transfer electrons between the inner and outer membranes. Other MAGs seemed to be specifically associated with the presence of SRB.Particularly, Ruminococcus sp.DTU98, usually involved in cellulose fermentation to generate end products such as lactate and succinate, increased in both reactors after the SO 4 2− shocks. 88,89This evidence suggested a potential syntrophic relationship between Ruminococcus sp.DTU98, producing lactate, and SRB, consuming lactate to produce H 2 S (Figure 5).In addition, MAGs assigned to the phylum Chloroflexi such as Chloroflexi sp.DTU120 and Chloroflexi sp.DTU130 were enriched after the SO 4 2− shocks, and after the magnetite supplementation in R-mag, they increased by 3.1-and 1.5-fold, respectively.Moreover, Chloroflexi sp.DTU130 encodes genes for cytochrome c biosynthesis (ccmA-C, ccmE, and ccmF), suggesting its potential involvement as an electrogenic syntrophic partner in DIET, as previously reported. 90,91This interaction could be attributed to the fact that some genera belonging to the Chloroflexi phylum (e.g., Anaerolinea) produce lactic acid, which has been defined as a suitable electron donor for SRB (Figure 5). 92,93As previously discussed, members belonging to the Spirochaetes phylum seem to have a synergistic interaction with SRB.The results of the metagenomic analysis showed that Spirochaetales sp.DTU27 encoded one of the genes for the cytochrome c biosynthesis, suggesting that their syntrophic behavior might be based on DIET. CONCLUSIONS The current study investigated the effect of magnetite on AD of municipal biopulp.Magnetite addition resulted in an increased CH 4 yield by 10%.Moreover, it allowed sulfur precipitation, reducing the H 2 S headspace content from ∼5000 to <20 ppm.Overall, magnetite supplementation benefited different microbial interactions with methanogens and SRB, involving diverse metabolic pathways and electron shuttles.As a consequence, AD DIET-based processes turn out to be an alternative strategy to promote the degradation of SO 4 2− -rich organic residues.As confirmed by the increased VFA consumption, magnetite addition constitutes a novel strategy to alleviate the impact of OLR fluctuations, which could significantly stress the methanogenic microbiome.Indeed, SRB, SAOB, and SBOB were found to be enriched when magnetite was added, highlighting their main role in promoting TVFA degradation.Finally, the relationships between SRB and archaea are found to be more complex than a mere competition for the same substrate; specifically, SRB can have either cooperative or competitive interactions with hydrogenotrophic and acetoclastic methanogens, respectively. 
Characterization of inoculum and biopulp (Table S1) and Phylogenetic representation of the identified MAGs (Figure S1) (PDF) Taxonomic and functional analyses performed on the MAGs identified using the genome-centric metagenomic approach (XLSX) Taxonomic conversion from GTDBTk database to NCBI database and additional statistical analysis performed on the taxonomic groups (XLSX) Results of the statistical analysis performed to identify enriched metabolisms in the MAGs with fold change >2 vs MAGs with fold change <2 in R-mag after magnetite addition (XLSX) Results of the statistical analysis performed to identify enriched metabolisms in the MAGs with fold change >2 R-mag vs R-ctrl after magnetite addition vs R-ctrl after magnetite addition (XLSX) Figure 1 . Figure 1.Overall CSTRs performance during the four phases (P1−P4).On the left part of the panel, four plots describe: (a) methane yield measurements, (b) TVFA trend, (c) H 2 S concentration after the first shock with 0.6 g/L SO 4 2− , and (d) pH drift.The (*) symbol marks the first SO 4 2− shock at operational day 49, (**) the second shock to a final SO 4 2− concentration of 1.2 g/L at day 66, and (***) points out the magnetite addition to a final concentration of 10 g/L, continuously applied for 12 days, from day 110 to 122.Data during the days of SO 42− (day 44 and day 69) and magnetite (days 110−122) additions are not shown.Data are masked (gray dotted lines) to avoid discrepancies that might have arisen due to the different feeding procedure (e.g., manual compared to the automatic feeding through the pumps).On the right part, a heatmap representing the variation of the four main VFA identified: Ac (acetate), Pr (propionate), Ib (Isobutyrate), Br (butyrate).On the far right, squared brackets divide the heatmap into the four phases analyzed. Figure 2 . Figure 2. Scanning electron microscopy (SEM) of the magnetite particles.Analysis was performed to evaluate the presence of precipitated sulfur.From left to right: secondary electron detector (SED), backscattered electron detector (BSED), and scanning electron microscopy−energydispersive X-ray spectroscopy (SEM-EDX) images.The EDX analysis highlights the presence of sulfur (highlighted with pink color) precipitated with the magnetite (Iron�Fe, highlighted with green color; oxygen�O, highlighted with light blue color). Figure 3 . Figure 3. 
Heatmap representing MAGs with relative abundance higher than 0.5% in at least one period, with completeness >70%. The relative abundance of the MAGs in the six samples (R-mag P1 and R-ctrl P1 are the samples taken before the SO4 2− shocks, R-mag P3 and R-ctrl P3 are the samples collected after the SO4 2− shocks, and R-mag P4 and R-ctrl P4 are the samples collected after the magnetite addition in R-mag) is reported on the left part of the figure. Relative abundance values range from 0% (black) to 5% (red), and higher values are saturated (the color scale is in the top part). The right part of the figure reports the MAG fold change in R-mag after magnetite addition (R-mag P4 vs R-mag P3); values lower than zero (orange) are on the left of the black line and values higher than zero (dark green) are on the right of the black line. Values range from −10 to 10. It must be considered that the metagenomic investigation method has limitations when it comes to providing precise absolute abundance values. As a result, the trends observed between sampling points may be influenced by the underestimated differences in the overall size of the microbial community. Phylogenetic information of the MAGs is reported in Figure S1, Supporting Data S1.
Figure 4. Graphical representation of the statistical analysis performed with enrichM. Two groups were compared: MAGs with fold change >2 in R-mag and in R-ctrl during P4. A subsection of the metabolic pathways enriched in R-mag is reported in the rows. Completeness and contamination are reported next to the assigned MAGs on the left part by the green and gray color scales, respectively. From top to bottom are reported: general metabolic pathway (Cm is the acronym label for carbohydrate metabolism), label identifying the specific enriched module, and genes belonging to the module. Colored columns refer to the enriched modules. Presence/absence of genes is reported as black/white dots, respectively. Labels from left to right: molybdate transport system (Mo), sodium transport system (S), glutamate:Na+ symporter (G), sec-independent translocase for protein secretion (P), tungstate transport system (T), pyrimidine metabolism (Py), purine metabolism (Pu), methane metabolism (Me), pentose phosphate pathway (PPP), glycolysis (Gl), amino sugar and nucleotide sugar metabolism (ANS), porphyrin metabolism (Po), molybdenum cofactor biosynthesis (MC). At the bottom, a heatmap represents the overall relative abundance of each represented gene.
Figure 5.
Potential interactions among syntrophic partners within the microbial community. Six representations, one per taxon (Archaea, Syntrophomonas, Spirochaetes, Ruminococcus, and Chloroflexi) or group of microorganisms (e.g., SRB), are displayed to elucidate the interactions involving the main players, methanogens and SRB (represented by bigger cells), and their potential syntrophic partners (represented by smaller cells). In each cell, the enzymes of a representative metabolism are drawn (Data S2): hydrogenotrophic methanogenesis in archaea, dissimilatory sulfate reduction and the reverse Wood−Ljungdahl pathway in SRB, and dissimilatory nitrate reduction in both Syntrophomonas and Chloroflexi. Substrates and products inside the cells are represented by small colored circles placed close to the arrows, while compounds exchanged among the cells are indicated by their name, acronym, or chemical formula. Dashed green and red arrows indicate the main syntrophic and competitive interactions. Faded green lines represent possible direct compound exchanges between syntrophs. Created with BioRender.com, 2023.
Table 1. Overpressure Measurements to Assess the Dissolution and Precipitation of H2S in the Presence (Ratio S/Fe 1:9 and Ratio S/Fe 1:18) and Absence (Control) of Magnetite
Table 2. Table Containing the Copy Number per Gene Stated in Figure 5 (Six Representative MAGs Have Been Chosen for the Representation, while the Extended Analysis Is Reported in Data S2)
9,875.6
2023-10-20T00:00:00.000
[ "Environmental Science", "Materials Science" ]
IoT Driven Vehicle License Plate Extraction Approach The objective of this research paper is to design a Vehicle Number Plate Identification System that can identify and read the license number of any vehicle. The basic process involves taking an image of the front or rear of the vehicle, which is then processed so that the number can ultimately be displayed on the LCD. This system can be used for a wide variety of installations and establishments such as entry points of schools, colleges, offices and parking spaces. The camera takes an image which is then processed on a PC. The result of this processing is the license number of the vehicle, which is then transmitted via the Wi-Fi module and ultimately displayed on the LCD. The hardware in particular can be made more rugged and compact so as to handle the elements of nature and the various environments in which it may be used. License Plate Format In India, the registered number plates of all two-wheeler and four-wheeler private vehicles have black lettering embossed or written on a white background. The license number plate for all commercially used vehicles, such as cabs and trucks, has black lettering on a yellow background, whereas commercial vehicles available on rent for self-drive have yellow lettering on a black background. Vehicles belonging to foreign consulates are distinguished by a light blue background with white lettering. The official cars of the President of India and state governors carry the Emblem of India embossed in gold on a red plate, without a registration number. License Plate - Current format The present licence number format of the registration index consists of 4 parts: State Name, District number, Alphabets, and a Unique 4 digit number, for example TN01 SM5499. In this example: i. The first two letters "TN" indicate that the vehicle is registered in the state of Tamil Nadu. ii. The second part, a two digit number, indicates the unique sequential number assigned to each district. iii. The next part contains two uppercase alphabets. iv. The last part is a unique four digit number assigned to each vehicle. When the 4 digit numbers run out, a letter is prefixed to the series, then two letters, and so on. v. Apart from this number format, at the top there is a small blue square along with an international oval carrying the caption "IND". A small validation sketch based on this format is given at the end of the paper. Objectives The goals associated with this work are listed below: i. The foremost goal is to develop a system to identify the vehicle license number. ii. The result obtained from the system should be error-free, thereby leading to minimum problems and discrepancies. iii. The result should be obtained in a fast and timely manner, leading to a reduction in waiting time and queues. iv. Reduction in queues will lead to saving time, subsequent monetary gains and reduced congestion. Related Work Many researchers have worked in this field, and it has been found that data mining algorithms are mainly used to identify the characters on the number plate. The works and descriptions are as follows: • Using digital images, the author proposes a detection methodology for car number plates which takes a gray image as input and extracts the characters from the number plate. The proposed method makes use of morphological operators like morphological reconstruction, filters and top-hat operators. The method can be applied to access control systems supervising car traffic in restricted areas [10].
• Tracking Number plate from vehicle using MATLAB [11]: In traffic surveillance tracking of the number plate from the vehicle is an important task, which demands intelligent solution. In this document, extraction and recognition of number plate from vehicles image has been done using MATLAB. It is assumed that images of the vehicle have been captured from Digital Camera. Alphanumeric characters on plate has been extracted and recognized using template images of alphanumeric characters. This work presents a new algorithm in MATLAB which has been used to extract the number plate from the vehicle in various luminance conditions. Extracted number of the license plate can be seen in a text file for verification purpose. Number plate identification is helpful in finding stolen cars, car parking management system and identification of vehicle in traffic. Image processing is a form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing will be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. System Design The schematic of system design flow is given in figure 2. Serial Data Communication All the processed data are sent across the computer and Arduino by setting a pin high or low. Sketches are sent to the Arduino using Serial data transfer. When the sketch is compiled/verified, it is converted into binary data. Through USB all these data are uploaded one at a time to the Arduino. Acknowledgment from Arduino board for its data transmission and reception are done through two LEDs blinks near the USB connector. Serial Communication with Wi-Fi Module It uses Software Serial Communication where Pin 2 and 3 of the Arduino UNO board are used as TX and RX respectively. Pin 2 is used for transmitting data while Pin 3 is used for receiving data. In Software Serial Communication any pin of Arduino UNO can be used to act as TX and RX. A special library "Software Serial.h" has been used to achieve this. It is generally believed that parallel communication is faster than serial because more data is sent simultaneously. However, in order to achieve a higher data rate in serial communication, the links can be clocked faster than parallel communications links. The module includes the following features: i.802.11 b/g/n protocol ii.Wi-Fi Direct (P2P), soft-AP iii.Integrated TCP/IP protocol stack When power is applied to the module it is seen that the red power light turn on and the blue serial indicator light flickers briefly. Conclusion and Future Scope The proposed system is successful in identifying and recognizing texts in the vehicle number plates with accuracy. Although some factors such as lighting, camera quality and other environmental factors do determine the success of the system but given ideal conditions the system performs as intended and gives satisfactory results. This system can be used for a wide variety of installations and establishments such as entry points of schools, colleges, offices, parking spaces and toll-booths. Future scope involves use by law-enforcement agencies. If a central database is created and it is linked up to the system then blacklisted vehicles used in numerous unlawful activities can be identified and monitored. It can also be used by traffic police to identify, monitor and impound stolen vehicles.
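To make the registration format described in the earlier format section concrete, the sketch below validates and splits a plate string. The regular expression and helper name are our own illustrative assumptions (covering the common current format only, not special series such as diplomatic plates), and this snippet is not part of the system described above.

```python
import re

# "TN01 SM5499": state code, two-digit district, series letters, four digits.
PLATE_RE = re.compile(r"^([A-Z]{2})\s*(\d{2})\s*([A-Z]{1,3})\s*(\d{4})$")

def parse_plate(text: str):
    """Return (state, district, series, number) or None if the text
    does not look like a current-format Indian registration number."""
    match = PLATE_RE.match(text.strip().upper())
    return match.groups() if match else None

print(parse_plate("TN01 SM5499"))  # ('TN', '01', 'SM', '5499')
print(parse_plate("not a plate"))  # None
```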
1,597.4
2018-02-07T00:00:00.000
[ "Computer Science" ]
Secondary Contact, Introgressive Hybridization, and Genome Stabilization in Sticklebacks Abstract Advances in genomic studies have revealed that hybridization in nature is pervasive and raised questions about the dynamics of different genetic and evolutionary factors following the initial hybridization event. While recent research has proposed that the genomic outcomes of hybridization might be predictable to some extent, many uncertainties remain. With comprehensive whole-genome sequence data, we investigated the genetic introgression between 2 divergent lineages of 9-spined sticklebacks (Pungitius pungitius) in the Baltic Sea. We found that the intensity and direction of selection on the introgressed variation has varied across different genomic elements: while functionally important regions displayed reduced rates of introgression, promoter regions showed enrichment. Despite the general trend of negative selection, we identified specific genomic regions that were enriched for introgressed variants, and within these regions, we detected footprints of selection, indicating adaptive introgression. Geographically, we found the selection against the functional changes to be strongest in the vicinity of the secondary contact zone and weaken as a function of distance from the initial contact. Altogether, the results suggest that the stabilization of introgressed variation in the genomes is a complex, multistage process involving both negative and positive selection. In spite of the predominance of negative selection against introgressed variants, we also found evidence for adaptive introgression variants likely associated with adaptation to Baltic Sea environmental conditions. Introduction Introgression is a process that transfers genetic variation from 1 species or divergent lineage to another.Wholegenome analyses have shown introgression to be an important and pervasive evolutionary force (e.g.Mallet 2005;Lamichhaney et al. 2018;Oziolor et al. 2019) that has shaped the genome of many organisms (e.g.Dowling and Secor 1997;Harrison and Larson 2014;Suarez-Gonzalez et al. 2018), including humans (Huerta-Sánchez et al. 2014;Sankararaman et al. 2014Sankararaman et al. , 2016)).There is also well-documented evidence of introgression fueling adaptation in several species (Hedrick 2013;Racimo et al. 2015;Marques et al. 2019;Wang et al. 2023).However, amidst this process of introgression of adaptive and neutral variants, there exists a genome-wide selection against hybrids (e.g.Arnegard et al. 2014;Christie and Strauss 2018) and regions derived from hybridization throughout the entire genome (Harrison and Larson 2014;Sankararaman et al. 2014;Juric et al. 2016;Schumer et al. 2018;Calfee et al. 2021).The seemingly conflicting observations of widespread hybridization in nature and the prevalence of selection against foreign ancestry can be explained by the various factors contributing to the reduced fitness of hybrids: In addition to ecological selection against hybrids, the hybridizing parental populations may carry harmful variants (hybridization load), or the genes of the 2 parental lineages may have negative interactions (hybrid incompatibilities) (Moran et al. 2021). Genomic studies of contemporary hybrids have shown that the proportion of foreign ancestry is highly variable among species (Martin et al. 2015(Martin et al. , 2019;;Malinsky et al. 2018) and populations (Skoglund et al. 2015;Kuhlwilm et al. 
2016), and the introgressed ancestry is unevenly distributed across the genome (Sankararaman et al. 2014(Sankararaman et al. , 2016;;Vattathil and Akey 2015;Zhang et al. 2016).However, the mechanisms underlying this heterogeneous distribution are not well understood.Generally, introgressed alleles are regarded to have a negative fitness effect when introduced into new genomic backgrounds (Amorim et al. 2017;Martin and Jiggins 2017;Bay et al. 2019), and according to the Dobzhansky-Muller model of hybrid incompatibility (Bomblies et al. 2007;Masly and Presgraves 2007;Lee et al. 2008), long-term negative selection on incompatible loci may create "deserts" of introgression in the genome (Sankararaman et al. 2014(Sankararaman et al. , 2016)).However, genetic architecture and constraints also play a role, and genomic regions characterized by higher gene density and/or low recombination rate are expected to show a lower rate of introgression than other regions (Martin and Jiggins 2017).This prediction is well supported by empirical studies in humans (Sankararaman et al. 2014(Sankararaman et al. , 2016)), fish (Schumer et al. 2018), and butterflies (Edelman et al. 2019;Martin et al. 2019), and the intensity of selection against introgression in these studies appears to be positively correlated with the density of functional elements.Nevertheless, the intricate interactions between the different forces against foreign ancestry and the predictability of the ultimate outcomes of hybridization are yet to be fully comprehended. Introgression and admixture always happen in an evolutionary context, and the hybridizing lineages can differ in terms of their demographic histories and population sizes, levels of genetic drift, and strength of selection against deleterious variants prior to the hybridization event (Schumer et al. 2018;Moran et al. 2021;Liu et al. 2022).Gene flow from a population with a smaller effective population size (N e ) and reduced purifying selection efficiency may increase the genetic load in the hybrid population through the introduction of weakly deleterious alleles (Harris and Nielsen 2016;Juric et al. 2016).Typically, gene flow from a population with a larger N e is thought to ease the genetic load (Edelman and Mallet 2021), but in extreme cases, it may import unbearable amounts of recessive lethal variation and condemn a tiny population into extinction (Kyriazis et al. 2021).Consequently, the selection on introgressed variants, both adaptive and maladaptive, plays a pivotal role in shaping the genome-wide patterns of foreign ancestry (Kim et al. 2018).The simultaneous operation of multiple demographic and selective processes may lead to interwoven effects, emphasizing the need for a systematic investigation of the historical demographic events and the distinct evolutionary forces that shape the genomic landscape of introgression.A thorough understanding of both the history and the genomic mechanisms is vital for comprehending the evolutionary consequences of hybridization. The 9-spined stickleback (Pungitius pungitius) is a small euryhaline teleost fish that inhabits circumpolar regions of the northern hemisphere.The evolutionary history of 9-spined sticklebacks has been extensively studied (Aldenhoven et al. 2010;Shikano, Ramadevi, et al. 2010;Teacher et al. 2011;Bruneaux et al. 2013;Guo et al. 2019;Natri et al. 2019;Feng et al. 2022Feng et al. 
, 2023)), and in Europe, 2 distinct evolutionary lineages have been identified: the Western European lineage (WL) and the Eastern European lineage (EL).Despite the lineages exhibiting distinct sex determination systems (Natri et al. 2019) and highly differentiated mitochondrial haplotypes (Aldenhoven et al. 2010;Shikano, Shimada, et al. 2010;Guo et al. 2019), they are known to interbreed (Natri et al. 2019) and populations in the Baltic Sea area display a gradient of mixed ancestry (Feng et al. 2022).The identity of the participating populations and the exact timing of the Baltic Sea admixture event(s) are unknown.Based on purely geographical information (Ukkonen et al. 2014), the EL appears to have colonized the Baltic Sea relatively late (in the Ancylus Lake stage, 10,700 to 10,200 BP; Björck 2008;Feng et al. 2022) and it seems likely that the large water body had (WL origin) sticklebacks trapped for the duration of the ice age or at least the species had colonized the area during the Yoldia Sea stage (11,600 to 10,700 BP).If the area was inhabited by WL-origin sticklebacks, the 2 lineages met the first time when the EL colonized the Baltic Sea; the second window for admixture started after the reopening of the Danish Straits during the Littorina Sea stage (at the latest 8,500 to 6,000 BP).If the pre-Ancylus waters were void of sticklebacks, the area was first of EL-origin and the 2 lineages have admixed during or after the Littorina Sea stage and the formation of the current Baltic Sea. The Baltic Sea is a relatively shallow brackish water inland sea with steep gradients of salinity and other abiotic factors (Szaniawska 2018).Due to its young age and recent colonization by various species, it provides an appealing system to study adaptation to variable conditions (Reusch et al. 2018).The genetic resources, the broad-scale sampling, and the detailed demographic history available for the Baltic Sea 9-spined sticklebacks (Varadharajan et al. 2019;Kivikoski et al. 2021;Feng et al. 2022) provide a good starting point for separating the complex signals of various evolutionary factors.The known genetic and phenotypic variation in the area (Herczeg et al. 2009(Herczeg et al. , 2010;;Shikano, Ramadevi, et al. 2010;Teacher et al. 2011;Guo et al. 2019;Natri et al. 2019;Feng et al. 2022Feng et al. , 2023) ) makes the species an intriguing model for investigating admixture, and the presence of populations with large N e reduces the impact of drift (Pamilo and Nei 1988;Palumbi 1994;Tigano and Friesen 2016;Jokinen et al. 2019) and the hybridization load and thus permits excluding specific mechanisms from the process.Ideally, the Baltic Sea sticklebacks should allow studying the consequences of admixture and the stabilization of the postadmixture genome and help understand the interplay of different forces and mechanisms that lead genomes to resist ongoing introgression in hybrid zones. The objective of this study was to investigate the mechanisms influencing the introgression landscape between 2 evolutionary lineages of 9-spined sticklebacks (P.pungitius) after their secondary contact in northern Europe.To this end, we employed whole-genome sequence data from 284 individuals belonging to 13 populations across the Baltic Sea hybrid zone, subsampled from Feng et al. (2023).Our analysis aimed to estimate the levels of introgression across various populations and different Feng et al. 
• https://doi.org/10.1093/molbev/msae031MBE genomic regions, thereby characterizing the genomic landscape of introgression across the hybrid zone.We had three main objectives: (i) to comprehend the leading evolutionary forces that have shaped the genomic landscape of introgression, (ii) to identify genomic regions where positive natural selection has likely favored introgressed genetic elements, and (iii) to elucidate how natural selection has influenced the genome-wide patterns of introgression, particularly regarding the transfer and elimination of adaptive and deleterious variants between the 2 lineages.Our findings suggest that a diverse array of evolutionary forces has contributed to shaping the genomic landscape of introgression, the fast removal of highly deleterious variations, and the long-term selection against weak deleterious variations being the predominant driving forces.The different sources of selection have interacted with the variable recombination landscape and genome structure, thereby adding complexity to the predictability of postadmixture genome evolution in the hybrids. Data Description We retained 284 individuals from 13 populations of the total data of 888 individuals and 45 populations studied earlier by Feng et al. (2022Feng et al. ( & 2023) ) (Fig. 1a and supplementary table S1, Supplementary Material online).The admixed populations were required to be from marine localities with variation in environmental conditions (Fig. 1b and c) and have large N e (Fig. 1d; Feng et al. 2023) to minimize the impact of genetic drift.After quality control, 4,294,816 biallelic singlenucleotide polymorphisms (SNPs) over the 20 autosomal linkage groups (LGs) remained for the following analyses. Quantification of Introgression across the Genome Following Petr et al. (2019), we applied the f 4 -ratio test (Reich et al. 2009) to quantify the composition of WL ancestries in the Baltic Sea populations.In the intergenic regions, considered representing the background level, the southern Baltic Sea populations (GER-RUE and POL-GDY) contain 35% and 22.2% of WL ancestry (Fig. 2a), respectively, whereas the more northern populations from Gotland (SWE-GOT), Gulf of Finland (FIN-HEL and others), and the Bothnian Bay (SWE-BOL, FIN-KIV) contain 13.5% to 11.3% of WL ancestry, consistent with the whole-genome estimates of Feng et al. (2022).Overall, the ancestry proportions show a gradient across the Baltic Sea with the foreign ancestry decreasing with increasing distance from the Danish straits (Fig. 2a, supplementary table S2, Supplementary Material online). 
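As a schematic of the f4-ratio logic used above, the sketch below computes f4 statistics from per-site allele frequencies and recovers a known mixing proportion in a toy example. The population arrangement and names are illustrative assumptions; the actual estimates follow Reich et al. (2009) and Petr et al. (2019) and were obtained with dedicated software rather than this snippet.

```python
import numpy as np

def f4(p1, p2, p3, p4):
    """f4 statistic from per-site allele frequencies (Patterson-style)."""
    return np.mean((p1 - p2) * (p3 - p4))

def f4_ratio(pA, pO, pX, pB, pC):
    """Estimate the fraction of X's ancestry derived from the (A, B)-related
    lineage, as f4(A, O; X, C) / f4(A, O; B, C); the role assigned to each
    population is an assumption of this toy setup."""
    return f4(pA, pO, pX, pC) / f4(pA, pO, pB, pC)

rng = np.random.default_rng(7)
pA, pO, pB, pC = (rng.uniform(0.05, 0.95, 20_000) for _ in range(4))
pX = 0.3 * pB + 0.7 * pC  # target constructed with 30% "B-like" ancestry
print(round(f4_ratio(pA, pO, pX, pB, pC), 3))  # ~0.3
```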
To examine the potential impacts of selection on the minor parental ancestries, we binned the genome into functional categories and computed the WL ancestries across them using the f 4 -ratio test.We considered 6 categories: intergenic, coding DNA (CDS), constrained elements located inside or outside of genes, introns, and promoters.According to these estimates, all Baltic Sea populations contain significantly elevated amounts of WL ancestry in promoter regions (P < 0.001 to 0.044, estimated via resampling, see Materials and Methods; supplementary table S2, Supplementary Material online).Additionally, except for FIN-HEL and SWE-BOL (P = 0.090 to 0.098), all populations display significantly lower amounts of WL ancestry in constrained elements located within genes (P = 0.024 to 0.055).While not statistically significant for all mid to northern Baltic Sea populations, there was a slight increase in WL ancestry in CDS compared with intergenic regions. Footprints of Selection on WL Introgression in Baltic Sea Populations The 7 populations from the northern Baltic Sea showed similar levels of genetic introgression and were studied more closely to understand the factors shaping the genomic landscape of introgression as well as the potential adaptive nature of the introgressed variation.To more precisely identify the regions enriched with introgressed WL variants, we combined the populations, referred to as BS7, and computed the fd summary statistic (Martin et al. 2015) for 100 kb windows with 20 kb steps across the genome.Based on the false discovery rate (FDR) corrected P-value cutoff at 0.05, we obtained 181 putative introgression-enriched regions; by merging the overlapping regions, these collapsed into 45 regions with lengths varying from 100 to 560 kb (Fig. 3a, supplementary table S3, Supplementary Material online). By definition, introgression introduces novel variation to a population and the footprints of selection within introgressed genomic regions differ from those expected under models without introgression (Setter et al. 2020).More precisely, methods based on polymorphism patterns may fail to detect the signals of selection, and approaches based on the overrepresentation of introgressed alleles in a specific population relative to other populations are considered more robust (Racimo et al. 2015).Following this, adaptive introgression (AI) can be distinguished from neutral admixture using the number of sites uniquely shared between the donor and recipient population (U test) as well as the allele frequencies on those sites (Q95 test; Racimo et al. 2016).We applied the U and Q95 tests to search for footprints of selection amongst the introgressed variants and obtained 44 regions which collapsed into 11 candidate regions (supplementary table S4, Supplementary Material online).Five of these candidate regions overlapped with the regions identified with the fd analysis and were chosen as candidates for AI.Integrating information from F ST , d xy , variant allele frequencies (VAF), and genetic diversity (π; Martin et al. 2015, Setter et al. 2020) (Martin et al. 2015).Additionally, we expected the genetic diversity to display the characteristic "volcano" pattern (Setter et al. 2020) and the allele frequencies to be similar to those in the WL.The region in LG20 shows all the hallmarks of a selective sweep (Fig. 
3b): lowered FST to the WL reference and increased FST to the EL reference, no significant increase in dxy, and a volcano-shaped pattern of π created by recombination between the alternative haplotypes. Although the number of genes found within the candidate regions is small, this does not exclude the possibility that a greater number of genes could be under adaptive selection, e.g. through introgressed promoter regions or other regulatory elements and structural variations.
[Figure caption fragment: the Baltic Ice Lake (16,000 to 11,600 BC), the Yoldia Sea (11,600 to 10,700 BC), the Ancylus Lake (10,700 to 10,200 BC), the fresh-to-brackish water transition stage (10,200 to 8,500), and the Littorina Sea (8,500 to 6,000 BC); the gray shading indicates the last glacial maximum (26,000 to 20,000 BC).]
Interaction between Introgression, Differentiation, and Recombination Rate
Possible mechanisms causing heterogeneity in admixture proportions and differentiation across the recipient populations' genomes include incompatibility of genetic variants introgressed from a diverged lineage (Schumer et al. 2018), selection against introduced deleterious variation (Juric et al. 2016; Kovach et al. 2016), and adaptive evolution in different environments (Chunco 2014; Smukowski Heil et al. 2019). As selection removes negative variants, it also removes linked neutral variation, giving the variation in recombination rate a role in shaping the distribution and patterns of introgression across the genome (e.g. Schumer et al. 2018). In our data, WL ancestry proportion and recombination showed a positive correlation only in the German coastal population (supplementary fig. S3, Supplementary Material online), and even there the correlation was weak (rs = 0.064, P < 0.001). Interestingly, no correlation was found in the Polish population (rs = −0.029, P = 0.082) or in Gotland (rs = −0.023, P = 0.174), but a weak negative correlation was seen in the northern Baltic Sea populations (rs = −0.090 to −0.039, P ≤ 0.018). In the combined BS7 set, the correlation was slightly negative (rs = −0.046, P = 0.006; Table 1).
Due to decreased Ne (and increased drift) caused by background selection, genomic regions experiencing lower levels of recombination are expected to be more differentiated than those experiencing more recombination (Nachman and Payseur 2012). In our data, the genetic diversity and the genetic distance measurements FST and dxy, regardless of which parental population they were compared with, were always positively correlated with the recombination rate (rs = 0.179 to 0.615, P ≤ 0.001; supplementary table S5, Supplementary Material online). As the density of coding sequences across the genome is weakly and positively correlated with the recombination rate (rs = 0.098, P < 0.001), it is not surprising to see the admixture proportion also weakly correlating with the density of coding sequences (rs = −0.047, P = 0.006) and the density of constrained elements (rs = −0.074, P < 0.001).
Introgressive Genetic Load across Populations
Given the dramatic differences in WL ancestry across the localities and the different genomic features, we set out to investigate the steps, progression, and stabilization of the WL ancestry within the genome since the secondary contact. Specifically, we investigated how the efficacy of selection on introgressed variants changes across the Baltic Sea (Fig. 4a). First, we identified WL-origin variants with opposing allele frequencies (Fig.
4b); these alleles were found in high frequencies in the south, but their frequency dramatically decreased in the central Baltic Sea and further declined toward the north.Second, we estimated the r xy statistics for all coding variants and the WL-origin coding variants for population pairs with increasing distance from the Danish Straits.The results show that, while no apparent differences are seen at the genome-wide level (Fig. 4c), the selection has very efficiently purged the introgressed genetic variation in the south compared with the mid and northern Baltic Sea (Fig. 4d), especially the variants inferred to have a significant effect and causing early stop codons (high impact).Interestingly, we found that the strength of purging of WL-origin variation is not much different between synonymous (low impact) and nonsynonymous (moderate impact) variants.The efficiency of Discussion In hybridization, 2 locally adapted genomes get mixed and the intragenomic interactions get broken, opening a window for exceptionally dynamic evolution followed by a phase of subsequent stabilization.Large-scale DNA sequencing has revealed the prevalence of genetic introgression in the wild, but the events after the hybridization and the roles and the interplay of the different evolutionary factors are more difficult to study and still poorly understood.While some basic principles of hybridization have been emerging (Moran et al. 2021) and it is well documented that the foreign ancestry is selected against within the functionally most important genome regions, little is known about the relative importance of the different evolutionary forces and their interactions and, ultimately, how predictable the outcomes of hybridization are. Genomic Variation of Minor Parental Ancestry In hybrid populations, genome-wide patterns of ancestry are predicted to be extremely variable right after the hybridization event and then gradually stabilize over time. In our study, the WL ancestry decreased from 35% in the population closest to the entry to the Atlantic to 22.2% in a population 330 km to the east and then leveled at 13.5% to 11.3% in mid and northern Baltic Sea populations.In principle, this could be explained by the ∼12% WL ancestry in the mid and northern parts of the Baltic Sea representing the ancestral, first-stage admixture event and the higher WL ancestry levels in the south being explained by more recent and still ongoing WL migration.However, ), we found that the selection has specifically removed the functionally significant WL ancestry in the southern Baltic Sea, both in the most conserved parts of coding sequences (Fig. 2a) and among functional coding sequence changes (Fig. 4d).While the genomewide ancestry proportions in the mid and northern parts of the Baltic Sea are highly similar, there is a consistent pattern of WL ancestry being enriched in the promoter regions and being depleted in the constrained elements within coding sequences.Consistent with the latter, the r xy values for the amino-acid-changing variants of WL origin are below 1 in all pairwise comparisons, indicating selection against the functionally significant foreign variation.Similar patterns of rapid removal of largely deleterious introgressed variation have been found in studies of hominins, swordtail fishes, and Drosophila (Harris and Nielsen 2016;Schumer et al. 2018;Matute et al. 2020;Veller et al. 2023), and such a pattern is considered to be widespread. 
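For readers less familiar with the r_xy statistic referred to above, one common frequency-based formulation (in the spirit of Do et al. 2015) is sketched below; the arrays are invented, and the study's own implementation may differ in detail.

```python
import numpy as np

def r_xy(freq_x, freq_y):
    """Relative amount of derived alleles carried by population X versus Y
    for one variant class; values < 1 indicate a relative deficit in X
    (e.g., stronger purging of that class in X)."""
    fx, fy = np.asarray(freq_x, float), np.asarray(freq_y, float)
    l_x_not_y = np.sum(fx * (1.0 - fy))  # sites where X carries, Y lacks
    l_y_not_x = np.sum(fy * (1.0 - fx))  # sites where Y carries, X lacks
    return l_x_not_y / l_y_not_x

# Invented frequencies of WL-origin high-impact variants in two populations.
south = [0.02, 0.00, 0.05, 0.01, 0.00]
north = [0.10, 0.08, 0.12, 0.05, 0.06]
print(round(r_xy(south, north), 2))  # < 1: relative deficit in the south
```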
Diverse Evolutionary Forces Shaped the Landscape of Introgression Recombination rate variation is known to play a key role in shaping the genomic landscape of introgression (e.g.Kim et al. 2018;Martin et al. 2019;Veller et al. 2023). Generally, the footprints of selection are more prominent in regions with low recombination rates where the minor parental haplotypes are longer and thus more likely to contain harmful alleles (Wu, 2001;Nachman and Payseur 2012).As a result, a positive correlation between the admixture proportion and recombination rate is expected if the selection against genomic incompatibilities is the dominant force (e.g.Schumer et al. 2018;Edelman et al. 2019;Martin et al. 2019;Duranton and Pool 2022); conversely, a negative correlation is expected if the foreign ancestry is favored (Pool 2015;Corbett-Detig and Nielsen 2017;Duranton and Pool 2022) or if the effects of deleterious variants are recessive (Kim et al. 2018).Interestingly, we found that the correlation between admixture proportions and recombination rates was in general weak and varied across the Baltic Sea: correlation was slightly positive in Germany, nonexistent in the central Baltic Sea, and weakly negative in the more northern parts.This pattern may have arisen from a historical process that the populations inhabiting the southern Baltic Sea region experienced a more recent influx of WL gene flow, in contrast to the more northern populations which likely retain a more ancestral state of admixture (Feng et al. 2022).This aligns with the observation of a declining LD-based estimate of current N e only in the German population (supplementary fig.S1, Supplementary Material online), which is a typical indicator of recent gene flow (Santiago et al. 2020).In addition, our earlier analysis (Feng et al. 2022) revealed a surprising pair of ancestrally related freshwater populations-one in the Baltic Sea side in Latvia and the other in the North Sea side in Tyrifjorden, Norwaywith a slightly higher WL ancestry than in the BS7 set.If this level of WL ancestry reflects the ancestral ghost population, the BS7 populations have actually been introgressed from the EL and only the German and Polish populations from the WL.Then, the correlation between the recombination rate and the proportion of the recent introgression, EL in the north and WL in the south, is indeed positive. On the other hand, such a pattern is also consistent with a model where the selection against introgression varies during the process.At the early stages, the selection against incompatibilities and highly deleterious variation is the dominant force, creating a positive correlation (Duranton and Pool 2022); with increasing distance from the secondary contact zone, the foreign ancestry gets "filtered," and the force of selection against it diminishes, which in turn weakens the correlation between the recombination rate and the levels of introgression (Duranton and Pool 2022;Groh and Coop 2023).Consistent with this, we found that the purifying selection has very efficiently purged early stop codons and amino-acid-changing variation of WL origin in the southern and central parts of the Baltic Sea, while the selection against the different types of variants tends to become more similar toward the north (Fig. 
4). Importantly, at the genome-wide level, we found no evidence of purging of the putatively deleterious moderate- and low-impact variants, and thus neither drift nor selection has significantly affected the overall proportions of weakly deleterious alleles. The deficit of high-impact coding variants of WL origin in the southern Baltic Sea is surprisingly strong (Fig. 4); as the WL-origin variants appear at a frequency > 0.95 in the North Sea population, they cannot be highly deleterious in that environment. The strong selection against them indicates that they must be maladaptive on the Baltic Sea side and suggests that the environmental differences are the driver of this selection.

The observed enrichment of introgression within promoter regions (Fig. 2a) is contrary to findings in human studies (Petr et al. 2019; Telis et al. 2020) but agrees with those in swordtail fishes (Schumer et al. 2018). In swordtail fishes, recombination is concentrated within promoter and other functional regions (Baker et al. 2017), whereas in humans, the process is driven by specific DNA motifs detected by the chromatin-modifying protein PRDM9 (Myers et al. 2010). The relationship between recombination and genomic features in the 9-spined stickleback is unknown, and recombination cannot be excluded as a factor explaining the high WL ancestry within promoters. On the other hand, introgression is more likely to induce regulatory changes in gene expression than radical alterations in protein-coding genes (Gittelman et al. 2016; Dannemann et al. 2017; Dannemann and Kelso 2017; McCoy et al. 2017; but see Petr et al. (2019) and Telis et al. (2020)). If so, the strong and consistent pattern across all populations could suggest that a selective sweep happened in the first admixture event between the EL and the ancestral Baltic Sea population. The opposite (and consistent) trends for the classes "Promoter" and "CE In Gene" in WL ancestry are striking and may suggest that genetic incompatibilities take place on the level of coding genes (selecting against minor parental ancestry; here WL), while the adaptation is driven by gene regulation (favoring the local ancestry; here ancestral Baltic Sea of WL origin).

Adaptive Introgression

Recent studies suggest that introgression is an important source of genetic variation and allows adaptive evolution to proceed much faster than it would do with de novo mutations (Racimo et al. 2015; Edelman and Mallet 2021). Evidence for AI has been found in a diverse array of taxa, including humans (e.g. Racimo et al. 2015), fish (e.g. Meier et al. 2017; Oziolor et al. 2019), butterflies (Pardo-Diaz et al. 2012; The Heliconius Genome Consortium 2012), and plants (Whitney et al. 2006). Our results add to this evidence by identifying 4 genes with high-frequency WL-origin variants in the EL genetic backgrounds with footprints of selection.

Two of these genes might be associated with reproduction. The zona pellucida glycoprotein 4 (ZP4) gene is known for its importance in sperm-egg interactions during fertilization (Wassarman et al. 2001). As a key component of the fish chorion (egg cell coat), it could have a role in the adaptation to the brackish water conditions of the Baltic Sea (Lønning and Solemdal 1972; Nissling 2002; Jovine et al.
2005).The second gene, dynein axonemal heavy chain 5 (DNAH5), encodes dyneins, essential for flagellar beating and sperm function (Turner 2003).Harmful mutations in the DNAH5 gene have been associated with dysfunction in spermatozoa (Zuccarello et al. 2008).Notably, it is not the first time that DNAH5 has emerged as a candidate gene behind salinity-associated ecological speciation in the Baltic Sea: The footprints of selection in DNAH5 suggest that the gene has contributed to the flounders' adaptation to low-salinity Baltic Sea conditions and allow sperm activation in lower salinities than in the related saltwater-adapted European flounder (Momigliano et al. 2017;Jokinen et al. 2019). The relatively small number of positively selected candidates from our study may be an underestimate due to challenges in identifying positively selected introgressed variants and our rather stringent approach to screening for candidates of AI.In contrast to some other studies (e.g.Teng et al. 2017;Walsh et al. 2018;Hu et al. 2019), we first identified regions showing enrichment of introgressed variants and then located the candidate regions with signatures of positive selection on the introgressed variants.This approach might have rendered our ability to identify candidates for AI conservatively, and further work is needed to identify the actual targets of selection and biological functions of the candidate genes and promoters.Nevertheless, it seems reasonable that the introgressed WL-origin variants have played a key role in the adaptation of the dominantly EL-origin sticklebackswith a recent freshwater history in the northern Fennoscandia (Feng et al. 2022)-to the warmest and most saline part of the southern Baltic Sea. Conclusions To conclude, our study of genetic introgression between 2 divergent stickleback lineages in the Baltic Sea demonstrates that the stabilization of hybrid genomes after admixture is a multistage process where the purifying selection against introgressed deleterious variations has played a central role.The varying genomic landscapes of foreign ancestry are likely the consequence of different types and targets of selection and their interactions, as well as the distribution of functional elements and the variation in recombination.Our work adds a well-worked example to studies showing that introgression can contribute to local adaptation, in spite of the widespread evidence suggesting that selection against introgression is pervasive.Although the observed weak correlation between levels of introgression and recombination rate is in stark contrast to findings in most earlier studies (e.g.Schumer et al. 2018;Edelman et al. 2019;Martin et al. 2019;Stankowski et al. 2019), it highlights the complexity of selection on shaping the genomic landscape of introgression since the occurrence of the admixture event.While more work is needed to distinguish the different forces shaping the ancestry of hybrid genomes, our findings bring new insights into the formation of a heterogeneous landscape of introgression and highlight the importance of considering demographic history, genome structure, parental population differentiation, as well as recombination rate in understanding introgression. Ethics Statement The data used in this study were collected in a previous study (Feng et al. 2022) and in accordance with the national legislation of the countries concerned. Data Acquisition The data were subsetted from the vcf file provided in Feng et al. (2023) using BCFtools v.1.7 (Li et al. 
2009). Following Feng et al. (2022), DEN-NOR (from the North Sea) was used as the WL source population in both f4-ratio and fd tests (see below), GBR-GRO (from the UK) as the WL reference population for the WL source population in the f4-ratio test (see below), and RUS-LEV (from the White Sea) as the EL source population in both f4-ratio and fd tests, and CAN-TEM (from Quebec, Canada) was selected as the outgroup. As the focal study populations, 9 admixed marine populations from the Baltic Sea identified by Feng et al. (2022) were selected (Fig. 1a and supplementary table S1, Supplementary Material online). In the earlier analyses of these data (Feng et al. 2023), the reads were first mapped to the latest 9-spined stickleback reference genome (Kivikoski et al. 2021) using the Burrows-Wheeler Aligner v.0.7.17 (BWA-MEM algorithm, Li, 2013) and its default parameters. Duplicate reads were marked with SAMtools v.1.7 (Li et al. 2009), and variant calling was performed with the Genome Analysis Toolkit (GATK) v.4.0.1.2 (McKenna et al. 2010) following the GATK Best Practices workflows. Sites located within identified repetitive sequences (Varadharajan et al. 2019) and negative mappability mask regions combining the identified repeats and unmappable regions (Kivikoski et al. 2021) were excluded. Multiallelic variants and sites showing an extremely low (<5×) or high (>25×) average coverage, genotype quality score (GQ) < 20, quality score (QUAL) < 30, and missing data > 75% were filtered out using VCFtools v.0.1.5 (Danecek et al. 2011). The 2 lineages are known to have distinct sex chromosomes (LG12 for EL and unconfirmed for WL), and data from the known sex chromosomes (LG12, Natri et al. 2019) were removed from further analysis (Feng et al. 2022).

During the finalization of this study, the WL sex determination region was discovered to be located in an 80 kbp inversion in LG3 (Yi et al. 2023). Given its inversion status and its small size (the EL sex chromosome makes up some 50.5% of the 33.6-Mbp-long LG12), we do not believe that its inclusion in our analyses has much influence on the results.

f4-ratio tests were performed using ADMIXTOOLS v.5.1 (qpF4ratio v.320; Patterson et al. 2012). In brief, this setup (Fig. 2b) assumes that the WL population that contributed to the Baltic Sea populations formed a clade with DEN-NOR, rather than with the more ancestral GBR-GRO (Feng et al. 2022), enabling us to directly measure the contribution of the WL source population to the Baltic Sea test populations. For further details on the choice of the optimal WL source, see Feng et al. (2022).

To estimate the admixture proportion among different genomic features, the locations of constrained elements were lifted from the 3-spined stickleback genome annotation (Ensembl release ver. 95; Herrero et al. 2016) using CrossMap v.0.3.3 (Zhao et al. 2014) and a liftover chain created with LAST (Frith et al. 2010) and the Kent utilities (Tyner et al. 2017). The genome annotations from Varadharajan et al. (2019) were lifted to the latest 9-spined stickleback reference genome (Kivikoski et al. 2021) using liftoff (Shumate and Salzberg 2021). The promoter regions were defined as 1 kb stretches upstream of the gene start.

A significance test was then applied to assess whether EL and WL ancestry were significantly enriched or depleted in any of the genomic features in comparison with the levels seen within intergenic regions. Following Petr et al. (2019), the alpha value of a given annotation category was resampled 10,000 times from a normal distribution centered on the alpha with a standard deviation equal to the standard error given by ADMIXTOOLS. An empirical P-value was then calculated for the estimated alpha for each genomic feature to test the hypothesis that the ancestry proportions for different genome features do not differ from that of the intergenic regions.
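A minimal sketch of this resampling test is given below. It assumes the per-feature alpha estimates and their standard errors (from ADMIXTOOLS jackknifing) are already in hand; the numbers shown are placeholders, and the exact tail used for the empirical P-value is an illustrative choice, not necessarily the original implementation.

```python
# Minimal sketch of the empirical significance test for feature-specific ancestry:
# resample each feature's alpha from a normal distribution (mean = estimate,
# sd = its standard error) and compare the draws against the intergenic estimate.
# All alpha values and standard errors below are placeholders.
import numpy as np

rng = np.random.default_rng(42)
N_RESAMPLES = 10_000

intergenic_alpha = 0.12                      # background WL ancestry (placeholder)
features = {                                 # feature: (alpha, standard error), placeholders
    "Promoter":   (0.15, 0.010),
    "CDS":        (0.10, 0.008),
    "CE in gene": (0.09, 0.009),
}

for name, (alpha, se) in features.items():
    draws = rng.normal(loc=alpha, scale=se, size=N_RESAMPLES)
    # Empirical P-value: fraction of resampled alphas falling on the intergenic
    # side of the estimate (enrichment and depletion handled symmetrically)
    if alpha >= intergenic_alpha:
        p_emp = float(np.mean(draws <= intergenic_alpha))
        direction = "enriched"
    else:
        p_emp = float(np.mean(draws >= intergenic_alpha))
        direction = "depleted"
    print(f"{name}: alpha = {alpha:.3f}, {direction}, empirical P = {p_emp:.4f}")
```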
Quantification of WL Introgression (fd) and Population Genetic Statistics

The modified D-statistic, fd (Martin et al. 2015), was used to quantify introgression for the admixed populations at finer genomic scales. We used a fixed window size of 100 kb with a 20 kb step size using the scripts from Martin et al. (2015), estimated P-values from the Z-transformed fd values using the standard normal distribution, and corrected for multiple testing with the Benjamini-Hochberg FDR method (Benjamini and Hochberg 1995). Windows with positive D and fd values, a number of informative sites ≥ 100, and an FDR value ≤ 0.05 were retained as outlier loci (see below). Similarly to the f4-ratio test, we used DEN-NOR and RUS-LEV to represent the WL and EL ancestral populations and CAN-TEM as the outgroup. The admixture proportions were estimated separately for each Baltic Sea population and for a combined set of 7 northern populations showing similar levels of introgression (i.e. SWE-GOT, FIN-HEL, FIN-TVA, FIN-HAM, FIN-SEI, SWE-BOL, and FIN-KIV, hereafter BS7 when referred to collectively).
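The window-level outlier selection just described can be sketched as follows; the input table, its column names, and the one-sided tail are illustrative assumptions rather than the original scripts.

```python
# Minimal sketch: flag 100 kb windows enriched for introgression from windowed
# fd values (Z-transform, normal-tail P-values, Benjamini-Hochberg FDR filter).
# The input file and column names are illustrative assumptions.
import pandas as pd
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

win = pd.read_csv("fd_windows_100kb.csv")   # assumed columns: chrom, start, end, D, fd, n_sites

# Keep windows with enough informative sites and positive D and fd values
win = win[(win["n_sites"] >= 100) & (win["D"] > 0) & (win["fd"] > 0)].copy()

# Z-transform fd and convert to one-sided P-values under the standard normal
z = (win["fd"] - win["fd"].mean()) / win["fd"].std(ddof=1)
pvals = norm.sf(z)

# Benjamini-Hochberg FDR correction; retain windows with FDR <= 0.05
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
win["fdr"] = qvals
outliers = win[reject]
print(f"{len(outliers)} outlier windows enriched for WL ancestry")
```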
We examined the covariation of the admixture proportions (fd) with the population genetic statistics π (nucleotide diversity), d_xy (absolute divergence), and F_ST (a measure of genetic drift). The statistics were computed genome-wide in 10 and 100 kb windows using the scripts from Martin et al. (2015). The mean recombination rate was estimated from the linkage map of Varadharajan et al. (2019), initially for 10 kb windows (see Varadharajan et al. (2019) for details), and then binned into 100 kb nonoverlapping windows. The rates were lifted to the latest 9-spined stickleback reference genome (Kivikoski et al. 2021) using custom scripts. The 10 kb population genetic statistics were used in fine-scale analyses of candidate regions for adaptive evolution.

Footprints of Selection in Baltic Sea Populations

After the quantification of WL introgression, we searched for footprints of selection on introgressed variants using the U and Q95 tests following Racimo et al. (2016) and Jagoda et al. (2018). Both tests are based on the VAF and measure, respectively, the number of SNPs shared with the donor population that appear at a high frequency in the focal population but at a low frequency in the reference population, and the 95% quantile of the frequency of the SNPs that are shared with the donor population and appear at a low frequency in the reference population. The VAF was estimated separately for WL (DEN-NOR), EL (RUS-LEV), and the Baltic Sea populations, and the tests were performed using 100 kb windows with a 20 kb step and discarding positions with more than 25% missing data. We first calculated the U20 statistic, U_EL,BALTIC,WL(0.01, 0.2, 1), to count the SNPs that are at < 1% frequency in the EL reference population, at ≥ 20% frequency in the combined Baltic Sea population (BS7), and fixed (100% frequency) in the WL population. We then calculated the Q95 statistic, Q95_EL,BALTIC,WL(0.01, 0.2, 1), to obtain the 95% quantile VAF of these SNPs in the Baltic Sea population. The intersection of the top 1% regions of the U20 and Q95 tests and the candidate regions from the fd test was then considered as the set of putative AI regions. Within each AI region, F_ST, d_xy, and π were calculated for 10 kb windows with a 5 kb step, and genotypes and allele frequencies (minor allele frequency [maf] ≥ 0.05) of variants were used to identify candidates for possible adaptive evolution among the lifted reference gene annotations.

Assessment of Introgressive Genetic Load and Its Purification in the Baltic Sea

Following the concept of the U20 test, we defined variants in the Baltic Sea populations to be of WL origin if they showed a frequency ≤ 0.05 in the EL reference and a frequency ≥ 0.95 in the WL reference. To evaluate the efficacy of selection on potentially deleterious foreign variation introduced via genetic introgression, we employed the r_xy statistics as described by Xue et al. (2015). The r_xy statistics compare the allele frequencies of certain categories of variants, relative to the levels of neutral variants, between populations located at varying distances from the entry to the Baltic Sea. An r_xy value below 1 indicates a deficiency of the focal alleles in the population farther from the WL introgression entry, suggesting the influence of purifying selection. The effects of variants on protein-coding gene sequences were annotated and classified as low impact (synonymous variants), moderate impact (missense variants), and high impact (stop codon gaining variants) using SnpEff v.5.0 (Cingolani et al. 2012). The same number of variants from intergenic regions was randomly selected and served as a proxy for the neutral level of genetic variation. We obtained standard errors and 95% confidence intervals for the r_xy estimates by jackknifing the values across the 20 individual LGs. For comparison, we also estimated r_xy for the genome-wide putatively deleterious variants.
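A compact sketch of the WL-origin classification and an r_xy-style comparison is given below. The allele-frequency arrays are random placeholders, and the ratio-of-frequency-sums form used here is one common way of writing the statistic summarized above, not necessarily the exact implementation of the study.

```python
# Minimal sketch: classify WL-origin alleles and compute an r_xy-style statistic
# for one category of coding variants, normalized by intergenic (neutral) variants.
# All frequency arrays are random placeholders; the real analysis uses per-site
# allele frequencies extracted from the filtered VCF.
import numpy as np

rng = np.random.default_rng(1)

# Placeholder per-site alternate-allele frequencies in the two reference panels
freq_el_ref = rng.random(10_000)   # EL reference (RUS-LEV)
freq_wl_ref = rng.random(10_000)   # WL reference (DEN-NOR)

# WL-origin sites: rare in the EL reference and (nearly) fixed in the WL reference
wl_origin = (freq_el_ref <= 0.05) & (freq_wl_ref >= 0.95)
print(f"{int(wl_origin.sum())} sites classified as WL origin")

def l_not(fx, fy):
    """Sum over sites of the allele frequency in x weighted by its absence in y."""
    return float(np.sum(fx * (1.0 - fy)))

def r_xy(fx, fy, fx_neutral, fy_neutral):
    """One common formulation of r_xy: category sums normalized by neutral sums.
    Values below 1 indicate a relative deficit of the focal alleles in population x."""
    num = l_not(fx, fy) / l_not(fx_neutral, fy_neutral)
    den = l_not(fy, fx) / l_not(fy_neutral, fx_neutral)
    return num / den

# Placeholder frequencies of WL-origin high-impact variants in a northern (x) and
# a southern (y) Baltic Sea population, plus matched intergenic (neutral) variants
fx, fy = rng.random(500), rng.random(500)
fx_neutral, fy_neutral = rng.random(500), rng.random(500)
print(f"r_xy = {r_xy(fx, fy, fx_neutral, fy_neutral):.3f}")
```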
Fig. 1. Study populations and localities. (a) Geographic origins of populations involved in this study, modified from Feng et al. (2022). The pie charts show the proportions of Eastern Lineage (EL) males; "Other" can be either WL male or WL/EL female. The outline colors indicate the mtDNA lineage assignment of the population. The dot-dashed frame marks the admixed populations, and the gray dotted frame indicates the BS7 set (see Materials and Methods). The source and reference populations used in the f4-ratio test and fd statistics are indicated. (b) Map of the sea surface temperature and (c) salinity of the Baltic Sea and its surroundings, adapted from Schlitzer and Mieruch-Schnülle (2024). (d) Demographic history of parental, reference, and admixed populations from 30,000 yr ago (kya) to the present, adapted from Feng et al. (2023). The blue, green, and red colors indicate the EL parental population RUS-LEV, the WL parental population DEN-NOR, and the WL reference population GBR-GRO, respectively; the orange and green colors indicate GER-RUE and POL-GDY, the 2 southern Baltic Sea populations, respectively. The 7 populations from the central and northern Baltic Sea are depicted in black. The yellow shadings indicate different stages of the Baltic Sea: the Baltic Ice Lake (16,000 to 11,600 BC), the Yoldia Sea (11,600 to 10,700 BC), the Ancylus Lake (10,700 to 10,200 BC), the fresh-to-brackish water transition stage (10,200 to 8,500), and the Littorina Sea (8,500 to 6,000 BC). The gray shading indicates the last glacial maximum (26,000 to 20,000 BC).

Fig. 2. Western lineage ancestry across 6 different genomic features in the admixed Baltic Sea populations. (a) The dots and lines show the proportion of WL ancestry and its standard error (estimated with jackknifing across the genome) for each genomic feature, and the asterisks above and below indicate that the estimate is significantly higher or lower than within the intergenic region. The black lines indicate the estimates for intergenic regions, assumed to represent the background level of WL ancestry. The populations are ordered by increasing geographic distance from the Danish Straits. CE, constrained elements; CDS, coding sequences. (b) The tree depicts the setup of the f4-ratio test for assessing the WL ancestry following Petr et al. (2019). The test population X is assumed to be a mixture between populations DEN-NOR and RUS-LEV, with α indicating the amount of WL ancestry.

Fig. 3. AI in the northern Baltic Sea populations. (a) The Manhattan plot shows the estimated admixture proportions (fd) for 100 kb windows across the genome, orange dots indicating genomic windows significantly enriched for WL ancestry. The dotted red line at 0.5137 indicates the P < 0.05 significance level. (b) A candidate region for AI at LG20:20260000-20315000 (dotted box). The panels show the gene annotations (coding sequences in orange), per site VAF (heatmap) for the 3 sets of populations (WL source, BS7, EL source), and F_ST, d_xy, and π (10 kb windows) in the northern Baltic Sea populations (BS7). The red and blue colors indicate F_ST and d_xy measured against DEN-NOR (WL source) and RUS-LEV (EL source), respectively.

Fig. 4. Selection on potentially deleterious introgressed variation. (a) Populations for the pairwise r_xy comparisons, with colors matching those in panels c and d. The 3 populations from the Gulf of Finland indicated with black triangles were not included in the pairwise comparisons. The populations representing the WL and EL parental lineages are indicated with red and blue dots, respectively. (b) Density plots of the WL-origin VAF in the admixed Baltic Sea populations and the 2 parental populations. (c) and (d) The r_xy statistics for all (c) and WL-origin (d) coding variants of different impact for the 5 pairwise comparisons (supplementary table S6, Supplementary Material online). r_xy = 1 if the allele frequency changes do not deviate from the background; r_xy < 1 indicates a relative deficit of the corresponding variants in the northern population compared with the southern population. r_xy confidence intervals are based on jackknife estimates across the 20 linkage groups.
We identified 4 candidates for adaptively introgressed genes (viz. ZP4, PLEKHG3, DNAH5, ADCY2; supplementary table S4 and fig. S2, Supplementary Material online). In particular, we searched for signals within the introgression-enriched regions that exhibit lower F_ST and d_xy to the WL reference than the neighboring regions.

Table 1. Spearman rank correlations between admixture proportion (fd) and recombination rate in different populations. BS7, combined 7 northern Baltic Sea populations.

Although we cannot fully separate the signals created by the temporal factors of a potentially multistaged admixture history from those created by the selection, the results highlight the complex forces acting in hybrid populations and demonstrate the potential of the Baltic Sea sticklebacks as a model system to study the evolutionary processes after secondary contacts in action.
High-angle, not low-angle, normal faults dominate early rift extension in the Corinth Rift, central Greece Low-angle normal faults (LANFs) accommodate extension during late-stage rifting and breakup, but what is more difficult to explain is the existence of LANFs in less-stretched continental rifts. A critical example is the <5 Ma Corinth Rift, central Greece, where microseismicity, the geometry of exposed fault planes, and deep seismically imaged faults have been used to argue for the presence of <30°-dipping normal faults. However, new and reinterpreted data call into question whether LANFs have been influential in controlling the observed rift geometry, which involves (1) exposed steep fault planes, (2) significant uplift of the southern rift margin, (3) time-averaged (tens of thousands to hundreds of thousands of years) uplift-to-subsidence ratios across south coast faults of 1:1–1:2, and (4) north margin subsidence. We test whether slip on a mature LANF can reproduce the long-term (tens of thousands of years) geometry and morphology of the Corinth Rift using a finite-element method, to model the uplift and subsidence fields associated with proposed fault geometries. Models involving LANFs at depth produce very minor coseismic uplift of the south margin, and post-seismic relaxation results in net subsidence. In contrast, models involving steep planar faults to the brittle-ductile transition produce displacement fields involving an uplifted south margin with uplift-to-subsidence ratios of ~1:2–3, compatible with geological observations. We therefore propose that LANFs cannot have controlled the geometry of the Corinth Rift over time scales of tens of thousands of years. We suggest that although LANFs may become important in the transition to breakup, in areas that have undergone mild stretching, do not have significant magmatic activity, and do not have optimally oriented preexisting low-angle structures, high-angle faulting would be the dominant strain accommodation mechanism in the upper crust during early rifting. INTRODUCTION Both high-and low-angle (dipping <30°) normal faults have been proposed to accommodate strain at some early-stage continental rifts, including the Basin and Range (North America), central Apennines (Italy), Aegean-western Turkey, and Corinth Rift (Greece). Low-angle normal faults (LANFs) present a paradox in structural geology, as Andersonian theory (Anderson, 1905) would suggest that slip on LANFs is mechanically unfavorable. These continental rifts are therefore crucial case studies for assessing when in the rifting process, and under what conditions, LANFs become important in accommodating strain. In the Basin and Range, an association between detachments (faults dipping <15°) and magmatic activity has been reported, suggesting that igneous midcrustal inflation may rotate the stress field such that LANFs are favorable (Parsons and Thompson, 1993). Low-angle faults have been observed in Turkey, however these may have rotated from steeper angles to their current 0°-20° dip angles (Gessner et al., 2001), rather than LANFs being a first-order extension mechanism. Low-angle normal faults have also been observed in the central Apennines, where their origin has been linked to subduction rollback (Collettini et al., 2006) or collapse of an overthickened accretionary wedge, whereby thrust faults are reactivated as LANFs (Ghisetti and Vezzani, 1999). These studies suggest that LANFs are only present in early-stage continental rifts under rather specific conditions. 
A key example is the 3-30-km-wide Corinth Rift, Greece, which at <5 Ma and with a stretching factor (initial crustal thickness divided by final crustal thickness) of <1.4 (Bell et al., 2011), provides a snapshot of the very early stages of continental rifting (Fig. 1A). Since the microseismicity and focal-mechanism studies of Rigo et al. (1996) and Bernard et al. (1997), the rift has long been used as an example of a setting where low-angle normal faulting plays a key role in strain accommodation (e.g., Jolivet et al., 2010; Sorel, 2000) and is commonly referred to by studies reviewing LANF mechanisms (e.g., Lecomte et al., 2012). The Corinth Rift is amagmatic, and like the central Apennines, it occurs within orogenically thickened crust; however, the Corinth Rift runs orthogonal, not parallel, to the Hellenide orogeny. If early-stage continental extension at the Corinth Rift is really controlled by LANFs, it would suggest that they may be a primary strain-accommodation mechanism in the earliest stages of continental rifting.

If a mature, seismically active LANF or detachment does exist beneath the western Corinth Rift, slip on such a structure must be able to account for the long-term (tens of thousands to hundreds of thousands of years) geometry and pattern of vertical displacement across the rift, which is well constrained from seismic reflection imaging and geomorphology data. The southern margin is uplifting with late Quaternary uplift rates of 0.8-2.0 mm/yr determined from uplifted marine terraces and wave-cut notches (e.g., Armijo et al., 1996). Long-term uplift-to-subsidence ratios across the largest faults bordering the southern margin are estimated at 1:1.2 to 1:2.2, measured from the elevation of features of comparable age in the footwall and hanging wall (McNeill et al., 2005, 2007). In contrast, the northern margin appears to be dominated by subsidence (Bell et al., 2009; Elias et al., 2009). GPS data have been collected in the Corinth Rift; however, Bell et al. (2011) showed that GPS extension rate patterns are incompatible with the long-term rift geometry and cannot have persisted over a 100+ k.y. time scale.

In this study, we perform new finite-element displacement modeling to investigate if LANFs are capable of reproducing the observed patterns of vertical displacement across the rift to definitively address the question: Is Corinth an example of a rift that is dominated by low-angle normal faulting? If the answer to this question is no, the elimination of this example would call into question the significance of LANFs during early-stage rifting.

LOW-ANGLE VERSUS HIGH-ANGLE DEEP FAULT GEOMETRY MODELS IN THE CORINTH RIFT

The shallow geometry of Corinth Rift active faults is known from exposed fault scarps onshore, with reported fault dips of 46°-60° (e.g., Ford et al., 2013). Seismic reflection interpretations from different authors agree that the dip angle of offshore faults down to ~3 km ranges from 35° to 60° (e.g., Nixon et al., 2016; Taylor et al., 2011). However, controversy remains as to the geometry of faults deeper than 3 km, as these have never been directly imaged. Two classes of deep fault geometry model have emerged. Rigo et al. (1996) proposed that microseismicity lies on a plane dipping 10°-25° to the north at a depth of ~6-9 km below the south coast of the Gulf of Corinth (Fig. 1B). They interpreted this band of active seismicity as a low-angle detachment onto which steep faults observed at the surface sole at depth.
Syn-rift sediment depocenters in the northwestern Peloponnese have shifted north through time, and although most of the faults in this area are moderately to steeply dipping (40°-50°), the most southern and oldest fault is low-angle (25°). Sorel (2000) proposed that this structure is a detachment extending northwards beneath observed steep faults in the western part of the rift (Fig. 1B). Sachpazi et al. (2003) and Taylor et al. (2011) interpreted from deep seismic reflection data a low-angle structure at a similar depth to Sorel's (2000) proposed detachment in the western rift (Fig. 1B; Fig. DR1 in the GSA Data Repository; see footnote 1). Some focal mechanisms determined for large (M 6) earthquakes that have occurred in the western rift may support relatively low-angle faulting; however, these are subject to the nodal plane that is selected as the active fault (Bernard et al., 1997).

Footnote 1: GSA Data Repository item 2018025, Figures DR1 (high- and low-angle faults interpreted in seismic data), DR2 (variation in vertical displacement with post-seismic relaxation), DR3 (horizontal displacement for low- and high-angle faults), and DR4-DR6 (the effect of changing model layer thickness), is available online at http://www.geosociety.org/datarepository/2018/ or on request from <EMAIL_ADDRESS>.

Low-Angle (<30°) Active Faults at Depths >3 km

Although Rigo et al. (1996) and Sorel (2000) are commonly cited together as studies in support of a low-angle detachment, the Sorel (2000) and Taylor et al. (2011) LANF is 3 km beneath the south coast of the gulf, whereas the zone of microseismicity is at 6-9 km depth (Fig. 1B).

Moderate-Angle to High-Angle Faults (35°-60°) to Depths >3 km

Although there is debate regarding the western Corinth Rift, focal mechanisms of large earthquakes in the eastern rift occurred on faults dipping at 40°-50° (Jackson et al., 1982). Lambotte et al. (2014) resolved the western Corinth Rift seismicity pattern further, and noted that, in some locations, the microseismicity cloud appears to dip 10°-20° to the north, yet in others, it is subhorizontal or chaotic (Fig. 1B). They also observed seismicity beneath the clustered microseismicity cloud, which they suggested is the continuation of high-angle faults imaged at shallower depth and supports a high-angle fault model rather than an active detachment. Nixon et al. (2016) reinterpreted the deep seismic reflection data and suggested that north-dipping south margin faults remain moderately to steeply dipping (35°-60°) below depths of 3 km (Fig. 1B; Fig. DR1). Further evidence for high-angle faulting comes from McNeill et al. (2005), Bell et al. (2011), and Ford et al. (2013), who calculated total extension across the rift by summing fault heaves assuming a planar fault geometry. In the western Corinth Rift, the amount of extension required for the observed basement subsidence and crustal thinning is in line with summed heaves across higher-angle faults.

MODELING THE VERTICAL DISPLACEMENT FIELD

We use PyLith (https://geodynamics.org/cig/software/pylith/), a finite-element code with quasi-static formulation (Aagaard et al., 2013), to model the deformation associated with imposed fault slip in a simple layered continental crust, and we solve the two-dimensional (2-D) surface uplift and subsidence fields for the two contrasting fault geometry models. PyLith allows the coseismic displacement field associated with a particular magnitude of slip on a fault to be calculated and also the deformation associated with post-seismic relaxation to be recovered.
Assuming linear rheologies, addition of these displacement fields is representative of the vertical deformation pattern caused by multiple seismic cycles over a 10+ k.y. time scale. Our 2-D model includes a 10-km-thick elastic upper crust overlying a 30-km-thick viscoelastic lower crust layer with a Newtonian rheology with a constant viscosity of 10^19 Pa·s, consistent with previously published models for the region (Fig. 2; Sachpazi et al., 2007; King, 1998). The Maxwell time for the viscoelastic material is 10 yr, and full relaxation of the model is achieved after ~30 k.y. (Fig. DR2). The model is 1000 km wide, with free-slip boundary conditions on the model sides and base and a top free surface. We only consider deformation patterns in the central 100 km of the model to avoid boundary effects. We impose a total of 200 m of normal slip along a prescribed fault (simulating the result of 30 k.y. of earthquakes with a recurrence interval of 150 yr and a displacement of 1 m) and let the system relax. We present our vertical deformation results normalized to show the amount of uplift and subsidence produced per meter of displacement on the fault and compare them to first-order observations of Corinth Rift vertical deformation. We also calculate the normalized horizontal deformation (extension produced per meter of fault displacement) between points 10 km on either side of the fault, to test how the magnitude of extension varies on high-angle versus low-angle faults (Fig. DR3). We note that varying elastic and viscous layer thickness within realistic limits does not change the conclusions presented in this manuscript. Changing the linear viscosity only changes the time scale of relaxation, not the final uplift-to-subsidence pattern.

To cover the full range of deep fault geometries proposed in the literature, we test two classes of fault model. Model 1 involves a LANF at depth, and two different scenarios are tested: (1) model 1a is a planar fault with a dip of 45° that soles into a 15° north-dipping detachment at a depth of 7 km below the south coast (cf. Rigo et al., 1996); and (2) model 1b is a 45° fault that changes to a dip angle of 20° at a depth of 3 km and extends to a brittle-ductile transition at 10 km depth (cf. Sorel, 2000; Taylor et al., 2011). Model 2 involves planar faults that dip at 45° (model 2a) or 60° (model 2b) to the brittle-ductile transition at a depth of 10 km (after Nixon et al., 2016).

Model 1

The coseismic vertical displacement field associated with moderately high-angle faults (45°) soling into a detachment at a depth of 7 km (model 1a) and 3 km (model 1b) generates a minor amount of coseismic uplift of the southern margin (<0.01 m for every 1 m of slip), but an order of magnitude more uplift of the northern margin 20-25 km north of the surface trace of the fault (0.1 m for every 1 m of slip) (Figs. 3A and 3B). The coseismic uplift of the north margin occurs above where the low-angle fault tips out, as is also observed in dislocation models for low-angle normal faulting (e.g., Resor, 2008). The horizontal displacement between points 10 km either side of the fault associated with the coseismic response is 0.6-0.65 m of extension per meter of slip on the fault (Fig. DR3). After 30 k.y. of post-seismic relaxation, the overall vertical displacement profile changes considerably (Figs. 3A and 3B). The deformation across the rift associated with models 1a and 1b involves overall subsidence of both the southern and northern margins (Figs. 3A and 3B).
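Two of the model parameters quoted above can be checked with a few lines of arithmetic. The sketch below assumes a typical crustal shear modulus, which is not a value reported by the study; the viscosity, slip per event, recurrence interval, and model duration are taken from the text.

```python
# Back-of-envelope checks on the viscoelastic model parameters described above.
# The shear modulus mu is an assumed, typical crustal value; the other numbers
# (viscosity, recurrence interval, slip per event, duration) come from the text.
SECONDS_PER_YEAR = 3.156e7

eta = 1e19                # lower-crust viscosity, Pa*s (from the text)
mu = 3e10                 # shear modulus, Pa (assumed typical value)
maxwell_time_yr = eta / mu / SECONDS_PER_YEAR
print(f"Maxwell relaxation time ~ {maxwell_time_yr:.1f} yr")   # ~10 yr, consistent with the text

# Imposed slip history: 1 m of slip every 150 yr over 30 k.y.
recurrence_yr = 150
duration_yr = 30_000
slip_per_event_m = 1.0
total_slip_m = duration_yr / recurrence_yr * slip_per_event_m
print(f"Total imposed slip = {total_slip_m:.0f} m")            # 200 m, as stated
```

For models 1a and 1b, the net long-term outcome of this loading history, as described above, is subsidence of both margins.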
This contrasts markedly with the observed uplift (0.8-2 mm/yr) of the southern Gulf of Corinth coastline averaged over a time scale of 10-100+ k.y. Model 2 For both 45° and 60° normal faults (models 2a and 2b), the model predicts considerable coseismic uplift of the southern margin (0.12 m and 0.22 m for every 1 m of slip for 45° and 60° faults, respectively) and subsidence of the northern margin (Figs. 3C and 3D). The horizontal displacement between points 10 km either side of the fault associated with the coseismic response is 0.5 m of extension per meter of slip for model 2a, and 0.3 m of extension per meter of slip for model 2b (Fig. DR3). After post-seismic relaxation, the southern margin remains uplifted and the northern margin subsides. DISCUSSION AND CONCLUSIONS Deformation associated with LANFs modeled over a >10 k.y. time scale produces subsidence of the southern margin of the Corinth Rift, incompatible with the widespread indicators of uplift along the south coast (Figs. 3A and 3B). We suggest that the enhanced post-seismic subsidence is due to viscous flow in the lower crust balancing the gradient of gravitational potential energy generated by slip on the fault. In contrast, models involving moderate to steep faults to depths of 10 km predict uplift of the south margin and uplift-to-subsidence ratios of 1:3.1 and 1:2.2 for 45° to 60° faults, respectively, within the range of estimates from geological observations (Figs. 3C and3D) (McNeill et al., 2005, 2007). Crustal thickness in the Eratini-Aigion area is ~33-40 km (Sachpazi, et al. 2007); we note that reducing the crustal thickness of our model from 40 km to 30 km results in uplift-to-subsidence ratios of 1:2.3 and 1:1 for models 2a and 2b respectively, providing an even better match to the geologically constrained ratios (Fig. DR6). Our models show that LANFs result in 2.2× more coseismic extension than planar 60° dipping faults for the same amount of normal slip (Fig. DR3). These findings, together with the fact that high-angle planar faults can account for the amount of total extension across the rift (Bell et al., 2011;Ford et al., 2013), lead us to conclude that high-angle normal faulting is the dominant mechanism of strain accommodation in the Corinth Rift. We support the conclusion of Lambotte et al. (2014) that if a LANF does exist in the western Corinth Rift, it is incipient. The elimination of the Corinth Rift as an example of a rift that deforms predominantly by low-angle normal faulting brings into question the importance of LANFs at early rift zones generally. LANFs are observed in continental rifts that (1) have undergone significant stretching and are close to breakup (e.g., Woodlark Rift, Papua New Guinea; Taylor et al. 1999), (2) exhibit significant magmatic activity or base-crustal shear stress to rotate stress tensors (e.g., Basin and Range and central Apennines), or (3) have preexisting thrust faults optimally orientated for reactivation (e.g., central Apennines). The Corinth Rift is an example of a mildly stretched rift, which is amagmatic, and normal faulting is perpendicular to the Hellenide orogeny; thus none of these conditions apply. We suggest that although LANFs can become important in the transition to breakup, high-angle faulting would be the dominant strain accommodation mechanism in areas that have undergone limited stretching, do not have significant magmatic activity, and do not have optimally orientated preexisting low-angle structures.
Concept of Integrative Currency Area and Application to Central Europe

The euro zone was created along with the theory of the optimal currency area. This theory is not the only solution indicating how to integrate countries in a common currency area. The euro area requires that labor productivity be at a similar level, whereas the Central European countries have much lower labor productivity than the Western European countries. The question therefore arises whether a currency zone can be created for countries with different levels of productivity. In this paper it is indicated that the answer is positive. However, this requires a rethinking and redefinition of the basic arrangement of economic concepts. The paper indicates a method of translating economic values into the value of the one common currency. The main constraint is the labor productivity ratio, which cannot decrease, since a decrease would cause inflation. Lower labor productivity means only that employees have lower earnings.

Introduction

The theory of the optimal currency area is the supporting knowledge base of the European euro area. However, the euro area does not work smoothly, and many doubts appear with respect to the theoretical foundations and the procedures applied in conducting this project. In particular, the accession of new members is doubtful, since the euro area is clearly discriminatory and exclusionary toward countries with productivity levels significantly lower than in the founding member states. Contrary to the euro area, a consideration is presented here that develops the idea of an integrative currency area (ICA). The application of the ICA in Central Europe would work toward the real integration of countries. An additional aim of this paper is to point out how to involve states that do not fulfill the Maastricht criterion. The ICA can involve countries with different productivity levels that do not fulfill the Maastricht criterion. A significant part of this paper develops the measurement of the labor productivity ratio as a chief economic measure. The only condition for being a member of the ICA is to keep labor productivity from decreasing.

What is more, the presented considerations may serve as an intellectual tool on the way to the creation of a union of countries belonging to the ABC area. ABC is an abbreviation where A means the Adriatic Sea, B the Baltic Sea, and C the Black Sea. The essence of this union in Central Europe is the creation of a pole of strength comparable to the power of Russia or Germany. As Z. Sykulski has explained, the aim of such a project is protection from the imperial temptations of the mentioned powers and the creation of conditions for the free development of the nations of the region [1]. Confidence is growing now that the current geopolitical circumstances are unstable and will undergo changes. In this state of affairs, the Central European countries should seek courses of action appropriate for strengthening their sovereignty and for entering the circle of countries of greater significance in world politics.
Pacioli, in which he presented a theory of the double accounting system and its organization.The thing is that fundamental part of the accounting theory is the basic identity that contains the abstract categories of values, as capital and assets.This basic identity is a derivative of duality principle that allows for measuring periodic gains being an increase of the abstract capital invested by the owners.Thus, the abstract nature of capital could be well understood for at least 6 centuries.However notion of capital is still differently perceived [2] [3].The latest example is the T. Piketty's book, in which the author presents its own tortuous term [4].The author dedicates the Section 1. achievements of the theory of the double accounting and the principle of duality that points on the abstract nature of capital.He does not perceive the importance of this knowledge for the existence and functioning of money-goods economy.This attitude is not unique among economists, which does not mean that is universal, but is, however, a little reflective.But the point is that the lack of understanding the strong relationship between capital and labor has a negative impact on the quality of the economic analyzes and conclusions.It looks like W. Piketty confirms K. Bliss (1975) opinion about confusion with category of capital who wrote that when economists reach agreement on the theory of capital they will shortly reach agreement on everything [5].Abstract nature of capital is explained long ago and applied to important theoretical formulation by M. Dobija and B. Kurek [6] [7].Authors interpret capital as abstract potential ability of doing work.What is more an economic constant of potential growth is discovered.It is worth noting that abstract categories, which are defined and measured in the physical sciences or accounting systems must be described and analyzed mathematically.It is therefore natural that well versed scholars in mathematics wrote papers on accounting like L. Pacioli and even Professor of pure mathematics A. Cayley [8], the author of the basic theorems in the theory of analytic functions.The set of economic categories that forms fundamentals for accounting and economic of labor are a consequence of the duality principle.As is explained in paper by M. Dobija [9] the duality means that different sorts of assets embody abstract capital that express ability of the assets to work.Assuming that economic means denotes a prime notion that does not require explanations the fundamental set of categories is presented in the body of Table 1. The above set of notions is a fundamental of economics of labor.In order to grasp the idea of capital we have to notice the tandem of capital and labor.That tandem is a direct implication from the definition of capital.Capital is the ability of doing work.It is therefore the potential for doing work (e.g. the car in a garage).Labor process on the other hand is a transfer of this potential of accumulated capital to objects of work.Thus labor is a dynamic side of the potential capital.One cannot perform any labor without having capital that was collected earlier.Therefore labor determines also a unit of measure of capital. 
It is hard to explain why the above understanding of these significant categories did not become a standard of economic thinking through the ages. Capital by definition is closely related to labor and value. Work is measurable, so it assures the measurability of capital and assets. Money is a work receivable, namely an unconditional right to obtain a value equivalent, and this category is also abstract, although very popular. Why do such long-lasting difficulties exist, despite the fact that L. Pacioli [10] published his book, in which the abstract category of capital and the basic double-entry equation appear? The purpose of this accounting system was, and still is, the periodic measurement of the increase of the initial capital invested in economic processes, i.e. income. Moreover, the use of double-entry accounting soon became the norm, and economic history researchers [11] wrote very positive opinions about the contribution of this system to the development of capitalism.

Table 1. The set of fundamental economic notions.

Capital — Abstract and potential ability of an object to do work. Capital and labor are complementary categories.
Labor — Transfer of capital from the output localization to the object of destiny. Labor is a category measurable in labor units.
Labor unit — Unit of labor = unit of power × number of time units.
Value — Value is determined by the concentration of capital in an object. The measure of value is a real positive number that fulfils the measure postulates. Types of measures: exchange value, cost value, present value, and others.
Money — Work receivables expressed in money units. A legal-economic category that determines the unconditional right to receive a value equivalent.
Money unit — A fraction of the labor unit applied in a given economy.
Assets — Material and immaterial objects that contain a measurable deposit of capital expressed in money units.
Resources — Economic means with an undetermined concentration of capital, so immeasurable. Resources are merely countable in natural units.

Labor Productivity as the Fundamental Economic Measurer

Having defined the set of fundamental notions, the concept of labor productivity should be taken under consideration. To this aim, a special kind of description of economic activities is needed. A significant tool of economic consideration is the well-known production function. It is, however, a tool limited to examining the production process. The theory of the production function examines and explains the general production process, not a specific entity. What is more, econometric modeling needs a statistical approach and sets of data. Therefore, for the construction of the ICA, the concept of the production function needed some rethinking. As a result, a mathematical description of the economic activity of an economic entity (company or state) has been created. In this approach, the function of economic activity (FEA) takes into account the value of assets as well as the value of the human capital of employees. The FEA leads directly to the labor productivity ratio Q. If the considered entity is a state, the FEA presents GDP = W × Q (W denoting total remunerations). In the economics of labor it is natural that the tandem (GDP, Q) is the main measure of economic evaluation.
From another point of view, the quotient of the market value of the final annual products of an organization to the annual cost of labor is a specific dimensionless ratio called labor productivity. Denoting it as Q, we speak of the labor productivity of a country's economy, Q = GDP/W, where W denotes the cost of labor. The Q is thus the inverse of the well-known labor share ratio. Since the labor share is very stable, the Q is stable as well. We can write an identity including Q, which sheds new light on the issue of the share of wages in GDP. The identity shows that GDP is the sum of the share of wages (W) and the share of assets (GDP_A), that is, GDP = W + GDP_A, or equivalently 1 = 1/Q + GDP_A/GDP. The greater the assets share, the wealthier the country.

The FEA expressed analytically may be a tool of economic analysis, or it may provide numerous non-linear models describing the behavior of a selected variable. The value of final production sold, in the historical prices of outlays, may be expressed as the sum of costs and income, K + Z, where K denotes all costs of manufacturing and managing and Z the annual income. Thus Z/K denotes cost profitability. This ratio can be introduced as a function of the ROA. The quotient w = K/A, where A is the book value of assets, is known as the turnover ratio. Thus K = wA, so Z/K = Z/wA = ROA/w. Considerations of the FEA show that the Q can be interpreted as a function of these quantities [12], where: W is the cost of labor, A the end-of-period value of assets, r the cost profitability, z the turnover ratio, u the percentage of pay in respect to the employees' human capital H, and F the level of management, with F = F(r, z, u). The FEA is a kind of function that takes into account the criticism of the econometric modelling of production expressed by J. Robinson [13]. The arguments of the FEA represent economic values, which are measures of embodied capital. Therefore the Q appears in the most significant economic explanations, among others the trend of the exchange rate, which is discussed in more detail by M. Jędrzejczyk [14]. An important application of the Q is the wage equation of exchange, which determines the required level of credit in the money-goods economy [15].

Integrative Currency Area versus Euro Area

The theory of the ratio Q allows discussing the agenda of the integrative currency area. The ICA is an answer to the question of how to construct a stable currency area with the ability to integrate subsequent members. Then only one money unit would rule in this area, without any exchange rate. This is also a way to a global currency area, under the condition of reshaping the central bank function. Countries have their own economic systems, including systems of work remuneration. What is more, each country has its own level of labor productivity. The fundamental condition of ICA organization is maintaining (not declining) the ratio Q. A growth of this ratio is highly appreciated. The ratio Q is the most significant indicator for organizing and conducting the ICA. A currency area is an integrating zone provided it is able to accept countries with different ratios Q. Each country can be accepted to the ICA provided that, as a member of the ICA, it is able to maintain or eventually increase the Q. Therefore, a non-declining Q is the indispensable condition of membership. A decreasing Q means growing inflation, so wages are too high. An increasing Q means that employees may eventually get a rise.
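The bookkeeping behind the ratio Q and the wage and asset shares can be illustrated with a few lines of code; the GDP and labor-cost figures below are placeholders, not statistics for any particular country.

```python
# Minimal sketch of the labor productivity ratio Q = GDP / W and the split of
# GDP into the wage share (W) and the assets share (GDP_A = GDP - W).
# The figures are placeholders, not real national statistics.
def labor_productivity(gdp: float, labor_cost: float) -> float:
    """Q = GDP / W; the inverse of the labor share."""
    return gdp / labor_cost

gdp = 2_000.0          # GDP in billions of a national currency (placeholder)
labor_cost = 950.0     # total compensation of employees W (placeholder)

q = labor_productivity(gdp, labor_cost)
wage_share = 1.0 / q
assets_share = 1.0 - wage_share   # GDP_A / GDP

print(f"Q = {q:.2f}, wage share = {wage_share:.1%}, assets share = {assets_share:.1%}")

# ICA membership condition: Q must not decline from one period to the next
q_previous, q_current = 2.05, q
print("membership condition met" if q_current >= q_previous else "Q declined: wage growth too fast")
```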
Difficulties that are an experience of the EU rely on a different level of labor productivity, and of course existence of European Central Bank operating under a fault theory.The euro, as commonly known, was introduced at the beginning of 1999, fully since 2002, when there was a delegation of competence in the field of the single monetary policy to the European Central Bank.The 11 member states were joined in 2001 by Greece and in 2007 by Slovenia.On 1 st January 2008 two countries joined the euro area: Malta and Cyprus.While creating the euro area in countries where Q > 3.0 was justified by an almost equal and high labor productivity (assessment based on the situation in 2006), joining the Eurozone by countries with a productivity of approximately 2.5 is questionable, and with Q less than 2.0-inappropriate.This partition shows that Europe has several "speeds".Polish politicians do not want to belong to the Europe of the second speed, but their words are really just a form of putting a spin on reality. Currency areas can be created based on the current economic theory, as described by various prominent authors, adopting an independent central bank and its functions as the canon of economics.The most renowned author is R. Mundell [16], whose examples of creating a currency area (division of the North American continent into East and West), illustrating the theory of optimum currency areas, have long been capturing the imagination of many scholars.It is a fact that the euro area has been formed based on a group of economically homogeneous countries (Q > 3.3), which is why it was so successful at the beginning.The problems with Greece, Portugal and other South countries with the Q < 2.3 are caused by the much lower labor productivity in these countries.In such a country a free exchange rate is a market regulator that helps to keep economic balance.This regulator is set off and does not work in the euro area. Euro area requires the real economic sphere to achieve certain, determined by theory, parameters with respect to inflation, exchange rate behavior and productivity.That is why the current Euro zone does not integrate all European countries.What is more membership is not helpful in speeding up economic integra-tion.On the contrary, euro area is clearly discriminatory and exclusionary with respect to countries with productivity levels significantly lower than in the founding member states.These latter, with a similar level of labor productivity, were already economically integrated, so creating the euro area was a natural, formal undertaking.The difficulties started when countries with a lower rate of productivity were being These difficulties are enormous and insurmountable in the short term. 
Translation of Wages to a Common Monetary Unit

Creating the ICA requires a reorganization of central bank tasks and a proper translation of wages. The central bank no longer emits money by fiat; instead it pays compensation for public sector employees [17]. The main issue is then the adjustment of wages to the value of a new monetary unit. To apply the theory of translation, there must be clearly defined, measurable quantities in the statistical office of each country, and clear interpretations of every variable are necessary. The conversion method consists in using values expressed in the national currency: GDP and GDPE, i.e., GDP per one employed person. Parities of these values are taken into account. The starting point is the natural dependence

GDP = Q × W = Q × L × S,

where GDP is the real value of GDP, W is the sum of compensations (labor costs), Q is real labor productivity, L is the number of employees, and S is the average cost of labor.

Let the index M denote the selected country with a conversion factor equal to 1.0, which will be the reference for the ABC. Basic economic values will be adapted to it. For this purpose, it is worth indicating an appropriate country, for example Slovakia, whose monetary unit is the euro. It is assumed that the index P denotes Poland and the index M denotes Slovakia as the model country. Two equations are created:

Model country: GDP_M = Q_M × L_M × S_M
Poland: GDP_P = Q_P × L_P × S_P

Dividing the equation for Poland by the equation with the index M and reshaping, the average pay parity is computed:

S_P / S_M = (Q_M / Q_P) × (GDPE_P / GDPE_M),

where GDPE denotes real GDP per employee. It appears that this parity is a product of two parities, namely those of the variables Q and GDPE. Computing this parity we obtain the coefficient for translating compensations; the translation formula divides compensation expressed in the national currency by this coefficient and leads to the equivalent value of earnings (E is the name of the money unit in the ISW). To illustrate the computations, the translation of compensations for a group of countries is presented below. In the calculations, Slovakia was chosen as the model country, whose currency is currently the euro, so there is a common understanding of the value of this unit. Of course, after the creation of the ABC, the original name of the currency will be introduced. Temporarily, the unit of the ABC currency is denoted by the letter E. Table 2 presents a selected set of countries, future members of the ABC, and contains basic quantities from databases; the last column contains the calculation of the indicator Q. Table 3 presents the main results of the calculations, that is, the determination of the compensation conversion rate and the determination of wages on the assumption that Slovakia is the model country.

A subsequent important agenda of every country is its system of compensation for labor. There are many reasons to pay attention to wages, but in the case of the ICA the authorities have a particular responsibility because of the necessity of labor productivity control. The theory of human capital measurement and fair compensation for labor determines wages as a percentage of the employee's capital [18]. What percentage? This percentage is equal to the spontaneous and random rate of human capital dispersion s. The general model of capital explains that the mean value of the random variable s is equal to the constant p, i.e., p = E(s) = 0.08 [1/year] [19]. Using this relation, fair compensation follows the formula W = p × H(p), where p = 0.08 denotes the constant of potential growth, H(p) denotes the employee's human capital, and W denotes fair remuneration.
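As a rough illustration of the translation idea, the sketch below computes the average-pay parity S_P/S_M = (Q_M/Q_P) × (GDPE_P/GDPE_M) and uses it as the conversion coefficient. The input figures, and the assumption that the coefficient is taken directly as this parity, are illustrative only; they do not reproduce Tables 2 or 3.

```python
# Sketch of the wage-translation idea, assuming the coefficient equals the
# average-pay parity implied by GDP = Q * L * S. All figures are hypothetical.

def translation_coefficient(gdp_m, employees_m, wages_m,
                            gdp_p, employees_p, wages_p):
    """Parity S_P / S_M = (Q_M / Q_P) * (GDPE_P / GDPE_M)."""
    q_m, q_p = gdp_m / wages_m, gdp_p / wages_p          # labor productivities
    gdpe_m, gdpe_p = gdp_m / employees_m, gdp_p / employees_p  # GDP per employee
    return (q_m / q_p) * (gdpe_p / gdpe_m)

# Model country expressed in the common unit E, candidate country in its
# national currency (illustrative magnitudes only).
D = translation_coefficient(gdp_m=90e9,   employees_m=2.5e6, wages_m=30e9,
                            gdp_p=2000e9, employees_p=16e6,  wages_p=820e9)

monthly_wage_national = 4500.0                  # hypothetical wage, national currency
monthly_wage_in_E = monthly_wage_national / D   # equivalent value in the common unit

print(f"D = {D:.3f}, translated wage = {monthly_wage_in_E:.0f} E")
```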
To illustrate, let us consider the theoretical minimum wage. From the theoretical (not legal) point of view, such a pay level is relevant to a teenager of 17 or 18 years old who starts his or her first job. The human capital of such a worker includes neither the capital of professional education nor experience capital; its value represents only the capitalized cost of living. Poland is a country that strives to achieve 100% consistency between the theoretical and the legal minimum wage. The legal minimum wage for 2018 is set at 2180 PLN, so the cost of labor is estimated at 2180 × 1.2 = 2616 PLN. Thus the percentage of consistency of the legal minimum pay reaches 93%. This would be a success provided the cost of living were constant. Empirical research shows that the level of consistency C depends on labor productivity Q [12]. Presently, in line with J. Renkas [20], this relation is described by a formula expressing C as a function of Q. Assuming that the Q for Poland will be 2.10 in 2018, the formula gives C = 85.34%. Thus, the legal minimum wage for 2018 may be slightly too high for the Polish economy. But it is a choice: a country can strive to increase the Q or to increase remunerations.

Inflation-Deflation Agenda

Another essential problem in forming the ICA is the inflation-deflation agenda. The problem of inflation is already theoretically solved by the restriction on labor productivity. By definition Q = GDP/W, so the condition of a non-declining Q means that compensations must be controlled and limited. The size of credit is also under control, as discussed in the above-cited paper. Deflation, however, still needs discussion. As is commonly known, deflation is a falling consumer price index, so deflation occurs when the general level of prices falls. R. Bootle [21] points out that in Japan the "downward trend of consumer prices has meant that consumer price level in 2003 is the same as it was a decade earlier". It is obvious that increasing labor productivity can lead to falling prices. Therefore some economists draw a distinction between "bad deflation" and "good deflation". This classification is not a canon, since an increase in labor productivity usually leads to a rise in wages, so deflation should not happen; pressure for wage rises usually exceeds labor productivity growth. Deflation could be seen as a normal phenomenon in developed countries working under the monetary paradigm. Let us assume that a country has a fair system of work compensation. Fair employee earnings are taxed, so real earnings become unfair (too small). They do not generate enough demand, and many economic troubles come into existence; Say's law is then brought to a halt. In the ICA, taxation concerns the part of salaries which exceeds the fair level. The central bank is the payer of remuneration for public sector employees, so this stream of money equalizes the stream of value embodied in products and services. Deflation cannot come into existence, while inflation is kept under control by the ratio Q, and Say's law acts correctly. Among the members of the ICA there are no countries that finance their budgets by disproportionate emissions of money (it is impossible), without a close relationship with economic performance.
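A small sketch of the fair-pay relation W = p × H(p) and of the consistency arithmetic quoted above. The human-capital value H below is a hypothetical input chosen so that the ratio roughly reproduces the 93% figure; it is not a number taken from the paper.

```python
# Sketch of the fair-pay relation W = p * H(p) with p = 0.08 per year, and of
# the consistency check quoted in the text. H is a hypothetical input.

p = 0.08            # constant of potential growth [1/year]
H = 420_000.0       # hypothetical capitalized cost of living (PLN)

fair_annual_labor_cost = p * H                 # W = p * H(p)
fair_monthly_labor_cost = fair_annual_labor_cost / 12

# Figures quoted in the text for Poland, 2018:
legal_minimum_wage = 2180.0                    # PLN per month
legal_labor_cost = legal_minimum_wage * 1.2    # = 2616 PLN per month

consistency = legal_labor_cost / fair_monthly_labor_cost
print(f"fair monthly labor cost = {fair_monthly_labor_cost:.0f} PLN")
print(f"consistency = {consistency:.0%}")
```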
A money-goods economy needs credit. This agenda was elaborated earlier [22], where a formula determining the necessary amount of credit is derived. The formula explains that the needed credit depends on the labor cost stream W and the real work productivity Q. The credit is reduced by the poverty level (a) as well as by changes in savings and detriment funds (d). An employee is regarded as poor if he or she does not have any savings. The greater the productivity of work (which depends strongly on the value of assets), the greater are the possibilities and requirements of crediting. The formula indicates only the main macroeconomic variables that have an impact on the requirements of generating credit by commercial banks. Apart from these variables there is a set of constraints needed for providing the safety of a commercial bank, which seriously limits lending. Thus the key explanation of deflation lies in the drawbacks of monetarism and the incorrect function of central banks in the economy. The right remedy is the economics of labor, as discussed in many earlier papers [23], where the central bank is the payer of remuneration to public sector employees (PSE) and taxes are low.

Making the ABC Union Work

How to make the ABC Union work is the next question. It is worth noting that the creation and development of the ICA is not necessarily equivalent to total freedom of movement for the population. An agreement about free movement between countries is not a mandatory requirement of ICA formation. The ICA is an economic tool for striving towards a welfare state, which can be achieved by wise, hardy, and productive labor with the help of an encouraging international environment. Countries can, however, establish free movement of people by bilateral agreements. The ICA is free from exchange rates and all the intellectual challenges related to them, such as exchange rate movements and exchange rate policy. The introduction of the ICA must be accomplished in steps that are roughly described in Table 4, where CB, LP, and MU denote respectively the central bank, labor productivity, and the ABC area monetary unit.

The first step relies on a country implementing a new financial system with the purpose of being able to use the self-financing of labor for the PSE. This means that the main task of the central bank changes to being the payer for the PSE, and cash money emission no longer exists. The country then experiences many benefits, such as a lack of budget deficits and the possibility of declining taxes. The second step starts at the end of negotiations for the treaty of the political agreement creating the ICA, i.e., relinquishing the exchange rates applied during economic exchange; the parity of the labor productivity ratio is the chief determinant of the fair exchange ratio. Among the directives in Table 4 is the issuing of a directive on the control of commercial bank liquidity and solvency: since the central bank no longer emits money, a commercial bank must maintain liquidity and solvency on its own, and the procedures ought to be precise and effective.
Summary and Some Ending Remarks

In this study, algorithms for translating remuneration for work into a common monetary unit introduced in the integrating currency area are presented. The course of proceedings in organizing and implementing the operation of the currency area was also indicated. The ICA is a consequence of economic thought referring to the economics of labor, which in effect changes the economic system for the better. The money-goods economy is a great achievement under the condition that the triad capital-labor-money is completely recognized and respected. Then human capital becomes a means of increasing value, that is to say, of creating well-being.

The integrative currency area concept yields insight into the duality of capital and labor. It reveals the self-financing of labor and brings to light new possibilities of a money-goods economy. An economy without a budget deficit and with a rectified income tax is a real prospect. The integrative currency area leads, in the final effect, to a global currency area, that is to say, to a world with new qualities and more hope that globalization will lead to peaceful coexistence and balanced development. In this project, the future world is without a reserve currency system. J. Stiglitz and B. Greenwald [27] introduced a precise analysis conducted in the old financial framework with the opinion that "… a new global reserve system is absolutely essential, if we are to restore the global economy to sustained prosperity and stability. But achieving this, too, will not be easy …". In a world with merely one currency, a government would likely hold surplus money in a portfolio of bonds of different countries, taking care of liquidity and gaining additional earnings.

Therefore, the correct solution of the reserve currency agenda would be the introduction of the ICA worldwide, which would have only one money unit, and the problem of the reserve currency would no longer exist. An individual doing the same job would earn different monthly pay in different countries. In a poor country it could be, let us say, 200 MU, in Poland it could be 750 MU, but in the USA it could reach 1300 MU. But the process of convergence is already set in motion. Each country would enjoy a better future, without the danger that the authorities would establish a crazy exchange rate, for example "one peso for one dollar" or "thirty UAH for one euro", while introducing so-called "liberal reforms".

Poland and other Central European countries have no sources of power, as J. Bartosiak [28], a well-known Polish geopolitician, says. Poland should strive to form some source of power by reforming the architecture of the Polish space, tending to open ways for trade from North to South through Slovakia and Ukraine, among others. The presented considerations show that Poland could be a designer of the ICA in Central Europe and, together with a group of neighboring countries, become a centre of force and decisions. In my belief, a union of countries whose economies have no budget deficit, with taxes limited mainly to financing assets, attracts the thoughtful attention of the world, whereas building an organization against other nations is an unstable and unreliable basis for a future geopolitical organization.
The integrative currency area (ICA) is contrasted here with the euro currency area. The formation of the ICA requires solving the problem of how to implement one money unit in countries with different labor productivity, such as Poland and its neighbouring countries. In such a case an employee working, for example, in Hungary will directly buy goods in Poland for his native earnings and vice versa. This task seems simple: it is enough to change the names of the Polish and Hungarian money units to a common one. If at the moment Hungary has greater labor productivity and higher compensation, then Hungarian citizens have greater purchasing power in Poland. Polish citizens can enjoy higher demand and an opportunity for faster development. Poland cannot in return raise wages, since the labor productivity ratio should not decline, but the process of economic equalization has already started, supported by the tight control of labor productivity. Table 3 gives the information that Polish wages and assets at the beginning of 2017 should be divided by 3.826, whereas the euro exchange rate is currently 4.19 PLN. What is positively surprising are the similar values of the calculated wages of the ABC states. However, this is natural: workers such as construction workers, transport workers, teachers, police officers, and civil servants mostly carry out similar work in similar conditions, generating GDP measured in national currencies. By avoiding the adulteration of the economic image caused by the crude application of exchange rates for the conversion of wages, the calculations show the existence of favorable conditions for the creation of a common currency area in Central-East Europe.

Table 2. Basic data for computing coefficient D (2016).
Table 3. Coefficient D and translation of wages.
Table 4. The set of fundamental directives that determine the way of conducting the ICA.
6,757.8
2018-06-28T00:00:00.000
[ "Economics" ]
An End-to-end Model for Entity-level Relation Extraction using Multi-instance Learning

We present a joint model for entity-level relation extraction from documents. In contrast to other approaches, which focus on local intra-sentence mention pairs and thus require annotations on mention level, our model operates on entity level. To do so, a multi-task approach is followed that builds upon coreference resolution and gathers relevant signals via multi-instance learning with multi-level representations combining global entity and local mention information. We achieve state-of-the-art relation extraction results on the DocRED dataset and report the first entity-level end-to-end relation extraction results for future reference. Finally, our experimental results suggest that a joint approach is on par with task-specific learning, though more efficient due to shared parameters and training steps.

Introduction

Information extraction addresses the inference of formal knowledge (typically, entities and relations) from text. The field has recently experienced a significant boost due to the development of neural approaches (Zeng et al., 2014; Zhang and Wang, 2015; Kumar, 2017). This has led to two shifts in research: First, while earlier work has focused on sentence-level relation extraction (Hendrickx et al., 2010; Han et al., 2018; Zhang et al., 2017), more recent models extract facts from longer text passages (document-level). This enables the detection of inter-sentence relations that may only be implicitly expressed and require reasoning across sentence boundaries. Current models in this area do not rely on mention-level annotations and aggregate signals from multiple mentions of the same entity. The second shift has been towards multi-task learning: While earlier approaches tackle entity mention detection and relation extraction with separate models, recent joint models address these tasks at once (Bekoulis et al., 2018; Nguyen and Verspoor, 2019). This does not only improve simplicity and efficiency, but is also commonly motivated by the fact that tasks can benefit from each other: For example, knowledge of two entities' types (such as person+organization) can boost certain relations between them (such as ceo of).

Figure 1: Example document about the Portland Golf Club (Portland, Oregon) and the 1947 Ryder Cup it hosted, used to illustrate entity-level relation extraction.

We follow this line of research, and present JEREX ("Joint Entity-Level Relation Extractor"), a novel approach for joint information extraction. JEREX is to our knowledge the first approach that combines a multi-task model with entity-level relation extraction: In contrast to previous work, our model jointly learns relations and entities without annotations on mention level, but extracts document-level entity clusters and predicts relations between those clusters using a multi-instance learning (MIL) (Dietterich et al., 1997; Riedel et al., 2010; Surdeanu et al., 2012) approach. The model is trained jointly on mention detection, coreference resolution, entity classification and relation extraction (Figure 1).
While we follow best practices for the first three tasks, we propose a novel representation for relation extraction, which combines global entity-level representations with localized mention-level ones. We present experiments on the DocRED (Yao et al., 2019) dataset for entity-level relation extraction. Though it is arguably simpler compared to recent graph propagation models (Nan et al., 2020) or special pre-training (Ye et al., 2020), our approach achieves state-of-the-art results. We also report the first results for end-to-end relation extraction on DocRED as a reference for future work. In ablation studies we show that (1) combining global and local representations is beneficial, and (2) joint training appears to be on par with separate per-task models.

Related Work

Relation extraction is one of the most studied natural language processing (NLP) problems to date. Most approaches focus on classifying the relation between a given entity mention pair. Here various neural network based models, such as RNNs (Zhang and Wang, 2015), CNNs (Zeng et al., 2014), recursive neural networks (Socher et al., 2012) or Transformer-type architectures (Wu and He, 2019) have been investigated. However, these approaches are usually limited to local, intra-sentence, relations and are not suited for document-level, inter-sentence, classification. Since complex relations require the aggregation of information distributed over multiple sentences, document-level relation extraction has recently drawn attention (e.g. Quirk and Poon 2017; Verga et al. 2018; Gupta et al. 2019; Yao et al. 2019). Still, these models rely on specific entity mentions to be given. While progress in the joint detection of entity mentions and intra-sentence relations has been made (Gupta et al., 2016; Bekoulis et al., 2018; Luan et al., 2018), the combination of coreference resolution with relation extraction for entity-level reasoning in a single, jointly-trained, model is widely unexplored.

Document-level Relation Extraction

Recent work on document-level relation extraction directly learns relations between entities (i.e. clusters of mentions referring to the same entity) within a document, requiring no relation annotations on mention level. To gather relevant information across sentence boundaries, multi-instance learning has successfully been applied to this task. In multi-instance learning, the goal is to assign labels to bags (here, entity pairs), each containing multiple instances (here, specific mention pairs). Verga et al. (2018) apply multi-instance learning to detect domain-specific relations in biological text. They compute relation scores for each mention pair of two entity clusters and aggregate these scores using a smooth max-pooling operation. Ye et al. (2020) introduce CorefRoBERTa, a RoBERTa variant that is pre-trained on detecting co-referring phrases. They show that replacing RoBERTa with CorefRoBERTa improves performance on DocRED. All these models have in common that entities and their mentions are both assumed to be given. In contrast, our approach extracts mentions, clusters them to entities, and classifies relations jointly.

Joint Entity Mention and Relation Extraction

Prior joint models focus on the extraction of mention-level relations in sentences. Here, most approaches detect mentions by BIO (or BILOU) tagging and pair detected mentions for relation classification, e.g. (Gupta et al., 2016; Bekoulis et al., 2018; Nguyen and Verspoor, 2019; Miwa and Bansal, 2016). However, these models are not able to detect relations between overlapping entity mentions.
Recently, so-called span-based approaches (Lee et al., 2017) were successfully applied to this task (Luan et al., 2018; Eberts and Ulges, 2019): By enumerating each token span of a sentence, these models handle overlapping mentions by design. Sanh et al. (2019) train a multi-task model on named entity recognition, coreference resolution and relation extraction. By adding coreference resolution as an auxiliary task, such models can propagate information through coreference chains. Still, these models rely on mention-level annotations and only detect intra-sentence relations between mentions, whereas our model explicitly constructs clusters of co-referring mentions and uses these clusters to detect complex entity-level relations in long documents using multi-instance reasoning.

Approach

JEREX processes documents containing multiple sentences and extracts entity mentions, clusters them to entities, and outputs types and relations on entity level. JEREX consists of four task-specific components, which are based on the same encoder and mention representations, and are trained in a joint manner. An input document is first tokenized, yielding a sequence of n byte-pair encoded (BPE) (Sennrich et al., 2016) tokens. We then use the pre-trained Transformer-type network BERT (Devlin et al., 2019) to obtain a contextualized embedding sequence (e_1, e_2, ..., e_n) of the document. Since our goal is to perform end-to-end relation extraction, neither entities nor their corresponding mentions in the document are known in inference.

Model Architecture

We suggest a multi-level model: First, we localize all entity mentions in the document (a) by a span-based approach (Lee et al., 2017). After this, detected mentions are clustered into entities by coreference resolution (b). We then classify the type (such as person or company) of each entity cluster by a fusion over local mention representations (entity classification) (c). Finally, relations between entities are extracted by a reasoning over mention pairs (d). The full model architecture is illustrated in Figure 2.

(a) Entity Mention Localization

Here our model performs a search over all document token subsequences (or spans). In contrast to BIO/BILOU-based approaches for entity mention localization, span-based approaches are able to detect overlapping mentions. Let s := (e_i, e_{i+1}, ..., e_{i+k}) denote an arbitrary candidate span. Following Eberts and Ulges (2019), we first obtain a span representation by max-pooling the span's token embeddings:

e(s) := max-pool(e_i, e_{i+1}, ..., e_{i+k}) (1)

Our mention classifier takes the span representation e(s) as well as a span size embedding w^s_{k+1} (Lee et al., 2017) as meta information. We perform binary classification and use a sigmoid activation to obtain a probability for s to constitute an entity mention:

ŷ_s := σ(FFNN_s(e(s) • w^s_{k+1})) (2)

where • denotes concatenation and FFNN_s is a two-layer feedforward network with an inner ReLU activation. Span classification is carried out on all token spans up to a fixed length L. We apply a filter threshold α_s on the confidence scores, retaining all spans with ŷ_s ≥ α_s and leaving a set S of spans supposedly constituting entity mentions.

(b) Coreference Resolution

Entity mentions referring to the same entity (e.g. "Elizabeth II." and "the Queen") can be scattered throughout the input document. To later extract relations on entity level, local mentions need to be grouped to document-level entity clusters by coreference resolution.
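A minimal sketch (not the released JEREX implementation) of the span-based mention classifier described above: a max-pooled span embedding is concatenated with a span-size embedding, passed through a two-layer FFNN with sigmoid output, and filtered by the threshold α_s. Dimensions, names, and the toy input are illustrative assumptions, and the untrained weights are for shape-checking only.

```python
# Illustrative span-based mention localization: max-pooled span embedding +
# span-size embedding -> 2-layer FFNN -> sigmoid probability -> threshold.
import torch
import torch.nn as nn

class MentionClassifier(nn.Module):
    def __init__(self, hidden=768, size_emb=25, max_span_len=10):
        super().__init__()
        self.size_embedding = nn.Embedding(max_span_len + 1, size_emb)
        self.ffnn = nn.Sequential(
            nn.Linear(hidden + size_emb, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, token_embeddings, start, length):
        # e(s): max-pool the span's contextualized token embeddings.
        span_repr = token_embeddings[start:start + length].max(dim=0).values
        size_emb = self.size_embedding(torch.tensor(length))
        logit = self.ffnn(torch.cat([span_repr, size_emb]))
        return torch.sigmoid(logit)   # probability that the span is a mention

# Toy usage: enumerate all spans up to length L over a fake document encoding.
embeddings = torch.randn(30, 768)     # stand-in for BERT output, 30 tokens
clf, alpha_s, L = MentionClassifier(), 0.85, 10
mentions = [(i, k) for i in range(30) for k in range(1, L + 1)
            if i + k <= 30 and clf(embeddings, i, k).item() >= alpha_s]
```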
We use a simple mention-pair (Soon et al., 2001) model: Our component classifies pairs (s 1 , s 2 ) ∈ S×S of detected entity mentions as coreferent or not, by combining the span representations e(s 1 ) and e(s 2 ) with an edit distance embedding w c d : We compute the Levenshtein distance (Levenshtein, 1966) between spans d := D(s 1 , s 2 ) and use a learned embedding w c d . A mention pair representation x c is constructed by concatenation: Similar to span classification, we conduct binary classification using a sigmoid activation, obtaining a similarity score between the two mentions: where FFNN c follows the same architecture as FFNN s . We construct a similarity matrix C ∈ R m×m (with m referring to the document's overall number of mentions) containing the similarity scores between every mention pair. By applying a filter threshold α c , we cluster mentions using complete linkage (Müllner, 2011), yielding a set E containing clusters of entity mentions. We refer to these clusters as entities or entity clusters in the following. (c) Entity Classification Next, we map each entity to a type such as location or person: We first fuse the mention representations of an entity cluster {s 1 , s 2 , ..., s t } ∈ E by max-pooling: x e := max-pool(e(s 1 ), e(s 2 ), ..., e(s t )) (5) Entity classification is then carried out on the entity representation x e , allowing the model to draw information from mentions spread across different parts of the document. x e is fed into a softmax classifier, yielding a probability distribution over the entity types: We assign the highest scored type to the entity. (d) Relation Classification Our final component assigns relation types to pairs of entities. Note that the directionality, i.e. which entity constitutes the head/tail of the relation, needs to be inferred, and that the input document can express multiple relations between different mentions of the same entity pair. Let R denote a set of pre-defined relation types. The relation classifier processes each entity pair (e 1 , e 2 ) ∈ E×E, estimating which, if any, relations from R are expressed between these entities. To do so, we score every candidate triple (e 1 ,r i ,e 2 ), expressing that e 1 (as head) is in relation r i with e 2 (as tail). We design two types of relation classifiers: A global relation classifier, serving as a baseline, which consumes the entity cluster representations x e , and a multi-instance classifier, which assumes that certain entity mention pairs support specific relations and synthesizes this information into an entity-pair level representation. Global Relation Classifier (GRC) The global classifier builds upon the max-pooled entity cluster representations x e 1 and x e 2 of an entity pair (e 1 , e 2 ). We further embed the corresponding entity types (w e 1 / w e 2 ), which was shown to be beneficial in prior work (Yao et al., 2019), and compute an entity-pair representation by concatenation: This representation is fed into a 2-layer FFNN (similar to FFNN s ), mapping it to the number of relation types #R. The final layer features sigmoid activations for multi-label classification and assigns any relation type exceeding a threshold α r : Multi-instance Relation Classifier (MRC) In contrast to the global classifier (GRC), the multiinstance relation classifier operates on mention level: Since only entity-level labels are available, we treat entity mention pairs as latent variables and estimate relations by a fusion over these mention pairs. 
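The sketch below illustrates the mention-pair coreference step just described: a pairwise similarity score (here a stand-in for the trained FFNN_c classifier) is converted to a distance, and mentions are grouped by complete-linkage clustering with the threshold α_c. The scorer and all inputs are toy stand-ins, not the trained model.

```python
# Mention-pair scoring followed by complete-linkage clustering into entities.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def pair_similarity(repr_a, repr_b, edit_distance):
    """Stand-in for sigmoid(FFNN_c([e(s1); e(s2); w_d])): higher = more coreferent."""
    cosine = float(repr_a @ repr_b /
                   (np.linalg.norm(repr_a) * np.linalg.norm(repr_b) + 1e-9))
    return 1.0 / (1.0 + np.exp(-(3.0 * cosine - 0.5 * edit_distance)))

# Toy mention representations and pairwise edit distances between surface forms.
mention_reprs = [np.random.randn(64) for _ in range(5)]
edit_dists = np.random.randint(0, 6, size=(5, 5))

m = len(mention_reprs)
dist = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        sim = pair_similarity(mention_reprs[i], mention_reprs[j], edit_dists[i, j])
        dist[i, j] = dist[j, i] = 1.0 - sim   # convert similarity to a distance

alpha_c = 0.85
clusters = fcluster(linkage(squareform(dist), method="complete"),
                    t=1.0 - alpha_c, criterion="distance")
print("entity cluster id per mention:", clusters)
```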
For any pair of entity clusters e 1 ={s 1 1 , s 1 2 , ..., s 1 t 1 } and e 2 ={s 2 1 , s 2 2 , ..., s 2 t 2 }, we compute a mention-pair representation for any (s 1 , s 2 )∈e 1 ×e 2 . This representation is obtained by concatenating the global entity embeddings (Equation (5)) with the mentions' local span representations (Equation (1)) Further, as we expect close-by mentions to be stronger indicators of relations, we add meta embeddings for the distances d s ,d t between the two mentions, both in sentences (d s ) and in tokens (d t ). In addition, following Eberts and Ulges (2019), the max-pooled context between the two mentions (c(s 1 , s 2 )) is added. This localized context provides a more focused view on the document and was found to be especially beneficial for long, and therefore noisy, inputs: u (s 1 ,s 2 ):=u(s 1 ,s 2 ) • c(s 1 ,s 2 ) • w r ds • w r dt (10) This mention-pair representation is mapped by a single feed-forward layer to the original token embedding size (768): These focused representations are then combined by max-pooling: x r =max-pool({u (s 1 , s 2 )|s 1 ∈e 1 ,s 2 ∈e 2 }) (12) Akin to GRC, we concatenate x r with entity type embeddings w e 1 /w e 2 and apply a two-layer FFNN (again, similar to FFNN s ). Note that for both classifiers (GRC/MRC), we need to score both (s 1 , r i , s 2 ) and (s 2 , r i , s 1 ) to infer the direction of asymmetric relations. Training We perform a supervised multi-task training, whereas each training document features ground truth for all four subtasks (mention localization, coreference resolution, as well as entity and relation classification). We optimize the joint loss of all four components: L := β s · L s + β c · L c + β e · L e + β r · L r (13) L s , L c and L r denote the binary cross entropy losses of the span, coreference and relation classifiers. We use a cross entropy loss (L e ) for the entity classifier. A batch is formed by drawing positive and negative samples from a single document for all components. We found such a singlepass approach to offer significant speed-ups both in learning and inference: • Entity mention localization: We utilize all ground truth entity mentions S gt of a document as positive training samples, and sample a fixed number N s of random non-mention spans up to a pre-defined length L s as negative samples. Note that we only train and evaluate on the full tokens according to the dataset's tokenization, i.e. not on byte-pair encoded tokens, to limit computational complexity. Also, we only sample intra-sentence spans as negative samples. Since we found intra-mention spans to be especially challenging ("New York" versus "New York City"), we sample up to Ns 2 intra-mention spans as negative samples. • Coreference resolution: The coreference classifier is trained on all span pairs drawn from ground truth entity clusters E gt as positive samples. We further sample a fixed number N c of pairs of random ground truth entity mentions that do not belong to the same cluster as negative samples. • Entity classification: Since the entity classifier only receives clusters that supposedly constitute an entity during inference, it is trained on all ground truth entity clusters of a document. • Relation classification: Here we use ground truth relations between entity clusters as positive samples and N r negative samples drawn from E gt ×E gt that are unrelated according to the ground truth. Each component's loss is obtained by averaging over all samples. 
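A compact sketch of the multi-instance fusion described above: each mention pair of an entity pair is represented by concatenating the global entity representations, the local span representations, the localized context, and distance meta embeddings, projected by a single layer, and all pairs are combined by max-pooling (following Equations 9 to 12). Dimensions and the random inputs are illustrative only, not the released model.

```python
# Illustrative multi-instance relation representation for one entity pair.
import torch
import torch.nn as nn

hidden, meta = 768, 25
project = nn.Linear(5 * hidden + 2 * meta, hidden)   # single layer back to 768

def mention_pair_repr(x_e1, x_e2, e_s1, e_s2, ctx, w_ds, w_dt):
    """[x_e1; x_e2; e(s1); e(s2); c(s1,s2); w_ds; w_dt] projected to 768 dims."""
    return project(torch.cat([x_e1, x_e2, e_s1, e_s2, ctx, w_ds, w_dt]))

# Toy example: an entity pair with 2 x 3 mention pairs, fused by max-pooling.
x_e1, x_e2 = torch.randn(hidden), torch.randn(hidden)   # global entity reps
pairs = []
for _ in range(2):        # mentions of the head entity
    for _ in range(3):    # mentions of the tail entity
        e_s1, e_s2, ctx = torch.randn(hidden), torch.randn(hidden), torch.randn(hidden)
        w_ds, w_dt = torch.randn(meta), torch.randn(meta)  # distance meta embeddings
        pairs.append(mention_pair_repr(x_e1, x_e2, e_s1, e_s2, ctx, w_ds, w_dt))

x_r = torch.stack(pairs).max(dim=0).values   # entity-pair representation
```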
We learn the weights and biases of sub-component specific layers as well as the Table 1: Test set evaluation results of our multi-level end-to-end system JEREX on DocRED (using the end-to-end split). We either train the model jointly on all four sub-components (left) or arrange separately trained models in a pipeline (right) ( * joint results are for MRC except for the last row). meta embeddings during training. BERT is finetuned in the process. Experiments We evaluate JEREX on the DocRED dataset (Yao et al., 2019). DocRED ist the most diverse relation extraction dataset to date (6 entity and 96 relation types). It includes over 5,000 documents, each consisting of multiple sentences. According to Yao et al. (2019), DocRED requires multiple types of reasoning, such as logical or common-sense reasoning, to infer relations. Note that previous work only uses DocRED for relation extraction (which equals our relation classifier component) and assumes entities to be given (e.g. Wang et al. 2019;Nan et al. 2020). On the other hand, DocRED is exhaustively annotated with mentions, entities and entity-level relations, making it suitable for end-to-end systems. Therefore, we evaluate JEREX both as a relation classifier (to compare it with the state-of-the-art) and as a joint model (as reference for future work on joint entity-level relation extraction). While prior joint models focus on mention-level relations (e.g. Gupta et al. 2016;Bekoulis et al. 2018;Chi et al. 2019), we extend the strict evaluation setting to entity level: A mention is counted as correct if its span matches a ground truth mention span. An entity cluster is considered correct if it matches the ground truth cluster exactly and the corresponding mention spans are correct. Likewise, an entity is considered correct if the cluster as well as the entity type matches a ground truth entity. Lastly, we count a relation as correct if its argument entities as well as the relation type are correct. We measure precision, recall and micro-F1 for each sub-task and report micro-averaged scores. Dataset split The original DocRED dataset is split into a train (3,053 documents), dev (1,000) and test (1,000) set. However, test relation labels are hidden and evaluation requires the submission of results via Codalab. To evaluate end-to-end systems, we form a new split by merging train and dev. We randomly sample a train (3,008 documents), dev (300 documents) and test set (700 documents). Note that we removed 45 documents since they contained wrongly annotated entities with mentions of different types. Table 2 contains statistics of our end-to-end split. We release the split as a reference for future work. Hyperparameters We use BERT BASE (cased) 2 for document encoding, an attention-based language model pre-trained on English text (Devlin et al., 2019). Hyperparameters were tuned on the end-to-end dev set: We adopt several settings from (Devlin et al., 2019), including the usage of the Adam Optimizer with a linear warmup and linear decay learning rate schedule, a peak learning rate of 5e-5 3 and application of dropout with a rate of 0.1 throughout the model. We set the size of meta embeddings (w s , w c , w e , w r ds , w r dt ) to 25 and the number of epochs to 20. Performance is measured once per epoch on the dev set, out of which the best performing model is used for the final evaluation on the test set. 
A grid search is performed for the mention, coreference and relation filter thresholds (α_s = 0.85, α_c = 0.85, α_r(GRC) = 0.55, α_r(MRC) = 0.6) with a step size of 0.05. The number of negative samples (N_s = N_c = N_r = 200) and sub-task loss weights (β_s = β_c = β_r = 1, β_e = 0.25) are manually tuned. Note that some documents in DocRED exceed the maximum context size of BERT (512 BPE tokens). In this case we train the remaining position embeddings from scratch.

End-to-End Relation Extraction

JEREX is trained and evaluated on the end-to-end dataset split (see Table 2). We perform 5 runs for each experiment and report the averaged results. To study the effects of joint training, we experiment with two approaches: (a) All four sub-components are trained jointly in a single model as described in Section 3.2 and (b) we construct a pipeline system by training each task separately and not sharing the document encoder. Table 1 illustrates the results for the joint (left) and pipeline (right) approach. As described in Section 3, each sub-task builds on the results of the previous component during inference. We observe the biggest performance drop for the relation classification task, underlining the difficulty in detecting document-level relations. Furthermore, the multi-instance based relation classifier (MRC) outperforms the global relation classifier (GRC) by about 2.4% F1 score. We reason that the fusion of local evidences by multi-instance learning helps the model to focus on appropriate document sections and alleviates the impact of noise in long documents. Moreover, we found the multi-instance selection to offer good interpretability, usually selecting the most relevant instances (see Figure 3 for examples). Overall, we observe a comparable performance by joint training versus using the pipeline system. This is also confirmed by the results reported in Table 4 (single-task performance of the joint model (left) and separate models (right) on the end-to-end split; *joint results are for MRC except for the last row), where we evaluate the four components independently, i.e. each component receives ground truth samples from the previous step in the hierarchy (e.g. ground truth mentions for coreference resolution). Again, we observe the performance difference between the joint and pipeline model to be negligible. This shows that it is not necessary to build separate models for each task, which would result in training and inference overhead due to multiple expensive BERT passes. Instead, a single neural model is able to jointly learn all tasks necessary for document-level relation extraction, therefore easing training, inference and maintenance.

Relation Extraction

We also compare our model with the state-of-the-art on DocRED's relation extraction task. Here, entity clusters are assumed to be given. We train and test our relation classification component on the original DocRED dataset split. Since test set labels are hidden, we submit the best out of 5 runs on the development set via CodaLab to retrieve the test set results. Table 3 includes previously reported results from current state-of-the-art models. Note that our global classifier (GRC) is similar to the baseline by (Yao et al., 2019). However, we replace mention span averaging with max-pooling and also choose max-pooling to aggregate mentions into an entity representation, yielding considerable improvement over the baseline. Using the multi-instance classifier (MRC) instead further improves performance by about 4.5%. Here our model also outperforms complex methods based on graph attention networks (Nan et al., 2020) or specialized pre-training (Ye et al., 2020), achieving a new state-of-the-art result on DocRED's relation extraction task.

Figure 3: Two example documents of the DocRED dataset (one about the character Queequeg in the novel Moby-Dick, one about the video game Shadowrun: Hong Kong). Highlighted are relations "creator" between "Queequeg" and "Herman Melville" (top) and "developer" between "Shadowrun Returns" and "Harebrained Schemes" (bottom). Bordered pairs are the top selections of the multi-instance relation classifier.

Ablation Studies

We perform several ablation studies to evaluate the contributions of our proposed multi-instance relation classifier enhancements: We remove either the global entity representations x_e1, x_e2 (Equation 5) (a) or the localized context representation c(s_1, s_2) (Equation 10) (b). The performance drops by about 0.66% F1 score when global entity representations are omitted, indicating that multi-instance reasoning benefits from the incorporation of entity-level context. When the localized context representation is omitted, performance is reduced by about 0.90%, confirming the importance of guiding the model to relevant input sections. Finally, we limit the model to fusing only intra-sentence mention pairs (c). In case no such instance exists for an entity pair, the closest (in token distance) mention pair is selected. Obviously, this modification reduces computational complexity and memory consumption, especially for large documents. Nevertheless, while we observe intra-sentence pairs to cover most relevant signals, exhaustively pairing all mentions of an entity pair yields an improvement of 0.67%.

Table 5: Ablation studies for the multi-level relation classifier (MRC) using the end-to-end split. We either remove global entity representations (a), the localized context (b) or only use intra-sentence mention pairs (c). The results are averaged over 5 runs.
Model | F1
Relation Classification (MRC) | 59.76
- (a) Entity Representations | 59.10
- (b) Localized Context | 58.85
- (c) Exhaustive Pairing | 59.09
Conclusions We have introduced JEREX, a novel multi-task model for end-to-end relation extraction. In contrast to prior systems, JEREX combines entity mention localization with coreference resolution to extract entity types and relations on an entity level. We report first results for entity-level, end-to-end, relation extraction as a reference for future work. Furthermore, we achieve state-of-the-art results on the DocRED relation extraction task by enhancing multi-instance reasoning with global entity representations and a localized context, outperforming several more complex solutions. We showed that training a single model jointly on all subtasks instead of using a pipeline approach performs roughly on par, eliminating the need of training separate models and accelerating inference. One of the remaining shortcomings lies in the detection of false positive relations, which may be expressed according to the entities' types but are actually not expressed in the document. Exploring options to reduce these false positive predictions seems to be an interesting challenge for future work.
6,421.8
2021-02-11T00:00:00.000
[ "Computer Science" ]
Making Predictions Using Poorly Identified Mathematical Models Many commonly used mathematical models in the field of mathematical biology involve challenges of parameter non-identifiability. Practical non-identifiability, where the quality and quantity of data does not provide sufficiently precise parameter estimates is often encountered, even with relatively simple models. In particular, the situation where some parameters are identifiable and others are not is often encountered. In this work we apply a recent likelihood-based workflow, called Profile-Wise Analysis (PWA), to non-identifiable models for the first time. The PWA workflow addresses identifiability, parameter estimation, and prediction in a unified framework that is simple to implement and interpret. Previous implementations of the workflow have dealt with idealised identifiable problems only. In this study we illustrate how the PWA workflow can be applied to both structurally non-identifiable and practically non-identifiable models in the context of simple population growth models. Dealing with simple mathematical models allows us to present the PWA workflow in a didactic, self-contained document that can be studied together with relatively straightforward Julia code provided on GitHub. Working with simple mathematical models allows the PWA workflow prediction intervals to be compared with gold standard full likelihood prediction intervals. Together, our examples illustrate how the PWA workflow provides us with a systematic way of dealing with non-identifiability, especially compared to other approaches, such as seeking ad hoc parameter combinations, or simply setting parameter values to some arbitrary default value. Importantly, we show that the PWA workflow provides insight into the commonly-encountered situation where some parameters are identifiable and others are not, allowing us to explore how uncertainty in some parameters, and combinations of parameters, regardless of their identifiability status, influences model predictions in a way that is insightful and interpretable. Introduction Interpreting experimental data using mechanistic mathematical models provides an objective foundation for discovery, prediction, and decision making across all areas of science and engineering.Interpreting data in this way is particularly relevant to applications in the life sciences where new types of measurements and data are continually being developed.Therefore, developing computational methods that can be used to explore the interplay between experimental data, mathematical models, and real-world predictions is of broad interest across many application areas in the life sciences. 
Key steps in using mechanistic mathematical models to interpret data include: (i) identifiability analysis; (ii) parameter estimation; and, (iii) model prediction.Recently, we developed a systematic, computationally-efficient workflow, called Profile-Wise Analysis (PWA), that addresses all three steps in a unified way (Simpson and Maclaren 2023).In essence, the PWA workflow involves propagating profile-likelihood-based confidence sets for model parameters, to model prediction sets by isolating how different parameters, and different combinations of parameters, influences model predictions (Simpson and Maclaren 2023).This enables us to explore how variability in individual parameters, and variability in groups of parameters directly influence model predictions in a framework that is straightforward to implement and interpret in terms of the underlying mechanisms encoded into the mathematical models (Simpson et al. 2023;Simpson and Maclaren 2023).This insight is possible because we harness the targeting property of the profile likelihood to isolate the influence of individual parameters (Casella and Berger 2001;Cox 2006;Cole 2020;Pace and Salvan 1997;Pawitan 2001), and we propagate forward from parameter confidence sets into prediction confidence sets.This feature is convenient since we can target individual interest parameters, or lower-dimensional combinations of parameters, one at-a-time, thereby providing mechanistic insight into the parameter(s) of interest, as well as making our approach scalable since we can deal with single parameters, or combinations of parameters, without needing to evaluate the entire likelihood function, only profiles of it.Profilewise prediction confidence sets can be combined in a very straightforward way to give an overall curvewise prediction confidence set that accurately approximates the gold standard prediction confidence set using the full likelihood function.One of the advantages of the PWA workflow is that it naturally leads to curvewise prediction sets that avoids challenges of computational overhead and interpretation that are inherent in working with pointwise approaches (Hass et al. 2016;Kreutz et al. 2013).The first presentation of the PWA workflow focused on idealised identifiable models only, where the quality and quantity of the available data meant that all parameters were identifiable.Here we address to the more practical scenario of exploring how the PWA workflow applies to non-identifiable models.This is important because nonidentifiability is routinely encountered in mathematical biology, and other modelling applications in the life sciences (Simpson et al. 2021;Fröhlich et al. 2014). The first component of the PWA workflow is to assess parameter identifiability.Parameter identifiability is often considered in terms of structural or practical identifiability (Wieland et al. 2021).Structural identifiability deals with the idealised situation where we have access to an infinite amount of ideal, noise-free data (Chiş et al. 2011(Chiş et al. , 2016)).Structural identifiability for mathematical models based on ordinary differential equations (ODE) is often assessed using software that includes DAISY (Bellu et al. 2007) and GenSSI (Ligon et al. 
2018).In brief, GenSSI uses Lie derivatives of the ODE model to generate a system of input-output equations, and the solvability properties of this system provides information about global and local structural identifiability.In contrast, practical identifiability deals with the question of whether finite amounts of imperfect, noisy data is sufficient to identify model parameters (Kreutz et al. 2013;Raue et al. 2009Raue et al. , 2013Raue et al. , 2014)).Practical identifiability can be implicitly assessed through the failure of sampling methods to converge (Hines et al. 2014;Siekmann et al. 2012;Simpson et al. 2020).Other approaches to determine practical identifiability include indirectly analysing the local curvature of the likelihood (Browning and Simpson 2023;Gutenkunst et al. 2007;Chiş et al. 2016;Vollert et al. 2023) or directly assessing the global curvature properties of the likelihood function by working with the profile likelihood (Raue et al. 2009(Raue et al. , 2013(Raue et al. , 2014)).Therefore, practical identifiability is a more strict requirement than structural identifiability, as demonstrated by the fact that many structurally identifiable models are found to be practically non-identifiable when considering the reality of dealing with finite amounts of noisy data (Simpson et al. 2020(Simpson et al. , 2022)).Fröhlich et al. ( 2014) compared several computational approaches (bootstrapping, profile likelihood, Fisher information matrix, and multi-start based approaches) to diagnose structural and practical identifiability of several models encountered in the systems biology literature and concluded that the profile likelihood produced reliable results, whereas the other approaches could be misleading. In this work we will explore how the PWA workflow can be applied to both structurally non-identifiable and practically non-identifiable mathematical models that are routinely encountered in modelling population biology phenomena.Though we have not explicitly considered PWA and profile likelihood methods for non-identified models previously, there is reason to expect that it may perform reasonably well despite these challenges.As mentioned above, previous work in the context of systems biology by Fröhlich et al. (2014) found that profile likelihood is the only method they consider that performs well in the presence of practical identifiability (Fröhlich et al. 2014).Investigations in other application areas has led to similar observations.In particular, in econometrics, Dufour (1997) considers the construction of valid confidence sets and tests for econometric models under poor identifiability.They first show that, lacking other constraints, valid confidence sets must typically be unbounded.They then show that Wald-style confidence intervals of the form 'estimate plus or minus some number of standard errors' lack this property and are always bounded.In contrast, likelihood-based methods, such as likelihood ratios and associated confidence intervals, can produce appropriately unbounded intervals that maintain correct (or higher) asymptotic coverage.Zivot et al. (1998) similarly compare confidence intervals for poorly identified econometric models based on inverting Lagrange multiplier, likelihood ratio, and Anderson-Rubin tests to Wald-style confidence intervals.As before, Zivot et al. 
(1998) show that the relatively flat likelihood corresponds to appropriately wide confidence intervals, maintaining or exceeding coverage is maintained.Further, they demonstrate that likelihood-based intervals perform much better than Wald intervals (Zivot et al. 1998).In particular, these intervals may generally be unbounded unless additional constraints are applied, as shown to be theoretically necessary by Dufour (1997).Zivot et al.'s study is computational, while (Wang and Zivot 1998;Dufour 1997) supplement this with additional asymptotic arguments.A more recent review of these and similar results, again in econometrics, is given by Andrews et al. (2019), who also discuss how naive application of the bootstrap typically fails in the poorly identified setting (which is consistent with our recent results in Warne et al. (2024) and those of Fröhlich et al. (2014)).In light of these results we expect that likelihood-based confidence intervals for parameters in poorly identified problems may still perform well.Though these will be unbounded in general, they will typically rule out some regions of the parameter space corresponding to e.g., rejected values of identified parameter combinations or one-sidedly bounded intervals.Furthermore, in most problems we can combine these intervals with simple a priori bounds (which are weaker assumptions than prior probability distributions; see Stark 2015) to obtain confidence sets with the same coverage as the unbounded sets (again, see Stark 2015). In the context of prediction, our approach begins with a gold-standard full likelihood method that carries out prediction via initial parameter estimation in such a way that coverage for predictions is at least as good as coverage for parameters.Thus, the desirable properties for parameter confidence sets are expected to hold up for predictions, and predictions can also be thought of as functions or functionals of the parameters that may be better identified than the underlying parameters.The benefit of going via the parameter space rather than directly to predictions is that we can more easily generalise to new scenarios with mechanistic model components and parameters.In addition to this gold standard approach, we then introduce the PWA approach, which is based on the same general principles but requires less computation in exchange for potentially lower coverage.PWA also facilitates an understanding of the connection between individual parameters and predictions (Simpson and Maclaren 2023).Guided by these theoretical intuitions about general models, we carry out computational experiments for particular models.One limitation of our approach is that the theoretical underpinnings are asymptotic and that finite sample corrections may be needed when data is sparse and the parameter space is large.One approach to address this would be applying the calibration methods from Warne et al. (2024), which can be used with minimal modifications to our overall workflow.We leave this for future work. An mentioned above, understanding the ability of the PWA workflow to deal with non-identifiable models is important because many mathematical models in the field of mathematical biology are non-identifiable.To demonstrate this point, we will briefly recall the history of mathematical models of avascular tumour spheroid growth.The study of tumour spheroids has been an active area of research for more than 50 years, starting with the seminal Greenspan model in 1972Greenspan model in (1972. 
.Greenspan's mathematical model involves a series of analytically tractable boundary value problems whose solutions provide a plausible explanation for the observed three-stage, spatially structured growth dynamics of avascular in vitro tumour spheroids (Greenspan 1972).Inspired by Greenspan, more than 50 years of further model development and refinement delivered continual generalisations of this modelling framework.This iterative process produced many detailed mathematical models that have incorporated increasingly sophisticated biological detail.This progression of ideas led to the development of time-dependent partial differential equation (PDE)-based models (Byrne et al. 2003), moving boundary PDE models (Byrne and Chaplain 1997), multi-phase PDE models with moving boundaries (Ward andKing 1997, 1999), and multi-component PDE models that simultaneously describe spheroid development while tracking cell cycle progression (Jin et al. 2021).These developments in mathematical modelling sophistication did not consider parameter identifiability and it was not until very recently that questions of parameter identifiability of these models was assessed, finding that even the simplest Greenspan model from 1972 turns out to be practically non-identifiable when using standard experimental data focusing on the evolution of the outer spheroid radius only (Browning and Simpson 2023;Murphy et al. 2022).This dramatic example points to the need for the mathematical biology community to consider parameter identifiability in parallel with model development to ensure that increased modelling capability and modelling detail does not come at the expense of those models to deliver practical benefits. Previous approaches for dealing with non-identifiable models in mathematical and systems biology and related areas are to introduce some kind of simplification by, for example, setting parameters to default values (Hines et al. 2014;Vollert et al. 2023;Simpson et al. 2020), or seeking to find parameter combinations that simplify the model (Cole 2020;VandenHeuvel et al. 2023).For structurally non-identifiable models, symbolic methods can be used to find locally identifiable reparameterisations (e.g., Catchpole et al. 1998;Cole et al. 2010, summarised in Cole 2020), though these are difficult to apply to the practically non-identifiable case and typically require symbolic derivatives to be available.Several approaches can be used to find parameter combinations in the practically non-identifiable scenario, such as the concept of sloppiness (Brown and Sethna 2003;Brown et al. 2004;Gutenkunst et al. 
2007) where a log-transformation is used to explore the possibility of finding informative combinations of parameters, typically in the form of ratios of parameters or ratios of powers of parameters, as a means of simplifying the model. In this work, we take a different approach and use the desirable properties expected of likelihood-based confidence intervals for poorly identified models (Dufour 1997), along with the targeting property of the profile likelihood as used within the PWA workflow, to propagate confidence sets in parameter values to confidence sets for predictions for non-identifiable parameters. This approach provides insight because the targeting property of the profile likelihood can illustrate that some parameters are identifiable while others are not, and importantly the PWA workflow directly links the relevance of different parameters, regardless of their identifiability, to model predictions. Overall, we illustrate that the PWA workflow can be used as a systematic platform for making predictions, regardless of whether we are dealing with identifiable or non-identifiable models. This is useful for the commonly encountered situation of partly identifiable models, where some parameters are well identified by the data and others are not (Simpson et al. 2020). To make this presentation as accessible as possible we present results for very simple, widely adopted mathematical models relating to population dynamics. Results are presented in terms of two simple case studies: the first involves structural non-identifiability, and the second involves practical non-identifiability. The first, structurally non-identifiable case is sufficiently straightforward that all calculations are performed by discretising the likelihood function and using appropriate grid searches to construct various profile likelihoods. This approach has the advantage of being both conceptually and computationally straightforward. The second, practically non-identifiable case is dealt with using numerical optimisation instead of grid searches. This approach has the advantage of being computationally efficient and more broadly applicable than simple grid searches. As we show, the PWA workflow leads to sensible results for these two simple case studies, and we anticipate that it will also lead to useful insights for other non-identifiable models, because related likelihood-based approaches are known to perform well in the face of non-identifiability (Dufour 1997; Zivot et al. 1998; Fröhlich et al. 2014; Warne et al. 2024). Open source software written in Julia is provided on GitHub to replicate all results in this study.

Results and Discussion

We present these methods in terms of two case studies that are based on very familiar mathematical models of population dynamics. Before presenting the PWA details, we first provide some background about these mathematical models and briefly discuss some application areas that motivate their use.

Logistic Population Dynamics

The logistic growth model, dC(t)/dt = λC(t)(1 − C(t)/K), with solution C(t) = K C(0) / [C(0) + (K − C(0)) e^(−λt)], is one of the most widely-used mathematical models describing population dynamics (Edelstein-Keshet 2005; Kot 2003; Murray 2002). This model can be used to describe the time evolution of a population with density C(t) > 0, where the density evolves from some initial density C(0) to eventually reach some carrying capacity density, K > 0.
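The closed-form solution referred to above is straightforward to evaluate numerically. The short Python sketch below is our own illustration, not part of the original study (whose code is written in Julia); the function name and parameter values are ours and purely illustrative.

import numpy as np

def logistic_solution(t, C0, lam, K):
    # Closed-form solution of dC/dt = lam * C * (1 - C / K) with C(0) = C0.
    return K * C0 / (C0 + (K - C0) * np.exp(-lam * t))

# Illustrative values only: sigmoid growth from C(0) = 5 towards K = 100.
t = np.linspace(0.0, 1000.0, 11)
print(logistic_solution(t, C0=5.0, lam=0.01, K=100.0))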
For typical applications with C(0)/K ≪ 1 the solution describes near exponential growth at early time, with growth rate λ, before the density eventually approaches the carrying capacity in the long time limit, C(t) → K⁻ as t → ∞. One of the reasons that the logistic growth model is so widely employed is that it gives rise to a sigmoid shaped solution curve that is so often observed in a range of biological phenomena across a massive range of spatial and temporal scales. Images in Fig. 1 show typical applications in population biology where sigmoid growth is observed. Figure 1a shows the location of a coral reef on the Great Barrier Reef, off the East coast of Australia. Populations of corals are detrimentally impacted by a range of events, such as tropical cyclones or coral bleaching associated with climate change (Hughes et al. 2021). Data in Fig. 1b shows measurements of the percentage of total hard coral cover after an adverse event that reduced the proportion of area covered by hard coral to just a few percent (Simpson et al. 2022, 2023; eAtlas 2023). The recovery in hard coral cover took more than a decade, with the corals regrowing and colonising the reef back to approximately 80% area occupancy. This recovery data can be described using the logistic growth model, where it is of particular interest to estimate the growth rate λ because this estimate can be used to provide a simple estimate of the timescale of regrowth, 1/λ (Simpson et al. 2022, 2023).

Another classical application of the logistic growth model is to understand the dynamics of populations of tumour cells grown as in vitro avascular tumour spheroids. For example, the five experimental images in Fig. 1c show slices through the equatorial plane of a set of in vitro melanoma tumour spheroids, after 12, 14, 16, 18 and 21 days of growth after spheroid formation (Browning et al. 2021; Murphy et al. 2022). The structure of these spheroids approximately takes the form of a growing compound sphere. Dotted lines in Fig. 1c show three boundaries: (i) the outer-most boundary that defines the outer radius of the spheroid; (ii) the central boundary that separates cells according to their cell cycle status; and, (iii) the inner-most boundary that separates the central shell composed of living but non-proliferative cells from the central region that is composed of dead cells. Cells in the outer-most spherical shell have access to sufficient nutrients that these cells are able to proliferate, whereas cells in the central spherical shell have limited access to nutrients, which prevents these cells entering the cell cycle (Browning et al. 2021). Data in Fig. 1d shows the time evolution of the outer radius for groups of spheroids that are initiated using different numbers of cells (i.e. 2500, 5000, 10,000 cells, as indicated). Regardless of the initial size of the spheroids, we observe sigmoid growth, with the eventual long-time spheroid radius apparently independent of the initial size. This sigmoid growth behaviour is consistent with the logistic growth model, Eq. (1), where here we consider the variable C(t) to represent the spheroid radius (Murphy et al. 2022). In addition to modelling coral reef re-growth (Simpson et al. 2023) and tumour spheroid development (Browning et al. 2021; Browning and Simpson 2023), the logistic model has been used to model a wide range of cell biology population dynamics including in vitro wound healing (Maini et al.
2004) and in vivo tumour dynamics (Gerlee 2013; Sarapata and de Pillis 2014; West et al. 2001). Furthermore, the logistic equation and generalisations thereof are also routinely used in various ecological applications including modelling populations of sharks (Hisano et al. 2011), polyps (Melica et al. 2014), trees (Acevedo et al. 2012) and humans (Steele et al. 1998).

In terms of mathematical modelling, the simplicity of the logistic growth model introduces both advantages and disadvantages. The advantages of the logistic growth model include the fact that it is both conceptually straightforward and analytically tractable. In contrast, the simplicity of the logistic growth model means that it is not a high-fidelity representation of biological detail. Instead, it would be more accurate to describe the logistic growth model as a phenomenologically-based model that can be used to provide insightful, but indirect, information about biological population dynamics. In response to the disadvantages of the logistic model, there have been many generalisations proposed, such as those reviewed by Tsoularis and Wallace (2002). For the purposes of this study we will work with two commonly-used generalisations. As we explain later, these two commonly-used generalisations introduce different challenges in terms of parameter identifiability.

The first generalisation we will consider is to extend the logistic growth model to include a separate linear sink term, dC(t)/dt = λC(t)(1 − C(t)/K) − dC(t), where d > 0 is a rate of removal associated with the linear sink term. Incorporating a linear sink term into the logistic growth model has been used to explicitly model population loss, either through some kind of intrinsic death process (Baker and Simpson 2010), a death/decay associated with external signals, such as chemotherapy in a model of tumour cell growth (Swanson et al. 2003), or an external harvesting process (Miller and Botkin 1974; Brauer and Sánchez 1975), as is often invoked in models of fisheries (Cooke and Witten 1986; Xu et al. 2005). Re-writing Eq. (3) as dC(t)/dt = ΛC(t)(1 − C(t)/κ), with Λ = λ − d and κ = KΛ/λ, for Λ > 0, preserves the structure of the logistic growth equation, so that the solution is C(t) = κC(0) / [C(0) + (κ − C(0)) e^(−Λt)], which now gives three long-term possibilities: (i) C(t) → κ⁻ as t → ∞ when λ > d; (ii) C(t) → 0⁺ as t → ∞ when λ < d; and (iii) 0 < C(t) < C(0) for all 0 < t < ∞ for the special case λ = d. In this work we will refer to Eq. (3) as the logistic model with harvesting (Brauer and Sánchez 1975).

The second generalisation we will consider is to follow the work of Richards (1959), who generalised the logistic growth model to Eq. (6), where the free parameter β > 0 was introduced by Richards to provide greater flexibility in describing a wider range of biological responses in the context of modelling stem growth in seedlings (Richards 1959). This model, which is sometimes called the Richards' model, has since been used to study population dynamics in a range of applications, including breast cancer progression (Spratt et al.
1993). The solution of the Richards' model can be written in closed form, Eq. (7), which clearly simplifies to the solution of the logistic equation when β = 1. The Richards' model is also related to the well-known von Bertalanffy growth model, in which β = 1/3 (Tsoularis and Wallace 2002). Instead of considering fixed values of β, the Richards' model incorporates additional flexibility by treating β as a free parameter that can be estimated from experimental data.

This work focuses on the development and deployment of likelihood-based methods to make predictions with poorly-identified models. A preliminary observation about the mathematical models that we have presented so far is that both the logistic model, Eq. (1), and the Richards' model, Eq. (6), are structurally identifiable in terms of making perfect, noise-free observations of C(t) (Ligon et al. 2018). In contrast, the logistic model with harvesting is structurally non-identifiable, since making perfect, noise-free observations of C(t) enables the identification of Λ and κ only, for which there are infinitely many choices of λ, d and K that give the same C(t). We will now illustrate how the PWA workflow for parameter identifiability, estimation and prediction can be implemented in the face of both structural and practical non-identifiability.

Logistic Model with Harvesting

To solve Eq. (3) for C(t) we must specify an initial condition, C(0), together with three parameters θ = (λ, d, K). We proceed by assuming that the observed data, denoted C^o(t), corresponds to the solution of Eq. (3) corrupted with additive Gaussian noise with zero mean and constant variance, so that C^o(t_i) | θ ∼ N(C(t_i), σ²), where C(t_i) is the solution of Eq. (3) at a series of I time points, t_i for i = 1, 2, 3, ..., I. Within this standard framework the log-likelihood function can be written as ℓ(θ | C^o(t)) = Σ_{i=1}^{I} log φ(C^o(t_i); C(t_i), σ²), where φ(x; μ, σ²) denotes a Gaussian probability density function with mean μ and variance σ², and C(t) is the solution of Eq. (3). We treat ℓ(θ | C^o(t)) as a function of θ for fixed data (Cassidy 2023), and in this first example, for the sake of clarity and simplicity, we treat C(0) and σ as known quantities (Hines et al. 2014). This has the benefit of reducing this problem to dealing with three unknown parameters, θ = (λ, d, K). This assumption will be relaxed later in Sect. 2.3. Synthetic data is generated with λ = 0.01, d = 0.002, K = 100, C(0) = 5 and σ = 5. Data is collected at t_i = 100 × (i − 1) for i = 1, 2, 3, ..., 11 and plotted in Fig. 2a. The remainder of this mathematical exploration is independent of the units of these quantities; however, to be consistent with the data in Fig. 1b, the dimensions of C(t) and K would be taken to be densities expressed in terms of a percentage of area occupied by growing hard corals, and the rates λ and d would have dimensions of [/day]. Similarly, to work with the data in Fig. 1d, the variables C(t) and K would be taken to represent spheroid radius with dimensions μm, and the rates λ and d would have dimensions [/day] (Murphy et al. 2022).

We now apply the PWA framework for identifiability, estimation and prediction by evaluating ℓ(θ | C^o(t)) across a broad region of parameter space that contains the true parameter values, normalising so that the maximum normalised log-likelihood is zero. The value of θ that maximises ℓ(θ | C^o(t)) is denoted θ̂, which is called the maximum likelihood estimate (MLE). In this instance we have θ̂ = (0.0094, 0.00192, 99.90), which is reasonably close to the true value, θ = (0.01, 0.002, 100), but this point estimate provides no insight into inferential precision, which is associated with the curvature of the likelihood function (Cole 2020; Pace and Salvan 1997).
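To make the grid-based calculations described here and in the following paragraphs concrete, the Python sketch below is our own illustration rather than the study's Julia implementation; all names, the coarse 60³ grid, and the random seed are ours. It generates synthetic data for the logistic model with harvesting, evaluates the normalised log-likelihood on a regular grid, locates the MLE, and extracts the univariate profile for λ (as described in the next paragraphs) together with its asymptotic 95% confidence interval.

import numpy as np
from scipy.stats import chi2, norm

def harvested_logistic(t, C0, lam, d, K):
    # Solution of dC/dt = lam*C*(1 - C/K) - d*C, i.e. a logistic model with
    # effective rate L = lam - d and effective carrying capacity kappa = K*L/lam.
    L = lam - d
    kappa = K * L / lam
    return kappa * C0 / (C0 + (kappa - C0) * np.exp(-L * t))

def loglik(theta, t, C_obs, C0=5.0, sigma=5.0):
    lam, d, K = theta
    return norm.logpdf(C_obs, loc=harvested_logistic(t, C0, lam, d, K), scale=sigma).sum()

rng = np.random.default_rng(1)
t = 100.0 * np.arange(11)                            # t_i = 100*(i-1), i = 1,...,11
C_obs = harvested_logistic(t, 5.0, 0.01, 0.002, 100.0) + rng.normal(0.0, 5.0, t.size)

# Coarse grid (the paper uses a 500^3 grid; 60^3 keeps this sketch fast).
lams = np.linspace(1e-4, 0.05, 60)
ds = np.linspace(0.0, 0.02, 60)
Ks = np.linspace(50.0, 200.0, 60)
with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
    ll = np.array([[[loglik((l, d, K), t, C_obs) for K in Ks] for d in ds] for l in lams])
ll = np.nan_to_num(ll, nan=-np.inf)
ll_hat = ll - ll.max()                               # normalised log-likelihood
i, j, k = np.unravel_index(ll.argmax(), ll.shape)
print("MLE approximately", lams[i], ds[j], Ks[k])

# Univariate profile for lambda: maximise out the nuisance parameters (d, K).
profile_lam = ll_hat.max(axis=(1, 2))
threshold = -chi2.ppf(0.95, df=1) / 2.0              # approximately -1.92
inside = lams[profile_lam >= threshold]
print("95% interval for lambda approximately", inside.min(), inside.max())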
To assess the identifiability of each parameter we construct a series of univariate profile likelihood functions by partitioning the full parameter θ into an interest parameter ψ and nuisance parameters ω, so that θ = (ψ, ω). For a set of data C^o(t), the profile log-likelihood for the interest parameter ψ, given the partition (ψ, ω), is ℓ_p(ψ | C^o(t)) = sup_ω ℓ(ψ, ω | C^o(t)), which implicitly defines a function ω*(ψ) of optimal values of ω for each value of ψ, and defines a surface with points (ψ, ω*(ψ)) in parameter space. In the first instance we construct a series of univariate profiles by taking the interest parameter to be a single parameter, which means that (ψ, ω*(ψ)) is a univariate curve that allows us to visualise the curvature of the log-likelihood function. Since this first example deals with a relatively straightforward three-dimensional log-likelihood function, where we have already evaluated ℓ̂(θ | C^o(t)) on a 500³ regular discretisation of θ, the calculation of the profile likelihood functions is straightforward. For example, if our interest parameter is ψ = λ and the associated nuisance parameter is ω = (d, K), then for each value of λ along the uniform discretisation of the interval 0.0001 < λ_j < 0.05 for j = 1, 2, 3, ..., 500, we compute the indices k and l that maximise ℓ̂(θ | C^o(t)), where θ is evaluated on the uniform discretisation (λ_j, d_k, K_l). This optimisation calculation can be performed using a straightforward grid search. Repeating this procedure by setting ψ = d and ω = (λ, K), and ψ = K and ω = (λ, d), gives univariate profile likelihoods for d and K, respectively. The three univariate profile likelihood functions are given in Fig. 3, where each profile likelihood function is superimposed with a vertical green line at the MLE and a horizontal line at the 95% asymptotic threshold, ℓ* (Royston 2007).

The univariate profile for λ in Fig. 3a indicates that the noisy data in Fig. 2a allows us to estimate λ reasonably well, at least within the 95% asymptotic confidence interval, 0.0067 < λ < 0.0183. In contrast, the relatively flat profiles for d and K in Fig. 3b, c indicate that the noisy data in Fig. 2a does not identify these parameters. This is entirely consistent with our observation that this model is structurally non-identifiable. The univariate profile likelihoods indicate there are many choices of d and K that can be used to match the data equally well. This is a major problem because these flat profiles indicate that incorrect parameter values can match the observations. If, when using mechanistic models, our aim is to link parameter values to biological mechanisms, this means that we could use a model parameterised to reflect an incorrect mechanism to explain our observations. Here, for example, our data is generated with K = 100, but our univariate profile likelihood indicates that we could match the data with K = 90 or K = 190, which is clearly unsatisfactory if we want to use our data to infer not only the value of K, but also to understand and quantify the mechanism implied by this parameter value.

Before considering prediction, we first explore the empirical, finite sample coverage properties for our parameter confidence intervals, as they are only expected to hold asymptotically (Pawitan 2001). We generate 5000 data realisations in the same way that we generated the single data realisation in Fig.
3a, for the same fixed true parameters. For each realisation we compute the MLE and count the proportion of realisations with ℓ̂ ≥ ℓ* = −Δ_{0.95,3}/2, where ℓ̂ is evaluated at the true parameter values. This condition means the true parameter vector is within the sample-dependent, likelihood-based confidence set. This gives 4900/5000 = 98.00%, which, unsurprisingly, exceeds the expected 95% asymptotic result due to the fact that the likelihood function is relatively flat. This result is very different to previous implementations of the PWA workflow for identifiable problems, where finite sample coverage properties for parameter confidence intervals are very close to the expected asymptotic result (Simpson and Maclaren 2023; Murphy et al. 2024).

Estimating prediction uncertainties accurately is nontrivial, due to the nonlinear dependence of model characteristics on parameters (Villaverde et al. 2015, 2023). The recent PWA workflow addresses identifiability, estimation and prediction in a unified workflow that involves propagating profile-likelihood-based confidence sets for model parameters to model prediction sets, by isolating how different parameters influence model predictions and taking advantage of the targeting property of the profile likelihood. Previous applications of PWA found that constructing bivariate profile likelihood functions is a useful way to explore relationships between pairs of parameters, and to construct accurate prediction intervals. However, these previous applications have focused on identifiable problems, whereas here we consider the more challenging and realistic question of dealing with non-identifiable and poorly-identifiable models. The idea of using bivariate profile likelihood functions to provide a visual indication of the confidence set of pairs of parameters, and of the nonlinear dependence of parameter estimates on each other, has long been established in the nonlinear regression literature (Bates and Watts 1988). Here, and in the PWA, we use a series of bivariate profile likelihoods to provide a visual interpretation of the relationship between various pairs of parameters, as well as using these profile likelihood functions to make predictions, so that we establish how the variability in parameters within a particular confidence set maps to predictions that are more meaningful for collaborators in the life sciences. One of the benefits of working with relatively simple models with a small number of parameters is that we can take the union of predictions made using all possible bivariate profile likelihoods and compare these with predictions from the full likelihood. Our previous work has compared making such predictions using the union of univariate profile likelihood-based predictions, the union of bivariate profile likelihood-based predictions, and predictions from the full likelihood function, finding that working with bivariate profile likelihood-based predictions leads to accurate predictions at a significantly reduced computational overhead compared with using the full likelihood function.

We construct bivariate profile likelihood functions by setting: (i) ψ = (λ, d) and ω = K; (ii) ψ = (λ, K) and ω = d; and, (iii) ψ = (d, K) and ω = λ, and evaluating Eq.
(10). Again, just as for the univariate profile likelihood functions, we work with ℓ̂(θ | C^o(t)) evaluated on a 500³ regular discretisation of θ. For example, to evaluate the bivariate profile likelihood for ψ = (λ, d), we take each pair of (λ_j, d_k) values on the discretisation of θ and compute the index l that maximises ℓ̂(θ | C^o(t)), where θ is evaluated on the uniform discretisation (λ_j, d_k, K_l), to give the bivariate profile shown in Fig. 4a, where we superimpose a curve at the 95% asymptotic threshold ℓ* and the location of the MLE. The shape of the bivariate profile shows that there is a narrow region of parameter space where ℓ̂_p ≥ ℓ*, and the shape of this region illustrates how estimates of λ and d are correlated (i.e. as λ increases, the value of d required to match the data also increases). This narrow region of parameter space where ℓ̂_p ≥ ℓ* extends right across the truncated region of parameter space considered. As we pointed out in the Introduction, despite the fact that the logistic model with harvesting is structurally non-identifiable, we see that the likelihood-based confidence intervals for parameters perform well in the sense that each bivariate profile clearly rules out certain regions of the parameter space. This result is in line with the general conclusions of Fröhlich and co-workers (Fröhlich et al. 2014), who found that likelihood-based methods provided reliable results in the face of non-identifiability, whereas other commonly-used approaches can be unreliable.

To understand how the uncertainty in ψ = (λ, d) impacts the uncertainty in the prediction of the mathematical model, Eq. (4), we take those values of (λ_j, d_k) for which ℓ̂_p ≥ ℓ* (i.e. those values of θ lying within the region enclosed by the threshold, including values of θ along the boundary) and solve Eq. (4) using the associated value of the nuisance parameter, K_l, that was identified when we computed the bivariate profile likelihood. Solving Eq. (4) for each choice of (λ_j, d_k, K_l) within the set of parameters defined by the asymptotic threshold provides a family of solutions, from which we can compute a curvewise prediction interval, shown in the right-most panel of Fig. 4a, where we have also superimposed the MLE solution. This prediction interval explicitly shows us how uncertainty in the values of (λ, d) propagates into uncertainty in the model prediction, C(t).

We now repeat the process of computing the bivariate profile likelihoods for ψ = (λ, K) and ψ = (d, K), shown in Fig. 4b-c, respectively. Here we see that the bivariate profile likelihood functions provide additional insight beyond the univariate profiles in Fig. 3. For example, here we see that the bivariate profile likelihood for ψ = (d, K) is similar to the bivariate profile for ψ = (λ, d) in the sense that there is a large region, extending right across the parameter space, where ℓ̂_p ≥ ℓ*. In contrast, for ψ = (λ, K) we see that the curvature of the bivariate profile likelihood is such that the region where ℓ̂_p ≥ ℓ* is mostly contained within the region of parameter space considered. The profile-wise prediction intervals for each bivariate profile likelihood are given in the right-most panel in Fig.
4a-c for each choice of ψ, as indicated. Comparing the profile-wise prediction intervals in Fig. 4 illustrates how differences in parameters affect these predictions. For example, the prediction intervals in Fig. 4b-c, which explicitly involve bivariate profile likelihoods involving K, are wider at late time than the prediction interval in Fig. 4a. This reflects the fact that the carrying capacity density, K, affects the late time solution of Eq. (3).

In addition to constructing profile-wise prediction intervals to explore how uncertainty in pairs of parameters impacts model predictions, we can also form an approximate prediction interval by taking the union of the three profile-wise prediction intervals. This union of prediction intervals, shown in Fig. 5, compares extremely well with the gold-standard prediction interval obtained using the full likelihood function. In this case it is straightforward to construct the full likelihood prediction interval because we have already evaluated ℓ̂(θ | C^o(t)) at each of the 500³ mesh points, and we simply solve Eq. (3) for each value of θ for which ℓ̂ ≥ ℓ* = −Δ_{0.95,3}/2 = −3.91 and use this family of solutions to form the curvewise gold-standard prediction interval.

Our construction of the approximate full prediction interval is both practically and computationally convenient. This approach is practically advantageous in the sense that building profile-wise prediction intervals, as we do in Fig. 4, provides insight into how different pairs of parameters impact model predictions in a way that is not obvious when working with the full likelihood function. This process is computationally advantageous since working with pairs of parameters remains computationally tractable for models with larger numbers of parameters, where either directly discretising or sampling the full likelihood function becomes computationally prohibitive. Finally, this example indicates that the PWA workflow presented previously for identifiable problems can also be applied to structurally non-identifiable problems. The main difference between applying the PWA workflow for identifiable and structurally non-identifiable problems is that the former problems involve determining regions in parameter space where ℓ̂_p ≥ ℓ* implicitly defined by the curvature of the likelihood function. In contrast, the region of parameter space where ℓ̂_p ≥ ℓ* for structurally non-identifiable problems is determined both by the curvature of the likelihood function and by the user-defined bounds on the regions of parameter space considered. These bounds can often be imposed naturally, such as requiring that certain parameters be non-negative, as in the case of K > 0 in Eq. (3). Alternatively, these bounds can also be prescribed based on the experience of the analyst, such as our results in Figs.
3 and 4, where we limited our parameter space to the region where 0.0001 < λ < 0.05. In this case we chose the interval to be relatively broad, guided by our knowledge that the true value lies within this interval. Later, in Sect. 2.3, we will deal with a more practical case involving real measurements, where we have no a priori knowledge of the true parameter values.

The results in this section for Eq. (3) are insightful in the sense that they provide the first demonstration that the PWA workflow can be applied to structurally non-identifiable problems; however, we now turn our attention to a class of problems that we believe to be of greater importance, namely a practically non-identifiable problem where some parameters are well identified by the data, whereas others are not.

Richards' model

We now consider estimating parameters in the Richards' model, Eq. (6), to describe the coral reef recovery data in Fig. 1a, b. To solve Eq. (6) we require values of (λ, β, K, C(0)), and we assume that the noisy field data is associated with a Gaussian additive noise model with a pre-estimated standard deviation σ = 2 (Hines et al. 2014; Simpson et al. 2020). Therefore, unlike the work in Sect. 2.2, where it was relatively straightforward to discretise the three-dimensional parameter space and calculate the MLE and associated profile likelihood functions simply by searching the discretised parameter space, we now perform all calculations using a standard implementation of the Nelder-Mead numerical optimisation algorithm with simple bound constraints within the NLopt routine (Johnson 2023). Figure 6a shows the data superimposed with the MLE solution of Eq. (6), where we have θ̂ = (0.0055, 0.341, 81.73, 0.092), and we see that the shape of the MLE solution provides a very good match to the data. The parameter λ has units of [/day] and the exponent β is dimensionless. Both K and C(0) measure coral densities in terms of the % of area covered by hard corals.

Before we proceed to consider the curvature of the likelihood function, we construct a gold-standard full likelihood prediction interval using rejection sampling to find 5000 estimates of θ for which ℓ̂ ≥ ℓ* = −Δ_{0.95,4}/2 = −4.74. Evaluating the solution of the Richards' model, Eq. (7), for each identified θ gives us 5000 traces of C(t), which we use to construct the prediction interval shown by the gold shaded interval in Fig. 6b.

To proceed with the PWA workflow, we first assess the practical identifiability of each parameter by constructing four univariate profile likelihood functions by evaluating Eq. (10) with ψ = λ, ψ = β, ψ = K and ψ = C(0), respectively. Profiles are constructed across a uniform mesh of the interest parameter, and the profile likelihood is computed using numerical optimisation. The four univariate profile likelihood functions are given in Fig.
7, where each univariate profile likelihood is superimposed with a vertical green line at the MLE and a horizontal line at the asymptotic 95% threshold, ℓ*. The univariate profile for λ is flat, meaning that the data is insufficient to precisely identify λ. In contrast, the profiles for β, K and C(0) are relatively well-formed about a single maximum value at the MLE. Together, these profiles indicate that we have a commonly encountered situation whereby we are working with a structurally identifiable mathematical model, but the data we have access to does not identify all the parameters. Instead, the data identifies some of the parameters, β, K and C(0), whereas the data does not contain sufficient information to identify λ. This problem is practically non-identifiable, and we will now show that the PWA workflow can be applied to this problem and that this approach yields useful information that is not otherwise obvious. As with the logistic model with harvesting, before considering prediction we first explore the empirical, finite sample coverage properties. We generate 5000 data realisations in the same way that we generated the single data realisation in Fig. 3a, for the same fixed true parameters. For each realisation we compute the MLE and count the proportion of realisations with ℓ̂ ≥ ℓ* = −Δ_{0.95,4}/2, where ℓ̂ is evaluated at the true parameter values. This gives 4780/5000 = 95.60%, which is close to the expected asymptotic result, regardless of the non-identifiability of λ.

We now consider prediction by computing various bivariate profile likelihoods. Results in Fig. 8a-f show the six bivariate profile likelihood functions. Each bivariate profile likelihood is superimposed with the MLE (cyan dot), and following Simpson and Maclaren (2023) we randomly identify 500 points along the boundary where ℓ̂_p = ℓ* = −Δ_{0.95,2}/2, corresponding to the asymptotic 95% threshold. This approach to propagating confidence sets in parameters to confidence sets in predictions is different to the approach we took in Sect. 2.2, where we propagated all gridded parameter values for which ℓ̂_p ≥ ℓ*; here we focus only on the boundary values where ℓ̂_p = ℓ*. As we will later show, this approach of focusing on the boundary values is a computationally convenient way to identify parameter values which, when propagated forward to the model prediction space, form an accurate prediction interval in terms of comparison with the gold-standard full likelihood approach. Furthermore, the shape of these identified ℓ̂_p = ℓ* contours provides important qualitative insight into parameter identifiability. For example, the contour in Fig. 8a for the ψ = (λ, β) bivariate profile is a classic banana-shaped profile, consistent with our previous observation that λ is not identifiable. Similarly, the bivariate profile for ψ = (λ, K) shows that this contour extends right across the truncated region of parameter space considered. Again, this is consistent with λ not being identified by this data set. In contrast, other bivariate profiles, such as ψ = (K, β) and ψ = (C(0), β), form closed contours around the MLE and are contained within the bounds of the parameter space.

For each bivariate profile in Fig. 8 we propagate the 500 boundary values of θ forward by solving Eq.
(6) to give 500 traces of C(t), from which we can form the profile-wise predictions shown in the right-most panel of each subfigure. Propagating the boundary values of θ forward allows us to visualise and quantify how variability in pairs of parameters affects the prediction intervals. In this instance we can compare prediction intervals for identifiable pairs of parameters, such as those in Fig. 8d-e, with the prediction intervals for pairs of parameters that are not identifiable, such as those in Fig. 8a, b. In this instance we see that the widths of the prediction intervals dealing with identifiable target parameters appear to be, in general, narrower than the prediction intervals for target parameters that involve non-identifiable parameters.

Taking the union of the profile-wise prediction intervals identified in Fig. 8 gives us the approximate prediction interval shown in Fig. 6b, where we see that the union of the profile-wise prediction intervals compares very well with the gold-standard prediction interval formed using the full likelihood approach. While this comparison of prediction intervals in Fig. 6b indicates that working with the full likelihood and the bivariate profile likelihoods leads to similar outcomes in terms of the overall prediction interval for C(t), this simple comparison masks the fact that working with the bivariate profile likelihood functions provides far greater insight, by using the targeting property of the profile likelihood function to understand how the identifiability/non-identifiability of different parameters impacts prediction.

Conclusion and Outlook

In this work we demonstrate how a systematic likelihood-based workflow for identifiability analysis, parameter estimation, and prediction, called Profile-Wise Analysis (PWA), applies to non-identifiable models. Previous applications of the PWA workflow have focused on identifiable problems, where profile-wise predictions, based on a series of bivariate profile likelihood functions, provide a mechanistically insightful way to propagate confidence sets from parameter space to profile-wise prediction confidence sets. Taking the union of the profile-wise prediction intervals gives an approximate global prediction interval that compares well with the gold standard full likelihood prediction interval (Simpson and Maclaren 2023). In addition, empirical, finite sample coverage properties for the PWA workflow compare well with the expected asymptotic result. In this study we apply the same PWA workflow to structurally non-identifiable and practically non-identifiable mathematical models. Working with simple mathematical models allows us to present the PWA in a didactic format, as well as enabling a comparison of the approximate PWA prediction intervals with the gold-standard full likelihood prediction intervals. In summary, we find that the PWA workflow can be applied to non-identifiable models in the same way as for identifiable models. For the structurally non-identifiable models the confidence sets in parameter space are partly determined by user-imposed parameter bounds rather than being determined solely by the curvature of the likelihood function. Since the likelihood function is relatively flat, we find that finite sample coverage properties exceed the expected asymptotic result. For the practically non-identifiable models we find that some parameters are well identified by the data whereas other parameters are not. In this case the PWA workflow provides insight by mechanistically linking confidence sets in
parameter space with confidence sets in prediction space regardless of the identifiability status of the parameter. Demonstrating how the PWA workflow applies to non-identifiable mathematical models is particularly important for applications in mathematical biology because non-identifiability is commonly encountered.Previous approaches for dealing with non-identifiability have included introducing model simplifications, such as setting parameter values to certain default values (Simpson et al. 2020;Vollert et al. 2023), or seeking some combination of parameters that reduces the dimensionality of the parameter space (Gutenkunst et al. 2007;VandenHeuvel et al. 2023;Simpson et al. 2022).One of the challenges with these previous approaches is that they are often administered on a case-by-case basis without providing more general insight.In contrast, the PWA workflow can be applied to study parameter identifiability and to propagate confidence sets in parameter values to prediction confidence sets without needing to find parameter combinations to address non-identifiability, therefore providing a more systematic approach. There are many options for extending the current study.In this work we have chosen to deal with relatively straightforward mathematical models by working with ODEbased population dynamics models that are standard extensions of the usual logistic growth model.Working with simple, canonical mathematical models allows us to compare our approximate PWA workflow prediction intervals with gold-standard full likelihood results, as well as making the point that parameter non-identifiability can arise even when working with seemingly simple mathematical models.In this work we have applied the PWA to simple models with just three or four parameters, and previous implementations have dealt with models with up to seven parameters (Murphy et al. 2022;Simpson et al. 2023), and future applications of the PWA to more complicated mathematical models with many more unknown parameters is of high interest.Unfortunately, however, under these conditions it becomes computationally infeasible to compare with predictions from the gold-standard full likelihood approach so we have not dealt with more complicated models in the current study.While we have focused on presenting ODE-based models with a standard additive Gaussian measurement model, the PWA approach can be applied to more complicated mathematical models including mechanistic models based on PDEs (Simpson et al. 2020;Ciocanel et al. 2023), as well as systems of ODEs and systems of PDEs (Murphy et al. 2024), or more complicated mathematical models that encompass biological heterogeneity, such as multiple growth rates (Banks et al. 2018).Here we have chosen to work here with Gaussian additive noise because this is by far the most commonly-used approach to relate noisy measurements with solutions of mathematical models (Hines et al. 2014), but our approach can be used for other measurement models, such as working with binomial measurement models (Maclaren et al. 2017;Simpson et al. 2024) or multiplicative measurement models that are often used to maintain positive predictions (Murphy et al. 2024), as well as correlated noise models (Lambert et al. 
2023). Another point for future consideration is that the present study has dealt with prediction intervals in terms of the mean trajectories. When it comes to considering different measurement models it is also relevant to consider how trajectories for data realisations are impacted, since our previous experience has shown that different measurement models can lead to similar mean trajectories but very different data realisations (Simpson et al. 2024). In addition, these approaches can also be applied to stochastic models if a surrogate likelihood is available, as discussed in Simpson and Maclaren (2023) and implemented by Warne et al. (2024).

Fig. 1 Practical applications of logistic growth models in ecology and cell biology. a Location of Lady Musgrave Island (black disc) relative to the Australian mainland (inset, red disc). b Field data showing the time evolution of the percentage total hard coral cover, C^o(t) (blue discs), after some disturbance at Lady Musgrave Island monitoring site 1 (eAtlas 2023). Images in a-b reproduced with permission from Simpson et al. (2023). c Experimental images showing equatorial slices through tumour spheroids grown with melanoma cells. Images from left-to-right show spheroids harvested at 12, 14, 16, 18 and 21 days after formation (Browning et al. 2021). Each cross section shows that the spheroids grow as a three-layer compound sphere, with the inner-most sphere containing dead cells, the central spherical shell containing live but arrested cells that are unable to progress through the cell cycle, and the outer-most spherical shell containing live cells progressing through the cell cycle. d Summarises the dynamics of the outer-most radius of a group of spheroids of the same melanoma cell line grown from different numbers of cells, including spheroids initiated with 2500, 5000 and 10,000 cells, as indicated. Images in c-d reproduced with permission from Browning et al. (2021) (Color figure online)

Fig. 3 a-c Univariate profile likelihood functions for λ, d and K, respectively. Each profile likelihood function (solid blue) is superimposed with a vertical green line at the MLE and a horizontal red line showing the asymptotic 95% threshold at ℓ*_p = −Δ_{0.95,1}/2 = −1.92, where Δ_{q,n} refers to the qth quantile of the χ² distribution with n degrees of freedom (Royston 2007); here n = 1 for univariate profiles. Parameter estimates and 95% confidence intervals are: λ = 0.0094 [0.0067, 0.0183], d = 0.00192 [0, 0.010] and K = 99.90 [74.05, 200] (Color figure online)

Fig.
4 a-c Bivariate profile likelihood functions for ψ = (λ, d), ψ = (λ, K) and ψ = (d, K), respectively, shown in the left column, together with the associated prediction intervals in the right column. Contour plots in the left column show ℓ̂_p(ψ | C^o(t)) for each choice of ψ, as indicated. Each contour plot is superimposed with a contour showing the asymptotic threshold ℓ* = −Δ_{0.95,2}/2 = −3.00 (solid red curve) and the location of the MLE (cyan disc). Each of the bivariate profiles in the left column is constructed using a 500 × 500 uniform mesh, and parameter values lying within the contour where ℓ̂_p ≥ ℓ* are propagated forward to form the profile-wise prediction intervals in the right column (purple shaded region). Each prediction interval is superimposed with the MLE solution (cyan solid curve) (Color figure online)

Fig. 5 Prediction interval comparison. The prediction interval constructed from the gold standard full likelihood (solid gold region) is superimposed with the MLE solution (cyan solid curve) and the union of the upper and lower profile-wise intervals in Fig. 4 (dashed red curves) (Color figure online)
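As a complement to the profile-wise prediction construction described in the body of this study, the following Python sketch (again our own illustration, continuing the hypothetical grid-based example introduced earlier rather than reproducing the study's Julia code; all names are ours) shows how a bivariate profile for ψ = (λ, d) can be propagated forward to a curvewise, profile-wise prediction interval.

import numpy as np
from scipy.stats import chi2

# Continue from the grid sketch above: ll_hat is the normalised log-likelihood
# on the (lams, ds, Ks) grid and harvested_logistic is the model solution.
threshold2 = -chi2.ppf(0.95, df=2) / 2.0             # approximately -3.00
bivariate = ll_hat.max(axis=2)                       # profile out K for each (lambda, d)
k_opt = ll_hat.argmax(axis=2)                        # optimal K index for each (lambda, d)

t_fine = np.linspace(0.0, 1000.0, 200)
lower = np.full(t_fine.size, np.inf)
upper = np.full(t_fine.size, -np.inf)
for i in range(lams.size):
    for j in range(ds.size):
        if bivariate[i, j] >= threshold2:
            C = harvested_logistic(t_fine, 5.0, lams[i], ds[j], Ks[k_opt[i, j]])
            lower = np.minimum(lower, C)
            upper = np.maximum(upper, C)
# (lower, upper) traces out the profile-wise prediction interval for this pair;
# repeating for the other parameter pairs and taking the union approximates the
# full-likelihood prediction interval.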
12,502
2024-03-13T00:00:00.000
[ "Mathematics", "Biology", "Computer Science" ]
Enumerating Independent Linear Inferences A linear inference is a valid inequality of Boolean algebra in which each variable occurs at most once on each side. In this work we leverage recently developed graphical representations of linear formulae to build an implementation that is capable of more efficiently searching for switch-medial-independent inferences. We use it to find four `minimal' 8-variable independent inferences and also prove that no smaller ones exist; in contrast, a previous approach based directly on formulae reached computational limits already at 7 variables. Two of these new inferences derive some previously found independent linear inferences. The other two (which are dual) exhibit structure seemingly beyond the scope of previous approaches we are aware of; in particular, their existence contradicts a conjecture of Das and Strassburger. We were also able to identify 10 minimal 9-variable linear inferences independent of all the aforementioned inferences, comprising 5 dual pairs, and present applications of our implementation to recent `graph logics'. Introduction A linear inference is a valid implication ϕ → ψ of Boolean logic, where ϕ and ψ are linear, i.e. each variable occurs at most once in each of ϕ and ψ.Such implications have played a crucial role in many areas of structural proof theory.For instance the inference switch, s : x ∧ (y ∨ z) → (x ∧ y) ∨ z governs the logical behaviour of the multiplicative connectives `and ⊗ of linear logic [Gir87], and similarly the inference medial, m : (w ∧ x) ∨ (y ∧ z) → (w ∨ y) ∧ (x ∨ z) together with the structural rules weakening and contraction, governs the logical behaviour of the additive connectives ⊕ and & [Str02,Str03].Both of these inferences are fundamental to deep inference proof theory, in particular allowing weakening and contraction to be reduced to atomic form [BT01,Brü03a], thereby admitting elegant 'geometric' proof normalisation procedures based on atomic flows [GG08,Gun09].One particular feature of these normalisation procedures is that they are robust under the addition of further linear inferences to the system, thanks to the atomisation of structural steps. On the other hand the set of all linear inferences L plays an essential role in certain models of linear logic and related substructural logics.In particular, the multiplicative fragment of Blass' game semantics model of linear logic validates just the linear inferences (there called 'binary tautologies') [Bla92], and this coincides too with the multiplicative fragment of Japaridze's computability logic, cf., e.g., [Jap05].From a complexity theoretic point of view, the set L is sufficiently rich to encode all of Boolean logic: it is coNP-complete [Str12b,DS16]. It was recently shown by one of the authors, together with Straßburger, that, despite its significance, L admits no polynomial-time axiomatisation by linear inferences unless coNP = NP [DS15, DS16], resolving a long-standing open problem of Blass and Japaridze for their respective logics (see, e.g., [Jap17]).Note that the condition of polynomial-time computability naturally arises from proof theory [CR74,CR79], and is also required for the result to be meaningful: it prevents us just taking the entire set L as an axiomatisation. 
From a Boolean algebra point of view, this result means that the class of linear Boolean inequalities has no feasible basis (unless coNP = NP).From a proof theoretic point of view this means that any propositional proof system (in the Cook-Reckhow sense [CR74,CR79], see also [Kra19]) must necessarily admit some 'structural' behaviour, even when restricted to proving only linear inferences (unless coNP = NP). An immediate consequence of this result is that s and m above do not suffice to generate all linear inferences (unless coNP = NP), even modulo all valid linear equations (which are just associativity, commutativity, and unit laws, cf.[DS15,DS16]).In fact, this was known before the aforementioned result, due to the identification of an explicit 36 variable inference in [Str12b]. 1Already in that work the question was posed whether such an inference was minimal, and since then the identification of a minimal {s, m}-independent linear inference has been a recurring theme in the literature of this area. It has been verified in [Das13] that a minimal {s, m}-independent linear inference must be 'non-trivial', as long as we admit all valid linear equations (again, these are just associativity, commutativity and unit laws).Intuitively, 'non-triviality' rules out pathological inferences such as x ∧ y → x ∨ y or x ∧ (y ∨ z) → x ∨ (y ∧ z).For these inferences the variable, say, y is, in a sense, redundant; it turns out that they may be derived in {s, m}, modulo linear equations, from a smaller non-trivial 'core'.We recall these arguments in Section 2. Furthermore [Das13] identified a 10 variable linear inference that is not derivable by switch and medial (even under linear equations), which Straßburger conjectured was minimal [Str12a].Around the same time Šipraga attempted a computational approach, searching for independent linear inferences by brute force [ Š12].However, computational limits were reached already at 7 variables.In particular, every linear inference of up to 6 variables is already derivable by switch and medial, modulo linear equations; due to the aforementioned 10 variable inference, any minimal independent linear inference must have size 7,8,9, or 10. Since 2013 there have been significant advances in the area, in particular through the proliferation of graph-theoretic tools.Indeed, the interplay between formulae and graphs was heavily exploited for the aforementioned result of [DS15,DS16].Since then, multiple works have emerged in the world of linear proof theory that treat these graphs as 'first class citizens', comprising a now active area of research [NS18, AHS20, AHS22, CDW20]. Contribution.In this work we revisit the question of minimal {s, m}-independent linear inferences by exploiting the aforementioned recent graph theoretic techniques.Such an approach vastly reduces the computational resources necessary and, in particular, we are able to provide a conclusive result: the smallest {s, m}-independent linear inference has size 8.In fact there are four minimal (w.r.t.derivability) such ones, unique up to associativity, commutativity, renaming of variables, given by, and the De Morgan dual of Eq. (1.3).Note that Eq. (1.2) did not appear in the preliminary version [DR21] as it was erroneously considered to be isomorphic to the first.We dedicate some discussion to each of these separately in Section 3.2, and include a manual verification of their soundness and {s, m}-independence in Appendix A, as a sanity check. 
Our main contribution, as announced in the preliminary version of this paper [DR21], is an implementation that checks inference for {s, m}-derivability, which was able to confirm that all 7 variable linear inference are derivable from switch and medial.In fact we found (1.1) independently of the implementation presented in this paper. 2Ultimately, we improved the implementation to run on inferences of size 8 too, and our inference (1.1) was duly found, as well as (1.2), (1.3), and the dual of (1.3).One highlight of (1.3) is that it exhibits a peculiar structural property that refutes Conjecture 7.9 from [DS16], as we explain in Section 3.2.2. In this extended version, we also show how our search algorithm can be generalised to enumerate minimal linear inferences independent of an arbitrary set S of linear inferences.This can be employed to devise a recursive algorithm basis(n) that computes all n-variable minimal linear inferences independent of all smaller ones.To this end, in Section 6 we define the notion of an n-minimal set, which is intuitively a minimal set of inferences that derives all n-variable inferences modulo all < n-variable inferences.We show these n-minimal sets are unique up to linear equations and renaming of variables, and finally prove correctness of our algorithm basis(n) by extending some theoretical results on constant-freeness for switch and medial to the general setting of arbitrary n-minimal sets.For instance the 9-variable minimal linear inferences given by basis(9) are those not derivable from switch, medial, and the four aforementioned 8-variable linear inferences, instead of the weaker condition of not being derivable from only switch and medial.Indeed, we were able to successfully run basis(9) to compute the 9-minimal linear inferences.We classify these in Section 7, identifying 10 such inferences in total, comprising 5 dual pairs. Our implementation [Ric21] is split into a library and an executable, where the executable implements our search algorithm described in Section 5.2, and the library contains foundations for working with linear inferences using the graph theoretic techniques presented in Section 4. These are written in Rust and designed to be relatively fast while maintaining readability. In addition to having the machinery needed for our original search, the library has also been extended for to work with arbitrary graphs, e.g. as presented in [CDW20], in particular being able to check other notions of inference for graphs and having tools for determining whether an inference is an instance of an arbitrary, user-supplied graph rewrite.Our intention is that this could form a reusable base for future investigations in the area, both for linear formulae and for the recent linear graph theoretic settings of [NS18, AHS20, AHS22, CDW20]. As well as the additions already mentioned, this extended version contains several further proofs, examples and general discussion compared with the preliminary version [DR21]. Preliminaries In this section we present some preliminaries on linear inferences, triviality and minimality.The content of this section is based on Section 2 of the preliminary version, [DR21], with some additional proofs and narrative. 
Throughout this paper we shall work with a countably infinite set of variables, written x, y, z etc. A linear formula on a (finite) set of variables V is defined recursively as follows:
• ⊤ and ⊥ are linear formulae on ∅, the empty set of variables (called units or constants).
• x and ¬x are linear formulae on {x}, for each variable x.
• ϕ ∧ ψ and ϕ ∨ ψ are linear formulae on V ∪ V', whenever ϕ and ψ are linear formulae on disjoint variable sets V and V', respectively.
We adopt the expected Boolean semantics of (linear) formulae. Namely, we extend any map α from variables to {0, 1} (an 'assignment') to one on all (linear) formulae as follows: α(⊤) = 1 and α(⊥) = 0; α(¬x) = 1 − α(x); α(ϕ ∧ ψ) = min(α(ϕ), α(ψ)); and α(ϕ ∨ ψ) = max(α(ϕ), α(ψ)). The formula ϕ satisfies an assignment α if α(ϕ) = 1. Formulae ϕ and ψ are logically equivalent if they satisfy the same assignments.

Remark 2.1 (On negation). Note that our restriction of negation to only propositional variables is made without loss of generality. Namely, we may recover ¬ϕ thanks to De Morgan's laws by setting: ¬⊤ := ⊥, ¬⊥ := ⊤, ¬¬x := x, ¬(ϕ ∧ ψ) := ¬ϕ ∨ ¬ψ, and ¬(ϕ ∨ ψ) := ¬ϕ ∧ ¬ψ. Note in particular that, if ϕ is a linear formula on V, then so is ¬ϕ.

A linear formula that does not contain ⊤ or ⊥ is constant-free. A linear formula with no negated variables (i.e. no subformulae of the form ¬x) is negation-free. Later in the paper, we will be able to restrict our search to inferences between constant-free negation-free formulae. In what follows, we shall omit explicit consideration of variable sets, assuming that they are disjoint whenever required by the notation being used. A binary relation ∼ on linear formulae is closed under contexts if we have, for all ϕ, ψ, and χ: if ϕ ∼ ψ then ϕ ∧ χ ∼ ψ ∧ χ, χ ∧ ϕ ∼ χ ∧ ψ, ϕ ∨ χ ∼ ψ ∨ χ, and χ ∨ ϕ ∼ χ ∨ ψ. An equivalence relation (on linear formulae) that is closed under contexts is called a (linear) congruence.

Definition 2.2 (Linear equations). Let ∼ac be the smallest congruence satisfying associativity and commutativity of ∧ and ∨, let ∼u be the smallest congruence satisfying the unit laws (2.1): ϕ ∧ ⊤ ∼ ϕ, ϕ ∨ ⊥ ∼ ϕ, ϕ ∨ ⊤ ∼ ⊤, and ϕ ∧ ⊥ ∼ ⊥, and let ∼acu be the smallest congruence containing both ∼ac and ∼u.

Remark 2.3 (Variable sets under congruence). Suppose ϕ and ϕ' are linear formulae on V and V' respectively. If ϕ ∼ac ϕ' then V = V'.

Note also that ∼u generates a unique normal form of linear formulae by maximally eliminating constants:

Proposition 2.4 (Folklore). Every formula is ∼u-equivalent to a unique constant-free formula, or is equivalent to ⊥ or ⊤.

Proof. Construe the equations of Eq. (2.1) as a rewrite system by orienting them left-to-right. This system reduces the size of formulae and so must terminate. By inspection, the normal form must contain no constant under the scope of a connective ∨ or ∧. For uniqueness, it suffices to consider the critical pairs. In fact, the analysis is particularly simple since any critical pair is a ∨ or ∧ of constants that always reduces to the same constant. For the above pairs, we have indicated a reduction, say ⊤ ∧ ϕ → ϕ or ϕ ∨ ⊤ → ⊤, by ∧ or ∨ respectively.

Remark 2.5 (On logical equivalence). Clearly, if ϕ ∼acu ψ then ϕ and ψ are logically equivalent. In fact, for linear formulae, we also have a converse: two linear formulae ϕ and ψ on the same variables are logically equivalent if and only if ϕ ∼acu ψ [DS15, DS16]. Furthermore, given two linear formulae ϕ and ψ that are constant-free, ⊥, or ⊤, then ϕ and ψ are logically equivalent if and only if they are ∼ac-equivalent. These properties follow from Proposition 2.4 above, the results of Section 2.2, and the graphical representation of linear formulae and their semantics in Section 4.

Linear inferences.
A linear inference is just a valid implication ϕ → ψ (i.e., α(ϕ) = 1 =⇒ α(ψ) = 1) where ϕ and ψ are linear formulae.The left-hand side (LHS) and right-hand side (RHS) of a linear inference, generally speaking, need not be linear formulae on the same variables.Nonetheless we shall occasionally refer to linear inferences "on V" or "on n variables", assuming that the LHS and RHS are both linear formulae on some fixed V with |V| = n. There are two linear inferences we shall particularly focus on, due to their prevalence in structural proof theory.Switch is the following inference on 3 variables, and medial is the following inference on 4 variables: We may compose switch and medial (and more generally an arbitrary set of linear inferences) to form new linear inferences by construing them as term rewriting rules.More generally, we will consider rewriting derivations modulo the equivalence relations ∼ ac and ∼ acu we introduced earlier.In the latter case, as previously mentioned, the underlying set of variables may change during a derivation, though Proposition 2.4 will later allow us to work with some fixed set of variables throughout {s, m} derivations.Definition 2.6 (Rewriting).Let S be a set of linear inferences.We write → S for the term rewrite system generated by S. I.e., → S is the smallest relation containing each inference of S and closed under substitution and contexts. We write S for the closure of → S modulo ac, i.e. ϕ S ψ if there are ϕ , ψ such that ϕ ∼ ac ϕ → S ψ ∼ ac ψ.In this case we may say that the inference ϕ → ψ is an instance (without units) of some inference in S. We write Su for the closure of → S modulo acu, i.e. ϕ Su ψ if there are ϕ , ψ such that ϕ ∼ acu ϕ → S ψ ∼ acu ψ.Similarly in this case we may simply say the inference is an instance (with units) of some inference in S. We write * S , or * Su , for the smallest reflexive transitive relation containing S and ∼ ac , or Su and ∼ acu , respectively, and say that a linear inference ϕ → ψ is S-derivable (without units) (or S-derivable with units) if ϕ * S ψ (or ϕ * Xu ψ, respectively). We shall typically write such sets S by just listing their linear inferences, e.g.we write * ms instead of * {m,s} and * msu instead of * {m,s}u , in order to lighten notation.Clearly, s and m are valid, so any derivation ϕ * msu ψ comprises a linear inference. Example 2.7 (Weakening and duality).The following derives ⊥ * su χ for any χ: Using this, we may {s}-derive weakening, ϕ → ϕ ∨ χ, with units as follows: Notice that * su is closed under De Morgan duality: If ϕ * su χ and φ and χ are obtained from ϕ and χ, respectively, by flipping each ∨ to a ∧ and vice versa, then χ * su φ.This follows by direct inspection of s and each clause of ∼ acu ; indeed the same property holds for * s , * ms , and * msu by the same reasoning.As a result, we also have that coweakening, ϕ ∧ χ → ϕ, is {s}-derivable with units. Example 2.8 ('Mix').Units can help us derive even constant-free linear inferences.For instance, mix: ϕ ∧ ψ → ϕ ∨ ψ can be derived as a composite of coweakening and weakening: The following is one of the main results of this paper: Theorem 2.9.Suppose ϕ is a linear formula over V 1 and ψ is a linear formula over V 2 and r : ϕ → ψ is a linear inference.Then if |V 1 ∩ V 2 | ≤ 7 we have that ϕ * msu ψ.Furthermore, there is a valid linear inference ϕ → ψ on 8 variables with ϕ * msu ψ, so 7 is maximal with the property above. 
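For readers who want to experiment, validity of a linear inference can be checked directly from the Boolean semantics by enumerating assignments. The Python sketch below is our own illustration (the paper's implementation is written in Rust and works with graph representations rather than formula trees, and all names here are ours); it represents constant-free negation-free formulae as nested tuples and confirms that switch, medial and mix are all valid.

from itertools import product

def evaluate(formula, assignment):
    # Formulae are variables (strings) or tuples ("and"/"or", left, right).
    if isinstance(formula, str):
        return assignment[formula]
    op, left, right = formula
    l, r = evaluate(left, assignment), evaluate(right, assignment)
    return (l and r) if op == "and" else (l or r)

def variables(formula):
    if isinstance(formula, str):
        return {formula}
    return variables(formula[1]) | variables(formula[2])

def is_valid_inference(lhs, rhs):
    # Valid iff every assignment satisfying the LHS also satisfies the RHS.
    vs = sorted(variables(lhs) | variables(rhs))
    return all(
        evaluate(rhs, dict(zip(vs, bits))) >= evaluate(lhs, dict(zip(vs, bits)))
        for bits in product([0, 1], repeat=len(vs))
    )

switch = (("and", "x", ("or", "y", "z")), ("or", ("and", "x", "y"), "z"))
medial = (("or", ("and", "w", "x"), ("and", "y", "z")),
          ("and", ("or", "w", "y"), ("or", "x", "z")))
mix = (("and", "x", "y"), ("or", "x", "y"))
print(all(is_valid_inference(l, r) for l, r in (switch, medial, mix)))  # True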
This theorem was the original objective of the preliminary version of this paper [DR21]. Indeed, before this it was not known if the smallest linear inference independent of switch and medial used 7 or 8 variables. The rest of Section 2 and Sections 4 and 5 work towards a proof of this theorem.
Trivial inferences. In order to state Theorem 2.9 above in its most general form, we have allowed linear formulae to include constants and negation, and linear inferences to be between formulae with different variable sets. However it turns out that we may proceed to prove Theorem 2.9, without loss of generality, by working with derivations of only constant-free, negation-free formulae on some fixed set of variables, as was already shown in [Das13]. This is done by defining the notion of a trivial inference, whose {s, m}-derivability, with units, may be reduced to that of a smaller non-trivial inference.
Definition 2.10. An inference ϕ → ψ is trivial at a variable x if ϕ[⊤/x] → ψ[⊥/x] is again a valid inference. An inference is trivial if it is trivial at one of its variables.
Example 2.11. The mix inference from Example 2.8, x ∧ y → x ∨ y, is trivial at x and trivial at y. Note, however, that it is not trivial at x and y 'at the same time', in the sense that the simultaneous substitution of ⊤ for x and y in the LHS and ⊥ for x and y in the RHS does not result in a valid implication. In contrast, the linear inference w ∧ (x ∨ y) → w ∨ (x ∧ y) from [Das13] is, indeed, trivial at x and y 'at the same time'. Neither switch nor medial are trivial.
Remark 2.12 (Global vs local triviality). Note that triviality is closed under composition by linear inferences: if ϕ → ψ is trivial at x and ψ → χ is valid, then ϕ → χ is trivial at x. Similarly for χ → ψ if χ → ϕ is valid. One pertinent feature is that the converse does not hold: there are 'globally' trivial derivations that are nowhere 'locally' trivial. For instance, consider the derivation from [DS16, Remark 5.6]: the derived inference is just an instance of mix, from Example 2.8, on the redex w ∧ x, which is trivial; however, no local step is trivial.
To prove (the first half of) Theorem 2.9, in Section 5 we will actually prove the following apparent weakening of that statement:
Theorem 2.13. Let n < 8. Let ϕ and ψ be constant-free negation-free linear formulae on n variables and suppose ϕ → ψ is a non-trivial linear inference. Then ϕ * ms ψ.
In fact this statement is no weaker at all, and we will now see how the consideration of triviality allows us to only deal with such special cases without loss of generality. We first show that it is not necessary to consider linear inferences where the premise and conclusion have different variables.
Proposition 2.14. Let ϕ and ψ be linear formulae on V1 and V2, respectively, and let r : ϕ → ψ be a linear inference. Then there is r′ : ϕ′ → ψ′ on V1 ∩ V2 such that r is {s, m, r′}-derivable with units.
Proof. First note that by setting ϕ = ⊤ in Example 2.8 we have, for any ψ,
ψ ∼acu ⊤ ∧ ψ *msu ⊤ ∨ ψ ∼acu ⊤
and by setting ψ = ⊥ we have, for all ϕ, that ⊥ ∼acu ϕ ∧ ⊥ *msu ϕ ∨ ⊥ ∼acu ϕ. Now we can let ϕ′ be ϕ with every variable in V1 \ V2 (or its negation) replaced with ⊤, and ψ′ be ψ with every variable in V2 \ V1 (or its negation) replaced with ⊥. Then r′ : ϕ′ → ψ′ is clearly a linear inference (since r is valid w.r.t. all assignments) and only uses the variables V1 ∩ V2. Furthermore we have the derivation ϕ *msu ϕ′ →r′ ψ′ *msu ψ, where the first step applies the derivation of x *msu ⊤ above at each variable of V1 \ V2, and the last step applies ⊥ *msu x at each variable of V2 \ V1.
The key use of triviality is that if an inference is trivial, then there is a smaller non-trivial inference from which it can be derived.
Proposition 2.15 [Das13, Theorem 34]. Let ϕ and ψ be linear formulae on V, and let r : ϕ → ψ be a trivial linear inference. Then there is a non-trivial linear inference r′ : ϕ′ → ψ′ on some V′ ⊊ V such that r : ϕ → ψ is {s, m, r′}-derivable with units.
Note in particular that, in both statements above, if r′ is {s, m}-derivable with units, then so is r. This is also the case for the next result.
Proposition 2.16. Let r : ϕ → ψ be a non-trivial linear inference on V. Then there are constant-free negation-free linear formulae ϕ′ and ψ′ on V such that r′ : ϕ′ → ψ′ is a non-trivial linear inference and r is {s, m, r′}-derivable with units.
Proof. First, note that both ϕ and ψ must be linear formulae on V, since ϕ → ψ is non-trivial. For the same reason, no variable can occur positively in ϕ and negatively in ψ or vice-versa, since ϕ → ψ is non-trivial, and so any negated variable may be safely replaced by its positive counterpart. From here, we simply set ϕ′ and ψ′ to be the constant-free formulae (uniquely) obtained from Proposition 2.4 by ∼u. Non-triviality of r′ follows from that of r by logical equivalence.
Corollary 2.17. The statement of Theorem 2.13 implies (the first half of) the statement of Theorem 2.9.
Proof. Let r be as in Theorem 2.9. Let r′ be the inference where the LHS and RHS have the same variable set obtained from Proposition 2.14, let r′′ be the non-trivial linear inference obtained by Proposition 2.15 above, and finally let r′′′ be the non-trivial constant-free negation-free linear inference obtained by Proposition 2.16. By Theorem 2.13, r′′′ is {s, m}-derivable and so, by Proposition 2.14, Proposition 2.15, and Proposition 2.16, r is also {s, m}-derivable with units.
It is clear that if an inference is derivable with switch and medial then it is also derivable with switch, medial, and units. The following proposition, while not necessary for the proof of Corollary 2.17, allows the converse in some cases, and is the reason why our search algorithm in Section 5 will only check for {s, m}-derivability.
Proposition 2.18. Let ϕ → ψ be a non-trivial constant-free negation-free linear inference. If ϕ *msu ψ then ϕ * ms ψ.
Proof. Suppose we have the following derivation of ϕ → ψ, a non-trivial constant-free negation-free linear inference:
ϕ ∼acu ϕ0 →ms ψ0 ∼acu ϕ1 →ms ψ1 ∼acu · · · →ms ψk ∼acu ψ    (2.4)
Using Proposition 2.4, we can reduce any formula ϕ to a ∼u-equivalent constant-free formula (or ⊥ or ⊤), which we will denote ϕ↓. We can now reduce each ϕi and ψi to ϕi↓ or ψi↓ respectively. Now each ∼acu can be replaced by a ∼ac by Remark 2.5, as all the involved formulae are constant-free or ⊥ or ⊤. It remains to show that each rewrite is still valid. We prove by induction on the definition of →ms (namely as the closure of ms under contexts and substitution) that if ϕ →ms ψ then either ϕ↓ ∼ac ψ↓ or ϕ↓ ms ψ↓.
• Suppose we have an instance of the form ϕ ∧ χ → ψ ∧ χ, with ϕ →ms ψ.
 - If χ↓ = ⊥ then (ϕ ∧ χ)↓ = ⊥ = (ψ ∧ χ)↓, and we are done.
 - If χ↓ = ⊤ then (ϕ ∧ χ)↓ = ϕ↓ and (ψ ∧ χ)↓ = ψ↓, whence we conclude by the inductive hypothesis for ϕ →ms ψ.
 - Otherwise χ↓ is not a constant, and so ϕ↓ and ψ↓ must also not be constants, or the inference would be trivial. Therefore, (ϕ ∧ χ)↓ = ϕ↓ ∧ χ↓ and (ψ ∧ χ)↓ = ψ↓ ∧ χ↓. By the inductive hypothesis for ϕ → ψ we have ϕ↓ ∼ac ψ↓ or ϕ↓ ms ψ↓, whence we conclude by context closure of ∼ac and ms.
• The other cases for closure under contexts follow similarly. However any such r is either an instance of ∼ ac or s .(This can be checked by hand or using our computer implementation presented in Section 5).Now, returning to Eq. (2.4), we have by the above argument that each ϕ i ∼ ac ψ i or ϕ i ms ψ i .Since ms = ∼ ac • → ms • ∼ ac , we can locally merge any redundant ac steps to obtain that, indeed ϕ * ms ψ . Minimality of inferences. Let us take a moment to explain the various notions of 'inference minimality' that we shall mention in this work. Size minimality refers simply to the number of variables the inference contains.E.g. when we say that the 8-variable inferences in the next section are size minimal (or 'smallest') non-{s, m}-derivable with units (or {s, m}-independent) linear inferences, we mean that there are no {s, m}-independent linear inferences with fewer variables. A linear inference ϕ → ψ is logically minimal if there is no logically distinct interpolating linear formula.I.e. if ϕ → χ and χ → ψ are linear inferences, then χ is logically equivalent to ϕ or ψ (and so, by Remark 2.5, is ∼ acu -equivalent to ϕ or ψ). Finally, a linear inference ϕ → ψ is {s, m}-minimal if there is no formula χ s.t.ϕ ms χ or χ ms ψ and χ → ψ or ϕ → χ, respectively, is a valid linear inference which is not a logical equivalence. It is clear from the definitions that any logically minimal inference is also {s, m}-minimal, though the converse may not be true.The reason for considering {s, m}-minimality is that it is easier to systematically check by hand.In fact, the implementation we give later in Section 5 further verifies that our new 8-variable inferences are logically minimal. Logical minimality also serves an important purpose for our proof of Theorem 2.13, as it allows the following reduction, greatly reducing the search space for our implementation, in fact to nearly 1% of its original size for 8 variable inferences: 3 Lemma 2.19.Suppose the statement of Theorem 2.13 holds whenever ϕ → ψ is logically minimal.Then the statement of Theorem 2.13 holds (even when ϕ → ψ is not logically minimal). Proof.Suppose we have a non-trivial inference between constant-free negation-free linear inferences ϕ → ψ.Then ϕ → ψ can be refined into a chain of logically minimal linear All of these must be non-trivial, as triviality of any of them would imply triviality of ϕ → ψ, cf.Remark 2.12.Therefore if all such inferences are derivable from switch and medial (with units) then so is ϕ → ψ, by transitivity. New 8-variable {s, m}-independent linear inferences In this section we shall present the new 8-variable linear inferences of this work ((1.1), (1.2), and (1.3) from the introduction), and give self-contained arguments for their {s, m}independence and {s, m}-minimality, as a sort of sanity check for the implementation described in the next section.We shall also briefly discuss some of their structural properties, in reference to previous works in the area.Thanks to the results of the previous section, in particular Proposition 2.16 and Remark 2.12, we shall only consider non-trivial constant-free negation-free linear inferences with the same variables in the LHS and RHS.Furthermore, by Proposition 2.18 we shall only consider {s, m}-derivability (i.e.without units). The content of this section is based on Section 3 of the preliminary version, [DR21], with the addition of the inference (1.2) and surrounding discussion that was previously overlooked. 
3.1.Previous linear inferences.In [Str12b] Straßburger presented a 36-variable inference that is {s, m}-independent, by an encoding of the pigeonhole principle with 4 pigeons and 3 holes.He there referred to it as a 'balanced' tautology, but in our setting it is a linear inference that can be written as follows: In [Das13] Das noticed that a more succinct encoding of the pigeonhole principle could be carried out, with only 3 pigeons and 2 holes, resulting in a 10-variable {s, m}-independent linear inference.A variation of that, e.g. as used in [Das17], is the following: In fact this is not a {s, m}-minimal inference, but we write this one here for comparison to two of the new 8-variable inferences in the next subsection.It can be checked valid and non-trivial by simply checking all cases, or by use of a solver.We do not give an argument for {s, m}-independence here, but such an argument is similar to the one we give for an 8-variable inference Eq. (3.2) given in the next subsection. Remark 3.1 (Deriving 3-2-pigeonhole using contraction).It is known that, when we add to {s, m} structural rules that allow us to 'duplicate' and 'erase' propositional variables then the resulting (non-linear) system can derive every linear inference.This is thanks to the completeness of the deep inference system SKS for propositional logic and its cut-elimination theorem (see, e.g., [Brü03b]).With respect to Eq. (3.1) above, we need only a single 'duplication' on z or z .For instance, here is the corresponding derivation for z, 3.2.The four minimal 8 variable {s, m}-independent linear inferences.Pre-empting Section 5.2, let us explicitly give the four minimal linear inferences found by our algorithm and justify their {s, m}-independence and {s, m}-minimality, as a sort of sanity check for our implementation later.As we will see, they turn out to be significant in their own right, which is why we take the time to consider them separately. 3.2.1.Refinements of the 3-2-pigeonhole-principle.First let us consider an 8 variable linear inference that may be used to derive Eq. (3.1) (identical to (1.1) from the introduction): Recalling the notion of 'duality' from Example 2.7, let us formally define the dual of a linear inference ϕ → χ to be the linear inference χ → φ, where φ and χ are obtained from ϕ and χ, respectively, by flipping all ∨s to ∧s and vice-versa.Considering linear inferences up to renaming of variables, we have: Indeed, the formula structure of the RHS is clearly the dual of that of the LHS, and the mapping from a variable in the LHS to the variable at the same position in the RHS is, in fact, an involution.I.e., u is mapped to itself; v is mapped to y which in turn is mapped to v; v is mapped to w which is in turn mapped to v ; x is mapped to y which in turn is mapped to x; and z is mapped to itself.Validity may be routinely checked by any solver, but we give a case analyis of assignments in Appendix A.1. We may also establish {s, m}-independence and {s, m}-minimality by checking all applications of s or m to the LHS (note that we do not need to check the RHS, by Observation 3.2 above).This analysis is given explicitly in Appendix A.4. We also found the following inference (identical to Eq. 
(1.2)): At first glance this looks very similar to (3.2), however it turns out that it is not isomorphic to it, and neither is derivable from the other.Like (3.2), it is also self dual.Validity of this inference is checked in Appendix A.2, and its {s, m}-minimality is checked in Appendix A.5 (similarly to the (3.2), by duality we only need to check applications of s and m from the LHS). 3.2.2. Counterexamples to a conjecture of Das and Straßburger.Finally, our search algorithm found two completely new linear inferences, which were dual to eachother.One of them (identical to (1.3) from the introduction) is: with the other given by dualising the above inference.Again, validity is routine, but a case analysis is given in Appendix A.3.We may establish {s, m}-independence and {s, m}-minimality again by checking all possible rule applications.This analysis is given in Appendix A.6.This new inference exhibits a rather interesting property, which we shall frame in terms of the following notion, since it will be used in the next section: Definition 3.4.Let ϕ be a linear formula on a variable set V. For distinct x, y ∈ V, the least common connective (lcc) of x and y in ϕ is the connective ∨ or ∧ at the root of the smallest subformula of ϕ containing both x and y. Note that, in the inference (3.4) above, the lcc of w and x changes from ∨ to ∧, but the lcc of y and y changes from ∧ to ∨.No such example of a minimal linear inference exhibiting both of these properties was known before; switch, medial and all of the linear inferences of this section either preserve ∨-lccs or preserve ∧-lccs.In fact, Das and Straßburger showed that any valid linear inference preserving ∧-lccs is already derivable by medial [DS16, Theorem 7.5].Furthermore they made the following conjecture: Conjecture 7.9 from [DS16], rephrased.Each logically minimal nontrivial linear inference preserves either ∧-lccs or ∨-lccs. We shall see several further counterexamples over 9 variables later in Section 7, where we also discuss possible consequences and trends arising from all these new inferences. A graph-theoretic presentation of linear inferences In this section we consider graphical representations of formulas that render our search algorithm in Section 5 more efficient.The content of this section is based on Section 4 from the preliminary version, [DR21], with the addition of further examples, figures and narrative. A significant cause of algorithmic complexity when searching for linear inferences is the multitude of formulae equivalent modulo associativity and commutativity (∼ ac ).For example, for 7 variables, there are 42577920 formulae (ignoring units), yet only 78416 equivalence classes.Under Remark 2.5 it would be ideal if we could deal with ∼ ac -equivalence classes directly, realising logical and syntactic notions on them in a natural way.This is precisely what is accomplished by the graph-theoretic notion of a relation web, cf.[Gug07, Str07, DS15, DS16]. Throughout this section we work only with constant-free negation-free linear formulae, cf.Theorem 2.13.Recall the notion of least common connective (lcc) from Definition 3.4.Definition 4.1.Let ϕ be a linear formula on a variable set V. The relation web (or simply web) of ϕ, written W(ϕ), is a simple undirected graph with: • The set of nodes of W(ϕ) is just V, i.e. the variables occurring in ϕ. • For x, y ∈ V, there is an edge between x and y in W(ϕ) if the lcc of x and y in ϕ is ∧. 
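Definition 4.1 can be read off directly as a recursive computation: the web of a disjunction is the union of the webs of its arguments, while a conjunction additionally connects every variable on one side to every variable on the other. The following is a minimal sketch of that recursion, written in Rust since that is the language of the implementation described later; the type and function names are our own and do not reflect the actual library.

```rust
use std::collections::BTreeSet;

/// Constant-free negation-free linear formulae (cf. Theorem 2.13), variables as indices.
enum Formula {
    Var(usize),
    And(Box<Formula>, Box<Formula>),
    Or(Box<Formula>, Box<Formula>),
}

impl Formula {
    fn vars(&self) -> Vec<usize> {
        match self {
            Formula::Var(x) => vec![*x],
            Formula::And(l, r) | Formula::Or(l, r) => {
                let mut v = l.vars();
                v.extend(r.vars());
                v
            }
        }
    }

    /// Edges of the relation web W(ϕ): x -- y iff the least common connective of
    /// x and y in ϕ is ∧, i.e. iff x and y first meet under an And node.
    fn web(&self) -> BTreeSet<(usize, usize)> {
        match self {
            Formula::Var(_) => BTreeSet::new(),
            Formula::Or(l, r) => l.web().union(&r.web()).cloned().collect(),
            Formula::And(l, r) => {
                let mut edges: BTreeSet<_> = l.web().union(&r.web()).cloned().collect();
                for x in l.vars() {
                    for y in r.vars() {
                        edges.insert((x.min(y), x.max(y)));
                    }
                }
                edges
            }
        }
    }
}

fn main() {
    use Formula::*;
    // w ∧ (x ∧ (y ∨ z)) from Example 4.2, with w, x, y, z as 0, 1, 2, 3.
    let phi = And(
        Box::new(Var(0)),
        Box::new(And(Box::new(Var(1)), Box::new(Or(Box::new(Var(2)), Box::new(Var(3)))))),
    );
    // Every pair is an edge except (2, 3), i.e. y -- z, whose lcc is ∨.
    println!("{:?}", phi.web());
}
```

Running this on the formula of Example 4.2 yields every pair as an edge except the pair (y, z), matching the web drawn there.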
When we draw graphs, we will draw a solid red line x y if there is an edge between x and y, and a green dotted line x y otherwise. Example 4.2.Let ϕ be the linear formula w ∧ (x ∧ (y ∨ z)).W(ϕ) is the following graph: w x y z Note that linear formulae equivalent up to associativity and commutativity have the same relation web, since ∼ ac does not affect the lccs.For instance, if ψ = (w ∧ x) ∧ (z ∨ y), then W(ψ) is still just the relation web above.In fact, we also have the converse: Proposition 4.3 (E.g., [DS16], Proposition 3.5).Given linear formulae ϕ and ψ, ϕ ∼ ac ψ if and only if W(ϕ) = W(ψ). Thus relation webs are natural representations of equivalence classes of linear formulae modulo associativity and commutativity. We can also realise our new 8-variable inferences from the previous section graphically: Example 4.4.The inferences (3.2), (3.3) and (3.4) may be construed as graph rewrites on their webs, as given in Fig. 1.The one for (3.4) is obtained from that of (3.4) by switching every edge to a non-edge and vice-versa, and exchanging the LHS and RHS.Note, for (3.4), that the edge between y and z changes from red to green, and several edges (e.g.x to y) change from green to red, exhibiting how this inference breaks Conjecture 7.9 from [DS16]. It is easy to see that the image of W is just the cographs.A cograph is either a single node, or has the form R S or R S for cographs R and S. Formally, R S has as nodes the disjoint union of the nodes of R and the nodes of S; edges within the R component are inherited from R and similarly for S; there is also an edge between every node in R and every node in S. R S is defined similarly, but without the last clause.A cograph decomposition of a cograph R is just a definition tree according to these construction rules (its 'cotree'), which we identify with a linear formula (whose web is R) in the natural way, setting ∧ for a node and ∨ for a node.By Proposition 4.3, the cograph decomposition of a cograph is unique up to associativity and commutativity of ∧ and ∨. Cographs admit an elegant local characterisation by means of forbidden subgraphs: Definition 4.5.An induced subgraph of a graph G is one whose edges are just those of G restricted to some subset of the nodes.A graph G is P 4 -free if none of its induced subgraphs are isomorphic to the following graph, called P 4 : w x y z Further define a clique P of a graph G to be a subset of the nodes of G such that the induced subgraph on these nodes has all possible edges.Dually define a stable set to be a subset of the nodes whose induced subgraph contains no edges.A clique or stable set is maximal if any proper superset is no longer a clique or stable set respectively. The following result is well-known since (at least) the '70s, and has appeared numerous times in the literature (see also [Gug07,Str07]): Proposition 4.6 (E.g.[CLB81]).A graph is a cograph if and only if it is P 4 -free. Thus, relation webs are just the P 4 -free graphs whose nodes are variables.Note, in particular, that this characterisation gives us an easy way to check whether a graph is the web of some formula: just check every 4-tuple of nodes for a P 4 .In both these cases, the induced subgraph generated from {u, w, x, y} is P 4 . 
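Proposition 4.6 suggests an immediate, if naive, membership test, exactly as noted above: enumerate all 4-subsets of nodes and reject the graph if any of them induces a P4. A sketch under an adjacency-matrix representation (names are ours, not the library's); an induced subgraph on 4 nodes is a P4 precisely when it has 3 edges and degree sequence (1, 1, 2, 2).

```rust
/// A small undirected graph on n nodes, adjacency stored as a matrix.
struct Graph {
    n: usize,
    adj: Vec<Vec<bool>>,
}

impl Graph {
    fn new(n: usize, edges: &[(usize, usize)]) -> Self {
        let mut adj = vec![vec![false; n]; n];
        for &(x, y) in edges {
            adj[x][y] = true;
            adj[y][x] = true;
        }
        Graph { n, adj }
    }

    /// A graph is a cograph (the web of some linear formula) iff it is P4-free.
    fn is_p4_free(&self) -> bool {
        for a in 0..self.n {
            for b in a + 1..self.n {
                for c in b + 1..self.n {
                    for d in c + 1..self.n {
                        let nodes = [a, b, c, d];
                        let mut deg = [0usize; 4];
                        let mut edges = 0;
                        for i in 0..4 {
                            for j in i + 1..4 {
                                if self.adj[nodes[i]][nodes[j]] {
                                    deg[i] += 1;
                                    deg[j] += 1;
                                    edges += 1;
                                }
                            }
                        }
                        deg.sort();
                        // 3 edges with degrees (1, 1, 2, 2) is exactly an induced P4.
                        if edges == 3 && deg == [1, 1, 2, 2] {
                            return false;
                        }
                    }
                }
            }
        }
        true
    }
}

fn main() {
    // w -- x -- y -- z is a P4 itself, hence not P4-free.
    let p4 = Graph::new(4, &[(0, 1), (1, 2), (2, 3)]);
    // The web of w ∧ (x ∧ (y ∨ z)) from Example 4.2 is P4-free.
    let web = Graph::new(4, &[(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]);
    println!("{} {}", p4.is_p4_free(), web.is_p4_free()); // false true
}
```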
What is more, we may also verify several semantic properties of linear inferences, such as validity and triviality, directly at the level of relation webs: Proposition 4.8 (Follows from Proposition 4.4 and Theorem 4.6 in [DS16]).Let ϕ and ψ be linear formulae on the same set of variables with webs R and S respectively.The following are equivalent: (1) ϕ → ψ is a valid linear inference. (2) For every maximal clique P of R, there is some Q ⊆ P such that Q is a maximal clique of S. (3) For every maximal stable set Q of S, there is some P ⊆ Q such that P is a maximal stable set of R. Note that for inferences between arbitrary graphs (i.e.graphs that are not P 4 -free), conditions (2) and (3) above are not equivalent.In fact, (3) holds for R → S exactly when (2) holds for S → R, where R is the dual of R (inverting every edge). Example 4.9 [DS16].Recall the definition of P 5 and C 5 from Example 4.7.It can be checked that P 5 → C 5 satisfies (2) but C 5 → P 5 does not (as {u, z} is a maximal clique of C 5 with no subset that is a maximal clique of P 5 ).Conversely C 5 → P 5 does satisfy (3) but P 5 → C 5 does not (for example {x, z} is a maximal stable set of C 5 but none of {x, z} is stable but not maximal in P 5 as {u, x, z} is stable). Proposition 4.10 [DS16, Proposition 5.7].Let ϕ and ψ be linear formulae on the same variables with webs R and S respectively.The following are equivalent: (1) ϕ → ψ is a linear inference that is trivial at x. (2) For every maximal clique P of R, there is some Q ⊆ P \ {x} such that Q is a maximal clique of S. (3) For every maximal stable set Q of S there is some P ⊆ Q \ {x} such that P is a maximal stable set of R. Note that the criterion for triviality is a strict strengthening of that for validity, as we would expect.Again note the duality between the second and third characterisations of triviality above.It is easy to see that the validity criterion of Proposition 4.8 holds for each of these rules.For s, the maximal cliques {x, y} and {x, z} in the LHS are mapped to {x, y} and {z} in the RHS respectively.For m, the maximal cliques {w, x} and {y, z} in the LHS are mapped to themselves in the RHS.Now consider the trivial inference x ∧ y → x ∨ y, construed as the graph rewrite rule: We can easily verify the criterion for triviality at x from Proposition 4.10 since the only maximal clique on the LHS, {x, y} has {y} ⊆ {x, y} \ {x} as a maximal clique on the RHS. Example 4.12 (Supermix).A further interesting example is given by a class of inferences from [Das13] known as 'supermix'.These have the following form: Each of these inferences is trivial, as long as n > 0, and moreover they are trivial at each b i 'at the same time'.Note that, for n > 3, they are examples of inferences that that have more ∧ symbols in the RHS than the LHS, and for n > 4 more edges in the RHS than the LHS.For instance, considering n = 4, we can see that there are 4 edges in the LHS but 4 2 = 6 edges in the RHS.The only other inference so far that increases the number of edges is medial (which, nonetheless, reduces the number of ∧ symbols in a formula).It is not known whether any non-trivial linear inference has both of these properties. 
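Propositions 4.8 and 4.10 reduce validity and triviality checking to containment tests between maximal cliques. If cliques are represented as bitmasks (one bit per node, as in the implementation section below), both conditions become a few bitwise operations. The following sketch assumes the clique lists have already been computed; the function names are our own.

```rust
/// A maximal clique over (at most) 32 nodes, one bit per node.
type Clique = u32;

/// Condition (2) of Proposition 4.8: R → S (each web given by its maximal cliques)
/// is a valid linear inference iff every maximal clique P of R contains some
/// maximal clique Q of S.
fn is_valid(cliques_r: &[Clique], cliques_s: &[Clique]) -> bool {
    cliques_r
        .iter()
        .all(|&p| cliques_s.iter().any(|&q| q & p == q)) // q & p == q means Q ⊆ P
}

/// Condition (2) of Proposition 4.10: the inference is trivial at node x iff every
/// maximal clique P of R contains some maximal clique Q of S with x ∉ Q.
fn is_trivial_at(cliques_r: &[Clique], cliques_s: &[Clique], x: usize) -> bool {
    let x_bit: Clique = 1 << x;
    cliques_r
        .iter()
        .all(|&p| cliques_s.iter().any(|&q| q & x_bit == 0 && q & p == q))
}

fn main() {
    // Switch, x ∧ (y ∨ z) → (x ∧ y) ∨ z, with x, y, z as bits 0, 1, 2.
    // The LHS web has maximal cliques {x,y} and {x,z}; the RHS web has {x,y} and {z}.
    let lhs = [0b011, 0b101];
    let rhs = [0b011, 0b100];
    assert!(is_valid(&lhs, &rhs));
    // Mix, x ∧ y → x ∨ y, is valid but trivial at x (and at y), cf. Example 4.11.
    let mix_lhs = [0b11];
    let mix_rhs = [0b01, 0b10];
    assert!(is_valid(&mix_lhs, &mix_rhs) && is_trivial_at(&mix_lhs, &mix_rhs, 0));
    println!("checks passed");
}
```

The switch example reproduces the clique mapping spelled out above ({x, y} and {x, z} covered by {x, y} and {z}), and the mix example reproduces the triviality check at x.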
Remark 4.13.With the results in this section, the notation for inferences between formulae can be equally used for relation webs.For example, for webs R and S, we write R ms S (or R → S is an instance of {s, m}) to mean that the inference between (any choice of) the underlying linear formulae is an instance of ms , and R * ms S to mean the there is a derivation from switch and medial between the underlying formulae.Since these relations are invariant under associativity and commutativity, they are independent of the particular cograph decomposition chosen. Furthermore, to prove Theorem 2.13, it is sufficient to show that for all webs R and S with size less than 8, if R → S is valid and non-trivial then R * ms S. It is in fact possible to check whether an inference is derivable from medial at the level of graphs: The second condition can, in fact, be replaced by simply requiring that R → S is valid [DS16, Theorem 7.5].A relation web characterisation for switch derivability can also be found in [Str07, Theorem 6.2], however we do not use it in our implementation.The medial characterisation was used in the original implementation that accompanied [DR21], though a more general method is used in the current implementation which allows checking whether an inference is an instance of any graph rewrite (see Section 5.1 for more details). Implementation As stated in previous sections, Theorem 2.13 is proved using a computational search.In this section we describe the algorithm used to search for S-independent inferences between P 4 -free graphs of size n, for some set S of linear inferences, as well as some of the optimisations we employ so that this search finishes in a reasonable time.Many of these optimisations may be of self-contained theoretical interest.To prove Theorem 2.13, we will take the set S = {s, m} and n = 7, but everything in this section generalises to an arbitrary set of linear inferences. The implementation is written in Rust,5 which offers a combination of good performance (both in terms of speed and memory management) but also provides a variety of high level abstractions such as algebraic data types.Furthermore, it has built-in support for iterators, allowing the code to be written in a more functional style, and has a built-in testing framework, meaning that sanity checks can be built into the code base.The code [Ric21] is available at https://github.com/alexarice/lin_infand has been split into two parts: a library containing types for undirected linear graphs and formulae and some operations on them, and an executable which implements the search algorithm using this library as a base. The content in this section is based on Section 5 of the preliminary version [DR21].In this version we have generalised the search algorithm to take an arbitrary set S of linear inferences as input, and we have further used this to devise a recursive algorithm basis computing minimal sets of independent linear inferences, whose correctness we justify in Section 6. 
5.1.Library.The library portion of the implementation defines methods for working with relation webs, as well as the ability to convert formulae to relation webs and vice versa.The majority of the library consists of the LinGraph trait, which is an interface for types that can be treated as undirected graphs.This allows us to query the edges between variables as well as perform more involved operations such as checking whether a graph is P 4 -free.We may also ask whether a pair of relation webs forms a valid linear inference and check whether the inference is trivial using Propositions 4.8 and 4.10. Storing graphs and relation webs.The library was designed with the intention of storing graphs as compactly as possible.Therefore there are implementations of LinGraph which pack the data (a series of bits for whether there exists an edge between each pair of nodes) into various integer types.The implementation is given for unsigned 8 bit, 16 bit, 32 bit (which can store up to 8 variable graphs), 64 bit, and 128 bit integers.Furthermore there is an implementation using vectors (variable length arrays) of Boolean values, which is less memory efficient but can store relation webs of arbitrary size.A further improvement could be to use an external library implementing bit arrays to make a memory efficient, yet infinitely scalable implementation. Checking an inference between graphs.In order to implement linear inference checking, we use a data type representing maximal cliques of a relation web, which we represent as the trait MClique.It is possible to use Rust's inbuilt HashSet6 to do this but, as above, a more memory efficient solution is provided where we store the data in a single integer, with each bit determining whether a node is contained in the clique.For example a maximal clique on an 8 node graph can be encoded into a single byte.While checking for linear inferences and triviality, the main operation on maximal cliques is asking whether one is a subset of the other.This operation can be carried out very quickly using bitwise operations.Lastly we also need a way to generate the maximal cliques of a relation web.This is done using the Bron-Kerbosch algorithm [BK73], which is fast enough for our purposes (as we are only finding the maximal cliques of relatively small graphs). As mentioned after Proposition 4.8, for arbitrary (non-P 4 -free graphs) the maximal clique condition and maximal stable set condition for checking an inference do not have to agree, and so we would like to have both options in our library.To avoid having to calculate stable sets as well as cliques, we instead simply calculate whether an inference R → S between graphs satisfies the stable set condition by checking whether S → R satisfies the clique condition.To this end, the library contains functions for dualising a graph. Working with isomorphism.There is also code for working with isomorphisms of graphs, which is used in the search algorithm to shrink the search space further.This is implemented as a module where permutations and operations on these permutations are defined, as well as having the ability to apply a permutation to the nodes of a graph, to get a new but isomorphic graph. 
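As a rough illustration of the bit-packed representation and of the Bron–Kerbosch enumeration just mentioned (without the pivoting optimisation, and with names of our own choosing rather than the library's), a graph on at most 32 nodes can be handled as follows.

```rust
/// A graph on at most 32 nodes, packed as one adjacency bitmask per node:
/// bit u of adj[v] is set iff u -- v is an edge.
fn bron_kerbosch(adj: &[u32], r: u32, mut p: u32, mut x: u32, out: &mut Vec<u32>) {
    if p == 0 && x == 0 {
        out.push(r); // r cannot be extended: it is a maximal clique
        return;
    }
    while p != 0 {
        let v = p.trailing_zeros() as usize;
        let v_bit = 1u32 << v;
        // Recurse with v added to the clique, restricting candidates to v's neighbours.
        bron_kerbosch(adj, r | v_bit, p & adj[v], x & adj[v], out);
        p &= !v_bit; // v is fully processed ...
        x |= v_bit; // ... and excluded from further maximal cliques in this branch
    }
}

fn maximal_cliques(adj: &[u32]) -> Vec<u32> {
    let all = if adj.len() >= 32 { u32::MAX } else { (1u32 << adj.len()) - 1 };
    let mut out = Vec::new();
    bron_kerbosch(adj, 0, all, 0, &mut out);
    out
}

fn main() {
    // The web of x ∧ (y ∨ z) (the LHS of switch), with x, y, z as nodes 0, 1, 2:
    // x is adjacent to y and z, while y and z are not adjacent.
    let adj = [0b110, 0b001, 0b001];
    // Expected maximal cliques: {x, y} = 0b011 and {x, z} = 0b101.
    println!("{:?}", maximal_cliques(&adj)); // [3, 5]
    // The subset test between cliques is a single bitwise operation:
    let (p, q) = (0b011u32, 0b001u32);
    assert!(q & p == q); // {x} ⊆ {x, y}
}
```

Storing each maximal clique of an 8-node graph really does fit in a single byte, as the text notes, and the subset test needed for Propositions 4.8 and 4.10 is one AND and one comparison.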
Generating all P 4 -free graphs.The library also has a function that allows all P 4 -free graphs of a certain size to be generated.The naive algorithm for doing this which simply generates all graphs and checks each one for being P 4 -free is computationally infeasible for graphs with more than a few variables, as the number of graphs scales superexponentially with the number of variables (for instance there are 2 21 7-variable undirected graphs).Instead, we use a recursive algorithm that generates all P 4 -free graphs of size n by first generating the P 4 -free graphs of size n − 1 and then checking all possible extensions of these graphs to see if they are P 4 -free.Correctness of this procedure is due to the fact that induced subgraphs of a P 4 -free are themselves P 4 -free.In fact, a further optimisation is also added: when we check whether the extensions are P 4 -free, it is sufficient to only check if subsets of the nodes containing the added node are not isomorphic to P 4 , instead of checking every subset. Checking if an inference is an instance of another.The library contains code for checking whether an inference r : R → R is an instance of another inference p : S → S .This is implemented as in [CDW20], and so allows us to also work with non-P 4 -free graphs.Let us first recall the definition of a module: Definition 5.1.Given a graph R on variables V, a subset M of V is a module if every element of M has the same neighbourhood outside of M , so if x, y ∈ M and z ∈ V \ M then there is an edge between x and z if and only if there is an edge between y and z. For a formula ϕ, the modules of the graph W(ϕ) correspond exactly to the subformulae of ϕ.The check is done as follows: • For each subset M of the variable set, check if M is both a module of R and R .This is the generalisation of finding a suitable context for the rewrite to all graphs.Each such module M generates the induced subgraphs M ⊆ R and M ⊆ R .• If r is an instance of p, then for some M found we have there is a substitution of p which produces M → M .We check this by going through all partitions of M into |S| non-empty parts P x for every node x of S and seeing if this substitution produces M → M .This is done by checking if M has an edge between a ∈ P x and b ∈ P y (with x = y) precisely when then there is an edge between x and y in S, checking the similar condition for M , and finally checking that for each x, the induced subgraph of M and M generated from P x are the same. When specialised to P 4 -free graphs, this procedure agrees with the usual notion of rewrite step.11:21 Sanity checks.Finally, the library also contains some automated tests used as sanity checks on the code, which may be used to check various implementations against each other.5.2.Search algorithm.The main part of the implementation is a search algorithm to find logically minimal non-trivial inferences of a specified size n between relation webs that are not derivable from a specified set of inferences S. The search algorithm functions in multiple phases.After each phase the results are serialised and saved to disk so that the algorithm can be restarted from this point. Phase 1: generating P 4 -free graphs on n nodes.Suppose we are searching for S-independent linear inferences between webs on n variables.The first phase, as described in the previous section, is to gather all P 4 -free graphs with n nodes. 
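The recursive generation of P4-free graphs described above (and used by phase 1) can be sketched as follows. Only 4-subsets containing the freshly added node are rechecked, exactly as in the optimisation mentioned in the text, and the counts this produces (52 for 4 nodes, 78416 for 7) agree with the figures quoted earlier. Names and representation are our own, not the library's.

```rust
/// Graphs are given by adjacency bitmasks: bit u of g[v] is set iff u -- v is an edge.
fn is_p4_on(g: &[u32], quad: [usize; 4]) -> bool {
    let mut deg = [0usize; 4];
    let mut edges = 0;
    for i in 0..4 {
        for j in i + 1..4 {
            if g[quad[i]] >> quad[j] & 1 == 1 {
                deg[i] += 1;
                deg[j] += 1;
                edges += 1;
            }
        }
    }
    deg.sort();
    edges == 3 && deg == [1, 1, 2, 2]
}

/// Extend each P4-free graph on n-1 nodes by a new node connected to every possible
/// subset of the old nodes, keeping the extensions that remain P4-free. Since induced
/// subgraphs of P4-free graphs are P4-free, only quads through the new node matter.
fn extend_p4_free(smaller: &[Vec<u32>], n: usize) -> Vec<Vec<u32>> {
    let new = n - 1;
    let mut result = Vec::new();
    for g in smaller {
        for nbrs in 0u32..(1 << new) {
            let mut h: Vec<u32> = g.clone();
            h.push(nbrs);
            for (v, row) in h.iter_mut().enumerate().take(new) {
                if nbrs >> v & 1 == 1 {
                    *row |= 1 << new;
                }
            }
            let mut ok = true;
            'check: for a in 0..new {
                for b in a + 1..new {
                    for c in b + 1..new {
                        if is_p4_on(&h, [a, b, c, new]) {
                            ok = false;
                            break 'check;
                        }
                    }
                }
            }
            if ok {
                result.push(h);
            }
        }
    }
    result
}

fn all_p4_free(n: usize) -> Vec<Vec<u32>> {
    let mut graphs: Vec<Vec<u32>> = vec![Vec::new()]; // the single graph on 0 nodes
    for k in 1..=n {
        graphs = extend_p4_free(&graphs, k);
    }
    graphs
}

fn main() {
    println!("{}", all_p4_free(4).len()); // 52
    println!("{}", all_p4_free(7).len()); // 78416, the count quoted in Section 4
}
```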
Phase 2: identifying isomorphism classes and canonical representatives.To describe the second phase we need to introduce some new notions.Without loss of generality, we will assume henceforth that the variable set is given by V = {0, . . ., n − 1}. Definition 5.2.Note that the function ι : {(x, y) ∈ N × N | x < y} → N given by ι(x, y) = x + i<y i is a bijection.Define the numerical representation of a linear graph R, written N (R), to be the natural number whose ι(x, y) th least significant bit is 1 if and only if (x, y) ∈ R.This is the encoding used to store graph in integers as described in the Section 5.1.Definition 5.3.Given a bijection ρ : V → V, we write ρ(R) for the graph on V with edges (ρ(x), ρ(y)) for each edge (x, y) ∈ R. R and S are isomorphic if S = ρ(R), for some bijection ρ : V → V, in which case ρ is called an isomorphism from R to S. As isomorphism is an equivalence relation, we can partition the set of P 4 -free graphs into isomorphism classes.It can readily be checked that N (from Definition 5.2) is injective and can therefore be used to induce a strict total ordering on graphs.Say that a relation web is least if it is the smallest element in its isomorphism class (with respect to this ordering induced from N ). The second phase of the algorithm is to identify these least relation webs, as well as identify the isomorphism between every relation web and its isomorphic least relation web.It will become clear why this data is needed later on in the section.To obtain this, first the relation webs are sorted (by numerical representation) and then, taking each graph R in turn, applying every possible permutation to its nodes, and seeing if any result in a smaller web (with respect to N ).If none do then we record it as a least relation web (with the identity isomorphism).Otherwise suppose it is isomorphic to R with isomorphism ρ where N (R ) < N (R).As we are checking graphs in order, we must already know that R is isomorphic to least graph R with isomorphism π.Then we can record R as being isomorphic to R with isomorphism π • ρ.This allows us to use the following lemma. Lemma 5.4.Take some set S of inferences and number n.Then to show that any valid non-trivial logically minimal inference R → S on n variables is derivable from S, that is R * S S, it is sufficient to only check those inferences where R is least.Hence to prove the statement of Theorem 2.13, it is sufficient to prove that for all valid non-trivial logically minimal inferences R → S on n < 8 variables where R is least, that R * ms S. Proof.Suppose R → S is a valid non-trivial logically minimal inference on n variables.Then let ρ be an isomorphism from R to R which is least, and let S = ρ(S).Then we have that R → S is valid, non-trivial, minimal, and R is least, and so by assumption we have R * S S .By applying the isomorphism to the derivation we get R * S S as required.The second part of the lemma follows straight from the first, along with Remark 4.13 to restrict to non-trivial linear inferences and Lemma 2.19 to restrict to minimal inferences. 
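The numerical representation of Definition 5.2 and the notion of a least web can be sketched as below, assuming the intended bijection is ι(x, y) = x + Σ_{i<y} i = x + y(y−1)/2 for x < y. Unlike phase 2, which sorts the webs and reuses previously computed isomorphisms, this sketch simply tries every permutation; the names are our own.

```rust
/// Bit position of the unordered pair x < y (Definition 5.2), read as
/// ι(x, y) = x + Σ_{i<y} i = x + y(y−1)/2.
fn pair_index(x: usize, y: usize) -> usize {
    x + y * (y - 1) / 2
}

/// Numerical representation N(R) of a graph given as an edge list.
fn numeric(edges: &[(usize, usize)]) -> u64 {
    let mut code = 0u64;
    for &(x, y) in edges {
        let (a, b) = if x < y { (x, y) } else { (y, x) };
        code |= 1 << pair_index(a, b);
    }
    code
}

/// Enumerate all permutations of perm[k..], calling f on each complete permutation.
fn permute(perm: &mut [usize], k: usize, f: &mut dyn FnMut(&[usize])) {
    if k == perm.len() {
        f(perm);
        return;
    }
    for i in k..perm.len() {
        perm.swap(k, i);
        permute(perm, k + 1, f);
        perm.swap(k, i);
    }
}

/// The least member of a graph's isomorphism class: rename the nodes by every
/// permutation and keep the smallest numerical representation (a "least" web).
fn least_isomorph(n: usize, edges: &[(usize, usize)]) -> u64 {
    let mut best = u64::MAX;
    let mut perm: Vec<usize> = (0..n).collect();
    permute(&mut perm, 0, &mut |rho| {
        let renamed: Vec<(usize, usize)> =
            edges.iter().map(|&(x, y)| (rho[x], rho[y])).collect();
        best = best.min(numeric(&renamed));
    });
    best
}

fn main() {
    // Two different labellings of the path on 4 nodes have the same least form.
    let p1 = [(0, 1), (1, 2), (2, 3)];
    let p2 = [(2, 0), (0, 3), (3, 1)];
    assert_eq!(least_isomorph(4, &p1), least_isomorph(4, &p2));
    println!("{}", least_isomorph(4, &p1));
}
```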
The above lemma allows us to search for independent linear inferences by only searching inferences from least webs to arbitrary webs.This increases the speed of the search greatly as it turns out there are relatively few isomorphism classes (and so least webs).For example, specialising to the case n = 7, there are 78416 P 4 -free graphs with 7 variables with only 180 of them being least (so that there are 180 isomorphism classes).Note that we may not similarly restrict the RHS of inferences to least webs.This means we need to know the maximal cliques of every P 4 -free graph to determine whether there are inferences between them. Phase 3: generating all maximal cliques.In phase three we generate all the maximal cliques of the graphs found in phase one and store them so that they do not need to be recomputed every time we check a linear inference.Each maximal clique of a graph of size n can be stored using only n bits, so storing all this data is feasible. Phase 4: generating 'least' linear inferences.With the maximal clique data, phase four of generating a list of all valid linear inferences (from a least web to an arbitrary web) can be easily done by iterating through all possible combinations and checking them using Proposition 4.8. Phase 5: checking for non-triviality.Similarly phase five of checking which of these inferences are non-trivial is also simple using Proposition 4.10.This data is stored in a HashMap of sets for quick indexing. Phase 6: restricting to logically minimal inferences.Phase six is now to restrict our inferences to only those that are logically minimal.Write Φ R be the set of webs distinct from R that R (non-trivially) implies.We calculate, for a least web R, the set M R of webs S with R → S a logically minimal linear inference using the identity: Note that to calculate this, we need to be able to generate Φ R for arbitrary (i.e.not necessarily least) webs.This is where the isomorphism data stored in phase two becomes useful, as if ρ is an isomorphism from R to R , with R least, we can use, to generate Φ R , where we already have Φ R .In the implementation, we generate each Φ R on the fly (from Φ R ), though we could have pre-generated all of these, which might provide further speedup for this phase. Phase 7: checking for S derivability.The last phase is to check the remaining inferences, of which there are now few enough to feasibly do so.Logically minimal inferences have one further benefit: a logically minimal inference (and in fact any S-minimal inference) ϕ → ψ is derivable from S if and only if it is derivable from a single S step. To check whether each remaining inference is derivable from S, we use the library function for determining whether a given inference is an instance of another.For the specific • 384 were an instance of Inference (3.4), • 104 were an instance of the dual of Inference (3.4), • and there were 184 remaining inferences.After quotienting by isomorphism, there were 10 remaining inferences, in 5 dual pairs.These inferences are given in Section 7, and a theoretical justification for this search is given in Section 6. 
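A sketch of the phase 6 filtering follows, assuming the identity in question is M_R = Φ_R \ ⋃_{S ∈ Φ_R} Φ_S, i.e. an implied web S is kept only if no other web implied by R also implies S; this reading, and the names used, are our own assumptions rather than the library's API.

```rust
use std::collections::{HashMap, HashSet};

type Web = u64; // numerical representation of a relation web

/// Given Φ_R (the webs distinct from R that R non-trivially implies) for every web,
/// return the logically minimal targets of r, assuming M_R = Φ_R \ ⋃_{S ∈ Φ_R} Φ_S.
fn minimal_targets(phi: &HashMap<Web, HashSet<Web>>, r: Web) -> HashSet<Web> {
    let phi_r = &phi[&r];
    let mut minimal = phi_r.clone();
    for s in phi_r {
        if let Some(phi_s) = phi.get(s) {
            for t in phi_s {
                minimal.remove(t); // t factors through s, so R → t is not minimal
            }
        }
    }
    minimal
}

fn main() {
    let mut phi: HashMap<Web, HashSet<Web>> = HashMap::new();
    phi.insert(1, [2, 3].into_iter().collect());
    phi.insert(2, [3].into_iter().collect());
    phi.insert(3, HashSet::new());
    // Web 1 implies both 2 and 3, but 3 factors through 2, so only 2 is minimal.
    let expected: HashSet<Web> = [2].into_iter().collect();
    assert_eq!(minimal_targets(&phi, 1), expected);
    println!("ok");
}
```

For a non-least web R′, the set Φ_{R′} would be obtained by transporting Φ_R of its least isomorph R along the isomorphism stored in phase two, as described above.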
Other features of the implementation.The implementation executable has a couple of extra features.One of these is the ability to run the search algorithm recursively on n to get a set of minimal inferences basis(n).This works in the following way: We set basis(0) = ∅.To calculate basis(n + 1), we first recursively run the algorithm on n to get a set of inferences basis(n).We can then run the above phases on size n + 1 inferences checking against the set basis(n) of inferences.This will give us a set S of n + 1 variable inferences.We then return basis(n + 1) = basis(n) ∪ S. The use of this algorithm and it's name are explained in Section 6. Further we also include a flag that toggles whether the search algorithm searches for only P 4 -free graphs, or if it searches for any graph.Turning off the checking for P 4 -free graphs changes phase 1 of the algorithm to include all graphs instead of just those that are P 4 -free, but the rest of the algorithm functions the same.We lastly also include a flag for using maximal stable set entailment (condition (3)), instead of the maximal clique entailment (condition (2)).Since both of these coincide for P 4 -free graphs, this only needs to be considered when the "P 4 " setting is turned off. These extra features of the implementation are not needed for the proof of Theorem 2.13, but are used in the following sections. Enumerating a 'basis' of linear inferences Theorem 2.9 was concerned with minimal linear inferences independent of switch and medial, and so its proof requires the special case of S = {s, m} in the search algorithm of the previous section.This section is devoted to proving that the recursive algorithm basis(n) described at the end of the last section in fact computes a 'canonical' basis of linear inferences independent of all smaller ones.The content of this section is new and does not appear in the preliminary version [DR21]. 6.1.Defining a 'basis' of linear inferences.First, we must make precise what the 'canonical' linear inferences of a certain size are. Definition 6.1.Let L ≤n denote the set of all linear inferences on ≤ n variables, and write L <n = L ≤(n−1) .For n > 4, we call a set X of linear inferences n-minimal if it is a smallest set of linear inferences such that X ∪ L <n derives (with units) every r ∈ L n . (4) Any non-trivial logically minimal r ∈ L ≤n that is not L <n -derivable (with units) is an instance of some r ∈ X, modulo acu. Proof.For (1), suppose for contradiction that ϕ has variables V 1 and ψ has variables V 2 where V 1 = V 2 .However by Proposition 2.14, we then have that there is some r on V 1 ∩ V 2 , which must have fewer variables than r, and that r is derivable from r , switch and medial, meaning r is already L <n -derivable.For (2), note that by Proposition 2.15 any trivial n-variable linear inference is already L <n -derivable. 
For (3), suppose X r : ϕ → ψ is not logically minimal.Let χ have n variables such that ϕ → χ → ψ are valid but χ is not equivalent to ϕ or ψ.Note in particular that ϕ, χ, ψ are satisfied by k, l, m assignments respectively, with k < l < m.By assumption X ∪ L <n must derive r 0 : ϕ → χ and r 1 : χ → ψ; we may assume that any such derivation does not use r: • if an instance of r has a constant substituted for one of the variable inputs, then it reduces to a L <n inference, modulo acu by Proposition 2.4.• otherwise an instance of r would have k assignments satisfying its LHS (which is too few in the case of r 1 ) and m assignments satisfying its RHS (which is too many in the case of r 0 ).Thus (X \ {r}) ∪ L <n derives r 0 and r 1 and hence it also derives r and by extension every r ∈ L ≤n , contradicting the minimality of X. For (4) let L ≤n r : ϕ → ψ be logically minimal and non-trivial and not L <n -derivable.X ∪ L <n must derive r, since it derives all of L ≤n .However, any such derivation cannot have any formula not already logically equivalent to ϕ or ψ, by logical minimality.Thus all formulae are acu-equivalent to ϕ or ψ by Remark 2.5, and there must be a single step whose LHS and RHS are thus acu-equivalent to ϕ and ψ respectively.Note that the above lemma actually characterises n-minimal sets: a set X is n-minimal iff it contains just (a renaming, modulo ac, of) each non-trivial logically minimal r ∈ L ≤n that is not L <n -derivable.Corollary 6.4.For n > 4, n-minimal sets are unique up to acu and renaming of variables. Due to Proposition 2.16, and since every element of an n-minimal set is non-trivial, by Proposition 6.3, we can always choose the elements of an n-minimal set to be constant-free and negation-free.This motivates the following definition.Definition 6.5.We define the sets M n for each n as follows: For n = 0, 1, 2 we have M n = ∅, then M 3 = {s}, and M 4 = {m}.For n > 4, we write M n for the (unique up to ac and renaming) constant-free negation-free n-minimal set, and we write M ≤n := m≤n M m . By definition of n-minimality and completeness of {s, m} for L ≤4 we have: Fact 6.6.For n > 4, M ≤n is complete for L ≤n , i.e. any r ∈ L n is M ≤n -derivable with units. In fact we can say more than this: M ≤n is the minimal set w.r.t.set inclusion (unique up to ac and renaming) such that, for every k, its restriction to ≤ k-variable inferences derives every r ∈ L ≤k .It also has the property that for every r ∈ M n , we cannot derive r in M ≤n \ {r}.This can be viewed as a 'local' version of independence that allows the unit-elimination results of the next subsection to go through.In this way we may say that it forms a 'basis' for L. Remark 6.7 (On independence).Note that our sets M ≤n are not fully independent.For instance s is an acu-instance of (3.2).In fact we believe that there cannot exist a minimal set (w.r.t.set inclusion) that is complete, i.e. derives all of L, as infinite descending sequences (w.r.t.set inclusion) may exist.Indeed the inferences QHQ n from [Str12b] seem to suggest this, since each QHQ n derives all QHQ <n .However these inferences are not logically minimal, and a comprehensive reduction of this family to a logically minimal one is beyond the scope of this work.6.2.Computing M n recursively without units.With a view to generalising our search algorithm, we would like to, for arbitrary n, identify M n , i.e. 
the 'minimal' n-variable inferences that are independent of L <n .However, in order to apply our implementation, based on graph theoretic representations that do not account for constants or negation, we need the following generalisation of Proposition 2.18.Theorem 6.8.For n ≥ 4, if M ≤n derives a constant-free negation-free nontrivial linear inference r, then it also derives r without units. Proof.We structure the whole proof by induction on n.The base case is given by Proposition 2.18, and we follow a variation of the argument therein for the inductive step too. Again, for any formula ϕ, write ϕ for the unique constant-free formula (or ⊥ or ) ∼ uequivalent to ϕ, cf.Proposition 2.4.Suppose ϕ → ψ is a nontrivial constant-free negation-free linear inference with an M ≤n -derivation (with units) of the form: Like in Proposition 2.18, we proceed by replacing each ϕ i and ψ i by ϕ i and ψ i respectively. As before, we indeed have that ψ i ∼ ac ϕ i+1 , thanks to Remark 2.5.For the inferences ϕ i → ψ i , we shall show by induction on the definition of → M ≤n (namely as the closure of M ≤n under contexts and substitution) that, whenever ϕ → M ≤n ψ, we have ϕ * M ≤n ψ .(Note the difference here with Proposition 2.18 where we showed the stronger statement that ϕ ∼ ac ψ or ϕ ms ψ , instead of ϕ * ms ψ ).• In the case of context closure, the argument is the same as for Proposition 2.18, relying on context closure of * X in place of X .• Suppose ϕ → M ≤n ψ with ϕ = α(χ 0 , . . ., χ k ) and ψ = β(χ 0 , . . ., χ k ) where α(x 0 , . . ., x k ) → β(x 0 , . . ., x k ) (all variables displayed) is in M ≤n (and so k < n). -Otherwise, ϕ → ψ is an instance of some inference r : γ → δ on < n variables, so that r ∈ L <n .Now we appeal to Fact 6.6 so that r is M <n -derivable with units and so applying the main inductive hypothesis we have that r is M <n -derivable without units.Finally, since derivations are closed under substitution, we have that ϕ → ψ is M <n derivable (without units), i.e. ϕ * M<n ψ as required.Now, returning to Eq. (6.1), we have by the above argument that each ϕ i * M ≤n ψ i , whence we conclude since * M ≤n is reflexive and transitive. Putting together Proposition 6.3, Fact 6.6, and Theorem 6.8 above we finally have: Corollary 6.9.For all n, M n is precisely the set containing (a renaming, modulo ac, of ) each logically minimal constant-free negation-free non-trivial n-variable linear inference that is not an instance without units of some inference in M <n . Proof.This is easily checked for n ≤ 4. For n > 4, we have from Proposition 6.3 and Fact 6.6 that M n consists of (a renaming, modulo ac, of) each logically minimal constant-free negationfree non-trivial n-variable linear inference that is not derivable by M <n with units.By Theorem 6.8, we have that it is sufficient to be not derivable from M <n without units, and since inferences of M n are logically minimal, this is the same as each inference of M n not being an instance of any inference in M <n .6.3.Enumerating M n using graphs.With the results of the previous section, we can now compute each M n using the techniques from Section 4 and Section 5. We start with some definitions: Definition 6.10.Recall that graphs have an ordering < derived from their numerical representation.Further recall that we call a graph least if it is the smallest element (with respect to this ordering) in its isomorphism class.Let an inference of graphs R → S be least if R is least and there is no isomorphism ρ such that ρ(R) = R and ρ(S) < S. 
Inductively define W n to be the set containing each logically minimal non-trivial nvariable least linear inference between P 4 -free graphs which is not an instance of any inference in W <n (i.e., under substitutions of P 4 -free graphs for nodes). We now claim the following: Lemma 6.11.M n contains just (a renaming of ) a cograph decomposition of each r ∈ W n . Proof.The proof proceeds by induction on n by using the classification of Corollary 6.9. First, all cograph decompositions of inferences in W n are logically minimal non-trivial n-variable inferences by definition, and must be constant-free and negation-free, as they are decompositions of graph-based inferences.Further they are not an instance (without units) of some inference in M <n as this would render them an instance of an inference in W <n by the inductive hypothesis.Since two formulae have the same web if and only if they are ∼ ac equivalent, a renaming of some cograph decomposition of each inference in W n is present in M n . Conversely, suppose r : ϕ → ψ is a logically minimal constant-free negation-free nontrivial n-variable inference that is not an instance (without units) of some inference in M <n .Then there is some isomorphism ρ such that ρ(W(ϕ)) → ρ(W(ψ)) is a logically minimal non-trivial n-variable least linear inference.Further it is not an instance of W <n by inductive hypothesis and assumption, and so ρ(W(ϕ)) → ρ(W(ψ)) is in W n .Note that r is, by definition, just a renaming of some cograph decomposition of ρ(W(ϕ)) → ρ(W(ψ)). We now find each W n by inductively finding W ≤n for each n.The base case is given that W ≤4 = {s, m}.For the inductive step we suppose we want to find W n and W ≤n for some n > 5, and assume that we already know W <n .We simply use the search algorithm specified in Section 5.2 with inputs n and using W <n as the set of rewrites to check against.This will find us a set of n-dimensional inferences between P 4 -free graphs which are logically minimal, valid, non-trivial, have a least graph as their premise, and are not derivable from Figure 2: 9-variable inferences which form M 9 with their duals. W <n .From here, M n is calculated by simply taking cograph decompositions of the elements of W n .This is exactly the procedure basis, described at the end of Section 5.2, which is part of the executable portion of the implementation. Theorem 6.12.The recursive algorithm basis(n) introduced in Section 5.2 computes the set M n . It should further be remarked that running this recursive algorithm for n = 7 and n = 8 replicates the main results of the paper, as M <7 = M <8 = {s, m}, and so the recursive algorithm simply checks for switch medial derivability in these cases. New 9-variable minimal linear inferences Running basis(9) found 10 distinct 9-variable inferences which are logically minimal, nontrivial, and not derivable from switch, medial, and the four 8-variable inferences presented in Section 3.These are the 5 inferences in Figure 2, along with each of their duals, which form the set M 9 .None of the 9-variable inferences were self-dual.Running basis(10) seemed infeasible on a standard desktop. 
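Theorem 6.12 concerns the recursive procedure basis(n) from Section 5.2; its recursion can be sketched as follows, with the per-size search passed in as a parameter since its phases were described earlier. The types here are placeholders of our own, not the executable's actual interface.

```rust
/// Placeholder for whatever datatype represents a linear inference between webs.
struct Inference;

/// basis(0) = ∅ and basis(k) = basis(k-1) ∪ search(k, basis(k-1)), where
/// `search(k, known)` runs phases 1–7 on k-variable webs, checking derivability
/// against the inferences in `known` (cf. the end of Section 5.2).
fn basis(n: usize, search: &impl Fn(usize, &[Inference]) -> Vec<Inference>) -> Vec<Inference> {
    let mut known: Vec<Inference> = Vec::new(); // basis(0) = ∅
    for k in 1..=n {
        let new = search(k, &known);
        known.extend(new);
    }
    known
}

fn main() {
    // A dummy search that never finds anything, just to exercise the recursion.
    let found = basis(8, &|_k, _known| Vec::new());
    println!("{}", found.len());
}
```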
In what follows we provide some analysis of these new 9-variable inferences, though a more detailed inspection of these (and, indeed, larger inferences) could form the subject of future work.The content of this section is new and does not appear in the preliminary version [DR21].7.0.General comments.Before turning to each inference individually, let us remark on some global patterns and statistics.• The LHS has 25 red edges (and 11 green edges), and the RHS has 12 red edges (and 24 green edges).• The inference flips 14 red edges to green, and 1 green edge to red (namely the edge (y, z )). Let us point out that this inference is distinguished in that its RHS has the highest logical depth among all inferences in M ≤9 , exhibiting 4 alternations between ∨ and ∧.It would be interesting to see if inferences in M n become arbitrarily deep as n increases.• The LHS has 25 red edges (and 11 green edges), and the RHS has 11 red edges and 25 green edges.• The inference flips 15 red edges to green, and 1 green edge to red (namely the edge (y, z )). This inference is similar to the last one, in that the two have identical LHSs, each flips only one green edge to red, and each generalises the same inferences from M ≤8 (only (3.3)).This inference is in a sense more 'symmetric' in that the number of red edges in the LHS equals the number of green edges in the RHS (and vice-versa). Further remarks: applications to graph logics In this section we describe some further potential applications of our theoretical results in the previous section to graph logics, and the generalised implementation available at [Ric21]. The content of this section is new and does not appear in the preliminary version [DR21]. 8.1.Graph inferences.As mentioned earlier, our implementation is set up to be able to work with all graphs, and not just P 4 -free graphs, in particular, we can check if an inference is an instance of any arbitrary graph rewrite.Using this, we can perform our algorithm without the restriction to P 4 -free graphs to obtain sets G n of n-variable logically minimal non-trivial graph inferences which are not derivable (w.r.t.condition (2)) from the set of smaller graph instances.Note that although our software is able to check maximal stable set entailment (condition (3)) as well as maximal clique entailment (condition (2)), using the first would simply yield the dual of the results we list here. There are still no non-trivial inferences for sizes 0, 1, and 2. For n = 3, we have G 3 = {s}.At n = 4 we get that M n = G n , as medial is no longer logically minimal, and in fact decomposes into the two following inferences: We list G 5 in Figure 3 which has 16 elements.We were further able to calculate G 6 which has 137 elements, and G 7 , which has 2013 elements, but we omit these here. 
Let us point out, as a word of warning, that the sort of unit-elimination that we carry out for {s, m}-derivability in Section 2, and more generally for linear derivability in Section 6, has not been considered.The significance of our sets G n should be established by similar such results, but this is beyond the scope of this work.8.2.The graph logic GS.Acclavio, Horne and Straßburger introduce in [AHS20, AHS22] a graph logic motivated by the linear logic MLL.In particular they define an extension GS of MLL + mix to arbitrary graphs, that uniquely satisfy certain desiderata.Their logic has a primitive treatment of negation, inducing a form of graph entailment.Interestingly, GS entailment is incomparable with the conditions (2) and (3) on graphs considered in the previous subsection, even though the former's restriction to P 4 -free graphs is contained in the latter's. It would be interesting to adapt our implementation to consider GS -entailment instead of the graph entailments of the previous subsection, in order to better understand theorems of GS itself, but this consideration is left for future work. Conclusions In this work we undertook a computational approach towards the classification of linear inferences.To this end we succeeded in exhausting the linear inferences up to 8 variables, showing that there are four (distinct) 8 variable linear inferences that are independent of switch and medial, using the algorithm basis presented in Section 5.One of these new inferences (and its dual) contradicts a Conjecture 7.9 from [DS16].Conversely, all linear inferences on 7 variables or fewer are already derivable using switch and medial. We proposed in Section 6.3 a well-defined notion M ≤n of 'basis' for linear inferences with ≤ n variables.We showed that our algorithm basis(n) in fact correctly computes each M n .We were further able to run basis(n) for n = 9 and thus classified in Section 7 the logically minimal 9 variable linear inferences independent of all inferences on < 9 variables.Our preliminary analysis uncovered that all but 1 of these was a counterexample to the aforementioned conjecture, suggesting that inferences of this form may not be scarce when the number of variables is increased.We were not able to evaluate basis(10), but believe it is Example 4.7.P 5 and C 5 are two examples of graphs which are not P 4 free.They are the following: Example 4. 11 ( Validity of switch and medial, triviality of mix).The switch and medial inferences can be construed as the following graph rewrite rules on relation webs, respectively: Proposition 4.14 [Str07, Theorem 5].Let R → S represent a constant-free negation-free non-trivial linear inference.Then R → S is derivable from medial if and only if: • Whenever x y in R, also x y in S. • Whenever x y in R but x y in S there exists w and z such that w x y z is an induced subgraph of R and w x y z is an induced subgraph of S. Figure 3 : Figure 3: The set of G 5 of logically minimal nontrivial graph inferences not derivable by all smaller ones.
18,494
2021-11-09T00:00:00.000
[ "Computer Science", "Mathematics" ]
How Prefrail Older People Living Alone Perceive Information and Communications Technology and What They Would Ask a Robot for: Qualitative Study Background In the last decade, the family system has changed significantly. Although in the past, older people used to live with their children, nowadays, they cannot always depend on assistance of their relatives. Many older people wish to remain as independent as possible while remaining in their homes, even when living alone. To do so, there are many tasks that they must perform to maintain their independence in everyday life, and above all, their well-being. Information and communications technology (ICT), particularly robotics and domotics, could play a pivotal role in aging, especially in contemporary society, where relatives are not always able to accurately and constantly assist the older person. Objective The aim of this study was to understand the needs, preferences, and views on ICT of some prefrail older people who live alone. In particular, we wanted to explore their attitude toward a hypothetical caregiver robot and the functions they would ask for. Methods We designed a qualitative study based on an interpretative phenomenological approach. A total of 50 potential participants were purposively recruited in a big town in Northern Italy and were administered the Fried scale (to assess the participants’ frailty) and the Mini-Mental State Examination (to evaluate the older person’s capacity to comprehend the interview questions). In total, 25 prefrail older people who lived alone participated in an individual semistructured interview, lasting approximately 45 min each. Overall, 3 researchers independently analyzed the interviews transcripts, identifying meaning units, which were later grouped in clustering of themes, and finally in emergent themes. Constant triangulation among researchers and their reflective attitude assured trustiness. Results From this study, it emerged that a number of interviewees who were currently using ICT (ie, smartphones) did not own a computer in the past, or did not receive higher education, or were not all young older people (aged 65-74 years). Furthermore, we found that among the older people who described their relationship with ICT as negative, many used it in everyday life. Referring to robotics, the interviewees appeared quite open-minded. In particular, robots were considered suitable for housekeeping, for monitoring older people’s health and accidental falls, and for entertainment. Conclusions Older people’s use and attitudes toward ICT does not always seem to be related to previous experiences with technological devices, higher education, or lower age. Furthermore, many participants in this study were able to use ICT, even if they did not always acknowledge it. Moreover, many interviewees appeared to be open-minded toward technological devices, even toward robots. Therefore, proposing new advanced technology to a group of prefrail people, who are self-sufficient and can live alone at home, seems to be feasible. Background Aging is associated with physiological decay, higher risk of multiple acute and chronic diseases, and ultimately, loss of independence and disability [1]. The constant growth of the proportion of populations represented by older people is challenging the civil society and the health and social systems [1][2][3][4]. 
In this scenario, preventive actions aiming at promoting an active and healthy aging, as opposed to treatment, have the largest potential to reduce the societal burden associated with population aging. From the individual perspective, many older people want to remain as independent as possible and to remain in their own home; aging in place rather than in nursing home, even when they live alone, is an essential part of this wish [1,[4][5][6]. Aging in place is often considered a good alternative to expensive institutional care by policy makers [6]. Furthermore, " [...] research has shown that there may be significant benefits from delaying or avoiding moving to skilled nursing residences" [5]. However, living alone in older age does not necessarily mean autonomy; rather, it might simply reflect the changes in the structure of the contemporary society, especially in cities, in which children more often move definitively from the nuclear family. In fact, older people living alone are often already in an initial state of vulnerability (prefrail condition), while to safely maintain their independence in their home, one should be able to perform at least some instrumental and basic activities of daily living [1]. Moreover, having a sufficient level of functioning does not necessarily imply satisfaction and well-being, which might also depend on the ability and opportunity to enjoy a social life [7]. There is a growing interest in exploring the potential of information and communications technology (ICT) in interventions to assist frail older people and also to promote active and healthy aging [1,3,[6][7][8][9][10][11]. However, it is commonly agreed that to increase the success of ICT-based solutions, these technologies must be designed according to the end users' needs. This is particularly relevant, and also challenging, when the user is an older person [3,7,9,12,13]. Their opinion and needs become crucial to plan technological devices, which need to be considered from the beginning of their development. Pursuing the acceptance and adaptation of the older user to an already designed technology has a higher chance of failure [5]. This is one of the reasons why there has been an increased interest among researchers in exploring older people's perception of technologies. Studies conducted so far have found that different factors might affect older people's aptitude and attitude toward ICT-based solutions, including psychological and physical condition, personality, life history, culture, and socioeconomic status. To our knowledge, no study conducted to explore older people's preferences on ICT-based solutions to promote independent living has so far focused on a prefrail older population, who might not apparently require assistance for daily activities but, being at risk of deterioration, might still benefit from a support to an independent living. Moreover, we could not find studies looking at the relationship between older people and technologies, conducted in Italy or with participants belonging to a Latin culture. In this sociocultural reality, familism -which denotes the centrality of family in the life of people and expectations of mutual emotional and instrumental support among family members throughout the life span [18]-can generate resistance to ICT-based solutions to assist the older person. It is commonly agreed that the users' cultural background impacts the views and expectations toward technological devices. 
Objective With this background, we conducted a qualitative study aimed at answering the following question: what are the needs, wishes, preferences, and views about technologies, in general and as a support to independent living, in a sample of Italian prefrail older people who live alone? This study was part of a wider European project, HORIZON 2020 number 732158, named MoveCare (Multiple-actors Virtual Empathic Caregiver for the Elder), which aims at supporting the independent living of the elderly at home. Study Design Assuming that older persons' attitudes and preferences regarding a complex phenomenon such as ICT are mediated by past events and lived experience with technology, a qualitative study based on the interpretative phenomenological approach (IPA) [19] was designed. This method allows exploring people's lived experience about a specific phenomenon. Data were gathered through individual semistructured interviews which, compared with focus groups or group interviews, we believed would have a higher chance of making the older interviewees at their ease, especially when discussing personal issues. This data collection method was largely used in previous qualitative studies based on IPA and on similar topics [4,8,10,14,17,20]. Sampling This study was conducted in Milan, Italy. First, we identified a set of 50 potential participants according to a purposeful sampling. We looked for older people who, at first approach, were willing to talk about their habits and opinions. Moreover, candidates had to satisfy the following criteria: • age ≥65 years • living alone (or living alone and receiving assistance for a maximum of 1 hour a day) • speaking Italian fluently. From February to May 2017, we sampled members and users of an association of older volunteers named Associazione Nazionale Tutte le Età Attive per la Solidarietà (ANTEAS; n=32), people identified through the social work of the municipality of Milan (n=13), patients and relatives referred to the geriatric outpatient clinics of the hospital Policlinico of Milan (n=3), and members of the Italian Union of Retired People of Milan (n=2). The 50 potential participants were contacted by phone. Overall, 4 declined because of health problems and 5 gave no explanation, 4 did not reply to the phone call, 1 did not show up at the appointment, and 1 was not fluent in Italian. The remaining 35 persons were then invited for an interview. Procedure Before the interview, phenotype frailty assessment [21] and Mini-Mental State Examination (MMSE) [22] were administered. To be included in the study, the potential participants had to meet 1 or 2 Fried criteria (ie, prefrail category) and score ≥26 at the MMSE (ie, excluding people with clinically relevant cognitive impairment). When a potential participant satisfied those inclusion criteria, the full interview was conducted. We drafted an initial interview grid, which in the first part of the interview included questions on the participants' sociodemographic characteristics, social condition and lifestyle, relationships with relatives and friends, and current health problems and therapies. To explore the experience and attitude toward ICT, the interviewee was asked to imagine a new technology (ie, like a robot) available at home and to describe the functions he/she would have asked for it to have. The interviewees were then shown a short video in which an aged woman talked about her experience with Giraff, a previous version of a robot implemented in the MoveCare project [23]. 
Video was shown from the beginning to 1-min [24,25]. At the end of the video, we again posed the question on the desired functions of the technology, also trying to explore the interviewee's feelings toward the opportunities offered by a service robot at home. Finally, we investigated the participants' level of education and asked the interviewees for the income they thought was necessary to have a satisfactory quality of life in their (urban) context. Even though our interview guide was not theory-driven, we considered some studies [1,4,6,8] when formulating some questions of our grid, which was slightly modified after 2 test interviews. When a candidate did not satisfy the inclusion criteria, we completed a courtesy interview on some neutral topics concerning everyday life. After analyzing the 25 transcriptions, the researchers agreed that data saturation was reached and no more interviews were scheduled. Richards and Morse describe the process of data saturation as follows: "When data offer no new direction, no new question, then there is no need to sample further -the account satisfies. Often, the first sign is that investigator has a sense of having heard or seen it all" [26]. After 2 test interviews, minor modifications were done to the interview grid. The final list of questions is reported in Multimedia Appendix 1. Each interview was performed by one of the 3 investigators involved, who did not have any previous personal or professional relationship with the participants. To guarantee homogeneity and adherence to standards among the interviewers, the senior investigator conducted the test interviews, with the other 2 researchers attending as observers. After the interview and verbatim transcription, the 3 investigators met for debriefing. Interviews took place in a quiet room in the Policlinico, or at the participant's home when it was difficult for the older person to travel to the hospital. Each interview lasted about 45 min. At the end of the interview, and later, during its analysis, the researchers wrote memos on what happened during and after the interview and all the reflections that could have been relevant to data analysis [19]. Demographics Sociodemographic and clinical data were extracted from the interviews and processed to convert descriptive/qualitative information into semantic clusters based on the literature and with the help of a geriatrician. For example, a diet was considered adequate if the older person declared to have 3 quite balanced meals a day; it was judged partially adequate if the interviewee stated to have less than 3 meals a day or 3 meals a day but unbalanced. The diet was considered inadequate if the participant declared to have less than 3 meals a day and those meals resulted to be unbalanced. The final classification is shown in Multimedia Appendices 2 and 3. Finally, descriptive statistics were used to report the characteristics of our sample. Qualitative Analyses Data analysis began as soon as each interview was completed. A total of 3 researchers (with psychopedagogical and/or care background) independently performed the analysis of the interviews. Each interview, transcribed verbatim, was read several times to grasp its global meaning. The coding was data driven. Indeed, each researcher independently identified the meaning units (named in other qualitative methods codes/labels), which were later grouped into clustering of themes. These clustering of themes were later clustered into emergent themes, which were named according to their content. 
As the clustering of themes emerged, the researchers went back to the transcription to verify its adherence to the participant's words. The emergent themes were finally related to each other to create a meaningful network. The researchers then discussed each phase of the process until an agreement was reached and the results were reconciled. Participants A total of 26 candidates eventually met the inclusion criteria. After the full interview, one of the participants retired his/her consent to participate in the study for personal reasons and the file of interview was destroyed. The characteristics of the 25 participants are summarized in Tables 1-3. Qualitative Results As part of the qualitative analysis of the interviews, 477 meaningful units were identified and then grouped into 110 clustering of themes. From these, 13 emergent themes resulted. Overall, 9 of these clustering of themes were related to the quality of life in prefrail older people and will be presented elsewhere. Furthermore, 4 emergent themes were concerning the older people's attitudes toward ICT-based home services. These were (1) previous and current experiences with ICT, (2) participants' views of their own relationship with technologies, (3) functions they would ask an imaginary robot for, and (4) sensitivity of the older person's opinion to others' experiences with existing ICT solutions for independent living (see Multimedia Appendix 4). Previous and Current Experiences With Information and Communications Technology Devices: Between Continuity and Discontinuity A clear digital divide was identified among participants in relation to the current use of technologies. One group (about one-third of the participants) declared to use an old mobile phone (number of participants=6) or the landline phone and no other technological devices (n=1); some affirmed they only used mobile phones during holidays (n=2). The other group (about two-thirds) stated to use 1 or more ICT devices, such as a smartphone (n=13), a computer (n=9), and a tablet (n=1). Among smartphone owners, 3 also owned an old mobile phone. Furthermore, 7 participants affirmed they used WhatsApp. Those who had a smartphone also used some functions/apps such as the camera (n=6), the alarm clock, the calculator, and the memo. Moreover, among the smartphone/tablet owners, 1 used Skype and 2 used Facebook. Those who owned a smartphone (or a computer) used it to play Web (eg, Burako, a card game; n=3) or not Web-based (solitaire; n=1). In addition, 5 smartphone owners declared that they had an internet connection but were unable to use it at that time. A total of 3 participants declared that they had some difficulties in using the touch function of their smartphone; however, 2 of them affirmed that they could find some strategies to overcome those difficulties. Technological devices were mainly used by participants to communicate and interact with relatives or friends. To this purpose, some informants specified that they used the smartphone to make phone calls (n=4), for emails (n=1), and for the app WhatsApp (n=1). The computer (internet) was used for email (n=8), video calls via Skype (n=4), to receive news from relatives (n=3), and to receive photos (n=2). One interviewee (Interviewee 12) said she had an emergency call device, but she never used it as she was afraid of making unintentional calls that would have alarmed her caregivers unnecessarily. 
Technological devices were not only used by the participants for communication purposes; internet was also used to search for recipes (n=1), for home banking (n=3), to search for the meanings of new words, or to look for information (n=7) Moreover, the participants declared that they used computers for some applications such as Office (n=3; "Of course, [I know] how to write documents using Word, working on tables..."-Interviewee 5), for their personal accountancy (n=1; "I use it to manage the household accounts"-Interviewee 5), to archive photos (n=1), and to watch Digital Versatile Discs (n=1). Furthermore, from the analysis of the interviews, 3 trends emerged concerning the current use of ICT in relation with the past use of ICT, that is, at a younger age. About one-third of the participants used a computer in the past (both for work and for personal interests) and still used either a computer (n=6) or a smartphone (n=6). Most had an internet connection. An exception in this group was represented by 1 interviewee who used a computer in the past, but at the time of the interview, the only ICT he/she owned was an old mobile phone. A second trend was represented by those (about one-third of the participants) who did not use a computer in the past and just used an old mobile phone at the time of the study. Finally, a third group of participants did not use a computer in the past, but declared to currently use 1 or more ICT devices, such as a smartphone (n=7), a computer (n=3), or a tablet (n=1). With regard to the association with the level of education, about half of the participants who had a computer in the past, received a postsecondary level of education. Conversely, among the 13 smartphone owners, only one-third went through higher education. Surprisingly, we did not find any obvious association between the participants' age and their use of technology. Older People's Views of Their Relationship With Information and Communications Technology: Thinking Negatively, Acting Positively Almost one-third of the interviewees (n=11) did not have an opinion about their relationship with technological devices. One-third (n=9) claimed to have a negative relationship with ICT, that is, endorsing a feeling of denial, inadequacy, or a lack of expertise: In addition, 5 of those who could not define their relationship with ICT actually used several devices. A small group of participants affirmed that they were in favor of new technologies (n=1), or stated that they wanted to improve their relationship with ICT or they wanted to adapt themselves to "advancing technology" (Interviewee 8), for example, by learning how to use their smartphone better (n=2), or by attending an informatics course (n=1). One participant said he/she wanted "to live in the technological era" (Interviewee 11). Finally, 3 participants clearly stated that ICT was not a fundamental part of their lives, as "you can work the same without having a computer" (Interviewee 7) or was not "essential" (Interviewee 16). One interviewee declared he/she still used a paper telephone book "only because I often lose my cell phone" (Interviewee 15). However, 2 of these participants actually used a computer (Interviewee 15) or a smartphone (Interviewee 16). Functions That Older People Would Ask a Hypothetical Robot for: Between Housekeeping and Need for Company Most of the participants (n=18) wished to own a robot that helped them with cleaning the house. 
According to an interviewee, a robot could be a valid substitute of the housekeeper: [A small robot should] first of all, do the cleaning and then I don't know, nothing else, because I'm only interested in having the house cleaned and tidied up. [Interviewee 1] Among these 18 interviewees, only 7 did not have any help with housekeeping activities. Cooking was another desired task (n=5). One participant imagined that the robot could even be used as an oven. Some participants (4) asked for a multitasking robot, able to act as a formal caregiver or babysitter or maid. According to 1 participant, it might be a machine with "its own intelligence" (Interviewee 7), able to perform the tasks that a human does not want to do or cannot do. Following this request of a multitasking robot, 1 participant imagined that this mechanical device could be able to drive a car. Another participant imagined the robot to go shopping with him. Others (n=2) wanted this hypothetical robot to go shopping for them or to buy their medicines, if they were unable to go by themselves (n=2). One participant asked for help with getting washed and dressed. Finally, 1 participant asked for support with physiotherapy. Other participants (n=3) believed that a robot should be able to monitor the old person and ask for help in case of emergency: On the other hand, about one-third of the participants would ask for a robot with more entertaining functions. The imaginary robot could itself be a companion (n=3) to listen to music (n=1), sing together (n=1), play cards (n=1), or share hobbies with (n=1). It was otherwise seen as a provider of general leisure activities (n=1), or to keep abreast of news (n=1), or as a tool to learn writing and reading (n=1). Most of the interviewees in this second group had a weak or fair social life. Finally, a small group of participants (n=3) claimed that a robot could not only be useless but also a deterrent for those people who can perform some tasks autonomously (eg, housekeeping). Sensitivity of the Old Persons' Opinion to Others' Experiences With Existing Information and Communications Technology Solution for Independent Living After watching the video about Giraff, a robot carer which offers company and some monitoring functions [23], the participants' opinions on ICT undertook a substantial change. The most requested function from a robot was still related to cleaning the house, but this request was expressed by 9 participants instead of 19. The second most frequent request, after watching the video, became the robot's ability to ask for help in case of emergency (n=9), falls (n=1), and danger, in general (n=1). In particular, according to 5 participants, the robot could be useful in case of gas leaks, open windows, or an attempted break-in (n=5). Moreover, only after watching the video, the participants could endorse the usefulness of the robot in monitoring vital signs (n=4). One person even spoke about "being monitored 365 days per year" (Interviewee 6). Giraff was not only perceived by some participants (n=3) as a useful instrument to remind the older person to perform some health-related tasks (eg, measuring their blood pressure and taking pills), but also as a reminder for buying groceries. The function of preparing meals was still considered important by 2 participants. The robot was also acknowledged, but only minimally, as a communication instrument (n=2); indeed, it was considered by 1 participant as a valid alternative to a computer or a tablet. 
Moreover, an informant speculated an amplification function for people who suffer from hearing problems. However, the idea that a robot could be a companion for an older person increased after watching the video (mentioned by 7 participants). One participant maintained that Giraff could be useful to go for walks together, and another imagined it could drive a car and take a person out. The fact that the robot could not "speak, offend, or get angry like a human being" was mentioned as an advantage of this special companion by 1 participant (Interviewee 9). Some participants considered Giraff a useful caregiver for those who are in old age (n=4) or have cognitive problems. However, although some informants still endorsed that the robot could perform important assistance functions in the case of people with lower levels of autonomy, overall, after watching the video, the participants tended to scale down the robot's functions compared with the multitasking robot they had imagined before. Interestingly, Giraff was seen as inadequate if an old person was self-sufficient in performing certain tasks (n=4) or even as a deterrent to keeping oneself active and self-sufficient (n=2). One participant highlighted that Giraff cannot be useful in increasing social relationships, particularly for self-sufficient people. Another participant did not consider this robot as a valid alternative or as an addition to other devices such as the computer and telephone. One participant stressed the fact that Giraff could be annoying for those who live in a small house. We noticed that the participants' capacity to change their mind (about the possible functions of the robot) after watching the Giraff video was mainly related to their baseline relationship with ICT; that is, the more positive it was, the more easily they changed their mind and could grasp the potentialities of a robot aimed at assisting an older person who lives alone. Principal Findings Our study partially challenges the previously reported positive association between older people's past and current use of ICT, as well as the positive association between the current use of ICT and higher education and lower age [4,12,14,15]. From our study, it emerged indeed that a number of interviewees who were currently using ICT (ie, smartphone) did not own a computer in the past, or did not receive higher education, or were not all young older people (aged 65-74 years). Furthermore, we found that a negative view of ICT (thinking negatively) does not always correspond to its actual rejection or underuse in everyday life. Referring to robotics, our interviewees appeared quite open-minded. In particular, robots were considered suitable for housekeeping, for monitoring older people's health and accidental falls, and for entertainment. After watching the Giraff video [23], many interviewees acknowledged those functions and even glimpsed other potentialities. Although ICT is providing a wealth of new devices and possibilities, because of the common belief that older people are not keen to adapt to or use new technologies in the same way younger generations would, researchers have been very cautious in developing new platforms, applications, and systems for older people [27]. Indeed, the literature reported some correlation between positive or negative perceptions of ICT and older people's level of education, age, and previous familiarity with technological devices [4,12,14,15], thus supporting the theory of an educationally based digital divide. 
However, according to our findings, being open-minded to technologies seems to be only loosely related to older people's level of education, contrary to what was reported by other studies [14]. In our study, just half of the participants who had a computer owned a degree, and among smartphone owners, just one-third had received higher education. Furthermore, we did not find correspondence between older people's lower age and greater use of technology, contrary to previous studies [14,15]. In accordance with the existing research [4,12], we found that the actual use of technology seems more correlated with the previous use of technological tools. We also found that those who used a computer in the past seemed to be more open-minded toward new technologies, particularly smartphones. However, having used technology in the past does not seem a necessary condition to accept technology in the older age [15]; even if some participants in our study did not own a computer in the past, they did currently have a smartphone and were able to use it, not only to communicate, but also for other functions offered by the device. This is quite an important finding and it may be explained by the fact that today's technologies are certainly more available and immediate than in the past, and therefore more accessible to older people too, with little adaptation. This process seems to improve older people's digital literacy. This is confirmed by the fact that previous familiarity with technology seems to have a greater impact on the current use of computers and less on smartphones and tablets use, whose interfacing modality is more intuitive [28]. Concerning the actual use of technological devices, we found that they are used by more technological older people for several aims. The most frequent use is to communicate with their children and relatives, but also to search for the meaning of some words, recipes, and to manage their money (home banking). In accordance with a recent study [6], our results indicate that the greatest impact on the use of technologies is determined by their views: the more positive it was, more participants said they used technologies with satisfaction. However, the relationship between technologies views and their use is not straightforward. In fact, we detected some ambivalence in the participants' words. A conflicting position sometimes emerged from the interviews about the object (the technologies that the participants actually used) and the way older people represent technologies to themselves. We discovered that even among the participants who claimed to have a negative relationship with technologies, some familiarity with them can be found in practice. It seems that negative views of technology do not always affect their actual use; on the contrary, sometimes older people tend to think negatively about technology, but act positively toward it. In general, the older people in our study had many negative preconceptions about technology. They often declared that they were not able to use it despite the fact that they actually used several devices in their daily life. This feeling of inadequacy or incompetence in using technologies also emerged from other studies [8][9][10], and it seems to be reflected in our interviewees' need to receive some help when using technologies. Therefore, providing technical support appears to be fundamental when proposing new technologies to older people. 
With regard to the functions that the participants would ask a hypothetical robot for, when assisting them at home, we found that they require, above all, cleaning functions. In some cases, they asked for support in preparing meals and in other cases they requested to have their health state monitored or to receive monitoring and aid on request, as already reported in literature [1,7]. From the analysis of the participants' requests, it seems that the desire to maintain autonomy and avoiding the help of a caregiver at home, especially for cleaning, could make them accept a robot in their home, as reported by existing literature [4,17,29]. Those participants who claimed for a multitasking robot seemed to be moved by the same reasons, although simultaneously they made an unrealizable request. It could be interpreted as a rejection of a robot at home or conversely as an unexpressed request for human aid, which can effectively act as a maid. A few participants stated that a robot could be useless or discouraging for those people who could still perform some tasks and therefore have a negative impact on their health and autonomy. Health status is a common concern of all participants. A vital parameter control function was also considered important by participants, especially for the anxiolytic effect it may have on them. They also requested an option for Web-based shopping (although some have pointed out that the everyday shopping is important for the older people's health) and functions that are similar to tools such as the Bimby or Thermomix, a kitchen robot which helps to prepare meals. The need of sociality also emerged clearly. Some older people (especially those who stated that they had a poor or average social life) would ask a hypothetical robot for entertainment-related functions such as listening to music, singing, playing cards, receiving information or keeping up-to-date, and even sharing hobbies. A robot supporting older people living alone could thus have management and monitoring functions, as well as entertainment and company functions. After watching the Giraff video, many of the above functions advocated by participants were confirmed, some reinforced, and new possibilities on socialization envisaged. The interviewed participants understood more clearly the usefulness of a robot in monitoring and managing emergencies, particularly the Giraff's function of calling in case of emergency, which was acknowledged by one-third of the participants. One-third of the participants highlighted the perceived usefulness of Giraff in detecting gas leaks or intrusion of strangers into the house. From the socialization point of view, the participants' acknowledgment of a robot's ability to communicate and entertain increased after watching the video. This is an interesting aspect as people may not be automatically disposed to recognize a nonhuman interlocutor (the robot) as a converser in communication processes or as a leisure companion. One participant even emphasized that the robot "could not speak, offend, and get angry like a human being"; so, this mechanical device could be sometimes preferred to the company of a person. We found that although only a few participants considered that a robot with the abovementioned functions would be useful to those who lost their autonomy, many realized that such a device would help maintain one's own autonomy and be able to keep living alone in their home. 
The way participants changed their mind (before and after watching the video) on the possible functions of a robot assisting them at home, seems to be correlated to the views of the technologies that the informants had previously expressed: the more positive it was, more they easily changed their mind and could grasp the potentialities of a robot aimed at assisting the older person who lives alone. Therefore, when proposing a robot to a prefrail elder living alone, the relationship with technologies and their views should be carefully evaluated. Both the older person and the other significant ones around them must perceive the usefulness of technologies, for example, in the contribution that they can give to their safety. The functions that our participants asked a robot for are perfectly in line with those recommended by others belonging to different cultures, such as house cleaning, help in finding information, detecting falls or domestic accidents, monitoring vital parameters, and even in walking around [1,7]. In addition to these functions, more related to house management and to health monitoring, some participants expressed their desire for a company or entertainment function that may counteract a condition of loneliness referred by many older people as indeed a cause of frailty. In conclusion, the results of our study indicate that a technological device such as Giraff could be well received by prefrail older people who live alone at home, even when they belong to a Latin culture (Mediterranean countries and Latin America), where the elder's assistance is commonly delegated to family members, usually women. In some cases, the robot was considered useful in communication processes or even as a leisure companion. This means that some participants could consider such a device as a remedy to their loneliness. This does not mean considering a technology such as Giraff as a substitute of a human person, but as a tool to maintain and broaden one's relationships and interests. Limitations The qualitative design of this study entails both strengths and limitations. Though we were able to examine data not easily accessible to quantitative research, these results are not transferable. An additional limitation stems from our recruitment mostly coming from ANTEAS, in which participants volunteered or participated in recreational activities. Therefore, even if purposively sampled, the majority of our participants could be defined as socially active older people. Furthermore, our participants lived in a big city. Different perspectives on technologies could be gathered from prefrail older people who live alone in a countryside. Conclusions This study provides insights on the perceptions of new technologies in a group of prefrail older people. This is a target of potential users that has received little attention by the literature. Our study suggests that many participants were able to use technologies, even if they did not always recognize it. Moreover, many seemed to be open-minded toward technological devices, even toward robots. Moreover, some older people, even from Latin culture where familism -a strong commitment to the family as a system of support, learning, socialization, and assistance-can be a core cultural value, could recognize among the functions that a robot should have, some not trivial ones, such as company and entertainment. 
Therefore, proposing new advanced technology to a group of prefrail older people, who are self-sufficient and live alone at home, is both attainable and desirable. However, before proposing robots to this target group, it is highly recommended to conduct a pilot study on the development of such new technology to make it more suitable for the end users, as we began to do in this study.
8,426.6
2019-08-01T00:00:00.000
[ "Sociology", "Computer Science" ]
Soliton Solutions of Space-Time Fractional-Order Modified Extended Zakharov-Kuznetsov Equation in Plasma Physics The aim of this article is to calculate the soliton solutions of space-time fractional-order modified extended Zakharov-Kuznetsov equation which is modeled to investigate the waves in magnetized plasma physics. Fractional derivatives in the form of modified Riemann-Liouville derivatives are used. Complex fractional transformation is applied to convert the original nonlinear partial differential equation into another nonlinear ordinary differential equation. Then, soliton solutions are obtained by using (1 GG′ ⁄ ) −expansion method. Bright and dark soliton solutions are also obtained with ansatz method. These solutions may be of significant importance in plasma physics where this equation is modeled for some special physical phenomenon. Introduction Many real world problems in science and engineering are modelled by using nonlinear partial differential equations (NPDEs). Finding the exact solutions of such nonlinear equations is an important area of research. Fractional differential equations (FDE’s) are also getting the attention of the researchers in the recent years. Many real world problems are modelled via FDE’s in fluid dynamics. Exact solutions of such models play an important role in the mathematical sciences [1-9]. FDE’s are studied by researchers for obtaining the exact solutions with different techniques in literature. Exact solutions of time-fractional Burgers equation, biological population model and space–time fractional Whitham–Broer–Kaup equations are calculated with (GG′ GG ⁄ ) −expansion method [10]. Lie group analysis for n order linear fractional partial differential equation and nonlinear fractional reaction diffusion convection equation are performed [11]. Homotopy perturbation method is applied to the nonlinear fractional Kolmogorov-Petrovskii-Piskunov equations to obtain exact solutions [12]. Exact solutions of (3+1)-dimensional space-time fractional modified KdV-Zakharov-Kuznetsov equation are obtained [13]. The improved extended tanh-coth method and (1 GG′ ⁄ ) −expansion method are used to find the exact solution of Sharma–Tasso–Olver equation [14,15]. New extended trial equation method is used for finding the exact solution of a class of generalized fractional Zakhrov-Kuznetsov equation [16]. Improved fractional sub-equation method is utilized for calculating the exact solutions of (3+1)-dimensional generalized fractional KdV–Zakharov–Kuznetsov equations [17]. Generalized Kudryashov method is considered for timefractional differential equations [18]. In mathematical physics, Zakharov-Kuznetsov (ZK) equation is used to describe the nonlinear development of ion-acoustic waves in magnetized plasma [19]. It is comprised of cold ions and hot isothermal electrons in the presence of a uniform magnetic field. It is also known as the generalization of KdV equation. Extended form of (1+2)-dimensional and (1+3)-dimensional quantum Zakharov-Kuznetsov equation are investigated for exact solutions [20]. Bulletin of Mathematical Sciences and Applications Submitted: 2018-04-26 ISSN: 2278-9634, Vol. 
20, pp. 1-8, 2018.

In this article, we consider the following (1+3)-dimensional space-time fractional-order modified extended Zakharov-Kuznetsov (MEZK) equation for calculating the soliton solutions:

D_t u + β u D_z u + γ (D_x u + D_y u + D_z u) + δ (D_z D_x u + D_z D_y u) = 0, (1)

where β, γ, δ are constants and 0 < α ≤ 1. The article is arranged as follows. In Section 2, the modified Riemann-Liouville derivative of order α is explained with some of its properties. In Section 3, soliton solutions are obtained with the (1/G′)-expansion method, and bright and dark soliton solutions are also calculated with the ansatz method. The conclusion is provided in Section 4. References are given at the end.

Modified Riemann-Liouville Derivative and Its Properties

The modified Riemann-Liouville derivative of order α for a continuous function is defined as follows [21]:

D_x g(x) = (1/Γ(1 − α)) d/dx ∫_0^x (x − τ)^(−α) (g(τ) − g(0)) dτ, for 0 < α < 1,
D_x g(x) = (g^(n)(x))^(α−n), for n ≤ α < n + 1, n ≥ 1, (2)

where g: R → R, x ↦ g(x) denotes a continuous function that is not necessarily first-order differentiable. The following are important properties of the modified Riemann-Liouville derivative.

• If h: R → R is a continuous function, then its fractional derivative can be expressed as an integral with respect to (dx)^α.
• For any constant K, the fractional derivative vanishes: D_x K = 0.
• The fractional derivative is linear: for functions g(x) and h(x) and constants a and b, D_x (a g(x) + b h(x)) = a D_x g(x) + b D_x h(x).
• For h(x) of a given special form, the fractional derivative admits a corresponding closed-form expression.
Soliton Solutions of Fractional Form of MEZK Equation

In this section, different methods are applied to find the soliton solutions of Eq. (1). Here, we use the (1/G′)-expansion method for calculating the soliton solutions. For transforming Eq. (1) into another ordinary differential equation (ODE), we apply a complex fractional transformation.

Ansatz method. To obtain bright and dark soliton solutions of Eq. (1), the ansatz method is applied.

Bright soliton solutions. To obtain the bright solutions of Eq. (9), we consider an ansatz in which one parameter is the amplitude of the soliton, another is the inverse width of the soliton (required to be positive for the soliton to exist), and an unknown exponent is determined during the derivation of the solution. Substituting the ansatz into Eq. (9) and equating the two exponents arising in Eq. (24) fixes the unknown exponent; comparing the coefficients of the different powers then yields a system of algebraic equations. Solving this system, one obtains the parameter sets (Set 1, Set 2, Set 4, and so on) and hence the bright soliton solutions, where ξ is defined in Eq. (7).

Dark soliton solutions. To calculate the dark solutions of Eq. (9), we take an ansatz in which the unknown positive exponent is determined during the derivation of the solution. Substituting the ansatz into Eq. (9) and proceeding as before (Eq. (39)), then solving the resulting system, one obtains four parameter sets (Set 1 to Set 4) and hence the dark soliton solutions, where ξ is defined in Eq. (7).

Conclusion

In this article, the space-time fractional form of the MEZK equation is investigated for soliton solutions. A complex fractional transformation is utilized to obtain a nonlinear ODE from the fractional MEZK equation. Bright and dark soliton solutions are obtained with the solitary wave ansatz method. The (1/G′)-expansion method is also applied to obtain some other solutions. 
These solutions may be of significant importance for the explanation of some special physical phenomena arising in plasma physics modelled by this equation. This article also highlights the strength of the methods to obtain soliton solutions to highly nonlinear FDEs with constant coefficients.
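The coefficient-matching step at the heart of the ansatz method can be illustrated on a simpler, non-fractional model. The sketch below uses SymPy to substitute a bright-soliton ansatz U = A sech(Bξ) into the travelling-wave reduction of the ordinary modified KdV equation u_t + 6u²u_x + u_xxx = 0, a stand-in chosen purely for illustration (it is not the MEZK equation above), and solves the resulting algebraic system for the amplitude A and inverse width B, which play the same roles as in the ansatz discussion above.

```python
import sympy as sp

xi, v = sp.symbols('xi v', positive=True)   # travelling-wave variable and speed
A, B = sp.symbols('A B', positive=True)     # soliton amplitude and inverse width
s, t = sp.symbols('s t')                    # stand-ins for sech(B*xi), tanh(B*xi)

# Bright-soliton ansatz substituted into the travelling-wave ODE of the model
# mKdV equation u_t + 6 u^2 u_x + u_xxx = 0, i.e. -v U' + 6 U^2 U' + U''' = 0
# with xi = x - v t.
U = A * sp.sech(B * xi)
ode = -v * sp.diff(U, xi) + 6 * U**2 * sp.diff(U, xi) + sp.diff(U, xi, 3)

# Rewrite in terms of sech and tanh, use tanh^2 = 1 - sech^2, factor out sech*tanh.
expr = sp.expand(ode.subs({sp.sech(B * xi): s, sp.tanh(B * xi): t}))
expr = sp.expand(expr.subs(t**3, t * (1 - s**2)))
expr = sp.expand(expr / (s * t))

# Each coefficient of the remaining powers of sech must vanish; solving the
# resulting system recovers the familiar relation A = B = sqrt(v).
conditions = sp.Poly(expr, s).coeffs()
print(sp.solve(conditions, [A, B], dict=True))   # -> [{A: sqrt(v), B: sqrt(v)}]
```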
2,157.4
2018-11-01T00:00:00.000
[ "Mathematics", "Physics" ]
Titanium dioxide high aspect ratio nanoparticle hydrothermal synthesis optimization Abstract TiO2-B (bronze) nanowires were synthesized via simple hydrothermal treatment of commercial titanium dioxide nanopowder in aqueous NaOH. The reaction temperature, calcination temperature, reaction time, NaOH concentration, autoclave filling fraction and precursor were systematically varied to optimize the nanowire morphology. The crystal structure, morphology and particle size were investigated by XRD, SEM and TEM. The morphology and structure are sensitive to experimental conditions. A reaction temperature of at least 150°C and a NaOH concentration of at least 10 M are essential, but reaction time from 24 to 72 h makes little difference. Nanowires obtained at 150°C were 60-180 nm wide and 2-4 μm long, while those after treatment at 200°C were thinner (40-100 nm) and longer (2-6 μm). The relationship between reaction conditions and morphology is discussed and practical guidelines for titanium dioxide nanowire synthesis are suggested. Introduction Nanostructured materials such as wires, rods, whiskers, belts and tubes attract great interest because unique mechanical, electrical, magnetic and optical properties result from special combinations of bulk and surface structures. Since the discovery of carbon nanotubes by Iijima in 1991 [1], much effort has been devoted to the development of one-dimensional nanomaterials. This soon embraced materials other than carbon, primarily sulfides [2] and oxides [3,4]. Among metal oxides, nanowire titanium dioxide has attracted particular attention because of its broad range of potential applications, including photocatalysis [5], solar cells [6], gas sensors [7], and drug delivery [8]. It also shows promise as a negative electrode material [9,10] because its open mesoporous structure, efficient transport of lithium ions, and effective ion-exchange properties result in a high charge/discharge capacity, fast kinetics, robustness, and good safety characteristics. TiO 2 nanowires can be synthesized by a sol-gel process [11], a template-assisted method [12], electrochemical anodic oxidation [13], or hydrothermal treatment [14]. Each method has advantages and disadvantages. For example, the template-assisted method employs an anodic aluminum oxide template to control the TiO 2 nanostructure but its removal is difficult; it may remain as an impurity. Kasuga et al. [15] first reported titanium dioxide nanotube hydrothermal synthesis by reacting TiO 2 particles with concentrated NaOH followed by acid washing. Armstrong et al. [16] found that as-synthesized nanotubes and nanowires are Na y H 2-y Ti n O 2n+1 •xH 2 O sodium hydrogen titanates. Acid washing results in ion exchange to produce H 2 Ti n O 2n+1 •xH 2 O hydrogen titanates; their dehydration yields nanostructured TiO 2 . Hydrothermal synthesis of nanowires has many advantages, including solubility, diffusion and crystallization enhancements as well as control of morphologies, sizes and phase transformations at relatively low temperature. Although many papers on hydrothermal titanium dioxide nanowire synthesis have been reported, there has been no systematic study of the effect of reaction conditions on their crystal structure and morphology. Experimental procedure Titanium dioxide nanowires were synthesized via hydrothermal reaction of commercial TiO 2 powder (Degussa P25) in aqueous NaOH followed by ion exchange in aqueous HCl and calcination. 
In our initial procedure 1 g of P25 powder was added to 15 M aqueous NaOH. After stirring for 1 h the resulting suspension was transferred to a Teflon-lined stainless steel autoclave and heated at 170°C for 72 h. The product was acid washed by stirring in 0.1 M HCl for 12 h. The material was then filtered and washed with distilled water or 0.1 M HCl until the filtrate became neutral. The dried, acid-washed samples were calcined at 400°C for 4 h in air. This procedure was optimized by varying the parameters in Table 1. The products' phase structures were investigated by X-ray diffraction (XRD) using a Philips PW 1050 diffractometer with CuKα irradiation (λ = 0.154056 nm). The texture and morphology were observed by scanning and transmission electron microscopy using Zeiss EVO 40 (SEM) and JOEL JEM 1200 EX (TEM) instruments. Effect of reaction temperature The starting commercial Degussa P25 TiO 2 powder is composed of small, irregular particles approximately 25 nm in diameter (Fig. 1). It is 75% anatase and 25% rutile. In the initial procedure the reaction temperature was 170°C. Four different temperatures were then tried with the other parameters unchanged. The reaction temperature influences the product nanostructure. Fig. 2 shows XRD patterns of TiO 2 treated for 72 h in 15 M NaOH at temperatures from 110 to 200°C. SEM and TEM images are in Fig. 3. Reaction below 130°C did not alter the original morphology or structure. Increasing the reaction temperature to 150°C changed the morphology from particles to nanowires and the crystal structure changed to TiO 2 -B (bronze). Further temperature increase did not change the structure or morphology but the aspect ratio increased. TiO 2 nanowires obtained at 150°C were 60-180 nm wide and 2-4 µm long, while those after treatment at 200°C were thinner (40-100 nm) and longer (2-6 µm). Effect of calcination temperature Calcination is necessary to remove water from intermediate hydrogen titanates and form pure crystalline TiO 2 . The standard 400°C calcination temperature resulted in TiO 2 -B nanowires. Calcination at 600°C caused the nanowires to aggregate into irregular anatase particles (Figs. 4, 5). Effect of hydrothermal reaction time Hydrothermal treatment of titanium dioxide at 170°C for different times (24, 48, 72 and 96 h) changes the morphology (Fig. 6). Nanowires appear after 24 h and do not change up to 72 h. After 96 h some nanoparticles appear to be attached to the nanowires. The nanowires are 40-180 nm wide and 2-6 µm long. Morphology, size, and structure (TiO 2 -B) did not depend on the reaction duration (not shown). Effect of NaOH concentration The morphology is strongly affected by the NaOH concentration. Fig. 7 shows SEM and TEM images after hydrothermal treatment at 170°C for 72 h in 5, 10 or 15 M NaOH. In 5 M NaOH large irregular particles were formed. Closer inspection of the TEM image reveals that these are masses of nanorod-like particles. Increasing the NaOH concentration to 10 M results in nanowires with 40-120 nm diameter averaging about 2 µm long. Further NaOH concentration increase to 15 M leads to elongation of the nanowires up to 6 µm. The crystal structure was the same (TiO 2 -B) irrespective of NaOH concentration (XRD not shown). Effect of autoclave filling fraction Surprisingly, the autoclave filling fraction had a strong impact on the product. In the initial procedure the autoclave was only 5% filled, yielding well defined TiO 2 -B nanowires. 
Increasing the filling fraction to 10% led to well-developed TiO 2 -B nanowires with small attached nanoparticles. When the volume fraction was raised to 50%, nanoparticles dominated the nanowires. The product exhibits a low crystallinity anatase structure (Fig. 8). Further filling fraction increase to 85% results in improved anatase crystallinity but the morphology changed to nanoparticles (Fig. 9). Effect of precursor To determine whether different forms of TiO 2 can be used as precursors in hydrothermal nanowire synthesis TiO 2 , nanorods and nanopowder were prepared following literature procedures [17,18]. The as-synthesized materials were added to 15 M NaOH and reacted at 170°C for 72 h. XRD patterns of the products are in Fig. 10 and SEM, TEM images are in Fig. 11. It seems that only commercial P25 TiO 2 powder yields the 40-120 nm wide and 2-5 µm long TiO 2 -B wires. The TiO 2 nanorods were converted to single nanowires dominated by large, irregular particles. The anatase crystal structure remained unchanged. The titanium dioxide nanopowder precursor gave some thin (50-75 nm) and short (2 µm) nanowires with irregular particles and a TiO 2 -B structure. Optimization of the synthesis parameters might improve the results. The optimized reaction conditions for one dimensional TiO 2 nanoparticle formation are summarized in Table 2. Conclusions We have identified hydrothermal synthesis conditions leading to TiO 2 nanowires. Reaction temperature and NaOH concentration are particularly important in controlling the nanostructure. Titanium dioxide nanowires can be readily synthesized hydrothermally using commercial Degussa P25 titanium dioxide nanopowder and highly concentrated (above 10 M) NaOH. TiO 2 -B nanowires 60-180 nm wide and 2-6 µm long appear at the relatively low reaction temperature of 150°C after 24 h. Surprisingly, the autoclave filling fraction is crucial in nanowire formation; exceeding 10% changes the product to particles with a lower aspect ratio. Due to the low cost, procedural simplicity and facile morphology control, one dimensional TiO 2 can be manufactured on a large scale and considered for applications in solar cells, lithium batteries, hydrogen storage devices or biomedical applications where such materials have shown promise.
The Habitability of Venus
Venus today is inhospitable at the surface, its average temperature of 750 K being incompatible with the existence of life as we know it. However, the potential for past surface habitability and for present-day habitability of the upper atmosphere (clouds) is hotly debated, as the ongoing discussion regarding a possible phosphine signature coming from the clouds shows. We review current understanding about the evolution of Venus with special attention to scenarios where the planet may have been capable of hosting microbial life. We compare the possibility of past habitability on Venus to the case of Earth by reviewing the various hypotheses put forth concerning the origin of habitable conditions and the emergence and evolution of plate tectonics on both planets. Life emerged on Earth during the Hadean, when the planet was dominated by higher mantle temperatures (by about 200 °C), an uncertain tectonic regime that likely included squishy-lid/plume-lid and plate tectonics, and proto-continents. Despite the lack of well-preserved crust dating from the Hadean and Paleoarchean, we attempt to review current understanding of the environmental conditions during this critical period based on zircon crystals and geochemical signatures from this period, as well as studies of younger, relatively well-preserved rocks from the Paleoarchean. For these early, primitive life forms, the tectonic regime was not critical, but it became an important means of nutrient recycling, with possible long-term consequences for the global environment that were essential to the continuation of habitability and the evolution of life. For early Venus, the question of stable surface water is closely related to tectonics. We discuss potential transitions between stagnant lid and (episodic) tectonics with crustal recycling, as well as consequences for volatile cycling between Venus' interior and atmosphere. In particular, we review insights into Venus' early climate, examine critical questions about early rotation speed, reflective clouds, and silicate weathering, and summarize implications for Venus' long-term habitability. Finally, the state of knowledge of the Venusian clouds and the proposed detection of phosphine is covered.

Introduction
With an average temperature of ∼ 750 K, today the surface of Venus is far from environmental conditions suitable for life as we know it. However, billions of years ago, when the Sun was much fainter (Sagan and Mullen 1972;Hart 1979;Gough 1981;Claire et al. 2012), Venus was located in the middle of the classical habitable zone around our Sun (Kasting 1993;Kopparapu et al. 2013), thus fueling speculations about the early habitability of the planet (Pollack 1971;Grinspoon and Bullock 2007;Way et al. 2016).
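As a back-of-envelope check on this insolation argument, the sketch below compares the flux received by early Venus under a fainter young Sun with that received by present-day Earth. The 0.70-0.75 luminosity factor is an assumed, typical faint-young-Sun range, not a value quoted in the article; the result reproduces, to order of magnitude, the roughly 40% excess insolation cited in the next paragraph.

```python
# Illustrative comparison of early-Venus insolation with present-day Earth.
# The 0.70-0.75 young-Sun luminosity factors are assumed values for illustration.

AU_VENUS = 0.723  # semi-major axis of Venus in AU
AU_EARTH = 1.0

def relative_flux(luminosity_factor: float, distance_au: float) -> float:
    """Flux relative to present-day Earth: L/L_now scaled by inverse-square distance."""
    return luminosity_factor / distance_au**2

for L in (0.70, 0.75):
    f = relative_flux(L, AU_VENUS)
    print(f"L = {L:.2f} L_sun: early Venus receives {f:.2f}x present-day Earth flux")
# Both cases give roughly 1.3-1.4x, i.e. on the order of 40% more insolation.
```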
Important preconditions for habitability at the surface of Venus include a temperature range allowing the existence of liquid water; surface geochemistry with available chemical energy and appropriate elemental and molecular constituents, such as active water/rock interfaces; and protection from lethal solar radiation. The latter could have been provided by liquid water, as well as by a large reflective cloud cover resulting from the presence of liquid water. Surface thermochemical conditions would ultimately have been controlled by Venus' tectonic activity, although different models suggest different scenarios. Venus' convection regime may have changed over the course of the planet's history, and at least some models suggest that Venus could have maintained temperate surface conditions until as recently as 0.7 Ga (Way et al. 2016). Lastly, although the early Sun was fainter, Venus' more sunward position means that the incident solar flux during its early history was still 40% higher than for present-day Earth (Lammer et al. 2008). Even if the overall radiation environment at the surface (as determined by absorption in Venus' early atmosphere) was relatively clement, radiation may still have affected conditions for early life potentially inhabiting water on exposed landmasses or subsea environments, such as hydrothermal vents. Increasing solar luminosity, continuous degassing of CO2 from Venus' mantle into the atmosphere, or large-scale volcanic eruptions may have brought an end to a potential early habitable period. In order to gain insight into whether these preconditions existed in the early history of Venus, we address the mechanisms relevant to Earth's early planetary and biological evolution in Sect. 2. The main challenge with this approach is the loss of the earliest records of Earth's ancient surface due to tectonic recycling. Understanding of the early environmental conditions on Earth can be approached through a combination of modelling, inherited geochemical signatures in younger rocks, and comparison with well-preserved, younger crustal rocks formed about 1 billion years (Gyr) after solidification of the planet. We will also elaborate on tectonic processes and interior-atmosphere volatile exchange relevant to the evolution of Venus and a potential early habitable period in Sect. 3. Based on these constraints, the climate throughout Venus' history from general circulation models will be discussed in Sect. 4. Speculation on present-day habitability is primarily focused on Venus' cloud aerosols. Although larger reservoirs of water are likely to be dissolved in the mantle and present as atmospheric vapor, cloud droplets are the only known place where liquid water is found on Venus. This liquid water is dissolved in sulfuric acid, the aerosols' primary constituent. Section 5 discusses what is known about the requirements for Venus cloud habitability by comparison with Earth's aerobiosphere, as well as suggested Venusian biosignatures, such as UV absorption and the controversial report of phosphine detection. We conclude with an evaluation of mission requirements necessary to improve our constraints on Venus' past and present habitability in Sect. 6.

Water on the Earth
The history of initial habitability on any planet is first and foremost the history of water, although long-lived habitability is also controlled by tectonics. Temperature and pressure, as well as the various gaseous species that comprise the atmosphere, determine the possibility of liquid water at the surface.
A large part of the conditions that govern the onset (or lack thereof) of a habitable era is therefore set by the composition of the atmosphere and its interaction with factors such as planetary characteristics, solar energy input, or material delivery. Recent investigation of calcium-aluminium-rich inclusions (CAIs) in some of the most primitive meteorites suggests the admixture of a significant amount of interstellar water during the early evolution of the protosolar cloud. This, in turn, implies very early formation of planetary reservoirs of volatile elements (Aléon et al. 2022;cf. Grossman and Larimer 1974). Thus, early volatile-containing materials in the Solar System would have contributed to the building blocks (pebbles, e.g. Morbidelli et al. 2012;Raymond 2021;Johansen et al. 2021, and/or planetesimals, Chambers and Wetherill 1998;Levison et al. 2015;Burkhardt et al. 2021) of the inner rocky planets. Some of the early water (and other volatiles) would have been degassed and lost during the Moon-forming impact (Benz et al. 1986;Canup 2004) with a Mars-sized planet (named Theia, cf. Halliday 2000) that occurred approximately 4.51 Ga (Barboni et al. 2017), although Connelly and Bizzarro (2016), also using Pb isotope data, suggest slightly younger dates between 4.426 and 4.417 Ga. Recent calculations suggest that all of Earth's water and other volatiles may have been delivered by volatile-rich carbonaceous chondrites, initially formed outside the orbit of Jupiter but displaced inwards by the planet's growth and migration (Kleine et al. 2020). However, the timing of the accretion of the volatiles to Earth is still an active area of research (Salvador et al. 2023, this journal). Note that Marty (2012) suggests that the isotope signatures of terrestrial H, N, Ne and Ar may be the result of mixing between two end-members of solar and chondritic compositions, with the N and H isotopic compositions suggesting a primitive meteoritic origin. Liquid water is critical to magmatic processes on the Earth, including partial melting of the mantle and crustal recycling. Indeed, water is essential for the production of significant amounts of granitic melts formed by melting of pre-existing crustal rocks (Campbell and Taylor 1983;Jacob et al. 2021;Turcotte and Schubert 2002;Korenaga 2018), although non-hydrous fractionation will form feldspathoids, as testified by the lunar anorthosites (Norman et al. 2003). These granitic melts, in turn, formed the early, buoyant, less dense granitoid rocks that were the cores of early continents. Thus, evidence of any of these phenomena can be used as proxies for the presence of liquid water. Physical evidence for the existence of early granitoid crust, however, is restricted to: (1) zircon crystals formed by crustal fractionation during the Hadean (4.5-4.0 Ga) and Eoarchean (4.0-3.6 Ga) that were eroded from the initial crustal rocks and then sedimented; these ancient zircons have re-emerged in Palaeoarchean (3.5-3.3 Ga) rocks in Western Australia (Wilde et al. 2001;Mojzsis et al. 2001). (2) Small enclaves of granitoid rocks from this period still exist and are occasionally associated with metamorphosed sediments (metasediments), such as the 4.3 Ga (O'Neil et al. 2008) to 3.8 Ga (Cates and Mojzsis 2007) Nuvvuagittuq Supracrustal Belt and the 4.02 Ga Acasta Gneiss (Bowring and Williams 1999) in Canada, the 3.7-3.8 Ga Isua terrane in West Greenland (Moorbath et al. 1973), and the 3.5-3.2 Ga greenstone belts of the Pilbara in W. Australia (Nelson et al.
1999) and Barberton in South Africa (Lowe and Byerly 1999b). Finally, (3) inherited uranium-lead and hafnium isotope signatures in the reworked zircon crystals provide a certain amount of information pertaining to the pre-existing Hadean crust (Mulder et al. 2021). Oxygen isotopic signatures preserved in zircon crystals dated at up to 4.4 Ga (Wilde et al. 2001;Mojzsis et al. 2001;Valley et al. 2014) (but possibly younger in age, Whitehouse et al. 2017) have been interpreted to suggest that the crust from which the crystals formed had been exposed to hydrothermal processing, implying the presence of water recycled into the crust from the surface of the Earth by 4.4 Ga. Indeed, recent combined oxygen and silicon isotope measurements of zircons from the Hadean support the existence of significant quantities of siliceous sediments during the Hadean (Trail et al. 2018). Another proxy for the presence of water is the existence of sediments; they imply erosion by and/or deposition in a body of water. Sediments are associated with the most ancient terranes preserved, the 4.3-3.8 Ga Nuvvuagittuq (Canada) and the 3.7-3.8 Ga Isua (West Greenland) supracrustal terranes, the latter of which also includes metamorphosed pillow basalts, structures unambiguously produced under water. Further evidence of hydrosphere-crust interactions comes from extremely high δ18O values of up to +9 measured in metamorphic zircons formed about 3.5 Ga by reworking of metamorphic crust in the ca. 3.86 Ga Eoarchean Saglek Block (North Atlantic Craton) (Vezinet et al. 2019).

Brief Overview
The interior and tectonic processes on the early Earth had important implications for the building of a habitable planet (e.g. Schubert et al. 1989;Korenaga 2012;Höning et al. 2019a). Indeed, our present-day Solar System provides a perfect correlation between the occurrence of plate tectonics and planetary habitability, although with a sample size of one. A possible reason for this is the increased exchange between the interior and the atmosphere of planets with plate tectonics, compared, for example, with stagnant lid convection (Foley and Smye 2018;Höning et al. 2019b;Rolf et al. 2022;Gillmann et al. 2022, this journal). It has also long been suggested that plate tectonics and the presence of surface liquid water were entwined (Campbell and Taylor 1983, for example) and favoured volatile cycles and, possibly, stabilizing feedback processes for surface conditions. Moreover, while Venus (and Mars) appear to be operating under stagnant-lid-like convection today, it is possible that their convection regimes changed over their respective histories (e.g. Sleep 1994;Gillmann and Tackley 2014;Smrekar et al. 2018). In a review of the evolution of continental crust and the onset of plate tectonics, Hawkesworth et al. (2020) note the paucity of early crustal preservation and reiterate the fact that inferences based on the few preserved remnants represent only a part of the geological history of this early time. This fact is all the more important because it is apparent that tectonic signatures varied in time and place, and that a form of subduction may have been catalysed, at least temporarily, by impacts and mantle plumes, as well as by plate tectonics (Gillmann et al. 2016;O'Neill et al. 2017;Gerya et al. 2015a). A recent study of Paleoarchean zircon ages recording sub-mantle δ18O values relates their production to impact-induced crustal recycling. The timing of the onset of plate tectonics is still debated and ranges from ca.
4 Ga to 1 Ga (as reviewed by Lammer et al. 2018;Dehant et al. 2019;Korenaga 2021). Most estimates place the transition from an earlier convection regime (possibly from a more stagnant state, or already a plume-induced proto-plate tectonics, see also Fig. 1) between 3 and 4 Ga, with the process taking place gradually at different places and at different times. Prior to about 3.0 Ga, xenon isotopes suggest little recycling of volatiles in the crust (Péron and Moreira 2018). However, the recent review by Korenaga (2021) hypothesises an early start to plate tectonics during the Hadean, as soon as there was water at the surface of the planet. The existence of plate tectonics has numerous implications: firstly, that the crust was sufficiently rigid as to allow crustal breakup under stress caused by vigorous mantle convection, as well as to allow the intrusion of dyke swarms (Cawood et al. 2018), and secondly, that it was dense enough (i.e. mafic in composition) to subduct (Van Kranendonk 2010;Hawkesworth et al. 2009;Cawood et al. 2013).

Figure caption: Overview of changes in crustal growth, crustal thickness, crustal reworking, lithospheric stabilisation and the formation of supercontinents due to lateral accretion, the appearance of dyke swarms indicating rigid crust, changes in the oxygen isotope composition reflecting increasing continental sediment incorporated into the mantle/crust with time, and the appearance of blueschists indicating high-temperature metamorphism, all signs of, and influenced by, the emergence of plate tectonics (after Hawkesworth et al. 2020).

The paired metamorphic zones so typical of convergent tectonics, and recognised by Th/Nb ratios, suggest that magmas both related to subduction (suites of high Th/Nb magmas) and not related to subduction (low Th/Nb magmas) were concomitant in different locations of the planet (Hawkesworth et al. 2020). In parallel with the initiation of plate tectonics, there was a change in the composition of juvenile continental crust from mafic to intermediate andesitic compositions (Dhuime et al. 2015), the latter characterising the upper continental crust (Chowdhury et al. 2017;Perchuk et al. 2018). Increasing crustal thickness and more acidic compositions of the granitic cores of the continents led to landmasses with higher relief, which influenced erosion and sedimentation, and hence the composition of the oceans and the atmosphere. Major continental amalgamation to form supercontinents started at least by 2.8 Ga (Evans 2013). Rates of continental reworking (estimated from Hf isotope ratios, Belousova and Kostitsyn 2010;Dhuime et al. 2012) and destruction linked to tectonic processes started increasing from about 3.0 Ga, an indication of efficient recycling of the older, less buoyant, mafic continental crust. There was also a change in the global oxygen isotope ratios in zircons indicating incorporation of eroded sediments and, therefore, the presence of exposed landmasses (Valley et al. 2005;Spencer et al. 2014). Figure 2 compares the evolution of crustal growth and thickness with factors such as lithospheric stabilisation, change in style of metamorphism, and dyke swarm frequency. Korenaga (2018) has compiled a list of published models for continental growth through Earth's history, which underlines the wide range of estimates for the initiation of plate tectonics, from the Hadean to the Archean. Continental landmass is an important source of phosphorus, one of the rate-limiting nutrients for biomass development.
On the early Earth, relatively low weathering rates of exposed land masses (because of their low relief) led to a relatively low influx of P in the form of apatite (Hao et al. 2020). New experimental and analytical work suggests that phosphate (HPO₄²⁻) in the form of apatite (insoluble) can be reduced to phosphite (HPO₃²⁻) by concurrent oxidation of Fe²⁺. Phosphite is much more soluble and therefore would have been available for biomass development.

Evidence for Interior Processes and Tectonics, the Results of Modelling Studies
Recently, tectono-magmatic processes on pre-Phanerozoic Earth have been the subject of growing numerical geodynamic modelling efforts (e.g. Gerya 2022, and references therein). The resulting holistic modelling- and observation-based view of the global Precambrian tectono-magmatic evolution that has emerged (Gerya 2014;Rey et al. 2014;Bercovici and Ricard 2014;Rozel et al. 2017;Sobolev and Brown 2019;Gerya 2019;Hawkesworth et al. 2020;Gerya 2022) is briefly summarized below. As envisaged in these models, Hadean-Archean plutonic squishy-lid/plume-lid/lid-and-plate tectonics before about 3 Ga were characterised by mantle potential temperatures 250-200 K higher than present day, which resulted in the widespread development of mantle-derived magmatism and rheologically weak crust (Richter 1985;Gerya 2014;Rozel et al. 2017;Hawkesworth et al. 2020, see Figs. 1 and 2). Models suggest that the global tectono-magmatic style was dominated by plume- and drip-induced tectono-magmatic processes under conditions of an internally deformable (squishy, non-stagnant, non-rigid) lithospheric lid that is often compared to conditions on present-day Venus (e.g. Van Kranendonk 2010;Gerya et al. 2015b;Rozel et al. 2017;Harris and Bédard 2014;Hansen 2018). In this hypothesised pre-plate tectonics regime, both proto-oceanic and proto-continental lithospheres were formed by a combination of several tectono-magmatic differentiation processes (e.g. Sizova et al. 2015;Capitanio et al. 2020). Lid evolution was driven by episodic tectono-magmatic activity (e.g. Moore and Webb 2013;Johnson et al. 2014;Piccolo et al. 2019, 2020) controlling crustal and lithospheric growth and removal with a periodicity of ∼ 100 Myr (Sizova et al. 2015;Fischer and Gerya 2016a), which is comparable to the geological-geochemical record from some major Archean greenstone belts, e.g. East Pilbara in Western Australia and Kaapvaal in South Africa (cf. discussions in Fischer and Gerya 2016a, and references therein). Thermal regimes of crustal reworking produced by this non-plate-tectonic environment are also broadly consistent with the metamorphic record (cf. discussions in Capitanio et al. 2019, and references therein). (Ultra)slow rifting and oceanic spreading with intense decompression melting and thick mafic crust were capable of developing in the absence of subduction (e.g. Sizova et al. 2015;Capitanio et al. 2019, 2020). Due to the elevated mantle potential temperature (e.g. Herzberg et al. 2010), (ultra)slow spreading is associated with intense mantle decompression melting leading to thick mafic crust formation (e.g. Sizova et al. 2015) and thereby to high heat fluxes at ridges. In addition, heat fluxes were also higher in the continental crust, which was much hotter than at present day due to widespread intrusions of mantle-derived magma (e.g. Sizova et al. 2015;Rozel et al. 2017;Piccolo et al. 2019).
Importantly, the modern-style global mosaic of rigid plates separated by narrow, rheologically weak plate boundaries did not exist in this pre-plate tectonics period (Bercovici and Ricard 2014). Voluminous melting of the upper mantle caused the formation of both cold lithospheric and hot sub-lithospheric, highly depleted, proto-cratonic mantles with lowered density and increased viscosity (e.g. Sizova et al. 2015;Capitanio et al. 2020;Perchuk et al. 2020, 2021). Note that this scenario is in direct contrast to the scenario proposed by Korenaga (2021), in which the Hadean was characterised by a vigorous plate tectonic regime and recycling of the earlier, thinner crust. This regime then slowed down during the Archean as a result of increasing mantle temperatures and therefore thicker crust that would have been more difficult to subduct. Subsequently, during the period of protracted Archean-Proterozoic transitional tectonics between about 3 Ga and 0.75 Ga, notable secular cooling of the mantle potential temperature occurred (to 200-100 K above present day). As a result, squishy-lid/plume-lid/lid-and-plate tectonics may have gradually evolved towards the modern plate tectonics regime by combining elements of these two contrasting global styles in both space and time (e.g. Fischer and Gerya 2016b;Chowdhury et al. 2017, 2020;Sobolev and Brown 2019;Perchuk et al. 2018, 2019, 2020). The transitional tectonic regime was controlled by gradual stabilization of rheologically strong continental and oceanic plate interiors (e.g. Sizova et al. 2010;Fischer and Gerya 2016a). Plume-induced subduction was likely common in the beginning, and triggered the onset of this transitional tectonic regime (Gerya et al. 2015a). Due to the hot mantle temperature and weak lithospheric plates subjected to bending-induced segmentation near trenches, shallow slab break-off would have been very frequent, causing intermittent rather than continued subduction (e.g. van Hunen and van den Berg 2008;Perchuk et al. 2019, 2020;Gerya et al. 2021). Elements of squishy-lid/plume-lid/lid-and-plate tectonics were also locally present and controlled continued development of granite-greenstone belts in (proto)continental domains (Fischer and Gerya 2016b). As noted above, different elements of modern plate tectonics likely emerged at different geological times, and oceanic subduction likely became widespread earlier than modern-style (cold) continental collision (e.g. Sizova et al. 2010, 2014;Perchuk et al. 2018). Delamination of the mantle lithosphere in long-lived accretionary orogens controlled gradual changes of continental crust composition from mafic to more felsic components, with related rising of the continents, due to efficient recycling of lower continental, mafic crust and tectono-magmatic reworking and thickening of more felsic upper continental crust (Chowdhury et al. 2017;Perchuk et al. 2018). The intermittent subduction was likely initially inefficient in creating large volumes of silicic continental crust and, associated with massive decompression melting of the mantle, resulted in the formation of oceanic plateau basalts (Perchuk et al. 2019). The presence of low-density, highly depleted, hot, ductile mantle under oceanic plates contributed to the formation of chemically layered cratonic keels through a viscous emplacement mechanism driven by oceanic subduction (Perchuk et al. 2020).
This peculiar mechanism of cratonic growth deactivated after about 2 Ga due to a decrease in mantle temperature (Perchuk et al. 2020). Finally, the establishment of modern plate tectonics after about 0.75 Ga followed cooling of mantle potential temperatures to less than 100 K above present-day values. This transition was attained gradually by a combination of four interrelated factors (Bercovici and Ricard 2014;Gerya 2014;Gerya et al. 2015b;Sobolev and Brown 2019;Gerya et al. 2021): (1) cooling and strengthening of the oceanic lithosphere that stabilized continued long-lived subduction, (2) emergence of a global mosaic of rigid plates divided by strongly localized, long-lived, rheologically weak boundaries, (3) stabilisation and cooling of thick, rheologically strong continental lithospheres and the rise of the continents above sea level, and (4) the growing intensity of surface erosion providing rheologically weak sediments deposited in the oceans that increasingly lubricated subduction in trenches. The transition to modern plate tectonics followed a long period of reduced tectono-magmatic activity - the boring billion, 1.7 to 0.75 Ga (Sobolev and Brown 2019).

Establishment of the Conditions for the Emergence of Life
Understanding the internal, dynamic processes of the early Earth is certainly essential for appreciating the building of habitable conditions on a global scale. However, the emergence of life and its early evolution were events that occurred on local scales, although perhaps combining the results of different prebiotic reactions occurring in different microenvironments (Stüeken et al. 2013). In this section we will review our present understanding of the environmental conditions reigning on the early Earth that were of immediate importance for the emergence of life. The primary requirement for establishing an environment conducive to the emergence of life is the presence of liquid water. We noted above various proxies indicating liquid water on the Hadean-Eoarchean Earth. One of the main constraints for liquid water at the surface is the composition and partial pressure of the atmosphere (Table 1 in Catling and Zahnle 2020, and references therein). After the Moon-forming impact about 4.5 Ga (e.g. Barboni et al. 2017) that effectively vaporised the surface of the Earth as well as the impactor, the Si-rich vapor recondensed and a thick CO2-plus-water greenhouse atmosphere formed (Zahnle et al. 2015;Sleep et al. 2014;Sossi et al. 2020). Removal of much of the CO2 through crustal recycling during the Hadean would have resulted in an atmosphere containing approximately 1 bar of CO2 and temperatures permitting oceans to form (at ∼ 500 K, Zahnle et al. 2015;Sleep et al. 2014). In contrast to Sleep et al. (2014), Catling and Zahnle (2020) conclude that the early atmosphere could not have been very thick and that the low pressure was compensated by the presence of greenhouse gases (Fig. 3). These interpretations are based on geochemical investigations of nitrogen contained in fluid inclusions in quartz crystals of Paleoarchean age (Marty et al. 2013;Avice et al. 2018) and on physical phenomena, such as the sizes of gas bubbles in submarine lavas of similar age, indicating hydrostatic pressures of not more than 0.5 bar (Som et al. 2012). To date, we have no hard and fast evidence of when the oceans formed, but we have listed the different proxies in Sect. 2.1, which suggest an early appearance of water (Catling and Zahnle 2020).
Indeed, the Hadean Earth would have been more of an ocean planet, with its primitive continents characterised by submerged plateaus with emergent volcanic edifices and their surrounding land masses, similar to those characteristic of the Paleoarchean, as we will see below.

Early Habitable Environments
There are only a few exposed locations where Paleoarchean terranes are well preserved (namely the ∼ 3.5-3.3 Ga Barberton, South Africa, and Pilbara, Australia, Greenstone Belts), most of which are subaqueous deposits. Indeed, until about 3.2 Ga, very little subaerial material from this time period exists. There are reports of quartzites and quartz-biotite schists from the Isua and Nuvvuagittuq terranes that are interpreted to be the metamorphosed remnants of sandstone and conglomerate protoliths (Bolhar et al. 2005;Cates and Mojzsis 2007;O'Neil et al. 2011), respectively, as well as some horizons of pebble conglomerates and sands attesting to deposition in a terrestrial setting in the 3.48 Ga Hooggenoeg Formation. Subaerial deposits are far more common in the younger, < 3.2 Ga Moodies Group in the Barberton Greenstone Belt (Lowe and Byerly 1999b,a;Heubeck 2009;Hofmann et al. 1999), while subaerial spring deposits associated with a caldera have been described in the 3.48 Ga Dresser Formation in the Pilbara. All the preserved subaqueous sedimentary deposits in the Barberton and Pilbara Greenstone Belts formed at relatively shallow water depths in depositional basins on top of the plateau-like protocontinents (i.e. at water depths ranging from littoral to below wave base, which could have been some tens to a few hundreds of metres) (Lowe and Byerly 1999b). Nijman et al. (2017) compared the Paleoarchean depositional basins to collapse basins on Venus or Mars, forming on softened crust atop mantle plumes, although it has been argued that the early Earth's crust was not thick enough to support such a tectonic situation (comment by an anonymous reviewer). Nevertheless, the thickness of the Archean Earth's crust is modelled to have been greater than that of the present day, owing to hotter mantle temperatures and magmas (Hawkesworth et al. 2020). The group of Van Kranendonk proposes an alternative, caldera-like scenario for at least some of the shallow basins. Although we have geochemical evidence for the existence of an open ocean, via a positive Eu anomaly reflecting a global, background hydrothermal signature (Jacobsen and Pimentel-Klose 1988;Hofmann and Wilson 2007;Hofmann and Harris 2008;Hickman-Lewis et al. 2020b), there is no morphological preservation of deep oceanic crust, which was probably removed (together with much of the early protocontinental crust) by a combination of tectonic overturn and possibly the high rate of impacts on the Hadean Earth (Melosh and Vickery 1989;Abramov et al. 2013;Kemp et al. 2010;Kamber 2015;Griffin et al. 2014;Maher and Stevenson 1988). The early basins and emergent landmasses likely hosted a variety of habitable environments, including subaqueous, littoral (i.e. tidal, therefore partially subaerial), subaerial and hydrothermal settings (note, however, that hydrothermal settings were ubiquitous). These sedimentary environmental settings are attested by sedimentary structures, pillow lavas and geochemical signatures. Various proxies are used to infer the environmental conditions. Estimates of water temperatures on the early Earth from oxygen, silicon, and hydrogen isotopic signatures preserved in chert sediments are wide-ranging, from a cool 26 °C (Hren et al.
2009;Blake et al. 2010) to ∼ 50 °C and up to ∼ 70 °C (Robert and Chaussidon 2006;Van den Boorn et al. 2010;Marin-Carbonne et al. 2012;Tartèse et al. 2017). The latter studies especially noted the strong influence of the early Earth's abundant hydrothermal activity on the temperature signatures, as evidenced also by the aforementioned REE signatures (positive Eu and Y anomalies, and Y/Ho ratios; Hofmann and Harris 2008;Hickman-Lewis et al. 2020b). It should be borne in mind that the sediments analysed were formed at the interface between a relatively shallow column of water and underlying warm or hot rock, and thus could easily have been heated; this may not have been the case in the "deep" ocean, and we do not know how deep the Earth's oceans were outside the shallow plateau areas. However, in the scenario in which there was not much exposed continental landmass, the average ocean depths would have been about 2 km. The pH of the early oceans would have been variable, with alkaline conditions enhanced by aqueous alteration of the predominantly ultramafic and mafic crust (Kempe and Degens 1985;García-Ruiz et al. 2020), while more acidic conditions were the consequence of boiling and hydrogen-rich hydrothermal fluids as well as the CO2-rich atmosphere (Morse and Mackenzie 1998;Catling and Zahnle 2020). Intermediate values were estimated by Friend et al. (2008), who interpreted circum-neutral pH from geochemical analyses of Eoarchean rocks from West Greenland. On a local scale, variations in pH could have been readily maintained around hydrothermal vents, where exiting fluids of a certain pH mix with seawater of different pH, or where the pH is changed by interaction with adjacent rock or sediment materials; for example, acidic fluids flowing through mafic or ultramafic rocks and sediments become initially alkaline before returning to a slightly acidic pH if that is the ambient condition (Westall et al. 2018). Estimations of salinity for the early oceans vary, but fluid inclusion studies suggest that it ranged from present-day values to about double these values (see also Knauth 2011;Catling and Zahnle 2020). These estimations will be relevant for the shallow water basins on top of the submerged protocontinents, but environmental conditions in the open ocean may have differed. While all environmental parameters indicate an anaerobic early Earth, extremely small amounts of oxygen would have formed locally by EUV photodissociation of water vapour in the atmosphere and at the surface of the seawater (Kasting et al. 1979). Oxygenated species could also have resulted from dissociation of boiling, pressurised hydrothermal fluids as they exited vents in shallow waters (pers. comm. C. Ramboz, 2009). Note also that the early oceans were more enriched in the transition metals essential for Earth-like life (Fe, V, Ni, As and Co) than today, owing to leaching of the ultramafic and mafic rocks characteristic of the early volcanic crust (Hickman-Lewis et al. 2020a). Indeed, the early oceans could be considered iron-rich environments.

Scenarios for the Emergence of Life
There is strong evidence for diversified life forms comprising chemotrophic and phototrophic microorganisms already in the Paleoarchean (3.6 to 3.2 Ga), based on morphological structures, as well as geochemical and organic biosignatures (Hofmann et al. 1999;Hassenkam et al. 2017;Djokic et al. 2017;Hickman-Lewis and Westall 2021). This suggests that life must have emerged during the Hadean.
Or, if life appeared very rapidly (and we have no idea how long it took for life to emerge), then at the latest in the very early Eoarchean. Microbial fossils have been interpreted from the 4.28-3.8 Ga Nuvvuagittuq rocks of Canada (Dodd et al. 2017;Papineau et al. 2022), where jaspilite deposits probably represent hydrothermal chemical sediments. Relatively large filaments (about 16.5 µm in diameter and up to thousands of µm in length) were hypothesised to be of microbial origin. Associated multiple sulfur isotopes are consistent with a microbial signature. However, Greer et al. (2020) and Lan et al. (2022) suggested that these and other Fe-rich microbial filaments in the highly metamorphosed Nuvvuagittuq rocks are abiotic mineral features. Microbial life is strongly associated with hydrothermal deposits in later Paleoarchean sediments (3.33 Ga; Westall et al. 2015;Hickman-Lewis et al. 2020b), and confirmation of such traces in more ancient deposits is actively being sought. There are many scenarios suggested for the emergence of life on Earth, including hydrothermal environments undersea (Baross and Hoffman 1985;Russell and Hall 1997;Martin and Russell 2003) and on land (Damer and Deamer 2020;Van Kranendonk et al. 2021); associated with impact craters (Sasselov et al. 2020); pumice rafts (Brasier et al. 2011); deep-seated faults (Schreiber et al. 2012); and mixing of chemical precursors produced in combinations of these and other environments (Stüeken et al. 2013). Each of the scenarios has relative merits and some disadvantages, as reviewed by Westall et al. (2018). We will briefly summarise the different scenarios below. An important point in addressing the scenarios for the origin of life is that the environmental requirements for this are not necessarily the same as those for flourishing, more evolved life forms. This will become evident later in this chapter during the discussion of possible life forms in the clouds of Venus today. For life as we know it, based on organic carbon molecules and liquid water, the basic ingredients include the six essential elements, C, H, O, N, P, S, as well as transition metals (especially Fe), liquid water, an energy source (e.g. chemical, photonic, heat), and a suitable geological context. According to our current understanding of prebiotic chemistry, life as we know it could not emerge in an environment with free oxygen; thus anaerobic conditions are also important. According to some researchers (Pross and Pascal 2013), the initial energy for pushing prebiotic reactions past the required activation level needs to be very high and can only be provided by UV radiation. Conversely, the complex compounds necessary for biological functions, such as peptides, information-transferring molecules (RNA, DNA), or the lipids of cell membranes (Kminek and Bada 2006;Reisz et al. 2014), would rapidly break down under UV radiation. Others (Adam 2007;Adam et al. 2018) have hypothesized that beach sands enriched in uranium could have provided the radiation necessary for activating prebiotic processes, although the existence of uranium placer deposits during the Hadean is highly unlikely, owing to the small quantities of uranium in the early terrestrial rocks and the limited availability of oxygenated environments for leaching and concentrating it out of the rocks. If correct, the necessity of UV radiation for early prebiotic reactions would place serious constraints on where life could have emerged, i.e.
life could only have emerged where there was exposed land and not, for example, on an ice-covered ocean planet. Another important condition for the emergence of life is the presence of natural gradients: in temperature, pH, ionic concentrations, water and osmotic potential, and energy. Gradients drive the diffusion of essential components for prebiotic chemistry and primitive metabolisms, via hydrothermal fluids, seawater, pore waters (in porous materials), river water, or (impact) lakes. As one commonly hypothesized requirement for life is compartmentalization, chemical constituents would have needed to be transferred into and out of the micro-scale compartments (e.g. pores in rocks and minerals, naturally forming gels, vesicles, or micelles) in which prebiotic reactions would have taken place. In terms of the emergence of life, three key factors are critical: (1) the concentration of the various molecular building blocks of life, (2) their stabilisation and structural conformation, and (3) chemical evolution (as summarised from previous works by Westall et al. 2018). In a manner that is a priori counter-intuitive for non-prebiotic chemists, there are stages during prebiotic reactions when water is a hindrance. This is when it is necessary to concentrate the ingredients of life; Darwin's dilute, warm little pond will not work. Concentration allows basic prebiotic molecules to interact sufficiently with each other to create additional, more complex conformations. For example, Russell and Hall (1997), Russell (2021), and Martin et al. (2008) view the reactive, mineral-rich walls of pores in deep-sea hydrothermal vents as a likely location for the concentration and condensation of organic molecules. Porous silica gel was suggested by Westall et al. (2018) and Dass et al. (2018) because of its ubiquity (Martin and Russell 2003) in the early terrestrial oceans and its association with hydrothermal environments. Organic molecules chelate to the surface of the pores in the gel, which is permeable, letting through nutrients and molecules and enabling gradients. Other researchers prefer wetting-drying cycles that imply exposure of the organic molecules to the early atmosphere, either in a beach environment (Deamer 1997) or on land (Damer and Deamer 2020;Marshall 2020;Sasselov et al. 2020). Deep-sea hydrothermal vents were suggested as a suitable location for the origin of life by Baross and Hoffman (1985). This idea was further developed in great detail by Russell and various colleagues since the mid-1990s. Russell et al. (2010) noted the particular importance of alkaline vents for the emergence of life (Fig. 4). These were environments in which metal-rich fluids and small organic molecules formed during serpentinising reactions in the crust, including hydrogen, methane, minor formate, and ammonia, as well as calcium and traces of acetate, molybdenum and tungsten. Chemiosmotic energy would have been provided by proton and redox gradients across the porous vent walls. According to this hypothesis, prebiotic chemical reactions in the porous, reactive mineral constructs of the vents would concentrate molecules, helping them to form new structures and combinations. Eventually, all the constituents of life, except cell membranes, would be found within the pores, thus forming the first living entities (i.e. non-membrane-bound cells). Finally, membranes would form around the edges of the pores to enclose the proteins and RNA molecules, allowing the protocells to be expelled into the ocean.
In this scenario, UV radiation is not necessary to surmount the activation energy barrier (cf. Damer and Deamer 2020). In support of the deep-sea hydrothermal vent scenario, Ménez et al. (2018) note that serpentinite-bearing hydrothermal environments, requiring exhumation of mantle rocks to the surface, are common at (ultra)slow-spreading mid-ocean ridges and/or oceanic rifts. Such tectonic settings require lithospheric extension and were likely present since very early stages of lithospheric evolution and crustal differentiation (e.g. Sizova et al. 2015). Models show that their existence does not require global plate tectonics and/or subduction, and that they can be associated with several other styles of mantle convection and surface dynamics, such as ridge-only convection or a plutonic squishy lid, which might be common styles for young Venus/Earth-sized terrestrial planets (Rozel et al. 2017;Sizova et al. 2015;Lourenço et al. 2018). In a variant on the deep-sea hydrothermal scenario, and based on their studies of well-preserved hydrothermal sediments from the Paleoarchean, Westall et al. (2018) suggested that volcanic sediments in the vicinity of hydrothermal vents may have hosted prebiotic reactions leading to the emergence of life. The scenario is very similar in principle to that of Russell (a porous medium comprised of reactive minerals), with the exception of the inclusion of porous silica gel, as noted above, a ubiquitous by-product of the early silica-rich seawater. The presence of these sediments around hydrothermal effluent extends the available environments for the emergence of life. Moreover, such environments existed at all water depths, from tidally influenced littoral environments down to the deep sea. In this case, if UV radiation were an essential factor in prebiotic reactions, shallow water systems in the tidal zone would offer exposure to UV radiation, as well as protection of the more complex molecules under water and within the subaqueous sediments. Another popular scenario suggests subaerial hydrothermal environments for the emergence of life. This is largely because of the findings that (1) UV radiation can contribute to the neoformation of prebiotic molecules (Pross and Pascal 2013), (2) hydrophobic conditions are necessary at certain stages of prebiotic reactions to concentrate molecules (Damer and Deamer 2020;Deamer 1997;Marshall 2020;Sasselov et al. 2020), and (3) subaerial vents have been interpreted in ancient Paleoarchean terranes (Van Kranendonk et al.). Hydrothermal vents on land (Fig. 5) would have provided a suitable environment for prebiotic processes, as they are exposed to UV radiation, when necessary, as well as protected by water. The porous sediments and vent walls would have served the same function as hypothesized in the deep-sea vent scenario, acting as mini-reactors localizing and supporting the prebiotic chemical reactions. Fully formed microbial cells would have been transported to the oceans by rivers (UV avoidance being a necessity in this scenario). One could also envisage the transport of microbes attached to each other or to dust particles in small clumps that can travel significant distances through the air. In this scenario, despite exposure to UV and desiccation, cells in the interior of the clump will remain shielded and wet for a certain time, even though cells on the surface of the clump die (Madronich et al. 2018). This scenario works on the Earth today, but the higher UV flux on the early Earth may have been a considerable constraint.
The common denominator in the most popular of the above origin-of-life scenarios is hydrothermal environments, either undersea or on land. Here, the contact of hot water with reactive mineral surfaces would have provided the chemical energy for prebiotic reactions. The early volcanic rocks were more ultramafic than today, comprising predominantly iron- and magnesium-rich basalts and komatiites. There would have been the possibility of exposure to UV radiation in beach environments, shallow water vents, or subaerial vents at significant moments (if it was indeed essential). The mineral surfaces, perhaps assisted by the presence of ubiquitous silica gel, would have facilitated increasing molecular concentration, conformation and complexity in mineral and silica pores. The necessary gradients would have been provided by the through-flow of fluids and nutrients from the vent effluent to the immediately surrounding environment (including sediments), and protocells would have formed. We have considered here in detail only a few of the wide variety of environments suggested for the emergence of life, but we noted above some of the other hypotheses. However, we may never know exactly where life originated. It is clear that there were numerous possibilities for prebiotic chemistry in different scenarios. It is possible that important components of living cells were formed in different types of environments and eventually concentrated together in one location, although certain prebiotic chemists do not endorse this idea, considering that the processes leading to life needed to have occurred in one location (N. Lane, pers. comm., 2022). It is also possible that life emerged in more than one place during the course of the Hadean, possibly even under different scenarios at different times and places. Large impacts or other more localized environmental changes could have wiped out life in some regions while in others it continued to flourish. Given the biochemical, genetic, and other evidence we have today that all modern life shares a common ancestor (Weiss et al. 2018, and references therein), the early world ocean must eventually have been dominated by one form of life, presumably the metabolically most effective, using the molecular machinery that we know today.

Scenarios for the Emergence of Life and the Problem of Prebiotic Chemistry in the Laboratory
The origin of life has traditionally been addressed through experiments in prebiotic chemistry under carefully controlled laboratory conditions. This has led to a significant amount of confusion in the origins-of-life community, because the realities of the early terrestrial environment were, and still are, rarely taken into account. A prime example of this situation is the stabilisation of ribose (a sugar), one of the essential ingredients of RNA. The element boron has been suggested to have been critical to the stabilisation of the sugar (Benner et al. 2010;Scorei 2012). Boron is a constituent of tourmaline, a mineral present in the sediments of Eoarchean terranes, and was certainly present on the Hadean Earth (Grew et al. 2011), but some researchers have suggested its presence as boron salts on exposed landmasses, thereby inferring that life could have emerged in subaerial rivers (Benner et al. 2010). Another hypothesised solution for the stabilisation of ribose is exposure to ice. Szostak (2016) invokes seasonal changes in a subaerial hydrothermal setting (similar to Yellowstone), whereby temperature changes could have induced a temporarily icy setting.
Trinks et al. (2005) suggest sea ice as an important setting for prebiotic chemistry in terms of the concentration of organic molecules and the provision of optimal conditions for the early replication of nucleic acids. Support for this scenario was based on models suggesting that the early Earth had a cold start (Catling and Zahnle 2020). These models are predicated on the lower luminosity of the Sun and the necessity of either a high partial pressure of CO2 or a large amount of greenhouse gases, such as CH4 (Sagan and Mullen 1972). Is such a scenario realistic? While there is no evidence of glacial conditions during the Archean (beaches were bathed by tidal waters and hosted evaporite mineral precipitation), heat flow from the mantle during the Hadean is modelled to have been lower than during the Archean (Ruiz 2017;Korenaga 2018). Radiogenic heating of the mantle continued up to about 3.0-2.5 Ga, when the mantle temperature reached a maximum (1500-1600 °C compared with 1350 °C today) and decreased thereafter (Herzberg et al. 2010), although Foley et al. (2014) propose mantle temperatures of more than 2000 °C for post-magma-ocean times. The aforementioned examples underline two important points regarding the origin of life on Earth. Firstly, interpretations of early habitable environmental conditions are viewed through the distorted prism of the relatively rare occurrences of metamorphosed and altered rocks from the Eoarchean. Secondly, experiments in prebiotic chemistry often do not take into account realistic early Earth scenarios. There is also the point that, while life on Earth may have emerged in one particular environment (or several), this does not mean that life on another terrestrial planet, such as Venus, could not have originated in an alternative scenario.

Evidence for Early Life
After life on Earth became established, the variety of environments on the early Earth seems to have provided a plethora of habitats, each characterised by different and, likely, time-variable conditions. Given the anaerobic conditions prevailing on the early Earth and the negative effects of oxygen on modelled prebiotic chemistry, early terrestrial organisms had to have inhabited anaerobic environments. The earliest ecosystems would have supported chemotrophs, i.e. microorganisms obtaining their energy from the oxidation of organic substances, such as methyl compounds (chemoorganotrophs), or of inorganic substances, such as ferrous iron, hydrogen sulfide, elemental sulfur, thiosulfate, or ammonia (chemolithotrophs). These were the key substrates or electron donors that were immediately available to support life on the early Earth. Heterotrophs, organisms that depend on organic carbon compounds for their nutrient supply (by consuming other organisms, their debris, or their waste products), may have emerged as early as the first autotrophs. Organisms that developed the ability to use sunlight as a more powerful and effective source of energy, i.e. phototrophs, emerged after the chemotrophs. Initially (3.8 to 3.5 Ga), phototrophs may have used hydrogen and/or sulfur as a reductant. A decrease in the availability of these compounds in Earth's atmosphere, possibly related to the global production of methane by the early biosphere (methanogenesis), may have led to the development of ferrous-iron-based phototrophy at or before 3.0 Ga (Olson 2006).
The efficiency of even these early forms of photosynthesis, combined with the abundant availability of sunlight, appears to have conferred an immense metabolic advantage on the phototrophs; they were already relatively widespread by about 3.5 Ga (Noffke et al. 2013;Hickman-Lewis et al. 2018a), which indicates their relatively rapid evolution and spread (i.e. biomass) in environments with sufficient insolation. Liu et al. (2020) consider this an additional point in favour of an origin of life during the Hadean and not after the (now) controversial Late Heavy Bombardment (LHB) of 3.9 Ga. Given the implication for the timing of the origin of life, it is important to understand why the LHB is currently out of favor. The hypothesised LHB was originally tied to dating of the returned lunar samples by Turner and Cadogan (1975) and Tera et al. (1973), which showed a preponderance of ca. 3.96 Ga ages that they hypothesised could have been produced by a "lunar cataclysm". Modelling has shown that such an event could have been created by instability in the orbits of the giant planets, particularly Jupiter (Walsh et al. 2011;Deienno et al. 2016). Recent analysis of lunar impact glass ages (Zellner 2017) demonstrates the unlikeliness of a late bombardment peak. Observations of a binary Jupiter Trojan (Nesvorný 2018), coupled with studies based on cratering statistics, geochronological databases tied to closure temperature, and resolved ages using orbital dynamics and thermal modeling (Mojzsis et al. 2019;Clement et al. 2019), show that any giant planet instability would have occurred before about 4.45 Ga. We noted above purported microbial fossils associated with hydrothermal activity from the 3.8 Ga Nuvvuagittuq Supracrustal terrane (Dodd et al. 2017;Papineau et al. 2022) that have since been reinterpreted as being of abiotic origin (McMahon 2019;Greer et al. 2020;Lan et al. 2022). Furthermore, while organic molecules in garnets from the 3.7 Ga Isua Greenstone Belt may be remnants of microbial life (Hassenkam et al. 2017), purported microbial stromatolites described from the same rocks (Nutman et al. 2016, 2019) are apparently of abiotic origin (Allwood et al. 2018;Zawaski et al. 2020). Nevertheless, by 3.5 Ga, the Barberton and Pilbara Greenstone Belts, the two main locations with well-preserved crustal rocks, document abundant evidence of microbial life. Most readily visible are small, domical stromatolites several cm in height occurring in shallow water environments in the Pilbara (Hofmann et al. 1999;Allwood et al. 2006), which represent the macroscopic evidence of phototrophic microbial mat formation. However, most phototrophic biofilms and mats from the Paleoarchean in Barberton and the Pilbara are represented by tabular mats (Byerly et al. 1986;Westall et al. 2011a, 2006a;Noffke et al. 2013;Hickman-Lewis et al. 2018b). Evidently, these phototrophic biosignatures occur in very shallow water environments, the organisms relying on access to sunlight to obtain their energy. The shallow water environment, together with warm seawater, would have led to relatively high salt concentrations. Westall et al. (2006a) describe microcrystalline, silica-pseudomorphed evaporite mineral sequences in 3.33 Ga coastal sediments in the Barberton Greenstone Belt, South Africa, while Lowe and Byerly (1999a) document a horizon of nahcolite crystals in 3.42 Ga sediments from Barberton (Knauth 2011).
Thus, given the abundant evidence for microbial life in these shallow water environments, it must have been at least partially halophilic (Westall et al. 2015;Hickman-Lewis and Westall 2021). Moreover, the volcanic sedimentary environment, with its associated hydrothermal activity, hosted chemotrophic life, including chemolithotrophs as well as chemoorganotrophs, the latter in the direct vicinity of hydrothermal vents (Westall et al. 2006b, 2011b;Hickman-Lewis et al. 2020a). Colonies inhabiting hydrothermal environments would have comprised thermophiles and probably hyperthermophiles. Hickman-Lewis and Westall (2021) review the widespread distribution of early life in the Barberton Greenstone Belt through the Archean (3.5-2.6 Ga), showing how its nature and distribution throughout this early period of Earth's history were controlled both by the gradual evolution of the environment and by the rise of oxygenic phototrophs at about 3.0 Ga. Figure 6 illustrates a variety of early biogenic remains from the Paleoarchean Barberton Greenstone Belt (South Africa) and Strelley Pool Chert (Pilbara Greenstone Belt, Australia) sediments, including macroscopic stromatolites from the 3.43 Ga Strelley Pool (Fig. 6A, B) (e.g. Hofmann et al. 1999;Allwood et al. 2006) and tufted tabular stromatolites.

Tectonics
The high surface temperatures that prevail on present-day Venus may strongly affect the interior and the surface (Phillips et al. 2001;Noack et al. 2012;Gillmann and Tackley 2014). Extensive outgassing and a greenhouse effect such as the one observed on Venus directly affect surface mobilization (horizontal velocity and inclusion of the lithosphere in the convective cell). Several studies that coupled 1D, 2D and 3D interior dynamics models with atmospheric evolution models have investigated the effects of an evolving atmosphere formed through mantle degassing of H2O and CO2 (Noack et al. 2012;Gillmann and Tackley 2014). However, the feedback between the atmosphere and the mantle can be quite complex. Some models, in which digitized atmospheric temperature values from a non-grey (wavelength-dependent) radiative-convective atmospheric model by Bullock and Grinspoon (2001) were used, suggest that high surface temperatures lead to surface mobilization (Noack et al. 2012). Higher surface temperatures translate into lower surface viscosity, reducing the viscosity contrast between the surface and the mantle and allowing the surface layer to be mobilized by mantle convection. On the other hand, models that consider plastic yielding of the lithosphere and couple the interior evolution to a grey atmosphere (in which the thermal opacity takes a single value, independent of wavelength) find that a high surface temperature stops surface recycling and promotes a stagnant-lid regime (Gillmann and Tackley 2014). Instead, a lower surface temperature will lead to higher viscosities and higher convective stresses that, in turn, may promote plastic yielding and surface mobilization (Lenardic et al. 2008). Venus may have experienced lower surface temperatures during its early thermal history, due to the efficient removal of water by escape processes (Gillmann and Tackley 2014). During this time, more moderate conditions may have existed at its surface and allowed the sequestration of atmospheric CO2, preventing its accumulation in the atmosphere. Global circulation models even suggest that a temperate Venus could have been maintained under habitable surface conditions until as recently as 0.7 Ga (Way et al. 2016).
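The surface-temperature/viscosity coupling invoked above can be made concrete with a standard Arrhenius-type temperature-dependent viscosity law. The sketch below is a generic illustration using assumed activation energy and reference values, not the parameterisation of any of the cited models.

```python
# Generic temperature-dependent viscosity law often used in mantle convection models:
# eta(T) = eta_ref * exp( E/(R*T) - E/(R*T_ref) )
# E, ETA_REF and T_REF are assumed, order-of-magnitude values for illustration only.

import math

R = 8.314        # J/(mol K), gas constant
E = 300e3        # J/mol, assumed activation energy for mantle creep
ETA_REF = 1e21   # Pa s, assumed reference viscosity at T_REF
T_REF = 1600.0   # K, assumed reference (interior) temperature

def viscosity(t_kelvin: float) -> float:
    return ETA_REF * math.exp(E / (R * t_kelvin) - E / (R * T_REF))

for t_surface in (300.0, 750.0):  # Earth-like vs present-day Venus surface temperature
    contrast = viscosity(t_surface) / viscosity(T_REF)
    print(f"T_surf = {t_surface:.0f} K: surface/interior viscosity contrast ~ {contrast:.1e}")
# The hotter Venus-like surface gives a far smaller viscosity contrast, which is the
# effect some models associate with easier mobilization of the surface layer.
```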
The difference between the results from numerical models is mostly due to the various rheologies and mechanisms considered, and may indicate possible competition between multiple processes on real planets. As such, there may be a sweet spot where the rheology remains stiff enough for the lid to break and convective stress to be transmitted to the lid, but soft enough to allow vigorous convection and prevent the lid from growing too static. Additionally, the specifics of the regime may depend on the history of the planet (for instance Weller et al. 2015;Weller and Kiefer 2020) and in particular on the transition between magma ocean solidification and solid-state mantle convection (see Salvador et al. 2017, 2023; this topic is discussed further in Rolf et al. 2022, this journal). The exact style of resurfacing on Venus is still debated (Rolf et al. 2022, this journal). Different scenarios that have been discussed in various studies (Armann and Tackley 2012;Gillmann and Tackley 2014;Karlsson et al. 2020;Lourenço et al. 2020) could have operated at various times throughout Venus' history: (1) stagnant lid: the surface was continuously renewed by volcanic activity without any kind of surface mobilization (Armann and Tackley 2012;Gillmann and Tackley 2014;Karlsson et al. 2020); (2) episodic lid: periods of plate-tectonic-like surface mobilization alternate with more quiescent periods (Armann and Tackley 2012;Gillmann and Tackley 2014;Uppalapati et al. 2020); (3) plutonic squishy lid (Lourenço et al. 2020): recycling of the lithosphere by eclogitic dripping and delamination, with strong plates separated by hot magmatic intrusions. These scenarios differ in the efficiency of volatile recycling, which, in turn, has important implications for mantle dynamics and thermochemical history. In addition to this, the exact convection regime is probably not static and could have changed throughout the history of Venus (Gillmann and Tackley 2014;Weller et al. 2015;Weller and Kiefer 2020). Understanding the tectonic regime throughout Venus' history is important as it is closely linked to its volatile history. First, outgassing, and thus the atmosphere thickness and bulk composition, are directly governed by mantle dynamics and volatile release by volcanism. More detailed overviews of mantle dynamics and of outgassing processes are given in Rolf et al. (2022) and Gillmann et al. (2022), respectively. It has therefore been postulated that one could infer the tectonic style of a planet based on its volatile history and atmosphere characteristics. In particular, Venus' 40 Ar measurements have been used to suggest that the planet only outgassed 10 to 34% of its total 40 Ar inventory (Kaula 1999;O'Rourke and Korenaga 2015;Namiki and Solomon 1998;Volkov and Frenkel 1993), compared to Earth's 50%. That would imply that Venus' outgassing was limited during most of its evolution or was only important during its early history (when little 40 Ar had yet been produced). Such measurements therefore argue against an Earth-like tectonic regime for all but the earliest part of Venus' history. One should note that the thick CO 2 atmosphere and large N 2 inventory could point toward strong outgassing at some point in the history of Venus; this question is not yet solved. It has been suggested that such a period may have been very ancient, possibly dating back to the magma ocean phase (Gaillard et al. 2022) or the Late Accretion (Gillmann et al. 2020).
CO 2 could also have been released by different processes depending on surface conditions (Höning et al. 2021). Planetary tectonics also ties into the possibility of a carbonate-silicate cycle and thereby on the potential of long-term habitability. On Earth, the climate is stabilized as CO 2 , outgassed at mid-ocean ridges and other volcanic units, is consumed by silicate weathering processes, precipitated as carbonates on the seafloor, and recycled back into the mantle at subduction zones (Walker et al. 1981;Kasting and Catling 2003). The land fraction is an important parameter in this cycle, since silicate weathering is particularly efficient on land that is emerged over sea-level. Higher rates of seafloor weathering at mid-ocean ridges can partly compensate for a smaller land fraction (e.g., Foley 2015), but the global mean surface temperature would nevertheless be expected to be higher if most of the planet's surface is covered by oceans. In the stagnant lid scenario, recycling of volatiles is the least efficient because the thick static lid is not part of the convection. Water, carbonates (if formed during moderate atmospheric conditions), and sulfates may have never been recycled into the mantle. The observed tessera terrains, that represent around 8% of the crust on Venus, have been suggested to resemble continental crust on the Earth (Gilmore et al. 2015). If so, they may be difficult to form in this scenario, as their formation would require some kind of crustal recycling in the presence of water. However, models (Karlsson et al. 2020) have not yet been able to simulate their behaviour satisfactorily. While delamination of the lower crust may take place in this scenario, if the crust grows thicker than the basalt to eclogite transition depth (Sizova et al. 2015;Fischer and Gerya 2016a, e.g.,), water recycling remains unlikely. On Earth, felsic material can form without water. However, such a mechanism would struggle to produce enough felsic material to account for the total volume of present-day Venus tesserae (Smrekar et al. 2018, and references therein). Future missions may place a constraint on how much of the tessera terrains can actually be considered felsic. Volatile recycling would be efficient during plate tectonic periods in the episodic lid scenario, while in the plutonic squishy lid case, the efficiency would presumably be lower than in the episodic case but still notably higher (Sizova et al. 2015;Fischer and Gerya 2016a, e.g.,) than in the stagnant lid scenario. Volatiles that are introduced back into the mantle would have major consequences for mantle dynamics and subsequent magmatic evolution. Recycled crust will become negatively buoyant when undergoing the phase transition from basalt to eclogite. The recycled crust is rich in incompatible elements, such as heat producing elements and volatiles, which can significantly affect subsequent melting of the mantle. The subducted crustal material will refertilize the mantle and promote partial melting, both by increasing the amount of heat producing elements in the mantle and recycling of volatiles that would locally decrease the melting temperature. In addition to decreasing the local solidus, recycled volatiles will also decrease the viscosity of the mantle material, thus affecting the interior dynamics and the cooling behavior of the mantle, and consequently the subsequent outgassing. 
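To make the stabilizing role of the carbonate-silicate cycle described earlier in this section more concrete, the following Python sketch implements a toy version of the silicate-weathering feedback in the spirit of Walker et al. (1981). All parameter values (the reference weathering rate, the pCO 2 and temperature sensitivities, and the logarithmic greenhouse relation) are illustrative assumptions for this sketch, not values taken from the studies cited here.

    import numpy as np

    # Toy silicate-weathering feedback (illustrative parameters only).
    # Weathering rate W scales with pCO2 and temperature; climate is stable
    # where weathering consumption balances volcanic outgassing of CO2.

    T0, P0 = 288.0, 280e-6      # reference temperature (K) and pCO2 (bar)
    beta, Te = 0.3, 13.7        # assumed pCO2 exponent and e-folding temperature (K)
    dT_per_doubling = 3.0       # assumed greenhouse warming per doubling of pCO2 (K)

    def temperature(pco2):
        """Crude greenhouse relation: T rises logarithmically with pCO2."""
        return T0 + dT_per_doubling * np.log2(pco2 / P0)

    def weathering(pco2, w0=1.0):
        """Weathering flux relative to the reference value w0."""
        T = temperature(pco2)
        return w0 * (pco2 / P0)**beta * np.exp((T - T0) / Te)

    def equilibrium_pco2(outgassing):
        """Find the pCO2 at which weathering consumption equals volcanic supply."""
        grid = np.logspace(-7, 0, 2000)           # candidate pCO2 values (bar)
        return grid[np.argmin(np.abs(weathering(grid) - outgassing))]

    for v in (0.5, 1.0, 2.0, 4.0):                # outgassing relative to the reference
        p = equilibrium_pco2(v)
        print(f"outgassing x{v}: pCO2 ~ {p:.1e} bar, T ~ {temperature(p):.1f} K")

Because the assumed weathering rate increases steeply with both pCO 2 and temperature, the equilibrium pCO 2 in this toy model shifts only modestly even when outgassing changes by factors of a few; a planet without emerged land, or without any crustal recycling, loses this thermostat entirely.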
Constraints on the style of recycling on Venus may be derived from the inferred crustal thickness, and variations in the crustal age and geoid (Kiefer and Hager 1991;Armann and Tackley 2012;King 2018). The episodic lid models and the plutonic squishy lid, with a low reference mantle viscosity and a low eruption rate, seem to produce a crustal thickness that is closer to the inferred crustal thickness of Venus compared to stagnant lid cases (Rolf et al. 2018;Lourenço et al. 2020). Surface age variations indicated by Venus' cratering record may be easier to reconcile with the episodic lid scenario, and the long wavelength of the gravity spectrum can be matched well if the last resurfacing event ended a few hundred Myr ago (Rolf et al. 2018). Whether these observations are consistent with the plutonic squishy lid regime remains to be tested in future models. It should be noted again, however, that the stagnant lid, episodic lid, and plutonic squishy lid scenarios are not mutually exclusive, but may have been active at different times during Venus' history, as seems to have been the case on early Earth. Additionally, they constitute a continuum of behaviours rather than distinct, clear-cut end-members. Finally, local variations are to be expected, and different crust deformation processes could occur at different locations on the surface of Venus at a given time (see Rolf et al. 2022). Thus, recycling of volatiles may have changed significantly during the thermal evolution of Venus, and its present-day state is the result of complex feedback mechanisms between the interior, surface and atmosphere. Future work assessing volatile exchange associated with changes in the tectonic regime throughout Venus' history is necessary in order to advance our knowledge of stable surface water in the past.

Outgassing

Outgassing is an important source of secondary volatiles for the atmosphere of a terrestrial-type planet. It therefore directly affects surface conditions and the surface habitability of a planet. Broadly speaking, three processes can lead to significant outgassing and effects on the atmosphere (including the fluid envelope) in the long term: (i) magma ocean solidification, (ii) collision with impactors, especially large ones at an early stage, and (iii) volcanism. Magma ocean evolution, solidification, outgassing and its consequences on the atmosphere and surface conditions on rocky planets, in particular on Venus, are discussed in detail in Salvador et al. (2023, this journal). After accretion and the capture of a possible primordial hydrogen atmosphere, the solidifying magma ocean is the source of the early volatiles and of a secondary atmosphere. It is generally understood that CO 2 outgasses early and in large quantities, while water should be released near the end of the magma ocean phase, if at all (Salvador et al. 2017). It has been proposed that a freezing magma ocean could retain a large portion of its water (Solomatova and Caracas 2021). Outgassing from magma oceans is an active research topic, and it has been highlighted that the actual species outgassed to form this early secondary atmosphere could heavily depend on the magma ocean's redox state (e.g. Lichtenberg et al. 2021;Gaillard et al. 2022). It has been suggested that this phase could already set the planet on a habitable or uninhabitable evolutionary path, depending on the duration of the magma ocean and the ability of the planet to cool down fast enough to allow liquid water to condense on its surface (Gillmann et al. 2009;Lebrun et al. 2013;Hamano et al.
2013;Salvador et al. 2017;Turbet et al. 2021) before it is lost to space. Large impacts and their consequences are discussed in Gillmann et al. (2022) and Salvador et al. (2023, this journal). The velocity and mass of impactors mean that large amounts of kinetic energy are transferred to the planet as thermal energy during a collisional event. For bodies that are large enough (or fast enough), that is, above a few tens of kilometers in radius, impacts can cause large-scale melting of the crust and the mantle of the planet. They may create magma ponds/seas, and lead to the release of volatiles into the atmosphere (e.g. for Venus, Gillmann et al. 2016). Such events are more frequent and important during early evolution, especially during the accretion phase, and are accompanied by the additional release of volatiles contained in the impactors (Sakuraba et al. 2019;Gillmann et al. 2020;Sakuraba et al. 2021). Volatile release by impactors can substantially modify the composition of the atmosphere and the state of the surface. If the impactor is large enough there could be implications for habitability ranging from the short term (earthquakes, tsunamis, storms, lava ponds/oceans) to the long term (increased greenhouse effect, increase in CO 2 concentrations). Volcanic outgassing is discussed in Gillmann et al. (2022, this journal). Its causes are explained in Rolf et al. (2022, this journal) and include partial melting of the mantle due to local pressure-temperature conditions exceeding the local solidus of the mantle material. Its surface expressions are addressed in Smrekar et al. (2022, this journal) and Herrick et al. (2022, this journal). For Venus, recent volcanic production can be roughly estimated, but with large uncertainties. Observation of the surface and atmosphere can provide some constraints for modelling efforts, but recent volcanic production rates are debated. They could be very low (below ∼ 1 km 3 /yr), in agreement with the minimal effect of volcanism on randomly distributed impact craters (Basilevsky and Head 1997;Schaber et al. 1992); moderately lower than on Earth (∼ 1 km 3 /yr) (Head et al. 1991;Phillips et al. 1992); or similar (∼ 10 km 3 /yr; Fegley and Prinn 1989;Bullock and Grinspoon 2001) to Earth's production rates (e.g. Byrne and Krishnamoorthy 2021). Long-term numerical modelling of mantle dynamics offers a reasonable solution for estimating a range of possible outgassing rates from volcanic production, keeping in mind the lack of hard constraints on values before 1 Ga, or on the state of Venus' mantle. However, little evidence exists for volatile concentrations in Venus' lava, leading to further uncertainties, as volatile output strongly depends on the redox state (oxygen fugacity) of the mantle. In addition, it has been suggested that the high surface pressure in the Venusian atmosphere could suppress water outgassing compared to CO 2 (Gaillard and Scaillet 2014). It is currently debated whether the present-day atmosphere is geologically recent (<1 Ga, as suggested by Way and Del Genio 2020) or a fossil atmosphere. Extrapolating Venus' outgassing rates back in time is subject to even greater uncertainties since the tectonic regime may have changed (discussed in Rolf et al. 2022, this journal). Magmatic outgassing is a consequence of partial melting of hot, uprising mantle material. The shallower the depth to which the mantle material rises, the smaller the lithostatic pressure and therefore the lower the solidus temperature.
A thick, insulating lid on top of the mantle would usually create a barrier to hot, uprising plumes and therefore a relatively small melting region ). In contrast, on planets with plate tectonics, the melting region beneath mid-ocean ridges is extended close to the surface. On Earth, this mechanism causes a basaltic crust production rate of at least ∼ 19 km 3 /yr (Cogné and Humler 2006). An additional effect associated with plate tectonics is the subduction of water. In subduction zones at depths of 100-200 km, subducted hydrous minerals become unstable and release their water (Rüpke et al. 2004;Stern 2011). Since water reduces the solidus temperature of the surrounding rock, partial melt is produced that rises to the surface, which is also accompanied by outgassing. Altogether, the tectonic regime has a tremendous effect on the outgassing rate. If early Venus possessed plate tectonics, its outgassing rate would likely have been much higher than it is today. The nature of outgassed species is also important for surface conditions. It has been highlighted that the mantle redox state (the mantle oxygen fugacity) could greatly affect the speciation in the atmosphere, with oxidised mantles (such as the Earth's) leading to the outgassing of CO 2 and water. On the other hand, a reduced mantle could rather favor CO or H 2 (e.g. Kasting et al. 1993a;Gaillard et al. 2021;Frost and McCammon 2008;Hirschmann 2012). On Venus, not only is CO 2 the major current component of the atmosphere, but it is also responsible in large part for the high surface temperatures. Large rates of CO 2 outgassing can inhibit global glaciation due to this species' role as a greenhouse gas. This is particularly important if a carbonate-silicate cycle is active on the planet, where silicate weathering serves as a sink to atmospheric CO 2 (e.g., Kadoya and Tajika 2014). On the other hand, high rates of CO 2 degassing can enhance the greenhouse effect and ultimately lead to the evaporation of water. Whereas an active carbonate silicate cycle would balance high rates of outgassing to some extent, recycling of CO 2 into the mantle on planets without plate tectonics is rare (e.g., Foley and Smye 2018;Höning et al. 2019b). The solar flux that Venus receives today, and has received in its history, is substantially higher than that the present-day Earth receives, and the threshold towards surface water evaporation is the most relevant bottleneck to Venus' habitability (see also Gillmann et al. 2022, this journal). Small outgassing rates during Venus' early evolution, in combination with an active carbonate-silicate cycle, would increase the likelihood of an early, habitable Venus, while significant early outgassing would go against habitable conditions. On Earth, water is a major component of volcanic outgassing (on the order of 1%). It is possible that Venus is much drier, due to loss during the magma ocean phase and the inability to condense water early on (Gillmann et al. 2009;Hamano et al. 2013), or that high surface pressure stifles water outgassing (Gaillard and Scaillet 2014), but the planet's current state and modelling seem consistent with marginal water outgassing during recent history at least (Gillmann and Tackley 2014). Beyond its availability for a possible liquid layer, water also affects surface conditions by being a strong greenhouse gas and, despite its low abundance in Venus' current atmosphere, is the second highest contributor to high surface temperatures on the planet. 
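As an order-of-magnitude illustration of how the volcanic production rates and volatile contents discussed above translate into outgassing fluxes, the short Python sketch below multiplies an assumed eruption rate by a melt density and a dissolved volatile fraction. The specific numbers (10 km 3 /yr, 2800 kg/m 3 , 1 wt% H 2 O, 0.2 wt% CO 2 , complete degassing) are illustrative assumptions in the range discussed in the text, not measured Venusian values.

    # Order-of-magnitude volcanic outgassing estimate (illustrative values).
    ERUPTION_RATE_KM3_YR = 10.0     # assumed magmatic production rate (km^3/yr)
    MELT_DENSITY = 2800.0           # basaltic melt density (kg/m^3)
    H2O_WT_FRACTION = 0.01          # assumed 1 wt% dissolved water
    CO2_WT_FRACTION = 0.002         # assumed 0.2 wt% dissolved CO2

    magma_mass_per_yr = ERUPTION_RATE_KM3_YR * 1e9 * MELT_DENSITY   # kg/yr
    h2o_per_yr = magma_mass_per_yr * H2O_WT_FRACTION
    co2_per_yr = magma_mass_per_yr * CO2_WT_FRACTION

    print(f"magma mass: {magma_mass_per_yr:.2e} kg/yr")
    print(f"H2O outgassed (if fully degassed): {h2o_per_yr:.2e} kg/yr")
    print(f"CO2 outgassed (if fully degassed): {co2_per_yr:.2e} kg/yr")
    print(f"CO2 over 4 Gyr: {co2_per_yr * 4e9:.2e} kg "
          f"(compare ~4.7e20 kg in Venus' atmosphere today)")

Even under the generous assumption of complete degassing, rates of this order fall short of supplying the present CO 2 inventory within the age of the planet unless production was substantially higher in the past, which is consistent with the discussion of early outgassing above.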
Initial analysis by the Pioneer Venus Large Probe Neutral Mass Spectrometer (PV-LNMS) indicated the possible presence of CH 4 (Donahue and Hodges 1992). It was speculated that CH 4 was likely not well mixed in the atmosphere, given the measured variations in abundance. Later analysis of the same data by the same team indicated that the detection was unlikely (Donahue and Hodges 1993) and was due to contamination from terrestrial CH 4 brought along in the instrument; in their words, the signal "was generated by a reaction between an unidentified highly deuterated atmospheric constituent and a poorly deuterated instrumental contaminant." However, the instrumental contaminant has never been identified and hence the detection of CH 4 in the Venusian atmosphere remains an open question.

D/H Ratio

Venus' atmosphere today contains only about 30 ppm H 2 O (Fegley 2014, and refs. therein). The first in-situ D/H measurement by the PV-LNMS demonstrated a ratio ∼ 150 times that of Earth (Donahue et al. 1982). Upper atmosphere measurements of D/H by Venus Express (Fedorova et al. 2008) documented much higher values, which are inconsistent with those of Donahue et al. (1982). Fedorova et al. (2008) attributed these differences to "a lower photodissociation of HDO and/or a lower escape rate of D atoms versus H atoms." Ground-based measurements do not always agree with space-based and in-situ measurements, although in some cases they are consistent (Matsui et al. 2012). Various measurements have been suggested to imply a vertical variation of HDO/H 2 O (see Marcq et al. 2018, and references therein) at odds with current chemical models. Despite these discrepancies, the general view is that hydrogen in Venus' atmosphere has a D/H much higher than any other reservoir of hydrogen in the solar system, which implies that Venus lost hydrogen to space. This may indicate that after an early wet, possibly habitable, time (Donahue et al. 1982;Way and Del Genio 2020), water dissociated and the hydrogen was removed from the atmosphere to space (see the illustrative estimate below). Dissociation of water molecules and the escape of hydrogen probably had a strong influence on the entire geodynamical history of the planet (Baines et al. 2013). However, while recent studies propose scenarios for the evolution of the D/H of Earth's water (Pahlevan et al. 2019;Kurokawa et al. 2018), estimating the amount of water lost from Venus over the last 4.5 Gyr remains extremely challenging for several reasons. Firstly, the history of hydrogen escape cannot be easily reconstructed since it depends on numerous factors (solar irradiation history, atmospheric composition and vertical structure, regime of escape, etc.). Secondly, the starting D/H ratio for hydrogen in the Venus atmosphere remains unknown. An important contribution from solar gases would imply a low starting D/H ratio, while contributions from comets could have increased this ratio up to 4-5 times that of Earth (Altwegg et al. 2015). New investigations of the elemental and isotopic composition of noble gases in the Venus atmosphere, especially of xenon, would help shed further light on the history of hydrogen (and water) escape from Venus (see Marty 2020 and Avice et al. 2022, this journal).

Climate History

The initial stages of Venus' habitable state are more shrouded in mystery than those of Earth's. We begin our analysis of Venusian habitability with the longevity of the post-accretion magma ocean. Early work by Hamano et al. (2013) and Lebrun et al.
(2013) showed that, if the time needed for the crystallization of Venus' magma ocean is ∼ 100 million years or longer, there is the risk of dissociation and loss of its primordial H 2 O steam atmosphere: the hydrogen, as well as some of the oxygen, escapes during this time while any leftover oxygen is absorbed into the magma ocean. After the cooling and crystallization of the magma ocean, the planet (denoted by Hamano as a Type II world) may inherit a thick CO 2 dominated atmosphere not that different from what we observe today on Venus. In this scenario, the D/H ratio measured by Donahue et al. (1982) is a possible remnant of the primordial CO 2 +Steam(H 2 O) dominated atmosphere. In an alternative scenario for Venus (which Hamano et al. termed a Type I world), the magma ocean crystallization takes place over ∼ 1 million years (similar to that of Earth: Katyal et al. 2020;Nikolaou et al. 2019). This scenario avoids the loss of the steam atmosphere, which may then condense out onto the surface, possibly allowing for a period of habitability of undetermined length. More recent work by Turbet et al. (2021) has expanded the 1-D models of Hamano et al. (2013), Lebrun et al. (2013) to a 3-D GCM where cloud effects can be modeled and their importance quantified. The Turbet et al. (2021) models of steam+CO 2 and steam+N 2 atmospheres demonstrate that there are little to no day side clouds at the substellar point to shield the planet from high solar insolation (as will be seen in the cold-start cases below). Their model also demonstrates the presence of high clouds at the polar and night side, which are effective at trapping outgoing infrared radiation preventing the cooling of the planet. The Turbet study supports the Type-II outcome modeled in Hamano et al. (2013), where the magma ocean steam atmosphere is never able to condense out on the surface, and once again the H 2 escapes and most of the oxygen is absorbed by the magma ocean. One major shortcoming in all of the models above is the inability to provide better constraints on exactly what the constituents were of the outgassed magma ocean atmosphere, which presently is an active area of research (e.g. Bower et al. 2022;Gaillard et al. 2022). Alongside these unknowns can be added the inability to constrain the albedo (Salvador et al. 2017, which would again influence whether Venus becomes a Type I or Type II world. The lack of definitive evidence for one scenario or the other implies that the question of the existence of Venus' past habitability remains open. More details on magma ocean and atmospheric evolution is provided in Salvador et al. (2023, this journal). One method of testing which hypothesis for Venus' evolution is correct is by examining data from the upcoming DAVINCI mission (Garvin et al. 2022), which should provide better constraints on when Venus lost its water and the timescale over which it happened by examining a number of noble gas isotopes (see Avice et al. 2022, this journal). Another more indirect method would be possible by looking at exoplanet demographics -if we observe planets in the Venus Zone (Kane et al. 2014) that have temperate conditions then at least we know it is possible. See Way et al. (2023, this journal) for how exoplanet research may inform Venus' history. 
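To illustrate how the D/H enrichment discussed above can be turned into a rough lower bound on the hydrogen (and hence water) lost from Venus, the Python sketch below applies simple Rayleigh fractionation to the present ∼ 150-fold enrichment. The fractionation efficiencies and the Earth-like starting D/H are assumptions made only for illustration; replenishment by volcanism or impacts, and a non-Earth-like starting ratio, would change the answer considerably.

    # Rayleigh-fractionation estimate of hydrogen loss from the D/H enrichment.
    # For escape with relative deuterium escape efficiency f (f = 0: no D escapes;
    # f = 1: no fractionation), the remaining reservoir obeys R/R0 = (N0/N)**(1 - f).

    ENRICHMENT = 150.0          # present Venus D/H relative to Earth (Donahue et al. 1982)
    PRESENT_H2O_PPM = 30.0      # present atmospheric water abundance (ppm)

    for f in (0.0, 0.1, 0.3):   # assumed fractionation efficiencies
        n0_over_n = ENRICHMENT ** (1.0 / (1.0 - f))
        initial_h2o_ppm_equiv = PRESENT_H2O_PPM * n0_over_n
        print(f"f = {f:.1f}: initial hydrogen reservoir >= {n0_over_n:.0f} x present "
              f"(~{initial_h2o_ppm_equiv:.0f} ppm H2O equivalent)")

Under these assumptions the atmosphere would have started with at least ∼ 150 times its present water content, and substantially more if deuterium also escaped; because the starting D/H is unknown, this remains a lower bound on an already small present-day inventory rather than evidence for a full ocean.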
To date, very little work has been done to examine how Venus (or the Earth for that matter) moves from a post-magma ocean state to a period of habitability with moderate surface temperatures and oceans, despite this transition being a cornerstone of the onset of habitability. As mentioned above, the first step would be to provide better constraints on exactly what the constituents were of the magma ocean atmosphere (e.g. Bower et al. 2022;Gaillard et al. 2022). One also needs to account for large impacts occurring in the first few hundred million years of the planet's evolution (see Salvador et al. 2023;Gillmann et al. 2022, this journal), which could have major consequences on the atmospheric mass and composition, as large amounts of water, CO 2 , N 2 , and other species could be delivered or removed (e.g. Schlichting and Mukhopadhyay 2018;Gillmann et al. 2020). Thus, it is unlikely that surface conditions would remain consistent throughout that early time. Some works have attempted to examine the first 100s of million years (e.g. Harrison 2020), as described above, but many unknowns still remain. There is another problem for those interested in Venus' habitability: How could Venus ever have been habitable like early (or modern) Earth when Venus at 4.5 Ga received ∼ 1.5 times the incident solar flux that Earth receives today? Most studies resulting in temperate conditions have been made assuming a cold start in the post-magma ocean phase, which hypothesizes that the early magma ocean cooled quickly (a few million years) and that water was able to condense out on the surface (Hamano Type I discussed above). The first to successfully model such temperate conditions in the Pre-Fortunian (Hiesinger and Tanaka 2020) was Pollack (1971), who used a 1-D radiative convective non-grey model. He presented two options at 4.5 Ga when the solar luminosity was ∼ 30% lower than today. The first was an early Venus with a 50% cloud cover; the motivation for 50% was that he believed modern Earth has roughly this amount (modern measurements indicate 70 ± 10%; Holdaway and Yang 2016). In that scenario early Venus had temperatures (depending upon the atmosphere assumed) ranging from ∼ 320-500 K. The second choice was a 100% cloud cover model, yet the motivation for such a model was not disclosed. In this scenario Pollack discovered that the planet could host moderate surface temperatures below 300 K. Pollack also demonstrated that, even at today's insolation (∼ 1.9 times Earth's), the surface temperature could have remained below 300 K. This 100% cloud cover assumption was the basis for all subsequent Venus habitability studies (e.g. Grinspoon and Bullock 2007), yet no mechanism for producing 100% cloud cover was ever provided. It would not be until the exoplanet work of Yang et al. (2014) that such a mechanism was discovered. Yang et al. (2014) used the NCAR CAM General Circulation Model (GCM) and discovered that, for slowly rotating planetary atmospheres, an expanded Hadley cell would provide the 100% cloud cover at the subsolar point. Modern Earth actually contains three meridional overturning cells in the north and three in the south (the Hadley, Ferrel, and polar cells). The reason Earth does not have a single overturning cell in each hemisphere is that its 'fast' rotation generates a strong Coriolis force deflecting the north-south overturning cells. In a slowly rotating world, the Coriolis force is very weak and hence a single north and south Hadley cell is present. Subsequent Venus-focused work by Way et al.
(2016), using the ROCKE-3D (Way et al. 2017) GCM with a fully coupled dynamic ocean, confirmed Yang's work, which utilized a simplified single mixed-layer/slab ocean without any horizontal heat transport. Later ROCKE-3D GCM work by Way et al. (2018) utilized both fully coupled dynamic oceans and mixed-layer/slab oceans to confirm Yang's general conclusions over a large range of rotation rates and insolation. The fact that two independent GCMs observe the same behavior is encouraging, but cannot be considered conclusive until these effects are observed in exoplanetary systems in the future. However, these models require at least some surface water. Some tens of cm in soil would be sufficient according to more recent work by Way and Del Genio (2020). Whether or not water was able to condense on the surface at all after the magma ocean phase depends on the atmosphere at this time, whose composition is a matter of on-going debate as mentioned above (e.g. Bower et al. 2022). It should be noted that we have no constraints on what early Venus' rotation rate was, but an early slow rotation rate can be achieved via a number of mechanisms including solid body tidal dissipation (e.g. MacDonald 1964;Goldreich and Peale 1966;Way and Del Genio 2020), core-mantle friction (Goldreich and Peale 1970;Correia and Laskar 2001), and oceanic tidal dissipation (Green et al. 2019). The possible role of impactors in Venus' rotational evolution goes back at least to the work of McCord (1968), although no detailed hydrodynamical simulations have ever been performed to examine the impactor parameters and the lack of an observable moon as we have for Earth. Our moon is likely the remnant of an impactor (e.g. Benz et al. 1986;Canup 2004;Lock and Stewart 2017); see the section on Archean Earth above. Moreover, given the youthful age (200-750 Myr) of the surface of Venus (e.g. McKinnon et al. 1997;Bottke et al. 2016), there is little chance of observing cratered remains of any such ancient impactors. If such an impactor did collide with the planet in Venus' past, it may be possible to detect it isotopically if it was sufficiently different from the bulk composition of Venus, but measuring this would be challenging. To paraphrase Way and Del Genio (2020): "it is clear that Brasser et al. (2016) and Mojzsis et al. (2019) prefer the hypothesis that the Earth's late veneer was mainly delivered by a single Charon- or Ceres-sized impactor. For that reason, if a larger object was involved in the late evolution of Venus' spin or obliquity, it may be possible to detect its geochemical fingerprints in a future in situ mission." Thus, all discussions of the habitability of Venus over any time scale discussed herein assume that the planet was rotating slowly enough to generate ∼ 100% cloud cover at the subsolar point, extending across most of the sunlit hemisphere of the planet (see Fig. 7). It should be noted that the persistence of Venus' habitability was originally predicated upon the notion that the faint young Sun's increase in brightness through time (e.g. Gough 1981;Claire et al. 2012) would take some hundreds of millions of years to increase surface temperatures enough to drive the planet into a runaway greenhouse. Regardless, some form of volatile cycling would be required to keep the planet's climate stable over geological time (Höning et al. 2021;Krissansen-Totton et al. 2021;Gillmann et al. 2022;Way et al. 2023, this journal), as for Earth, normally through some form of weathering (e.g. Walker et al.
1981;Kasting and Catling 2003;Krissansen-Totton et al. 2018;Höning 2020;Graham and Pierrehumbert 2020). At the same time, the work of Yang et al. (2014) and Way and Del Genio (2020) has definitively shown that, if the cloud albedo feedback for slowly rotating worlds is correct, then increases in solar insolation through time cannot be the deciding factor in the evolution of Venus from a temperate planet to a hothouse as long as volatile cycling takes place. From this broad overview of the possible conditions at the onset, persistence, and loss of habitability, it appears that the question of transitioning from one state or era to another is a major challenge that will need to be addressed by future models. The period toward the end of the magma ocean phase has been highlighted as an important criterion for subsequent evolution and needs to be studied more intensely before any definitive conclusions can be drawn. In the same way, much more work needs to be done to explore how a planet may go from a temperate to a moist and then a runaway greenhouse state (e.g. Kasting 1988), and 3D GCMs will be needed (e.g. Boukrouche et al. 2021).

Fig. 7 Generated from General Circulation Model simulation 33 in Way and Del Genio (2020). This is a 1 bar N 2 dominated atmosphere including 400 ppmv CO 2 and 1 ppmv CH 4 . The sidereal rotation rate is the same as modern Venus (−243 × Earth). Insolation is set to the value that Venus received 715 Ma (1.7 × modern Earth, or 2358.9 W/m 2 ). This is a snapshot of 1/12 of a Venusian solar year. Left: percentage total cloud cover; the black star represents the location of the subsolar point. Right: surface temperature map. Note that the highest temperature regions (dark red) are located near the subsolar point and on the southern landmass, while the coldest regions are on the anti-solar continental landmasses (blue). The fully coupled dynamic ocean, which includes horizontal heat transport, keeps the oceans warm (red/orange colors) even in the anti-solar regions of the simulation.

Linking Possible Past Habitable States to Present-Day Observations

Present-day Venus looks nothing like what habitable models suggest Venus could have been like in the past. Therefore, we should first attempt to understand how the planet could have radically changed from a hypothesized temperate climate with a relatively thin atmosphere to the dense hothouse we observe today. Then, we take a look at what signs of a previous habitable time interval, or of the stages required to bring Venus to its present state, could be observable today. The evolution of Venus from a hypothetical habitable time interval to the present day must bring its atmosphere to the current inventory of major species (11 · 10 18 kg of N 2 , 10 16 kg of H 2 O and 4.69 · 10 20 kg of CO 2 ). It must also remove any molecular oxygen. Ideally, it would also bring the D/H ratio to its present value (Donahue et al. 1982), but due to uncertainties in the loss mechanisms, and varying isotopic ratios for the volcanic and meteoritic sources, this is challenging (Grinspoon 1987, 1992;Gurwell 1995). Likewise, the stable isotope ratios of noble gases have been suggested to derive from early hydrodynamic escape but cannot be modelled by a unique self-consistent scenario due to the lack of constraints on early atmospheric conditions, structure and composition, as well as solar energy input (see Avice et al. 2022, this journal).
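As a consistency check on the inventory figures quoted above, the surface pressure that a given atmospheric mass contributes by weight follows from hydrostatic balance, p ≈ M g / (4πR 2 ). The short Python sketch below applies this to the quoted masses; the planetary radius and surface gravity used are standard values for Venus, and the mean-gravity approximation is adequate at this level of precision.

    import math

    # Convert a global atmospheric mass inventory into the surface pressure
    # it contributes by weight (p = M * g / surface area).
    G_VENUS = 8.87          # surface gravity (m/s^2)
    R_VENUS = 6.0518e6      # mean radius (m)
    AREA = 4.0 * math.pi * R_VENUS**2

    inventories_kg = {"CO2": 4.69e20, "N2": 11e18, "H2O": 1e16}

    for species, mass in inventories_kg.items():
        pressure_bar = mass * G_VENUS / AREA / 1e5
        print(f"{species}: {pressure_bar:.3g} bar")

The ∼ 90 bar obtained for CO 2 is consistent with the observed ∼ 92 bar surface pressure of the nearly pure-CO 2 atmosphere, a useful sanity check when comparing outgassing scenarios against the present inventory.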
Nitrogen evolution has long been assumed to be relatively straightforward once the magma ocean crystallized, since, in the absence of fixation by living organisms, it was not expected to be part of complex cycles (e.g. Stüeken et al. 2016). However, the understanding of the nitrogen cycle, even on Earth, is much less advanced than that of CO 2 (Stüeken et al. 2020). On Earth (Marty and Dauphas 2003), it is thought to be approximately in balance between sources (half volcanic outgassing and half oxidative weathering) and the sink (burial with a touch of subducted flux). In the past, though, nitrogen fluxes are likely to have significantly changed (Goldblatt 2018). This possibly affected surface pressure, despite variations that are much lower than those expected on Venus for CO 2 , for instance. Some reasons behind these variations include possible volcanic production changes with time (from mantle conditions and composition evolution), and changes in the composition of the atmosphere (e.g. presence/absence of oxygen) (e.g. Som et al. 2016;Catling and Zahnle 2020), leading to changes in the chemical reactions between the atmosphere and the surface (i.e. weathering). For example, some results suggest maximum atmosphere pressures of about 0.5 bar on Earth (probably much less), 2.7 Gyr ago, using barometric calculations from fossilized raindrops and gas bubbles in basaltic lava (Som et al. 2012(Som et al. , 2016. However, the past N 2 abundance in the atmosphere is still poorly constrained and generally thought to have possibly varied by a factor 2-3 relative to present-day (Goldblatt et al. 2009;Johnson and Goldblatt 2015;Goldblatt 2018). In such a scenario, considerable build-up of nitrogen in the atmosphere of Earth over its history may be expected. What this could mean for Venus is still uncertain, given the differences between the two planets, the lack of data relative to Venus' past and the dependence of the nitrogen fluxes on surface conditions. Comparing the nitrogen abundances on the two planets, one should also consider that some nitrogen is stored in Earth's continental crust. Still, a better understanding of the nitrogen exchanges applied to Venus will provide valuable insight on the planet's evolution. The greater abundance of nitrogen in Venus' atmosphere compared to Earth's (4 · 10 18 kg) could imply it has escaped even less than on Earth and was thus protected from losses (Lammer et al. 2018), despite the fact that the 40 Ar value suggests that Venus' mantle is less degassed than Earth's. Some early temperate Venus models use an atmospheric nitrogen content similar to Earth's (e.g. Way et al. 2016). While it is likely that, by the end of the magma ocean phase, the primordial nitrogen-based species would have been trapped by the hot surface and removed from the atmosphere, collisions with large impactors would have delivered additional nitrogen over the first few hundred million years. Gillmann et al. (2020) proposed that about 5 · 10 18 kg N 2 could have been brought to the atmosphere this way, despite this number being highly dependent on impactor vaporization and composition. The rest (about half) of the present-day inventory of N 2 could realistically be released into the atmosphere over the following 4 Gyr by volcanic activity, but actual fluxes depend on volcanic production rates, mantle composition and surface conditions (Gillmann et al. 2020;Gaillard and Scaillet 2014), as well as burial fluxes, which are all poorly constrained. 
Confirmation of the volatile composition of the lava and the volcanic plumes could help refine these estimates and better assess the feasibility of long term outgassing of the current nitrogen content of Venus' atmosphere. CO 2 evolution is a more complex issue, since it can interact more easily with the surface. The main question is how it was possible to evolve from very low CO 2 abundances, in a temperate atmosphere, to a full-fledged atmosphere with 88 bar CO 2 . Volcanic outgassing has been proposed to be responsible for Venus' atmospheric CO 2 inventory (see Gillmann et al. 2022, this journal), despite the possible high surface pressure (Gaillard and Scaillet 2014). This implies that, if CO 2 is available for outgassing (i.e. present in the mantle and transferred into the melt; see Gillmann et al. 2022 this journal), it will be released into the atmosphere. However, it has been shown that, with Earth-like outgassing (Earth-like composition of the gases released into the atmosphere during a volcanic eruption, indicating probable Earth-like oxidation of the Venusian mantle), at least the equivalent of ∼ 10 global resurfacing events is needed to build up Venus' CO 2 atmosphere (Lopez et al. 1998). More recent work , estimates that the number of equivalent global resurfacing events needed to obtain the amount of CO 2 in the present atmosphere of Venus is about 100. This would indicate that most of the present-day atmosphere would have originated from the period before the present geological record, which is in line with the interpretation of 40 Ar in the atmosphere of Venus that suggests that the bulk of the outgassing occurred rather early during its evolution. Numerical modeling of the mantle of Venus (e.g. Armann and Tackley 2012;Gillmann and Tackley 2014) also implies that global volcanic events are unlikely to occur with such a high frequency due to the massive internal heat dissipation they cause. The mantle requires time for heat to accumulate again before a new event is triggered. Therefore, volcanism is unlikely to have been the cause for the full atmosphere build up, or even for more than a fraction of the build-up. It does not preclude volcanism from triggering a transition, though (e.g. Way et al. 2022). Instead, whatever outgassing was due to volcanism took place on the long term, but without allowing us to be more specific about a precise age for a possible transition. Weller and Kiefer (2021) present an alternative picture of how Venus' atmosphere could go from a very low CO 2 abundance to 20-60 bar. Weller and Kiefer (2021) demonstrate that a significant fraction of the present-day atmosphere can be produced with a single overturn (early hot planet, about 5 bar CO 2 per overturn), no overturns, or multiple overturns (later cold planet). Interestingly, these overturns do not need to be global and can occur on geologically short timescales. Weller et al. (2022) also suggests that a significant portion of Venus' atmosphere present-day N 2 and CO 2 inventory could be best produced under an early plate tectonics regime and may be reached without initial magma ocean contribution. An alternative solution that has been suggested is that a global volcanic event, possibly akin to Earth-like Large Igneous Provinces (LIPs), could have both outgassed CO 2 from mantle reservoirs and destabilized carbon crustal reservoirs (such as carbonates and other carbon rich sediments, see Retallack et al. 2006;Svensen et al. 2009;Ganino and Arndt 2009;Nabelek et al. 
2014), leading to the accumulation of CO 2 in the atmosphere on a short timescale (Way and Del Genio 2020;Krissansen-Totton et al. 2021;Höning et al. 2021;Way et al. 2022) at an undefined date. Such a mechanism and its feasibility are rather difficult to assess in the absence of observation. While Earth has experienced several LIPs during its evolution, no trace of a CO 2 partial pressure increase on the order of tens of bars has been recorded (see Schaller et al. 2011 for an example of an estimate on the order of tens of thousands of ppm CO 2 ). However, such events have been associated with dramatic climate change and global extinction events (e.g. Wignall 2001), making them important enough to affect life on a global scale and planetary habitability. They could possibly trigger a climate transition by overwhelming any volatile cycling in effect, hence driving the planet into a moist and then runaway greenhouse (Way and Del Genio 2020;Way et al. 2022). The remaining issue with this habitable scenario can simply be stated as "where is the oxygen?" If there were once substantial surface reservoirs of water on the surface of Venus and they were driven into the atmosphere as the planet warmed up, then why does the atmosphere not contain many bars of oxygen? In essence, as a planet enters the moist greenhouse state, water is transported into the stratosphere. Over time this water is photodissociated, the hydrogen can escape via diffusion and the oxygen should be left over (Kasting 1988). Studies have shown that it is difficult for oxygen to escape in substantial quantities in the present-day Venusian atmosphere (e.g. Persson et al. 2020), where the H + :O + escape ratio is ∼ 2:1 over the solar cycle (e.g. Barabash et al. 2007;Persson et al. 2018). Assuming Venus' atmosphere has not changed over geological time, Persson et al. (2020) demonstrated that it would have lost between 0.02-0.6 meters of a global equivalent layer of water over the past 3.9 billion years via atmospheric escape. Additionally, non-thermal escape is a slow, ongoing process that declines with time, as the solar extreme UV input decreases. This implies that it takes a long time to remove any significant amount of oxygen from Venus' atmosphere. In turn, if atmospheric escape alone is considered, progressive loss after an early habitable time billions of years ago would be favored. In fact, it has been speculated that exoplanetary worlds with multiple bars of oxygen may indicate a former temperate period with oceans (Luger and Barnes 2015;Wordsworth and Pierrehumbert 2013). It has been suggested that surface interaction and oxidation could have been a major sink of oxygen during the magma ocean phase (Kasting et al. 1993b;Gillmann et al. 2009), but this can be an efficient way to suppress oxygen accumulation only until the magma ocean solidifies. Way and Del Genio (2020) speculated that the resurfacing we see on Venus today could have been the means to sequester the leftover oxygen. Gillmann et al. (2020) have simulated ongoing oxidation of the fresh, solid, basaltic crust and found it able to extract oxygen from the atmosphere at a maximum rate only slightly higher than atmospheric escape, at most. Pieters et al. (1986) and Lécuyer et al. (2000) have calculated that a hypothetical equivalent layer of approximately 50 km of hematite would be necessary to account for the oxidation of the content of an Earth ocean on Venus.
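To give a sense of scale for the escape-based water-loss figures quoted above, the Python sketch below converts an assumed, constant hydrogen ion escape rate into a global equivalent layer (GEL) of water lost over 3.9 Gyr. The escape rate used is only an order-of-magnitude placeholder typical of values reported from Venus Express-era measurements, and holding it constant over billions of years is a simplification; the point is the unit conversion, not a new estimate.

    import math

    # Convert an assumed hydrogen escape rate into a global equivalent layer of water.
    H_ESCAPE_RATE = 1e25            # assumed H+ loss rate (ions/s), order of magnitude
    TIME_GYR = 3.9
    SECONDS = TIME_GYR * 1e9 * 3.156e7

    R_VENUS = 6.0518e6              # mean radius (m)
    AREA = 4.0 * math.pi * R_VENUS**2
    M_H2O = 2.99e-26                # mass of one water molecule (kg)

    water_molecules_lost = (H_ESCAPE_RATE * SECONDS) / 2.0   # 2 H atoms per H2O
    water_mass = water_molecules_lost * M_H2O                 # kg
    gel_m = water_mass / (1000.0 * AREA)                      # metres of liquid water

    print(f"water lost: {water_mass:.2e} kg  ->  GEL ~ {gel_m:.2f} m")

A constant rate of this order yields only a few centimetres of water over the whole period considered, in line with the 0.02-0.6 m range quoted above and far short of an ocean, which is why escape alone cannot dispose of the oxygen left behind by a large ancient water inventory.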
More recent work by Warren and Kite (2021) has suggested that, for this hypothesis to be valid, volcanic ash produced by explosive volcanism needs to be oxidized. In their model, oxidation efficiency was increased by the larger free surface of the material (they therefore assume a 100% oxidation efficiency). However, such a mechanism still requires layers of kilometers to tens of kilometers of oxidized material to be emplaced onto the surface of the planet. This hypothesis also needs to consider that only very limited pyroclastic activity has been identified on Venus today (Campbell and Clark 2006;Ghail and Wilson 2015;Grosfils et al. 2000, 2011;Keddie and Head 1995;McGill 2000), as explosive volcanism requires volatile contents > 3-5 wt%, several wt% higher than typical Earth magmas (< 1 wt%). As a result, it is possible that such a mechanism might have actually played a role in the more distant past of Venus, rather than relatively recently. Again, a better understanding of the nature and composition of the surface layers of Venus would be a tremendous help to understanding its history. Gillmann et al. (2022, this journal) expands on this topic and on surface-atmosphere interaction. Section 2 describes what to look for in the atmosphere today (D/H and noble gases; see Avice et al. 2022, this journal, for more details on noble gases) and what to look for on the surface in terms of felsic materials (similar to material from Earth's continents, formed at subduction factories) that may have a connection to surface water-rock interactions. On the other hand, if tesserae prove to be mainly basaltic, they formed without the need for liquid water, which would support a dryer evolution at least at the time they were formed.

The Clouds of Venus

The question of Venus' present-day habitability has been discussed for decades (Morowitz and Sagan 1967;Cockell 1999;Grinspoon and Bullock 2007). As covered in prior sections, it is often reasoned that if conditions on early Venus were similar to conditions on early Earth during the period in which Earth life arose (carbon molecules, surface water and rock-water interactions, and N, P, S, and transition metals, as well as suitable surface geology, volcanism and hydrothermal activity), this indicates the potential for an Earth-like biochemistry to have arisen on early Venus. However, modern-day Venus' surface is too hot for liquid water to be present, which rules out such biochemical reactions. Speculative alternative biochemistries compatible with the modern Venus surface have been proposed, such as the use of supercritical carbon dioxide as a polar solvent (Budisa and Schulze-Makuch 2014); the possibility for water-based life to have retreated to underground high-pressure water refugia has also been discussed (Schulze-Makuch and Irwin 2002). However, the only liquid water known to exist today on Venus is that dissolved within its sulphuric acid clouds. There has therefore been a great deal of interest in the potential for present-day habitability of the Venus cloud aerosols, and whether such a habitat could have existed contemporaneously with surface water for long enough for life to have made the transition.

Fig. 8 The calculated temperature, pressure, and pH prevailing in Venus cloud aerosols in the height range 45-70 km from the surface (solid lines) and respective observed limits for terrestrial life (dashed lines and solid fill). The limits of terrestrial organisms may or may not reflect the possibilities for Venus.
Figure 8 shows the calculated temperature, pressure, and pH in the middle Venus atmosphere (Grinspoon and Bullock 2007;Dartnell et al. 2015), juxtaposed with the respective observed limits for terrestrial life. The resulting discussion of a potential 'habitable range', and its relation to the limits of terrestrial cloud- and airborne microorganisms, is the subject of the following section. Future missions may help to constrain additional major variables affecting habitability in this altitude range, such as water availability, and better constrain the ultraviolet radiation flux (e.g. Mogul et al. 2021b).

Life in the Clouds

Earth's biosphere is a dynamic system of organisms and their interactions with the physical environment, including both transient and enduring habitats as well as short-term transport pathways (wind, rain) and long-term dormant refugia (polar ice, the deep subsurface). It includes a significant atmospheric component, the aerobiosphere. Tropospheric cloud water, the warmest and wettest airborne habitat, carries between 10 3 and 10 5 viable cells per mL (Amato et al. 2007), some of which are metabolically active (Amato et al. 2017). In addition to liquid cloud water, viable microbes are transported both regionally and globally as dust (Schuerger et al. 2018), ranging from 10 1 to 10 6 cells per cubic meter of air (Bowers et al. 2011;Burrows et al. 2009). These viable, dry bioaerosols extend throughout the troposphere and into the stratosphere (Bryan et al. 2019). However, most and possibly all of these desiccated microbes are inactive. Airborne microbial reproduction has not yet been directly observed in the field, although there have been indirect laboratory demonstrations (e.g. Sattler et al. 2001); this is likely in part because microbes in the field do not stay airborne for very long compared to typical generation times. At its most abstract, the requirements needed to support life (as we know it) were summarized by Hoehler (2007) as: a solvent (water); nutrients (C, H, N, O, P, S, and trace elements like Fe); energy for primary producers (autotrophs, chemical or photonic); and a stable environment (temperature, pH, radiation, etc.). The limits of habitability are often reasoned by analogy to therefore be the limits of life with respect to these requirements; the limits for the emergence of life are not fully understood, but may be different from or more constrained than those for established life, which has had time to adapt and diversify, as noted above in Sect. 2.3. By these metrics, hypothetical life in Venusian aerosols may be within the bounds of temperature, pressure, and pH (Grinspoon and Bullock 2007;Nicholson et al. 2010); radiation (Schulze-Makuch and Irwin 2002;Dartnell et al. 2015;Mogul et al. 2021b); C, H, N, O, and S, with some evidence for P (Milojevic et al. 2021;Mogul et al. 2021a); and energy sources (photonic and/or oxidation-reduction potential; Limaye et al. 2018;Mogul et al. 2021b). Seager et al. (2020) argue that because the pH scale becomes highly compressed at extreme acid (or base) concentrations, the Hammett acidity function (H 0 ) is a more representative metric for the Venus aerosols' acid activity. H 0 for Venus' aerosols is poorly constrained. It has been estimated from as low as −11, based on the current understanding of bulk aerosol composition, to ≥ −1.5 by Mogul et al. (2021b) with favorable assumptions regarding trace aerosol composition, the latter of which has some support in recent modeling by Rimmer et al.
(2021) speculating on the presence of ammonium or other hydroxide salts; however, even under the most favorable conditions, the results are at or below the acidity of any known Earth habitat, a substantial challenge for the hypothesis of an Earth-like biochemistry. The previously discussed alternative biochemistry hypotheses, such as a theoretical biochemistry based on sulphuric acid as a polar solvent instead of water, are not sufficiently detailed to be constrained in the same way. An airborne ecosystem faces the additional unique requirement that its organisms must be able to stay aloft long enough to reproduce in a suitable, microbial-scale environment. Otherwise, the aerobiosphere will eventually settle out to extinction (if the planetary surface is uninhabitable, as with Venus), or be limited to transportation of a continual flux of organisms from the surface (as appears to be the case on Earth). A stable microbial aerobiosphere (using the term 'microbe' generally, without implied similarity to terrestrial microbiology) therefore has much stricter constraints than initially apparent. Microbes in a long-lived aerobiosphere (i.e., an atmospheric habitat) cannot rely on the common survival strategy of dormancy, i.e., 'waiting it out' to grow and reproduce during brief influxes of water, light, heat, etc., as is observed in microbes from Earth's deserts, poles, and other extreme environments. In effect, the 'soft' constraints of surviving versus thriving (activity, growth, and reproduction) become converted to hard habitability constraints when assessing potential atmospheric habitability, and are further related to the typical particle residence time determined by the large- and small-scale atmospheric dynamics. On Earth, residence time for liquid water cloud particles, the most clement airborne microenvironment, ranges from hours to days; this is roughly on order with typical microbial generation times for common surface soil- or water-dwelling microbes. Smaller and lighter particles in drier and colder parts of Earth's atmosphere, such as stratospheric aerosols, may be resident for as much as a few years; however, extremophilic microbes observed capable of withstanding similar conditions in other terrestrial habitats reproduce far more slowly, with an example of a 60-day mean generation time reported for Siberian permafrost at −10 °C (Bakermans et al. 2003). Another survival strategy often found in extremophilic environments with highly dynamic conditions (for example, a desert which might receive all of its rainfall on one or two days a year) is adaptation to long periods of dormancy followed by brief periods of repair and growth (e.g., Friedmann et al. 1993). Microbes have been observed to survive decades and perhaps far longer of complete desiccation, freezing, or other extreme conditions in the field (see Schulze-Makuch et al. 2018;Lowenstein et al. 2011;Knowlton et al. 2013 and references therein), but it should be emphasized that they do not reproduce during these periods and thus this phenomenon does not necessarily extend the criteria for long-term aerobiosphere habitability. This is an important point: on Earth, airborne life has so far been observed to originate within at most a few generations from surface habitats.
Our knowledge of the microenvironments and typical residence times of Venus cloud droplets is limited, though they are likely longer-lasting than Earth's clouds. Seager et al. (2020) implemented a model that suggests coagulation rates constrain 3 µm-diameter cloud aerosols to 6 months aloft; Grinspoon and Bullock (2007) note that Hadley circulation may impose an overall 70-90 day upper bound. Conditions favorable to metabolic activity and reproduction must also have sufficient continuity. A generalization of the constraints that shape Earth's aerobiosphere is shown in Fig. 9: cloud formation, precipitation, particle trajectories, cycles of dehydration and rehydration, and nutrient and radiation flux, among others. The hydrated periods of metabolic activity must align with the availability of nutrients and energy to allow growth and reproduction before the particle containing the microbe(s) rains out or settles to the surface, or below the habitable altitude range. Given the above, the most significant question for the potential present-day habitability of Venus' clouds is whether sufficient water exists in the cloud aerosols to allow occasional microbial growth. Both Seager et al. (2020) and Hallsworth et al. (2021a) estimate the water activity (a w ) as ≤ 0.004, far below the microbial activity limit of ∼ 0.6. Limaye et al. (2021) calculated a higher but still prohibitive estimate of 0.02; as with the calculations of H 0 above, the speculative models of Mogul et al. (2021b) and Rimmer et al. (2021) could allow currently unmeasured trace aerosol constituents to raise this to within known limits. Understanding the habitability of Venus' clouds will require both future missions and modeling, with close coordination between experts in atmospheric dynamics, aerosol properties, cloud microphysics, and aerobiology. Key parameters to constrain include detailed measurements of Venus' aerosol composition, including trace constituents that could be nutrients or provide acid neutralization; typical residence times for particles with microbe-like properties in the Venusian atmosphere; and typical generation times for potential Earth analogue microorganisms, especially primary producers able to survive repeated desiccation and high acidity.
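The comparison drawn above between particle residence time and microbial generation time can be made explicit with a few lines of arithmetic. The Python sketch below simply counts how many generations fit into the residence times quoted above for a range of assumed generation times; the generation times are placeholders spanning common soil microbes to slow-growing extremophiles, not measured values for any Venus-relevant organism.

    # How many generations fit within a cloud-particle residence time?
    residence_days = {
        "Earth cloud droplet": 1,                    # hours to days; use ~1 day
        "Earth stratospheric aerosol": 365,          # up to a few years
        "Venus aerosol (Seager et al. 2020)": 180,   # ~6 months aloft
        "Venus aerosol (Hadley bound)": 80,          # 70-90 day upper bound
    }
    generation_days = {"fast soil microbe": 0.1, "slow extremophile": 60}

    for habitat, res in residence_days.items():
        for organism, gen in generation_days.items():
            print(f"{habitat:38s} / {organism:17s}: {res / gen:8.1f} generations")

Even the more generous Venus residence times allow only a handful of divisions for slow-growing organisms, so a persistent aerial biosphere would need either comparatively rapid reproduction under extreme conditions or an efficient mechanism for re-lofting settled material.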
10 Notional particles potentially to be encountered in the Venus cloud decks, inspired by terrestrial atmospheric sampling, to guide future instrument and analysis selection: (1) complex shapes with fluorescent properties, (2) particulate aggregates of sulfates and related compounds, (3) unidentified group of complex shapes adhered to an aerosol particle, (4) objects that resemble Earth bacteria or archaea, and (5) volcanic ash particles. Given the importance of microenvironments within cloud droplets to habitability, it is relevant to note several lines of evidence pointing to the existence of multiple cloud aerosol constituents beyond sulphuric acid and water: (1) UV absorption in the upper clouds of Venus is caused by an as-yet-unidentified "unknown UV absorber"; (2) the VEx/VMC imager's analysis of the phase functions of light reflected from the upper clouds shows more variation in refractive index than can be explained by H2SO4:H2O mixtures alone; (3) particulates are observed to exist at altitudes below the main cloud base, where temperatures are too high for H2SO4:H2O droplets to persist in liquid form; (4) X-ray fluorescence analysis of collected droplets conducted from Venera and Vega descent probes found evidence of iron, chlorine and phosphorus in cloud droplets; and (5) the recent reanalysis of the Pioneer Venus LNMS data by Mogul et al. (2021a) may provide further evidence for phosphorus in the cloud layer. The latter two results have not yet been reconciled with other in situ measurements and therefore remain something of an enigma (see review in Titov et al. 2018). The identification of cloud particle composition, down to the trace level, is clearly of great importance for assessing the present-day habitability of the cloud deck (Fig. 10). In this respect, new investigations including measurement of the abundance and isotope ratios of volatile elements would likely shed light on the past and present dynamics of the cloud region of the Venus atmosphere. Suggested Venusian Biosignatures Interest in the habitability of Venus' clouds is furthered by several currently unexplained observations of the Venusian atmosphere that bear some similarities to known terrestrial biosignatures; if the clouds can be shown to bear equivalent similarities to the corresponding terrestrial habitats, the case for dedicated life detection investigation strengthens, and vice versa. The Venus cloud layers have significant spectral absorption features not currently explained by what is known about the bulk aerosol composition, most notably in the UV but also at some longer wavelengths. Limaye et al. (2018) and Mogul et al. (2021b) suggested that this could be caused by phototrophy and/or 'sunscreen' pigments similar to carotenoids - in other words, analogous to the green 'color' of Earth resulting from the global presence of chlorophyll. There are also discontinuities or unexplained variances in atmospheric sulfur and other chemical cycling (Shao et al. 2020). Spacek and Benner (2021) suggested that these result from the presence of organic carbon, while Limaye et al. (2018) suggested that redox-based metabolic processes could play a role. Recently, there have been controversial claims for the presence of a biosignature, the molecule phosphine (PH3), in Venusian clouds. We summarize the controversy below. In September of 2020, Greaves et al.
(2021b) published an analysis of JCMT (James Clerk Maxwell Telescope) and ALMA (Atacama Large Millimeter Array) spectra of the Venusian atmosphere that demonstrated that phosphine (PH3) may have been detected. At the same time another paper by Bains et al. (2021), with many of the same authors as in the Greaves et al. (2021b) work, was submitted to arXiv with the title "Phosphine on Venus Cannot be Explained by Conventional Processes." Subsequently a series of papers were submitted (posted to arXiv) and eventually published that put into question the veracity of the original JCMT and ALMA observations (e.g. Snellen et al. 2020; Thompson 2021; Villanueva et al. 2021; Akins et al. 2021; Lincowski et al. 2021). Additional papers placed upper limits via other space- and ground-based measurements that further questioned the PH3 detection (Trompet et al. 2021). Greaves et al. (2021a,c,d) offered a response to such criticisms, and more back-and-forth rebuttals continue in the literature today. At the same time another paper may have offered support to the Greaves et al. (2021b) ground-based observations (Mogul et al. 2021a) by looking at in-situ archival data from the Pioneer Venus Large Probe Neutral Mass Spectrometer (PV-LNMS). Subsequently, a few papers have been published that look into the possible origins of PH3 in planetary atmospheres in addition to the Bains 2020 paper (e.g. Bains et al. 2019a,b, 2021; Sousa-Silva et al. 2020; Omran et al. 2021; Cockell et al. 2021; Truong and Lunine 2021; Limaye et al. 2021) and whether factors such as water activity, pH, etc. play a role in PH3 production (e.g. Hallsworth et al. 2021a; Rimmer et al. 2021). Other work has considered the effects of cosmic rays on PH3 production, but found it difficult to produce as much as 20 ppb (McTaggart 2022). While there is yet no consensus on the detection of phosphine in the atmosphere of Venus, its potential discovery has initiated many efforts, including a mission to search for life in the clouds of Venus. This demonstrates that detection of potential biosignatures in planetary atmospheres is a high-priority goal for investigations targeting Venus and its exoplanet cousins. Izenberg et al. (2021) proposed a general framework for assessing the probability of extant life on modern-day Venus. This 'Venus Life Equation' breaks down the qualitative factors affecting the probability of the origination of life (O), the robustness (size and diversity) of the supportable biosphere (R), and whether habitable conditions could have persisted continuously between the origin of life and the current day (C). The factors supporting a high value for O by analogy between early Venus and Earth are discussed above. R, by contrast, is very low for the atmospheric habitat hypothesis, as a result of both limited substrate (the total liquid volume of Venus' aerosols is at least five orders of magnitude less than, say, Earth's surface and ground water) and the typical low biodiversity of ecosystems highly constrained by water availability. The value C is affected by both the potential for global extinction events, such as asteroid strikes and coronal mass ejections, and overall planetary climate history, as affected by volcanism, stellar evolution, and many other factors. The former is relatively similar for Venus and Earth; the latter depends primarily on the water history of Venus as discussed above. The Venus Life Equation thus suggests a non-zero value for the probability of extant Venusian life.
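The Venus Life Equation of Izenberg et al. (2021) is usually written as a product of the three qualitative factors, L = O × R × C, with each factor scored between 0 and 1. The sketch below is only a schematic of that bookkeeping; the example scores are hypothetical placeholders chosen to reflect the qualitative discussion above (plausible origination, very low robustness for an aerosol-only biosphere, uncertain continuity), and are not values proposed by the original authors.

```python
# Schematic of the 'Venus Life Equation' L = O * R * C (after Izenberg et al. 2021).
# The factor values below are hypothetical placeholders, not published estimates.

def venus_life_equation(origination: float, robustness: float, continuity: float) -> float:
    """Probability-like score for extant life; each factor is scored in [0, 1]."""
    for name, value in [("O", origination), ("R", robustness), ("C", continuity)]:
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"factor {name} must lie in [0, 1], got {value}")
    return origination * robustness * continuity

# Placeholder scores reflecting the qualitative discussion in the text:
# O fairly high by analogy with early Earth; R very low (tiny liquid substrate,
# low expected biodiversity); C highly uncertain (depends on the water history).
L = venus_life_equation(origination=0.5, robustness=0.01, continuity=0.1)
print(f"Illustrative Venus Life Equation score: L = {L:.4f} (small but non-zero)")
```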
It also confirms that continuity (spatial and temporal) of conditions amenable to life is one of the most important unknowns that can be quantitatively constrained by direct in situ observation, through robust improvement of understanding of atmospheric zones and geologic/hydrologic history. This latter point is of particular importance where the Venusian aerosols are concerned. Unlike on Earth, where localized extinction-level events occurred but conditions for life persisted in other habitats (and cf. the subsurface punctuated habitability suggested for Mars by Melosh and Vickery 1989), this may not have been a possibility on Venus. Investigation Priorities There are two investigation priorities concerning the habitability of Venus: 1. To study past habitability. This can be addressed both through orbital observations of the surface and crust, in order to understand the geodynamic regime through time, and through noble gas and light element isotope measurements, to obtain insights into the history of volatiles through formation and evolution. 2. To characterize the present cloud-level environment including searching for molecular biosignatures of past or present-day life. This can be partially addressed by descent probes, but a more comprehensive investigation would require sustained presence in the clouds as from a balloon platform. 1. Studying the past habitability of Venus It is very difficult to access Venus' history. Modelling and comparative planetology with other planets of our Solar system alongside exoplanets (for example, their age in conjunction with their rotation rate) will provide insight into the past conditions on Venus but models are only as good as the data initially used. Both the onset of, and exit from, a potential habitable phase need to be modelled. The period toward the end of the magma ocean phase has been highlighted as an important criterion for subsequent evolution and needs to be studied more intensely before any definitive conclusions can be drawn as to whether Venus ever hosted liquid water at its surface (e.g. Salvador et al. 2023, this journal). Similarly, much more work needs to be done to explore how a planet may go from a temperate to a moist and then a runaway greenhouse state (e.g. Kasting 1988), and coupled interior-atmosphere models as well as 3D GCMs will be needed (e.g. Boukrouche et al. 2021). Future Venus missions will address habitability in a range of different investigations. One approach to reconstructing Venus' history is to study its geologic record, as preserved in its surface and crust; this provides a record of the last billion years or so of surface evolution. Of particular interest is the possibility that the tessera highlands show emissivity signatures consistent with widespread (continental-scale) granitic composition, like that found in Earth's continental crust; such a detection would suggest that large volumes of liquid water were present during their formation (Gilmore et al. 2017). However, non-detection of this felsic signature would not be conclusive, as such continental crust might have been covered by aeolian or other deposits, or otherwise not detectable from orbit. Determining Venus' current geodynamic regime -through gravity mapping and through searches for recent or ongoing geological activity -will help to constrain estimates of current heat and volatile loss from the interior, important factors in modelling the evolution of Venus' climate and habitability. 
The EnVision and VERITAS orbiters both will provide extensive datasets to address these investigations, as will DAVINCI descent imaging of tesserae, as discussed in far more detail in companion publications in this journal (Chaps. 2-4). Another approach, isotope geochemistry, allows one to constrain Venus' evolution in the distant past, right back to its formation and early evolution. The isotopic abundances of noble gases and light elements provide constraints on acquisition and loss processes of volatiles, and on their exchanges between mantle and atmosphere, so are particularly important for reconstructing the history of water. For example, measuring the magnitude of radiogenic/fissiogenic excesses of 4He and 129,131-136Xe produced at different times over Venus' history will help to distinguish between scenarios for the geological evolution of Venus (stagnant lid, episodic plate tectonics, etc.; see Gillmann et al. 2022, this journal). Although some noble gas isotope measurements were already obtained from Pioneer Venus and Venera in the 1970s and early 1980s, the upcoming DAVINCI entry probe mission will measure a greater variety of these isotopes with much greater precision, including the first measurements of krypton and xenon isotopes, permitting much better constraints on formation and evolution scenarios than are currently possible. Venus atmospheric sample return missions, though technically demanding, would allow isotopic ratio measurement to even higher precision and thus would offer correspondingly greater constraints on evolution scenarios. These investigations, and their implications for determining the history of water, are reviewed in detail in Avice et al. (2022, this journal). Conducting these investigations will not only give us a better understanding of Venus' evolution and potential habitability through time, but also will help us to assess habitability in terrestrial worlds in other planetary systems; these parallels are explored in much more detail in Way et al. (2023, this journal). 2. The search for biosignatures in Venusian clouds If Venus was habitable in the past (meaning, it had liquid water on its surface, the other ingredients of life being a given on a rocky planet such as Venus, and similar to early Earth), and life emerged, could it have survived to the present day in atmospheric aerosols? With regard to the habitability of the Venusian cloud deck, high-priority in situ investigations include the structure of the atmosphere and variables such as temperature, pressure, pH, and UV radiation flux (cf. Grinspoon and Bullock 2007; Dartnell et al. 2015), and above all, the detailed composition of the cloud aerosols, including water activity, acid activity, and trace constituents such as organics and ammonia. Comparison of these variables with those on Earth would substantially improve our ability to provide a 'habitable range'. Moreover, analysis of the aerosols will permit detailed study of the micro-environmental conditions within them. Could they be conducive to hosting an airborne biosphere, permitting life to thrive (not just survive), even if they are "extreme" by terrestrial standards? Future descent probes, like the upcoming DAVINCI probe (Garvin et al. 2022), or the descent phase of lander missions such as Venera-D (Zasova et al. 2019), will be essential for this investigation by providing vertical profiles of atmospheric composition with far greater sensitivity and vertical resolution than is available from past missions.
Far more spatially and temporally extensive investigations will require cloud-level aerial platforms (e.g. Cutts et al. 2018; Baines et al. 2018; Arredondo et al. 2021) offering sustained presence in the clouds. Instrumentation on such platforms should, in particular, seek to measure the composition, size, and lifetime of cloud and aerosol particles, as these are the most habitable environmental niches of astrobiological interest. In situ investigations should also look for signs of extant life (biomolecules, metabolites). A staged approach to detection of biosignatures was developed by the Venus Life Finder Missions team, consisting of missions increasing in size and complexity. In a first mission, a small entry probe would descend through the clouds carrying an autofluorescence backscatter nephelometer, which would characterize the shape and composition of cloud particles and search for fluorescence in UV light as a biomarker (Baumgardner et al. 2022). Such a mission is in development, at the time of writing, and may launch as soon as 2023 (French et al. 2022). A second proposed mission would put more capable chemical and environmental detection instrumentation on a long-lived balloon floating in Venus' clouds; such instrumentation could eventually include an aerosol mass spectrometer and/or a fluorescent microscope which provides particular sensitivity to biomolecules (Sasaki et al. 2022). These could eventually lead to a third mission which would bring back a sample of Venus cloud material to Earth, so that it could be examined with the highly sensitive instrumentation available in terrestrial laboratories. Even if no metabolically active life forms are detected, information from these investigations will inform the models used to determine the history of the planet and its potential for having been habitable and seen the independent emergence of life. One final note is that any information from Venus in situ and any possible future sample return mission would be extremely valuable for studying the habitability of exoplanets.
28,821.8
2023-02-22T00:00:00.000
[ "Environmental Science", "Physics" ]
The Local-Global Principle for Integral Soddy Sphere Packings Fix an integral Soddy sphere packing P. Let K be the set of all curvatures in P. A number n is called represented if n is in K, that is, if there is a sphere in P with curvature equal to n. A number n is called admissible if it is everywhere locally represented, meaning that n is in K (mod q) for all q. It is shown that every sufficiently large admissible number is represented. Figure 1: (a) four tangent spheres; (b) two more spheres; (c) reflection of (b) through a sphere centered at p. Introduction This paper is concerned with a 3-dimensional analogue of an Apollonian circle packing in the plane, constructed as follows. Given four mutually tangent spheres with disjoint points of tangency (Figure 1a), a generalization to spheres of Apollonius's theorem says that there are exactly two spheres (1.1) tangent to the given ones (Figure 1b). For a proof of (1.1), take a point p of tangency of two given spheres and reflect the configuration through a sphere centered at p. Thus p is sent to ∞, and the resulting configuration (Figure 1c) consists of two tangent spheres wedged between two parallel planes; whence the two solutions claimed in (1.1) are obvious. Returning to Figure 1b, one now has more configurations of tangent spheres, and can iteratively inscribe further spheres in the interstices (Figure 2a). Repeating this procedure ad infinitum, one obtains what we will call a Soddy sphere packing (Figure 2b). The name refers to the radiochemist Frederick Soddy (1877-1956), who in 1936 wrote a Nature poem [Sod36] in which he rediscovered Descartes's Circle Theorem [Des01, pp. 37-50] and a generalization to spheres, see Theorem 2.3. The latter was known already in 1886 to Lachlan [Lac86], and appears in some form as early as 1798 in Japanese Sangaku problems [San]. We name the packings after Soddy because he was the first to observe that there are configurations of circle and sphere packings in which all curvatures are integers [Sod37]; such a packing is called integral. The numbers illustrated in Figure 2b are some of the curvatures in that packing. By rescaling an integral packing, we may assume that the only integers dividing all of the curvatures are ±1; such a packing is called primitive. We focus our attention on bounded, integral, primitive Soddy sphere packings. In fact, all of the salient features persist if one restricts the discussion to the packing P 0 illustrated in Figure 2b; the reader may wish to do so henceforth. What numbers appear in Figure 2b? For a typical sphere S ∈ P, let κ(S) be its curvature, and let K = K (P) be the set of all curvatures in P, K := {n ∈ Z : ∃S ∈ P, κ(S) = n}. The bounding sphere is internally tangent to the others, so is given opposite orientation and negative curvature. The first few curvatures in P 0 satisfy the congruence condition (1.3); that is, there are local obstructions. In analogy with Hilbert's 11th problem on representations of numbers by quadratic forms, we say that n is represented if n ∈ K. Let A = A (P) be the set of admissible numbers, that is, numbers n that are everywhere locally represented in the sense that n ∈ K (mod q) for all q. (1.4) In our example, A is the set of all numbers satisfying (1.3). The set of admissible numbers for any packing P satisfies either (1.3) or n ≡ 0 or 2 (mod 3) (1.5); see Lemma 2.11.
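The generating procedure can be mimicked numerically. The relation behind it, quoted in Soddy's verse in Sect. 2 (the square of the sum of the five curvatures equals three times the sum of their squares), implies that the curvatures of the two solution spheres of (1.1) sum to the total curvature of the four given spheres, so each reflection replaces κj by (sum of the other four curvatures) − κj. The snippet below is a sketch only: it starts from the simple integral quintuple (0, 0, 1, 1, 1), two parallel planes and three unit spheres wedged between them in the spirit of Figure 1c, rather than from the packing P0 of Figure 2b, and it generates a few integral curvatures while checking the Soddy relation at every step.

```python
# Sketch: generate curvatures in a Soddy sphere packing by repeated reflection.
# Soddy's relation: (k1+...+k5)^2 = 3*(k1^2+...+k5^2); reflecting sphere j
# replaces k_j by (sum of the other four) - k_j, which preserves the relation.
# Seed quintuple (0, 0, 1, 1, 1): two parallel planes and three unit spheres,
# used here for illustration instead of the packing P0 of Figure 2b.
from itertools import product

def satisfies_soddy(q):
    return sum(q) ** 2 == 3 * sum(k * k for k in q)

def reflect(q, j):
    """Replace curvature j by the other solution of the Soddy relation."""
    new = list(q)
    new[j] = sum(q) - 2 * q[j]          # (sum of the other four) - q[j]
    return tuple(new)

seed = (0, 0, 1, 1, 1)
assert satisfies_soddy(seed)

curvatures = set(seed)
frontier = {seed}
for _ in range(4):                       # a few rounds of reflections
    frontier = {reflect(q, j) for q, j in product(frontier, range(5))}
    assert all(satisfies_soddy(q) for q in frontier)
    curvatures.update(k for q in frontier for k in q)

print(sorted(curvatures)[:20])           # first few integral curvatures produced
```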
The number of spheres in P with curvature at most N (counted with multiplicity) is asymptotically equal to a constant times N δ , where δ is the Hausdorff dimension of the closure of the packing [Kim11]. Soddy packings are rigid (one can be mapped to any other by a conformal transformation), and so δ is a universal constant; it is approximately (see [Boy73a,BdPP94]) equal to δ ≈ 2.4739 . . . . Hence one expects, on grounds of randomness, that the multiplicity of a given admissible curvature up to N is roughly N δ−1 , which should be quite large. In particular, every sufficiently large admissible should be represented. The main purpose of this paper is to confirm this claim. Theorem 1.6. Integral Soddy sphere packings satisfy a local-to-global principle: there is an effectively computable N 0 = N 0 (P) so that if n > N 0 and n ∈ A , then n ∈ K . This theorem is the analogue to Soddy sphere packings of the localglobal conjecture for integral Apollonian circle packings [GLM + 03, FS11, BF11, BK12]. Being in higher dimension puts more variables into play, making the problem much easier. For the proof, we study a certain infinite index subgroup Γ of the integral orthogonal group G preserving a particular quadratic form of signature (4, 1). This group, Γ, which we call the Soddy group, is isomorphic to the group of symmetries of P; extended to act on hyperbolic 4-space, the quotient is an infinite volume hyperbolic 4-fold. Passing to the spin double cover of the orientation preserving subgroup of G, we find that Γ contains an arithmetic subgroup Ξ acting with finite co-volume on hyperbolic 3-space. A consequence is that the set K of curvatures contains the "primitive" values of a shifted quaternary quadratic form (in fact an infinite family of such). Then using Kloosterman's method, the local-global theorem follows. This generalizes to sphere packings the following related result in 2-dimensions due to Sarnak [Sar07]: the curvatures in an integral, primitive Apollonian circle packing contain the primitive values of a shifted binary quadratic form. It is in this sense that we have more variables: instead of binary forms, sphere packings contain values of quaternary forms. Binary forms represent very few numbers, so despite some recent advances [BF11,BK12], the analogous problem in circle packings is wide open. In dimension n ≥ 4, one can start with a configuration of n tangent hyperspheres, repeating the above-described generating procedure. Unfortunately this does not give rise to a packing, as the hyperspheres eventually overlap [Boy73b]. Moreover there are no longer any such configurations in which all curvatures are integral (they can be S-integral, with the set S of localized primes depending on the dimension n); this follows from Gossett's [Gos37] generalization (also in verse) of Soddy's Theorem 2.3 to n-space. Preliminaries Let S = (S 1 , S 2 , S 3 , S 4 , S 5 ) be a configuration of five mutually tangent spheres, and let v 0 = v(S) = (κ 1 , κ 2 , κ 3 , κ 4 , κ 5 ) be the corresponding quintuple of curvatures, with κ j = κ(S j ). Any four tangent spheres, say S 1 , S 2 , S 3 , S 4 have six cospherical points of tangency, and determine a dual sphereS 5 passing through these points. Similarly, for j = 1, . . . , 4, letS j be the dual sphere orthogonal to all those in S except S j , and callS = (S 1 , . . . ,S 5 ) the dual configuration. Reflection throughS 5 fixes S 1 , S 2 , S 3 , S 4 , and sends S 5 to S 5 , the other sphere satisfying (1.1), see Figure 5a. 
The same holds for the other S j , and iteratively reflecting the original configuration through theS j ad infinitum yields the Soddy packing P = P(S) corresponding to S. Observe that unlike the Apollonian case, the dual spheres inS are not tangent, but intersect non-trivially, see Figure 3b. Extend the reflections through dual spheres to hyperbolic 4-space, replacing the action of the dual sphereS j by a reflection through a hyper(hemi)sphere s j whose equator (at y = 0) isS j (with j = 1, . . . , 5). We abuse notation, writing s j for both the hypersphere and the conformal map reflecting through s j . The group generated by these reflections acts discretely on H 4 . The A-orbit of any fixed base point in H 4 has a limit set in the boundary ∂H 4 ∼ = R 3 ∪ {∞}, which is the closure of the original sphere packing, a fundamental domain for this action being the exterior in H 4 of the five dual hyperspheres s j . Hence the quotient hyperbolic 4-fold A\H 4 is geometrically finite (with orbifold singularities corresponding to nontrivial intersections of the dual spheresS j ), and has infinite hyperbolic volume with respect to the hyperbolic measure in the coordinates (2.1). The group A is the symmetry group of all conformal transformations fixing P. To spy out spherical affairs / An oscular surveyor / Might find the task laborious, / The sphere is much the gayer, / And now besides the pair of pairs / A fifth sphere in the kissing shares. / Yet, signs and zero as before, / For each to kiss the other four / The square of the sum of all five bends / Is thrice the sum of their squares. If κ 1 , . . . , κ 4 are given, it then follows from (2.4) that the variable κ 5 satisfies a quadratic equation, and hence there are two solutions. This is an algebraic proof of (1.1). Writing κ 5 and κ 5 for the two solutions, it is elementary from (2.4) that In other words, if the quintuple (κ 1 , κ 2 , κ 3 , κ 4 , κ 5 ) is given, then one obtains the quintuple with κ 5 replaced by κ 5 via a linear action: This is an algebraic realization of the geometric action ofS 5 (or s 5 ) on a quintuple. Call the above 5 × 5 matrix M 5 . One can similarly replace other κ j by κ j keeping the four complementary curvatures fixed, via the matrices The inner product ·, · in (2.9) is the standard one on R 5 . This explains the integrality of all curvatures in Figure 2b: the group Γ has only integer matrices, so if the initial quintuple v 0 (or for that matter any curvatures of five mutually tangent spheres in P) is integral, then the curvatures in P are all integers (as first observed by Soddy [Sod37]). (2.10) Let Γ, isomorphic to The orbit under Γ, reduced mod 3 is then elementarily computed. In general we have the following Lemma 2.11. For K the set of curvatures of an integral, primitive Soddy packing P, there is always a local obstruction mod 3, either of the form (1.3) or (1.5). In particular, there is an ε = ε(P) ∈ {1, 2} so that, for any quintuple v in the cone (2.4) over Z, two entries are ≡ 0(mod 3) and three entries are ≡ ε(mod 3). Note that we are not (yet) claiming that these are the only local obstructions; this will follow from a proof of the local-to-global theorem. Proof. One may first attempt to understand the cone (2.4) over Z/3Z, but the form Q in (2.5) reduced mod 3 is highly degenerate. So instead consider the cone over Z/9Z. Disregarding the origin (since the packing is assumed to be primitive), there are 140 vectors mod 9, not counting permutations. 
Reducing these mod 3 leaves only the two vectors (0, 0, ε, ε, ε), ε ∈ {1, 2}, and their permutations. The action of Γ(mod 3) on these is trivial: each vector is fixed. This is all verified by direct computation. In this subsection, we prove that the Soddy group Γ, while being infinite index in O Q ∼ = O(4, 1), contains a subgroup which is arithmetic in SO(3, 1), or alternatively, in its spin double cover SL 2 (C). The method is a generalization of Sarnak's in [Sar07]. Recall the configuration S = (S 1 , . . . , S 5 ) of five mutually tangent spheres and the group A in (2.2) of reflections through spheres in the configurationS dual to S. Let A 1 = s 2 , ..., s 5 be the subgroup of A which fixes the sphere S 1 in S. It acts discontinuously on the interior of S 1 , which we now consider as the ball model for hyperbolic 3-space H 3 . A fundamental domain for the quotient A 1 \H 3 is the curvilinear tetrahedron interior to S 1 and exterior to the dual spheresS 2 , . . . ,S 5 , see Figure 4. In particular, it has finite volume. To realize this geometric action algebraically, let be the corresponding subgroup of Γ, where the M j are given in (2.6). We immediately pass to its orientation preserving subgroup, setting Then Ξ is generated by Then for j = 1, 2, 3, the conjugates are given bỹ Proof. Of course this can be verified by direct computation. But we elucidate the role of J as follows. The matrix J is then simply the change of variables matrix from v to a. The convenience of this conjugation is made apparent in the following Lemma 3.10. The quadratic form F in (3.9) has signature (3, 1). Its special orthogonal group SO F has spin double cover isomorphic to SL 2 (C). There is a homomorphism ρ : SL 2 (C) → SO F given explicitly (for our purposes embedded in GL 5 ) by mapping (3.12) The preimages under ρ of the matricesξ 1 ,ξ 2 ,ξ 3 in (3.6) are ±t 1 , ±t 2 , ±t 3 , respectively, where: (3.13) Here Proof. The signature of F is computed directly, and its spin group being SL 2 (C) is a general fact in the theory of quadratic forms, see e.g. [Cas78]. We construct ρ explicitly as follows. Observe that the matrix is Hermitian and has determinant −F (a). Then for g ∈ SL 2 (C), and then drawing another Dirichlet domain using these shows the equality of fundamental domains. Hence the groups are also equal. In hindsight, one can now directly verify that words in the generators (3.15) of Λ give the matrices in (3.19). The point is that Λ is now seen to be an arithmetic group, so its elements can be parametrized, giving an injection of affine space into the otherwise intractable thin Soddy group Γ. Then n 1 := n + κ 1 also has (n 1 , κ 1 ) = 1, and n 1 = n + κ 1 ≡ 0(mod 3). That is, n 1 is coprime to the discriminant (3.25). By (3.28), n 1 moreover satisfies the local condition (3.27). Kloosterman's method for quaternary quadratic forms (see e.g. [IK04,Theorem 20.9]) shows that every sufficiently large (with an effective bound) number which is locally represented by the form f v and coprime to its discriminant is also globally represented by f v . Hence if n 1 is sufficiently large, it is represented by f v . But then n = n 1 − κ 1 is represented by F v in (3.23), and thus appears in K by Corollary 3.22. Since there is nothing special about κ 1 , Theorem 1.6 follows immediately from Proposition 3.26 by primitivity.
3,592.4
2012-08-27T00:00:00.000
[ "Mathematics" ]
New 9-Hydroxybriarane Diterpenoids from a Gorgonian Coral Briareum sp. (Briareidae) Six new 9-hydroxybriarane diterpenoids, briarenolides ZI–ZVI (1–6), were isolated from a gorgonian coral Briareum sp. The structures of briaranes 1–6 were elucidated by spectroscopic methods and by comparison of their spectroscopic data with those of related analogues. Briarenolides ZII (2) and ZVI (6) were found to significantly inhibit the expression of the pro-inflammatory inducible nitric oxide synthase (iNOS) protein of lipopolysaccharide (LPS)-stimulated RAW264.7 macrophage cells. Results and Discussion The molecular formula of a new briarane, briarenolide ZI (1), was determined as C 24 H 33 ClO 11 (eight degrees of unsaturation) by high-resolution electrospray ionization mass spectrum (HRESIMS) at m/z 555.16025 (calcd. for C 24 H 33 ClO 11 + Na, 555.16036). The IR of 1 showed absorptions at 1715, 1769 and 3382 cm´1, which were consistent with the presence of ester, γ-lactone and hydroxy groups. The 13 C NMR spectrum (Table 1) suggested that 1 possessed an exocyclic carbon-carbon double bond based on signals at δ C 138.6 (C-5) and 116.9 (CH 2 -16), which was confirmed by the 1 H NMR spectrum of 1 (Table 1), which showed two olefin proton signals at δ H 5.88 (1H, dd, J = 2.4, 1.2 Hz, H-16a) and 5.64 (1H,dd,J = 2.4,1.2 Hz,. Three carbonyl resonances at δ C 175. 3 (C-19), 173.4 and 169.3 (2ˆester carbonyls) revealed the presence of one γ-lactone and two ester groups in 1; two acetyl methyls (δ H 2.06, s, 2ˆ3H) were also observed. According to the overall unsaturation data, it was concluded that 1 was a diterpenoid molecule possessing four rings. -11 and H-17, H 3 -18/C-19) permitted elucidation of the carbon skeleton (Table 1). HMBC correlations between H 2 -16/C-4, -5 and -6 indicated an exocyclic double bond at C-5, which was further confirmed by the allylic coupling between H 2 -16/H-6. HMBC correlations between H 3 -15/C-1, -2, -10 and -14 and H-2 and H-10/C-15, revealed that the ring junction C-15 methyl group was located at C-1. Furthermore, an HMBC correlation between H-2 (δ H 5.09) and the acetate carbonyl (δ C 173.4) revealed the presence of an acetate ester at C-2; and an HMBC correlation between a hydroxy proton (δ H 6.50) and C-4 oxygenated quaternary carbon suggested the presence of a hydroxy group at C-4. The C-4 hydroxy group was determined to be part of a hemiketal constellation on the basis of a characteristic carbon signal at δ C 96.7. 1 H-1 H COSY correlations between OH-9/H-9 and OH-12/H-12 suggested the presence of the hydroxy groups at C-9 and C-12. A carbon signal at δ C 81.8 (C-8) indicated 3 J-coupling with protons at δ H 2.23 (H-10), 1.33 (H 3 -18) and 2.73 (OH-9). Therefore, the remaining hydroxy and acetoxy groups had to be positioned at C-11 and C-14, respectively, as indicated by analysis of 1 H-1 H COSY correlations and characteristic NMR signal analysis. The intensity of the sodiated molecules [M + 2 + Na] + isotope peak observed in the ESIMS and HRESIMS spectra ([M + Na] + :[M + 2 + Na] + = 3:1) was evidence of the presence of one chlorine atom in 1. The methine unit at δ C 56.2 was more shielded than expected for an oxygenated carbon and was correlated to the methine proton at δ H 5.54 (H-6) in the heteronuclear multiple quantum coherence (HMQC) spectrum, and this proton signal was 3 J-correlated with H-7 (δ H 4.73) in the 1 H-1 H COSY spectrum, which proved that a chlorine atom was attached at C-6. 
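The roughly 3:1 intensity ratio of the [M + Na]+ and [M + 2 + Na]+ peaks cited above as evidence for a single chlorine follows directly from the natural abundances of 35Cl and 37Cl. The short sketch below is purely illustrative and not part of the published workflow: it computes the expected ratio for molecules carrying one or two chlorine atoms, ignoring the small 13C and 18O contributions to the M + 2 peak.

```python
# Expected M : M+2 intensity ratio from chlorine isotopes alone.
# Natural abundances: 35Cl ~ 75.76 %, 37Cl ~ 24.24 % (13C/18O contributions ignored).
from math import comb

CL35, CL37 = 0.7576, 0.2424

def m_plus_2_ratio(n_cl: int) -> float:
    """Ratio of the all-35Cl peak to the single-37Cl (M+2) peak for n_cl chlorines."""
    p_m = CL35 ** n_cl                                  # all chlorines are 35Cl
    p_m2 = comb(n_cl, 1) * CL35 ** (n_cl - 1) * CL37    # exactly one 37Cl
    return p_m / p_m2

for n in (1, 2):
    print(f"{n} Cl: M : M+2 ≈ {m_plus_2_ratio(n):.2f} : 1")
# One chlorine gives ~3.1 : 1, consistent with the ~3:1 pattern reported for 1.
```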
These data, together with the HMBC correlations between H-17/C-9, -18 and -19 and H 3 -18/C-8, -17 and -19, established the molecular framework of 1. The relative configuration of 1 was elucidated on the basis of a nuclear Overhauser effect spectroscopy (NOESY) experiment and by vicinal 1 H-1 H proton coupling constant analysis. Most naturally-occurring briarane natural products have Me-15 in the β-orientation and H-10 in the α-orientation [2][3][4][5][6], which were verified by the absence of a correlation between these two groups. In the NOESY experiment of 1 (Figure 2), H-10 correlated with H-2, H-9 and H 3 -20, indicating that these protons were situated on the same face; they were assigned as α protons, as the C-15 methyl was β-oriented at C-1. The oxymethine proton H-14 was found to exhibit a response with H 3 -15, but not with H-10, revealing that H-14 was β-oriented. H-12 correlated with each of the C-13 methylene protons and H 3 -20, but not with H-10, indicating that H-12 was β-oriented and was positioned in the equatorial direction in the cyclohexane ring by modeling analysis. H-17 exhibited correlations with H-9 and H-7 and was also found to be reasonably close to H-9 and H-7 by modeling analysis; thus, H-17 could be placed on the β face in 1, and H-7 was β-oriented. One of the C-3 methylene protons (δ H 3.73) displayed a correlation with H 3 -15; therefore, it was assigned as the H-3β proton, and the other was assigned as H-3α (δ H 1.46). H-6 displayed correlations with H-3β and H-7, which confirmed that this proton was in the β-orientation, and the oxygen bridge between C-4 and C-8 was found to be α-oriented by modeling analysis. Based on the aforementioned results, the structure, including the relative configuration, of 1 was elucidated unambiguously. Briarenolide ZII (2) was isolated as a white powder and had a molecular formula of C28H38O10 on the basis of HRESIMS at m/z 557.23552 (calcd. for C28H38O10 + Na, 557.23572). Carbonyl resonances in the 13 C NMR spectrum of 2 (Table 2) at δC 173.0, 170.7, 170.4 and 169.9 demonstrated the presence of a γ-lactone and three other esters in 2. It was found that the NMR signals of 2 were similar to those of a known briarane analogue, excavatolide F (7) [7] (Figure 1), except that the signals corresponding to the 9-acetoxy group in 7 were replaced by signals for a hydroxy group in 2. The correlations from a NOESY experiment of 2 also revealed that the stereochemistry of this metabolite was identical to that of 7. Thus, briarenolide ZII (2) was found to be the 9-O-deacetyl derivative of 7.
Briarenolide ZIII (3) had a molecular formula C 24 H 32 O 10 as deduced from HRESIMS at m/z 503.18858 (calcd. for C 24 H 32 O 10 + Na, 503.18877). The IR spectrum of 3 showed three bands at 3444, 1779 and 1732 cm −1 , which were in agreement with the presence of hydroxy, γ-lactone and ester groups. Carbonyl resonances in the 13 C NMR spectrum of 3 at δ C 171.8, 170.7 and 170.6 revealed the presence of a γ-lactone and two esters (Table 3). Both esters were identified as acetates by the presence of two acetyl methyl resonances in the 1 H (δ H 2.01, 1.98, each 3H, s) and 13 C (δ C 21.1, 21.1) NMR spectra (Table 3). It was found that the NMR data of 3 were similar to those of a known briarane analogue, 2β-acetoxy-2-(debutyryloxy)-stecholide E (8) [1] (Figure 1), except that the signals corresponding to the 4-hydroxy group in 3 were not present in 8. A correlation from the NOESY signals of 3 showed that H-4 correlated with H-2, but not with H 3 -15, indicating that the hydroxy group at C-4 was β-oriented. The results of 1 H-1 H COSY and HMBC correlations fully supported the positions of functional groups, and hence, briarenolide ZIII (3) was found to be the 4β-hydroxy derivative of 8. Briarenolide ZIV (4) was obtained as a white powder, and the molecular formula of 4 was determined to be C 28 H 40 O 11 (nine degrees of unsaturation) by HRESIMS at m/z 575.24645 (calcd. for C 28 H 40 O 11 + Na, 575.24628). The IR spectrum of 4 showed three bands at 3444, 1778 and 1732 cm −1 , consistent with the presence of hydroxy, γ-lactone and ester carbonyl groups. Carbonyl resonances in the 13 C NMR spectrum of 4 showed signals at δ C 173.9, 173.2, 170.8 and 170.4, which revealed the presence of a γ-lactone and three esters in 4 (Table 4), of which two of the esters were identified as acetates based on the presence of two acetyl methyl resonances in the 1 H NMR spectrum of 4 at δ H 1.97 (2 × 3H, s) (Table 4). The other ester was found to be an n-butyrate group based on 1 H NMR studies, which revealed seven contiguous protons (δ H 0.94, 3H, t, J = 7.2 Hz; 1.65, 2H, sextet, J = 7.2 Hz; 2.23, 2H, t, J = 7.2 Hz). According to the 1 H and 13 C NMR spectra, 4 was found to have a γ-lactone moiety (δ C 173.9, C-19) and a trisubstituted olefin (δ C 145.4, C-5; 121.6, CH-6; δ H 5.32, 1H, d, J = 8.8 Hz, H-6). The presence of a tetrasubstituted epoxide that contained a methyl substituent was established based on the signals of two oxygenated quaternary carbons at δ C 71.8 (C-8) and 63.7 (C-17) and confirmed by the proton signals of a methyl singlet at δ H 1.51 (3H, s, H 3 -18). Thus, from the NMR data, five degrees of unsaturation were accounted for, and 4 was identified as a tetracyclic compound. From the 1 H-1 H COSY spectrum of 4 (Table 4), three different structural units, including C-2/-3/-4, C-6/-7 and C-9/-10/-11/-12/-13/-14, were identified.
From these data and the HMBC correlation results (Table 4), the connectivity from C-1 to C-14 could be established. A methyl attached at C-5 was confirmed by an allylic coupling between H 3 -16/H-6 and by the HMBC correlations between H 3 -16/C-4, -5 and -6. The C-15 and C-20 methyl groups were identified as being positioned at C-1 and C-11 from the HMBC correlations between H 3 -15/C-1, -2, -10, -14 and H 3 -20/C-10, -11, -12, respectively. Furthermore, the acetate esters positioned at C-2 and C-14 were established by the HMBC correlations between δ H 4.97 (H-2) and 4.70 (H-14) and the acetate carbonyls at δ C 170.4 and 170.8, respectively. The location of an n-butyrate group in 4 was verified by an HMBC correlation between H-12 (δ H 4.83) and the n-butyrate carbonyl carbon (δ C 173.2) (Table 4). These data, together with the HMBC correlations between H 3 -18/C-8, -17 and -19, established the main molecular framework of 4. The NMR data of 4 were found to be similar to those of a known briarane, excavatolide Z (9) [8] (Figure 1), except that the signals corresponding to the 4-hydroxy group in 4 were not present in 9, and an 11β-hydroxy group was found in 9. The correlations from NOESY signals of 4 (Figure 3) also showed that the relative configurations of most chiral centers of 4 were similar to those of 9. H-10 exhibited interactions with H-2 and H-11, and H-2 correlated with H-4, indicating that the hydroxy group at C-4 and the methyl group at C-11 were β-oriented; additionally, briarenolide ZIV (4) was found to be the 4β-hydroxy-11-dehydroxy-11β-methyl derivative of 9. Briarenolide ZV (5) was obtained as a white powder and had the molecular formula C 24 H 30 O 10 , as determined by HRESIMS at m/z 505.20460 (calcd. for C 24 H 30 O 10 + Na, 505.20442) (10 degrees of unsaturation). The IR spectrum of 5 showed bands at 3445, 1770 and 1732 cm −1 , consistent with the presence of hydroxy, γ-lactone and ester carbonyl groups. Comparison of the 1 H and distortionless enhancement by polarization transfer (DEPT) spectra with the molecular formula revealed that there must be three exchangeable protons, requiring the presence of three hydroxy groups. In addition, it was found that the spectral data (IR, 1 H and 13 C NMR) of 5 (Table 5) were similar to those of a known briarane, excavatolide Z (9) [8] (Figure 1), except that 9 exhibited signals representing an n-butyrate substitution, which were replaced by a hydroxy group in 5. The results of 1 H-1 H COSY and HMBC correlations fully supported the positions of functional groups, and hence, briarenolide ZV (5) was found to be the 12-O-debutyryl derivative of 9. The new briarane, briarenolide ZVI (6), had a molecular formula of C 26 H 36 O 11 as determined by HRESIMS at m/z 547.21473 (calcd. for C 26 H 36 O 11 + Na, 547.21498). Thus, nine degrees of unsaturation were determined for the molecule of 6.
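The molecular-formula assignments above rest on matching the measured sodiated-adduct mass to the value calculated from monoisotopic atomic masses, together with a ring-and-double-bond (degrees of unsaturation) count. The sketch below is an independent cross-check rather than part of the published workflow: it reproduces the calculated [M + Na]+ mass and the nine degrees of unsaturation for briarenolide ZVI (6), C26H36O11.

```python
# Cross-check of the HRESIMS assignment for briarenolide ZVI (6), C26H36O11.
# Monoisotopic masses (u); the electron mass is subtracted for the [M+Na]+ cation.
MASS = {"C": 12.0, "H": 1.00782503, "O": 15.99491462, "Na": 22.98976928}
ELECTRON = 0.00054858

def sodiated_mass(formula: dict) -> float:
    neutral = sum(MASS[el] * n for el, n in formula.items())
    return neutral + MASS["Na"] - ELECTRON          # [M + Na]+

def degrees_of_unsaturation(c: int, h: int, halogens: int = 0, n: int = 0) -> float:
    return c - (h + halogens) / 2 + n / 2 + 1

zvi = {"C": 26, "H": 36, "O": 11}
print(f"calcd [M+Na]+ = {sodiated_mass(zvi):.5f}")                           # ~547.21498 (found 547.21473)
print(f"degrees of unsaturation = {degrees_of_unsaturation(26, 36):.0f}")    # 9
```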
In addition, the spectral data (IR, 1 H and 13 C NMR) (Table 6) of 6 were found to be similar to those of a known briarane, excavatolide E (10) [9] (Figure 1). However, the NMR spectra revealed that the signals representing the C-4 methylene group in 10 were replaced by those of an additional acetoxy group. In the NOESY experiment of 6, H-10 gives correlations to H-2, H-9, H-11 and H-12, but not to H 3 -15 and H 3 -20, and H-2 was found to show a correlation with H-4, indicating that these protons (H-2, H-4, H-9, H-10, H-11 and H-12) are located on the same face of the molecule and assigned as α-protons, since the C-15 and C-20 methyls are the β-substituents at C-1 and C-11, respectively. The signal of H 3 -20 showed a correlation with H 3 -18, indicating that H 3 -18 and the 8,17-epoxide group were β- and α-oriented, respectively, in the γ-lactone ring in 6. H-4 correlated with H-2, but not with H-7 and H 3 -15, indicating that H-7 was β-oriented. H-14 was found to exhibit nuclear Overhauser effect (NOE) responses with H-2 and H 3 -15, but not with H-10, revealing the β-orientation of this proton. Thus, based on the above findings, Compound 6 was found to be the 4β-acetoxy derivative of 10, with a structure as described by Formula 6. Furthermore, the chemical shifts for H 3 -18 in briaranes 4, 5 and 6 were found to appear at δ H 1.51, 1.68 and 1.57, respectively, indicating that the 11β-hydroxy group in 5 led to a downfield chemical shift for H 3 -18. In an in vitro anti-inflammatory activity assay, Western blot analysis was used to evaluate the upregulation of the pro-inflammatory cyclooxygenase 2 (COX-2) and inducible nitric oxide synthase (iNOS) protein expressions in lipopolysaccharide (LPS)-stimulated RAW264.7 macrophage cells. At a concentration of 10 µM, briarenolides ZII (2) and ZVI (6) were found to significantly reduce the levels of iNOS to 47.2% and 55.7%, respectively, in comparison to the control cells stimulated with LPS only (Figure 4 and Table 7). By using trypan blue staining, it was observed that briarenolides ZI-ZVI (1-6) did not induce significant cytotoxicity in RAW264.7 macrophage cells. The relative intensity of the LPS-stimulated group was taken to be 100%.
Band intensities were quantified by densitometry and are indicated as the percentage change relative to that of the LPS-stimulated group. Briarenolides ZII (2) and ZVI (6) and DEX significantly inhibited LPS-induced iNOS protein expression (<60%) in macrophages. The experiments were repeated three times (* p < 0.05, significantly different from the LPS-stimulated group). General Experimental Procedures Melting points were determined using a Fargo apparatus (Panchum Scientific Corp., Kaohsiung, Taiwan), and the values were uncorrected. Optical rotation values were measured with a Jasco P-1010 digital polarimeter (Japan Spectroscopic Corporation, Tokyo, Japan). IR spectra were obtained with an FT-IR spectrophotometer (Digilab FTS 1000; Varian Inc., Palo Alto, CA, USA); peaks are reported in cm −1 . NMR spectra were recorded on a 400-MHz Varian Mercury Plus NMR spectrometer (Varian Inc.) using the residual CHCl 3 signal (δ H 7.26 ppm) as the internal standard for 1 H NMR and CDCl 3 (δ C 77.1 ppm) for 13 C NMR. Coupling constants (J) are given in Hz. ESIMS and HRESIMS were recorded using a Bruker 7 Tesla solariX FTMS system (Bruker, Bremen, Germany). Column chromatography was performed using 230-400 mesh silica gel (Merck, Darmstadt, Germany). TLC was carried out on precoated 0.25 mm-thick Kieselgel 60 F 254 (Merck); spots were visualized by spraying with 10% H 2 SO 4 solution followed by heating. Normal-phase HPLC (NP-HPLC) was performed using a system equipped with a Hitachi L-7110 pump (Hitachi Ltd., Tokyo, Japan), a Hitachi L-7455 photodiode array detector and an injection port (7725; Rheodyne LLC, Rohnert Park, CA, USA). A semi-preparative normal-phase LiChrospher column (Hibar 250 mm × 10 mm, Si 60, 5 µm, Merck) was used for HPLC. Reverse-phase HPLC (RP-HPLC) was performed with a system equipped with a Hitachi L-7100 pump, a Hitachi L-2455 photodiode array detector, a Rheodyne 7725 injection port and a 25 cm × 10 mm Polaris 5 C-18-A column (5 µm; Varian Inc., Palo Alto, CA, USA). Animal Material Specimens of Briareum sp. were collected by hand by scuba divers in an area off the coast of southern Taiwan in July 2011 and stored in a freezer. A voucher specimen was deposited in the National Museum of Marine Biology & Aquarium (NMMBA-TW-SC-2011-77) [10][11][12][13][14]. Conclusions Gorgonian corals belonging to the genus Briareum are proven to be rich sources of briarane-type compounds. Briarenolides ZII (2) and ZVI (6) are potentially anti-inflammatory compounds for future development [18,19]. This interesting species was transplanted to culture tanks located in the NMMBA for the extraction of natural material, to establish a stable supply of bioactive substances.
4,809
2016-01-01T00:00:00.000
[ "Chemistry", "Environmental Science" ]
A Functional Central Limit Theorem for a Class of Interacting Markov Chain Monte Carlo Methods We present a functional central limit theorem for a new class of interacting Markov chain Monte Carlo algorithms. These stochastic algorithms have been recently introduced to solve non-linear measure-valued equations. We provide an original theoretical analysis based on semigroup tech-niques on distribution spaces and fluctuation theorems for self-interacting random fields. Addi-tionally we also present a series of sharp mean error bounds in terms of the semigroup associated with the first order expansion of the limiting measure-valued process. We illustrate our results in the context of Feynman-Kac semigroups. Nonlinear Measure-Valued Equations and Self Interacting Markov chains Let (S (l) ) l≥0 be a sequence of measurable spaces endowed with some σ-fields ( (l) ) l≥0 , and for every l ≥ 0, denote by (S (l) ) the set of all probability measures on S (l) . Let also (π (l) ) l≥0 be a sequence of probability measures on S (l) satisfying a nonlinear equation of the following form Φ l (π (l−1) ) = π (l) (1.1) for some mappings Φ l : (S (l−1) ) → (S (l) ). Being able to solve these equations numerically has numerous applications in nonlinear filtering, global optimization, Bayesian statistics and physics as it allow us to approximate any sequence of fixed "target" probability distributions (π (l) ) l≥0 . For example, in a nonlinear filtering framework π (l) corresponds to the posterior distribution of the state of an unobserved dynamic model at time l given the observations collected from time 0 to time l. In an optimization framework, π (l) could correspond to a sequence of annealed versions of a distribution π that we are interested in maximizing. When Φ l is a Feynman-Kac transformation [2], there has been much work on mean field interacting particle interpretations of measure-valued equations of the form (1.1); see for example [2; 6]. An alternative iterative approach to solve these equations named Interacting Markov chain Monte Carlo methods (i-MCMC) has recently been proposed in [1; 3]. Let us give a rough description of i-MCMC methods. At level l = 0, we run a standard MCMC chain say X (0) n with a prescribed simple target distribution π (0) . Then, we use the occupation measure of the chain X (0) to design a second MCMC chain say X (1) n with a more complex target limiting measure π (1) . These two mechanisms are combined online so that the pair process interacts with the occupation measure of the system at each time. More formally, the elementary transition of the chain X (1) at a current time n depends on the occupation measure of the chain X (0) p from the origin p = 0, up to the current time p = n. This strategy can be extended to design a series of MCMC samplers (X (l) ) l≥0 targeting a prescribed sequence of distributions (π (l) ) l≥0 of increasing complexity. These i-MCMC samplers are self interacting Markov chains as the Markov chain X [m] n = (X (l) n ) 0≤l≤m associated with a fixed series of m levels, evolves with elementary transitions which depend on the occupation measure of the whole system from the origin up to the current time. The theoretical analysis of these algorithms is more complex than the one of mean field interacting particle algorithms developed in [2]. 
In a pair of recent articles [1; 3], we initiated the theoretical analysis of i-MCMC algorithms and we provided a variety of convergence results including exponential estimates and an uniform convergence theorem with respect to the index l. However, we did not present any central limit theorem. The main purpose of this paper is to provide the fluctuation analysis of the occupation measures of a specific class of i-MCMC algorithms around its limiting values π (l) , as the time parameter n tends to infinity. We also make no restriction on the state spaces and we illustrate these models in the context of Feynman-Kac semigroups. The rest of the paper is organized as follows: the i-MCMC methodology is described in section 1.2. For the convenience of the reader, we have collected in the short preliminary section 1.3 some of the main notation and conventions used in the paper. We also recall some regularity properties of integral operators used in this article. Section 2 is devoted to the two main results of this work. We provide a multivariate functional central limit theorem together with some sharp non asymptotic mean error bounds in terms of the semigroup associated with a first order expansion of the mappings Φ l . To motivate our approach, we illustrate in section 3 these results in the context of Feynman-Kac semigroups. The last four sections are essentially concerned with the proof of the two theorems presented in section 2. Our analysis is based on two main ingredients; namely, a multivariate functional central limit theorem for local interaction fields and a multilevel expansion of the occupation measures of the i-MCMC model at a given level l in terms of the occupation measures at the levels with lower indexes. These two results are presented respectively in section 4 and section 5. The final section 6 and section 7 are devoted respectively to the proofs of the functional central limit theorem and the sharp non asymptotic mean error bounds theorem. In appendix, we have collected the proofs of a series of technical lemmas. Description of the i-MCMC methodology To introduce the i-MCMC methodology, we first recall how the traditional Metropolis-Hastings algorithm proceeds. This algorithm samples a Markov chain with the following transition kernel where P(x, d y) is a Markov transition on some measurable space S, and G : S 2 → + . Let π be a probability measure on S, and let (π × P) 1 and (π × P) 2 be the pair of measures on (S × S) defined by (π × P) 1 (d(x, y)) = π(d x) P(x, d y) = (π × P) 2 (d( y, x)). We further assume that (π × P) 2 ≪ (π × P) 1 and the corresponding Radon-Nykodim ratio is given by In this case, π is an invariant measure of the time homogeneous Markov chain with transition M . One well-known difficulty with this algorithm is the choice of P. Next, assume π = Φ(η) is related to an auxiliary probability measure η on some possibly different measurable space S ′ with some mapping Φ : We also suppose that there exists an MCMC algorithm on S ′ whose occupation measure, say η n , approximates the measure η as the time parameter n increases. At each current time, say n, we would like to choose a Metropolis-Hastings transition M n associated to a pair (G n , P n ) with invariant measure Φ(η n ). The rationale behind this is that the resulting chain will behave asymptotically as a Markov chain with invariant distribution Φ(η) = π, as soon as η n is a good approximation of η. The choice of (G n , P n ) is far from being unique and we present a few solutions in [3]. 
A natural and simple one consists of selecting P n (x, d y) = Φ(η n )(d y) which ensures that G n is equal to one. This is the class of i-MCMC algorithms studied in this paper. Iterating this strategy, we design a sequence of i-MCMC chains X (k) = (X (k) n ) n≥0 on every level set S (k) whose occupation measures η (k) n approximate the solution π (k) of the system (1.1) for every k ≥ 0. To describe more precisely the resulting algorithm, we need a few notation. We introduce a sequence of initial probability measures ν k on S (k) , with k ≥ 0, and we denote by the sequence of occupation measures of the chain X (k) at level k from the origin up to time n, where δ x stands for the Dirac measure at a given point x ∈ S (k) . The i-MCMC algorithm proposed here is defined inductively on the level parameter k. First, we shall suppose that ν 0 = π (0) and we let X (0) = (X (0) n ) n≥0 be a collection of independent random variables with common distribution ν 0 = π (0) . For every k ≥ 1, and given a realization of the chain X (k) , the (k + 1)-th level chain X (k+1) = (X (k+1) n ) n≥0 is a collection of independent random variables with distributions given, for any n ≥ 0, by the following formula where we use the convention that we have Moreover, d(x 0 , . . . , x n ) = d x 0 × . . . × d x n stands for an infinitesimal neighborhood of a generic path sequence (x 0 , . . . , x n ) ∈ (S (k+1) ) (n+1) . Notation and conventions We denote respectively by (E), 0 (E), (E), and (E), the set of all finite signed measures on some measurable space (E, ), the convex subset of measures with null mass, the set of all probability measures, and the Banach space of all bounded measurable functions f on E. We equip (E) with the uniform norm f = sup x∈E | f (x)|. We also denote by 1 (E) ⊂ (E) the unit ball of functions f ∈ (E) with f ≤ 1, and by Osc 1 (E), the convex set of -measurable functions f with oscillations less than one, which means that We denote by the Lebesgue integral of a function f ∈ (E) with respect to µ ∈ (E). We slightly abuse the notation, and we denote by µ(A) = µ(1 A ) the measure of a measurable subset A ∈ . We recall that a bounded integral operator M from a measurable space (E, ) into an auxiliary measurable space (F, ), is an operator f → M ( f ) from (F ) into (E) such that the functions are -measurable and bounded for any f ∈ (F ). A bounded integral operator M from a measurable space (E, ) into an auxiliary measurable space (F, ) also generates a dual operator µ → µM from (E) into (F ) defined by (µM )( f ) = µ(M ( f )). We denote by the norm of the operator f → M ( f ) and we equip the Banach space (E) with the corresponding total variation norm µ = sup We also denote by β(M ) the Dobrushin coefficient of a bounded integral operator M defined as When M has a constant mass M (1)(x) = M (1)( y), for any (x, y) ∈ E, the operator µ → µM maps 0 (E) into 0 (F ), and β(M ) coincides with the norm of this operator. We equip the sets of sequence of distributions (E) with the uniform total variation distance defined for all η = We extend a given bounded integral operator µ ∈ (E) → µM ∈ (F ) into a mapping Sometimes, we slightly abuse the notation and we denote by ν instead of (ν) n≥0 the constant distribution flow equal to a given measure ν ∈ (E). Finally we denote by c(k) with k ∈ , a generic constant whose values may change from line to line, but they only depend on the parameter k. For any pair of integers 0 ≤ m ≤ n, we denote by (n) m := n!/(n − m)! the number of one to one mappings from {1, . . 
. , m} into {1, . . . , n}. Finally, we shall use the usual conventions = 0 and = 1. Fluctuations theorems This section is mainly concerned with the statement of the two main theorems presented in this article. These results are based on a first order weak regularity condition on the mappings (Φ l ) appearing in (1.1). We assume that the mappings Φ l : (S (l−1) ) → (S (l) ) satisfy the following first order decomposition for any l ≥ 1 In the formula above, l,η is a collection of bounded integral operators from S (l−1) into S (l) , indexed by the set of probability measures η ∈ (S (l−1) ) and Ξ l (µ, η) is a collection of remainder signed measures on S (l) indexed by the set of probability measures µ, η ∈ (S (l−1) ). We also require that for some integral operator Ξ l from (S (l) ) into the set 2 (S (l−1) ) of all tensor product functions This weak regularity condition is satisfied in a variety of models including Feynman-Kac semigroups discussed in the next section. We also mention that, under weaker assumptions on the mappings Φ l , we already proved in [1, Theorem 1] and [3, Theorem 1.1.] that for every l ≥ 0 and any function f ∈ (S (l) ), we have the almost sure convergence result The article [3, Theorem 1.1.] also provides exponential inequalities and uniform estimates with respect to the index l. In order to describe precisely the fluctuations of the empirical measures η (l) n around their limiting values π (l) , we need to introduce some notation. We denote by D l the first integral operator l,π (l−1) associated with the target measure π (l−1) , and we set D k,l with 0 ≤ k ≤ l for the corresponding semigroup. More formally, we have D l = l,π (l−1) and for all 1 ≤ k ≤ l, For k > l, we use the convention D k,l = I d , the identity operator. The reader may find in section 3 an explicit functional representation of these semigroups in the context of Feynman-Kac models. We now present the functional central limit theorem describing the fluctuations of the i-MCMC process around the solution of equation n which are i.i.d. samples from π (0) when k = 0 and generated according to (1.4) otherwise. Theorem 2.1. For every k ≥ 0, the sequence of random fields (U (k) n ) n≥0 on (S (k) ) defined by U (k) n := n η (k) n − π (k) converges in law, as n tends to infinity and in the sense of finite dimensional distributions, to a sequence of Gaussian random fields U (k) on (S (k) ) given by the following formula where V (l) is a collection of independent and centered Gaussian fields with a covariance function given for any ( f , g) ∈ (S (l) ) 2 by We now recall that if Z is a Gaussian (0, 1) random variable, then for any m ≥ 1, where Γ stands for the standard Gamma function. Consequently, in view of Theorem 2.1, the reader should be convinced that the estimates presented in the next theorem are sharp with respect to the parameters m and k. Theorem 2.2. For any k, n ≥ 0, any f ∈ Osc 1 (S (k) ), and any integer m ≥ 1, we have the non asymptotic mean error bounds with the collection of constants a(m) given by a(2m) 2m = (2m) m and a(2m and for some constants b(m) (respectively c(k)) whose values only depend on the parameter m (respectively k). Feynman-Kac semigroups In this section, we shall illustrate the fluctuation results presented in this paper with the Feynman-Kac mappings (Φ l ) given for all l ≥ 0 and all (µ, f ) ∈ ( (S (l) ) × (S (l+1) )) by where G l is a positive potential function on S (l) and L l+1 is a Markov transition from S (l) to S (l+1) . 
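The displayed definition of the Feynman–Kac mappings (3.1) is not reproduced above. A natural reading, consistent with the description of G_l and L_{l+1} just given and with the selection/mutation mechanism discussed below, is the standard one-step Boltzmann–Gibbs-plus-transport map (offered here as an assumed reconstruction, not a quotation of the original display):

\[
\Phi_{l+1}(\mu)(f)\;=\;\frac{\mu\bigl(G_l\,L_{l+1}(f)\bigr)}{\mu(G_l)},
\qquad \mu\ \text{a probability measure on } S^{(l)},\quad f\ \text{bounded measurable on } S^{(l+1)} .
\]

Iterating this map from \(\pi^{(0)}\) gives the usual Feynman–Kac representation of the solutions of (1.1),

\[
\pi^{(l)}(f)\;=\;\frac{\mathbb{E}\Bigl[f(Y_l)\,\prod_{k=0}^{l-1}G_k(Y_k)\Bigr]}{\mathbb{E}\Bigl[\prod_{k=0}^{l-1}G_k(Y_k)\Bigr]},
\]

which is the representation described just below in terms of the reference Markov chain \((Y_l)_{l\ge 0}\).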
In this scenario, the solution of the measure-valued equation (1.1) is given by where (Y l ) l≥0 stands for a Markov chain taking values in the state spaces (S (l) ) l≥0 , with initial distribution π (0) and Markov transitions (L l ) l≥1 . These probabilistic models arise in a variety of applications including nonlinear filtering and rare event analysis as well as spectral analysis of Schrödinger type operators and directed polymer analysis. These Feynman-Kac distributions are quite complex mathematical objects. For instance, the reference Markov chain may represent the paths from the origin up to the current time of an auxiliary sequence of random variables Y ′ l taking values in some state spaces E ′ l . To be more precise, we have To get an intuitive understanding of i-MCMC algorithms in this context, we note that for the Feynman-Kac mappings (3.1) we have for all f ∈ (S (k) ) It follows that each random state X (k) p is sampled according to two separate genetic type mechanisms. First, we select randomly one state X (k−1) q at level (k − 1), with a probability proportional to . Second, we evolve randomly from this state according to the exploration transition L k . This i-MCMC algorithm model can be interpreted as a spatial branching and interacting process. In this interpretation, the k-th chain duplicates samples with large potential values, at the expense of samples with small potential values. The selected offspring evolve randomly from the state space S (k−1) to the state space S (k) at the next level. The same description for path space models (3.2) coincides with the evolution of genealogical tree based i-MCMC algorithms. For this class of Feynman-Kac models, we observe that the decomposition (2.1) is satisfied with the first order integral operator D l,η defined for all f ∈ (S (l) ) by and the remainder measures Ξ l (µ, η) given by We can observe that the regularity condition Indeed, it is easy to check that for any function f ∈ 1 (S (l) ), Finally, we mention that the semigroup D k,l introduced above can be explicitly described in terms of the semigroup The Dobrushin coefficient β(D k,l ) can also be computed in terms of these semigroups. We have with the Markov integral operator P k,l given, for all 1 ≤ k ≤ l, by Therefore, it follows that Introduction As for mean field interacting particle models, see section 9.2 in [2], the local errors induced by the i-MCMC algorithm can be interpreted as small disturbances of the measure-valued equation (1.1). To be more precise, we need a few additional definitions. (S (l) ) defined for all n ≥ 0 and all l ≥ 0 by For n = 0, we use the convention Φ l η In our context, the i-MCMC process at level l consists in a series of conditionally independent random variables X (l) n with different distributions Φ l (η By definition of the i-MCMC algorithm, we have (∆ (l) n ( f )) = 0 for any f ∈ (S (l) ). In addition, we have Recalling (2.5), as η (l) n ( f ) converges almost surely to π (l) ( f ), as n → ∞, for every l ≥ 0 and any function f ∈ (S (l) ), we deduce that Φ l (η (l−1) n )( f ) converges almost surely to π (l) ( f ), as n → ∞, which implies, via Cesaro mean convergence that, as n → ∞, This shows that the random disturbances ∆ (l) n ( f ) are unbiased with finite variance. It is possible to obtain a much stronger result by applying the traditional central limit theorem for triangular arrays; see for instance Theorem 4, page 543 of [11]. 
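To make the two-step selection/mutation mechanism described above concrete, the following minimal Python sketch simulates a toy i-MCMC chain. It assumes the natural reconstructions of the missing displays, namely the occupation measures η_n^{(k)} = (n+1)^{-1} Σ_{p=0}^{n} δ_{X_p^{(k)}} and X_n^{(k+1)} distributed according to Φ_{k+1} applied to the occupation measure of the previous level up to time n−1; the Gaussian potential, the random-walk transition and all numerical values are purely illustrative and are not taken from the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

def G(x):
    # positive potential G_l (the same Gaussian potential at every level in this toy model)
    return np.exp(-0.5 * x ** 2)

def L(x):
    # Markov transition L_{l+1}: a Gaussian random-walk move from x
    return x + rng.normal(scale=0.5)

def imcmc(levels=3, n_steps=2000):
    # level 0: i.i.d. samples from nu_0 = pi^(0) (a standard normal here)
    chains = [rng.normal(size=n_steps)]
    for k in range(1, levels + 1):
        prev, new = chains[-1], np.empty(n_steps)
        for n in range(n_steps):
            past = prev[: max(n, 1)]      # occupation measure of level k-1 (the n = 0 convention is a guess)
            w = G(past)                   # selection: probability proportional to the potential
            q = rng.choice(len(past), p=w / w.sum())
            new[n] = L(past[q])           # mutation: evolve the selected state through L
        chains.append(new)
    return chains

chains = imcmc()
# occupation-measure estimate of pi^(3)(f) for the test function f(x) = x**2
print(np.mean(chains[-1] ** 2))
```

The selection step is exactly the "probability proportional to G" mechanism described above, and the occupation measure of the top chain supplies the estimate of π^{(k)}(f); no attempt is made here at efficiency or at the path-space version (3.2).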
More precisely, we find that ∆ (l) n ( f ) converges in law as n → ∞ to a centered Gaussian random variable with variance given by (4.2). If we rewrite ∆ (l) n in a different way, we have the following decomposition Equation (4.3) shows that the evolution equation of the sequence of occupation measures η (l) n can be interpreted as a stochastic perturbation of the following measure-valued equation initialized using conditions µ (0) n = η (0) n . Note that the constant distribution flow µ (l) n = π (l) associated with the solution of (1.1) also satisfies (4.4). Although the above discussion gives some insight on the asymptotic normal behavior of the local errors accumulated by the i-MCMC algorithm, it does not give directly the precise fluctuations of the sequence of measures (η (l) n ) n≥0 around their limiting values π (l) . The analysis of these fluctuations is based on the way the semigroup associated with the evolution (4.4) propagates these local perturbation random fields. We can observe that the one step transformation of the equation (4.4) is an averaging of a nonlinear transformation Φ l of the complete sequence of measures µ (l−1) p from the origin p = 0 up to the current time n. Iterating these transformations, we end up with a complex semigroup on the sequence of measures between successive levels. We refer the interested reader to [3] for an "explicit" calculation of these semigroups in the context of Feynman-Kac models. In the further developments of section 5, we shall derive a first order expansion of these semigroups. These multilevel decompositions express the propagations of the local perturbation errors in terms of weighted random fields. The next section is devoted to the fluctuations analysis of a general class of weighted random fields associated with the i-MCMC algorithm. We present an admissible class of array type weight functions for which the corresponding weighted perturbation random fields behave as a collection of independent and centered Gaussian fields. A martingale convergence theorem This section is concerned with the fluctuations analysis of weighted random fields associated with the local perturbation random fields (4.1) discussed in section 4.1. Our theorem is basically stated in terms of the convergence of a sequence of martingales. To describe these processes, we need the following definitions. We observe that the standard sequence w n (p) = 1/ n belongs to , with the identity function ̟(ǫ) = ǫ. Denote by f = ( f l ) l≥0 ∈ l≥0 (S (l) ) d a collection of d-valued functions, and for every 1 ≤ i ≤ d, let f i be the i-th coordinate collection of functions f i = ( f i l ) l≥0 ∈ l≥0 (S (l) ). We consider the dvalued and (n) -martingale (n) ( f ) = ( (n) ( f i )) 1≤i≤d defined for any l ≥ 0 and any 1 ≤ i ≤ d, We can extend this discrete generation process to the half line + = [0, ∞) by setting, for all t ∈ + , We can observe that ( (n) t ( f )) is a càdlàg martingale with respect to the filtration associated with the σ-fields (n) t generated by the random variables (X (k) p ) 0≤p≤n , with k ≤ ⌊t⌋, and X ⌊t⌋+1 p with 0 ≤ p < ⌊{t}(n + 1)⌋. Before establishing the proof of this convergence result, we illustrate some direct consequences of this theorem. 
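As a purely numerical aside (a sketch, not part of the original argument), the simplest instance of this convergence can be checked by simulation: at level k = 0 the chain is i.i.d. from π^{(0)}, so with the standard weights w_n(p) = 1/√(n+1) the weighted perturbation field presumably reduces to a classical normalised sum, and its limiting covariance should match π^{(0)}((f − π^{(0)}(f))(g − π^{(0)}(g))). With π^{(0)} = N(0,1), f(x) = x and g(x) = x³ this value is E[X⁴] = 3.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 2000, 2000
x = rng.normal(size=(reps, n + 1))        # i.i.d. samples from pi^(0) = N(0,1), many replications

f, g = x, x ** 3                          # test functions f(x) = x and g(x) = x**3
pif, pig = 0.0, 0.0                       # exact values of pi^(0)(f) and pi^(0)(g)

# weighted perturbation fields with the standard weights w_n(p) = 1/sqrt(n+1)
Vf = (f - pif).sum(axis=1) / np.sqrt(n + 1)
Vg = (g - pig).sum(axis=1) / np.sqrt(n + 1)

# empirical covariance across replications; should be close to E[X^4] = 3
print(np.cov(Vf, Vg)[0, 1])
```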
On the one hand, it follows from this convergence that the discrete generation martingales ( (n) l ( f )), where l ∈ , converge in law as n → ∞ to an d -valued and discrete Gaussian martingale l ( f ) = ( l ( f i )) 1≤i≤d with bracket given, for any l ≥ 0 and any 1 ≤ i, j ≤ d, by On the other hand, we can fix a parameter l and introduce a collection of functions g In other words, all the functions g j k are null except on the diagonal k = j. By construction, for every 0 ≤ k ≤ l, we have We can deduce from the above martingale convergence theorem that the random fields (V ( j) n ) with 0 ≤ j ≤ l, converge in law, as n tends to infinity and in the sense of finite dimensional distributions, to a sequence of independent and centered Gaussian fields V ( j 0≤ j≤l such that, for any 1 ≤ i, i ′ ≤ d j , We can achieve this discussion by the following corollary of Theorem 4.4. Corollary 4.5. For every collection of weight functions (w (l) ) ∈ , the sequence of random fields (V (l) n ) l≥0 converges in law, as n tends to infinity and in the sense of finite dimensional distributions, to a sequence of independent and centered Gaussian fields V (l) l≥0 with covariance function given for any l ≥ 0 and any ( f , g) ∈ (S (l) ) 2 by V (l) ( f )V (l) (g) = π (l) ( f − π (l) ( f )) (g − π (l) (g)) . We now come back to the proof of Theorem 4.4. Proof of Theorem 4.4: For every 1 ≤ i ≤ d and any 0 ≤ k ≤ l such that f i k ∈ (S (k) ), the martingale increments are given by Our goal is to make use of the central limit theorem for triangular arrays of d -valued martingales given in [8] page 437. We first rewrite the martingale where for every 0 ≤ i ≤ l(n + 1) + n, with i = k(n + 1) + p for some 0 ≤ k ≤ l, and 0 ≤ p ≤ n We further denote by (n) i the σ-field generated by the random variables X (k) p for any pair of parameters (k, p) such that k(n + 1) + p ≤ i. By construction, for any sequence of functions f = ( f l ) l≥0 and g = (g l ) l≥0 ∈ l≥0 (S (l) ) and for every 0 ≤ i ≤ l(n + 1) + n, with i = k(n + 1) + p for some 0 ≤ k ≤ l, and 0 ≤ p ≤ n, we find that with the local covariance function This implies that Recalling (2.5), we have that η (k) n (h) converges almost surely to π (k) (h), as n → ∞, for every k ≥ 0 and any function h ∈ (S (k) ). Under our regularity conditions on the sequence of mappings (Φ k ), we find that Φ k (η (k−1) n )(h) converges almost surely to π (k) (h), as n → ∞, for any function h ∈ (S (k) ). It immediately implies the almost sure convergence . Under our assumptions on the weight array functions (w (k) n (p)), we can deduce that almost surely Consequently, it leads to the almost sure convergence In order to go one step further, we introduce the sequence of d -valued and continuous time càdlàg random processes given, for any f = ( f l ) l≥0 ∈ l≥0 (S (l) ) d and t ∈ + , by It is not hard to see that for any l ∈ , and any 0 ≤ k ≤ n, and ⌊t(n + 1)⌋ − ⌊t⌋(n + 1) = ⌊{t}(n + 1)⌋ = k where {t} = t − ⌊t⌋ stands for the fractional part of t ∈ + . Using the fact that ⌊t(n + 1)⌋ + n = (⌊t⌋(n + 1) + n) + ⌊{t}(n + 1)⌋, we obtain the decomposition This readily implies that Furthermore, using the same calculations as above, for any pair of integers 1 ≤ j, j ′ ≤ d, we find that Under our assumptions on the weight array functions (w (k) n (p)), we find that Hereafter, we have for every 0 ≤ i ≤ l(n + 1) + n In addition, we also have lim n→∞ sup 0≤k≤l sup 0≤p≤n w (k) n (p) = 0. Consequently, we conclude that the conditional Lindeberg condition is satisfied. 
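For completeness (the corresponding display is not reproduced above), the conditional Lindeberg condition invoked here is the standard one for triangular arrays of martingale increments: writing \(\Delta M^{(n)}_i\) for the i-th increment and \(\mathcal{F}^{(n)}_{i-1}\) for the σ-field it is measured against, one requires, for every \(\varepsilon>0\),

\[
\sum_{i}\,\mathbb{E}\Bigl[\,\bigl|\Delta M^{(n)}_i\bigr|^{2}\,\mathbf{1}_{\{|\Delta M^{(n)}_i|>\varepsilon\}}\;\Big|\;\mathcal{F}^{(n)}_{i-1}\Bigr]\;\longrightarrow\;0
\qquad\text{in probability, as } n\to\infty .
\]

In the present setting this holds because each increment is bounded, up to a constant depending on the test functions, by \(\sup_{0\le k\le l}\sup_{0\le p\le n} w^{(k)}_n(p)\), which tends to zero as stated above.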
Hence, the dvalued martingale ( (n) t ( f )) converge in law, as n → ∞, to a continuous martingale t ( f ) with predictable bracket given, for any air of indexes 1 ≤ j, j ′ ≤ d, by which completes the proof of Theorem 4.4. A multilevel expansion formula We present in this section a multilevel decomposition of the sequence of occupation measures (η (k) n ) around their limiting value π (k) . In section 4.1, we have shown that the sequence (η (k) n ) satisfies a stochastic perturbation equation (4.3) of the measure-valued equation (4.4). In this interpretation, the forthcoming developments provide a first order expansion of the semigroup associated to equation (4.3). This result will be used in various places in the rest of the paper. In section 6, we combine these first order developments with the local fluctuation analysis presented in section 4 to derive a "two line" proof of the functional central limit theorem 2.1. In section 7, we use these decompositions to prove the non asymptotic mean error bounds presented in Theorem 2.2. The forthcoming multilevel expansion is expressed in terms of the following time averaging operators. Definition 5.1. For any l ≥ 0, denote by the mapping : η ∈ (S (l) ) → (η) = ( n (η)) n≥0 ∈ (S (l) ) defined for any sequence of measures η = (η n ) ∈ (S (l) ) , and any n ≥ 0, by We also denote by k = • k−1 the k-th iterate of the mapping . We are now in position to state the following pivotal multilevel expansion. Proposition 5.2. For every k ≥ 0, we have the following multilevel expansion with a sequence of signed random measures Ξ (k) = (Ξ (k) n ) such that, for any m ≥ 1, for some constants b(m) (respectively c(k)) whose values only depend on the parameter m (respectively k). Proof of Proposition 5.2: Recall that we use the convention D 1,0 = I d . We prove the mean error bounds by induction on the integer parameter k. For k = 0, we readily find that Hence, (5.2) is satisfied with the sequence of null measures Ξ (0) n = 0. We further assume that the expansion is satisfied at rank k. We shall make use of the following decomposition It follows from our assumptions on the mappings (Φ k ) that for all µ, η ∈ (S (k) ), Using the fact that On the other hand, using the first order expansion of the mapping Φ k+1 , we have the decomposition ) be a tensor product function associated with a subset I ⊂ , a pair of functions (h 1 i , h 2 i ) i∈I ∈ ( (S (k) ) 2 ) I and a sequence of numbers (a i ) i∈I ∈ I . Using the generalized Minkowski integral inequality, we find that . Moreover, via the mean error bounds established in [3, Theorem 1.1.], it follows that for any i ∈ I and j = 1, 2 for some finite constant a(m) whose values only depend on the parameter m. These estimates readily implies that for some finite constant b(m) whose values only depend on the parameter m. This implies that for any f ∈ 1 (S (k) ) To take the final step, we use the induction hypothesis to check that This yields the decomposition n (δ (k+1) ) + In the last assertion, we have used the convention that D k+2,k+1 = I d stands for the identity operator. We also note that The end of the proof of the result at rank (k + 1) is now a consequence of decomposition (5.4) in conjunction with the estimates (5.5) and (5.6). This completes the proof of Proposition 5.2. Proof of the central limit theorem This section is mainly concerned with the proof of the functional central limit theorem presented in section 2.1. 
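Before turning to that proof, note that the display in Definition 5.1 above is not reproduced; the natural reading, consistent with the weighted-summation representation used in the next paragraph (whose first-level weights are identically equal to one), is the time-averaging operator

\[
\Lambda(\eta)_n \;=\; \frac{1}{n+1}\sum_{p=0}^{n}\eta_p ,
\qquad \Lambda^{k}\;=\;\Lambda\circ\Lambda^{k-1} .
\]

This is an assumed reconstruction rather than a quotation. Under it, iterating the average yields \(\Lambda^{k}(\eta)_n=(n+1)^{-1}\sum_{p=0}^{n}s^{(k)}_n(p)\,\eta_p\) with \(s^{(1)}_n(p)=1\) and

\[
s^{(k+1)}_{n}(p)\;=\;\sum_{q=p}^{n}\frac{s^{(k)}_{q}(p)}{q+1},
\]

which is presumably the recursive equation referred to below.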
We first need to express the time averaging semigroup k introduced in definition 5.1 in terms of the following weighted summations k n (η) = 1 n + 1 0≤p≤n s (k) n (p) η p with the weight array functions s (k) n = (s (k) n (p)) 0≤p≤n defined by the initial condition s (1) n (p) = 1 and, for any k ≥ 1 and, for all n ≥ 0 and 0 ≤ p ≤ n, by the recursive equation The proof of Theorem 2.1 is a direct consequence of the following proposition whose proof is postponed to the end of the section. We shall now proceed to the "two line" proof of Theorem 2.1. Proof of Theorem 2.1: Fix a parameter k ≥ 0 and for any 0 ≤ l ≤ k, let W (l) be the distribution flow mappings associated with the weight functions (w ((k−l)+1) ) defined in Proposition 6.1 and given by By construction, we have for any 0 ≤ l ≤ k Using the multilevel expansion presented in Proposition 5.2, we obtain that where we recall that, for any k ≥ 0, V (k) n = W (k) n (δ (k) ). Finally, Theorem 2.1 is a direct consequence of Corollary 4.5 together with the mean error bound (5.3) with m = 1 and (6.1). The proof of Proposition 6.1 relies on two pivotal lemmas. Their proofs follow elementary techniques but require tedious calculations which are given in appendix. Lemma 6.2. For any k ≥ 0, and any 0 ≤ p ≤ n, we have the formula where the remainder sequence (r (k) n (p)) is such that r (0) n (p) = r (1) n (p) = 0 and, for all k ≥ 2 and 0 ≤ p ≤ n, we have with |ρ (k) (n)| ≤ c(k) log (n + 1) k . In addition, for any increasing sequence of integers (κ n ) such that κ n ≤ n and κ n /n → κ > 0, as n → ∞, we have We are now in position to prove Proposition 6.1. We achieve the proof of (6.3) by a direct application of Lemma 6.3. From the above discussion, we also find that 0≤p≤κ n n | ≤ |r (k) n |. We deduce from Lemma 6.3 that for any increasing sequence of integers (κ n ) such that κ n ≤ n and κ n /n → κ > 0, lim n→∞ 1 n 0≤p≤κ n s (k) n (p) 2 = (2(k − 1))! Proof of the sharp m -estimates This short section is concerned with the proof of non asymptotic estimates presented in Theorem 2.2. We start with a pivotal technical lemma. In summary, we have proved that with some remainder sequence satisfying r (k) n (p) ≤ c(k) (log (n + 1)) k−2 p≤q≤n 1 (q + 1) 2 which completes the proof of Lemma 6.2. We observe that the first inequality in the left hand side also holds true for p = 0. Moreover, using the fact that for any p ≥ 2, we have Finally, the second assertion of Lemma 6.3 follows from (7.2) and (7.3).
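As a closing numerical remark, the weight arrays appearing in Lemmas 6.2 and 6.3 can be generated directly from the recursion derived above for the assumed time-averaging operator; the short script below evaluates \(n^{-1}\sum_{p}s^{(k)}_n(p)^2\) for small k. Both the recursion used and the reference value \((2(k-1))!/((k-1)!)^2\) printed alongside are consequences of that assumed reconstruction, not quotations from the original statement of Lemma 6.3, and the convergence is logarithmically slow, in line with the \(\log^k(n+1)\) remainder terms of Lemma 6.2.

```python
import numpy as np
from math import factorial

def weights(k, n):
    # S[p, q] holds s_q^{(j)}(p); start from the first-level weights s^{(1)} = 1 on p <= q
    S = np.triu(np.ones((n + 1, n + 1)))
    inv = 1.0 / np.arange(1, n + 2)                  # the factors 1/(q+1), q = 0..n
    for _ in range(k - 1):
        # assumed recursion: s^{(j+1)}_q(p) = sum_{r=p}^{q} s^{(j)}_r(p) / (r+1)
        S = np.cumsum(S * inv, axis=1)
    return S[:, n]                                   # the row (s_n^{(k)}(p))_{0 <= p <= n}

n = 2000
for k in range(1, 5):
    s = weights(k, n)
    print(k, (s ** 2).sum() / n, factorial(2 * (k - 1)) / factorial(k - 1) ** 2)
```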
8,229.2
2009-04-10T00:00:00.000
[ "Mathematics", "Computer Science" ]
Exploring the Higgs sector of the MRSSM with a light scalar In a recent paper we showed that the Minimal R-symmetric Supersymmetric Standard Model (MRSSM) can accommodate the observed 125 GeV Higgs boson as the lightest scalar of the model in agreement with electroweak precision observables, in particular with the W boson mass and $T$ parameter. Here we explore a scenario with the light singlet (and bino-singlino) state in which the second-lightest scalar takes the role of the SM-like boson with mass close to 125 GeV. In such a case the second-lightest Higgs state gets pushed up via mixing already at tree-level and thereby reducing the required loop correction. Unlike in the NMSSM, the light singlet is necessarily connected with a light neutralino which naturally appears as a promising dark matter candidate. We show that dark matter and LHC searches place further bounds on this scenario and point out parameter regions, which are viable and of interest for LHC Run II and upcoming dark matter experiments. Contents . Benchmark points for the scenario discussed here. Dimensionful parameters are given in GeV or GeV 2 , as appropriate. to mixing with the additional scalars, and the loop corrections cannot be enhanced by stop mixing. However, the new fields and couplings can give rise to the necessary large loop contributions to the Higgs mass without generating too large a contribution to the W-boson mass. For a similar analysis see Ref. [48], where also a welcome reduction of the level of fine-tuning was found. The plan of the present paper is as follows. In section 2 we remind the reader on the relevant model details and will describe how demanding a light singlet scalar constrains the parameter space of the model. Sections 3 and 4 study how LHC and dark matter bounds, respectively, are affected by these constraints. To highlight features of different regions of parameters, which are also in agreement with LHC and dark matter searches, we define the new Benchmark points given in Tab. 1. They feature many states well below 1 TeV, including the top squark masses. The combination of all different experimental constraints on the parameter space of the MRSSM is summarized in section 5 which also contains our conclusions. Scenario description The essential field content and parameters of the MRSSM can be read off from the superpotential (2.1) The MSSM-like fields are the Higgs doublet superfieldsĤ d,u and the quark and lepton superfieldsq,û,d,l,ê. The Yukawa couplings are the same as in the MSSM. The new fields are the doubletsR d,u , which contain the Dirac mass partners of the higgsinos and the corresponding Dirac higgsino mass parameters are denoted as µ d,u . The singlet, the SU(2)-triplet and the color-octet chiral superfields,Ŝ,T andÔ, contain the Dirac mass partners of the usual gauginos. The superpotential contains Yukawa-like trilinear terms involving the new fields; the associated parameters are denoted as λ d,u for the terms involving the singlet and Λ d,u for the terms involving the triplet; terms involving the octet are not allowed by R-symmetry. Further important parameters of the MRSSM are the Dirac mass parameters M D B,W,O for the U(1), SU(2) L and SU(3) c gauginos, respectively, the soft scalar mass parameters m 2 S,T,O for the singlet, triplet and octet states, the soft mass parameters m 2 H d ,Hu,R d ,Ru for the Higgs and R-Higgs bosons, and the standard B µ parameter and sfermion mass parameters, while the trilinear sfermion couplings are forbidden by R-symmetry. 
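The display of the superpotential (2.1) is not reproduced above. The standard MRSSM form, consistent with the field content and parameters just listed (and offered here as a reconstruction following the usual conventions of the MRSSM literature rather than as a quotation of the original equation), reads

\[
W \;=\; \mu_d\,\hat R_d\!\cdot\!\hat H_d \;+\; \mu_u\,\hat R_u\!\cdot\!\hat H_u
\;+\;\lambda_d\,\hat S\,\hat R_d\!\cdot\!\hat H_d \;+\;\lambda_u\,\hat S\,\hat R_u\!\cdot\!\hat H_u
\;+\;\Lambda_d\,\hat R_d\!\cdot\!\hat T\,\hat H_d \;+\;\Lambda_u\,\hat R_u\!\cdot\!\hat T\,\hat H_u
\;-\;Y_d\,\hat d\,\hat q\!\cdot\!\hat H_d \;-\;Y_e\,\hat e\,\hat l\!\cdot\!\hat H_d \;+\;Y_u\,\hat u\,\hat q\!\cdot\!\hat H_u ,
\]

with SU(2) indices contracted in the usual way; the \(\mu\)-type terms, the \(\lambda/\Lambda\) trilinears and the MSSM-like Yukawa couplings correspond one-to-one to the parameters described in the preceding sentences, and no octet term appears, as required by R-symmetry.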
The explicit form of the soft SUSY breaking potential is given in Ref. [46]. We begin the discussion with the neutral Higgs sector of the MRSSM. There are four complex (electrically and R-charge) neutral scalar fields, which can mix. Their real and imaginary parts, denoted as (φ d , φ u , φ S , φ T ) and (σ d , σ u , σ S , σ T ), correspond to the two MSSM-like Higgs doublets H d,u , and the N = 2 scalar superpartners of the hypercharge and SU(2) L gauge fields, S and T . Since their vacuum expectation values, denoted as v d,u,S,T respectively, are assumed real, the real and imaginary components do not mix, and the mass-square matrix breaks into two 4x4 sub-matrices. In the pseudo-scalar sector the MSSM-like states (σ d , σ u ) do not mix with the singlettriplet states (σ S , σ T ) and the mass-square matrix breaks further into two 2x2 sub-matrices, see Appendix A. Therefore the neutral Goldstone boson and one of the pseudo-scalar Higgs bosons A with m 2 A = 2B µ / sin 2β appear as in the MSSM. On the other hand, in the (σ S , σ T ) sector the mixing is negligible for a heavy triplet mass m T and the lightest pseudo-scalar state is almost a pure singlet with its mass given by the soft parameter m S . Therefore, to a good approximation it decouples and is of no interest for the following discussion. In the scalar sector all fields mix and the full 4x4 tree-level mass-square matrix M φ in the weak basis (φ d , φ u , φ S , φ T ) has been given explicitly in Ref. [46]. It is diagonalized by a unitary rotation Z H M φ Z H † to the mass-eigenstate basis (H 1 , H 2 , H 3 , H 4 ), ordered with increasing mass. For the qualitative discussion we consider the limit in which the triplet decouples due to a high soft breaking mass term m T (a necessary condition to suppress the tree-level triplet contribution to the W boson mass and ρ parameter), and in which the pseudoscalar Higgs mass M A and the value of tan β = v u /v d become large. In this limit the SM-like Higgs boson is dominantly given by the up-type field φ u , and to highlight the main effect of the mixing in the light-scalar scenario it is enough to recall the most relevant part of the mass-square matrix, namely the 2x2 sub-matrix corresponding to the (φ u , φ S ) fields only, which reads where we have included the dominant radiative correction ∆m 2 rad to the diagonal element of the doublet field φ u , and we have used the abbreviations The parameters M D B , µ u and λ u appear in the scalar potential due to supersymmetry. 1 From this approximation to the mass-square matrix we can draw several immediate consequences on the four most important parameters in the light-singlet scenario in which a) Singlet soft mass m 2 S : The soft breaking parameter m 2 S appears in the diagonal element for the singlet field φ S in the mass matrix (2.2). To ensure that the lightest scalar state is a singlet-like a direct upper limit can be put on this parameter: Therefore the numerical value of m 2 S has to be limited to be below ≈ (120 GeV) 2 b) Dirac bino-singlino mass M D B : Due to supersymmetry, the Dirac mass parameter M D B appears also in the singlet diagonal element, and moreover in the off-diagonal elements as well. Hence, a direct upper limit can be derived similar to the one on m 2 S (however stronger) to keep the singlet-diagonal element smaller than the doublet one. 
Of course, the bino-singlino mass parameter appears also in the neutralino mass matrix and is the dominant parameter that determines the LSP mass in the light singlet/bino-singlino scenario. Therefore constraints from dark matter and direct LHC searches for neutralinos will have a strong impact on the Higgs sector. c) Higgsino mass µ u : The higgsino mass parameter µ u appears in the off-diagonal element of the mass matrix in Eq. (2.2) in combination with λ u . However µ u also enters as a parameter in the chargino/neutralino sector, and it is strongly limited from below by the direct chargino searches and dark matter constraints. We recall that µ u has a strong influence on the loop corrections ∆m 2 rad to the doublet component φ u , so it can affect the mixing additionally through that diagonal element. d) Yukawa-like parameter λ u : The Yukawa-like parameter λ u multiplies the higgsino mass parameter µ u in the offdiagonal element of Eq. (2.2). The off-diagonal element cannot be too large in order to avoid a negative determinant of the mass matrix and thus tachyonic states. In the simple limit v S , m Z µ u , avoiding tachyons leads to the following bound on λ u : There can be a cancellation between λ u and the M D B -term, but since µ u is much larger than all other appearing mass parameters, the value of λ u is necessarily very small. In summary, these simple considerations lead to the following promising parameter hierarchies for a light singlet scenario: Quantitative analysis and comparison to experiment In our phenomenological investigations we will of course not use the approximations made before but we will always use the most precise available expressions including higher order corrections. This is done using the Mathematica [50] package SARAH [51][52][53][54][55][56] and SPheno [57,58]. This approach was cross-checked, as described in Refs. [46,47], using FlexibleSUSY [59], FeynArts and FormCalc [62][63][64]. Fig. 1 shows a quantitative analysis of the mixing of the two lightest Higgs states, as a function of the two relevant parameters m S (left column) and M D B (right column). The other parameters are chosen as in BMP5, given in Tab. 1. They satisfy the hierarchy of Eq. (2.7); the precise parameter choices will be motivated in the subsequent sections. The first two lines in the figure show the masses of the two lightest Higgs states (computed to two-loops as described in Ref. [47]) and the singlet component of these two states, measured by the square of the corresponding mixing matrix elements (Z H H 1 ,φ S ) 2 and (Z H H 2 ,φ S ) 2 . As expected from the discussion above, as long as the approximate inequalities Eq. (2.4) and Eq. (2.5) are satisfied, the lightest state is significantly lighter than m Z and has a high singlet component. Once m S or M D B become heavier, the lightest state becomes mainly a doublet-like (and hence SM-like) and its mass approaches a limit, which is here below 120 GeV. Which of the parameter regions is phenomenologically viable will be determined below. The third line of plots in Fig. 1 quantifies one of the major motivations for the light singlet scenario -the enhancement of the SM-like Higgs boson mass due to the tree-level push resulting from mixing with the scalar state. SM-like one are very heavy. In the light-singlet case, the upward shift can amount to more than 10 GeV. The upward shift is particularly strong for non-vanishing Dirac mass M D B , since this parameter also appears in the off-diagonal element of the mass matrix Eq. (2.2). 
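The two qualitative statements used above — that too large an off-diagonal entry drives the determinant of the (φ_u, φ_S) mass-square matrix negative, and that mixing pushes the SM-like state upwards — are elementary properties of a symmetric 2×2 matrix and can be illustrated with a few lines of code. The numerical entries below are placeholders chosen for illustration only; they are not the MRSSM expressions of Eq. (2.2).

```python
import numpy as np

def spectrum(m_uu2, m_ss2, m_us2):
    """Masses and singlet fractions of a 2x2 mass-square matrix in the
    (phi_u, phi_S) basis; all entries in GeV^2 (illustrative numbers only)."""
    M2 = np.array([[m_uu2, m_us2],
                   [m_us2, m_ss2]])
    vals, vecs = np.linalg.eigh(M2)          # eigenvalues in ascending order
    if vals[0] < 0:
        raise ValueError("negative determinant: tachyonic state")
    return np.sqrt(vals), vecs[1, :] ** 2    # masses and the (Z^H_{i,phi_S})^2 analogue

# doublet diagonal entry ~ (115 GeV)^2 (tree level plus loop correction),
# singlet diagonal entry below it, as required in the light-singlet scenario
print(spectrum(115.0 ** 2, 90.0 ** 2, 0.0))        # no mixing
print(spectrum(115.0 ** 2, 90.0 ** 2, 45.0 ** 2))  # mixing pushes the heavy state up
```

The heavier eigenvalue moves up and the lighter one moves down relative to the unmixed diagonal entries, which is the level-repulsion effect behind the upward shifts of several GeV discussed above, while the lighter state acquires the large singlet component needed to evade the LEP limits discussed next.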
Now we compare the light-singlet scenario illustrated in Fig. 1 with experimental data. It is clear that there are two kinds of direct constraints on the two lightest scalars and their mixing. On the one hand, there must be a SM-like state which can be identified with the Higgs boson observed at LHC, with mass compatible with the observed value and with sufficiently small singlet mix-in in order to agree with the observed Higgs signal strengths and branching ratios. On the other hand, the state that is lighter than the one observed at the LHC must be sufficiently singlet-like in order to evade limits from direct searches for light scalars, especially the LEP searches. Roughly speaking, for light singlet masses these constraints restrict the mixing to around (Z H H 1 ,φu ) 2 < 0.02, except for the region, where LEP saw an upward fluctuation. Here the mixing is restrained to (Z H H 1 ,φu ) 2 < 0.15. Both types of constraints can be precisely analyzed with the help of HiggsBounds-4.2 and HiggsSignals-1.4, respectively [65][66][67][68][69][70]. Here, we pass the effective couplings written by SPheno to both tools for each parameter point studied. With HiggsBounds the option LandH is used to check the MRSSM Higgs sector against all experimental constraints included with the program. For HiggsSignals all peak observables available are used (option latestresults and peak ), the mass uncertainties are described using a Gaussian shape and we estimate the uncertainty of the SM-like Higgs boson mass with 3 GeV. The interpretation of the HiggsBounds and HiggsSignals results is done in a simplified way. A parameter point is excluded by HiggsBounds by using the 95% CL limit of its output and by HiggsSignals when its approximately calculated p-value is smaller than 0.05. A point is excluded by both when both constraints apply. For a more complete statistical treatment of extended Higgs sectors containing light singlet scenarios for the singlet-extended SM and NMSSM see Refs. [71] and [72], respectively. In the case of the MRSSM this approach is left for future work. for two different values of λ u . The parameter Λ u is adjusted to ensure the correct mass of the observed Higgs boson; the remaining parameters are fixed as in BMP5. The combination of the two plots shows that complete range of m S and M D B discussed before with simple approximations of the tree-level scalar mass matrix is viable, except of the region of level crossing and very strong mixing. We find that the light-singlet scenario can be realized in the MRSSM for m S < 100 GeV and M D B < 55 GeV. However, the two plots also show very high sensitivity to the small λ u , which appears essentially in combination with the very small term ∝ M D B /µ u , see (2.6). Changing it from zero to (−0.01) leads to strong shifts of the allowed regions. The main novel feature of the light-singlet scenario in the MRSSM is the restriction on the Dirac bino-singlino mass M D B . This restriction has no counterpart e.g. in the NMSSM. Obviously this limit affects the neutralino sector. Combined with LEP bounds on chargino and neutralino production it leads to the conclusion that the LSP is a Dirac bino-singlino neutralino with mass related to M D B and thus limited from above. This limit on the LSP mass will provide strong constraints from and correlations to LHC processes and dark matter studies. 
With these restrictions on m S and M D B , the decay of the Z and SM-like Higgs boson into the LSP and singlet-like Higgs boson, if kinematically allowed, needs to be sufficiently suppressed so that theoretical prediction of invisible width of both particles is in agreement with experiment. In the mass region of interest, m H 1 < 62 GeV, HiggsBounds puts very stringent limits on the singlet-doublet mixing. As the decay of the SM-like Higgs boson into the light (pseudo-)scalar is related to this mixing, contributions of these decay channels are strongly reduced for realistic scenarios. Decays of Z (Higgs) bosons to LSPs are induced by the couplings to the (R-)higgsino (higgsino+bino or R-higgsino+singlino) parts of the LSP. This is mainly mediated by gauge couplings and will be suppressed by the weak (R-)higgsino-singlino-bino-mixing. We will show that it is also important for direct detection limits from dark matter searches. These searches put such a strong bound on the mixing so that the decay to the LSP will be automatically suppressed. Table 2 summarizes the Higgs sector observables for our benchmark points, as well as the predicted W boson mass. LHC constraints As was discussed in the previous section, demanding the singlet state to give the lightest Higgs boson yields a direct upper limit on the Dirac bino mass M D B . A light bino-like neutralino, and more generally light weakly interacting particles, are promising in two respects. They might lead to observable signals at the LHC, and they might explain the observed dark matter relic density. At the same time, the light-singlet scenario is constrained by existing data from LHC and dark matter searches. In the present section we study the constraints from LHC in detail. We only consider constraints on weakly interacting particles, i.e. on charginos, neutralinos, and sleptons. All strongly interacting particles are not relevant for the electroweak and dark matter phenomenology studied here and are assumed to be heavy enough to evade limits. So far the negative results of LHC searches for new particles have been interpreted within the MSSM, or a number of simplified models which include a pair of charginos, the neutralinos and gluinos are of the Majorana type, and left-and right-chiral sfermions that can mix. In general, care has to be taken in reinterpreting the LHC bounds in the context of the MRSSM, since the MRSSM differs from the MSSM in the number of degrees of freedom, the Dirac versus Majorana nature of neutralinos, the mixing patterns, and since some processes are forbidden in the MRSSM due to the conserved R-charge. Constraints from slepton searches We first discuss briefly the simplest case of constraints following from slepton searches. In both the MSSM and the MRSSM, sleptons can be produced in pairs through Drell-Yan processes. In the MRSSM, slepton left-right mixing vanishes, which is not an essential difference to the MSSM, where the mixing is typically assumed to be small. In both models, light sleptons will decay directly to the corresponding leptons and the bino-like neutralino LSP. In such a case the Dirac or Majorana nature of the LSP does not matter and, hence, the MSSM exclusion bounds for light sleptons can be applied directly to the MRSSM case. Recently the ATLAS Collaboration derived the exclusion limits in the (m˜ L,R , mχ0 1 ) parameter space from the analyses of the selectron and smuon pair-production processes, see Figs. 8 (a) and (b) of Ref. [73]. 
2 These exclusion limits can be specialized to our case, in which Eq. (2.5) implies an upper limit of ≈ 55 GeV on the LSP mass. We then find that left-handed slepton masses below ≈ 300 GeV are excluded. For the right-handed slepton masses the bound is somewhat weaker (around 250 GeV) due to smaller production cross section. Further, a small corner at a lower right-handed slepton mass ∼ 100 GeV with the LSP masses in the range 40-55 GeV is still allowed. With the current experimental reach the direct production of staus with the cross sections predicted by supersymmetry can not be excluded, see Ref. [75]. The remainder of the present section is devoted to the more interesting case of neutralino and chargino searches. The MRSSM comprises four different charginos, with masses determined by the wino and the two higgsino mass parameters, see Appendix A. In contrast, the MSSM contains only two charginos. Table 3 gives an overview of possible observed signatures of sets of charginos and neutralinos (listed in the first column). It then shows the corresponding possible interpretations of the signatures in the MSSM (second column) and in the MRSSM (third column), which are very different as can be understood on the basis of the previous remarks. The table is more general than necessary for the purposes of the present paper, and it can serve as a basis of further investigations of the LHC phenomenology of the MRSSM. In general, as the number of degrees of freedom for neutralinos and charginos is twice as large in the MRSSM than in the MSSM one could naively expect an enhancement of the production cross section by a factor of four. However, all MRSSM charginos and neutralinos carry non-vanishing R-charge. Due to R-charge conservation, therefore, only half of all final state combinations is allowed. Furthermore, the new genuine MRSSM states (R d ,R u ,S,T ) do not interact at tree level with fermions, sfermions or gluons. Hence the situation is more complicated and has to be analyzed channel by channel. Further differences between MRSSM and MSSM exist in the couplings of charginos and neutralinos and therefore in the decay branching ratios. For example, in the MSSM the higgsino mass parameter µ induces a maximal mixing between the up-and down-higgsino so that both will mix almost similarly with the bino and wino, implying an appreciable decay rates of both higgsinos to sleptons, if kinematically available. This is different in the MRSSM. R-symmetry does not allow a parameter which induces mixing between the upand down-(R-)higgsinos, instead separate mass parameters µ d and µ u are needed. Then, upand down-(R-)higgsino states separately will be good approximations to the corresponding mass eigenstates. For tan β 1 this also means that the mixing between down-(R-)higgsino with the wino-triplino and bino-singlino states will be strongly suppressed. As a result, for large tan β the down-(R-)higgsino has appreciable couplings only to staus, which are not shared with the up-(R-)higgsino. Table 3. Possible discovery scenarios at colliders of different sets of particles from the neutralinochargino sector and the corresponding dominant gauge eigenstates for the MSSM and MRSSM. This is assuming the light-singlet-scenario in the MRSSM, which leads to a light bino-singlino as LSP candidate. It is also important to consider decays of charginos and neutralinos into the SM-like Higgs boson H 2 observed at the LHC or into the light singlet H 1 , present in our scenario. 
These decays are influenced by the λ/Λ parameters. Large Λ are needed to generate large enough loop contributions to the mass of the SM-like Higgs boson and λ needs to be small in the light singlet scenario studied here. Therefore, the decay of the wino-triplino to the SM-like Higgs boson and LSP, if kinematically allowed, will be enhanced by Λ, while contributions from λ to the higgsinos decaying to the SM-like Higgs boson and LSP are small. When the chargino/neutralino decay to the SM-like Higgs boson is kinematically not possible, decays to the light singlet in our scenario still might open. Therefore, it can be a competing channel to the decay to the Z boson above the Z boson mass or the dominant decay channel below it. For higher neutralino masses the branching ratio to the light singlet will be sub-dominant to the one to the SM-like Higgs boson, but might still be non-negligible, depending on the mixing between singlet and doublets. Setup for recasting LHC limits for the MRSSM After having given an overview on the qualitative details of electro-weak production in the MRSSM, we now turn to a more in-depth analysis, taking the experimental input into account in a systematic way. In recent years different computational tools emerged, which try to automatize the study of beyond the Standard Model physics at the LHC in a rather generic way using standardized interfaces. This allows us to take the model differences between the MRSSM and the MSSM/simplified models studied by the experiments into account and directly calculate bounds from the LHC on the MRSSM parameter space. In the following, we give a description of the set of tools used to for this. Using the UFO [76] output produced by SARAH with Herwig++-2.7 [77, 78] 3 we simulate the LO production of electro-weak sparticles at the LHC and compare it with 8 TeV data with CheckMATE-1.2.0 [81]. 4 CheckMATE includes ATLAS analyses, which can be sensitive to the final state of these processes, when several leptons appear as decay products. Specifically, we take into account the analyses implemented in CheckMATE for two [73,87], three [88,89] and four and more leptons [90] in the final state. The final output of CheckMATE used to set exclusion limits in the MRSSM parameter space is the value for CL S of most sensitive signal region of the studied analyses for each parameter point. To ensure correctness of the calculated limits several tests of the used tools were done. For the calculation of the matrix element the event generator Herwig++-2.7 was checked against Madgraph 5 [91], FeynArts/FormCalc and CalcHEP [92], where also the input of the model files was produced by SARAH. Agreement was achieved up to the implementation differences of the programs. The analyses implementation by CheckMATE was checked by recalculating some of the bounds given by experiment. Additionally, the cutflow for a selection of signal regions was reproduced by an independent implementation within uncertainties. With our setup we will consider the following production processes pp → χ 0χ0 , pp → χ 0 χ + , χ −χ0 , pp → χ 0 ρ − , ρ +χ0 , pp →˜ + R˜ − R . The corresponding production cross sections of our benchmark points are given in Tab. 4. Processes with left-handed sleptons are neglected, assuming that the masses are above the detection limit as discussed before. The masses of the relevant particles are given in Tab. 5. 
In the next sections we will show how the setup described here is used to recast the experimental bounds on the chargino and neutralino sector of the MRSSM. We will also highlight differences to the usual MSSM interpretations. Light staus and light charginos In the following we focus on specific cases, which are of interest for the light-singlet scenario and which are promising in view of dark matter. We begin with the case corresponding to the first row of Table 3, i.e. we consider a very light LSP (a bino-like in the MSSM, and a bino-singlino in the MRSSM), and a relatively light chargino (a wino in the MSSM, and a down-(R-)higgsino in the MRSSM). We also take relatively light staus, which are motivated by dark matter considerations. sensitive to Higgs bosons in the final state, yielding a low exclusion power. 5 Only two small regions are excluded: the region with a chargino mass below 100-150 GeV (depending on the stau mass), and the region around the chargino mass of 150-300 GeV and stau mass below 150 GeV. In contrast, for the MRSSM the produced charginos are down-higgsinolike and the decay to stau, if available, is preferred. This makes the searches designed for events with large multiplicity of taus very efficient. Hence, a large triangular region with chargino mass between the decay threshold (mτ R + m LSP ) and around 450 GeV is excluded. Interestingly, however, to the left of the triangle the chargino becomes too light, decays to on-shell staus are impossible, and decays via off-shell stau are suppressed. In that region the acceptance is lowered and with present Run-I data no exclusion is possible. General investigation of light charginos Now we discuss the limits on chargino and neutralino masses in more generality. Fixing the bino-singlino to be very light, ∼ 50 GeV, there are three relevant mass parameters: the Dirac wino-triplino mass M D W and the two higgsino masses µ d,u . Further, the exclusion limits depend on the slepton masses. Figure 4 shows the exclusion regions in the plane of the two higgsino masses µ d -µ u , for the case that also right-handed staus are light. In the left plot all other sleptons are heavy; in the right plot the right-handed sleptons of the other generations are also light. In both plots there is an interesting non-excluded region with very small µ d , below around 150 GeV. This is consistent with the discussion of the previous subsection and with the corresponding region in Fig. 3. The region for µ d between around 150 and 400 GeV corresponds to the triangular region in Fig. 3, in which the decay of the higgsino into stau is dominant, and is excluded. In the case of light right-handed sleptons of all generations, the region for the other higgsino mass µ u below around 300 GeV is also excluded. This originates from the mixing with the bino, leading to non-negligible and democratically distributed decay to all right-handed sleptons in addition to the decay to Z and Higgs boson. Therefore selectrons and smuons will be produced in large enough abundance to be picked up by the experiments. As discussed before, this is in contrast to the down-higgsino, where the mixing to the bino is suppressed via large tan β and the large Yukawa coupling of tau leads to dominating decay into staus. In the case of heavy sleptons of the first two generations, the exclusion power on µ u is very weak as the decay to Z and Higgs boson have comparable branching ratios. 
Thus the case of light right-handed staus leads to two distinct viable parameter regions, one with very small µ d and one with larger µ d . These two regions are represented by the two benchmark points BMP4 and BMP6. Now we consider the case in which all sleptons, including the staus, are heavy. Figure 5 shows the exclusion regions for this case, once in the µ d − µ u plane (with M D W = 500 GeV), and once in the µ d =µ u − M D W plane. Both plots show that once both higgsinos are heavier than around 300 GeV there are parameter regions that are safe from exclusion. This generic, viable parameter region for heavy sleptons is represented by benchmark point BMP5. The left plot of Fig. 5 analyzes the case that one of the higgsinos is lighter. It shows two additional strips of parameter space at higgsino masses around 150 GeV, which are not excluded by the considered LHC searches. The physics reason for the existence of these regions are the higgsino decay patterns. In the non-excluded strips the dominant decay of the light higgsino is a two-body decay into the LSP and W/Z or SM-like Higgs boson. Due to the small higgsino mass, the LSP is too soft to allow a discrimination of the events from the background of standard on-shell W/Z/Higgs production and decay. For even lower higgsino masses, the dominant decay modes are 3-body decays via the SM bosons. In this case the SM background for the lepton searches is smaller and the exclusion power higher. We exemplify the region of low µ d by the benchmark point BMP4. For the region of low µ u we also need to consider constraints from dark matter searches, which will be described in the next section. In the right of Fig. 5 masses of the wino-triplino M D W below 200 GeV for higgsino masses above 300 GeV can not be excluded by the analyses used here. As was discussed before, the wino-triplino decays predominantly into Higgs bosons and LSP and the Higgs boson again mainly into bottom quarks. As described before, this gives a low exclusion power since the experimental analyses so far are not sensitive to the Higgs bosons in the final state. In summary, our investigations show that there are three qualitatively different viable parameter regions compatible with very light bino-singlino state, represented by the three benchmark points BMP4,5,6. Two regions are characterized by light right-handed stau and either light or heavy down-higgsino (where light/heavy means masses around 100 GeV/larger than around 400 GeV, respectively). The third region is characterized by generally heavy sleptons and higgsino masses above around 300 GeV. In all cases, the winotriplino mass is not very critical; the benchmark points have M D W between 400-600 GeV, but smaller values are allowed as well, and do not give rise to a different phenomenology. Dark matter constraints As was pointed out in Sec. 2 the scenario with a light singlet Higgs state considered in this work leads to an upper bound on the bino-singlino mass parameter M D B < 55 GeV. The neutralino, which is mainly given by this component, will become the LSP. Therefore, it is the dark matter candidate of our model and we will study here how viable this scenario is when confronted with constraints from dark matter searches. Dirac neutralinos as a candidate for dark matter were already studied in Ref. [34,36]. Especially in Ref. 
[36] it was clarified that the bino-singlino state is viable candidate as it is possible to achieve the correct relic abundance by annihilation to leptons through sleptons, especially a light righthanded stau. Additionally, the spin-independent direct detection channels were investigated and the squark and Z boson exchange as main contributions were identified. Interference between those two channels was neglected, but we will see in the following that it plays an important role to evade the direct detection limits. The technical setup for the numerical computations described below is as follows. We use MicrOMEGAs-4.1.8 [94] to calculate the relic density and direct detection rate for the MRSSM. The model input is given by the CalcHEP output of SARAH and for each parameter point the mass spectrum and couplings calculated with SPheno at full one-loop level is passed to the program. For the comparison to the experimental data and statistical inter- Table 6. Values of dark matter observables for the benchmark points. pretation of the direct detection part we use LUXCalc [95], which uses results from the LUX experiment [96], and input on the cross section for each parameter point from MicrOMEGAs. Relic density We first consider the question which parameter space of the light-singlet scenario is compatible with the measured dark matter relic density. As in the MSSM, the crucial requirement is to achieve sufficiently effective LSP pair annihilation processes. In the parameter regions found in the previous sections, it turns out that two cases are promising. Either, if the condition m χ 1 ≈ M Z /2 is valid and S-channel resonant LSP pair annihilation into Z bosons is possible, or if right-handed staus are light and enable annihilation via t-channel stau exchange into tau leptons, see the corresponding Feynman diagrams in Fig. 6, when f,f replaced by τ,τ for non-Z-exchange diagrams. Fig. 7 (left) shows the resulting allowed contour in the M D B -mτ R parameter space. In the plot, all non-varied parameters are fixed to the values of BMP6, but this choice is not crucial. The important feature is that, as in the MSSM, to meet the measured value of the relic density the stau mass has to lie in a small interval once the other parameters are given. The contour has a sharp resonance-like peak around M D B ≈ M Z /2 which results from the S-channel annihilation process. The two different possibilities mentioned above correspond to being at the resonance or away from it. In the resonance peak the required stau mass has to be rather high. This is the situation realized in our benchmark point BMP5. For this scenario to be viable it has to be ensured that the Z boson does not decay to the LSP with a detectable effect to the invisible width of the Z boson. Since the same interaction is also important for, and thus constrained by, the direct detection cross section, we will discuss it in the subsequent subsection. For values of M D B away from the resonance, Fig. 7 (left) shows that the required stau mass is small and below 150 GeV. The benchmark points BMP4 and BMP6 represent this case. For such light staus the LEP bound on the stau mass of ≈ 82 GeV [97] needs to be taken into account. In general, the LEP bound implies that there is a lower limit on M D B for which the dark matter density can be explained. Table 6 provides the precise values for the LSP and right-handed stau masses, as well as the relic density, for the benchmark points. Again the different characteristics of BMP5 and BMP4/BMP6 are visible. 
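The sharp resonance-like peak of the relic-density contour near M_B^D ≈ M_Z/2 noted above can be made plausible with a one-line estimate: non-relativistic LSP annihilation probes s ≈ 4 m_χ², so the s-channel Z exchange carries a Breit–Wigner factor peaked at m_χ ≈ M_Z/2. The sketch below prints this enhancement for a few LSP masses; it is a qualitative illustration only and not the micrOMEGAs computation used for Fig. 7.

```python
M_Z, GAMMA_Z = 91.19, 2.50   # GeV (rounded PDG-like values)

def bw_enhancement(m_chi, m_ref=30.0):
    """Breit-Wigner factor of the s-channel Z propagator at s ~ 4*m_chi^2
    (non-relativistic annihilation), normalised to an off-resonance reference mass."""
    def prop(m):
        s = 4.0 * m ** 2
        return 1.0 / ((s - M_Z ** 2) ** 2 + (M_Z * GAMMA_Z) ** 2)
    return prop(m_chi) / prop(m_ref)

for m in (30.0, 40.0, 45.6, 50.0, 55.0):
    print(f"m_chi = {m:5.1f} GeV   relative enhancement ~ {bw_enhancement(m):7.1f}")
```

Away from the resonance this enhancement is absent, which is why the t-channel stau exchange, and hence a light right-handed stau, is then needed to reach the observed relic density, as realised in BMP4 and BMP6.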
The benchmark point BMP4 has been tuned to give exactly the experimental value of Ωh 2 = 0.1199 ± 0.0027 [98]. For BMP5 (BMP6) the bino-singlino (stau) mass can easily be tuned by a small percentage to achieve full agreement with the measured Ωh 2 . Direct detection The spin-independent dark matter-nucleon scattering cross section can be given in terms of two scattering amplitudes f p , f n . These are conventionally normalized such that the total spin-independent cross section at zero momentum transfer is Here µ 2 Z A is the dark matter-nucleon reduced mass, and A and Z are atomic mass and number, respectively. As noted in Ref. [36], the spin-independent cross-section for Dirac neutralinos is dominated by the vector part of the Z boson-exchange and squark-exchange contributions. The relevant Feynman diagrams are shown in Fig. 6, when f,f replaced by u,ũ and d,d. Each contribution can lead to large scattering rates and thus to strong bounds on the parameter space. For this reason we provide explicit expressions for the relevant amplitudes. For the Z-mediated diagram we obtain Here we also use the shorthand notation s W = sin θ W , c W = cos θ W . The coupling factors Z i,3−4 arise because the Z boson only couples to the (R-)higgsino content of the LSP. Qualitatively, in order to suppress these amplitudes and to ensure agreement with current direct detection bounds, the (R-)higgsino mix-in to the LSP has to be suppressed. This leads particularly to constraints on the higgsino mass parameters µ u and µ d . As a simple illustration, we assume tan β 1, λ u = 0, and µ u M D B , g 1 v and further that the direct mixing between up-higgsino and bino will be dominant. In this case, the mixing matrix elements appearing in Eqs. (4.2) and (4.3) take the form clearly exhibiting the suppression by large µ u . Similarly, we have evaluated the squarkmediated diagrams of Fig. 6 for general masses of the four first-generation squarks,ũ L ,ũ R , d L ,d R . For the qualitative discussion it is sufficient to provide the results in the limit of heavy squarks of equal mass mq = md ≈ mũ m χ 1 . In this limit we obtain f heavy squark The minus signs result from the different Dirac structure of the S and U channels that contribute to the process. Therefore, in order to suppress these amplitudes and ensure agreement with current bounds, the squarks need to be sufficiently heavy. 6 For our quantitative evaluation we include the full experimental information from LUX by using LUXCalc and the complete theoretical prediction of the proton and neutron scattering amplitudes. A likelihood is then constructed from Poisson distribution where N and b are the observed and expected numbers of events by the LUX experiment, respectively, and µ is the expected number of signal events for a given WIMP mass and its effective couplings. Furthermore, we assume that DM consists in equal proportions from χ andχ and use default settings for the halo profile in MicrOMEGAs. On the right of Fig. 7, the 95 % and 90 % exclusion bounds (violet (dark) and yellow (light dark) regions) derived using the log likelihood for the direct detection by LUX given in Eq. (4.7) are shown depending on µ u and the first/second generation squark masses. Here, it can be seen that the derived limits are quite sensitive to the combination of both parameters. This dependence stems from an interference of the amplitudes and can be understood by a simplified analysis of the expressions, as given before. In the approximation leading to Eq. 
where s²_W = 1/4 has been taken in the proton case. Plugging these results into the expression for the dark matter–nucleus cross section, Eq. (4.1), we find complete destructive interference, i.e. σ_DM−N = 0, for a particular ratio of the Z- and squark-mediated contributions, Eq. (4.9); the numerical value for xenon is calculated with Z = 54 and A = 131.3. The line of destructive interference corresponding to Eq. (4.9), shown as a full line in Fig. 7 (right), explains the funnel-shaped allowed region. Above the line, the squark-mediated amplitudes become small; to the right of the line, the Z-mediated amplitudes become small. It should be noted that the exclusion bounds are calculated using the complete information of MicrOMEGAs and LUXCalc, while Eq. (4.9) is only approximate. As the squark masses of the first two generations are limited by LHC searches, the direct detection non-discovery provides a lower limit on µ_u. This also excludes the region of low µ_u allowed by LHC searches in the left of Fig. 5. As discussed before, the amplitude for the Z-mediated exchange is closely related to the one responsible for Z-boson decay to a pair of LSPs. Both depend strongly on the neutralino mixing-matrix elements Z_{i,3−4}. The direct detection limit of Fig. 7 constrains the Z_{i,3−4} strongly enough that Γ(Z → χ⁰₁χ̄⁰₁) ≲ 3 MeV is ensured over the whole parameter space. Xenon is not the only element used in direct detection experiments; therefore the change in the values of A and Z has to be taken into account when using Eq. (4.9) for other cases. The variation of the ratio Z/(A − Z) (0.7 for xenon, 0.8 for argon and germanium) is small enough that the regions of strong destructive interference for different experiments will not overlap in the MRSSM parameter space. Here we only take the LUX result into account, as it is the most sensitive one available. Summary and conclusions The minimal R-symmetric model, the MRSSM, is a promising alternative to the MSSM, which predicts Dirac gauginos and higgsinos as well as scalars in the adjoint representation of SU(3)×SU(2)×U(1). Here we have investigated the possibility that the scalar singlet mass is small and gives rise to a singlet-like mass eigenstate with mass below 125 GeV. The potential advantages of this scenario are an increased tree-level SM-like Higgs boson mass, many light weakly interacting particles which could be discovered at the next LHC run, and the possibility to explain dark matter. Many of the model parameters are strongly constrained in order to make this scenario viable. Here we briefly summarize the main constraints. Fig. 8 summarizes the four most important non-LHC constraints on the relevant parameters. There, the SM-like Higgs boson mass is given by the green-yellow colour bands, while the red area shows the regions excluded by direct detection searches. Black full (blue dashed) lines give the W-boson mass (Ωh²) contours. All plots are based on BMP6, except for Fig. 8(d), where BMP4 was used. In order to realize a light singlet-like Higgs boson in the first place, not only the parameter m_S but also M_B^D and λ_u must be very small. The SM-like Higgs boson mass is increased at tree level compared to the case with a heavy singlet or to the MSSM, but loop contributions governed by Λ_u are still important. Accordingly, the plots of Fig. 8 show that the Higgs mass measurement essentially fixes Λ_u and constrains M_B^D and µ_u.
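As a quick numerical cross-check of the statement about different target nuclei, the short sketch below evaluates the nuclear ratio Z/(A − Z) for the quoted elements. The exact proportionality factors of Eq. (4.9) are not reproduced here, so treat this purely as an illustration of the quoted 0.7 and 0.8 values; the mass numbers used are standard averages.

# Minimal sketch: evaluate Z/(A - Z) for common direct-detection targets.
# Assumption: this ratio is the nucleus-dependent factor controlling where the
# Z-exchange and squark-exchange amplitudes can cancel (cf. Eq. (4.9)).
targets = {
    "xenon": (54, 131.3),
    "argon": (18, 40.0),
    "germanium": (32, 72.6),
}

for name, (Z, A) in targets.items():
    print(f"{name:9s}  Z/(A-Z) = {Z / (A - Z):.2f}")

# Prints roughly 0.70 for xenon and about 0.8 for argon and germanium,
# matching the values quoted in the text.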
We have found that the other constraints from Higgs physics, arising from measurements of Higgs properties and from the non-discovery of further Higgs states, can be easily fulfilled. As discussed in Ref. [46], the W-boson mass can be explained simultaneously with the SM-like Higgs boson mass. Fig. 8 confirms that this is still the case in the light-singlet scenario. The dark matter relic density can be explained in the MRSSM light-singlet scenario because the Dirac bino-singlino neutralino is necessarily the stable LSP. Agreement with the observed relic density requires particular combinations of the bino-singlino mass M_B^D and the right-handed stau mass. If the latter is heavy, as in our benchmark point BMP5, the LSP mass must be close to half of the Z-boson mass. If the LSP mass is away from the Z-boson resonance, the right-handed stau must be light, as in our BMP4 and BMP6. This resonance behaviour is clearly displayed in Fig. 8(a). Parameters like µ_u and µ_d influence the relic density through their impact on the LSP mass via mixing, Fig. 8(d). Further bounds on the model parameters arise from the negative searches for dark matter and from the negative LHC searches for SUSY particles. The direct dark matter searches correlate µ_u with m²_q̃R;1,2, Fig. 8(b). Because of the non-discovery of squarks at the LHC so far, this gives rise to a lower bound on µ_u, which is not affected by other parameters of the weak sector like Λ_u, Fig. 8(c). Together with the Higgs mass value, however, these searches constrain both µ_u and m²_q̃R;1,2 to a rather narrow range, see Fig. 8(b). Our recast of LHC analyses has revealed three distinct viable parameter regions of interest, represented by the three benchmark parameter points BMP4, BMP5, and BMP6. The most obvious region is characterized by heavy M_W^D, µ_d and sleptons (BMP5). A second is characterized by a very light right-handed stau with mass around 100 GeV (BMP6); in the third region the right-handed stau and µ_d are both very light (BMP4). Fig. 8(d), which gives the parameter variation for BMP4, shows that this region is also allowed by the other constraints. As explained in Sec. 3, these two regions are allowed because the existing LHC searches become ineffective. The LHC searches alone would allow further parameter regions with very small µ_u, which are however excluded by the dark matter constraints. To summarize: the experimental data from collider and dark matter experiments impose stringent constraints on the parameters of the light-scalar scenario of the MRSSM. Nevertheless, we have identified regions in the parameter space and proposed representative benchmark points fulfilling all the constraints. All viable parameter regions are characterized by several light weakly interacting SUSY particles and will be tested both by future dark matter and LHC SUSY searches. The chargino mass matrix breaks into two 2×2 submatrices. The first, in the basis (T⁻, H_d⁻), (W⁺, R_d⁺) of spinors with R-charge equal to electric charge, is diagonalized and transformed to mass eigenstates λ±_i by two unitary matrices U₁ and V₁, and the corresponding physical four-component charginos are built from these eigenstates. The second submatrix, in the basis (W⁻, R_u⁻), (T⁺, H_u⁺) of spinors with R-charge equal to minus electric charge, is diagonalized and transformed to mass eigenstates η±_i by the matrices U₂ and V₂; a generic sketch of this bi-unitary diagonalization is given below.
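For completeness, the block below sketches the generic bi-unitary (singular-value) diagonalization that the text describes for the two chargino submatrices. The submatrices are written here with the placeholder names m^(1) and m^(2) only because their explicit forms did not survive extraction, and the placement of complex conjugation and daggers follows a common convention that may differ from the paper's.

% Generic bi-unitary diagonalization of the two 2x2 chargino submatrices
% (placeholder names m^(1), m^(2); convention assumed, not taken from the paper):
\begin{align}
  U_{1}^{*}\, m^{(1)}\, V_{1}^{\dagger} &= \mathrm{diag}\!\left(m_{\lambda^{\pm}_{1}},\, m_{\lambda^{\pm}_{2}}\right), \\
  U_{2}^{*}\, m^{(2)}\, V_{2}^{\dagger} &= \mathrm{diag}\!\left(m_{\eta^{\pm}_{1}},\, m_{\eta^{\pm}_{2}}\right).
\end{align}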
10,847.2
2015-11-30T00:00:00.000
[ "Physics" ]
SARS-CoV-2 disease severity and diabetes: why the connection and what is to be done? Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a novel virus responsible for the current coronavirus disease 2019 (COVID-19) pandemic, has infected over 3.5 million people all over the world since the first case was reported from Wuhan, China 5 months ago. As more epidemiological data regarding COVID-19 patients is acquired, factors that increase the severity of the infection are being identified and reported. One of the most consistent co-morbidities associated with worse outcome in COVID-19 patients is diabetes, along with age and cardiovascular disease. Studies on the association of diabetes with other acute respiratory infections, namely SARS, MERS, and Influenza, outline what seems to be an underlying factor in diabetic patients that makes them more susceptible to complications. In this review we summarize what we think may be the factors driving this pattern between diabetes, aging and poor outcomes in respiratory infections. We also review therapeutic considerations and strategies for treatment of COVID-19 in diabetic patients, and how the additional challenge of this co-morbidity requires attention to glucose homeostasis so as to achieve the best outcomes possible for patients. Introduction Since the first reported case on December 9, 2019, in Wuhan, China, cases of the coronavirus 2019 disease quickly escalated to global pandemic levels within a month. At present, 5 months later, the number of confirmed cases surpasses 3.5 million, reported across 200 countries and regions all over the world [1,2]. The virus responsible for COVID-19, now named SARS-CoV-2, had not previously been described in humans or animals and was reported in January 2020 after isolation from bronchoalveolar fluid of patients [3,4]. It is a new beta-coronavirus that shares a high percentage of genomic similarities with SARS-CoV, the coronavirus responsible for the severe acute respiratory syndrome (SARS) outbreak in 2003 [4]. Among the respiratory syndromes caused by beta-coronaviruses in humans, SARS-CoV-2 infection has a higher basic reproduction number compared to SARS-CoV and the Middle East respiratory syndrome (MERS)-CoV, representing a greater threat for international public health [5]. Preliminary estimates of mortality rates indicate that SARS-CoV-2 infection is less often fatal than MERS-and SARS-CoV respiratory infections (3.4% against 34.4 and 15%, respectively) [2,6,7]. However, as evidenced by its higher reproduction rate, the COVID-19 outbreak already has caused a higher total number of deaths than the other coronavirus respiratory syndromes. While cases of COVID-19 were estimated at around 3.5 million around the world as of May 4th, 2020, with over 250,000 deaths, the 2003 SARS outbreak had only 8000 cases across 29 countries, with 774 total deaths, while the 2012 MERS outbreak had 2494 confirmed cases, reported by 27 countries, with a total of 858 deaths [2,6,7]. As an ongoing challenge and with the number of confirmed cases increasing exponentially, epidemiological data characterizing COVID-19 patients are valuable in helping to control the spread and treatment of the disease. While it's considered a respiratory virus, it is now evident that is invades many organs beside the upper and lower respiratory tract and can cause multisystem-wide disease with characteristics unique to each organ it invades. 
To date, studies describing comorbidities related to COVID-19 report correlations with distinctive underlying diseases. In one of the first reports describing the clinical features of hospitalized COVID-19 patients published in February 2020, Huang et al. found that 32% (13 patients) had other health issues, most common being diabetes, cardiovascular disease, hypertension, and chronic obstructive pulmonary disease [8]. The high incidence of diabetes throughout the world makes this truly worrisome as the pandemic has spread. Subsequent studies and broader data collection confirmed such observations. In a study including 1099 patients, 15.7% of whom were manifesting severe symptoms, the most common co-morbidities were hypertension (23.7%), diabetes (16.2%), heart disease (5.8%), and cerebrovascular disease (2.3%) [9]. Hypertension and diabetes were also the most common co-morbidities in a third study that included 140 patients, with a prevalence of 30 and 12%, respectively, where they also correlated with the severity and clinical outcome of the disease [10]. In a study of 138 patients, Wang et al. found that 46.4% (64 patients) had other underlying conditions and, notably, that number is much higher (72.2%) amongst patients in the intensive care unit [11]. In addition, among cases with a negative outcome, Yang et al. described that, of a group of 32 non-survivors, 7 (22%) had cerebrovascular diseases, and 7 had diabetes [12]. In a systematic review including eight different studies, encompassing a total of 46,248 confirmed cases, Yang et al. found that the most common co-morbidities among COVID-19 patients were hypertension (17%) and diabetes (8%). Patients with other underlying respiratory diseases amounted to only 2% [13]. A summary report from the Chinese Center for Disease Control of 72,314 cases, the largest case load to date, informs us that the overall fatality from SARS-CoV-2 is 2.3% but that percentage is 7.3% in those with diabetes [14]. So, in sum, while evidence that diabetes per se makes it more likely that one gets infected with SARS-CoV-2 is inconclusive or lacking altogether (because we lack information on the group of people who get infected but have no symptoms), the diabetic condition makes it more likely that infection will be more severe and therefore will result in more hospitalizations, that hospitalized diabetic patients will need more intensive care, that diabetes patients will spend a longer time in hospital until they are discharged, and that patients with diabetes are more likely to die than are nondiabetic patients. Pathophysiology of infections in people with diabetes Generalized immunological findings with viral infections in diabetic patients It is known that diabetic patients, especially with uncontrolled glycemia, are at higher risk to contract infections, a trend that correlates tightly with glycated hemoglobin levels. In clinical practice, higher incidence of foot infections, yeast infections, urinary tract infections, and surgical site infections is commonly seen in diabetic patients [15][16][17][18]. There are several host factors that can explain this observation, ranging from local changes in bacterial colonization, to systemic alterations in immune response. For instance, cytokine release by macrophages and T-cells is disturbed in diabetes, which also impairs neutrophil recruitment [19]. Alteration of innate immune responses and humoral innate immunity were observed both in vitro and in diabetic patients [20][21][22]. 
In non-obese diabetic mice, basal levels of interferon-α (IFN-α), a type I interferon, are reduced when compared to pre-diabetic mice, and IFN-γ production by CD8+ T cells is impaired [21]. Similarly, in persons with either type 1 or type 2 diabetes (T1DM and T2DM, respectively), IFN-α production by dendritic cells is reduced [22]. Impaired dendritic cell functions, together with reduced activation of natural killer cells, compromise proper adaptive immune responses (Fig. 1). In the adipose tissue of T2DM mouse models and human patients, regulatory or anti-inflammatory macrophages (M2) and regulatory T cells (Treg) change their profile to pro-inflammatory macrophages (M1) and Th1 and Th17 CD4+ T cells [23]. As such, delayed activation of the Th1 response and a late hyper-inflammatory reaction are regular findings in diabetes [24]. These alterations in immune profile likely contribute to the reasons why patients with DM have a higher incidence and severity of infections. Type I IFN plays a crucial role in CD4+ T cell differentiation and antiviral response [25]. Dendritic cells are a necessary element to induce type I IFN-dependent polarization of CD4+ T cells into follicular helper T cells (Tfh), vital for humoral immune responses against viral infections [26]. Interestingly, CD4+ T cell polarization into Tfh happens with IFN exposure at early stages of the infection, while exposure at delayed stages promotes differentiation into Th1 cells [26]. As mentioned above, reduced IFN-α production by dendritic cells is a feature found in both T1DM and T2DM patients, which may favor Th1 differentiation to the detriment of Tfh. While the SARS-CoV responsible for the 2003 outbreak is able to suppress the induction of type I IFN in dendritic cells by inhibiting the transcription factor IRF-3 (interferon regulatory transcription factor-3) [27], this does not seem to be the case for SARS-CoV-2 [28]. In fact, Lokugamage and colleagues show that SARS-CoV-2 is much more sensitive to type I IFN priming than SARS-CoV, as measured by reduced viral protein and replication in Vero E6 cells [28]. As such, the reduced basal levels of type I IFN observed in diabetic patients offer an additional explanation for the increased risk and severity of COVID-19. Human IFNα2b by vapor inhalation is an adjuvant treatment recommended in Chinese guidelines for the treatment of coronavirus infections [29]. At least two ongoing clinical trials (NCT04276688 and ChiCTR2000029387) are now evaluating two different type I IFNs (IFNβ1b and IFNα2b, respectively) in combination with other antiviral drugs for the treatment of COVID-19. In an investigator-initiated open-label study, human recombinant IFNα nasal drops seemed to effectively prevent infection in medical personnel who were likely to be exposed to SARS-CoV-2, indicating that IFNα inhalation has promise for protecting susceptible healthy people during the coronavirus pandemic, and underlining the importance of IFN in this disease [30]. If this report is replicated, it would be a hugely significant advance in fighting this disease. Diabetes, aging, and immune function According to the 2020 National Diabetes Statistics Report, elaborated and published by the CDC, 13% of all US adults (age 18 and up) have diabetes, corresponding to 10.4% of the total population. Additionally, the prevalence of diabetes increases with age.
While 4.3% of adults between 18 and 44 years of age have diabetes, this number increases to 17.5% in the age range of 45-64 years, and to 26.8% in people 65 and above [31]. Even though elderly individuals with DM get infections that are similar to common infections found in younger DM patients, age-related immune senescence is an additional component to the already compromised cell-mediated immunity seen in diabetic persons. Several different aspects of the immune response are altered during aging. Of those, changes in the T-cell-mediated response are the most studied and best described. Due to gradual thymic involution with aging, production of naive T cells is decreased, resulting in a limited diversity of T-cell receptors apt to recognize new antigens. As such, even though the total number of T cells is mostly unaltered with age, there is a higher prevalence of memory T lymphocytes in older individuals, to the detriment of naive cells [32]. This characteristic is one of the reasons why elderly persons have an adequate immune response to previously encountered antigens but a poorer response when exposed to new ones. Decreased numbers of naive T cells, however, are only one aspect of the T-cell system that is altered with aging. Expression of IL2 and its receptor, IL2R, and signal transduction are decreased with aging [33], and a number of other steps in T-cell maturation and activation are also affected. One factor that is particularly interesting in the context of diabetes is the association of type I IFNs with T-cell differentiation and survival. As previously mentioned, type I IFNs produced by dendritic cells are a crucial element in CD4+ T cell differentiation and antiviral response. When dendritic cells present antigen molecules to antigen-specific T cells, there is clonal expansion of these lymphocytes and subsequent differentiation into effector cells, some of which will, later, establish T-cell memory. In elderly individuals this process is compromised. Li and colleagues [34] found that, even though the expression of the IFN receptor and elements of its intracellular signaling (JAK-STAT pathway) did not change with age, the type I IFN response was still diminished in activated naive CD4+ T cells from older individuals. Rather than decreased expression of components of the IFN response, they found an upregulation of inhibitory elements. (Fig. 1 legend: As is the case for SARS-CoV, SARS-CoV-2 uses the ectopeptidase ACE2 as an entry site in human host cells. ACE2 is part of the renin-angiotensin system, responsible for the termination of the angiotensin signal, promoting conversion of angiotensin II (Ang II) into angiotensin 1-7 (Ang 1-7). Ang II acts through the angiotensin receptor 1 (AT1), which results in vasoconstriction and sympathetic nervous stimulation. Ang 1-7, on the other hand, acts on the angiotensin receptor 2 (AT2) and has opposite effects, inducing vasodilation. ACE2 is expressed throughout the respiratory epithelium, from the nasal and oral mucosa to the epithelium in the alveoli. Diabetic patients are at higher risk of infection and complications from COVID-19 as a result of several factors. A dampened antiviral response with reduced interferon production is a common feature seen in persons with diabetes. Additionally, diabetic patients may be more vulnerable to SARS-CoV-2 infection due to elevated ACE2 expression. Finally, diabetes-related subclinical pulmonary dysfunction and microvascular disease may be aggravating factors that contribute to the severity of the respiratory symptoms associated with COVID-19.)
Thus, CD4+ T cells of elderly persons can be considered relatively resistant to IFN, contributing to an impaired cell-mediated immune response. This may represent an additional burden in elderly individuals with DM since, as described above, IFN-α production by dendritic cells is reduced in T2DM, which may have therapeutic implications. Diabetes and respiratory infections Aside from bacterial and yeast infections, a correlation between diabetes and the prevalence and severity of viral respiratory infections is also well described. People with either T1DM or T2DM are reported to have a higher risk of serious complications from the common flu [35], caused by different influenza viruses, members of the myxovirus family. According to the Centers for Disease Control and Prevention (CDC), around 30% of patients hospitalized due to common flu-related complications had diabetes. The 2009 swine flu pandemic, caused by an outbreak of influenza A virus subtype H1N1 (A/H1N1), showed similar numbers. A Canadian study of 239 hospitalized patients with PCR-confirmed influenza A/H1N1 found that diabetes triples the risk of hospitalization and that, once hospitalized, diabetic patients had quadruple the risk of being admitted to intensive care [36]. Diabetes and plasma glucose levels were also correlated with severity and mortality in SARS patients [37]. In a retrospective analysis, Yang et al. compared data from over 400 patients and found that fasting plasma glucose levels were higher in SARS-positive patients than in patients with non-SARS pneumonia. Interestingly, in patients who had died from SARS infection, plasma glucose levels were even higher. Similarly, 21.5% of the patients who had a negative outcome from SARS infection had a history of diabetes, a percentage significantly higher than among those who recovered from the infection (3.9%). Of particular interest right now, Kulcsar and colleagues studied the severity of MERS-CoV infection in a mouse model of diabetes induced by a high-fat diet [38]. Their results show a more severe and prolonged acute phase of infection in diabetic mice, with impaired CD4+ T cell recruitment to the lungs and altered cytokine profiles, such as elevated IL-17. A similar inflammatory response is observed in COVID-19 patients as well, where low CD4+ T cell counts are seen together with a disproportionately high number of Th17 cells. As mentioned previously, one characteristic of the cell-mediated response during aging is decreased IL2 production and signaling. This also favors CD4+ T cell differentiation into Th17 cells [39]. In addition, this inflammatory profile is accentuated in diabetes, and may go a long way towards explaining the severity of COVID-19 in diabetic patients [40]. The diabetic lung An often-neglected characteristic of diabetes is its effect on the lungs. One common finding in diabetic patients is abnormalities in small vessels, with thickening of the capillary basement membrane, that affect many organs and tissues. This condition, known as diabetic microangiopathy or microvascular disease, is an important factor in determining the prognosis of diabetic patients, and it frequently affects the kidneys, eyes, skin, and muscle. Another important target, however, and particularly relevant to this discussion, is the lungs. Alveolar diabetic microangiopathy usually presents as subclinical pulmonary dysfunction under normal conditions (Fig. 1), since the alveolar microvasculature is structured with a large physiological reserve.
However, even in otherwise healthy diabetic patients, restriction of lung volume and diffusing capacity can be found in both T1DM and T2DM [41]. In a cross-sectional study of 69 patients, peak oxygen uptake, pulmonary blood flow, diffusing capacity, and capillary blood volume were all found to be significantly reduced in nonsmoking T2DM patients during exercise. This functional impairment correlated directly with glycemic control (as measured by HbA1C levels), extrapulmonary microangiopathy (retinopathy, neuropathy, and microalbuminuria), and was aggravated by obesity [41]. In a study with obese diabetic animals, Foster and colleagues found extensive fat deposits in the lungs, septal lipofibroblasts, interstitial matrix, and the alveolar macrophages, which correlated with alterations in the interstitial environment that presented as increased number of interstitial cells, matrix and collagen, and reduced capillary volume with thickening of capillary basement membrane [42]. Intracellular lipid droplets in macrophages is a hallmark feature of classical M1 activation, which is caused by a profound cellular metabolic shift towards aerobic glycolysis and decreased mitochondrial respiration [43]. Diabetes, in this case, stimulates macrophage activation in response to sterile inflammation, which may have profound consequences in the proper inflammatory response to infections, as discussed above. Another point of concern in diabetic patients regarding their pulmonary function is broncho-motor tone and control of ventilation, since autonomic neuropathy is a common finding that can be present in up to 30% of patients [44]. Impaired noradrenergic innervation in bronchioles occurs in diabetic patients with autonomic neuropathy despite apparent normality in respiratory functions [45]. Loss of autonomic innervation, however, may alter respiratory response to endogenous and exogenous stimuli. As such, it is very plausible that this damage has a negative impact on both the course and prognosis of respiratory infections. Diabetes and SARS-CoV-2 infection Even though a disturbance in immune response may explain the higher risk of infection and worse outcome of diabetic patients with influenza, the relationship between diabetes and respiratory infections caused by the coronaviruses MERS-CoV, SARS-CoV and SARS-CoV-2 may be more complicated. The first step in the process of viral infection is the attachment of the virus to its targeted cells. There are seven known human coronaviruses, all capable of infecting cells in the respiratory system: HCoV-OC43 and HCoV-229E were first described in the 1960s, and are thought to cause the common cold [46]; SARS-CoV, identified in 2003 [47]; HCoV-NL63 and HCoV-HKU1 in 2004, associated with weak respiratory infections [48]; MERS-CoV, described in 2012 [49]; and now SARS-CoV-2, responsible for the ongoing COVID-19 pandemic. In these coronaviruses, the cellular attachment and targeting is mediated by a glycoprotein, called spike glycoprotein, found in the enveloped surface of the virus. The interaction of this glycoprotein with specific molecular targets found in host cells allows for viral adhesion and, ultimately, cellular infection. Some of these targets were identified for human coronaviruses and, interestingly, all of them are membrane-bound exopeptidases. The aminopeptidase N (APN), a metalloprotease, mediates the attachment of HCoV-229E to host cells [50]. 
The angiotensin-converting enzyme 2 (ACE2), responsible for the hydrolysis of angiotensin II into angiotensin 1-7, is the co-receptor for SARS-CoV and HCoV-NL63 [51,52]. Finally, dipeptidyl peptidase 4 (DPP-4), a transmembrane glycoprotein, allows the attachment of MERS-CoV to human host cells, but not SARS-CoV-2 [53,54]. Different studies now suggest that, besides SARS-CoV and HCoV-NL63, ACE2 is also the host co-receptor for SARS-CoV-2 (Fig. 1). In in vitro studies, using HeLa cells with and without human ACE2 expression, Zhou et al. showed that the novel 2019 coronavirus can only infect those cells that express ACE2 [55]. Furthermore, Wrapp et al. reported the binding kinetics of SARS-CoV-2 to human ACE2, showing a strong and specific binding with an affinity 20x higher when compared to SARS-CoV [56]. As such, the association between ACE2 expression and SARS-CoV-2 infection vulnerability seems clear. In two different mouse models of diabetes, ACE2 expression levels are increased in different organs, including the lungs [57,58]. Recently, a similar finding was described in humans a well [54]. In a phenome-wide Mendelian randomization study investigating factors that causally correlate with pulmonary ACE2 expression, Rao et al. found that increased ACE2 expression had the most consistent causal link with T2DM traits. Significant correlations were also found for T1DM, as well as inflammatory bowel disease, ER+ breast and lung cancer, and asthma [54]. Supporting this finding, in a study with 106 COVID-19 patients, Chen et al. reported that diabetes negatively affects viral clearance, as measured by SARS-CoV-2 qRT-PCR [59]. Angiotensin is a prohormone that is activated into angiotensin II by the angiotensin-converting enzyme (ACE). Angiotensin II acts as an agonist for its receptor, the angiotensin receptor type 1 (AT1). ACE2, on the other hand, promotes the conversion of angiotensin II into angiotensin 1-7 (Fig. 1), which no longer is capable of activating AT1, but instead binds to and activates the angiotensin receptor type 2 (AT2). While AT1 activation is responsible for the major cardiovascular effects attributed to angiotensin II, inducing vasoconstriction, aldosterone synthesis and secretion, and renal tubular sodium reuptake, AT2 activity promotes vasodilation and natriuresis [60]. By controlling the ratio between angiotensin II and angiotensin 1-7, ACE2 is responsible for balancing the opposing roles of AT1 and AT2. In a mouse model, SARS-CoV infection reduced pulmonary ACE2 expression [61]. The same disturbance was observed in animals that were injected with the viral spike protein alone, indicating that the binding of ACE2 with the spike protein is enough for its downregulation. Interestingly, in this model, acute lung failure could be mitigated by renin-angiotensin antagonists [61]. Non-respiratory symptoms of COVID-19 Olfactory and gustation impairment can affect around 20% of diabetic patients [62,63]. Based on patient histories, altered taste and smell sensing are also a clinical feature in COVID-19. In some cases, altered chemosensation may be the only symptom. As such, it appears certain that the chemoreceptor cells for taste and smell are somehow affected by SARS-CoV-2 infection. 
The chemoreceptor cells for smell, which are modified neuronal cells, are located in the distal (superior) portion of the nasal cavity, encompassing the superior nasal conchae and nasal septum, and the distal (nasal) ends of the cells project cilia into the nasal cavity to 'sense' and discriminate between airborne chemicals. However, different single-cell RNA-seq studies indicate that ACE2 expression in the nasal epithelium is found exclusively in non-neuronal cells [64,65]. This observation suggests that SARS-CoV-2-related anosmia may be caused by infection of support and stem cells in the olfactory epithelium. Of note, taste buds, which contain the taste receptor cells (TRCs), are not only embedded within the lingual epithelium of fungiform, foliate and circumvallate papillae, but are also dispersed across the soft palate, nasopharynx, larynx, upper bronchi and proximal esophagus [66]. In murine tissue, ACE2 is expressed in fungiform and circumvallate papillae and throughout the tongue epithelium. And, of immense relevance to SARS-CoV-2, ACE2 is present on the membranes of type I (salt-sensing) and type II (sweet- and bitter-sensing) TRCs [67]. It is therefore highly likely that SARS-CoV-2 also gains entry to the TRCs at those sites (Fig. 1). There is, as far as we can ascertain from the current literature on the topic, no evidence of immune cells residing within taste buds under normal conditions [68,69], and therefore the capacity for any direct, early action of immune cells in taste buds to clear the virus is low, allowing the virus to replicate unimpeded and potentially shed so as to cause infection lower down in the pharynx. Direct infection of the TRCs within the enclosed space of the taste bud itself would also likely disturb the signaling machinery of the TRCs by impacting neurotransmitter signals: diminishing synthesis of neurotransmitters, causing unregulated neurotransmitter release or an inability to clear neurotransmitters adequately, leading to desensitization of receptors, and disturbing sensory afferent neurons and their receptors. Besides serving as viral reservoirs, we hypothesize that TRCs may serve as sentinels for an inflammatory response to SARS-CoV-2, and we propose that early loss of taste predicts a brisk IFN response to the virus, clearing of virus from the oral and nasal cavities, and a faster patient recovery. The innate immune NOD-like receptor pyrin 3 (NLRP3) inflammasome [70] in TRCs is very likely activated in response to SARS-CoV-2 infection: several viral components from the original SARS-CoV are known to be sensed as danger-associated molecular patterns (DAMPs) by the NLRP3 inflammasome [71]. The NLRP3 inflammasome activates caspase-1, which converts the pro-cytokines IL-1β and IL-18 into their active forms. Once activated, the cytokines recruit innate immune cells to the site of infection and cause production of IFN-γ. There is direct evidence that inflammation, IFN and taste are linked, because of the well-known clinical observation that IFN therapy is reported to cause altered taste and smell [72]. In mice, the two IFN-α receptor subunits IFNAR1 and IFNAR2, their downstream JAK kinases JAK1 and TYK2, and the transcription factors STAT1, STAT2, and IRF-9 are all expressed in normal TRCs, and LPS-induced systemic inflammation increases cleaved caspase levels and TRC death within taste buds [73]. Furthermore, with increased death of TRCs there could be increased viral shedding.
No loss of taste, therefore, may be associated with a weak immune response to the virus, and the use of IFN-based therapies [29,30], as described above, may both prevent the virus from gaining an ideal replication target and help clear virus from these sites. General principles of treatment of patients with diabetes There are approximately 30 million people diagnosed with diabetes in the USA, and the vast majority of patients who get a COVID-19 infection will be managed at home. As regards in-patients, in one retrospective study of adults with confirmed COVID-19 for whom 570 deaths or discharges were recorded, and for whom point-of-care blood glucose test results (BGs) were transmitted and stored, the mortality rate was 28.8% among the 184 patients with diabetes and/or uncontrolled hyperglycemia, as compared with 6.2% among the remaining 386 patients without diabetes or hyperglycemia [74]. So, hyperglycemia, per se, is clearly associated with increased mortality. To underline the necessity of good glucose control, in a retrospective analysis of more than 7,300 hospitalized COVID-19 patients with and without preexisting T2DM, well-controlled blood glucose, that is, maintaining levels within a range of 3.9 to 10.0 mmol/L, was associated with a significant reduction in composite adverse outcomes and death [75]. Attention to nutrition and adequate protein intake is important. Mineral and vitamin intake need to continue. Vitamin C in particular may have a role in COVID-19 prevention, as has been shown for other infections [76]. Hydration should be maintained, and treatment with acetaminophen, humidifier use, and continuation of CPAP in those with sleep apnea should continue. It seems reasonable that anti-hyperglycemic agents known to cause volume depletion, such as sodium-glucose cotransporter-2 (SGLT2) inhibitors, or hypoglycemia, such as chlorpropamide, should be avoided. Care should also be taken with the use of glucagon-like peptide-1 receptor (GLP-1R) agonists. Dose reduction or discontinuation altogether may be needed to prevent nausea, abdominal distress and dehydration in patients eating and drinking very little, or in patients with gastrointestinal symptoms directly caused by invasion of the gut by the virus. Dipeptidyl peptidase-4 (DPP4) inhibitors (see discussion below) are weak antihyperglycemic agents, are not first-line glucose-lowering drugs, and probably should be stopped in anyone with any indication of volume depletion or reduced kidney function. Non-hospitalized patients with T1DM, as well as T2DM patients using insulin, should measure blood glucose and urinary ketones frequently, especially if fever and increasing hyperglycemia occur, as is true during all infections. Frequent changes in dosage and correctional boluses are par for the course in order to maintain normal glucose levels. Hospitalized patients with severe COVID-19 symptoms and concomitant diabetes need an escalating frequency of blood glucose monitoring if infection signs worsen. Subcutaneous insulin via a basal-bolus regimen, where possible, in hospitalized patients, with a continuous insulin drip in an ICU setting, are the preferred methods to control blood glucose levels. Oral agents, especially metformin and SGLT2 inhibitors, should likely be stopped. Biguanides, of which metformin is the only one in use for patients with T2DM and pre-diabetes, are known to increase lactic acid production in the gut, which is then normally cleared in the liver.
However, as recently described by two independent groups, the human gut epithelium, which contains transmembranal ACE2, is a site of COVID-19 infection that supports viral replication and spread [77,78], and therefore the expectation would be that endogenous lactic acid levels are already high in gut, as well as other organs, due to infection, which could make the risk of lactic acidosis more likely in the presence of metformin. This might also have implications for lung function: acidosis would impede metabolism and cellular function in infected alveolar cells and T lymphocytes at a time of most vulnerability (see Discussion above). To reiterate, any drug use that might facilitate dehydration would seem unwise. Increasing thrombosis, especially venous thromboses and micro vascular thromboses in many organs including in pulmonary vasculature [79], and a hypercoagulable state with low platelets [80] are now being reported to occur in extremely sick COVID-19 patients; dehydration would be seriously detrimental under those conditions. Specific conundrums in SARS-CoV-2 management in the setting of diabetes (a) Chloroquine (CQ) and Hydroxychloroquine (HCQ). There has been much discussion on the use of CQ and HCQ as therapy for COVID-19 itself, and It is worth remembering that HCQ is an approved treatment, though not first line, for diabetes since 2014 in India. When used for malaria prophylaxis and for treating autoimmune arthritis, such as rheumatoid arthritis and systemic lupus erythematosus (SLE), patients appeared to have reduced risk of T2DM, though in those conditions risk may be reduced for other reasons besides use of the drugs (less obesity, more attention to nutrition, for instance). It was found that HCQ treatment led to less insulin requirements in T2DM, without altering Cpeptide levels. This would imply that HCQ reduces clearance of insulin, and not that it increases its secretion [81]. Clinical case reports of its use in T1DM would seem to bear this out because improved glycemic control in the setting of reduced insulin requirements have also been reported in that condition. Additionally, there are reports that CQ improves insulin sensitivity through activation of AKT, which would result in greater glucose uptake and glycogen synthesis [82]. An additional piece of the puzzle relates to adiponectin. It was found that use of the HCQ and CQ leads to increased adiponectin levels in circulation, which is well-described to have favorable effects on glucose uptake/utilization because of its insulin-sensitizing effects. Adiponectin also has antiinflammatory and anti-apoptotic effects that one could hypothesize would have added benefits during COVID-19, or indeed, any viral infection [83]. As regards its anti-inflammatory properties, HCQ use leads to a significant decrease in the production of proinflammatory markers and cytokines, which is why it is a successful disease modifying anti-inflammatory agent in autoimmune diseases. Based on in vitro studies, it changes the pH of acidic intracellular organelles including endosomes/lysosomes, essential for the membrane fusion and it was believed that both the agents could be effective tools against SARS-CoV. And data, again from in vitro work, seemed to indicate that HCQ might be cytotoxic for Sars-CoV-2 [84]. But clinical trials using HCQ have now reported either no benefit or increased mortality due to cardiotoxicity with use of HCQ, with or without azithromycin [85]. 
Many studies are still ongoing, which should give clarity to the use of these agents in SARS-CoV-2. Decreasing certain cytokines, and/or lowering intracellular pH in organelles (which might result in diminished autophagy) might be detrimental when one wants to accelerate clearance of virus. (b) Angiotensin converting enzyme inhibitors and angiotensin type 1 receptor blockers (RAAS inhibitors). Diabetes, hypertension and kidney disease coexist, and many such patients are prescribed RAAS inhibitors because they have well-proven beneficial effects in protecting the kidney and myocardium, and in lowering blood pressure. And while ACE2 expression in lung is increased in diabetes, as stated above, it is not clear that membrane-bound ACE2, which is necessary for SARS-CoV-2 spike protein attachment, is actually increased. There are suggestions in the literature that RAAS inhibitors increase ACE2 but again there is a paucity of data to suggest that these have any effect on lung-specific or nasal-specific expression of transmembranal ACE2. Vaduganathan et al. have written a thorough review on the topic of RAAS inhibitors in patients with SARS-CoV-2 infection [86]. In short, however, the European Society of Cardiology, Council on Hypertension; the American College of Cardiology; the American Heart Association; the Heart Failure Society of America; and the American Society of Hypertension have all released similar statements advising against discontinuing such medications as sudden withdrawal in cardio-renal stable patients who are already under severe stress from infection could lead to clinical instability and decompensation, for no other reason than a response to mere hypothetical concerns. (c) DPP 4 inhibitors. Of great interest in virology, membrane bound human (but not mouse) DPP4 is a functional co-receptor for MERS-CoV [53], though not for SARS-CoV-2. DPP4, also known as CD26, is a type II transmembrane glycoprotein, expressed in many tissues, including the immune cells and non-ciliated bronchial epithelial cells. It also is shed from cell membranes and is therefore present in circulation. In the last 3 decades it has been the subject of much research because of its ability to cleave alanine and proline in the penultimate position of peptides, chemokines and bioactive compounds, resulting in either inactivation or activation of its substrates. DPP4 is well known to diabetes researchers and medical practitioners because it inactivates the well-studied gut-derived incretins, GLP-1 and glucose-dependent insulinotropic polypeptide (GIP) that are released for enteroendocrine cells in response to food in the gut, and GLP-1R agonists, mentioned above, are now routine treatments for T2DM management. Additionally, their use can lead to weight loss. Incretin action through specific receptors on beta cells in islet of Langerhans is responsible for about 50% of the secreted insulin after eating [87,88]. Levels of active, full length GLP-1 and GIP are increased when DPP activity is eliminated and, as a consequence, DPP inhibitors are now routine, though not first line, oral agents in managing T2DM by improving insulin secretion. DPP4 also plays a role in immune regulation by activating T cells, upregulating CD86 expression and the NF-kB pathway, chemotaxis modulation, cell adhesion, and apoptosis [89]. Its expression is higher in visceral adipose than subcutaneous tissue, and directly correlates with adipocyte inflammation and insulin resistance. 
Despite it activity on cytokines, plasma levels of 27 plasma cytokines were unchanged after a four-week treatment with sitagliptin, a DPP4 inhibitor [90], and another inhibitor, vildagliptin, did not affect ex vivo cytokine response or function in lymphocytes taken from T2DM patients [91]. Review of the literature does not support any meaningful changes in immune function with DPP inhibition in humans and therefore it is not likely to play a role in treating COVID-19. Additionally, despite DPP4 being a coreceptor for MERS-CoV data are not forthcoming that point to DPP4 inhibitors (saxagliptin, sitagliptin, vildagliptin) sterically interfering with or modifying MERS-CoV binding to subunits of DPP4 [53]. The use of DPP4 by coronavirus seems to be simply due to coronavirus hitching a ride on an abundant target in epithelial cells, unrelated to DPP4's proteolytic activity. (d) HMG-CoA Reductase Inhibitors. Many T2DM patients are taking statins to control LDL levels and prevent atherosclerotic heart disease. In fact, The American College of Cardiology, the American Heart Association and the American Diabetes Association all have issued recommendations that anyone aged 40 and older with diabetes be prescribed statins: these should be continued during COVID-19 infection. HMG-CoA reductase inhibitors upregulate ACE2 as part of their function in reducing endothelial dysfunction and have many other pleiotropic effects, independent of their lipid-lowering effects. This class of compounds, based on observational studies, may actually reduce mortality in hospitalized patients with influenza and pneumonias, and acute lung injury [92]. There is also anecdotal evidence from Sierra Leone that statins were beneficial during its Ebola outbreak and it has been proposed that these agents have a potential role in managing patients with severe COVID-19 [93]. (e) Steroids Use. Steroid use will exacerbate hyperglycemia. But because very high cytokine and CRP levels are routine findings in SARS-CoV-2, steroids have been used in severely ill patients, to suppress the cytokine storm: however, they have not been shown to decrease mortality and theoretically might even slow viral clearance. Even in the setting of adult respiratory distress syndrome with SARS-CoV-2, there is no evidence to suggest they provide any therapeutic benefit. The National Institute of Allergy and Infectious Diseases has issued guidelines on steroid use in certain circumstances during COVID-19 [94]. Use of intranasal steroids, such as fluticasone for allergies, also should be discontinued, until data emerges to say otherwise. It is not yet reported if intranasal steroids enhance viral replication in olfactory epithelium or whether people who use them on a consistent basis are more likely to get infected; there are theoretical reasons why this may be so. Conclusions Individuals with diabetes are at increased risk for infections and, once acquired, generally get more severe infections, and have much greater increase in mortality, compared to non-diabetic patients. This is certainly proving the rule with SARS-CoV-2. It also transpires that 2 coronavirus co-receptors, ACE2 and DPP4, are well-established actors within metabolic and inflammatory pathways, and renal and cardiovascular physiology, and have been front and center in diabetes and metabolic research: ACE2 is a coreceptor for SARS-CoV-2 while DPP4 is a co-receptor for MERS-CoV. 
The medical and economic consequences of the SARS-CoV-2 epidemic require ongoing and real-time adaptation of protocols and standard medical procedures to deliver diabetes care, and bring SARS-CoV-2 under control so as to lower morbidity and mortality for all, including people who have diabetes and the elderly while the human population waits with breath that is bated for a vaccine.
8,716.4
2020-06-30T00:00:00.000
[ "Medicine", "Biology" ]
Training Interdisciplinary Data Science Collaborators: A Comparative Case Study Abstract Data science is inherently collaborative as individuals across fields and sectors use quantitative data to answer relevant questions. As a result, there is a growing body of research regarding how to teach interdisciplinary collaboration skills. However, much of the work evaluating methods of teaching statistics and data science collaboration relies primarily on self-reflection data. Additionally, prior research lacks detailed methods for assessing the quality of collaboration skills. In this case study, we present a method for teaching statistics and data science collaboration, a framework for identifying elements of effective collaboration, and a comparative case study to evaluate the collaboration skills of both a team of students and an experienced collaborator on two components of effective data science collaboration: structuring a collaboration meeting and communicating with a domain expert. Results show that the students could facilitate meetings and communicate comparably well to the experienced collaborator, but that the experienced collaborator was better able to facilitate meetings and communicate to develop strong relationships, an important element for high-quality and long-term collaboration. Further work is needed to generalize these findings to a larger population, but these results begin to inform the field regarding effective ways to teach specific data science collaboration skills. Introduction Statisticians and data scientists empower innovation by bridging gaps across fields and helping experts in a variety of disciplines gain critical insights from data.To appropriately equip data scientists for this complex work, it is necessary to understand effective ways of teaching collaboration.There is burgeoning research regarding how to teach interdisciplinary collaboration skills (Davidson, Dewey, and Fleming 2019;Kolaczyk, Wright, and Yajima 2020;Thabane et al. 2008;Vance 2021), but these studies most often use data such as student reflections and surveys or survey responses from the domain-specific collaborators as evidence of effectiveness (Bangdiwala et al. 2002;Davidson, Dewey, and Fleming 2019;Jersky 2002;Kolaczyk, Wright, and Yajima 2020).While this is promising data, more is needed to understand the effectiveness of these approaches to teaching statistics and data science collaboration.In this study, we present a method for teaching collaboration, a framework for identifying elements of effective collaboration, and provide a comparative case study (Yin 2017) of two collaboration meetings.The purpose of this case study is to model a way to evaluate collaboration on data science projects, provide emergent evidence of the effectiveness of one approach to teaching interdisciplinary collaboration, and contribute to the literature regarding assessing elements of effective interdisciplinary collaboration more generally. 
Literature Review Extant literature on teaching and learning about statistics and data science collaboration indicate that this type of learning typically occurs in one of three contexts: (a) traditional courses with collaborative project assignments, (b) on-campus consulting centers or statistics labs where students collaborate with a faculty supervisor (Vance and Pruitt 2022), and (c) direct consulting projects on which students collaborate with a faculty mentor and an outside client (Belli 2001).Much of the work around the effectiveness of these programs relies on informal documentation as opposed to empirical research with rigorous methodology (Bangdiwala et al. 2002;Jersky 2002;Mackisack and Petocz 2002;Roseth, Garfield, and Ben-Zvi 2008).Other work focuses on providing descriptions of teaching statistical collaboration in order to grow exposure to the different approaches (Davidson, Dewey, and Fleming 2019;Kolaczyk, Wright, and Yajima 2020;Sima et al. 2020;Thabane et al. 2008;Vance 2021).This work provides beginning evidence of effective training in statistics and data science collaboration, but additional research must be done to build a more robust research base. In the balance of this article, we begin by presenting an approach to teaching statistics and data science collaboration in Section 3 and then providing a definition of effective collaboration in Section 4. Next, we describe the context and data for the current study before explaining the methodology in Sections 5 and 6.Following in Section 7, we present results from comparing two different data science collaboration meetings, one facilitated by data science students and another by an experienced data science collaborator.We then provide discussion of the implications of our findings and conclude with thoughts regarding the contribution of this work to the field at large in Section 8. Teaching Statistics and Data Science Collaboration Students in this study learn statistical collaboration at University of Colorado Boulder through participation in the Laboratory for Interdisciplinary Statistical Analysis (LISA) housed in the Applied Mathematics Department.LISA has a 3-fold mission: 1. Train statisticians and data scientists to become interdisciplinary collaborators, 2. Provide research infrastructure to enable and accelerate quantitative research around the campus community, and 3. Engage with the community to improve statistical skills and literacy. Initiation into LISA begins when students take Statistical Collaboration, which is cross-listed as both an undergraduate and graduate course.The pre-requisite for this class is successful completion of an advanced statistical modeling or methods course to ensure students have sufficient technical skills to draw upon while they focus more on collaboration skills through the course.Students who take this course help LISA accomplish each area of its mission.During Statistical Collaboration, students work to master the following learning objectives: 1. Managing effective data science collaboration meetings with domain experts; 2. Communicating statistical concepts, analyses, and results to nonstatistical audiences; 3. Using peer feedback, self-reflection, and video analysis as a process for improving statistical collaboration skills; 4. Effectively collaborating with team members; and 5. Creating reproducible data science workflows and working ethically. 
The course is comprised of three parts: theoretical preparation, working on actual data science collaboration projects, and reflections on the collaboration projects.In preparation for engaging in actual collaboration projects, students read several papers throughout the course.They complete in-class and homework exercises to apply the main concepts of the readings. The bulk of the work to master the learning objectives occurs as students work on the three data science collaboration projects during the semester with individual researchers or decision makers across campus and in the community.On these projects, members of LISA are referred to as collaborators and the individuals who bring projects to LISA are referred to as domain experts.These labels work to recognize both the data science collaborator and the research collaborator as equal partners on the projects and to emphasize the importance of both roles. In addition to working on data science projects, students collectively discuss and reflect on projects during class meetings.The model is for each LISA collaborator to be comfortable asking for and giving advice regarding technical issues of statistics and data science along with any nontechnical issues encountered during any given project.In addition to time where students can simply ask questions about how to approach a project, dedicated activities facilitate collaboration skill development for students.In particular, students participate in role playing activities regarding specific collaboration skills and video coaching and feedback sessions. Role playing activities include practice with opening a meeting, talking about time allocations for meetings, asking great questions about a new project (Vance et al. 2022b), and summarizing action steps at the end of a meeting.During video feedback coaching sessions, collaborators watch three video clips (~1-5 min each) from a data science collaboration meeting, typically from the opening, middle, and end of the meeting.Following viewing of each clip, the faculty member along with other students in the class give feedback to the students whose collaboration meeting was recorded.Feedback may be given on topics such as if there are other questions students could have asked to gain better understanding, how the time could be spent better to ensure shared understanding about the project, and problem-solving when collaborators struggled to run the meeting in the way they hoped.At the end of the session, everyone in class shares at least one thing they learned to improve their own collaboration skills from the video feedback coaching session. 
After completing Statistical Collaboration, students have the opportunity to remain involved in LISA either on a volunteer basis or by taking Advanced Statistical Collaboration. In this course, students often continue collaborating on projects that were not completed in the previous semester or that need to be extended to a second semester. Students also continue developing their collaboration skills through additional readings, role-playing activities, practice with more advanced collaboration skills, and mentoring junior students. One key skill in the advanced class is reflection. Students are first introduced to personal reflection on projects in Statistical Collaboration, as modeled in class discussions and the video feedback coaching sessions. In Advanced Statistical Collaboration, students extend this skill by reflecting with domain experts on what works best in collaboration meetings. In addition, advanced students work on five collaboration projects and specifically mentor novice students on those projects.

Defining Effective Interdisciplinary Collaboration

Our conceptual framework for identifying effective data science collaboration relies on the ASCCR frame for collaboration skills (Vance and Smith 2019). Under ASCCR, an effective collaboration is based on five components: Attitude, Structure, Content, Communication, and Relationship. We provide a high-level summary of ASCCR as a guide for evaluating an effective collaboration (Vance, Alzen, and Şeref 2020); details are provided in Vance and Smith (2019).

Attitude

Interdisciplinary collaboration relies on the data scientist having a positive attitude toward themself, the domain expert, and the collaboration team. Three attitudes that we believe enable effective collaboration are: (a) the data science collaborator is willing and able to learn new data science skills as necessary to help the domain expert; (b) the domain expert is regarded, and treated, as an expert in their field and on their project; and (c) the data scientist and domain expert can accomplish more together than either could accomplish alone. It is key for the data science collaborator to consider both themself and the domain expert as equally important in the shared work of making a meaningful contribution to the field.

Structure

To maximize the effectiveness of collaborations, meetings should be structured to enable both the domain expert and the data scientist to accomplish their goals while fostering a positive relationship. The POWER process for organizing meetings (Zahn 2019) is one way to structure meetings effectively. The data scientist Prepares for the meeting, Opens the meeting to make sure all attendees have the same expectations for the meeting, ensures that the Work time is targeted and effective for accomplishing their shared goals, and devotes time at the End of the meeting to summarize what was accomplished and identify next steps. More advanced collaborators also take time to Reflect on the meeting with the domain expert to ensure that meeting time is used even more effectively and efficiently in the future.

Content

The content of collaboration meetings should consist of both qualitative and quantitative stages. In the ASCCR frame, effective collaboration follows the Q1Q2Q3 method of addressing statistical content (Leman, House, and Hoegh 2015; Trumble et al.
2022). Under this approach, the data science collaborator first focuses on the qualitative (Q1) aspects of the project before completing the quantitative (Q2) analyses. The collaborator then completes the content-specific contribution to the project by qualitatively (Q3) conveying the analysis, results, conclusions, and recommendations to the domain expert for further action (Olubusoye, Akintande, and Vance 2021; Vance and Love 2021).

Communication

Effective communication is essential for successful collaboration. Key skills for effective communication include questioning (Vance et al. 2022b); listening, paraphrasing, and summarizing (Vance et al. 2022a); and explaining. Statistical collaborators should use these skills to ensure that they and the domain expert understand one another at each step of the project. These communication skills should be used to elicit information and build shared understanding (Vance, Alzen, and Smith 2022a). Shared understanding occurs when both the collaborator and the domain expert have common knowledge about the project goals, the project information, and the relevance of that information to the project goals. Clear communication also strengthens the relationship between the collaborator and the domain expert (Vance et al. 2022b).

Relationship

The final component of the ASCCR frame is relationship. Vance (2020) makes the case that, in addition to completing the project tasks, building a strong relationship with the domain expert should be a goal of every statistics and data science collaboration. Stronger relationships lead to better contributions to the field and to sustained working relationships between domain experts and data scientists. As relationships grow stronger in statistics and data science collaborations, interdisciplinary fields improve as well.

In the LISA model, students learn about the ASCCR frame by reading papers explicating the five components (Azad 2015; Kimball 1957; Trumble et al. 2022; Vance et al. 2022a; Vance and Smith 2019) and by completing in-class and homework exercises. For example, early in the semester students discuss an inventory of attitudes they may or may not hold regarding collaboration and which ones they believe promote or detract from collaboration. Near the end of the semester, students are presented with four models for thinking about relationships, reflect on the strength of their relationships with the domain experts with whom they have collaborated, and discuss barriers and facilitators for creating strong relationships in collaborations. Additionally, the components of ASCCR are discussed during each video coaching feedback session. Students provide constructive feedback to one another regarding the ways that relevant elements of the ASCCR frame are addressed in the specific video clips.

We present this summary of the ASCCR frame to help the reader understand the overall ethos of effective collaborations. For the current study, we focus on two specific components, structure and communication, because they are the most easily observable in a single meeting out of the several in a full project. We seek to answer two questions. First, how do student collaborators trained in ASCCR compare to an experienced data scientist in the structure and communication components when conducting a collaboration meeting? Second, to what extent does this case study provide evidence of the effectiveness of ASCCR for interdisciplinary collaboration?
Data

Data for this study come from one data science collaboration project from Fall 2021 in which all individuals involved in the project completed informed consent forms approved by the University of Colorado Boulder's Institutional Review Board (Protocol 18-0554). Participants agreed to have their project documents, survey data, and video observation data used for research purposes. We use data from two Zoom recordings of the initial collaboration meeting between a Ph.D. candidate in geological sciences (the domain expert) and statistical collaborators. The domain expert first met with two student collaborators and then attended another "initial" meeting for the collaboration project with a LISA post-doctoral researcher (referred to as the experienced collaborator).

Table 1. Rubric scores for structure and communication in collaboration meetings, scored from 1 (low) to 4 (high).

Structure
Prepare: The collaborator has clearly prepared for the meeting, as evidenced by some knowledge of the project and/or domain expert as well as practical preparation such as a shared notes document.
Opening: The meeting is opened with a friendly tone. Time is spent discussing everyone's wants for the meeting.
Work: The collaborator shapes the meeting in a way that specifically responds to the wants agreed upon during the opening conversation. (Students: 4; Experienced collaborator: 4)
Ending: Time is reserved at the end of the meeting for summarizing outcomes and next steps.
Reflection: The collaborator allows space for and invites reflection regarding the meeting with the domain expert. (Students: 2; Experienced collaborator: 4)

Communication
Questioning: The collaborator effectively uses questioning throughout the meeting to provide clarity for both self and the domain expert, and asks "great" questions that both elicit useful information and strengthen the relationship.
Listening: The collaborator listens attentively throughout the meeting. When the collaborator speaks, it is in direct response to the previous comment by the domain expert.
Paraphrasing: The collaborator paraphrases regularly throughout the meeting to clarify both their own language and the language of the domain expert.
Summarizing: The collaborator summarizes regularly throughout the meeting to solidify discussion topics and decisions made. (Students: 3; Experienced collaborator: 4)

Note: Since this was an initial meeting during which we typically expect the domain expert to do the majority of the talking to describe the project, we did not expect to observe or code for collaborators explaining statistical concepts, as defined under Communication in the ASCCR frame.

When student teams meet with a domain expert, one student leads the meeting. In this instance, the student leading the meeting was a senior undergraduate double-majoring in computer science and statistics/data science. This meeting was the first he led, after observing and supporting in two other meetings. The second student was a Ph.D. student in aerospace engineering, and this was her second collaborative project. The experienced collaborator was a recent biostatistics Ph.D. graduate. At the time of the meeting, she had three years of experience working on approximately 15 collaborative projects at statistical centers in two research universities. The purpose of this meeting was for the domain expert to explain the project and give a general orientation to the statistics or data science need, and for the collaborators to ensure they understood the project before conducting statistical analyses or providing advice regarding quantitative analysis. The meetings occurred at the end of September during Fall 2021, when the students were enrolled in Statistical Collaboration and had learned about the ASCCR frame.
We selected this project for a case study because we had complete data for the project and because it is representative of a common project context for LISA. In the semester from which we selected this case study, LISA participated in 36 collaboration projects. Thirty-two of the projects came from the campus community. Of those 32 campus projects, 18 came from the College of Arts and Sciences, which includes geological sciences, the department in which this domain expert works. Further, 19 of the 32 campus domain experts during this semester were graduate students. Because the experienced collaborator's meeting with the domain expert occurred second, the domain expert had already explained her project once and could anticipate questions or points of confusion. To this end, we expect the data to be biased in favor of the experienced collaborator and expect any such bias to minimize, rather than exaggerate, the apparent effectiveness of the students.

In addition to data from the meeting Zoom recordings, we also use data from a survey the domain expert completed separately about the students and the experienced collaborator after the close of the project. The survey consisted of 21 Likert-scale items and 4 open-response items. We use these data sources to answer our second research question regarding the effectiveness of ASCCR for teaching collaboration from the viewpoint of a domain expert.

Methods

We present a comparative case study (Yin 2017) of the two collaboration meetings. The purpose of this case study is to provide a rich description of the different enactments of two elements of the ASCCR frame. These rich descriptions allow for a holistic understanding of the differences between the two meetings and provide insight into the varying skill levels among the collaborators and the implications of that variation for project outcomes (Stake 2013; Yin 2017).

First, two authors used the element codes in Table 1 to independently code the Zoom recording transcripts for evidence of each element of effective structure and communication. Following the qualitative coding process of Miles, Huberman, and Saldaña (2014), the two coders then met to reach consensus and validate codes. Next, each coder independently scored the elements for overall quality using a four-point rubric (see Appendix) and met a second time to reach consensus on scores (Miles, Huberman, and Saldaña 2014). (Raters independently scored videos in teams of two and then met to discuss necessary changes to the rubric to appropriately identify differences in quality across score levels as well as to clarify language in the rubric.) The rubric scores provide evidence of success in interdisciplinary collaboration as defined by adherence to the structure and communication components of ASCCR. These data allow us to answer our research question about whether the student team could perform comparably to an experienced data science collaborator. (Full transcripts are available on the Open Science Framework, OSF, at https://osf.io/c95jd/ for readers to use with the included rubrics and paper text to better understand the coding, reproduce the results, and repeat the evaluation process in other contexts.)

After identifying elements of effective meeting structure and communication, we used the data from the domain expert surveys as a second data source on successful collaboration. These data provide some initial insight into the effectiveness of the collaboration meetings overall.

Results

Table 1 shows that for each element of the rubric, the experienced collaborator scored as well as or higher than the students. Of the four elements where scores differed, the students received scores one level lower than the experienced collaborator on three elements and two levels lower on the Reflection element.
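To make the rubric comparison concrete for readers who may wish to reuse the rubric in their own programs, the following is a minimal sketch of how paired rubric scores could be tabulated and the differing elements flagged. The element names follow Table 1, but the scores, labels, and function name are hypothetical placeholders rather than the coded data reported in this study.

# Illustrative only: tabulate two sets of rubric scores (1 = low, 4 = high) for
# the coded elements and flag where the scores differ. All scores below are
# hypothetical placeholders, not the coded data reported in Table 1.

ELEMENTS = [
    "Prepare", "Opening", "Work", "Ending", "Reflection",        # structure
    "Questioning", "Listening", "Paraphrasing", "Summarizing",   # communication
]

def compare_rubric_scores(scores_a, scores_b, label_a, label_b):
    """Print a per-element comparison of two sets of 1-4 rubric scores."""
    for element in ELEMENTS:
        a, b = scores_a[element], scores_b[element]
        note = "" if a == b else f"  <-- differs by {abs(a - b)}"
        print(f"{element:12s} {label_a}: {a}   {label_b}: {b}{note}")

# Hypothetical example: a student team and an experienced collaborator.
students = dict.fromkeys(ELEMENTS, 3)
experienced = dict.fromkeys(ELEMENTS, 4)
compare_rubric_scores(students, experienced, "students", "experienced")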
To better understand the evidence for these scores and to answer our first research question regarding how well students compare to an experienced collaborator, we present contrasting descriptions of the ways each collaborator engaged with structure and communication within the meetings. We examine key excerpts to identify patterns in the differences between the facilitation of each collaboration meeting. We then provide emerging descriptive evidence of the effectiveness of this approach for teaching interdisciplinary collaboration from the lens of a domain expert.

Structure in Collaboration Meetings

By organizing meetings with Zahn's (2019) POWER structure, the statistical collaborator can lessen the cognitive load of the meeting for themselves and the domain expert and can focus on providing domain-specific statistics and data science advice (Vance and Smith 2019). Both the students and the experienced collaborator attend well to each element of the POWER structure in their meetings, as evidenced by the scores in Table 1. Although differences in scores do illustrate some variability in the quality of meeting structure between the students and the experienced collaborator, our evidence also shows that the students had learned to follow the POWER structure well by this point in the semester.

Prepare

There is one clear way in which the experienced collaborator is more advanced in preparing for the meeting than the students. Near the beginning of the meeting, the experienced collaborator tells the domain expert that she read the domain expert's initial request for a collaboration meeting and knew some basic information about the domain expert herself.

Experienced Collaborator: Why don't we just start with you giving me kind of the big picture? I have read your collaboration request, so I know you're a student. You're doing geological sciences. There were a lot of words that I didn't know. {Laughs}

The experienced collaborator shows she thought about the content of this collaboration prior to the meeting. She knows something about the domain expert and recognizes from the beginning how she will learn from the domain expert. While the students are clearly prepared to facilitate the meeting, they do not show evidence of having thought about the content of the study prior to the meeting. The lack of demonstrated preparation, however, does not seem to detract from the meeting.
Opening

Both the students and the experienced collaborator received the highest score possible for the meeting openings. The leading student starts the meeting with smiles and cordial introductions and then states:

Student1: Okay. Thank you so much for meeting with us. I'm looking forward to collaborating on this project. First of all, I'd like to have a time conversation. Right now we're scheduled from 10:00 to 11:00 a.m. Does that still work for everybody? […] Okay. Great. And then the next thing that I'd like to discuss is what are your wants from this collaboration, both today in this meeting and also longer term? What are you looking to get out of this collaboration? […] It sounds like you would like help with analyzing your data using statistical tools. […] Okay. Great. So if you want to tell us a little bit about your overall research goals, that would be helpful.

This student demonstrates all elements of the opening of the conversation discussed during class. The students are friendly and make sure everyone is on the same page regarding meeting timing. The student leading the meeting also provides the domain expert an opportunity to define objectives for the meeting and asks about the overall research project. This is very similar to the way the experienced collaborator begins her meeting:

Experienced Collaborator: Usually how we start is to have a time discussion. I have an hour blocked out for this meeting until 10:00 a.m. Does that work for you? […] If we're being productive, can you go a little over 10:00 a.m., like ten or 15 minutes? […] Cool. And then the next thing we usually like to go through is what we each want to get out of the meeting. I've listed some generic things that I usually try to get out of my initial meeting: an overview of your research, the big picture of what you're doing and the real-world implications that it has, what exactly your hypothesis question is, if you know that, what does your data look like, and why you've come to us for collaboration. And then the last thing is what your deadlines are. Is there anything else you want to put on the agenda for today?

The experienced collaborator includes the same elements of the opening to a meeting as the student collaborators, with just a bit more detail. Both openings ensure everyone is on the same page regarding the time available for the meeting and identify and paraphrase the domain expert's objectives for the meeting. The experienced collaborator provides a bit more in her opening by naming specific objectives she typically has for an initial collaboration meeting. Although a bit less direct, the students solicit similar information about the project by inviting the domain expert to share her overall research goals.

Work

Effective collaborators shape meetings in a way that specifically responds to the wants or objectives agreed upon during the opening conversation. The overall goal for this meeting was for the domain expert to give the collaborators a general sense of her project and to identify the statistical problem for which she came to LISA for help. The bulk of this section of the meeting consists of the domain expert sharing details about her project and onboarding the collaborators to her content. Throughout this process, both the students and the experienced collaborator show evidence of attending to the stated objectives of understanding the research project and helping the domain expert think about the research analysis. The students regularly make statements like the following:

To check my understanding… Really quickly before we go [on], I want to understand […] better. What does that comparison look like?

Similarly, the experienced collaborator made comments such as these:

Hold on. I'm trying to understand this… We're focusing on this first bullet point? What is this called on the x-axis?

In addition to pausing the domain expert to ask questions and make sure they understand details, the collaborators also ask the domain expert to check their understanding. In the quotations below, the collaborators summarize what they identify as the main task of the collaboration. The domain expert is interested in using statistical analysis to show that there are significant differences in facies, or rock types, about which she had collected data.

Student1: You're looking for the significant differences in your measurements between facies at the same stratigraphic height, or between stratigraphic heights, or both? […] What you're looking for from us right now. It's more of just the analysis between stratigraphic heights. […] You want to show that the facies are in fact facies.

Experienced Collaborator: You want to look at carbon and oxygen composition between the facies, and then you also want to look at temperature between the facies. […] And in order for you to classify something as different, you would be looking at if the delta 18O thing that you have is different. You want to tell between the different [facies]?
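The task the collaborators converge on here is, at its core, a comparison of measurements (for example, delta 18O values) across facies groups. As a purely illustrative sketch, and not the analysis actually performed on this project, a one-way ANOVA comparing hypothetical measurements across three facies could look like the following; all data values and group labels are made up for illustration.

# Illustrative only: one-way ANOVA comparing a measurement (e.g., delta 18O)
# across facies groups. All values and labels below are hypothetical.
from scipy import stats

facies_a = [-2.1, -1.8, -2.4, -2.0, -1.9]
facies_b = [-1.2, -0.9, -1.4, -1.1, -1.0]
facies_c = [-2.9, -3.1, -2.7, -3.0, -2.8]

f_stat, p_value = stats.f_oneway(facies_a, facies_b, facies_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# In practice, a collaborator would also check the assumptions behind ANOVA
# (independence, roughly equal variances, approximate normality) and consider
# alternatives such as the Kruskal-Wallis test (stats.kruskal) if they fail.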
In response to these summaries, the domain expert affirms that the collaborators all understand her goals for the project. The use of domain-specific vocabulary by all collaborators helps ensure that they have a deep understanding of the domain expert's problem and research questions. It also strengthens the relationship by demonstrating an effort to use the domain expert's language, making for a more effective collaboration.

Ending and Reflection

During the end of the meeting, the students and the experienced collaborator make similar moves. Both reserve time at the end of the meeting for a closing conversation, summarize what was discussed, and make plans for next steps.

Student1: Okay. We only have about ten minutes left in our initial planned meeting time. […] I think we can summarize.

Experienced Collaborator: Okay. I'm just looking at the wants, goals again. So I think I've gotten through everything.

Both also include some element of reflection in the meeting, but the experienced collaborator is more explicit about it and provides more space for the domain expert to give direct feedback. The students exchange multiple generally positive and reflective comments with the domain expert toward the end of the meeting, such as the following:

Student1: This was super helpful.
Student2: I think we made a lot of progress today. And it helps us to understand your greater goals first before we jump into the analysis, just so we understand what's going on.
Domain Expert: Super.
Student2: Thanks for being patient with us [as we understand your project].
Domain Expert: No. This is fun for me.
Student1: I think I have a few ideas for potential analyses, but I have some things I want to look into first.
Domain Expert: Totally.
Student2: I have some ideas, but I would want to look into them more before I would give a recommendation as well.
Domain Expert: That sounds great.

The domain expert responds positively to the collaborators throughout the meeting and shows a general tenor of excitement about moving forward at the end of the meeting. This is all evidence to suggest that the students developed a positive connection with the domain expert and that they are all eager to continue working together.

The experienced collaborator and the domain expert exchange similarly positive comments throughout their meeting.

Experienced Collaborator: I didn't quite catch the last thing you said.
Domain Expert: Oh, I was just being stoked about my own science.
Experienced Collaborator: It's really cool.
Domain Expert: I appreciate you letting me take a lot - so much time to be like, "This is a rock."
Experienced Collaborator: {Laughs} I'm so glad you did. It's so interesting.
Domain Expert: Yes, yes. It's fun.

These two exchanges are not unlike the example shown above from the student collaborators. However, the experienced collaborator makes one additional move that the students did not, which elevates her reflection during the meeting and resulted in her higher rubric scores in Table 1.

Experienced Collaborator: Another thing we like to do at the end of the meetings is to reflect with the domain expert and say, "How did this meeting go for you? In the future, is there anything that we could do differently to make the meeting easier, or anything like that, for you?"
In response to this question, the domain expert first comments on how the process of onboarding multiple collaborators to her project was a good experience for her and helped her think about how she explains her research to others. She also gives some feedback regarding vocabulary words (e.g., "independent") that seemed to operate differently in her field of geology compared to statistics. The domain expert points out that making sure everyone understood vocabulary in the same way would help to avoid misconceptions.

By and large, the students and the experienced collaborator facilitate their initial meetings with this domain expert in very similar ways. The meetings are well organized, cover all aspects of the POWER structure (preparation, opening, work, ending, and reflection), and achieve the goal of the collaborators understanding the overall project research goals. There are some small ways in which the experienced collaborator added additional levels of expertise to the structure of the meeting that not only led to greater shared understanding about the project but also worked to build a stronger relationship between herself and the domain expert.

Communication in Collaboration Meetings

Both the student and experienced collaborators demonstrate evidence of every element of communication in their respective meetings. All collaborators clearly listen intently to the domain expert as she explains her project. Each collaborator also responds to the domain expert's input with thoughtful comments and intelligent questions such as the following:

Student1: Would this whole stratigraphic section be considered one type, then, in your analysis or in your data the way that you've been recording it? Do you designate it based on what it is right now? How do you designate type?

Experienced Collaborator: In the geological sciences, is there a certain number of categories of facies, or is it kind of like a continuous scale? Or what is it? How does it work?

Both the students and the experienced collaborator also make sure to paraphrase regularly to ensure they understand the new domain-specific information. Finally, both meetings have regular moments where the collaborators summarize the thoughts just shared or the next steps in the project. There are few differences in the skills shown in these elements of communication throughout both meetings.

The student collaborators received lower scores on paraphrasing and summarizing because the experienced collaborator more frequently made connections to the overall project rather than focusing on specific details, likely resulting in better comprehension of the domain expert's research. The experienced collaborator also more frequently uses the domain expert's domain-specific vocabulary.

Student1: I think what you were saying is you think that it might be different at these different stratigraphic heights, but you're not sure yet.

Experienced Collaborator: The Jacob's staff allows you to be able to tell the height or how deep these things are, what kind of rock it is, and then, also, you can look at sort of - is it the CaCO3 nodules that tell you what kind of environment it was?
The ASCCR frame suggests that while statistical collaborators work to understand and to be understood in a project, they also use communication to build a relationship with the domain expert. As seen in the structure results above, this is where we see nuanced differences between the students and the experienced collaborator. Although the students certainly have moments where we observe a positive relationship being built with the domain expert, we see deeper moments with the experienced collaborator.

Experienced Collaborator: In the geological sciences, is there a certain number of categories of facies, or is it kind of like a continuous scale? Or what is it? How does it work?
Domain Expert: […It's like] "Okay. I have this rock in my hand, so it had to have formed somehow. So now I have to figure out what setting and sequence of conditions provided the right moment for this rock to form."
Experienced Collaborator: Kind of like playing detective.
Domain Expert: It's totally like playing a detective with rocks. {Laughter}
Experienced Collaborator: You're a professional rock detective.
Domain Expert: Yes, exactly. It's very fun.

Domain Expert: I ran a couple of ANOVAs on my data and was like, "I don't know how to do this prudently." And I know that there is a lot of ways that one could just get the right - or get an answer you want out of stats, and I just really felt like a kid using a power tool that I wasn't supposed to be using without supervision. I was like, "I probably should go get some handrails to make sure I use those correctly." {laughs}
Experienced Collaborator: Well, I appreciate that. Yay, I'm so excited that you're here. So part of what you maybe want, in addition to us helping with this particular analysis, is maybe kind of - it's a learning experience for you too, and you want to be able to understand what we're doing. And then maybe if you came across a similar problem in the future, you would have enough understanding to kind of do it on your own. Is that what your goal is?

In both exchanges, the experienced collaborator and the domain expert share moments of connection around the work. The experienced collaborator not only speaks with the goal of understanding the project, but she also includes an element of interpersonal relationship. These excerpts show how the experienced collaborator used communication to accomplish the tasks of the meeting and to develop a relationship with the domain expert, the two end goals of a collaboration under ASCCR.

Domain Expert Evaluation

Our qualitative data show that both the student and experienced collaborators include all the expected elements of structure and communication from the ASCCR frame in their collaboration meetings with this domain expert. More than that, our data show that the quality of the structure and communication is similar between the students and the experienced collaborator. We see some evidence of the experienced collaborator adding a subtle level of improved collaboration skills, but the data suggest that both collaboration meetings were successful.
In addition to coding transcripts, we also asked the domain expert to complete a survey at the end of the project in which she evaluated her experiences with both collaboration teams (the full survey is available at https://osf.io/c95jd/). A limitation of this data source is that the domain expert completed these surveys after six meetings with the student collaborators over the course of the semester and after only the initial meeting with the experienced collaborator. The repeated initial meetings allowed us to study how the student collaborators compared to an experienced collaborator during the initial meeting, but the experienced collaborator only served as a consultant for the student team following the double initial meeting. While this survey data is problematic for direct comparisons between the experienced collaborator and the student collaborators across the project overall, it still serves as a useful tool for understanding the domain expert's experience with the student collaborators, as well as providing some nascent information regarding the effectiveness of the collaboration in general.

When asked if her collaboration with LISA was helpful and if she was satisfied with her experience, the domain expert gave the students and the experienced collaborator scores of 6 (strongly agree) and 4 (somewhat agree) on a 6-point scale, respectively. This makes sense, as the students were the ones who helped the domain expert through the full project. On another item about feeling welcomed during collaboration meetings, which used the same scale, the domain expert rated both the students and the experienced collaborator a 5 (agree). These scores represent the general pattern across all items: for each item, the domain expert either gave both the students and the experienced collaborator equally high scores, or she gave the experienced collaborator a slightly lower score than the students. However, all ratings were on the positive end of the scales. The domain expert reported that all collaborators made her feel welcomed both during and outside of meetings and that she would recommend the statistical lab to colleagues after her experiences with this collaboration. Finally, the domain expert indicated an equally high level of relationship between herself and all collaborators.

When given the chance to provide qualitative feedback regarding the collaboration, the domain expert indicated that the student collaborators "were very respectful and clearly were taking time to understand aspects of my research most relevant to the statistical goals. [Student1] was really excellent. [He was] very professional and thoughtful in the questions he asked regarding the research." In the qualitative items regarding the experienced collaborator, the domain expert noted:

Meeting with [the experienced collaborator] was pleasant and a positive experience, but I have to admit that I was confused as to why I met with her separately from the other student collaborators and then had no further contact with her. I thought we were going to all meet together (me, student collaborators, and her) after we had had separate onboarding meetings, but I guess I misunderstood that aspect of the process. With that in mind, giving detailed feedback on how effective the collaboration was with [the experienced collaborator] is difficult because I didn't work with her besides our one meeting, during which I explained my research.
In this instance, the experienced collaborator did not communicate to the domain expert how the project would progress, resulting in a lack of shared understanding. The domain expert had the impression that the experienced collaborator would be involved in all subsequent meetings, whereas in fact she only provided support to the students when needed. This misunderstanding resulted in the experienced collaborator receiving lower scores than the students and is further evidence of the need to create shared understanding for a successful collaboration.

Discussion

This comparative case study provides illustrative examples of the structure and communication elements of ASCCR in two collaboration meetings. Further, this study provides some emerging tools for evaluating collaboration meetings grounded in collaboration theory (Vance and Smith 2019). This study differs from previous research regarding the effectiveness of teaching interdisciplinary statistical collaboration in that it explicitly defines specific elements of statistics and data science collaboration, uses pre-defined criteria to evaluate particular elements of collaboration meetings, and provides data from student collaborators alongside an experienced collaborator (Bangdiwala et al. 2002; Jersky 2002; Mackisack and Petocz 2002; Roseth, Garfield, and Ben-Zvi 2008). The case study method allows for the comparative inspection of detailed elements of the collaboration meetings. It also provides real-world examples of structure and communication skills in action. By aligning evidence from collaboration meeting transcripts with the theoretical framing of effective collaboration and with feedback from the domain expert, this study expands the body of evidence regarding the effectiveness of teaching interdisciplinary collaboration.

A limitation of this approach is the small sample size. While an in-depth look at two meetings allows for careful scrutiny of individual collaboration skills within meetings, there is limited ability to generalize the results to comparisons between student and experienced collaborators at large. Future work will involve expanding the dataset by comparing more student and experienced data science collaboration meetings in order to bolster these claims. Such research will provide more information regarding the most effective ways to teach data science students to become successful interdisciplinary collaborators.

Practice is essential for developing collaboration skills. For example, using the domain expert's vocabulary and making clear connections between specific points in the project and the overall project goals are ways in which novice collaborative data scientists may build their craft through practice. Additionally, building relationships throughout collaboration projects is something that may come more naturally with practice as inexperienced collaborators become more comfortable asking questions and learning about new domains in ways that not only build their understanding of new fields but also develop strong relationships with new colleagues.
We believe that statistics and data science educators should consider the ASCCR frame for teaching collaboration skills to their students. It provides a straightforward and clear way to implement successful interdisciplinary collaboration and names discrete skills to model and practice. Teaching collaboration with ASCCR can advance the field in training data scientists and push our thinking regarding successful interdisciplinary collaboration. Professional data scientists can also benefit from implementing ASCCR in their future collaborations.

Most statisticians and data scientists will inevitably find themselves working on interdisciplinary teams at some point in their careers. To equip future data scientists to contribute most effectively to their own field as well as to others, we must understand the best ways to support them in developing collaboration skills. The emerging evidence provided here suggests that training data scientists in the ASCCR frame can help improve collaboration skills. We encourage statistics and data science educators and practitioners to consider implementing aspects of the ASCCR frame and the related rubric, and to expand the data from this study, in order to continue building the body of evidence on effectively teaching interdisciplinary collaboration. We believe that the ASCCR frame can improve the quality of collaborations and ultimately improve collaborative data scientists' ability to transform evidence into action.

Appendix: Rubric for scoring structure and communication elements, from 4 (high) to 1 (low)

Prepare
4: The collaborator has clearly prepared for the meeting, as evidenced by some knowledge of the project and/or domain expert as well as practical preparation such as a shared notes document.
3: It is clear that the collaborator has reviewed the collaboration request and has done some preparation for the meeting. They appear generally familiar with the project but have perhaps not done any other preparation for the meeting.
2: The collaborator does not do anything that specifically reveals they have spent time thinking about the project ahead of time. However, this does not appear to detract from the meeting's success.
1: No evidence of preparing for the meeting is evident. The collaborator appears to have no prior knowledge of the project and is ill-prepared to engage in the project.

Open
4: The meeting is opened with a friendly tone. Time is spent discussing everyone's wants for the meeting.
3: The meeting is opened with a friendly tone. Time is spent discussing everyone's wants for the meeting, but this may be hurried, rushed, or incomplete.
2: The collaborator may or may not take time for friendly interaction at the beginning of the meeting. If goals or an agenda are shared, the collaborator tells the domain expert rather than including them in the process.
1: The collaborator does not take time for friendly interaction at the beginning of the meeting. Goals and/or an agenda are not shared at the beginning of the meeting.

Work
4: The collaborator shapes the bulk of the meeting in a way that specifically responds to the wants agreed upon during the opening conversation.
3: The collaborator attempts to address the wants agreed upon during the opening conversation but is sometimes unsuccessful.
2: The collaborator does little to address the domain expert's wants and focuses more on their own ideas for addressing the project goals.
1: The collaborator ignores the domain expert's wants and jumps into project statistics with no consideration for the project context or the domain expert's wants.

End
4: 10%-20% of meeting time is reserved for the closing conversation. All applicable elements of "end" are present.
3: Time is reserved for the closing conversation, but it may be insufficient. Some elements of "end" are missed.
2: The collaborator attempts to cover some elements of ending a meeting via the POWER structure, but time was not reserved. The conversation is rushed and incomplete.
1: No time is reserved for meeting closure activities. The collaborator does not attempt to cover elements of ending a meeting via the POWER structure.

Reflection
4: The collaborator allows space for and invites reflection with the domain expert.
3: The collaborators reflect on the meeting after the domain expert leaves.
2: The collaborator asks for or provides space for reflection but does not reserve sufficient time for the conversation.
1: The collaborator does not provide space for meeting reflection.

Questioning
4: The collaborator effectively uses questioning throughout to provide clarity for both self and the domain expert, and asks "great" questions that elicit useful information and strengthen the relationship.
3: Questioning is attempted frequently during the meeting but is sometimes insufficient for gaining clarity for the collaborator or the domain expert.
2: The collaborator asks questions to help clarify for self or the domain expert, but not both, regularly throughout the meeting.
1: Questioning is not evident in the meeting.

Listening
4: The collaborator listens attentively throughout the meeting. When the collaborator speaks, it is in direct response to the previous comment by the domain expert.
3: The collaborator listens attentively throughout the meeting. When the collaborator speaks, it is in direct response to the domain expert. However, sometimes the collaborator does not fully respond to the domain expert's comments.
2: The collaborator occasionally jumps ahead of the domain expert to talk about their own desired topics. Responses sometimes do not make sense after the domain expert's comments.
1: The collaborator regularly jumps ahead of the domain expert to talk about their own desired topics. Responses do not make sense after the domain expert's comments.

Paraphrasing
4: The collaborator paraphrases regularly throughout the meeting to clarify both their own language and the language of the domain expert.
3: The collaborator paraphrases regularly throughout the meeting for clarity but may only do this for the domain expert or themselves.
2: The collaborator occasionally paraphrases during the meeting to add clarity for either themselves or the domain expert.
1: The collaborator rarely, if ever, paraphrases during the meeting.

Summarizing
4: The collaborator summarizes regularly throughout the meeting to solidify discussion topics and decisions made.
3: The collaborator summarizes regularly throughout the meeting but may miss some key moments that would have benefited from summarizing.
2: The collaborator occasionally summarizes, but individuals will leave with a lack of clear understanding of the discussions and decisions.
1: The collaborator rarely, if ever, summarizes during the meeting.