Neural Crest-Derived Dental Pulp Stem Cells Function as Ectomesenchyme to Support Salivary Gland Tissue Formation Xerostomia, dry mouth due to loss of functional salivary gland tissue, is caused by Sjögren's syndrome, radiotherapy for head and neck cancer, medications, and aging; patients suffer from difficulties in swallowing and speech as well as oral diseases. Stem cell therapy is considered a potential therapeutic alternative. However, combinatory approaches including not only salivary gland stem cells but also supportive cells and an appropriate extracellular matrix are necessary to form a functional salivary gland. Like tooth formation, the development of the salivary gland requires epithelium interacting with neural crest-derived mesenchyme. Dental pulp stem cells (DPSC) isolated from murine dental pulp are neural crest-derived. Herein, we used the human salivary gland (HSG) cell line as a model to study the effects of DPSC on salivary gland differentiation. Upon in vitro differentiation on Matrigel, both HSG alone and HSG cocultured with Wnt1-Cre/R26R-LacZ-derived DPSC (HSG+DPSC) differentiated into acinar-like structures. However, HSG formed more mature (higher expression of LAMP-1 and CD44), larger, and more numerous acinar structures in HSG+DPSC. In vivo subcutaneous co-transplantation of HSG and DPSC with hyaluronic acid (HA) hydrogel was evaluated after 2 weeks by Q-RT-PCR and morphological and immunohistological assessment. Compared to HSG transplants, which showed only undifferentiated tumor-like cells, HSG+DPSC demonstrated (1) higher expression of the murine mesenchymal marker Fgf-7; (2) higher expression of the mature human salivary gland differentiation marker alpha-amylase-1 (AMY-1); (3) higher expression of murine endothelial (vWF), neuronal (NF-200), and angiogenic (Vegfr-3 and Vegf-C) markers; (4) mucin-secreting acinar- and duct-like structures with abundant blood vessels at the interface with DPSC; and (5) more mature glandular structures double-positive for the salivary gland differentiation markers CD44 and LAMP-1. These results indicate that DPSC supported and enhanced HSG differentiation into functional salivary gland tissue. This study illustrates the potential of DPSC as inductive mesenchyme for salivary gland regeneration, repair, and tissue engineering. Introduction Diseases such as xerostomia and radiation treatment for head and neck cancer can induce irreparable salivary gland tissue damage [1].
Salivary gland stem cells have been characterized and shown to be capable of differentiating into all the epithelial components of salivary gland tissue, such as ductal and acinar cells [2]; however, their use for salivary gland tissue regeneration is hampered by the need for other supportive cells, vasculature, innervation, and matrix necessary for functional regeneration of the salivary gland [3]. Although the epithelium of the parotid glands is ectodermal-derived whereas the epithelium of the submandibular and sublingual glands is endodermal-derived, the salivary gland mesenchyme is neural crest-derived [4,5]. The interaction of epithelium and mesenchyme is essential for the branching morphogenesis of the salivary gland. Importantly, the epithelial-mesenchymal interactions of the tooth bud and salivary glands are similar, and molecular cues such as secretion of fibroblast growth factors (FGF-10, FGF-7) by the ecto-mesenchyme and expression of FGF receptors (FGFR-1, FGFR-2) by the epithelium are important for the development of both the tooth bud and the salivary gland [4-10]. Important morphogens such as Shh, Wnt, and fibroblast growth factors also seem to play important roles in tooth bud and salivary gland development [11-13]. We have recently characterized neural crest-derived dental pulp stem cells (DPSC) from neonatal mice that can differentiate into neural crest lineages including mesenchyme, Schwann cells, odontoblasts, and pericyte-like cells [14]. Due to the similarities in tooth bud and salivary gland development, herein we studied the capacity of neural crest-derived DPSC to support and induce salivary epithelium differentiation. The human salivary gland cell line, HSG, is a neoplastic cell line originally developed from an intercalated duct cell of an irradiated human submandibular salivary gland. The HSG cell line forms primary adenocarcinoma tumors with malignant characteristics when transplanted into nude mice [15,16]. Nonetheless, when cultured on Matrigel the HSG cell line rapidly differentiates into mature salivary epithelium [17]. Thus, HSG is a good cell line with which to study salivary gland tissue differentiation, as it is capable of rapid expansion and differentiation into multiple salivary gland epithelial cell phenotypes, including myoepithelial-like, acinar-like, keratinocyte-like, chondrocyte-like, and mucinous-like cells [18,19]. In order to study the inductive effects of mouse DPSC on HSG differentiation, we performed in vitro xeno-cultures of mouse DPSC and HSG on Matrigel and observed larger and more numerous acini in the co-cultures as compared to HSG cultures alone. Although Matrigel is widely used for in vitro culture [17], its immunogenicity limits its in vivo application [20]. Thus, we sought extracellular matrix components important for salivary gland tissue development. Hyaluronic acid (HA) is one of the major extracellular matrix components of the developing and adult salivary gland [21,22]. HA accounts for 50% of all the glycosaminoglycans (GAG) in the basal lamina of the developing submandibular salivary gland [22]. In adult salivary gland tissue, HA accounts for 25% of all the GAGs synthesized by the secretory units (acini and intercalated ducts) [21]. Furthermore, HA is the ligand for CD44, which is highly expressed by DPSC [14]. Acinar cells also express CD44 [23,24]. We hypothesized that HA hydrogels would provide an HA-rich basement membrane for close epithelial-mesenchymal interaction and would result in the induction of differentiated salivary gland 3D structures. Tissue engineering for salivary gland restoration is limited by the availability of stem cells or cell sources capable of providing all the cellular components necessary to generate functional salivary gland tissue. Furthermore, vascularization and innervation of engineered salivary gland tissue remain a long-standing challenge.
Herein, we report that neural crest-derived DPSC co-transplanted with HSG in HA hydrogels secrete fibroblast growth factors, namely FGF-7, and angiogenic factors such as VEGF-C, and support HSG differentiation into mature salivary gland tissue in vivo. Materials and Methods HSG and DPSC culture HSG cells, derived from a human submandibular salivary gland, were a gift from Dr. Kenneth Izutsu (Department of Oral Health Sciences, University of Washington). The cells were plated at 10,000 cells/cm2 on plastic tissue culture dishes (BD Biosciences, Franklin Lakes, NJ) and cultured at 37°C under 5% CO2 in growth medium containing Dulbecco's modified Eagle's medium (DMEM) with 4.5 g/L glucose, L-glutamine, and sodium pyruvate (Cellgro, Manassas, VA) supplemented with 10% heat-inactivated fetal calf serum (FCS) (HyClone, Logan, UT) and 100 units/ml penicillin with 100 µg/ml streptomycin (Cellgro). In vitro differentiation of HSG, DPSC alone, or HSG co-cultured with DPSC on Matrigel Cells were cultured on either plastic or Matrigel-coated surfaces. Growth factor-reduced Matrigel (BD Biosciences) was thawed on ice and diluted in DMEM to a final concentration of 2 mg/ml. To form a three-dimensional matrix in culture dishes, 100 µl of Matrigel was added to each well of a 48-well tissue culture plate (0.75 cm2 per well) and incubated at 37°C for 1 hour before cell seeding. HSG cells, DPSC alone, or a combination of HSG with DPSC (2.5 × 10^4 cells of each cell type/cm2) were seeded on either non-coated or Matrigel-coated plastic surfaces with 100 µl of additional growth medium. Culture medium was changed every two days. After 4 days, X-gal staining was performed to distinguish Wnt1-marked DPSC, which express the LacZ gene, from HSG cells. The cells were fixed with 0.2% glutaraldehyde/2% formaldehyde in PBS for 10 min and washed three times with PBS. The fixed cells were subsequently incubated in X-gal solution at 37°C overnight, protected from light, before washing three times with PBS. The stained cells were photographed and analyzed. In vivo subcutaneous transplantation of HSG alone or HSG co-transplanted with DPSC in HA hydrogels In accordance with approved Institutional Animal Care and Use Committee (IACUC) protocols, HSG alone or HSG combined with murine DPSC (1 × 10^6 cells/cell type) were separately transplanted into 2-month-old male Rag1 null mice (Jackson Laboratory, Bar Harbor, ME, USA) by subcutaneous transplantation with hyaluronic acid (HA) hydrogel (HyStem, Glycosan) (n = 3 mice/group) according to the manufacturer's protocol. A 100-µl cell suspension was injected ventral to the submandibular salivary gland without penetrating the gland. Two weeks after transplantation, HA plugs were dissected without involving the recipient mice's salivary gland tissues and fixed with 10% neutral buffered formalin (NBF) (Sigma) for 30 min at 4°C with agitation, then washed three times with PBS at RT with agitation. The fixed transplanted tissues were embedded in paraffin and cut into 8-µm-thick sections. Sections were analyzed by Hematoxylin and Eosin (H&E), Periodic Acid Schiff (PAS), and immunofluorescence staining.
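As a quick sanity check, the seeding and injection densities above imply the following absolute cell numbers per well and per transplant; the protocol values are taken from the text, while the helper itself is only an illustrative back-of-the-envelope calculation.

```python
# Illustrative arithmetic only: cell numbers implied by the densities described above.
well_area_cm2 = 0.75               # 48-well plate, per the protocol
seeding_density = 2.5e4            # cells of each cell type per cm^2 (in vitro)
cells_per_well = seeding_density * well_area_cm2
print(f"in vitro: {cells_per_well:,.0f} cells of each cell type per well")   # 18,750

injected_cells = 1e6               # cells per cell type (in vivo transplant)
injection_volume_ml = 0.1          # 100 ul injected per transplant
print(f"in vivo: {injected_cells / injection_volume_ml:,.0f} cells/ml "
      f"of each cell type in the injected suspension")                      # 1e7 cells/ml
```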
Quantitative reverse transcriptase polymerase chain reaction (Q-RT-PCR) Total RNA was extracted from undifferentiated cultured DPSC and transplanted tissues (HSG alone or HSG co-transplanted with DPSC) using the RNeasy Mini kit (Qiagen) and TRIzol reagent (Invitrogen), respectively, according to the manufacturers' protocols. Quantity and purity of RNA were determined by 260/280 nm absorbance. First-strand cDNA was synthesized from 1000 ng of RNA using the High Capacity cDNA synthesis kit (Applied Biosystems) with random primers, per the manufacturer's protocol. Q-RT-PCR primers are listed in the Supplementary Methods. Reactions containing 20 ng of cDNA were prepared using SYBR Green PCR master mix (Applied Biosystems). Reactions were processed on the ABI 7900HT PCR system with the following parameters: 50°C/2 min and 95°C/10 min, followed by 40 cycles of 95°C/15 s and 60°C/1 min. Results were analyzed using SDS 2.2 software, and relative expression was calculated using the comparative Ct method. Each sample was run in triplicate reactions for each gene. cDNA from undifferentiated HSG cells and mouse salivary gland tissue was used to calibrate samples. Immunofluorescence staining Transplanted tissue sections were deparaffinized, rehydrated, and permeabilized with 1% bovine serum albumin (BSA) in 0.1% Triton X-100/PBS. Sections were then blocked with 10% normal goat serum for 1 h at RT and incubated overnight at 4°C with the first primary antibody, rat anti-mouse/human LAMP-1 or rabbit anti-mouse vWF polyclonal antibody (1:400, Dako), followed by three washes. The sections were subsequently incubated with a goat-derived Alexa 488-conjugated secondary antibody (1:800) for 1 h at RT and washed three times. The sections were then incubated with the second primary antibody, anti-human CD44-PE or Cy3-conjugated anti-mouse SMA monoclonal antibody (1:400, Sigma), for 1 h at RT before washing three times. Nuclei were visualized with 4′,6-diamidino-2-phenylindole (DAPI, 1:1000). All antibodies were diluted in 1% BSA in 0.1% Triton X-100/PBS. All immunofluorescence images described in this manuscript were acquired using a Zeiss Axiovert 200 fluorescence microscope (Thornwood, NY); photographs were taken with an onboard camera. Periodic Acid Schiff (PAS) staining Transplanted tissue sections were dewaxed and rehydrated. The sections were stained in 1% Periodic Acid solution (Sigma) for 10 min at RT with agitation before rinsing with deionized water. The tissues were then incubated in Schiff reagent (Sigma) for 5 min at RT with agitation before washing three times with deionized water. The stained tissues were dehydrated in a series of alcohol solutions and cleared with xylene before mounting. Statistical analysis The number of acini formed in the in vitro experiments and the number of blood vessels in the transplanted tissues were quantified using an image analysis program, ImageJ v1.43u (Wayne Rasband, NIH; http://rsb.info.nih.gov/ij). The number of acinar-like structures was counted from 6 wells in each experimental group. The percentage of blood vessels per area was determined upon examination of 10 areas in each experimental group. Data are represented as means ± the standard error of the mean (SEM) of results from three separate experiments. Data were analyzed by Student's t-test; a p-value < 0.05 was considered to represent a significant difference between HSG alone and HSG co-cultured or co-transplanted with DPSC.
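The comparative Ct calculation and the Student's t-test used above are simple enough to show in a short sketch. The reference (housekeeping) gene is not named in the text, so the snippet below uses a hypothetical one, and all Ct values and counts are invented for illustration only.

```python
# Illustrative sketch of the comparative Ct (2^-ddCt) calculation and Student's t-test
# described above. Ct values, counts, and the reference gene are placeholders, not study data.
from scipy import stats

def fold_change(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the comparative Ct method: 2^-(ddCt)."""
    d_ct_sample = ct_target_sample - ct_ref_sample                # dCt of the sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator    # dCt of the calibrator
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# e.g. Fgf-7 in a transplant vs. untransplanted mouse salivary gland (calibrator),
# normalized to a hypothetical housekeeping gene:
print(fold_change(22.1, 18.0, 26.5, 18.2))    # ~18-fold with these made-up Ct values

# Hypothetical acinar-structure counts per well, compared by Student's t-test:
hsg_alone = [12, 15, 11, 14, 13, 12]
hsg_dpsc = [24, 28, 22, 27, 25, 26]
t_stat, p_value = stats.ttest_ind(hsg_alone, hsg_dpsc)
print(p_value < 0.05)                          # True -> significant difference
```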
Results HSG co-cultured with DPSC formed more mature and more numerous acinar-like structures HSG and DPSC cultured separately on plastic surfaces showed distinct cell morphologies; the former were polyhedral epithelial cells, whereas the latter were spindle-shaped fibroblasts (Figures 1A and B). As expected, HSG, DPSC alone, and HSG co-cultured with DPSC (HSG+DPSC) grown on non-coated surfaces proliferated but only formed a confluent monolayer (Figures 1C-E). After 4 days of culture on Matrigel-coated plastic surfaces, HSG underwent dramatic morphological changes in both the HSG-alone and HSG+DPSC conditions. As previously reported [17], the salivary gland cells began to form acinar-like structures after day 1 in Matrigel culture; however, the acinar-like structures found in HSG+DPSC were larger and more numerous than those in HSG alone (Figures 1F and H). The acinar-like structures in HSG alone and HSG+DPSC gradually increased in size (Figures 1I and J). DPSC cultured alone on Matrigel did not change their cell morphology (Figure 1G). HSG cultured alone on Matrigel for 4 days showed small acinar-like phenotypes, represented by spherical structures with polarized nuclei and lumen formation (Figure 1K) [6,25]. In contrast, HSG in the co-culture group formed large multi-cellular structures that resembled intact salivary glands with acinar- and duct-like structures (Figure 1L, arrowheads). Notably, co-cultured DPSC, identified by positive X-gal staining (blue), clustered near the HSG-derived acinar-like structures (Figure 1J, arrows). The HSG-derived acinar structures in both HSG alone and HSG+DPSC were positive for CD44 (red) and lysosome-associated membrane protein-1 (LAMP-1) (green), confirming the differentiation of HSG to acinar cells (Figures 1K and L). CD44 and LAMP-1 are expressed by HSG and acinar salivary gland cells [23,24,26]. The acinar structures in the HSG+DPSC group were strongly positive for LAMP-1 compared to those in HSG alone, suggesting higher maturity of the acinar salivary gland structures. Additionally, the number of acinar-like structures quantified in the HSG+DPSC group was significantly larger than in HSG alone (*p = 0.002) (Figure 1M). Taken together, these results suggest that DPSC enhance the ability of HSG to differentiate into acinar- and duct-like phenotypes. DPSC expressed high levels of Fgf-7 and Fgf-10, neural crest-derived mesenchymal genes essential for salivary gland formation To complement our in vitro study and to gain insight into DPSC's supportive role in inducing HSG differentiation into functional salivary gland units in vivo, we transplanted HSG alone or HSG combined with DPSC in hyaluronic acid (HA) hydrogel subcutaneously, by direct injection of the cell suspension ventral to the submandibular salivary gland without penetrating the gland, into Rag1 null mice to avoid immune rejection of the human cells (HSG) (Figure 2A). Undifferentiated DPSC in our culture conditions expressed ectodysplasin, Eda, and the fibroblast growth factors Fgf-7 and Fgf-10. Eda, Fgf-7, and Fgf-10 are proteins secreted by neural crest-derived mesenchymal cells to induce branching morphogenesis during salivary gland development and formation [7,25,27]. DPSC expressed significantly high levels of Fgf-7 and Fgf-10, approximately >10-fold greater than endogenous Fgf-7 and Fgf-10 expression in mouse submandibular salivary gland (*p = 0.0014 and 0.00005, respectively) (Figure 2B). The high expression of both growth factors, combined with our previous study showing that DPSC are neural crest-derived, led us to hypothesize that DPSC may be a good source of ectomesenchyme supportive of salivary gland formation and regeneration [14].
In addition, DPSC expressed a high level of vascular endothelial growth factor receptor 3, Vegfr-3 (>5-fold greater than endogenous Vegfr-3, *p = 0.0026), and a similar level of its ligand Vegf-C, but not Vegf-A, compared with mouse salivary gland (Figure 2B). A previous study demonstrated that stimulation of blood vessel formation improved regeneration of the submandibular salivary gland [28]. Therefore, the expression of angiogenic genes by DPSC suggests that their angiogenic potential may be beneficial for salivary gland formation. Co-transplantation of HSG and DPSC demonstrated high expression of murine neural crest-derived mesenchymal and human salivary gland differentiation genes Two weeks post-transplantation, HA hydrogel plugs were processed for histological and Q-RT-PCR analyses using primers specific for human salivary gland differentiation genes and mouse-specific mesenchymal genes. DPSC up-regulated the expression of Eda, Fgf-7, and Fgf-10 in the HA hydrogel plugs in vivo. The levels of all three mesenchymal genes expressed in HSG+DPSC were also greater than those expressed in HSG alone. In particular, Fgf-7 levels were >10-fold greater than endogenous Fgf-7 expression in untransplanted mouse submandibular gland (*p = 0.009) (Figure 3A). To confirm the formation of functional salivary gland tissue, we used human-specific primers to determine the expression of salivary gland differentiation genes. Both HSG alone and HSG co-transplanted with DPSC expressed higher levels of human ectodysplasin receptor (EDAR), mucins (MUC-5B and MUC-7), alpha-amylase-1 (AMY-1), and aquaporin-5 (AQP-5) (approximately 10-150-fold greater than the expression in undifferentiated human submandibular salivary gland cells) (Figure 3B). Importantly, alpha-amylase-1, an enzyme secreted by functional salivary gland tissue, was expressed at significantly higher levels in the co-transplanted tissues (*p = 0.046), suggesting that HSG co-transplanted with DPSC formed more mature and functional salivary gland tissue. Next, we studied whether DPSC enhance the differentiation of human salivary gland cells into functional salivary gland by inducing blood vessel formation and innervation. To answer this question, we examined endothelial-specific, neuronal, and angiogenic gene expression by Q-RT-PCR. Accordingly, the HSG+DPSC group showed higher levels of the endothelial marker von Willebrand Factor (vWF) (*p = 0.018) as well as the neuronal marker heavy neurofilament (NF-200) (p = 0.058) compared with the HSG group (Figure 3C). Likewise, the HSG+DPSC co-transplanted tissues expressed higher levels of Vegfr-3 and Vegf-C (*p = 0.023 and 0.027, respectively) (approximately 30-35-fold and >2-fold greater than expression in mouse submandibular salivary gland and in HSG transplants, respectively) (Figure 3D). The expression of these transcripts suggests that DPSC enhance salivary gland cell differentiation and tissue formation by inducing vascularization and innervation. Glandular structures were observed in the co-transplantation of HSG and DPSC In addition to gene expression, we examined the morphology of the HSG-alone and HSG+DPSC transplants. H&E staining revealed that HSG hydrogel plugs showed only immature cancer-like cells with large nuclei (Figures 4A-C).
Conversely, duct- and acinar-like structures, recognized by their polarized nuclei, were seen in the hydrogel plugs containing HSG and DPSC, especially at the periphery close to mesenchymal cells (Figures 4D-F) and vessels (Figure 4E, inset). PAS staining distinguished acinar- from duct-like structures by revealing the formation of mucin/mucopolysaccharide-containing cells. PAS-positive cells (pink) were randomly found in the HSG-alone transplants (Figures 4G-I). In contrast, several clusters of PAS-positive acinar-like structures with some PAS-negative duct-like structures were present in the HSG+DPSC transplants (Figure 4J, inset). Immunofluorescence showed more mature glandular structures, double-positive for CD44 and LAMP-1, in the HSG+DPSC co-transplantation group, but not in HSG alone (Figures 5A and B, arrows). The cells in HSG transplants were immature, staining positive for CD44 but negative for LAMP-1 (Figure 5A). Interestingly, differentiated HSG cells were observed at the interface with the mesenchymal cells (DPSC) that encapsulated the HSG tumor (Figure 5B, arrowheads). In addition, encapsulating DPSC, which stained positive for smooth muscle actin (SMA), integrated deeper into the core of the transplanted tissue and also recruited blood vessels (Figures 4E and K, insets, 4L, 5C and D, arrowheads, 5E and F). The blood vessels were recognized by SMA and vWF staining (Figures 5C-F). Quantification of blood vessels in both transplanted tissues showed a significantly increased number of blood vessels in HSG+DPSC compared with HSG alone (*p = 0.004) (Figure 5G). Discussion In general, stem cell therapy focuses on delivering only the desired stem cell population. This approach works if the stem cell can regenerate the whole organ. For salivary gland regeneration, although a putative salivary gland stem cell can potentially regenerate all the epithelial components, it cannot give rise to the supportive tissue, including mesenchyme, vasculature, and nerves. Since dental pulp stem cells are neural crest-derived and the epithelial-mesenchymal interactions of the tooth bud and salivary glands are similar, we hypothesized that DPSC are a good source of mesenchyme to induce and support salivary gland tissue differentiation. (Figure 5 legend: (A, B) More mature glandular structures double-positive for CD44 and LAMP-1 (arrows) were seen in the HSG+DPSC co-transplantation group, but not in HSG alone; cells in HSG transplants stained positive for CD44 but negative for LAMP-1, suggesting their immature state. (B, arrowheads) Differentiated HSG cells were observed at the interface with DPSC, which encapsulated the HSG tumor. (C-F) Encapsulating DPSC positively stained for SMA (arrowheads) invaded into the transplanted tissue and recruited blood vessels; the blood vessels were recognized by SMA and vWF staining. (G) Quantification of blood vessels as percentage per area: 82% ± 1.37 in HSG+DPSC versus 33% ± 0.91 in HSG alone, measured from 10 areas of three samples per transplant (n = 10); Student's t-test, *p ≤ 0.05; error bars represent ± SEM; scale bars, 100 µm.)
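The percent-area measurement in the Figure 5 legend was performed in ImageJ; the sketch below shows an analogous threshold-and-measure calculation in Python, using synthetic image data purely for illustration rather than the study's micrographs or its ImageJ workflow.

```python
# Illustrative area-fraction quantification analogous to the ImageJ measurement above:
# threshold one fluorescence channel (e.g., vWF) and report percent positive area per field.
# Field data are synthetic; the study itself used ImageJ v1.43u.
import numpy as np

def percent_positive_area(image, threshold):
    """Percent of pixels above an intensity threshold in one field."""
    return 100.0 * (image > threshold).mean()

rng = np.random.default_rng(1)
fields = [rng.random((512, 512)) for _ in range(10)]       # 10 fields per group (synthetic)
values = [percent_positive_area(f, 0.9) for f in fields]
sem = np.std(values, ddof=1) / np.sqrt(len(values))
print(f"{np.mean(values):.1f} ± {sem:.2f} % positive area (mean ± SEM)")
```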
We first studied the effects of co-culture with DPSC on HSG differentiation in vitro. DPSC and HSG were co-cultured on Matrigel. After 4 days, HSG had formed acini, but the number and size of the acini were significantly increased in the co-cultures with DPSC as compared to HSG-only cultures. Matrigel's undefined composition and immunogenicity are among the disadvantages for its in vivo use [29,30]. Thus, we sought a more natural matrix for salivary gland formation. Hyaluronic acid (HA) is the most abundant glycosaminoglycan in the developing salivary gland [22]. Also, adult salivary gland secretory units produce a high amount of HA, which is deposited in the basal lamina [21]. Furthermore, acinar epithelial cells express CD44, the receptor for HA [31,32]. Thus, we hypothesized that HA hydrogels would provide a good natural scaffold for salivary gland formation and epithelial-mesenchymal interaction. Therefore, we conducted xenotransplantation of HSG alone or HSG and DPSC in HA hydrogels subcutaneously, ventral to the endogenous submandibular gland. Although HA hydrogels have previously been used to induce 3D formation of salivary secretory units in vitro [33], to our knowledge this is the first report demonstrating the potential use of HA hydrogels for in vivo formation of salivary gland tissue. We performed transplantation of DPSC and HSG in Rag1 null mice. Using this hetero-xenotransplantation approach, we can clearly monitor and distinguish the contribution of murine DPSC, using murine-specific mesenchymal primers and comparison with untransplanted murine salivary gland tissue, as well as the contribution and differentiation stage of HSG, using human-specific primers for salivary gland epithelial markers. Before transplantation, we observed that DPSC expressed approximately 10-fold higher levels of Fgf-7 and Fgf-10. These fibroblast growth factors are essential for proper salivary gland formation, as their respective knockouts and the knockout of their receptor Fgfr-1 result in salivary gland developmental defects or aplasia in mouse models [7,34]. Furthermore, upon transplantation, DPSC expressed significantly higher levels of Fgf-7 as compared to untransplanted murine salivary gland and HSG-only transplants. This is therapeutically significant, as FGF-7 (also known as Keratinocyte Growth Factor, KGF) administration has been shown to be beneficial for salivary gland restoration [35,36]. Given the tumorigenic nature of HSG, we only performed short-term transplantation. Nonetheless, at 2 weeks post-transplantation we showed a significant increase in the levels of human salivary gland differentiation markers in both HSG-only and HSG+DPSC co-transplantations. Moreover, gene expression of human alpha-amylase-1 (AMY-1) was significantly increased in HSG co-transplanted with DPSC, indicating that DPSC induced functional differentiation of HSG in vivo. Upon histological examination, it became obvious that differentiated glandular structures were located at the interface with mesenchyme and near vessels. This underscores the importance of epithelial-mesenchymal interaction for proper glandular differentiation and demonstrates that DPSC are a good source of ecto-mesenchyme. Consistent with our previous observations, DPSC exhibit great angiogenic capacity in vivo [14]. DPSC express high levels of Vegfr-3 and Vegf-C, and significantly higher levels of these angiogenic factors were found in the co-transplanted tissue containing DPSC and HSG as compared to HSG alone. This may explain the increased number of vessels in the co-transplanted tissue and suggests that angiogenesis may be an important aspect of salivary gland tissue development and regeneration.
Furthermore, we observed near-significantly higher levels of NF-200, which reached the levels seen in normal submandibular gland. It has recently been demonstrated that innervation is crucial for normal development of the salivary gland [37]. Future studies are warranted to understand the mechanisms by which DPSC induce angiogenesis and neurogenesis. In summary, we provide evidence of the potential use of DPSC as ecto-mesenchyme for the induction and support of salivary gland development. DPSC are an easily accessible stem cell source from third molars, and their multi-lineage stem cell differentiation capacity, combined with their trophic epithelial-morphogenic, angiogenic, and neurogenic capacities, makes them an ideal inductive and supportive cell source for salivary gland tissue engineering approaches.
Siccanin Is a Dual-Target Inhibitor of Plasmodium falciparum Mitochondrial Complex II and Complex III Plasmodium falciparum contains several mitochondrial electron transport chain (ETC) dehydrogenases shuttling electrons from their respective substrates to the ubiquinone pool, from which electrons are consecutively transferred to complex III, complex IV, and finally to molecular oxygen. The antimalarial drug atovaquone inhibits complex III and validates this parasite's ETC as an attractive target for chemotherapy. Among the ETC dehydrogenases from P. falciparum, dihydroorotate dehydrogenase, an essential enzyme of de novo pyrimidine biosynthesis, and complex III are the two enzymes that have been characterized and validated as drug targets in the blood-stage parasite, while complex II has been shown to be essential for parasite survival in the mosquito stage; therefore, complex II is also considered a candidate drug target for blocking parasite transmission. In this study, we identified siccanin as the first (to our knowledge) nanomolar inhibitor of the P. falciparum complex II. Moreover, we demonstrated that siccanin also inhibits complex III in the low-micromolar range. Siccanin did not inhibit the corresponding complexes from mammalian mitochondria even at high concentrations. Siccanin inhibited the growth of P. falciparum with an IC50 of 8.4 μM. However, the growth inhibition of the P. falciparum blood stage did not correlate with ETC inhibition, as demonstrated by the lack of resistance to siccanin in the yDHODH-3D7 (EC50 = 10.26 μM) and Dd2-ELQ300 (EC50 = 18.70 μM) strains, suggesting a third mechanism of action that is unrelated to mitochondrial ETC inhibition. Hence, siccanin has at least a dual mechanism of action, being the first potent and selective inhibitor of P. falciparum complexes II and III over the mammalian enzymes, and is thus a potential candidate for the development of a new class of antimalarial drugs. Introduction Human falciparum malaria, caused by Plasmodium falciparum, accounts for an estimated 241 million cases and 627,000 deaths annually. Most of these cases are children under the age of five in developing countries [1]. Unfortunately, the number of cases … yoelii [14]. However, in P. falciparum, only the Fp and Ip subunits have been identified to date, but not the CybS and CybL subunits, indicating low sequence similarity among the CybL and CybS subunits, even among Plasmodium species. Previously, we identified atpenin A5 as the first potent and specific inhibitor of complex II from mammals and nematodes [29]. Surprisingly, atpenin A5 and other classical complex II inhibitors, such as 2-thenoyltrifluoroacetone (TTFA) and carboxin, are not effective against complex II from Plasmodium [14,30] (Table 1) and E. tenella [27], indicating that the structure of the ubiquinone-binding site of complex II is significantly divergent between apicomplexan parasites and mammals.
(Figure 1 legend: The anchor subunits of complex II (CybL and CybS) from P. falciparum have yet to be identified. The reactions mediated by complex II are known to be reversible, such that complex II can act as a succinate:quinone reductase (SQR, forward reaction) or a quinol:fumarate reductase (QFR, reverse reaction). Genes encoding the plasmodial homologues of human SQOR, MDH, PRODH, ETF, and complex I are missing from the P. falciparum genome, as are genes encoding human homologues of P. falciparum NDH2 and MQO from the human genome. NADH, reduced nicotinamide adenine dinucleotide; NAD+, oxidized nicotinamide adenine dinucleotide; DHO, dihydroorotate; DHODH, DHO dehydrogenase; P5C, (S)-1-pyrroline-5-carboxylate; PRODH, proline dehydrogenase; SQOR, sulfide:quinone oxidoreductase; G3P, glycerol-3-phosphate; DHAP, dihydroxyacetone phosphate; G3PDH, G3P dehydrogenase; ETF, electron transfer flavoprotein; ETFDH, ETF dehydrogenase; Q, ubiquinone; QH2, ubiquinol.) The development of drugs targeting complex II has been reported for compounds with activity against many pathogens, including Ascaris suum [29], Trichophyton mentagrophytes [31], Mycobacterium tuberculosis [32], and Helicobacter pylori [33]. In the present study, we identified siccanin (Figure 2a) as the first (to our knowledge) nanomolar and selective inhibitor of the P. falciparum complex II. We also showed that siccanin inhibits complex III at micromolar concentrations. Moreover, we demonstrated that siccanin inhibits the growth of blood-stage P. falciparum, an effect that appears to be distinct from its ETC inhibitory activity. Therefore, siccanin is a promising lead compound for the development of new antimalarial drugs for the treatment of Plasmodium infection, including the potential for blocking parasite transmission.
Siccanin Strongly Inhibits P. falciparum SQR Activity Siccanin (Figure 2a), an antibiotic produced by the plant pathogenic fungus Helminthosporium siccans Dreschsler [35,36], was previously reported to be a potent and selective inhibitor of fungal, trypanosomal, and nematode complex II [37-40] and has been used clinically for the treatment of tinea pedis (Tackle®, Sankyo-Pharma). Our previous study showed that siccanin is a species-selective complex II inhibitor, effective against complex II from trypanosomatid parasites [38], Pseudomonas aeruginosa, P. putida, rat, and mouse, but ineffective against the enzymes from Escherichia coli, Corynebacterium glutamicum, and porcine [34]. Therefore, we examined whether siccanin inhibits P. falciparum SQR activity in a crude mitochondrial fraction from P. falciparum 3D7. We found that siccanin strongly inhibited the SQR activity of this fraction (Figure 2b). In contrast, classical ubiquinone-site inhibitors, such as TTFA [30], atpenin, and carboxin, were ineffective against the Plasmodium mitochondrial complex II [14] (Table 1). The inhibition by siccanin exhibited a classical biphasic inhibition pattern, with 50% inhibitory concentration (IC50) values (mean ± SD) of 0.016 ± 0.006 µM and 8.93 ± 2.44 µM for the first and second phases, respectively (Figure 2b and Table 2). As a next step, the effect of siccanin at a concentration of 10 µM against other mitochondrial dehydrogenases (Figure 3a,b) was evaluated, revealing that siccanin did not inhibit MQO, DHODH, or NDH2 activities but weakly inhibited G3PDH (Figure 3b). These results clearly showed that siccanin is a selective inhibitor of the P. falciparum complex II amongst the mitochondrial ETC dehydrogenases that shuttle electrons to the Q-pool (Figure 3b). Our results also indicated that the P. falciparum complex II has higher sensitivity to siccanin than does the enzyme from T. mentagrophytes (IC50 ~90 nM) [31].
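A biphasic dose-response of the kind described above is often modeled as the sum of a high-affinity and a low-affinity inhibitable component. The sketch below is only illustrative: the two-site model form, the fraction parameter, and the synthetic data are assumptions for demonstration, not the authors' fitting procedure.

```python
# Minimal sketch: fitting a two-phase (biphasic) inhibition curve like the one described above.
# The model form and the synthetic "data" are illustrative assumptions, not the study's fit.
import numpy as np
from scipy.optimize import curve_fit

def biphasic(conc, frac1, ic50_1, ic50_2):
    """Residual activity as the sum of a high-affinity and a low-affinity component."""
    return frac1 / (1 + conc / ic50_1) + (1 - frac1) / (1 + conc / ic50_2)

conc = np.logspace(-3, 2, 12)                         # inhibitor concentrations, uM
activity = biphasic(conc, 0.5, 0.016, 8.9)            # synthetic data from assumed parameters
activity += np.random.default_rng(0).normal(0, 0.02, conc.size)

params, _ = curve_fit(biphasic, conc, activity, p0=(0.5, 0.01, 10.0),
                      bounds=([0, 1e-4, 1e-2], [1, 1, 100]))
frac1, ic50_1, ic50_2 = params
print(f"phase 1: {frac1:.0%} of activity, IC50 ~ {ic50_1:.3f} uM; "
      f"phase 2 IC50 ~ {ic50_2:.1f} uM")
```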
Inhibition of P. falciparum Growth by Siccanin In other organisms living in anaerobic or microaerophilic environments (e.g., E. coli [41], Mycobacterium tuberculosis [32,42-44], Ascaris suum (adult stage) [29], Echinococcus multilocularis (protoscoleces) [45], and even several solid tumor cells [46]), the QFR activity of complex II is well documented and has been suggested to play an important role in environmental adaptation. Previously, we demonstrated that disruption of the Fp subunit-encoding gene of P. falciparum (sdha) impairs the growth of blood-stage parasites; the growth of the ∆sdha mutant was rescued by succinate but not by fumarate, suggesting that this plasmodial complex II might function as a QFR rather than as an SQR [9]. Since siccanin potently inhibited complex II from P. falciparum, we next tested whether siccanin exposure phenocopied the ∆sdha mutant. As described above, complex II is not essential for the survival of blood-stage P. falciparum; however, we could not detect live parasites under the microscope following exposure to siccanin at a concentration of 50 µM (Figure 4a). Further experiments demonstrated that siccanin modestly inhibits the growth of P. falciparum with an IC50 of 8.40 ± 0.60 µM (Figure 4b, Table 2). This result suggested that siccanin may have a secondary target other than complex II in P. falciparum; presumably this second target is essential for survival in the blood stage. (Figure 4 legend, in part: (b) the IC50s of siccanin alone and in the presence of succinate or fumarate were 8.40 ± 0.60, 10.8 ± 2.16, and 11.5 ± 1.09 µM, respectively. (c) Effect of siccanin on the growth of ∆sdha-3D7 (red) and the parent (wild-type) 3D7 (blue); the IC50s were 6.13 ± 0.91 µM and 6.10 ± 1.00 µM for ∆sdha-3D7 and wild-type parasites, respectively, calculated using the four-parameter logistic equation in GraphPad Prism ver. 6.01. Data are presented as mean ± SD (n = 3).)
Effect of Succinate or Fumarate, and Pfsdha Disruption, on Growth Inhibition by Siccanin Next, we tested whether addition of succinate or fumarate [9] rescues the growth-inhibitory effect of siccanin (Figure 4a). Compared to the control's IC50 of 8.4 µM, the IC50 of siccanin in the presence of succinate or fumarate was 10.8 or 11.5 µM, respectively (Figure 4b), suggesting that the 3D7-growth inhibition caused by siccanin was independent of the compound's effect on complex II activity. Consistent with this hypothesis, the effect of siccanin on ∆sdha-3D7 P. falciparum growth was similar to that on the parent 3D7, with IC50s of 6.13 and 6.10 µM, respectively (Figure 4c). Together, these results strongly indicated the existence of a secondary essential target of siccanin in the blood-stage parasite. Siccanin Inhibits P. falciparum Complex III DHODH, MQO, and complex III (Figure 1) are ETC enzymes that have been suggested to be essential for the survival of blood-stage P. falciparum [11,47]. Since siccanin did not inhibit DHODH and MQO activities (Figure 3b), we tested whether complex III activity is inhibited by siccanin. The inhibition of complex III was evaluated using DHO-cytochrome c reductase. This assay measures electron transfer from DHODH to cytochrome c via complex III by recording the absorbance change of cytochrome c at 550 nm [14]. The results showed that, at a 60 µM concentration, siccanin does not inhibit DHODH activity (Figure 5a) but completely inhibits DHO-cyt c activity, with an IC50 value of 8.39 ± 2.92 µM (Figure 5b, Table 2), indicating that siccanin inhibits complex III and not DHODH. It has been reported that siccanin does not inhibit mammalian complex II [34] (Table 2). We therefore evaluated whether siccanin inhibits mammalian complex I and III activities (NADH-cytochrome c reductase). Notably, even at concentrations as high as 500 µM, siccanin did not inhibit the mammalian complexes I and III (not shown), demonstrating that siccanin is a selective inhibitor of complexes II and III from P. falciparum, with maximum selectivities of 57,400-fold and of more than 60-fold, respectively (Table 1). Although siccanin did not inhibit mammalian complexes I, II, or III, the growth of DLD-1 cells and HDF cells was inhibited by siccanin, which exhibited 50% effective concentrations (EC50s) of 34.2 ± 2.73 µM and 16.1 ± 1.21 µM, respectively (selectivities of 1.9- and 4.1-fold, Table 2). These results indicated that, in human cells, siccanin might have target(s) other than ETC enzymes.
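The IC50 and EC50 values quoted above were obtained with four-parameter logistic fits in GraphPad Prism (see the Figure 4 legend); the snippet below is a rough Python equivalent under that assumption, with invented dose-response points rather than the study's measurements.

```python
# Illustrative four-parameter logistic (4PL) fit for IC50/EC50 estimation, analogous to
# the GraphPad Prism fits cited above; the data points below are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Response (% of untreated control) as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1 + (conc / ic50) ** hill)

conc = np.array([0.3, 1, 3, 10, 30, 100])          # uM
response = np.array([98, 92, 75, 45, 15, 5])       # % of control (invented values)

params, _ = curve_fit(four_pl, conc, response, p0=(0, 100, 8, 1))
bottom, top, ic50, hill = params
print(f"IC50 ~ {ic50:.1f} uM (Hill slope {hill:.2f})")
```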
Interaction of Atovaquone with Siccanin against P. falciparum In Vitro Compounds inhibiting complex III can bind either at the Qo site (quinol-binding site facing the outer membrane) or the Qi site (quinone-binding site facing the matrix side) [48-50]. Because the growth inhibition of blood-stage parasites by siccanin was likely due to the inhibition of complex III, we tested by modified isobologram [51], based on the parasite's LDH assay as previously described [52], whether the binding site of siccanin overlaps with that of atovaquone, which is a well-known Qo site inhibitor.
A pair of fractional IC50s for each combination of siccanin and atovaquone was plotted for the isobologram analysis. The fractional IC50 of siccanin was calculated by dividing the IC50 of siccanin combined with each atovaquone concentration by the IC50 obtained for siccanin alone, and plotted on the X-axis. Similarly, the corresponding atovaquone fractional IC50s were calculated and plotted on the Y-axis. In general, if two compounds bind at the same site, the isobologram will show an antagonistic pattern with a combination index exceeding 1. On the other hand, if the binding sites are distinct, the isobologram will show a synergistic or additive pattern, with a combination index of less than or equal to 1, respectively [53-56]. In the case of siccanin, the determined combination index was 1.0, indicating that siccanin has an additive effect when parasites are treated with the combination of siccanin and atovaquone (Figure 5c), thus suggesting the Qi site as the binding site of siccanin. Effect of Siccanin on P. falciparum yDHODH-3D7 and Dd2 Drug-Resistant-Panel Strains Previously, it was demonstrated that P. falciparum expressing the yeast DHODH (yDHODH) can oxidize DHO in a ubiquinone-independent manner, resulting in resistance to DHODH and complex III Qo/Qi site inhibitors [11]. Because siccanin inhibited complex III of this parasite (Table 2), we tested whether the P. falciparum 3D7-yDHODH strain was resistant to siccanin. Surprisingly, siccanin was equally active against both the parent (3D7) and 3D7-yDHODH strains, showing EC50s of 11.65 µM and 10.26 µM, respectively (Figure 6a). In contrast, atovaquone was a potent growth inhibitor of 3D7 (EC50 = 0.560 nM) but not of 3D7-yDHODH (EC50 > 100 nM) (Figure 6b). Next, siccanin was tested against the multidrug-resistant Dd2 strain and the drug-resistant-panel strains Dd2_048 (PI4K-S743T mutation; MMV390048-resistant) [57], Dd2_DDD (eEF2-Y186N mutation; DDD107498-resistant) [58], Dd2_DHIQ (ATP4-G358S mutation; cipargamin-resistant) [59], Dd2_GNF (Carl-I1139K mutation; GNF156-resistant) [60], and Dd2_ELQ300 (cyt b-I22L mutation at the Qi site; ELQ300-resistant) [61] to investigate cross-resistance and potential target protein(s). No siccanin resistance, however, was detected in any of these strains, for which siccanin exhibited EC50s for growth inhibition ranging from 12.4 µM to 18.7 µM (Figure 6c), indicating that siccanin targets protein(s) other than the mutated gene products of these strains.
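A minimal sketch of the fractional-IC50 and combination-index arithmetic used in the isobologram analysis above; the IC50 values in the example are invented for illustration, not the measured ones.

```python
# Fractional IC50s and combination index (CI) as described for the isobologram analysis.
# CI > 1 suggests antagonism (shared binding site), CI = 1 additivity, CI < 1 synergy.
def combination_index(ic50_a_combo, ic50_a_alone, ic50_b_combo, ic50_b_alone):
    fic_a = ic50_a_combo / ic50_a_alone    # fractional IC50 of drug A (x-axis)
    fic_b = ic50_b_combo / ic50_b_alone    # fractional IC50 of drug B (y-axis)
    return fic_a + fic_b

# Invented example: siccanin's IC50 halves in the presence of a fixed atovaquone dose
# that on its own accounts for half of atovaquone's IC50 (concentrations in uM).
print(combination_index(4.2, 8.4, 0.28e-3, 0.56e-3))    # -> 1.0, i.e., additive
```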
Discussion In this study, we demonstrated that siccanin is a nanomolar-order inhibitor of complex II from an apicomplexan parasite. We also demonstrated that, while siccanin does not affect other ETC dehydrogenases, the compound shows (in addition to its activity against complex II) micromolar-order inhibition of complex III. Thus, siccanin represents a novel scaffold compared to all known complex III inhibitors. In mammals, the mitochondrial ETC is essential for the maintenance of several processes, including energy production [62] and de novo biosynthesis of pyrimidines [63]. In contrast, the blood stage of P. falciparum does not depend on the ETC for energy production, relying instead on cytoplasmic glycolysis [64,65]. In blood-stage P. falciparum, the ETC is essential for the de novo biosynthesis of pyrimidines, a process that is linked at the level of DHODH, the Q-pool, and complex III [11] (Figure 1). Since the P. falciparum genome apparently does not encode a pyrimidine salvage pathway, the de novo pyrimidine biosynthesis pathway and/or complex III are attractive targets for the development of new antimalarial drugs [11]. Other P. falciparum ETC dehydrogenases, such as G3PDH and MQO, seem to be functional in the blood stage [66]. Notably, MQO activity was higher than DHODH activity in our assay (Figure 3a). Recent studies showing that the MQO-encoding locus cannot be genetically ablated in P. falciparum suggest that MQO is essential for survival in the blood stage [47]. The essentiality of MQO was attributed to the key role played by MQO in linking the ETC to the fumarate cycle, given that the latter is essential for the purine salvage pathway [10]. However, other reports have suggested that PfMQO is not an appealing target [67]. Clearly, further studies will be required to determine the druggability of PfMQO. Knockout of the sdha gene in P. falciparum has been shown to result in a growth defect that is rescued by supplementation of the culture medium with succinate, though not with fumarate [9]. In P. berghei, mutant parasites devoid of complex II as well as NDH2 activities have been reported to be impaired for development in the insect stage [8,68]. Complex IIs from apicomplexan parasites are known to be insensitive to classical inhibitors, and, to date, no potent inhibitors of those enzymes have been reported [27,30]. Interestingly, as shown in the present study, siccanin is a potent inhibitor of the P. falciparum complex II, exhibiting a biphasic inhibition pattern against this enzyme (Figure 2b). A similar biphasic inhibition pattern has been reported for S. cerevisiae complex II and its dinitrophenol-derivative inhibitor, suggesting the existence of two quinone-binding sites [69]. The existence of two binding sites for quinone species has also been described for the quinol:fumarate reductases from E. coli and Wolinella succinogenes, based on several approaches [70-73], including crystallographic studies [74]. This inference for the P. falciparum complex II is supported by previous reports demonstrating that malonate-sensitive NADH-fumarate reductase activity is detectable in the mitochondria-rich fraction of P. falciparum and also of E. tenella (malonate is a general complex II inhibitor binding at the Fp subunit) [22,27,75].
Although similar studies of the P. falciparum complex II will be needed to definitively confirm this hypothesis, it is tempting to speculate that two quinone-binding sites may also exist in the plasmodial complex II; presumably, one of the quinone-binding sites has a higher affinity for siccanin than the other, resulting in the biphasic inhibition observed in Figure 2b. The first phase would be inhibited by siccanin at a nanomolar-order concentration (0.016 µM), while the second phase would be inhibited by the same compound at a micromolar-order concentration (8.93 µM). Interestingly, in addition to its activity against complex II, siccanin also inhibits complex III with an IC50 of 8.39 µM. Moreover, siccanin inhibits the P. falciparum complex II with over 520-fold (first phase) and 0.93-fold (second phase) greater selectivity than its inhibition of complex III, but with no apparent effect against mammalian respiratory chain complexes (Table 2). These results suggest that the growth inhibition of blood-stage P. falciparum by siccanin may be the consequence of ETC inhibition, exerted primarily at the level of complex III (rather than by direct inhibition of the TCA cycle via complex II). However, the similar sensitivity of the 3D7 and 3D7-yDHODH strains, as well as of Dd2 and Dd2-ELQ300, towards siccanin seems to exclude complex III as its primary target in P. falciparum.
Malaria Parasite Strains and Cultivation
The P. falciparum 3D7 strain was cultured, as described previously, in 3% hematocrit type A human red blood cells (RBCs) in RPMI1640 medium (Invitrogen; San Diego, CA, USA) supplemented with 25 mM sodium hydrogen carbonate, 10 µg/mL hypoxanthine, 40 µg/mL gentamicin sulfate, and 0.5% (w/v) Albumax II (Invitrogen) ("complete medium") [9,52]. Cultures were maintained in an MG-70M multi-gas incubator (Taitec) under 5% O2, 5% CO2, and 90% N2 at 37 °C. The medium was replaced daily [76]. Parasitemia was measured by thin blood smears stained with Giemsa. In our previous study, we generated mutant P. falciparum strains expressing either disrupted or full-length versions of the flavoprotein (Fp) subunit (encoded by the Pfsdha gene) [9]. Both strains were cultured in complete medium at 3% hematocrit supplemented with 5 nM WR99210 (Jacobus Pharmaceuticals; Plainsboro, NJ, USA) [9]. The experiments using human RBCs were performed under the guidelines of the Research Ethics Committee of the Faculty of Medicine of the University of Tokyo and the Institutional Review Board (IRB) of Nagasaki University (permission nos. 10050 and 19, respectively). Human RBCs were obtained from the Japanese Red Cross Society. Parasite cultures were synchronized by treatment with 5% (w/v) sorbitol for 10 min [77]. P. falciparum 3D7-yDHODH was prepared and maintained as described previously [11,78,79]. P. falciparum Dd2 and the Dd2-derived mutant strains were kindly provided by David A. Fidock [80] and maintained at Nagasaki University essentially as described previously, except without supplementation with WR99210.
Assessment of Parasite Survival
Synchronized ring-form parasites (2% parasitemia) were prepared in complete medium containing 50 µM siccanin (at a fixed final DMSO concentration of 0.1% (v/v)) and distributed to 24-well plates at 1 mL/well. After incubation for 16, 24, 32, and 48 h, parasitemia was assessed under the microscope using thin blood smears stained with Giemsa.
Drug Sensitivity Assay
Synchronized ring-form parasites (0.3% parasitemia) were prepared in complete medium containing different concentrations of the various compounds (at a fixed final DMSO concentration of 0.1% (v/v)) and distributed to 96-well plates at 100 µL/well. After 72 h of incubation, parasite growth was monitored by the previously described lactate dehydrogenase (PfLDH)-based assay method [81]; plates were read using a SpectraMax® Paradigm spectrophotometer (Molecular Devices, Inc.; San Jose, CA, USA). EC50 values were calculated using Prism® ver. 6.01 (GraphPad; San Diego, CA, USA). Values are presented as the means of three independent experiments. For the cytotoxicity assays, DLD-1 and HDF cells (2.5 × 10^4 cells/mL) were seeded in 96-well plates and cultured for 24 h. The cells were then washed with phosphate-buffered saline (PBS) and the medium was replaced with fresh medium supplemented with 10% (v/v) FBS. Medium containing siccanin at various concentrations was added to the wells and the cells were cultured for another 48 h. Controls were treated with medium containing 1% (v/v) DMSO and 10% (v/v) FBS. Next, the cells were washed with PBS and the medium was replaced with fresh medium supplemented with 10% (v/v) FBS. An aliquot (10 µL) of CCK8 solution (Cell Counting Kit-8; Dojindo Laboratories; Kumamoto, Japan) was added to each well and the plates were incubated for another 2 h, at which point the absorbance at 450 nm was monitored using a SpectraMax® Paradigm spectrophotometer. The EC50 values were calculated using Prism® ver. 6.01. Selectivity was calculated as the ratio of the DLD-1 or HDF IC50 to the P. falciparum IC50.
Preparation of Crude P. falciparum Mitochondrial Fraction
A synchronized parasite culture was grown for 64 h and a crude mitochondrial fraction was prepared from trophozoite-stage cells (as confirmed by Giemsa staining). Infected red blood cells were collected by centrifugation at 800× g for 5 min at 4 °C and incubated for 5 min at room temperature with 40 mL of 0.075% (w/v) saponin in AIM buffer (120 mM KCl, 20 mM NaCl, 10 mM PIPES (1,4-piperazinediethanesulfonic acid) buffer [pH 6.7], 1 mM MgCl2, 5 mM glucose). Released parasites were collected by centrifugation at 2780× g for 7 min at 4 °C and washed three times with AIM buffer. P. falciparum was suspended in MSE buffer [225 mM mannitol, 75 mM sucrose, 0.1 mM ethylenediaminetetraacetic acid (EDTA), 3 mM Tris-HCl buffer, pH 7.4] containing 1 mM phenylmethylsulphonyl fluoride, and disrupted by nitrogen cavitation at 1200 psi using a 4639 Cell Disruption Bomb (Parr Instrument Company; Moline, IL, USA). Unbroken cells and cell debris were removed by centrifugation at 800× g for 5 min at 4 °C. The supernatant was centrifuged at 5000× g for 20 min at 4 °C. The resulting pellet was suspended in 200-400 µL MSE buffer and used as the crude mitochondrial fraction for all enzymatic assays, as described previously [22]. DHO-cyt c activity was measured at 25 °C in 1 mL of reaction mixture containing 20 µM cytochrome c and 2 mM KCN in 30 mM Tris-HCl buffer, pH 8.0. The reduction of cytochrome c was monitored at 550 nm after initiation of the reaction by the addition of 500 µM dihydroorotate [14,27]. Various concentrations of siccanin (formulated in DMSO) were added to the reaction mixtures before the initiation of the reactions. In all assays, the final concentration of DMSO was fixed at 0.1% (v/v). IC50 values were calculated using Prism® ver. 6.01.
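The EC50/IC50 values above were obtained with GraphPad Prism. For readers without Prism, the sketch below shows an equivalent four-parameter logistic (Hill) dose-response fit in Python; the concentration-response data points are synthetic and serve only to illustrate the fitting step, not to reproduce the paper's measurements.

```python
# Minimal sketch of a four-parameter logistic (4PL) dose-response fit of the kind
# Prism performs to extract EC50/IC50 values. Data below are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Response as a function of compound concentration (standard 4PL form)."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hill)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])   # µM, hypothetical
resp = np.array([98, 95, 88, 70, 45, 20, 8])      # % growth, hypothetical

p0 = [0.0, 100.0, 10.0, 1.0]                      # initial guesses: bottom, top, EC50, Hill
params, _ = curve_fit(four_pl, conc, resp, p0=p0)
print(f"EC50 ≈ {params[2]:.2f} µM, Hill slope ≈ {params[3]:.2f}")
```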
Measurement of NADH-Cytochrome c Reductase Activity in Porcine Mitochondria
Porcine mitochondria were prepared as described previously [82]. NADH-cytochrome c (NADH-cyt c) activity was measured at 25 °C in 1 mL of reaction mixture containing 1 mM MgCl2, 2 mM KCN, and 20 µM cytochrome c in 30 mM potassium phosphate buffer, pH 7.5. Activity was measured by recording the absorbance change of cytochrome c at 550 nm after the reaction was initiated by the addition of 20 µM NADH. Various concentrations of siccanin dissolved in DMSO were added to the reaction mixtures 5 min before the initiation of the reactions. Selectivity was calculated as the ratio of the mammalian to the P. falciparum IC50s.
Isobologram Analysis with Atovaquone and Siccanin
The effect of the combination of atovaquone and siccanin was evaluated using isobologram analysis to assess synergy, additivity, or antagonism between the two compounds [51,83]. Synchronized ring-form parasites (0.3% parasitemia) were prepared in complete medium containing different concentrations of siccanin and atovaquone (at a fixed final DMSO concentration of 0.2% (v/v)) and distributed to 96-well plates at 100 µL/well. After 72 h of incubation, parasite growth was monitored by the previously described PfLDH-based assay using a SpectraMax® Paradigm spectrophotometer. IC50 values were calculated using Prism® ver. 6.01. Data represent the means of three independent experiments, each performed in triplicate. The isobologram analysis was performed as described previously [51,83]. For each combination assay, IC50s were calculated from cultures of P. falciparum grown in the presence of atovaquone or siccanin alone and in combinations of various concentrations of the two compounds. The fractional inhibitory concentration (FIC) of each compound was determined as FIC = IC50 of the compound in combination / IC50 of the compound alone, and the combination index (CI) was obtained as the sum of the two FIC values (CI = FIC_siccanin + FIC_atovaquone). Values of CI < 1 represent synergism, CI = 1 represents additivity, and CI > 1 represents antagonism.
Conclusions
In conclusion, we identified siccanin as a promising antimalarial drug candidate, effective not only against blood-stage parasites but also with the potential to be active against the insect stage. Such a dual-stage-targeting drug might show utility for blocking transmission as well as for the treatment of malaria patients. Notably, siccanin has no effect on mammalian respiratory chain enzymes (complexes I, II, and III). However, given that siccanin inhibits the growth of DLD-1 (a cancer cell line) and HDF (a normal cell line), it will be critical to determine the mammalian target of siccanin in order to increase the compound's selectivity for the parasite for potential future development as an antiparasitic drug.
7,926.8
2022-07-01T00:00:00.000
[ "Biology", "Chemistry", "Medicine" ]
Factors Affecting the Performance of Sub-1 GHz IoT Wireless Networks Internet of Things (IoT) devices frequently utilize wireless networks operating in the Industrial, Scientific, and Medical (ISM) Sub-1 GHz spectrum bands. Compared with higher frequency bands, the Sub-1 GHz band provides broader coverage and lower power consumption, which are desirable properties for low-cost IoT applications. However, low-power and low-cost IoT modules cause high variability in network performance. The varying influence from real-world environments additionally undermines wireless propagation and aggravates this variability. We explore these influences and provide a checklist of potential factors affecting wireless network performance in real-world environments. Using multiple low-cost IoT modules, we conduct multiple experiments in five real-world scenarios: indoor, street, open field, ground-to-drone (G2D), and drone-to-drone (D2D). Specifically, the tests are conducted inside a building, on a straight street with wooded sidewalks and aligned houses, on an open field golf course, and high up in the air between drones. To understand the difficulty of reproducibility in IoT deployments, we studied the effect of factors in four categories. This includes the effect of path (line of sight, distance, and obstruction), configuration (transmit power level), weather (precipitation, temperature, and humidity), and installation (IoT module mobility and position). We find that some of the factors in the path and weather categories have the most influence among all the factors, while the rest have moderate to low impacts. In the end, we provide a complete checklist of all the tested factors, which we believe would be constructive not only to academics but also to industrial practitioners working on wireless IoT systems.
Introduction
The wireless signal continuously varies and attenuates as it propagates from the transmitter to the receiver [1]. Apart from the attenuation along the line of sight, many other factors would affect the wireless signal propagation in a real-world deployment. The combined impacts are exceptionally intricate and undetermined, especially for the low-cost Internet of Things (IoT) modules. These modules are produced in large quantities and have high variability in quality and may also be highly sensitive to minor changes in deployment scenarios. As is indicated in [2], the production of low-cost wireless modules is significantly different from the regular networking environments. The wide range of the frequency bands, 169 MHz to 2.4 GHz, used by these modules makes the testing more complicated and expensive [2]. It is also incredibly arduous to reproduce a specific experiment since uncontrollable factors such as weather and surroundings are almost impossible to replicate. Reducing the influence of different real-world environments, such as indoor, urban, outdoor, and line of sight, becomes increasingly noteworthy. We believe that a checklist to keep in mind when conducting experiments and analyzing raw data would significantly help academics and practitioners working with IoT technologies. Hence, we have recorded and studied the observed behaviors of several IoT modules under selected environments and conditions, aiming to provide an initial checklist of potential factors affecting wireless network performance in realistic situations.
To the best of our knowledge, this paper is the first to present a full checklist of realistic factors and each factor's influence based on empirical evidence. Moreover, Sub-1 GHz wireless networks are developing into one of the critical elements of low-cost IoT applications. These are low-frequency networks that consume less energy, cover a broader area, and are less prone to interference than higher frequency networks [3]. However, the need for low-cost IoT modules also brings variability and instability to both low- and high-frequency networks. Thus, to understand the performance disparity of different frequency bands, we selected two Sub-1 GHz modules, Digi XBee-PRO 900HP (900 MHz) [4], and two 2.4 GHz modules, Digi XBee Zigbee (2.4 GHz) [5], for initial analyses. Technical details of the modules and protocols will be introduced in Section 3. More information about the 900 MHz frequency band can be found in [6,7]. Higher frequency bands (e.g., the 2.4 GHz band used by the Digi XBee Zigbee modules) carry more information than lower frequency bands. However, they have higher attenuation and higher interference. Also, the 2.4 GHz band is now congested with signals from Wi-Fi and Bluetooth. Lower frequency bands, such as the Sub-1 GHz Industrial, Scientific, and Medical (ISM) band, used in our case by the 900HP module, have longer ranges and lower interference. The Sub-1 GHz ISM band is used primarily in proprietary links, with fewer competing applications using the same spectrum [3]. Because of their long wavelengths, Sub-1 GHz signals can pass through walls and turn corners better (bend farther around obstacles and reduce the blocking effects), thus making them propagate better among buildings in urban environments. However, they have a lower data rate. Nonetheless, the extended range and lower energy consumption generally make them preferable for IoT applications [3]. Therefore, our experiments are targeted at the typical lower frequency band networks, Sub-1 GHz wireless networks. Specifically, we use the Digi XBee 900HP radio frequency (RF) modules to analyze and determine the common factors affecting network behaviors in real-world scenarios. We have conducted controlled indoor and uncontrolled outdoor experiments in different real-world scenarios and conditions to record the experimental environments' potential influence. Five experimental setups were arranged for the field tests: indoor, street, open field, ground-to-drone (G2D), and drone-to-drone (D2D). In other words, the experiments were conducted inside a building, on a straight street with wooded sidewalks and aligned houses, on an open field golf course, and high up in the air between drones. Our study's most important contribution is to show the impact of factors affecting Sub-1 GHz wireless network performance in real-world conditions by recording and analyzing the field tests' experimental results.
We classify these factors into the following four categories: (1) Path: the physical medium between the modules, such as the distance, the line of sight, and the environment (obstacles and surroundings); (2) Configuration: the configurable parameters of the modules, such as the transmit power level, number of packet delivery attempts, and channel mask; (3) Weather: precipitation (such as rain, hail, snow, sleet, or thunderstorms), along with temperature and humidity; and (4) Installation: IoT module mobility, position, antenna direction, and hardware components. Some of these parameters are not reported when IoT devices' performance is specified (by manufacturers or scholars), but they often affect the performance in a significant way. At the end of this paper, a complete checklist of all the tested factors is provided. The organization of the paper is as follows. In Section 2, we present the related work and the motivation. In Section 3, we discuss the experimental setup and the potential factors. In Section 4, we analyze and present the experimental results in addition to the complete checklist. Finally, we give overall conclusions in Section 5.
Related Work
Wireless communication technologies have been developed and standardized for years. Recently, the growth of IoT applications has resulted in increasing interest in the performance of these technologies. Therefore, performance analyses have become indispensable and crucial for the comparison and selection of wireless modules. For example, Ferrari et al. evaluate wireless sensor networks' indoor performance in realistic scenarios by comparing the Zigbee and Z-Wave protocols in different topologies [8]. Rathod et al. test Sub-1 GHz modules in real-world environments to select a network deployment inside their campus and differentiate wireless signal propagation performance in different indoor and outdoor ambiances [9]. Vondrouš et al. evaluate mesh networks' performance in the ISM frequency band to find the leading cause of the degradation in network stability and the decrease of Quality of Service (QoS) [10]. Aust et al. evaluate the performance of transmission characteristics and discuss the transmission boundaries and the modulation schemes of Sub-1 GHz modules [11]. Robinson and Knightly investigate the deployment factors in terms of topological and structural characteristics [12]. However, these experimental results are mostly not reproducible because the wireless signal is susceptible to even minor changes in the environment. Hence, it is essential to record as many factors as possible to see their extent of influence in the measured settings, since they potentially affect wireless signal propagation. With the increasing investigation of performance evaluation and technology comparison, leveraging realistic environments and network emulation is drawing more attention and being continuously discussed. For example, Khan et al. indicate the time-consuming, expensive elements of extensive hardware and human resources in real-world experiments [13]. Jardosh et al. use simulations to produce real-world scenarios for mobile network evaluation using a mobility and signal propagation model [14]. Suranata et al. record and analyze their field experiment results on the practical efficiency of Sub-1 GHz networks for low-power systems [15]. Researchers also try to ease the tension between realistic environments and simulation by adding realistic simulation features.
For example, Judd and Steenkiste develop a wireless emulator for practical and repeatable experimentation in the physical layer to leverage the natural environment and the repeatability of simulations [16]. They use the emulator to better understand real-world experiments and improve wireless network applications [17]. Moreover, performance evaluations of wireless networks often use specific protocols, standards, and metrics, for example, received signal strength indicator (RSSI), throughput, round-trip time (RTT), and packet loss rate (PLR) [10,12,16,18,19]. However, few research papers have worked on balancing theory and practice and capturing the target configuration and the environment's realistic behavior. Moreover, Sub-1 GHz wireless links have gained more attention due to their practicability for low-cost IoT applications. However, due to the complexity of real-world situations, the real-world robustness of Sub-1 GHz wireless networks has become a critical issue. It cannot rely solely on abstraction and assumption when developing new protocols or new hardware. Srinivasan et al. present their observations of low-power wireless links and summarize the standard assumptions for network protocols based on the observations [20]. Their experimental observations indicate that these assumptions are not always valid. For instance, "link quality is the same on all channels" is not always valid. Thus, it is crucial to involve the realistic behaviors of wireless hardware modules. Sha et al. propose a protocol for data-intensive sensing applications based on empirical power control and interference models [21]. They study the correlations between transmit power and the RSSI by implementing real-world experiments. Liando et al. conduct real-world experiments with practical conditions to verify the performance of Long Range (LoRa) networks. The results show that LoRa's performance is severely affected in real-world situations with obstructions such as buildings and vegetation. Kim et al. investigate the Sub-1 GHz frequency-hopping-based 6LoWPAN and verify the impact of network size and other factors on network performance in natural environments [22]. Chandu et al. analyze the performance of a proposed Sub-1 GHz IoT system integrated with temperature and humidity sensors in different settings [23]. Even though prior research on wireless performance has been intensively investigated, there is no complete guideline on what should be examined, and how, for real-world deployments of wireless devices, specifically Sub-1 GHz wireless networks. Our goal is to provide a helpful checklist of potential factors affecting wireless performance in realistic environments. We believe it will facilitate and speed up future experiments and serve as a primary reference for analyzing raw data. Specifically, we resolve to initiate this checklist of realistic parameters in the context of Sub-1 GHz wireless networks, crucial for low-cost and long-range IoT systems.
Experimental Setup
To explore the realistic factors for signal propagation and wireless network performance, we set up controlled and uncontrolled experiments in indoor and outdoor environments, respectively. As indicated earlier, five experimental setups and multiple low-frequency IoT modules were selected for the tests. The RSSI values and PLR were recorded during the tests. The five setups are listed below: (1) Indoor. This setup is inside a building.
The transmitter is stationary, while the receiver moves away from it and stops every 10 m. We recorded the RSSI values at every stop. (2) Street. This setup is on a straight road with wooded sidewalks, aligned apartments, and houses. We recorded the RSSI values every 50 m. (3) Open Field. This setup is on a golf course that has very few obstacles around. We recorded the RSSI values every 50 m. (4) Ground-to-Drone (G2D). In this setup, the receiver and its battery are attached to a drone, while the transmitter is installed on a table on the ground, as shown in Figure 1(a). We recorded the RSSI values every 50 m. (5) Drone-to-Drone (D2D). In this setup, the transmitter and receiver with batteries are attached to two different drones. Additionally, a third module is set up on the ground to monitor and initiate the experiments. This module controls the operations by sending messages to the transmitter. The transmitter then starts the test and continuously sends the RSSI values between the transmitter and the receiver to the ground. We recorded the RSSI values every 50 m. The modules on the ground in all the experiments are set up on a 70 cm high, round table, as shown in Figure 1(b). Based on the above setups, we aimed to study the factors' influence in the four categories described in Section 1. Our aim here is to experiment with commonly used Sub-1 GHz protocols (i.e., Zigbee and its variations). We did not include Low-Power Wide-Area Networks (LPWANs) in this study. We have experimented with LPWANs in the past [24]. Also, while LPWANs are designed for IoT, they have been mostly ignored by IoT devices and designers. The popular LPWANs Sigfox and LoRa both require a service provider. Therefore, their deployment has been extremely slow. While they may have been used by some utility companies, there are hardly any cities in the United States where these services are available to the general public. Instead, 2G (GSM), 3G, and 4G are commonly used for all long-distance IoT applications. 5G has numerous features exclusively designed to support IoT. It is expected to be commonly available everywhere in the near future, further diminishing the prospects for other LPWANs. Even Wi-Fi 802.11ah, which has been designed by the IEEE 802.11 WG specifically for IoT, is not in common use. To the best of our knowledge, Zigbee and Wi-Fi (not including 802.11ah) are used in most IoT devices found in homes currently. Therefore, for simplicity and because of their popularity, we selected the Digi XBee-PRO 900HP modules, as discussed in Section 1, to construct an end-to-end wireless network in our experiments. Each module works with a 2.1 dBi, half-wave dipole, omnidirectional antenna on a specific frequency in the 902-928 MHz band using pulse-width modulation (PWM). According to the specifications, the RF data rate and the transmit power can support up to 200 kbps and 250 mW, respectively. The networking protocol of the 900HP module is the DigiMesh protocol developed by Digi International [25]. This protocol is similar to the Zigbee protocol utilized by the Zigbee module, with fewer complexities and more flexibility. Zigbee and DigiMesh are based on the IEEE 802.15.4 standard [26]. The physical layer and Medium Access Control (MAC) sublayer are built as defined in the standard. The differences between the Zigbee and DigiMesh protocols can be found in [25].
The DigiMesh protocol is suitable for low-power IoT devices since it targets power-sensitive applications relying on low-power batteries or power-harvesting technologies. Notably, we set up an extra module as a coordinator connected to the transmitter for test management and monitoring, as shown in Figure 2. With this coordinator and the configuration software, XBee Configuration & Test Utility (XCTU), we could remotely change the transmitter's settings during the tests [27]. RSSI and PLR are initially used as performance metrics to determine the transmission quality of the wireless modules. The RSSI value can differentiate the channel status (i.e., crowded or not) and show how the broadcast signal strength can be optimized. Still, it only indicates the energy of the signal detected at the antenna port, which means that the value may carry undesired amounts of background noise and other interference. Using it in combination with other metrics such as SINR (Signal-to-Interference-plus-Noise Ratio), SNR (Signal-to-Noise Ratio), PDR (Packet Delivery Ratio), or PLR makes it a good indication of link quality [28,29]. A low baud rate may affect the throughput or data rate of the wireless transmission if serial data is lost or delayed due to operational conditions. Depending on the specific implementation, a wireless module might strategically adjust the data rate according to RSSI or SNR, as in [30]. Theoretically, the baud rate and payload do not influence the signal strength, but they are kept constant throughout our investigation to reduce possible bias from implementation and practical experience. On the other hand, the transmit power does affect signal strength, so it is vital to keep it constant throughout the experiments as a control variable. An independent investigation of the relationship between transmit power and RSSI is presented later in Subsection 4.3. After conducting several test runs with the number of samples ranging from 100 to 1000 to determine the number of samples required in each experimental run, we found that the 95% confidence intervals of the results stabilized and stayed constant between 200 and 1000 samples. Hence, for convenience, we decided to use 200 samples per experimental run. Each experiment took about six minutes per run because a packet error or loss can incur a fixed five-second timeout, which occurs more frequently when the distance between the transmitter and receiver is considerable.
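A brief sketch of the confidence-interval computation behind the sample-size choice above: it shows how the 95% confidence interval of the mean RSSI narrows as the number of samples per run grows. The RSSI distribution parameters here are synthetic placeholders, not the paper's measurements.

```python
# Sketch: 95% CI half-width of mean RSSI versus number of samples per run
# (synthetic data; the -75 dBm mean and 4 dB spread are assumed for illustration).
import numpy as np

rng = np.random.default_rng(0)

def ci95_half_width(samples: np.ndarray) -> float:
    """Half-width of the normal-approximation 95% CI of the sample mean."""
    return 1.96 * samples.std(ddof=1) / np.sqrt(len(samples))

for n in (50, 100, 200, 500, 1000):
    rssi = rng.normal(loc=-75.0, scale=4.0, size=n)
    print(f"n={n:4d}  mean={rssi.mean():6.1f} dBm  ±{ci95_half_width(rssi):.2f} dB")
```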
The other settings were chosen as the default values recommended by the manufacturer; these defaults, together with the parameters described above, constitute the critical settings of the wireless modules. As indicated earlier, all the ground modules are set up on a 70 cm high, round table, as shown in Figure 1(b). In addition, each module is connected to an Arduino Uno development board with the XBee Shield for Arduino for power, control, and data collection. With these general settings and metrics, we describe the details of the specific experimental setups in this section. 3.1. Indoor. We configure two XBee-PRO 900HP modules with XCTU and set them up inside a building. The transmitter module is plugged into a Windows laptop. A power bank battery powers the receiver. The transmitter outputs the raw data to the computer using two scripts, one of which is written in Processing [31]. The first script is an Arduino script that sends requests and synchronously reads RSSI values from the receiver until it collects the desired number of observations, which we set at 200. The second script is a Processing script that reads the raw data from the transmitter and stores it as a comma-separated values (CSV) file on the computer. We use the stored data to draw graphs and analyze the experimental results afterward. The transmitter is set up stationary on the first floor of the building while the receiver is mobile. The receiver is moved from a distance of 0 m to 50 m, in steps of 10 m, away from the transmitter. The maximum distance is 50 m because the PLR exceeds 10% and increases significantly after 50 m (starting at 60 m) for our modules. This is a typical range limit for most current IoT wireless modules. Since the modules are in common areas of the building, the ground-truth direct distance between the modules is calculated approximately by measuring the floor's dimensions. The direct distance means the straight-line distance from the transmitter to the receiver along the building walls. Since the indoor test is for understanding the effect of reflection, scattering, diffraction, and the multi-obstacle environment, we consider the approximate distance measurements acceptable. The place is a long hallway, 3 meters wide with walls on both sides, and with no significant obstacles in the way. There are objects along the walls but none directly blocking the transmitter and the receiver. We avoided direct obstacles between the transmitter and the receiver in all experimental setups. The modules were both placed on a round table. Existing indoor wireless networks were surveyed to identify potential interference. To the best of our knowledge, only conventional Wi-Fi networks (no Wi-Fi HaLow/IEEE 802.11ah networks that use the 900 MHz band) were present inside the buildings besides cellular networks. Our focus is the Sub-1 GHz wireless network, so the conventional Wi-Fi networks, whose frequency bands are at 2.4 GHz and 5 GHz, would not interfere much with our experiments. Also, to avoid cellular interference from cell phones, the tests were performed during weekends when only a few people were present in the building. Therefore, we consider the interference from other wireless networks to be minimal and acceptable for the experiments. 3.3. Ground-to-Drone (G2D). In this experiment, with the transmitter staying on the same table, the receiver is in the air, attached to a drone (see Figure 1(a)). As shown in Figure 3, 200 m was the height limit for the drone. The drone with the receiver first flies vertically until it reaches 200 m. When it hits the height limit, it begins to fly horizontally until the slant distance reaches 600 m. We collect the metric data at every 50 m of direct distance between the ground transmitter and the drone receiver. The transmitter collects the results on a computer during the tests. 3.4. Drone-to-Drone (D2D). In the D2D setup, we attach the transmitter to one drone and the receiver to another drone. As shown in Figure 4, the ground module initiates the experiment and then collects the results. The transmitter starts working according to the command from the ground module. The Arduino script of the transmitter synchronously sends repeated requests to the receiver and collects RSSI values. To avoid any possible interference with the experiments, the transmitter sends the RSSI values to the ground module for storage only after the experiment run is complete.
Finally, the ground module saves the collected data.
Experimental Results
This section presents the experimental results and discusses the potential factors affecting Sub-1 GHz wireless performance in real-world environments. As discussed earlier, Sub-1 GHz networks provide a more extended range and broader coverage and consume less power than higher frequency networks, such as 2.4 GHz. However, the long range requires a clear line of sight [32] or a clear Fresnel zone [33,34]. Otherwise, the range is severely reduced by a non-line-of-sight transmission path with obstructions, as shown in our experiments and in the observations reported in [35]. Moreover, Sub-1 GHz networks are also known for their capability of large-scale deployments with many connected devices. The performance of such deployments usually correlates with the network protocols and the hardware configuration. The impact of network size is out of this paper's scope and can be found in [22]. Generally, the experiments show that wireless signal propagation is affected by the characteristics of the medium of the transmission path (distance, path, and surroundings) [36], configuration, and qualitative conditions, such as weather and installation. Qualitative factors are difficult to quantify but cannot be neglected in real-world experiments. As mentioned earlier, we categorize them into four categories: path, configuration, weather, and installation. In the following sections, we examine and present the factor effects in the context of the line of sight (path), Fresnel zone (path), configurable parameters (configuration), weather, and installation, along with the overall extent of influence. Line of Sight. The line of sight represents the medium through which the wireless signal travels from the transmitter to the receiver antennas. Wireless signals change characteristics as they propagate through the line of sight [32]. These changes come along with the distance between the transmitter and receiver. They also depend on variations in the surroundings (buildings, trees, vehicles, and people). Simultaneously, reflection and absorption along the path accelerate the changes. It is hard to generalize a realistic channel model due to the continuous variation of real-world environments. Hence, many studies use empirical models built on measurements in different real-world situations [37,38]. As stated in [36], the leading influencers of the line of sight are path loss, multipath, and shadowing, which were demonstrably reflected in the experimental results. As shown in Figure 5, RSSI values decrease as the distance increases and increase as the environments become more spacious (from indoor to D2D). When the signal travels further, it becomes weaker. On the other hand, the space correlation reflects multipath and shadowing effects, such as reflection, absorption, diffraction, and scattering. These effects attenuate and interfere with the signal further along the path. When the environment becomes more complicated (from D2D to indoor), the signal strength becomes weaker, even if the modules' distance is the same. Similarly, the furthest distance with no packet loss increases in the same way, from 30 m (indoor) to 600 m (D2D), as shown in Figure 6. From Figures 5 and 6, we see that the signal travels the worst in the indoor environment. The street and open field scenarios are slightly better than indoor. The G2D scenario is slightly worse than D2D, and D2D (high in the air) is the best.
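To illustrate the combined effect of distance and environment clutter just described, the sketch below evaluates a simple log-distance path-loss model with different path-loss exponents standing in for the different scenarios. The exponent values and the 1 m reference loss are assumed for illustration only and are not fitted to the paper's measurements.

```python
# Illustrative log-distance path-loss sketch (parameters assumed, not fitted to the data).
import math

def rssi_dbm(distance_m, tx_power_dbm=24.0, pl_1m_db=31.7, n=2.0):
    """Received power = Tx power - PL(1 m) - 10*n*log10(d); PL(1 m) ~31.7 dB at 915 MHz."""
    return tx_power_dbm - pl_1m_db - 10.0 * n * math.log10(distance_m)

# Larger exponents stand in for more cluttered environments.
for label, n in [("free-space-like (D2D)", 2.0), ("open field / street", 2.7), ("indoor", 3.5)]:
    print(label, [round(rssi_dbm(d, n=n), 1) for d in (10, 50, 100, 300)])
```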
For the indoor scenario, 50 m is the furthest distance. In the street and open field scenarios, the operating distance of the wireless modules is also limited. They can barely communicate with each other once the distance exceeds 350 m. In the G2D test, the signal reaches 600 m with 8.6% PLR, while in the D2D experiment, communication can be maintained up to 700 m with 10% PLR. To determine the PLR threshold for our experiments, we conducted the experiments until the successful transmission rate reached 90%, after which we found that the PLR increases rapidly from below 10% to about 90%. This means the link has almost reached the limit of the communication range for the devices. Also, based on our experience and conclusions in the literature, such as [39], we believe that a PLR above 10% is undesirable. In most such cases, additional measures are needed to ensure minimum performance (e.g., data retransmission to ensure data reception). Therefore, we set 10% PLR as the threshold for our experiments. As shown in Figure 6, the indoor experiment begins to have packet losses and failures after 30 m. In the street and open field scenarios, the furthest distances with no packet loss are 250 m and 150 m, respectively, much less than those in the air. No packet loss is detected until 450 m and 600 m in the G2D and D2D scenarios, respectively, much better than the indoor, street, and open field scenarios. As expected, the fewer objects around, the better the performance. In addition, the closer the receiver is to the transmitter, the less chance of losing a packet or encountering a transmission failure. In other words, a simpler environment with fewer obstacles provides more stable conditions for the wireless signal. Finally, according to the 95% confidence interval bars shown in Figure 5, wireless performance is generally more stable in the G2D and D2D scenarios because their results have lower variance (narrower confidence intervals) at all distances. Fresnel Zone. A Fresnel zone is an ellipsoid region of space between and around a transmitter and a receiver [33,34]. Transmitted radio, sound, or light waves can follow slightly different paths before reaching a receiver, especially if there are obstructions or reflecting objects between the two ends. As shown in Figure 5, the signal propagates best in the D2D scenario. This conforms to how the wireless signal travels in the Fresnel ellipsoid region: the fewer obstacles in the ellipsoid zone, the better the signal propagation. Even though there is a direct line between the modules and not many objects around in the open field experiment, the RSSI value is still not significantly higher than in the street experiment, from which we speculate that the signal is also reflected and shadowed by the ground. As shown in Figure 7, the Fresnel zone intersects with the ground in the street and open field scenarios. This also happens in the G2D scenario, although its effect is not as large as in the street and open field scenarios. We compare the midpoint radii of the first Fresnel zones at the various testing distances with the height of the wireless modules in the street and open field tests in the following paragraphs. The midpoint radius r_n of the nth Fresnel zone can be calculated by Equation (1) [33]: r_n = sqrt(n * λ * d_T * d_R / (d_T + d_R)), where d_T is the distance between the midpoint and the transmitter, d_R is the distance between the midpoint and the receiver, and λ is the wavelength of the wireless signal. We used meters (m) as the distance unit for both d_T and d_R. We know that the frequency band range of the experimental modules is 902-928 MHz; thus, the wavelength λ should be approximately 0.32 m to 0.33 m according to Equation (2), the wavelength formula λ = v/f, where v is the signal velocity or phase speed of the signal wave and f is the frequency. We assume that the signal velocity equals the speed of light in free space, which is 3 × 10^8 m/s. Combining Equations (1) and (2), we calculate the midpoint radius of the first Fresnel zone for each experimental distance, as shown in Table 1. Table 1 shows that the experimental Fresnel zones' midpoint radii exceed the table height (0.7 m) in all experiments. Thus, in the experiments set up on the table, the first Fresnel zones of the transmitter and the receiver all intersected with the ground. This explains the experimental result that the performance in the air was better than that on the ground, no matter what the surroundings were.
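The sketch below evaluates the first-Fresnel-zone midpoint radius from Equations (1) and (2) at a few link distances and compares it with the 0.7 m table height; 915 MHz is taken as the centre of the 902-928 MHz band, and equal transmitter-midpoint and midpoint-receiver distances are assumed.

```python
# Sketch of Equation (1) with n = 1: first-Fresnel-zone radius at the link midpoint.
import math

C = 3e8                  # assumed signal velocity (speed of light), m/s
FREQ = 915e6             # centre of the 902-928 MHz band, Hz
WAVELENGTH = C / FREQ    # ~0.33 m (Equation (2): lambda = v / f)

def fresnel_midpoint_radius(link_distance_m: float, n: int = 1) -> float:
    """r_n = sqrt(n * lambda * d_T * d_R / (d_T + d_R)), evaluated at the midpoint."""
    d_t = d_r = link_distance_m / 2.0
    return math.sqrt(n * WAVELENGTH * d_t * d_r / (d_t + d_r))

for d in (50, 100, 200, 350, 600):
    r = fresnel_midpoint_radius(d)
    status = "intersects ground" if r > 0.7 else "clear of the table height"
    print(f"{d:4d} m: r1 = {r:.2f} m ({status})")
```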
Configurable Parameter. Manufacturers of wireless modules progressively release their products with software-defined parameters, giving customized functionality to users who want more flexibility. With different settings of particular parameters, the deployed modules may show significant differences in performance. In the open field experiment, we set different transmit power levels, from 7 dBm to 24 dBm, for both modules at each distance. As shown in Figure 8, settings with higher power levels provide better performance than the lower ones. However, a higher power level causes a higher power drain. There is a trade-off between performance and power consumption for this setting. In other words, setting a higher power level gives the module better immunity to interference, but it costs more energy than a lower power level. Furthermore, at lower transmit power levels and the same distance, the link is more likely to experience packet errors or losses when moving objects such as people or pets come across [40]. We note that the other scenarios show similar results for the different power levels, but we show only one scenario for simplicity. As indicated earlier, this paper's experiments were conducted without moving objects, people, or animals between the transmitter and receiver. Configurable parameters of the wireless modules peculiar to certain manufacturers also affect the performance at different levels. For example, the channel mask parameter of Digi modules allows the user to select different frequencies from a set of supported ones. This reduces the chance of using interfering frequencies by averting interference among modules using the same frequency band. Proper settings of these kinds of configurable parameters may provide better performance. Parameters such as the preamble ID, the number of packets used to ensure reception, the number of packet delivery attempts, sleep mode, and wake time have similar effects. Mainly, preamble IDs help avoid interference among networks operating in the same radio frequency band. Lastly, interference between neighboring nodes can also affect wireless performance, which can be mitigated by employing an interference-aware routing protocol, as stated in [19].
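As a rough illustration of the transmit-power trade-off discussed above, the sketch below picks the lowest configured power level that still leaves a fade margin above an assumed receiver sensitivity, using an illustrative log-distance path-loss model. The sensitivity, margin, and path-loss values are assumptions, not figures from the paper or the module datasheet.

```python
# Sketch of the power-versus-reliability trade-off: choose the lowest transmit power
# level that still closes the link with a margin. All numbers are illustrative.
import math

SENSITIVITY_DBM = -101.0   # assumed receiver sensitivity
MARGIN_DB = 10.0           # assumed fade margin (rain, movement, multipath)

def path_loss_db(distance_m, n=2.7, pl_1m_db=31.7):
    """Illustrative log-distance path loss with exponent n."""
    return pl_1m_db + 10.0 * n * math.log10(distance_m)

def lowest_sufficient_power(distance_m, levels_dbm=(7, 10, 14, 18, 21, 24)):
    for p in levels_dbm:
        if p - path_loss_db(distance_m) >= SENSITIVITY_DBM + MARGIN_DB:
            return p
    return None  # no configured level closes the link with the chosen margin

for d in (50, 100, 200, 350):
    print(f"{d:4d} m -> {lowest_sufficient_power(d)} dBm")
```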
Weather and Installation Parameters. In addition to the propagation path and configuration effects, wireless performance is also affected by weather and installation parameters. Some of these parameters may not directly change the wireless module or the line of sight but may affect other pieces of equipment working with the modules. For example, the temperature may influence the battery discharge and consequently affect the emitted wireless signal [41]. We did not find any suspicious influence of the battery in the experiments reported here, but it happened in our previous experiments with LoRaWAN [24]. Similar effects may exist in other pieces of hardware equipment. As stated in [42], weather conditions affect wireless performance at different levels. Different forms of precipitation, such as rain and snow, have the most significant influence because the wireless signal can be absorbed and interfered with by raindrops and snow [43]. The signal will be reflected, scattered, and obstructed during propagation. We conducted experiments on a rainy day and on a sunny day with the D2D setup. As shown in Table 2, the mean RSSI value is much lower on a rainy day than on a sunny day. However, it is worth noting that while the drone was flying on the rainy day, it was hard to keep it stationary in the air. The drone's slight movements could be a confounding factor contributing to the modules' worse behavior on the rainy day. Based on the rainy-day experiments, we speculate that other forms of precipitation (hail, snow, sleet, or thunderstorms) have similar effects on the wireless modules. In addition to the different forms of precipitation, which directly interfere with the electromagnetic waves, temperature and humidity are also potential factors that indirectly affect wireless performance. As indicated earlier, the temperature may not directly change the propagation path or the signal. Instead, it may affect other hardware equipment, which could indirectly cause performance differences. By conducting experiments under different outdoor temperatures, we find that low temperature is more likely to cause unstable wireless performance. We used a lithium-ion power bank in the experiments. Its permissible discharge temperature is -20°C to 65°C, and the operating temperature of the XBee-PRO 900HP module is -40°C to 85°C. Tests at -3°C and at 15°C should therefore have similar results. However, as shown in Table 2, the results at -3°C (clear sky at night) have a higher standard deviation than the results at 15°C at the same distance, 100 m. We also find from the experiments that humidity affects wireless performance more in long-range than in short-range communication. As shown in Table 2, the standard deviation of the RSSI values on a foggy day at a 500 m distance is much higher than that at a distance of 100 m in the D2D setup. The weather-related factors are confounded with each other, and thus the experimental results for one factor may also be affected by other factors. We tried our best to keep the variables as consistent as possible, but this difference is inevitable in real-world experiments. Thus, weather-related factors are considered qualitative parameters in this research. Additionally, wireless performance is also affected by the transmitter's or receiver's mobility [18,24,43]. We conducted a new experiment with the receiver on a constantly flying drone and compared it with the original stationary case. As shown in Table 2, the moving receiver has a lower RSSI value than a stationary receiver. Due to the Doppler shift [44], when either end or both ends of the end-to-end network are moving, the frequency of the received signal changes.
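For scale, the sketch below estimates the maximum Doppler shift for a receiver moving at roughly the drone speed quoted below against a 915 MHz carrier; the geometry (angle between the velocity vector and the line of sight) is not modelled, so this is an upper-bound illustration rather than a measurement.

```python
# Quick Doppler-shift estimate: f_d = v * f / c (purely radial motion assumed).
C = 3e8          # propagation speed, m/s
FREQ = 915e6     # carrier frequency, Hz (centre of the 902-928 MHz band)

def doppler_shift_hz(speed_m_s: float, freq_hz: float = FREQ) -> float:
    return speed_m_s * freq_hz / C

print(f"~{doppler_shift_hz(5.0):.1f} Hz maximum shift at 5 m/s")   # ≈ 15 Hz
```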
The speed in the moving receiver experiment is about 5 m/s. The receiver was attached to a drone, and the constant speed was set along with the drone waypoints. We want to point out that there was no appreciable wind on the ground during the experiment, but the wind speed could be different high in the air. The hardware installation also subtly affects signal propagation via the positions of specific devices [24,45,46]. For example, wireless modules can be installed on top of or below the drone carrier. Accordingly, in the G2D experiment we did tests with the wireless module installed in different positions, on top or at the bottom of the drone. The two tests were done consecutively to avoid weather influences as much as possible. In both cases, the antenna on the drone module pointed downward. As shown in Table 2, modules installed on top of the drone perform slightly worse than those installed below it. Theoretically, the antenna direction should not affect the signal strength in our experiments because we use omnidirectional antennas. Still, in practice this is not always the case, for example, in the G2D experiments. As shown in Table 2, the air module's antenna pointing upwards gets weaker signal strength than when it is pointing downwards. We speculate that this is caused by metal objects in the vicinity of the antennas, such as the battery and the drone body. Overall, from the above experiments, we see that the qualitative conditions impacting outdoor wireless performance include different forms of precipitation (rain, hail, snow, sleet, or thunderstorms), temperature, humidity, and installation, along with module mobility. Extent of Influence. In this subsection, we explore how (to what extent) each of the factors discussed above influences wireless performance. We quantify the degree of influence by comparing the RSSI differences under two conditions: condition 1 (C1) and condition 2 (C2), which are recorded in Table 3. This table's data are from the experiments where the distance between the transmitter and receiver is 100 m. In the last column of Table 3, the extent of influence is calculated by dividing the difference (Diff) between the RSSI values of condition 1 (RSSI_C1) and condition 2 (RSSI_C2) by the RSSI value of condition 2, i.e., Influence = (RSSI_C1 - RSSI_C2) / RSSI_C2. As shown in the table, rainy weather has a significant influence on performance. 4.6. The Checklist. All the tested factors considered in this work and their effects are summarized in the four categories shown in Table 4:
- Path / Fresnel zone: When a wireless signal travels in the Fresnel ellipsoid region, the fewer the obstacles in the ellipsoid zone, the better the signal travels.
- Configuration / Transmit power level: The hardware configuration parameters can significantly affect signal strength. A higher power level results in higher signal strength, but it costs higher power consumption. A trade-off between signal strength and power consumption should be considered. Other configurable parameters may have similar influences.
- Weather / Precipitation: Precipitation forms such as rain, hail, snow, sleet, or thunderstorms have significant impacts on wireless signals because the signals can be absorbed and interfered with. In our experiments, the mean RSSI value is much lower on a rainy day than on a sunny day. Similar effects are expected for other forms of precipitation.
- Weather / Humidity: Humidity affects wireless performance more in long-range communication than in short-range. With the same setup of the D2D experiment, the standard deviation of the RSSI values on a foggy day at a distance of 500 m was much higher than at a range of 100 m.
- Weather / Temperature: Hardware operation is presumably affected by temperature, particularly when the hardware runs outside its nominal operating temperature range. From our experiments, we conclude that the influence of temperature depends on the operating temperature range and the durability of the battery. Similar effects may exist in other pieces of hardware equipment.
- Installation / Mobility: According to the Doppler shift effect, the movement of the wireless modules will affect signal propagation. The frequency of the received signal may change as the modules move. As shown in Table 2, the moving receiver has a lower RSSI value than a stationary receiver.
- Installation / Module position: The installed position of hardware components subtly affects wireless signal propagation. As shown in Table 2, modules installed on top of the drone perform slightly worse than those installed below it.
- Installation / Hardware component: Other than the temperature, which affects hardware pieces, the hardware components might have their own influences. As discussed in Subsection 4.4, the battery charge status and its durability would affect the emitted signal and influence the wireless performance.
Conclusion
Real-world environments can be highly dynamic and complex for deployments of Sub-1 GHz IoT wireless networks. We conducted many different types of experiments, comprising wireless modules on the ground and in the air, to study real-world factors affecting wireless performance. These experiments include five different scenarios: indoor, street, open field, G2D, and D2D. We recorded factors that potentially cause the non-reproducibility of real-world deployments. In particular, factors affecting wireless performance in Sub-1 GHz networks are classified into four categories: path (distance, obstruction), configuration (transmit power level), weather (precipitation, temperature, and humidity), and installation (IoT module mobility status and position). We found that line of sight and precipitation have much higher degrees of influence than the other factors. Temperature, humidity, and module mobility have moderate impacts. The hardware components and their installed position have a relatively slight effect. Finally, we made a checklist out of these factors to help potential future experimenters. We believe that this checklist would be constructive not only to academics but also to industrial practitioners whose work involves low-cost IoT wireless modules. As future work, open Sub-1 GHz standards such as IEEE 802.15.4g and open-source hardware platforms such as the Zolertia REmote and OpenMoteB [47] could be included to further validate the checklist for completeness and to compare different Sub-1 GHz hardware and protocols.
Data Availability
The module configurations and experimental data used to support the findings of this study can be accessed at http://www.cse.wustl.edu/~jain/sub1ghz.
9,595.6
2021-06-21T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Crosstalk between basal extracellular matrix adhesion and building of apical architecture during morphogenesis ABSTRACT Tissues build complex structures like lumens and microvilli to carry out their functions. Most of the mechanisms used to build these structures rely on cells remodelling their apical plasma membranes, which ultimately constitute the specialised compartments. In addition to apical remodelling, these shape changes also depend on the proper attachment of the basal plasma membrane to the extracellular matrix (ECM). The ECM provides cues to establish apicobasal polarity, and it also transduces forces that allow apical remodelling. However, physical crosstalk mechanisms between basal ECM attachment and the apical plasma membrane remain understudied, and the ones described so far are very diverse, which highlights the importance of identifying the general principles. Here, we review apicobasal crosstalk of two well-established models of membrane remodelling taking place during Drosophila melanogaster embryogenesis: amnioserosa cell shape oscillations during dorsal closure and subcellular tube formation in tracheal cells. We discuss how anchoring to the basal ECM affects apical architecture and the mechanisms that mediate these interactions. We analyse this knowledge under the scope of other morphogenetic processes and discuss what aspects of apicobasal crosstalk may represent widespread phenomena and which ones are used to build subsets of specialised compartments. Introduction A wide range of mechanisms allow cells to transition from simple cuboidal structures into complex shapes that serve various functions. These mechanisms extend from subcellular, autonomous processes to tissue-scale rearrangements, and they involve complex architectures that stabilise shape changes. Executors of shape changes are, for instance, polarised actomyosin networks that induce apical constriction during Drosophila melanogaster gastrulation, or that drive the formation of a leading edge to allow cell migration (Figueiredo et al., 2021;Kolsch et al., 2007;Martin et al., 2009). These changes must be stabilised, and this is achieved through interactions with the extracellular matrix (ECM). How cells establish apicobasal polarity depends largely on bECM interactions; therefore, these influence other aspects of cell physiology like vesicle transport and bulk membrane delivery (Denef et al., 2008;Devergne et al., 2014;Mathew et al., 2020;Vanderploeg et al., 2012;Yamazaki et al., 2016). bECM also influences cell shape changes, but how apical remodelling is mechanically coupled to bECM adhesion and how morphogenetic forces are transmitted between the two cellular domains are not fully resolved. Pioneering works have shown that apicobasal crosstalk and coordination are critical during morphogenesis, for example, for epithelial cell reorganisation during Drosophila germ-band elongation, and for endoderm invagination during ascidian development (Sherrard et al., 2010;Sun et al., 2017). But the subcellular players that mediate this coordination vary significantly across model systems. The actin cytoskeleton and MTs are generally the key players mediating these interactions, with varying contributions, organisation, and regulatory modules. Here we review these issues by looking at two well-characterised morphogenetic processes that take place during Drosophila development: tracheal subcellular tube formation and apical oscillations of amnioserosa cells during dorsal closure. 
In the former, a wide landscape of mechanisms contributes to coordinate apical remodelling with bECM attachment, whereas in the latter, mechanisms of such crosstalk are less obvious. With these models in mind, we analyse how apicobasal crosstalk is established in other morphogenetic processes to identify the more general aspects of interaction between the two compartments. Apicobasal growth coordination during subcellular tube formation The complex morphology of the Drosophila tracheal system is very sensitive to perturbations, allowing straightforward identification of genes involved in its development. The finest tubes of the system lie at the tips of tracheal branches and are built by so-called terminal cells (Samakovlis et al., 1996). These subcellular tubes form by invagination of the apical plasma membrane of the terminal cells, allowing the tracheal lumen to grow inwards as the cell elongates (Fig. 1; Gervais and Casanova, 2010). The cell and its tube grow at very similar rates, making tracheal terminal cells particularly useful to study mechanisms of interaction between the apical and basal plasma membranes. Also, the mechanisms of terminal cell lumenogenesis have been shown to operate in the building of other tubular structures across the animal kingdom, such as the Caenorhabditis elegans excretory cell and vascular capillaries in vertebrates (Abrams and Nance, 2021;Bär et al., 1984;Cohen et al., 2020b;López-Novoa and Bernabeu, 2010). De novo subcellular tube formation initiates in the embryo, and during larval development, the cell and its tube continue branching, forming ∼25 branches by the third larval instar. Each branch consists of a cellular protrusion containing a single ramification of the subcellular tube. Together, embryonic and larval studies provide us with a robust view of the cellular elements that coordinate apical and basal membrane extension. Failures in proper apicobasal interactions manifest as an uncoupling in the growth of the two membrane compartments, ranging from absent subcellular tubes to tubes curling within cytoplasmic branches (Jones et al., 2014;Levi et al., 2006;Schottenfeld-Roames et al., 2014). These phenotypes allow us to understand how different mechanisms come together to allow proper subcellular tube formation. Initial subcellular tube formation is driven by centrosomes near the apical compartment, which organise MTs to drive apical membrane invagination (Ricolo et al., 2016). Throughout subcellular tube growth, MTs run parallel to the tube with plus ends towards the growing tip of the cell (Gervais and Casanova, 2010;Schottenfeld-Roames and Ghabrial, 2012). Coordinated growth requires the interplay of actin and MTs; while actin is organised into different networks throughout the cell, MTs link these actin pools to coordinate their behaviour. Therefore, disrupting MTs by expression of the MT-severing protein Spastin uncouples directed growth of the apical compartment and results in guidance defects (Gervais and Casanova, 2010). Disrupting actin organisation in any subcellular compartment also prevents coordinated apicobasal growth (Gervais and Casanova, 2010;JayaNandanan et al., 2014;Levi et al., 2006;Okenve-Ramos and Llimargas, 2014;Ríos-Barrera and Leptin, 2021 preprint). Organisation of the actin cytoskeleton requires the function of several proteins recruited at specific compartments.
The apical actin cortex requires two pathways: one that depends on Diaphanous (Dia), a crosslinker of the formin family (Massarwa et al., 2009;Rousso et al., 2013), and one that uses the Ezrin-Radixin-Moesin (ERM) protein Moesin (Moe) along with its regulators Bitesize (Btsz), a synaptotagmin-like protein that recruits active Moe apically, and Slik, a kinase that mediates Moe activation (Ukken et al., 2014). In addition to the apical actin cortex, subcellular tube guidance depends on the basal plasma membrane. In this compartment, actin interacts with integrins and Talin, which presumably link the cell cytoskeleton to the bECM (Levi et al., 2006). Another actin-based structure involved in subcellular tube formation lies in the space between the tip of the subcellular tube and the cell-growth cone, and it is referred to as the actin core (Gervais and Casanova, 2010;Okenve-Ramos and Llimargas, 2014;Oshima et al., 2006). We recently showed that late endosomes are responsible for the formation of this structure by promoting actin nucleation through Wash, a member of the Wiskott-Aldrich syndrome protein (WASP) family (MacDonald et al., 2018;Mathew et al., 2020;Nagel et al., 2017;Ríos-Barrera and Leptin, 2021 preprint). Late endosomes and the associated actin core precede subcellular tube growth and branching, and inducing their mislocalisation affects directed apical membrane growth. Therefore, it is likely that endosome-associated actin ahead of the tube is required to coordinate subcellular tube and cell growth (Ríos-Barrera and Leptin, 2021 preprint). This model agrees with recent work from Ricolo and Araujo (2020), who have shown that Shortstop (Shot), the only member of the Spectraplakin family in Drosophila, is recruited to the different actin pools in the cell, including the actin core, a function essential for proper tube formation. Shot crosslinks actin and MTs and is therefore responsible for bridging the distinct actin pools within the cell through MTs, stabilising subcellular tube growth and coupling it with cell elongation (Ricolo and Araujo, 2020; Fig. 1). Together, the data discussed so far explain how actin pools and MTs collaborate to guide subcellular tube elongation with respect to the growing basal plasma membrane. Besides actin and MTs, the apical ECM (aECM) is also required to coordinate apicobasal behaviour. Affecting the synthesis of chitin, one of the major constituents of the aECM, leads to severe tube discontinuities and tortuosities in larval terminal cells (Rosa et al., 2018), but the intracellular mediators of these effects have not been identified. Other prominent components of the aECM are proteins bearing Zona Pellucida domains, also referred to as ZP proteins, and several of these have been studied in tracheal cells (Bökel et al., 2005;Caviglia et al., 2016;Jaźwińska et al., 2003;Sakaidani et al., 2011). Terminal cells express at least two of these: Dumpy (Dp), a secreted molecule, and Piopio (Pio), a transmembrane protein that can bind to MTs (Bökel et al., 2005;Rios-Barrera et al., 2017). However, even though these molecules are essential for the morphogenesis of other tissues (discussed below), knockdown experiments suggest that they are not required for subcellular tube formation in terminal cells (Rios-Barrera et al., 2017). In conclusion, current evidence suggests that terminal cells possess redundant mechanisms to ensure a robust interaction between the apical and basal plasma membranes.
This is reflected by the involvement of multiple cytoskeletal crosslinkers and ECM regulators that may provide at least partial redundancy to ensure proper apicobasal coordination (summarised in Fig. 1C-C′). The bECM and its impact on apical shape oscillations in the amnioserosa during dorsal closure Dorsal closure is a morphological rearrangement taking place halfway during Drosophila embryogenesis. In this process, two opposing lateral epidermal sheets stretch and meet at the dorsal midline of the embryo, covering an eye-shaped epithelium called amnioserosa (Fig. 1). Originally considered passive players of dorsal closure, the cells of the amnioserosa are now known to generate the forces that promote stretching of the epidermis (Pasakarnis et al., 2016;Scuderi and Letsou, 2005;Wells et al., 2014). At the onset of dorsal closure, amnioserosa cells show a stochastic, pulsatile behaviour driven by apical actomyosin networks (David et al., 2010;Solon et al., 2009). Apical actomyosin accumulation leads to an acute reduction in the cell apical surface; upon resolution of the actomyosin foci, cells relax and expand apically (Blanchard et al., 2010;Duque and Gorfinkiel, 2016). Since these pulses are also asynchronous, during this phase the amnioserosa shows no net change in its area. In a second phase, actomyosin pulses remain stochastic but they gradually produce a net decrease in the apical surface of the whole tissue, a process that coincides with the stretching of the epidermis (Solon et al., 2009;Sumi et al., 2018). Therefore, as closure proceeds, interactions within cells of the amnioserosa through adherens junctions, and between the amnioserosa with the bECM and the epidermis, are critical for proper force propagation and coordination (Flores-Benitez and Knust, 2015;Goodwin et al., 2016;Jurado et al., 2016;Narasimha and Brown, 2004). The amnioserosa lies on top of the yolk cell, and attachment between them via integrins and a bECM rich in laminin is essential for proper dorsal closure (Narasimha and Brown, 2004). Increasing or decreasing amnioserosa cell adhesion to the bECM alters the rate of dorsal closure because both conditions perturb the optimal propagation of forces within the amnioserosa. Artificially increasing tension by expressing an overactive form of Talin results in decreased shape oscillations, low force transmission to neighbouring cells, and in consequence, inefficient epithelial remodelling. Conversely, cells that lack β-Integrin show very pronounced apical surface oscillations but the forces these cells generate are propagated to more cells than in control embryos, albeit inefficiently (Goodwin et al., 2016). Cells mutant for β-Integrin also show aberrant E-Cadherin distribution and turnover, together with altered apical actomyosin dynamics that explain the defects in apical pulsations (Goodwin et al., 2017;Jurado et al., 2016;Meghana et al., 2011;Saravanan et al., 2013). Given that E-Cadherin also interacts with the actin cytoskeleton and that it is required for proper force transmission across cells, both components, cell-cell and cell-ECM adhesion should be coordinated to allow amnioserosa cell pulsation. In contrast to subcellular tube formation, what mediates the crosstalk between the basal plasma membrane and the apical compartment in amnioserosa cells is not immediately obvious. While in tracheal terminal cells many mediators of this crosstalk have been described, these do not seem to be required for amnioserosa shape changes. 
This might be explained by the topological differences between amnioserosa and tracheal terminal cells. Amnioserosa cells are large, almost flat cells, with outspread apical and basal surfaces. In contrast, tracheal terminal cells completely invaginate their apical compartment to build a subcellular tube. Amnioserosa cells concentrate most of their MTs at the apical cortex, parallel to the plasma membrane. There, they favour the organisation of actomyosin pulses, suggesting they support apical membrane organisation rather than apicobasal coordination ( Fig. 1B; Guru et al., 2021 preprint;Meghana et al., 2011;Pope and Harris, 2008). Further support of this comes from perturbation experiments; if MTs mediate the apicobasal coordination in amnioserosa cells, altering MT dynamics should result in similar phenotypes as the ones caused by loss of integrins. However, preventing MT growth by expression of a dominant negative form of EB1 or by overexpression of Spastin leads to the opposite effect, with decreased apical shape fluctuations and actomyosin pulses (Guru et al., 2021 preprint). Shot, one of the main regulators of apicobasal crosstalk in tracheal terminal cells, participates in dorsal closure by regulating filopodia formation in the epidermal leading edge cells, consistent with the polarised distribution of MTs in these cells (Gomez et al., 2016;Takács et al., 2017), but currently no role for Shot in the amnioserosa has been reported. Besides forming actomyosin pulsing networks, the apical domain of amnioserosa cells is shielded by a persistent actin cortex that is required for proper apical oscillations (Dehapiot et al., 2020). The actin crosslinker Formin-like (Fmnl) organises this cortex, which coexists with actomyosin pulses throughout dorsal closure. Knocking down Fmnl results in wider actomyosin pulses and greater apical area oscillations, a response that is also seen upon loss of bECM attachment. In addition, similar to increasing bECM adhesion, overexpression of Fmnl leads to reduced apical pulsations (Dehapiot et al., 2020;Goodwin et al., 2016). These results argue in favour of cortical actin directly mediating the interaction between the apical and basal plasma membranes. Other studies have shown that changes in tension during apical oscillations alter actin dynamics, for instance, at cell junctions to ensure their integrity (Hara et al., 2016;Sumi et al., 2018). Apicobasal crosstalk could therefore be regulated by stable actin networks at the apical and basal plasma membranes that are then crosslinked by junctional actin, which is constantly buffered by mechanosensation. In agreement with this, altering Moesin apical recruitment greatly affects actomyosin organisation in the apical compartment and at the adherens junctions, leading to higher amplitude actomyosin pulses (Flores-Benitez and Knust, 2015). Altogether, these results suggest that cortical and junctional actin pools communicate with each other, allowing coordination of apicobasal behaviour. Other mechanisms of apicobasal crosstalk Subcellular tube formation has shown us that multiple pathways can collaborate to ensure proper apicobasal coordination, but as illustrated by amnioserosa apical oscillations, these mechanisms can vary significantly in other contexts. In the following, we will summarise the crosstalk principles that have been described in other morphogenetic rearrangements and the molecules that mediate them. 
Crosslinking actin cortices through MTs In many morphogenetic events in Drosophila and other organisms, non-centrosomal MTs connect local cell-shape changes to distant positions through specific adaptors (Lee and Harland, 2007;Yano et al., 2021). Many tissues orient MTs with minus ends toward the apical compartment and the plus ends toward the basal, and this organization is held through MT-binding adaptors ( Fig. 2A). One of the very versatile MT regulators is Shot, which depending on the context, gets recruited basally, apically, or in both compartments, and depending on its binding partners, it can interact with MT plus or minus ends. Shot recruitment at MT minus ends is mediated by Patronin, and its plus end localization is mediated by EB1 (Applewhite et al., 2010;Booth et al., 2014;Ghislain et al., 2021;Goodwin and Vale, 2010;Lee et al., 2016;Mimori-Kiyosue et al., 2000;Molines et al., 2018;Nashchekin et al., 2016). Despite the different scenarios in which Shot can regulate MTs and their interaction with apicobasal compartments, its contribution to a given process can vary drastically. For instance, in late pupal wing development, MTs run along the apicobasal axis and they stabilise adhesion to the apical and basal ECMs. Shot is found at both ends, as it is in terminal cells. However, loss of Shot has no effect on MT architecture or epithelial morphology. Instead, in this tissue, MTs are stabilised through their interactions with Pio in the apical membrane and ILK and integrins in the basal domain (Akhtar and Streuli, 2013;Bökel et al., 2005). These results contrast with subcellular tube formation, where Shot seems to contribute more than Pio in mediating apicobasal interactions (Jazẃinśka et al., 2003;Ricolo and Araujo, 2020). Shot and Patronin organise MTs at the apical compartment in a range of processes, like in the formation of apical actin-based microvilli of follicle cells and in salivary glands, where they are required for proper tissue invagination (Booth et al., 2014;Ghislain et al., 2021;Khanal et al., 2016;Röper, 2012). In these models, MT apical anchoring is required to sustain novel architectures, but whether they interact with the basal plasma membrane is still unknown. There are processes that use MT apical anchoring to drive cellshape changes but do not seem to depend on Shot at all. This is seen for instance in dorsal fold formation during early development, where instead, Patronin organises an MT apical cortex that is required to initiate fold formation. In this process, instead of a transversal array, MTs form an apical dome that allows proper tissue folding (Takeda et al., 2018). Similarly, a morphogenetic furrow regulates proper photoreceptor organisation in the eye disc (Ready et al., 1976). This furrow is formed through apical recruitment of actomyosin which induces apical constriction, and apicobasal MTs are also required for the shape change. Loss of integrins affects MT organisation and furrow formation, but the elements that mediate MT anchoring to the apical and basal compartments have not been reported, and at least involvement of ILK has been ruled out (Corrigall et al., 2007;Fernandes et al., 2014). Organelle-mediated crosstalk mechanisms To coordinate apical and basal behaviour, some tissues have adapted mechanisms that involve intracellular relay points (Fig. 2B). Leg disc development uses apoptosis as pulling force to induce fold formation (Manjón et al., 2007). 
In this process, cells prompted to die first reduce their apicobasal axis, a process that requires nuclear repositioning toward the basal compartment and actin reorganization around the nucleus. The nucleus mediates the interactions between the apical and basal compartments; it interacts apically with an actomyosin network, and it is associated with F-actin and Talin/integrins on the basal compartment. Disturbing actomyosin by using laser cuts or affecting basal anchoring by removing Talin prevents proper force transmission from the apical to the basal compartment. Evidence for the involvement of the nucleus comes from experiments on Klarsicht, a KASH domain protein that connects the cytoskeleton to the nuclear envelope. Loss of Klarsicht prevents cell shortening, nuclear repositioning and fold formation, showing that both the nucleus and the actin around it are required for proper cell deformation (Ambrosini et al., 2019). Klarsicht also interacts with MTs, a role required for nuclear positioning in other cells like differentiated photoreceptors (Fischer et al., 2004;Mosley-Bishop et al., 1999). In salivary glands, Klarsicht favours collective cell migration by regulating MT organisation (Myat and Andrew, 2002;Myat et al., 2015). Klarsicht expression is enriched at the salivary gland placodes (Myat and Andrew, 2002). It is not known whether MTs are anchored basally during placode invagination, but it is an intriguing possibility that Klarsicht and the nucleus could have a role in this process, as they do during leg development. Late endosomes have various roles in migration and cytoskeletal organisation throughout metazoans (MacDonald et al., 2018;Palamidessi et al., 2008;Ramírez-Santiago et al., 2016;Schiefermeier et al., 2014). As mentioned for subcellular tube formation in tracheal terminal cells, late endosomes can also mediate interactions between distant plasma membrane domains by regulating the cytoskeleton (Ríos-Barrera and Leptin, 2021 preprint). Regulation of the cytoskeleton by endosomes has been documented in other processes: peripheral sensory neurons use endosomes as MT organising centres (MTOCs) at branching points, which allows MT growth towards dendritic termini. This function is mediated by Axin, a component of the Wnt signalling pathway that can interact with γ-Tubulin. Loss of Axin results in reduced branching, and targeting Axin to mitochondria is sufficient to induce MT reorganization (Weiner et al., 2020). Golgi outposts have also been proposed as MTOCs that mediate neuronal branching in Drosophila and mammalian systems (Du et al., 2021a;Fu et al., 2019;Ori-McKenney et al., 2012;Ye et al., 2007). However, the presence of Golgi and MT minus-end markers does not seem to correlate in various experimental setups, which has brought MT nucleation at Golgi outposts into question (Nguyen et al., 2014;Weiner et al., 2020). Further ultrastructural analyses or other studies should define whether Golgi outposts can indeed mediate MT nucleation. Also, more work is required to determine how widespread organelle-mediated apicobasal crosstalk mechanisms are. Direct apicobasal force propagation through cortical actin The actin cytoskeleton forms cortical arrays throughout the surface of the cell (Fig. 2C). Remodelling of these pools can also propagate forces to distant subcellular compartments promoting cell shape changes, as experiments in the amnioserosa suggest.
This has also been observed in larval wing disc development; first, during the transition from cuboidal to columnar shape, wing disc cells concentrate actomyosin in the apicolateral compartment, which decreases cortical tension basally. This allows cells to elongate in the apicobasal axis. In this process, severing MTs has no effect on the cuboidal-to-columnar transition, further reinforcing the relevance of cortical actin in the shape change (Widmann and Dahmann, 2009). Later, the wing epithelium folds to form a central pouch. Fold formation is again regulated by actomyosin redistribution. One of the folds is formed by basal relaxation of the actin cytoskeleton accompanied by bECM degradation, whereas the other fold is formed by increased lateral tension generated by actomyosin redistribution (Sui et al., 2018). Similar cortical actin rearrangements control the elongation of the pseudostratified epithelium of the zebrafish retina, which, together with proliferation, allows the whole tissue to grow (Matejčić et al., 2018). Cortical actin also participates in the elongation of the follicular epithelium, and in this case spectrins at both compartments are responsible for organising actin. Loss of basal spectrins or integrins prevents proper cuboidal-to-columnar transition, with general actin disorganization (He et al., 2010;Santa-Cruz Mateos et al., 2020;Ng et al., 2016;Qin et al., 2017). Loss of integrins also affects the formation of actomyosin networks in the apical and basal compartments (He et al., 2010;Santa-Cruz Mateos et al., 2020;Qin et al., 2017). Thus, follicle cells resemble the amnioserosa, in that both have persistent actin cortices (mediated by Fmnl in the amnioserosa and spectrins in follicle cells) and actomyosin pulses that are dependent on proper integrin function. As mentioned earlier, follicle cells also use MTs to stabilise their apical architecture, although it is not known if these MTs are stabilised basally. Why some tissues require MTs to crosslink the apical and basal compartments while others rely solely on cortical actin is not immediately clear, but the answer might lie in the geometry of cells or in tissue-scale forces and how they influence cytoskeletal organization. Cell-shape changes driven by aECM remodelling aECMs cover many epithelia, particularly within tubular structures like the vertebrate vasculature, lungs and kidneys, the C. elegans vulva and excretory cell, and the Drosophila tracheal system. While we have discussed the role of Pio as a link between MTs and the aECM, recent works in Drosophila and C. elegans have revealed multiple ways in which this matrix influences morphogenesis. ZP proteins like Pio are very abundant constituents of the aECM, with 43 ZP genes in C. elegans and 20 in Drosophila. These molecules are able to form complex web-like multimers through their ZP domains, which, similar to the bECM, provide a scaffold that drives and stabilises cell shape changes (Cohen et al., 2019, 2020a;Itakura et al., 2018;Ray et al., 2015;Smith et al., 2020). Experiments in tracheal multicellular tubes have shown multiple ways in which the aECM contributes to morphogenesis. Conditions that increase or reduce aECM deposition result in reduction or expansion of apical surfaces, respectively (Dong et al., 2014a,b;Öztürk-Çolak et al., 2018).
Additionally, whereas apical actin bundles instruct supracellular organisation and synthesis of the aECM, the aECM also feeds back into the cells to reinforce the organisation of actin bundles and adherens junctions (Öztürk-Çolak et al., 2016b). Direct effects on the organisation of the basal compartment have not been reported; nevertheless, changes in apical architecture affect the overall shape of the tracheal multicellular tubes. The role of ZP proteins in shaping tubes is conserved in vertebrates; Endoglin, a transmembrane ZP protein, is involved in angiogenesis in zebrafish and in mice (Sugden et al., 2017). Furthermore, in human patients, mutations in Endoglin result in hereditary haemorrhagic telangiectasia, a condition that affects vascular morphology and leads to haemorrhages (McAllister et al., 1994). Endoglin also interacts with MT-associated proteins, suggesting that Pio's function in MT organisation is also conserved (Meng et al., 2006). Further evidence of the role of the aECM in apicobasal crosstalk comes from experiments in the pupal wing disc. As mentioned earlier, during larval development the wing disc epithelium transitions from cuboidal to columnar in a process that requires reorganisation of the cortical actin cytoskeleton. In pupal development, wing disc eversion requires a transition back from columnar to cuboidal together with convergent extension to allow the elongation of the wing. This is achieved by coordinated secretion of Stubble (Sb), a protease that degrades the aECM, and of Matrix Metalloprotease 2 (MMP2), which degrades the bECM. As in tracheal multicellular tubes, aECM remodelling also reorganises the actin cytoskeleton, which couples matrix reorganization with cell shape changes (Diaz-de-la-Loza et al., 2018;Ray et al., 2015). Conservation of apicobasal coordinators: examples beyond the animal kingdom We have described various mechanisms that allow coordinated behaviour between the apical and basal compartments during cell-shape changes. While we focused our analyses on Drosophila development, these mechanisms also operate in the morphogenesis of other animals and most proteins discussed are conserved across the animal kingdom (Table 1). However, cell shape changes participate in the development of most organisms; therefore, some of these principles could have more ancestral functions. For instance, in filamentous fungi, growth of hyphal tips is regulated by polarised organisation of actin and MTs that generate force and transport molecules to the growing tip. This is coordinated by a collection of secretory vesicles known as the Spitzenkörper, which provide membrane material to the growing tip and also allow anchoring of MTs and actin, as late endosomes do during tracheal terminal cell development (Crampin et al., 2005;Ríos-Barrera and Leptin, 2021 preprint;Zheng et al., 2020). In addition, similar to late endosomes in subcellular tube formation, Spitzenkörper relocalisation precedes changes in the direction of hyphal growth (Crampin et al., 2005). Adhesion complexes in plants also reveal intricate modes of functional conservation across eukaryotes. Even though plants do not possess integrins, they also interact with the cell wall matrix and transduce mechanical information intracellularly. Among other mechanosensors, plants use Formin Homology (FH) proteins to interact with the cell wall.
Arabidopsis possesses 21 FH genes, and they all carry out distinct functions; FH1 is a transmembrane protein that binds the ECM extracellularly and regulates cortical actin organisation intracellularly (Wolf, 2017). Yeasts also use formins for various processes; interaction with the cell wall is mediated by the transmembrane protein Wsc1, which, in turn, activates the formin Bni1 to regulate the actin cytoskeleton (Levin, 2005). In filamentous fungi, the Spitzenkörper also recruits Bni1/Formin homologs, which organise the actin cytoskeleton around these vesicles (Zheng et al., 2020). As discussed above, formins in animals have typically been recognised for their roles as actin regulators, as is the case for Dia in subcellular tube formation and for Fmnl in amnioserosa apical shape oscillations. However, Drosophila Formin3 and Dishevelled-associated activator of morphogenesis (DAAM) interact with MTs and actin during neuronal branching (Das et al., 2021;Szikora et al., 2017). The role of formins as actin-MT regulators also goes beyond animals; Arabidopsis FH14 is another formin capable of interacting with MTs (Du et al., 2021b). Together, these works suggest that the involvement of formins in the execution and stabilisation of shape changes is highly conserved. Concluding remarks and open questions Here we summarised a range of mechanisms that cells and tissues use to coordinate remodelling of their apical and basal compartments. We show that actin and MT regulators (1) can be recruited differently depending on the context, (2) can act in parallel to other mechanisms, increasing redundancy, and (3) can vary greatly in their relative contribution to a process from one context to another. For some of the systems described here, there is plenty of information on how multiple pathways converge to regulate apicobasal interactions, as in the case of subcellular tube formation, wing and leg disc development, and follicle cell elongation. In other models, some gaps still need to be filled in. We focused our work on the role of integrins and Talin, but other basal complexes could also be involved in mediating apicobasal crosstalk. This is the case for the Dystroglycan/Dystrophin complex, which contributes to cytoskeletal reorganisation in different scenarios (Alégot et al., 2018;Campos et al., 2020). It will also be interesting to see how variations in ECM composition and stiffness can affect apical membrane remodelling, how these changes are translated intracellularly, and how they intersect with the bECM as a signalling scaffold (Chen et al., 2019;Crest et al., 2017;Ma et al., 2017).
6,893.6
2021-11-15T00:00:00.000
[ "Biology" ]
CsgA gatekeeper residues control nucleation but not stability of functional amyloid Abstract Functional amyloids, beneficial to the organism producing them, are found throughout life, from bacteria to humans. While disease‐related amyloids form by uncontrolled aggregation, the fibrillation of functional amyloid is regulated by complex cellular machinery and optimized sequences, including so‐called gatekeeper residues such as Asp. However, the molecular basis for this regulation remains unclear. Here we investigate how the introduction of additional gatekeeper residues affects fibril formation and stability in the functional amyloid CsgA from E. coli. Step‐wise introduction of additional Asp gatekeepers gradually eliminated fibrillation unless preformed fibrils were added, illustrating that gatekeepers mainly affect nucleus formation. Once formed, the mutant CsgA fibrils were just as stable as wild‐type CsgA. HSQC NMR spectra confirmed that CsgA is intrinsically disordered, and that the introduction of gatekeeper residues does not alter this ensemble. NMR‐based Dark‐state Exchange Saturation Transfer (DEST) experiments on the different CsgA variants, however, show a decrease in transient interactions between monomeric states and the fibrils, highlighting a critical role for these interactions in the fibrillation process. We conclude that gatekeeper residues affect fibrillation kinetics without compromising structural integrity, making them useful and selective modulators of fibril properties. | INTRODUCTION The widespread presence of amyloid fibrils in Nature reveals that the ability to fibrillate is an intrinsic property of amino acid-based peptide chains (Ke et al., 2020;Knowles et al., 2014;Knowles & Mezzenga, 2016).The amyloid fold is known for its high stability, physical and chemical resistance, and self-templating replication (Knowles & Mezzenga, 2016).Amyloid fibrils are known for their role in mammalian neurodegenerative diseases such as Alzheimer's, Parkinson's, and Mad Cow disease (Wong & Krainc, 2017).In pathology, a common feature for these proteins is a runaway aggregation mechanism which becomes increasingly harder for the cellular quality control machinery to deal with as the fibrils grow and multiply (Ke et al., 2020;Meisl et al., 2022;Wong & Krainc, 2017).In contrast, functional amyloids have evolved to utilize the unique properties of amyloids for the benefit of the host (Badtke et al., 2009;Evans et al., 2018;Fowler et al., 2007;Otzen & Riek, 2019).To achieve this, functional amyloids are formed in a very tightly controlled process that ensures that aggregation only occurs at the right time and place.Functional amyloids, in addition to having the support of chaperones and translocation pores, also display far less replication through secondary (and less easily controlled) processes such as secondary nucleation and fragmentation (Meisl et al., 2022), which are much more prevalent in pathological amyloids (Cohen et al., 2013;Meisl et al., 2014).Thus, functional amyloids achieve replication rates appropriate to the growth rates of their biological hosts (Meisl et al., 2022).It is likely that evolutionary pressure has guided the functional amyloids to suppress uncontrolled self-replication, but the molecular mechanism behind it is still unclear. Curli is one of the most studied functional amyloids and an important component of E. 
coli biofilm formation, providing physical and chemical protection to bacterial communities (Barnhart & Chapman, 2006;Chapman, 2002).The curli system consists of at least six different proteins, with CsgA as the main constituent of curli fibrils while the other five help facilitate safe, efficient, and correct fibril formation by CsgA (Barnhart & Chapman, 2006;Chapman, 2002).The minor component of curli fibrils, CsgB, acts as tethering and nucleation sites for CsgA, anchoring the fibrils to the outer cell membrane (Hammer et al., 2007).CsgA remains unstructured in vivo in the absence of CsgB (Hammar et al., 1996).CsgD is a transcriptional activator for the csgBA operon (Barnhart & Chapman, 2006).CsgG, CsgE, CsgF, and CsgC are all mediator proteins, acting as transport proteins and chaperones (Chapman, 2002).CsgG is thought to be responsible for the transport of CsgA, CsgB, and CsgF across the outer membrane (Nenninger et al., 2009).CsgF interacts with CsgG at the outer membrane and acts as a tethering point for CsgB (Robinson et al., 2006).CsgE interacts with CsgG in the periplasmic space and together with CsgC is responsible for stabilizing CsgA, CsgB, and CsgF through a chaperone-like mechanism until properly translocated (Nenninger et al., 2009). Besides help from ancillary proteins, CsgA is also optimized for fibrillation through its sequence.The amyloid core of CsgA (i.e., the whole of the mature protein except for a 22-residue N-terminal stretch which is not structured in the final fibril (Bu et al., 2024;Sleutel et al., 2023;Tian et al., 2015)), consists of five imperfect repeats, all 19-23 amino acids in length and sharing at least 30% amino acid identity with the consensus sequence Ser-X 5 -Gln-X 4 -Asn-X 5 -Gln (Wang et al., 2010).Each repeat forms a β-hairpin in which the fibrillar structures stack on top of each other to form a very compact amyloid repeat unit (Tian et al., 2015).A recent alignment of >40,000 different CsgA sequences identified an even more general amyloid motif across the bacterial metagenome where each strand in the repeat contains the motif N-$-Ψ-$-Ψ-$-Q, where Ψ are hydrophobic residues facing into the core of the amyloid structure alternating with surface exposed ($) residues (see alignment of this motif to CsgA from E. coli in Figure 1a).Repeats 1 (R1) and 5 (R5) are responsible for intermolecular interactions between both CsgA and CsgB, with repeats 2-4 sandwiched in between.While R2-R4 are similar in sequence to R1 and R5, certain Asp and Gly residues found in R2-R4 act as gatekeeper residues, making these repeats significantly less amyloidogenic and effectively restraining protein fibrillation.This avoids untimely and mislocated fibril formation in the cell (Wang et al., 2010).When introduced into position 116 and 129 in R5 (note we use the numbering for mature CsgA, excluding the first 20 residues comprising the signal peptide), Asp had a detrimental effect on the amyloidogenicity of CsgA while residues with similar properties as the wildtype residues showed minimal effect (Wang et al., 2010).Other studies of gatekeeper residues in aggregation-prone regions (APR) have likewise concluded that electrostatic repulsion of charged residues is one of the main contributors to the suppression of aggregation (Beerten, Schymkowitz, & Rousseau, 2012;Houben et al., 2020;Kallberg et al., 2001;Monsellier & Chiti, 2007;Sant'Anna et al., 2014). 
Given the role of R1 and R5 in intermolecular interactions, gatekeeper residues in these positions are expected to have a major impact on CsgA self-assembly.Accordingly, here we investigate how the introduction of Asp gatekeeper residues into specific positions of the amyloidogenetic R1 and R5 of CsgA affects both the kinetics of amyloid formation and the structure and behavior of the resulting amyloid fibrils.As points of mutagenesis, we chose Asn26 and Leu39 in R1 and Asn116 and His129 in R5, all of which are predicted to be surface exposed according to the CsgA consensus motif (Figure 1a, b) and thus involved in inter-monomer quaternary contacts.Additionally, we were interested in determining if and how multiple gatekeepers show cumulative effects which would shed more light on how the amyloidogenicity of functional amyloids in contrast to their pathological counterparts are fine-tuned by their specific number of gatekeepers.We show that stepwise substitution of specific residues in R1 and R5 of CsgA with Asp gatekeeper residues gradually eliminates CsgA amyloidogenicity, primarily by inhibiting transient interactions and the primary nucleation step of the fibrillation mechanism.At the same time, NMR spectroscopy does not detect any appreciable change in structural propensity for the variants.These mutants only fibrillate in the presence of nucleation seeds or surface catalysis.Once formed, they are virtually indistinguishable from wt CsgA fibrils.Lastly, we utilize the monomeric stability of the CsgA QM variant in Dark-state Exchange Saturation Transfer experiments to probe the interaction and on-off rates between CsgA and fibrils.This allows us to conclude that increasing the number of gatekeeping residues significantly reduces the fraction of the protein in the transiently bound state. | Site-directed mutagenesis The residues were introduced via site-directed mutagenesis using QuikChange (Braman et al., 1996).All CsgA constructs contain a C-terminal His-tag which facilitates purification by Ni-NTA chromatography and which is retained in the subsequent studies. | Protein expression Electro-competent BL21-DE3 slyD knockout cells were electroporated using 1 mm cuvettes in an Eporator (Eppendorf) at 1800 V for ≤5 ms and transformed with pET11a containing the CsgA genes.The wt CsgA gene was derived from the major curlin subunit from K12 (Uniprot ID: P28307), with the first 20 amino acids (signal peptide) removed.Sequences can be found in Data S1.Transformed cultures were plated on LB-agar plates containing 100 μg/ml ampicillin and incubated overnight at 37 C. A single colony was used to inoculate a 10 ml preculture of 2xYT medium containing 100 μg/ml, which was incubated overnight at 37 C and then added to 1 L of 2xYT containing 100 μg/ml ampicillin and 0.5% glucose.The culture was incubated at 37 C with 150 RPM shaking, induced with 0.1 mM Isopropyl β-D-1-thiogalactopyranoside (IPTG) at 1.7 OD 600 , and left to incubate for 2 h.The culture was harvested by centrifugation at 4000g, 4 C for 10 min.The resulting cell pellet was either frozen at À80 C for later purification or used straight away. 
| Protein purification The cell pellet was dissolved by incubation overnight on a magnet stirrer at 4 C, in buffer A (7.5 M guanidinium hydrochloride (GdmCl) with 50 mM Tris, pH 7.4) using 100 ml buffer per 1 L of expression culture.The solution was centrifuged at 12.000 g, room temperature for 20 min to pellet insoluble cell debris, which was discarded.The supernatant was incubated with Ni-NTA beads (2 ml per liter of expression culture) in a blue-cap flask on a magnet stirrer for 1 h at room temperature.The Ni-NTA beads were captured in a 50 ml Corning centrifuge tube by multiple rounds of gentle centrifugation at 1000 g for 1 min with slow deceleration.The supernatant was removed and discarded by careful decantation.The Ni-NTA beads were then washed once with 50 ml buffer A and twice with buffer B (buffer A with 20 mM imidazole) by gently inverting the tube for 2 min followed by centrifugation and discarding the supernatant.The washed beads were transferred to a plastic gravity column with a ceramic filter bottom and 30 ml buffer B was added as a final wash.To elute bound CsgA, 20 ml buffer C (buffer A with 500 mM imidazole) was gently added to the top of the column and the flowthrough was collected in 1 ml fractions.CsgA concentration was measured on a DeNovix DS-11 Nanodrop blanked with buffer C to eliminate imidazole absorption (0.5 M imidazole has an absorption of ca.0.7 at 280 nm).All fractions containing CsgA were pooled, passed through a 0.2 μm filter, aliquoted, snapfrozen in liquid nitrogen, and stored at À80 C. When needed for assays, CsgA was thawed under gentle shaking to fully dissolve all GdmCl crystals followed by desalting into 50 mM Tris, pH 7.4 using Cytiva PD Mini-Trap desalting columns with Sephadex G-25 resin.CsgA concentration was likewise measured on a DeNovix DS-11 Nanodrop before use using ε 280 = 11,460 M À1 cm À1 . | SDS-PAGE Protein solutions were mixed with 2Â BioRad Laemmli SDS-PAGE Loading Buffer and loaded on BioRad TGX Stain-Free 4-20% gels along with a BioRad Precision Plus Protein unstained ladder.The gels were run in Laemmli buffer at 125 V for 45 min.Protein bands were visualized with a GelDoc Go Gel Imaging System using the Stainfree program. | ThT assay Protein solutions were prepared in Eppendorf tubes and ThT was added to a final concentration of 20 μM before transfer to 96-well plates.100 μl solution was added to each well in half-area plates while 200 μl was added to regular plates.All plates were sealed with MicroAmp Optical Adhesive Film prior to measuring.Fluorescence was measured every 10 min after 5 s of double orbital, 200 RPM shaking in a BMG Labtech CLARIOstar Plus microplate reader set to excitation at 448 nm and emission at 485 nm with a dichroic filter at 466.6 nm and gain set to 1500.All experiments were conducted at 37 C. Corning 96-well Half Area Black, Clear Flat Bottom Polystyrene Nonbinding surface (NBS) plates were used for experiments that required non-binding surface while Corning 96-well Black, Clear Bottom Polystyrene Microplates were used for surface-catalyzed experiments but were not washed prior to samples being added. 
| Fibril and fibril seeds preparation Fibrils for TEM imaging, formic acid assays, ThT assays, and seeding experiments were prepared by incubating high concentrations (>25 μM) of protein at 37 °C in Eppendorf tubes for 48 h. For TEM imaging, fibrils were used without further treatment to avoid fragmentation. Fibrils needed for formic acid assays and seeding experiments were washed 3 times by 13,000 g centrifugation for 10 min followed by careful removal of supernatant and vigorous resuspension of the fibril pellet in fresh 50 mM Tris, pH 7.4 buffer. The protein concentration of the supernatant was measured to ensure that unpolymerized monomers were accounted for. Washed fibrils were sonicated immediately prior to use with a Fisherbrand 505 Sonicator with Probe in 5 cycles of 5 s bursts followed by 5 s of rest. | Formic acid assay A sample of washed fibrils was sonicated according to the protocol described above and was immediately transferred to Eppendorf tubes. 50 μg of fibrils, calculated based on monomeric concentration, were used for each sample. All fibril samples were then centrifuged at 13,000 g for 10 min and the supernatant was very carefully removed and discarded. 50 μl of different concentrations of formic acid were added to the fibril pellet and carefully pipetted up and down, followed by 10 s of gentle vortexing. The samples were incubated for 20 min at room temperature followed by centrifugation at 13,000 g for 10 min. 20 μl of supernatant were carefully pipetted from the top of the liquid and transferred into fresh Eppendorf tubes. Holes were poked in the lid and the samples were flash-frozen in liquid nitrogen before being lyophilized for 90 min. The protein powder was carefully dissolved in 30 μl 2× BioRad Laemmli SDS-PAGE Loading Buffer and loaded directly on a BioRad TGX Stain-Free 4-20% SDS-PAGE gel and run according to the protocol described above. SDS-PAGE band intensity was analyzed using BioRad's Image Lab Software and all intensities were normalized to the 98% formic acid sample band. Data were fitted to the two-state unfolding model of Christensen et al. (2020) using Prism 9 plotting software (a generic form of this model is sketched after the TEM imaging protocol below). The midpoint of denaturation [FA]50%, the free energy of unfolding ΔG, and the dependence of ΔG on [FA] (mFA) were obtained through this fitting. | TEM imaging All TEM data were recorded on a Tecnai FEI F20 microscope using copper mesh 400 grids coated with carbon film from EMResolutions. All grids were prepared by placing 3 μl of sample on the grid for 2 min to allow for proper adsorption onto the grid surface. Sample liquid was blotted away with filter paper, followed immediately by staining with 3 μl 2% uranyl formate for 15 s. The uranyl formate was blotted away and the staining process was repeated two additional times, after which the excess liquid was blotted and the grid air-dried for 2 min before being put in storage.
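The two-state unfolding equation referenced in the formic acid assay section above is not reproduced in the text. A generic two-state, linear free-energy solubilization model consistent with the parameters named there ([FA]50%, ΔG, and mFA) is sketched below; this is an assumed form, not necessarily the exact expression used by Christensen et al. (2020):

f_sol([FA]) = exp(−(ΔG − mFA·[FA]) / RT) / (1 + exp(−(ΔG − mFA·[FA]) / RT)), with [FA]50% = ΔG / mFA

Here f_sol is the fraction of solubilized protein estimated from the normalized SDS-PAGE band intensity, R is the gas constant and T the absolute temperature; the midpoint [FA]50% is where half of the fibril mass is dissolved.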
| TIRF imaging To visualize the growth of fibrils of wt CsgA and CsgA DM, monomeric proteins were fluorescently labeled with Alexa Fluor 488 NHS Ester as follows: CsgA was desalted into Solution C (20 parts PBS, pH 7.4, and 1 part 0.2 M NaHCO 3 , pH 9.0) using a Cytiva PD MiniTrap desalting column with Sephadex G-25 resin.The concentration of the desalted CsgA was immediately adjusted to 30 μM to reduce aggregation.9.2 μl 7.8 mM Alexa Fluor 488 NHS Ester was added and mixed thoroughly before being wrapped in tin foil to protect the fluorescent dye from light, and then incubated at room temperature for 30 min.The solution was then desalted into PBS, pH 7.4, using a Cytiva PD10 desalting column with Sephadex G-25 resin, taking care not to collect any free dye.The concentration of the conjugated CsgA protein was determined with a DeNovix DS-11 Nanodrop using a CsgA extinction coefficient ε 280 = 11,460 M À1 cm À1 and an Alexa 488 extinction coefficient ε 495 = 73,000 M À1 cm À1 .The CsgA solution was diluted to 1 μM in PBS and transferred into an Ibidi Sticky-Slide VI 0.4 6-channel slide glued onto a pre-treated 170 μm thick microscope coverslip.Prior to this step, the coverslip had been cleaned by sonicating the glass slide 5 times 10 min in 2% Hellmanex solution and rinsed with ultrapure (MilliQ) water in between followed by 5 times sonication for 10 min in ultrapure water rinsed with ultrapure water in between sonication steps.Finally, the glass slides were sonicated one time in 96% ethanol and the ethanol was exchanged to new 96% ethanol and stored.The glass slide was dried with nitrogen and processed with plasma cleaning treatment for 5 min, after which it was incubated with a 1 g/L PLL-g-PEG solution for 30 min and rinsed with ultrapure water before being nitrogen dried and then attached to an Ibidi Sticky-Slide VI 0.4 6 channel slide.All TIRF data were recorded on an ONI Nanoimager TIRF microscope.Laser intensity was adjusted to maximize contrast while autofocus and Z-axis lock were activated during all experiments.Growth movies were recorded by taking images every 30 seconds for at least 3 h for wt CsgA and 8 h for CsgA DM for four fields of views per technical replicate enabled by the autofocus and Z-axis lock.Growth data analysis was done in Ima-geJ by measuring the final length and time of growth for selected fibrils.Labeling with AlexFluor488 did not appreciably affect aggregation kinetics of CsgA (data not shown), probably because the two Lys residues in positions 89 and 96 are predicted to be in a turn and a surface-exposed position, respectively. 
| NMR spectroscopy To gain insights into the structural properties of CsgA, NMR experiments were conducted on CsgA QM in solution, using 200 μM 15N-labeled CsgA in 20 mM Na2PO4, 10% D2O, pH 7.2. Spectra were assigned using HSQC, NOESY-HSQC, and TOCSY-HSQC spectra recorded at 283.2 K on a Bruker Avance III HD 950 MHz spectrometer using a 5 mm TCI liquid state cryoprobe. Spectra were processed in TOPSPIN and were analyzed and assigned using CCPN (Skinner et al., 2016). To gain insights into the dynamics of the interaction of CsgA with fibrils using relaxation and saturation transfer spectra, 30 μM solutions each of wt CsgA and the three mutants (DM, TM, QM) were prepared in 50 mM Na2PO4 and 10% D2O at pH 7.4. Following initial relaxation and saturation transfer measurements, samples containing wt CsgA and DM were allowed to fibrillate by incubation for 4 days at 6 °C, during which measurements were taken at various time points. A final measurement was taken at the end of the experiment. CsgA TM and QM were unable to form fibrils in the same time span, and thus 5% preformed fibril seeds were added in order to obtain a similar dataset. Details of the NMR measurements are described below. These spectra were recorded at 298 K on a Bruker Ascend 800 MHz spectrometer with an Avance III HD console equipped with a 5 mm z-gradient CP-TCI (H/C/N) cryogenic probe at the NV-NMR-Centre/Norwegian NMR Platform, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. Here two sets of measurements were made. (a) The apparent association rate, kon(app), between CsgA proteins and fibrils was measured using changes in the R2 relaxation rate. CPMG spectra with water suppression by excitation sculpting were used to acquire 1H R2 rates in the presence and absence of fibrils. In this experiment, peak integrals (I) were obtained at eight values of τ from 0 to 250 ms and fitted to I = I0·exp(−R2·τ) to extract R2. The maximum difference in R2 between the two conditions (with and without fibrils) was used to estimate kon(app). The 1H R1 relaxation rate of CsgA QM was estimated using an inversion recovery experiment. (b) To investigate the "dark" state of CsgA transiently bound to the fibrils, 1H saturation transfer spectra were recorded as described (Fawzi et al., 2010). Briefly, 700 ms of continuous-wave irradiation at off-resonance frequencies (15 offsets ranging from +35 to −35 kHz from the water resonance) with radiofrequency field strengths of 180 or 350 Hz was used to partially saturate the broad resonances of the fibril-bound state. Saturation transfer from this "dark" state to the visible monomer was monitored by the decrease in intensity of an amide region (7.85-8.6 ppm) in the 1H NMR spectrum. To quantify the exchange dynamics between the "dark" and visible states, we analyzed the intensity profiles as a function of off-resonance frequency. This analysis employed a two-site exchange model as described (Fawzi et al., 2012), implemented in a Matlab script kindly provided by G. Marius Clore.
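As an illustration of step (a), the sketch below shows how R2 could be extracted from CPMG peak integrals by a single-exponential fit, and how the maximum difference in R2 with and without fibrils gives a rough estimate of kon(app). This is a minimal reconstruction under the fast-exchange assumption kon(app) ≈ ΔR2; the function names and example numbers are illustrative and are not taken from the authors' analysis script.

import numpy as np
from scipy.optimize import curve_fit

def cpmg_decay(tau, i0, r2):
    # Single-exponential CPMG decay: I(tau) = I0 * exp(-R2 * tau)
    return i0 * np.exp(-r2 * tau)

def fit_r2(tau_s, integrals):
    # Fit R2 (s^-1) from peak integrals measured at a series of delays tau (s)
    popt, _ = curve_fit(cpmg_decay, tau_s, integrals, p0=(integrals[0], 10.0))
    return popt[1]

# Hypothetical example: eight tau values from 0 to 250 ms, as in the protocol above
tau = np.linspace(0.0, 0.25, 8)
monomer_only = cpmg_decay(tau, 1.0, 8.0)    # illustrative values, not measured data
with_fibrils = cpmg_decay(tau, 1.0, 14.0)
delta_r2 = fit_r2(tau, with_fibrils) - fit_r2(tau, monomer_only)
k_on_app = delta_r2  # fast-exchange assumption: lifetime broadening ~ apparent on-rate
print(f"delta R2 = {delta_r2:.1f} s^-1 -> k_on(app) ~ {k_on_app:.1f} s^-1")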
| AlphaFold model generation The AlphaFold model for CsgA QM was generated by inserting the amino acid sequence into the online Google Colab webpage (Jumper et al., 2021) | Introduction of gatekeeper residues in R1 and R5 completely inhibits fibril formation Inspired by the pioneering work of Chapman and coworkers (Wang et al., 2010), we hypothesized that the introduction of Asp residues into two key locations each in R1 and R5 identified by alignment of the five repeats (shown in yellow in Figure 1a) would reduce CsgA's amyloidogenicity.Four Asp residues were introduced in a stepwise manner to elucidate if and how the effect of multiple gatekeepers was cumulative.We constructed three mutants with increasing amounts of gatekeeper residues: CsgA Double Mutant (DM) with mutations N26D and H129D, CsgA Triple Mutant (TM) (DM plus N116D), CsgA Quadruple Mutant (QM) (TM plus L39D) (Figure 1b). Each protein was purified according to the same expression and purification protocol.Interestingly, the protein yield generally increased with gatekeeper count (Figure 1c).On SDS-PAGE, the four CsgA variants showed decreasing migration with an increasing number of gatekeepers (Figure 1d).We attribute this to a reduction in SDS-binding (and thus loss of overall negative charge, which leads to a reduction in mobility in an electric field) upon introduction of SDS-repelling anionic Asp residues. We used Thioflavin T (ThT) assays in low-binding 96-well plates to investigate the effect of gatekeeper residues in the 3 CsgA variants between 1.4 and 14.4 μM (Figure 2).The introduction of two gatekeeper residues (CsgA DM) significantly reduced the fibrillation propensity, giving both a large increase in the lag phase and a large decrease in the growth rate compared with wt CsgA fibrillation (Figure 2).The gatekeeper residues introduced in CsgA DM thus have a clear negative effect on both the primary nucleation as well as the elongation rates.Further addition of 1-2 gatekeeper residues in CsgA TM and QM completely inhibits fibrillation (Figure 2).The complete absence of fibrillation by both TM and QM prevented any conclusions at this stage about the additional effect of the fourth gatekeeper residue. 
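Lag times and growth rates such as those discussed above are commonly extracted from ThT traces by fitting a generic sigmoid and taking the lag time from the tangent at the midpoint. The sketch below illustrates this with synthetic data; it is an assumed, generic analysis and not the specific fitting procedure used in this study.

import numpy as np
from scipy.optimize import curve_fit

def tht_sigmoid(t, f0, amp, k, t_half):
    # Baseline f0, amplitude amp, apparent growth rate k (h^-1), midpoint t_half (h)
    return f0 + amp / (1.0 + np.exp(-k * (t - t_half)))

def lag_and_rate(t_h, fluorescence):
    # Fit the trace, then derive the lag time from the tangent at the midpoint
    (f0, amp, k, t_half), _ = curve_fit(
        tht_sigmoid, t_h, fluorescence,
        p0=(fluorescence.min(), np.ptp(fluorescence), 0.5, np.median(t_h)),
        maxfev=10000)
    return t_half - 2.0 / k, k

# Synthetic example: readings every 10 min over 48 h
t = np.arange(0.0, 48.0, 1.0 / 6.0)
trace = tht_sigmoid(t, 100.0, 5000.0, 0.4, 20.0) + np.random.normal(0.0, 50.0, t.size)
lag, rate = lag_and_rate(t, trace)
print(f"lag time ~ {lag:.1f} h, apparent growth rate ~ {rate:.2f} h^-1")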
Interestingly, all four variants readily formed fibrils in non-coated 96-well plates (Figure S1).We attributed this to a catalytic effect of the non-treated plastic surface on CsgA self-assembly.This in turn suggested that CsgA TM and QM could form fibrils if provided with a nucleation point, prompting us to carry out seeding experiments by the addition of pre-formed fibrils.Such experiments could also clarify whether the gatekeeper residues of TM and QM mainly inhibit the primary nucleation or elongation mechanism.The occurrence of fibril polymerization in the presence of fibril seeds would strongly indicate that the inhibiting effect of the gatekeeper residues is limited to fibril nucleation rather than elongation.All reactions were conducted with a constant monomeric concentration of 7.2 μM and preformed seed % was calculated based on the total protein mass.Self-replicating processes such as secondary nucleation and fragmentation are virtually absent during the polymerization of CsgA (Andreasen et al., 2019).This means that relatively large (>1% of total protein mass) amounts of fibril seeds are required to promote fibrillation.Consistent with this, all CsgA variants required 1% preformed fibril seeds before any significant seeding effect was observed (Figure 3), allowing us to conclude that secondary processes such as elongation of existing fibrils are not major contributors to CsgA fibrillation.Since all 4 constructs can aggregate in the presence of seeds or a catalytic surface that likely mimics seeds in some capacity (Figure S1), the introduction of gatekeeper residues must affect fibril nucleation more than the elongation of existing fibrils.Given that the mutations introduce Asp residues, it is possible that the inhibitory effect on fibrillation could be caused by the ionic repulsion of the acidic groups.If this was the case, such effects should be mitigated by shielding charges through increased ionic strength or by altering the pH.To address this, we fibrillated wt CsgA and CsgA DM in increasing concentrations of NaCl (Figure S2a) and at various pH values (Figure S2b).The addition of NaCl did not seem to have any restorative effects on CsgA DM, rather, it decreased the fibrillation propensity of both wt CsgA and CsgA DM (except for wt CsgA at 1000 mM NaCl).The pH study showed a different trend, with the amyloidogenicity of wt CsgA decreasing with pH from 9 to 3.3; CsgA DM displayed a peak fibrillation rate at pH 4.4-5.7,while the largest ThT signal was achieved at pH 7.4.These data suggest that a slightly protonated population of Asp residues in CsgA DM weakly promotes fibrillation while wt CsgA gains no benefits from a slightly acidic pH.Thus, simple electrostatic repulsion does not rationalize the effect of the gatekeeper residues. 
In conclusion, the introduction of just two gatekeeper residues in R1 and R5 of CsgA significantly reduces the protein's propensity to form fibrils, while two additional gatekeeper residues completely inhibit the ability to form fibrils, mainly by inhibiting the primary nucleation processes unless fibrillation is stimulated by seeding or surface catalysis. However, we cannot distinguish between the effects of TM and QM.

| wt CsgA, DM, TM, and QM all have the same fibrillar architecture

Prompted by these very different aggregation properties, we used TEM to compare the morphologies of mature fibrils, generated for all four CsgA variants by incubating monomers for 24 h in Eppendorf tubes at 37°C while shaking. The plastic surface of a standard Eppendorf tube allows both CsgA TM and QM to form fibrils, even in the absence of preformed seeds. When deposited on TEM grids, fibrils of all four variants were heavily associated laterally, forming thick cable-like structures that ultimately led to large tangles of fibrils (Figure 4a). Isolated, free fibrils were rarely observed. Thus, once formed, the fibrils of CsgA DM, TM, and QM are indistinguishable from wt CsgA fibrils at TEM resolution. Nevertheless, the stability of these fibrils might still be affected. Sensitivity to protease treatment could in principle report on fibril stability, but only in an indirect fashion: the compact structure of CsgA does not provide obvious points of proteolytic attack, so protein degradation would reflect a coupling between (usually slow) fibril dissociation and cleavage of the released monomers. Instead, to measure fibril stability, we subjected the fibrils to increasing concentrations of formic acid (FA), which is known to dissociate and dissolve functional amyloid at high concentrations in a quantitative manner (Christensen et al., 2020).

Dissolved monomeric CsgA was quantified by SDS-PAGE (Figure 4b). Visual inspection of these gels showed very little difference in stability among the four CsgA variants. All underwent a transition from no solubilization below 50% FA to complete solubilization above 65% FA. Using densitometric quantification, the data were fitted to a two-state unfolding model (see M&M for details) to obtain the midpoint of denaturation [FA]50%, the free energy of unfolding ΔG, and the dependence of ΔG on [FA] (m_FA) (Figure 4c, summarized in Figure 4d). All four CsgA variants display very similar [FA]50% values, and the three mutants also show very similar ΔG values. wt CsgA has a much lower ΔG value, but we attribute this to the large errors in the m_FA value due to noisy data in the transition region.

We conclude that despite a very high kinetic barrier to fibril formation, the three mutants all show the same morphology and stability as wt CsgA. Thus, the gatekeeper residues in CsgA primarily affect the initial formation of fibrils, likely by discouraging intramolecular or intermolecular interactions that are normally required to attain the correct fold for fibril formation.
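The two-state fit mentioned above can be parameterized as a linear dependence of the unfolding free energy on formic acid concentration, ΔG([FA]) = ΔG - m_FA·[FA], with the solubilized fraction following a Boltzmann factor. The sketch below fits synthetic densitometry data with this parameterization; the functional form and the data are assumptions for illustration, not the exact model or numbers of the original M&M.

```python
# Illustrative two-state formic-acid dissolution fit:
#   deltaG([FA]) = deltaG0 - m_FA * [FA]        (free energy of the fibril state)
#   fraction solubilized = 1 / (1 + exp(deltaG([FA]) / RT))
# [FA]50% is then deltaG0 / m_FA. Data below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

RT = 0.593  # kcal/mol at 25 C

def frac_dissolved(fa, dG0, m_fa):
    return 1.0 / (1.0 + np.exp((dG0 - m_fa * fa) / RT))

fa = np.array([40, 45, 50, 55, 60, 65, 70, 80, 98], dtype=float)     # % v/v
frac = np.array([0.02, 0.03, 0.10, 0.45, 0.85, 0.97, 0.99, 1.0, 1.0])

popt, _ = curve_fit(frac_dissolved, fa, frac, p0=[20.0, 0.4])
dG0, m_fa = popt
print(f"deltaG = {dG0:.1f} kcal/mol, m_FA = {m_fa:.2f} kcal/mol per %FA, "
      f"[FA]50% = {dG0 / m_fa:.1f} %")
```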
| CHAPS interacts differently with wt CsgA, DM, TM, and QM

We decided to probe whether this early-stage blockage of fibrillation made the mutants more sensitive to other modulators of fibrillation. Harsh anionic surfactants such as SDS inhibit the fibrillation of functional amyloid at high concentrations (Najarzadeh et al., 2019). We therefore turned to the gentler zwitterionic surfactant CHAPS, which had only modest effects on wt CsgA, even at concentrations well above its critical micelle concentration, which we determined to be around 3.2 mM using pyrene fluorescence (Figure S3).

At 50 μM CHAPS, there was no effect on wt CsgA fibrillation (Figure 5a). 250-500 μM CHAPS led to an increased primary nucleation rate as well as an increase in ThT intensity (Figure 5a), while ≥1 mM CHAPS again saw a shift in the aggregation kinetics, with a drastic decrease in the primary nucleation and elongation rates (Figures 5a and S4). At and above 10 mM CHAPS, the aggregation rate of wt CsgA steadily declined with CHAPS concentration until a plateau was reached at 150 mM (Figure S4). We saw no evidence of direct contact between CHAPS and CsgA, since the critical micelle concentration (CMC) was only insignificantly affected by CsgA, rising from 3.2 mM in the absence to 3.4 mM in the presence of CsgA (Figure S3).

We hypothesized that the mild perturbation of the aggregation mechanism by CHAPS might resolve the effect of the fourth gatekeeper residue introduced in QM. We monitored the fibrillation of all four CsgA variants in the presence of 0-1 mM CHAPS, using non-coated 96-well plates to promote fibril formation of CsgA TM and QM (Figure 5a-d). CsgA DM responded to CHAPS with the same three concentration regimes as observed for wt CsgA, but at much lower CHAPS concentrations. Thus, fibrillation of CsgA DM was almost completely inhibited already at 500 μM CHAPS, an effect only observed at 150 mM CHAPS for wt CsgA. This suggests that DM fibrillation is so compromised compared to wt CsgA that only small amounts of a mild surfactant are sufficient to severely reduce aggregation. This effect is even more pronounced for the mutants TM and QM, for which there was no increase in fibrillation rate at the lowest CHAPS concentrations but simply a large increase in lag phase already at 250 μM CHAPS. TM and QM show lag times of 30 and 40 h, respectively, at 250 μM CHAPS, showing that CsgA QM is slightly more inhibited by the additional gatekeeper residue, but with the largest effects caused by the first three gatekeeper residues. At 500 μM CHAPS or above, both TM and QM were completely inhibited from aggregating. In principle, CHAPS could act indirectly on aggregation through effects on surface tension; however, CHAPS' relatively high CMC means that there is only a modest decline in surface tension at sub-mM concentrations (cf. the observed decline in surface tension of only ~8 mN/m up to the CMC in figure 1 of Lunkenheimer et al., 2007).

These results prompted us to investigate whether the interaction between CHAPS and CsgA affected the morphology of the mature fibrils. TEM images of CsgA fibrils grown in the absence and presence of 10 mM CHAPS showed that CHAPS led to much more dispersed fibrils and much less dense clumps of fibrils (Figure 5e) than in its absence. This allowed for clear imaging of both individual fibers and fiber bundles, suggesting that CHAPS not only interferes with the initial fibril formation mechanism but also coats the fibrils, reducing the otherwise strong lateral association. These data likewise demonstrate how CsgA fibrils consist of an ensemble of fibers formed from varying numbers of intertwined fibrils (Figure S5). Our observations are nicely consistent with the very recent report by Liu and coworkers, who used the nonionic surfactant Tween 20 to provide better dispersion of CsgA fibrils, leading to a 3.62 Å cryo-EM structure of CsgA in combination with AlphaFold (Bu et al., 2024), in extension of the pioneering work by Remaut and coworkers (Sleutel et al., 2023).
| Growth measurements of single fibrils confirm a significant slowing of the elongation rate of CsgA DM

Having investigated the growth parameters of CsgA and the gatekeeper mutants using bulk ThT fluorescence measurements, we set out to measure the growth behavior of individual fibrils using total internal reflection fluorescence (TIRF) microscopy, which allows us to track individual fibrils. As opposed to most other TIRF amyloid studies, which use amyloid-specific fluorescent probes such as Nile Blue or ThT, we opted to use a crosslinked AlexaFluor 488 fluorescent dye to obtain greater contrast in our measurements. Only wt CsgA and CsgA DM were recorded, since these were the only two CsgA variants able to form fibrils in the absence of seeds or catalytic agents. Both growth experiments were conducted at 1 μM monomer concentration to avoid crowding at the imaging surface.

The growth rate of the fibrils captured by the TIRF microscope was determined simply by measuring the length of the fibrils over time with ImageJ and dividing it by the length of time over which the fibrils were growing (Figure 6a). The end of growth was defined as either the point in time when the fibril no longer grew or when the measurements were stopped. The growth rates of 20 fibrils were measured for both wt CsgA and CsgA DM (Figure 6b). With a growth rate of 1.86 nm/s, wt CsgA fibrils elongate almost 5 times faster than CsgA DM at 0.39 nm/s. These results are highly congruent with the observations made in the bulk ThT measurements, with wt CsgA showing a significantly higher fibrillation rate than CsgA DM. This direct observation of the growth rate further demonstrates that the introduction of gatekeeper residues has a large impact on the elongation rate of CsgA.
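The per-fibril growth rates above are simply length increases measured over time divided by the growth duration. The sketch below performs that calculation for synthetic length-time traces; the data and the linear-fit choice are illustrative assumptions, not the ImageJ measurements from this study.

```python
# Illustrative calculation of single-fibril growth rates from length-vs-time
# traces (here synthetic), mimicking the "length increase divided by growth
# time" measurement described above. A linear fit is used for robustness.
import numpy as np

def growth_rate(time_s, length_nm):
    """Return the fibril growth rate in nm/s from a length-vs-time trace."""
    slope, _ = np.polyfit(time_s, length_nm, 1)
    return slope

rng = np.random.default_rng(1)
t = np.arange(0, 3600, 60.0)                 # one hour, one frame every 60 s

# Synthetic traces: ~1.9 nm/s (wt-like) and ~0.4 nm/s (DM-like), plus noise.
wt_trace = 1.86 * t + rng.normal(0, 50, t.size)
dm_trace = 0.39 * t + rng.normal(0, 50, t.size)

print(f"wt-like fibril: {growth_rate(t, wt_trace):.2f} nm/s")
print(f"DM-like fibril: {growth_rate(t, dm_trace):.2f} nm/s")
```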
| NMR spectroscopy shows that the monomeric structures of CsgA variants are intrinsically disordered and insensitive to mutation

A major obstacle when conducting experiments on any amyloidogenic protein is its inevitable aggregation, especially when the experimental conditions require high concentrations. We reasoned that the far less amyloidogenic CsgA QM variant would therefore be advantageous to use for this purpose. Sewell and coworkers have previously recorded and assigned a Heteronuclear Single Quantum Coherence (HSQC) spectrum of wt CsgA (Sewell et al., 2020), and the spectrum of QM was highly similar, except for the few positions in the immediate vicinity of the amino acid substitutions. We recorded high-resolution 3D ¹⁵N NOESY-HSQC and 3D ¹⁵N TOCSY-HSQC spectra and found that the CsgA QM spectra aligned sufficiently well with wild type to transfer literature assignments when corroborated by connectivities in the 3D NOESY and TOCSY spectra. ¹H-¹⁵N correlations were assigned for 110/131 (84%) residues (excluding the C-terminal His6-tag). Many of the unassigned peaks are located in the Gly region of the spectrum or are part of multiple overlapping peaks, making the assignment ambiguous. Figure 7 shows HSQC spectra of wt and QM side by side. To demonstrate that the structural ensemble was unaltered, we compared the experimentally obtained assignments with random-coil predictions for the respective sequences using the POTENCI standard (Nielsen & Mulder, 2018) to obtain secondary chemical shifts (Δδ = δ_exp − δ_POTENCI). Figure S6 shows these results and establishes that the conformational ensembles are not detectably altered by the gatekeeper mutations.

| Gatekeeping residues affect the formation of transiently bound states

While cryo-EM has emerged as the leading technique for solving the structure of mature fibrils, information on intermediate species and the disordered state is still required to elucidate the mechanism of fibril formation. Nuclear magnetic resonance (NMR) remains the preferred technique for such experiments and has previously been used to probe the structure of CsgA both in its fibril and in its disordered state (Sewell et al., 2020; Shewmaker et al., 2009). Dark-state Exchange Saturation Transfer (DEST) serves as a powerful tool in solution NMR studies, allowing us to characterize interactions between an "NMR-visible" species and an "NMR-invisible" species transiently bound to a massive (>1 MDa) macromolecule. The bound state experiences significantly slower tumbling, making it invisible to traditional solution NMR methods. DEST leverages this difference in motional freedom to provide valuable insights into such interactions. Here, we employed DEST to probe the transient binding of monomeric CsgA proteins to their corresponding fibrils (Fawzi et al., 2012). Based on the findings of the previously described experiments, we hypothesized that the introduced gatekeeper residues would decrease the monomer-fibril association and that this decrease would correlate with the number of gatekeepers.

Since the high molecular weight of fibrils leads to rapid decay of their transverse magnetization due to large R2 (transverse relaxation) rates, they remain invisible in NMR spectra, in contrast to the freely tumbling monomers. Consequently, as monomers associate with and dissociate from fibrils in solution, their R2 rates increase, resulting in an overall increase in the observed R2 rate of the CsgA monomers. The magnitude of this transverse relaxation rate increase in the presence of fibrils directly reflects the association rate between monomers and fibrils (Fawzi et al., 2010). In the first experiment, we allowed monomeric wt CsgA to form fibrils over the course of 4 days, measuring the signal decay from fibril formation (Figure S7a). After 4 days of fibrillation, the transverse relaxation rate of the remaining CsgA monomers was measured as 13.4 ± 0.4 s⁻¹ (Figure S7b). During the 4-day fibrillation process, the transverse relaxation rate was likewise measured at various time points and no change in the relaxation rate between these time points was observed, indicating that fibrils had formed from the start of the experiment, producing a dark state through the interaction with monomeric CsgA. To probe the change in relaxation rate induced by fibril-monomer association, a sample with little to no interaction between fibrils and monomers was required. The previous experiment indicated that wt CsgA produces small fibrils as soon as it has been buffer-exchanged out of the GnHCl solution and was thus not suitable. Instead, a similar experiment was performed using CsgA QM, and the signal decay and transverse relaxation rate were followed over a 4-day period (Figure S7a). Similar to the wt CsgA measurements, the transverse relaxation rate did not change over the course of the experiment, but no loss in signal was detected, indicating that no fibrils had formed.
We hypothesized that the transverse relaxation rate of CsgA QM was unaffected by monomer-fibril interactions. To confirm this hypothesis, we measured the R2 rate of CsgA QM in the presence of 5% pre-formed fibrils and observed no difference in the R2 rate (Figure S7c). The final measurement of the CsgA QM sample produced a transverse relaxation rate of 9.1 ± 0.2 s⁻¹ (Figure S7b). Assuming that the R1 relaxation (3 s⁻¹) and R2 relaxation (9.1 s⁻¹; Figure S7b) of CsgA QM accurately represent all monomeric CsgA proteins in solution, and further assuming that cross-relaxation is significant in the fibril-bound state (Fawzi et al., 2010), an estimate of the apparent first-order association rate constant k_on^app could be calculated by simply subtracting the R2 rate of the CsgA QM experiment from that of the wt CsgA experiment, yielding a value of k_on^app = 4.3 s⁻¹ and indicating significant exchange between monomers and fibrils of wt CsgA. Similar experiments were conducted using CsgA DM and TM monomers (CsgA TM required the addition of 5% pre-formed fibrils due to slow fibrillation), and the resulting observed saturation profiles (Figure 8) were fit using R2_dark = 31,000 ± 3000 s⁻¹ and the values provided in Table 1. The k_on^app was assumed to be the same for all CsgA proteins. The results show a clear trend: the fraction of protein in the transiently bound state (calculated from the k_off/k_on^app ratio) significantly decreases with the increasing number of gatekeeping residues in the DM and TM mutants, while QM showed no transiently bound state at all.
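The apparent association rate quoted above comes from a simple difference of observed relaxation rates, and the transiently bound population follows from two-state exchange kinetics. The numbers below reproduce only the arithmetic of that estimate; the k_off value is a placeholder, not a fitted parameter from Table 1.

```python
# Back-of-the-envelope version of the exchange estimate used above:
# k_on_app is the difference between the observed monomer R2 with wt CsgA
# fibrils present and the fibril-free reference (CsgA QM); the transiently
# bound population follows from two-site exchange at steady state.
# k_off below is a placeholder value, not a parameter taken from Table 1.
R2_wt_with_fibrils = 13.4   # s^-1, observed monomer R2 after 4 days (wt CsgA)
R2_reference       = 9.1    # s^-1, fibril-free reference (CsgA QM)
k_off = 100.0               # s^-1, assumed dissociation rate (placeholder)

k_on_app = R2_wt_with_fibrils - R2_reference        # ~4.3 s^-1
# Steady-state two-site exchange: p_bound * k_off = p_free * k_on_app
p_bound = k_on_app / (k_on_app + k_off)

print(f"k_on_app ~ {k_on_app:.1f} s^-1")
print(f"transiently bound fraction ~ {p_bound:.3f} (for the assumed k_off)")
```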
| DISCUSSION

Gatekeeper residues (e.g., charged and β-breaker residues) have long been known to play an important role in suppressing misfolding of globular proteins by destabilizing alternative folded states such as amyloid (Beerten, Jonckheere, et al., 2012; Ganesan et al., 2016; Otzen et al., 2000; Rousseau et al., 2006). Gatekeeper residues are likewise found in functional amyloid proteins, where they seem to play an analogous role, delaying aggregation until the protein reaches its destination. wt CsgA contains gatekeeper residues in repeats R2-R4, whose removal by mutation significantly accelerated fibrillation (Wang et al., 2010). In the present study, we set out to determine how the fibrillation mechanism would change when new Asp gatekeeper residues were introduced into R1 and R5. The gatekeepers were introduced in a stepwise fashion to investigate how each contributed to the inhibition of fibril formation and whether their effect was additive. The introduction of two Asp residues in CsgA DM resulted in a significant reduction in primary nucleation and elongation rates, while fibrillation was completely inhibited for CsgA TM and CsgA QM. Thus, the introduction of multiple copies of a single type of mutation (X → Asp) was sufficient to completely eliminate spontaneous fibrillation of CsgA. Interestingly, CsgA TM and CsgA QM were still able to form fibrils in the presence of preformed seeds or a catalytic surface, suggesting that the introduced gatekeeper residues mainly affected fibril nucleation and affected elongation only to a smaller degree. As the HSQC NMR spectra are virtually unchanged by the introduction of gatekeeper residues, and the analysis of secondary chemical shifts did not show significant differences, these data suggest that the productive states for nucleation are present only in minute amounts at equilibrium. Changes in concentration and electrostatics (pH, ionic strength) may strongly modulate self-association, albeit with undetectable signatures on the monomeric species. The small populations of these productive species cause the aggregation process to be slow, while at the same time highly susceptible to its milieu (vide infra).

| Single-fibril tracking by TIRF microscopy directly illustrates slower elongation rates of gatekeeper mutants

TIRF microscopy (TIRFm) has traditionally been used to characterize fibril structure, amyloid branching, and growth fingerprinting (Andersen et al., 2009). Recent advances in machine learning for data analysis have sparked new interest in the tracking of fibrillar structures (Bender et al., 2024). To probe the degree of elongation inhibition, we utilized TIRFm to track the growth of individual wt CsgA and CsgA DM fibrils. Since the CsgA DM elongation rate is almost 5 times slower than that of wt CsgA, this result highlights specifically how elongation is affected by the introduction of gatekeeper residues. The experiment likewise serves as an example of how TIRFm can be utilized as a powerful tool for amyloid fibril tracking and growth measurement at the single-particle level. Using basic image manipulation tools, a robust measurement of the fibril growth rate can be obtained in a very short time span.

| Gatekeeper residues tune fibrillation kinetics but not fibrillation stability

Aggregation in the presence of the mild detergent CHAPS showed a small but significant increase in the lag phase of CsgA QM compared to TM and DM. Thus, each new Asp added to CsgA increases the barrier to fibrillation. This clear correlation between the number of introduced gatekeeper residues and reduced amyloidogenicity suggests that evolution can utilize the number of gatekeeper residues to fine-tune the amyloidogenicity of functional amyloids and the stability of aggregation-prone regions in globular proteins. For CsgA, these results strongly suggest that the number and placement of the native gatekeeper residues are optimized so that the protein remains disordered for the time required to reach the nucleator protein CsgB on the outer membrane of the cell. This ensures correct and controlled aggregation and minimizes the otherwise toxic effects of runaway aggregation. Surprisingly, TEM imaging demonstrated that mature fibrils from all four CsgA proteins, despite their different aggregation kinetics, were visually indistinguishable. Not only were the fibrils visually similar, but they also had near-identical stability towards formic acid dissolution, suggesting that once formed, the fibrils retain both structure and stability. This further supports the idea that gatekeeper residues mainly affect the nucleation phase of the aggregation mechanism while preserving the amyloid fold. From an evolutionary point of view, this behavior seems highly beneficial to organisms with functional amyloids since it preserves the function of the amyloid fold but delays or even eliminates aggregation until initiated by external nucleators.
| DEST NMR studies show a reduction in the degree of transient binding to fibrils caused by gatekeeper residues

An analysis of the AlphaFold model of CsgA QM found that all four introduced gatekeeper residues point outwards from the amyloid core (Figure 9). This could explain their minimal effect on fibril structure and stability while still affecting nucleation. It is likely that the association between two or more monomers during fibril nucleation is weakened by electrostatic repulsion from the additional Asp residues. This is further supported by the significant decrease in the population of transiently bound states observed in the DM and TM mutants compared to the wild-type protein through DEST experiments (Table 1). The DEST method has previously been used to observe the transiently bound states of Aβ and α-synuclein (Bodner et al., 2009), and transient interactions have been documented as crucial early steps in the nucleation process (Fawzi et al., 2010; Karamanos et al., 2014). Given the observed decrease in CsgA's transiently bound population with gatekeeper mutations, we propose that these interactions play a role in the nucleation and/or growth of CsgA fibrils. While the DEST experiments do not provide any information on the location of the interactions, the monomers could be associating either with the fibril surface or with the growing ends of fibrils; it seems clear that prolonged monomer-fibril association is crucial to fibril nucleation and/or fibril growth. The introduction of additional charges from the gatekeeper residues in the CsgA mutants may be the cause of the disruption of this interaction, ultimately resulting in lower amyloidogenicity.

FIGURE 9 AlphaFold structure of CsgA QM shows that it adopts a β-hairpin very similar to wt CsgA with five repeats. The four Asp gatekeeper residues introduced in CsgA QM (red) all point outwards from the amyloid core. This might explain their lack of influence on the stability and structure of the mature fibrils while still affecting initial nucleation. For clarity, the His-tag has been removed.

The DEST experiments also highlight the potential of using CsgA QM as a substitute for wt CsgA in prolonged NMR experiments that are otherwise complicated by rapid fibril formation at elevated concentrations. CsgA QM could be an excellent candidate for studies of the monomeric structure, its interactions with the remaining curli proteins, and even structural analysis of the early oligomeric species that may be formed prior to nucleation. Additionally, the use of gatekeeper residues could be an invaluable tool in the development of nanomaterials based on amyloid fibrils (Das et al., 2018; Gilbert & Ellis, 2019; Hauser et al., 2014; Mankar et al., 2011) in situations where the fibrillation kinetics but not the stability need to be modulated.
FIGURE 1 Asp gatekeeper residues originally only present in R3 were introduced into R1 and R5 through site-directed mutagenesis. (a) Alignment of the five repeats of CsgA responsible for forming the amyloid core. Asp gatekeeper residues of R3 are highlighted in green and residues targeted for site-directed mutagenesis in R1 and R5 are highlighted in yellow. The red sequence above the five repeats is the consensus sequence obtained by comparison of >40,000 different CsgA sequences (Sleutel et al., 2023), where Ψ denotes hydrophobic residues facing into the core of the amyloid structure, alternating with surface-exposed ($) residues. (b) Residue changes found in CsgA DM, TM, and QM. (c) The amount of purified recombinant CsgA produced per liter of culture increases significantly with mutation. (d) SDS-PAGE gel highlighting the difference in migration behavior of wt CsgA, DM, TM, and QM.

FIGURE 2 CsgA fibrillation propensity decreases as more Asp gatekeeper residues are introduced. All ThT assay experiments were performed in low-binding 96-well plates. ThT fluorescence traces of fibrillation of wt CsgA, CsgA DM, CsgA TM, and CsgA QM. Error bars indicate 95% standard error of the mean (SEM) with four technical replicates of each sample.

FIGURE 3 The addition of preformed fibril seeds accelerates the polymerization of all CsgA variants. Relatively high amounts of fibrils are required to observe significant effects. All polymerization experiments were performed at 7.2 μM total monomeric equivalent concentration with the indicated amounts of preformed fibril seeds. Experiments were performed in low-binding 96-well plates. ThT fluorescence traces of self-seeded polymerization of wt CsgA, CsgA DM, CsgA TM, and CsgA QM. Error bars indicate 95% standard error of the mean (SEM) with four technical replicates of each sample.

FIGURE 4 Despite different polymerization rates, all four CsgA variants appear visually similar and have the same association tendencies. The mature fibrils likewise have near-identical stability towards formic acid dissolution. (a) Transmission electron microscopy (TEM) images of uranyl formate-stained mature fibrils from the four CsgA variants (scale bar, 500 nm). (b) SDS-PAGE bands of samples taken from the formic acid dissolution assay. Formic acid concentration is indicated in % v/v. All CsgA variants required 55% formic acid before dissolution was visible, while the majority of the fibril mass was dissolved at 65% formic acid, resulting in strikingly similar FA50 values. (c) The intensity of each SDS-PAGE band was measured and normalized to the intensity of the 98% band for each CsgA variant. The resulting data were plotted as a function of formic acid concentration and modeled with a formic acid dissolution model to determine FA50, mFA, and ΔG values. (d) FA50, mFA, and ΔG values determined from the formic acid dissolution model.
FIGURE 5 The mild detergent CHAPS interacts with wt CsgA and DM to produce distinct transitions in the polymerization profile and produces fibrils with a reduced tendency to form clumps. All ThT aggregation assays were performed in non-treated polystyrene 96-well plates. (a) ThT fluorescence traces of 7.4 μM wt CsgA polymerization in the presence of CHAPS. Distinct transitions in the polymerization profile occur at 250 and 1000 μM CHAPS. (b) ThT fluorescence traces of 7.4 μM CsgA DM polymerization in the presence of CHAPS. The same distinct transitions are present but occur at 50 and 250 μM instead. (c, d) ThT fluorescence traces of 7.4 μM CsgA TM and QM polymerization in the presence of CHAPS. Only one transition is visible, at 50 μM. A small but significant difference in lag phase is visible between TM and QM at 250 μM CHAPS, suggesting QM to be the most inhibited variant. (e) TEM images of wt CsgA fibrils formed in the absence (left) and presence (right) of 10 mM CHAPS. The presence of CHAPS during the fibrillation process produces mature fibrils that are significantly more dispersed compared with wt CsgA without CHAPS (scale bar, 500 nm).

FIGURE 6 TIRF microscopy allows accurate determination of the growth rate of individual fibrils for wt CsgA and CsgA DM. (a) Example time series of the growth of wt CsgA and CsgA DM fibrils, showing real-time tracking of individual fibrils, giving accurate growth rates determined at the single-fibril level. Scale bar is 5 μm. (b) Growth rate determined for 20 wt CsgA fibrils (blue) with an average growth rate of 1.86 nm/s and 20 CsgA DM fibrils (red) with an average growth rate of 0.39 nm/s. Error bars indicate 95% standard error of the mean (SEM).

FIGURE 7 ¹⁵N-¹H HSQC spectrum of 250 μM CsgA QM in 20 mM Na2HPO4, pH 7.2, 10°C, recorded at 950 MHz (red peaks, this study) and wt CsgA (blue crosses, from Sewell et al., 2020). Note that the wt CsgA assignments are numbered according to the full-length CsgA sequence, including the 20-residue signal peptide, while our assignments are based on mature CsgA.

FIGURE 8 Attenuation (I(ω)/I0) of the integrated intensity of monomeric CsgA proteins by transfer of saturation from the transient fibril-bound ("dark") state following application of off-resonance radio-frequency (ω) fields, as a function of offset from the water resonance, at field strengths of 180 Hz (black triangles) and 350 Hz (red squares). The data from both field strengths were simultaneously fit to a two-site exchange model using the Bloch-McConnell equations to obtain dissociation rates reflecting the population of CsgA in the transiently bound state.

TABLE 1 Fitted parameters for the observed saturation profiles in Figure 8.
12,082
2024-09-20T00:00:00.000
[ "Biology", "Chemistry", "Materials Science" ]
Towards Thermodynamics with Generalized Uncertainty Principle

Various frameworks of quantum gravity predict a modification of the Heisenberg uncertainty principle into a so-called generalized uncertainty principle (GUP). Introducing the quantum gravity effect makes a considerable change in the density of states inside the volume of the phase space, which changes the statistical and thermodynamical properties of any physical system. In this paper we investigate the modification of the thermodynamic properties of ideal gases and of the photon gas. The partition function is calculated, and using it we find a considerable growth in the thermodynamical functions of the considered systems. This growth may arise from an additional repulsive force between the constituents of the gases, which may be due to the existence of the GUP, hence predicting a considerable increase in the entropy of the system. Besides, by applying the GUP to an ideal gas in a potential trap, it is found that the GUP implies a minimum measurable value of the thermal wavelength of the particles, which agrees with the discrete nature of space that has been derived from the GUP in previous studies.

Introduction

One of the intriguing predictions of various frameworks of quantum gravity, such as string theory and black hole physics, is the existence of a minimum measurable length. This has given rise to the so-called generalized uncertainty principle (GUP) or, equivalently, modified commutation relations between position coordinates and momenta [1-8]. This can be understood in the context of string theory, since strings cannot interact at distances smaller than their size [9]. The GUP is represented by the following form [10-12]:

Δx_i Δp_i ≥ (ℏ/2) [1 + β((Δp)² + ⟨p⟩²) + 2β((Δp_i)² + ⟨p_i⟩²)],  (1)

where p² = Σ_j p_j p_j, β = β₀/(M_Pl c)² = β₀(ℓ_Pl²/ℏ²), M_Pl is the Planck mass, and M_Pl c² is the Planck energy. Inequality (1) is equivalent to the following modified Heisenberg algebra [10]:

[x_i, p_j] = iℏ (δ_ij + β δ_ij p² + 2β p_i p_j).  (2)
This form ensures, via the Jacobi identity, that [x_i, x_j] = 0 = [p_i, p_j] [11]. Recently, a new form of the GUP was proposed in [13,14], which predicts a maximum observable momentum besides the existence of a minimal measurable length. This new form was constructed to be consistent with doubly special relativity (DSR) theories [15], string theory, and black hole physics [1-7]. It ensures [x_i, x_j] = 0 = [p_i, p_j] via the Jacobi identity. This new form of the GUP is given as follows:

[x_i, p_j] = iℏ [δ_ij − α(p δ_ij + p_i p_j / p) + α²(p² δ_ij + 3 p_i p_j)],  (3)

where α = α₀/(M_Pl c) = α₀ ℓ_Pl/ℏ, M_Pl is the Planck mass, ℓ_Pl is the Planck length, and M_Pl c² is the Planck energy. The most important consequence of this model is the discreteness of space, which implies that all measurable lengths are quantized in units of a fundamental minimum measurable length [13,14]. As a result, according to (3), the GUP modifies the physical momentum [13,14,16]:

p_i = p_0i (1 − α p_0 + 2α² p_0²),   x_i = x_0i,  (4)

where p_0² = Σ_j p_0j p_0j and x_0i, p_0j satisfy the canonical commutation relations. It is then natural to expect that this results in considerable modifications of the dispersion relation [17,18], which would definitely affect a host of quantum phenomena. In a series of earlier papers, various applications of the new GUP model were investigated in atomic physics, condensed matter physics, the preheating phase of the universe, and black holes at the LHC [16-29]. Also, one of the authors investigated its effect on the Liouville theorem in statistical mechanics [25]. According to the Liouville theorem, the number of states inside a volume of phase space should not change with time evolution in the presence of the GUP; the GUP therefore modifies the density of states, and this has essential effects on the statistical and thermodynamic properties of any physical system. According to [25], the number of quantum states per momentum-space volume is modified as given in (6); for more details about the derivation of this result, we suggest consulting [25]. The upper bounds on the GUP parameter have been derived in [16]. It was suggested in [30,31] that these bounds could be measured using quantum optics techniques and gravitational wave techniques. Recently, Bekenstein [32,33] proposed that quantum gravitational effects could be tested experimentally; he suggests "a tabletop experiment which, given the state-of-the-art ultrahigh vacuum and cryogenic technology, could already be sensitive enough to detect Planck scale signals" [32]. This would put several quantum gravity predictions to the test in the laboratory [30,31]. Definitely, this is considered a milestone on the road of quantum gravity phenomenology.
It appears that the total number of microstates is increased due to the GUP correction. It is worth mentioning that there are other theories that predict other forms of modified dispersion relations. These models were introduced in [34-37] and suggest a violation of Lorentz symmetry at some energy scale, which could be the quantum gravity scale. Their thermodynamical implications have been studied in [34,35]. Besides, there are other approaches that suggest the existence of an observer-independent scale, which could be the Planck energy scale, and this has been formulated as the so-called doubly special relativity (DSR) [38-41]. We can see that DSR is a generalization of special relativity. DSR transformations have several approaches; one of the important approaches is considered in [42,43]. The interesting thing about DSR is that it does not violate Lorentz symmetry and the basic postulates of special relativity are still satisfied, but it introduces an upper limit on energy. In this approach, the dispersion relation for massless particles does not change, modifications take place only for massive particles, and according to [44,45] there is no change in the density of states. However, there remains a considerable change in the partition function due to the presence of an upper bound on the energy of particles.

In this paper we continue investigating the modification of thermodynamics, but this time in the presence of the GUP proposed by Ali et al. [13,14,16]. We consider two cases: the ideal gas and the photon gas. It is worth mentioning that a different version of the GUP has been studied for the ideal gas in [46,47]. The derivation of the expression for the partition function in the presence of the GUP is the most crucial result of our paper. We calculate analytic expressions for the pressure, internal energy, entropy, and specific heat of the considered systems, and we show the difference between the GUP case and the standard case. Our results reduce to the standard forms when we set α = 0.
Ideal Gas

Consider N noninteracting particles, obeying Maxwell-Boltzmann statistics and confined in a volume V at temperature T. For a canonical ensemble, the thermodynamics of this system can be derived from the total partition function [48,49], for which we have considered classical Maxwell-Boltzmann statistics along with the Gibbs factor 1/N!, and where the single-particle partition function z₁ is given by a phase-space integral of exp(−βE), with β = 1/(k_B T) and E the total energy. The integral over the coordinates yields the volume of the gas, and (8) can be rewritten in terms of E and m, the total relativistic energy and the rest mass of the gas particles.

To calculate the quantum gravitational effect, the GUP can be incorporated in the phase-space analysis in two equivalent pictures: (a) considering deformed commutation relations (i.e., a deformed measure of integration) together with the non-deformed Hamiltonian function, or (b) using canonical variables on the GUP-corrected phase space which satisfy the standard commutative algebra (i.e., the non-deformed standard measure of integration), but with the Hamiltonian function now deformed. These two pictures are related to each other by the Darboux theorem, by which it is always possible to find canonical coordinates on the symplectic manifold which satisfy the standard Heisenberg algebra (we thank the referee for drawing our attention to this important point). In this paper we consider the deformed measure of integration with the non-deformed Hamiltonian function. According to (6), the single-particle partition function (9) then acquires a GUP-corrected measure. In order to solve this integral, a suitable substitution is used and the result is expanded to first order in α. As one might expect, the approximation used so far breaks down around a maximum measurable energy (∼1/α), and an exact treatment would be needed to describe the solution around this scale. Therefore, one can only trust a perturbative solution where α/β ≪ 1 (we thank the referee for pointing out this important note). The remaining integral can be evaluated in closed form (14) in terms of b = βmc² and the second-order modified Bessel function K₂(b) [50]. Setting α = 0, the partition function reduces to the usual case [49]. In the nonrelativistic limit, b → ∞, the thermal energy of the particles is very small compared to the rest mass and K₂(b) ≈ √(π/(2b)) e^(−b). Analogously, at very high temperature, b → 0, which corresponds to ultra-relativistic particles whose thermal energy is much greater than the rest mass, the modified Bessel function behaves like K₂(b) ≈ 2/b², and (14) takes the corresponding limiting form. It should be noted that as α → 0 we recover the results of special-relativistic thermodynamics [49]. The partition function relates the microscopic properties to the thermodynamic (macroscopic) behavior of physical systems. Using the expression for the partition function, we now study various thermodynamic quantities of the ideal gas with GUP effects.
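The two limits quoted above can be checked numerically. The short sketch below (an illustration only, not part of the original analysis) compares K₂(b) from scipy against its nonrelativistic (b → ∞) and ultra-relativistic (b → 0) asymptotic forms.

```python
# Numerical check of the limiting forms of the second-order modified Bessel
# function used for the relativistic single-particle partition function:
#   nonrelativistic limit:    K2(b) ~ sqrt(pi/(2b)) * exp(-b)   for b -> infinity
#   ultra-relativistic limit: K2(b) ~ 2/b**2                    for b -> 0
import numpy as np
from scipy.special import kn

for b in (20.0, 50.0):            # b = beta*m*c^2 >> 1 (cold / heavy)
    exact = kn(2, b)
    approx = np.sqrt(np.pi / (2 * b)) * np.exp(-b)
    print(f"b={b:5.1f}  K2={exact:.3e}  nonrel approx={approx:.3e}")

for b in (0.05, 0.01):            # b << 1 (hot / ultra-relativistic)
    exact = kn(2, b)
    approx = 2.0 / b**2
    print(f"b={b:5.2f}  K2={exact:.3e}  ultrarel approx={approx:.3e}")
```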
The free energy of the system is defined as F = −k_B T ln Z(N, V, T). Using Stirling's formula, ln N! ≈ N ln N − N, the free energy of the ideal gas is given by (18), with the abbreviation appearing in it defined in (19). Using the expression (18) for the free energy, the pressure can be obtained from the relation P = −(∂F/∂V)_{T,N}. We obtain the same result as in the relativistic case, which means that the pressure does not change when the GUP is taken into account. The chemical potential is given by the relation μ = (∂F/∂N)_{T,V}. We can evaluate the entropy from the free energy via the relation S = −(∂F/∂T)_{V,N}. By setting α = 0, we recover the usual results. We see that the new terms, due to quantum gravity, carry a positive sign. This indicates that the modified entropy grows faster with increasing temperature than in the standard case. The elevation in the number of accessible microstates in the high-momentum regime, i.e., at the Planck scale, leads to an increase in the corresponding microcanonical entropy of the system. This may be interpreted as the GUP implying an additional repulsive force between the constituents of the gas, which leads to an increase in the entropy of the system. A modification of the internal energy is also expected, owing to the change in the free energy. It can be evaluated from the relation U = F + TS. If we put α = 0, we obtain the standard special-relativistic expression for the internal energy. Without the quantum gravity effect, βmc² ≪ 1 yields the ultra-relativistic limit U = 3Nk_BT, and βmc² → ∞ yields U = (3/2)Nk_BT. Figure 1 shows the relativistic internal energy compared with the modified one (solid curve).

It is observable from this figure that the GUP-modified internal energy grows faster than in the relativistic case. At high temperature, i.e., in the ultra-relativistic regime, the modified internal energy becomes greater than the asymptotic value. The increase in entropy due to quantum gravity leads to an increase in the internal energy, as would be expected. Neither the entropy nor the internal energy is a directly measurable quantity; we can, however, detect the effect of the GUP through other thermodynamic quantities. The specific heat can be measured experimentally. It is defined by C_V = (∂U/∂T)_{V,N}. It is straightforward to see from (25) that we recover the usual specific heat for α = 0. In Figure 2 the dashed curves represent the usual relativistic specific heat; it grows from the nonrelativistic limit C_V = (3/2)Nk_B up to the ultra-relativistic limit C_V = 3Nk_B. The solid curve represents the modified specific heat. It is clear that the new specific heat grows faster than the relativistic one with increasing temperature. We can see from Figure 2 that in the ultra-relativistic range the modified specific heat is greater than the corresponding relativistic one and approaches a constant value. This analysis applies to ideal gas particles that move freely at relatively small pressure. If, however, the motion of the gas particles is restricted to a small volume, the particles may be considered to be in a potential trap. We therefore move on to study the effect of quantum gravity on an ideal gas in a potential trap.
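The standard (α = 0) limits quoted above can also be reproduced numerically from the partition function itself. The sketch below is an illustration for the unmodified relativistic gas only: it differentiates ln z ∝ −ln β + ln K₂(βmc²) numerically and checks that the specific heat per particle rises from 3/2 to 3 as the temperature crosses the rest-mass scale; it contains no GUP correction.

```python
# Numerical sketch (alpha = 0 case): internal energy and specific heat of a
# relativistic Maxwell-Boltzmann gas from ln z ~ -ln(beta) + ln K2(beta*m*c^2),
# beta-independent constants dropped. U/N = -d ln z / d beta, C_V = dU/dT.
# Units: k_B = c = m = 1, so b = 1/T. Illustrative only.
import numpy as np
from scipy.special import kn

def ln_z(beta):
    return -np.log(beta) + np.log(kn(2, beta))   # m = c = 1

def u_per_particle(T, h=1e-6):
    beta = 1.0 / T
    return -(ln_z(beta + h) - ln_z(beta - h)) / (2 * h)

for T in (0.05, 0.2, 1.0, 5.0, 50.0):
    dT = 1e-4 * T
    cv = (u_per_particle(T + dT) - u_per_particle(T - dT)) / (2 * dT)
    print(f"T={T:6.2f}  U/N={u_per_particle(T):9.3f}  C_V/(N k_B)={cv:5.3f}")
# C_V/(N k_B) rises from ~1.5 at T << m c^2 to ~3 at T >> m c^2.
```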
An Ideal Gas in a Trap

Let us consider an ideal gas contained by some potential well acting as a trap. The trap may be realized by the wall of a container, the wall representing the positions of classical turning points; it may also be realized by surrounding the particle at very high density and pressure. Quantum mechanically, the particle has a small probability to tunnel past the turning point, so the linear size l of the potential trap has an uncertainty Δl due to quantum fluctuations, which results in a corresponding uncertainty in the volume. We can define the pressure of the gas by a kinetic relation in terms of the gas density, the gas particle velocity, and the number of collisions per unit time per unit area. According to the GUP, if we consider ⟨p⟩ ≈ Δp, the uncertainty in first-order approximation follows, and (27) can be used to relate the uncertainties in pressure and length. The mean energy of a gas particle is of order ∼k_BT, so the velocity of the particle can be taken as v = √(k_BT/m), and equation (29) then takes a simpler form. Defining the thermal wavelength of the particle as λ = h/√(2πmk_BT), the uncertainty relation follows, and since the volume V ∼ l³ we arrive at (32). This equation represents the generalized thermodynamic uncertainty principle for an ideal gas in a potential trap. It ensures that when the size of the potential trap is comparable with the thermal wavelength, the pressure and the volume cannot be treated as commuting observables; they then act as quantum mechanical operators on wave functions in a quantum mechanical description. This means that we cannot build the equation of state from a simple relation among P, V, and T, but need a wave equation consistent with the new thermodynamical GUP. Since αℏ = α₀ℓ_Pl, (32) is satisfied when

λ ≥ α₀ℓ_Pl.  (33)

This inequality shows that the thermal wavelength of the particles should be greater than or equal to the minimum length, and thus naturally implies a minimum measurable value of the thermal wavelength of the particles. This result agrees with the discrete nature of space that has been derived from the GUP in [13,14]. Besides, this result motivates us to suggest that the thermal energy of a particle should not exceed a maximum energy, as expressed by (34). Furthermore, these derived inequalities show that the dynamics of the gas particle is not free but restricted by conditions derived from the GUP approach. The appearance of a minimal measurable length and a maximum measurable momentum or energy in (33) and (34) is expected in this quantum gravity approach as a restriction on observable measurements. Finally, (32) shows that when the trap size is much greater than the thermal wavelength, the thermodynamic uncertainty relation breaks down and we return to the usual observables.
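As a rough numerical illustration of the bound λ ≥ α₀ℓ_Pl, the sketch below evaluates the temperature at which the thermal wavelength of a proton-mass particle would reach the minimum length. The choice α₀ = 1, the proton mass, and the use of λ = h/√(2πmk_BT) are assumptions made only for this estimate.

```python
# Rough numerical illustration (not from the original paper): temperature at
# which the thermal de Broglie wavelength lambda = h / sqrt(2*pi*m*k_B*T)
# drops to the minimum length alpha0 * l_Planck. alpha0 = 1 is an assumed value.
import math

h   = 6.62607015e-34      # J s
kB  = 1.380649e-23        # J / K
lP  = 1.616255e-35        # m, Planck length
m   = 1.67262192e-27      # kg, proton mass (illustrative choice)
alpha0 = 1.0              # assumed O(1) GUP parameter

lambda_min = alpha0 * lP
# lambda(T) = h / sqrt(2*pi*m*kB*T)  =>  T_max = h**2 / (2*pi*m*kB*lambda_min**2)
T_max = h**2 / (2 * math.pi * m * kB * lambda_min**2)
print(f"lambda reaches {lambda_min:.2e} m at T ~ {T_max:.2e} K")
# For alpha0 ~ 1 this temperature lies far beyond any laboratory scale,
# consistent with the bound becoming relevant only near the Planck regime.
```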
Photon Gas

In this section we aim to determine the thermal properties of the radiation field in the presence of the GUP. With quantum gravity effects, the photon obeys a modified dispersion relation. Since photons are gauge bosons with spin 1 and only two possible orientations, the degeneracy factor is g = 2. Also, the number of photons is not conserved and there is no constraint on the photon number, so the canonical partition function, according to Bose-Einstein statistics, can be evaluated in closed form in terms of the Riemann zeta function ζ. The free energy follows as in (38), where σ = π²k_B⁴/(60ℏ³c²) is the Stefan-Boltzmann constant. The free energy can be rewritten in a standard form in which a temperature-dependent Stefan-Boltzmann constant appears. Using the partition function, we determine all the thermodynamic properties of the photon gas. Entropy and internal energy density, respectively, are given by (41) and (42). It is clear from these two expressions that the entropy and internal energy grow faster with temperature than the corresponding special-relativity results. This modification is expected, since the total number of available microstates of the system is a direct measure of the entropy as well as of the internal energy. The pressure of the radiation is given by (43). This is an interesting result, unlike the case of the ideal gas, for which the pressure does not change at the Planck scale; introducing the Bose-Einstein statistical distribution makes the quantum gravity effect appear as an increase in pressure. Indeed, the increase in the repulsive force among the gas particles entails the increase in pressure. The specific heat is a suitable measurable quantity that can be detected experimentally; the specific heat of the photon gas follows by differentiating the internal energy with respect to temperature. As a consequence of the elevation in entropy and internal energy, the specific heat is increased in comparison with the special-relativistic one. In order to obtain the pressure-energy density relation (44), one divides the pressure by the energy density (42); the usual relation u = 3P is recovered after taking α = 0. To obtain an equation of state, one has to write the temperature as a power series in the energy density. Solving (41) and keeping only the first order of approximation in α, one obtains (45). Substituting (45) into (44), one obtains the first-order approximation to the equation of state, (46), which represents a modified equation of state due to the generalized uncertainty principle.
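As a quick sanity check on the constant quoted above, the snippet below evaluates σ = π²k_B⁴/(60ℏ³c²) from CODATA values; it is a numerical check only and plays no role in the GUP corrections.

```python
# Check that sigma = pi^2 k_B^4 / (60 hbar^3 c^2) reproduces the familiar
# Stefan-Boltzmann constant ~5.670e-8 W m^-2 K^-4.
import math

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J / K
c    = 2.99792458e8      # m / s

sigma = math.pi**2 * kB**4 / (60 * hbar**3 * c**2)
print(f"sigma = {sigma:.4e} W m^-2 K^-4")   # expect ~5.670e-08
```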
Conclusions

Various approaches to quantum gravity, such as string theory, black hole physics, and doubly special relativity, predict a considerable modification of the Heisenberg uncertainty principle into a generalized uncertainty principle. This modification leads to a change in the energy-momentum dispersion relation and in the physical phase space, and hence to a considerable enhancement in the number of accessible microscopic states of the phase-space volume with GUP effects. In this paper we investigated the effect of the GUP on the thermodynamic properties of ideal and photon gases. An analytical expression of the partition function for a massive ideal gas and for a photon gas was derived. Using the modified partition function, we determined thermodynamic functions such as the free energy, entropy, pressure, internal energy, and specific heat. We found that there is a considerable increase in these quantities in comparison with the corresponding relativistic quantities. This is due to the increase in accessible microscopic states in the phase space, which leads to an increase in the entropy that characterizes the physical properties of the system; this in turn leads to an increase in the thermodynamic properties of the system. The pressure of the ideal gas does not change, but the pressure of the photon gas is increased. Another problem considered is the effect of the GUP on a gas contained in a potential trap. When the size of the potential trap is comparable with the thermal wavelength of the particles, the pressure and the volume cannot be treated as commuting observables. Quantum gravity puts some restrictions on the particle dynamics: we found that the thermal wavelength of the particles should be greater than, or equal to, a minimum length, which agrees with the discrete nature of space derived from the GUP in previous studies. Besides, it is found that the thermal energy of a particle of the ideal gas should not exceed a maximum energy. The results obtained in this paper could be useful for studying the effect of the GUP in astrophysical objects such as the standard model of stars (photons plus a nonrelativistic ideal gas) [51], white dwarfs (degenerate electron gas) [51], and neutron stars (Oppenheimer-Volkoff model: degenerate neutron gas) [52,53]. We hope to report on these in the future.

Figure 1: The dashed curves represent the unmodified internal energy and its non- and ultrarelativistic limits. The solid curve represents the modified internal energy.

Figure 2: The dashed curves represent the unmodified specific heat and its non- and ultrarelativistic limits. The solid curve represents the modified specific heat.
4,457.6
2014-02-25T00:00:00.000
[ "Physics" ]
Pedestrian load models of footbridges

The increase of vibration problems in modern footbridges shows that footbridges should no longer be designed for static loads only. Not only natural frequencies but also damping properties and pedestrian loading determine the dynamic response of footbridges, and design tools should consider all of these factors. In this paper the pedestrian load models for serviceability verification of footbridges, which are missing in the current European codes, are presented. For reasons of simplicity the proposed pedestrian load models are based on stationary pulsating loads instead of moving pulsating loads. It is shown that this simplified procedure can be used in the verification of the serviceability limit state related to vibration due to pedestrians. Footbridge vibrations do not usually cause structural problems, but if the vibration behaviour does not satisfy the comfort criteria, changes in the design or damping devices can be considered. The most popular external damping devices are viscous dampers and tuned mass dampers (TMD). The efficiency of a TMD is demonstrated on the example of a footbridge prone to vibrations induced by pedestrians. It is shown that if the TMD is tuned quite precisely, the reduction of accelerations can be very significant.

Introduction

Modern footbridges are very often lightweight and flexible structures, where the first natural frequencies of vibration may fall close to the dominant frequencies of the dynamic excitation due to walking or running. Such bridges are susceptible to vertical as well as to horizontal vibrations leading to a resonant response characterized by high levels of vibration, and a dynamic design is necessary.

In this paper, different models of the dynamic loads caused by pedestrians crossing the bridge, which can be used in serviceability verification, are presented. Although footbridge vibrations do not usually cause structural problems, they can induce an uncomfortable sensation, and so many codes establish maximum acceptable values of acceleration. Provided that the vibration behaviour due to the expected pedestrian traffic is checked with dynamic calculations and satisfies the required comfort, any type of footbridge can be designed and constructed. If the vibration behaviour does not satisfy the comfort criteria, changes in the design or damping devices can be considered.

Pedestrian loads

In order to verify the serviceability limit state related to vibration due to pedestrians, it is necessary to define the dynamic pedestrian load. Numerous studies have dealt with the determination of human walking, running or jumping forces over the years, cf., e.g., [1].

The possible loading scenarios can be divided into five categories:
- Single person loading;
- Normal traffic - spatially unrestricted traffic where each individual can move freely without having to change walking pattern to avoid contact with others;
- Crowd loading - spatially restricted traffic where the walking of each individual is restricted due to limited space;
- Group loading - a number of persons walking closely together;
- Vandal loading - a person, or a group of people, tries to excite the structure by moving in a correlated harmonic way in response-sensitive areas.
In addition to these five groups, three different types of human motion are commonly considered to model the dynamic loads applied by pedestrians, namely walking, running and rhythmic jumping. All these load models can be categorized into deterministic and probabilistic models. In this paper only the deterministic models of a single pedestrian, group and crowd loading will be considered.

Two types of analytical force models can be found in the literature: time-domain models (deterministic and probabilistic force models) and frequency-domain models - for a detailed review cf. [1] and [2]. A suitable model of the mutual interaction between human gait and an elastic bridge has been developed in [3].

As an example, the deterministic force model for walking is given. The vertical force component is greater than the horizontal ones, but the lateral and longitudinal horizontal components can also cause vibration-related problems of slender bridges. The frequency of the lateral movement, which occurs as a result of moving the centre of mass from one foot to the other, is equal to half of the step frequency of the vertical or longitudinal movement.

General shapes of the temporal evolution of the pedestrian loads - assuming a perfect periodicity of the force - can be described using appropriate load-time functions for the vertical periodic force Fp,ver(t), lateral periodic force Fp,lat(t) and longitudinal periodic force Fp,long(t):

Fp,ver(t) = G + Σi G ai sin(2π i fp t − φi),   (1)
Fp,lat(t) = 0.05 G sin(2π (fp/2) t),   (2)
Fp,long(t) = 0.20 G sin(2π fp t),   (3)

where G is the weight of the person (usually G = 700 N), fp is the pacing frequency, a1 = 0.4 and a2 = a3 = 0.1 are the Fourier coefficients of the i-th harmonic for the vertical, lateral and longitudinal forces, and φ1 = 0 and φ2 = φ3 = π/2 are the phase shifts of the i-th harmonic contributions. (A short numerical sketch evaluating these load-time functions is given below, after the description of the proposed load models.)

The pacing frequency fp and the pedestrian forward speed vp are two parameters that play a fundamental role in the characterisation of the excitation. The corresponding average values are presented in Table 1 for walking and running. Typical frequency ranges for different human activities are: for walking 1.6-2.4 Hz and 3.5-4.5 Hz (first and second walking harmonics), for running 2.0-3.5 Hz, for jumping 1.8-3.4 Hz and for bouncing 1.5-3.0 Hz. A commonly adopted mean value frequency for running and jumping is 2.5 Hz [10].

Proposed load models

The current European standard for the determination of traffic loads on bridges [4] does not provide load models for serviceability verification due to pedestrians. The Guidelines for the design of footbridges [5] give certain pedestrian load models. The load models are divided into three categories: single pedestrian load model (DLM1), group of pedestrians load model (DLM2) and continuous pedestrian stream load model (DLM3). Instead of pulsating forces in the vertical and lateral directions which move with the speed of 0.9 fp, stationary pulsating forces applied at the most adverse position on the bridge are defined.

DLM1 defines the vertical component Fp,v(t) and the horizontal (lateral) component Fp,h(t) of a single pedestrian.

DLM2 defines the effect of a group of 8-15 persons walking across the bridge by the vertical component Fg,v(t) and the horizontal component Fg,h(t). The effect of synchronisation of the step frequencies and of the phase shift between pedestrians is taken into account by the coefficients kv and kh (Fig. 1).
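Returning to the single-pedestrian load-time functions (1)-(3): the short sketch below evaluates them over a few pacing cycles with the coefficients quoted above (G = 700 N, fp = 2 Hz). It is an illustration of the formulas only and does not reproduce any calculation from the paper.

```python
# Illustrative evaluation of the deterministic single-pedestrian load-time
# functions (1)-(3): vertical force as a 3-harmonic Fourier series, lateral
# force at half the pacing frequency, longitudinal force at the pacing frequency.
import numpy as np

G = 700.0                 # pedestrian weight [N]
fp = 2.0                  # pacing frequency [Hz]
a = [0.4, 0.1, 0.1]       # Fourier coefficients of the 1st-3rd vertical harmonics
phi = [0.0, np.pi / 2, np.pi / 2]   # phase shifts

t = np.linspace(0.0, 2.0, 1000)     # two seconds, i.e. about four steps at 2 Hz

F_ver = G + sum(G * a[i] * np.sin(2 * np.pi * (i + 1) * fp * t - phi[i])
                for i in range(3))
F_lat = 0.05 * G * np.sin(2 * np.pi * (fp / 2) * t)
F_long = 0.20 * G * np.sin(2 * np.pi * fp * t)

print(f"vertical force range:     {F_ver.min():7.1f} ... {F_ver.max():7.1f} N")
print(f"lateral force amplitude:  {abs(F_lat).max():7.1f} N")
print(f"longitudinal amplitude:   {abs(F_long).max():7.1f} N")
```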
Fig. 1. Coefficients kv and kh [5].

The load should be applied in the way that produces the most unfavourable loading case (depending on the mode shape), and a uniformly distributed mass of 400 kg/m² (if unfavourable) should be applied at the same location.

Example - Footbridge in Čelákovice

The footbridge is a cable-stayed structure with three spans of 43.0 + 156.0 + 43.0 meters made of Ultra-High Performance Concrete. The height of the steel pylons is 36 meters (Fig. 2 and Fig. 3). Natural frequencies are summarized in Table 2 and important modes of vibration are shown in Figs. 4 to 6.

Results

Pedestrian loading was modelled using Eq. (6) and Eq. (7) for three principal load cases: a) horizontal excitation with pacing frequency corresponding to the fundamental lateral frequency; b) vertical excitation with pacing frequency corresponding to the fundamental vertical frequency; c) vertical excitation with pacing frequency corresponding to the commonly adopted mean value frequency for walking of 2.0 Hz. The results of the analysis are given in Table 3. It can be seen that for load case c) the accelerations are higher than the limit values taken from Eurocode EN 1990. In such a case, changing the vibration characteristics of the footbridge (natural frequencies) or installing damping devices should be considered.

Footbridge with TMD

To avoid undesirable vibrations of the structure, it is a good idea to install tuned mass dampers (TMDs) on the footbridge to dissipate the energy from one or more modes. A TMD is often a much more economical solution when compared to changing the natural frequencies of the structure.

The theory of how a TMD works, and how to determine its optimal characteristics, is summarized in [7]. With respect to the antisymmetric mode shape with a vertical bending natural frequency of 2.04 Hz (cf. Table 2), two TMDs were installed. As a result of the increased mass of the footbridge with the two TMDs, the corresponding natural frequency changed to the value of 1.83 Hz. The response was calculated for the pacing frequency 1.83 Hz and the results are given in Table 4. (A numerical illustration of one classical TMD tuning rule is given after the table captions below.)

Conclusions

In this paper the pedestrian load models for serviceability verification of footbridges, which are missing in the current European codes, are presented. For reasons of simplicity the proposed pedestrian load models are based on stationary pulsating loads instead of moving pulsating loads. It is shown that this simplified procedure can be used in the verification of the serviceability limit state related to vibration due to pedestrians. Not only natural frequencies but also damping properties and pedestrian loading determine the dynamic response of footbridges, and design tools should consider all of these factors. Footbridge vibrations do not usually cause structural problems, but if the vibration behaviour does not satisfy the comfort criteria, changes in the design or damping devices could be considered. The most popular of these are viscous dampers and TMDs. The efficiency of a TMD is demonstrated on the example of a footbridge prone to vibrations induced by pedestrians. It has been shown that if the TMD is tuned quite precisely (especially its frequency) the reduction of accelerations can be very significant.

Table 2. Footbridge Čelákovice - natural frequencies and modes of vibration.

Table 3. Footbridge Čelákovice - response due to pedestrian loading.

Table 4. Footbridge Čelákovice - response due to pedestrian loading.
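The sketch below illustrates one classical way of choosing TMD parameters, the Den Hartog tuning rules for a harmonically excited single mode. It is not the design procedure of [7], and the modal mass and mass ratio are assumed example values rather than data for the Čelákovice footbridge.

```python
# Classical Den Hartog tuning of a TMD for a single structural mode.
# The modal mass, target frequency and mass ratio below are assumed example
# values, not the actual design data of the Celakovice footbridge.
import math

f_structure = 2.04      # Hz, frequency of the targeted vertical bending mode
m_modal = 1.0e5         # kg, assumed modal mass of that mode
mu = 0.02               # assumed TMD-to-modal-mass ratio

m_tmd = mu * m_modal
# Den Hartog optimum for a harmonically forced main system:
f_tmd = f_structure / (1.0 + mu)                         # optimal TMD frequency
zeta_tmd = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu)**3))   # optimal TMD damping ratio

k_tmd = m_tmd * (2.0 * math.pi * f_tmd)**2                  # spring stiffness
c_tmd = 2.0 * zeta_tmd * m_tmd * (2.0 * math.pi * f_tmd)    # dashpot coefficient

print(f"TMD mass      : {m_tmd:10.1f} kg")
print(f"TMD frequency : {f_tmd:10.3f} Hz")
print(f"TMD damping   : {zeta_tmd:10.3f} (-)")
print(f"k = {k_tmd:.3e} N/m, c = {c_tmd:.3e} N s/m")
```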
2,132.2
2017-01-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Thermal dark matter co-annihilating with a strongly interacting scalar Recently many investigations have considered Majorana dark matter co-annihilating with bound states formed by a strongly interacting scalar field. However only the gluon radiation contribution to bound state formation and dissociation, which at high temperatures is subleading to soft 2->2 scatterings, has been included. Making use of a non-relativistic effective theory framework and solving a plasma-modified Schrodinger equation, we address the effect of soft 2->2 scatterings as well as the thermal dissociation of bound states. We argue that the mass splitting between the Majorana and scalar field has in general both a lower and an upper bound, and that the dark matter mass scale can be pushed at least up to 5...6 TeV. Introduction Negative results from direct and indirect detection experiments and collider searches pose a challenge for many minimal dark matter models. This has led to the construction of less minimal models. In the latter, the cross sections probed experimentally are not directly related to the cross section affecting freeze-out dynamics in the early universe. Therefore experimental bounds might be respected while at the same time maintaining the correct cosmological abundance. If the dark matter particles are massive and interact strongly enough with the Standard Model to have been in equilibrium with it at some time in the early universe, the basic feature that is needed for the above task is a strongly temperature-dependent annihilation cross section. At low temperatures, the cross section needs to be very small, to satisfy the non-observation bounds from indirect detection. In the early universe, the cross section needs to be large enough to keep dark matter in chemical equilibrium for a long while, reducing its number density and thereby evading overclosure of the universe. An example of a possible scenario along these lines is to postulate a model in which the dark sector consists of two particle species. The lighter one is the true dark matter, long-lived and interacting very weakly. In contrast, the heavier one could interact strongly and act as an efficient dilution channel for the overall abundance at high temperatures (cf., e.g., ref. [1]). If the heavy species interacts strongly, as in QCD, this scenario could lead to rather rich phenomenology. Strongly interacting particles form generally bound states. Because of the associated binding energy, their thermal abundance is larger than that for the same particles in a scattering state. Bound states may annihilate efficiently because the two particles are close to each other. Though often alluded to previously, a more intensive study of bound-state effects on the freeze-out dynamics has only started a few years ago (cf., e.g., refs. [2,3]). Recently, we have participated in developing a non-perturbative formalism for addressing the thermal annihilation of non-relativistic particles [4,5]. The formalism was already applied to a first full model, which did not include strongly interacting particles but nevertheless displayed weakly bound states [6]. The purpose of the current paper is to apply the formalism to a strongly interacting model that has been much discussed in recent literature. Our plan is as follows. Having introduced the model in sec. 2, we review some salient features concerning its thermal behaviour in sec. 3. The main technical ingredients of our analysis are specified in sec. 
4: the operators responsible for the hard annihilation event; the spectral functions describing the soft initial-state effects that influence the annihilation; as well as generalized "Sommerfeld factors" which capture the effect both of bound and scattering states on the thermal annihilation cross sections. The cosmological evolution equations are integrated in sec. 5, whereas conclusions and an outlook are offered in sec. 6. Model The model considered consists of the Standard Model extended by a gauge singlet Majorana fermion (χ) as well as a scalar field (η) which is singlet in SU L (2) but carries non-trivial QCD and hypercharge quantum numbers. 1 The Majorana fermion is chosen as the dark matter particle, given that its low-energy scattering cross section is naturally suppressed, being pwave at tree level [8]. In the MSSM language, the Majorana fermion could be a bino-like neutralino and the scalar a right-handed stop or sbottom. However, for generality we do not fix couplings to their MSSM values. The hypercharge coupling of the scalar is generally omitted, as its effects are subleading compared with QCD effects. The Lagrangian for this extension of the Standard Model can be expressed as (2.1) The notation λ 1 is reserved for the self-coupling of the Higgs doublet (H). The chiral projectors a L = (½ − γ 5 )/2, a R = (½ + γ 5 )/2 imply that χ only interacts with SU L (2) singlet projections of quarks. We assume that the Yukawa coupling y couples dominantly to one quark flavour only. The Yukawa coupling determining the mass of that flavour is denoted by h, and the strong gauge coupling by g s . The free parameters of the model are then the two mass scales 2 M and ∆M ≡ M η − M as well as the two "portal" couplings λ 3 and |y| 2 that are assumed to be small at the MS scaleμ ∼ 2M . In the MSSM context, the importance of co-annihilations in such a model was stressed long ago [1]. Sommerfeld enhancements from QCD interactions were included in refs. [9,10], however without the consideration of bound states. Similar theoretical ingredients were applied to the generalized model in ref. [11]. Direct, indirect and collider constraints on the generalized model were reviewed in ref. [7]. More recently, bound-state effects have been approximately included in this model [12][13][14], as a single additional degree of freedom in a set of Boltzmann equations, a treatment which we aim to improve upon in the following. Parametric forms of thermal masses and interaction rates The coloured scalars are responsible for most of the annihilations during thermal freeze-out. We start by reviewing the thermal mass corrections and interaction rates that they experience. The important point is that, because of Bose enhancement, the gluonic contributions are infrared (IR) sensitive, and need to be properly resummed for a correct result. As a first step, consider a naive (i.e. unresummed) computation of the self-energy of the coloured scalar. Evaluating the (retarded) self-energy at the on-shell point yields (the line " " stands for η and the wiggly line for a gluon) The real part is analogous to that for a heavy fermion [15]. The imaginary part vanishes because there is no phase space for the 1 ↔ 2 process. However, at high temperatures these naive results are misleading. Perhaps the simplest way to see this is to replace the scalar in the loop by a particle with a different mass, M η + ∆M , and consider the case ∆M ≪ πT ≪ M η . 
Then it can be verified that Re Π R /M η is modified by a correction of order ∼ g 2 s C F ∆M , and Im Π R /M η by a correction of order In other words, the result in eq. (3.2) seems to change qualitatively because Bose enhancement of the soft contribution compensates against the phase-space suppression. The correct treatment of the Bose-enhanced IR contribution requires resummation. The heavy scalars are almost static, and interact mostly with colour-electric fields (A a 0 ). In a plasma, colour-electric fields get Debye screened. We denote the Debye mass by m D . Parametrically, m D ∼ g s T , where g s ≡ √ 4πα s . The proper inclusion of Debye screening in a gauge theory requires Hard Thermal Loop (HTL) resummation [16][17][18][19]. Recomputing the 1-loop self-energy with HTL propagators, and setting ∆M → 0 since IR sensitivity has now been regulated, we get (here p ≡ |p| and a blob stands for a HTL-resummed propagator) The new contribution in eq. (3.3), originating from the Debye-screened Coulomb self-energy, is known as the Salpeter correction (cf. ref. [20] for a review). It dominates over the other mass correction if T < ∼ g s M η , which is generally the case. The imaginary part in eq. (3.4), i.e. the interaction rate, reflects fast colour and phase-changing 2 → 2 scatterings off light medium particles. It was first derived for the case of a heavy quark [16]. We finally replace the coloured scalar by a pair of heavy scalars, separated by a distance r. The HTL-resummed computation of the thermal mass correction ("static potential") and interaction rate as a function of r was carried out in refs. [21][22][23]. At leading non-trivial order the result can be expressed as As a crosscheck, for r → ∞ twice the results of eqs. This can be compared with the 1 ↔ 2 gluon radiation contribution, ∼ g 2 s C F (∆E) 3 r 2 n B (∆E) [23], where ∆E is the energy difference between the singlet and octet potentials. At high temperatures, when m D , πT ≫ ∆E, the 2 → 2 contribution dominates over the 1 ↔ 2 one. In order to determine the spectral function of the scalar pair, characterizing the states that appear in the scalar-antiscalar sector of the Fock space, V (r) and Γ(r) can be inserted into a time-dependent Schrödinger equation satisfied by the appropriate Green's function [24]. More details are given in sec. 4. We have checked numerically that, in accordance with theoretical expectations [25], the states originating from this solution respect the qualitative pattern seen above for Γ(r), namely that at high temperatures the width from 2 → 2 reactions dominates over the gluo-dissociation contribution. We close this section by considering another essential ingredient of the framework, namely the rate at which Majorana dark matter particles convert into the coloured scalars. Once more, this rate is dominated by 2 → 2 scatterings, and obtaining the correct result requires HTL resummation. Setting for simplicity the external momentum to zero, we find (the thick line is the Majorana fermion and the arrowed line the quark flavour with which it interacts, treated for simplicity as massless in vacuum which is a good approximation if m vac < ∼ πT ) where the last line applies under the assumption ∆M ≪ m q , πT ≪ √ T M . The thermal quark mass m q , originating from the phase space integral of the light plasma particles off which the 2 → 2 scattering takes place, is The rate in eqs. (3.8) and (3.9) is faster than the Hubble rate in a broad temperature range, e.g. 
down to M/T > ∼ 3000 for y = 0.3 and ∆M/M < ∼ 0.01. It does fall out of equilibrium when T ≪ ∆M , however transitions to virtual bound-state constituents may continue and form presumably the relevant concern. Non-equilibrium effects have been discussed in ref. [26]. Quantitative framework for estimating the annihilation rate We now present a framework for computing (co-)annihilation rates in the model of sec. 2. Non-relativistic fields The basic premise of our framework is to make use of the non-relativistic approximation, assuming that πT , m top , ∆M ≪ M , where M is the dark matter mass and ∆M = M η − M is the mass splitting within the dark sector. This simplification opens up the avenue to a non-relativistic effective field theory investigation of soft initial-state effects. In the non-relativistic limit, the interaction picture field operator of the coloured scalar is expressed as The non-relativistic fields φ and ϕ † transform in the fundamental representation of SU(N c ), with colour indices denoted by α, β, γ, δ, ... . The Majorana spinor χ is simplest to handle by choosing the standard representation for the Dirac matrices, i.e. γ 0 = diag(½, −½). Then where the Grassmannian spinor ψ has two spin components, labelled by p, q, r, s, ... . Only the left-chiral projection of χ participates in interactions according to eq. (2.1). In the following, we generally set M η → M whenever possible. The influence of ∆M = 0 (and its thermal modification) is discussed in sec. 4.3. Imaginary parts of 4-particle operators The first step is to determine annihilation cross sections for all possible processes with dark matter initial states. The leading order Feynman diagrams are shown in fig. 1. According to the optical theorem, the amplitudes squared |M| 2 can be expressed as an imaginary (or "absorptive") contribution to an effective Lagrangian [27]. An important simplification in the Majorana case follows from the identity satisfied by Pauli matrices, σ k pq σ k rs = 2δ ps δ qr − δ pq δ rs . Therefore a possible spin-dependent operator can be reduced to a spin-independent one: ψ † p ψ † r ψ s ψ q σ k pq σ k rs = −3ψ † p ψ † q ψ q ψ p . At leading order in an expansion in 1/M 2 , the absorptive operators read Here T a are Hermitean generators of SU(N c ). In the partial wave language, the operators in eq. (4.4) A non-zero value of c 1 may be generated at higher orders. To minimize the magnitude of higher-order effects, the couplings should be evaluated at the MS renormalization scalē µ ∼ 2M . We note that c 5 gets contributions from the "Majorana-like" processes M i and M j in fig. 1, but not from the "Dirac-like" amplitude M h . Number density, effective cross section, evolution equation Within Boltzmann equations the overall dark matter abundance evolves as [28][29][30] n = − σ eff v n 2 − n 2 eq , (4.5) whereṅ is the covariant time derivative in an expanding background. To go beyond the quasiparticle approximation underlying the Boltzmann approach, the effective cross section can be re-interpreted as a chemical equilibration rate, Γ chem , and then defined on the non-perturbative level within linear response theory [31]. Furthermore, within the nonrelativistic effective theory, Γ chem can be related to the thermal expectation value of L abs from eq. (4.3) [4]. These relations can be expressed as Im L abs . (4.6) In our model the number density amounts to The mass difference ∆M T gets a vacuum contribution, ∆M = M η − M , and a thermal correction from eq. 
(3.3) as well as from a similar tadpole involving λ 3 , Note that the negative Salpeter correction may cancel against the positive terms. At leading order the Debye mass parameter amounts to 9) where N f is the number of quark flavours (cf. ref. [32] for higher orders). The effective values of g s and N f are changed with the temperature, as reviewed in appendix A. For future reference we define a "tree-level" effective cross section, σ eff v (0) , by evaluating the thermal expectation value Im L abs at leading order and then making use of eqs. (4.6) and (4.7). Wick contracting the indices in eq. (4.3) leads to (4.10) Plasma-modified Schrödinger equation and generalized Sommerfeld factors Going beyond leading order, we evaluate Im L abs as elaborated upon in ref. [5], expressing it as a Laplace transform of a spectral function characterizing the dynamics of the dark matter particles before their annihilation. Denoting by E ′ the energy of the relative motion and by k the momentum of the center-of-mass motion, this implies where α 2 M ≪ Λ ≪ M restricts the average to the non-relativistic regime. 3 The spectral functions are obtained as imaginary parts of Green's functions, 4 Here V i contains a negative imaginary part, and N i is a normalization factor giving the number of contractions related to the operator that c i multiplies in eq. (4.3): If the potentials V i (r) were r-independent and with an infinitesimal imaginary part, i.e. V i (r) = Re V i (∞)−i0 + , they would only induce mass shifts. In this case the spectral functions can be determined analytically, This form can be used for defining generalized Sommerfeld factors: Then eq. (4.11) combined with eqs. (4.6) and (4.7) leads to a generalization of eq. (4.10), If a potential V i (r) leads to a bound state, whose width is much smaller than the binding energy, the corresponding generalized Sommerfeld factor can be computed in analytic form. In this case eq. (4.12) can be solved in a spectral representation, resulting in 3 Some elaboration about the need to introduce such a cutoff can be found in ref. [5]. In practice, we choose Λ ≃ 2α 2 M , and have verified that making it e.g. 2-3 times larger plays no role on our numerical resolution. where ψ j are the bound state wave functions. Inserting into eq. (4.16), the contribution of the jth bound state toS i reads This becomes (exponentially) large when T ≪ α 2 s M , however chemical equilibrium is lost in the dark sector at low T , which imposes an effective cutoff on the growth (cf. secs. 5 and 6). Thermal potentials In order to write down the potentials V i (r) appearing in eq. (4.12), let us define The integrand in eq. (4.20) corresponds to the static limit of the time-ordered HTL-resummed temporal gluon propagator. Then we find The structure V 3 (r) equals the combination V (r) − iΓ(r) shown in eqs. (3.5)-(3.7), whereas C F Re[v(0)] yields the Salpeter part of ∆M T in eq. (4.8). The potential V 3 (r) corresponds to a singlet potential, V 4 (r) to an octet potential, and V 5 (r) to a particle-particle potential, relevant because of the presence of a particle-particle annihilation channel generated by Majorana exchange (cf. the discussion around the end of sec. 4.2). We note in passing that at T < 160 GeV, when the Higgs mechanism is operative, additional potentials can be generated, particularly through the Higgs portal coupling λ 3 in eq. (2.1) (cf. e.g. ref. [33]). 
However the coefficients of these potentials are suppressed by ∼ λ 2 3 v 2 /M 2 , where v is the Higgs expectation value. Given that we consider M ≥ 2 TeV, we expect their contributions to be negligible compared with QCD effects and have not included them. We also note that an r-dependence can be generated for V 2 (r) through quark exchange, however this is suppressed by ∼ |y| 2 σ · ∇/M . For a practical use of eq. (4.21), numerical values are needed for the parameters g 2 s and m 2 D . We relegate a discussion of this point into appendix A. Let us however note that we Here ω ≡ 2M + E ′ . At low temperatures a dense spectrum of bound states can be observed, which gradually "melts away" as the temperature increases. Right: The generalized Sommerfeld factors, eq. (4.16), corresponding to the annihilation of the coloured scalars via different channels. restrict to temperatures T > ∼ 1 GeV, so that the real part of the potential contains no trace of a string tension [34]. Furthermore, in accordance with the low-temperature gluon-radiation contribution specified below eq. (3.7) and with general arguments presented in ref. [6], the imaginary part of the potential is multiplied by the Boltzmann factor e −|E ′ |/T for E ′ < 0. Numerical evaluations Having determined the spectral functions from eqs. (4.12) and (4.13) and the generalized Sommerfeld factors from eq. (4.16) or (4.19), the effective cross section is obtained from eq. (4.17). Subsequently eq. (4.5) can be integrated for the dark matter abundance. As usual we define a yield parameter as Y ≡ n/s, where s is the entropy density, and change variables from time to z ≡ M/T , whereby eq. (4.5) becomes Here m Pl is the Planck mass, e is the energy density, and c is the heat capacity, for which we use values from ref. [36] (cf. also ref. [37]). The final value Y (z final ) yields the energy fraction Ω dm h 2 = Y (z final ) M /[3.645 × 10 −9 GeV]. We integrate eq. (5.1) up to z final = 10 3 . At around these temperatures, depending on the value of ∆M/M , the processes of interest have either ceased to be active, or are falling out of chemical equilibrium, because their rates are suppressed by e −∆M/T ≪ 1. Therefore they cannot be reliably addressed within the current framework. In fig. 2(left) we show the spectral function ρ 3 corresponding to the attractive channel, displaying a dense spectrum of bound states at low temperatures. The corresponding generalized Sommerfeld factor, obtained from eq. (4.16), is shown in fig. 2(right). An exponential increase is observed at low temperatures, as indicated by eq. (4.19). The repulsive channels also show a modest increase at very low temperatures, due to the fact that the spectral function extends below the threshold at finite temperature [6]. Examples of results obtained by integrating eq. (5.1) are shown in fig. 3. In particular, it can be observed how a very efficient annihilation sets in at low temperatures, if ∆M is small so that bound states of coloured scalars are lighter than scattering states of Majorana fermions. Finally, fig. 4 shows slices of the parameter space leading to the correct dark matter abundance. In the plots the Yukawa couplings have been set to the stop-like values y = 0.3, h = 1.0. However these couplings only have a modest effect if chosen otherwise, because they do not affect the coefficient c 3 appearing the attractive channel, cf. eq. (4.4). As an example, setting h = 0.0 increases the abundance typically by ∼ 5%, cf. fig. 3. 
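A minimal sketch of the abundance integration is given below. It uses the standard textbook form dY/dz = −⟨σ_eff v⟩ s/(H z)(Y² − Y_eq²) with constant effective degrees of freedom and a constant, assumed cross section, rather than the full thermodynamic input (e, c) of eq. (5.1) and the temperature-dependent, Sommerfeld-corrected cross section evaluated in the paper; it only illustrates the mechanics of obtaining Ω_dm h² from Y(z_final).

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PL = 1.22e19      # Planck mass [GeV]
G_STAR = 100.0      # assumed (constant) effective number of relativistic d.o.f.

def y_eq(z, M, g_dof=2.0):
    """Non-relativistic equilibrium yield for a Majorana particle of mass M (z = M/T).

    Only the 2 spin d.o.f. of the Majorana fermion are counted here; the co-annihilating
    scalar d.o.f. are neglected in this sketch.
    """
    s = 2.0 * np.pi**2 / 45.0 * G_STAR * (M / z) ** 3            # entropy density
    n_eq = g_dof * (M**2 / (2.0 * np.pi * z)) ** 1.5 * np.exp(-z)
    return n_eq / s

def freeze_out(M, sigma_v, z_span=(10.0, 1000.0)):
    """Integrate dY/dz = -<sigma v> s / (H z) (Y^2 - Y_eq^2) for a constant cross section."""
    def rhs(z, Y):
        T = M / z
        s = 2.0 * np.pi**2 / 45.0 * G_STAR * T**3
        H = 1.66 * np.sqrt(G_STAR) * T**2 / M_PL
        return [-sigma_v * s / (H * z) * (Y[0] ** 2 - y_eq(z, M) ** 2)]
    sol = solve_ivp(rhs, z_span, [y_eq(z_span[0], M)], method="Radau",
                    rtol=1e-6, atol=1e-20)
    return sol.t, sol.y[0]

# Assumed parameters: M = 3 TeV and a weak-scale-like <sigma v> in GeV^-2.
z, Y = freeze_out(M=3000.0, sigma_v=3e-9)
omega_h2 = Y[-1] * 3000.0 / 3.645e-9        # conversion quoted in the text
print(omega_h2)
```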
The most important role is played by the coupling λ 3 . For c 3 this coupling has been evaluated at the scaleμ = 2M , whereas for collider phenomenology its value at a scaleμ ∼ m H would be more relevant. The latter can be obtained from eq. (A.7), and is some tens of percent smaller than λ 3 (2M ). We stress that, as shown by eq. (A.7), Yukawa couplings always generate a non-zero value for λ 3 through renormalization group running. Conclusions We have investigated a simple extension of the Standard Model, cf. sec. 2, which has become popular as a prototypical fix to the increasingly stringent empirical constraints placed on "WIMP"-like frameworks. In this model dark matter consists of a Majorana fermion, which only has a p-wave annihilation cross section at tree level, helping to respect experimental non-observation constraints from indirect detection. The Majorana fermion has a Yukawa interaction with a QCD-charged scalar field (such as a right-handed stop or sbottom in the MSSM) and a Standard Model quark. For large masses and small mass splittings between the Majorana fermion and the scalar field, the best sensitivity for discovering the Majorana fermion appears to be direct detection by XENON1T [7], enhanced by resonant scattering off quarks through scalar exchange, even if interactions with top or bottom quarks are much less constrained than those with up or down quarks. Despite its simplicity, the model displays rich physics in the early universe. We have extended previous investigations [7,[9][10][11][12][13][14] by incorporating the full spectrum of thermally broadened bound states as well as the effect of soft 2 ↔ 2 scatterings. In general such scatterings dominate interaction rates at small mass splittings, because they are not phasespace suppressed in the same way as 1 ↔ 2 scatterings are, cf. sec. 3. The reason that the model leads to a viable cosmology is that at high temperatures dark matter annihilates efficiently through the scalar channel, guaranteeing that its overall abundance remains low. The fast annihilations proceed particularly through bound states formed by the scalars, cf. fig. 2. As shown in fig. 4, the model can be phenomenologically viable for masses up to M ∼ 5...6 TeV, provided that the mass splitting is small, ∆M/M < 5 × 10 −3 , and that the "Higgs portal" coupling λ 3 between the coloured scalar and the Higgs doublet is substantial. We recall that in supersymmetric theories, λ 3 is proportional to the quark Yukawa coupling squared, λ 3 ∼ |h| 2 , and therefore indeed large if we identify the coloured scalar as a right-handed stop. Actually, similar arguments but a somewhat more complicated analysis are expected to apply to a left-handed stop as well (cf. e.g. ref. [38]). We believe that the mass splitting should not be too small, however. The non-relativistic binding energy of the lightest bound state, E ′ 1 , is negative. If it overcompensates for the mass difference, so that 2∆M + E ′ 1 < 0, the lightest two-particle states in the dark sector are the bound states formed by the coloured scalars. However these states are short-lived. Therefore it seems possible that (almost) all dark matter converts into the scalars and gets subsequently annihilated, so that the model may not be viable as an explanation for the observed dark matter abundance. This domain has been excluded through the grey bands in fig. 4. If we close eyes to this concern and assume that chemical equilibrium is maintained, then the value of M could be substantially larger than in fig. 
4, for instance M ∼ 8 TeV as shown in fig. 3, and even more if we integrate down to lower temperatures. We end by remarking that the model contains two portal couplings, λ 3 and y. The roles that these play are rather different. The value of λ 3 at the scaleμ = 2M influences the coefficient c 3 which mediates the most efficient annihilations, cf. eq. (4.4). In contrast y affects the rate of transitions between the Majorana fermions and coloured scalars, cf. eq. (3.9), as well as the running of λ 3 , cf. eq. (A.7). As long as y is not miniscule, so that the rate in eq. (3.9) remains in equilibrium, it has in practice little influence on our main results in fig. 4. The only coupling that we need at a scaleμ ≪ M is the strong coupling. Since it has a large influence, we evaluate it at 2-loop level forμ ≤ M (nowadays running is known up to 5-loop level [39][40][41]). Denoting by N f the number of flavours and setting N c = 3 for brevity, the 2-loop running is given by The value of N f = 3, ..., 6 is changed when a quark mass threshold is crossed atμ = m i , where continuity is imposed. The initial value is α s (m Z ) ≃ 0.118. Forμ > M , the contribution of the coloured scalar is added and we switch over to 1-loop running, i.e. eq. (A.11). When we evaluate the static potential, a wide range of distance scales appears. At short distances, inspired by refs. [42,43], we evaluate the 2-loop coupling at the scaleμ = e −γ E /r. Since parametrically only the scales αM ≪ M play a role in the Schrödinger equation, the running does not include the coloured scalar in this domain. At large distances, we employ effective thermal couplings. In the absence of NLO computations for thermal quarkonium observables, we adopt effective couplings from another context, that of dimensionally reduced field theories [44,45]. There the Debye mass parameter and an "electrostatic" coupling are expressed as [46] For general masses, only α MS E4 and α MS E7 are available at present: Here the functions read (n F (x) ≡ 1/(e x + 1); chemical potentials have been set to zero) Given that α MS E6 is not currently known for general masses, we estimate inserting here PDG values for the quark masses [47]. The scale parameter is set toμ = 2πT .
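As a simplified illustration of the running discussed above, the sketch below evaluates the strong coupling at 1-loop order with a fixed N_f = 5; the paper itself uses 2-loop running with flavour thresholds and adds the coloured scalar contribution above μ = M.

```python
import numpy as np

def alpha_s_1loop(mu, alpha_mz=0.118, m_z=91.19):
    """One-loop running of the strong coupling with a fixed N_f = 5 (no thresholds).

    This is a simplified sketch; the analysis described above uses two-loop running
    with quark-mass thresholds.
    """
    nf = 5.0
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mz / (1.0 + alpha_mz * b0 / (2.0 * np.pi) * np.log(mu / m_z))

for mu in (2.0, 91.19, 2000.0, 4000.0):   # GeV; 2M ~ 4 TeV is the scale used for the couplings
    print(mu, alpha_s_1loop(mu))
```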
6,475
2018-01-17T00:00:00.000
[ "Physics" ]
Research progress of optical H2O sensor with a DFB diode laser In the field of near infrared H2O sensing, the acquisition of the absorption signal usually is from a noisy background, thus it is important to adopt an effective signal demodulation method. This study introduced the research progress in the field of trace water vapor detection, covering different individual gas detection techniques. On the basis of the conventional double-beam differential absorption, the division method in voltage and the dual-peak method based on the differential value of two adjacent absorption lines have been studied. Voltage division has an excellent stability to temperature variation, mechanical extrusion, and fiber bend loss. The dual-peak method proved a linear relation with the water vapor concentration, and this method provided a way to measure the concentration at high pressure. Furthermore, the so called balanced ratiometer detection (BRD) was introduced. It has an outstanding self-adjusting capability, and it can also avoid an excess phase difference caused by the current-to-voltage converting circuit, thus this method has a high sensitivity. In addition, the second harmonic technique applied to gas detection was introduced, and for the high-frequency modulation via driving current, 1/f was suppressed apparently; as a result, this technique realized a better sensitive detection by one to two orders of magnitude. Introduction Water vapor monitoring is an important task in the high voltage electrical equipment, manufacturing plants, and environment sciences, etc [1]. Common methods usually are the gravimetric method, electrolytic process and dew-point measurement technique [2]. Compared with these methods, the detection based on optical and spectroscopic techniques is attracting more and more attention for fast and selective on-line measurement, especially, the tunable diode laser absorption spectroscopy (TDLAS), in conjunction with wavelength modulation spectroscopy (WMS), has been developed into a very sensitive and general technique for monitoring trace species [3]. When using the optical measurement based on the absorption spectrum, one has to resolve small changes in a large background. For ppmv level trace water, the absorption is particularly weak; this needs a very effective method to extract the weak signal. In the following, the progress of optical gas sensing in our research group will be introduced to provide reference during the design of the optical gas sensor. During the demodulation of the absorption signal, the subtraction in voltage has been one of the simplest signal extraction methods. But when compared with the division method, a slight change could cause a more obvious instability of the signal by subtraction. In this paper, a conventional division method in voltage and an approach based on the balanced ratiometer detection (BRD) are introduced. Furthermore, we introduce a new method named as "dual-peak" method, reported in [4], which is aimed at coping with the misreading of the reference value because of the noisy absorption line bottom and the affection of the linewidth broadening at high gas pressure, hence, this method provides a way to measure the concentration at high pressure. In addition, the second harmonic technique is introduced, for the purpose of high sensitive gas detection. Because of the high frequency modulation, the high signal to noise ratio can be obtained. 
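The stability argument for division over subtraction can be illustrated with a short simulation (the drift level and transmission below are placeholders): a common-mode laser power drift divides out of the ratio I_t/I_0 exactly, whereas it leaves a residual in the difference.

```python
import numpy as np

rng = np.random.default_rng(1)
true_transmission = 0.999                         # weak absorption: I_t / I_0
drift = 1.0 + 0.05 * rng.standard_normal(1000)    # common-mode laser power drift (5 %)

I0 = drift                                        # reference beam
It = true_transmission * drift                    # signal beam: same drift, plus absorption

by_division = It / I0                             # common-mode drift cancels exactly
by_subtraction = I0 - It                          # residual drift remains in the difference

print(np.std(by_division), np.std(by_subtraction))
```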
Basic principles The fundamentals of the molecular absorption spectroscopy have been discussed widely elsewhere [5,6]. According to the HITRAN2008 database, many gas species exhibit absorption in the ultraviolet (UV), visible, near infrared or mid infrared regions. It is shown in Fig. 1 in which absorption lines in near infrared are typically overtones of fundamental absorption lines in the mid infrared and hence can be significantly weaker. However, the availability of high quality of light sources and detectors, derived from the telecommunications applications, can compensate this disadvantage and reach a high sensitivity. In the region of near infrared, water vapor has absorption lines at the wavelengths of 1368.579 nm and 1367.862 nm, and there is no absorption of background gases near these wavelengths which guarantees the measurement accuracy. During measuring by the single absorption line, 1368.597 nm is usually selected for its stronger line strength. A distributed feed-back (DFB) diode laser (WSLS-137010C1424-20) operating at 1370 nm, with a linewidth of 3 MHz, is chosen as the light source. And the output wavelength is tuned by driving current modulation or temperature modulation, and the shifts of the wavelength are 3 pm when the change in the driving current is 1 mA and 90 pm when the change in the LD temperature is 1 ℃, respectively. Once the monochromatic radiation of the light source at frequency v 0 (cm -1 ) overlaps with a rotation/vibration transition of a gas, the absorption will happen, resulting in the attenuation of the light intensity. The absorption spectroscopy is governed by the Beer-Lambert law, which relates the transmitted intensity I t to the incident intensity I 0 as where α is the absorption coefficient, C is the concentration of the target gas, L is the length of the absorption path, and I 0 is intensity of the incident light. In the situation of low absorption, αCL<<1, there can be Then, the concentration of the gas C can be got by measuring the information corresponding to I or I 0 , or both of them. The detection techniques are most important during the demodulation of concentration C. To provide reference during the design of the gas sensor, different techniques will be introduced in the next section. Division in voltage Division and subtraction are the most common methods to extract the differential signal. According to (2), division could completely eliminate the influences of the common-mode noise in theory, such as laser power drift, temperature variation, extrusion on optical components, and fiber bend loss before splitting. The division process is shown in Fig. 2. The signal beam transferred through the target gas cell and then was coupled onto an InGaAs photodiode (PD). The reference beam was directly coupled to an identical PD. After the current-tovoltage conversion and amplification, the signals were delivered to a divider to get the ratio of I to I 0 . Processed by the essential circuits above, the final signal was detected to figure out the target gas concentration as the follows: Bending loss, mechanical extrusion, and ambient temperature would add differential mode interference to the signal and reference laser beams. In this case, the division performs an excellent stability. Actually, the extrusion and temperature influences are the loss caused by components, thus, all the influences can be roughly regarded as components loss. 
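Under the Beer-Lambert law of eq. (1), the division method recovers the concentration directly from the signal/reference ratio; a minimal sketch is given below, with an assumed absorption coefficient and path length used purely for illustration.

```python
import numpy as np

def concentration_from_ratio(I_signal, I_reference, alpha, L):
    """Retrieve gas concentration from the signal/reference ratio (Beer-Lambert law).

    I_t / I_0 = exp(-alpha * C * L)  ->  C = -ln(I_t / I_0) / (alpha * L).
    For weak absorption (alpha*C*L << 1) this reduces to ~ (1 - I_t/I_0) / (alpha * L).
    """
    ratio = np.asarray(I_signal, dtype=float) / np.asarray(I_reference, dtype=float)
    return -np.log(ratio) / (alpha * L)

# Hypothetical numbers: 10 cm path and a placeholder absorption coefficient.
alpha, L = 2.0e-6, 10.0            # [cm^-1 per ppmv] and [cm], assumed for illustration
I0, It = 1.000, 0.9990             # reference and transmitted intensities (arbitrary units)
print(concentration_from_ratio(It, I0, alpha, L), "ppmv")
```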
Taking the fiber for example, the random loss of 0.08 dB was applied to the signal beam channel, and the simulation result is given in Fig. 3. For the division method, the deviation was only 1.36%, which outperformed the BRD method, 3.60%. This indicated that division is more stable. Detection based on dual adjacent absorption lines In direct absorption spectroscopy measurement, generally, the single absorption line is adopted. To calculate the concentration, we should measure the absolute value of the absorption line. As shown in Fig. 4, the existence of the random noise and interferometric noise [7] increases the difficulty and uncertainty of selecting the reference point. Recently, a solution was proposed in [4], using another appropriate absorption peak as the reference point. The experiment has verified that this so-called dual-peak measurement method based on the differential value of two adjacent absorption lines is practical and highly accurate. In the measurement experiment, the temperature modulation was used to tune the DFB diode laser, and the emission wavelength range overlapped the two adjacent absorption lines of water vapor at 1367.862 nm and 1368.597 nm, respectively. The experimental setup was the same as that of the division method introduced in Section 3.1. Figure 5 presents the measuring result. The concentration varied from 100 ppmv to 1200 ppmv. Curve A indicates that there is a well linear relation between the concentration and the measured signal. Curve B shows the measurement error. This error was within 20 ppmv. Meanwhile, the resolution could reach to 10 ppmv when the concentration of water vapor was not higher than 1200 ppmv [4]. In addition, this method provides a feasible way to detect the gas concentration under a high pressure as long as the wavelength tuning range could overlap two appropriate absorption lines of the target gas. High sensitive detection using BRD The BRD is an electric noise cancellation technique, initially innovated by Hobbs [9] based on the Ebers-Moll model. The detailed circuit of the BRD is given in Fig. 6(b). The reference photocurrent was split into two parts across through a differential pair of bipolar junction transistors (BJTs), used as a variable current divider. The split ratio of the reference photocurrent only depended on the difference ΔV b e =V b e 2 -V b e 1 , which was independent of the amplitude of the reference photocurrent. Simultaneously , the signal photocurrent was subtracted by the current across Q 2 at the invert junction of the amplifier A1. The output of A1 reflected current subtraction after the feedback resistor. With the integrating amplifier A2, a negative feedback loop was formed to adjust ΔV be automatically by sensing A1's output. Then, A1's output could be forced to be zero, this ensured that the signal and divided reference photocurrents were equal, and thus the noise was eliminated identically. The log ratio output, shown in Fig. 6(b) is given in the following equation [8,9]: where V should be in Volts, and I ref and I sig refer to photocurrents derived from the reference beam and signal beam, respectively. In this method, normalization is processed in current instead of voltage, which fundamentally avoids an excess phase difference caused by the current-to-voltage converting circuit, mainly because of the difference in amplifiers. Figure 7 shows a research test. The demodulated water vapor concentration was well proportional to the water vapor concentration ranging from 56 ppmv to 809 ppmv. 
The proportional relationship was described by a linear equation with an R-square of 0.99983. This technique can reach to ppbv level; the minimum concentration for water vapor at 1368.597 nm that could be detected by the system has been proven to be 71.8 ppbv with just a 10-cm path length based on the BRD method [8]. It is important to point out that the intercept in Fig. 7 is small, approximately zero. It is because this experiment was carried out at the situation of two matched PDs. As shown in Fig. 8, the light from the DFB diode laser with an emission wavelength of 1370 nm directly was coupled on a PD. After data processing, the differential signal was obtained, and the absorption line was measured to be 1368.597 nm. There existed water vapor inside some optical components of the optical fiber gas sensor, which has been proven in [10]. During the above experiment, the water vapor inside the DFB diode laser has been eliminated by applying the differential processing (BRD technique), and the water vapor inside the two PDs has been suppressed by matching processing. Besides the water vapor, other gases like N 2 , O 2 , and CO 2 potentially existing in the components, based on our research, have to be taken into consideration during the design of the corresponding high sensitive and high precision gas sensing system. High sensitive detection based on the second harmonic signal Compared to the direct detection mentioned above, the harmonic signal technique enables the alternating current (AC) detection at some frequency chosen and the use of a lock-in amplifier for the better signal recovery. The chosen modulation and detected signal frequency is sufficiently high to eliminate laser and 1/f noise. Besides, the lock-in amplifier allows the narrower signal bandwidth and, hence, noise is minimized. As a result, the technique is more sensitive by one to two orders of magnitude relative to the direct detection. A general schematic that introduces this detection technique based on harmonic signals is shown in Fig. 9. In the design of the detection system, a low-frequency, the 4-Hz sawtooth current combined with a high-frequency, 1k-Hz sinusoidal current was applied to drive the DFB diode laser. With the dither over absorption features and wavelength scan by the sawtooth current, the second harmonic signal profile could be recovered by a lock-in amplifier at the receiver. The detected result is shown in Fig. 10. In general, this method confers one apparent advantage: improved detection sensitivity resulting from a decrease in 1/f noise because of high frequency sinusoidal modulation and from the narrow bandwidth of the lock-in amplifier. Conclusions In this paper, the progress of the DFB diode laser based water vapor sensor is reported, and different techniques used for gas sensing are introduced above. Division in voltage has a more excellent stability than BRD in suppressing ambient temperature variation, mechanical extrusion, and fiber bend loss, while the BRD outperforms in self-adjusting capability by using a differential pair of BJTs as a variable current divider, this method performs well in term of sensitivity. As to the dual-peak method, it provides a method for accuracy improvement and a feasible way to high pressure condition. In addition, a technique based on the harmonic signal is introduced. The detection sensitivity has been further improved for the high-frequency modulation via the driving current and narrow bandwidth of the lock-in amplifier.
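The second-harmonic scheme can be prototyped in a few lines: a slow ramp scans across the absorption line, a fast sinusoid dithers the wavelength, and the detector signal is demodulated at 2f by mixing with a reference and low-pass filtering. The line shape, noise level and filter below are illustrative, not the parameters of the reported system.

```python
import numpy as np

fs = 100_000.0                        # sampling rate [Hz]
t = np.arange(0.0, 0.25, 1.0 / fs)    # one 4 Hz ramp takes 0.25 s
f_ramp, f_mod = 4.0, 1000.0           # sawtooth scan and sinusoidal modulation frequencies

# Instantaneous detuning from line centre (arbitrary units): slow ramp plus small dither.
ramp = 2.0 * (t * f_ramp % 1.0) - 1.0
detuning = 3.0 * ramp + 0.2 * np.sin(2.0 * np.pi * f_mod * t)

# Lorentzian absorption line plus white noise stands in for the detector signal.
absorbance = 0.01 / (1.0 + detuning**2)
signal = np.exp(-absorbance) + 1e-4 * np.random.randn(t.size)

# Digital lock-in at 2f: mix with the second-harmonic reference, then low-pass (moving average).
reference_2f = np.cos(2.0 * np.pi * 2.0 * f_mod * t)
mixed = signal * reference_2f
window = int(fs / f_ramp / 50)        # crude low-pass filter, ~50 output points per ramp
second_harmonic = np.convolve(mixed, np.ones(window) / window, mode="same")
print(second_harmonic.min(), second_harmonic.max())
```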
3,074.4
2014-03-13T00:00:00.000
[ "Physics" ]
Crowd Face Detection with Naive Bayes in Attendance System Using Raspberry Pi . PT. Restu Agung Narogong is a company with a total of 176 employees, queues often occur in the attendance process, both incoming and outgoing attendance. The employee needs to register their attendance. It is time consuming during the shift change. Therefore, a biometric system is needed to support the attendance system to identify employee without registering themselves. One of the alternative biometric systems is face recognition by using a computer vision. The purpose is to implement a crowd face detection with Raspberry Pi using the Naïve Bayes classifier. This system uses an algorithm to extract facial characteristics into mathematical data. Then the data is compared with data from other facial characteristics collected in the database. This device uses Python as a programming language with some of the scientific Python libraries. The testing of the Naïve Bayes method was conducted using a sample of dataset of 370 augmented facial imagery. The accuracy of this implementation is 76.31%, the precision is 78.25% and recall 81.25%. The background and lighting of the captured image affect the accuracy of this device. Introduction The rapid advancement of technology has resulted in the emergence of various electronic devices and software which help to assist human activities, so the time efficiency increases productivity if a company or organization [1].In a company, whether in a large or small and medium-sized company, attendance is an inseparable thing.Many companies use attendance as a basis for salary [2].One of the electronic devices for attendance which are developing now is by using fingerprint.Fingerprint is relatively effective to validate and ensure an employee attendance.Because fingerprint is a biometric technology that offer biological authentication which enable the system to recognize the user accurately [3].Biometrics recognition system, sometimes known as biometric system, is an authentication system using biometrics.The biometric system will automatically identify a person based on a biometric characteristic by matching that characteristic to a biometric feature that has been stored in the database [3].Besides fingerprint, other biometric systems commonly used are face recognition [4] and retinal scan [5].The advantages of fingerprint biometric system are secure, ease of use, non-transferable, and higher accountability [6].The disadvantages of fingerprint biometric system are Exclusions, cost, and system failure [7].The advantages of facial recognition system are robust, touchless, effortless, real-time, and valid.The disadvantages of facial recognition biometric system are privacy concerns, maturity of technology and storage [8].The advantages of retinal scan biometric system are most reliable, very quick, and unique data points [9].The disadvantages of retinal scan system are health risk to the eye and intrusiveness [10].Based on the afore mentioned advantages, fingerprint is the most common biometric system used especially for the attendance logging in the industry [11].However, it has weaknesses about scanner issues and physical traits.In this research, the application of facial recognition system as an alternative biometric system was studied, with the case study at PT. 
Restu Agung Narogong.This company is a manufacturing company with a total of 176 employees, queues often occur in the attendance process, both incoming and outgoing attendance.The employee needs to register their attendance.It is time consuming during the shift change.Therefore, a biometric system is needed to support the attendance system to identify employee without registering themselves.Based on the facial recognition technology in computer vision, the application of employee facial recognition can be used as an alternative method in monitoring the presence of the employees, which aims to conduct employee attendance that will capture many faces in a crowd.Employees are only required to face the camera while walking into the attendance gate.The system will detect many faces in a walking crowd.So, it can reduce the attendance queue that occurs if using fingerprint.In implementing one of the features of this system function, the system can later perform detection and facial recognition.The facial detection process is done using Naïve Bayes Classifier method that serves to classify employees' faces.Naïve Bayes Classifier is one of the algorithms used for classification and is a Machine Learning method that uses probability calculations and statistics put forward by Thomas Bayes.The algorithm is used to predict future probabilities based on experience [12]. Literature Review Face recognition in a crowd has been studied by [13].In this research skin segmentation has been used to many procedures as in other methodological on face recognition.recently, partial face detection has necessitated and drawn considerable attention in the information science and technology community.However, the purpose of this study was to investigate new ideas for recognizing a specific individual in a crowd, as well as obscured face information.This article is the result of an in-depth investigation of face identification under partial visibility conditions such as chrominance, lighting, posture changes, and saturation, among others.in this article we give a presentation of new algorithm handling methods and system for the partial face recognition and perceive a specific individual in a crowd.Finally, we applied common classifiers such as neural networks, SVM, and HMM to assess the proposed approach and achieved a high detection rate.Another research on crowd detection has been done by [14].The fundamental drive of this article is to describe the idea of empty seat revelation system and thus track down the quantity of empty seats left vacant in a corridor.The efficient empty seat revelation system is accomplished by utilizing the combination of the Viola-Jones algorithm with template-based correlation matching.The suggested empty system is extremely efficient.This technology aids crowd management by updating the quantity of empty seats on a regular basis.Facial image recognition in [15] utilizes Naive Bayes for classifying the result of eigenface feature extraction.the normalization z-score is included to improve accuracy.to evaluate the performance of the suggested method, the 200 datasets are separated into data training and data testing by utilizing cross validation (k=10).The outcomes show that the suggested method can predict the facial image up to 70%.Additionally, in average, the prediction accuracy increases to 89.5 percent by including the normalization Z-Score.A system of face recognition for biometric has been proposed by [16].This research uses Raspberry Pi as its processing unit and utilize Pyimage 
search library for the face recognition.For the face recognition, it is accomplished by using David King's Dlib and Adam Geutgey's module.This article also uses standard frontal face using Haar Cascade classifier in the form of an xml file as its face detection.The dataset comprises of 5 persons, with 30 photographs, for a total of 150 photos.As for the size parameter in pixels variations include 20x20, 25x25, 30x30, and 35x35 and the scale factor parameters' values include 1.1, 1.2, 1.3, and 1.4.The neighboring parameters' variation values are 3, 4, 5, and 6.The test results reveal that the best parameters, namely the size parameter 20x20, scale factor parameters 1.1, and parameters neighborhood of 3, achieve the greatest Accuracy value of 80% and the True Positive Rate of 100%.Face recognition system testing is done at four distinct distances: 1.5 meters, 2 meters, 2.5 meters, and 3 meters.Size parameters, scale factor parameters, and neighborhood parameters are the three categories of testing parameters. Research Method The research framework is an overview that explains the logic flow of research in general.The research framework of this study can be explain using Figure 1. Fig.1. Framework of Research Based on Figure 1, the stages performed in this study can be divided into nine steps as follows: Problem Identification Stage This stage is to find out the problems that often occur in the attendance system at PT. Restu Agung Narogong. Data Collection Stage The data collection is done to support how the application will be created in it.Data collection is done by using interview methods, observation, and data studies. Requirement The requirement stage is done to generate the need for system development to be carried out such as software and hardware needs. Design At this stage, the system design will be built based on the idea of solutions for the problems that occur, including the design of appearance of the application. Evaluation The evaluation of the prototype is based on the design made to determine the suitability of the problem. Coding At this stage the design execution is carried out into the system to be built using the Python -MySQL program. Testing The system that has been developed will be tested to know the development and suitability of the program with the design using black box testing and evaluation of the model confusion matrix to know the percentage of models used. System Evaluation Evaluation of the system is the result of the advantages and disadvantages of the system that has been developed which will provide conclusions and suggestions for development. System Implementation This stage is the stage of using the system according to the needs of PT.Restu Agung Narogong. Attendance system at PT. Restu Agung Narogong can be explained using Figure 2. Fig.2. Working system analysis Figure 2 shows the employee attendance system that runs at PT. Restu Agung Narogong.The process of the system can be explained as follows: 1. Attendance Officer Officers are available in advance to maintain, provide, and documenting employee attendance 2. Employee Employees register their attendance in the attendance book, then the book will be handed over to the absentee officer who will do the documentation (note the absence) 3. Attendance Recap Attendance that has been documented by the absentee officer is then gathered as a report. 
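A sketch of the Haar cascade detection stage is given below, using OpenCV's standard frontal-face cascade and the best-performing parameter values reported in [16] (scale factor 1.1, 3 neighbours, 20×20 minimum window); the image path is a placeholder.

```python
import cv2

# Load the standard frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path):
    """Return bounding boxes (x, y, w, h) of all faces found in the image."""
    image = cv2.imread(image_path)                   # 'image_path' is a placeholder
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,      # best-performing scale factor reported above
        minNeighbors=3,       # best-performing neighbour parameter
        minSize=(20, 20),     # best-performing minimum window size
    )
    return faces

for (x, y, w, h) in detect_faces("crowd.jpg"):
    print("face at", x, y, w, h)
```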
The flowchart for the face recognition can be described using Figure 3 as follow: Testing Set A test set is a data set used to assess the strength and utility of a predictive relationship.The test set is obtained with two approaches, first using the face image contained in the face database.Second, using facial images obtained in real time or using video. Capturing face images or face detection will be carried out by a webcam which will detect the user's face.After the user's face is detected, then the user's face image will be located (face locating).Then the application will track the face of the user (face tracking).The system used for face tracking is the Two-Dimensional System, where this system tracks p g g faces and outputs of the image space where the subject's face is located.In this application, the Haar-Cascade Classifier is used to detect human faces or subjects.The main basis for this classifier is the Haar-like feature.This feature uses changes in the contrast value between adjacent rectangles, compared to the pixel intensity value. Feature Extraction Feature Extraction is used to find the most appropriate image representation so that can be identified.The main task of feature extraction is intelligence and the ability to sense the similarities between the test set and the training set.This main task requires feature extraction to find the relevant distance measures in the selected feature space. Projection of Test Image The face image to be tested is projected onto the training model image to get the exact extracted features. Feature Vector A feature vector is a vector image.Where in the vector has a random variable with the possibility of a face or not. Classifier The classifier in this application uses Naïve Bayes. Where Naïve Bayes is used to classify data (feature vector) based on probability. Decision Making After the smallest probability is known and the test vector has been classified as belonging to a certain subject class, then a decision is made.If the test vector has been classified as belonging to a particular subject class, the decision is made that the subject in the test set is the same as the subject in the training class set. Results and Discussion The hardware that has been assembled can be seen in Figure 4 as follows: The design interface for this application is as follows: Employee data page Employee data page is a page to display information for each employee that has been added.This page displays the employee's name and NIK.This page can also be used to edit and delete data.In this study, the data used for the experiment was taken first from the photo of an idol band member "one direction".The members are including Niall Horan, Liam Payne, Louis Tomlinson, and Harry Styles.They have different facial characteristics which will be stored in the system database used as training.Then on the analysis stage, the test image will be tested using one direction photos simultaneously, and then they will be matched to the face image that has been stored in the system databases.The data were taken from 4 members using 3 photos with a different pose.After the augmentation process, 370 photos were obtained.Testing the confusion matrix method is carried out using a sample dataset of face images from several band personnel "one-direction" face images as the following test set: The result from each class is obtained as follows: = 100% = 76.31% .Based on the calculation in Table 4, the accuracy of the application based on Table 3. 
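The classification stage of the flowchart can be sketched as follows: detected face crops are flattened, projected onto a small eigenface-style PCA basis (assumed here for the feature-extraction step), and the resulting feature vectors are classified with Gaussian Naïve Bayes. The array shapes, random placeholder data and train/test split are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: 370 grayscale face crops of 30x30 pixels and their subject labels.
rng = np.random.default_rng(0)
faces = rng.random((370, 30 * 30))          # in practice: cropped faces from the detector
labels = rng.integers(0, 4, size=370)       # 4 subjects, as in the augmented dataset

X_train, X_test, y_train, y_test = train_test_split(faces, labels, test_size=0.2, random_state=0)

# Feature extraction: project onto the leading eigenfaces.
pca = PCA(n_components=20).fit(X_train)
train_features = pca.transform(X_train)
test_features = pca.transform(X_test)

# The Naive Bayes classifier assigns each test vector to the most probable subject class.
clf = GaussianNB().fit(train_features, y_train)
predicted = clf.predict(test_features)
print("accuracy:", accuracy_score(y_test, predicted))
```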
above is 76.31%, the average precision is 78.25%, and the recall average is 81.25%. Conclusion The conclusions of this study based on the implementation that has been done, are as follows: 1.A Crowd Face Detection attendance model has been made using a Raspberry Pi and a camera with the Naïve Bayes Classifier method which has data storage features on the server, the tool can capture employee faces simultaneously to perform attendance at the same time. 2. The results of the evaluation using the confusion matrix, the accuracy of the application is 76.31%, the average precision is 78.25%, and the recall average is 81.25%.3. The results of the evaluation from users, this face recognition application is easy to use and useful if used as an attendance system, however for the information displayed the user interface needs to be improved as needed.Based on the results of application implementation, found suggestions for application development that can be done in the next research are as follows: 1.Using other algorithms, such as CNN (Convolutional Neural Network) for face recognition in the first step or process, namely face detection.2. Using more complex and detailed features, it not only detects the faces of several people, but also recognizes eyes, voices, and photos.3. When facial recognition is done using a webcam, there should not be too much interference behind the user (noise), it is better to use a plain background and a supporting lighting system.4. The camera used has a minimum resolution of 8 megapixels with the distance of the user's face to the camera not less than ± 4 m at the time of facial recognition. Fig. 3 . Fig.3.Flowchart Software Design The flowchart stages are explained as follows: 1. Face Database Face database is a collection of face images used in a face recognition system.The face images contained in the face database can be used as a training set or a testing set.2. Training Set A training set is a data set used to find potentially predictive relationships.The face database must have an image of the faces of each person or subject in the training set.The facial images in this training set should represent a front view of the person or subject with little difference in point of view.The training set should also include different facial expressions, different lighting, background conditions, and the use of attributes on the face.This training is set with the assumption that all images have been normalized for the m X n array and that the facial image in the training set is only the face area and does not have many other limb images.3.Testing SetA test set is a data set used to assess the strength and utility of a predictive relationship.The test set is obtained with two approaches, first using the face image contained in the face database.Second, using facial images obtained in real time or using video.Capturing face images or face detection will be carried out by a webcam which will detect the user's face.After the user's face is detected, then the user's face image will be located (face locating).Then the application will track the face of the user (face tracking).The system used for face tracking is the Two-Dimensional System, where this system tracks Fig. 5 .Fig. 6 . Fig.5.Employee data page Training data page This page is a page to upload data training.This page can be used to add or delete the data training Fig. 7 . Fig.7.Attendance page Face Recognition During the Testing Process Fig. 8 . 
Fig. 8. Face recognition during the testing process. The result of implementing the system for face detection during employee attendance can be seen in Figure 9; as the figure shows, the application could detect employees at the gate.

Table 1. Training data. Table 2. Testing data. To generate more data for training the models, the training data were augmented using the Keras Python library. The following augmentation options were used: zoom_range, to enlarge the image by a random factor; horizontal_flip, to flip half of the images horizontally at random; and fill_mode, to fill newly created pixels that may appear because of rotation or width/height shifts. Table 4. Results of testing the model.
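The augmentation described above can be reproduced with the Keras ImageDataGenerator. The sketch below is illustrative only, since the exact parameter values, folder layout and batch counts used in the study are not reported; those details here are assumptions.

# Sketch of the kind of Keras augmentation described above; parameter values,
# directory names and batch counts are placeholders, not the study's settings.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,        # random rotations
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.2,           # randomly enlarge/shrink the image
    horizontal_flip=True,     # flip half of the images horizontally at random
    fill_mode="nearest",      # fill pixels created by rotation or shifts
)

# Expand a small set of face photos into a larger training set on disk.
augmented = datagen.flow_from_directory(
    "faces/train", target_size=(64, 64), batch_size=32,
    save_to_dir="faces/augmented", save_format="jpeg")
for _ in range(10):           # each iteration writes one augmented batch
    next(augmented)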
3,935.4
2023-01-01T00:00:00.000
[ "Computer Science" ]
Analysis of Physical Parameters of Limestone Deposits in Ewekoro Formation, Southwestern Nigeria

Physical parameters of limestone in the Ewekoro formation, south-western Nigeria, were determined by a direct laboratory method. The permeability and bulk density values obtained range from 1.47 to 7.99 m s⁻¹ and from 1.26 to 1.90 g cm⁻³, respectively. The resistivity values of the limestone samples collected from the study site were obtained by the direct laboratory method, and the results revealed that the resistivity values fall within 6 and 171 kΩ m. These values correlate favourably with the results obtained from the electrical resistivity method of geophysical prospecting of the study area. The two approaches showed a good degree of correlation in the resistivity values of the limestone and their varying qualities. This work further showed the occurrence of a vast deposit of limestone, which can be of economic importance for mining and industrial purposes. Keywords: bulk density, limestone, permeability, resistivity

Introduction
Mineral exploration work was carried out to analyse the physical parameters of limestone in the Ewekoro formation. This exploration was carried out to map the limestone deposit of the Ewekoro formation, southwestern Nigeria, using a direct laboratory method with a direct-current power supply. This direct approach complements and compares with the study by Badmus and Ayolabi (2005), in which the limestone of the Ewekoro formation was delineated using a Schlumberger electrode array to map both the vertical and lateral extents of the deposit (Figure 1). The approach also serves to confirm the accuracy of the electrical resistivity method of geophysical prospecting for mineral exploration, as well as to characterize the various litho-facies of the Ewekoro limestone (Badmus & Ayolabi, 2005). Badmus et al. (2006) carried out a geoelectric evaluation of the mica schist deposit of Area J4, southwestern Nigeria, using the direct laboratory method to characterize the mica schist into different degrees of purity for economic and commercial purposes. Several studies have also been carried out to evaluate the distribution of limestone deposits at the Ewekoro quarry. Adegoke et al.
(1970) subdivided the limestone deposits into three units. Omatosola and Adegoke (1981) then proposed a fourth unit. These units in stratigraphic order are: sandy biomicrosparite (bottom), shelly biomicrite, algal biosparite and red phosphate biomicrite (top). The sandy biomicrosparite forms the base of the formation and consists of a light brown sandy limestone with very few bioclastic fragments. Stratification is evident and is accentuated by variations in the quantity and grain size of the interbedded quartz and glauconite. The shelly biomicrite consists of pure limestone of about 4.5 to 6.0 m and constitutes the bulk of the Ewekoro formation in the quarry. The limestone has abundant macrofossil content, particularly gastropods, pelecypods, echinoderms and corals. The algal biosparite limestone overlies the shelly biomicrite unit. Resistivity is the only electrical characteristic considered here as a geoelectric property. Most rock-forming minerals are semiconductors with conductivities spreading over a wide range; however, these rocks behave as insulators in their dried form. The amount of moisture contained in a rock depends on its structure, especially the amount of pore space and cracks. Rocks and sediments contain space between grains (pore space), in fractures or in dissolution cavities (limestone), which may be filled with water. Isolating the drain from the atmosphere can lead to an increase of the partial pressure of carbon dioxide within the system and to an increase of the dissolution rate of calcite. More importantly, it also minimizes the potential for oxidation of Fe²⁺ (to Fe³⁺) and the risk of precipitation of Fe(OH)₃ and of other related solids in the drain. Formation of these precipitates can result in premature drain failure due to limestone armouring, which can significantly reduce the rate of calcite dissolution and affect the flow. The presence of Al³⁺ within the influent is also of concern; Al(OH)₃ is formed at a pH between 4 and 5 and it tends to accumulate in pores, thus potentially clogging the voids and consequently reducing the hydraulic conductivity of the material (Poirier & Aubertin, 2011). The porosity and the chemical content of the water filling the pore spaces are more important in governing resistivity than the conductivity of the mineral grains of which the rock is composed. The resistivity curves are deflected towards high levels with positive separation. Glauconite is prevalent in these intervals. The limestone of this formation is dolomitic, sandy and shaly in parts; in some limestone intervals glauconite is present. The marl is light grey, silty, calcareous and grades into limestone (Ghorab, 2010). Limestone is a raw material for the manufacture of cement, asphalt filler, ceramics, flux in glass making, fertilizer filler and explosives, to mention just a few. Jones and Hockey (1964) showed the Ewekoro limestone and the overlying Akinbo shale to be lateral equivalents of the Imo formation of eastern Nigeria. Other authors, such as Omatsola and Adegoke (1991) and Oladeji (1992), have investigated the stratigraphy and depositional characteristics of limestone and clay/shale deposits in southwestern Nigeria. The West African Portland Cement Company also conducted extensive geological surveys and a commercial appraisal of the Ewekoro limestone and shale beds for commercial cement production.
Study Area
The study area lies within Ogun State, which is bounded in the west by the Benin Republic, in the south by Lagos State, in the north by Oyo and Osun States, and in the east by Ondo State. It occupies a total area of … . Ewekoro hosts the West African Portland Cement quarry and lies between longitudes 3°05′E and 3°15′E and latitudes 6°40′N and 6°55′N (Figure 2).

Geology of the Study Area
The study area is located in the sedimentary terrain of southwestern Nigeria. The Ewekoro formation belongs to the Tertiary (Palaeocene and Eocene), and the greater part of the depression is a potential artesian basin from which groundwater can be sourced. Adegoke et al. (1976) outlined the Albian and younger palaeogeographic history of Nigeria and summarized the nature and extent of the transgressive and regressive phases as well as the nature of the sediments. The geology of Ogun State comprises sedimentary and basement complex rocks, which underlie the remaining surface area of the state. It also consists of intercalations of argillaceous sediment. The rock is soft and friable, but in some places it is cemented by ferruginous and siliceous materials. The sedimentary rocks of Ogun State consist of the Abeokuta formation lying directly above the basement complex (Figure 2). This is overlain in turn by the Ewekoro, Oshosun and Ilaro formations, which are all overlain by the coastal plain sands (Benin formation).

Direct Laboratory Measurement Using a D.C. Supply
Eight different samples were collected from different locations within the study area. The resistivity of the limestone samples collected from the study area was analysed using a simple laboratory experimental set-up (Figure 3). The limestone samples were packed into a cylindrical core sampler and saturated hydraulically for 24 h, after which they were dried in an oven to remove the water content completely and were shaped into regular forms so as to make good contact with the pins (connecting wires) inserted into the samples. A direct-current source was used to supply voltage across the two ends of the core sampler. Voltage was supplied at 12 and 24 V, and the corresponding currents were recorded (Table 1). Figure 3. Direct method experimental set-up at 24 V.

Results and Discussions
For the resistivity measurements, there is no significant difference between the values obtained for the rock samples collected. However, all rock samples revealed significant differences in both permeability and bulk density values. For hydraulic conductivity, the formula below was used (Eq. 1). Samples A, D and E belong to facies III, because their resistivity and permeability values revealed that the limestone here is porous with grains, as confirmed by Badmus and Ayolabi (2005), while sample F belongs to facies IV, with the lowest resistivity value and the highest permeability value; the limestone here is confirmed to be highly porous, with cracks of different degrees.
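The resistivity values reported in Tables 1 and 2 follow from Ohm's law and the geometry of the core sampler. A minimal sketch of that calculation is given below; it assumes a cylindrical core of known radius and length, and the example voltage, current and dimensions are hypothetical, not measured values from the study.

# Minimal sketch (not the authors' calculation sheet): resistivity of a cylindrical
# core sample from the measured voltage and current, rho = (V / I) * (A / L).
import math

def resistivity_ohm_m(voltage_v, current_a, radius_m, length_m):
    """Apparent resistivity of a cylindrical core from a DC measurement."""
    resistance = voltage_v / current_a          # Ohm's law, R = V / I
    area = math.pi * radius_m ** 2              # cross-sectional area of the core
    return resistance * area / length_m         # rho = R * A / L, in ohm-metres

# Hypothetical example: 24 V supply, 0.8 mA measured current,
# core sampler of 2.5 cm radius and 10 cm length.
rho = resistivity_ohm_m(24.0, 0.8e-3, 0.025, 0.10)
print(f"{rho / 1000:.2f} kOhm m")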
Conclusion
The resistivity values of the limestone rock samples collected from different locations within the study area, as revealed by the simple laboratory set-up using a DC source of 12 and 24 V, showed that the limestone of the Ewekoro formation has various degrees of quality, as characterized by the resistivity and permeability values. This work also showed, to a certain extent, the accuracy of the electrical resistivity method of geophysical prospecting when compared with the results obtained by Badmus and Ayolabi (2005). It further showed the occurrence of vast deposits of limestone, which can be of economic importance for mining and industrial purposes.

Figure 1. Geological map of Papa-Alanto and its environs (Bulletin: Geological Survey, Nigeria, No. 31; R. D. Hockey, H. A. Jones and J. D. Carter, 1957-61).

The quantities in Eq. (1) are the length of the sample inside the core sampler, H = length of water from the top of the sample inside the core sampler, T = time of water run-off, and K_sat = saturated hydraulic conductivity. Tables 1 and 2 show the resistivity values, permeability values and bulk density obtained from the simple laboratory direct measurement using a direct-current supply of 12 and 24 V. The resistivity values range between … . Table 1. Resistivity obtained from direct laboratory measurement. Table 2. Permeability and bulk density obtained from direct laboratory measurement. Badmus and Ayolabi (2005) had already characterized the limestone into various litho-facies on the basis of resistivity variations, which are now correlated with the results obtained from the direct laboratory measurement. From this result, samples B and C are characterized as facies I because of their high values of permeability as well as resistivity. Samples G and H belong to facies II, which revealed limestone with high compaction and of low economic quality (Table 2).
2,170
2012-06-19T00:00:00.000
[ "Geology" ]
Interactive comment on "The Global Terrestrial Network for Permafrost Database: metadata statistics and prospective analysis on future permafrost temperature and active layer depth monitoring site distribution"

The structure of the Data Management System is described in regard to user operability, data transfer and data policy. We outline data sources and data processing, including quality control strategies. Assessment of the metadata and data quality reveals 63 % metadata completeness at active layer sites and 50 % metadata completeness for boreholes. A Voronoi Tessellation Analysis of the spatial distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73 % are shallower than 25 m and 27 % are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change shows a high representation of areas with smaller expected temperature rise but a lower number of sites within arctic areas where climate models project extreme temperature increase. This paper offers a scientific basis for planning future permafrost research sites on large scales.

Background and motivation
Research on the cryosphere over the last few decades has shown that it has warmed rapidly since the beginning of industrialisation. This warming will probably greatly exceed the global average temperature increase (ACIA, 2004; Groisman and Soja, 2009; Miller et al., 2010; Stocker et al., 2013). Permafrost is defined as ground that remains frozen for at least two consecutive years (Van Everdingen, 1998); it underlies about one quarter of the Northern Hemisphere landmass and is widespread in the Arctic, Antarctic and mountain areas. Further increases in air temperature will induce a warming of subsurface conditions, leading to the thawing of permafrost in some areas. Ongoing permafrost warming (Romanovsky et al., 2010b) and near-surface thawing in permafrost regions associated with rising air temperatures are considered to reinforce warming of the atmosphere through the conversion of the large soil organic carbon pool in permafrost into greenhouse gases, a process termed "permafrost carbon feedback" (Grosse et al., 2011; Hugelius et al., 2013; Schaefer et al., 2014; Schuur et al., 2013).
A recent study shows that a global temperature increase of 3 °C could result in an irreversible loss of 30 to 85 % of the near-surface permafrost, with a corresponding release of carbon dioxide of between 43 and 135 Gt by 2100 (Schaefer et al., 2014). Monitoring permafrost is essential to understand the impact of climate change on its thermal state and to assess the impact of permafrost thaw on the Earth climate system. Increases in permafrost temperature and thickening of the active layer could also have substantial effects on northern infrastructure, forcing local, regional and national governments to devise new adaptation and mitigation plans for permafrost regions (Schaefer et al., 2012). This is why permafrost temperature and active layer thickness have together been identified as an Essential Climate Variable (ECV) by the World Meteorological Organization global observing community (http://www.wmo.int/pages/prog/gcos/). The "Permafrost" ECV is monitored by the Global Terrestrial Network for Permafrost (GTN-P), the primary international programme concerned with monitoring permafrost characteristics (Fig. 1). GTN-P, formerly known as GTNet-P, was developed in 1999 by the International Permafrost Association (IPA), with active support by the Canadian Geological Survey (Brown et al., 2000; Burgess et al., 2000), under the Global Climate Observing System (GCOS) and the Global Terrestrial Observing Network (GTOS) of the World Meteorological Organization (WMO). Two components of GTN-P, the Circumpolar Active Layer Monitoring program (CALM) and the Thermal State of Permafrost (TSP), currently serve as the major providers of permafrost and active-layer data (Romanovsky et al., 2010b; Shiklomanov et al., 2012).

State of the art and research gaps
The GTN-P experienced substantial growth at the beginning of the 21st century (Brown, 2010). Efforts of the IPA and the GTN-P at the end of the IPY resulted in reports on the thermal state of permafrost at high latitudes and high altitudes, which were called the "IPA snapshot" (Christiansen et al., 2010; Romanovsky et al., 2010a; Smith et al., 2010; Vieira et al., 2010; Zhao et al., 2010). The growing amount of high-resolution measurements and the annual collection of permafrost data clearly prompted the need for comprehensive management of the GTN-P, including its data management system. Several databases exist for particular regions, e.g. NORPERM, a database for Norwegian permafrost data including Svalbard (Juliussen et al., 2010), and PERMOS, the Swiss Permafrost Monitoring Network (PERMOS, 2013). The permafrost thermal data from the USA are archived with ACADIS (Advanced Cooperative Arctic Data and Information Service), which took over from the former CADIS (Cooperative Arctic Data and Information Service) as a repository for all data from NSF-funded Arctic research. A good example of DOI-referenced data publication is Nordicana D, an online data report series of the Canadian Centre d'études Nordiques (CEN), including long-term time series of permafrost borehole temperatures (Allard et al., 2014). The Geological Survey of Canada (GSC) has also spent great effort to collect and store thermal permafrost data from the western Arctic, feeding the data into the former GTN-P data management system. The Thermal State of Permafrost (TSP; Brown et al., 2010) and the Circumpolar Active Layer Monitoring (CALM; Shiklomanov et al., 2008) programs oversee the collection of permafrost temperature and active layer thickness data from Arctic, Antarctic and mountain permafrost regions. These programs provide the majority of the content of the GTN-P Database (Fig. 1). Both TSP and CALM provide an online data repository and are actively expanding their observational networks. However, all existing permafrost repositories so far were conceived as rather static aggregations of data, and the modern permafrost community lacks a dynamic database with the capability to interlink field scientists in polar research and scientists working on global climate and permafrost models.
Aims
The long-term goal of GTN-P is to obtain a comprehensive view of the spatial structure, trends and variability in permafrost temperature as well as active layer thickness (GTN-P Strategy and Implementation Plan 2012-2016). The overall aim of the GTN-P Database is to function as an "early warning system" for the impacts of climate change in permafrost regions and to provide the standardized permafrost data needed as input to global climate models. In this paper, we introduce the first dynamic database for the GTN-P. The new GTN-P Database is a state-of-the-art tool for storing, processing and sharing parameters relevant to the permafrost ECV measured in the Arctic, Antarctic and mountain regions. It is hosted at the Arctic Portal in Akureyri (Iceland), managed in close cooperation with the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) in Potsdam (Germany), and supported by the European Union 7th Framework Programme project PAGE21. The specific objectives of this paper are (i) to describe the framework of the GTN-P data management system; (ii) to provide statistics on site distribution in the GTN-P by performing spatial analyses on the metadata; and (iii) to identify spatial gaps in GTN-P by comparing its site distribution with relevant environmental geospatial datasets.

2 Description of the data management system

Database design and web interface
The GTN-P Database is accessible online at the URL http://gtnpdatabase.org. The online interface of the GTN-P Database was developed to maximize usability both for data submitters and for users of the data products. The resulting roles (data administrator, data submitter and data user) are built into the database, providing different rights to read, edit or modify data. Data users can access the database without an account and password and have access to (i) permafrost temperatures, (ii) annual thaw depths and (iii) help sections, while administrators have full access and data submitters cannot modify or delete data of third parties. Data not marked as "published" by the data submitters are not accessible to third parties or the public. The help section provides tutorials and template files for the upload and download of borehole temperature and active layer grid data as well as GTN-P maps and fact sheets.

Data structure and upload
The GTN-P Database is compliant with the existing international standards for geospatial metadata, ISO 19115/2 and TC/221 (www.iso.org). The database specifically builds on the GTN-P metadata form that was developed as a standard by the GTN-P leadership in 1999 (Burgess et al., 2000). Site metadata must be entered and selected from parameters and properties, which are selectable in drop-down lists in the upload interface. Tutorials and templates of data files provide the necessary information to bring the data into the right CSV format prior to data upload. The maximum file size for upload is 1.5 MB. The data upload procedure was conceived to eliminate the need for any prior knowledge of databases on the part of the user. National Correspondents (NCs) from all participating countries were nominated by the national committees and by the international scientific permafrost community to input data on an annual basis, collecting information from the investigators and data managers in their country. NCs are listed on the GTN-P website and can be contacted by permafrost researchers interested in contributing monitoring data to the GTN-P Database. NCs are also encouraged by the GTN-P Executive Committee to pro-actively engage national investigators in the process to ensure a continuous data upload into the system.

Data search and output
The GTN-P Database features both (i) basic search and (ii) custom search functions.
The goal of these functions is to narrow down the number of data records based on a set of criteria. While the basic search is a simple filter by manual character input, the advanced custom search allows the use of multiple search criteria to retrieve a defined list of data records from the repository. The data and metadata associated with the search results can be downloaded by the data user as compressed file packages containing standardized metadata forms in text and XML and the corresponding raw data in CSV (comma-separated values) format. However, the CSV format and the inconsistency of the time series, with regard to completeness, frequency and geometry, do not allow their direct use within climate models, as they do not comply with the CF 1.6 convention (for Climate and Forecast). To address this issue, the GTN-P Data Management System processes and aggregates all data on-the-fly through a set of internal functions and Python libraries. All eligible datasets are aggregated into a NetCDF file that has been formatted to capture the geometry of the data. NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. TSP datasets are linearly interpolated at consistent 0, 1, 2, 3, 5, and 10 m borehole depths. The results are two products in NetCDF format: a TSP dataset of annual time series of borehole temperature profiles in an orthogonal relation and a CALM dataset of annual time series of active layer thickness in a time-orthogonal template (a minimal sketch of this standard-depth interpolation and NetCDF export is given below, after the data policy subsection). Future work focuses on the establishment of data quality control and flags for the data, as well as on the conversion of the distributed station data to a regular grid at locations where the scattering of the monitoring sites allows it.

Data policy
GTN-P follows an open-access policy in line with the IPY data policy. The data management unit of PAGE21 mediates between the GTN-P Database and the PANGAEA Data Publisher for Earth and Environmental Science (Diepenbroek et al., 2002), which provides digital object identifiers (DOIs) for the data products. PANGAEA follows the Principles and Responsibilities of the ICSU World Data System (WDS) and the "Principles and Guidelines for Access to Research Data from Public Funding" established by the Organisation for Economic Co-operation and Development (www.oecd.org). It has also adopted the Creative Commons license procedure, which provides a simple, standardized way to give the public permission to share and use creative work, according to the conditions established by the author. The GTN-P Executive Committee decided on a general embargo period of one year. This means that data from 2015 will be available in 2016 at the earliest, in order to give investigators the first opportunity to publish their data. For special cases, e.g. doctoral dissertations, this embargo may be extended on demand. The data will be made freely available to the public and the scientific community in the belief that their wide dissemination will lead to greater understanding and new scientific insights and that global scientific problems require international cooperation. Data download is unrestricted and requires only a free registration, needed for web security reasons. Before being able to download data, users must accept the terms and conditions of the data use policy. Therein, the user is asked to contact the site PIs prior to publication to prevent potential misuse or misinterpretation of the data. In addition, an email is automatically sent to the contact person of each dataset downloaded to inform them of the interest in the data.
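As noted above, a minimal sketch of the standard-depth interpolation and NetCDF export follows here. It is not the GTN-P production code: the variable names, attributes and example borehole values are assumptions, and the real system handles many profiles, quality flags and CF-compliant metadata.

# Illustrative sketch: interpolate one borehole's temperature profile onto the
# standard depths and write it to a simple NetCDF file (not the GTN-P code).
import numpy as np
from netCDF4 import Dataset

STANDARD_DEPTHS = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 10.0])  # metres

def to_standard_depths(sensor_depths, temperatures):
    """Linearly interpolate a measured profile onto the standard depth levels.
    Values outside the sensor range are clamped to the nearest measurement."""
    order = np.argsort(sensor_depths)
    return np.interp(STANDARD_DEPTHS, np.asarray(sensor_depths)[order],
                     np.asarray(temperatures)[order])

def write_profile(filename, years, profiles):
    """profiles: one interpolated profile (len == 6) per annual time step."""
    with Dataset(filename, "w") as nc:
        nc.createDimension("time", len(years))
        nc.createDimension("depth", len(STANDARD_DEPTHS))
        t = nc.createVariable("time", "i4", ("time",))
        d = nc.createVariable("depth", "f4", ("depth",))
        temp = nc.createVariable("ground_temperature", "f4", ("time", "depth"))
        d.units, temp.units = "m", "degC"
        t[:], d[:], temp[:, :] = years, STANDARD_DEPTHS, np.vstack(profiles)

# Hypothetical borehole with sensors at irregular depths:
profile = to_standard_depths([0.2, 0.6, 1.5, 4.0, 9.0, 15.0],
                             [-1.2, -1.5, -2.0, -2.4, -2.6, -2.7])
write_profile("borehole_example.nc", [2014], [profile])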
3 Data quality

Data sources
A thorough data mining effort was conducted prior to the creation of the GTN-P Database in order to recover as much archived permafrost temperature and active layer thickness data as possible. The recovered datasets were characterized by an extreme diversity. These included global datasets on active layer temperature from the CALM data collection (Shiklomanov et al., 2008), but also datasets aggregated thematically, geographically or institutionally. These other sources include the Advanced Cooperative Arctic Data and Information Service (www.aoncadis.org) at the National Snow and Ice Data Center (http://nsidc.org), the Permafrost Laboratory (University of Alaska, Fairbanks), NORPERM (Juliussen et al., 2010) and PERMOS (PERMOS, 2013), among others. Part of the data was provided by individual permafrost research groups and relayed into the database by the GTN-P National Correspondents. In addition to GTN-P standard datasets on temperature and active layer thickness, several ancillary existing datasets were opportunistically added to the database. These include in particular remotely sensed land surface temperature and surface soil moisture values that were transferred from ESA DUE Permafrost (Bartsch and Seifert, 2012; DUE-Permafrost-Project-Consortium, 2012). At the time of submission of this paper, the GTN-P Database contained metadata from 1074 TSP boreholes and 274 CALM active layer monitoring sites; 31 boreholes are located in mountain permafrost regions and 72 in Antarctica. Currently, 277 borehole sites have temperature data and 78 active layer monitoring sites have thaw depth data. Because one site can have more than one measurement unit or period, the total number of datasets is 1300, including ground temperature, active layer thickness, surface soil moisture, air temperature, and surface temperature.

Data quality control
Data entered into the database undergo several steps of quality control before receiving approval for data output. To harmonize the different data formats and produce one standard format within the GTN-P Database, every dataset retrieved from external sources underwent a review and, if necessary, a standardization ("cleansing") procedure to bring the file into the format needed for upload. This includes in particular the conversion of file structures, date formats, reference points and null values. To match the metadata that accompanied older datasets to ISO standards, a set of obligatory metadata fields was developed, taking into account the existing metadata forms from the old GTN-P web repository (closed in 2014). Metadata input must be compliant with the database rules and include a number of mandatory fields following the terminology of code lists and the controlled vocabulary associated with the GTN-P Database, as documented on the GTN-P website. The interface for metadata input is arranged as drop-down menus that must be completed before the system enables the user to proceed with the data input procedure. In addition to the data quality control of the individual permafrost scientist, the GTN-P Data Management System offers quality control. A successful upload assures the correctness and consistency of the dataset. Screening for obvious errors follows, with the help of automated data visualization during and after the upload procedure. Interactive and adjustable data plots on the database website also serve as on-the-fly data visualization for scientific purposes. According to the GTN-P Strategy and Implementation Plan (2012-2016), metadata and data considered for input into the GTN-P Database will be coordinated and reviewed by National Correspondents (NCs) on a regular basis, at least once per year. Datasets are published only after approval by the NCs.
Additionally, overall quality verification is inherently provided by the production of this article. As stated in the online ESSD journal description, the peer-review process this paper went through ensures that the involved data sets are (i) plausible and without detectable problems, (ii) of sufficiently high quality with clearly stated limitations and (iii) well annotated by standard metadata.

Quality assessment and limits
In geoscience, errors start to emerge already at the measurement stage. The most common technique for continuously recording borehole ground temperatures at specific depths is the use of permanently installed multi-thermistor cables, providing an accuracy and precision between ca. 0.02 and 0.1 °C (Brown et al., 2000; Romanovsky et al., 2010b). The logger resolution and measurement frequency, however, vary with the type and the depth of the individual borehole. Due to active layer dynamics, the relative vertical position of measurement probes can change and hence introduce an error in the depth indications of old boreholes in sensitive areas. Additionally, the number of vertical sensor positions varies not only between and within boreholes (and research groups) but also through time. Commonly, sensors are placed every 0.2-0.4 m down to 2 m depth, every 0.5 m down to ca. 4-5 m depth, every 1-3 m down to 15 m depth, and in the deeper parts of a borehole the sensor spacing is reduced to 5-10 m steps (Brown et al., 2000). As expected, a linear regression on 180 datasets (Fig. 3) indicates that the overall number of temperature sensors increases with increasing borehole depth. Based on these data, however, the average SD of the number of temperature recordings within a borehole at different dates is 14.0. The maximum difference in the number of measurements between dates is 57; the minimum is 0. The GTN-P Database support group established an IPA action group to develop strategies for profound numerical assessment and control of the GTN-P data quality. The active layer thickness data (CALM) generally have fewer potential biases, because the majority of sites measure summer thaw depths by mechanical probing in grids or transects, resulting in multiple measurements compared to the point locations associated with sites using thaw tubes or temperature boreholes. The complex nature of grid metadata, however, created inconsistencies in the structure of the primary data files. Even though the files were standardized before implementation, the low resolution of a number of CALM grid references and TSP borehole coordinates led to imprecise geopositioning; 275 longitudes and 287 latitudes have fewer than 4 decimal places, and 374 datasets had coordinates with decimal degree precision below 4 decimal places for either the latitude or the longitude or both. These datasets have been flagged and will be submitted to the NCs for revision. We assessed the overall metadata completeness for TSP and CALM datasets by calculating the percentage of available fields that are filled in. Figure 4 indicates the percentages of both data types according to the metadata completeness. CALM metadata are generally more complete, with values between 50 and 80 % (average 63 %). TSP metadata have a bimodal distribution of completeness, with most datasets between 31 and 40 % and a second peak between 61 and 70 % (average 50 %).
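The completeness figures above amount to the share of metadata fields that are non-empty per dataset. A small sketch of that calculation is shown below; the field names and values are invented placeholders, not actual GTN-P records.

# Sketch of the completeness metric described above: the share of metadata
# fields that are filled in, per dataset (field names here are placeholders).
import pandas as pd

meta = pd.DataFrame([
    {"site": "TSP-001", "slope": 3.0, "aspect": None, "veg_zone": "tundra",
     "borehole_diameter": None, "permafrost_thickness": 220.0},
    {"site": "CALM-017", "slope": None, "aspect": None, "veg_zone": "forest tundra",
     "borehole_diameter": None, "permafrost_thickness": None},
]).set_index("site")

completeness = meta.notna().mean(axis=1) * 100   # percent of filled fields per site
print(completeness.round(1))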
Metadata fields with the most missing information are accessibility, distance from disturbance, bibliographic references, terrain morphology, hydrology, slope and aspect, borehole diameter and permafrost thickness. While this "extra" information is not essential for direct permafrost monitoring, it is relevant for gaining a holistic future view of the thermal state of permafrost by feeding high-quality data to global models.

Spatial sample representation of TSP and CALM sites
Table 1 summarizes the distribution of boreholes and active layer monitoring sites per country. The total numbers per country and permafrost zone were calculated by plotting the sites as points and the areas as polygons in ArcGIS. During the analyses, some polygons and site coordinates suffered from inaccuracy; e.g. terrestrial boreholes with imprecise coordinates were shown as "offshore" sites. In these cases, land-ocean polygon boundaries were slightly shifted and the land polygons extended to capture the relevant points. For calculating the borehole-per-area ratios, however, we used the original polygon dimensions. In order to measure the degree of inhomogeneous sampling and to identify the main geographical gaps, we performed a numerical quantification of the distribution of boreholes and active layer grids in the Northern Hemisphere with the help of a Voronoi Tessellation Analysis (VTA), as suggested by Molkenthin et al. (2014). To reduce the potential bias that results from multiple boreholes or active layer monitoring grids around the same coordinate, or very close to each other, buffers of 1 km radius were created around each coordinate in ArcGIS. Sites with a site-to-site distance of ≤ 2 km were merged, and the gravitational centres of the resulting buffer areas were converted to points for further calculations. With the help of this method we reduced 1073 TSP coordinates to 614 buffered TSP sites and 242 CALM coordinates to 187 buffered CALM sites. Voronoi cells were calculated using the Thiessen polygon tool and subsequently clipped to the extent of the IPA map of permafrost zones. The VTA creates a mosaic by drawing area (cell) boundaries exactly in the middle between neighbouring nodes: TSP sites (Fig. 5) and CALM sites (Fig. 6). Every point within a cell is closer to its node than to any other node. Glaciated areas (shapefile from NaturalEarthData, 50 m resolution) were removed from the analysis. In a VTA, a uniform distribution of sites would result in a maximum peak of the cell size distribution at the same value as A_total/N_cells (Molkenthin et al., 2014), which is basically the same as the mean Voronoi cell size. Hence, to quantify the overall deviation from equidistant sampling of the terrestrial Northern Hemisphere permafrost and glacier-free area, we used the SD of the Voronoi cell size distribution for TSP (SD: 9.08 × 10⁴ km²) and CALM (SD: 8.68 × 10⁴ km²). For visualization, we calculated the number of Voronoi cells in a size sequence doubling by a factor of 2 (1 to 2, 2 to 4, 4 to 8, ..., 1.05 × 10⁶ to 2.10 × 10⁶ km²) and plotted the results on a logarithmic scale (Fig. 7). Voronoi cell size ranges were attributed to the same colour classes as in Figs. 5 and 6.
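A rough sketch of the Voronoi Tessellation Analysis is given below. It is not the ArcGIS workflow used in the study: it works in planar kilometre coordinates, skips the clipping to the IPA permafrost zones and to glaciated areas, ignores unbounded outer cells, and uses random points in place of the real site coordinates.

# Rough sketch of the VTA: merge sites closer than 2 km, build Voronoi cells,
# and measure the spread of finite cell sizes (planar coordinates in km).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial import Voronoi

def merge_close_sites(xy_km, max_dist_km=2.0):
    """Replace clusters of sites within 2 km of each other by their centroid."""
    labels = fcluster(linkage(xy_km, method="single"), max_dist_km, criterion="distance")
    return np.array([xy_km[labels == k].mean(axis=0) for k in np.unique(labels)])

def polygon_area(points):
    """Shoelace formula for a polygon given as an (n, 2) array of ordered vertices."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def finite_cell_areas(xy_km):
    """Areas of all bounded Voronoi cells; unbounded outer cells are skipped."""
    vor = Voronoi(xy_km)
    areas = []
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if region and -1 not in region:          # -1 marks a vertex at infinity
            areas.append(polygon_area(vor.vertices[region]))
    return np.array(areas)

sites = merge_close_sites(np.random.default_rng(0).uniform(0, 4000, size=(600, 2)))
areas = finite_cell_areas(sites)
print(f"mean cell size {areas.mean():.0f} km2, SD {areas.std():.0f} km2")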
According to the VTA, the TSP cell size distribution peaks twice, at values smaller than A_total/N_cells = 3.79 × 10⁴ km², indicating a significantly clustered sample distribution. The bimodal TSP size distribution is attributed to (i) a linear spatial sample configuration along transportation corridors in areas with developed economies and infrastructure, as well as several high-density borehole transects, and (ii) the good coverage and high number of boreholes in Alaska, both indicated in green colour. The CALM cell size peaks at about the same value as A_total/N_cells = 1.25 × 10⁵ km². The plateau between 2 × 10⁴ km² and ca. 6 × 10⁵ km², however, indicates a clustered sample distribution, albeit the pan-arctic CALM sampling is clustered to a lesser degree than the borehole configuration. The high skewness of both the TSP (4.99) and CALM (6.52) cell size distributions indicates that the peaks are inclined towards higher cell size values, demonstrating an inhomogeneous sample distribution. The boundaries of the bigger Voronoi cells (orange and red), and especially their intersections (Figs. 5, 6 and 9), indicate locations with the highest potential for improving the representativeness of permafrost monitoring from a hemispherical or global perspective. However, this statement is based on a purely statistical view of the Northern Hemisphere and does not take into account disturbing landscape features such as water bodies, forest fires, infrastructure, areas of deforestation, urbanization, farming, mining and wetland drainage.

Site distribution compared with soil organic carbon content and vegetation
To identify the main geographical gaps in the distribution of boreholes and active layer monitoring sites, we compared the GTN-P metadata with environmental data from different sources. Permafrost thaw is likely to foster the metabolization of the greatest organic carbon pools in the Northern Hemisphere and is believed to create a positive feedback to the Earth's climate system by releasing enormous amounts of greenhouse gases (Grosse et al., 2011; Schaefer et al., 2014). Together with the TSP and CALM Voronoi cell boundaries and simplified permafrost zones, we illustrated the pan-arctic distribution of soil organic carbon content within the top two metres by using data from the Northern Circumpolar Soil Carbon Database (Hugelius et al., 2013) in Table 2. The distribution of CALM and TSP point coordinates was calculated within the different carbon content groups and shows that, at the circumpolar scale, 25.2 % of all boreholes and almost 29 % of all CALM sites are located in permafrost areas that contain more than 25 % organic carbon, while only 1.7 % of the boreholes and zero CALM sites cover areas with more than 50 % organic carbon. We conducted a similar analysis using vegetation zones. For this, we used the vegetation zone information provided in the original GTN-P metadata. Locations with missing vegetation information were attributed to vegetation zones by using photographs of the site (if available) and/or other sources such as atlases of the local flora. This information is provided in Table 2. Because of the wide variety of sources used to define vegetation zones, we prefer not to base recommendations for future locations of monitoring sites on this information. However, the treeline (Walker et al., 2005), through its function as a major ecotone between forest and tundra, offers high potential for sensitive recording of climate change signals (Biskaborn et al., 2012) and is therefore shown in Fig. 9.
Preferential slope orientation
Topography, and in particular slope orientation, influences the amount of solar radiation received by the ground surface and the accumulation of snow. Due to orbital parameters, permafrost in mountainous regions at lower latitudes of the Northern Hemisphere occurs preferentially on north-facing slopes. Similarly, in continuous permafrost regions, the active layer is usually thinner on north-facing slopes (French, 2007). To inspect the monitoring bias that might be caused by preferential slope orientation, we analysed the slope and aspect for boreholes and active layer sites. Only a few of the original GTN-P metadata collections contained slopes and aspects of the ground surface at the permafrost borehole or active layer grid sites, and this information also existed in various formats. We used the ESA DUE Permafrost circumpolar digital elevation model (Santoro and Strozzi, 2012) in ArcGIS to calculate slope and aspect from the Northern Hemisphere topography. This remote-sensing-derived model, however, has a resolution of 100 m, and therefore the calculated values (in degrees) for each site north of 60° N should be evaluated carefully. Figure 10 shows the slopes and aspects and their statistics for the original metadata and the calculated values in spherical projections plotted with STEREONET 9.2 (Cardozo and Allmendinger, 2013). The graph includes the surface areas at TSP and CALM sites both as (i) planes in equal-angle projections and as (ii) frequencies of slope aspects in rose diagrams with a bin size of 30°. A comparison between the original metadata and the DEM-derived values shows major differences for the CALM sites, amplified (i) by the very low number of slope metadata entries (n = 8) and (ii) by the fact that most active layer monitoring sites are located on more or less flat terrain. It must be considered, however, that recent CALM sites are usually selected for constant geophysical conditions on flat watersheds and only a few historically adapted sites are located on slopes. The closer the slope values are to zero, the higher the potential uncertainty in the aspect values. Aspects in the original TSP and CALM metadata had various formats, including verbal descriptions and abbreviations of main (rough) geographical directions. Accordingly, these rose diagrams and planes are concentrated in categorized directions such as N, NW, WNW, etc. A higher overall number of TSP borehole slopes and aspects (n = 48) from the metadata than for CALM sites enabled a more reliable comparison between original and calculated values. For all TSP sites north of 60° N, 25 % of the original metadata and 20 % of the DEM-derived data fall in the bin between 271 and 300°. Both mean vectors point towards a WSW direction, as indicated in Fig. 10 by the arrows. The fact that slopes at the borehole and CALM sites dip towards a preferential direction indicates that the monitored ground receives a different amount of incoming solar energy than the average. Therefore, preferential slope orientation causes a bias in the overall representativeness of temperature monitoring and should be taken into account when using the data for global models.
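For reference, slope and aspect can be derived from a DEM grid with plain NumPy, as sketched below. This is only the basic geometry, under the assumption that array axis 0 points north and axis 1 points east with a fixed cell size; the study itself used the ESA DUE Permafrost DEM in ArcGIS, and raster orientation and aspect conventions must be checked against the actual data.

# Sketch only: slope and aspect from a DEM array whose axis 0 increases northward
# and axis 1 increases eastward, both with the same cell size in metres.
import numpy as np

def slope_aspect(dem, cell_size_m):
    """Return slope and aspect (degrees) for each cell of a DEM array."""
    dz_dn, dz_de = np.gradient(dem, cell_size_m)      # gradients to the north and east
    slope = np.degrees(np.arctan(np.hypot(dz_de, dz_dn)))
    # Aspect: compass bearing of the downslope direction (0 deg = north, 90 deg = east).
    aspect = (np.degrees(np.arctan2(-dz_de, -dz_dn)) + 360.0) % 360.0
    return slope, aspect

# Tiny synthetic DEM rising towards the east-northeast (so it dips towards the WSW),
# on 100 m cells.
dem = np.fromfunction(lambda i, j: 100.0 + 0.8 * i + 2.0 * j, (50, 50))
s, a = slope_aspect(dem, 100.0)
print(f"median slope {np.median(s):.2f} deg, median aspect {np.median(a):.0f} deg")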
The distribution of GTN-P sites within zones of projected temperature change
Climate models project temperature increases in the Arctic towards the end of the 21st century that are larger than anywhere else on Earth (ACIA, 2004; Stocker et al., 2013). CMIP5 models show that for each degree of global temperature increase, about 1.6 × 10⁶ km², or ca. 1/4 of the present permafrost area, is expected to start to disappear (Koven et al., 2013), and boreal landscapes will most likely lose all present discontinuous permafrost zones by the end of the 21st century (Slater and Lawrence, 2013). To assess the distribution quality of present permafrost temperature monitoring, we calculated the number of TSP and CALM sites per zone of projected temperature change for 15 different climate models. Differences of mean annual near-surface temperature between 2070-2099 AD and 1970-2000 AD for representative concentration pathways (rcp's) 4.5 and 8.5 were taken into account for the following models: ACCESS1-0, bcc-csm1-1, CanESM2, CCSM4, CNRM-CM5, CSIRO-MK3-6-0, GISS-E2-H, GISS-E2-H, GISS-E2-R, HadGEM2-ES, inmcm4, IPSL-CM5A-LR, MPI-ESM-LR, MRI-CGCM3 and NorESM1-M. Figure 11 shows that under rcp 4.5, an intermediate greenhouse gas emission scenario, most boreholes and CALM sites are located in relatively narrow zones of less extreme projected temperature change (ca. 3-6 °C for TSP and ca. 2-5 °C for CALM). The high-emission scenario rcp 8.5 projects a more extreme temperature increase for larger areas, and more GTN-P monitoring sites are located in zones of up to 10 °C potential temperature rise. A comparison of the applied models shows that, depending on the model uncertainties and the variety of possible climate futures, the spatial distribution of projected temperature change varies from model to model. This is why increasing the number of soil temperature and active layer monitoring sites by filling the main geographical gaps is critically important to constrain projections of the impact of climate change on permafrost.

5 Conclusions
The GTN-P Database contains standardized and quality-checked permafrost temperature and active layer thaw depth data from the Earth's permafrost regions: 1074 TSP boreholes and 274 CALM sites. The associated Data Management System provides automated visualization and data output formats developed for the needs of a wide variety of users, including climate modellers. GTN-P metadata statistics can help to identify potential new monitoring sites. Vegetation types, soil organic carbon content and the slope orientation at boreholes and active layer depth monitoring sites show the existence of biases and hinder the representativeness of these sites at the global level. The distribution of GTN-P sites according to projected temperature change shows a high representation of areas with smaller expected temperature rise but a lower number of sites within arctic areas where climate models project extreme temperature rise. We conclude that, for gaining a representative global view of the thermal development of the Earth's permafrost landscapes, more permafrost monitoring sites must be established at key sites and entered into the GTN-P Database. These sites should preferentially be located in areas where monitoring is lacking, but also where soil organic carbon contents are high and projected temperature change is high. This paper offers a scientific basis and maps for planning future permafrost research monitoring sites, which could feed into existing planning efforts such as the Global Cryosphere Watch (GCW) Implementation Plan 2015 (http://globalcryospherewatch.org).

…% of all TSP boreholes belong to the surface class. Deep boreholes are generally older than shallow boreholes. The average drilling dates (AD) for the GTN-P depth classes are as follows: SU = 2003; SH = 1997; IB = 1993; DB = 1984. The overall average drilling date of boreholes is 1997. However, only 82 % of TSP datasets contain metadata information about borehole ages. The lack of age metadata affects all depth classes. The average borehole depth of datasets without age information is 29 m. The oldest borehole currently present in the database is located in Russia (Vorkuta K-887) and was drilled to 85 m depth in 1957.

Figure 1. Framework within the Global Terrestrial Network for Permafrost, defined by permafrost temperature and active layer thickness data from the TSP and CALM programs, respectively. Figure 3. Number of temperature measurement sensors per borehole, based on 180 datasets containing temperature time series data. Table 1. TSP borehole and CALM active layer monitoring site distribution; cell colour darkens with increasing values within rows. Table 2. Distribution of soil organic carbon contents in the top 200 cm from the Northern Circumpolar Soil Carbon Database (Hugelius et al., 2013) and vegetation zones (taken from the standardized GTN-P metadata) within the borehole and active layer monitoring site distributions.
7,912.8
2015-03-09T00:00:00.000
[ "Geology" ]
Connected diagnostics to improve accurate diagnosis, treatment, and conditional payment of malaria services in Kenya Background In sub-Saharan Africa, the material and human capacity to diagnose patients reporting with fever to healthcare providers is largely insufficient. Febrile patients are typically treated presumptively with antimalarials and/or antibiotics. Such over-prescription can lead to drug resistance and involves unnecessary costs to the health system. International funding for malaria is currently not sufficient to control malaria. Transition to domestic funding is challenged by UHC efforts and recent COVID-19 outbreak. Herewith we present a digital approach to improve efficiencies in diagnosis and treatment of malaria in endemic Kisumu, Kenya: Connected Diagnostics. The objective of this study is to evaluate the feasibility, user experience and clinical performance of this approach in Kisumu. Methods Our intervention was performed Oct 2017–Dec 2018 across five private providers in Kisumu. Patients were enrolled on M-TIBA platform, diagnostic test results digitized, and only positive patients were digitally entitled to malaria treatment. Data on socio-demographics, healthcare transactions and medical outcomes were analysed using standard descriptive quantitative statistics. Provider perspectives were gathered by 19 semi-structured interviews. Results In total 11,689 febrile patients were digitally tested through five private providers. Malaria positivity ranged from 7.4 to 30.2% between providers, significantly more amongst the poor (p < 0.05). Prescription of antimalarials was substantially aberrant from National Guidelines, with 28% over-prescription (4.6–63.3% per provider) and prescription of branded versus generic antimalarials differing amongst facilities and correlating with the socioeconomic status of clients. Challenges were encountered transitioning from microscopy to RDT. Conclusion We provide full proof-of-concept of innovative Connected Diagnostics to use digitized malaria diagnostics to earmark digital entitlements for correct malaria treatment of patients. This approach has large cost-saving and quality improvement potential. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-021-01600-z. countries, only 380 laboratories were accredited against international standards, with many countries not even hosting one single accredited provider [2]. Also, in urban and semi-urban centres where lab facilities and staff are usually better available, inaccurate diagnoses with limited sensitivity and specificity are common practice [1,[3][4][5]. Fever is one of the most common reasons people in Africa visit health providers. However, due to the aforementioned challenges with lab providers across the continent, febrile patients are often not diagnosed using laboratory tests, but only presumptively, based on clinical grounds [1]. The most common presumptive diagnosis in febrile patients is malaria, followed by bacterial infection(s), resulting in over-prescription of antimalarials and antibiotics. A recent study indicated that more than 70% of fever cases in Tanzanian children was caused by viral infections, against which antimalarials and antibiotics do not work [6]. SSA countries hold a disproportionately high share of the global malaria burden. In 2018, the region hosted 93% of malaria cases and 94% of malaria deaths [7]. Over 99% of malaria cases in malaria-endemic areas of Africa were caused by P. 
falciparum in 2018 [7]. Substantial investments continue to be needed to reduce malaria morbidity and mortality in Africa. Estimates show that by 2025, the global annual malaria investments should increase to $7.7 billion [8]. However, global funding for malaria control and elimination in Africa is flat-lining and the recent call for universal health coverage (UHC) puts pressure on traditional vertical funds for malaria, such as the Global Fund to Fight Aids Tuberculosis and Malaria (GFATM) and President's Malaria Initiative (PMI) [9,10]. Additional challenges to malaria service delivery are emerging with the recent outbreak of COVID-19 [11]. Important drawbacks in combatting malaria were encountered in West Africa during the recent Ebola outbreak, setting examples for responding to the imminent COVID-19 pandemic [12], indicating the importance of more efficient, targeted digital malaria service delivery. Presumptive diagnosis and treatment for malaria in febrile patients often lead to over-prescription of malaria drugs in malaria-endemic African countries [13,14]. This can have key health implications, including the development of drug resistance, higher risk of treatment failure, and increased morbidity. Besides, it leads to economic implications such as unnecessary drug costs, recurring visits, and economic productivity losses due to longer sick days. Consequently, the costs for both households and healthcare providers increase [15,16]. Moreover, the prescription of ineffective drugs can lead to reduced patient trust in healthcare provision with subsequent decreased willingness to participate in financial (insurance) schemes for pre-payment and risk-sharing. Recent developments in digital technology provide new opportunities to address the above situation in a radically different way. Firstly, the emergence of the Internet of Things comprising a rapidly growing arsenal of 'digital diagnostics': tools and devices that digitalize and link human physical parameters to the internet, complete with sensing and measuring capabilities [17]. Secondly, the mobile revolution: 75% of the African population has access to a mobile phone. In Kenya, mobile penetration is at 86%, and still growing rapidly [18,19]. Thirdly, Africa is leapfrogging with the advent of 'bankless banking': digital payment systems through mobile phones transferring entitlements between individuals. An example of this is M-PESA, launched in 2007 in Kenya: an electronic mobile money service to store, send and receive money on any mobile phone (smartphone or non-feature) with an M-PESA account [20]. Since 2014, the non-profit foundation PharmAccess has leveraged the above developments to create a 'mobile health wallet' linked to the M-PESA payment platform, known as M-TIBA ('mobile therapy' in Swahili). Today, M-TIBA is hosted by CarePay International and represents the first African digital platform exchanging data and funds/entitlements that are exclusively earmarked for health and healthcare [21]. M-TIBA works on simple, non-feature mobile phones as well as smartphones. It connects patients, healthcare providers and healthcare payers (such as insurers and donors) and exchanges data and entitlements between them. Users can save into their own (family) wallets, they can receive money from relatives elsewhere in the country, from donors, and even from individuals in other countries willing to donate directly for health [22]. 
This is a first step in creating new digital solidarity mechanisms where people can financially contribute to each other's health: the rich for the poor, the healthy for the sick, the young for the old, and communities for individuals. As of April 2020, 4.2 million people in Kenya, Nigeria, Tanzania and 1500 health providers on the continent were connected to the platform [21]. Given the above developments, we saw an opportunity within the Kenyan context to combine digital diagnostics and M-TIBA technology to support marked improvements in the efficiency of diagnosis and treatment of febrile diseases. This approach is referred to as 'Connected Diagnostics (ConnDx)' . ConnDx introduces a new system-wide digital delivery approach for malaria care, linking phones to diagnostics and payment systems. Providing ConnDx services yields simultaneous real-time insight into data and funds exchanged between patients, providers, and healthcare payers. ConnDx digitalizes rapid diagnostic tests (RDTs) through photography and interprets and stores results in cloud-based databases through dedicated RDT readers. Such readers are available, like the Fio Deki Reader ™ [23], i-CalQ lateral flow-test imaging [24] and the Mobile Assay 'lab-on-mobile device' platform [25]. Moreover, some Apps can directly digitalize RDT results [26]. The digital diagnostic results subsequently inform a mobile phone-based payment system (M-TIBA) to channel dedicated funds for diagnosis and treatment of the pertinent disease to the provider and/or patient. A recent proof-ofprinciple of ConnDx was provided in Samburu, Kenya, in which RDTs interpreted by the Fio Deki Reader ™ were used to diagnose brucellosis in remote populations [27]. Brucellosis, a rare bacterial zoonotic illness, is often underdiagnosed due to having similar febrile symptoms as malaria [28]. Patients testing positive were directly linked to funds through their respective mobile health wallets to allow for payment of drugs for the pertinent diseases, conditional to a positive diagnosis of these diseases. Feasibility was demonstrated, patients were correctly treated, their user experience was positive, and hotspots of brucellosis were identified [27]. Having proven that ConnDx could be effective in identifying a rare disease in rural, remote populations, this paper reports on the next step: scaling ConnDx to a more densely populated area for a more prevalent disease (malaria) that causes high morbidity, mortality and serious economic impact. The objective of this study is to evaluate the feasibility, user experience, and clinical performance of a ConnDx intervention in Kisumu, Kenya. Additionally, the aim is to assess the over-prescription of antimalarials in this population. The location of choice, Kisumu County in Kenya, has a population of 1.2 million [29], and a reported high malaria prevalence of approximately 28-35% in the general population [30,31]. Our intervention took place from October 2017-December 2018 amongst five private health providers and a total of ~ 12,000 beneficiaries. The private sector market share for malaria is large in Kisumu and the whole of Kenya. The majority (three-fourths) of antimalarials are distributed through private sector providers [32]. Methods The ConnDx process starts upon patient presentation at participating providers or healthcare outreach events. The project was announced to the public through posters in the waiting room referring to a free 'Malaria Test & Treat Campaign' . 
Consenting patients presenting with fever and/or malaria symptoms were referred by informed clinicians for malaria testing by qualified lab technicians using malaria RDTs, digitalized with a Fio Deki Reader™. The Reader contained a drop-down menu to collect demographic data (gender, age, pregnancy status), basic economic data (socioeconomic status) and geographic location data (community of residence). For socioeconomic status assessment, three questions were added to the menu pertaining to access to electricity, toilet type and education level of the household head. These questions had previously been identified as the most informative by principal component analysis, using a Multiple Indicator Cluster Survey in Nyanza for reference [33]. This led to a poverty index, in which level 1 corresponds to the poorest category and level 3 to the least poor. With geographical location data, malaria hotspots can be identified. In this context, we define a hotspot as a geographical area in which the malaria transmission intensity is significantly higher than in the surrounding area. Simultaneously, the participating patients were enrolled on the M-TIBA platform and individual health wallets were created. The collected Fio Deki Reader™ demographic data, along with patient test results, was uploaded to the Fionet cloud database and linked to the patient's mobile health wallet account using a unique M-TIBA transaction code. Only patients with a positive diagnosis were entitled to receive funding for antimalarials, which (in an interim functionality) was paid to their respective healthcare providers, thus effectuating fully paid treatment for the patients. Patients received antimalarials at the same provider at which they received a positive diagnosis. Figure 1 provides a schematic illustration of the ConnDx process.
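The enrolment and payout logic described above (a three-question poverty score, and treatment funds released only against a positive, digitalized RDT result) can be sketched in a few lines. The field names, scoring rule and payout amount below are illustrative assumptions, not the actual Fionet or M-TIBA data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RdtRecord:
    # Assumed fields; the real Deki Reader / Fionet schema may differ.
    wallet_id: str          # M-TIBA health wallet reference
    rdt_result: str         # "positive", "negative" or "error"
    gender: str
    age: int
    community_unit: str     # used later for hotspot mapping
    has_electricity: bool
    toilet_type: str        # e.g. "flush", "pit", "none"
    head_education: str     # e.g. "none", "primary", "secondary+"

def poverty_level(r: RdtRecord) -> int:
    """Toy three-question poverty index: 1 = poorest, 3 = least poor (illustrative weights)."""
    score = (int(r.has_electricity)
             + int(r.toilet_type == "flush")
             + int(r.head_education == "secondary+"))
    return 1 if score == 0 else (2 if score == 1 else 3)

def treatment_entitlement(r: RdtRecord, amount_kes: int = 300) -> Optional[dict]:
    """Release earmarked treatment funds only for a valid, positive RDT result."""
    if r.rdt_result != "positive":
        return None
    return {"wallet": r.wallet_id, "amount_kes": amount_kes, "earmark": "malaria-treatment"}
```

In the pilot, the corresponding payout went to the provider rather than to the patient wallet directly, and the amount shown here is purely illustrative.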
ConnDx was implemented from October 2017 to December 2018 across five private care providers (anonymized A-E) within sub-counties of Kisumu County. Providers were selected based on DHIS-2-registered (high) patient throughput, geographic location close to water and the willingness of management to participate, and ranged from smaller providers with set opening hours to large 24/7 hospital facilities. The provider perspectives on the use of ConnDx were gathered through 19 semi-structured interviews at the participating health providers. Interviewees were selected in agreement with the facility manager or based on the availability of staff members at the time of the interviews. Interviewed staff were managers, lab technicians, receptionists, and doctors or nurses. Informed consent was obtained from each interviewee before the interview. The interview guide, specifically developed for this study, covered topics related to the acceptability of the intervention and outstanding outcomes of the quantitative data analysis that needed further clarification (Additional file 1). In total, four types of interview guides were constructed, based on the professional background of the respondents. The first interview served as a pilot interview to test the interview guide. The interviews took place at the work location of the interviewees. One PharmAccess researcher held, transcribed, coded, and analyzed the interviews. The first five interviews were coded openly. Thereafter, all codes were compared and a single list of codes was created and used to code the remaining 14 interviews. The interviews were analyzed by applying a thematic analysis approach using the software Atlas.ti [34] to identify patterns and themes within the data. Two other researchers checked the coding and theme patterns to ensure consistency in the analysis of the interviews. Quantitative data from both cloud-based databases (FIO and M-TIBA) were analyzed through descriptive statistics in Stata 12 and Microsoft Excel 2016. Descriptive analyses were done separately for each provider and over time. Relationships between categorical variables were tested using the chi-square test to calculate p values and compare proportions between groups. A p value of < 0.05 was considered statistically significant. For geographic location analyses, Kisumu County was divided into its 163 administrative community units, each with roughly 1000-1200 households. Participants were asked at the point of care in which community unit their home was located. Geographic analyses were performed in QGIS (Version 3.0) [35] by using the geographic centre of each community unit as a proxy for the location of the participants' domicile. The project was positioned as a quality improvement project of existing malaria services according to Kenyan Guidelines and run in parallel to existing systems and services. All study participants provided consent for participation before the enrolment process into M-TIBA. Results Between October 2017 and December 2018, 11,689 people with fever were tested at the five private providers (A-E). Figure 2 depicts the numbers of malaria RDTs uploaded by location of these providers and participants' domiciles according to the community unit they reported living in. Figure 3 shows the participation dynamics per provider during the intervention. Providers A and B started with ConnDx in October 2017; providers C, D and E enrolled in April 2018. Malaria testing rates coincided with the rainy seasons in Kisumu for 2017 and 2018 ("long rains" from March to June, and "short rains" from October to December), known to increase malaria transmission. During the study period (2018), rainfall in Kisumu was particularly high and prolonged compared to the average [36]. Of all 11,689 consenting patients who were tested for malaria, there was an 18.3% overall test positivity rate. The descriptive analyses showed that slightly more women were tested (58.2%), but more positive cases were found amongst men (19.0% versus 15.4%). The mean age of patients was 23.6 years and 16.5% of patients were aged under 5 years. For children under 5 years of age, the percentage of positive malaria cases was comparable to that of adults, at 17.7% (n = 340/1926). Of the participants, 63.1% were members of NHIF, the national health insurance in Kenya. A chi-square test showed that patients in poverty level 1 were significantly more often tested positive for malaria than patients with a higher poverty score (p value < 0.05). The RDT result uploads of ConnDx allowed for quality assessment, since the Deki Reader™ indicates 'error' at the point of care when the RDT is not performed correctly. Figure 4a-c illustrate the quality improvement of lab technicians' RDT performance, represented by reductions of error rates over time. Figure 4a demonstrates considerable overall improvement over the first month of RDT usage (from the 25% range down to below 5%). Figure 4b,c provide individual data for later starters E (improvement) and D (continued good performance).
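The group comparisons reported here and below (for example, positivity by gender or by poverty level) rest on chi-square tests of proportions on contingency tables. A minimal sketch using scipy is shown below; the counts are invented only to echo the reported proportions and are not the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table: rows = gender, columns = RDT result.
table = np.array([
    [929, 3960],    # men: positive, negative (made-up counts)
    [1047, 5753],   # women: positive, negative (made-up counts)
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Positivity differs significantly between groups at the 5% level.")
```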
For all participating providers, general over-prescription of antimalarials was observed. Over-prescription is defined as the proportion of antimalarials dispensed relative to the actual number of positive cases identified by a malaria test. The overall over-prescription was 28.0%, fluctuating between 4.6% (provider D) and 63.3% (provider B). There were fluctuations in over-prescription over time, as illustrated in Fig. 5. High over-prescription rates were recorded before the introduction of ConnDx at provider B. During the ConnDx intervention, over-prescription declined significantly. Only towards the end of the intervention was an increase observed. Figure 6 demonstrates M-TIBA-derived information on individual provider-specific prescribing behaviour in choices of drugs (1st line and 2nd line) for antimalarials. Descriptive analyses showed that the most common prescriptions were artemether/lumefantrine (75.4%; either branded (Coartem, 43.7%) or generic (28.5%)), followed by artemether injection, a 2nd line drug (15%). Overall, 75.6% of all antimalarials dispensed by the providers were 1st line and 24.4% were 2nd line. It was found that 2nd line antimalarials were significantly more frequently prescribed to the more affluent population and to participants who were not insured with NHIF (p value < 0.05; chi-square test). Provider A dispensed the most 2nd line antimalarials (average 53.7%, ranging from 26.5% to 100.0%). This was followed by provider D (24.7%), provider B (9.2%), provider E (7.1%) and provider C (3.9%). Figure 6 illustrates that three (A, C, D) out of five private providers substantially deviated from the Kenyan prescription guidelines. The choice of branded versus generic antimalarials appeared dichotomous: providers A, B and E prescribed predominantly generics, and providers C and D branded drugs. Qualitative interviews with the healthcare providers identified various perceived challenges and opportunities. Concerning opportunities, ConnDx malaria diagnosis through RDT was experienced as easier, more efficient, and faster. Providers mentioned that patients were also positive towards this way of testing for malaria, which qualified them to receive payments for treatment. Moreover, healthcare providers mentioned ConnDx providing interesting additional (management) information on patients, prescription practices and drug procurement. The interviewees appreciated that ConnDx expanded access to malaria treatment, particularly for poor patients and for children. Providers reported improved awareness that diagnostics should determine treatment decisions, reducing the prescription of unnecessary drugs. One of the challenges mentioned by healthcare providers was a lack of trust in the performance of RDTs. Microscopy is still often seen as the gold standard for malaria testing in Kenya, despite National Guidelines indicating equality of RDT and microscopy as a diagnostic procedure. Some healthcare providers mentioned they verified RDT results by microscope. Other challenges were more technical, related to a lack of electrical power and internet connection. Despite these challenges, most respondents stated they would continue with ConnDx, as it did help many patients. "It was positive. Because for us being in the community and a malaria-prone area, it was part of the solution that we have been wanting. So, we really embraced it, it was positive." - Respondent 18 (admin, female).
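Following the definition of over-prescription used above, the rate can be computed per provider from two counts: antimalarial courses dispensed and test-confirmed positive cases. One plausible implementation is sketched below; the study's exact operationalization may differ, and the numbers are placeholders rather than per-provider study data.

```python
def over_prescription_rate(dispensed: int, positive_cases: int) -> float:
    """Share of dispensed antimalarial courses not matched by a positive test, in percent."""
    if dispensed == 0:
        return 0.0
    excess = max(dispensed - positive_cases, 0)
    return 100.0 * excess / dispensed

# Illustrative numbers only (not the study's provider-level data):
for provider, (dispensed, positives) in {"P1": (520, 190), "P2": (210, 200)}.items():
    print(provider, f"{over_prescription_rate(dispensed, positives):.1f}% over-prescription")
```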
Discussion This study describes the feasibility and user experience, through private healthcare providers in Kisumu, of a novel digital approach to malaria diagnosis that directs conditional payments for malaria treatment: ConnDx. We demonstrate significant potential for increasing the efficiency of malaria service delivery in the Kenyan private healthcare sector concerning better diagnosis, reducing over-prescription, selecting correct 1st and 2nd line drug combinations and reducing malaria transaction costs, while at the same time generating valuable real-time data on malaria prevalence and incidence that can be fed into DHIS-2, which captures routine health service data. ConnDx demonstrated through this pilot its potential to monitor malaria epidemiology in semi-real time and to generate important data for malaria management. Considerable variation was revealed between providers, with malaria positivity rates ranging from 7.4% (provider B) to 30.2% (provider D). This led to verifiable assumptions, such as provider B being a referral hospital and therefore less likely to serve primary malaria cases, while provider D, being located near wet rice fields, served known hotspots for malaria. During months with more rainfall, significantly more malaria tests were done at the providers than in months with less rainfall (p value < 0.05, chi-square test). However, no relation was found between months with more rainfall and positive malaria test results. Providers located in or near low-income settlements (A and E) appeared to have higher malaria positivity rates. A chi-square test showed a relation between poorer patients and positive malaria tests (p value < 0.05). The quantitative analyses showed a relatively low participation rate of children: only 16.5% of reported patients were aged < 5 years, with a positivity rate of 17.7%. Through our qualitative interviews, it was learnt that more children were tested for malaria, but clients experienced challenges subscribing their children to M-TIBA as dependents and sometimes reported them incorrectly as adult primary members. This was noticed later during the campaign and corrected, but could have contributed to general underreporting of pediatric malaria cases. All in all, it was demonstrated that ConnDx can reveal, in semi-real time, important differences between healthcare providers in malaria case management. Such information, when collected at a larger scale, could help policymakers and health system managers to target their efforts for (human) malaria capacity building. Secondly, this study demonstrated the overall potential of ConnDx to monitor provider prescription behaviours and identify practices that deviate significantly from the Kenyan National Guidelines. Important overall over-prescription of antimalarials was recorded (28.0%), varying between providers from 4.6% (provider D) to 63.3% (provider B). There are multiple reasons for over-prescription, ranging from monetary considerations of private providers to patient expectation and pressure to receive drugs, and clinicians' avoidance of the risk of a false negative diagnosis and subsequent fatality [14]. Our qualitative interviews showed that patient pressure was mentioned as a reason by most of the interviewees when addressing this issue. Furthermore, ConnDx revealed an unexpected and erroneously high level of prescription of 2nd line antimalarials (overall 28.0%).
Provider A revealed 2nd line prescription levels of 53.7% overall, at times going up to even 100%. This is remarkable, as 2nd line antimalarials are generally used for severe cases of malaria, which represent on average < 2% [37], or in (rare) cases of suboptimal parasitological response with 1st line antimalarials (resistance). When probed with this observation, provider A reported a prolonged stock-out of 1st line antimalarials and therefore switching to 2nd line. Prescription of 2nd line antimalarials was more often found with more affluent and uninsured participants. This could indicate that providers are aware of the socioeconomic status of their clients and incorporate this into their prescriptions. Moreover, it appeared there was a very dichotomous, almost exclusive usage of either branded (providers C and D) or generic (providers A, B and E) antimalarials. One possibility could be that providers serving more affluent customers prefer procurement of branded versus generic antimalarials. Conversely, more affluent customers might request branded instead of generic antimalarials. Often, generic medicines are considered to be of poor quality and treated with more suspicion than branded medicines [38,39]. Third, this study indicates that ConnDx can increase efficiency in malaria service delivery by decreasing costs in several ways. Over-prescription of antimalarials can be monitored, aberrations identified, and actions undertaken to address them. A 2013 study conducted in four providers in western Kenya noted that presumptive malaria treatment can lead to misdiagnosis rates as high as 53% in public facilities [13]. ConnDx can play an important role in reducing such figures in facilities. Further cost reductions can potentially be realized by ConnDx, such as decreasing paperwork at health providers; such automated systems save time and manpower and are more accurate in reporting cases [40]. In addition, ConnDx implies less dependency on expensive and maintenance-dependent microscopy, and less electricity will be required to perform diagnostic tests. Moreover, due to its user-friendliness, ConnDx will provide more opportunities for lower-trained lab staff to perform such tests, saving personnel costs. Finally, and most importantly, ConnDx can facilitate a much more targeted bottom-up payment for malaria services to providers and clients, creating unprecedented transparency as compared to current top-down payment systems [manuscript in preparation]. During this pilot, patients also benefitted from the reduction in costs as they received a free RDT test and treatment. However, when these would be obtained in a private facility outside this pilot, patients would have to pay on average 150 KES ($1.38 at the time) for an RDT and 150-500 KES ($1.38-$4.59 at the time) for malaria treatment when tested positive, depending on whether they required 1st line or 2nd line treatment. In public facilities, the average costs to patients per malaria visit were found to be 112 KES ($1.03) in 2016, including registration fees, diagnosis, and treatment [41]. However, in public facilities, there is a gap in the availability of both testing and treatment of malaria [32]. Overall user experiences of ConnDx from the perspectives of providers were positive. For all providers, the key challenge was their staff's reservations about adopting RDTs instead of microscopy.
In general, microscopy was still seen as the gold standard for malaria testing in Kenya, despite National Guidelines indicating equality of RDT and microscopy as diagnostic procedures [42]. The fact that microscopy can identify different malaria species and quantify the severity of infection is counterargued by the fact that 1st line treatment for uncomplicated malaria is identical for Eastern African malaria species (see below), independent of their load. Indeed, ConnDx is dependent on the use of RDTs instead of microscopy for malaria diagnostic testing. Apart from the National Guidelines, the international literature also reports sensitivity and specificity of malaria RDTs equal to those of microscopy [43,44]. RDTs are also recommended by WHO [3]. Several studies demonstrated impaired sensitivity of microscopy in actual field situations in Africa as compared to perfectly controlled laboratory circumstances, with regular refresher training being required [33,34]. RDTs have the added advantage that, in contrast to microscopy, they can easily be externally quality controlled by visual inspection by independent third parties. This opportunity is further enhanced through the ConnDx feature of making digital photographs of every test result, stored in secure cloud-based databases that can be accessed anywhere in the world. An additional advantage of RDTs is that results can be digitalized, which accelerates data collection to (semi) real-timeliness, allows for telemedicine-based quality control and improves the quality and completeness of data collection (versus paper-based malaria files being entered into national DHIS-2 systems a few times per year). These options are all much more problematic when performing microscopy. Moreover, in the sub-Saharan African reality, RDTs can readily detect Plasmodium falciparum, which causes the highest malaria morbidity and mortality and represents 99.7% of cases [35]. RDTs that specifically detect P. vivax are indeed less widely available, but this species is virtually absent in the region. Finally, RDTs can indeed give a false-positive result in patients who have had recent malaria episodes. This can be addressed by building a feature into the ConnDx algorithm whereby patients are asked whether they experienced a malaria episode in the past 1-2 months and, if so, microscopy is prioritized. The above-outlined challenges suggest that diagnostic tests for febrile diseases, such as RDTs, should be embedded in a digital infrastructure of logistics and human decision support to rise to the next level of effectiveness and cost reduction. In the future, ConnDx could be deployed for bacterial infections if these can be diagnosed by RDTs (e.g. for C-reactive protein), leading to better-informed antibiotic prescriptions, which is important in fighting antimicrobial resistance [45]. This study has several strengths and weaknesses. A strength is the important innovation of ConnDx, providing reliable, geo-tagged and semi-real-time insights into malaria diagnostic and therapeutic services by private sector providers in a semi-rural setting in Kenya. Kisumu County hosts 94 private providers, which deliver approximately half of all primary healthcare services to its population (the government supplying care through 148 additional providers).
With the majority (1 million) of Kisumu citizens currently connected to M-TIBA [46], ConnDx could in principle rapidly be scaled to all private providers and supplement the governments' DHIS-2 database with valuable real-time private sector information. Another strength is the pioneering nature of this study that was supported by the local health authorities to run in parallel to existing malaria services. This allowed for rapid collection of important data, leading to actionable information for policymakers who demonstrated strong involvement. In terms of weaknesses, this study was observational and not a formal clinical trial. Therefore, there are no statistically validated results on (improved) diagnostic performance and (improved) clinical outcomes for malaria. Moreover, in this study, there were no special provisions taken for febrile patients who tested negative for malaria and pertinent consequences concerning changes in clinical decision making. For example, it was not studied what the effect was of reduced malaria prescription on provider prescription of alternative drugs for fever (in particular, antibiotics) and what were the clinical consequences of such decisions. Furthermore, as the ConnDx process was not yet fully digitalized, several steps were still performed manually, such as linkage of cloud-databases and payout mechanisms. Therefore, the study did not allow for direct and automated feedback-loops with any of the participating stakeholders (patients, providers, payers, policymakers). For this reason, progress observed concerning quality improvement or increased cost-efficiency during this pilot was modest and likely due to the realization of providers that they were being remotely observed by the ConnDx intervention. There were also external factors that influenced the ConnDx pilot, such as civil unrest due to national elections, which hindered the uptake of participants due to security issues. Several strikes of medical staff put constraints on general malaria service provision. Moreover, M-TIBA is using Safaricom as the mobile operator (with a market share of 70% of Kenya), which became political in Kisumu where most of the population is from another tribal background than the Safaricom ownership, resulting in temporary boycotts of usage of this platform. Finally, it should be kept in mind that ConnDx was implemented in parallel to existing malaria services covered by the NHIF and the MoH. Thus, health providers could in principle benefit by participating in two parallel financing mechanisms, which could potentially create perverse incentives. This would not be the situation when ConnDx is fully integrated into NHIF or any other UHC prepayment mechanism and made a compulsory condition for payouts. Conclusions This paper demonstrates the potential of ConnDx for more efficient malaria services at scale. ConnDx links important datasets in (semi) real-time, which previously were in silos and reported irregularly in DHIS-2. This allows for improved efficiencies at all levels of the healthcare system. For clients, the quality of care can improve by avoiding over-prescription of ineffective drugs and by providing the possibility to save and remunerate funds for malaria. 
Moreover, the linkage of patients' telephone numbers to the platform allows for additional services like malaria information, appointment keeping, adherence support, patient feedback loops to providers on the experienced quality of care, individual alarms, early warning systems for geographic malaria hotspots, etc. For providers, better information is given on their diagnosis and treatment performance versus the National Guidelines, benchmarked and rated against colleagues. Moreover, providers save capitation fees when ConnDx is integrated into NHIF services, avoiding over-prescription of antimalarials. Finally, reputation will be increased due to better-quality care delivered. For payers, ConnDx reduces overhead costs by increasing transparency, supporting healthcare transactions at marginal costs. Funding can be traced to pertinent individual patient cases and vertical malaria funds can be integrated into larger UHC funding pools, covering both the public and private sector. For Kenyan policymakers and healthcare managers, ConnDx opens ample opportunities to timely identify weaknesses in service delivery and undertake targeted remedial actions, such as specific training to providers. With ConnDx linking cloud-based databases of digital diagnostics data to a digital healthcare exchange platform such as M-TIBA, diagnostic results can target entitlements for malaria treatment directly through the mobile phones of M-TIBA users. This improved financial transparency, combined with marked quality gains through ConnDx presents a value proposition for scaling through (inter)national funders, such as the NHIF in Kenya, supported by the GFATM to channel vertical malaria funds through M-TIBA-facilitated payment platforms and contribute to UHC. ConnDx offers ample opportunities to enable more efficient service delivery for other high-morbidity medical conditions that can be digitally diagnosed, such as cervical cancer and cataract.
A Call to Digital Health Practitioners: New Guidelines Can Help Improve the Quality of Digital Health Evidence Background: Despite the rapid proliferation of health interventions that employ digital tools, the evidence on the effectiveness of such approaches remains insufficient and of variable quality. To address gaps in the comprehensiveness and quality of reporting on the effectiveness of digital programs, the mHealth Technical Evidence Review Group (mTERG), convened by the World Health Organization, proposed the mHealth Evidence Reporting and Assessment (mERA) checklist to address existing gaps in the comprehensiveness and quality of reporting on the effectiveness of digital health programs. Objective: We present an overview of the mERA checklist and encourage researchers working in the digital health space to use the mERA checklist for reporting their research. Methods: The development of the mERA checklist consisted of convening an expert group to recommend an appropriate approach, convening a global expert review panel for checklist development, and pilot-testing the checklist. Results: The mERA checklist consists of 16 core mHealth items that define what the mHealth intervention is (content), where it is being implemented (context), and how it was implemented (technical features). Additionally, a 29-item methodology checklist guides authors on reporting critical aspects of the research methodology employed in the study. We recommend that the core mERA checklist is used in conjunction with an appropriate study-design specific checklist. Conclusions: The mERA checklist aims to assist authors in reporting on digital health research, guide reviewers and policymakers in synthesizing evidence, and guide journal editors in assessing the completeness in reporting on digital health studies. An increase in transparent and rigorous reporting can help identify gaps in the conduct of research and understand the effects of digital health interventions as a field of inquiry. Introduction Over the last decade, there has been a dramatic increase in health programs employing digital tools, such as mobile phones and tablets, to stimulate demand for or the delivery of health care services. This is especially true in low-and middle-income countries, where public health practitioners are tapping into the unprecedented growth in the use of mobile phones to overcome information and communications challenges [1,2]. Donors have rallied around digital approaches, and much has been invested into developing, testing, and deploying digital systems. However, after nearly a decade of concerted efforts, widely available evidence in support of digital health is limited [1,3,4]. As an emergent field, there is substantial variability in the reporting of digital program implementations, evaluations, and outcomes. Inconsistency in reporting is problematic as it limits policy makers' ability to understand precise program details and extract, compare, and synthesize linkages (if any) between the digital investments and consequent health effects. To address gaps in the comprehensiveness and quality of reporting on the effectiveness of digital programs, the mHealth Technical Evidence Review Group (mTERG)-an expert committee convened by the World Health Organization (WHO) to advise on approaches to strengthening digital health evidence-proposed guidelines for reporting evidence on the development and evaluation of digital health interventions. 
These guidelines-presented as the mHealth Evidence Reporting and Assessment (mERA) checklists-were published in March 2016 [5] and have since been widely accessed [1,[6][7][8][9][10]. Methods The design of the mERA checklist followed a systematic process for the development of reporting guidelines [11]. In October 2012, WHO convened an expert working group led by the Johns Hopkins Global mHealth Initiative to develop an approach for the mERA guideline. In December 2012, this working group presented an initial draft of the checklist to a global panel of 18 experts convened by WHO during a 3-day meeting in Montreaux, Switzerland. At this meeting, the approach and checklist underwent intensive analysis for improvement, and a quality of information (QoI) taskforce was established to pilot-test the checklist. After testing by the QoI taskforce, the checklist and associated item descriptions were applied to 10 English language reports to test the applicability of each criterion to a range of existing mHealth literature. Readers may refer to further details about the methodology in the complete manuscript [5]. Results The mERA checklists comprises 2 components. The core mHealth checklist (see Table 1) identifies a minimum amount of information needed to define what the mHealth intervention is (content), where it is being implemented (context), and how it was implemented (technical features). This checklist may be valuable to researchers in reporting on the program and research results in peer-reviewed journals and reports, to policy makers in consolidating evidence and understanding the quality of information that has been used to generate the evidence, and to program implementers thinking through and selecting core elements for new digital health projects. L'Engle et al [12] applied the mERA checklist to evaluate the quality of evidence on the use of digital health approaches to improving sexual and reproductive health outcomes for adolescents. The study found that, on average, 7 out of 16 (41%) of the core mHealth checklist items were reported on, suggesting a lack of the availability of a clear description of the digital health intervention [12]. During the development and testing phase, the mERA checklist was applied to literature on the use of digital devices in reducing drug stockouts and the use of digital protocols to improve provider adherence to treatment protocols. Interested authors should refer to the definitions and examples for the core mHealth checklist available freely online [5]. The methodology checklist (see Textbox 1) outlines 29 items that highlight the key study design features that should be reported by researchers and evaluators of digital health interventions. Authors interested in using this checklist should note that there are other recommended checklists specific to different study designs-for example, Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) for observational studies [13] and Consolidated Standards of Reporting Trials (CONSORT) for randomized trials [14]. We recommend that the core mHealth checklist be used in conjunction with these extant checklists based on the appropriate research study design that is being reported. However, we also recognize that a number of digital health studies that are being conducted to evaluate early-stage digital health interventions are more exploratory in nature, and the extant guidelines might not be as relevant to them. 
In such cases, the authors may decide to use the mERA methodology checklist, developed to be study-design agnostic, for reporting on the study design and results. A detailed explanation of the mERA methodology checklist items is available as a Web appendix [5]. Discussion We present an overview of the mERA checklist. For details about each of the checklist items under the core checklist items and the methodology items, we refer the readers to the complete publication [5]. The mERA checklist marks the culmination of several years of multiinstitutional collaborations, led by WHO, to determine appropriate standards for reporting on digital health evidence-standards that not only address issues of methodological and reporting rigor but also are responsive to the current state of the digital health space. We recognize that the digital health space is constantly evolving and is somewhat unique in its multidisciplinary nature, borrowing approaches from the fields of health care and technology and often engaging innovators who are unfamiliar with scientific methodologies. The mERA core and methodology checklists were pragmatically developed to be useful to a wide audience of innovators. We expect that the detailed explanations and examples make the checklist easy to use for individuals with varying levels of experience in academic reporting. Even as the numbers of digital health interventions continue to increase, the evidence to support such interventions remains sparse. Without the support and shared commitment of the diverse digital health community in advancing the quality of evidence, the state of the much-critiqued "pilotitis" in mHealth will not change [15]. Transparency in the reporting of what constitutes a digital health intervention and clarity on evaluation methods are both critical to determining whether the digital strategy might be scalable to an entire population. In order to support the widespread adoption of the checklist, we encourage digital health researchers and program managers to ensure conformity with the checklist items. Additionally, we would like to call upon editors of journals publishing mHealth literature to encourage the use of the mERA checklist by presenting the link to the guidelines under Instructions to Authors and inclusion of a statement in the manuscript that "this manuscript was developed in conformity with the recommended criteria for reporting digital health as described in the mERA guidelines." Conflicts of Interest None declared
Formation of Dislocations and Stacking Faults in Embedded Individual Grains during In Situ Tensile Loading of an Austenitic Stainless Steel The formation of stacking faults and dislocations in individual austenite (fcc) grains embedded in a polycrystalline bulk Fe-18Cr-10.5Ni (wt.%) steel was investigated by non-destructive high-energy diffraction microscopy (HEDM) and line profile analysis. The broadening and position of intensity, diffracted from individual grains, were followed during in situ tensile loading up to 0.09 strain. Furthermore, the predominant deformation mechanism of the individual grains as a function of grain orientation was investigated, and the formation of stacking faults was quantified. Grains oriented with [100] along the tensile axis form dislocations at low strains, whilst at higher strains, the formation of stacking faults becomes the dominant deformation mechanism. In contrast, grains oriented with [111] along the tensile axis deform mainly through the formation and slip of dislocations at all strain states. However, the present study also reveals that grain orientation is not sufficient to predict the deformation characteristics of single grains in polycrystalline bulk materials. This is witnessed specifically within one grain oriented with [111] along the tensile axis that deforms through the generation of stacking faults. The reason for this behavior is due to other grain-specific parameters, such as size and local neighborhood. Introduction Austenitic stainless steels are of considerable engineering importance due to their excellent corrosion resistance and mechanical properties [1,2]. In these steels, it is known that the stacking fault energy (γ SF ) can be used to predict the predominantly active deformation mechanism controlling the material's mechanical properties. During plastic deformation, austenitic stainless steels with a low γ SF are known to form large stacking faults and undergo deformation-induced martensitic phase transformation responsible for the well-known transformation-induced plasticity (TRIP). Two martensitic products can form where the ε-martensite (hcp) is mainly of importance for promoting the formation of α -martensite (bcc), which in turn provides significant strain hardening and contributes mainly to the TRIP effect [3][4][5]. Since ε-martensite is associated with the periodic ordering of stacking faults on every second fcc {111} lattice plane, stacking faults play a key role in the formation of ε-martensite and thus in the deformation of austenitic stainless steel. Therefore, enhanced knowledge of the formation and evolution of stacking faults and their relation to the deformation-induced martensitic transformations is crucial to understand the deformation behavior of not only austenitic stainless steel, but also other steels containing the austenite phase, and in particular when the TRIP or the twinning-induced plasticity (TWIP) effect is exploited. The chemical composition of the austenite and the deformation temperature have a significant effect on the γ SF [2,6,7]. However, it is known that stacking faults, ε-martensite and α -martensite, in polycrystalline materials do not form homogenously in the bulk and instead grain size, grain morphology, grain orientation and grain neighborhood can play a role in their formation. 
Thus, the predominantly active deformation mechanism varies across the bulk [8][9][10] and it is necessary to consider all the mentioned parameters to predict the active deformation mechanism on the local scale. Despite its importance, the understanding of the deformation of individual grains within polycrystalline samples with respect to grain orientation and grain neighborhood is still vague. The incomplete understanding is partially due to the under-utilization of experimental methodologies capable of resolving the 3D microstructure non-destructively during in situ deformation. When using dominant conventional techniques such as scanning electron microscopy (SEM) or transmission electron microscopy (TEM), it is only possible to probe the surface (SEM) or thin foils (TEM) in situ. These regions do not reflect the bulk response, and additionally, there may be pronounced stress and strain relaxation effects [11]. A powerful method capable of resolving individual grains and sub-grains in the bulk of polycrystalline materials is high-energy X-ray diffraction microscopy (HEDM). It provides the possibility to follow, for example, the microstructure evolution non-destructively during deformation under consideration of grain position, orientation and neighborhood of individual grains within the bulk. Therefore, it captures the true-bulk response to plastic deformation [12]. The application of grain-resolved far-field HEDM (ff-HEDM) to study the deformation of austenitic stainless steel was presented by Hedström et al. In that work, the transformation of individual austenite grains into ε-martensite [10] and α -martensite with respect to grain-resolved strain and grain orientation was investigated [13]. The combination of HEDM and peak-shape analysis opens up further opportunities to study the evolution of defects, such as dislocations and stacking faults in individual grains under consideration of grain orientation, neighborhood and morphology, as well as bulk-specific properties such as chemical composition and temperature. Pantleon et al. [14] demonstrated the power of this approach in their study on a polycrystalline Al specimen subjected to tensile strain. In the present work, we apply the combination of ff-HEDM and line profile analysis during in situ tensile loading to study the orientation dependence of the formation and evolution of stacking faults and dislocations in individual grains embedded in austenitic stainless steel. Materials and Methods The composition of the investigated metastable austenitic stainless steel determined prior to deformation is given in Table 1. The analysis was performed by X-ray spectrometry (Cr, Ni), optical emission spectrometry (Mo, Ti) and combustion (C, N). The steel was supplied as hot-rolled strips. These strips were cut, cold-rolled and thereafter annealed at 1050 • C for 10 min to achieve a grain size of approximately 40 µm prior to further investigations. The fully austenitic steel was prepared as a dog-bone-shaped specimen with a gauge length of 3 mm and a gauge width of approximately 1 mm by electrical discharge machining. The tensile specimen was then ground and polished to a thickness of approximately 0.8 mm. The ff-HEDM experiment during uniaxial tensile loading was performed at the Cornell High Energy Synchrotron Source (CHESS) Ithaca, NY, USA, at the FAST (ID-3A) beamline. The experiment was conducted with a monochromatic X-ray beam energy of 61.332 keV. 
The illuminated gauge section of the sample was limited by rectangular slits with a size of 0.15 × 2 mm (H × W), resulting in a probed volume of 1 mm × 0.8 mm × 0.15 mm. The diffraction patterns were collected with two Dexela 2923 area detectors (3888 × 3072 pixels, 74.8 × 74.8 µm 2 per pixel, and a sample-to-detector distance of approximately 1000 mm). Interrupted loading was performed in displacement control with a strain rate of 5 × 10 −4 s −1 using the Rotation and Axial Motion System (RAMS2) load frame [15]. During pauses in loading, 2D diffraction patterns were collected in rotation increments of 0.25 • while continuously rotating the sample 360 • around the vertical axis by angle ω, resulting in 1440 frames per load step. The applied stress was measured by a load cell, and the strain was determined by digital image correlation. The analysis and reconstruction of the diffraction data were performed with the aid of the HEXRD software [16]. A detailed description of the HEDM reconstruction procedure can be found in [16,17]. During an ff-HEDM experiment, a multitude of component reflections (hkl) for families of lattice planes {hkl} are recorded. After the ff-data reconstruction, the (hkl) of a grain of interest was fitted. This was carried out within the HEXRD software. Each (hkl) was first integrated along ω (see Figure 1). To avoid peak overlap, the diffraction intensity of the (hkl) of interest was integrated over three successive detector images at ω i−0.25 , ω i and ω i+0.25 , where the highest diffraction intensity of the (hkl) was observed at ω i . Secondly, the summed 2-dimensional intensities were integrated along the azimuthal direction on the detector, ψ, in order to generate 1-dimensional peaks. Thereafter, a pseudo-Voigt function was fitted to each reflection individually in 2θ space, from which the peak position and integral breadth of each component reflection at the specific strain can be determined. The integral breadth is determined by the area under a diffraction peak divided by the peak height.
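To make the peak-evaluation step concrete, the sketch below fits a pseudo-Voigt profile to a one-dimensional intensity trace and returns the integral breadth as peak area divided by peak height. It is a generic reconstruction of the described procedure, not the HEXRD implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(two_theta, amp, center, fwhm, eta, background):
    """Linear combination of a Gaussian and a Lorentzian with mixing parameter eta."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((two_theta - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((two_theta - center) / (fwhm / 2.0)) ** 2)
    return background + amp * (eta * lorentz + (1.0 - eta) * gauss)

def integral_breadth(two_theta, intensity):
    """Fit a pseudo-Voigt and return (peak position, integral breadth = area / height)."""
    # Crude starting values; eta is not constrained to [0, 1] in this simple sketch.
    p0 = [intensity.max() - intensity.min(),
          two_theta[np.argmax(intensity)],
          0.05, 0.5, intensity.min()]
    popt, _ = curve_fit(pseudo_voigt, two_theta, intensity, p0=p0)
    amp, center, fwhm, eta, bkg = popt
    profile = pseudo_voigt(two_theta, amp, center, fwhm, eta, 0.0)  # background removed
    area = np.trapz(profile, two_theta)
    return center, area / profile.max()
```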
The formation of stacking faults and dislocations was studied by plotting the integral breadth of each (hkl) versus the magnitude of its diffraction vector, g. Simm [18] showed, with the formalism derived by Balogh et al. [19], that the diffraction peak broadening introduced by the presence of stacking faults or dislocations depends on the {hkl} in a particular manner. This is based on the fact that the displacement field around a dislocation is anisotropic. Thus, the broadening of a diffraction peak introduced by the dislocation's displacement field is given by the type of dislocation described by the Burgers vector, b, as well as its relation to g, the slip normal n and the dislocation line s. In a powder diffraction pattern, this effect is expressed by the average contrast factor C̄_hkl, given as C̄_hkl = C̄_h00 (1 − q H²), where C̄_h00 and q depend on the elastic constants and the type of dislocation present, and H² = (h²k² + h²l² + k²l²)/(h² + k² + l²)² [18,20-22]. In the case of stacking faults, the broadening of diffraction peaks is additionally affected by a size effect [23], i.e., broadening due to very small, coherent crystallite sizes, determined by the constant ω_hkl. Considering intrinsic and extrinsic stacking faults, the broadening of the {220} is affected almost twice as much as the broadening of the {111} and {222} and approximately 0.15 times as much as the {200} and {311} [18]. Thus, the integral breadth versus g plot determined from a deformed fcc material with the presence of stacking faults results in a distinct "hook shape". However, not all (hkl) are affected by the presence of stacking faults or dislocations to the same extent [23,24], since the lattice distortion responsible for an observable peak broadening is orientation-dependent with respect to g, thus inducing different peak broadening for different (hkl). In order to have a better representation and comparison of the results, the median of the integral breadth of all (hkl) was calculated.
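The orientation dependence captured by the average contrast factor can be evaluated directly from C̄_hkl = C̄_h00(1 − qH²) with H² = (h²k² + h²l² + k²l²)/(h² + k² + l²)². In the sketch below, the values of C̄_h00 and q are placeholders; the actual values depend on the elastic constants and the dislocation type of this steel.

```python
def H2(h: int, k: int, l: int) -> float:
    """Orientation factor H^2 = (h^2k^2 + h^2l^2 + k^2l^2) / (h^2 + k^2 + l^2)^2."""
    s = h * h + k * k + l * l
    return (h * h * k * k + h * h * l * l + k * k * l * l) / (s * s)

def mean_contrast_factor(h: int, k: int, l: int, C_h00: float, q: float) -> float:
    """Average dislocation contrast factor C_hkl = C_h00 * (1 - q * H^2)."""
    return C_h00 * (1.0 - q * H2(h, k, l))

# Placeholder values for C_h00 and q (material- and dislocation-type-specific):
for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1), (2, 2, 2)]:
    print(hkl, round(mean_contrast_factor(*hkl, C_h00=0.30, q=1.7), 3))
```

Reflections with H² = 0 ({200}) take the full C̄_h00, while {111} and {222} (H² = 1/3) are reduced the most, which is what produces the hkl-dependent broadening anisotropy discussed above.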
With the median, the average density of planar faults in a single grain can be quantified utilizing the modified Williamson-Hall (WH) plot [18,20,25], since the extent of the strain anisotropy and thus the apparent broadening anisotropy is directly related to their density. The quantification of α furthermore underpins the qualitative view from the shape of the integral breadth versus g plot. The modified WH plot takes both C̄_hkl and ω_hkl into account and, moreover, includes the fitting parameter β′ = (1.5α + β)/a, where α is the stacking fault probability, β the twinning fault probability and a is the lattice parameter. The planar fault density can be determined by adjusting β′ for each grain (at a considered nominal macroscopic strain) in order to achieve the best linear fit during the modified WH plot fitting procedure. The fitting procedure was furthermore compared and verified by a conventional WH plot, in which the line broadening is plotted versus g, and the anisotropy is corrected by ω_hkl and β′ only, i.e., the anisotropy due to dislocations is not included. An example of the fitting is given in Figure 2, where the median of the integral breadth of grain #29 at 0.09 nominal strain before the fitting procedure is represented by solid blue circles, and after fitting by orange crosses, utilizing the conventional WH plot (Figure 2a) and the modified WH plot (Figure 2b). The dashed lines represent the linear regressions of the integral breadth. It can be seen that, for both the conventional and the modified WH plot, the linear regression achieves a better fit after including planar faults in the fitting procedure.
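A simplified reading of this fitting procedure is sketched below: a planar-fault term proportional to β′ is subtracted from the measured integral breadths, the remainder is regressed against g·C̄_hkl^(1/2), and β′ is adjusted until the plot is most linear. The contrast factors, fault-broadening weights W and data are placeholders; the study's actual fit follows the formalism of Simm [18] and Balogh et al. [19].

```python
import numpy as np

def modified_wh_linearity(breadth, g, C_mean, W, beta_prime):
    """
    Subtract a planar-fault term beta_prime * W from the integral breadths,
    regress the remainder against g * sqrt(C_mean), and return the R^2 of
    that straight line (all inputs are 1-D arrays over the reflections).
    """
    y = breadth - beta_prime * W
    x = g * np.sqrt(C_mean)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_beta_prime(breadth, g, C_mean, W, grid=np.linspace(0.0, 5e-3, 501)):
    """Pick the beta_prime that makes the modified WH plot most linear (best R^2)."""
    scores = [modified_wh_linearity(breadth, g, C_mean, W, b) for b in grid]
    return grid[int(np.argmax(scores))]
```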
Results and Discussion Figure 3a shows the center of mass positions of the grains reconstructed from the ff-HEDM experiment. It shows the position of the studied grains within the studied volume at 0.09 strain. The grain orientations relative to the loading direction at 0.09 strain are given in Figure 3b. The red square markers indicate individual grains in which predominantly stacking faults were observed, whereas the blue triangles indicate grains with predominantly dislocations.
At 0 and 0.03 nominal strain (Figure 4a,b), the integral breadth of the component reflections from grain #29 has only a slight spread around the median. At 0 nominal strain, no planar fault could be fitted to the plot, whereas at 0.03, α was determined to be α = 0.01 × 10⁻³ (Table 2). This indicates that no significant amount of stacking faults formed at low strains. However, the broadening anisotropy in Figure 4b suggests the presence of predominantly dislocations, since the strain anisotropy of dislocations introduces an "M-shape", in which the integral breadth of the {200} and {220} is slightly larger compared to their neighboring reflections {111}, {311} and {222}. At 0.05 nominal strain, the integral breadth increases for all (hkl) compared to 0.03 nominal strain (Figure 4c). However, the integral breadth determined for the component reflections of the {220} increases to a greater extent with increasing nominal strain, resulting in the transition from the "M-shaped" anisotropy to an anisotropic broadening that emerges due to the presence of stacking faults, appearing as a "hook shape". This increase in the formation of stacking faults is also suggested by a slight increase in the determined α at 0.05 nominal strain to α = 0.05 × 10⁻³. This shape transition of the integral breadth versus g plot was also found in a prior study by Neding et al. using average-grain high-energy X-ray diffraction measurements [26], where the formation of stacking faults was connected to the transition from "M shape" to "hook shape". This transition becomes more obvious at 0.09 nominal strain, in which the median of the integral breadth of the {220} component reflections is significantly larger compared to {111}, {200}, {311} and {222}, leading to a distinct "hook shape" (Figure 4d). In addition, the experimentally determined α increases considerably to α = 1.17 × 10⁻³. This indicates that stacking faults are generated rapidly within the individual austenitic grain #29 between 0.05 and 0.09 nominal strain and dominate the plastic deformation. This transformation sequence has been observed before in austenite [27-29]. It is suggested that at low strains, the critical stress needed to generate partial dislocations and to ease their separation to form wide stacking faults is not reached, and thus the stacking faults are not recognized by XRD. At higher strains, stacking faults are generated, and dislocations dissociate into partial dislocations to form wide stacking faults. Thus, the density of faulted planes increases and their presence can be detected by XRD.
From Figure 3, it can be seen that austenite grains deformed along the [100] direction are found to form stacking faults, whereas in grains oriented with [111] along the tensile direction, the formation of stacking faults is impeded and the formation of perfect dislocations is dominant. This was studied by investigating the anisotropic broadening of the diffraction peaks as a function of g at 0.09 nominal strain. Individual grains at 0.09 nominal strain oriented with [100] along the loading direction, #29, #19, #26 and #27 in Figures 4d and 5a-c, respectively, show the same anisotropic broadening effect in the integral breadth plotted versus the magnitude of g, leading to a "hook shape". Besides the shape of the anisotropy, the experimentally determined α for grains #29, #19, #26 and #27, with values of 1.17 × 10⁻³, 0.86 × 10⁻³, 1.29 × 10⁻³ and 0.96 × 10⁻³, respectively, also suggests the presence of significant amounts of stacking faults. In contrast, the integral breadths of the component reflections of the {220} from grain #12 and grain #17, oriented with their [111] along the loading direction at 0.09 nominal strain (see Figure 5d,e), are significantly smaller compared to {200} and of approximately the same magnitude as {311}. Therefore, the plot implies the "M shape", indicating the presence of predominantly dislocations. Furthermore, the magnitude of α was determined to be considerably smaller: in grain #12 and in grain #17, α was determined to be 0.26 × 10⁻³ and 0.24 × 10⁻³, respectively (see Table 2). However, stacking faults in an amount comparable to the grains oriented with their [100] parallel to the loading direction were found in grain #19, even though grain #19 is oriented with its [111] along the loading direction. The observed orientation-dependent deformation behavior can be explained by the fact that the partial dislocations bounding a stacking fault, referred to as the leading and trailing dislocations, experience different resolved shear stresses [8,30,31]. Table 3 shows the Schmid factors for all possible partial dislocations with slip direction <121> on {111}, calculated for tension along [111] and [001] using MTEX [32]. It can be seen that the difference in Schmid factor between the leading and trailing partial dislocations in grains oriented with their [001] along the loading direction, 0.2357, is larger than the corresponding difference in grains oriented with their [111] along the loading direction, 0.1571. Thus, the partial dislocations in grains oriented with their [001] along the loading direction separate readily and more faulting occurs. In contrast, the partials in grains with the [111] along the loading direction have a lower Schmid factor difference between the leading and trailing partial dislocation, which leads to a smaller separation distance between the partials. Thus, the formation of stacking faults in grains oriented with their [111] along the loading direction is less likely. Extensive faulting in grains with the <001> direction parallel to the external load was also observed by Goodchild et al. [8], who studied the transformation of grains in textured metastable austenite steels ex situ.
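The Schmid-factor argument can be checked with a few lines of code. The sketch below is a plain NumPy calculation (not the MTEX computation used for Table 3) that evaluates m = cos(φ)·cos(λ) for the <112>-type partial-dislocation directions on {111} planes under [001] and [111] tension, ignoring the sign of the resolved shear stress as a simplification; it reproduces the quoted leading-trailing differences of 0.2357 and 0.1571.

```python
import numpy as np
from itertools import product

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

# All <112>-type (Shockley partial) directions and the four {111} plane normals.
PARTIALS = {tuple(s[i] * (1, 1, 2)[p[i]] for i in range(3))
            for s in product((1, -1), repeat=3)
            for p in ((0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0))}
PLANES = [(1, 1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, -1)]

def leading_trailing_gap(load):
    """Largest partial Schmid factor and the gap to the next partial
    on the most highly stressed {111} plane, for a given tensile axis."""
    l = unit(load)
    best = (0.0, 0.0)
    for n in PLANES:
        cos_phi = abs(np.dot(unit(n), l))
        m = sorted({cos_phi * abs(np.dot(unit(d), l))
                    for d in PARTIALS if np.dot(d, n) == 0}, reverse=True)
        if m and m[0] > best[0]:
            best = (m[0], m[0] - m[1] if len(m) > 1 else 0.0)
    return best

for axis in ((0, 0, 1), (1, 1, 1)):
    m_lead, gap = leading_trailing_gap(axis)
    print(f"tension along {axis}: leading partial m = {m_lead:.4f}, "
          f"leading-trailing difference = {gap:.4f}")
```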
However, as can be seen in Figure 3, the formation of stacking faults also occurs in grain #19, which is oriented with its [111] along the loading direction. This indicates that, even though the difference in Schmid factor due to the grain's orientation with respect to the loading direction suggests the formation of predominantly dislocations, the separation of partial dislocations, and with it the formation of stacking faults, can still occur. The occurrence of a different deformation mechanism in grains with similar orientations might be due to differences in size and local grain neighborhood, which can affect the deformation behavior in addition to grain orientation. This observation emphasizes that it is crucial to consider the combination of all effects in order to reliably predict the deformation behavior of individual grains and therefore the deformation behavior of the bulk. Conclusions Dislocations and stacking faults have been investigated in individual grains embedded within a polycrystalline bulk austenitic stainless steel. In situ X-ray line profile analysis was successfully applied to six individual grains with different orientations with respect to the external load. The integral breadth of more than 88 diffraction peaks per loading step was extracted for each individual grain, from which the predominant deformation mechanism and the stacking fault probability were deduced. It was shown that the orientation of individual grains has a significant impact on the predominant deformation mechanism for nominal strains up to 0.09. The formation of stacking faults was observed in grains oriented with their [100] along the loading direction, resulting in α between 0.96 × 10⁻³ and 1.29 × 10⁻³, whereas in grains oriented with their [111] along the loading direction, predominantly perfect dislocations and only a small amount of stacking faults could be observed. This observation is believed to be related to the Schmid factor of partial dislocations: orientations with a large difference in Schmid factor between the leading and trailing partials are more prone to form large stacking faults compared to grains with a small difference. Furthermore, the predominant deformation mechanism of an individual grain was followed as a function of external load. It was observed that at low nominal strain, plastic deformation occurs predominantly by dislocations, whereas with increasing nominal strain, the formation of stacking faults becomes prevalent, leading to α = 1.17 × 10⁻³ at 0.09 nominal strain. It was furthermore revealed that the grain orientation alone is not sufficient to predict the deformation behavior, and additional factors such as grain size and neighborhood must also be considered. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
6,682.2
2021-10-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Influence of Activated Fluxes on the Bead Shape of A-TIG Welds on Carbon and Low-Alloy Steels in Comparison with Stainless Steel AISI 304L : The article presents results of comparative A-TIG welding tests involving selected unalloyed and fine-grained steels, as well as high-strength steel WELDOX 1300 and austenitic stainless steel AISI 304L. The tests involved the use of single-ingredient activated fluxes (Cr 2 O 3 , TiO 2 , SiO 2 , Fe 2 O 3 , NaF, and AlF 3 ). In cases of carbon and low-alloy steels, the tests revealed that the greatest increase in penetration depth was observed in the steels which had been well deoxidized and purified during their production in steelworks. The tests revealed that among the activated fluxes, the TiO 2 and SiO 2 oxides always led to an increase in penetration depth during A-TIG welding, regardless of the type and grade of steel. The degree of the aforesaid increase ranged from 30% to more than 200%. Introduction A-TIG (A-GTAW) welding with activated flux is a TIG (Tungsten Inert Gas) or GTA (Gas Tungsten Arc) welding method that was first used at the E.O. Paton Electric Welding Institute, Ukraine, at the end of the 1950s and the beginning of the 1960s [1]. Initially, the A-TIG method was used in the welding of titanium, next, in the welding of martensitic high-strength steels (Re ≈ 1500 MPa), and finally, in the welding of stainless steels. After the publication of the first test results, subsequent research-related articles were published rather rarely and irregularly. The renaissance of interest in the A-TIG method took place in the mid-1990s, which was manifested by a sudden increase in the number of scientific and technical articles published after 1995. The authors of the articles were scientists representing research centers from all over the world. The subject of the significant majority of research works has been the effect of the A-TIG method on the welding of stainless, primarily austenitic, steels (ASS). The foregoing is fully understandable, as the advantages resulting from the use of activated fluxes during the TIG welding of the aforesaid group of materials are most spectacular, including a nearly two-fold increase in penetration depth and the minimization of welding distortions resulting from the specific shape of welds in cross-section [2]. Issues accompanying the A-TIG welding of low-alloy and unalloyed steels are relatively rarely discussed in scientific publications. There can be several reasons for the relatively low number of tests conducted concerning the A-TIG welding of unalloyed and low-alloy steels. However, it seems that such a situation can be primarily ascribed to the fact that the most common method used in the welding of the aforesaid groups of steel is MAG (Metal Active Gas) technology, characterized by higher efficiency than the TIG method. The TIG method is used rather rarely in the welding of unalloyed and low-alloy steels. Its main application areas include the joining of pipelines (e.g., in power generation systems), the welding of joints characterized by specific structure, and welding without the use of filler metals. The lack of filler metal during welding is typical of the A-TIG welding process, which indicates the purposefulness of tests aimed to determine the possibility of replacing the TIG method with the A-TIG process and to identify advantages resulting from the aforesaid replacement.
The analysis of previous publications concerning the A-TIG welding of carbon and low-alloy steels revealed that the authors discussed results related to the use of oxides, fluorides, chlorides, carbonates, and their mixtures [3][4][5][6][7][8]. Depending on the scope of research, the articles presented the effect of individual ingredients on the depth of penetration, structure, hardness, and mechanical properties of welded joints. Nearly all of the works emphasized the positive effect of SiO 2 and TiO 2 on the depth of penetration, with increases ranging from 40-50% to more than 200%. However, publication [9] mentioned the reverse effect of titanium oxide, i.e., a decrease in penetration depth by 35%. In turn, work [10] described a slight increase (by approximately 30%) in penetration depth when using individually developed activated flux during the welding of mild steels. The analysis of publications revealed that the tests were performed using various carbon [9] or low-alloy steel grades: SA516 GR-70 [5], DMR-249A [6], AISI 4130 [7], and BS700MC [8]. Some publications [3,4] did not specify the grades of steels or their chemical composition. The tests involved the application of various welding process parameters (current and welding rate) as well as the use of plates having various thicknesses (from 5 mm to 8 mm) and various dimensions (from very small, i.e., 60 mm × 80 mm, up to standard dimensions of 150 mm × 300 mm) [5][6][7][8][9][10]. The above-presented factors affect welding thermal conditions and the behavior of liquid metal in the weld pool. However, it should also be noted that even in relation to the same conditions used during the welding of the same grade of steel, the effect of flux may vary depending on the pre-weld condition of the plate surface (e.g., the presence of scale after hot rolling, the presence of an oxidized layer after normalizing, or another type of heat treatment). All this impedes not only the formulation of general conclusions based on the results of tests of only one steel grade but also makes it more difficult to draw appropriate conclusions on the basis of the comparative analysis of research results obtained under various conditions. Because of this, presented below are results of tests concerning the A-TIG welding of selected unalloyed steels and fine-grained steels obtained by applying the same welding conditions and parameters, and compared with results obtained in relation to high-strength steel WELDOX 1300 and austenitic stainless steel AISI 304L. Test Materials and Methods The comparative tests of A-TIG welding involved the use of 8 mm thick plates (300 mm × 300 mm) made of unalloyed (carbon) steel S235JR+N as well as fine-grained steel grades P265GH (for pressure vessels) and S355J2+N. The selection of the above-named steels resulted from their significantly varying deoxidation degrees but relatively similar chemical composition, particularly as regards the content of sulfur (Table 1). As is known, in terms of steel, the content of sulfur and that of oxygen have the most significant impact on the movement of liquid metal in the weld pool [11,12]. This, in turn, affects the shape of welds and, consequently, the depth of penetration. If the content of sulfur is similar, the most important factor is oxygen. Steel S235JR+N was selected because of the fact that it was rimmed steel, which means that its production was not accompanied by deoxidation (which was manifested by a silicon content of 0.009%; see Table 1).
In turn, steel grades P265GH and S355J2+N are fully killed steels characterized by similar metallurgical purity; the content of silicon in steel P265GH amounted to 0.17%, whereas that in steel S355J2+N amounted to 0.29% (Table 1). For comparative purposes, bead-on-plate welds were made on a 7 mm thick plate made of low-alloy high-strength steel WELDOX 1300 and on an 8 mm thick plate made of austenitic stainless steel AISI 304L (1.4307/X2CrNi18-9). The chemical compositions of the above-named steels are also presented in Table 1. Steel WELDOX 1300 was selected because of the fact that, in addition to containing alloying elements and being subjected to appropriate rolling technology, the steel had undergone thorough metallurgical purification, including deep deoxidation [13]. The welding tests involved the use of single-component fluxes in the form of oxides (Cr 2 O 3 , TiO 2 , SiO 2 , and Fe 2 O 3 ) or fluorides (NaF and AlF 3 ). The components were ground in a ceramic mortar and, next, sifted through a laboratory sieve having a mesh of 0.056 mm. Before application, the component was mixed with a quick-evaporating liquid (acetone) in order to obtain a dense suspension. The flux prepared in this way was applied onto plate surfaces using a brush. This method of applying the activating flux is currently used in all investigations concerning the A-TIG process. In order to achieve similar flux thickness and minimize the impact of this factor on test results, the paste density was always the same; the application of flux was always conducted by the same person, performing the same brush movements. All experimental welds were made under the same conditions, without the use of the filler metal (autogenous TIG welding), using a current of 200 A and a welding rate of 150 mm/min. The arc voltage was restricted within the range of 10.4 V to 12.8 V; the heat input was restricted within the range of 0.499 kJ/mm to 0.614 kJ/mm. The station for mechanized welding was composed of a LORCH V40 welding power source and a Promotech DC-20 drive unit (enabling the precise movement of the welding torch at a preset welding rate). The flow rate of shielding argon was restricted within the range of 9 l/min to 10 l/min. The welding tests were performed using DC with reversed polarity on a tungsten electrode (ø 2.4 mm) with the addition of thorium oxide (grade WT20 in accordance with EN ISO 6848). All of the welds were subjected to the visual validation of surface condition and macrostructural tests aimed to identify their dimensions (Figure 1). The specimens subjected to microstructural observation were etched using Adler's reagent. The measurements of the width of the welds were performed every 50 mm (between measurement points) along the entire length of the test welds. The specimens subjected to macrostructural tests were sampled from the central part of the test weld. Test Results and Analysis The results of the macrostructural tests of the welds made on individual steel grades using various activated fluxes (Cr 2 O 3 , TiO 2 , SiO 2 , Fe 2 O 3 , NaF, and AlF 3 ) are presented in Figures 2-5. The analysis of the dimensions of the bead-on-plate welds made of unalloyed steel grades S235JR+N, S355J2+N, and P265GH revealed (Figure 6) that penetration depth increased in nearly all of the cases, regardless of the flux applied. However, the aforesaid increase was not significant and amounted to a maximum of 30%.
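Stepping back briefly to the welding parameters listed above, the quoted heat input range is consistent with the usual arc energy formula Q = k·U·I/v with a thermal efficiency factor of about 0.6 for TIG; the efficiency value is an assumption on my part, chosen because it reproduces the stated 0.499-0.614 kJ/mm range. A quick check:

```python
# Heat input check for the stated welding parameters (assumed efficiency k = 0.6).
current_A = 200.0
voltage_V = (10.4, 12.8)          # reported arc voltage range
travel_speed_mm_s = 150.0 / 60.0  # 150 mm/min converted to mm/s
efficiency = 0.6                  # assumed TIG arc efficiency

for U in voltage_V:
    q_kj_per_mm = efficiency * U * current_A / travel_speed_mm_s / 1000.0
    print(f"U = {U:4.1f} V  ->  heat input = {q_kj_per_mm:.3f} kJ/mm")
# Expected output: about 0.499 kJ/mm and 0.614 kJ/mm, matching the text.
```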
Only in relation to steel S235JR+N welded using Cr 2 O 3 and NaF was it possible to notice a decrease in penetration depth. However, the aforesaid reduction was not significant. The greatest increase in penetration depth was observed in relation to steel S355J2+N, whereas the lowest increase in penetration depth was observed with steel grade S235JR+N. It was also noticed that TiO 2 and SiO 2 had a very similar effect on penetration depth, regardless of the substratum type. In turn, the effect of Cr 2 O 3 , Fe 2 O 3 , NaF, and AlF 3 varied depending on the steel grade. It should be noted that the greatest increase in penetration depth accompanied the welding of steel S355J2+N using Fe 2 O 3 , whereas the very same oxide had no influence on penetration depth in steel S235JR+N. A similar result was observed during the welding process involving the use of NaF. As regards the unalloyed and the fine-grained steels, the analysis of the macrostructures of the experimental welds revealed (Figures 2-5) that additional information about the effect of activators could be provided not only by penetration depth but also by the dimensions (visible in the metallographic specimens) of the heat-affected zone (HAZ). The obtainment of such information was not possible during the welding of the austenitic stainless steels, as the HAZ was very narrow and nearly invisible (Figure 5). As regards steel S235JR+N, the tests revealed that the shape and the dimensions of the HAZ remained nearly unchanged (Figure 8). Significantly different was the case with steel S355J2+N. The welding of the aforesaid steel using TiO 2 , SiO 2 and Fe 2 O 3 resulted in a significant increase in the HAZ depth, much greater than could be implied by the increase in penetration depth. Interestingly, the width of the HAZ changed only slightly. On the one hand, the change in the HAZ dimensions could result from the thermal conductivity and the initial structure of the steel, yet, on the other, it could also indicate the direction of liquid metal pool movement in the weld and the transfer of energy (which could be indicated by the dimensions of the welds and of the HAZ made on steel P265GH using TiO 2 (Figure 4b) and NaF (Figure 4f)). In both cases, the depth of penetration was the same, yet the depth of the HAZ of the weld made using NaF was greater than that of the HAZ of the weld made using TiO 2 . The comparison of the effect of the activated fluxes on penetration depth during the A-TIG welding of the unalloyed steels and stainless steel AISI 304L revealed a similarly favorable effect of the TiO 2 , SiO 2 and Fe 2 O 3 oxides as regards both steel groups. The effect of the oxides or, more precisely, of the oxygen provided by the oxides to the weld pool, is well explained by the Marangoni convection, cited in many scientific publications. A different influence could be observed in terms of the fluorides (Figure 5). The use of NaF and AlF 3 resulted in the reduction of penetration depth in steel AISI 304L and in an increase in penetration depth in steel S355J2+N (Figure 6). Additional information for analysis was provided by the results of the tests related to penetration depth during the A-TIG welding of high-strength steel WELDOX 1300, characterized by very high metallurgical purity. The tests revealed that the use of TiO 2 , SiO 2 and Cr 2 O 3 resulted in more than a two-fold increase in penetration depth in comparison with that obtained using traditional TIG welding without flux (Figures 9 and 10).
As regards penetration depth, differences between TIG and A-TIG welding were clearly visible, which was not the case with steels S235JR+N, S355J2+N, and P265GH. Interestingly, the use of the TiO 2 and SiO 2 oxides resulted in the obtainment of nearly the same penetration depth in cases of all of the unalloyed and low-alloy steels ( Figure 10). The test results concerning the A-TIG welding of the unalloyed and low-alloy steels revealed that penetration depth increased in the steels characterized by increasingly high deoxidation and metallurgical purity. Obviously, not each activator was responsible for an increase in penetration depth, yet the use of the TiO 2 and SiO 2 oxides proved undoubtedly favorable. The favorable influence of Cr 2 O 3 could be observed along with the increasingly high quality of the unalloyed and low-alloy steels. In relation to the aforesaid oxide, it was also observed that, depending on steel grades, the depth of penetration changed significantly, yet the width of the welds remained nearly the same, regardless of the type and the grade of steel (Figures 6 and 7). A similar, yet less evident, correlation could be observed during the welding tests involving the use of iron oxide (Fe 2 O 3 ). Worth noticing is also the comparison of the influence of SiO 2 and Fe 2 O 3 on penetration depth in steel S235JR+N ( Figure 6). Related measurements revealed that the use of SiO 2 led to an increase in penetration depth, whereas the use of Fe 2 O 3 did not produce any visible result. The arc temperature-triggered decomposition of both oxides resulted in providing the liquid pool with additional oxygen, yet only the SiO 2 oxide was responsible for increased penetration depth. The foregoing could imply that not only temperature and the amount of oxygen affected the direction of the movement of liquid metal in the weld pool (Marangoni convection) but also the presence of other chemical elements influencing surface tension, e.g., through the specific course of reactions at the phase boundary and the dynamic formation of thin subsurface layers. As regards steel S355J2+N, characterized by the relatively high amount of Si (Table 1), it should be mentioned that both the use of Fe 2 O 3 and that of SiO 2 ( Figure 6) led to an increase in penetration depth. As mentioned above, a change in the direction of the Marangoni convection is the most commonly indicated reason for an increase in penetration depth. However, there is also a view stating that an increase in penetration depth is the result of the narrowing of the arc, triggered by the combination of free chemical elements of relatively low potentials with oxygen (e.g., Mn −7.4 eV and Fe −7.8 eV). As a result, the aforementioned elements do not enter the welding arc (which would deionize its peripheral areas and result in the narrowing of the arc and, consequently, could lead to the concentration of energy in a smaller area). The additional narrowing of the heated area may also result from the lack of the electric conductivity of activated fluxes, most of which are oxides or halides. 
In terms of steel AISI 304L, the analysis of related test results revealed that both of the above-presented conclusions were justified as penetration depth increased in the presence of all oxide activators (Cr 2 O 3 , TiO 2 , SiO 2, and Fe 2 O 3 ), triggering a change in the surface tension of liquid metal in individual areas of the weld pool (Marangoni convection) and the release of oxygen from the above-named oxides as well as the oxidation of chemical elements of relatively low potentials. This, in turn, led to the deionization of the peripheral areas of the arc and, ultimately, to the narrowing of the arc. However, the theory concerned with the effect of the deionization of the peripheral areas of the arc on the A-TIG welding process was challenged by the lack of an increase or even by a decrease in penetration depth in steel AISI 304L when using sodium and aluminum fluorides ( Figure 6). Fluorine entering the welding arc deionized its peripheral areas, which, however, did not lead to an increase in penetration depth during the welding of austenitic stainless steel AISI 304L. It should also be mentioned that the width of the weld obtained when using NaF decreased in comparison with that obtained using the traditional TIG welding method and was exactly the same as the weld width obtained using Fe 2 O 3 ( Figure 11). The use of the NaF fluoride and of the Fe 2 O 3 oxide resulted in the narrowing of the arc, which, in both cases, resulted in the narrowing of the weld width. However, only the use of the iron oxide as the activator led to an increase in penetration depth. The foregoing indicates that the above-presented differences depended on processes taking place on the surface of liquid metal and were connected with changes of surface tension in various areas of the weld pool (Marangoni convection). This supposition could also be confirmed by the fact that the use of sodium fluoride (NaF) when making welds on steels S355J2+N and P265GH resulted in an increase in penetration depth ( Figure 12). However, the composition of unalloyed steels differed significantly from that of stainless steel AISI 304L, which could be the reason for the varying influence of NaF on the surface tension of liquid metal in the weld pool and on penetration depth. Figure 11. Macrostructure of the welds made on steel AISI 304L using NaF and Fe 2 O 3. Figure 12. Macrostructure of the welds made on steels AISI 304L and S355J2+N using NaF. Summary The results of the above-presented tests revealed that in cases of the traditional unalloyed and fine-grained steels, certain oxides and fluorides were responsible for an increase in penetration depth by 20-30%. Noticeably, as regards metallurgically pure low-alloy steel WELDOX 1300, an increase in penetration depth triggered by TiO 2 , SiO 2 and Cr 2 O 3 exceeded 200%. The test results were, as a rule, consistent with the results presented in referenced scientific publications, yet they slightly differed in detail. This fact was undoubtedly connected with the use of various unalloyed and low-alloy steel grades, different conditions of the plate surface, various conditions of experiments, and the specific nature of the A-TIG welding technology (related primarily to the method, in which activated fluxes were applied). Presently, the most popular manner of the application of activated flux is by hand, i.e., using a brush. Obviously, such a method makes it difficult to precisely control the amount of flux placed on the surface of a plate to be welded. 
The amount of flux may differ not only as regards the type of its primary ingredient but also due to the fact that there may be more flux at the beginning and less at the end of the layer applied on the plate. The author's many years of practical experience and numerous research results provided by other researchers revealed that the amount of flux present on the plate surface may significantly affect the results of A-TIG welding [3,14]. To eliminate the above-named difficulties and minimize the effect of the aforesaid factors on welding test results, in the research work discussed in this article the flux in the form of paste was characterized by similar density, and the process of flux application was always performed by the same person. The tests revealed that, in comparison with TIG welding without flux, the greatest increase in penetration depth accompanied the A-TIG welding of low-alloy steels characterized by high metallurgical purity. The above-named steels are known to be characterized by a higher yield point and strength. In cases of narrow and deep welds, obtained during A-TIG welding, there is a risk of increased HAZ hardness and worsened mechanical properties. Publication [15] presents tensile test results concerning welded joints made of steel WELDOX 1300 using various welding methods, including the A-TIG process. An A-TIG welded joint subjected to the tests described in work [15] was made without the filler metal and with an activated flux developed on the basis of the above-named test results. The tests concerning the strength of welded joints made on steel WELDOX 1300 revealed that the strength of the A-TIG joint was lower by only approximately 100 MPa than that of the base material and, at the same time, was higher than the strength obtained during traditional TIG welding and MAG welding with the filler metal. In turn, work [16] presents tests related to the mechanical properties of A-TIG joints made of steel WELDOX 1100 (also made by the author of this article). The tests revealed that the A-TIG joints satisfied the related requirements in terms of mechanical properties and impact energy. The hardness of the HAZ was higher than that of the weld (but only slightly) and did not exceed 419 HV. The only problem, which occurred during the bend test, was observed on the weld face side, as the specimen contained cracks that appeared at bend angles of 120° and 150°. Interestingly, bending on the weld root side was successful; a bend angle of 180° was obtained without the formation of any scratches or cracks. The above-presented results indicate that, similar to the traditional TIG and MAG methods, the use of the A-TIG process enables the obtainment of welded joints satisfying the related requirements. However, in each case, the mechanical properties of A-TIG joints must be verified before use in specific applications. One of the practical conclusions drawn on the basis of the above-presented tests is that the A-TIG welding of ordinary structural steels available on the market is characterized by low effectiveness, not only in comparison with the commonly used MAG (GMAW) method, but also if compared with the traditional TIG welding process (rarely used in the joining of the aforesaid steels). The advantages of the A-TIG method in comparison with both the TIG and MAG processes are noticeable in relation to the welding of steels characterized by higher metallurgical purity, i.e., (usually) low-alloy high-temperature creep-resisting steels and high-strength steels.
Concluding Remarks The analysis of the above-presented test results justified the formulation of the following conclusions:
- among the activated fluxes subjected to the tests, the use of the titanium and silicon oxides (TiO 2 and SiO 2 ) always led to an increase in penetration depth during A-TIG welding, regardless of the type and the grade of steel. The degree of the increase in penetration depth ranged from 30% to more than 200%;
- in cases of the carbon and low-alloy steels, the greatest increase in penetration depth was observed in the steels which had been previously appropriately deoxidized and purified at the production stage in the steelworks. The purer the unalloyed or low-alloy steel, the more favorable the effect of TiO 2 and SiO 2 ;
- the influence of the other tested oxides and fluorides depended on the types and grades of steels. For instance, the sodium and aluminum fluorides decreased the penetration depth of the weld on the austenitic stainless steel, yet they increased the depth of penetration in cases of the fine-grained steels.
Funding: This research has received funding from the Polish Ministry of Science as part of the funds for maintaining the research capacity of the Łukasiewicz Research Network-Institute of Welding.
5,881.4
2021-03-24T00:00:00.000
[ "Materials Science" ]
Effect of casing yield stress on bomb blast impulse An equation to predict blast effects from cased charges was first proposed by U. Fano in 1944 and revised by E.M. Fisher in 1953 [1]. Fisher's revision provides much better matches to available blast impulse data, but still requires empirical parameter adjustments. A new derivation [2], based on the work of R.W. Gurney [3] and G.I. Taylor [4], has resulted in an equation which nearly matches experimental data. This new analytical model is also capable of being extended, through the incorporation of additional physics, such as the effects of early case fracture, finite casing thickness, casing metal strain energy dissipation, explosive gas escape through casing fractures and the comparative dynamics of blast wave and metal fragment impacts. This paper will focus on the choice of relevant case fracture strain criterion, as it will be shown that this allows the explicit inclusion of the dynamic properties of the explosive and casing metal. It will include a review and critique of the most significant earlier work on this topic, contained in a paper by Hoggatt and Recht [5]. Using this extended analytical model, good matches can readily be made to available free-field blast impulse data, without any empirical adjustments being needed. Further work will be required to apply this model to aluminised and other highly oxygen-deficient explosives. 1 Blast impulse equations The equation derived in [2] for the blast impulse I from a cased charge, as a fraction of the impulse I 0 from the same charge without a casing, is given as equation (1). This equation applies where the casing metal is very ductile and therefore expands to a radius at which the internal driving pressure of the explosive gases is negligible. It also applies only to explosive compositions that are neither aluminised nor otherwise highly oxygen deficient, since these generate additional blast energy through exothermic reactions with the surrounding air (i.e. after-burn). However, many real bomb casings are made from metals with significant yield strength and these will fracture at expansion radii where the internal driving pressure of the explosive gases is significant and the simple energy balance between gases and casing fragments predicted by Gurney [3] has not been reached. Casing fracture criteria It has been pointed out by G.I. Taylor [4] that the internal driving pressure exerted upon the casing metal by the gaseous products, while it eventually strains the casing metal towards fracture, initially suppresses fracture. The casing material is initially compressed between the gas pressure acting on its inside surface and its own inertia. The casing metal shears in compression, both losing thickness and gaining diameter and surface area in the process. Taylor also pointed out that the compressive stress in the casing falls in value from the instantaneous gas pressure at its inside surface, as expressed by equation (2) for a perfect gas, to near zero (i.e. just the pressure of any surrounding air) at its outside surface. This means, as illustrated in figure 1, that as the internal gas pressure P reduces adiabatically with advancing casing expansion, a release wave propagates inwards from the casing outer surface. Behind this wave, the casing material can fracture. Ahead of this wave, the material can still yield in compressive shear.
According to Taylor [4], the Tresca criterion for shear failure applies at the point where this effective release wave reaches the casing inner surface, and this defines the critical stress condition at which the casing metal is obliged instead to fracture and then expand as an envelope of discrete fragments. The choice of criterion to define the critical stress condition for through-casing fracture was subsequently reviewed by Hoggatt and Recht [5], and they proposed a different failure criterion, i.e. that the stress components normal to a shear plane in the casing metal should sum to zero. However, this paper calls this new approach by Hoggatt and Recht into question and argues for the original Taylor/Tresca criterion, based on the following two points. Firstly, the Tresca criterion used by Taylor is that most relevant to the prediction of shear yield in response to biaxial stresses. Secondly, Hoggatt & Recht's criterion predicts failure at higher strain than does the Taylor/Tresca criterion, which would thus allow fracture first. Hoggatt and Recht themselves found evidence of shear strain localisation into adiabatic shear bands. Considering the weakened condition of casing material within these bands, a crack should propagate inwards along such a band as soon as the compressive shear could no longer be maintained. Thus it is hard to see why the stress components normal to a shear plane in the casing metal should first have to sum to zero, as in Hoggatt & Recht's criterion. Nonetheless, the internal dynamics of expanded metal casings have a significant impact on the shape, size distribution and initial velocity of the projected casing fragments and, for the purposes of this paper, on blast impulse. The mathematical methods of Hoggatt and Recht provide a valuable insight into those internal dynamics and will therefore be reviewed here. Radial dependency of compressive stress Hoggatt and Recht approach the solution for the dynamic radius within the casing thickness, at which the critical stress condition exists, via a number of stages, the last of which is numerical rather than analytical. Firstly, an equation of motion is derived for the wall of a cylindrical bomb. Then, based on casing metal volume conservation, the dynamic radius a of any point within the casing thickness can be derived relative to its initial radius a 0 (equation (3)); from this can be derived an expression for the radial pressure p a in the wall at any radius a (equation (4)). In (4), r is the radius of the inner surface, R the outer surface and a an arbitrary point through the thickness of the casing, r < a < R. Subscripted terms refer to the radius of these three points at time t = 0.
Equation (2) for the fall in gas pressure with casing expansion is derived, but is based on the assumption that the initial mean gas pressure will be the Chapman-Jouguet pressure, P CJ . In reality, P CJ is only reached just behind a detonation wave and the true initial mean pressure is P 0 ≈ 0.42 P CJ . An empirical relation is given between the compressive plastic natural strain and the compressive uniaxial stress, p ae , where k is a strength coefficient and n a work hardening exponent. A Von Mises (three-dimensional) description is given of the compressive uniaxial stress, where σ θ and σ Z are the hoop and axial stress components. Eliminating the underlying elastic strains, as defined by the tri-axial stress-strain equations that incorporate Young's modulus, Y, and Poisson's ratio, ν, Hoggatt and Recht derived values of a p , the purely plastically expanded radius of a differential element of casing originally situated at radius a 0 . Equation (7) is their solution for σ θ = 0, the Taylor/Tresca criterion, rather than Hoggatt & Recht's own failure criterion. In (7), a is the actual expanded radius of the casing thickness element, including the elastic strain. The final analytical steps are to derive the corresponding value of the equivalent compressive stress and, from (3), (4) and (5), the failure strain at any expanded radius a within the casing thickness (equation (9)). Further details can be found in Hoggatt & Recht's own paper [5]; however, the above analytical derivation does not lead to closed-form solutions. Values for p a in (5) must be found by first finding the value of a 0 at a selected value of expansion radius r. Based on equation (3) for the dynamic radius a of an element within the casing, and knowing the instantaneous mean gas pressure from equation (2), equation (4) therefore becomes equation (10). Hoggatt and Recht were then able to find, by iteration, a/a 0 values that obeyed both the geometric requirement in (10) and the stress condition σ θ = 0. As a check on this rather complex iterative method of deriving failure strain values, it can be shown that the point at which the failure condition appears at the inner surface can be found more simply by solving only for the inner surface of the casing, i.e. when a 0 = r 0 and a = r. Based on equation (9), and substituting for the value of p a = P at inner radius r from equation (2), one can obtain a part-logarithmic, part-polynomial equation (11) for the reciprocal of the failure radius r f . However, even this equation does not have a straightforward analytical solution. Remaining therefore with iterative methods, but using equation (11) as a check on the end values, it is possible to match very closely the values of a and r at which dynamic failure should occur with those obtained by Hoggatt & Recht [5] for a worked example from Taylor [4]. Example from Taylor In order to demonstrate their iterative method, Hoggatt and Recht used the example of a steel cased bomb given by Taylor, together with textbook values for Y, k, ν and n for the mild steel tube of internal radius 1 in (0.0254 m) and external radius 1.25 in (0.0318 m), packed with reduced density (1.5 g/cc) RDX explosive. Of significance here is that both Hoggatt and Recht and Taylor made the error of using the P CJ value of 20.9 GPa for the initial pressure P 0 , when the value used should have been 8.8 GPa.
The values of a 0 were stepped in intervals of 0.01 in (0.254 mm) from r 0 (= 1.0 in, 25.4 mm) to R 0 (= 1.25 in, 31.8 mm). The values of r for each value of a 0 were then adjusted by trial and error to bring those values of a/a 0 into line with the values of a/a 0 derived for each stress condition p a . A simple macro was written to use the goal seek function in MS-Excel to look for a value of r for which the two values of a/a 0 had a ratio of 1.0, and it was found that this gave the necessary values for the required curves to < ±0.1%. In figure 2, the straight black lines show the changing casing inner and outer radii with increasing inner radius. The casing material is located between these two lines. The dotted curve, based on the above up-to-date version of Hoggatt and Recht's iterative method, plots the loci of radii a within the dynamic casing thickness at which, according to Taylor, the material is in transition from compressive to tensile stress. This curve effectively shows the progress of a wave, starting from the outer casing surface, which allows fracture when it reaches the dynamic inner surface (lower straight line). When the value of a, which is increasing more slowly than r, is caught up with at r = r f , all the casing material is in tension and through-fracture can occur. Working through the method of Hoggatt and Recht, while retaining the Taylor/Tresca failure criterion, provides the necessary underpinning in the form of a stress-based fracture methodology to predict modified blast impulse in the presence of strong casings. Blast impulse modified for casing yield stress In a further paper [6] it will be shown that the work energy E C remaining with the explosive gases at the radius of casing fracture can be expressed by a modified version of equation (1), given as equation (12), where E is the total work energy (i.e. the Gurney energy) available per unit charge mass, C is the mass of explosive, M the mass of casing and r f the casing inner radius at fracture. Applying Taylor's fracture criterion, the gas pressure P within the casing can firstly be related to the radius r f to which the casing has expanded from its initial radius r 0 and to the initial gas internal pressure P 0 , and secondly equated to the metal yield stress, σ y (equation (13)). Therefore, rearranging the right-hand equation in (13), at casing failure one obtains equation (14). Using equation (14) to substitute for r/r 0 in (12), we now obtain equation (15). While gas kinetic energy and gas momentum are not simply related, due to the distribution of velocities within the gas, the same proportionality should hold for both bare and cased charges of the same geometry. Therefore, by taking the square root of the right-hand side of (15), we can obtain the blast impulse I of the cased charge as a fraction of the impulse I 0 from the same charge without a casing (equation (16)). Thus we have a new equation for the blast impulse from a cased charge, which only requires us to know the casing/charge mass ratio, the explosive properties and the casing metal yield stress.
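The fracture condition just described can be illustrated with a toy calculation. Assuming an adiabatic cylindrical expansion law of the form P = P0·(r0/r)^(2γ) (presumably the kind of relation encoded in equations (2) and (13), whose exact form is not reproduced in this text) and equating P to the yield stress σy, the expansion ratio at fracture follows as r_f/r0 = (P0/σy)^(1/(2γ)). The value of γ is an assumption; P0 is the value quoted for the Taylor example, and the two yield stresses are those quoted in the experimental comparison below.

```python
# Toy estimate of the casing expansion ratio at fracture, assuming
# P = P0 * (r0/r)**(2*gamma) and the failure condition P(r_f) = sigma_y.
# This is a sketch under stated assumptions, not the paper's equations (13)-(14).
P0 = 8.8          # initial mean gas pressure [GPa], value quoted for the Taylor example
GAMMA = 3.0       # assumed polytropic exponent for the detonation products

for sigma_y in (0.4, 0.96):  # yield stresses [GPa] of the two casing steels discussed
    rf_over_r0 = (P0 / sigma_y) ** (1.0 / (2.0 * GAMMA))
    print(f"sigma_y = {sigma_y:4.2f} GPa  ->  r_f/r0 ~ {rf_over_r0:.2f}")
```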
Experimental comparison Figure 3, a plot in Gurney parameter space, shows good comparisons between the predictions of equation (16) and blast impulse data from two independent sets of experiments: one from an unpublished report by Bishop and James at AWE Foulness in 1970 and another data set published much more recently by Flynn and Wharton [7]. Both sets of data were for cylindrical charges, i.e. steel tubes of varying steel thickness and mass filled with non-aluminised explosives. In all instances, side-on pressure gauges were ranged at intervals along a radius struck from the mid-point of the charge axis. Dynamic pressure readings have been time integrated to provide relative side-on impulse values. The diagonal straight dotted line is the prediction of equation (1). It is referred to here as the 'Gurney Line' because it represents the condition where the final energy balance between casing fragments and explosive gases is that postulated by Gurney [3]. The point in the top right hand corner is that for a bare charge; in the bottom left hand corner is the point for an infinitely heavy casing. The smooth, shallow curves are the predictions of equation (16), both for the stronger steel (0.96 GPa) and more powerful explosive used in the BAE Systems experiments, and for the milder steel (0.4 GPa) and less powerful explosive used in the AWE Foulness experiments. Conclusions Regarding the internal dynamics of the casing metal, the analytical method of Hoggatt and Recht [5] provides a valuable expansion and confirmation of the methodology first set out by Taylor, but the Tresca fracture criterion adopted by Taylor [4] should be adhered to. Significant misunderstandings regarding the inertia and initial pressure of the explosive gases exist in these previous papers and should be noted. The Tresca/Taylor criterion can be used as a basis on which to derive an equation (16) for cased charge relative blast impulse which allows for the casing and explosive dynamic properties. The available experimental data validate the predictions of this equation, regarding the effect of different charge compositions and steel casing yield stresses. It is thus concluded that this equation ought to replace any empirical equations currently in use to predict the relative blast impulse of cased charges. Further work will be required to apply this model to aluminised and other highly oxygen-deficient explosives. Fig. 3. Plot of predicted (equation 16) and experimental relative blast impulses against the function √(C/(C+2M)) of the charge and casing masses. Points for thicker casings are towards the left.
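Equation (1) is not reproduced above, but the Gurney-line behaviour and the abscissa √(C/(C+2M)) used in Fig. 3 suggest the simple form I/I0 = √(C/(C+2M)); on that assumption, the short snippet below evaluates the Gurney-line relative impulse for a few casing-to-charge mass ratios (the M/C values are arbitrary examples, not the experimental ones).

```python
import math

# Assumed Gurney-line relative impulse for a cased cylindrical charge:
# I/I0 = sqrt(C / (C + 2M)), with C = explosive mass and M = casing mass.
for m_over_c in (0.0, 0.5, 1.0, 2.0, 5.0):
    ratio = math.sqrt(1.0 / (1.0 + 2.0 * m_over_c))
    print(f"M/C = {m_over_c:4.1f}  ->  I/I0 = {ratio:.3f}")
# M/C = 0 recovers the bare-charge point (I/I0 = 1); heavy casings tend towards 0.
```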
3,560.2
2012-08-01T00:00:00.000
[ "Engineering", "Physics" ]
Breakdown of chiral perturbation theory for the axion hot dark matter bound We show that the commonly adopted hot dark matter (HDM) bound on the axion mass $m_a \lesssim$ 1 eV is not reliable, since it is obtained by extrapolating the chiral expansion in a region where the effective field theory breaks down. This is explicitly shown via the calculation of the axion-pion thermalization rate at the next-to-leading order in chiral perturbation theory. We finally advocate a strategy for a sound extraction of the axion HDM bound via lattice QCD techniques. Introduction. The axion originally emerged as a low-energy remnant of the Peccei Quinn solution to the strong CP problem [1][2][3][4], but it also unavoidably contributes to the energy density of the Universe. There are two qualitatively different populations of relic axions, a non-thermal one comprising cold dark matter (DM) [5][6][7][8], and a thermal axion population [9] which, while still relativistic, would behave as extra dark radiation. Such a hot dark matter (HDM) component contributes to the effective number of extra relativistic degrees of freedom [10], ∆N_eff ≃ (4/7) (43/[4 g_S(T_D)])^(4/3), with g_S(T_D) the number of entropy degrees of freedom at the axion decoupling temperature, T_D. The value of ∆N_eff is constrained by cosmic microwave background (CMB) experiments, such as the Planck satellite [11,12], while planned CMB Stage 4 (CMB-S4) experiments [13] will provide an observable window on the axion interactions. There are several processes that can keep the axion in thermal equilibrium with the Standard Model (SM) thermal bath. From the standpoint of the axion solution to the strong CP problem, an unavoidable process arises from the model-independent coupling to gluons, (α_s/8π)(a/f_a) GG̃. (Footnote 1: Other thermalization channels arise from model-dependent axion couplings to photons [9], SM quarks [14][15][16][17] and leptons [18].) For T_D ≳ 1 GeV thermal axion production proceeds via its scatterings with gluons in the quark-gluon plasma [19,20], while for T_D ≲ 1 GeV processes involving pions and nucleons must be considered [21][22][23]. The latter have the advantage of occurring very late in the thermal history, so that it is unlikely that the corresponding population of thermal axions could be diluted by inflation. The transition between the two regimes depends on the strength of the axion interactions set by f_a or, equivalently, by m_a ≃ 5.7 × (10^6 GeV/f_a) eV, and it encompasses the range m_a ∈ [0.01, 0.1] eV (with heavier axions leading to lower decoupling temperatures). Although the transition region cannot be precisely determined due to the complications of the quark-hadron phase transition, for heavier axions approaching the eV scale the main thermalization channel is aπ ↔ ππ [22,23], with T_D ≲ 200 MeV. In this regime, scatterings off nucleons are subdominant because of the exponential suppression in their number density. The highest attainable axion mass from cosmological constraints on extra relativistic degrees of freedom, also known as the HDM bound, translates into m_a ≲ 1 eV [24]. Based on a leading-order (LO) axion-pion chiral effective field theory (EFT) analysis of the axion-pion thermalization rate [22,23], the axion HDM bound has been reconsidered in Refs. [25][26][27][28][29][30][31][32][33], also in correlation with relic neutrinos. The most recent update [33] quotes a 95% CL bound that ranges from m_a ≃ 0.2 eV to 1 eV, depending on the used data set and assumed cosmological model.
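The two relations quoted above are easy to evaluate numerically; the sketch below computes ∆N_eff for a few assumed values of g_S(T_D) and the m_a-f_a conversion, using exactly the formulas in the text (the example g_S values are illustrative, not values advocated by the paper).

```python
# Delta N_eff and the axion mass-decay constant relation, as quoted in the text.
def delta_n_eff(g_s_at_decoupling):
    return (4.0 / 7.0) * (43.0 / (4.0 * g_s_at_decoupling)) ** (4.0 / 3.0)

def axion_mass_eV(f_a_GeV):
    return 5.7 * (1.0e6 / f_a_GeV)

# Illustrative entropy degrees of freedom around/below the QCD transition.
for g_s in (10.75, 20.0, 60.0):
    print(f"g_S(T_D) = {g_s:5.2f}  ->  Delta N_eff = {delta_n_eff(g_s):.3f}")

for f_a in (5.7e6, 5.7e7):
    print(f"f_a = {f_a:.1e} GeV  ->  m_a = {axion_mass_eV(f_a):.2f} eV")
```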
Although the axion mass range relevant for the HDM bound is in generic tension with astrophysical constraints, the latter can be tamed in several respects. (Footnote 2: Tree-level axion couplings to electrons are absent in KSVZ models [34,35], thus relaxing the constraints from Red Giants and White Dwarfs. The axion coupling to photons, constrained by Horizontal Branch stars evolution, can be accidentally suppressed in certain KSVZ-like models [36][37][38]. Finally, the SN1987A bound on the axion coupling to nucleons can be considered less robust both from the astrophysical and experimental point of view [39][40][41][42].) It is the purpose of this Letter to revisit the axion HDM bound in the context of the next-to-LO (NLO) axion-pion chiral EFT. This is motivated by the simple observation that the mean energy of pions (axions) in a heat bath of T ≃ 100 MeV is E ≡ ρ/n ≃ 350 MeV (270 MeV), thus questioning the validity of the chiral expansion for the scattering process aπ ↔ ππ. The latter is expected to fail for √s ∼ E_π + E_a ≳ 500 MeV, corresponding to temperatures well below that of QCD deconfinement, which was estimated to be T_c = 154 ± 9 MeV in Ref. [43], see also [44,45]. In this work, we provide for the first time the formulation of the full axion-pion Lagrangian at NLO, including also derivative axion couplings to the pionic current (previous NLO studies only considered non-derivative axion-pion interactions [46,47]), and paying special attention to the issue of the axion-pion mixing. Next, we perform an NLO calculation of the aπ ↔ ππ thermalization rate (that can be cast as an expansion in T/f_π, with f_π ≃ 92 MeV) and show that the NLO correction saturates half of the LO contribution for T_χ ≃ 62 MeV. The latter can be considered as the maximal temperature above which the chiral description breaks down for the process under consideration. On the other hand, the region from T_χ up to T_c, where chiral perturbation theory cannot be applied, turns out to be crucial for the extraction of the HDM bound and for assessing the sensitivity of future CMB experiments. We conclude with a proposal for extracting the axion-pion thermalization rate via a direct Lattice QCD calculation, in analogy to the well-studied case of π-π scattering. Axion-pion scattering at LO. The construction of the LO axion-pion Lagrangian was discussed long ago in Refs. [48,49]. We recall here its basic ingredients (see also [22,50]), in view of the extension at NLO. Defining the pion Goldstone matrix U = exp(i π^A σ^A / f_π), with f_π ≃ 92 MeV, π^A and σ^A (A = 1, 2, 3) denoting respectively the real pion fields and the Pauli matrices, the LO axion-pion interactions stem from the LO Lagrangian of Eq. (1), where χ_a = 2 B_0 M_a, in terms of the quark condensate B_0 and the 'axion-dressed' quark mass matrix M_a = exp(i a Q_a/(2 f_a)) M_q exp(i a Q_a/(2 f_a)), with M_q = diag(m_u, m_d) and Tr Q_a = 1. The latter condition ensures that the axion field is transferred from the operator (α_s/8π)(a/f_a) GG̃ to the phase of the quark mass matrix, via the quark axial field redefinition q → exp(i γ_5 a Q_a/(2 f_a)) q. In the following, we set Q_a = M_q^(-1)/Tr M_q^(-1), so that terms linear in a (including a-π⁰ mass mixing) drop out from the first term in Eq. (1). Hence, in this basis, the only linear axion interaction is the derivative one with the conserved SU(2)_A pion current.
The latter is given at LO by Eq. (2), while the derivative axion coupling in Eq. (1) is given by Eq. (3), where the first term arises from the axial quark rotation that removed the aGG̃ operator and the second one originates from the model-dependent coefficients c⁰_q, which vanish at tree level in the KSVZ model [34,35], while c⁰_u = (1/3) cos²β and c⁰_d = (1/3) sin²β in the DFSZ model [51,52], with tan β the ratio between the vacuum expectation values of the two Higgs doublets. Expanding the pion matrix in Eq. (1), one obtains Eq. (4). At leading order, the diagonalization of the a-π⁰ term is obtained by shifting the a and π⁰ fields by terms proportional to the small a-π⁰ mixing, where we used the fact that m_a/m_π is of the same small order. Hence, as long as we are interested in effects that are linear in a and neglect higher orders in this mixing, the axion-pion interactions in Eq. (3) are already in the basis with canonical propagators. For temperatures below the QCD phase transition, the main processes relevant for the axion thermalization rate are a(p_1)π⁰(p_2) → π⁺(p_3)π⁻(p_4), with s = (p_1 + p_2)², together with the crossed channels aπ⁻ → π⁰π⁻ and aπ⁺ → π⁺π⁰, whose amplitudes follow from the appropriate crossing replacements. Taking equal masses for the neutral and charged pions, one finds the total squared matrix element, summed over the three channels above [23]. Axion-pion scattering at NLO. To compute the axion thermalization process beyond LO we need to consider the one-loop amplitudes from the LO Lagrangian in Eq. (1) as well as the tree-level amplitudes stemming from the NLO axion-pion Lagrangian, both contributing at O(p⁴) in the chiral expansion. The NLO interactions include the derivative coupling of the axion to the NLO axial current, which has been computed here for the first time. We stick to the expression of the NLO chiral Lagrangian given in Ref. [53] (see for example Appendix D in [54] for the trace notation), which, considering only two flavours, depends on 10 low-energy constants (LECs) ℓ_1, ℓ_2, . . . , ℓ_7, h_1, h_2, h_3. The axion field has been included in the phase of the quark mass matrix, as described after Eq. (1). Note that since we are interested in 2 → 2 scattering processes, we can neglect the O(p⁴) Wess-Zumino-Witten term [55,56], since it contains operators with an odd number of bosons. To compute the axial current J^A_μ at NLO, we promote the ordinary derivative to a covariant one, defined as D_μ U = ∂_μ U − i r_μ U + i U l_μ, with r_μ = r^A_μ σ^A/2 and l_μ = l^A_μ σ^A/2 external fields which can be used to include electromagnetic or weak effects. The left and right SU(2) currents are obtained by differentiating the NLO Lagrangian with respect to l^A_μ and r^A_μ, respectively. Taking the R − L combination and switching off the external fields yields the NLO axial current, in whose expression curly brackets indicate anti-commutators. New a-π⁰ mixings arise at NLO, both at tree level from the NLO Lagrangian and at one loop from L^LO_(a-π). These mixings are explicitly taken into account in the Lehmann-Symanzik-Zimmermann (LSZ) reduction formula [57] (focussing e.g. on the aπ⁰ → π⁺π⁻ channel), in which the index i runs over the external particles, Z_a (Z_π) is the wave-function renormalization of the axion (pion) field, and the full 4-point Green's function is given by Eq. (9). The first term in Eq. (9) is the amputated 4-point function, multiplied by the 2-point functions of the external legs, with the axion mass set to zero. Working with LO diagonal propagators, the 2-point amplitude for the a-π⁰ system reads P_ij = diag(p², p² − m_π²) − Σ_ij, where Σ_ij encodes the NLO corrections, including mixings. The 2-point Green's function is then G_ij = (−iP)⁻¹_ij, given explicitly in Eq. (10).
(9) and (10) into the LSZ formula for the scattering amplitude and neglecting O(1/f a ) 2 terms, one finds (with Z a = 1, Z π = 1 + Σ ππ (m 2 π ) and primes indicating derivatives with respect to p 2 ) where the G's are evaluated at the physical masses of the external particles. The one-loop amplitudes have been computed in dimensional regularization. To carry out the renormalization procedure in the (modified) MS scheme, we define the scale independent parameters i as [53] i = with R = 2 d−4 − log(4π) + γ E − 1, in order to cancel the divergent terms (in the limit d = 4) with a suitable choice of the γ i . Eventually, only the terms proportional to 1,2,7 contribute to the NLO amplitude, which is renormalized for γ 1 = 1/3, γ 2 = 2/3 and γ 7 = 0. The latter coincide with the values obtained in Ref. [53] for the standard chiral theory without the axion. The renormalized NLO amplitude for the aπ 0 → π + π − process (and its crossed channels) is given in Eq. (16) of the Supplementary Material. We have also checked that the same analytical result is obtained via a direct NLO diagonalization of the a and π 0 propagators, without employing the LSZ formalism with off-diagonal propagators. For consistency, we will only consider the interference between the LO and NLO terms in the squared matrix elements, , since the NLO squared correction is of the same order of the NNLO-LO interference, which we neglect. Breakdown of the chiral expansion at finite temperature. The crucial quantity that is needed to extract the HDM bound is the axion decoupling temperature, T D , obtained via the freeze-out condition (following the same criterium as in [23]) Here, H(T ) = 4π 3 g (T )/45 T 2 /m pl is the Hubble rate (assuming a radiation dominated Universe) in terms of the Planck mass m pl = 1.22 × 10 19 GeV and the effective number of relativistic degrees of freedom, g (T ), while Γ a is the axion thermalization rate entering the Boltzmann equation where n eq a = (ζ 3 /π 2 )T 3 and f i = 1/(e Ei/T − 1). In the following, we will set the model-dependent axion couplings c 0 u, d = 0 (cf. Eq. (4)), to comply with the standard setup considered in the literature [22,23,[25][26][27][28][29][30][31][32][33] (see [58] for an exception). Moreover, we will neglect thermal corrections to the scattering matrix element, since those are small for T m π [59][60][61]. By integrating numerically the phase space in Eq. (14) we find (see Eq. (17) step, or [62] for a slightly different approach) [47], m u /m d = 0.50 (2) [64], f π = 92.1(8) MeV [24] and m π = 137 MeV (corresponding to the average neutral/charged pion mass). The h-functions are normalized to h LO (0) = h NLO (0) = 1 and they are plotted in Fig. 4 of the Supplementary Material. We have checked that h LO reproduces the result of Ref. [23] within percent accuracy. It should be noted that Eq. (15) is meaningful only for m π /T 1, since at higher temperatures above T c pions are deconfined and the axion thermalization rate should be computed from the interactions with a quark-gluon plasma. Nevertheless, we are interested in extrapolating the behaviour of Eq. (15) from the low-temperature regime, where the chiral approach is reliable. In Fig. 1 we compare the LO and NLO rates contributing to Γ a = Γ LO a + Γ NLO a . In particular, the |Γ NLO a /Γ LO a | ratio does not depend on f a . 
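To make the freeze-out logic of Eq. (13) concrete, the short sketch below solves Γ_a(T_D) = H(T_D) for a toy rate and then converts T_D into a contribution to ∆N_eff. It is an illustration only: the rate normalisation (gamma0), the constant value taken for g_*, and the chosen f_a are placeholder assumptions and do not reproduce the h-functions of Eq. (15) or the curves in Fig. 2; only the ∆N_eff dilution formula (for one real scalar decoupling at T_D) is standard.

```python
import numpy as np
from scipy.optimize import brentq

M_PL = 1.22e22          # Planck mass in MeV
GSTAR = 15.0            # relativistic degrees of freedom around T ~ 60-100 MeV (assumed constant)
GSTAR_S_NU_DEC = 10.75  # entropic degrees of freedom at neutrino decoupling

def hubble(T):
    """H(T) = sqrt(4 pi^3 g_*/45) T^2 / m_pl (radiation domination), T in MeV."""
    return np.sqrt(4.0 * np.pi**3 * GSTAR / 45.0) * T**2 / M_PL

def gamma_axion(T, f_a, gamma0=1.0):
    """Toy thermalization rate ~ T^5 / (f_a^2 f_pi^2); gamma0 stands in for the
    h(m_pi/T) factor of Eq. (15) and is NOT the fitted function of the paper."""
    f_pi = 92.1  # MeV
    return gamma0 * T**5 / (f_a**2 * f_pi**2)

def t_decoupling(f_a):
    """Solve the freeze-out condition Gamma_a(T_D) = H(T_D) for T_D (MeV)."""
    return brentq(lambda T: gamma_axion(T, f_a) - hubble(T), 1.0, 200.0)

def delta_neff(gstar_s_at_TD):
    """Standard dilution formula for one real scalar decoupling at T_D."""
    return (4.0 / 7.0) * (GSTAR_S_NU_DEC / gstar_s_at_TD) ** (4.0 / 3.0)

T_D = t_decoupling(f_a=5.7e9)                 # f_a in MeV; illustrative value only
print("toy T_D [MeV] =", T_D)
print("Delta N_eff   =", delta_neff(GSTAR))   # with the assumed g_*s(T_D)
```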
Requiring as a loose criterion that the NLO correction is less than 50% of the LO one yields T_χ ≃ 62 MeV as the maximal temperature at which the chiral description of the thermalization rate can be reliably extended. Fig. 2 shows instead the extraction of the decoupling temperature (defined via Eq. (13)) for two reference values of the axion mass (setting the strength of the axion coupling via f_a), namely m_a = 1 eV and 0.1 eV. Assuming a standard analysis employing the LO axion thermalization rate [23], the former benchmark (1 eV) corresponds to the most conservative HDM bound [33], while the latter (0.1 eV) saturates the most stringent one [33]. While in the former case the decoupling temperature is at the boundary of validity of the chiral expansion, set by T_χ ≃ 62 MeV, in the latter it is well above it. Hence, the region where the chiral expansion fails, T_D ≳ T_χ, corresponds to m_a ≲ 1.2 eV. Since m_a ≳ 1.2 eV yields too large a contribution to ∆N_eff, incompatible with Planck data (cf. Fig. 3), this value can be regarded as the axion HDM bound that can be reliably extracted within chiral perturbation theory. However, in the relevant mass range m_a ∈ [0.1, 1] eV the decoupling temperature, and consequently the axion HDM bound, cannot be reliably extracted within the chiral approach. Note, finally, that in the presence of sizeable model-dependent axion couplings c^0_{u,d} (as in some axion models [65]), the same decoupling temperature as in the c^0_{u,d} = 0 case is obtained for larger f_a, thus shifting down the mass window relevant for the axion HDM bound. Towards a reliable axion HDM bound. The failure of the chiral approach in the calculation of the axion-pion thermalization rate can be traced back to the fact that in a thermal bath with temperatures of the order of T ∼ 100 MeV the mean energy of pions is ⟨E_π⟩ ≃ 350 MeV, so that π-π scatterings happen at center of mass energies above the validity of the 2-flavour chiral EFT. The latter can be related to the scale of tree-level unitarity violation of π-π scattering, resulting in √s ≳ √(8π) f_π ≃ 460 MeV [66,67]. A possible strategy to extend the theoretical predictions to higher energies is to compute the relevant aπ → ππ amplitudes using lattice QCD simulations. To this end one may employ the standard techniques used to compute weak non-leptonic matrix elements [68,69] and π-π scattering amplitudes as a function of the energy at finite volume [70][71][72][73]. Although this approach has limitations with respect to the maximum attainable center of mass energy, we believe that it can be used to compute the amplitudes up to values of √s ∼ 600 − 900 MeV or higher [74]. We conclude by stressing the importance of obtaining a reliable determination of the axion-pion thermalization rate, not only in view of the extraction of a notable bound in axion physics, but also in order to set definite targets for future CMB probes of the axion-pion coupling, which could represent a 'discovery channel' for the axion.
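As a side check of the thermal-bath estimate invoked above (⟨E⟩ ≡ ρ/n ≃ 350 MeV for pions and ≃ 270 MeV for axions at T ∼ 100 MeV), the mean energy of a relativistic Bose gas can be evaluated numerically. The sketch below assumes equilibrium Bose-Einstein distributions with vanishing chemical potential and uses m_π ≃ 137 MeV; it is an illustration, not part of the original analysis.

```python
import numpy as np
from scipy.integrate import quad

def mean_energy(m, T):
    """<E> = rho/n for one bosonic degree of freedom of mass m at temperature T (MeV)."""
    E = lambda p: np.sqrt(p**2 + m**2)
    f = lambda p: 1.0 / (np.exp(E(p) / T) - 1.0)                 # Bose-Einstein occupation
    n   = quad(lambda p: p**2 * f(p),        0.0, 50.0 * T)[0]   # number density (up to a common prefactor)
    rho = quad(lambda p: p**2 * E(p) * f(p), 0.0, 50.0 * T)[0]   # energy density (same prefactor)
    return rho / n

T = 100.0                        # MeV
print(mean_energy(137.0, T))     # pion, m_pi ~ 137 MeV: close to 350 MeV
print(mean_energy(1e-6, T))      # (effectively) massless axion: ~ 2.70 T ~ 270 MeV
```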
4,298
2021-01-25T00:00:00.000
[ "Physics" ]
Charged particle scattering near the horizon We study Maxwell theory, in the presence of charged scalar sources, near the black hole horizon in a partial wave basis. We derive the gauge field configuration that solves Maxwell equations in the near-horizon region of a Schwarzschild black hole when sourced by a charge density of a localised charged particle. This is the electromagnetic analog of the gravitational Dray-'t Hooft shockwave near the horizon. We explicitly calculate the S-matrix associated with this shockwave in the first quantised $1\rightarrow 1$ formalism. We develop a theory for scalar QED near the horizon using which we compute the electromagnetic eikonal S-matrix from elastic $2\rightarrow 2$ scattering of charged particles exchanging soft photons in the black hole eikonal limit. The resulting ladder resummation agrees perfectly with the result from the first quantised formalism, whereas the field-theoretic formulation allows for a computation of a wider range of amplitudes. As a demonstration, we explicitly compute sub-leading corrections that arise from four-vertices. Introduction Eikonal physics in field theory and gravity arises in the very high energy limit of scattering processes.In field theory, these are 2 → 2 elastic t-channel scattering processes where external momenta are far greater than virtual momenta that are exchanged [1].In perturbative quantum gravity about flat space, these processes involve trans-Planckian scattering where the centre of mass energies of scattering processes satisfy E ≫ M P l .Of course, the impact parameter of scattering in this case must necessarily be the largest length scale in the game to remain within the regime of validity of perturbation theory.Eikonal physics in gravitational theories has far reaching theoretical consequences.More recently, its relevance for the calculation of gravitational observables of interest in the inspiral phase of compact binary mergers in gravitational wave astronomy has gained prominence.We refer the reader to a recent review for further details and references [2]. Shockwave solutions in flat space [3,4,5] are among the first examples that led to eikonal techniques providing for an amplitude-based approach for calculating gravitational observables. 1 Shockwaves have also been found as non-linear perturbations to classical Einstein equations in black hole backgrounds [6] and general curved spacetimes [7].These shockwaves can be thought to have an eikonal representation in the following two ways.As was argued by 't Hooft, one avatar of these solutions is in terms of the modified (with respect to the background before the insertion of the shockwave) geodesics experienced by observers propagating on these backgrounds.When such an observer is taken to be a probe particle, the classical solution manifests itself as a change in the wavefunction of the said probe particle.This calculation is intrinsically in a first quantised formalism and can be thought of as 1 → 1 scattering.This approach was pioneered by 't Hooft both in flat space [8] and in a black hole background [9,10,11]. 
2The second and arguably more powerful avatar is in a field-theoretic setting where the amplitudes arise from elastic 2 → 2 scattering of high energy particles exchanging soft virtual modes.This was established in flat spacetime in [13].In the black hole background, the field-theoretic analog is only a recent development.The eikonal manifestation of the Dray-'t Hooft shockwaves in the Schwarzschild black hole in terms of virtual graviton exchange has been developed in [14,15]. 3In fact, the field-theoretic avatar of eikonal amplitudes in black hole backgrounds has further applications that are difficult to envision in the first quantised 1 → 1 formalism [11,19,20,21]. In [9], 't Hooft argued for an extension of these techniques to including other forces in the Standard Model in the near-horizon region of the black hole.In particular, he argued that the first quantised manifestation of the electromagnetic force near the horizon involves a certain gauge rotation of the gauge field of the charged particle being scattered near the horizon.The primary result of this article is an extension of the techniques developed in [14,15] to include charged particle scattering mediated by near-horizon photons.To this end, we organise our presentation as follows: • We derive the gauge field configuration that solves Maxwell equations in the nearhorizon region of a Schwarzschild black hole when sourced by a charge density of a localised charged particle.This is the electromagnetic shockwave analog of the gravitational Dray-'t Hooft shockwave near the horizon.This is done in Section 2.1. • In Section 2.2, we explicitly calculate the S-matrix associated with this shockwave in the first quantised 1 → 1 formalism.Sections 2.1 and 2.2 may be seen as a detailed derivation of the expectations outlined in [9], and written out in a partial wave basis for direct comparison with the 2 → 2 eikonal resummation. • Finally, we develop a theory for scalar QED near the horizon, following the general formalism of [14,15], in Section 3, using which we compute the electromagnetic eikonal S-matrix from elastic 2 → 2 scattering of charged particles exchanging soft photons in Section 4. The resulting eikonal resummation is identical to the amplitude found in Section 2.2, and can be seen as a proof of principle of our proposed effective description for scattering near the horizon of black holes at small impact parameters.Whereas the field-theoretic formulation allows for a computation of a wider range of amplitudes.As a demonstration of the fact, in Section 4. 4, we compute the one-loop diagrams arising from the four-vertex in the theory which are parametrically sub-leading in comparison to the eikonal amplitude that arises from the three-vertex. Our formalism naturally allows for straightforward extensions to non-Abelian gauge fields since we perform these calculations in a basis of partial waves, owed to [22,23], that has come to much use in the case of scattering of gravitational perturbations in black hole backgrounds.We conclude the paper with further discussion and future directions in Section 5. 
Shockwave of a charged particle in the Schwarzschild background In this section, we review 't Hooft's shockwave analysis in the case of a charged particle [9] propagating in the background of a Schwarzschild black hole.The metric for the background, in four dimensions, can be written as follows: where the functions A (u, v) and r (u, v) are defined as The line element dΩ 2 (2) defines the round metric on the unit two-sphere and R = 2GM is the Schwarzschild radius. Gravitational backreaction and electromagnetic gauge rotation In this section, we review the backreaction of a highly boosted charged shockwave on a probe test particle [6].The gravitational backreaction of the shock leaves an imprint on the gravitational field experienced by the probe.The probe then experiences geodesics that are shifted across the null surface traced out by the shockwave. Backreaction on the gravitational field The stress tensor associated with a localised source carrying momentum p in at a location u 0 and a point on the sphere Ω 0 can be parametrised as Here, the factor A (u, v) r 2 in the first line arises from the square root of the determinant of the longitudinal part of the metric.The angular part is contained in the angular delta function.In the second line, we took a near-horizon limit where A (u, v) ∼ 1 when r ∼ R.An ansatz for the backreacted geometry that solves the Einstein equations with the above source can be taken to be where, again, dΩ 2 (2) is the line element on the unit round two-sphere.Outside of the location of the source shock, a probe particle experiences the background Schwarzschild solution with a vanishing λ 1 (Ω, Ω 0 ).At the location of the source δ (u − u 0 ), however, the Einstein equations with the source (2.3) reduce to [10,11] 4 where with ∆ Ω we denoted the Laplacian on the unit two-sphere.Expanding the above equation in partial waves, we find the following solution: Backreaction on the electromagnetic field In analogy to the gravitational backreaction discussed above, an electromagnetically charged shock leaves an imprint on the electromagnetic field of the probe.The probe then experiences a discontinuity in its electromagnetic field across the null surface traced out by the shockwave. In [9], 't Hooft argued that this field is pure gauge for a highly boosted observer near the horizon.And therefore, he argued that at the location of the shockwave, its field is affected by the source.To see this explicitly, let us consider a localised source of charge q in , namely The ansatz for the electromagnetic field of the probe upon the introduction of the above source can be parametrised as Therefore, as in the gravitational case, we must solve the Maxwell equations at the location of the horizon u = 0: (2.9) The left hand side of this equation can be simplified in the Schwarzschild background to find Explicit computation shows that this expression simplifies to reduce (2.9) to an equation for the undetermined bilocal function λ 2 (Ω, Ω 0 ) in the field configuration (2.8): where we defined Ṽa := ∂ a log r and Ũa := ∂ a log A(r).Notice that the Latin index a runs over the coordinates u and v.To arrive at this simplification, we integrate the equation against an arbitrary test function to handle the delta function in u.In a partial wave basis, we find . (2.12) Therefore, while the electromagnetic field of the probe could be gauge-fixed to vanish in the absence of sources, the backreaction of a source shock results in a gauge rotation of the probe electromagnetic field. 
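Since the explicit partial-wave solutions for λ_1 and λ_2 are not reproduced in this excerpt, the following sketch only illustrates how such kernels can be resummed into an angular profile. The denominators are assumptions: 1/(ℓ² + ℓ + 1) for the gravitational shift (matching the combination λ = ℓ² + ℓ + 1 that appears later) and 1/(ℓ² + ℓ) for the electromagnetic gauge rotation (as suggested by the eikonal result quoted later in Eq. (4.28)); overall normalisations and factors of G and R are dropped.

```python
import numpy as np
from scipy.special import eval_legendre

def kernel(theta, lmax=200, em=False):
    """Sum_l (2l+1)/(4 pi) P_l(cos theta) / denom(l), truncated at lmax.
    denom(l) = l^2 + l (electromagnetic, l >= 1) or l^2 + l + 1 (gravitational)."""
    ls = np.arange(1 if em else 0, lmax + 1)
    denom = ls * (ls + 1) + (0 if em else 1)
    pl = np.array([eval_legendre(l, np.cos(theta)) for l in ls])
    return np.sum((2 * ls + 1) / (4.0 * np.pi) * pl / denom)

for theta in (0.1, 0.5, 1.0, np.pi - 0.1):
    print(theta, kernel(theta), kernel(theta, em=True))
```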
An S-matrix for the wavefunction of a probe charged particle The aim of this section is to calculate the S-matrix for the wavefunction of a charged particle in the presence of a gravitationally backreacting charged shockwave.To this end, let us first begin by writing the wavefunction of a charged particle in the said eigenbasis as ψ (p in , q in ) = ⟨ψ|p in , q in ⟩.In order to label states as such, we may demand the existence of a charge operator which when acted on its eigenstate yields the charge of the state.Just as a superposition of momentum eigenstates yields a state of definite position, a superposition of charge eigenstates will yield a state with definite electric field.As we argued in the previous subsection, for boosted particles backreacting near the horizon of a black hole, this electric field approaches a pure gauge configuration and may be parameterised by a gauge parameter, say, Λ.Therefore, we may label states in the momentum-charge basis by |p, q⟩ or by |y, Λ⟩ in the position-gauge field basis.In terms of momentum and charge eigenstates, the S-matrix is formally given by (p in , q in ; p out , q out ) := ⟨p in , q in |p out , q out ⟩ .This allows us to write ψ (p in , q in ) = ⟨ψ|p in , q in ⟩ = dq out dp out 2π ⟨ψ|p out , q out ⟩⟨p out , q out |p in , q in ⟩ = dq out dp out 2π ⟨ψ|p out , q out ⟩S * (p in , q in ; p out , q out ) = dΛ out dy dq out dp out 2π ψ (y, Λ out ) ⟨y, Λ out |p out , q out ⟩ × S * (p in , q in ; p out , q out ) . (2.13) where we used the completeness relations dq out dp out 2π |p out , q out ⟩⟨p out , q out | = 1 = dΛ out dy|y, Λ out ⟩⟨y, Λ out | , (2.14) and the definition of the scattering matrix.As we argued in the previous section, the gravitational backreaction implies that the position of the outgoing particle is determined by the momentum of the incoming particle.Similarly, the gauge parameter of the outgoing particle is given by the charge of the incoming particle.These relations are5 which we insert in the previous expression for the wavefunction to find The rescaling of integration variables to arrive at the second equality does not change the ranges of integration (which remain from −∞ to ∞ for both the integrals.)This relation must hold for any wavefunction as (2.15) contains invertible basis transformations.Therefore, we can finally write dq ′ in dp ′ in ⟨y, Λ out |p out , q out ⟩S * (p in , q in ; p out , q out ) = δ p ′ in − p in δ q ′ in − q in .(2.17) To invert this equation for the S-matrix, we now need an expression for ⟨y, Λ out |p out , q out ⟩. Writing the positions y in a momentum basis gives us a plane wave.Similarly, we know that the electric field and charge density are conjugate and therefore we may write ⟨y, Λ out |p out , q out ⟩ = exp (i y p out + iΛ out q out ) = exp (i λ 1 p in p out + iλ 2 q in q out ) .(2.18) Plugging this into the previous expression, we see that it is a Fourier transform equation for the scattering matrix which can easily be inverted to find S * (p in , q in ; p out , q out ) = exp (−i λ 1 p in p out − iλ 2 q in q out ) . 
(2.19) Generalisation to many particles and the continuum We would now like to generalise the previous results to the case of many particles in order to then take a continuum limit to describe a distribution of particles on the horizon.Since quantum mechanics does not allow for particle production, we may safely assume that the number of incoming and outgoing particles is equal and large; we call the number of incoming and outgoing particles as N in and N out respectively.We will label the i-th incoming particles by its longitudinal position x i , angular position on the horizon Ω i , and momentum p i in such that i ∈ N in .Similarly, outgoing particles would be labelled by y j , Ω j , p j out and j ∈ N out .Assuming that there is no more than one particle at each angular position on the horizon, in the continuum limit N in = N out → ∞, the positions of particles may be described by distributions x (Ω) and y (Ω).The basis of states may now be written as where we assumed a factorised Hilbert space because all parallel moving particles are independent.The completeness relations are now integrals defined with measure dp out, tot = j dp j out and dy tot = j dy j .The S-matrix may formally be written as S tot := S (p in, tot , q in, tot ; p out, tot , q out, tot ) := ⟨p in, tot , q in, tot |p out, tot , q out, tot ⟩ . (2.21) This S-matrix is dictated by the backreaction relations derived before, which are now given in terms of invertible matrices that are in turn functions of the transverse distance between the in and out particles: such that we can write Since the scattering matrix is a basis transformation, it is necessarily bijective between the in and out Hilbert spaces.This implies that the matrices λ 1 (Ω i , Ω j ) and λ 2 (Ω i , Ω j ) are invertible, which in turn implies that there is no more than one particle entering (leaving) the horizon at any given angle.Moreover, we have the condition that N in = N out .Consequently, we may now repeat our strategy from the single particle case to the multiparticle case.We begin with the wavefunction We now insert the backreaction relations resulting in the measures to write the wavefunction as For every j in the product, we have a sum over all incoming particles labelled by i.In each term of the sum, we rescale the integration variables p in and q in to neutralise the corresponding factors of λ 1 and λ 2 , just as we did in the single particle case, to arrive at In analogy to (2.18), we now write Therefore, as we did in the single particle case, we may invert the previous relation for the scattering matrix to finally find where a sum over all in and out particles is implicit.The continuum limit is now easy to achieve.We first promote the momenta and charges to be distributions as smooth functions of the sphere coordinates and then replace the sum over in and out particles with integrals over the sphere coordinates as where we expanded the expression in partial waves in the second line and substituted for λ 1 and λ 2 using (2.6) and (2.12).Of course, the momentum and charge distributions are also expanded in spherical harmonics, but their partial wave indices have been suppressed. 
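For orientation, the continuum-limit S-matrix obtained above can be summarised schematically as follows. This is a sketch whose sign and normalisation simply follow the single-particle result (2.19); it is not a verbatim reproduction of the suppressed equation (2.33) referred to later, and the partial-wave kernels λ_1, λ_2 are those of (2.6) and (2.12).

```latex
\ln S \;=\; i \int d\Omega\, d\Omega' \,
\Big[ \lambda_1(\Omega,\Omega')\, p_{\mathrm{out}}(\Omega)\, p_{\mathrm{in}}(\Omega')
    + \lambda_2(\Omega,\Omega')\, q_{\mathrm{out}}(\Omega)\, q_{\mathrm{in}}(\Omega') \Big]
\;=\; i \sum_{\ell m}
\Big[ \lambda_1^{\ell}\, p_{\mathrm{out}}^{\ell m}\, p_{\mathrm{in}}^{\ell m}
    + \lambda_2^{\ell}\, q_{\mathrm{out}}^{\ell m}\, q_{\mathrm{in}}^{\ell m} \Big] .
```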
Scalar QED near the horizon In this section, we set up the effective theory of scalar QED near the horizon. Before doing so, it is useful to think about the regime of validity of such an effective theory. As discussed in [14,15], this theory goes beyond semi-classical physics as it takes metric fluctuations into account. This implies that it is a perturbative quantum gravity theory about the black hole background. However, it is important to note that the physics captured by this theory is nonperturbative in nature in comparison to the perturbative quantum gravity theory governed by fluctuations about the global flat vacuum. In the present article, we will restrict our amplitude calculations to only scalar-photon interactions on the black hole background. However, the general theory is meant to include graviton perturbations and should perturbatively be valid to all orders in ℏ, including couplings between the photons and gravitons. Nevertheless, it is important to keep in mind that the regime of phase space where the theory is valid is when the impact parameter of the scattering particles, say b, satisfies L_Planck ≪ b ≲ R, where L_Planck is the Planck length and R is the Schwarzschild radius. Since we are interested in near-horizon scattering, the particles inevitably scatter at distances of the order of the Schwarzschild radius or less. Our theory breaks down if impact parameters are sub-Planckian because our effective theory cannot resolve such short distances. Furthermore, the eikonal diagrams that we will compute are the leading contributions when the centre of mass energy of the scattering satisfies E M_BH ≫ M²_Planck. Finally, if the energy of scattering, E, exceeds the mass of the black hole itself, this regime is analogous to scattering at sub-Planckian distances where the effective low energy description of the theory about a black hole background breaks down. In what follows, we will first derive the effective field theory to set up an appropriate near-horizon limit before explicitly demonstrating the necessity of the restriction on phase space for the validity of our effective theory. Thereafter, we will explicitly compute the diagrams of interest.
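To get a feel for how easily these conditions are met, the following back-of-the-envelope check uses a solar-mass black hole as an example. The numbers are standard constants, rounded, and serve only to illustrate the hierarchy L_Planck ≪ b ≲ R and the eikonal condition E M_BH ≫ M²_Planck.

```python
# Standard constants, rounded to one or two significant figures.
L_PLANCK = 1.6e-35      # Planck length in metres
M_PLANCK = 1.2e19       # Planck mass in GeV
M_SUN    = 1.1e57       # solar mass in GeV
R_S      = 3.0e3        # Schwarzschild radius of a solar-mass black hole in metres

print("L_Planck / R_s       =", L_PLANCK / R_S)               # ~ 5e-39: a huge window L_Planck << b <~ R
print("E threshold M_Pl^2/M =", M_PLANCK**2 / M_SUN, "GeV")   # ~ 1e-19 GeV: trivially satisfied
```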
With those general remarks behind us, let us consider a complex scalar field, minimally coupled to the photon in the gravitational background of the Schwarzschild black hole (2.1): Here, the covariant derivative D enables gauge and gravitational covariance whereas in what follows, gravitational covariance is enabled by ∇.The action of the former on the complex scalar is defined by Partial integration now allows us to write the matter action as6 where with □ we denoted the d'Alembertian in the Schwarzschild background while the scalar current j µ has been defined as The Maxwell action can also be partially integrated to write it in the form A µ O µν A ν : Since the Schwarzschild metric is a vacuum solution to Einstein equations, the quadratic operator in (3.5) therefore reduces to Gauge fixing In what follows, we will exploit the background spherical symmetry of the theory to expand the gauge field into partial waves.As Regge and Wheeler argued [22,25], vector spherical harmonics can be split into even and odd parity modes where Y ℓm (Ω) are the familiar spherical harmonics written in a real basis, lowercase Latin indices represent coordinates along the longitudinal directions and uppercase Latin indices represent coordinates along the transverse sphere.All fields A a and A ± depend only on longitudinal coordinates and carry partial wave indices which we suppressed to avoid clutter of notation.Moreover, the antisymmetric tensor ϵ A B whose indices are raised and lowered by the round metric on the sphere or radius r is given by The Maxwell field has a gauge redundancy that needs to be fixed by a choice of gauge.We choose one 7 where A + (u, v) = 0.This choice may be seen as the adaptation of the "Regge-Wheeler gauge" for gravitational scattering to the electromagnetic case.This results in After making the field redefinitions and plugging the spherical harmonic decomposition (3.10) and (3.11) into the Maxwell action, Eq. (3.5), we find Above, the quadratic operators are given by where we defined λ := ℓ 2 + ℓ + 1 and η ab is the flat metric in two-dimensions with off-diagonal elements given by −1.It is evident that we have traded a single four-dimensional theory in the Schwarzschild background for an infinite tower of decoupled two-dimensional theories, one for each partial wave, with curvature effects encapsulated in potentials.We present the details of this calculation in Appendix A. Near horizon limits and the photon propagator While the four-dimensional theory in curved space can be simplified into decoupled twodimensional theories in flat space with extra potentials as we demonstrated in the previous section, we have not lost any generality.Therefore, it is still an analytically intractable task to invert the quadratic operators (3.14).As was shown in the gravitational case in [14,15], the way forward is a near-horizon approximation where the operators simplify considerably. Shockwave approximation Since the eikonal approximation near the black hole horizon derived its motivation from consideration of shockwave geometries, it is natural to impose a constraint on the gauge field fluctuations to obey the shockwave configuration (2.8) in the near-horizon region.We would like to impose these restrictions in a covariant manner on the longitudinal directions for each partial wave.Considering a near-horizon approximation to linear order (in u, v) implies that x a x a ∼ uv ∼ 0. Additionally, the shockwave approximation can be captured by the condition that x a A a ∼ uA u +vA v ∼ 0. 
This is understood as follows.Consider the past horizon located at v = 0. Thus, one of the terms8 , vA v , is naturally vanishing on the horizon.The second term drops if we choose the shockwave configuration as in (2.8) where the u-component of the gauge field vanishes.An analogous approximation clearly holds on the past horizon. In order to employ this approximation on (3.14), we first note that to linear order (in u, v), r (u, v) = R + O (uv) and therefore, A (u, v) ∼ 1.With these considerations, the quadratic operators (3.14) simplify to In the action, the term containing x b ∂ a can further be simplified as Similarly, the term containing x a ∂ a can also be simplified to (3.17) The issue of the boundary term may potentially be subtle.On first glance, it appears to be safe to assume that the field falls off at the boundaries.However, notions of "far past" and "far future" on the horizon are not well-defined in an evaporating black hole formed by collapse within the effective field theory regime being considered in this paper.Nevertheless, we will blithely ignore this boundary term in this work and leave a careful analysis of the relevance of it in the effective theory for the future.Therefore, the quadratic operators in this approximation scheme can be written in their final form in the following way It is noteworthy that when ℓ = 0, we have that λ = 1 and thus the odd action vanishes.This is consistent with the fact that there are no odd degrees of freedom in the monopole sector. Propagator for the photon: These quadratic operators above, in Eq. (3.18), may be written in Fourier space as follows: In order to find their inverses, we demand that Lorentz invariance along the longitudinal directions near the horizon implies that the most general ansatz for the propagator for the even mode can be written as Explicitly computing ∆ −1 ab (k) ∆ bc (k ′ ) and solving for the unknown functions f i , we find that the propagator for the even mode of the photon is In similar vein, the propagator for the odd mode of the photon can be worked out to be Just as in the case of the graviton, the photon acquires an effective mass near the horizon owing to curvature effects, while the photon in four dimensions remains massless as it must. A leading order near-horizon approximation As it turns out, there is a different approximation that simplifies the quadratic operators considerably.This was also noted in the case of the graviton [15].In this approximation, unlike in the shockwave approximation, the configurations that the photon may acquire are not constrained.Instead, we simply work to leading order in the near-horizon approximation assuming that the gauge field does not blow up on the horizon.This implies that all terms proportional to x a can be dropped, leading us to the following operators in this scheme: Following the calculation in the shockwave approximation, the corresponding propagators in this alternative leading order near-horizon approximation can easily be found to be (3.27) Interaction vertices In this section, we proceed with writing the interaction vertices in a partial wave basis, starting from the three-vertex in the following section and subsequently focussing on the four-vertex. 
Three-point interaction The three-vertex in (3.3) is given by Since the dominant contribution to the high-energy amplitudes in the eikonal sector arise from the longitudinal momenta, we will henceforth ignore the transverse effects.This amounts to dropping the second term in the square brackets above.We then expand all fields in partial wave basis to find To arrive at this expression, we defined the following integral of three spherical harmonics at different ℓ, m's on the two-sphere: In general, interaction terms break the spherical symmetry of the background as can be seen from the presence of the Clebsch-Gordon coefficients in the three-vertex above.While it is certainly possible to perform several calculations with this general vertex, it turns out to be very cumbersome for the resummation of eikonal diagrams.Therefore, it is convenient to choose one scalar leg in each vertex of the diagrams to always be in a fixed partial wave, say ℓ = 0.Such a choice may be thought of as being reasonable given that we do not imagine the spherical symmetry of the large black hole background to be badly destroyed by perturbative scattering processes.This approximation then leads us to a simplification of the above three-vertex where one of the spherical harmonics merely gives an overall factor of Y 00 as follows: where we denoted the scalar mode in the s-wave by ϕ 0 .In order to use the same photon mode that appeared in the propagators of the previous section, we perform the field redefinition in Eq. (3.12) in addition to rescaling the scalar field as ϕ → φ r to find This result is approximate in two ways.One is that we have ignored the mixing of partial waves as described above.On the other hand, we took a near-horizon limit where the field redefinitions of the scalar result in sub-leading terms in 1/R which we have ignored. Four-point interaction Next, we move to the four vertex in (3.3): We now expand all fields in partial waves as before.However, the integral over the two-sphere now involves four spherical harmonics in the even sector.Whereas in the odd sector, two of the four spherical harmonics come with derivatives on them as can be seen from the definition of the odd component of the photon in (3.11).Following our previous choice to ignore partial wave mixing, we now take both scalar modes in the vertex to be in the s-wave. 9Finally, redefining the fields as in the three-vertex case, we find To arrive at this expression, in the odd sector, we made use of the following familiar integral 4 Eikonal S-matrix = shockwave S-matrix Having built up all the tools necessary for computing scattering amplitudes in the theory near the black hole horizon, we first summarise the necessary Feynman rules before moving on to the computations of the amplitudes. Feynman rules near the horizon In this section, we collect all the Feynman rules we have derived in the previous sections (propagator of the even mode, propagator of the odd mode, scalar propagator and finally the two vertices). • The propagators of the even mode of the photon in the shockwave and leading order near-horizon approximations, respectively, are • The propagators of the odd mode of the photon, on the other hand, again in the shockwave and leading order near-horizon approximations, respectively, are given by These were derived in Section 3.2.1 and Section 3.2.2. • The scalar propagator is straightforward to compute and was done in [15].We have: • Next, we have two three-vertices arising from the results of Section 3.3.1.These are drawn in Fig. 
1 below. Âb (k) • Finally, we have two four vertices, one each from the even and odd photon as can be seen from Section 3.3.2.These are drawn in Fig. 2 below. Figure 2: In these vertices, the solid black wiggly lines represent the even mode of the photon as in the three-vertex case, whereas the blue wiggly line refers to the odd mode of the photon.The scalar modes remain as before. Tree level elastic 2 − 2 diagrams Using the Feynman rules from the previous section, we first start with the two dominant tree level diagrams, which are drawn in Fig. 3.In terms of the Mandelstam variables, namely the two diagrams in the left and right panels of Fig. 3 can be evaluated to find The two dominant tree level diagrams built out of the three vertices of the theory in the t-channel. and respectively.Here, we have made extensive use of the fact that the external particles are of course on-shell, namely .9) We are primarily interested in the eikonal limit of scattering in this paper, which in flat space amounts to negligible momentum transfer t → 0.Moreover, we demand the black hole eikonal condition that M BH E = M BH √ s ≫ M 2 P l which is equivalent to demanding that sR 2 ≫ 1 and that s ≫ m 2 .In this black hole eikonal limit, the above tree level amplitudes reduce to There is of course a third tree level diagram which is in the s-channel but it can be checked that this is of O s 0 and therefore heavily sub-leading in the large s limit.The above results were derived in the leading order near-horizon approximation of Section 3.2.2.The analogous result in the shockwave approximation of Section 3.2.1 is given by . (4.12) Loop diagrams and the eikonal ladder Loop diagrams in the black hole eikonal limit are dominated by the so-called ladder diagrams. The one and two loop diagrams are shown in Fig. 4 and Fig. 5, respectively.Following the analysis in the gravitational case [1,13,14,15], a general loop diagram with n virtual photons exchanged can be written as Of course, n exchanged photons implies an n − 1 loop amplitude.This equation is the twodimensional analog of Eq. (3.1) in [1], with electromagnetic vertices replacing the meson ones, and with q = p 1 − p 3 = p 2 − p 4 = 0. To get to the second equality, we assumed the two momenta to be light-like, i.e., p 1 = p 1 u , 0 , p 2 = 0, p 2 v .All the matter propagators to be inserted are contained in the quantity I, which can be derived analogously to [1], resulting in where the quantity χ has been defined as The expression in square brackets can be rewritten in a more convenient form as Now, making use of the identity we arrive at a simple expression for χ: in the shockwave approximation of Section 3.2.1 , − q 2 4πλ in the leading order approximation of Section 3.2.2 . (4.18) Since the resulting eikonal function, conveniently enough, does not depend on spacetime coordinates, we may write The complete resummed perturbatively exact amplitude is therefore given by Inserting (4.18) in (4.20) in the above equation, and recalling that λ = ℓ 2 + ℓ + 1, we find where we also relabelled the external momenta as p in and p out .This amplitude is a result of diagrams of the e − e − kind in Fig. 5. Considering the remaining case of e − e + scattering results in an overall sign in the phase of the exponent.These two cases can be combined into a single formula, resulting in where q in and q out are the asymptotic charges of the in-particle and out-particle, respectively. 
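As an aside, the combinatorial step behind the resummation above (each n-photon ladder contributing (iχ)^n/n!, so that the series exponentiates) can be checked numerically. In the snippet below chi is an arbitrary test value, not the physical eikonal function of Eq. (4.18).

```python
import numpy as np
from math import factorial

def ladder_sum(chi, nmax):
    """Partial sum over n-photon exchanges, sum_{n=1..nmax} (i chi)^n / n!."""
    return sum((1j * chi) ** n / factorial(n) for n in range(1, nmax + 1))

chi = 0.37                                   # arbitrary test value of the eikonal phase
for nmax in (1, 2, 5, 10, 20):
    print(nmax, ladder_sum(chi, nmax))
print("exp(i chi) - 1 =", np.exp(1j * chi) - 1)
```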
For particles, we have that q in/out = −q and that q in/out = q for antiparticles.The relation between the scattering amplitude and the S-matrix is given by For instance, considering particles, the in-and out-states can be defined as ) A similar definition exists for antiparticles.In the free theory, a straightforward calculation leads to10 On the other hand, using Eq.(4.20), we may write the interacting piece as Putting it all together, Eq. (4.23) in the operator notation gives q in q out ℓ 2 + ℓ shockwave approximation , q in q out ℓ 2 + ℓ + 1 leading order approximation . (4.28) This result agrees with the expectation from the first quantised shockwave S-matrix in (2.33) up to a curious factor of 4π, which we address in Appendix B. One-loop diagrams with the four vertex In the second quantised theory, there are further corrections to the eikonal amplitudes that are not visible in the first quantised formalism of 't Hooft [9].In the present case, the first such correction arises from the four vertex.The contributions of the four vertex at tree-level are naturally sub-leading in comparison to those of the three-vertex.This is down to the simple fact that the vertex does not contain momenta.Therefore, in the limit of large energies of centre of mass, the three-vertex naturally dominates.As it turns out, this is also true at loop level as we will demonstrate in this appendix.In what follows, we consider one-loop diagrams of the type drawn in Fig. 6. Using the Feynman rules presented in Section 4.1, we write the amplitude as follows 11 : with m2 : It is easy to see that Eq. (4.29) can be split into four contributions.We can thus write where the following quantities have been defined: In what follows, we will work in coordinates where the near-horizon two-dimensional flat metric is diagonal, instead of the light-cone variants we have used so far.These two sets of coordinates are related by We will employ dimensional regularization and start by considering (4.31), temporarily suppressing the iϵ's for notational convenience, where we shift to d dimensions: Using the familiar Feynman trick the above integral can be written as Shifting the k integral above by k → k + xp and performing a Wick rotation (we substitute where we defined ∆ := p2 x(1 − x) + m2 .Momentum integrals of the kind above can be expressed in terms of gamma functions In our case, with α = 2 and d = 2 + ε, we have where we introduced an auxiliary mass parameter, M .This allows us to consider small-ε expansions of the following dimensionless quantities: where γ E ≈ 0.5772 is the Euler-Mascheroni constant.Using these expressions we obtain12 In principle, various cases must be considered, depending on the values p2 can assume; however, since we are only interested in the limit p → 0 (negligible momentum transfer), we directly expand the integrand and consider the first term of such an expansion 13 .We have: Therefore, in this specific limit the result of the above integral is Before proceeding, let us make another observation about dimensions.When calculating Feynman diagrams in 2 + ε spacetime dimensions, the coupling constants will carry the dimension that is appropriate for the theory in 2 + ε dimensions.For the scalar quantum electrodynamics built here, the dimension of the effective coupling constant turns out to be equal to 1 − ε/2 in mass units.On the other hand, in the 2-dimensional case we have that [µq] = 1 (of course, integrating the sphere out does not change the dimensions of the quantity q).Therefore, in order to 
ensure that dimensional counting remains consistent throughout the calculations, we make again use of the auxiliary parameter M and write the effective coupling constant as M −ε/2 µq.Putting it all together (taking into account the various prefactors), we now write down the final expression for I 1 in 2 + ε spacetime dimensions, in the limit p → 0: Let us now consider the second contribution, namely I 2 .Ignoring the prefactors for a moment, shifting to d dimensions, and writing k ′ = k ′ + m2 − m2 , leads to The first term can be quite easily computed by performing a Wick rotation to use (4.40) with α = 1, where the role of ∆ is now played by m2 .We have: We now substitute d = 2 + ε and expand in powers of ε, keeping track of possible poles at ε = 0.In terms of ε, Eq. (4.50) then becomes Introducing M as before and rearranging, we get We can now safely expand, obtaining which is dimensionally consistent.Concerning the second term in (4.49), it has already been computed.Putting it all together, we obtain the final result for I 2 : Now, looking at the third contribution to the amplitude, Eq. (4.33), we notice that it is equal to Eq. (4.32) upon shifting the momentum k, k → k + p.Therefore, we move on to the fourth contribution, I 4 .Let us first consider the numerator of the integrand.By recalling how k ′ is defined, it can be split as Thus, shifting to 2 + ε spacetime dimensions, the integral in (4.34) can be written as .56) As we can see, I 4 has been split into four contributions.The first gives The second piece of the right-hand side of the above equation has already been calculated before.The first one can be easily computed by Wick rotating and making use of the identity In terms of ε, by setting α = 1 and shifting k → k + p, we can write Moreover, inserting M and expanding in powers of ε, we end up with Therefore, Eq. (4.57) results in Let us now consider the second piece coming from Eq. (4.56).By writing k ′ = k ′ + m2 − m2 , we can write this contribution as The first integral vanishes since the integrand is antisymmetric under k → −k.Concerning the second one, upon shifting k → k +xp and combining the denominator by using Feynman's trick once again, we have The first term, the one proportional to p • k, vanishes.The remaining one can be written as .65) We can now proceed in the same way as before, see below Eq. (4.38).However, we immediately notice that the first term of the expansion (4.46) would be multiplied by p2 , and so we can safely conclude that, in this specific limit, the above integral vanishes.The next contribution in (4.56) can be also shown to be vanishing.We have: The first term is zero because the integrand is antisymmetric under k → −k.In the second term we recognise two expressions we already proved to be zero in the limit of interest.We now finally consider the last contribution in Eq. (4.56), which can be written as follows: The second term in the numerator of the above expression vanishes since, excluding the factor of p2 , it is exactly the same integral as in Eq. (4.65).Concerning the first piece, we have where the definition of ∆ is the same as the one below Eq. (4.39).Splitting the numerator we immediately notice that the above expression gives rise to integrals that vanish as long as p → 0. Thus, the only non-vanishing contribution in Eq. 
(4.56) is the first one.Putting it all together 14 , we now write down the final result for I 4 : Thus, summing over all the contributions, iM even seagull in the limit ε → 0 results in .70) The second one-loop diagram with four-vertices arises from the odd mode of the photon and is drawn in Fig. 7.This diagram evaluates to where we made use of the result obtained in (4.47), in the limit ε → 0. As expected, we see from these results that the four vertex contributions do not scale with the centre of mass energy of the scattering process.They may, nevertheless, be seen as corrections to the eikonal ampltidues (that yield the classical electromagnetic shockwave) that are calculable in this second quantised formalism. Conclusions In this article, we established an equivalence between the 1 → 1 S-matrix in the first quantised formalism arising from electromagnetic shockwaves as classical solutions to Maxwell equations near the black hole horizon and the t-channel elastic 2 → 2 in the black hole eikonal limit.In order to do so, we developed a second quantised theory for electromagnetic fluctuations and charged particle scattering near the black hole horizon.While the 1 → 1 result builds on [9], the 2 → 2 result extends the formalism first developed in [14,15]. The formalism developed in this article is naturally suited for incorporating other forces of the Standard Model.It would be very interesting to see if there are non-Abelian shockwaves near the horizon and if new physics emerges.The second quantised theory allows for a calculation of various quantities of physical interest, including corrections to the electromagnetic potential near the horizon in the spirit of [26] and other classical observables [27]. The gravitational eikonal also led to speculation about a certain antipodal correlation on the bifurcation sphere on the horizon [10,28,29].It would be interesting to find an electromagnetic analog of these proposals.An analog of the relation between the shockwave algebra and the soft algebra near null-infinity found in [30] is also an interesting question to explore near the horizon of a black hole. In [31], all symmetries associated with the near-horizon scattering of gravitational radiation have been derived.The techniques developed there can easily be adapted to the electromagnetic radiation that will emerge from the theory developed here in this paper.Such electromagnetic radiation is expected to result in a near-horizon memory effect that may have observable consequences in the spectral fluctuations of stellar oscillations as discussed in [31]. B A certain factor of 4π As can be seen from comparing (4.21) in the field-theoretical eikonal and (2.33) of the first quantised shockwave calculation, there is a curious discrepancy of a factor of 4π.This factor arises in the field theory calculation from the s-wave component of one of the scalars in the three-point vertex in Section 3.3.1.One may suspect that the difference between the two resides in the sources.Should the charge densities in the two frameworks be the same, we expect the S-matrix elements to agree.In what follows, we show that this is indeed true. In the quantum-mechanics case, the v-component of the current density, in a given partial, can be directly read off from Eq. 
(2.7). On the other hand, on the field-theory side, the v-component of the current follows from (3.4). Expanding in spherical harmonics, inserting the rescaling ϕ = φ/R, and demanding one of the scalars to be in the s-wave, we find (B.3). The main difference between (B.2) and (B.3) is that the current density in quantum field theory is an operator. Thus, a proper comparison warrants taking the expectation value of j_v^{ℓm} in an appropriately defined initial state (B.4), where Φ(p) is a normalized test function localized around a specific momentum, say, p = p_1. The expression above, (B.4), represents a one-particle state in which we have a superposition of two shells at equal momentum, one of them in the s-wave. We now recall the only non-vanishing commutation relations, and to compute the expectation value of j_v^{ℓm} we begin with the first term in (B.2).
Figure 1: Here, the dashed lines refer to the scalar mode in the s-wave whereas the solid black line corresponds to the scalar mode in an arbitrary partial wave. The hats indicate that the modes are in Fourier space. The arrows superimposed on the scalar legs indicate the flow of charge while the external arrows indicate flow of momentum.
Figure 3: The two dominant tree level diagrams built out of the three vertices of the theory in the t-channel.
Figure 4: All leading one-loop diagrams in the black hole eikonal.
Figure 6: One-loop diagram arising from the four vertex involving the even mode.
Figure 7: One-loop four-vertex diagram involving the odd mode of the photon.
10,507
2023-09-11T00:00:00.000
[ "Physics" ]
THE USE OF ARTIFICIAL NEURAL NETWORKS IN SUPPORTING THE ANNUAL TRAINING IN 400 METER HURDLES This paper presents an evaluation of the annual cycle for 400 m hurdles using artificial neural networks. The analysis included 21 Polish national team hurdlers. In planning the annual cycle, 27 variables were used, where 5 variables describe the competitor and 22 variables represent the training loads. In the presented solution, the task of generating training loads for the assumed result were considered. The neural models were evaluated by cross-validation method. The smallest error was obtained for the radial basis function network with nine neurons in the hidden layer. The performed analysis shows that at each phase of training the structure of training loads is different. Introduction The 400 m hurdles race is a complex motor and rhythm (technical) athletics race.In terms of motor preparation, the dominant part is endurance of a specific character (anaerobic), supported by a high level of speed and strength.Given the interdisciplinary nature of race training those means, which combine both the technical and the motor aspects of the race, should be used on a very frequent basis (McFarlane, 2000;Iskra, 2012b). The analysis of training loads in selected disciplines and sports competitions evokes different reactions among scholars and coaches.Some of them claim that the evaluation of an athlete's (or group of athletes) training can be an inspiration to other sportsmen.Others believe that sport is about individual cases where patterns or "average" data have no value (Hiserman, 2008;Iskra, 2012a). In the analysis of training loads in athletics, including the 400 m hurdles, three approaches can be distinguished: -Analysis of individual training programme -analysis of the intensity and content of the training of the best competitors, usually record holders and champions (Olympic, world and continent) (Alejo, 1993;Iskra, Widera, 2001;Winckler, 2009).-Statistical analysis of average data -from a group of competitors, who often train over the long term (Brejzer, Wróblewski, Koźmin, 1984;Iskra, 2001;Guex, 2012).-Mathematical analysis -it is an attempt to use basic science to provide training solutions in competitive sports (Iskra, Ryguła, 2001;Przednowek, Iskra, Cieszkowski, Przednowek, 2015;Wiktorowicz, Przednowek, Lassota, Krzeszowski, 2015).Each of the above methods of training load analysis has its strengths and weaknesses.For example, the use of artificial neural networks allows a multidimensional analysis of training loads to be carried out, by creating a system that not only analyses the training already carried out, but also lets the coach decide on the size of the training loads to be applied at a given phase in the sports training.The system which is built on the basis of knowledge accumulated over many years of coaching will assist decision-making by providing valuable coaching tips (Przednowek, Iskra, Cieszkowski, Przednowek, 2015).It should be noted that such a system will act as a consultant, since a coach's intuition and the human capacity to analyse reality is still unsurpassed by computer systems. The aim of this study is to evaluate the annual preparation cycle for 400 m hurdlers using neural networks.The analyses can be helpful in verifying the views adopted a-priori by coaches, taking into account long-term standards of periodization of training. 
Material and methods The analysis included 21 Polish hurdlers aged 22.25 ±1.96 years participating in competitions from 1989 to 2011.The athletes had a high sport level (the result over 400 m hurdles: 51.26 ±1.24 s).They were the part of the Polish National Athletic Team Association representing Poland at the Olympic Games, World and European Championships in junior, youth and senior age categories.The best result over 400 m hurdles in the examined group was equal to 48.19 s.The collected material allowed for the analysis of 48 annual training plans. In the presented solution the task of generating training loads (GT) for the assumed result were considered.The neural model generates training for the expected result and the parameters of the athlete (Figure 1 and Table 1).Table 1 contains the variables considered and their basic statistics, i.e. the arithmetic mean of x, the minimum value x min , the maximum value x max , standard deviation SD and the coefficient of variation V.This study uses artificial neural networks in the form of the multilayer perceptron (MLP) and the radial basis function (RBF).Multilayer perceptron is the most common type of artificial neural networks (Bishop, 2006).During MLP training, exponential and hyperbolic tangent function were used as the activation functions of hidden neurons.The feature of RBF network is the fact that the hidden neuron performs as a basis function that changes radially around the selected center.All the analysed networks have one hidden layer.For the implementation of neural networks, StatSoft STATISTICA software was used (Statsoft, 2011).The cross-validation method was implemented using Visual Basic language. The models presented in this paper were evaluated by leave-one-out cross-validation (LOOCV) (Arlot, Celisse, 2010).The idea of this method is based on the separation from data set n subsets, where n is the number of all patterns.Each subset is formed by removing from the data set only one pair, which becomes the testing pair.The cross validation error (CVE) is expressed by the formula: where: NRMSE j -the normalized root mean square error for the j-th output, r -the number of outputs, n = 48 -the number of patterns, y ij -the real (measured) value, ŷ -ij -the output value constructed in the i-th step of crossvalidation based on a data set containing no testing pair (x i , y i ), y jmax -the maximal value of the j-th training load, y jmin -the minimal value of the j-th training load. Results with discussion The main aspect of supporting sport training presented in this study is generating training loads for selected parameters of an athlete.In this way, the proposed approach allows, among others, for individualization of a training plan (Bompa, Haff, 1999). 
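Before turning to the comparison of network topologies, it is useful to spell out the cross-validation error defined in the Methods section. A plausible reconstruction from the variable definitions listed there (the exact expression is not reproduced in this excerpt, and results are quoted in percent) is:

```latex
\mathrm{CVE} \;=\; \frac{1}{r}\sum_{j=1}^{r} \mathrm{NRMSE}_j ,
\qquad
\mathrm{NRMSE}_j \;=\;
\frac{\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\left(y_{ij}-\hat{y}_{-ij}\right)^{2}}}
     {y_{j\max}-y_{j\min}} .
```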
Taking into consideration various topologies of networks, an optimal multi-layer perceptron was calculated.This model has 5 neurons in the hidden layer and hyperbolic tangent activation function.Compared to the best model with an exponential function it is superior because it generates the error smaller by 0.2%.The best perceptron generates the annual training plan with the error CVE = 19.95%(Figure 2).The optimal RBF network has 5 hidden neurons and CVE = 19.34%.This result is better than that obtained for the MLP networks.Therefore, as the optimal method, the RBF network with five hidden neurons was used.The optimal model was analysed to determine the errors generated for different outputs, which allowed to identify which training means are generated with the smallest error (Table 2).The detailed analysis showed that y 4 , y 9 and y 14 Vol. 17, No. 1/2017 The Use of Artificial Neural Networks in Supporting the Annual Training in 400 Meter Hurdles (speed endurance, strength endurance II, upper body strength) are generated with the highest accuracy (NRMSE j at the level of 14-15%), whereas the output representing technical exercises in march (y 17 ) has the largest error (30%).The chosen neural network was tested by generating training plans for a hypothetical athlete (age: 21 years, body height: 185 cm, weight 75 kg).In every case the result was expected to improve by one second as a result of accepting the output from 56 to 49 seconds.Training loads forming speed (y 1 -y 3 ) are very similar in nature (Figure 3).At the beginning of an athlete's career, the highest content of these loads can be noted, and with their increasing competitive level, a decrease in loads (until the competitor achieves 51 s), can be observed.While obtaining the best results, the rates of training loads influencing speed go up with increasing sports level. The 400 m hurdles race is still a sprint distance so the need for speed training is the priority, but requires a variety of assessments in terms of a year-round and long-term cycle of preparation.For "high-speed" hurdlers short races can be an important part of training, but in the group of other hurdlers ("endurance" and "rhythm" type) maximum speed exercises are only additional to the basic training (Iskra, 2012b;Balsalobre-Fernández, Tejero-González, del Campo-Vecino, Alonso-Curiel, 2013).Analyses show a characteristic tendency to reduce the importance of speed training in the middle phase of the development of a sports career with a return to speed exercises for the highest performance (y 1 -y 3 ).This fact can be explained, on the one hand, by a particular emphasis on anaerobic exercise during the period of "growing up" to athletic championship level, and, on the other hand, by shortening distance of training at the final, highest phase.Such tendencies can be observed in the analysis of the content of training of the best Polish hurdlers who have been competing for many years (Iskra, Widera, 2001). 
In the group of endurance training loads (y4-y7), two trends of changes depending on the level of training were observed (Figure 3). The content of exercises that form speed endurance (y4) and aerobic endurance (y7) increases when the athlete obtains average results (up to 52-51 s), while at a later phase, when his/her form is improving, the value of these loads is consistently declining. Other training loads related to strength have tendencies similar to the speed loads. The values of these loads (y5, y6) at the beginning gradually decrease until the competitor achieves 52 s. The values start rising with the increasing competitive level of the athlete.

The whole essence of the running training of a 400 m hurdler, supported by research in the physiology of physical effort, lies in the statement above. The 400 m hurdles distance is a typical anaerobic effort, for which the value of lactate amounts to 20 mmol/l (Ward-Smith, 1997; Gupta, Goswami, Mukhopadhyay, 1999; Zouhal et al., 2010). Therefore, the competitors who are best in terms of motor skills use specific training means at the prime time of their career. Including "alternative" sets of exercises of reduced intensity in this period (the so-called "tempo endurance" system) can be explained by the difficult conditions of Polish winter training, which encourages coaches to reduce the speed of races in favor of training intensity (Iskra, Przednowek, 2016). Changes in the content of strength and speed exercises of the lower and upper limbs (y15, y16) have similar variability (Figure 4). At first the changes are very small and the level of the loads is relatively low. The content of these loads starts going up only when the competitor achieves results of 52 s. At the championship level the loads stabilize at a high level.

Improvement of strength capacity in athletics speed events is now one of the trends in the search for opportunities to improve results. It is mentioned by the classics of the theory of sports training (Bompa, Haff, 1999; Sozański, Sadowski, Czerwinski, 2015) and by the best coaches of this sport (Smith, 2005; Husbands, 2013). The results of the analyses in the group of the best Polish hurdlers do not entirely confirm this trend. Only the basic strength training exercises of the lower limbs remain, from the "average" level onwards, at the same high level (y10-y11). Attention should be

Figure 1. Block diagram of the model for generating training loads.
Figure 2. CVE error for artificial neural networks.
Figure 3. Training loads y1-y14 generated for results from 55 s to 48 s.
Table 1. The variables and their basic statistics.
Table 2. Errors for the outputs of the RBF network.
2,633.4
2017-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Trifurcated lined ducts: A comprehensive study on noise reduction strategies The present research is centered on analyzing and modeling the scattering characteristics of a trifurcated waveguide that includes impedance discontinuities. A mode-matching method, grounded in projecting the solution onto orthogonal basis functions, is devised for the investigation. The impedance disparities at the interfaces are represented in normal velocity modes, which, when combined with pressure modes, result in a linear algebraic system. This system is subsequently truncated and inverted for numerical experimentation. The convergence of scattering amplitudes is assured by reconstructing matching conditions and adhering to conservation laws. The computational results indicate that optimizing attenuation behavior is achievable through manipulating variation bounding properties and impedance discontinuities. Introduction The theory of noise reduction has become a dynamic area of research due to large-scale industrial advancements.This research is crucial for various applications, including, aircraft and vehicle engines, turbofan engines, ducts, and pipes.Guided wave systems, known for their efficiency in carrying acoustic energy by preventing lateral diffusion, resist the decay of sound waves according to the inverse square law. Numerous scientists and engineers have addressed noise reduction by considering different material properties of ducts and diverse geometrical designs.Rawlins [1] discussed noise reduction through a duct with a thin acoustical absorbent lining on parallel plates partitioning.According to Demir and Buyukaksoy [2], fixing the walls of a conduit with an acoustically lining material can fundamentally improve its acoustic performance.Morse [3] investigated the attenuation of sound in boundless shut pipes using acoustically absorbing liners.Subsequent analyses confirmed that fixing the properties of on the walls of waveguide enhances sound absorption. The study of waveguides based on different mathematical formulations has been extensively discussed.Koch [4] introduced the Wiener-Hopf solution to specify the problem of the radiation of sound from a semi-infinite 2D channel with walls fixed with a responding sound retention substance.Jones [5] evaluated the far field and near-zone solutions for the issue of wave recasting of the differential system into a linear algebraic system that can be solved through inversion. Precisely, the underlying problem provides a step further in generalizing the study of planar trifurcated lined ducts.The following sections make up the article: The basic waveguide structure is defined in section 2. In section 3, the mode-matching method is used to estimate the scattered field potentials in each region.The energy flux distribution in various regions is obtained in part 4 by numerically solving truncated infinite linear systems.In Section 5, numerical results are presented graphically.Section 6 summarizes the investigations. 
Formulation of boundary value problem

The study focuses on the propagation of acoustic waves in a waveguide with partitions and impedance discontinuities. In a rectangular coordinate system (x̄, ȳ), the waveguide can be divided into four regions, defined as follows:

• Region R1: x̄ < 0, |ȳ| < ā

Note that the bars on the variables denote the dimensional setting of those variables. The regions mentioned are filled with a compressible fluid with density ρ and sound speed c. In particular, region R1 is bounded by rigid walls with an infinite impedance Z̄1, while the surfaces of regions R2, R3 and R4 have finite impedances Z̄2, Z̄3 and Z̄4, respectively. Assuming a harmonic time dependence of e^(−iωt), where ω = ck is the radian frequency and k is the wave number, the surface impedance Z̄j can be expressed in terms of the time-independent fluid potential φ̄j, as mentioned in reference [31], where j = 1, 2, 3, 4 specifies the regions Rj. The waveguide's time-independent fluid potential φ̄ is governed by the Helmholtz equation [31]. The governing boundary value problem is non-dimensionalized using the length scale k^(−1) and the time scale ω^(−1), such that x = k x̄ and y = k ȳ (the schematic configuration is illustrated in Fig 1). The non-dimensional problem incorporates Helmholtz's equation with a unit wave number. The dimensionless forms of the boundary conditions include, at y = b, the specific impedance condition in dimensionless form.

Solution methodology: Mode-matching approach

In order to understand the scattering properties of the given structure, we utilize the mode-matching technique to solve the governing boundary value problem. This technique involves obtaining the eigenfunction expansions of the duct regions and applying the matching interface conditions to convert the differential systems into linear algebraic systems. These linear algebraic systems are then truncated and inverted. In the subsequent subsections, we explore the specifics of the eigenfunction expansions and the properties of the eigenfunctions in more detail.

0.1 In region R1: {x < 0, |y| < a}

The acoustic region denoted R1 is enclosed by acoustically rigid boundaries described by Eq (4). Within this region, the propagation of sound satisfies the Helmholtz equation as stated in Eq (3). To solve Eq (3) under the boundary condition (4), we employ the separation of variables method. This method allows us to decompose the solution into a series of eigenfunctions, so the resulting solution takes the form of an eigenfunction expansion. In region R1, the eigenfunction is represented as Y_{1n}(y) = cos[τ_n(y + a)]. Here, ϑ_n denotes the wave number of the n-th mode and is defined as ϑ_n = √(1 − τ_n²). The eigenvalues τ_n satisfy the corresponding dispersion relation. In Eq (9), the first term represents the incident wave, while the second term represents the reflected field. The coefficients A_n in the second term are the unknown reflected mode coefficients. Additionally, the eigenfunctions Y_{1n}(y) fulfil the orthogonality relation (11), in which ε_m = 2 for m = 0 and 1 otherwise, and δ_mn is the Kronecker delta.
0.2 In regions R2: {x < 0, −b < y < −a} and R3: {x < 0, a < y < b}

The upper boundary of region R2 is defined by the rigid wall condition at y = −a, as stated in Eq (4). On the other hand, the lower boundary at y = −b is governed by the impedance wall condition specified in Eq (5). Similarly, the lower boundary of region R3 is determined by the rigid wall condition at y = a, as given in Eq (4). The upper boundary, on the other hand, is subject to the impedance wall condition at y = b, as described in Eq (6). By solving Eq (3) while considering these boundary conditions for regions R2 and R3, the eigenfunction expansions can be expressed in the corresponding formulations. In this context, the eigenfunctions for regions R2 and R3 are given by Y_{2n}(y) = cos[λ_n(y + a)] and Y_{3n}(y) = cos[λ_n(y − a)], respectively. The wave number associated with the n-th mode can be written as ϰ_n = √(1 − λ_n²), where λ_n denotes the n-th eigenvalue. For the case of mixed boundary conditions at y = ±b, the eigenvalues for n = 0, 1, 2, … are determined as the roots of the corresponding dispersion relation. In their respective domains, the eigenfunctions Y_{2n}(y) and Y_{3n}(y) are orthogonal, and this orthogonality is expressed through the corresponding relations.

The region R4 is bounded by impedance-type conditions at y = ±h, as specified in Eq (6). Within this region, the eigenfunction expansion for the propagation of sound waves can be obtained by solving Eq (3) while considering the boundary conditions given in Eq (7). This allows for a comprehensive understanding of the behaviour of sound waves within this region in terms of the corresponding expansion. Here Y_{4n}(y) = r sin[γ_n(y + h)] + s γ_n cos[γ_n(y + h)] expresses the eigenfunction of a mode having mode wave number β_n = √(1 − γ_n²), in which γ_n are the eigenvalues. These eigenvalues are the roots of the corresponding dispersion relation. Moreover, the eigenfunctions Y_{4n}(y) satisfy an orthogonality relation of their own. Note that with impedance conditions, the roots of dispersion relations (14) and (18) must be found numerically and must be arranged in accordance with the properties given in [14]; an illustrative numerical sketch of this root-finding step is given below. Furthermore, the coefficients {A_n, B_n, C_n, D_n} are unknowns. To find these unknowns, we use matching interface conditions.
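The dispersion relations (14) and (18) themselves are not reproduced legibly in this text, so the following Python sketch uses an assumed Robin-type relation f(λ) = λ sin(λ d) − ζ cos(λ d) = 0 with a real, purely reactive ζ and region width d, purely to illustrate the sign-change scan plus Brent refinement that the numerical root-finding step entails. For complex impedances the roots become complex and a complex root search (e.g. Newton iteration) would replace brentq.

# Illustrative root finder for a real-valued dispersion relation of the assumed form
# f(lam) = lam*sin(lam*d) - zeta*cos(lam*d) = 0  (not the exact relations (14)/(18)).
import numpy as np
from scipy.optimize import brentq

def dispersion(lam, d=1.0, zeta=2.0):
    return lam * np.sin(lam * d) - zeta * np.cos(lam * d)

def real_roots(n_roots=6, d=1.0, zeta=2.0, scan_step=1e-3, lam_max=50.0):
    # Scan a fine grid for sign changes, then refine each bracket with Brent's method.
    grid = np.arange(scan_step, lam_max, scan_step)
    vals = dispersion(grid, d, zeta)
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0.0:
            roots.append(brentq(dispersion, a, b, args=(d, zeta)))
        if len(roots) >= n_roots:
            break
    return np.array(roots)

lam = real_roots()
beta = np.emath.sqrt(1.0 - lam ** 2)   # axial wavenumbers; real values are cut-on modes
print("first eigenvalues:", np.round(lam, 4))
print("axial wavenumbers:", np.round(beta, 4))

The ordering of the computed roots matters because the truncated expansions keep only the first N modes; arranging them by increasing eigenvalue, as in the sketch, mirrors the arrangement requirement noted above.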
Matching interface conditions

Regarding the conditions governing interface matching, our focus is on ensuring the matching of pressure and normal velocity modes at the interface. To accurately capture the scattering response in the presence of impedance variations and geometric discontinuities, it is crucial to carefully consider the interface conditions. The literature provides several formulations of such conditions, as demonstrated by [14]. Our specific attention is directed towards the aperture located at the interface, precisely at x = 0, where achieving consistency in pressure values across the different regions is of paramount importance. Simultaneously, we integrate the impedance discontinuities into the matching conditions for the velocity modes. We adopt the approach of maintaining the continuity of pressure modes, normalized with respect to the eigenfunctions of regions R1, R2 and R3, thereby providing a comprehensive framework for an accurate representation of the system.

By incorporating the eigenfunction expansions (9), (12), (13) and (17) into Eqs (21)-(23), and subsequently solving the resultant equations with the assistance of the orthogonality relations outlined in (11) and (15), we derive, after certain mathematical rearrangements, the explicit expressions for the scattering amplitudes of regions R1 and R3. At the interface, we utilize the matching condition of the normal velocities to determine the unknown coefficients for region R4. Normalizing these conditions with respect to the eigenfunctions of region R4, and utilizing the fluid potentials provided in (9), (12), (13) and (17) in Eq (30) together with the orthogonality relation (19), one can obtain the explicit formulation of the scattering amplitudes of region R4 through some mathematical rearrangements. By applying Eqs (24), (26) and (28) into Eq (31), we obtain a linear algebraic system with unknowns D_m, where m = 0, 1, 2, …. To determine these unknowns, the system is truncated and then inverted. Once the values of D_m are determined, the quantities A_n, B_n and C_n can easily be calculated using Eqs (24), (26) and (28). It is important to note that the system for rigid discontinuities at x = 0 can be derived from Eq (31) by setting μ = 0.

Energy flux

Energy flux, or power, forms the basis for quantifying the distribution of energy across the various parts of the guiding structure, enabling a comprehensive understanding of its scattering behaviour. The formulas for the radiated energy flux, reflection and transmission can be determined by applying the definition provided in [14], in which the superscript asterisk (*) denotes the complex conjugate. By substituting the incident field e^(ix) from Eq (9) into Eq (32), we can determine the incident power as P_inc = a. Similarly, by substituting the reflected and transmitted fields into Eq (32), we can calculate the power or energy flux in the duct sections Rj. Note that the negative sign in (33)-(35) indicates that these powers propagate in the negative direction. The energy conservation law can be established by equating the powers propagating in the negative and positive directions. For analytical purposes, P_inc is adjusted to unity, which is achieved by dividing (37) by a; writing E_j = −P_j/a for j = 1, 2, 3 and E_4 = P_4/a, this gives E_1 + E_2 + E_3 + E_4 = 1, which is Eq (38), recognized as the conserved power identity rooted in the principle of energy conservation. This identity implies that if one unit of power is input into the system, it will be equivalent to the combined sum of the reflected and transmitted powers.
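The flux expressions (32)-(35) are likewise not shown legibly here; assuming the generic normalized-mode form P = Σ_n Re(β_n)|amplitude_n|², the short Python sketch below shows how the region powers E_j and the conserved power identity of Eq (38) can be checked once the truncated amplitudes are available. The amplitudes and wavenumbers used in the sketch are random stand-ins, so the printed residual is a placeholder; with amplitudes obtained by inverting Eq (31) it should vanish as the truncation converges.

# Illustrative power bookkeeping for a truncated mode-matching solution, assuming
# P = sum_n Re(beta_n) |amp_n|^2 for normalized modes (not the exact Eqs (32)-(35)).
import numpy as np

def modal_power(amps, beta):
    # Only cut-on modes (real axial wavenumber) contribute to the energy flux.
    amps = np.asarray(amps, dtype=complex)
    beta = np.asarray(beta, dtype=complex)
    return float(np.sum(beta.real * np.abs(amps) ** 2))

def conservation_residual(p1, p2, p3, p4, p_inc=1.0):
    # |1 - (E1 + E2 + E3 + E4)|; should vanish once the truncated system (31) has converged.
    return abs(1.0 - (p1 + p2 + p3 + p4) / p_inc)

# Random stand-ins: in the real computation the amplitudes A_n, B_n, C_n, D_n come from
# inverting Eq (31) and beta_n = sqrt(1 - eigenvalue**2) for the corresponding region.
rng = np.random.default_rng(1)
N = 20
beta = np.emath.sqrt(1.0 - (0.3 * np.arange(N)) ** 2)
powers = [modal_power(rng.normal(size=N) + 1j * rng.normal(size=N), beta) for _ in range(4)]
print("cut-on modes in this region:", int(np.sum(beta.imag == 0)))
print("conservation residual (placeholder data):", conservation_residual(*powers))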
Numerical results and discussions

To solve the linear algebraic system presented in Eq (31), a numerical approach is employed by setting m = n = 0, 1, 2, …. This enables us to obtain the truncated amplitudes. In the numerical computations, a fluid density of ρ = 1.2043 kg m⁻³ and a sound speed of c = 343 m s⁻¹ are used. Before delving into the scattering properties of the structure, it is important to evaluate the accuracy of the truncated solution. This can be achieved by numerically reconstructing the matching conditions and the conservation law using the truncated form of the solution. To assess the accuracy, specific values are assigned to the structural and impedance parameters and the truncation number is set to N = 120. Furthermore, a frequency of f = 230 Hz is chosen as the operating frequency. The reconstructed quantities agree with Eqs (21)-(23) and (30), indicating that the truncated amplitudes have adequately converged.

Note that, to analyse how changes in the geometric proportions of the duct structure affect the transmission and absorption of sound waves, a specific parametric setting is used. Parameter settings such as b̄ = 3ā were chosen to simulate realistic scenarios encountered in duct structures, where variations in dimensions are common. This parameter setting aligns with previous studies in the literature, for instance [16], enabling comparisons and ensuring consistency in methodology. Furthermore, our aim is not only to model specific real-world scenarios but also to explore a range of plausible configurations to elucidate general trends and behaviours. Therefore, while the chosen parameters may not correspond directly to a particular practical problem, they serve the purpose of elucidating the physical behaviour of the system under varied conditions, contributing to a deeper understanding of acoustic phenomena in duct structures. With these settings, the abrupt variations observed in the scattering energies diminish as the duct dimensions are reduced. This reduction is attributed to a decrease in the number of propagating modes supported by the system, leading to a smoother transition between transmission and reflection characteristics. The altered resonance spectrum, with cut-on modes now appearing at four different frequencies, underscores the sensitivity of the system to geometric parameters and highlights opportunities for optimizing its performance in practical applications.
From the results depicted in Fig 7, it can be seen that by changing the surface conditions the reflected powers are changed. Therefore, one may see that by changing the surface conditions the scattering behaviour can be optimized. Moreover, by introducing step discontinuities the reflected powers are varied; the reason is the participation of a greater number of propagating modes when step discontinuities are involved, which leads to variations in the reflected powers and showcases the potential for optimizing the scattering behaviour through such changes. Specifically, for Figs 8(A) and 9(A) with step discontinuities, the numbers of cut-on modes are 6 and 3, respectively, corresponding to values of ā around 0.387052, 0.782517, 0.875073, 1.41358, 1.91002, 2.03623 m and 0.387052, 1.3042, 1.9605 m. In contrast, with a planar setting (Figs 8(B) and 9(B)) and h̄ = b̄, the cut-on modes are 3 and 1, respectively. These cut-on modes are the primary contributors to the abrupt variations in the scattering powers.

The aforementioned findings provide significant insight into the intricate interplay between geometric parameters, surface conditions and scattering characteristics within trifurcated structures. Our investigation, as depicted in Fig 5, reveals that varying the height of R1 while maintaining fixed impedance parameters leads to a notable reduction in cut-on modes from seven to three as b̄ decreases from 3ā to 1.5ā. This reduction in cut-on modes correlates with diminished abrupt variations in the scattering graphs, emphasizing the sensitivity of the system to geometric changes and suggesting avenues for optimization. Transitioning to Figs 6-9, where the surface conditions are systematically altered, unveils the profound impact on scattering behaviour. Distinct surface conditions yield diverse reflected powers, underscoring the potential for optimizing the scattering behaviour through strategic adjustments. Furthermore, the introduction of step discontinuities induces significant variations in the reflected powers due to the involvement of a greater number of propagating modes. Specifically, the comparison between settings with and without step discontinuities highlights the pronounced influence on cut-on modes and the subsequent abrupt variations in the scattering powers. Overall, these findings offer valuable insight into optimizing the scattering behaviour of trifurcated structures, holding promise for practical applications across engineering and physics domains.

The surfaces within the duct regions can be designed to absorb sound effectively by selecting specific parametric settings for the impedance conditions. Relevant settings from the existing literature, as indicated in [16], offer valuable insight into these parametric configurations. In this article, the chosen mixed parameters correspond to [16] with p = r = μ = 1 and q = s = κ = iξ/k, where ξ defines the specific impedance as ξ = z + iη. Here, z and η represent the resistive and reactive components of the surface material.
Concluding remarks

The current investigation into the wave scattering analysis of a planar trifurcated lined duct, considering diverse boundary properties, has yielded significant insights. The study extensively explored a spectrum of mixed boundary conditions, successfully addressing the governing problem. Through the integration of eigenfunctions, orthogonality relations and matching conditions, the initially intricate differential system was transformed into a numerically solvable linear algebraic system after truncation.

The wave scattering behaviour exhibited by the trifurcated lined duct under varying conditions, including alterations in boundary properties, duct size and impedance discontinuity, was studied. The results, derived in a generalized manner, effectively recaptured existing findings for the trifurcated lined duct as a distinct case. Specifically, we successfully recovered previously established results for varied boundary properties (soft, rigid, impedance) in scenarios devoid of structural discontinuity. Moreover, the analysis extended to the computation and scrutiny of the radiated energy across all regions of the duct. A notable revelation emerged, indicating that lined ducts exhibit reduced noise generation compared to their hard or soft counterparts. This finding underscores the practical advantage of employing lined duct configurations in situations where noise reduction is of paramount importance.

Additionally, the successful conservation of energy flux across the diverse duct regions served as a validation of our algebraic approach. It confirmed the coherent propagation of cut-on duct modes within the different sections, adding credibility to our methodology. Importantly, our study demonstrated that the simplified solution accurately recovered the pressure and normal velocity modes, emphasizing the versatility and accuracy of our approach.

Figs 2 and 3 provide visual representations of the matching conditions for the pressures and normal velocities at the interface x = 0 with respect to y. Fig 2 demonstrates that the real and imaginary parts of the pressure, denoted φ4(0, y), precisely align with φ1(0, y) within the domain |y| < a, with φ2(0, y) within the domain −b < y < −a, and with φ3(0, y) within the domain a < y < b. Similarly, Fig 3 illustrates that the real and imaginary components of the normal velocity, φ4x(0, y), perfectly match at the aperture with φ1x(0, y) in the domain |y| < a, with φ2x(0, y) in the domain −b < y < −a, and with φ3x(0, y) in the domain a < y < b. Additionally, along the impedance discontinuities, the real and imaginary components of the normal velocity, φ4x(0, y), coincide with −φ4 for κ = μ = 1 within the ranges −h < y < −b and b < y < h. This matching aligns with the assumptions made in Eqs (21)-(23) and (30).

Figs 4 and 5 illustrate the energy propagation in the waveguide with respect to frequency (f) and with respect to the variation in the symmetric height discontinuity (a = k ā), respectively. To obtain the results depicted in Fig 4, we fix the height of region R1 as ā = 0.24 m, while the impedance parameters are assumed to be r = s = p = q = 1. The dimensions of the other regions are defined

Fig 9. Reflected energies against a = k ā for rigid, soft and impedance conditions; (A) with step discontinuities (h̄ = 5ā), (B) without discontinuities (h̄ = b̄), where b̄ = 1.5ā.
To illustrate the impact of varying absorbent surfaces on transmission, Figs 10 and 11 have been generated. Results are presented against frequency and height for z = 0 and different values of η = −0.8, 0, 1, 2. In Fig 10, the transmitted powers are plotted against frequency with b̄ = 3ā at ā = 0.24 m, while Fig 11 depicts the transmitted powers against a = k ā with b̄ = 3ā at a frequency of f = 230 Hz. The system is truncated with N = 50 terms. Notably, for Figs 10(A) and 11(A), h̄ = 5ā was considered, while for Figs 10(B) and 11(B), h̄ = b̄ was used. Cut-on modes occurred at (178, 214, 355, 715) Hz and (178, 355, 715) Hz for Fig 10(A) and 10(B), respectively. For Fig 11(A) and 11(B), cut-on modes occurred at ā ≈ 0.782517 m, 0.942386 m, 1.56503 m and at k ā ≈ 0.782517 m, 1.56503 m, respectively. It is evident that the presence of step discontinuities leads to more propagating cut-on modes compared to the results without discontinuities. Fig 11(B) closely aligns with the findings of Rawlins [16] obtained using the Wiener-Hopf technique, supporting the mode-matching solution computed in the presence of step discontinuities.

Furthermore, the examination of Figs 10 and 11 provides crucial physical insight into the effect of varying absorbent surfaces on the transmission characteristics within duct structures. By systematically manipulating the reactive component (η) while holding the resistive component constant (z = 0), these figures offer valuable insight into how different surface configurations influence the transmission of sound waves. Notably, Fig 10 presents the transmitted powers plotted against frequency, offering a comprehensive view of how the surface parameters impact transmission across a spectrum of frequencies, with b̄ = 3ā and ā = 0.24 m. In contrast, Fig 11 illustrates the transmitted powers against height (a = k ā) at a fixed frequency of f = 230 Hz, revealing the spatial variations in transmission resulting from diverse surface configurations. The identification of cut-on modes at specific frequencies in both figures further enriches our understanding. For Fig 10(A) and 10(B), where h̄ = 5ā and h̄ = b̄ respectively, cut-on modes manifest at distinct frequencies, highlighting how variations in the surface conditions influence the resonant behaviour of the system. Similarly, in Fig 11(A) and 11(B), where the height is varied with b̄ = 3ā, cut-on modes occur at different heights, demonstrating the spatial dependence of resonance within the duct structure. These findings yield critical insights for designing and optimizing sound absorption systems, empowering engineers to tailor surface parameters to achieve the desired transmission characteristics in practical applications.
5,363.4
2024-07-26T00:00:00.000
[ "Engineering", "Physics" ]
Game-Play: Effects of Online Gamified and Game-Based Learning on Dispositions, Abilities and Behaviours of Primary Learners . This meta-level review of the literature set-out to examine the impacts of game-based/ gamified learning on dispositions, cognitive abilities and behaviours of learners aged 6-12, and to identify the factors that contributed to these impacts. A total of seventeen relevant studies were identified that had been implemented across a range of disciplinary areas in the period under review (2005-2015). The results indicate that online gamified/ games-based learning has been shown to increase the level of academic performance of learners, and improve cognitive competencies in problem-solving, multiplicative reasoning ability, self-efficacy and critical thinking. Learners’ intrinsic motivation has been shown to have been enhanced through motivational factors (confidence, satisfaction and enjoyment) promoted within the online game design, and this had a direct effect on increasing engagement and improving academic achievement. Introduction The availability of new platforms and online technologies for the delivery of games has become an important factor in 21st century learning.According to Prensky [1], our education system needs to respond to the needs of our "digital natives" who, having grown up with the proliferation and permeation of technologies in their everyday lives, are said to be more technologically savvy and to process information (or learn) differently to previous generations.However, others such as Helsper and Eynon [2] contest the evidence base for such generalization that forms the basis of the concept of digital natives and instead have provided evidence that digital natives are perhaps better identified across a broader range of factors which move beyond the narrow generational concept, including the degree of immersion in the technology (i.e.breadth of online activities) and experience in using technology, as well as sociodemographic factors (gender dimensions and educational levels).Helsper and Eynon [2] further call for more evidenced based research exploring how younger and older generations learn through and engage with technology, so that our education system can respond appropriately to needs of our learners.In order to keep up with everchanging information and communication technologies and to prosper within dynamic social, cultural and economic environments of the 21st century, learners need to develop and/ or enhance skills such as critical thinking, teamwork, digital literacy, problem solving, collaboration and cooperation.According to Prensky [3] computer games provide a new way to motivate learners.Gaming is particularly important in supporting learners to interact, communicate and collaborate with each other, and thus can help facilitate types of learning required for 21st century living. This review of the literature set-out to identify factors that impact on dispositions, behaviours and/ or cognitive abilities of primary level learners' (6-12 years old), in game-based/ gamified learning environments.For the purpose of this study, gamebased learning refers to the use of stand-alone online games within learning contexts and the gamification of learning refers to features of gaming, such as competition and rewards systems, being used within online learning contexts. 
Methodology The electronic databases searched in this review were Science direct, IEEE (Institute of Electrical and Electronics Engineers) and Springer.The search terms included: "online game-based learning" "online digital game-based learning", "gamified learning" and "gamification", along with more specific terms such as "primary education" and "younger learner or children".The following search results were obtained: Science Direct (590 papers), IEEE (107 papers) and Springer (395 papers). Studies were selected with the following inclusion criteria; 1) have been published from 2005 to 2015, 2) have focused on primary education aged 6-12 3) have been written in English language.It also should be noted that only those games employing an Internet or Wifi connection were considered 'online', and, consequently, papers presenting research on topics, such as digital based-learning without use of internet connection, were not included in this review. Given the permeation of broadband/ wifi and mobile technologies, this study took a particular focus on games being used for learning purposes that were accessible online through commonly used mobile devices with wifi connection (laptop and/ or tablet technologies) and desktop computers with an Internet connection within classrooms or at home.Console games were thus excluded, as they are not commonly integrated within classrooms, as console games are often beyond budgetary constraints of many school systems -we do recognize that there are some notable exceptions to this such as Learning and Teaching Scotland's exploration and promotion of consoles in Scottish classrooms from 2006 onwards.Furthermore, there was emerging anecdotal evidence at the time of this review of games traditionally accessed via consoles being translated for use via PCs and/ or mobile devices (the launch of Microsoft Educational Version of Minecraft in 2016 which is accessible by PC and Mobile phone is evidence of one such recent translation) Furthermore, this study was particularly interested in online games being used by the 6 to 12 year old age group.As such, the research terms used narrowed the search field to include games employing an Internet or Wifi connection for 6-12 year olds, while excluding console games.In total over 1000 papers were isolated using the initial search terms, and following a manual reading and review of each of these, only 17 studies were found to have met the criteria as outlined above for this meta-analysis, as explained below To enable the implementation of the selection criteria, and given the diversity of online games, two steps were taken in the selection process.First, during abstract screening, records reporting the same study were clustered together.Second, during full-text vetting, the references were reviewed, which resulted in the delivery of several papers relevant for the review but not covered in the databases.The literature review uncovered seventeen papers reporting on studies exploring the impact of online game-play.The questions that guided the review of each of the selected papers were as follows: • What does this study reveal about dispositions, cognitive abilities and/ or behaviours of learners within online game-based/ gamified learning environments?• What factors contribute to changes in learners' dispositions, cognitive abilities and/ or behaviours within online game-based/ gamified learning environments?For the purpose of the review, Dispositions were understood as learners' attitudes or feelings towards engagement within the 
disciplinary area; Abilities were understood as development of learners' cognitive abilities within the disciplinary area; and, Behaviours were understood as the nature, types and degree of engagement in the disciplinary area within and beyond the classroom. The findings from the selected papers were initially coded according to whether an increase, decrease or no change was recorded in the dispositions, abilities and/ or behaviours of learners.The age-group, disciplinary area and size of study were also recorded.In addition, factors that contributed to changes in dispositions, abilities and/ or behaviours of learners were noted.The outcomes from the coding process were then cross-tabulated to ascertain common outcomes and corresponding themes, and these were then presented within the frame of discussion under the headings of 'Dispositions', 'Abilities' and 'Behaviours' in game-based/ gamified learning contexts. Nature of studies under review Seventeen studies were identified that focused on online game-based learning/ gamification across a range of disciplinary areas.Overall, the studies adopted research approaches that utilised solely quantitative or mixed methods, with solely qualitative approaches being less common.Furthermore, studies measuring cognitive abilities (academic achievement) tended to use pre-testing and post-testing of abilities, whereas the studies of behaviours and dispositions tended to use direct observation as their research tool of choice. Mathematics and Science were the most common disciplinary areas for the studies under review.Five studies implemented online games in Mathematics course [4,9,14,17] and another seven studies were in the disciplinary area of Science [6,7,8,11,12,13,15].The remainder focused on other disciplinary areas, such as Geography [5,10], English [5] and Literacy skills-Reading [18]. Most of the studies sought to explore the effect of engagement in online games on learners' dispositions.A number of studies explored the impact of engagement in online games on the learners' cognitive abilities such as problem solving, multiplicative reasoning ability and academic achievement.While a few studies examined the effect of engagement in online games on learners' behaviour, this review has found that only three studies have implemented gamification elements.Two of these studies integrate gamification elements within 3D virtual worlds [11,20].The study by Su et al. implemented gamification elements in a mobile learning environment [6]. Impact of online game-based learning /gamification on dispositions Eleven studies focused on the effect of online games on learners' dispositions and attitudes [4,8,10,11,12,13, 1], 18,20].A variety of types of online gaming products, including 3D immersed games [10,11] and mini games [8,9] which support social interaction (cooperation, collaboration) and competition have been shown to positively enhanced primary learners' dispositions toward learning across a range of different disciplinary areas [8,9,10,11].Game-based learning has been shown to promote an increase in positive attitudes towards disciplinary areas [4,5,12,13,14,19] to make the learning experience more enjoyable [14] and to promote engagement beyond the classroom [19].This can lead to learners exhibiting independent behaviours (becoming more self-directed, autonomous) and a positive shift in their interest towards the process of learning, as opposed to focusing on academic grades [10]. 
Game-based learning supports this through the inclusion of motivational gaming features such as fantasy and relevance [12], collaboration and team-based type activities [5], and appropriately designed aesthetic interfaces with attractive illustrations, for example [5]. Immersive gaming environments that support 3D virtual engagement among multiple players were further shown by Tüzün et al. [10] to increase motivation through the use of exploration, interaction and collaboration, and through the activation of player presence. The act of constructing games was also shown to increase positive attitudes and motivational levels [20], particularly if it involved experimentation and the sharing of ideas - learners liked 'messing around with scripts' [20]. Ronimus et al. [18] found that the presence of reward systems had an initial significant positive effect on concentration levels. Su and Cheng [6] found that leaderboards, badges and missions increase learner engagement. Kuo [12] found that game and non-game learning environments should be more fun to motivate learners and keep them on task.

There were some cautionary notes about the use of game-based learning in some of these studies. In a study by Ronimus et al. [18], when the novelty of using a reward system within games wore off, the learners' engagement decreased. Furthermore, Ke & Grabowski [4] found that cooperative game-playing encouraged more positive dispositions towards the disciplinary area of maths than competitive game-playing. Also, gaming environments without a sufficient degree of learning challenge - such as those involving just the gathering of information - can be perceived as boring, as shown in a study by Tüzün et al. [10], and can thus decrease levels of motivation, engagement or interest in the disciplinary area.

Impact of online game-based learning/gamification on cognitive abilities

Nine of the studies specifically explored the impact of online games on learners' academic achievement [4,5,6,7,8,9,10,12,13,15]. Some of these and other studies further examined the effect of online game-based/gamified learning on specific abilities such as problem-solving skills, multiplicative reasoning ability and self-efficacy [4,5,6,7,8,10,11,12,13,15]. In terms of academic achievement, the results of these studies found that game-based/gamified learning in general led to improvement in learners' academic achievement. This improvement comes from learners' enjoyment, involvement and satisfaction within the online gaming process [4,5,6,8,9,13].

Online game-based/gamified learning has been shown to enable improvements in learning performance, knowledge and/or skill-sets through the use of constructivist platforms and communication interfaces that promote collaboration, increase players' enjoyment and/or value ownership and personal expression [17]. In terms of academic achievement, Hwang et al. [8] found that the competition and challenge of the online game resulted in an increase in learners' interest, with fuller involvement, concentration and enjoyment, and improved performance. In a study by Filsecker et al. [11], players interacting with each other through the 3D virtual space were shown to have a greater understanding of key concepts and increased interest in solving problems. Sung & Hwang [13] found that collaborative computer games enhance learners' confidence and self-efficacy.
Participants in a study of a mathematics educational game by Costu, Aydın & Filiz [14] highlighted the need for enjoyment in educational games, but also cautioned about the need to keep a balance between entertainment and knowledge dimensions of game-based learning environments, recommending that the game be well-connected to the lesson learning outcomes.They further recommended that a competition-type use of the game would likely increase the level of engagement in the game. A study exploring the potential of mobile gamified learning by Su & Cheng [6] highlighted the positive correlation between intrinsic motivation and learning achievement.In this case, the use of gamification features such as leaderboard, badges and mission resulted in an increase in learners' interest and satisfaction, and thus, positively impacted on their intrinsic motivation, which in turn is reflected in an increase in their academic performance. A study by Vos et al. [20] concluded that game-makers demonstrated more cognitive competence (in deep learning strategies) than those who just played existing games.This indicates that the process of game creation is of more value from a cognitive perspective than that of game-playing. A study into 3-D immersive learning environments by Tüzün et al. [10] showed significant learning gains among participants but highlighted the importance of the promotion of cooperative game play (with peer support) as opposed to competitive game-play (with no peer support).They concluded that co-operative game-play led to positive increases in both the participants' dispositions and academic performance, whereas competitive game-play only resulted in improved academic performance.A study by Ronimus et al. [18] on web-based game learning reported improvement in academic performance but cautioned that activities which are perceived by learners to have too high a degree of learning challenge can result in decreased interest in that activity. A study of web-based geography game by Dourda et al. [5] showed considerable improvement in content knowledge and highlighted the need for cooperation with peers in achieving the learning outcomes.Dourda et al. [5] also found that teamwork, communication and collaboration inherent in game-playing enhance learners' satisfaction and enjoyment.A number of cognitive strategies were displayed, including abilities in skimming, scanning and translating web texts.Furthermore, it was noted that face-to-face compensation strategies (including gestures and facial expressions) were used in to overcome limitations in understanding the English language (their second-language) within the web-based content.A study by Garcia & Pacheco [17] further found that online game-based learning can improve understanding of key concepts and improve cognitive skills, through the use of collaborative elements in problem solving and by helping learners to build their own knowledge, and by providing direct contact with real problems. In contrast, according to Kuo [12] learners' academic achievement can be improved by game and non-game learning environments.He found no significant difference for learning outcomes between online game-based learning and non-game based learning.The author concludes that design for both learning environment should be more fun to motivate learners. 
Impact of online game-based learning/gamification on behaviours Eight of the studies explore the impact of online games on learners' behaviour [5,9,10,11,12,17,18,19].The results generally are positive with respect to learner behaviours.For instance, Kuo [12] found that learners visit online game environment after school time where no homework was required.Furthermore, the learners enjoy teamwork in the collaborative learning environment [5,10].Sandberg et al. [19] reported that learners spend more time within the online learning environment.Ronimus [18] found the level of learning challenge increased playing time.Online games provided direct contact with real problems and provide better opportunities for promoting the participation by children [17].On the other hand, Filsecker et al. [11] noted that gamification elements such as external rewards did not show any effect on learners' levels of engagement and playing time [18].From a review of these studies, it is clear that online game-based learning/ gamified learning can have positive impacts on learners' behavior, specifically in terms of increasing the level of engagement in learning activities within and beyond the classroom [5,9,10,12,17,18,19], but can also include features that negatively impact on engagement [11,17]. The level of engagement of participants can be increased in online gaming through raising intrinsic motivation [8,12], through inclusion of activities incorporating competition [14], through the inclusion of group work [5], and through self-directed activities that promote ownership and agency [5]. Participants in a study by Tüzün et al. [10] were so motivated by engagement in game-based learning that they had to be ejected on occasion from the computer room, and furthermore expressed the desire to play the game outside school time.In a study by Sandberg et al. [19], participants were motivated to engage in game-based learning in their own time by the use of smart-phone technologies platform.A study by Garcia & Pacheco [17] showed that the interactive platform provided direct contact with real problems and provides better opportunities for promoting participation of learners. In a study by Hwang et al. [8], participants were found to be highly engaged in game-related activities that promoted intrinsic motivation.The level of intrinsic motivation was examined through flow-experiences.In the flow experience, participants fully engage with and are fully focused on the activity, and thus become intrinsically motivated to remain engaged in the activity (Csikszentmihalyi, 1975, as cited by Hwang et al.,) [8].The degree of learning challenge, control and enjoyment are core factors that can impact on the flow-experience, and thus, the levels of intrinsic motivation.In the study by Hwang et al. [8], the flow experience in the experimental group was shown to have significantly improved through the inclusion of 'instant interactions', 'explicit objectives' and 'dynamic challenges' within the game.A study by Ronimus et al. [18] showed high level of [learning] challenge increases playing time and concentration. A study by Costu, Aydın & Filiz [14] recommended the inclusion of competition features to increase levels of engagement by participants within game-based learning contexts. In other studies, it was noted that the process of gaming promotes team-work and collaboration [5], and can result in increased desire to engage in learning at home [10]. 
Some studies have highlighted how particular features of online gaming/ gamified learning environments can reduce levels of engagement.A study by Garcia & Pacheco [17] found that engagement can be negatively impacted by differing levels of abilities among group of participants (particularly when gaming occurs in the absence of supervision/ outside of class-time).Moreover, Garcia & Pacheco [17] found that differing levels of computer skills resulted in participants preferring to collaborate face-to-face rather than within virtual contexts.Furthermore, a study by Ronimus et al. [18] found that while the presence of a reward can initially increase engagement, the effects of rewards as a motivating factor for engagement decreases over-time.Furthermore, this study found that shortcomings in the design of control, goal setting and feedback features in an online game may have contributed to lower participation levels within the online game.Finally, a study by Filsecker & Hickey [11] found no link between external rewards and disciplinary engagement. Conclusion This state-of-the-art review has examined the impacts of online game-based learning /gamified learning on learners' dispositions, cognitive abilities and behaviours, as well as the factors that have been found to contribute to changes in learners' dispositions, cognitive abilities and/ or behaviours.The results demonstrate that online game-based learning/ gamified learning has mainly positive effects on learners' dispositions, cognitive abilities and behaviours. In the current review, the factors contributing to the successful implementation of game-based learning/ gamified learning in enhancing young learners' dispositions include: motivational gaming features, social interaction (collaboration), immersive gaming environments, enjoyment elements, and some gamification elements (such as: feedback, leaderboards, and badges).Furthermore, the application of constructivist principles in game-design, inclusion of opportunities for social interaction (collaborative, cooperative) and integration of competitive features within game design have been shown to have positive impacts on learners' cognitive abilities and academic performance. However, studies have also highlighted factors reducing levels of learners' engagement and motivation, and thus impact negatively on learners' dispositions, within games-based/ gamified learning contexts.These include games with low levels of challenge and, conversely, games that promote competition between players, which have been shown to result in decreased levels of motivation, engagement or interest in disciplinary area; thus, impacting learners' dispositions.Interestingly, studies of games-based learning with too high a degree of challenge have also been shown to decrease learner interest and to negatively impact on their cognitive abilities and academic performance.Finally, game-designers need to be mindful that gamified reward system (whether attempting to motivate intrinsically or extrinsically) can positively, or negatively, impact on motivation levels of learners.
4,657.8
2017-07-03T00:00:00.000
[ "Computer Science", "Education" ]
GPGPU VIRTUALIZATION TECHNIQUES: A COMPARATIVE SURVEY

The Graphics Processing Units (GPU) are being adopted in many High Performance Computing (HPC) facilities because of their massively parallel architecture and extraordinary computing power, which make it possible to accelerate many general-purpose implementations from different domains. A general-purpose GPU (GPGPU) is a GPU that performs computations traditionally handled by the central processing unit (CPU) to accelerate applications, along with handling traditional computations for graphics rendering. However, GPUs have some limitations, such as increased acquisition costs, larger space requirements and more powerful energy supplies, and their utilization is usually low for most workloads. This results in the need for GPU virtualization to maximize the use of an acquired GPU by sharing it between virtual machines for optimal use, to reduce power utilization and to minimize costs. This study comparatively reviews the recent GPU virtualization techniques, including API remoting, para-, full and hardware-based virtualization, targeted at general-purpose acceleration.

1. Introduction & Background. Since the start of the 21st century, HPC programmers and researchers have embraced a new computing model combining two architectures: (i) multi-core processors with powerful and general-purpose cores, and (ii) many-core application accelerators. The dominant example of an accelerator is the GPU, with a large number of processing elements/cores, which can boost the performance of HPC applications using a higher-level parallel processing paradigm [3]. Because of the high computational cost of current compute-intensive implementations, GPUs are considered an efficient means of accelerating the execution of such applications by utilizing the parallel programming paradigm. Present-day GPUs are excellent at rendering graphics, and their highly parallel architecture gives them an edge over traditional CPUs, making them more efficient for a variety of different compute-intensive algorithms [4]. High-end computing units come with GPUs that include a very large number of small computing units (cores) supported with high bandwidth to their private embedded memory [1]. HPC has become a must-have technology for the most demanding applications in scientific fields (high-energy physics, computer sciences, weather, climate, computational chemistry, medical, bio-informatics and genomics), engineering (computational fluid dynamics, energy and aerospace), crypto, security, economy (market simulations, basket analysis and predictive analysis), creative arts and design (compute-intensive image processing, very large 3D rendering and motion creative) and graphics acceleration [2].

Traditionally, general-purpose computations such as additions, subtractions, multiplications, divisions, shifts, matrix and other similar operations are performed by the central processing unit (CPU), but the growth of GPU programming languages such as the compute unified device architecture (CUDA), OpenACC [5], OpenGL and OpenCL, together with the high computation power of the GPU [5], has made it the preferred choice of HPC programmers.

In GPGPU-accelerated implementations, the performance is usually boosted by dividing the application into a compute-intensive portion and the rest, with the compute-intensive portion off-loaded to the GPU for parallel execution [1]. To carry out this operation, programmers have to define which portion of the application will be executed by the CPU and which functions (or kernels) will be executed by the GPGPU [1].
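As a minimal illustration of this offloading model (not taken from the surveyed works), the sketch below uses Numba's CUDA JIT from Python, which is an assumed toolchain and requires an NVIDIA GPU with the numba package installed; the host code prepares the arrays and launches the kernel, while the guarded kernel body is what runs in parallel on the GPGPU cores.

# Minimal sketch of the offloading model: host (CPU) code prepares data and
# launches a kernel that runs in parallel on the GPU. Numba's CUDA JIT is used
# purely for illustration; requires an NVIDIA GPU.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)          # global thread index
    if i < out.size:          # guard against out-of-range threads
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)   # kernel launch

assert np.allclose(out, 2.0 * x + y)

The same pattern appears in CUDA C, OpenCL and the other programming models discussed later: the host partitions the problem into a grid of thread blocks, and each lightweight thread executes the kernel body on its own element of the data.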
Fig. 1. Architecture of the system with a CPU and a discrete GPU.

Figure 1 shows the architecture of a heterogeneous system with a CPU and a discrete GPGPU. A GPGPU has many streaming multiprocessors (SM). Each SM has 32 processing cores (the number can vary), an L1 cache and a low-latency shared memory. Every computing core has its own local registers, an integer arithmetic logic unit (ALU), a floating point unit (FPU), and several special function units (SFUs), which execute a special set of instructions, e.g. special math and scientific operations. The memory management unit (MMU) of a GPU offers virtual memory spaces for GPGPU-accelerated implementations. The MMU resolves a GPU memory address to the physical address by using its own page table for the application. This ensures that an application can only access its own address space.

A discrete GPU is connected to the physical (host) machine through the PCI Express interface. The interaction of the GPU and CPU is done through memory-mapped I/O (MMIO). The CPU can access the GPU's registers and memory via the MMIO interface. The application's required GPU operations are submitted into a buffer associated with the application's command submission channel (a hardware unit in the GPU), which is accessible to the CPU through MMIO. A direct memory access (DMA) engine can be used to transfer data between the host and GPU memories [3]. Unlike traditional multi-core processors, GPUs exhibit a fundamentally different approach to executing parallel applications [6]. GPUs are throughput-oriented, with numerous simple processing cores and a high-bandwidth memory architecture. This architecture enables maximizing the throughput of applications with a high level of concurrent processing, which are split into a large number of threads executing at different points in the allocated program space. It also allows the hardware scheduler to hide latency: when some threads are waiting on long-latency operations (arithmetic or memory access) to complete, other queued threads are scheduled [7].

Despite having increasingly more cores, the multi-core processor architecture still focuses on decreasing latency in sequential applications by means of state-of-the-art control logic and larger cache memories. In contrast, GPUs, using the parallelism paradigm, speed up the execution of applications with a multitude of simple cores and a very high memory bandwidth structure. Heterogeneous systems, using multi-core CPUs and GPUs together, can boost the performance of HPC applications, offering better control and parallelism. Traditional CPUs generally use control logic and larger caches to effectively handle conditional branches, deadlocks, pipelining stalls, and poor data locality, while present-day GPGPUs can process intensive workloads, have larger Static Random Access Memory (SRAM) based local memories, and have some extra functionalities compared with conventional processors, but mainly focus on ensuring a higher level of parallel processing and memory bandwidth [3].
Cloud infrastructure can utilize heterogeneous systems to lower the overall operational cost of acquisition with advantage of better performance and power efficiency [8,9].A cloud platform allows users to run compute-intensive implementations over heterogeneous compute-nodes without acquiring large-scale clusters, which also save them from maintenance hassle and huge-costs.Additionally, heterogeneous nodes gives an edge over homogeneous nodes, with the freedom to have compute-intensive programs computed by either traditional CPUs or highly-parallel GPUs depending on the level of parallelism required.Collectively, these attractive advantages are encouraging cloud platform service providers to add GPUs to the cloud instances and offer heterogeneous programming facilities to the users to achieve higher performance as needed [10,11,12].System virtualization is a model that allows to concurrently run diverse operating systems on a single physical machine, with the goal to attain optimal resource sharing of physical machine in private and shared computing environments, popular example is cloud computing.The virtualization software is called hypervisor (a.k.a.virtual machine monitor (VMM)), that virtualizes physical machine resources i.e.CPU, memory, and I/O resources.A Virtual Machine (VM) use virtualized resources and have a guest OS installed.The guest operating system runs on VM similarly as the VM were a physical machine.Some well-known hypervisors being used in production environments for private and shared cloud are VMware ESXi [13], Kernal-based virtual machine (KVM) [14], Hyper-V [15], and Xen [16].Fig. 2. shows the virtualization of a host machine through hypervisor. System resources virtualization can be classified in three main categories: (i) full-virtualization (ii) para-virtualization (iii) hardware based virtualizations.In full-virtualization setup, the guest OS doesn't know that it's a guest OS, hence directly issues the resources related calls to underlying hardware including CPU, I/O and memories.The hypervisor translates those privileged calls into binary format for the guest OS.The benefit of fullvirtualization is that guest OS doesn't need to be modified in order to run in virtual environment, but may have performance bottlenecks due to direct interaction with host machine hardware.In para-virtualization approach, the guest OS is modified for system calls and knows that it's a guest OS, hence issues hypercalls (to communicate with hypervisor) when needs to interact with hardware resources.As compared to full-virtualization, para-virtualization has lower overheads and better performance.The downside of this approach is that it requires changes in guest operating system, which can be hectic as drivers and OS updates are released quite often.Hardware-based virtualizations needs to be capable to run privileged system calls from guest OS.Generally, two modes for virtualization are there: (i) guest (ii) root, where guest mode is for OS and root mode for hypervisor.Upon a privileged call from OS, the control is transferred to hypervisor running as root mode, that process the instruction and control is returned to the guest.The mode changes known as VM Exit (guest to root) and VM Entry (root to guest).This approach doesn't need to modify the guest OS and offers better performance as compared to fullvirtualization [3]. 
Resources virtualization plays key role in Cloud computing technology.Virtualization programs enable creation of virtual environment as VM, which gives freedom of operating system choice, ensures optimal use of resources at reduced cost.Virtualized systems are always supported with techniques to multiplex available physical machine resources.Full virtualization solutions are already available for common physical resources including CPUs, memories, and peripheral devices, since there has been huge amount of research in this area since early 1960s.[17].On the other hand, GPU virtualization is a relatively new field of research and a challenging task.The main barrier to GPU virtualization is the implementations of GPU drivers, which are not available open for customizations because of intellectual property protections.Moreover, there are no GPU design standards are regulated, and GPGPU manufacturers have been providing variety of architectures, which supports different degrees of virtualization.Because of such reasons, usual virtualization methods are not directly applicable for GPU virtualization [3]. The GPU programming models/APIs include CUDA [18], OpenGL, OpenCL and Direct3D, and most virtualization methods target these models/APIs for GPU virtualization.CUDA [18] is NVIDIA owned programming model for parallel computations, which enables programmers to utilize CUDA-enabled GPUs for general-purpose parallel computations, ultimately converting graphics cards into GPGPU.OpenGL [19] is an application programming interface (API) library, which is used to access GPU hardware for graphics acceleration.Its special use includes video games, images processing & rendering, and visualization needs for diverse applications.OpenGL offers hardware independent API, which can be used to interact with variety of graphics cards, despite of vendor system software.OpenCL [20] is a library for parallel computations which works over heterogeneous environments.OpenCL programming language offers syntax similar to C language known as OpenCL C to code computing kernels, and set of APIs to launch kernels into an OpenCL device (e.g.GPU) and facilitates data transfer management between device and host memories.The main difference in OpenCL and CUDA is that CUDA is only supported by NVIDIA GPUs, while OpenCL can execute applications over variety of accelerators (e.g.GPU, regardless of vendor) and CPUs.Direct3D [21] is a Microsoft owned graphics API for Windows.This API can be used to accelerate 3D graphics for performance hungry applications e.g.games.It provides general abstraction layer to interact with GPU hardware, and offers advanced graphics features such as buffering and anti-aliasing.The GPU applications can be classified into two categories a) conventional graphicaccelerations b) general purpose computing.Graphic acceleration includes rendering of 2D, 3D graphics and simulations, while general-purpose computing involves parallel computations.The rapidly increasing demand of GPU for general-purpose computing requires the availability of GPU instances in cloud infrastructure services.This study comparatively reviews the recent available GPU virtualization techniques & strategies for generalpurpose computing i.e.GPGPU Virtualization techniques.Gpgpu Virtualization Techniques: This section briefly describes the recent GPU virtualization techniques.In terms of implementation approach, the GPU virtualization can be classified into three classes: (i) API Remoting (ii) Para &Full (iii) Hardware-based virtualization 
methods. Api Remoting Gpgpu Virtualization: Application Programming Interface (API) is a method to interact with remote providers for different types of requests fulfillments.In GPU virtualization perspective, API Remoting is a higher level frontend approach where a GPGPU related request is forwarded to remote (or host) server equipped with GPGPU, that process the request and send the results back to VM.Without source-code of GPU drivers, it's difficult to virtualize them at driver level, thus API remoting allows virtualization at libraries level.Fig. 3. illustrates the process of interaction between VM and remote server for GPGPU related requests, the steps shows that a request is initiated by GPU application, is intercepted by wrapper of programming model on VM to frontend layer, that is transferred to remote host OS, that dispatch the request to programming model handler, that further is transferred to GPU driver to be executed on GPU, and in reverse process the response is returned to GPU application.The API remoting approach is allows to write portable GPU based applications and is easier to setup and integrate [22].The advantages of this approach includes; easy setup, highly-portable applications, dynamic linking and wide range of supported GPU models and architectures.Generally this virtualization layer runs in user-space, in result, such library calls may bypass hypervisor.The restraints are to keep wrapper libraries updated in order to comply with vendor updates [24], and difficult to have fundamental virtualization features including faulttolerance and live migration [25].Since launch of NVIDIA's CUDA, GPUs are being used widely as GPGPU to accelerate applications as CUDA allows programmers to exploit GPU's power for general-purpose computations, which increased the need of GPGPU's virtualization to be used in cloud environment and sharing between VMs [3]. GViM [26] enables virtualization at CUDA API level which can be implemented on Xen-based hypervisor.This allows a guest machine to use GPU attached to the host, an Interpose Library to access CUDA from guest OS, a frontend driver to communicate with the host backend driver.GViM utilize memory allocated by Xenstore [27] instead of using network transfer for data-intensive application, and concentrates on efficient sharing of heaps of data between host and guest VM.Furthermore, it use shared memory concept to share address spaces of application on guest VM & host GPU, which eliminates the need to copy data between user and kernel spaces, that ultimately boost the processing performance.vCUDA [28] allows to virtualize GPU in the Xen hypervisor.It offers a CUDA library wrapper and virtual GPU (vGPU) to the VM and vCUDA library at host level.The guest OS use CUDA wrapper to generate API call to the host as a client, and wrapper library creates vGPUs to give full view of host GPUs.vCUDA stub at host level works as server to execute API requests for GPU access.It use XML-RPC [29] channel for efficient communicate between host vCUDA and guest VM.In recent release [30], vCUDA is deployed in KVM using VMRPC [31] with VMCHANNEL [32].Due to XML-RPC network transmission overhead, VMRPC use shared-memory space between VM and host OS.To minimize latency between virtual machines, VMCHANNEL allows an asynchronous message system in KVM.vCUDA utilize Lazy RPC for GPU calls that can be delayed, and process them in a batch to boost performance by reducing context switching overheads. 
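As a rough illustration of the API remoting pattern shared by GViM, vCUDA and similar systems, the sketch below shows a guest-side wrapper that intercepts a CUDA-style allocation call, serializes it, and forwards it to a backend on the GPU-equipped host over a socket. The opcodes, message layout and helper functions are hypothetical and are not taken from any of the cited systems; connection setup and error handling are omitted.

```cpp
#include <cstdint>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical wire format for one forwarded API call (not from any cited system).
struct RpcRequest { uint32_t opcode; uint64_t arg0; };
struct RpcReply   { int32_t  status; uint64_t result; };
enum : uint32_t { OP_MEM_ALLOC = 1, OP_MEMCPY_H2D = 2, OP_LAUNCH = 3 };

static int g_backend_fd = -1;   // socket to the backend driver on the GPU host (setup omitted)

// Send one request to the backend and wait for the reply (blocking, simplified).
static RpcReply remote_call(const RpcRequest& req) {
    RpcReply rep{};
    send(g_backend_fd, &req, sizeof(req), 0);
    recv(g_backend_fd, &rep, sizeof(rep), MSG_WAITALL);
    return rep;
}

// Guest-side wrapper with a CUDA-like signature: the guest has no GPU driver,
// so the "allocation" is really performed by the backend on the host GPU and
// only an opaque handle travels back to the guest application.
int wrapped_mem_alloc(uint64_t* device_ptr, size_t size) {
    RpcRequest req{OP_MEM_ALLOC, static_cast<uint64_t>(size)};
    RpcReply rep = remote_call(req);
    if (rep.status == 0) *device_ptr = rep.result;  // handle is only meaningful on the backend
    return rep.status;
}
```

The systems below differ mainly in what replaces the plain socket here: Xenstore-backed shared memory in GViM, XML-RPC or VMRPC channels in vCUDA, and TCP/IP to a remote cluster node in rCUDA.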
rCUDA [34] targets remote GPGPU acceleration, having GPU-related computations performed on a remote host. It implements virtual CUDA-compiled layers to execute GPU-related calls on the remote host without involving the hypervisor. More precisely, rCUDA offers a wrapper library for the CUDA API on the guest VM to generate and send GPU-related calls to the remote GPU host. The guest VM and GPU host use the TCP/IP protocol to interact with each other. rCUDA performance may be limited by network overload when a large number of VMs access the remote GPU server concurrently. To mitigate network issues, rCUDA offers an application-level interaction mechanism [33]. Recent improvements in rCUDA provide support for multithreaded applications, lower the overhead of using a GPU within a local cluster, and allow an application running on a VM to utilize all the available GPUs within the cluster, which truly maximizes the performance of HPC applications [34]. GVirtuS [36] offers support for many hypervisors, e.g. VMware, Xen and KVM, by establishing a transparent layer between VM and host. It follows the usual approach of a CUDA wrapper with frontend and backend drivers. The VM has the frontend driver, while the backend driver operates in the host machine; both drivers communicate through a hypervisor-specific communicator. Since GPU virtualization efficiency depends on the communication between guest VM and GPU host, to gain maximum performance GVirtuS utilizes the communication channel offered by each hypervisor, i.e. VMSocket is used for KVM, XenLoop [37] for Xen and VMCI [38] for VMware. Later, GVirtuS introduced the VMShm communicator [39] to improve communication using a shared memory paradigm, which reserves a POSIX shared memory block on the host to allow memory mapping for communication between the VM and the backend host. GVirtuS also supports remote GPU acceleration through a TCP/IP-based channel. It now also supports x86 and ARM CPUs over cloud clusters and appliances as well as local workstations [40]. The GVM [41] model estimates the performance of GPU-based implementations and verifies it with its own virtualization framework consisting of (i) user process APIs, (ii) virtual shared memory and (iii) a GPU Virtualization Manager (GVM). The guest OS is modified to include APIs, which programmers use to make calls to virtual GPUs. The GVM operates in the host, where it initializes the vGPUs, receives guest requests and passes them to the discrete GPU. POSIX shared memory is used for communication between the host and guest operating systems.
Pegasus [42] advances GViM [26]; it operates at the hypervisor level and shares accelerators among multiple VMs. It introduced the concept of an accelerator virtual CPU (aVCPU), which is analogous to a virtual CPU and manifests the state of a guest executing calls on the GPGPU. An aVCPU is a first-class schedulable component that has a call buffer in the guest VM, a polling process on the GPU host, and a runtime API for CUDA. Guest OS calls for the GPU are stored in a buffer shared between guest and host; a polling thread selects the GPU calls from the buffer and passes them to CUDA to execute on the physical GPU (a simplified sketch of this call-buffer pattern is given at the end of this subsection). By making its interface similar to the CUDA API, Pegasus provides support for existing applications with GPGPU needs [42]. Shadowfax [43] further advances Pegasus [42] by handling the limitations Pegasus-powered applications face when they need more GPU computational power: Pegasus can utilize only local GPUs, whereas combining additional remote nodes with the local GPU can boost performance. Shadowfax offers the concept of GPGPU clusters, which can be used by a variety of virtual solutions according to GPGPU application needs. It uses Pegasus's concept to empower applications for the local GPU, and for remote hosts it creates a fake VM through a remote server thread, which has a buffer to queue calls and, for each VM, a polling process on the remote host. Shadowfax uses batching to minimize remote communication overheads for GPU requests and data. VOCL [44] offers GPGPU virtualization similar to rCUDA [34] but for OpenCL implementations. It utilizes remote GPUs to accelerate virtual devices that support OpenCL. It implements a wrapper library on the guest VM and a VOCL process on the GPU host. The library on the client side sends the request to the remote host, where it is processed by the VOCL proxy process. A rich and dynamic MPI [45] channel is used for communication between the client and the remote host. DS-CUDA [46] also offers remote GPU virtualization similar to rCUDA [34]. DS-CUDA consists of a compiler and a server: the compiler generates wrapper functions for CUDA API calls, and the server receives calls/data and executes them on the remote host. It uses RPC or InfiniBand channels for communication. In contrast to other similar remote virtualization approaches, DS-CUDA performs redundant calculations on two different GPUs in a cluster to ensure the integrity of the computed outcome. The Enhanced XMLRPC [47] model is based on XML-RPC and CUDA. It takes a similar approach to rCUDA [34] for remote GPU virtualization but with an optimized data encapsulation concept. In this paradigm, the XMLRPC data is optimized using an XMLRPC-String method before being sent over the network to the remote GPU host. It focuses on keeping the number of packets at a minimum with an optimized packet size. It claims to boost performance by 4.5x to 7x, with and without pre-processing respectively.
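The call-buffer idea behind GViM, Pegasus and Shadowfax can be pictured as a single-producer/single-consumer ring shared between the guest and a host-side polling thread. The sketch below is a simplified, hypothetical in-process model of that pattern; a real system would place the ring in memory shared across the VM boundary (e.g. grant pages or POSIX shared memory) and would dispatch each drained call to the actual CUDA runtime.

```cpp
#include <array>
#include <atomic>
#include <cstdint>
#include <thread>

// One queued GPU request; in a real system this would name a CUDA routine
// and carry (or reference) its marshalled arguments.
struct GpuCall { uint32_t routine; uint64_t args[4]; };

// Fixed-size ring buffer shared between guest (producer) and host (consumer).
struct CallRing {
    static constexpr size_t N = 256;
    std::array<GpuCall, N> slots;
    std::atomic<size_t> head{0};   // next slot the guest writes
    std::atomic<size_t> tail{0};   // next slot the host's polling thread reads

    bool push(const GpuCall& c) {                       // guest side
        size_t h = head.load(std::memory_order_relaxed);
        if (h - tail.load(std::memory_order_acquire) == N) return false;  // full
        slots[h % N] = c;
        head.store(h + 1, std::memory_order_release);
        return true;
    }
    bool pop(GpuCall& out) {                            // host polling thread
        size_t t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire)) return false;      // empty
        out = slots[t % N];
        tail.store(t + 1, std::memory_order_release);
        return true;
    }
};

// Host-side polling loop: drain queued calls and hand them to the real runtime.
void polling_thread(CallRing& ring, std::atomic<bool>& running) {
    GpuCall c;
    while (running.load()) {
        while (ring.pop(c)) {
            // A real backend would dispatch c.routine to the CUDA runtime here
            // and write the completion/result back where the guest can see it.
        }
        std::this_thread::yield();
    }
}
```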
FairGV [48] model focus on weighted fair sharing and utilization of the GPU in mixed workloads.It introduce trap-less architecture for GPU processing, queuing methods and co-scheduling policies.Trap-less architecture helps boosting the performance, since trapping in OS kernel adds execution overhead and impact the performance.FairGV can interact with GPU calls directly from user space without hypervisor or kernel trapping.In FairGV, the guest VM sends request to host to be processed and data is shared through shared memory between guest and host OS.The VM frontend polls the response ring for the result of request.The major difference between FairGV and other similar solution is that FairGV is trap-less, it also offers queuing and scheduling policies in combination to boost performance. Para & Full Gpgpu Virtualization: API Remoting gives ability to virtualize GPU with less effort and acceptable performance, but requires to update API libraries as soon as the underlying GPU vendor libraries update or new functionalities may not be available and it may break existing functionalities if vendor decides to remove certain feature calls from libraries.To keep libraries updated is a tedious task, to eliminate these limitations, para & full virtualization approaches are used, which allows to virtualize the GPUs at driver level.Para virtualization requires driver's modification, while full virtualization doesn't need driver modification.Generally, vendor doesn't provide source code for GPU drivers, but AMD has released GPU architecture documentation for their models [49], also some programmers has reverse engineered [50, 51, 52] NVIDIA GPU interfaces.Collectively, due to the efforts, custom drivers has been built for AMD and NVIDIA GPUs that leads to para & full virtualization methods.Fig. 3. explains the architecture of the para & full GPU virtualization techniques, in which guest OS is equipped with modified (para) or unmodified (full) GPU driver.QEMU GPU device on the host end receives the GPU calls from guest through a shared memory at hypervisor level.QEMU device pass the GPU requests to physical GPU and result is returned to guest OS.The vGPU manifests the requests for each VM.The guest driver thinks QEMU device as actual GPU. 
This approach has an edge that GPU libraries doesn't need to be modified and existing applications can also run over this virtualization architecture.Additionally, since hypervisor is involved in this approach, so the GPU calls can be controlled, monitored and also live migration is supported.The downside of this approach is that it relies on custom GPU drivers, which can be produced only through open documentation provided by the GPU vendors or reproduction by inference.LoGV [53] is a para-virtualization solution for KVM virtualized platform where guest VM is equipped with custom PathScale [51] GPU driver.The PathScale driver is a reverse engineered solution for NVIDIA GPUs and is available as open-source.The basic function of the LoGV architecture is to partition GPU memory into various parts, and a VM is only allowed to interact with own part.GPU's partitioned memory and GPU accelerated application's address space is mapped together using memory management unit (MMU) available in today's GPUs.LoGV support guest to interact with mapped region without a role of hypervisor by configuring GPU page tables referenced by MMU.LoGV intervenes memory allocation to prevent a VM from mapping in other guest VMs spaces.The driver at guest end is responsible to send such operations to hypervisor, and these requests are validated by virtual device at hypervisor level.After validations, upon receiving mapping requests from virtual device, GPU driver in the host completes the memory allocations.A command submission channel is established similarly, where GPU application can send requests directly to the GPU without involving hypervisor.KVM-based hypervisor [54] paravirtualization approach was developed for Heterogeneous System Architecture (HSA).HSA by AMD architecture puts CPU and GPU on same chip to eliminate communication overheads and enables both GPU/CPU to use shared virtual memory.The KVM-based hypervisor solution implements the assignment of page tables by CPU's MMU to IOMMU so GPU can use them as shadowtables.Furthermore, an interrupt is generated by HSA when a page fault is occurred in GPU, which triggers CPU to modify the referenced page for GPU.This KVM-based approach asks the guest OS to update guest page table upon interrupt from HSA. 
GPUs use shadow page tables for addressing but for the sake of integrity, the guest page tables are updated too.The guest VMs are equipped with a custom driver, responsible to these operations.Lastly, HSA architecture facilitates a shared buffer between CPU and GPU in user space to queue GPU commands.This KVM-based approach just inform GPU about this buffer address residing at guest OS, so that guest VM can interact with physical GPU.VGVM [55] is another para-virtualization solution, consist of a VGVM library, a frontend driver and a backend driver.Existing applications using CUDA Runtime API remains compatible, and VGVM library transfer the routine arguments to the host through frontend driver, and receive results.It intercepts the routine arguments, bundle them GPU request along with other data and forward to the frontend driver.VGVM frontend driver works as a middle agent between library and host, which sends execution requests to the GPU host backend driver.Frontend driver is implemented in guest VM's kernel, and also manage the memory allocations and copy the arguments from user-space to kernel-space, upon receiving result from backend, the process is reversed by copying results from kernel to user space.Finally, a backend driver as virtual CUDA device, which acts like a dispatcher to handle multiple requests from VMs and have them executed over GPU.The results of execution are dispatched back to requesting VM's frontend driver that further deliveries to GPU application.GPUvm [56,57] offers both para & full virtualization solutions over Xen hypervisor using Nouveau [50] driver on virtual machine.In full-virtualization scenario, GPUvm partitions GPU's physical memory and MMIO space in several parts, where each part is assigned to different VM, which helps keeping the VMs isolated.By using dedicated GPU shadow page table for each VM, Virtual GPU memory addresses are translated to physical GPU address of allocated part of partitioned memory.GPUvm cannot handle page faults because of limitations in NVIDIA GPU architecture, thus GPUvm scans whole page table on every TLB flush to update shadow page table.Every GPU request from guest VM creates a page fault, as parted MMIO space is setup as readonly, thus OS intervenes and surpass the access to Xen driver space.Hardware has limited number of command submission channels, GPUvm virtualize them.It then creates shadow channels that are mapped to virtual channels.Overall, GPUvm full virtualization solution is not efficient because (i) it need to scan whole page table on each TLB flush (ii) it needs to intercept each GPU request.GPUvm handles the second limitation by using BAR Remap, intercepting only GPU related calls that requires access GPU channel descriptors, and other possible isolation problems are handled by using shadow page tables.The first GPUvm limitation can be solved with para virtualization approach offered by GPUvm.Similar to Xen [16], GPUvm creates guest page tables, and guests use these page tables directly instead of shadow tables.VM driver perform hypercalls to GPUvm when there is a need to update GPU page table, which then are validated by GPUvm for isolation between guests.G-KVM [58] is a full GPGPU virtualization approach on KVM hypervisor which is inspired by its predecessor GPUvm [56,57] (which can only run on Xen).Both hypervisors Xen and KVM are different in architecture, Xen is implemented at bare-level hardware which can manage memory and tasks scheduling for VMs like an OS, while KVM is implemented in kernel to offer 
virtualizations.G-KVM, in addition to GPUvm features, uses an aggregator and QEMU device design. Hardware-Based Gpgpu Virtualization: Hardware-based virtualization techniques allows VMs to directly access the GPU instead of using APIs or emulators.This approach utilize the hardware virtualization extensions offered by the GPU vendors e.g.NVIDIA GRID [65], AMD-Vi [68] and Intel VT-d [67].The Direct Memory Access (DMA) channels and interrupts are mapped to VM directly, which allows direct data transfer from GPU memory to VM without hypervisor intervention and interrupts are directly sent to VM. Hardware virtualization is further divided into two categories (i) single VM per GPU support (ii) multiple VMs per GPU support.AMD-Vi and Intel VT-d falls under first category, while NVIDIA GRID allows multiplexing and falls under second category.The VM can interact with GPU directly without any custom modified driver or library and hypervisor involvement is not needed.As AMD-Vi and VT-d just support single virtual machine per GPU, so a pluggable approach, which dynamically add/remove the GPU device in VM, is used to share GPU between multiple VMs.Hardware-based virtualization gives the maximum performance since no middleware (libraries) are involved and address spaces are directly mapped with VM and host, but implementation of live-migration and fault-tolerance is relatively difficult over this architecture. Amazon Elastic Compute Cloud (Amazon EC2) [59] is the first platform that introduced GPUs for cloud users, and used Intel's pass-through technology [60].Amazon introduced Cluster GPU Instances (CGI), which gives couple of NVIDIA GPGPUs to every VM [61].HPC applications in need of massive parallelism can exploit CGIs, which offers direct GPU access to each VM.The virtualization performance of the CGIs are measured and benchmarks shows that compute-intensive applications can exploit GPUs in cloud for performance boost, while memory-intensive applications may have performance penalty due to EC2 structure that implements ECC memory error detection, which can ultimately cap the memory bandwidth. vmCUDA [62] is a hybrid solution that use API Remoting and Hardware-based approaches together that enable GPU virtualization in VMware ESX hypervisor.It introduces the concept of appliance VM, which acts like middle server to serve multiple VMs, and pass-through the intercepted requests to actual GPU.It offers CUDA applications from different VMs to utilize GPU and ensure optimal usage.It's compatible with vMotion, which supports live migration, that means the guest VM can either be on the same node as of appliance VM or can be remotely located.vmCUDA doesn't need any modification to the hypervisor or existing CUDA applications.It offers API libraries and a frontend driver to the guest VM to interact with appliance VM. vmCUDA utilize vRDMA [63], VMCI [64] or TCP/IP channels in order to have communication between frontend / backend drivers. 
NVIDIA GRID [65] eliminates the limitations of GPU sharing between VMs that may be caused by pass-through methods. It implements an Input/Output Memory Management Unit (IOMMU), which translates a VM's virtual address space to the physical GPU's address space. Furthermore, GRID offers a dedicated input buffer to each VM to isolate its commands from other VMs. These architectural changes have enabled GRID to be virtualization-aware and isolate every VM's GPU interaction, while still providing performance comparable to a pass-through design. Cloud infrastructure can best utilize NVIDIA GPUs such as the GRID Tesla M6, M60, K1 and K2, which all support GRID, making NVIDIA GPUs a better choice for multi-VM services. Hong et al. [66] benchmarked the performance of NVIDIA GRID-enabled GPGPUs for cloud gaming platforms, showing that GRID-enabled GPUs give higher performance than usual pass-through-based GPUs due to optimized GPU hardware. Comparison & Discussion: In this section, all the virtualization techniques discussed above are summarized and compared for their features and architecture. Fig. 6 describes the list of hardware supported by each virtualization method. The hardware detail column shows the GPU vendor and model used to test the corresponding technique; GPU models other than those listed may be supported as well. This helps in quickly scanning through the techniques for the needed hardware and available GPU models. Fig. 7 shows the detailed comparison of the GPGPU virtualization approaches discussed in the previous section. The solutions are compared on the following metrics. Virtualization category: This field compares the available techniques by their approach type, whether they offer API Remoting, para-virtualization, full-virtualization or hardware-based GPGPU virtualization. Hypervisor: This field compares the discussed solutions from the perspective of their compatible hypervisor; the options include KVM, Xen, VMware, Parallels or other. Remote acceleration: This metric shows whether an approach supports the usage of a GPGPU installed on a remote node, either in the same cluster or in an entirely different network-based cluster. Programming model: This field compares the approaches by their programming model, i.e. whether an approach supports the CUDA or OpenCL programming library; this metric can help programmers pick the right library to develop GPGPU-empowered applications. GPU Hardware: This field lists the GPGPU vendors supported by each particular virtualization method; Fig. 6 describes this column in detail with GPU models and vendor details. Multiplexing: This field shows whether or not an approach supports GPU sharing between multiple VMs.
Fig. 7 shows that most of the techniques fall under the API Remoting category, because its relatively easy implementation and maintenance attract more programmers. Additionally, API Remoting can be used in environments where GPGPU virtualization is not natively supported by the hardware. API Remoting also allows developing portable GPGPU applications, which can be deployed in any environment equipped with the dependent library, regardless of the underlying hardware. Para and full virtualization are also popular due to other benefits over API Remoting: they allow live migration, and their lower communication overheads make these approaches faster than API Remoting. Hardware-based methods provide the best performance because communication overheads are almost zero, as shared memory paradigms are used and VMs can interact directly with the host without interference from hypervisors or network layers. Further, the comparison reveals that KVM and Xen are the popular hypervisors used by virtualization approaches from all three categories, while VMware support is available only for hardware-based approaches. Remote acceleration is supported by the majority of API Remoting approaches, which allows sharing a GPU installed on a remote node either in the same cluster or in a totally different network. GPGPUs support two programming models, (i) CUDA and (ii) OpenCL, and the comparison table shows that CUDA is the most popular programming model, supported by the majority of the virtualization approaches. The table also indicates which architectures are available as open source, and the list reveals that not many of them are. Comparing GPU vendors shows that NVIDIA is the leading provider of GPGPUs and is supported by the widest range of virtualization techniques. The multiplexing column shows that almost all approaches support sharing a single physical GPU between multiple VMs. This survey reveals that although there has been a good amount of research dealing with GPGPU virtualization and many challenges have been addressed, GPGPU virtualization has still not reached maturity. There are many areas that need improvement, including scalability, security, portability, power efficiency, shared spaces, live migration, and communication between guest VMs and the host.
Conclusion: Since the rise of HPC, heterogeneous systems have been exploited to improve the efficiency of applications through parallel paradigms, achieving higher computational performance at lower cost. This study has reviewed the GPU virtualization techniques that focus on virtualization for general-purpose acceleration. The cloud is a heterogeneous environment, and GPGPU virtualization allows sharing a physical GPU between multiple heterogeneous virtual machines to save costs, ensure optimal usage of GPGPU devices, and offer customers a high-performance platform. In this survey, the available GPGPU virtualization solutions are explored and compared for their features and supported frameworks. The study reveals that API Remoting, para-, full and hardware-based solutions have been presented to perform GPGPU virtualization. Furthermore, NVIDIA is the leading vendor among GPGPU providers and CUDA is the most supported programming language. Each virtualization solution has its own benefits and limitations, and can be adopted according to the need and the available hardware resources. Future work may involve exploring GPGPU virtualization together with scheduling methods, and benchmarking these techniques to obtain real numbers and comparisons. Fig. 5 illustrates the hardware-based architecture for AMD-Vi or VT-d.
8,286.6
2018-11-04T00:00:00.000
[ "Computer Science", "Engineering" ]
On New Classes of Stancu-Kantorovich-Type Operators The present paper introduces new classes of Stancu–Kantorovich operators constructed in the King sense. For these classes of operators, we establish some convergence results, error estimations theorems and graphical properties of approximation for the classes considered, namely, operators that preserve the test functions e0(x)=1 and e1(x)=x, e0(x)=1 and e2(x)=x2, as well as e1(x)=x and e2(x)=x2. The class of operators that preserve the test functions e1(x)=x and e2(x)=x2 is a genuine generalization of the class introduced by Indrea et al. in their paper “A New Class of Kantorovich-Type Operators”, published in Constr. Math. Anal. Introduction By C[0, 1], we denote the space of continuous functions defined on [0, 1], and by L 1 [0, 1], the space of all functions defined on [0, 1], which are Lebesgue integrable. Let N be the set of all positive integers. Following the generalization of Bernstein operators proposed by Stancu, D. Bȃrbosu introduced in paper [5] the Kantorovich variant of Stancu-Bernstein operators, which, for m+β+1 , is defined as: with 0 ≤ α ≤ β, p m,k (x) = ( m k )x k (1 − x) m−k , for every m ∈ N and f ∈ L 1 [0, 1] The study of Kantorovich operators is still in the spotlight of many recent research papers (see [6][7][8]). Among the numerous generalizations of the Kantorovich-Bernstein type operators, we mention the one by Indrea et al., (see [9]), which introduces a new general class which preserves the test functions e 1 (x) and e 2 (x). In [10], J.P. King introduced a new class of positive linear Bernstein-type operators which reproduce constant functions (e 1 (x)) and e 2 (x). These operators are a generalization of the Bernstein operators, but they are not polynomial-type operators. With the results introduced by King, a new direction of research was initiated, which concerns the construction of new operators with better approximation properties, obtained by modifying existing sequences of linear positive operators. This subject has been one of great interest. Gonska and Pit , ul (see [11]) studied estimates in terms of the first and second moduli of continuity for the operators introduced by King. Among the first generalizations of King's result, we mention those of Agratini (see [12]), Cardena-Morales et al., (see [13]), Duman and Özarslan (see [14,15]) and Gonska et al., (see [16]). The subject is still of interest. Among the more recent studies, we mention the one by Popa (see [17]) where Voronovskaja-type theorems for King operators are studied. A recent extensive review of King type operators is that of Acar et al. (see [18]). Based on the results in [3,9,19,20], we introduce three new classes of King-type approximation operators. The aim of our paper is to obtain convergence properties of the uniform approximation of continuous functions using a Korovkin-type theorem(see [21]). Our results are a generalization of previous results on the topic. The article is structured as follows. Section 2 presents some known results and notions that are to be used throughout the paper. In Section 3, we introduce the general form of our operators and some properties they satisfy. In Sections 4-6, we introduced three new classes of operators, which preserve exactly two of the test functions e i , i = 0, 1, 2. These operators are particular variants of the operator considered in Section 3. Preliminaries In the following, we present the notions and results that will be used to prove the main results of the paper. 
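The operator definition quoted above from [5] is garbled by text extraction; the display below is a LaTeX reconstruction of the standard Kantorovich variant of the Stancu–Bernstein operators, consistent with the basis functions and parameters named in the text, and should be checked against the original source.

```latex
% Stancu-Kantorovich operators (Kantorovich variant of the Stancu-Bernstein
% operators), for 0 <= alpha <= beta, m in N and f in L^1[0,1]:
\[
  K_m^{(\alpha,\beta)}(f;x) \;=\; (m+\beta+1)\sum_{k=0}^{m} p_{m,k}(x)
  \int_{\frac{k+\alpha}{m+\beta+1}}^{\frac{k+\alpha+1}{m+\beta+1}} f(t)\,dt,
  \qquad
  p_{m,k}(x) \;=\; \binom{m}{k}\, x^{k}(1-x)^{m-k}.
\]
```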
We will denote by F (I) the set of all functions defined on I ⊂ R. Remark 1. The operators (L m ) m∈N defined above are linear and positive on U(I). Definition 2. For i ∈ N we denote by (Γ i L m ) the moments of order i of the operators defined in (1): Definition 3. Let I ⊂ R be a compact interval and f be continuous function on I. The modulus of continuity is a function ω( f , ·) : [0, ∞) → R defined for any h ≥ 0, Now, let us recall the well-known result by Shisha and Mond (see [22]). Other recent evaluations with moduli of continuity can be seen in [23]. A General Method for Constructing New Classes of Stancu-Type Operators In this section, we consider a general method of constructing new types of Stancu operators, namely Stancu-Kantorovich operators with King modification. In the following sections, we will construct three new classes of such operators and we will study some properties of approximation for these operators, taking into account their expressions on the test functions e i (t) = t i , i = 0, 1, 2, and imposing that the operators preserve two of the test functions e i and e j , i = j, i, j ∈ {0, 1, 2}. A motivation for this type of modification of the operators is, as pointed out, for instance, by Acar et al., in [18], finding better properties of approximation and improved error estimates. Our approach is influenced by the modifications of Bernstein-Kantorovich operators considered in [6,9]. With this in mind, let us introduce the following operator. Definition 4. Let I be a compact interval and c m , d m : I → R be some functions that satisfy c m (x) ≥ 0, d m (x) ≥ 0 for all x ∈ I, 0 ≤ α ≤ β and m ∈ N. We define the following Stancu-Kantorovich type operators: for any x ∈ I, m ∈ N and f ∈ L 1 α m+β+1 , m+α+1 m+β+1 . Lemma 1. The operator proposed in relation (3) has the following properties Proof. For e 0 (x) = 1, we have: which, from the binomial theorem, yields: Denoting k − 1 = l in the sum from the right hand side in the above relation, we get: and, again, by the binomial theorem, we get: Lastly, we shall compute S (α,β) * m for e 2 (x) = x 2 : Now, by doing the calculations in the square brackets, we get: By denoting k − 2 = l in the first sum from the right hand side in the relation from above and by the binomial theorem, we will have: which completes the proof. Stancu-Kantorovich Type Operators Which Preserve the Functions e 0 and e 1 In this section, we shall construct an operator of Stancu-Kantorovich type as in (3), that preserves the test functions e 0 and e 1 , i.e., an operator that satisfies Now, from the conditions in (7) and relations (4), (5) we get and for any m ∈ N and x ∈ I. In order to have a positive operator, we shall assume that the functions c m and d m are positive. This condition yields the following inequality Lemma 2. For 0 ≤ 2α ≤ β and any integers m 0 < m, we have Proof. Let us consider the sequences (x m ) m≥1 , Imposing the condition 0 ≤ 2α ≤ β we have that (x m ) m≥1 is a decreasing sequence, and (y m ) m≥1 is an increasing sequence, thus implying that our inclusion holds for any m ≥ m 0 . Proof. The first two relations from (11) are obvious, and the third follows by applying relations (5), (8) and (9) and after some computations. Lemma 4. The following relations hold Proof. Using the previous lemma and relation (2), we get which, after some calculations, yields (14). uniformly with respect to x ∈ I. Consequently, for any ε > 0, there exists an integer m ε ≥ m 0 , sufficiently large, such that for any x ∈ I and m ∈ N such that m ≥ m ε . 
Proof. The relation (15) follows from (12) and (14). The existence of m ε follows from the definition of the limit of a function and the inequality (16) follows from (15) for any x ∈ I and m ∈ N such that m ≥ m ε . Proof. The Theorem from above follows from Theorem 1 by taking h = 1 √ m . Stancu-Kantorovich-Type Operators Which Preserve the Functions e 0 and e 2 In this section, we shall construct an operator of Stancu-Kantorovich-type, as in (3), that preserves the test functions e 0 and e 2 , i.e., an operator that satisfies Now, imposing the condition (17) and the Equations (4) and (6), we get: (18) and the following quadratic equation, in c m (x): (19) Note that for α ≥ 0, β ≥ 0, the discriminant of the quadratic Equation (19) is positive. We make the following notation: Now, solving Equation (19) we obtain, for m ≥ 2: and, from relation (18) we get In order to have positive linear operators, we shall impose that the functions c m and d m from (21) and (22) are positive. In this case, we obtain the following inequalities: for every m ∈ N such that m ≥ m ε and α, β, satisfying α ≤ β. Remark 5. Since the functions c m and d m are positive on the interval considered in (23), from now on, we will consider I = [ε , 1 − ε ], for all ε > 0 and m ≥ m 0 . Now, taking into account the sequences c m and d m obtained in (21) and (22), the operator in (3) will be for any x ∈ I and m ≥ m 0 . Proof. The first and last relation from (25) are obvious, and the second follows by applying relations (5) and (21). Now, we can obtain the following result. Lemma 8. The following relations hold for any x ∈ I and m ∈ N. Proof. Using the previous lemma and the definition of the operator Γ i from (2), we get the results after some calculations. Lemma 9. We have: lim uniformly with regard to x ∈ I. For any ε > 0, there exists m ε > m 0 such that for any x ∈ I and m ∈ N such that m ≥ m ε . Proof. We have: and after some calculations, we get: Now, replacing the right hand side term in (27) and (28) with (32), we will get the convergences in (29) and (30). Using the definition of the limit of a function and the inequality x(1 − x) ≤ 1 4 , ∀x ∈ [0, 1], we have that for every ε > 0, there exists m ε ∈ N such that the inequality (31) holds, for every m ≥ m ε . Now, using the above results we obtain the following theorem. uniformly on I and for every ε > 0, there exists m ε ∈ N such that for any x ∈ I and m ∈ N such that m ≥ m ε . Proof. The Theorem follows from relation (31) and from Theorem 1 by taking h = 1 √ m . Graphic Properties of Approximation Remark 6. Furthermore, in this case, it can be seen in Figures 3 and 4 that our operators approximate the given functions. Stancu-Kantorovich Type Operators Which Preserve the Functions e 1 and e 2 In this section, we shall construct an operator of Stancu-Kantorovich type as in (3), that preserves the test functions e 1 and e 2 , i.e., an operator that satisfies In order to obtain the main results of this section, we shall consider the following notation where x ∈ I, m ∈ N and w m : I → R. With the previous notation, we have the following remark. Remark 7. In order to have positive operators S (α,β) * m , m ∈ N, 0 ≤ α ≤ βand for the relation (34) to hold, we shall impose that S (α,β) * m (e 0 , x) ≥ 0, which implies which implies Now, from the above considerations and imposing the conditions we will obtain the following lemma. 
Imposing the condition (39), we will obtain the following quadratic equation in w m (x): which has the following solutions: From now on, in this section we will consider w m (x) = w m,1 (x). In order to have a positive operator, the quantities c m (x) and d m (x) from relations (40) and (41) shall be positive. With that condition, we get the following inequalities: for all x ∈ I, m ∈ N and 0 ≤ α ≤ β which lead to for all x ∈ I, m ∈ N and 0 ≤ α ≤ β. Graphic Properties of Approximation As a first comparison, we considered the function f (x) = sin(20x), and we obtained the following graphics, Figure 5, where PolKS(x) represents our operators that preserve e 1 and e 2 , and P(x) is the operator obtained by Indrea et al. in [9], which is also a particular case of our operators considered in the third section for α = β = 0.
3,038
2021-05-28T00:00:00.000
[ "Mathematics" ]
A Survey of Tools and Techniques for Web Attack Detection —Web attacks are attacks on websites and web applications that aim to steal sensitive information, disrupt web-based service systems and even take control of web systems. In order to defend against web attacks, a number of tools and techniques have been developed and deployed in practice for monitoring, detecting and preventing web attacks to protect websites, web applications and web users. It is necessary to survey and evaluate existing tools and techniques for monitoring and detecting web attacks because this information can be used to select suitable tools and techniques for specific websites and web applications. In the first half, the paper surveys some typical tools and techniques for monitoring and detecting web attacks that have been proposed and applied in practice. The paper's later half presents the experiment and efficiency evaluation of a web attack detection model based on machine learning. Experimental results show that the machine learning based model for web attack detection produces a high detection accuracy of 99.57% and has the potential for practical deployment. INTRODUCTION Along with the rapid development of the Internet and the global Web, websites and web applications (referred to as web applications from now on) have also grown very strongly and have become some of the most popular and essential applications on the Internet [1] [2]. Web applications are used in almost all areas of social life, such as business, commerce, finance-banking, manufacturing, sports, entertainment and social communication. Because of their popularity and importance, websites and web applications are also subject to many types of dangerous and sophisticated attacks and intrusions aiming to steal sensitive information, disrupt operations, or take control of the web system. Popular types of attacks on websites and web applications (referred to as web attacks from now on) include SQL injection (SQLi), Cross-Site Scripting (XSS), Command injection (CMDi), path traversal, defacements and DoS/DDoS [1] [2]. These attacks can cause serious consequences for websites and web applications, as well as web users. They allow the attacker to (i) bypass the authentication system of the web application, (ii) carry out unauthorized modifications to web databases and the content of web pages, (iii) extract data from web databases, (iv) steal sensitive information from webservers and user web browsers and (v) take control of web and database servers [3]. Figure 1 shows an example of a SQLi attack on a website, in which the attacker inserts malicious SQL code into the search keyword to delete a table of the website's database. Due to the danger of web attacks, a number of tools and techniques have been developed and applied in practice to monitor, detect and prevent this kind of attack in order to protect websites, web applications and web users. In general, there are three approaches to web attack defense: (a) validating input data, (b) reducing the number of attackable surfaces, and (c) adopting a defense-in-depth strategy [1] [2]. In particular, approach (a) requires a thorough examination of all input data, and only valid data is passed to the next processing stage, to reduce the types of malicious code embedded in the input data.
According to approach (b), the web application is divided into components and appropriate access control mechanisms are applied to restrict user access to each component in order to reduce the number of attackable locations. In approach (3), many layer defenses are used to form complementary layers to detect and prevent web attacks in order to better protect websites, web applications and web users [3]. Each of these three approaches mentioned above has many tools and techniques that have been researched and developed. Therefore, the survey and evaluation of existing web attack detection tools and techniques is necessary and is the basis for the selection of appropriate tools and techniques for monitoring and detecting web attacks for the practical deployment in specific websites and web applications. This paper carries out two main tasks which are also two contributions as follows: 1. Investigate typical web attack detection and monitoring tools and techniques that have been developed and applied. This is a necessary task and the survey results can be used as a basis for selecting suitable web attack detection and monitoring tools and techniques for deployment in specific websites and web applications; and 2. Test and evaluate the effectiveness of web attack detection techniques based on traditional supervised machine learning algorithms to find the best-supervised machine learning algorithm for the web attack detection model. This is an important premise for building a web attack detection system for practical applications. The rest of the paper includes the following sections: Section II describes typical types of security vulnerabilities and web attacks, Section III examines tools and techniques for monitoring and detecting web attacks, Section IV describes and tests a machine learning-based web attack detection model and Section V is the conclusion of the paper. ATTACKS According to OWASP, the reason that attackers can perform various types of web attacks is that they exploit the types of security holes that exist in websites and web applications [1]. OWASP periodically publishes a list of security vulnerabilities and the most recent list of web vulnerabilities is released in 2017. Table I lists the OWASP Top 10 Web Security Vulnerabilities in 2017 [1]. A1-Injection The code injection error allows the insertion and execution of various types of malicious code, such as SQL, OS and LDAP. A2-Broken Authentication Weak application authentication or session authentication failure that allows the theft of password, encryption keys, or impersonation/user session hijacking A3-Sensitive Data Exposure This error allows the theft of sensitive data because they are not properly protected, or by appropriate measures. A4-XML External Entities (XXE) Error handling XML files allows execution of malicious code embedded in externally referenced XML files. A5-Broken Access Control Weak access control error allows unauthorized access to functionality, or data, like accessing other users' accounts, viewing sensitive files... A6-Security Misconfiguration Improper security configuration error, such as insecure default configuration, incomplete or special configuration... Vulnerability Description A7-Cross-Site Scripting (XSS) XSS code injection error allows HTML or JavaScript code to be inserted to steal sensitive data from web users' browsers. A8-Insecure Deserialization Unsafe deserialization can lead to remote code execution, replay attacks, or privilege escalation. 
A9-Using Components with Known Vulnerabilities Components, libraries containing known vulnerabilities used in applications and running with application privileges can be easily exploited to attack the system. A10-Insufficient Logging & Monitoring Inadequate logging and monitoring, coupled with poor incident response allow further system attacks, while maintaining persistence, pivoting to more systems, and tampering, extracting, or destroying data. In addition to the list of web security vulnerabilities, OWASP also provides a list of typical attacks on websites and web applications as described in Table II. Types of Attacks Description SQLi SQL injection attack, in which SQL exploit code is inserted into user data submitted to webserver and finally executed on the website's database server. XSS XSS injection attack, where an XSS exploit (usually JavaScript) embedded in a web page runs within the browser with user access privilege, can access sensitive user information stored in the browser. Defacements The attack changes the content and thereby changes the appearance of the web page. This type of attack causes damage to the website owner. Path traversal The attack exploits path traversal, or normalization, errors in webservers and web applications. Session hijacking The attack exploits web session security flaws, in which an attacker can access a user's session and perform the same operations as the user himself. A. TOOLS AND SOLUTIONS FOR MONITORING AND DETECTING WEB ATTACKS This section examines some commercial and open-source solutions and tools for monitoring and detecting typical attacks on websites and web applications. Many solutions and tools are used in practice, such as VNCS Web Monitoring [4], Nagios Web Application Monitoring Software [5], Site24x7 Website Defacement Monitoring [6], ModSecurity [7], Snort IDS [8]. This section examines three widely used solutions and tools, including VNCS Web Monitoring [4], Nagios Web Application Monitoring Software [5] and ModSecurity [7]. VNCS Web Monitoring VNCS Web monitoring [4] is a solution that allows monitoring of many websites simultaneously based on collecting, processing and analyzing access logs using Splunk platform developed by Vietnam Cybersecurity Technology Joint Stock Company (VNCS). VNCS Web monitoring collects weblogs from servers to be monitored, then transfers them to the central system for processing and analysis. This system allows for centralized log management, has automatic analysis and realtime alerting, supports manual log analysis to find problems, supports monitoring and alerting the operating status of the website, supports the detection of attacks to change content, change the interface, detection of SQLi, XSS and malicious code on websites. The limitation of VNCS Web monitoring is that the problem of transporting large volumes of logs from the monitored servers to the processing center requires a stable connection with high throughput. In addition, this is a commercial solution, so the installation and operating costs are relatively high. Nagios Web Application Monitoring Software Nagios Web Application Monitoring Software [5] is a toolkit that allows monitoring websites, web applications, web transactions and web services, including features such as availability monitoring, URL monitoring, HTTP status monitoring, content monitoring, website hijacking detection, and SSL certificate monitoring. 
Nagios provides many tools to help configure the objects to be monitored easily and quickly, such as the Website Monitoring Wizard, Website URL Monitoring Wizard, Website Transaction Monitoring Wizard and HTTP Monitoring Plugins. The main drawback of Nagios is that it is a commercial solution, so installation and operating costs are high. ModSecurity ModSecurity [7] is a type of web application firewall (WAF) that is used to filter user queries sent to webservers, as shown in Figure 4. ModSecurity uses a set of predefined rules to filter HTTP requests. If a request is determined to be valid, it is sent to the webserver; otherwise, the request is blocked and a warning is logged. ModSecurity can be installed on the server to protect multiple websites, preventing most types of web attacks, such as SQLi and XSS. The advantage of ModSecurity over commercial WAF solutions (such as Barracuda Networks WAF, Imperva SecureSphere, and Radware AppWall) is that it is open, free, lightweight, and deeply integrated into webservers, like the Apache webserver. The limitation of ModSecurity is its compatibility with webservers: ModSecurity may work fine with the Apache webserver, but may have problems with the Microsoft IIS webserver. Comments In general, solutions and tools for monitoring and detecting web attacks are relatively diverse, including solutions and tools for monitoring and detecting web attacks and anomalous access behaviors, as well as web application firewalls and intrusion detection systems. However, most of the solutions and tools mentioned above perform signature-based or rule-based monitoring and detection. Therefore, they are only capable of detecting known attacks. In addition, some commercial solutions and tools have high installation and operating costs. Some solutions, like ModSecurity, have compatibility issues with different webserver platforms. B. TECHNIQUES FOR DETECTING WEB ATTACKS This section examines a number of studies on monitoring techniques to detect typical attacks on websites and web applications. The surveyed studies fall into two groups: (i) detection based on signatures, patterns or rules and (ii) detection based on anomalies. Detection based on signatures and rules Detecting attacks in general, and web attacks in particular, based on signatures or rules is an attack detection method based on matching a set of signatures of known attacks against collected monitoring data. An attack is detected when at least one signature match is successful. The signature-based attack detection technique has the advantage of being able to quickly and accurately detect known attacks. However, its disadvantage is that it is not able to detect new types of attacks, or attacks that exploit zero-day vulnerabilities, because their signatures have not yet been added to the database. In addition, building and updating the signature database is often done manually, which is labor-intensive. Many studies and proposals for signature-based web attack detection techniques have been published in specialized journals and conferences over the last decade [9]. This section introduces three typical proposals for rule-based web attack detection, including the OWASP ModSecurity Core Rule Set (CRS) [10], SQL-IDS [11] and XSS-GUARD [12]. CRS [10] is a set of rules developed by the OWASP project to detect many types of web attacks listed in the OWASP Top 10 with low false alarm rates. CRS can be used with ModSecurity, the module available for the Apache webserver.
The advantage of CRS is that it is supported and regularly updated by OWASP and the global web security community. However, because CRS includes a fairly large number of rules, it is relatively cumbersome and can have compatibility problems when integrated into other web application firewalls or used with other webservers, such as the Microsoft IIS webserver. SQL-IDS [11] is a specification-based SQLi attack detection system. SQL-IDS first constructs a rule set that specifies the structure of the valid SQL commands that the web application generates and sends to the database server for execution. Then, SQL-IDS monitors, preprocesses and classifies incoming SQL queries based on the built specification rule set. Only SQL statements classified as "valid" are sent to the database server for execution; invalid SQL statements are blocked and logged. Experiments show that SQL-IDS achieves 0% false alarms and can be used to protect multiple websites simultaneously, because the system is deployed as an intermediary between the webserver and the database server. However, the manual building of the rule set specifying the structure of valid SQL statements must be done separately for each website and consumes a lot of time, especially for large-scale websites and web applications. XSS-GUARD [12] is a framework for monitoring and preventing XSS attacks by creating a "shadow page" and comparing it with the real page before the real page is sent to the client. The shadow page is automatically generated side-by-side with the real page from the webserver response, but uses clean input data (no scripting) of the same length as the real data. Tests show that XSS-GUARD is capable of blocking many forms of XSS attacks listed by OWASP. Moreover, this method does not require a large rule set or frequent updates. However, XSS-GUARD significantly increases the load on the webserver, because it has to continuously generate shadow pages and compare them with the real pages on each request from the web user.
Detection based on anomalies
Anomaly-based attack detection is based on the assumption that attack behaviors are closely related to anomalous behaviors. Building and deploying an anomaly-based attack detection system consists of two phases: (1) training and (2) detection. During the training phase, a profile of the monitored object in normal working mode is built. During the detection phase, the current behavior of the system is monitored and a warning is raised if there is a sharp difference between the current behavior and the behavior recorded in the object's profile. The advantage of anomaly-based attack detection is its potential to detect new types of attacks without requiring prior knowledge about them. However, this method has a relatively high false alarm rate compared to signature-based detection, and it is also resource-intensive, both for object profiling and for analyzing the current behavior. As can be seen, the most important task in most anomaly-based attack detection proposals is building a profile of the monitored object in normal working mode. There are many methods for profile construction, including statistics, finite state machines and machine learning, of which statistics and machine learning are the most widely used [13]. Therefore, this section focuses on web attack detection proposals based on statistics and machine learning, including Betarte et al.
[14], Torrano-Gimenez et al. [15], Liang et al. [16], Pan et al. [17] and Hoang [3]. Betarte et al. [14] propose using one-class classification models and n-gram-based classification models to improve the detection ability and accuracy of web attack detection for ModSecurity [7]. The experiments performed included (1) detection using only OWASP CRS [10], (2) detection using only the one-class or n-gram-based classification models, and (3) combined detection using OWASP CRS together with the machine learning models. Experimental results on the CSIC2010 [18], ECML/PKDD2007 and self-generated datasets show that the machine learning models and the combined models have much higher accuracy than OWASP CRS alone. Torrano-Gimenez et al. [15] propose an anomaly-based detection system installed as a proxy server, or web application firewall, that stands between the client and the webserver. During the training phase, an XML file describing the set of valid requests for a website is automatically generated from the training data by a statistical method. In the detection phase, the XML file is used to categorize the requests sent to the webserver: a request determined to be normal is forwarded to the webserver for processing, while requests determined to be anomalous are blocked. Experiments show that the proposed system gives high accuracy and a low error rate when the amount of training data is large enough (from 10,000 requests). Liang et al. [16] propose using recurrent neural networks (RNN) to build web attack detection models. Experiments on the CSIC 2010 dataset [18] show that the proposed model achieves an overall accuracy of over 98%; moreover, this method eliminates the manual and time-consuming work of selecting and extracting features for training and detection. Using a fairly similar approach, Pan et al. [17] suggest using the Robust Software Modeling Tool to monitor and extract the execution information of an application; the collected information is then used to train a stacked denoising autoencoder deep learning network to build the detection model. Experiments show that the proposed method can detect many types of web attacks with an average F1 measure of over 90%. Hoang [3] proposes a machine learning-based detection model of web attacks using weblogs, in which a decision tree algorithm is used to build a detection model from training data extracted from weblogs. Experimental results on the HTTP Param Dataset [19] and a real weblog dataset show that the proposed model is capable of effectively detecting four common web attacks, namely SQLi, XSS, CMDi and path traversal, with an overall accuracy of 98.56%. In addition to its high accuracy, the proposed model also requires lower computational resources than the proposals based on deep learning networks [16][17].
Comments
From the above surveys, some comments about web attack detection proposals can be drawn as follows:
- Proposals based on signatures and rules are capable of quickly and accurately detecting known types of web attacks. However, building and updating the set of signatures and rules manually consumes a lot of time [10][11];
- Anomaly-based proposals, especially those based on machine learning, give high detection accuracy and the ability to detect new attacks. Furthermore, building profiles or detection models can be done automatically from the training data.
Nevertheless, anomaly-based proposals often have a relatively high false alarm rate compared to signature-based detection, and they also require a lot of computational resources for building the detection model [14][15][16][17];
- Some proposals are only capable of detecting one type of web attack, and some rely on comparison or monitoring mechanisms that can severely degrade server performance [12][17].
IV. EXPERIMENTS ON DETECTING WEB ATTACKS BASED ON MACHINE LEARNING
In this section, we evaluate the common web attack detection model proposed in [3] by examining its detection performance with various supervised machine learning algorithms. On that basis, the machine learning algorithm that gives the best detection performance is selected to develop a web attack detection system for practical deployment.
A. INTRODUCTION TO THE DETECTION MODEL
The web attack detection model [3] is implemented in two phases: (1) the training phase and (2) the detection phase. The training phase, as shown in Figure 2, includes the following steps:
- Collect training data, including normal URIs (Uniform Resource Identifiers) and attack URIs;
- The training data is preprocessed to select and extract features. After preprocessing, each URI is converted into a vector and the training dataset is converted into a training matrix of M×(N+1) elements, where M is the total number of URIs in the training set and N is the number of features. The last column of the training matrix stores the label of the URI;
- The training data, in the form of the training matrix, is fed into the training step to build a classifier, or model, used for the detection phase.
The detection phase, as depicted in Figure 3, includes the following steps:
- URIs extracted from weblog data are used as input for detection;
- Each URI is preprocessed in the same way as the training URIs; the result of the preprocessing is the URI's vector, used in the next step;
- The URI's vector is classified using the classifier built during the training phase. The result of this step is the status of the URI: normal or attack.
Experimental dataset
The dataset used for the test is the HTTP Param Dataset [19], consisting of 31,067 URIs of web requests, together with the URI length and label. There are two types of URI labels, Norm (normal) and Anom (attack). The Anom label includes four types of attacks: SQLi, XSS, CMDi and path traversal. The dataset is distributed by label as follows: 19,304 normal strings, 10,852 SQLi strings, 532 XSS strings, 89 CMDi strings and 290 path traversal strings. All records of the dataset are used for training and testing the detection model with different machine learning algorithms, with 80% of the records randomly selected for training and the remaining 20% for testing.
Data pre-processing
The preprocessing performs the separation and vectorization of the URI features. This stage includes two tasks:
- Separation of the URI features using the n-gram technique, chosen for its simplicity and fast execution; 3-grams are used to separate the URI features;
- Vectorization of the URI features using the TF-IDF (Term Frequency-Inverse Document Frequency) method.
For each 3-gram, the tf-idf value is calculated as follows:
tf-idf(t, d) = tf(t, d) × idf(t), with tf(t, d) = f(t, d) / max{f(w, d): w ∈ d} and idf(t) = log(N / |{d ∈ D: t ∈ d}|), (1)
where tf(t, d) is the frequency of 3-gram t in URI d; f(t, d) is the number of times 3-gram t occurs in URI d; max{f(w, d): w ∈ d} is the maximum number of occurrences of any 3-gram in URI d; D is the set of all URIs and N is the total number of URIs. Because the number of URI features (the number of 3-grams) is quite large, PCA (Principal Component Analysis) is used to reduce the number of features to 256, a value selected experimentally.
C. MODEL TRAINING AND TESTING
The training phase uses supervised machine learning algorithms supported by the SkLearn (scikit-learn) library in Python to build and test the detection model; a minimal code sketch of this pipeline is given at the end of this section. The supervised machine learning algorithms used include Naïve Bayes, support vector machines (SVM), decision trees and random forests. These are traditional machine learning algorithms that are widely applied in many fields, including information security [20][21]. With the SVM algorithm, Linear and RBF kernel functions are used [20]. For each machine learning algorithm, the training and testing are run 10 times and the final result is the average of the results of the runs; in each run, 80% of the records are randomly drawn from the dataset for model training and the remaining 20% are used for model testing. The model testing metrics used include PPV (precision, or positive predictive value), TPR (true positive rate, or sensitivity), FPR (false positive rate), FNR (false negative rate), F1 (F1 measure) and ACC (overall accuracy). These metrics are calculated from the components of the confusion matrix as PPV = TP/(TP + FP), TPR = TP/(TP + FN), FPR = FP/(FP + TN), FNR = FN/(FN + TP), F1 = 2 × PPV × TPR/(PPV + TPR) and ACC = (TP + TN)/(TP + TN + FP + FN), where the parameters TP, TN, FP and FN are the components of the confusion matrix given in Table III: TP and TN count the attack and normal URIs correctly classified, while FP and FN count the normal URIs wrongly classified as attacks and the attack URIs wrongly classified as normal, respectively. The testing results of the web attack detection model with the different machine learning algorithms, averaged over the runs, are shown in Table IV.
D. Discussion
The experimental results in Table IV show that the random forest algorithm gives the best detection performance, while the Naïve Bayes algorithm gives the worst. The detection performance of the models based on the decision tree and SVM algorithms is at about the same level. The F1 measures of the detection models based on random forest, decision tree, RBF-SVM, Linear-SVM and Naïve Bayes are 99.57%, 98.76%, 98.33%, 98.10% and 75.10%, respectively. Although the random forest algorithm requires more computational resources than the decision tree algorithm, the random forest-based model has significantly higher detection performance than the decision tree-based model, especially in reducing false alarm rates. Specifically, the FPR and FNR of the random forest-based model are 0.0775% and 0.7253%, respectively, while those of the decision tree-based model are 0.5185% and 1.6122%, respectively. Table V compares the overall detection accuracy (ACC) of the tested model with the surveyed models. It can be seen that the tested web attack detection model gives superior overall detection accuracy compared to the existing detection models. Specifically, the overall detection accuracy of the tested model, of Hoang [3], and of the deep learning-based models of Liang et al. [16] and Pan et al. [17] is 99.68%, 98.56%, 98.42% and 91.40%, respectively.
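As referenced above, here is a minimal sketch of the described pipeline (3-gram separation, TF-IDF vectorization, reduction to 256 components, random forest training and evaluation) using scikit-learn. It is an illustration under stated assumptions, not the authors' implementation: the dataset file name and the column names "uri" and "label" are hypothetical, and TruncatedSVD is used in place of PCA because it operates directly on sparse TF-IDF matrices.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# Hypothetical CSV export of the HTTP Param Dataset with columns "uri" and "label"
data = pd.read_csv("http_param_dataset.csv")
X_raw, y = data["uri"], data["label"]          # labels: "norm" or "anom"

# One random 80/20 split (the paper repeats this 10 times and averages the results)
X_train_raw, X_test_raw, y_train, y_test = train_test_split(
    X_raw, y, test_size=0.2, random_state=42, stratify=y)

# Character 3-grams weighted by TF-IDF, then reduction to 256 components
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
reducer = TruncatedSVD(n_components=256, random_state=42)
X_train = reducer.fit_transform(vectorizer.fit_transform(X_train_raw))
X_test = reducer.transform(vectorizer.transform(X_test_raw))

# Random forest classifier (the best-performing algorithm in Table IV)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("ACC:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred, digits=4))   # PPV (precision), TPR (recall), F1

Swapping RandomForestClassifier for DecisionTreeClassifier, GaussianNB or SVC reproduces the other rows of the comparison; note that GaussianNB requires a dense feature matrix, which the TruncatedSVD step already provides.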
V. CONCLUSION
This paper has surveyed tools, solutions and techniques for monitoring and detecting web attacks. The survey and evaluation results provide a basis for selecting tools, solutions or techniques to monitor and detect web attacks appropriate to specific web application systems in practice. The results of testing and evaluating detection models built with different machine learning algorithms show that the random forest algorithm gives the highest detection accuracy and the lowest false alarm rate. Moreover, the tested model also achieves superior detection performance compared to the surveyed machine learning-based detection models. Future directions include (1) extending the model to detect more types of web attacks and (2) developing a web attack detection system, based on the detection model with the random forest algorithm, for practical deployment.
Level crossings and other level functionals of stationary Gaussian processes
This paper presents a synthesis of the mathematical work done on level crossings of stationary Gaussian processes, with some extensions. The main results [(factorial) moments, representation into the Wiener chaos, asymptotic results, rate of convergence, local time and number of crossings] are described, as well as the different approaches [normal comparison method, Rice method, Stein-Chen method, a general $m$-dependent method] used to obtain them; these methods are also very useful in the general context of Gaussian fields. Finally some extensions [time occupation functionals, number of maxima in an interval, process indexed by a bidimensional set] are proposed, illustrating the generality of the methods. A large inventory of papers and books on the subject ends the survey.
Introduction
This review presents the mathematical aspect of the work done on level crossings and upcrossings of a level, and insists on the different approaches used to tackle a given problem. We speak only briefly about examples or applications, which of course were the reason for such theoretical work and which have been greatly developing in the extreme value field over the past years. This review of mathematical results and methods can also be very useful in mathematical statistics applications, reliability theory and other areas of application of stochastic processes. When possible, we choose to privilege conditions on the behavior of spectral or covariance functions rather than general mixing conditions, since they are easier to work with. We mainly consider continuous-parameter processes, since they appear in most of the mathematical models that describe physical phenomena. Nevertheless, one approach to level crossings, or to upcrossings of a level (which are analogous to the exceedances used in the discrete case), is through discretization, using discrete-parameter results. Let $X = (X_t,\, t \in \mathbb{R}^d)$ be a real stochastic process. Our main concern is the measure of the random set of level $x$ defined by $C^X_x := \{t : X_t = x\}$, keeping the same notation as Wschebor (see [163]). The unidimensional case will be presented in detail with results and methods (especially the ones which can, or could, be easily adapted to a higher dimension), whereas only a partial view of the multidimensional case will be given. As a preliminary, let us mention two important results (see [31] for the first one and for instance [90] for the second one) which reinforce our interest in the study of $C^X_x$.
Theorem 1.1 (Bulinskaya, 1961) Let $X = (X_s,\, 0 \le s \le t)$ be an a.s. continuously differentiable stochastic process with one-dimensional density $p_s(u)$ bounded in $u$ for all $s$. Then for any level $x$, the probability that there exists a point $s$ at which the events $(X(s) = x)$ and $(\dot X(s) = 0)$ occur simultaneously is equal to 0. In particular, the probability that $X$ becomes tangent to the level $x$ is equal to 0; in other words, the probability of contingence of the level $x$ by the process $X$ is equal to 0.
In the case of a Gaussian process, as a consequence of the separability property and the Tsyrelson theorem, we have
Theorem 1.2 For any Gaussian process $X$ on an arbitrary parameter set $T$ and for any level $x$, there is no contingence, with probability one.
Crossings of Gaussian processes
Studies on level crossings by stationary Gaussian processes began about sixty years ago. Different approaches have been proposed.
Here is a survey of the literature on the number of crossings of a given level, or of a differentiable curve, in a fixed time interval by a continuous-spectrum Gaussian process. Besides the well-known books about extremes, let us quote a short survey by Slud in 1994 (see [148] or [149]), a more general survey about extremes including a short section about level crossings (see [86]), one by Rootzén in 1995 (see [132]), and another one by Piterbarg in his 1996 book (see [122], in particular for some methods described in more detail than here). Our purpose here is to focus only on crossing counts in order to be more explicit about the subject, not only recalling the main results, but also giving the main ideas about the different methods used to establish them. To make the methods easier to understand, we consider mainly the crossing counts of a given level. Note that the problem of curve ($\psi$) crossings by a stationary process $X$ may also be regarded as a zero-crossing problem for the non-stationary process $X^* := X - \psi$ (but stationary in the sense of the covariance), as pointed out by Cramér and Leadbetter (see [38]). Let $X = (X_s,\, s \ge 0)$ be a real stationary Gaussian process with variance one and a.s. continuous sample functions, and let $\psi$ be a continuously differentiable function. We denote by $N_I(x)$ (resp. $N_I(\psi)$) the number of crossings of a given level $x$ (resp. of the curve $\psi(\cdot)$) by $X$ on the interval $I$, and by $N_I^+(x)$ the number of upcrossings of $x$ by $X$ (recall that $X$ is said to have an upcrossing of $x$ at $s_0 > 0$ if for some $\varepsilon > 0$, $X_s \le x$ in $(s_0-\varepsilon, s_0)$ and $X_s \ge x$ in $(s_0, s_0+\varepsilon)$). Hereafter these will simply be denoted $N_t(x)$, $N_t(\psi)$ and $N_t^+(x)$ respectively, when $I = [0, t]$, $t > 0$.
Moments and factorial moments
Distributional results about level or curve crossings by a Gaussian process are often obtained in terms of factorial moments of the number of crossings; that is why many authors have worked not only on moments but also on factorial moments, to find expressions or conditions (in terms of the covariance function of the process) for their finiteness, since a number of applications require knowing whether they are finite. Note also that the conditions governing finite crossing moments are local ones, since Hölder's inequality implies that $\mathbb{E}[N_{2t}(x)^k] \le 2^k\, \mathbb{E}[N_t(x)^k]$, and therefore when $\mathbb{E}[N_t(x)^k]$ is finite for some $t$, it is finite for all $t$.
Introduction
• Kac's method and formula (1943). We can start with Kac (see [73]), who studied the number $N$ of zeros of a Gaussian random polynomial on some bounded interval of $\mathbb{R}$. To compute $\mathbb{E}[N]$, he proposed a method which used, so to speak, in a formal way, the approximation of the "Dirac function" $\delta_0(x)$ by the function $\frac{1}{2\varepsilon}\,\mathbb{1}_{[-\varepsilon,\varepsilon)}(x)$; he gave the first heuristic expression of $N_t(0)$ as a function of the process $X$. From now on, we suppose w.l.o.g. that $X$ is centered, with correlation function $r(t) = \mathbb{E}[X_0 X_t]$, also given in terms of the spectral function $F$ as $r(t) = \int_0^\infty \cos(\lambda t)\, dF(\lambda)$. We denote by $\lambda_2$ the second moment (when it exists) of the spectral function, i.e. $\lambda_2 = \int_0^\infty \lambda^2\, dF(\lambda)$. One of the best known first results is the one of Rice (see [129]), who proved by intuitive methods, related to those used later by Cramér and Leadbetter (such as the discretization method described below), that
Theorem 2.2 (Rice formula, 1945) $\mathbb{E}[N_t(x)] = \dfrac{t}{\pi}\,\sqrt{-r''(0)}\; e^{-x^2/2}$. (2.1)
It shows that the mean number of crossings is largest for zero crossings and decreases as $e^{-x^2/2}$ when the level increases.
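Since the upcrossing version of the Rice formula is used repeatedly in what follows, it is worth recording its standard form for a standardized stationary Gaussian process (a classical statement, consistent with Theorem 2.2, and not a new result): in the mean, crossings split evenly into upcrossings and downcrossings, so that
\[
\mathbb{E}\big[N_t^{+}(x)\big] \;=\; \tfrac{1}{2}\,\mathbb{E}\big[N_t(x)\big] \;=\; \frac{t}{2\pi}\,\sqrt{-r''(0)}\; e^{-x^2/2}, \qquad \lambda_2 = -r''(0).
\]
In particular, for the zero level the mean number of upcrossings per unit time is $\sqrt{-r''(0)}/(2\pi)$, the classical Rice frequency.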
• The discretization method. From the intuitive method developed by Rice at the beginning of the 40s (see [129]), Ivanov in 1960 (see [70]), Bulinskaya in 1961 (see [31]), Itô in 1964 (see [71]) and Ylvisaker in 1965 (see [165]) proposed rigorous proofs for zero countings. There followed the general formulation due to Leadbetter in 1966 (see [38] p.195 or [85] p. 148), known as the method of discretization, which also applies to non-normal processes. This method is based on the approximation of the continuous process X = (X t , t ≥ 0) by the sequence (X(jq), j = 1, 2, · · · ) where q satisfies some conditions related to the level x of upcrossings: q = q(x) → 0 and xq ↓ 0 as x = x(t) → ∞ (or equivalently as t → ∞). Therefore this last result, together with (2.5), provide the Rice formula for the number of upcrossings of any fixed level x per unit time by a standardized stationary normal process, namely • Cramér andLeadbetter (1965,1967), Ylvisaker (1966). In the 60s, following the work of Cramér, generalizations to curve crossings and higher order moments for N t (.) were considered in a series of papers by Cramér and Leadbetter (see [38]) and Ylvisaker (see [166]). As regards curve crossings, the generalized Rice formula was obtained: Lemma 2.2 (Generalized Rice formula; Ylvisaker, 1966;Cramér and Leadbetter, 1967). Let X = (X t , t ≥ 0) be a mean zero, variance one, stationary Gaussian process, with twice differentiable covariance function r. Let ψ be a continuous differentiable real function. Then where ϕ and Φ are the standard normal density and distribution function respectively. Again the authors used the discretization method and approximated the continuous time number N t (ψ) of ψ-crossings by the discrete-time numbers (N ψ (t, q n )) of crossings of continuous polygonal curves agreeing with ψ at points jq n (j = 0, 1, · · · , 2 n ), with time steps q n = t/2 n : 1I ((Xjq n −ψ(jqn))(X (j+1)qn −ψ((j+1)qn))<0) (2.8) to obtain that N ψ (t, q n ) ↑ N t (ψ) with probability one, as n → ∞. ✷ 2.1.2. Moments and factorial moments of order 2 • Cramér and Leadbetter proposed a sufficient condition (known nowdays as the Geman condition) on the correlation function of X (stationary case) in order to have the random variable N t (x) belonging to L 2 (Ω), namely Theorem 2.4 (Cramér et al., 1967) If • Explicit (factorial) moment formulas for the number of crossings have been obtained by Cramér and Leadbetter (see [38]), Belyaev (see [19]) and Ylvisaker (see [165]), based on careful computations involving joint densities of values and derivatives of the underlying Gaussian process. In particular, the second moment of N t (x) is given by Concerning the second factorial moment, Cramér & Leadbetter provided an explicit formula for the number of zeros of the process X (see [38], p.209), from which the following formula for the second factorial moment of the number of crossings of a continuous differentiable real function ψ by X can be deduced: ,t2 (ψ t1 ,ẋ 1 , ψ t2 ,ẋ 2 )dẋ 1 dẋ 2 dt 1 dt 2 (2.10) where the density φ t1,t2 is supposed non-singular for all t 1 = t 2 . The formula holds whether the second factorial moment is finite or not. • Ershov (1967). This author proved (see [51]) that whenever a Gaussian stationary process X = (X t , t ∈ IR) with covariance function r is with mixing (i.e. 
that |r(|i − j|)| ≤ f (|i − j|) with lim k→∞ f (k) = 0), then the number of its x-upcrossings on all IR cannot hold a finite second moment: Geman proved in 1972 (see [54]) that the condition (2.9) was not only sufficient but also necessary, by showing that if r ′′ (t)−r ′′ (0) Of course, this result holds for any stationary Gaussian process when considering the number of crossings of the mean of the process. • Wschebor (1985). This author provided in [163] (under the Geman condition) an explicit expression in the case of two different levels x and y, with x = y, namely: Note that this expression differs from the one of Cramér et al. when x → y, which means that the function IE[N t (x)N t (y)] is not continuous on the diagonal. • Piterbarg (1982), Kratz and Rootzén (1997). Let us assume that the correlation function r of X satisfies 12) that r(s) and its first derivative decay polynomially, for some α > 2 and constants c, C, C 0 , and that the range of t, x is such that for some fixed K 0 , ε > 0; by using Piterbarg's notations, let Piterbarg (see [121]) proposed some bounds of the second factorial moment of the number of upcrossings; Kratz and Rootzén (see [84]) gave a small variation of his result, but with more precise bounds, namely Lemma 2.3 (Kratz and Rootzén, 1997) Suppose that the previous hypotheses are satisfied (with γ = 2). Then, for t, x ≥ 1, • Kratz and León (2006). Recently, Kratz and León (see [83]) have succeded in generalizing theorem 2.6 to any given level x (not only the mean of the process) and to some differentiable curve ψ. Note that the problem of finding a simple necessary and sufficient condition for the number of crossings of any level has already been broached in some very interesting papers that proposed sufficient conditions as the ones of Cuzick (see [40], [43], [44] 2) Suppose that the continuous differentiable real function ψ satisfies for some δ > 0, δ 0 γ(s) s ds < ∞, where γ(τ ) is the modulus of continuity ofψ. Then . This smooth condition on ψ is satisfied by a large class of functions which includes in particular functions whose derivatives are Hölder. The method used to prove that the Geman condition keeps being the sufficient and necessary condition to have a finite second moment is quite simple. It relies mainly on the study of some functions of r and its derivatives at the neighborhood of 0, and the chaos expansion of the second moment, a notion which will be explained later. Factorial moments and moments of higher order • Concerning moments of order higher than 2, Cramér and Leadbetter (see [38]) got in the stationary case, under very mild conditions, results that Belyaev (see [19]) derived in the non-stationary case under slightly more restrictive conditions, weakened by Ylvisaker (see [166]) in the stationary case but which may also be adapted to cover nonstationary cases. Let us give for instance the kth factorial moment of the number of zero crossings by a stationary Gaussian process: where p(x 1 , · · · , x k ) is the joint density of the r.v. (X t1 , · · · , X t k ). Concurrently to this study, Belyaev proposed in [19] a sufficient condition for the finiteness of the kth factorial moment M k t (0) for the number N t (0) of zero crossings on the interval [0, t] in terms of the covariance matrix Σ k of (X t1 , · · · , X t k ) and of Theorem 2.8 (Belyaev, 1967) • Cuzick (1975Cuzick ( -1978. 
Cuzick proved in 1975 (see [40]) that Belyaev's condition (2.18) for the finiteness of the kth factorial moments for the number of zero crossings, was not only sufficient but also necessary: The proof that condition (2.18) is equivalent to condition (2.19) is immediate with the change of variables ∆ i := t i+1 − t i , 1 ≤ i ≤ k − 1, after noticing the symmetry of the integrand in (2.18). Now the necessity of (2.19) comes from the fact that the lemma given below im- Then Cuzick tried to derive from his result (2.19) simpler sufficient conditions to have the finiteness of the kth (factorial) moments for the number of crossings. In particular, in his 75s paper (see [40]), he proved that M k t (0) < ∞ for all k, for a covariance function r having a behavior near 0 such that Later (see [43] and [44]), for X with path continuous nth derivative X (n) and spectral distribution function F (λ), he proposed a series of sufficient conditions involving F and σ 2 Let us give an example in terms of the spectral density of X among the sufficient conditions he proposed. Theorem 2.10 (Cuzick, 1978) If X has a spectral density given by f (λ) := 1/ 1 + λ 3 | log λ| α , then for k ≥ 2 and α > 3k/2 − 1, M k t (0) < ∞. Those results are not restricted to zero crossings and would also apply to a large family of curves (see [39] and [43]). However, necessary conditions are more difficult for higher monents, the main difficulty lying in obtaining sharp lower bounds for the σ 2 i defined in (2.19). • Marcus (1977): generalized Rice functions. Marcus (see [101]) generalized results of Cramér and Leadbetter and Ylvisaker by considering not only Gaussian processes but also by computing quantities such as IE[N j1 t (x 1 ) · · · N j k t (x k )] for levels x 1 , · · · , x k and integers j 1 , · · · , j k , called generalized Rice functions. For the proofs, the author returns to the approach used by Kac (see [73]), Ivanov (see [70]) and Itô (see [71]) to obtain the mean number of crossings at a fixed level, which consists in first finding a function that counts the level crossings of a real valued function, then in substituting X for the function and finally, in considering the expectation. • Nualart and Wschebor (1991). We know that the general Rice formula giving the factorial moments of the number of level crossings of a stochastic process satisfying some conditions can hold whether finite or not. In the search of conditions for the finiteness of moments of the number of crossings, Nualart and Wschebor (see [113]) proposed some sufficient conditions in the case of a general stochastic process, that reduce in the Gaussian case to: Theorem 2.11 (Nualart and Wschebor, 1991) If X = (X t , t ∈ I ⊂ IR) is a Gaussian process having C ∞ paths and such that var(X t ) ≥ a > 0, t ∈ I, then M k t (u) < ∞ for every level u ∈ IR and every k ∈ IN * . • Azaïs and Wschebor (2001). The computation of the factorial moments of the crossings of a process is still a subject of interest. In particular, when expressing these factorial moments by means of Rice integral formulas (of the type of (2.16) in the case of the 2nd factorial moment of the zero crossings, for instance), there arises the problem of describing the behavior of the integrands (appearing in these formulas) near the diagonal; it is still an open (and difficult) problem, even though partial answers have been provided, as for instance by Azaïs and Wschebor (see [10], [12], and references therein). 
Let us give an example of the type of results they obtained, which helps to improve the efficiency of the numerical computation of the factorial moments, in spite of the restrictive conditions. Proposition 2.1 (Azaïs and Wschebor, 2001) Suppose that X is a centered Gaussian process with C 2k−1 paths (k integer ≥ 2), and that for each pairwise distinct values of the parameter t 1 , t 2 , · · · , t k ∈ I the joint distribution of (X t h , where J k (.) is a continuous non-zero function. Two reference methods in the Gaussian extreme value theory • The normal comparison method. The main tool in the Gaussian extreme value theory, and maybe one of the basic important tools of the probability theory, has certainly been the so-called normal comparison technique. This method, used in the Gaussian case, bounds the difference between two standardized normal distribution functions by a convenient function of their covariances. This idea seems intuitively reasonable since the finite dimensional distributions of a centered stationary Gaussian process is determined by its covariance function. It was first developed by Plackett in 1954 (see [125]), by Slepian in 1962 (see [146]), then by Berman in 1964 and 1971 (see [22]) and by Cramér in 1967 (see [38]) in the independent or midly dependent cases. An extension of this method to the strongly dependent case was introduced in 1975 by Mittal and Ylvisaker (see [109] and also the 84s paper [108] of Mittal for a review on comparison techniques). Then for any x, IP sup The proof constitutes a basis for proofs of the Berman inequality and a whole line of its generalizations, and is part of what we call the normal comparison method. Let us illustrate it by considering a pair of Gaussian vectors of dimension n, X 1 and X 2 , that we suppose to be independent, with respective distribution functions F i , i = 1, 2, density functions ϕ i , i = 1, 2, and covariance matrices Σ i = ((σ i (j, k)) j,k ), i = 1, 2 such that σ 1 (j, j) = σ 2 (j, j). Then the covariance matrix Σ h := hΣ 2 + (1 − h)Σ 1 is positive definite. Let f h and F h be respectively the n-dimensional normal density and distribution function based on Σ h . Let recall the following equation, discovered by Plackett in 1954 (see [125]), recorded by Slepian in 1962 (see [146]) and proved later in a simpler way than the one of the author, by Berman in 1987 (see [22]): This equation will help to compute the difference between the two normal distribution functions F i , i = 1, 2. Indeed, x being in this case a real vector with coordinates (x i ) 1≤i≤n , we have represents the integral of order n−2: The same results hold in the case of Gaussian separable functions of arbitrary kind. From those last two results, Berman obtained: Then, for any real numbers x 1 , · · · , x n , In particular these results hold when choosing one of the two sequences with iid r.v., hence the maximum does behave like that of the associated independent sequence; it helps to prove, under some conditions, results on the maximum and on the point process of exceedances of an adequate level x n of a stationary normal sequence with correlation function r; for instance we obtain that the point process of exceedances converges to a Poisson process under the weak dependence condition r n log n → 0 (see [85], chap.4) or to a Cox process under the stronger dependence condition r n log n → γ > 0, or even to a normal process if r n log n → ∞ (see [85], chap.6, or [108])). 
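For concreteness, one standard formulation of the normal comparison (Berman-type) inequality discussed above is the following; it is quoted here as the classical bound (the absolute constant varies slightly between references), not as the exact display (2.20) of this text. For two standardized Gaussian vectors $(X^1_j)_{j\le n}$ and $(X^0_j)_{j\le n}$ with respective correlations $r^1_{ij}$ and $r^0_{ij}$, and $\rho_{ij} := \max(|r^1_{ij}|, |r^0_{ij}|)$,
\[
\Big| \mathbb{P}\Big(\bigcap_{j=1}^{n}\{X^1_j \le x_j\}\Big) - \mathbb{P}\Big(\bigcap_{j=1}^{n}\{X^0_j \le x_j\}\Big) \Big|
\;\le\; \frac{1}{2\pi}\sum_{1\le i<j\le n} \big|r^1_{ij}-r^0_{ij}\big|\,\big(1-\rho_{ij}^2\big)^{-1/2}
\exp\Big(-\frac{x_i^2+x_j^2}{2(1+\rho_{ij})}\Big).
\]
The factor $(1-\rho_{ij}^2)^{-1/2}$ is precisely the term removed by the Li and Shao refinement quoted below.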
There is a discussion in Piterbarg (1988 for the Russian version, 1996 for the English one) (see [122]) about two directions in which the Berman inequality can be generalized, on the one hand on arbitrary events, on the other hand for processes and fields in continuous time. Piterbarg points out that it is not possible to carry the Berman inequality (2.20) over to the processes in continuous time as elegantly as it was done for the Slepian inequality (2.12), but provides a solution for Gaussian stationary processes with smooth enough paths (see theorems C3 and C4, pp.10-12 in [122]) (and also for smooth enough stationary Gaussian fields). Finally let us mention the last refinements of the Berman inequality (2.20) given by Li and Shao in 2002 (see [89]) that provide an upper bound in (2.20), cleared of the term (1 − ρ 2 ij ) −1/2 : Theorem 2.15 (Li and Shao, 2002) Suppose that (X 1i , 1 ≤ i ≤ n) are standard normal random variables with covariance matrix Then, for any real numbers x 1 , · · ·, x n , Moreover, for n ≥ 3, for any positive real numbers x 1 , · · · , x n , and when For other precise versions and extensions of this method, we can also refer to e.g. Leadbetter et al. (see [85]), Tong (see [158]), Ledoux and Talagrand (see [87]) and Lifshits (see [90]). • The method of moments, also called the Rice method. This method, introduced by Rice to estimate the distribution of the maximum of a random signal, consists in using the first two moments of the number of crossings to estimate the probability of exceeding some given level by a trajectory of a (Gaussian) process, as shown in the lemma below (see [122] p.27 and chap.3). In particular it relies on the fact that the event ( implies the event that there is at least one upcrossing: (N + t (x) ≥ 1), knowing that the probability of more than one up/down-crossing of the level x becomes smaller as the level becomes larger. This method works only for smooth processes, but can be extended to nonstationary Gaussian processes (see [136] and [137]) and to non-Gaussian processes. Let X = (X s , s ∈ [0, t]) be a.s. continuously differentiable with one-dimensional densities bounded. Then and Estimates previously proposed for the (factorial) moments can then be used at this stage of calculation. Recently, Azaïs and Wschebor (see [13]) adapted this method to express the distribution of the maximum of a one-parameter stochastic process on a fixed interval (in particular in the Gaussian case) by means of a series (called Rice series) whose terms contain the factorial moments of the number of upcrossings, and which converges for some general classes of Gaussian processes, making the Rice method attractive also for numerical purpose. Representations of the number of crossings Let X = (X s , s ≥ 0) be a stationary Gaussian process defined on a probability space (Ω, F , IP), with mean zero, variance one and correlation function r such that −r ′′ (0) = 1. We are first interested in having a representation of the number of x-crossings of X as a sum of multiple Wiener-Itô integrals or in terms of Hermite polynomials, where the nth Hermite polynomial H n can be defined as Let W be the standard Brownian Motion (or Wiener process). Let H(X) denote the space of real square integrable functionals of the process X. Recall that H(X) = ∞ n=0 H n , H n being the Wiener Chaos, i.e. the closed linear is the stochastic integral of h with respect to W and H 0 is the set of real constant functions. 
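For the reader's convenience, the Hermite polynomials used throughout are the probabilists' ones (the convention consistent with the chaos decomposition above); a standard definition, the first few polynomials, and the orthogonality relation are
\[
H_n(x) = (-1)^n e^{x^2/2}\,\frac{d^n}{dx^n}\, e^{-x^2/2},
\qquad H_0 = 1,\; H_1(x) = x,\; H_2(x) = x^2 - 1,\; H_3(x) = x^3 - 3x,
\]
\[
\mathbb{E}\big[H_n(Z)\,H_m(Z)\big] = n!\;\mathbb{1}_{(n=m)}, \qquad Z \sim \mathcal{N}(0,1),
\]
so that the $L^2(\Omega)$ norm of a chaos expansion can be read off directly from its coefficients.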
We can make use as well of the multiple Wiener-Itô integral I n defined as in Major (see [96]), since we have This integral operator I n satisfies the multiplication rule, namely: for f p ∈ L 2 s (IR p , m p ) and g q ∈ L 2 s (IR q , m q ), with , · · · , λ p(n) ), ∀p ∈ S n , m n denoting the product Borel measure on IR n , S n the symetric group of permutations of {1, · · · , n} and λ = (λ 1 , · · · , λ n ), then where f p⊗k g q denotes the average overall permutations of λ-arguments of the function and p ∧ q ≡ min(p, q). We can introduce the Sobolev spaces ID 2,α for α ∈ IR as in Watanabe (see [161]) . • Slud (1991Slud ( , 1994: MWI integral expansion. Multiple Wiener-Itô integrals (MWI) may be a tool to represent and to study non-linear functionals of stationary Gaussian processes, as shown by Kalliampur (see [74], chap.8). Slud first applied the stochastic calculus of MWI integral expansions, in 1991 (see [147]) to express the number of crossings of the mean level by a stationary Gaussian process within a fixed time interval [0, t], the motivation being to obtain probabilistic limit theorems for crossing-counts, then in 1994 (see [148] or [149]) to extend his results to C 1 -curve crossings. Theorem 2.16 (Slud, 1991 and1994) Let X be a mean zero, variance one, stationary Gaussian process with continuous spectral measure, and twice-differentiable correlation function r. where * in the case of a given level x (i.e. ψ(y) = x, ∀y), the mean of N t (x) is given by the Rice formula (2.1) and ds, * and in the case of a C 1 -curve, the mean of N t (ψ) is given by the generalized Rice formula (2.7) and where λ is an n-vector of coordinates λ i . Note that the functional N t (ψ) may thus be expressed as the integral on [0, t] of e is(λ1+···+λn) multiplied by the formal MWI expansion of the form The proof is mainly based on the discrete approximation method of Cramér and Leadbetter (see [38]), already used to obtain the generalized Rice formula (2.7), by introducing the discrete-time number of crossings N ψ (1, 2 −m ) defined in (2.8), which increases to N ψ (1). Since by hypothesis in the theorem var (N ψ (1)) < ∞, so is the limiting variance of N ψ (1, 2 −m ) (via the monotone convergence theorem); then, because of the orthogonal decomposition of L 2 (Ω), the MWI integrands for N ψ (1) are the L 2 (and a.e.) limits of the corresponding integrands for N ψ (1, 2 −m ). To provide the MWI integrands for N ψ (1, 2 −m ), some work on indicators of the type 1I (X0>c) or 1I (X0>a) 1I (X h >b) , is needed, after having noted that The main new technical tools used for the study of indicators are a generalization of the Hermite polynomial expansion for the bivariate-normal density in (2.28) and the identity (2.29), both enunciated below because of their own interest. Lemma 2.5 (Slud, 1994) ∀x, y ∈ IR, k, m, n ∈ IN and |t| < 1, . (2.29) The case of a constant level is simply deduced from the general case. Note that Slud used a different method in 1991, when considering directly a constant level, based mainly on properties of generalized hypergeometric functions; , obtained the MWI expansion for the indicator 1I (Xs−x>0) by using first the Hermite polynomial expansion of this indicator, then by studying the asymptotical behavior of hypergeometrical functions. 
Then he used the Diagram theorem to express the products of expansions as a sum of Wiener-Itô integrals (see [49] or [96], p.42; another version will also be given in terms of Hermite polynomials by Arcones in 1994 (see [4]), and recalled below) and finally he used Fourier transforms for computations. • Kratz and León (1997): Hermite polynomial expansion. At the end of the 1990s, Kratz and León (see [80]) proposed a new and direct method to obtain, under some assumptions on the spectral moments of the process, the Hermite polynomial expansion of crossings of any level by a stationary Gaussian process: Proposition 2.2 (Kratz and León, 1997) Let X be a mean zero stationary Gaussian process, with variance one, satisfying Then the following expansion holds in L 2 (Ω) This approach is based on an analytical formula involving the "Dirac function", which formally defines the number of crossings N t (x) as and which can be made precise when approximating the Dirac function; it makes then explicit formulas for MWI expansions much easier to obtain than in Slud's papers, mainly because expanding |Ẋ s | in Hermite polynomials inẊ s rather than in X s quite simplify the calculations (note that at s fixed, X s andẊ s are independent). Note that the condition (2.30), stronger than the Geman condition (2.9), can from now on be replaced by the Geman condition only because of Kratz and León's recent result (theorem 2.7), which makes the chosen method even more attractive . Moreover this approach is natural in the sense that formally the Dirac function ; then (2.32) has the corresponding development given by lemma 2.6 below, made precise by approximating δ x by Φ ′ σ,x := ϕ σ,x . Lemma 2.6 (Kratz and León, 1997) Let X satisfy the conditions of proposition 2.2. Let f ∈ L 2 (φ(x)dx) and let (c k , k ≥ 0) be its Hermite coefficients. One has the following expansion where (a k , k ≥ 0) are the Hermite coefficients of the function |.|, defined by is a Cauchy sequence in L 2 (Ω) and we deduce from the Hermite expansions of |x| and f (x) that ζ K,L converges to t 0 f (X s )|Ẋ s |ds in L 2 (Ω). Then to conclude the proof of the lemma, just notice that the second expansion is a consequence of the orthogonality. ✷ Let us be more explicit about the proof of proposition 2.2, by giving the main steps. We will apply the previous result (lemma 2.6) to the function and prove by Fatou's lemma and by Jensen inequality that E[(N t (x)−N σ t (x)) 2 ] → 0 and via the chaos decomposition that E[(N σ t (x) −N t (x)) 2 ], as σ → 0, to conclude to proposition 2.2. ✷ Now by using the method of regularization of Wschebor, the result of proposition 2.2 can be extended to a larger class of processes: Proposition 2.3 (Kratz and León, 1997). Let X be a mean zero stationary Gaussian process with variance one and satisfying the Geman condition (2.9). In addition we will assume that L 1 (h) being an even function belonging to L 1 ([0, δ], dx). Then the following expansion holds Indeed, Wschebor's regularization method allows us to drop the strong condition involving the fourth derivative of the covariance function of the process to replace it by a new smoother condition, which could be named "uniform Geman condition", constituted by the conditions (2.9) and (2.33) together, in the following way. 
By introducing the regularized process X ε defined by we can check that proposition 2.2 applies to the number N ε t (x) of x-crossings associated to X ε , then we can prove that, under the uniform Geman condition, N ε t (x) → N t (x) in L 2 as ε → 0, to conclude on proposition 2.3. Note that for this last convergence, an important step is the use of the diagram formula that we already mentioned (Major, 1981) to show that the partial finite developments of converge to the same developments in the right hand side of (2.34), from which we deduce that for each fixed q To be more explicit, let us give a version of the Major Diagram formula in terms of Hermite polynomials, which provides the expectations of product of Hermite polynomials over a Gaussian vector, a version that can be found in Breuer and Major (see [29]) or in Arcones (see [4]). We need to introduce some notations. Let G := {(j, l) : 1 ≤ j ≤ p, 1 ≤ l ≤ l j } be the diagram of order (l 1 , · · · , l p ), V (G) the set of vertices (j, l) of the diagram G, Γ{l 1 , · · · , l p } the set of diagrams of order (l 1 , · · · , l p ), L j = {(j, l) : 1 ≤ l ≤ l j } the jth level of G, {((i, l), (j, m)) : Theorem 2.17 : Diagram formula Let (X 1 , · · · , X p ) be a Gaussian vector mean zero, variance 1 and with IE[ • Let us go back to the heuristic formula (2.32) for the number of crossings. Suppose that X = (X s , s ≥ 0) is a mean zero stationary Gaussian process with variance one, that the function of covariance r has two derivatives and satisfies (2.30). Let 0 = α 0 < α 1 < .... < α k−1 < α k = t be the points where the change of sign of the derivative of X occurs. They are in finite number because the process has a finite fourth spectral moment (condition (2.30)). There is an x-crossing of X between α i and α i+1 if is the Heaviside function whose generalized derivative is the "Dirac function" δ x (u) = ∞ if u = x and 0 if not. Therefore we can formally write which gives the formula (2.32). Note also that (2.32) allows to retrieve formally some classical results, as: -the Rice formula: if g is a positive function on IR, if G is a primitive of g, then (see [32]) -the Kac formula: sinceδ 0 (t) = 1, by formally applying the Fourier inversion formula, we cos(ξX s ) |Ẋ s | dξds. • Kratz (2000) If we choose for a Gaussian process the Brownian motion B, then its number of x-crossings is such that either N t (x) = ∞ or N t (x) = 0 a.s., whereas its local time L t (x) can be defined formally by and rigourously by a Gaussian approximation (see the table below) or by a uni- s. (see for instance [55]). More generally, let us define the local time L t (x) of a Gaussian process X as the density of the occupation measure of X, as for instance in Berman (see [22]). Its construction by limiting processes is based on the sample path properties of X, as first introduced by Levy in the case of the Brownian motion (see for instance [55]). We may then notice that the notions of number of crossings and local time present some "analogies" in their formulas (heuristic and non-heuristic), as it is shown in the following table: Formally Mathematically the Heaviside function Y x and the "Dirac function" δ x are approximated respectively by the distribution function and the density of a Gaussian r.v. mean x and variance σ 2 , with σ → 0. 
Banach-Kac (1925-43) formula (see [15] and [73]) Because of this type of correspondance with the local time, we became interested in 'validating mathematically' the heuristic formula (2.32) for crossing-counts by looking for the appropiate Sobolev space, as done for Brownian local time by Nualart and Vives (see [111], [112]), and Imkeller et al. (see [69])). Those authors proved that L x t ∈ ID 2,α , 0 < α < 1/2 ID p,α , ∀p ≥ 2, 0 < α < 1/2 , where ID 2,α denotes the Sobolev space over canonical Wiener space obtained via completion of the set of polynomials F with respect to the norms ||F || 2,α = ||( In what concerns the number of crossings N t (x), whereas the distribution δ x (X s ) and the random variable |Ẋ s | belong to ID 2,α for any α < −1/2 and for any α ∈ IR respectively, the integral in the definition of N t (x) would appear to have a smoothing effect, showing N t (x) as a random variable which would belong to ID 2,α for any α < 1/4 (see [78] and [79]). Approximation of the local time by the number of crossings Several authors took an interest in the problem of approximating the local time of an irregular process X by the number of crossings of a regularization of this process. One classical regularization is the one obtained by convolution, already presented, defined by ε and ψ some smooth function. (2.35) • Wschebor (1984,1992). Wschebor considered this problem for a specific Gaussian process, the Brownian motion (in the multiparametric case, but we will only present the one parameter case). Let W = (W t , t ≥ 0) be a Brownian motion and let W ε be the convolution approximation of W defined by W ε := W * ψ ε , where ψ ε is defined in (2.35) with ψ, a non-negative C ∞ function with compact support. Let N x ε ([a, b]) be the number of crossings of level x by the regularized process W ε on the interval [a, b]. Wschebor (see [162]) showed that, for x ∈ IR, Theorem 2.18 (Wschebor, 1984) π 2 Later (see [164]), he proved the following: Theorem 2.19 (Wschebor, 1992) For any continuous and bounded function f , for a.e. fixed w, 1]) and L(x) := L(x, [0, 1]). • Azaïs, Florens-Zmirou (1987, 1990) Azaïs and Florens-Zmirou, in their 87 paper (see [8]), extended Wschebor's result (2.36) to a large class of stationary Gaussian processes, when considering the L 2 convergence and the zero crossings. Under some technical conditions on the Gaussian stationary process X and its regularization X ε := X * ψ ε (requiring a bit more than the non-derivability of X and giving some bounds on the second and fourth spectral moments of X ε ), on its correlation function r (namely r twice differentiable outside a neighborhood of zero, with bounded variation at zero, r ′ and r ′′ bounded at infinity) and on the convolution kernel ψ (i.e. ψ(u) and ψ ′ (u) bounded by a constant time |u| −2 ), they proved that where λ 2,ε denotes the second spectral moment of X ε . Note that Azaïs in 1990 (see [6]) considered more general stochastic processes and provided sufficient conditions for L 2 -convergence of the number of crossings of some smooth approximating process X ε of X (which converges in some sense to X) to the local time of X, after normalization. Meantime, Azaïs and Wschebor considered in 1996 (see [10]) the a.s. conver- for any continuous function f and for X belonging to a larger class of Gaussian processes than the one defined by Berzin, León and Ortega. 
Note that we did not mention all the papers on this subject (or closely related to it); indeed much work on the approximation of local times and occupation measures has been done also on specific Gaussian processes such as the fractional Brownian motion (we can quote for instance the work by Révész et al. in the 80s on invariance principles for random walks and the approximation of local times and occupation measures, as well as the work by Azaïs and Wschebor (see [11]) on the first order approximation for continuous local martingales). The law of small numbers and rate of convergence • Due to the extension of the Poisson limit theorem from independent to dependent Bernoulli random variables (see [36]), there has been the same extension in extreme value theory, in particular for the study of the number of (up-)crossings. As the level x = x(t) increases with the length t of the time interval, the upcrossings tend to become widely separated in time provided that there is a finite expected number in each interval. Under some suitable mixing property satisfied by the process X to ensure that the occurrences of upcrossings in widely separated intervals can be considered as asymptotically independent events, we get a limiting Poisson distribution for the number of x-upcrossings N + x by using the Poisson theorem for dependent random variables (see [160], [38], chap.12, [126], [116], [117], [22], and, for a survey and further minor improvement, [85], chap.8 and 9). We will give here a version of such a result, proposed in Leadbetter et al. (see [85], theorem 9.1.2 p.174). Under the condition that µ(x) := IE[N + x ((0, 1))] < ∞, then N + x (I) < ∞ a.s. for bounded I, and the upcrossings form a stationary point process N + x with intensity parameter µ = µ(x). The point process of upcrossings has properties analogous to those of exceedances in discrete parameter cases, namely: Suppose that x and t tend to ∞ in a coordinated way such that Define a time-normalized point process N + * of upcrossings having points at s/t when X has an upcrossing of x at s. Then N + * converges to a Poisson process with intensity τ as t → ∞. Note that the asymptotic Poisson character of upcrossings also applies to nondifferentiable normal processes with covariance functions satisfying r(τ ) = 1−C|τ | α +o(|τ | α ), as τ → 0 (with 0 < α < 2, and C positive constant), (2.40) if we consider ε-upcrossings, introduced by Pickands (see [116]) and defined below, instead of ordinary upcrossings (for which under (2.40) their mean number of any level per unit time is infinite). For a given ε > 0, a process ξ is said to have an ε-upcrossing of the level x at t if ξ(s) ≤ x, ∀s ∈ (t − ε, t) and, ∀η > 0, ∃s ∈ (t, t + η), ξ(s) > x; so an ε-upcrossing is always an upcrossing while the reverse is not true. The expectation of the number of ε-upcrossings of x by ξ satisfying (2.40) can be evaluated, and it can be proved that asymptotically this mean number is independent of the choice of ε for a suitably increasing level x, which leads, under the Berman condition and (2.40), to the Poisson result of Lindgren et al. (1975) for the time-normalized point process of ε-upcrossings (see [116] and [85], chap.12 for references and more detail). • Remark: Another notion related to the one of upcrossings of level x by the process X is the time spent over x, called also sojourn of X above x, defined in the table of section 2.2.1 by S x (t) = t 0 1I (Xs≥x) ds. An upcrossing of the level x marks the beginning of a sojourn above x. 
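As a simple worked consequence of the sojourn definition recalled above, $S_x(t) = \int_0^t \mathbb{1}_{(X_s \ge x)}\, ds$, Fubini's theorem and stationarity give, for a standardized stationary Gaussian process,
\[
\mathbb{E}[S_x(t)] \;=\; \int_0^t \mathbb{P}(X_s \ge x)\, ds \;=\; t\,\big(1 - \Phi(x)\big),
\]
where $\Phi$ is the standard normal distribution function; dividing by the Rice formula for $\mathbb{E}[N_t^+(x)]$ yields the classical heuristic $2\pi\,(1-\Phi(x))\, e^{x^2/2}/\sqrt{-r''(0)}$ for the mean length of an individual sojourn above $x$.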
If there is a finite number of expected upcrossings in each interval, then the number of sojourns above x is the same as the number of upcrossings. Under some mixing conditions (recalled in the result of Bermangiven below) and the assumption of a finite expected number of upcrossings in each interval, Volkonskii & Rozanov (see [160]), then Cramér & Leadbetter (see [38]) showed, under weaker conditions, by using the reasoning used in the proof of the Poisson limit for the distribution of upcrossings, that the sojourn above X has a compound Poisson limit distribution (as the sum of a Poisson distributed random number of nearly independent r.v. which are the durations of the sojourns). As early as the 70s, Berman (see [22] for the review of these topics) proposed an alternative to discussing upcrossings; he introduced a method based on Hermite polynomial expansions to study the asymptotic form of the sojourns of X above a level x, on the one hand for x → ∞ with fixed t, on the other hand for x, t → ∞ in a coordinated way. He considered a larger class of processes (with sample functions not necessarily differentiable), allowing a possible infinite expected number of upcrossings in each finite interval, to prove the compound Poisson limit theorem. More specifically, by using arguments on moments, Berman (see [22], chap.7) proved that: if the level function x = x(t) satisfies the asymptotic condition x(t) ∼ 2 log t as t → ∞ and for a covariance function r such that 1 − r is a regularly varying function of index α (0 < α ≤ 2) when the time tends to 0, then the limiting distribution of v(t)S x (t) is a compound Poisson distribution, where v is an increasing positive function determined by the asymptotic form of 1 − r(t) for t → 0 and the compounding distribution is uniquely defined in terms of the index of regular variation of 1 − r(t) for t → 0. • For practical use of the asymptotic theory, it is important to know how faithful and accurate these Poisson approximations are, and in view of applications, we must study the rate of convergence carefully. It took some years before getting information about those rates. Finally, Pickersgill and Piterbarg (see [120]) established rates of order t −ν for point probabilities in theorem 2.21, but without giving any information on the size of ν. Kratz and Rootzén (see [84]) proposed, under the condition that r(.) and r ′ (.) decay at a specified polynomial rate (quite weak assumption, even if more restrictive than r(s) log s → s→∞ 0), bounds for moments of the number of upcrossings and also for the rate of convergence in theorem 2.21 roughly of the order t −δ , with where ρ(.) has been defined by Piterbarg (see [121]) by Their approach proceeds on the one hand by discretization and blocking to return to discrete parameter cases, and on the other hand by combining the normal comparison method and the Stein-Chen method (see [152] and [36]) as developed by Barbour, Holst and Janson (see [61], [16] and also [52]), this last method imposing to measure the rate of convergence with the total variation distance, defined between integer valued r.v.'s. X and Y by More precisely, the authors achieved the following result on what concerns the rate of convergence, throughout assuming for convenience that x ≥ 1 and t ≥ 1: Theorem 2.22 (Kratz and Rootzén, 1997) Suppose {X(s); s ≥ 0} is a continuous stationary normal process which satisfies (2.11), (2.12) with γ = 2, (2.13) and (2.14) and that r(s) ≥ 0 for 0 ≤ s ≤ t. 
Then there are constants K and K′, which depend on r(·) but not on x or t, such that the total variation distance between $N_x(t)$ and the Poisson distribution with mean tµ is bounded as in (2.43), and, for t ≥ t_0 > 1, as in (2.44). The constants K in (2.43) and K′ in (2.44) are specified in the authors' paper (see [84]).

Let us briefly present the method used to prove such results. The discretization consists in replacing the continuous process {X(s); s ≥ 0} by a sampled version {X(jq); j = 0, 1, ...}, with q roughly chosen equal to $s^{-1/2}$, which makes extremes of the continuous time process sufficiently well approximated by extremes of its sampled version. Nevertheless it raises a new problem: we may count too many exceedances. So a new parameter θ is introduced to divide the interval (0, t] into blocks $((k-1)\theta, k\theta]$, k = 1, ..., t/θ, with θ roughly equal to $t^{1/2}$ (this is what the blocking method refers to). This choice of θ makes the blocks long enough to ensure approximate independence of extremes over disjoint blocks. It leads to considering W, the number of blocks with at least one exceedance, i.e. $W := \sum_{k=1}^{t/\theta} I_k$, where $I_k$ denotes the indicator that the block $((k-1)\theta, k\theta]$ contains at least one exceedance. Then, by the triangle inequality for the total variation distance d, we have
$$d(N_x(t), \mathcal{P}(t\mu)) \;\le\; d(N_x(t), W) + d(W, \mathcal{P}(\lambda)) + d(\mathcal{P}(\lambda), \mathcal{P}(t\mu)), \qquad (2.45)$$
and we estimate the three terms on the right-hand side (RHS) of (2.45) separately. To estimate $d(N_x(t), W)$, we introduce $N_x^{(q)}(\theta)$, the number of exceedances of x by the sampled process {X(jq); jq ∈ (0, t]} on an interval of length θ; the essential part concerns the evaluation of the difference between the expected number of exceedances of the sampled process on such an interval and the probability of at least one exceedance on the same interval, i.e. $\mathbb{E}[N_x^{(q)}(\theta)] - \mathbb{E}[I_1]$. To evaluate it, we use lemma 2.3. The last term of the RHS is bounded in the same way as the first one, using $d(\mathcal{P}(\lambda), \mathcal{P}(t\mu)) \le |\lambda - t\mu|$ (see e.g. [61], p.12), where $\lambda - t\mu = \frac{t}{\theta}(\mathbb{E}[I_1] - \theta\mu)$. To study the middle term of the RHS, we use the Stein-Chen method described below. It requires evaluating double sums of covariances of indicators grouped into blocks, which, because of stationarity, reduces to the study of $\mathrm{cov}(I_1, I_k)$, k ≥ 2. A version of the normal comparison lemma given in theorem 2.14 (by taking $x = \min(x_i, 1 \le i \le n)$) helps to treat the case k ≥ 3. When k = 2, a new parameter θ* is introduced to ensure quasi-independence between blocks. Then classical normal techniques are used.

• The Stein-Chen method. In 1970, Stein proposed a new method to obtain central limit theorems for dependent r.v. (see [152]). Chen adapted this method in 1975 to the Poisson approximation (see [36]). A pair (X, X′) of exchangeable variables has, by definition, the same law as (X′, X), so that $\mathbb{E}[g(X, X')] = \mathbb{E}[g(X', X)]$ for any function g making these expectations finite; the approximate identities derived from this property are often easier to check than the direct approximation. Therefore, let us illustrate the Stein-Chen method when W is the number of occurrences of a large number of independent events. The advantage of this method is that the dependent case involves only minor transformations. Let $(X_i)_{1 \le i \le n}$ be n independent r.v. with values in {0, 1}, and set (for fixed n) $W := \sum_{i=1}^n X_i$ and $\lambda := \mathbb{E}[W]$. We proceed in four steps:
1. Construction of an exchangeable pair (W, W′). Let $(X_i^*)_{1 \le i \le n}$ be n independent r.v., independent of the $X_i$ and such that $X_i^*$ has the same distribution as $X_i$; W′ is obtained from W by replacing a uniformly chosen coordinate $X_I$ by its independent copy $X_I^*$.
2. Application of the exchangeability property to (W, W′), which yields the relation (2.46).
3. Artificial solution $U_\lambda h$ (a particular case of Stein's general method).
Let $h : \mathbb{N} \to \mathbb{R}$ be bounded and let the function $U_\lambda h$ be defined as the solution f of Stein's equation
$$\lambda f(k+1) - k f(k) = h(k) - \mathbb{E}[h(Z)], \qquad k \in \mathbb{N},$$
with Z a Poisson $\mathcal{P}(\lambda)$ distributed r.v.
4. Taking $f := U_\lambda h$ in (2.46) then provides, for every bounded function h defined on $\mathbb{N}$, a bound on $|\mathbb{E}[h(W)] - \mathbb{E}[h(Z)]|$, hence on the total variation distance between W and $\mathcal{P}(\lambda)$.

Note that Barbour (see [16]) extended the Stein-Chen method by combining it with coupling techniques (see [140] and [141]) to solve the general problem of Poisson approximation for the distribution of a sum of {0, 1}-valued r.v., not necessarily independent. In the case of independent indicators, Deheuvels et al. (see [47] and references therein) combined semigroup theory (see [140]) with coupling techniques to obtain results for the Poisson approximation of sums of independent indicators; this has been shown to give sharper results than Barbour's method (see [77]).
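As a small numerical companion to the independent-indicator illustration above, the following sketch (with hypothetical probabilities $p_i$) computes the exact distribution of $W = \sum_i X_i$ by convolution, its total variation distance to the Poisson law with the same mean, and the classical Barbour-Holst-Janson bound $\frac{1 - e^{-\lambda}}{\lambda}\sum_i p_i^2$ obtained by the Stein-Chen method.

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical success probabilities; any small p_i will do for the illustration.
p = np.array([0.02, 0.05, 0.01, 0.03, 0.04, 0.02, 0.05, 0.01, 0.03, 0.02])
lam = p.sum()
n = p.size

# Exact distribution of W = sum of independent Bernoulli(p_i), by convolution.
pmf_w = np.array([1.0])
for pi in p:
    pmf_w = np.convolve(pmf_w, [1.0 - pi, pi])

# Poisson(lam) probabilities on {0, ..., n}; the remaining Poisson mass sits above n,
# where P(W > n) = 0, so it enters the total variation distance directly.
pmf_z = poisson.pmf(np.arange(n + 1), lam)
tv = 0.5 * (np.abs(pmf_w - pmf_z).sum() + (1.0 - pmf_z.sum()))

# Classical Barbour-Holst-Janson bound for independent indicators.
bound = (1.0 - np.exp(-lam)) / lam * np.sum(p ** 2)

print(f"lambda = {lam:.3f}")
print(f"exact d_TV(W, Poisson(lambda)) = {tv:.5f}")
print(f"Stein-Chen bound               = {bound:.5f}")
```

Even with a handful of indicators the bound is of the right order of magnitude, which is the practical content of the rate-of-convergence results discussed above.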
In the 70s, some work was done to prove central limit theorems (CLT) for the number of zero crossings $N_t(0)$ as t → ∞, for instance by Malevich (see [98]) and by Cuzick (see [41]). Cuzick gave conditions on the covariance function of X which ensure on the one hand a mixing condition at infinity for X and on the other hand a local condition on the sample paths of X; those conditions are weaker than the ones given by Malevich to prove the same result, although the same type of proof is used. The CLT for $N_t(0)$ holds if the following conditions are satisfied: (i) $r, r'' \in L^2$; (ii) the Geman condition (2.9) is verified; together with a third assumption (iii). Remark: a sufficient condition for assumption (iii), directly related to the covariance structure of X, can be given.

The m-dependent method. The proof of this theorem relies on what is called the m-dependent method, based on an idea of Malevich, which consists in approximating the underlying Gaussian process X (and its derivatives when they exist) by an m-dependent process $X_m$ (i.e. such that $\mathbb{E}[X_m(s)X_m(t)] = 0$ if |s − t| > m), in order to apply the CLT for m-dependent processes given in Hoeffding and Robbins (see [60]). Note that the m-dependent process $X_m$ is obtained directly from the stochastic representation of X by clipping out the portion $(-\infty, -m/2) \cup (m/2, \infty)$ of the integration domain and by normalizing the resulting integral.

Applying both the method of comparison and the method of discretization, Piterbarg provided a central limit theorem for the number $N_x^+(t)$ of upcrossings of the level x by a stationary Gaussian process X (see [118] or [122]), namely

Theorem 2.24 (CLT for the number $N_x^+(t)$ of upcrossings, Piterbarg, 1978) Let $X = (X_s, s \in \mathbb{R})$ be a stationary Gaussian process, mean zero, unit variance, with covariance function r satisfying the Geman condition (2.9) and
$$\int_0^\infty s\,\big(|r(s)| + |r'(s)| + |r''(s)|\big)\, ds < \infty. \qquad (2.47)$$
Then the asymptotic variance exists and the central limit theorem holds for $N_x^+(t)$.

For the computation of the variance of $N_x^+(t)$, it suffices to write $\mathrm{var}(N_x^+(t))$ explicitly and to use results on moments, which, combined with condition (2.47), give the convergence of $\sigma^2$. To obtain the CLT, Piterbarg proceeded both by discretization and by smoothing, since a process that is merely discretized in time does not satisfy the conditions of the CLT in discrete time, even under (2.47). He therefore introduced a smooth enough process $X^\delta_\Delta$, discretized in time, to which a known result could be applied under some conditions, namely a CLT for the number of upcrossings of a stationary Gaussian process in discrete time (see for instance [122]). This approximating process $X^\delta_\Delta$ is defined by $X^\delta_\Delta(s) := X^\delta(k\Delta)$ for $s \in [k\Delta, (k+1)\Delta)$, k = 0, 1, ..., with Δ such that 1/Δ is an integer and Δ → 0, and where the smoothed process $X^\delta$ is given by $X^\delta(s) := \frac{1}{\delta}\int_0^\delta X(s+v)\,dv$.

• Berman (1971). Let us return briefly to the notion of sojourn time. When choosing the level function x = x(t) of the asymptotic order of $\sqrt{2\log t}$ (as t → ∞), we recalled in the section on the law of small numbers that the sojourn time above the level x tends to a compound Poisson limit, since the sojourns are relatively infrequent and their contributions are few but individually relatively substantial. In this case, the local behavior of the correlation function r determines the form of the limiting distribution. Now, when choosing a level function rising at a slower rate (such that $x(t)/\sqrt{2\log t}$ is bounded away from 1), the sojourns become more frequent and their contributions more uniform in magnitude, implying, with the customary CLT normalization, namely centering by $\mathbb{E}[S_x(t)]$ (which does not depend on r) and scaling by $\sqrt{\mathrm{var}(S_x(t))}$, a normal limit distribution. Here the local behavior of r enters the normalization function, but not the form of the limiting distribution. Berman proved this CLT for two different types of mixing conditions, namely when the covariance function r decays sufficiently rapidly to 0 as t → ∞, or conversely at a sufficiently small rate. In the case of a rapid rate of decay of r, the mixing condition is based not on the function r itself but on the function b that appears in its spectral (moving-average) representation, when supposing that the spectral distribution is absolutely continuous with derivative f(λ), b being the Fourier transform in the $L^2$-sense of f(λ). This mixing condition is given by $b \in L^1 \cap L^2$, which means in particular that the tail of b is sufficiently small, so that r tends to 0 sufficiently rapidly. The proof of the CLT is based on the m-dependent method. Indeed, Berman introduced a family of m-dependent stationary Gaussian processes $\{X_m(t); -\infty < t < \infty\}$ approaching the original process X uniformly in t, in the mean square sense, for large m. He then deduced a CLT for the normalized sojourn of X from the CLT for the m-dependent process (established by adapting a blocking method used in the proofs of CLT for dependent r.v.). In the case of slowly decreasing covariances, the proof of the CLT relies on a method specific to Gaussian processes, based on the expansion of $S_x(t)$ in a series of integrated Hermite polynomials and on the method of moments. In fact it is a special case of what is known as a non-central limit theorem for Gaussian processes with long-range dependence (see [50], [156] and the section Non-central limit theorems below).

Theorem 2.25 (Slud, 1994) Let $x \in \mathbb{R}$ be arbitrary, and let $X = (X_t, t \ge 0)$ be a mean 0, variance 1, stationary Gaussian process with continuous spectral measure σ and twice-differentiable covariance function; then a central limit theorem holds for the number of crossings of the level x, with limiting variance $\alpha^2 > 0$ given by an explicit series expansion.

Note that Slud extended this result to curve crossings in cases where the curve is periodic and the underlying process has rapidly decaying correlations (see [149], theorem 6.4). In 2001, Kratz and León (see [82]) introduced a general method which can be applied to many different cases, in particular when the dimension of the index set T is bigger than one, to provide a CLT as general as possible for level functionals of Gaussian processes $X = (X_t, t \in T)$.
This method is a combination of two approaches, the one developed by the authors in 1997 (see [80]) and one derived from the work of Malevich (see [98]), Cuzick (see [41]) and mainly Berman (see [22]), which consists in approaching the process X by an m-dependent process, in order to be able to use well-known results on m-dependent processes. Applying this method, a CLT is given for functionals of $(X_t, \dot X_t, \ddot X_t, t \ge 0)$, which allows in particular to recover immediately the CLT for the number of crossings $N_t(x)$ of X given in Slud (see [149], theorem 3.1). We suppose that the correlation function r of our (stationary Gaussian) process, mean zero and variance one, satisfies
$$r \in L^1 \quad \text{and} \quad r^{(iv)} \in L^2. \qquad (2.48)$$
Note that (2.48) implies that r′, r′′ and r′′′ belong to $L^2$ as well. Let $Z_s$ be a r.v. independent of $X_s$ and $\dot X_s$ (for each fixed s) such that $\ddot X_s$ can be written as a linear combination of $X_s$ and $Z_s$. Note that $X_s$ and $\dot X_s$ are independent, as are $\dot X_s$ and $\ddot X_s$, which ensures the existence of $Z_s$ with the stated properties. Let $F^X_t$ be the level functional defined by its Hermite expansion (2.50) in H(X), with $(C(q))_q$ some bounded sequence. Besides the stationarity of the process and the orthogonality of the chaoses, which are used to simplify the computations whenever possible, let us give the basic points that constitute this general method.
⊲ Let $F_{Q,t}(X)$ be the finite sum deduced from $F_t(X)$ by keeping the terms q = 1 to Q.
⊲ First we show that $F_t(X)$ can be approximated in $L^2$ by the r.v. $F_{Q,t}(X)$. To prove this convergence, we follow the method developed in Kratz & León (see [81], proof of theorem 1), where two results (cited below), one of Taqqu (see [155], lemma 3.2) and the other of Arcones (see [4], lemma 1), help respectively for the computation of expectations of products of Hermite polynomials and for the control of sums of covariances.

Lemma 2.7 (Arcones inequality, 1994) Let X be a stationary sequence of standard Gaussian vectors in $\mathbb{R}^d$ and let f be a function on $\mathbb{R}^d$ with finite second moment and Hermite rank τ, 1 ≤ τ < ∞, w.r.t. X. Recall that the Hermite rank of a function f is defined as the smallest integer τ ≥ 1 such that there exists a multi-index l with |l| = τ and $\mathbb{E}[(f(X) - \mathbb{E}f(X))\, H_l(X)] \neq 0$. Then the covariance between $f(X_i)$ and $f(X_j)$ can be bounded in terms of $\psi^{\tau}$, where, in our case, ψ denotes the supremum of the sum of the absolute values of the off-diagonal terms in the column vectors belonging to the covariance matrix of the Gaussian vector under consideration. Under the conditions on r, we have $\int_0^\infty \psi^2(u)\, du < \infty$.

⊲ Now, by classical tools (the dominated convergence theorem, Fatou's lemma, the Cauchy-Schwarz inequality, ...) and Arcones' inequality, we can get the limit variance $\sigma^2$ of $F_{Q,t}(X)$ and prove that it is finite.

⊲ A version of the m-dependent method: Berman's method. This method consists in approaching the process X by an m-dependent process, in order to be able to use well-known results on m-dependent processes. Define $X^\varepsilon$ as a (1/ε)-dependent process approaching the process X, as follows. Suppose X has a symmetric spectral density f, so that it admits an Itô-Wiener (spectral) representation. Let $\varphi_\varepsilon(\lambda) = \frac{1}{2\pi\varepsilon}\,|\varphi(\lambda/\varepsilon)|^2$, and then introduce the process $X^\varepsilon$ built from this kernel (note that $X^\varepsilon_t$, for fixed t, is a standard Gaussian r.v.), with its derivatives denoted accordingly. We can then prove (2.54) by using Arcones' inequality and some results on the correlation between the process (respectively its derivatives) and the associated (1/ε)-dependent process (respectively its derivatives), obtained when working with the spectral representation of the correlation functions, namely

Proposition 2.4 (Kratz and León, 2001) The correlation functions of $X^\varepsilon$ and of its derivatives converge uniformly over compacts and in $L^2$, as ε → 0, towards $r_{j,k}(\cdot) := (-1)^k\, r^{(j+k)}(\cdot)$; for j = k = 0, the convergence takes place in $L^1$ as well. The cross-correlations between the derivatives of X and those of $X^\varepsilon$ also converge uniformly over compacts and in $L^2$, as ε → 0, towards $r_{j,k}(\cdot)$.
⊲ Then it is enough to consider the weak convergence of the sequence $F_{Q,t}(X^\varepsilon)$ towards a Gaussian r.v. as t → ∞ to get the CLT for $F^X_t$. We can write $F_{Q,t}(X^\varepsilon)$ as a normalized sum of terms obtained by iterating the shift operator θ associated to the process. Hence the weak convergence of $F_Q(X^\varepsilon)$ towards a Gaussian r.v. is a direct consequence of the CLT for sums of m-dependent r.v. (see [60] and [21]), which, combined with (2.52), (2.53) and (2.54), provides the CLT for $F_t(X)$, namely

Theorem 2.26 (Kratz and León, 2001) Under the above conditions, $\big(F^X_t - \mathbb{E}[F^X_t]\big)/\sqrt{t}$ converges in distribution, as t → ∞, to a centered normal r.v. with variance $\sigma^2$, where $\mathbb{E}[F^X_t] = t\, d_{000}$ and $\sigma^2$ is the (finite) limit variance obtained above.

Remark. Condition (2.48) can of course be weakened to
$$r \in L^1 \quad \text{and} \quad r'' \in L^2 \qquad (2.55)$$
when considering the process X and its first derivative only, as for the number of crossings, and to
$$r \in L^1 \qquad (2.56)$$
when considering the process X only.

• As an application of theorem 2.26, we recover, under condition (2.55), Slud's CLT for the number of crossings of X already stated in theorem 2.25.

Non-central limit theorems

From the CLT given in the previous section, or even more generally from the literature on CLTs for non-linear functionals of Gaussian processes, we can ask what happens when some of the conditions of those theorems are violated, in particular under a condition of regular long-range dependence. This problem interested many authors, among whom we can cite, in chronological order, Taqqu (see [154], [156]), Rosenblatt (see [133], [134], [135]), Dobrushin and Major (see [50]), Major (see [97]), Giraitis and Surgailis (see [56]), Ho and Sun (see [63]), and Slud (see [148] or [149]). MWI expansions proved to be quite useful in describing the limiting behavior in non-central limit theorems for functionals of Gaussian processes with regular long-range dependence. This is what allowed Slud to prove, by using techniques proper to MWIs, in particular Major's non-central limit theorem for stationary Gaussian fields with regular long-range dependence (see [96], theorem 8.2), a non-central limit theorem for level-crossing counts.

Extensions

The aim of this section is to present examples of possible applications of some of the methods reviewed in the previous section.

CLT for other non-linear functionals of stationary Gaussian processes

As an application of the heuristic of the previous section, in particular of theorem 2.26, we may look at various functionals related to crossing functionals of a stationary Gaussian process X, such as the sojourn time of X in some interval, the local time of X when it exists, or the number of maxima of the process in an interval.

• Time occupation functionals. The simplest application of theorem 2.26 is when the integrand appearing in $F^X_t$, defined in (2.50), depends on one variable only, requiring no other condition than the mild one $r \in L^1$ (see the remark following the theorem).
⊲ A first example of this type is when $F^X_t$ represents the sojourn of the process X above a level x on an interval [0, t], i.e. when $F^X_t := S_x(t)$. It is easy to obtain the Hermite expansion of $S_x(t)$ (recalled below); an application of theorem 2.26 then yields the CLT for the sojourn time under the condition $r \in L^1$.
⊲ Another example already discussed is the local time $L^x_t$ of X at the level x (when it exists) (see [22]). Its Hermite expansion is also recalled below, and again theorem 2.26 allows one to retrieve its asymptotically normal behavior under the condition $r \in L^1$.
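For reference, the two expansions just mentioned can be written in the following standard form (probabilists' Hermite polynomials $H_q$, standard normal density φ and distribution function Φ; this is the usual normalization, which may differ from the one used in [22] or [82] by constant factors). The first follows from expanding the indicator $\mathbf{1}_{(u > x)}$ in the Hermite basis, the second from the formal expansion of the Dirac mass at x:

```latex
S_x(t) \;=\; \sum_{q \ge 0} a_q(x) \int_0^t H_q(X_s)\,ds ,
\qquad a_0(x) = 1 - \Phi(x), \quad
a_q(x) = \frac{\varphi(x)\, H_{q-1}(x)}{q!} \ \ (q \ge 1),
\\[6pt]
L^x_t \;=\; \sum_{q \ge 0} \frac{\varphi(x)\, H_q(x)}{q!} \int_0^t H_q(X_s)\, ds .
```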
• Number of maxima in an interval. One of the main concerns of extreme value theory is the study of the maximum $\max_{t \in [0,T]} X_t$ of a real-valued stochastic process $X = (X_t, t \in [0,T])$ with continuous paths, in particular the study of its distribution F. There is an extensive literature on this subject, going mainly in three directions, according to Azaïs and Wschebor (see [12]): one looking for general inequalities for the distribution F, another describing the behavior of F under various asymptotics, and the last one studying the regularity of the distribution F. For more references, see [12]. In the discussion of maxima of continuous processes, the upcrossings of a level play an important role, as was the case in the discrete setting for maxima of sequences and exceedances of a level $u_n$, through the equivalence of the events $\{M^{(k)}_n \le u_n\} = \{N^+_n < k\}$, $M^{(k)}_n$ being the kth largest value of the r.v. $X_1, \cdots, X_n$ and $N^+_n$ the point process of exceedances on (0, 1] (i.e. $N^+_n = \#\{i/n \in (0,1] : X_i > u_n,\ 1 \le i \le n\}$). In the continuous case, we have already seen that crossings and maxima are closely related when describing the Rice method, in particular with lemma 2.4, which provides bounds on $\mathbb{P}[\max_{0 \le s \le t} X_s \ge x]$ in terms of factorial moments of the x-crossings of the process X. Note that Cramér (1965) noted the connection between u-upcrossings of X and its maximum, e.g. through $\{N_u(T) = 0\} = \{M(T) \le u\} \cup \{N_u(T) = 0,\ X(0) > u\}$, which led to the determination of the asymptotic distribution of the maximum M(T). Recently, when looking for a reasonable way, based upon natural parameters of the process X, to compute the distribution of its maximum, Azaïs and Wschebor (see [13]) established a method based upon expressing $\mathbb{P}[\max_{0 \le s \le 1} X_s > x]$, for processes satisfying some regularity conditions, by means of a Rice series whose main kth term is $(-1)^{k+1}\nu_k/k!$, $\nu_k$ denoting the kth factorial moment of the number of upcrossings. This method, named 'the Rice method revisited' because it was inspired by previous works such as the one of Miroshin (see [106]), can be applied to a large class of processes, and allows a numerical computation of the distribution in Gaussian cases that is, in many respects, more powerful than the widely used Monte-Carlo method based on the simulation of the paths of the continuous parameter process. Note also another useful connection between the maximum $\max_{0 \le s \le t} X_s$ of a process X and the sojourn time: indeed, we can identify the events $\{\max_{0 \le s \le t} X_s > x\}$ and $\{S_x(t) > 0\}$, and Berman uses this equivalence to study the maximum of the process X (see [22], chap.10). Here we are interested in the number of local maxima of a stationary Gaussian process lying in some interval, and more specifically in its asymptotic behavior, one of the motivations being applications to hydroscience. As for the number of crossings, the condition appearing in the corresponding theorem could be weakened by imposing a condition similar to (2.33) on the fourth derivative of r(·); it would then yield $\mathbb{E}\big[(M^X_{[\beta_1,\beta_2]})^2\big] < \infty$.
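Returning for a moment to the bounds on the maximum mentioned above, the simplest first-order instance (lemma 2.4 refines it using higher factorial moments) combines the starting value of the process with Markov's inequality applied to the number of upcrossings; for a mean zero, unit variance stationary Gaussian process with $-r''(0) < \infty$, Rice's formula then makes the bound explicit:

```latex
\mathbb{P}\Big[\max_{0 \le s \le t} X_s > x\Big]
\;\le\; \mathbb{P}[X_0 > x] \;+\; \mathbb{P}\big[N_x^+((0,t]) \ge 1\big]
\;\le\; \Phi(-x) \;+\; \mathbb{E}\big[N_x^+((0,t])\big]
\;=\; \Phi(-x) \;+\; \frac{t}{2\pi}\,\sqrt{-r''(0)}\; e^{-x^2/2}.
```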
Previously, two cases of application of theorem 2.26 were considered: on the one hand when the integrand appearing in $F^X_t$, defined in (2.50), depends only on one variable (under the simple condition $r \in L^1$); on the other hand when the integrand depends on two variables (under the conditions $r \in L^1$ and $r'' \in L^2$), the case of our main study, the number of crossings, which also concerns any convex combination of it, as, for instance, the (slightly modified) Cabaña estimator (see [32]) of the second spectral moment, given by $\gamma = \frac{\pi}{t}\int_{-\infty}^{\infty} N_t(x)\, d\alpha(x)$ (with α(·) a distribution function on $\mathbb{R}$), which has been studied by Kratz & León (see [81]). Applying theorem 2.26 to obtain a CLT for the number of local maxima $M^X_{[\beta_1,\beta_2]}$ requires the last possible case, i.e. when the integrand appearing in $F^X_t$ depends on three variables. Note that the conditions $r^{(vi)}(0) < \infty$ and $r \in L^1$ imply condition (2.48) of theorem 2.26, and that we can write $M^X_{[\beta_1,\beta_2]} = F^X_t$ when taking
$$d_{qnm} = \sqrt{\frac{r^{(iv)}(0)}{-\,r''(0)}}\;\delta_{nm}\;\frac{H_{q-(m+n)}(0)}{(q-(m+n))!\,\sqrt{2\pi}}.$$
Moreover we can easily check that $\sum_{q=0}^{\infty} d^2_{qnm}\, n!\, m!\,(q-(n+m))! < C$, with C some constant, by using again proposition 2.5.

Consider now a stochastic process $X = (X_t,\ t \in \mathbb{R}^d)$ having $C^1$ paths. Wschebor studied problems related to the level sets of the paths of X, defined by $C^X_u := \{t : X_t = u\}$, proving at the same time some type of Rice formula. For more detail, see his 1985 book. It is in this context that the work of Kratz and León (see [82]) on Gaussian fields took place. They considered the problem of the asymptotic behavior of the length of a level curve of a Gaussian field, by adapting to random fields indexed by a set of dimension larger than one the method used in the one-dimensional case. Note that once again this study was motivated by its various applications, in particular to the random modelling of the sea (see e.g. [9]). Consider a mean zero stationary Gaussian random field $(X_{s,t};\ (s,t) \in \mathbb{R}^2)$ with variance one and correlation function r having partial derivatives $\partial_{ij} r$ for 1 ≤ i + j ≤ 2. Assume that $r \in L^1$ satisfies $r^2(0,0) - r^2(s,t) \neq 0$ for $(s,t) \neq (0,0)$, and that $\partial_{02} r$ and $\partial_{20} r$ both belong to $L^2$. Let H(X) be the space of real square integrable functionals of the field $(X_{s,t};\ (s,t) \in \mathbb{R}^2)$. Let $L^X_{Q(T)}(u)$ be the length of $\{(s,t) \in Q(T) : X_{s,t} = u\}$, the level curve at level u of the random field X, Q(T) being the square [−T, T] × [−T, T] and |Q(T)| its Lebesgue measure. By theorem 3.2.5 of Federer (see [53], p.244), we have, for $g \in C(\mathbb{R})$,
$$\int_{\mathbb{R}} g(u)\, L^X_{Q(T)}(u)\, du \;=\; \int_{Q(T)} g(X_{s,t})\,\|\nabla X_{s,t}\|\, ds\, dt.$$
(Note that in the non-isotropic case, i.e. when the density of $(\partial_{10}X_{s,t}, \partial_{01}X_{s,t})$ is $N(0, \Sigma)$ with $\Sigma = \begin{pmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22}\end{pmatrix}$, $\Sigma_{ij} \neq 0$ for $i \neq j$, we can consider an isotropic process $Y_{s,t}$ obtained from $X_{s,t}$ by a linear change of coordinates and deduce the results for $X_{s,t}$ from those for $Y_{s,t}$.) Under these conditions, we can obtain the chaos expansion in H(X) of $L^X_{Q(T)}(u)$, as well as a CLT for it (theorem 3.3), in which $\delta_{q,2m,2l}(u) := d_{q-2(m+l)}(u)\, c_{2m,2l}$ and $I_{q,2m,2l}(s,t) := H_{q-2(m+l)}(X_{s,t})\, H_{2m}(\partial_{10}X_{s,t})\, H_{2l}(\partial_{01}X_{s,t})$. The proof is an adaptation to dimension 2 of the methods used to obtain proposition 2.2 and theorem 2.26, respectively. Indeed, we introduce $L^X_{Q(T)}(u, \sigma) := \frac{1}{\sigma}\int_{\mathbb{R}} L^X_{Q(T)}(v)\,\varphi\!\left(\frac{u-v}{\sigma}\right) dv$ and prove that it converges to $L^X_{Q(T)}(u)$ in $L^2$.
Then the generalization to dimension 2 of lemma 2.6 gives the Hermite expansion of $L^X_{Q(T)}(u, \sigma)$, with coefficients $d^\sigma_k(u)\, c_{2m,2l}$ such that $d^\sigma_k(u) \to d_k(u)$ as σ → 0, multiplying integrals over Q(T) of the quantities $H_{k-2(m+l)}(X_{s,t})\, H_{2m}(\partial_{10}X_{s,t})\, H_{2l}(\partial_{01}X_{s,t})\, ds\, dt$, where $c_{2m,2l}$ is given in theorem 3.3. The first part of theorem 3.3, i.e. the chaos expansion, then follows exactly as in dimension one. As regards the CLT, we define the normalized integrals over Q(T) of $I_{q,2m,2l}(s,t)\, ds\, dt$ and proceed as in dimension one. With respect to Rice formulas and the distribution of the maximum of Gaussian fields, let us also mention the recent works by Adler et al. and Taylor et al. (see [157] and [2]), and by Azaïs and Wschebor (see [14]).

Conclusion

Although this work on level crossings focuses specifically on the stationary Gaussian case, it should be noted that much research has also been conducted on the non-stationary Gaussian case (the notion of local stationarity, the notion of non-stationarity with constant variance; work on diffusions, Brownian motion, fractional Brownian motion, etc.), as well as on the non-Gaussian case (in particular for the class of stable processes). For completeness, we tried to make a large inventory of papers and books dealing with the subject of level crossings, even though not all the references are explicitly mentioned in the synopsis above. Note that when an author published a paper prior to a book, the only reference mentioned is that of his/her book.
18,188.8
2006-12-05T00:00:00.000
[ "Mathematics" ]
An experimental evaluation of the impact of a Payment for Environmental Services programme on deforestation

Despite calls for greater use of randomized control trials (RCTs) to evaluate the impact of conservation interventions, such experimental evaluations remain extremely rare. Payments for environmental services (PES) are widely used to slow tropical deforestation, but there is widespread recognition of the need for better evidence of effectiveness. A Bolivian nongovernmental organization took the unusual step of randomizing the communities where its conservation incentive program (Watershared) was offered. We explore the impact of the program on deforestation over 5 years by applying generalized additive models to Global Forest Change (GFC) data. The "intention-to-treat" model (where units are analyzed as randomized, regardless of whether the intervention was delivered as planned) shows no effect; deforestation did not differ between the control and treatment communities. However, uptake of the intervention varied across communities, so we also explored whether higher uptake might reduce deforestation. We found evidence of a small effect at high uptake, but the result should be treated with caution. RCTs will not always be appropriate for evaluating conservation interventions due to ethical and practical considerations. Despite these challenges, randomization can improve causal inference and deserves more attention from those interested in improving the evidence base for conservation.

KEYWORDS: deforestation, effectiveness, efficacy, experimental evaluation, forest conservation, impact evaluation, intention-to-treat, land use change, payments for ecosystem services, PES

| INTRODUCTION

Following calls for improvements in the quality of evidence underpinning conservation interventions (Ferraro & Pattanayak, 2006), there are a rapidly growing number of robust conservation impact evaluations. Impact evaluation seeks to establish the extent to which an outcome can be attributed to the intervention itself, rather than to confounding factors (Baylis et al., 2016; Ferraro & Hanauer, 2014).
Careful statistical analysis is increasingly used to construct counterfactuals (what would have happened in the absence of the intervention). For example, statistical matching is now quite widely used (e.g., Eklund et al., 2016; Rasolofoson, Ferraro, Jenkins, & Jones, 2015; Sills et al., 2017), while other quasi-experimental methods which require particular conditions, such as instrumental variables (Sims, 2010) or regression discontinuity (Alix-Garcia, Mcintosh, Sims, & Welch, 2013), have spread more slowly. Randomized control trials (RCTs), where units are experimentally allocated to treatment or control, reduce the influence of confounding factors (Ferraro & Hanauer, 2014) and therefore, at least in theory, greatly improve the quality of causal inference.

Data accessibility: Data and code to reproduce the analysis in this paper are available here: doi.org/10.6084/m9.figshare.7418264. The full details of a baseline and endline social survey of participants and non-participants in Watershared from control and treatment communities (a small amount of data from this is used in the paper) are publicly archived (Bottazzi et al., 2017).

RCTs at the field scale have been the mainstay of applied ecology for decades, but are vanishingly rare at the landscape scale despite calls for wider use (Ferraro, 2011; Miteva, Pattanayak, & Ferraro, 2012; Pattanayak, Wunder, & Ferraro, 2010; Samii, Lisiecki, Kulkarni, Paler, & Chavis, 2014). The rarity of RCTs in evaluating the impact of large-scale conservation interventions can be attributed to the numerous practical and ethical considerations involved (Baylis et al., 2016; Pynegar, Jones, Gibbons, & Asquith, 2018). One of these is scale itself: it clearly would not be feasible to randomly allocate Protected Areas in a landscape. Furthermore, despite the enthusiasm with which RCTs have been promoted in some fields such as development economics, interpretation is not always simple, and randomization does not relieve one of the need to consider covariates and confounding factors (Deaton & Cartwright, 2018). Finally, RCTs require involvement of researchers throughout the implementation phase; they cannot be conducted post hoc. All are likely to be important reasons for the limited number of RCTs evaluating large-scale conservation interventions.

A useful distinction in any impact evaluation is between "effectiveness" and "efficacy" (how interventions work in real-world practice versus under ideal implementation; Pullin & Knight, 2001). Effectiveness may be low not because the intervention lacks efficacy but because implementation, uptake and adherence are imperfect (Glennerster & Takavarasha, 2013). When analyzing RCTs, including the outcomes for individuals as randomized in "intention-to-treat" (ITT) estimates is widely considered most appropriate for evaluating real-world effectiveness (Gupta, 2011). Where uptake is incomplete, examining outcomes according to uptake and adherence can be informative, especially for exploring the potential efficacy of new approaches (Glennerster & Takavarasha, 2013; Ten Have et al., 2008). For example, an "as-treated" impact estimate (where units are analyzed as they were treated rather than as they were randomized) can be useful (McNamee, 2009).
Payments for environmental services (PES, also known as payments for ecosystem services; Wunder, 2015), which incentivize land managers to provide ecosystem services, have been promoted to slow tropical deforestation since the late 1990s (Landell-Mills & Porras, 2002; Sánchez-Azofeifa, Pfaff, Robalino, & Boomhower, 2007). While strong evidence on PES impacts is limited (Börner et al., 2017; Miteva et al., 2012; Samii et al., 2014), approaches such as statistical matching have been quite widely used to evaluate deforestation impacts, for example in Costa Rica (Robalino & Pfaff, 2013) and Cambodia (Clements & Milner-Gulland, 2015). Regression discontinuity was recently used to evaluate the impact of payments on land management actions in Mexico (Alix-Garcia et al., 2018). The only RCT to evaluate PES (Jayachandran et al., 2017) suggested that, in a setting with high forest pressure, low opportunity costs, and a requirement to enroll all of one's forest land, a PES scheme in Uganda cost-effectively reduced deforestation over a two-year period. Given the heterogeneity of PES impacts across varied settings, and the few evaluations relative to the exploding number of programs (Salzman, Bennett, Carroll, Goldstein, & Jenkins, 2018), more such RCTs would be valuable.

In 2010, the Bolivian nongovernmental organization Fundación Natura Bolivia (Natura) and five municipal governments initiated an RCT of their conservation incentive program known as Watershared (Pynegar et al., 2018). Watershared makes in-kind compensations to incentivize landowners to cease deforestation and cattle grazing on enrolled parcels. A total of 129 communities were randomly allocated to treatment or control (offered agreements or not). We investigate the effectiveness and efficacy of Watershared at reducing deforestation over 5 years by applying generalized additive models (GAMs) to Global Forest Change (GFC) data (Hansen et al., 2013). We undertake a standard ITT evaluation to explore effectiveness at the level of randomization, regardless of uptake of Watershared agreements in individual communities. We further quantify efficacy by evaluating the effect of uptake on deforestation (cf. "as-treated" analysis). Throughout, we control for factors that can relate to both uptake and deforestation, including propensity to enroll (endogeneity), and consider the potential influence of unobserved confounding factors.

| Study context

Since 2003, Natura's Watershared program in the Bolivian Andes has used in-kind incentives to encourage landowners to conserve forests, in order to preserve exceptional biodiversity, store carbon, and ensure locally valued ecosystem services (Asquith, 2016). Watershared is not a PES scheme according to the original definition involving buyers and sellers of services (Wunder, 2007); however, it does involve "voluntary transactions between service users and service providers that are conditional on agreed rules of natural resource management for generating offsite services" (Wunder, 2015). The Watershared scheme is therefore relevant to those interested in the design of conservation incentive schemes such as PES. In exchange for enrolling parcels of land in Watershared agreements, farmers receive various forms of support (including fruit trees, bee boxes, irrigation material and barbed wire) to help them shift away from swidden agriculture and improve livestock management (Bottazzi, Wiik, Crespo, & Jones, 2018). More than 210,000 ha belonging to 4,500 families are under agreements (Asquith, 2016).
The study region: The Río Grande Valles Cruceños Natural Integrated Management Area (Spanish acronym ANMI) is a 734,000-ha protected area in the Santa Cruz valleys of Bolivia, created in 2007 (Figure 1a). Regional differences in rainfall contribute to the existence of five ecoregions, which we simplified to three (Appendix S1, Supporting Information): Tucuman-Bolivian Forest, Chaco, and the dry valleys. The area is home to approximately 20,000 people scattered across small towns and hamlets. Most people farm using a mixed system of staple crops including maize and potato, small-scale vegetable cultivation, and livestock rearing. Cattle are grazed in the forests for at least part of each year.

RCT: In 2010 Natura, motivated by a desire to know whether their intervention was effective, decided to roll out Watershared in 129 communities in the ANMI as an RCT to facilitate impact evaluation (Pynegar et al., 2018). Following baseline data collection, including a socioeconomic survey (Bottazzi et al., 2017), communities were randomly allocated to control (n = 64) or treatment (n = 65), stratified by cattle ownership and population density. However, when our team later constructed community boundaries using national data (National Institute of Agrarian Reform) and field validation, we found that two neighboring control communities were in practice considered as one and did not have separate boundaries. Thus, we examine 128 communities (control n = 63 and treatment n = 65). The randomization was consented to by municipal leaders on the grounds that the program would subsequently be implemented in all communities (this occurred in 2016, and the program now runs in both treatment and control communities). Watershared agreements were offered to households in treatment communities. There were three levels of agreement with slightly different conditions and incentives (SI 2). For example, the strictest level (level 1) applied only to forest within 100 m of a stream, and cattle had to be excluded as well as deforestation stopped. The other two levels did not require cattle exclusion (SI 2). While the analysis of the impact of Watershared on water quality (Pynegar et al., 2018) considered only level 1 agreements, in this paper, which investigates the impact of Watershared on deforestation, we include all levels. Compliance with level 1 and 2 agreements was monitored annually by Natura technicians walking transects within the parcels under agreement. Level 3 agreements did not receive active monitoring. In cases of gross noncompliance, in-kind incentives (such as irrigation tubing or bee hives) have been redistributed to the community. As with many such schemes, not all enrolled land represented additional conservation (additionality was ca. 13%; Bottazzi et al., 2018), and there were barriers to entry, leading to higher uptake by households with formal land title, larger homes, cattle, and stronger social connections (Grillos, 2017). Uptake (the percentage of a community's area under Watershared agreements) was highly variable across the treated communities (Figure 1b), varying from 3 to 80% (median = 14%). Consent to randomization was granted by community leaders in the area on the understanding that the intervention would subsequently be implemented in all communities (this general roll-out was conducted in 2016). The consent forms used at baseline and endline are archived alongside the data (Bottazzi et al., 2017).
The endline social survey data used in part of this analysis were assessed under the Bangor University Research Ethics Framework.

| Deforestation data and data validation

Deforestation data were extracted from the GFC product (Hansen et al., 2013), which provides spatially explicit tree-cover percentage for 2000 and annual tree-cover change for 2000 to 2016. The "Treecover2000" and "lossyear" layers were downloaded for tile 10S_070W and projected into UTM zone 20S. A threshold of 30% tree cover was applied to generate a Forest/Non-Forest mask, which was then applied to the lossyear layer to select loss occurring within forest only. The layers were combined into a deforestation map, with the resulting pixels classified into four groups: stable forest; stable non-forest; loss in the baseline period (2000-2010); and loss in the RCT period (2011-2016). This map was validated following Olofsson et al. (2014) using visual checks on a stratified random sample of 426 points (see SI 3). Twenty-two points were excluded because poor-quality time series imagery made validation impossible. Accuracy of the remaining points (n = 404) was 94% (Table S3.1), with user's accuracy ranging from 63% (for loss in the RCT period) to 97% (stable forest).
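A minimal sketch of the raster processing described above, assuming the two GFC layers have already been downloaded, reprojected to UTM zone 20S, and aligned on the same grid; the file names, and the use of rasterio and numpy, are illustrative assumptions rather than the authors' actual workflow:

```python
import numpy as np
import rasterio

# Illustrative file names for the reprojected GFC layers (tile 10S_070W).
with rasterio.open("treecover2000_utm20s.tif") as src:
    treecover = src.read(1).astype(float)
with rasterio.open("lossyear_utm20s.tif") as src:
    lossyear = src.read(1).astype(int)   # 0 = no loss, 1..16 = loss in 2001..2016

forest2000 = treecover >= 30             # 30% canopy threshold -> Forest/Non-Forest mask

# Four classes as in the paper: stable non-forest, stable forest,
# baseline-period loss (loss years 1-10), RCT-period loss (loss years 11-16).
classes = np.zeros(lossyear.shape, dtype=np.uint8)
classes[~forest2000] = 1
classes[forest2000 & (lossyear == 0)] = 2
classes[forest2000 & (lossyear >= 1) & (lossyear <= 10)] = 3
classes[forest2000 & (lossyear >= 11) & (lossyear <= 16)] = 4

pixel_ha = 30 * 30 / 10_000              # roughly 0.09 ha per ~30 m pixel
print("RCT-period loss (ha):", (classes == 4).sum() * pixel_ha)
```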
| Statistical analysis

| Analytical approach

Although Watershared agreements are individual (a farmer agrees to enroll land or not), our deforestation analysis is at the community level, for three reasons. First, the randomization unit is the community. Second, although there are shapefiles for enrolled parcels, we do not have shapefiles for unenrolled parcels (either in control or in treatment communities), making finer-resolution comparison impossible. Finally, an analysis of whether deforestation was lower in enrolled parcels than on other land would be highly vulnerable to confounding by on-farm leakage (Pfaff & Robalino, 2017). Three further considerations shaped the analysis of Watershared's effect on deforestation. First, while randomization exogenously allocated treatments, the voluntary nature of uptake yielded nonrandom variation in uptake. We controlled for factors that might influence both participation and the outcome as much as possible by controlling for uptake propensity (see below). Second, owing to this variation in uptake, there is a distinct difference between the randomization, which is binary (control/treatment), and the intervention, which is continuous (% area under agreements). We therefore have two models: an ITT model evaluating the effectiveness of Watershared as implemented, and a "continuous-treatment" (CT) model to explore the potential efficacy. Third, due to implementation error, a few households living in treatment communities enrolled land they own in control communities, so there were control communities with enrolled land (see Figure 1b). We included all communities in our analysis despite this contamination of the control, accepting that it may introduce noise.

| Modeling the propensity for uptake of Watershared agreements

We modeled uptake propensity by regressing the % land area enrolled in a treatment community against socioeconomic predictors aggregated to community scale as means (we also tested a model using medians, which explained 10% less variation; this is not shown). We selected predictors based on an analysis of household-level participation (Grillos, 2017), derived from a baseline survey by Natura in 2010 in all communities (Bottazzi et al., 2017). The predictors were: wealth (land, cattle available); education of the household head (years); social embeddedness (generations a household has been present, frequency of involvement in community work); environmental attitudes (perceptions of forest value and local water quality); and remoteness (travel time to the nearest market; see SI 4 for more details). We used the predictions to create a propensity score for treatment and control communities, and used this score as a control variable in our deforestation analysis. One community, which lacked baseline socioeconomic data and therefore a propensity score, had to be discarded from the analysis. An important assumption in our deforestation models is that deviation from uptake propensity (i.e., uptake that cannot be explained by predicted uptake) is independent of confounding factors. For this to be the case, some of the unexplained variation in the uptake model would need to be related to variation affecting uptake but not deforestation. We suggest that such variation may be due to differences in how the offer of the program was experienced across the communities, for example in the timing of Natura's visits to certain communities, the relationship between Natura technicians and communities, or the willingness of the community leader to spread the word about Natura's visit. We support our interpretation of the propensity model results using households' answers to the question (asked of those who did not take up agreements) "Why did you not join the scheme?" (n = 513).
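A simplified sketch of this propensity step: the paper fits a beta-family GAM at community level; here a fractional (binomial-family) GLM from statsmodels stands in for that model, and the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical community-level baseline table (one row per community).
df = pd.read_csv("communities_baseline.csv")
predictors = ["land_ha", "cattle_per_hh", "education_years", "generations_present",
              "community_work_freq", "forest_value_score", "water_quality_score",
              "travel_time_market"]

treated = df[df["treatment"] == 1]
X = sm.add_constant(treated[predictors])
y = treated["prop_area_enrolled"]        # proportion of community area enrolled, in (0, 1)

# Fractional (quasi-binomial) regression as a stand-in for the beta-family GAM.
uptake_model = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Predicted uptake propensity for all communities (treatment and control),
# used later as a control variable in the deforestation models.
df["uptake_propensity"] = uptake_model.predict(sm.add_constant(df[predictors]))
```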
| Leakage

Leakage is a common concern in payments for ecosystem services schemes, as pressures may be displaced rather than eliminated (Alix-Garcia, Shapiro, & Sims, 2012; Börner et al., 2017), and it is well known that leakage poses challenges for conservation evaluation (Pfaff & Robalino, 2017). As noted, we controlled for within-community leakage by analyzing deforestation at the community scale (as deforestation displaced to areas near enrolled parcels would simply reduce our impact estimate). We could not control for between-community leakage, which, if preferentially occurring from treatment to control, would bias our estimated impact upward. However, we argue that such a bias is unlikely, because treated communities' neighbors are themselves randomized (so the effect should cancel out). Also, local deforestation is mostly due to small-scale conversion to agriculture for local markets, so households are unlikely to clear land far from their home.

| Modeling details

Our primary (ITT) analysis compared deforestation between treated and control communities regardless of the extent to which Watershared agreements were signed. This estimates the effectiveness of the Watershared intervention as rolled out in the region. To explore the potential efficacy of the intervention, we further developed a "continuous-treatment" (CT) model, which has some analogy to the "as-treated" models commonly used in the medical trials literature; in our situation, however, treatment is continuous (% land area enrolled). We followed published guidelines for the analysis of intervention effects in randomized trials (European Medicines Agency, 2015). In addition to uptake propensity, we included as control variables those used for the initial stratification of the control and treatment groups (population and cattle density), the baseline value of our continuous outcome measure (deforestation 2000-2010), and other geographical variables expected a priori to be strongly associated with the outcome (limited using a screening model; SI 5). All our models were fitted as GAMs (Wood, 2011) to account for nonlinear relationships and nonnormal errors, leading to our use of the Tweedie distribution family for deforestation (a percentage) and the beta family for uptake propensity (a proportion), selected based on a priori expectation combined with model comparisons for fit. The ITT and CT predictor sets were identical apart from whether the intervention was coded as a binary control/treatment variable or as % uptake across communities (see Table 1 for all predictors). In both cases, the intervention variable was interacted with uptake propensity, which would indicate whether treatment has an effect above and beyond the effect of endogenous factors. In other words, if there is no deforestation difference between control and treatment communities with high predicted uptake, it implies that the scheme has had no effect beyond the "null" behavior under predisposing conditions. Other plausible interactions between predictors were tested for significance and included where necessary. The effect exerted on the impact evaluation by data points with high leverage (Cook's distance) was evaluated by repeating the analysis without them, which provided a more conservative estimate of the effect size of the PES scheme. Significance of predictors, as well as variable selection, was determined using GAM internal Wald tests and by allowing for shrinkage (Wood, 2017). Model performance was examined by inspection of residuals (Faraway, 2006). The effect size of the intervention under the CT model was approximated by predicting % deforestation in five scenarios where % uptake was set at 0, 20, 40, 60, and 80%. For each scenario, we made 30 plausible predictions of the effect of the intervention based on the model confidence interval. The percentages for each community were multiplied by its forest cover to obtain deforestation in hectares, as well as an overall % change in deforested hectares out of available forest in 2010.
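Continuing the sketch above, the ITT comparison, the continuous-treatment model, and the uptake scenarios can be expressed in simplified form with Tweedie GLMs (the paper's actual models are GAMs with smooth terms and an interaction with uptake propensity; the table and column names below remain hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical merged community-level table: outcomes, covariates, propensity, uptake.
df = pd.read_csv("communities_analysis.csv")
controls = ["uptake_propensity", "population_density", "cattle_density", "defor_2000_2010"]

# Intention-to-treat: % deforestation 2011-2016 against the binary randomized treatment.
X_itt = sm.add_constant(df[["treatment"] + controls])
itt = sm.GLM(df["defor_2011_2016"], X_itt,
             family=sm.families.Tweedie(var_power=1.5, link=sm.families.links.Log())).fit()
print(itt.summary())

# Continuous-treatment model: replace the binary indicator with % uptake.
X_ct = sm.add_constant(df[["pct_uptake"] + controls])
ct = sm.GLM(df["defor_2011_2016"], X_ct,
            family=sm.families.Tweedie(var_power=1.5, link=sm.families.links.Log())).fit()

# Scenario predictions: set uptake to 0, 20, ..., 80% for every community,
# predict % deforestation, and convert to hectares of forest lost.
for uptake in [0, 20, 40, 60, 80]:
    scen = df.copy()
    scen["pct_uptake"] = uptake
    pred_pct = ct.predict(sm.add_constant(scen[["pct_uptake"] + controls]))
    total_ha = float(np.sum(pred_pct / 100.0 * scen["forest_ha_2010"]))
    print(f"uptake {uptake:>2}%: predicted loss {total_ha:,.0f} ha")
```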
| Distribution of, and trends in, deforestation

Total deforestation in the baseline period (2000-2010) was 4,147 ha (±742 ha) but was variable across communities (mean 1.2%, median 0.9%; Figure 2a). With the caveat that any systematic difference between randomized cohorts is necessarily due to chance, and therefore invalidates the premise of frequentist significance tests, we note that there was no significant difference (Wilcoxon rank sum test) in either measure between control and treatment communities (Figure 2b), supporting the visual inspection of balance between control and treatment. The control and treatment communities were also largely balanced in the potential drivers of deforestation we identified (SI 6). Communities with high baseline deforestation tended also to have high deforestation during the intervention period, although there was considerable scatter around this relationship (Figure 2a). The total area of deforestation during the intervention period (2011-2016) was 6,042 ha (±3,933 ha); again, this was variable across communities (mean 1.7%, median 1.2%). Considering that the intervention period is shorter, this implies an increase in overall deforestation during the intervention period.

| Modeling propensity of uptake in Watershared agreements

Our model of uptake propensity explained 50% of the variation in uptake (Figure S4.1a), with considerable and slightly biased scatter around the 1:1 line. Testing different model family specifications and predictor interactions did not improve fit, suggesting omitted variable bias; however, control and treatment communities were largely balanced in uptake propensity (Figure S4.1b in SI 4). Responses to the question "Why did you not join the scheme?" provide some evidence that the unexplained variation in uptake can be explained, at least in part, by nonconfounding factors. Not having attended a sign-up meeting was the most common reason (50%) given by nonparticipants, rather than, for example, lack of interest (SI 7). While not attending a meeting may be correlated with some confounders, it could also reflect variation in the way in which the program was offered across the study area.

| ITT model (79.7% deviance explained; see SI 8)

The ITT analysis revealed no significant difference overall (i.e., in the intercept) between deforestation in control and treatment communities after accounting for control variables including uptake propensity (Figure 3). The slope of uptake propensity varied between control and treatment: uptake propensity was significant for the treatment communities but not for the control communities (p = 0.016 vs. p = 0.11). For treatment communities, the relationship suggested decreasing deforestation with increasing uptake propensity (Figure 4a). However, following removal of data points with high leverage on model outcomes (n = 3), there was no significant control/treatment difference in the relationship between deforestation and uptake propensity, and the effect is therefore volatile (Figure 4b).

| Continuous-treatment model (80.3% deviance explained; see SI 8)

The continuous-treatment model indicated a significant negative relationship between deforestation and both increasing % uptake and % uptake propensity (as an interaction) (p = 0.008; Figure 5a; SI 8). For this model, removing the data points with high leverage on model outputs did not remove the treatment effect (Figure 5b). The treatment effect is small: if 80% uptake were achieved, our models suggest this would result in a reduction of deforestation of just 670 ha compared with 0% uptake. This represents a reduction in the deforestation rate from 1.58 ± 0.015% (1 SD) (under the 0% uptake scenario) to 1.41 ± 0.042% (1 SD).

FIGURE 2 (a) and (b): differences between control and treatment communities. Horizontal line = median, hinges = 25th and 75th percentiles, and whiskers extend to 1.5 × the interquartile range from each hinge; points represent data lying beyond this range.

We did not detect a measurable impact of Watershared on deforestation using the ITT model. This suggests that, as implemented in the landscape, Watershared was not effective at slowing deforestation. While there is some evidence that deforestation was reduced for communities with a higher propensity to take up the scheme in treatment communities (but not in control communities; Figure 4), this effect was driven by three communities with high leverage. Our exploration of efficacy (CT model) showed that deforestation decreased slightly with increasing uptake regardless of uptake propensity, which suggests that improving uptake rates could potentially lead to an effective intervention.
Interpretation of our CT model rests on the assumption that deviation from intended treatment (both uptake in control communities, and some element of the variation of uptake in treated communities) was independent of confounding factors (McNamee, 2009). We are confident that confounding factors did not drive the cases of uptake in control communities; they were the result of an accident of geography (people living in treatment communities who owned land in control communities) and of limited monitoring of the RCT (they should not have been allowed to enroll that land). The 50% of uptake variation we could not explain with our propensity score, while potentially influenced by unobserved confounding factors, is also plausibly due to differences in the way the scheme was offered between communities. Opportunities to enroll land may have varied because of the timing of visits by Natura, links between technicians and community members affecting how effectively news of the meetings spread in some communities, or chance (people being sick, or away). These possibilities, although not directly monitored as part of this RCT, are supported by our interviews with nonparticipants: fifty percent of respondents gave "did not attend meeting" as the reason for not taking up the agreements.

FIGURE 3 Difference in deforestation between control and treatment communities based on the intention-to-treat model, with (full) and without (N − 3) communities with high leverage on model results. Bars are standard errors.

FIGURE 4 The intention-to-treat model suggests decreasing deforestation with increasing uptake propensity for treatment but not control communities when all communities are included (a). However, when three communities with high leverage on model results are discarded, there is no difference between control and treatment cohorts (b). The rug shows the distribution of real data points.

The randomization increases confidence in our analysis. Given that uptake propensity scores and preintervention deforestation rates, inter alia, were balanced between control and treatment communities, we can reasonably expect balance also in unobservable confounders. For example, treatment communities with both high uptake propensity and high uptake can be expected to be balanced in the analysis with similar control communities which would have taken up the scheme had they been offered it. In the absence of being able to perfectly model the propensity to take up the scheme, randomization was therefore very useful for supporting causal inference. It is important to note that our estimated effect is very small, and potentially trivial. If 80% uptake were achieved across the landscape (unlikely to be achievable), our scenario modeling suggests that deforestation would fall from 1.58% (under the 0% uptake scenario) to 1.41%. The only other published RCT evaluation of a PES program looked at the impact of payments to households in Uganda over a 2-year period (Jayachandran et al., 2017) and found a much larger reduction in the deforestation rate (from 9.1% to just 4.2%). However, that project operated in a small area (<99,300 ha vs. 489,400 ha here) with higher deforestation rates. Low baseline deforestation inevitably reduces the scope for impact of a program seeking to reduce deforestation rates (Alix-Garcia et al., 2012).

| Might the Watershared program have had other environmental impacts?
The Watershared program was introduced with the aim not only of conserving forest cover, but also of conserving biodiversity (potentially damaged by forest degradation) and ensuring the supply of locally valued ecosystem services (particularly the quality and quantity of downstream water; Asquith, 2016). A recent analysis of the impact of the Watershared scheme on water quality, using the same RCT design as this study, showed that while excluding cattle from water sources reduced Escherichia coli contamination at that location, there was no difference between control and treatment communities in the quality of their water (Pynegar et al., 2018). Pynegar et al. (2018) suggest that the lack of impact on water quality is because so little land was enrolled in level 1 contracts, and because the scheme involved no targeting, meaning that not all the enrolled land had the potential to affect water quality. It is possible that the scheme may have had a positive impact on local biodiversity through cattle exclusion. However, although detailed data on amphibians, reptiles and dung beetles were collected at endline, these data have not yet been examined.

FIGURE 5 The results of the continuous-treatment model with (a) and without (b) three communities with high leverage on model results. The estimated effect of uptake can be seen by comparing deforestation (color scale) between communities with similar uptake propensity (x-axis) but different actual uptake (y-axis).

| How could the impact of the Watershared program be increased?

Watershared already fulfils some criteria recently identified as correlating with PES success (Börner et al., 2017), such as compliance monitoring and in-kind payments. However, it did not deliver reductions in deforestation at the levels of uptake achieved in the study area. We mention above that low uptake in some communities may have been driven at least partly by differences in how the intervention was presented between communities, which could be beneficial to explore in the future. The value of the in-kind incentives is also likely to play a role in uptake. While it is difficult to draw comparisons across countries with different economies, the value of the incentives in Watershared is low compared with other programs (the value of the incentives for the most restrictive agreements is $10 a hectare plus the equivalent of a $100 joining bonus, but just $1 a hectare for the least restrictive agreements; SI 2). For comparison, Mexico's program pays 27-36 USD per hectare per year depending on forest type (Muñoz-Piña, Guevara, Torres, & Braña, 2008), Costa Rica's national program pays 45-163 USD per hectare per year (Wunder, Engel, & Pagiola, 2008), and the Ugandan PES program paid 28 USD per hectare per year (Jayachandran et al., 2017). Those promoting Watershared argue that it works through nudging, by emphasizing environmental norms and reciprocity rather than paying the opportunity cost, so that the level of incentives is relatively unimportant (Asquith, 2016). There is evidence that farmers enroll due to the perception that they or their community will benefit from improved water quality (Bottazzi et al., 2018). However, both theory (Persson & Alpízar, 2013) and empirical data (Arriagada, Sills, Pattanayak, & Ferraro, 2009) predict that low incentives lead to low participation. We suggest that higher-valued incentives could increase uptake of Watershared. Our evidence suggests that even if uptake could be greatly increased, the reduction in deforestation would be modest.
A common problem in all PES schemes is adverse selection; participants enroll land which is unlikely to be cleared anyway, resulting in low additionality (Börner et al., 2017). A recent analysis suggests that only 13% of the land area enrolled in Watershared agreements has resulted in additional conservation (Bottazzi et al., 2018). If higher payments could increase additionality as well as uptake this may therefore increase the efficacy of the intervention. We finally note that the impact of Watershared may also increase and/or materialize with time, as found for a number of PES schemes (Grima, Singh, Smetschka, & Ringhofer, 2016), especially where livelihood changes are incentivized (Börner et al., 2017). For example, many of the Watershared incentives involve either waiting (fruit tree saplings reaching maturation) or mastery (bee keeping, effective irrigation) before becoming a financially viable alternative to the status quo. | What can RCT contribute to conservation impact evaluation? Establishing causality in environmental policies by properly identifying counterfactual outcomes is essential if environmental policy decisions are to be based on evidence (Ferraro & Hanauer, 2014). Quasi-experimental approaches represent a huge advance over what passed for conservation evaluations in the past, and their increasing use is very positive. However, post-hoc analysis is only as reliable as the counterfactual scenario which can be created statistically and recent evidence demonstrates how even supposedly robust methods such as difference-in-differences can result in biases in impact estimates (Daw & Hatfield, 2018). As much as possible, therefore, conservation interventions should be explicitly designed to allow robust evaluation (Ferraro & Hanauer, 2014). Randomizing a conservation intervention can help to facilitate an evaluation by reducing the role of confounding factors, as well as providing a satisfactory pool of counterfactuals in cases of nonrandom uptake. The Watershared RCT suffered from some contamination of the control and considerable variability in uptake. Despite this "noise", the randomized design was an improvement from a nonrandomized alternative. This is because unobserved confounders driving uptake are likely to exist, which quasi-experimental methods such as matching cannot account for. The existence of a control balanced in all factors for which we have data gives us confidence that the observed effect (or lack of ) is not due to these unobserved confounders. For example, there were low uptake rates in the northern sector which would not have been expected a priori, however randomization ensured that comparable controls existed. Despite calls for more randomized experiments in conservation impact evaluation, their use remains rare. Watershared is one of only three randomized impact evaluations of landscape-scale conservation interventions we are aware of (the others are: Jayachandran et al., 2017;Wilebore, Voors, Bulte, Coomes, & Kontoleon, in press). There are ethical and practical challenges meaning that full RCTs are not always appropriate (Baylis et al., 2016;Deaton & Cartwright, 2018;Ferraro, 2011;Pynegar et al., 2018). However, where possible, randomization certainly offers valuable opportunities for improving causal inference (Ferraro, 2011). The Watershared RCT is the result of a collaboration between practitioners, who had the foresight to implement their intervention in a randomized design, and researchers. 
More such collaborations would facilitate a growth in the robust evaluations that conservation so desperately needs. We hope that conservation can avoid the polarized debate surrounding the value of knowledge generated from RCTs in other fields (Ravallion, 2009), and that randomization can be added to the conservationist toolkit where appropriate.

James Gibbons, Gavin Simpson, Alex Pfaff, and Paul Ferraro provided valued input on analytical approaches, giving very generously of their time. We are also grateful to Kelsey Jack, who designed the initial randomization in 2010. This research was funded by grant RPG-2014-056 from the Leverhulme Trust and grant NE/L001470/1 from the UK's Ecosystem Services and Poverty Alleviation program.

CONFLICT OF INTEREST
There is the potential for conflict of interest as Natura were involved in the research but are also the implementers of the Watershared program which is the focus of our research. However, while N.A. (who was involved in founding Natura) is a co-author, he was not directly involved in the analysis (his role was providing context to help us design the analysis and interpret results). E.P. started working for Natura after this analysis was complete.

AUTHOR CONTRIBUTIONS
E.W. and J.P.G.J. conceived the analysis with input from the other authors. R.D. developed the forest change product and conducted the validation (with help from Crespo). E.W. and J.P.G.J. wrote the paper. N.A. developed the randomization which made the analysis possible.

ORCID
Julia P. G. Jones  https://orcid.org/0000-0002-5199-3335
8,132.6
2019-01-01T00:00:00.000
[ "Economics" ]
Thermodynamics in the Universe Described by the Emergence of Space and the Energy Balance Relation

It has previously been shown that it is more common to describe the evolution of the universe based on the emergence of space and the energy balance relation. Here we investigate the thermodynamic properties of the universe described by such a model. We show that the first law of thermodynamics and the generalized second law of thermodynamics (GSLT) are both satisfied, and that the weak energy condition is also fulfilled for two typical examples. Finally, we examine the physical consistency of the present model. The results show that there exists a good thermodynamic description for such a universe.

Introduction

Numerous astronomical observations show that our universe is currently in accelerated expansion [1,2]. There are usually two ways to explain this phenomenon if we consider the evolution of the universe from the point of view of the dynamics of gravity. One is the modification of the geometric part of Einstein's field equation, such as f(R) theory or Lanczos-Lovelock gravity. The other is the modification of the material part of Einstein's field equation by introducing extra matter with negative pressure or a scalar field (called dark energy). A good model which can explain the current accelerated expansion of the universe is the ΛCDM (lambda cold dark matter) model, which considers a cosmological constant, i.e., the value of the equation of state parameter ω = −1 [3,4]. However, astronomical observations also allow ω to vary with time. This variation with time is usually described by a slowly varying scalar field such as quintessence [5-9], kinetic-energy-driven k-essence [10-12] and tachyon fields [13-17]. These models describe the accelerated expansion of the universe well.

Studying gravity from a thermodynamic point of view is an interesting field in modern theoretical physics. The deep connection between gravity and thermodynamics is generally accepted because of black hole thermodynamics [18-20] and the AdS/CFT correspondence [21]. The equivalence between the Clausius relation δQ = TdS (which connects the heat flow δQ, the entropy S and the Unruh temperature T) and the Einstein equation was first established by Jacobson in 1995 [22]. Although the topology of the quasi-de Sitter apparent horizon is quite different from that of the local Rindler horizon, the Friedmann equation with a slow-roll scalar field can be reproduced by using the first law of thermodynamics −dE = TdS, where dE is the amount of energy flowing through the quasi-de Sitter apparent horizon [23]. Besides, the Friedmann equation can also be derived by calculating the heat flow through the horizon in an expanding universe and applying the Clausius relation to a cosmological horizon [24]. Padmanabhan [25-28], working in the reverse direction, showed that the gravitational field equations in a wide variety of theories reduce to the thermodynamic identity TdS = dE + PdV when evaluated on a horizon. These conclusions further reveal the relation between horizon thermodynamics and spacetime dynamics. Furthermore, Padmanabhan [29,30] revealed the relation between the degrees of freedom of space and the dynamical evolution of the universe.
He derived the standard Friedmann equation of the FRW universe through the simple equation ΔV = Δt (N_sur − N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, N_sur is the number of surface degrees of freedom and N_bulk is the number of bulk degrees of freedom. Namely, the difference between the number of surface degrees of freedom and the number of bulk degrees of freedom in a region of space drives the accelerated expansion of the universe.

From the point of view of thermodynamics, an isolated thermodynamical system tends spontaneously towards thermodynamic equilibrium. That is to say, the entropy of an isolated thermodynamical system, S, cannot decrease and eventually reaches its maximum. In black hole physics there exists a similar law, known as the generalized second law of thermodynamics [19]. The law states that the sum of the entropy of the black hole horizon, S_h, plus the entropy of the matter, S_m, cannot decrease with time, i.e., Ṡ = Ṡ_m + Ṡ_h ≥ 0, where the dot denotes the derivative with respect to time. Afterwards, the generalized second law of thermodynamics (GSLT) was extended to cosmological horizons [31,32]. In the cosmological context, the GSLT has been extensively studied (see, for example, [33-35]).

In Ref. [36], we considered the current accelerated expansion of the universe based on the emergence of space and the energy balance relation ρV_H = TS, where ρ is the energy density of the cosmic matter, S = A_H/(4L_p²) = πH⁻²/L_p² is the entropy associated with the area of the Hubble sphere, V_H = 4π/(3H³) is the Hubble volume and TS is the heat energy of the boundary surface. We then found that the evolution solutions of the universe include the solutions obtained from standard general relativity, and concluded that it is more common to describe the evolution of the universe in this thermodynamic way. Therefore, it is interesting to investigate whether the first law of thermodynamics and the GSLT hold in the model described in Ref. [36].

The goal of the present paper is to study the thermodynamical behavior of the universe considered in Ref. [36] by means of its description in terms of the emergence of space and the energy balance relation. Our analysis shows that the first law of thermodynamics holds and is in fact the Clausius relation. The validity of the Clausius relation means that the evolution of the universe can be regarded as a series of quasistatic processes. We also show, by considering two typical examples, that the GSLT holds throughout the accelerated evolutionary history of the universe and that the total entropy of the universe tends to its maximal value when the universe evolves to the de Sitter universe. The results show that there exists a good thermodynamic description for such a universe.

The paper is organized as follows. In Section 2, we briefly review the model which describes the evolution of the universe based on the emergence of space and the energy balance relation. In Section 3, we show that the first law of thermodynamics holds in the universe described by the present model. In Section 4, the validity of the GSLT and thermodynamic equilibrium are shown; we also obtain the constraints imposed on the energy density and the pressure of the matter. Our conclusions are presented in Section 5. We use units c = ħ = 1.
Dynamical Evolution Equations of the Universe Based on the Emergence of Space and the Energy Balance Relation

Let us begin with the FRW metric which describes the homogeneous and isotropic universe, where R = a(t)r is the comoving radius, h_ab = diag(−1, a²/(1−kr²)) is the metric of the 2-spacetime (x⁰ = t, x¹ = r) and k = 0, ±1 denotes the curvature parameter. Padmanabhan [29,30] proposed that our universe is asymptotically de Sitter and that its evolution can be described by the law (2) (schematically, ΔV = Δt (N_sur − N_bulk) in Planck units), where L_p is the Planck length, N_sur is the number of surface degrees of freedom on the Hubble horizon (H being the Hubble constant), N_bulk is the number of bulk degrees of freedom (T being the temperature of the horizon and k_B the Boltzmann constant), |E| = |(ρ + 3p)V| is the Komar energy and V = 4π/(3H³) is the Hubble volume. The law (2) indicates that the difference between N_sur and N_bulk drives the universe towards "holographic equipartition" (i.e., N_sur = N_bulk).

According to the analysis of Ref. [36], the temperature of the Hubble horizon of the flat FRW universe is employed as given in Equation (5). Here, we assume 1 + Ḣ/(2H²) > 0. The study of quantum field theory in de Sitter space [37] showed that a freely falling observer would measure a temperature T = H/2π on the de Sitter horizon when the radius of the de Sitter horizon is taken as 1/H. Our universe is asymptotically de Sitter, so the temperature of the Hubble horizon should tend to H/2π when t becomes large enough. In fact, the approximation |Ḣ/(2H²)| ≪ 1 has been used in calculating the energy flow crossing the apparent horizon [23,24,38,39]. Therefore, it seems reasonable to assume 1 + Ḣ/(2H²) > 0 when we investigate the thermodynamic properties and dynamical behavior of the accelerated universe. In Section 4, we will show the validity of the assumption 1 + Ḣ/(2H²) > 0 by two typical examples. Thus, inserting Equations (3)-(5) into Equation (2), we obtain the Friedmann acceleration equation (6). On the other hand, according to the energy balance relation ρV_H = TS [40], we obtain another evolution equation of the universe, Equation (7). Combining Equations (6) and (7), a further evolution equation can be obtained. In this way, we obtain the dynamical evolution equations of the universe based on the emergence of space and the energy balance relation. It was shown in Ref. [36] that it is more common to describe the evolution of the universe in such a thermodynamic way, because the solutions of the dynamical evolution equations in this model include the solutions obtained from standard general relativity.

First Law of Thermodynamics for the Present Model

Now that the dynamical evolution of the universe based on the emergence of space and the energy balance relation has been investigated in Ref. [36], it is natural to ask whether the thermodynamic properties (the first law of thermodynamics and the GSLT) hold in such a model. Furthermore, we may ask what the constraints on the evolution of the universe are if the GSLT holds. In this and the next section, we discuss the first law of thermodynamics and the GSLT in this model, respectively. The study of quantum field theory in de Sitter space [37] showed that a freely falling observer would measure a temperature T = κ/2π on the de Sitter horizon, where κ is the surface gravity. For the Q space, Bousso [41] argued for its thermodynamical description and showed that the first law of thermodynamics −dE = TdS holds on the apparent horizon.
Furthermore, Cai and Kim [38] derived the Friedmann equation of the FRW universe with any spatial curvature based on the first law of thermodynamics. Whether the first law of thermodynamics holds on the horizons in different gravity theories has been studied extensively (see, for example, [42-44]). Now let us show that the first law of thermodynamics holds in the universe described by the present model. The amount of energy crossing the Hubble horizon during the time interval dt is given by Equation (9) [41,45], where R_h is the Hubble radius and k^μ is the future-directed ingoing null vector field. Using Equations (6) and (7), we obtain Equation (10), so the amount of energy crossing the Hubble horizon during the infinitesimal time interval is expressed by Equation (11). On the other hand, we can obtain Equation (12), where we use the definition of temperature, Equation (5), and the area-entropy relation S = A/(4L_p²). Comparing Equations (11) with (12), we obtain the equality −dE = TdS, Equation (13), which implies that the first law of thermodynamics holds in the present model.

It is important to note that the strong energy condition ρ + 3p ≥ 0 is violated according to Equation (4). However, we can see from Equation (10) that the null energy condition ρ + p ≥ 0 is satisfied when Ḣ is nonpositive, because the term 1 + Ḣ/(2H²), which is related to the temperature of the Hubble horizon, is positive. In fact, the GSLT is satisfied for an accelerated universe which satisfies the null energy condition. It is also worth mentioning that the amount of heat flux crossing the Hubble horizon during the infinitesimal time interval, δQ, is the change of the energy inside the Hubble horizon, −dE. The minus sign appears because the energy inside the Hubble horizon decreases when the heat flux flows out of the Hubble horizon, so the law (13) is actually the Clausius relation δQ = TdS. Therefore, the evolution of the universe in the present model can be regarded as a series of quasistatic processes, because the Clausius relation works only when the thermodynamic process is reversible. Thus, the temperature of the matter inside the Hubble horizon can be taken as the temperature of the Hubble horizon. This is an important relation that we will use when we discuss the GSLT in the next section.

Validity of the GSLT and Thermodynamic Equilibrium

We have shown in the previous section that the first law of thermodynamics holds on the horizon; it is natural to ask whether the GSLT holds in such a model. In the cosmological context, the GSLT states that the sum of the entropy of the cosmological horizon, S_h, plus the entropy of the matter inside the horizon, S_m, is a nondecreasing function. That is to say, the GSLT can be formulated as [31,32,46]

Ṡ = Ṡ_m + Ṡ_h ≥ 0.  (14)

Further, if the universe can reach an equilibrium state eventually, then the total entropy must satisfy the inequality

S̈ = S̈_m + S̈_h ≤ 0  (15)

at least at the last stage of evolution. The physical meaning of this inequality is that the entropy of the universe increases ever more slowly and reaches a maximum if the universe eventually reaches an equilibrium state. Here we would like to point out that inequality (15) is slightly different from the one in references [47,48], where the authors require the second derivative of the total entropy to be strictly less than zero, i.e., S̈ = S̈_m + S̈_h < 0. However, we allow the equality sign in (15) to hold, because it is possible for the total entropy to take its maximum value even if the first and second derivatives of the total entropy are both zero.
In fact, the first and second derivatives of the total entropy are both zero when the universe evolves into the de Sitter universe. According to the Gibbs relation and the conclusion of the previous section that the evolution of the universe can be regarded as a series of quasistatic processes, we know that the matter of the universe satisfies [34,35,49]

TdS_m = d(ρV) + p dV = (ρ + p) dV + V dρ,  (16)

where V = 4π/(3H³) is the Hubble volume and T equals the temperature of the Hubble horizon. Substituting Equations (5), (7) and (10) into Equation (16), we obtain the change rate of the entropy of the matter, Ṡ_m. For the Hubble horizon, the change rate of the entropy, Ṡ_h, follows from the area-entropy relation. Therefore, we obtain the first derivative of the total entropy, Equation (19), and the second derivative of the total entropy.

Before we discuss the GSLT, let us obtain the constraints imposed on the energy density and pressure of the matter by the present model. Equation (10) can be transformed, and solving the resulting equation we obtain solutions which imply that the sum of the energy density and the pressure must satisfy the relation ρ + p ≤ H²/(8πL_p²). This constraint gives the upper bound on the sum of the energy density and the pressure. Now we investigate the GSLT at the present time and at the last stage of the evolution, respectively.

(i) The GSLT at the present time of the evolution. At the present time, we assume that the scale factor behaves as a(t) ∝ t^α (24), where α is a constant greater than unity because the universe is in accelerated expansion. This form can be obtained when the relation 8πL_p²(ρ + p) ∝ H² is satisfied. In Ref. [36], it was proven that the form of the scale factor is a(t) = t^(2/(3(1+ω))) if the equation of state of the matter is assumed to be p = ωρ, where ω is a constant not equal to −1. In fact, a large number of papers on the accelerated expansion of the universe have assumed the form (24) for the scale factor. For example, the authors of Ref. [50] pointed out that the rate of growth a(t) ∝ t² is consistent with supernova observations. After some calculations, we obtain Equation (25). Inserting Equation (25) into Equation (19), we obtain the change rate of the total entropy, Ṡ, which is obviously greater than zero, so the GSLT is satisfied at the present time of the evolution. The term related to the temperature of the Hubble horizon, 1 + Ḣ/(2H²), can be derived from Equation (25) as 1 − 1/(2α), which shows that the temperature of the Hubble horizon is positive for the current accelerated expansion of the universe (α > 1). Further, we find that the null energy condition holds at the present time of the evolution because Equation (10) is positive.

(ii) The GSLT at the last stage of the evolution. The Friedmann acceleration Equation (6) is derived from the fact that our universe is asymptotically de Sitter, so the scale factor a(t) → Ae^(H₀t) when t → ∞, where A and H₀ are both positive constants. Thus, the scale factor can be taken as a(t) ∝ sinh(H₀t). Under this assumption, we obtain the physical quantities given in Equations (28) and (29). Inserting Equations (28) and (29) into Equation (19), we obtain Ṡ, which implies that the total entropy is nondecreasing and the GSLT is satisfied. The derivative of this expression, i.e., the second derivative of the total entropy, is given by expression (31). Analyzing expression (31), we conclude that S̈ ≤ 0 for sufficiently large time t, which implies that the universe will tend to thermodynamic equilibrium.
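The behaviour of the horizon-temperature factor quoted for the two examples can be cross-checked symbolically. The sketch below is purely illustrative (it is not code from the paper) and assumes only the definitions H = ȧ/a, the factor 1 + Ḣ/(2H²), and the horizon entropy S_h = πH⁻²/L_p² stated earlier.

```python
# Symbolic check (illustrative) of the horizon-temperature factor 1 + Hdot/(2H^2)
# for the two example scale factors used in the text.
import sympy as sp

t, alpha, H0 = sp.symbols('t alpha H_0', positive=True)

def temperature_factor(a):
    H = sp.diff(a, t) / a            # Hubble parameter H = adot/a
    Hdot = sp.diff(H, t)
    return sp.simplify(1 + Hdot / (2 * H**2))

# (i) Power-law expansion a(t) ∝ t^alpha: the factor equals 1 - 1/(2*alpha),
# which is positive for an accelerating universe (alpha > 1).
print(temperature_factor(t**alpha))

# (ii) Asymptotically de Sitter case a(t) ∝ sinh(H0*t): the factor equals
# (1 + tanh^2(H0*t))/2 > 0 and tends to 1 (i.e. T -> H/2π) at late times.
f = temperature_factor(sp.sinh(H0 * t))
expected = (1 + sp.tanh(H0 * t)**2) / 2
print(sp.simplify(f - expected))                     # expected to reduce to 0
print(sp.N((f - expected).subs({H0: 1, t: 0.7})))    # numerical spot check, ~0
print(sp.limit(f, t, sp.oo))                         # -> 1 in the de Sitter limit

# The Hubble-horizon entropy S_h = pi H^{-2} / L_p^2 grows in the power-law case,
# consistent with the GSLT statements above.
Lp = sp.symbols('L_p', positive=True)
H_power = sp.diff(t**alpha, t) / t**alpha
print(sp.simplify(sp.diff(sp.pi / (Lp**2 * H_power**2), t)))   # 2*pi*t/(L_p**2*alpha**2)
```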
In order to see the conclusions Ṡ ≥ 0 and S̈ ≤ 0 clearly, we plot in Figures 1 and 2 the variation of the first and second derivatives of the total entropy over the time range 1/H₀ to 6/H₀, respectively. The term related to the temperature of the Hubble horizon, 1 + Ḣ/(2H²), can be derived from Equations (28) and (29) as [1 + tanh²(H₀t)]/2. This term is positive, so the temperature of the Hubble horizon is positive. Further, we conclude from Equation (10) that the null energy condition holds at the last stage of the evolution.

At the end of this section, we investigate the special solution ρ + p = 0, which describes the de Sitter universe. When the equality ρ + p = 0 is satisfied, we obtain the solutions Ḣ = 0 or Ḣ = −2H². For the solution Ḣ = −2H², we have a(t) ∝ t^(1/2), which implies that the universe is not in accelerated expansion. This is inconsistent with the current assumption. Hence the unique solution is Ḣ = 0, which implies that H is a constant for the de Sitter universe. Substituting the solution Ḣ = 0 into Equation (19), we obtain Ṡ = 0, which implies that the GSLT is satisfied for the de Sitter universe. According to the above analysis, we conclude that the entire accelerated evolutionary history of the universe satisfies the GSLT and that the total entropy of the universe tends to the maximal value, which equals the total entropy of the de Sitter universe in the present model.

Figure caption: The figure shows that the universe will tend to thermodynamic equilibrium for sufficiently large time t.

Conclusions

In this paper, we study the first law of thermodynamics and the GSLT in the universe described by the emergence of space and the energy balance relation. First, we obtain the evolution equations of the universe based on the emergence of space and the energy balance relation. In the process of deriving the temperature of the Hubble horizon, we assume that the term 1 + Ḣ/(2H²) must be greater than zero. This assumption is reasonable because this term equals exactly unity for the de Sitter universe and our universe is asymptotically de Sitter. Indeed, we show the validity of the assumption for the accelerated universe whose evolution law is a(t) ∝ t^α or a(t) ∝ sinh(H₀t) in Section 4. Next, we show that the first law of thermodynamics −dE = TdS is satisfied for the present model. In fact, the validity of the first law of thermodynamics implies that the Clausius relation δQ = TdS is satisfied in the present cosmological context. Therefore, the temperature of the matter inside the universe can be taken as the temperature of the Hubble horizon, because the Clausius relation applies only to variations between nearby states of local thermodynamic equilibrium. Then, we analyze the GSLT and obtain the change rate of the total entropy according to the Gibbs relation and the area-entropy relation. Furthermore, we obtain the constraints imposed on the energy density and pressure of the matter by the present model. These constraints are ρ + p ≤ H²/(8πL_p²) and ρ + 3p < 0, respectively. To arrive at more specific results, we consider two typical examples in which the scale factor is taken as a(t) ∝ t^α and a(t) ∝ sinh(H₀t). The choice of the scale factor is based on astronomical observation and on consistency with the current model. Whether the scale factor is taken as a(t) ∝ t^α or a(t) ∝ sinh(H₀t), the GSLT and these constraints are satisfied. At the same time, the null energy condition ρ + p ≥ 0 is also satisfied.
In addition, we find that the universe will reach a thermodynamic equilibrium state and the total entropy reaches a maximal value when time t tends to infinity. Hence we may conclude that there exists a good thermodynamic description for such a universe. Finally, we must point out that these evolution equations have been obtained and the dynamical properties of such a universe have been studied in Ref. [36]. However, here we analyze the thermodynamic properties for this universe and find that the first law of thermodynamics and the GSLT are satisfied for two typical examples. The conclusions presented here further support the thermodynamic interpretation of gravity and reveal the connection between gravity and thermodynamics.
5,230.4
2018-08-31T00:00:00.000
[ "Physics" ]
Accidentally light scalars from large representations , Introduction In this work we consider perturbative, renormalisable models of scalar fields in four-dimensional quantum field theory.When all operators allowed by Lorentz invariance and the internal symmetries of the model are included in the Lagrangian with generic coefficients, all scalar fields are massive, with the only exception of Nambu-Goldstone bosons (NGBs).By Goldstone's theorem, in models with a continuous global symmetry group G spontaneously broken to a subgroup H, the NGBs, forming the coset G/H, are exactly massless to all orders.As a general rule, the mass of non-Goldstone scalar fields arises at the tree level from the scalar potential.One well-known exception to this rule are pseudo-NGBs, which appear when the symmetry F of the scalar potential is larger than the symmetry G which defines the model as a whole [1].This requires additional fields and interactions, such as gauge or Yukawa couplings, which explicitly break F and induce pseudo-NGB masses via loops. It is less well known that there exist models with tree-level massless scalars which are neither NGBs nor pseudo-NGBs in the above sense.In these models, the most general renormalisable potential compatible with the symmetry G is not invariant under an enhanced continuous symmetry larger than G.Still, some non-NGB scalar fields remain massless at the tree level.To distinguish these accidentally tree-level massless fields from pseudo-NGBs as defined above, we will call them "accidents" for short.Accidents were encountered e.g. in pre-QCD attempts to build renormalisable models of mesons [2], and their nature was clearly emphasised in an early precursor [3] of the little-Higgs idea (reviewed in [4]). 1 Accidents also appear in O'Raifeartaigh-like models of spontaneous supersymmetry (SUSY) breaking [5], where they have been dubbed "pseudo-moduli" and studied in greater detail more recently [6,7]; in this case the scalar potential is additionally constrained by SUSY. In this paper we present some models with accidentally light scalars which are in a sense the most minimal ones.We focus on two examples, with symmetry groups G = SU(2) × U(1) and G = SU(3) × U(1); there are no additional discrete symmetries imposed; the field content is a single scalar multiplet in an irreducible representation.This is to be contrasted with the examples in the literature, which to our knowledge tend to rely on more complicated continuous symmetries (typically multiple copies of the same group), feature additional ad-hoc discrete symmetries, and contain several scalar fields in various representations.The price to pay to avoid these complications is to take the single scalar field in a large representation of G. The simplest possibilities are the five-plet in the SU(2) case, analysed in section 2, and the tenplet in the SU(3) case, analysed in section 3.There are few known results on spontaneous symmetry breaking by large representations, see e.g.[8], and these do not cover our cases of interest, which further motivates our analysis. 
The geometry of field space is non-trivial in our models, due to the large field representation involved. There is a compact manifold M′ of degenerate tree-level vacua. The continuous symmetry group G is completely broken at a generic point on M′. When G is gauged, all points on a single G-orbit are identified, but the resulting tree-level vacuum manifold M of physically inequivalent points does not reduce to a single point. Instead, M is parameterised by one or more non-Goldstone flat directions in field space: these correspond to tree-level massless "accident" fields. Both the scalar and the vector mass spectrum change when moving along M. At special points in M, the vacuum symmetry is enhanced (i.e. a subgroup H of G is unbroken), and additional accidents appear.

In section 4 we will briefly discuss a few potential phenomenological applications of accidents in cosmology (dark matter, slow roll) and in particle physics (Higgs, doublet-triplet splitting). A detailed analysis of the phenomenology is left for future work.

Potential and tree-level spectrum

Our first example of a model with accidentally light scalars is for G = SU(2) × U(1) with a complex scalar in the five-dimensional representation of SU(2) and with unit U(1) charge, ϕ ∼ 5₁. The most general G-invariant renormalisable potential can be written in terms of the bilinears S, S′ and A_a, which transform in the singlet and adjoint representations of SU(2); here T_a are the SU(2) five-plet generators, which may be chosen imaginary and antisymmetric and satisfy tr T_a T_b = 10 δ_ab. It can be checked that the SO(10) global symmetry of the free theory is broken explicitly by V to SU(2) × U(1): there is no larger continuous accidental symmetry.

Since our aim is to study spontaneous symmetry breaking, we take μ² > 0. We further take λ, κ and δ to be positive. While λ must be positive for the potential to be bounded from below, negative values for κ and δ are possible, but not of interest here. With our choice, each of the three terms in the quartic potential is positive definite, and
• S is non-zero if and only if at least one vacuum expectation value (VEV) is non-vanishing;
• S² − |S′|² vanishes if and only if ϕ and ϕ* are aligned in field space, i.e. ϕ = c φ for some complex number c and real unit vector φ;
• if ϕ and ϕ* are aligned, then A_a = 0 by antisymmetry; this corresponds to the fact that the adjoint 3 is contained in the antisymmetric part of the product 5 ⊗ 5.
A minimum is therefore found by choosing ⟨ϕ⟩ such that ϕ and ϕ* are aligned and ⟨S⟩ = μ²/λ, as in Eq. (3). The remarkable feature of this potential is the existence of one flat direction which is not associated to a NGB, corresponding to one accidentally massless field. The vacuum manifold is indeed five-dimensional: while the overall scale v of the VEV is fixed by the minimisation condition, one is free to rotate the five components v_j, and to choose the angle θ. Changes in θ correspond to the U(1) Goldstone direction. Of the four directions in SO(5)/SO(4) ≃ S⁴ corresponding to different choices for the orientation of the VEV, three are associated to the SU(2) Goldstone directions, but the fourth one does not correspond to any symmetry generator. When gauging SU(2) × U(1), all points on the Goldstone manifold are identified, but there remains a flat direction of degenerate, physically inequivalent vacua.
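For orientation, the bilinears and the quartic potential discussed above can be written in the following standard form. The coefficient normalisations below are a hedged reconstruction rather than a quote of the paper's Eqs. (1)-(2); they are chosen so that aligned configurations minimise the potential at ⟨S⟩ = μ²/λ, as stated in the text.

```latex
% Hedged reconstruction of the bilinears and the potential
% (coefficient conventions are assumptions, not a verbatim quote).
\begin{align}
  S   &= \phi^{\dagger}\phi , \qquad
  S'   = \phi^{\mathsf T}\phi , \qquad
  A_a  = \phi^{\dagger} T_a \phi , \\[4pt]
  V   &= -\mu^{2} S
         + \tfrac{\lambda}{2}\, S^{2}
         + \tfrac{\kappa}{2}\,\bigl(S^{2} - |S'|^{2}\bigr)
         + \tfrac{\delta}{2}\, A_a A_a .
\end{align}
```

With a form of this kind, the κ and δ terms vanish on aligned configurations, and minimising the remaining singlet part indeed gives ⟨S⟩ = μ²/λ, consistent with the statements above.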
Explicitly, an SU(2) ≃ SO(3) five-plet can be represented as a traceless symmetric 3 × 3 matrix, Φ ≡ ϕ j λ j , with a convenient basis given by the symmetric Gell-Mann matrices λ 1,3,4,6,8 .The SO(3) transformation O acts as Φ → OΦO T .Using SO(3) × U(1) invariance, the VEV can thus be chosen real and diagonal: The accidentally flat direction is parameterised by the angle α.One can show that α ∼ α + π/3 as well as α ∼ −α under SU(2) × U(1), hence the fundamental domain of α can be chosen to be α ∈ [0, π 6 ].After diagonalisation, the scalar mass matrix becomes where with m λ the mass of the radial mode.When G is gauged, the gauge boson masses are where g 1 and g 2 are the U(1) and SU(2) gauge coupling, respectively.Fig. 1 shows the correlations among the tree-level masses of scalars and gauge bosons, as a function of α.Note that the sum of scalar squared masses remain constant along the flat direction, and the same holds for the sum of vector squared masses. Figure 1: Left panel: tree-level masses of the scalar fields moving along the flat direction α, for some benchmark values of the couplings, κ = 0.3 and δ = 0.1.The mass of the radial mode m 2 λ is not shown, as it is controlled by an independent quartic coupling.Right panel: tree-level masses of the SU(2) vector bosons in units of g 2 2 v 2 .At α = 0 one of the gauge bosons is massless as U(1) ′ is preserved: at such point the corresponding would-be NGB becomes a second accidentally massless scalar. There exists a distinguished point on the vacuum manifold, corresponding to a null eigenvector of one of the SU(2) generators, T3 say, where a U(1) ′ subgroup of SU(2) is unbroken.In the above parameterisation, it is given by α = 0.At this enhanced-symmetry point, there appears a second accidentally massless scalar, which corresponds to the U(1) ′ would-be NGB.This should be contrasted with the standard picture in the absence of a flat direction: in that case an enhancedsymmetry vacuum is an isolated minimum of the potential, and an unbroken gauge symmetry implies a massless gauge boson, but no massless scalar.There are degeneracies in the massive spectrum too, m κ = m 0 , m + = m − and m W + = m W − ; they are similarly associated with U(1) ′ -charged states.The two accidents together form a complex scalar with U(1) ′ charge 2. A second distinguished point along the flat direction corresponds to an eigenvector of maximal eigenvalue 2 of T 3 , and coincides with the opposite endpoint of the fundamental domain, α = π/6.In this point G is completely broken (as in any generic point), but still there are degeneracies in the mass spectrum, m 0 = m − and m W 0 = m W − . The extraneous massless degree of freedom which is found at a generic point of the vacuum manifold is not a pseudo-NGB in Weinberg's strict sense [1], since the potential admits no accidental symmetry it could correspond to.Nevertheless, in the limit where some of the couplings vanish, the symmetry of the potential is enhanced, and some of the scalar degrees of freedom become NGBs. 
3 In particular, for κ = δ = 0 and vanishing gauge couplings, the model has a global SO(10) symmetry spontaneously broken to SO (9), and indeed there is only one massive radial mode with non-zero mass m λ , plus nine NGBs.When κ is switched on, the global symmetry is reduced to SO(5) × U(1), cf.Eqs.(1)-( 2).This symmetry is spontaneously broken by the VEV to SO(4), giving rise to five massless NGBs, while four scalars acquire a mass m κ .When δ is switched on, the symmetry is explicitly broken to SU(2) × U(1), but five massless modes remain.Four of them are NGBs of the exact, spontaneously broken symmetries, while the fifth is the accident.The characteristic feature of this model is that the explicit breaking does not induce a tree-level mass proportional to δ for the accident, which remains light (to the extent that the model is perturbative) even when δ is of the order of the other quartic couplings. We conclude by discussing an interesting property of the tree-level mass matrices in the presence of accidents.We will argue that in models with one or more accidentally-flat directions α i , the quantities tr M 2 and tr M 4 are independent of α i at the tree level, where M 2 = M 2 (α i ) stands either for the scalar mass matrix, or the vector boson mass matrix M 2 W , or the fermion mass matrix The argument goes as follows: these traces are quadratic or quartic polynomials in the scalar VEV components v j , with all indices contracted in a G-invariant way.On the other hand, the scalar potential V (ϕ j ) contains, by definition, all independent G-invariant polynomials in ϕ j up to degree four.The minimum value of the potential, V (v j ), is constant along the accidentally flat directions α i .This implies that the vacuum is invariant under a symmetry G v larger than G. Let us assume that the action of G v lifts to a linearly realized action on the initial scalar fields (in the above example, G v = SO(5) × U(1)).Then, the VEVs of G v -invariant operators in the potential are constant as the α i vary.On the other hand, the G v -breaking operators vanish in the vacuum.In conclusion, there is no α i -dependent polynomial that can contribute to the trace of M 2 (α i ) or M 4 (α i ).One can check that this argument holds for all the models with accidents considered in this paper. One-loop lifting of the flat direction Let us focus on the special U(1) ′ preserving point of the previous section, at α = 0.At the one-loop level, the tree-level scalar potential is corrected by the Coleman-Weinberg effective potential [9,10], where Str denotes the weighted supertrace, Λ is the renormalisation scale, and M 2 is the scalarfield dependent mass-squared matrix.The one-loop effective potential gives rise to a Λ-dependent tadpole term for the radial mode of tree-level mass m λ .We impose that this tadpole should vanish as a renormalisation condition, i.e. we define the renormalised VEV to be v 2 = 2µ 2 /λ in terms of the renormalised µ and λ.One then finds that all one-loop tadpole terms for the other scalar fields vanish as well, so that the U(1) ′ preserving point remains a critical point of the effective potential. Concerning the mass spectrum, the NGBs remain massless as they should, while the two modes which were accidentally massless at the tree level pick up a finite and positive one-loop mass, where , and f is a positive-definite function, We conclude that the symmetry-enhanced point becomes an isolated minimum of the potential, after including one-loop corrections. 
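The one-loop subtraction just described can be made concrete with a small numerical sketch of the Coleman-Weinberg supertrace. The constants below (3/2 for scalars and fermions, 5/6 for vectors) and the degree-of-freedom weights are standard textbook conventions and an assumption on my part rather than a quote of the paper's formula; the spectra fed in are placeholders, not the model's actual masses.

```python
# Illustrative sketch of the one-loop Coleman-Weinberg correction via a
# weighted supertrace, V_1 = (1/64 pi^2) Str[ M^4 (ln(M^2/Lambda^2) - c) ].
# Scheme constants and dof weights are assumed conventions, not the paper's.
import numpy as np

def v_cw(m2_scalars, m2_vectors, m2_fermions, Lambda2):
    """One-loop CW potential from arrays of field-dependent mass-squared eigenvalues."""
    def contrib(m2, dof, c):
        m2 = np.asarray(m2, dtype=float)
        # m^4 ln(m^2) -> 0 smoothly for vanishing eigenvalues (massless modes).
        ln = np.where(m2 > 0, np.log(np.maximum(m2, 1e-300) / Lambda2), 0.0)
        return np.sum(dof * m2**2 * (ln - c))
    total = (contrib(m2_scalars, 1, 1.5)        # 1 dof per real scalar
             + contrib(m2_vectors, 3, 5/6)      # 3 polarisations per massive vector
             - contrib(m2_fermions, 2, 1.5))    # minus sign: fermionic supertrace (2 dof per Weyl)
    return total / (64 * np.pi**2)

# Toy usage along a flat-direction parameter alpha with hypothetical spectra.
for alpha in np.linspace(0, np.pi / 6, 4):
    m2_s = [1.0 + 0.1 * np.cos(6 * alpha), 0.5, 0.0]   # placeholder eigenvalues
    m2_v = [0.8 - 0.1 * np.cos(6 * alpha), 0.8]
    m2_f = [0.6, 0.6 + 0.05 * np.cos(6 * alpha)]
    print(round(alpha, 3), v_cw(m2_s, m2_v, m2_f, Lambda2=1.0))
```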
For the second distinguished point on the tree-level vacuum manifold, at its opposite end α = π/6, one finds that there is similarly no tadpole term induced at one loop for the tree-level massless mode (once the one-loop tadpole for the radial mode has been subtracted).However, its one-loop mass is instead tachyonic, and can be written as with Therefore this second point is a saddle point of the effective potential. Note that the accident effective potential does not depend on g 1 at one loop.Indeed, gauging U(1) preserves the SO(5) symmetry which is recovered for δ → 0 and g 2 → 0. In this limit the accidents become NGBs of SO(5)/SO (4), and therefore their effective potential must vanish. Coupling to fermions: accident misalignment The low-energy fluctuations around the U(1) ′ -preserving point at α = 0 are described by a simple effective field theory: a U(1) ′ gauge theory with one light charged scalar.The scalar mass m 2 acc is loop-suppressed with respect to the masses of the heavier states constituting the UV completion.It is interesting to study the question whether the symmetry-enhanced point can be destabilized by loop effects, which would spontaneously break the residual U(1) ′ .However, we have shown that, with a field content of only scalars and gauge bosons, m 2 acc is always positive.We therefore add to the model a minimal anomaly-free set of fermion fields coupled to the fiveplet ϕ.We take χ and ψ to be left-handed Weyl fermions, transforming as 3 ±1/2 with respect to SU(2) × U(1), 4 which allows for the terms where the matrix Φ was defined above Eq.( 4).The complex phases of the Yukawa couplings y ψ and y χ can be set to zero without loss of generality, but then the phase of the Dirac mass M becomes physical.In Fig. 2 we plot the accident mass at α = 0 induced by fermion loops, in the limit where gauge and scalar self-couplings are negligible, for the special case y χ = y ψ ≡ y.For a small |M | ≪ yv, we find m 2 acc is always positive, while for sizeable |M | with a suitable phase, there is a region where the accident becomes tachyonic, so that fermion loops do indeed destabilize the symmetry-enhanced point.In this region, the former saddle point α = π/6 becomes the minimum of the effective potential, and U(1) ′ is spontaneously broken at the loop level. Note that, even though this spontaneous breaking is induced at one-loop, its scale is the same as for the tree-level breaking of SU (2), that is to say, the gauge boson masses in the new minimum are all of the order g 2 v.In other words, the effective field theory valid close to the symmetry-enhanced point is not suitable anymore, because the new U(1) ′ -breaking minimum is at distance of the order of the cutoff v in field space.It would be more interesting to find a configuration where the effective potential has a minimum α min close to (but not exactly at) the U(1) ′ -preserving point, which would imply a breaking scale v ′ ≪ v, where we define v ′ /v ≡ sin(3α min ).In the effective theory well below the scale v, one could then identify the accident field with the Higgs boson of U(1) ′ breaking. 
To this end, we now discuss the combined effect of scalar self-couplings, gauge couplings and Yukawa couplings. The one-loop effective potential along the accident direction can be evaluated explicitly as a function of α because, on the tree-level vacuum manifold, M² is simply given by the tree-level mass matrix, which is easily diagonalised.

Figure 2: The fermion contribution to the one-loop mass-squared of the accident, at α = 0 (purple curves) and α = π/6 (orange curves), as a function of |M|/yv, where M = |M|e^(iθ) is the Dirac mass and the two Yukawa couplings are assumed to be equal, y_χ = y_ψ = y. For e.g. θ = 0 (solid curves), the U(1)′-preserving point is destabilised for M ≃ 0.8 yv, whereupon the U(1)′-breaking point at α = π/6 becomes the minimum.

Subtracting the radial tadpole and the vacuum energy at the renormalisation point α = 0, one obtains Eq. (13). Here the scalar mass matrix is given by Eq. (5), and the eigenvalues of the gauge boson mass matrix M²_W by Eq. (7). The 6 × 6 fermion mass matrix m_F resulting from Eq. (12) can likewise be diagonalised analytically, and we denote M²_F ≡ m_F† m_F. Note that the Λ-dependence in Eq. (13) is spurious, since tr M⁴(α), tr M⁴_W(α) and tr M⁴_F(α) are α-independent (see Section 2.1): the divergences have already been subtracted, and the resulting effective potential is finite.

The couplings can now be chosen to obtain a minimum of the one-loop corrected potential parametrically close to α = 0. The potential for an example point in parameter space is shown in Fig. 3. At this point, the gauge couplings are negligible, and the loop contributions from scalars and fermions largely balance each other, leading to a small misalignment from the U(1)′-preserving direction, with v′/v ≃ 0.06. However, such a minimum can be obtained only at the price of fine-tuning the parameters at the per-mille level. This in turn implies that higher loop corrections could significantly shift the minimum of the potential. To understand the fine-tuning, it is instructive to consider the Fourier expansion of the contributions to the effective potential, with coefficients c₆ and c₁₂ that are functions of the couplings. Both the fermionic and bosonic contributions to c₁₂ turn out to be numerically suppressed with respect to the respective contributions to c₆, by at least two orders of magnitude (and higher harmonics are even more suppressed, which is why they are neglected here). For generic values of the couplings, the effective potential is therefore dominated by the lowest harmonic, ΔV_CW ≃ c₆ cos(6α), and the minimum is either at α = 0 or at α = π/6 depending on the sign of c₆. In order to obtain a small misalignment, the fermionic and bosonic contributions to c₆ must cancel for the most part, and moreover c₆/c₁₂ must be accurately tuned, in order to achieve (v′/v)² ≃ 1/2 + c₆/(8c₁₂) ≪ 1. The tuning in the coefficients is numerically of the order of higher-loop effects, which we neglected in our analysis. In other words, the exact position of the minimum shown in Fig. 3 is not under theoretical control.
A supersymmetric model with accidents The insights gained from the SU(2) × U(1) model of section 2.1 can be used to build a simple supersymmetric model of accidentally light fields.Promote ϕ to a chiral supermultiplet Φ = φ + θψ + θ 2 F of N = 1 SUSY, and include a second chiral supermultiplet Φ = φ + θ ψ + θ 2 F with the conjugate quantum numbers 5 −1 .Introduce the superpotential with the bilinear chiral superfields S = Φ Φ, S ′ = ΦΦ, S ′ = Φ Φ, and A a = ΦT a Φ.This is the most general superpotential compatible with G = SU(2) × U(1), up to mass dimension four.If G is a global symmetry, the full scalar potential is given by the F -term potential, where subscripts indicate derivatives with respect to superfields, and K I J is the inverse Kähler metric. Any critical point with W I = 0 for all I gives a SUSY vacuum.By comparing W with the non-supersymmetric scalar potential V of Eq. ( 1), one observes that SUSY vacua are in one-to-one correspondence with the critical points of V , and are obtained from the latter by simply replacing µ 2 → µM , ϕ → φ and ϕ * → φ.This is despite the fact that the supersymmetric scalar potential V F is not at all similar to the non-supersymmetric V ; in particular, it depends on twice as many scalar fields and includes operators up to dimension 6.Thus, following the analysis of section 2.1, in the region where all couplings are real and positive there are SUSY vacua corresponding to Eq. ( 3): the associated moduli space is compact, with no runaway directions; the VEVs of φ and φ are aligned; G is completely broken at a generic point in moduli space, and broken to U(1) ′ at a special point; there are five massless superfields, four of which are Goldstones while one is an accident, associated to the compact modulus parameterised as in Eq. ( 4); at a special point in moduli space there is one less Goldstone superfield and one more accident superfield. When G is gauged, the D-terms, must also vanish in a SUSY vacuum.This is the case if the VEVs of the five-plets φ, φ, φ * and φ * are all aligned in field space, and |⟨φ⟩| = |⟨ φ⟩|.The erstwhile Goldstone chiral superfields are absorbed by the gauge superfields.One chiral supermultiplet remains massless on the entire moduli space, and a second one becomes massless at the special point.Thus, the number of chiral-superfield accidents matches the number of real-scalar accidents of the non-supersymmetric model.The interest of this model is that a SUSY accident, i.e. a tree-level massless supermultiplet, remains massless at all orders in perturbation theory by the non-renormalisation theorem, as long as SUSY is exact.In section 4.4 we will sketch possible phenomenological applications of SUSY accidents. 3 The first in a series of accidents: an SU(3) ten-plet Accidentally massless scalars are not specific to the SU(2) × U(1) five-plet model.The same phenomenon occurs in other models with large representations.Consider for instance the symmetry group G = SU(N ) × U(1) with a scalar field ϕ ijk in the three-index symmetric representation of SU(N ) and unit U(1) charge.Here i, j, k = 1, . . ., N are fundamental indices; the ϕ multiplet contains N (N + 1)(N + 2)/6 components.For N = 2, one can check that the most general renormalisable scalar potential compatible with G gives tree-level masses to all scalars, while accidents start appearing for N ≥ 3.In the following we give some details on the N = 3 model, commenting on N > 3 at the end of the section. 
Let us thus consider G = SU(3) × U(1) with a scalar field ϕ ∼ 10₁. Its components can alternatively be written as a vector with one index in the ten-dimensional representation, ϕ_I with I = 1, ..., 10. The scalar potential for a 10 representation of SU(3) has been partially analysed previously, e.g. to derive discrete flavour symmetries [11] or to stabilise a scalar dark matter candidate [12]. In the latter paper, the existence of tree-level flat directions was pointed out. Here we investigate their nature and implications in more detail.

The most general renormalisable potential invariant under SU(3) × U(1) includes, besides the mass term, only two algebraically independent quartic invariants, built from a singlet bilinear S and an adjoint bilinear A_a. Here T_a are the SU(3) generators in the 10 representation, satisfying tr(T_a T_b) = (15/2) δ_ab. The SO(20) global symmetry of a free theory of 20 mass-degenerate real scalars is respected by the λ term; however, it can be checked that δ breaks it explicitly to SU(3) × U(1), with no larger continuous symmetry surviving. We take μ² > 0 to realise spontaneous symmetry breaking, while boundedness from below requires λ > 0 and δ > −λ/3. It so happens that accidentally massless scalars arise only for positive values of δ, hence we focus on δ > 0.

Since only the A_a A_a term is sensitive to the direction of the VEV in field space, the potential is minimized in a direction where A_a A_a is minimal, i.e. where ⟨A_a⟩ = 0. The VEV can then be rescaled to satisfy ⟨S⟩ = μ²/λ ≡ v²/2, in order to obtain a minimum of the full potential. One such direction can readily be identified as the common null eigenvector of the two Cartan generators of SU(3), conventionally taken to be T₃ and T₈, so that SU(3) breaks to U(1)₃ × U(1)₈. We call the corresponding direction in field space point (I), with ⟨ϕ⟩ = v^(I)/√2. In terms of the ϕ_ijk components, G transformations can be used to bring it to the form ⟨ϕ_(123)⟩ = |v|/√2, with all other component VEVs vanishing. Here the parentheses in ϕ_(123) stand for weighted symmetrisation.

At point (I), out of the 20 real scalar fields, only seven are massive and 13 are massless at the tree level, despite the fact that only seven generators are spontaneously broken. The remaining six massless scalar fields are accidents. In the diagonal form of the scalar mass matrix, the masses of the radial mode and of the other massive modes are given by m_λ² = λv² and m_δ² = δv², respectively. When SU(3) × U(1) is gauged with couplings g₃ and g₁, the gauge boson masses can likewise be written down explicitly. It can be checked that there are no points preserving a larger continuous symmetry than U(1)₃ × U(1)₈ on the vacuum manifold, and that the symmetry-preserving point is unique up to G-transformations.
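The statement that ⟨A_a⟩ = 0 at point (I), and also at the second point and the superpositions discussed in the next paragraph, can be verified numerically. The sketch below is an illustrative check, not code from the paper: it builds the ten-plet as a totally symmetric 3-index tensor, acts with the fundamental Gell-Mann generators on each index, and evaluates A_a = Σ ϕ*_ijk (T_a ϕ)_ijk; the overall VEV normalisation is irrelevant for this check.

```python
# Illustrative check (not from the paper): for the VEV directions discussed in
# the text, the adjoint bilinear A_a = sum_{ijk} phi*_{ijk} (T_a phi)_{ijk}
# vanishes, so only the singlet part of the potential is active at the minimum.
import numpy as np

# Fundamental SU(3) generators t_a = lambda_a / 2 (Gell-Mann matrices).
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7][0, 0] = l[7][1, 1] = 1 / np.sqrt(3); l[7][2, 2] = -2 / np.sqrt(3)
t = l / 2

def generator_action(a, phi):
    """(T_a phi)_{ijk}: act with t_a on each of the three symmetric indices."""
    return (np.einsum('il,ljk->ijk', t[a], phi)
            + np.einsum('jl,ilk->ijk', t[a], phi)
            + np.einsum('kl,ijl->ijk', t[a], phi))

def adjoint_bilinear(phi):
    return np.array([np.einsum('ijk,ijk->', phi.conj(), generator_action(a, phi))
                     for a in range(8)])

def symmetric_tensor(entries):
    """Totally symmetric tensor from {(i,j,k): value} given on one index ordering."""
    phi = np.zeros((3, 3, 3), dtype=complex)
    for (i, j, k), v in entries.items():
        for p in {(i, j, k), (i, k, j), (j, i, k), (j, k, i), (k, i, j), (k, j, i)}:
            phi[p] = v
    return phi

phi_I  = symmetric_tensor({(0, 1, 2): 1.0})                  # point (I): phi_(123)
phi_II = symmetric_tensor({(0, 0, 0): 1.0, (1, 1, 1): 1.0,   # point (II): phi_iii
                           (2, 2, 2): 1.0})

for name, phi in [("point (I)", phi_I), ("point (II)", phi_II),
                  ("superposition", 0.6 * phi_I + 0.8 * np.exp(0.3j) * phi_II)]:
    print(name, np.max(np.abs(adjoint_bilinear(phi))))        # all ~ 0
```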
Away from the special point (I), at a generic minimum defined by ⟨A a ⟩ = 0 and ⟨S⟩ = v 2 /2, we find that G is completely broken, with nine NGBs, nine massive modes, and two accidentally massless modes.To explicitly construct the associated flat directions, consider a second point (II) in field space, ⟨ϕ⟩ = v (II) , defined by ⟨ϕ 111 ⟩ = ⟨ϕ 222 ⟩ = ⟨ϕ 333 ⟩ = |v|/ √ 6 and all other VEVs vanishing.The points (I) and (II) are not equivalent under a G-transformation, as evident from the different tree-level mass spectrum: Likewise, the SU(3) gauge boson masses at point (II) are m 2 V /2 (×6) and 3m 2 V /2 (×2).In fact, any superposition of the VEV directions defining points (I) and (II) gives rise to a minimum of the potential with ⟨ϕ⟩ = v (I) cos α + v (II) sin α, where α ∈ [0, π/2].Moreover, it can be checked that the vacuum energy does not depend on the complex phases of the four VEVs of ϕ (123) and ϕ iii .Three of them can be rotated away using the U(1) 3 × U(1) 8 × U(1) generators.However, the fourth phase β is physical, and so one obtains a two-parameter family of physically inequivalent vacua: To summarize, the manifold of tree-level degenerate vacua is eleven-dimensional.At generic points in this manifold, G is completely broken, and nine of the flat directions are Goldstone directions.When G is gauged, gauge-equivalent points are identified, and nine NGBs are absorbed by the gauge bosons.There remains a two-dimensional manifold of gauge-inequivalent vacua, corresponding to two accidentally massless scalars, parameterised by two angles α and β.At point (I) the vacuum manifold degenerates, there is a residual symmetry U(1) 3 × U(1) 8 , and four additional tree-level accidents appear.Two of them correspond to the would-be NGBs of restored symmetries, while the other two are of different nature; they are not associated to any of the 11 flat directions, but arise only at the special point. The non-vanishing scalar masses along the vacuum manifold can be written as the roots of a certain cubic polynomial; their explicit expressions are not very illuminating.Nevertheless, they obey a simple sum rule, as is easily checked directly from Eq. ( 18): tr M 2 = (λ + C 2 δ) v 2 , where C 2 = 6 is the quadratic Casimir.Similarly, the sum of gauge boson masses squared is constant and equal to (C 2 g 2 3 + g 2 1 )v 2 ; see Section 2.1.Let us focus on the special point (I), where α = 0 and where β becomes the Goldstone direction of the spontaneously broken U(1).The six non-Goldstone flat directions correspond to the real and imaginary parts of ϕ 111 , ϕ 222 and ϕ 333 .The fate of these accidents beyond the tree level can be determined by computing the one-loop effective potential.In analogy with our computation of section 2.2, one finds that subtracting the tadpole term induced for the radial part of ϕ (123) renders the one-loop masses finite.All six accidents receive a positive and equal one-loop mass given by The flat directions are therefore lifted, and the symmetry-enhanced point (I) becomes an isolated minimum of the effective potential, i.e. the physical vacuum of this model.Point (II), on the other hand, is found to be destabilised by loops and becomes a saddle point of the effective potential. The mass degeneracy of the six accidents (as well as the other mass degeneracies in the spectrum) are due to discrete symmetries preserved after spontaneous symmetry breaking.In particular, G contains an S 3 subgroup acting by permutation on the ϕ ijk indices, left unbroken in the vacuum of Eq. 
(22). Note that the remnant symmetries U(1)₃ × U(1)₈ and S₃ do not commute. For example, the accident components ϕ_iii carry different charges, yet they form a triplet under permutations. In principle, such remnant gauge symmetries could be radiatively broken by fermion loops, which might destabilise the special point, in the spirit of section 2.3; in this case finding the new minimum requires a multi-field effective potential computation.

Finally, let us comment on the analogous models obtained by replacing SU(3) with SU(N). The structure of the most general renormalisable potential V is the same as in Eq. (18), for any N. For μ², λ, δ > 0, the dimension of the tree-level vacuum manifold grows rapidly, yielding N(N − 1)(N − 2)/3 non-Goldstone flat directions. As for N = 3, there can be special points on the tree-level vacuum manifold where a subgroup of G remains unbroken, and the number of accidents is enhanced. A systematic classification of these models and their vacua is left for future work.

Possible applications

Could accidentally light scalars play a role in real-world particle physics or cosmology? Let us present a few possible applications of phenomenological interest.

Accident dark matter

The toy models presented above could actually describe a realistic dark sector with accidents playing the role of dark matter (DM) candidates. Assume the new scalar field ϕ to be a Standard Model (SM) singlet, and the SM sector to be neutral with respect to the dark-sector symmetry G. The two sectors can communicate (and thermalise) through a Higgs portal interaction, λ_Hϕ (H†H)(ϕ†ϕ), with H the SM Higgs doublet. Let us consider the minimal model, with dark gauge symmetry SU(2) × U(1) spontaneously broken to U(1)′. Indeed, we have shown that such a symmetry-enhanced vacuum is selected once the accident flat direction is lifted by radiative corrections (at least in the absence of dark fermions).

This remnant symmetry guarantees that the lightest charged particle is stable (without the need to assume an ad-hoc global symmetry). This is naturally the complex scalar accident, since it is charged under the unbroken U(1)′ and its mass is generated only at loop level. The unbroken gauge symmetry at the special point guarantees the presence of massless dark photons, into which the accidents can annihilate. The DM phenomenology in the case of annihilation into dark, massless gauge bosons is discussed e.g. in section 5.2.1 of [12]. In our case, the observed relic density can be reproduced when these annihilations freeze out, provided the DM mass and the combination Q_D²α_D satisfy the usual relic-abundance relation (cf. [12]), where α_D = g_D²/(4π), g_D is the U(1)′ dark gauge coupling and Q_D = 2 is the accident charge. Alternatively, DM annihilation through the Higgs portal can dominate. In this case direct detection constraints require m_DM ≳ 2-3 TeV, or m_DM within a very narrow window around the resonance m_DM ≃ m_h/2, see e.g. [12]. The DM direct detection, indirect detection and collider constraints all depend on the size of the Higgs portal coupling, which can thus be tested.
A massless dark photon contributes to the extra-radiation parameter ∆N eff , which is constrained both by BBN and CMB.Therefore, it needs to decouple from the early-universe thermal bath sufficiently early (so that its contribution is sufficiently diluted by the SM reheating, from the decoupling of the various SM particles).For a single dark photon, one finds that the decoupling temperature should be above a few hundreds of MeV, which implies a DM mass above a few GeV [12].Future CMB observatories are expected to have the ability to rule out or establish an extra radiation component at the level of a single dark photon.Another relevant constraint comes from galactic-scale structure formation: the so-called ellipticity constraint gives an upper bound on the strength of the dark-photon long-range force, Q 2 D α D ≲ 0.4 10 −11 (m DM /GeV) 3 [13][14][15], which combined with the relic density constraint implies a DM mass above ∼ 100 GeV in our case. If the Higgs portal coupling is tiny, the SM and dark sectors do not thermalise with each other in the early Universe, but still each sector can thermalise individually.Such a scenario with two thermal baths leads to a different, but perfectly viable, DM phenomenology [16,17].In particular if the dark sector has a temperature T ′ smaller than the one of the visible sector T , the dark photon contribution to ∆N eff is suppressed by a factor of (T ′ /T ) 3 , and becomes irrelevant. An analogous analysis holds for the SU(3) × U(1) model: the natural DM candidate is the multiplet formed by the six degenerate accidents, charged under the remnant gauge symmetry U(1) 3 × U(1) 8 , corresponding to two dark photons (see [12] for quantitative constraints). Cosmology along the accident potential As well known, any inflationary potential must be extremely flat along the inflaton scalar field direction.The possibility to invoke a shift symmetry to protect the flatness of the inflationary potential has been considered extensively since decades.This constitutes the "natural inflation" scenario [18] (also appearing in axion setups).In these scenarios the inflaton is the NGB of a spontaneously broken continuous symmetry.Inflation requires that the shift symmetry is slightly broken for the potential not to be totally flat, and the inflaton is therefore a pseudo-NGB. It is interesting to note that the SU(2) × U(1) potential we obtain along the accident direction is of similar form, i.e.V = Λ 4 [1 + a cos(φ/v)], with φ the accident field.This potential is known to be in tension with Planck data [19].To be in agreement with Planck data one possibility is to let the inflaton slow roll down this potential and to end inflation by an extra waterfall scalar field, see e.g.[20].However, such a field does not exist in our model, and adding it would imply to significantly change the structure of the scalar potential and of its minima.A better option might be to invoke a modified cosmology posterior to inflation (see e.g.[21]), or non-minimal couplings to gravity (see e.g.[22]). In general, the potential of accidents can have a richer structure than the minimal expression above: firstly, it may include higher harmonics of the form a n cos(n φ/v); secondly, in models with multiple tree-level flat directions as the SU(3)×U (1) model, slow roll may occur in the corresponding multi-field space.A dedicated study would be needed to assess the implications of these features on inflation. 
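To see concretely why a potential of this cosine form struggles with the data unless the decay constant is super-Planckian, the short numerical sketch below (our own illustration, not part of the model; all numbers are placeholders) evaluates the standard slow-roll parameters ε = (M_P²/2)(V′/V)² and η = M_P² V″/V for V = Λ⁴[1 + a cos(φ/v)]:

import numpy as np

M_P = 1.0  # reduced Planck mass; everything below is in units of M_P

def slow_roll(phi, v, a=1.0):
    """Slow-roll parameters for V(phi) = Lambda^4 * (1 + a*cos(phi/v)).
    The overall scale Lambda^4 cancels in epsilon and eta."""
    V   = 1.0 + a * np.cos(phi / v)
    dV  = -(a / v) * np.sin(phi / v)
    d2V = -(a / v ** 2) * np.cos(phi / v)
    eps = 0.5 * (M_P * dV / V) ** 2
    eta = M_P ** 2 * d2V / V
    return eps, eta

# Placeholder inputs: a = 1, field at an intermediate point on the slope (phi = 2 v)
for v in (0.5, 1.0, 5.0, 10.0):
    eps, eta = slow_roll(phi=2.0 * v, v=v)
    print(f"v = {v:5.1f} M_P  ->  epsilon = {eps:.3e}, eta = {eta:.3e}")

For a = 1, ε only drops below unity when v exceeds the reduced Planck mass, which is the familiar requirement of vanilla natural inflation and part of the reason the potential quoted above is in tension with Planck data.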
Note also that, as the accident oscillates around the bottom of its potential, it triggers a burst of (massless dark photon) particle production whenever it crosses the enhanced-symmetry point [23,24].This possibility of producing dark vector bosons could have interesting consequences for cosmology, see e.g.[25] in a somewhat different context. We conclude by commenting on the possibility to have a first-order cosmological phase transition along the accident direction. 7We showed in section 2.3 that the accident effective potential can develop a minimum away from the U(1) ′ -preserving point at α = 0, so that U(1) ′ is spontaneously broken.At sufficiently large temperatures, thermal corrections will dominate the effective potential, and tend to restore the symmetry with a minimum at α = 0.As the Universe expands, the temperature T decreases, and the potential develops a second minimum which may be separated by a barrier from the one at the origin.This is typically the case if the tree-level potential at zero temperature is flat, and the flat directions are lifted radiatively by the one-loop effective potential.Then the phase transition can be of first order, proceeding via tunneling (this ends a period of supercooling, during which the scalar field lies in the false minimum and dominates the Universe energy density, see e.g.[26][27][28]).As T further decreases, the origin becomes a local maximum and, at T = 0, we recover the potential shown in Fig. 3. First-order phase transitions may be relevant for baryogenesis [29,30], production of primordial black holes (see [31] and references therein), and/or for a signal of stochastic gravitational wave background (see [32] for an overview). The Higgs as an accident The only elementary scalar field in the SM is the Higgs boson, which at 125 GeV is much lighter than the scale of new physics.The absence of new physics up to the (multi-)TeV scale constitutes the "little hierarchy problem".Ultraviolet embeddings of the SM predicting a loop-suppressed Higgs mass are therefore appealing, and have been widely explored.The accident mechanism may lead to an additional class of such models. To address the little hierarchy problem with accidents, one would need to identify a model where, at some special point in the tree-level vacuum manifold, the remnant symmetry contains the electroweak symmetry SU(2) w × U(1) Y (or even the entire custodial symmetry of the SM Higgs potential), and the accidentally light scalars transform as an electroweak doublet.We have shown in section 2.3 how radiative corrections may give a VEV to the accidents, thus breaking the remnant symmetry at a scale parametrically smaller than the initial scale of spontaneous symmetry breaking.The accident models which we have identified so far feature only abelian U(1) n remnant symmetries, but this is likely a consequence of the simplest possible choices for the symmetry G and for the representation of ϕ. 
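To put a rough number on "parametrically smaller": if the accident mass is generated at one loop from quartic couplings of order λ, the generic parametric estimate (ours; the model-specific one-loop formula is not reproduced in this excerpt) is

m_{\rm acc}^2 \;\sim\; \frac{\lambda^2}{16\pi^2}\, v^2 \;\;\Rightarrow\;\; m_{\rm acc} \;\sim\; \frac{\lambda}{4\pi}\, v \;\approx\; \frac{v}{12} \quad (\lambda \approx 1),

so an accidental Higgs-like scalar at 125 GeV would point to a breaking scale v of roughly 1-2 TeV, which is precisely the regime relevant for the little hierarchy problem.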
The phenomenology of an accidental Higgs would resemble that of composite pseudo-NGB Higgs models (see [33,34] for reviews) or little-Higgs models (see [4] for a review).In the little-Higgs scenario, the Higgs mass is loop-suppressed because it is protected by a large global symmetry, explicitly broken only by the product of at least two independent couplings, typically gauge or Yukawa couplings.In the accident scenario, an enlarged symmetry is broken by a single scalar quartic coupling, nonetheless the accident mass is loop-suppressed thanks to the restrictive structure of the scalar potential.In composite Higgs models, it is typically assumed that the Goldstone-Higgs shift symmetry is exact within the composite sector, and broken only by external gauge and Yukawa couplings. In our toy models, the effective theory of accidents has a simple ultraviolet completion, in terms of a weakly-coupled and renormalisable theory of an elementary scalar ϕ in a large representation of G. Notice that, in contrast with models where the Higgs is a NGB, the full symmetry G can be gauged, with no ad-hoc global symmetries assumed: the NGBs are absorbed by the gauge bosons, while the abelian Higgs is a tree-level massless accident.On the other hand, to address the "big hierarchy problem" associated with models of elementary scalars, it would be interesting to realise composite accident models, where ϕ emerges as a composite multiplet. Accidents in supersymmetry In supersymmetric models, the accident phenomenon can explain large mass hierarchies: when a symmetry is spontaneously broken at the scale M by some superfield in a large representation, its accident components will remain massless at all orders in perturbation theory, as long as SUSY is unbroken.The super-accident mass scale, which is controlled by SUSY breaking, can be arbitrarily small with respect to M . One can therefore speculate that super-accidents might be used to address problems such as doublet-triplet splitting in SUSY grand unification theories (GUTs), see e.g.[35] for a recent discussion and earlier references.This would require a non-trivial generalisation of the toy models above, replacing e.g.SU(2) × U(1) by some unified gauge group G, the residual U(1) ′ by the SM gauge symmetry (preserved at some special point in moduli space), and the five-plet ϕ with a G-multiplet containing an electroweak doublet.The latter should remain accidentally massless to play the role of the SUSY SM Higgs.The accident mechanism would then guarantee the absence of a µ term at the GUT symmetry breaking scale M , while the other multiplet components do acquire masses of order M , without imposing any ad-hoc cancellations between the superpotential couplings. Once SUSY is broken, the accident directions are expected to be lifted.Since the moduli space in super-accident models is compact, there is no risk of runaway directions appearing, and the vacuum will be at a stable point in field space.The resulting mass spectrum will depend on the specific mechanism of SUSY-breaking mediation.It would be interesting to study whether the accidental protection mechanism leaves some imprint on the final pattern of soft SUSY-breaking masses.This requires a detailed analysis of concrete example models, which is beyond the scope of the present work. 
Conclusions
Given a generic renormalisable scalar potential with symmetry G spontaneously broken to H, accidents are scalar fields which do not receive a tree-level mass, although they do not belong to the G/H coset. There is no obvious way to infer from symmetry selection rules that the accident masses are suppressed.

We demonstrated that accidents appear in theories with a scalar multiplet in a large representation of G. This can be ascribed to the restrictive structure of the most general renormalisable G-invariant potential. Already in the minimal models (the five-plet of SU(2) and the ten-plet of SU(3)) the vacuum manifold is non-trivial. It would be valuable to conduct a systematic analysis of the possible field-space geometries leading to accidents.

Accidents possess unsuppressed, tree-level, non-derivative couplings to other scalars. As one moves along the accidental tree-level flat directions, the tree-level mass spectrum of the other scalars (and of the vectors and fermions which obtain their masses from scalar VEVs) changes, but the sum of the masses squared remains constant.

When loop corrections are included, the flatness of the potential is lifted and an isolated minimum appears, where a non-trivial symmetry H remains unbroken and the number of accidents is enhanced. These models exhibit a one-loop hierarchy of scales, even if all dimensionless couplings are of the same order. It is an open question whether less minimal models exist where an accident mass arises only at two-loop or higher order.

An accidentally light Higgs may help to address the little hierarchy problem. We built a toy model where an accident plays the role of an abelian Higgs with a small VEV, motivating the quest for a less minimal model which could feature an SU(2) doublet of accidents. We showed that the accident phenomenon persists in supersymmetric theories, and may thus explain large hierarchies among superfield masses without fine-tuning.

The rolling of accidents along their tree-level flat directions provides a playground for natural inflation and/or resonant particle production. Accidentally flat directions lifted by loop corrections may also lead to cosmological first-order phase transitions. Finally, dark-sector accidents are excellent candidates for DM, as they are naturally the lightest states charged under unbroken dark-sector symmetries.
10,529.4
2023-07-19T00:00:00.000
[ "Physics" ]
First step toward gene expression data integration: transcriptomic data acquisition with COMMAND>_ Background Exploring cellular responses to stimuli using extensive gene expression profiles has become a routine procedure performed on a daily basis. Raw and processed data from these studies are available on public databases but the opportunity to fully exploit such rich datasets is limited due to the large heterogeneity of data formats. In recent years, several approaches have been proposed to effectively integrate gene expression data for analysis and exploration at a broader level. Despite the different goals and approaches towards gene expression data integration, the first step is common to any proposed method: data acquisition. Although it is seemingly straightforward to extract valuable information from a set of downloaded files, things can rapidly get complicated, especially as the number of experiments grows. Transcriptomic datasets are deposited in public databases with little regard to data format and thus retrieving raw data might become a challenging task. While for RNA-seq experiments such problem is partially mitigated by the fact that raw reads are generally available on databases such as the NCBI SRA, for microarray experiments standards are not equally well established, or enforced during submission, and thus a multitude of data formats has emerged. Results COMMAND>_ is a specialized tool meant to simplify gene expression data acquisition. It is a flexible multi-user web-application that allows users to search and download gene expression experiments, extract only the relevant information from experiment files, re-annotate microarray platforms, and present data in a simple and coherent data model for subsequent analysis. Conclusions COMMAND>_ facilitates the creation of local datasets of gene expression data coming from both microarray and RNA-seq experiments and may be a more efficient tool to build integrated gene expression compendia. COMMAND>_ is free and open-source software, including publicly available tutorials and documentation. Background Transcriptomic studies started over 20 years ago with the first spotted microarray [1] while the first RNA-seq experiments appeared about a decade ago [2][3][4]. Since then the number of transcriptomic experiments performed has constantly grown, favoured, among other things, by the increase of technical quality and the decreasing prices [5]. Nowadays large studies profiling expression of genes and their association with several experimental conditions are commonplace, and the wealth of public information is a huge help for scientific investigation. Nevertheless, most of the true potential for reuse and integration remains untapped because of the vast heterogeneity of such datasets and the difficulties in combining them. With the advent of systems biology, data integration emerged as a prevailing aspect to take full advantage of such rich sources of information [6]. Several approaches have been proposed to fulfill the need to effectively integrate gene expression data and they can generally be categorized as being either direct integration or meta-analysis. The former directly consider the sample-level measurements within each study, and merge these into a single data set [7]. Meta-analysis, on the other hand, integrates gene expression analysis combining information from primary statistics (such as p-values) or secondary statistics (such as lists of differentially expressed genes) resulting from single studies. 
Those studies combine the information from several data sources defining confidence levels subjectively for each individual study without a general scheme. Meta-analysis is a common method to integrate conclusions from different studies [8]. Both approaches have been widely adopted and many tools have been developed to exploit or further analyse such datasets [9][10][11][12][13]. Regardless of the strategy used to combine and analyse a large amount of gene expression experiments, the first step in common with all these approaches is the acquisition of raw data. COMMAND>_ (COMpendia MANagement Desktop) is a web application developed in order to facilitate the creation and maintenance of local collection of gene expression data and have been successfully used to build gene expression compendia such as COLOMBOS [14] and VESPUCCI [15]. It has been designed with flexibility in mind in order to deal with the disparate ways in which gene expression data are published, and to be easily extended to deal with new technologies. Implementation COMMAND>_is a multi-user web application developed in Python 3 using the Django 1.11 framework for the backend; the web interface has been developed using ExtJS 6.2 with a look and feel typical of desktop applications (Fig. 1). Despite being developed as a single page application, it allows users to navigate using browser buttons. By default it relies on PostgreSQL as Database Management System (DBMS), but the Django Object Relational Mapping (ORM) allows it be used with other DBMSs as well. COMMAND>_ uses both AJAX and WebSocket (via Django Channel) for client-server communications. WebSocket ensures a two-way communication between the web interface and a Python backend, easing the problem of continuously polling the server Fig. 1 COMMAND>_ infrastructure. The Graphical User Interface (GUI), on the client-side, is developed using the ExtJS Javascript framework that communicates using both AJAX calls and WebSocket to the server-side part of the application developed using the Django framework. All the business-logic has been developed in Python within the Django framework that is in charge of managing the database connection with the Object Relational Mapping (ORM) layer. Celery is the task queue distributed system used to run and manage tasks for updates on time-consuming tasks. Intensive tasks such as downloading and parsing files are managed asynchronously by the Celery task queue system so that many processes can run simultaneously (8 by default). COMMAND>_ is a complex application with several layers that work together. To ease the deployment process we provide a Docker Compose file, thus having a working instance is just a matter of running one configuration file. Since COMMAND>_ relies on several third-party software, performance depends in part on the specific software requirements. The default Python scripts are designed to keep the memory footprint as low as possible and scale linearly with respect to the input size, because many of them might run concurrently. The complete requirements list is available at the documentation page. COM-MAND>_ has been designed to be adapted to different gene expression platforms and currently handles platforms of two kinds, microarray and RNA-seq, but can be extended to allow for more platforms to be managed. Gene expression data itself are modeled as one possible type of data that can be collected. 
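Returning to the task-queue layer described above, the snippet below sketches how a long-running download can be handed off to Celery so that the web interface stays responsive; the module, task name and URL handling are our own hypothetical illustration, not code taken from the COMMAND>_ repository.

# tasks.py -- hypothetical sketch of an asynchronous download task of the kind
# COMMAND>_ hands to Celery, so that several downloads (8 workers by default)
# can run concurrently without blocking the web interface.
from celery import shared_task
import shutil
import tempfile
import urllib.request

@shared_task(bind=True)
def download_experiment(self, url: str) -> str:
    """Stream an experiment archive to a temporary file and return its path."""
    with urllib.request.urlopen(url) as response, \
         tempfile.NamedTemporaryFile(delete=False, suffix=".archive") as tmp:
        shutil.copyfileobj(response, tmp)   # streamed copy keeps the memory footprint low
        return tmp.name

# A Django view would enqueue the task with
#   download_experiment.delay("https://example.org/experiment.tar.gz")
# and status updates could then be pushed to the GUI over the WebSocket channel.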
By extending specific classes, as reported in the online documentation, COMMAND>_ can be adapted to potentially handle any kind of quantitative data. Data model The basic concept behind the data model and how it is implemented in the database (Fig. 2) revolves around the idea that a set of measurements for several biological features (such as genes in case of gene expression data) are collected across different samples. The collected values might be direct or indirect measurements of such biological features and depends on the type of platform used in the experiment. In case of microarrays for example, each measurement refers to a single probe (a reporter in the data model) and thus it is an indirect measurement of gene activity. Samples can then be thought as a set of reporter measurements taken with a platform that is therefore a set of reporters. Biological features (as genes) and reporters (as probes) might have different properties (fields) such as name and sequence that can be used to couple the two entities. The three entities experiment, platform, and sample as well as biological features and reporters also hold meta-data, such as original ids, names and descriptions. Results and discussion Workflow COMMAND>_ is a multi-user application. From the web interface it is possible to create users and groups and grant privileges. Admin users have unlimited access, while normal users might be limited to work only on specific compendia and/or with a subset of functionalities. The typical workflow can be divided into three steps: i) search and download experiment data, ii) parsing downloaded files, iii) preview and import experiment data into the local database (Fig. 3). A mandatory prerequisite for being able to perform these steps is to first establish the genomic background Fig. 2 The database schema that represents the data model. The core part is represented by the sample table that holds the information of the experiment it belongs to, the platform used to measure it as well as the link to the raw_data measurements for the expression data, by uploading a FASTA file with gene sequences. Users can then import experiments starting by searching and downloading them from public databases or uploading local files (Fig. 4a). The supported databases are (at the moment) NCBI GEO [16], SRA [17] and EBI ArrayExpress [18]. Once the search has been performed, users can select one or more experiments and start the download process. Compressed files will be automatically extracted in a temporary folder. The pivotal point is the assignment of downloaded files together with parsing scripts to entities (experiment, platforms and samples) to mine only the relevant information (Fig. 4b). The scripts can be created or modified directly within the interface and are responsible for parsing input files and populating each part of the data model, i.e. measurement data and meta-data for experiment, platforms and samples. Once scripts are assigned to downloaded files, they can run independently and the results can be inspected using the preview interface. If the experiment appears to be complete, it can be imported into the database. Any possible error that might occur during parsing or importing of the experiment will be reported in the system log. When a new microarray platform gets imported, it would be necessary to map its probes to genes. The probe to gene mapping is a fundamental process carried out performing a BLAST+ [19] alignment and a two-step filtering (Fig. 4c). 
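Before turning to the details of the probe-to-gene mapping, the sketch below translates the data model just described into minimal Django models; class and field names are illustrative only and do not reproduce the actual COMMAND>_ schema.

# models.py -- illustrative sketch of the data model described above,
# not the actual COMMAND>_ database schema.
from django.db import models

class Experiment(models.Model):
    original_id = models.CharField(max_length=64)   # e.g. a GSE accession
    description = models.TextField(blank=True)

class Platform(models.Model):
    original_id = models.CharField(max_length=64)   # e.g. a GPL accession

class Reporter(models.Model):                        # e.g. a microarray probe
    platform = models.ForeignKey(Platform, on_delete=models.CASCADE)
    name = models.CharField(max_length=128)
    sequence = models.TextField(blank=True)          # used for probe-to-gene mapping

class BioFeature(models.Model):                      # e.g. a gene of the genomic background
    name = models.CharField(max_length=128)
    sequence = models.TextField()

class Sample(models.Model):                          # belongs to an experiment, measured on a platform
    experiment = models.ForeignKey(Experiment, on_delete=models.CASCADE)
    platform = models.ForeignKey(Platform, on_delete=models.CASCADE)
    name = models.CharField(max_length=128)

class RawData(models.Model):                         # one measurement per reporter per sample
    sample = models.ForeignKey(Sample, on_delete=models.CASCADE)
    reporter = models.ForeignKey(Reporter, on_delete=models.CASCADE)
    value = models.FloatField()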
The alignment might take a while for platforms with a lot of probes, especially when using the short-blastn option, and the result cannot be used as-is for the probe to gene mapping. Bad alignments need to be filtered out in order to retain only the most plausible ones, i.e. the alignments that most likely represent the "true" mRNA-to-probe hybridization process. The filtering step is usually fast and can be performed several times on the same alignment result to test different threshold choices. The two-steps filtering tries to mitigate the side effect of a simpler filtering (Fig. 5) and it is performed to guarantee that probes map to genes with high similarity (restrictive alignment threshold), while also mapping unambiguously to a unique position avoiding cross-hybridization issues in the measurements (less restrictive alignment threshold). Since probes coming from different microarrays generally differ in terms of length, origin, and sequence quality, parameters and cut-off thresholds can be adjusted in order to always obtain the reasonably best possible results according to each platform's specific characteristics and user needs. The probe to gene mapping step has the advantage of enhancing data homogeneity since all microarray platforms will be annotated using the same gene list (i.e. the same genomic background represented by the FASTA file with gene sequences uploaded during the initial step). Fig. 3 The flowchart of the typical workflow. Users start by searching and downloading experiments from public databases or uploading files for local experiments. Experiment files are then associated with parsing scripts and the parsing phase runs in background. Once experiment is parsed can be imported into the database and, if necessary, probes can be mapped to genes Moreover, annotating the microarray with the latest available data is often preferable since it might improve the expression data interpretation [20,21]. If probe sequences are not available, or relying on the default annotation is more appropriate, it is possible to manually associate probes to genes using, for example, the manufacturer annotation (gene identifiers). All the parameters and re-annotation are stored on COMMAND>_, so that the procedure is completely reproducible. In case of RNA-seq platforms, these steps are not necessary since the imported measurements (raw counts) are directly related to genes of the defined genomic background without the need for reporters like probes as for microarray experiments. Once FASTQ files are downloaded and associated to samples, the user will need to create the index file for the genomic background to be used in the alignment program. A FASTA file with gene sequences imported by the user would be automatically created and put in the experiment directory to be used as target for the index creation script. By default, FASTQ files will be then trimmed using Trimmomatic [22] and expression level quantified using Kallisto [23]. Users that wish to use different programs, could copy them to the COMMAND>_ directory and to write a Python wrapper script to use them. The three steps described in the workflow are specific for gene expression data, but would be the same in case of other kind of quantitative data. For example, exon or small-RNA sequencing could easily be used by adopting a different genomic background, thus uploading a FASTA file with exons or small-RNAs sequences respectively. 
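As an example of the kind of Python wrapper script mentioned above for RNA-seq quantification, the following sketch shells out to Kallisto for index building and quantification; all paths are placeholders and the default scripts shipped with COMMAND>_ may be organised differently.

# quantify.py -- hypothetical wrapper of the kind users can drop into the
# COMMAND>_ directory to run their preferred quantification tool.
import subprocess
from pathlib import Path

def build_index(fasta: str, index: str) -> None:
    """Build a Kallisto index from the genomic-background FASTA exported by COMMAND>_."""
    subprocess.run(["kallisto", "index", "-i", index, fasta], check=True)

def quantify(index: str, fastq_files: list[str], out_dir: str) -> Path:
    """Quantify expression for one (paired-end) sample and return the output directory."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(["kallisto", "quant", "-i", index, "-o", out_dir, *fastq_files],
                   check=True)
    return Path(out_dir)

# Example with placeholder file names:
# build_index("genomic_background.fasta", "background.idx")
# quantify("background.idx", ["sample_R1.fastq.gz", "sample_R2.fastq.gz"], "sample_out")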
The importance of the genomic background definition lies in the fact that it establishes exactly what is measured by the imported experiments. Considering that the genomic background should not change during data collection, not all quantitative data are equally suitable for being imported in COMMAND>_. For example, metagenomic experiments for which Operational Taxonomic Units (OTUs) change from sample to sample (and even more from experiment to experiment) would not be an ideal type of quantitative data to be collected. The file assignment dialog. From this dialog users assign scripts to experiment files in order to parse the information. c Probe to gene mapping dialog. This dialog allows users to perform the alignment and the two-step filtering used to re-annotate microarrays by mapping probes to genes Comparison with similar tools To the best of our knowledge there are no other tools that offer all the options COMMAND>_ does. Nevertheless, we will report the main differences between COM-MAND>_ and other similar tools (see Table 1). GEOquery [24] is a package written for the R programming language (http://www.R-project.org/) that allow R users to easily connect, retrieve, parse and extract expression data from GEO ready to be used in downstream analysis. The ArrayExpress [25] R package works similarly to GEOquery but for the ArrayExpress database. GEOmetadb [26] and SRAdb [27] both allow the user to query GEO and SRA within the R environment, but they require to download an SQLite file that contains the totality of GEO/SRA metadata. Compendiumdb [28] is an R package framework used to parse and store expression information into a relational database that can be queried from within the R environment for subsequent analysis. VirtualArray [29] is another R package used to combine raw data from diverse microarray samples (or experiments) and generates a combined object for further analysis. It also implements several batch effect removal methods but it is not available for the latest Bioconductor version. Microarray retriever [30] is a web-application used to query and download expression data from both GEO and ArrayExpress, but is currently unavailable. The main difference between all these tools, except for Microarray retriever, and COMMAND>_ is that all of Starting from the assumptions that, for this specific example case, a reasonable threshold is 95% or more and that for a sequence to be considered aligned uniquely the score difference between the alignments of the same probe should be more than 3%, we consider three different scenarios. The leftmost one (scenario a) presents the case of a single-step filtering with a threshold equal to 95%. In this case the probe selected as unique ones are the orange, green and purple one. While the first two probes are compliant with our assumptions, the purple one should be discarded since it aligns twice with a score of 95 and 94% respectively. To avoid this situation we might try do adjust the threshold raising it to 96% as depicted in the scenario b. The problem in this case is given by the fact that we would miss the green probe. 
Using a twostep filtering, as shown in scenario c, avoids both this unwanted situations since the first filter (at 94%) would retain both the purple probe alignments (that will be discarded since the alignment is not unique) and the blue probe that will then be removed by the second filter at 95% leaving only the orange and the green probes as expected them are Bioconductor packages and run within the R programming environment. The advantages are that Bioconductor is a strong and reliable environment and different packages can be used in combination to perform a vast amount of different analysis. Despite being a great tool for data analysis, R and Bioconductor are not meant for data retrieval, and management of large amount of data can be problematic since R programs, without specific packages such as parallel, are by default single-threaded process, the data are completely stored in RAM and thus don't easily scale to handle large datasets. COMMAND>_ has been developed with the specific goal of simplifying this part. It relies on a relational database and a task queue system such as PostgreSQL and Celery respectively to easily scale when number of experiments grows significantly. In this regard they might be thought more as complementary tools with R to be used to analyse the datasets collected using COMMAND>_. In COMMAND>_ many operations can be done using the Graphical User Interface (GUI) such as the re-annotation tool which allows the user to produce an optimized annotation instead of relying on default ones. It is important to highlight that the re-annotation step allows perfect reproducibility of the analysis since all parameters are stored within COMMAND>_. Finally, despite being a graphical tool offering a friendly user experience, COMMAND>_ gives the same flexibility of a command-line environment to manage all possible situations through its Python editor. Case study In order to demonstrate COMMAND>_ functionalities, we present several case studies available within the on-line documentation. Moreover, we used it here for searching, downloading, parsing, re-annotating and exporting a collection of small airway samples from patients affected by Chronic Obstructive Pulmonary Disease (COPD) [31]. The original study is a collection of 273 samples from three Affymetrix microarray experiments retrieved from the Gene Expression Omnibus (GEO): GSE8545, GSE20257 and GSE11906. We start retrieving the GEO experiments used in the study using the "Download Experiment From Public Database" dialog with the GSE Series ID as term and GEO as database (Fig. 4a). Before starting to parse the experiments we need to import the gene sequences to be used for the probe mapping step. The parsing procedure starts by selecting one experiment and pressing the "Parse/Import experiment" button. The parsing interface is divided into three collapsible sections: the top one shows the experiment data preview, the middle one contains the experiment files browser and the assignment tool used to couple parsing scripts and experiment files, while the bottom section is the Python editor. Having the original probes is highly encouraged in order to take advantage of the probe to gene mapping functionality. Since probe sequences are not included in this experiment, we have to download them separately from the Affymetrix Support site and upload them into COM-MAND>_ using the "Upload file" button. Once all files are in place we are ready to start assigning parsing scripts to the experiment files. 
Since we don't need to change any information related to the experiment entity we will start with platform-related files, i.e. HGU133Plus2_H-s_ENSG_probe_tab. The assignment procedure is itself based on the execution of a Python script, and in this way we can automatically assign a vast amount of files using user-defined rules. For this specific case we will tell COM-MAND>_ to parse the "HGU133Plus2_Hs_ENSG_probe_ tab" file using the "gpr_platform.py" script. To correctly parse the platform file we have to inform the "gpr_platform.py" about the field names to be used for the probe id and the probe sequence. The sample files assignment will proceed similarly by selecting all CEL files (we can use the filter by file names) and giving "cel_sample.py" as script to be used. This time we will use the "match_entity_name.py" assignment script in order to have COMMAND>_ Each column represent one functionality, respectively: R (the program is an R package), local (the program allow to use local data), GUI (the program provides a Graphical User Interface), GEO (the program connects to GEO), AE (the program connects to ArrayExpress), SRA (the program connects to SRA), DB (the program provides a database to store expression data), ANN (the program allows to annotate probes), Search (the program allow to perform queries using free text besides accession id) and Note (the program has special features or limitations) to automatically couple CEL files with the corresponding samples (Fig. 4b). The last file to be assigned is the soft file that contains the meta-data for all entities, experiment, platform and samples. Once again we use the "assign_all.py" to assign "soft_experiment.py", "soft_platform.py" and "soft_sample.py" scripts to experiment, platform and samples respectively. After inspecting that all the assignments are correctly done we are ready to run the parsing scripts. Once the parsing is done we can inspect the results in the "Preview" section and import the experiment. We will have to repeat the same procedure for the remaining experiments. Once that all the raw data are imported into the database we can map the probes for the GPL570 platform to the human genes we already imported. This fundamental step consists in two parts, the alignment and filtering of alignment results. For the alignment step we can chose a quite stringent identity threshold (such as 95% or 98%) since both probes and genes belong to the same species. The two-step filtering thresholds are set to 95% alignment length, 0 gap and 3 mismatches for the sensitivity step and 98% alignment length, 0 gap and 1 mismatch for the specificity step (Fig. 4c). The chosen threshold captures the idea that probes might align (even if not perfectly) on more than one gene resulting in an unusable probe, and, require a higher minimum alignment quality for a probe to be considered reliable. As stated previously this wouldn't be possible using only a single filter (Fig. 5). In this specific case choosing different thresholds will result in little differences since probes and genes come from the same organism. This step is increasingly relevant when more and more probes are designed for a different organism than the one we are using as the genomic background and for which we have the gene sequences, such as might be the case for different strains of bacteria or different cultivars of plant crops. After the alignment process is complete, we can set the filtering parameters and run the filtering. 
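To make the two-step filtering concrete, the sketch below applies the case-study thresholds to a list of BLAST-like hits; it is a simplified re-implementation for illustration (the field names and data layout are ours), not the filtering code bundled with COMMAND>_.

# two_step_filter.py -- simplified illustration of the two-step probe filtering
# described above, using the case-study thresholds.
from collections import defaultdict

# Each hit: (probe_id, gene_id, aligned_fraction, gaps, mismatches)
Hit = tuple[str, str, float, int, int]

def passes(hit: Hit, min_len: float, max_gaps: int, max_mm: int) -> bool:
    _, _, frac, gaps, mm = hit
    return frac >= min_len and gaps <= max_gaps and mm <= max_mm

def two_step_filter(hits: list[Hit]) -> dict[str, str]:
    """Return a probe -> gene mapping.
    Step 1 (sensitivity: 95% length, 0 gaps, 3 mismatches) collects every plausible
    hit, so probes hitting more than one gene can be discarded as ambiguous.
    Step 2 (specificity: 98% length, 0 gaps, 1 mismatch) keeps only unambiguous
    probes whose single hit is of high quality."""
    plausible = defaultdict(set)
    for h in hits:
        if passes(h, min_len=0.95, max_gaps=0, max_mm=3):
            plausible[h[0]].add(h[1])
    mapping = {}
    for h in hits:
        probe, gene = h[0], h[1]
        if len(plausible[probe]) == 1 and passes(h, min_len=0.98, max_gaps=0, max_mm=1):
            mapping[probe] = gene
    return mapping

# Example: probe p2 is discarded because it aligns plausibly to two genes.
hits = [("p1", "geneA", 1.00, 0, 0),
        ("p2", "geneB", 1.00, 0, 0),
        ("p2", "geneC", 0.96, 0, 2)]
print(two_step_filter(hits))   # {'p1': 'geneA'}

The ambiguous probe p2 is caught by the sensitivity pass and removed, exactly as in the scenario illustrated in Fig. 5.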
Once the filtering is done, we can import the probe-to-gene mapping. Finally, the resulting raw data can be exported in both TSV and HDF5 file formats.

Conclusion
In this paper we present COMMAND>_, a web-based application used to download, collect and manage gene expression data from public databases. COMMAND>_ relies on a DBMS for data persistence and on a set of customizable Python scripts to extract only the relevant information from public gene expression databases. COMMAND>_ is a multi-user application that allows teamwork through the definition of groups of users with specific privileges on each of the defined gene expression compendia. Moreover, it eases the long-term maintenance of such compendia by storing a system log with all the relevant information about the operations performed. COMMAND>_ is under constant development, with new features added in each release. It is easily extendable to manage new technology platforms as they appear, to parse new data formats, and even to import new types of quantitative data. This is reflected in the software architecture as well as in the data model.

Availability and requirements
Project name: COMMAND>_. Other requirements: the full requirements list is available at https://raw.githubusercontent.com/marcomoretto/command/master/requirements.txt. License: GNU GPL v3. Any restrictions to use by non-academics: none.
5,552.4
2019-01-28T00:00:00.000
[ "Computer Science", "Biology" ]
Using Sab-Iomha for an Alpha Channel based Image Forgery Detection —Digital images are a very popular way of transferring media. However, their integrity remains challenging because these images can easily be manipulated with the help of software tools and such manipulations cannot be verified through a naked-eye. Although there exist some techniques to validate digital images, but in practice, it is not a trivial task as the existing approaches to forgery detection are not very effective. Therefore, there is need for a simple and efficient solution for the challenge. On the other hand, digital image steganography is the concealing of a message within an image file. The secret message can be retrieved afterwards by the author to check the image file for its veracity. This research paper proposes Sabiomha, an image forgery technique that make use of image steganography. The proposed technique is also supported by a software tool to demonstrate its usefulness. Sabiomha works by inserting an invisible watermark to certain alpha bits of the image file. The watermark we have used to steganograph an image is composed of a combination of text inputs the author can use to sign the image. Any attempts to tamper the image would distort the sequence of the bits of the image pixel. Hence, the proposed technique can easily validate originality of a digital image by exposing any tampering. The usability of our contribution is demonstrated by using the software tool we developed to automate the proposed technique. The experiment which we performed to further validate our technique suggested that Sabimoha could be flawlessly applied to image files. A. Background Applications of digital images have been the focal point of computer vision researchers for decades now [1]- [4].Digital content is used as an effective way of communication among different stakeholders [5].The advent of digital devices and communication technologies has led to increase in the use of image files for sharing visual moments and photographs.Digital images are generated through cameras with-out transformation and development process contrary to camera reels in the past and can be delivered electronically through any supporting communication channel. Although an image data is generally considered reliable but with the passage of time, the digital technology itself has compromised the faith we have had in electronic content.The ever-increasing trend of malpractices in image forensics has posed new challenges to the research horizon as we continue to exist in the era which is very much vulnerable to multiple facets of digital contents.The situation seeks effective and efficient solution to ensure integrity of digital images. With multi-million users using emails and social media, nearly countless digital content is distributed and shared every day.A large portion of the content comprises of digital images.These days users can easily capture their memorable moments through digital cameras and can share with others by publishing the image files on the web.On the other hand, users can potentially receive tampered images and unknowingly circulate those as well.Since digital data is easily accessible these days, obnoxious users can manipulate image files for entertainment and at times abuse those for some societal or political gains or to dictate any legal affairs.This phenomenon is reinforced by the availability of some supporting software applications.Hence the situation calls for taking some concrete measures to meet these challenges. 
Previously, digital forensics domain has helped to rejuvenate some trust in digital content.However, as the image forgery detection techniques are being developed, tampering of digital data despite leaving any noticeable trails has become very trivial.The challenge leads to issues such as image authentication, protection, and forgery detection.This demands aggressive counter approaches from scientists and researchers to confront and challenge malpractices. B. Problem Description Image tampering is a known handling technique [5].Deception of typical image files is relatively a tedious task and requires sufficient expertise.However, digital images are disposed to tinkering.There exist numerous software applications to easily manipulate them.Malpractices mainly include duplication, replication, removing or exchanging parts of an image.It should be noted that originality of an analog data can be validated easily through a naked eye as any attempts to tampering can be conceived readily.Contrarily, development of supporting software tools has made manipulation of digital images a very easy task.For example, Fig. 1 highlights one such example.Originally, two objects were present in Fig. 1(a).The object on the far right is inserted as visible in Fig. 1(b).However, by looking at the figure through a naked eye, one cannot conceive that originality of the image had been compromised.Before taking an appropriate legal or social action in such cases, it is necessary to verify that an image had been edited.In such cases, as it is clear from the figure, validation of originality of a tampered image becomes very challenging since alteration of a digital image can be carried out easily in comparison to a printed one. As digital image domain is being revolutionized, tampering of a digital content without any noticeable impression has become very effortless.Therefore, to tackle the challenge, an image should be analyzed in such a way that even a slight attempt to forge can be detected straightaway.In this paper, we propose a light-weight automated technique that image owners and publishers can easily use to sign their images.The approach can also be used as an instrument to protect proprietary images from any possible forgery attempts. Rest of the paper is organized as follows: following subsections of Section 1 highlight the contribution and the current state of the art in the domain.Section 2 describes the related work.Section 3 reflects upon our contribution in terms of the proposed technique and presents its usefulness through a software tool we developed to automate and demonstrate our work.In the end, Sections 4, 5 and 6 sum up with Automation of SAB -IOMHA, and Conclusions, respectively. C. 
Contribution of the Research Project Validation is a standard procedure for investigating integrity of an object.We want to achieve it in terms of forgery detection of a digital image through the proposed work.The decisive objective is to audit digital image files for originality and verify that their integrity has not been compromised since their authoring.The current approaches for the purpose have encompassed signature-based methods for protecting image files and checking for their integrity.However, such techniques are not applicable in wider settings because of their limitations or overheads involved in their use.On the other hand, as part of our work, we propose using a composite watermark which consists of a cipher along with date and time stamp and email address of the image author.The watermark is inserted in structured patterns to certain bits of an image file. Digital watermarking is a known technique for media files for retaining copy-right information and identification of their proprietorship [4].These can be of several types and are widely used.Generally, images can be inserted with at least two types of watermarks, visible watermarks or invisible watermarks as required.A visible watermark embeds an image file with an identification mark and an invisible one on the other hand inflicts a hidden mark in it.As part of this research, we choose the invisible watermark which we sequentially insert across multiple bits of a digital image.The contents and structure of the watermark is distorted if someone tries to edit the image file by any means. In this research paper we provide more insight and extend Sab-iomha which we proposed previously [6], for its usefulness in the real settings.The extended version of the work reflects upon more technicalities of the technique and an improved validation mechanism.The ultimate objective of the research is to address the challenge of digital image forgeries. D. Current State of the Art ELA (Error Level Analysis) of an image can highlight any edited or distorted part of an image as different regions of an image having different compression levels can be identified. 
It enables the stakeholders to easily detect any problem areas through a naked-eye.Existing approaches to image forgery detection usually involve replicating those files to some dedicated software tools [7].Users are then provided with different features of ELA and Joint Photographic Expert Group (JPEG) format.Our contribution is twofold.First, we split an image file to temporarily separate its metadata from the visual content and then steganograph the same image.An image file is composed of combination of pixels.An ordered set of bytes represents each pixel for different colors that constitute an image.Those colors include Alpha, Red, Green, and Blue.It should be noted that data is not stored in Alpha bits.Therefore, we propose use of those bits to insert the hidden watermark into the image file.Cipher, as part of the watermark, is invisible and is removed automatically upon any attempts to forge.As part of the second contribution of the research paper, we demonstrate the usefulness of Sab-iomha through automation in terms of a software tool we developed to augment the proposed technique.If an image file was saved multiple times, it loses its quality [8].Metadata of an image file refers to the image itself.The information it contains may include the image type; e.g.JPEG, dimensions of the image, internal formats, and color scheme.The metadata also gives information such as the date of creation, the date of modification, name of the software editor that was used to create the image, file tags, and camera tags.It also provides information on the Exchangeable Image File Format (EXIF) which is used by the digital cameras manufacturers to extract camera settings that were used to capture the image.Camera settings entail information such as the manufacturer name and the model, time stamp, and lens settings.Those settings may vary among images to ensure maximum level of integrity.If a user tries to insert comments into an image file, they are incorporated into its metadata.Digital cameras normally do not allow automatic insertion of comments to the captured image.However, if any additions are found, it is an indication that the image has been edited or reprocessed using some software tool. Majority of the existing approaches to image forgery detection take account of the information provided through metadata or the file header.Any attempts to get additional information while capturing a digital photo or any effort to change its header can easily render image handling more complex hence time consuming.In addition to that, the currently available techniques do not account for digital contents or file storage itself.On the other hand, our proposed technique addresses the challenge using a simple yet efficient mechanism; i.e. hidden watermark is embedded in an image which diminishes the need for manipulating with the file header.Sab-imoha ensures that any attempts to manipulate the image distort the watermark.Hence any successful bids to alter the image file can be discovered promptly. II. RELATED WORK The literature review that was conducted to carry out this research encompassed image content, detection, and forensic analysis.We investigated different techniques currently in use for authenticating digital contents in terms of their traits as well as deficiencies. 
Lighting, inconsistent shading, and shadows have been used as a method for collecting evidence on image forgery [5].Mixture of shadow and shading was rationally used to serve for the purpose and both were made dependent on each other but in case they are not, the corresponding image is found to be a tampered one.Furthermore, the authors reported reliable and specific shadings under different inferences of some subjective measures such as guess-work or acceptance.However, their proposed technique is not applicable in case such historical text documents do not make a shadow.Moreover, the research is applicable to those human images only that contain visible faces.It requires human interaction and the method that is used to estimate authenticity of an image is also prone to an estimation error. Color discrimination has also been used as a mean to detect image forgery.To achieve that, some researchers have proposed a method called spliced image detection mechanism [9].They detected illumination inconsistencies of an image by extracting edge or text-based features.If the image file under consideration carried information about image type, camera model, and motion after being captured, the data was found to be helpful for preventing any image forgery attempts by making the latter a difficult job [10].However, detection of reflection-based forgeries is not a trivial task.A technique proposed by [11] suggests removing observable information from an image to make it trustworthy.Another method for detecting forgery in image files uses text-based signing of images [12].If the digital signature gets distorted, it implies that integrity of the image had been compromised. Thumbnails have also been used for verifying image files for authenticity [13].The authors proposed creating thumb-nails using contrast settings, compression, and filter models altogether which in turn are used to identify whether the actual images were compromised or not.Those models are then compared with the editing software and the originator cameras.A hidden watermark approach has also been used for image forensics [14].It controls JPEG-lossy compression, cropping, and other possible operations that can be performed on an image by adding an invisible watermark in such a way that any distortion or a missing link in it indicates that the image had been forged. The authors in [15] proposed an image forgery detection technique by investigating inconsistencies in lighting.Although lighting of a scene is not a complicated task, but it can be hard to match as the difference in lightings can be negligible.Researchers in [16] dealt with using a 3D lighting coefficient for image forensic.However, surface and lighting assumptions that are used are very specific.In addition to that, the challenge is to precisely estimate 3D shape of an image object. A steganography technique to protect JPEG images from tampering by capturing two identical images instead of generating a secret text has also been discussed in [4].The instance information is attached as a watermark to the actual image for the validation purpose.However, the proposed technique supports JPEG formats only and any slight change in camera settings between capturing images may also affect efficiency of the digital device. 
Seam modification in digital images is another way of www.ijacsa.thesai.orgimage tampering.The former can be performed through a couple of ways; seam carving and seam insertion.In [17] the authors have studied modification of JPEG images through seam modification.A very minute change in seam effects the pixel ordering.A non-traditional method of machine learning, Classification Support Vector Machine, is used to intercept the seam-tampered image that differentiates between the tampered image and the original one.The problem with their proposed method is that it fails when highly imbalanced and skewed data sets are observed.The method is not applicable in diverse setting either. Copy-move forgery (CMF) [18] is another common tampering technique in which a small part of the image is taken and copied to another location on the same image.Usually key-point based technology is used to detect this type of forgery, but it takes too much processing time and can run out of the memory while processing.Moreover, small cloned and smooth regions are difficult to detect.The author in [18] presents a new technique to overcome this problem.The test image is separated into smooth and rough regions and is further segmented into small regions.Before applying the Scale Invariant Features Transform (SIFT) algorithm, the customized parameters are detected for that specific image.If fixed parameters are selected to apply SIFT then results may not be satisfactory.Swarm intelligent (SI) algorithm was applied to generate a custom parameter for efficient processing of SIFT.The technique may reduce the processing time to avoid run out of memory.The experimental results indicate some higher false positive rate that needs to be improved. In-painting [19] is another technique that has been used for forgery detection.It works by rebuilding the deteriorated part of an image.When an image gets scratched or fade away, some of its segments are reproduced to bring back its originality.The main theme was to copy segment of an image and embed it back on the scratched or deteriorated patches of the same image.The authors proposed a copy-move image forgery method in which an object is removed from an image and is pasted on a different location on the same image.Two in-painting techniques [19] were used to detect the object removal, geometry-oriented and texture-oriented.Their proposed technique, which was referred to as exemplar-based image in-painting, reported significant decrease in search time for image blocks.However, it is not very useful for multiple object removals as it increased the search overhead. A steganography technique to protect JPEG images from tampering proposed capturing two identical images instead of generating a secret text [20].The instance information was attached as a watermark to the actual image for validation purpose.However, their proposed technique supports JPEG formats only and any slight change in camera settings between capturing of images may also affect the efficiency of the system. 
In summary, the existing approaches to counter image manipulation lack the diversity required to confront the challenge.Due to rapid rise in use of digital images, attempts to compromise their integrity are also on the rise despite currently available mitigation techniques.As it is evident from analysis of the literature, there exist no single technique that is easily applicable and equally useful to multiple types of digital images consistently; that is, computer generated images, digital documents that are saved as image files, and digital camera images.The situation calls for proposing more robust methods to confront the challenge.Researchers need to come up with effective forgery detection solutions to address the issue. III. SAB-IOMHA:THE PROPOSED TECHNIQUE There are two phases of this research work; steganography and forgery detection.We propose a forgery detection mechanism which is a two-step approach as shown in Fig. 2.An image file is protected using an invisible watermark and then any forgeries are detected by investigating the same watermark which was inserted in the first step.As part of the approach, firstly the image is converted into byte stream that splits metadata from the file.Secondly, an invisible watermark is inserted in certain bits of the image.The watermark is in text form and can be inserted across multiple bytes.However, its length depends upon size of the image; bigger the image in size lengthier would be the watermark. A digital image can incorporate two types of watermarks; visible watermark or invisible watermark depending upon user preferences.Visible watermark inflicts small spots on the whole image whereas the invisible one randomly inserts a text code in it.Fig. 3 is a pictorial representation of the visible watermark technique. It demonstrates different states of an image. Visible watermarks were inserted that are noticeable by zooming the image.An ELA can identify regions within an image that possess different compression levels.It is a measure to visually highlight difference in JPEG compression levels across different regions of an image. Since we make use of invisible watermark, the inserted text would be hidden.We suggest composing a composite invisible watermark which is composed of multiple information fields that makes it easy to validate an image.Those fields entail cipher text, email address of the image user, and date and time stamp.At the same time the composite watermark ensures that the ownership trail of the image is maintained for any future reference as well to preserve edit history of the file.Furthermore, as part of the watermark, the cipher changes automatically if someone tries to edit the signed image as any attempts to doctor it would distort the cipher part of the inscription. 
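A minimal sketch of how such a composite payload could be assembled is given below; the cipher here is a random placeholder and the field sizes are illustrative, whereas the layout actually used by Sab-iomha is specified later in the paper (Table I).

# build_watermark.py -- illustrative composition of the composite invisible
# watermark (cipher + e-mail address + date/time stamp). The cipher is a random
# placeholder; Sab-iomha derives its length from the image size.
from datetime import datetime
import secrets

def build_watermark(email: str, key_length: int = 16) -> bytes:
    cipher = secrets.token_bytes(key_length)                   # placeholder cipher text
    stamp = datetime.now().strftime("%Y%m%d%H%M%S").encode()   # date and time stamp
    return cipher + email.encode("utf-8") + stamp

payload = build_watermark("author@example.org")
print(len(payload), "bytes to spread across the selected alpha bits")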
For a JPEG image, the entire file should exhibit roughly the same ELA; if some fragments of an image carry different error levels, it is an indication that the original image was edited for an unauthorized modification. Regions with even coloring, such as a blue or a white wall, are likely to have lower ELA levels than dark colors with high-contrast edges. For a typical forgery check, one would inspect the image, try to identify the difference between high- and low-contrast edges, and compare those with the ELA representation. Only a visible difference allows the naked eye to detect recent changes that might have been made to the image. Therefore, a method that relies solely on ELA is not a good fit for detecting digitally modified images.

In a 32-bit image that spans four color channels, each pixel consists of four bytes. Each of the three colors, i.e. Red, Green, and Blue, is represented by one byte, as shown in Fig. 4. The fourth byte, which is reserved for Alpha, does not represent color information and is available for use. Several systems have been proposed for representing pixels in terms of their supporting colors, but ARGB is the most established arrangement; it logically orders a pixel as Alpha, Red, Green, and Blue. As part of our composite watermark technique, we make use of the least significant bit of the Alpha byte to steganograph an image file. This does not change the data stored in any other bit, but the text length should be calculated before it is inserted into the image file as a watermark.

Algorithm 1 (Embed watermark) requires x ≥ key × 10 ∨ x = 0, with I ∈ {0, 10, 100}; line 1: P ← readImagePixels; line 2: P = P0, ...; line 25: end function.

Fig. 4 demonstrates how exactly our proposed technique makes use of certain bits of an image file. It splits the metadata from the file header. The file is then converted into pixels, which in turn are transformed into a byte stream. The Alpha bits are selected, and an invisible watermark is inserted into them, which is a composition of cipher text, email address, and date and time stamp. If we consider an image as a matrix P having m rows and n columns, the total number of pixels is simply m × n.

We argue that inserting the watermark into the least significant bit is an easy yet effective approach for signing an image with traceable information. The eighth (least significant) bit of each selected Alpha byte is utilized for the purpose, i.e. each selected pixel carries one bit of the inserted watermark. It should be noted that we do not make use of all Alpha bytes of an image file; their selection is based on a pattern that is generated at run time to ensure maximum protection of the image. For a four-byte (thirty-two bit) pixel, the least significant bit of the Alpha component is utilized, depicted as the marked bit of a pixel shown in Fig. 4(d). An image consisting of 800 × 600 pixels can store up to 1,440,000 bits or 180,000 bytes of watermark. For instance, consider a block of 8 pixels of a 4-byte image: if the number 35, with binary representation 00100011, is inserted as a watermark across the Alpha bits of the image, the resulting pixel block gets manipulated in such a way that 35 is accommodated in consecutive pixels, highlighted as the shaded pixel bits in Fig. 5.
It is worth mentioning that only the least significant bits of the Alpha bytes receive the watermark fragments. All pixels can be protected using the scheme, which does not affect the visual contents of the image file. Since the proposed technique consumes an image at the structural level, its steganography cannot be observed with the naked eye.

In a 32-bit colour image, the Alpha bits are separated and the code stream is spread across the byte stream using Algorithm 1, where x is the number of pixels in the image, I is the intensity of the watermark, which can be 10, 50 or 100, and Key is the length of the cipher. P is the array of pixels that the image file contains, Dt is the current date and time of the system, and E is the email address of the user. At line 7 of the algorithm, x is the accumulation of the composite watermark obtained by concatenating the cipher text, the date and time stamp, and the email address. The cipher text constitutes the constant part of the watermark, whereas the rest is system- and user-dependent to enhance the strength of the algorithm. The function at line 8 checks the image file for the watermark; if it matches, the image is authenticated. Otherwise, the InsertCipher procedure at line 13 is initiated. The cache space can be increased to any positive numeric value in case we want to add an interval between the bytes occupied by the watermark.

There could possibly be a case in which someone else signs the image after it was steganographed by the actual author, which makes it nontrivial to keep track of the actual ownership. The combination of date and time in particular ensures that once a user signs the image, the ownership trail can be maintained for the subsequent detection of any successful forgery attempts. Table I illustrates the composition of the composite watermark: the email address of the user is allocated up to 255 bytes, the date and time stamp is allocated 7 bytes, 1 byte is used for Intensity, which is the distance between two nearest cipher bytes, and a variable number of bytes is reserved for the Key, which points to the cipher text. The following equation is used for determining the length of the cipher, where a is any positive integer, y is the cumulative length in characters of the email address and the date and time stamp, K is the constant space allocated for the cipher text to be embedded in the image, and N represents the length of the image in bytes.

IV. AUTOMATION OF SAB-IOMHA

The software tool that we developed to automate our research is relatively simple and user friendly, with a minimum of work-flows. It supports browsing an image file through a GUI interface, after which the file is loaded into computer memory. Fig. 6 depicts the user interface of the tool, which was programmed using Java technologies. The ultimate objective is to facilitate the validation of digital images, as well as documents stored in an image format, to prove the integrity of the contents or to verify that a digital document has not been edited since its creation. The tool supports multiple features, as shown in Fig. 6: Steg Image embeds an invisible watermark in the image, the steganographed image can be saved to disk for future reference, and Forgery Detection opens another screen, as depicted in Fig. 6.
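As a concrete illustration of the embedding step, the following Python sketch (using the Pillow library, which exposes pixels in R, G, B, A order rather than ARGB) writes a bit sequence into the least significant Alpha bits of consecutive pixels. It is a simplified stand-in for Algorithm 1: the run-time selection pattern and the intensity spacing between cipher bytes described above are omitted, and the file names are placeholders.

```python
from PIL import Image  # Pillow

def embed_bits(in_path: str, out_path: str, bits: list[int]) -> None:
    """Write each watermark bit into the least significant bit of the Alpha
    byte of consecutive pixels, then save the result losslessly as PNG."""
    img = Image.open(in_path).convert("RGBA")
    px = img.load()
    w, h = img.size
    if len(bits) > w * h:
        raise ValueError("watermark longer than the number of pixels")
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b, a = px[x, y]
        px[x, y] = (r, g, b, (a & 0xFE) | bit)  # clear the Alpha LSB, then set it
    img.save(out_path)  # PNG keeps the Alpha channel intact

# Example (placeholder file names and bits):
# embed_bits("photo.png", "photo_signed.png", [0, 0, 1, 0, 0, 0, 1, 1])
```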
Signing an image file is a two-step procedure: in the first phase, we steganograph an image by inserting the invisible watermark, which is then validated for integrity in the second phase. We randomly pick an image and upload it to the tool to demonstrate the usefulness of our technique as well as the overall automation itself. The sample image on the right side of Fig. 7 is signed using the watermark, which is the composition of cipher text, email address, and date and time stamp. It can be observed that the quality of the image was not compromised at all by using the technique. The same file can be checked to verify whether the image is original or attempts have been made to alter it. If the validation procedure generates an alert text, as is the case in Fig. 7, it is an indication that the image has been forged by some other user. Otherwise, the inserted watermark is displayed to testify to the originality of the image. Algorithm 2 lists the steps performed to detect forgery. It is a three-step procedure: first, it looks for an insertion; if none is found, the image is not steganographed. If an insertion is found, it is matched with the actual watermark. If an exact match is not found, the image is reported as forged; otherwise, it is the original one (in the listing, if key = extractedCipher the image is declared original at line 7, and it is declared forged at line 10 otherwise).

To further validate the proposed technique, we performed an experiment to demonstrate its effectiveness. A set of images with a varying range of sizes was steganographed using the tool we developed to automate Sab-iomha. The motivation was to compare the metadata of the image files before and after the technique was applied, considering factors such as size, compression level, and resolution. Each image had 4 color channels with 32 bits altogether. Table I draws a comparison between the metadata of the image files before and after applying the steganography using Sab-iomha. It is noticeable that the color type remained the same even after each image was steganographed, that is, RGB with Alpha. There was no change in the resolution of the images either. However, some difference was observed in the size of each image; in general, the steganographed images were slightly smaller. The overall analysis suggested that the quality of each set of images remained the same, i.e. studying the metadata before and after the application of the forgery detection technique showed that it did not negatively influence the quality of the images under consideration.
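The detection phase can be sketched in Python along the same lines: the least significant Alpha bits are read back, reassembled into text, and compared with the watermark that was originally embedded. This is a simplified stand-in for Algorithm 2; in particular, it collapses the "not steganographed" and "forged" outcomes into a single failed comparison, and the file name and expected watermark are placeholders.

```python
from PIL import Image

def extract_bits(path: str, n_bits: int) -> list[int]:
    """Read the least significant bit of the Alpha byte of the first n_bits pixels."""
    img = Image.open(path).convert("RGBA")
    px = img.load()
    w, _ = img.size
    return [px[i % w, i // w][3] & 1 for i in range(n_bits)]

def bits_to_text(bits: list[int]) -> str:
    """Regroup the extracted bits into bytes and decode them as UTF-8."""
    chars = []
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(byte)
    return bytes(chars).decode("utf-8", errors="replace")

def verify(path: str, expected_watermark: str) -> bool:
    """True when the recovered watermark matches the expected one (image judged
    original); False otherwise (forged, or never signed)."""
    recovered = bits_to_text(extract_bits(path, len(expected_watermark) * 8))
    return recovered == expected_watermark

# Example (placeholder arguments):
# print(verify("photo_signed.png", "s3cr3t|user@example.com|20180101000000"))
```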
V. CONCLUSION

Digital images are prone to forgery in the current age, as it has become much easier to manipulate digital contents due to advancements in the domain. We have introduced a new dimension to digital image steganography by proposing a lightweight technique that uses a composite watermark to check digital images for authenticity. The proposed technique signs digital images for integrity and protects them against manipulation. The forgery issue is addressed in a novel way: ELA, JPEG, and metadata are incorporated, and an invisible watermark is inserted to enhance the efficiency and effectiveness of forgery detection. The proposed technique is automated through a software tool that allows users to steganograph digital images; the same image can then be checked for originality. The core purpose of the tool is to support the usability of Sab-iomha, which can validate not only photographs but any digital contents stored in an image format. This work enables even non-technical users to investigate the integrity of image files on their own and empowers them to gain insight into their digital contents. As part of the validation mechanism, we tested the algorithm on a series of random images. The results suggested that the technique can verify digital images for authenticity and also does not negatively influence their quality. Moreover, users can protect their images from any attempts at forgery. The research we conducted does not have any ethical, moral, or legal issues associated with it. The project is economically feasible too, as users are not required to purchase any hardware devices and are relieved of the need for software installations. Currently, the work supports JPEG and PNG file formats only; we aim to extend support to other image formats in the future.

Fig. 1. (a) Original image; (b) the object on the right is inserted.
Fig. 3. Original image, the image after applying a visible watermark, a zoomed-in version to enhance visibility, and an ELA rendering of the image.
Fig. 4. Illustration of an image pixel and the corresponding bit used for the invisible watermarking.
Fig. 5. Least significant bits of the Alpha bytes of an image.
Fig. 7. Home interface of the tool with an image loaded and steganographed.
TABLE I. Population of the image files used for experimentation.
7,205.8
2018-01-01T00:00:00.000
[ "Computer Science" ]
A lightweight hybrid deep learning system for cardiac valvular disease classification

Cardiovascular diseases (CVDs) are a prominent cause of death globally. The introduction of medical big data and Artificial Intelligence (AI) technology encouraged the effort to develop and deploy deep learning models for distinguishing heart sound abnormalities. These systems employ phonocardiogram (PCG) signals because of their simplicity and cost-effectiveness. Automated and early diagnosis of cardiovascular diseases (CVDs) helps alleviate deadly complications. In this research, a cardiac diagnostic system that combines CNN and LSTM components was developed; it uses phonocardiogram (PCG) signals and can be trained on either augmented or non-augmented datasets. The proposed model discriminates five heart valvular conditions, namely normal, Aortic Stenosis (AS), Mitral Regurgitation (MR), Mitral Stenosis (MS), and Mitral Valve Prolapse (MVP). The findings demonstrate that the suggested end-to-end architecture yields outstanding performance on all important evaluation metrics. For the five-class problem using the open heart sound dataset, accuracy was 98.5%, the F1-score was 98.501%, and the Area Under the Curve (AUC) was 0.9978 for the non-augmented dataset, while accuracy was 99.87%, the F1-score was 99.87%, and the AUC was 0.9985 for the augmented dataset. Model performance was further evaluated using the PhysioNet/Computing in Cardiology 2016 challenge dataset; for the two-class problem, accuracy was 93.76%, the F1-score was 85.59%, and the AUC was 0.9505. The achieved results show that the proposed system outperforms all previous works that use the same audio signal databases. In the future, the findings will help build a multimodal structure that uses both PCG and ECG signals.

1. The use of a light CNN-LSTM model.
2. The use of augmented datasets for training and building a robust model.
3. The first to apply the CNN-LSTM architecture to discriminate heart valvular disorders.
4. Comparing the use of time domain and frequency domain inputs on the proposed model performance.
5. Comparing different deep learning models: the CNN model, the LSTM model, and the combined CNN-LSTM model.

The remainder of the paper is organized as follows. The "Literature Review" section describes the related literature, and the "Methods" section describes the dataset used in this study. The "Results" section describes the proposed approach and the training procedure. The "Discussion" section addresses the experimental findings, and finally, we conclude the article and outline the future research direction in the "Conclusions" section.

Literature Review

Multiple researchers have sought to discriminate various cardiovascular diseases using heart sound recordings. Several researchers employed machine learning and deep learning methods, particularly Convolutional Neural Networks (CNN), to accomplish this task. Despite the significant achievements in this field, many limitations, such as the small size of datasets, inefficient training methods, and the unavailability of accurate models, continue to hinder advancements in this domain. The use of phonocardiogram (PCG) signals to detect cardiac abnormalities is the latest trend; some studies investigated publicly available datasets, while others used private in-house datasets. In this section, we survey the most recent and relevant heart sound classification literature. In 2014, Sun et al.
11 used a boundary curve diagnostic model that uses time and frequency features combined with a Support Vector Machine (SVM) classifier to diagnose cardiac sounds and distinguish between four cardiac problems with 94.7% accuracy. In 2018, Son and Kwon 12 used Mel Frequency Cepstral Coefficients (MFCC) combined with Discrete Wavelet Transform (DWT) features as input to Support Vector Machine (SVM), Deep Neural Network (DNN), and K-Nearest Neighbor (KNN) classifiers, and they achieved accuracies of 97.9%, 92.1%, and 97.4%, respectively. In 2019, Alqudah 13 classified non-segmented heart sound signals using instantaneous frequency estimation statistical features. Principal Component Analysis (PCA) was used for dimensionality reduction, and they achieved 91.6% for the K-Nearest Neighbor (KNN) and 94.8% for the Random Forest (RF) classifiers. In 2020, Ghosh et al. 14 used a Deep Layer Kernel Sparse Representation Network (DLKSRN) classifier for the detection of different heart valve diseases using time-frequency representations of PCG recordings. Nonlinear features like the L1-norm (LN), Sample Entropy (SEN), and Permutation Entropy (PEN) were extracted from the time-frequency matrix of the PCG recording, and they achieved 99.24% accuracy. Alqudah et al. 15 used the AOCT-Net architecture to discriminate between five different cardiovascular diseases using full bispectrum analysis of heart sound recordings and an adaptive momentum optimization technique. They achieved 98.7% accuracy for full images and 96.1% for contour images. Ghosh et al. 16 used the chirplet transform of the PCG cycle to propose a multiclass composite classifier that uses Local Energy (LEN) and Local Entropy (LENT) features extracted from the PCG signal in the time-frequency domain. They achieved 98.33% accuracy in discriminating between all four Valvular Heart Disease (VHD) classes. Baghel et al. 17 developed an automated system with low time complexity to discriminate various cardiac valve disorders from phonocardiograms using a Convolutional Neural Network (CNN). They used data augmentation and a Gaussian filter for noise removal; the suggested model achieved an accuracy of 98.6% with augmented data and 96.23% without data augmentation. Oh et al. 18 classified heart sounds using a novel WaveNet model and achieved 94% accuracy. They used 1000 PCG recordings with 200 recordings per category; the model was validated using tenfold cross-validation and classified phonocardiograms (PCG) into five different classes. In 2021, Alkhodari et al. 19 proposed a CNN-BiLSTM model to classify Valvular Heart Diseases (VHD). The proposed architecture was fully automated and consisted of two-phase learning, the representation and sequence residual learning phases; they achieved the highest reported accuracy of 99.6% and a 99.4 F1 score.

Methods

The main objective of this research is to develop a new deep learning model based on the CNN-LSTM architecture to reliably distinguish heart sounds (binary and multi-class classifications). Figure 1 shows the block diagram of the proposed methodology; the following sub-sections describe in detail the used datasets, the proposed methodology, and the performance metrics utilized to evaluate the suggested method.

Datasets. The model was trained using the publicly available open heart sounds dataset 12. The dataset contains 1000 audio clips gathered from various sources; the duration of each recording is nearly 3 s. As shown in Table 1, the data is divided into five categories with 200 clips in each category.
The recordings are in *.wav audio format, were sampled at 8000 Hz, and were converted to a mono channel format. The dataset contains five main classes, which are normal (N), aortic stenosis (AS), mitral stenosis (MS), mitral regurgitation (MR), and mitral valve prolapse (MVP). Table 1 summarizes the dataset being used, and Fig. 2 shows samples of different heart valve signals from the first dataset. All methods were performed following the relevant guidelines and regulations. The PhysioNet/Computing in Cardiology Challenge 2016 was the second dataset utilized in this research to further examine the suggested model 13. This dataset contains normal and abnormal classes only; all records have a sampling frequency of 2000 Hz and were converted to a mono channel format. Table 2 summarizes the dataset being used, and Fig. 3 shows samples of different heart valve signals from the second dataset. All methods were performed following the relevant guidelines and regulations.

Fast Fourier Transform. The fast Fourier transform (FFT) computes the discrete Fourier transform (DFT) of a sequence 21. Computing this operation straight from the definition is frequently too slow; by factoring the DFT matrix into a product of sparse elements, an FFT can perform such transformations quickly 22. The performance difference can be substantial, especially for large data sets with N in the hundreds of millions 23. Fast Fourier transforms are commonly utilized in engineering, music, science, and mathematics. Although the fundamental principles were popularized in 1965, several algorithms had been developed as early as 1805. Gilbert Strang referred to the FFT as "the most important numerical algorithm of our lifetime" in 1994, and it was named one of the IEEE journal Computing in Science & Engineering's Top 10 Algorithms of the 20th Century 24. In this paper, the Fourier transform of PCG signals was clipped to contain only 350 Hz of the 4000 Hz spectrum, because the major components lie in this frequency range 16. Figure 4 shows the whole spectrum of five different PCG signals.

Down sampling. Earlier studies 16,25 show that the maximum frequency content in the PCG signal is around 300 Hz; accordingly, the selected downsampling frequency of 1 kHz is sufficient to represent the intrinsic PCG data. To make the classification process faster and more accurate, each PCG record in the first dataset is downsampled by a factor of 8, and each PCG record in the second dataset is downsampled by a factor of 2. These factors were obtained from previous studies 16,26,27, and they are sufficient to describe the frequency content of the whole signal. Figure 4 shows that the highest frequency content is 500 Hz in all heart conditions.

Data augmentation. Data augmentation is a popular technique used to artificially enlarge the size of a given dataset 27. In general, augmentation attempts to generate various versions of the audio clips by applying diverse enlargement techniques 28. Moreover, training deep learning systems on large datasets makes them more skillful at dealing with different versions of inputs that resemble real-life inputs; as a result, the augmentation techniques create variation in the audio files that results in better overall performance 29,30. As with images, there are several techniques to augment audio signals, and these techniques are usually applied to the raw audio signals 30,31. Table 3 summarizes the primary dataset after augmentation.
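To make the preprocessing concrete, the following is a minimal Python sketch, assuming NumPy and SciPy, of the two steps described above: downsampling an 8000 Hz clip by a factor of 8 and clipping the magnitude spectrum to the 0-350 Hz band. The synthetic tone stands in for a real PCG recording and is not part of the original study.

```python
import numpy as np
from scipy.signal import decimate

fs = 8000                      # sampling rate of the open heart sounds dataset (Hz)
t = np.arange(0, 3, 1 / fs)    # ~3 s clip; a synthetic tone stands in for a real PCG
pcg = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(t.size)

# Downsample by a factor of 8 (8000 Hz -> 1000 Hz), as done for the first dataset.
pcg_ds = decimate(pcg, 8)
fs_ds = fs // 8

# Magnitude spectrum, clipped to the 0-350 Hz band that carries the main PCG energy.
spectrum = np.abs(np.fft.rfft(pcg_ds))
freqs = np.fft.rfftfreq(pcg_ds.size, d=1 / fs_ds)
clipped = spectrum[freqs <= 350]
print(clipped.shape)
```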
In this research, the following audio augmentation techniques were applied:
• Time stretch: randomly slow down or speed up the sound.
• Time shift: shift the audio to the left or the right by a random amount.

Deep learning CNN-LSTM model. Deep learning is the most recent and cutting-edge machine learning method employed in response to the expanding number of large datasets [32][33][34][35][36]. Deep learning is based on and inspired by the deep structure of the human brain 37,38. The architecture of the human brain has a huge number of hidden layers, allowing us to extract and abstract deep information at different levels and from different perspectives. Deep learning is concerned with the development of a specialized architecture comprised of multiple sequential layers in which successive phases of input processing are conducted 38. A plethora of deep learning structures have been proposed in recent years 34,39; Convolutional Neural Networks (CNN) 39,40,41 and Long Short-Term Memory (LSTM) networks [42][43][44][45] are the best known, most widely used, and most efficient deep learning algorithms. The proposed hybrid CNN-LSTM model is described in Fig. 5. Deep feature extraction and selection from the PCG signals are handled by the CNN blocks, particularly the 1D convolutional layers, the batch normalization layers, the ReLU layers, and the max-pooling layers, whilst the LSTM module extracts contextual time information after being fed these features as time-dependent inputs 46. Studies suggest that deep feature extraction and classification using a hybrid 1D CNN-LSTM outperforms single CNN or LSTM-based approaches 47,48. Furthermore, utilizing the LSTM component produces a richer and more concentrated model compared to pure CNN models, resulting in higher performance with fewer parameters. Table 4 shows the detailed description of the layers in the proposed CNN-LSTM architecture.

Ablation study. The goal of this section is to explore what makes our model light and different from other models. In this section, we study the robustness of the network performance against the structural changes caused by ablations, as some layers are removed or added 49. The ablation study removed the LSTM and CNN components from the model and analyzed the effect of removing them on the model performance. The ablations to the suggested CNN-LSTM model had both negative and positive effects on the classification performance 49. The greater the number of ablated layers, the stronger the impact on performance. The study found that different layers have different impacts on classification performance 50. Finally, the ablation study concluded that the performance of the proposed CNN-LSTM model is higher than that of any single model and that this combination of components resulted in the highest performance.

Model evaluation. In general, evaluating any machine learning or deep learning model is a challenging task due to varying dataset sizes. Typically, machine learning engineers divide the data into training and testing sets with different ratios; they use the training set to train the model and the testing set to assess it. Although this validation technique is appropriate when the dataset is large, it is not reliable, because the accuracy obtained for one test set can be very different from the accuracy obtained using another 35,43. K-fold cross-validation provides an ideal answer to this problem: the data is divided into folds, ensuring that each fold serves as a testing set at some point.
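The hybrid architecture and the fold-based evaluation protocol described above can be sketched in Python with Keras and scikit-learn as follows. This is a minimal illustration only: the layer sizes are guesses rather than the configuration reported in Table 4, and the random arrays merely stand in for the PCG dataset.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import KFold

def build_cnn_lstm(input_length: int = 8000, n_classes: int = 5) -> tf.keras.Model:
    """Hybrid 1D CNN-LSTM in the spirit of the architecture described above."""
    model = models.Sequential([
        layers.Input(shape=(input_length, 1)),
        layers.Conv1D(16, 9, padding="same"), layers.BatchNormalization(),
        layers.ReLU(), layers.MaxPooling1D(4),
        layers.Conv1D(32, 9, padding="same"), layers.BatchNormalization(),
        layers.ReLU(), layers.MaxPooling1D(4),
        layers.LSTM(32),                      # contextual time information
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Tenfold cross-validation skeleton on a toy stand-in for the PCG dataset.
X = np.random.randn(50, 8000, 1).astype("float32")
y = np.random.randint(0, 5, size=50)
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = build_cnn_lstm()
    model.fit(X[train_idx], y[train_idx], epochs=1, verbose=0)  # 1 epoch for brevity
    model.evaluate(X[test_idx], y[test_idx], verbose=0)
```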
In this study, tenfold cross-validation was used to evaluate the model; it guarantees that the model generalizes properly and also helps prevent overfitting. Finally, different performance metrics were calculated to evaluate the performance of the proposed model 34,43. Figure 6 illustrates the k-fold cross-validation methodology.

Performance metrics. To evaluate the performance of the proposed methodology in classifying heart valve anomalies, the confusion matrices for the binary classification and multi-class classification (with and without augmentation) tasks were calculated. The outcomes of the CNN-LSTM model were compared to the corresponding labels of the original PCG signals 16. Using the resulting confusion matrix, four statistical indices were calculated and utilized to measure the performance of the suggested system, namely True Positive (TP), False Positive (FP), False Negative (FN), and True Negative (TN). Based on these statistical values, the accuracy, sensitivity, specificity, and F1-score metrics were calculated. To further evaluate the proposed CNN-LSTM model performance, the Receiver Operating Characteristic (ROC) curve was generated, and the Area Under the Curve (AUC) was also calculated to give a quantitative estimate.

Results

In this section, the effectiveness of the proposed CNN-LSTM model is evaluated using several performance metrics. As explained, the suggested CNN-LSTM model is the result of extensive ablation studies using single CNN and LSTM models. All the experiments were conducted on a desktop computer that runs Microsoft Windows and utilizes an Intel Core i7-6700/3.4 GHz processor, 16 GB of RAM, and a 500 GB hard disk drive (HDD). The tenfold methodology was used to test the proposed model, and one of the 9 folds used for training served as validation during the cross-validation process; the Adam optimizer and the cross-entropy loss function 37,38 were employed for training. The following sections illustrate the results of the ablation study together with the proposed model.

Ablation study. The ablation study makes various element changes to the base architecture, the cross-validation accuracy is calculated for each experimental configuration, and the results are reported. In the first case study, we use the CNN model without any LSTM layers, while in the second case study, we use the LSTM model without any CNN layers. Figure 7 shows the two suggested model architectures. Both models were evaluated using the augmented and non-augmented datasets; the tenfold cross-validation methodology was used to test the proposed models, and the Adam optimizer and the cross-entropy loss function 37,38 were employed for training. Using an initial learning rate of 0.001, the suggested models were trained for a maximum of 100 epochs per fold. The combination of these hyperparameters resulted in the best performance for each model. Table 5 shows the performance metrics of the models in the ablation study, while Fig. 8 shows the average training and loss curves across all folds of the different models. After completing the ablation studies on the two basic models (CNN and LSTM), the proposed CNN-LSTM model was constructed by combining both of these models, and a significant improvement in classification performance was observed. The configuration of the CNN-LSTM model is discussed in the next section.

Proposed CNN-LSTM model.
The initial learning rate was 0.001 for time domain input training and 0.0001 for frequency domain inputs; using these values, the suggested architecture was trained for a maximum of 100 epochs per fold. Figures 9, 10, and 11 show the training accuracy and loss for all folds for the non-augmented, augmented, and binary classification tasks, respectively. The first part of Figs. 12, 13, and 14 shows the five-class confusion matrices using the non-augmented and augmented data, respectively. The rows represent the actual class, whereas the columns represent the predicted class. In the case of non-augmented data, the accuracy is 98.5%, with a small proportion of incorrect classifications, measured by the number of False Positives (FP) and False Negatives (FN), of 1.5%. For the augmented data, the accuracy is 99.9%, and the proportion of False Positives (FP) and False Negatives (FN) is 0.1%. It is clear from both figures that increasing the size of the dataset using different augmentation techniques increased accuracy by 1.4% to near 100% and lowered incorrect predictions by 1.4% to 0.1%. The second part of Figs. 12, 13, and 14 displays the Receiver Operating Characteristic (ROC) curves for the augmented and non-augmented data. The ROC is a visual way to represent the tradeoff between specificity and sensitivity; it plots the True Positive Rate (TPR) against the False Positive Rate (FPR) at various threshold settings. It is obvious from both figures that the curve is close to the upper left corner, indicating excellent model diagnostic ability; it is also apparent from the figures that the Area Under the Curve (AUC) for the augmented data is slightly better than that of the non-augmented data. Figure 14 shows the confusion matrix and the Receiver Operating Characteristic (ROC) curve for the binary (normal/abnormal) classification problem. Accuracy is 93.8%, and the Area Under the Curve (AUC) is 0.9505, indicating high performance. The drop in accuracy between the binary and multiclass problems can be attributed to the larger size of the PhysioNet/CinC 2016 challenge dataset (5878 vs. 1000 audio files). In this paper, PCG signals were classified into 5 different classes using the augmented or the non-augmented version of the open heart sounds dataset, or into two categories using the PhysioNet/CinC 2016 challenge dataset. The proposed CNN-LSTM architecture exhibited very high performance for all important metrics; it achieved near-perfect accuracy on the given datasets using 10-fold cross-validation. Tables 6, 7, and 8 show the accuracy, sensitivity, specificity, precision, and F1 scores for all experiments conducted. Figures 9, 10, and 11 show that the suggested model converged rapidly, reaching 100% training accuracy quickly. Table 9 shows the various performance metrics for the different examined datasets. For the non-augmented data, the accuracy was 98.5%, sensitivity was 98.5%, specificity was 99.625%, precision was 98.505%, the F1-score was 98.5%, and the Area Under the Curve (AUC) was 0.997. For the augmented data, the accuracy was 99.87%, sensitivity was 99.87%, specificity was 99.96%, precision was 99.87%, the F1-score was 99.87%, and the Area Under the Curve (AUC) was 0.998. For the binary dataset, the accuracy was 93.77%, sensitivity was 99.63%, specificity was 92.42%, precision was 97.6%, the F1-score was 85.52%, and the Area Under the Curve (AUC) was 0.95. It is clear from the table that the augmented data outperforms the non-augmented data for all performance metrics.
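The per-class metrics reported above can be derived directly from a confusion matrix. The following is a minimal NumPy sketch of that computation, using macro averaging over a one-vs-rest decomposition; the toy 3-class matrix is illustrative only and does not reproduce the matrices shown in Figs. 12-14.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray) -> dict:
    """Derive accuracy, sensitivity (recall), specificity, precision and F1 from a
    multi-class confusion matrix (rows = actual class, columns = predicted class)."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": tp.sum() / cm.sum(),
        "sensitivity": sensitivity.mean(),
        "specificity": specificity.mean(),
        "precision": precision.mean(),
        "f1": f1.mean(),
    }

# Toy 3-class example.
print(per_class_metrics(np.array([[50, 0, 0], [1, 48, 1], [0, 2, 48]])))
```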
It is also noticeable that using the augmented data, the proposed hybrid model achieved near 100% accuracy. Table 10 displays the performance metrics obtained for each condition using the multiclass dataset. It is clear from the table that the suggested model exhibited very high precision and recall scores for all the tested classes.

Result of testing the proposed CNN-LSTM model using FFT inputs. To further investigate the performance of the proposed CNN-LSTM model, the suggested model was modified to accept inputs from the frequency domain. It is also noticeable that using the augmented data, the proposed hybrid model achieved near 100% accuracy. Figure 15 shows the training accuracy and loss for all folds for the non-augmented, augmented, and binary datasets, respectively, while Fig. 16 shows the confusion matrices and ROC curves of the non-augmented and binary datasets using the FFT-CNN-LSTM. To evaluate whether the deep features extracted using the proposed CNN-LSTM were significant, discriminant, and representative for the classification of different heart sounds, a scatter plot of the extracted deep features for the five classes was drawn from the last fully connected layer of the proposed model. It can be noticed from Fig. 17 that the ranges of the different extracted features were far apart across classes, which means that the extracted features can be used successfully in the classification of heart valve diseases. It can also be concluded from Fig. 17 that each extracted feature was representative of its class and managed to discriminate it from the other classes. No filtering, denoising, or augmentation techniques were applied. The obtained results displayed in Table 13 show that the system succeeded in discriminating between normal and abnormal cases with 93.76% accuracy, 99.66% sensitivity, 92.42% specificity, and an average Area Under the Curve (AUC) of 0.9505. The findings show that the new system outperformed the previous state-of-the-art models for all performance metrics. The obtained accuracy is 6.45% higher than the 87.31% accuracy reported by Alkhodari et al. in 2021 19. The weak performance of the previous models can be attributed to the unbalanced nature of the PhysioNet/CinC 2016 challenge dataset, which uncovered model weaknesses in generalizing properly. The proposed model performed effectively on both datasets, and the accuracy obtained in this research is almost perfect (nearly 100%), which makes the suggested architecture dependable and trustworthy. To the best of our knowledge, this is the highest accuracy ever reported in the literature. This model will have a positive impact on public health; building an embedded mobile system using this model can help physicians in rural areas detect cardiovascular problems early, quickly, accurately, and cost-effectively. This will help alleviate fatal complications, remove interpretation subjectivity and variability, and also improve the health situation in remote regions that lack expert doctors by helping novice doctors in these areas make the right decision.

Discussion

Using the FFT input, the FFT-CNN-LSTM model performance was efficient on both datasets, and the accuracy obtained using the FFT-CNN-LSTM model was 99.73%, which makes using the frequency domain input dependable and trustworthy.
The accuracy obtained using the time domain input was 99.83%, slightly higher than the 99.73% accuracy obtained using the frequency domain input. To further test the system's learning capability using the FFT input, the model was trained and tested on the widely used PhysioNet/CinC 2016 challenge dataset. Here, the raw data was used to train the new architecture; no preprocessing was applied. The system succeeded in discriminating between normal and abnormal cases with 90.65% accuracy, 99.00% sensitivity, 88.74% specificity, and an average Area Under the Curve (AUC) of 0.9367. This model also outperformed the state-of-the-art models for all performance metrics. The obtained accuracy is 3.34% higher than the 87.31% accuracy reported by Alkhodari et al. in 2021 19. The main difference between the proposed CNN-LSTM and the CNN-BiLSTM model proposed by Alkhodari et al. 19 is that our proposed model uses a smaller number of parameters (28,277); they use two LSTM layers instead of a single LSTM, and they also have a larger input size and more convolution filters. In addition, the proposed CNN-LSTM system is tested in both the time and frequency domains, while other systems use only the time domain or the frequency domain. Moreover, other methods, including that of Alkhodari et al. 19, performed several pre-processing techniques such as z-score normalization, smoothing, segmentation, and the maximal overlap discrete wavelet transform (MODWT), while the proposed methodology performed only downsampling, decreasing the number of samples to 8000 in the time domain and 1000 samples in the frequency domain for the whole signal without segmentation. Taken together, all of these factors make the proposed CNN-LSTM system lighter than other models proposed in the literature. Since the proposed methodology was built and trained on a CPU-based system rather than a GPU-based system, and to demonstrate that it is a lightweight model, the time consumption of the FFT computation, the CNN-LSTM using time domain input, and the CNN-LSTM using frequency domain input was calculated for all datasets, and the results are displayed in Table 14. Rapid classification and FFT computation, combined with the high accuracy obtained, underline the lightweight and practical nature of the proposed system.

Conclusions

Heart valvular irregularities are a major contributor to cardiovascular diseases (CVDs). This paper proposed an intelligent automatic heart diagnostic support system that uses phonocardiogram (PCG) signals. The model is hybrid and comprises a CNN module for feature extraction and an LSTM module for the classification of anomalies. For the multiclass problem using the open heart sounds dataset with the time domain input, the end-to-end framework demonstrated state-of-the-art performance with 99.87% accuracy for augmented data and 98.5% accuracy for non-augmented data, outperforming all prior efforts. The results also showed that augmenting the data slightly improved model performance by 1.37%. For the binary class problem using the PhysioNet/CinC 2016 challenge dataset, accuracy was 93.76%. On the other hand, utilizing the frequency domain input, the accuracy was 95.40% for non-augmented data and 99.73% for augmented data. The results also showed that augmenting the data improved model performance by 4.33%. For the binary class problem using the PhysioNet/CinC 2016 challenge dataset, accuracy was 90.65%.
In the future, ECG signals can be used alongside PCG signals to design a multimodal system to improve accuracy. Moreover, this near-perfect accuracy will be used to build a lightweight system that will help doctors performing clinical diagnostics discriminate all four irregularities early and quickly.

Study limitations. This study has several advantages, including the potential use of cardiac PCG recordings to aid in clinical decision-making about heart valve health. In addition to providing the highest levels of performance, the system was designed to be as simple as possible. The suggested model is easy to use, and it does not involve any modification of the input signals. Despite the model's strong performance in categorizing heart valve disorders, it is critical to evaluate the suggested model using a wide variety of datasets that include more classes and records. While it achieves a high level of discrimination using a simple deep neural network design, we may be able to improve the model's performance even further.
6,090
2022-08-22T00:00:00.000
[ "Computer Science", "Medicine" ]
Genomic characterization of Salmonella Cerro ST367, an emerging Salmonella subtype in cattle in the United States Background Within the last decade, Salmonella enterica subsp. enterica serovar Cerro (S. Cerro) has become one of the most common serovars isolated from cattle and dairy farm environments in the northeastern US. The fact that this serovar is commonly isolated from subclinically infected cattle and is rarely associated with human disease, despite its frequent isolation from cattle, has led to the hypothesis that this emerging serovar may be characterized by reduced virulence. We applied comparative and population genomic approaches to (i) characterize the evolution of this recently emerged serovar and to (ii) gain a better understanding of genomic features that could explain some of the unique epidemiological features associated with this serovar. Results In addition to generating a de novo draft genome for one Salmonella Cerro strain, we also generated whole genome sequence data for 26 additional S. Cerro isolates, including 16 from cattle operations in New York (NY) state, 2 from human clinical cases from NY in 2008, and 8 from diverse animal sources (7 from Washington state and 1 from Florida). All isolates sequenced in this study represent sequence type ST367. Population genomic analysis showed that isolates from the NY cattle operations form a well-supported clade within S. Cerro ST367 (designated here “NY bovine clade”), distinct from isolates from Washington state, Florida and the human clinical cases. A molecular clock analysis indicates that the most recent common ancestor of the NY bovine clade dates back to 1998, supporting the recent emergence of this clone. Comparative genomic analyses revealed several relevant genomic features of S. Cerro ST367, that may be responsible for reduced virulence of S. Cerro, including an insertion creating a premature stop codon in sopA. In addition, patterns of gene deletion in S. Cerro ST367 further support adaptation of this clone to a unique ecological or host related niche. Conclusions Our results indicate that the increase in prevalence of S. Cerro ST367 is caused by a highly clonal subpopulation and that S. Cerro ST367 is characterized by unique genomic deletions that may indicate adaptation to specific ecological niches and possibly reduced virulence in some hosts. Electronic supplementary material The online version of this article (doi: 10.1186/1471-2164-15-427) contains supplementary material, which is available to authorized users. Background Genomic characteristics associated with the emergence or reemergence of pathogens in livestock operations can be subdivided into two categories; (i) genomic features that increase the adaptation to a host, or facilitate the jump to a new host species, or (ii) genomic features that provide increased adaptation to environmental factors in the livestock environment, such as antibiotic resistance. Comparative and population genomic studies are particularly suited to determine which features are responsible for the emergence of certain pathogens. For instance, Price et al. [1] showed that a putative host jump, from humans to livestock, in a clonal complex in Staphylococcus aureus was associated with the loss of phage-carried human virulence genes and with the acquisition of tetracycline and methicillin resistance. Salmonella enterica is one of the most frequent causes of bacterial foodborne illness and death in the United States [2]. In Salmonella, examples of emergent clones include S. 
Typhimurium DT 104, a multidrug resistant clone, which has seen a global epidemic spread since 1990 [3], and S. enterica serovar 4,5,12:i:-, a monophasic variant of S. Typhimurium, which showed a global increase in the mid-1990s [4]. In this study, we present comparative and population genomic research on S. enterica subsp. enterica serovar Cerro (S. Cerro). S. Cerro is rarely associated with human disease, with only one outbreak reported in the US so far that could be solely attributed to this serovar [5]; an additional outbreak was recently reported and it was linked to multiple serovars, including S. Cerro [6]. However, this Salmonella serovar has emerged over the last decade as one of the most abundant Salmonella serovars in cattle operations in the northeastern US [7], including one of the most common serovars among subclinical dairy cattle and in the dairy farm environment [8] in the northeastern United States. Most of the S. Cerro isolated from cattle and farms represent one pulsed field gel electrophoresis (PFGE) type, indicating that a single clonal lineage is involved in this emergence [7]. It is unknown what causes S. Cerro to be associated with cattle and why it is rarely involved in human disease. Therefore, we hypothesize that S. Cerro has distinct genomic characteristics that explain its association with cattle and limited association with human disease.

Results and discussion

De novo assembly shows that S. Cerro FSL R8-0235 has a genome size of approximately 4.7 Mbp, contains six prophage regions and represents MLST sequence type ST367. After exclusion of contigs shorter than 200 bp, the total length of the S. Cerro FSL R8-0235 draft de novo assembly was 4,675,817 bp. The assembly consisted of 126 contigs, with a contig N50 of 292,947 bp, and a maximum contig length of 691,181 bp. The average coverage depth of the assembly was 96X. One contig, contig 016, contained genes of an IncI1-like plasmid; however, it is unclear whether this is an integrated or extrachromosomal plasmid. In addition to genes involved in plasmid transfer, stability and replication, this plasmid also carries genes encoding a resistance nodulation division (RND) efflux pump [9]. However, none of the isolates sequenced in this study showed resistance to single or multiple antimicrobial agents. No evidence for the existence of additional plasmids within the genome was found. This may be at least partially due to the presence of a DNA phosphorothioation-dependent restriction modification (RM) system in all S. Cerro strains examined in this study. While this RM system has been well characterized in S. Cerro [10], a PSI-BLAST search reveals this type of RM system is very rare among Salmonella, and only found in a limited number of sequenced Salmonella strains of serovars Saintpaul (SARA23, str. 9712, str. JO2008), Namur (str. 05-2929) and Panama (ATCC 7378). Genome assembly based multilocus sequence typing (MLST) was performed using the online tool [17] of the Center for Genomic Epidemiology (Lyngby, Denmark; http://www.genomicepidemiology.org/) and an additional BLASTN search. This analysis revealed that S. Cerro FSL R8-0235 belongs to sequence type (ST) 367. According to the Salmonella MLST database (http://mlst.warwick.ac.uk/mlst/dbs/Senterica), ST367 is associated with a S. Cerro isolate from a human case in Germany in 1985. The database also contains an accession of the type strain of S. Cerro, isolated from swine in 1936 in Uruguay.
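For readers unfamiliar with the assembly statistic quoted above, the contig N50 is the contig length at which the cumulative length of contigs sorted from longest to shortest first reaches half of the total assembly size. A minimal Python sketch, with made-up contig lengths (the actual assembly comprises 126 contigs), is given below.

```python
def n50(contig_lengths: list[int]) -> int:
    """Return the length N such that contigs of length >= N together cover
    at least half of the total assembly."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy example with illustrative contig lengths.
print(n50([691181, 292947, 150000, 90000, 45000, 12000]))
```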
This strain belongs to ST1291 and displays a different allelic type at each of the seven MLST loci. S. Cerro therefore is very likely to be polyphyletic, which makes interpretation of historical references without genomic or MLST sequence data difficult. Because all isolates sequenced in this study belong to ST367, we will refer to these isolates as S. Cerro ST367 from here on. Timme et al. [18] recently published sequence data for another S. Cerro ST367 strain (strain 818; NZ_AOZJ00000000); this group showed that, among all serovars that have been sequenced so far, S. Adelaide FSL A4-669 is most closely related to S. Cerro ST367, which is consistent with our study (see below).

Population genomic analysis of 27 Salmonella serovar Cerro isolates suggests a recent clonal expansion of a bovine-associated S. Cerro lineage. To infer whether the S. Cerro isolates associated with bovine hosts and cattle-associated environments form separate subpopulations from S. Cerro isolated from other sources, we obtained whole genome sequencing data for 26 additional isolates (Table 2). After removal of putative recombinogenic regions, as identified by BratNextGen [19], and SNPs that were present in fewer than 90% of the isolates, 343 SNPs were left for analysis. To assess the presence of a temporal signal in the dataset, a Path-O-Gen (available from http://tree.bio.ed.ac.uk/software/pathogen/) analysis was performed using a maximum likelihood tree inferred from the SNP data set. This analysis showed a correlation (Pearson's correlation coefficient 0.80, R2 = 0.645) between the time of isolation of the individual isolates and the root-to-tip divergence, indicating a temporal signal for this dataset and justifying a molecular-clock-based phylogenetic analysis. A Bayesian analysis, assuming a relaxed molecular clock and a constant population size, inferred the mean mutation rate for the core genome of the 27 S. Cerro isolates to be 2.4 × 10−7/site/year (95% Highest Probability Density (HPD) 1.5 × 10−7 to 3.3 × 10−7). This mutation rate is comparable to mutation rates estimated for Buchnera aphidicola [20] and Helicobacter pylori [21], but about twice as fast as recently inferred for S. Agona [22]. The New York bovine isolates are found in a well-supported (posterior probability 1.0) clade (NY bovine clade; see Figure 1), well separated from the isolates from Washington state, Florida, and the human clinical isolates from New York state. This may indicate that, although isolates of S. Cerro of the bovine-associated clade were prevalent in farm environments, and thus farm personnel would be frequently exposed to this clone, this clone was not responsible for the human cases in New York state represented by these two isolates. The time of emergence of the most recent common ancestor (MRCA) of the NY bovine clade is estimated to be 1998 (95% HPD 1991-2003). The NY bovine clade is further split into two clades: (i) a clade with two isolates from northeastern New York (Figure 1: clade 1) and (ii) a clade with 15 bovine-associated isolates from western NY state (Figure 1: clade 2). The MRCA of the latter clade dates back to 2002 (95% HPD 1999-2005). Within clade 2, two well-supported clusters were identified (marked 'a' and 'b' in Figure 1). Specifically, 'cluster a' contains six isolates that were isolated from Steuben county (NY) and the neighboring Livingston county (NY). This finding suggests a phylogeographic signal in the dataset, which should facilitate more detailed tracing of the emergence of S.
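The temporal-signal check described above regresses root-to-tip divergence against sampling date, as done by Path-O-Gen. The idea can be illustrated with a minimal Python sketch; the years and distances below are invented for illustration and are not the values from this study.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical sampling years and root-to-tip distances (substitutions/site)
# read off a rooted maximum likelihood tree; values are illustrative only.
years = np.array([1986, 1994, 1998, 2002, 2004, 2006, 2007, 2008])
root_to_tip = np.array([2.1e-5, 2.4e-5, 2.6e-5, 2.8e-5, 2.9e-5, 3.1e-5, 3.1e-5, 3.2e-5])

fit = linregress(years, root_to_tip)
print(f"slope (substitution rate proxy): {fit.slope:.2e} per site per year")
print(f"correlation r = {fit.rvalue:.2f}, R^2 = {fit.rvalue**2:.3f}")
```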
Cerro ST367 throughout the northeastern US with a larger sampling and a population genomic analysis.

Genome sequence analysis reveals a stepwise evolution of S. Cerro ST367 to a bovine-associated clade, characterized by deletion of selected operons and acquisition of a premature stop codon in sopA. Loss or gain of genes within bacterial populations may indicate niche adaptation of bacterial subpopulations [23]. To infer patterns of gene loss, we mapped reads of the 27 S. Cerro isolates against well-annotated genomes such as those of S. Typhi CT18, S. Typhimurium LT2, and S. Choleraesuis SC-B67, and additionally mapped reads of S. Adelaide FSL A4-669 [24] against these genomes, to determine if the patterns of absence were also observed in the most recent common ancestor of this serovar and the S. Cerro population studied here. Reads of the 27 S. Cerro isolates mapped to 86, 88, and 90% of the coding sequences in S. Typhi CT18, S. Typhimurium LT2, and S. Choleraesuis SC-B67, respectively. This is very similar to the percentage of genes shared (89%) between S. Typhimurium LT2 and S. Typhi CT18 [25] and falls in the higher end of the range observed by Jacobsen et al. [26] for a wide variety of Salmonella serovars. The genome size and the high number of shared genes thus suggest that the lineage of S. Cerro studied here did not experience notable genome reduction. Mapping of sequence reads of the isolates of the S. Cerro population further revealed a pattern of gene absence generally conserved within the S. Cerro population sampled here, suggesting that most of the genomic characteristics associated with the emergence of S. Cerro among bovine-associated habitats were present in the MRCA of this S. Cerro clade. Interestingly, loss of some of the SPIs (Salmonella Pathogenicity Islands) found here to be absent or partially absent (gene deletions) from the Cerro population studied (i.e., ST367), but present in S. Typhimurium LT2 or S. Typhi CT18, has been associated with attenuation of virulence. Specifically, the genomic island at the S. Typhi SPI-10 locus is completely absent from the S. Cerro ST367 isolates examined here; this SPI has been associated with virulence in mice [27]. Chaudhuri et al. [28] also showed that significant reduction of fitness of S. Typhimurium SL1344 is observed during intestinal colonization of cattle when genes in SPI-10 (in particular STM4489) are disrupted by transposon insertion. Genes homologous to (i) STM2230.1c to STM2240 of SPI-12, and (ii) STM3117, STM3123, and STM3119 to STM3121 of SPI-13 were also found to be absent from S. Cerro ST367; these SPIs have been associated with systemic infection of mice in S. Typhimurium [29], and replication in macrophages (SPI-13: [30]). Furthermore, disruption of STM2231 in SPI-12 and STM3123 in SPI-13 was previously shown to cause significant reduction in fitness in S. Typhimurium SL1344 during intestinal colonization of cattle [28]. In addition, homologs of STM0293, STM0294 and STM0299 are deleted in S. Cerro ST367. These genes are found in SPI-16, a SPI associated with intestinal persistence in mice [31]. Disruption of STM0293 in S. Typhimurium has been shown to cause reduced fitness with regard to intestinal colonization of cattle [28]. Most of the SPI-related genes found to be absent in S. Cerro ST367 were confirmed to be present in S. Adelaide FSL A4-669, suggesting loss of these genes/SPIs occurred after the divergence of S. Adelaide from the most recent common ancestor of S. Cerro ST367.
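The read-mapping-based presence/absence analysis described above reduces, for each reference gene, to a comparison of mean mapping depth against a cutoff. Below is a minimal Python sketch assuming per-gene depths have already been extracted (for example with the BEDtools 'coverage' tool mentioned in the Methods); the 10X cutoff mirrors the masking threshold used elsewhere in this study, and the gene names and depths are purely illustrative.

```python
def call_presence(gene_coverage: dict[str, float],
                  depth_cutoff: float = 10.0) -> dict[str, str]:
    """Classify reference genes as present or absent based on the mean
    read-mapping depth per annotated feature."""
    return {gene: ("present" if depth >= depth_cutoff else "absent")
            for gene, depth in gene_coverage.items()}

# Illustrative example with made-up depths.
example = {"STM1633": 0.2, "STM1637": 0.0, "sopA": 85.4, "invA": 92.1}
print(call_presence(example))
```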
We found evidence for the presence of four complete toxin-antitoxin (TA) modules (STM2954.1N-2955.S; STM4030.S-4031; STM3777-78 and STM4449-50) within the S. Cerro genomes studied here. This is interesting as De la Cruz et al. [32] suggested that TA modules in Salmonella play a role in virulence, and that the number of genomically encoded TA modules is correlated with the pathogenicity of individual strains. By comparison, the number of TA modules in S. enterica subsp. enterica ranges from 5 (S. Paratyphi B SPB7) to 10 (S. Typhimurium LT2), making S. Cerro ST367 one of the subsp. enterica serotypes with the lowest number of TA modules. The number of TA modules in S. Cerro ST367 is similar to that observed in Salmonella enterica subsp. arizonae, a subspecies which is predominantly found in cold-blooded hosts and generally does not seem to cause illness in warm-blooded hosts [33]. The complete or partial absence of some SPIs in all S. Cerro ST367 and the low number of TA modules in the genome thus suggest a putative shift of S. Cerro in host and/or tissue tropism before the emergence of the NY bovine-associated clade. The hypothesis that the S. Cerro population studied here shows unique host and/or tissue tropism characteristics is also supported by the finding that all 27 S. Cerro ST367 isolates sequenced here were found to carry a premature stop codon in sopA, causing a truncation of the gene product from 782 aa (in S. Typhimurium LT2) to 433 aa. Previous studies have shown that SopA is involved in virulence during bovine gastrointestinal infections by S. Typhimurium and S. Dublin [34,35], and that sopA mutations are implicated in reduced polymorphonuclear (PMN) cell migration [34,36] and fluid secretion in ileal loops in calves [34]. Premature stop codons in sopA have been found in S. Typhi, S. Paratyphi A, and S. Gallinarum, and it has been suggested that loss of a functional SopA has been an important factor in the virulence and adaptation of these serovars to a systemic niche in certain hosts [37,38]. Interestingly, the one base-pair insertion responsible for the premature stop codon occurs within a ~10 bp region of sopA that also contains deletions in S. Typhi and S. Paratyphi A (Figure 2). While S. Typhi and S. Paratyphi A contain additional mutations that may have caused loss of function of SopA [38], the occurrence of the insertion in the same region of S. Cerro sopA suggests this is a replication-error-prone region of the genome. A conserved domain search (http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) against the Conserved Domain Database [39] of the aa sequence of the truncated SopA in S. Cerro ST367 revealed that the premature stop is situated in the SopA central domain [40] of the gene. Furthermore, the truncated SopA protein lacks the caspase-3 cleavage sites, which have been demonstrated to be important in the induction of PMN transepithelial migration in S. Typhimurium [36]. Although the disruption of the main functional domain of SopA and the loss of the caspase-3 cleavage sites in particular suggest loss of function of SopA in S. Cerro ST367, further molecular genetic experiments have to be conducted to reveal whether truncation of SopA in S. Cerro ST367 has led to loss of function of this gene, and how it affects host cell invasion (as suggested by Raffatelu et al. [41]) and other SopA-associated aspects of Salmonella virulence. Read mapping also showed one gene cluster to be stepwise deleted in the NY bovine clade (Figure 1). This gene cluster contains homologs of the S.
Typhimurium LT2 genes STM1633 to STM1637. This gene cluster encodes a D-alanine transporter and has been recently shown to be required for intracellular survival in murine macrophage-like cells [42], and disruption of STM1637 has been shown to cause a significant reduction in fitness in intestinal colonization in cattle in S. Typhimurium [28]. This gene cluster is present in all 10 Cerro ST367 isolates that do not belong to the bovine clade. Two isolates (FSL R8-2008, FSL R8-2639) lack two genes (STM1633, STM1634) in this gene cluster. These two isolates represent a clade that split off early from the remaining NY bovine-associated population. The remaining 15 isolates in this clade lack the entire gene cluster (Figure 1). The (partial) absence of the D-alanine transporter gene cluster is currently the only genomic feature that differentiates the NY bovine clade from the remaining population (including isolates from the NY human cases). S. Cerro displays reduced invasiveness of human epithelial cells compared to other Salmonella serovars commonly isolated from bovine sources The comparative genomic analyses described above suggest S. Cerro lacks several functional genes and genomic elements that are involved in invasion and intracellular survival. To assess if strains of the S. Cerro ST367 population (Table 2) studied here are impaired in their ability to invade human intestinal epithelial cells, Caco-2 cells were infected with S. Typhimurium (n = 4), S. Newport (n = 4), S. Kentucky (n = 4), and S. Cerro (n = 4). Each serovar was represented by one isolate each from a bovine clinical case, a subclinically infected bovine host, an environmental sample and a human clinical case. S. Cerro isolates were significantly less invasive than isolates of serovars Typhimurium (P < 0.0001) and Newport (P < 0.0001), but not significantly different from S. Kentucky (P = 0.0734) (Figure 3). However, the overall invasiveness of S. Kentucky seems to be skewed by the presence of one isolate from a human clinical case, which shows very low invasion. When this outlier is excluded from the analysis, the S. Cerro isolates are also significantly less invasive than S. Kentucky (P = 0.004). Thus, consistent with our genomic analyses, S. Cerro ST367 seems to be less invasive in human intestinal epithelial cells than the serovars examined here. Future studies on the ability of S. Cerro to invade bovine intestinal epithelial cells and to cause illness in cattle will nevertheless be necessary to determine whether S. Cerro or specific subtypes within S. Cerro truly show attenuated bovine virulence. Conclusions Comparative genomic analyses of 27 Salmonella Cerro isolates indicate that this serovar lacks several genes that have previously been shown to be involved in the ability of Salmonella serovars to cause intestinal infection. Reduced invasion of human intestinal epithelial cells, as compared to other serovars, further supports the reduced ability of this serovar to cause intestinal infection; however, further experiments are necessary to determine which genes are responsible for this phenotype. Altogether, these results suggest that the emergence of S. Cerro ST367 among livestock operations in the northeastern United States may not be due to increased adaptation to the bovine host, nor to increased antibiotic resistance.
Instead, the frequent isolation of this serovar on cattle farms [8] may reflect that this serovar was able to disperse rapidly as no efforts were undertaken to control its spread (possibly due to a lack of clinical signs, which left infections undetected). Alternatively, or in addition, S. Cerro (or some subtypes within S. Cerro) may have unique phenotypic characteristics that were not discovered through the comparative genomic analyses conducted here, but that facilitate environmental survival or dispersal. Isolate selection The 27 S. Cerro isolates for genome sequencing (n = 1) and re-sequencing (n = 26) were isolated from 1986 to 2008 from human cases and domesticated and wild animals in 3 different states (i.e., New York, Washington, and Florida; Table 2). Genome sequencing, assembly and annotation The genome of S. Cerro FSL R8-0235 was sequenced using the SOLiD™ system (Applied Biosystems, Foster City). Mate-paired 50 bp reads were obtained and a de novo assembly was performed as detailed in Den Bakker et al. [24]. Contigs longer than 200 bp were submitted to the NCBI Prokaryotic Genomes Automatic Annotation Pipeline (PGAAP) [43] for automated annotation. Unpaired 50 bp reads for the genomes of the additional 26 S. Cerro ST367 isolates were obtained using the SOLiD™ system (Applied Biosystems, Foster City) as detailed in Den Bakker et al. [44]. Prophage identification PROPHINDER [11] was used to find putative prophages. The prophage regions were compared, using RAST [12], to previously sequenced genomes to identify homologous regions. SOLiD™ read mapping, population genetics analysis, and read mapping based gene presence/absence analysis SOLiD™ reads were mapped against a reference genome (FSL R8_0235) using PerM [45]. ComB [46] was used for SNP calling and creation of consensus sequences. Regions with coverage less than 10X were masked in the consensus sequences. Consensus sequences created with ComB were used as input for the BratNextGen [19] recombination detection software, using 100 replicates of 50 iterations each. SNPs in regions that were predicted to be involved in a recombination event with P < 0.01 were excluded from the analysis. A maximum likelihood (ML) tree based on the SNP data was created in MEGA 5 [47], and this ML tree was used to test for the presence of a temporal signal in the dataset using Path-O-Gen 1.4 (available from http://tree.bio.ed.ac.uk/software/pathogen/). BEAST version 1.7.5 [48] was used to create a tip-dated phylogeny of the S. Cerro isolates. Four different models differing in assumptions on mutation rate and effective population size (strict clock, constant population size; strict clock, Gaussian Markov random field (GMRF) model [49]; relaxed clock, constant population size; relaxed clock, GMRF model) were run for 10 million generations each and compared using the Bayes factor as implemented in Tracer version 1.5 (A. Rambaut; available from http://tree.bio.ed.ac.uk/software/tracer/). Read mapping based gene presence/absence analysis was performed by mapping SOLiD™ reads to selected reference genomes using PerM [45]. Coverage per annotated gene feature in the reference genome was subsequently obtained using the 'coverage' tool from the BEDtools suite [50]. Caco-2 cell invasion assays of S. Cerro, S. Kentucky, S. Typhimurium, and S. Newport To compare the ability of S. Cerro isolates to invade human intestinal epithelial cells, Caco-2 cells were infected with S. Typhimurium (n = 4), S. Newport (n = 4), S. Kentucky (n = 4), and S.
Cerro (n = 4), see Additional file 1. Salmonella Typhimurium ATCC® 14028 was used as a positive control and its sirA isogenic mutant as a negative control. All isolates were susceptible to gentamicin as determined by antimicrobial susceptibility testing (MIC values between 0.25 and 1 μg/ml) by the Cornell University Animal Health Diagnostic Center. Salmonella isolates were grown on Luria Bertani (LB) plates at 37°C for 16 hours. A colony was transferred into 5 mL LB broth and incubated 18 hours at 37°C, without shaking. After 18 hours of incubation, 1 mL of each culture was pelleted by centrifugation and re-suspended in 1 mL of Phosphate Buffered Saline (PBS) pH 7.4. Bacterial cells were diluted and Caco-2 cells were inoculated at an MOI of 10. Each strain was inoculated in triplicate in each of the 3 experiments conducted. Appropriate dilutions were plated on LB for calculation of the initial inoculum. For all the experiments Caco-2 cells were maintained in Dulbecco's Modified Eagle Medium (DMEM) 20% FBS 1% non-essential amino acids at 37°C and 5.0% CO 2 , for no more than 50 passages. The 24-well plates were seeded at a concentration of 5.0 × 10 4 cells/well and incubated at 37°C and 5% CO 2 for 48 hours. Thirty minutes before the cells were inoculated with Salmonella, media in the 24-well plate was replaced with fresh media. Caco-2 cells were inoculated, and incubated at 37°C and 5% CO 2 for 1 hour, followed by 3 washes with pre-warmed PBS. Fresh media was distributed into each well followed by a 15 minute incubation at 37°C and 5% CO 2 . Finally, media with gentamicin (50 μg/mL) was added and the cells were incubated for 1 hour at 37°C and 5% CO 2 . The cells were then lysed by vigorously pipetting 500 μL of chilled water in each well. The bacterial suspensions recovered were plated on LB and incubated at 37°C overnight. Invasion efficiency was calculated as [CFU recovered/CFU infected] × 100. Statistical analysis was performed using SAS software (SAS Institute Inc., Cary, NC, USA). The invasion efficiencies were analyzed using one-way analysis of variance (ANOVA), Tukey post hoc test, and the data was log-transformed to satisfy ANOVA assumptions of normality. Availability of supporting data section All raw read files have been deposited in the Sequence Read Archive of the National Center for Biotechnology Information (http://www.ncbi.nlm.nih.gov/Traces/sra/) under entries PRJNA185435, PRJNA187190-187196, PRJNA187371, PRJNA187373, PRJNA187542, PRJNA 187545, PRJNA187919-187921, PRJNA187962, PRJNA 187963, PRJNA187965-187974, and PRJNA73959. The de novo assembly of strain FSL R8-0235 has been deposited as a Whole Genome Shotgun project at DDBJ/ EMBL/GenBank under the accession JMIJ00000000. The version described in this paper is version JMIJ01000000. Additional file Additional file 1: Isolates used in invasion assay. Microsoft excel file listing isolates used in invasion assay. Competing interests LDR-R, AMS, MW and HDB declare that they have no competing interests. Life Technologies Corporation partially funded this study by providing sequencing reagents and instruments, and by compensating its employees (CAC, LD, RF and MRF), who participated in study design, data collection and analysis, decision to publish, and preparation of the manuscript. Authors' contributions LDR-R, AMS, CAC, MRF and MW conceived the study. AMS performed DNA isolation. LD and RF performed the genome sequencing. LDR-R, CAC, and HDB performed the genome sequence analysis. LDR-R performed the invasion assays. 
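Two short, self-contained sketches may help make the computational steps described in the Methods concrete. The first mirrors the read-mapping-based gene presence/absence analysis: per-gene coverage values (for example, as produced by the BEDtools 'coverage' tool) are turned into presence/absence calls per isolate. The file layout, column positions and the breadth-of-coverage cutoff are illustrative assumptions, not the authors' actual thresholds.

```python
# Illustrative sketch (not the authors' pipeline): classify genes as present/absent
# from per-gene read-mapping coverage, e.g. output of `bedtools coverage -a genes.bed -b reads.bam`.
# Column layout and the 0.9 breadth-of-coverage cutoff are assumptions for illustration.
import csv
from collections import defaultdict

def load_gene_coverage(path):
    """Read a bedtools-coverage style TSV; assume gene name in column 4 and
    fraction of the gene covered by reads in the last column."""
    frac = {}
    with open(path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            gene, fraction_covered = row[3], float(row[-1])
            frac[gene] = fraction_covered
    return frac

def presence_absence(isolate_files, breadth_cutoff=0.9):
    """Return {gene: {isolate: 0/1}} presence calls across isolates."""
    calls = defaultdict(dict)
    for isolate, path in isolate_files.items():
        for gene, fraction in load_gene_coverage(path).items():
            calls[gene][isolate] = 1 if fraction >= breadth_cutoff else 0
    return calls

if __name__ == "__main__":
    files = {"FSL_R8-2008": "R8-2008.cov.tsv", "FSL_R8-2639": "R8-2639.cov.tsv"}  # hypothetical paths
    calls = presence_absence(files)
    absent_everywhere = [g for g, by_iso in calls.items() if sum(by_iso.values()) == 0]
    print(len(absent_everywhere), "genes absent in all isolates")
```

The second sketch reproduces the invasion-efficiency calculation and the log-transformed one-way ANOVA described above; the CFU counts are invented placeholders, and in practice pairwise Tukey comparisons would follow the ANOVA as stated in the text.

```python
# Illustrative sketch of the invasion-efficiency calculation and the statistical
# comparison described above; the CFU counts below are made-up placeholders.
import numpy as np
from scipy import stats

def invasion_efficiency(cfu_recovered, cfu_inoculated):
    """Invasion efficiency (%) = [CFU recovered / CFU in the inoculum] x 100."""
    return 100.0 * cfu_recovered / cfu_inoculated

inoculum = 1e6  # hypothetical inoculum size (CFU)
recovered = {   # hypothetical triplicate recoveries (CFU) per serovar
    "Typhimurium": [52000, 61000, 48000],
    "Newport":     [40000, 35000, 47000],
    "Kentucky":    [9000, 12000, 300],
    "Cerro":       [1500, 2100, 1800],
}
eff = {s: [invasion_efficiency(c, inoculum) for c in cfus] for s, cfus in recovered.items()}

# One-way ANOVA on log-transformed efficiencies (log-transform to satisfy the
# normality assumption, as described in the Methods).
f_stat, p_value = stats.f_oneway(*[np.log10(v) for v in eff.values()])
print(f"ANOVA on log10(invasion efficiency): F = {f_stat:.2f}, p = {p_value:.4f}")
```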
LDR-R, MW, and HDB wrote the paper. All authors read and approved the final manuscript.
6,188
2014-06-04T00:00:00.000
[ "Biology", "Environmental Science" ]
The W mass and width measurement challenge at FCC-ee The FCC-ee physics program will deliver two complementary top-notch precision determinations of the W boson mass, and width. The first and main measurement relies on the rapid rise of the W-pair production cross section near its kinematic threshold. This method is extremely simple and clean, involving only the selection and counting of events, in all different decay channels. An optimal threshold-scan strategy with a total integrated luminosity of $12\,{\rm ab}^{-1}$ shared on energy points between 157 and 163 GeV will provide a statistical uncertainty on the W mass of 0.5 MeV and on the W width of 1.2 MeV. For these measurements, the goal of keeping the impact of systematic uncertainties below the statistical precision will be demanding, but feasible. The second method exploits the W-pair final state reconstruction and kinematic fit, making use of events with either four jets or two jets, one lepton and missing energy. The projected statistical precision of the second method is similar to the first method's, with uncertainties of $\sim 0.5$ ($1$) MeV for the W mass (width), employing W-pair data collected at the production threshold and at 240-365 GeV. For the kinematic reconstruction method, the final impact of systematic uncertainties is currently less clear, in particular uncertainties connected to the modelling of the W hadronic decays. The use and interplay of Z$\gamma$ and ZZ events, reconstructed and fitted with the same techniques as the WW events, will be important for the extraction of W mass measurements with data at the higher 240 and 365 GeV energies. Introduction The W mass is a fundamental parameter of the standard model (SM) of particle physics, currently measured with a precision of 12 MeV [1], from a combination of LEP, Tevatron and LHC measurements shown in Fig. 1. In the context of precision electroweak tests the precision of the measurement of the W mass is currently limiting the sensitivity to possible effects of new physics [2]. A precise direct determination of the W mass can be achieved by observing the rapid rise of the W-pair production cross section near its kinematic threshold. This method essentially only involves counting events, in all decay channels, and is therefore extremely clean and straightforward. In 1996 the LEP collider delivered e + e − collisions at a single energy point near 161 GeV, with a total integrated luminosity of about 10 pb −1 at each of the four interaction points. The data was used to measure the W-pair cross section (σ WW ) at 161 GeV, and extract the W mass with a precision of 200 MeV [3,4,5,6]. The W mass and width have further been measured, with better precision, making use of the full kinematic reconstruction of all decay channels at LEP [7], and the partial reconstruction of leptonic decays at the Tevatron [8] and LHC [9,10] hadron colliders. Estimates of the W mass and width precision achievable with the FCC-ee physics program are outlined in Ref. [11]. Further details and insight are given in the following. The W-pair cross section lineshape The determination of the W mass and width from the W-pair threshold cross section lineshape is presented here. For a basic understanding of the statistical and systematic uncertainties, the W mass extraction from a single cross section energy point is illustrated first. [1]. (right) W-pair production cross section as a function of the e + e − collision energy E CM as evaluated with YFSWW3 1.18 [12]. 
The central curve corresponds to the predictions obtained with m W = 80.385 GeV and Γ W = 2.085 GeV. Purple and green bands show the cross section curves obtained varying the W mass and width by ±1 GeV. Performing a W-pair cross section measurement at a single energy point, the statistical sensitivity to the W mass is given by ∆m W (stat) = (dσ WW /dm W ) −1 √(σ WW /(L ε p)) (1), where L is the data integrated luminosity, ε the signal event selection efficiency and p the selection purity, alternatively expressed in terms of σ B , the total selected background cross section. A systematic uncertainty on the background cross section will propagate to the W mass uncertainty as ∆m W (B) = (dσ WW /dm W ) −1 ∆σ B /ε (2). Other systematic uncertainties as on the acceptance (∆ε) and luminosity (∆L) will propagate as ∆m W (ε, L) ≃ (dσ WW /dm W ) −1 σ WW (∆ε/ε ⊕ ∆L/L) (3), while theoretical uncertainties on the cross section (∆σ WW (T)) propagate directly as ∆m W (T) = (dσ WW /dm W ) −1 ∆σ WW (T) (4). Finally the uncertainty on the center of mass energy E CM will propagate to the W mass uncertainty as ∆m W (E) = (dm W /dE CM ) ∆E CM (5), which can be shown to be limited as ∆m W (E) ≤ ∆E CM /2, and in fact for E CM near the threshold it is ∆m W (E) ≃ ∆E CM /2, so it is the beam energy uncertainty that propagates directly to the W mass uncertainty. In the case of L = 12 ab −1 accumulated by the FCC-ee data taking in the W-pair threshold energy region, and assuming an event selection with σ B = 300 fb and ε = 0.75, similar to what was achieved at LEP [3], a statistical precision of ∆m W ≃ 0.3 MeV is achievable as from Eq. 1. The impact of systematic uncertainties can be kept below the statistical uncertainty by satisfying the following conditions: ∆σ WW (T) < 0.8 fb (8) and ∆E CM < 0.35 MeV (9), corresponding to precision levels of 2 · 10 −3 on the background, 2 · 10 −4 on acceptance and luminosity, 2 · 10 −4 on the theoretical cross section, and 4 · 10 −6 on the beam energy. All of these conditions appear to be challenging yet should be attainable on the side of experimental systematics, as also discussed later in this essay. The challenge to reach the required theoretical precision is discussed in Ref. [13], where it is clear that substantial improvements over the current state of the art [14,15,16,17] will be necessary to reach the 2 · 10 −4 precision level. W mass and width measurements at two or more energy points In the SM the W width is linked to the W mass and the Fermi constant, with a ∼ α S /π QCD correction due to the hadronic decay contributions. The W width is currently measured to a precision of 42 MeV [1]. The first calculations of the W boson width effects in e + e − → W + W − reactions have been performed in Ref. [18], and revealed the substantial effects of the width on the cross section lineshape, in particular at energies below the nominal threshold. From the determination of σ WW at a minimum of two energy points near the kinematic threshold both the W mass and width can be extracted [19]. In the following the YFSWW3 version 1.18 [12] program has been used to calculate σ WW as a function of the energy (E CM ), W mass (m W ) and width (Γ W ). Figure 1 shows the W-pair cross section as a function of the e + e − collision energy with W mass and width values set at the central values m W = 80.385 GeV and Γ W = 2.085 GeV, and with large 1 GeV variation bands around the mass and width central values. It is to be noted that these do not represent a full state of the art precision on the cross section values, but deliver a precision that is fully comfortable for all results and conclusions presented in this paper.
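A quick numeric cross-check of Eq. 1 can make the quoted orders of magnitude concrete. The sketch below uses the integrated luminosity, background level and efficiency quoted in the text, while the cross section value and, in particular, the lineshape slope dσ WW /dm W are illustrative placeholders rather than values taken from this paper, so the output should only be read as an order-of-magnitude check (it lands in the few-tenths-of-a-MeV range quoted above).

```python
# Minimal numeric sketch of Eq. 1 (single-point statistical sensitivity).
# The slope dsigma/dm_W is an illustrative placeholder, not a value from the paper;
# the other inputs (L, sigma_B, efficiency) are those quoted in the text.
import math

def dm_stat(sigma_ww_fb, dsigma_dm_fb_per_mev, lumi_fb, eff, sigma_b_fb):
    """Statistical W-mass uncertainty from a single cross-section point (Eq. 1)."""
    purity = eff * sigma_ww_fb / (eff * sigma_ww_fb + sigma_b_fb)
    dsigma_stat = math.sqrt(sigma_ww_fb / (lumi_fb * eff * purity))   # fb
    return dsigma_stat / dsigma_dm_fb_per_mev                          # MeV

if __name__ == "__main__":
    lumi = 12_000.0          # 12 ab^-1 expressed in fb^-1
    sigma_ww = 3_900.0       # ~3.9 pb near E_CM = 161.4 GeV (illustrative)
    dsigma_dm = 2.0          # fb per MeV of m_W, illustrative lineshape slope
    print(f"dm_W(stat) ~ {dm_stat(sigma_ww, dsigma_dm, lumi, 0.75, 300.0):.2f} MeV")
```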
In fact the same conclusions in terms of methodology, optimal data taking planning, and projected precision of the measurements are also reached when making use of the leading order analytical formulae in Ref. [18] for the cross section dependencies. It can be noted that while a variation of the W mass roughly corresponds to a shift of the cross section lineshape along the energy axis, a variation of the W width has the effect of changing the slope of the cross section lineshape rise. It can also be noted that the W width dependence shows a crossing point at E CM ≃ 2m W + 1.5 GeV ≃ 162.3 GeV, where the cross section is insensitive to the W width. Figure 2 shows the differential functions introduced in Eqs. 1-4, and relevant to the statistical and systematic uncertainties for a measurement of the W mass and width from the W-pair cross section near the kinematic threshold, similarly to what was discussed for the single energy point W mass extraction. For the statistical terms the efficiency and purities are evaluated assuming an event selection quality with σ B ≃ 300 fb and ε ≃ 0.75. The minima of the mass differential curves plotted in Fig. 2 (left) indicate the optimal points to take data for a W mass measurement; in particular, minimum statistical uncertainty is achieved with E CM ≃ 2m W + 0.6 GeV ≃ 161.4 GeV. The maximum sensitivity to the W width can be determined from the minima of the curves displayed in Fig. 2 (right). Note that these curves all diverge at E CM ≃ 162.3 GeV, where dσ WW /dΓ W = 0. The minima of the width differential curves are spread over a larger E CM area, with the σ WW (dΓ W /dσ WW ) term decreasing at lower energies due to the vanishing σ WW . This is relevant in the context of an optimal data-taking strategy, if systematic uncertainties become limiting factors, as discussed later. If two cross section measurements σ 1,2 are performed at two energy points E 1,2 , both the W mass and width can be extracted with a fit to the cross section lineshape. The uncertainty propagation is given by the linear system ∆σ i = (dσ i /dm W ) ∆m W + (dσ i /dΓ W ) ∆Γ W (i = 1, 2), and the resulting uncertainties on the W mass and width are obtained by inverting the corresponding 2×2 matrix of cross section derivatives. (Fig. 2 caption: W-pair cross section differential functions with respect to the W mass (left) and width (right), evaluated with YFSWW3 1.18 [12]; central mass and width values are set to m W = 80.385 GeV and Γ W = 2.085 GeV.) If the ∆σ 1,2 uncertainties on the cross section measurements are uncorrelated, e.g. only statistical, the linear correlation between the derived mass and width uncertainties follows from the same matrix inversion. Optimal data taking configurations When planning data taking at two different energy points near the W-pair threshold in order to extract both m W and Γ W , it is useful to figure out which energy point values E 1 and E 2 would be optimally suited to obtain the best measurements, also as a function of the data luminosity fraction f delivered at the higher energy point. For this a full 3-dimensional scan of possible E 1 , E 2 and f values has been performed, and the configurations that minimize a given combination of the expected statistical uncertainties on the mass and the width F (∆m W , ∆Γ W ) are found. For example, in order to minimize the simple sum of the statistical uncertainties F (∆m W , ∆Γ W ) = ∆m W + ∆Γ W , the optimal data taking configuration would be with the lower energy point a couple of GeV below the nominal threshold (around 157-158 GeV), the upper point at the Γ W -independent point near 162.3 GeV, and the luminosity shared between the two points with f ≃ 0.4. With this configuration, and assuming a total luminosity of L = 12 ab −1 , the projected statistical uncertainties would be ∆m W = 0.5 MeV and ∆Γ W = 1.2 MeV. Varying the definition of F (∆m W , ∆Γ W ) used in the optimization does not significantly affect the results.
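To make the two-point extraction above concrete, the following sketch propagates two uncorrelated cross-section uncertainties through the 2×2 matrix of derivatives. The derivative values are invented placeholders (only their structure matters: the second point is taken at the width-insensitive energy where dσ/dΓ W ≈ 0), so the printed numbers are purely illustrative.

```python
# Sketch of the two-point (m_W, Gamma_W) extraction: invert the 2x2 matrix of
# cross-section derivatives to propagate the two statistical uncertainties.
# The derivative values below are illustrative placeholders, not from the paper.
import numpy as np

# d(sigma_i)/d(m_W), d(sigma_i)/d(Gamma_W) in fb/MeV at the two scan points
J = np.array([[2.0, -0.8],    # point 1: off-shell, width-sensitive (hypothetical)
              [2.5,  0.0]])   # point 2: ~162.3 GeV, where dsigma/dGamma ~ 0
dsigma = np.array([0.8, 0.5]) # statistical uncertainties of the two cross sections (fb)

Jinv = np.linalg.inv(J)
cov = Jinv @ np.diag(dsigma**2) @ Jinv.T   # covariance of (m_W, Gamma_W) in MeV^2
dm, dG = np.sqrt(np.diag(cov))
rho = cov[0, 1] / (dm * dG)
print(f"dm_W = {dm:.2f} MeV, dGamma_W = {dG:.2f} MeV, correlation = {rho:.2f}")
```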
The optimal upper energy is always at the Γ W -independent E 2 = 162.34 GeV point, while the optimal lower energy is at (1-2) Γ W below the nominal 2m W threshold, with the precise value depending on the degree to which the definition of F is focused on the W-width measurement. In a similar way the optimal data fraction to be taken at the lower off-shell E 1 energy point varies according to the chosen precision targets, with larger fractions more to the benefit of the W width precision. If a small fraction of data (e.g. f = 0.05) is taken off-shell, a statistical precision ∆m W = 0.3 MeV is obtainable both with a single-parameter (m W ) and the two-parameter (m W , Γ W ) fit of the lineshape. Considering that the beam energies E b that can surely be calibrated with resonant depolarization are such that the spin tune is a half integer, that is E b = 0.4406486(ν + 0.5) GeV (17), where ν is an integer, the scan of energy points can be limited to a grid with E CM = 0.8812972(ν + 0.5) GeV. Taking this grid constraint into account, the optimal higher energy point for data taking becomes E 2 = 162.62 GeV, for ν = 184. The corresponding minimum statistical precisions attainable are increased by 5-10% with respect to the values reported above. For the case of minimizing ∆m W + ∆Γ W , the optimum would be taking data with E 1 = 157.33 GeV, E 2 = 162.62 GeV and f = 0.40, yielding statistical uncertainties ∆m W = 0.55 MeV and ∆Γ W = 1.3 MeV, assuming a total integrated luminosity L = 12 ab −1 . The effects of the beam energy spread have also been considered, and they impact mostly the W width extraction. A 10%-level control on the energy spread will be sufficient to make the corresponding systematic uncertainties negligible [20]. Data taking at additional energy points In the case of limiting correlated systematic uncertainties, it can be useful to take data and measure both signal and background cross sections at more than two E CM points, in order to reduce background and acceptance uncertainties. In particular, for the simultaneous measurement of m W and Γ W just described, taking data at energy points where the differential factors (dσ/dm W ) −1 , (dσ/dΓ W ) −1 , σ(dσ/dm W ) −1 and σ(dσ/dΓ W ) −1 are equal can help cancel the effect of correlated systematic uncertainties of background and acceptance. Initial investigations in this direction have been carried out [21], supporting the presumption that taking data at more than two energy points improves the robustness of the measurement against correlated systematic uncertainties. Measuring the W-pair cross section at additional points can also serve to disentangle possible new physics effects, as for example anomalous triple gauge coupling (TGC) contributions. The SM-expected steep W-pair cross section rise with energy is proportional to the produced W boson velocity (β W ) and is driven by the t-channel neutrino exchange process. The contribution of processes with TGCs follows a different β W ³ dependence, with expected cancellation effects. Anomalous TGC contributions would therefore lead to distinctive differences in the W-pair cross section lineshape also in the threshold region. W mass and width from the W pair decay kinematics In addition to the W mass and width measurements achievable through the W-pair cross sections near the production energy threshold, the W mass and width can also be determined from the kinematic reconstruction of the W-pair decay products. This was the primary method to measure the W mass and width with LEP2 data [7].
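As a small numerical aside, the half-integer spin-tune grid of Eq. 17 can be enumerated directly; with the constant exactly as quoted in the text, the allowed point closest to the Γ W -independent optimum of 162.34 GeV comes out at ν = 184 and E CM ≈ 162.6 GeV, consistent with the 162.62 GeV quoted above.

```python
# Quick numeric illustration of the half-integer spin-tune grid (Eq. 17):
# allowed collision energies are E_CM = 0.8812972*(nu + 0.5) GeV, and the point
# closest to the Gamma_W-independent optimum near 162.34 GeV is picked.
target = 162.34  # GeV
grid = {nu: 0.8812972 * (nu + 0.5) for nu in range(180, 190)}
best_nu = min(grid, key=lambda nu: abs(grid[nu] - target))
print(best_nu, round(grid[best_nu], 2))   # -> 184, ~162.6 GeV
```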
In the kinematic reconstruction of the W mass from W-pair decays the fully hadronic (qqqq) and semi-leptonic (qq ν) final states are exploited, making use of events with either four jets or two jets, one lepton and missing energy. In both cases the reconstructed W mass values are obtained by imposing the constraint that the total four momentum in the event should be equal to the known initial centre-of-mass energy and zero momentum. The four momentum constraints (4C) are implemented by means of a kinematic fit where the measured parameters of the jets and leptons are adjusted, taking account of their measurement uncertainties in such a way as to satisfy the constraints of energy and momentum conservation. The 4C implementation allows to overcome the limitations of jet energy resolution on the W mass reconstruction, and improve the mass resolution from ∼10 GeV to ∼2 GeV. The kinematic fit of final states with four-momentum conservation constraints can also be applied to other di-boson productions at E CM = 160 − 365 GeV, like Z-pairs and Zγ events. In the case of Zγ final states, also known as radiative returns to the Z-peak, the fit can be shown to lead to a reconstructed Z boson mass as [22], where θ 1,2 is the angle of the two leptons or jets from the Z decay, with respect to the photon direction, and β 1,2 are the leptons or jets velocities. The formula in Eq. 18 is based on fixing the jet directions and velocities to their measured values but rescaling their energies to conserve four-momentum, that follows closely what is done in a kinematic fit. Equation 18 also shows the direct interplay between the reconstructed Z mass and the centre-of-mass energy (E 2 CM = s). In practice the Z mass is reconstructed primarily through the decay products direction, and their velocities in the case of hadronic jets, while the energy scale is set by the known collision energy. The same happens with the 4C kinematic reconstruction of W-pairs, where again the energy scale of jets is given by the event E CM and the angular openings of jets and leptons carry the primary information to determine the W mass, with the jets velocities as the further important ingredient. On the other hand, by making use of the value of m Z precisely measured at the Z pole, the collision energy E CM can be treated as the parameter to be measured in Eq. 18, so that the kinematic fit of radiative decays can be used to determine E CM . This interpretation was used with LEP2 data to cross-check the E CM values determined by the accelerator [7]. In general a kinematic fit of either Zγ, ZZ, or WW decays can be equivalently employed either to determine the boson (W or Z) mass assuming a given centre-of-mass energy or, alternatively, the average centre-of-mass energy assuming a fixed boson mass. W-pair reconstruction at FCC-ee data taking energies The prospects of the kinematic reconstruction of W-pairs with FCC-ee data can be estimated taking as a reference existing LEP measurements [22]. In a kinematic reconstruction data analysis W-pair decay products are typically forced into four jets using the DURHAM [23] algorithm in the hadronic channel, and into two jets and a lepton in the semi-leptonic channel. The reconstructed W mass peak resolution can be remarkably improved with a four-momentum conservation fit (4C) described above, and eventually with the additional constraint of equal mass for both W in each event (5C). 
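For reference, the kind of angles-only relation that Eq. 18 generalizes can be written down explicitly in the massless-jet limit; the expression below is the familiar radiative-return formula with θ 1,2 measured with respect to the photon (recoil) direction, and it omits the β 1,2 velocity factors that appear in the full Eq. 18, so it is only an illustration of the structure and not a reproduction of this paper's formula.

```latex
% Massless-jet limit of the radiative-return kinematic-fit relation (illustrative):
\frac{m_Z^2}{s} \;=\;
\frac{\sin\theta_1 + \sin\theta_2 - \lvert\sin(\theta_1+\theta_2)\rvert}
     {\sin\theta_1 + \sin\theta_2 + \lvert\sin(\theta_1+\theta_2)\rvert}
```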
Maximum likelihood template fits of the reconstructed W mass distributions are then used to extract the value of m W . With this methodology, used with LEP2 data, it can be estimated that the combined statistical precision of all FCC-ee data would deliver a final precision of around 1 MeV for the W width, and below 0.5 MeV for the W mass, matching the precision delivered by the threshold cross section lineshape. Systematic uncertainties The limitations of systematic uncertainties to the precision of the W mass kinematic reconstruction with FCC-ee data are not easy to establish with certainty. As for the threshold cross section method, the beam energy uncertainty is reflected directly in the W mass reconstruction, in this case through the kinematic fit. Beam energy calibration through resonant depolarisation will ensure that this uncertainty will not be a limiting factor for the W mass reconstruction with the data taken at 162.6 GeV. For the data taken at 240 GeV and 350-365 GeV the analysis and kinematic fit of Zγ and ZZ events can allow the determination of the data E CM with high precision, as can be inferred from Eq. 18 and as done with LEP2 data [7]. Extrapolating the LEP2 measurements to the projected FCC-ee data, a statistical precision of around 1 MeV for E CM would be achievable. This would propagate to a 0.5 MeV systematic uncertainty on the W mass, which matches the projected final statistical uncertainty, and would therefore not be negligible. A number of other systematic uncertainties that were relevant for the LEP2 measurements appear to make this measurement overall more challenging with respect to the simpler threshold determinations. The most challenging uncertainties are likely to be related to the non-perturbative QCD modelling of the fragmentation of W-pair decays, which relates directly to the boost of the hadronic jets, i.e. the β factors in Eq. 18. Precise measurements of the fragmentation properties of Z boson hadronic decays, collected at the Z peak, will be instrumental to build control of the fragmentation properties of weak bosons. Finally a simultaneous analysis and kinematic fit of WW, ZZ and Zγ events can lead to a determination of the m W /m Z ratio, where many systematic uncertainties common to the three channels can cancel, and the W mass can be derived given the independent precision on the Z mass (∆m Z ≃ 100 keV) from the Z peak data. Conclusions Among the primary parameters of the standard model, the W mass and width are those where an improvement of the experimental determination is most desirable. These measurements are extremely difficult at high energy hadron colliders, and the foreseen precision achievable with LHC data is around 10 MeV for the W mass. The FCC-ee program will offer the opportunity for a full exploration of the W-pair production at the kinematic threshold that will deliver a clean and straightforward determination of the W mass and width, with respective accuracies of 0.5 MeV and 1.2 MeV. Complementary and more challenging determinations of the W mass and width with FCC-ee data can be obtained through the reconstruction of W-pair decay products, from data in the full E CM range from the production threshold to E CM = 365 GeV. The projected statistical precision from these other measurements is similar to that of the threshold determinations, but the impact of systematic uncertainties is more difficult to predict, in particular those arising from the beam energy knowledge and the modelling of non-perturbative QCD effects in the W boson hadronic decays.
The ultimate way forward to exploit the kinematic reconstruction method could be in the simultaneous analysis of WW, ZZ and Zγ events, making use of the much higher precision of the Z mass from the peak scan, and allowing to reduce the impact of correlated systematic effects.
5,171.6
2021-07-09T00:00:00.000
[ "Physics" ]
Drivers of Rural Households’ Choices and Intensity of Sustainable Energy Sources for Cooking and Lighting in Ondo State, Nigeria : Poverty reduction and the promotion of sustainable human development are fundamentally dependent on having access to modern energy services. Energy supplies that are dependable, reasonably priced, and sustainable are vital to modern societies. In achieving the sustainable development goals (SDG7) and access to clean energy supplies, this study, using cross-sectional data from 180 randomly sampled rural households, analyzed the key factors determining the choice and intensity of energy sources used for lighting and cooking in rural Nigeria. Both descriptive and inferential statistics (multivariate probit (MVP) and zero-truncated Poisson (ZTP models)) were employed for the analyses. The result showed that there is evidence of fuel stacking in their choice of cooking and lighting energy, and it increases with rising income levels but is more pronounced for lighting than cooking. The result also revealed that reliable access to clean energy (9% of sampled households for LPG and 23% of the households for grid electricity) is very low, as these households still rely on fuelwood (70%) for cooking, but the predominant usage of kerosene (39%) for lighting, as reported in the literature, has drastically changed to dry cell battery (51%). The results using a multivariate probit model to capture the multiple fuel usage phenomenon among rural households show that access to clean energy, improvement in rural poverty, usage of indoor kitchens, household size, and an increase in the education of household heads’ spouses significantly influence the use of clean energy in the rural areas. In the same vein, the result of the ZTP model showed that income, access to energy sources, and occupation of the household head were the drivers of the intensity of cooking and lighting energy sources. Thus, it is recommended that any policy interventions that are targeted at encouraging rural households to use clean energy should start by improving rural access to these clean energy sources, improving their poverty status while also increasing the level of education and awareness of rural women concerning the risks of using dirty energy sources. Introduction Ensuring access to affordable, reliable, sustainable, and modern energy for all has been identified as one of the key sustainable development goals for 2030 (SDG7).According to the International Energy Agency (IEA) and World Bank [1], the UN Sustainable Energy for All initiative posits "sustainable energy for all" to encompass three pillars, namely energy access, energy security, and energy efficiency.The main policy concern about the usage of household energy in rural areas of most developing countries has always been how to understand the key factors that determine energy choices of rural households in order to encourage them to switch to modern energy sources for cooking and lighting, as these two energy services are basic energy needs that cut across all rural households, and the choices these households make can impact their livelihoods, health conditions, and the environment. 
Approximately three billion people still rely primarily on wood or other biomass energy for cooking and heating. This accounts for nearly 1.06 billion people who lack access to electricity [2]. The incomplete combustion of this traditional biomass fuel leads to air pollution, which is estimated to cause 4.3 million deaths annually worldwide [3]. The majority of these individuals are from sub-Saharan African countries (SSA). Most of them reside in developing countries, mostly in Asia and Africa. Nine hundred and five million people, or the majority of those without access to clean cooking fuels, reside in sub-Saharan Africa. Furthermore, only about 45% of people in SSA have access to electricity, even though the sub-region is home to a large share of the 860 million people worldwide without such access. Based on current projections, the IEA estimates that 740 million people (up from 620 million in 2030) will not have access to electricity by 2050, and 1.8 billion people will still not have access to clean fuels for cooking in the next 20 years [4]. About 115 million people in Nigeria still rely on traditional biomass as their primary source of energy for cooking, and every year close to 79,000 people, mostly from the country's rural and marginalized areas, die from pollution caused by the inefficient combustion of traditional biomass [5]. About 99.8% of cooking and heating in rural Nigeria are still done with traditional biomass, despite strong evidence that direct combustion of biomass energy, without the use of improved stoves, only offers very little real energy value with attendant environmental and health concerns, such as deforestation, land degradation, soil erosion, and air pollution [6][7][8]. Even though Nigeria has abundant primary energy resources to meet its domestic energy needs [9], the nation nevertheless has the second-largest population without access to electricity in the world (85 million people, of which 64% reside in rural areas). Furthermore, it is claimed that these rural residents use a variety of energy sources to meet their nighttime lighting needs [10]. In rural Nigeria, the absence of dependable electricity access means that households are unable to improve their standard of living by extending their children's study hours after school, improving their home's nighttime lighting, or engaging in other profitable or productive endeavors. Another type of deprivation that exists in rural areas is poverty, which is closely related to rural households' inability to access modern energy sources and other necessary resources for a suitable and sustainable way of life. The reliance on solid biomass fuels and kerosene as primary energy sources for cooking and lighting, respectively, is extreme in rural areas of Nigeria. It is impossible to ignore the impact that poverty has on the patterns of energy consumption in Nigeria's rural areas. Poor households are the most vulnerable in any society worldwide, according to [11]. Nigerian rural poverty is estimated to be 44.9%, compared to the country's average of 33.1%, based on data from the General Household Surveys (GHS) panel, which was conducted in 2010/2011 and 2012/2013 [12]. Given that the majority of impoverished rural households (particularly those in rural areas) cook their food outdoors on three-stone stoves, it is highly likely that these households experience hardships with food preparation and water heating, particularly during the rainy season. According to Louw et al.
[13], in order to lessen their exposure to any unreliability linked to a single energy source, households typically use a combination of energy sources for a given energy service.Fuel stacking is the practice of rural households using a variety of energy sources [14].When households use multiple fuels for a specific energy service, and their fuel combination pattern demonstrates complementarity between dirty and clean energy sources, even in the face of a significant improvement in their income or welfare status, this is known as fuel stacking.The phenomenon of fuel stacking in rural households is increasingly well-established in the literature, particularly with regard to cooking and lighting energy sources [15][16][17]. Similarly, the low penetration of electricity in rural areas of Nigeria has been ascribed to various factors, such as the expensive, insufficient, and unstable infrastructure brought on by aging transmission lines [18].This has led to the over-reliance of a greater percentage of rural dwellers in Nigeria on traditional fuels like firewood and biomass to meet their household needs [19].Relying on traditional fuels prevents nations from meeting their sustainable development targets because of the inability to support contemporary economic activities, such as heavy industries, and they impede social development by making it more difficult for people to access modern health and education services [20]. Though a number of studies conducted in Nigeria have demonstrated that households use a variety of energy sources for lighting and cooking [21][22][23], none of these studies take these various energy options into account in their various models.Similarly, researchers like Abdul-Wakeel and Dasmani [24], Abdul-Wakeel et al. [25], and Azorliade et al. [26] have concentrated on the factors that influence household cooking fuel preferences in developing nations, such as Nigeria, with some of these researchers examining the distinction between clean and dirty energy.These studies' flaw is that they overlooked the problem of energy choice and consumption expenditure.Others, like Abdul-Wakeel et al. [27] and Ofori et al. [28], make an effort to address this problem by examining the effects of the fuel choice used in homes on human health, with a lot of their research concentrating on particular medical conditions.For instance, Ofori et al. [28] investigated the relationship between blood pressure and the use of dirty fuels in homes among women in southern Nigeria.To the best of our knowledge, this is the only study in Nigeria that models rural households' choices of energy sources for lighting and cooking independently, analyzing the factors that influence these choices and providing a better understanding of the factors that influence the choices made.With regard to the intensity of energy sources for lighting and cooking in Nigeria, this study is helpful in developing effective policy interventions to promote the use of contemporary energy sources for these distinct energy services.The contribution of this study is premised on the comprehensive empirical analyses of the factors influencing rural households' energy decisions in light of increased access to modern fuels and renewable energy sources relative to conventional energy sources while accounting for context-specific factors that influence household energy choices and intensity. 
Study Area and Source of Data The study was conducted in Ondo State, Nigeria. Ondo is one of the States in the southwestern part of Nigeria, where most of the rural areas have limited access to electricity and other modern energy. Primary data were used for this study, and the data were collected with the aid of a well-structured questionnaire. A multistage sampling technique was used to select respondents for this study. The first stage was a purposive selection of the Okitipupa and Owo Agricultural Development Programme (ADP) zones out of the three existing ADP zones in the State. Each zone comprises six local government areas (LGAs). In the second stage, two LGAs were selected from each zone using a simple random sampling technique. In the third stage, simple random sampling was also used to select three villages from each of the LGAs. The final stage was the random selection of fifteen households each from the selected villages/communities to make a total sample size of one hundred and eighty (180) respondents. Detailed information on the choice, access, usage, and purchase of household energy sources for cooking and lighting was collected alongside key demographic and socioeconomic characteristics from each of the rural households. Empirical Model The multivariate probit model is a discrete choice model. Similar to the seemingly unrelated regressions model, it is a multiple-equation extension of the probit model that permits the disturbance terms to be correlated [29]. This model is very helpful in managing fuel stacking because it permits the error terms to be freely correlated, in addition to enabling the simultaneous analysis of these various fuel options. The factors influencing rural households' decisions about cooking energy sources were estimated using a 4-equation multivariate probit model, in accordance with [30], while a 5-equation multivariate probit model was employed for their lighting energy source choices. The model is specified as Y* im = β m ′X im + ϵ im , with the observed choice Y im = 1 if Y* im > 0 and Y im = 0 otherwise, for equations m = 1, . . ., M. The error terms ϵ im , m = 1, . . ., M, are distributed as multivariate normal, each with a mean of zero and variance-covariance matrix V, where V has values of 1 on the leading diagonal and correlations ρ jk = ρ kj as off-diagonal elements. X im is the vector of the explanatory variables and β m ′ is the vector of unknown coefficients to be estimated. The outcome variables for rural households' choices of cooking energy are fuelwood (FWD), charcoal (CHL), kerosene (KRC), and bottled liquefied petroleum gas (LPG), while those for lighting are kerosene (KRL), dry cell battery (DCB), rechargeable battery (RGB), petrol (PMS), and grid electricity (GEL). To test for the absence of correlation among the error terms, the null hypothesis is that the error terms of the equations of each model (cooking and lighting) are independent; that is, all cross-equation correlation coefficients for each model are jointly equal to zero (for example, rho21 = rho31 = rho41 = rho32 = rho42 = rho43 = 0). The likelihood ratio test is among the appropriate tests used to test the null hypothesis of no correlation of error terms across a multivariate probit model [30]. Without significant statistical evidence to reject the null hypothesis, it means that the choices of rural households concerning a particular energy service are not jointly made, and in that case it would be better to estimate these models separately. However, if this null hypothesis is statistically rejected, estimating the models separately will result in inefficient and biased estimates.
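A minimal numeric sketch of the likelihood ratio test just described is given below; the two log-likelihood values are hypothetical placeholders standing in for the fitted restricted (all correlations zero, i.e. independent probits) and unrestricted multivariate probit models, and the degrees of freedom correspond to the six cross-equation correlations of the 4-equation cooking model.

```python
# Illustrative computation of the likelihood-ratio test of jointly-zero error
# correlations described above; the two log-likelihood values are placeholders
# standing in for the fitted restricted (rho = 0) and unrestricted MVP models.
from scipy.stats import chi2

loglik_restricted = -412.7    # hypothetical: independent probits (all rho = 0)
loglik_unrestricted = -404.7  # hypothetical: full multivariate probit
df = 6                        # number of cross-equation correlations (4 equations -> 6)

lr_stat = 2 * (loglik_unrestricted - loglik_restricted)
p_value = chi2.sf(lr_stat, df)
print(f"LR = {lr_stat:.2f}, df = {df}, p = {p_value:.4f}")
```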
In estimating the drivers of the intensity of use of energy sources (cooking and lighting) among the sampled rural households in Ondo State, a zero-truncated Poisson regression model was employed, as used by Ogundari [31]. The outcome variable is specified as count data. Empirical studies with cross-sectional data are hypothesized to have n observations, such that the i th observation is denoted as (z i , x i ). In the current study, z i is the number of energy sources used by the rural household for cooking or lighting. x i is the vector of explanatory variables that are likely to determine the dependent variable z i . Independent variables x i include household socioeconomic and demographic variables. Assuming randomness within a specified time interval, the household's count of energy sources for cooking and lighting has a Poisson distribution with a probability mass function defined as P(Y i = y i ) = exp(−λ i ) λ i ^(y i ) / y i !, where Y i is the random variable's actual value, with mean λ i and variance ϖ i . It is hypothesized that z i is strictly positive (z i > 0), which implies a zero-truncated outcome variable with P(Y i = y i | y i > 0) = exp(−λ i ) λ i ^(y i ) / [y i ! (1 − exp(−λ i ))]. Explicitly stated, the log-linear version of the estimated model can be specified as ln λ i = x i ′β, and the additive specification of the above equation is λ i = exp(x i ′β). Based on the property of the Poisson distribution, ϖ i = λ i , which is the equidispersion assumption. Cameron and Trivedi [32] claim that the qualitative impact of the Poisson assumption of equidispersion failing is comparable to that of the homoscedasticity assumption failing in a linear regression model. Descriptive Statistics of the Respondents The descriptive statistics of the variables used in the multivariate probit models are presented in Table 1. About 70 percent of the sampled households used fuelwood for cooking, while 21, 48, and 11 percent of them used charcoal, kerosene, and LPG for cooking, respectively. This indicates that rural households are still heavily dependent on dirty energy sources for their cooking. On the other hand, 39 percent of these households used kerosene for lighting, while 51, 28, 29, and 35 percent of them used dry cell batteries, rechargeable batteries, petrol, and grid electricity for lighting, respectively. The data also show the average age of household heads to be 48 years, with an average household having five members. On reliable access to modern energy in rural areas, twenty-two percent of the households sampled had access to grid electricity, while only nine percent reported having access to bottled liquefied petroleum gas (LPG). Also, 38 percent of the households reported owning an electricity generator, and only 23 percent cook their food in an indoor kitchen. Rural households' lighting and cooking energy consumption (choice) pattern More so, rural households' cooking energy consumption (choice) pattern is depicted according to their respective income quintiles (Figure 1). Furthermore, rural households' lighting energy consumption (choice) pattern is depicted according to their respective income quintiles (Figure 2). The figure shows that rural households from the lowest income quintile to the highest income quintile use petrol for lighting, and the frequency of use increases with income level.
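To make the estimation procedure above concrete, the sketch below fits a zero-truncated Poisson regression by maximum likelihood on simulated data; the covariate, the coefficients and the simulated sample are illustrative placeholders (only the sample size of 180 mirrors the study), not a reproduction of the authors' estimation.

```python
# Minimal sketch of a zero-truncated Poisson regression fitted by maximum
# likelihood; variable names and the simulated data are illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def ztp_negloglik(beta, X, y):
    """Negative log-likelihood of the zero-truncated Poisson model, with
    lam_i = exp(x_i' beta) and P(y|y>0) = exp(-lam) lam^y / (y! (1 - exp(-lam)))."""
    lam = np.exp(X @ beta)
    ll = -lam + y * np.log(lam) - gammaln(y + 1) - np.log1p(-np.exp(-lam))
    return -ll.sum()

rng = np.random.default_rng(0)
n = 180                                   # matches the study's sample size
X = np.column_stack([np.ones(n),          # intercept
                     rng.normal(size=n)]) # e.g. log income (simulated placeholder)
true_beta = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ true_beta))
X, y = X[y > 0], y[y > 0]                 # keep strictly positive counts only

fit = minimize(ztp_negloglik, x0=np.zeros(X.shape[1]), args=(X, y), method="BFGS")
print("estimated coefficients:", np.round(fit.x, 3))
```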
Distribution of cooking and lighting energy choices by income quintiles The distribution of rural households based on their different choices of cooking energy across income quintiles (Table 2) shows that most of these households cook with fuelwood, with the exception of households in the highest income quintile (36.62% and 23.94%), which cook mostly with kerosene. The implication of the results in Table 2 is that an improvement in the economic or wealth status of rural households will increase their usage of modern and clean energy for cooking. The results in the lower part of Table 2 enable us to take a closer look at our findings across different agroecological zones (AEZ 1 and 2) in our study area. The majority of households in AEZ 1, within income quintiles 1-4, use fuelwood for cooking, while most of the households in income quintile 5 cook with charcoal (36.59%) and kerosene (24.39%). Also, the households in income quintiles 1-3 located in AEZ 2 show patterns similar to their counterparts in AEZ 1. However, most households in income quintile 4 cook with fuelwood (37.50%) and kerosene (40.00%), while most households in income quintile 5 cook with kerosene (40.00%) and LPG (25.00%). The consumption of lighting energy among rural households (Table 3) shows that there is stacking of lighting energy sources as income progresses. Dry cell batteries are mostly used for lighting in rural households. The lowest income quintile households mostly use kerosene (36.17%) and dry cell batteries (34.04%) for lighting, while the highest income quintile households use petrol (31.03%), dry cell batteries (22.99%), and rechargeable batteries (19.54%) for lighting up their houses at night. The results in the lower part of Table 3 show that none of the households in AEZ 1 used grid electricity. This is because AEZ 1 was disconnected from the national grid by the electricity distribution company serving the area. While grid electricity was the highest source of lighting energy for rural households in AEZ 2, most rural households in AEZ 1 relied on dry cell batteries to light up their homes and environments. Multivariate Probit Model on Choices of Cooking Energy among Rural Households This section discusses the results from the multivariate probit model on choices of cooking energy among rural households. The likelihood ratio test (chi2(6) = 15.9343, p = 0.0141) rejects the independence of the error terms of the different cooking energy choice equations (Table 4). We thus adopt the alternative hypothesis of mutual interdependence among the choices of cooking energy. The result thus supports the use of a multivariate probit model to analyze the determinants of household fuel choice when there is evidence of fuel stacking. The results of the parameters estimated from the multivariate probit model on household cooking energy choice are presented in Table 5. We found that households with older household heads are more likely to use charcoal as their cooking energy choice. This is in support of results from Gebreegziabher et al. [33] that older household heads in Ethiopia are more likely to consume charcoal for cooking. The result of the education status shows that household heads who are more educated are less likely to use fuelwood as their choice of cooking energy in rural areas. An increase in a household's monthly expenditure on food will increase the likelihood of using LPG as the choice of cooking energy. This is in line with the results of Ogwumike et al.
[22] that an increase in per capita expenditure increases the probability that the household will use LPG for cooking. (Table note: likelihood ratio test of rho21 = rho31 = rho41 = rho32 = rho42 = rho43 = 0: chi2(6) = 15.9343, Prob > chi2 = 0.0141; *, **, and *** indicate statistical significance at the 10%, 5%, and 1% alpha levels, respectively.) Multivariate Probit Model on Choices of Lighting Energy among Rural Households This section discusses the results from the multivariate probit model on choices of lighting energy among rural households. Table 6 presents the pairwise correlation coefficients showing the relationship between various lighting energy choices made by households. A negative correlation between two lighting energy sources indicates substitutability, while a positive correlation means complementarity. (Table note: likelihood ratio test of rho21 = rho31 = rho41 = rho51 = rho32 = rho42 = rho52 = rho43 = rho53 = rho54 = 0: chi2(10) = 74.2261, Prob > chi2 < 0.0001; ** and *** indicate statistical significance at the 5% and 1% alpha levels, respectively.) Table 7 presents the results of parameter estimates from the multivariate probit model on household lighting energy choice. An increase in the household head's age will make the usage of grid electricity as the household's choice of lighting energy source more likely. The Drivers of Intensity of the Use of Both Cooking and Lighting Energy among Rural Households The empirical result of the drivers of the intensity of the use of cooking and lighting energy among rural households using the zero-truncated Poisson model is presented in Table 8. The coefficient of the total income of the household head is positive and statistically significant in influencing the intensity of usage of energy sources for lighting and cooking in the study area. Households switch from using traditional fuels like wood to transitional fuels like kerosene and finally to modern fuels like electricity from the grid as the head of the household's income rises [22]. Access to affordable and sustainable energy services is a prerequisite for achieving the internationally recognized goal of halving the percentage of the population living on less than USD 1 per day by 2015. Descriptive Statistics of the Respondents Contrary to evidence from several studies in Nigeria [22] that the majority of households use kerosene for lighting, kerosene was not the dominant lighting source in this study; the possible explanation for this could be either that the supply of kerosene to the rural areas is becoming more unreliable and expensive or that rural households are becoming more aware of the hazards related to the use of kerosene for lighting in their households. More so, an average household head completed 10 years of formal education, while the average years of formal education completed by the household head's spouse was 7 years. The number of years of formal education completed by the household head's spouse could be an important indicator of her empowerment in the house, and it can also mean that the spouse is more enlightened about the danger of using dirty energy sources. An average household for this study spent NGN 13,448 monthly on food. The poverty status shows that 68 percent of households live above the study poverty line. In terms of the main occupation, 43.33 percent of the household heads are farmers, while 29.44, 3.88, and 23.33 percent are artisans, traders, and civil servants, respectively.
Rural Households' Lighting and Cooking Energy Consumption (Choice) Pattern Figure 1 shows that rural households from the middle-income quintile to the higherincome quintile use modern energy (LPG) for cooking, and the frequency of use increases with income level.It also shows that there is evidence of fuel stacking across different income quintiles, but the level of fuel stacking increases with high-income level households.This implies that low-income rural households are more vulnerable to the unreliable supply of a single energy source (especially modern energy) than their high-income neighbors who have the incentive to use more than one source of energy for cooking.This is corroborated by Louw et al. [13] that households mostly use multiple energy sources to forestall their vulnerability to the failure of a single fuel that is mostly unreliable.There is evidence of fuel stacking for lighting energy across different income quintiles in the study area, and the evidence becomes stronger with an increase in the income level of rural households.This implies that the higher the income level of households in rural areas, the higher the likelihood of fuel stacking for lighting.A comparison of Figures 1 and 2 shows that there is more stacking in lighting energy sources than cooking energy sources among rural households and across different income levels.This implies that households may be more vulnerable in terms of their lighting energy choice than their cooking energy choice.The possible explanation for this is that the supply of fuelwood (one of the energy sources for cooking) is within the domain of rural households, while almost all the energy sources for lighting are supplied outside their domain. Distribution of Cooking and Lighting Energy Choices by Income Quintiles In comparison, the results (AEZ 1 and AEZ 2) show that high-income rural households in AEZ 2 use LPG for cooking more than their counterparts in AEZ 1.While there is a reduction in the usage of fuelwood among households in AEZ 1 as their income increases, the usage of fuelwood is roughly the same among households in AEZ 2, except for households in quintile 5. Multivariate Probit Model on Choices of Cooking Energy among Rural Households A negative correlation between two cooking energy sources indicates substitutability, while a positive correlation means complementarity.Interestingly, the result shows a negative correlation between LPG (a clean cooking energy source) and dirty energy sources (fuelwood and charcoal).This implies that improvement in poverty status and reliable access to LPG in rural households will probably shift to the use of clean energy sources for cooking.Also, there is a positive correlation between LPG and kerosene, indicating that households that use this clean cooking energy source complement it with kerosene.In general, Table 4 supports the idea that households typically rely on multiple energy sources.For instance, a household might rely on cooking with both kerosene and LPG.Because energy sources can coexist in a single household, we can estimate household preferences for various cooking energy sources by using a multivariate probit model. Households with large member sizes are more likely to use fuelwood but less likely to use charcoal, kerosene, and LPG as their choice of cooking energy in rural areas.Findings from the studies of Pandey and Chaubal [34] and Özcan et al. 
[35] support the finding that larger households prefer dirty fuels over clean fuels in most developing countries. Households that have reliable access to LPG are less likely to use fuelwood but more likely to use LPG as their choice of cooking energy. This implies that rural households that find LPG accessible, unlike those that find it non-accessible, are more likely to use LPG as their choice of cooking energy. This is in line with Mensah and Adu [36], who posited that ensuring households have access to LPG for cooking can drive the move towards cleaner fuels.

This result shows that educated household heads will have a better understanding and awareness of the risks associated with cooking with fuelwood. This supports the findings of Alem et al. [37] and Ogwumike et al. [22] that households with more educated household heads are less likely to use fuelwood for cooking. More so, an increase in the years of education attained by the household head's spouse will reduce the likelihood of using fuelwood as the choice of cooking energy, but it will also make the use of charcoal, kerosene, and LPG more likely. This is because a household head's spouse with a higher education level has better empowerment within the house, as she is more likely to be engaged in more productive activities that will increase the opportunity cost of using fuelwood for cooking. Baiyegunhi and Hassan [38] observe that a higher education level induces households to shift away from fuelwood towards the use of kerosene and LPG.

An improvement in the poverty status of rural households will not only reduce the likelihood of using fuelwood as their choice of cooking energy, but it will also increase the likelihood of using kerosene and LPG for cooking. This is in line with the findings of Mensah and Adu [36] that moving from extremely poor to a non-poor welfare status reduces the probability of a household using crop residue and firewood while increasing the probability of switching to relatively cleaner fuels like charcoal, kerosene, and LPG. Household heads who are artisans are less likely to use fuelwood, but they are more likely to use charcoal and kerosene as their choice of cooking energy compared to household heads who are farmers. Also, household heads who are traders are less likely to use both fuelwood and LPG, but they are more likely to use charcoal and kerosene as their choice of cooking energy when compared to household heads who are farmers. More so, when compared to household heads who are farmers, civil servant household heads are less likely to use fuelwood, but they are more likely to use charcoal and kerosene as their choice of cooking energy in the study area.

Multivariate Probit Model on Choices of Lighting Energy among Rural Households

The likelihood ratio test (chi2(10) = 74.2261, p < 0.0001) of the independence of the error terms of the different lighting energy choice equations is strongly rejected (Table 6). We thus adopt the alternative hypothesis of mutual interdependence among the choices of lighting energy. The result thus supports the use of a multivariate probit model to analyze the determinants of household fuel choice when there is evidence of fuel stacking.
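For readers less familiar with the estimator, the multivariate probit underlying Tables 5 and 7 can be written as a system of latent-variable equations with jointly normal, freely correlated errors. The generic notation below (household covariate vector x_i, J fuel alternatives) is illustrative and is not the exact specification estimated in the paper:

\[ y_{ij}^{*} = \mathbf{x}_i' \boldsymbol{\beta}_j + \varepsilon_{ij}, \qquad y_{ij} = \mathbf{1}\{ y_{ij}^{*} > 0 \}, \qquad j = 1, \dots, J, \]
\[ (\varepsilon_{i1}, \dots, \varepsilon_{iJ})' \sim \mathcal{N}(\mathbf{0}, \Sigma), \qquad \Sigma = [\rho_{jk}], \quad \rho_{jj} = 1 . \]

The off-diagonal rho_jk are the pairwise correlation coefficients reported in Tables 4 and 6, and the likelihood ratio test of rho21 = rho31 = ... = 0 quoted above is exactly the test that all of these correlations are jointly zero.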
Interestingly, the result shows a negative and significant correlation between dry cell battery and kerosene, petrol and kerosene, and rechargeable battery and kerosene. This shows that there is a high likelihood that rural households will shift away from kerosene as their lighting energy choice if provided with cleaner alternatives. Also, there is a positive and significant correlation between rechargeable batteries and petrol, indicating that households that use petrol as their lighting energy source complement it with a rechargeable battery. There is also a positive correlation between rechargeable batteries and grid electricity, but it is not statistically significant. This could be a result of a low level of access to grid electricity in rural areas. Table 6 generally confirms that households usually depend on more than a single source of energy for lighting.

Households with large member sizes are more likely to use both petrol and grid electricity as their choice of lighting energy. This is also supported by Giri and Goswami [39]. Households that have reliable access to grid electricity are less likely to use kerosene and dry cell batteries but more likely to use grid electricity and rechargeable batteries as their choice of lighting energy. This implies that providing rural households with a reliable supply or access to electricity will not only enhance the usage of grid electricity and rechargeable batteries but will also significantly discourage the use of both kerosene and dry cell batteries for lighting. This finding is also supported by Lay et al. [40]. Households that own electricity-generating sets are more likely to use petrol and rechargeable batteries as their choice of lighting energy, but they are less likely to use grid electricity. The reason for this could be that the majority of households that own electricity-generating sets do not have reliable access to electricity, as the electrification rate in rural Nigeria is only 35 percent [10].
Households that have reliable access to LPG are less likely to use rechargeable batteries but more likely to use petrol as their choice of lighting energy. This could be because households that reported having reliable access are high-income households and, as such, they can afford to use petrol for lighting in the absence of a reliable supply of electricity. Household heads who are more educated are less likely to use grid electricity but more likely to use petrol as their choice of lighting energy in rural areas. An increase in the household's monthly expenditure on food will increase the likelihood of using kerosene as the choice of lighting energy. An improvement in the poverty status of rural households will increase the likelihood of using grid electricity as their choice of lighting energy source. This implies that alleviating rural poverty will significantly increase the usage of clean modern energy sources for lighting while discouraging the use of dirty energy sources. Household heads who are artisans are less likely to use both kerosene and rechargeable batteries as their choice of lighting energy compared to household heads who are farmers. Also, household heads who are traders are less likely to use grid electricity, but they are more likely to use petrol as their choice of lighting energy when compared to household heads who are farmers. More so, when compared to household heads who are farmers, civil servant household heads are less likely to use kerosene, but they are more likely to use dry cell batteries as their choice of lighting energy in the study area. Furthermore, households that have satellite television are more likely to use grid electricity and rechargeable batteries as their choices of lighting energy sources, but they are less likely to use dry cell batteries.

The Drivers of Intensity of the Use of Both Cooking and Lighting Energy among Rural Households

The results imply that as the total income of the respondents increases, the propensity to increase the number of energy sources also increases. The results could mean that households are switching from traditional solid cooking fuels to more modern and efficient clean energy sources like LPG, electricity, and solar energy as people's incomes rise and other socioeconomic characteristics change. This outcome aligns with the energy-ladder theory. According to the hypothesis, a household's energy sources are significantly influenced by its income level [41]. This theory supports the idea behind the economic theory of the consumer, which states that when income increases, consumers not only demand more of a good but also shift their consumption habits to prioritize higher-quality products.

Modern clean fuels are generally thought to be more efficient, comfortable, and user-friendly than traditional or transitional fuels. Similarly, studies by Ravindra et al. [42] and Kapsalyamova et al. [43] suggest that household heads' adoption of energy sources for cooking and lighting in the study area is significantly influenced by income and affordability, even in light of the recent increase in the price of clean cooking fuel.
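As a hedged illustration of how the zero-truncated Poisson behind Table 8 can be estimated, the sketch below maximizes the truncated log-likelihood directly; the covariates (an intercept, log income, and an LPG-access dummy) and all numerical values are hypothetical placeholders, not the study's data or specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Zero-truncated Poisson: P(y | y > 0) = exp(-lam) * lam**y / (y! * (1 - exp(-lam))),
# with lam_i = exp(x_i' beta) and y_i = number of energy sources used (y_i >= 1).

def ztp_negloglik(beta, X, y):
    eta = X @ beta
    lam = np.exp(eta)
    ll = -lam + y * eta - gammaln(y + 1) - np.log1p(-np.exp(-lam))
    return -ll.sum()

def fit_ztp(X, y):
    beta0 = np.zeros(X.shape[1])
    return minimize(ztp_negloglik, beta0, args=(X, y), method="BFGS").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical covariates: intercept, log of total income, LPG-access dummy.
    X = np.column_stack([np.ones(n), rng.normal(11.0, 1.0, n), rng.integers(0, 2, n)])
    y = rng.poisson(np.exp(X @ np.array([-10.0, 1.0, 0.3])))
    keep = y > 0                      # truncation at zero, as in the survey outcome
    print(fit_ztp(X[keep], y[keep]))  # estimated coefficients
```

Positive estimated coefficients on income and LPG access in such a model would correspond to the sign pattern discussed in the text.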
According to this study, the respondents' intensity of use of energy sources is influenced by the positive and statistically significant coefficient of access to LPG. The implication of this result is that giving rural households steady access to LPG will increase their use of energy sources for cooking and lighting. Families with consistent access to LPG are more likely to choose LPG as their primary cooking fuel. The results of Ahmad et al. [44], who examined the factors influencing the use of renewable energy sources in Pakistan, also support this conclusion by finding a positive correlation between the amount of energy sources used and accessibility to them.

Regarding LPG, it is estimated that monthly income received from farming, raising livestock, running a business, and other sources is positive and statistically significant in influencing the number of clean energy sources used by households. This is presumably due to the sense of security of regular income that these revenue streams offer. Additionally, they generate enough cash to cover LPG costs. This finding resonates with the result of Sharma and Dash [45] in their study on household energy use patterns in rural India.

Conclusions

This study uses data from rural Nigeria to analyze the determinants of rural households' choices of cooking and lighting energy sources and their consumption patterns. Evidence from the consumption pattern of rural households reveals that they use multiple energy sources for a particular energy service across different income levels, a phenomenon known as fuel stacking. This phenomenon is more pronounced in their lighting energy choices than their cooking energy choices. The majority of rural households still use fuelwood for cooking, while few households use modern energy like LPG, with only nine percent reporting reliable access to LPG. The usage of LPG starts from the third income quintile and increases with the income level of households up to the fifth quintile. Contrary to most studies in rural Nigeria, the majority of rural households now use dry cell batteries for lighting, against only 39 percent that use kerosene, while a few also use petrol for lighting, and this usage increases with the income level of households. This may be connected to the limited access to grid electricity in rural Nigeria, as only 23 percent of sampled households reported having reliable access to electricity supply, while 35 percent use electricity for lighting and 28 percent use rechargeable batteries. Also, 39 percent of households own a personal electricity-generating set, while only 23 percent reported cooking in an indoor kitchen.
More so, results from the application of the multivariate probit model to simultaneously model rural households' energy choices for cooking and lighting show that household size, access to LPG, education, poverty status, and occupation of the household head play an important role in the choice to use clean energy, such as LPG, for cooking, while the age of the household head, household size, access to grid electricity, possession of electricity-generating sets, education, poverty status of the household, occupation of the household head, and possession of satellite television play a significant role in the choice to use a clean energy source, such as electricity, for lighting in rural areas. Findings from our study show that an increase in household size decreases the probability of using clean energy such as LPG, but it increases the probability of using fuelwood for cooking. Access to LPG increases the probability of using LPG but reduces the probability of using fuelwood, and households that use indoor kitchens are more likely to use LPG and kerosene but less likely to use fuelwood for cooking. An increase in household heads' education and the education of their spouses reduces the probability of using fuelwood for cooking, while an increase in the education of household heads' spouses increases the probability of using charcoal, kerosene, and LPG for cooking.

Consequently, the use of quality, affordable, and sustainable energy sources is necessary to lower global environmental pollution. Women are often demonized for providing and using dirty energy in their homes. An improvement in the poverty status of households increases the probability of using LPG and kerosene but reduces the probability of using fuelwood for cooking. Also, households that have access to grid electricity are more likely to use grid electricity and rechargeable batteries but less likely to use kerosene and dry cell batteries for lighting. Households that possess electricity-generating sets are more likely to use petrol and rechargeable batteries but less likely to use grid electricity for lighting. The more educated the household heads, the higher the likelihood of using petrol for lighting, while non-poor households are more likely to use grid electricity for lighting. Households that possess satellite television (as compared to those that do not) are more likely to use grid electricity and rechargeable batteries but less likely to use dry cell batteries for lighting in rural areas.
In order to encourage environmental sustainability and innovation in the energy sector, legislators are putting sustainable energy research programs into action. Thus, this study recommends new policy changes to enhance rural communities' access to inexpensive and sustainable energy. In order to decentralize energy production, authorities ought to encourage off-grid micro-grids. To stimulate the transition of rural households to the use of clean energy for cooking and lighting, the findings above provide some policy implications on household energy choice in rural areas of Ondo State, Nigeria, and by extension in most developing countries, considering the increasing level of multiple fuel usage. Firstly, a reliable and adequate supply of these clean energy sources should be provided to rural areas at affordable prices so as to encourage their usage. Secondly, effective policies and programs that can improve the welfare and poverty status of rural households should be implemented in order to empower them to use clean energy. Thirdly, rural households should be educated on the development of sustainable livelihoods by utilizing contemporary energy services to help low-income households boost their income and productivity. Fourthly, rural households should be provided with the chance to engage in off-farm jobs so as to increase the opportunity costs of using dirty energy sources for both cooking and lighting.

Table 1. Descriptive statistics of variables for the models. (Excerpt: Own_SATtv — dummy = 1 if household owns a satellite television in their house, 0 otherwise: 0.3500, 0.4783; Owner — dummy = 1 if the dwelling space is owned by the household, 0 otherwise: 0.4778, 0.5008.)

Table 2. Distribution of cooking and lighting energy choices by income quintiles.

Table 3. Distribution of lighting energy choices by income quintiles.

Table 4 presents the pairwise correlation coefficients showing the relationship between various cooking energy source choices made by households.

Table 5. Parameter estimates from the multivariate probit for household cooking energy choices.

Table 6. Correlation coefficients of household lighting energy choices.

Table 7. Parameter estimates from the multivariate probit for household lighting energy choices.

Table 8. Drivers of intensity of the use of both cooking and lighting energy among rural households using the zero-truncated Poisson model.
9,209.2
2024-05-27T00:00:00.000
[ "Environmental Science", "Economics" ]
A Thermodynamic Perspective on Potential G-Quadruplex Structures as Silencer Elements in the MYC Promoter

Abstract

Multiple G-tracts within the promoter region of the c-myc oncogene may fold into various G-quadruplexes with the recruitment of different tracts and guanosine residues for the G-core assembly. Thermodynamic profiles for the folding of wild-type and representative truncated as well as mutated sequences were extracted by comprehensive DSC experiments. The unique G-quadruplex involving consecutive G-tracts II–V with formation of two one-nucleotide and one central two-nucleotide propeller loop, previously proposed to be the biologically most relevant species, was found to be the most stable fold in terms of its Gibbs free energy of formation at ambient temperatures. Its stability derives from its short propeller loops but also from the favorable type of loop residues. Whereas quadruplex folds with long propeller loops are significantly disfavored, a snap-back loop structure formed by incorporating a 3'-terminal guanosine into the empty position of a tetrad seems highly competitive based on its thermodynamic stability. However, its destabilization by extending the 3'-terminus questions the significance of such a species under in vivo conditions.

Introduction

Guanine (G)-rich sequences have the propensity to fold into non-canonical tetra-stranded structures called G-quadruplexes (G4s). These are formed by stacking of planar hydrogen-bonded guanine quartets in the presence of cations like Na+ or K+ that stabilize the stacked quartet arrangement by their coordination within the central cavity of the G-core. Due to an overrepresentation of potential quadruplex-forming sequences in promoter regions of various proto-oncogenes like c-myc, c-kit, or Bcl-2, G4 structures within a cellular environment are considered attractive targets for novel therapeutics. Thus, the c-myc oncogene codes for a protein that may act as both transcriptional activator and repressor, being involved in the regulation of various genes linked to proliferation and growth arrest. [1] Because overexpression of c-myc was found to be associated with a wide range of human cancers, [2,3] its complex transcriptional regulation employing multiple promoters has been the subject of intense research over the past three decades. As a result, the so-called nuclease hypersensitivity element III1 (NHE III1), a 27 base pair sequence located upstream of the P1 promoter, was identified as a major control element of c-myc expression. [4,5] Notably, double-stranded NHE III1 was found to be in equilibrium with a non-canonical quadruplex formed after duplex unwinding by intrastrand folding of its purine-rich noncoding strand. [6] Two different G-quadruplex structures for the single-stranded NHE III1 were identified and characterized based on DMS footprinting experiments. [7] Following initial assignments to an antiparallel basket and a chair topology, compelling evidence of their folding into three-layered parallel quadruplexes with three propeller loops came from subsequent NMR studies on truncated and mutated sequences. [8] With its six guanine tracts composed of two to four consecutive G residues, the wild-type 27mer MYC single strand may fold into multiple G-quadruplex structures with various loop architectures depending on the set of G-tracts recruited to form the G-quadruplex core (Table 1).
Site-directed mutagenesis together with polymerase stop and luciferase reporter assays in the absence and presence of the G-quadruplex binding porphyrin TMPyP4 pointed to G-tracts II–V as being involved in a biologically relevant G4 repressor element whose formation decreases c-myc expression at the RNA as well as the protein level. [7,9] Because the first G-tract was suggested to not be engaged in the transcriptional gene regulation, subsequent structural studies mostly focused on truncated sequences only involving the four central G-tracts of MYC as models of the physiologically active G-quadruplex (Figure 1A). [10] As a consequence of two stretches comprising four guanines, their folding into a parallel three-layered G-quadruplex allows for the formation of four different loop isomers. [11,12] However, a quadruplex with two one-nucleotide (1-nt) and a central 2-nt propeller loop was suggested to predominate in buffer solution. [8] Even adding more diversity, expansion of the truncated sequence to also include the 3'-terminal G2-tract VI of the wild-type MYC resulted in a different major G-quadruplex species that features a snap-back loop and allows the 3'-terminal guanine base to fill an empty guanine position within the 3'-tetrad (Figure 1B). [13] Due to the importance of the G-quadruplex structures formed by NHE III1 for understanding c-myc expression and for a rational design of drugs targeting the MYC sequence, various efforts have been directed towards identifying and characterizing the physiologically most relevant quadruplex species. [11,14] However, major quadruplexes formed under in vivo conditions will not only depend on their intrinsic thermodynamic stability but also on their folding kinetics and on the differential binding of multiple transcription factors like NM23-H2 or nucleolin. [15,16] Such a complex situation requires an individual assessment of binding processes, G4 folding kinetics as well as G4 thermodynamic stability. Previous studies have evaluated G4 thermodynamics within a controlled in vitro environment by the determination of melting temperatures and also by van 't Hoff analyses based on a two-state transition, yet rigorous and systematic investigations on MYC-derived sequences are largely missing. We here report on a detailed thermodynamic profiling of an extensive set of putative MYC quadruplex folds by microcalorimetric methods to allow for a model-free extraction of thermodynamic parameters. The present studies not only focus on specific loop isomers of a single fold but also on G-quadruplex structures with the recruitment of different sets of G-stretches. Although insufficient to fully describe G4 populations within the cellular environment, such a comprehensive thermodynamic profiling will add to our understanding of major G4 structures formed in vivo. On the other hand, knowing the folding thermodynamics of individual MYC sequences in detail gives valuable information on the impact of loop sequence, loop positioning, and loop length on the G4 stability of parallel quadruplexes in general.

NHE III1 sequence variants

For a detailed evaluation of quadruplex stabilities, various G-rich sequences derived from the purine-rich 27mer NHE III1 single strand termed MYC were employed (Table 1). Excluding its GG 3'-terminus, the MYC sequence features five G-tracts with potential folding into five different G4 species depending on the G-tracts forming the G-quadruplex core.
In addition, one of the guanines at either the 5'- or 3'-end of a G4-tract is excluded from a G-column in a three-layered quadruplex. Given three G4-tracts in MYC, this gives rise to 2^2 or 2^3 loop isomers for the five quadruplexes depending on their folding with participation of two or all three MYC G4-tracts, respectively. Disregarding the terminal G2-tract VI and assuming exclusive formation of all-parallel G-quadruplexes, it can easily be seen that there are 28 possible species of three-layered G4s. Introducing mutations within G-tracts constitutes a convenient strategy to block formation of otherwise competing G4 structures. Thus, selective G→T/A substitutions may restrict folding into quadruplexes with only one defined set of G-columns or even provide for exclusive folding into a single loop isomer. However, to serve as good mimics of the wild-type fold, trapped mutants are required to not noticeably affect the structure and thermodynamic stability when compared to the corresponding wild-type sequence. In fact, due to their position within loop regions, base substitutions are suggested to only have a small impact on the G4 structure, but some blurring in stability should be considered when discussing sequences structurally trapped through mutations and truncations (see below). [17-19] The selection of truncated and/or substituted sequences is based on representatives that cover a wide range of structural diversity to provide for extensive information on structure-stability relationships. For the sake of simplicity and consistency, sequences are named according to the G-tracts not involved in tetrad formation (i.e., tracts I and VI in MYC-D1,6) and in keeping with the number of nucleotides within the propeller loops (i.e., two 1-nt and a central 2-nt propeller loop in MYC-D1,6 [13] and MYC-D3,6 [20], previously Figure 1B).

Probing the G-quadruplex topology by their CD signatures

For assessing quadruplex thermal stabilities and driving forces of G4 folding, differential scanning calorimetric measurements (DSC) were performed. [21] In order to lower high melting temperatures for obtaining well-defined post-transitional baselines in the accessible temperature range as required for the extraction of reliable transition enthalpies, samples were dissolved in a low-salt buffer with 10 mM potassium ions. Therefore, G4 folding was initially tested by a comparison of CD spectra acquired in both 10 mM and 120 mM K+ buffers (Figure 2). CD spectra of all sequences in potassium buffer feature negative and positive amplitudes at about 243 and 263 nm, typical of parallel quadruplexes with exclusive homopolar tetrad stacking. Whereas a broad low-intensity shoulder between 280 and 300 nm distinguishes the snap-back loop quadruplexes MYC- with the involvement of G-tract VI from the regular parallel species with uninterrupted G-tracts, a negative dip at 290 nm is noticeable in particular for MYC-D5,6, MYC-D1,6, and MYC_TT-D1,6. Importantly, except for minor variations in amplitude, CD signatures were conserved at the different salt concentrations with no apparent change in G4 topology.

Probing the structural polymorphism of MYC variants

1H NMR spectra of all MYC-derived sequences were acquired to evaluate their folding and to test for structural heterogeneities under the solution conditions employed (Figures 3 and S1).
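As a cross-check of the 28-species count derived at the start of this passage, the short enumeration below reproduces it under the stated assumptions (tract VI disregarded, all-parallel three-layered folds, and a factor of two per recruited G4-tract). The tract lengths are taken from the well-known 27mer sequence and are an assumption here, since Table 1 of the paper is not reproduced in this excerpt.

```python
from itertools import combinations

# Tracts I-V of the 27mer MYC single strand; a length-4 tract contributes a factor
# of two because either its 5'- or 3'-terminal G can be left out of the G-core.
tract_len = {"I": 4, "II": 3, "III": 4, "IV": 3, "V": 4}   # assumed lengths

total = 0
for core in combinations(tract_len, 4):           # four tracts form a three-layered core
    g4_tracts = sum(1 for t in core if tract_len[t] == 4)
    total += 2 ** g4_tracts                        # register-shift (loop) isomers

print(total)   # -> 28 possible three-layered parallel species
```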
Focusing on the Hoogsteen imino proton spectral region between 10.5 and 12.0 ppm, 12 resonances are generally expected for a single quadruplex with three G-tetrad layers, corresponding to 3 × 4 hydrogen-bonded guanine bases. Whereas the parent full-length MYC sequence shows extensive polymorphism with multiple G4 species as suggested by its crowded imino proton spectral region, the spectra of some mutated sequences are in line with a single fold without noticeable additional species. These include not only MYC-D1,6[1.2.1], [10] MYC-D1,6[2.1.2], MYC-D1[1.3.1], [13] and MYC_del-D1[1.1.1], [22] but also MYC-D3,6 with the formation of a single 1.6.1 loop isomer. [20] Other sequences exhibit coexistence of different folds; however, a major species populated by ≥80% predominates over a minor species in most cases. A straightforward comparison and interpretation of thermodynamic parameters for G4 formation is affected by the coexistence of multiple topologies with equilibria possibly changed by outer conditions. However, other serious problems for any thermodynamic analysis may also result from sequences that partially form stable higher-order G4 structures in vitro depending on sequence, ion concentration, but also on the particular annealing procedure, as has been reported recently. [14,23,24] Thus, the polymorphic MYC sequence was shown to have a strong tendency for intermolecular aggregation and, with multimeric structures only melting at very high temperatures, the formed fraction of higher-order species may remain unnoticed in G4 denaturation studies. Although thermal stabilities as measured by melting temperatures remain unaffected, enthalpies and entropies of unfolding may consequently be too low and compromise an acceptable accuracy. To assess the potential formation of multimeric structures we also performed non-denaturing polyacrylamide gel electrophoresis for all sequences following annealing in a 10 mM potassium phosphate buffer used for NMR and DSC studies. Most of the sequences exhibit either a single or strongly predominating monomer band under the low-salt conditions, differing in migration according to their number of residues (Figure S2). Yet, additional slower migrating bands for MYC-D5,6 and in particular for MYC-D1,6 and MYC_TT-D1,6, comprising about 30% of the total in-lane intensity, hint at putative dimer formation for these mutant sequences. However, with no significant intensity of broadened signals in the 1H NMR spectra with their flat baseline (Figure 3), larger high-melting G4 associates like G-wire species can mostly be excluded for the latter sequences but may gain in relevance with buffers of higher ionic strength.

Thermodynamic profiling of quadruplex formation by DSC

Representative examples of DSC melting profiles are shown in Figure 4. It should be noted that for some of the samples heating above 100 °C resulted in a gradual decrease of signal heights in subsequent heating cycles, apparently due to partial heat-induced damage of the DNA. Therefore, only those ther- [...] [22] as well as from a separate CD-based Tm determination for MYC-D1,6[1.2.1] (data not shown). After proper baseline corrections, calorimetric enthalpies ΔH°cal of G4 folding were obtained by integration of the melting transition. Likewise, ensuring equilibrium conditions during thermal denaturation, molar entropies ΔS° at the melting temperature were calculated using standard thermodynamic relationships.
These also enabled determination of the Gibbs free energy ΔG° at any reference temperature T from ΔH° and ΔS° when changes in molar heat capacities ΔCp° are close to zero. [21,25] In fact, although quadruplex folding is expected to be associated with a ΔCp° ≠ 0, changes are generally too small to warrant their reliable extraction from the thermograms in case of the G4 transitions. [19] Average values from three independent measurements on different samples are summarized in Table 2. Here, only model-free calorimetric transition enthalpies ΔH°cal and entropies ΔS°cal are included because many of the sequences employed may fold into multiple quadruplex species. Consequently, observed ratios between calorimetric and van 't Hoff enthalpies determined based on a two-state approximation will hardly yield direct information on (un)folding intermediates and/or cooperative units.

Dependence on concentration and type of cations

Due to the high thermal stability of some quadruplexes and the need for a defined post-transitional baseline for proper baseline corrections, G4 melting was shifted to lower temperatures by using a low-salt buffer with 10 mM potassium phosphate, pH 7. To probe differential effects of cation concentration on thermodynamic profiles of the parallel quadruplexes, two representative G-rich sequences, namely MYC-D1,6[1.2.1] and MYC-D3,6, each forming a well-defined quadruplex species, were also characterized in a high-salt buffer with 120 mM potassium phosphate, pH 7 (Table 2, Figure S3). The significant rise in their thermal stability with a ΔTm of about 20 °C can be attributed to a higher ionic strength of the buffer solution but also to the specific uptake of K+ ions upon G4 folding, being coordinated within the central G4 cavity as a prerequisite for quadruplex stabilization. From a thermodynamic perspective, an elevated potassium ion concentration results in a more exothermic folding of the parallel quadruplex, yet is accompanied by a higher entropic penalty. This points to stronger intramolecular interactions and a higher rigidity, similar to observations reported for a DNA four-way junction lacking more specific cation binding. [26] Because the ΔH° contribution to the Gibbs free energy predominates even at higher temperatures, the overall thermodynamic stability of the quadruplexes will always increase in the accessible temperature range at higher K+ concentrations. On the other hand, relative stabilities among quadruplexes are anticipated to not be affected by changing K+ concentrations. It should be noted, however, that in addition to slower kinetics of (un)folding, absolute differences in enthalpies and Gibbs free energies between G4 species will generally show a gradual decrease with a lowered K+ concentration. [17] Thus, ΔΔG°293 between MYC-D1,6[1.2.1] and MYC-D3,6 amounts to −6.7 kcal mol−1 in 120 mM potassium phosphate buffer but only to −4.8 kcal mol−1 in 10 mM phosphate. A better stabilization of guanine quadruplexes with K+ compared to Na+ ions was shown to almost equally depend on desolvation and the size of the alkali metal cation. [27] To also test the impact of substituting potassium for sodium ions on the [...] 6. Apparently, replacing potassium by sodium ions destabilizes the parallel quadruplexes by lowering exothermicities of folding with only partial compensation through a reduced entropic penalty, in close analogy to decreasing K+ concentrations.
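A minimal sketch of the arithmetic behind the quoted ΔG° and ΔΔG° values, assuming ΔCp° ≈ 0 and ΔG° = 0 at the melting temperature of the intramolecular transition; the numbers below are illustrative placeholders, not those of Table 2.

```python
# With dCp ~ 0, dS = dH / Tm at the melting point and dG(T) = dH - T * dS.

def gibbs(dH_kcal, Tm_C, T_K):
    Tm_K = Tm_C + 273.15
    dS = dH_kcal / Tm_K                  # kcal mol-1 K-1
    return dH_kcal - T_K * dS            # kcal mol-1

# Illustrative folding enthalpies and melting temperatures for two hypothetical folds:
for name, dH, Tm in [("fold A", -50.0, 75.0), ("fold B", -40.0, 65.0)]:
    print(name,
          round(gibbs(dH, Tm, 293.15), 1), "kcal/mol at 293 K,",
          round(gibbs(dH, Tm, 310.15), 1), "kcal/mol at 310 K")
```

Differences between two folds evaluated in this way give ΔΔG° values at 293 K and 310 K of the kind discussed in the text.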
On the other hand, destabilization of parallel but selective stabilization of antiparallel conformations has been reported by Na+. [28] Whereas both MYC-D1,6 [...] 6 to an antiparallel or (3 + 1) hybrid-type G4 as apparent from the typical CD signature (Figure S4). Clearly, this prohibits a direct comparison of cation impact on the thermodynamic profile of the latter and sets limits to changes in solution conditions.

G4 variants with different G-tracts for G-core assembly

To characterize the thermodynamics of G4 folding for the MYC sequence when recruiting a different set of G-tracts, individual tracts were either truncated if located at the termini or blocked by G→T/A substitutions, forcing them to loop out between adjacent G-columns. In the present study, G-tracts I, II, III, or V were individually blocked in addition to truncating the MYC 3'-terminus with its G2-tract VI to give MYC-D1,6, MYC-D2,6, MYC-D3,6, and MYC-D5,6. Based on previous observations that addition of the two-nucleotide GG 3'-terminus will result in refolding of a MYC-D1 parallel quadruplex into a quadruplex with an interrupted G-tract and a snap-back loop with positioning of the 3'-terminal G into the empty position of the outer G-core (Figure 1B), folding of a sequence MYC-D1(1.3.1) was also characterized in more detail. Because all of these MYC-derived sequences carry G4 tracts, they are expected to fold into different loop isomers through register shifts. [19] It should be noted, however, that a single G→A/T substitution within a central G-tract of MYC-D2,6 and MYC-D1(1.3.1) was introduced for better comparison, reducing the number of available folding pathways. Before discussing thermodynamic stabilities, it is instructive to look at the imino proton spectral region of the various sequences mimicking MYC G-tract isomers. Notably, imino resonances for most folds suggest a rather homogenous G4 population with only small amounts of coexisting species (Figure 3) [...] (Table 2). Together with the favored fold of MYC-D3,6 and MYC-D1(1.3.1), it can thus be assumed that G4 tracts at 5'- and 3'-termini strongly favor loop isomers that shift the excessive G to the overhang rather than to an internal propeller loop. Accordingly, the observed predominant G4 for MYC-D2,6 likely constitutes the 6-1-1 loop isomer. On the other hand, MYC-D5,6 exhibits a more significant heterogeneity with an imino proton spectral region apparently composed of at least two major quadruplex folds. Gibbs free energies of the five quadruplexes differing in the identity of their G-columns vary considerably by 4 kcal mol−1 at 310 K in a 10 mM K+ buffer, and differences tend to even increase at 293 K. MYC-D1,6 and MYC-D1[1.3.1] fold into the most stable G4 structures, not only being close in their Gibbs free energies but also in their enthalpic and entropic contributions. On the other hand, MYC-D2,6 and especially MYC-D3,6 with their single long loop reveal a rather low stability (Figure 5). Clearly, the latter two folds suffer from a long propeller loop known to generally compromise quadruplex stability. Whereas longer lateral loops have previously been found to confer more exothermic heat to G4 folding through additional loop interactions and the uptake of counterions and water, [29] this does not apply to the present propeller loops. Thus, both MYC-D2,6 and MYC-D3,6 exhibit a less negative molar transition enthalpy ΔH°cal but also a lowered entropic penalty upon folding when compared to the most stable MYC-D1,6 and MYC-D1[1.3.1].
Noticeably, quadruplexes derived from MYC-D5,6 share a rather similar sequence with MYC-D1,6 but have a significantly more positive Gibbs free energy, by about 3 kcal mol−1, when compared to the latter (see below). Also, both favorable enthalpy and unfavorable entropy of folding are conspicuously smaller for MYC-D5,6, which was suggested to not be involved in the biologically relevant MYC quadruplex.

Snap-back loop quadruplexes

Based on its thermodynamic stability and also suggested by previous NMR studies, [13] the snap-back loop quadruplex encompassing the 3'-terminal GG-tract may effectively compete with a regular parallel quadruplex in case of the full-length MYC sequence. Consequently, such snap-back loop quadruplexes have been targeted by various ligands with sometimes promising binding affinities. [13,22,30] To get more insight into the impact of additional modifications on the stability of the snap-back loop structure, we shortened its two-tetrad bridging central propeller loop by deletion of two nucleotides to give [...]. Apparently, loop residues in this case may participate in additional hydrogen bond or stacking interactions, resulting in a more favorable enthalpic but unfavorable entropic contribution. [29,31] Given negligible heat capacity effects (see above), the latter will generally result in a decreased thermal stability due to the strong temperature dependence of the TΔS° term in the Gibbs-Helmholtz equation. Because the terminal 3'-G fills an empty position to participate in the outer G-tetrad of the snap-back loop structure, its compatibility with a 3'-extension seems critical for its formation within the nuclease hypersensitivity element III1. NMR structural studies have already shown that such an extended sequence preserves the same G4 topology with the 3'-added nucleotide pointing away from the quadruplex G-core. [13] In line with the latter report, MYC-D1[1.3.1]T shares a similar imino proton spectral region with MYC-D1[1.3.1] and MYC_del-D1[1.1.1], albeit with small amounts of a minor species, indicating the formation of a major snap-back loop quadruplex even within an extended sequence context (Figure 3). However, stabilities seem to be compromised, with a Gibbs free energy of G4 folding declining to −2.9 kcal mol−1 at 310 K. Such destabilizing effects associated with an enhanced formation of other competitive species are even more pronounced for MYC-D1[1.3.1]TT with a 2-nucleotide 3'-TT addition (Figure 3 and Table 2). Therefore, when solely based on thermodynamics, these results raise doubts with respect to the relevance of such snap-back loop structures under in vivo conditions.

G-register isomers

The two loop isomers MYC-D1,6[1.2.1] and MYC-D1,6[2.1.2] recruit the same four G-tracts of MYC but incorporate different Gs of the two G4 tracts into their G-quadruplex core. If blocking T substitutions are removed, a dynamic exchange between such G-register isomers can occur through a sliding mechanism without complete refolding, entropically stabilizing the folded state. [15,19] Only looking at the individual isomers, MYC-D1,6[1.2.1] has a melting temperature higher by 13 °C as compared to MYC-D1,6[2.1.2]. This is expected based on the longer loop lengths of the latter with their known destabilizing effects in parallel G4s. Also, a higher stability of MYC-D1,6[1.2.1] is associated with a more favorable enthalpy of folding by about −11 kcal mol−1 that is only partially counteracted by a larger entropic penalty to give a ΔΔG° of ≈ −2.5 kcal mol−1 at 310 K.
It should be noted that differences in ΔH° of −7 and −16 kcal mol−1 between MYC-D1,6[1.2.1] and MYC-D1,6[2.1.2] in buffer solutions containing 7.5 and 20 mM K+ have independently been determined previously based on a two-state transition. [17,19] This is in good agreement with the present data considering larger differences upon increasing K+ concentrations (see above and ref. [18]).

Impact of overhang and loop sequences

As a result of using MYC sequences adopting well-defined NMR-derived quadruplex structures with appropriate G→T/A substitutions combined with efforts to add flanking residues for preventing G4 aggregation, [32] mutual comparison of thermodynamic parameters among native and mutated sequences [...]. Clearly, a negligible or only small impact of G mutations in trapped quadruplex structures constitutes a key assumption when assessing the thermodynamics of different G4 folds. Whereas moderate changes in quadruplex melting temperatures have been reported for T/A substitutions in loops of MYC sequences, [18] only minor thermodynamic perturbations were suggested for thymidine and 2'-deoxyinosine substitutions based on a global thermodynamic analysis. [19] Indeed, due to MYC-D1,6 predominantly folding into the MYC-D1,6[1.2.1] topology with a central 2-nt loop as shown by NMR (see above), the mostly identical thermodynamic parameters determined for folding of these two sequences confirm that the dual G→T mutations in MYC-D1,6[1.2.1] have only a negligible impact on the thermodynamics of G4 formation (Table 2). Based on these findings, significant differences in the thermodynamic profiles of MYC-D1,6 and MYC-D5,6 are unexpected. Both sequences feature four consecutive G-tracts, G3-G4-G3-G4 for MYC-D1,6 and G4-G3-G4-G3 for MYC-D5,6, separated by single T or A nucleotides. Consequently, formation of a most favored MYC-D5,6[1.2.1] fold of similar stability to MYC-D1,6[1.2.1] may be anticipated. Apparently, the different overhang and/or loop residues must exert more significant effects on G4 stability for these two G-tract variants. To search for the origin of the different thermodynamic stabilities, we substituted MYC-D1,6 in its overhang to form MYC_TT-D1,6 with TT flanking residues preceding and following the 5' and 3' G-tract and to become a pseudo-inverted MYC-D5,6 sequence with A↔T exchange in the loops. Interestingly, these changes had no effect on the G4 melting temperature but lowered the exothermic heat of folding by nearly 9 kcal mol−1, counteracted by a smaller amount of entropy losses. Overall, the amount of Gibbs free energy decreased by 1 kcal mol−1 at 310 K. Such effects are easily rationalized by overhang sequences involved in specific interactions to act as stabilizing terminal caps of quadruplexes. [10,18] However, a large part of the observed ΔΔG°310 difference of 2.8 kcal mol−1 between MYC-D1,6 and MYC-D5,6 and of the accompanied decrease in melting temperature by 10 °C must be attributed to the different loop residues (T-GA-T versus A-TG-A when only assuming 1.2.1 loop isomers). [18] These results emphasize that, depending on the particular structural context, overhang sequences and loop nucleotides may have more significant effects on the thermodynamics of folding. Also, only comparing melting temperatures may be misleading in not revealing significant changes in enthalpic and entropic contributions that determine relative stabilities at ambient temperatures.
Conclusions

The MYC sequence with its multiple G-tracts may fold into various quadruplexes with potential relevance under physiological conditions. Whereas melting temperatures are convenient parameters for probing their thermodynamic stability, Tm values are not directly linked to free enthalpies of G4 formation at a given temperature, requiring information on enthalpic and entropic contributions to the (un)folding event. Also, a thermodynamic characterization based on a van 't Hoff analysis of melting profiles is strictly based on a two-state transition. This limits an evaluation of folding processes proceeding through intermediates or to unresolved transitions of a sequence with different coexisting quadruplex folds. To overcome these restrictions, a rigorous thermodynamic analysis using microcalorimetry is required to allow for a model-free extraction of parameters. With more than four consecutive G-tracts in the MYC sequence, those quadruplexes with a looped-out internal tract are highly unfavorable unless specifically stabilized by potential interactions with ligands or proteins. On the other hand, two quadruplexes with a very similar sequence context, MYC-D1,6 and MYC-D5,6, considerably differ in their thermodynamic stability. MYC-D1,6, proposed to fold into the physiologically most relevant G4 and also being the most stable G4 species, draws its energetic benefits to a large extent from its favorable loop composition when compared to MYC-D5,6. Thus, recruiting excessive G-tracts in case of oxidative lesions may circumvent deleterious effects on G-quadruplex formation but is expected to yield a less stable alternative fold. Also, competition by a snap-back loop quadruplex involving the 3'-terminal GG tract VI must be met with caution. Although as stable as MYC-D1,6 when truncated, an extended 3'-end will allow, yet destabilize, this particular topology. The presented comprehensive thermodynamic profiling forms a solid basis for rationalizing stabilities of coexisting G4 structures that, together with additional modulating interactions, are expected to determine relevant populations within the cellular environment. Studying quadruplex structures and their relative populations in vivo remains a challenge but may be possible by the use of NMR in conjunction with stable isotope labeling. Also, detailed insights into the thermodynamics of different quadruplex folds will further contribute to our understanding of G4 stabilities in general and aid in the prediction of major folding topologies in sequences with multiple G-runs.

Experimental Section

Materials and sample preparation

DNA oligonucleotides were purchased from TIB MOLBIOL (Berlin, Germany) and further purified by ethanol precipitation. Concentrations were determined spectrophotometrically by measuring absorbances in an H2O solution at 80 °C with molar extinction coefficients calculated by a nearest neighbor model. [33,34] Prior to measurements, oligonucleotide solutions with concentrations as used for the subsequent experiments were annealed by heating to 85 °C for 5 min followed by slow cooling to room temperature and storage in a refrigerator overnight. For the experiments both a low-salt buffer (10 mM potassium phosphate, pH 7) and a high-salt buffer (20 mM potassium phosphate, 100 mM KCl, pH 7) were used.
Circular dichroism (CD)

Spectra were acquired with a Jasco J-810 spectropolarimeter equipped with a thermoelectrically controlled cell holder. Measurements were performed with 1-cm quartz cuvettes at 293 K on 5 μM quadruplex in either a low-salt or high-salt buffer. Spectra were recorded with a bandwidth of 1 nm, a response time of 1-2 s, and a scanning speed of 50 nm min−1 and finally blank-corrected.

NMR spectroscopy

NMR spectra were acquired on a Bruker Avance 600 MHz spectrometer equipped with an inverse 1H/13C/15N/19F quadruple resonance cryoprobehead and z-field gradients. Quadruplexes were dissolved in a low-salt buffer with 10 mM potassium phosphate, pH 7.0. For solvent suppression on the samples in 90% H2O/10% D2O, a WATERGATE with w5 element was employed. Data were processed using Topspin 4.0.6. Proton chemical shifts were referenced relative to TSP.

Differential scanning calorimetry (DSC)

DSC measurements were performed on a VP-DSC (Malvern Instruments, Great Britain) with 50 μM oligonucleotide in a 10 mM potassium phosphate buffer, pH 7, unless otherwise stated. Samples were heated from 20 to 100-110 °C with a scan rate of 0.5 °C min−1. Equilibrium conditions were confirmed by a single experiment with a scan rate of 0.25 °C min−1, yielding a thermogram superimposable on the thermogram obtained with twice the scan rate. A buffer versus buffer scan was subtracted from the sample scan and cubic baselines were constructed for each transition. Melting temperatures Tm and calorimetric enthalpies ΔH°cal corresponding to the maximum of the DSC peak and the area under the heat capacity versus temperature curve were obtained from the baseline-corrected profiles. DSC curves were fitted with a non-two-state model as implemented in the DSC analysis software. Here, the temperature dependence of the ratio of unfolded and folded population as given by the shape of the DSC thermogram is described by the van 't Hoff relationship with ΔH°vH ≠ ΔH°cal. Changes in heat capacity ΔCp° are too small for their reliable determination and were set to zero, consistent with negligible heat capacity effects upon quadruplex folding. [21] Reported thermodynamic parameters are average values with corresponding standard deviations from at least three independent experiments.
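For illustration only, a small numerical sketch of the work-up described above (buffer subtraction, baseline correction, then integration of the excess heat capacity); the synthetic Gaussian peak and all values are invented, and the routine is not the vendor's analysis software.

```python
import numpy as np

def dsc_enthalpy_and_tm(temp_C, cp_excess, baseline):
    """Return (dH_cal, Tm) from a baseline-corrected DSC transition.

    temp_C    : scan temperatures (deg C)
    cp_excess : buffer-subtracted molar heat capacity (kcal mol-1 K-1)
    baseline  : fitted (e.g., cubic) baseline on the same temperature grid
    """
    corrected = cp_excess - baseline
    # Trapezoidal integration of the transition peak gives dH_cal.
    dH = float(np.sum(0.5 * (corrected[1:] + corrected[:-1]) * np.diff(temp_C)))
    Tm = float(temp_C[np.argmax(corrected)])          # Tm taken at the peak maximum
    return dH, Tm

# Synthetic thermogram: Gaussian-shaped excess heat capacity centred at 75 deg C.
T = np.linspace(20.0, 100.0, 801)
cp = 12.0 * np.exp(-0.5 * ((T - 75.0) / 3.0) ** 2)
dH, Tm = dsc_enthalpy_and_tm(T, cp, np.zeros_like(T))
print(round(dH, 1), "kcal/mol;", Tm, "deg C")
```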
6,959.4
2020-08-05T00:00:00.000
[ "Biology", "Chemistry" ]
Solitons Equipped with a Semi-Symmetric Metric Connection with Some Applications on Number Theory

A solution to an evolution equation that evolves along symmetries of the equation is called a self-similar solution or soliton. In this manuscript, we present a study of η-Ricci solitons (η-RS) for an interesting manifold called the (ε)-Kenmotsu manifold ((ε)-KM), endowed with a semi-symmetric metric connection (briefly, a SSM-connection). We discuss Ricci and η-Ricci solitons with a SSM-connection satisfying certain curvature restrictions. In addition, we consider the characteristics of the gradient η-Ricci solitons (a special case of η-Ricci soliton), with a Poisson equation on the same ambient manifold for a SSM-connection. In addition, we derive an inequality for the lower bound of gradient η-Ricci solitons for the (ε)-Kenmotsu manifold with a semi-symmetric metric connection. Finally, we explore a number theoretic approach in the form of Pontryagin numbers to the (ε)-Kenmotsu manifold equipped with a semi-symmetric metric connection.

Introduction

A Kenmotsu manifold [1] is a specific type of Riemannian manifold that arises in the field of differential geometry and is closely related to the theory of contact manifolds [2]. It is named after the Japanese mathematician Kenmotsu Katsuhiro, who made significant contributions to the study of these manifolds. The (ε)-Sasakian manifold was initially described by Bejancu et al. in [3]. Later, Xufeng et al. [4] demonstrated that such manifolds were actually immersed in indefinite Kaehlerian manifolds. Tripathi et al. [5] presented an (ε)-almost para-contact manifold. De et al. [6] proposed the idea of the (ε)-Kenmotsu manifold and demonstrated how the presence of this novel structure in an indefinite metric affects the curvatures.

Friedmann et al. [7] gave a semi-symmetric connection on a manifold. An explanation of this connection's geometrical meaning was provided by Bartolotti in [8]. Hayden defined and investigated semi-symmetric metric connections in [9]. The SSM connection on a Riemannian manifold was first systematically examined by Yano in [10]. Subsequent research on this topic was conducted by a number of authors, including Haseeb et al. [11], Sharfuddin et al. [12], Tripathi [13], and Hiricȃ et al. [14,15].

The concept of Ricci solitons (RS) originated from the groundbreaking work of Richard Hamilton [16] in 1982, who created the Ricci flow as a means to smoothly deform metrics on a manifold. The Ricci flow, governed by a parabolic partial differential equation, iteratively adjusts the metric tensor on a manifold in the direction of its Ricci curvature, leading to a flow that reveals the intrinsic geometry's underlying features.
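For orientation, the Ricci flow mentioned here is the standard evolution equation (not written out in the excerpt above); in the usual notation it reads

\[ \frac{\partial}{\partial t}\, g(t) = -2\, \mathrm{Ric}\big(g(t)\big), \qquad g(0) = g_0 , \]

so that a soliton is a solution that evolves only by diffeomorphisms and scaling.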
Later, in 1988, he [17] claimed that RS can be seen as self-similar solutions to the Ricci flow equation, possessing a remarkable property: under the flow, the metric evolves by a conformal scaling combined with a translation. This self-similarity allows RS to provide significant insights into the geometric behavior of manifolds, shedding light on their curvature properties and offering connections to diverse fields such as geometric analysis, general relativity, and geometric topology.

Definition 1 ([18]). A Riemannian manifold (B, g) is said to have a RS if the Riemannian metric g satisfies the following equation:

L_U g + 2 Ric + 2γ g = 0, (1)

where L_U g symbolizes the Lie derivative of g with respect to the soliton field U on B (called the soliton vector field) and γ ∈ R, which determines the type of soliton, while Ric denotes the Ricci tensor of g.

Remark 1. It is necessary to mention here that γ indicates a real scalar and its presence shows that the metric is not fixed by the flow (up to a diffeomorphism); in fact, it could be either expanded or contracted by γ. So, depending on the value of γ, RS are classified into three types: shrinking, translating (or steady), and expanding; that is, γ < 0 (this soliton has a positive scaling in the direction of U, meaning the metric is expanding along U), γ = 0, and γ > 0 (the metric is contracting along U).

If the potential vector field U is the gradient of a smooth function Ψ, denoted by ∇Ψ, the soliton Equation (1) reduces to

Hess Ψ + Ric + γ g = 0,

where Hess Ψ is the Hessian of Ψ. Perelman [19] proved that a Ricci soliton on a compact manifold is a gradient Ricci soliton.

In 2009, Cho and Kimura [20] established the idea of the η-Ricci soliton, as an extension of the classical Ricci soliton concept, given by the following:

L_U g + 2 Ric + 2γ g + 2α η ⊗ η = 0,

where α is a real constant and η is a 1-form defined as η(p) = g(p, U) for any p ∈ χ(B).

In this manuscript, the authors examine the η-RS on an (ε)-Kenmotsu manifold B with respect to a SSM-connection, which was inspired by the foregoing studies.

The work is ordered as follows: Section 2 presents the basic notion and definition for an (ε)-Kenmotsu manifold and semi-symmetric metric connection. Section 3 includes the curvature properties of the (ε)-Kenmotsu manifold with a semi-symmetric metric connection. Section 4 presents the results of the η-Ricci soliton on the (ε)-Kenmotsu manifolds with a semi-symmetric metric connection and provides some examples and some of their characteristics and properties. In Section 5, in terms of η-Ricci solitons, we address certain curvature constraints on (ε)-Kenmotsu manifolds with a semi-symmetric metric connection. Section 6 discusses the harmonicity of gradient η-Ricci solitons in an (ε)-Kenmotsu manifold with a semi-symmetric connection. By employing the gradient η-Ricci solitons for a (ε)-Kenmotsu manifold with a semi-symmetric metric connection, we obtain multiple pinching theorems. In Section 7, a few applications of the semi-symmetric metric connection in the (ε)-Kenmotsu manifold in number theory are explored.

An (ε)-contact metric manifold is termed an (ε)-KM [6] if the defining Kenmotsu condition holds, where ∇ is the Levi-Civita connection with respect to g.

An induced connection ∇ on B is said to be a semi-symmetric connection [10] if its torsion tensor T satisfies

T(p, q) = η(q)p − η(p)q.

Remark 2. If T of ∇ vanishes, then ∇ is known as a symmetric connection. Or else it is known as non-symmetric. Moreover, it is said to be a metric connection if g on B satisfies ∇g = 0; otherwise, it is non-metric.
Furthermore, a semi-symmetric connection is called an SSM-connection [10] if, in addition, ∇g = 0.

Let B be an (ε)-KM and ∇ be the Levi-Civita connection on B. Then, ∇ and the SSM-connection ∇ on B are related as given below, for all p, q, t ∈ χ(B).

Let I1, I2, I3 be vector fields which are linearly independent at each point of B. Let us define the Riemannian metric g on this basis, wherein ε = ±1. The 1-form η is defined by η(p) = εg(p, I3) for all p ∈ χ(B), and the (1, 1)-tensor field Φ is defined by Φ(I1) = −I2, Φ(I2) = I1, Φ(I3) = 0. Thus, using the linearity property of Φ and g, we find the basic relations stated at the beginning. Furthermore, we take the Levi-Civita connection ∇ with respect to g on B. In light of Koszul's formula, we have

2g(∇_p q, t) = p g(q, t) + q g(t, p) − t g(p, q) + g([p, q], t) − g([q, t], p) + g([t, p], q),

and, using the preceding relations, we obtain the covariant derivatives of the basis vector fields for ζ = I3. Hence, the manifold B under the above setting is an (ε)-KM of dimension 3.

Characteristics of the Curvature on (ε)-KM with a SSM-Connection

Let B be an (ε)-KM. The curvature tensor of B with respect to ∇ is defined in the usual way. Adopting (21) and (22), and in light of (5), (8), and (9), we find the corresponding expression, where R(p, q)t = ∇_p ∇_q t − ∇_q ∇_p t − ∇_{[p,q]} t is the Riemannian curvature tensor.

Lemma 1. If B is an (ε)-KM with ∇, then we have the relation below.

Proof. Applying covariant differentiation on Φq with respect to p, we obtain ∇_p(Φq) = (∇_p Φ)q + Φ(∇_p q), which can be rearranged accordingly. We replace q = ζ in (21) and we obtain (26).

Remark 4. In general, however, due to the highly non-linear nature of the Ricci flow equation, singularities form in finite time. These singularities are curvature singularities, which means that as one approaches the singular time the norm of the curvature tensor blows up to infinity in the region of the singularity. A fundamental problem in Ricci flow is to understand all the possible geometries of the singularities.

Interesting for physics: a theory of gravity, with the central role played by concepts of entropy, leading to spacetime singularities with controllable topology change ("Ricci flows with surgery"), for general evolving three-geometries. Symmetries: foliation-preserving diffeomorphisms. Ricci solitons are Ricci flows that may change their size but not their shape, up to diffeomorphisms. The soliton, which is related to the geometrical flow of manifold geometry, is one of the most significant types of symmetry. In actuality, to understand the ideas of kinematics and thermodynamics, the general theory of relativity uses the geometric flow for spacetime manifolds. A significant amount of the literature on Ricci solitons and their generalizations can be found regarding spacetimes.

Here is an example of an η-RS on an (ε)-KM with ∇. In particular, for p = ζ we obtain the corresponding relation. In this situation, the Ricci operator Q defined by g(Qp, q) = R̄ic(p, q) has the expression Qp = −(1

Remark 5. Acknowledge that the existence of an η-RS on an (ε)-KM with ∇ indicates that ζ is an eigenvector of Q corresponding to the eigenvalue −(1

Now, consequently from (34), we obtain the following: if (B, Φ, ζ, η, g, ε) is an (ε)-KM with ∇ on B and (34) defines an η-RS on B, then 1. Q and R̄ic are parallel along ζ.

Proof. The first part is obvious. We proceed with the second, using the facts recorded in (37) and (38). Then, by switching S and Q in (37) and (38), we attain the desired second part.
η-RS on an (ε)-Kenmotsu Manifold with a SSM-Connection and Some Curvature Restrictions

In a Riemannian manifold the most important intrinsic invariant is the Ricci tensor. A classical aim of theoretical physics is a theory in which the gravitational and electromagnetic fields are unified as intrinsic geometric objects in the spacetime manifold. For this purpose, we first present the preliminary geometric considerations dealing with the metric differential geometry of induced connections. The unified field theory is then developed as an extension of the general theory of relativity based on a semi-symmetric condition, which structurally is meant to be as close as possible to the symmetric condition of the Einstein–Riemann spacetime. Verstraelen et al. [44] studied a semi-symmetry type curvature condition of the form $R \cdot \mathrm{Ric} = 0$, which implies a hypercylinder space, where R acts as a derivation on Ric. Therefore, we adopt the following curvature restriction, as used in [30]. On putting s = ζ, we obtain a relation which is equivalent to the required one. Proof. Second, we investigate $W_2(\zeta, p)\cdot\overline{\mathrm{Ric}} = 0$. For this, the corresponding condition on $\overline{\mathrm{Ric}}$ must be satisfied, which can be rewritten accordingly; for t = ζ, we find the required result. Proof. Third, we study $\overline{\mathrm{Ric}}\cdot W_2(\zeta, p) = 0$. So, we consider an (ε)-KM with ∇ satisfying this condition. Taking the inner product with ζ, Equation (45) simplifies; on simplification, we obtain the stated relation.

Corollary 3. An η-RS on an (ε)-KM with ∇, satisfying $\overline{\mathrm{Ric}}\cdot W_2(\zeta, p) = 0$, is either shrinking or expanding for α = 0. Proof. For α = 0, we find the corresponding value of γ. Thus, the statement is fulfilled.

Harmonicity of Gradient η-Ricci Solitons with a SSM-Connection

In this section, we discuss the characteristics of a gradient η-RS with a Poisson equation on an (ε)-KM with a SSM-connection ∇, which is a special case of the η-RS. Theorem 6. Let an (ε)-KM (B, Φ, ζ, η, g, ε) with ∇ admit a gradient η-RS and let the potential vector field ζ be of gradient type; then the Poisson equation satisfied by Ψ follows. Proof. Taking the trace of (34), we obtain the traced identity. Considering that the potential vector field ξ is of gradient type, that is, ξ =: grad(Ψ) for a smooth function Ψ : (B, Φ, ζ, η, g, ε) → ℝ, then (g, ζ, γ, α) is said to be a gradient η-RS [17]. Adopting the above fact with (47), we deduce the required result. Implementing the fact that a function β : B → ℝ is harmonic if Δβ = 0, where Δ is the Laplacian operator on B defined in [45], we obtain the following conclusion: Theorem 7. Let an (ε)-KM (B, Φ, ζ, η, g, ε) with ∇ admit a gradient η-RS and let the potential vector field ζ be of gradient type. If Ψ is a harmonic function on B, then the gradient η-Ricci soliton is expanding, steady, or shrinking according to the corresponding cases.

Lower Bound of Gradient η-Ricci Solitons

In this section, we obtain an inequality for the lower bound of gradient η-RS on an (ε)-KM with a SSM-connection. In 2020, Blaga and Crasmareanu derived an inequality for a lower bound involving the geometry of g in the form of a gradient Ricci soliton for a smooth function ψ on an ambient space M, namely [46]

$$\|\mathrm{Ric}\|_g^2 \ \ge\ \|\mathrm{hess}\|_g^2 - \dots,$$

wherein hess is the Hessian of the smooth function ψ on M. In light of (48) and (46), we therefore state the following: Theorem 8.
Let an (ε)-KM (B, Φ, ζ, η, g, ε) with ∇ admit a gradient η-RS and let the potential vector field ζ be of gradient type. Then, the corresponding lower-bound inequality holds.

Application of the SSM-Connection in Number Theory

According to the Hirzebruch signature theorem [47], the signature of a smooth closed oriented manifold can be expressed as a linear combination of Pontryagin numbers. These numbers are built from specific characteristic classes, the Pontryagin classes of real vector bundles, which live in cohomology groups whose degrees are multiples of four.
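To make this concrete, the lowest-dimensional instances of the signature theorem, recalled here only as standard illustrative facts (they are not derived in the present paper), express the signature σ through the L-genus:

```latex
% Hirzebruch signature theorem: sigma(M) = <L_k(p_1,...,p_k), [M]> for a closed,
% oriented, smooth 4k-dimensional manifold M, with L_k the k-th L-polynomial.
\begin{align*}
  \sigma(M^{4})  &= \tfrac{1}{3}\,\big\langle p_{1},\,[M]\big\rangle,\\
  \sigma(M^{8})  &= \tfrac{1}{45}\,\big\langle 7p_{2}-p_{1}^{2},\,[M]\big\rangle,\\
  \sigma(M^{12}) &= \tfrac{1}{945}\,\big\langle 62p_{3}-13p_{1}p_{2}+2p_{1}^{3},\,[M]\big\rangle.
\end{align*}
```

For instance, for the complex projective plane CP² one has ⟨p₁, [CP²]⟩ = 3, so σ(CP²) = 1, in agreement with its intersection form.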
3,155.6
2023-10-27T00:00:00.000
[ "Mathematics" ]
Effects of RhoC downregulation on the angiogenesis characteristics of myeloma vascular endothelial cells Abstract Background Tumor angiogenesis plays an important role in disease progression, and RhoC has been previously found to be expressed in vascular endothelial cells (VECs); however, its role in tumor angiogenesis requires clarification. This study aimed to explore the effects of RhoC downregulation on the cytoskeleton, pseudopod formation, migration ability, and canalization capacity of myeloma vascular endothelial cells (MVECs) in vitro. Materials and methods The expression of RhoC in MVECs and human umbilical vein endothelial cells (HUVECs) was knocked down by shRNA, and the expression levels of RhoC mRNA were detected by quantitative reverse transcription polymerase chain reaction (qRT‐PCR). The cytoskeletal changes and pseudopods were observed by laser scanning confocal and scanning electron microscopy; VECs were incubated in two‐dimensional Matrigel and three‐dimensional microcarriers to observe tube‐like structures and budding status, respectively. The protein expression of RhoC, phosphorylation of mitogen‐activated protein kinase (p‐MAPK), and Rho‐associated coiled‐coil kinase (ROCK) was determined by Western blotting. The expression of RhoC in VECs was downregulated by RhoC shRNA, thereby decreasing the number of pseudopods, two‐dimensional tube‐like structures, and buds. Results When RhoC was downregulated, the expression levels of ROCK and phosphorylation of MAPK were both decreased (P < 0.05). Moreover, the expression levels of RhoC and phosphorylation of MAPK and three‐dimensional budding numbers were higher in MVECs than in HUVECs (P < 0.05). The downregulation of RhoC expression in MVECs and HUVECs inhibited pseudopod formation, migration, canalization ability, and angiogenesis (P < 0.05). Conclusion Our data indicated that MVECs and HUVECs were well suited for angiogenesis research, but the former cell type was shown to be more advantageous in terms of budding numbers. RhoC plays a pivotal role in MVECs angiogenesis, and the downregulation of RhoC expression could inhibit angiogenesis via the RhoC/MAPK and RhoC/ROCK signaling pathways. | INTRODUCTION RhoC is a member of the Ras-homologous (Rho) GTPase family, which comprises important signaling molecules that are involved in regulating processes associated with dynamic changes in the cytoskeleton, such as cell migration and proliferation. 1,2 As shown previously in a study of esophageal carcinomas, RhoC protein expression can upregulate vascular endothelial growth factor (VEGF), which is closely associated with tumor angiogenesis. 3 Angiogenesis is initiated and regulated by many factors and is an extremely complex process that is mediated by a variety of inducing factors and includes multiple steps, such as vascular endothelial cell proliferation and migration, extracellular matrix (ECM) degradation and remodeling, and vascular formation. 4,5 Angiogenesis is required for tumor growth and metastasis. 6,7 RhoC expression is related to cell proliferation, migration, and cytoskeletal alterations. 8,9 However, RhoC was also found to be expressed in vascular endothelial cells (VECs); 3 therefore, its expression might be associated with the angiogenesis of VECs. In angiogenesis studies, human umbilical vein endothelial cells (HUVECs) are usually selected as the model cell line. 10,11 Normal bone marrow plasma cells expressed more proangiogenic genes than antiangiogenic genes and induced angiogenesis in vitro. 
The accumulation of plasma cells can induce basal vascularization at the bone marrow level. 12 Myeloma angiogenesis is regulated by various factors, such as the bone marrow microenvironment and hypoxia. 13,14 Given the substantial differences in structure and conformation between normal HUVECs and myeloma vascular endothelial cells (MVECs), HUVECs, and MVECs were selected for this study. By knocking down RhoC, we investigated the associations between MVECs and angiogenesis as well as the possible mechanisms through which RhoC affects vascular formation from endothelial cells, uncovering novel mechanisms associated with angiogenesis and providing new therapeutic strategies for targeting tumor angiogenesis. | Reagents The shRNA lentiviral vectors that were used to knockdown of RhoC expression were purchased from GenePharma (Suzhou, China). An anti-RhoC antibody was purchased from Abcam (Cambridge, MA, USA). The anti-mitogen-activated protein kinase (MAPK) and anti-Rho-associated coiled-coil kinase (ROCK) antibodies were purchased from Proteintech (Chicago, IL, USA). Phalloidin was purchased from Cytoskeleton (Denver, CO, USA). DAPI was purchased from Santa Cruz Biotechnology (Dallas, Texas, USA). Ezol was purchased from GenePharma; Cytodex3 was purchased from GE Healthcare (Uppsala, Sweden), and HRP-conjugated secondary antibody was purchased from Jackson (West Grove, PA, USA). | Lentiviral transduction of VECs Two types of VECs (MVECs and HUVECs) were investigated in this study, and each type was grouped into a negative control group (NC group) and an experimental group (S group). After trypsin digestion, cells were added to a 24-well culture plate at a concentration of 5 × 10 4 cells per well, which was followed by the addition of 500 μL of 10% fetal bovine serum/Dulbecco's modified Eagle's medium (DMEM) culture medium (to each well); culture plates were placed in an incubator at 37°C for incubation. On the next day, a lentiviral stock solution was diluted with cell culture medium at a ratio of 1:10 (v/v, stock solution: culture medium), and the culture medium was removed from each well and replaced with 500 μL of the diluted lentiviral solution. After a 12h incubation, complete culture solution was added to each well to replace the old culture medium and incubated for 48 hours. Subsequently, transfection efficiency was monitored, during which five 200× visual fields were randomly selected, and 100 cells were counted; the infection rate was defined as the average percentage of green fluorescent cells relative to the total number of observed cells. | Detection of RhoC mRNA expression after lentiviral transfection by quantitative reverse transcription polymerase chain reaction The forward and reverse primers, respectively, for the target gene RhoC were 5′-CAGTGCCTTTGGCTACCTTG-3′ and 5′-CCCTCCGACGCTTGTTCTT-3′, and those for GAPDH were 5′-CATGAGAAGTATGACAACAGCCT-3′ angiogenesis, lentiviral vector, myeloma, RhoC, vascular endothelial cells and 5′-AGTCCTTCCACGATACCAAAGT-3′ (Table 1). After lentiviral transfection, the cells were incubated in a 6-well culture plate until each well reached confluence. This step was followed by the addition of 300 μL Ezol to lyse the cells to extract total RNA. With three duplicate wells for each group, RNA samples were subjected to reverse transcription and PCR amplification. The PCR procedure was carried out to include pre-denaturation at 95°C for 3 minutes, denaturation at 95°C for 3 seconds, annealing at 62°C for 40 seconds, and extension at 72°C for 5 minutes. 
In total, 40 amplification cycles were conducted, and the amplified fragment had a size of 113 bp. The measured cycle threshold values were used to calculate the 2 −ΔΔCt value to compare the relative quantitative expression of mRNA among the groups, and each group was measured in triplicate. 15 | Pseudopod observation The NC group and the S group of cells were separately incubated on cover glass. After reaching 60% confluence, the cells were fixed with paraformaldehyde for 20 minutes and treated with Triton-100x for 20 minutes. A 5-μL portion of phalloidin labeled with rhodamine was diluted in 200 μL phosphate-buffered saline (PBS) solution, and the resulting mixture was applied to the aforementioned cover glass; cells samples were incubated for 30-60 minutes, followed by a 10min incubation with DAPI. The cells were observed using a laser scanning confocal microscope (Nikon Eclipse Ti, Japan); images were analyzed using Photoshop to magnify (400×) the observed images. | Scanning electron microscopy for the observation of cellular pseudopods and cytoskeletons When the NC group and the S group of cells grew to 60% confluency on the cover glass, they were rinsed with PBS, fixed with 3% glutaraldehyde (precooled to 4°C) and sat overnight at 4°C. Next, the cells were rinsed with PBS twice, each time for 10 minutes, and fixed with osmic acid (precooled to 4°C) for 1 hour at 4°C. This step was followed by a gradient alcohol dehydration process (15 minutes each time), a freeze-drying process, and a vacuum gold-sputtering process. The prepared samples were observed under a scanning electron microscope (Phenom-World, Netherlands), and images were obtained using visual fields magnified 5000×. | Scratch test Each group of cells was inoculated into a 24-well culture plate at a concentration of 1 × 10 4 cells/well, and three replicate wells were set up for each group. At confluence, the cells in each group were scratched and subjected to serum-free incubation for 24 hours. The cells were then observed and imaged in a visual field magnified 100× using an inverted microscope. ImageJ software was used to analyze the rate at which the scratched areas were filled. | Two-dimensional canalization on Matrigel Matrigel was thawed at 4°C, and in the meantime, a 24-well culture plate and 200-μL tips were precooled. Matrigel was mixed with serum-free medium at a ratio of 1:1, and the resulting mixture was added to the 24-well culture plate at a concentration of 200 μL per well. The plate was then placed in an incubator at 37°C for 1 hour to allow the mixture to solidify into a gel. Each group of cells was trypsin-digested until a density of 1 × 10 5 cells/mL was reached. Next, 1 mL of the resulting cell suspension was added to each well, and the plate was placed in an incubator at 37°C for 24 hours, during which the canalization status was monitored. The canalization status was assessed by the following equation: node number × branch number, as observed under an inverted microscope. Five visual fields at a magnification of 100× were randomly selected for each group to calculate the average value. Cytodex3 microcarrier One hundred milligrams of Cytodex3 was subjected to sterilization treatment, according to the product manual, and rinsed once with lukewarm culture medium. This solution was then added to 2 mL DMEM to obtain the diluted solution (50 mg/ mL). Each group of transfected cells was digested and resuspended to reach a density of 2 × 10 6 cells/mL. 
A 1-mL portion of the resulting suspension was added to an Eppendorf tube, to which 50 μL of Cytodex3 was then added for mixing. Matrigel was spread on the well of a 24-well plate. Microcarriers that were fully covered with cells were then selected, rinsed with prewarmed culture medium, and centrifuged at a low speed to remove the cells in the microcarrier suspension. After the supernatant was carefully removed, the mixture remaining in the centrifuge tube was homogeneously mixed with Matrigel that contained 10 ng/mL VEGF, and the resulting mixture was added to the abovementioned 24-well culture plate, which had been loaded with the gel. The plate was then placed in an incubator at 37°C to allow the gel-containing mixture to solidify. Next, new culture medium was added to the 24-well culture plate, which was then placed in the incubator for incubation; budding status was monitored consecutively for 4 days. Effective budding length was defined as the diameter of the ball, and the budding number for 10 microcarriers was measured in triplicate from a visual field at 200× magnification for each group of cells; the average values were calculated based on triplicate measurements. MARK, ROCK, and RhoC The cells in the NC and S groups were all lysed for the extraction of proteins. After boiling and denaturing, the protein samples were introduced to a culture plate at a dose of 10 μL per well. Electrophoresis (Bio-Rad Laboratories, Inc, Hercules, CA, USA) was conducted at 80 V for 30 minutes and then 120 V for 60 minutes. Next, the samples were transferred to a polyvinylidene difluoride membrane (EMD Millipore, Billerica, MA, USA) by applying an 80 V transmembrane voltage for 90 minutes, followed by overnight incubation with RhoC (1 μg/mL, ab64659; Abcam) and incubation with ROCK (1:500, 21850-1-AP) and Phosphorylation of MAPK (1:500, 66234-1-Ig) (both from Proteintech) primary antibodies at 4°C; then, a final incubation with HRP-conjugated secondary antibody (1:5000, 115-035-003; Jackson) was performed on the next day. The protein expression in each group of cells was measured in triplicate based on electrochemiluminescence; ImageJ software was used for quantitative grayscale analysis. | Statistical analysis The statistical software SPSS 17.0 (SPSS, Inc, Chicago, IL, USA) was used to test the normality of the data in this study. Data conforming to a normal distribution were expressed as x ± SD, and a t-test was used to compare the two samples (P < 0.05). | Effect of RhoC gene silencing Inverted fluorescence microscopy revealed that both types of cells transfected with lentiviruses expressed green fluorescent protein, and the transfection efficiency was greater than 80% ( Figure 1A). Quantitative reverse transcription polymerase chain reaction (qRT-PCR) revealed significantly lower RhoC mRNA levels in the S group than in the NC group for both MVECs (t = 9.50, P < 0.05) and HUVECs (t = 10.92, P < 0.05) ( Figure 1B). | Observation of pseudopods For MVECs, the number of pseudopods in the RhoC knockdown group (S group) was less than that in the NC group (t = 10.92, P < 0.05), with the cytoskeleton of cells in the S group exhibiting polygonal shapes. Similar results were observed in the HUVECs (t = 21.17, P < 0.05) (Figure 2A,2 & Table 2). | Vascular endothelial cell migration and movement MVECs in the S group had a significantly slower migration speed than their counterparts in the NC group (t = 4.48, P < 0.05). Similar results were obtained for HUVECs (t = 3.73, P < 0.05). 
There was no significant difference in migration speed between the two types of cells in the NC group (t = 0.21, P > 0.05), as shown in Figure 3A, B. | Canalization of VECs After incubation for 12 hours on Matrigel, the MVECs in the S group showed significantly decreased canalization compared to that in the NC group (t = 13.25, P < 0.05); similar results were observed for HUVECs as well (t = 8.36, P < 0.05). In contrast, there was no significant difference in canalization between the MVECs and HUVECs in the respective NC groups (t = 0.33, P > 0.05; Figure 3C, D). | Vascular endothelial cell budding After incubation on microcarrier beads for 1 day and 4 days, both MVECs (t = 20.20, P < 0.05) and HUVECs (t = 12.50, P < 0.05) in the S group exhibited significantly decreased budding compared to that of the MVECs and HUVECs in the NC group; MVECs in the NC group demonstrated significantly enhanced budding compared to the HUVECs in the NC group (t = 7.13, P < 0.05), as shown in Figure 3E, F. | Western blot analysis of phosphorylation of MAPK, ROCK, and RhoC protein levels As shown in Figure 4, phosphorylated MAPK protein expression in the S group of MVECs was significantly lower than that in the NC group of MVECs (t = 25.16, P < 0.05), and the same effect was observed for HUVECs (t = 3.63, P < 0.05), as shown in Figure 4A, B. ROCK protein expression in the S group was significantly lower than that in the NC group for both MVECs (t = 28.17, P < 0.05) and HUVECs (t = 5.88, P < 0.05), as shown in Figure 4C. RhoC protein expression in the S group of MVECs was significantly lower (t = 15.58, P < 0.05) than that in the NC group; similar results were obtained for HUVECs (t = 16.22, P < 0.05), as shown in Figure 4D. | DISCUSSION RhoC is a member of the Rho GTPase family and belongs to a class of small G proteins. RhoC protein expression in endothelial cells is intimately associated with tumor angiogenesis. 3 Angiogenesis involves endothelial cells. After transfecting VECs with RhoC shRNA in this study, the level of RhoC mRNA in the experimental group of cells decreased by 95% compared to expression in the control group. Therefore, RhoC expression was knocked down to explore the relationship between RhoC and angiogenesis using endothelial cells. Moreover, a prerequisite for cellular migration is the formation of lamellipodia, filopodia, and focal adhesions. 16 Laser scanning confocal and scanning electron microscopy were used in this study to observe pseudopod conformation, showing that after RhoC knockdown, the number of filopodia markedly decreased, the cellular conformation was more regular, and the width of lamellipodia was restricted. This phenomenon is consistent with the result obtained by Ridley, 17 who showed that RhoC could restrict the width of cell pseudopods. Research on the Rho family has mainly focused on the cytoskeleton, cellular mobility, cell proliferation, and tumor cell infiltration and metastasis. 18 In recent years, an increasing number of studies have found that RhoC is not only involved in tumor invasion and metastasis 18,19 but also plays an important role in tumor development and progression. Studies on hepatocellular carcinoma (HCC) have suggested that RhoC promotes the evolution of healthy hepatocytes into malignant cells by promoting cell migration; 20 this indicates that RhoC may be a new oncogene for HCC.
By investigating of RhoC, it has been found that its expression in tumors, such as esophageal squamous cell carcinoma 3 and HCC, 21 is intimately associated with tumor angiogenesis. For the scratch test, RhoC downregulation could inhibit endothelial cell migration. The regulation of cellular migration by Rho GTPase is achieved by affecting the activities of actin and myosin as well as cell adhesion. ROCK can increase myosin phosphorylation such that it regulates actin contractility. To verify the effects of RhoC in the involvement of VECs in angiogenesis, VECs were cultured on a Matrigel-covered two-dimensional culture plate and on a three-dimensional microcarrier. The results showed that the two-dimensional tube-like structures of the two types of VECs were markedly decreased by RhoC downregulation, with a reduction in budding observed on the three-dimensional microcarrier. Therefore, it was confirmed that RhoC plays an important role in angiogenesis in VECs. The results of the expression of RhoC and ROCK proteins shown by Western blot showed that RhoC silencing decreased ROCK protein expression, which is consistent with the finding of Rong et al 22 who showed that the downstream effector molecule of RhoC is ROCK. Some studies have shown that RhoC activates ROCK by binding with ROCK and then phosphorylates the myosin light chain, participates in cell aggregation and fiber contraction, and promotes cell metastasis and infiltration. 23 The MAPK signaling pathway exists widely in cells and is involved in many biological processes, such as cell growth, development, division, differentiation, and apoptosis. This pathway is also closely associated with the development and progression of multiple malignant tumors. Regarding the MAPK family members, increased activation of extracellular regulated protein kinase (ERK) can stimulate angiogenesis. The ERK signaling pathway plays a pivotal role in malignant processes, such as the development and proliferation of tumors and tumor angiogenesis. 10 One study on prostate cancer showed that the RhoC/ROCK signaling pathway could upregulate the phosphorylation of MAPK, and the phosphorylation level of MAPK in prostate cancer tissues was significantly higher than that in adjacent tissues and was related to tumor cell metastasis. 24 Therefore, with respect to the mechanisms through which VECs participate in angiogenesis, we focused on detecting phosphorylation of MAPK protein expression and found that the downregulation of RhoC interfered with phosphorylation of MAPK expression. Based on these findings, we suggest that the RhoC/MAPK signaling pathway plays a pivotal role in angiogenesis. Research on breast cancer has reported that interfering with RhoC gene expression could inhibit the proliferation and infiltration abilities of tumor cells through a mechanism that likely involves the simultaneous downregulation of matrix metalloproteinase-9 (MMP-9). 25 The ECM primarily exists between cells, and therefore, angiogenesis requires the degradation of the ECM. Notably, the most important enzyme for ECM degradation is MMP-9. Therefore, the reduction in angiogenesis by the downregulation of RhoC may be associated with inhibition of MMP-9 expression. The above speculation, once confirmed, would validate a report by Shuli et al, 20 which showed that overexpression of RhoC stimulates MMP-9 expression to promote angiogenesis. However, whether RhoC/MMP-9 is involved in angiogenesis requires further investigation. 
In conclusion, as confirmed by in vitro experimental detection, RhoC plays a pivotal role in MVECs angiogenesis, and the downregulation of RhoC expression could inhibit angiogenesis via the RhoC/MAPK and RhoC/ROCK signaling pathways. Moreover, MVECs were superior to HUVECs when used in three-dimensional systems for the observation of budding. ETHICS APPROVAL AND CONSENT TO PARTICIPATE All experimental procedures involving human samples were approved by the Life Science Ethics Review Committee of Zhengzhou University.
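Purely as an illustrative appendix (not part of the original study), the two quantification steps described in the Methods, relative expression by the 2^−ΔΔCt method and the two-sample t-test on normally distributed data, can be sketched as follows; all numbers and variable names below are hypothetical.

```python
import numpy as np
from scipy import stats

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    ct_target / ct_ref: Ct values of the gene of interest and the reference
    gene (e.g., GAPDH) in the treated group; *_ctrl: the same values in the
    control group. Returns the fold change of the treated group vs. control.
    """
    d_ct_treated = np.mean(ct_target) - np.mean(ct_ref)
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values (not data from the paper).
ct_rhoc_s   = [24.8, 24.9, 25.1]   # RhoC, shRNA (S) group
ct_gapdh_s  = [18.0, 18.1, 17.9]
ct_rhoc_nc  = [21.0, 21.2, 20.9]   # RhoC, negative control (NC) group
ct_gapdh_nc = [18.1, 18.0, 18.0]

print("fold change (S vs NC):",
      round(fold_change_ddct(ct_rhoc_s, ct_gapdh_s, ct_rhoc_nc, ct_gapdh_nc), 3))

# Two-sample t-test on normally distributed readouts, mirroring the
# "mean +/- SD, t-test, P < 0.05" analysis described in the Methods.
group_nc = [1.00, 0.96, 1.04]
group_s  = [0.21, 0.25, 0.19]
t_stat, p_value = stats.ttest_ind(group_nc, group_s)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With the made-up Ct values above, the fold change comes out well below 1, mirroring the order of knockdown reported for RhoC mRNA after shRNA transduction.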
4,666.2
2019-05-07T00:00:00.000
[ "Medicine", "Biology" ]
Monochannel Demultiplexer Phononic Crystal Slab Based on Hollow Pillars : A mono-channel waveguide with alternating hollow pillars of different radii, designed to passively select and reject particular frequencies for filtering applications, is numerically simulated based on the Finite Element Method (FEM). The waves are guided while the frequencies are filtered according to the pillar inner radius, which serves as the waveguiding mechanism. Computations of the dispersion relation, transmission coefficient, and stress displacement profile of the waveguides were carried out to understand the propagation behaviour of elastic waves in the waveguide structure. The proposed model shows a complete bandgap around 700 kHz, and its blocking behaviour is demonstrated using square-ring shapes. The introduction of defect lines in linear and L-shaped form enables a tailorable frequency shift within the bandgap region through an optimized inner radius of the hollow pillar. The proposed model eliminates the need for a multi-channel filtering system with several conventional separated lines and thus reduces the dimensions of the filtering device. Introduction For the past two decades, studies on phononic crystals (PnC), which consist of artificial periodic arrangements of heterogeneous materials, have motivated numerous researchers to study their novel properties such as filtering, waveguiding and sensing [1][2][3][4][5][6][7][8][9]. In particular, since the first pioneering work by Liu et al. on the 3-dimensional (3D) local resonance mechanism [10], resonant unit cells for band gap (BnG) creation independent of periodicity and geometrical symmetry have been developed. Later, similar work was carried out by Goffaux et al. [11] and Wang et al. [12] to demonstrate the same mechanism in 2-dimensional (2D) and 1-dimensional (1D) setups. The ability of local resonance PnC to prohibit the propagation of incoming waves at wavelengths longer than the crystal lattice offers many advantages over the Bragg band gap, particularly in the low-frequency regime [13,14]. Due to this aspect, a great deal of attention has been dedicated to pursuing the potential applications of the structure. Different selections of constituent materials, inclusion topologies on the matrix background, and crystal lattices have been thoroughly investigated both theoretically and experimentally to maximize the width of the band gap opening [15][16][17][18][19][20]. In addition, the air-void local resonance PnC, with a hollow structure embedded on the solid matrix background, offers many benefits over the common pillar PnC structure. As a matter of fact, the hollow pillar exhibits whispering gallery modes (WGMs), whose foundational work can be traced back about 100 years to Rayleigh's observation in St. Paul's Cathedral [21]. Since this discovery, researchers have recognized its potential for applications in other branches of physics, including the propagation of seismic waves, electromagnetic waves, and electricity [22]. Recent studies showed that the same concept is applicable to the PnC case, demonstrating an extraordinarily high quality factor, Q, that can be enhanced with the addition of a non-hollow cylinder layer between the hollow pillar and the substrate [23,24]. Other approaches can also be used; for instance, Kaproulias adopted a disk geometry, while Li et al. studied an isolated tube immersed in a liquid environment [25,26].
The tunability of its porous opening enables the adjustment of its density, sound speed and other physical properties to further improve the material's acoustic performance. Most recent studies on hollow resonant pillars have shown the ability of the structure to provide higher selectivity in frequency transmittance relative to the surrounding non-hollow perfect crystal. Meanwhile, others have used the void volume with filling materials so that the frequency shift of the transmitted wave can be adjusted by a temperature switch [27][28][29]. With a filling material placed in the hollow PnC structure, the performance of the PnC device can effectively be doubled, since both the mechanical and chemical properties of the constituent material can be controlled, which poses a clear advantage over the common non-hollow pillar structure [30]. One of the main advantages of the PnC void pillar structure is waveguiding [31][32][33]. Most waveguides are developed by removing a series of inclusions to break the lattice periodicity of the phononic crystal and allow wave entrapment along the direction of the defect path [34,35]. This symmetry-breaking mechanism allows a transmitting band to exist within the band gap region that is uniquely tunable to the characteristics of the waveguide opening. In addition, inserting multichannel waveguide corridors separated by a few rows of perfect PnC lattice, which prevents interaction among the waveguide lines, permits several passbands for a possible demultiplexing application to filter different frequencies. However, this method results in bulky filtering devices [36]. The novelty of the proposed void pillar as the waveguiding mechanism is that it eliminates the need for multiple channels for demultiplexing and allows a mono-corridor approach in which distinct frequencies are filtered according to the hollow opening of the pillars. In this paper, mono-channel frequency filtering for demultiplexing applications is demonstrated based on hollow resonant pillars. A single defect channel consisting of different hollow radii can selectively filter different frequencies for demultiplexing purposes. A careful selection of the pillars' void openings allows subwavelength transmission, which constitutes the waveguiding mechanism. This can minimise the size of the filtering device by eliminating the several rows of waveguide channels normally required. The paper is organized as follows: Section 1 describes in detail the base model, which comprises a pillar-based PnC slab with a hollow-pillar waveguiding mechanism. The simulation methods, based on the finite element method and used to calculate the dispersion and transmission coefficient of the proposed model, are also explained in this section. Section 2 presents the simulation results for the subwavelength guiding before the concluding remarks in Section 3. Model and Method of Simulations The model studied in the present paper is shown in Figure 1. The structure has a square lattice of hollow resonant pillars among non-hollow perfect crystals standing on top of a finite slab structure. Both the substrate and the inclusions form a monolithic unit made of silicon, with cubic crystallographic symmetry oriented along the x, y and z axes. The mass density ρ is 2330 kg m−3, while the elastic constants C11, C12, and C44 are equal to 166 GPa, 64 GPa and 79.6 GPa, respectively.
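Purely as an illustration (not part of the original paper), the silicon parameters quoted above can be assembled into the cubic stiffness matrix in Voigt notation; the short Python/NumPy sketch below uses hypothetical variable names and serves only as a sanity check on the material definition.

```python
import numpy as np

# Material parameters for silicon quoted in the text (Pa and kg/m^3).
C11, C12, C44 = 166e9, 64e9, 79.6e9
rho = 2330.0

# 6x6 stiffness matrix in Voigt notation for a cubic crystal:
# only three independent constants, C11, C12 and C44.
C = np.array([
    [C11, C12, C12, 0.0, 0.0, 0.0],
    [C12, C11, C12, 0.0, 0.0, 0.0],
    [C12, C12, C11, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, C44, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, C44, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, C44],
])

# Longitudinal and shear bulk wave speeds along a <100> axis, a quick
# order-of-magnitude check before running the full slab model.
v_long = np.sqrt(C11 / rho)   # ~8.4 km/s
v_shear = np.sqrt(C44 / rho)  # ~5.8 km/s
print(f"v_long = {v_long:.0f} m/s, v_shear = {v_shear:.0f} m/s")
```

A 6 × 6 matrix of this form is typically how an anisotropic elastic material is specified in FEM solid-mechanics modules.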
The relevant dimensions of the hollow pillar inner radius, r_i, height of the pillar, h_p, pillar diameter, d, and slab thickness, h, were normalized to the lattice constant, a = 4 mm. Figure 2 represents the supercell model for the dispersion calculation. The bulk structure consists of 7 unit cells containing a defect in between the neighbouring solid pillars. The dispersion of the proposed structure in the X and Y directions was simulated numerically using the solid mechanics module in COMSOL Multiphysics software. According to the Floquet theorem, the boundary condition of the unit cell can be formulated as $u(x + a, y, z) = u(x, y, z)\,e^{i k_x a}$, where k_x is the wave vector in the x direction. Meanwhile, the propagation of elastic waves, where the harmonic motions are assumed to be time-dependent, is governed by $\rho\,\partial^2 u_j/\partial t^2 = \partial T_{jk}/\partial x_k$, where ρ represents the material density and the mechanical stresses T_jk depend on the strain S_lm through $T_{jk} = C_{jklm} S_{lm}$, where C_jklm represents the elastic stiffness constants. The strain and displacement are linked by $S_{lm} = \tfrac{1}{2}\,(\partial u_l/\partial x_m + \partial u_m/\partial x_l)$. The propagation of waves with no external force can then be described by the eigenvalue problem $(K_{uu} - \omega^2 M_{uu})\,u = 0$, where K_uu and M_uu are, respectively, the stiffness and mass matrices, while u is the displacement vector at the mesh nodes. This formulation was used to solve the eigenvalue problem that produced the eigenfrequencies in the first Brillouin zone. The transmission spectra, used to understand the propagation of waves within the defect row of pillars, can be obtained using the FEM model in Figure 1 together with a perfectly matched layer (PML) as the absorbing domain. The PML allows the incoming mechanical wave to diminish gradually near the end boundary of the structure. With the PML acting as the absorbing medium, the structure can be assumed infinite, since the source of excitation does not interfere with waves reflected from the boundaries of the structure. To achieve this, an artificial damping γ_j is introduced at position x_j; it depends on x_l, the coordinate at the beginning of the PML, and on σ_x, which is fixed according to the level of attenuation of the PML region. Result and Discussion Before showing the results for hollow pillars as the waveguiding mechanism, we simulated the reference band gap of the PnC with solid, non-hollow unit pillars, r_i = 0. The dimensions of pillar height, h_p = 0.55a, pillar diameter, d = 0.84a, and slab thickness, h = 0.1a, produced three complete band gaps for the perfect supercell structure. However, the focus of the present study was on the largest of the three, between 680 and 880 kHz, whose dispersion curve and transmission spectra are shown in Figure 3. In this band gap domain, no incoming wave is allowed to pass through the perfect pillars, which act as a shielding region for the wave. The origin of the emergence of the PnC band gap was described in previous works [37][38][39]. To understand the nature of the result in Figure 3, a displacement study was simulated to illustrate the attenuation of the incoming wave. A square-ring model with four rows of pillar units, used to study the shielding effect of the band gap region, is presented in Figure 4a. Guided waves were excited by an orthogonal displacement at point A (1 × 10⁻¹⁰ mm in the z-direction) at two incoming frequencies: 800 kHz, within the BnG, and 880 kHz, a frequency above the BnG region.
As can be noted in Figure 4b, at the band gap frequency of 800 kHz, the waves were suppressed inside area B, practically creating a region free of any surface stress. The dark blue and yellow colours in the diagram represent the minimum and maximum mechanical stress on the slab surface. When the frequency was shifted to 880 kHz, above the band gap, the waves could pass through the phononic region undisturbed and the vibration mode could be observed inside the ring-shaped region. In summary, the PnC structure successfully blocked the trespassing of the incoming wave, owing to Bragg scattering of waves within the BnG domain, confirming the numerically simulated BnG in Figure 3. To create passing frequencies within the band gap region for filtering applications, the inner-radius property of the pillar was utilised to act as a frequency divider by introducing corresponding waveguide defects in the perfect PnC structure. Two hollow pillars with r_i = 0.130a and 0.155a were considered, with the aim of producing two separable filtering frequency ranges. Figure 5 summarises the evolution of the two frequency shifts corresponding to the previously mentioned r_i of 0.155a and 0.130a, which do not exist in the native PnC. The dispersion curve and transmission spectra indicate filtering frequencies around 810 kHz and 880 kHz that uniquely represent the 0.155a and 0.130a waveguides, respectively. Although these branches are dispersive due to the interaction with the Lamb waves of the plate, they correspond to WGM phenomena of the hollow cylinder structure [24]. An immediate conclusion can be drawn about the relationship between the void opening and the frequency shifts within the band gap: by tuning r_i from 0.130a to 0.155a, the pass frequency moves to a lower range. The radii were limited to 0.130a and 0.155a to avoid any possible interaction between the waveguide frequency and the neighbouring Lamb wave frequencies close to the lower and upper band gap boundaries. To demonstrate the propagation behaviour of the two waveguides in Figure 5, a simulated system was constructed consisting of both r_i = 0.130a and 0.155a on the same platform. In Figure 6a, the two void radii are marked in green and blue to distinguish r_i = 0.155a (green) from 0.130a (blue) and to initiate the transmission of the waveguide frequencies. The displacement field magnitudes of the guided waves are plotted in Figure 6b,c. The results demonstrate two important observations. First, each defect channel responded specifically to the frequency attributed to it in the transmission result. For instance, at a frequency of 810 kHz for r_i = 0.155a, only the top channels (both linear and L-shaped) show a notable stress intensity, indicating that the waves are well confined within the defect route. Similarly, Figure 6c illustrates a similar stress displacement when the frequency shifts to 800 kHz for r_i = 0.130a. To produce a compact multi-frequency filtering system, similar void defects to those mentioned earlier were used to form a mono-channel defect consisting of alternating hollow radii of r_i = 0.155a and 0.130a surrounded by solid cylinders. The proposed system is illustrated in Figure 7a. Figure 7b shows the displacement profiles for the frequencies f_0.130 = 880 kHz and f_0.155 = 800 kHz, where the void cylinders show alternating stress fields between the two radii r_i = 0.155a and 0.130a corresponding to their pass frequencies.
The void cylinder can be used as a passive switch mechanism to reroute the filtered frequencies in another direction. Since the mono-channel carries two propagating frequencies within the same line, the frequencies can be singled out to follow separate filtering paths. However, the confined waves suffer a diminution of energy as they approach the far end of the waveguide channel. In particular, for the case of 0.130a, the linear channel shows a significant loss of energy intensity along its straight section. Figure 7c shows the transmission spectra of the model structured as in Figure 7a compared with that of Figure 6a. Consistent transmitted and rejected frequencies can be observed between the two structures, suggesting that a bulky 16 × 7 filtering system (as in Figure 6a) can be reduced significantly to a 7 × 7 unit-cell model as illustrated in Figure 7a. The system eliminates the need to construct multi-channel lines with their respective intervening solid cylinders, used to avoid interaction between void channels when filtering several wavelengths. This reduces the whole system to smaller-scale dimensions. Conclusions In conclusion, this study has numerically simulated a mono-channel filtering system consisting of dual pillars arranged alternately between two different hollow radii to selectively filter different frequencies. The band gap, the attenuation, and the transmitting waveguide frequencies, with their respective stress fields, have been simulated based on the finite element method. The proposed models showed a complete band gap from 680 kHz to 880 kHz, and the two narrow passing frequencies, f_0.130 and f_0.155, are each sensitive to the hollow radius assigned to their filtering frequency. The displacement behaviour of the guiding line also shows that the propagating waves are well confined along the defect directions at their transmitting frequencies. The splitting of frequencies from a single-channel waveguide can also be achieved for a possible multiplexing application. The resulting system can minimise the size of the normally practised filtering technique, which consists of several channels of waveguide lines to separate different frequencies, making it an excellent candidate for sensing and communication applications. In addition, the full potential of the system can be further enhanced with filling elements, such as liquid-based constituents, inserted in the void volume. The results of this investigation open up possibilities for many acoustic-based applications such as sensors, filters and resonators.
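The eigenvalue formulation $(K_{uu} - \omega^2 M_{uu})u = 0$ with Floquet periodicity used in the methods can be illustrated on a much simpler system. The sketch below is a toy example of our own (a 1D diatomic mass–spring chain, not the authors' COMSOL model); it already shows how sweeping the Bloch wavenumber over the first Brillouin zone yields branches separated by a band gap.

```python
import numpy as np

def diatomic_dispersion(m1=1.0, m2=2.0, k=1.0, a=1.0, n_k=200):
    """Bloch eigenfrequencies of a 1D diatomic chain (two masses per unit cell).

    For each Bloch wavenumber q we solve the 2x2 generalized eigenproblem
    K(q) v = w^2 M v, a discrete analogue of (K_uu - w^2 M_uu) u = 0 with the
    Floquet condition u(x + a) = u(x) exp(i q a) applied across the cell.
    """
    qs = np.linspace(0.0, np.pi / a, n_k)   # irreducible part of the Brillouin zone
    bands = np.zeros((n_k, 2))
    M = np.diag([m1, m2])
    for i, q in enumerate(qs):
        phase = np.exp(1j * q * a)
        K = np.array([[2 * k,            -k * (1 + np.conj(phase))],
                      [-k * (1 + phase),  2 * k]])
        # Eigenvalues are real because K is Hermitian and M is positive definite.
        w2 = np.sort(np.real(np.linalg.eigvals(np.linalg.inv(M) @ K)))
        bands[i] = np.sqrt(np.clip(w2, 0.0, None))
    return qs, bands

qs, bands = diatomic_dispersion()
gap_bottom, gap_top = bands[:, 0].max(), bands[:, 1].min()
print(f"acoustic branch tops out at {gap_bottom:.3f} rad/s, "
      f"optical branch starts at {gap_top:.3f} rad/s")
```

In the same spirit, the FEM model assembles K_uu and M_uu from the meshed unit cell, applies the Floquet phase on the cell boundaries, and repeats the eigenvalue solve for each wave vector to obtain dispersion diagrams such as those in Figure 3.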
3,568.6
2022-01-24T00:00:00.000
[ "Physics", "Engineering" ]
MiR-367 alleviates inflammatory injury of microglia by promoting M2 polarization via targeting CEBPA MiR-367 was reported to regulate the inflammatory response of microglia. CCAAT/enhancer-binding protein α (C/EBPA) could mediate microglia polarization. In this study, we explored the possible roles of miR-367 and CEBPA in intracerebral hemorrhage (ICH). ICH and normal specimens were obtained from the tissue adjacent to and distant from the hematoma of ICH patients, respectively. Microglia were isolated and identified by immunofluorescence. The isolated microglia were treated with erythrocyte lysate and randomly divided into 8 groups using different transfection reagents. The transfection efficiency of miR-367 was determined by qRT-PCR. The expressions of M1 and M2 microglia markers were detected by Western blotting. The relationship between CEBPA and miR-367 was confirmed by dual luciferase reporter system. Flow cytometry was performed to determine the level of apoptosis in the cells transfected with miR-367 and CEBPA in erythrocyte lysate–treated microglia. We found that miR-367 expression level was downregulated in ICH specimens. The erythrocyte lysate–treated microglia model was successfully established, as decreased miR-367 expression was observed. Overexpression of miR-367 could significantly decrease the expressions of MHC-ІІ, IL-1β, and Bax, reduce the apoptosis rate, and increase the expressions of CD206, Bcl-2, and Arg-1 in erythrocyte lysate–treated microglia. CEBPA was proved to be a direct target of miR-367; CEBPA could inhibit microglia M2 polarization and increase the apoptosis rate. However, in the presence of both CEBPA and miR-367 mimic, the protein and mRNA expressions of CEBPA were decreased, leading to promoted microglia M2 polarization and a decreased apoptosis rate. MiR-367 regulates microglia polarization by targeting CEBPA and is expected to alleviate ICH-induced inflammatory injury. Hui Pei and Qian Peng contributed equally to this work. Introduction Hemorrhagic stroke (also known as intracerebral hemorrhage (ICH)) is the most acute and serious cerebrovascular disease, as it has a quick onset and high mortality and disability rates (Shi et al. 2016; Zhou et al. 2017) and can cause hematoma and secondary pathological processes (Psaila et al. 2009; van Asch et al. 2010; Tatlisumak et al. 2018). ICH causes secondary injury via various pathways, of which the inflammatory response is one of the most pivotal (Hamzei Taj et al. 2016). Therefore, inhibiting the production of proinflammatory mediators is possibly an effective strategy for preventing brain injury after ICH. Microglia play an important role in the inflammatory response (Shi et al. 2016). After ICH, microglia are activated and release inflammatory factors, thereby exacerbating ICH-induced injury (Zhang et al. 2017). Increasing evidence has shown that microglia with different phenotypes can produce either detrimental or beneficial responses depending on specific environmental signals (Boche et al. 2013; Lee et al. 2016; Shi et al. 2016; Zhang et al. 2017). Microglia can be divided into classically activated M1 and alternatively activated M2 according to their surface markers and intracellular cytokines (Boche et al. 2013; Lee et al. 2016; Shi et al. 2016).
Classically activated M1 can increase pro-inflammatory cytokines (e.g., IL-1β, TNF-α, iNOS) and M1 marker major histocompatibility complex class ІІ (MHC-ІІ) and aggravate inflammatory response, while alternatively activated M2 can secrete anti-inflammatory cytokines (e.g., Arg-1, IL-4, IL-13) and M2 marker CD206 and exerts an opposite effect to that of M1 microglia (Miron et al. 2013;Hamzei Taj et al. 2016;Xu et al. 2016;Zhang et al. 2017). Microglia can change its morphology and express MHC-II, allowing them to function as antigen presenting cells that present neuronal debris as antigen to invade T cells (Yanuck 2019). In agreement with microglia activation, profound morphological changes and MHC-II upregulation occurred upon graft-versus-host disease induction (Mathew et al. 2020). M1 transformation was prevented through reducing the release of inflammatory factors of M1 phenotype TNF-α, IL-6, and IL-1β, and increasing the release of cytokines of M2 phenotype, while increasing the expressions of M2 markers (CD206 and Arg-1) in vivo was concomitant with the amelioration of cerebral injury and neurological functions deficits (Han et al. 2018). M1 and M2 can alleviate ICH-induced inflammatory response by mediating microglia M2 polarization (Shi et al. 2016;Lan et al. 2017;Zhou et al. 2017). The natural product pinocembrin could reduce the number of M1 microglia without affecting M2 microglia, inhibit neuroinflammation and protect hemorrhagic brain (Lan et al. 2017). Shi et al. (Shi et al. 2016) also found that sinomenine could reduce ICH-induced inflammation by attenuating M1 microglia and promoting M2 microglia. In addition, regulatory T lymphocytes (Tregs) were proved to accelerate brain recovery after ICH through modulating microglia polarization toward M2 phenotype (Zhou et al. 2017). Promoting microglia M2 polarization via targeting control will possibly become a new direction for the treatment of neurological diseases mediated by inflammation. MicroRNAs (miRNAs) are endogenous small RNAs with 18-25 nucleotides in length (Yu et al. 2014). Some miRNAs can reduce ICH-induced brain injury by targeting different pathways Xu et al. 2017). For example, inhibiting miR-27b could alleviate ICH-induced brain injury by promoting Nrf2/ARE pathway activation (Xu et al. 2017), and miR-233 was proved to reduce inflammatory response via responding to NLRP3 inflammasome after ICH . Recent research showed that CEBPA, which is a transcription factor that mediates the differentiation of pluripotent myeloid progenitor cells into mature granulocytes, could mediate microglia polarization (Yu et al. 2017), and that miR-124 attenuated ICH-induced inflammatory injury by increasing M2-polarized microglia through targeting CEBPA (Yu et al. 2017). In addition, CEBPA was reported to be a target of miR-367 in the growth regulation of glioma cells (He et al. 2018). Previous study indicated that miR-367 could also reduce the inflammatory response of microglia . However, whether miR-367 alleviates ICH-induced inflammatory injury by promoting microglia M2 polarization via CEBPA has not been identified. Thus, in this study, we further explored the potential roles of miR-367 and CEBPA in ICH. Materials and Methods Specimen collection Thirty ICH patients (16 male and 14 female, aged from 38 to 55 y old) treated by craniotomy in the First Affiliated Hospital of Zhengzhou University between Apr. 2018 and Oct. 2018 were enrolled. The patients were confirmed as having ICH by CT scan or MRI. 
Patients with traumatic brain injury, secondary brain hemorrhage due to the use of anticoagulant, cerebral vascular malformation hemorrhage, cancer, or other causes, and patients with severe liver and kidney diseases or lung infection were excluded from the study. ICH specimens and normal specimens (each about 1 mm 3 ) were obtained from perihematoma and the tissue distant from hematoma during hematoma evacuation, respectively. The collected specimens were stored at 4°C, and examined within 24 h. The study was approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University, and informed consent was signed by the participants. Microglia isolation and culture The monocytes THP-1 cells were purchased from the American Type Culture Collection (ATCC, Manassas, VA). Microglia were isolated from the normal specimens and washed with Hank's balanced salt solution (HBSS, Gibco BRL, Waltham, MA) to remove meninges and visible blood vessels. Then the specimens were minced and incubated with 0.25% trypsin-EDTA solution in phosphate-buffered saline (PBS, Sigma-Aldrich, Billerica, MA) at room temperature for 1 h. The suspension was filtered and centrifuged (at room temperature, 300×g, for 5 min) to isolate mixed glia cells. The isolated cells were then plated in a 75-cm 2 culture flask at 2 × 10 7 cells per flask in Dulbecco's modified Eagle's medium (DMEM, Sigma-Aldrich) containing 10% fetal bovine serum (FBS, Sigma-Aldrich) at 37°C with 5% CO 2 . The medium was changed every 3 d. After 12 d, microglia adhered to the bottom were isolated and cultured in the same way as described above. Microglia identification The cells were washed with 3 ml of PBS and treated with 2 ml of 4% paraformaldehyde at room temperature for 15 min. Then, the cells were washed three times with 4 ml of PBS, and incubated in blocking buffer (5% serum, 0.1% Triton X-100, in PBS) for 30 min. Next, the cells were incubated with 200 μl of anti-CD11b antibody (ab133357, 1:100, Abcam, Cambridge, MA) and diamidino-2phenylindole (DAPI) in blocking buffer overnight at 4°C and washed three times with 200 μl of blocking buffer. Finally, the cells were incubated with 200 μl of secondary antibody in blocking buffer in the dark for 30 min at room temperature and washed with PBS three times. Microglias were then observed under a fluorescence microscope. Microglia with a purity of higher than 90% were used for the study. Preparation of erythrocyte lysate Healthy blood samples were collected from healthy adults (8 male and 6 female, aged from 40 to 52 yr old) in the First Affiliated Hospital of Zhengzhou University between Apr. 2018 and Oct. 2018. Informed consents for participation in the scientific research were signed by the participants. Single-cell suspensions of erythrocytes were prepared. One milliliter of red blood cell lysing solution was added into 1 × 10 5 erythrocytes to incubate the cells for 20 min, and then the cells were centrifuged at 2000×g for 10 min. After that, the supernatants were used as erythrocyte lysate. Cell treatment Microglia were collected and seeded into 24well tissue culture plates at a density of 3 × 10 5 cells/well and then incubated with 10 μl of erythrocyte lysate or PBS for 3 d. Cytokine levels in the supernatants were determined by quantitative real-time PCR (qRT-PCR). Transfection Erythrocyte lysate-treated microglia were randomly divided into 8 groups, namely, MC, M, IC, I, MC + vector, MC + CEBPA, M + vector, and M + CEBPA. 
Erythrocyte lysate-treated THP-1 monocytes were randomly divided into 4 groups, namely MC, M, IC, and I. Transfection was performed using Lipofectamine ® 2000 reagent (Thermo Fisher, Carlsbad, CA) following the manufacturer's instructions. In brief, 1 × 10 5 cells were plated in 24-well plates overnight for attachment. The next day, 5 μl of transfection reagent was diluted in 50 μl of Opti-MEM ® medium (Thermo Fisher, Carlsbad, CA), while 14 μg of DNA (miR-367 mimic, inhibitor, mimic control, or inhibitor control) (Dharmacon, Inc. Chicago, IL) was diluted in 700 μl of Opti-MEM ® medium. Then, 150 μl of diluted DNA was added to 150 μl of diluted Lipofectamine ® 2000 reagent and incubated for 5 min at room temperature. Finally, 50 μl of DNA-lipid complex was added to the cells. The sequences of mimic (M), inhibitor (I), mimic control (MC), and inhibitor control (IC) were 3′-UCUCAACGUAUAAUCGUUGUCA-5′, 5′-AGAGUUGCAUAUUAGCAACAGU-5′, 5′-U U U G U A C U A C A C A A A A G U A C U G -3 ′ , and 5′ -CAGUACUUUUGUGUAGUACAAA-3′, respectively. To determine whether CEBPA was involved in miR-367-mediated microglia polarization, mimic control and vector, mimic control and CEBPA, mimic and vector, or mimic and CEBPA was cotransfected into microglia using Lipofectamine ® 2000 reagent. The plasmid vector used for CEBPA transfection was pcDNA3.1. RNA expression level and protein expression level were detected 48 h and 72 h after the transfection. Quantitative real-time PCR Total RNA was isolated using Trizol reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. In brief, 100 mg of collected tissues were triturated in liquid nitrogen, added with 1 ml Trizol reagent, and homogenized using a homogenizer. To determine the expressions of erythrocyte lysate-treated cells (2.5 × 10 5 ), the growth medium was removed, and 1 ml of Trizol reagent was added to lyse the cells, and the lysate was pipetted to be homogenized. The treated tissues and cells were incubated for 5 min to fully dissociate the nucleoprotein complexes. Then, 0.2 ml of chloroform was added to the cells and incubated for 3 min. The samples were centrifuged for 15 min at 12,000×g at 4°C. The mixture was then separated into a lower red phenolchloroform phase, a middle phase, and a colorless upper aqueous phase. Contents in the colorless upper aqueous phase were transferred to a new tube by angling the tube at 45°, and 0.5 ml of isopropanol was then added to the aqueous phase. After incubating for 10 min, total RNA was isolated by centrifugation at 12,000×g at 4°C for 10 min. Reverse transcription was performed using M-MLV Reverse Transcriptase System (Promega, Madison, WI). Quantitative real-time PCR with Light Cycler (Roche Diagnostics, Mannheim, Germany) and SYBR Green I in SYBRRT-PCR Kit (TaKaRa Biotechnology, Dalian, China) were used to detect mRNA expression. GAPDH served as an internal RNA control for Bcl-2, Bax, and CEBPA. Primers were purchased from Bio Asia Corp. (Shanghai, China), and the sequences were as follows: The Bcl-2 forward: 5′-TTCTTTGAGTTCGGTGGGGTC-3′ and reverse: 5′-TGCATATTTGTTTGGGGCAGG-3′; The Bax forward: 5′-TCCACCAAGAAGCTGAGCGAG-3′ and reverse: 5′-GTCCAGCCCATGATGGTTCT-3′; The CEBPA forward: 5′-GCGGGAACGCAACAACATC-3′ and reverse: 5′-GTCACTGGTCAACTCCAGCAC-3′; The GAPHD forward: 5′-CATGGTCTACATGTTCCAGT-3′ and reverse: 5′-GGCTAAGCAGTTGGTGGTGC-3′. MiR-367 was detected using a miRNA RT kit (ABI) and Taq Man Universal PCR Master Mix (ABI) according to the manufacturer's instructions. 
U6 served as an internal control for miR-367. Primers were purchased from Bio Asia Corp. (Shanghai, China) and the sequences were as follows: MiR-367 forward: 5′-ACTG TTGCTAATATGCAACTC-3′ and reverse: 5′-GAAC ATGTCTGCGTATCTC-3′; U6 forward: 5′-AGAG AAGATTAGCATGGCCCCTG-3′ and reverse: 5′-ATCC AGTGCAGGGTCCGAGG-3′. QRT-PCR reactions were performed under the following conditions: at 50°C for 5 min, at 94°C for 30 s, 40 cycles at 94°C for 5 s, and at 60°C for 30 s. Threshold cycle value (CT) was calculated by the ΔΔ CT method, and the data were analyzed using Light Cycler Software 4.0 (Roche Diagnostics). The experiments were carried out in triplicate. Flow cytometry assay of apoptosis Apoptotic cells were quantified by Annexin V-FITC-propidium iodide (PI) double staining using an Annexin V-FITC apoptosis detection kit. The cells were washed twice with PBS and diluted to a density of 1 × 10 6 cells/ml. Ten microliters of Annexin V-FITC and 10 μl of PI (20 μg/ml) were added into 100 μl of suspensions and incubated for at least 20 min at room temperature in the dark. Four hundred microliters of PBS binding buffer was then added to each tube. The cells were analyzed using FCM analysis (BD Biosciences Clontech) and CellQuest Pro software version 5.1. Dual-luciferase reporter gene assay The cells at 70-80% confluence were co-transfected in 24-well plates. Then, 0.3 μg of reporter gene plasmid, 0.02 μg of internal control vector pGL4.74 [hRluc/TK] vector (Promega, Fitchburg, WI), 1 μl of transfection agent, and 0.2 μg of expression vector orsi-RNA were mixed together. Fortyeight hours after the transfection, dual-luciferase reporter assay was performed according to the manufacturer's instructions. The cells were lysed using 1× reporter lysis buffer and harvested. Luminescence was detected by Mithras LB 940 (Berthold Technologies, Oak Ridge, TN). The firefly luciferase activity of the reporter gene plasmid was measured as 1 (m1), while the renilla luciferase activity (internal control) of pGL4.74 [hRluc/TK] vector was measured as 2 (m2). The relative luciferase activity was calculated by the ratio of m1/m2. Statistical analysis The data were analyzed using SPSS version 16.0 (SPSS Inc., Chicago, IL) and shown as means ± standard errors of the means. Statistical analysis was performed by one-way analysis of variance (ANOVA), followed by Dunnett's post hoc test. p < 0.05 was considered statistically significant. Results MiR-367 expression was downregulated in perihematoma of ICH patients QRT-PCR was performed to detect the expression level of miR-367 in ICH specimens and normal specimens. As shown in Fig. 1a, compared with normal specimens, the level of miR-367 in ICH specimens was reduced significantly (p< 0.01). Microglia identification Microglia were isolated to establish the ICH cell culture model. Immunocytochemistry was performed using CD11b as a marker of microglia and DAPI as a marker of cell nucleus to identify isolated microglia. The mergence showed that microglia and cell nuclei could perfectly correspond to each other (Fig. 1B), suggesting that the culture isolated was pure microglia. As some cells were in different division stages with different brightness, so some cells which were brighter than the others, which might be a limitation. Erythrocyte lysate-treated microglias were established Erythrocyte lysate-treated microglias were established (Yu et al. 2017) to verify the specific change of miR-367 level. 
First, we found that erythrocyte lysate significantly increased the protein expression of M1 microglia markers (MHC-II and IL-1β) and reduced that of M2 microglia markers (CD206 and Arg-1) (Fig. 1c, d). Then, the levels of miRNAs were detected by qRT-PCR. Compared with the PBS-treated group, the level of miR-367 in erythrocyte lysate-treated microglia was significantly decreased, the level of miR-124 slightly decreased, and miR-155 slightly increased, while the levels of miR-146a and miR-223 changed little (Fig. 1e). The change in miR-367 was the largest among the detected miRNAs and was consistent with the results in ICH patients, suggesting that erythrocyte lysate-treated microglia were successfully established. [Figure 1. The effects of ICH, erythrocyte lysate, and up/downregulation of miR-367 on the expression level of miR-367. (a) The change of miR-367 expression in perihematoma of patients with ICH was determined by qRT-PCR (*vs. normal; **p < 0.01). (b) Immunofluorescence of isolated microglia probing for CD11b in the green channel and DAPI in the blue channel. (c, d) The protein levels of M1 microglia markers (MHC-II and IL-1β) and M2 microglia markers (CD206 and Arg-1) were detected by Western blotting (*vs. PBS; **p < 0.01). (e) qRT-PCR for miRNAs from microglia treated with erythrocyte lysate or PBS for 3 d (*vs. PBS; **p < 0.01). (f) qPCR was used to detect the effects of mimic and inhibitor on the expression level of miR-367 (*vs. MC; #vs. IC; **/##p < 0.01). (g, h) The protein levels of M1 microglia markers (MHC-II and IL-1β) and M2 microglia markers (CD206 and Arg-1) were detected by Western blotting. (i) qPCR was used to detect the effects of mimic and inhibitor on the expression level of miR-367 in THP-1 cells (*vs. MC; #vs. IC; **/##p < 0.01). (j, k) The protein levels of M1 microglia markers (MHC-II and IL-1β) and M2 microglia markers (CD206 and Arg-1) were detected by Western blotting in THP-1 cells. Data are shown as mean ± SD from three independent experiments and analyzed by Dunnett's t test. ICH, intracerebral hemorrhage; qRT-PCR, quantitative real-time polymerase chain reaction; DAPI, 4′,6-diamidino-2-phenylindole; PBS, phosphate-buffered solution; MC, mimic control; M, mimic; IC, inhibitor control; I, inhibitor; SD, standard deviation.] Moreover, miR-367 mimic also upregulated the level of Bcl-2, downregulated the level of Bax, and reduced the apoptosis rate (Fig. 2a, b), while miR-367 inhibitor exerted the opposite effect in erythrocyte lysate-treated microglia. CEBPA was a direct target of miR-367 in microglia TargetScan 7.2 (Fig. 3a) predicted CEBPA as a potential target of miR-367, and the direct relationship between CEBPA and miR-367 was further confirmed with a dual-luciferase reporter system. We found that co-transfection with miR-367 mimic and wild-type CEBPA 3′-UTR greatly suppressed luciferase expression, whereas co-transfection with miR-367 mimic and mutated CEBPA 3′-UTR did not affect luciferase expression (Fig. 3b). These results demonstrated that CEBPA can be suppressed by miR-367. MiR-367 mimic downregulated CEBPA expression in erythrocyte lysate-treated microglia To explore the effect of miR-367 mimic on the expression of CEBPA in erythrocyte lysate-treated microglia, Western blotting and qRT-PCR assays were performed to detect the protein and mRNA expression levels of CEBPA.
We found that transfection with miR-367 mimic downregulated the mRNA and protein expression levels of CEBPA (Fig. 3c-e). MiR-367 mimic promoted microglia M2 polarization and reduced apoptosis through CEBPA in erythrocyte lysate-treated microglia To determine the effects of CEBPA on microglia treated with miR-367 mimic, Western blotting and flow cytometry assays were performed to assess protein expression and cell apoptosis. Over-expression of miR-367 significantly promoted M2 polarization in erythrocyte lysate-treated microglia, while co-transfection with CEBPA produced no effect (Fig. 3f, g). Additionally, CEBPA inhibited the reduction in apoptosis rate caused by miR-367 (Fig. 4). These data demonstrated that over-expression of miR-367 promoted microglia M2 polarization and reduced apoptosis via downregulating CEBPA. Discussion ICH is one of the most acute and serious types of stroke. However, there is a lack of specific treatment for ICH (Hamzei Taj et al. 2016). Intensive blood pressure reduction and hemostatic therapy could possibly attenuate ICH-induced brain injury (Wartenberg and Mayer 2015; Morotti et al. 2017). Evacuation of hematoma by surgical treatment is another approach to alleviate ICH injury (Mendelow 2015; Rennert et al. 2015). Moreover, as secondary injury also contributes to brain injury after ICH (Lee et al. 2016), reducing secondary injury is equally important in treating ICH. Studies have shown that the process of nerve cell injury after ICH is affected by secondary ischemia in peripheral brain tissue, thrombin release, hemoglobin toxicity, inflammatory response, and apoptosis, and among these factors, inflammatory response is one of the most important (Zhang et al. 2017). Numerous reports indicate that the reduction of inflammatory injury has a positive effect on the prognosis of ICH (Lee et al. 2016; Yu et al. 2017; Zhang et al. 2017). MicroRNAs (miRNAs) are endogenous small RNAs 18-25 nucleotides in length (Yu et al. 2014), and they can regulate microglia polarization via various pathways. MiR-155 is an M1-related miRNA, and it was reported that miR-155 could affect the interleukin 13-dependent regulation of several genes (SOCS1, DC-SIGN, CCL18, CD23, and SERPINE), thereby affecting the establishment of an M2 phenotype in macrophages (Martinez-Nunez et al. 2011). MiR-146 acts as an anti-inflammatory miRNA that targets TLR4 signaling, thereby suppressing iNOS and promoting M2 polarization (Vergadi et al. 2014). MicroRNA-223 regulates inflammation and brain injury via feedback to the NLRP3 inflammasome after intracerebral hemorrhage. MiR-124 was shown to ameliorate ICH-induced inflammatory injury by promoting microglia M2 polarization through CEBPA (Yu et al. 2017). CEBPA has been identified as a tumor-inhibiting factor and is related to various types of cancers (Lourenco and Coffer 2017; Voutila et al. 2017). Upregulation of CEBPA might reduce key pro-inflammatory cytokines, thus showing an anti-inflammatory potential (Freire and Conneely 2018; Zhou et al. 2019). It was also reported that CEBPA mediates microglia polarization and could be beneficial in reducing ICH-induced inflammatory injury [13]. Recent research indicated that miR-367 could downregulate the inflammatory response of microglia. Similarly, we found in our study that miR-367 expression was decreased in the perihematoma of patients with ICH. MiR-367 belongs to the miR-92a family and has been reported to be barely detectable in macrophages (Lai et al. 2013).
To investigate the mechanisms of miR-367 and CEBPA in ICH, a cell culture model of erythrocyte lysate-treated microglia was successfully established by erythrocyte lysate treatment, which was confirmed by the increase in M1 microglia markers (MHC-II and IL-1β) and the decrease in M2 microglia markers (CD206 and Arg-1). Representative miRNAs related to M1/M2 polarization (miR-367, miR-124, miR-155, miR-146a, miR-223) were detected, and the miR-367 level was found to be significantly decreased by erythrocyte lysate treatment. We then found that upregulating miR-367 significantly decreased the protein expression of M1 microglia markers (MHC-II and IL-1β) and increased that of M2 microglia markers (CD206 and Arg-1) in erythrocyte lysate-treated microglia and monocytes. Cell apoptosis is a common mechanism in many biological activities (Zhang et al. 2019). Moreover, we found that miR-367 mimic could also upregulate Bcl-2, downregulate Bax, and reduce the apoptosis rate, whereas miR-367 inhibitor exerted the opposite effect. CEBPA was predicted as a direct target of miR-367 in microglia, and it could promote microglia M1 polarization (MHC-II and IL-1β), inhibit microglia M2 polarization (CD206 and Arg-1), and increase the apoptosis rate. The effect of CEBPA on inflammatory factors in our study was opposite to previous observations (Freire and Conneely 2018), which might reflect different mechanisms. However, when CEBPA and miR-367 co-existed, the protein and mRNA expression of CEBPA was decreased, resulting in a reduction of microglia M1 polarization, an increase of microglia M2 polarization, and a lower apoptosis rate. MHC-II, IL-1β, CD206, and Arg-1 studied in our research are representative markers that reflect microglia M1 and M2 polarization types. More specific molecular markers for microglia M1 and M2 polarization could be studied in the future. In addition, more in vivo experiments are needed in future studies to further verify the results. Besides, we did not study humoral molecules from erythrocytes or the molecular interaction between humoral molecules and microglia, nor did we apply monocytes or macrophages to compare with the effect of miR-367 on microglia, or compare miR-367 with other miRNAs. These studies will be conducted in the future. In conclusion, our findings demonstrated that CEBPA aggravated inflammatory injury caused by erythrocyte lysate, while miR-367 could attenuate the injury by promoting microglia M2 polarization via downregulation of CEBPA. The results of our study provide a new feasible strategy for alleviating secondary injury in ICH. Funding This work was supported by the Innovation Foundation of Youth in the First Affiliated Hospital of Zhengzhou University. We thank them for the financial support. Compliance with ethical standards Conflict of interest The authors declare that they have no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
5,906.6
2020-11-04T00:00:00.000
[ "Medicine", "Biology" ]
A New Mining and Protection Method Based on Sensitive Data The traditional method of sensitive data identification for data streams requires a large amount of calculation, does not reflect the impact of time on the value of the data, and has low mining accuracy. In view of the above problems, we first adopt a sliding window mechanism to divide the data flow according to time and delay the dataset according to the characteristics of the data flow in the sliding window, in order to save time and space. At the same time, threshold sensitivity analysis is used to find the optimal threshold. Finally, a K-anonymity algorithm based on a dynamic rounding function is employed to protect the sensitive data. Theoretical analysis and experimental results show that the algorithm can effectively mine the sensitive data in the data stream and can effectively protect the sensitive data. Introduction With the rapid development of network technology, Internet platforms such as search engines, social networks, and e-commerce have generated a large amount of data while serving users. We are now entering the era of big data, in which data grows explosively. People are paying more and more attention to the protection of personal information, and data has become one of the most valuable assets. This has led to the mining and protection of sensitive information, that is, obtaining important information about users through the mining of very large amounts of data. However, mining sensitive data can also lead to privacy leakage. Therefore, many researchers have begun to focus on sensitive data mining and protection. Baidu Encyclopedia defines sensitive information as follows: information that, if used for improper behavior or released or modified by others without the consent of the parties, would be unfavorable to national interests or government plans, or to the privacy rights enjoyed by individuals, including personal privacy information, business management information, financial information, personnel information, and IT operation and maintenance information. Among these, the data stream has strong temporal characteristics, and there is also the risk of sensitive information being tampered with or eavesdropped. However, expired stream data tends to be less valuable. The identification of sensitive data based on text content is a typical application of data mining. The method proposed in [1] is based on threshold self-learning technology to improve computing efficiency. Massive text clustering and topic extraction based on sensitive data can obtain accurate sensitive information, but it is not suitable for mining sensitive information in social networks [2]. There are various methods for mining sensitive data in social networks, such as ensuring close privacy while publishing sensitive data [3], analyzing sensitive data transmission in Android and leak detection for privacy [4], and a multipart-support sensitive data mining algorithm [5]. Although the above methods can efficiently mine sensitive data, they ignore the most important temporal characteristics of data flow. Based on this, Li Haifeng et al. proposed the FIMoTS algorithm in 2012 [6], Qi Xiangxia et al. [4] presented the FIUT-Stream algorithm in 2013, and Yin Shaohong et al. proposed the SWM-MFI algorithm in 2015 [7]. These algorithms are sensitive data mining algorithms over time-based data streams and are more in line with the characteristics of data flow in today's social networks.
This paper summarizes the advantages of the above algorithms and proposes a threshold self-learning algorithm based on the sliding window, which can ensure the shortest mining time while still mining accurate information. The protection of sensitive data aims to prevent data from being leaked while ensuring the usefulness of the data [8,9]. In order to protect the user's personal information, the traditional technique is to delete the user's sensitive data before the data is released. However, it has been found in practice that this method does not protect user information well, and it makes data recovery difficult, which destroys the usefulness of the data. In view of the above problems, this paper mainly uses the following steps to mine and protect the data. Firstly, we segment the data stream and extract sensitive words. Then we divide the sliding window to mine sensitive data. Next, we find the optimal threshold. Finally, we use the DIDF algorithm to implement the protection of sensitive data. Related Work 2.1. Sensitive Data Mining. The sensitive data recognition method based on text content mentioned in [1] judges sensitive information by simple feature selection and extraction of text content together with a threshold determination method based on learning. The advantage is that the threshold determined by self-learning can maximize the accuracy of data extraction, but comparing threshold effects generates a large number of calculations, which greatly reduces mining efficiency. The FIMoTS algorithm mentioned in [6] is more in line with the characteristics of data flow in today's social networks, as it emphasizes the influence of time on the value of data. Based on the sliding window processing method, with the time period as the processing unit, it increases computational efficiency. However, the selection of the threshold value during the process is arbitrary, and it is difficult to ensure the reliability of the mining result only through user customization. The mining algorithm used in this paper combines the methods of [1] and the FIMoTS algorithm of [6]. First, the FIMoTS algorithm is briefly introduced as follows. The algorithm mainly uses an enumeration tree as the data structure to save data. Firstly, the enumeration tree is initialized according to the initial sliding window dataset and absolute support; then the algorithm uses the arrival and departure times of data to mine sensitive data and prunes the enumeration tree. Finally, the algorithm sets upper and lower bounds on data changes to improve mining efficiency. For a sliding window SW with |SW| = N, given an itemset X, the relative support of X is defined in fractional form as sup(X) = n_X / N (N ≥ n_X), where N represents the total number of data items contained in the sliding window and n_X indicates the number of times the itemset X appears in the sliding window SW. The minimum support threshold is defined as s = m / N (N ≥ m), where the minimum support m is set by the user; an itemset whose relative support is greater than the minimum support threshold is sensitive data. An enumeration tree is used to store the data. In the enumeration tree, the itemset of a parent node is a subset of the itemsets of its child nodes. When a child node of a sensitive-data node is non-sensitive data, that node is set as a leaf node. The enumeration tree uses a triple <X, sup(X), t> to represent each node's information, where X represents the itemset, sup(X) represents the relative support of the itemset, and t represents the update time of the node.
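To make the enumeration-tree bookkeeping described above more concrete, here is a minimal Python sketch of a node holding the <itemset, relative support, update time> triple together with the sensitivity test against the minimum support threshold; the class name, fields, and example values are illustrative assumptions rather than the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass
class ETNode:
    """One enumeration-tree node: <itemset, relative support, update time>."""
    itemset: FrozenSet[str]        # the data item(s) represented by this node
    count: int = 0                 # occurrences of the itemset in the current window
    updated_at: int = 0            # time of the last update to this node
    children: List["ETNode"] = field(default_factory=list)  # supersets of this itemset

    def relative_support(self, window_size: int) -> float:
        return self.count / window_size

    def is_sensitive(self, window_size: int, min_support: float) -> bool:
        # An itemset is treated as sensitive data when its relative support
        # exceeds the user-defined minimum support threshold.
        return self.relative_support(window_size) > min_support

# Example: a node seen 3 times in a window of 28 transactions, threshold 1/13.
node = ETNode(itemset=frozenset({"password"}), count=3, updated_at=7)
print(node.is_sensitive(window_size=28, min_support=1 / 13))  # True, since 3/28 > 1/13
```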
This paper improves the FIMoTS algorithm in the process of mining sensitive data. As in [1], the dataset is first processed by word segmentation. Then a frequent itemset mining algorithm based on the sliding window is used, the threshold is changed dynamically, and finally the optimal threshold is obtained. We therefore rename the algorithm the threshold self-learning sensitive data mining algorithm, namely SL-SDMA. Sensitive Data Protection. With the continuous advancement of mining technology, it is becoming easier for people to obtain sensitive data, and personal privacy is seriously threatened. Therefore, how to effectively protect the mined sensitive data becomes another important research area. Current methods for protecting sensitive data include the privacy protection method based on K-anonymity [10], anonymized privacy protection technology based on clustering [11], and differential privacy protection [12]. Although the algorithm of [11] reduces the risk of privacy leakage to a certain extent, the proposed K-anonymity model cannot solve the problems of homogeneity attacks and background-knowledge attacks. The method in [12] can deal with attackers with arbitrary background knowledge and improve the usability of data clustering. However, this method cannot solve the privacy leakage problem in distributed environments. In view of the shortcomings of the above methods and the characteristics of social network datasets, this paper proposes an optimization of the K-anonymity algorithm based on the rounding partition function [10]. Dynamically changing the processed dataset can fully reflect the time characteristics of the data flow, and K-anonymity is the earliest proposed privacy protection mechanism for sensitive data. After years of research, the technology has matured, and it is simple to operate and highly practical. Dataset Preprocessing. The dataset studied in this paper mainly comes from online reviews. The dataset D consists of multiple online reviews. Let D = {d_1, d_2, ..., d_n}, where d_i denotes the i-th online review. We first segment the online reviews into words using the THULAC word segmentation system developed by Tsinghua University's Natural Language Processing and Social Humanities Computing Laboratory, which also identifies the part of speech of each word as it is segmented, such as nouns, person names, and verbs. Let the phrase set after lexical analysis be P_i = {<w_1, c_1>, <w_2, c_2>, ..., <w_m, c_m>}, where P_i denotes the phrase set after word segmentation of the i-th review, w_j denotes the j-th phrase obtained from the lexical analysis of the i-th review, and c_j denotes the part of speech of the phrase w_j. The phrases are then analyzed for words and word frequencies to obtain a new set F_i = {f_1, f_2, ..., f_m}, where f_j denotes the resulting phrase. One set is used to save sensitive data; a single sensitive data entry consists of a pair and two datasets, which represent changes in the upper and lower bounds of sensitive data. Another set is used to save non-sensitive data; a single non-sensitive data entry consists of a pair and two datasets, indicating non-sensitive data type changes. Upper and lower bounds: an initialization function is used to initialize the enumeration tree, and the FIMoTS algorithm implements the mining of sensitive data. The specific algorithm steps are shown in Algorithm 1 (see also the sliding-window sketch below).
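Under simplifying assumptions, the following Python sketch shows how time-stamped, already-segmented reviews could be grouped into time-based sliding windows and how per-window phrase frequencies (relative supports) could be computed; the function names, window length, and example records are hypothetical and are not taken from Algorithm 1.

```python
from collections import Counter
from typing import Dict, List, Tuple

# Each record: (timestamp in seconds, list of segmented phrases from one review).
Record = Tuple[int, List[str]]

def windows_by_time(records: List[Record], window_len: int) -> Dict[int, List[Record]]:
    """Group records into consecutive time windows of length window_len."""
    grouped: Dict[int, List[Record]] = {}
    for ts, phrases in records:
        grouped.setdefault(ts // window_len, []).append((ts, phrases))
    return grouped

def relative_supports(window: List[Record]) -> Dict[str, float]:
    """Relative support of each phrase = occurrences / number of reviews in the window."""
    counts = Counter(p for _, phrases in window for p in set(phrases))
    n = len(window)
    return {phrase: c / n for phrase, c in counts.items()}

# Example with made-up reviews; phrases above the threshold are candidate sensitive data.
records = [(10, ["phone", "address"]), (25, ["address"]), (70, ["price"])]
for wid, win in windows_by_time(records, window_len=60).items():
    sensitive = {p: s for p, s in relative_supports(win).items() if s > 1 / 13}
    print(wid, sensitive)
```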
Sensitive Data Protection: the DIDF Algorithm. This paper optimizes the K-anonymity algorithm of [10], known as the Flexible Partition algorithm, which is based on a rounding partition function and regards time as an important attribute. By dynamically changing the dataset, the approach preserves the real-time character of the data and keeps the data of different time periods relatively independent; the Flexible Partition algorithm is then applied to the dataset of each time period to obtain the maximal number of anonymous groups. The Flexible Partition algorithm is briefly described below. Assume that table T contains n = w × k + v records, where k is the K-anonymity parameter, w is a positive integer, and v is a positive integer smaller than k; theoretically, T can be partitioned into at most w anonymous groups of size at least k. Any anonymous group of T whose size has the form a × k + b (with b a positive integer smaller than k) can in turn be split by the two-division method into two smaller anonymous groups. More generally, a large dataset of size n = w × k + v can be divided into two anonymized groups of sizes n_1 = w_1 × k + v_1 and n_2 = w_2 × k + v_2, where v_1 + v_2 ≤ k, and ideally the equation v_1 + v_2 = v is satisfied. Based on this analysis, the rounding partition function rounds one of the two group sizes up and the other down. This paper takes time as an important factor, divides the dataset by time period, dynamically changes the processed dataset, and also considers how to handle data on the boundary. The proposed method keeps the datasets of different time periods relatively independent while making the protection of sensitive data more accurate and easy to operate. We rename the K-anonymity algorithm based on the dynamic rounding function the DIDF algorithm; the specific algorithm steps are shown in Algorithm 2. The DIDF algorithm always tries to split the dataset of a single time period into as many anonymous groups as possible and has clear advantages in processing data on the boundary, which fully reflects the time characteristics of the data flow in a social network. For example, when k = 2 and the dataset size is |D| = 5, we have |D| = 2 × 2 + 1, so after the algorithm operation two anonymous groups are obtained (see the partition sketch below). Comparison of Experimental Results The experiment was run on a PC with a 1.90 GHz Core i3 processor, 44 GB of memory, and a Windows 8.1 operating system. The lexical analysis was implemented in the Python programming language, and the SL-SDMA and DIDF algorithms were implemented in C# on Tongcheng datasets originating from online user reviews. Sensitive Data Mining. The dataset selected for data mining uses two kinds of tourist reviews from the Tongcheng tourism website as the experimental dataset. The experiment records the total data length, the longest data item length, the shortest data item length, and the running time. By modifying the threshold multiple times, the relationship between sensitive data mining time and threshold is finally determined, as shown in Figure 2.
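As a rough illustration of the partition idea described above, this Python sketch splits n = w × k + v records into as many groups as possible with every group of size at least k, spreading the v leftover records over the groups; it is a simplified stand-in written for this text, not the paper's Flexible Partition or DIDF implementation.

```python
from typing import List, Sequence

def k_anonymous_group_sizes(n: int, k: int) -> List[int]:
    """Split n records into the maximum number of groups, each of size >= k."""
    if n < k:
        return []            # too few records to form even one k-anonymous group
    w, v = divmod(n, k)      # n = w * k + v with 0 <= v < k
    sizes = [k] * w
    for i in range(v):       # spread the v leftover records over existing groups
        sizes[i % w] += 1
    return sizes

def partition(records: Sequence, k: int) -> List[List]:
    """Materialize the groups for a concrete list of records."""
    groups, start = [], 0
    for size in k_anonymous_group_sizes(len(records), k):
        groups.append(list(records[start:start + size]))
        start += size
    return groups

# Example matching the text: k = 2 and 5 records give two groups (sizes 3 and 2).
print(k_anonymous_group_sizes(5, 2))            # [3, 2]
print(partition(["r1", "r2", "r3", "r4", "r5"], 2))
```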
Figures 2(a) and 2(b) show the relationship between the threshold and the sensitive data recognition rate. The plots show that when the threshold is in the range [1/14, 1/12], the sensitive data recognition rate is highest, up to 100%. As the threshold increases into [1/11, 1/9], the standard for extracting sensitive data rises and complete sensitive data cannot be obtained, which lowers the recognition rate of sensitive data; as the threshold decreases into [1/18, 1/15], the standard for extracting sensitive data is relaxed and more redundant data is obtained, which also reduces the recognition rate of sensitive data. Figures 2(c) and 2(d) show the relationship between the threshold and the extraction time of sensitive data. As can be seen from the figure, the blue dashed lines divide the red threshold-time curve into roughly three parts, corresponding to the three ranges of change in the recognition rate of sensitive data in Figures 2(a) and 2(b). First, the running time changes with the increase of the threshold value; however, in this range the sensitive data recognition rate is not the highest, and the complete sensitive data cannot be obtained. Second, the points at the upper left correspond to thresholds in the range [1/18, 1/15]. The sensitive data recognition rate is 100%, and it is not difficult to see that when the threshold value is 1/13 the running time is the least, that is, the optimal threshold point. Third, the intermediate threshold values are in the range [1/14, 1/12]. In this range the running time decreases as the threshold decreases, but the recognition rate of the sensitive data obtained is not the highest, and there is more redundant data. Therefore, when the dataset size is 28,900, a threshold value of 1/13 not only guarantees the highest recognition rate of sensitive data but also guarantees the shortest running time, and it is the optimal threshold point in this paper. The experiment in Figure 3 shows the mining time of the FIMoTS, SWM-FI, FIUT-Stream, and SL-SDMA algorithms. The time complexities of these four algorithms are of the forms O(n³/2), O(n² log n), O(n² + c₁), and O(n² + c₂), respectively, where c₁ and c₂ are constants. Through the sensitive data mining experiments on the above two datasets, the running times shown in Figure 3 were obtained. SWM-FI and FIUT-Stream also mine sensitive data based on a sliding window, but they divide sliding windows according to transaction size and do not fully consider the impact of time on the datasets in social networks. Furthermore, they use a matrix to store the data, which wastes a lot of space. The SL-SDMA algorithm proposed in this paper uses the enumeration-tree storage structure to save data. Because our algorithm stops the judgement when a parent node is non-sensitive data, it not only saves time but also avoids wasted space. In addition, the sliding window in our algorithm is divided according to time, which fully reflects the time characteristics of the data stream, so the time efficiency of SL-SDMA is higher. Sensitive Data Protection. The size of the dataset selected for the protection of sensitive data is 28,900. The dataset includes the name of the visitor, the content of the comment, and the time of the comment. The change of the k value, the number of anonymous groups obtained, and the time consumed are used to demonstrate the feasibility of the algorithm.
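The threshold selection just described (among thresholds reaching the highest recognition rate, take the one with the lowest running time) can be written compactly; the Python sketch below uses made-up measurements and only illustrates that selection rule, not the paper's threshold self-learning procedure.

```python
from typing import Dict, Tuple

# Hypothetical measurements: threshold -> (recognition rate, running time in seconds).
measurements: Dict[float, Tuple[float, float]] = {
    1 / 15: (0.93, 42.0),   # below the optimal range: redundant data lowers the rate
    1 / 14: (1.00, 39.5),
    1 / 13: (1.00, 36.8),   # full recognition with the shortest running time
    1 / 12: (1.00, 37.5),
    1 / 10: (0.90, 33.0),   # above the optimal range: incomplete sensitive data
}

def optimal_threshold(results: Dict[float, Tuple[float, float]]) -> float:
    """Among thresholds reaching the maximum recognition rate, return the fastest one."""
    best_rate = max(rate for rate, _ in results.values())
    fastest = {t: time for t, (rate, time) in results.items() if rate == best_rate}
    return min(fastest, key=fastest.get)

print(optimal_threshold(measurements))  # ≈ 0.0769, i.e. 1/13 in this toy example
```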
The experimental comparison shows that the DIDF algorithm achieves the protection of sensitive data by dynamically acquiring the processed dataset. It not only ensures the independence of the data in each time period but is also more conducive to the centralized protection of data. The experimental results are similar to those of the Flexible Partition algorithm, as shown in Figure 4. That is, it is possible to obtain as many anonymous groups as possible, and the running time of the algorithm is also acceptable. Conclusion This paper first uses the NLP lexical package THULAC to preprocess the dataset. Then, according to the temporal characteristics of the data stream, a sliding-window-based sensitive data mining algorithm is proposed, which treats time as the most important attribute and adopts the enumeration tree as its data structure for storing the calculation results. By defining upper and lower bounds on the data item type, the enumeration tree and the data collection information are updated only when the relative support reaches the upper or lower bound of a type change, thereby saving calculation time. Finally, the threshold self-learning function is used to determine the threshold that minimizes the time spent while ensuring the accuracy of the mined data. This method can determine the optimal threshold for a given dataset, thereby improving experimental efficiency. In the protection of sensitive data, using the DIDF algorithm to dynamically change the processed dataset not only guarantees the independence of the dataset in each time period but also always obtains the maximum number of anonymous groups. Experiments show that the above methods can significantly improve computational efficiency while ensuring the accuracy of the experimental results in the mining and protection of sensitive data, and they take the time characteristics of the dataset into account, which gives them strong operability and feasibility. We intend to explore several directions in future work, including extending the algorithm to deal with frequent pattern mining on data streams in a distributed environment. Furthermore, because sensitive data mining may lead to personal privacy leakage, we intend to add a differential privacy method to our SL-SDMA method.
3.2. Sensitive Data Mining Algorithm. The threshold cannot be changed dynamically in the FIMoTS algorithm, so it is not suitable for mining sensitive data from datasets of various sizes, resulting in low mining efficiency. The SL-SDMA algorithm adopted in this paper changes the threshold value dynamically. After the enumeration tree is initialized, data items are inserted and deleted continuously as the sliding window moves. Following the FIMoTS algorithm, when data items are inserted, the upper bound of an itemset's type change is reduced accordingly, and the corresponding lower bound is adjusted by an amount that depends on whether the itemset is currently sensitive or non-sensitive data; when itemsets are removed, the lower bound of the type change is reduced accordingly, and the corresponding upper bound is adjusted, again with different expressions for sensitive and non-sensitive data. After sensitive data are mined using these type-change upper and lower bounds, the mining time and the redundancy of the sensitive data under different thresholds are compared, and the optimal threshold is finally obtained, which maximizes mining efficiency. Figure 3: Mining time of the experiments on the two datasets. Table 1 lists the data characteristics of the two datasets. The data used to support the findings of this study are available from the corresponding author upon request or at the URL "https://pan.baidu.com/s/1-LEzNrk9YjG8o0hOhi0WWA".
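The lazy-update idea behind the type-change bounds can be illustrated with the following Python sketch, with the caveat that the fixed slack used here is an assumption made for this sketch; FIMoTS derives the actual bounds from support expressions that are not fully recoverable from the text above.

```python
class LazyItem:
    """Re-classify an itemset only when its count leaves the current safe interval.

    Simplified illustration of the type-change upper/lower bound idea: while the
    count stays inside [lower, upper], the sensitive/non-sensitive label (and hence
    the enumeration tree) is not touched.
    """

    def __init__(self, count: int, window_size: int, min_support: float, slack: int = 3):
        self.count = count
        self.window_size = window_size
        self.min_support = min_support
        self.slack = slack                       # assumed fixed slack; FIMoTS derives it
        self.sensitive = self._classify()
        self._reset_bounds()

    def _classify(self) -> bool:
        return self.count / self.window_size > self.min_support

    def _reset_bounds(self) -> None:
        self.lower = max(0, self.count - self.slack)
        self.upper = self.count + self.slack

    def update(self, delta: int) -> bool:
        """Apply an insertion (+) or deletion (-); return True if the label was recomputed."""
        self.count += delta
        if self.lower <= self.count <= self.upper:
            return False                         # still inside the bounds: no tree update
        self.sensitive = self._classify()        # bound crossed: re-classify and reset
        self._reset_bounds()
        return True
```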
4,680.2
2018-11-25T00:00:00.000
[ "Computer Science" ]
Introduction to Himalayan tectonics: a modern synthesis Extract The Himalaya resulted from collision of the Indian plate with Asia and are well known as the highest, youngest and one of the best studied continental collision orogenic belts. They are frequently used as the type example of a continental collision orogenic belt in studies of older Phanerozoic orogenic belts. The beauty of the Himalaya is that, on a broad scale they form a relatively simple orogenic belt. The major structural divisions, the Indus–(Yarlung Tsangpo) suture zone, the Tethyan Himalaya sedimentary units, Greater Himalaya Sequence (GHS) metamorphic rocks, the Lesser Himalaya fold-and-thrust belt and the Sub-Himalaya Siwalik molasse basin are present along the entire 2000 km length of the Himalaya (Figs 1 & 2). Likewise, the major structures, the Indus–Yarlung Tsangpo suture with north-vergent backthrusts, the South Tibetan Detachment (STD) low-angle normal fault, locally called the Zanskar Shear zone in the west, the Main Central Thrust (MCT) zone and the Main Boundary Thrust are all mapped along the entire length of the mountain belt between the western (Nanga Parbat) and eastern (Namche Barwa) syntaxes. Klippen of low-grade or unmetamorphosed sedimentary rocks lie above the GHS high-grade rocks in places (e.g. Chamba klippe in India; Lingshi klippe in Bhutan), and far-travelled klippen of GHS rocks occur in places south of the main MCT and GHS rocks (e.g. Darjeeling klippe). The Himalaya resulted from collision of the Indian plate with Asia and are well known as the highest, youngest and one of the best studied continental collision orogenic belts. They are frequently used as the type example of a continental collision orogenic belt in studies of older Phanerozoic orogenic belts. The beauty of the Himalaya is that, on a broad scale they form a relatively simple orogenic belt. The major structural divisions, the Indus-(Yarlung Tsangpo) suture zone, the Tethyan Himalaya sedimentary units, Greater Himalaya Sequence (GHS) metamorphic rocks, the Lesser Himalaya fold-and-thrust belt and the Sub-Himalaya Siwalik molasse basin are present along the entire 2000 km length of the Himalaya (Figs 1 & 2). Likewise, the major structures, the Indus-Yarlung Tsangpo suture with northvergent backthrusts, the South Tibetan Detachment (STD) low-angle normal fault, locally called the Zanskar Shear zone in the west, the Main Central Thrust (MCT) zone and the Main Boundary Thrust are all mapped along the entire length of the mountain belt between the western (Nanga Parbat) and eastern (Namche Barwa) syntaxes. Klippen of lowgrade or unmetamorphosed sedimentary rocks lie above the GHS high-grade rocks in places (e.g. Chamba klippe in India; Lingshi klippe in Bhutan), and far-travelled klippen of GHS rocks occur in places south of the main MCT and GHS rocks (e.g. Darjeeling klippe). In broad terms the timing of major events shows little variation along the entire mountain range, with Late Cretaceous-Paleocene obduction of ophiolites onto the passive margin of India, Late Paleocene ultra-high-pressure (UHP) metamorphism at Kaghan (northern Pakistan) and Tso Morari (India), Early Eocene final marine sedimentation prior to the closure of Neo-Tethys, and Late Eocene to Early Miocene regional Barrovian-type metamorphism along the GHS (Fig. 3). 
Peak kyanite grade metamorphism (Late Eocene-Oligocene) pre-dates the regional higher-temperature, lower-pressure sillimanite ± cordierite-grade event, which was accompanied by widespread migmatization and mid-crustal melting during the Oligocene-Mid-Miocene. The age of the abundant leucogranite sills and dykes along the top of the GHS, beneath the STD, is concomitant with the sillimanite-grade metamorphic event. The GHS metamorphism is all part of one continuum of crustal thickening and shortening, increasing pressure and temperature following a standard clockwise Pressure-Temperature-Time (PTt) path. Decompression melting peaked with widespread partial melting and formation of migmatites and leucogranites along the highest peaks of the Himalaya. Structural mapping and timing constraints suggest the large-scale southward extrusion of a partially melted layer of mid-crustal rocks (sillimanite grade gneisses and leucogranites) bounded by the STD ductile shear zone with right-way-up metamorphic isograds above, and the MCT ductile shear zone with inverted metamorphic isograds below, during the Oligocene-Early Miocene. This corresponds to the channel flow (or channel tunnelling) model that is now widely accepted for the GHS ductile structures. Brittle folding and thrusting processes characterize the Lesser Himalaya, structurally below the ductile MCT, and corresponds to the critical taper model. The most recent comprehensive reviews of the structure, metamorphism and tectonic evolution of the Himalaya are given by Kohn (2014), Searle (2015) and Goscombe et al. (2018). The relatively straightforward structural and metamorphic geometry, and timing constraints along the main Himalayan range are, however, complicated in the two syntaxis regions, the Nanga Parbat-Haramosh syntaxis in the NW (Pakistan), and the Namche Barwa syntaxis (SE Tibet) in the NE. In both these regions, a younger high-temperature metamorphic overprint on the standard Late Eocene-Miocene Himalayan events is apparent with high-grade sillimanite + cordierite crustal melting occurring in the deep basement, as young as Pliocene or even Pleistocene in age. This young metamorphism may be indicative of active metamorphism that is occurring at depth beneath the Himalaya today in rocks that have not yet been exhumed by thrusting, exhumation and erosion. The relatively straightforward tectonic picture along the main Himalayan range is also complicated in the Pakistan sector, west of Nanga Parbat, where the high-grade kyanite and sillimanite metamorphism has recently been dated as Ordovician, not Himalayan in age (Palin et al. 2018). In Zanskar there is also debate over the timing of the obduction of the Spontang ophiolite onto the Zanskar passive margin sequence, and the relative importance of pre-India-Asia collision folding and thrusting related to the final stages of the obduction, and post-India-Asia collision shortening and thickening. History of research The Himalaya have always been at the forefront of geodetic studies. The Great Trigonometrical Survey, started in 1802 under its founder William Lambton and his successor George Everest, mapped out the Himalayan ranges for the first time. Amongst the many great achievements of the Survey, these surveyors accurately determined the heights of most of the highest peaks of the Himalaya and Karakoram, and measured gravity anomalies that led to the development of the theory of isostasy. 
Richard Oldham joined the Survey of India in 1879 and made the first detailed observation of a large Himalayan earthquake, the Great Assam earthquake of 1879 (Oldham 1917). Oldham first identified on seismograms the arrivals of primary (P-waves), secondary (S-waves) and tertiary surface waves, previously predicted by mathematical theory. The earliest geological and geographical explorations of the Himalaya were made in the late 1800s and early 1900s. In 1907 Colonel S.G. Burrard, Superintendent of the Trigonometrical Survey, and H.H. Hayden, Superintendent of the Geological Survey of India, published four volumes of their classic work A Sketch of the Geography and Geology of the Himalaya Mountains and Tibet. During the late 1800s geologists like Medlicott, Middlemiss and Oldham made significant discoveries in the western Himalaya, and Mallet and von Loczy first discovered the inverted metamorphic gradient in the Darjeeling klippe. The next breakthrough was the publication of Arnold Heim and Augusto Gansser's Central Himalaya: Observations from the Swiss Expedition of 1936 (Heim & Gansser 1939), and Augusto Gansser's classic Geology of the Himalaya, published in 1964. Heim and Gansser discovered the remnant ophiolites of SW Tibet in the Kiogar-Amlang-la Range, laid the foundations for the stratigraphy of the Indian plate and confirmed the inverted nature of metamorphism along the Main Central Thrust. Other great pioneering geologists were D.N. Wadia, who mapped large tracts of the NW Frontier region, J.B. Auden and K.S. Valdiya, who worked along the central Indian Himalaya, Ardito Desio, who led the first successful ascent of K2 and mapped a large tract of the Baltoro Karakoram in 1955, and Rashid Khan Tahirkheli, a heroic Pakistani geologist who mapped large parts of remote Kohistan during the 1970s. This work was continued by the studies of Qasim Jan and Asif Khan and their students from the University of Peshawar. A regional map of the Central Karakoram Mountains covering the Hunza, Hispar, Biafo and Baltoro glacier region at the scale of 1:250,000 was published by Searle (1991) and a large compilation geological map of North Pakistan at scale of 1:650 000 was published by Searle & Khan (1996). Some of the most important early geological mapping in the Indian Himalaya was carried out by K.S. Valdiya and his colleagues from Kumaon University, working mainly in the Garhwal-Kumaon Himalaya, and Vikram Thakur and colleagues from the Wadia Institute of Himalayan Geology, Dehra Dun, working mainly in Himachal Pradesh, Lahoul-Spiti and Ladakh. More advances were made by the field studies of A.K. Jain and Sandeep Singh and their students from the Indian Institute of Technology, Roorkee, and Talat Ahmad and colleagues from the universities of Kashmir and Delhi (Jamia Islamia University). With the opening of Ladakh to foreigners in 1979-80 geologists from Italy, Switzerland, France and the UK were also active in the Indian Himalaya throughout the 1980s and 1990s. In many respects this was the golden age of Himalayan research when vast tracts of geologically unknown mountain ranges were mapped and studied for the first time. Emphasis gradually shifted during the 1990s and 2000s from standard sedimentology, stratigraphy, palaeontology and structural mapping to more detailed metamorphic, thermobarometric and geochronological studies of the Himalaya. 
In Nepal, the great pioneers of geological mapping include the Swiss Toni Hagen, who over 20 years trekked across large tracts of the country (Hagen 1960), Pierre Bordet in the Thakkhola and Nyi-Shang regions (Bordet et al. 1971, 1975) and in the Makalu region (Bordet 1961), and Michel Colchen, Patrick LeFort and Arnaud Pêcher in the Manaslu region (Colchen et al. 1986). Climbers contributed greatly to the early pioneering studies of Mount Everest. Noel Odell was a geologist-mountaineer on the 1924 British Everest expedition and was the last person to see Mallory and Irvine heading up the NNE ridge towards the summit. Odell made many original geological observations on their journey from Darjeeling and Sikkim to the Tibetan side of Everest, and collected many samples. Lawrence Wager made an invaluable collection of rock samples from the north side of Mount Everest during the 1933 British Expedition led by Hugh Ruttledge. Waters et al. (2018) published a detailed metamorphic-thermobarometric analysis of the Wager samples in 2018, more than 80 years after he collected them. A detailed geological map of the Mount Everest region in Nepal and South Tibet at a scale of 1:100 000 was published by Searle (2003), reprinted in 2007 with the addition of the Makalu and Barun glacier region. In Tibet, the great Swedish explorer Sven Hedin made four expeditions to Central Asia spanning 1893-1935, especially the Trans-Himalayan ranges of southern Tibet, and left an astonishingly large collection of rock samples (Hedin 1909, 1917). With the opening of Tibet to foreigners in the late 1970s and early 1980s, several groups, both Chinese and foreign, began the huge task of mapping the vast plateau region. A Royal Society expedition traversed the plateau from Lhasa to Golmud and published the first reconnaissance studies (Chengfa et al. 1988; Shackleton et al. 1988). A large group of French researchers led by Paul Tapponnier made more detailed studies, particularly of the active fault systems, over c. 20 years of geological research across much of the plateau. Large-scale geophysical experiments, notably the four phases of the American- and Chinese-funded INDEPTH seismic profile, coupled with magnetotelluric and heat flow studies, spanned more than 20 years of work, and determined the large-scale structure of the lower crust and mantle across the Tibetan Plateau from the northern flank of the Himalaya north to the Kun Lun. During the 1980s Western geologists started to map and describe the geology of the Himalaya in much more detail. The opening of the Ladakh-Zanskar region to foreigners during the late 1970s opened up this fascinating and remote region to geological research. Sporadic, but ongoing, political problems in Kashmir affected access to some critical areas in the western Himalaya. The first Himalaya-Karakoram-Tibet (HKT) Workshop meeting was convened by Mike Searle at the University of Leicester in 1985, and brought together for the first time a wide range of Asian, European and American geologists. The first talk of the first HKT conference was given by the 'father of Himalayan geology', Augusto Gansser. The meeting was so successful that it was decided to hold an annual HKT meeting, alternating between the Himalayan countries of Pakistan, India, Nepal and China, and Europe or further afield in Canada, the USA, Japan and Hong Kong.
HKT meetings were held in Kathmandu in 1994, in Peshawar, Pakistan in 1998, in Gangtok, Sikkim in 2002, in Leh, Ladakh in 2008, and in Dehra Dun, India in 2015. These HKT conferences continue to this day and the community is thriving (Fig. 4). The seventh HKT meeting, held at Oxford University in April 1992 (convened by Mike Searle and Peter Treloar), led to GSL Special Publication 74, Himalayan Tectonics, published in 1993. Containing 39 papers over some 600 pages, this volume continues to be widely cited today. Most of the ensuing science has been published in peer-reviewed scientific journals. However, a number of thematic volumes have been published that deal with specific aspects of the Himalaya. These include GSL Special Publication 170, Tectonics of the Nanga Parbat Syntaxis and Western Himalaya. Tectonic processes and outstanding problems-Himalaya-Karakoram-Tibet During the last 30 years, the annual HKT Workshop meetings have provided a catalyst for research targeted towards solving problems associated with continental collision processes. Some of the critical aspects that continue to be debated as more and more data are generated include the following: (1) Palaeogeography of India and the northward drift of the Indian plate. (3) Timing of India-Asia collision along the Indus(-Yarlung Tsangpo) suture zone. The precise timing of India-Asia collision has been hotly debated for at least the last 30 years, with proposed ages ranging from c. 65 Ma to as young as 37 Ma. Much depends on exactly how one defines 'collision'. Is it the first meeting of Indian continental crust with Asia, or is it the final disappearance of the Tethyan ocean that once separated the two plates? Various geological factors have been used to define the age of collision, including palaeomagnetism (slow-down of the northward drift of India), the age of UHP rocks along the northern margin of India (Kaghan and Tso Morari eclogites), the final marine sediments within the suture zone, the earliest clasts derived from Asia (Ladakh-Gangdese granites) in the Indus suture molasse deposits, and the ending of calc-alkaline magmatism and volcanism along the south Asian margin (Ladakh-Gangdese batholith). UHP eclogites cannot be used to constrain collision as they are known from areas where continental collision has not yet occurred (e.g. Oman, Papua New Guinea). In these examples subduction of the leading edge of the previously passive continental margin was the final stage of ophiolite obduction processes, not related to continent-continent collision (Searle & Treloar 2010). The most accurate timing of India-Asia collision is given by the precise foraminifera zonation of the final marine sediments along the suture zone, which belong to planktonic foraminifera zone P7-8 in Waziristan, Ladakh and South Tibet, at 50.5 Ma (Green et al. 2008). However, this age records the last marine sedimentation within the suture zone, not necessarily the first meeting of Indian and Asian crust. (4) Evolution of the Kohistan island arc and the Shyok suture zone. The Kohistan-Dras island arc was a large, late Jurassic-Cretaceous intra-oceanic island arc sequence that lay between the Indian and Asian plates (Jagoutz & Schmidt 2012; Kohn 2014; Searle 2015). The youngest ages are recorded from the two syntaxis regions, Nanga Parbat in the west and Namche Barwa in the east, with ages of crustal melting as young as Pliocene-Pleistocene (3-1 Ma) in the Nanga Parbat core.
More detailed geochronology, linking accessory phase dating to specific metamorphic periods and PT conditions would tie down the thermal history in greater detail. (7) PTt paths of rocks across the GHS. One of the major advances in the last 20 years has been the ability to match U-Pb ages on accessory minerals such as zircon, monazite, allanite and rutile to points on the pressuretemperature path of rocks. This has resulted in numerous studies relating PTt paths to prograde burial and retrograde exhumation paths across the GHS. Isochemical phase diagram (pseudosection) modelling using large datasets such as THERMOCALC has been extremely useful for interpretation of pressure-temperature data, but must be used with caution. At the outset, it must be determined whether the minerals used are in equilibrium and whether the 'age' obtained from an accessory mineral is actually related to the metamorphic reaction in question. Linking PTt paths with microstructures gives an extra dimension with PTtD (deformation) paths. Determining prograde and retrograde PTtD paths across major structures and being able to put a precise age on specific parts of the path has undoubtedly revolutionized our understanding of timescales of metamorphism. It is apparent now, for example that, in places rocks showing younger prograde burial PTt paths lie structurally beneath rocks showing older PTt paths, conforming to the general southward propagation of metamorphism across the GHS with time, following the regional structures. As older metamorphic slices in the north were being exhumed, younger metamorphic rocks in the south were being buried. Future research is needed to determine whether cryptic shear zones within the GHS are real structures, or whether they are metamorphic isograds. The High Himalayan Detachment (Goscombe et al. 2006(Goscombe et al. , 2018, for example, may be either a ductile shear zone or the sillimanite + K-feldspar isograd marking the first appearance of migmatite melt and leucogranite sills (and possibly the base of the extruding channel). Only field-based mapping combined with well-constrained PTtD paths along the upper GHS will be able to accurately correlate many of these cryptic structures along-strike. The inverted metamorphic sequence along the base of the GHS is present along the entire >2000 km length of the orogen. The increase in pressure and temperature up-structural section from unmetamorphosed rocks of the Lesser Himalaya through low-grade metamorphic rocks, through staurolite, kyanite, sillimanite gneisses and the first appearance of partial melt in migmatites defines an inverted metamorphic sequence. The amounts of southward thrusting of GHS rocks are difficult to quantify but are thought to be similar to the minimum offsets along the STD zone at the top of the GHS. Models to explain inverted metamorphism include thrusting a hot slab over a cold slab (LeFort 1975), shear heating along the MCT (England et al. 1992) and the post-metamorphic folding of earlier formed metamorphic isograds (Searle & Rex 1989). Field mapping of metamorphic isograds in the NW Indian Himalaya showed that right-way-up metamorphic isograds along the STD ductile shear zone (Zanskar shear zone) could be linked to inverted metamorphic isograds along the MCT ductile shear zone below (Kishtwar Window). This folded isograd geometry was the origin of the channel flow model (Searle & Rex 1989;Searle et al. 2008). (10) Crustal melting and channel flow processes in the GHS. 
Partial melting of the crust first appears with kyanite-bearing migmatites that have been found in many parts of the GHS. The higher volume melts are related to the muscovite dehydration melt reaction with sillimanite + muscovite breaking down to sillimanite + K-feldspar + melt. These melts form a thick network of in situ migmatites, foliation-parallel sills with crosscutting, interconnecting dykes and larger leucogranite massifs, now known to be giant sills rather than intrusive diapirs. Himalayan leucogranites have varying amounts of garnet, tourmaline and muscovite, whilst later leucogranites may contain cordierite and even andalusite as magmatic phases ( (Elliott et al. 2016). Forensic studies on older historic earthquakes such as the magnitude 8.1 Bihar earthquake of 1934, the largest known along the Himalayan belt, can also benefit from comparisons with the better known recent earthquakes (Bilham 2004). In addition to Himalayan tectonic processes, important processes along the Asian plate in the Karakoram, Pamir and Tibetan plateau region include: (14) Timing of granite magmatism, calc-alkaline volcanism along the Gangdese granite batholith. The southern margin of the Asian plate is marked by a linear granite batholiththe Ladakh-Gangdese batholith composed of calc-alkaline I-type granitoids with andesitic volcanics (Linzizong volcanics). These Andean-type granitic rocks are related to the subduction of Tethyan oceanic lithosphere northwards beneath Tibet. U-Pb zircon ages span from Late Jurassic through to Eocene time (Chung et al. 2005), and their ending is thought to be soon after continentcontinent collision which closed off the oceanic subduction source. Small-volume, post-collision adakites are sourced from melting of a garnet-bearing lower crust eclogite or amphibolite and occur across the Tibetan plateau. At least four very large porphyry copper (and gold) deposits occur within the Gangdese batholith in south Tibet, but it remains unclear if they are related to calc-alkaline subduction-related Gangdese plutonism, or the younger Miocene lower crust-derived adakites. (15) Timing of crustal thickening and uplift of the Tibetan Plateau. The crustal structure and timing of uplift of the Tibetan plateau has been the source of much controversy. Some older studies presumed that the plateau uplifted as recently as 7-8 Ma based on highly convoluted reasoning and far-field effects (e.g. climate and vegetation changes in Tibet and India and changes in global ocean chemistry, etc.; Molnar et al. 1993). Others proposed much older uplift, even precollisional based on U-Pb age data from metamorphic and magmatic rocks in the Karakoram and parts of more deeply exhumed Tibetan crust (Searle et al. , 2011. It seems quite likely that southern Tibet had a topography similar to that of the present-day Andes before the collision of India, with increased and enhanced post-collision uplift concomitant with regional kyanite-and sillimanite-grade metamorphism extending back to at least 65 Ma. Certainly, erosion rates on the Tibetan plateau have been extremely low, since fission track ages extend back to at least 49 Ma. These data suggest a rather passive uplift of the Tibetan plateau rather than any homogeneous shortening that would have produced regional Cenozoic metamorphism across Tibet. (16) Extent of underthrusting of lower Indian crust northwards beneath the Tibetan plateau. 
Two end-member models to explain the double crustal thickness beneath the Tibetan plateau are wholescale underthrusting of the Indian plate beneath the whole of the plateau as first suggested by Argand (1924), and post-collisional homogeneous crustal shortening and thickening, as first suggested by Dewey and Burke (1973). It seems apparent that the Himalaya has absorbed at least 500 km shortening in upper crustal rocks (Proterozoic and younger). The equivalent lower crust Archean rocks that underlay these prior to collision have been underthrust north, at least half-way across the plateau (Searle et al. 2011). This model is supported by several deep crustal geophysical experiments (e.g. INDEPTH, HiCLIMB) that suggest that the southern half of the plateau is underlain by cold lithospheric mantle and only the northern part of the plateau along the Kunlun has a hot asthenospheric mantle with strong east-west mantle anisotropy. This region also correlates with the youngest shoshonitic mantle-derived volcanics. Future studies might constrain mantle structure in more detail and tomographic studies might be able to delineate old, subducted slabs in the deep mantle beneath the plateau. (17) Timing of regional metamorphism along the southern Karakoram and Pamir gneiss domes. The southern margin of the Asian plate in the west lies along the Karakoram Mountains of north Pakistan, and far north Ladakh. Regional mapping along the Baltoro and Hushe regions, combined with structural, metamorphic and U-Pb geochronology, has constrained the southern Karakoram metamorphic complex as being mainly postcollisional kyanite-and sillimanite-bearing gneisses, migmatites and leucogranites . Earlier, pre-collision high-temperature, low-pressure andalusitesillimanite metamorphism in the Hunza valley region is related more to the I-type granite batholiths along the southern margin of Asia. These regional metamorphic rocks support a post-collisional thickening event north of the suture zone, but similar rocks are not seen in Tibet, although it is possible that they remain buried and have not yet been exhumed. U-Pb ages constrain the ages of metamorphism in both the southern Karakoram and the central Pamir gneiss domes as Eocene to middle Miocene, a similar time span to that known along the Indian plate Himalaya south of the suture zone. (18) Geological offsets and timing of slip along the major strike-slip faults of Tibet (e.g. Karakoram, Altyn Tagh, Kun Lun, Xianshuihe, Jiale faults). Another important tectonic model originally proposed by Molnar & Tapponnier (1975) was the eastward extrusion of Tibetan crust, bounded by large-scale strikeslip faults, notably the dextral Karakoram and Jiale faults along the SW and SE, and the sinistral Altyn Tagh and Kun Lun faults along the north. Initially the geological offsets along these bounding strike-slip faults was thought to be very large (500-1000 km; Tapponnier et al. 1982). Subsequent detailed field structural mapping combined with U-Pb geochronology, particularly along the Karakoram fault, determined that finite geological offsets were much lower (120-35 km; Searle et al. 2010b), and initiation of shearing was younger than the youngest dated leucogranites (<13 Ma; Phillips et al. 2004). Strike-slip faulting cannot explain the uplift of the plateau, and it would appear that, despite some of these faults being extremely active (e.g. Xianshui-he fault), their total offsets are limited. 
More detailed mapping combined with geochronological studies is needed along many of the other active strike-slip faults on and around the margins of Tibet. There do appear to be some unique features of the Himalaya, and it could be argued that the HKT orogeny is unique in many aspects.
Himalayan Tectonics - A Modern Synthesis
The justification for the current volume in some ways goes back to the success of the 1993 Himalayan Tectonics volume, which provided a remarkably broad-ranging set of papers that covered the full range of geography and science that the Himalaya provide us with. These papers have been and continue to be widely cited. With the possible exception of Yin & Harrison (1996), no subsequent volume has attempted this. Twenty-five years after publication of the 1993 book, it seemed an appropriate moment to provide a wide-ranging update of what we now know about the Himalaya-Tibet-Karakoram region. Rather than doing this as a conference volume, the method employed was to invite leading scientists to write review papers in their own fields that together will provide a coherent framework on which future research can be based. We are grateful to our friends and colleagues who agreed to participate in this project and trust that 25 years down the road this volume will appear as significant as the 1993 book. The present volume comprises a set of papers on the Himalaya, Kohistan arc, Tibet, the Karakoram and Pamir ranges that represent a review of our current understanding of the geology and processes that formed the mountain ranges we see today. The first paper by Searle (2018) reviews the geological evidence for the timing of subduction initiation, arc formation, ophiolite obduction and final closure of Neo-Tethys along the Indus suture zone and the Ladakh Himalaya in particular. There is a complex series of events involved here that document India-Asia plate convergence, pre-collision ophiolite emplacement, UHP metamorphism and 'collision' between India and Asia. It is easy to confuse features that relate to pre-collisional events with those that actually relate to the collision itself. The age of India-Asia collision depends on how one defines collision, but the generally accepted age, based on the final marine fossiliferous sediments within the suture zone and along the north Indian plate margin, is c. 50 Ma. Myrow et al. (2018) describe the restoration of the Himalaya using the Neoproterozoic and Paleozoic stratigraphy along the Lesser Himalaya and Tethyan Himalaya. They develop a stratigraphy for the Indian Plate rocks with a Paleo-Proterozoic basement >1.6 Ga, overlain by a sequence of Neo-Proterozoic sediments <1.1 Ga old. These are blanketed by Cambrian and younger sediments. Understanding this stratigraphy is key to unravelling field geology along the arc. Two complementary papers explore the geology and evolution of the Kohistan arc in the Pakistan Himalaya. Petterson (2018) reviews the geological history of all units forming the Kohistan island arc, one of the largest and best exposed arcs in the geological record. Kohistan exposes a c. 40-50 km structural profile through this late Jurassic-Cretaceous intra-oceanic arc, from deep garnet granulites and peridotites at the base through gabbros and amphibolites to classic calc-alkaline granites and volcanics at the top. Petterson provides an up-to-date stratigraphy and timeline for the forearc region. Jagoutz et al.
(2018) present a comprehensive set of U-Pb, Hf, Nd and Sr isotopic data along the Kohistan-Ladakh arc spanning some 120 myr of geological history. They document a long-term magmatic evolution that shows a continuously increasing contribution of an enriched component derived from the subducted slab into the depleted sub-arc mantle. Along the Himalaya, the geology of the currently exposed Indian plate includes: continental shelf-slope-basin rocks at the leading edge of the plate margin, a slice of which was subducted to UHP depths of more than 100 km, as exposed in the Kaghan, Stak (Pakistan) and Tso Morari (India) eclogite belts; the Neoproterozoic to Cenozoic Tethyan sedimentary upper crust; the North Himalayan domes; the Greater Himalayan metamorphic sequence; and the Lesser Himalayan thrust sheets. O'Brien (2018) reviews the geology, thermobarometry and timing of the coesite-bearing UHP eclogites in both the Kaghan and Tso Morari regions. The UHP rocks are distinctly different from the granulitized eclogites of the deep levels of the GHS as seen in the Ama Drime massif, north Sikkim, NW Bhutan and the Namche Barwa syntaxis. These rocks represent deep crustal Proterozoic rocks that have been subducted beneath south Tibet and undergone Oligocene-Miocene UHP metamorphism during crustal thickening. Butler (2018) reviews the geology of the Nanga Parbat syntaxis in northern Pakistan. He argues that feedback mechanisms implied in the tectonic aneurysm models may have been overemphasized and that patterns of ductile flow within the syntaxes are consistent with orogeny-wide gravitational flow. Treloar et al. (2019) review the geology of the Pakistan Himalaya south of the Kohistan arc and Main Mantle Thrust. Here, kyanite- and sillimanite-grade gneisses, previously thought to be the result of Himalayan-age metamorphism, now reveal an important Ordovician peak thermal event (Bhimpedian orogeny), constrained by U-Pb dating of monazites, with a weaker Himalayan overprint. These rocks are clearly different from the main Himalayan GHS gneisses of Late Eocene to Mid-Miocene age, with their migmatites and leucogranites, as exposed along the Zanskar Himalaya and further east to Garhwal, Nepal, Sikkim and Bhutan. They do, however, correlate with rocks along the Lesser Himalaya and Kathmandu klippe, which also have Cambrian-Ordovician metamorphism and S-type granites. A number of papers deal with the Nepalese sector of the Himalaya. Dyck et al. (2018) describe the protolith stratigraphy of the Langtang GHS based on detrital zircon dating of the high-grade gneisses and compare the Proterozoic-Paleozoic protolith stratigraphy in the Langtang Himalaya with that of the Annapurna region to the west and the Everest region to the east. They argue that, within the context of the Northern Indian sedimentary successions, the Lesser, Greater and Tethyan Himalayan successions are structurally rather than lithologically defined. Carosi et al. (2018) describe the structure and metamorphism of the central western Nepal region with emphasis on the High Himalayan Discontinuity, a cryptic tectono-metamorphic boundary lying above the Main Central thrust. Most data support a model for GHS metamorphism of in-sequence shearing affected by minor later out-of-sequence thrusts. Waters (2019) provides a comprehensive and detailed review of the metamorphism of the Nepal Himalaya, in terms of pressure-temperature conditions, phase diagram (pseudosection) modelling, ductile strain and timing.
The first part of his paper reviews the techniques used to constrain the metamorphic evolution of orogenic belts. The second part of the paper documents different P-T-t paths in the GHS below and above the 'High Himalayan Discontinuity' (Goscombe et al. 2006, 2018) that divides the GHS into an upper zone capable of ductile flow and a lower zone characterized by inverted metamorphic gradients and downward-decreasing metamorphic ages. Kellett et al. (2018) review the structures and metamorphism of the South Tibetan Detachment system in Nepal and South Tibet and discuss the various tectonic models, including gravitational collapse, wedge extrusion, channel flow and duplexing. The STD appears to be an enigmatic and possibly unique structure, a low-angle normal fault that caps the southward-extruding ductile middle crust (GHS). Jessup et al. (2019) distinguish two types of gneiss domes along the northern Himalaya. The North Himalayan gneiss domes formed by warping of the GHS metamorphic rocks after metamorphism and are cored by granite and gneiss. The type 2 domes formed in response to orogen-parallel extension during the Late Miocene. They present a new terminology to classify the domes, which helps elucidate their significance. The Siwalik foreland basin along the southern boundary of the Lesser Himalaya preserves an erosional history of the uplift and exhumation of the Himalaya since collision. Garzanti (2019) summarizes the stratigraphic, petrological and mineralogical evidence from the foreland basin sequence. The onset of India-Asia collision is pinned down to middle Paleocene (60-58.5 Ma) time with the first provenance of Asian plate material interbedded with Indian plate continental rise rocks. The final marine sedimentary rocks within the suture zone and along the north Indian plate margin are c. 50.5 Ma. Thus, the timing of India-Asia collision could be bracketed between these two ages. Metamorphic fragments derived from the uplifted GHS appear abruptly at c. 23 Ma. A comprehensive review of historical seismicity along the Himalaya is provided by Bilham (2019). He assesses the risk and slip potential of different segments of the Himalaya and concludes that more than half the region has the potential to host a great earthquake (Mw ≥ 8.0). This is particularly worrying given the magnitude of destruction and loss of life (>9000 dead, 22 000 injured) that occurred during the 25 April 2015 Mw 7.8 Gorkha earthquake. This earthquake occurred at midday on a Saturday when schools were closed and most people were outdoors; if it had happened at night or during school time, the death toll would have been far greater. He argues that the death toll of a major nocturnal earthquake could exceed 100 000 owing to increased population and the vulnerability of present-day construction methods. Priestley et al. (2019) review all the geophysical data that allow an interpretation of the deep structure of the Himalaya, including seismic and gravity data and modelling. They argue that, although the gross crustal structure of much of the Himalaya is becoming better known, understanding of the internal structure is still sketchy. The Asian margin of the India-Asia collision zone comprises the Gangdese granite belt along the southern margin of the Lhasa Block, and the northern terranes of the Qiangtang and Kunlun. These central Tibet terranes continue west into the Karakoram Mountains and Pamir ranges.
Metcalf & Kapp (2019) present results of mapping more than 200 km of the Yarlung Suture zone using detrital zircon U-Pb ages and petrography. Their model has the Zedong arc representing the southward migration of the Gangdese arc as it was emplaced onto a forearc ophiolite complex along the southern margin of Asia. Zhu et al. (2018) review the magmatism along the Gangdese batholith of south Tibet since 120 Ma using a very large dataset of 290 U-Pb zircon ages that span c. 210c. 10 Ma. The majority of the ages are from the main calc-alkaline granite-granodiorite Gangdese rocks, but an important minority are from the small volume felsic adakites that were erupted at c. 16 Ma. The age of the Linzizong calc-alkaline volcanics is now refined to c. 60-52 Ma. The geology of the Karakoram and Pamir, the eastern extension of the northern Lhasa and Qiangtang terranes, is very different from the geology of central Tibet. Much of the latter is composed of sedimentary rocks and granites with few metamorphic deep crust rocks, whereas large tracts of the southern Karakoram and central Pamirs are dominated by kyanite-and sillimanite-grade regional metamorphic rocks. Searle & Hacker (2018) review the structure and metamorphic evolution of the Karakoram and Pamir. The ages of peak metamorphism appear close to mirror images of the Oligocene-Miocene ages from the Greater Himalaya, suggesting postcollision crustal thickening spread both south (Himalaya) and north (Karakoram, Pamir) of the suture zone. These Cenozoic metamorphic rocks are not exposed across central or eastern Tibet, but could be present in parts of the deep crust of the plateau region, unexposed thus far by erosion-exhumation processes. He et al. (2018) integrate new geological mapping along the Muskol metamorphic dome in the Central Pamir with detrital zircon geochronology and petrography. They describe Triassic rocks unconformably overlain by Cretaceous strata that are similar to the southern Qiangtang terrane and Bangong suture zone. Oligocene conglomerates interbedded with siltstones record a juvenile magmatism at c. 32 Ma. Finally, Clift & Webb (2018) review the history of the Asian monsoon in South Asia. They describe a strengthening of rainfall at c. 24 Ma, with a peak wet period at c. 15 Ma in the middle Miocene and a drying at c. 8 Ma. Neither of these ages correlates with the timing of uplift of the Tibetan plateau, or with the retreat of shallow marine seas from central Asia. The rise of the Himalaya during the Miocene provided an abrupt tectonic barrier to the northerly summer monsoon wind and rainfall, a situation that continues to this day.
8,659
2019-07-01T00:00:00.000
[ "Geology" ]
Fuzzy Mobile-Robot Positioning in Intelligent Spaces Using Wireless Sensor Networks
This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using Wireless Sensor Networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated by radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. In addition, the proposed approach is compared with a probabilistic technique, showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage with well-known localization methods.
Introduction
Nowadays, Wireless Sensor Networks (WSNs) [1] have gained increasing attention thanks to advances in wireless communications and sensor design, which have made it possible to reduce the cost and size of sensor devices. These sensor networks are composed of autonomous wireless sensing devices that incorporate sensing, processing, storing, and communication capabilities. Diverse criteria are used in the literature to classify them, such as considering only the communication protocols [2], the nature of the specific application [3], or the wireless device functionalities [4]. They have been successfully applied in a wide spectrum of applications, such as search and rescue [5], disaster relief [6], target tracking [7], and smart environments [8], to name but a few. The low cost of these devices makes them especially suitable for large Intelligent Spaces [9], where the nodes are spatially distributed in order to cooperatively process and communicate sensed information. The positioning of mobile nodes in an Intelligent Space is of special interest for location-dependent applications, such as robot navigation [10,11], geometry-dependent routing [12], location-dependent sensing, and Location-Based Services (LBS) [13]. The WSN localization problem consists of estimating the location or spatial coordinates of some or all of the sensor nodes of the WSN. In order to do so, the different localization approaches make assumptions about their network and device capabilities, including the hardware incorporated in devices, signal propagation models, computational and energy requirements, the nature of the environment (indoor vs. outdoor), communication cost, accuracy requirements, and node mobility. Considering all these constraints, each sensor node makes use of available information, such as position measurements and the locations of neighboring nodes, to estimate its pose. The localization problem is much more complex indoors because Global Positioning System (GPS) coverage is limited and inter-node position measurements are usually unreliable in low-cost sensor devices. For these reasons, the indoor WSN localization problem is usually simplified by differentiating between unknown and known sensor nodes. The former make use of the known locations of the latter, the so-called beacons or anchor nodes, together with position measurements, to estimate their own location. The position measurements include both information about the sensor node position relative to the WSN, e.g., distance [14] or bearing [15] to beacons, and information on the sensor node motion, such as movement estimates obtained from accelerometers in sensor nodes [16] and from odometers in mobile robots [17].
What really makes indoor WSN localization difficult is the presence of uncertainty in position measurements and the reduced accuracy of beacon positioning. The sensor nodes make use of some signal propagation model, which should be calibrated for each specific environment and which is therefore strongly affected by slight environmental modifications. In addition, the location of beacons is usually configured by hand in indoor applications, which reduces the accuracy of beacon positioning. All these factors induce different types of uncertainty in position measurements, including vagueness, imprecision, unreliability, and random noise. Measurements may also be affected by several simultaneous factors, which are not necessarily independent. For all these reasons it is important that the formalism used to address the indoor WSN localization problem is able to represent the different types of uncertainty and account for the differences between them. Fuzzy logic provides powerful tools to represent and handle the different facets of uncertainty in measurements [18], to address matching problems based on the similarity interpretation of fuzzy logic [19], and to use approximate models based on experience. These arguments have led us to use fuzzy sets to represent location uncertainty in the indoor WSN localization problem. In this paper, we address the problem of positioning a mobile robot in an Intelligent Space [20][21][22] using a low-cost and low-density WSN composed of Tmote Sky devices, which are equipped with ZigBee (IEEE 802.15.4) communications. The inter-node measurements are estimated using the Received Signal Strength (RSS) of Radio-Frequency (RF) communications. These measurements are very unreliable due to RF signal propagation effects, such as reflections, diffraction, and scattering, which make signal strength calibration difficult. The robot makes use of a vague description of the environment and position measurements to estimate its pose. Thus, the restrictions of the problem are as follows: the knowledge of the environment is approximate, the density of the WSN is unknown, and the on-site startup cannot be complex or time-consuming. We have adopted a fuzzy robot localization framework [23], based on early ideas for representing location uncertainty [24] and ambiguity [25] in position measurements, which combines the typical schema of fuzzy systems with the typical schema of recursive position estimation methods. The advantages of this approach become obvious under high uncertainty and sensor-model ignorance, which are the typical conditions of indoor WSN applications using RSS for inter-node distance estimation. The paper is structured as follows. Section 2 presents a review of relevant related works. Section 3 is devoted to analyzing the sensor model used for estimating the distances between the sensor node installed on the robot platform and the beacons distributed along the Intelligent Space. Section 4 describes the theoretical bases of the proposed approach and a reference method used to evaluate the proposed one. The experimental setup, the experimental validation of the proposed method in different situations, and a comparison between the proposed approach and one of the most popular localization methods are presented in Section 5. Finally, conclusions are presented in Section 6.
Related Works
Currently, there is a consensus on classifying WSN localization techniques into range-free (or coarse-grained) and range-based (or fine-grained) schemes [26][27][28]. Range-free approaches infer constraints on the proximity to beacon nodes without making use of inter-node measurements, and thus the sensing devices do not require special and expensive hardware. Normally, these localization methods use quite simple operations to reduce computational and energy consumption. They are used when the cost and hardware limitations of sensing nodes prevent the use of range-based techniques, and they are a cost-effective alternative, at the expense of accuracy, in some applications [29]. On the other hand, range-based approaches rely on position measurements to estimate the location of unknown nodes. The sensor nodes must be equipped with special hardware to determine the position measurements, distance or bearing, from unknown nodes to beacons. Range-based approaches are the most suitable option when the indoor WSN application requires position estimation that is as accurate as possible, which is the case for most robotics applications. The position measurements in range-based approaches rely on the hardware incorporated in the sensor nodes, such as directional or omnidirectional antennas, RF communications, and acoustic or optical sensors. The inter-node distance is usually estimated using the propagation time of signals, e.g., the Time of Arrival (TOA) [14] between transmitter and receiver, or the Time Difference of Arrival (TDOA) [30], which is based on the correlation of two or more signals with different propagation times in order to obtain accurate distance estimates. The relative angle between sensor nodes, the Angle of Arrival (AOA), can be estimated using an antenna array [15] or by calculating the TOA difference of two transmitters/receivers separated by a fixed distance [31]. Nevertheless, the most popular inter-node measurement is distance estimation based on RSS, because most sensor network devices are equipped with RF-based communications and thus no extra hardware is needed. Moreover, the RSS of RF signals can be measured during communications without additional bandwidth or energy requirements [32]. Furthermore, RF-based position measurements permit estimating inter-node ranges through obstacles, which allows the density of WSNs to be reduced and avoids the typical coverage-area problems of sensor networks composed of optical and acoustic devices. The problem is that RF signal strength is very unreliable because it is affected by several signal propagation effects. Range-based approaches deal with the uncertainty of position measurements to provide a location estimate for unknown sensor nodes. The most popular range-based localization approaches are probabilistic methods, which formulate localization as a Bayesian estimation problem where both the sensor node state (location) and the sensor measurements are modeled using probability distributions. With this representation, a sensor node can believe itself to be at a certain location with a certain degree of probability. The probabilistic localization problem consists of estimating the probability density over the space of all locations. The Markov Localization framework estimates this probability density [33] and captures the probabilistic foundations of many stochastic localization methods currently used.
These methods have been broadly used in indoor WSNs, e.g., grid-based methods [34], some variants of particle filters [28], and probabilistic methods for cooperative localization [32]. In the robotics context, different works have used some implementation of the Bayesian Localization Framework [33] in order to estimate the robot location using both Wireless Local Area Network (WLAN) signal strength [35][36][37] and range readings from radio tags [38] as sensing modalities. The main problem of range-based approaches is that they are strongly dependent on sensor models. For that reason, the procedure for obtaining such sensor models is of paramount importance. Some techniques [39] aim to learn accurate signal strength sensor models in order to make use of available indoor infrastructure, including signals detected from WLAN and RFID beacons. In the case of probabilistic methods, sensor models usually consist of normal distributions, which are determined using the central limit theorem, i.e., by repeating the measurement a sufficiently large number of times under similar conditions to determine the mean and variance of a Gaussian distribution. However, practical experience suggests that these assumptions are often violated in reality, especially when we cannot reproduce the conditions of the measurements or they are unknown. In the case of fuzzy approaches, sensor models consist of fuzzy sets that represent the different facets of uncertainty affecting the measurements. These fuzzy sets are adjusted approximately, normally by human experts relying on their experience, or by an expert system, depending on the vagueness, imprecision, and unreliability of the position measurements. For that reason, fuzzy techniques are applicable in domains where the assumptions of other methods are not satisfied, e.g., when the sensor model cannot be easily elicited [40]. Some examples of fuzzy logic in localization approaches are tracking in wireless networks [41], multisensor fusion of uncertain information [42,43], location information fusion in multirobot systems [44], dynamic localization using fuzzy pattern-matching techniques [45], and fuzzy inference to deal with imprecision [46] or to adapt some parameters [47] of other localization methods. In this work, we propose a range-based indoor WSN localization method that aims to avoid the typical drawbacks of methods strongly dependent on signal strength calibration. In particular, the proposed method is focused on simple on-site startup and robustness.
Perception
We use a propagation model of the RF signal strength attenuation (RSS) in order to fit the sensor model used for estimating the inter-node distances. Such a model depends on several factors, such as the kind of terrain, obstructions in the wave path, atmospheric conditions, and other phenomena. These factors induce the three phenomena that cause radio signal distortion and give rise to signal fades, as well as additional signal propagation losses [48]: reflection, diffraction, and scattering. Indoor environments are probably the worst case because there are multipath reflections, diffraction around sharp corners, and scattering from wall, ceiling, or floor surfaces. The different models depend on environmental conditions and usually rely on computing the median path loss for a link under a certain probability that the considered conditions will occur. In our case, we have adopted the shadowing propagation model [48], which consists of two parts: path loss and variation of received power.
Path loss predicts the mean received power at distance d, denoted by P_L(d), which is calculated relative to a reference distance d_0 as follows,

P_L(d) / P_L(d_0) = (d / d_0)^β

where β is the path loss exponent, which is empirically determined. When path loss is measured in dB it can be expressed as follows,

[P_L(d)]_dB = [P_L(d_0)]_dB + 10 β log_10(d / d_0)

The variation of received power is represented as a log-normal random variable, i.e., a Gaussian distribution denoted by Ψ_dB when it is measured in dB. Thus, the propagation model is represented as follows,

[P_L(d)]_dB = [P_L(d_0)]_dB + 10 β log_10(d / d_0) + Ψ_dB

Finally, the received power is the difference between transmitted and attenuated power,

P_r(d) = P_t(d) − P_L(d)

where P_r(d), P_t(d), and P_L(d) are the received, transmitted, and attenuated power respectively, d and d_0 are the distance and the reference distance respectively, β is the path loss exponent, and Ψ_dB is a zero-mean Gaussian random variable N(0, σ). The propagation model is customized for the RF communications of the Tmote Sky commercial device. This device provides two indicators that can be used for elucidating the sensor model: the Received Signal Strength Indication (RSSI) and the Link Quality Indication (LQI). The latter is the quality parameter, or error rate, of packet reception. The inter-node distance can be estimated from RSS because all anchor nodes are configured to emit at maximum RF power, and hence the distance is estimated from the attenuation of signal strength relative to that reference value. Thousands of measurements were taken from different locations of an office-like indoor environment in order to fit the propagation model. By knowing the ground truth of the sensor node to be estimated and the positions of the beacons emitting at maximum power, the Tmote indicators can be correlated with the inter-node distances. Figure 1(a) shows the Tmote indicator values, path loss (dBm) and LQI (dimensionless), at different transmitter-receiver distances in an office-like environment, including measurements through obstacles such as walls and office furniture. We can observe that the Tmote indicators are very unreliable at all distances because the uncertainty of the measurements depends on several factors, such as propagation effects and environment layout. The sensor model is obtained by fitting the RSSI values using a least squares fitting method. The fitted values are [P_L(d_0)]_dB = 59.95, β = 3.72 and d_0 = 1. Figure 1(b) shows the gap between the Tmote indicators received at different distances and the fitted sensor model based on RSSI. We have noticed that the LQI indicator cannot be used for estimating distances because its values are very similar across the inter-node range. However, it can be used for filtering out measurements that do not correspond to distance estimates using RSS, e.g., distance estimates shorter than five meters whose LQI values are not contained in the interval [103, 110] are rejected. We have to remark that this sensor model is approximate for a certain indoor environment, but it will be used for any office-like environment. The statistical model permits estimating inter-node distances given the RSSI indicator of Tmote devices, but we can observe that these estimates are highly unreliable. For example, Figure 1(b) shows that an RSSI value of 43 corresponds to a distance of three meters according to the fitted statistical model; however, this value can correspond to any distance within the interval [1, 9] meters according to the scattering of the measurements. For that reason we should include uncertainty in the sensor model in order to deal with it.
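To make the distance-estimation step concrete, the following is a minimal sketch (in Python; the function names and the rejection helper are ours, not from the paper) of the fitted shadowing model and its inversion, using the values reported above ([P_L(d_0)]_dB = 59.95, β = 3.72, d_0 = 1 m) together with the LQI-based filtering rule.

```python
import math

# Fitted shadowing-model parameters reported above (office-like environment).
PL_D0_DB = 59.95   # path loss at the reference distance d0, in dB
BETA = 3.72        # path loss exponent
D0 = 1.0           # reference distance, in meters

def path_loss_db(d, psi_db=0.0):
    """Log-distance path loss in dB, with an optional shadowing sample psi_db."""
    return PL_D0_DB + 10.0 * BETA * math.log10(d / D0) + psi_db

def distance_from_path_loss(pl_db):
    """Invert the mean model to obtain a point estimate of the inter-node distance."""
    return D0 * 10.0 ** ((pl_db - PL_D0_DB) / (10.0 * BETA))

def accept_estimate(distance_estimate, lqi):
    """Apply the LQI rule above: short-range estimates need LQI within [103, 110]."""
    return not (distance_estimate < 5.0 and not (103 <= lqi <= 110))

# For example, an attenuation of about 77.7 dB maps to roughly 3 m under the mean
# model, although (as noted above) the scatter means the true distance may lie
# anywhere in a much wider interval.
```

Such a point estimate should therefore only seed the fuzzy sensor model described next, rather than being used directly.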
How to represent and handle the uncertainty of position measurements is a key point in the indoor WSN localization problem. The next section presents the formalism adopted to represent and deal with the uncertainty of position measurements obtained both from range estimates and from odometry.
The Fuzzy Approach
We define the indoor WSN localization problem as a fuzzy estimation problem where both the state to estimate and the position measurements are represented using fuzzy sets. Fuzzy estimation consists of determining the fuzzy density over the space of all locations. We represent location information as a fuzzy subset µ of the set X of all possible locations [49,50]. For example, X can be a two-dimensional space encoding the (x, y) position coordinates of a sensor node. For any x ∈ X, the value of µ(x) (µ(x) ∈ [0, 1]) is read as the degree of possibility that the robot is located at x given the available information. Total ignorance is represented by the fuzzy location µ(x) = 1 for all x ∈ X. The fuzzy density or fuzzy belief G is defined as the density over all possible locations where the robot could be located. Thus, the localization problem can be formulated as maintaining the belief G_t that represents the robot's position at time t. The aim of localization is to make this belief as close as possible to the real distribution of the robot's pose. Ideally, the robot's belief has a single peak at the true location and is zero everywhere else. Unfortunately, uncertainty is always present in reality. The fuzzy density is estimated following the typical predict-update cycle of recursive state estimators [51]. The prediction stage consists of a dilation of the fuzzy belief G_{t−1} in order to obtain the predicted fuzzy belief Ĝ_t. This operation is performed by a fuzzy dilation operator [52,53] B that dilates the fuzzy belief G_{t−1} in all directions in order to represent both the robot's motion and the uncertainty in the robot's location. In the case that we know the sensor node is static, the fuzzy dilation is still applied to guarantee the convergence of the method and to ensure that the recursive estimator is not trapped in a local minimum. Formally, the dilation operation of G by B is denoted by G ⊕ B, and the prediction stage is defined by

Ĝ_t = G_{t−1} ⊕ B

which dilates the fuzzy belief from G_{t−1} to Ĝ_t. Intuitively, the result of a fuzzy dilation is a fuzzy distribution spatially expanded from G, where B represents the shape of the expansion. In our implementation, we have adopted an isotropic B operator which expands G in all directions. The update stage consists of intersecting the predicted belief Ĝ_t with the beliefs induced by all observations (inter-node position measurements) at time t. Let S_t(·|r) be the possibility distribution induced by the observation r at time t. In other words, S_t(·|r) represents the possibility that the robot is located at (·), the n-dimensional fuzzy state, given the position measurement r. The predicted fuzzy belief Ĝ_t is then updated by intersecting it with the fuzzy distributions S_t(·|r_1), S_t(·|r_2), ..., S_t(·|r_n) induced by the observations r_1, r_2, ..., r_n at time t as follows,

G_t = Ĝ_t ∩ S_t(·|r_1) ∩ S_t(·|r_2) ∩ ... ∩ S_t(·|r_n)

where ∩ denotes a fuzzy intersection operator. There are different choices for ∩ depending on the independence assumptions made about the items being combined [54]. In our case, we have adopted the fuzzy product operator because it reinforces the effect of consonant observations.
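As an illustration only (not the paper's implementation), the grid-based predict-update cycle described above can be sketched in a few lines of Python with NumPy and SciPy: prediction dilates the belief grid isotropically, and the update multiplies it by the possibility grid induced by each observation, renormalizing after each intersection.

```python
import numpy as np
from scipy.ndimage import grey_dilation

def predict(belief, dilation_size=3):
    # Prediction stage: isotropic fuzzy dilation of the belief grid (G ⊕ B).
    return grey_dilation(belief, size=(dilation_size, dilation_size))

def update(belief, observation_grids):
    # Update stage: product intersection with each observation, normalizing each time.
    for s in observation_grids:
        belief = belief * s
        m = belief.max()
        belief = belief / m if m > 0 else np.ones_like(belief)  # fall back to ignorance
    return belief

belief = np.ones((100, 100))  # total ignorance: every cell fully possible
```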
The fuzzy intersection operation satisfies the associative property, but commonly, normalization is performed after each intersection. Since fuzzy normalization is a non-associative operation, the order in which the intersection operations are performed modifies the final result of the fuzzy robot's belief. The uncertainty of the observations is represented as different intervals in the inter-node range; the sensor model of each observation is associated with a trapezoidal fuzzy set µ(x, y) = (ρ, ∆, s, h, b), shown in Figure 3 in the dimension of the inter-node distance instead of the 2D grid, that represents the uncertainty of the position measurement. The ρ parameter is the center (the inter-node distance estimate), ∆ is the width of the core, s · ∆ is the width of the support, h is the height, and b is the bias. The width of the core represents a completely possible area (imprecision representation) where we assume that the robot is located. The slopes of the trapezoidal fuzzy set, i.e., the width of the support excluding the width of the core, represent an area where the robot could be located (vagueness representation). The bias of the trapezoidal fuzzy set represents the area where there is a small possibility that the robot is located (unreliability representation). In our implementation, the parameters of the trapezoidal fuzzy set, representing the imprecision, vagueness, and unreliability of each observation, are adjusted depending on the inter-node distance estimate ρ. We only adjust the width of the core ∆ and the width of the support s · ∆ in order to represent the facets of uncertainty mentioned above. We have followed the criterion that close observations induce a smaller area in the fuzzy robot's belief than further ones, i.e., instead of weighting the importance of the position measurement (using different heights in the trapezoidal fuzzy sets) depending on the inter-node distance estimate ρ, we model the areas where it is fully possible, possible, and unlikely that the robot is located. Figure 4 (upper) shows an example of the fuzzy robot's belief representation and the fuzzy beliefs induced by the observations (position measurements). We can observe how the grid-based representation of the fuzzy belief is able to represent both total ignorance and multiple possible locations; the robot's location is initially ignored, and hence all positions are fully possible, as shown in Figure 4.
The Reference Method
The reference method is a variant of the Monte Carlo localization approach [55], in which the probability density is represented by maintaining a set of samples that are randomly drawn from it. This variant uses a hybrid representation of the probability density to reduce the computational cost. The pose probability is factorized as a distribution over a continuous set of angles and continuous translational coordinates; the distribution over poses (x, y, θ) is first generically decomposed into the product

P(x, y, θ) = P(θ) · P(x, y|θ) = Σ_i P(θ_i) · P(x, y|θ_i)

which is a kind of Rao-Blackwellization of the state space [56,57]. The distribution P(θ) is modeled as a discrete set of weighted samples θ_i, and the conditional likelihood P(x, y|θ) as a simple two-dimensional Gaussian. This approach has the advantage of combining discrete Markov updates for the orientation with Kalman filter updates for the translational degrees of freedom. Note that although there is no bearing information, because of the range-only measurements, the orientation can still be estimated when the robot is in motion.
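Continuing the sketch above, the trapezoidal sensor model (ρ, ∆, s, h, b) just described can be rasterized into the ring-shaped possibility grids consumed by the update step; the grid size, cell resolution and default parameter values below are hypothetical.

```python
import numpy as np

def trapezoid(d, rho, delta, s, h=1.0, b=0.1):
    # Possibility of inter-node distance d: core of width delta (imprecision),
    # support of width s*delta (vagueness), and a bias b everywhere (unreliability).
    half_core, half_support = delta / 2.0, (s * delta) / 2.0
    x = abs(d - rho)
    if x <= half_core:
        mu = h
    elif x <= half_support:
        mu = h * (half_support - x) / (half_support - half_core)
    else:
        mu = 0.0
    return max(mu, b)

def observation_grid(beacon_xy, rho, delta, s, shape=(100, 100), cell=0.1):
    # Rasterize a range observation as a ring-shaped possibility grid around the beacon.
    ys, xs = np.indices(shape)
    d = np.hypot(xs * cell - beacon_xy[0], ys * cell - beacon_xy[1])
    return np.vectorize(trapezoid)(d, rho, delta, s)
```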
In addition, the simulation of omnidirectional random noise is facilitated by including an orientation hypothesis in each sample, even when the sensor node is static. As in the case of the proposed localization approach, the Monte Carlo method follows the typical predict-update cycle of recursive state estimators. The prediction stage consists of simulating the motion of each sample, including random noise, in order to improve convergence. Figure 2 (lower) shows three examples of probabilistic distributions: the set of samples represents the positions where the robot is probably located. The example shows the resulting robot's belief, shown in Figure 2(c) (lower), after two update stages given the probability distribution of the robot, shown in Figure 2(a) (lower), at time t. The sequence is only possible if the robot does not detect any observation during some predict-update cycles. The update stage consists of a product operation and a resampling; the product operation is performed between each sample of the predicted probabilistic belief and the sample induced by each observation (inter-node position measurement) at time t, whereas the resampling aims to remove the samples with low probability after the product operation.
Experimental Validation
This section presents the experimental validation of the proposed approach using real data. This is a key point because one of the most important causes of failure in localization methods is a lack of knowledge about the sources of noise, which is usually left out when localization approaches are evaluated in simulation. In addition, an experimental comparison with one of the most popular stochastic localization methods is performed in order to evaluate the differences between them. The battery of tests consists of kidnapping and tracking experiments using both the proposed approach and a stochastic reference method. In the robotics context, the kidnapping problem consists of positioning the robot at a location and then suddenly transferring, or "kidnapping", it to another location without the robot being aware of this. The kidnapping experiment is useful for evaluating the robustness of localization methods in different situations, such as false positive observations and recovery from failures. The tracking experiment consists of estimating the robot's location when it is in motion. We have to remark that motion is an important source of uncertainty when the robot navigates because RSS is affected by changes in antenna bearing. Thus, the experiments evaluate usual situations (like tracking) and unusual situations (like kidnapping or recovering from failure). All the experiments are performed using both the proposed approach and the reference method in order to compare them. In order to make this comparison as fair as possible, we have used similar sensor and action models. How to do this, however, is not obvious since fuzzy and probabilistic techniques are semantically different: we interpret fuzzy sets as representing degrees of possibility, while probabilities are more naturally interpreted in terms of stochastic events [23]. Moreover, stochastic methods need sensor models based on frequencies, and hence the probability function that models the sensor should be experimentally obtained, whereas methods based on fuzzy logic make use of qualitative sensor models. We have ignored these semantic differences, and we have used probabilistic sensor models that directly reflect the fuzzy ones.
Thus, the stochastic sensor model is represented by a two-dimensional Gaussian function whose parameters are chosen so that the core and the support of the fuzzy model correspond to two and four standard deviations of the stochastic Gaussian function, respectively. Figure 5 shows the correspondence between the proposed method and the reference one. Figure 5. Sensor models used to perform the comparison experiments; correspondence between (upper) fuzzy set and (lower) stochastic Gaussian distribution.
Experimental Setup
The proposed method is evaluated using an indoor WSN composed of several RF beacons, Tmote Sky devices, distributed along an office-like environment. Figure 6 shows the floor plan layout and the deployment of the beacons. We can observe that the beacon density is not very high, and hence the proposed localization method is evaluated in an unfavorable situation. The experiments are performed using a four-wheel-drive robotic platform, a Pioneer 3-AT (shown in Figure 7(b)), equipped with a laptop on top which drives the vehicle through the serial port and communicates with the WSN through a Tmote Sky device. All WSN devices are synchronized so that they do not emit packets at the same time and thus induce interference. In addition, all beacons are configured to emit packets at maximum RF power in order to use the sensor model elicited above. The experiments consist of driving the robot between known positions, which permits estimating the ground truth and calculating the position error. While the robot navigates between known locations, shown in Figure 7(c), it estimates its position using the communication packets received from the beacons. The operator indicates when the robot has reached a known position, and thus the ground truth is estimated by dead-reckoning using odometry from known locations. Since the known locations are relatively close, we assume that the position error due to odometry is not significant, and hence such position estimates can be used as ground truth.
Kidnapping Experiment
The kidnapping experiment consists of positioning a sensor node at an arbitrary initial location, estimating its pose using the messages received from the beacons, and after some seconds transferring it to another location without the sensor node being aware of this. This process is repeated many times (almost one hundred). The locations to which the sensor node is transferred are known, and the localization techniques are activated when the sensor node is located at a new pose and deactivated while the sensor node is being "kidnapped". The aim of deactivating the localization process while the sensor node is transferred to a new location is to simulate an instantaneous transfer, which should be interpreted and handled by the localization method, normally as recovery from failure. These experiments permit us to evaluate the convergence and robustness of the localization methods, and the quality of the position estimates. Note that environment information is not used in these experiments. Figure 8(a) shows the position error along the whole experiment. We can observe that the average position error is almost the same, around three meters, using both methods. Note that the average position error includes the position error during the convergence time, i.e., from when the sensor node is transferred to a new location until the localization approach converges to that position.
In addition, the localization approaches are evaluated from almost all possible positions in the environment and, given the low density of WSN beacons in the scenario, there are some areas where the received information does not permit the sensor node to be located properly. Figure 8(b) shows the position error during a short period in which the beacon layout permits estimating the sensor node location from the poses to which it is transferred. We can observe position error peaks when the sensor node is transferred to a new location, and how the position error is reduced when the localization approach converges. We obtain position errors of less than one meter for both localization approaches when they converge from poses receiving information from enough beacons. We have to remark that the estimated sensor node is static in these experiments, and that motion is an important source of uncertainty in WSNs, which is evaluated in the next section.
Tracking Experiments
The tracking experiments consist of estimating the location of a sensor node installed on the robot platform while it navigates through a route defined by known way-points. When the mobile robot reaches these way-points, an operator notifies it by sending a packet to the robot in order to indicate the ground truth and to calculate the position error. The experiments evaluate two unfavorable situations that arise when there are long corridors in indoor environments: crossing an intersection with a long corridor and navigating through a long corridor. The multipath reflection effects and the line-of-sight between emitter and receiver in long corridors induce very noisy RSS measurements with respect to the sensor model elicited above. This is because such a sensor model considers average signal attenuation, which includes walls and other obstacles; in particular, multipath reflections induce farther distance estimates due to higher RSS attenuation, while a free line-of-sight between emitter and receiver induces closer distance estimates due to lower RSS attenuation. Figure 9(a) shows the position error when the mobile robot is crossing an intersection with a long corridor. We can observe that the proposed localization method provides better estimates than Monte Carlo when the position measurements are very unreliable, i.e., when the robot is located at the crossroad, whereas it provides similar position estimates when the position measurements are relatively accurate. Figure 9(b) shows the position error when the mobile robot is navigating through a long corridor. In contrast to the previous experiment, the position measurements are highly unreliable during the whole experiment. We can observe that the proposed localization approach also provides better position estimates than the reference method. In addition, the average position error using the proposed localization approach is about two meters in this unfavorable situation, roughly twice the accuracy obtained with the reference method. The reasons the proposed method provides better position estimates in highly uncertain situations are the representation of approximate location information and, especially, the ability of fuzzy logic to address the information fusion problem. The reference method uses a classical weighted-average fusion of the different sources of location information, whereas the proposed method keeps the information before making a decision about the sources being combined, typically obtaining a consensus between the different sources of location information.
Figure 10 shows a numerical example of the proposed approach and the reference method, which aims to show what is happening in the long corridor of the tracking experiments. The example is shown in one dimension for graphical clarity. Initially, the robot is located at the origin of the dimension, represented by a continuous line, and the fuzzy robot's belief has a certain distribution that directly reflects the stochastic one, following the criterion adopted in the previous experiments. The robot then detects a wrong observation in a long corridor, shown in Figure 10(a), due to the free line-of-sight between the emitter and receiver, which induces a closer distance estimate from the beacon due to lower RSS attenuation. We can observe how the fuzzy approach maintains the representation of both sources of information as a consequence of the fuzzy intersection and the fuzzy normalization, whereas the stochastic method performs a weighted-average fusion. The Center of Gravity (CoG) of the resulting fuzzy robot's belief, represented by the dotted line, matches the mean of the resulting probabilistic robot's belief, shown in Figure 10(b). However, the former distribution is able to maintain the information of the "wrong" measurement in order to further check its cause (outlier, failure, kidnapping, etc.). Finally, the robot detects a proper observation; in the case of the fuzzy approach, the fuzzy intersection and normalization induce a fuzzy robot's belief covering the real position of the robot, whereas in the case of the probabilistic method the weighted-average fusion provides a probabilistic distribution farther away from the real position of the robot. Figure 10(c) shows how the CoG of the resulting fuzzy robot's belief is very close to the real robot's position, while the mean of the probabilistic distribution is farther away from the real robot's position. Some probabilistic localization methods perform tests to check whether a measurement is an outlier given the probability distribution of the robot's belief, thereby avoiding the problem presented in the numerical example. For example, the Extended Kalman Filter (EKF) uses the Mahalanobis distance to compare the innovation of the state estimate against the covariance associated with that innovation, considering the measurement. This comparison is used to filter out outliers, i.e., measurements that are not coherent with the robot's position given the covariance of the robot's belief. However, this kind of test compromises the localization approach under kidnapping or recovery from failure, i.e., it is then only able to handle the tracking (local localization) problem.
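A minimal one-dimensional sketch (in Python, with hypothetical values) of the contrast drawn in this example: the fuzzy product-plus-normalization update retains residual possibility for the 'wrong' hypothesis through the bias of the trapezoids, whereas a Kalman-style weighted average commits to a single compromise estimate.

```python
def fuzzy_update_1d(belief, observation):
    # Product intersection followed by normalization to a maximum of 1;
    # the trapezoids' bias keeps residual possibility for both hypotheses.
    fused = [b * o for b, o in zip(belief, observation)]
    peak = max(fused)
    return [f / peak for f in fused] if peak > 0 else [1.0] * len(belief)

def gaussian_fuse(mu1, var1, mu2, var2):
    # Kalman-style weighted-average fusion of two Gaussian position estimates.
    k = var1 / (var1 + var2)
    return mu1 + k * (mu2 - mu1), (1.0 - k) * var1

# E.g. fusing a prior at 0 m with a 'wrong' observation at 4 m (equal variances)
# yields a single estimate at 2 m, while the fuzzy belief keeps both peaks alive
# so that a later correct observation can pull the Center of Gravity back.
print(gaussian_fuse(0.0, 1.0, 4.0, 1.0))  # -> (2.0, 0.5)
```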
On-site startup by simply tuning the approximate sensor models is an important advantage with respect to popular WSN localization approaches based on signal strength calibration, because setting up those systems is complex and time-consuming. The experimental evaluation of the proposed method confirms that the fuzzy localization approach is able to solve the typical local (tracking) and global (recovery from failure and ignorance of the initial location) localization problems in robotics. Finally, we have demonstrated that the proposed approach is feasible and robust with low-density WSNs. For all these reasons, we can state that the proposed approach can be simply and quickly configured indoors, providing accurate position estimates even under high uncertainty in the position measurements.
8,321.6
2011-11-17T00:00:00.000
[ "Engineering", "Computer Science" ]
Mir-21 Regulation of MARCKS Protein and Mucin Secretion in Airway Epithelial Cells
Hypersecretion of mucus characterizes many inflammatory airway diseases, including asthma, chronic bronchitis, and cystic fibrosis. Excess mucus causes airway obstruction, reduces pulmonary function, and can lead to increased morbidity and mortality. MicroRNAs are small non-coding pieces of RNA which regulate other genes by binding to a complementary sequence in the target mRNA. The microRNA miR-21 is upregulated in many inflammatory conditions and, interestingly, miR-21 has been shown to target the mRNA of Myristoylated Alanine-Rich C Kinase Substrate (MARCKS), a protein that is an important regulator of airway mucin (the solid component of mucus) secretion. In these studies, we determined that exposure of primary, well-differentiated, normal human bronchial epithelial (NHBE) cells to the pro-inflammatory stimulus lipopolysaccharide (LPS) increased expression of both miR-21 and MARCKS in a time-dependent manner. To investigate whether miR-21 regulation of MARCKS played a role in mucin secretion, two separate airway epithelial cell lines, HBE1 (papilloma virus-transformed) and NCI-H292 (mucoepidermoid-derived), were utilized, since manipulation of miR-21 is performed via transfection of commercially available miR-21 inhibitors and mimics/activators. Treatment of HBE1 cells with LPS caused concentration-dependent increases in expression of both miR-21 and MARCKS mRNA and protein. The miR-21 inhibitor effectively reduced levels of miR-21 in the cells, coincident with an increase in MARCKS mRNA expression over time as well as enhanced mucin secretion, while the miR-21 mimic/activator increased levels of miR-21, which coincided with a decrease in expression of MARCKS and a decrease in mucin secretion. These results suggest that miR-21 is increased in airway epithelial cells following exposure to LPS, and that miR-21 downregulates expression of MARCKS, which may decrease mucin secretion by the cells. Thus, miR-21 may act as a negative feedback regulator of mucin secretion in airway epithelial cells, and may do so, at least in part, by downregulating expression of MARCKS.
Introduction
Hypersecretion of mucus characterizes many inflammatory airway diseases including asthma, chronic bronchitis, and cystic fibrosis. Excessive mucus can obstruct airways, inhibit respiration, increase susceptibility to infection, and lead to increased morbidity and mortality. Mucus is a gel made up of water and mucins (large complex glycoproteins) that are post-translationally modified by myristoylation, glycosylation and/or phosphorylation. Mucins provide mucus with its viscosity and elasticity [1]. At least 20 different mucins have been discovered in humans, 11 of which have been identified in the lungs. MUC5AC and MUC5B are the most prominent types in the airways [2]. Myristoylated alanine-rich C-kinase substrate (MARCKS) protein is a ubiquitous Protein Kinase C (PKC) substrate that has been shown to play an important role in the regulation of mucin secretion by airway epithelium in vitro [3][4][5] and in vivo [6,7]. The evolutionarily-conserved N-terminal region of MARCKS [6] is clearly involved in this action, as peptides analogous to the MARCKS N-terminus attenuate mucin secretion in airway epithelial cells both in vitro [3] and in vivo [6].
MicroRNAs are small non-coding pieces of RNA typically about 22 bases long. They regulate other genes by binding to a complementary sequence in the 3'-untranslated region of the target mRNA. MicroRNAs serve an important regulatory role in proliferation [8], differentiation [9], development, migration [10], angiogenesis [11], apoptosis [12,13] and carcinogenesis [9]. The microRNA miR-21 has been shown to target many tumor suppressors, and it is upregulated in many types of cancers and in various inflammatory conditions [14][15][16][17][18][19]. Interestingly, miR-21 has been shown to specifically target the mRNA of MARCKS [20]. Given these associations, we investigated whether or not miR-21 could be involved in mucin secretion by airway epithelial cells in response to the proinflammatory stimulus, LPS, and, if so, whether miR-21 regulation of MARCKS could be part of the mechanism. The results indicate that: 1) Treatment of well-differentiated primary normal human bronchial epithelial (NHBE) cells with LPS derived from E. coli provoked time-dependent increases in expression of both miR-21 and MARCKS; 2) LPS treatment caused a similar increase in expression of both miR-21 and MARCKS in the virally-transformed HBE1 human airway epithelial cell line; 3) Inhibition of miR-21 via transfection of a miR-21 inhibitor after LPS treatment increased expression of MARCKS, coincident with an increase in mucin secretion, in another human airway epithelial cell line, NCI-H292 cells (derived from a mucoepidermoid carcinoma); 4) Activation of miR-21 via transfection of a mimic/activator decreased expression of MARCKS and decreased LPS-provoked mucin secretion in these cells; and 5) Inhibition of MARCKS protein with a peptide identical to the MARCKS N-terminus inhibited mucin secretion in these cells regardless of treatment. Thus, it appears that miR-21 may play an important role as a negative feedback regulator of MARCKS expression and mucin secretion following inflammatory stimulation in airway epithelial cells.
Cell Culture
Well-differentiated NHBE cells from two separate donors were utilized for the initial studies. NHBE cells were purchased from Lonza Corporation (Walkersville, MD), and grown and maintained at an air/liquid interface as described previously [21] until, after approximately 18 days in culture, a well-differentiated epithelium was formed. After initial experiments indicated that expression of both miR-21 and MARCKS was enhanced by exposure of cells to lipopolysaccharide (LPS) from E. coli (Figures 1 and 2), studies to determine whether there was a connection between miR-21 and MARCKS expression were performed using both a commercially available inhibitor and an activator/mimic of miR-21 (described below). These required cells with a high transfection efficiency, so a human bronchial epithelial cell line, papilloma virus-transformed HBE1 cells [22] (a generous gift from Dr.
Reen Wu, University of California, Davis, CA) were used. HBE1 cells were cultured as previously described [23]. In additional studies examining the effects of these reagents on airway mucin secretion, a second cell line, NCI-H292 cells (derived from a human pulmonary mucoepidermoid carcinoma; purchased from the American Type Culture Collection (ATCC, Manassas, VA)), were chosen, as these cells have been used previously to study mucin production [24]. The medium used was RPMI 1640 + 10% FBS with penicillin/streptomycin and amphotericin added, and cells were maintained in a humidified air/5% CO2 environment until they reached ~70% confluence before they were transfected with the miR-21 inhibitor or mimic. Forty-eight hrs post transfection, cells were exposed to a range of concentrations of LPS and responses related to miR-21 and MARCKS expression and function monitored, as described below. MiR-21 Inhibitor and miR-21 Mimic To alter miR-21 levels in these cells, we utilized both an anti-miR-21 inhibitor and a pre-miR-21 activator, both purchased from Ambion (Foster City, CA). MicroRNA inhibitors are small, chemically modified single-stranded RNA molecules designed to specifically bind to and inhibit endogenous miRNA molecules and enable miRNA functional analysis by down-regulation of miRNA activity and endogenous miRNA function after transfection into cells. For these studies, we utilized the mirVana® miRNA inhibitor containing the hsa-miR-21-5p sequence: UAGCUUAUCAGACUGAUGUUGA. In contrast, miRNA mimics are small, chemically modified double-stranded RNAs that mimic endogenous miRNAs and enable miRNA functional analysis by up-regulation of miRNA activity. Here, we utilized the mirVana® miRNA mimic (also from Ambion) containing the stem loop sequence: GUCGGGUACAUCGACUGAUGUUGACUGUUGAAUCUCAUGGCAACACCAGUCGAUGGGCUGUCUGACA. Effective use of these reagents in other cell types has been described previously [25]. Transfections HBE1 and NCI-H292 cells were grown, submerged in medium, in plastic wells to approximately 70% confluence, and at that point transfected with either the miR-21 inhibitor or activator. Transfections were performed using Qiagen (Roche, Indianapolis, IN) "HiPerFect" transfection reagent, a unique blend of cationic and neutral lipids suited to both low- and high-throughput transfection of miRNA mimics or inhibitors, according to the manufacturer's protocol. Forty-eight hours later, cells were exposed to either a range of concentrations of LPS or control media, and appropriate experiments performed. Analysis of mRNA Expression via RT-PCR NHBE cells were treated with 100 ng/ml of LPS for various time periods. Cells were harvested and RNA was extracted with an RNeasy kit (Qiagen). For miRNA analysis, real-time qPCR was carried out on an iQ5 Detection System (Bio-Rad) using 5 ng RNA input and 2 × iQ SYBR Green Supermix (Bio-Rad). For mRNA detection, RNA input, 2 × iQ SYBR Green Supermix, and gene-specific primer pairs were likewise used. Thermal cycling conditions were 95˚C for 3 minutes, 40 cycles at 95˚C for 10 seconds, and 55˚C for 30 seconds, followed by melting curve analyses. RNA input was normalized to endogenous controls: beta-actin or 36B4. The 2^−ΔΔCt method was used to calculate the fold relationships in miRNA expression among the tested samples. Analysis of Protein Expression via Western Blot Western blots were used to evaluate levels of MARCKS within the cells after exposure to LPS or control media.
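For readers who want to see the arithmetic, the 2^−ΔΔCt normalization described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' analysis code; the Ct values shown are hypothetical, and the reference gene is assumed to be one of the endogenous controls named in the text (beta-actin or 36B4).

    import numpy as np

    def fold_change_ddct(ct_target_treated, ct_ref_treated,
                         ct_target_control, ct_ref_control):
        """Relative expression by the 2^-ddCt method.

        Each argument is a list/array of Ct values from replicate qPCR wells.
        The reference gene normalizes for RNA input between samples.
        """
        # dCt = Ct(target) - Ct(reference), averaged over replicates
        dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
        dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
        # ddCt = dCt(treated) - dCt(control); fold change = 2^-ddCt
        ddct = dct_treated - dct_control
        return 2.0 ** (-ddct)

    # Hypothetical Ct values for a target (e.g., MARCKS) and a reference gene
    print(fold_change_ddct([24.1, 24.3, 24.0], [17.2, 17.1, 17.3],
                           [25.6, 25.5, 25.7], [17.0, 17.2, 17.1]))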
Measurements of Mucin Secretion via ELISA Mucin was collected and assayed as described previously [3]. Briefly, after the treatment period, medium was collected and the content of secreted mucin (measured as the major respiratory mucin, MUC5AC) quantified via a sandwich enzyme-linked immunosorbent assay using an antibody to MUC5AC (Neomarkers, Fremont, CA) as the capture antibody with the reporter antibody being a 17Q2 pan-mucin antibody [26,23]. The 17Q2 antibody was purified from murine ascites fluid (Covance, Gaithersburg, MD) and further purified using an Immuno-Pure(G) IgG purification kit (Pierce Biotechnology, Rockford, IL) following the manufacturer's protocol and then conjugated with alkaline phosphatase (EMD Biosciences). The ELISA substrate was 4-nitrophenyl phosphate (Sigma, Saint Louis, MO). Statistical Analysis GraphPad Prism® software was used to perform unpaired two-tailed Student's t-tests as indicated in the figure legends. Exposure to LPS Enhances Expression of Both miR-21 and MARCKS in Human Airway Epithelium As illustrated in Figures 1 and 2, exposure to LPS increased expression of both miR-21 and MARCKS in NHBE and HBE1 cells, whereas transfection of the miR-21 mimic coincided with decreased expression of MARCKS RNA (Figure 4). Mucin Secretion by NCI-H292 Cells Is Affected by the miR-21 Inhibitor and Mimic, and Further by Peptides Targeting MARCKS Protein As illustrated in Figure 5, LPS (100 ng/ml) increased mucin secretion by NCI-H292 cells after 30 min exposure. Pretreatment of the cells with the mirVana® miR-21 inhibitor resulted in higher levels of secreted mucin in response to LPS than cells exposed to the HiPerFect transfection reagent used as a control, while pretreatment with the mirVana® pre-miR activator decreased mucin secretion in response to LPS compared to cells without the activator. Additional pretreatment of the cells for 30 min with 50 µM of the MANS peptide, a reagent that inhibits function of MARCKS in airway epithelial cells [3,6], further attenuated the mucin secretory response, implicating MARCKS in the secretory pathway, as described previously [27]. Discussion MicroRNAs are fairly short (21-25 nucleotides in length) strands of non-coding RNA. They serve an important regulatory role in proliferation [8], differentiation [9], development and migration [10], angiogenesis [11], apoptosis [12] and carcinogenesis [14]. In humans and other mammals, miRNAs bind to the 3' untranslated region of their target gene, forming an imperfect complement. This serves to act as a repressor of translation. Human miR-21, mapped at chromosome 17q23.2, and present within the protein-coding gene VMP1 (or TMEM49), is one of the most extensively studied microRNAs, as it has been associated with both cancer and inflammation. MiR-21 targets many tumor suppressor genes, such as PTEN, PDCD4, Tropomyosin, TGFBRII, RhoB, Bcl2, IL-12 and CDK2AP1 [28,29], and has been shown to stimulate invasion, intravasation, and metastasis [14].
MiR-21 also has been associated with inflammation. It inhibits the TGF-β signaling pathway, which is known to inhibit adipogenesis and stimulate inflammation [29,30]. It has been shown to specifically target TGF-beta 1 and TGF-beta receptors [15]. Upregulation of miR-21 causes cell proliferation, while downregulation allows cells to stop dividing and/or undergo apoptosis. MiR-21 is a trigger for fibroblast dysfunction and fibrosis and is upregulated in cardiac infarctions [18]. It is also upregulated in individuals with idiopathic pulmonary fibrosis, and in lungs of mice with bleomycin-induced fibrosis. A possible target for miR-21 in fibrosis is Smad7, an inhibitory Smad, which is an important regulator of TGF-β. MiR-21 prevents Smad7 from being made, which stops TGF-β from being inhibited. This allows Smad3 to become activated, increasing collagenase activity and ultimately leading to increased deposition of collagen in the lung parenchyma [30]. Interestingly, miR-21 recently has been shown to also directly target MARCKS protein, binding to the 3' untranslated region of MARCKS from nt 713-734, a region of MARCKS highly conserved among species. Since in previous studies from this laboratory MARCKS has been shown to be an important regulatory molecule in the process of airway mucin secretion as well as inflammation [3,6,31], we looked here at a possible connection between airway inflammation, mucin secretion, MARCKS and miR-21. In studies using primary well-differentiated normal human bronchial epithelial cells cultured at an air/liquid interface, and using exposure to LPS as a model of inflammation, we found that, indeed, LPS exposure increased expression of miR-21. Coincident with that increase, expression of MARCKS protein also was enhanced by LPS, but MARCKS expression plateaued after approximately 6 hours of exposure, while miR-21 expression continued to increase. This suggested that miR-21 might be downregulating expression of MARCKS in these cells, which could have a downstream effect of attenuating mucin secretion since MARCKS is integral to the mucin secretory pathway. Thus, we performed additional studies utilizing a commercially-available miR-21 inhibitor as well as a miR-21 activator. Since these studies required efficient transfection of these reagents into cells, we switched the model system from primary NHBE cells to the papilloma virus-transformed HBE1 cell line, as described previously [23]. LPS exposure had the same effect on HBE1 cells, increasing expression of both miR-21 and MARCKS. Treatment with the miR-21 inhibitor significantly decreased levels of miR-21 in the cells with or without exposure to LPS, and this coincided with an increase in expression of MARCKS at the mRNA and protein levels. In contrast, treatment of HBE1 cells with the miR-21 mimic resulted in downregulation of expression of MARCKS under baseline conditions and after exposure of the cells to LPS. Thus, it appears from these findings that miR-21 may act as a negative regulator of MARCKS expression in airway epithelial cells, similar to its role as a negative regulator of MARCKS in prostate cancer cells [20]. One could speculate that miR-21 functions as part of a negative-feedback mechanism that buffers cellular responses to inflammatory stimuli.
Since MARCKS has been shown to be integral to the mucin secretory process, we then examined how expression of miR-21 and subsequent regulation of MARCKS expression could affect mucin secretion. We turned to a second cell line for these studies, the NCI-H292 cell line, derived from a human mucoepidermoid tumor, since these cells are excellent models for airway mucin secretion, especially of MUC5AC, the predominant human airway mucin [1,2,23]. The results of the secretion studies supported the potential anti-inflammatory role of miR-21 in airway epithelium, as treatment with the miR-21 inhibitor, which increases MARCKS expression, also provoked secretion of mucin by cells treated with LPS, while treatment with the miR-21 mimic, which downregulates MARCKS expression, resulted in decreased mucin secretion. To ascertain that MARCKS was indeed functionally associated with the secretory responses, cells were pretreated with the MANS peptide, a reagent that is identical to the evolutionarily-conserved N-terminus of MARCKS and which has been shown to inhibit mucin secretion and other functions of MARCKS [3,6,[31][32][33][34]; this pretreatment reduced secretion in cells treated with either the control HiPerFect transfection reagent, the miR-21 inhibitor, or the miR-21 mimic/activator. In summary, it appears that inflammatory stimulation of airway epithelial cells, in this case by exposure to LPS, provokes enhanced expression of the microRNA, miR-21. MiR-21 then appears to target MARCKS mRNA, decreasing levels of MARCKS protein in these cells, and via this mechanism apparently also decreases the mucin secretory response to LPS. These results, while limited to in vitro studies, suggest that miR-21, as well as MARCKS, might be therapeutic targets for treatment of respiratory diseases characterized by mucus hypersecretion. Figure 2. (a) Expression of miR-21 in HBE1 cells is increased in a concentration-dependent manner after exposure to LPS for 6 hrs, with significant increases between 100 and 1000 ng/ml. (* = p < 0.05, using Student's t-test, n = 3); (b) Protein expression of MARCKS also is increased at 500 and 1000 ng/ml LPS; (c) Protein expression of MARCKS increases at 4 hrs post LPS (500 ng/ml) exposure and plateaus thereafter, similar to what is observed in NHBE cells illustrated in Figure 1. Figure 4. (a) Transfection of the mirVana® miR-21 inhibitor (200 nM) or the mirVana® pre-miR-21 mimic (200 nM) into HBE1 cells for 48 hrs, followed by examination via RT-PCR of MARCKS mRNA expression in these cells, shows that treatment with the inhibitor increases mRNA expression of MARCKS, while treatment with the mimic decreases it; (b) When these cells were treated with 500 ng/ml of LPS for 6 hrs, mRNA expression of MARCKS was enhanced in cells transfected with the inhibitor and slightly decreased in cells treated with the mimic. (* = p < 0.05 using Student's t-test, n = 3). Figure 5.
Effects of miR-21 inhibitor and mimic, and of a MARCKS-inhibitory peptide (MANS) on secretion of mucin (MUC5AC) by NCI-H292 cells exposed to 500 ng/ml LPS. NCI-H292 cells were transfected with the mirVana® miR-21 inhibitor or with the mirVana® pre-miR-21 mimic, both at 200 nM, for 48 hrs, then treated with LPS and mucin secretion measured by ELISA as described. Transfection with the inhibitor increased secretion, while transfection with the mimic decreased secretion. Preincubation of cells for 30 min with 50 µM of the MANS peptide, which inhibits function of MARCKS protein, attenuated secretion in cells whether they were exposed to the HiPerFect® transfection reagent only, to the miR-21 inhibitor, or to the miR-21 mimic/activator, implicating MARCKS in the secretory response. Cells transfected with the miR-21 mimic secrete significantly less mucus when treated with MANS peptide. (* = p < 0.001 using Student's t-test.) Values are means ± SE, n = 6 at each point.
4,057.2
2013-05-17T00:00:00.000
[ "Medicine", "Biology" ]
Molecular Dynamics Examination of Sliding History-Dependent Adhesion in Si–Si Nanocontacts: Connecting Friction, Wear, Bond Formation, and Interfacial Adhesion We simulate the contact between nanoscale hydrogen-terminated, single-crystal silicon asperities and surfaces using reactive molecular dynamics (MD) simulations. The results are consistent with recent experimental observations of a more than order-of-magnitude sliding-induced increase in interfacial adhesion for silicon-silicon nanocontact experiments obtained using in situ transmission electron microscopy (TEM). In particular, the MD simulations support the hypothesis that the increased adhesion results from sliding-induced removal of passivating species, in this case hydrogen, followed by rapid formation of Si–Si covalent bonds across the interface, with little plastic deformation of the asperities. The MD results concur with the additional hypothesis that subsequent readsorption of passivating species explains the experimental observation that adhesion reverts to low values upon subsequent contact. However, the simulations further reveal that the sliding-induced adhesion increase is only observed when there are a sufficient number of preexisting surface defects in the form of incomplete hydrogen coverage. Increased hydrogen coverage suppresses interfacial bonding within the time span of the simulations. Furthermore, the relative alignment of the surface crystal axes plays a strong role in affecting the probability of bond formation during sliding and the subsequent adhesive pull-off force. Also, the hydrogen coverage and sliding distance significantly impact friction at low to moderate hydrogen coverages. Atomic-scale wear does occur during the sliding process, primarily through Si–Si bond formation across the interface followed by pull-out of Si atoms from the tip. At low hydrogen coverages, wear is far more severe, Archard’s wear law is obeyed, and significant morphological changes of the asperity occur. The bond formation process is highly stochastic, but shows a general trend of greater numbers of bonds with greater sliding distances. Tips wear by losing large clusters of material, then smaller clusters and individual atoms, and eventually enter into a wearless regime as hydrogen termination increases. A hydrogen-terminated Si tip (green and blue) in sliding contact with a hydrogen-terminated Si substrate (yellow and red). The sliding direction is indicated by the black arrow. At this level of hydrogen termination, wear is initiated by the removal of hydrogen atoms from the tip (blue atoms at left of figure). Continued sliding causes the formation of interfacial Si-Si bonds followed by the transfer of Si and H from the tip to the surface. Introduction Buried solid interfaces, which are difficult or impossible to directly observe, often undergo complex processes which manifest as wear, friction, and adhesion. Uncovering the mechanisms underlying these tribological processes is critical for understanding phenomena occurring on a large range of length scales, from modeling and predicting earthquakes [1][2][3][4] to making more durable tools for extreme environments [5][6][7]. At the nanoscale, such knowledge can benefit emerging scanning probe-based manufacturing techniques such as pick-and-place printing [8] and tip-based nanolithography [9][10][11], which have the potential to supplement or disrupt traditional nanolithography techniques. Prof. Mark O.
Robbins was a pioneer and a master in elucidating the fundamental processes in tribology, both through atomistic and multiscale simulations of these processes at the buried interface, and through his deep, physically based insights which were broadly applicable to systems in very generalizable yet accessible ways. He considered multiple forms of matter in tribology: hard contacts in dry conditions (e.g., Refs. [12,13]), and with contaminants (e.g., Refs. [14]); liquid lubricants under compression and shear (e.g., Refs. [15][16][17]); and soft materials in contact, including the change in behavior as one goes from soft or compliant to hard and stiff (e.g., Refs. [18,19]). Prof. Robbins was particularly eager to support and promote collaborations and comparisons between experiments and simulations, as he valued the validation experiments could bring to simulations, and understood the unique insights simulations could bring to experiments where the buried interface was hidden from view. He also supported and promoted experimental work that pushed the boundaries of established methods, and took an interest in in situ studies. All of the authors of this paper benefited from Prof. Robbins' feedback, questions, challenges, and insights on our prior simulation and in situ experimental tribology work. Inspired by his accomplishments and interest in linking simulations with in situ experiments, here we have collaborated to provide a study that uses these approaches to gain new insights into mechanisms of contact, adhesion, friction, and wear for silicon-silicon nanocontacts. Silicon is an element found in all of the previously mentioned probe-based manufacturing examples and, though silicon and its compounds are one of the most widely utilized and studied materials, understanding of the chemistry at solid silicon and silicon compound interfaces subjected to applied forces is still lacking. This deficit in tribochemical knowledge is underlined, for example, by the recent observation of an unexpected phenomenon, whereby the magnitude of adhesion between oxide-free silicon asperities, measured in a high vacuum transmission electron microscope (TEM), was reproducibly observed to depend on whether sliding had occurred prior to separation; the so-called "reversible" or sliding history-dependent adhesion [20]. In those experiments, the oxide on the tip was fractured off or removed by sliding in the TEM immediately before experiments, and did not regrow. Adhesion was small when measured with the traditional indentation test, i.e., with no sliding, but on average more than one order of magnitude larger when measured after some sliding had occurred. It was hypothesized that sliding removes a passivating layer, exposing bare and reactive silicon atoms on opposite surfaces to one another, allowing them to form covalent bonds. These covalent bonds require a larger force to separate than can be attributed to purely van der Waals attractions, the latter being the source of adhesion in tests without sliding and the former with sliding. Thus, the adhesion or "pull-off" force would be greater for sliding, often by varying amounts depending on the trial, indicating a stochastic nature to the bond formation events, similar to what was observed in MD simulations of DLC-diamond nanocontacts [21]. Even with the capability to observe the edges of the silicon contacts in TEM, the contact interface was always buried, precluding observation of the cause of the reversible adhesion phenomenon.
In this work, we present results from molecular dynamics (MD) simulations that explore the buried silicon interface in the presence of varying hydrogen termination concentrations. The simulations show that hydrogen plays a key role in the adhesive behavior of silicon, with and without sliding. Computational Details Classical MD simulations were conducted to better understand the atomistic processes that control adhesion and friction. The simulations were designed to match the experimental conditions as outlined in the Si-Si nanocontact study by Milne et al. [20] as closely as possible. While it is possible to match variables such as material types, applied loads, and tip size, matching some variables, such as sliding speed, is still not feasible. While experimental contact sizes are typically larger than those used in MD simulations, creative approaches such as modified tip geometries can be used to match contact sizes [22]. Using this approach, the simulations provided insight into the observed experimental trends and the mechanisms at play. This approach has been successful in a number of previous comparisons between experimental nanocontact experiments and MD simulations of adhesion [21][22][23][24][25]. Simulation Setup and Configuration A representative simulation setup is illustrated in Fig. 1. A hydrogen-terminated silicon (111) surface, 15 nm by 15 nm in the contact plane, and 2.5 nm thick, was used as the contacting surface. The silicon used in the simulations was terminated with hydrogen based on results from previously published measurements using the same experimental setup, which indicate that the silicon indenter, like typical silicon crystals, is largely passivated with hydrogen with a 1 × 1 adsorbate structure due to exposure to ambient conditions [26,27]. In experiments, this termination may not be complete and may include defects, which are known to affect adhesion of silicon and diamond surfaces (note that diamond has a similar 1 × 1 H-termination to silicon) [21,[28][29][30]. The presence of defects in the experiments was modeled by changing the percentage of hydrogen termination in the simulations. Other contaminants, especially water and hydrocarbons, are likely present in the experiments. Fully modeling the effect of these other species with reliable reactive potentials is beyond the scope of the present work. Also, oxide growth is unlikely under the experimental conditions described by Milne et al., and the oxide layer initially present on the Si surfaces was removed prior to conducting their TEM experiments [20]. Moreover, as we show below, the generic mechanism of increased adhesion is sliding-induced removal of passivating species, which could operate in a similar manner regardless of adsorbate species. Thus, our goal here is to use H as a prototypical passivating species to examine whether the resulting mechanism is consistent with the experimental observations. The method used to generate tips for the MD simulation has been described in several previous publications [21,23,[31][32][33]. The tip is modeled as an axisymmetric punch with a power-law profile. The power-law profile is described in cylindrical coordinates as z(r) = r^N / (N·Q), where z is the vertical coordinate or height, r is the radial coordinate, and N is the power-law index. Following the convention of Grierson et al.
[33], the substitution Q = S^(N−2) R^(N−1) is used here, where S is a dimensionless parameter that describes the steepness of the power-law tip profile and R is an effective tip radius with dimension of length regardless of N. The tips used in this study had initial values of N = 5, S = 1, and R = 2.5 nm to carve a tip from a block of silicon oriented such that the tip axis is along the [111] crystallographic direction. The tip height is 3.0 nm. The experimental AFM tips in Milne et al. are also (111) terminated, but best fits of the power-law profile had a value of N = 2 [20]. Slightly flatter tips (i.e., with a higher power-law exponent) were used in the MD simulations because the simulated tip is smaller than the experimental tips due to computational constraints mentioned above, which would result in significantly fewer atoms coming into contact with the substrate in the simulations than in the experiments. To compensate for this difference, the value of N used in the simulations was increased, allowing more atoms to come into contact. The as-cut tip has a flat, atomically smooth (111) crystal facet at its apex. After cutting, the tip was annealed to promote surface reconstruction, particularly on the sides of the tip. One other change from the experimental configuration is the use of one tip and one flat surface, instead of two tips in contact. This simplifies the geometry of the problem, permitting sliding to be constrained to one linear direction, and thus facilitated easier interpretations of the mechanisms at play. Furthermore, most of the experiments involved lateral sliding distances that were small compared to the tip radius. As well, the experiments included cases where one tip was much larger in radius than the other. The experimental behavior was consistent among all of these cases. Thus, we do not expect this modification of the geometry to have any substantial effect on the results. Surface sites on both the tip and the substrate were randomly selected for hydrogen termination so that both had equal percentage coverages. Matching hydrogen coverages of 20, 40, 50, 60, 80, 90, and 100% were examined here; the example shown in Fig. 1 is for 20% coverage. Adhesion is known to exhibit a dependence on crystallographic orientation and alignment, particularly at the atomic scale. Atomic-scale friction may also show a dependence on sliding direction [34][35][36][37][38][39][40]. To account for these orientation effects, two tip orientations were investigated. In the first case, the crystallographic orientation of the tip and substrate are aligned such that the [110] directions of the tip and substrate are both parallel to the x-axis. In the non-rotated tip case, the two surfaces are in registry and fit together almost perfectly as a matched pair, analogous to a lock and key. In the second case, the crystallographic axes are misaligned by rotating the tip by 90° about the tip's vertical axis such that the [110] axis of the substrate remains in the x-direction and the [110] axis of the tip is now in the y-direction. In this case, the two opposing surfaces are out of registry. The non-rotated and rotated tips were selected to represent two possible extremes in both adhesion and friction, with the non-rotated tips expected to be higher due to in-registry lattice locking [29]. Throughout the remainder of this paper, the aligned tip is referred to as "non-rotated" and the misaligned tip is referred to as "90° rotated".
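The power-law tip shape and the carving step can be illustrated with a short Python sketch. This is not the authors' tip-generation code; it assumes the profile z(r) = r^N/(N·Q) with Q = S^(N−2)·R^(N−1) as given above, and the atom-selection criterion (keep atoms that lie above the profile and below the tip height) is a plausible reading of the procedure rather than a documented detail.

    import numpy as np

    def tip_height(r, N=5, S=1.0, R=2.5):
        """Power-law tip profile z(r) = r^N / (N * Q), Q = S^(N-2) * R^(N-1).

        r and R are in nm, S is dimensionless, N is the power-law index.
        """
        Q = S ** (N - 2) * R ** (N - 1)
        return r ** N / (N * Q)

    def carve_tip(atoms, N=5, S=1.0, R=2.5, height=3.0):
        """Keep atoms of a Si block lying inside the axisymmetric punch.

        `atoms` is an (n, 3) array of (x, y, z) positions in nm with the tip
        axis along z (standing in for the [111] direction). An atom is kept
        if it sits above the profile surface and below the 3.0 nm tip height.
        """
        x, y, z = atoms[:, 0], atoms[:, 1], atoms[:, 2]
        r = np.sqrt(x ** 2 + y ** 2)
        keep = (z >= tip_height(r, N, S, R)) & (z <= height)
        return atoms[keep]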
Before starting the simulations, the lowest atom on the tip was placed 0.50 nm above the highest atom on the substrate. The ReaxFF potential was used to model interatomic forces with a recently updated parameter set for Si/C/H containing systems that was optimized for Si surfaces [41]. The ReaxFF is a reactive empirical bond-order-dependent potential that includes non-bonded van der Waals and Coulomb interactions with variable charge. The bond order and non-bonded interactions make the ReaxFF potential valuable for studying friction and adhesion of materials. The ReaxFF potential has been ported to the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [42]. The LAMMPS MD code has domain-decomposition schemes that allow parallel simulations to run efficiently on thousands of processors. The equations of motion were integrated using the velocity Verlet algorithm [43] in LAMMPS. Simulation of Adhesion and Friction Simulations were carried out by first minimizing the energy of the tip-substrate system using a conjugate gradient minimization scheme in LAMMPS [43,44] to allow for tip and surface reconstruction after tip cutting and H-termination. Next, a series of Indent-Hold-Retract (IHR) simulations were carried out on all of these tip-substrate systems in the following way. The system was first thermalized at 300 K for 25 ps to ensure a uniform distribution of temperature throughout the system. Temperature was maintained at 300 K by applying a Berendsen thermostat [45] to the atoms in the red layers shown in Fig. 1. A constant velocity of 0.020 nm/ps was then applied to the rigid layer of the tip (blue atoms) to move the tip closer to the substrate until a preset target load of 10.0 nN was achieved. The tip was then held at the target load for 10,000 steps, or 2.5 ps, and then retracted from the substrate by moving the rigid layers of the tip at a constant velocity of 0.020 nm/ps away from the substrate. The IHR simulations described above allow for adhesion and covalent bonding between the tip and the substrate to be studied in the absence of sliding. Because it is difficult to bring the tip and the substrate together in a perfectly perpendicular arrangement during an AFM experiment, some sliding, or lateral motion of the tip, does occur whenever the tip and substrate come into contact. To study the effect of sliding on adhesion, tips were also slid for 10.0 nm under a constant load of 10.0 nN at 0.020 nm/ps. The velocity is applied such that the tip travels 3 units in the x-direction for every 1 unit in the y-direction, as indicated by the vector V shown in Fig. 1. This ensures that the periodic image of the tip does not pass over previously worn areas of the substrate and, therefore, allows for a larger sliding distance with a smaller system size. The effect of sliding direction relative to the substrate on friction was not considered in this work but will be the subject of a subsequent publication. Fig. 1 A sample simulation starting configuration with a Si (111) tip situated above a Si (111) substrate. Hydrogen was randomly added to the tip and substrate until the desired coverage (in the case shown here, 20%) was achieved on both the tip and surface. Rigid layers are shown in dark blue and thermostated regions are shown in red. Free Si and H atoms, where the equations of motion are integrated without any constraints, are represented as yellow and gray spheres, respectively. The size of the atoms corresponds to their van der Waals (vdW) radii.
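The indent-hold-retract protocol just described amounts to a simple state machine for the prescribed motion of the rigid tip layer. The plain-Python sketch below only illustrates that control logic under the parameters quoted in the text (0.020 nm/ps approach/retract speed, 10.0 nN target load, 2.5 ps hold); it is not the LAMMPS input actually used in the study.

    def ihr_velocity(phase, v=0.020):
        """Velocity (nm/ps) prescribed to the rigid tip layer in each IHR phase."""
        return {"approach": -v, "hold": 0.0, "retract": +v}[phase]

    def next_phase(phase, load_nN, hold_time_ps,
                   target_load=10.0, hold_duration=2.5):
        """Advance the indent-hold-retract state machine.

        'approach' runs until the measured normal load reaches the target,
        'hold' lasts 2.5 ps at constant load, then 'retract' moves the rigid
        layer away from the substrate at the same speed.
        """
        if phase == "approach" and load_nN >= target_load:
            return "hold"
        if phase == "hold" and hold_time_ps >= hold_duration:
            return "retract"
        return phase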
In the prior experiments, in situ measurement of adhesion during sliding is not possible. Rather, to measure adhesion, sliding is interrupted, the tip is separated from the surface, and then the tip placed back in contact before resuming sliding. Due to instrument drift, repositioning the tip exactly where it left off is difficult or impossible. In addition, removing the tip from the substrate to measure adhesion may result in transfer of atoms from the tip to the surface, or vice versa, which means that the tip is not exactly the same before and after breaking contact. MD simulation has an advantage over experiment in that the coordinates of all atoms are known at all times. These coordinates can then be captured at specified times and used as starting configurations for new sets of simulations with no interruption of the sliding process. In the simulations described here, the atomic coordinates were captured after 2.5, 5.0, 7.5 and 10.0 nm of sliding. These snapshots were then used to determine the adhesion at these points. The simulation procedure for each subset was as follows. First, the center of mass velocity of the tip was set to zero to remove the sliding velocity. The tip was then held in place at a constant load of 10.0 nN for 2.5 ps and then retracted from the substrate by moving the rigid layers upwards at a constant velocity of 0.020 nm/ps. As mentioned above, instrument drift makes it difficult or impossible to place the tip back in exactly the same spot after the tip is removed for an adhesion measurement. To determine the degree to which this affects adhesion, the tips were removed from the surface after 10 nm of sliding and then replaced over a new location on the substrate that had not been disturbed by sliding, then the IHR sequence was repeated. The simulation sequence is illustrated schematically in Fig. 2. Fig. 2 a The tip is slid 10 nm laterally over the Si surface. Adhesion is measured before sliding and after sliding at various distances. b Friction forces are monitored over the entire sliding sequence. c To measure adhesion, the normal force as a function of tip separation is collected at each sliding interval indicated in a and the resulting data analyzed as described in the text. The hydrogen termination in this system is 50% and, in c, a 21-point running average filter was applied to the force data. Adhesion is measured in terms of the pull-off force and taken to be the minimum in the force versus displacement curve upon retraction of the tip from the substrate, which is the same definition that has been used in previous publications [21,31,46]. An example set of normal force versus vertical tip displacement curves for each point along the slide is illustrated in Fig. 2c. In the absence of covalent bonding, the pullback portion of the force curve is smooth and there is typically a single, well-defined minimum, which has also been observed in other tip-substrate simulations [31]. In the presence of covalent bonding between the tip and the substrate, the force curve can have multiple extrema arising from the formation and subsequent breaking of covalent bonds [46], complicating the definition of pull-off force. The conditions leading to the formation of these interfacial bonds are discussed below. In all cases, the reported pull-off force is taken to be the largest minimum in the pullback force curve. Likewise, in the experiments, the maximum observed tensile force before separation is used as the reported adhesion force.
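A minimal sketch of how the pull-off force might be extracted from a retraction trace is given below, assuming force-versus-separation arrays and the 21-point running-average smoothing mentioned in the Fig. 2 caption; the function name and interface are illustrative, not taken from the authors' analysis scripts.

    import numpy as np

    def pull_off_force(separation, normal_force, window=21):
        """Pull-off force from a retraction force-displacement trace.

        separation : tip displacements during retraction (nm)
        normal_force : corresponding normal forces (nN); attraction is negative
        A 21-point running average smooths thermal noise, and the deepest
        minimum of the smoothed curve is reported as the pull-off force
        (returned as a positive number) together with its location.
        """
        kernel = np.ones(window) / window
        smoothed = np.convolve(np.asarray(normal_force, float), kernel, mode="same")
        i_min = int(np.argmin(smoothed))
        return -smoothed[i_min], separation[i_min]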
Because energy is dissipated as the covalent bonds are broken during pullback, we further quantify differences in the force versus distance curves by integrating the area below zero force to produce the work expended to separate the tip and surface, and dividing by the area of contact measured at a nominal load. Strictly speaking, this does not correspond to the work of adhesion given the shape of the contacting bodies; however, as the tip is rather flat, the value obtained is approximately equal to the work of adhesion. More broadly, it provides an approximate way to compare the energy per unit area dissipated when separating the tip from the surface. Adhesion in the Absence of Sliding The IHR simulations allow for the examination of the effects of hydrogen termination and tip registry on adhesion in the absence of sliding. Fig. 3a shows the pull-off forces for non-rotated and 90° rotated tips as a function of hydrogen termination. Several notable observations are apparent from the data shown in Fig. 3a. First, reducing the hydrogen coverage generally increases pull-off forces. This behavior has been observed in previous MD simulations for diamond-diamond, diamond-like carbon (DLC)-diamond, and ultrananocrystalline diamond (UNCD)-diamond surfaces in contact [29,47], as well as an AFM-MD study examining adhesion between a DLC tip and a diamond (111) substrate [21,23], and silicon (111) tip and diamond (111) substrates [46]. MD simulations have also examined the effects of roughness, hydrogen termination and material type on adhesion between hydrogen-terminated DLC and UNCD tips interacting with diamond (111), DLC, and UNCD surfaces, with and without H [31]. Note that the present study differs from the previous examinations not only in the identity of the tip-substrate couple, but because the degree of hydrogen termination on the tip and surface is matched. This increase in pull-off force as a function of hydrogen termination is attributed to the increase in unsaturated Si sites available for covalent bonding as H-termination is decreased, and to the associated small changes in roughness that allow tip and substrate atoms to approach more closely, giving rise to an increased opportunity for bonding. The number of Si-Si bonds between the tip and substrate as a function of hydrogen coverage is shown in Fig. 3b. Fig. 3 a Pull-off (adhesion) forces from Si-Si contacts as a function of percent H-termination for the initial indent-hold-retract (IHR) simulations. b The number of interfacial Si-Si bonds between tip and substrate just before tip retraction for the same initial IHR simulations. In this analysis, Si-Si pairs are assumed to be bonded when they are within 0.2775 nm of each other, and silicon-hydrogen pairs are assumed to be bonded when a Si-H pair is within 0.1975 nm of each other. These values are chosen to be half-way between the minimum and maximum covalent cut-off values of bond lengths in the second generation REBO potential for Si, C, and H, i.e., the point where the bond cut-off interaction term F_ij is equal to 0.5 for the respective bonds [48]. For diamond cubic Si, the 0.2775 nm distance corresponds to approximately one quarter of the way between first and second nearest neighbors and represents a reasonable limit for the range of covalent bonding. For reference, the Si-Si equilibrium bond length in diamond cubic Si is 0.2351 nm [49] and the Si-H equilibrium bond length in SiH4 is 0.148 nm [50]. As expected, the number of bonds present during contact increases with decreasing H-termination.
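Counting interfacial bonds with the distance criteria just described reduces to a pair-distance check; a minimal sketch follows. The cutoffs are the ones quoted in the text (0.2775 nm for Si-Si, 0.1975 nm for Si-H); the brute-force pair search and the neglect of periodic images are simplifications for illustration only.

    import numpy as np

    # Cutoffs quoted in the text: half-way between the REBO covalent cut-offs.
    SI_SI_CUT = 0.2775  # nm
    SI_H_CUT = 0.1975   # nm

    def count_interfacial_si_si_bonds(tip_si, substrate_si, cutoff=SI_SI_CUT):
        """Count Si-Si pairs across the interface closer than the cutoff.

        tip_si, substrate_si : (n, 3) and (m, 3) arrays of positions in nm.
        A brute-force O(n*m) distance check is sufficient at these system
        sizes; a cell list would scale better for larger systems.
        """
        diffs = np.asarray(tip_si)[:, None, :] - np.asarray(substrate_si)[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        return int(np.count_nonzero(dists < cutoff))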
As has been shown previously, covalent bond formation between the tip and the substrate has a stochastic component and the variability of the bond formation increases as the hydrogen content decreases [21]. This gives rise to the small amount of scatter in the plots. Second, the pull-off forces tend to be higher for the non-rotated tips. This is consistent with prior studies on other H-terminated materials. Specifically, Piotrowski et al. [29] examined the work of adhesion when infinitely flat, self-mated diamond surfaces were brought into contact. When atoms on opposing surfaces were directly above and below each other, the work of adhesion was lower because the hydrogenated surfaces could not come into as close proximity. The non-rotated tip used here does not suffer from this constraint. Figure 4 shows the contact patch while the tip is under 10 nN of load. The first two atomic layers of the silicon substrate, the surface hydrogen, and tip hydrogen atoms are shown in yellow, blue, and red, respectively. For visualization purposes, all tip Si and H outside of the contact zone are hidden. Close-up views of the cross section through the contact are shown in the insets of Fig. 4. Visualization of the full MD trajectory indicates that the apex of the tip translates laterally by a small amount so that the H at the tip apex settles into (111) FCC hollow sites on the surface (see Fig. 4a). Surface H atoms likewise fit into the opposing (111) FCC hollow sites on the tip apex. This tight interlocking of the tip and substrate results in lower potential energies and thus larger adhesive forces for the non-rotated case. There are two reasons why this may happen in Si, but was not observed in the previous simulations using diamond. First, a finite-sized silicon tip is more flexible than an infinite slab of diamond and allows for lateral accommodation. This effect may not be observed even with a finite-sized diamond tip due to the fact that diamond is much stiffer than Si. Second, the (111) FCC hollow sites are larger for silicon simply because the lattice constant of Si is larger than that of diamond (0.543 nm vs. 0.3567 nm) [49]. When the tip is rotated 90° this interlocking is no longer possible due to lattice mismatch, as illustrated in Fig. 4b. As briefly mentioned, the approximate work of adhesion W_adh for 100% terminated surfaces was calculated as W_adh = −(1/A) ∫ F dr (Eq. 2), where the integral of the force F with respect to tip separation, r, is carried out for the attractive portion of the loading curve (where F < 0) during separation of the tip from the substrate. For our calculation, the contact area A is estimated by the polygon inscribed by the red H atoms shown in Fig. 4, while the tip is loaded at 10 nN. Intermittent contact due to thermal fluctuations and change in area during the retraction process are thus ignored in this approximation. As noted earlier, this is an approximation and others have noted that this approach may not accurately predict the contact area [13,51,52]. This approach takes advantage of the fact that the N = 5 tips studied here are nearly flat at their ends, and the change in contact area during retraction is abrupt as it transitions from full contact to no contact. For comparison, we also use adhesive contact mechanics to determine the work of adhesion, although such models are based on continuum mechanics and thus ignore the atomistic details of the system. In fact, M.O. Robbins was one of the first to show that there can be a significant breakdown in these continuum models [12,13].
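Eq. 2 can be evaluated numerically from the retraction force-displacement data, for example with a trapezoidal integration as sketched below. The function is only an illustration of the stated procedure; in the paper the contact area A is estimated from the polygon inscribed by the interfacial H atoms at the 10 nN load.

    import numpy as np

    def work_of_adhesion(separation, normal_force, contact_area):
        """Approximate W_adh from the retraction curve (Eq. 2).

        Only the attractive portion (F < 0) of the force is integrated over
        tip separation, and the result is divided by the contact area
        estimated at the nominal load. With force in nN, separation in nm
        and area in nm^2, the result is in nN/nm, which equals J/m^2.
        """
        f = np.where(np.asarray(normal_force, float) < 0.0, normal_force, 0.0)
        work = -np.trapz(f, separation)  # nN*nm expended to separate the bodies
        return work / contact_area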
By considering both approaches, we aim to provide reasonable estimates of the true value for W_adh. From Eq. 2, W_adh for the non-rotated and 90° rotated tips with 100% H-termination was calculated as 85 mJ/m² and 12 mJ/m², respectively. Using the continuum mechanics method of Grierson et al. [33], which treats the tips as rigid power-law profiles in the Derjaguin-Müller-Toporov limit [53], we obtain values of 85 mJ/m² and 45 mJ/m², respectively. The two methods agree well, particularly for the non-rotated tip. The work of adhesion values are also reasonable when compared with other studies. In experimental studies, Ljungberg et al. [54] reported a range of surface energies between 12 and 20 mJ/m² for H-terminated Si (001). In that work, surfaces were treated in different concentrations of HF to remove the native oxide, which results in surfaces with a high degree of H-termination. For identical materials, W_adh was taken as twice the surface energy, putting the range of W_adh reported by Ljungberg between 24 and 40 mJ/m². First principles calculations by Zhang et al. [55] have determined that the difference between H-Si (001) and H-Si (111) surfaces is less than 5%, with H-Si (111) having the lower surface energy. The experiments in Milne et al. [20] produced an average W_adh value of 8 mJ/m² for (111) oriented Si AFM probes. W_adh for the 90° rotated tips reported here falls within this range of experimentally measured values. W_adh for the non-rotated tip is well above the range of reported experimental values. However, the near perfect alignment of the crystal lattices between the tip and substrate for the non-rotated system makes it an extreme upper limit for W_adh that is unlikely to be observed experimentally due to the difficulty in making contact with the two surfaces in perfect alignment. Adhesion in the Presence of Sliding Milne et al. [20] showed that sliding of two Si tips increased adhesion relative to no sliding. The pull-off force measured was an average of 19 times higher with sliding than without, and in some cases adhesion forces after sliding were hundreds of times larger than the average value obtained without sliding. The simulations presented here show similar results. The pull-off forces obtained after each sliding increment are plotted as a function of distance slid (Fig. 5a). For better statistics, the non-rotated and 90° rotated results have been averaged. The pull-off force tends to increase with increasing sliding distance, as in experiment. Note that the magnitude of the increase is considerably smaller in the simulations, but so is the sliding distance (up to 10 nm in simulations vs. tens of nm to a few µm in the experiments), and the contact time (due to the much slower sliding speeds in the experiments). Thus, it is expected that the MD simulations would not produce increases in adhesion as large as those in the experiments. To consider this further, we performed an analysis comparing the trends in the work of adhesion values for the experiments and the simulations. We used the DMT theory to determine the work of adhesion values for the experiments (appropriate since the tips were reasonably fit with parabolic profiles), and the method of Grierson et al. [33] to estimate corresponding work of adhesion values for the simulations. Reassuringly, we find that there is reasonable, order-of-magnitude agreement between the experimental and simulation results.
For all of the experiments, the work of adhesion values with sliding versus without increased by an average ratio of 30.6. For the simulations, we fit linear trendlines for work of adhesion versus sliding distance from least squares fits, and extrapolated to find the work of adhesion value at the mean sliding distance for all of the experiments. The resulting average increase in the work of adhesion depended on the H coverage, and was found to be 25, 15, and 12 for 80%, 90%, and 100% H coverage, respectively. Note that we compare to these high percentages of H coverage due to the fact that at 60% H coverage and below, tip wear seen in the simulations is far more significant than that seen in the experiments. Because the experimental results involved a range of randomly varied sliding distances, and different pairs of tips with different radii, we analyzed the experimental results further. Using a least squares fit, we determined the average percent increase in the work of adhesion per nm of sliding for all of the experimental measurements, and found the value to be 2.2% per nm of sliding. For comparison, in the simulations, the results are 2.2%, 1.2%, and 1.0% per nm of sliding for 80%, 90%, and 100% H coverage, respectively. This is reasonable agreement between the experiments and simulations; while it does not, in and of itself, fully validate that the simulation is capturing all essential mechanisms in the experiments, the consistency between them demonstrates that the interpretation of the experimental results based on the simulations is plausible. The hypothesized mechanism for the increase in pull-off force seen in the experiments comes from removal of passivating species (e.g., hydrogen or hydroxyl groups) during sliding. The initial passivation was hypothesized to occur when fresh silicon surfaces were created in the experiments by removing the native oxide by fracture induced by contact and sliding; the bare silicon surfaces, when separated, then become exposed to the rarefied environment of the TEM, which nevertheless still hosts species such as hydrogen and water [56]. These and other species will dissociatively chemisorb onto the bare Si surfaces [57,58]. Sliding disrupts this passivation and increases the likelihood of covalent bond formation. We observe wear in the form of displacement and transfer of atoms from one surface to the other, including removal of H atoms and formation of Si-Si covalent bonds across the interface. In systems with the lowest hydrogen termination, the tips undergo the most wear while sliding, and have the largest number of covalent bonds between the tip and the surface just prior to pullback, which manifests itself as a large pull-off force. Fig. 5 a Average pull-off force as a function of sliding distance and b average pull-off force as a function of %H-termination. Data for non-rotated and 90° rotated tips have been averaged in both panels. The effect of hydrogen termination on pull-off force is strong, with the two closely correlated (see Fig. 5b). The links between pull-off force and covalent bond formation are discussed further below. The location of hydrogen atoms initially belonging to the tip and substrate can be tracked separately during the entire simulation. Examination of the atomic trajectories during sliding indicates that the tip picks up some hydrogen from the surface, and likewise deposits hydrogen back onto the surface at different points in the simulation.
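This kind of provenance tracking reduces to bookkeeping over atom identities. A minimal, hypothetical sketch of the net H transfer count (positive when the tip loses H to the substrate on balance, matching the sign convention of Fig. 6b) is given below; in practice the assignment of each H atom to "tip" or "substrate" would come from the bonding analysis of the trajectory.

    import numpy as np

    def net_h_transfer(initial_owner, final_location):
        """Net number of H atoms transferred from the tip to the substrate.

        initial_owner : per-atom labels ('tip' or 'substrate') at the start.
        final_location : per-atom labels for the body each H atom ends up
            bonded to (or nearest to) after sliding.
        Positive values mean the tip lost H on balance.
        """
        owner = np.asarray(initial_owner)
        loc = np.asarray(final_location)
        tip_to_sub = np.count_nonzero((owner == "tip") & (loc == "substrate"))
        sub_to_tip = np.count_nonzero((owner == "substrate") & (loc == "tip"))
        return int(tip_to_sub - sub_to_tip)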
This exchange tends to favor removal of hydrogen from the tip and its deposition onto the substrate. An example atomic trajectory for the 50% H-terminated tip and substrate is shown in Fig. 6a. In this snapshot, the tip was slid from left to right for a distance of 5 nm. The tip was then retracted from the surface. The tip and substrate Si atoms are colored green and yellow, respectively, and the tip and substrate H atoms are colored blue and red, respectively, so that the transfer of material is apparent. The MD trajectories suggest that during sliding there is an initial removal of H from the tip when contact is first made, followed by transfer of Si from the tip to the substrate as sliding progresses, which results in tip wear. The net transfer of H atoms from the tip to the substrate as a function of H-termination is shown in Fig. 6b. It is clear that there is an increase in transfer of H from the tip to the substrate as sliding distance increases and that the rate of transfer generally increases with decreasing H-termination as long as there is H to be transferred. The maximum H transfer is at 40% H-termination. Adhesion generally increases with more sliding. This is due, in part, to hydrogen removal which leads to the formation of interfacial Si-Si bonds between the tip and the substrate. This is apparent from Fig. 7. The pull-off force is shown in Fig. 7a as a function of the number of bonds in the contact just prior to pullback, and the number of bonds as a function of sliding distance is shown in Fig. 7b. The H-terminations corresponding to each data set are given in the legend. The values for the non-rotated and 90° rotated tips are averaged. Data sets for each tip alignment show the same trends with a bit more scatter. For systems with high H-termination few if any Si-Si interfacial bonds form regardless of sliding distance; however, pull-off forces up to approximately 100 nN are observed. This is, perhaps, indicative of the relative importance of the contributing factors governing the pull-off force, for example, the force to break the Si-Si bond and the force to overcome the van der Waals interaction. At low numbers of bonds, the van der Waals interaction contributes more to the overall pull-off force. As the number of Si-Si bonds between the tip and the substrate increases, the force to rupture these bonds begins to dominate the pull-off force. It is also worth noting that although it is possible to increase the pull-off force with sliding distance for some systems, it is also possible to obtain small pull-off forces despite sliding 10.0 nm, for example. The low pull-off forces shown in Fig. 5 correspond to systems with large degrees of hydrogen termination where there is little wear and thus minimal Si-Si bond formation between the tip and the sample. In this case, van der Waals interactions dominate the pull-off force. When Si-Si interfacial bonds are formed, both the sliding distance and the H-termination, in addition to tip wear (discussed below), impact the pull-off force. Reversible Adhesion In experiments, the adhesion increase is observed to be reversible after separating the surfaces. As mentioned earlier, after sliding adhesion is high, with an average of 19× higher adhesion with sliding than without. However, if adhesion is subsequently measured without sliding, with a lower bound of 5 s delay between adhesion measurements, the original low adhesion values are recovered. Milne et al.
[20] hypothesized that, in this interim time where tips are separated, the surfaces are repassivated with hydrogen or hydroxyl groups from the dissociative chemisorption of molecular hydrogen, water, or other trace contaminants in the TEM chamber, which lowers the adhesion [56][57][58]. Previous experiments have established that water and molecular hydrogen will indeed dissociatively chemisorb on clean Si surfaces. The repassivation process is beyond the time scale accessible to MD simulation, so this hypothesis could not be tested directly. However, it is possible to replace the tips on artificially re-terminated substrates. After sliding 10.0 nm, 11 of the 14 tips were removed from the substrate and placed on a fresh substrate with hydrogen coverage that matches the original tip termination. This is referred to as the "reset" condition. The 20% hydrogen non-rotated, 90° rotated tips, and 40% non-rotated tips were too severely worn after sliding to use any further, so they were not included in this post-sliding analysis. These severely worn MD tips were also not used because the experiments of Milne et al. show minimal wear [20]. The simulated tips were then brought into contact with a fresh Si surface (with the hydrogen coverage still matching the original hydrogen coverage of the previous simulation run) and retracted once more using the same procedure used for the initial IHR simulations as described in Sect. 2. The results are shown in Fig. 8. In Fig. 8a, the initial pull-off force (no sliding) and the maximum pull-off force are taken from Fig. 5a and replotted together. It is worth noting the maximum pull-off force may occur at different sliding distances depending on the hydrogen coverage, see for example Fig. 5b. That is, the maximum in the pull-off force does not always occur for longer sliding distances. While this is generally true for the lowest hydrogen terminations, where tip wear is greatest, the pull-off force is most strongly linked to the number of bonds in the contact prior to pull-off, as shown in Fig. 7. The number of bonds in the contact prior to pull-off results from the interplay of tip wear, which is linked to hydrogen termination, sliding distance, and the stochastic nature of bond formation [21]. The relative change in the pull-off force with respect to the initial pull-off is plotted in Fig. 8b. After sliding, the adhesion increases considerably compared to the non-sliding case. When the previously used tips are placed on fresh surfaces, the adhesion recovers or in some cases is lower than the initial adhesion, where negative numbers indicate a decrease in adhesion relative to the initial value. Fig. 6 a An example of atom transfer between tip and substrate shown for a 50% H-terminated tip-surface pair after sliding for 5 nm. To illustrate atom transfer, the tip and substrate Si atoms are colored green and yellow, respectively, and the tip and substrate H atoms are colored blue and red, respectively. b The net transfer of H atoms from the tip to the substrate as a function of sliding distance averaged over both tip orientations. Thus, positive numbers indicate that the tip has experienced a net loss of atoms, while negative numbers indicate the surface has experienced the net loss of atoms. Hydrogen coverage is shown in the legend.
On average, the MD results reproduce the trend of the experiments, i.e., that a tip, after sliding and then separated from the opposing surface, will exhibit reduced adhesion when brought back into contact with a "refreshed" part of the opposing surface. In the simulations, no additional H atoms were added on to the tip, even though readsorption of passivating species will occur on both tip and surface in the experiment upon separation. Despite this, the recovery of low adhesion is still observed because the tip is interacting with an area of the surface unperturbed by sliding. In some cases, adhesion is even lower after the "reset" than in the initial IHR simulations, most notably for the 40% and 50% H coverage. Fluctuations in the pull-off force values are not surprising given the stochastic nature of the bond formation events. Moreover, such variations are particularly unsurprising at low H coverage values, where more interfacial bond formation events occur due to the larger number of unsaturated Si bonds on the surface. They are also consistent with the distribution in adhesion values reported in the experiments [20]. Furthermore, the higher degree of roughness present on the tips after sliding, which is most prominent at lower H coverages, could also lead to variations in the adhesion force measured in the "reset" configuration. It is also plausible that frictional sliding creates a charge imbalance between the two bodies of silicon; however, the AFM tips used in the experimental studies (Mikromasch CSC37; Sofia, Bulgaria) were heavily n-doped, making the build-up of charge unlikely. This hypothesis was not considered in simulation. Fig. 7 a Average pull-off force versus the number of bonds in the contact after sliding under 10.0 nN of load and at the time of pullback. b Average number of bonds as a function of sliding distance. Hydrogen termination is given in the legend. Data in both panels for non-rotated and rotated 90° tips at each distance and hydrogen termination are averaged. An important contention of the passivation hypothesis, and our claim that these MD simulations are broadly supportive of it, is that conditions exist where relatively little wear occurs even though covalent bond formation and subsequent breakage has occurred. As mentioned above, the simulations do find that, at low H coverages, substantial wear and modification of the tip geometry can occur. However, much more modest wear occurs even though a sliding-induced increase of adhesion is observed, such as for 60%, 80%, and 90% initial hydrogen coverage levels. Testing the validity of this contention requires examining the atomic level structure of the tip surface in the experiments after sliding and separation has occurred. This was stated in our prior experimental report [20], but to further illustrate this, here we show an example where high-resolution TEM images with atomic resolution are obtained. Video 1 in the SI shows one such in situ TEM test of bare silicon sliding against bare silicon. The reversible adhesion phenomenon occurred in this case. In this video, the apex of the upper probe is magnified post contact, showing an extremely smooth and oxide-free Si surface with a high degree of order. Atomic features are seen, with roughness at the atomic level. This shows that atomic order and flatness are preserved, even though high adhesion attributed to covalent bond breaking has occurred, consistent with the MD simulations at intermediate hydrogen coverages.
Nanoscale debris features are occasionally seen in the video; these are present at the edges of the sliding zone and may be either from contaminants or from accumulated Si atoms that have been displaced by the mild wear process. Regardless, because these features lie at the edge and not within the sliding zone, it is assumed that they do not have any primary influence on the adhesion behavior observed.

Friction and Wear
It has been shown both experimentally [30,34,35,[59][60][61][62][63][64] and using MD simulations that ordered substrates can display friction anisotropy and that, in the absence of wear, the friction traces display periodicity that is a direct result of the potential energy landscape encountered by the sliding tip/surface. These effects have been observed in MD simulations of the friction of diamond versus diamond [34,36,65], in ordered self-assembled monolayer surfaces [66][67][68], in MoS2 [69,70], in carbon nanotubes [38], and in other systems [39,71]. In the MD simulations presented here, these effects are present and will be reported elsewhere. Here, the focus is on changes in average friction and wear that occur as a function of sliding distance, hydrogen termination, and tip-substrate alignment when sliding in a fixed direction relative to the lower surface. The average friction force during sliding is shown as a function of sliding distance in Fig. 9a and as a function of hydrogen coverage in Fig. 9b. Several aspects should be noted. First, for hydrogen terminations of 60% and lower, friction increases approximately linearly with sliding distance. The rate of increase is similar for the 20-40% hydrogen terminations, but is larger than that for the 50-60% terminations. Second, the friction is independent of sliding distance for hydrogen terminations of 80% and higher. Insight into these differences can be gained by examining the net Si atom transfer between the tip and the flat surface as a function of sliding distance, shown in Fig. 10a. For the 20 and 40% hydrogen terminations, the net Si atom loss is a nearly linear function of sliding distance, and smaller hydrogen terminations yield larger slopes. Thus, the linear increase in friction with distance for these low hydrogen terminations is largely driven by the formation of Si-Si covalent bonds between the tip and the substrate during sliding. Sliding introduces stress in these interfacial Si-Si bonds, which increases as the tip moves farther away from the bonding site on the surface. Eventually, the stress induced by sliding becomes large enough to rupture these Si-Si linkages between the tip and the surface; at other times, the Si-Si linkage persists and a different Si-Si bond breaks within the tip, which results in wear of the tip.

Fig. 8 a Initial pull-off, maximum pull-off during sliding, and pull-off force after resetting the tips on a fresh surface for each H-termination. b The increase in pull-off force after sliding and then resetting the tip on a fresh surface relative to the initial pull-off force.

This type of mechanism for breaking covalent bonds between surfaces was first observed in MD simulations by Harrison and Brenner [72]. In that work, two diamond surfaces with chemisorbed groups were in sliding contact. Hydrogen atoms were sheared from the chemisorbed groups. These free atoms were then able to extract hydrogen from the diamond surfaces, creating a site at which a covalent bond could form between the surface and the chemisorbed group.
Continued sliding caused the rupture of the carbon-carbon bonds between the chemisorbed group and the opposing surface, removal of the chemisorbed group from the diamond, and left hydrocarbon debris between the surfaces. The wear of self-mated diamond contacts was later examined using MD and the REBO + S potential by Pastewka et al. They found that an amorphous layer of sp2-hybridized carbon forms during polishing and that the growth rate depends on the sliding direction [73]. Subsequent MD simulations have also observed the formation of covalent linkages between DLC surfaces sliding against diamond [74,75] and against DLC surfaces [76][77][78][79]. For instance, Schall et al. examined the effects of unsaturated sites, i.e., incomplete hydrogen passivation, on the friction between DLC surfaces. Unsaturated carbon atoms served as initiation points for covalent-bond formation between the two surfaces, resulting in an increase in adhesion and an increase in friction. The formation and breaking of covalent bonds at the interface during sliding resulted in material transfer and changes in hybridization of the carbon. Friction increased as the covalent linkages underwent strain and decreased when these bonds broke [76]. Quantum chemical simulations were used to examine the wear of diamond(110) in contact with silica. Those simulations show that bonding to silica chemically activates the C-C bonds in diamond, which leads to wear [80]. Tribochemical reactions and tip wear have also been observed in MD simulations of Si tips in sliding contact with diamond [46,81] and of DLC tips in sliding contact with both diamond [21] and DLC surfaces [82,83]. Taken together, these simulations have shown that unsaturated bonds in covalent materials, either present initially or generated during sliding, serve as initiation sites for interfacial covalent-bond formation and thereby drive adhesion, friction, and wear.

The rate of Si atom removal, i.e., the slope in Fig. 10a, is markedly less for the 50% and 60% hydrogen terminations than for the 20 and 40% terminations. Fewer unsaturated sites are present initially, and fewer are generated by sliding, as evidenced by the reduced rate of hydrogen removal for these two hydrogen terminations compared to the 20 and 40% terminations (Fig. 6b). This reduction in the formation of covalent linkages between the tip and the substrate reduces the total friction force by reducing the component of friction arising from tip wear. Lastly, for the highest hydrogen terminations (80-100%), friction is low and independent of sliding distance. For these terminations, there is very little wear (Fig. 10a and b); only one Si atom was removed from the tip, in the form of Si-H, at the highest hydrogen coverages. While a handful of hydrogen atoms are sheared from the tip at these high coverages, too few unsaturated sites exist on the tip and sample to facilitate the formation of interfacial Si-Si covalent bonds between the tip and substrate. Thus, the sliding distance has little impact on friction (Fig. 9) because the tip is largely unchanged, i.e., unworn, and is continually sliding over a pristine surface, and energy is dissipated via stick-slip instabilities that release vibrational energy as phonons. Archard's wear law [84] states that the volume of material lost during sliding is proportional to the normal load and the sliding distance. While Archard's phenomenological wear law holds for macroscopic contacts, it has been shown repeatedly not to apply to nanoscale contacts [85].
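For reference, the textbook macroscopic form of Archard's law can be written as below, where V is the worn volume, F_N the normal load, d the sliding distance, H the hardness of the softer body, and K a dimensionless wear coefficient; this is the standard statement of the law quoted for context, not an equation taken from this work.

```latex
V \;=\; K\,\frac{F_{N}\, d}{H},
\qquad \text{i.e.} \qquad
\frac{\mathrm{d}V}{\mathrm{d}d} \propto F_{N}\ \text{at fixed hardness } H .
```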
For example, atomic force microscope (AFM) studies have been used to study wear in covalent materials using Si tips sliding against diamond [24,46]; DLC tips doped with Si sliding against silicon oxide [86] and ultrananocrystalline diamond [87]; silicon nitride against a variety of substrates [88]; DLC tips sliding against a DLC surface [89]; and Si tips sliding on polymeric surfaces [90]. Archard's law was not obeyed in any of these studies. Gotsmann and Lantz [90] observed that the rate of atom removal depended exponentially on stress and proposed that wear occurred via atom-by-atom attrition. This type of behavior was also observed for Si tips sliding against a diamond surface [24]. In contrast, atomistic simulations of tip-surface sliding have reproduced Archard's law behavior, at least under some conditions [82,83,91].

Fig. 10 a Average number of Si atoms transferred between the tip and the substrate versus the sliding distance. Hydrogen termination percentages on both the tip and the substrate are given in the legend. b Average total number of Si and H atoms transferred between the tip and the substrate as a function of hydrogen termination. Total tip sliding distance is given in the legend.

Fig. 11 The net number of interfacial Si-Si bonds between the tip and the substrate as a function of sliding distance for the non-rotated tips. Hydrogen coverages are shown in the legend. All coverages are shown in a and the higher coverages are shown in b.

For example, MD simulations of DLC tips sliding against DLC surfaces have shown that the number of atoms lost from the tip is linearly related to sliding distance for loads up to approximately 100 nN. In addition, tip wear occurred via the loss of clusters that were worn from the trailing edge of the tip. Dai et al. [83] also carried out MD simulations of DLC tips sliding against DLC surfaces. They were able to recover Archard's law under certain conditions and asserted that failure to recover the law could be attributed to a transition of the wear mechanism from isolated atom-by-atom wear to cluster-based wear of a blunted tip. Similarly, Shao et al. modeled DLC-diamond contacts and observed a transition from atom-by-atom wear to Archard-like behavior at higher stresses [92]. Fig. 10a shows the number of Si atoms transferred from the tip to the surface, or worn, during sliding as a function of sliding distance. For hydrogen terminations of 60% and below, Archard's wear law is obeyed in that a linear relationship between wear and sliding distance is observed. As noted above, the rate of atom removal is dependent upon hydrogen termination, increasing with decreasing hydrogen coverage and thus with the availability of unsaturated Si atoms that can form covalent bonds between the tip and the substrate. For large hydrogen terminations, there is very little wear. A more complete examination of whether Archard-like behavior is seen would require examining the load or stress dependence of wear, which is beyond the scope of the current work.

Wear Mechanisms
Wear of the tip was analyzed by plotting the instantaneous number of covalent bonds between the tip and the substrate as a function of sliding distance. The data for all hydrogen coverages of the non-rotated tips are shown in Fig. 11. This plot makes clear the correlation between friction and the formation of tip-substrate covalent bonds. This correlation between friction and interfacial covalent bonds has been observed previously for DLC surfaces in sliding contact with diamond [74] and DLC [76] surfaces.
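The bond-count analysis just described (Fig. 11) amounts to counting, frame by frame, tip-Si/substrate-Si pairs closer than a covalent bonding cutoff. A minimal sketch of that counting step is given below; the 2.6 Å cutoff and the per-frame data structure are assumptions for illustration, not the criterion actually used with ReaxFF in this work.

```python
import numpy as np

SI_SI_CUTOFF = 2.6  # Angstrom; illustrative covalent-bond cutoff (assumed)

def count_interfacial_bonds(tip_si_xyz: np.ndarray, sub_si_xyz: np.ndarray) -> int:
    """Count tip-Si / substrate-Si pairs within the bonding cutoff for one frame."""
    # Pairwise distances between the two atom groups (N_tip x N_sub matrix).
    d = np.linalg.norm(tip_si_xyz[:, None, :] - sub_si_xyz[None, :, :], axis=-1)
    return int(np.count_nonzero(d < SI_SI_CUTOFF))

def bonds_vs_distance(frames, slide_per_frame_nm: float):
    """Return (sliding distance, bond count) for a list of (tip_xyz, sub_xyz) frames."""
    counts = [count_interfacial_bonds(tip, sub) for tip, sub in frames]
    distance = np.arange(len(frames)) * slide_per_frame_nm
    return distance, np.asarray(counts)
```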
The two hydrogen coverages with the highest friction, 20% and 40%, form bonds immediately upon contact with the substrate, as evidenced by the non-zero number of bonds at 0 nm (Fig. 11a). After approximately 1 nm of sliding, the number of bonds for the two lowest coverages increases dramatically with sliding distance. This is due to the wearing away of tip material and the exposure of a larger area of the tip, which can then form bonds with the substrate. The transfer of material to the substrate and the evolution of the tip structure were analyzed by examining MD trajectories of the evolution of the tip apex during the course of sliding. Figure 12 shows still images at the end of 10.0 nm of sliding from four simulations that used the non-rotated tip. Silicon atoms are colored according to their initial distance from the Si substrate, with red atoms being the closest. Hydrogen atoms are colored yellow. At the 20% hydrogen coverage (Fig. 12a), a few hydrogen atoms are transferred initially to the substrate. After about 1 nm of sliding, large sections of Si begin to be removed from the tip during sliding. The MD movies (Video 2, SI) also indicate that slipping, and wear, of an entire layer of Si atoms in the tip can occur. Increasing the hydrogen coverage to 50% and then 60% allows for a more gradual onset of wear, as evidenced by the slower increase in the number of tip-substrate bonds as a function of distance for those hydrogen terminations (Fig. 11a). This reduction in interfacial bonds is concomitant with a reduction in friction for the 50% and 60% terminations compared to the lower H coverages. Analysis of the MD trajectory (Video 3, SI) reveals that the wear mechanism for the 50% hydrogen-terminated tip differs from that of the 20% terminated tip.

Fig. 12 The Si tips have been rotated so that they are being viewed looking up from the bottom at the apex (along the +z direction as indicated in Fig. 1). For clarity, the Si substrate is not shown. Si atoms are colored by z coordinate. Red is closest to the substrate, followed by gray, and blue is furthest from the Si substrate. Hydrogen atoms are shown in yellow. Hydrogen coverages are shown in the legends.

In this case, the trajectories clearly show several individual hydrogen atoms being removed from the tip prior to any loss of Si atoms. The majority of these hydrogens were originally located around the circumference of the contact patch, the point where the tip transitions from its flat apex to its curved shank. This is followed by the loss of two clusters of Si atoms, one Si8H2 and the other Si2H, from the trailing edge of the tip. These clusters are evident at the far left of the snapshot in Fig. 12b. Continued sliding causes the loss of two additional multi-atom clusters. While material is lost from the trailing edge of the tip, it is also clear that tip material can move through the contact from as far as the front third of the tip and eventually be removed. Removal of clusters of atoms from the trailing edge of the tip was also observed in MD simulations of DLC tips sliding on DLC substrates [82]. Comparison of the still images in Fig. 12a and b demonstrates that wear of the 50% hydrogen-terminated tip was significantly less than what was observed for the 20% terminated tip. For the 20% terminated tip, the tip layer that began closest to the substrate (red spheres) is almost entirely worn away, while much of this layer remains for the 50% terminated tip. Wear is further reduced with additional H-termination.
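The cluster losses noted above (e.g., the Si8H2 and Si2H fragments shed from the trailing edge) can be identified by grouping transferred atoms into connected components under a bond-length cutoff, which also distinguishes atom-by-atom attrition (size-1 clusters) from cluster-based wear. The sketch below uses a simple breadth-first search; the cutoff value and input format are illustrative assumptions.

```python
import numpy as np
from collections import deque

def wear_clusters(transferred_xyz: np.ndarray, cutoff: float = 2.6):
    """Group transferred atoms into clusters (connected components under `cutoff`).

    Returns a list of index lists, one per cluster, so the cluster-size
    distribution separates single-atom attrition from multi-atom cluster wear.
    """
    n = len(transferred_xyz)
    d = np.linalg.norm(transferred_xyz[:, None, :] - transferred_xyz[None, :, :], axis=-1)
    adjacency = (d < cutoff) & ~np.eye(n, dtype=bool)
    unvisited, clusters = set(range(n)), []
    while unvisited:
        seed = unvisited.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            for j in list(unvisited):
                if adjacency[i, j]:
                    unvisited.remove(j)
                    queue.append(j)
                    members.append(j)
        clusters.append(members)
    return clusters
```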
Only small Si clusters are transferred after 10 nm of sliding at 60% H-termination (Fig. 12c). Again, the wear process starts with the removal of H around the circumference of the contact patch, followed by the removal of small clusters of atoms (Fig. 12c). Above 80% H-termination, only a few H atoms are transferred from the tip to unsaturated Si sites on the substrate, with little or no transfer of Si atoms, and the number of bonds formed during sliding shows a very modest increase with time (Fig. 11b). In all cases, hydrogen is removed first from the outer circumferential edge of the contact patch (see, for example, Fig. 12d and Video 4, SI for the 90% H-terminated tip). In the case of the 100% H-terminated system, a single H atom was abstracted from the surface and deposited at the edge of the contact. The removal or addition of H at the edges of the contact is likely the result of local structural disorder and local strains due to surface reconstruction, which lowers the energy barrier for removing the H or, in the absence of H, makes the tip edges more reactive. For high H-termination (>80% H), even though hydrogen is removed during sliding, there is still enough H present in the interface to provide separation between the tip and substrate and to prevent covalent bonding between tip and substrate Si (Fig. 11b). Perhaps, with additional sliding, the continued removal of H from the tip could result in an increase in adhesion due to the exposure of unsaturated Si.

Summary and Conclusions
In summary, ReaxFF molecular dynamics simulations were used to study adhesion for silicon nanocontacts, specifically examining the dependence of adhesion on hydrogen coverage, relative orientation of the two Si crystalline surfaces in contact, and the degree of prior sliding. This work sheds light on the mechanisms of the reversible adhesion phenomena observed experimentally by Milne et al. [20] between passivated, oxide-free, nanoscale silicon surfaces in contact in vacuum. The MD simulations presented here show that wear, adhesion, and friction, with and without sliding, depend strongly on the concentration of terminating hydrogen atoms, on whether sliding occurred, and on the formation of interfacial Si-Si bonds. In particular, lower hydrogen coverages and longer sliding distances both generally lead to an increased number of interfacial Si-Si covalent bonds, which in turn increases adhesion markedly. The bond formation and breaking processes during sliding are stochastic, so fluctuations in the total number of bonds occur, and the dependence of adhesion on sliding distance is not perfectly monotonic, but the general increasing trend is clear. Wear is present in the simulations, and at low H-terminations it is severe. As H-termination increases, wear transitions from the loss of large clusters or complete layers of material, to small clusters, to the removal of H atoms, and eventually to a nearly wearless regime at complete H-termination. Moreover, after resetting the hydrogen termination post sliding (by making contact with a fresh portion of the surface) and then performing adhesion measurements without further sliding, it was observed in the simulations that adhesion returned, on average, to the relatively low values measured prior to sliding, mirroring the reversible adhesion phenomenon observed experimentally. A high-resolution TEM movie of one of the Si surfaces brought into contact in such an experiment reveals an extremely flat, well-ordered surface with roughness present only at the atomic scale.
This is consistent with the simulations of surfaces at intermediate H coverage values, where sliding can increase adhesion while only slightly modifying the Si surface. In addition to providing a potential explanation for the reversible adhesion seen by Milne et al. [20], this work contributes to the growing body of tribochemistry research that explores the effects of energetic inputs, such as sliding and applied stress, on the chemistry at buried interfaces.
14,238
2021-04-09T00:00:00.000
[ "Materials Science", "Engineering" ]
Reoptimized UNRES Potential for Protein Model Quality Assessment
Ranking protein structure models is an elusive problem in bioinformatics. These models are evaluated on both the degree of similarity to the native structure and the folding pathway. Here, we simulated the use of the coarse-grained UNited RESidue (UNRES) force field as a tool to choose the best protein structure models for a given protein sequence among a pool of candidate models, using server data from the CASP11 experiment. Because the original UNRES was optimized for Molecular Dynamics simulations, we reoptimized UNRES using a deep feed-forward neural network, and we show that introducing additional descriptive features can produce better results. Overall, we found that the reoptimized UNRES performs better in selecting the best structures and tracking protein unwinding from its native state. We also found a relatively poor correlation between UNRES values and the model's Template Modeling Score (TMS). This is remedied by reoptimization. We discuss some cases where our reoptimization procedure is useful.

Introduction
The problem of evaluating protein energy and scoring protein conformations has been an important aspect of protein research. The energy and scoring functions serve both to guide protein simulation studies and to rank putative protein models. There are two main approaches used, categorized as physical and knowledge-based [25]. In the physical approach, an energy function is built based on a physical model of the atomic interactions and is then optimized based on experimental results. In the knowledge-based approach, the model itself relies on experimental results, typically by matching experimental distributions. Selecting the best models among putative models is an important application of a protein energy or scoring function [2][3][4][5][6][7][8][9][10][12][13][14][15][16][18][19][20][21][22][23][24]. In such cases, models tailored to a specific sequence are produced, and the task is to rank them according to a specified criterion, usually a measure of the deviation from the native structure corresponding to the given sequence. In this respect, there is a strong overall match between the energy values and the spatial deviation-from-native scores, such as the Template Modeling Score (TMS) [26]. However, this correspondence is not exact. While energies such as the coarse-grained UNited RESidue (UNRES) force field account for charge distributions, the TMS and similar measures do not. One can imagine transient states arising in cases where the charge distribution has a different transition time than the timescale associated with a structural change. However, protein native and decoy structures are steady states, i.e., they are allowed enough time to relax and escape any unfavorable transient states. Therefore, unstable configurations with lower TMSs but large energies are expected to be excluded. Taking ensemble averages would tend to decrease this effect further. The current UNRES was optimized to carry out free simulations and not to score decoys [27][28][29]. An attempt at threading was made with a very early version of UNRES [9]; however, even this application involved decoy energy minimization. Given the success of UNRES in free simulations, we considered it worth trying this force field in decoy scoring. It should be noted that free simulations imply that the computed structures are relaxed; if not for all configurations, then at least for the final ones.
Decoy structures are fixed and, therefore, clashes arising from side-chain-side-chain interactions can, in general, appear. Consequently, to better design UNRES for decoy scoring, the long-range repulsive components of the potentials need to be better regulated, and a new optimization of UNRES could improve its performance in decoy scoring. To optimize UNRES for this purpose, we applied a methodology based on neural networks, developed in our earlier work [18,30].

UNited RESidue (UNRES)
UNRES [31][32][33][34] is a coarse-grained model for proteins in which each amino-acid residue is reduced to two interaction sites: the united peptide group (p), located halfway between two consecutive Cα atoms (which are not interaction sites and are only used to define the geometry of the chain), and the united side chain (SC) (Figure 1). Due to the model's reduced number of interaction sites and the averaging out of the secondary degrees of freedom, the UNRES force field provides a speedup of at least three orders of magnitude compared with all-atom simulations [35]. The effective energy function in the UNRES model is defined as the restricted free energy (RFE) or the potential of mean force (PMF) and is given by Equation (1), where each potential is multiplied by an appropriate weight, w, and these weights are optimized. A detailed description of UNRES is provided elsewhere [36]. In Equation (1), U_SCiSCj and U_SCipj are the side-chain-side-chain and side-chain-peptide-group interaction potentials, respectively. The peptide-group-peptide-group interaction potential is split into a Lennard-Jones term (U^VDW_pipj) and an electrostatic term (U^el_pipj). The local properties of the polypeptide chain are described by the U_tor, U_tord, U_b, U_rot, and U_bond potentials, which are the torsional, double-torsional, bending, rotameric, and virtual-bond-deformation terms, respectively. U_corr and U_turn are higher-order correlation terms that are necessary for the correct reproduction of secondary-structure elements [37], U_ssbond is a disulfide bond potential, and U_SC-corr is a recently implemented potential that couples the local positions of the backbone and side chains, which improves the predictive capacities of the UNRES force field [38,39]. Additionally, because the UNRES energy function originates from the PMF of polypeptide chains in water, in which the fine-grained degrees of freedom have been averaged out, it is temperature-dependent. The factors f_i arise from multiplying the terms of the respective order in the cluster-cumulant expansion of the PMF [37]. Because the current implementation of UNRES involves scoring decoys, which correspond to folded structures, and not folding simulations, we set the temperature at T = 300 K, assuming that all proteins considered are folded at this temperature. The UNRES model uses an anisotropic potential for the interactions between side chains, which are represented by the Gay-Berne model [40]. This model allows for a more accurate approximation of the side-chain interactions than simpler spherical models.

Figure 1 There are two interaction sites per residue: the united side chain (SC) and the united peptide group (p), represented by light-gray ellipses and dark-gray circles, respectively. Cα atoms (white circles) and the angles β, α, Θ, and γ define the positions of the backbone and side chains.
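For orientation, Equation (1) combines the terms just listed into a weighted sum. A sketch of its general form is given below, following the published UNRES literature [36,37] for the placement of the temperature factors f_2(T), f_3(T), and f_m(T); that placement is assumed here and is not a quotation of this paper's equation.

```latex
U \;=\; w_{\mathrm{SCSC}} \sum_{i<j} U_{\mathrm{SC}_i\mathrm{SC}_j}
  \;+\; w_{\mathrm{SCp}} \sum_{i \ne j} U_{\mathrm{SC}_i p_j}
  \;+\; w_{\mathrm{el}}\, f_2(T) \sum_{i<j-1} U^{\mathrm{el}}_{p_i p_j}
  \;+\; w_{\mathrm{VDW}} \sum_{i<j-1} U^{\mathrm{VDW}}_{p_i p_j}
  \;+\; w_{\mathrm{tor}}\, f_2(T) \sum_i U_{\mathrm{tor}}
  \;+\; w_{\mathrm{tord}}\, f_3(T) \sum_i U_{\mathrm{tord}}
  \;+\; w_{b} \sum_i U_{b}
  \;+\; w_{\mathrm{rot}} \sum_i U_{\mathrm{rot}}
  \;+\; w_{\mathrm{bond}} \sum_i U_{\mathrm{bond}}
  \;+\; \sum_{m} w^{(m)}_{\mathrm{corr}}\, f_m(T)\, U^{(m)}_{\mathrm{corr}}
  \;+\; \sum_{m} w^{(m)}_{\mathrm{turn}}\, f_m(T)\, U^{(m)}_{\mathrm{turn}}
  \;+\; w_{\mathrm{ssbond}}\, U_{\mathrm{ssbond}}
  \;+\; w_{\mathrm{SC\text{-}corr}}\, f_2(T)\, U_{\mathrm{SC\text{-}corr}}
```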
The energy-term weights in the initial version of the UNRES force field were optimized using only one α-helical protein (PDB code: 1GAB) [27]. We shall term this version of UNRES 'GB'. In later versions, the force field was re-parameterized using two training mini-proteins: the α-helical tryptophan cage (PDB code: 1L2Y) and the tryptophan zipper (PDB code: 1LE1) [28]. The latter force field was recently extended by the addition of the local torsional potentials [39] with very limited manual optimization of the weights of the torsional terms (Table 1). We term this version of UNRES 'EL'. In the current force field, all the energy terms are physics-based except for the side-chain-side-chain interaction terms, which were obtained by an analysis of the PDB [41]. Recently, a new approach to efficient force field optimization was developed [29] based on the maximum likelihood method [42]. However, even with the use of this method, only a very limited number of training proteins can be included in the optimization, owing to the high computational cost of the iterative procedure based on extensive folding simulations. For the fold-recognition application reported in this paper, the side-chain-side-chain interaction (U_SCiSCj), torsional (U_tor), and side-chain-correlation (U_SC-corr) terms of Equation (1) are the most important because they account for sequence-specific long- and short-range interactions. Therefore, in addition to the common weights for these terms, we introduced residue-pair-type-specific weights (a total of 400 for each of the three kinds of potentials). Residue-type-specific weights of the excluded-volume contributions (U_SCipj) were introduced because these potentials control the size of the proteins and depend on a single residue type. Likewise, residue-type-specific weights were introduced for the contributions to the virtual-bond-angle (U_b), side-chain-rotamer (U_rot), and double-torsional (U_tord) potentials, the type being that of the central residue. Because, for the decoys taken from the PDB database [41], the regular secondary structure is already present, the electrostatic (U^el_pipj) and correlation (U_turn) terms, which determine the regular secondary structure in free simulations, matter only as much as they contribute to the energy of the "bulk" of the secondary structure of different types. Therefore, only the weights corresponding to the total U^el_pipj, U^VDW_pipj, U^(3)_corr, U^(4)_corr, U^(3)_turn, and U^(4)_turn were optimized (one weight per kind of term). Multiple types of calculations can be performed with UNRES, from single-point energy calculations and energy minimization in internal and external coordinates to Monte Carlo and Molecular Dynamics calculations of various variants and modifications, including serial (sequential) and parallel runs with parallel scaling of up to 70% on 16K cores [43]. Examples of UNRES usage include: Conformational Space Annealing (CSA) [44], Hybrid Monte Carlo (HMC) [45], Replica Exchange Molecular Dynamics (REMD) [46], and Multiplexed Replica Exchange Molecular Dynamics (MREMD) [47]. UNRES has been successfully used for studies of protein folding pathways, thermodynamics, and kinetics [48][49][50]; in studies of multimeric systems [51,52] with the use of periodic boundary conditions [53]; and in systems with nonstandard amino acids [54] and links [55]. More detail on UNRES can be found elsewhere [36].
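The residue-pair-type-specific weighting described above (400 weights per pairwise term, with ordered pairs because the chain runs from the N to the C terminus) amounts to a 20 x 20 lookup keyed by the ordered residue-type pair. A small sketch of that bookkeeping, with placeholder values and illustrative names, is:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"           # 20 one-letter residue codes
IDX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

# One 20x20 weight table per pairwise term (values here are placeholders, not fitted weights).
w_scsc = np.ones((20, 20))                     # side-chain-side-chain weights
w_tor = np.ones((20, 20))                      # torsional-term weights

def pair_weight(table: np.ndarray, res_i: str, res_j: str) -> float:
    """Ordered lookup: ('A', 'C') means Ala precedes Cys in sequence
    and is allowed to differ from ('C', 'A')."""
    return float(table[IDX[res_i], IDX[res_j]])

print(pair_weight(w_scsc, "A", "C"), pair_weight(w_scsc, "C", "A"))
```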
Reoptimization
As stated in the introduction, the current version of UNRES was optimized to run free simulations in which the potential clashes are removed; however, in general, this is not the case when scoring fixed decoys. Therefore, we first modified the potential to limit its repulsive components. Specifically, we imposed a cutoff on the repulsive parts of the potential to limit the maximum repulsion to 3 kcal/mol for a given interaction type between each pair of interaction centers. Only with such an approach can the UNRES force field be used without prior energy minimization of a system because, otherwise, even slight overlapping of the interaction centers can outweigh all other energy components. Another possibility would be to use soft potentials, such as the 8-6 Lennard-Jones potential, but, even then, short energy minimization is needed [56]. We then reoptimized the force field to accommodate this change and to customize it to the task of decoy scoring. We employed the neural network technique for optimization. In the implementation of the UNRES model developed in this work, although the energy is a linear function of the parameters, the error function to minimize (Equation (3)) is a sum of squared deviations between the predicted and actual TM-scores, where index p runs over the training proteins, index i runs over the decoys corresponding to a given training protein (including the native structure, which has an index of 0), totp is the total number of training proteins, N_p is the total number of decoys for protein p (including the native structure of this protein), TMS^pred and TMS^decoy denote the predicted TM-scores and those calculated from the respective decoy and native structures, and the input features are {U}_pi, the set of UNRES energy components (Equation (1)) calculated for decoy i of protein p. As described later, other input features will be used alongside {U}_pi. In this work, we used the back-propagation neural network method to approximate the values of TMS^pred_p that minimize Equation (3). The neural network we used is a nonlinear function from the input features ({U}_pi) to the output feature (TMS^pred). We started with a random set of neural network weights and passed the input features through the neural network weights to calculate TMS^pred. This part of the process is known as feed-forward. We then calculated the error associated with that prediction and, by the steepest descent method, modified the neural network weights to reduce this error. This process is known as back-propagation. It was carried out repeatedly until an overfit-protection test was violated. In this case, for the overfit protection, we left out part of the data from training and chose the weights that gave the best results for this left-out set. From UNRES, we first extracted information characterizing the state of the protein as an initial step toward UNRES reoptimization. Besides giving the overall UNRES energy value and the values of each of the components in Equation (1), we also split these components into their residue-type-specific contributions. For example, for paired interactions, we calculated separate values for the contributions from interactions between one type of residue and another. In total, we have the following input features from UNRES: one overall energy, nine single-value components, four 20-valued components that are residue-type-dependent, and three 400-valued components that depend on the type of a residue pair. See Table 2 for a list and description of these labels.
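The error function referenced above (Equation (3)) is constrained by the variables defined alongside it (p, i, totp, N_p, TMS^pred, TMS^decoy); one plausible least-squares reading, written below as an assumption about the exact normalization rather than a quotation of the authors' equation, is:

```latex
F \;=\; \frac{1}{\mathrm{totp}} \sum_{p=1}^{\mathrm{totp}}
        \frac{1}{N_p} \sum_{i=0}^{N_p-1}
        \left[ \mathrm{TMS}^{\mathrm{pred}}_{pi}\!\left(\{U\}_{pi}\right)
             \;-\; \mathrm{TMS}^{\mathrm{decoy}}_{pi} \right]^{2}
```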
Additionally, the weights of each kind of UNRES energy component and that of the total UNRES energy were also optimized. In total, we have 1 + 15 + 4 × 20 + 3 × 400 = 1296 values characterizing the UNRES energy function in this study. It should be noted that the parameters of the pairwise side-chain-side-chain interaction energies are not symmetric. The reason for this is the directionality of the protein chain (from the N to the C terminus). For example, a parameter for an 'AC' pair of side chains means that alanine precedes cysteine in sequence. Such non-symmetry of interactions is quite commonly used in fold-recognition studies [1-6] and partially accounts for the "through-sequence" long-range interactions.

Table 2. Labels for UNRES characterization (columns: Label, Size, Description). Labels for the energy decomposition used as input for representing UNRES to the neural network and their description. Size refers to the number of components a given label has. Some, such as the overall energy, are single-valued (1), four depend on the residue type and are 20-valued (20), and three depend on the type of a residue pair and are 400-valued (400).

The same approach we used in the Seder1 scoring function [18] was used here for scoring a model of a given sequence based on its similarity to the native PDB [41] structure of the sequence, as measured by the TMS, which provides a normalized value for training our networks. In addition, we established a formula for transforming the TM-score, TMS′ = 1 − 2 × TMS, to achieve a distribution of values more fitting to our bipolar selection of neural networks and to align the directionality between our score and the energy values. With our transformation, a native structure scores a '−1'. We used a two-layer feed-forward neural network with momentum, recently described in detail [30]. A diagram of the neural network architecture is given in Figure 2. HL1 and HL2 refer to the first and second hidden layers, respectively, and W1, W2, and W3 refer to the weights connecting the different layers. We used an all-connected network in which weights connect all the nodes of one layer to all the nodes of the next layer. The number of weights for a given network will depend on the number of inputs, as this will determine the number of weights in W1. For an all-connected network, the number of weights connecting layer L1 with h1 nodes (plus a bias node) to layer L2 with h2 nodes is (h1 + 1) × h2. At each node, the weighted sum of the previous layer is passed through an activation function to give the value of that node. We used a bipolar hyperbolic tangent activation function. Momentum refers to the contribution of the gradient calculated at the previous time step to the correction of the weights. We used the steepest-descent back-propagation algorithm to optimize the neural network weights. We started the analyses with a randomly selected training set of 22,805 protein chain models from the full training set of 296,381. The use of a small training set enabled us to optimize the architecture. After that initial optimization, deviations from the optimized values were tested on the full set. The final optimized values are given in Table 3. From the full training set, we selected 30% of the proteins at random for an overfit-protection set. A total of six such random training/overfit sets were used to train different realizations of the neural network.
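A minimal numpy sketch of the two-layer feed-forward network with bipolar (hyperbolic tangent) activation, steepest-descent back-propagation, and momentum described above is given below. The hidden-layer sizes, learning rate, and momentum value are placeholders rather than the optimized settings of Table 3, the synthetic input stands in for the z-scored UNRES features, and the target uses the transformed score TMS′ = 1 − 2·TMS.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in: int, n_out: int) -> np.ndarray:
    # +1 row for the bias node feeding each layer.
    return rng.normal(scale=0.1, size=(n_in + 1, n_out))

def forward(x, weights):
    """Propagate one input vector through all layers with tanh activations."""
    activations = [x]
    for w in weights:
        x = np.tanh(np.append(x, 1.0) @ w)     # append bias input, then activate
        activations.append(x)
    return activations

def backprop(activations, target, weights, velocities, lr=0.01, momentum=0.9):
    """One steepest-descent update with momentum for a squared-error loss."""
    # Error signal at the output layer (derivative of tanh is 1 - a^2).
    delta = (activations[-1] - target) * (1.0 - activations[-1] ** 2)
    for layer in reversed(range(len(weights))):
        a_in = np.append(activations[layer], 1.0)
        grad = np.outer(a_in, delta)
        if layer > 0:
            # Propagate the error using the pre-update weights (drop the bias row).
            delta = (weights[layer][:-1] @ delta) * (1.0 - activations[layer] ** 2)
        velocities[layer] = momentum * velocities[layer] - lr * grad
        weights[layer] += velocities[layer]
    return weights, velocities

# Architecture: 1296 UNRES-derived inputs -> two hidden layers -> 1 output (TMS').
sizes = [1296, 20, 10, 1]                      # hidden sizes are illustrative
weights = [init_layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]
velocities = [np.zeros_like(w) for w in weights]

x = rng.normal(size=1296)                      # stand-in for z-scored UNRES features
tms = 0.62
target = np.array([1.0 - 2.0 * tms])           # bipolar target in [-1, 1]
acts = forward(x, weights)
weights, velocities = backprop(acts, target, weights, velocities)
```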
For each of the approaches to optimizing UNRES in Table 4, the initial weights and the order of the training proteins were randomized for each of the six neural network realizations used for the corresponding approach. We chose to use six realizations based on experience from previous work [18,30,57,58]. To obtain the final prediction, we averaged the results from the six realizations. This also gives an estimate of the stability of the prediction through the standard deviation. We obtained the full training and overfit-protection sets from three sources (number of models given in parentheses): server models submitted to CASP4 through CASP10 (123,634) [59], native models from the PDB (54,084) [41], and native models from the UCSF database of protein models (118,663) [60]. The three sources are treated in more detail in our previous publication [18]. Combining these three sources resulted in 296,381 proteins. From these, we randomly set aside 30% for the overfit-protection set and used the rest for training. We used the published results from CASP11 [59] as the testing set for the results here. This set contains 83 proteins that were selected by the CASP organizers to represent a variety of protein structures. Our approach here simulates participation in CASP11.

In Table 3, HL1 and HL2 are the number of neurons in the first and second hidden layers, respectively. An additional bias neuron is used in each layer. a, µ, and P are the activation parameter, the learning rate, and the momentum, respectively. Nmax and Nstop are the maximum number of epochs and the number of epochs necessitating a stop after no improvement on the overfit-protection set, respectively.

Both the EL and GB versions of UNRES were optimized, resulting in the OUNRES (Optimized UNRES) versions. We also tried to integrate additional external information into UNRES, such as the number of residues, the number of atoms, and the scores from DFire2 [61] and Seder1 [18]. All input features were z-scored. We used cutoff values, defined by the values of the top and bottom 1% of the data, to limit the effect of outliers in the UNRES values. Our testing method consisted of simulating participation in the CASP11 competition [59]. All of our training instances were restricted to models available before the release of the CASP11 targets. We collected a total of 83 surviving CASP11 targets and the top 150 server predictions for them. This resulted in a total testing dataset of 12,240 structures. No information from these structures, whether for overfit protection, parameter optimization, or anything else, was used in training the neural network for the testing reported here. However, the version of OUNRES released with this work (and available from http://mamiris.com/services.html) used CASP11 information to select the top-performing networks among the possible candidates.

Results
We analyzed the results of the optimization of the partial UNRES energies for both the GB and EL methods, with and without added input features. Protein structure scoring functions have several uses of interest. Among these uses are the ability to select the model closest to a native structure and, in the scope of protein folding, the ability to select models along the folding pathway that approach the native state. Testing the first application, selecting the model closest to the native structure, is relatively easy; testing the second use is considerably more challenging.
We used the Pearson correlation with the real TMS and several other self-developed methods to estimate the effectiveness of selecting paths along the folding pathway; however, these results are more difficult to interpret since the TMS does not account for charge distributions, as mentioned earlier. We began by comparing the mean TMS of the top five selected models according to the various methods. We first ranked the models according to the prediction of a given method and then calculated the mean TMS of the top five models for each of the 83 CASP11 targets. The mean and standard deviation (STD) of these 83 values were calculated for each method. The results of these calculations are given in Table 4. Correlations were also calculated: Pearson correlations were computed between the TMS to the native structure of a model and its prediction according to the different methods. The correlation was also calculated between the TMS to native and the UNRES energy. This calculation was done per target. Then, the mean and STD of the correlations calculated for each of the 83 targets were obtained. These results are presented in Table 5. The Pearson coefficients are low, but it should be noted that the coefficients obtained for the OUNRES variants are higher than those for Seder1 and DFire2 alone; this means that OUNRES can rank the bulk of the decoys better than those two methods. See Table 4 for the legend; STD is the standard deviation and was calculated over the 83 CASP11 targets.

To better understand the result of our optimization, in Figure 3 we give the differences in the top five mean TMS between OUNRES and UNRES as a function of the TMS to native of the best available decoy for that target (topTMS). In most cases (51/83), for both easy and hard targets, we find that OUNRES is an improvement over UNRES. In some cases, the optimization seems to reduce the quality of the top selected models, indicated by large negative values on the y-axis. We looked at the two worst cases: T0782, with a topTMS of 0.85, and T0765, with a topTMS of 0.8. In both cases, OUNRES appears to perform significantly worse than UNRES, judging by the top five mean TMS. If we observe the resulting protein structures in Figure 4, we see that in the case of T0782, the model selected by UNRES seems to better capture the beta barrel. However, in the case of T0765, it seems that OUNRES selects a better structure, while the increase in TMS for the UNRES selection is mostly due to the structure being more compact. In Figure 5, we plot the change in correlation upon optimization as a function of the topTMS. We see that the correlation improves for most (68/83) targets. Additionally, it seems that only targets with a high topTMS (easy targets) are made worse by using OUNRES over UNRES. For hard targets, the correlation always improves. We also tested the directional accuracy of the different methods in two ways. First, we ranked the models by their real TMS to native and calculated the mean TMS for the top 1-5 (top1-5) and top 10-15 (top10-15) sets. We then calculated the average score/energy for the top1-5 and top10-15 sets according to the different methods and calculated the change. If the change was appropriate for a given score/energy, i.e., it pointed to the top1-5 being more favorable, we assigned a '+1' for this CASP11 target. If the score was inappropriate, we assigned a '−1' to this target. We then calculated the average assignment over the 83 targets and the STD. We term this parameter the Directional Accuracy (DA). Results for the DA are presented in Table 6.
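A compact sketch of the per-target metrics defined so far (top-5 mean TMS after ranking by a score, per-target Pearson correlation, and the ±1 Directional Accuracy assignment) is given below; the variable names, the convention that lower scores are more favorable, and the five-model top10-15 window are assumptions made for illustration.

```python
import numpy as np

def top5_mean_tms(scores: np.ndarray, tms: np.ndarray) -> float:
    """Mean TMS-to-native of the five models ranked best (lowest score) by a method."""
    order = np.argsort(scores)          # ascending: lower score = better model (assumed)
    return float(tms[order[:5]].mean())

def per_target_pearson(scores: np.ndarray, tms: np.ndarray) -> float:
    """Pearson correlation between a method's scores and the true TMS values."""
    return float(np.corrcoef(scores, tms)[0, 1])

def directional_accuracy(scores: np.ndarray, tms: np.ndarray) -> int:
    """+1 if the mean score of the real top1-5 models is more favorable (lower)
    than that of the real top10-15 models, else -1."""
    order = np.argsort(-tms)            # rank models by true TMS, best first
    top1_5 = scores[order[:5]].mean()
    top10_15 = scores[order[9:14]].mean()   # five-model window starting at rank 10 (assumed)
    return 1 if top1_5 < top10_15 else -1
```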
This test can also be done in the reverse order. The models can be ranked according to a score/energy, and then the real TMS difference between the top1-5 and the top10-15 can be calculated. We calculated this for the 83 CASP11 targets we used and averaged the results to arrive at a single value per method employed. We also calculated the STD for this test. We term this parameter the Second Directional Accuracy (DA2). Results for DA2 are given in Table 7. We also tested a path-to-native accuracy. We can imagine that the server models we collected from the CASP11 experiments form a folding pathway in configuration space to the native structure. To obtain this pathway, we started with the native model for a given target sequence. We then found the nearest structure, as measured by the real TMS, and repeated the process. Structures already used were excluded, and the process was repeated until all models for the target were exhausted. Following the consecutively closest structures, a folding pathway was obtained, for which energies and scores were calculated. In a similar fashion to that above, we assigned a '+1' if the change in energy or score was consistent with the direction of the path, i.e., decreasing or not changing as the native state is approached, and a '−1' otherwise. We then averaged these values along the path for a given CASP11 target and then averaged the resultant means to arrive at a single value per method. We call this the Path to Native Accuracy, or PNA. These results are given in Table 8.

See Table 4 for the legend; STD is the standard deviation and was calculated over the 83 CASP11 targets. Note that, in this case, for the optimized methods, the signal is almost entirely positive, which indicates that the methods were directionally successful for almost all protein targets (72/83 for OUNRES+length).

See Table 4 for the legend; STD is the standard deviation and was calculated over the 83 CASP11 targets.

Discussion
We see an overall consistent improvement with the optimization and the addition of input features across all tests undertaken. We did not find any significant advantage of either the EL or the GB approach to UNRES over the other. For the mean TMS of the top1-5 models, we see a consistent improvement, with the optimization adding about a percent of relative accuracy and the inclusion of the sequence length and number of atoms adding another relative percent to the accuracy. The improvements of OUNRES+length (EL or GB) over UNRES have a statistical confidence of more than 99% according to a two-sided Student's t-test. The addition of information from DFire2 does not seem to improve the accuracy in this case. We also calculated the mean TMS for the top five ranked structures using only DFire2 or Seder1 to perform the ranking. In both cases, we get slightly better results than the neural networks trained on parameters extracted from UNRES, with and without Seder1 or DFire2 or both as inputs. This seems to be due to the optimization of the scoring for models farther from the native state than the top five. This can be seen in terms of correlations, where using either Seder1 or DFire2 yields worse correlations, with Seder1 outperforming DFire2 both for correlations and for the top five mean TMS. These observations seem to indicate that although we have improved UNRES by optimization, there is information yet to be picked up by the neural network for close-to-native structures. A significant effect is observed for the Pearson correlation, where a 2-fold increase is observed upon the optimization of UNRES.
Slight fluctuation around this improvement is observed with the introduction of additional input features; however, there is no significant improvement above a correlation of 0.32-0.33. As indicated earlier, the strong improvement upon optimization could be in part due to UNRES's consideration of charge distributions and the fact that the correlation is calculated over the entire sample of models for a given target. For DA1 and DA2, we again see a strong response upon optimization and some additional response to the inclusion of additional input features. In both cases, we expect that the better the score/energy, the more consistently appropriate will be the change in values between the top1-5 and top10-15 models. More than a 2-fold increase in accuracy is observed upon optimization, and an additional significant improvement in accuracy is observed if additional input features are introduced. The improvement due to optimization in both cases has a confidence greater than 99% according to a two-sided Student's t-test. One should note that, due to the choice of variables for test DA1 (1, −1) and its discrete nature, the STD in this case is exaggerated. Note that for DA2, for the optimized methods, the signal is almost entirely positive; i.e., the mean minus the STD is greater than zero. This indicates that the optimized methods were directionally successful for almost all protein targets. For the most successful method, OUNRES+length, 72 out of 83 targets had the correct directionality. For the PNA, we do not see a strong signal. The fluctuations in the PNA are quite large, indicating that the directional assignments were often incorrect for many targets. However, there is enough signal to observe a significant improvement from the optimization and additional input features. In this case, though, there is no clear consistency in the improvements, which could be due to the nature of the path to the native structure. It is interesting to note that the path to native is intimately related to the hardness of the target. In Figure 6, we give the mean and span of the path as a function of the best model TMS submitted to CASP11. The 83 points in the plot correspond to the 83 CASP11 targets we used. The mean of the path is defined as the average of the TMS between consecutive models along the path. The span of the path is defined as the TMS between the native structure and the final model structure in the path, once all others have been excluded. The Pearson correlation coefficients between the best model TMS and the mean/span of the path are 0.868/0.869, respectively. The correlation between the mean and the span of the path is 0.922. We also calculated the best-fit lines between the best model TMS and the mean/span of the path. The line for the span of the path is given by l(x) = 1.03x − 0.21. The line for the mean of the path is given by l(x) = 0.63x + 0.32.

Conclusions
We reoptimized the UNRES energy function for protein decoy model quality assessment and achieved consistently better results in a number of tests. We find that the bulk of the improvement for this round of optimization comes from improved scoring of the bulk of the models. This is seen in the large increase in the correlation of OUNRES relative to UNRES. This bias toward improving the bulk of the data results from the choice of neural network architecture and approach. It should be noted that this is the first attempt at using UNRES for scoring fixed decoy sets.
A very early version of UNRES was used for threading [9], but in that work, the decoys were subjected to restrained energy minimization with UNRES, and the minimized energies were used for scoring. We introduced several quantities to help compare the energy/score functions of proteins. The Top5TMS measures the average TMS to native of the top five models picked by a method. DA1 and DA2 provide measures of the directional success of the energy/score function in terms of the path to native. Finally, the PNA is a measure of the directional success of an energy/score function along the folding pathway of a protein. We find that additional input features tend to improve the accuracy of UNRES and OUNRES in picking the closest-to-native models and in assigning the direction toward a closer-to-native structure. In this respect, it seems that simply adding the z-scaled number of residues and atoms improves the performance of OUNRES most significantly. However, we find that DFire2 does not seem to improve the performance of UNRES, possibly due to an existing PMF in UNRES. On the other hand, the DA1, DA2, and PNA measures suggest that OUNRES has substantial power for energy-ranking the bulk of the decoys; OUNRES therefore seems most advantageous when decoys need to be pre-selected for further processing, rather than for the selection of the final models. Conflicts of Interest: The authors declare no conflict of interest.
7,512.4
2018-12-01T00:00:00.000
[ "Computer Science", "Biology" ]
A Study of Multi-Scale Relationship Between Investor Sentiment and Stock Index Fluctuation Based on the Analysis of BEMD Spillover Index
According to behavioral finance theory, investor sentiment is generally present in investors' trading activities and influences the financial market. In order to investigate the interaction between investor sentiment and the stock market as well as the financial industry, this study decomposed investor sentiment, stock price indexes and the SWS indexes of the financial industry into IMF components at different scales by using the BEMD algorithm. Moreover, the fluctuation characteristics of the time series at different time scales were extracted, and the IMF components were reconstructed into short-term high-frequency components, medium-term low-frequency components reflecting important events, and long-term trend components. The short-term interaction between investor sentiment and the Shanghai Composite Index, the Shenzhen Component Index and the financial industries represented by the SWS indexes was investigated based on the spillover index. The time difference correlation coefficient was employed to determine the medium-term and long-term correlations among the variables. The results demonstrate that investor sentiment has a strong correlation with the Shanghai Composite Index, the Shenzhen Component Index and the different financial industries represented by the SWS indexes at the original scale, and that the change in investor sentiment is mainly influenced by external market information. The interaction between most markets at the short-term scale is weaker than that at the original scale. Investor sentiment is more significantly correlated with SWS Bond, SWS Diversified Finance and the Shanghai Composite Index at the long-term scale than at the medium-term scale.

Introduction
Traditional financial theory holds that asset prices are determined by intrinsic value. The arbitrage behavior of rational investors will continuously correct mispricing in the market, and investor sentiment cannot systematically influence market prices. As the financial market has continuously developed and market transactions have become increasingly complex, traditional financial theory cannot explain some financial anomalies observed in reality, such as the Allais paradox, the calendar effect, the equity premium puzzle, the option smile, the closed-end mutual fund puzzle and the small-cap stock effect. In view of these financial anomalies, financial economists relaxed the basic hypotheses of traditional financial theory and explained the anomalies from the perspectives of psychology, behavioral science and sociology, thus forming behavioral finance, which takes the perspective of investor behavior. Behavioral finance theory argues that investors are not completely rational or always risk-averse, but have limited rationality. Noise traders are influenced by the market environment and their own sentiment, and they in turn influence market transactions, which eventually leads to mispricing. There is thus an interaction between investor sentiment and the financial market. Based on the noise trader model, De Long et al. [1] found that investor sentiment is a systematic risk influencing the equilibrium price of financial assets, which triggered further discussion of the relationship between investor sentiment and the capital market. However, the existing studies focus on the influence of investor sentiment on the capital market, while the influence of the capital market on investor sentiment is seldom considered. In the real market, investor sentiment and the capital market show a complex relationship.
Investor sentiment is the systematic deviation of investors' expectations [2]. In economic activities, investor sentiment affects investment decisions by influencing investors' subjective judgment of future earnings, thus influencing the capital market. Investor sentiment also varies with the market environment. In a bull market, increasing demand leads to a continuous rise in asset prices and an increase in the return on assets. The change in the market environment attracts more investors, which further boosts investor sentiment. For financial markets that boom and bust frequently, the dynamic change of investor sentiment has become an important factor affecting the stability of the market. For the rapidly developing Chinese capital market, the degree of irrationality is far higher than that of mature capital markets, and the volatility of the market is more pronounced. Clarifying the role of investor sentiment, and of different industries, in the volatility of the Chinese capital market is therefore not only of theoretical value for economic research, but also of important practical significance for the governance of China's capital market. When investor sentiment is correlated across traders, the unpredictability of noise traders' beliefs creates price risk. Consequently, arbitrageurs cannot eliminate the mispricing caused by irrational behavior, and asset prices will deviate significantly from their intrinsic value. Investor sentiment is a systematic risk which can influence the equilibrium price of financial assets. The DSSW model can be used to analyze the closed-end fund discount puzzle based on its own theory, triggering extensive research on the proposition that noise trader risk influences asset equilibrium prices. Scholars have done a great deal of research using econometric models to analyze the effects of investor sentiment. Some scholars used regression models to investigate the impact of investor sentiment on market returns. Sayim et al. [3] used a vector auto-regression model to examine the influence of the rational and irrational sentiment of institutional investors and individual investors in the United States on the returns and volatility of the stock market in Istanbul. Kadilli [4] examined the role of investor sentiment in predicting the annual stock returns of financial companies within a regression analysis framework. Chen et al. [5] used a panel threshold model to test the influence of the sentiment of local investors and global investors on expected industry stock returns in 11 Asian countries from 1996 to 2010. Some scholars used regression models to investigate asymmetric features of investor sentiment. Chung et al. [6] studied the asymmetry of investor sentiment's predictive ability in economic expansion and recession periods based on a multivariate Markov-switching model. Ni et al. [7] employed a panel quantile model to test the asymmetric relationship between investor sentiment and monthly stock returns in China's A-share market. Hudson and Green [8] used the first principal component method to determine the sentiment of individual investors and institutional investors in Britain, and then used regression models to investigate the influence of investor sentiment on stock returns during periods of instability and stability in the financial market. Some scholars used regression models to analyse the relationship between investor sentiment and market returns. Zhang et al.
[9] revised the theoretical model of noise trading proposed by De Long, and carried out factor analysis to construct comprehensive investor sentiment index based on market turnover rate, closed-end fund discount and growth rate of account opening. Meanwhile, factor analysis was carried out to construct comprehensive investor sentiment index. In this study, OLS and GARCH-M Return analysis method were employed to analyze the relationship between investor sentiment and stock returns in China's stock market. Chi and Zhuang [10] applied panel data model to study the relationship between investor sentiment and stock returns in China. Dong, et al. [11] established a single-factor model and a multi-factor model based on quantile regression to study the correlation between high sentiment, low sentiment and market returns. The majority of the existing studies have studied the relationship between investor sentiment and capital market from multiple perspectives. However, on the one hand, the studies discuss the influence of investor sentiment on capital market in the time domain based on traditional econometric models, but ignore the frequency domain information of financial data and cannot completely describe the relationship between investor sentiment and stock market. In addition, the traditional financial statistical model usually requires stable time series. Similar to signal, most stock market time series data contains noise, and has the characteristics of nonlinearity, non-stationary, sharp peak and fat tail. The existing studies focus on the time domain dynamic correlation between investor sentiment and capital market in the time dimension, but the frequency domain dynamic corre-lation between investor sentiment and capital market is seldom involved. Li and Feng [12] used EEMD method to decompose investor sentiment and stock index price series, and investigated the fluctuation correlation between investor sentiment and stock index price series at different time scales based on traditional econometric analysis. However, the dynamic characteristics of spillover effect were not considered. As argued by Perters [13] , the financial market is composed of traders with different investment time levels, such as short-term investment, medium-term investment and long-term investment. Different types of traders with different time scales have different investment ideas, and they influence the market from different time scales. Therefore, it is necessary to study the relationship between investor sentiment and stock market returns at different time scales from a multi-scale perspective. Financial time series has the characteristics of engineering signal. Many scholars have applied time-frequency analysis method to analyze financial time series, extract information at different scales, and reveal the inherent characteristics of data. Chinese scholars represented by Xie, et al. [14] made up for the defect that empirical mode algorithm is based on IMF following the definition of narrowband signal and analyzed a set of power data by using local narrowband decomposition algorithm to guide power distribution. Xie, et al. [15] used the improved empirical mode decomposition algorithm to analyze the analog signal and the actual signal, and proved that the essential mode function obtained by bandwidth criterion not only approximates the real component but also reflects the intrinsic information of the analyzed signal. 
Moreover, the power consumption data was decomposed into a periodic term and a trend term by bandwidth EMD. Based on the multi-scale characteristics of stock price fluctuation, Wang, et al. [16] measured portfolio risk with the HSI Index and the Shanghai Composite Index as data samples by using a BEMD-Copula-GARCH model. Wu, et al. [17] applied the empirical mode decomposition (EMD) algorithm to decompose the multi-scale characteristics of China's live pig price; a structural change point analysis algorithm was then employed to test the structural change points of each characteristic mode, and the four modes of the live pig price in China were analyzed. Sun, et al. [18] decomposed the country risk values of 12 OPEC countries into three time scales, short term, medium term and long term, and studied the fluctuation characteristics, modal characteristics and global importance of each scale relative to the original country risk series. On the basis of Diebold and Yilmaz's [19,20] generalized variance decomposition framework, this study constructed a spillover index between investor sentiment and the stock market, analyzed the scale and direction of the influence of investor sentiment, and discussed the relationship between investor sentiment and the capital market. Moreover, a sliding window technique was employed to explore the dynamic characteristics of the spillover effect. By using the bandwidth empirical mode decomposition (BEMD) method, this study overcame the shortcomings of previous analyses conducted in the time domain or the frequency domain alone: combining the two presents signal characteristics in the time-frequency domain that can hardly be obtained in either domain separately, and the dynamic characteristics of the spillover effect were analyzed using rolling window technology. The BEMD method can be applied to decompose many kinds of signals and achieves a high signal-to-noise ratio when dealing with non-stationary and non-linear data. This study used the BEMD method to decompose investor sentiment and the market indexes into the sum of several IMF components and a residual item. Zhang's method [21] was employed to reconstruct the IMFs and the residual item into a short-term fluctuation item, a medium-term major event impact item and a trend item. Based on the spillover index and the time difference correlation coefficient between variables, the interaction between investor sentiment and the other variables was explored. This study makes contributions in two respects: 1) After establishing an investor sentiment index, the BEMD method was used to avoid the possible scale mixing and over-decomposition problems of the traditional EMD algorithm and to decompose the financial time series into series of different scales; the relationship between investor sentiment, the stock market and the financial industries represented by the SWS indexes was then studied at the original, long-term, medium-term and short-term scales. On the basis of the analysis of the time domain dynamic correlation, the frequency domain dynamic correlation is further analyzed, and rolling time window technology is introduced to analyze the dynamic relationship. 2) There is an interaction between investor sentiment and the capital market: investor sentiment influences the capital market, and the capital market also has an impact on investor sentiment. Existing research, however, ignores the impact of the capital market on investor sentiment at different time scales.
This study investigated the static and dynamic interaction between investor sentiment and each market at different scales by using the spillover index. Construction of Investor Sentiment Index The sources of investor sentiment indexes fall into two categories. The first consists of direct indexes that survey investor sentiment, such as questionnaires; examples include the CCTV BSI and the Shenzhen Securities Information Company ICI. Since 2001, the CCTV website has compiled the CCTV BSI index by surveying securities companies' and consulting institutions' predictions of the afternoon market in order to reflect investors' views on the trend of the stock market. Shenzhen Securities Information Co., Ltd. has compiled a systematic, precise and highly professional investor confidence index, which is, however, only used for internal research and cannot be studied extensively. Chinese media and organizations conduct only limited surveys of investor sentiment, the continuity of the surveys and the availability of the data are poor, and the surveys are generally conducted among institutional investors and therefore cannot cover the sentiment of individual investors. The second category consists of indirect indexes measuring investor sentiment. These are constructed from transaction data that reflect investor sentiment in the market and can be used to analyze future market conditions. They include overall market performance indexes formed by summarizing the overall performance of the stock market, as well as transactional sentiment indexes constructed from the types of market trading behavior. This study used the BW sentiment index construction method (Baker and Wurgler [22]) and selected three proxy variables: the turnover rate of the weighted market value of A shares in circulation (Turn RMC), the price-earnings ratio of A shares on the Shanghai Stock Exchange (Pe) and the Advance/Decline line of A shares on the Shanghai Stock Exchange. The lead and lag influence of the three variables was considered, and principal component analysis was carried out to construct the investor sentiment index. Study of Spillover Index Based on BEMD Since the traditional EMD algorithm suffers from scale mixing, the extracted IMFs may contain oscillations spanning a large frequency range, and the series may sometimes be over-decomposed. Therefore, combining BEMD with spillover index analysis, this study used the bandwidth empirical mode decomposition (BEMD) proposed by Xie, et al. [15] to extract the short-term scale information of the time series data, and the short-term scale relationship between investor sentiment and the other variables was then studied based on the spillover index. Building on the EMD algorithm, the sifting process in BEMD can be summarized as follows, with the residual r(t) initialized to the original series. 1) Identify all local maxima and minima of r(t). 2) Get the upper envelope e_max(t) by interpolating between the maxima; similarly, get the lower envelope e_min(t) from the minima. 3) Compute the mean envelope m(t) = (e_max(t) + e_min(t))/2. 4) Subtract the mean envelope to obtain the proto-mode function (PMF) p_i(t) = r(t) − m(t). 5) Repeat Steps 1∼4 on the PMF p_i(t); the stopping criterion consists of three subsidiary steps: a) use a three-threshold (α, θ_1, θ_2) criterion to obtain a PMF that almost satisfies the two conditions of an IMF; b) continue the sifting process until the bandwidth variance σ²_PMF reaches its minimum; c) take the final PMF as an IMF, so that the IMF has a small frequency bandwidth and only a mild scale mixing problem. The IMF is then recorded as imf_k and removed from the residual r(t). If the number of extremum points of r(t) is larger than three, let k = k + 1, i = 0, and return to Step 1); otherwise, the sifting process is finished. According to the BEMD algorithm, the IMFs with the weakest scale mixing can be obtained.
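To make the sifting procedure and the subsequent regrouping of IMFs more concrete, the following Python sketch outlines both steps under simplifying assumptions. It is not the authors' BEMD implementation: the three-threshold/bandwidth-variance stopping criterion is replaced by a simple standard-deviation threshold, and the split index that separates short-term from medium-term IMFs is a fixed illustrative choice rather than the data-driven selection of Zhang's method [21].

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema


def envelope_mean(r):
    """Mean of the cubic-spline upper and lower envelopes of r (Steps 2-3)."""
    t = np.arange(len(r))
    imax = argrelextrema(r, np.greater)[0]
    imin = argrelextrema(r, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:          # too few extrema to build envelopes
        return None
    upper = CubicSpline(imax, r[imax])(t)
    lower = CubicSpline(imin, r[imin])(t)
    return 0.5 * (upper + lower)


def sift(x, max_imfs=8, max_iter=50, sd_tol=0.2):
    """Decompose x into IMFs plus a residual trend (Steps 1-5, simplified)."""
    imfs, r = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        m = envelope_mean(r)
        if m is None:                            # residual is (near-)monotone: stop
            break
        p = r.copy()
        for _ in range(max_iter):                # inner sifting loop on the PMF
            m = envelope_mean(p)
            if m is None:
                break
            p_new = p - m                        # Step 4: subtract the mean envelope
            sd = np.sum((p - p_new) ** 2) / (np.sum(p ** 2) + 1e-12)
            p = p_new
            if sd < sd_tol:                      # simplified stopping criterion
                break
        imfs.append(p)
        r = r - p                                # record the IMF, update the residual
        n_ext = len(argrelextrema(r, np.greater)[0]) + len(argrelextrema(r, np.less)[0])
        if n_ext <= 3:                           # few extrema left: finish sifting
            break
    return imfs, r


def regroup(imfs, residual, split=3):
    """Short-term item = sum of the first `split` (high-frequency) IMFs,
    medium-term item = sum of the remaining IMFs, trend item = residual."""
    short = np.sum(imfs[:split], axis=0) if imfs else np.zeros_like(residual)
    medium = np.sum(imfs[split:], axis=0) if len(imfs) > split else np.zeros_like(residual)
    return short, medium, residual


# Example on a synthetic series standing in for the sentiment index
x = np.cumsum(np.random.randn(1500)) + 5 * np.sin(np.linspace(0, 60, 1500))
imfs, trend = sift(x)
short_term, medium_term, long_term = regroup(imfs, trend)
```

In the actual analysis the regrouping is chosen from the data rather than fixed, but the overall pipeline of sifting followed by regrouping is the same.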
As the short-term scale information extracted by BEMD is high-frequency information and meets the stationarity requirement, a VAR model can be established. This study applied the spillover index method based on forecast error variance decomposition proposed by Diebold and Yilmaz [19,20] to measure the short-term scale relationship between investor sentiment and the stock market indexes. The model can be summarized as follows. A p-order vector auto-regression VAR(p) model for n covariance-stationary variables is $x_t = \sum_{i=1}^{p} \Phi_i x_{t-i} + \varepsilon_t$, where the error term $\varepsilon_t \sim (0, \Sigma)$ is an independent and identically distributed random vector. The moving average representation of $x_t$ is $x_t = \sum_{h=0}^{\infty} A_h \varepsilon_{t-h}$, where the coefficient matrices obey the recursion $A_h = \Phi_1 A_{h-1} + \Phi_2 A_{h-2} + \dots + \Phi_p A_{h-p}$, with $A_0$ the identity matrix and $A_h = 0$ for $h < 0$. The prediction error variance of each variable is decomposed into the contributions of the shocks to the individual variables, and the share of the H-step-ahead error variance of $x_i$ caused by $x_j$ is evaluated. The decomposition based on the generalized VAR does not require orthogonalized shocks, so the results are robust and unaffected by the ordering of the variables: the generalized approach allows correlated shocks, which are accounted for by the historical distribution of the errors rather than by orthogonalization. In order to measure the spillover effect among variables, the H-step error variance of $x_i$ caused by its own shocks is defined as self-spillover, while the H-step error variance of $x_i$ caused by shocks to $x_j$ ($j \neq i$) is the interactive spillover. The H-step prediction error variance is decomposed into $\theta^{g}_{ij}(H) = \frac{\sigma_{jj}^{-1} \sum_{h=0}^{H-1} (e_i' A_h \Sigma e_j)^2}{\sum_{h=0}^{H-1} e_i' A_h \Sigma A_h' e_i}$, where $\Sigma$ is the variance matrix of the error vector $\varepsilon$, $\sigma_{jj}$ is the standard deviation of the error of the j-th equation, and $e_i$ is the selection vector whose i-th element is 1 and whose other elements are 0. Since the shocks to the individual variables are not orthogonal, the row sums of the variance decomposition table are not necessarily equal to 1, namely $\sum_{j=1}^{N} \theta^{g}_{ij}(H) \neq 1$ in general. To use the information in the variance decomposition matrix for the spillover index, each entry is therefore normalized by its row sum: $\tilde{\theta}^{g}_{ij}(H) = \theta^{g}_{ij}(H) \big/ \sum_{j=1}^{N} \theta^{g}_{ij}(H)$. In order to measure the contribution of the fluctuation shocks of all variables to the total prediction error variance, the total spillover index is defined as $S^{g}(H) = \frac{\sum_{i,j=1,\, i \neq j}^{N} \tilde{\theta}^{g}_{ij}(H)}{N} \times 100$. The directional spillover index $S^{g}_{i\cdot}(H) = \sum_{j=1,\, j \neq i}^{N} \tilde{\theta}^{g}_{ij}(H) \times 100$ measures the spillover intensity from all other variables to variable i and is called the total directional acceptance index of i. The directional spillover index $S^{g}_{\cdot i}(H) = \sum_{j=1,\, j \neq i}^{N} \tilde{\theta}^{g}_{ji}(H) \times 100$ measures the total spillover intensity from variable i to all other variables and is called the total directional spillover index of i. The net spillover index measures the net spillover effect of market i on the other markets, that is, the difference between the total fluctuation shocks transmitted to and received from all other markets: $S^{g}_{i}(H) = S^{g}_{\cdot i}(H) - S^{g}_{i\cdot}(H)$. To sum up, this study first decomposed the original data into a series of intrinsic mode functions with different frequencies by BEMD. Second, the data were reorganized to extract the fluctuation information at different scales. As the short-term fluctuation information is high-frequency data and meets the requirement of stationarity, the VAR model can be established directly, and the total spillover index, the directional spillover index and the net spillover index can be obtained by decomposing the generalized prediction error variance.
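The following numpy sketch illustrates the generalized forecast error variance decomposition and the spillover indexes defined above, together with a rolling-window wrapper like the one used later in the paper (window 300, step 10). It assumes the VAR(p) coefficient matrices Phi and the residual covariance Sigma have already been estimated on the stationary short-term components; fit_var is a hypothetical stand-in for any VAR estimation routine, the horizon H = 10 mirrors the 10-step decomposition used in the empirical analysis, and the example values at the end are arbitrary placeholders.

```python
import numpy as np


def ma_matrices(Phi, H):
    """MA coefficient matrices A_0 ... A_{H-1} from the VAR coefficients Phi (p, N, N)."""
    p, N, _ = Phi.shape
    A = [np.eye(N)]
    for h in range(1, H):
        A.append(sum(Phi[i] @ A[h - 1 - i] for i in range(min(p, h))))
    return A


def spillover_table(Phi, Sigma, H=10):
    """Row-normalized generalized FEVD table in percent
    (row i: affected variable, column j: source of the shock)."""
    A = ma_matrices(Phi, H)
    N = Sigma.shape[0]
    sigma_jj = np.diag(Sigma)                    # j-th equation error variance
    theta = np.zeros((N, N))
    for i in range(N):
        e_i = np.eye(N)[i]
        denom = sum(e_i @ Ah @ Sigma @ Ah.T @ e_i for Ah in A)
        for j in range(N):
            e_j = np.eye(N)[j]
            num = sum((e_i @ Ah @ Sigma @ e_j) ** 2 for Ah in A) / sigma_jj[j]
            theta[i, j] = num / denom
    theta = theta / theta.sum(axis=1, keepdims=True)    # row normalization
    return 100.0 * theta


def spillover_indices(table):
    """Total spillover, directional acceptance (from others), directional
    spillover (to others) and net spillover, all in percent."""
    N = table.shape[0]
    off = table - np.diag(np.diag(table))
    total = off.sum() / N
    acceptance = off.sum(axis=1)                 # received by each variable
    to_others = off.sum(axis=0)                  # transmitted by each variable
    return total, acceptance, to_others, to_others - acceptance


def rolling_total_spillover(data, fit_var, window=300, step=10, H=10):
    """Total spillover index recomputed on sliding windows of data (T x N);
    fit_var is a hypothetical VAR estimator returning (Phi, Sigma) for a window."""
    ends, totals = [], []
    for end in range(window, data.shape[0] + 1, step):
        Phi, Sigma = fit_var(data[end - window:end])
        totals.append(spillover_indices(spillover_table(Phi, Sigma, H))[0])
        ends.append(end)
    return np.array(ends), np.array(totals)


# Example with arbitrary placeholder VAR(2) coefficients for N = 7 variables
rng = np.random.default_rng(0)
Phi = 0.1 * rng.standard_normal((2, 7, 7))
Sigma = np.eye(7) + 0.2
total, from_others, to_others, net = spillover_indices(spillover_table(Phi, Sigma))
```

In this sketch the directional indexes are plain sums of the row-normalized off-diagonal entries, so the "to others" values can exceed 100% (consistent with figures such as the 299.29% reported below), while the total index divides by the number of variables; the j-th diagonal element of Sigma is used as the normalizing factor, as is usual in implementations of the generalized decomposition.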
Measurement of Investor Sentiment Index On the basis of daily trading data from January 24, 2007 to December 27, 2018, this study selected three proxy variables: the turnover rate of the weighted market value of A shares in circulation (Turn RMC), the price-earnings ratio of A shares on the Shanghai Stock Exchange (Pe) and the Advance/Decline line of A shares on the Shanghai Stock Exchange. Since the different variables lead or lag investor sentiment, the lead and lag effects of the three variables were considered in the index synthesis, and the lead and lag versions of the three variables were selected to construct the investor sentiment index by principal component analysis, as shown in Table 1. Analysis of Multi-Scale Spillover Effect In order to study the influence of market volatility on investor sentiment, this study collected the daily trading data of the Shanghai Composite Index, the Shenzhen Component Index, the SWS Bank Index, the SWS Insurance Index, the SWS Securities Index and the SWS Diversified Financial Index from January 24, 2007 to December 27, 2018, and investigated the interaction between investor sentiment and the Shanghai Composite Index, the Shenzhen Component Index and the financial industries represented by the SWS indexes. When decomposing the data with the BEMD method, the ensemble number was set to 100 and the standard deviation of the added white noise was 0.1. The method proposed by Zhang [21] was employed to reorganize the IMF components and to extract the short-term fluctuation information, the medium-term major event impact information and the long-term trend information of each series, and the interaction between market fluctuation and investor sentiment at different scales was then analyzed. By comparing and analyzing the reorganized IMFs, the periodic fluctuations of investor sentiment, the stock price indexes and the SWS indexes can be examined more intuitively. Generally speaking, the high-frequency data represent the short-term fluctuation of a time series driven by irregular information, which lasts for a short time and fluctuates strongly. The low-frequency data represent the response of the time series to major events such as a financial crisis, which changes the series greatly and has strong periodicity. The trend term reflects the basic trend of the time series. Analysis of Spillover Effect at Original Scale First, the interaction between investor sentiment and the Shanghai Composite Index, the Shenzhen Component Index and the financial industries was examined with a static full-sample spillover analysis. Second, rolling window technology was employed to study the dynamic characteristics of the spillover effect among the variables. 1) Analysis of total spillover effect By constructing the full-sample spillover index from a 10-step-ahead generalized forecast error variance decomposition, a 7 × 7 matrix was obtained, as shown in Table 2. The ij-th element of the table represents the contribution of variable j to the forecast error variance of variable i. The diagonal elements measure the self-spillover within each variable, while the off-diagonal elements measure the spillover between variables. According to the table, the total spillover index between investor sentiment and the Shanghai Composite Index, the Shenzhen Component Index and the financial industries represented by the SWS indexes is 65.52%, showing a strong correlation. On the whole, the self-spillover of each variable is generally higher than the spillover between variables.
In particular, investor sentiment is affected most strongly by its own shocks, with a self-spillover index of up to 99.5%, while the other variables have little influence on investor sentiment. Being self-reinforcing, investor sentiment is highly stimulated by good news, which spreads through different media and generates a positive feedback effect on the stock market, thereby promoting the rise of stock prices; the rising stock prices further stimulate the rise of investor sentiment. The Shanghai Stock Exchange and the Shenzhen Stock Exchange are exceptions in that the spillover effect of the Shanghai Stock Exchange is slightly smaller than that of the Shenzhen Stock Exchange. In terms of the total directional spillover effect, the insurance industry has the strongest spillover effect, reaching 95.96%. Investor sentiment has the second strongest spillover effect, reaching 92.12%, but the influence it receives from the other markets is only 0.51%, indicating that information from outside the market is the main factor driving changes in investor sentiment. Investor sentiment is an important factor influencing the securities market. Compared with factors outside the market, the insurance and banking industries represented by the SWS indexes have more influence on the other markets. As for the spillover effect of investor sentiment on individual variables, investor sentiment has the strongest spillover effect on the insurance industry and the weakest spillover effect on diversified finance. Considering the spillover effects of the various financial industries on the Shanghai Stock Exchange and the Shenzhen Stock Exchange, the spillover effects of the banking and insurance industries are the strongest apart from self-spillover. The net spillover effects of the banking and insurance industries are large, indicating that the business relevance of different financial institutions has been enhanced with the deepening of mixed operation in the financial industry. The banking and insurance industries are similar in capital adequacy ratio, solvency and risk matching ability, and the cross-industry and cross-market contagion of financial risks continues to increase. 2) Analysis of dynamic spillover effect Spillover effects generally vary over time. The correlation between different variables may be strengthened or weakened under uncertain conditions, and a static spillover index may ignore the influence of new information arriving during the sample period. In order to effectively test the time-varying characteristics of the spillover effect between investor sentiment and the other variables, this study introduced rolling time window technology to ensure a smooth index and avoid information distortion. The sample size of the sliding window was set to 300 and the rolling step size to 10. As Figure 1 shows, the dynamic spillover index between investor sentiment and the various markets stayed in the range of 65%∼75% from 2007 to 2018, indicating strong correlation. The spillover index between investor sentiment and the various markets is also highly volatile and is obviously influenced by information. There were three major fluctuations, which occurred in October 2009-July 2010, February 2014-December 2014 and May 2017-October 2018. In 2009-2010 the effects of the financial crisis had not completely disappeared and there was still a long way to go before economic recovery, while the lifting of bans on restricted shares brought heavier pressure on the securities market.
However, as the Chinese government issued the 4-trillion-yuan economic stimulus plan in November 2008 and the GEM board was launched on the capital market, the multi-level capital market was gradually improved and investor sentiment was boosted. The securities market finally closed the year with large gains, and the turnover of the Shanghai and Shenzhen Stock Exchanges reached record highs. With the tightening of real estate policy and monetary policy in 2010, higher demands were placed on IPO financing and refinancing in the capital market, and the capital pressure in the secondary market doubled. The A-share market represented by the Shanghai Composite Index fell far more than the major global stock indexes, performing better only than Greece and Spain, which were suffering from the European debt crisis. With the continuous decline of the stock indexes, investors became depressed. In 2014, as China comprehensively deepened the reform of the economic system and released the reform dividend, the capital market finally got rid of the bear market that had lasted for nearly seven years and ushered in a new round of bull market; investor sentiment kept rising, leading to an increase in the spillover index. Affected by three key factors, namely overseas factors represented by the Sino-US trade friction, continued credit contraction represented by the continuous decline in the growth rate of social financing, and the continuous strengthening of financial supervision represented by the formulation of the new regulations on asset management, the A-share market continued to slump in 2018, and the turnover of the Shanghai stock market hit a more-than-four-year low. By the end of 2018, the number of investors was 145 million, the market value had decreased by 14.59 trillion yuan, and the per capita loss of A-share investors was 100,600 yuan. Investors were depressed and the spillover index kept falling. 3) Analysis of directional spillover effect Figure 2 shows the directional spillover index and the net spillover index of investor sentiment and of each market from 2007 to 2018, reflecting the trend of information spillover between investor sentiment, each market and all other markets. It can be seen from the figure that the directional spillover indexes of investor sentiment and of each market are uncertain and volatile and are greatly influenced by various kinds of market information. The total directional acceptance of investor sentiment and of each market shows the opposite trend to the total directional spillover, while the total directional spillover and the net spillover tend to move together. In terms of the total directional acceptance of each index, owing to the influence of the circuit breaker mechanism and the new regulations on shareholding reduction in the capital market, the total directional acceptance of investor sentiment and of the insurance market was relatively high in 2016, so these variables acted mainly as receivers of information, whereas the total directional acceptance of the Shanghai Composite Index, diversified finance and the banking industry was relatively low, so these variables acted mainly as transmitters of information. In terms of the total directional spillover of each index, the total directional spillover index of investor sentiment was at a high position in 2018.
From this it can be judged that in 2018 the worst-performing stock market was the Chinese A-share market; besides the constant trade friction between China and the United States, the downward pressure on the domestic economy and the relatively weak real economy, low investor sentiment was also an important cause. [Figure 2: The directional spillover index and net spillover index between investor sentiment and each market at the original scale.] 4) Robustness test To test the robustness of the total spillover effect, alternative H-step prediction error decompositions and alternative W-day sliding windows were used. In this paper, 200-day and 300-day sliding windows and 10-step and 30-step prediction horizons were selected for the robustness test, and the results are shown in Figure 3. Regardless of the size of the sliding window and the prediction horizon chosen, the dynamic spillover graphs show similar patterns, which indicates that the analysis results are robust and consistent. Analysis of Short-Term Scale Spillover Effect 1) Analysis of total spillover effect From the high-frequency data spillover results in Table 3, it can be seen that the total spillover index between investor sentiment and the Shanghai Composite Index, the Shenzhen Component Index and the financial industries represented by the SWS indexes is up to 69.25%, which is higher than the spillover index at the original scale, so the correlation among the variables is greater than at the original scale. Investor sentiment has a greater influence on each market at the short-term scale than at the original scale, with a total directional spillover reaching 299.29%. On the whole, the self-spillover is generally higher than the spillover between variables. In particular, investor sentiment is affected most by its own shocks, up to 99.8%, while the other variables have little influence on investor sentiment. However, compared with the original scale data, except that the self-spillover of investor sentiment and the banking industry increases, the other indexes decrease to different degrees. Apart from investor sentiment and the banking industry, the total directional acceptance of each market at the short-term scale is larger than that at the original scale. Except for investor sentiment, the total directional spillover index of each market at the short-term scale is smaller than that at the original scale, indicating that investor sentiment is the main variable affecting the other markets and that, except for the banking industry, all markets are more susceptible to new market information. Moreover, the interaction between most markets is weaker than that at the original scale.
The total directional acceptance index of the banking industry at the short-term scale is smaller than that at the original scale, and its total directional spillover index is also smaller than that at the original scale. Therefore, in the short term banks are less likely to be influenced by information from other markets and act more as transmitters of information than as receivers, but the role of the banking market in risk spillover at the short-term scale is smaller than at the original scale. 2) Analysis of dynamic spillover effect According to the dynamic spillover index shown in Figure 5, the spillover index between investor sentiment and each market was maintained at 60%∼70% from 2007 to 2018. Compared with the original scale, the dynamic spillover curve at the short-term scale is flatter and the spillover effect is smaller. The significant difference between the dynamic spillover graphs at the original scale and at the short-term scale appeared in 2015: in the short term, investor sentiment and the various industries reacted more strongly to relevant information in 2015, and the spillover index was larger. In China's stock market, the stock indexes rose substantially in the first half of 2015, while the market crashed and stock prices fell sharply from June 15. The regulatory authorities took various measures to rescue the market, and the CSRC and the Ministry of Public Security cracked down on malicious short selling. In order to curb the herd effect of investors selling into corrections, decrease the volatility of the capital market, enable investors to fully disseminate and respond to information, reduce information asymmetry and price uncertainty, and prevent sharp price fluctuations, the Shanghai Stock Exchange, the Shenzhen Stock Exchange and the China Financial Futures Exchange officially issued the provisions on the index circuit breaker and launched the circuit breaker mechanism at the end of 2015. [Figure 5: The dynamic spillover index between investor sentiment and each market at the short-term scale.] 3) Analysis of directional spillover effect Figure 6 shows the directional spillover index and the net spillover index between investor sentiment and each market at the short-term scale from 2007 to 2018, reflecting the short-term trend of information spillover between investor sentiment, each market and all other markets. The total directional acceptance index at the short-term scale is smaller than that at the original scale. The differences between the total directional acceptance index of investor sentiment at the short-term scale and at the original scale appeared in 2008 and 2016: the 2008 financial crisis had a greater impact on investor sentiment at the short-term scale than at the original scale, while the circuit breaker mechanism had a smaller impact on investor sentiment than at the original scale. Obviously, external events have a greater impact on investor sentiment at the short-term scale. There were also significant differences between the Shanghai Composite Index and the Shenzhen Component Index in 2018. The total directional acceptance index of the Shanghai Composite Index in 2018 was at a trough at the short-term scale and at a peak at the original scale, while the total directional acceptance index of the Shenzhen Component Index was at a trough at the original scale. In 2018, the Shanghai Composite Index was less affected by external events at the short-term scale, while the Shenzhen Component Index was obviously influenced by external events at the original scale.
The total directional spillover indexes of investor sentiment, the Shanghai Composite Index, the Shenzhen Component Index, the bond market and the insurance market are small at the short-term scale, the total directional spillover index of the banking industry is small at the original scale, and for diversified finance there is little difference between the short-term scale and the original scale. [Figure 6: The directional spillover index and net spillover index between investor sentiment and each market at the short-term scale.] 4) Robustness test In order to test the robustness of the total spillover effect at the short-term scale, 200-day and 300-day sliding windows and 10-step and 30-step prediction horizons were selected for the robustness test. The results are presented in Figure 7 and Figure 8 [Figure: the dynamic spillover index of 300-day sliding windows and 30-step prediction steps at the short-term scale]. Regardless of the size of the sliding window and the prediction horizon, the dynamic spillovers at the short-term scale show similar patterns, indicating that the short-term scale analysis results are highly robust and consistent. Analysis of Medium-Term and Long-Term Scale Spillover Effect In order to study the fluctuation relationship between investor sentiment and the market indexes at the medium-term and long-term scales in more depth, the degree of correlation and the lead-lag relationship between the variables were tested using the time difference correlation coefficient between variables. The variables M1, M2, M3, M4, M5, M6 and M7 and the variables L1, L2, L3, L4, L5, L6 and L7 respectively represent the medium-term and long-term components of investor sentiment, the Shanghai Composite Index, the Shenzhen Component Index, the SWS Diversified Financial Index, the SWS Bank Index, the SWS Bond Index and the SWS Insurance Index. It can be seen from Figure 9 that investor sentiment has a strong correlation with SWS Bond, SWS Diversified Finance and the Shanghai Composite Index at the medium-term scale. At the long-term scale, investor sentiment is strongly correlated with SWS Bond, SWS Bank, the Shanghai Composite Index and the Shenzhen Component Index, compared with the medium term. In the medium term, the correlation of SWS Bond, SWS Diversified Finance and the Shanghai Composite Index with investor sentiment (−i) is stronger than the contemporaneous correlation, while the correlation with investor sentiment (+i) is weaker than the contemporaneous correlation, indicating that the medium-term fluctuation of investor sentiment precedes that of the other variables.
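The time difference correlation coefficient used here can be illustrated with the short sketch below, which correlates a market series with lead and lag versions of investor sentiment. The series names and the maximum shift of 12 periods are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np


def lead_lag_corr(sentiment, index_series, max_shift=12):
    """Correlation of index_series(t) with sentiment(t + k) for k = -max_shift..max_shift.
    Negative k pairs past sentiment with the current index value, so large
    correlations at negative k suggest that sentiment leads the index."""
    out = {}
    for k in range(-max_shift, max_shift + 1):
        if k < 0:
            a, b = sentiment[:k], index_series[-k:]
        elif k > 0:
            a, b = sentiment[k:], index_series[:-k]
        else:
            a, b = sentiment, index_series
        out[k] = np.corrcoef(a, b)[0, 1]
    return out


# Hypothetical usage with the reconstructed medium-term components:
# corr = lead_lag_corr(medium_sentiment, medium_sse)
# best_shift = max(corr, key=lambda k: abs(corr[k]))
```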
Investor sentiment (−i) and (+i) have weaker correlations with SWS Bond, SWS Bank, the Shanghai Composite Index and the Shenzhen Component Index in the long run than the contemporaneous correlations, indicating that investor sentiment has no obvious lead-lag relationship with the other variables at the long-term scale. [Figure 9: The time difference correlation coefficient between variables at the long-term scales.] Conclusion This study investigated the correlation between investor sentiment and the Shanghai Composite Index, the Shenzhen Component Index, SWS Diversified Finance, SWS Bank, SWS Bond and SWS Insurance based on the spillover index. In order to study the correlation among the variables at different scales, the BEMD method was employed to decompose and reconstruct the variables, and the static and dynamic correlations between investor sentiment and the variables were analyzed at multiple scales. The results demonstrate that the total spillover index between investor sentiment and the Shanghai Composite Index, the Shenzhen Component Index and the financial industries represented by the SWS indexes is up to 65.52%, so there is a strong correlation among the variables, and investor sentiment is an important influencing factor of the securities market. The dynamic spillover index also shows a strong correlation between investor sentiment and the markets. At the short-term scale, compared with the original scale data, except that the self-spillover of investor sentiment and the banking industry increases, the other indexes decrease to different degrees, and apart from investor sentiment and the banking industry, the total directional acceptance of each market at the short-term scale is larger than that at the original scale. The dynamic spillover curve is relatively flat in the short term, and the spillover effect is small. In the medium term, investor sentiment has a strong correlation with SWS Bond, SWS Diversified Finance and the Shanghai Composite Index. In the long term, investor sentiment is more strongly correlated with SWS Bond, SWS Bank, the Shanghai Composite Index and the Shenzhen Component Index than in the medium term. Research prospects: the research in this paper is based on the consistency of the data. In reality, however, the data will inevitably be affected by external events, and the compilation methods of the available data may also change, such as the official opening of the Science and Technology Innovation Board on June 13, 2019 and the new compilation method of the Shanghai Composite Index introduced on July 22, 2020. In future work, we can on the one hand try to improve the compilation method of the investor sentiment index and, on the other hand, try to improve the analysis method, in order to eliminate the impact of such external events on the data.
Clinical relevance of the tumor microenvironment and immune escape of oral squamous cell carcinoma Background Changes in the tumor microenvironment and in immune surveillance represent crucial hallmarks of various kinds of cancer, including oral squamous cell carcinoma (OSCC), and a close crosstalk between hypoxia-regulating genes, the activation of chemokines and immune cells has been described. Methods A review of the pivotal role of HIF-1 and its crosstalk with various cornerstones of OSCC tumorigenesis is presented. Results Hypoxia is a frequent event in OSCC and leads to a reprogramming of the cellular metabolism in order to prevent cell death. Hypoxic OSCC cells undergo different adaptive changes such as anaerobic glycolysis, pH stabilisation and alterations of the gene and protein expression profile. This complex metabolic program is orchestrated by the hypoxia inducible factor (HIF)-1, the master regulator of early tumor progression. Hypoxia-dependent and -independent alterations in immune surveillance lead to different immune evasion strategies. These are partially mediated by alterations of the tumor cells themselves, by changes in the frequency, activity and repertoire of immune cell infiltrates and by soluble and environmental factors of the tumor micromilieu, with consecutive generation of an immune escape phenotype, progression of disease and poor clinical outcome of OSCC patients. Conclusions This review focusses on the importance of HIF-1 in the adaptation and reprogramming of the metabolic system in response to reduced oxygen levels as well as on the role of the tumor microenvironment in the evasion of OSCC from immune recognition and destruction. Established clinicopathological parameters alone are often not sufficient to predict the individual prognosis of an OSCC [10,11]. Therefore, there is an urgent need to establish additional prognostic factors. Interestingly, the aggressiveness of OSCC increases with aboral localization. Furthermore, systematic analysis of the expression profiles of OSCC led to the identification of different tumor markers with prognostic value in OSCC, such as the carcinoembryonic antigen (CEA), the carbohydrate antigens (CA) 19-9, CA 125 and CA 15-3 and the squamous cell carcinoma (SCC) antigen [12]. Recently, hypoxia-associated genes were also identified as important markers for the prognosis of OSCC [10,13], suggesting that hypoxia-dependent pathways including the stabilization of HIF-1α play a key role in the development and progression of this disease [14][15][16]. So far, the expression and localization of HIF-1α, the key regulator of hypoxic responses, in OSCC cells have been determined by several groups, although the downstream effects, in particular the expression of hypoxic pathways in the context of OSCC progression, are widely unknown. In addition, the different cellular processes controlling the composition of the immune cell infiltrate and immune surveillance have to be elucidated to develop novel prognostic markers and potential therapeutic strategies. Recently, the molecular mechanisms and signal transduction pathways involved in the development of head and neck squamous cell carcinoma (HNSCC) in general and of OSCC in particular have been extensively reviewed [17,18]. Therefore, this survey summarizes the molecular biologic and immunologic aspects involved in OSCC with a specific focus on the hypoxic tumor microenvironment and its link to an altered tumor metabolism. HIF-1-a key regulator protein in cancer progression All mammalian cells require oxygen for their essential metabolic program including oxidative phosphorylation [19,20].
The normal physiological oxygenation of mammalian tissues varies between 1 and 11 % (approximately partial pressures of 7.5-85 mm Hg) [21,22]. Intra-tumoral hypoxic stress can be mediated by rapid cell division, aberrant angiogenesis and altered blood flow. An oxygen pressure below 5-10 mm Hg is a powerful force for metabolic adaptation and leads to structural alterations favouring tumor survival, angiogenesis and progression, epithelial-mesenchymal transition (EMT) and suppression of immune reactivity, which correlate with poor prognosis and therapy resistance of tumor patients [23][24][25]. Hypoxic stress induces a complex gene expression program [26,27] with HIF-1 as a master transcriptional regulator of genes controlling oxygen homeostasis [28], thereby mediating cellular and systemic adaptive responses to maintain oxygen homeostasis in all metazoan species [29]. The HIF system consists of three principal molecule groups. The most important molecule is HIF-1, which contains the oxygen-sensing subunit alpha (HIF-1α). Next to HIF-1α, HIF-2α and various homologues of HIF-3α have been identified [24,28,30,31]. Although HIF-2α is also regulated by an oxygen-dependent hydroxylation [24], it is not involved in all types of cancer [32,33]. In some carcinomas, an inverse expression of HIF-1α and HIF-2α was found: while HIF-1α is overexpressed and associated with disease progression, HIF-2α expression is low and represents a good prognostic marker [34]. In contrast, the third homologue, HIF-3α, may function as an inhibitor of both HIF-1α and HIF-2α [24]. HIF-1 is a basic helix-loop-helix-PAS heterodimer composed of an alpha and a beta subunit [19,22]. The beta subunit, also called aryl hydrocarbon receptor nuclear translocator (ARNT), is a constitutively expressed 91-94 kDa protein [20,30]. Under physiologic conditions HIF-1α is hydroxylated by specific prolyl hydroxylases (PHDs) on the proline residues 402 (Pro-402) and 564 (Pro-564), which facilitates the binding of the von Hippel-Lindau tumor suppressor protein (pVHL) to HIF-1α/HIF-2α, leading to rapid ubiquitination by an E3-ubiquitin ligase complex followed by the proteasomal degradation of HIF-1α/HIF-2α [32]. Another possibility to inhibit HIF-1α is mediated by the factor inhibiting HIF-1 (FIH-1), which hydroxylates asparagine 803 in the transactivation domain and consequently blocks the binding of the co-activators p300 and CBP [28]. Under hypoxic conditions, HIF-1α dimerizes with HIF-1β, and the resulting transcription factor HIF-1 activates a large panel of target genes. HIF-1α represents a central protein involved in different pathways that are important for the survival of cancer cells in early cancer disease progression. Recently, a meta-analysis of 28 studies was performed on a large cohort of head and neck cancer (HNC) patients demonstrating an association of HIF-1α and HIF-2α overexpression with mortality, particularly in the Asian population, wherein increased levels of HIF-1α were associated with a reduced survival [15]. This meta-analysis further clarified that HIF-1α expression has a distinct prognostic value in the different subtypes and localizations of HNC [15], indicating that the different subtypes of HNC should be analysed separately. Concerning OSCC, increased HIF-1α levels have been shown to correlate with poor prognosis [14,[35][36][37].
HIF-1α target genes A more general finding was first proposed by Semenza and co-workers demonstrating that HIF-1α controls the expression of 100 genes involved in tumor progression [38], and this number has increased to more than a hundred genes during the last years [39]. These genes include CAIX, a membranous enzyme involved in pH regulation [40], the glucose transporter-1 (GLUT-1) responsible for glucose import [41], and the monocarboxylate transporter-1 (MCT-1) and MCT-4, which are important for lactate transport [42,43]. Furthermore, HIF-1 targets are not only involved in glucose transport and glycolysis, but also in cell survival and proliferation, invasion and metastasis formation [24]. This is in line with reports describing a hypoxia-mediated "angiogenic" or "glycolytic" switch of tumor cells [44]. Furthermore, a continuously low O2 partial pressure has an impact on both the cellular metabolism and HIF signaling [22]. The HIF-1-mediated pathways, which play an important role in the stabilization and/or progression of OSCC lesions, are (i) glycolysis, (ii) angiogenesis, (iii) pH stabilisation, (iv) the microenvironment and epithelial-mesenchymal transition (EMT) and (v) distinct strategies of tumor cells to escape immune surveillance (Fig. 1). Role of glycolysis Hypoxia induces adaptive changes in the cellular metabolism, with HIF-1 as a master regulator, to balance oxygen supply and demand [34]. OSCC cells obtain most of their energy by glycolysis. Since glycolysis delivers only 2 ATP molecules compared to 38 ATP molecules by respiration, an increased glucose uptake is essential for tumor cells to survive [45]. The family of glucose transporter molecules comprises 13 members [46,47]. The molecules most investigated in OSCC in this context are HIF-1 and GLUT-1 [41,48]. Our own investigations demonstrated a significant correlation between increased glucose uptake and poor prognosis in OSCC [49], and similar results were obtained by Harshani [48]. In addition, the hypoxia-linked upregulation of GLUT-1 was also described by Gimm and co-authors and negatively interfered with the survival of OSCC patients [50]. An increased glucose consumption leads to an acidification of tumor cells. The next crosstalk enabling tumor cell survival is an upregulation of carbonic anhydrase(s). This was accompanied by a co-expression of HIF-1α and CAIX, of which the latter is also transcriptionally activated by the HIF complex. Interestingly, the risk of tumor-related death for the patient groups with the worst prognosis was comparable, independent of HIF-1α alone (RR = 4.53) [51]. In addition, GLUT-1 is overexpressed at high frequency in OSCC lesions, and patients with tumor lesions expressing both HIF-1α and GLUT-1 had a 5.13-fold increased risk of tumor-related death (P = 0.017). Co-expression of high levels of HIF-1α and GLUT-1 was hence significantly correlated with poor prognosis in OSCC patients. Since proteins associated with the glucose and lactate metabolism often co-localize in hypoxic areas of OSCC [52,53], a combined analysis of the expression pattern of both proteins might be used as an early diagnostic and independent prognostic marker [54]. Moreover, enhanced glucose uptake by OSCC cells reduced the sensitivity of tumor cells to cisplatin-based chemotherapy [55]. Role of angiogenesis Tumor progression is a multifactorial process including the induction of angiogenesis and cancer cell proliferation in OSCC cells. This is accompanied by an upregulation of diverse angiogenic markers.
Angiogenin expression significantly correlates with HIF-1α [56] and with an increased microvessel density (MVD). When OSCC cells were cultured under mild hypoxia (5 % O2), only HIF-2α contributed to VEGF expression; in contrast, at 1 % O2, VEGF was regulated by both HIF-1α and HIF-2α. As a consequence, both HIF-1α and HIF-2α play a pivotal role in tumor angiogenesis and tumor growth of OSCC [37]. In addition, HIF-1α is involved in tumor lymphangiogenesis. This was demonstrated by analysis of the density of blood and lymphatic microvessels in OSCC using immunohistochemical staining for CD43 and LYVE-1: HIF-1α overexpression significantly correlated with a VEGF-C upregulation, and consequently a higher lymphatic vessel density was found in HIF-1α-positive OSCC [57]. Role of pH stabilisation The proliferation of cancer cells creates toxic waste products and an acidification leading to a decrease in the intracellular pH of tumor cells. As a metabolic adaptation, different ion exchangers accumulate at the tumor cell membrane to maintain the intracellular pH (pHi) (Fig. 2). Dysbalances in pHe have been shown to be associated with cancer progression [58]. Moreover, HIF-1α also orchestrates the pH stability of tumor cells [59] and the adaptation of the extracellular matrix [60,61], which is linked to alterations of the metabolic program by affecting the expression of HIF-regulated pathway components [59]. This includes e.g. an upregulation of CAIX, which is associated with nodal metastases and a decreased survival of OSCC patients [62]. The deregulated pH in OSCC is also an adaptive feature, which can be divided into two general processes: first, the intracellular pH (pHi) has to be maintained; second, an acidification of the extracellular pH (pHe) is the consequence. In normal differentiated adult cells, the intracellular pH (pHi) is generally ~7.2 and lower than the extracellular pH (pHe) of ~7.4. However, cancer cells have a higher pHi of ≥7.4 and a lower pHe of ~6.7-7.1. A complex membrane-spanning transport system is engaged to maintain pH homeostasis. In this process, HIF-1α is a sensitive controller and regulator of a complex adaptive metabolic response to alterations, in particular, of the intracellular pH value [33,62]. An increased pHi is permissive for cell proliferation and the evasion of apoptosis, facilitates metabolic adaptation and is obligatory for efficient directed cell migration [58]. It is proposed that the hypoxic microenvironment induces the epithelial-to-mesenchymal transition (EMT), enhances stem-like properties, promotes invasion and metastasis, and thus increases malignancy. Several pluripotency transcription factors, such as Nanog, POU5F1, SNAI and SOX-2, seem to be associated with the EMT process [11,[63][64][65][66]. In our opinion, the same effects are another hallmark of OSCC development, as described by Guo and co-workers for gastric cancer [67,68]. The HIF-1α-mediated, hypoxia-dependent down-regulation of E-cadherin and upregulation of N-cadherin is caused by the activation of the transcription factor SNAI2, thereby promoting EMT of tumor cells. Therefore, the aberrant HIF-1α and SNAI2 expression combined with the cadherin switch has been suggested as a potential risk marker for predicting metastasis and clinical prognosis [68]. Furthermore, with the switch to anaerobic metabolism, the genetic instability of OSCC cells greatly increases [44].
Tumor microenvironment and epithelial-mesenchymal transition (EMT) When hypoxia results in epithelial-mesenchymal transition, the resulting reduction in adherence and the change of the cytoskeleton enable migration, leading to metastasis. The hostile microenvironment created by hypoxic events, combined with the presence of growth factors and cytokines, provides the crucial initiating events leading to this complex transition. EMT is classified into three different subtypes based on the biological setting, with different functional consequences. Type 1 EMT is considered to be associated with implantation, embryogenesis, and organ development. Type 2 EMT occurs in organ fibrosis, whereas Type 3 EMT occurs in carcinogenesis [69]. During the progression to metastatic competence, the OSCC cells enter into a metabolic and survival reprogramming process. This allows the OSCC cells to acquire features similar to mesenchymal cells that may significantly impart invasiveness, changes in adhesive properties, activation of motility and the degradation of the extracellular matrix. Major signaling pathways, which are commonly implicated in epithelial-mesenchymal transition, include TGF-β, Wnt, Notch, Hedgehog, and others. These pathways converge on several transcription factors, including the zinc finger proteins Snail and SNAI2, Twist, ZEB1/2, and Smads. These factors interact with one another and with other factors to provide crosstalk between the relevant signaling pathways [70]. Although hypoxia-induced EMT signalling occurs in all tumor cell populations, only the stem-like cells acquire high migratory potential, which suggests that these cells are potentially responsible for invasion and consequently metastasis [71]. Furthermore, in the presence of hypoxia, the utilization of epigenetic mechanisms of gene regulation could play an important role in the aggressive behaviour of tumors. The tumor microenvironment appears to play a prominent role in affecting EMT changes. For example, the E-cadherin transcriptional repressor TWIST is positively regulated by HIF-1α [72]. It has been demonstrated in in vitro experiments that hypoxia-induced EMT is associated with increased HIF-1α and Twist expression and that knock-down of either HIF-1α or Twist blocks hypoxia-induced EMT [73]. There are many pathways regulating EMT. Amongst them are the Notch and Wnt signaling pathways, which have been shown to be involved in the conversion of the hypoxic stimulus into EMT and are also important in increasing both the motility and the invasiveness of tumor cells [74]. The EMT program has been found to be active in the invasive front of OSCC. There is evidence of reduced E-cadherin expression at the invasive front of OSCC, together with an association with histological invasiveness, which suggests that this protein could be a potential EMT marker, thus offering prognostic information in OSCC [75]. Moreover, there is further evidence suggesting that hypoxia induces EMT in OSCC cell lines via activation of Notch signaling. Therefore, inhibition of the Notch signaling pathway to suppress EMT is possibly a useful approach for the treatment of OSCC [76]. An understanding of how hypoxia regulates the EMT-related transcriptome could help to identify nodes of interaction related to cancer progression. Consequently, further targeting of hypoxia-induced EMT could prove to be an alternative therapeutic approach for the prevention and treatment of OSCC.
Distinct strategies of tumor cells to escape immune surveillance A novel and recently emerging aspect in tumorigenesis is the crosstalk between hypoxia/HIF-1α and immune recognition/immune escape mechanisms [77]. Hypoxia has been shown to influence the repertoire and activity of immune cells in various tumor entities, including OSCC, and different strategies by which hypoxia contributes to tumoral immune escape from recognition by NK cells and cytotoxic T lymphocytes have been described [78][79][80]. Hypoxic stress is known to induce a variety of immune suppressive molecules, such as IL-10 and TGF-β (Figs. 3, 4). This could further induce the differentiation of tumor-associated macrophages (TAMs) into M2 macrophages to suppress anti-tumoral activities [78]. In addition, HIF-1α orchestrates CD4+ and CD8+ T cells regarding their survival, apoptosis and cytokine secretion. In HIF knock-down mice, an increased frequency and activation of CD4+ and CD8+ T cells was found; these cells produced higher levels of IFN-γ, thereby enhancing anti-tumor responses [81]. Furthermore, the interaction of HIF-1α with the cytotoxic T lymphocyte antigen 4 (CTLA-4), an immune checkpoint regulator, and its ligands CD80/CD86 is well known. The HIF-triggered CTLA-4 blockade caused a reduction in the frequency of tumor-infiltrating Tregs [78]. Another key role of HIF-1α associated with immune escape and the induction of tolerance is the shedding of cell surface immune regulatory molecules like MICA, which interact with the NKG2D receptors of different immune cells. Tumor cytolysis is avoided due to the resulting resistance to NK cell attack [82]. This immune adaptation depends on the HIF-1α-mediated induction of the metalloproteinase ADAM-10 in tumor cells [83]. In addition, PDL-1 is expressed under hypoxic conditions, leading to resistance to CTL-mediated lysis, and the cellular co-localisation of HIF-1α and PDL-1 in tumors has been well investigated [79]. In advanced OSCC, Chen and co-workers described positive staining for both HIF-1α and PDL-1, which was associated with a worse prognosis [84]. In addition, a binding of HIF-1α to the hypoxia-response element of the PDL-1 promoter has been shown in both breast and prostate cancer cell lines, demonstrating a link of HIF-1α with this immune checkpoint molecule [79]. It is generally accepted that tumor cells can be recognized as abnormal cells and thus be destroyed by immune cells. However, tumors have developed different mechanisms that allow their immune escape, such as a deficient expression of tumor antigens on their cell surface, loss or reduced expression of MHC (major histocompatibility complex) class I molecules, lack of expression of co-stimulatory molecules, production of immune suppressive molecules like the transforming growth factor (TGF)-β, IL-6, IL-10, prostaglandin (PG) E2 and adenosine, resistance to apoptosis, and/or the expression of Fas ligand (FasL), which leads to the death of tumor-infiltrating lymphocytes (TILs), as well as the induction of co-inhibitory molecules of the B7 family and of non-classical HLA class I antigens, e.g. HLA-G and HLA-E [85][86][87][88]. In addition, tumor cells can recruit TAMs, the major inflammatory components of the tumor microenvironment, by secreting the colony stimulating factor (CSF-1), the chemokine ligands 2, 3, 4, 5, and 8 (CCL2, 3, 4, 5, and 8) and VEGF [89][90][91].
Due to their distinct functions macrophages could be divided into (i) the M1 phenotype, which kill pathogens, promote the activation of cytotoxic CD8 + T cells and the differentiation of naïve CD4 + T cells into Th1 effector and Th17 cells and (ii) the M2 macrophages, which stimulate CD4 + Th2 cells as well as regulatory T cell differentiation and promote angiogenesis and tissue remodeling (pro-tumor functions). The effect of Th17 cells is controversially discussed. However, so far no information exists on the prognostic impact of premalignant oral lesions developing OSCC and their TAM phenotype. It might be an advantage to limit the progression of premalignant lesions to cancer by sustaining the Th17 phenotype [92]. This is in contrast to Deng and coauthors demonstrating a recruitment and proliferation of Th17 cells in the intestine promoting colon cancer [93]. Multiple studies demonstrated a correlation between the frequency of macrophages in the tumor microenvironment and the patients' prognosis. This is due to the secretion of the epidermal growth factor (EGF), plateletderived growth factor (PDGF), TGF-β, IL-6, IL-1, and tumor necrosis factor (TNF)-α of TAMs, which creates a favorable milieu for tumor growth. In hypoxic areas, TAMs stimulate angiogenesis by secreting TGF-β, VEGF, granulocyte macrophage (GM)-CSF, TNF-α, IL-1, IL-6, and IL-8, promote tumor cell migration and invasion via matrix metalloproteinases (MMPs), TNF-α, and IL-1 and induce immunosuppression via TGF-β, PGE2, and IL-10. T lymphocytes could also contribute either to tumor cell destruction or facilitate its development. While Th1 CD4 + T cells facilitate tumor rejection by assisting the function of cytotoxic CD8 + T cells, Th2 CD4 + T cells promote antibody production of B cells by secreting cytokines. The CD4 + T regulatory cells (Tregs) expressing FoxP3 promote tumor progression by inhibiting the NK and T cell functions. Their frequency is increased in tumor patients and associated with a worse prognosis. In addition, the number of myeloid-derived suppressor cells (MDSCs), which are induced by VEGF, GM-CSF, TGF-β, IL-6, PGE2, and cyclooxygenase (COX)-2 are often increased in tumor patients. MDSCs are a heterogeneous population of immature and progenitor myeloid cells with an immunosuppressive role in various types of cancer, including OSCC. A significant accumulation of both granulocytic and monocytic MDSC was observed in head and neck cancer. The frequency of granulocytic MDSC showed an inverse correlation to the frequency of T cells in the peripheral blood. The increased granulocytic MDSC significantly associated with advanced clinical stage and poor prognosis of head and neck cancer patients [94]. Cetuximab treatment of HNSCC significantly increased monocytic MDSC in non-responders, but decreased granulocytic MDSC in responders of HNSCC patients. The frequency of MDSC known to promote tumor progression correlate with poor prognosis in cancer patients including HNSCC [95]. They have been shown to be involved in tumor progression by inhibiting the activity of CD4 + and CD8 + T cells, by the production of arginase and reactive oxygen species (ROS) and by inducing Tregs through an IL-10 and IFN-γ-dependent process. Their interaction with macrophages could result in the induction of the type 2 phenotype due to increased IL-10 secretion. 
Peripheral blood markers reflecting the immunologic tumor microenvironment One possibility to analyze the aggressiveness of OSCC is the identification of prognostic biomarkers in the peripheral blood. Indeed, a number of circulating biomarkers have been described [96]. These include e.g. circulating peripheral blood CD14+/CD16+ monocyte-derived macrophages (MDMs), which have been evaluated in OSCC patients by Grimm and co-workers and characterized as CD14 + /CD16 + MDMs [97]. Moreover, the lowest ratio of IL-17F/VEGF was found in OSCC patients (P < 0.05): The lower ratio of IL-17F/VEGF correlated to higher tumor stage and lymph node metastases. Furthermore, the serum level of IL-17F and the ratio of IL-17F/VEGF were positively associated with the number of CD3 + CD4 + T cells. These data indicated that serum IL-17F might originate from PBMCs during the development of OSCC and thus could be used to distinguish OSCC patients from healthy individuals [98]. The proportion of CD57 + T cells, including both CD8 + and CD4 + subsets significantly increased with clinical stage, especially in parallel with tumor size as described by Iida and co-authors. The population of CD57 + T cells is another potent prognostic marker and may also influence the systemic immunity of patients with OSCC [99]. Moreover, different serum levels of IL17A, TGFβ1, IL4 and IL10 were significantly higher in oral cancer patients, while the concentration of IL2 and IFN-γ was relatively lower in patients when compared to controls. TGFβ1 levels significantly correlated with disease. In this context, IL17A might represent a risk factor for OSCC [100]. Characteristics of the immune escape in OSCC It has been proposed that OSCC could escape the antitumor response by several distinct mechanisms. So far alterations in the HLA class I molecules due to deficient expression of components of the antigen processing machinery (APM) has been mainly described thereby leading to resistance to CTL-mediated lysis. In HNSCC a coordinated downregulation of various APM components, in particular TAP1, TAP2, tapasin and HLA class I antigens, were found with higher frequency of downregulation in metastasis when compared to the primary lesions of the same patients [101,102]. This was further associated with a worse clinical outcome of patients. Furthermore, a downregulation of HLA class I antigens and of most APM components in OSCC lesions was described when compared to adjacent normal tissues. The deficient expression of the low molecular weight protein LMP2, a component of the proteasome, was associated with a reduced CD8 + T cell infiltration, which was associated with the presence of regional lymph node metastases and with reduced survival rates of these patients. The molecular mechanism of APM deficiencies in OSCC could be diverse and include ganglioside-mediated or transcriptional-mediated down-regulation of several HLA class I APM components [103,104]. In addition a lack of IFN-γ inducibility has been shown as well as defects in the IFN-γ signal transduction pathway, such as impaired phosphorylation of one or more components, which also negatively interfere with the constitutive HLA class I APM component expression. 
Furthermore, the development of OSCC is strongly influenced by the host immune system, such as reduced frequency of (activated) immune effector cells, increased frequency of regulatory T cells (Treg), functional defects or apoptosis of both circulating and tumor-infiltrating T cells and the local microenvironment disabling TIL due to absent or low expression of the CD3 zeta chain (CD3ζ), decreased proliferation in response to mitogens or IL-2, an imbalance in the cytokine profile and pronounced apoptotic features [105][106][107][108]. Moreover, immune cell dysfunction is also found in peripheral circulating mononuclear cells of patients with advanced OSCC. HNSCC cells also produce high quantities of TGF-β1, which reduces the expression of NK cell receptor NKG2D and CD16 and inhibits the biological functions of NK cells. In addition, an accumulation of Tregs and TAMs in the TILs and/or of peripheral blood mononuclear cells in head and neck cancer patients was detected, which could be related to the early recurrence and patient's prognosis [109][110][111]. High levels of various soluble mediators were found in the peripheral blood and/or tumor microenvironment of HNSCC, such as VEGF, PGE2, TGF-β, IL-6, and IL-10. These factors have been shown to inhibit the immune response at different levels. Accumulating evidence exists that myeloid-derived cells including myeloid derived suppressor cells, polymorphonuclear granulocytes (PMN) and dendritic cells play an important role in tumor angiogenesis [112][113][114]. The frequency and function of these cells are modulated by tumor-derived factors thereby increasing the inflammatory or immune suppressive activity [115]. In addition, OSCC have also a significant impact on dendritic cells (DCs). A larger number of DCs in non-metastatic lymph nodes was found when compared to metastatic lymph nodes of OSCC. The immature DC marker CD1a was especially present in the cancer "nest", whereas the mature DC marker CD83 was more prominent in the peritumoral area. The relationship between the expression of VEGF and DC infiltration, which plays an important role in immune defense against tumors, remains unclear. However, VEGF-A expression is not only involved in tumor angiogenesis, and disease progression but also in immune suppression by inhibiting the differentiation of CD1a + immature DCs from progenitor cells leading to a reduced number of mature DCs and increased levels of dysfunctional CD83 + mature DCs [116]. Moreover, in OSCCs a greater number of S100 + and CD1a + immature DCs in adjacent tissue and regional lymph nodes in patients without metastasis had been found. In contrast, CD83 + mature DCs were more abundant in patients with metastasis [117]. Tumor cells can modulate the expression of toll like receptors (TLRs) present on the surface of immune cells. Monocyte-derived DCs (MDCs) of OSCC patients express all TLRs except TLR4, -9, and -7 [118][119][120]. Thus altered TLR expression is an important tumorpromoting event in OSCC progression. OSCCs can also influence the frequency of circulating MDC and plasmacytoid dendritic cell populations. The number of circulating MDCs (LIN-DR + CD11c + ) was significantly lower in patients with OSCC. However, the circulating MDC population increased after removal of the tumor suggesting that this reduction was reversible and controlled by the presence of tumor cells. In addition, TAMs are involved in the angiogenesis and tumor progression of OSCC as described by various groups. 
The number of TAMs as determined by immunohistochemical analysis using the CD68 antibody is higher in carcinomas [121]. Comparison of TAMs with clinical parameters demonstrated an association between their frequency, tumor stage, grade and invasion, intratumoral microvessel density and the presence of angiogenic factors, such as VEGF [122]. This was further confirmed by the analysis of the expression of cell cycle (cyclin E and p53) and proliferation markers (Ki-67) as well as macrophage infiltration [123]. A direct correlation between the macrophage infiltration and the tumor proliferation index was noted, which suggested that the number of TAMs is functionally linked to tumor progression [124]. In addition macrophages play a role in OSCC formation by contributing to neovascularization. In fact, OSCCs could attract macrophages by secreting MCP-1 and TGF-β1 [125,126]. The mechanisms responsible for T cell apoptosis in OSCC patients involve the Fas/FasL or the TRAIL and TNF-α signaling pathways. The FasL expression was found on the cell surface of OSCC cells leading to an apoptotic signal of circulating Fas + T lymphocytes. The suppression of Treg might depend on Fas/FasL-mediated apoptosis: CD4 + T cells were resistant to Fas-mediated apoptosis by Tregs, but were able to induce Treg apoptosis in the presence of low concentrations of IL-2. The expression of monocyte chemotactic protein-1 (MCP-1/CCL2) and macrophage inflammatory protein-1α (MIP-1α/CCL3) was found in OSCC lesions and might also control disease progression. In addition, serum levels of CCL2 and CCL3 in OSCC were determined. A significant lower concentration of CCL2 was detected in the OSCC patients when compared to that in the healthy controls. Serum levels of CCL3 were positively related to the tumor size, while the CCL2/CCL3 ratio in OSCC patients was correlated to TNM (tumor, node, metastasis) [127,128]. Thus CCL2 and CCL3 are associated with progression of OSCC and might serve as potential biomarkers. Clinical relevance of the frequency and composition of immune cell infiltrates Recently, the prognostic value of various tumor-infiltrating immune cells populations was determined in OSCC patients. Interestingly, a high frequency of CD4 + CD69 + T cells was linked to a better prognosis, and CD4 + Foxp3 + T cells were positively correlated with better locoregional control. Moreover, a higher density of CD4 + CD25 + Tregs was also linked to a good prognosis in OSCC. In discrepancy to these studies the presence of Tregs in TILs was linked to a worse prognosis in OSCC patients in other reports. Suppression by the tumor microenvironment is mediated by a unique subset of CD4 + CD25 high Foxp3 + Tregs that produce IL-10 and TGF-β, which lead to a more antiproliferative effects [108,129,130]. Changes in the expression of the ζ chain of TILs are biologically significant because the absence or low expression of this chain in TILs in patients with stage III or IV HNSCC predicts a poor survival compared with patients expressing a normal ζ chain. The importance of the ζ chain was further confirmed by demonstrating a lower expression of the ζ chain in circulating CD4 + and CD8 + T cells and CD3 − CD56 + CD16 + NK cells in the blood of patients with OSCC when compared to healthy individuals. Reichert and co-authors studied the DC population and the expression of the ζ chain in TILs in a large series of 132 OSCCs [131]. 
A low density of DCs and absent or low expression of the ζ chain in TILs were correlated with poor survival and a high risk of recurrence. The more advanced cases demonstrated higher rates of Tregs and B cells and fewer CD8 + T cells. In the low-risk group, a high concentration of CD20 + TILs was linked to a better survival rate, whereas this increase was linked to a worse prognosis in the high-risk group. Conclusions Because hypoxia has been associated with increased immune escape, a deeper understanding of the factors that cause immune suppression in OSCC is relevant for the development of novel anti-cancer therapies. The worse prognosis of these patients has been linked to hypoxia and hypoxia-induced immune escape. Impaired anti-tumor responses of OSCC patients are caused by the tumor itself, by functional defects or apoptosis of both circulating and tumor-infiltrating T cells, and by the tumor microenvironment, whose soluble factors and hypoxic conditions lead to an accumulation of immune suppressive cells, such as TAMs, Tregs and MDSCs, as well as a downregulation of the function and activity of T lymphocytes and DCs. Abbreviations ANG: angiogenin; APM: antigen-processing machinery; ARNT: aryl hydrocarbon receptor nuclear translocator; CA: carbonic anhydrase; CEA: carcinoembryonal antigen; COX: cyclooxygenase; CTL: cytotoxic T lymphocytes; EGF: epidermal growth factor; EMT: epithelial-mesenchymal transition; FIH-1: factor inhibiting HIF-1 alpha; GM: granulocyte macrophage; HIF: hypoxia-inducible factor; HRE: hypoxia responsive element; IL: interleukin; LYVE-1: lymphatic vessel endothelial hyaluronan receptor 1; MCT-1: monocarboxylate transporter; MDSC: myeloid-derived suppressor cell; MHC: major histocompatibility complex; MVD: microvessel density; NHE1: solute carrier family 9 member A1; OSCC: oral squamous cell carcinoma; OSR: overall survival rate; PDH: prolyl hydroxylase; PDL-1: programmed death-ligand 1; PG: prostaglandin; pHe: extracellular pH; pHi: intracellular pH; PTG: prostaglandin-endoperoxide; SCC: squamous cell carcinoma; TAMs: tumor-associated macrophages; TGF: transforming growth factor; TILs: tumor-infiltrating lymphocytes; TNF: tumor necrosis factor; TNM: tumor-node metastasis; Tregs: regulatory T cells; pVHL: von Hippel-Lindau tumor suppressor protein; VEGF: vascular endothelial growth factor. Authors' contributions AWE and BS carried out the manuscript preparation including the sessions and final manuscript conception. CW participated in sessions (i-iii) in preparing the manuscript. PS participated in session (iv), and JB and MK participated in the sessions "Characteristics of oral squamous cell carcinoma and additional prognostic markers and HIF-1α target genes". All authors read and approved the final manuscript.
7,684.6
2016-04-05T00:00:00.000
[ "Biology", "Medicine" ]
Calibration of a multi-physics ensemble for estimating the uncertainty of a greenhouse gas atmospheric transport model Atmospheric inversions have been used to assess biosphere–atmosphere CO2 surface exchanges at various scales, but variability among inverse flux estimates remains significant, especially at continental scales. Atmospheric transport errors are one of the main contributors to this variability. To characterize transport errors and their spatiotemporal structures, we present an objective method to generate a calibrated ensemble adjusted with meteorological measurements collected across a region, here the upper US Midwest in midsummer. Using multiple model configurations of the Weather Research and Forecasting (WRF) model, we show that a reduced number of simulations (less than 10 members) reproduces the transport error characteristics of a 45-member ensemble while minimizing the size of the ensemble. The large ensemble of 45 members was constructed using different physics parameterizations (i.e., land surface models (LSMs), planetary boundary layer (PBL) schemes, cumulus parameterizations and microphysics parameterizations) and meteorological initial/boundary conditions. All the different models were coupled to CO2 fluxes and lateral boundary conditions from CarbonTracker to simulate CO2 mole fractions. Observed meteorological variables critical to inverse flux estimates (PBL wind speed, PBL wind direction and PBL height) are used to calibrate our ensemble over the region. Two optimization techniques (i.e., simulated annealing and a genetic algorithm) are used for the selection of the optimal ensemble, with the flatness of the rank histograms as the main criterion. We also choose model configurations that minimize the systematic errors (i.e., monthly biases) in the ensemble. We evaluate the impact of transport errors on atmospheric CO2 mole fractions and find that they represent up to 40 % of the model–data mismatch (as a fraction of the total variance). We conclude that a carefully chosen subset of the physics ensemble can represent the uncertainties in the full ensemble, and that transport ensembles calibrated with relevant meteorological variables provide a promising path forward for improving the treatment of transport uncertainties in atmospheric inverse flux estimates. 
Atmospheric inversions based on Bayesian inference depend on the prior flux error covariance matrix and the observation error covariance matrix. The prior flux error covariance matrix represents the statistics of the mismatch between the true fluxes and the prior fluxes, but the limited density of flux observation limits our ability to characterize these errors (Hilton et al., 2013). The observation error covariance describes errors of both measurements and the atmospheric transport model. In atmospheric inversions, the model errors tend to be much greater than the measurement errors (e.g., Gerbig et al., 2003; Law et al., 2008). Additionally, atmospheric inversions assume that the atmospheric transport uncertainties are known and are unbiased; therefore, the method propagates uncertain and potentially biased atmospheric transport model errors to inverse fluxes, limiting their optimality. Unfortunately, rigorous assessments of the transport uncertainties within current atmospheric inversions are limited. Estimation of the atmospheric transport errors and their impact on CO2 fluxes remains a challenge (Lauvaux et al., 2009). 
A limited number of studies are dedicated to quantify the uncertainty in atmospheric transport models and even fewer attempted to translate this information into the impact on the CO 2 mixing ratio and inverse fluxes.The atmospheric Tracer Transport Model Intercomparison Project (TransCom) has been dedicated to evaluate the impact of atmospheric transport models in atmospheric inversion systems (e.g., Gurney et al., 2002;Law et al., 2008;Peylin et al., 2013).These experiments have also shown the importance of the transport model resolution to avoid any misrepresentation of highfrequency atmospheric signals (Law et al., 2008).Diaz Isaac et al. (2014) showed how two transport models with two different resolutions and physics but using the same surface fluxes can lead to large model-data differences in the atmospheric CO 2 mole fractions.These differences would yield significant errors on the inverse fluxes if propagated into the inverse problem.Errors in horizontal wind (Lin and Gerbig, 2005) and in vertical transport (Stephens et al., 2007;Gerbig et al., 2008;Kretschmer et al., 2012) have been shown to be important contributors to uncertainties in simulated atmospheric CO 2 .Lin and Gerbig (2005), for example, estimate the impact of horizontal wind error on CO 2 mole fractions and conclude that uncertainties in CO 2 due to advection errors can be as large as 6 ppm.Other studies have shown that errors in the simulation of vertical mixing have a large impact on simulated CO 2 and inverse flux estimates (e.g., Denning et al., 1995;Stephens et al., 2007;Gerbig et al., 2008).Therefore, some studies have evaluated the effects that planetary boundary layer height (PBLH) has on CO 2 mole fractions (Gerbig et al., 2008;Williams et al., 2011;Kretschmer et al., 2012).Approximately 3 ppm uncertainty in CO 2 mole fractions has been attributed to PBLH errors over Europe during the summertime (Gerbig et al., 2008;Kretschmer et al., 2012).These studies have attributed the errors to the lack of sophisticated subgrid parameterization, especially PBL schemes and land surface models (LSMs).This led other studies (Kretschmer et al., 2012;Lauvaux and Davis, 2014;Feng et al., 2016) to evaluate the impact of different PBL parameterizations on simulated atmospheric CO 2 .These studies have found systematic errors of several ppm in atmospheric CO 2 that can generate biased inverse flux estimates.While there is an agreement that errors in the vertical mixing and advection schemes can directly affect the inverse fluxes, other components of the model physics (e.g., convection, large-scale forcing) have not been carefully evaluated. 
Atmospheric transport models have multiple sources of uncertainty including the boundary conditions, initial conditions, model physics parameterization schemes and parameter values.With errors inherited from all of these sources, ensembles have become a powerful tool for the quantification of atmospheric transport uncertainties.Different approaches have been evaluated in the carbon cycle community to represent the model uncertainty: (1) the multi-model ensembles that encompass models from different research institutions around the world (e.g., TransCom experiment; Gurney et al., 2002;Baker et al., 2006;Patra et al., 2008;Peylin et al., 2013;Houweling et al., 2010), (2) multi-physics ensembles that involve different model physics configurations generated by the variation of different parameterization schemes from the model (e.g., Kretschmer et al., 2012;Yver et al., 2013;Lauvaux and Davis, 2014;Angevine et al., 2014;Feng et al., 2016;Sarmiento et al., 2017) and (3) multi-analysis (i.e., forcing data) that consists of running a model over the same period using different analysis fields (where perturbations can be added) (e.g., Lauvaux et al., 2009;Miller et al., 2015;Angevine et al., 2014).These ensembles are informative (e.g., Peylin et al., 2013;Kretschmer et al., 2012;Lauvaux and Davis, 2014) but have some shortcomings.In some cases, the ensemble spread includes a mixture of transport model uncertainties and other errors such as the variation in prior fluxes or the observations used.Other studies have only varied the PBL scheme parameterizations.None of these studies have carefully assessed whether or not their ensemble spreads represent the actual transport uncertainties. In the last two decades, the development of ensemble methods has improved the representation of transport uncertainty using the statistics of large ensembles to characterize the statistical spread of atmospheric forecasts (e.g., Evensen, 1994a, b).Single-physics ensemble-based statistics are highly susceptible to model error, leading to underdispersive ensembles (e.g., Lee, 2012).Large ensembles (> 50 members) remain computationally expensive and ill adapted to assimilation over longer timescales such as multi-year inversions of long-lived species (e.g., CO 2 ).Smaller-size ensembles would be ideal, but most initial-condition-only perturbation methods produce unreliable and overconfident representations of the atmospheric state (Buizza et al., 2005).An ensemble used to explore and quantify atmospheric transport uncertainties requires a significant number of members to avoid sampling noise and the lack of dispersion of the ensemble members (Houtekamer and Mitchell, 2001).However, large ensembles are computationally expensive.Limitations in computational resources lead to restrictions, in-cluding the setup of the model (e.g., model resolution, nesting options, duration of the simulation) and the number of ensemble members.It is desirable to generate an ensemble that is capable of representing the transport uncertainties and which does not include any redundant members. 
Various post-processing techniques can be used to calibrate or "down-select" from a transport ensemble of 50 or more members to a subset of ensemble members that represent the model transport uncertainties (e.g., Alhamed et al., 2002;Garaud and Mallet, 2011;Lee, 2012;Lee et al., 2016).Some of these techniques are principal component analysis (e.g., Lee, 2012), k-means cluster analysis (e.g., Lee et al., 2012) and hierarchical cluster analysis (e.g., Alhamed et al., 2002;Yussouf et al., 2004;Johnson et al., 2011;Lee et al., 2012Lee et al., , 2016)).Riccio et al. (2012) applied the concept of "uncorrelation" to reduce the number of members without using any observations.Solazzo and Galmarini (2014) reduced the number of members by finding a subset of members that maximize a statistical performance skill such as the correlation coefficient, the root mean square error or the fractional bias.Other techniques applied less commonly to the calibration of the ensembles include simulated annealing and genetic algorithms (e.g., Garaud and Mallet, 2011).All these techniques are capable of eliminating those members that are redundant and generating an ensemble with a smaller number of members that represents the uncertainty of the atmospheric transport model more faithfully than the larger ensemble. In this study, we start with a large multi-physics/multianalysis ensemble of 45 members presented in Díaz-Isaac et al. (2018) and apply a calibration process similar to the one explained in Garaud and Mallet (2011).Two principal features characterize an ensemble: reliability and resolution.The reliability is the probability that a simulation has of matching the frequency of an observed event.The resolution is the ability of the system to predict a specific event.Both features are needed in order to represent model errors accurately.Our main goal is to down-select the large ensemble to generate a calibrated ensemble that will represent the uncertainty of the transport model with respect to meteorological variables of most importance in simulating atmospheric CO 2 .These variables are the horizontal mean PBL wind speed and wind direction, and the vertical mixing of surface fluxes, i.e., PBLH.We focus on the criterion that will measure the reliability of the ensemble, i.e., the probability of the ensemble in representing the frequency of events (i.e., the spatiotemporal variability of the atmospheric state).For the down-selection of the ensemble, we will use two different techniques: simulated annealing and a genetic algorithm (from now on referred to as calibration techniques/processes).In a final step, the ensemble with the optimal reliability will be selected by minimizing the biases in the ensemble mean.We will evaluate which physical parameterizations play important roles in balancing the ensembles and evaluate how well a pure physics ensemble can represent transport uncertainty. 
Generation of the ensemble We generate an ensemble using the Weather Research and Forecasting (WRF) model version 3.5.1 (Skamarock et al., 2008), including the chemistry module modified in this study for CO 2 (WRF-ChemCO 2 ).The ensemble consists of 45 members that were generated by varying the different physics parameterization and meteorological data.The land surface models, surface layers, planetary boundary layer schemes, cumulus schemes, microphysics schemes and meteorological data (i.e., initial and boundary conditions) are alternated in the ensemble (see Table 1).All the simulations use the same radiation schemes, both long-and shortwave. The different simulations were run using the one-way nesting method, with two nested domains (Fig. 1).The coarse domain (d01) uses a horizontal grid spacing of 30 km and covers most of the United States and part of Canada.The inner domain (d02) uses a 10 km grid spacing, is centered in Iowa and covers the Midwest region of the United States.The vertical resolution of the model is described with 59 vertical levels, with 40 of them within the first 2 km of the atmosphere.This work focuses on the simulation with higher resolution; therefore, only the 10 km domain will be analyzed.The CO 2 fluxes for summer 2008 were obtained from NOAA Global Monitoring Division's CarbonTracker version 2009 (CT2009) data assimilation system (Peters et al., 2007; with updates documented at https://www.esrl.noaa.gov/gmd/ccgg/carbontracker/, last access: 17 January 2018).The different surface fluxes from CT2009 that we propagate into the WRF-ChemCO 2 model are fossil fuel burning, terrestrial biosphere exchange and exchange with oceans.The CO 2 lateral boundary conditions were obtained from CT2009 mole fractions.The CO 2 fluxes and boundary conditions are identical for all ensemble members. Dataset and data selection Our interest is to calibrate the ensemble over the US Midwest using the meteorological observations available over this region.The calibration of the ensemble will be done only within the inner domain.To perform the calibration, we used balloon soundings collected over the Midwest region (Fig. 1).Meteorological data were obtained from the University of Wyoming's online data archive (http://weather.uwyo.edu/upperair/sounding.html, last access: 20 July 2018) for 14 rawinsonde stations over the US Midwest region (Fig. 1).To evaluate how the new calibrated ensemble impacts CO 2 mole fractions, we will use in situ atmospheric CO 2 mole fraction data provided by seven communication towers (Fig. 1).Five of these towers were part of a Penn State experimental network, deployed from 2007 to 2009 (Richardson et al., 2012;Miles et al., 2012Miles et al., , 2013;; https://doi.org/10.3334/ORNLDAAC/1202).The other two towers (Park Falls -WLEF; West Branch -WBI) are part of the Earth System Research Laboratory/Global Monitoring Division (ESRL/GMD) tall tower network (Andrews et al., 2014), managed by NOAA.Each of these towers sampled air at multiple heights, ranging from 11 to 396 m above ground level (m a.g.l.). 
The ensemble will be calibrated for three different meteorological variables: PBL wind speed, PBL wind direction and planetary boundary layer height (PBLH). We will calibrate the ensemble with the late afternoon data (i.e., 00:00 UTC) from the different rawinsondes. In this study, we use only daytime data, because we want to calibrate and evaluate the ensemble under the same well-mixed conditions that are used to perform atmospheric inversions. For each rawinsonde site, we will use wind speed and wind direction observations from approximately 300 m a.g.l. We choose this observational level because we want the observations to lie within the well-mixed layer, the layer into which surface fluxes are distributed, and the same air mass that is sampled and simulated for inversions based on tower CO2 measurements. The PBLH was estimated using the virtual potential temperature gradient (θν). The method identifies the PBLH as the first point above the atmospheric surface layer where (1) the vertical gradient of θν is greater than or equal to 0.2 K km−1, and (2) the difference between the surface and the threshold-level virtual potential temperature is greater than or equal to 3 K (θνs − θν ≥ 3 K). WRF derives an estimated PBLH for each simulation; however, the technique used to estimate the PBLH varies according to the PBL scheme used to run the simulation. For example, the YSU PBL scheme estimates PBLH using the bulk Richardson number (Hong et al., 2006), the MYJ PBL scheme uses the turbulent kinetic energy (TKE) to estimate the PBLH (Janjic, 2002), and the MYNN PBL scheme uses twice the TKE to estimate the PBLH. To avoid any errors from the technique used to estimate the PBLH, we decided to estimate the PBLH from the model using the same method used for the observations. Simulated PBLH will be analyzed at the same time as the observations, 00:00 UTC, i.e., late afternoon in the study region. We analyzed CO2 mole fractions collected from the sampling levels at or above 100 m a.g.l., which is the highest observation level across the Mid-Continent Intensive (MCI) network (Miles et al., 2012). This ensures that the observed mole fractions reflect regional CO2 fluxes and not near-surface gradients of CO2 in the atmospheric surface layer (ASL) or local CO2 fluxes (Wang et al., 2007). Both observed and simulated CO2 mole fractions are averaged from 18:00 to 22:00 UTC (12:00-16:00 LST), when the daytime period of the boundary layer should be convective and the CO2 profile well mixed (e.g., Davis et al., 2003; Stull, 1988). This averaged mole fraction will be referred to hereafter as the daily daytime average (DDA).
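Before turning to the selection criteria, the PBLH diagnosis described above can be made concrete with a short sketch. The Python function below applies the two stated conditions (a θν gradient of at least 0.2 K km−1 and a difference of at least 3 K from the surface value) to a single profile; the function name, the 100 m cutoff used to skip the atmospheric surface layer and the toy sounding values are illustrative assumptions rather than details taken from the text.

import numpy as np

def diagnose_pblh(z, theta_v, sfc_layer_depth=100.0,
                  grad_threshold=0.2e-3, dtheta_threshold=3.0):
    # Diagnose the PBL height from one late-afternoon profile using the two
    # criteria stated above: (1) the vertical theta_v gradient is at least
    # 0.2 K km^-1 (0.2e-3 K m^-1) and (2) theta_v differs from its surface
    # value by at least 3 K. z: heights above ground [m], increasing;
    # theta_v: virtual potential temperature [K]. The 100 m surface-layer
    # depth is an assumed cutoff, not a value given in the text.
    theta_v_sfc = theta_v[0]
    grad = np.gradient(theta_v, z)                 # local gradient [K m^-1]
    for k in range(1, len(z)):
        if z[k] < sfc_layer_depth:                 # skip the surface layer
            continue
        if grad[k] >= grad_threshold and abs(theta_v[k] - theta_v_sfc) >= dtheta_threshold:
            return z[k]
    return np.nan                                  # no level met both criteria

# Toy sounding (illustrative values): a well-mixed layer capped near 1.5 km.
z = np.array([10., 100., 300., 600., 900., 1200., 1500., 1800., 2100.])
theta_v = np.array([305.0, 304.2, 304.3, 304.4, 304.5, 304.6, 308.2, 309.5, 311.0])
print(diagnose_pblh(z, theta_v))                   # -> 1500.0

Applying the same diagnostic to both the rawinsonde and the WRF profiles, as the authors do, keeps PBL-scheme-specific definitions of the PBLH out of the model-data comparison.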
Criteria In this research, we want to test the performance of the transport ensemble and try to achieve a better representation of transport uncertainties, if possible using an ensemble with a smaller number of members. A series of statistical metrics are used as criteria to measure the representation of uncertainty by the ensemble for the period of 18 June to 21 July 2008. The criteria used for our down-selection process include rank histograms, rank-histogram scores and ensemble bias. Talagrand diagram (or rank histogram) and rank-histogram score The rank histogram and the rank-histogram scores are tools used to measure the spread and hence the reliability of the ensemble (see Fig. A1 in the Appendix). The rank histogram (Anderson, 1996; Hamill and Colucci, 1997; Talagrand et al., 1999) is computed by sorting the corresponding modeled variable of the ensemble in increasing order; a rank among the sorted predicted values, from lowest to highest, is then given to the observation. The ensemble members are sorted to define "bins" of the modeled variable; if the ensemble contains N members, then there will be N + 1 bins. If the rank is zero, then the observed value is lower than all the modeled values, and if it is N + 1, then the observation is greater than all of the modeled values. If the ensemble is perfectly reliable, the rank histogram should be flat (i.e., flatness equal to 1). This happens when the probability of occurrence of the observation within each bin is equal. A rank histogram that deviates from the flat shape implies a biased, overdispersive or underdispersive ensemble. A "U-shaped" rank histogram indicates that the ensemble is underdispersive; normally, in this type of ensemble, the observations tend to fall outside of the envelope of the ensemble. This kind of histogram is associated with a lack of variability or an ensemble affected by biases (Hamill, 2001). A "central-dome" (or "A-shaped") histogram indicates that the ensemble is overdispersive; this kind of ensemble has an excess of variability. If the rank histogram is overpopulated at either of the ends of the diagram, then this indicates that the ensemble is biased. The rank-histogram score (Eq. 1) measures the deviation from flatness of a rank histogram as the sum over the N + 1 ranks of the squared departures (r_j − r̄)², normalized by the value this sum is expected to take for a perfectly reliable ensemble, and should ideally be close to 1 (Talagrand et al., 1999; Candille and Talagrand, 2005). In Eq. (1), N is the number of members (i.e., models), M is the number of observations, r_j is the number of observations of rank j, and r̄ = M/(N + 1) is the expectation of r_j. In theory, the optimal ensemble has a score of 1 when enough members are available. A score lower than 1 would indicate overconfidence in the results, with an ensemble matching the observed variability better than statistically expected. Having a score smaller than 1 would not affect the selection process. Nevertheless, a flat rank histogram does not necessarily mean that the ensemble is reliable or has enough spread. For example, a flat histogram can still be generated from ensembles with different conditional biases (Hamill, 2001). The flat rank histogram can also be produced when covariances between samples are incorrectly represented. Therefore, additional verification analysis has to be introduced to certify that the calibrated ensemble has enough spread and is reliable. We introduce hereafter several additional metrics used to evaluate the ensemble. Ensemble bias Atmospheric inverse flux estimates are highly sensitive to biases. The bias, or the mean of the model-data mismatches, was used to assist the selection of the calibrated sub-ensemble. We identify a sub-ensemble that has minimal bias, defined as bias = (1/M) Σ_i p_i (Eq. 2), where p_i is the difference between the modeled wind speed, direction or PBLH and the observed value, M is the number of measurements and i sums over each of the rawinsonde measurements.
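A minimal sketch of the rank histogram, the flatness score and the ensemble bias defined above is given below, assuming the observations and the ensemble are already matched into a vector of M values and an (M x N) array; the array names and the synthetic check at the end are assumptions for illustration. The score follows the Candille and Talagrand (2005) form implied by the definitions above: the sum of squared departures of the bin counts from r̄ = M/(N + 1), divided by its expected value MN/(N + 1) for a perfectly reliable ensemble.

import numpy as np

def rank_histogram(obs, ens):
    # obs: (M,) observed values; ens: (M, N) ensemble values for the same
    # times/sites. The rank of each observation is the number of members
    # falling below it (0..N); returns the counts r_j for the N + 1 bins.
    M, N = ens.shape
    ranks = np.sum(ens < obs[:, None], axis=1)
    return np.bincount(ranks, minlength=N + 1)

def flatness_score(counts, n_obs):
    # Rank-histogram score (Eq. 1): sum of squared departures of the bin
    # counts from their expectation, normalized by M*N/(N + 1), the value
    # expected for a perfectly reliable ensemble; 1 indicates flatness.
    N = len(counts) - 1
    r_bar = n_obs / (N + 1)
    return np.sum((counts - r_bar) ** 2) / (n_obs * N / (N + 1))

def ensemble_bias(obs, ens):
    # Eq. (2): mean model-minus-observation mismatch, here taken for the
    # ensemble mean.
    return np.mean(ens.mean(axis=1) - obs)

# Synthetic check: an ensemble drawn from the same distribution as the
# observations should score close to 1 with a bias near 0.
rng = np.random.default_rng(0)
M, N = 500, 45
ens = rng.normal(size=(M, N))
obs = rng.normal(size=M)
print(flatness_score(rank_histogram(obs, ens), M), ensemble_bias(obs, ens))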
Verification methods Different statistical tools were used to evaluate both the large (45-member) ensemble and the calibrated ensemble; these statistics include Taylor diagrams, the spread-skill relationship and the ensemble root mean square deviation (RMSD). These statistical analyses will be used to describe the performance of each member (standard deviations and correlations), the ensemble spread (root mean square deviation) and the error structures in space (error covariance), which will allow us to evaluate all the important aspects of an ensemble. We use Taylor diagrams to describe the performance of each of the models of the large ensemble (Taylor, 2001). The Taylor diagram relies on three nondimensional statistics: the ratio of the variance (model variance normalized by the observed variance), the correlation coefficient, and the normalized centered root mean square (CRMS) difference (Taylor, 2001). The ratio of the variance, or normalized standard deviation, indicates the difference in amplitude between the model and the observation. The correlation coefficient measures the similarity in the temporal variation between the model and the observation. The CRMS is normalized by the observed standard deviation and quantifies the ratio of the amplitude of the variations between the model and the observations. To verify that the ensemble captures the variability in the model performance across space and time, we computed the relationship between the spread of the ensemble and the skill of the ensemble over the entire dataset (i.e., the spread-skill relationship). The linear fit between the two parameters measures the correlation between the ensemble spread and the ensemble mean error or skill (Whitaker and Lough, 1998). The ensemble spread is calculated by computing the standard deviation of the ensemble, and the mean error by computing the absolute difference between the ensemble mean and the observations. Ideally, as the ensemble skill improves (the mean error gets smaller), the ensemble spread becomes smaller, and vice versa. Compared to the rank histograms, spread-skill diagrams represent the ability of the ensemble to represent the errors in time and space. The spread of the ensemble is evaluated in time using the RMSD. The RMSD does not consider the observations, as we take the square root of the average squared difference between each model configuration and the ensemble mean. Additionally, we use the mean and standard deviation of the error (model-data mismatch) to evaluate the performance of each of the members selected for the calibrated ensembles. Transport model errors in atmospheric inversions are described in the observation error covariance matrix and hence in CO2 mole fractions (ppm2). Therefore, we evaluate the impact of the calibration on the variances of CO2 mole fractions. For the covariances, we compare the spatial extent of error structures between the full ensemble and the reduced-size ensembles by looking at spatial covariances from our measurement locations. The limited number of members is likely to introduce sampling noise in the diagnosed error covariances. We also know that the full ensemble is not a perfect reference, but we believe it is less noisy. The covariances were directly derived from the different ensembles to estimate the increase in sampling noise as a function of the ensemble size. 
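The verification statistics described above reduce to a few lines of array arithmetic. The sketch below is a plain implementation under the definitions given in the text; the function and variable names are assumptions, model and obs are one-dimensional arrays of matched values, and ens is an (observations x members) array.

import numpy as np

def taylor_stats(model, obs):
    # The three Taylor-diagram statistics for one model configuration:
    # normalized standard deviation, correlation coefficient and the
    # normalized centered RMS (CRMS) difference.
    sigma_ratio = model.std() / obs.std()
    corr = np.corrcoef(model, obs)[0, 1]
    crms = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2)) / obs.std()
    return sigma_ratio, corr, crms

def spread_and_skill(ens, obs):
    # Per-observation ensemble spread (standard deviation across members)
    # and skill (absolute error of the ensemble mean); their correlation
    # summarizes the spread-skill relationship.
    spread = ens.std(axis=1)
    skill = np.abs(ens.mean(axis=1) - obs)
    return spread, skill, np.corrcoef(spread, skill)[0, 1]

def ensemble_rmsd(ens):
    # Spread without observations: RMS deviation of the members from the
    # ensemble mean, averaged over members and times.
    anomalies = ens - ens.mean(axis=1, keepdims=True)
    return np.sqrt(np.mean(anomalies ** 2))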
Calibration methods In this study, we want to test the ability to reduce the ensemble from 45 members to an ensemble with a smaller number of members that is still capable of representing the transport uncertainties and does not include members with redundant information. The ideal number of ensemble members could have been decided by performing the calibration for all ensemble sizes smaller than 45 members. However, we decided to use an objective approach to select the total number of members of the sub-ensemble. Therefore, we use the Garaud and Mallet (2011) technique to define the size of the calibrated sub-ensemble that each optimization technique will generate. The size of the sub-ensemble was determined by dividing the total number of observations by the maximum frequency in the large-ensemble (45-member) rank histogram. We are going to generate sub-ensembles of three different sizes (numbers of members) to evaluate the impact that the ensemble size has on the representation of atmospheric transport uncertainties. Each of the ensembles will be calibrated for the period of 18 June to 21 July 2008. Two optimization methods, simulated annealing (SA) and a genetic algorithm (GA), are used to select a sub-ensemble that minimizes the rank-histogram score (δ), which is the criterion that each algorithm will use to test the reliability of the ensemble. Each method will select a sub-ensemble that best represents the model uncertainties of PBL wind speed, PBL wind direction and PBLH. In this study, the SA and GA techniques will randomly search through the different combinations of members and compute the rank-histogram score. Both techniques generate a sub-ensemble (S) of size N. For the first test, we will use these algorithms to choose the combination of members that optimizes the score of the reduced ensemble, δ(S) (i.e., the rank-histogram score), for each variable. With this evaluation, we determine whether each optimization technique yields similar calibrated ensembles and whether the calibrated ensembles are similar among the different meteorological variables. In the second test, we calibrate the ensemble for all three variables simultaneously, where we use the sum of the squared scores, [δ(S)]² = [δ_wspd(S)]² + [δ_wdir(S)]² + [δ_pblh(S)]² (Eq. 3), to control acceptance of the sub-ensembles. In Eq. (3), δ_wspd(S), δ_wdir(S) and δ_pblh(S) are the scores of the sub-ensemble for PBL wind speed, PBL wind direction and PBLH, respectively.
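Two pieces of this setup can be written out explicitly: the Garaud and Mallet (2011) rule used to fix the size of the sub-ensemble and the combined objective of Eq. (3). In the sketch below the function names and the numbers in the example calls are illustrative assumptions; the size rule follows the text in equating the ratio of the total number of observations to the most populated rank-histogram bin with the target number of members.

import numpy as np

def subensemble_size(bin_counts, total_obs):
    # Garaud and Mallet (2011) rule as described above: divide the total
    # number of observations by the count in the most populated bin of the
    # 45-member rank histogram to obtain the target number of members.
    return int(round(total_obs / np.max(bin_counts)))

def combined_score(delta_wspd, delta_wdir, delta_pblh):
    # Squared multi-variable objective of Eq. (3): sum of the squared
    # rank-histogram scores for PBL wind speed, wind direction and PBLH.
    return delta_wspd ** 2 + delta_wdir ** 2 + delta_pblh ** 2

# Illustrative numbers only: if the most populated bin holds about 10 % of
# the observations, the rule gives a target of about 10 members.
print(subensemble_size(np.array([48, 12, 9, 7]), total_obs=480))  # -> 10
print(combined_score(4.0, 3.0, 1.5))                              # -> 27.25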
Simulated annealing Simulated annealing was introduced by Kirkpatrick et al. (1983) and Černý et al. (1985) as an optimization method inspired by the process of annealing in metal work. Based on the Monte Carlo iteration solving method, SA finds the global minimum using a cost function that gives the algorithm the ability to jump over or pass multiple local minima (see Fig. A2 in the Appendix). In this case, the optimal solution is a sub-ensemble with a rank-histogram score close to 1. The SA starts with a randomly selected sub-ensemble. The current state (i.e., the initial random sub-ensemble) has many neighbor states (i.e., other randomly generated sub-ensembles) in which a unit (i.e., a model) is changed, removed or replaced. Let S be the current sub-ensemble and S′ be the neighbor sub-ensemble. S′ is a new sub-ensemble (i.e., a neighbor) that is randomly built from the current sub-ensemble with one model added, removed or replaced. To minimize the score δ, only two transitions to the neighbors are possible. In the first transition, if the score of the neighbor sub-ensemble δ(S′) is lower than that of the current sub-ensemble δ(S), then S′ becomes the current sub-ensemble and a new neighbor sub-ensemble is generated. In the second transition, if the score of the neighbor sub-ensemble δ(S′) is greater than that of the current sub-ensemble δ(S), moving to the neighbor S′ only occurs through an acceptance probability. This acceptance probability is equal to exp[−(δ(S′) − δ(S))/T], and the movement to the neighbor S′ is only allowed if u < exp[−(δ(S′) − δ(S))/T]. For the acceptance probability, u is a random number uniformly drawn from [0,1] and T is called the temperature; it decreases after each iteration following a prescribed schedule. The acceptance probability is high at the beginning, and the probability of switching to a neighbor is lower at the end of the algorithm. The possibility of selecting a less optimal state S′, i.e., with higher δ(S′), is meant to escape local minima where the algorithm could remain trapped. When the algorithm reaches the predefined number of iterations, we collect only the accepted sub-ensembles S and their respective scores δ(S). When the algorithm finishes the iterations, we choose the ensemble that has both the smallest rank-histogram score and the lowest bias among the different sub-ensembles (see Sect. 2.7). The number of iterations was defined by sensitivity tests and the repeatability of the experiments (see Sect. 2.6). Genetic algorithm GA is a stochastic optimization method that mimics the process of biological evolution, with the selection, crossover and mutation of a population (Fraser and Burnell, 1970; Crosby, 1973; Holland, 1975). Let S_i be an individual, that is, a sub-ensemble, and let P = {S_1, ..., S_i, ..., S_Npop} be a population of Npop individuals (see Fig. A3 in the Appendix). As a first step in the GA, a random population is generated (denoted P_0). Then this population goes through two of the three steps of the genetic algorithm: (1) selection and (2) crossover. In the selection step, we select the best half of the individuals with respect to the score (i.e., the sum of the scores of the three variables, δ(S)). In the second step, a crossover among the selected individuals occurs when two parents create two new children by exchanging some ensemble members. A new population is generated with Npop/2 parents and Npop/2 children. This process is repeated until it reaches the specified number of iterations. At the end, this algorithm will provide a population of individuals with a better rank-histogram score than the initial population. Out of all those individuals, we choose the sub-ensemble with the best score for the three variables (i.e., wind speed, wind direction and PBLH) and with a smaller bias than the large ensemble. 
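A minimal sketch of the genetic-algorithm loop just described (selection of the best half followed by crossover, with no mutation step) is given below, using the population size and iteration counts quoted in the following section as defaults. The particular crossover move (pooling the two parents' members, shuffling and splitting) and the function names are assumptions, since the text only states that two parents exchange some ensemble members; score_fn stands for any callable returning the combined rank-histogram score of a candidate sub-ensemble.

import numpy as np

def genetic_algorithm(score_fn, n_models, subset_size,
                      n_pop=280, n_iter=50, seed=0):
    # Evolve a population of candidate sub-ensembles with the two steps
    # used in the text: selection of the best half, then crossover
    # producing children (no mutation step).
    rng = np.random.default_rng(seed)
    pop = [sorted(rng.choice(n_models, size=subset_size, replace=False))
           for _ in range(n_pop)]                       # initial population P0
    for _ in range(n_iter):
        pop.sort(key=lambda s: score_fn(tuple(s)))      # (1) selection
        parents = pop[: n_pop // 2]
        children = []
        for a, b in zip(parents[0::2], parents[1::2]):  # (2) crossover
            # assumed crossover move: pool the two parents' members,
            # shuffle, and split into two children of the original size
            pool = list(dict.fromkeys(list(a) + list(b)))
            rng.shuffle(pool)
            children.append(sorted(pool[:subset_size]))
            children.append(sorted(pool[-subset_size:]))
        pop = parents + children[: n_pop - len(parents)]
    best = min(pop, key=lambda s: score_fn(tuple(s)))
    # in the paper the final choice is additionally required to have a
    # smaller bias than the large ensemble
    return best, score_fn(tuple(best))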
Parameterization of the selection algorithms Various inputs are required to guide the selection algorithms.For example, we typically need to choose the initial and final temperature (T 0 and T f ) for the SA and its schedule, the best population size (N pop ) for the GA and the number of iterations for each algorithm.The temperature of the SA, the N pop of the GA and the number of iterations were chosen by running the algorithms multiple times and confirming that the system reached similar solutions with independent minimization runs.If similar solutions were not achieved within multiple SA or GA runs, the algorithm parameters were altered to increase the breadth of the search.For the SA, we found that 20 000 iterations yielded similar solutions after multiple runs of the algorithm.For the GA, 30 to 50 iterations were sufficient as long as the ensemble was smaller than eight members.For an ensemble of 10 members, we needed to increase to 100 iterations.Another factor that was important in the SA was the initial temperature used in the algorithm and the temperature decrease for each iteration.While the temperature is high, the algorithm will accept with more frequency the poorer solutions; as the temperature is reduced, the acceptance of poorer solutions is reduced.Therefore, we needed to provide an initial (T 0 ) and final (T f ) temperature that allowed the system to reduce its acceptance condition gradually and to search more combinations of members to identify the best solution or sub-ensemble.We determine the optimal parameters for SA by the maximum number of ensemble solutions which indicates that the algorithm explored the largest space of solution with T 0 equal to 20 and T f equal to 1 × 10 −3 .For GA, the larger the population, the more we can explore the space to find an optimal solution.We found that a N pop of 280 individuals was the value that produced similar solutions (sub-ensembles) after multiple runs. Selection of the optimal reduced-size ensembles The selection process is performed in three distinct steps to ensure that the final calibrated ensembles will be the optimal combinations of model configurations (Fig. 2).First, the flatness of the rank histograms will control the acceptance of the calibrated sub-ensembles by the selection algorithms (see Fig. A1 in the Appendix).The flatness is defined by Eq. ( 1) for the single-variable calibration and Eq. ( 3) for the calibration of the three variables simultaneously.The algorithm selects multiple sub-ensembles with a rank-histogram score smaller than 6 for each individual meteorological variable, or smaller than the original ensemble score if higher than 6 (see Fig. 2 and Table 2).In general, the lowest scores are found for PBLH and the highest for wind speed, as shown in Fig. 3.As a second step, sub-ensembles accepted by SA and GA algorithms with a bias larger than the bias of the full ensemble are filtered out.This step is critical to avoid the selection of biased ensembles as discussed by Hamill et al. (2001).Finally, the remaining calibrated ensembles are compared among SA and GA techniques to identify if both algorithms provide a common solution.If multiple common solutions were identified, the final sub-ensemble was determined by the solution with the smallest score and bias.However, if no common solution was found by both techniques, the final sub-ensemble corresponds to the smallest score among the different solutions that share > 50 % of the same model configurations. 
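Putting the pieces together, the simulated-annealing search described above can be sketched as follows, using the acceptance rule exp(−(δ(S′) − δ(S))/T) and the parameter values reported in this section (20 000 iterations, T0 = 20, Tf = 1 × 10−3) as defaults. The geometric cooling schedule, the neighbor move (replacing one member) and the function names are assumptions; as in the text, the accepted sub-ensembles would afterwards be filtered by their bias and compared against the genetic-algorithm solutions.

import numpy as np

def simulated_annealing(score_fn, n_models, subset_size,
                        n_iter=20000, t0=20.0, tf=1e-3, seed=0):
    # Down-select a sub-ensemble of `subset_size` members out of `n_models`
    # model configurations by minimizing score_fn (lower is better).
    rng = np.random.default_rng(seed)
    current = list(rng.choice(n_models, size=subset_size, replace=False))
    current_score = score_fn(tuple(current))
    accepted = [(tuple(current), current_score)]
    cooling = (tf / t0) ** (1.0 / n_iter)   # assumed geometric schedule
    temp = t0
    for _ in range(n_iter):
        # neighbor state: replace one randomly chosen member with an unused
        # model (the paper also allows adding or removing a member)
        neighbor = current.copy()
        unused = [m for m in range(n_models) if m not in neighbor]
        neighbor[rng.integers(subset_size)] = int(rng.choice(unused))
        neighbor_score = score_fn(tuple(neighbor))
        # always accept improvements; accept worse states with
        # probability exp(-(delta(S') - delta(S)) / T)
        if (neighbor_score < current_score
                or rng.random() < np.exp(-(neighbor_score - current_score) / temp)):
            current, current_score = neighbor, neighbor_score
            accepted.append((tuple(current), current_score))
        temp *= cooling
    return accepted  # candidate sub-ensembles, to be filtered by bias afterwards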
Evaluation of the large ensemble In this section, we evaluate the performance of the large ensemble.Our goal is to test the ensemble skill (ability of the models to match the observations) and the spread (variability across model simulations to represent the uncertainty).We will evaluate the skill and the spread for PBLH, PBL wind speed and PBL wind direction across the region of study using afternoon (00:00 UTC) rawinsonde observations. Model skill We evaluate the performance of the different models of the 45-member ensemble by computing the normalized standard deviation, normalized center root mean square and correlation coefficient for wind speed (Fig. 4a), wind direction (Fig. 4b) and PBLH (Fig. 4c) (Taylor, 2001).The majority of the model configurations produce winds speeds and directions with higher standard deviations (more variability) than the observations, whereas the simulations over-and underestimate PBLH variability depending on the model configuration.The model-data correlations with wind speed and wind direction are between 0.4 and 0.7, whereas the PBLH shows a smaller correlation, between 0.3 and 0.6.The range of modeled PBL heights will provide a wide spectrum of alternatives to select the optimal calibrated sub-ensemble.However, wind speed and wind direction do not show much difference among the different models.This limited spread potentially reduces the selection of the model configurations to produce a sub-ensemble that matches the observed variability. Reliability and spread of the ensemble We illustrate the ensemble spread and how well this ensemble encompasses the observations using the time series of the simulated and observed meteorological variables.Figure 5 shows the time series of the ensemble spread for wind speed, wind direction and PBLH at Green Bay (GRB; Fig. 5a, c, e) and Topeka (TOP; Fig. 5b, d, f) sites.The time series show qualitatively that simulated wind speed (Fig. 5a-b) and wind direction (Fig. 5c-d) have a smaller spread compared to PBLH (Fig. 5e-f).Figure 5 shows how the ensemble can have a small spread and still encompass the observations (i.e., DOY 183; Fig. 5c), and have a large spread and not encompass the observation (i.e., DOY 174; Fig. 5e).These time series suggest that the ensemble may struggle to encompass the observed wind speed and wind direction more than the PBLH. Figure 6 shows the rank histograms of the 45-member ensemble for each of the meteorological variables that we use to calibrate the ensemble (i.e., wind speed, wind direction and PBLH).In these rank histograms, we include all 14 rawinsonde sites.All the rank histograms have a U shape.U-shaped histograms mean the ensemble is underdispersive; that is, the model members are too often all greater than or less than the observed atmospheric values (e.g., DOY 178-181; Fig. 5b).Each rank histogram has the first rank as the highest frequency, indicating that observations are most frequently below the envelope of the ensemble (e.g., DOY 178-180; Fig. 5b).The rank-histogram score for each of the variables is greater than 1, confirming that we do not have optimal spread in our ensemble.Table 2 shows that both wind speed and wind direction have a higher rank-histogram score (i.e., ≥ 6) than the PBLH, which has a score of 3.2.The ensemble mean wind speed and PBLH show a small positive bias relative to the observations, averaged across the region, whereas wind direction has a very small negative bias. 
Figure 7 shows the spread-skill relationship, another method that we use to examine the representation of errors of the ensemble. Wind direction (Fig. 7b) shows a higher correlation between the spread and the skill compared to the PBLH (Fig. 7c) and the wind speed (Fig. 7a). Therefore, the ensemble has a wider spread when the model-data differences are larger. The PBLH and wind speed show consistently poorer skill (a large mean absolute error) compared to their spread. This supports the conclusion that the large ensemble is underdispersive for these variables. None of these variables shows a correlation equal to 1; this implies that our ensemble spread does not match exactly the atmospheric transport errors on a day-to-day basis. This feature is common among ensemble prediction systems (Wilks et al., 2011) and should not impair the ability to identify the optimal reduced-size ensembles. Calibrated ensemble In this section, we show the results of the calibrated ensembles generated with both SA and GA. Each calibration was performed for three different sub-ensemble sizes; the size of the ensembles is determined using the technique explained in Sect. 2.4. To compute the size of the sub-ensemble, we use the maximum frequency of the rank histogram of the large ensemble (Fig. 6). In this case, the maximum frequency is the left bar (r0) of every rank histogram. This technique yields the result that the calibrated ensemble should have about 8-10 members, depending on the variable used. Therefore, for this study, we will generate 10-, 8- and 5-member ensembles using the two calibration techniques. Individual variable calibration Table 3 shows that both techniques (i.e., SA and GA) were able to find similar combinations of model configurations (i.e., an ensemble that shares more than half of the members) when each meteorological variable was used separately. The configurations chosen for each sub-ensemble vary significantly across the different variables, with the exception of the 10-member ensemble calibrated using wind speed and wind direction. The majority of the ensembles include model configuration 14. This model configuration, as shown in Díaz-Isaac et al. (2018), introduces large errors for both wind speed and wind direction, and is selected to allow for sufficient spread of these variables in the sub-ensembles. The final scores of the calibrated ensembles for each variable show that finding a calibrated sub-ensemble that reaches a score of 1 is not possible for wind speed and wind direction. A sub-ensemble with a score less than or equal to 1 can be found for PBLH. Figure 8 shows the rank histograms of the different calibrated ensembles (i.e., 10, 8 and 5 members) for each meteorological variable shown in Table 3. The calibrated ensembles of PBLH (Fig. 8c, f, i) are nearly flat for all ensemble sizes, whereas the 10- and 8-member sub-ensembles keep a slight U shape for wind speed and wind direction but are significantly flatter than the original ensemble. The ratio between the expected (r) and observed frequency of the end members is reduced from 5 (original expected frequency of 0.02 with 0.1 frequency observed) to less than 2 (calibrated expected frequency of 0.1 with 0.15 frequency observed). The smallest rank-histogram scores for wind speed and wind direction are obtained with a five-member ensemble (Fig. 8g-h). The biases for all sub-ensembles (Table 3) are similar to or less than the bias of the large ensemble (Table 2).
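The spread-skill diagnostic discussed at the start of this section (Fig. 7) can be computed from the same observation-ensemble pairs used for the rank histograms. The sketch below follows the description in the figure caption (per-observation spread as the standard deviation of the model-data differences, skill as the mean absolute error) and adds the correlation and slope of a best-fit line; it is a straightforward reading of that description rather than the authors' own code.

import numpy as np

def spread_skill(obs, ens):
    # obs: (n,), ens: (n, m); returns per-observation spread and skill plus the fit statistics
    resid = ens - obs[:, None]
    spread = resid.std(axis=1)           # standard deviation of model-data differences
    skill = np.abs(resid).mean(axis=1)   # mean absolute error for each observation
    r = float(np.corrcoef(spread, skill)[0, 1])
    b = float(np.polyfit(spread, skill, 1)[0])
    return spread, skill, r, b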
Multiple-variable calibration Table 4 shows the sub-ensembles selected by SA. Each of the sub-ensembles has two simulations in common (i.e., 17 and 33), implying that these models are crucial to build an ensemble that best represents the transport errors for the three variables. Figure 9 shows the rank histograms of the sub-ensembles shown in Table 4. These rank histograms show that we were able to flatten the histogram relative to the 45-member ensemble for all three meteorological variables. Similar to the individual variable calibration, the rank histograms for wind speed (Fig. 9a, d) and wind direction (Fig. 9b, e) still show a U shape, which is minimized for the smallest (i.e., five-member) sub-ensemble (Fig. 9g-h). The rank histograms are flatter for the PBLH (Fig. 9c, f, i) and the histogram score is closer to 1 (Table 4) compared to wind speed and wind direction. The rank-histogram scores for all variables are greater than those for the one-variable optimization (see Table 4). The high rank-histogram scores are associated with the equal weight given to the three variables for this simultaneous calibration, where wind speed controlled the calibration process. For the calibration of the three variables together, we were not able to produce an ensemble for wind speed with a score smaller than 4; this ends up limiting the selection of the calibrated ensemble for the rest of the variables (see Fig. A4 in the Appendix). In addition, all these calibrated sub-ensembles have biases smaller in magnitude than the 45-member ensemble. Both wind speed and PBLH retain an overall positive bias, and wind direction a negative bias. The standard deviations of these three calibrated ensembles are larger than those of the large ensemble, consistent with the effort to increase the ensemble spread. Using the SA and GA techniques and the selection criteria detailed in Sect. 2.7 (i.e., low mean error of the entire ensemble), we defined an optimal five-member sub-ensemble (the optimal solution using both techniques) and nearly identical combinations of members for the 10- and 8-member sub-ensembles, with only two model configurations not being shared by both algorithms. We also find that configuration 14 remains important for the multi-variable calibrated ensembles, as it was for the single-variable calibrated ensembles. Evaluation of the multiple-variable calibrated ensemble Both optimization techniques were able to generate sub-ensembles that reduce the U shape of the rank histograms, while significantly decreasing the number of members in the ensemble. A flatter histogram indicates that the ensemble is more reliable (unbiased) and has a more appropriate (greater) spread. The correlation between spread and skill for the wind direction increased, while wind speed and PBLH remain similar. Therefore, we conclude that the calibrated sub-ensembles are equivalent to or even better than the full ensemble at representing the daily model errors. Figure 10 shows the time series of the different calibrated ensembles generated by the SA algorithm at the TOP site. In general, there are no major differences among the 5- (Fig. 10a, d, g), 8- (Fig. 10b, e, h) and 10-member (Fig. 10c, f, i) ensembles. Figure 10 also shows how the calibration can increase the spread of the ensemble to the extent of encompassing the observations (e.g., DOY 179; Fig. 10b-c) compared to the full ensemble (Fig. 5b). The ensemble spread was reduced after calibration at a few specific points in space and time.
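For the joint calibration, the objective combines the three single-variable rank-histogram scores; as noted later in the discussion, the sum of the squared scores is minimized, and candidate sub-ensembles are additionally screened on score and bias. The sketch below expresses those two criteria as plain functions; the exact form of Eq. (3) is not reproduced in this excerpt, so this should be read as an illustration of the selection logic rather than the authors' implementation.

def combined_score(scores):
    # Joint objective for the three variables (wind speed, wind direction, PBLH):
    # sum of the squared single-variable rank-histogram scores
    return sum(s ** 2 for s in scores)

def passes_selection(sub_scores, sub_biases, full_scores, full_biases, threshold=6.0):
    # A candidate sub-ensemble is kept if, for every variable, its rank-histogram score is
    # below 6 (or below the full-ensemble score when that score exceeds 6) and its bias is
    # no larger in magnitude than the full-ensemble bias
    for s, b, fs, fb in zip(sub_scores, sub_biases, full_scores, full_biases):
        limit = fs if fs > threshold else threshold
        if s >= limit or abs(b) > abs(fb):
            return False
    return True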
Insight into the physics parameterizations can be gained by evaluating the calibrated ensembles.The LSM, PBL, CP and MP schemes, and reanalysis choice vary across all of the sub-ensemble members; no single parameterization is re- tained for all members in any of these categories.However, we also find that the calibrated ensembles rely upon certain physics parameterizations more than others.Figure 11 shows that most of the simulations in the calibrated ensemble use the RUC and thermal diffusion (T-D) LSMs in preference to the Noah LSM.In addition, more simulations use the MYJ PBL scheme than the other PBL schemes.The physics parameterizations shown with a higher percentage in Fig. 11 appear to contribute more to the spread of the ensemble than the other parameterizations. We next explore the characteristics of the individual ensemble members that are retained in an effort to understand what member characteristics are important to increase the spread of the ensemble.Figure 12 shows the mean and standard deviation of the residuals for each simulation included in the five-member ensemble of SA and GA.Ensembles appear to need at least one member with a larger standard deviation to improve the spread for wind speed and wind directions (see member 23; Fig. 12a-b).Additionally, a member that has a large PBLH bias (see member 16; Fig. 12c) appears to be selected, highlighting the need for end members among the model configurations in order to reproduce the observed variability in PBLH.We note here that model configuration 14 was not selected when calibrating three variables together. Propagation of transport uncertainties into CO 2 concentrations The calibrated ensembles found in this study were chosen based on the meteorological variables and not on the CO 2 mole fractions to avoid the propagation of CO 2 flux biases into the solution.We can now propagate these uncertain- ties, represented by the ensemble spread, into the CO 2 concentration space.This straightforward calculation is possible because every model simulation uses identical CO 2 fluxes. We present here the transport errors in both time and space with the spread in CO 2 mole fractions, comparing the initial (uncalibrated) 45-member ensemble to the calibrated subensembles. CO 2 error variances Figure 13 shows the spread of daily daytime average CO 2 mole fractions across the different sub-ensemble sizes at Mead (Fig. 13a,d,g,j), West Branch (Fig. 13b, e, h, k) and WLEF (Fig. 13c, f, i, l).The spread of the DDA CO 2 mole fractions of the large ensemble (Fig. 13a-c) does not appear to differ in a systematic fashion from the spread of the calibrated small-size ensembles (Fig. 13d-l).While the calibration has increased the average ensemble spread, none of the ensembles consistently encompass the observations, either in terms of meteorological variables (Fig. 12) or CO 2 (Fig. 15).The CO 2 differences between the models and the observations may be caused by CO 2 flux or boundary condition errors, the two components impacting the modeled CO 2 mole fractions in addition to atmospheric transport.The cause of the total difference cannot be determined from the CO 2 data alone.The increased daily variance in CO 2 resulting from the ensemble calibration process is shown in Fig. 
14.The eightmember ensemble often has the maximum CO 2 variance.Table 5 shows the spread (model-ensemble mean) and RMSE (model-data) ratio of the CO 2 mole fraction for the full and calibrated 10-member ensembles at each in situ CO 2 observation tower.The ratio of the variances is an estimate of the contribution of the transport uncertainties to the CO 2 modeldata mismatch for the summer of 2008.This table shows that the transport uncertainties represent about 20 % to 40 % of the CO 2 model-data mismatch.We found that values after calibration show a slight increase compared to the full ensemble. Impact of calibration on ensemble statistics The calibration of the multi-physics/multi-analysis ensemble using SA and GA optimization techniques generated 10-, 8-and 5-member ensembles with a better representation of the error statistics of the transport model than the initial 45-member ensemble.One of our goals was to find subensembles that fulfil the criteria of Sect.2.7, independent of the selection algorithm and for multiple meteorological variables.Wind speed and wind direction statistics only improve by a modest amount in the calibrated ensembles as compared to the 45-member ensemble, while PBLH statistics, namely the flatness of the rank histogram, show a significant improvement in the calibrated ensembles.The variance in the calibrated ensembles increased relative to the 45-member ensemble but the potential for improvement was limited by the spread in the initial ensemble.Stochastic perturbations (e.g., Berner et al., 2009) could increase the spread of the initial ensemble, which, combined with the suite of model configurations, could better represent the model errors.Here, we limited the 45-member ensemble to mass-conserved, continuous flow (i.e., unperturbed) members that can be used in a regional inversion.Future work should address the problem of using an underdispersive ensemble before the calibration of the ensemble. Single-variable and multiple-variable ensembles We first attempted to calibrate the ensemble for each meteorological variable (i.e., wind speed, wind direction and PBLH).Table 3 shows that the different sub-ensembles were able to follow the criteria presented in Sect.2.7, but the calibration of the single-variable ensembles did not allow us to find a unique sub-ensemble that can be used to represent the errors of the three variables.Therefore, the joint optimization of the three variables was required to identify an ensemble that best represents model errors across the three variables.By minimizing the sum of the squared rank-histogram scores of the three variables, the selection algorithm found common solutions at the expense of less satisfactory rank-histogram scores than were obtained for single-variable ensembles (see Table 4).We assumed that each variable was equally important to the problem, an assumption that has not been rigorously evaluated.Future work on the relative importance of meteorological variables on CO 2 concentration errors would help weigh the scores in the selection algorithms. Resolution and reliability The calibrated ensembles show the rank-histogram score closer to 1 (Table 4), that is, flatter rank histograms (Fig. 9) compared to the 45-member ensemble (Table 2 and Fig. 6).The sub-ensembles do have a greater variance than the large ensemble (i.e., improved reliability) (Fig. 
14). However, the spread-skill relationship (i.e., resolution) of the calibrated ensembles does not show any major improvement compared to the 45-member ensemble, implying that the spread of the ensemble does not represent the day-to-day transport errors well. While the rank histogram suggests that the different calibrated ensembles have enough spread, the spread-skill relationship indicates that our ensemble does not systematically encompass the observations. The disagreement between the rank histogram and the spread-skill relationship can be associated with the metric used for the calibration (i.e., the rank histogram) and the biases included in the calibrated ensemble. Using the score of the rank histogram alone may not be sufficient to measure the reliability of the ensemble (Hamill, 2001); therefore, future down-selection studies should incorporate the resolution as part of the calibration process (skill score optimization). The biases in the model are a complex problem because there are many sources of systematic errors within an atmospheric model (e.g., physical parameterizations and meteorological forcing). Future studies should consider data assimilation or improvement of the physics parameterizations to reduce or remove these systematic errors. To improve the representation of daily model errors, additional metrics should be introduced and the initial ensemble should offer a sufficient spread, possibly with additional physics parameterizations, additional random perturbations, or modifications of the error distribution of the ensemble (Roulston and Smith, 2003). Error correlations Rank histograms, as explained in Sect. 2.3.1, evaluate the ensemble by ranking individual observations in a relative sense.
The ensembles calibrated using the rank histograms may be representing the variances over the region correctly but not the spatial and temporal structures of the errors (Hamill, 2001).These parameters are critical to inform regional inversions of correlations in model errors, directly impacting flux corrections (Lauvaux et al., 2009).In this study, the calibrated ensembles show an improvement in the meteorological variances and an increase in the CO 2 variances relative to the uncalibrated ensemble.However, spatial structures of the errors were not evaluated and may be impacted by sampling noise.Few members will produce a statistically limited representation of the model error structures.For example, ensemble model prediction systems use at least 50 members to avoid sampling noise and correctly represent time and space correlations.Figure 15 shows the spatial correlation of 300 m DDA CO 2 errors with respect to the Round Lake site on DOY 180.Error correlations increase significantly as our ensemble size decreases.With fewer members, spurious correlations increase, resulting in high correlations at long distances.Assuming we sample only a few times the distribution of errors, our ensemble is very likely to be affected by spurious correlations with a variance on the order of 1/N .We conclude here that our reduced-size ensembles are impacted by sampling noise which would require additional filtering.Previous studies have suggested objective methods to filter the noise in small-size ensembles (i.e., Ménétrier et al., 2015) or modeling the error structures using the diffusion equation (e.g., Lauvaux et al., 2009).Future work should address the impact of the calibration on the error structures as this information is critical in the observation error covariance to assess the inverse fluxes.Concerning the magnitudes of the error correlation, the calibrated sub-ensembles exhibit a larger contrast in correlation values compared to the 45-member error correlations.Overall, the different ensembles show similar flow-dependent spatial patterns which demonstrates that the calibration process, even if generating sampling noise, preserves the dominant spatial patterns in the error structures.Therefore, the calibrated ensemble is likely to provide a better representation of the variances and a similar spatial error structure for the construction of error covariance matrices in regional inversions. Conclusions We applied a calibration (or down-selection) process to a multi-physics/multi-analysis ensemble of 45 members.In this calibration process, two optimization techniques were used to extract a subset of members from the initial ensemble to improve the representation of transport model uncertainties in CO 2 inversion modeling.We used purely meteorological criteria to calibrate the ensemble and avoid contaminating the calibration with CO 2 flux errors.The calibrated ensembles were optimized using criteria based on the flatness of the rank histogram.We generated different calibrated en-sembles for three meteorological variables; PBL wind speed, PBL wind direction and PBLH.With these techniques, we identified sub-ensembles by calibrating the three variables jointly.Both techniques show that calibrated small-size ensembles can reduce the score of the rank-histogram flatness and therefore improve the representation of the model error variances with few members (between 5 and 10 members). 
The calibration techniques improved the spread (flatness of the rank histogram) of the ensembles and slightly improved the biases, which were already small in the larger ensemble, but the calibration did not improve daily atmospheric transport errors as shown by the spread-skill relationship.We assessed how the calibrated ensemble errors propagate into the CO 2 mole fractions simulated with identical CO 2 fluxes (i.e., independent of the atmospheric conditions).The spread from the calibrated ensembles represented from 20 % to 40 % (Table 5) of the model-data 300 m DDA CO 2 mismatches for summer 2008.These results suggest that additional errors in CO 2 fluxes and/or large-scale boundary conditions represent a large fraction of the differences between modeled and observed CO 2 .Error correlations of the calibrated ensembles were compared to the large ensemble to identify any impact of the calibration.Compared to the initial error structures, the calibrated ensembles are most likely affected by sampling noise across the region, which suggests that additional filtering or modeling of the errors would be required in order to construct the error covariance matrix for regional CO 2 inversion. Figure 1 . Figure 1.Geographical domain used by WRF-ChemCO 2 physics ensemble.The parent domain (d01) has a 30 km resolution, the inner domain (d02) has a 10 km resolution.Contours represent terrain height in meters.The inner domain covers the study region and includes the rawinsonde sites (red circles) and the CO 2 towers (blue triangles) locations. L. I. Díaz-Isaac et al.: Calibration of a multi-physics ensemble Figure 2 . Figure2.Diagram of the process of selection of reduced-sized ensembles explained in Sect.2.7.In this diagram, in the sub-ensemble, we show our two main thresholds after running each algorithm; the sub-ensemble score has to be smaller than the full ensemble (δ < δ f ) and the sub-ensemble bias is smaller than the full-ensemble bias (bias < bias f ). Figure 3 . Figure 3. Box plot of the rank-histogram scores of the different sub-ensembles of 10 (a), 8 (b) and 5 (c) members accepted by the SA.Each figure shows the rank-histograms scores for the different variables: PBL wind speed (WSPD), PBL wind direction (WDIR) and PBLH.The top of the box represents the 25th percentile, the bottom of the box is the 75th percentile, the red line in the middle is the median and the green "x" the mean.Outliers beyond the threshold values are plotted using the "+" symbol. Figure 6 . Figure 6.Rank histogram of the 45-member ensemble for wind speed (a), wind direction (b) and PBLH (c) using 14 rawinsonde sites available over the region.The horizontal dashed line (r) corresponds to the ideal value for a flat rank histogram with respect to the number of members. Figure 7 . Figure 7. Spread-skill relationship for (a) wind speed, (b) wind direction and (c) PBLH using the 14 rawinsonde sites available over the region.Each point represents the model ensemble spread (standard deviation of the model-data difference) and skill (mean absolute error) for each observation.A one-to-one line is plotted in black and a line of best fit is plotted in red.Correlation (r) and slope (b) of the line of best fit of the spread-skill relationship are plotted as well. Figure 8 . Figure 8. 
Rank histograms of the calibrated ensembles found for wind speed (a, d, g), wind direction (b, e, h) and PBLH (c, f, i) for each of the ensemble sizes. The upper, middle and lower panels correspond to the ensembles with 10, 8 and 5 members, respectively. The horizontal dashed line (r) corresponds to the ideal value for a flat rank histogram with respect to the number of members. Figure 9. Rank histograms of wind speed (a, d, g), wind direction (b, e, h) and PBLH (c, f, i) using the calibrated ensembles found with SA. The upper, middle and lower panels correspond to the ensembles with 10, 8 and 5 members, respectively. The horizontal dashed line (r) corresponds to the ideal value for a flat rank histogram with respect to the number of members. Figure 10. Time series of simulated and observed 300 m wind speed (a-c), 300 m wind direction (d-f) and PBLH (g-i) using the 5-, 8- and 10-member calibrated ensembles (first, second and third columns, respectively) at the TOP rawinsonde site. The green shaded area represents the spread (i.e., root mean square deviation) of the ensemble, the black line is the mean of the ensemble, and the red dots are the observations at 00:00 UTC. Figure 11. Frequency with which the physics schemes are used for the SA (a, c, e) and GA (b, d, e) calibrated ensembles of 10 members (a, b), 8 members (c, d) and 5 members (e). Figure 12. Residual (model-data mismatch) mean and standard deviation of individual members for wind speed (a), wind direction (b) and PBLH (c) using the SA- and GA-calibrated sub-ensembles of five members. Figure 14. Sum of the CO2 mixing ratio variance of the large (45-member) ensemble and the different sub-ensembles selected with the SA (a) and GA (b) down-selection techniques. Figure 15. Spatial correlation of CO2 for the 45- (a), 10- (b), 8- (c) and 5-member (d) ensembles with respect to the location of the Round Lake tower for DOY 180. This figure uses the calibrated ensembles of 10, 8 and 5 members found by the SA technique. Table 1. Physics schemes used in WRF for the sensitivity analysis. Table 2. Rank-histogram score (δ), biases and standard deviation (σ) of the 45-member ensemble for wind speed, wind direction and PBLH computed across 14 rawinsonde sites using daily 00:00 UTC observations for 18 June to 21 July 2008 in the upper US Midwest. Table 3. Calibrated ensembles generated by both SA and GA and their rank-histogram scores and bias for each variable. Table 4. Ensemble members, rank-histogram scores (δ), bias and standard deviation (σ) for wind speed, wind direction and PBLH for the calibrated sub-ensembles generated with SA. Table 5. Spread (model-ensemble mean), RMSE (model-data) and ratio (spread²/RMSE²) at each of the in situ CO2 mixing ratio towers, for the 45-member ensemble and the 10-member ensembles calibrated with SA and GA. Figure 13. Ensemble mean and spread (i.e., RMSD) of the DDA CO2 concentrations at approximately 100 m at the Mead (first column; a, d, g, j), WBI (middle column; b, e, h, k) and WLEF (last column; c, f, i, l) towers, using the SA-calibrated ensembles. Rows from top to bottom are the 45-, 10-, 8- and 5-member ensembles. The blue area is the spread of the 45-member ensemble, the green area is the spread of the calibrated (10-, 8- and 5-member) ensembles, the black line is the mean of the ensemble, and the red dots are the observations.
14,071.4
2019-04-30T00:00:00.000
[ "Environmental Science", "Physics" ]
Statistically Evaluating Social Media Sentiment Trends towards COVID-19 Non-Pharmaceutical Interventions with Event Studies In the midst of a global pandemic, understanding the public's opinion of their government's policy-level, non-pharmaceutical interventions (NPIs) is a crucial component of the health-policy-making process. Prior work on CoViD-19 NPI sentiment analysis by the epidemiological community has proceeded without a method for properly attributing sentiment changes to events, an ability to distinguish the influence of various events across time, a coherent model for predicting the public's opinion of future events of the same sort, or even a means of conducting significance tests. We argue here that this urgently needed evaluation method does already exist. In the financial sector, event studies of the fluctuations in a publicly traded company's stock price are commonplace for determining the effects of earnings announcements, product placements, etc. The same method is suitable for analysing temporal sentiment variation in the light of policy-level NPIs. We provide a case study of Twitter sentiment towards policy-level NPIs in Canada. Our results confirm a generally positive connection between the announcements of NPIs and Twitter sentiment, and we document a promising correlation between the results of this study and a public-health survey of popular compliance with NPIs. Introduction As COVID-19 spreads rapidly around the world, governments have implemented different NPIs to contain the spread of the virus. While effective at slowing down the spread of COVID-19 (Haug et al., 2020), NPIs such as school and non-essential business closures, telecommuting, mask requirements and physical distancing measures have drastically changed our lives and sparked dissent. Anti-mask and anti-lockdown protests are commonplace, while there are nearly fifty million active cases around the world. It is crucial for decision makers to understand the public's opinion about NPIs, and for policy-makers to have a means of forecasting the level of popular compliance with them. This will determine their effectiveness as well as whether additional measures and communication strategies are needed in light of waning adherence. Analysis of social media data is already popular among epidemiologists, as it is a data source with near real-time feedback at very low cost (Majumder et al., 2016). Extracting sentiment trends towards the pandemic on various social media platforms has already attracted interest (Wang et al., 2020b; Li et al., 2020; Wang et al., 2020a). Neural sentiment analysis is very prevalent because of its high performance on classification tasks 1 and versatility. Temporal variation of sentiment is usually represented by a time series, in which an average of the model-predicted sentiment scores from all social media posts within each time interval is computed. Previous work following this paradigm suffers from two major issues, however. Firstly, nearly all time-series analyses have been based on sentiment classification results - every post is classified into one of the predetermined sentiment categories (positive/(neutral)/negative) - even though sentiment is a continuous random variable. For example, Wang et al. (2020b) provide two "sentiment-neutral" examples that in fact have differing sentiments.
Smoothing sentiment from a continuous variable into a ternary or binary scale causes a loss of dynamics, hence increasing the difficulty of the task and lowering the reliability of all subsequent analyses. There are now n-valued sentiment corpora for n = 5 (Socher et al., 2013) and n = 7 (Mohammad et al., 2018), but finer-grained discrete sentiment does not entirely solve the problem. The valence regression task (V-reg) proposed by Mohammad et al. (2018) is far more suitable because it conveys a continuous sentiment intensity measure through a logistic regression score. Figure 1: Wang et al. (2020a) claimed that general sentiment reached a minimum when the government announced a "lock-down" (A), and COVID-19-related sentiment reached a maximum when Amsterdam announced release measures (B). Note that the magnitude of difference between the minimum point they discovered at (A) and the valley a few days prior, at which there was no press conference, is not visible to the naked eye. A continuous score also allows us to compute an average sample sentiment over a definite period of time, which has a more accurate variance than smoothed binary scores. Secondly, because the community lacks a model capable of conducting significance tests and distinguishing the influence of various events across time, no statistically sound conclusion can be drawn. As an example, Wang et al. (2020a) claimed to have noticed a link between public sentiment and the timing of the Dutch government's press conferences by visually inspecting the raw trend of social media sentiment, seen in Figure 1. In fact, there were numerous peaks and valleys throughout the interval they studied, because the average sentiment fluctuated wildly during this time. We can bring the potential of this urgently needed application to fruition by looking outside CL/NLP. Financial analysts face similar problems when they try to assess the effect of a particular news event on the price of a particular stock, because the price is affected by countless events as well as the reactions of traders with different motivations and perspectives on those events. Event studies (Brown and Warner, 1980, 1985) have been proposed and recognised as viable methods for attributing stock price fluctuations to specific financial events. To our knowledge, there has been no study of this class of methods within epidemiology. In Finance In the financial sector, event studies are used to examine the return behaviour of a security after the market experiences some event (e.g., a stock split or an earnings release) that pertains to the firm that issued the security. The actual return of a stock (or a portfolio of assets), R_t, at a given time t (t = 0 represents the time of the event) can be decomposed as R_t = E[R_t | X_t] + ξ_t, where E[R_t | X_t] is an expected return, which can be explained by a model given the conditioning information X_t. ξ_t is an "abnormal" return that directly measures the unexpected changes in the returns, which are likely to have been caused by some unforeseen event (Eckbo, 2009). It is also possible that the abnormal return was just caused by chance (E[ξ_t] = 0), however, and we can measure the statistical significance with which we can reject this null hypothesis through various tests based upon time-series aggregation, which we discuss presently. The expected return can be estimated by a market model (Fama and MacBeth, 1973), E[R_t | X_t] = α + β R_m,t, where R_m,t is the return of a market portfolio, i.e., of all of the assets in the market as represented by a broad market index (e.g., S&P 500, Nasdaq).
β is the risk factor of the stock and can be computed as the ratio of the covariance between the actual return and the market return to the variance of the market return, β = cov(R, R_m) / σ²(R_m). α is the bias that can be computed with least squares estimation, but since β is already computed, the optimal value of α is simply the mean of R minus β times the mean of R_m over the estimation period. The analysis of an event proceeds by first determining whether there is a statistically significant impact, and then, if there is, computing the magnitude of the impact. To answer these two questions, the integral of the abnormal return, called the cumulative average residual (CAR), is computed as CAR(t_1, t_2) = Σ_{t=t_1..t_2} ξ_t. Under the assumption that the return of a stock with no marked events is a stochastic process that perfectly reflects the overall performance of the market as accounted for by the market model (Fama and MacBeth, 1973), the expectation of CAR should be zero. Thus, we can test the null hypothesis that the event has no impact on the return, E[ξ_t] = 0, by a one-sample t-test, a one-sample Wilcoxon signed rank test (Wilcoxon, 1945), or a binomial proportionality z-test. In finance, the ratio of CAR divided by the overall actual return is traditionally used to represent the magnitude of an event's impact, but the statistics of these tests can also be used. In Public Health Over the course of the pandemic, governments around the world have utilized different NPIs at different times and with different stringencies (Hale et al., 2020). Therefore, overall sentiment shift cannot represent the impact of individual public health events. Instead, overall sentiment acts like market return: an aggregation of individual sentiments. Therefore, we define the daily sentiment index (I) as the average sentiment (valence) of all the tweets from a single day. Individual COVID-19-related topics are analogous to individual stocks, and the sentiment change on individual topics is reflected in the change of the sentiment index. But some topics specifically relate to certain events, similar to how individual stocks react to the news relevant to their firms. Therefore, the average sentiment S_m,t of all discussions on topic m at time t is similar to the return of a stock in the event study. Our "market model" for sentiment is E[S_m,t] = α_m + β_m I_t. We compute the abnormal sentiment by ξ_m,t = S_m,t − E[S_m,t] and calculate CAR by aggregating ξ_m,t over time: CAR(t_1, t_2) = Σ_{t=t_1..t_2} ξ_m,t. Experimental Setup Gilbert et al. (2020) started collecting COVID-19-related tweets by searching for tweets mentioning at least one of the various naming conventions for COVID-19 using the Twitter search API as of January 21, 2020, and collected 281,487,148 tweets up until August 23rd, 2021. After Carmen geolocation (Dredze et al., 2013), we obtained 5,979,759 English Twitter samples from Canada. For this paper, we studied two NPIs: wearing a mask and social distancing. For present purposes, we considered an event to be every change in the stringency level of any NPI, as measured by the Oxford COVID-19 Government Response Tracker (OxCGRT) project (Hale et al., 2020). We used a keyword-based filter to obtain topic-related tweets. We began with a manually written list of related keywords to obtain a list of tweets M that contain a keyword, and a complementary list M̄ that do not contain any keyword. Then for each bigram and trigram x, we calculated a topic relevance score based on pointwise mutual information: pmi(x; M) − pmi(x; M̄). We ranked the top 150 keywords for each n-gram length and manually removed the topic-unrelated ones.
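Read as a recipe, the sentiment event study above amounts to a few lines of code: fit β and α for a topic against the daily sentiment index over an estimation window, form the abnormal sentiment, accumulate it into CAR, and test the null hypothesis E[ξ] = 0. The sketch below is a minimal illustration under assumed window choices, using a one-sample t-test; the Wilcoxon and z-tests mentioned above can be substituted in the same place. It is a sketch, not the authors' implementation.

import numpy as np
from scipy import stats

def event_study_car(topic_sent, index_sent, est_win, evt_win):
    # topic_sent: daily topic sentiment S_m; index_sent: daily sentiment index I
    # est_win / evt_win: slice objects selecting the estimation and event windows (assumed)
    S = np.asarray(topic_sent, dtype=float)
    I = np.asarray(index_sent, dtype=float)
    S_est, I_est = S[est_win], I[est_win]
    beta = np.cov(S_est, I_est, bias=True)[0, 1] / np.var(I_est)  # beta_m = cov(S_m, I) / var(I)
    alpha = S_est.mean() - beta * I_est.mean()                    # least-squares alpha given beta
    xi = S[evt_win] - (alpha + beta * I[evt_win])                 # abnormal sentiment xi_m,t
    car = np.cumsum(xi)                                           # CAR accumulated over the event window
    t_stat, p_val = stats.ttest_1samp(xi, 0.0)                    # H0: E[xi] = 0 (no event impact)
    # stats.wilcoxon(xi) or a sign/z-test can be used here in the same way
    return car, float(t_stat), float(p_val)

# Hypothetical usage: car, t, p = event_study_car(S_topic, I_daily, slice(0, 60), slice(60, 70))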
For example, "covidsafe" was identified using this method but "congressman sponsor," a topic relevance score After filtering all the tweets connected to an NPI of interest, we computed their valence score using the NTUA-SLP model, 2 which was selected from the 75 entrants to the V-reg shared task (Mohammad et al., 2018). We followed the hyperparameter settings from the original paper (Baziotis et al., 2018) and reproduced its reported Pearson correlation (0.846) on the English valence dataset. To establish a periodic time series of valence change, we computed the daily average valence of tweets posted on the same day. 3 Individual NPIs Experimental Results Wearing A Mask Canada's mask advisory has changed several times during the progression of the pandemic (Mohammed et al., 2020) and we investigated two key changing points of the advisory as events. On April 6th, 2020, the Public Health Agency of Canada (PHAC) revised the advisory for mask wearing (event 1), permitting the use of non-medical face coverings in public (Chase, 2020;Mohammed et al., 2020). Finally on May 20, 2020, PHAC formally issued a recommendation for the general public to wear masks in public (event 2) (Mohammed et al., 2020;Harris, 2020). Assuming a confidence threshold of α = 0.05, event 1 had a statistically significant positive impact for up to 9 days (Figure 2b). Event 2 also showed significance from two days after the event to up to eight days after ([+2, +8]; Figure 2c). Unlike event 1, there is also a period of significance right before the event occurred. This may have been anticipatory, or it may indicate that the observed impact had instead been caused by prior events. During the 9-day effect window of event 1, there is a 2.13% positive CAR, with t-statistic 1.73, Wilcoxon statistic 7.0, and z-statistic 1.67. Social Distancing Social distancing recommendations have been issued with different stringencies and at different times at the provincial level in Canada. Therefore, we focus separately on three provinces: Ontario (ON), British Columbia (BC) and Alberta (AB), with sufficient numbers of tweets and different distancing policies. According to Mc-Coy et al. (2020), Ontario released its first provincewide social distancing recommendation on March 16, 2020 (Williams, 2020); British Columbia issued a social distancing recommendation on March 17, 2020 (Dix and Henry, 2020); and lastly, Alberta released a public message about social distancing on March 21st 4 (McCoy et al., 2020). Figure 3 analyses the significance of the initial recommendations in those three provinces. All three announcements have a positive impact on CAR with statistical significance. Ontario's recommendation ( Figure 3d) has a short but significant impact on [+2, +7]. Alberta (Figure 3f) exhibits a significant impact on [+3, +9], and British Columbia on [+1, +9]. CAR and Survey Data Correlation To help understand whether the sentiment of NPIs measured using Twitter are representative of the general Canadian population, we assessed the correlation between our NPI sentiments and the level of compliance measured through a national survey. The COVID-19 Monitor initiative (COV, 2020;Mohammed et al., 2020) has conducted 25 surveys in Canada on people's compliance with 6 NPIs since mid-March. Each survey has approximately 2000 participants. The demographics of the participants have been pre-stratified, and each wave was post-stratified by modelling raking weights based on the 2010 Canadian Census. 
Among the 6 NPIs, both social distancing and wearing a mask appear. For the cross-correlation test, both time series have been detrended using the SciPy signal package, 5 and then pre-whitened following the instructions proposed by Dean and Dunsmuir (2016) to remove autocorrelations within the time series. 6 Figure 4 shows the correlations and cross-correlations between CAR and the proportion of the population who report complying with either of these two NPIs. Wearing a mask receives a strong Pearson r = 0.915 (Figure 4a), a cross-correlation of 0.710 and a +5 lag, meaning CAR is 5 days ahead of the survey (Figure 4b). Social distancing receives a moderate Pearson r = 0.481 (Figure 4c), a cross-correlation of 0.492 and also a +5 lag (Figure 4d). The cross-correlations cannot be quantitatively compared with the Pearson correlation scores as they are calculated differently, but the general trend stays the same: wearing a mask exhibits a strong correlation while social distancing exhibits only a moderate one. The lags also accord with our expectations, as COV (2020) conducted surveys 4 to 10 days apart. The lower correlation for social distancing might have been caused by its more diverse implementation across sub-sovereign jurisdictions (see section 4). As the details of the sample selection process at the provincial level are not publicly available, we have not been able to draw direct, provincial comparisons. Mask-wearing advisories, however, are mostly issued at the federal level in Canada. Comparing mask-wearing across provinces is thus less problematic. With both types of NPI, Twitter users are demographically younger, better educated, and more urban than the general population (Mellon and Prosser, 2017; Murthy et al., 2016). This may explain some differences from the national distribution sampled for this survey. Captions of (a) and (c) report Pearson correlations; captions of (b) and (d) report cross-correlations with days of lag.
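The correlation analysis just described can be reproduced with standard tooling. The sketch below detrends both series with scipy.signal.detrend and scans lags for the strongest normalized cross-correlation; the pre-whitening step described above is omitted for brevity, and the alignment of the CAR and survey series onto a common daily grid is assumed to have been done beforehand.

import numpy as np
from scipy import signal, stats

def lagged_cross_correlation(car_series, survey_series, max_lag=10):
    # Both series are assumed to be equal-length and aligned on a common daily grid
    x = signal.detrend(np.asarray(car_series, dtype=float))
    y = signal.detrend(np.asarray(survey_series, dtype=float))
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = list(range(-max_lag, max_lag + 1))
    cc = []
    for lag in lags:
        if lag < 0:
            cc.append(float(np.mean(x[:lag] * y[-lag:])))
        elif lag > 0:
            cc.append(float(np.mean(x[lag:] * y[:-lag])))
        else:
            cc.append(float(np.mean(x * y)))
    best = int(np.argmax(np.abs(cc)))
    pearson_r = float(stats.pearsonr(x, y)[0])   # zero-lag Pearson correlation
    return lags[best], cc[best], pearson_r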
3,535
2021-06-01T00:00:00.000
[ "Computer Science" ]
Facilitating higher photovoltaic penetration in residential distribution networks using demand side management and active voltage control Future power networks are certain to have high penetrations of renewable distributed generation such as photovoltaics (PV). At times of high PV generation and low customer demand (e.g., summer), network voltage is likely to rise beyond the limits mandated by grid codes, resulting in curtailment of PV generation unless appropriate control means are used. This leads to a reduction in energy yield and consequently reduces the economic viability of PV systems. This work focuses on scenario-based impact assessments underpinned by a net prosumer load forecasting framework as part of power system planning to aid sustainable energy policymaking. Based on use-case scenarios, the efficacy of the smart grid solutions demand side management (DSM) and active voltage control (AVC) in maximizing PV energy yield, and therefore revenue returns for prosumers and avoided costs for distribution networks, is analyzed for a developed country (the UK) and a developing country (India). The results showed that, while DSM could be a preferred means for India and similar developing nations because of its potential for deployment via holistic demand response schemes, the combination of a weaker low voltage network with a significantly higher solar resource means that, technically, it is not effective in preventing PV energy curtailment. electricity. The performance-based electricity distribution model (Revenue = Incentives + Innovation + Outputs) of the UK, which has been in operation since 2015, 2 is representative of this drive. In the continuing drive to reduce cost, given the high cost of assets, especially at the transmission and sub-transmission voltage levels, it is safe to assume that even in the near or medium term, power networks will be mostly composed of present-day assets. There will be high volumes of customer-side renewable generation due to the decarbonisation targets. However, the exact penetration levels, renewable generation types and their share in the demand mix are presently uncertain. Due to technological advances, photovoltaic (PV) system costs have been on a continuous decline and, by 2017, PV modules were more than 80% cheaper compared to a decade ago. 3 PV systems also have a low maintenance cost due to their static nature. At the domestic residence level, PV systems are one of the most popular types of renewable generation. Currently, Germany has the highest installed PV capacity in Europe, with over 49 GW. 4 More than 98% of PV systems are connected to low voltage (LV) distribution networks. 5 Even though the present levels of PV penetration in most other countries are relatively low, given the ambitious targets (e.g., 175 GW by 2022 for India, set by the Ministry of New & Renewable Energy), scenarios similar to Germany with high PV penetration are not far away. A decentralized power supply becomes problematic for the traditional operating mode of the electricity network, where net load on the network is largely foreseeable, power supply is controlled and there is a uni-directional electricity flow from large generators to consumers. 6 Conventional power distribution networks have limited PV generation hosting capacity, and 'high PV generation - low demand' conditions can result in network voltage limit violations. 7 Extensive research has recently been carried out on assessments of the impacts of distributed generation on the electricity distribution network.
[8][9][10] Such impact analyses have been able to identify the detrimental effect of future load on network assets. [11][12][13] Accelerated aging of transformer oil and insulation, 11 deterioration of functioning of aged circuit breakers and switchgear, 12 and higher maintenance requirements of transformer tap changers 13 are a few of the identified detrimental effects that have a direct commercial significance. While there are schemes in place for prioritizing the grid injection of renewable energy, 14 the detrimental effects identified as associated with increase in PV penetration levels have resulted in grid codes making active curtailment of PV generation becoming a mandatory requirement now in several countries. 15 For example, according to Engineering Recommendation G98, PV systems in the UK LV distribution networks are required to curtail generation when the voltage rise at the point of connection exceeds the mandated limit. 16 Incentives like feed-in tariffs offered by government bodies have driven the installation of PV systems, but, as customers have to invest a large capital on installing PV systems and are getting paid for the energy they generate. Curtailing PV generation reduces the PV energy yield and therefore the systems financial viability. Maximizing the energy yield and penetration levels of PV systems is therefore important with respect to both climate change mitigation and energy economics. Several approaches have been considered in the literature in order to improve the network hosting capacity of PV and other renewables and maximize the energy capture. These approaches include network reinforcement, network reconfiguration, static VAR control, energy storage, 17 and smart grid solutions such as Demand Side Management (DSM) 18 and Active Voltage Control (AVC). 15,19 Power networks are currently moving into the smart grids paradigm. The inherent cost attached to smart grids technologies means that the global economic inequality will be reflected in their deployment. Developing nations with lower economic reserves to spare are often constrained in terms of the level and nature of changes they could make to their power networks. However, owing to energy supply deficits, load growth, dependency on fossil fuel imports, and so forth, developing nations are in greater need of cheaper low carbon generation. This can only be realized through efficient and sustainable energy policies. Figure 1 is representative of the modeling requirements within the energy policy nexus. A multitude of scenarios of with variations in underlying technical processes, energy behavior and associated economics needs investigation for effective policymaking. As energy flow becomes inevitably more complex with larger integration of renewable generation, electric vehicles and energy storage in modern power networks, power system planning methods are becoming more complicated compared to how they were with conventional, mostly thermal, generation. It was evident from a survey of recent literature on power system planning that there is a significant focus recently on large-scale renewable integration, specifically with regards to generation expansion planning focusing on national energy policies. 20 Majority of literature tends to concentrate on optimization of transmission and distribution planning, ultimately underpinned by load flow analysis. 
21 As an emerging area there is a high level of attention given to energy storage from the point of view of technical constraints, given the uncertainty around their economics. 22 There is also focus on the drivers and challenges of renewable penetration such as F I G U R E 1 Outline of modeling requirements for energy policymaking carbon tax 23 and resource uncertainty and variability. 24 Resource planning 25 and mitigating strategies such as DSM and OLTC for voltage rise mitigation 24 is investigated in this context. Authors of Reference 26 reviewed power system planning challenges for India with increasing penetration of renewables given the ambitious installed capacity targets. The current energy policies are summarized, and it is recommended that India learn from international experiences and adopt best practices from developed countries. The need for DSM and advanced forecasting methods is also emphasized along with other recommend actions to facilitate higher renewable penetration. In Reference 27, a method combining probabilistic duck curve and probabilistic ramp curve to efficiently compensate the imbalance between the high PV generation time and peak time of load was demonstrated for a use case of China. Reference 28 emphasizes that load forecasting is often the first step in power system planning. Plug-in electric vehicles (PEVs) and the Korean government PEV targets are focused on. A stochastic method for forecasting PEV load profiles is introduced focusing on the PEV expansion target, statistics of existing vehicles and consumer numbered connected to substations. Reference 29 focuses on the voltage rise problem with increased renewable penetration for aging power networks and introduces an algorithm for carrying out decision-making on asset upgrades or network reinforcement by addition of components and modification of topology. The trade-off between power line upgrades and placements and operation of on-load tap changing (OLTC) transformers in the network was investigated from the point of view of technical constraints. In Reference 30, authors identify that increasing renewable penetration is confidential with increasing need for flexibility within power systems. Market design is identified as the structural tool that can facilitate flexibility. Potential market reforms are outlined with a focus on DSM. The impact of the difference in nature and requirements of different regional networks and availability of flexible loads are acknowledged. It is recommended that future research focus on planning and operation of power system factoring the difference into account. In Reference 31, a multi-region power system planning approach named REPLAN is proposed for Nigeria. The focus was on improved energy exporting and importing arrangement between regions and overall energy cost reduction by forecasting inter-regional transmission capacity and pathways for developing regional generation. Although the study emphasized the need to investigate local (regional) network models, it was aimed at long-term power system planning and not on diurnal power system operation. It was evident from the literature surveyed that there is a strong focus on energy policies. However, the focus is mostly at the higher-level vision-type policies, often at the national level, setting the energy targets rather than the policies or grid codes at the operational level, which translate the envisioned benefits to reality. Revenue from energy is the basis of renewable energy economics. 
Policy makers will not be to capture the full picture for facilitating higher penetration of renewable like PV based on research that just focus on maximum hosting capacity, the implications of technical measures/constraints to PV energy and PV system owners also need to be understood. In this context, the main aim and contribution of this work is to support power system planning by means of scenario-based impact assessments and thus aid sustainable energy policymaking, especially for developing countries. The efficacy of smart grid solutions (DSM and AVC), between developed and developing countries, in facilitating higher PV penetration in residential distribution networks, given grid code requirements, is analyzed. Select use case scenarios of the UK and India are used as examples of a developed and a developing country. A net prosumer load forecasting framework is introduced, and its application is demonstrated for the use cases. The remainder of the article is organized as follows: In Section 2, the research methods along with case studies and simulation details are discussed, with the case study network description for Newcastle (UK) and Mumbai (India) in Section 2.1, the description of parameters for PV simulation in Section 2.2 and the proposed smart grid solutions in Section 2.3. Section 3 describes the methodology used for assessing the performance of the chosen smart grid solutions: demand-side management and AVC. Section 3.1 explains the net load profile generation for both Newcastle and Mumbai, while PV energy yield estimation algorithms are described in Section 3.2. Simulation results are discussed in Section 4, which is classified into three scenarios: (i) base case in Section 4.1, (ii) the case with DSM and (iii) case with DSM and AVC. And finally, conclusions are drawn in Section 5. Distribution networks considered In this paper, we consider two LV distribution network examples: One from the UK (as an example of a developed country) and one from India (as an example of a developing country). For the UK, Newcastle upon Tyne was chosen as the location for investigation. A typical UK distribution network model shown in Figure 2 from Reference 32 was used. The LV feeder shown in detail from the secondary distribution transformer has 384 houses. The total number of houses connected to an 11 kV feeder is 3072 (= 8 × 384) and to the 33/11 kV substation is 18,432 (= 6 × 3072) houses. For India, Mumbai was chosen as the location for investigation. The distribution network model shown in Figure 3 was used. The model consists of a 33/11 kV 15 MVA transformer substation with nine outgoing feeders (11 kV), supplying 14,385 houses. A typical 415 V LV feeder (shown in red) supplying 385 houses was considered in detail, similar to Newcastle. PV generation simulation A 3.6 kW polycrystalline rooftop residential grid-connected PV system was considered as typical for both countries. PVGIS (Photovoltaic Geographic Information System) 33 was used as the solar resource database as well as PV generation simulation tool. Technical data of Sharp ND-R250A5 polycrystalline PV modules and SMA H5 inverter were used for simulation. Daily PV generation profiles for a typical year were generated for both locations. Systems were assumed to be stationary and at optimal tilt. The PV system's annual energy yield was found to be 3280 kWh (equivalent to 911 kWh/kW) for Newcastle. For the system in Mumbai, the yield was around 80% more than that of Newcastle at 6017 kWh (equivalent to 1671 kWh/kW). 
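The annual yields reported for the two locations can be cross-checked from the specific yields with a one-line calculation; the function below simply scales the location-specific yield by the installed system size (3.6 kW × 911 kWh/kW ≈ 3280 kWh for Newcastle and 3.6 kW × 1671 kWh/kW ≈ 6016 kWh for Mumbai, matching the figures above to within rounding of the specific yield).

def annual_energy_yield(system_kw, specific_yield_kwh_per_kw):
    # Annual PV energy yield (kWh) from installed capacity and the location-specific yield
    return system_kw * specific_yield_kwh_per_kw

print(annual_energy_yield(3.6, 911))   # ~3280 kWh for Newcastle
print(annual_energy_yield(3.6, 1671))  # ~6016 kWh for Mumbai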
PV penetration scenarios for assessment In this study, the PV penetration level was defined as the fraction of houses in the distribution network considered that have a typical PV system. Eleven scenarios each are studied for the Newcastle and Mumbai cases. The PV penetration level is varied from 0% to 100% in steps of 10% to create the 11 scenarios. Demand side management DSM is the control of customer loads in order to achieve a better match between the available supply and the demand. Of the DSM strategies available, the load shifting strategy (Figure 4), which is the movement of operation of selected loads between times of the day, is chosen in this work. This strategy is most suited to maximizing self-consumption of energy (and hence the economic value) from PV systems installed at customer premises. DSM can be either 'Active' or 'Passive'. 'Active' Demand Side Management (ADSM) is defined as the automated (intelligent) control of residential electricity demand to meet the needs of the power supply system. 34 This has become possible with the roll-out of smart meters and the development of home automation technologies. 'Passive' DSM (PDSM) requires customers to be active participants; the control action of load shifting is realized by the customers based on inputs from the network operator or electricity company. DSM implementations can be based on price signals such as time of use (ToU) tariffs and real-time pricing, or based on incentive schemes, for example, buy-back programs. 35 FIGURE 4 (A) Before load shifting and (B) after load scheduling. Figure 5 is representative of a plausible ADSM scheme and shows an ADSM controller incorporated into a smart grid architecture 36 in which maximization of PV energy capture would be realized through direct load control by the ADSM controller. In PDSM, a similar maximization of PV energy could be realized, for example, through a mobile phone app that prompts customer load action. 37 Load shifting can be expressed mathematically as 38,39 Load(t) = Forecast(t) + Connect(t) - Disconnect(t), where Forecast(t) is the forecasted consumption at time t, Connect(t) is the connected load amount at time t and Disconnect(t) is the disconnected load amount at time t (a minimal sketch of this balance is given below). The appliances chosen as flexible loads for DSM in this study are shown in Table 1. The table also shows the household share (percentage of households with the specific appliance), cycle duration and energy consumption per cycle considered for the chosen flexible loads, based on information assimilated from References 40-42. While the share of dishwashers was below 1% in India before 2020, manufacturers have witnessed a 400% surge in demand due to COVID lockdowns and homeworking restrictions. 43 As Mumbai is the commercial capital of India, it is assumed that the increase in PV penetration will coincide with an increase in the uptake of dishwashers. Load profiles of these flexible loads chosen for DSM for a typical day were available from Reference 44 for the UK. Owing to the lack of such appliance-level consumption data in India, the same profiles were assumed for India. Figure 6 shows the load profiles for the three categories of flexible loads. With the use of appropriate control logic and knowledge of the network topology, the feeder-level controller (Aggregator MV) shown in Figure 5 would be able to make nodal voltage predictions. The in-home ADSM controller can receive these predictions via the smart meter and trigger load-shifting of the flexible loads according to the DSM program. Active voltage control AVC is a part of the active management of the network.
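The sketch referenced above illustrates the load-shifting balance Load(t) = Forecast(t) + Connect(t) - Disconnect(t): it removes the flexible-appliance demand from its original slots and re-places it greedily into the intervals of largest PV surplus. The greedy placement rule and the per-interval units are assumptions made purely for illustration; the study's actual DSM control action is driven by nodal voltage predictions, as described above.

import numpy as np

def shift_flexible_loads(forecast, flexible, pv):
    # forecast: total forecasted demand per interval (kW); flexible: the shiftable portion
    # of that demand; pv: PV generation per interval (kW); equal interval lengths assumed
    forecast = np.asarray(forecast, dtype=float)
    disconnect = np.asarray(flexible, dtype=float)   # Disconnect(t): flexible load removed
    pv = np.asarray(pv, dtype=float)
    base = forecast - disconnect                     # inflexible demand left in place
    surplus = np.clip(pv - base, 0.0, None)          # PV generation exceeding that demand
    connect = np.zeros_like(base)                    # Connect(t): where shifted load is placed
    remaining = disconnect.sum()                     # flexible energy that must be re-placed
    for t in np.argsort(-surplus):                   # fill the largest-surplus intervals first
        take = min(remaining, surplus[t]) if surplus[t] > 0 else remaining
        connect[t] += take
        remaining -= take
        if remaining <= 0:
            break
    return base + connect                            # Load_DSM(t) = Forecast - Disconnect + Connect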
Active voltage control
AVC is a part of the active management of the network. Grid codes usually require that the voltage at the end customer terminal does not deviate from the nominal value by more than a few percent (e.g., within −6% to +10% for the LV network in Europe). To satisfy this requirement, the voltage of all nodes in the network should be kept close to their nominal values at the extremities of distribution network operation. Transformer tap changers, voltage regulating transformers and reactive power compensation are some of the techniques that are used for achieving this control. 45 Amongst these, transformer tap changers are the most common and hence, in this study, AVC is considered by means of transformer tap changing, as shown in Figure 7 for one phase of a three-phase primary substation transformer. The OLTC on the high voltage winding (winding 2) regulates the voltage by varying the transformer ratio V2/V1. Tap position 0 corresponds to no voltage correction and tap position NTaps yields the maximum voltage correction. Reversing the switch connects the regulation winding in the opposite polarity and yields negative tap positions. Hence the tap range is −NTaps ≤ N ≤ +NTaps. Voltage regulation by the OLTC can be described by an equation relating the transformer output voltage after tap changing, Vsec, to the source voltage incoming to the transformer primary, Vpri, the nominal voltages of windings 1 and 2, Vnom1 and Vnom2, the tap position N and the voltage per tap VTC. Normally, control of OLTCs at primary substations is by means of an automatic voltage controller, which controls the tap changer on the high voltage side of the transformer in order to keep the voltage on the LV side within limits. In contrast to conventional voltage regulation (which uses scalar LDC), the automatic voltage controller in this case deploys vector Line Drop Compensation (LDC), which is intended to keep the voltage in the distribution feeder within limits by compensating for the voltage drop along a fictitious impedance and modifying the controller algorithm to keep the transformer terminal voltage equal to a reference value. As vector LDC also accounts for changes in power factor, the results are more reliable. 46 The tap-changer is operated by comparing the reference voltage with the deadband, which is a small voltage range introduced in the transformer's design in order to avoid unnecessary switching around the target voltage. Tap movements are usually made if |Vm − Vref| > Deadband/2 for a certain time delay of t_step (of 1-min duration in this study), subject to the available headroom at the current tap position, where Vmax = 1.1 p.u. − (voltage at the current tap position) and Vmin = (voltage at the current tap position) − 0.9 p.u.
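The sketch below illustrates the deadband-based tap-changer logic just described; the deadband value and tap limits are illustrative assumptions, and the 1-minute hold time and Vmax/Vmin headroom checks are omitted for brevity.

```python
# Deadband-based OLTC control: move a tap only when |Vm - Vref| exceeds half the deadband.
def oltc_step(v_measured_pu, v_ref_pu, tap, deadband_pu=0.02, n_taps=8):
    error = v_measured_pu - v_ref_pu
    if abs(error) <= deadband_pu / 2:
        return tap                           # inside the deadband: no action
    direction = -1 if error > 0 else 1       # lower the tap when the voltage is high
    return max(-n_taps, min(n_taps, tap + direction))

tap = 0
for v in (1.005, 1.02, 1.05, 1.11):          # hypothetical measured voltages in p.u.
    tap = oltc_step(v, 1.0, tap)
    print(f"V = {v} p.u. -> tap {tap}")
```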
PERFORMANCE ASSESSMENT
High PV penetration levels can result in situations where the LV network voltage exceeds the statutory limits. Current grid codes (for example, G98 in the UK) require residential PV systems to turn off and curtail generation during periods of voltage rise. The main aim of this paper is to analyze the efficacy of smart grid solutions (DSM and AVC), between developed and developing countries, in facilitating higher PV penetration in residential distribution networks, given grid code requirements, using the 11 PV penetration scenarios for Newcastle and Mumbai described in the previous sections. LV distribution networks of both the UK and India were designed for an After Diversity Maximum Demand (ADMD) of 2 kW per customer. However, in terms of PV, Mumbai's output is much higher compared to Newcastle for the same PV system size. As described in Section 2.3.1, it is possible to realize a certain ADSM load action also through PDSM. PDSM, as a holistic strategy without the need for smart appliances or direct load control, would be preferable in the first instance for developing countries like India for economic reasons. As such, DSM is chosen as the first preferred solution to prevent PV curtailment, followed by AVC. The two-stage approach is shown in Figure 8. The objective is to maximize the PV energy capture by self-consumption and consequently to reduce the burden caused by reverse power flow on electrical network assets, so as to maintain the optimal lives of the assets. For load shifting, the scenario-based assessments considered a representative DSM logic, outlined in Figure 9, which is applied to each flexible load category (washing machine, dishwasher, and electric water heater). Figure 10 outlines the AVC operation scheme considered for the study.

Net load profiles
Residential load profiles represent the variation of After Diversity Maximum Demand (ADMD) of domestic consumers over a day. The standard method of constructing an hourly load profile is by recording the energy consumption, at feeder or substation level in an electricity distribution network, at regular intervals and dividing this by the number of customers on that feeder to produce the ADMD. The nature of customers is changing under de-carbonization. Residential customers with generating technologies such as PV are prosumers, as they produce and export electricity in addition to the typical consumer roles. In the smart grid context, historic load-profile forecasts will not be appropriate. Net load profiles at the residential customer level will need to be prosumption profiles, factoring in the drastic changes in load (for example, due to electric vehicles [EV], heat pumps, and so forth) and at-home generation technologies (PV, micro-CHP, and so forth). Synthetically generated net load profiles are therefore important for scenario-based assessment studies. Several studies have used artificial intelligence models for predicting the energy demand of buildings. 47 Günay 48 modeled the gross electricity demand in Turkey using Artificial Neural Network (ANN) models with weather and socio-economic factors as inputs. Zameer et al. 49 used genetic programming based on an ensemble of neural networks to demonstrate the feasibility of wind energy prediction (in Europe) by using publicly available weather and energy data. With regard to the challenge of predictive modeling for uncertain penetration levels of future distributed resources, a number of researchers have recently had reasonable success by employing statistical probability distributions. [50][51][52] For example, Munkhammar et al. 52 demonstrated the use of the Bernoulli distribution for incorporating EV demand into load profiles. However, these statistical probability distributions fail to take into account the time-varying behavior in the energy consumption of distributed resources as they assume a constant load. Therefore, a framework for synthetic net residential load profile generation, combining artificial intelligence and statistical probability distributions, that can be used for scenario-based assessment studies is proposed, as shown in Figure 11. The framework summarizes the authors' accumulated experience in using artificial intelligence methods and observations of the literature.
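A minimal sketch of the net ("prosumption") profile idea follows: the feeder net demand is the aggregate house load minus the PV generation of the fraction of houses assumed to host a PV system. The load and generation shapes below are illustrative assumptions, not profiles from the study.

```python
import numpy as np

hours = np.arange(24)
house_load_kw = 0.4 + 0.6 * np.exp(-((hours - 19) ** 2) / 8)            # assumed ADMD shape
pv_gen_kw = np.clip(3.0 * np.sin(np.pi * (hours - 6) / 12), 0, None)    # clear-day output of one system

n_houses = 384
for penetration in (0.0, 0.5, 1.0):            # 0%, 50% and 100% penetration scenarios
    net_feeder_kw = n_houses * (house_load_kw - penetration * pv_gen_kw)
    print(f"{penetration:.0%}: minimum net feeder load {net_feeder_kw.min():.0f} kW")
```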
The net residential load profile generation problem is inherently data centric. The choice of data, artificial intelligence methods, and inclusion of operational elements of the framework such as statistical probability distributions is dictated by the data available. A method tailored for the data available and the scenario under consideration can be generated based on the framework. ANNs are capable of mapping nonlinear relationships between inputs and outputs with a high level of accuracy. [53][54][55] ANNs are used in a wide variety of tasks in different fields including finance, industry, science, and engineering. 53,[56][57][58] ANNs are particularly suited for load forecasting, where high levels of accuracy are required. 57 ANN-based methods were developed for Newcastle and Mumbai, and net load profiles were generated for all 11 scenarios described in Section 2.2.1. The network architecture was optimized in the Matlab environment, exploring hidden layers from 1 to 6 and nodes in each layer from 1 to 20. The optimal network was found to have 13 neurons in one hidden layer and was trained using the Bayesian regularization backpropagation algorithm. Validation of this model using the same data used with the original model saw the MAPE lower to 0.00608 and the RMSE lower to 3.48. Figure 12 summarizes the training of the ANN model and its inputs for predicting net load profiles. UKERC 62 was the source of load data during training. PV generation data was based on PVsyst software simulations using public domain weather data from PVGIS. The net load profiles for the different PV penetration scenarios studied in this work for Newcastle were created using five inputs, namely time of day (hour), PV penetration level (0% to 100% in steps of 10%), EV penetration level (set to 0), temperature, and irradiance values. Temperature and irradiance values were from the SARAH solar radiation database accessible through the PVGIS website.

Mumbai case
The ANN model developed and validated by the authors in Reference 63 was used to generate load profiles for Mumbai. As in many developing countries, owing to the lack of resources, there is a severe shortage of data in the public domain. In contrast to the PV data (resolution of 15 min for all days of a typical year), the load data set was extremely limited (48 data values in total, 24 hourly values each for summer and winter). This made ANN training extremely challenging and was mitigated by means of Bayesian regularization. 63 Figure 13 shows the synthetic residential load profiles for Mumbai generated by the ANN model. Since optimizing the ANN model for extremely limited data posed a challenge, the ANN model could only learn the load behavior, not the PV behavior. For this reason, net load profiles were based on the summation of ANN-predicted load profiles and PVGIS PV generation profiles.
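The sketch below illustrates the net-load prediction step with the five inputs listed above. scikit-learn's MLPRegressor stands in for the Matlab Bayesian-regularization network used in the study, and the training data are random placeholders rather than the UKERC/PVGIS data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 24, 500),                    # hour of day
    rng.choice(np.arange(0, 1.1, 0.1), 500),     # PV penetration level
    np.zeros(500),                               # EV penetration (set to 0 in this study)
    rng.uniform(0, 30, 500),                     # temperature, degC
    rng.uniform(0, 1000, 500),                   # irradiance, W/m2
])
y = rng.uniform(0.0, 2.0, 500)                   # placeholder net ADMD target, kW

# One hidden layer of 13 neurons, mirroring the optimal architecture reported above.
model = MLPRegressor(hidden_layer_sizes=(13,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
```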
PV energy yield estimation algorithms
At the LV distribution level (230/400 V UK, 240/415 V India), the grid codes of both the UK 64 and India 65,66 mandate an upper voltage limit of 1.1 p.u. For PV inverters connected to LV networks, G83, the UK's previous grid code, required disconnection at the same voltage of 1.1 p.u. However, the new grid code G98 requires PV inverters to disconnect only at 1.14 p.u. It is understood that this is for reasons of stability, as disconnection of a large amount of renewable generation at the same instant can create instability. Therefore, there are two voltages which are of significance: 1.1 p.u. and 1.14 p.u. Most PV inverters are now manufactured to comply with G98. India also uses the same inverter technologies as the UK at the same frequency. It was assumed that with higher PV penetration India will follow the UK and that the two voltages mentioned would be the ones of significance. Economic analysis is central to energy policymaking. Most economic analyses consider the PV energy yield (in kWh) for a period of 1 year. As such, the efficacy of DSM and AVC for maximizing PV energy capture following the two-stage approach in Figure 8 is also assessed for a 1-year period for the scenarios considered. The Post-Curtailment Energy Yield Estimation (PC-EYE) algorithms for the three cases forming part of the assessment process, namely (i) the Base case (without DSM or AVC), (ii) the case with DSM, and (iii) the case with DSM and AVC, are shown below. MATLAB was used to code the algorithms. Bus voltages were calculated using Distflow (distribution load flow). 67 The DSM and AVC programs were based on the schemes presented earlier in Figures 9 and 10. 1.14 p.u. was the threshold voltage at which curtailment action was initiated. The grid voltage upper limit of 1.1 p.u. was set as the voltage for initiating DSM and AVC actions to maximize energy capture by preventing curtailment. The post-curtailment energy yield estimation algorithms for the Base case, with DSM, and with both DSM and AVC are detailed below. PV energy curtailment is calculated in the following manner (the un-curtailed energy yield is available from the PV generation profiles): firstly, using the appropriate post-curtailment algorithm, record the instances where bus voltages are greater than 1.14 p.u. for a certain hour of a day owing to PV generation. Record the PV generation corresponding to these hours and instances and then sum them to calculate the aggregate PV energy curtailment for the day. Then, the PV curtailment for every single day of a month recorded in this manner is aggregated at the end of the month to calculate the total PV curtailment for that month. After that, the values of PV curtailment in all 12 months are summed at the end of a meteorological year to get the total annual curtailment for that specific meteorological year. The same process is used to calculate PV curtailment at all buses investigated in the distribution network considered.
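A minimal sketch of this curtailment book-keeping follows: PV output in hours where a bus voltage exceeds 1.14 p.u. is counted as curtailed and summed over days, months and the year. The voltage and generation profiles are illustrative assumptions, not results of the Distflow simulations.

```python
import numpy as np

def daily_curtailment(bus_voltage_pu, pv_gen_kwh, threshold=1.14):
    """Hourly arrays (length 24) for one bus on one day."""
    return pv_gen_kwh[bus_voltage_pu > threshold].sum()

hours = np.arange(24)
solar_shape = np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)
voltage_pu = 1.0 + 0.16 * solar_shape            # peaks at 1.16 p.u. around midday (assumed)
pv_gen_kwh = 3.6 * solar_shape                   # hourly energy of a 3.6 kW system (assumed)

annual_kwh = 0.0
for days_in_month in (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31):
    month_kwh = sum(daily_curtailment(voltage_pu, pv_gen_kwh) for _ in range(days_in_month))
    annual_kwh += month_kwh
print(f"annual curtailment at this bus: {annual_kwh:.0f} kWh")
```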
RESULTS AND DISCUSSION
Simulations were run for the 11 scenarios of varying PV penetration (steps of 10%) described in Section 2.2.1. Three different cases were considered: (i) the Base case (without DSM or AVC), (ii) the case with DSM, and (iii) the case with DSM and AVC. The time period considered in the simulations was 1 year. It was identified that, for both Newcastle and Mumbai, the mid-summer period is the period of highest irradiation in the year, when voltage rise and consequently PV energy curtailment were most severe. The performance of the smart grid solutions considered for the worst-case scenario, the peak irradiation day in summer, is representative of the efficacy. For this reason, some of the results discussed below focus only on the peak day in summer. The bus that is located the farthest from the main grid source (Bus 17) is the most severely affected by any reverse power flow from the domestic PV sources back to the grid. 68 So, Bus 17 was chosen to visualize the effectiveness of DSM and AVC.

Newcastle case
Simulation results for the Newcastle case indicated that for the first 10 PV penetration scenarios, from 0% to 90% penetration level, there were no voltage limit (1.1 p.u.) violations at any bus. Figure 14 shows the bus voltages at 90% penetration for the peak summer day. For the 100% PV penetration scenario, voltage limit violations were found to occur for Bus 13 to Bus 17. Figure 15 shows the Bus 17 voltage and the duration of PV energy curtailment for this scenario. The curtailment voltage threshold of 1.14 p.u. was never exceeded, even for the 100% PV penetration scenario. Evidently, revision of the grid code from G83 to G98 and changing the disconnection threshold has had a positive impact on PV energy capture. Under G83's curtailment voltage threshold of 1.1 p.u., the aggregate annual energy curtailment between Bus 1 and Bus 17 would have been 15,911 kWh.

Mumbai case
Simulation results for the Mumbai case indicated that for up to 40% PV penetration level there were no voltage limit violations at any bus. Figure 16 shows the bus voltages at 40% penetration for the peak summer day. Table 2 lists the higher PV penetration scenarios and the buses which were affected by voltage limit (1.1 p.u.) violations for each scenario. Figure 17 shows the voltages at all buses for PV penetration levels from 50% to 100%. Figures 18 and 19 show the Bus 17 voltage and the duration of voltage violation for 50% and 100% PV penetration. The severity of voltage rise with increasing PV penetration is clearly evident. The threshold voltage of 1.14 p.u. was exceeded for scenarios with PV penetration levels of 70% and above. Figure 20 provides a summary of the curtailment results. For the typical meteorological year, the simulation results showed that Buses 15-17 were affected by PV energy curtailment when the PV penetration level reached 70%. Buses 14-17 were affected by PV energy curtailment when the PV penetration level reached 80%. And Buses 13-17 were affected by PV energy curtailment when the PV penetration level reached 100%. At 100% PV penetration the annual energy curtailment at Bus 17 is 48,941 kWh, which means that 81% of the annual energy generation from the residential PV systems connected to the bus will be curtailed. At 70% penetration, the respective curtailment value was 29% of the annual energy generation at the bus. For PV systems connected to Bus 15, the curtailment was a mere 1% of the annual energy produced by the systems at 70% penetration. However, at 100% penetration, the curtailment was 72% of the annual energy produced by the PV systems connected to the bus.

Newcastle case
Two DSM participation scenarios were considered. A high customer participation scenario considered 50% of the houses in the network participating in DSM. A lower customer participation scenario considered 15% of the houses in the network participating in DSM and is assumed to be a more accurate representation of current customer behavior. The PC-EYE (with DSM) algorithm was run with the DSM program following the scheme in Figure 9 (Section 3) for the flexible load categories washing machine, dishwasher, and electric water heating, as described in Section 2.3.1. It can be seen from Figure 21 that the voltage violation at the most sensitive bus (Bus 17) was fully compensated by the DSM program when 50% of the houses in the network participated in DSM. However, 15% of houses participating in DSM was not able to fully compensate the voltage limit violation, as can be seen from Figure 22.
The duration of voltage violation, however, was shortened. The voltage violation at 11 am was eliminated, but those at 10 am and 12 noon remained.

Mumbai case
A high and a low DSM participation scenario were considered, as in the case of Newcastle, with 50% and 15% housing participation, respectively. Results showed that DSM had minimal impact for the Mumbai case. Figure 23 compares the Bus 17 voltage with 50% DSM participation to the Base case for 70% PV penetration, which was the minimum penetration level to have energy curtailment in the Base case. There is no impact on the voltage violation, and PV energy curtailment was not compensated. Figures 24 and 25 show the aggregate annual energy curtailment for all buses with 15% and 50% DSM participation. There is very little improvement from the Base case curtailment shown previously in Figure 20. Despite being the comparatively easier solution to realize for India, DSM did not prove to be an effective solution for maximizing PV energy capture. This is due to the high solar resource of India. While the residential distribution network in Newcastle is based on copper cables, the system in Mumbai utilizes aluminum overhead conductors. This difference in network construction also means that Mumbai is more susceptible to voltage limit violations under high PV penetration.

Newcastle case
Results from running the PC-EYE algorithm with DSM and AVC with 15% housing participation in DSM not only shortened the duration of voltage violation, but also fully compensated it. Figure 26 shows all bus voltages. Bus 17, where there was voltage limit violation even with DSM, showed no violation when AVC was combined with DSM. The combination of AVC and DSM is found to be effective in fully eliminating voltage violations for the Newcastle network for the worst-case high PV penetration scenario.

Mumbai case
Results from running the PC-EYE algorithm with DSM and AVC with 15% housing participation in DSM for 70%-100% PV penetration are shown in Figures 27-31. It can be seen from Figure 29 that when the PV penetration is 90% in the Mumbai network, voltage violations still existed at Buses 16 and 17, for which voltages were greater than 1.1 p.u. However, the duration of voltage violation was shortened. All bus voltages were less than the curtailment threshold voltage of 1.14 p.u. Consequently, combining DSM with AVC was able to eliminate PV energy curtailment in the network completely, even at 90% penetration. This can only be the impact of AVC, as it was evident from the simulations in the previous section that DSM alone had minimal impact for the Mumbai case. In contrast to the results at 90% PV penetration, voltage limit violations and PV energy curtailment remained when PV penetration was at 100%. As seen in Figure 30, Buses 15-17 were affected by voltage limit violation even with AVC and DSM. Only the Bus 16 and 17 voltages were above the curtailment threshold. It can be seen from Figure 31 that the duration of voltage limit violation was considerably reduced when AVC was applied with DSM. The impact of the OLTC hitting its tap limits during AVC is evident from the voltages of buses close to the OLTC in Figure 30. As shown in Figure 32, there is a significant reduction in the amount of PV energy curtailed in a year when AVC was applied in combination with DSM. DSM in this case was with 15% housing participation. The curtailment that still existed at 100% PV penetration with 15% DSM and AVC was only at Buses 16 and 17. The annual energy curtailed was 1208.4 kWh at Bus 16, while it was 13,706.8 kWh at Bus 17.
For a typical PV system connected to Bus 17, curtailment was around 23% of its annual energy yield, whereas the value was at 80% for the Base case without DSM and AVC. The corresponding Base case curtailment in the LV network (refer to Figure 20) was 110,470.6 kWh. When the higher participation scenario, with 50% of houses participating in DSM, was considered along with AVC, the reduction in aggregate annual energy curtailed in the LV network from PV systems at 100% penetration was a mere 2.5%. This is indicative of AVC being more effective than DSM for the Indian network and solar resource conditions. Table 3 summarizes the aggregate annual PV energy curtailment in the LV network for the Mumbai case for all scenarios where curtailment occurred. It is clear that the houses connected to buses which are far away from the main grid source may not harvest as much energy from their own PV systems because of the shut-down time of their PV inverters when the voltage exceeds the limit of 1.14 p.u. DSM and AVC aid the maximization of energy capture because of the reduction in curtailment. The average amount of financial loss prevented because of the reduction in curtailment can be calculated as 69:

Prevented Financial Loss (per year) = Amount of curtailment reduced × Electricity Unit Price   (5)

The reduction in curtailed energy means more power is consumed from the PV system rather than from the grid source. The prevented financial loss is therefore due to the reduction in grid import. The cost of a kWh of electricity in India is, on average, INR 6.034. 70 Table 4 summarizes the prevented financial loss at different PV penetration levels for the Mumbai case when 15% DSM is used in combination with AVC. Given the economic situation and consumer purchasing power index in India, the financial savings to customers with DSM and AVC are significant. As voltage limit violations and reverse power flow are reduced, and consequently the negative impact on network assets is limited, networks would be able to manage operations with their aging assets. Increasingly, electricity utilities are penalized for carbon emissions and are given long-term carbon emission reduction targets. Hence, there is also significant potential for avoided costs on the utility side if renewable generation such as PV is maximized by means of these smart grid solutions. However, the actual value of avoided costs will depend on the nation's energy policies.

CONCLUSIONS
Future power networks are certain to have high penetration levels of renewable generation in the distribution network. With high penetration levels of microgeneration, curtailment of PV output according to grid code mandates during peak generation and low demand periods is anticipated. For example, according to Engineering Recommendation G98, PV systems in UK LV distribution networks are required to curtail generation when the voltage rise at the point of connection exceeds the mandated limit. Power networks are currently moving into the smart grids paradigm. The inherent cost attached to smart grid technologies means that global economic inequality will be reflected in their deployment. Developing nations with lower economic reserves to spare are often constrained in terms of the level and nature of changes they can make to their power networks.
It was evident from the literature that while there is a strong focus within research studies on impacting sustainable energy policies for developing nations, the focus is mostly on higher-level, vision-type policies at the national level, rather than on policies or grid codes at the operational level, especially at the LV level. This work advances the state of the art by supporting power system planning by means of scenario-based impact assessments and thus aiding sustainable energy policymaking for developing countries. Based on use case scenarios, the efficacy of the smart grid solutions DSM and AVC in maximizing PV energy yield, and therefore revenue returns for prosumers and avoided costs for distribution networks, between a developed country (the UK) and a developing country (India) is analyzed. The results showed that while DSM could be a preferred means because of its potential for holistic deployment (PDSM) via demand response schemes for India and similar developing nations, technically the combination of the weaker LV network with a significantly higher solar resource meant that it was not effective in preventing PV energy curtailment. Developed nations like the UK have upgraded their policies and grid codes to facilitate higher energy capture from renewables like PV. The positive impact of the move from grid code G83 to G98 and the modification of the voltage-based curtailment threshold was observed in the PV energy yield captured. For the Newcastle case study, the results obtained show that there is no PV energy curtailment at all, even in the base case scenario, under G98. For the Mumbai case, under the studied conditions, combining AVC with DSM extended the PV penetration level without curtailment from 70% to 90%, if the grid code was equivalent to G98. While the Indian government has set ambitious targets for renewable installed capacity as well as higher-level 5-year plans for achieving them, grid codes equivalent to G98 in the UK are still in development. Results of this work demonstrated that, while smart grid solutions are capable of enabling PV generation maximization and improving penetration levels, the extent of such benefits is location-specific and is affected by the distribution network structure. It is recommended that, in the preparation of grid codes, scenario-based assessments are carried out for other renewable energy maximization methods as well, with a focus on load profiles as well as locational renewable resource conditions, as demonstrated in this work. Future work will explore methods to compensate for the energy loss and power quality problems in potential scenarios of increasing housing demand and PV penetration in existing distribution networks, as well as the potential of local autonomous inverter control at the PV sites.
Situation of Biofungicides Reconnaissance, a Case of Anthracnose Disease of Cowpea

Plant extracts have long been used in commercial agriculture as anti-microbial tools in food safety applications. These offer growers and agrobiologists many unique benefits, which include their eco-friendliness. This work reviews the situation of biofungicides reconnaissance in reference to fungal disease of cowpea. Twenty different pathogens were associated with various fungal diseases of cowpea, and only species of Colletotrichum were found to have the virulence and propensity to afflict 100% infection on a single susceptible cowpea crop. Plant families under the affliction of Colletotrichum were analyzed. The different forms of botanicals so far availed for use as potential biofungicides were identified. Eighteen plant families were found to represent the entire set of plants and plant materials agrobiologically screened within a range of thirteen years and found to harbour large spectra of species containing substances of biofungicidal potential. The current position in the use of botanicals to combat agricultural pests and diseases is 7% of the total cowpea disease management options.

Introduction
Cowpea (Vigna unguiculata (L.) Walp) is a leguminous, annual grain crop in the bean family (Fabaceae/Leguminosae), with a high degree of variation in growth habit, leaf shape, flower colour and seed size and colour, cultivated mostly in the humid tropics of the globe for its seeds, as a vegetable crop, green manure, fodder, as a cash crop and/or cover crop [1]-[4]. The crop's haulms are also a valuable source of livestock protein [5]. C. destructivum is polycyclic, having multiple life cycles in a growing season [19], and hemibiotrophic, thriving in both living and dead tissues [4]. The pathogen is a seed-borne fungus [20] [21], and can survive for at least two years on diseased stem tissues and plant debris, either on the soil surface or beneath it [4]. C. destructivum sporulates readily on infected cowpea at localized infection foci to produce anthracnose symptoms within 96 hours of inoculation [22], in the form of lesions appearing as small angular brown spots on the leaf petiole, the lower surface of leaves and the leaf veins of cowpea grown under different cropping patterns [6]. These spots later coalesce to produce a brick-red to brown discoloration of the entire leaf. Symptoms are usually delayed (delitescent infection) until production of flowering buds. This degree of virulence on cowpea often leads to product yield losses between 35% and 50% [8] [9]. Available management techniques for anthracnose disease of cowpea include the use of biocontrol systems (bioagents), pesticides (conventional/synthetic chemicals), cultural observations (clean seeds/hygienic fields and practices), HPR (host plant resistance) and botanicals (biopesticides/no synthetic chemicals) [23]. Some, though seemingly effective, carry residual and often negative, indelible impacts [24]. The global quest for "back to nature" continues to augment the need and desirability to search for alternatives which employ the natural agro-biological balance (biopesticides) to address plant disease issues. This is ultimately aimed at protecting the soil that supports the life of these crops among other horticultural plants [25], and by extension safeguards the environment for every other living organism.
The Environmental Protection Agency (EPA-USA) defines a biopesticide as a pesticide derived from natural materials such as animals, plants, bacteria and certain minerals [26]. Biopesticides of plant origin are the botanicals (Figure 1). Biological control of plant disease through the use of antagonistic micro-organisms [4] [27]-[29] and botanical control of plant disease through the use of plant extracts [7]-[9] are the two major ways of controlling plant disease with respect to the natural agro-biological balance. In the evaluation of some botanicals against C. destructivum, Akinbode and Ikotun [4] inhibited the growth of the pathogen in vitro using Nicotiana tabacum plant extract. Crude botanical extracts from the stem bark and root bark of Azadiractha indica, Vernonia amygdalina and Cochlospermum planchonii exhibited strong fungitoxicity against Colletotrichum capsici, as reported by Nduagu et al. [25]. Palhano et al. [30] inactivated spores of Colletotrichum gloeosporiodes using high hydrostatic pressure, separately and combined with citral or lemongrass (Cymbopogon citratus) essential oil. Their work reiterated the need for the use of plant essential oils as an alternative for crop health problems, considering the safety and stability of the soil and its environment. The search for bioactive substances from the plant world has led researchers to distinctive regions of plants. Flowers, leaves, barks, seeds, fruits, roots and at times the whole plant could be employed in the search for botanicals [31]. The seeds of neem, Azadiractha indica A. Juss, and fruits of bush pepper, Xylopia aethiopica (Dunal) A. Rich, were used in the work of Amadioha and Obi [8], while in another similar study the same authors employed the leaves of the scent plant, Ocimum gratissimum (L.), and lemon grass, Cymbopogon citratus D. C. Stapf., for anthracnose disease of cowpea [32]. Amadioha [9] used the leaves of Piper nigrum, Ocimum sanctum and Citrus limon in his study against C. lindemuthianum. Akinbode & Ikotun [4] and Colpas et al. [23] utilized the leaves of Nicotiana tabacum and Ricinus communis, and Ocimum gratissimum, respectively, in the scientific investigation of bioagents/botanicals for the control of C. destructivum. Nduagu et al. [25] screened eleven plants for effects on the growth of Colletotrichum capsici (Synd) Butler & Bisby, concentrating on the leaves, stem bark and root bark of the concerned plants.
Different methods and techniques have been employed by scientists in the extraction and characterization of products from plants. A plant material could thus be harnessed in its fresh or air/sun/oven-dried form, with the adoption of extraction methods such as the use of hot or cold water for aqueous botanicals [4] [23] [25] [32] [33]; organic solvents for oil botanicals [8]; and crude ashes for crude powder botanicals [34] [35]. There are weighty merits for the quest for wider exploration of biopesticides in the field of agriculture. Biopesticides tend to pose fewer risks than conventional pesticides; they are usually inherently less toxic than conventional pesticides; when used as a component of integrated pest management (IPM) programmes [25], biopesticides can greatly decrease the use of conventional pesticides while crop yields remain high; and biopesticides require much less data and a shorter time frame to register than a conventional pesticide [36]. They are non-residue-producing control agents, making them an eco-friendly and easy-to-use class of reduced-risk fungicides. It is in support of all these that the European Community, as in other developed parts of the world, established a European Commission Working Document (SANCO/10472 rev.5). This specifies data requirements for active substances of plant protection products made from plants or plant extracts.

Discussion
The global comparison of scientific research and publications on the protection of Vigna unguiculata (L.) (Table 1) projected the Asian region with the highest share of 35.9%. This was followed by Africa with 24.80%. The least scientific research and publications on the protection of cowpea came from Australia, with 1.40%. Scientific documents on cowpea diseases according to pathogen groups, over a five-year range, indicated fungi to be on top with 35.6% (Table 2). This relatively high percentage of scientific papers points at the major pathogenic constraints and their economic importance in the cultivation of cowpea crops. Regrettably, biopesticides (botanicals), which represent a protective drive for a natural agro-biological balance in the fight against agricultural pests and diseases, were associated with only about 7% of the total cowpea disease management options (Figure 2), a clear indication of the under-exploration of this area. This corroborates the observation in the work of Emechebe & Lagoke [3]. It was observed that each cowpea pathogen has different regions of interest on a "whole cowpea plant". This work, therefore, considered a cowpea crop from different botanical dimensions of six parts, and it was discovered that 30% of the fungal infections occur on the foliar part of the crop, 25% on the stems, 15% on the roots, 10% on the pods/fruits, 25% on the seeds/seedlings and 10% on the whole plant (Figure 3). There was an indication that while each of the other nineteen pathogenic fungal species (Table 3) is able to attack only about a meager 20% of a cowpea crop, Colletotrichum species, especially C. destructivum and C. truncatum, each possess 100% virulence on a single crop at a given pathogenic situation (Table 3). This corroborates the findings of Latunde-Dada et al. [22], Latunde-Dada & Lucas [37], and Akinbode & Ikotun [4].
There are eighteen plant families presently under the anguish of Colletotrichum Corda (Table 4). About 28% family interaction existed between the plant families under the affliction of Colletotrichum and the plant families screened for antifungal properties (Table 6), as derived from Table 4 and Table 5. These overlaps were observed within the five plant families of Asteraceae, Caricaceae, Fabaceae, Lauraceae and Poaceae. The rest, about 72%, were unique in occurrence, hence there were no family interactions among them (Table 6). The extrapolated values appear to affirm the indication in Table 5 of this work that, of all the plant families in existence, only about eighteen (18) have been screened for their biofungicidal characteristics between 1998 and 2011. The products screened during this thirteen-year span were of various forms or states, such as (1) aqueous: botanicals extracted using water as the solvent, with the water also forming the extract solution [4] [25]; (2) syrup: botanicals of a higher measure of viscosity, having been extracted with a solvent other than water and also containing some of the extracting liquid in solution [23] [38]; (3) oil: botanicals in the form of essential oil of the test plants, usually extracted through a condensation system [8]; and (4) ash: botanicals produced in the form of the residue powder left after combustion of a test plant material [34] [35]. Nevertheless, the botanical forms enumerated in this study could be extended to produce additional forms by the application of further processing treatments to the original form. For example, the syrup produced in the work of Win et al. [38] was utilized in its dry crude botanical extract state after subjecting the initial extract syrup to an evapoconcentration system. This study, however, observed that most of the botanicals screened over this span of thirteen years (1998 to 2011) were produced and also utilized in their aqueous form. Considering the total of 33 occurrences of botanical forms (Table 5), the aqueous botanicals accounted for 51.52%, followed by the syrup (15.15%), ash (18.18%) and the oil form (15.15%). The relative ease and economy of production could be responsible for the high percentage obtained for the aqueous botanical evaluation.

Conclusions
Anthracnose disease remains a devastating health problem for the cowpea crop and a corresponding hindrance to its economic cultivation. The major afflicting pathogen, Colletotrichum destructivum O'Gara, has the virulence of one hundred percent (100%) infection on a single crop stand (that is, every part of the crop is subject to attack and infection by C. destructivum at a given pathogenic situation). The use of botanicals remains a suitable contender among adequate disease management options, at least for its characteristic ease of production, economy and ecological amiability. This study has availed the fact that, with the high global yearning for the urgent replacement of conventional (chemical) fungicides in disease management with ecologically compatible bio-fungicides, the seemingly several works in this direction amount to merely about 7% of the different management systems indicated among cowpea disease control options, and it therefore advocates for more reconnaissance in this essential area of agro-biological control.
Figure 2. Percentage disease control techniques on Vigna unguiculata for a media decade.
Table 1. Relative contributions, geographically, of scientific publications on cowpea diseases.
Table 3. Colletotrichum Corda and other fungal pathogens of Vigna unguiculata. ¹Nominal value of A or B equals 18; nominal value of C equals 5.
Quantitative discrimination of biological tissues by micro-elastographic measurement using an epi-illumination Mueller matrix microscope

We propose a method for estimating the stiffness of bio-specimens by measuring their linear retardance properties under applied stress. For this purpose, we employ an epi-illumination Mueller matrix microscope and show the procedures for its calibration. We provide experimental results demonstrating how to apply Mueller matrix data to elastography, using chicken liver and chicken heart as biological samples. Finally, we show how the histograms of linear retardance images can be used to distinguish between specimens and quantify the discrimination accuracy.

Introduction
The determination of tissue mechanical properties is of great interest in clinical applications for the diagnosis and detection of many diseases, since these properties vary with the pathological condition of the tissue. When external forces such as tension, compression, or shear are applied, the stress and strain response of a pathological tissue differs from that of healthy tissue. For example, cancerous tumors are often stiffer than normal tissue [1]. Micro-elastography provides information on tissue micro-structure and on tissue behavior under applied stress. It is relevant to the diagnosis and identification of tissue disorders and disease-related symptoms and thus can act as a method to distinguish between normal and diseased tissue. Elastography techniques have been used by many research groups [2,3] employing ultrasonic and MRI methods. Elastography has been combined with optical coherence tomography [4][5][6] to determine the mechanical behavior of tissue under a compressive load. The measurement is qualitative in nature, and the elastograms map tissue strain with spatial resolution. The above-mentioned methods require complex hardware, and measurement artifacts can be a problem if the data and image processing are not done with great care. Polarization imaging provides additional features, such as structural, biochemical and functional information about the light-interacting medium, compared to conventional intensity imaging methods and therefore can be helpful for clinical applications. Recently, Buchta et al. [7] proposed a shearing-interferometry-based elastography method for determining the elastic parameters and localized stiffness inhomogeneities of soft tissue. He et al. [8] proposed a method to compare among different tissues by using frequency distribution histograms of the Mueller matrix elements. Du et al. [9] showed that by using Mueller matrix polarimetry, characteristic features of cancerous tissues can be differentiated from normal tissue. Biological tissue is anisotropic where collagen fiber structure is present, such that fibrous tissue can be differentiated from other tissue types using linear retardance measurements. Since abnormal tissue shows different fiber structural properties than normal tissue does, linear retardance can thus be used as an effective tool for diagnostic purposes [10][11][12][13]. Mueller matrix imaging contains information on retardance, diattenuation, and depolarization, allowing for a better understanding of the polarization properties of a sample; thus a Mueller matrix microscope is capable of quantitative analysis of a tissue's birefringence and its direction at the micro scale. Epi-mode measurement is useful compared to transmission mode for measuring opaque samples such as biological specimens.
The most commonly used technique for Mueller matrix polarimetry is perhaps the dual rotating retarder method originally proposed by Azzam [14]. Arteaga et al. [15] introduced a transmission-mode Mueller matrix microscope using the dual rotating retarders methodology. For elastography measurements, the specimen should have a thickness of the order of millimeters; in that case, a transmission Mueller matrix microscope cannot be used because of limited light penetration, and it is therefore necessary to build a Mueller matrix microscope in epi-illumination mode. We introduce an epi-illumination Mueller matrix microscope instrument for the quantitative discrimination of elastographic properties of tissue. In the discussion below, we survey the measurement model for our epi-illumination Mueller matrix microscope system and explain the calibration method for it. In order to demonstrate the effectiveness of our measurements, we record Mueller matrix images of a rubber sample while applying different amounts of stress. After using the rubber sample to verify the accuracy of measuring Young's modulus with the Mueller matrix microscope system, we establish an empirically determined linear relationship between the sample linear retardance in reflection and the applied stress. This relationship is described by what we call the sample's "stress-retardance sensitivity coefficient", and by comparing the stress-retardance sensitivity coefficients of different tissues, we show that it is possible to discriminate tissue types even when they appear the same to the eye.

Methodology and calibration of the instrument
Our Mueller matrix microscope is depicted in Fig. 1. The light source used in this experiment is a halogen lamp (Ocean Optics HL-2000-HP) whose output is transmitted through an optical fiber (Ocean Optics, P1000-2-UV/VIS, 1000 µm dia). The light beam is collimated by a collimating lens (Edmund Optics, 43-902, NA = 0.25) and then passed through a bandpass filter (Edmund Optics, 65-098, 12.5 mm dia) having a spectral bandwidth of 10 nm, centered at 550 nm. The focal length of the collimating lens is 15.37 mm and the diameter of the collimated beam is 8 mm. The polarization state generator (PSG) is composed of a horizontal polarizer and a rotating quarter wave plate (QWP). The polarization state analyzer (PSA) is similar to the PSG with the components placed in reverse order. The polarizers are Glan-Thompson type, having 12 mm diameter with an extinction ratio greater than 100000:1, and the waveplates are zeroth-order QWPs. The two retarders in the PSG and PSA are mounted independently on motors and rotate through angles θ(t) and 5θ(t), respectively. The beam splitter is a non-polarizing beam splitter used to pass light from the PSG into the microscope objective lens. The microscope objective is a plano infinity-corrected long working distance objective lens (Mitutoyo, 10X, NA = 0.28). Imaging is performed using a cooled CCD camera (Bitran BQ-86M, 1360×1024 pixels, 16 bits). The exposure time of the camera can be varied depending on the requirements of the measurement. Dark current is measured before each measurement and is subtracted from the signals at each pixel during measurement. Averaging of five measurements is performed to improve the signal-to-noise ratio and to increase the precision of the measurement. A total of thirty-six images is taken, and these images are then used to determine the Mueller matrix elements of the sample. The thirty-six measurements are spread over a 180° rotation of the PSG.
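The following is a minimal numerical sketch of such a dual-rotating-retarder acquisition: 36 intensities are simulated while the PSG retarder rotates in 5° steps and the PSA retarder rotates five times faster, and the 16 Mueller elements are then recovered. A least-squares inversion stands in for the Fourier-coefficient analysis used in the paper, and the analyzer is assumed to be horizontal; both are illustrative assumptions.

```python
import numpy as np

def polarizer(phi):
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return 0.5 * np.array([[1, c, s, 0],
                           [c, c * c, c * s, 0],
                           [s, c * s, s * s, 0],
                           [0, 0, 0, 0]])

def retarder(delta, theta):
    c, s, cd, sd = np.cos(2 * theta), np.sin(2 * theta), np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, c * c + s * s * cd, c * s * (1 - cd), -s * sd],
                     [0, c * s * (1 - cd), s * s + c * c * cd, c * sd],
                     [0, s * sd, -c * sd, cd]])

M_sample = np.diag([1.0, 1.0, -1.0, -1.0])        # ideal mirror at normal incidence
s_in = np.array([1.0, 0.0, 0.0, 0.0])             # unpolarized source
qwp = np.pi / 2                                   # ideal quarter-wave retardance

rows, intensities = [], []
for k in range(36):
    theta = np.deg2rad(5 * k)
    g = retarder(qwp, theta) @ polarizer(0.0) @ s_in        # PSG output Stokes vector
    a = (polarizer(0.0) @ retarder(qwp, 5 * theta))[0]      # first row of the PSA matrix
    rows.append(np.kron(a, g))                              # intensity = a . M . g
    intensities.append(a @ M_sample @ g)

M_fit, *_ = np.linalg.lstsq(np.array(rows), np.array(intensities), rcond=None)
print(np.round(M_fit.reshape(4, 4) / M_fit[0], 3))          # recovers the mirror matrix
```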
Without any calibration of the system, the Mueller matrix elements can be found from the Fourier coefficients of the output intensity, as given in detail in Ref. [16]. We demonstrate the calibration of our reflection-mode Mueller matrix polarimeter. Since the retarders used in the proposed configuration are only approximately achromatic in the visible wavelength range, working at different wavelengths requires recalibrating the retardance of the wave plates. Also, the main source of errors in a dual rotating retarder instrument is the azimuthal error in the components. Goldstein et al. [17] proposed a calibration method for a transmission configuration by taking air as a reference sample. In reflection mode, this is not feasible. Instead, we take a reference measurement using a mirror as the sample, and from this calibrate the retardance of the retarders and the azimuth angles of the components. We also calibrate the effect of the beam splitter in both the reflection and transmission paths, since it introduces a certain amount of linear retardance and diattenuation. Since the polarization effects due to the microscope objective are very small, they are ignored in the calibration process.

Calibration of retardance error of waveplates and azimuthal error of the components
Let us define the linear retardance magnitude and fast axis orientation angle of a linear retarder as δ and θ, respectively. Let us consider the retardance errors of the two retarders as ε1 = δ1 − 90° and ε2 = δ2 − 90°. If we write ε3 and ε4 for the azimuthal errors of the retarders and ε5 for the azimuthal error of the analyzer, then the output Stokes vector at the camera can be written using Stokes-Mueller calculus as

S_out = P2 R2(5θ) M R1(θ) P1 S_in,

with the errors ε1-ε5 included in the retardances and azimuths of the corresponding elements, where S_in is the input Stokes vector of the incident light, Pn(φ) is the Mueller matrix of the nth polarizer having its transmission axis oriented at an angle φ, Rn(θ) is the Mueller matrix of the nth linear retarder, and M is the Mueller matrix of the reference sample. The reference sample is taken as a mirror at normal incidence, whose Mueller matrix is given by M = diag(1, 1, −1, −1). Using this theoretical model, the corresponding output intensity is Fourier transformed to determine the Fourier coefficients an and bn. The azimuthal errors of the components and the retardance errors of the waveplates can then be expressed explicitly in terms of these Fourier coefficients. By using this calibration method, the azimuthal angles and retardances of the waveplates are retrieved, and we then adopt the equations given in Ref. [17] to retrieve the Mueller matrix elements.

Calibration of the beam splitter
Although the beam splitter is considered to be a non-polarizing element, it has linear retardance and diattenuation which can cause artifacts in the measured experimental Mueller matrix if not compensated for. To remove the linear diattenuation and retardance effects due to the beam splitter, calibration of the beam splitter is also performed. In the experimental procedure, the light first reflects from the beam splitter, then interacts with and reflects from the sample, and finally transmits through the beam splitter. In the whole process, the measured Mueller matrix (M_measured) is a multiplication of the Mueller matrix of the beam splitter for reflection (M_BSR), the Mueller matrix of the sample (M_S), and the Mueller matrix of the beam splitter for transmission (M_BST), and can therefore be written as

M_measured = M_BST M_S M_BSR.

The complete Mueller matrices of the beam splitter were measured using a commercial Mueller matrix polarimeter (AxoScan) [18] in both reflection and transmission and are shown in Table 1.
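The sketch below illustrates the beam-splitter compensation implied by the relation above: the sample Mueller matrix is recovered by inverting the calibrated beam-splitter matrices. The simplified beam-splitter model and its parameter values are illustrative assumptions, not the Table 1 data.

```python
import numpy as np

def bs_matrix(diattenuation, retardance_rad):
    """Simplified non-depolarizing diattenuator-retarder model aligned with the lab axes."""
    d, cd, sd = diattenuation, np.cos(retardance_rad), np.sin(retardance_rad)
    a = np.sqrt(1 - d ** 2)
    return np.array([[1, d, 0, 0],
                     [d, 1, 0, 0],
                     [0, 0, a * cd, a * sd],
                     [0, 0, -a * sd, a * cd]])

M_BSR = bs_matrix(0.27, np.deg2rad(2.4))     # reflection path (illustrative values)
M_BST = bs_matrix(0.28, np.deg2rad(1.4))     # transmission path (illustrative values)
M_S_true = np.diag([1.0, 1.0, -1.0, -1.0])   # e.g. a mirror-like sample

M_measured = M_BST @ M_S_true @ M_BSR
M_S = np.linalg.inv(M_BST) @ M_measured @ np.linalg.inv(M_BSR)   # compensation step
print(np.round(M_S, 6))
```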
The AxoScan system measures polarization properties at a single point of the sample (not imaging) and can measure in the visible to near-infrared range (400-800 nm). The measurement error in the Mueller matrix elements determined by the AxoScan system is 0.1%, and the system has a precision of 0.01% for 40 sets of measurements at our working wavelength of 550 nm. In the transmission configuration, the measured linear retardance is 1.42° and the diattenuation is 0.281, whereas in reflection mode the beam splitter has a linear retardance of 2.36° and a diattenuation of 0.270. The measured Mueller matrix of the beam splitter is sensitive to the optical geometry of the commercial polarimeter. However, the same geometry was used during measurement of the beam splitter with the commercial polarimeter as with the Mueller matrix microscope.

Determination of stress-retardance sensitivity coefficient
We next show how the linear retardance values obtained from a sample's Mueller matrix can be used to estimate the sample's stiffness. After passing through a birefringent material, the two components of the electric field vector along the two principal stress directions experience different refractive indices. Using the stress-optic law, the magnitude of linear retardance Δ expressed in radians can be written as

Δ = (2π d / λ)(n_x − n_y) = (2π G d / λ)(σ_x − σ_y),   (7)

where n_x and n_y are the refractive indices along the two principal directions, G is the stress-optic coefficient expressed in radians/unit pressure, σ_x and σ_y are the amounts of stress applied along the two principal axes expressed in Pa, d is the effective optical path length of light through the sample, and λ is the wavelength of the light source. By applying stress in one principal direction (σ_x = σ, σ_y = 0), Eq. (7) simplifies to

Δ = 2π G σ d / λ.   (8)

The strain ε can be calculated as the change in length (ΔL) divided by the original length (L). The applied stress σ and the produced strain ε are related to Young's modulus E by

σ = E ε.   (9)

If we measure the strain of a sample as a function of applied stress, we can fit a line to the data to determine Young's modulus for the material. By substituting Eq. (9) into Eq. (8), we obtain

Δ = (2π G E d / λ) ε = R ε,   (10)

where we define the "stress-optic modulus" R = 2π G E d/λ. From the slope of the retardance-strain plot fitted from the same experiment, but this time from the reflected Mueller matrix data, we can also determine the value of R. Since strain is dimensionless, R has the same units as Δ. We further define

a = E / R,   (11)

and call the coefficient a the "stress-retardance sensitivity coefficient". Since Young's modulus E is expressed in Pa and R is in radians, the unit of a will be Pa/rad.
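A minimal sketch of the two linear fits implied by Eqs. (9)-(11) follows: Young's modulus E is the slope of the stress-strain line, the stress-optic modulus R is the slope of the retardance-strain line, and a = E/R. The synthetic data points below are illustrative, generated around rubber-like values, not the measured data.

```python
import numpy as np

strain = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
stress_mpa = 10.8 * strain + np.random.default_rng(0).normal(0, 0.005, strain.size)
retardance_deg = 220 * strain + np.random.default_rng(1).normal(0, 0.2, strain.size)

E_mpa = np.polyfit(strain, stress_mpa, 1)[0]        # slope of stress vs strain
R_deg = np.polyfit(strain, retardance_deg, 1)[0]    # slope of retardance vs strain
a_kpa_per_deg = 1e3 * E_mpa / R_deg                 # Eq. (11), expressed in kPa per degree

print(f"E = {E_mpa:.2f} MPa, R = {R_deg:.0f} deg, a = {a_kpa_per_deg:.1f} kPa/deg")
```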
Instrument validation
In order to experimentally validate the Mueller matrix microscope instrument, we measure the spatially resolved Mueller matrix of a mirror at normal incidence. The Mueller matrix element images and element standard deviation images of the mirror are shown in Fig. 2. All of the Mueller matrix elements are normalized by m00. Each standard deviation is obtained from a set of five measurements. The mean measurement errors of the diagonal and non-diagonal Mueller matrix elements of the mirror are less than 0.8% and 1.1%, respectively. We next measured the Mueller matrix of a rotating Glan-Thompson polarizer. The Mueller matrix is decomposed by the polar decomposition method [19], and the retrieved linear diattenuation magnitude and orientation for the different orientation axes of the polarizer are shown in Fig. 3. The value of linear diattenuation averaged across the face of the sample (circular region in Fig. 3) is measured to be 1.00±0.01, i.e., a perfect polarizer to within the measurement precision of our system. The ±0.01 error in linear diattenuation represents the spatial standard deviation. The orientation of diattenuation follows the azimuthal axis of the linear polarizer and has an error of less than 0.2°. Next, a waveplate (λ/4 at 632.8 nm) is measured to quantify the instrument's retardance measurement accuracy at a wavelength of 550 nm. Figure 4 shows the linear retardance magnitude and relative axis orientation measured while rotating the waveplate from 0° to 180°. By taking the spatial average and spatial standard deviation across the face of the sample, the linear retardance of the waveplate is measured as 105°±2.4°, independent of the rotation angle of the waveplate. The estimated orientation of linear retardance follows the waveplate orientation to within 0.5° of error.

Elastographic measurement
The mount depicted in Fig. 5 is used to measure the strain produced in a sample. Stress is applied by attaching different weights to a pulley. One side of the sample is held fixed on the fixed stage, which is kept in place, while the other side of the sample is attached to a sliding stage. The direction of the strain is perpendicular to the incident beam. To validate the accuracy of the stress-strain measurement using this technique, we first performed a measurement on a rubber material having a known Young's modulus of 10.8 MPa. For a given strain, the complete Mueller matrix was simultaneously recorded for the rubber sample, and the polar decomposition method was used to retrieve the linear retardance. The spatial average linear retardance values for strains from 0.000 to 0.048 are shown in Fig. 6. Linear retardance in the rubber arises from stress birefringence due to the applied stress produced in the sample. To quantitatively evaluate the linear retardance images (Fig. 6), the spatial histograms of linear retardance and fitted Gaussians for different applied strains are shown in Fig. 7. The mean value of retardance increases linearly with increasing strain (Fig. 8). Figure 8 also shows the stress-strain measurement result. Fitting the measured stress-strain data to a straight line gives an estimated Young's modulus of 10.5±0.12 MPa. The ±0.12 range in Young's modulus represents the standard deviation over five measurements. The value of the stress-optic modulus R for the rubber sample is determined as 220° using Eq. (10). After measuring the Young's modulus E and the stress-optic modulus R, we can estimate the stress-retardance sensitivity coefficient a from their ratio (Eq. (11)) as 47.7 kPa/°. Next, we perform measurements of biological tissues. Samples of raw chicken liver, raw chicken heart (shown in Fig. 5), and cured ham were each cut into a rectangular shape of 10 mm×20 mm cross-section and 1.2 mm thickness using a long-blade knife. We prepared the tissue in a rectangular shape rather than a free-form shape and then applied force in one direction, such that the deformation in the tissue is approximately uniform over the sample; this was confirmed by imaging four different parts of the sample. To determine the stress-retardance sensitivity coefficients of chicken heart and liver, we performed stress-strain and Mueller matrix measurements simultaneously while applying increasing amounts of stress.
When analyzing the Mueller matrix of birefringent turbid media with Mie-sized scatterers acquired in reflection geometry, the Lu-Chipman decomposition may suffer from limitations due to the assumptions required by this method. In this case, one can make use of the extended polar decomposition method instead [20]. Figure 9 shows the linear retardance images of chicken liver and heart for different strain values. The mean values of linear retardance are plotted against strain in Fig. 10. The non-zero retardance at zero strain is due to the small amount of birefringence present in the samples because of their fibrous structure. At lower values of applied stress, the measured strain and retardance increase linearly. At strains between 0.5 and 1.0, however, there is a sudden change in behavior for chicken liver, where the retardance-strain curve exhibits a break from its initial slope to a new, lower slope. Above this point, the retardance drops suddenly and then becomes more or less constant with applied strain. This can happen due to the breaking of the internal fiber structure and tissue damage. We therefore fit the stress-strain data and the linear retardance-strain data using only the first four data points, i.e., before the fiber structure breaks. Fitting these points to a straight line by least squares, we calculate the Young's modulus of chicken heart and liver as 0.73 MPa and 0.14 MPa, respectively. Thus, chicken heart tissue is about five times stiffer than chicken liver tissue. Using Eq. (10), the values of the stress-optic modulus R for chicken heart and liver are determined as 230° and 51°, respectively. To confirm uniform strain over the sample, we also measured the Mueller matrix in four different parts of the chicken heart and liver. The spatial histograms of the linear retardance in the four different portions of the sample almost match each other and confirm that the strain is uniform over the sample. After measuring Young's modulus E and the stress-optic modulus R, we can estimate the stress-retardance sensitivity coefficient a from their ratio (Eq. (11)) as 3.2 and 2.7 kPa/° for chicken heart and liver, respectively. While Fig. 10 gives the estimated value of the stress-optic modulus R, computing R from material properties alone (i.e., Eq. (10)) requires knowing the "effective optical depth" of the sample in reflection. This is not the same as the sample thickness, since the light cannot penetrate all the way through. In order to estimate d for our samples, we used a microtome to slice the tissues into thicknesses of 5, 10, 15, and 20 µm and compared the retardance measured from these thin slices to the retardance measured on bulk samples. Since the 10 µm thin slices gave results comparable to the bulk samples, we use d ≈ 10 µm for the effective optical depth. We also measured the stress-strain properties and Mueller matrix image of a cured ham sample. The linear retardance image is extracted from the Mueller matrix image for different amounts of strain. As before, we acquire retardance images at different values of stress and fit the stress-strain data and retardance-strain data up to the point where the tissue fibers show damage. The slope of the fitted line gives the Young's modulus of ham as 0.59 MPa [21]. Using the same fitting procedure, the value of the stress-optic modulus R is measured to be 170°. From the Young's modulus E and the stress-optic modulus R, the stress-retardance sensitivity coefficient a of ham is determined as 3.4 kPa/°.
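The histogram-plus-Gaussian-fit analysis used above, and the threshold-based tissue discrimination discussed in the next paragraph, can be sketched in Python as follows. The synthetic retardance images and the specific fit/intersection logic are our own illustrative assumptions, not the paper's data or code.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Synthetic stand-ins for two retardance images (degrees) at the same strain.
liver_ret = rng.normal(20.0, 4.0, size=(100, 100)).ravel()   # lower retardance
heart_ret = rng.normal(35.0, 5.0, size=(100, 100)).ravel()   # higher retardance

# Fit a Gaussian to each spatial histogram (mean and standard deviation).
mu_l, sd_l = norm.fit(liver_ret)
mu_h, sd_h = norm.fit(heart_ret)

# Decision threshold: retardance value where the two fitted Gaussians intersect.
def pdf_diff(x):
    return norm.pdf(x, mu_l, sd_l) - norm.pdf(x, mu_h, sd_h)

threshold = brentq(pdf_diff, mu_l, mu_h)   # search between the two means

# Classify pixels: below the threshold -> liver, above -> heart.
sensitivity = np.mean(heart_ret > threshold)   # heart pixels correctly labelled
specificity = np.mean(liver_ret <= threshold)  # liver pixels correctly labelled
print(f"threshold = {threshold:.1f} deg, "
      f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```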
In order to demonstrate how the stress-retardance sensitivity coefficient can be used to discriminate between two different tissue types, we show the measured linear retardances for chicken heart and liver at the same applied strain of 0.2 (Fig. 11). From the fitted Gaussian curves, we can see that chicken heart has a higher retardance than chicken liver at a fixed applied strain. A standard choice for discriminating between the two samples is to draw a threshold where the two curves intersect, in this case at 27.2°. All pixels to the left can be classified as liver tissue and all pixels to the right as heart tissue, for which we can calculate a classifier specificity of 0.986 and a sensitivity of 0.977. A comparison of the different physical parameters, in terms of Young's modulus, the stress-optic modulus, and the stress-retardance sensitivity coefficient, for the samples used in the present study is listed in Table 2. For the biological samples, it is evident from Table 2 that the properties E and R closely track one another, so that R can be used as a kind of proxy measurement for E. However, when a sample has a very different behavior, as is the case for the rubber sample versus all of the biological samples, there is no clear connection between E and R. In that case the stress-retardance sensitivity coefficient a can be useful for discrimination between different samples.

Table 2 (values as summarized in the text): Young's modulus E (MPa), stress-optic modulus R (°), and stress-retardance sensitivity coefficient a (kPa/°).
Rubber: E = 10.5, R = 220, a = 47.7
Chicken heart: E = 0.73, R = 230, a = 3.2
Chicken liver: E = 0.14, R = 51, a = 2.7
Cured ham: E = 0.59, R = 170, a = 3.4

Conclusion
We have demonstrated a method to determine elastographic parameters, in terms of a stress-optic modulus and a stress-retardance sensitivity coefficient, for the differentiation of biological samples. An epi-illumination Mueller matrix microscope and its calibration method are demonstrated, which are useful for the measurement of opaque samples. Linear retardance images are retrieved from the Mueller matrix elements and are shown for biological samples including chicken liver and chicken heart. Using the histograms of the linear retardance images, it is shown that we can differentiate between different tissue structures. We quantitatively distinguished between different biological tissues using the stress-retardance sensitivity coefficient, by correlating the Mueller matrix with the mechanical properties of the tissue. While our technique provides a means to measure thick biological samples, owing to its use of reflected light rather than thin slices in transmission, accurate quantification requires calibration. Some tissue samples will allow deeper transmission in this measurement geometry than others, but if their basic transmission and scattering properties are known a priori, then it seems likely that a calibration method can be used to match retardance to stress in each case. We believe that, by using the Mueller matrix polarimetry technique and applying strain to the tissue, it is possible to distinguish between normal and affected regions of the tissue, which can therefore be helpful for the early detection of many diseases.
Dynamical manipulation of electromagnetic polarization using anisotropic meta-mirror
Polarization control of electromagnetic waves is very important in many fields. Here, we propose an active meta-mirror to dynamically manipulate the electromagnetic polarization state over a broad band. This meta-mirror is composed of a double-layered metallic pattern backed by a metallic flat plate, and active PIN diodes are integrated into the meta-atom to control the reflection phase difference between the two orthogonal polarization modes. By switching the operating state of the PIN diodes, the meta-mirror is expected to achieve three polarization states, namely left-handed circular, right-handed circular and linear polarization. We fabricated this active meta-mirror and validated its polarization conversion performance by measurement. The linearly polarized incident wave can be dynamically converted to right-handed or left-handed circular polarization in the frequency range between 3.4 and 8.8 GHz with an average loss of 1 dB. Furthermore, it can also keep its initial linear polarization state. Polarization state is of great importance in many electromagnetic (EM) devices since a majority of EM phenomena are polarization sensitive. A wave plate, based on a birefringent crystal with specific orientation and thickness, is a traditional method to manipulate polarization. It can achieve linear-to-circular polarization conversion with different handedness by superposing two orthogonal linearly polarized waves with a certain phase shift due to the difference in refractive index along the two axes. The handedness is mainly dependent on the phase difference, which is associated with the crystal thickness. As the difference between the refractive indices is typically very small, a large thickness is often required. In addition, the polarization conversion is restricted to a narrow bandwidth because the produced phase shift between the two orthogonal polarization modes is frequency dependent. It is also worth noting that the traditional wave plate cannot dynamically manipulate EM polarization states. With their great capacity to manipulate EM waves, metamaterials or meta-surfaces have attracted much interest and resulted in many intriguing applications, such as negative refraction 1,2 , flat lenses 3,4 , Fano resonance 5 , and invisibility cloaks 6,7 . For polarization control, both chiral metamaterials [8-14] and anisotropic metamaterials [15-18] exhibit strong capabilities. Due to the strong coupling between electric and magnetic fields, chiral metamaterials exhibit two characteristic properties, circular dichroism [8-11] and optical rotation [12-14]. They can not only transform a linearly polarized (LP) wave into a circularly polarized (CP) wave with different handedness at different frequencies, but also rotate the incident wave by a certain angle. However, most chiral metamaterials operate only in a narrow bandwidth because of the highly resonant nature of their meta-atoms. Although several methods, including multilayer 19,20 and helix structures 21,22 , have been reported to extend the operation bandwidth, the high loss of chiral metamaterials is still unsatisfactory, especially for CP chiral metamaterials. The anisotropic metamaterial adopts a working principle similar to that of the birefringent crystal, in that it can independently tune the transmission or reflection phases (ϕ1 and ϕ2) along two orthogonal axes.
By designing the phase difference Δϕ = ϕ1 − ϕ2, anisotropic metamaterials under illumination by a linearly polarized wave can realize different outgoing polarization states, including left-handed circular polarization (LHCP) in the case of Δϕ = π/2, right-handed circular polarization (RHCP) in the case of Δϕ = 3π/2, and linear polarization (LP) in the case of Δϕ = π, assuming no material loss is generated. However, the same inherently resonant nature of the unit cell limits the polarization conversion of this kind of metamaterial to a narrow bandwidth. In order to address this issue, a nascent strategy of dispersion management was proposed and applied to a single dimension of a reflective meta-surface; thus, an LP wave was achromatically converted to its cross-polarization state over a 3:1 fractional bandwidth with a transformation efficiency of 90% 17 . More recently, the bandwidth of polarization conversion was further extended to 5:1 octaves by implementing dispersion management in the two dimensions of the meta-surface, achieving the ideal phase retardation in two orthogonal directions 18 . Despite this great progress, the above broadband polarization transformers can only achieve a single outgoing polarization state. Therefore, dynamical metamaterials have been developed to satisfy multi-polarization requirements. Active elements or tunable materials, such as microelectromechanical systems 23,24 (MEMS), PIN diodes 25 , photoactive media 26 and graphene 27 , have been utilized in the design of meta-atoms. With outside stimuli, the metamaterial is expected to achieve real-time manipulation of polarization states. However, the loss and narrow bandwidth of dynamical polarization transformation severely impede their further development. It therefore remains a great challenge to actively manipulate polarization states with low loss over a broad band. In this article, an actively controlled meta-mirror is proposed to manipulate the polarization states of the reflected wave over a broad band. It can convert a linearly polarized wave to an LHCP, RHCP or unchanged LP wave by tuning the bias voltage applied to the PIN diodes in the meta-atoms. Dispersion management is employed in the two dimensions of the proposed meta-mirror to achieve the ideal phase retardation for achromatic polarization conversion. Through numerical simulation and experimental measurement, we demonstrate the strong ability of the designed meta-mirror for dynamical polarization manipulation over a wide band.

Results
The proposed meta-mirror is generally composed of an anisotropic metallic pattern and a metallic flat plate with a dielectric spacer between them. By specially designing an anisotropic metallic pattern, any desired reflection phase difference between the x- and y-directions can be produced. Since the polarization transformation is mainly dependent on this phase difference, the geometrical design of the anisotropic cell plays a key role in the polarization characteristic of the meta-mirror. We can use the transfer matrix method to calculate the reflection phases of the anisotropic meta-mirror along the x- and y-directions, where k is the wave vector in free space, d is the thickness of the dielectric spacer, i = x, y denotes the electric field polarized along the x- and y-direction, respectively, Z_i(ω) is the surface impedance of the meta-mirror, and Z_0 = 377 Ω is the impedance of free space.
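The transfer-matrix calculation referred to above can be sketched numerically with the standard model of a sheet impedance placed a distance d above a ground plane; the closed-form expression used below is a textbook assumption on our part and may differ in detail from the exact formulation of ref. 18.

```python
import numpy as np

Z0 = 377.0   # impedance of free space (ohms)
c  = 3e8     # speed of light (m/s)

def reflection_phase(Zi, freq, d):
    """Reflection phase (degrees) of a sheet impedance Zi backed by a ground
    plane at distance d, using the standard shorted transmission-line model.
    This is an assumed model, not necessarily the exact formulation of ref. 18."""
    k = 2 * np.pi * freq / c                 # free-space wave vector
    Zshort = 1j * Z0 * np.tan(k * d)         # input impedance of the grounded air spacer
    Zin = (Zi * Zshort) / (Zi + Zshort)      # sheet impedance in parallel with the spacer
    r = (Zin - Z0) / (Zin + Z0)              # reflection coefficient at normal incidence
    return np.degrees(np.angle(r))

# Example: a purely inductive sheet at 6 GHz above a 12 mm air spacer (hypothetical values).
freq = 6e9
print(reflection_phase(1j * 2 * np.pi * freq * 0.1e-9, freq, 12e-3))
```

With models for both Z_x(ω) and Z_y(ω) in hand, Δϕ(ω) = ϕ_xx(ω) − ϕ_yy(ω) follows simply by calling the function twice.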
Both ϕxx and ϕyy are frequency dependent, and the transformation of an LP wave into an LHCP or RHCP wave is produced assuming that Δϕ(ω) = ϕxx(ω) − ϕyy(ω) = ±90° and that no material loss is generated. In order to construct the active meta-mirror, PIN diodes are integrated into the design of the meta-atoms. Figure 1(a) shows the general operating principle of the active meta-mirror. It is composed of a single-layer periodic cross metallic strip structure backed by a metallic flat plate. The active elements, PIN diodes, are loaded on the gaps of the cross metallic strips along both the x- and y-directions, where they are independently controlled by the bias voltage, so that we can dynamically tune the phase difference between these two orthogonal directions. When the proposed active meta-mirror is illuminated by an LP wave with the electric field polarized at 45 degrees with respect to the x-axis, three different polarization states of the outgoing wave can be obtained. As Fig. 1(b) shows, if the PIN diodes are switched on along the x-direction and turned off along the y-direction at state 1, a phase difference of −90° (Δϕ(ω) = ϕxx(ω) − ϕyy(ω)) can be constructed by optimizing the metallic pattern, and the meta-mirror then converts the LP incident wave into an RHCP reflected wave. When all the PIN diodes are changed into their opposite states (state 2), as shown in Fig. 1(c), a phase difference of 90° can be obtained, resulting in the production of an LHCP reflected wave. If all the PIN diodes are switched off at state 3, as seen in Fig. 1(d), the meta-mirror becomes isotropic, and the original LP state is expected to be preserved, since no phase difference is produced between the two orthogonal directions. To verify the feasibility of the active meta-mirror over a broad band, the above simple design is adopted for theoretical analysis. The M/A-COM Flip Chip MA4SPS502 is selected for the loaded PIN diodes. Its total capacitance is Ct = 0.09 pF at −40 V reverse bias, the inductance of this diode is Ld = 0.35 nH, and the series resistance is Rs = 2.4 Ω for a forward bias current of 20 mA. The working central frequency of this meta-mirror is designed at 6 GHz, and an air spacer is inserted between the metallic pattern layer and the metallic plate. In order to achieve wideband polarization conversion, the air spacer thickness cannot be too small, or else the strong magnetic coupling between the meta-surface and the ground plane would result in a non-constant phase gradient, limiting the bandwidth. Here, the thickness is designed to be 12 mm, which is about a quarter of the wavelength at the central frequency. There is almost no coupling between the metallic pattern and the metallic flat plate. In addition, assuming that the PIN diodes are switched on along the x-direction and are in the OFF state along the y-direction, the equivalent circuits of this meta-mirror along these two directions can be obtained, as shown in Fig. 2(b,c), respectively. The inductors (L) and capacitors (C) are due to the cross metallic strip and its gap, respectively. Therefore, the frequency-dependent impedances Z_x and Z_y can be expressed from these equivalent circuits (formulas (2) and (3)), with Z_x taking the form of a series resistance and inductance (R + iωL). In the microwave domain, the 3-dB axial ratio is generally adopted to express the bandwidth of a circularly polarized (CP) wave.
Assuming that no material loss is generated, that is, the meta-mirror reflects all the incoming wave energy at both x- and y-polarizations, the reflection phase difference Δϕ(ω) between these two polarizations can be calculated to lie in the range (−90° − 36.75°, −90° + 36.75°) or (90° − 36.75°, 90° + 36.75°) 28 . Considering fabrication tolerance and measurement error, the allowed phase-difference range is reduced to (±90° − 20°, ±90° + 20°) to define the bandwidth of the CP wave in simulation. Here, we take the LP-RHCP conversion as an example to investigate this simple model of the active meta-mirror. According to formulas (1)-(3), the phase difference Δϕ(ω) mainly depends on the values of L and C, which can be evaluated by fitting Δϕ(ω) to the ideal phase difference of −90°. When the circuit parameters satisfy (L, C) = (0.1 nH, 5 fF), the phase difference Δϕ(ω) fluctuates within the range (−90° − 20°, −90° + 20°) over a wide band from 3.3 GHz to 11.3 GHz, as seen in Fig. 2(d). Additionally, the corresponding impedances Z_x and Z_y can be calculated by formulas (2) and (3), and the ideal impedance Z_y for the given Z_x can be derived from formula (1) as well. As Fig. 2(e) shows, the calculated Z_y closely approaches the ideal Z_y over a wide frequency range. Therefore, the above calculation results for the simple model of the active meta-mirror fully demonstrate its capability for dynamical polarization conversion over a wide band. Figure 3 shows the geometry of the designed wideband meta-mirror that can dynamically manipulate the polarization states of the reflected wave. The super cell of this meta-mirror is composed of four sub-cells which are arranged to possess C4 symmetry. The sub-cell structure consists of a double-layered metallic pattern printed on both sides of a dielectric substrate, as shown in the inset of Fig. 3(a). There is a continuous metallic strip along the y-direction. In order to avoid the crossing of the two metallic strips between the x- and y-directions, the metallic strip along the x-direction is constructed from three rectangular patches connected through two metalized via-holes. The PIN diodes of the M/A-COM Flip Chip MA4SPS502 are inserted in the gaps between all the adjacent sub-cells. For the LP-CP transformation, the operating state of the PIN diodes along the x-direction is different from that along the y-direction. If the incident LP state needs to be preserved, all the PIN diodes should work in the same state. The dielectric substrate selected to support the metallic structure is 1 mm thick F4B with a relative permittivity εr of 2.65 and a loss tangent of 0.001. The period of the unit cell is set to px = 15 mm. In addition, there is an air spacer with a thickness of 12 mm between the dielectric substrate and the metallic flat plate. In order to verify the reflection characteristics of this meta-mirror, numerical simulation is carried out using the commercial software CST Microwave Studio 2014. The unit cell for simulation is given in the inset of Fig. 3(b). Periodic boundary conditions are set on its x and y sides, and we adopt x- and y-polarized waves, respectively, as the exciting source to obtain its reflection characteristics. The geometric parameters of the double-layered metallic patterns are optimized as follows: l1 = 8.2 mm, l2 = 1.95 mm, l3 = 3.25 mm, l4 = 7.6 mm, w1 = 4 mm, w2 = 0.5 mm, g = 0.3 mm and g2 = 0.15 mm. Figure 3(b) shows the reflection coefficient of this meta-mirror at state 1.
It is seen that the reflection amplitudes for both x- and y-polarizations are larger than 0.95, which means that the incoming wave is almost totally reflected by this meta-mirror for these two polarizations. However, it is seen in Fig. 3(c) that their reflection phases are obviously different, and the phase difference between them fluctuates within the range (−90° − 20°, −90° + 20°) from 3.6 GHz to 8.7 GHz. When the meta-mirror is tuned to operate at state 2, similar results are expected, and the phase difference between the x- and y-polarizations lies in the range (90° − 20°, 90° + 20°) over the same frequency band, as seen in Fig. 3(d). Figure 3(e,f) shows the simulated electric field distributions of the x-polarized and y-polarized reflected waves at state 1, respectively. It can be seen that both x- and y-polarized incident waves are vertically reflected and their wave fronts have an obvious phase difference. The reflection phase of the y-polarized wave is 90° ahead of that of the x-polarized reflected wave. Hence, an RHCP reflected wave is produced when the meta-mirror is illuminated by a normally incident wave with the electric field along the structure diagonal. Due to the rotational symmetry of the geometrical structure of the designed meta-mirror, the opposite handedness of the CP wave can be realized at state 2, where the reflected wave of the x component is designed to lead the y component by 90°. According to the simulation results given in Fig. 3(b,c), we can further calculate the effective sheet impedance through the transfer matrix method analysis that is well described in ref. 18. Figure 4 depicts the retrieved results for the effective anisotropic impedances Z_x and Z_y at state 1. The optimal impedance Z_y calculated for the given Z_x to construct a CP wave is also given for comparison, together with the corresponding admittance curves of Y_y and the ideal Y_y. It is seen that the retrieved impedance Z_y of this meta-mirror agrees well with the optimal impedance value over a wide frequency range of interest. To further investigate the applicability of the meta-mirror under oblique incidence, the reflection characteristics for oblique incident angles of 10°, 20° and 30° at state 1 are studied, as seen in Fig. 5(a). With increasing oblique incident angle, there is a strong resonance peak that gradually shifts towards lower frequencies. The polarization conversion effect around this resonance frequency deteriorates, but the designed meta-mirror still possesses the capability of wideband polarization conversion. The relative bandwidth of the CP outgoing wave exceeds 60% for all the oblique incident angles. In order to understand the origin of the above resonance peak, we investigate the x- and y-polarized reflection characteristics of this meta-mirror at an oblique incident angle of 30°, and the corresponding result is given in Fig. 5(b). As it shows, a strong absorption phenomenon is produced in the x-polarization at the frequency of 7.594 GHz, which means that there is almost no reflected wave energy. The inset of Fig. 5(b) shows the power loss density distribution for the x-polarization at the absorbing frequency. It is obvious that the high power loss density is located along the x-direction, especially at the gaps where the PIN diodes are loaded. We therefore consider that most of the incident wave is dissipated in the series resistance of the PIN diodes and converted into heat.
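As a sketch of the kind of retrieval mentioned above, the snippet below inverts the same simple sheet-over-ground model used earlier to obtain an effective sheet impedance from a complex reflection coefficient. The exact transfer-matrix procedure of ref. 18 may differ, and the example input is hypothetical.

```python
import numpy as np

Z0 = 377.0   # free-space impedance (ohms)
c  = 3e8     # speed of light (m/s)

def retrieve_sheet_impedance(r, freq, d):
    """Invert the grounded-spacer model: given a complex reflection coefficient r
    at frequency freq for a spacer thickness d, return the effective sheet
    impedance. This inversion assumes the simple sheet-over-ground model and is
    only a sketch of the ref. 18 procedure."""
    k = 2 * np.pi * freq / c
    Zin = Z0 * (1 + r) / (1 - r)              # input impedance seen by the incident wave
    Zshort = 1j * Z0 * np.tan(k * d)          # grounded air spacer seen alone
    # Zin is the parallel combination of the sheet impedance and Zshort; solve for the sheet.
    return (Zin * Zshort) / (Zshort - Zin)

# Hypothetical example: a nearly lossless reflection with -90 deg phase at 6 GHz.
r = 0.98 * np.exp(-1j * np.pi / 2)
print(retrieve_sheet_impedance(r, 6e9, 12e-3))
```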
In order to validate the simulation results of the designed meta-mirror, a sample with dimensions of 360 mm × 360 mm was fabricated, and its schematic fabrication process flow is depicted in Fig. 6. First, a 1 mm thick double-sided copper-clad laminate with a relative permittivity εr of 2.65 is selected, and the metallic pattern of the meta-mirror is etched on its two sides using printed circuit board (PCB) technology. Then, the PIN diodes are soldered between the adjacent sub-cells of the meta-mirror, and 1000 Ω resistors are used between each branch of the metallic structure and the direct current (DC) feeding line to provide the same amount of current to all the diodes and to protect them as well. The fabricated sample is placed at a height of 12 mm above a metallic flat plate, and four nylon spacers are utilized to support the whole meta-mirror. Finally, a two-way DC voltage source is adopted to control the states of the PIN diodes. The polarization conversion characteristic of the meta-mirror was measured in an anechoic chamber. Figure 7(a) shows the reflection measurement setup. Two wideband horn antennas connected to the two ports of a vector network analyzer (R&S ZVA40) are used as transmitter and receiver, respectively. Their incidence and reflection angles are fixed at 5° to give a good approximation of normal incidence. The square sample is located on the central stage of the whole measurement setup, with its diagonal parallel to the x-axis. For the LP-CP conversion states, both the transmitting and receiving horns are first set to be polarized along the x-axis, and the amplitude and phase of the x-component reflected wave are measured. Subsequently, the polarization of the receiving horn is changed to the y-axis and the corresponding result for the y-component reflected wave is obtained. Figure 7(b,c) shows the measured phases of the x- and y-component reflected waves at state 1 and state 2, respectively. It is seen that the phase differences between these two component reflected waves lie in the range (−90° − 20°, −90° + 20°) at state 1 and (90° − 20°, 90° + 20°) at state 2 in the frequency band of 3.4 GHz to 8.8 GHz, which agrees well with the simulation results. The characteristics of the CP waves at the different states can then be calculated using the formula R± = Rxx ± iRyx, where the subscript "+" indicates the RHCP wave and "−" indicates the LHCP wave. The measured LP-CP conversion performances for the two operating states are given in Fig. 7(d,e), respectively, where the simulation results are also given for comparison. It is seen that an RHCP reflected wave is produced from 3.4 GHz to 8.8 GHz at state 1, where the isolation between the RHCP and LHCP outgoing waves is larger than 15 dB (corresponding to AR ≈ 3 dB). Its reflection loss varies between 0.4 dB and 2.7 dB with an average of about 1 dB. When the meta-mirror operates at state 2, a similar result is obtained over the same frequency band, where an LHCP reflected wave is generated. The minimum reflection loss is about 0.2 dB at 6 GHz, and its cross-polarization ratio is larger than 15 dB from 3.4 GHz to 8.8 GHz. There is some difference between the simulated and measured results at state 2, which is probably due to fabrication tolerances and measurement errors, especially the soldering tolerance of the PIN diodes.
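A small sketch of this post-processing step is shown below: it forms the circular-polarization components from measured co- and cross-polarized reflection coefficients using the decomposition R± = Rxx ± iRyx quoted above, and derives the isolation and axial ratio from them. The axial-ratio expression is the standard one and the input values are hypothetical.

```python
import numpy as np

def cp_metrics(Rxx, Ryx):
    """Given complex co- and cross-polarized reflection coefficients measured with
    a linearly polarized incident wave, return the RHCP/LHCP components, their
    isolation in dB, and the axial ratio in dB (sketch using R± = Rxx ± i*Ryx)."""
    R_plus  = Rxx + 1j * Ryx          # RHCP component ("+")
    R_minus = Rxx - 1j * Ryx          # LHCP component ("-")
    big, small = max(abs(R_plus), abs(R_minus)), min(abs(R_plus), abs(R_minus))
    isolation_db = 20 * np.log10(big / small)
    axial_ratio_db = 20 * np.log10((big + small) / (big - small))
    return R_plus, R_minus, isolation_db, axial_ratio_db

# Hypothetical state-1-like measurement: x and y components of nearly equal
# amplitude, with the y component lagging by about 90 degrees.
Rxx = 0.95 * np.exp(1j * 0.0)
Ryx = 0.93 * np.exp(-1j * np.pi / 2)
print(cp_metrics(Rxx, Ryx))
```

Note that an isolation of 15 dB between the two CP components corresponds to an axial ratio of roughly 3 dB, consistent with the criterion quoted in the text.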
Figure 7(f) shows the measured and simulated LP reflection spectra of the sample at state 3, where no bias voltage is applied to the sample. It is seen that the outgoing wave still keeps the same polarization state as the incident wave. The reflection loss is less than 1.2 dB over a wide band ranging from 3.4 GHz to 8.8 GHz. Therefore, the designed meta-mirror has been experimentally verified to provide three polarization states which can be dynamically controlled as required.

Discussion
In summary, an active meta-mirror with a multi-polarization function is presented. This meta-mirror integrates PIN diodes into the design of the meta-atoms. By tuning the working state of the PIN diodes, the reflection phase difference of this meta-mirror along two orthogonal directions can be dynamically switched among −90°, +90° and 0°, corresponding to three different polarization states, namely the RHCP, LHCP and LP states, respectively. Both simulated and measured results have verified that the designed meta-mirror is capable of converting the incident LP wave into an RHCP or LHCP reflected wave between 3.4 and 8.8 GHz, and that it can also keep the original LP state. The proposed active meta-mirror could be developed for several potential applications such as spin-orbit interaction 29 and dynamic beam steering 30 .
Resolving some nomenclatural issues on Isoeto-Nanojuncetea and four new communities of the Iberian Peninsula
We describe four new vegetation units and propose 17 new typifications and 24 altered names of syntaxa belonging to Isoeto-Nanojuncetea. Information is also provided on the publication dates of the alliances Isoetion and Preslion cervinae.

Introduction and Methods
We have been working on Isoeto-Nanojuncetea Br.-Bl. & Tüxen in Br.-Bl. et al. 1952, mainly in the Iberian Peninsula and northern Morocco, over the past 20 years (Molina & Casado, 1997; Espírito-Santo & Arsénio, 2005; Molina, 2005; Molina et al., 2009; Pinto-Cruz et al., 2009; Silva et al., 2009a, 2009b). Some new communities were described and others confirmed for the Iberian territory (Silva et al., 2008, 2009c; Costa et al., 2012). Plant communities of temporary ponds compose a highly specialized vegetation with an extremely patchy distribution that poses challenges for classification (e.g. Silva, 2009). Nevertheless, syntaxa belonging to this class have been described since the beginnings of phytosociology almost a century ago, often using obsolete name-giving taxa that complicate the nomenclatural interpretation of the units (e.g. Braun-Blanquet, 1922, 1936a). Here we describe three new associations and one new subassociation, designate 17 type relevés and propose the correction, completion or mutation of 24 names following the rules of the 4th edition of the International Code of Phytosociological Nomenclature (ICPN; Theurillat et al., 2020). The nomenclature of vascular plants follows Flora iberica (Castroviejo, 1986-2019), and for families not yet published in this flora we followed Euro+Med (2006-), except for Isoetes delilei, which agrees with Greuter & Troia (2015), and Isoetes longissima, which follows Troia & Greuter (2014).

Publication dates of the alliances Isoetion and Preslion cervinae
The name Isoetion was invalidly published by Braun-Blanquet (1931: 39) because a sufficient original diagnosis was not provided (ICPN, Art. 2b). Thereafter, the author provided a sufficient diagnosis validating the alliance name in the paper Un joyau floristique et phytosociologique "L'Isoetion" méditerranéen, which was published in the Bulletin de la Société d'Étude des Sciences Naturelles de Nîmes (Braun-Blanquet, 1936a) and also in a Communication of the SIGMA (Braun-Blanquet, 1936b). Text and page makeup are identical in both publications, except for the page numbering. The Communication is dated on the cover page January 1936 and contains a reference to the Bulletin on the last page: "Extrait du Bulletin de la Société d'Etude des Sciences Naturelles de Nîmes, t. XLVII, 1930-35". Additional evidence that the Communication should be considered a reprint of the Bulletin is that in both publications a reference to 'Communication nº 40' is given on the first page under the title, whereas the published Communication is number 42 of the series, suggesting that it was postponed until the Bulletin was published. The precise date of publication of volume 47 of the Bulletin is unknown, but on page 252 there is a reference to a meeting of the Société held on November 29, 1935. Hence, it is highly unlikely that the volume could have been published before 1936 (D. Kania, pers. comm.). The author citation should therefore be Isoetion Braun-Blanquet 1936, as indicated by Theurillat et al. (2020).
Eryngio corniculati-Isoetetum delilei
The correct name-giving taxon is not Isoetes setacea but Isoetes delilei Rothm. (Greuter & Troia, 2015). In corrections according to Art. 44 the author and the year of the correction are not indicated.

Hyperico humifusi-Cicendietum filiformis Brullo & Minissale 1998
Syn. The association 'Hyperico-Cicendietum filiformis Rivas Goday (1964) 1970' is based on the previous name 'Cicendietum filiformis (Allorge 1922) salmantico y onubense' (Rivas Goday, 1964: 222), which must be considered invalid as the rank indicated for it was 'regional variants' ['variantes regionales'] (Art. 3d). The name Hyperico-Cicendietum filiformis Rivas Goday 1970 was validly proposed by Rivas Goday (1970: 239), but the name-giving taxon indicated was Hypericum humifusum subsp. australe (Ten.) Rouy & Fouc., which is an illegitimate synonym of H. australe Ten. Although H. humifusum was the only species indicated in the original relevés of Rivas Goday (1964: 222), we can conclude that Rivas Goday (1970) subsequently assumed that the taxon they contained corresponded to H. humifusum subsp. australe (Rivas Goday, 1970: 226, 231, 239). According to Flora iberica (Castroviejo, 1986-2019) this taxon is not present in the area from which the association was described, and the name completed according to the original diagnosis and Art. 10a Note 2 (Hyperico australis-Cicendietum) is therefore an inadequate name that cannot be used (Art. 43, nomen ineptum). Although the name 'Hyperico humifusi-Cicendietum filiformis Rivas Goday 1970' has been cited in several publications (Costa et al., 2012; De Foucault, 2013a; Gigante et al., 2013), Rivas Goday's name has still not been formally corrected. The correction of this name according to Art. 43 would also create an illegitimate later homonym (Art. 31) of the name Hyperico humifusi-Cicendietum filiformis Brullo & Minissale 1998 (Brullo & Minissale, 1998). This latter name, based on relevés from the province of Zamora (Navarro & Valle, 1984), is a syntaxonomic synonym and in fact the name that must be used for the association. This Carpetan-Leonese and Oroiberian syntaxon was originally assigned to Cicendion (Rivas Martínez, 1964, 1981), a position also accepted by Jansen & Sequeira (1999) for Serra da Estrela. It was subsequently subordinated to the alliance Menthion cervinae (Costa et al., 2012), which is typical of sites with a higher flooding level. According to the original diagnosis and its Iberian-Atlantic character, it seems more appropriate to reassign it to Cicendion.

Myosuro minimi-Crassuletum vaillantii
De Foucault (2013b: 90) designated a lectotype for the name 'Myosuro-Bulliardetum vaillantii Br.-Bl. 1936', but this lectotypification is superfluous because the name is invalid (Art. 19c). The original diagnosis (Braun-Blanquet et al., 1952) includes a reference to Braun-Blanquet (1936a) and provides a synoptic table that exactly matches the four relevés published by Braun-Blanquet in 1936. The relevé we selected as type is therefore part of the original diagnosis and must be considered a lectotype. Agrostis pourretii Willd. is the correct name in the genus Agrostis for Agrostis salmantica (Lag.) Kunth; hence the association name must be corrected according to Art. 44 of the ICPN. Rivas-Martínez et al. (2002) formerly proposed this correction. The lectotypification proposed by Belmonte (1986: 48) was not effectively published (Art. 1).
This association was first published in Rivas Goday (1968: 1022-1023) with a sufficient original diagnosis including a synoptic table. Thereafter, Pérez Latorre et al. (1999) designated the neotype, indicating that the taxon present in the association corresponds to Juncus hybridus Brot., but did not correct the association name. Indeed, the name-giving taxon Juncus tingitanus Maire & Weiller is absent from the area (Cádiz province) where the association was described (Castroviejo, 1986-2019; Romero Zarco, 2010). Silva et al. (2008) formerly proposed the correction of this name to Laurentio michelii-Juncetum hybridi (Art. 43).

Cypero micheliani-Crypsietum alopecuroidis
Since Heleochloa alopecuroides is currently included in the genus Crypsis as Crypsis alopecuroides (Tutin et al., 1980; Euro+Med, 2006-), the name of the association may be mutated (Art. 45). Rivas-Martínez et al. (2002: 256) already proposed this mutation. Although several Cyperus species are indicated in the original diagnosis, only C. michelianus (the most frequent and abundant) is considered a character species of the association by the authors. The original name-giving taxon is Fimbristylis dichotoma, a misapplied name for F. bisumbellata in southern Spain (Castroviejo, 1986-2019). Brullo & Minissale (1998) formerly proposed the correction of this association name, which corresponds to Art. 44. The first name-giving taxon Panicum debile is currently treated as a synonym of Digitaria debilis (Tutin et al., 1980; Euro+Med, 2006-), and the name needs to be mutated (Art. 45). (Bolòs, 1979: 202).
ESTABLISHMENT OF QUALITY PARAMETERS OF Quisqualis indica LEAVES THROUGH SOPHISTICATED ANALYTICAL TECHNIQUES
Quisqualis indica (Q. indica; Rangoon creeper) is found in Asia and finds its place in Ayurvedic texts and ethno-medicine as well as in modern research. Its leaves contain important constituents such as rutin, quisqualic acid, trigonelline, L-proline and L-asparagine. Traditionally, the leaves are used as an antipyretic, anti-flatulent, anti-inflammatory, antiseptic, and anti-diarrhoeal. Modern pharmacological research also supports these claims. However, this plant remains phytochemically unexplored, which restricts any means for standardization of its formulations. The present research focuses on analysis of the leaves of Q. indica using sophisticated chromatographic and spectral techniques. Thin layer chromatography (TLC), high performance thin layer chromatography (HPTLC), and gas chromatography-mass spectrometry (GC-MS) techniques were used. After several pilot TLC analyses, an HPTLC fingerprint of the methanolic extract of the leaves was obtained using a chloroform: methanol: ethyl acetate (7: 3: 3) solvent system, which showed 12 peaks at 254 nm and 9 peaks at 366 nm. GC-MS analysis of the methanolic extract detected 7 known phytochemicals, some of them having pharmacological importance. This research may provide quality-control parameters for Q. indica leaves in the herbal industry, aid in the detection of adulteration of its formulations, and open new avenues for phytochemical research, including the isolation of a marker compound.

INTRODUCTION
Quisqualis indica (Q. indica, Figure 1) is a ligneous vine that belongs to the family Combretaceae. In this plant, the leaves are opposite, elliptical. In India, it is grown as an ornamental plant, while it is distributed across the world in tropical countries, especially in China, the Philippines, Bangladesh, Myanmar, and Malaysia [1-3]. It is commonly known as Rangoon creeper or Chinese Honeysuckle (English). Local names of Q. indica include Madhumalati (Hindi), Modhumalati (Bengali), Parijat (Manipuri), Vilayati Chambeli (Marathi), Radha Manoharam (Telugu), Niyogniyogan (Filipino), Quiscual (Spanish) and Shih-chun-tzu (Chinese) [4,5]. Every plant contains several phytochemicals in its various parts, showing different pharmacological activities and toxicities; likewise, Q. indica Linn. shows many pharmacological activities due to the presence of medicinally active compounds. Other parts of Q. indica also have significant ethnomedicinal uses. The fruits are tonic and anthelmintic, and are used in nephritis, as a gargle, in diarrhea, and as an astringent. The seeds are used in diarrhea and as an antiseptic, a febrifuge for high fevers, a vermifuge, and an anthelmintic. The roots are used in rheumatism, in diarrhea, and as an anthelmintic [9-11]. This research study may aid in the identification and characterization of phytoconstituents through chromatographic fingerprints obtained using sophisticated HPTLC and GC-MS techniques. Using these reports, one can check adulteration and facilitate standardization of herbal formulations containing leaves of Q. indica. It will also promote further research studies and the isolation of phytochemicals for the betterment of the community.

Chemicals and reagents
Methanol was purchased from the Rankeem Chem Trade Enterprise (purity >99%, analytical AR grade). Chloroform, ethyl acetate, ammonia and formic acid were obtained as gift samples from the Molychem Laboratory.

Collection and sampling
Five-year-old, mature, fresh leaves of Q.
indica were collected from the Medicinal Garden of RK University (Latitude 22.24006, Longitude 70.90098), Rajkot, India, in the monsoon season (August 2014), and compared with standard literature for authentication. The leaves were ovate in shape, 5-13 cm × 2-5 cm in size, and pale green in colour with a coarse surface texture. A herbarium specimen [SOP/COG/398/2014] was submitted to the repository at the School of Pharmacy, RK University, and certified by the botanist from the School of Science, RK University.

Extraction
After collection, the leaves were dried in a hot air oven at 50 °C. The leaves of Q. indica were powdered and 25.0 g of dry powder was extracted with 100 mL methanol for 24 hours by maceration at room temperature. The macerated solution was filtered and the filtrate was allowed to dry at 50 °C. The obtained solid mass was stored for further experiments.

Pilot TLC and HPTLC studies
To obtain the best resolution in the HPTLC study, the mobile phase system was developed on TLC. The methanolic extract of the leaves was used to develop the TLC plates. Chloroform, methanol, ethyl acetate, ammonia and formic acid were used in the development of the TLC plates. After several trials in the TLC studies, the best mobile phase system was identified. The proportions of the solvents were modified over several TLC plates to obtain precise and clear spots in the fingerprinting. The chloroform: methanol: ethyl acetate (7: 3: 3) system gave the best separation in TLC and was forwarded for HPTLC analysis. HPTLC analysis for fingerprinting of the methanolic extract was carried out at the Department of Chemistry, Saurashtra University, with the chloroform: methanol: ethyl acetate (7: 3: 3) system. The HPTLC fingerprint was obtained on HPTLC plates containing silica gel 60 F254 as the stationary phase, manufactured by E. Merck KGaA. A CAMAG Linomat 5 was used for sample application. Peak height and area were selected for evaluation, while the measurement was done using the principle of absorption. The sample was dissolved in methanol. A CAMAG TLC Scanner 3 was used for scanning the plates in daylight as well as at 254 nm and 366 nm.

GC-MS studies
Qualitative GC-MS analysis of the sample named Madhumalati, containing the methanolic extract of Q. indica leaves, was performed at the Department of Chemistry, School of Science, RK University, India. An Agilent 5977B MS coupled to a 7820A GC was used for the analysis. An HP-5 capillary column (30 m × 0.32 mm; 0.25 µm film thickness) was used for the GC-MS studies [12-16]. The NIST library of the GC-MS was used to identify the detected compounds.

Pilot TLC and HPTLC studies
The mobile phase chloroform: methanol: ethyl acetate (7: 3: 3) showed distinct spots in TLC. Hence, it was further used for HPTLC fingerprinting. The HPTLC plates were scanned at 254 nm and 366 nm, as shown in Figure 2, and their densitometric spectra are shown in Figure 3.

GC-MS studies
From the GC-MS analysis of the methanolic extract of Q. indica leaves, 7 phytochemicals, as shown in Table 3, were identified using the NIST library. The chromatogram from the GC analysis is shown in Figure 4, while spectra from the MS analysis are shown in Figure 5. Some of the phytochemicals detected from the GC-MS analysis of the methanolic extract of Q. indica leaves have been reported with significant biological activities. The decyl undecyl ester of carbonic acid is used as an acidifier as well as to inhibit the production of uric acid. 3,7,11,15-Tetramethyl-2-hexadecen-1-ol is useful for providing oligosaccharides to the body. However, the others did not possess known pharmacological activities. It is significant to note that Agarwal et al.
[17] reported 15, 12, and 18 compounds qualitatively via GC-MS analysis in methanol, ethyl acetate, and hexane extracts, respectively. In addition, Sutar et al. [18] investigated the phytochemical and biological activities of Q. indica leaves; evaluation of secondary metabolites and characterization of isolated compounds were done by TLC, GC-MS analysis, NMR, and FTIR. Potphode et al. studied bioethanol production from Q. indica leaves with GC-MS analysis [19].

CONCLUSION
This research work may open up further ideas to facilitate the standardization of herbal formulations containing Q. indica leaves. The obtained chromatographic fingerprints may aid in checking adulteration and in the quality control of herbal formulations. This study can also be used in further research for the isolation of compounds or of a marker for other processes.
Localizing Optic Disc in Retinal Image Automatically with Entropy Based Algorithm
Examining the retinal image regularly plays an important role in determining human eye health; any variation present in this image may result from some disease. Therefore, there is a need for computer-aided scanning of the retinal image to perform this task automatically and accurately. The fundamental step in this task is identification of the retinal elements; optic disc localization is the most important one in this identification. Different optic disc localization algorithms have been suggested, such as the algorithm proposed in this paper. The assumption is based on the fact that the optic disc area is rich in information, so its entropy value is more significant than that of other areas. The suggested algorithm has recursive steps for testing the entropy of different patches in the image; a sliding window technique is used to obtain these patches in a specific way. The results of the practical work were obtained using different common data sets, and the algorithm achieved good accuracy with trivial computation time. Finally, this paper consists of four sections: a section for the introduction containing the related works, a section for methodology and materials, a section for the practical work with results, and a section for the conclusion.

Introduction
Glaucoma is a chronic eye disease, which can be controlled but cannot be cured. If left untreated, loss of vision occurs gradually, potentially leading to blindness. The detection and diagnosis of glaucoma are related to tracing the changes in the optic cup, which is a portion of the optic disc (OD). To perform this detection in an automated system, the optic disc region must be extracted from the retinal image through a segmentation process. Localization of the optic disc is an important step in simplifying this segmentation. Different methods have been proposed for localization of the OD [1]. They exploit features of the OD region, such as its yellow color, higher brightness, high grey intensity, and the network of converging vessels it contains. They have been applied through different techniques. Simple operations were adopted in works such as Akram et al. [2], who used average filtering and thresholding. Aquino et al. [3] exploited morphological operations and edge detection. Whardana and Suciati [4] combined two techniques: morphological operators and clustering with the k-means method. Li and Chutatape [5] presented an algorithm that clusters the image according to bright pixels; the candidate regions are then passed through principal component analysis (PCA) in order to locate the center of the optic disc region. Padmanaban and Kannan [6] suggested the use of Fuzzy C-Means (FCM) clustering. Foracchia et al. [7] worked on tracing the vessels, matching their paths with the directional pattern of the OD in the original image. Nergiz et al. [8] introduced a study using the geometric character of the vasculature in the optic disc, namely its convergence to the OD center. Mendonça et al. [9] presented a new methodology based on the entropy of the information resulting from the distribution of vessels through the optic disc region. Learning techniques also offer good opportunities in the optic disc localization field; an OD template can be learned based on different features. While Wu et al. [10] built a specific model for the shape of the vessel network in the OD region, which has a parabolic form, Dehghani et al. [11] suggested building a histogram template for the colors which the OD region contains. Akyol et al.
[12] proposed a multiple-step algorithm, with an induction classifier being one of these steps. The study by Ichim and Popescu [13] used adaptive local texture analysis to generate several features that are passed through a classification algorithm. Muangnak et al. [14] used a decision model induced from direction vectors derived from the vessel network and its points of convergence, and then applied a hybrid method. Sinthanayothin et al. [15] applied a neural network with input data extracted from principal component analysis of the image in question. Frequency transform techniques have been used in other studies. Pallawala et al. [16] proposed using the Daubechies wavelet transform. Jafariani and Tabatabaee [17] employed the Fourier-Mellin transform in their work, while another study by Esmaeili et al. [18] used the curvelet transform technique.

Optic Disc
The eye fundus plays an important role in detecting various eye diseases; it consists of the retina, which is the transparent, light-sensitive structure at the back of the eye, the optic nerve disc, and blood vessels (retinal arteries and veins). Anatomically, the retina contains structures including the macula, which is the central area; rods, which are photoreceptor cells that surround the macula; the optic nerve, which carries signals; and blood vessels, as shown in Figure 1 [19]. One of the problems the eye suffers from is vision loss resulting from diabetes complications, which can be noticed through changes occurring in the retinal components. In relation to this, the fundus image is used routinely to examine and assess eye health [2]. Identification of the optic disc as a fundal landmark is important: it provides reference coordinates to locate anatomical components in retinal images, a reference length for measuring distances in retinal images and for vessel tracking, and a baseline for observing any change in the optic disc which may result from disease [21]. With the development of digital imaging and computing power, the potential to use these technologies, together with computer vision techniques, in ophthalmology analysis also increases. Optic disc localization has been performed successfully using features of the disc that can be exploited in image analysis, such as its brightness, high contrast, and yellowish color, and the fact that the blood vessels and optic nerves pass through it [2]. The optic disc area therefore contains more detailed information that can be exploited in its identification.

2.2. Entropy
Information theory is associated with measures of information that have played an important role in different application fields, such as image analysis. The entropy of a probability distribution can be interpreted not only as a measure of uncertainty but also as a measure of information. As a matter of fact, the amount of information acquired from the observation of the result of an experiment (depending on chance) can be taken as numerically equal to the amount of uncertainty concerning the outcome of the experiment before carrying it out [22].

Entropy in Image Analysis
A digital image consists of small units that represent the brightness of particular positions in the image, called picture elements (pixels). The variation of the pixel values carries information in an image, which can therefore be measured by an entropy metric.
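To make this concrete, a minimal Python sketch of the histogram-based entropy measure just described is given below (the paper's own implementation was in MATLAB; the example image values are hypothetical).

```python
import numpy as np

def image_entropy(gray, levels=256):
    """Shannon entropy (bits) of a grey-level image computed from its
    normalized histogram; zero-probability bins are skipped."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical example: a smooth patch versus a detailed (noisy) patch.
rng = np.random.default_rng(1)
smooth = np.full((50, 50), 120, dtype=np.uint8)
detailed = rng.integers(0, 256, size=(50, 50), dtype=np.uint8)
print(image_entropy(smooth), image_entropy(detailed))   # low vs. high entropy
```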
An image has differently distributed brightness values; the entropy of an image I can be computed as

H(I) = − ∑_{k=1}^{K} p(k) log₂ p(k),

where p(k) is the distribution of the brightness value indexed by k in image I and K is the total number of brightness levels in the image of interest (I). In image processing, the entropy measure generates a value which represents a new feature that can be exploited in image analysis, such as texture analysis. Low values of entropy indicate a smooth texture, while a texture with more details has higher entropy values [24]. Thus, entropy can be used as a new feature to measure the smoothness of the texture of images. As noted, the optic disc area in a fundus image contains more details, such as the nerves and vessels passing through it, which means its texture is not smooth; it is therefore expected that its entropy value will be higher than that of other regions in the fundus image.

Sliding Window Approach
This is an approach that has been used for locating an object in an image. The image is partitioned into subimages (regions), which are evaluated separately according to a quality function, so that the region with the maximum score becomes the candidate Region of Interest [25]. Here q(r) is the quality function and r* is the candidate region, with r* = argmax_{r∈R} q(r). The proposed algorithm in this paper uses the sliding window approach with two methods: non-overlap and overlap. In the non-overlap method, as shown in Figure 2, each image pixel must not belong to more than one region, so r_i ∩ r_j = ∅ for any two regions r_i and r_j in the image. Meanwhile, the overlap method, as shown in Figure 3, permits an image pixel to belong to more than one region.

Proposed Algorithm
As mentioned in the last section, there are several concepts that are exploited in this work, as shown in Algorithm 1. (1) The entropy value in the optic disc area is significant, so it is used to find this area; the algorithm uses a searching technique, and the greedy method was used in this work to find the maximum entropy value. (2) The sliding window technique is used in order to find a specific area within the whole image; the image area is partitioned into patches. This technique is executed with two methods: the non-overlap sliding window, in order to find the approximate optic disc area, and then, using this area, the overlap sliding window, which is important in order to find the location of the optic disc accurately. In addition, regarding the image channels (red, green, and blue), the best one is the green channel image because it gives maximum contrast [26]; therefore its texture is more distinguishable than that of the other channel images, as shown in Figure 4.

Algorithm 1: Proposed algorithm for localization of the optic disc.
Step (1): Extract the green channel image (I_g).
Step (2): Divide the image I_g into (3 × 3) non-overlapping patches P_{i,j}.
Step (3): For each patch P_{i,j}, compute its entropy value.
Step (4): Pick the patch P_{i,j} with the maximum value.
Step (5): Formulate the new region R_new from the selected patch and half of each surrounding patch.
Step (6): Divide R_new into (5 × 5) overlapping patches P_new_{i,j}.
Step (7): For each patch P_new_{i,j}, compute its entropy value.
Step (8): Pick the patch with the maximum value.
Step (9): Mark the center location of this patch with a small square in I to produce the new image.

Experimental Work
The proposed algorithm was executed computationally using the following tools: the MATLAB R2014a programming language and a computer with an Intel Core i5-3320M CPU @ 2.60 GHz. Tests were applied to the known data sets DRIVE, CHASEDB, DRIONS-DB, and DIARETDB1. In addition, a special data set was used [20], whose results will be given in detail in the following paragraphs.
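Before turning to the experimental results, a minimal Python sketch of Algorithm 1 is given below. The paper's own implementation was in MATLAB; the patch bookkeeping (grid sizes, half-patch expansion, overlap stride) follows our reading of the text, and the stand-in input image is hypothetical.

```python
import numpy as np

def patch_entropy(patch, levels=256):
    """Shannon entropy (bits) of a grey-level patch from its histogram."""
    hist, _ = np.histogram(patch, bins=levels, range=(0, levels))
    p = hist[hist > 0] / patch.size
    return -np.sum(p * np.log2(p))

def localize_optic_disc(rgb, coarse=3, fine=5):
    """Two-stage entropy-based OD localization (sketch of Algorithm 1).
    Returns the (row, col) centre of the highest-entropy fine patch."""
    Ig = rgb[:, :, 1].astype(np.uint8)                  # Step 1: green channel
    H, W = Ig.shape
    ph, pw = H // coarse, W // coarse                   # Step 2: 3x3 non-overlapping grid

    # Steps 3-4: pick the coarse patch with maximum entropy.
    best = max(((patch_entropy(Ig[i*ph:(i+1)*ph, j*pw:(j+1)*pw]), i, j)
                for i in range(coarse) for j in range(coarse)))
    _, bi, bj = best

    # Step 5: expand the winning patch by half of each neighbouring patch.
    r0, r1 = max(0, bi*ph - ph//2), min(H, (bi+1)*ph + ph//2)
    c0, c1 = max(0, bj*pw - pw//2), min(W, (bj+1)*pw + pw//2)
    region = Ig[r0:r1, c0:c1]

    # Steps 6-8: overlapping fine patches (stride = half a patch), keep the best.
    fh, fw = region.shape[0] // fine, region.shape[1] // fine
    best_val, best_centre = -1.0, (r0, c0)
    for top in range(0, region.shape[0] - fh + 1, max(1, fh // 2)):
        for left in range(0, region.shape[1] - fw + 1, max(1, fw // 2)):
            val = patch_entropy(region[top:top+fh, left:left+fw])
            if val > best_val:
                best_val = val
                best_centre = (r0 + top + fh // 2, c0 + left + fw // 2)
    return best_centre            # Step 9: mark this location on the image

# Hypothetical usage with a random stand-in image:
rng = np.random.default_rng(2)
fake_fundus = rng.integers(0, 256, size=(300, 300, 3), dtype=np.uint8)
print(localize_optic_disc(fake_fundus))
```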
The algorithm starts with extraction of the green channel image (I_g) from the original image (I), as shown in Figure 4. The image I_g is then divided into (3 × 3) patches P_{i,j} using the non-overlap sliding window, as shown in Figure 5(a). All patches have the same size, which approximates the size of the optic disc area. From the experimental work here, this size is suitable for capturing most of the optic disc area within one of the patches, so a significant patch that may contain the optic disc emerges. Then, for each patch P_{i,j}, the entropy value is computed, as shown in Figure 5(b). The patches are ranked according to their entropy values, and the one with the highest entropy value is selected as the area that may contain the optic disc, as shown in Figure 5(b). The darker box refers to the highest value, which corresponds to the patch addressed as P_{2,1}, as shown in Figure 6(a). A new region (R_new) is then generated from patch P_{2,1} and half of each patch surrounding P_{2,1}. The inclusion of these halves in the new region accounts for the possibility that some of the optic disc area may lie in one of the surrounding patches. R_new is then divided into overlapping patches as (4 × 4), (5 × 5), or (6 × 6). From the experimental work, this number of divisions is suitable, as the size of the new patches is approximately equal to the size of the optic disc area, as shown in Figure 6(b). The entropies of the new patches are measured, and the patch with the maximum value becomes the candidate optic disc area. From the results shown in Figure 6(b), the patch indexed P_new_{3,3} has the highest entropy value. Finally, a small black square is placed at the center location of this patch, as shown in Figure 7.

Results and Findings
For the data set [20], the computed entropy of the candidate patch that is closest to the optic disc area is more significant than that of the other patches in the image; this is true for all the images (35 images), as shown in Figure 8. Figure 8 compares the candidate patch entropy value with the mean entropy value of the rest of the patches in each image, each represented by a different line. It is obvious that the candidate patch entropy exceeds the mean patch entropy for all tested images, so these images are marked correctly. A sample of these marked images is shown in Figure 9. In addition, the experimental work was applied to the other data sets, and two measures were computed to compare the results: the accuracy of OD localization in each data set image and the execution time, as shown in Table 1. There is a disparity in the accuracies among the different data sets. (Figure 5: image number 29 in the data set with its non-overlap patch entropies; panel (b) shows the entropy values for the patches of I_g.)

Conclusion
Researchers have focused on localization of the optic disc using a computer, and many methods have been suggested with good results. In spite of this, the proposed algorithm in this paper gives significant results through simple computational steps that are executed in a short time; moreover, there is no need for any preprocessing enhancement steps. The proposed algorithm is also attractive because it entails a simple technique, so it can be combined with other algorithms as a supporting step in order to be more effective. Finally, there are other features that are significant in the OD area, so in future work combining them with the entropy feature in order to generate a more robust algorithm is suggested. An evolutionary algorithm could also enhance this algorithm to give more accurate localization.
Conflicts of Interest The author declares that there are no conflicts of interest.
The impact of Anastrazole and Letrozole on the metabolic profile in an experimental animal model. Anastrazole and Letrozole are used as endocrine therapy for breast cancer patients. Previous studies have suggested a possible association with metabolic and liver adverse effects, but their results are conflicting. Fifty-five 4-week-old female Wistar rats were allocated to 4 groups: 1) ovariectomy control (OC), 2) ovariectomy-Anastrazole (OA), 3) ovariectomy-Letrozole (OL), and 4) control. Serum glucose, cholesterol, triglycerides, HDL-c and LDL-c were measured at baseline, 2 and 4 months. At the end of the study, the animals' livers were dissected for pathology. At 4 months, total cholesterol differed between the OC and OL groups (p = 0.15) and between the control and OL groups (p = 0.12). LDL-C differed between the control and OC groups (p = 0.015) as well as between the control and OA groups (p = 0.015) and the control and OL groups (p = 0.002). OC group triglycerides differed from those of the OL group (p = 0.002) and the control group (p = 0.007); the OA group also significantly differed from the OL group (p = 0.50). Liver pathology analysis revealed differences among groups, predominantly mild steatosis and ballooning. Anastrazole and Letrozole seem to negatively influence the lipid profile in our experimental model. This information should be considered with caution by medical oncologists when addressing patients with altered lipid metabolism.

Aromatase is the main enzyme that catalyzes the conversion of androgens to estrogens in the adipose tissue of postmenopausal women 1. It is present in numerous tissues including the ovaries, placenta, skin, adipose tissue and breast cells 2. Aromatase inhibitors (AIs) exert anti-estrogenic activity by inhibiting this cytochrome P450 enzyme 3. Three generations of AIs have been developed 4. Third-generation AIs differ in their effects on the lipid profile of women with breast cancer 5. Although they seem to demonstrate improved tolerability 6, studies on Anastrazole and Letrozole indicate a possible negative impact on the liver function of postmenopausal women 7-9. However, data in this field seem to be conflicting. Specifically, based on the final results of the National Surgical Adjuvant Study-BC 04, Anastrazole did not influence serum lipids 10. On the other hand, in the ATAC trial, hypercholesterolemia was more prevalent among women treated with Anastrazole 11. Letrozole treatment was correlated with increased serum cholesterol levels in the BIG 1-98 trial 12. It has also been shown that low levels of estrogens affect liver metabolism in mice in numerous ways, such as lipid accumulation and hepatic steatosis 13,14. The aim of the present study is to investigate whether Anastrazole and Letrozole, when administered to ovariectomized female rats, influence the lipid profile and liver architecture.

Results. At enrollment, the mean body weight of the animals did not significantly differ among groups (Table 1). Similarly, baseline serum glucose, cholesterol, triglyceride and LDL-c levels were comparable (Table 2). Two months after the initiation of the experiment, total cholesterol levels significantly differed among groups (p = 0.003). At post-hoc analysis this resulted from differences detected between the ovariectomized control and Letrozole groups (p = 0.001) as well as between the Anastrazole and Letrozole groups (p = 0.03).
In line with these observations, serum HDL-C levels significantly differed between the ovariectomized control and Letrozole groups (p = 0.001) and between the Anastrazole and Letrozole groups (p = 0.025). Serum triglyceride concentrations were also affected differently: at post-hoc analysis, differences were evident only between the ovariectomized control and Anastrazole groups (p = 0.01) and between the Anastrazole and Letrozole groups (p = 0.3). At the end of the study, mean body weight remained comparable among the different groups (Table 1). Regarding serum lipid levels, the majority of the aforementioned differences persisted. Specifically, post-hoc analysis for total cholesterol revealed significant differences only between the ovariectomized control and Letrozole groups (p = 0.15) and between the control and Letrozole groups (p = 0.12). LDL-C was also affected, with statistical significance between the ovariectomized control group and the control group (p = 0.15), as well as between the Anastrazole and control groups (p = 0.15) and the Letrozole and control groups (p = 0.19). In the case of triglyceride levels, the ovariectomized control group differed from both the Letrozole group (p = 0.2) and the control group (p = 0.07). The Anastrazole group also significantly differed from the Letrozole group (p = 0.5) (Fig. 1).

Hematoxylin-eosin-stained liver samples obtained from animals of the Letrozole and Anastrazole groups showed signs of hepatic steatosis and ballooning (Fig. 2, Table 3). The grade of fatty liver disease was considered "mild" in eight of the eleven rats of the Anastrazole group. In nine of the twelve rats of the Letrozole group, the grade of steatosis was considered "mild"; in two animals of this group the grade of steatosis was characterized as "moderate", while only one animal of this group presented normal liver architecture. In both the control and ovariectomized control groups, "mild" steatosis was detected in one animal per group. No statistically significant difference was detected in the grade of steatosis between the Letrozole and Anastrazole groups (p = 0.331), although liver architecture was more disturbed in Letrozole-treated rats. Hepatocellular degeneration (ballooning) of grade 1 was confirmed in five of the twelve animals of the Letrozole group and in two of the eleven animals of the Anastrazole group. Ballooning of grade 2 was detected in two Letrozole-treated rats. Ballooning was not observed in any animal of the control or ovariectomized control groups. Neither portal nor lobular inflammation was detected in any of the animals studied.

Discussion. Third-generation AIs are largely used in postmenopausal women with a diagnosis of hormone receptor-positive breast cancer 15. Their safety and effectiveness are improved compared with the earlier-generation AIs 16. The menopausal transition and the postmenopausal period influence the cardiovascular system directly and indirectly. Several studies have demonstrated the crucial role of total cholesterol, LDL-C and triglycerides as important risk factors for cardiovascular events 17-19. Female sex hormones have been correlated with a lower incidence of cardiovascular events in young and middle-aged women compared with men, while adverse changes in serum total cholesterol and triglyceride levels between the pre- and postmenopausal periods have been reported 20.
Thus, estrogen influence positively serum cholesterol levels and AIs can interrupt this interplay thus increasing the odds of a developing cardiovascular disease 21 . The majority of the studies concerning the effects of Anastrazole on lipid profile have shown an increase in HDL-C levels and various effects on LDL-C and triglyceride levels 22,23 . In a large systematic review and meta-analysis performed by Amir et al., prolonged use of AIs was associated with significant changes of the lipid profile, including hypercholesterolemia 24 . In the same study, the use of AIs was also associated with a higher risk of cardiovascular disease. In the BIG 1-98 trial this risk was documented when studying the effects of administration of Letrozole as compared with tamoxifen 25 . Counterintuitively however, the MA-17 trial showed no changes in terms of lipid profile with Letrozole use 26 . Several studies have investigated the influence of Anastrazole on the lipid profile of women reporting conflicting results. The ATAC trial suggested that Anastrazole did not affect the lipid profile or the odds of developing cardiovascular disease 27 . The SABRE trial, also reported no differences following Anastrazole administration on LDL-C, HDL-C, or triglyceride levels for a 12-months treatment period 28 . Lin et al observed that treatment with Anastrazole seems to result in less lipid accumulation in hepatic tissue as compared to tamoxifen and concluded that it may be preferable for patients with potential hepatic dysfunction 29 . Furthermore, Sawada et al suggested that Anastrazole may also exert a beneficial effect on the lipid profile of postmenopausal women 30 . Conversely however, the results of the ITA trial, pointed towards lipid metabolism disorders 31 . Contrary to Anastrazole, the effect of Letrozole on lipid profile and hepatic architecture has been seldom investigated. In a small study which recruited 20 postmenopausal women, Letrozole was associated with a significant increase in total and LDL cholesterol levels 16 weeks after the initial enrollment of the patients 32 . However, these results were not confirmed by the NCIC CTG MA.17 study 33,34 . It has been shown that the occurrence of metabolic syndrome is increased among women after menopause 35 . Modifications on lipid metabolism or inflammatory mediated processes are involved in the action of estrogen deficiency on hepatic function and histology 36,37 . The increase in accumulation of fat in the hepatic tissue recorded in our study in the animal groups treated with Anastrazole and Letrozole may be attributed to the inhibition of estrogen production caused by these agents and the subsequent disturbed lipid accumulation. The effect of Anastrazole and Letrozole on liver function have not yet been clarified. In an experimental study, which was conducted in order to determine the effects of Letrozole on hepatic function in female rats, hepatotoxicity was observed, while minimal histological findings were detected 8 . In a recent study by Lin Y et al., Anastrazole was demonstrated to have minimal toxicity in terms of liver function compared to that of tamoxifen 29 . According to the conclusions of a case report, a potential autoimmune mechanism of hepatotoxicity has been also documented in a patient receiving Anastrazole 7 . 
A limitation of the study is that the control rats, which were not subjected to ovariectomy, were introduced only at the end of the experimental period; this did not allow investigation of potential differences in serum lipid levels between estrogen-deficient and estrogen-replete rats throughout the entire protocol. The results of our study suggest that Letrozole significantly alters the lipid profile of ovariectomized rats, putting into question the tolerability reported by previous clinical studies. Anastrazole, on the other hand, seems to exert a mild effect on LDL-c levels that is not reflected in the total cholesterol and triglyceride levels. Mild histological liver alterations also seem to occur, and these alterations should be taken into account in future clinical studies. Once again, Letrozole resulted in more cases of mild and moderate liver pathology, although this difference did not reach statistical significance.

Implications for clinical practice and future research. Letrozole's mode of action on the lipid profile of patients should be seriously evaluated by medical oncologists when addressing patients with altered lipid metabolism until further evidence becomes available. Anastrazole, on the other hand, seems to exhibit a milder effect. Future trials should thoroughly investigate the potential metabolic and liver adverse effects of Anastrazole and Letrozole and consistently observe the enrolled patients over an adequate period of time (preferably until the end of the treatment). In conclusion, according to the findings of our study, Letrozole administration over a 2- and 4-month treatment period negatively affects serum lipid metabolism in ovariectomized female rats and disturbs liver histopathology. Anastrazole, on the other hand, seems to result in milder changes and might be a safer alternative for ovariectomized patients. Future clinical trials are needed to corroborate our findings because current clinical evidence in the field is scarce and not sufficient to support the tolerability of these drugs.

Materials and Methods. Animals. Fifty-five 4-week-old female Wistar rats (Hellenic Pasteur Institute, Department of Animal Models for Biomedical Research, Greece) were maintained in climate-controlled chambers (temperature 20 ± 1 °C, humidity 55 ± 5%) under controlled lighting (12 hours of light per day) for 30 days in order to adapt to their new environment. ELVIZ 510 food pellets were provided ad libitum in order to ensure a full-nutrient diet. The protocol was approved by the Ethics Committee of the Athens Medical School and by the Veterinary Directorate of the Attica Region in agreement with Directive 2010/63/EU, and the methods were carried out in accordance with the approved guidelines. Food was withheld from the animals the night before the operation.

Surgical procedures. Forty-five female Wistar rats underwent surgical ovariectomy. The surgical procedures were performed between 8:00 am and 9:00 am on diestrous day 1 (D-1). The animals were anesthetized with a combination of ketamine (75 mg/kg) and xylazine (10 mg/kg) administered intraperitoneally. A midline dorsal skin incision was then performed, the ovarian vessels were clamped, and both ovaries were excised. Muscles and skin were sutured to close the incision.

Animal treatment. After the ovariectomy, the operated animals were randomized into three groups. The first group did not receive any drug regimen (ovariectomized control group), the second group received Anastrazole, and the third group received Letrozole.
Administration of these regimens was performed according to previous reports 38. Specifically, Anastrazole was administered p.o. in drinking water, after being dissolved in DMSO solution, at a concentration tested to result in a daily uptake of approximately 0.1 mg/kg of body weight, and Letrozole was similarly administered at a concentration tested to result in a daily uptake of approximately 2 mg/kg of body weight. Both agents were administered for a 4-month period. Blood samples were collected using capillary tubes from the medial retro-orbital venous plexus under light ether anesthesia at the beginning of the experiment (T1), at 2 months (T2) and at the end of the study at 4 months (T3), at 9:00 AM after a 12-hour fasting period. Four months after the initiation of the study, the animals were euthanized. At this point, ten control animals of similar age were included in the study as a control group without ovariectomy, in order to compare the three groups against normal values.

Enzyme-linked immunosorbent assay (ELISA). Blood specimens were collected in Vacutainer tubes (BD Diagnostics, NJ, USA). The serum was separated after centrifugation of blood at 3000 rpm for 10 minutes. The specimens were stored at -30 °C until the assay, which was performed within two months. Serum concentrations of total cholesterol and triglycerides were determined using the enzymatic PAP commercial kit ("biosis"-Biotechnological Applications, Athens, GR), and HDL-cholesterol was determined with a cholesterol enzymatic photometric method. LDL-cholesterol was calculated with the Friedewald formula, "LDL-cholesterol = Total Cholesterol - (HDL-cholesterol + Triglycerides/5)".

Pathology. At the end of the 16-week period, animals were euthanized under ether anaesthesia. Livers were dissected immediately for further histopathological analysis as previously described 39. Liver sections were stained with hematoxylin-eosin and examined blindly by two independent pathologists under light microscopy. The histologic evaluation was conducted in accordance with the guidelines of the Pathology Committee of the Non-Alcoholic Steatohepatitis Clinical Research Network 40. The histological features were grouped into 4 broad categories: steatosis, ballooning, portal inflammation and lobular activity. A score from 0 (absence) to 3 (severe) was assigned to each parameter.

Statistical analysis. The normality of the distributions was assessed with the Kolmogorov-Smirnov test and graphical methods. All data are expressed as median [range]. We used the Kruskal-Wallis non-parametric test for multiple group comparisons and Dunn's test of multiple comparisons for post-hoc testing. Comparisons between multiple time points were performed using Friedman's test with Wilcoxon's signed-ranks test for post-hoc comparisons. The Chi-square and Fisher's exact tests were used for the analysis of dichotomous variables. Differences were considered statistically significant if the null hypothesis could be rejected with > 95% confidence (p < 0.05).
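To make the lipid calculation and the nonparametric comparisons described above concrete, here is a minimal Python sketch. The group values are made up for illustration only, and because SciPy does not ship Dunn's test, a Bonferroni-corrected Mann-Whitney U test is shown as a stand-in for the post-hoc step used in the paper.

```python
import numpy as np
from scipy import stats

def ldl_friedewald(total_chol, hdl, triglycerides):
    """LDL-C by the formula used in the study (Friedewald):
    LDL = total cholesterol - (HDL + triglycerides / 5), all in mg/dL."""
    return total_chol - (hdl + triglycerides / 5.0)

# Illustrative, made-up total-cholesterol values (mg/dL) for the four groups
control     = [72, 80, 76, 69, 84]
ovx_control = [88, 95, 90, 97, 86]        # OC
anastrozole = [92, 99, 104, 96, 101]      # OA
letrozole   = [105, 118, 110, 121, 108]   # OL

# Kruskal-Wallis test across the four groups, as in the statistical analysis section
h_stat, p_value = stats.kruskal(control, ovx_control, anastrozole, letrozole)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons (stand-in for Dunn's test)
pairs = [("OC vs OL", ovx_control, letrozole),
         ("OA vs OL", anastrozole, letrozole)]
for name, a, b in pairs:
    _, p_raw = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(name, "Bonferroni-adjusted p =", round(min(1.0, p_raw * len(pairs)), 4))
```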
Vertically twinned aluminum nano-pillars under tensile loading: a molecular dynamics study Nano twinned FCC materials show superior properties comparatively to their single crystal counterparts. The properties of nano-twinned materials are possessed by the interactions of dislocations with the coherent twin boundaries (TBs). In this paper, we describe the fabrication of arrays of vertically aligned aluminum nano-pillars that contain different number of TBs (different twin boundary spacing) and no grain boundaries or other microstructural features. We have investigated the influence of twin boundary (TB) spacing on the mechanical responses of individual nano-pillars under tensile loading. The investigation fabricated with molecular dynamics (MD) simulation reveals that, the yield strength is dependent on number of vertical twins. Yield strength increases with increasing number of twins upto a critical value and then starts to decrease with further increment of twin numbers. An increase of ductility was also found as a result of immobilized dislocation. The deformation process was nucleated by spontaneous dislocation buds and eventually turned into mature partial dislocations. The simulation was done until fracture to give an insight about dislocation behavior. Introduction Nano-pillar is a type of nano-crystalline material which is extensively researched by material scientists and engineers. These types of materials have established a fundamental base for the next generation devices-as the nominal sizes of modern electronic devices continue to shrink, additionally the demand of strength and ductility continues to increase. State of the art research has been focused on the development and application of nextgeneration interconnects resulting in increasingly small feature sizes with high performance and greater reliability within a cost-effective manner. Meanwhile conventional lithography may have to deal with a barrier when the critical dimension approaches sub −20 nm [1]. To minimize the upper mentioned barriers, the requirement for the enhancement of mechanical properties of nano-materials for new-generation devices has raised appeal towards the twinned nanopillars. The TB influences multiple mechanical properties, crack resistance improvement [2], higher ductility [3] and enhancement in strain rate sensitivity [4]. When the inexhaustible coherent twin boundaries (CTBs) are introduced into the ultrafine-grained FCC cubic metals with low stacking fault energy, considerable plastic strain is achieved along with ultrahigh tensile strength [5][6][7]. A strengthening effect developed by the CTBs plays an important role as in their ultrafine-grained counterparts, but greater strength can be achieved mainly in twinned nanowires [8,9]. By inducing orthogonally oriented CTBs in Au nanowires, the ideal theoretical strength was achieved upto 3.12 GPa [9]. Roos et al investigated the role of longitudinal TB on the deformation mechanism of Au nanowires [10] using in situ transmission electron microscopy. They observed the deposition of partial dislocation in twinned nanowires while twins and stacking faults were observed in single crystal nanowires. The transition of deformation mechanism from twinning in single crystal nanowires to slip in twinned nanowires has been accredited to the pile-up of leading partials against the TB and nucleation of trailing partials [10]. This similar transition in deformation mechanism is observed in this work. 
Aluminum nanoparticles are used in many fields including dispersion strengthening, electro deposition [11], material surface coating [12], nano-composites, biomaterials etc Through experiments along with mathematical analytical methods, researchers have demonstrated that nanowires are quite different than bulk materials regarding the structure and properties because of the effects of large free surfaces in nanomaterials. Materials containing high-density CTBs have been investigated and the findings are satisfying -high strength and hardness without any compromise of ductility, fracture toughness, thermal stability and electrical conductivity. Lu et al investigated regarding this matter. According to their findings, the source of ultrahigh strength is the constructive blockage of dislocation motion by enormous CTBs which possesses an extremely low electrical resistivity [13]. Incorporating twin is not always the answer to increase strength since the dependency also relies on diameter and length of the nanomaterials which was investigated by Deng et al. They used Au nanopillars and the findings described that the strain-rate sensitivity above 100 K temperature is significantly smaller in twinned NWs of Au with perfectly circular cross-section than in similar NWs without twins [4]. Therefore, to predict the strengthening and other mechanical properties in twinned nanowires, the fundamental understanding on how partial dislocations nucleate from defects in lattice, twinned plane or at free surface is innumerably important. Sanzose et al studied the effect of nano-enhancement and their explanation implies that the trailing partial dislocation initiated at the beginning stages of deformation demolishes the stable stacking fault formed by the leading partial and consequently empowers both partials to glide as a pair freely away from the nucleation sites [14]. The authors of this paper tried to organize different aspects in a successive way-from describing the simulation methodology through 2 parts: by describing the interatomic potential used and the MD model developed. The result section contains 3 major sub-sections: the 'stress-strain behavior' of the model under load, the variation in elastic modulus and maximum strength, the atomic level observation of fracture process. Considering above mentioned different parameters, our investigation was inclined to search for a simple question-how Al nano-pillars containing vertical TBs behave under tensile load. Furthermore, we tried to shed some light on fracture mechanism while the number of twins held as a satisfactory variable. Simulation methodology 2.1. Interatomic potential MD simulations have been accomplished in LAMMPS package [15] employing an embedded atom method (EAM). LAMMPS uses techniques of spatial decomposition of simulation domain and a message-passing technique [15]. The potential for FCC Al is taken from Mendelev and Kramer [16]. The EAM potential's validity may be judged by its accuracy with actual experimental data. This potential reproduces melting point and liquid structure accurately. Apart from these properties, this potential is used to find out the crystalline properties which is a major cornerstone of this study. This potential is developed for improving the harmony with the firstprinciples calculations and new experimental measurements were performed to have reliable results using x-ray diffraction method. 
Furthermore, liquid-phase diffusivities were calculated to find its correlation with liquid structure which gave satisfying result. This potential was successfully employed to investigate the tensile behavior of Al nanopillars with vertical twins. The post processing operations were undertaken by utilizing OVITO [17]. Molecular dynamics model The dimension of the four models under experiment is constant which is 8.3 nm×8.3 nm×54.1 nm having a square cross-section. Size is one of the major parameters to accurately simulate dislocation as it interferes on the fracture mechanism of nano-materials. The only variable is the number of vertical TB and that is 0, 1, 2, 3, 4, 5 and 8 respectively. The simulation box contains 243000 atoms. The tensile load is applied on the [1-10] orientation which is considered z axis. The x and y axes are [112], [111] respectively. The twin is created along [111] direction which is visualized in figure 1. For simulating infinitely long nanowire, periodic boundary condition is maintained along the z axis where the load is applied and other two directions are kept independent. From finite temperature Maxwell distribution, the initial velocities were chosen randomly. After the construction of nanowires, minimization of energy was performed by conjugate gradient method to obtain stability in the nanostructures by setting stopping tolerances for force and energy 10 −100 and 10 −100 respectively. Thermal equilibration at 10 K temperature in canonical ensemble (NVT) was imposed after the energy minimization of the nano-pillars. The model was equilibrated to constant energy for 50 ps. The temperature recalibration was done after every 1 fs. The nano-pillars were equilibrated to constant pressure for 100 ps with a Nose-Hoover isobaric-isothermal (NPT) ensemble. Integration of equation of motion was done by Verlet algorithm with a time-step of 1 fs. After equilibration, a constant strain rate of 3×10 8 s −1 with respect to the initial box size was applied in the direction of z axis. The strain rate was much higher than average experimental strain rate, owing to the MD simulation timescale limitations. While straining, the pressure development on the other two orthogonal directions were maintained zero. Dislocation Extraction Algorithm (DXA) was employed to analyze the dislocations, Burger vector of each dislocation type, junctions and their successive line representation [18]. Stress-strain behavior Engineering stress-strain (σ-ε) curves of nanotwinned Al nano-pillars with different number of TBs are shown in figure 2 under tensile loading at 10 K temperature. In order to construct the stress-strain curve, the stress is calculated using the virial theorem which is commonly used in MD simulations [19]. All the curves in figure 2 show an abrupt drop of stress after reaching the peak stress (it is defined as yield stress) without considering the variation in TB spacing. Peak stress is the onset of plastic deformation and divides the curve into elastic and plastic deformation region. In the elastic deformation region stress-strain relationship is linear due to the fact that the atoms of the crystal structure are just flexing. But after the yield point, it starts to deform plastically as the atoms in the crystal structure starts to take a new position relative to each other due to the mechanism of dislocation activation. Variation in elastic modulus and yield strength The modulus of elasticity (i.e. 
Young's modulus) is calculated using data points from the elastic region, where the stress-strain curve is linear. The Young's modulus of the FCC Al nanopillar without any vertical twin is 74.24 GPa. W. C. Oliver experimentally found the modulus of elasticity of bulk Al to be approximately 68 GPa, whereas the theoretical value is 70.4 GPa [20]; both are lower than the current simulation result. The prime reason is the smaller number of surface defects at the nano scale compared to the bulk material; the simulated modulus of elasticity is therefore reasonably consistent with the experimental and theoretical values. The values obtained in this study for the modulus of elasticity and yield strength are given in Table 1. From the table, it is evident that there is no significant change in Young's modulus for the Al nanopillar models with TBs. It can therefore be concluded that there is no relation between TB spacing and modulus of elasticity, as CTBs, being highly coherent interfaces, alter neither the modulus nor the coherency stress in the neighboring lattice. The strain at which the sudden stress drop occurs has been taken as the strain to yielding. From the data it can be interpreted that incorporating TBs in Al nanopillars increases the strength, in line with the experiments of Sun et al [21]. From Figure 3 it is clear that TB inclusion increases the yield strength of the Al nanopillar only up to a certain value, as an infinitely strong material cannot be produced. The twin spacing for which the twinned Al nanopillar has the highest yield strength is called the critical twin spacing; beyond it, the yield stress starts to decrease. A critical TB spacing of 1.66 nm is observed, and the yield strength σ_y (in GPa) varies with the twin spacing d (in nm) accordingly. The mechanism underlying this strengthening effect of Al nanopillars with vertical TBs is that TBs act as barriers to dislocation nucleation and propagation. The interaction between TBs and dislocations can be described using the configurational force resulting from the mismatch of material properties at the TB [24]. The repulsive force exerted by the TB on dislocations moving towards it [25] helps to increase the yield strength of the nanotwinned material by hindering dislocation glide. As a result, dislocations are piled up and stored inside the twin spacing until they can overcome the repulsive force and glide through the TB. Variations in TB spacing change both the room available for dislocation storage and the magnitude of the repulsive force. Decreasing the twin spacing increases the repulsive force and the number of dislocation nucleation sites, which results in a dramatic increase in material strength and hardening at the nano scale [26]. As the twin spacing is reduced below a critical value, i.e., into the inverse Hall-Petch regime, the yield strength decreases because there is not enough space for dislocation pile-up, resulting in the diffusion of the TBs [27]. In contrast, in the twin-free nanopillar, steps and jogs created by dislocation glide are the preferential sites for the necking phenomenon [3]. From the above investigation, it may be concluded that TBs can promote both high strength and high ductility; therefore, controlled deformation of the twinned structure has the potential to produce a stronger nanomaterial with improved ductility.

Atomic-level observations of fracture process. The atomic-level observations are displayed in Figure 4, which visualizes the fracture behavior of the perfect Al nanopillars.
It is understood that the nano-pillar yields by the nucleation of four leading partials from one of the corners of the pillar, followed by another fracture plane arising from a contemporary nucleation event in Figure 4(b). After yielding has progressed sufficiently, the stacking faults transform into twins through the successive nucleation of Shockley partials on adjacent planes. From Figure 4(c), it can be seen that, alongside the previously developed twin, another plane of faults arises, creating a triplet of planes that are completely parallel to each other. The deformation process is dominated by the nucleation and full development of twins on different planes. The atomic-level observations displayed in Figures 5-7 visualize the fracture behavior of the Al nano-pillars containing 1, 2 and 4 TBs, respectively. The twinned nano-pillars yield by the nucleation of leading partials from the corners of the planes. In Figure 5(a), two nucleation sites develop in the Al nano-pillar model with a single twin, followed by several other nucleation events on further straining. In Figure 6(a), the nucleation of leading partials develops in the bottom corner, which causes yielding in the nanopillar model with two twins. In Figure 7(a), the nucleation of leading partials has several dislocation multiplication sites, which results in yielding of the nanopillar with 4 twins, and the dislocations then mature. It is observed that the fracture mechanism for nanopillars with more than 4 vertical TBs is completely different from that of the nanopillar models with fewer than 5 vertical TBs. In Figure 8, partial dislocations form at the surface and then propagate to the TBs; many interaction points between partials and TBs are observed in Figure 8(b). At some of these interaction points, partial dislocation migration occurred instead of deformation twinning. This phenomenon causes the decrease in flow stress beyond the critical twin spacing.

Conclusion. To unveil the effect of TBs on Al nanopillars, MD simulations have been performed. Although similar studies have been performed for Fe, Cu, Ni and some other alloys, the increasing application of Al in nanotechnology and materials science makes the current study particularly relevant. The main conclusions can be summarized as follows:
• The deformation mechanism is mainly governed by twinning in the twin-free single crystal Al nanopillar.
• In the twinned nanopillars of Al, the deformation process mainly occurs through leading partial dislocations.
• As the twin planes hinder the growth of dislocations, the number of dislocations plays a crucial role in the yield strength, elastic modulus and flow stress.
Although this study fulfills its objectives, there are other aspects that could extend the corresponding research, for instance:
• The mechanical behavior of twinned Al nanopillars subjected to various temperatures.
• The influence of twin orientation on the deformation mechanism.
• The mechanical behavior and deformation mechanism of the nanopillars subjected to compression.
A detailed study on twinned Al nanopillars is beyond the framework of this paper.
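As a complement to the stress-strain analysis described earlier (Young's modulus taken from a linear fit of the elastic region, yield stress taken at the peak before the abrupt drop), the following is a minimal Python sketch using synthetic data. The 2% elastic-limit cutoff and the synthetic curve are assumptions for illustration only, not values from the paper.

```python
import numpy as np

def youngs_modulus(strain, stress, elastic_limit=0.02):
    """Linear fit of the initial (elastic) part of an engineering stress-strain
    curve; the slope is Young's modulus. The 2% cutoff is an assumed elastic
    limit for the fit, not a value taken from the paper."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    mask = strain <= elastic_limit
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope  # same units as stress (GPa here)

def yield_point(strain, stress):
    """Take the peak stress as the yield stress, since the curves show an
    abrupt drop immediately after the peak."""
    i = int(np.argmax(stress))
    return float(strain[i]), float(stress[i])

# Illustrative synthetic curve: linear loading followed by a sharp stress drop
eps = np.linspace(0.0, 0.09, 91)
sigma = np.where(eps <= 0.067, 74.0 * eps,
                 74.0 * 0.067 * np.exp(-40.0 * (eps - 0.067)))
print("E     =", round(youngs_modulus(eps, sigma), 1), "GPa")
print("yield =", yield_point(eps, sigma))
```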
Geranylgeranoic acid, a bioactive and endogenous fatty acid in mammals: a review Geranylgeranoic acid (GGA) was first reported in 1983 as one of the mevalonic acid metabolites, but its biological significance was not studied for a long time. Our research on the antitumor effects of retinoids led us to GGA, one of the acyclic retinoids that induce cell death in human hepatoma-derived cell lines. We were able to demonstrate the presence of endogenous GGA in various tissues of male rats, including the liver, testis, and cerebrum, by LC-MS/MS. Furthermore, the biosynthesis of GGA from mevalonic acid in mammals including humans was confirmed by isotopomer spectral analysis using 13C-labeled mevalonolactone and cultured hepatoma cells, and the involvement of hepatic monoamine oxidase B in the biosynthesis of GGA was also demonstrated. The biological activity of GGA was analyzed from the retinoid (differentiation induction) and nonretinoid (cell death induction) aspects, and in particular, the nonretinoid mechanism by which GGA induces cell death in hepatoma cells was found to involve pyroptosis via ER stress responses initiated by TLR4 signaling. In addition to these effects of GGA, we also describe the in vivo effects of GGA on reproduction. In this review, based mainly on our published papers, we have shown that hepatic monoamine oxidase B is involved in the biosynthesis of GGA and that GGA induces cell death in human hepatoma-derived cell lines by noncanonical pyroptosis, one of the mechanisms of sterile inflammatory cell death. oxidase B in the biosynthesis of GGA was also demonstrated. The biological activity of GGA was analyzed from the retinoid (differentiation induction) and nonretinoid (cell death induction) aspects, and in particular, the nonretinoid mechanism by which GGA induces cell death in hepatoma cells was found to involve pyroptosis via ER stress responses initiated by TLR4 signaling. In addition to these effects of GGA, we also describe the in vivo effects of GGA on reproduction. In this review, based mainly on our published papers, we have shown that hepatic monoamine oxidase B is involved in the biosynthesis of GGA and that GGA induces cell death in human hepatoma-derived cell lines by noncanonical pyroptosis, one of the mechanisms of sterile inflammatory cell death. Supplementary key words caspase 4 • ER stress-induced unfolded protein response • isoprenoids • pyroptosis • TLR4 • UPR ER 4,5-didehydrogeranylgeranoic acid (4,5-didehy-droGGA [XII] or peretinoin) significantly suppressed the incidence of second primary hepatoma up to 5 years after 600 mg daily for 1 year in patients after radical hepatoma surgery in a randomized, placebocontrolled clinical trial (1,2). Peretinoin, as analogous to its name, is a synthetic chemical developed as one of the retinoids, but we need to go back in history to properly understand its mother compound, geranylgeranoic acid (GGA) [XI]. HILDEBRANDT ACID In 1900, Hildebrandt, who was studying the pharmacological effects of terpenes, reported the appearance of dicarboxylic acid derivatives of terpenes in the urine of rabbits treated with citral [II] (3). More than 30 years later, Kühn discovered that the substance reported by Hildebrandt was a dicarboxylic acid with a carboxy group at the ω-end of geranoic acid [III] (GA; also written "geranic acid", but here we use GA as recommended by Kühn) and named the compound "Hildebrandt acid" (4). 
Hildebrandt acid [VIII] is a compound identified as a metabolite of citral [II], a foreign substance in the body. The discovery of Hildebrandt acid [VIII] in rabbit urine indicates the presence of an enzyme system in animal cells that oxidizes acyclic monoterpenoid alcohols to carboxylic acids. It has recently been shown that geraniol [I], which is often added to perfumes, deodorants, and cosmetics as a rose fragrance ingredient, can be administered to detect GA [III] and Hildebrandt acid [VIII] in human urine and screen for environmental substances using LC-MS/MS technique (5). Figure 1 shows acyclic monoterpenoids previously detected in mammalian urine and their possible metabolic relationships (6). Hildebrandt metabolite, these two organic acids may be physiological metabolites (7). Popjak's group has shown that dimethylallyl diphosphate (DMAPP: C 5 ), geranyl diphosphate (C 10 ), and farnesyl diphosphate (FPP: C 15 ), diphosphate intermediates from mevalonic acid (MVA) to squalene, which are important intermediates in cholesterol synthesis, are all enzymatically dephosphorylated to dimethylallyl alcohol, geraniol [I], and farnesol, respectively. Furthermore, these isoprenols are oxidized by alcohol and aldehyde dehydrogenases to dimethylacrylate, GA [III], and farnesoic acid (FA) in cell-free experiments and mice (8,9). The results shown in the in vivo experimental system are particularly noteworthy. In particular, of the radioactivity taken up by the liver from the administered radioactive MVA, 11% was in the steroid fraction, 46% in squalene, 4% in allyldiphosphate, 16% in free prenols, and 23% in prenoic acids. At the time, the question of interest was whether the oxidized metabolites of these intermediates of squalene synthesis were intermediates of squalene synthesis or of a separate pathway. As is clear from subsequent developments, there was a report (9) that high concentrations of FA inhibited cholesterol synthesis by inhibiting mevalonate kinase, but not much attention was paid to other isoprenoic acids. Some of the acyclic monoterpenoid acids described above are precisely the excretions found in human urine and are metabolites of foreign substances such as additives found in cosmetics and foods, some of which may be catabolic degradation products of internal metabolites. So, can we say that all of these acyclic isoprenoid oxidation products are degradation products? Or are they all simply excretions? DISCOVERY OF GGA IN ANIMALS GGA [XI] is one of the acyclic diterpenoid acids, which we reported to be present in several medicinal herbs (10). At the time, we thought we were the first to report natural GGA [XI], but reports of GGA [XI] as a metabolite in living organisms can be traced back another quarter century (11). Geranylgeranyl diphosphate In mammals, except for the isoprenylation of proteins, two well-known metabolic pathways involving FPP are the biosynthesis of steroids (steroid pathway) and the biosynthesis of isoprenoids, a linear chain of 4-21 isoprene units (nonsteroidal pathway) (Fig. 2). The former is a squalene-mediated pathway in which two FPPs are condensed tail-to-tail, and the latter is a Fig. 1. Urinary monoterpenoid metabolites found in mammals to which citral or geraniol had been given orally. The figure shows the metabolites detected in urine and their metabolic relationships after oral administration of geraniol and citral (circled by squares) as exogenous chemicals to mammals such as rabbits and rats. 
Citral is a collective term that covers two geometric isomers; the E-isomer geranial (trans-citral) and the Z-isomer neral (cis-citral). The chemical structure of trans-citral is shown here. Each enzymatic reaction by a single enzyme is indicated by a solid arrow, and each enzymatic reaction involving multiple enzymes is indicated by a dashed arrow. biosynthetic pathway for acyclic isoprenoids of various lengths in which isoprenyl diphosphate (IPP: C 5 ) is sequentially condensed onto FPP. The nonsteroidal pathway can be further divided into two pathways: the pathway in which dolichols are synthesized by linking isoprene units in the cis configuration and the pathway in which ubiquinone side chains are synthesized by linking isoprene units in the trans configuration. The former dolichol is formed by cis-prenyltransferase to form a polyprenol with 15-17 isoprene units of IPP linked to FPP in a cis configuration, and the polyprenol undergoes α-oxidation by steroid 5 alpha-reductase 3 (SRD5A3) and converted to dolichols (12). The latter CoQ 10 isoprenyl side chain is synthesized by decaprenyl diphosphate synthase (PDSS1/PDSS2 heterotetramer enzyme) as an isoprenoid with 10 isoprene units using IPP and DMAPP as substrates (13). GGPP, which has only one additional isoprene unit linked in trans configuration to FPP, can exist as an intermediate in the biosynthesis of the isoprenyl side chain of CoQ 10 , but GGPP is known to be specifically biosynthesized independently of CoQ 10 biosynthesis. In 1981, Sagami et al. discovered and purified GGPP synthase from pig liver (14,15). GGPP synthase uses FPP and IPP as substrates to produce E,E,E-GGPP (all-trans GGPP: C 20 ), but the biological significance of this GGPP was unknown until geranylgeranylated proteins were discovered (16,17). GGPP, along with FPP, is now well known as an isoprenoid donor for protein isoprenylation (18). However, as noted by Dallner's group, GGPP synthase activity in rat brains is almost 100 times greater than protein geranylgeranyltransferase activity (19). GGPP may have a different metabolic pathway than proteingeranylgeranylation or may be required for other cellular processes that have not yet been elucidated. C 20 -prenoic acid Interestingly, several compounds that appear to be metabolites of GGPP were also reported around the time of the discovery of GGPP synthase. In 1983, radiolabeled MVA ([2-3 H]-MVA) was incubated with bovine retinal homogenate and then analyzed for radioactivity (11). The major part of the labeling was incorporated into the saponifiable fraction (20%-40%) rather than into the steroidal fraction (3%-22%), and the majority (90%) of the saponifiable fraction was not C 15 -prenoic acid FA but C 20 -prenoic acid GGA and component IV (the authors at the time considered the possibility of a cis-isomer of GGA, but component IV could also be 2,3-dihydroGGA [XIII]). They made similar observations with a tissue culture system of the bovine retina (20). Assuming that C 20 -prenoic acid is an abortive metabolite from an intermediate in the biosynthesis of the CoQ 10 side chain, it is curious that the formation of C 25 -, C 30 -, C 35 -, C 40 -, and C 50 -prenoic acid has not been reported. Radiolabeled 2,3-dihydroGGA [XIII] and GGA [XI] were also detected in the saponified fraction of triglycerides when [2-14 C]MVA was incubated in an in vivo culture system of the invertebrate Schistosoma mansoni (21). 
Whole genome analysis of this parasite reported that it lacks the gene for squalene synthase, an essential enzyme in the steroid synthesis system (22). Other than these few early reports, the formation of C 20 -prenoic acids in animal tissues has received relatively little attention. Acyclic retinoids Since Saffiotti et al. reported that the induction of tracheobronchial squamous metaplasia by intratracheal injection of the carcinogen benzo[a]pyrene was inhibited by intragastric administration of vitamin A (retinyl palmitate) (23), cancer chemoprevention and differentiation induction therapy with retinoids became widely advocated (24,25). Retinoids exert their effects by regulating gene expression by acting as ligands for the well-known nuclear transcription factors, retinoid receptors (retinoic acid receptor [RAR] and retinoic acid receptor [RXR]). Using a plasmid assay with a reporter gene inserted downstream of the retinoid response sequences, GGA [XI] and its 4,5-didehydro derivative [XII] were found to exhibit ligand activity similar to natural retinoids such as all-trans retinoic acid (ATRA) [X], as shown in Fig. 3, and 9-cis retinoic acid (RA) (26). Thus, compounds that do not have a cyclic structure but can be ligands for retinoid receptors or transcriptional regulators of target genes were named "acyclic retinoids" by Muto and Moriwaki (27). Thus, our research on the antitumor effects of retinoids led us to develop a special interest in acyclic retinoids, which do not exhibit the side effects that retinoids have (1,2). Phytanic acid is a well-known oxidative metabolite of phytol, a microbial metabolite of chlorophyll, and has also been reported to be a ligand for RXR. Thus, phytanic acid is also a member of the acyclic retinoid family. However, there are few reports of in vivo studies on the cancer-preventive effects of phytanic acid (28). Antitumor actions of acyclic retinoids Peretinoin (4,5-didehydroGGA [XII]), an acyclic retinoid, was observed to inhibit cell proliferation, upregulate albumin (ALB) gene expression, downregulate α-fetoprotein (AFP) gene expression, and induce differentiation into hepatocytes when added to a culture system of human hepatoma-derived cell lines (29). Reporter assay of synthetic GGA derivatives using RARE/CAT (slightly modified from Ref. (26)). The ligand activity to the cellular retinoic acid binding protein (CRABP) was based on the radioactivity of 3 H-all-trans retinoic acid ( 3 H-ATRA) bound to CRABP and replaced by a 10-fold molar excess of ATRA. That is, the radioactivity substituted by each synthetic GGA derivative in 10-fold molar excess is shown relative to 100 of ATRA. In addition, plasmid (RARE-CAT), a recombinant prokaryotic chloramphenicol acetyltransferase (CAT) gene downstream of retinoic acid response element-β (RARE), was introduced into the human hepatoma-derived cell line HuH-7. After treatment with various retinoids (1 μM each), CAT enzyme activity was measured and expressed as RARE-CAT activity. The RARE-CAT activity of cells treated with each compound is shown relative to the RARE-CAT activity of cells treated with ethanol (vehicle alone). GGA, geranylgeranoic acid. Indeed, 9-cis RA showed the same effect as 4,5-didehydroGGA [XII], and ATRA [X] similarly decreased AFP gene expression, but unlike 4,5-didehydroGGA [XII] and 9-cis RA, ATRA [X] also decreased ALB gene expression. Thus, it was suggested that the effect of inducing differentiation into hepatocytes is not necessarily common to retinoids. 
An acyclic retinoid of 4,5-didehydroGGA [XII] has been reported to induce the differentiation of tumor cells such as leukemia cells and neuroblastoma as well as hepatoma cells. ATRA [X], a natural retinoid, is known to induce the differentiation of acute promyelocytic leukemia (APL) cells into granulocytes, leading to complete remission when taken by patients with APL (30). When primary cultured leukemia cells from APL patients were treated with the acyclic retinoid, a concentration-dependent induction of differentiation was observed, similar to ATRA [X] (31). The acyclic retinoid seems to act exactly as a retinoid. The differentiation-inducing effects of ATRA [X] on human neuroblastoma-derived cell lines date back to 1981 (32). When neuroblastoma cells are treated with ATRA [X], it is observed that their proliferation is inhibited, their invasiveness is attenuated, and morphological neuron-like projections are formed irreversibly. Furthermore, ATRA [X] treatment restores brain-derived neurotrophic factor (BDNF) dependence of neuroblastoma cells, and a BDNF-dependent increase in amyloid precursor protein (APP) gene expression was also observed (33). 4,5-didehydroGGA [XII] also inhibited the proliferation of neuroblastoma cells similarly to ATRA [X], and morphologically, the formation of protrusions more than twice the cell body diameter was observed (34). Subsequently, we observed that another acyclic retinoid, GGA [XI], like ATRA [X], also inhibited the proliferation of the neuroblastoma-derived cell line SH-SY5Y cells, induced neuron-like morphological changes, and significantly induced the expression of the neurotrophic receptor tyrosine kinase-2 (NTRK2 or TrkB) gene, a putative BDNF receptor (35). The cells expressed retinoid receptor proteins of RARα,β,γ and RXRα,γ, and ATRA [X] or GGA [XI] treatment markedly upregulated RARβ expression and downregulated RXRα expression. Thus, we suggest that these two retinoid receptor subtypes expressed in SH-SY5Y cells are implicated in the biological effects of ATRA [X] and GGA [XI]. Acyclic retinoids such as GGA [XI] and 4,5-didehydroGGA [XII] could regulate the expression of differentiationspecific genes in their respective cells via transcription factors such as RAR and RXR as well as ATRA [X] and 9-cis RA, leading tumor cells to differentiated cells. NATURAL GGA FOUND IN PLANTS We reasoned that if GGA [XI] is a true acyclic retinoid, it could be biosynthesized in plants. As a start, we examined whether GGA [XI] could be detected in lipid extracts of medicinal herbs used in traditional medicine, such as Ayurveda, to cure liver diseases (10). As a result, we were able to identify and quantify GGA [XI] contents from turmeric, Schisandra, licorice, and Indian gooseberry by LC-MS. We then explored several commercial food products and detected GGA [XI] in curry powders, dried parsley flakes, fresh broccoli, and azuki beans (unpublished; YS), but not in polished and brown rice (36). Since it was shown that turmeric is relatively rich in GGA [XI] (10), we analyzed how the concentration of GGA [XI] in the blood varied after ingestion of commercial turmeric tablets (37). The LC/MS peak of GGA [XI] was detected in the plasma of seven healthy subjects (4 males and 3 females, aged 20-25 years) before ingestion of turmeric tablets, and its concentration ranged from 7.5 to 15.5 ng/ml. 
Two hours after ingestion of the turmeric tablets, the blood GGA concentration increased by about 1.5 times the basal level and remained at that level until 4 h later but returned to the original value after 8 h. In other words, GGA [XI] is detected physiologically in our human blood, and when GGA [XI] is ingested as a component of food, it is absorbed in the intestine and appears in the blood. If natural GGA [XI] were found only in plants and this compound were found to be essential for life, the definition of a true acyclic retinoid vitamin could be applied to GGA [XI]. However, since GGA [X] is synthesized also in animal cells (11), this definition is not valid. We next decided to explore the biosynthesis of GGA [XI] in mammals in depth. Endogenous GGA From 1995 to the present, we have been studying the antitumor effects of GGA [XI], especially the mechanism of cell death induction, using a culture system of human hepatoma-derived cell lines. Therefore, we collected HuH-7 cells, a human hepatoma-derived cell line, before adding GGA [XI] to the culture medium, analyzed their lipid extract by LC-MS, and detected a peak consistent with GGA [XI] (38). The possibility of contamination with GGA [XI] contained in FBS, derived from herbivorous ruminants, in the medium, was considered, but no GGA [XI] was found in lipid extracts of FBS. In addition, the intracellular GGA content varied depending on the state of cell proliferation and density whether subconfluent, confluent, or over-confluent. Therefore, we analyzed several cell lines in a subconfluent state and found that GGA content (ng/g wet wt) was about 3.9 in HuH-7 cells, 5.2 in PLC/PRF-5 cells, 3.2 in HepG-2 cells, and 17.7 in HeLa cells, a human cervix cancer-derived cell line. And GGA [XI] was below the detection limit (0.5 ng/g wet wt) in human neuroblastoma-derived cell lines IMR-32 and SH-SY5Y cells (38). Trace amounts of GGA [XI] were detected by the single mass (m/z[-] = 303.4) signal of molecular ions (deprotonated ions [M-H]to be precise), making difficult a baseline separation from arachidonic acid, a structural isomer of GGA [XI] that is present in large amounts in animal samples. Therefore, the GGA isomers were then separated using both molecular and fragment ions. Indeed, we subsequently switched to a tandem mass signal detection method using a combination of molecular ion (m/z (39). Then, we analyzed endogenous free GGA [XI] in each organ of experimental animals (male Wistar rats) by this tandem mass (LC-MS/MS) method. As a result, endogenous free GGA signals were detected in all organs analyzed, especially in the liver at the highest concentration. Apart from the liver, relatively high concentrations of GGA [XI] were detected in the genital organs (testes, prostate, and seminal vesicle) and brain (cerebrum and cerebellum) (39). These are in agreement with the increased reproductive index (RI) in in vivo animal studies (SAM and C3H/HeN mice) in which GGA [XI] was administered orally (40,41) and with the increased expression of BDNF in the hippocampal dentate gyrus and Cornu Ammonis regions of 1-week-old mice born to mothers which were given GGA [XI] orally (40). BIOSYNTHESIS OF GGA IN HUMAN HEPATOMA CELLS In mammals, the liver is an active organ in MVA metabolism. Therefore, we analyzed the metabolism of MVA to GGA [XI], which Fliesler & Schroepfer (11,20) reported in retinal cells as described above, using a culture system of the human hepatoma-derived cell line HuH-7. 
It was shown that the administration of pravastatin, an inhibitor of HMG-CoA reductase, depleted endogenous GGA in HuH-7 cells within 2 days, and GGA content was restored by mevalonolactone (MVL) supplementation (39). We also found that blocking the main channel of MVA metabolism by adding zaragozic acid A (ZAA or squalestatin), an inhibitor of squalene synthase, resulted in a 10-to 15-fold increase in endogenous GGA [XI] over 3 days. Since the substrate of squalene synthase is FPP, it is possible that the accumulation of FPP by ZAA treatment also increases the flow to GGPP and the accumulation of GGPP enhances the production of GGA [XI]. Isotopomer spectral analysis (42) After blocking the synthesis of squalene, a major channel of MVA metabolism, with ZAA treatment and further inhibiting the synthesis of endogenous MVA with pravastatin, stable isotope 13 C-labeled MVL ([1,2-13 C 2 ]MVL that will be intracellularly metabolized to 5-13 C-IPP or 5-13 C-DMAPP) was added and HuH-7 cells were cultured for 48 h; 13 C 4 -labeled GGA [XI] was detected as the major GGA [XI]. Then, after the addition of 13 C-MVL, intracellular GGA [XI] was chronologically detected by LC-MS/MS, and the combination of the mass number of the molecular ion (m/z = 303-307 corresponding to 13 C 0 -GGA-13 C 4 -GGA) and that of the fragment ion (m/z = 98 or 99 corresponding to no or one 13 C in the α-isoprene unit of GGA) was analyzed. The eight isotopic isomers (isotopologs and isotopomers) of GGA [XI] could be differentially tracked by this method (39). Then, by setting MVA as a monomer that is incorporated into its tetramer, GGA [XI] (actually IPP or DMAPP corresponds to the monomer) and assuming that the intracellular MVA concentration is constant throughout the experiment, isotopomer spectral analysis (ISA) can be applied even in nonequilibrium conditions. ISA of intracellular GGA [XI] was performed using HuH-7 cells and 13 C-MVL. Under the condition that intracellular MVA was kept at the physiological concentration (2.5 mM) and dilution rate D with 13 C-MVL was kept constant at 0.766-0.768 during the experimental period, g(t), the ratio of 13 (39). In other words, GGA [XI] appears to be an intermediate metabolite that is biosynthesized from MVA but metabolized relatively quickly to other compounds. Hepatic GGA [XI] is an endogenous lipid with a rapid metabolic turnover. Thus, GGA [XI] is an acyclic diterpenoid acid, but it does not appear to be an end product of degradation, at least not like the acyclic monoterpenoid acids (see Fig. 1) excreted in human urine described at the beginning of this review. As (Fig. 4) (39), which has already been reported to be synthesized from MVA in the parasite (21), as mentioned above. OXIDATION OF GERANYLGERANIOL REQUIRES MOLECULAR OXYGEN From GGPP to GGOH The next question was what enzymatic reactions are involved in the metabolism from GGPP to GGA [XI]. Since a specific GGPP pyrophosphatase had already been reported that specifically uses GGPP but not FPP as a substrate in rat hepatocytes (45), it was suggested that the enzyme converted GGPP to geranylgeraniol (GGOH) [XVIII] in a one-step reaction. However, a recent report that type 1 polyisoprenoid diphosphate phosphatase (PDP1 or phospholipid phosphatase 6) is involved in the regulation of intracellular GGOH concentrations suggested that the dephosphorylation of GGPP to GGOH [XVIII] is catalyzed by the phosphatase rather than pyrophosphatase, indicating that phosphate groups may be sequentially released from GGPP (46). 
However, PDP1 was originally identified as an enzyme that converts presqualene diphosphate to presqualene monophosphate (47), and it also acts equally on geranyl diphosphate, FPP, and GGPP to produce geraniol [I], farnesol, and GGOH [XVIII], respectively (48). Whether the enzyme involved in the dephosphorylation of GGPP is the former pyrophosphatase or the latter phosphatase, the existence of the enzymatic reaction(s) from GGPP to GGOH [XVIII] in mammalian cells is confirmed.

Fig. 4. Hepatic MAOB is involved in the oxidation of GGOH to GGal, along with CYP3A4, the backup enzyme for MAOB, during GGA synthesis. The enzymatic reaction from GGOH to geranylgeranyl aldehyde (GGal) is NAD(P)+-independent in human hepatoma cells and rat liver homogenates and involves the consumption of molecular oxygen (38), and MAOB inhibitor treatment reduces intracellular GGA content. In addition, it has been shown that hepatic MAOB is involved in the oxidation reaction of GGOH to GGal, as MAOB inhibitor treatment decreased the intracellular GGA content and knockdown of the MAOB gene using a specific siRNA inhibited GGA synthesis from GGOH (43). On the other hand, intracellular GGA content was maintained at the same level in MAOB-KO cells as in wild-type cells, indicating that CYP3A4, as a backup enzyme for MAOB, is involved in the oxidation of GGOH to GGal, as shown by experiments using inhibitors and siRNA (44). The oxidation of GGal to GGA is NAD(P)+-dependent and is catalyzed by ALDH, a nonspecific enzyme. Isotopomer spectral analysis showed that GGA is further metabolized to 2,3-dihydroGGA (39), and endogenous 2,3-dihydroGGA was consistently detected at higher concentrations than GGA in each organ of Wistar rats (39). GGA, geranylgeranoic acid; MAOB, monoamine oxidase B; ALDH, aldehyde dehydrogenase.

From GGOH to GGA

Therefore, we focused our analysis on the oxidation reaction from GGOH [XVIII] to GGA [XI]. Initially, we assumed that the reactions would be catalyzed by enzymes with relatively broad specificity, such as alcohol dehydrogenase (ADH) and aldehyde dehydrogenase, which are known to catalyze the metabolism from retinol to retinoic acid via retinal. However, the metabolism from GGOH [XVIII] to geranylgeranyl aldehyde (GGal) [XIX] did not require the addition of NAD+ in a rat liver homogenate system, whereas the oxidation from GGal [XIX] to GGA [XI] was found to be NAD+-dependent (36). Moreover, consumption of molecular oxygen was observed in the oxidation reaction from GGOH [XVIII] to GGal [XIX], indicating that this step is not a dehydrogenation but a reaction catalyzed by an oxidase that requires oxygen as a substrate; the enzyme activity was further found to be localized in the mitochondrial fraction (38). It was also shown that tranylcypromine (TCP), an inhibitor of monoamine oxidase A/B (MAOA/B), suppresses the synthesis of GGA from GGOH [XVIII] and that human recombinant MAOB, but not MAOA, protein efficiently produces GGal [XIX] from GGOH [XVIII] (38). Subsequently, we demonstrated that siRNA-mediated knockdown of the MAOB gene reduced the endogenous GGA content in HuH-7 and Hep3B cells, indicating that the oxidation of GGOH [XVIII] to GGal [XIX] is catalyzed by MAOB in human hepatocytes (Fig. 4) (43).

GGA in MAOB-KO cells

However, contrary to our expectations, when MAOB-KO cells were established in Hep3B cells by the CRISPR/Cas-9 system, the endogenous GGA concentration in the KO cells did not differ from that in WT Hep3B cells (43).
Moreover, since the endogenous GGA concentration did not change after TCP treatment of the KO cells, it was unlikely that any residual MAOB was involved in the maintenance of endogenous GGA levels, suggesting that an enzyme other than MAOB may be induced as a backup for GGA synthesis. Interestingly, when the human MAOB gene was reintroduced into these KO cells, the endogenous GGA concentration did not change, but it decreased upon TCP or MAOB siRNA treatment. In other words, even in MAOB-KO cells in which some backup system had been induced, MAOB was shown to be preferentially used for GGA synthesis when the MAOB gene was reintroduced (43). So what could replace the MAOB enzyme in the backup system for GGA synthesis in MAOB-KO cells? We searched for that possible enzyme. Our previous studies showed that GGOH oxidase activity is present in the microsomal fraction as well as in the mitochondrial fraction where MAOB is localized, but is not detected in the cytosolic fraction, in rat liver (36) and HuH-7 cells (37). Endo et al. (49) reported that ADH1A, which is localized in the cytosolic fraction, generates GGal [XIX] using GGOH [XVIII] as a substrate, but even when ADH1A gene expression was knocked down using an ADH1A-specific siRNA, the endogenous GGA levels in Hep3B/MAOB-KO cells did not decrease (43). This is consistent with our previous finding that no enzymatic activity oxidizing GGOH [XVIII] to GGal [XIX] was detected in the cytosolic fraction (37). On the other hand, a group of cytochrome P450 enzymes localized in the liver microsomal fraction are known to oxidize isoprenols such as geraniol [I] (50) and retinol (51). Therefore, we decided to search for members of the cytochrome P450 enzyme family that are induced in MAOB-KO cells and oxidize GGOH [XVIII].

Age-dependent decrease in liver GGA content

Male C3H/HeN mice are known to spontaneously develop hepatoma at a high rate after 2 years of normal rearing (52). Furthermore, it has been reported that oral administration of 4,5-didehydroGGA [XII] (50 μg/mouse) to male C3H/HeN mice at 12 months of age greatly reduced the number of cases of hepatoma detected at 23 months of age (53,54). Therefore, we hypothesized that the liver GGA content of C3H/HeN mice declines with aging and that, when the value declines past a putative threshold, it may favor the development of hepatoma. Indeed, hepatic GGA concentrations in C3H/HeN mice decreased with age, and GGA [XI] could not be detected in the livers of 93-week-old mice (55). Hepatic GGA levels in male C3H/HeN mice showed a two-stage decrease with age. In the first stage, hepatic GGA levels decrease by approximately 30% from 18 to 25 weeks of age, and in the second stage, they decrease rapidly and become undetectable between 60 and 93 weeks of age (55).

Age-dependent decrease in hepatic MAOB gene expression

Since the age-related decrease in hepatic GGA content is thought to be due to a decrease in GGA synthesis, the expression level of the MAOB gene was analyzed. MAOB expression has long been studied in human postmortem brains, where MAOB activity generally increases with age while MAOA activity remains largely unchanged (56). In rodents, as in humans, MAOA activity changes very little with age, and MAOB activity is known to increase with age in all brain regions except the brainstem (56). Although most enzyme activities are characterized by a decrease with age, MAOB may be a rare exception.
Indeed, MAOB mRNA levels increased with aging in the brains of C3H/HeN mice. However, although hepatic MAOB mRNA levels increased progressively until about 18 weeks of age, they then decreased with age to about one-third of their maximum level at 93 weeks of age (55). A significant positive correlation between liver MAOB mRNA levels and hepatic GGA levels was detected, suggesting that the age-related decrease in hepatic GGA levels was due to decreased expression of the liver MAOB gene.

Prevention of spontaneous hepatoma by oral administration of GGA

Assuming that a decrease in hepatic GGA levels in C3H/HeN mice allows spontaneous hepatocarcinogenesis to develop, we wondered whether increasing hepatic GGA levels by administering exogenous GGA [XI] at the time when hepatic GGA levels are decreased would suppress spontaneous hepatocarcinogenesis. We observed that oral administration of GGA [XI] or GGOH [XVIII], or intraperitoneal administration of ZAA, to 24-week-old mice significantly increased liver GGA content (55). Therefore, we administered a single oral dose of GGA [XI] to mice at 7 months of age, when liver GGA levels begin to decrease; at 11 months of age, which is 4 months after liver GGA levels stabilize at a low level; or at 17 months of age, when liver GGA levels begin to decrease rapidly further. Subsequent autopsy of livers at 24 months of age revealed that a single oral dose of GGA [XI] at 11 months of age, as in the case of 4,5-didehydroGGA [XII] described above (53,54), significantly reduced the detection rate of spontaneous hepatoma (55). A single dose of GGA [XI] at 7 or 17 months of age had no effect on suppressing spontaneous hepatomas, and the detection rate of hepatoma in either group was not significantly different from that in the untreated control group. As mentioned above, GGA metabolism is rapid, so it is unlikely that a single dose of GGA [XI] administered to 11-month-old mice exerts its effects for a long period, even up to 24 months of age. Therefore, we will discuss the mechanism by which GGA [XI] administered as a single dose at the age of 11 months exerts its effects for more than one year, based on experimental observations. Three observations are important in considering the mechanism: 1) liver GGA content in male C3H/HeN mice begins to decrease with age at around 6 months of age, and liver GGA levels remain low during 8-15 months of age and then decrease further until GGA [XI] becomes undetectable at 23 months of age; 2) a single oral administration of GGA [XI] to mice causes a transient increase in liver GGA concentration; and 3) GGA [XI] induces inflammatory cell death in a concentration-dependent manner selectively in tumor cells but not in primary cultured hepatocytes, as described in detail later. Based on these three observations, the possibility that a single dose of GGA [XI] at 11 months of age causes irreversible changes in C3H/HeN mice is as follows. That is, a single oral administration of GGA [XI] to C3H/HeN mice at the age of 11-12 months, when liver GGA levels have decreased with age, transiently increases liver GGA levels, which induces inflammatory cell death in all tumor cells (or precancerous cells) developing in the liver at the age of 11 months and eliminates them from the liver, so that spontaneous hepatocarcinogenesis does not occur during the following year.
If GGA [XI] is not administered during this period, existing tumor cells are expected to develop into hepatomas after one year because they will continue to grow, evading cell death owing to the low GGA concentrations in the liver. On the other hand, GGA [XI] administered as a single dose at the age of 7 or 17 months shows no inhibitory effect on hepatocarcinogenesis. Based on the three observations mentioned above, we speculate that GGA [XI] administered at the age of 7 months is ineffective because GGA-sensitive tumor cells have not yet developed, whereas GGA [XI] administered at the age of 17 months cannot eliminate tumor cells because they have already increased in malignancy and lost GGA sensitivity. However, we believe that a detailed understanding of the mechanism by which GGA inhibits hepatocarcinogenesis will have to wait until the mechanism of spontaneous hepatocarcinogenesis in C3H/HeN mice is clarified.

Nonretinoidal function

Now that we know that GGA [XI] is synthesized de novo in mammalian hepatocytes, that its synthesis decreases in an age-dependent manner, and that administration of exogenous GGA [XI] inhibits the development of spontaneous hepatoma in vivo, what is the mechanism by which GGA [XI] prevents carcinogenesis? As mentioned above, we were initially interested in the differentiation-inducing effects of retinoids on hepatoma cells and began serious research on the antitumor effects of acyclic retinoids (29). At that time, ATRA [X] was attracting attention as an active metabolite of retinoids, and cellular retinoic acid binding protein (CRABP) was reported as a factor mediating its action (57). GGA [XI] was then found as one of the synthetic retinoids, indexed by its ligand activity against CRABP (58). Later, the retinoid receptors (RAR, RXR) were discovered as more reliable mediators of RA activity at the transcriptional level (59), and ligand activity against them came to be considered a more essential indicator of retinoid biological activity. As already mentioned, GGA [XI] showed ligand activity against CRABP, RAR, and RXR (Fig. 3). GGA [XI] dramatically reduced the number of viable HuH-7 cells in a concentration-dependent manner, but even media containing higher concentrations of GGA [XI] did not alter the number of live primary mouse hepatocytes (60). Notably, this cell death-inducing effect on hepatomas, detailed below, is specific to GGA [XI] and is not shared by retinoids such as ATRA [X] and 9-cis RA (61). Therefore, we do not believe that GGA-induced cell death is directly related to retinoid-binding proteins such as CRABP, RAR, and RXR. In other words, the cell death-inducing effects of GGA [XI], which may be directly related to its carcinogenesis-inhibitory effects, are considered to be nonretinoid effects.

Apoptotic cell death

When GGA [XI] is added to the medium at concentrations of 1-20 μM to several human hepatoma-derived cell lines, including HuH-7, HepG-2, Hep3B, and PLC/PRF-5, cell death is induced in approximately 8-10 h (Fig. 5). Initially, GGA-induced cell death was presumed to be apoptosis, since dissipation of the mitochondrial inner membrane potential and condensation of chromatin were observed before cell death, and the cell death was inhibited or delayed by selective inhibitory peptides for caspase-1 (CASP1) or CASP3, respectively (60,61,64).
However, subsequent examination did not reveal cytoplasmic leakage of cytochrome c from mitochondria or activation of CASP3 (unpublished; KO and YS), which we considered to be different from typical apoptosis.

Autophagic cell death

Next, we analyzed autophagy, another mode of programmed cell death. Western blot analysis of the autophagosome marker LC3β-II, live fluorescence microscopy of GFP-RFP-LC3β-expressing cells, Western blot analysis of p62/SQSTM1 (sequestosome 1), a cargo of autophagosomes degraded by autolysosomes, and electron microscopy of autophagosomes revealed that GGA [XI] induces an incomplete autophagic response, characterized mainly by the abnormal accumulation of early autophagosomes and failure of autolysosome formation (65). At that time, we considered that this "incomplete response" of autophagy itself might cause GGA-induced cell death. Mutant p53 protein stored in the cytoplasm is thought to be involved in the inhibition of autophagy (66). Thus, the observation that mutant p53 protein is stored in the cytoplasm in HuH-7 (Y220C) and PLC/PRF-5 (R249S) cells and is rapidly translocated to the nucleus upon GGA treatment (67) suggests that GGA [XI] at least releases the mutant p53-mediated inhibition of autophagy.

Unfolded protein response

Around 2012, ER stress had been reported as one of the triggers of autophagy, along with starvation stress. Therefore, after GGA treatment of human hepatoma-derived cell lines, we analyzed three pathways of the ER stress-induced unfolded protein response (UPR ER ): splicing of XBP1 mRNA on the ER membrane (IRE1 pathway), the DDIT3 gene transcription level (PERK pathway), and the PDIA4 gene transcription level (ATF6 pathway). As a result, activation of the IRE1 and PERK pathways, but not the ATF6 pathway, was observed (68). This is similar to the lipid-induced UPR ER caused by high concentrations (sub-mM) of saturated fatty acids (such as palmitic acid (PA) and stearic acid), whereas the tunicamycin-induced UPR ER involves activation of all three pathways (69). GGA-induced UPR ER , like lipid-induced UPR ER , was suppressed by the coexistence of oleic acid (OA), a monounsaturated fatty acid. Interestingly, OA cotreatment inhibited not only the UPR ER but also the GGA-induced incomplete autophagic response and even GGA-induced cell death, suggesting that GGA-induced UPR ER is the upstream signal, followed by the incomplete autophagic response downstream, leading to cell death. Activation of ATF4, which is downstream of the PERK pathway in the UPR ER signal, is known to enhance Beclin-1 expression, and Beclin-1 then promotes autophagosome formation (70). We reported a rapid upregulation of Beclin-1 by GGA (65), indicating that GGA-induced UPR ER triggers an autophagic response. We observed rapid loss of cyclin D1 protein upon GGA treatment in several human hepatoma cell lines (HuH-7, PLC/PRF-5, and HepG2) and reported that the mechanism is not transcriptional regulation of the cyclin D1 (CCND1) gene but a rapid arrest of its translation (71).
At the time of writing that paper, we were unable to explain the mechanism of the translational repression, but it can now be easily explained by assuming that GGA-induced UPR ER activates the PERK pathway and phosphorylates eIF2α, which in turn stops translation of the CCND1 mRNA (72). The induction of UPR ER by GGA [XI] may explain another interesting phenomenon induced by GGA [XI]. Namely, TP53-induced glycolysis and apoptosis regulator (TIGAR), a repressor of glycolysis, and synthesis of cytochrome c oxidase 2 (SCO2), an assembly factor of mitochondrial respiratory chain complex IV, were rapidly increased upon GGA treatment of HuH-7 cells, and a conversion of energy metabolism from glycolysis to respiration was observed (73). Although the mechanism was unknown at the time, the structure-activity relationship of UPR ER induction by GGA derivatives was consistent with that of SCO2 upregulation activity, leading to the hypothesis that UPR ER induction may increase SCO2. Recently, it was shown that UPR ER induction by glucose deprivation or tunicamycin treatment increases the formation of respiratory chain supercomplexes (74). The authors of that paper propose that activation of PERK, one of the three UPR ER branches, induces the assembly of respiratory chain supercomplexes through a signaling flow of increased phosphorylation of eIF2α, upregulation of ATF4, and increased expression of supercomplex assembly factor-1. It is possible that the same mechanism operated in GGA-induced UPR ER to increase SCO2.

Quick induction of UPR ER

GGA [XI] was shown to induce the so-called "lipid-induced UPR ER ," which is well known to be induced by lipids such as PA, but it was unclear how GGA [XI] induces the UPR ER . Therefore, we followed GGA-induced UPR ER immediately after the addition of GGA [XI], using splicing of XBP1 mRNA as an indicator. The UPR ER was already observed 5 min after the addition of GGA [XI] to the medium (unpublished; CI and YS). Since it is currently experimentally impossible to analyze earlier time points, we have confirmed that the signal of GGA addition reaches the ER membrane within 5 min. In addition, as mentioned above, simultaneous treatment with OA completely suppresses both GGA-induced UPR ER and cell death, but not at all when cells are pretreated with OA (1-24 h before GGA addition), that is, when OA is not present outside the cell during GGA treatment, suggesting that OA does not eliminate the GGA signal inside the cell but rather eliminates it at the cell surface (68). In other words, we can hypothesize that the point of action of GGA [XI] in UPR ER induction and cell death induction may be near the cell surface. We therefore focused on reports that cell death induction by lipids such as saturated fatty acids like PA, or lipotoxicity, is dependent on the activation of toll-like receptor-4 (TLR4) localized at the cell surface (75,76). In other words, lipids such as saturated fatty acids and cholesterol induce the UPR ER and stimulate TLR4, a receptor that originally recognizes lipopolysaccharides produced by Gram-negative bacteria.

Direct activation of TLR4 by PA

A possible mechanism by which PA activates TLR4 has been reported (77), in which PA acts directly on TLR4 activation by serving as a ligand for the TLR4/MD-2 complex.
According to their docking simulations, five molecules of PA dock into the hydrophobic pocket of MD-2 and activate and internalize TLR4 by cross-linking TLR4 as a dimer and recruiting it to lipid rafts on the cell membrane. Activated TLR4 transduces signals into the cell, which is thought to activate IRE1 localized to the ER and cause splicing of XBP1 mRNA on the ER (78,79). The observation that TLR4-KO mice do not develop UPR ER even when reared on a high-fat diet strongly supports this idea (80). However, there is also a report (81) that when radiolabeled stearic acid was used to analyze the binding of the fatty acid to a TLR4/MD-2 fusion protein, no evidence of binding was obtained; if PA is directly involved in the activation of TLR4, it may be necessary to consider that PA is not only a ligand for TLR4/MD-2 but also promotes the translocation of TLR4 to the membrane raft. Moreover, the mechanism by which TLR4 activation leads to the activation of IRE1 localized to the ER is still unclear.

Fig. 5. Timeline of the cellular events during GGA-induced cell death of human hepatoma HuH-7 cells. The increase in intracellular lyso-PC and lyso-PE after GGA treatment was detected by metabolomics analysis without preconception (62). The increases in intracellular calcium ions and mitochondrial superoxide after GGA treatment were quantified in time series by observing a culture system of HuH-7 cells with Fluo-4 AM or MitoSOX Red incorporated, using a laser scanning confocal fluorescence microscope and collecting live cell images in time-lapse (63). Cellular mRNA levels of the XBP1s (the spliced form of XBP1 mRNA), IL1B, and NLRP3 genes were quantified by RT-qPCR, while LC3B-II, p62, Beclin-1, active CASP1, active CASP4, and the N-terminal fragment of GSDMD (gasdermin D) were measured by Western blotting (63). Laser scanning immunofluorescence confocal microscopy was used to observe the subcellular localization of GSDMD and NF-κB (63). GGA, geranylgeranoic acid; lyso-PC, lysophosphatidylcholine; lyso-PE, lysophosphatidylethanolamine.

Indirect activation of TLR4 by PA

PA has also been reported to cause UPR ER without requiring activation of TLR4, although it causes TLR4-mediated inflammatory responses (82). One mechanism is that PA taken up by cells is metabolized to lysophospholipids (lyso-PLs) and diacylglycerols (DAGs), which perturb the ER membrane, resulting in the activation of IRE1 and PERK, the sensors of ER stress localized in the perturbed membrane domains. PA-induced UPR ER is inhibited by cotreatment with OA, which is thought to reflect a mechanism by which OA is metabolized into phospholipids and maintains healthy ER membranes that do not activate IRE1 or PERK (69). If PA causes UPR ER without TLR4 activation, how is TLR4 activated after PA addition? UPR ER induced by PA metabolites such as lyso-PLs and DAGs activates IRE1, followed by enhanced ceramide synthesis and secretion of ceramide-containing extracellular vesicles, which in turn activate TLR4 (83). In this regard, the recent identification of sphingosine-1-phosphate lyase (SPL) as a substrate of the IRE1 enzyme (84) points to the possibility that phosphorylation of SPL by IRE1 inhibits its activity, resulting in a ceramide increase and consequent activation of TLR4. In any case, it remains an open question how PA activates TLR4, and the answer may vary depending on the cellular context (85).

TLR4 activation by GGA

We now consider the possibility and mechanism of TLR4 activation by GGA [XI], using the activation of UPR ER and TLR4 by PA as a model.
Through the use of specific inhibitors and gene knockdown experiments, we were able to show that the two processes essential for the induction of hepatoma cell death by GGA [XI] are UPR ER induction and TLR4 activation, both of which are suppressed by cotreatment with OA, as in the case of PA (63). There is no experimental evidence that GGA [XI] is incorporated into lyso-PLs or DAGs as in the case of PA. Of note, however, we recently found by comprehensive, hypothesis-free metabolomics analysis that, immediately after treatment of HuH-7 cells with 10 μM GGA [XI], nine molecular species of lyso-PLs are overwhelmingly increased compared with other cellular metabolites (62). Moreover, lysophosphatidylcholine (lyso-PC) containing one of PA (C16:0), palmitoleic acid (C16:1), or arachidonic acid (C20:4), and lysophosphatidylethanolamine containing C20:4, increased most quickly (see Fig. 5), while lyso-PC containing dihomo-γ-linolenic acid (C20:3) increased later. Interestingly, as with the addition of PA, lyso-PC containing PA showed a rapid and transient increase (peaking at 2 h after GGA addition at the latest) followed by a slow increase, reaching the highest concentration among the increased lyso-PLs (62). Therefore, the same mechanism of lipotoxicity exhibited by PA may be at work in the induction of hepatoma cell death by GGA [XI]. In other words, it can be hypothesized that GGA [XI] induces the UPR ER through an increase in PA-containing lyso-PC in the ER membrane, which activates IRE1 and PERK, thereby increasing intracellular ceramide levels and stimulating TLR4 via ceramide-containing vesicles secreted into the extracellular space. However, unlike the case of PA, GGA [XI] cannot be directly metabolized to PA-containing lyso-PC, which raises the question of what mechanism allows GGA [XI] to increase PA-containing lyso-PC. To answer this question, based on the characterization of the molecular species of the increased lyso-PLs, we suspect that GGA [XI] may be responsible for a decrease in ectonucleotide pyrophosphatase/phosphodiesterase 2 (ENPP2, or autotaxin) activity (62). ENPP2, an enzyme that acts on lyso-PLs to produce lysophosphatidic acid, plays a very important role in hepatoma development (86), including the growth of hepatitis B virus (87) and hepatitis C virus (88), and high-grade hepatomas are reported to have high ENPP2 expression (89). If GGA [XI] is indeed involved in the downregulation of ENPP2, this is very interesting with respect to the preventive effect of GGA [XI] on hepatocarcinogenesis. It is therefore possible to assume that GGA increases PA-containing lyso-PCs via downregulation of ENPP2, that the increased lyso-PC (C16:0) activates IRE1 on the ER membrane, and that the activated IRE1 suppresses SPL activity via phosphorylation, resulting in increased ceramide and consequent activation of TLR4. However, even if such an assumption can be made, experiments with specific inhibitors of TLR4 or gene knockdown indicate that the most upstream signal for GGA-induced cell death in hepatoma is TLR4 activation. Thus, the downregulation of ENPP2 may occur downstream of TLR4 activation signaling by GGA. We have so far not examined the possibility that GGA directly activates TLR4, nor whether GGA-activated TLR4 signaling is involved in the downregulation of ENPP2.
In any case, there is no doubt that TLR4 plays an essential role in GGA signaling, as treatment with VIPER, a specific inhibitory peptide of TLR4, or knockdown of the TLR4 gene with siRNA completely suppresses GGA-induced cell death and UPR ER induction, as well as the incomplete autophagy response (63).

Pyroptotic cell death induced by GGA (Fig. 5)

Although the signal of GGA addition was shown to flow from TLR4 activation to autophagy through lipid-induced UPR ER , the mechanism of GGA-induced cell death remained unknown. Therefore, we examined the activation of CASPs, which is directly related to programmed cell death, and detected the activation of CASP4 from 1 h to 5 h after the addition of GGA [XI]; along with the activation of CASP4, GSDMD-N, the N-terminal fragment of gasdermin D (GSDMD), was produced, and 3 h after the addition of GGA [XI], immunofluorescence microscopy also confirmed the localization of GSDMD at the plasma membrane (see Fig. 5). Subsequently, rapid activation of CASP1 was observed 8 h after the addition of GGA [XI] (63). In HuH-7 cells, treatment with thapsigargin alone (an established ER stress inducer that acts through inhibition of the sarco/endoplasmic reticulum Ca2+ ATPase and thereby decreases the Ca2+ concentration in the ER) activated CASP4 in the same manner as GGA treatment (63). CASP4 is considered to be the enzyme responsible for ER stress-induced cell death (90). Recently, it was shown that the mechanism of CASP4 activation is that mitochondria-localized calpain 5 (CAPN5) is activated by ER stress and the active proteolytic enzyme CAPN5 cleaves pro-CASP4 (91). On the other hand, the activation of CASP1 is generally considered to be carried out by the formation of inflammasomes, independently of the activation of CASP4. However, during GGA treatment, the activation of CASP1 is also inhibited by cotreatment with a CASP4 inhibitor, suggesting that pyroptosis occurs through a noncanonical pathway in which CASP4 activation is involved in the formation of inflammasomes. Moreover, as shown in Fig. 5, after GGA treatment, increased production of superoxide in mitochondria, nuclear translocation of NF-κB, and increased expression of the NLRP3 and IL1B genes (so-called "priming of the inflammasome") are also induced, following the activation of CASP4, indicating that pyroptosis via the canonical pathway is also thought to be progressing. In considering the mechanism of GGA-induced cell death, the following observations are also informative: GGA [XI] induced cell death in the guinea pig fibroblast cell line 104C1, but the cell line 104C/O4C, which stably expresses the human phospholipid hydroperoxide glutathione peroxidase (PHGPx or GPX4) gene, became GGA-resistant. Hyperproduction of superoxide in mitochondria was observed in both cell lines upon GGA treatment, but GGA-induced accumulation of peroxides and loss of mitochondrial inner membrane potential were observed only in the 104C1 cells and not in the 104C/O4C cells (92). The fact that PHGPx suppresses GGA-induced cell death indicates that there may be a common process with ferroptosis in GGA-induced pyroptotic cell death. In summary, it was shown that the administration of exogenous GGA [XI] to hepatoma cells acts in an inhibitory manner against hepatocarcinogenesis by inducing pyroptosis, a form of inflammatory cell death, via TLR4, which is upregulated in hepatoma cells (93,94).
It was also shown that increasing the intracellular content of endogenous GGA [XI], one of the endogenous MVA metabolites, with drugs such as ZAA leads to similar cell death in hepatoma cells (39). Future research on GGA targeting is needed for prophylactic strategies against liver cancer.

OTHER BIOLOGICAL FUNCTIONS OF GGA

We have described above the inhibitory effects of GGA [XI], one of the MVA metabolites, on hepatocarcinogenesis. In addition, however, GGA [XI] also has the potential to act in the reproductive system and the brain nervous system. We had been breeding and rearing senescence-accelerated mice (SAM-P8), characterized by accelerated aging of the brain nervous system, and found by chance that rearing with GGA [XI] added to a commercial solid feed during the breeding and lactation period significantly increased the number of weaned pups per mating (reproduction index, RI) (40). Subsequently, a similar experiment was conducted in another strain, C3H mice, and an increase in RI was again observed upon GGA supplementation (41). When endogenous GGA [XI] was quantified in various organs of 5-week-old male Wistar rats, the second highest amount of GGA [XI] was found in the testes and epididymis (39). This suggests that the increase in RI associated with GGA supplementation may be due to GGA-induced enhancement of sperm maturation and fertility. Indeed, we have detected GGA [XI] in human semen (unpublished; YT and YS). In addition, mice bred and lactated under GGA supplementation showed increased expression of BDNF around the hippocampal dentate gyrus at 1 week of age (40), suggesting that GGA supplementation also acts on brain development in perinatal mice. Treatment of SH-SY5Y cells, a cell line derived from human neuroblastoma, with GGA [XI] results in morphological neuronal-like changes, including longer neurites and increased contact points per cell (35,95), which is compatible with the results of the in vivo experiment. The biological effects of GGA [XI] on bone metabolism are also worth mentioning. Namely, GGA [XI] induces osteoblast differentiation and inhibits osteoclast formation in vitro, and GGA [XI] increases femur bone mineral density in SAM-P6 mice in vivo (96). The differentiation-inducing effect of GGA [XI] on neuroblastoma and its effect on bone metabolism may be due to the retinoid action of GGA [XI], because ATRA [X] is also known to have similar effects on neuronal differentiation of neuroblastoma (97) and on osteogenesis (98). Two pharmacological actions of GGA [XI] have also been observed. One is the inhibition of human immunodeficiency virus type-1 (HIV-1) infection: GGA [XI] inhibited HIV-1 infection of host cells, and GGA is thought to inhibit HIV-1 entry into host cells by suppressing the cell surface expression of the chemokine receptor CXCR4 (99). The other is the inhibition of lysine demethylase 1 (LSD1 or KDM1A), which belongs to the family of FAD-dependent amine oxidases that includes MAOB. GGA [XI] and several of its dihydro derivatives inhibited active recombinant human LSD1 (100). Furthermore, GGA treatment of HuH-7 cells resulted in rapid migration of nuclear-localized LSD1 to the cytoplasm (101), indicating that GGA [XI] may have epigenetic effects on cells.

Further metabolites of GGA and their biological activities

We have described that GGA [XI] is produced as a metabolite of MVA in mammalian organs and cells, including those of humans, bovines, rats, and mice, and even in parasites.
GGA [XI] is further metabolized intracellularly to 2,3-dihydroGGA [XIII] in HuH-7 cells and in rat thymocytes (102), and in Wistar rat tissues 2,3-dihydroGGA [XIII] has been detected at concentrations several- to more than a hundred-fold higher than GGA [XI] (5-fold in the liver, 106-fold in the epididymis, and 141-fold in the thymus) (39). In our laboratory, we have observed that hydrolysis of the phospholipid fraction of rat liver and HuH-7 cells, especially the cardiolipin fraction, liberated GGA [XI] and 2,3-dihydroGGA [XIII], but we could not detect them in the neutral lipid fraction (unpublished; YT and YS). In addition, there are also differences in biological activities, such as the lipid droplet-inducing activity of 2,3-dihydroGGA [XIII] on cultured cells, which is not observed at all with GGA [XI] (103). Identifying GGA [XI] and 2,3-dihydroGGA [XIII] and their further metabolites, as well as confirming and analyzing their biological activities, are issues to be explored in the future.

Essential role(s) of GGA in the cell

It was indeed surprising that even in KO cells of the MAOB gene, which encodes the enzyme responsible for GGA biosynthesis, the intracellular levels of GGA [XI] were maintained by induction of the backup enzyme CYP3A4. How does a certain level of intracellular GGA [XI] contribute to cell survival? Why should intracellular GGA [XI] not be reduced or lost in MAOB-KO cells? These points still need to be resolved.

GGA as a primordial form of retinoic acid

We started with the study of the antitumor activity of retinoids, which led us to focus on the acyclic retinoid GGA [XI] as an endogenous metabolite of MVA. GGA [XI] is found in many eukaryotes, both plants and animals, from parasites deficient in steroid synthesis to mammals such as mice, rats, bovines, and humans. Needless to say, retinoids are essential nutrients (vitamin A) that cannot be biosynthesized de novo in the animal kingdom, and their roles in ontogeny, cell differentiation, reproduction, and the nervous system are well known, making them essential for the survival of individual animals. This is a bit of a leap, but since GGA binds to CRABP and exhibits ligand activity for RAR and RXR, we hypothesize that GGA might have emerged as the primordial molecule of retinoic acid in eukaryotes, now acting as an essential signal transduction molecule for cell survival.

Acknowledgments

The author would like to thank Professor Luigi M. De Luca of Johns Hopkins University, the author's postdoctoral mentor, for his encouragement and constructive suggestions on this manuscript. The author would also like to thank Drs. Kyoko Okamoto, Takashi Muraguchi, Maiko Mitake, Chiharu Sakane, Haruka Kamiyama, Chieko Iwao, Toshiya Yonekura, Suemi Yabuta, and Yuki Tabata for their dedicated contributions to GGA research in the author's laboratory. This work was supported in part by the Japan Society for the Promotion of Science Grant 16K00862 and a research grant B from the University of Nagasaki.

Author Contribution

Y. S. is responsible for all aspects of this article.

Conflict of Interest

The author declares that they have no conflicts of interest with the contents of this article.
Does Trade Liberalization Reduce Poverty in Mali? Evidence from ARDL Bounds Testing Approach

This paper investigates the impact of trade liberalization on poverty reduction in Mali over the period 1986-2018. Like Magombeyi and Odhiambo (2017), we use three measures of poverty (namely per capita consumption, the infant mortality rate, and life expectancy) to capture its multidimensional aspects. Using the ARDL bounds testing approach, the findings indicate that there is a negative relationship between trade liberalization and all three proxies of poverty reduction in the long run; however, the effect is statistically significant only for per capita consumption. In the short run, trade liberalization has a positive and significant effect on per capita consumption and life expectancy, while it has a negative and significant impact on the infant mortality rate. From these findings, it can be said that in Mali the effect of trade liberalization on poverty reduction is not sensitive to the choice of poverty proxy but depends on complementary policies. Factors such as financial deepening, education, inflation, institutional quality, and infrastructure development appear to influence the relationship between trade liberalization and poverty reduction. The remainder of the paper is organized as follows: section two reviews the theoretical and empirical literature on the relationship between trade liberalization and poverty reduction; section three presents the methodology; section four presents the empirical findings; and finally, the conclusion and some policy recommendations are provided in section five.

Theoretical Literature

According to the analytical framework developed by Winters (2000, 2004), there are theoretically four channels through which trade liberalization can affect poverty reduction.

-The first one is economic growth. For the proponents of this channel, openness to trade leads to long-term economic growth, which in turn reduces poverty. Although researchers widely accept economic growth as the key to sustained poverty alleviation, there is still debate among them on the relationship between trade liberalization and economic growth. Some authors have demonstrated that trade promotes growth by increasing the size of the market and by facilitating access to cheaper imported goods and to knowledge available in the world. It can also enhance growth by allocating more resources to Research and Development (R&D) or to the human and physical capital sectors (Lucas, 1988; Rebelo, 1991; Rivera-Batiz & Romer, 1991, 1991a; Feenstra, 1996). In contrast, others showed that the positive impact of trade on growth depends on the country's development level or on complementary policies (Calderon et al., 2004 and Daumon & Ozyiurt, 2011). Also, Young (1997) and Bourdon and Vijil (2013) showed that trade reduces growth in countries specialized in the production and export of low-technology, low-quality, or few products.

-The second channel is the change in the price of goods and services. The effect depends on whether poor households are net producers or net consumers: an increase in the price of goods and services will benefit net producers and harm net consumers, whereas a decrease in the price of goods and services will benefit net consumers and harm net producers.

-The third one is the wage and employment channel. This channel is based on the Heckscher-Ohlin theorem, which suggests that trade liberalization, by increasing the employment and wages of unskilled labour, will reduce poverty in developing countries.
However, although this theorem is very powerful in theory, Winters (2004) argued that in practice many other factors may need to be considered. For instance, openness to trade may be accompanied by skill-biased technical change, which generally increases the demand for skilled labour relative to unskilled labour. In that case, poverty will be unaffected or even worsened by trade liberalization.

-Finally, the fourth channel is government revenue and spending. Following Sharer et al. (1998), trade liberalization can positively or negatively affect government revenue. On the one hand, tariff reduction or elimination under trade liberalization will lower government revenue, and this reduction will in turn lower public spending on social activities (such as health, education, and infrastructure), which disproportionately affects the poor. On the other hand, trade liberalization can increase government revenue: for instance, by reducing the incentives for smuggling and corruption, lower tariffs can increase the volume of goods recorded at customs, which in turn boosts government revenue.

Empirical Literature

The empirical studies on the link between trade liberalization and poverty (monetary and non-monetary) in the case of Africa have yielded mixed results. While some researchers found that openness to trade has a positive effect on poverty reduction, others found the effect to be negative or even insignificant. In this study, we review some of them. Saibu et al. (2012) employed the vector error correction method to examine the relationship between trade openness, unemployment, and poverty in Nigeria for the period 1986-2010. The results indicate that trade openness has a significant positive impact on economic growth and unemployment but a significant negative impact on poverty in the long run. Le Goff and Singh (2014) used the GMM estimator to examine the effect of trade openness on poverty reduction in 30 African countries over the period 1981-2010. They found that trade openness reduces poverty in countries with a high level of financial intermediation, a high literacy rate, and strong institutions. In the case of Zimbabwe, Musvovi (2014) used an Ordinary Least Squares (OLS) method to determine the effect of trade on poverty reduction from 1986 to 2012. The findings indicate that trade liberalization has a positive impact on poverty in Zimbabwe. Kelbore (2015) also employed a GMM estimator to examine the effect of trade openness and structural transformation on poverty in 43 African countries during the period 1981-2010. He found that trade openness initially increases poverty by 1.3 percent and reduces it by about 1.2 percent after a one-period lag. Further, a more recent study (2020), using the feasible generalized least squares estimator, explored the relationship between international flows (foreign direct investment, foreign aid, and foreign trade) and poverty reduction in 29 Sub-Saharan African countries. The findings reveal a positive and significant link between foreign trade and poverty reduction.

ARDL Bounds Test to Cointegration

Following Odhiambo and Magombeyi (2017), the ARDL bounds testing approach to cointegration developed by Pesaran et al. (2001) is employed in this study. This approach is chosen for several reasons. First, the ARDL bounds test, as opposed to the Johansen and Juselius cointegration approach, is simple and allows a cointegrating relationship to be estimated by OLS once the lag order is selected.
Secondly, it does not require all the variables to be integrated of the same order, unlike VAR/VECM approaches; variables can be integrated of order one, I(1), or of order zero, I(0). Third, it is relatively more efficient for small sample sizes, as is the case in our study. Fourth, the error correction method integrates the short-run dynamics with the long-run equilibrium without losing long-run information.

Model Specification

This study uses three measures of poverty reduction in order to capture the effect of trade liberalization on both monetary and non-monetary poverty. We employ three models: Model 1 examines the impact of trade liberalization on per capita consumption, Model 2 examines its impact on the infant mortality rate, and Model 3 examines its impact on life expectancy. Therefore, following Le Goff and Singh (2014), the three models are specified as follows:

Model 1: $PCC_t = \alpha_0 + \alpha_1 OPEN_t + \alpha_2 PGDP_t + \alpha_3 FDI_t + \alpha_4 FD_t + \alpha_5 EDU_t + \alpha_6 CPI_t + \alpha_7 INFR_t + \alpha_8 IQ_t + \mu_t$ (1)

Model 2: $IMR_t = \alpha_0 + \alpha_1 OPEN_t + \alpha_2 PGDP_t + \alpha_3 FDI_t + \alpha_4 FD_t + \alpha_5 EDU_t + \alpha_6 CPI_t + \alpha_7 INFR_t + \alpha_8 IQ_t + \mu_t$ (2)

Model 3: $LE_t = \alpha_0 + \alpha_1 OPEN_t + \alpha_2 PGDP_t + \alpha_3 FDI_t + \alpha_4 FD_t + \alpha_5 EDU_t + \alpha_6 CPI_t + \alpha_7 INFR_t + \alpha_8 IQ_t + \mu_t$ (3)

where PCC is per capita consumption, IMR is the infant mortality rate, LE is life expectancy, OPEN is trade openness, PGDP is per capita GDP, FDI is foreign direct investment, FD is financial deepening, EDU is education proxied by gross secondary school enrollment, CPI is the consumer price index, INFR is infrastructure development proxied by roads paved, and IQ is institutional quality proxied by the summation of nine PRS indicators, including corruption, bureaucracy quality, rule of law, government stability, external conflict, internal conflict, investment profile, military in politics, and democratic accountability. The non-linear functions specified above can easily be estimated by converting equations (1), (2), and (3) into linear regressions after taking the logarithm of both sides, as stated in equations (4), (5), and (6). For Model 1 we obtain

$\ln PCC_t = \alpha_0 + \alpha_1 \ln OPEN_t + \alpha_2 \ln PGDP_t + \alpha_3 \ln FDI_t + \alpha_4 \ln FD_t + \alpha_5 \ln EDU_t + \alpha_6 \ln CPI_t + \alpha_7 \ln INFR_t + \alpha_8 \ln IQ_t + \mu_t$ (4)

and equations (5) and (6) are defined analogously with $\ln IMR_t$ and $\ln LE_t$ as the dependent variables. The ARDL model and the error correction specification are given in equations (7), (8), and (9) for Model 1, Model 2, and Model 3, respectively. For Model 1, the ARDL (bounds-test) specification is

$\Delta \ln PCC_t = \alpha_0 + \sum_{i=1}^{p} \alpha_{1i} \Delta \ln PCC_{t-i} + \sum_{i=0}^{q} \alpha_{2i} \Delta \ln OPEN_{t-i} + \dots + \sum_{i=0}^{q} \alpha_{9i} \Delta \ln IQ_{t-i} + \vartheta_1 \ln PCC_{t-1} + \vartheta_2 \ln OPEN_{t-1} + \dots + \vartheta_9 \ln IQ_{t-1} + \mu_{1t}$ (7)

where $\alpha_1 - \alpha_9$ and $\vartheta_1 - \vartheta_9$ are regression coefficients, $\alpha_0$ is a constant, and $\mu_{1t}$ is a white noise error term. The corresponding error correction model for Model 1 is specified as follows:

$\Delta \ln PCC_t = \alpha_0 + \sum_{i=1}^{p} \alpha_{1i} \Delta \ln PCC_{t-i} + \sum_{i=0}^{q} \alpha_{2i} \Delta \ln OPEN_{t-i} + \dots + \sum_{i=0}^{q} \alpha_{9i} \Delta \ln IQ_{t-i} + \gamma_1 ECM_{t-1} + \mu_t$

where $\alpha_1 - \alpha_9$ and $\gamma_1$ are coefficients, $\alpha_0$ is a constant, $ECM_{t-1}$ is the lagged error correction term, and $\mu_t$ is a white noise error term. The ARDL and error correction specifications for Model 2 (equation 8) and Model 3 (equation 9) are defined analogously, with $\ln IMR_t$ and $\ln LE_t$ as the dependent variables (a minimal estimation sketch of this procedure is given after the data description below).

Data Sources

This study uses annual data covering the period 1986 to 2018. All the variables are expressed in natural logarithms. Data on per capita consumption, per capita GDP, financial deepening, and gross secondary school enrollment were obtained from the World Bank Development Indicators. Foreign direct investment inflows, the consumer price index, and trade openness data were obtained from the United Nations Conference on Trade and Development (UNCTAD) statistics database.
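The following is a minimal, hedged sketch of the estimation workflow described above (unit root tests, lag selection, and the Pesaran et al. bounds test). It is not the authors' code: the file name "mali.csv" and its column names are hypothetical, and it assumes statsmodels version 0.13 or later, which provides the ARDL/UECM classes and a bounds_test method.

```python
# Sketch of the paper's workflow under stated assumptions, not a definitive implementation.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ardl_select_order, UECM

df = pd.read_csv("mali.csv", index_col="year")            # hypothetical data file
dep, regs = "LPCC", ["LOPEN", "LPGDP", "LFDI", "LFD", "LEDU", "LCPI", "LINFR", "LIQ"]

# Step 1: ADF unit-root tests in levels and first differences (I(0)/I(1) check).
for col in [dep] + regs:
    p_level = adfuller(df[col].dropna())[1]
    p_diff = adfuller(df[col].diff().dropna())[1]
    print(f"{col}: ADF p-value in levels = {p_level:.3f}, first difference = {p_diff:.3f}")

# Step 2: select the ARDL lag order by AIC, then run the Pesaran et al. (2001)
# bounds test on the corresponding unrestricted error-correction model (Model 1 here).
sel = ardl_select_order(df[dep], maxlag=2, exog=df[regs], maxorder=2, trend="c", ic="aic")
uecm_res = UECM.from_ardl(sel.model).fit()
print(uecm_res.bounds_test(case=3))  # compare the F-statistic with the I(0)/I(1) bounds
```

If the computed F-statistic lies above the upper I(1) critical bound for the chosen case, the null of no level relationship is rejected; that is the decision rule the paper applies to all three poverty proxies.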
The data on institutional quality were obtained from the International Country Risk Guide. Lastly, data on roads paved, the infant mortality rate, and life expectancy were obtained from the African Development Indicators database.

Stationarity Tests

Before conducting a test for cointegration, we have to make sure that none of the variables under consideration is integrated at an order higher than one. Thus, to test the integration properties of the series, we used the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests. Table 1 below presents the results of the stationarity tests. From the table, we can see that all the variables are non-stationary in levels, which means that all of them contain a unit root. This can be seen by comparing the P-values of both the ADF and PP test statistics with the 1 per cent, 5 per cent, and 10 per cent levels of significance. However, they become stationary at first difference. Thus, we can conclude that all the variables are integrated of order one, I(1), which confirms the suitability of the ARDL-based analysis. Note: *, **, and *** show significance at the 10%, 5%, and 1% levels, respectively; L denotes the natural logarithm. The software E-views 10 was used for these tests.

Bounds Testing Approach to Cointegration

After establishing that all the variables are integrated of order one, I(1), the next step is to employ the ARDL approach to cointegration in order to determine the long-run relationship among trade liberalization, economic growth, foreign direct investment, financial deepening, education, the consumer price index, infrastructure development, institutional quality, and poverty reduction (per capita consumption, infant mortality rate, life expectancy). The results show that the calculated F-statistic is greater than the critical values at the 10%, 5%, and 1% levels for all poverty measures. Note: *, **, and *** show significance at the 10%, 5%, and 1% levels, respectively. The software E-views 10 was used for these tests.

Discussion of the Empirical Results on the Relationship between Trade Liberalization and Per Capita Consumption

The results presented in Table 3 for Model 1 show that the coefficient of trade openness is negative in both the long run and the short run but statistically significant only in the long run. According to this result, a 1 per cent rise in trade openness decreases per capita consumption (or increases the poverty level) by 0.27 per cent in the long run, while it does not influence per capita consumption in the short run. The long-run result is contrary to the expected sign and to the finding of Giles and Williams (2000), who show that an increase in trade can increase exports, which comes with increased incomes and hence higher consumption. However, it is in line with the findings of Magombeyi and Odhiambo (2017) and Onakoya et al. (2019). The negative relationship could be due to several reasons. The first one is the low level of financial intermediation, the low level of education, and poor institutional quality: Le Goff and Singh (2013) demonstrated that openness to international trade cannot reduce poverty in countries with low access to credit, a low level of education, and weak governance. The second reason for the negative relationship could be the export structure of Mali, which has remained unchanged and highly concentrated on three primary commodities. This makes the country vulnerable to world price volatility and weather conditions.
According to Hewit (2003), commodity price instability hurts economic growth and leads to increased poverty. The short-run result is consistent with the findings of several studies in the literature (see Akmal et al., 2007; Chaudhry & Imram, 2013; Agasalim, 2017). The explanation in the case of Mali is that the majority of poor people are not directly linked to the country's export products (gold and cotton): although some poor people produce cotton, most of them are producers of crops such as rice, millet, and sorghum. With regard to the control variables, (i) economic growth has a positive and significant impact on per capita consumption in both the long run and the short run. This suggests that a one percent increase in economic growth increases per capita consumption (or decreases poverty levels) by 1.04 percent in the long run and 1.22 percent in the short run. Thus, this result is in line with the expected sign and consistent with the absolute income hypothesis, which indicates that an increase in income will increase consumer spending. (ii) The coefficient of FDI is negative and statistically significant in the long run, while positive and significant in the short run. (iii) The coefficient of financial deepening, proxied by domestic credit to the private sector, is insignificant in the long run, while it is positive and statistically significant in the short run. This suggests that an increase in financial deepening does not affect per capita consumption in the long run but increases it by 0.08 percent in the short run. (iv) The coefficient of education (proxied by gross secondary school enrollment) is negative and statistically insignificant in the long run, while a positive relationship with per capita consumption is confirmed in the short run. This suggests that an increase in education does not play a significant role in reducing poverty in Mali in the long run but increases per capita consumption by 0.52 percent in the short run. (v) The coefficient of CPI is found to have a positive impact on per capita consumption in both the long run and the short run: a one percent increase in the consumer price index leads to a 0.76 percent and a 0.55 percent increase in per capita consumption in the long run and short run, respectively. (vi) The coefficient of infrastructure development, captured by roads paved, is positive but statistically insignificant in both the long run and the short run. This suggests that infrastructure development does not play a significant role in poverty reduction in Mali. (vii) The coefficient of institutional quality is negative and statistically significant in both the long run and the short run, while a statistically significant positive relationship is confirmed at lag one in the short run. (viii) The coefficient of the ECM term is negative and statistically significant at the 1 percent level of significance, which implies that the results support the existence of a long-run association between all the variables used in this study. It also suggests that approximately 70 percent of the short-run disequilibrium is corrected each period toward the long-run equilibrium. Regarding robustness, the R-squared value is 0.97, which implies that the independent variables jointly account for about 97 percent of the total variation in per capita consumption.
The remaining 3 percent may be due to other factors such as unstable rainfall. The adjusted coefficient of determination (R2) value of 0.95 implies that 95 per cent of the total variation in per capita consumption is explained by the change in the endogenous variables when the coefficient of determination is adjusted for degrees of freedom. The F-statistic value of 45.59 is statistically significant at the 5 percent level of significance, which implies that the model is a good fit. The Durbin-Watson statistic value of 2.70 indicates the absence of auto-correlation in the estimated model. The diagnostic test results conclude that there is no serial correlation, heteroscedasticity, or anomaly. This implies that the model has no problem.

Discussion of the Empirical Results on the Relationship between Trade Liberalization and Infant Mortality Rate

The results presented in Table 3 for Model 2 show that trade openness is negative in both the long run and the short run but statistically significant only in the short run. This implies that an increase in trade openness does not influence the infant mortality rate in the long run but significantly reduces it by 0.05 percent in the short run. The long-run result is contrary to the expected sign and to the findings of some economists (see Herzer, 2017; Novignon et al., 2018), while consistent with the findings of Magombeyi and Odhiambo (2017) and Barlow (2018). This may be due to the fact that most poor people are not directly linked to Mali's merchandise exports. For instance, the production and export of gold, Mali's main export good (66 percent of total merchandise exports on average), has created formal jobs for skilled labour, who generally come from urban areas. The short-run result, however, may be explained by access to medical goods made possible by trade openness. According to Papageorgiou et al. (2007), medical products imported from countries that are major exporters of medical technology are positively correlated with health status in countries that do not perform pharmaceutical R&D. In the case of Mali, access to imported goods such as impregnated mosquito nets has reduced the number of confirmed cases of malaria (a leading cause of death among infants and pregnant women) over the past decade. With regard to the control variables, (i) economic growth, proxied by per capita GDP, is negative in both the long run and the short run, but it is statistically significant at 5 per cent only in the long run. This implies that an increase in GDP per capita reduces infant mortality by 0.51 per cent in the long run, while its impact is insignificant in the short run. (ii) FDI is positive and statistically significant in both the long run and the short run; however, a negative and significant relationship exists between foreign direct investment and the infant mortality rate in the short run at lag one. The outcome suggests that an increase in foreign direct investment tends to increase the infant mortality rate by 0.025 percent and 0.004 percent in the long run and short run, respectively, while decreasing infant mortality in the short run at lag one by 0.008 percent. (iii) Financial deepening is positive and insignificant in both the long run and the short run, while at lag one in the short run it has a negative and significant relationship with the infant mortality rate.
This suggests that financial deepening does not have any significant impact on infant mortality in Mali in either the long run or the short run, whereas it reduces the infant mortality rate in the short term at lag one. (iv) Education has a negative association with the infant mortality rate in both the long run and short run, but its impact is statistically significant only in the short run. However, it has a positive impact on the infant mortality rate at lag one in the short run. This suggests that an increase in education does not play a significant role in the infant mortality rate in Mali in the long run, but reduces it by 0.04 percent in the short run. (v) The consumer price index, which is used as a proxy for inflation in this study, is found to have a negative relationship with the infant mortality rate in both the long run and short run, while the link is significant only in the short run. This result suggests that an increase in inflation does not statistically affect the infant mortality rate in the long run but significantly reduces it by 0.07 percent in the short run. (vi) Infrastructure development, captured by paved roads, is negative in both the long run and short run, while statistically significant only in the long term. An increase in infrastructure development by 1 percent would decrease the infant mortality rate by 0.13 percent in the long run. At the same time, it does not have any significant impact on the infant mortality rate in the short term. (vii) Institutional quality has a positive impact on the infant mortality rate in the long run, while an insignificant negative relationship was confirmed in the short run. Also, in the short run at lag one, institutional quality has been found to reduce infant mortality significantly. (viii) The coefficient of the ECM term is negative and statistically significant at the 1 percent level, which implies that the results support the existence of a long-run association between all the variables used in this study. It also suggests that approximately 49 percent of the short-run disequilibrium is corrected in the long run. Turning to the robustness of the model, the R-squared value is 0.93, which implies that the variables (trade openness, economic growth, foreign direct investment inflows, financial deepening, consumer price index, education, infrastructure development, and institutional quality) jointly account for about 93 percent of the total variation in the infant mortality rate. The remaining 7 percent may be due to other factors such as unstable rainfall. The adjusted coefficient of determination (R2) value of 0.88 implies that 88 percent of the total variation in the infant mortality rate is explained by changes in the explanatory variables once the coefficient of determination is adjusted for the degrees of freedom. The F-statistic value of 17.57 is significant at the 5 percent level with a probability value of 0.00, which implies that the model is a good fit. The Durbin-Watson statistic value of 2.45 indicates the absence of auto-correlation in the estimated model. Moreover, serial correlation, heteroscedasticity, normality and Ramsey RESET tests are performed to check the model. The results indicate neither serial correlation nor heteroscedasticity, and no evidence of non-normality or functional-form misspecification. This implies that the model is well specified. 
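The same battery of summary diagnostics (R-squared, adjusted R-squared, F-statistic, Durbin-Watson, and the ECM adjustment coefficient) is reported for each of the three models. As a rough illustration only, the sketch below shows how such an error-correction regression and its diagnostics could be computed with statsmodels; the file name, column names, and two-step OLS shortcut are hypothetical stand-ins, not the authors' actual data or ARDL estimation.

```python
# Hedged sketch of an error-correction regression and the diagnostics quoted
# in the text (R-squared, adjusted R-squared, F-statistic, Durbin-Watson).
# Data file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("mali_annual.csv", index_col="year")  # one row per year

# Long-run (cointegrating) regression in levels; its lagged residual is the
# error-correction (ECM) term used in the short-run equation.
longrun = sm.OLS(df["ln_consumption"],
                 sm.add_constant(df[["ln_openness", "ln_gdp", "ln_fdi"]])).fit()

short = df.diff().add_prefix("d_")            # first differences
short["ecm_lag"] = longrun.resid.shift(1)     # lagged disequilibrium
short = short.dropna()

ecm = sm.OLS(short["d_ln_consumption"],
             sm.add_constant(short[["d_ln_openness", "d_ln_gdp",
                                    "d_ln_fdi", "ecm_lag"]])).fit()

print(ecm.params["ecm_lag"])                  # speed of adjustment, e.g. about -0.70
print(ecm.rsquared, ecm.rsquared_adj, ecm.fvalue)
print(durbin_watson(ecm.resid))               # values near 2 indicate no autocorrelation
```

An ECM coefficient of about -0.70, as reported for model 1, would mean that roughly 70 percent of a deviation from the long-run relationship is corrected within one period.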
Discussion of the Empirical Results on the Relationship between Trade Liberalization and Life Expectancy The results presented in Table 3 for model 3 show that trade openness has a negative and insignificant coefficient in the long run, while it is positive and statistically significant in the short run. This suggests that an increase in trade openness does not significantly affect life expectancy in the long run, but increases it by 0.01 percent in the short run. The long-run result is contrary to the expected sign and to the findings of Novignon et al. (2018), while consistent with the findings of Magombeyi and Odhiambo (2017) and Barlow (2018). As with the infant mortality rate, this may be because most poor people are not directly linked to Mali's merchandise exports in the long run. The short-run result may reflect imports of medical goods such as mosquito nets and antimalarial drugs. With regard to the control variables, (i) GDP per capita is found to have a positive and significant impact in both the long run and short run, while the table shows a negative relationship between GDP per capita and life expectancy at lag one in the short run. This implies that a 1 percent increase in GDP per capita raises life expectancy (or reduces poverty) by 0.22 percent and 0.07 percent in the long run and short run, respectively. (ii) FDI has a negative and statistically insignificant impact on life expectancy in the long run, while a positive and statistically significant relationship is found in the short run. This suggests that an increase in foreign direct investment does not significantly influence life expectancy in the long run, but helps to increase it in the short run. (iii) Financial deepening has a positive relationship with life expectancy in both the long run and short run, but it is statistically significant only in the short run. This suggests that financial deepening does not influence life expectancy in the long run, while increasing it by 0.03 percent in the short run. (iv) Education (proxied by gross secondary school enrollment) has a positive and statistically significant relationship with life expectancy in both the long run and short run. This suggests that an increase in education will increase life expectancy in Mali by 0.09 percent and 0.03 percent in the long run and short run, respectively. (v) The consumer price index has a negative and insignificant impact on life expectancy in the long run, while a positive and significant impact is revealed in the short run. This implies that an increase in the consumer price index does not significantly affect life expectancy in the long run, while increasing it by 0.07 percent in the short run. (vi) Infrastructure development has a negative and insignificant impact on life expectancy in the long run, while positively affecting it in the short run. This implies that a 1 percent increase in infrastructure development does not play any significant role in life expectancy in the long run but increases it by 0.01 percent in the short run. (vii) Institutional quality has a negative impact on life expectancy in both the long run and short run, but it is statistically significant only in the long run. Additionally, at lag one in the short run, the table shows a significantly positive association between institutional quality and life expectancy. (viii) The coefficient of the ECM term is negative and statistically significant at the 1 percent level, which implies that the results support the existence of a long-run association between all the variables used in this study. 
It also suggests that approximately 66 percent of the short-run disequilibrium is corrected in the long run. Turning to the robustness of the model, the R-squared value is 0.94, which implies that the independent variables jointly account for about 94 percent of the total variation in life expectancy. The remaining 6 percent may be due to other factors such as unstable rainfall. The adjusted coefficient of determination (R2) value of 0.88 implies that 88 percent of the total variation in life expectancy is explained by changes in the explanatory variables once the coefficient of determination is adjusted for the degrees of freedom. The F-statistic value of 16.13 is significant at the 5 percent level with a probability value of 0.00, which implies that the model is a good fit. The Durbin-Watson statistic value of 2.39 indicates the absence of auto-correlation in the estimated model. Moreover, serial correlation, heteroscedasticity, normality and Ramsey RESET tests are performed to check the model. The results indicate neither serial correlation nor heteroscedasticity, and no evidence of non-normality or functional-form misspecification. This implies that the model is well specified. Stability Test Finally, Figures 1, 2 and 3 present the results of the stability tests for models 1, 2 and 3. The cumulative sum of recursive residuals (CUSUM) and the cumulative sum of squares of recursive residuals (CUSUMSQ) are employed to check the stability of the three models. From the figures, it can be seen that the plots of CUSUM and CUSUMSQ remain within the 5 percent significance bounds, which suggests that the residual variance of all the models is stable, hence confirming the stability of the three models. Conclusion and Policy Recommendations The main objective of our study was to investigate the long-run and short-run impact of trade liberalization on poverty reduction in Mali during the period 1986-2018. This was done using the ARDL bounds test. The results indicated that trade liberalization increases poverty in Mali in the long run and reduces it in the short run. Similarly, variables such as financial deepening, education, infrastructure development, and institutional quality tend to increase poverty in the long run and reduce it in the short run. This confirms that the impact of trade liberalization on poverty reduction depends on complementary policies. In order to benefit from the positive effects of trade liberalization, the government of Mali should diversify its exports by increasing the number of export products and partners. The government must also add value to most of the primary commodities in which it has a comparative advantage. The quality of education must be improved by increasing the number of schools, qualified teachers, and training opportunities for low-skilled labour, particularly in rural areas. The government must particularly invest more in women's education. This will not only increase their employment opportunities but will also help to reduce child mortality. Through the private-sector guarantee fund, access to medium- and long-term credit must be improved, especially for poor people and small and medium enterprises. A massive investment must be made in the infrastructure sector (mainly roads, railways, energy, telecommunications, water, slaughterhouses, and so on) in order to reduce transaction costs and improve export competitiveness. The quality of institutions must be enhanced through the fight against corruption and the consolidation of democracy. 
Following Mauritius and Ethiopia, the export processing zone should be established to attract more foreign investment in low-skilled labour-intensive industries (such as textile and agro-business). Finally, regional integration policies should be pursued and strengthened by the government through regional infrastructure projects and standardization of border procedures. This is critical because, as a landlocked country, Mali requires secure access to ports and to quality port services in neighbouring countries.
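As a complement to the Stability Test section, the following sketch shows one way a CUSUM-type parameter-stability check on OLS residuals could be run; it assumes a fitted statsmodels results object (for example the hypothetical error-correction regression sketched earlier) and uses the packaged residual-based CUSUM test rather than the authors' own CUSUM/CUSUMSQ plots.

```python
# Hedged sketch: residual-based CUSUM stability check, analogous in spirit to
# the CUSUM/CUSUMSQ plots discussed in the Stability Test section.
# `ecm` is assumed to be a fitted statsmodels OLS results object.
from statsmodels.stats.diagnostic import breaks_cusumolsresid

stat, pval, crit = breaks_cusumolsresid(ecm.resid, ddof=int(ecm.df_model))
print(f"sup statistic = {stat:.3f}, p-value = {pval:.3f}")
# Under the null of parameter stability, a p-value above 0.05 corresponds to
# the cumulative sum staying inside the 5 percent bands.
```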
7,016.8
2020-08-04T00:00:00.000
[ "Economics" ]
Spontaneous Innovation for Future Deception in a Male Chimpanzee Background The ability to invent means to deceive others, where the deception lies in the perceptually or contextually detached future, appears to require the coordination of sophisticated cognitive skills toward a single goal. Meanwhile innovation for a current situation has been observed in a wide range of species. Planning, on the one hand, and the social cognition required for deception on the other, have been linked to one another, both from a co-evolutionary and a neuroanatomical perspective. Innovation and deception have also been suggested to be connected in their nature of relying on novelty. Methodology/Principal Findings We report on systematic observations suggesting innovation for future deception by a captive male chimpanzee (Pan troglodytes). As an extension of previously described behaviour – caching projectiles for later throwing at zoo visitors – the chimpanzee, again in advance, manufactured concealments from hay, as well as used naturally occurring concealments. All were placed near the visitors' observation area, allowing the chimpanzee to make throws before the crowd could back off. We observed what was likely the first instance of this innovation. Further observations showed that the creation of future-oriented concealments became the significantly preferred strategy. What is more, the chimpanzee appeared consistently to combine two deceptive strategies: hiding projectiles and inhibiting dominance display behaviour. Conclusions/Significance The findings suggest that chimpanzees can represent the future behaviours of others while those others are not present, as well as take actions in the current situation towards such potential future behaviours. Importantly, the behaviour of the chimpanzee produced a future event, rather than merely prepared for an event that had been reliably re-occurring in the past. These findings might indicate that the chimpanzee recombined episodic memories in perceptual simulations. Introduction We present systematic observations of a male chimpanzee who appears to have invented the use of concealments -both manufactured and naturally occurring ones -to be used for projectiles for future throwing at zoo visitors. That is, planning behaviours that produced a possibly desired outcome in the future, instead of relying on mere preparation for an upcoming situation that has been experienced before. It has been suggested that human planning skills evolved in response to an increasingly complex social environment [1,2]. Undoubtedly, thinking about how one's current actions will affect others' future behaviours often steers one's choices. Our long-term social predictions are arguably important in both cooperative and competitive contexts. Planning for how to deceive prey or opponents before encountering them is an effective low-cost strategy. The ability to solve new problems or to come up with novel solutions to old problems has often been associated with innovation. Innovations for deception are prime examples of social innovations [3]. Foresight The theoretical roots of cognitive foresight research lie in the field of memory studies. In 1972, Tulving proposed a distinction between semantic and episodic memory [4], creating an essential framework for current animal research on foresight and memory. An easy way to distinguish them is to regard the first as knowing, the latter as remembering. The semantic system represents general knowledge about the world. 
By contrast, the episodic system involves perceptual simulations from a first-person perspective. Knowing that Budapest is the capital of Hungary comes from the semantic system, but remembering the sight and smell of the fig tree in the back yard of the city's royal palace comes from the episodic system. Tulving made a notable addition to his initial theory by making a type of consciousness -autonoetic (self-knowing) consciousness -a necessary correlate of the episodic system [5]. At the same time Tulving was introducing autonoetic consciousness, another hypothesis was being put forward: the episodic system provides not only memories of past events but also mental constructs of possible future ones. This hypothesis has now been confirmed in several areas, from neurocognition to child development (for review see e.g. [6,7]). It appears as though episodic memory contributes previously experiences that are recombined into a novel construct, representing a possible future event. To elucidate the distinctively subjective, first-person-perspective of autonoetic consciousness, Tulving used the phrase mental time travel: autonoetic consciousness makes it possible to travel in time cognitively and phenomenally, to revisit or pre-visit events. Metaphorically, autonoetic consciousness provides the ''inner eye'' by which one ''sees'' past or future, perceptually simulated, events. Animal studies face a problem: it is problematic methodologically to rely on a terminology that presupposes phenomenal consciousness. This has caused considerable quandaries over how to parsimoniously interpret the results of certain studies on planning and memory in corvines and primates [8][9][10][11][12][13][14][15]. Is it ever possible to know whether an animal uses an episodic system given that one has no way to probe subjective experiences? Is it therefore also valid to deny the existence of an episodic system even if behavioural and neurobiological data suggest one, just because of the lack of phenomenal insight? It is in fact not known whether the phenomenal experience that accompanies human foresight is functional or merely an epiphenomenal byproduct of other processes. It is however roughly known which brain areas are involved in episodic operations in humans, and that those operations seem to rely partly on reorganising stored perceptual inputs (for review see e.g. [7]). In principle, those operations are empirically testable in non-humans -indeed, they have partly been studied [16]. One way to avoid arguments dependent on phenomenological access is to distinguish sensations from perceptions: sensations describe the subjective experience of events, perceptions their physical interpretation [17]. An episodic system relying on perceptual simulation does not logically entail subjective experience. However, it does presuppose (re-)organization of perceptually detached information. This is a somewhat different way to avoid the problem of subjective experience than the one taken by Clayton and colleagues [18]: instead of returning to the initial definition of episodic memorieswhich did not include consciousness or simulation -we propose a more neurobiologically based, but also non-phenomenal, approach, where perceptual simulations are central. An important empirical challenge is to show whether the futureoriented behaviour in question relies on something more than mere cognitive repetition of an entire previous experience. 
That is, whether the animal under study can prepare for novel situations that require mentally recombining perceptual elements into new configurations, as the human episodic system allows. Such a finding for a non-human species would strongly suggest the existence of an episodic system. Many investigations and much debate have concerned the so-called Bischof-Köhler hypothesis [19][20][21][22][23]. Suddendorf and Corballis [24] first offered the hypothesis, stating that ''…animals other than humans cannot anticipate future needs or drive states and are therefore bound to a present that is defined by their current motivational state''. It does seem that an episodic system facilitates such anticipation; however, passing or failing the Bischof-Köhler ''test'' is not necessary, and perhaps not even sufficient, for establishing or rejecting episodic foresight in nonhuman animals: a certain flexibility appears just as important. (For similar ideas, see [25]) Deception Numerous reports of deceptive primate behaviours exist [26,27]. Some exist for corvines as well [28][29][30][31]. Byrne and Whiten [32] introduced the concept of tactical deception, which they later elaborated on [33]. Tactical deception is a type of behavioural deception, not a morphological one as for example mimicking the colour pattern of a venomous snake. Under normal circumstances, the behaviour in question is presented ''honestly''; however, in this case it is used tactically, to mislead. Consider a raven that appears to make a cache in the presence of onlookers, even though it does not empty the contents of its beak. Of course, in many instances tactical deception can occur without the deceiver having any representation of the false knowledge states of the deceived. Such representations require that one have a so-called Theory of Mind [34]: an understanding of that the other's psychological state lies behind the behaviour. That skill is sometimes called mind reading. Theory of mind or mind reading is not required where the ''reader'' has associatively learned relationships of others' behavioural responses to different circumstances -or even where one can reason what one would have done in the situation the other is in, without assuming anything about the other's state of mind. Such exceptions to mind reading could include one's generalized experience of others' direct line of gaze, with no conceptual understanding of them as ''seeing''. An example would be that when food is and has been outside the other's direct line of gaze, the other makes no attempt to take it [35]. This broader category of behaviour-predicting skills is often referred to as behaviour reading. Although no single study has provided unequivocal evidence for mindreading in non-human animals, some argue that the combined weight of studies imply that at least chimpanzees and some corvines take into account the goals and perceptual perspectives of others -although maybe not their beliefs [36]. Those who reject this often argue that the studies are methodologically flawed and unable even in principle to infer mental state attribution: the results could be interpreted as reflecting no more than behaviour reading [35,37]. Innovation Innovations in animals have been observed in a wide range of species [38][39][40]. Such innovation has received most attention from ecological approaches and from the perspective of its role in cultural transmission. 
However, it remains under-studied from a cognitive perspective, so that the underlying proximate mechanisms are neither well identified nor understood. The difficulty pinpointing the cognitive mechanisms underlying innovation is partly related to the difficulty of defining it. Innovation can be viewed either as the product (i.e., a novel behaviour pattern [39]) or as the process that results in novel behaviour [41]. Given these two perspectives, Reader and Laland [42] argue that innovations (the product) are learned behaviour patterns. It follows that innovation (the process) requires learning. This excludes from the definition mere chance behaviour or innate behavioural expressions. Reader and Laland recognize that general learning alone cannot explain innovation. They suggest a number of broad cognitive mechanisms -or behavioural processes -underlying innovation (facilitating the necessary learning): e.g., exploration, insight, creativity, and behavioural flexibility. Unfortunately, these labels are all more or less poorly understood. The cognition behind innovation remains largely uncharted. What is interesting given the scope of the current study is the way that innovation and deception have been linked in the context of primates' social life [3,39]. The two skills do seem closely related: innovation can be said to occur when an existing signal or other behaviour is used in a novel way [39]; tactical deception occurs when a familiar and normally honest signal is used in a new and misleading way [33]. Previous report on the chimpanzee of this study In 2009 one of us (MO) reported on the projectile-related behaviour of the male chimpanzee, who is also the subject in this study [23]. In 1997 the chimpanzee started to gather stones from the water moat surrounding the outside compound and to store them hours before he threw them in dominance displays at the arriving zoo visitors. The behaviour was detected after some days with an unusually high number of projectiles being thrown. When cleaning the island compound, the zookeepers found five stone caches placed at the shoreline facing the visitors' area. In the following days, a zookeeper placed herself in a blind to observe the chimpanzee's behaviour during the morning hours. He was found to retrieve stones from the moat and place them in piles. In 1998, the chimpanzee started to manufacture projectiles by breaking off loose pieces from the compound's concrete surface, and then placing them in the caches. The behaviour was observed a high number of times during the decade covered by the report. The key findings were not only that the ape prepared for future throwing when the visitors were outside his field of perception, but also that there appeared to be a dissociation between his emotional states: calm during the gathering process, agitated during the throwing sessions. These behaviours indicate foresight based on the episodic system. Nonetheless, concerns have been raised over how the findings should be interpreted -because no detailed data are available on the chimpanzee's behaviour and circumstances at the moment when the first caches were made [43,44]. Such information would have been valuable for the understanding of the underlying factors behind the behaviour. That said, explanations based solely on associative learning mechanisms are difficult to motivate. 
Even if the behaviour did start out by chance, or if initially, the chimpanzee took the stones from the water and cached them along the shore for some purpose other than throwing them lateri.e., even if he only came later to realise that they could be thrown -one still needs an explanation for the complexity of the resulting behaviour, including the time spans and the manufacturing of projectiles. One also needs to take into account the experimental results on foresighted behaviours in chimpanzees, which suggest that associative learning alone cannot explain such behaviour. It has been experimentally controlled for that chimpanzees do not merely rely on conditioning in tasks of future tool use [22,45]. And, on the other side of the coin, it has been suggested that chimpanzees are unable to learn to bring an item intended for future exchange for food from a human, despite extensive prior reinforcement training on the item [46]. These different findings suggest that associative learning cannot on its own explain foresighted behaviour in chimpanzees. To gain more detailed information we systematically studied how the projectile related behaviour starts at the beginning of a zoo visitors' season. This does not address the problem with lack of data from the behaviour's initial inception; however, it complements the earlier work and offers potential for more fine-grained insights. During the 2010 season, previously unobserved behaviours were documented, comprising both deception and innovation in relation to the chimpanzee's projectile planning activities. Ethics statement The work was carried out under the Uppsala regional ethics committee approval No C199/9. The Swedish Agricultural board (No. 31-2599/09) has approved Furuvik Zoo as a cognitive research facility on chimpanzees. Subject The male chimpanzee, Santino, was born in 1978 at Munich Zoo in West Germany. At the age of five, he was transferred to Furuvik Zoo, Sweden, where he has lived ever since. Over the years, the composition of Santino's group varied, ranging between four and seven individuals of mixed sexes and ages. When Santino became the dominant male at the age of 16, there was only one other male in the group. This male died within the first year of Santino's dominance, leaving Santino as the sole male, as he has remained until the date of this study. When this study was conducted, apart from the male, the group consisted of five females, two adults, two sub-adults and one infant. Methodological premises Furuvik Zoo is only open to the general public for a short season: typically June to August. The general season is in some years preceded by a shorter pre-season -usually in May -during which the only visitors are guided educational groups. This study was carried out in 2010 and the pre-season and general season followed this pattern. The division of pre-and general season governed the methods used. Conducting a study where human bystanders are involved presents challenges: in particular, the ethics of studying a potentially dangerous behaviour. Ethically, the observer, aware of Santino's projectile-throwing behaviour, could not fail to intervene upon observing preparations for impending throws. During the pre-season, a zoo ethologist guided the groups, and each visitor was informed about the chimpanzee's throwing behaviour. Given this, it was ethically appropriate to observe the chimpanzee's preparation of the projectiles without interference. 
The pre-season afforded a well-controlled setting compared to the general season, when a large number of visitors is moving around. Among other things, it was possible to make accurate observations on whether visitors were out of the chimpanzee's view. Two principal, complementary methods were used: (i) direct behavioural observations and (ii) recovery of projectiles from the compound at the end of a day. During the general season, only the latter method could be used. Behavioural observations The primary goal was to address how the chimpanzee initiates his projectile-throwing behaviour at the start of the visitors' season. Therefore, behaviour sampling with continuous recording was used from the moment visitors were present during the pre-season. An observation session began the moment a visitors' group entered the vicinity of the chimpanzee compound. The session ended 30 minutes after the visitors left. Two central observational codes requires some elaboration: Throws and throw attempts were recorded according to the position from which they were executed. It was not always possible to reliably observe the number of projectiles per throw, given the speed of the throws and the frequency with which multiple projectiles were thrown at once. Likewise it was not possible to reliably retrieve thrown projectiles, due to the dense vegetation around the compound. A hiding was recorded if the observer clearly saw at least one projectile being placed behind or underneath something that would block the view. No hidings were recorded where the chimpanzee was simply active in areas that were later found to contain projectiles. This was a conservative coding, given the difficulty of seeing projectiles in the chimpanzees' closed hand. (Obviously, this code was not incorporated immediately, but only after the first observation of a hiding). The observer needed to be out of the chimpanzee's view, during the periods when he did not have visitors. In consequence, the observer did not have an unobstructed view of the entire island: that would only have been possible with three simultaneous observers, who would have been visible to the chimpanzee. However, none of these restrictions proved problematic for recording of the essential initial behaviours. Recovery of projectiles At the end of each day, remaining projectiles and concealments were documented and removed. This was the only method deployed once the general season began, and the monitoring continued for 114 days. However, Santino only engaged in projectile-related behaviour on two days of the general season. Although none of the projectiles concealed by the hay piles originated at the place of concealment, that possibility did arise for those projectiles placed behind one of two logs, where in each case potentially loose concrete was present. The position of the projectiles might in this case then be a result of chance, rather than from intentional concealment. Therefore two types of controls were used. First, two observers independently scanned all concrete areas of the island, both visually and by probing the concrete with the side of the fist (similar to Santino's own behaviour). Second, the two observers independently examined the colour and structure of the projectiles, to judge whether they matched the pattern of the adjacent concrete. 
Initial behaviours The primary aim of this study was to document how the projectile behaviour was initiated in a zoo season, and it turned out that the first observations yielded findings indicating intentional deception and innovation. Therefore the initial behaviours were essential and are described in detail. The first attempt to throw projectiles in 2010 involved the first visitors of the pre-season. The attempt was preceded by typical male chimpanzee dominance display behaviour: aggressive bipedal locomotion, pilo-erection and vocalization. The projectiles were chipped off the surface layer of the concrete in the outdoor compound island immediately before they were used. The guiding zoo ethologist backed the group away before the ape could release the projectile, and he consequently desisted from throwing. This pattern was repeated three times in a row. When the group returned, 190 minutes later, the male made no aggressive displays. Instead he walked from the centre of the compound island toward the group, with two concrete projectiles in his hand. To the guide, his appearance did not suggest intentions of throwing. The chimpanzee even stopped and picked up an apple floating in the water, from which he took a bite as he continued approaching the visitors. Just within range, he made a sudden throw at the group (see Figure 1). This behaviour fits with a category of deception referred to as creating a neutral image: in this case, inhibiting an aggressive intent in order to secure a close approach [3]. The following day, the chimpanzee made two further attempts, preceded by aggressive display. In both cases, the group backed away, and he desisted. When the group left, the chimpanzee was observed being active in the area of one of the logs; thereafter he brought a melon-sized heap of hay from the inside enclosure (see Figure 2). This was placed on the island, close (8 metres) to the visitors' area. Subsequently he placed under the hay an unknown number of projectiles that he had carried in his hand. When the group returned to the compound 60 minutes later, the chimpanzee sat beside the hay. As the group approached, without preceding display, he threw a projectile stored under the heap. Shortly after, the chimpanzee positioned himself behind the log close to another part of the visitors' area (7 metres). When the group moved into this area, he threw two stored projectiles from behind the log. No display preceded the throws. When the group left the compound again the chimpanzee was observed to cache two more projectiles under the hay pile. These were thrown, with no preceding display, 20 minutes later when the group returned to the compound. In the evening the observers recovered twelve remaining projectiles from the island, all made of concrete. Out of these, seven were found in hides: one under the hay pile facing the moat, five behind the log and one under the hay outside the door to the indoor enclosure. A hay pile on the island, or any concealing behaviour, had not been observed previously, either by the authors of the current study or by the zookeepers. Due to the close monitoring and documentation of the chimpanzee's projectile caches since their beginning, it is close to certain that the hay hide was a first case of innovation for deception. The chimpanzee did, however, sometimes use hay as resting material directly outside the door to the enclosure, in a sheltered area approximately 22 metres from, and out of view of, the centre of the actual island. 
At the time of the first hay concealment the chimpanzee had not taken out any such resting material; he did so only afterwards, although later that day this resting material also served as concealment. The whole zoo season Through the course of the zoo season four hidings were directly observed as they took place (i.e. the actual projectiles were seen), always with an observer outside of the chimpanzee's view. In two cases the hay was transported from the inside enclosure and placed over the projectiles, and on two occasions the projectiles were placed under the hay. In these instances the chimpanzee had first encountered a group, and cached immediately after they left. On one of these occasions he did not throw the concealed projectiles, as the group did not return. It turned out to be problematic to directly observe any unambiguous hidings behind the logs and the rock structure. Projectile-oriented behaviour occurred on seven days within a period of 27 days. In all, 46 projectiles were recovered, of which 35 came from concealments. Three types of concealments were used: hay, logs (two different) and a protruding rock structure (see Figure 3 for the perspective from the visitors' side on the different concealments). Hay concealments were never placed behind the logs or the rock structure. The caches behind naturally occurring obstacles were visible to the chimpanzee but not to the visitors. Out of the 35 concealed recovered projectiles, 15 were placed under hay heaps (under 6 heaps; 2 ''empty'' heaps were also recovered), 18 were placed behind logs and 2 were placed behind a protruding rock structure (see Figure 4 for the distribution of projectiles on different dates). The non-visible projectiles were significantly more numerous than expected by chance (binomial test, P < 0.001) (see Figure 5). The chance level was set at 50%, which is highly conservative for three reasons: (1) the places with naturally occurring obstacles cover far less than 50% of the island's area; (2) the number of potential behaviours the ape can perform instead of manufacturing a hide from hay is far more than one; (3) a majority of the observed throws were made from hides, i.e. the projectiles remaining to be recovered from hides were fewer than the number actually hidden, as compared to the visible caches. The controls for areas with loose concrete that did not contain concealments or visible caches revealed eight such areas in both of the independently performed checks. The distribution of the 18 projectiles behind two (the logs) out of ten possible areas with loose concrete significantly deviates from a chance distribution (binomial test, P < 0.001). The controls of colour and structure showed that at least four projectiles almost certainly did not originate from the vicinity of the concealments (only applicable for the logs; the other concealment areas did not have loose concrete). This is a highly conservative measure, as the concrete is quite similar throughout the compound, and it should be understood as only a complementary control and not as the true number of transported projectiles. The way the chimpanzee coupled display behaviour with throws and throwing attempts further hints at the intent behind hiding the projectiles. Out of the observed throws and throwing attempts, 8 were made with projectiles from hides and 8 with visible projectiles. 
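The significance values quoted here and in the following paragraph can be reproduced from the reported counts with standard routines. The sketch below does so with SciPy, using 35 of 46 recovered projectiles found in concealments (against the conservative 50% chance level), all 18 log-cached projectiles located behind 2 of the 10 areas with loose concrete, and the display counts for concealed versus visible throws given in the next paragraph; it is an illustration, not the authors' analysis code.

```python
# Reproducing the reported tests from the counts given in the text (SciPy assumed).
from scipy.stats import binomtest, fisher_exact

# 35 of 46 recovered projectiles came from concealments; chance level set at 50%.
print(binomtest(35, n=46, p=0.5, alternative="greater").pvalue)      # < 0.001

# All 18 log-cached projectiles behind 2 of the 10 areas with loose concrete.
print(binomtest(18, n=18, p=2 / 10, alternative="greater").pvalue)   # < 0.001

# Display behaviour: 0 of 8 throws from concealments were preceded by a display,
# versus 7 of 8 throws of visible projectiles.
table = [[0, 8],   # concealed: displays, no displays
         [7, 1]]   # visible:   displays, no displays
print(fisher_exact(table)[1])   # two-sided p ~ 0.0014
```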
The chimpanzee did not display a single time before throwing from a concealment, while he displayed 7 out of 8 times when the projectiles were visible; the difference between the conditions is significant (Fisher's exact test, P = 0,00139). The only time the chimpanzee did not display in the visible condition was in the event described in the above section. These figures should be treated with some caution, as they also include throwing attempts in the visible condition. Discussion The likely explanation for the concealments is that they were intended for deception. Several factors converge on such an assumption: no hay concealments were ever placed behind concealing obstacles; most projectiles behind or under the concealments had been transported there; when concealed, the projectiles could be seen only from the viewpoint of the chimpanzee or not by anyone; observed throws from the concealments were never preceded by any display (suggesting the combining of two deceptive strategies); concealing behaviours were never observed when anyone was in the chimpanzee's view. It is less clear what prompted these deceptive behaviours and the use of hay as the concealing material. One could speculate about the chimpanzee's initial throwing experience of the season, watching the people backing away. Perhaps this led him to take deceptive action, so he could release the projectiles at closer range. The first time the chimpanzee -atypically for him -was observed slowly approaching the visitors, displaying no obvious aggressive intent, before suddenly throwing projectiles at them fits well with a documented deceptive category in primates. There is no way to tell whether this was the first time he ever used this strategy. The strategy might occasionally have been used in the past. What is close to certain, however, is that there had never before been a hay concealment on the chimpanzee island, nor had projectiles ever previously been found behind naturally occurring obstacles, only as completely visible and close to the shore line. The day the first concealments were made began as the day before, with the onlookers backing away. Those first concealments included both manufactured and naturally occurring ones. The chimpanzee was quite familiar with hay, giving him plenty of opportunities to learn its effect of blocking the view of objects; he was similarly familiar with logs. He also occasionally transported hay to a resting place just outside the door to the indoor enclosure, giving him experience of bringing hay from the inside. That said, any answer why and how he came up with the new strategy on his second day of visitors would be speculative. Interestingly, he did not start out on that second day using the deceptive strategy; his initial encounter with the visitors played out as before, and only on the second encounter did the aggression inhibition and use of concealment occur. One obvious gain from the new strategy is that the chimpanzee could use more projectiles in short succession. By combining his old strategy of gathering projectiles in advance with his new strategy of concealment and behavioural inhibition, he could extend his ability to throw stones at visitors from close range. Although, there is no way to tell whether this really was his motivation. Both the manufacture and use of the concealments were likely premeditated. The behaviour never occurred when anyone was within the chimpanzee's view, but only after a group had been present and left: i.e., prior to their possible return. 
That is, it appears to have been prompted by the prior presence of visitors on those days when it occurred: the chimpanzee prepared no concealments on days when he had not previously seen visitors. This departs from the chimpanzee's previously reported behaviour, by which he typically collected projectiles in the morning before the zoo opened, on days when the zoo had visitors. That said, the earlier observations were based mainly on the general season, not on the (rare) pre-season. During the general season, visitors come every day, while during the pre-season, they arrive sporadically, several days apart (see Figure 4 for the dates of the pre-season in 2010). Taken together, the results suggest that the chimpanzee crafted a desired outcome in a perceptually detached future by acting innovatively in his current situation. Such activity produces a specific future event, in contrast to activity that merely prepares for a future situation as repetition of a previously experienced event. That is why the most critical finding of this study is the observation of the first instance of the concealment behaviour. This is indication of the existence of that type of perceptual simulation used by humans in certain planning tasks: a recombining of components of previously experienced events. The data further show that chimpanzees are able to plan for social situations -at least for deception -and that social planning in general is not out of reach for chimpanzees, as was suggested in a study where chimpanzees were unable to plan for future exchange with humans [46]. Do the results imply that the chimpanzee possesses a theory of mind? Sensu stricto, it appears as the results do not: however elaborate, the concealments could be based on the chimpanzee's understanding of line of gaze. What the behaviour does appear to show is that the chimpanzee is able to predict the behavioural responses of others not present at the time of the prediction. Mind reading is characterized as reasoning about what is not overt in behaviour: i.e., mental states. What the chimpanzee appears to be reading is likewise not overt in any behaviour (the visitors are not present). That said, the performance is possible without representing anyone else's mental states. What does seem to be a possibility is detached perceptual constructs of others' behaviours. One means by which this might be achieved is again the episodic system, allowing the agent to simulate others in the context of a potential future situation. It has been suggested that in humans, foresight, memory, and the taking of others' viewpoints all seem to be supported by a common brain network [47]. The relevant brain structures appear to be largely shared with chimpanzees [33]. In the context of theory of mind and planning, it has been suggested that the meta-representational ability required for representing others' deviating psychological states is a prerequisite for representing one's own future deviating mental states and hence planning for them. The alleged lack of such an ability in non-human animals is one reason their planning is often taken to be highly restricted [1]. However, such an assumption is not necessary. When it comes to planning for your own deviating mental states it has been suggested that the perceptual construct of a potential situation plays a trick on the phylogenetically older parts of the brain: the structures governing motivation treat the construct more or less as true perceptions [48]. 
So, the potential future mental state, or motivation, is brought to the present and might act as a brake on the motivations directed towards the current situation. When planning for the potential future behaviours of others, we suggest that this could in principle also be solved by detached perceptual constructs of behaviours previously experienced under different circumstances. Then there is no need for theory-like reasoning about others' mental states; the behaviour could be ''read'' from the perceptual simulations (it is not necessary to represent others' mental states even to create the constructs; a learned behavioural catalogue would suffice). What underlies the perceptual simulations of potential futures, and what makes them form, is a highly interesting question beyond the scope of this study. The present report should be followed up by experimental investigations of whether chimpanzees -and other great apes -are in general capable of planning for future deception, and whether they have the ability to form representations of the future behaviours of others who are not present, given different situations. Such experiments would provide an interesting avenue for advancing the study of social cognition. As an endnote: when observations were continued in the 2011 season, the chimpanzee did not cache or throw a single projectile. He had suffered a hip injury at the beginning of the season and was both generally slowed down and reluctant to leave his indoor enclosure. By the middle of the season, at which point he had healed, he showed no inclination to throw stones. This is consistent with the pattern in the present and previous study, in which his projectile-related behaviour was found to stop sometime before the middle of the season.
7,856.8
2012-05-09T00:00:00.000
[ "Biology", "Psychology" ]
Search for squarks and gluinos in events with an isolated lepton, jets, and missing transverse momentum at √s=13 TeV with the ATLAS detector The results of a search for squarks and gluinos in final states with an isolated electron or muon, multiple jets and large missing transverse momentum using proton-proton collision data at a center-of-mass energy of √s = 13 TeV are presented. The data set used was recorded during 2015 and 2016 by the ATLAS experiment at the Large Hadron Collider and corresponds to an integrated luminosity of 36.1 fb−1. No significant excess beyond the expected background is found. Exclusion limits at 95% confidence level are set in a number of supersymmetric scenarios, reaching masses up to 2.1 TeV for gluino pair production and up to 1.25 TeV for squark pair production. Introduction Supersymmetry (SUSY) [1][2][3][4][5][6] is a theoretical framework of physics beyond the Standard Model (SM) which predicts for each SM particle the existence of a supersymmetric partner (sparticle) differing by half a unit of spin. The partner particles of the SM fermions (quarks and leptons) are the scalar squarks (q̃) and sleptons (ℓ̃). In the boson sector, the supersymmetric partner of the gluon is the fermionic gluino (g̃), whereas the supersymmetric partners of the Higgs boson (higgsinos) and the electroweak gauge bosons (winos and bino) mix to form charged mass eigenstates (charginos) and neutral mass eigenstates (neutralinos). In the minimal supersymmetric extension of the Standard Model (MSSM) [7,8] two scalar Higgs doublets along with their higgsino partners are necessary, resulting in two chargino states (χ±1,2) and four neutralinos (χ0 1,2,3,4). SUSY addresses the SM hierarchy problem [9][10][11][12] provided that the masses of at least some of the supersymmetric particles (most notably the higgsinos, the top squarks and the gluinos) are near the TeV scale. In R-parity-conserving SUSY [13], gluinos or squarks are pair produced at the Large Hadron Collider (LHC) via the strong interaction and decay either directly or via intermediate states to the lightest supersymmetric particle (LSP). The LSP, which is assumed to be the lightest neutralino (χ0 1) in this paper, is stable and weakly interacting, making it a candidate for dark matter [14,15]. The decay topologies targeted in this paper are largely inspired by decay chains that could be realized in the pMSSM scenario, which is a two-dimensional subspace of the 19-parameter phenomenological Minimal Supersymmetric Standard Model (pMSSM) [16,17]. Four SUSY models with gluino or squark pair production and different decay topologies are considered. The first two models, referred to as the gluino and squark one-step models for the rest of this paper, are SUSY simplified models [18][19][20] in which pair-produced gluinos or squarks decay via the lightest chargino (χ±1) to the LSP. In the model with gluino production, the gluino decays to the lightest chargino and two SM quarks via g → qq χ±1, as illustrated in Figure 1 (left). The gluino decay is assumed to proceed via virtual first- and second-generation squarks, hence no bottom or top quarks are produced in the simplified model. The chargino then decays to the LSP by emitting an on- or off-shell W boson, χ±1 → W(*)± χ0 1, depending on the available phase space. In the MSSM this decay chain is realized when the gluino decays, via a virtual squark that is the partner particle of the left-handed SM quark, to the chargino with a dominant wino component. 
In the squark production model, the squark decays to the chargino viaq → q χ ± 1 , followed by the same chargino decay, as illustrated in Figure 1 (middle). The third model, referred to as the gluino two-step model for the rest of this paper, assumes gluino pair production with a subsequent decay to the chargino viag → qq χ ± . The chargino then decays via emission of an on-or off-shell W boson to the second lightest neutralino according toχ ± → W ±χ 0 2 . In the last step of the cascade, the second lightest neutralino decays via emission of a Z boson to the LSP. The decay chain of this signal model is illustrated in Figure 1 (right). The model is used as a proxy for SUSY scenarios with many decay products in the final state. Within the MSSM, additional decay modes lead to a significant reduction in the cross-section times branching fraction for this particular decay. Finally, the fourth set of SUSY models, the pMSSM model, is selected to have a bino-dominated neutralino as the LSP, kinematically accessible gluinos, and a higgsino-dominated multiplet at intermediate mass. The higgsino multiplet contains two neutralinos (theχ 0 2 andχ 0 3 ) and a chargino. The decays proceed predominantly via virtual third-generation supersymmetric quarks due to their enhanced couplings with the higgsinos. Examples of dominant characteristic decay chains of this model for mχ ± 1 ∼ < 500 GeV and mg ∼ > 1200 GeV areg → ttχ 0 2,3 andg → tbχ ± 1 , withχ 0 2,3 decaying to Z/hχ 0 1 andχ ± 1 to W ±χ0 1 . In this search, the experimental signature consists of a lepton (electron or muon), several jets, and missing transverse momentum (E miss T ) from the undetectable neutralinos and neutrino(s). Depending on the sparticle masses of the model considered, different amounts of energy are available in their decays. Therefore, the number of leptons and jets in the final state, as well as their kinematic properties, depend on the mass spectrum in the model of interest. Four signal regions with jet multiplicities ranging from two to six are defined to provide sensitivity to a broad range of mass spectra in the gluino and squark one-step models. For the two-step and pMSSM models, a dedicated signal region requiring nine jets is constructed to take advantage of the large jet multiplicities in these models. In each signal region, the event yield is compared with the SM prediction, which is estimated using a combination of simulation and observed data in control regions. The results of all Run-1 ATLAS searches targeting squark and gluino pair production are summarized in Ref. [28]. The same SUSY models considered in this paper were also targeted in other Run-2 ATLAS searches using different experimental signatures [29][30][31]. This paper is structured as follows. After a brief description of the ATLAS detector in Section 2, the simulated data samples for the background and signal processes used in the analysis as well as the dataset and the trigger strategy are detailed in Section 3. The reconstructed objects and quantities used in the analysis are described in Section 4 and the event selection is presented in Section 5. The background estimation and the systematic uncertainties associated with the expected event yields are discussed in Sections 6 and 7, respectively. Finally, the results of the analysis are presented in Section 8, and are followed by a conclusion. ATLAS [32] is a general-purpose detector with a forward-backward symmetric design that provides almost full solid angle coverage around the interaction point. 
ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the center of the detector and the z-axis along the beam pipe; the x-axis points from the IP to the center of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2), and rapidity is defined as y = 0.5 ln[(E + pz)/(E − pz)], where E denotes the energy and pz is the component of the momentum along the beam direction. The main components of the detector are the inner detector (ID), which is surrounded by a superconducting solenoid providing a 2 T axial magnetic field, the calorimeter system, and the muon spectrometer (MS), which is immersed in a magnetic field generated by three large superconducting toroidal magnets. The ID provides track reconstruction within |η| < 2.5, employing pixel detectors close to the beam pipe, silicon microstrip detectors at intermediate radii, and a straw-tube tracker with particle identification capabilities based on transition radiation at radii up to 1080 mm. The innermost pixel detector layer, the insertable B-layer [33], was added during the shutdown between LHC Run 1 and Run 2, at a radius of 33 mm around a new, narrower, beam pipe. The calorimeters cover |η| < 4.9. The forward region (3.2 < |η| < 4.9) is instrumented with a liquid-argon (LAr) calorimeter for both the electromagnetic and hadronic measurements. In the central region, a lead/LAr electromagnetic calorimeter covers |η| < 3.2, while the hadronic calorimeter uses two different detector technologies, with scintillator tiles (|η| < 1.7) or liquid argon (1.5 < |η| < 3.2) as the active medium. The MS consists of three layers of precision tracking chambers providing coverage over |η| < 2.7, while dedicated fast chambers allow triggering over |η| < 2.4. The ATLAS trigger system used for real-time event selection [34] consists of a hardware-based first-level trigger and a software-based high-level trigger. Simulated event samples and data samples Three simplified SUSY signal models and a set of pMSSM scenarios are considered in this search. Gluinos or squarks are assumed to be produced in pairs (g̃g̃ or q̃q̃). In the case of the simplified models, 100% branching ratios to the decay of interest are assumed. The gluino/squark one-step simplified models have three free parameters: the masses of the gluino or squark (mg̃/q̃), the lightest chargino (mχ±1), and the lightest neutralino (mχ01). Other sparticles that do not appear in the decay chain are set to have a high mass. To probe a broad range of SUSY mass spectra, two model parameterizations are considered. In the first type, mg̃/q̃ and the mass ratio x ≡ (mχ±1 − mχ01)/(mg̃/q̃ − mχ01) are free parameters, while mχ01 is fixed to 60 GeV. In the second type, mg̃/q̃ and mχ01 are free parameters, while mχ±1 is fixed by setting x = 1/2. For the rest of this paper, the former type is referred to as variable-x and the latter one is referred to as x = 1/2. The gluino two-step simplified model has two free parameters that are varied to probe different mass configurations: the masses of the gluino (mg̃) and the lightest neutralino (mχ01). The masses of the lightest chargino and the second-lightest neutralino are constrained to be mχ±1 = (mg̃ + mχ01)/2 and mχ02 = (mχ±1 + mχ01)/2, respectively. All other sparticles are kinematically inaccessible. In the pMSSM scenario, the sparticle masses are varied by scanning the gluino mass parameter M3 (related to mg̃) and the bilinear Higgs mass parameter µ (related to mχ±1 and mχ02). The scan ranges are 690 GeV < M3 < 2140 GeV and −770 GeV < µ < −160 GeV. The bino mass parameter M1 (related to mχ01) was set to 60 GeV. 
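The two parametrizations of the one-step simplified models and the fixed mass splittings of the two-step model reduce to simple linear relations between the free parameters and the intermediate sparticle masses. The sketch below (hypothetical helper functions, not ATLAS software) illustrates how those masses are fixed once the free parameters are chosen.

```python
# Hedged sketch of the simplified-model mass relations described above
# (illustrative only, not ATLAS code). All masses in GeV.

def one_step_variable_x(m_parent, x, m_lsp=60.0):
    """Variable-x grid: x = (m_chargino - m_LSP) / (m_parent - m_LSP),
    with the LSP mass fixed to 60 GeV; m_parent is the gluino or squark mass."""
    return m_lsp + x * (m_parent - m_lsp)

def one_step_fixed_x(m_parent, m_lsp):
    """x = 1/2 grid: the chargino sits halfway between the LSP and the parent."""
    return 0.5 * (m_parent + m_lsp)

def two_step(m_gluino, m_lsp):
    """Gluino two-step model: chargino halfway between gluino and LSP,
    second-lightest neutralino halfway between chargino and LSP."""
    m_chargino = 0.5 * (m_gluino + m_lsp)
    m_neutralino2 = 0.5 * (m_chargino + m_lsp)
    return m_chargino, m_neutralino2

# Example: a 2 TeV gluino with a 200 GeV LSP in the two-step model.
print(two_step(2000.0, 200.0))   # -> (1100.0, 650.0)
```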
Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = − ln tan(θ/2). Rapidity is defined as y = 0.5 ln[(E + p z )/(E − p z )] where E denotes the energy and p z is the component of the momentum along the beam direction. (related to mχ 0 1 ) was set to 60 GeV. The remaining model parameters, defined in Ref. [35], are set to TeV, such that the mass of the lightest Higgs boson is compatible with 125 GeV and all other sparticles are kinematically inaccessible. Mass spectra consistent with electroweak symmetry breaking were generated using SOFTSUSY 3.4.0 [36] and the decay branching ratios were calculated with SDECAY/HDECAY 1.3b/3.4 [37]. The signal samples were generated at leading order (LO) using M G 2.2.2 [38] with up to two extra partons in the matrix element, interfaced to P 8.186 [39] for parton showers and hadronization. The CKKW-L matching scheme [40] was applied for the matching of the matrix element and the parton shower, with a scale parameter set to a quarter of the mass of the sparticle produced. The ATLAS A14 [41] set of tuned parameters (tune) was used for the shower and the underlying event, together with the NNPDF2.3 LO [42] parton distribution function (PDF) set. The E G 1.2.0 program [43] was used to describe the properties of the bottom and charm hadron decays in the signal samples. The signal cross-sections were calculated at next-to-leading order (NLO) in the strong coupling constant, adding the resummation of soft gluon emission at next-to-leading-logarithmic accuracy (NLL) [44][45][46][47][48]. The nominal cross-section and its uncertainty are taken from an envelope of cross-section predictions using different PDF sets and factorization and renormalization scales, as described in Ref. [49], considering only the four light-flavor left-handed squarks (ũ L ,d L ,s L , andc L ). The simulated event samples for the signal and SM backgrounds are summarized in Table 1. Additional samples are used to assess systematic uncertainties, as explained in Section 7. To generate tt and single-top-quark events in the Wt and s-channel [50], the P -B v2 [51] event generator with the CT10 [52] PDF set in the matrix-element calculations was used. Electroweak t-channel single-top-quark events were generated using the P -B v1 event generator. This event generator uses the four-flavor scheme for the NLO matrix-element calculations together with the fixed four-flavor PDF set CT10f4. For all top quark processes, top quark spin correlations are preserved (for the singletop t-channel, top quarks are decayed using MadSpin [53]). The parton shower, fragmentation, and the underlying event were simulated using P 6.428 [54] with the CTEQ6L1 [55] PDF set and the corresponding P 2012 tune (P2012) [56]. The top quark mass was set to 172.5 GeV. The EvtGen 1.2.0 program was also used to describe the properties of the bottom and charm hadron decays in the tt and the single-top-quark samples. The h damp parameter, which controls the p T of the first additional emission beyond the Born configuration, was set to the mass of the top quark. The main effect of this is to regulate the high-p T emission against which the tt system recoils. The tt events are normalized using the cross-sections computed at next-to-next-to-leading order (NNLO) with next-to-next-to-leadinglogarithmic (NNLL) corrections [57]. 
The single-top-quark events are normalized using the NLO+NNLL cross-sections for the Wt-channel [58] and the NLO cross-sections for the t- and s-channels [59]. Events containing W or Z bosons with associated jets (W/Z+jets) [60] were simulated using the Sherpa 2.2.1 event generator [61]. Matrix elements were calculated for up to two partons at NLO and four partons at LO using the Comix [62] and OpenLoops [63] generators. They were merged with the Sherpa 2.2.1 parton shower [64] with massive b- and c-quarks using the ME+PS@NLO prescription [65]. The NNPDF3.0 NNLO PDF set [66] was used in conjunction with a dedicated parton-shower tuning developed by the Sherpa authors. The W/Z+jets events are normalized using their NNLO cross-sections [67]. The diboson samples [68] were generated using the Sherpa 2.1.1 and 2.2.1 event generators with the CT10 and NNPDF3.0 PDF sets, respectively. The fully leptonic diboson processes were simulated including final states with four charged leptons, three charged leptons and one neutrino, two charged leptons and two neutrinos, and one charged lepton and three neutrinos. The semileptonic diboson processes were simulated with one of the bosons decaying hadronically and the other leptonically. The processes were calculated for up to one parton (for ZZ) or no additional partons (for WW, WZ) at NLO and up to three partons at LO. The response of the detector to particles was modeled either with a full ATLAS detector simulation [72] using Geant4 [73] or with a fast simulation [74]. The fast simulation is based on a parameterization of the performance of the electromagnetic and hadronic calorimeters, and on Geant4 elsewhere. All background (signal) samples were prepared using the full (fast) detector simulation. All simulated events were generated with a varying number of minimum-bias interactions overlaid on the hard-scattering event to model the multiple proton-proton interactions in the same and nearby bunch crossings. The minimum-bias interactions were simulated with the soft QCD processes of Pythia 8.186 using the A2 tune [75] and the MSTW2008LO PDF set [76]. Corrections were applied to the samples to account for differences between data and simulation in the trigger, identification and reconstruction efficiencies.

The proton-proton data analyzed in this paper were collected by ATLAS during 2015 and 2016 at a center-of-mass energy of 13 TeV, with up to 50 simultaneous interactions per proton bunch crossing. After application of data-quality requirements related to the beam and detector conditions, the total integrated luminosity corresponds to 36.1 fb−1. The uncertainty in the combined 2015 and 2016 integrated luminosity is 3.2%. It is derived from a calibration of the luminosity scale using x-y beam-separation scans. This methodology is further detailed in Ref. [77]. The data were collected using high-level triggers that select events based on the magnitude of the missing transverse momentum, E_T^miss. The triggers used are close to fully efficient for events with an offline-reconstructed E_T^miss greater than 200 GeV.

Event reconstruction

In each event, proton-proton interaction vertices are reconstructed from at least two tracks, each with a transverse momentum p_T > 400 MeV and consistent with the beamspot envelope. The primary vertex (PV) of the event is selected as the vertex with the largest sum of the squared transverse momenta ($\sum p_T^2$) of the associated tracks. A distinction is made between preselected and signal leptons and jets.
Preselected leptons and jets are used in the E_T^miss computation and are subject to a series of basic quality requirements. Signal leptons and jets are a subset of the preselected objects with more stringent requirements and are used for the definition of signal, control and validation regions. To avoid double-counting of the preselected jets, electrons, and muons, a sequence of overlap-removal procedures based on the angular distance $\Delta R = \sqrt{(\Delta y)^2 + (\Delta\phi)^2}$ is applied. First, any jet reconstructed within ΔR < 0.2 of a preselected electron is rejected. This prevents electromagnetic energy clusters simultaneously reconstructed as an electron and a jet from being selected twice. Next, to remove bremsstrahlung from muons followed by a photon conversion into electron pairs, electrons within ΔR < 0.01 of a preselected muon are discarded. Subsequently, the contamination from muons from decays of heavy hadrons is suppressed by removing muons that are within ΔR < min(0.04 + (10 GeV)/p_T^µ, 0.4) of preselected jets meeting the previous criteria, or within ΔR < 0.2 of a b-tagged jet or a jet containing more than three tracks with p_T > 500 MeV. In the former case, the p_T-dependent angular separation mitigates the rejection of energetic muons close to jets in boosted event topologies. Finally, jets reconstructed within ΔR < 0.2 of a preselected muon are rejected.

Signal electrons are required to satisfy the likelihood-based tight identification criteria detailed in Ref. [89]. Signal muons and electrons satisfy a sequence of η- and p_T-dependent isolation requirements on tracking-based and calorimeter-based variables, defined as the GradientLoose [90] isolation criteria. Compatibility of the signal-lepton tracks with the PV is enforced by requiring the distance |z_0 sin θ| to be less than 0.5 mm, where z_0 is the longitudinal impact parameter. In addition, the transverse impact parameter, d_0, divided by its uncertainty, σ(d_0), must satisfy |d_0/σ(d_0)| < 3 for signal muons and |d_0/σ(d_0)| < 5 for signal electrons. Corrections derived from data control samples are applied to simulated events to calibrate the reconstruction and identification efficiencies, the momentum scale and resolution of leptons, and the efficiency and mis-tag rate of b-tagged jets.

Event selection

Each event must satisfy the trigger selection criteria and must contain a reconstructed primary vertex. Non-collision background and detector noise are suppressed by rejecting events with any preselected jet not satisfying a set of quality criteria [91]. Exactly one signal lepton, either an electron or a muon, is required. Events with additional preselected leptons are rejected to suppress the dilepton $t\bar t$, single-top (Wt-channel), Z+jets and diboson backgrounds. The following observables are used in the definition of signal regions in the analysis. The missing transverse momentum, E_T^miss, is defined as the magnitude of $\vec p_T^{\,\mathrm{miss}}$, the negative vectorial sum of the transverse momenta of preselected muons, electrons, jets, and identified and calibrated photons. The calculation of $\vec p_T^{\,\mathrm{miss}}$ also includes the transverse momenta of all tracks originating from the PV and not associated with any identified object [92, 93]. The transverse mass, m_T, is defined from the lepton transverse momentum $\vec p_T^{\,\ell}$ and $\vec p_T^{\,\mathrm{miss}}$ as

$m_T = \sqrt{2\, p_T^{\ell}\, E_T^{\mathrm{miss}} \left[1 - \cos\Delta\phi(\vec p_T^{\,\ell}, \vec p_T^{\,\mathrm{miss}})\right]}$,

where $\Delta\phi(\vec p_T^{\,\ell}, \vec p_T^{\,\mathrm{miss}})$ is the azimuthal angle between $\vec p_T^{\,\ell}$ and $\vec p_T^{\,\mathrm{miss}}$. For W+jets and semileptonic $t\bar t$ events, in which one on-shell W boson decays leptonically, the observable has an upper endpoint at the W-boson mass.
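As a minimal, self-contained illustration of how these two quantities are built from reconstructed objects, the sketch below computes the missing transverse momentum as the negative vector sum of object transverse momenta and then evaluates m_T for a selected lepton. The object lists and numbers are invented toy inputs; the soft-track term and the full object calibration used in the analysis are not reproduced here.

```python
import math

def missing_pt(objects):
    """Negative vectorial sum of (pt, phi) for all preselected objects.

    `objects` is a list of (pt, phi) tuples for muons, electrons, jets and
    photons; the track-based soft term of the real calculation is omitted.
    Returns (Etmiss, phi_miss).
    """
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py), math.atan2(py, px)

def transverse_mass(lep_pt, lep_phi, etmiss, phi_miss):
    """m_T = sqrt(2 * pt_lep * Etmiss * (1 - cos(dphi)));
    has an endpoint near m_W for on-shell leptonic W decays."""
    dphi = math.remainder(lep_phi - phi_miss, 2.0 * math.pi)
    return math.sqrt(2.0 * lep_pt * etmiss * (1.0 - math.cos(dphi)))

# toy event: one 60 GeV lepton and three jets (pt in GeV, phi in radians)
objects = [(60.0, 0.3), (120.0, 2.9), (80.0, -2.4), (45.0, 1.1)]
etmiss, phi_miss = missing_pt(objects)
print(round(etmiss, 1), round(transverse_mass(60.0, 0.3, etmiss, phi_miss), 1))
```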
The m_T distribution for signal events extends significantly beyond the distributions of the W+jets and semileptonic $t\bar t$ events. The effective mass, m_eff, is the scalar sum of the p_T of the signal lepton and all signal jets and E_T^miss:

$m_{\mathrm{eff}} = p_T^{\ell} + \sum_{j} p_T^{j} + E_T^{\mathrm{miss}}$.

The effective mass provides good discrimination against SM backgrounds, especially for the signal scenarios where energetic jets are expected. Gluino production leads to higher jet multiplicity than squark production. High-mass sparticles tend to produce harder jets than low-mass sparticles. Thus the optimal m_eff value depends on the signal scenario. To achieve sensitivity to a wide range of SUSY scenarios with a limited number of signal regions, this variable is binned in the final region definition instead of applying one simple m_eff cut. The detailed description can be found in Section 5.1. The transverse momentum scalar sum, H_T, is defined as

$H_T = \sum_{j} p_T^{j}$,

where the index j runs over all the signal jets in the event. Empirically, the experimental resolution of E_T^miss scales with $\sqrt{H_T}$, and the ratio $E_T^{\mathrm{miss}}/\sqrt{H_T}$ is useful for suppressing background events with large E_T^miss due to jet mismeasurement. The aplanarity is a variable designed to provide more global information about the full momentum tensor of the event. It is defined as (3/2)λ₃, where λ₃ is the smallest eigenvalue of the normalized momentum tensor [94] calculated using the momenta of the jets and leptons in the event. Typical measured aplanarity values lie in the range 0-0.3, with values near zero indicating relatively planar, background-like events. Signal events tend to have high aplanarity values, since they are more spherical than background events due to the multiple objects emitted in the sparticle decay chains.

Signal region definitions

Five sets of event selection criteria, each defining a signal region (SR), are designed to maximize the signal sensitivity. Each SR is labeled by the minimum required number of jets and, optionally, the characteristics of the targeted supersymmetric mass spectrum. Four of the five SRs, 2J, 4J high-x, 4J low-x, and 6J, target the gluino/squark one-step models. The fifth SR, 9J, targets the gluino two-step and pMSSM models. Table 2 summarizes the four SRs targeting the gluino/squark one-step models. The four SRs are mutually exclusive. For setting model-dependent exclusion limits ("excl"), each of the four SRs is further binned in b-veto/b-tag and m_eff, and a simultaneous fit is performed across all 28 bins of the four SRs. This choice enhances the sensitivity to a range of new-physics scenarios with different properties, such as the presence or absence in the final state of jets containing b-hadrons, and different mass separations between the supersymmetric particles. For model-independent limits and null-hypothesis tests ("disc" for discovery), the event yield above a minimum value of m_eff in each SR is used to search for an excess over the SM background. The 2J SR provides sensitivity to scenarios characterized by a relatively heavy $\tilde\chi^0_1$ and small differences between $m_{\tilde g}$, $m_{\tilde\chi^\pm_1}$, and $m_{\tilde\chi^0_1}$, where most of the decay products tend to have small p_T. Events with one low-p_T lepton and at least two jets are selected. The minimum lepton p_T is 7 (6) GeV for the electron (muon), and the maximum p_T is scaled with the number of signal jets in the event as 5 GeV × N_jet up to 35 GeV.
The maximum p_T requirement balances background rejection and signal acceptance for models with increasing mass splittings, where the lepton and jets are more energetic. Stringent requirements on E_T^miss and on m_eff enhance the signal sensitivity by selecting signal events in which the final-state neutralinos are boosted against energetic initial-state radiation (ISR) jets. The SM background is further suppressed by a tight requirement on E_T^miss/m_eff.

The 4J high-x SR is optimized for models where $m_{\tilde\chi^0_1}$ is fixed to 60 GeV and x ≈ 1, i.e., $m_{\tilde\chi^\pm_1}$ is close to $m_{\tilde g}$. The W boson produced in the chargino decay is significantly boosted, giving rise to a high-p_T lepton. The main characteristics of signal events in this model are large m_T values and relatively soft jets emitted from the sparticle decay. Tight requirements are placed on E_T^miss, m_T, and E_T^miss/m_eff. The 4J low-x SR targets models where $m_{\tilde\chi^0_1}$ is fixed to 60 GeV and x ≈ 0, i.e., $m_{\tilde\chi^\pm_1}$ is close to $m_{\tilde\chi^0_1}$. The large $m_{\tilde g/\tilde q} - m_{\tilde\chi^\pm_1}$ mass splitting leads to high jet activity, where events are expected to have higher m_eff and larger aplanarity than in the high-x scenarios. The W boson tends to be off-shell, leading to small m_T, and accordingly an upper bound is imposed to keep this region orthogonal to the 4J high-x SR.

[Table 3: Overview of the selection criteria for the signal region (SR 9J) used for the pMSSM and gluino two-step models.]

The 6J SR is optimized for models with x = 1/2, targeting scenarios with large sparticle mass. Events with one high-p_T lepton and six or more jets are selected. Requirements on m_T, E_T^miss, m_eff, and aplanarity are imposed to reduce the SM background from $t\bar t$ and W+jets production. The sensitivity is improved for scenarios with large $m_{\tilde g/\tilde q}$ and small $m_{\tilde\chi^0_1}$ by introducing a higher m_eff bin. Finally, one signal region, the 9J SR, is defined to target the pMSSM and gluino two-step models. The selection criteria are summarized in Table 3. At least nine jets are required, targeting the models' long decay chains in which multiple vector or Higgs bosons are produced. The background is further suppressed by tight requirements on the aplanarity and on $E_T^{\mathrm{miss}}/\sqrt{H_T}$. For setting model-dependent exclusion limits ("excl"), the SR is separated into 1000 GeV < m_eff < 1500 GeV and m_eff > 1500 GeV to achieve good discrimination power for different gluino masses. For model-independent null-hypothesis tests ("disc"), events selected with m_eff > 1500 GeV are used to search for an excess over the SM background.

Background estimation

The dominant SM backgrounds in most signal regions originate from top quark ($t\bar t$ and single top) and W+jets production. In this section, the techniques employed to estimate the contribution of these backgrounds in the signal regions are detailed. Additional sources of background in all signal regions originate from the production of Z+jets, $t\bar t$ in association with a W or Z boson, and diboson (WW, WZ, ZZ) events. Their contributions are estimated entirely using simulated event samples normalized to NLO cross-sections. The contribution from multi-jet processes with a misidentified lepton is found to be negligible once the lepton isolation and E_T^miss requirements used in this search are imposed. This was established with a data-driven matrix method, following the implementation described in Ref. [21], in previous iterations of the analysis [22]. As this background is negligible, it is not considered further in the analysis.
The dominant top quark and W+jets backgrounds in the 2J, 4J high-x, 4J low-x, and 6J signal regions are estimated by simultaneously normalizing the predicted event yields from simulation to the number of data events observed in dedicated control regions (CR), using the fitting procedure described in Section 8. The simulation is then used to extrapolate the measured background rates to the corresponding signal regions. The CRs are designed to have high purity in the background process of interest, a sufficiently large number of events to obtain small statistical uncertainties in the background prediction, and a small contamination by events from the signal models under consideration. Moreover, they are designed to have kinematic properties resembling as closely as possible those of the signal regions, in order to provide good estimates of the kinematics of background processes there. This procedure limits the impact of potentially large systematic uncertainties in the expected yields from the extrapolation. Tables 4-7 list the criteria that define the control regions corresponding to signal regions 2J, 4J high-x, 4J low-x, and 6J. As described in Section 5, these signal regions contain multiple bins in m_eff. The same binning is maintained for the control regions, so that every signal-region bin in m_eff has corresponding control regions with the same requirements on m_eff; therefore, the backgrounds are estimated independently in each m_eff bin. Dedicated top and W+jets control regions, respectively denoted by TR and WR, are constructed in each bin of m_eff. The TR and WR are distinguished by requiring at least one or exactly zero b-tagged signal jets, respectively. Cross-contamination between these two types of control regions from top and W+jets processes is accounted for in the fit. The measured top and W+jets background rates from the TR and WR regions in a given m_eff bin are extrapolated to the signal region within the same m_eff bin. The signal regions in a given m_eff bin may be further separated into regions with at least one or exactly zero b-tagged signal jets, as described in Section 5. For such signal regions separated by b-tagged jet multiplicity, the extrapolation is performed from both the TR and WR regions to each individual bin of b-tagged jet multiplicity.

To validate the extrapolation from control to signal regions using simulated event samples, dedicated validation regions (VRs) are defined for each set of control and signal regions. The selection criteria defining these VRs are also shown in Tables 4-7. The same binning in m_eff used in the control and signal regions is also maintained in the validation regions. The VRs are designed to be kinematically close to the signal regions, with only a small contamination from the signal in the models considered in this search. The VRs are not used to constrain parameters in the fit, but provide a statistically independent cross-check of the extrapolation. The observed event yields in the VRs are found to be consistent with the background prediction, as further discussed in Section 8.

[Table 4: Overview of the control and validation region selection criteria corresponding to the 2J SR. The top and W+jets control regions are denoted by TR and WR, respectively.]
One of the dominant background components in the 2J, 4J high-x, 4J low-x, and 6J SRs is $t\bar t$ production with a dileptonic final state, where one lepton fails to be reconstructed ("missing lepton") or is a hadronically decaying τ lepton; this background is characterized by high values of m_T. To validate the background estimation technique described above, which is largely a simulation-based extrapolation from low-m_T control regions populated by events with semileptonic $t\bar t$ decays, an alternative method was developed. This method (hereafter referred to as the object replacement method) uses events in a dileptonic control region. To emulate the missing-lepton case, the p_T of one of the two leptons is added vectorially to the calculation of E_T^miss. To emulate the hadronic τ decay case, one of the two leptons is re-simulated as a hadronic tau decay using the Tauola generator [95] with appropriate energy scale and resolution corrections. The accuracy of this alternative background estimation technique was validated on simulated samples as well as in data validation regions. The background estimates derived from this object replacement method are found to be consistent with those obtained from the standard semi-data-driven approach, as further demonstrated in Section 8.

While the background estimation strategy described above works well for the signal regions 2J, 4J high-x, 4J low-x, and 6J, it is not viable for the 9J SR. The reason for this is that the simulation-based extrapolation from the control regions, which are typically located around the peak region of the transverse mass distribution (m_T ~ 80 GeV), to the high-m_T signal regions (m_T well above 80 GeV) is affected by large theoretical uncertainties at high jet multiplicities. Because the peak and tail regions of the m_T distribution are dominated by semileptonic and dileptonic final states from $t\bar t$ decays, respectively, additional jets from initial- or final-state radiation are required to obtain the same jet multiplicity for dileptonic $t\bar t$ final states. Inadequate modeling of such additional jets is the dominant source of the theoretical uncertainty. To reduce the dependence on the modeling of additional jets, a dedicated data-driven background estimation technique was designed for the 9J SR. The method relies on the assumption that the m_T distribution is approximately invariant under changes in the jet multiplicity requirements. This assumption is found to be valid when tight m_eff requirements, as used in this analysis, are applied, such that the overall activity in the calorimeter, and thus the missing transverse momentum resolution, is not significantly affected by variations in the jet multiplicity. Based on the m_T invariance, mutually exclusive control regions CR A, CR B and CR C are defined in the m_T-N_jet plane, where CR A is located at high m_T and low N_jet, CR B at low m_T and low N_jet, and CR C at low m_T and high N_jet. The precise requirements of these regions are defined in Table 8 and illustrated in Figure 2. Based on these regions, the background in the high-m_T and high-N_jet signal region can then be estimated with the following equation:

$N^{\mathrm{est}}_{\mathrm{SR\,9J}} = N_{\mathrm{CR\,A}} \times \frac{N_{\mathrm{CR\,C}}}{N_{\mathrm{CR\,B}}}$,

where $N^{\mathrm{(est)}}_{\mathrm{<region>}}$ is the (estimated) number of events in a given region.

[Table 8: Overview of the control and validation region selection criteria corresponding to the 9J SR. The control regions CR A, A', B, C, C' are further divided into bins of exactly 0 or ≥ 1 b-tagged signal jets to enrich the W+jets and top backgrounds, respectively.]
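A minimal numerical sketch of this estimate is given below. It assumes simple event counts in the three control regions and applies the ratio above; the closure correction and the per-b-tag normalization factors discussed next are only indicated schematically, and all numbers are invented for illustration.

```python
def abc_estimate(n_a, n_b, n_c, f_closure=1.0):
    """Data-driven estimate of the high-mT, high-Njet yield from three
    mutually exclusive control regions, assuming the mT shape is
    independent of the jet multiplicity:
        A: high mT, low Njet   B: low mT, low Njet   C: low mT, high Njet
    `f_closure` is a simulation-based correction for residual mT-Njet
    correlations (unity means perfect factorization)."""
    if n_b <= 0:
        raise ValueError("region B must contain events")
    return f_closure * n_a * n_c / n_b

# invented example counts for one b-tag bin
print(abc_estimate(n_a=40, n_b=800, n_c=120, f_closure=0.95))
```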
The residual small correlations between m_T and N_jet that would bias the background estimate in the signal region are absorbed into a simulation-based closure parameter, f_closure, defined as the ratio of the number of signal-region events predicted directly by simulation, $N^{\mathrm{sim}}_{\mathrm{SR\,9J}}$, to the estimate $N^{\mathrm{sim,est}}_{\mathrm{SR\,9J}}$ obtained by applying the above relation to the simulation predictions in regions A, B, and C. The estimated number of background events in the signal region can then be rewritten in terms of the observed numbers of events $N^{\mathrm{obs}}_{\mathrm{<region>}}$ in the control regions, the closure parameter, and fitted normalization parameters: µ_C is the normalization parameter in region CR C, and the normalization parameter µ_A/B is fitted simultaneously with the normalization µ_B of the backgrounds in region CR B. The control regions listed in Table 8 are optimized to provide a sufficient number of events in the backgrounds of interest, low contamination from the signal models considered, and a closure parameter f_closure close to unity. All control regions are fitted simultaneously in two bins requiring either zero or at least one b-tagged signal jet to enrich the contributions from the W+jets and top backgrounds, respectively. Therefore, the normalization factors µ_B, µ_C, and µ_A/B exist separately for the W+jets and top backgrounds. The top backgrounds considered in the fit comprise $t\bar t$ as well as single-top production processes, which are treated with a common set of normalization parameters. To validate that the fitted ratio of low-m_T to high-m_T events (µ_A/B) extrapolates to high values of N_jet, a validation region VR-mT with seven or eight jets and high m_T requirements is introduced. Similarly, a validation region VR-Njet with at least nine jets and moderate m_T requirements is introduced to validate the extrapolation of the normalization factor µ_C in region CR C to higher m_T values. Since the normalization factors for different jet multiplicities are expected to differ, a control region CR C' along with its normalization factor (µ_C') is introduced. This region is only used to obtain the background estimate in VR-mT. Similarly, a control region CR A' is constructed to obtain the normalization factor µ_A'/B that is needed for the background estimation in validation region VR-Njet. The definition of the validation regions along with their corresponding control regions is given in Table 8.

Systematic uncertainties

Experimental and theoretical sources of systematic uncertainty are described in this section. Their effects are evaluated for all simulated signal and background events. The dominant experimental systematic effects are the uncertainties associated with the jet energy scale (JES) and resolution (JER) and with the b-tagging efficiency and mis-tagging rate. The impact of the jet-related uncertainties on the total background prediction ranges from 1.3% in the 6J SR to 18% in the 9J SR. Similarly, the impact of the uncertainties associated with the b-tagging procedure amounts to 1.9% in the 6J SR bins with at least one b-tagged jet and increases to 9.5% in the 6J SR bins with no b-tagged jets. The simulation is reweighted to match the distribution of the average number of proton-proton interactions per bunch crossing (µ) observed in data. The uncertainty in µ is propagated by varying the reweighting factor up and down; it becomes relevant in the signal regions characterized by the highest jet multiplicities. Uncertainties in the theoretical predictions and the modeling of simulated events are also considered.
For the W+jets and the $t\bar t$ and single-top backgrounds, they affect the extrapolation from each m_eff bin in the control regions to the corresponding bin in the signal regions. In the 9J SR, the f_closure parameter used in the background estimation in this channel is affected as well. For all the other background sources, they impact the inclusive cross-section of each specific process, the acceptance of the analysis selection requirements and the shape of the m_eff distribution in each SR. An uncertainty stems from the choice of MC event generator modeling the $t\bar t$, single-top, diboson and W/Z+jets processes. For $t\bar t$ and single top, Powheg-Box is compared with MG5_aMC@NLO [38] and the relative difference in the extrapolation factors is evaluated. For W/Z+jets, the predictions from Sherpa are compared with MG5_aMC@NLO [38]. For dibosons, the event yield predictions from Sherpa are compared with those from Powheg interfaced to Pythia. The impact of varying the amount of initial- and final-state radiation is evaluated for $t\bar t$ and single-top production. Specific samples are used, with altered renormalization and factorization scales as well as parton shower and NLO radiation [50]. Moreover, the difference between the predictions from Powheg interfaced to Pythia and to Herwig++ [96] is computed to estimate the uncertainty associated with the parton-shower modeling. For W/Z+jets samples, the uncertainties in the renormalization, factorization and resummation scales and the matching scale between matrix elements and parton shower (CKKW-L) are evaluated by varying the corresponding parameters in Sherpa up and down by a factor of two. For $t\bar t$ and W+jets samples, the uncertainties due to choosing the PDF set CT10 [52] are considered. Inclusive WWbb events generated using MG5_aMC@NLO [38] are compared to the sum of $t\bar t$ and Wt production to assign an uncertainty to the interference effects between single-top and $t\bar t$ production at NLO. The uncertainty in the inclusive Z+jets cross-section, amounting to 5%, is accounted for [97]. An overall 6% systematic uncertainty in the inclusive cross-section of diboson processes is also considered. In addition, the Sherpa parameters controlling the renormalization, factorization, resummation and matching scales are varied by a factor of two to estimate the corresponding uncertainties. An uncertainty of 30% is assigned to the small contributions of $t\bar t$ + W/Z/WW.

[Table 9: Breakdown of the dominant systematic uncertainties in the background estimates in the 2J and 4J high-x SRs. The individual uncertainties can be correlated and do not necessarily add up in quadrature to the total background uncertainty. The percentages show the size of the uncertainty relative to the total expected background.]

The total systematic uncertainty in the predicted background yields in the various signal regions ranges from 12% in the 2J SR bins with ≥ 1 b-tagged jet to 50% in the 9J SR. The largest uncertainties in the SR bins with ≥ 1 b-tagged jet originate from the modeling of $t\bar t$ events and amount to 5% in the 2J SR, increasing to 40% in the 9J SR. Similarly, in the SR bins where b-tagged jets are vetoed, the dominant source of systematic uncertainty is the modeling of W+jets events, ranging from 9% in the 6J SR to 20% in the 4J low-x SR. Other important uncertainties are those associated with the finite size of the MC samples, which amount to 18% in the 6J SR, and the theoretical uncertainties originating from the modeling of the diboson background, amounting to 26% in the 6J SR.
Tables 9-11 list the breakdown of the dominant systematic uncertainties in the background estimates in the various signal regions. For the signal processes, the modeling of initial-state radiation can be affected by sizable theoretical uncertainty. The uncertainties in the expected yields for SUSY signal models are estimated with variations of a factor of two to the MG5_aMC@NLO parameters corresponding to the renormalization, factorization and jet matching scales, and to the Pythia shower tune parameters. The overall uncertainties range from about 1% for signal models with large mass splitting between the gluino or squark, the chargino, and the neutralino, to 35% for models with very compressed mass spectra.

[Table 11: Breakdown of the dominant systematic uncertainties in the background estimates in the 9J SR. The individual uncertainties can be correlated and do not necessarily add up in quadrature to the total background uncertainty. The percentage shows the size of the uncertainty relative to the total expected background.
Signal region 9J: total background expectation 7; total background systematic uncertainty ±4 [50%]; theoretical uncertainty ±4; normalization uncertainty ±2.0; experimental uncertainty ±1.9; statistical uncertainty of MC samples ±0.7.]

Results and Interpretation

The statistical interpretation of the results is performed based on a profile likelihood method [98] using the HistFitter framework [99]. The likelihood function consists of a product of Poisson probability density functions for the signal and control regions that contribute to the fit. The inputs to the likelihood function are the observed numbers of data events and the expected numbers of signal and SM background events in each region. Three normalization factors, one for signal, one for W+jets, and one for $t\bar t$ and single top, are introduced to adjust the relative contributions of the main background and signal components. The small sources of SM background, i.e., diboson, Z+jets and $t\bar t$+V, are estimated directly from simulation. The uncertainties are implemented in the fit as nuisance parameters, which are correlated between the SRs and the CRs. The systematic uncertainties described in Section 7 are constrained by Gaussian probability density functions, while the statistical uncertainties are constrained by Poisson probability density functions. The observed numbers of events in the signal regions are given in Tables 12-14, along with the SM background prediction as determined with the background-only fit. In a background-only fit, the data event yields in the CRs are used to determine the two background normalization factors: for W+jets and for $t\bar t$ and single-top production. The fit is independent of the observation in the SR and does not consider signal contamination in the CRs. The above-mentioned signal normalization parameter is therefore not included in this fit configuration. The compatibility of the observed and expected event yields in both the validation and signal regions is illustrated in Figures 3-7. No significant excess in data is observed over the SM prediction. The top and W+jets background normalization factors obtained for the 2J, 4J low-x, 4J high-x, and 6J SRs are shown in bins of m_eff in Figure 8. A trend toward smaller normalization factors at large values of m_eff is observed, which demonstrates the necessity of applying the same binning requirements in control and signal regions.
[Tables 12-14: Observed event yields and SM background predictions in the signal regions. Uncertainties in the fitted background estimates combine statistical (in the simulated event yields) and systematic uncertainties; they are symmetrized for propagation purposes but truncated at zero to remain within the physical boundaries.]

The predicted event yields from $t\bar t$ events in which both top quarks decay semileptonically are cross-checked using the alternative object-replacement method described in Section 6. Figure 9 shows that the background estimates obtained from the two methods are consistent. Figures 10-11 show the m_eff distributions after the fit in the b-tag and b-veto bins of the 2J, 4J low-x, 4J high-x and 6J signal regions. Figure 12 shows the m_eff distribution in the 9J signal region after the fit. The uncertainty bands plotted include all statistical and systematic uncertainties. The dashed lines stand for the benchmark signal samples.

Using the results of the background-only fit, a model-independent limit fit is performed to test for the presence of any beyond-the-Standard-Model (BSM) physics processes that contribute to the SR ("disc" SR in Table 2). The BSM signal is assumed to contribute only to the SR and not to the CRs, thus giving a conservative estimate of the background in the SR. Observed (S95_obs) and expected (S95_exp) 95% confidence level (CL) upper limits on the number of BSM signal events are derived using the CLs prescription [100]. Table 15 presents these limits, together with the upper limits on the visible BSM cross-section, σ95_obs, defined as the product of acceptance, selection efficiency and production cross-section. The upper limits on the visible BSM cross-section are calculated by dividing the observed upper limit on the number of beyond-SM events by the integrated luminosity of 36.1 fb−1. Moreover, the discovery p-values are given. They quantify the probability, under the background-only hypothesis, to produce event yields greater than or equal to the observed data.

Additionally, the results are interpreted in the specific supersymmetric scenarios described in Section 3 using model-dependent limit fits. A model-dependent limit fit takes the data event yields in multiple, statistically independent SRs and their associated CRs to compute an upper limit on the cross-section of a targeted SUSY model. The fit includes the expected signal contributions to the SRs and to the CRs, scaled by a floating signal normalization factor. The background normalization factors are also determined simultaneously in the fit. The sparticle mass in a specific SUSY model can be excluded if the upper limit of the signal normalization factor obtained in the fit is smaller than unity. For the gluino/squark one-step models, a model-dependent fit is performed over all bins of the 2J, 4J high-x, 4J low-x, and 6J SRs. An independent set of background normalization factors is allocated for each bin of each SR ("excl" SR in Table 2) and its associated CRs. Figure 13 (top and middle) shows the observed and expected exclusion bounds at 95% CL for the one-step simplified models with gluino and squark production. Gluino masses up to 2.1 TeV and squark masses up to 1.25 TeV are excluded. Figure 13 (bottom) shows the exclusion contours of the 9J SR (Table 3) for the gluino two-step as well as the pMSSM scenario described in Section 3. In both cases the limits reach well beyond 1.7 TeV in gluino mass.
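To make the model-independent limit procedure concrete, the sketch below derives a 95% CL upper limit on the number of BSM events in a single-bin counting experiment with the CLs prescription, using simple Poisson probabilities. This is only a toy illustration of the method behind the S95 values reported in Table 15; the analysis itself uses the full profile-likelihood machinery of HistFitter with all nuisance parameters, which this sketch does not attempt to reproduce, and the numbers used are invented.

```python
from scipy.stats import poisson

def cls_value(s, n_obs, b):
    """CLs = CL_{s+b} / CL_b for a single-bin counting experiment, with the
    observed count itself used as the test statistic (no systematics)."""
    cl_sb = poisson.cdf(n_obs, s + b)   # p(n <= n_obs | s + b)
    cl_b = poisson.cdf(n_obs, b)        # p(n <= n_obs | b)
    return cl_sb / cl_b

def s95_upper_limit(n_obs, b, step=0.01):
    """Smallest signal yield excluded at 95% CL (i.e. where CLs drops below 0.05)."""
    s = 0.0
    while cls_value(s, n_obs, b) >= 0.05:
        s += step
    return s

# toy numbers: 7.0 expected background events, 5 observed
print(round(s95_upper_limit(n_obs=5, b=7.0), 2))
```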
[Table 15: Results of the model-independent limit fits. For each SR, the observed 95% CL upper limit on the visible cross-section (σ95_obs), the observed (S95_obs) and expected (S95_exp) 95% CL upper limits on the BSM event yield, and the one-sided discovery p-value (p(s = 0)) are presented. The p-values are capped at 0.5 if fewer events than the fitted background estimate are observed.]

[Figure 13: Exclusion contours for gluino one-step x = 1/2 (top left), gluino one-step variable-x (top right), squark one-step x = 1/2 (middle left), squark one-step variable-x (middle right), gluino two-step (bottom left), and the pMSSM scenario (bottom right). The red solid line corresponds to the observed limit, with the red dotted lines indicating the ±1σ variation of this limit due to the effect of theoretical scale and PDF uncertainties in the signal cross-section. The dark gray dashed line indicates the expected limit, with the yellow band representing the ±1σ variation of the median expected limit due to the experimental and theoretical uncertainties. For reference, exclusion bounds from previous searches with 20.3 fb−1 at 8 TeV center-of-mass energy [28] and 3.2 fb−1 at 13 TeV center-of-mass energy [22,30] are overlaid where applicable by the gray area (the observed limit is shown by the solid line, while the dashed line shows the expected limit).]

Conclusion

A search for the pair production of squarks and gluinos in proton-proton collisions provided by the LHC at a center-of-mass energy of √s = 13 TeV has been performed by the ATLAS Collaboration. Events containing one isolated electron or muon, two or more jets, and large missing transverse momentum are selected in the data collected in 2015 and 2016, corresponding to an integrated luminosity of 36.1 fb−1. No significant excess above the Standard Model expectation is observed, and exclusion limits are set on the masses of the supersymmetric particles in the models considered, reaching up to 2.1 TeV for gluinos and 1.25 TeV for squarks.
11,901.4
2017-08-28T00:00:00.000
[ "Physics" ]
Suppressing Chaos of Duffing-Holmes System Using Random Phase

The effect of a random phase on the Duffing-Holmes equation is investigated. We show that as the intensity of the random noise is properly increased, the chaotic dynamical behavior is suppressed according to the criterion of the top Lyapunov exponent, which is computed based on Khasminskii's formulation and the extension of Wedig's algorithm for linear stochastic systems. The obtained results are further verified by the Poincaré map analysis, the phase plot, and the time evolution of the dynamical behavior of the system, such as stability, bifurcation, and chaos. Excellent agreement between these results is found.

Introduction

For the past ten years, there has been a great deal of interest in research on chaos control, which has become a hot-spot issue in the nonlinear sciences. After the OGY method was proposed by Ott et al. [1], various methods for chaos control have been given, which comprise feedback control and non-feedback control. The feedback control methods [1-3], which exploit a characteristic of chaos, the sensitivity to initial conditions, use some weak feedback control to make the chaotic trajectory approach and finally settle down to a desired stabilized periodic orbit formerly unstably embedded in the chaotic manifold. The non-feedback methods [4-9] can eliminate chaos by using a period adjustment coming from an external periodic excitation coupled to the system variables. Because noise is ubiquitous in real environments, research into the influence of noise on the system is very important.

Stochastic forces or random noise have been widely used in studying the control of chaos. For example, Ramesh and Narayanan [10] explored the robustness of non-feedback chaos control in the presence of uniform noise and found that the system would lose control when the noise intensity was raised to a threshold level. Wei and Leng [11] studied the chaotic behavior of the Duffing oscillator in the presence of white noise by means of the Lyapunov exponent. Liu et al. examined the effect of noise by the criterion of the stochastic Melnikov function and the Lyapunov exponent. Qu et al. [13] further applied weak harmonic excitations to investigate the chaos control of nonautonomous systems and observed in particular that the phase of a weak harmonic excitation may greatly affect the taming of nonautonomous chaos, and Lei et al. [14] investigated the control of chaos with the effect of a proper random phase. Recently, much work on suppressing chaos by random excitation [15-18] has been carried out.

The Duffing equation is the reduced form of many practical system models, for example, the swinging pendulum model and financial models. The Duffing system is a typical nonlinear vibration system; in engineering, many mathematical models of nonlinear vibration problems can be transformed into this equation, for example, ship rolling, structural vibration, destruction of chemical bonds, and so forth. The lateral wave equation of a model with a disturbed axial tensile force and the dynamics equation of a rotor bearing also take the same form as the Duffing system. To some extent, the Duffing system is the basis of many complicated dynamics; it has not only theoretical significance but also important practical value. This paper focuses on the study of the influence of a random phase on the behavior of the Duffing-Holmes dynamics and shows that the random-phase method can achieve chaos control. Since the Lyapunov exponent is an important indicator for describing a chaotic system, by using Khasminskii's [19] spherical coordinates and Wedig's [20] algorithm we can compute the top Lyapunov exponent. Furthermore, we can ascertain the vanishing of chaos by checking the sign of the top Lyapunov exponent. Finally, we show that the random phase can control the chaotic behavior by combining the Poincaré section and the time history.

Chaotic Behavior of Duffing-Holmes System

Consider the following Duffing-Holmes system [21-23]:

ẍ + δẋ − x + x³ = f sin(ωt),   (2.1)

where δ is the damping coefficient, f is the excitation amplitude, and ω is the excitation frequency. Equation (2.1) can be reformulated as the first-order nonautonomous system

ẋ = y,  ẏ = −δy + x − x³ + f sin(ωt).   (2.2)
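As a quick numerical illustration of system (2.2), the sketch below integrates it with a fixed-step fourth-order Runge-Kutta scheme for the parameter set used later in the paper (δ = 0.25, f = 0.27, ω = 1.0). It is only a minimal stand-in for the sixth-order Runge-Kutta-Verner integration used in the paper; the step size and integration length are illustrative choices.

```python
import math

DELTA, F, OMEGA = 0.25, 0.27, 1.0

def rhs(t, x, y):
    """Right-hand side of the first-order system (2.2)."""
    return y, -DELTA * y + x - x**3 + F * math.sin(OMEGA * t)

def rk4(x0, y0, t_end=200.0, h=0.01):
    """Classical fourth-order Runge-Kutta integration; returns the trajectory."""
    x, y, t, traj = x0, y0, 0.0, [(0.0, x0, y0)]
    while t < t_end:
        k1x, k1y = rhs(t, x, y)
        k2x, k2y = rhs(t + h/2, x + h/2 * k1x, y + h/2 * k1y)
        k3x, k3y = rhs(t + h/2, x + h/2 * k2x, y + h/2 * k2y)
        k4x, k4y = rhs(t + h, x + h * k3x, y + h * k3y)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        t += h
        traj.append((t, x, y))
    return traj

trajectory = rk4(1.0, 0.0)   # initial condition (x, y) = (1.0, 0.0)
print(trajectory[-1])        # state at the end of the integration
```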
The linearization of (2.2) is

ẋ₁ = y₁,  ẏ₁ = −δy₁ + (1 − 3x²)x₁.   (2.3)

Denote the top Lyapunov exponent [24] as

λ = lim_{t→∞} (1/t) ln Y(t),   (2.4)

where Y(t) = [x₁²(t) + y₁²(t)]^(1/2). The sign of the top Lyapunov exponent is commonly used to identify the motion state of the system: when λ ≥ 0 the motion of the system is chaotic, and when λ < 0 it is regular.

Select the parameters δ = 0.25, f = 0.27, ω = 1.0 and the initial condition x = 1.0, y = 0.0; we use the sixth-order Runge-Kutta-Verner method to solve systems (2.2) and (2.3). The resulting top Lyapunov exponent of (2.1) is shown in Figure 1. It is seen from Figure 1 that the top Lyapunov exponent is positive (λ ≈ 0.12); this illustrates that the system is chaotic. The phase map and the time-history map are shown in Figures 2(a) and 2(b). Let t_k = 2kπ/ω (k = 0, 1, 2, ...) and denote the Poincaré cross section as the set of states sampled at t = t_k; the Poincaré cross section is shown in Figure 2(c). From Figures 2(a) and 2(b), we find that the phase portrait is chaotic and the time history is not regular. From Figure 2(c), the Poincaré surface of section is a chaotic attractor. These observations confirm that the system is chaotic.

Suppressing Chaos of the Duffing-Holmes System Using a Random Phase

We introduce a random phase into (2.2):

ẋ = y,  ẏ = −δy + x − x³ + f sin(ωt + σξ(t)),   (3.1)

where ξ(t) denotes a standard Gaussian white noise, σ is the noise intensity, and ξ(t) satisfies

E[ξ(t)] = 0,  E[ξ(t)ξ(t + τ)] = ζ(τ),   (3.2)

where ζ(τ) is the Dirac delta function. Equation (3.1) is linearized as in (3.3)-(3.5). Assume that the random coefficients f_ij (i, j = 1, 2) of the linearized system are ergodic and that E‖F(t)‖ < ∞, where the norm ‖A‖ is defined as the square root of the largest eigenvalue of the matrix AᵀA. By the Oseledec multiplicative ergodic theorem [25], there exist two real numbers λ₁, λ₂ and two random subspaces E₁, E₂, with U_δ(0) denoting a neighborhood of the origin O(0, 0), such that λ_i (i = 1 or 2) is the Lyapunov exponent representing the rate of exponential convergence or divergence of nearby orbits in a specific direction in E_i, with Y(t) = [x₁²(t) + y₁²(t)]^(1/2). The theorem states that for almost all random initial values in the random subset U_δ(0) there holds λ = max_i λ_i = lim_{t→∞} (1/t) ln Y(t), and λ is defined as the largest, or top, Lyapunov exponent. Using Khasminskii's [19] technique, the computation of the top Lyapunov exponent of system (3.5) can be carried out as follows. The linearized solution is projected onto the unit circle, s(t) = (x₁, y₁)/Y(t) (3.7). It follows that the growth rate of Y(t) can be written in terms of the projected quantities m(t) = Σ_{k,l} A_{kl} s_k s_l and n(t) = Σ_{k,l} f_{kl} s_k s_l, with δ_ij = 1 for i = j and δ_ij = 0 for i ≠ j (3.8)-(3.9). Thus, the largest Lyapunov exponent can be expressed as a long-time average of these projected quantities (3.10), and the top Lyapunov exponent can be obtained by numerical integration of (3.10). Take δ = 0.25, f = 0.27, ω = 1.0; we solve (3.1) and (3.3) together with (3.8)-(3.10) using the sixth-order Runge-Kutta-Verner method. The top Lyapunov exponent as a function of the noise intensity is plotted in Figure 3.
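The self-contained sketch below shows how a curve like the one in Figure 3 can be produced numerically. It estimates the top Lyapunov exponent directly from the norm growth of the linearized solution (a Benettin-style estimate rather than the Khasminskii-Wedig formulation (3.7)-(3.10) used in the paper), and it approximates the white-noise phase σξ(t) over one step by a Wiener-process phase increment; this phase-diffusion discretization, the explicit Euler stepping, the step size and the scan grid are all assumptions of the illustration.

```python
import math, random

DELTA, F, OMEGA = 0.25, 0.27, 1.0

def lyapunov_with_phase_noise(sigma, t_end=2000.0, h=0.01, seed=1):
    """Top Lyapunov exponent estimate for the randomly phased system (3.1).
    A positive value indicates chaotic motion, a negative value regular motion."""
    rng = random.Random(seed)
    x, y, x1, y1 = 1.0, 0.0, 1.0, 0.0
    phase, t, log_sum = 0.0, 0.0, 0.0
    while t < t_end:
        # phase-diffusion approximation of the random phase sigma*xi(t)
        phase += sigma * math.sqrt(h) * rng.gauss(0.0, 1.0)
        forcing = F * math.sin(OMEGA * t + phase)
        # explicit Euler step for the system and its linearization (2.3)
        x_new = x + h * y
        y_new = y + h * (-DELTA * y + x - x**3 + forcing)
        x1_new = x1 + h * y1
        y1_new = y1 + h * (-DELTA * y1 + (1.0 - 3.0 * x**2) * x1)
        x, y, x1, y1 = x_new, y_new, x1_new, y1_new
        # accumulate the growth of the perturbation norm, then renormalize
        norm = math.hypot(x1, y1)
        log_sum += math.log(norm)
        x1, y1 = x1 / norm, y1 / norm
        t += h
    return log_sum / t_end

for sigma in (0.0, 0.05, 0.2, 0.5):
    print(sigma, round(lyapunov_with_phase_noise(sigma), 3))
```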
From Figure 3, the top Lyapunov exponent stays positive when σ is smaller than the critical value σ_c = 0.05. When the intensity is greater than the critical value, the sign of the top Lyapunov exponent suddenly turns from positive to negative; that is, the behavior of the system turns abruptly from chaotic to stable. From then on, further increasing the intensity of the stochastic phase does not affect the sign of the top Lyapunov exponent within the parameter range of interest. This suggests that the random phase noise effectively stabilizes the system for the parameter range σ ∈ (σ_c, 1.0]. Now we apply the Poincaré map of (2.1) to verify the above results. The Poincaré map is defined by sampling the solution at times t_k = 2kπ/ω (3.11). For the given initial condition as in Figure 4, the differential equation (2.1) is solved by the sixth-order Runge-Kutta-Verner method and the solution is recorded every T = 2π/ω; after deleting the first 500 transient points, the subsequent 200 iteration points are used to plot the Poincaré map for σ = 0.2 in Figure 4(c). For σ = 0.2, the phase portrait and time history are plotted in Figures 4(a) and 4(b), too.

From the comparison of Figures 2 and 4, the chaotic phase portrait corresponding to σ = 0.0 is changed into a circle. The chaotic time history is replaced by a periodic one. The Poincaré surface of section turns from a chaotic attractor into a stable attractor. It appears that the chaotic state of the original system has been controlled to a stable state by using a random phase.

Conclusions

Based on the work of Khasminskii and Wedig, we derive the top Lyapunov exponent for the Duffing-Holmes random system. We have shown that the chaotic dynamical behavior will be suppressed as the noise intensity increases slightly, by the criterion of the top Lyapunov exponent. The Poincaré map analysis, the phase portrait, and the time history fully verify the proposed results. We point out that the random phase is an important tool for suppressing chaos as a non-feedback control method.

[Figure 2: (a) Phase portrait, (b) time history, and (c) Poincaré surface of section.]
[Figure 4: (a) Phase portrait, (b) time history, and (c) Poincaré surface of section.]
2,199.4
2011-04-17T00:00:00.000
[ "Physics" ]
Water accounting under climate change in the transboundary Volta River Basin with a spatially calibrated hydrological model Sustainable water management requires evidence-based information on the current and future states of water resources. This study presents a comprehensive modelling framework that integrates the fully distributed mesoscale Hydrologic Model (mHM) and climate change scenarios with the Water Accounting Plus (WA + ) tool to anticipate future water resource challenges and provide mitigation measures in the transboundary Volta River basin (VRB) in West Africa. The mHM model is forced with a large ensemble of climate change projection data from CORDEX-Africa. Outputs from mHM are used as inputs to the WA + framework to report on water flows and consumption over the historical baseline period 1991 – 2020 and the near-term future 2021 – 2050 at the basin scale, and also across spatial domains including four climatic zones, four sub-basins and six riparian countries. The long-term multi-model ensemble mean of the net inflow to the basin is found to be 419 km 3 /year with an inter-annual variability of 11% and is projected to slightly increase in the near-term future (2021 – 2050). However, evaporation consumes most of the net inflow, with only 8% remaining as runoff. About 4 km 3 /year of water is currently used for man-made activities. Only 45% of the available water is beneficially consumed, with the agricultural sector representing 34% of the beneficial water consumption. Water availability is projected to increase in the future due to the increase in rainfall, along with higher inter-model and inter-annual variabilities, thereby highlighting the need for adaptation strategies. These findings and the proposed climate-resilient land and water management strategies can help optimize the water-energy-food-ecosystem nexus and support evidence-based decisions and policy-making for sustainable water management in the VRB. Introduction Climate change and socioeconomic development are projected to exacerbate water scarcity, contributing to food insecurity and conflicts between those who share resources (Damania, 2020;Leal Filho et al., 2022;Mekonnen and Hoekstra, 2016).Consequently, there is a pressing need for planners, policymakers, implementers, and basin authorities to have quantified data and evidence-based information on the current and projected states of water resources and their users.This is more urging in transboundary basins, where transparent management and equitable allocation of natural resources are essential for geopolitical stability (De Stefano et al., 2017;Mirzaei-Nodoushan et al., 2022;Zeitoun et al., 2016).Nevertheless, data unavailability and inaccessibility hinder sustainable water management and planning of interventions in many regions (Dinku, 2019;Sultan et al., 2020).Moreover, there are challenges in translating hydrology research into practice because methods usually need more clarity, and outputs are difficult to interpret (Rokaya and Pietroniro, 2022).It has become apparent that water information systems require adequate tools for measuring, planning, reporting and monitoring water resources across scales to optimize water uses and develop responsive, proactive and robust strategies for adaptation and mitigation of water risks (Adekola et al., 2022;Uhlenbrook et al., 2022). In this context, decision-making tools like water accounting systems can make a difference. 
Water accounting is the systematic assessment and presentation of information on the status and trends in water supply, water demand, accessibility and use in time and space within specified regions and with particular standards and clear definitions accessible to various water professionals (Batchelor et al., 2016;Van Dijk et al., 2014).Water accounting serves as a basis for evidence-informed decision-making and is relevant for policy development and water resource planning (Bassi et al., 2020;Mohammad-Azari et al., 2021;Momblanch et al., 2014;Pedro-Monzonís et al., 2016b).Therefore, water accounting can enhance the water-energy-food-ecosystem nexus thus moving towards the sustainable development goals (SDGs), SDG6 in particular, as it highlights connections, synergies and trade-offs among activity sectors (Elmahdi, 2020;Liu et al., 2018;Nkiaka et al., 2021).Several water accounting systems exist (see Dembélé (2020)) but none have been adopted as a general standard (Chalmers et al., 2012;Dost et al., 2013;Momblanch et al., 2018).Reasons for this failure include the facts that their terminologies are ambiguous and their outputs are usually too complex for decision making (Perry, 2007;2011), their input data are often not readily available (Bagstad et al., 2013;Perry, 2012), and they do not explicitly link land and water management practices and usually lack spatial details (Karimi, 2014;Muratoglu et al., 2022). More recently, the Water Accounting Plus (WA+) framework was developed to address the shortcomings of previous water accounting systems (Karimi et al., 2013a).WA+ provides estimates of manageable and unmanageable water flows, stocks, consumption by different users, and explicitly accounts for interactions with land use.The core of the WA+ methodology is based on a water balance calculation using a spatial analysis of water fluxes and stocks obtained via remote sensing.Compared to other water accounting frameworks, WA+ is particularly valuable for water resource reporting in data-scarce regions and ungauged locations because it primarily relies on open-access remotely sensed data.WA+ based on satellite data is rather suitable for the scale of large river basins (larger than a few 1000 km 2 ) and regional studies due to the usually coarse spatial resolution of satellite data.Moreover, WA+ is convenient for independent assessments of water resources in transboundary basins where data accessibility and data exchange are limited (Dembélé et al., 2019;Mukuyu et al., 2020).However, challenges in closing the water balance were observed when solely deploying satellite data with the WA+ framework (e.g.FAO and IHE Delft, 2020a;b;c;Hirwa et al., 2022). To address this drawback, this study proposes a comprehensive framework that uses the outputs of a spatially distributed hydrological model as inputs to the WA+ framework.Major advantages include the closure of the water balance via hydrological simulations, identification of the sources of uncertainties in the components of the water cycle as opposed to using various sources of satellite data, and development of scenarios to assess changes in the water cycle as a result of planned interventions, land use change, climate change, etc.In addition, spatially explicit hydrological modelling offers new possibilities to apply WA+ to future periods and provide projections of water accounts (i.e.water balance components) under changing environments (Dembélé, 2020), i.e. 
it represents an essential step towards the predictive use of water accounting frameworks. This study aims to propose a comprehensive framework that integrates remotely sensed data, a spatially calibrated hydrological model, climate change scenarios and the WA+ tool for water accounting of past trends as well as future predictions.The developed framework is implemented at a large scale in the transboundary Volta River Basin and water accounts are summarized at three spatial scales including subbasins, riparian countries and climatic zones.Consequently, the proposed novel WA+ modelling framework brings advances as compared to previous studies.These advances all combined in this study include (i) the use of a spatially-calibrated fully distributed (i.e.grid-based) hydrological model as opposed to lumped or semi-distributed models (e.g.Delavar et al., 2020;Delavar et al., 2022;Esen and Hein, 2020;Gao et al., 2020), (ii) the use of a large ensemble of climate change projection data to assess future conditions as opposed to only historical period analyses (e.g.Kivi et al., 2022;Kumar et al., 2023;Patle et al., 2023;Singh et al., 2022a;Singh et al., 2022c), (iii) a case study in a large transboundary basin with a multi-scale analysis across sub-basins, climatic zones and riparian countries to provide detailed insight that support transboundary water management as opposed to small or incountry basins (e.g.Hunink et al., 2019;Momblanch et al., 2014;Singh et al., 2022c), and (iv) the use of the novel WA+ framework instead of previous frameworks (e.g.Pedro-Monzonís et al., 2016a;Vicente et al., 2016;Vicente et al., 2018). Study area The Volta River Basin (VRB) is a transboundary basin covering about 415605 km 2 spread across six countries in West Africa, i.e.Benin, Burkina Faso, Côte d'Ivoire, Ghana, Mali and Togo (Fig. 1).Burkina Faso and Ghana alone share about 82.5% of the basin's total area (Dembélé, 2020).The VRB extends over four eco-climatic zones characterized by increasing vegetation density and precipitation from north to south, namely the Sahelian, Sudano-Sahelian, Sudanian, and Guinean zones.The drainage system comprises four sub-basins known as the Black Volta, White Volta, Oti and Lower Volta.The Volta River flows north--south over 1850 km and drains into the Atlantic Ocean at the Gulf of Guinea after transiting into the Lake Volta formed by the Akosombo dam (Fig. 1).The population of the VRB is essentially rural (70% of the basin's total population), and its annual growth rate is 2.5%, which means the population will double every 28 years (Rodgers et al., 2006).In 2010, 23.8 million people lived in the VRB and the population is projected to reach 56.1 million in 2050 (Williams et al., 2016).Water resources in the VRB play an essential role in socio-economic development, especially for agriculture, hydropower production, aquaculture, livestock and domestic water supply (McCartney et al., 2012).They provide livelihood for the rural populations primarily active in the agricultural sector (Amisigo et al., 2015;van de Giesen et al., 2001). 
Water demand in the VRB is projected to increase considerably by 2050 (Kotir et al., 2016; Mensah et al., 2022; Mul et al., 2015), thereby posing challenges for transboundary water resource management. First, VRB rainfall is erratic and has high spatiotemporal and inter-annual variabilities, which are expected to be exacerbated under climate change (Nicholson et al., 2018). Second, countries in the VRB have different national priorities in terms of water use. The upstream consumptive use of water in Burkina Faso is essentially dominated by agriculture. As Burkina Faso occupies the driest part of the VRB, its priority is the construction of small and medium reservoirs to develop irrigated agriculture (De Fraiture et al., 2014; Owusu et al., 2022). Meanwhile, the downstream priority in Ghana is the production of hydroelectricity from large dams (Darko et al., 2019; Han and Webber, 2020). Despite progress in water governance, the divergent priorities regarding water consumption and management remain sources of tension between both states (Biney, 2010; Owusu, 2012; Yankey, 2019). However, no major explicit conflict has occurred between the two countries, suggesting a certain degree of cooperation demonstrated by the establishment of the Volta Basin Authority and the 2007 riparian state convention (Matthews, 2013). In this context, an independent and unbiased assessment of the spatiotemporal availability of water and its various uses could provide a basis for decision-making and potentially alleviate future tensions, in addition to game-theory-developed strategies with issue linkage to support sustainable transboundary water sharing in the VRB (Bhaduri et al., 2011; Bhaduri and Liebe, 2013). Overview of the modelling framework The proposed methodological framework for water account projections is summarized in Fig. 2. Climate projection data from global and regional models force a fully distributed and spatially calibrated hydrological model after a multivariate bias correction of the climatic inputs (rainfall and temperature). The entire climate change impact modelling chain is described by Dembélé et al. (2022). To ensure a reliable spatial and temporal representation of hydrological processes, the hydrological model is calibrated with multiple variables including in-situ streamflow and satellite remotely sensed soil moisture, actual evaporation, and water storage anomaly, as presented by Dembélé et al. (2020b). The outputs of the hydrological model, simulated over the historical period and one future period, are fed into the WA+ framework for spatial analysis based on land use and land cover types. The near future was selected because it is deemed more realistic and useful for water management compared with a time further in the future when assumptions underlying the WA+ framework will have evolved. Future land use and land cover scenarios are not integrated in this study because they lie outside the scope of the impacts of climate change on water resources. Results of this study can however help identify future land use practices which would be adaptations to water scarcity and improve water security (Cook and Bakker, 2012; Steduto et al., 2012). Variations in the hydrological model inputs and outputs are assessed with the second-order coefficient of variation (V2) (Kvålseth, 2017), defined as follows:

V2 = s / (x̄² + s²)^0.5  (1)

where s is the standard deviation and x̄ is the mean of the variable x. Expressed as a percentage, V2 varies between 0% and 100%.
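To make the ensemble-spread metric concrete, the following is a minimal Python sketch of the second-order coefficient of variation as defined above; the function name and the example values are illustrative and not taken from the study's data.

```python
import numpy as np

def second_order_cv(x):
    """Second-order coefficient of variation V2 (Kvålseth, 2017), in percent.

    Unlike the ordinary coefficient of variation, V2 is bounded between
    0% and 100%, which makes it convenient for comparing the spread of
    water accounts across RCM-GCM ensemble members or across years.
    """
    x = np.asarray(x, dtype=float)
    s = x.std(ddof=1)   # sample standard deviation
    m = x.mean()        # sample mean
    return 100.0 * s / np.sqrt(m**2 + s**2)

# Example: inter-model spread of annual basin precipitation (illustrative values, km3/year)
ensemble_precip = np.array([405.2, 418.7, 430.1, 412.9, 423.5])
print(f"V2 = {second_order_cv(ensemble_precip):.1f} %")
```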
Climate projection data An ensemble of eleven general circulation models (GCMs) and four regional climate models (RCMs) is selected from the Coordinated Regional Climate Downscaling Experiment (CORDEX) for Africa (Giorgi et al., 2009). This gives 18 possible RCM-GCM combinations under the representative concentration pathway RCP8.5 (Table 1), which corresponds to a high greenhouse gas emission scenario with a rising radiative forcing pathway reaching 8.5 W m⁻² (~1370 ppm CO₂ equivalent) by 2100 (Van Vuuren et al., 2011). Only RCP8.5 is used because it was found to align more closely with historical and anticipated total cumulative greenhouse gas emissions to 2050 than other RCPs, making it the most useful RCP for informing societal decisions over short time horizons including mid-century and sooner (Schwalm et al., 2020a, b). In addition, significant changes to hydrological variables were mainly observed under RCP8.5 over the period 2021-2050 in the VRB (Dembélé et al., 2022). The Rank Resampling for Distributions and Dependences (R2D2) method (Vrac and Thao, 2020) is used for a multivariate bias correction of the climate projection datasets, which are subsequently evaluated against the best-performing satellite and reanalysis rainfall and temperature products in the VRB (Dembélé et al., 2020c). Spatially explicit hydrological model The fully distributed mesoscale Hydrologic Model (mHM) (Samaniego et al., 2010) is used to simulate the hydrological variables required for WA+. The model configuration is adopted from Dembélé et al. (2020b). In this study, the term evaporation represents all forms of evaporation (from canopy, soil and water bodies) including transpiration (Coenders-Gerrits et al., 2020; Shuttleworth, 1993). The full description of mHM and the calculation of the hydrological processes are given by Kumar (2010) and Telteu et al. (2021). Despite their limitations, earth observation data are still valuable for water resource monitoring and can improve hydrological model simulations if appropriately used (Dembélé et al., 2020a; Gleason and Durand, 2020; Papa et al., 2022). The ESA CCI land use and land cover (LULC) maps of 2005 and 2015 are used for the historical period and for the future period (2021-2050), respectively. The ESACCI-LC-L4-LCCS v2.0.7 LULC data, with a high spatial resolution of 300 m, are resampled to 1/512° using the nearest neighbour method, resulting in 8,834,858 grid cells in the VRB. This resampling is necessary for mHM, as the spatial resolution of the morphological data should be a submultiple of the hydrological simulation resolution. The mHM model is run at a daily time step with a spatial discretization of 0.03125° (~3.5 km), which corresponds to 34,547 active grid cells in the VRB and was chosen because of restrictions in computational resources. However, it can be considered high-resolution modelling in view of the large basin size. The sub-grid variability of the basin's physical characteristics (topography, soil texture, geology and land cover properties) is accounted for with a multiscale parameter regionalization technique (Samaniego et al., 2017), which is a critical strength in spatial accounting of ecosystem services (Nedkov et al., 2022). The bias-adjusted climate variables (rainfall and temperature) are used to force the mHM model, and the outputs (runoff, potential evaporation, actual evaporation, transpiration, interception, soil evaporation, water evaporation) generated over the historical period (1991-2020) and the near-term future (2021-2050) are subsequently used for the WA+ analyses.
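As a rough illustration of the nearest-neighbour resampling step mentioned above, the sketch below maps a categorical LULC grid onto a regular target grid using plain NumPy; in practice dedicated raster tools would be used, and the coordinate handling (ascending cell-centre vectors, a single bounding box) is a simplifying assumption.

```python
import numpy as np

def nearest_neighbour_resample(lulc, src_lat, src_lon, res_target=1.0 / 512.0):
    """Resample a categorical LULC grid to a regular target grid.

    Nearest-neighbour lookup is appropriate for categorical data such as
    land cover classes (class codes must not be averaged). `lulc` is a 2D
    array indexed as [lat, lon]; `src_lat`/`src_lon` are ascending 1D
    cell-centre coordinates of the source grid.
    """
    # Target cell-centre coordinates covering the same bounding box
    tgt_lat = np.arange(src_lat.min(), src_lat.max(), res_target) + res_target / 2
    tgt_lon = np.arange(src_lon.min(), src_lon.max(), res_target) + res_target / 2

    # Index of the nearest source cell for every target coordinate
    ilat = np.abs(src_lat[None, :] - tgt_lat[:, None]).argmin(axis=1)
    ilon = np.abs(src_lon[None, :] - tgt_lon[:, None]).argmin(axis=1)

    # Pick the nearest source value for every (target lat, target lon) pair
    return lulc[np.ix_(ilat, ilon)], tgt_lat, tgt_lon
```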
Water accounting plus (WA+) WA+ is a standardized reporting framework that summarizes and presents water conditions and management practices in river basins (Karimi et al., 2013a).It was developed based on the water accounting framework of the International Water Management Institute (Molden, 1997;Molden and Sakthivadivel, 1999).Beyond the quantification of water volumes, WA+ explicitly considers land use interactions with the water cycle, and assesses depletions rather than withdrawals.In the following, the term total evaporation is used in replacement of the debated "evapotranspiration" term (Miralles et al., 2020;Savenije, 2004), which is however used in the terminology of WA +.To avoid changing the WA+ terminology, the abbreviation "ET" is conserved but is defined as total evaporation in this study.WA+ results are presented in volume of water and the water accounts are reported on an annual basis as WA+ is meant for long-term planning (Bastiaanssen et al., 2015;FAO and IHE Delft, 2020b).Therefore, daily outputs of mHM are aggregated into annual values for WA+ analyses.More information and updates on WA+ can be accessed at https://wateraccounting.un-ihe.org(last accessed on 17 October 2022). Land use and land cover in WA+ Land use and land cover (LULC) is an essential input in WA+ because it determines whether the water is manageable or non-manageable.Four categories are used to group land use and land cover classes and they differ in terms of water management (Karimi et al., 2013a), see a description in Table S1 (supplementary material) and a summary in Table 2. For conciseness, the ESA CCI maps are first reclassified from the original 22 LULC classes into the 10 major LULC classes in the VRB, namely, water bodies, bare areas, urban areas, rainfed croplands, irrigated croplands, grassland, shrubland, evergreen forest, deciduous forest and wetlands (Table S2).Based on ESA CCI data availability, the LULC map of 2005 is used for analysis over the historical period and the map of 2015 is used for the future period (2021-2050), which helps add dynamics in LULC over the modelling periods.Table 2 provides the proportions of LULC classes in the VRB.The final LULC maps for WA+ are obtained by overlapping and intersecting the basic LULC maps from ESA CCI with other spatial data on various land status and uses.The maps of the World Database on Protected Areas (WDPA, 2016) and the Global Reservoir and Dam Database (GRanD; Lehner et al., 2011;Mulligan et al., 2020) are used to reclassify the primary LULC data and distinguish between protected versus nonprotected lands and identify managed water bodies.The final LULC maps for WA+ (Fig. 3), with a spatial resolution of 1/512 • , have additional information about the protection, utilization and management status of each LULC types. 
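The reclassification and overlay logic described above can be sketched as a simple per-cell lookup, as below. The ESA CCI code mapping and the assignment of major classes to WA+ categories are illustrative stand-ins for Tables S1-S2 and may not match the exact rules applied in the paper.

```python
# Illustrative mapping from a few ESA CCI class codes to major VRB classes
# (the full 22-class mapping is given in Table S2 of the paper).
ESA_TO_MAJOR = {
    10: "rainfed_cropland", 20: "irrigated_cropland", 50: "evergreen_forest",
    60: "deciduous_forest", 120: "shrubland", 130: "grassland",
    180: "wetland", 190: "urban", 200: "bare", 210: "water",
}

# Illustrative assignment of major classes to the four WA+ land categories.
MAJOR_TO_WAPLUS = {
    "rainfed_cropland": "MLU", "irrigated_cropland": "MWU", "urban": "MWU",
    "grassland": "ULU", "shrubland": "ULU", "deciduous_forest": "ULU",
    "evergreen_forest": "ULU", "wetland": "ULU", "bare": "ULU", "water": "ULU",
}

def waplus_category(esa_code, protected, managed_water):
    """Assign a WA+ land category to one grid cell.

    `protected` would come from overlaying the WDPA mask and `managed_water`
    from the GRanD reservoir mask, as described in the text.
    """
    major = ESA_TO_MAJOR.get(esa_code, "bare")
    if protected:
        return "PLU"                              # protected land use
    if managed_water and major == "water":
        return "MWU"                              # managed water bodies (reservoirs)
    return MAJOR_TO_WAPLUS[major]

# Example: an irrigated cropland cell outside protected areas
print(waplus_category(20, protected=False, managed_water=False))  # -> "MWU"
```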
Overview WA+ differentiates between exploitable, utilized, managed and consumed water flows and stocks, among many other components of the water cycle. Table 3 gives a definition of key WA+ terms and the estimation of water accounts (i.e. water balance components). WA+ is still under development, with currently eight standardized accounting forms called "sheets" to describe water conditions (Bastiaanssen et al., 2015). Each sheet has a set of indicators that summarize the overall water resources situation. However, this study focuses on the two most important sheets (i.e. sheet 1: the resource base sheet and sheet 2: the consumption or ET sheet) because the other sheets require information that is not available for future predictions (e.g. biomass production, agriculture, etc.). Examples of analyses with other sheets can be found in the literature (e.g. FAO and IHE Delft, 2019; Kivi et al., 2022; Salvadore et al., 2020). Evaporation partitioning from green and blue water sources A specific feature of WA+ is the explicit consideration of green water sources (precipitation, unsaturated soil water available to plants) and blue water sources (runoff and deep drainage recharging aquifers and supplying reservoirs, lakes and streams) (Falkenmark and Rockström, 2006; Velpuri and Senay, 2017). Thus, WA+ separates total actual evaporation (E_act) into green ET (E_green), or rainfall ET, and blue ET (E_blue), or incremental ET, which helps identify managed water flows and is achieved here using the Budyko approach (Msigwa et al., 2021; Singh et al., 2022a). Mikhail Budyko developed a supply-demand framework to describe the hydrology of a catchment assuming steady-state conditions over large spatial and temporal scales, considering the long-term water balance and energy balance (Donohue et al., 2011; Sposito, 2017). The long-term annual water balance can be defined as:

P = E_act + Q + ΔS/Δt  (2)

where P, E_act and Q are long-term annual averages of precipitation, actual evaporation and runoff, respectively. ΔS/Δt is the change in water stored in the soil and groundwater and is considered negligible under a steady state. The Budyko framework (Budyko, 1974) relates the ratio of long-term mean annual potential evaporation (E_pot) to precipitation (climatic dryness or aridity index) and the ratio of long-term mean actual evaporation to precipitation (evaporative index), resulting in a curvilinear function known as the Budyko curve described by the following equation (Donohue et al., 2010; McVicar et al., 2012; Simons et al., 2020):

ε = {ϕ tanh(1/ϕ) [1 − exp(−ϕ)]}^0.5  (3)

and

ε = E_act / P  (4)

ϕ = E_pot / P  (5)

where ε is the long-term mean annual evaporative index and ϕ is the long-term mean annual dryness index. Finally, E_green and E_blue are calculated as follows (Simons et al., 2020; Singh et al., 2022b):

E_green = min(E_act, ε P)  (6)

E_blue = E_act − E_green  (7)

Resource base sheet The WA+ resource base sheet provides an overview of overexploitation and quantifies exploitable, utilized, consumed and non-consumed water at the river basin scale. It is important to note that in the WA+ terminology, all water input to a basin (from precipitation or upstream basins) is called "inflow" and all water output from the basin is called "outflow". The resource base sheet gives information on all inflows and outflows of water volumes in a river basin and relates them to various hydrological and water management processes (Fig. 6).
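A minimal sketch of the green/blue partitioning of Eqs. (3)-(7) is given below, assuming E_green is capped by E_act so that E_blue remains non-negative; the units and example values are illustrative only.

```python
import numpy as np

def partition_green_blue(p, e_pot, e_act):
    """Split actual evaporation into green and blue ET with the Budyko curve.

    Inputs are long-term mean annual values (e.g. mm/year) per grid cell or
    per LULC class; arrays broadcast element-wise.
    """
    p, e_pot, e_act = (np.asarray(a, dtype=float) for a in (p, e_pot, e_act))
    phi = e_pot / p                                                   # dryness index, Eq. (5)
    eps = np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))    # Budyko curve, Eq. (3)
    e_green = np.minimum(e_act, eps * p)                              # rainfall ET, Eq. (6)
    e_blue = e_act - e_green                                          # incremental ET, Eq. (7)
    return e_green, e_blue

# Example for a sub-humid cell: P = 800, E_pot = 1900, E_act = 760 mm/year
print(partition_green_blue(800.0, 1900.0, 760.0))
```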
The net inflow to the basin is obtained by adding the change in storage to the gross inflow.A fraction of the net inflow is consumed as landscape ET, representing the part of total evaporation that occurs naturally and includes green water consumption (i.e.rainfall ET) and natural blue water evaporation without human influence (FAO and IHE Delft, 2020b).The remaining fraction of the net inflow after subtracting the landscape ET is the exploitable water, i.e. the non-evaporated water, which is available as blue water (Falkenmark and Rockström, 2006).The exploitable water comprises the utilized flow and the non-consumed water.The utilized flow corresponds to the manmade component of the incremental ET (i.e.E blue ) resulting from anthropogenic activities (e.g.irrigation, aquaculture, hydroelectricity, urban and domestic uses, and industries).The non-consumed water or total outflow represents the amount of water that physically leaves the basin through surface and subsurface outlets.It is composed of the water that could be additionally used (i.e.utilizable outflow) and the reserved flow for downstream commitments, navigational flows and environmental flow (Smakhtin et al., 2004).The landscape ET and the utilized flow form the consumed water (i.e.E act ).Table 3 provides a description of the data used and the calculation of the water accounts.A set of WA+ indicators are defined in Table 4 to support the analysis of water accounts (FAO and IHE Delft, 2020b; Karimi et al., 2013a). Consumption sheet The WA+ consumption sheet or ET sheet quantifies managed, manageable, and non-manageable water consumptions and defines their beneficial and non-beneficial proportions by activity sector including agriculture, environment, economy, energy, and leisure.It gives a summary of outflows related to total evaporation from different land use types (Fig. 8).Table 4 provides a set of WA+ indicators used for the consumption sheet. The breakdown of total evaporation into evaporation from soil, water, interception, and vegetation transpiration allows differentiating between beneficial and non-beneficial ET.The proportions of beneficial and non-beneficial ET, as well as the share of the beneficial water consumption per activity sector, depend on case studies and they are determined by value judgment of experts (Bastiaanssen et al., 2015;FAO and IHE Delft, 2019;Karimi et al., 2013b).Nevertheless, there is a list of default values developed by WA+ experts that can be used in first instance and adapted as per case study specifications (see dictionary by IHE Delft, 2016).The following assumptions are made: • Transpiration from vegetative cover is considered to be beneficial as it reflects the amount of water consumed for biomass production (e. g. 
crops), except for undesirable vegetation such as weed infestation in croplands, alien invasive species and floating vegetation in water bodies that can prevent evaporation (Bastiaanssen et al., 2015).
• Interception evaporation from wet leaves and canopies is assumed non-beneficial as it reduces the productive amount of rainfall that effectively reaches the ground (i.e. throughfall and stemflow) (Li et al., 2012; Zheng and Jia, 2020). However, interception can have some benefits for micro-meteorological conditions for crops and plant temperature regulation, and contributes to continental rainfall through moisture recycling (Karimi et al., 2013a; Savenije, 2004).
• Evaporation from open water bodies is considered beneficial insofar as it supports fishing, hydropower production, aquatic birds, water sports and leisure (Karimi et al., 2013b).
• Beneficial evaporation occurring over protected areas (PLU class) and utilized areas without regular land and water management (ULU) is mainly considered beneficial for the environment (e.g. biota sustainability), moderately to barely beneficial for leisure (e.g. ecotourism and wildlife viewing), and barely to insignificantly beneficial for other sectors. Over areas with land management and natural water supply (MLU) and areas with active water management (MWU), beneficial ET is largely to moderately beneficial for agriculture (e.g. cereals, vegetables and fruits), moderately to barely beneficial for economy (e.g. fishery, breeding and cash crops), energy (e.g. hydropower) and leisure (e.g. urban parks and reservoirs), and barely to insignificantly beneficial for the environment.
Based on these assumptions, the proportions of beneficial ET per evaporation source (i.e. interception, soil, water and transpiration) and its distribution per activity sector (i.e. agriculture, environment, economy, energy and leisure) are estimated for the VRB (Table S4). The beneficial ET fraction per source varies between 0% and 100% for each evaporation source, while the sum of the shares of beneficial ET per activity sector is 100% for each LULC class, as presented in Table S4. Results The results are organized in three parts. The first part gives an overview of the consistency of the WA+ estimates in the Budyko framework. The second part focuses on the basin-scale analysis and includes sub-sections on the resource base sheet, the consumption or ET sheet and the WA+ indicators. The third part provides multi-scale summaries of key water budget components across spatial domains (i.e. countries, sub-basins and climatic zones). All results are provided as long-term averages over 30 years along with the inter-annual and inter-model variabilities. [Fragment of Table 3, definitions of water accounts: gross inflow: total inflow to the basin from all sources; consumed water: water removed from the basin in the form of evaporation, i.e. total actual evaporation from all sources (E_act from mHM); outflow: the river outflow at the outlet of the basin; beneficial consumption: water consumed for the intended purpose, based on value judgment and site-specific assessment; non-beneficial consumption: water consumed for purposes other than the intended use.] Hydrological plausibility of the WA+ LULC classes A first important check is made here to examine the plausibility of water fluxes simulated with mHM per LULC class in the Budyko space (Fig.
4).The simulated long-term values for the different LULC classes (period 1991-2020 and 2021-2050) plot well in the physically possible space below the energy and water limits (Donohue et al., 2011;McVicar et al., 2012), and close to the theoretical curve postulated by Budyko.The evaporative index is between 0.76 and 0.98, and the aridity index is between 1.4 and 5.6, which are expected values for sub-humid to semiarid environments such as the VRB (Gunkel and Lange, 2017).It is noteworthy that LULC classes, particularly bare areas, show leftward and downward shifts in the Budyko space under future conditions, which denotes an increase in precipitation.The consistency of the simulated water fluxes for the retained LULC classes, thus underlying a suitable model parametrisation of mHM, is further demonstrated by the fact that the irrigated croplands have a slightly higher evaporative index than the rainfed croplands, and forests, water bodies and wetlands have a lower aridity index than the other LULC classes.However, E act seems to be underestimated for water bodies.Variations are also observed depending on the RCM-GCM simulations as shown by the spread of forest and bare areas classes across different ranges of evaporative index and aridity index. Basin scale WA+ reporting 4.2.1. WA+ resource base sheet Long-term annual averages of water accounts over the historical and future periods are provided in Fig. 5.The WA+ resource base sheet gives an overview of the water repartition into flows, stocks and fluxes as depicted in Fig. 6. For the baseline period 1991-2020, the long-term multi-model ensemble mean of annual total precipitation in the VRB is 419.6 km 3 / year with 4% inter-model variability (V 2 ) across RCM-GCM combinations.The average storage change is − 0.55 km 3 /year (V 2 = 71%), thereby resulting in a net inflow to the basin of 419.1 km 3 /year (V 2 = 3.9%).The landscape ET from green and blue water sources accounts for 92% of the net inflow and occurs at 56% in the ULU class (for abbreviations see Table 2) and at 32% in the MLU class.In the MLU, rainfed croplands represent about 33.51% of the basin area, which justifies the high proportion of the landscape ET.The ULU is dominated by grasslands (21.5%), shrublands (13.9%) and deciduous forest (17.9%), which represent more than half of the basin area (Table 2).The total consumed water in the basin is 388.8 km 3 /year (V 2 = 2.7%), with 95% ascribed to rainfall ET (368.7 km 3 /year and V 2 = 2.8%) from green water sources and the remainder to incremental ET (20.1 km 3 /year and V 2 = 5.1%) from blue water sources, of which 20% is due to manmade activities. Only 34.3 km 3 /year (V 2 = 19.4%) of water in the VRB are exploitable and correspond to 8.2% of the net inflow.The exploitable water refers to the blue water storage available in the basin, of which 11% is utilized (4 km 3 /year and V 2 = 2.4%), while the remainder 88% are not consumed and leave the basin as total outflow (30.3 km 3 /year and V 2 = 21.7%).The total outflow has the highest inter-model variability varying between 40% and 57% (Fig. 
5) and represents 7.2% of the net inflow.The estimated outflow of the VRB is in the range of previous findings, which is 30-40 km 3 /year (Amisigo et al., 2015;Barry et al., 2005;McCartney et al., 2012;Sood et al., 2013;Williams et al., 2016).The basin rainfall was also found to be around 400 km 3 /year (Andreini et al., 2000).The utilized flow occurs over the MWU that occupies about 2% of the basin area (Table 2) and is essentially composed of managed water bodies (1.49%), irrigated croplands (0.41%) and urban areas (0.1%). The evolution of the water resources over the future period 2021-2050 in the VRB shows an increase in most of the indicators presented in WA+ sheet 1 (Fig. 5 and Fig. 6).A slight increase in net inflow of +1.6% (6.5 km 3 /year and V 2 = 3.2%) relative to the historical period is expected, which results from an increase in precipitation by +1.4% (5.8 km 3 /year and V 2 = 3.2%) and an increase in storage change by +131% (0.7 km 3 /year and V 2 = 24%).As a consequence of the rise in net inflow, most of the water accounts are projected to increase, including landscape ET by +0.4% (1.5 km 3 /year and V 2 = 2.7%), rainfall ET by +0.5% (1.7 km 3 /year and V 2 = 2.6%), exploitable water by +15% (5.1 km 3 /year and V 2 = 9.5%) and total outflow by +16% (4.9 km 3 /year and V 2 = 10.1%).However, a slight decrease is projected for the incremental ET by − 0.3% (− 0.1 km 3 /year and V 2 = 4.1%).These results align with previous studies in the West African region where projections show an increase in E green and a decrease in E blue under climate change over 2021-2050 (Badou et al., 2018).In general, higher inter-model variabilities are projected in the future as compared to the baseline historical period (Fig. 5). Inter-annual variabilities of water accounts are given in Fig. 7.Over the historical period, the average inter-annual variability across RCM-GCM combinations for precipitation is 13%.While the inter-annual variability of the net inflow is 11%, landscape ET and exploitable water show 14% and 8.7%, respectively.The highest average interannual variability of 99.9% is shown by the storage change, followed by the 48% for total outflow and 43% for incremental ET or E blue , while the lowest values are 4.7% for the utilized flow and 8.7% for rainfall ET or E green .The inter-annual variabilities of water accounts are projected to increase with various magnitude in the future but the ranking of variabilities are conserved (i.e. the storage change and the utilized flow still have the highest and lowest inter-annual variabilities, respectively).The exploitable water gives the maximum increase in the inter-annual variability, which is +11%, while the inter-annual variability of outflow remains almost unchanged. WA+ consumption or ET sheet The WA+ consumption or ET sheet summarizes water consumption and provides the breakdown of total evaporation (ET) into transpiration and evaporation from soil, water bodies and interception (Fig. 8).Over the historical baseline period (1991-2020), the long-term multi-model ensemble mean of total annual ET (E act ) is 388.8 km 3 /year with 3.9% of inter-model variability (V 2 ) across RCM-GCM combinations (Fig. 
5).The total ET represents the consumed water, of which 11% are nonmanageable because occurring in protected lands (PLU), 55% are manageable on the utilized lands (ULU) and 34% are managed on modified lands (MLU) and water-managed lands (MWU).Transpiration is 189.5 km 3 /year (V 2 = 4.2%) and alone accounts for 49% of total ET, followed by soil evaporation (26% or 102 km 3 /year), interception evaporation (23% or 88.3 km 3 /year), while water evaporation was the lowest (2% or 9 km 3 /year). From the total water consumed in the VRB during the period 1991-2020, only 45% was beneficial.The total beneficial consumption was 173.5 km 3 /year (V 2 = 3.9%), with 55% attributable to the environment, 34% to agriculture, 5% to the economy, 4% to leisure and 2% to energy production.The non-beneficial water consumption represents 55% of the total consumed water.Most of the non-beneficial water consumption is ascribed to interception and soil evaporation that occurred at 62% in the ULU and 30% in the MLU. The projected water accounts over the period 2021-2050 (Fig. 5 and Fig. 8) show that the overall water consumption in the VRB remains almost unchanged with a minimal increase of +0.4% (1.7 km 3 /year and V 2 = 2.7%), which could be expected because of the low increase of the net inflow over the same period (Fig. 5).By maintaining the current land and water management practices, the beneficial water consumption could increase by +1.6% as a result of the +1.4% increase in transpiration.Moreover, the managed, manageable and non-manageable proportions of water consumption are conserved.The consumed water has an inter-annual variability of 7.6% (Fig. 7), which is projected to increase by +5% on average in the future, while its managed portion (i.e.managed water consumption) has an inter-annual variability of 36%, with +1.2% increase over the future period. The contribution of each WA+ land categories to total ET and its components as well as to the beneficial fraction and the water consumption in different activity sectors is shown in Fig. 9.For the historical period, most of the consumed water occurs in the ULU (55%), followed by the MLU (32%), the PLU (11%), and the MWU (2%).The MLU accounts for 88% of the water consumed for agriculture and 73% for the economy.The ULU is accountable for 77% of the water consumed by the environment, 60% for energy production and 34% for leisure.The PLU contributes at 55% of the water consumed for leisure and at 22% for the environment.The beneficial water consumption mainly occurs in the ULU (48%) due to the forests, followed by the MLU (34%) because of rainfed croplands, and the PLU (14%) because of protected vegetation species, forests and wetlands.Only 4% of the beneficial water consumption occurs in the MWU, which encompasses the irrigated croplands and the managed water bodies.Over the future period, the proportions of total ET per evaporation sources and beneficial ET per activity sectors based on the WA+ land categories are projected to decrease slightly for the PLU and ULU, and increase for MLU and MWU (Fig. 9), because of the future changes in land category areas (Table 2). WA+ key indicators A set of performance indicators (Table 4) are used to understand better the present and future water resource conditions summarized in the WA+ sheets (Fig. 
10).The indicators of the resource base sheet show that the long-term multi-model ensemble mean of the exploitable water fraction (EWF) is 0.08 (inter-model variability V 2 = 16%) over the baseline period 1991-2020 with an expected increase of +12% in the near future .The low EWF indicates that a small fraction of the net inflow can be exploited in the VRB, because of the large fraction of water consumed through landscape ET (Fig. 6).The stationarity index (SI) is − 0.0014 (V 2 = 71%), indicating a decrease in storage, with a projected increase by +132% in the future.The basin closure (BC) of 0.93 (V 2 = 1.4%) indicates that a large fraction of the available water is consumed and/or stored in the basin and is projected to decrease slightly by − 1%.The ET fraction (ETF) is 0.93 (V 2 = 1.4%), confirming that a substantial fraction of the total inflow to the basin is consumed through evaporation, while a small fraction is converted into renewable resources that increase storage or generate outflow from the basin. The indicators of the consumption sheet only show minimal changes for future projections of water accounts (Fig. 10).All the performance indicators are projected to slightly increase between +1% and +3%, except the irrigated ET fraction (IEF) that could decrease by − 2%.The average transpiration fraction (TF) is 0.49 (inter-model variability V 2 = 1.8%) and indicates that transpiration is a major process in water depletion in the VRB, which can be explained by the large presence of vegetated lands (rainfed croplands, irrigated croplands, grasslands, shrublands and forests) covering about 98% of the basin area.However, only 45% of the water consumption is beneficial, which can be justified by the low land and water management practices as the managed fraction (MF) is 0.34 (V 2 = 0.5%).Although agriculture occupies 34% of the basin area, the agricultural ET fraction (AEF) is only 0.15 (V 2 = 2.2%), while the contribution of irrigated agriculture is very low with an irrigated ET fraction (IEF) of 0.02 (V 2 = 2.2%).These results suggest that there are possibilities for improving land and water management to increase the benefits of water consumption in the VRB. Water accounts across spatial domains The spatial distributions of key water accounts across spatial domains, including the four climatic zones, the four sub-basins and the six riparian countries of the VRB are presented in this section. Spatial patterns of water accounts The spatial patterns of long-term multi-model ensemble mean of annual key water accounts over the historical baseline period are displayed in Fig. 11 along with the projected changes over the future period (2021-2050), and the associated inter-model (RCM-GCMs) variabilities, while the inter-annual variabilities are shown in Fig. 12.Total annual precipitation depicts a north-south increasing amount, varying between 450 mm/year in the north to about 1430 mm/ year in the south, with the highest values in the south-eastern zones of the basin.Similar patterns to precipitation are shown by actual evaporation (415-1250 mm/year) and runoff (3-400 mm/year), implying that precipitation is the primary driver of the water cycle in the VRB.Green ET (0-1180 mm/year) and blue ET (0-1220 mm/year) patterns are generally inversed as expected, i.e. places with lower green ET have higher blue ET and vice versa.This is clear from water bodies, especially from the Lake Volta in the south. 
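The sheet-1 bookkeeping and the indicators discussed above can be reproduced from a handful of basin totals. The sketch below infers the formulas from the prose (the formal definitions are in Table 4 and are not reproduced here), so the exact expressions, in particular for the basin closure, should be treated as assumptions; the example values are the baseline ensemble means reported in the text, with landscape ET derived as consumed water minus utilized flow.

```python
from dataclasses import dataclass

@dataclass
class WaterAccounts:
    """Long-term annual water accounts in km3/year (basin scale)."""
    precipitation: float      # gross inflow; for the VRB this equals precipitation
    storage_change: float     # dS/dt, negative when storage is depleted
    landscape_et: float       # natural green + blue water consumption
    utilized_flow: float      # man-made incremental ET
    outflow: float            # non-consumed water leaving the basin

def resource_base_sheet(wa: WaterAccounts):
    """Compute sheet-1 style aggregates and a few WA+ indicators."""
    gross_inflow = wa.precipitation
    net_inflow = gross_inflow + wa.storage_change
    exploitable = net_inflow - wa.landscape_et
    consumed = wa.landscape_et + wa.utilized_flow
    return {
        "net_inflow": net_inflow,
        "exploitable_water": exploitable,
        "consumed_water": consumed,
        "EWF": exploitable / net_inflow,        # exploitable water fraction
        "SI": wa.storage_change / net_inflow,   # stationarity index
        "ETF": consumed / gross_inflow,         # ET fraction
        "BC": 1.0 - wa.outflow / net_inflow,    # basin closure (one plausible form)
    }

# Baseline 1991-2020 multi-model ensemble means reported for the VRB
baseline = WaterAccounts(precipitation=419.6, storage_change=-0.55,
                         landscape_et=384.8, utilized_flow=4.0, outflow=30.3)
print(resource_base_sheet(baseline))
```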
Future projections generally show an increase for most of the water accounts and in most parts of the basin, with some exceptions.Runoff is projected to increase with the highest rates of change among the water accounts and exceeding +100% in some parts of the basin.The patterns of future changes for actual evaporation are very similar to green ET, both vary between − 5% to +5%, and their patterns show the imprints of the precipitation pattern, which changes between − 6% to +7%.Blue ET shows contrasting spatial changes dominated by a decrease in the southwestern and central-eastern regions of the basin, up to − 100%, and an increase in the southeastern side, exceeding +100% in some regions.However, the highest inter-model variabilities and inter-annual variabilities are found for runoff and blue ET, with higher variabilities for all water accounts projected in the future (Fig. 11 and Fig. 12). Multi-scale summary across spatial domains For conciseness, this section focuses more on the country-scale results, particularly for Burkina Faso and Ghana, as they share most of the basin area (Fig. 1), and briefly on the climatic zones and sub-basins.However, additional information and illustrations of the results for the climatic zones and sub-basins are provided in the supplementary material (Tables S5-S8, Figs.S3-S9). Regarding the proportions of WA+ land categories per country (Fig. 13), Ghana hosts the largest fraction of PLU (41.3%),ULU (49%) and MWU (73.1%) of the basin, while Burkina Faso has the largest fraction of MLU (66%) and ranks second for the other land categories.The detailed proportions of WA+ land categories for all countries, subbasins and climatic zones are given in Table S3. A summary of key water accounts (precipitation, actual evaporation, green ET, blue ET and runoff) across the four climatic zones, four subbasins and six riparian countries of the VRB is given in Table 5.The associated inter-model and inter-annual variabilities across RCM-GCM combinations are provided in Tables S5-S6 in the supplementary material.It appears that the highest rates of water accounts are found in the Guinean zone for the climatic zones and in the Lower Volta for the subbasins, while the lowest rates are in the Sahelian zone and Black Volta (except for blue ET), respectively. The long-term multi-model ensemble mean of key annual water accounts per country generally shows higher magnitudes in Ghana than in Burkina Faso (Fig. 14).However, the highest precipitation, evaporation and runoff rates are observed in Togo, while the lowest are observed in Mali, because of the climatic zones they are located in (Fig. 1).An inter-comparison reveals similar differences among the countries under future climate change as for the historical baseline period. In general, the inter-model and inter-annual variabilities for all countries are more critical for blue ET and runoff, and lesser for actual evaporation and green ET (Fig. 14).All variabilities are projected to increase in the future.It is noteworthy that, for all water accounts, interannual variabilities are larger than inter-model variabilities, and runoff has a larger inter-model variability than blue ET, while the opposite is observed for inter-annual variability.Burkina Faso has higher intermodel and inter-annual variabilities of water accounts as compared to Ghana.Mali usually has the highest inter-model and inter-annual variabilities.Details on inter-model and inter-annual variabilities are provided in the supplementary material (Table S5-S6). 
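A simple way to produce such multi-scale summaries is to aggregate the gridded annual fluxes with a domain mask, as in the sketch below; the unit conversion assumes fluxes in mm/year and cell areas in km², and all names are illustrative.

```python
import pandas as pd

def summarise_by_domain(flux_mm, cell_area_km2, domain_id, domain_names):
    """Aggregate a gridded annual flux (mm/year) into volumes per spatial domain.

    `flux_mm`, `cell_area_km2` and `domain_id` are flattened arrays of equal
    length (one entry per active grid cell); `domain_id` holds integer codes
    for countries, sub-basins or climatic zones, and `domain_names` maps
    those codes to readable labels.
    """
    volume_km3 = flux_mm * cell_area_km2 * 1e-6    # mm over km2 -> km3
    df = pd.DataFrame({"domain": domain_id, "volume_km3": volume_km3,
                       "area_km2": cell_area_km2})
    out = df.groupby("domain").agg(volume_km3=("volume_km3", "sum"),
                                   area_km2=("area_km2", "sum"))
    # Area-weighted mean flux per domain, back in mm/year
    out["mean_mm"] = 1e6 * out["volume_km3"] / out["area_km2"]
    out["share_of_basin_%"] = 100 * out["volume_km3"] / out["volume_km3"].sum()
    return out.rename(index=domain_names)
```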
The evaporative index varies between 87% and 96% among countries, while the runoff coefficient varies between 4% and 13%, with the basin average estimated at 92% and 8%, respectively (Fig. 15).These results corroborate with previous findings, which estimated the evaporative index between 86% and 95% and the runoff coefficient between 5% and 14% in the VRB (Barry et al., 2005;McCartney et al., 2012;Sood et al., 2013;Van de Giesen et al., 2010;Williams et al., 2016).Burkina Faso and Côte d'Ivoire have the lowest runoff coefficient, while Benin and Togo have the highest.A slight decrease of about − 1% on average is projected for the evaporative index under future climatic conditions, while an increase of +1% is projected for the runoff coefficient for all countries.Burkina Faso has 5% more evaporation than Ghana, while the opposite is observed for the runoff.The runoff coefficient varies between 4% and 5% in Burkina Faso, whereas it is between 9% and 10% in Ghana. The share of the basin water volumes per spatial domain is appreciable from Table 6.Ghana is the largest contributor to the basin fluxes and flows with about 46% for precipitation, 46% for actual evaporation, 45% for green ET, 64% for blue ET and 56% for runoff (Table 6).It is followed by Burkina Faso with 36% for precipitation, 37% for actual evaporation, 38% for green ET, 24% for blue ET and 22% for runoff over the baseline period.The third largest contributor is Togo, followed by Benin, Côte d'Ivoire and Mali.It is noteworthy that the contribution of each country to the basin volumes of water accounts depends on the rates or intensities of the water fluxes and flows received or generated over the country area within the VRB.For instance, a country can have high precipitation intensities but a small surface area in the basin, which can result in a relatively smaller contribution to the basin water volumes, like in the case of Togo (Table 5 and Table 6).The contributions of each country, each sub-basin and each climatic zones to the basin water accounts are summarized in Table S8. The pattern of the country's contributions to the water accounts hardly changes under climate change in the future.However, it is noteworthy that the contribution of Burkina Faso to the basin is projected to increase on average by +2% for runoff in the future, and compensated by a decrease in Ghana, when considering the multi-model ensemble mean. Possible land and water management measures Based on the results of the spatially explicit WA+ modelling of this study, it appears that the projected future increase in net inflow by 6.5 km 3 /year over 2021-2050 can be beneficial for the VRB if the available water resources are used appropriately by activity sectors.In this regard, the adoption of integrated water solutions can help cope with the looming and worsening impacts of climate change in the VRB (IWMI, 2021b).As runoff could increase on average by +27% in Burkina Faso and +13% in Ghana by 2050, adaptation measures should consider efficient drainage systems in urban places to mitigate rapid flow accumulations, flood detention and retention basins with minimum environmental impact to exploit excess runoff, and rainwater harvesting systems to combat drought spells during cropping seasons (Campisano et al., 2017;de Sá Silva et al., 2022;Scholz, 2019). 
There is a high potential for expanding agriculture in the VRB as exploitable water is projected to increase on average by 5 km 3 /year by 2050, thereby setting conditions to grow more crops while adopting sustainable practices to enhance water productivity and water use efficiency.Climate-smart agriculture solutions, including water and soil conservation techniques developed with the inclusion of local knowledge, could be adopted to improve the adaptive capacities of farmers and support food security under disruptions posed by climate variability and change (Lipper et al., 2014;Ogunyiola et al., 2022;Taylor, 2018).For instance, croplands could be expanded by supporting and promoting small-scale initiatives like farmer-led irrigation (IWMI, 2021a;Lefore et al., 2019;Woodhouse et al., 2017).However, water infrastructure development in the VRB was found more important for providing economic benefits to the riparian countries than cropland expansion only (Baah-Kumi and Ward, 2020;Kotir et al., 2016). With the predicted increase in exploitable water revealed in this study, the construction of resilient water storage infrastructure (e.g.small reservoirs, dams) becomes crucial in the VRB as they have long been the cornerstone of socio-economic development, particularly in regions with high climatic variability (McCartney et al., 2022;Rodina, 2019;Yu et al., 2021).Storing water in the VRB is essential for developing off-season irrigated agriculture as well as hydropower production, which are the top priorities of the upstream and downstream countries (Burkina Faso and Ghana) (Bhaduri and Liebe, 2013).Such additional infrastructure could help reduce the non-consumed water, which is projected to increase by +16% or 5 km 3 /year on average, and increase the man-made consumption from water storage, which currently represents 1% of the total consumed water in the VRB.The development of irrigation could increase the beneficial fraction of water consumption in agriculture, which is currently only 34%, with irrigation representing only 2% of water consumed by agriculture.Another potential strategy to increase the share of beneficial water use is to convert parts of the ULU lands (e.g.bare areas and grasslands) into MLU (e.g.rainfed croplands) or MWU lands (e.g.managed water bodies, irrigation) with adequate land and water management practices.These measures can limit non- beneficial soil evaporation through increased infiltration and improved irrigation efficiency.Moreover, there is a high potential to unlock further access to green energy with the development of hydropower in the VRB (Gyamfi et al., 2018;Kling et al., 2016), although the high interannual variability of runoff between 39% and 66% can be a limiting factor, as previously documented for West Africa (Obahoundje and Diedhiou, 2022;Wasti et al., 2022). 
The projected increase in runoff between +9% (Lower Volta) and +27% (Black Volta) across sub-basins implies a potential increase in the likelihood of floods in the VRB (Table S7), as already reported in previous studies (Dembélé et al., 2022;Jin et al., 2018).Possible adaptation strategies consist of green (i.e.vegetation) and blue (i.e.water) naturebased solutions such as forests, urban trees and parks, wetlands, ponds, and grey (i.e.built) infrastructures such as dams and drainage canals, to enhance storm water control, slow down runoff and increase groundwater recharge (Depietri and McPhearson, 2017;Keesstra et al., 2018;Nesshöver et al., 2017).However, hybrid approaches combining green-blue-grey infrastructures, such as rainwater harvesting systems, managed aquifer recharge, bioswales and green roofs, have shown higher effectiveness in flood mitigation (Alves et al., 2019;Sahani et al., 2019). These solutions, among many others, accompanied by innovative climate-resilient and risk-efficient initiatives, can help balance the water-energy-food-ecosystem nexus in the VRB (Botai et al., 2021;Samberger, 2022), thereby providing a solid foundation for sustainable socio-economic development.Nevertheless, the choice of actual development strategies depends on trade-offs between socio-economic development and nature protection (Dai et al., 2018;Endo et al., 2020), and could be achieved with adequate policy mixes of activity sectors (Schaub et al., 2022).Consequently, care should be taken to avoid environmental degradation and social drawbacks. Discussion As demonstrated in this study, strategic information for water resource management can be obtained from the WA+ framework but it also has limitations.WA+ is not meant for daily monitoring and assessment of water demand and supply and, therefore, cannot be used for day-to-day operation of reservoirs and irrigation systems (Bastiaanssen et al., 2015).It is instead designed for long-term planning of water and land resources in large catchments. This study uses a large ensemble of global and regional climate models to account for uncertainties associated with the meteorological data.It is noteworthy that the results might differ and even give opposite change signals if different climatic models, different simulation periods or different climate change scenarios (i.e.RCPs) are used, especially for rainfall, which governs the water cycle in West Africa, as highlighted in previous studies (Dembélé et al., 2022;Dosio et al., 2021;Liersch et al., 2020).Furthermore, there are additional uncertainties besides the classical sources of uncertainty associated with climate change impact projections (Eyring et al., 2019;Kundzewicz et al., 2018).The key uncertainties in the presented methodology are associated with i) the identification of WA+ land categories, ii) the Budyko approach for green and blue ET partitioning, and iii) the use of expert knowledge to identify the beneficial fractions of consumed water per activity sectors, which are discussed in the following. 
Information on land cover and land use is the backbone of the WA+ framework.Therefore, the reliability of the results highly depends on the accuracy of the LULC data.The used LULC data from ESA has the advantage of being available at a high resolution of 300 m and being subject to thorough quality check (ESA CCI, 2017), and can therefore safely be assumed acceptable for large-scale modelling in the VRB.Additionally, a constant LULC map is used over the 30-year simulation periods because of the primary goal to focus on climate change, which might only partially reflect the inter-annual changes in LULC.Therefore, the use of dynamic LULC maps is recommended for future studies as this can enhance the inter-annual water balance (Yonaba et al., 2021). Moreover, to bring confidence into the analyses, the Budyko framework was used here to check the plausibility of the LULC classification and the distribution of water and energy fluxes.Although actual evaporation from water bodies seems a little underestimated, the overall distribution of LULC groups in the Budyko space is realistic.Minor inconsistencies in the distribution of LULC groups in the Budyko space might be explained by the difference in spatial resolutions between the LULC maps (~300 m) and the mHM simulations (~3.5 km), as well as the aggregation of hydrological fluxes across contrasting climatic zones.In fact, averaging over spatial heterogeneity affects modelled hydrological processes governed by nonlinear relationships (e.g.evaporation), particularly in places where the spatial variation of precipitation and potential evaporation are inversely correlated like in the VRB (Rouholahnejad Freund et al., 2020). The Budyko framework is typically recommended for long-term analyses at catchment scale rather than at the grid cell.Therefore, there might be challenges using the Budyko framework for green and blue ET partitioning per grid cell, mainly when only using independent satellite remote sensing data (Msigwa et al., 2021).However, mHM is a grid-based hydrological model that guarantees the closure of the water balance for each grid cell (~12.25 km 2 in this study) in the basin before routing the total grid cell runoff through the river network (Samaniego et al., 2010), which justifies the use of the Budyko framework in this work.Other and new approaches for green and blue ET separation should be further investigated in future studies. The estimation of the beneficial and non-beneficial fractions of water consumption and its repartition per activity sector (agriculture, environment, economy, energy and leisure) can be biased as it is based on value judgement, which makes it debatable but also flexible because there is room for adjustments according to case studies.The value judgment requires expert knowledge, and the underlying results are initial estimates that can be refined on demand. A number of improvements and additions can be considered in future studies.For instance, utilizable water (i.e.non-consumed water fraction that could be used), non-recoverable flow (i.e.aquifer recharge and polluted water), non-utilizable outflow (i.e.inundation water) and reserved flows (e.g.downstream commitment for ecosystems and livelihoods) can be estimated, if reliable data and information are available on floods, water pollution, environmental flow, etc. 
(Mekonnen and Hoekstra, 2015;Pahl-Wostl et al., 2013).Moreover, different approaches to green and blue ET partitioning should be further investigated to better distinguish between natural and anthropogenic water consumption (Msigwa et al., 2021).Scenarios of LULC changes (e.g.deforestation, afforestation, irrigation schemes, reservoirs, etc.) can be used to examine how decisions on land use practices and investments in water infrastructures can affect water accounts.For climate change projections, the use of the new Shared Socioeconomic Pathways (SSPs) is recommended for future studies (Riahi et al., 2017), and multi-model approaches based on different hydrological models are encouraged to account for model structural uncertainties (Dion et al., 2021;Moges et al., 2020).Finally, system dynamics modeling and participatory modelling should be explored to consider the interactions between population, water, land and activity sectors, including industry and domestic uses that can have a higher water demand in the future (Kotir et al., 2017;Zomorodian et al., 2018).The combination of these efforts will help operationalize the WA+ framework (Hundertmark et al., 2020). Conclusion This study successfully demonstrates the benefits of a modelling framework that integrates a spatially explicit hydrological model and climate change scenarios with the WA+ tool for a better understanding and visualization of the impacts of climate change on a large basin's water resources and the various users, with a case study in the transboundary Volta River Basin in West Africa.The proposed WA+ modelling framework has several advantages compared to the traditional WA+ approach solely based on earth observation data.In fact, the use of a spatially explicit hydrological model allows future predictions with climate change scenarios and at a higher spatial resolution with a proper closing of the water balance, which would have been impossible if only using satellite remote sensing data.The proposed standardized reporting method allows managers and policy developers, and implementers to interpret complex modelling outputs and develop evidence-informed climate change mitigation measures across multiple spatial scales, including countries, sub-basins and climatic zones, which is very useful for transboundary applications in large basins. The case study in the Volta River Basin revealed a slight increase in the net inflow under climate change over 2021-2050, driven by an increase in rainfall, and resulting in an increase in the future exploitable water and the total outflow of the basin.The projected increase in net inflow could benefit the Volta River Basin if appropriate measures are implemented for efficient water allocation and management per activity sector.The water storage capacity of the Volta River Basin could be increased to better satisfy the water requirements for agriculture and hydropower generation, which are the priorities of Burkina Faso and Ghana, besides the basic water needs for domestic uses.However, the high inter-annual variability of runoff could be a constraint.Naturebased solutions would be valuable for mitigating the impacts of floods and droughts.The adopted solutions and strategies should consider trade-offs among activity sectors to optimize the water-energy-foodecosystem nexus. 
In this era of big data sustained by satellite imagery and artificial intelligence, transboundary information exchange among the riparian states of the basin is essential to bolster resilience and foster regional development in the face of the worsening impacts of climate change. Consequently, sustainable progress in water resources management in the region is only possible under a strong collaboration between scientists, development practitioners and policymakers. These efforts will enhance water governance and strengthen water security in the Volta River Basin.
Table 6. Repartition of key water accounts across spatial domains in the Volta River Basin. The colour scale indicates ranked values from the lowest (red) to the highest (blue).
Fig. 3. WA+ land use classes of the Volta River Basin based on ESA CCI data of 2015 (a), and grouped into the four WA+ classes (b).
Fig. 9. Total evaporation (ET) breakdown and beneficial ET fraction for each activity sector per WA+ land category.
Fig. 11. Long-term multi-model ensemble mean of annual water accounts over the historical baseline period (1991-2020) with the projected changes over the future period (2021-2050) and associated inter-model variability across RCM-GCM combinations in the VRB.
Fig. 13. Share of WA+ land categories per riparian country (a), sub-basin (b) and climatic zone (c) in the Volta River Basin in 2015.
Fig. 14. Long-term average of annual water accounts (a, b) with the associated inter-model variability across RCM-GCM combinations (c, d) and inter-annual variability (e, f) for the VRB and its riparian countries over the historical (1991-2020) and future (2021-2050) periods.
Table 2. Proportions of land use and land cover classes in the Volta River Basin per WA+ LULC class.
Table 5. Long-term multi-model ensemble mean of key annual water accounts across spatial domains in the Volta River Basin. The colour scale indicates ranked values from the lowest (red) to the highest (blue).
13,454
2023-08-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Quantum kernels for real-world predictions based on electronic health records In recent years, research on near-term quantum machine learning has explored how classical machine learning algorithms endowed with access to quantum kernels (similarity measures) can outperform their purely classical counterparts. Although theoretical work has shown provable advantage on synthetic data sets, no work done to date has studied empirically whether quantum advantage is attainable and with what kind of data set. In this paper, we report the first systematic investigation of empirical quantum advantage (EQA) in healthcare and life sciences and propose an end-to-end framework to study EQA. We selected electronic health records (EHRs) data subsets and created a configuration space of 5–20 features and 200–300 training samples. For each configuration coordinate, we trained classical support vector machine (SVM) models based on radial basis function (RBF) kernels and quantum models with custom kernels using an IBM quantum computer, making this one of the largest quantum machine learning experiments to date. We empirically identified regimes where quantum kernels could provide advantage on a particular data set and introduced a terrain ruggedness index, a metric to help quantitatively estimate how the accuracy of a given model will perform as a function of the number of features and sample size. The generalizable framework introduced here represents a key step towards a priori identification of data sets where quantum advantage could exist. Introduction Over the last years, real-world data has been increasingly used to generate medical evidence and progress precision medicine.This includes sources such as electronic health records (EHRs), claims and billing data, product and disease registries, and data from wearables and health applications 1 .Powerful data mining techniques have been applied to such data sets, particularly to EHRs, in order to predict a broad range of medical conditions and events 2,3,4,5 .However, classical machine learning and data science techniques have limitations with regard to learning some of the most complex patterns; for instance, the predictive power of genetic risk scores derived from genome-wide association studies has plateaued over the last years 6 .As a result, quantum machine learning has been explored as an alternative and for certain general problems it has already been proved that quantum machine learning algorithms can provide benefits beyond the scope of classical ones 7 .The complexity of the correlations and patterns in EHRs and (real-world) medical data sets makes such data sources prime candidates for the application of quantum algorithms 8 . 
The study of supervised machine learning problems with quantum techniques is an active area of research 9 .In early work on classification with near-term quantum algorithms 10,11,12 , the proposed quantum feature maps typically encode the datapoints into inner products or amplitudes in the Hilbert space.The quantum circuit used to implement the feature map is of a length which is typically a linear or polylogarithmic function of the size of the data set and the number of qubits is a function of the number of features.In subsequent work, the advantage of a quantum feature map was rigorously proved for a carefully chosen synthetic data set 7 .Recently, a body of work 13,14 implementing quantum feature maps for small-scale coarse-grained practical data sets has emerged; while there have been studies of different feature maps 15 , none have been discovered so far with rigorous advantage for general or practical data sets.The capability limits of near-term quantum computers has also been pushed in work where the data set was less coarse-grained 16,17 ; furthermore, efforts have begun to study how hyperparameters affect the potential advantage of a given quantum classifier 18 .Another stream of research has emerged on finding a suitable quantum feature map for a given data set with 19 providing a recent review of quantum classification algorithms.Studies on quantum feature maps involve both the study of the kernel function and the study of the quantum circuits which encode the outcome of the kernel function into Hilbert space.A new set of metrics and a protocol has also been proposed to determine the possibility of quantum advantage for a given pair of data set and quantum feature map 20 . In this work, we focus on one kernel-based method which uses the quantum support vector machine (QSVM), estimating the kernel with a quantum computer and feeding it back into a classical support vector machine (SVM) for classification 12 .To the best of our knowledge, there have not yet been any systematic studies regarding the applicability of quantum kernels to EHRs.Here we predict the six-month persistence of rheumatoid arthritis patients on biologic therapies.The central research questions investigated in this work are therefore: • Can we enhance the prediction of medication persistence by applying quantum kernels to real-world EHRs? • Can we systematically identify problem instances (number of features, number of samples) where quantum computing may have an advantage for such real-world data sets? The methods developed in this work are general and can be applied for a wide variety of problems with different-size data sets in machine learning and optimization.In this paper, nevertheless, we focus on small data sets, particularly those where the ratio of the number of features to the number of samples is relatively large, which typically engender hard classification problems.Such data sets are important in a range of medical settings, for instance in clinical trials, studies of very specific cohorts, and translational medicine.Moreover, small data sets are naturally suited to near-term quantum computers. 
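The kernel-based workflow described above can be sketched with scikit-learn's precomputed-kernel interface. The quantum part is reduced here to a placeholder function `kernel_entry` standing in for the pairwise similarities estimated on a quantum processor or simulator, so this is a schematic of the data flow rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def quantum_kernel_matrix(X1, X2, kernel_entry):
    """Assemble a kernel (Gram) matrix from pairwise similarity estimates.

    `kernel_entry(x, y)` is a placeholder for the fidelity-type similarity
    estimated on quantum hardware or a simulator for two encoded samples.
    """
    return np.array([[kernel_entry(a, b) for b in X2] for a in X1])

def fit_qsvm(X_train, y_train, X_test, kernel_entry):
    """Train a classical SVM on a precomputed (quantum-estimated) kernel."""
    K_train = quantum_kernel_matrix(X_train, X_train, kernel_entry)  # (n_train, n_train)
    K_test = quantum_kernel_matrix(X_test, X_train, kernel_entry)    # (n_test, n_train)
    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    return clf.predict(K_test)
```

A classical baseline with an RBF kernel would simply replace the precomputed matrices with `SVC(kernel="rbf")` fitted on the raw feature vectors.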
The concepts "quantum supremacy" 21 and "quantum advantage" 22 have been around for a while and refer to asymptotic performance comparisons between a quantum approach and the best classical approach.Complementing these two foundational concepts, in this work we introduce a related concept called empirical quantum advantage (EQA).We define EQA as the incremental gain of using a specific quantum approach over a specific classical approach for a given problem.Once this heuristic measure is calculated, it is meaningful only in the context of three elements -the problem as well as the classical and quantum approaches used.It may not give any general asymptotic indications about "supremacy" or "advantage" for a family of problems.However, as in the field of practical classical algorithms 23 , practitioners may use EQA to observe trends in empirical data.This is key in biology and medicine where both theoretical and operational factors must be considered, in general, when exploring the benefits of quantum algorithms for a given application 24 . When making measurements of EQA, multiple metrics were considered, with a final choice of three key metrics -F1 score and balanced accuracy at the configuration space coordinate level as well as the phase space terrain ruggedness index (PTRI) at the configuration space landscape level.PTRI is thus a global metric, fully described in the Methods section.The reasoning behind choosing these metrics was as follows. Both F1 score and balanced accuracy are commonly used in machine learning; they measure the performance of a given model.On the other hand, PTRI captures the hardness of the configuration space for a given set of machine learning problems.The typical coordinate structure to explore that space of problems consists of the number of features and the number of samples.While the final data set used for binary classification is quite balanced (52 % to 48 %), more imbalanced cases were also considered.F1 score and balanced accuracy are thus readily generalizable to more imbalanced settings in future research.Furthermore, we chose to present the F1 score because, while it equally weights false negatives and false positives, we do not have the exact cost of either of those.In other words, the relative cost of recall and precision are different in specific model deployments. 
Results We started by evaluating multiple two-dimensional landscapes in the classical domain with the number of topmost important features ranging from 1 to 20 in increments of 1 and the training set sizes ranging from 50 to 600 samples in increments of 50. The topmost features were determined using the SHAP method 25 (see Methods section). For each landscape coordinate, 200 random train/test subsets were created out of the available data. Since the most resource- and time-demanding part of the process is the calculation of the custom kernels in quantum simulations (classical hardware simulating the behavior of a quantum computer) and on a real quantum processing unit (QPU), a small number of these data sets were chosen to evaluate the required total processing time. Custom kernels were calculated using both quantum simulations and QPUs. With the obtained runtimes, a realistic number of data sets that could be executed was calculated. As a result, the configuration landscape was reduced to four feature number values [5, 10, 15, 20] and three training set sizes [200, 250, 300], with two train/test subsets per coordinate. This yielded a total of 24 subpoints across the 12-coordinate configuration space. We found this to be feasible for quantum simulation and, critically, also for QPU execution. Since analytical study of the hardness of such a large practical problem is extremely difficult, these types of large-scale simulation and hardware experiments across a broad configuration space are the most pragmatic way to identify trends and outliers. As a parallel outcome, this work hence represents one of the largest quantum machine learning experiments to date. The feature size component of the coordinates dictates the number of qubits for the QSVM. It was obtained by taking the most important features from the classical models built on the same data using the full-size data sets (see Methods section).

We used the predict method from the svm.SVC class within scikit-learn 26 as the main method for comparing quantum and classical support vector machine performance. In addition, the predict_proba method was used to obtain estimates of probabilities. Thresholds were varied in the range 0 to 1 in small increments and applied to the probabilities to generate the optimal split between the two class labels. The plots presented focus on the predict method; a detailed discussion of the probability-based results can be found in the Methods section.

Presented in Fig. 1 are the comparative 3D plots of the F1 score and balanced accuracy metrics, with the orange surface presenting points of the averaged metric for classical computing and the blue surface for quantum computing (QPU). All QPU experiments presented in this paper were run on ibmq_dublin (see Methods section). Each point on the configuration space coordinate was averaged from two selected data sets for that coordinate; thus, each plotted configuration space has 12 points in total. The z-axis is the metric while the x- and y-axes are the number of features and training samples respectively. The PTRI was calculated for the full configuration space both for balanced accuracy and F1 score metrics and plotted in Fig. 2 using the same approach as previously presented for the other metrics. For each point in the configuration space and its corresponding two subpoints, we calculated the geometric difference defined in 20 between radial basis function (RBF) and quantum kernels and averaged the corresponding two values at each coordinate, as shown in Fig. 3.
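As an illustration of the classical side of the configuration-space evaluation described above, the following is a minimal sketch: for each (number of features, training set size) coordinate it draws random stratified train/test subsets, fits an RBF-kernel SVM, and records the averaged balanced accuracy and F1 score. The data arrays, the SHAP-ordered feature indices, and the number of repeats are placeholders; the actual pipeline and the quantum counterpart are described in the Methods section.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score, f1_score

def classical_landscape(X, y, feature_order, n_features_grid=(5, 10, 15, 20),
                        n_train_grid=(200, 250, 300), n_test=150, n_repeats=2, seed=0):
    """Average RBF-SVM performance over random subsets at each landscape coordinate.
    feature_order: column indices sorted by (SHAP) importance, most important first."""
    rng = np.random.RandomState(seed)
    surface = {}
    for k in n_features_grid:
        cols = feature_order[:k]
        for n_train in n_train_grid:
            bacc, f1 = [], []
            for _ in range(n_repeats):
                X_tr, X_te, y_tr, y_te = train_test_split(
                    X[:, cols], y, train_size=n_train, test_size=n_test,
                    stratify=y, random_state=rng.randint(1_000_000))
                scaler = StandardScaler().fit(X_tr)
                clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
                pred = clf.predict(scaler.transform(X_te))
                bacc.append(balanced_accuracy_score(y_te, pred))
                f1.append(f1_score(y_te, pred))
            surface[(k, n_train)] = (np.mean(bacc), np.mean(f1))
    return surface
```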
By comparing the position of the data points with the horizontal "zero-advantage line", we observe EQA for a subset of problem instances in the configuration space: 0 % and 92 % of all instances for non-probability-based balanced accuracy and F1 score respectively, as well as 33 % and 8 % of all instances for probability-based balanced accuracy and F1 score respectively, showed such quantum advantage.

Discussion In this work, we have considered a configuration space of classification problems with varying numbers of features and samples. On that manifold, we have observed EQA for 0 % and 92 % of classification problem instances for non-probability-based balanced accuracy and F1 score respectively, as well as 33 % and 8 % of classification problem instances for probability-based balanced accuracy and F1 score respectively. This makes it apparent that EQA is something that must be evaluated on a case-by-case basis until clearer trends present themselves. Identification of hard instances through careful domain consideration has allowed us to observe such advantages with no circuit being executed more than 1024 times (i.e. a maximum of 1024 shots), which is almost one order of magnitude less than previous large-scale quantum machine learning experiments 16,17 (and therefore results in higher sampling noise). This indicates that, in the future, domain expertise about the hardness of practical problems is going to be crucial for the development and refinement of quantum algorithms. Our observation of these empirical trends reiterates the significance of developing such large-scale experiments to understand the trends and detect outliers. The considerable differences in EQA based on the choice of performance metric could suggest that practical quantum advantage is going to be highly domain-specific. Further work is needed to explore the applicability of different performance metrics for various domains. Given that there is a need to build robust machine learning models in medical settings where additional samples are costly or impossible to acquire, even a modest reduction in the number of samples required for training based on certain data distributions can yield considerable benefits for many prediction and inference problems in biology 24 .

We also introduced a practical metric, PTRI, to quantify and thereby qualify the quantum advantage potential for a given problem. For any metric, PTRI helps identify the flattest and most rugged regions in configuration space. One could imagine that the flattest classical performance region is the configuration subspace where the performance of the classical techniques becomes stagnant and where a quantum algorithm should therefore be considered. In that case, computing the PTRI for the quantum approach over the given configuration space may give some insights about where quantum advantage is likeliest. This domain-agnostic metric is one of the first attempts at an operational tool which, in the future, quantum practitioners can use to determine when to use a quantum computer, a dynamic decision that may have to be taken very frequently, under severe timing constraints. Further study is needed to interpret the amount of correlation between the manifolds of classical and quantum performance metrics in terms of PTRI and related measures.
As a parallel result, we have also presented, to the best of our knowledge, the first independent application of the geometric difference, which we employed to determine the relative separation between classical and quantum feature maps. Further study is needed to understand how quantum practitioners may combine the concepts of PTRI and geometric difference to first identify the potential for quantum advantage in a configuration subspace and then estimate the potential of a specific quantum feature map in that subspace. We emphasize that there may be other relevant metrics worth exploring in the future when studying forms of quantum advantage, such as energy consumption.

It is also important to observe that we used the same kernel function and feature map for every classification problem. More studies are needed to determine appropriate combinations of kernel function and feature map that result in greater EQA. It may also be worthwhile to investigate whether there are kernel functions inspired by one-way 7 , trapdoor, or learning with error (LWE) 27 protocols that may not only provide advantage in prediction accuracy but also in time complexity.

Ultimately, we conducted the first systematic study of QSVM configuration space and quantum classification based on an EHR data set. We classified the persistence of rheumatoid arthritis patients on biologic therapies, predicting six-month persistence via binary classification. Furthermore, we proposed an end-to-end framework to study EQA that can be generalized for other machine learning and optimization problems and observed EQA for a subset of problem instances in the configuration space. Our framework represents progress towards a priori identification of data sets where quantum advantage could be achieved and underscored that even with current quantum computers it is possible to arrive at predictions which are at least as good as those obtained with classical computers. These results have implications for classification problems across industries, particularly for small data sets.

Quantum Feature Map The feature map used in this work is known as the ZZFeatureMap, which gives rise to a feature space of 2^N dimensions where N is the number of qubits 12 . This family of circuits is believed to be hard to simulate classically 28 .

IBM Quantum hardware ibmq_dublin is a 27-qubit superconducting qubit quantum computer available on the IBM Quantum Services. The qubit connectivity is shown in Fig. 5. For qubits, lighter color means higher T2 time, and for couplings, lighter color means lower fidelity. The average CNOT error rate and average readout error rate, at the time of authoring this manuscript, were 1.097 % and 3.585 % respectively. The average T1 and T2 times were 107.03 μs and 114.53 μs respectively. The average gate time was 473.397 ns. More details may be accessed in real time 29 . For every quantum circuit, 1024 shots were run. The circuits were always maximally optimized using application programming interface (API) calls before the runs. A sample circuit computing the feature map of a five-feature data set is given in Fig. 6.
Fig. 6. Cropped circuit for quantum kernel calculations, with the full subcircuit for one inner product calculation shown on the top two qubits for a five-feature instance. The full circuit repeats a similar pattern across different qubit pairs. Qubits and interactions were mapped based on the connectivity of ibmq_dublin. qi is the i-th algorithmic qubit and the double-digit index after the arrow sign is the physical qubit index on ibmq_dublin.

Quantum simulator The quantum simulations were run without noise models on the qasm_simulator, available on the IBM Quantum Services. Each circuit was run with 1024 shots and the circuits were always maximally optimized before each run. The simulations supported the experimental design and results.

EHR data In this work, there have been two main challenges related to making predictions based on the EHR data. First, the problem of binary classification in patient persistence depends on the quality of the main classification label - that is, whether or not a given patient is persistent on the medication. This is derived from prescription data, and there are known challenges in determining patient persistence from prescriptions 30,31 . While additional claims-based data sets could be used in conjunction with EHR data to improve the certainty of prescription patterns, that was not an available option during this work.

In addition, we used imputation to fill in missing data points. For example, not all laboratory results are equally present in each of the patients, and more specifically, not in the period of time covered by the data used for model training and testing. As detailed in the data section, the data was chosen such that for the first 10 features there is no missing data for any selected patient. Above the 10th feature, the data is sparser. Thus, for features 11-20 missing data was imputed using the mean of the present data. While the impact of that imputation is minimal given that the majority of the model accuracy comes from the top 10 features, applying different imputation techniques could be explored in the future.
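As a sketch of the imputation step just described, assuming the feature columns are already ordered by importance so that the first 10 columns are complete, missing values in the remaining columns can be filled with the column means:

```python
import numpy as np

def impute_tail_features(X, n_complete=10):
    """Mean-impute missing values, but only in columns beyond the first
    n_complete features, which by construction contain no missing data."""
    X = X.astype(float).copy()
    for j in range(n_complete, X.shape[1]):
        col = X[:, j]
        missing = np.isnan(col)
        if missing.any():
            col[missing] = np.nanmean(col)   # mean of the values that are present
    return X
```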
the index date, was set to be the start date of the first biologic prescription.Inclusion criteria required at least one year of data prior to the index date; any data prior to one year before the index date was truncated, thus guaranteeing the same interval length for all patients.Additional inclusion criteria applied were a minimum of 6 months of data after the index date with stable payer insurance and the patient had to be at least 18 years of age and be in an integrated delivery network (which was indicated by the flag in the data set).Exclusion criteria were more than one diagnosis of systemic lupus erythematosus (SLE) or psoriatic arthritis (PsA), combined with prior use of targeted diseasemodifying antirheumatic drugs (DMARDs) including biologics and Janus kinase inhibitors (JAKs).The inclusion/exclusion process is illustrated in Fig. 7. Fig. 7. Cohort restrictions applied to the EHR data. Creation of train and test subsets The full pipeline was developed on AWS/Databricks using PySpark, Python, and SQL.At the point where all preprocessing, inclusion and exclusion criteria were completed and the model training started, the data set size was reduced to 16000 samples, with a relatively balanced target class (52 % of patients persisted while 48 % did not).The number of features in that model exceeded 500, with the top 10 features accounting for more than 90 % of the achieved accuracy of 0.64.The variability of the model metrics within the set of 10 different train/test splits was under 4 %.The train/test split ratio was set at 80/20. In the preliminary experiments with classical SVMs and quantum simulations, the range of the explored landscape was 50-600 in training set size in increments of 50 and the range in the number of features was from 1 to 20 in increments of 1.Since it was known that the top 20 features carry > 95 % of accuracy, this was deemed sufficient and within the reach of QSVM, where each feature maps to one qubit, thus leading to 20-qubit QPU experiments. To reduce the training set size further from 12800 (80 % of the original 16000), an additional step was applied to the 16000-sample model data.First, the samples with no missing data in the top 10 features were selected, leaving the features from 11 to 20 with some missing data.That reduction yielded a data set of 1300 samples.This resulted in data sets with minimum missing data while preserving the top 20 features required for the experiments.Since not all patients have all the selected laboratory measurements or other features collected during the year before the start of the medication, we used imputation to fill in those values.Using a longer period of 2-3 years of training data prior to the index date significantly increases the chance of a patient having at least some value for the given features but reduces the overall number of patients in the cohort; therefore, that approach was not utilized in the final model. The size of the training set was narrowed down to three different values -200, 250, and 300.The final choice was to use 5, 10, 15, and 20 features, that, when combined with 200, 250, and 300 training set sizes, yielded a 12-point configuration space.From the 1300-sample data set, random sampling was used to create training data sets of 200, 250, and 300 samples.The test set size was kept at 150 to balance the constraints of reasonable runtimes for simulations and QPU while achieving the best stratification of samples under the circumstances. 
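A minimal sketch of the subset creation described above: starting from the reduced pool of about 1,300 samples, it draws stratified training sets of 200, 250, and 300 samples and a test set of 150 samples, preserving the roughly 52/48 label split; the two random seeds per coordinate stand in for the two train/test splits used in the experiments, and the variable names are illustrative.

```python
from sklearn.model_selection import train_test_split

def make_subsets(X, y, train_sizes=(200, 250, 300), n_test=150, n_splits=2):
    """Stratified train/test subsets for each training-set-size coordinate."""
    subsets = {}
    for n_train in train_sizes:
        for split in range(n_splits):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, train_size=n_train, test_size=n_test,
                stratify=y, random_state=split)
            subsets[(n_train, split)] = (X_tr, y_tr, X_te, y_te)
    return subsets
```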
The downside of the training (and test) data set reduction to a few hundred samples is the reduction in predictive accuracy, originally at 0.64 with 16000 samples.This decision was made in order to explore the very difficult cases where it is hard to get predictions better than random guesses.While we could have chosen starting models with 15000-20000 samples and accuracies above 0.75, which were available with somewhat different patient cohort structures, our goal was to tackle the most difficult problems.This careful consideration in selecting harder instances has allowed us to observe empirical advantage of quantum kernels over classical kernels even though none of the circuits was run with more than 1024 shots.In future work, and as quantum hardware and software scales further, we would like to explore train/test data sets with 500-1000 samples, which would allow for reduced loss of accuracy and smaller variability due to reduction in set size. Knowing that there will be significant variability in the performance of different train/test data sets for each coordinate due to the small sample size, the maximum possible number of sets was evaluated.Given the preliminary runtimes for quantum simulations and QPU, the decision was made to use two random train/test splits for each configuration space coordinate, resulting in a total of 24 data sets to be run on QPU.While two random splits for each coordinate do not fully account for the variability resulting from such small data sets, this had to be limited due to QPU availability and simulation runtimes.Future work could be done to increase the number of data sets for each coordinate from two to 10 or more. During subsampling it was ensured that the target classification label proportion was kept in the original proportion within each train and test subset.Different models and class imbalance ratios ranging from 1:1 to 1:5 were evaluated and the final decision was to use the aforementioned (almost) balanced class to reduce the impact of small data set size.It was our judgement that more imbalanced cases would be better addressed in subsequent research. One of the main challenges with the small data sets is that when splitting train and test sets and training models on multiple splits, the resulting model metrics vary widely, especially with the models where predictive accuracy is not very high.To explore that, 200 random splits at each of the 12 coordinates were made.We calculated classical SVM balanced accuracies for each of the splits, a process that executed in less than one hour. For each of the 24 subpoints, having already calculated classical SVM metrics, quantum simulations and QPU runs were executed, both using the Qiskit framework.1024 shots were used as that allowed the execution for all 24 points within the time and resources available.The simulations were run with callback specified to provide additional insight during the running processes.The optimization level was set at three, the feature map to the ZZFeatureMap, the feature dimension equal to the number of features for the specific subpoint, the number of reps equal to two, and the entanglement to linear. 
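The following sketch mirrors the kernel configuration just described (ZZFeatureMap, two repetitions, linear entanglement, one qubit per feature), but evaluates the kernel entries with exact statevectors instead of the 1024-shot estimates obtained from qasm_simulator and the QPU; it is an illustration of the kernel definition rather than the code that was run on hardware, and it assumes a recent Qiskit installation.

```python
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector

def quantum_kernel_matrix(X_rows, X_cols, reps=2, entanglement="linear"):
    """K[i, j] = |<phi(x_cols_j)|phi(x_rows_i)>|^2 for the ZZFeatureMap embedding."""
    n_features = X_rows.shape[1]
    fmap = ZZFeatureMap(feature_dimension=n_features, reps=reps, entanglement=entanglement)
    states_r = [Statevector.from_instruction(fmap.assign_parameters(x)) for x in X_rows]
    states_c = [Statevector.from_instruction(fmap.assign_parameters(x)) for x in X_cols]
    K = np.zeros((len(states_r), len(states_c)))
    for i, sr in enumerate(states_r):
        for j, sc in enumerate(states_c):
            K[i, j] = np.abs(np.vdot(sc.data, sr.data)) ** 2   # fidelity-type kernel entry
    return K

# Train and test kernels for a precomputed-kernel SVM (see the next section):
# K_train = quantum_kernel_matrix(X_train, X_train)
# K_test  = quantum_kernel_matrix(X_test, X_train)
```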
Quantum simulation and QPU processing were used to calculate custom kernel matrices for the given train and test set.The train kernel was then saved and used in scikit-learn on the classical computer to train the SVM model using a precomputed option.The test kernel was passed to the model's predict method to make predictions.This way, we generated predictions for quantum simulations and QPU runs.The classical predictions were generated using kernels in SVMs.The models were trained for 18 different values of the regularization parameter C, ranging from 0.006 to 1024, and the best case was used from each model for the classical to quantum comparison.Every model was regularized separately for each of the two metrics; thus, the value of the optimal C parameter for a given set's balanced accuracy is generally different than the value of C for the F1 score.All three predictions were created for each of the 24 subpoints.While the developed framework supports allocating an independent validation data set for the final model metric assessment, a single validation set is unlikely to provide useful insights due to the variability in the metrics for the small data sets in question. Allocating multiple validation data sets and running them through QPU was not feasible with the available time and resources, however; therefore, such a validation step was not included. Variability and errors For the classical models, Fig. 8. illustrates the distribution of balanced accuracy values for 200 different train/test splits for the configuration space coordinate with 300 samples and 10 features.The scope of the SVM modeling was predicting the labels in the binary classification.We used scikit-learn and its predict method to predict labels directly and the predict_proba method for predicting approximate probabilities.Probabilities are estimated inside the predict_proba method using five-way cross-validation.As such, the process is subject to variability depending on the random seed that is provided to the svm.SVC call.One random value was used for calculations and comparisons to support reproducibility.In addition, a range of different random seed values was used in selected cases to obtain the variability of the predictions.It showed a 1.5 % standard deviation in the predictions obtained with the predict_proba method, both in the classical and quantum case, with less distinct values in the quantum domain. Runtimes The QPU runtimes were between 12 and 24 hours.During the QPU runs, both CPU and memory utilization was very low in the same Linux server as the main processing was executed on the QPU instance.The QPU processing was sequential. The QPU processing was executed from a c5.18xlargeUbuntu 18.04 cloud instance on Amazon Web Services (AWS), with 72 vCPUs and 144 GB Ram.We have not used Amazon Braket; instead, to support managing the whole process, we developed a Python package that has a code generation layer to simplify execution of different configuration spaces and management of the results.From within the package, the Qiskit API calls are made to simulators or QPUs.We used virtual environments that, over the course of a year, allowed us to effectively manage different versions of Qiskit and related software components. 
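As a sketch of the precomputed-kernel training and regularization sweep described at the beginning of this section, the snippet below fits SVC(kernel='precomputed') over a grid of C values and keeps the best score separately for each metric; the logarithmic spacing of the 18 C values between 0.006 and 1024 is an assumption, since only the endpoints and the count are stated.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score, f1_score

C_GRID = np.geomspace(0.006, 1024, 18)   # assumed spacing; the text gives only the range and count

def best_precomputed_svm(K_train, y_train, K_test, y_test):
    """Sweep the regularization parameter C for a precomputed-kernel SVM and keep
    the best value separately per metric, as in the classical-to-quantum comparison."""
    best = {"balanced_accuracy": (None, -np.inf), "f1": (None, -np.inf)}
    for C in C_GRID:
        clf = SVC(kernel="precomputed", C=C).fit(K_train, y_train)
        pred = clf.predict(K_test)
        for name, metric in [("balanced_accuracy", balanced_accuracy_score), ("f1", f1_score)]:
            score = metric(y_test, pred)
            if score > best[name][1]:
                best[name] = (C, score)
    return best
```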
SHAP (SHapley Additive exPlanations) SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explain the output of any machine learning model. It connects optimal allocation with local explanations using the classic Shapley values from game theory and their related extensions. SHAP measures the impact of variables by considering the interaction with other variables. Shapley values calculate the importance of a feature by comparing what a model predicts with and without the feature (variable) 32,33 .

A starting list of the topmost 20 features was obtained from the original 16000-sample data sets by training machine learning models and then using SHAP to obtain the top 20 features. Those 20 features were then used in the aforementioned analysis with reduced-size subsets in the order of SHAP relevance. Each of the points for 5, 10, 15, and 20 features on the configuration space was obtained by taking that many top features from the full 20-feature list, preserving the order.

PTRI - Systematic identification of problems where quantum kernels may have empirical advantages Consider a set of classification problems where the number of features is between 1 and M and the number of samples between 1 and N. There are thus M × N classification problems. In order to help address the question of which subset of these problems should be solved with a quantum kernel, we created a geophysics-inspired approach to identify regions of potential EQA in data sets. One way to select a suitable subset of problems involves studying the ruggedness of the manifold via PTRI, a metric we have adapted from 34 and defined as follows. We are considering the F1 score only as an instance of a metric in the formula. In the M × N configuration space defined before, each point is surrounded by eight other points except for the boundary points. For the boundary points, the performance result (F1 score in this case) of the adjacent points that are beyond the boundary is assumed to be 0. For the (i,j)-th interior point, the local PTRI_{i,j} is calculated according to

PTRI_{i,j} = [ (F1_{i,j} - F1_{i-1,j-1})^2 + (F1_{i,j} - F1_{i-1,j})^2 + (F1_{i,j} - F1_{i-1,j+1})^2 + (F1_{i,j} - F1_{i,j-1})^2 + (F1_{i,j} - F1_{i,j+1})^2 + (F1_{i,j} - F1_{i+1,j-1})^2 + (F1_{i,j} - F1_{i+1,j})^2 + (F1_{i,j} - F1_{i+1,j+1})^2 ]^{1/2}.

To determine the PTRI of the full configuration space, we average across the PTRI_{i,j} values.

Fig. 1. Left: Balanced accuracy landscape. Right: F1 score landscape. The classical domain is shown in orange and the quantum domain in blue.
Fig. 2. Left: PTRI (balanced accuracy) landscape. Right: PTRI (F1 score) landscape. The classical domain is shown in orange and the quantum domain in blue.
Fig. 3. Geometric difference between classical and quantum kernels across the configuration space. Despite its name, the mathematical definition demonstrates that it is a ratio. The greater the geometric difference, the more potential for quantum advantage the given quantum kernel has compared to the given classical kernel.
Fig. 8. Distribution of balanced accuracy values for the classical models for different train/test splits for the point with 300 samples and 10 features.
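To make the PTRI definition above concrete, the following is a small sketch that computes the PTRI of a metric surface stored as a 2-D array indexed by (feature count, sample size); neighbours beyond the grid boundary are treated as 0, as specified in the text.

```python
import numpy as np

def ptri(surface):
    """Phase space terrain ruggedness index of a 2-D performance surface."""
    M, N = surface.shape
    padded = np.zeros((M + 2, N + 2))
    padded[1:-1, 1:-1] = surface            # neighbours beyond the boundary stay at 0
    local = np.zeros_like(surface, dtype=float)
    for i in range(M):
        for j in range(N):
            window = padded[i:i + 3, j:j + 3]
            diffs = (padded[i + 1, j + 1] - window) ** 2
            diffs[1, 1] = 0.0               # exclude the centre point itself
            local[i, j] = np.sqrt(diffs.sum())
    return local.mean()                     # PTRI of the full configuration space
```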
Dynamics of rotationally reciprocating stirred tank with planetary actuator The article investigates the dynamics of rotationally reciprocating stirred tank (RRST), whose actuator is the original planetary mechanism with elliptical gears. The dynamic model is constructed by reduction of driving forces, masses and moments to the reduction link (the input shaft of the actuator). The study of the resulting dynamic model was carried out by energy-mass method. As a result of the dynamic analysis we determined the necessary moment of driven forces and found the reduction link law of motion. The flywheel has been designed to ensure the required coefficient of rotation irregularity. Resulting dynamic model can be used for development and research of rotationally reciprocating stirred tanks. Introduction Mechanically stirred vessels are widely used in machinery, petrochemical, chemical, food and many other industries [1,2]. Currently, the most common and investigated stirred tanks are agitators with a rotational movement of impellers, because they are highly reliable, easy to manufacture and use. Angular velocity of impeller is constant in such vessels, so over time stirred liquid velocity and impeller velocity are equalized, which leads to low intensity of heat and mass transfer. For better performance of stirred tanks it is necessary to change velocity and direction of impeller rotation [3][4][5]. In [6,7] RRST is investigated, and rotationally reciprocating motion of its impeller is provided by reversing motion of the stepping motor. The developed machine is simple, due to the wide range of the rotation angle it allows to carry out various experiments in the small volume reactors. However, the use of such device is impractical on an industrial scale, since the stepping motor has a low efficiency and can not be used at high oscillation frequency of impeller. Therefore, [8,9] proposed a planetary converter of rotational motion into reciprocating rotational motion, which can be used as an actuator of the stirred tank. It is a double row planetary gear with two external gearing, in which one pair of cylindrical gears is replaced by elliptical ones. Angle of the output shaft rotationally reciprocating motion is defined by eccentricity of elliptical gears. Mechanism with two pairs of elliptical gears will increase the output shaft rotation angle at the same size (Fig. 1). Planetary gear (Fig. 1) consists of a rack 0, the input shaft 1, the carrier 2, the output shaft 3, the sun elliptic gear 4, elliptic gear 5, satellite elliptical gears 6 and 7, a shaft 8, which connects the satellite gears. Reciprocating rotational motion is provided by a rational sizing of actuator links. Connecting the input shaft of investigated mechanism with a motor and output shaft with impeller, we obtain rotationally reciprocating stirred tank (RRST) (Fig. 2). The considered vessel achieves high gradient of stirred liquid velocities, leading to increase in the heat and mass transfer intensity. Selection of the optimal amplitude and frequency of vibrations will reduce the flow time of many processes by 1.5-2 times, and operating costs by 1.2-1.8 times [10]. One of the most important steps in the design of new machines is the study of dynamic processes.To investigate the rotationally reciprocating stirred tank it is necessary to build dynamic model and conduct its analysis. 
Dynamic model of the actuator Since the planetary actuator has one degree of freedom [11] and its links are rigidly interconnected, for the rational solution it is advisable to take the input shaft 1 as the reduction link ( Fig. 1). Then singlemass dynamic model of RRST (Fig. 2) takes the following form (Fig. 3). To find the law of reduction link motion, it is necessary to define the following parameters of the dynamic model: reduced moment of inertia I r , and the reduced moment of resistance forces M r . The required drive moment M d is calculated during the analysis of the dynamic model. Determination of the reduced moment of inertia According to [12][13][14] reduced moment of inertia given by: where I d is the motor moment of inertia, I im is the impeller moment of inertia. The moments of inertia and velocity analogues of the actuator links identified in accordance with Figure 1. Differentiating (2) according to the generalized coordinate, we get: As seen from equations (2), (3), to find the reduced moment of inertia and its derivative it is necessary to find kinematic characteristics of planetary actuator -angular velocity and angular acceleration analogues of mechanism links, as well as linear velocity and linear acceleration analogues of link's centers of mass. Kinematic analysis of planetary actuator To carry out the kinematic analysis we shall represent the kinematic scheme of the mechanism in one of the positions and construct a plan of linear velocities (Fig. 4). Angular velocity analogue of the output shaft, in accordance with Figure 4, is determined as: To determine the distances included in the equation (4), let's consider the equation of the ellipse in polar coordinates. The focus of the driving ellipse will be taken as the pole, and the major axis will be taken as the polar axis (Fig. 5), then we will get the following ellipse equation [15]: where ϕ is a drive gear rotation angle; p is an ellipse focal parameter; e is eccentricity of the ellipse; a is a semi-major axis of the ellipse. According to Figure 6, taking into account (6), the equation (5) for the ellipses 4 and 7 will have the following form: where 4 ϕ , 7 ϕ are angles of ellipses rotation; e 1 is eccentricity of the first ellipse gear pair (4, 6); e 2 is eccentricity of the second ellipse gear pair (5, 7). According to [16] transmission function of elliptical gear can be written as: . It follows from Figure 6 that the distances in the equation (4) are defined as: ; 2 Substituting (10)…(13) into equation (4) we find the angular velocity analogue of the planetary mechanism output shaft 3: Figure 6 indicates that the satellite 8 and elliptical gearwheels 6 and 7 make the plane-parallel motion, and point B for satellite will be instantaneous center of velocity. Figure 6 also shows that the satellite shaft center of mass (C 8 ) velocity 8 υ and the angular velocity 8 ω of the satellite are determined as: Taking into account (15) and (16) velocity analogues 8 ϕ′ and 8 S′ will take the following form: . Determination of the fluid resistance moment A useful fluid resistance acts on stirred tank impeller, which determines motion laws of the mechanism links. To construct a dynamic model it is necessary to find a moment of force, which arises at interaction between fluid and impeller, and reduces it to the input shaft of the planetary actuator. 
The stirred product is a liquid, so in general, the viscous resistance force will act on the impeller, and it can be either linear or quadratic function of the velocity: where k 1 , k 2 are coefficients of resistance, υ is the impeller linear velocity. The linear velocity of different impeller areas is variable; therefore, the variable force of fluid resistance will act on the RRST impeller (Fig. 7). In [17] there is obtained the following equation for determining the resistance moment of impeller that perform rotationally reciprocating motion: where lin B′ , quad B′ are reduced coefficients of linear and quadratic resistance, ω is the angular velocity of impeller, l im is an impeller length, h im is the blade width, h x is the distance from the rotation axis to the boundary between the laminar and turbulent regimes, k is the number of impeller blades (in investigated stirred tank k = 2). For most technological processes there we can observe turbulent mode [1], so (27) takes the form: ). ( 2 , as a result we get: According to [10], reduced resistance moment is generally defined as follows: where n is total number of mobile links; m is number of forces F, acting on the i-th link; i l′ is velocity analogue of force application point; q is the number of moments M, acting on the i-th link. Taking into account that only the impeller resistance moment acts in the stirred tank, then (30) to determine the reduced resistance moment takes the form: Given that , we substitute (29) to (31) and obtain: ). Equation (32) allows to find resistance moment, reduced to the input shaft 1. Investigation of the dynamic model As an example, we consider a stirring device with the following parameters (link numbers correspond to (n 1 =500 rpm). A study of the dynamic model is performed using the energy-mass method [12,13], which is widely used in the dynamic analysis of machines. In accordance with the selected method we will find the increment of the kinetic energy T ∆ : , where A d is work of driving forces, A r is work of resistant forces. Works in (33) are determined as: For investigated RRST we construct graphs of A r , A d , T ∆ (Fig. 8). Using (37) and the calculation results (Fig. 8), we construct a graph ) ( 1 t ω (Fig. 9). The graph shows that the reduction link angular velocity is not constant and varies around the average value. Velocity oscillations are determined by the intracyclic changes in gear ratio of mechanism with elliptical gearwheels and force changes on impeller. Since the angular velocity of reduction link is variable, then we define coefficient of rotation irregularity δ [13]: It is seen from Figure 9 . Therefore, it is necessary to install the flywheel in the investigated RRST. According to [13], flywheel moment of inertia I f will be defined as: As can be seen from the graphs, the installation of a flywheel reduced rotation irregularity of reduction link. Coefficient of rotation irregularity decreased to allowable value 05 Conclusion In this paper we construct and investigate dynamic model of the stirred tank with a rotationally reciprocating motion of impellers. As the actuator of such machines, the new planetary mechanism for converting rotational motion into reciprocating rotational with elliptic gears was proposed. 
The investigations produced the following results: • the reduced link (input shaft of the mechanism) law of motion was found; • we calculated the value of the drive moment, which is required for the stirred tank operation; • we determined flywheel moment of inertia, which is necessary to reduce coefficient of rotation irregularity to allowable value. The resulting mathematical model can be used in the calculations, design and investigation of stirred tanks with rotationally reciprocating motion of impellers.
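As a rough illustration of the last two steps, the coefficient of rotation irregularity and the flywheel sizing, the sketch below uses the classical energy-mass relations; the velocity samples and the excess-work value are placeholders, since the actual values follow from the reduced moments computed in the paper, and the allowable irregularity is taken as 0.05 as stated above.

```python
import numpy as np

def rotation_irregularity(omega):
    """delta = (omega_max - omega_min) / omega_mean for the reduction link."""
    return (omega.max() - omega.min()) / omega.mean()

def flywheel_inertia(excess_work_max, omega_mean, delta_allowed=0.05):
    """Classical energy-mass estimate of the flywheel moment of inertia needed to keep the
    irregularity within delta_allowed (refinements subtract the mechanism's reduced inertia)."""
    return excess_work_max / (delta_allowed * omega_mean ** 2)

# Illustration with the drive speed used in the example (n1 = 500 rpm);
# the velocity profile and excess work below are hypothetical.
omega_mean = 2 * np.pi * 500 / 60                                     # rad/s
omega = omega_mean * (1 + 0.10 * np.sin(np.linspace(0, 2 * np.pi, 360)))
print(rotation_irregularity(omega))                                   # ~0.20, above the allowable 0.05
print(flywheel_inertia(excess_work_max=5.0, omega_mean=omega_mean))   # kg*m^2
```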
Large-scale asymmetry in the distribution of galaxy spin directions – analysis and reproduction Recent independent observations using several different telescope systems and analysis methods have provided evidence of parity violation between the number of galaxies that spin in opposite directions. On the other hand, other studies argued that no parity violation can be identified. This paper provides detailed analysis, statistical inference, and reproduction of previous reports that show no preferred spin direction. Code and data used for the reproduction are publicly available. The results show that the data used in all of these studies agree with the observation of a preferred direction as observed from Earth. In some of these studies the datasets were too small, or the statistical analysis was incomplete. In other papers the results were impacted by experimental design decisions that lead directly to showing no preferred direction. In some of these cases these decisions are not stated in the papers, but were revealed after further investigation in cases where the reproduction of the work did not match the results reported in the papers. These results show that the data used in all of these previous studies in fact agree with the contention that galaxies as observed from Earth have a preferred spin direction, and that the distribution of galaxy spin directions as observed from Earth forms a cosmological-scale dipole axis. This study also shows that the reason for the observations is not necessarily an anomaly in the large-scale structure, and can also be related to the internal structure of galaxies.

The contention of charge conjugation parity (CP) symmetry violation had been proposed in particle physics as early as the 1950's (Lee and Yang, 1956), and clear evidence of CP violation was provided in the 1960s (Cronin et al., 1964). These surprising observations can have implications on parity violation and dipoles in the Universe (Hou, 2011; Bian et al., 2018; Gava and Volpe, 2010; Flanz et al., 1995). Possible charge conjugation, parity transformation, and time reversal (CPT) symmetry violation (Lehnert, 2016) has also been proposed as a factor linked to an anisotropic or polarized Universe (Crowder et al., 2013; Mavromatos, 2017; Mavromatos and Sarkar, 2018; Popławski, 2011).

If our Universe is the interior of a black hole in another universe, that agrees with the theory of the multiverse (Carr and Ellis, 2008; Hall and Nomura, 2008; Antonov, 2015; Garriga et al., 2016; Debnath and Nag, 2022; Trimble, 2009; Kragh, 2009). Black hole cosmology is also supported by the agreement between the Schwarzschild radius of the Universe and the Hubble radius (Christillin, 2014), as well as by the accelerated expansion of the Universe without the need to assume the existence of dark energy. A black hole universe is also expected to violate the CPT symmetry (Lehnert, 2016). Black hole cosmology is directly related to the ability to view space as a projection, which is the theory of the holographic universe (Susskind, 1995; Bak and Rey, 2000; Bousso, 2002; Myung, 2005; Hu and Ling, 2006; Rinaldi et al., 2022; Sivaram and Arun, 2013; Shor et al., 2021).
Quantum Field Theory (QFT) has shown promising success in explaining microscopic phenomena such as subatomic structures (Davies, 1976).Early analysis predicted an anisotropic universe, noting that "It is a curious unexplained feature that the present condition of the Universe is one of high isotropy" (Davies, 1976).Given QFT, cosmological-scale gravitational dipoles can be expected (Faulkner et al., 2022).Such gravitational dipoles are driven by matter and antimatter that have gravitational charges, and can explain phenomena normally attributed to dark energy and dark matter, and does not assume primordial singularity or cosmic inflation (Hajdukovic, 2013(Hajdukovic, , 2014)).The quantum vacuum can then be explained by polarized gravity, and the gravitational dipoles can also explain disagreement between the observed and theoretically predicted dark energy density (Hajdukovic, 2013(Hajdukovic, , 2014)).The poles as gravitational sources in the Universe can change the large-scale symmetry, provoking fluctuations that contribute in the evolution of the Universe and its geometrical aspect, making its asymmetricity possible. In addition to the probes mentioned above, another probe that supports the contention of anisotropic Universe and a preferred direction is the asymmetric distribution of galaxies with opposite spin directions.Early reports of the asymmetry used a small number of manually annotated galaxies in the local supercluster, suggesting that the number of galaxies spinning clockwise is different from the number of galaxies spinning counterclockwise with certainty of 92% (MacGillivray and Dodd, 1985). The deployment of robotic telescopes allowed the analysis of a larger number of galaxies.Perhaps the first major modern high-throughput autonomous digital sky survey was SDSS (York et al., 2000), with data acquisition power that was unprecedented at the time.Analysis of the distribution of the spin directions of spiral galaxies in SDSS images showed parity violation and a dipole axis, observed in several different experiments using manual and automatic annotation (Longo, 2011;Shamir, 2012Shamir, , 2020cShamir, ,b, 2021aShamir, ,b, 2022c,a),a).In addition to SDSS, experiments with other sky surveys also showed similar large-scale parity violation in galaxy spin directions as observed from Earth.These experiments include data from Hubble Space Telescope (Shamir, 2020b), Pan-STARRS (Shamir, 2020c), DECam (Shamir, 2021b), DES (Shamir, 2022b), and DESI Legacy Survey (Shamir, 2022a).Some of the experiments included over 10 6 galaxies (Shamir, 2022a), providing strong statistical significance.The experiments showed clear separation between hemispheres, such that one hemisphere has an excessive number of galaxies spinning clockwise, while the opposite hemisphere contained more galaxies that seem to spin counterclockwise to an Earth-based observer.An analysis with ∼ 1.3•10 6 galaxies allowed to show a dipole axis alignment without the need to fit it in a certain statistical model (Shamir, 2022a).The parity violation seems to become stronger as the redshift increases (Shamir, 2020c(Shamir, , 2022c)).Other studies include a smaller number of galaxies to show links between the spin directions of neighboring galaxies (Mai et al., 2022), including galaxies that are too far from each other to have gravitational interactions (Lee et al., 2019b).These links were defined "mysterious", suggesting that galaxies in the Universe are connected through their spin directions (Lee et al., 2019b).A correlation 
was also identified between the cosmic initial conditions and spin directions of galaxies (Motloch et al., 2021).Experiments showed that the strength of the asymmetry is not necessarily affected when limiting the size of the galaxies (Shamir, 2020c), but showed inconclusive evidence of 1.1σ that the asymmetry increases as the density of the galaxies gets larger (Shamir, 2022c). While these studies showed patterns of alignment in galaxies spin directions at scales far larger than any known astrophysical structure, numerous other studies showed alignment in the spin directions of galaxies at smaller scales.For instance, a set of 1,418 galaxies from the Sydney-Australian-Astronomical-Observatory Multi-object Integral-Field Spec-trograph (SAMI) survey (Bryant et al., 2015;Croom et al., 2021) showed spin alignment of galaxies within filaments (Welker et al., 2020), supported by a consequent study that also showed a link between spin alignment and the morphology of the galaxies (Barsanti et al., 2022).Spin direction alignment in cosmic web filaments has also been shown by multiple other studies (Tempel et al., 2013;Tempel and Libeskind, 2013;Kraljic et al., 2021), as well as in numerical simulations (Zhang et al., 2009;Davis and Natarajan, 2009;Libeskind et al., 2013Libeskind et al., , 2014;;Forero-Romero et al., 2014;Wang and Kang, 2018;López et al., 2019). But while several studies showed non-random distribution of galaxy orientation, some experiments argued that the distribution is random (Iye and Sugai, 1991;Land et al., 2008;Hayes et al., 2017;Iye et al., 2021).One of the first documented experiments to claim for random distribution and no preferred handedness in the distribution of galaxy spin directions was an experiment based on manually collected galaxies that took place in the pre-information era in astronomy (Iye and Sugai, 1991).During that time, large-scale autonomous digital sky surveys did not yet exist, which limited the ability to collect and analyze large databases within reasonable efforts.The dataset included 3,257 galaxies annotated as spinning clockwise, and 3,268 galaxies annotated as spinning counterclockwise.The downside of the study was that due to the limited ability to collect data at the time, the dataset was relatively small.The small size of the dataset does not allow to provide statistical significance given the magnitude of the asymmetry reported here and in previous reports. For instance, the asymmetry between the number of clockwise and counterclockwise galaxies as described in (Shamir, 2020c) is ∼1.4%.Given that asymmetry, showing a onetailed P value of 0.05 requires 55,818 galaxies.Therefore, the dataset of a few thousand galaxies is not large enough to show a statistically significant asymmetry.But while these experiments use a small number of galaxies, other experiments used larger datasets, but still showed random distribution of the galaxy spin directions. 
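The required sample size quoted above can be reproduced with a quick calculation. The sketch below treats a ~1.4 % excess of one spin direction as a binomial proportion of roughly 0.5035 and asks how many galaxies are needed for a one-tailed P value of 0.05; the normal approximation gives about 56,000, in line with the quoted 55,818 (the exact figure depends on how the 1.4 % asymmetry is defined and on using the exact binomial distribution).

```python
import numpy as np
from scipy import stats

asymmetry = 0.014                       # ~1.4 % excess of one spin direction
p = (1 + asymmetry) / (2 + asymmetry)   # expected fraction of the more common direction

# Normal approximation: z = (p - 0.5) / sqrt(0.25 / N) >= 1.645 for one-tailed P = 0.05
z = stats.norm.ppf(0.95)
n_required = (z / (2 * (p - 0.5))) ** 2
print(int(np.ceil(n_required)))         # ~56,000 galaxies

# Cross-check with the exact binomial test at the quoted sample size
n = 55818
k = int(round(p * n))
print(stats.binomtest(k, n, p=0.5, alternative="greater").pvalue)   # ~0.05
```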
The purpose of this paper is to carefully examine the claims and experiments, and understand the conflicting results between different experiments that in some cases use the same data but reach different conclusions. Section 2 discusses the results provided by the Galaxy Zoo citizen science initiative to annotate galaxies by their spin directions (Land et al., 2008), Section 3 reproduces the analysis of SDSS galaxies with spectra annotated by the SpArcFiRe method (Hayes et al., 2017), Section 4 discusses a previous experiment in which deep neural networks were used to annotate the galaxies, Section 5 reproduces an experiment of SDSS galaxies annotated using the Ganalyzer method (Iye et al., 2021), Section 6 discusses possible reasons for the observations, and Section 7 provides a possible explanation for the observation that does not necessarily require a shift from the standard model.

Early analyses with manual annotation and Galaxy Zoo crowdsourcing Another notable experiment was based on galaxies annotated manually by a large number of volunteers through the Galaxy Zoo platform (Land et al., 2008). The analysis using manual annotation was limited by a very high error rate, but after selecting just the galaxies on which the rate of agreement was high, the error could be reduced. But more importantly, the annotations were systematically biased by the human perception or the user interface (Land et al., 2008). Even when using the "superclean" criterion, according to which only galaxies on which 95% or more of annotations agreed are used, the asymmetry was ∼15%. That asymmetry was determined to be driven by the bias rather than a reflection of the real distribution of galaxies in the sky (Land et al., 2008). When the bias was noticed, a new experiment was done by annotating a small subset of 91,303 galaxies using the same platform. In addition to annotating the original images, the second experiment also included the annotation of the mirrored image of each galaxy. The annotation of the mirrored images is expected to offset the bias. After selecting the "superclean" annotations, the results showed 5.525% clockwise galaxies compared to 5.646% mirrored counterclockwise galaxies that were annotated as clockwise. The information is specified in Table 2 in (Land et al., 2008). That showed a 2.1% higher number of counterclockwise galaxies. Similar results were also shown with galaxies annotated as spinning counterclockwise, where 6.032% of the non-mirrored images were annotated as spinning counterclockwise, while 5.942% of the mirrored galaxies were annotated as clockwise. The magnitude and direction of the 1%-2% asymmetry is aligned with the asymmetry reported in (Shamir, 2020c), which is also based on SDSS galaxies with spectra, and therefore the footprint and limiting magnitude of the galaxies used in (Land et al., 2008) and in (Shamir, 2020c) are expected to be similar. The primary difference between the experiment of (Land et al., 2008) and the experiment of (Shamir, 2020c) is the size of the datasets. While in (Shamir, 2020c) more than 6·10^4 galaxies were used, the set of "superclean" Galaxy Zoo galaxies that were annotated as mirrored and non-mirrored to offset the human bias was just ∼10^4. Table 1 shows the number of galaxies that spin clockwise and counterclockwise as annotated by Galaxy Zoo when the original images were annotated, and when the mirrored images were annotated.
While the crowdsourcing analysis provided more than 10 5 annotated galaxies, it is still not of sufficient size to show statistically significant asymmetry.On the other hand, it does show a consistent higher number of counterclockwise galaxies compared to clockwise galaxies.For instance, the number of galaxies annotated as clockwise is lower than the number of mirrored galaxies annotated as clockwise, which is 5,044 and 5,155, respectively.That provides a P≃0.13 one-tailed statistical significance of the binomial distribution assuming the probability of each spin direction is 0.5. The numbers of galaxies annotated as counterclockwise and the number of mirrored galaxies annotated as counterclockwise were 5,507 and 5,425, respectively, providing a binomial distribution statistical signal of P ≃ 0.21.That asymmetry cannot be considered statistically significant, but it also agrees in direction and magnitude with the asymmetry shown in (Shamir, 2020c) for the same footprint and distribution in the sky.It is possible that if the dataset used in (Land et al., 2008) was larger the results would have been statistically significant, but since the dataset is of a limited size that assumption cannot be proven or disproven. While none of the two experiments show statistical significance, they both show a higher number of galaxies that spin counterclockwise in the SDSS footprint.The aggregated Pvalue of the two experiments is 0.0273.Although the annotations of each galaxy are different, some of the galaxies might exist in both experiments, where in one experiment the galaxy was annotated using its original image, and in the other experiment it was annotated using its mirrored image.The presence of galaxies that were used in both experiments therefore does not allow to soundly aggregate the P values.But in any case, even if the P value does not show statistical significance, the results observed with Galaxy Zoo data certainly do not conflict with the results shown in (Shamir, 2020c) for SDSS galaxies with spectra. A previous experiment using Galaxy Zoo galaxies annotated by SpArcFiRe Another analysis (Hayes et al., 2017) of SDSS galaxies with spectra used an automatic annotation to provide an analysis with a far larger number of galaxies compared to (Land et al., 2008).The SDSS galaxies were the galaxies also used by Galaxy Zoo 1.To separate the galaxies by their spin direction, the SpArcFiRe method (Davis and Hayes, 2014) was used.SpArcFiRe (SPiral ARC FInder and REporter) receives a galaxy image as an input, normally in the PNG image file format, and extract several descriptors of the galaxy arms.The algorithm works by first identifying arm segments in the galaxy image, and then grouping the pixels that belong in each segment.Once the pixels the different arm segments are identified, the pixels of each arm segment are fitted to a logarithmic spiral arc.The fitness of the pixels in the arm segment provides information about the arm, that can also be used to identify its curve direction, and consequently the spin direction of the galaxy.Source code of the implementation of the SpArcFiRe algorithm is available at https://github.com/waynebhayes/SpArcFiRe,and a full detailed description of the method is available at (Davis and Hayes, 2014).Processing of a 128×128 galaxy galaxy image takes ∼30 seconds using an Intel Core-i7 processor, and therefore a set of 100 cores was used to annotate the ∼ 6.7•10 5 galaxies in the Galaxy Zoo 1 dataset. 
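The core geometric step of SpArcFiRe described above, fitting arm-segment pixels to a logarithmic spiral arc, can be illustrated with a small sketch: in polar coordinates a logarithmic spiral satisfies log r = log a + b·θ, so a least-squares fit of log r against θ gives the pitch b, whose sign indicates the winding (and hence apparent spin) direction. This is a simplified stand-in for the actual SpArcFiRe implementation linked above, which also handles segment detection and merging; the "Z"/"S" labels in the helper are illustrative.

```python
import numpy as np

def fit_log_spiral(theta, r):
    """Least-squares fit of a logarithmic spiral r = a * exp(b * theta) to
    arm-segment pixels in polar coordinates (theta in radians, unwrapped; r > 0).
    Returns (a, b); the sign of b gives the winding direction of the arm."""
    A = np.vstack([np.ones_like(theta), theta]).T
    coeffs, *_ = np.linalg.lstsq(A, np.log(r), rcond=None)
    log_a, b = coeffs
    return np.exp(log_a), b

def winding_direction(theta, r):
    """Label the arm winding by the fitted pitch sign."""
    _, b = fit_log_spiral(theta, r)
    return "Z" if b > 0 else "S"

# Example with a synthetic arm
theta = np.linspace(0.0, 2.5, 200)
r = 10.0 * np.exp(0.3 * theta)
print(winding_direction(theta, r))   # "Z" for this synthetic arm
```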
After applying the SpArcFiRe method to separate the galaxies by their spin direction, the results of the study showed that when separating the spiral galaxies from the elliptical galaxies through Galaxy Zoo 1 annotations, the asymmetry can be considered statistically significant.As shown in Table 2 in (Hayes et al., 2017), the statistical significance ranged between 2σ to 3σ, depends on the agreement threshold of the Galaxy Zoo separation between the elliptical and spiral galaxies.These results agree with the contention of a higher number of galaxies spinning counterclockwise in that footprint, but might be biased due to a possible human bias in the selection of spiral galaxies.For instance, if the human annotators tend to select more counterclockwise galaxies as spiral, that can lead to an observed asymmetry when annotating the spin directions of these galaxies (Hayes et al., 2017). To avoid the bias in the human selection of spiral galaxies, another experiment was made by selecting the spiral galaxies by using a machine learning algorithm.Naturally, the machine learning algorithm was trained by spiral and elliptical galaxies.To avoid bias in the classification, the class of spiral galaxies included the same number of clockwise and counterclockwise galaxies (Hayes et al., 2017).However, in addition to having a balanced number of galaxies in each class, the machine learning algorithm was limited to features that cannot identify the spin direction of the galaxy.That is, all features that showed a certain correlation with the spin direction of the galaxy were removed manually and were not used in the analysis.As stated in (Hayes et al., 2017), "We choose our attributes to include some photometric attributes that were disjoint with those that Shamir (2016) found to be correlated with chirality, in addition to several SpArcFiRe outputs with all chirality information removed". When manually removing the features that correlate with the asymmetry, it can be expected that the step of separation of the galaxies by their spin directions would provide a lower asymmetry signal.To test that empirically, the same experiment was reproduced with the same code, but without using machine learning to identify spiral galaxies, and therefore also without manually removing specific features.Two ways of selecting the galaxies were used.The first was by not applying any selection of spiral galaxies.That is, the SpArc-FiRe algorithm was applied to all Galaxy Zoo 1 galaxies, and all galaxies that SpArcFiRe was able to determine their spin directions were used in the analysis (McAdam et al., 2023).That led to a dataset of 271,063 galaxies.The other way for selecting spiral galaxies was by applying the Ganalyzer algorithm (Shamir, 2011) to select spiral galaxies, but without separating the galaxies by their spin direction. 
Ganalyzer works by first converting the galaxy image into its radial intensity plot, which is a 360×35 image such that the value of the pixel at Cartesian coordinates (x, y) is the value of the pixel in the polar coordinate (θ, r) in the original galaxy image, where the centre point is the center of the galaxy.The value of θ is within (0, 360), and the radius r is in percentage of the galaxy radius.Then, a peak detection algorithm is applied to identify the maximum brightness in each line of the radial intensity plot.Since galaxy arms are brighter than the background pixels at the same radial distance from the galaxy center, the peaks are the galaxy arms.If the peaks are aligned in straight vertical lines, it means that the angle of the arm does not change with the radial distance, and therefore the galaxy is not spiral.But if the peak form a line that is not vertical, it shows that the arms are spiral.The algorithm is explained in full detail and experimental results in (Shamir, 2011). Ganalyzer is not based on machine learning or pattern recognition, and therefore does not have a step of training or feature selection.The dataset of annotated galaxies after selecting spiral galaxies contained 138,940 galaxies.Both datasets can be accessed at https://people.cs.ksu.edu/~lshamir/data/sparcfire. Figure 1 shows the results of the analysis when using the original galaxy images and the mirrored galaxy images. As the figure shows, the results are different when using the original images and the mirrored images.That is expected due to the asymmetric nature of the SpArcFiRe software is discussed in the appendix of (Hayes et al., 2017).The statistical significance of the datasets ranges from 2.05σ when using the non-mirrored images without a first step of spiral selection, to 3.6σ when using the mirrored images when using the mirrored galaxy images annotated after a first step of selecting spiral galaxies. 4 Previous experiment by using deep neural networks for annotating the galaxies In the past decade, deep neural networks became a very common tool for annotating galaxies by their morphology (Dieleman et al., 2015;Graham, 2019;Mittal et al., 2019;Hosny et al., 2020;Cecotti, 2020;Cheng et al., 2020).Neural networks have been shown to provide superior performance in their ability to annotate images automatically, and due to the availability of libraries such as TensorFlow or PyTorch their implementation is normally more accessible compared to model-driven approaches.The downside of deep neural networks is that they work by complex data-driven rules, which makes them non-trivial to understand.That can add many unexpected biases that makes neural network imperfect for detecting subtle asymmetries reflected by the annotation of image data (Dhar and Shamir, 2022). An application of a deep neural network to annotate galaxies by their sin direction was performed in Tadaki et al. 
(2020), annotating HSC (Hyper Suprime-Cam) galaxies by using deep convolutional neural networks. The annotation provided 38,718 galaxies spinning clockwise and 37,917 spinning counterclockwise in the HSC footprint. The higher number of clockwise galaxies in the HSC footprint is in agreement with analysis using the DESI Legacy Survey, which also shows a higher number of galaxies spinning clockwise around that part of the sky (Shamir, 2022a). The one-tailed probability of such a distribution occurring by chance is P=0.0019. Because the biases of deep neural networks are very difficult to control (Rudin, 2019; Dhar and Shamir, 2022), such analysis cannot provide clear proof of a non-random distribution of galaxy spin directions, as also stated in the paper (Tadaki et al., 2020). But the differences reported in (Tadaki et al., 2020) definitely do not conflict with the contention that the distribution of galaxy spin directions is not necessarily symmetric, and in fact agree with that contention rather than disagree with it.

5 Reproduction of analysis of 72,888 SDSS galaxies

Another experiment that argued for no preferred handedness in the distribution of galaxy spin directions using a relatively large number of galaxies is (Iye et al., 2021). The analysis of Iye et al. (2021) is based on a dataset of 162,514 photometric objects used in (Shamir, 2017). The main argument of the work is that the dataset contains a large number of "duplicate objects", and that once these objects are removed the dataset does not show a non-random distribution (Iye et al., 2021). The dataset used in (Shamir, 2017) was used for photometric analysis of objects that spin in opposite directions. The study (Shamir, 2017) does not make any claim for the presence or absence of any kind of axis in that data, and no such claim about that dataset was made in any other paper. While several previous experiments were made with SDSS galaxies to show a dipole axis formed by the distribution of galaxy spin directions (Shamir, 2012, 2020c, 2021a), none of these experiments were based on the dataset used in (Shamir, 2017). When using the photometric objects of (Shamir, 2017) to study the distribution of galaxy spin directions, photometric objects that are part of the same galaxies indeed become "duplicate objects". But as mentioned above, (Shamir, 2017) does not make any claim for the presence of any kind of axis, and no such claim was made about that dataset in any other paper.

Although the dataset used in (Shamir, 2017) was not used in previous papers to analyze a dipole axis, it is still expected to be consistent with the results shown by previous datasets. The "clean" dataset used by Iye et al. (2021) was compiled by removing duplicate objects from the dataset used in (Shamir, 2017). That provided a dataset of 72,888 galaxies (Iye et al., 2021). That dataset is available for download at https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies.csv.

As explained in Equation 2 in Section 2.1 of (Iye et al., 2021), the strength of the dipole D from a certain point in the sky is determined by

D = \frac{1}{N}\sum_{i=1}^{N} h_i\,\vec{\Omega}_i \cdot \vec{P} = \frac{1}{N}\sum_{i=1}^{N} h_i \cos\theta_i ,

where P is the fiduciary pole vector, Ω_i is the spin vector of galaxy i, h_i is the spin direction of galaxy i, θ_i is the angle between the direction of galaxy i and the direction of the pole vector P, and N is the total number of galaxies in the dataset.
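A minimal sketch of this statistic, together with the random-reassignment significance estimate used in the reproduction described below, is given here. It assumes NumPy arrays of right ascension and declination in degrees and spin directions coded as ±1, and is only an illustration; the authoritative implementation is the one linked in the text.

```python
import numpy as np

def dipole_strength(ra, dec, spins, pole_ra, pole_dec):
    """D = (1/N) * sum(h_i * cos(theta_i)), where theta_i is the angle between
    galaxy i and the candidate pole, and h_i is +1 or -1."""
    ra, dec = np.radians(ra), np.radians(dec)
    pra, pdec = np.radians(pole_ra), np.radians(pole_dec)
    cos_theta = (np.sin(dec) * np.sin(pdec)
                 + np.cos(dec) * np.cos(pdec) * np.cos(ra - pra))
    return np.mean(spins * cos_theta)

def dipole_sigma(ra, dec, spins, pole_ra, pole_dec, n_random=2000, seed=0):
    """Significance (in sigma) of the observed D relative to the spread of D
    obtained when the spin directions are assigned at random."""
    rng = np.random.default_rng(seed)
    d_obs = dipole_strength(ra, dec, spins, pole_ra, pole_dec)
    d_rand = np.array([dipole_strength(ra, dec,
                                       rng.choice([-1, 1], size=len(spins)),
                                       pole_ra, pole_dec)
                       for _ in range(n_random)])
    return (d_obs - d_rand.mean()) / d_rand.std()
```

Scanning dipole_sigma over every integer (α, δ) combination and recording the result reproduces the kind of all-sky significance maps referred to as Figures 2, 4, and 5.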
The spin direction h_i of galaxy i is in the set {1, −1}. The statistical strength of a dipole at a certain point in the sky was determined by comparing the D computed with the spin directions taken from the dataset to the mean D and standard deviation D_σ computed by running the same analysis with the galaxies assigned random spin directions. The full implementation of the method is available at https://people.cs.ksu.edu/~lshamir/data/iye_et_al.

To follow the experiment of Iye et al. (2021), the mean and standard deviation of D under random spin directions were computed from 50,000 runs, although the change in the results becomes minimal after 2,000 runs. Following the description of (Iye et al., 2021), the spin direction of each galaxy is taken from the same dataset of 72,888 galaxies used in (Iye et al., 2021), which is also available publicly at https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies.csv. As also done in (Iye et al., 2021), the most likely position of the dipole axis and its statistical signal were determined by the statistical σ difference between the D computed with the observed spin directions of the galaxies and the D computed when the galaxies are assigned random spin directions. The code that implements the method and step-by-step instructions to reproduce the results are available at https://people.cs.ksu.edu/~lshamir/data/iye_et_al.

Figure 1: Results of analysis of Galaxy Zoo 1 galaxies annotated by SpArcFiRe. Panel (a) is the result of the analysis with spiral galaxies selected by Ganalyzer when the images are mirrored; Panel (b) shows the analysis when the galaxies are not mirrored. Panels (c) and (d) show the results of the analysis without a first step of selection of spiral galaxies when the galaxy images are mirrored or non-mirrored, respectively.

Iye et al. (2021) performed experiments with two datasets. The first was the entire dataset used in (Shamir, 2017). The second dataset was a subset of the dataset used in (Shamir, 2017), such that the redshift of the galaxies was limited to less than 0.1. For instance, the low statistical significance of 0.29σ reported in the abstract of (Iye et al., 2021) as the statistical significance of the sample was in fact the statistical significance observed in the sub-sample of galaxies with redshift lower than 0.1, as shown in Table 1 of (Iye et al., 2021).

The low statistical significance when limiting the redshift was reported previously in (Shamir, 2020c). For instance, Tables 3, 5, 6, and 7 in (Shamir, 2020c) show random distribution when the redshift is limited to 0.1. Also, an experiment reported in (Shamir, 2020c) used galaxies limited to z < 0.15 and showed that the statistical significance of the dipole axis was below 2σ. A similar experiment (Shamir, 2022c) also showed that the dipole axis is not statistically significant in the lower redshift ranges, including 0 < z < 0.1. Therefore, limiting the redshift to 0.1 is expected to show low statistical significance. The low statistical significance shown in (Iye et al., 2021) agrees with the random distribution in that redshift range shown in (Shamir, 2020c, 2022c).

When not limiting the redshift, Iye et al.
(2021) argue that the statistical significance of the "clean" dataset of 72,888 galaxies exhibits a dipole axis in the galaxy spin directions with statistical significance of 1.29σ.That is specified in Table 1 in (Iye et al., 2021).However, the reproduction of the experiment using the exact same 72,888 galaxies and the code described in Section 5 shows that the statistical significance of the dipole axis is 2.15σ.The code, data, step-by-step instructions, and the output of the analysis are provided at https://people.cs.ksu.edu/~lshamir/data/iye_et_al. Figure 2 provides a Mollweide projection that visualizes the statistical significance of a dipole axis from every possible integer combination of (α, δ).The most likely dipole axis with statistical significance of 2.15σ is observed at (α = 170 o , δ = 35 o ).That statistical significance is far higher than the 1.29σ reported in Table 1 of (Iye et al., 2021).Reasons for the differences are discussed in Section 5.2. A dipole axis formed by the distribution of galaxy spin directions means that one hemisphere has a higher number of galaxies that spin in one direction, while that asymmetry is inverse in the opposite hemisphere.Table 2 shows the distribution of galaxy spin directions when separating the sky into two hemispheres. Statistically significant non-random distribution is observed in one of the hemispheres.In the opposite hemisphere the asymmetry is not statistically significant, but the asymmetry is inverse to the asymmetry in the hemisphere centred Table 2: The number of galaxies in the (Iye et al., 2021) catalogue that spin in opposite directions when separating the sky into two hemispheres. at (RA=160 o ).When applying statistical Bonferronni correction for the two hemisphere, the statistical significance is still ∼ 0.0104.A Monte Carlo simulation showed that the probability to have such distribution by chance is P ≃ 0.007.Code and instructions to reproduce the analysis are provided at https://people.cs.ksu.edu/~lshamir/data/ iye_et_al.The fact that a simple separation into two hemisphere is statistically significant shows that a method that shows random distribution of this specific dataset is incomplete. The algorithm used to annotate the galaxies works by first converting the galaxy image into its radial intensity plot transformation, and then detecting peaks in the transformation to identify the shift in the peaks, which show the shift in the arms, and therefore the curve (Shamir, 2011).To perform a correct identification of the spin direction of the galaxy, a sufficient number of peaks detected in the radial intensity plot is required.Therefore, if a galaxy does not have a certain minimum number of peaks detected in its radial intensity plot, that galaxy is rejected from the analysis. 
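As a rough illustration of the radial intensity plot transformation described here (and in more detail earlier in the paper), the sketch below is a simplified approximation, not the actual Ganalyzer implementation; the nearest-neighbour sampling and function names are assumptions. It builds the 360×35 polar representation and records, for each radial distance, the angle of maximum brightness; a systematic drift of that angle with radius indicates spiral arms, and the number of usable peaks can serve as a rejection criterion of the kind mentioned above.

```python
import numpy as np

def radial_intensity_plot(img, cx, cy, galaxy_radius, n_theta=360, n_r=35):
    """Sample the image at polar coordinates (theta, r) around the galaxy
    centre; rows are angles in degrees, columns are radii expressed as a
    fraction of the galaxy radius."""
    plot = np.zeros((n_theta, n_r))
    for t in range(n_theta):
        for r in range(n_r):
            radius = galaxy_radius * (r + 1) / n_r
            x = int(round(cx + radius * np.cos(np.radians(t))))
            y = int(round(cy + radius * np.sin(np.radians(t))))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                plot[t, r] = img[y, x]
    return plot

def peak_angles(plot):
    """For each radial distance, the angle (row index) of maximum brightness.
    Galaxy arms are brighter than the background at the same radius, so these
    peaks trace the arms; if they shift systematically with radius the galaxy
    is spiral, and the sign of the shift gives the apparent spin direction."""
    return np.argmax(plot, axis=0)
```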
The analysis shown in Figure 2 is the result of using the galaxies used in (Shamir, 2017).That experiment aimed at performing a photometric analysis rather than an attempt to identify a dipole axis in the spin directions of the galaxies as was done in (Iye et al., 2021).The minimum number of peaks detected in the radial intensity plot to determine the spin direction of a galaxy in (Shamir, 2017) was 10 peaks.That is, if after converting the galaxy image to its radial intensity plot 10 peaks or more were detected, the spin direction of the galaxy could be determined based on these peaks.The low number of peaks increased the number of annotated galaxies, but also led to a certain inaccuracy of the galaxy annotation.The inaccuracy of the annotations is discussed in (Shamir, 2017).In the previous experiments of identifying a dipole axis in the spin directions of the galaxies, the minimum number of peaks required to make an annotation was 30 (Shamir, 2020c).The certain inaccuracy of the annotations of the galaxy images can lead to weaker signal compared to previous studies with a similar number of galaxies such as (Shamir, 2020c).Another reason for the weaker signal is that the objects used in (Shamir, 2017) were relatively bright objects (i < 18), and therefore objects with lower redshift.But despite these selection criteria, the signal observed in the experiment is stronger than 2σ.2021) state that the "clean" dataset of 72,888 galaxies exhibits random distribution of 1.29σ in the spin directions of the spiral galaxies.These results are in conflict with the results shown in Section 5.1, showing that the reproduction of the analysis with the exact same data shows a much stronger statistical significance, stronger than 2σ.One reason that can lead to lower statistical significance is that Iye et al. (2021) used the photometric redshift to limit the volume of the galaxies.For instance, the 0.29σ highlighted in the abstract was determined after using the photometric redshift.The (Iye et al., 2021) paper uses the term "measured redshift", but as explained in (Shamir, 2017) the vast majority of the galaxies in that dataset do not have spectra, and therefore the galaxies could not have spectroscopic redshift.As stated in the journal version of (Iye et al., 2021), the source of the redshifts is the catalogue of (Paul et al., 2018), which is a catalogue of photometric redshift.The photometric redshift is highly inaccurate and can be systematically biased.The inaccuracy of the photometric redshift can add substantial bias to the results, and can lead to lower statistical signal due to the unexpected inaccuracies added to the data.However, the analysis when using the entire "clean" dataset was done without limiting the volume, and without using the photometric redshift.Therefore, the photometric redshift cannot be the reason for the discrepancy between the reported and observed results.An inquiry to the National Astronomical Observatory of Japan (NAOJ), where the research was conducted, provided a reason for the differences.The analysis is available at (Watanabe, 2022).The full analysis of the NAOJ can be accessed at https://people.cs.ksu.edu/~lshamir/data/iye_et_al/watanabe_NAOJ_reply.pdf. Iye et al. 
( According to the analysis of the NAOJ (Watanabe, 2022), the analysis was done by assuming that the galaxies are distributed uniformly in the hemisphere.As the analysis summary states: "Because it is hard to verify the detail of simulations, we here calculate the analytic solution by Chandrasekhar (1943) which assumes uniform samples in the hemisphere."That is, the statistical significance can be reproduced to a certain extent if assuming that the galaxies used in the analysis are distributed uniformly in the hemisphere.When making that assumption, the statistical signal of the dipole axis is 1.35σ, which is close to the 1.29σ reported in (Iye et al., 2021).Iye et al. (2021) do not mention in the paper the analytic solution of Chandrasekhar (1943), or the assumption of a uniform sample in the hemisphere.More importantly, the assumption of uniform distribution in the hemisphere is not true for SDSS in general, and specifically not true for the dataset used here.for instance, Figure 3 shows the distribution of the RA of the galaxies.As the figure shows, the distribution is not uniform or close to uniformity, and some RA ranges are far more populated than other ranges. To transform into a uniform sample, the locations of the galaxies need to change so that they are spread uniformly.These changes in the locations can lead to changes in the results of the analysis of the dipole axis.For instance, the file https://people.cs.ksu.edu/~lshamir/data/iye_ et_al/galaxies_uniform.csv is the same 72,888 galaxies such that their RA values are distributed uniformly in the range of (0, 360), and the declination values are distributed uniformly in the declination range of the galaxies in the orig- Figure 4 shows the same analysis as Figure 2, but with the uniformly distributed galaxies.The most likely dipole axis is identified at (α = 172 o , δ = 51 o ), with statistical significance of 1.68σ.That statistical signal is still higher than the 1.35σ reported in (Watanabe, 2022), but it is lower than the statistical signal observed when using the real locations of the galaxies, without making any assumption regarding the nature of their distribution in the hemisphere.But the weaker signal shows that the assumption that the galaxies are distributed uniformly leads to a weaker signal, and could be the reason for the weaker signal observed by Iye et al. (2021). The possibility of error in the galaxy annotation The Ganalyzer algorithm used to annotate the galaxies in Section 5 is a simple model-driven algorithm that follows defined rules.It does not use machine learning or pattern recognition paradigms that their high complexity make them a "black box", and are very difficult to analyze and validate (Rudin, 2019;Dhar and Shamir, 2022).The simple "mechanical" nature of Ganalyzer allows it to ensure that the analysis is symmetric, as was shown experimentally in (Shamir, 2021b(Shamir, ,a, 2022b,a),a).The same analysis shown in Section 5.1 was repeated after mirroring all galaxies by using the Im-ageMagick "flop" command.Figure 5 shows the results, which is the same as Figure 2, and a very similar statistical signal of 2.17σ.The slight difference in statistical signal is expected due to the random assignment of spin directions. 
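The "uniform" variant of the catalogue discussed above can be generated in a few lines. This is a sketch under the assumption that the catalogue is loaded into a pandas DataFrame with 'ra' and 'dec' columns (names assumed); it follows the description of drawing RA uniformly in (0, 360) and declination uniformly within the declination range of the original galaxies, while keeping the spin annotations unchanged.

```python
import numpy as np
import pandas as pd

def uniformise_positions(df, seed=0):
    """Replace the real galaxy positions with uniformly drawn ones:
    RA uniform in (0, 360) degrees, declination uniform within the
    declination range spanned by the original catalogue."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    out['ra'] = rng.uniform(0.0, 360.0, size=len(df))
    out['dec'] = rng.uniform(df['dec'].min(), df['dec'].max(), size=len(df))
    return out
```

Note that uniformly distributed declination values do not correspond to a uniform surface density on the celestial sphere; the sketch simply follows the construction described in the text.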
Assuming that the annotation of the galaxies has a certain error, Equation 1 defines the asymmetry A between galaxies spinning in opposite directions in a certain part of the sky as observed from Earth where N cw and N ccw are the numbers of galaxies spinning clockwise and counterclockwise, respectively, and E cw and E ccw are the numbers of galaxies incorrectly annotated as spinning clockwise and counterclockwise, respectively.Because the number of galaxies incorrectly annotated as spinning clockwise is expected to be about the same as the number of galaxies incorrectly annotated as spinning counterclockwise.When E cw ≃ E ccw , A can be defined by Equation 2. Since E cw and E ccw must be positive integers, A gets lower as E cw and E ccw gets higher.Therefore, an error in the annotation algorithm can only make the asymmetry A smaller.The effect of incorrectly annotated galaxies was studied in (Shamir, 2021a).That analysis showed that when adding artificial error to the annotation in a symmetric manner the results do not change substantially, and the signal does not increase when the error is added.But when adding the error in a non-symmetric manner, even a small error of merely 2% leads to a dipole the peaks in the celestial pole, with very high statistical significance (Shamir, 2021a). Another aspect is the completeness of the analysis.While some of the galaxies were annotated by their spin directions, most galaxies in the initial dataset were not assigned with a spin direction, and were therefore excluded from the analysis.These galaxies did not have an identifiable spin directions.Clearly, many of these galaxies do spin in a certain direction, but the direction cannot be determined from the image.For instance, Figure 6 shows galaxies imaged by Pan-STARRS, SDSS, and HST. As the figure shows, galaxies imaged by SDSS and Pan-STARRS do not have an identifiable spin direction, while the HST images of the same galaxies show that these galaxies have clear spin patterns.It is obvious that galaxies imaged by HST and do not have an identifiable spin patterns also cannot be assumed to have no spin direction, as HST also has a limiting magnitude.The symmetric nature of the algorithm is therefore critical to ensure the absence of a small bias that can lead to biased results.Other reasons that can affect the analysis are galaxies with leading arms, cosmic variance, hardware flaws, and atmospheric effect as discussed in Section 4 in (Shamir, 2022b) or in (Shamir, 2022a). In all of these previously collected datasets, the direction of rotation of the galaxies were determined by the shape of the arms.Therefore, the galaxies are face-on galaxies, allowing an Earth-based observer to identify the arms and their curves.Obviously, the datasets used here are all datasets used in previous experiments, and were used here in the same manner.These experiments did not include the inclination of the galaxies in the analysis, as these galaxies are mostly face-on galaxies, but it can be assumed that the inclination of the face-on galaxies in not exactly 90 degrees for all of them.But it is also expected that these inclination variations will be distributed equally between galaxies that spin clockwise an galaxies that spin counterclockwise.Therefore, if the expected slight inclination variations affect the analysis, it is expected to affect clockwise and counterclockwise galaxies in a similar manner, and therefore cannot lead to a preferred spin direction. 
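Equations 1 and 2, referenced at the start of this section, did not survive the text extraction. Based on the variable definitions given there and on the statement that A can only decrease as E_cw and E_ccw grow, one plausible reconstruction is:

$$A = \frac{(N_{cw} + E_{cw}) - (N_{ccw} + E_{ccw})}{N_{cw} + E_{cw} + N_{ccw} + E_{ccw}}, \qquad A\big|_{E_{cw}\simeq E_{ccw}} = \frac{N_{cw} - N_{ccw}}{N_{cw} + N_{ccw} + E_{cw} + E_{ccw}}.$$

In the second form the error terms cancel in the numerator but remain in the denominator, which is consistent with the argument that symmetric annotation errors can only dilute, not create, the measured asymmetry.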
7 Explanation of the observation that is not related to the large-scale structure

One of the explanations for the observation described in this paper is that the observation reflects the real Universe. The contention that the Universe is oriented around a major axis departs from the standard cosmological models. It might be in agreement with other theories such as the ellipsoidal universe (Campanelli et al., 2006; Gruppuso, 2007), a rotating Universe (Gödel, 1949; Ozsváth and Schücking, 1962; Ozsvath and Schücking, 2001), and black hole cosmology (Pathria, 1972; Stuckey, 1994; Easson and Brandenberger, 2001; Seshavatharam, 2010; Popławski, 2010b; Christillin, 2014; Dymnikova, 2019; Chakrabarty et al., 2020; Popławski, 2021; Seshavatharam and Lakshminarayana, 2022; Gaztanaga, 2022a,b), which assume the existence of a cosmological-scale axis, but such an axis is not aligned with the standard model. On the other hand, it is also possible that the Universe does not have a cosmological-scale axis, and the observed asymmetry is driven by the internal structure of galaxies (Shamir, 2020a; McAdam and Shamir, 2023). In that case, the observation can be explained without the need to modify the standard cosmological model. One of the indications that the observed asymmetry could be driven by the internal structure of galaxies is that the peaks of the dipole axes as determined by the different experiments and different telescopes tend to be close to the Galactic pole.

Figure 7 displays the peaks of the axes observed in 11 previous experiments using SDSS (Land et al., 2008; Longo, 2011; Shamir, 2012, 2016, 2020c, 2021a), Pan-STARRS (Shamir, 2020c), the DESI Legacy Survey (Shamir, 2022a), DES (Shamir, 2022b), and DECam (Shamir, 2021b). As the figure shows, the axes peak in proximity to the Galactic pole, leading to the possibility that the observation is driven by the internal structure of galaxies and the rotation of the observed galaxies relative to the Milky Way. The analysis described in this paper also shows a dipole axis that peaks in close proximity to the Galactic pole. That was observed with galaxies annotated by Ganalyzer, as well as in the experiments with galaxies annotated by SpArcFiRe, as shown by Figure 1.

Comparison of the brightness of these galaxies shows a statistically significant difference in the brightness of clockwise and counterclockwise galaxies. That was also shown with galaxies from SDSS, Pan-STARRS, HST, and the DESI Legacy Survey (Shamir, 2020a; McAdam and Shamir, 2023). The difference in galaxy brightness can be linked directly to the asymmetry in the number of galaxies that spin in opposite directions. Naturally, if galaxies around the Galactic pole that rotate counterclockwise are slightly brighter than galaxies that rotate clockwise, more counterclockwise galaxies will be detected in that part of the sky. That will lead to an asymmetry between the numbers of galaxies that spin in opposite directions, and to a dipole axis that peaks around the Galactic pole. That axis is not a feature of the large-scale structure, but is driven by the internal structure of galaxies.

Table 4: The magnitude of SDSS galaxies that spin in opposite directions as annotated by Galaxy Zoo in the 50° × 50° window centred at the Northern galactic pole. Only galaxies whose annotation met the Galaxy Zoo "superclean" criterion are included.
To test that, it is possible to compare the brightness of galaxies that spin in opposite directions and are located around the Galactic pole. For instance, analyzing the exponential magnitudes of the SDSS galaxies annotated automatically by Ganalyzer and used in (Shamir, 2020c) provides 4,087 galaxies in the 50° × 50° part of the sky around the Northern galactic pole. The dataset is available at https://people.cs.ksu.edu/~lshamir/data/sdss_phot. Table 3 shows the average exponential magnitude of galaxies spinning clockwise and galaxies spinning counterclockwise. As the table shows, galaxies spinning counterclockwise in the part of the sky around the Northern galactic pole are brighter than galaxies spinning clockwise in the same part of the sky. These results are aligned with the results of similar experiments with other telescopes (McAdam and Shamir, 2023), and explain the higher number of counterclockwise galaxies observed in that part of the sky. In this case, the asymmetry in the number of galaxies can be attributed to galaxy rotation and the internal structure of galaxies, rather than to the large-scale structure of the Universe.

A similar analysis with the galaxies annotated by Galaxy Zoo also shows a similar magnitude difference. Table 4 shows a similar analysis using all Galaxy Zoo galaxies in the 50° × 50° window centred at the Northern galactic pole that also met the "superclean" criterion of Galaxy Zoo. A galaxy annotation is considered "superclean" if 95% or more of the votes agree on the annotation (Land et al., 2008). That provided a dataset of 2,841 galaxies. The results show a similar magnitude difference compared to the galaxies annotated by Ganalyzer, as shown in Table 3.

Figure 7: The locations of the most likely dipole axes in several previous experiments (Land et al., 2008; Longo, 2011; Shamir, 2012, 2016, 2020c, 2021a,b, 2022a,b), and the location of the galactic pole (green) at (α = 192°, δ = 27°).

Table 5: The magnitude of SDSS galaxies annotated by Galaxy Zoo in the 50° × 50° window centred at the galactic pole. Only galaxies whose annotation met the Galaxy Zoo "clean" criterion are included.

The "superclean" criterion of Galaxy Zoo provides cleaner data, but also leads to the sacrifice of a substantial number of galaxies that do not meet the criterion. Galaxy Zoo therefore also has the "clean" criterion, according to which 80% or more of the annotations need to agree. Table 5 shows the results with Galaxy Zoo "clean" galaxies, which provided a larger set of 9,512 galaxies. The results show that the absolute difference is smaller compared to using the "superclean" annotations, which can be explained by the higher number of incorrectly annotated galaxies in the "clean" annotations compared to the "superclean" annotations. The difference, however, is still statistically significant, also due to the higher number of galaxies that meet the "clean" criterion. The brightness differences in the field around the Galactic pole shown in Tables 3 through 5 can be compared to a control field perpendicular to the Galactic pole. Tables 6 and 7 show the magnitude differences in the field centred at (α = 102°, δ = 0°) with galaxies annotated by Ganalyzer and by Galaxy Zoo, respectively. The numbers of galaxies in these experiments are 3,781 and 5,094, respectively. A minimal sketch of this kind of magnitude comparison is shown below.
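The column names, the square RA/Dec window selection, and the use of Welch's t-test in this sketch are assumptions, since the excerpt does not specify which test statistic the original analyses used; RA wrap-around at 0/360 is also ignored here.

```python
import pandas as pd
from scipy import stats

def magnitude_asymmetry(df, centre_ra, centre_dec, half_size=25.0):
    """Mean magnitude of clockwise vs. counterclockwise galaxies inside a
    square window on the sky, with a two-sample (Welch) t-test on the
    difference of the means."""
    window = df[df['ra'].between(centre_ra - half_size, centre_ra + half_size)
                & df['dec'].between(centre_dec - half_size, centre_dec + half_size)]
    cw = window.loc[window['spin'] == 1, 'mag']
    ccw = window.loc[window['spin'] == -1, 'mag']
    t, p = stats.ttest_ind(cw, ccw, equal_var=False)
    return cw.mean(), ccw.mean(), p

# Field around the Northern galactic pole vs. a control field perpendicular to it.
# pole = magnitude_asymmetry(catalogue, 192.0, 27.0)
# control = magnitude_asymmetry(catalogue, 102.0, 0.0)
```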
As both tables show, in the field perpendicular to the Galactic pole the brightness differences are far smaller, and statistically insignificant.Although the reason for the difference observed at around the Galactic pole is still unclear, it is possible that the motion of the observed galaxies relative to the Milky Way might be related to the observation.More work will be required to fully understand the reason for the observation.Table 7: The magnitude of SDSS galaxies with different spin directions at a field perpendicular to the Galactic pole.The galaxies are annotated by Galaxy Zoo. Conclusion Several different probes have shown evidence of large-scale isotropy and parity violation (Aluri et al., 2023).This paper aims at studying a probe with several studies that show conflicting conclusions by analyzing the studies and replicating the results.Code and data used for the analysis are available. The analyses and reproduction of the experiments show that the studies might not necessarily conflict with each other.While reproducibility is a key concept in science (Aristotle, 50BC), it has been shown that more researchers tried but failed to reproduce work of their colleagues (Baker, 2016).These failed attempts are normally left unknown to the public, and attempts to make them public through the scientific literature are often repelled through the peer-review process, leading to unbalanced information in the scientific literature (Baker, 2016).In the case of computational analyses, reproducibility is expected to be more straightforward compared to "wet" laboratory experiments that may require highly trained researchers just to follow the protocols and reproduce the results.Yet, comprehensive analysis have shown that even in the case of papers published in the most regarded outlets with strict reproducibility policies, 76% of the published papers could not be reproduced (Stodden et al., 2018). The attempt to reproduce the results and analyze the research aims at identifying the reasons that explain why different studies show conflicting conclusions.The results join a large number of other observations of cosmological-scale anisotropy reflected through multiple probes (Aluri et al., 2023), although another explanation to the observation that does not require violation of the cosmological principle is also proposed.Other explanations are also possible. Future studies with more data will be required to profile the nature of the observation.The Vera C. Rubin Observatory will provide the world's largest astronomical database, and will allow to provide analysis with high resolution that will help to fully profile the observation and understand its nature.It is expected that such analysis will identify the exact location of the peak of the axis, which might allow to associate it with other observations that can include the Galactic pole, the CMB Cold Spot, the CMB dipole, etc. Instruments such as the Dark Energy Spectroscopic Instrument (Martini et al., 2018) will provide spectra of a large number of galaxies.That will allow to study whether the asymmetry changes with the redshift.Early observations using ∼ 6.4•10 4 galaxies with spectra showed evidence that the asymmetry increases as the redshift gets higher (Shamir, 2020c), suggesting that the asymmetry is of primordial origin.The Dark Energy Spectroscopic Instrument will allow to profile that change with an unprecedented number of galaxies with spectra. 
Figure 2: The statistical significance of a dipole axis existing by chance from all (α, δ) combinations. Reproduction of the analysis is available at https://people.cs.ksu.edu/~lshamir/data/iye_et_al.

Table 2 column headings: Hemisphere, # Z-wise, # S-wise.

Figure 3: RA distribution of the galaxies.

Figure 4: The statistical significance of a dipole axis from all (α, δ) combinations when the distribution of the RA and declination is uniform.

Figure 5: The statistical significance of a dipole axis existing by chance from different (α, δ) combinations after mirroring the images.

Figure 6: Galaxies imaged by HST (left), Pan-STARRS (middle), and SDSS (right). The (α, δ) coordinates of each galaxy are also specified. The images show that galaxies that have a clear spin direction in HST cannot be annotated when using the Earth-based SDSS or Pan-STARRS images.

Table 1: The number of galaxies in the original and mirrored images annotated as spinning clockwise or counterclockwise through Galaxy Zoo.

Table 3: The magnitude of SDSS galaxies with different spin directions. All galaxies are in the 50° × 50° window centred at the Northern galactic pole. The galaxies are annotated by Ganalyzer.

Table 6: The magnitude of SDSS galaxies with different spin directions in a field perpendicular to the Galactic pole. The galaxies are annotated by Ganalyzer. The magnitude differences are far smaller than the magnitude difference in the field centred at the Galactic pole.
The Role of α,β-Dicarbonyl Compounds in the Toxicity of Short Chain Sugars* The extent to which sugars serve as targets for superoxide was examined using glycolaldehyde as the simplest sugar and using superoxide dismutase (SOD)-replete and SOD-null strains growing under aerobic and anaerobic conditions. Glycolaldehyde was more toxic to the SOD-null strain than to its SOD-replete parent, and this differential effect was oxygen-dependent. The product, glyoxal, could be trapped in the medium by 1,2-diaminobenzene and assayed as quinoxaline. The SOD-null strain produced more glyoxal and eliminated it more slowly than the SOD-replete parent strain. Glyoxal was ∼10 times more toxic than glycolaldehyde and was more toxic to the SOD-null strain than to the parental strain. 1,2-Diaminobenzene protected against the toxicity of glycolaldehyde. TheseEscherichia coli strains contained the glutathione-dependent glyoxalases I and II, as well as the glutathione-independent glyoxalase III. Of these enzymes, glyoxalase III was most abundant, and it was inactivated within the aerobic SOD-null strain and also in extracts when exposed to the flux of superoxide and hydrogen peroxide imposed by the xanthine oxidase reaction. Thus, it appears that short chain sugars are oxidized by superoxide yielding toxic dicarbonyls. Moreover, the defensive glyoxalase III is also inactivated by the oxidative stress imposed by the lack of SOD, thereby exacerbating the deleterious effect of sugar oxidation. Sugars, in which carbon chain backbone is too short to permit conversion to cyclic hemiacetals, are prone to enolization and then to air oxidation. The superoxide is a product of air oxidation (1)(2)(3). Because the superoxide can also initiate the oxidation of such enediolates, free radical chain oxidations are possible (4). Fig. 1 presents a scheme for the tautomerism of the open chain forms of aldoses (I) to the corresponding enediols (II) and for the sequential oxidations of the enediols to a monoradical (III) and then to a very unstable diradical (IV), which rearranges to the ␣,␤-dicarbonyl (V). The one-electron oxidation can be caused slowly by dioxygen yielding superoxide or more rapidly by superoxide yielding hydrogen peroxide. We have previously noted that short chain sugars are toxic to Escherichia coli, aerobically but not anaerobically and that a scavenger of dicarbonyls, such as aminoguanidine, protected (5). Because a SOD 1 -null strain was more prone to this toxicity than the parental strain, we concluded that superoxide was a factor in the oxidation of the short chain sugars and that ␣,␤-dicarbonyls were the proximate toxic products of that oxidation. We did not then actually measure the ␣,␤-dicarbonyls that were supposed to be the cause of the toxicity, nor did we consider the protective actions of glyoxalases that convert ␣,␤-dicarbonyls to ␣-hydroxy acids. The data presented below fill those gaps and add to our understanding of sugars as sources of superoxide and as targets for that radical. We find that glyoxal is produced from glycolaldehyde more rapidly by a SODnull strain than by the parental strain. It is also seen that the parental strain eliminates glyoxal more rapidly than the SODnull strain. One particular ␣,␤-diketone, i.e. methylglyoxal, can be made from dihydroxyacetone phosphate by a specific synthase that is widespread in bacteria and that has been cloned, sequenced, and overexpressed (6,7). 
The ␣,␤-dicarbonyl compounds, whether made by methylglyoxal synthase or as a result of autoxidation of short chain sugars, are potentially toxic because of their propensity to covalently modify both nucleic acids and proteins (8,9). Our results suggest that under conditions of oxidative stress, such as stress imposed by a lack of SOD, the autoxidation of short chain sugars to dicarbonyls is more of a problem than is the activity of methylglyoxal synthase. Our results also indicate that the specificity of the defensive glyoxalases is broad enough to encompass glyoxal. Cell Culture-The E. coli strain used was AB1157, which was the parental strain for JI132 that was the ⌬sodA/⌬sodB mutant (10). Starter cultures were grown overnight in aerobic LB medium at 37°C and were then diluted to 2 ϫ 10 6 cells/ml in M9CA medium. LB and M9CA media are as described previously (11). The anaerobic condition was achieved in a BBL Gas Pak anaerobic system (Becton Dickinson). When needed, extracts were prepared from 6-h cultures by centrifugation (washing the cells two times in 50 mM potassium phosphate, pH 7.8); then the cells, which had been resuspended in this buffer, were disrupted with a French press. The lysate was clarified by centrifugation. Glyoxal and Methylglyoxal Assay-Glyoxal and methylglyoxal were assayed in the medium by using 1,2-diaminobenzene as derivatizing reagent by a modification of the protocol of Cordeiro and Ponces Freire (12). To a 1-ml sample containing glyoxal and/or methylglyoxal, we added 0.2 ml of 5 M HClO 4 , 0.2 ml of 2,3-dimethylquinoxaline as an internal standard, 0.2 ml of 10 mM 1,2-diaminobenzene, and water to a 2-ml final volume. After 1 h at 25°C, high pressure liquid chromatography (HPLC) analysis was performed in a LKB-Bromma chromatograph. The column was a 5-m, 250 ϫ 4-mm RP-18 (Merck LiChrospher). The mobile phase was 40% (v/v) 25 mM ammonium formate buffer, pH 3.4, and 60% (v/v) methanol. A volume of 150 l was injected. The flow rate was 1.6 ml/min and quinoxalines were detected at 315 nm. Enzymatic Assays-Glyoxalase I was assayed by following an increase in A 240 because of S-D-lactoylglutathione formation (13). One unit of glyoxalase I is defined as the amount of enzyme required to form 1 mol of S-D-lactoylglutathione/min. Glyoxalase II was assayed by monitoring the decrease in A 240 , accompanying the conversion of S-Dlactoylglutathione to lactate plus GSH (14). One unit of glyoxalase II is defined as the amount of enzyme that hydrolyzes 1 mol of S-D-lactoylglutathione/min. Glyoxalase III was assayed by a modification of the method of Misra et al. (15) using the HPLC assay for glyoxal and methylglyoxal as described above. One unit of glyoxalase III is defined as the amount of enzyme required to utilize 1 mol of methylglyoxal or to form 1 mol of D-lactate/min. 50 M xanthine, 2 nM xanthine oxidase, 1 g of Cu,Zn-SOD, 1 g of catalase, or 25 mM mannitol were added to the cell extracts to explore the effects of reactive oxygen species. ATP-dependent phosphorylation of glucose by glucokinase was assayed in cell extracts by monitoring the formation of NADPH and by using excess glucose-6-phosphate dehydrogenase according to the protocol of Fraenkel and Horecker (16), except that a higher ATP concentration (7.5 mM) was used (17). One unit of enzyme activity is defined as the amount of glucokinase that catalyzes the formation of 1 mol of glucose 6-phosphate/min. 
RESULTS Glycolaldehyde and Glyoxal: Effect of Superoxide, SOD, and 1,2-Diaminobenzene-Glycolaldehyde did not inhibit aerobic growth of the SOD-replete AB1157 strain until its concentration exceeded 4.0 mM ( Fig. 2A, line 1). In contrast, the growth of the SOD-null JI132 strain was suppressed in the lower range of 0 -4.0 mM ( Fig. 2A, line 2). This protective effect of endogenous SOD on the sensitivity to glycolaldehyde was not seen under anaerobic conditions ( Fig. 2A, lines 3 and 4). When glyoxal was examined (Fig. 2B), a similar pattern was seen but at 10-fold lower concentrations. Thus, the anaerobic growth suppression became pronounced above 0.3 mM for both the JI132 and AB1157 strains (Fig. 2B, lines 3 and 4), and aerobically JI132 was more sensitive than AB1157 (Fig. 2B, lines 1 and 2). 1,2-Diaminobenzene converts ␣,␤-dicarbonyls to quinoxalines (12,18), and if glyoxal is the cause of glycolaldehyde toxicity, 1,2-diaminobenzene should protect JI132. Comparison of lines 1 and 2 in Fig. 2C demonstrates that 2.0 mM glycolaldehyde slowed the aerobic growth of the JI132 strain, whereas lines 3 and 4 in Fig. 2C show that 1.0 mM 1,2-diaminobenzene significantly lessened the effect of glycolaldehyde. It should be noted that 1,2-diaminobenzene at 1.0 mM did not itself affect the growth of the JI132 strain in the absence of glycolaldehyde, although it was a growth inhibitor at a higher concentration (data not shown). It should also be recalled that 2.0 mM glycolaldehyde or 0.2 mM glyoxal were without effect on the anaerobic growth rates of AB1157 or JI132 (Fig. 2, A and B, lines 3 and 4). These data support the conclusions that glyoxal may be a cause of the aerobic toxicity of glycolaldehyde, superoxide plays a role in the conversion of the latter into the former, and somehow superoxide also increases the toxicity of glyoxal. Oxidation of Glycolaldehyde into Glyoxal-Glycolaldehyde autoxidizes into glyoxal, and the extent of this autoxidation must be known so that corrections for it can be applied to results obtained with E. coli. Line 1 in Fig. 3A presents the accumulation of glyoxal from 2.0 mM glycolaldehyde in M9 medium. The rate was slower in M9CA medium (Fig. 3A, line 2) presumably because of consumption of glyoxal by reaction with the amino acids in the casein hydrolysate. Aminoguanidine completely eliminated the accumulation of glyoxal (Fig. 3A, line 3) by coupling with it to form an asymmetrical triazine (19). In accordance with these results, when the persistence of glyoxal was examined (Fig. 3B), glyoxal was seen to be stable in M9 medium (Fig. 3B, line 1) but less stable in M9CA medium (Fig. 3B, line 2). Arginine caused rapid consumption of glyoxal (Fig. 3B, line 4), whereas the scavenging of glyoxal by aminoguanidine was most pronounced (Fig. 3B, line 3). The effect of 2.0 mM glycolaldehyde on the growth of E. coli and the concomitant accumulation and consumption of glyoxal were examined. Fig. 4A shows that both the SOD-replete AB1157 and the SOD-null JI132 strains grew at the same rates anaerobically (lines 2 and 4), whereas aerobically AB1157 grew much faster (line 1) than did JI132 (line 3). The glyoxal content of the medium was also followed as shown in Fig. 4B. Line 1 shows that aerobic AB1157 accumulated glyoxal to a maximum lines 2 and 4). Line 5 shows the production of glyoxal by autoxidation in the aerobic M9CA medium without cells, whereas line 6 depicts an anaerobic control for the effect of medium alone. 
It again appears that JI132 produces glyoxal from glycolaldehyde more rapidly and eliminates it more slowly than AB1157. This is probably why its growth was more strongly inhibited by glycolaldehyde. Glyoxalase Activities-E. coli is known to contain glyoxalases I, II, and III, although one report states that glyoxalase II was not detected (15). Because glyoxalases I and II cooperate in performing the GSH-dependent conversion of ␣,␤-dicarbonyls to ␣-hydroxy acids, it would be expected that glyoxalase II would be present if glyoxalase I was present. Fig. 5 presents the glyoxalase activities in the AB1157 and JI132 strains grown under different conditions. The first point to be made is that glyoxalases I, II, and III are all present, although glyoxalase III Ͼ glyoxalase I Ͼ glyoxalase II Ͼ 0. Hence most of the glyoxalase activity in E. coli is because of the GSH-independent glyoxalase III. A comparison of Fig. 5, A and D, makes it clear that the AB1157 and JI132 strains have comparable glyoxalase activities when grown anaerobically but that JI132 has less glyoxalase III under aerobic conditions. There was no induction of glyoxalase III by growth in the presence of methylglyoxal (Fig. 5B). Paraquat, which can increase the aerobic production of superoxide, suppressed glyoxalase III in JI132 (Fig. 5C). Inactivation of the abundant glyoxalase III by superoxide or by reactive species derived therefrom could explain the lower glyoxalase activity in the aerobic JI132 cells than in the AB1157 strain and could also explain the effect of paraquat; it could further clarify why glyoxal was more toxic to JI132 than to AB1157 and why this differential toxicity was oxygen-dependent. Hence, this possibility was explored. Inactivation of Glyoxalase III-The effect of a flux of superoxide, produced by the xanthine oxidase reaction (20) on the glyoxalase III activity in extracts of E. coli, was examined. Fig. 6A shows that glyoxalase III activity was diminished in the case of AB1157 by exposure to the xanthine oxidase reaction, whereas Fig. 6B shows that the effect on JI132 extracts was greater. Because the SOD endogenous to AB1157 and present in the extract could have accounted for this difference, SOD was added to the extracts and was found to protect completely. Indeed, the added SOD raised the glyoxalase III activity in JI132 extracts to a level greater than that seen in extracts not exposed to the xanthine oxidase reaction. We suppose that this is explained by the inactivation of some glyoxalases III by endogenous superoxide production in the JI132 extracts before the sampling for assay. Superoxide can release Fe(II) from the [4Fe-4S] clusters of dehydratases, and that Fe(II) can reduce hydrogen peroxide to yield the hydroxyl radical (21)(22)(23). If hydroxyl radical produced in this way was the cause of the inactivation of glyoxalase III, then catalase should protect by lowering hydrogen peroxide, and mannitol should protect by scavenging the hydroxyl radical. Fig. 6 illustrates the protective effects of catalase and mannitol. It follows that glyoxalase III, the major glyoxalase in E. coli, is sensitive to inactivation by the pro-oxidant conditions created by the lack of SOD and that Fenton chemistry generates the proximate inactivator. Glucokinase was examined in a similar way to see whether the sensitivity of glyoxalase III to oxidation was unusual. Fig. 7 shows that glucokinase was not inactivated by the superoxide and hydrogen peroxide produced by the xanthine oxidase reaction. 
Thus, the sensitivity of glyoxalase III was special and might relate to the thiol group that is essential for its activity and possibly to the binding of iron adjacent to the active site thiol. Methylglyoxal-While assaying for glyoxal in terms of the quinoxaline produced from the reaction with 1,2-diaminobenzene, we also measured methylglyoxal in terms of 2-methylquinoxaline. The primary reason for doing so was to gauge the extent to which methylglyoxal synthase was contributing to dicarbonyl production by its non-oxidative pathway. Fig. 8 indicates that the contribution of the methylglyoxal synthase to the total dicarbonyl production in cells exposed to glycolaldehyde was very small indeed. Thus, M9CA medium conditioned by the growth of JI132 underwent reaction with 1,2-diaminobenzene and then was subjected to HPLC in which quinoxaline, derived from glyoxal, eluted at 3 min, 35 s, and in which 2-methylquinoxaline, derived from methylglyoxal, eluted at 4 min, 20 s. There was no detectable methylglyoxal formed from the glucose present in this medium and in only traces of other dicarbonyls (Fig. 8A). When 200 M glyoxal had been added to the culture (Fig. 8B), it was detected as quinoxaline at zero time eluted at 3 min, 35 s and was progressively consumed at longer times of incubation. Similarly, enriching the medium with 200 M methylglyoxal gave 2-methylquinoxaline eluting at 4 min, 20 s at zero time and progressively less in samples drawn at longer times of incubation (Fig. 8C). The consumption of these dicarbonyls by JI132 largely reflects the activity of glyoxalases. When cultures enriched with glycolaldehyde ( Fig. 8D) or glyceraldehyde (Fig. 8E) were examined, the major dicarbonyls detected were glyoxal and methylglyoxal, respectively. However, in the case of glycolaldehyde, the glyoxal was first generated and then consumed during the 24 h of incubation. In contrast, the glyceraldehyde was contaminated with methylglyoxal in the zero time sample. Erythrose (Fig. 8F) was contaminated by polar dicarbonyls, presumably erythrosone, in which quinoxaline products eluted at 2 min, 35 s. It also contained lesser amounts of glyoxal and methylglyoxal. DISCUSSION Short chain sugars, in which carbonyl functions cannot be blocked by the formation of furanose or pyranose rings, are prone to enolization followed by oxidation to toxic dicarbonyls. Thus, they express in an exaggerated way what can also occur with glucose by the slower process of non-enzymatic glycation and oxidations (24,25). Glycolaldehyde, the simplest sugar, is more toxic to a SOD-null strain of E. coli (JI132) than to its SOD-replete parent (AB1157), and this extra toxicity is oxygendependent. The corresponding dicarbonyl, glyoxal, was ϳ10 times more toxic than glycolaldehyde, and JI132 was again more sensitive than AB1157 in an oxygen-dependent way. 1,2-Diaminobenzene protected JI132 against the toxicity of glycolaldehyde, presumably by converting glyoxal to the less toxic quinoxaline. We may infer that superoxide is an important cause of the oxidation of glycolaldehyde, and we may explain the greater and oxygen-dependent sensitivity of JI132 to glyoxal on the basis of an oxidative inactivation of glyoxalases. Glycolaldehyde itself becomes toxic at high concentrations, even anaerobically, probably by converting essential amino compounds to carbinolamines and to Schiff base salts. 
When the SOD-replete AB1157 strain grew aerobically in the presence of glycolaldehyde, glyoxal first accumulated in the medium and was subsequently consumed. The SOD-null JI132, in contrast, accumulated glyoxal during the entire 10 h of incubation. Thus, it appeared that JI132 both converted glycolaldehyde to glyoxal more rapidly and disposed of it more slowly than did AB1157. Glyoxalases I, II, and III were all present in these strains, but the GSH-independent glyoxalase III was the most abundant and suppressed in the JI132 strain grown aerobically; it was further suppressed when JI132 grew in the presence of 1 M paraquat. Thus, it appears that glyoxalase III was inactivated by the oxidative stress imposed by FIG. 7. Glucokinase activities of AB1157 and JI132. Overnight cultures of AB1157 and JI132 in LB medium were diluted to 2 ϫ 10 6 cells/ml in M9CA medium. Cell extracts were prepared from 6-h cultures. All enzyme activities were determined seven times, and the mean Ϯ S.D. is shown. lack of SOD activity and also by the presence of paraquat. Exposure of bacterial extracts to the superoxide and hydrogen peroxide produced by the xanthine oxidase reaction caused a loss of glyoxalase III activity that was greater in the SODnull extracts. This inactivation was prevented by SOD, catalase, or mannitol. Glucokinase in the extracts was not inactivated by the xanthine oxidase reaction. Glyoxalase III may be selectively inactivated by a flux of superoxide and hydrogen peroxide because it binds the Fe(II) released from the [4Fe-4S] clusters of dehydratases oxidized by superoxide. This bound Fe(II) would then react with hydrogen peroxide to yield Fe(II)-O, Fe(III)-OH, or hydroxyl radical, and these strong oxidants would preferentially attack the nearest target, which in this case is glyoxalase III. When glucose was the carbon source, no dicarbonyls could be trapped by 1,2-diaminobenzene. This negative result is a measure of the degree of protection provided by blocking sugar carbonyls by hemiacetal ring closure. Moreover, the steady state concentrations of the triosephosphate intermediates of glycolysis must be very low because the equilibrium constant of the aldolase reaction greatly favors fructose 1,6-diphosphate. We conclude that dicarbonyl production from dihydroxyacetone phosphate, by the action of methylglyoxal synthase, must be insignificant and in full accord with the observation that 900fold overexpression of this synthase did not cause observable detrimental effects (7). MacLean et al. (26) reported that glyoxalase III was the most abundant glyoxalase in E. coli but nevertheless concluded that glyoxalases I plus II were the most important route of methylglyoxal detoxification. This conclusion was based on the heightened sensitivity to methylglyoxal exhibited by a glyoxalase I-null mutant. However, this can be explained by the protective effect of lowering cytoplasmic pH (27) because of the activation of potassium efflux by the product of the glyoxalase I reaction, S-lactoylglutathione (26). Lowering the pH would slow the reaction of dicarbonyls with target amino or thiol compounds.
Micromanipulation and Automatic Data Analysis to Determine the Mechanical Strength of Microparticles Microparticles are widely used in many industrial sectors. A micromanipulation technique has been widely used to quantify the mechanical properties of individual microparticles, which is crucial to the optimization of their functionality and performance in end-use applications. The principle of this technique is to compress single particles between two parallel surfaces, and the force versus displacement data are obtained simultaneously. Previously, analysis of the experimental data had to be done manually to calculate the rupture strength parameters of each individual particle, which is time-consuming. The aim of this study is to develop a software package that enables automatic analysis of the rupture strength parameters from the experimental data to enhance the capability of the micromanipulation technique. Three algorithms based on the combination of the “three-sigma rule”, a moving window, and the Hertz model were developed to locate the starting point where onset of compression occurs, and one algorithm based on the maximum deceleration was developed to identify the rupture point where a single particle is ruptured. Fifty microcapsules each with a liquid core and fifty porous polystyrene (PS) microspheres were tested in order to produce statistically representative results of each sample, and the experimental data were analysed using the developed software package. It is found that the results obtained from the combination of the “3σ + window” algorithm or the “3σ + window + Hertz” algorithm with the “maximum-deceleration” algorithm do not show any significant difference from the manual results. The data analysis time for each sample has been shortened from 2 to 3 h manually to within 20 min automatically. Introduction Microparticles are widely used in many functional products in the industry [1]. Measuring their mechanical strength is essential to optimizing their performance during manufacturing, processing, and end-use applications [2]. For example, microcapsules with self-sensing agents used to produce smart structural composites [3][4][5] should be mechanically strong enough to survive different engineering processing steps leading to their incorporation into the composites but weak enough to break after mechanical damage is occurring to the composites so that the need for repair can be indicated quickly. Understanding the mechanical strength of the self-sensing microcapsules plays a crucial role in ensuring the functionalities of the composites. Furthermore, characterizing the mechanical strength of other microparticles, e.g., perfume microcapsules for fabric softeners and detergents [6], and microspheres for chromatography media for bio-separation [7], can also provide essential technical data for new product development and production as well as help to optimize their functionality and performance in end-use applications. Experimental techniques to determine the mechanical strength of microparticles can be classified as ensemble test methods and single-particle test methods [1,2,8]. The former methods are relatively quick as a group of particles are tested simultaneously, but only the average mechanical strength values can be obtained. The latter methods test particles one by one; thus, their mechanical strength distribution can be obtained, which is crucial in many applications to optimize their functionality and performance. 
Several techniques have been developed to determine the mechanical strength of single particles, including optical/magnetic tweezers [9], pressure probe [10,11], micropipette aspiration [12,13], atomic force microscopy (AFM) [14,15], nanoindentation [16,17], and micromanipulation based on diametrical compression [18]. The main difference among them lies in the deformations that can be generated and the magnitudes of the forces that can be measured. For example, the typical force measured by micromanipulation is from μN to N, while the force by AFM is from pN to μN [2]. Consequently, micromanipulation can provide the rupture strength parameters by compressing particles to breakage, while it is difficult for other techniques to do so [1]. The micromanipulation technique involves sample preparation, compression of single particles, and data analysis to obtain the mechanical properties of microparticles. The raw data from a micromanipulation test are a series of voltage data, as shown in Figure 1. The main task of the data analysis is to identify the starting point M, where the onset of loading occurs, and the rupture point R, where the tested particle ruptures, from which the rupture strength parameters and force-displacement data can be obtained. Unlike AFM experiments, for which commercial or open-source software packages are available to analyse the force-displacement data [38,39], the analysis of micromanipulation data has previously been carried out manually by interacting with the raw data and template spreadsheets, which is quite laborious and time-consuming. The software packages for AFM are not easily adapted to process the data from the micromanipulation technique because of differences in data formats, in the mechanical property parameters to be obtained, and in the specific mathematical model formulas required. However, similar to the starting point M in the micromanipulation tests, the contact point (CP) is also crucial to analysing the force-displacement data from AFM experiments. Several algorithms have been developed to locate the CP of the force-displacement data obtained from AFM experiments. A simple algorithm with a threshold (typically 0.1%) was used to estimate the CP from the approach curve above the baseline [40]. However, this threshold needs to be modified manually according to the baseline value and noise level, which is not suitable for automatic data analysis. A local regression-based algorithm was then introduced to determine the CP from slope changes [41]. Three parameters, including the number of data points for regression and two thresholds, need to be properly set to locate the CP. Another algorithm was developed to estimate the CP by fitting the data in a linear elastic region to a Hertz-like model for nanoindentation data [42]. The algorithm worked well but requires new sets of parameters for materials with different mechanical behaviours. Moreover, in AFM force data analysis, a force map, e.g., 64 × 64 force curves, is usually obtained for a single particle to yield a spatial distribution of the mechanical strength parameters. The algorithms above aim to locate the CPs of the force curves for a single particle, so the parameters set for the algorithms may not need to be adjusted frequently for every force curve.
In contrast, from micromanipulation measurements, a single voltage (force) curve is obtained for a single particle, and usually, the particles in a sample have different sizes and mechanical strength values; therefore, the parameters set may need to be modified frequently for each dataset to ensure the above algorithms can work properly for every tested single particle in a sample. Consequently, the algorithms used in AFM data analysis cannot be applied directly to automatic analysis of the micromanipulation data. The aim of this study is to develop a software package to analyse the experimental data obtained from using the micromanipulation technique to automatically obtain the mechanical strength parameters of microparticles, to simplify the procedure, save time and labour, and enhance the capability of the micromanipulation technique. In this paper, three algorithms are presented to identify the starting point M, and an algorithm is introduced to locate the rupture point R from the raw voltage data of micromanipulation. Two samples of microparticles, i.e., the microcapsules for self-sensing and the porous PS microspheres with various potential applications, have been tested using the micromanipulation technique, and the experimental data analysed using the developed software package are compared to the manual results to validate the algorithms developed. Microcapsules for Self-Sensing The microcapsules for self-sensing were a very robust type of double-walled microcapsules made by interfacial polymerization. The detailed fabrication methods are described in [5]. The outer and inner shells were made from poly(urea-formaldehyde) (PUF) and polyurethane (PU), respectively. The core is an oil with a fluorophore substance. Porous Polystyrene Microspheres The porous polystyrene (PS) microspheres with various potential applications were fabricated via a novel solvent evaporation methodology based on foaming transfer. The detailed fabrication process is reported in [43]. Specifically, the porous PS microspheres obtained by introducing a 20 wt% ethanol concentration to the continuous phase were used in this paper. Micromanipulation Rig The principle of the micromanipulation technique is to compress single particles to different deformations or to rupture between two parallel surfaces, and the force versus displacement data are obtained simultaneously. The schematic diagram of the micromanipulation rig used in this work is illustrated in Figure 2, which is also reported elsewhere [19][20][21][22]. Single microparticles are placed on the glass slide, which is fixed on the sample stage of a three-dimensional micromanipulator, and then compressed by the output probe (with a flat end) of the force transducer that is mounted on the one-dimensional fine micromanipulator. The corresponding compression force is acquired by a data acquisition device (USB-201-OEM, Measurement Computing Corporation, Norton, MA, USA) in the control and acquisition box, and the data are saved in the computer for post-processing. The fine micromanipulator is driven by a servo motor. The power supply of the servo motor is 24 V DC. Before compression, single microparticles are moved to just below the force probe by operating the sample-stage micromanipulator. Using the side-view camera, video images of the compression procedure can be displayed on the industrial computer monitor and saved in the computer. The force transducer can be changed according to the mechanical strength scale of the microparticles to be measured.
Micromanipulation of the Microcapsules for Self-Sensing Dry microcapsules were placed onto a glass slide, and single microcapsules were compressed to rupture using the micromanipulation rig at a compression speed of 2.0 µm/s. The sampling time was 0.01887 s, and the force transducer model was GS0-10 (Transducer Techniques, LLC, Temecula, CA, USA) with a pre-calibrated sensitivity of 8.674 mN/V. In total, 50 microcapsules were tested at an ambient temperature of 26 ± 2 °C. Micromanipulation of the Porous PS Microspheres The micromanipulation procedure for the porous PS microspheres was the same as that for the self-sensing microcapsules. The transducer used was GS0-10 with a pre-calibrated sensitivity of 7.423 mN/V. In total, 50 PS microspheres were tested at an ambient temperature of 16 ± 2 °C. Figure 3 illustrates the procedure to compress a porous PS microsphere between the two parallel surfaces, i.e., the probe end and the glass surface. The diameter of the transducer probe was around 50 µm, and the diameter of the particle was 16.3 µm.
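For orientation, the acquisition settings quoted above fix how raw samples map to probe travel and force. The short sketch below is illustrative only; it simply applies the stated compression speed, sampling time, and transducer sensitivity, and the variable names are ours rather than those used in the paper's software.

```python
# Illustrative use of the micromanipulation acquisition settings quoted above.
v = 2.0        # compression speed, um/s
Ts = 0.01887   # sampling time, s
s = 8.674      # transducer sensitivity, mN/V (GS0-10, microcapsule tests)

travel_per_sample_um = v * Ts  # ~0.038 um of probe travel per voltage sample

def force_from_voltage(dV):
    """Force (mN) corresponding to a voltage rise dV (V) above the baseline."""
    return s * dV

print(f"{travel_per_sample_um:.4f} um per sample")
print(f"{force_from_voltage(0.5):.3f} mN for a 0.5 V rise above the baseline")
```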
Rupture Strength of Microparticles The raw data from a micromanipulation test is a series of voltage versus sample sequence data (V1, V2, . . . , Vn), where n is the number of voltage data points. A typical curve is shown in Figure 1. At the beginning, the voltage remains stable along the baseline as the probe moves in the air due to the initial gap between the probe and the microparticle. Then, it starts to increase at M when the probe begins to touch the particle. The voltage keeps rising until R and drops suddenly when the particle is ruptured. After that, the voltage rises again from G as the probe compresses the debris of the particle on the hard bottom surface, and stops at H when the voltage limit is reached or the movement is stopped manually. Point M is named the starting point and R the rupture point. The line segment BM is termed the "baseline". The main task of the data analysis is to identify the starting point M and rupture point R, from which the rupture strength parameters, including displacement at rupture δr, rupture force Fr, fractional deformation at rupture εr, nominal rupture stress σr, nominal rupture tension Tr, and toughness TC, can be calculated using Equations (1) to (6), where m is the starting point index, r is the rupture point index, Vr is the voltage corresponding to rupture, VB is the average voltage of the baseline, v is the compression speed, Ts is the sampling time, s is the sensitivity of the force transducer, c is the compliance of the force transducer, D is the initial diameter of the single microparticle, σ is the nominal stress, and ε is the fractional deformation. The force-displacement data can be obtained using Equations (7) and (8), where i (1 ≤ i ≤ n − m) is the index, F is the compression force, and δ is the displacement. Then, the nominal stress and fractional deformation can be calculated using Equations (9) and (10). In practice, the microparticle toughness in Equation (6) can be determined using the trapezoidal numerical integration in Equation (11).
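Since Equations (1) to (11) themselves are not reproduced in this extract, the following sketch should be read as an assumption: it computes force, displacement, nominal stress, fractional deformation, and toughness in forms consistent with the symbol definitions above (force as sensitivity times the voltage rise above the baseline, displacement as probe travel corrected by transducer compliance, stress as force over the projected area of a sphere of diameter D), not as the paper's verbatim formulas.

```python
import numpy as np

def force_displacement(V, m, VB, v, Ts, s, c):
    """Convert a voltage series to force-displacement data after the starting point m.

    Assumed forms (consistent with the definitions in the text, not verbatim):
      F_i = s * (V_{m+i} - VB)      # force from the voltage rise above the baseline
      d_i = v * Ts * i - c * F_i    # probe travel corrected for transducer compliance
    """
    V = np.asarray(V, dtype=float)
    idx = np.arange(1, len(V) - m + 1)              # i = 1 .. n - m
    F = s * (V[m:] - VB)                            # compression force
    delta = v * Ts * idx - c * F                    # displacement
    return F, delta

def rupture_parameters(F, delta, r_rel, D):
    """Rupture strength parameters from force-displacement data; r_rel is the rupture
    index relative to the starting point. Stress/tension forms are assumptions."""
    Fr = F[r_rel]                                   # rupture force
    dr = delta[r_rel]                               # displacement at rupture
    eps = delta[: r_rel + 1] / D                    # fractional deformation up to rupture
    sigma = 4.0 * F[: r_rel + 1] / (np.pi * D**2)   # nominal stress (force / projected area)
    Tr = Fr / (np.pi * D)                           # nominal rupture tension (force / perimeter)
    toughness = np.trapz(sigma, eps)                # trapezoidal integration, as in Equation (11)
    return Fr, dr, eps[-1], sigma[-1], Tr, toughness
```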
During the micromanipulation test, the voltage V(t) can be expressed as the sum of a term proportional to the true compression force F(t), through the transducer sensitivity s, and a random noise e(t). Before the onset of compression, F(t) is constant (zero); thus, V(t) and e(t) have the same distribution during this period. Assuming the distribution is a normal distribution (the most common distribution [44] for noise), according to the three-sigma rule [45], the probability (Pr) of V(t) falling away from the mean value (µ) of the baseline by more than three standard deviations (3σ) is at most 0.27%. Thus, if the voltage value at a point starts to deviate from the baseline mean value by three standard deviations, there is a high probability (99.73%) that the onset of compression has begun; i.e., the first point at which the voltage deviates from the baseline by three standard deviations can be located as the starting point. In practice, µ and σ can be estimated by the average (VB) and standard deviation (SB) of the voltage data of the baseline. A criterion is thereby obtained to determine the starting point: the starting point index m is the first index at which the voltage deviates from VB by more than 3SB (Inequation (14)). The flowchart of the "3σ" algorithm is illustrated in Figure 4. After initialization, the first z points of voltage (V1, V2, . . . , Vz) are taken from the raw voltage data series (V1, V2, . . . , Vn) as the baseline, from which the average VB and standard deviation SB are calculated. Then, the voltage data after z, (Vz+1, Vz+2, . . . , Vn), are searched for the first point at which Inequation (14) is satisfied, whereafter the algorithm stops. The value of z can be estimated from the compression speed, sampling time, and the initial gap between the probe and the particle. Usually, z = 20 is used, which is sufficiently accurate to determine VB and SB.
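As a concrete illustration, a minimal Python sketch of the "3σ" starting-point search described above might look as follows; the function and variable names are ours, and the original software package is not reproduced here.

```python
import numpy as np

def find_start_3sigma(V, z=20):
    """Locate the starting point index m with the plain "3-sigma" rule.

    The first z samples are treated as the baseline; m is the first later index
    whose voltage deviates from the baseline mean by more than three baseline
    standard deviations.
    """
    V = np.asarray(V, dtype=float)
    VB = V[:z].mean()          # baseline average
    SB = V[:z].std(ddof=1)     # baseline standard deviation
    above = np.abs(V[z:] - VB) > 3.0 * SB
    hits = np.flatnonzero(above)
    if hits.size == 0:
        raise ValueError("no point deviates from the baseline by more than 3 sigma")
    return z + hits[0], VB, SB
```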
"3σ + Window" Algorithm Normally, the "3σ" algorithm can locate the starting point successfully. However, if impulse noise exists, the starting point may be determined incorrectly, as illustrated in Figure 5: the point m1 rather than m2 will be misidentified as the starting point because of the impulse noise around m1. Although smoothing the raw data by filtering can deal with impulse noise, other key points such as the rupture value would be smoothed out as well. To tackle this problem, the "3σ" algorithm was modified by introducing a moving window with width w. A point m can be identified as the starting point only if all the points from it within the moving window fall away from VB by 3SB, which leads to a criterion in which the 3SB condition for all w points in the window is combined by the logical "and" Boolean operator (Λ). The width of the moving window w can be estimated as an integer corresponding to a percentage of the diameter of the microparticle. As some brittle capsules and biological cells may rupture at a fractional deformation as small as 0.06 [8], a percentage of 5% ensures that w is smaller than the rupture deformation of most microparticles. Thus, w can be estimated using Equation (17). "3σ + Window + Hertz" Algorithm The "3σ + window" algorithm can deal with most cases, including those with random noise and impulse noise. However, it may underestimate the displacement when the point whose voltage corresponds to three standard deviations of the baseline is simply chosen as the starting point. This results in a bigger value of the starting point index (m), so that the displacement will be underestimated, as it is related to the starting point by Equations (2), (7), and (8). The underestimation will be even worse when the signal-to-noise ratio is low. Following the same strategy described in [42], a mathematical model such as the Hertz model can be used to estimate the starting point (m) from the force-displacement data calculated using the "3σ + window" algorithm. For diametrical compression of purely linear elastic microspheres, the Hertz model [1] relates the force to the displacement through Equation (18), where E is the Young's modulus and υ is the Poisson's ratio. Assume the force and displacement obtained from the "3σ + window" algorithm are F′ and δ′, respectively, and the difference between the true displacement and the one obtained from the "3σ + window" algorithm is ∆δ; then, Equation (18) can be rewritten as Equation (19), which can be transformed into Equation (20), where k′ = 1/k^(2/3). Although the Hertz model is for purely linear elastic microspheres, it can be used to evaluate the true starting point by fitting it to the initial compression data, such as within 5% deformation of the force-displacement data [23], obtained using the "3σ + window" algorithm. The flowchart of the algorithm is illustrated in Figure 6. Firstly, a starting point index m′ is estimated using the "3σ + window" algorithm, and the force-displacement data series (F′1, δ′1), (F′2, δ′2), . . . are calculated using Equations (7) and (8). Then, the force-displacement data within 5% deformation, (F′1, δ′1), . . . , (F′q, δ′q), are fit using Equation (20), and thus ∆δ is obtained, from which the number of points to compensate, ∆m, is estimated by Equation (21) to compensate the starting point.
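A sketch of how the windowed criterion and the Hertz-based compensation could be implemented is shown below. The window-width formula and the Hertz fitting form are assumptions consistent with the description above (w spanning roughly 5% of the particle diameter; δ′ = k′·F′^(2/3) − Δδ); they are not copied from the paper, whose Equations (15) to (22) are not reproduced in this extract.

```python
import numpy as np
from scipy.optimize import curve_fit

def find_start_3sigma_window(V, VB, SB, z, w):
    """"3-sigma + window": m is the first index from which w consecutive points all
    deviate from the baseline mean VB by more than 3*SB (rejects isolated spikes)."""
    V = np.asarray(V, dtype=float)
    above = np.abs(V - VB) > 3.0 * SB
    for m in range(z, len(V) - w + 1):
        if above[m:m + w].all():
            return m
    raise ValueError("no starting point found")

def window_width(D, v, Ts, fraction=0.05):
    """Assumed form of Equation (17): number of samples spanning ~5% of the particle diameter D."""
    return max(1, int(round(fraction * D / (v * Ts))))

def hertz_compensation(F, delta, D, v, Ts, eps_limit=0.05):
    """Fit delta' = k' * F'**(2/3) - d_delta to data within eps_limit deformation, then
    convert the fitted offset d_delta into a number of samples by which the starting
    point should be shifted earlier (m = m' - dm), scaled by the goodness of fit."""
    mask = delta / D <= eps_limit
    model = lambda f, k1, d: k1 * f ** (2.0 / 3.0) - d
    (k1, d_delta), _ = curve_fit(model, F[mask], delta[mask], p0=(1.0, 0.0))
    fit = model(F[mask], k1, d_delta)
    ss_res = np.sum((delta[mask] - fit) ** 2)
    ss_tot = np.sum((delta[mask] - delta[mask].mean()) ** 2)
    cod = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0  # coefficient of determination
    dm = int(round(cod * d_delta / (v * Ts)))           # compensation scaled by goodness of fit
    return dm, cod
```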
CoD is often explained as the proportion of the variance in the dependent variable that is predictable from the independent variable [46]. It also indicates the extent to which the dependent variable is predictable by the fitting model. In our case, the CoD represents how well the Hertz model can represent the relationship between the force and displacement data up to 5% fractional deformation. A value of 1.0 indicates a perfect fit, whilst a value of 0.0 would indicate that the Hertz model fails to model the data. Multiplying ∆δ by CoD is expected to use only the predictable fraction of ∆δ to compensate the starting point. In other words, the compensated number ∆m is calculated not only from the ∆δ value estimated by the Hertz model but also from the "goodness" of the fit, i.e., how well the Hertz model can fit the data. In this way, Equation (21) adjusts the compensation extent automatically according to the goodness of fit (CoD), which makes the compensation algorithm intelligent. Figure 6. Flowchart of the "3σ + window + Hertz" algorithm. Finally, the index of the starting point can be obtained by Equation (22). Maximum-Deceleration Algorithm Normally, the voltage drops most dramatically just after the rupture point, so the rupture point can be identified by looking for the maximum deceleration through the voltage series. Practically, the deceleration at a point is calculated using the average (Vi+1 + Vi+2)/2 of the two following points rather than Vi+1 alone, which filters the data slightly to reduce the possible impact of random noise. The flowchart of the algorithm is illustrated in Figure 7. Initially, the drop (deceleration) series is calculated from the voltage series. Then, the point with the maximum drop (point p) is found, and the rupture point (r) is located as the peak point before p.
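The rupture-point search lends itself to a similarly small sketch. The exact drop formula is an assumption (difference between a point and the mean of the two following points); the idea of taking the peak just before the largest drop follows the description above.

```python
import numpy as np

def find_rupture_max_deceleration(V):
    """Locate the rupture point r as the peak immediately before the largest voltage drop.

    drop[i] compares V[i] with the average of the next two samples (a light filter
    against random noise); p is the index of the largest drop, and r is the last
    local maximum at or before p.
    """
    V = np.asarray(V, dtype=float)
    drop = V[:-2] - (V[1:-1] + V[2:]) / 2.0   # assumed form of the deceleration series
    p = int(np.argmax(drop))
    r = p
    while r > 0 and V[r - 1] >= V[r]:         # walk back to the peak before the drop
        r -= 1
    return r
```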
Performance of the Algorithms For the experimental raw voltage data of a microcapsule for self-sensing shown in Figure 8a, the starting points m1, m2, and m3 found by the "3σ", "3σ + window", and "3σ + window + Hertz" algorithms, respectively, are shown in Figure 9a. The diameter of the microcapsule was 87.6 µm. It can be seen that for this set of experimental data, the result of the "3σ" algorithm seems to underestimate the starting point because of the impulse noise, whilst the result of the "3σ + window" appears to overestimate the starting point. The starting point found by the "3σ + window + Hertz" algorithm looks more reasonable, as the voltage starts to increase around this point. However, when no impulse noise exists, the starting point values obtained from the "3σ" (m1) and "3σ + window" (m2) algorithms are the same. For instance, for the raw voltage data of a porous PS microsphere in Figure 8b, the starting point is m1 = m2 = 213, as shown in Figure 9b. In both cases, the rupture points are successfully identified by the maximum-deceleration algorithm. In the following analysis, the starting point (m3) was found by the "3σ + window + Hertz" algorithm, as it is more reasonable, as discussed above. The force-displacement data of the microcapsule in Figure 8a and the microsphere in Figure 8b were calculated using Equations (7) and (8), and their curves are shown in Figure 10a,b, respectively. It can be seen from Figure 10a that the force-displacement curve of the self-sensing microcapsule is not very smooth, with some local peaks before rupture that might be due to the roughness of the outer PUF layer [5,47]. The outer layer could crack several times before the rupture point shown in Figure 10a, where the inner shell was ruptured and the force dropped sharply. In contrast, the force-displacement curve of the PS microsphere is quite smooth before the rupture, as its shell was smooth [43]. However, the force at the rupture point did not drop as dramatically as for the self-sensing microcapsule, since there was no release of any material from the PS microsphere at the rupture point.
Figure 10. The force-displacement curves obtained using the starting point m3 and the rupture point determined automatically for the experimental data of the self-sensing microcapsule (a) and the PS microsphere (b) in Figure 8a,b, respectively. The nominal stress-fractional deformation data up to rupture of the PS microsphere in Figure 8b were calculated using Equations (9) and (10), and the curve is shown in Figure 11. The starting point was m3 in Figure 9b, found using the "3σ + window + Hertz" algorithm. The toughness of the particle was 1.15 MPa, calculated using the trapezoidal numerical integration in Equation (11), corresponding to the area under the curve up to rupture. The experimental data of 50 self-sensing microcapsules and 50 PS microspheres were analysed utilizing the developed software package, and a manual analysis was also carried out for comparison. The average and standard error of the calculated rupture strength parameters for the two samples are shown in Tables 2 and 3.
It appears that for the two samples, the average rupture force values from the automatic data analyses are all the same as those from the manual analysis, which shows that the "maximum-deceleration" algorithm is very robust in locating the rupture point. So are the average values of nominal rupture stress, nominal rupture tension, and toughness, as the former two parameters are calculated from the rupture force and the diameter of the microparticle. Although the toughness is related to the fractional deformation, which depends on the starting point, the force changes little around the starting point, so the effect of the initial integration of the nominal stress over the fractional deformation on the toughness value is negligible. Thus, the average values of the toughness from the four analyses show the same results. The values of the displacement at rupture from the "3σ + window" and "3σ + window + Hertz" algorithms overlap with the results from the manual analysis. Because of the presence of impulse noise, the values of displacement at rupture and deformation at rupture from the "3σ" algorithm appear to differ significantly from the manual analysis results. It was found that the starting points for nearly half (24/50) of the tested self-sensing microcapsules and for 17/50 of the porous PS microspheres were not correctly identified using the "3σ" algorithm. Figure 11. Nominal stress versus fractional deformation up to rupture of the PS microsphere in Figure 8b. The toughness corresponds to the area under the curve, i.e., the integration of the nominal rupture stress over the fractional deformation using Equation (11). The starting point M was found using the "3σ + window + Hertz" algorithm (m3 in Figure 8b).
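The comparisons above and below speak of results that do or do not differ significantly between the automatic and manual analyses, but this extract does not name the statistical test used. One purely illustrative check, with placeholder numbers rather than the paper's data, would be a paired comparison of the per-particle values:

```python
# Hypothetical significance check (the paper does not state which test was used):
# paired comparison of a rupture-strength parameter obtained automatically vs manually
# for the same 50 particles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual = rng.normal(0.85, 0.1, size=50)            # placeholder data, MPa
automatic = manual + rng.normal(0.0, 0.005, 50)    # placeholder data, MPa

t_stat, p_value = stats.ttest_rel(automatic, manual)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
# p >= 0.05 would be consistent with "no significant difference".
```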
Based on the data of these two samples, the results obtained using the "3σ + window" and "3σ + window + Hertz" algorithms show no significant difference from the manual results, so both can be used in the automatic analysis of the rupture strength of microparticles. Further Discussion From the mean values in Tables 2 and 3, it appears that the fractional deformation at rupture of the self-sensing microcapsules is quite large (nearly 50%) in comparison with that of the porous PS microspheres (just around 12%). This indicates that the self-sensing microcapsules with double PUF-PU shells showed a ductile failure behaviour, while the porous PS microspheres showed a brittle failure behaviour [8]. However, the nominal rupture stress of the former (0.85 MPa) is much smaller than that of the latter (26.4 MPa). The same holds for the toughness, as it is related to the nominal stress versus fractional deformation up to rupture. This may result from the large difference in particle size between the two samples, since the nominal rupture stress normally decreases with increasing particle diameter [48]. The values of the diameter for the self-sensing microcapsules and PS microspheres are 86.2 ± 3.1 µm and 11.1 ± 0.4 µm, respectively. Moreover, the nominal rupture tension of the self-sensing microcapsules (53.9 µN/µm) is also much smaller than that of the porous PS microspheres (227.9 µN/µm). This is reasonable, as the former had a liquid core surrounded by a solid shell with a thickness between 200 and 500 nm [5], whilst the latter were solid with a few pores on the surface [43]. The nominal rupture tension and the toughness versus diameter of individual microparticles in the two samples are illustrated in Figure 12. Statistical analysis of the data shows that the nominal rupture tension does not change significantly with diameter for either sample (Figure 12a,b), so it can be used to compare the mechanical strength between samples with particles of different sizes. In contrast, the toughness decreases with the diameter, which indicates that bigger particles were weaker than smaller ones (Figure 12c,d), similar to the nominal rupture stress [48]. Comparison with Other Algorithms The standard deviation was used in several algorithms to evaluate the noise level of the raw data and to help estimate the parameters of the algorithms to identify the CP for AFM force data [38,41,42,49]. A moving window was also introduced to help the identification of the CP [41,42]. However, it was only used for local regression rather than for dealing with the impulse noise addressed by the "3σ + window" algorithm. Besides, the width of the moving window needs to be set manually in the reported algorithms, whilst it is estimated automatically by Equation (17) in the "3σ + window" and "3σ + window + Hertz" algorithms developed in this work. Furthermore, the algorithm in [42] pre-estimates a CP* with a threshold of five standard deviations of the baseline values and then determines the CP by fitting force-displacement data to a Hertz-like model from CP* to an indentation depth empirically determined by the stiffness of the force curve. The "3σ + window + Hertz" algorithm also pre-estimates a provisional starting point m′, followed by regression of the force-displacement data within 5% fractional deformation to the Hertz model to determine the real starting point m. However, these two algorithms have two main differences.
One is that the "3σ + window" algorithm is used to estimate the provisional starting point in the "3σ + window + Hertz" algorithm, which deals well with impulse noise, whereas CP* is simply estimated with a threshold of five standard deviations of the baseline values [42], which may give a wrong value when impulse noise greater than the threshold exists before the real CP. The other difference is that, after the Hertz regression, the CoD is used in Equation (21) to adjust the degree of compensation automatically, so that when the tested material is not linearly elastic, fewer points will be compensated to m′. In contrast, the algorithm reported in [42] was designed for linear elastic materials and cannot adjust automatically for other mechanical behaviours of the tested materials. Besides, using the "maximum-deceleration" algorithm for the detection of the rupture point in this work requires no parameter to be adjusted and is fully automatic, which is advantageous. Conclusions In this study, a data analysis software package was developed to analyse the rupture strength of microparticles automatically from the experimental data of micromanipulation measurements. Three algorithms were developed to find the starting point of the compression data, i.e., the "3σ", "3σ + window" and "3σ + window + Hertz" algorithms. The "3σ" algorithm determines the starting point as the point where the voltage deviates from the mean of the baseline (VB) by three standard deviations (3SB), whilst in the "3σ + window" algorithm, a point is determined as the starting point only if the following w points (including this point) all deviate from VB by 3SB. In the "3σ + window + Hertz" algorithm, the starting point is further adjusted by fitting the force-displacement data corresponding to very small deformations (up to 5% fractional deformation) to the Hertz model, to compensate for the underestimation of the displacement associated with the three-standard-deviation threshold. One algorithm based on the maximum deceleration of the voltage series was developed to determine the rupture point.
The results show that the combination of the "3σ + window" or "3σ + window + Hertz" algorithm with the "maximum-deceleration" algorithm can produce results that are in excellent agreement with those obtained manually, and there is no significant difference between them. Moreover, all the developed algorithms work fully automatically without any parameter modification. For analysing 50 microparticles in a typical sample, the time spent on analysing the rupture strength parameters manually was from 2 to 3 h. In contrast, it took less than 20 min to analyse the same data automatically using the software package developed in this work. It is believed that this software package can also be used to analyse the force-displacement data obtained using conventional mechanical testing machines for macro-scale materials, which can have a wide range of applications.
Nomenclature
c   Compliance of the force transducer (m N−1)
CoD   Coefficient of determination
D   Initial diameter of the single microparticle (m)
e(t)   Random noise
E   Young's modulus (Pa)
F, Fi, F(t)   Compression force (N)
Fr   Rupture force (N)
F′   Estimated force in the "3σ + window + Hertz" algorithm (N)
(F′1, δ′1), (F′2, δ′2), . . . , (F′p, δ′p)   Estimated force-displacement series in the "3σ + window + Hertz" algorithm
k, k′   k′ = 1/k^(2/3), used in the "3σ + window + Hertz" algorithm
m   Starting point index
m1   Starting point index found by the "3σ" algorithm
m2   Starting point index found by the "3σ + window" algorithm
m3   Starting point index found by the "3σ + window + Hertz" algorithm
m′   Estimated starting point index in the "3σ + window + Hertz" algorithm
Vr   Voltage corresponding to rupture (V)
∆V, ∆Vi   Voltage deceleration in the "maximum-deceleration" algorithm (V)
w   Width of the moving window in the "3σ + window" algorithm
10,759
2022-05-01T00:00:00.000
[ "Engineering" ]
Effect of Sandblasting on Low and High-Cycle Fatigue Behaviour after Mechanical Cutting of a Twinning-Induced Plasticity Steel. In recent years, car bodies have increasingly been made with new advanced high-strength steels, for both lightweighting and safety purposes. Among these new steels, high-manganese or TWIP steels exhibit a promising combination of strength and toughness, arising from the austenitic structure, strengthened by C, and from the twinning-induced plasticity effect. Mechanical cutting such as punching or shearing is widely used for the manufacturing of car body components. This method is known to introduce pronounced plastic deformation and therefore causes a significant increase of mechanical stress and micro-hardness in the zone adjacent to the cut edge. To improve the cut edge quality, surface treatments, such as sandblasting, are often used. This surface treatment generates a compressive residual stress layer in the subsurface region. The monotonic tensile properties and deformation mechanisms of these steels have been extensively studied, as well as the effect of grain size and distribution and chemical composition on fatigue behaviour; however, there is little documentation on the fatigue performance of these steels cut using different strategies. Thus, the aim of this work is to analyse the fatigue behaviour of a TWIP steel after mechanical cutting, with and without sandblasting, in the Low and High-Cycle Fatigue regimes. The fatigue behaviour has been determined at room temperature with tensile samples tested with a load ratio of 0.1 and load amplitude control to analyse High-Cycle Fatigue behaviour, and a load ratio of -1 and strain amplitude control to determine the Low-Cycle Fatigue behaviour. Samples were cut by shearing with a clearance value of 5%. Afterwards, some of the cut specimens were manually blasted using glass microspheres of 40 to 95 microns in diameter as the abrasive medium. The results show a beneficial effect of the sandblasting process on fatigue behaviour in both regimes, load amplitude control (HCF) and strain amplitude control (LCF) tests, when these magnitudes are low, while no significant differences are observed with higher amplitudes. Introduction Nowadays, lightweight construction is one of the major concerns in the automotive industry to meet the increasing demands regarding environmental and safety-related issues. A large part of body-in-white (BiW) components is currently manufactured using ultra high-strength steels (UHSS), with fracture strengths from 1000 MPa upwards. The cold forming of pieces of complex geometry is not an easy task with this type of steel; thus, there has been growing interest in High Mn Steels (HMnS). The main characteristic of these steels is high Ultimate Tensile Strength (UTS) values, around 1000 MPa, combined with high ductility, above 50%, and a high strain-hardening coefficient. This combination of properties is the result of the activation of specific deformation mechanisms such as TRIP and/or TWIP effects. Currently, the industrialization of the production of HMnS and the reliability of their in-service properties have led to the first applications of HMnS for automotive components. Despite this, there are still aspects related to the performance of parts produced from HMnS that need further evaluation in order to extend the implementation of these steels for industrial applications.
In particular, there are still some concerns regarding the fatigue performance of components' cutting zones and the methodologies to improve it. Regarding fatigue, failures are mainly triggered by pre-existing defects at the surface or inside the material. Internal defects are usually associated with non-metallic inclusions generated during steel manufacturing. Surface defects can be produced during cold rolling or part processing. Among the defects produced during forming, trimming produces surface irregularities at the cut edge, which may act as preferential sites for fatigue crack initiation and propagation. Several authors have assessed the effect of cutting technologies on fatigue properties for different sheet steel grades: carbon and micro-alloyed steels [1]; UHSS with UTS up to 1400 MPa [2]; and deep drawing, high-strength dual phase, austenitic and duplex stainless steels [3]. In all cases, it is concluded that the cut edge reduces the fatigue strength of the metal because of the roughness generated by the cutting process. It was also observed that the magnitude of the clearance was of minor importance, tool wear being more critical. When the effect of the different cutting methods is analysed, the specimens with milled edges showed a higher fatigue limit than those with mechanically cut edges. These authors also concluded that there were no significant differences between milled and water-jet cutting, while the poorest fatigue behaviour was observed in laser-cut specimens. Thus, the main conclusion of this and other works [4,5] is that lower-strength steels were less sensitive to the cutting method than the higher-strength steels and that, in most cases, shearing produced lower fatigue strengths than other cutting methods due to the presence of larger surface defects at the edges. Regarding TWIP steels, there is relatively little work reported on their fatigue behaviour. High-cycle fatigue [6,7], as well as low-cycle and extremely low-cycle fatigue life [8], have been evaluated in some works. In general, the fatigue limit of TWIP steels correlates with their high ultimate tensile strength, with a ratio of fatigue limit to tensile strength around 0.4, similar to that of stainless steels, but far from that obtained with other UHSS steels such as Dual or Complex Phase steels, around 0.5 or even higher [4,9]. There is not much information on the effect of the cut edge on the fatigue life of these steels. Only Mateo et al. [10] evaluated the effect of laser cutting on the fatigue behaviour of two metastable steels, one of which was a TWIP steel, applying the staircase method. According to these results, TWIP steel is less sensitive than stainless steel to defects at the cut edge. However, no studies have been found on how mechanical cutting affects the fatigue behaviour of this type of steel. On the other hand, there are some works that report the negative effect of mechanical cutting on forming and post-forming operations, such as hole expansion, due to the high amount of deformation introduced at the hole edge [11]. Thus, it is expected that this high degree of deformation introduced at the cutting edge could negatively affect fatigue behaviour. In this sense, it has been demonstrated that some surface treatments can improve fatigue behaviour through the retardation of fatigue crack initiation or propagation.
These treatments, such as sandblasting or shot peening, are effective methods to enhance the fatigue properties by inducing a compressive residual stress in the subsurface of the material [12,13]. In sandblasting, the sample surface is blasted repeatedly by high-speed sand particles, leading to reduced surface roughness, removal of surface scale, and local plastic deformation in the surface layer. In addition, a compressive residual stress layer is formed in the subsurface region. Microstructural changes due to sandblasting have also been reported in austenitic steels [14]. The aim of the present work is to increase knowledge about the effect of mechanical cutting on the fatigue performance of High Mn Steels (HMnS) and to propose industrial treatments to improve their fatigue behaviour. To achieve these objectives, after mechanical cutting, some of the cut specimens were manually blasted using glass microspheres of 40 to 95 microns in diameter as the abrasive medium. Materials with polished cut edges have been used as a reference in this study. Materials Two grades of TWIP steel were analysed: TWIP1.5 and TWIP3.0. The chemical compositions of these steels are shown in table 1 and table 2. Samples of these steels were polished and etched following standard methods. Their microstructures can be seen in figure 1 and figure 2. As seen in these figures, the two TWIP steels show a homogeneous austenitic matrix. The grain size is the only notable difference. Conventional axial tensile tests were performed according to EN-ISO6892-1 with the specimens oriented transverse to the rolling direction. Table 3 shows the results. As would be expected, Table 3 shows a higher Ultimate Tensile Strength (UTS) and Yield Strength (YS) with decreasing grain size, whereas the uniform elongation and total elongation become smaller. Experimental procedure In order to evaluate the influence of the cutting process on the fatigue behaviour of these steels, the calibrated zone of the specimens was cut using different strategies: -Spark erosion (EDM) and polishing, with a first grinding step and polishing with two types of abrasive pastes to obtain clean and shiny surfaces, identified as the [REF] condition. -Shearing using a cutting clearance of 5% of the sheet thickness and a punch radius of 30 microns, identified as the [CUT] condition. -Shearing as described above followed by manual blasting using glass microspheres, identified as the [SAN] condition. Figure 3 shows the tool employed for cutting part of the specimens tested. The geometry of the edges was evaluated by cutting a specimen of each strategy in cross-section and analysing it by optical microscopy (figure 4 and figure 5). Vickers micro-hardness profiles were also measured to assess the effect of each cutting methodology and are shown in figure 6. Residual stresses were also evaluated on the surface of the samples by means of instrumented indentation using the methodology developed by Wang and Bao [15,16]. To perform the indentation tests, a Berkovich indenter was used with an applied penetration depth of 10 μm. The results for residual stresses, shown in figure 7, correspond to an average value from at least 20 indentations. Regarding fatigue behaviour, tests were conducted in a dual-column servo-hydraulic testing machine. Low Cycle Fatigue (LCF) tests, to obtain εN curves, were performed according to the ISO12106 standard [17] using an axial contact extensometer. Total strain versus number of cycles curves were then used to determine the Basquin and Coffin-Manson relationships.
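For reference, the Basquin and Coffin-Manson relationships mentioned above are conventionally written in the standard forms below; the fitted constants for these TWIP grades are not given in this extract, so these are the generic expressions, not the authors' fitted curves.

```latex
% Basquin (elastic) and Coffin-Manson (plastic) strain-life relations, standard forms:
\frac{\Delta\varepsilon_e}{2} = \frac{\sigma'_f}{E}\,(2N_f)^{b}, \qquad
\frac{\Delta\varepsilon_p}{2} = \varepsilon'_f\,(2N_f)^{c},
% so the total strain amplitude is
\frac{\Delta\varepsilon_t}{2} = \frac{\sigma'_f}{E}\,(2N_f)^{b} + \varepsilon'_f\,(2N_f)^{c}
```

Here σ'f and b are the fatigue strength coefficient and exponent, ε'f and c the fatigue ductility coefficient and exponent, E the Young's modulus, and 2Nf the number of reversals to failure.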
High Cycle Fatigue (HCF) tests, to obtain SN curves up to 2×10^6 cycles, were performed according to the ISO1099 standard [18], using a load ratio (R) of 0.1. All tests were performed at room temperature. After these tests, some fracture surfaces were inspected by means of field emission scanning electron microscopy (FE-SEM). Results Sheet cut edges present different geometries depending on the cutting methodology employed. As shown in figures 4 and 5, polished samples present rounded edges without asperities, as expected (figures 4(a) and 5(a)). Sheared samples show the typical features of a sheared edge, with rollover, smooth and fracture zones, and burr, in the two materials (figures 4(b) and 5(b)). Once samples are submitted to the sandblasting operation, the roughness of the cut edge is reduced and the burr removed. The micro-hardness profiles shown in figure 6 show that the shearing process increases hardness near the cut edge, an increase that is not observed in polished specimens, for which the cut was done by EDM. The cut-hardened area extends deeper in the thicker steel, while no clear differences in hardness are observed between the sheared samples with and without the sandblasting treatment. As can be seen in figure 7, the sandblasting process induces high compressive stresses, measurable by instrumented indentation even at a penetration depth of 10 microns. The magnitude of these compressive stresses does not differ much from those reported by other authors who performed shot peening on different TWIP steels, although the depth of the compressive layer was greater [19]: 100 to 300 microns for shot peening versus 25 to 75 microns for sandblasting [20]. The main strain- and stress-life fatigue results are summarized in figures 8 and 9, in terms of the strain and stress amplitudes leading to 50% survival probability (experimental values). Fig. 8. εN curves obtained with TWIP steels. As shown in figures 8 and 9, fatigue performance depends on the cutting strategy employed. At both low and high numbers of cycles (LCF and HCF tests), the strain and stress that can be applied for a given number of cycles are higher for polished specimens than for sheared ones. Concerning sandblasted specimens, the effect of this surface treatment on the fatigue performance depends on the applied stress level. Thus, the sandblasting process significantly increases the strain- and stress-life behaviour when the stress level is low (low strain in LCF tests and low stress in HCF tests). On the contrary, at higher stress levels (high strain in LCF tests and high stress in HCF tests), the sandblasting treatment does not introduce any noticeable improvement in the fatigue behaviour of this type of steel, since the fatigue results of sheared specimens are similar with and without sandblasting at high stress levels. These results may be explained as follows: sandblasting of sheared specimens generates a compressive residual stress layer which is expected to influence the fatigue life. At low cyclic stresses, the favourable effect of the compressive residual stress layer results in life enhancement. As the cyclic stress increases, the residual stresses will relax quickly and the extent of life enhancement will decrease, as some authors support [21].
Discussion Fatigue properties are extremely sensitive to the presence of pre-existing surface defects in the material, such as non-metallic inclusions, irregularities, asperities or even cracks originating during cutting or forming processes, because the crack nucleation step is then shortened, or even removed. It is well accepted that materials with high ductility, or toughness, such as TWIP steels, have a high crack propagation resistance, and their fatigue life is mainly spent in the crack growth regime. Thus, tough materials are usually referred to as highly defect tolerant, since the crack nucleation step does not control the fatigue life. On the contrary, high strength materials have a low crack propagation resistance because of their limited toughness. In that case, crack nucleation may take an important part of the total fatigue life, because the fatigue crack growth rate is relatively high. Thus, high strength materials are more sensitive to pre-existing defects, and they are usually referred to as low defect tolerant materials. To rationalize the origin of failure in each case, a detailed SEM investigation of one specimen of each cutting strategy was carried out. These SEM fractographies are shown in figures 10 to 12 for TWIP1.5 and figures 13 to 15 for TWIP3.0. The fatigue failure surfaces in most cases show a failure origin independent of the cutting strategy, with no clear differences between the polished, sheared and sandblasted specimens. In most of the specimen surfaces analysed by SEM, the failure origin occurs at non-metallic inclusions located near the edge of the specimens. Thus, as discussed above, the important differences in fatigue performance observed are not conditioned by the nucleation stage, but by the propagation stage. These non-metallic inclusions, responsible for crack nucleation, are probably the reason why the fatigue performance of these two steels is not as high as expected. To determine the nature of these non-metallic precipitates, an EDX analysis was performed, showing that most of them were aluminium nitrides. Regarding the differences in fatigue performance between the cutting strategies, and considering, as discussed above, that they are motivated by different crack propagation rates in each strategy, it seems clear that for the specimens sheared and then sandblasted this difference in crack propagation rate is due to the compressive stresses induced by sandblasting, as shown in figure 7. These surface compressive stresses require greater tensile stresses to propagate the crack, which shifts the strain- and stress-life curves to the right. As said before, this improvement is only observed at low strain or stress levels, since if the cyclic stress increases, the residual stresses relax quickly and the enhancement in fatigue behaviour does not take place. Differences between polished and sheared samples, and also with respect to sandblasted specimens tested at high strains or stresses, cannot be explained by differences in residual stresses, since, as observed with nanoindentation in figure 7, they are practically the same. In this case, the change in crack propagation rate has to be caused by the microstructural change generated by mechanical cutting. Thus, in the specimens cut by spark erosion and then polished [REF], no deformation was introduced during the cutting process; therefore the original microstructure, which is tougher, more tolerant to defects and has a high crack propagation resistance, governs the behaviour.
In contrast, many works report that during shearing of the specimens a high amount of deformation is introduced at the cut edge [11]. Thus, the microstructure in this region cannot tolerate much further deformation and behaves like a material with limited toughness and low crack propagation resistance. Conclusions Based on the experimental fatigue tests, surface analysis and fractographic observations, the following conclusions can be drawn: -The fatigue behaviour of the TWIP steels analysed in this work is mainly governed by aluminium nitride non-metallic inclusions located near the surface. This is probably the reason why the fatigue performance of these two steels is not as high as would be expected. -The presence of these defects near the surface makes the crack propagation step the determining one in the fatigue behaviour of these materials. Thus, to enhance fatigue performance it is necessary to reduce the crack propagation rate. -As seen above, one strategy to reduce the crack propagation rate is to use a cutting methodology which does not introduce a high amount of deformation into the microstructure, since such deformation reduces toughness and makes the steel less defect tolerant. -Another strategy to reduce the crack propagation rate is the introduction of compressive residual stresses, which require greater applied tensile stresses to propagate the crack. The sandblasting process has proven to be efficient for this.
4,054.4
2018-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
The Potential of Cylindrical Piezoelectric Transducers for High-Frequency Acoustic Energy Harvesting : The work presented in this paper studies the potential of cylindrical piezoelectric transducers for harvesting high-frequency acoustic energy. The cylinder was made of a modified PZT (lead zirconate titanate) and had the shape of a square cylinder with a side length of 4 cm and a wall thickness of 1 mm. The study used open-circuit measurements to study the relationship between the sound wavelength and the cylinder size and its effect on the performance of energy harvesting. The cylinder was found to give the best performance at a frequency of 20 kHz. In addition to open-circuit measurements, closed-circuit measurements were performed to demonstrate the ability to dissipate energy harvested from 20 kHz sound waves across an electric load. The load was designed in a series of experimental steps that aimed at optimizing an impedance-matched energy harvester. Finally, the cylinder was tested at the optimized load conditions, and it was possible to harvest and store energy with a power of 67.6 µW and a harvesting efficiency of 86.1%. Introduction Currently, the environment is facing critical challenges that threaten numerous life forms on the planet. These challenges come as a result of the energy-related activities adopted by mankind over the last few decades. Therefore, there has been a growing effort to push energy systems towards sustainability and clean energy sources. Among the forms of this effort is the development of energy harvesters. Energy harvesters are devices that can absorb the energy wasted in the surroundings and change it to more useful forms [1,2]. These harvesters may rely-among other types of materials-on piezoelectric materials to change the captured energy from one form to another. Piezoelectric materials are materials that can develop electric charges on their surfaces in response to applied mechanical pressure [3,4]. Hence, they have acquired the name "piezoelectric" from the Greek language, where the word "piezo" means to push [5]. Piezoelectric materials were discovered in 1880 by the brothers Curie in naturally existing materials [6,7]. The ability of piezoelectric materials to interchange electric and mechanical energy is a result of their material structure, which allows the motion of electrons upon being pressed. When this structure is subjected to mechanical pressure, electrons will be displaced, creating polarization between the negatively charged electron cloud and the positively charged nucleus. This polarization eventually forms an electric field [8,9]. It is necessary to mention that the direction of the applied pressure relative to the material structure is significant. Therefore, piezoelectric materials may be used in two modes: mode 33 and mode 31. As shown in Figure 1a, mode 33 is the mode when the applied pressure is parallel to the material poling direction, while mode 31 is the mode when the pressure is applied perpendicular to the poling direction, as shown in Figure 1b [10]. Hassan et al.
[11] compared different transduction technologies based on energy density-as energy converted per unit volume of the transducer-showing that piezoelectric transduction is the technology with the highest efficiency and the smallest size. Such facts make piezoelectric materials a favorable option during the process of designing energy harvesters, where they have been used in several applications of energy harvesting. For example, research has been conducted on the use of piezoelectric transducers to harvest the energy of flow-induced vibrations. These vibrations are induced by putting a cylinder-a non-piezoelectric one-across the flow of a fluid. Being interrupted by the cylinder, the flow moves around the cylinder, inducing a wake region from which a vortex train is shed [12]. Gao et al. [13] developed a harvester of flow energy based on a piezoelectric cantilever with a cylindrical extension. The harvester managed to continuously operate an electronic thermometer chip MCP9700 using an airflow of velocity of 5.2 m/s. In addition to harvesting flow energy, piezoelectric materials have been used to harvest acoustic energy. Examples of piezoelectric energy harvesters shall be presented later in Section 4.2. To assess the quality by which an energy harvester can harvest vibration energy, it is necessary to define a qualitative measure that would describe such a process. Mostly, researchers tend to use efficiency, η, which is the ratio of the output of useful electric energy to the input of mechanical vibration energy. This number stands as the resultant final efficiency of the overall harvester, taking into account all the factors governing its performance such as the electromechanical coupling factor of the material, the geometry of the transducer, the inherent mechanical damping in the system, and the electrical damping characteristics of the load as well as of the whole circuit. Despite how simple the definition of efficiency is, it is quite challenging to agree on a strict way to evaluate efficiency because of the practical constraints on measuring the input energy. Therefore, large discrepancies can be found in the values of efficiency reported in the literature, where it can be estimated to be as high as 80% by some researchers, while others claim that it can never be more than 50%. Yang et al. [14] presented an overview of work conducted studying the efficiency by which piezoelectric energy harvesters convert energy. They reported that PZT (lead zirconate titanate)-based energy harvesters harvesting vibrational energy had the highest theoretical efficiency, in the range of 50-90%. Richards et al. [15] derived a formula to calculate the efficiency in the case of a system operating at the resonance frequency and matched the impedance with the load. They considered systems of low damping and high mechanical coupling. Their theoretical study predicted that efficiency may be higher than 90%.
However, high values of coupling are rarely achieved experimentally to justify this result. Shu et al. [16] also devised a formula for the calculation of efficiency that was based on the material coupling coefficient, mechanical damping, and normalized values of the harvested sound frequency as well as of the electric resistance. A maximum efficiency of 46% was reported for harvesters of weak coupling, while a maximum efficiency of more than 80% was reported for harvesters of strong coupling. Liao et al. [17] have proposed a different way to assess the performance of piezoelectric energy harvesters, where they derive a new efficiency that is analogous to the material loss factor. This efficiency is calculated as the ratio between the strain energy over each cycle and the power output. They validated this new quantity by numerical simulations, and it was estimated that harvesters would have a maximum efficiency of 2.5% around the resonance frequency. Yuan et al. [18] also compared various designs of acoustic energy harvesters reported in the literature.
They did not base their comparison on the efficiency of energy conversion of these harvesters; instead, they derived their own parameter and called it "Metric". Their parameter is set to give a higher score for a smaller harvester that is able to harvest more power at a lower input sound power. Therefore, they devised their Metric according to Equation (1), where it is set to be the ratio between the harvested power and the product of the harvester volume and the square of the involved sound pressure, Metric = P/(Vol·Pre^2), where P, Vol, and Pre are the harvested power in µW, the harvester volume in cm^3, and the involved sound pressure in Pascals, respectively. Their motivation behind using the squared value of sound pressure is that sound intensity is proportional to the square of sound pressure, as shown in Equation (2), I = Pre^2/Z_acoustic, which relates the sound intensity I to the sound pressure Pre via the acoustic impedance Z_acoustic. Using their parameter (i.e., Metric), the authors rated seventeen different designs of acoustic energy harvesters from the literature. The harvesters' operating conditions-sound frequency and pressure-spanned a wide range. The list included harvesters operating at sound frequencies in the range of 146-13,570 Hz, with most of the list targeting frequencies under 1 kHz. On the other hand, the operating sound pressure ranged over 1-563.7 Pa, which corresponds to an SPL (sound pressure level) of 94-149 dB. Sixteen designs out of the presented seventeen achieved a value of Metric in the range of 7.7228x10^-12 to 0.1885 µW/(cm^3·Pa^2), while the last design achieved 14.536 µW/(cm^3·Pa^2).
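As an illustration of how Equations (1) and (2) are applied, the sketch below computes the Metric and the corresponding intensity for placeholder numbers. The harvested power, volume and pressure are invented for the example, and the specific acoustic impedance of air (~413 Pa·s/m at room temperature) is an assumed textbook value, not a figure taken from the paper.

```python
# Minimal sketch of Equation (1) (the "Metric") and Equation (2) (intensity from
# pressure). The power, volume and pressure below are illustrative placeholders;
# the specific acoustic impedance of air (~413 Pa*s/m) is an assumed textbook value.
def metric(power_uW, volume_cm3, pressure_Pa):
    """Metric = harvested power / (harvester volume * sound pressure squared)."""
    return power_uW / (volume_cm3 * pressure_Pa**2)

def intensity(pressure_Pa, z_acoustic=413.0):
    """Equation (2): I = Pre^2 / Z_acoustic."""
    return pressure_Pa**2 / z_acoustic

p_harvested = 50.0   # uW (placeholder)
vol = 5.0            # cm^3 (placeholder)
pre = 2.5            # Pa, roughly a 102 dB sound pressure level
print(f"Metric    = {metric(p_harvested, vol, pre):.4f} uW/(cm^3*Pa^2)")
print(f"Intensity = {intensity(pre) * 1e3:.2f} mW/m^2")
```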
The goal of this work was to study the potential of cylindrical piezoelectric transducers for the application of high-frequency acoustic energy harvesting. Piezoelectric transducers have already been used in many shapes such as circular disks, plates, annular disks, and bending elements. Considering planar harvesters, such as plate-based and ring-based designs, the harvester's dimension that lies in the direction of acoustic wave propagation is the transducer thickness. Since this thickness is much smaller than the wavelength of the acoustic wave, the waves apply almost equal pressure on the two sides of the transducer, which may reduce the transducer output [19]. However, a cylindrical geometry might have advantages over planar geometries due to the bigger dimension-diameter as opposed to thickness-in the direction of the acoustic wave propagation. Moreover, this work targets noise in the high-frequency range up to 20 kHz. According to a study presented by Fletcher et al. [20], noises with a sound pressure level of 100 dB and a frequency near 20 kHz often exist in public places. Interestingly, a hand drier can produce noise with an SPL of 84 dB at 40 kHz. The cylinder under study was made of PZT, which is a suitable material for the application of energy harvesting thanks to its high piezoelectric constant [21]. The electrical impedance of the cylinder was already studied over a wide frequency spectrum in [22], and it was found that the cylinder impedance tended to drop as the frequency increased, with this drop happening almost exponentially in the acoustic spectrum. Moreover, it was found that the cylinder has a capacitive nature. The work in this paper can be divided into two parts. The first part studied the effect of the sound frequency and, hence, the sound wavelength on the energy harvesting process. Then, the second part demonstrates the operation of the cylinder in a complete energy harvester, harvesting sound waves with a frequency of 20 kHz. Materials The piezoelectric transducer used was made of PZT-5A, which is a modified lead zirconate titanate. According to the manufacturer, the material had the piezo constants shown in Table 1, with a piezoelectric voltage constant of −11.5 kV.m/N and 22 kV.m/N in the 31 and 33 directions, respectively, and a mechanical quality factor of 100. It also had resonance frequency constants in the radial as well as the thickness modes, both of which were 1950 Hz.m. Moreover, the cylinder had a square shape with a diameter, D_cyl, of 40 mm and a wall thickness, t, of 1 mm, as summarized in Table 2. Using these data, it was possible to calculate the resonance frequencies of the various vibration modes of the cylinder. For the setup used in this paper, the vibration modes of interest were the radial and the thickness modes, with resonance frequencies defined as F_rp and F_rt, respectively. Using Equations (3) and (4) (the frequency constant divided by the governing dimension, i.e., F_rp = N/D_cyl and F_rt = N/t), it was possible to calculate F_rp and F_rt to be 48.75 kHz and 1950 kHz, respectively, as summarized in Table 3.
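The resonance-frequency estimates quoted above can be reproduced with a few lines, assuming that Equations (3) and (4) amount to dividing the frequency constant by the governing dimension (diameter for the radial mode, wall thickness for the thickness mode):

```python
# Resonance-frequency estimates behind Equations (3) and (4), assuming each mode's
# resonance frequency equals the frequency constant divided by the governing
# dimension. This reproduces the 48.75 kHz and 1950 kHz values quoted above.
N_FREQ_CONST = 1950.0   # frequency constant, Hz*m (manufacturer data)
D_CYL = 0.040           # cylinder diameter, m
T_WALL = 0.001          # wall thickness, m

f_radial = N_FREQ_CONST / D_CYL       # radial-mode resonance F_rp, Hz
f_thickness = N_FREQ_CONST / T_WALL   # thickness-mode resonance F_rt, Hz
print(f"F_rp = {f_radial / 1e3:.2f} kHz")     # ~48.75 kHz
print(f"F_rt = {f_thickness / 1e3:.0f} kHz")  # ~1950 kHz
```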
Description of the Experimental Setup As shown in Figure 2, the experimental setup was built such that it was composed of speakers (1) supplying acoustic waves of controlled frequency onto the cylinder (2). To do so, aluminum profiles were assembled to form a crane (3) from which the cylinder could be hung. The cylinder was hung from the crane using a malleable metallic strap (4) covered with a plastic cover (5). The cover was intended to have two functions: the first was to electrically insulate the cylinder from the conductive aluminum frame, while the second was to protect the cylinder's surface from scratches that might develop from metal-to-metal contact with the strap. Such a measure was necessary as such cracks may degrade the piezoelectric material. After supporting the cylinder from the frame, the speakers were placed such that sound waves would be applied radially onto the cylinder. The main consideration behind the design of this setup was to support the cylinder from a single point to avoid damping the vibration of the cylinder. Moreover, the design aimed to minimize the effect of sound reflection. This was achieved by minimizing the number of objects-including the ground-that were in the vicinity of the speakers. Figure 2. The setup of the experiment was constructed such that the speakers (1) supplied sound waves of controlled frequency onto the piezoelectric cylinder (2). The cylinder (2) was supported from a crane (3) using a metallic strap (4) that was insulated with a plastic cover (5). The cylinder had a diameter of 40 mm and was placed 10 mm away from the speakers-all dimensions are expressed in mm. The cylinder was connected to a breadboard by a pair of electric wires that were soldered to the inner and outer surfaces of the cylinder. On the breadboard, four BAT 48 Schottky diodes were mounted, forming a full-wave rectifier as shown in Figure 3, where the cylinder was modeled as an AC source. The BAT 48 diodes were produced by STMicroelectronics and were preferred specifically because of their relatively low forward voltage, V_f, of 240 mV. Hence, it was determined that a total voltage drop of 480 mV would be imposed on the output of this circuit. Methods The investigation was conducted in two phases. In Phase (1), the effect of sound frequency on the energy harvesting performance was studied. In this phase, the cylinder underwent measurements-defined as Case (0)-at different sound frequencies aimed at defining the optimum frequency for energy harvesting. Then, Phase (2) sought to demonstrate the ability to use the cylinder as a transducer in an acoustic energy harvester.
The design of the harvester targeted impedance matching at the optimum frequency found in Phase (1). The harvester was designed experimentally in two steps, defined later as Case (1) and Case (2). For a better understanding of the process, Figure 4 summarizes the procedure followed in the two phases, where f, L, and C are the sound frequency, the circuit inductance, and the circuit capacitance, respectively, while P_electric and P_acoustic are the involved electric and acoustic powers, respectively. (1) In Phase (1), the cylinder was subjected to sound waves of different frequencies while the voltage generated over the bridge poles was monitored. The experiment was set up as described above, such that the distance between the speakers and the cylinder was almost 1 cm. A frequency sweep was performed from a frequency of 20 kHz, in steps of 1 kHz, down to a frequency of 1 kHz. Then, a step of 200 Hz was used from 1000 Hz down to 200 Hz. In case a peak point was found, additional measurements were performed in a range of 1 kHz centered at the found peak, with a step between measurements of 200 Hz. At each frequency, the generated voltage across the circuit was measured using a UT-81A oscilloscope, as shown in Figure 5. The significance of Phase (1) was that it helped define an optimum frequency for maximized energy harvesting for this specific cylinder. It will be shown later in Section 3 (Results) that this frequency was found to be 20 kHz. (2) In Phase (2), an energy harvester was constructed respecting the maximum power transfer theorem. According to this theorem, a power source will transfer the maximum amount of power to its load when the source and the load are of conjugated impedances, i.e., matched impedance [16]. The harvester was optimized for sound waves with a frequency of 20 kHz, which were found to provide the best performance in Phase (1). Knowing that the cylinder impedance at a frequency of 20 kHz is 1.93 − 50.7j Ω [22], the load can be designed using Equations (5)-(7), Z_t = R + Z_l + Z_c with Z_l = j·2πfL and Z_c = 1/(j·2πfC), to have an electric impedance of 1.93 + 50.7j Ω, where Z_t, R, Z_l and Z_c refer to the total series impedance, the resistor, and the impedances of an inductor and a capacitor, respectively, while f, L, and C refer to the signal frequency, the inductance of the inductor, and the capacitance of the capacitor, respectively. Therefore, the load will use a resistor of value 2 Ω.
However, it was challenging to define the inductance and the capacitance, since their impedance depends mainly on the frequency of the signal flowing in the load. Since the output signal of the source circuit has passed through a full-wave rectifier bridge, the signal is ideally expected to have a frequency that is double the sound frequency. However, it was clear from the patterns obtained from the oscilloscope during the open-circuit experiment that the rectification did not occur with 100% efficiency. So, it was not clear which frequency to consider: 20 or 40 kHz. To solve this question, the problem was divided into two steps: Case (1) and Case (2). In the first step, two designs of the load were considered. The load in each case was composed of the 2 Ω resistor, an inductor, and a capacitor. Design (1A) was to achieve impedance matching considering a sound frequency of 20 kHz, while Design (1B) considered a frequency of 40 kHz. Since the influence of an inductor on the total value of impedance was larger than the capacitor's, according to Equations (5)-(7), it was decided to vary the inductance value while keeping the capacitance constant at a value of 220 µF. Hence, it was possible to choose an inductance value of 440 µH for Case (1A) and 220 µH for Case (1B) using Equations (5)-(7). Table 4 summarizes the details of the two circuits, while Figure 6 shows the schematic diagrams for Cases (1A) and (1B). After assembling the electric circuit for each case, sound waves with a frequency of 20 kHz were applied onto the cylinder using the same settings discussed in the open-circuit measurement. The measured parameter, however, was the overall voltage, V, generated across the load-the R-L-C branch. The measurement was performed until the measured voltage flattened over time. It will be seen later, in Section 3 (Results), that Case (1A) achieved better results. Therefore, the inductance of 440 µH was used in the next step, considering the frequency to be 20 kHz. Then, in the next step, which was defined as Case (2), the capacitor value was varied while fixing the inductance value. This was performed to show the effect of achieving impedance matching on harvesting performance. Three different values of capacitance-2, 10, and 1 µF-were chosen using Equations (5)-(7) to achieve balanced impedance across the whole circuit. The data of the impedance values are summarized in Table 5. After constructing the circuit, the experiment was conducted using the same procedure that was followed in Case (1), where sound waves with a frequency of 20 kHz were applied to the cylinder.
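A minimal sketch of the load-design check implied by Equations (5)-(7) is given below: the series R-L-C impedance at the assumed signal frequency is compared with the conjugate of the cylinder impedance reported at 20 kHz (1.93 − 50.7j Ω, so the matching target is 1.93 + 50.7j Ω). The component values are the nominal ones discussed above, evaluated at the signal frequency each design assumed; real components would add tolerances.

```python
# Load-design check implied by Equations (5)-(7): the series R-L-C impedance at the
# assumed signal frequency is compared with the conjugate of the cylinder impedance
# at 20 kHz (1.93 - 50.7j ohm, so the matching target is 1.93 + 50.7j ohm).
# Component values are the nominal ones discussed above.
import math

def series_rlc_impedance(f_hz, r_ohm, l_h, c_f):
    """Z_t = R + j*(2*pi*f*L - 1/(2*pi*f*C)) for a series R-L-C branch."""
    x = 2 * math.pi * f_hz * l_h - 1.0 / (2 * math.pi * f_hz * c_f)
    return complex(r_ohm, x)

target = complex(1.93, 50.7)  # conjugate of the cylinder impedance at 20 kHz
cases = [("Case (1A)", 20e3, 440e-6, 220e-6),
         ("Case (1B)", 40e3, 220e-6, 220e-6),
         ("Case (2), 1 uF", 20e3, 440e-6, 1e-6)]
for label, f, L, C in cases:
    z = series_rlc_impedance(f, 2.0, L, C)
    print(f"{label}: Z = {z.real:.2f}{z.imag:+.2f}j ohm, "
          f"|Z - target| = {abs(z - target):.1f} ohm")
```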
However, there were two practical aspects that were considered in all of the measurements involved in the two phases: 1. Taking into consideration the capacitive nature of the cylinder, it was believed that the harvesting behavior of the transducer would depend on how many charges were already stored in the cylinder. To neutralize this effect, it was set that the measurement would be repeated three times. Before each measurement, the bridge terminals were connected to a short circuit using a 2 Ω resistor to dissipate any residual electric charges that might be stored in the cylinder; 2. All measurements were performed across the output terminals of the full-wave rectifier. This means that they would take into account the losses in voltage caused by the bridge. To quantify these losses, and also to make sure that none of the diodes had burned out during a previous measurement, the total forward voltage across the bridge terminals was always measured before each measurement using a voltmeter set to the diode measurement mode. It was found that the bridge imposed a total drop in voltage of 0.432 Volts. Defining the Acoustic Energy Used in Case (2) Two acoustic measurements were performed. The first measurement aimed at measuring the variation in the sound pressure with the distance from the speaker. The goal behind this step was to identify a position at which the sound pressure would be maximized to amplify the generated voltage and, hence, facilitate its measurement. Then, in the second measurement, the acoustic power was measured using a microphone. Using this measurement, it would be possible to calculate the efficiency of the harvesting process. However, there were practical complications regarding the measurement of the sound power level, L_w, or the sound intensity level, L_i, at such a high frequency. To overcome this problem, we focused on the near-field sound, since there the numerical value of the sound pressure level is equal to that of the intensity level, L_i, and to that of the velocity level, L_v. Hence, by measuring the sound pressure level in the near field, it would be possible to quantify the intensity, I, using Equation (8) and, hence, the total amount of power falling on the cylinder using Equation (9). In Equation (8), I_ref is taken to be 10^-12 W/m^2, according to the sound intensity threshold required for human hearing, while in Equation (9), D_cyl and L_cyl were taken to be 0.04 m.
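The near-field power estimate of Equations (8) and (9) can be sketched as follows. The collecting area is assumed here to be π·D_cyl·L_cyl, consistent with the harvesting-density definition used later in the paper, so the exact constant in Equation (9) is an assumption rather than a quotation.

```python
# Sketch of the near-field power estimate in Equations (8) and (9): intensity from
# the sound pressure level, then power over the cylinder's lateral area. The area
# pi*D_cyl*L_cyl is assumed (consistent with the density definition used later).
import math

I_REF = 1e-12   # W/m^2, hearing-threshold reference intensity
D_CYL = 0.04    # m
L_CYL = 0.04    # m

def intensity_from_spl(spl_db):
    """Equation (8): I = I_ref * 10^(L/10), using SPL ~ intensity level in the near field."""
    return I_REF * 10 ** (spl_db / 10.0)

def collected_power(spl_db):
    """Equation (9), as assumed: acoustic power over the lateral surface pi*D*L."""
    return intensity_from_spl(spl_db) * math.pi * D_CYL * L_CYL

print(f"Power at 102 dB: {collected_power(102.0) * 1e6:.1f} uW")  # on the order of the 78.5 uW reported later
```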
Figure 7 shows the raw data of the measured voltage across the harvester. It can be noticed that the voltage tended to generally increase with the increase in the sound frequency, achieving a maximum voltage of 110 mV at the ceiling of the acoustic spectrum. However, it can also be noticed that a peak voltage formed at a frequency of 12.2 kHz, despite the absence of a similar peak in the cylinder impedance at such a frequency. To investigate this peak, it was necessary to recall the resonance frequency data presented in Table 3, where it can be noticed that this peak frequency represents 0.25025 of the radial resonance frequency F_rp, while the radial resonance frequency itself lies outside of the acoustic range. A possible explanation of this peak is that the sound waves applied from the speakers with a frequency of 12.2 kHz would form overtones with harmonic numbers, n, equal to 2-4. Such overtones would be sufficient to create vibrations in the cylinder at its resonance frequency, F_rp. Moreover, a similar behavior could be expected at the sound frequency F_rp/8, where the eighth overtone of such waves could excite vibrations at the natural frequency. To investigate this observation further, dashed red lines are plotted in Figure 7 at the frequencies representing F_rp/4 and F_rp/8. However, no peak in the voltage was found at the frequency F_rp/8. Figure 7. Variations in the open-circuit voltage with the sound frequency, where it was found that the voltage increased with the increase in the sound frequency, achieving a peak value at one quarter of the cylinder resonance frequency, 12.2 kHz. However, no peak was found at one eighth of the resonance frequency.
In addition to the peak at F_rp/4, the voltage tended to increase starting from a frequency of 16 kHz up to the end of the acoustic range. Considering the frequency value F_rp/2, situated at 24.375 kHz-outside of the acoustic range-this increase in voltage could be attributed to the left side of a peak that was centered at a frequency of 24.375 kHz. Unfortunately, it was not possible to extend this measurement outside of the acoustic range to verify this hypothesis, since the speakers used were rated up to 20 kHz only. Moreover, it would not have been possible to measure the power of the ultrasonic waves to carry out Phase (2). To investigate the potential relationship between the cylinder size and the frequency of the harvested sound waves, the data presented above were replotted in Figure 8 using a dimensionless frequency, f_o. This dimensionless number was defined as the ratio between the cylinder diameter and the sound wavelength, as shown in Equation (10), f_o = D_cyl·f/U, where D_cyl is the outer diameter of the cylinder, f is the sound frequency, and U is the sound speed in air, taken to be 343 m/s.
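For reference, the dimensionless frequency of Equation (10) and the crossover point at which the wavelength equals the cylinder diameter (f_o = 1) can be computed directly from the values given in the text:

```python
# Dimensionless frequency of Equation (10), f_o = D_cyl * f / U, and the crossover
# frequency at which the sound wavelength equals the cylinder diameter (f_o = 1).
# Values are those given in the text (D_cyl = 40 mm, U = 343 m/s).
D_CYL = 0.040   # m
U_AIR = 343.0   # m/s

def f_dimensionless(freq_hz):
    """Ratio of cylinder diameter to sound wavelength."""
    return D_CYL * freq_hz / U_AIR

print(f"f_o at 12.2 kHz: {f_dimensionless(12.2e3):.2f}")
print(f"f_o at 20 kHz:   {f_dimensionless(20e3):.2f}")
print(f"f_o = 1 at about {U_AIR / D_CYL / 1e3:.1f} kHz")  # ~8.6 kHz
```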
By representing the data using the new dimensionless parameter, it was possible to divide Figure 8 into two parts separated by a green separation line at f_o = 1. The part on the left of the separation line represents the case when the cylinder diameter was less than the sound wavelength, and the part on the right represents the case when the cylinder diameter was greater than the sound wavelength. It can be noticed that the voltage tended to increase in the left part with increasing frequency, up until exactly the separation line. On the other hand, the trend in the right part was different. By examining the right part, one would find that it was composed of four main sections: a, b, c, and d. Section (a) extended over the range of f_o = 1-1.16, where the voltage tended to drop with increasing frequency. Then, Section (b)-the aforementioned peak-extended over the range f_o = 1.17-1.52. Then, Section (c) extended over f_o = 1.52-1.87, where the voltage was almost constant. Finally, in Section (d) the voltage tended to increase with increasing frequency, forming what was thought to be a peak centered at F_rp/2, outside of the acoustic range. The difference in the trends of the two halves may be attributed to the relation between the sound wavelength and the cylinder size. In the left half, where the sound wavelength was larger than the cylinder diameter, the sound waves tended to diffract around the cylinder without transferring their energy into the cylinder. In other words, the cylinder was not able to capture well the mechanical energy of sound waves of large wavelength and small frequency. As the frequency increased and the wavelength decreased, the sound waves gradually lost their ability to diffract around the cylinder and more energy was transferred from the acoustic waves to the cylinder. This may justify the direct proportionality between the generated voltage and the sound frequency in the left half. When the sound wavelength reached a critical value equal to the cylinder diameter, the acoustic waves were no longer able to diffract and were instead either absorbed or reflected by the cylinder. In such a case, energy transfer between the waves and the cylinder depended only on the difference between the acoustic impedances of the two media-the cylinder and the air. Therefore, it would be expected that the right half of the scale would have a constant voltage level, which agrees with the obtained data if Sections (b) and (d) are ignored, since they are altered by the resonance of the cylinder. However, the drop in voltage in Section (a) remains unjustified. Case (1) Figure 9 shows the voltage generated over time in both Cases (1A) and (1B). From Figure 9, it can be noticed that it took 13 min to reach the steady-state voltage across the load. Moreover, it was noticed that Case (1A) achieved a higher steady-state value of 115.6 mV. Unfortunately, it was not possible to measure the flowing electric current. Therefore, the electric power, P, dissipated in the load was determined using Equation (11). Figure 10 shows the power generated in the two cases, where 224 µW of power was harvested by the circuit of Case (1A), while only 116 µW of power was harvested by the circuit of Case (1B).
Figure 9. A comparison between the overall voltage generated in Case (1A) and Case (1B), where Case (1A) achieved a higher steady-state voltage. Figure 10. A comparison between the harvested electric power in Case (1A) and Case (1B). Case (1A), using a 440 µH inductance assuming a signal frequency of 20 kHz, achieved a higher power. Case (2) 3.3.1. Results of Measuring the Electric Power Figure 11 shows the developed voltage across the load circuit over time for the different capacitors, while Figure 12 shows the dissipated power in the load for each case. It was found that the load circuit with the 10 µF capacitor achieved the highest steady-state voltage. However, the circuit with the 1 µF capacitor dissipated the most power. This agrees with the data shown in Table 5, where it was expected that the circuit with the 1 µF capacitor would dissipate the most power since it had the best impedance match among the three circuits. Figure 11. A comparison between the overall generated voltages with different capacitors (Case (2)), where the circuit with the 10 µF capacitor generated the highest steady-state voltage. Figure 12. A comparison between the overall dissipated power with different capacitors, where the circuit with the 1 µF capacitor generated the highest steady-state power. Results of Measuring the Acoustic Power The mentioned measurement was conducted at several points at different distances from the speakers. For reference, the FFT (fast Fourier transform) of the sound pressure level measured at a distance of three centimeters is shown in Figure 13, while Figure 14 plots the variation in the 20 kHz component with the distance from the source compared to the diameter of the cylinder.
From Figure 14, it can be noticed that there were points of local maxima and local minima, which is a property of near-field sound. The highest pressure level, L_p, was achieved at a distance of three centimeters, with a value of 102 dB. Using Equations (8) and (9) together with the L_p value at a distance of three centimeters, it was possible to calculate the total acoustic power collected by the cylinder to be 78.5 µW. Estimation of the Energy Conversion Efficiency To evaluate the performance of the considered energy harvester, it was possible to use Equations (12) and (13) to calculate the harvesting efficiency, η = P/P_acoustic (12), and the energy-harvesting density, p = P/A_cyl = P/(π·D_cyl·L_cyl) (13), respectively. The results obtained from the data of the three capacitor cases are summarized in Table 6, where it was found that the optimized harvester with the 1 µF capacitor achieved a harvesting efficiency of 86.1% with an energy-harvesting density of 1.3455 µW/cm^2 of the cylinder's surface area. Estimation of the Effect of the Electric Circuit To evaluate the voltage drop imposed by the electric circuit, Equation (14), η_v = V/(V + V_loss), was devised to calculate the voltage rectification efficiency, η_v, where V is the steady-state voltage across the load and V_loss is the voltage drop across the bridge. As was mentioned previously in Section 2.3.2, the full-wave rectifier bridge imposed a total voltage drop, V_loss, of 432 mV. Therefore, it was possible to calculate η_v for the different load circuits, as summarized in Table 6. It is worth mentioning that the voltage rectification efficiency depended mainly on the forward voltage drop of the used diodes and the steady-state voltage generated by the cylinder. The steady-state voltage itself was related to-via the voltage constant of the piezoelectric material-the mechanical pressure applied on the harvester. In applications of acoustic energy harvesting, the magnitude of the pressure of the involved vibrations is relatively small. Therefore, the generated steady-state voltage will also be small, lowering the value of the voltage rectification efficiency. Discussion 4.1. Effect of the Size of the Geometry of the Transducer on the Energy Harvesting Process Based on the results obtained from the open-circuit measurements performed at different frequencies, it can be concluded that there is a critical sound frequency above which acoustic energy harvesting by a cylindrical transducer is improved. This frequency is related to a property of sound waves, namely sound diffraction. Such a property would not be relevant in the case of other geometries such as plate-shaped, ring-shaped or bending-element transducers. This critical frequency was based on the relationship between the corresponding sound wavelength and the diameter of the cylinder.
By studying Equation (10), it can be concluded that the bigger the cylinder is, the smaller this critical sound frequency. In other words, the cylinder will have a wider frequency range over which its harvesting performance is improved. Therefore, the next step would be to verify this hypothesis by studying the harvesting performance of cylinders of different diameters. Moreover, it would be interesting to compare the harvesting performance of these cylinders to the performance of a plate-shaped transducer made from the same piezoelectric material. Evaluation of the Complete Energy Harvester It has been practically demonstrated that these piezoelectric cylinders can be used to harvest and store acoustic energy at a frequency of 20 kHz. The cylinder was able to harvest 67.6 µW of acoustic energy with an efficiency of 86.1% and an energy-harvesting density of 1.3455 µW/cm 2 . In order to compare this piezoelectric cylinder to the other harvesters, the evaluation method set by the authors in [18] was used, where they calculated the Metric parameter defined in Equation (1) for various harvesters reported in the literature. By applying Equation (1) on the piezoelectric cylinder proposed in this paper, it turns out that the piezoelectric cylinder achieved a Metric of 0.2152 which outperformed 16 out of the 17 designs on their list. In the following section, some of the interesting designs that share some of the working conditions of the piezoelectric cylinder are highlighted for a more in-depth comparison. A summary of the data on these harvesters can be found in Table 7. The design presented by Khan et al. [23] had a Metric score of 14.536 µW/ cm 3 .Pa 2 , and it was the only design with a Metric that was higher than that of the piezoelectric cylinder; it is based on electromagnetic transduction rather than piezoelectric. It harvests acoustic energy using a set of composed wound coils mounted on a thin membrane, a permanent magnet, and a Helmholtz resonator. Energy harvesting starts when the acoustic waves make the air particles vibrate inside the Helmholtz resonator. Because of the resonation mechanism of the resonator, the vibration of the air particles inside the resonator neck are amplified which, in turn, induces vibrations in the thin membrane causing motion of the coil. Then, the induced motion of the coil relative to the permanent magnet eventually generates electric energy. The design managed to provide a voltage of 198.7 mV over a load of 50 Ω. This means it harvested 789.6 µW of electric power. The input sound waves were of a frequency of 319 Hz-the resonance frequency of this harvester-and an SPL of 100 dB, which is quite similar to the SPL used in the experiment performed on the piezoelectric cylinder. By comparing the operating conditions of this harvester to the operating conditions of the piezoelectric cylinder, it can be noticed that the two harvesters shared the same operating sound pressure. However, the electromagnetic harvester was operating at its resonance frequency, while the cylinder was operating at a frequency near to half of its resonance frequency where the resonance frequency lies outside of the acoustic range. Among the designs in the list in [18] was the harvester proposed by Horowitz et al. in [24]. Their harvester is a micro-machined acoustic energy harvester that uses a Helmholtz resonator to amplify the sound pressure. The harvester uses a silicon wafer as a diaphragm and a ring-shaped PZT as a piezoelectric material. 
They were able to harvest 6 × 10⁻⁶ µW with the harvester at its resonance frequency of 13.57 kHz from sound waves with an SPL of 149 dB. This design had the highest operating sound frequency in the list, at 67.8% of the sound frequency used by the piezo-cylinder. Yet, the cylinder outperformed this design, as can be seen from the Metric number, where this design achieved a Metric of 7.7228 × 10⁻¹². Another design mentioned in the same comparison was the one reported by Li et al. [25]. This energy harvester is based on a quarter-wavelength straight-tube resonator and multiple piezoelectric cantilever plates placed in the first half of the tube. This harvester was able to harvest remarkably high electric power (the highest in the list), generating 12,697 µW of power at an SPL of 110 dB and a frequency of 199 Hz. This high power was harvested by a relatively compact harvester, which scored a Metric value of 0.187. It would be difficult to compare this design to the cylinder, since they were designed for two very different sound frequencies (199 Hz versus 20 kHz). However, if the cylinder were coupled with other harvesters and a resonator, it could have the potential to harvest more energy even at lower sound frequencies. Conclusions This article examined the potential of acoustic energy harvesting using a cylindrical piezoelectric transducer. The cylinder was found to perform better with high-frequency sound waves, generating a maximum voltage of 112.5 mV at the highest acoustic frequency, 20 kHz. Moreover, it is believed that sound diffraction affects the energy harvesting process: sound waves that are longer than the cylinder diameter (of a frequency less than 9 kHz) diffract around the cylinder, minimizing the energy transfer to it. In addition, a complete energy harvester was built from the cylinder. Measurements were performed on several load circuits to test the effect of impedance matching. The efficiency of the energy harvesting process was calculated at a frequency of 20 kHz for the most impedance-matched circuit. It was found that the cylinder generated a steady-state voltage of 56.6 mV with a rectification efficiency of 11.58%. Power-wise, the cylinder harvested 67.6 µW of acoustic power, achieving a harvesting efficiency of 86.1% and an energy-harvesting density of 1.3455 µW/cm² of the cylinder surface area. The results showed that the cylinder outperformed some of the energy harvesters presented in the literature in terms of performance relative to compactness. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
12,672.8
2021-09-15T00:00:00.000
[ "Engineering", "Physics" ]
Metformin Retards Aging in C. elegans by Altering Microbial Folate and Methionine Metabolism Summary The biguanide drug metformin is widely prescribed to treat type 2 diabetes and metabolic syndrome, but its mode of action remains uncertain. Metformin also increases lifespan in Caenorhabditis elegans cocultured with Escherichia coli. This bacterium exerts complex nutritional and pathogenic effects on its nematode predator/host that impact health and aging. We report that metformin increases lifespan by altering microbial folate and methionine metabolism. Alterations in metformin-induced longevity by mutation of worm methionine synthase (metr-1) and S-adenosylmethionine synthase (sams-1) imply metformin-induced methionine restriction in the host, consistent with action of this drug as a dietary restriction mimetic. Metformin increases or decreases worm lifespan, depending on E. coli strain metformin sensitivity and glucose concentration. In mammals, the intestinal microbiome influences host metabolism, including development of metabolic disease. Thus, metformin-induced alteration of microbial metabolism could contribute to therapeutic efficacy—and also to its side effects, which include folate deficiency and gastrointestinal upset. PaperClip INTRODUCTION Metformin is the world's most widely prescribed drug, as an oral antihyperglycemic agent for type 2 diabetes (T2D) and in the treatment of metabolic syndrome. However, the real and potential benefits of metformin therapy go beyond its prescribed usage, including reduced risk of cancer (Dowling et al., 2011) and, in animal models, delayed aging, an effect seen in rodents (Anisimov et al., 2011) and in the nematode Caenorhabditis elegans (Onken and Driscoll, 2010). The mechanisms underlying these positive effects remain unclear. One possibility is that met-formin recapitulates the effects of dietary restriction (DR), the controlled reduction of food intake that can improve late-life health and increases lifespan in organisms ranging from nematodes and fruit flies to rodents and rhesus monkeys (Mair and Dillin, 2008). In mammals, the action of metformin is partly mediated by AMPK activation, which results in downregulation of TOR and the IGF-1/AKT pathways to reduce energy-consuming processes (Pierotti et al., 2012). An unexplored possibility is that metformin alters mammalian physiology via its effects on gut microbiota (Bytzer et al., 2001). The gut microbiome (or microbiota) plays a major role in the effects of nutrition on host metabolic status (Nicholson et al., 2012), as well as contributing to metabolic disorders such as obesity, diabetes, metabolic syndrome, autoimmune disorders, inflammatory bowel disease, liver disease, and cancer Kau et al., 2011;Nicholson et al., 2012). It may also influence the aging process (Ottaviani et al., 2011). It has been argued that the host and its symbiotic microbiome acting in association (holobiont) should be considered as a unit of selection in evolution (Zilber-Rosenberg and Rosenberg, 2008). Coevolution of microbiota facilitates host adaptation by enabling e.g., nutrient acquisition, vitamin synthesis, xenobiotic detoxification, immunomodulation, and gastrointestinal maturation. In return, the host provides a sheltered incubator with nutrients (Bä ckhed et al., 2005). Thus, the two components of the holobiont are symbiotic, but microbiota can also be commensal or pathogenic. 
Defining interactions between drug therapy, microbiome and host physiology is experimentally challenging given the complex and heterogeneous nature of mammalian gut microbiota. Here, simple animal models amenable to genetic manipulation can be helpful. For example, in the fruit fly Drosophila, microbiota modulates host development and metabolic homeostasis via the TOR pathway (Storelli et al., 2011). C. elegans is particularly convenient for such studies because under standard culture conditions only a single microbe is present (as a food source): the human gut bacterium Escherichia coli (Brenner, 1974). Active bacterial metabolism is a critical nutritional requirement for C. elegans, the absence of which retards development and extends lifespan (Lenaerts et al., 2008). Moreover, worms are sometimes long-lived on mutant E. coli with metabolic defects (Saiki et al., 2008; Virk et al., 2012) and on microbial species thought to enhance human health, e.g., from the genera Lactobacillus and Bifidobacterium (Ikeda et al., 2007). These observations suggest that E. coli plays a more active role in C. elegans nutrition and metabolism than as a mere food source, and in some respects acts as microbiota (Lenaerts et al., 2008). C. elegans has also been used extensively to identify genes that specify endocrine, metabolic, and dietary regulation of aging (Kenyon, 2010). In this study, we examine the mechanism by which metformin extends lifespan in C. elegans. We report that its effects are mediated by the cocultured E. coli, where metformin inhibits bacterial folate and methionine metabolism. This, in turn, leads to altered methionine metabolism in the worm, and increased lifespan. These findings reveal how drug action on host-microbiome interactions can impact health and longevity. Extension of C. elegans Lifespan by Metformin Is Mediated by Live E. coli We first verified the effects on worm lifespan of metformin, and also the more potent biguanide drug phenformin. Metformin at 25, 50, and 100 mM increased mean lifespan by 18%, 36%, and 3% (Figure 1A; Table S1 available online). Phenformin at 1.5, 3, and 4.5 mM also increased lifespan, by 5%, 21%, and 26% (Figure 1B; Table S1). As expected, maximal effects on lifespan of these pharmacologically similar drugs were nonadditive (Figure 1C; Table S1). [Figure 1 legend: (C) Phenformin (4.5 mM) does not increase lifespan in the presence of 50 mM metformin, consistent with similar mechanism of drug action. (D) Metformin decreases the exponential increase in age-related mortality (for survival curve see Figure S1E). (E) Later-life administration (day 8) of metformin increases lifespan at lower concentrations (25, but not 50 or 100 mM). See also Figure S1. For statistics, see Table S1.] Metformin reduced the exponential age increase in mortality rate (Figure 1D), demonstrating that it slows aging (at least until day 18) rather than reducing risk of death. Metformin also modestly increased mean lifespan when administered from middle age onward, but only at 25 mM (+8%, p < 0.001; Figure 1E; Table S1). In most trials, the DNA replication inhibitor FUdR was used to prevent progeny production, but effects of metformin on lifespan are not FUdR-dependent (Figures S1F and S1G; Table S1) (Onken and Driscoll, 2010). These results confirm the robust effects of biguanide drugs on aging in C. elegans. Interventions altering E. coli can affect C. elegans lifespan (Garigan et al., 2002; Gems and Riddle, 2000; Saiki et al., 2008).
To test the possibility that metformin increases worm lifespan by altering the E. coli, we assessed its effects in the absence of bacteria (axenic culture). As expected, culture on axenic medium (Lenaerts et al., 2008) and bacterial deprivation (Kaeberlein et al., 2006) caused an increase in worm lifespan, typical of DR. Under these conditions, metformin did not increase worm lifespan, but instead markedly reduced it (Figures 2A, S2A, and S2B; Table S2). UV-irradiation of E. coli impairs bacterial viability and extends worm lifespan without reducing fertility, suggesting a mechanism distinct from DR (Gems and Riddle, 2000). Under these conditions, metformin still shortened lifespan (À16%, p < 0.001; Figure 2B; Table S2). Next, we raised E. coli in the presence of metformin and then transferred it to drug-free agar plates. Drug pretreatment of E. coli robustly extended worm lifespan (+33%, p < 0.001; Figure 2C; Table S2). We conclude that the life-extending effect of metformin is mediated by live E. coli. Moreover, in the absence of E. coli, metformin shortens C. elegans lifespan, likely reflecting drug toxicity. One possibility is that metformin extends worm lifespan by reducing E. coli pathogenicity. Proliferating E. coli block the alimentary canal in older worms, and antibiotic treatment can both prevent this proliferation and increase worm lifespan (Garigan et al., 2002). To determine whether metformin extends worm lifespan by preventing E. coli proliferation, we tested its effects in the presence of carbenicillin. This antibiotic is bacteriostatic, blocking bacterial proliferation without greatly reducing its viability. Metformin increased lifespan to a similar degree in the absence (+25%) or presence (+24%) of carbenicillin (p < 0.001; Figure 2D; Table S2). Thus, metformin does not increase lifespan by preventing bacterial proliferation. Culture of C. elegans with Bacillus subtilis increases lifespan (Garsin et al., 2003), suggesting that this microbe is less pathogenic to C. elegans than E. coli. Metformin increased lifespan of worms cultured on B. subtilis (+9%, p < 0.001; Figure 2E; Table S2). These findings suggest that reduced bacterial pathogenicity is not the cause of metformin-induced longevity. Biguanides Have Bacteriostatic Effects at Concentrations that Increase Lifespan Biguanides induced a dose-dependent inhibition of E. coli proliferation ( Figures 2F and S2C) and an alteration in bacterial lawn morphology ( Figure 2G). Similar results were obtained with B. subtilis ( Figures S2D-S2F). Thus, metformin can also act as See also Figure S2. For statistics, see Table S2. an antibiotic. Notably, the drug concentration thresholds for bacterial and worm lifespan effects were similar, and also pH-dependent (Figures S2G and S2H and Table S2). We then asked if the antibiotic effects of metformin were bacteriocidal or bacteriostatic. When subcultured from metformin plates, E. coli showed no reduction in colony forming units ( Figure 2H), implying that metformin has bacteriostatic rather than bacteriocidal effects. To probe whether metformin acts via one of the major, known antibiotic mechanisms, we employed the R26 P-group plasmid that confers resistance to carbenicillin, neomycin, kanamycin, tetracycline, streptomycin, gentamicin, mercuric ions, and sulfonamides. However, metformin still extended lifespan in worms on R26-transformed E. coli (39%, p < 0.001; Figure 2I and Table S2). What is the property of E. coli whose alteration by metformin increases worm lifespan? 
Coenzyme Q (ubiquinone) deficiency in E. coli increases C. elegans lifespan due to impairment of bacterial respiration (Saiki et al., 2008). We therefore tested whether metformin can increase lifespan of worms on Q-deficient ubiG mutant E. coli and found that it does (+20%, p < 0.001; Figure 3A). We then tested whether metformin reduces respiration rate in E. coli OP50. Although metformin transiently reduced respiration rate, long-term exposure increased it (Figure 3B). Taken together, these findings suggest that metformin's effect on worm lifespan is not caused by inhibition of bacterial respiration. Lipopolysaccharides (LPS) are the major component of the outer wall of Gram-negative bacteria. The structure of E. coli LPS can affect C. elegans lifespan (Maier et al., 2010). To test whether metformin action is dependent upon E. coli LPS type, we looked at worm lifespan on seven E. coli strains with a variety of LPS structures. Although effects of metformin on worm lifespan differed between E. coli strains (Figures 3A-3E and S3A and Table S3), this variation did not correlate with the E. coli LPS type. Interestingly, among E. coli strains there was a strong positive correlation between the capacity of metformin to increase worm lifespan and to inhibit bacterial growth (R² = 0.82, p < 0.0007; Figure 3F). There was no correlation between bacterial metformin sensitivity and effect on worm lifespan in the absence of metformin (R² = 9.6 × 10⁻⁵, p = 0.98; Figure S3B). This suggests that the capacity of the drug to extend worm lifespan is a function of the microbial sensitivity to growth inhibition by metformin. To test this directly, we isolated a metformin-resistant OP50 derivative (OP50-MR) (Figures 3G and S3C-S3E) that proved to contain eight mutations (see Extended Experimental Procedures). As predicted, on this strain 50 mM metformin shortened worm lifespan (−37%, p < 0.001; Figure 3H). We conclude that in metformin-resistant E. coli strains, life-shortening toxic effects predominate. [Figure 3 legend: Error bars represent SEM. *p < 0.05; **p < 0.01; ***p < 0.001. See also Figure S3. For statistics, see Table S3.] However, inhibition of bacterial proliferation per se is not the cause of worm life extension, as already shown (Figure 2D; Table S2). Metformin Disrupts Folate Metabolism in E. coli It was recently discovered that C. elegans live longer on an E. coli mutant with reduced folate levels (aroD) (Virk et al., 2012). Moreover, metformin can decrease folate levels in patients (Sahin et al., 2007). We therefore asked whether metformin increases worm lifespan by altering bacterial folate metabolism. Folates are B-group vitamins whose structure incorporates a pteridine ring, p-aminobenzoic acid (pABA), and glutamic acid(s). Folates are typically present as the reduced forms, dihydrofolate (DHF) and tetrahydrofolate (THF). THF can be substituted with a variety of one-carbon units (including formyl and methyl groups) that function as a coenzyme in metabolic reactions involving transfer of one-carbon moieties (Figure 4A). These are involved in the biosynthesis of purines and pyrimidines, in amino acid interconversions, and for the provision of methyl groups in methylation reactions (Kwon et al., 2008). Metformin Disrupts C. elegans Methionine Metabolism To explore whether metformin-induced alterations in microbial folate metabolism increase host lifespan by altering worm folate metabolism, we first examined worm folate profiles under standard culture conditions (agar plates with E. coli OP50).
In worms, as in humans, 5-methyl-THF was the predominant folate (59%) and treatment with metformin did not alter the ratio of different folate forms ( Figure 5A). However, it did decrease glutamate chain length (n = 1-3) (Figures 5B and S5; Table S5), suggesting a possible change in the activity of folate-dependent enzymes. methylene-THF, and reduction of the product THF imply that metformin also reduces microbial methionine availability. This suggests that metformin might increase lifespan by reducing levels of bacterial-derived methionine in the host. To explore this, we employed a C. elegans MS mutant, metr-1(ok521), which cannot synthesize methionine and is therefore wholly dependent upon exogenous methionine (Hannich et al., 2009). In the absence of metformin, metr-1 did not increase worm lifespan (p = 0.85; Figure 5D). Interestingly however, metr-1 did increase lifespan in the presence of 50 mM metformin (+67%, p < 0.001; Figure 5D). Thus, metr-1 sensitizes C. elegans to the life-extending effects of metformin. This suggests that microbes are the main source of dietary methionine, but the worms also synthesize some methionine of their own using METR-1. Thus, effects of metr-1 on lifespan are only detected when dietary methionine levels are reduced. Supporting this scenario, metformin treatment lowered SAMe levels in C. elegans (À72%, p = 0.005) and increased SAH levels (+181%, p = 0.002; Figure 5E). In summary, in E. coli metformin increases SAMe and 5-methyl-THF. By contrast, in C. elegans it decreases SAMe and the SAMe/SAH ratio without affecting 5-methyl-THF levels. In C. elegans, SAMe is synthesized by the SAMe synthase SAMS-1, RNAi knockdown of which extends lifespan (Hansen et al., 2005). Notably, sams-1 RNAi does not increase eat-2 mutant lifespan, suggesting a shared mechanism with eat-2induced DR (Ching et al., 2010;Hansen et al., 2005). If metformin increases lifespan by the same mechanism as loss of sams-1, then metformin should not increase lifespan in the absence of sams-1. To test this, we employed a sams-1(ok3033) null mutant that, as expected, extended lifespan (+35%, p < 0.001; Figure 5F). Strikingly, in a sams-1 mutant, metformin reduced lifespan (À38%, p < 0.001), reminiscent of the effect of metformin on eat-2 mutants (Onken and Driscoll, 2010). These results suggest the possibility that metformin and eat-2-induced DR act by similar disruptions of methionine-associated functions. AMP Kinase and SKN-1 Protect C. elegans Against Metformin Toxicity Metformin-induced longevity requires the worm AMP-dependent protein kinase (AMPK) (Onken and Driscoll, 2010). This is consistent with the fact that biguanide drugs activate AMPK (Hawley et al., 2003). However, if extension of C. elegans lifespan by biguanide drugs is mediated by E. coli, why should this effect require the worm AMPK? To explore this, we first tested whether Error bars represent SEM of at least three independent biological replicates. *p < 0.05; **p < 0.01; ***p < 0.001. See also Figure S6. For statistics, see Table S6. biguanides activate worm AMPK, by measuring phosphorylation of Thr-172 in the worm AMPKa subunit AAK-2. Phenformin, but not metformin, detectably increased pAMPK levels ( Figures 6A and S6A), perhaps reflecting the greater membrane permeability of phenformin. We then verified the AMPK-dependence of the effect of biguanides on worm lifespan in the presence of E. coli. 
Lifespan in aak-2 mutants was not increased by either metformin ( Figure S6B; Table S6), as previously noted (Onken and Driscoll, 2010), or phenformin ( Figure 6B). In fact, phenformin reduced lifespan (À15%, p < 0.001; Table S6). Notably, the metformin-induced deceleration of the age increase in mortality rate was still present in aak-2 mutants, but initial mortality rates were markedly greater ( Figure S6C), consistent with increased sensitivity to metformin toxicity. Our findings imply that the impact of metformin on worm lifespan reflects the sum of indirect, E. coli-mediated life-extending effects and direct life-shortening effects. A possible interpretation of the AMPK and SKN-1 dependence of biguanide effects on lifespan is that these proteins protect worms against drug toxicity. To test this, we compared growth inhibition by metformin in wild-type and mutant C. elegans using a food clearance assay. aak-2 and skn-1 but not daf-16 mutants showed increased sensitivity to growth inhibition by biguanides ( Figures 6E, 6F, and S6D). Note that metformin-induced life extension is not daf-16-dependent (Onken and Driscoll, 2010). We also observed that metformin induced a similar level of gst-4 expression in worms on E. coli OP50 and HT115 (+29 and +30%, respectively) even though the drug increases lifespan only with the former strain ( Figure 6G). These findings further suggest that aak-2 and skn-1 protect worms against biguanide toxicity. To test this further, we raised E. coli with or without metformin, and then transferred it to metformin-free plates with carbenicillin to prevent further growth. Carbenicillin does not affect E. colimediated effects of metformin ( Figure 2D). Notably, metforminpretreated E. coli caused a larger increase in mean lifespan in wild-type worms than aak-2 worms (+48 and +29%, respectively, p < 0.001, Figure 6H) but not skn-1 worms (+17 and +21%, respectively, p < 0.0001; Figure 6I). Moreover, extension of lifespan by blocking folate metabolism with 1 mg/ml TRI ( Figure S6E) or by a folate-deficient mutant E. coli aroD also appeared to be partially aak-2-dependent ( Figure S6F). These results suggest that AMPK-dependence of life extension by metformin is partly due to resistance against drug toxicity, but also partly to AMPK mediation of microbial effects on the worm. By contrast, skn-1 activation appears to act solely by protecting against the life shortening effect of metformin. Error bars represent SEM of at least three independent biological replicates. *p < 0.05; **p < 0.01; ***p < 0.001. See also Figure S7. For statistics, see Table S7. How might SAMe levels regulate AMPK? Increased levels of SAMe can inhibit AMPK activation (Martínez-Chantar et al., 2006). To probe this we tested whether longevity induced by sams-1 RNAi is AMPK-dependent, and this proved to be the case ( Figure S6G). This suggests that metformin increases lifespan at least in part via the AMPK-activating effects of reduced SAMe levels. Metformin Does Not Extend Lifespan on a High Glucose Diet Metformin is a treatment for hyperglycemia caused by diabetes. We wondered whether metformin is able to provide protection against high glucose levels, which can shorten worm lifespan (Lee et al., 2009). In fact, metformin proved unable to extend the lifespan of worms supplemented with 0.25% or 1% glucose (Figures 7A and S7A; Table S7), but instead shortened lifespan. Next we tested whether high glucose affected inhibition of bacterial growth by metformin. 
Strikingly, glucose supplementation suppressed metformin-induced inhibition of bacterial growth (Figures S7B-S7D). This may reflect a switch from amino acid-based to glucose-based metabolism for growth, relieving the need of glucogenic amino acids (e.g., methionine) as a source of carbon. Thus, a diet high in glucose can abrogate the beneficial effects of metformin on lifespan, a finding of potential relevance to mammals. DISCUSSION In this study we have shown how metformin slows aging in C. elegans by metabolic alteration of the E. coli with which it is cultured. Metformin disrupts the bacterial folate cycle, leading to reduced levels of SAMe and decelerated aging in the worm. Two Mechanisms of Action of Metformin on C. elegans The effect of metformin on worm lifespan was strongly dependent upon the accompanying microbes. In the presence of some E. coli strains, metformin increased lifespan, whereas with other strains or in the absence of microbes it shortened lifespan. This study demonstrates that metformin has both direct and indirect effects on C. elegans. Metformin (50 mM) acts directly to shorten worm lifespan, likely reflecting drug toxicity, and indirectly to increase lifespan by impairing microbial folate metabolism. The actual effect of metformin on lifespan depends on whether direct or indirect effects predominate. Given metformin-sensitive E. coli strains (e.g., OP50), drug treatment impairs folate metabolism and slows aging. But given metformin-resistant strains (e.g., OP50-MR), folate metabolism is less affected, the toxic effect predominates, and lifespan is shortened. It is possible that in other host organisms the capacity for metformin to slow aging is also microbiome-dependent. For example, the recent observation that metformin activates AMPK but does not increase lifespan in Drosophila (Slack et al., 2012) might reflect the presence of metformin-resistant microbiota. Our findings imply that life-extending effects of metformin are not due to rescue from proliferation-mediated bacterial pathogenicity. Instead, the drug alters bacterial metabolism, leading to a state of nutritional restriction in the worm, which increases lifespan. Consistent with this, as under DR, concentrations of biguanides that increase lifespan also reduce egg laying rate (Onken and Driscoll, 2010) (Figures S1A and S1B) and reduce the rate of increase in age-specific mortality ( Figures 1D and S6C) (Wu et al., 2009). It was previously demonstrated that AMPK-dependent activation of SKN-1 is essential for metformin benefits on health span and lifespan (Onken and Driscoll, 2010). Our findings show that AMPK and SKN-1 promote resistance to biguanide toxicity, and imply it is for this reason that in their absence drug-induced life extension is not seen. However, AMPK (but not SKN-1) is also required for the full microbe-mediated life extension ( Figure 6H). Metformin Effects on Methionine Metabolism in E. coli and C. elegans We investigated the likely bacterial target of metformin, first ruling out DHF reductase as a target ( Figures S4E-S4G). Instead, metformin induction of a methyl trap, in which 5-methyl-THF accumulates, is consistent with lowered MS activity (Nijhout et al., 2004) and therefore attenuated methionine biosynthesis. Moreover, metformin also increases bacterial levels of SAMe, which is known to inhibit transcription of genes involved in methionine biosynthesis (Banerjee and Matthews, 1990). 
Studies in mammalian liver cells show that SAMe can act both as an allosteric activator of SAMS and a feedback inhibitor of MTHFR leading to reduced levels of methionine. In addition, increased levels of 5-methyl-THF block methyltransferases (e.g., glycine N-methyltransferase) (Mato et al., 2008). This provides a potential explanation for the observed rise of SAMe in addition to MS inhibition by metformin, and strongly suggest that it reduces bacterial methionine levels ( Figure 7C). Consistent with this, treating the C. elegans/E. coli system with metformin caused a 5-fold decrease in SAMe levels and a drop in the SAMe/SAH ratio in the worm. Moreover, mutation of the worm MS gene metr-1 enhanced metformin-induced life extension, again consistent with MS inhibition in metformintreated E. coli, and also with methionine restriction as a mechanism of worm life extension. The latter is further supported by the inability of metformin to extend the lifespan of sams-1 mutant worms, which have a 65% decrease in SAMe levels (Walker et al., 2011). Both sams-1 RNAi and metformin increase lifespan in wildtype but not eat-2 (DR) mutants worms, and both treatments are thought to recapitulate the effects of DR (Hansen et al., 2005;Onken and Driscoll, 2010). Indeed, metformin induces a DR-like state that, similarly to decreased levels of sams-1 by RNAi, reduces brood size, delays reproductive timing, and increases lifespan independently of the transcription factor DAF-16/FoxO but not in eat-2 DR mutants (Onken and Driscoll, 2010). Also, sams-1 mRNA levels are reduced 3-fold in eat-2 mutants (Hansen et al., 2005). Similar DR-like phenotypes, including reduced body size, were observed in our study when using phenformin (Figures S1A-S1D). Moreover, restriction of dietary methionine can extend lifespan in fruit flies and rodents (Grandison et al., 2009;Orentreich et al., 1993). Taken with these observations, our findings suggest a potential common mechanism underlying the action of metformin, knockdown of sams-1 and DR, which will be interesting to investigate in future studies. Potential mechanisms by which reduced SAMe might increase lifespan include reduced protein synthesis and altered fat metabolism (Ching et al., 2010;Hansen et al., 2005;Walker et al., 2011). Additionally, reduced SAMe/SAH ratio, as a measure of reduced methylation potential, could modulate lifespan via histone methylation (i.e., epigenetic effects). One possibility is that the relative abundance of metabolites such as SAMe allows the cell to assess its energy state and respond accordingly, creating a link between diet, metabolism and gene expression to modulate physiology and consequently lifespan. Metformin and Gut Microbiota in Humans Our findings are of potential relevance to mammalian biology and human health. Bacteria in the human gut play a central role in nutrition and host biology, and affect the risk of obesity and associated metabolic disorders such as diabetes, inflammation, and liver diseases (Cani and Delzenne, 2007). Our finding that metformin influences C. elegans aging by altering microbial metabolism raises the possibility that this drug might similarly influence mammalian biology by affecting microbial metabolism or composition. Metformin is the most prescribed drug to treat T2D, with doses ranging from 500-2,500 mg/day (Scarpello and Howlett, 2008). 
Drug concentration in the jejunum is 30-to 300-fold higher than in the plasma in metformin recipients (Bailey et al., 2008) and concentrations above 20 mM have been detected in the intestinal lumen after administration of 850 mg metformin (Proctor et al., 2008). Interestingly, common side effects include gastrointestinal disorders (e.g., bloating and diarrhea) (Bytzer et al., 2001), reduced folate, and increased homocysteine levels (Sahin et al., 2007). Similarly, we find that metformin impairs bacterial folate metabolism and reduces host SAMe/SAH ratio. Factors causing perturbation of the microbiome (dysbiosis), e.g., obesity, a high-fat diet, and antibiotics, often lead to metabolic dyshomeostasis in the host Nicholson et al., 2012) e.g., due to release of proinflammatory microbial LPS into the bloodstream. Our data show that the effects of metformin are bacterial strain-dependent but independent of LPS. One possibility is that metformin might promote a better balance of gut microbiota species. We were able to develop a metformin-resistant bacterial strain that confers benefits to the host ( Figure S3F) suggesting that long-term administration of metformin could benefit the host even after treatment is ceased. Indeed, metformin administration to rats causes a change in the composition of the microbiome (Pyra et al., 2012), although it remains unclear what effect this has upon the host. Moreover, the antibiotic norfloxacin can induce alteration of mouse gut microbiome that has beneficial effects, e.g., enhanced glucose tolerance (Membrez et al., 2008). Lowering dietary glucose can benefit humans with metabolic syndrome or T2D (Venn and Green, 2007). Diet strongly influences the metabolism of the human microbiota (Turnbaugh et al., 2009). We have found that elevated dietary glucose suppresses the effects of metformin on bacterial growth and worm lifespan. This suggests that a high-sugar diet might impair microbe-mediated benefits of metformin. Overall, our findings point to the potential therapeutic efficacy of drugs that alter gut microbiota, particularly to prevent or treat metabolic disease . In addition, it underscores the value of C. elegans as a model to study host-microbe interactions. E. coli as Food Source and Microbiome for C. elegans Mammals, including humans, coexist with intestinal microbes in a relationship that includes elements of commensalism, symbiosis, and pathogenesis, and microbiota strongly influences host metabolism Nicholson et al., 2012). Several observations suggest that in at least some respects E. coli could act as microbiome for C. elegans. Although worms can be cultured on semidefined media in the absence of E. coli (axenically), such media do not support normal growth and fertility. C. elegans seems to require live microbes for normal growth, reproduction, and aging (Lenaerts et al., 2008;Smith et al., 2008). However, unlike microbiota and their mammalian hosts, E. coli is the principal food source for C. elegans. Studies of GFPlabeled E. coli imply that in late stage larvae (L4), bacterial cells are largely broken down by the pharynx prior to entering the intestine (Kurz et al., 2003), although by day 2 of adulthood intact E. coli are visible in the intestine (Labrousse et al., 2000). In senescent worms, E. coli contribute to the demise of their host, clogging the lumen of the alimentary canal and invading the intestine (Garigan et al., 2002;Labrousse et al., 2000;McGee et al., 2011). Thus, it appears that in early life C. elegans and E. 
coli exist in a predator-prey relationship, whereas in late life the tables are turned. But it remains possible that metabolic activity in intact or lysed E. coli within the worm contributes to intestinal function and host metabolism throughout life. Presumably, C. elegans has evolved in the constant presence of metabolically active intestinal microbes. We postulate that, consequently, intestinal function requires their presence. Thus, it may only be possible to fully understand C. elegans metabolism as it operates within the C. elegans/E. coli holobiont (Zilber-Rosenberg and Rosenberg, 2008). Our account of how metformin impacts on the two organisms is consistent with this view. Strains and Culture Conditions Nematode and bacterial strains used and generated in this study are described in the Extended Experimental Procedures. Where indicated, molten NGM agar was supplemented with drugs. Axenic plates were prepared as previously described (Lenaerts et al., 2008). Lifespan Analysis This was performed as follows, unless otherwise indicated. Briefly, trials were initiated by transfer of L4-stage worms (day 0) on plates supplemented with 15 mM FUdR. Statistical significance of effects on lifespan was estimated using the log rank test, performed using JMP, Version 7 (SAS Institute). GST-4::GFP Fluorescence Quantitation Animals were raised from the L1 stage on control or drug-treated plates. Quantification of GFP expression at the L4 stage was carried out using a Leica DMRXA2 epifluorescence microscope, an Orca C10600 digital camera (Hamamatsu, Hertfordshire, UK), and Volocity image analysis software (Improvision, UK). GFP intensity was measured as the pixel density in the entire cross-sectional area of each worm, from which the background pixel density was subtracted (90 worms per condition). Bacterial Growth Assay Liquid bacterial growth was performed in microtiter plates containing the respective bacterial strain (previously grown overnight in LB and diluted 1,000-fold) and drugs in 200 µl of LB at pH 7.0. Absorbance (OD 600 nm) was measured every 5 min over an 18 hr period with shaking at 37 °C using a Tecan Infinite M2000 microplate reader and Magellan V6.5 software. For colony forming unit counts, see Extended Experimental Procedures. Bacterial Respiration This was measured in a Clark-type oxygen electrode (Rank Brothers, Cambridge, UK) in a 1 ml stirred chamber at 37 °C (Lenaerts et al., 2008). Metabolite Analysis by LC-MS/MS Bacterial and nematode metabolite analysis was performed as described in Extended Experimental Procedures. Metabolomic Principal Component Analysis Raw LC-MS/MS spectral data were uploaded into MetaboAnalyst. To avoid propensity to data overfitting, PCA analysis was used to create the 2D analysis plot. Western Blotting Briefly, phosphorylation of the AAK-2 subunit (pAMPKa) was detected using a pAMPKa antibody (Cell Signaling) at a 1:1,000 dilution. Films were scanned and the density of each band or the entire lane was quantified by densitometry using ImageQuant TL (GE Healthcare Europe GmbH, UK). Food Clearance Assay The effect of biguanide compounds on C. elegans physiology was monitored from the rate at which 50% of the E. coli food suspension was consumed, as a readout for C. elegans growth, survival, or fecundity. SUPPLEMENTAL INFORMATION Supplemental Information includes Extended Experimental Procedures, seven figures, and seven tables and can be found with this article online at http://dx.doi.org/10.1016/j.cell.2013.02.035.
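As a rough illustration of the log-rank comparison described in the Lifespan Analysis above (performed in JMP in the study), the sketch below runs the same kind of test in Python with the lifelines package; the lifespans are synthetic placeholder numbers, not data from the study.

```python
# Synthetic illustration only: adult lifespans (days) for two conditions.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
control   = rng.normal(loc=20, scale=3, size=100)   # worms on untreated E. coli
metformin = rng.normal(loc=24, scale=3, size=100)   # worms on drug-treated plates

# All deaths observed (no censoring) in this toy example.
result = logrank_test(control, metformin,
                      event_observed_A=np.ones_like(control),
                      event_observed_B=np.ones_like(metformin))
print(f"log-rank p-value: {result.p_value:.3g}")

# Kaplan-Meier estimate of median lifespan per condition.
for label, data in [("control", control), ("metformin", metformin)]:
    km = KaplanMeierFitter().fit(data, label=label)
    print(label, km.median_survival_time_)
```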
7,333.2
2013-03-28T00:00:00.000
[ "Biology", "Medicine" ]
Learning-Based Pose Estimation of Non-Cooperative Spacecrafts with Uncertainty Prediction : Estimation of spacecraft pose is essential for many space missions, such as formation flying, rendezvous, docking, repair, and space debris removal. We propose a learning-based method with uncertainty prediction to estimate the pose of a spacecraft from a monocular image. We first used a spacecraft detection network (SDN) to crop out the rectangular area in the original image where only spacecraft exist. A keypoint detection network (KDN) was then used to detect 11 pre-selected keypoints with obvious features from the cropped image and predict uncertainty. We propose a keypoints selection strategy to automatically select keypoints with higher detection accuracy from all detected keypoints. These selective keypoints were used to estimate the 6D pose of the spacecraft with the EPnP algorithm. We evaluated our method on the SPEED dataset. The experiments showed that our method outperforms heatmap-based and regression-based methods, and our effective uncertainty prediction can increase the final precision of the pose estimation. Introduction For the demands of some space missions, such as maintenance for spacecrafts [1], on-orbit docking [2] and removing space debris [3], the pose estimation for non-cooperative spacecrafts has been a hot topic. Non-cooperative spacecrafts generally refer to spacecrafts that do not provide effective cooperative information, including malfunctioning or failed satellites, space debris, and opposing spacecrafts. In the past, the pose of spacecrafts was usually estimated by high-precision sensors [4][5][6]. However, due to the high costs and power consumption of these sensors, this solution of pose estimation is not applicable to many low-cost spacecrafts [7]. Monocular images can provide the key position and orientation information required by the navigation system for spacecraft under low power [8]. In this paper, we mainly focus on how to estimate the 6D pose of a spacecraft from a monocular image. The main difficulty of this task is the limited amount of available pose information. Moreover, the complex shooting environment in space, such as illumination and backgrounds, also brings more challenges. Dhome proposed a closed model-based 6D pose image recognition method [9]. This method corresponds all possible 3D model edges to the captured 2D image edges one by one and uses soft assign to avoid the computational overload caused by exhaustive enumeration. Following Dhome, Kanani and Petit made partial improvements to improve its computational speed and reduce data dependence [10,11]. These methods were initially applied to ground-based robotic navigation algorithms and later to satellite-based monocular navigation. However, modelbased methods require a large amount of feature matching before solving the positional pose, which is difficult to apply in real time [12]. Therefore, some people proposed a non-model-based method to estimate the 6D pose. Augenstein and Rock proposed to use SIFT-based SLAM for pose solution of spacecrafts [13]. Nevertheless, non-model-based approaches have the possibility of losing target features due to large changes in image conditions or perspective relationships [14]. The pose estimation methods have been further developed with the further development of image recognition algorithms. 
D'Amico proposed a perceptual organization of detected edges in images using the Sobel algorithm and the Hough algorithm to solve the pose-initialization problem [15]. For the first time, pose estimation of a fully non-cooperative spacecraft has been achieved. However, this method is computationally expensive, difficult to use in real-time on onboard hardware, and lacks robustness to illumination conditions [15]. Sharma improved D'Amico's research by proposing Sharma-Ventura-D'Amico (SVD) architecture and introducing the weak gradient elimination (WEG) to reduce the search space [12]. Sharma's method reduces the computation time and improves the detection accuracy, but has the drawback of generating spurious edges when the image condition is bad. In recent years, due to the development of deep learning algorithms, especially the neural networks, there have been new advances in pose estimation for spacecrafts from monocular images. It has been shown that feature detected by CNNs has more accuracy and stability than traditional methods for computer vision domain tasks [16]. Therefore, many learning-based methods have been proposed to solve the pose estimation problem [17][18][19][20][21][22][23]. Recently, Chen and Park proposed a similar pipeline to estimate the 6D pose of spacecrafts from a monocular image [18,19]. They used CNNs to automatically crop out the part of image where the spacecraft exists and predicted the 2D pixel coordinates of keypoints from the cropped image. They used the 2D pixel coordinates of keypoints and a wireframe model of the spacecraft obtained in advance to estimate the 6D pose. Following their work, we propose a learning-based 6D pose estimation method for spacecrafts, with effective uncertainty prediction enabling automatic selection of keypoints for pose estimation. Our main contribution can be concluded as follows: • We introduce the idea of region detection into the keypoint detection of spacecrafts, which can capture the feature of keypoints better; • We achieve effective uncertainty prediction for the detected keypoints, which can be used to automatically eliminate keypoints with low detection accuracy; • We conduct sufficient experiments on SPEED dataset [17]. Compared with previous methods, our method can reduce the average error of pose estimation by 53.3% while reducing the number of model parameters. The rest of this paper is organized as follows. First, in Section 2 we briefly introduce previous works on learning-based 6D pose estimation of spacecraft and the keypoints detection. Second, the proposed methods are detailed in Section 3.4. Third, the experimental results will be benchmarked in Section 4. Finally, Section 5 will conclude this work. Learning-Based Methods Instead of handcrafting the image features to estimate the pose of spacecraft, learningbased methods use deep learning to automatically extract the features to estimate the 6D pose of the spacecraft. These methods can be divided into two categories, direct estimation and indirect estimation. Sharma [22] used a CNN to extract the features in images and a fully connected layer to output a 6-dimensional vector as the predicted 6D pose. In Gao's work [21], the prediction of the orientation vector was converted into the regression of a heatmap. Sharma adopted multi-task learning [20,23] to estimate 6D pose. While predicting the 6D pose, he completed the task of keypoints prediction, spacecraft detection and image segmentation simultaneously. 
For the indirect estimation methods, Park [18] and Chen [19] first used CNN to predict the position of keypoints and then took these keypoints to estimate pose with the EPnP algorithm [24]. They mainly differ in how to detect the keypoints. Park [18] used light MobileNetv2 [25] as a backbone to extract features and used a fully convolutional network (FCN) [26] to regress the pixel coordinates of keypoints. Chen [19] predicted a heatmap for each keypoint, meaning the probability of keypoints appearing at each pixel coordinate. Our method also belongs to the method of indirect prediction. Different from [19] and [18], we treat each keypoint as a square region to detect. Although Chen also treated each keypoint as a square region, the size of the area he set is fixed. We replace three square anchors of different sizes for each pixel on the feature map for the situation of different relative distance to the spacecraft (Figure 1). (a) (b) Figure 1. Our advantage over Chen [19] on how to set the region of keypoints. (a) Chen [19], (b) Ours. The blue box represents the box containing a keypoint, and the yellow box represents the anchor in our Keypoint Detection Network. When the relative distance of the spacecraft is too small, the fixed region ignores some key area of the keypoint. However, our adaptive region size can solve this problem better, which is described in Section 2.2. Keypoint Detection Keypoint detection is a traditional task in computer vision, and there have been many surveys that extensively discuss related methods [27]. We present related works in two main categories: handcrafted and learned detector. For handcrafted detectors, Harris [28] and Hessian [29] detectors used first and second order image derivatives to find corners or blobs in images. The more refined keypoint feature can be calculated through some engineered algorithms [30][31][32][33], which seek alternative structures within images to represent the keypoint. MSER [32] segmented and selected stable regions as keypoints, and SIFT [30] looked for blobs over multiple scale levels. For learned detectors, the improvement of learned methods in object detection help to explore similar techniques for keypoint detectors. FAST [34] was one of the first attempts to use machine learning to design a keypoint feature descriptor, and then some people made improvements on this method [31,35,36]. Recently, many methods have been proposed to utilize CNNs to detect keypoints. TILDE [37] trained multiple patch-wise linear regression models to detect keypoints that are robust under severe weather and illumination changes. Georgakis [38] proposed a pipeline to automatically sample positive and negative pairs of patches from a region proposal network to optimize jointly point detections and their representations. LF-Net [39] estimated the position, scale and orientation of features by jointly optimizing the detector and descriptor. For the keypoint detection of spacecrafts, Park [19] directly used CNN to regress the 2D coordination of keypoints. Sharma [23] and Park [23] improved it by introducing multitask learning. Chen used HRNet [40], a CNN proposed to predict the pose of the human body, to predict the heatmap of the monocular image. However, he assigned the same region for all the keypoints, which is not rational for different relative distances. We introduce the idea of region detection into the keypoint detection task of spacecraft, where anchors of different sizes can fit different relative distance s (Figure 1). 
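As a small illustration of this design choice, the snippet below generates three square anchors per feature-map cell; the concrete anchor sizes and feature-map stride are placeholders, since this excerpt only states that three square anchors of different sizes are placed at each pixel.

```python
import numpy as np

def square_anchors(feat_h, feat_w, stride=8, sizes=(16, 32, 64)):
    """Generate three square anchors per feature-map cell (sizes in input pixels).

    The sizes and stride here are placeholders; the idea is simply that each
    cell carries square boxes of several sizes so that a keypoint region stays
    well covered whether the spacecraft is near or far.
    """
    ys, xs = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    centers = np.stack([(xs + 0.5) * stride, (ys + 0.5) * stride], axis=-1)
    anchors = []
    for s in sizes:
        half = s / 2.0
        boxes = np.concatenate([centers - half, centers + half], axis=-1)  # x1, y1, x2, y2
        anchors.append(boxes.reshape(-1, 4))
    return np.concatenate(anchors, axis=0)

print(square_anchors(2, 2).shape)  # (12, 4): 3 anchor sizes x 4 cells
```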
At the same time, an effective uncertainty prediction is introduced for detected keypoints, enabling end-to-end accurate keypoint selection. Method The overall pipeline of our method is shown in Figure 2. We first selected 22 images from multiple views to manually obtain the 2D coordinates of each keypoint, and used the simulated annealing (SA) algorithm [41] to obtain the spacecraft's 3D wireframe model. For each input image, we first used a spacecraft detection network (SDN) to find the location and the area where the spacecraft exists. Then, the cropped image of the spacecraft was put into a keypoint detection network (KDN) to detect the position of keypoints. KDN simultaneously estimates the uncertainty of detection for each keypoint. We developed a strategy to select more accurate keypoints as candidate keypoints. The reconstructed 3D coordinate and predicted 2D coordinates of all the candidate keypoints were used to solve the 6D pose of spacecrafts through EPnP [24]. 3D Wireframe Model Recovery Given the internal parameter matrix K c and the external parameter matrix R and T of the monocular camera, if the 3D coordinate p 3D,k of the k-th keypoint in the world coordinate system is known, we can obtain its 2D coordinate in the image. We selected 11 keypoints with great visibility. For each keypoint, we obtained its 2D coordinate manually from 22 images. For each k-th keypoint, the sum of the reprojection error was minimized over a set of images in which the k-th keypoint was visible. The optimal 3D coordinate of each keypoint can be obtained by minimizing the following objective function, where R i and T i represent the known camera extrinsic parameters. p h 2D,i,k represents the 2D coordinate of the k-th keypoint in the i-th image and p h 3D,k represents the according 3D coordinate. The superscript h indicates that the point is expressed in homogenous coordinates. λ i,k represents the scaling factor, which is also needed to solve. N is the number of selected images for k-th keypoint. We define the symbols in Equation (1) in more detail as: where p * * i represents the element in matrix P i . The (u i,k , v i,k ) represents the pixel coordinate of the k-th keypoint in the i-th image. The (X k , Y k , Y k ) represents the 3D coordinate of the k-th keypoint in the world coordinate system. Due to the presence of noise, the optimal solution cannot make the Equation (1) zero. The most general way is to use the least square (LS) method to obtain the optimal solution. According to Equation (1), we can construct N linear equations with N images as: Thus, we can construct over-determined linear equations for s = (X k , Y k , Z k ) T as: where A is a 2N × 3 matrix and b is a 2N × 1 matrix, i.e., The optimal solution can be obtained by the LS as: In this paper, we mainly consider that the manually chosen 2D coordinates of the keypoints may have different degrees of error in different images. We selected only 12 out of 22 images for each keypoint to obtain its 3D coordinates, which makes Equation (1) reach the least value. We used SA [41] to obtain the 3D ordinates p h 3D,k and the scaling factors λ i,k , and calculated the value of Equation (1) to select the best 12 images for each keypoint. In Section 4.6, we show that compared to obtaining the optimal solution directly through LS, the SA method can achieve a better solution. 
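The equation bodies for the least-squares construction did not survive extraction here; as a sketch of the approach described, the snippet below builds the over-determined system A s = b (A being 2N × 3 and b being 2N × 1) for one keypoint from its 2D observations and solves it in the least-squares sense. It illustrates the LS baseline under these assumptions, not the authors' exact implementation.

```python
import numpy as np

def triangulate_keypoint(P_list, uv_list):
    """Least-squares recovery of one 3D keypoint from its 2D observations.

    P_list  : list of 3x4 projection matrices K_c @ [R_i | T_i], one per image
    uv_list : list of (u, v) pixel coordinates of the keypoint in each image
    Builds the over-determined system A s = b (A is 2N x 3, b is 2N x 1)
    and returns the least-squares estimate of s = (X_k, Y_k, Z_k).
    """
    A_rows, b_rows = [], []
    for P, (u, v) in zip(P_list, uv_list):
        A_rows.append([P[0, 0] - u * P[2, 0], P[0, 1] - u * P[2, 1], P[0, 2] - u * P[2, 2]])
        b_rows.append(u * P[2, 3] - P[0, 3])
        A_rows.append([P[1, 0] - v * P[2, 0], P[1, 1] - v * P[2, 1], P[1, 2] - v * P[2, 2]])
        b_rows.append(v * P[2, 3] - P[1, 3])
    A, b = np.asarray(A_rows), np.asarray(b_rows)
    s, *_ = np.linalg.lstsq(A, b, rcond=None)   # numerically safer than (A^T A)^-1 A^T b
    return s
```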
After obtaining the wireframe model of spacecraft, we can obtain the 2D coordinates of keypoints in each image without manually labeling a large number of images for subsequent tasks. Spacecraft Detection Network (SDN) We used a Spacecraft Detection Network (SDN) to automatically find the location of the spacecraft. Considering the smaller model consumes less, we took the tiny version of YOLOX [42] as our SDN. The 2D bounding boxes were obtained by projecting the 3D keypoints onto the image using the ground-truth poses. In order to ensure that the bounding boxes could contain the whole spacecraft, we enlarged the boxes by 10% in the center as our final labels. Keypoints Detection Network (KDN) We treated each keypoint as a square region and used anchor-based methods to detect them. Different from the general object detection method, where we needed to replace rectangular boxes of different sizes for each pixel, since our detection area was square, we only replaced three square boxes with different sizes for each pixel to adapt to the different relative distances of the spacecraft from the camera. The framework of the KDN is shown in Figure 3. We used CSPDarknet [43] as the backbone to extract features of three scales from the input image. We used the feature pyramid network (FPN) [44] to complement the features between different scales to obtain refined features. Finally, all features were input to the detection head for keypoint detection. For the detection and classification, we minimized the following loss function, commonly used in object detection [43], i.e., where b i , c i and C i represent the box, keypoint class and confidence predicted by the KDN for the i-th image, respectively.b i ,c i andC i represent the corresponding labels. L reg (•) represents the MSE loss function, L cls (•) and L con f (•) represent the cross entropy loss function, and N represents the number of images in each batch. We define the predicted box b i and labelb i as: where (x i , y i ) represent the pixel coordinates of the center point of the predicted box on the image, and w i and h i represent the width and height of the predicted box, respectively. The symbols with superscript ∼ represents the corresponding label. The L reg (b i ,b i ) can be written as: For L cls (c i ,c i ), both the predicted keypoint class c i and labelc i are 11-dimensional column vectors. For c i , each element c i,k represents the probability that the k-th keypoint exists in the box. Each elementc i,k inc i represents the corresponding label. The L cls (c i ,c i ) can be written like cross entropy loss function as: For the uncertainty prediction, we minimized the following loss function, where U i represents predicted uncertainty, i.e., the probability of whether there is a target for each keypoint, andŨ i represents the corresponding label. L uncertain (U i ,Ũ i ) can be written as: where U i,k represents predicted uncertainty for the k-th keypoint, andŨ i,k represents the corresponding label. The uncertainty label for the k-th keypoint can be calculated as: where IOU(•) is the intersection ratio of the predicted box b i and the ground truth boxb i . K is the number of keypoint classes. The subscript k indicates that the variable is related to the k-th keypoint. 
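The loss-function bodies are likewise not reproduced in this excerpt. The sketch below shows one plausible reading of the uncertainty label: it is derived from the IoU between a predicted keypoint box and its ground-truth box, and the specific mapping used here (one minus IoU, so that well-localized boxes get low uncertainty) is an assumption rather than the paper's stated formula.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def uncertainty_label(pred_box, gt_box):
    """Assumed target for the uncertainty head: 1 - IoU, so that a
    well-localized keypoint box receives a low uncertainty label."""
    return 1.0 - iou(pred_box, gt_box)
```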
In order to guide KDN to achieve the joint prediction of classification uncertainty and regression uncertainty, the loss function of our KDN is defined as: Pose Estimation After obtaining the 3D coordinates and 2D coordinates of the keypoints, we used the EPnP [24] to solve the 6D pose of the spacecraft. To increase the accuracy of pose estimation, we developed a strategy to select more accurate keypoints by the predicted uncertainty. We divided the selection strategy into two separate sub-strategies, Top K and uncertainty threshold selection (UTS). For each category of keypoints, the keypoint with the lowest uncertainty was used as the final detected keypoint of this category. In UTS strategy, for these eleven detected keypoints, we selected the keypoints whose uncertainty was less than a given threshold µ as candidate keypoints. In Top K strategy, if the number of candidate keypoints was less than five, we directly used the five keypoints with the lowest uncertainty among the eleven detected keypoints as candidate keypoints, since four keypoints may be coplanar, which is detrimental to the pose estimation. If the number of candidate keypoints was more than K n , we took the K n keypoints with the lowest uncertainty as candidate keypoints. All the candidate keypoints were used to solve the 6D pose with EPnP [24]. The above architecture is described in Algorithm 1. Algorithm 1 keypoints selection strategy Require: Keypoints with predicted uncertainty p 2D,i,k , U i,k , uncertainty threshold µ, candidate keypoints set C , detected keypoints set D and K n . Datasets and Implementation Details We evaluated our method using the SPEED dataset [17] with 12,000 synthetic satellite images and five real satellite images provided by the Advanced Concepts Team (ACT) at European Space Agency (ESA) in the pose estimation challenge 2019 [45,46]. Each image was annotated with the extrinsic parameter matrices R and T corresponding to the camera. The difficulty of pose estimation varied from image to image. They had varying degrees of light intensity, relative distance to spacecraft, perspective occlusion, and background complexity ( Figure 4). From the synthetic images of SPEED dataset, we randomly selected 10,000 images as the training set and 1000 images as the validation set. The rest 1000 synthetic images were used as the test set, as well as five real images. We took the methods of Park [18] and Chen [19] as our baselines. Their methods share a similar pipeline with ours, and the main difference is how to predict the 2D coordinates of keypoints. Park [18] used CNNs to directly regress the 2D coordinates of keypoints, which belong to the regression-based method. Chen [19], however, predicted a heatmap for each keypoint, indicating the probability of each keypoint appearing at different positions, which is a heatmap-based method. We introduce the idea of region detection for the prediction of keypoint positions. We hope to prove the superiority of our method for improving the accuracy of 6D pose estimation by comparing it with the above methods. In order to ensure the fairness of the comparison, all three methods used the data augmentation method used by Park and were trained with the Adaptive Momentum Estimation (Adam) optimizer for 300 epochs with a 0.001 learning rate, 48 batch-size, momentum of 0.9, and weight decay of 5 × 10 −4 . In Section 4.3, we set K n and µ as 7 and 0.5, following Algorithm 1. 
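Before turning to the evaluation, the following sketch pulls together the keypoint selection strategy of Algorithm 1 (uncertainty threshold selection plus Top-K) and the EPnP step, here via OpenCV's solver as one possible implementation; variable names are ours, and the input is assumed to already contain the lowest-uncertainty detection for each of the 11 keypoint classes.

```python
import numpy as np
import cv2

def select_keypoints(uncert, mu=0.5, k_n=7):
    """Algorithm 1 (sketch): UTS followed by Top-K on the 11 per-class detections.

    uncert : (11,) predicted uncertainty per keypoint class (lower is better)
    Returns indices of the candidate keypoints used for pose estimation.
    """
    order = np.argsort(uncert)                          # most certain first
    candidates = [i for i in order if uncert[i] < mu]   # uncertainty threshold selection
    if len(candidates) < 5:                             # keep at least five keypoints
        candidates = list(order[:5])
    return np.asarray(candidates[:k_n])                 # cap at K_n keypoints

def estimate_pose(kpts_3d, kpts_2d, uncert, K_cam):
    """EPnP on the selected keypoints via OpenCV (one of several possible solvers)."""
    idx = select_keypoints(uncert)
    ok, rvec, tvec = cv2.solvePnP(
        kpts_3d[idx].astype(np.float64),   # 3D wireframe coordinates
        kpts_2d[idx].astype(np.float64),   # detected 2D coordinates
        K_cam, None, flags=cv2.SOLVEPNP_EPNP)
    return ok, rvec, tvec
```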
Evaluation Metrics In order to quantitatively evaluate our final pose estimation results, we adopt the evaluation metrics provided by ESA to define the errors of estimation of translation, orientation and 6D pose. For the i-th image, the error of the pose estimation is calculated as the sum of the orientation error E R,i and the translation error E T,i , i.e., The translation error and orientation error can be calculated as: where t i andt i represent the predicted and real translation vectors, and q i andq i represent the predicted and real orientation vectors, respectively. • 2 is to calculate the two-norm of a vector and •, • is to calculate the angle between two vectors. The mean error of the pose estimation for the test set is calculated as: where N is the number of images in the test set. Similarly, we can calculate the mean and median of other errors on the test set. We take the above six metrics, medianE T , medianE R , medianE, meanE T , meanE R and meanE, to evaluate the pose estimation results. Comparison in Synthetic Images In this section, we compare three methods in 1000 synthetic images (Table 1). In order to prove that our method can maintain high accuracy while reducing the number of parameters, we reduced the size of the feature map output by the backbone 25% and 50% to obtain ours-small version and ours-nano version respectively. It can be seen from Table 1 that our method performs much better than Park [18] and Chen [19] in six metrics, except for the nano version. However, the number of parameters of our nano version is only about one-tenth of Chen's [19], and the nano version is only slightly worse than Chen on medianE T and medianE. It means that our nano version can achieve considerable accuracy of estimation with obviously less memory space. Notably, compared with Chen [19], all three versions of our method achieve reductions in both the estimation error and number of parameters, up to 53.3% and 89.6% respectively at most. Comparison in Real Images In this section, we compare three methods in five real images ( Figure 5 and Table 2). Due to the large gap in the field between the training set and the test set, the accuracy of all three methods has declined. Some estimation results of Chen's [19] have been especially unacceptably bad (Figure 5c). Table 2 shows that the estimation error of our method is still much smaller than that of the other two methods, which proves that the generalization ability of our method is stronger. Ours-small and ours-nano do worse than Park [18] in three metrics in Table 2. We consider the reason that the small number of parameters limits their generalization ability. However, both ours-small and ours-nano still achieve better estimation than Chen [19]. Performance with Different Background In this section, we compare three methods in images with different backgrounds (Figure 6c,d). Among the 1000 synthetic images, 506 have Earth backgrounds with different degrees of complexity ( Figure 4). We divided the test set images into two groups with Earth backgrounds (EB) and pure black backgrounds (BB) to test the estimation errors of three methods. Figure 6c,d shows that our method achieves better pose estimation than Park [18] and Chen [19] in either EB or BB. Performance in Different Relative Distance In this section, we compare three methods in images with different relative distances to the spacecraft (Figure 6a,b). 
In the 1000 test images, we took 100 images as a group to divide the images of the test set into 20 groups in the order of relative distance. We draw the translation error and orientation error curves at different relative distances respectively. Figure 6a,b show that our method can maintain a very high prediction accuracy in each distance segment. Park's [18] method has a greater estimation error in both too short and long distances. This is reasonable for when the spacecraft is very close, a part of the spacecraft often falls out of the camera's field of view, called occlusion, a common challenge for object detection and segmentation [47]. When the spacecraft is far, its features in the image will become coarse, making it more difficult for the keypoints detection module to work well. Chen's method [19] has good accuracy of translation estimation in each distance segment, but the error of orientation estimation is still affected by the too-long or short relative distance. Our method achieves stable translation and orientation estimation accuracy over the full range segment, proving that our method is more capable of resisting target occlusion and recovering the feature of small spacecraft. Effective Uncertainty Prediction We conducted an ablation study to prove the effectiveness of our uncertainty prediction and keypoints selection strategy. According to Algorithm 1, we can only take UTS strategy or Top K strategy to select keypoints. If both strategies were not taken, we directly chose all eleven keypoints to estimate the pose with EPnP [24]. Here, we set µ and K n as 0.5 and 7 (Top 7) for the analysis in Section 4.7. Table 3 shows that both strategies can improve the accuracy of pose estimation of our method separately, and our complete keypoints selection strategy helps our method achieve the best estimation. We show four cases that demonstrate the effectiveness of our uncertainty prediction and keypoints selection strategy in Figure 7. The lower right corner marks the percentage reduction in the three-class estimation errors after removing the detection points in red. Our selection strategy succeeded in selecting accurate keypoints with effective uncertainty prediction to reduce the error of pose estimation. . Uncertainty Prediction Helps Reduce Pose Estimation Error. The blue points represent the key points that we retained for pose estimation, the red points represent the key points that we eliminated due to the high uncertainty, the yellow points represent the true positions of the eliminated keypoints, and we used the green dotted line Connect the corresponding yellow and red points. Although Chen [19] proposed an iterative trial-and-error method to remove some detected keypoints, they did not consider that this method would increase the time cost of the entire pose estimation process. Our method performs this by the uncertainty prediction of the network. Comparison between SA and LS In order to verify the superiority of using SA to solve the optimal problem in Equation (1), we recovered a new 3D wireframe model through LS for all the 22 images and analyzed the changes in the accuracy of the three versions of our method. Table 4 shows that all six error metrics for the three versions have increased when using LS to recover the wireframe model. Our SA method can help to obtain a more accurate 3D wireframe model under the noise from manual selection of images. 
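A sketch of the evaluation pipeline may be useful here: the per-image errors defined in the Evaluation Metrics section above are computed first, and the test set is then binned by relative distance as in Figure 6a,b. This is an illustrative reconstruction, not the authors' evaluation code; the translation error is assumed to follow the ESA SPEED convention of normalising by the ground-truth range, and the orientation error is taken as the angle between unit quaternions.

```python
import numpy as np

def pose_errors(t_pred, t_gt, q_pred, q_gt):
    """Per-image errors; q_* are unit quaternions (4-vectors), t_* are 3-vectors."""
    e_t = np.linalg.norm(t_pred - t_gt) / np.linalg.norm(t_gt)   # assumed normalised translation error
    dot = abs(np.dot(q_pred / np.linalg.norm(q_pred), q_gt / np.linalg.norm(q_gt)))
    e_r = 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))                # angle between the two orientations [rad]
    return e_t, e_r, e_t + e_r                                   # E_i = E_T,i + E_R,i

def errors_vs_distance(distances, e_t, e_r, n_groups=20):
    """Sort test images by relative distance and average the errors inside equally sized groups."""
    order = np.argsort(distances)
    size = len(order) // n_groups
    used = order[: size * n_groups].reshape(n_groups, size)
    return (distances[used].mean(axis=1),
            e_t[used].mean(axis=1),
            e_r[used].mean(axis=1))
```

Means and medians over the whole test set (meanE, medianE and so on) then follow directly from the per-image values with np.mean and np.median.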
Hyperparameters Analysis We conducted a hyperparameters analysis to study how K n and µ in Algorithm 1 affect the pose estimation of our method. We analyzed how the choice of K n affects the performance of our method. Although the EPnP algorithm [24] only requires more than three keypoints, since our detection result may have four coplanar points, which is fatal to the EPnP algorithm, we separately analyzed the change of three estimation errors when K n changes from five to eleven (Figure 8). Here we set the µ as 0.5. All three versions of our method show the same pattern of changes. As K n changes from five to seven, the estimation error decreases gradually, which is reasonable for larger point sets to introduce redundancy and reduce the sensitivity to noise [24]. However, when K n changes from seven to eleven, the estimation errors of three versions increases. We consider that compared with the top seven keypoints, the errors introduced by the last four keypoints are too large to improve the accuracy, which also proves the validity of our uncertainty prediction. We took our version to analyze how the µ affects the pose estimation of our method. Here, we set K n as 7, which works best. Figure 9 shows that all three errors decrease as the µ decreases, proving that the uncertainty our KDN predicts for each keypoint has a certain positive correlation with its detection error. We call the keypoints screened out by our keypoints selection strategy as refused keypoints. When the µ changes from 1.0 to 0.4, the average number of refused keypoints remains generally unchanged. When it changes to 0.2, this number begins to rise rapidly, which means that our method fails to complete the pose estimation from the corresponding images, since the number of available keypoints does not meet the acquirement of the EPnP algorithm [24]. Therefore, in practical applications, it is necessary to consider the trade-off between the continuity and accuracy of 6D pose estimation. Conclusions In this paper, we proposed a monocular pose estimation framework for space-borne objects, such as spacecraft. Our main contribution is to introduce the idea of area detection into the task of spacecraft keypoints detection and use the uncertainty of keypoints predicted by our KDN to automatically select keypoints with higher prediction accuracy to estimate the 6D pose of the spacecraft. Our method achieves a 53.3% reduction in pose estimation error with the reduction of the number of network parameters. In future work, we will study how to adaptively choose the k value of the Top k strategy to achieve a more effective trade-off between estimation precision and computational efficiency. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. 
Nomenclature

The following nomenclature is used in this manuscript:

b_i   Predicted box in the i-th image
b̃_i   Ground-truth box in the i-th image
c_i   Predicted keypoint category in the i-th image
c̃_i   Ground-truth keypoint category in the i-th image
C_i   Predicted confidence in the i-th image
C̃_i   Ground-truth confidence in the i-th image
U_i   Predicted uncertainty in the i-th image
Ũ_i   Ground-truth uncertainty in the i-th image
K     The number of keypoint categories
µ     Uncertainty threshold
C     Candidate keypoints set
D     Detected keypoints set
K_n   The number of keypoints used for pose estimation
q_i   Predicted orientation in the i-th image
q̃_i   Ground-truth orientation in the i-th image
t_i   Predicted translation in the i-th image
t̃_i   Ground-truth translation in the i-th image
7,033.6
2022-10-11T00:00:00.000
[ "Engineering", "Computer Science" ]
Faculty survey on upper-division thermal physics content coverage Thermal physics is a core course requirement for most physics degrees and encompasses both thermodynamics and statistical mechanics content. However, the primary content foci of thermal physics courses vary across universities. This variation can make creation of materials or assessment tools for thermal physics difficult. To determine the scope and content variability of thermal physics courses across institutions, we distributed a survey to over 140 institutions to determine content priorities from faculty and instructors who have taught upper-division thermodynamics and/or statistical mechanics. We present results from the survey, which highlight key similarities and differences in thermal physics content coverage across institutions. Though we see variations in content coverage, we found 9 key topical areas covered by all respondents in their upper-division thermal physics courses. We discuss implications of these findings for the development of instructional tools and assessments that are useful to the widest range of institutions and physics instructors. I. INTRODUCTION Thermal physics, which includes both thermodynamics and statistical mechanics, is a core course required for attaining a physics bachelors' degree at most institutions. However, anecdotally the material covered in thermal physics courses often varies between instructors and across institutions. This content variability poses a significant challenge in development of standardized thermal physics assessments and teaching tools that can be utilized by a wide range of instructors. Though there is a body of research surrounding student understanding of thermal physics concepts 1 , less is known about the breadth of topics covered in upper-division thermal physics courses. Here, we present findings from a survey distributed with the purpose of soliciting instructor priorities in upper-division thermal physics as a part of a broader research effort to develop a standardized upper-division thermal physics assessment. Findings may lay an important foundation for other researchers interested in developing course materials and assessments for thermal physics, and inform instructors in defining course objectives and content-foci for their thermal physics courses. In this paper, we begin by describing the process of constructing and distributing the survey (Sec. II). Then, we present results of the survey (Sec. III), including general course information, key concepts covered, and valued scientific practices. We also consider response consistency between survey responses and submitted syllabi, followed by an analysis of content variability across institutions. We conclude with a short consideration of implications of the survey and future directions (Sec. IV). II. METHODS The faculty survey was designed to solicit key information about thermal physics courses, such as content covered, general course structure and emphasis (thermodynamics, statistical mechanics, or both), and needs or interest in an upperdivision thermal physics assessment. This section describes methods for developing and distributing the survey with an emphasis on creating a format that was accessible and relatively short in duration, while still soliciting sufficient information. Survey Development: Prior to constructing the survey, a focus group was conducted with four experts, all with experience teaching thermal physics and researching student difficulties in thermal physics. 
The focus group solicited expert perspectives surrounding upper-division thermal physics, including textbooks, content coverage, learning goals, and existing thermodynamics assessments. Outcomes from the focus group informed several questions included on the survey. For example, participants discussed notational conventions as one major challenge for a thermal physics assessment (e.g. the sign convention of work). To address this concern, one question on the survey solicited specific notational issues worth considering in development of a thermal physics assessment. Additionally, textbooks brought up during the focus group comprised the list of textbook options provided on the survey. To faciliate ease of responses, the survey was a primarily multiple-response format with only a select set of questions being free-response. Thus, one of the first steps in survey development was determining which options to provide for various multiple-response questions. We began by investigating the scope of thermal physics in texts; we analyzed six thermal physics texts brought up during the focus group 2-7 for key content coverage. This process involved reviewing each text and identifying topical areas for each based on chapter titles, section headings, and emphasized key terms. Based on the frequency of topics appearing across the different texts, we classified topical areas into core topics and supporting topics. To put these into an accessible form for use in the survey, topics were sorted and condensed into 29 core topics, most with roughly 4 supporting topics (see Table I). For example, the core topic of "thermodynamic laws" had four supporting topics: 0th law, 1st law, 2nd law, 3rd law. Some core topics had no supporting topics (e.g. semiconductors) while some had as many as seven (e.g. energy and thermodynamic potentials); the one exception to this was statistical mechanics, which had 14 supporting topics. In addition to focusing on content, and in response to recent calls in science education literature for more consideration of scientific practices in course materials, assessment, and instruction 8 , the survey also solicited information on the scientific practices valued by respondents in their thermal physics courses. The list of scientific practices provided on the survey was pulled from the Next Generation Science Standards (NGSS) list of science and engineering practices 9 . In their list, the NGSS combined similar practices together (e.g. developing and using models); however, in upper-division courses, it is less clear that all paired practices would be targeted together. Thus, to collect more specific data about individual practices, paired NGSS practices were split into separate categories. For example, "developing and using models" was split into "developing models" and "using models" for the survey. The survey was administered through the survey platform Qualtrics and hosted by the University of Colorado Boulder (CU). The survey was divided into 4 major sections: (1) general course information, (2) content coverage, (3) scientific practices, and (4) interest in, and concerns about, an upperdivision thermal physics assessment. Respondents also had the option to identify their institution and submit their course syllabus. Additionally, gender and racial identity information were collected at the end of the survey. After initial construction of the survey, we solicited feedback from CU physics faculty who were familiar with teaching upper-division thermal physics. 
Based on these discussions, and informed by the frequency of topical areas appearing across the six different analyzed texts, we grouped the core topics into two categories: assumed core topics and other core topics. Assumed core topics are topics that one might expect are covered in every thermal physics course: energy and thermodynamic potentials; engines and refrigerators; entropy; equilibrium; monatomic gases; heat; temperature; thermodynamic laws; and work. The survey presented these assumed core topics at the beginning of Section (2) of the survey, with their supporting topics shown on the same page. A free-response textbox followed these assumptions to allow respondents to indicate disagreement with the assumptions made. All other core topics were provided on the following page of the survey without their supporting topics displayed. After selecting from the list of other core topics, associated supporting topics for each of the selected core topics were displayed on the following page. This conditional formatting was motivated by the desire to reduce respondent fatigue due to survey length. Survey Distribution: To ensure the information collected was reflective of a broad range of institutions, we collected contact information for a large variety of physics degreegranting institutions, including minority serving institutions (MSIs) and women's colleges, for use in distributing the survey. Institutions were identified using the American Physical Society's "Top Educators" lists 10 , each of which identifies 16-20 institutions with the highest average number of physics bachelors' degrees awarded by the institution per year. We also utilized the overall and underrepresented minority (URM) lists for Ph.D.-granting, MS-granting, and BSgranting institutions. Beyond that, we used the American Physical Society's MSIs list 11 , which included a list of Historically Black Colleges and Universities, Black-serving insitutions, and Hispanic-serving institutions, to identify all other physics-degree-granting MSIs not on the "Top Educators" lists; the MSI list included institutions with both large and small physics departments. We also identified women's colleges with the "Women in Physics" report produced by the American Institute of Physics 12 . We note that other small physics departments (e.g. those that are not Top Educators, or at MSIs or women's colleges) were not targeted in the initial distribution of the survey, but will be targeted in the broader project moving forward. After identifying institutions, we obtained contact information of department chairs from physics department websites. We then emailed the survey solicitation to the department chairs, with a specific request for the email to be forwarded to all faculty within their department who were currently teaching or had previously taught upper-division thermal physics. In addition to department chairs, the research team solicited the help of their professional contacts at different institutions to take the survey or forward it to faculty in their department. III. RESULTS The survey was open for response collection for three and a half months. During this time, 59 respondents fully com- pleted the survey while 2 completed all of the survey except questions regarding scientific practices and assessment. Only responses that completed the sections with core topics and supporting topics and beyond were used for analysis. 
We do not report response rate, as it is unclear how many people recieved the solicitation forwarded from their department chairs. We collected institutional information, including selectivity, research activity, student population, and highest physics degree offered via the Carnegie Classifications 13 and institutions' physics department websites. From the Carnegie Classifications, we identified 70% (N=34) of identifiable institutions as being selective or more selective with regards to admissions practices, while 31% (N=15) are considered "inclusive" institutions. Additionally, 18 schools are classified as having high or very high research activity. Overall, we identified 52 unique institutions from the survey, 28 of which were MSIs and/or women's colleges; one institution could not be identified and one was not in the Carnegie Classifications database. Figure 1 presents institution type by highest physics degree offered and MSI/women's college classification. In a few cases, (N=7) institutions were represented by 2-3 responses; it was evident from submitted syllabi and individual item reponses that these were submitted by different people. Course Information: We asked respondents if their course focused on thermodynamics, statistical mechanics, or both (thermal physics); 97% (N=59) selected thermal physics and the remaining 3% (N=2) of responses were split evenly between thermodynamics and statistical mechanics. Most institutions reported one semester of thermal physics (79%, N=48); some reported two quarters (10%, N=6) or two semesters (8%, N=5), while a small minority reported one quarter (3%, N=2). The student population was composed of mostly juniors (N=41) and seniors (N=39), though some (N=12) reported sophomores in the course as well. The majority of respondents (72%, N=44) reported using An Introduction to Thermal Physics by Daniel V. Schroeder 6 . Thermal Physics by Charles Kittel and Herbert Kroemer 4 was the second most frequently cited text (16%, N=10). All other texts appeared at a frequency of 7% or below. Most of the instructors (74%, N=45) teach with the assumption that their students have little to no prior exposure to thermal physics content. Some (N=19) expected familiarity with topics such as energy, heat, the first and second laws of thermodynamics, and the ideal gas law. A few (N=7) said they expect thermal physics exposure from the introductory physics sequence, though several noted that thermal physics is only covered for a few weeks, and sometimes not at all, in that sequence. These data show most institutions require one semester of thermal physics, most instructors use Schroeder's text 6 , and many instructors assume their students have no prior exposure to thermal physics content. These results suggest two implications for PER: (1) development of Schroeder-based thermal physics assessments and materials could serve many instructors and institutions, though would still exclude the sizable population of instructors and institutions who do not use that text; and (2) pretest administration of an upper-division thermal physics assessment may not produce meaningful measurements of student understanding of thermal physics content prior to taking the course. Key Topical Areas: Table I shows frequency of assumed supporting topics and other core topics. All assumed core topics (see Section II) appeared at a frequency of 100%; these frequencies are not reported in Table I. 
Frequency of supporting topics is given relative to the number of times the corresponding core topic was selected; the frequency of core topics is given relative to the total number of valid responses. We present frequencies of all other core topics, but do not present their 56 associated supporting topics or their frequencies due to space limitations. Four respondents reported teaching thermal physics but did not select statistical mechanics as a core topic. This result may be due to statistical mechanics being covered in their course but not seen as a core focus by the respondent; we note one of these respondents mentioned statistical distribution functions in a textbox but did not select statistical mechanics as a core topic. These results are relevant for researchers interested in materials and assessment development in upper-division thermal physics, and can be used to guide content-foci for those endeavors such that they serve a wide range of instructors and institutions. Scientific Practices: Of the 16 practices presented on the survey, three appeared at a frequency of over 85%: using mathematical thinking (98%, N=58), asking questions (95%, N=56), and using models (86%, N=51). Review of syllabi indicates the practice of "asking questions" may have been misinterpreted; the NGSS practice refers to asking scientific questions (namely for scientific investigations), but we suspect respondents may have interpreted this practice as referring to asking questions about content during class or office hours. The next most frequently appearing practices were constructing explanations (70%, N=41), communicating information (64%, N=38), and computational thinking (61%, N=36). The remaining 10 practices appeared at a frequency of 56% or less. These results highlight at most three scientific practices that stand out as valued by nearly all thermal physics instructors in our sample and demonstrate many other scientific practices are less of a universal focus for thermal physics courses at the upper-division level. Thus, researchers should pay particular attention to including opportunities for students to demonstrate and develop the practices of using models and using mathematical thinking in thermal physics-oriented materials and assessments. Response Consistency: As a verification of the survey data, we checked for consistency between survey responses and submitted syllabi for the 39 responses that provided a syllabus. We looked at key topics on syllabi and compared with the associated survey response to ensure topics appearing on the syllabus also appeared on the survey response. No core or supporting topics had more than 3 discrepancies when comparing between survey responses and the 39 syllabi. Discrepancies could be due to the amount of focus placed on those topics in the course. For example, Bose-Einstein condensates may appear on the syllabus but may not be seen as a major content focus for the instructor when completing the survey, resulting in a discrepancy between their syllabus and response. Some topics, such as large systems (N=10), interacting systems (N=8), and Boltzmann and/or quantum statistics (N=9), appeared in syllabi but did not appear as explicitly named core or supporting topics on the survey. However, those who included topics such as these on their syllabus selected other topics on the survey that encompass or require the same idea, such as multiplicity, thermal equilibrium, and statistical mechanics. 
Canonical ensembles (N=11) and thermodynamic identities (N=6) were the other most common topics that appeared on syllabi but were not provided as options on the survey. This analysis shows that the survey reliably captured the scope of content coverage for most survey responses without large discrepancies. Content Variability: To investigate the claim of content variation across upper-division thermal physics courses, we examined survey responses to see how many topics were selected by all instructors. We looked at the three groups of topics laid out in Table I: assumed core topics, assumed core topics' supporting topics, and other core topics. We found that 9/9 (100%) of assumed core topics, 5/32 (16%) of assumed supporting topics, and 0/20 (0%) of other core topics were selected by all respondents. When repeated with institutions with multiple responses (e.g. different instructors at the same institution), we saw an average of 72% of assumed supporting topics and 20% of other core topics chosen by all respondents at a given institution. These results support the anecdotal claim that upperdivision thermal physics content coverage varies both across institutions and between instructors at the same institution (though to a lesser extent). It also makes the case, however, that there are some topics, namely our assumed core topics, that all or most instructors prioritize in their upper-division thermal physics courses. IV. CONCLUSIONS Our data suggest important considerations for researchers and instructors interested in curricular materials and assessment development for upper-division thermal physics. Despite the demonstrated content variability within thermal physics, our results point to content-foci, scientific practices, and reference texts that can act as baselines for materials that can serve a broad range of institutions and instructors. The results presented here will lay the groundwork for development of an upper-division thermal physics assessment. In order for this assessment to be useful broadly, we carefully and deliberately collected data from institutions that serve a wide range of student populations. We recommend other researchers interested in making widely-available upper-division materials utilize similar methods in collecting input from a wide range of institutions to inform their work. Results from this survey can inform upper-division thermal physics investigations in PER and the methodology can be reproduced for investigation of the scope of other upper-division physics courses. ACKNOWLEDGMENTS This work was supported by funding from the Center for STEM Learning and the Department of Physics at CU. We thank S. Pollock and M. Dubson for their input in refining the survey, J. T. Laverty for his encouragement of including scientific practices, and all focus group participants. We also thank the department chairs who distributed the survey and the faculty who completed it. We are grateful for their time.
4,208.4
2019-11-24T00:00:00.000
[ "Physics", "Education" ]
Sputter deposition of highly active complex solid solution electrocatalysts into an ionic liquid library: effect of structure and composition on oxygen reduction activity † Complex solid solution electrocatalysts (often called high-entropy alloys) present a new catalyst class with highly promising features due to the interplay of multi-element active sites. One hurdle is the limited knowledge about structure–activity correlations needed for targeted catalyst design. We prepared Cr–Mn–Fe–Co–Ni nanoparticles by magnetron sputtering a high-entropy Cantor alloy target simultaneously into an ionic liquid library. The synthesized nanoparticles have a narrow size distribution but different sizes (from 1.3 ± 0.1 nm up to 2.6 ± 0.3 nm), different crystallinity (amorphous, face-centered cubic or body-centered cubic) and composition (i.e., high Mn versus low Mn content). The Cr–Mn–Fe–Co–Ni complex solid solution nanoparticles possess an unprecedented intrinsic electrocatalytic activity for the oxygen reduction reaction in alkaline media, some of them even surpassing that of Pt. The highest intrinsic activity was obtained for body-centered cubic nanoparticles with a low Mn and Fe content, which were synthesized using the ionic liquid 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [Emim][(Tf)2N]. Introduction The discovery of novel highly active catalyst materials such as multinary metal alloys forming a complex solid solution (CSS) phase with a multifunctional surface structure 1-5 opens opportunities for replacing scarce catalysts, such as the benchmark Pt catalyst for the electrocatalytic oxygen reduction reaction (ORR). Increased entropy in multinary alloys with multiple principal elements can stabilize the solid solution state, and hence these materials are often denoted as high-entropy alloys (HEA). CSS nanoparticles (CSSNPs) based on non-noble metals such as Cr, Fe, Mn, Ni and Co have been demonstrated to be excellent candidates for ORR catalysis. [1][2][3][4][5][6][7][8][9][10][11][12][13][14] In particular, CSS catalysts might have the capability to overcome existing limitations regarding position in volcano plots and scaling relations. 6,15 However, little is known about structure- and composition-activity correlations of CSS catalysts, and an increased understanding is a crucial missing piece of information for targeted catalyst design. Traditional wet synthesis routes are suitable for the synthesis of single-element or binary alloys. 16,17 The synthesis of single-phase CSSNPs demands a technique with great control over the nucleation process down to the atomic scale, which can be achieved by combinatorial co-sputtering into ionic liquids (ILs), 18,19 which offers almost unlimited flexibility in the choice of the pure elements used. ILs are excellent dispersion media to control the size, structure and composition of CSSNPs. 20,21 Additionally, ILs act as a medium for growth and as a stabilizer for highly stable colloidal solutions, 22 devoid of any other chemical stabilizers which could affect the catalytic properties of the CSSNPs. CSSNP compositions can be defined by co-sputtering of several elements simultaneously from individual targets or by using alloy targets. 18,21 In addition, the desired phase, crystallinity and size of the CSSNPs can be modulated by means of a post-annealing treatment. 21 Co-sputtering is a high-throughput deposition technique, which allows for screening, but upscaling of the production of CSSNPs also seems feasible.
23 In an earlier work 1 we used 1-butyl-3-methylimidazolium bis (trifluoromethylsulfonyl)imide [Bmim][(Tf ) 2 N] as IL to synthesize binary, quaternary and one type of quinary CSSNPs. The quinary NPs were in the as-grown state amorphous and had an average composition of Cr 39 Mn 2 Fe 12 Co 24 Ni 23 21 and they showed similarly high electrocatalytic ORR activity as Pt-NPs prepared by sputtering into the same IL. 1 However, other ILs were not used for the synthesis of quinary NPs until now. We used before the concept of sputtering into a library of ILs with different anions and cations for the formation of pure Ag NPs. 20 An influence of the IL's molecular structure on the NPs size and morphology was found. These results motivated us to synthesize Cr-Mn-Fe-Co-Ni NPs by sputtering into a library of eight different ILs. Identical process parameters as ensured by simultaneous preparation as well as cleaning steps allow to compare the effect of different ILs on the CSSNPs formation and their behavior during the ORR. Aberration-corrected (scanning) transmission electron microscopy ((S)TEM) complemented with energy-dispersive X-ray spectroscopy (EDS) are used to characterize the CSSNPs. The results show that the CSSNPs size, crystal structure, and composition depend on the used IL, and accordingly variations in the electrocatalytic ORR activities are observed. Using this approach, we identify several CSSNPs with a high intrinsic activity even higher than Pt. Results During the CSSNPs synthesis, an identical flux of neutral sputtered atoms from the alloy target arrives at the surface of IL library (Fig. 1). The IL library contains eight IL, from which five have the same base cation (1-butyl-3-methylimidazolium) but different base anions, while the other three have the same base anion (bis(trifluoromethylsulfonyl)imide) but different base cations. Their molecular structure is displayed in Fig. 1b. We choose these 8 different ILs as we want to study systematically the effect on the molecular chain, lengths and structure. In addition, they have a high purity and stability and low water content. The different properties of the ILs are expected to lead to differences in NP properties. We investigated the intrinsic electrocatalytic activity of the individual CSSNPs after NP immobilization at etched carbon nanoelectrodes from IL as demonstrated and established in an earlier work. 1 The voltammetric activity curves are summarized in Fig. 2 and indeed, a strong effect of IL-dependent NP properties on activity can be observed. The group of IL 1, IL 6, IL 7, and IL 8 presents the highest catalytic performance (with the order IL 6 > IL 7 > IL 8 > IL 1) followed by the group of IL 3, IL 4 and IL 5. CSSNPs synthesized with IL 2 have an intrinsic activity which lies between the two groups and is similar to the benchmark Pt NPs. Please note that all curves including Pt seem to be shifted towards higher overpotentials due to the low mass loading of the isolated NPs. As described in more detail in the ESI of ref. 24, a mass loading reduced by two orders of magnitude and constant background noise level provided by the carbon electrode and a Tafel slope of 80 mV dec −1 would imply a visible Pt current exceeding the background current at 160 mV higher overpotential. This finding is in accordance with ref. 
1, where we also used IL 2 to produce one amorphous quinary NP system (Cr 39 Mn 2 Fe 12 Co 24 Ni 23 ), which outperformed binary and quaternary CSSNPs (an in situ phase transformation video showing the transition from the amorphous to the crystalline state is added in the ESI †). In the present work, the most active CSSNPs show considerably higher intrinsic activity than the benchmark Pt NPs, showcasing the high potential of NP property optimization to exploit the paradigm changing concept of CSS catalysts. Note that a completely stable "blank" electrode signal cannot be maintained, implying that a slight overcorrection is performed by subtracting the electrode polarization curve prior to NP immobilization. 25 This effect causes an "oxidative bump" at about 700 mV vs. reversible hydrogen electrode (RHE) where usually the current increase of the blank electrode becomes visible. For the active samples (CSSNPs in IL 1, IL 2, IL 6, IL 7 and IL 8) this effect does not play a role since they reach the maximum current already at lower overpotentials. For the less active samples (CSSNPs in IL 3, IL 4 and IL 5), however, a deviation at low currents is obtained, which is counteracted once the kinetic CSSNP current starts to dominate at higher overpotentials and the influence of the carbon instability becomes insignificant. Such a bump is also observed when subtracting two consecutive blank electrode cycles without any NPs attached and, therefore, it is not attributed to any NP property. All polarization curves were normalized by the plateau current of the first current wave, which provides a mass loading independent information about the position of the most favorable adsorption peak and thus, the potential in intrinsic activity for this catalyst. 24 In order to clarify the origin of the differences in the CSSNPs activities, aberration-corrected high resolution (HR) TEM and EDS data acquisition in STEM were required to analyze the size, structure and composition of the CSSNPs at the atomic scale. The average size of the CSSNPs ranges from 1.3 ± 0.1 nm to 2.6 ± 0.3 nm for the different ILs (Fig. 3). Histograms for the size distribution were obtained for the different CSSNPs by analyzing up to 20 HRTEM images. The data are presented in the ESI Fig. S1. † The crystal structure of the CSSNPs could not be determined by selected area diffraction experiments due to the high contribution from the amorphous signal of the carbon-coated TEM grid. Thus, crystal structures were analyzed by fast Fourier transformation (FFT) and the results are shown as insets in Fig. 3. As the CSSNPs were too small to be tilted into a specific zone axis HRTEM images were taken with random orientation of the CSSNPs. The FFT analysis revealed that by choosing the appropriate IL, the CSSNPs can be synthesized in either amorphous, face centered cubic (fcc) or body centered cubic (bcc) state (Fig. 3). We showed earlier that by ex situ annealing of the initially amorphous NP prepared by DC sputtering, or by high-power impulse magnetron sputtering crystalline NPs can be produced using IL 2. 21 Here we demonstrate that depending on the IL used we can synthesize CCSNPS with different crystal structures. It should be noted that the different crystal structures are not due to crystallization caused by the electron beam as they were all acquired with similar electron dose rates and at the same 300 keV energy of the incident electron beam. 
To analyze the crystal structure of the different CSSNPs, we used the second reflection due to the following reason. For the fcc structure, the first reflection is {111} and the corresponding lattice spacing is d 1 = 0.208 nm and the second is {022} with d 2 = 0.180 nm. For bcc, the first reflection is {011} corresponding to d 1 = 0.205 nm and the second {002} with d 2 = 0.145 nm. Thus, while the difference in the lattice plane spacing {111} for fcc and {011} for bcc is too small, they can be distinguished by the lattice planes with the higher Miller indices. CSSNPs synthesized in IL 2, IL 3 and IL 7 do not show a long-range order and are thus considered amorphous. A fcc structure of the CSSNPs was found when using IL 1 and IL 4, while those synthesized with IL 5, IL 6 and IL 8 possess bcc structure. In general, the CSSNPs in the ILs showed a high stability over time and no agglomerates in the colloidal solution were formed after more than one month. The chemical composition of individual CSSNPs was analyzed by EDS in STEM mode. EDS quantification was achieved using the non-matrix factorization, three main components were identified which were related to the individual CSSNPs, the surrounding IL and the noise/background stemming from the TEM carbon grid. Only the component representative for the individual NPs was quantitatively analyzed of the EDS data by applying principal component analysis. 26 In order to get compositional values with high accuracy, we determined k-factors from the Cr 18 Mn 20 Fe 20 Co 21 Ni 21 target. The chemical composition was calculated and averaged for ∼20 CSSNPs. The quantitative analysis revealed that the chemical composition of the individual CSSNPs depends on the IL used in the synthesis. The data together with the observed crystal structure are summarized in Table 1. Fig. 4 shows exemplary the EDS elemental mapping from the CSSNPs synthesized using IL 8. For these NPs as well as for the ones prepared using the other ILs a homogenous distribution of the five elements was observed. Discussion The synthesis of CSSNPs by sputtering of a Cr 18 Mn 20 Fe 20 Co 21 Ni 21 target simultaneously in an IL library containing 8 different ILs resulted in different particle sizes, crystal structures, and compositions. These differences emphasize the importance to analyze the ultra-small NPs individually with aberration corrected (S)TEM and analytical techniques such as EDS. If only average chemical compositions would be considered, a conclusion on structure-properties correlations would be challenging. The results on the use of different ILs shows the important finding that the molecular structure of the IL influences the size, the composition and the crystal structure as discussed in more detail below. It also demonstrates the capabilities of the synthesis protocol where we kept all parameters identical except for the IL involved which allows to draw the conclusion that the NP properties can be tailored by the choice of the IL. The chemical composition for the individual CSSNPs, as shown in Table 1, reveals that the sputtered atoms are not interacting equally with the different ILs. If an atom has a sufficient high energy for diffusion inside the IL, it can cluster together with other atoms forming nuclei, which need to overcome a critical nucleus radius to start the growth of CSSNPs. 
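As a brief aside to the structure assignment above, the reflection arithmetic can be checked in a few lines. The lattice parameters below (a ≈ 0.360 nm for fcc and a ≈ 0.290 nm for bcc) are assumptions chosen to reproduce the quoted first-order spacings, not values reported in the text; note that with these parameters the 0.180 nm spacing corresponds to the fcc {002} planes, whereas {022} would give roughly 0.127 nm.

```python
import math

def d_spacing(a, h, k, l):
    """Interplanar spacing of a cubic lattice: d = a / sqrt(h^2 + k^2 + l^2), in the units of a."""
    return a / math.sqrt(h * h + k * k + l * l)

a_fcc, a_bcc = 0.360, 0.290   # assumed lattice parameters in nm

for phase, a, reflections in [("fcc", a_fcc, [(1, 1, 1), (0, 0, 2), (0, 2, 2)]),
                              ("bcc", a_bcc, [(0, 1, 1), (0, 0, 2)])]:
    for (h, k, l) in reflections:
        print(f"{phase} {{{h}{k}{l}}}: d = {d_spacing(a, h, k, l):.3f} nm")
```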
Thus, the composition of individual CSSNPs should be also related locally to the chemical environment and, hence, to the elements of the ILs surrounding the individual nuclei during earlier stages of the nucleation process. 27 Furthermore, there are three main observations. First, we only observed up to 11 at% of Mn in the CSSNPs expect for IL 3. From this we can speculated that all the others ILs can act as a chemical barrier for Mn diffusion from the surface at the IL droplet/volume filling into the cavity interior. For the IL 3 the higher content of Mn can be due to the absence of fluorine groups. 28 Second, Ni shows for IL 3 a composition of only 3 at% inside of the CSSNPs. This phenomenon can be due to the specific morphology and charge distribution of the IL 3, which is not favorable for Ni diffusion. Third, Fe has a low concentration <10 at% in IL 6. The main difference between IL 6 and IL 7 is in their ethyl group form base cation. Thus, it seems that a larger group is needed to allow Fe to enter inside the IL matrix. The size and the crystal structure are affected by the type of the IL used. The sputtered atoms can diminish the hydrogen bonding network of the IL and lead to a difference in the electrostatic and van der Waals interaction. Furthermore, the Coulomb forces of ILs and the different interactions can lead to various growth mechanisms yielding different crystal structures and sizes, which could be related to the functional group of the IL. 28 The comparison of ORR activity at a required overpotential to reach −0.5 a.u. on the normalized current scale is summarized in Table 2 and is correlated to composition, particle size and crystallinity. On a first glance it looks like that the group of the most active CSSNPs were obtained irrespective of their crystallinity as they possess a bcc, amorphous or fcc state. However, when comparing the ones with similar sizes (those synthesized with IL 6, IL7 and IL 4) we observe that the one with bcc structure (IL6) has a higher activity then the one in an amorphous state (IL7) and the one with a fcc structure. Thus, our results indicate that the crystalline structure does play, at least, a minor role. The NP size is known to affect the activity due to higher surface areas of smaller particles and different ratios of edge/corner active sites compared to regular lattice plane active sites. 29 Due to normalizing the current, differences in surface area are already taken into account and the effect on activity would directly displayed by activity trends However, there is no pronounced trend visible, indicating a minor effect compared to the other NP properties. Four samples show very high activity, namely CSSNPs prepared with IL 1, IL 6, IL 7, and IL 8 (Fig. 3). CSSNPs in IL 2 is following with a small, but clear shift of 120 mV and finally, three samples show a very low activity (CSSNPs in IL 3, IL 4 and IL 5). The different activities can be predominantly related to the different CSSNPs compositions which are determined by the used ILs. Three out of four samples with low activity contain one element with a content of only 1-2 at% (see Table 2), which might be below the threshold of still preser-ving a CSS phase. In the literature the minimum content in a CSS phase for all elements should be at least 5 at%. 25 For the four active samples, each element is at least present with 5 at% and the NPs are assumed to possess a CSS structure. 
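The literature criterion cited above, that each element must be present at no less than 5 at% for a CSS phase, can be expressed as a one-line check; the composition used in the example is illustrative, not a measured value.

```python
def is_complex_solid_solution(composition_at_pct, threshold=5.0):
    """composition_at_pct: dict mapping element -> atomic percent from, e.g., EDS quantification."""
    return all(fraction >= threshold for fraction in composition_at_pct.values())

# A hypothetical low-Mn particle fails the criterion because Mn sits below 5 at%.
print(is_complex_solid_solution({"Cr": 30, "Mn": 2, "Fe": 20, "Co": 25, "Ni": 23}))  # False
```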
30 Thus, the fluorocarbon from the ILs can play an important role in generating more even content NPs since it was present in all ILs stabilizing the active class of NPs (IL 1, IL 2, IL 6 and IL 8). Comparing the different samples, the composition appears to be most important. This is in line with the current understanding that the presence of a complex solid solution and its constitution at the surface are responsible for the activity. To reach high activities, increased content of Cr and Co are required. The high activity differences showcase the importance of composition optimization for CSS catalysts. ]) present even higher catalytic ORR activity than the benchmark Pt NPs, emphasizing the potential of this material class to obtain unprecedented activities induced by the synergy of the multi-element active sites. We investigated the effect of ILs on particle size, crystallinity and the elemental composition by HRTEM and EDS. We revealed that size and crystallinity effect the activity, but the largest contributions is due to the composition which induces large activity differences even though the same set of elements is used throughout. Thus, our work provides improved understanding how to modify the CSSNPs and adapt them to versatile applications as for the ORR. Synthesis of CSSNPs by sputtering into an IL library 20 CSSNPs were synthesized by sputtering in the following ILs (IoLiTec-Ionic Liquids Technologies): (1) IL 1: 1-butyl-3-methylimidazolium bis( perfluoroethylsulfonyl)imide [Bmim][(Pf ) 2 N], Table 2 Intrinsic ORR activity of CSSNPs sputtered in different ILs together with their related composition trend, particles size and crystal structure. The different colors within the composition denote the molar ratio of the respective element from 40% (violet) to 1% (yellow) Fig. 1. Sputter deposition 21 The sputter process was performed in a magnetron sputter system (AJA POLARIS-5, AJA International) with 1.5 inch diameter cathodes and a DC power supply (DC-XS 1500 from AJA International Inc., North Scituate). Before synthesis the cavity holder was cleaned by ultrasonication for 30 min each in isopropanol and acetone. A lid was used to expose only 36 cavities. The cavities were filled with the different ILs containing a volume of 40 µL per cavity 20 under Ar gas atmosphere with a purity of 99.9999%. Four cavities were filled with identical IL to increase the total IL volume (Fig. 1). Before sputtering, the ILs were evacuated for three days in the sputter chamber to remove air and moisture until a pressure of 1.7 × 10 −4 Pa was reached. An alloy target was used for sputtering, with a composition of Cr 18 Mn 20 Fe 20 Co 21 Ni 21 , as analyzed by inductively coupled plasma mass spectrometry (ICP-MS), and a purity of 99.95%. Sputtering was performed at 30 W (312 V, 95 mA) for 2 h with rotation of the substrate holder of 30 rotations per minute and an angle of 12°between the target and the cavity holder. After plasma ignition (1.33 Pa, 20 W and 2 min precleaning step prior to sputtering) the Ar pressure was fixed to 0.5 Pa, the power was set to 30 W and the shutter in front of the sputter cathode was opened for the desired time. After the sputter process, the MNP/ILs suspensions were collected and stored under Ar atmosphere in a glovebox (oxygen and water content <0.5 ppm). 
Electrochemical measurements of CSSNPs on nanoelectrodes Etched carbon nanoelectrodes were obtained by preparation of nanopipettes using laser pulling (Sutter Instruments P-2000) of quartz glass capillaries (Sutter Instruments, outer diameter 1.2 mm, inner diameter 0.9 mm). Two capillaries with a conically-shaped end with opening between 100 nm and 250 nm were obtained. These nanopipettes were flushed with a propane/butane gas mixture from Campinggaz and subsequently heated with a torch at the conical end in an Ar counterflow to fill the capillary with a carbon film. In order to increase the electrode surface area, the thin quartz capillary at the tip apex was removed by etching in 5 : 1 buffered hydrofluoric (HF) solution containing 40% hydrofluoric acid (aq.) (AnalaR NORMAPUR):40% NH 4 Cl (aq.) (Sigma Aldrich) by immersion of the tip apex for 4 min. Afterwards, the tip apex was immersed into water to remove any contaminants or HF residues. 31 Electrochemical measurements were performed in a three-electrode setup comprising a miniaturized Agar Ag/ AgCl (3 M KCl) reference electrode, a carbon cloth in a second compartment (0.1 M KOH) as counter electrode and the etched carbon nanoelectrodes as the working electrode. A potentiostat ( pgu-BI 100 from ips-jaissle) was used for activity measurements of the blank carbon electrode as well as after immobilization of CSSNPs in 0.1 M KOH in a Teflon beaker. Cyclic voltammograms (CVs) were measured in a potential range between 0 mV and −800 mV vs. Ag/AgCl (3 M KCl). After every three cycles, the electrodes were lifted and immersed back into the electrolyte to invoke convection and air contact of the electrode surface. This procedure was repeated until three consecutive cycles reached a stable response. The last cycle served as "blank electrode" cycle. After immobilization of CSSNPs, three CV cycles were performed, and the last cycle was used as "electrode + MNP response". By subtraction of the blank electrode current, the CSSNPs signal was obtained as described previously. 25 For each sample, a new nanoelectrode was used and the KOH solution was exchanged. CSSNPs immobilization, after the "blank electrode" CV, was obtained with the electrodes immersed for 30 min into a suspension of 35 µl of NPs in IL, 300 µl pure IL and 300 µl EtOH while applying a potential of −400 mV vs. Ag/AgCl (3 M KCl). Electrodes and potentiostat were the same as for activity measurements. TEM characterization of CSSNPs TEM characterization of CSSNPs was carried out using two different Titans 80-300 X-FEG (Thermo Fischer Scientific) operated at 300 kV, one equipped with an image corrector and the other one with a probe corrector. A metal-oxide-semiconductor (CMOS) camera with 4k × 4k pixels was used to record TEM images. EDS was carried out in STEM mode using a beam current of ∼150 pA and a beam size of ∼0.2 nm. For each sample, approximately 100 CSSNPs were studied. The chemical composition was determined by quantifying the EDS data. In order to investigate the crystal structure of the CSSNPs, FFT were calculated from high-resolution (HR)TEM images, each having a size of 33.5 × 33.5 nm 2 , averaging over several CSSNPs. Holey carbon-coated Au grids (200 mesh, Plano) were used to prepare TEM samples. An amount of 2.5 µL IL for each sample was dropped on the carbon-coated side and left for adhesion for 2 h. Subsequently, dried acetonitrile was used to clean the grid dropwise for 1 h under Ar atmosphere. The final grid was stored inside of a vacuum chamber under Ar atmosphere. 
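The activity curves discussed above result from two post-processing steps, subtraction of the blank-electrode cycle and normalisation by the plateau current of the first wave. A minimal numpy sketch of these steps is given below, assuming both cycles are recorded on the same potential grid (otherwise interpolation would be needed first); it is an illustration of the described procedure, not the authors' analysis code.

```python
import numpy as np

def cssnp_activity_curve(E, i_with_np, i_blank, plateau_window):
    """
    E:              potential axis [V vs. RHE], identical for both cycles
    i_with_np:      current of the carbon nanoelectrode after CSSNP immobilisation
    i_blank:        current of the bare ("blank") carbon nanoelectrode
    plateau_window: (E_low, E_high) interval over which the first current wave plateaus
    Returns the background-corrected curve normalised by the plateau current.
    """
    i_np = i_with_np - i_blank                                  # remove the carbon background
    mask = (E >= plateau_window[0]) & (E <= plateau_window[1])
    i_plateau = np.mean(i_np[mask])                             # plateau current of the first wave
    return i_np / abs(i_plateau)                                # mass-loading independent activity curve
```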
Conflicts of interest There are no conflicts to declare. Nanoscale Paper This journal is © The Royal Society of Chemistry 2020
5,555.2
2020-11-16T00:00:00.000
[ "Materials Science", "Chemistry" ]
Description of a Non-Canonical AsPt Blue Species Originating from the Aerobic Oxidation of AP-1 in Aqueous Solution The peculiar behavior of arsenoplatin-1, ([Pt(µ-NHC(CH3)O)2ClAs(OH)2], AP-1), in aqueous solution and the progressive appearance of a characteristic and intense blue color led us to carry out a more extensive investigation to determine the nature of this elusive chemical species, which we named “AsPt blue”. A multi-technique approach was therefore implemented to describe the processes involved in the formation of AsPt blue, and some characteristic features of this intriguing species were revealed. Introduction Arsenoplatin AP-1 ([Pt(µ-NHC(CH3)O)2ClAs(OH)2]) is a promising dual-function inorganic drug that was developed and characterized in the laboratory of Thomas O'Halloran a few years ago as a prospective anticancer agent [1,2]. The chemical structure of AP-1 is shown in Figure 1. AP-1 is a chimeric species possessing an arsenous acid moiety bound to a platinum(II) center with an uncommon five-coordinate As(III) geometry. The presence of the Pt-coordinated As(III) center makes the reactivity of AP-1 markedly different from that of cisplatin. The replacement of the chloride ligand with other small ligands in aqueous solutions is rapid at room temperature because of the very strong trans effect of the arsenic atom [1]. Notably, this chimeric complex, arising from the condensation of a cisplatin-like Pt(II) moiety with the arsenite anion, combines into a single molecular entity the favorable pharmacological properties of both cisplatin and trisenox, two FDA-approved inorganic anticancer drugs. The activation of AP-1 seems to rely on the cleavage of the Pt-As bond. Evidence for the progressive slow breaking of this bond within the cellular milieu was recently gained [2]. Further mechanistic, biological, and pharmacological studies on AP-1 are in progress [3]. Remarkably, when working with AP-1 aqueous solutions in the pH range of 5.5-8, we noticed that an intense blue color slowly develops with time under a variety of experimental conditions; the blue color is attributed to the progressive formation of a novel Pt-containing species (referred to as AsPt blue hereafter).
The first platinum "blue" compound was prepared by the German chemists Hofmann and Bugge at the beginning of the 20th century [4]. Notably, after more than one hundred years since the first discovery of these derivatives, several examples of platinum "blues" have been reported in the literature. To the best of our knowledge, these elusive compounds are related to polynuclear Pt-Pt structures, in which an unusual Pt(III) center is present in the metal bond chain, typically containing four Pt atoms (with an average Pt oxidation number of 2.25) [5-7]. Moreover, a few reported papers have also described the existence of mononuclear Pt(III) blue compounds, in which the presence of bulky ligands seems to hamper the formation of a direct Pt-Pt bond [8,9]. In this frame, the aim of the present study is to obtain insights into the molecular speciation of AP-1 in the above-mentioned conditions and possibly gain information about the formed blue compound. It would be of great interest to discriminate between the mononuclear and polynuclear natures of the formed blue species and investigate the presence of arsenic atoms in the chromophore. Formation of AsPt Blue in Solution In our case, the kinetics of the process of AsPt blue formation is greatly influenced by the applied solution conditions. Indeed, through a series of explorative experiments, we were able to establish that the formation of Pt blue is strongly favored by the presence of phosphate or carbonate ions in the solution and inhibited by an excess of chloride ions or by an acidic environment (pH < 5). Even more importantly, we observed that no blue color develops when working with the exclusion of dioxygen; this latter observation strongly suggests that AsPt blue is indeed an oxidation product of AP-1. A typical spectrophotometric profile documenting the formation of AsPt blue in 10 mM phosphate buffer at pH 7.4 is depicted in Figure 2, where the lowering of the LMCT band at 278 nm (ε = 1092 L mol−1 cm−1), previously attributed to AP-1 [1], is clearly visible due to its hydrolysis. At the same time, an intense and broad absorption band progressively grows around 595 nm, with two evident shoulders at 520 and 670 nm [10,11].
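For readers retracing the Figure 2 analysis, the residual AP-1 concentration can be estimated from the 278 nm LMCT band through the Beer-Lambert law with the molar absorptivity quoted above. A minimal sketch follows; the 1 cm path length is our assumption, not stated in the text.

```python
def ap1_concentration(a_278, epsilon=1092.0, path_cm=1.0):
    """Estimate AP-1 concentration (mol/L) from absorbance at 278 nm.

    epsilon : molar absorptivity in L mol^-1 cm^-1 (value quoted in the text)
    path_cm : optical path length in cm (assumed standard 1 cm cuvette)
    """
    return a_278 / (epsilon * path_cm)

# Example: an absorbance of ~0.55 corresponds to ~5e-4 M, i.e. the starting solution.
print(f"{ap1_concentration(0.55):.2e} M")
```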
The evolution of the blue band was monitored for 72 h. The formation of the blue species is relatively slow, follows a typical sigmoidal kinetic profile, and is still incomplete after 72 h of observation, although it is approaching saturation (Figure 2). Interestingly, the characteristic shape of the absorbance increase at 595 nm versus time suggests the existence of an initial induction period before the process of AsPt blue formation can begin (see Supporting Materials Table S1 for fitting parameters). This evidence is consistent with an autocatalytic process where the system should undergo a pre-activation step before starting to produce the blue species. This is something that is typical of several catalytic reactions, where the precatalyst converts into the actual catalytic species that determines the sigmoid shape of the reaction itself [12]. In this context, we can speculate that the induction period is due to the mechanism underlying the formation of the blue species itself. Indeed, the latter should form with the concomitant release of the chloride ligand from AP-1 and the coordinative assistance of the phosphate anion, together with the oxidizing atmospheric dioxygen. This hypothesis is well supported by further evidence that the formation of AsPt blue is inhibited by an acidic environment, the addition of NaCl, or working in an inert atmosphere.
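The sigmoidal growth of the 595 nm band (induction period followed by acceleration and saturation) can be quantified by fitting a sigmoid to the absorbance-time trace. Table S1 reports the authors' fit parameters; the logistic functional form and the data below are only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a_max, k, t_half):
    """Logistic growth: plateau absorbance a_max, rate k, midpoint t_half."""
    return a_max / (1.0 + np.exp(-k * (t - t_half)))

# Fourteen time points over 72 h, as in Figure 2; the values are invented
# to mimic an induction period followed by saturation.
t_h = np.linspace(0, 72, 14)
a_595 = logistic(t_h, 0.9, 0.15, 30) + np.random.default_rng(0).normal(0, 0.01, t_h.size)

popt, _ = curve_fit(logistic, t_h, a_595, p0=(1.0, 0.1, 35.0))
a_max, k, t_half = popt
print(f"A_max = {a_max:.2f}, k = {k:.3f} h^-1, t_1/2 = {t_half:.1f} h")
```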
The formation of the AsPt blue species was then analyzed using 1H NMR measurements. Time-dependent 1H NMR spectra of freshly prepared solutions of AP-1 showed progressive broadening of the 1H signals, a behavior that is tentatively ascribed to AP-1 oligomerization. To support this hypothesis, the Pt blue species was passed through filters of increasing cutoffs (Figure S1). We observed that the freshly prepared AsPt blue species was able to cross filters with cutoffs of 3 kDa. Regardless, in aged solutions, the AsPt blue chromophore could not cross filters with cutoffs of 3 kDa nor 10 kDa. This observation supports the view that the formation of the AsPt blue species encompasses both oxidation and oligomerization processes; apparently, the molecular masses of the resulting oligomers grow to above 10 kDa. As stated above, in the absence of dioxygen, no AsPt blue formation is observed, yet a similar AP-1 oligomerization process takes place, leading to the formation of a scarcely soluble Pt(II) species that we earlier called Pt white; this species was preliminarily characterized in a previous work of ours [13]. Pt White: A More Advanced Characterization The Pt white species has been further characterized here, as it can serve as a valuable basis of comparison for AsPt blue. The CP/MAS 1H-13C spectrum of Pt white (Figure 3a) shows two signals at δ 16.9 and δ 171.7, ascribable to the CH3 and CONH groups of the Pt-coordinated acetamidate ligands. The corresponding solid-state static 195Pt NMR spectrum is shown in Figure 3b and reveals that the isotropic 195Pt chemical shift is δiso = −3746 ppm. The values of the three principal components of the chemical-shift tensor are δ11 = −1158 ppm, δ22 = −3885 ppm, and δ33 = −6196 ppm. Such values are comparable to those found for AP-1 (whose solid-state static 195Pt NMR spectrum is reported in the SI), for which δiso = −3643 ppm, δ11 = −1000 ppm, δ22 = −3734 ppm, and δ33 = −6196 ppm, indicating that the structure of the two species is roughly similar.
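The isotropic shifts quoted above follow directly from the three principal components of the chemical-shift tensor (δiso is their average), which gives a quick consistency check on the reported values:

```python
def delta_iso(d11, d22, d33):
    """Isotropic chemical shift as the mean of the three principal components."""
    return (d11 + d22 + d33) / 3.0

# Pt white: (-1158 - 3885 - 6196)/3 = -3746.3 ppm (matches the quoted -3746 ppm)
print(round(delta_iso(-1158, -3885, -6196), 1))
# AP-1:     (-1000 - 3734 - 6196)/3 = -3643.3 ppm (matches the quoted -3643 ppm)
print(round(delta_iso(-1000, -3734, -6196), 1))
```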
We demonstrated that Pt white originated from the polymerization of AP-1, wherein a hydroxide of another AP-1 moiety replaces the chloride ligand (see Scheme 1). Interestingly, this attribution is in nice agreement with our previous results concerning the proposed crystallographic model of the Pt white oligomeric species [13]. According to this model, the Pt white species features a dendritic growth process, with chloride ligand release, through the interaction of the two hydroxyl oxygens of an arsenous moiety of an AP-1 molecule with Pt atoms belonging to different AP-1 molecules.
AsPt Blue: Preparation, Analytical Results, and Absorption Spectra The above observations concerning the likely process of Pt blue formation prompted us to carry out further and more accurate studies to prepare, isolate, and characterize this novel Pt blue species in larger amounts. Based on several trials, a preparative procedure was determined to maximize the formation of this product. The procedure consists of dissolving AP-1 in 50 mM phosphate buffer at a pH of 7.4 in the presence of oxygen under continuous stirring. The blue color fully develops within 72 h. The AsPt blue species is then recovered through lyophilization of the aqueous buffer, shaking of the residual solid in methanol, and filtration of the formed suspension. The deep blue solution is then evaporated on a gently warmed watch glass, and the AsPt blue species is recovered by scratching off the thin black layer formed on the glass. Extensive analytical data were collected for the AsPt blue samples, which were compared with those of AP-1 and Pt white (see Table 1). In the elemental composition of AsPt blue, the total Pt + As content is significantly reduced compared to AP-1 and Pt white, and the As content experiences a far larger decrease than Pt; an increase in the oxygen percentage is noted as well. Chlorine is retained at a percentage similar to that of AP-1. These results suggest that the oxidation process probably involves an appreciable loss of platinum and a more extensive loss of arsenic, implying partial cleavage of the As-Pt bond. It is worth noting that these data were obtained through the combination of different analytical techniques, i.e., elemental analysis (for C, H, N, O, and Cl) and ICP-OES (for Pt and As). Moreover, the determination of the elemental composition becomes less and less accurate as the molecular size increases, and, realistically, AsPt blue is a large oligomeric species [14]. Considering all the above, we determined the deviation obtained from the theoretical 100% value to be acceptable and in agreement with the errors deriving from the use of different techniques. The electronic absorption spectrum was recorded for the Pt blue species dissolved in phosphate buffer at a pH of 7.4. The spectrum closely corresponds to the spectrum reported in Figure 2, with a broad and intense band centered at 600 nm. Attempts were also made to record an EPR spectrum for the AsPt blue samples in solution: the virtual lack of EPR signals points out that AsPt blue is predominantly diamagnetic in nature, this being nicely consistent with the NMR results (see below).
AsPt Blue: NMR Studies Since the 1H NMR spectra, as mentioned above, were poorly resolved due to the excessive line broadening caused by oligomerization and thus scarcely informative, we decided to study the AsPt blue species by 13C and 195Pt NMR spectroscopy. Solid-state 13C and 195Pt NMR measurements were carried out on the Pt blue species and compared to those of AP-1. Despite several attempts, we were not able to obtain either solid-state or solution-state 195Pt NMR spectra of Pt blue. The investigated spectral window spanned from +2000 to −6000 ppm. The CP/MAS 1H-13C spectrum of Pt blue is shown in Figure 4 and consists of several peaks centered around 23 ppm (CH3) and 180 ppm (C(O)N), suggesting the presence of different species containing the acetamidate ligand. The existence of several species may account for the lack of detection of 195Pt NMR signals due to the intrinsic low sensitivity of the 195Pt nuclei. On the other hand, the good quality of the CP/MAS 1H-13C spectrum of Pt blue and the narrow linewidths indicate that the AsPt blue species is essentially diamagnetic, in agreement with the above observations. It is likely that the AsPt blue species possesses a complex architecture where the various acetamidate moieties may experience different kinds of local environments.
AsPt Blue: XPS Measurements The fact that the AsPt blue species is most likely an oxidation product of AP-1, the disappearance of the 195Pt NMR signals, and the evident diamagnetism suggest that this species might contain pairs of magnetically coupled Pt(III) centers, in line with previous cases described in the literature [15-17]. This idea prompted us to exploit XPS spectroscopy to further analyze the Pt blue species and gain insight into the Pt oxidation state. The XPS spectrum of the Pt4f region recorded from a Pt blue sample is shown in Figure 5 and reveals that AsPt blue contains a mixture of high-oxidation-state Pt species. In particular, there is a main component with a doublet at BE(Pt4f7/2) = 73.2 ± 0.2 eV and BE(Pt4f5/2) = 76.5 ± 0.2 eV and a small contribution attributable to Pt(IV) with a doublet at BE(Pt4f7/2) = 74.5 ± 0.2 eV and BE(Pt4f5/2) = 77.8 ± 0.2 eV. As far as the main component at 73.2 eV is concerned, this BE value is intermediate between those typical for Pt(II) and those for Pt(IV) species. This contribution can be attributed to a Pt(III) species, in agreement with what was observed by Stadnichenko et al. [18], and accounts for approximately 90% of the total platinum amount. For comparison, the binding energies of the peaks of the doublet relevant to Pt white and AP-1 were found at 72.5 ± 0.2 eV and 75.8 ± 0.2 eV for Pt4f7/2 and Pt4f5/2 (Figure 6) and were attributed to the Pt(II) oxidation state [19]. In Figure 7, the XPS spectra of the As3d regions relevant to the Pt blue sample (a), Pt white (b), and AP-1 (c) are reported. Each signal was fitted with a doublet whose components, As3d5/2 and As3d3/2, showed a typical separation of 0.7 eV. The binding energies of the As3d5/2 peaks are very similar, i.e., 44.8 eV for Pt blue, 44.6 eV for Pt white, and 44.7 eV for AP-1, suggesting a formal arsenic oxidation state of (III) in all the samples [20]. Moreover, considering the surface atomic percentages of platinum, arsenic, and chlorine, these were recorded in all the investigated samples at a ratio of 1:1:1, as expected.
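The ~90% Pt(III) estimate follows from the relative areas of the two fitted doublets. The sketch below builds a simple two-doublet Pt4f model (Gaussian lineshapes, the statistical 4f7/2 : 4f5/2 area ratio of 4:3, and the splittings implied by the binding energies above) and shows how the Pt(III) fraction is obtained from the component areas. The lineshape, widths, and absolute areas are our assumptions, not the authors' fitting model.

```python
import numpy as np

def gaussian(x, center, area, fwhm):
    sigma = fwhm / 2.3548
    return area * np.exp(-0.5 * ((x - center) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def pt4f_doublet(x, be_7half, area, fwhm, splitting):
    """Spin-orbit doublet with the statistical 4f7/2 : 4f5/2 area ratio of 4 : 3."""
    return (gaussian(x, be_7half, area * 4 / 7, fwhm)
            + gaussian(x, be_7half + splitting, area * 3 / 7, fwhm))

be = np.linspace(68, 82, 1400)              # binding energy axis, eV
# Component areas chosen to reproduce the ~90% / ~10% split reported in the text.
area_pt3, area_pt4 = 9.0, 1.0
spectrum = (pt4f_doublet(be, 73.2, area_pt3, 1.4, 76.5 - 73.2)
            + pt4f_doublet(be, 74.5, area_pt4, 1.4, 77.8 - 74.5))

pt3_fraction = area_pt3 / (area_pt3 + area_pt4)
print(f"Pt(III) fraction of total Pt: {100 * pt3_fraction:.0f}%")
```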
Pt Blue: Vibrational Spectroscopy The Raman spectra of AP-1 and AsPt blue were obtained in the solid phase upon excitation at 514.5 nm and are shown in Figure 8. The most intense bands observed in AP-1 and highlighted in bold in the figure are due to the vibrational modes of the acetamide molecules that crystallize in the unit cell. The signals at 1127, 1592, and 3078 cm−1 are assigned to the OCNH acetamidate moiety of AP-1. These bands are also observed in the Pt blue sample at the same wavenumbers, suggesting that the acetamidate moiety is not significantly perturbed by the oligomerization process. Indeed, in the CH stretching spectral region, the low-intensity band of the CH3 stretching vibrations can be related to the stiffening of the three-dimensional structure in the Pt blue species. The low-frequency spectral region of AP-1, between 230 and 400 cm−1, is characterized by vibrational modes mainly involving the platinum atom. These vibrations consist of the coupling of Pt-Cl, Pt-As, or Pt-N stretching with O-As-O bending [21,22]. In comparison with AP-1, the Pt blue compound shows a very strong band centered at 124 cm−1 with a shoulder around 200 cm−1. According to previous data obtained for various Pt complexes [23-26], Raman frequencies in the range of 100-230 cm−1 can be considered specific signatures for the presence of Pt-Pt bonds with Pt in different oxidation states.
UV-Vis Experiments The solution behavior of AP-1 was assessed through spectrophotometric studies performed with a Cary 50 Bio UV-Vis spectrophotometer (Varian, Palo Alto, CA, USA) in the presence of 10 mM phosphate buffer, pH = 7.4. The obtained solution of AP-1 (5 × 10−4 M) was monitored in the wavelength range between 200 and 800 nm for 72 h at 25 °C. NMR Experiments Solid-state NMR analyses were performed on an Avance I 400 spectrometer (Bruker BioSpin GmbH, Rheinstetten, Germany), operating at a frequency of 100.6 MHz for 13C and 86.0 MHz for 195Pt, using a 4.0 mm HX MAS probe at 298 K. For the MAS and static (non-spinning) experiments, the samples were packed into zirconia rotors. The chemical shifts for 13C were referenced against SiMe4 (0 ppm) by using the methylene signal of adamantane (δ 38.48) as a secondary reference, while the 195Pt chemical shifts were referenced against H2PtCl6. The 1H-13C CP/MAS NMR experiments were acquired using a 3.25 µs proton π/2 pulse length, a νCP of 55.0 kHz, a contact time of 5.0 ms, a νdec of 76.9 kHz, and a recycle delay of 6.0 s. The spinning rate for the 1H-13C CP/MAS NMR spectra was 10,000 Hz. The static solid-state 195Pt NMR experiments were performed using the Cross-Polarization Carr-Purcell-Meiboom-Gill (CP/CPMG) pulse sequence [27,28]. The 1H-195Pt CP/CPMG spectra were obtained by collecting subspectra with a spectral width of 75 kHz and 50 Meiboom-Gill (MG) loops. For Pt white, seventeen subspectra were acquired using transmitter offsets spaced by 30 kHz (the first transmitter offset was set at −81,714.33 Hz). The subspectra were co-added using the skyline projection method. The acquisition time (1/τa) was adjusted to attain a spikelet separation of 4.4 kHz. The 195Pt spectra were obtained using a 3.25 µs proton π/2 pulse length, a νCP of 65.4 kHz, a contact time of 7.0 ms, a νdec of 77.0 kHz, and a recycle delay of 6 s. A two-pulse phase modulation (TPPM) decoupling scheme was used for the 1H decoupling.
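The co-addition of the seventeen CP/CPMG subspectra by the skyline projection method is, as we understand it, a point-wise maximum across the frequency-aligned subspectra (as opposed to a sum projection). A minimal sketch under that assumption, taking the subspectra as already interpolated onto a common frequency grid:

```python
import numpy as np

def skyline_projection(subspectra):
    """Co-add piecewise-acquired subspectra by taking the maximum at each point.

    subspectra: 2-D array of shape (n_subspectra, n_points), all placed on the
    same frequency axis beforehand.
    """
    return np.max(np.asarray(subspectra), axis=0)

# Illustrative: 17 offset acquisitions, each exciting a different spectral window.
rng = np.random.default_rng(1)
freq = np.linspace(-500_000, 10_000, 4096)     # Hz, spanning the broad 195Pt pattern
subspectra = [np.exp(-((freq - off) / 30_000) ** 2) + rng.normal(0, 0.01, freq.size)
              for off in np.arange(-480_000, 30_000, 30_000)]

full_pattern = skyline_projection(subspectra)
print(full_pattern.shape, round(float(full_pattern.max()), 2))
```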
XPS Experiments XPS analyses were performed on a scanning microprobe PHI 5000 VersaProbe II (Physical Electronics, Chanhassen, MN, USA). The instrument was equipped with a microfocused, monochromatized AlKα X-ray radiation source. The samples were examined in HP mode with an X-ray take-off angle of 45° (instrument base pressure ≈ 10−9 mbar). The size of the scanned area was about 1400 µm × 200 µm. Wide scans and high-resolution spectra were recorded in FAT mode for each sample, setting the pass energy values equal to 117.4 eV and 29.35 eV, respectively. To fit the high-resolution spectra, the commercial MultiPak software, version 9.9.0.8, was used. Atomic percentages were inferred from the peak areas, previously normalized by the MultiPak library's sensitivity factors. Adventitious carbon C1s was set at 284.8 eV and used as a reference. ICP-OES Measurements for Pt and As The determination of the Pt and As concentrations was performed using a Varian 720-ES inductively coupled plasma optical emission spectrometer (ICP-OES) (Varian, Palo Alto, CA, USA) equipped with a CETAC U5000 AT+ ultrasonic nebulizer (Teledyne, Omaha, NE, USA) in order to increase the method's sensitivity. A total of 100 µL of each of the aqueous solutions containing the Pt blue and Pt white species was transferred into PE vials and digested in a thermo-reactor at 80 °C for 8 h with 2 mL of aqua regia (HCl supra-pure grade and HNO3 supra-pure grade at a 3:1 ratio). After mineralization, ultrapure water (≥18 MΩ) was added to a final volume of 6 mL. All the samples were spiked with 1 ppm of Ge, used as an internal standard, and analyzed. Calibration standards were prepared through gravimetric serial dilution from a commercial standard solution of Pt and As at 1000 mg L−1. The following wavelengths were used: 214.424 nm for Pt, 188.980 nm for As, and 209.426 nm for Ge. The operating conditions were optimized to obtain the maximum signal intensity, and between each sample, a rinse solution of HCl supra-pure grade and HNO3 supra-pure grade at a 3:1 ratio was used to avoid any "memory effect". Raman Measurements Raman measurements were performed by means of a Renishaw 2000 spectrometer (Renishaw plc, Wotton-under-Edge, UK) equipped with the 514.5 nm line from an argon laser and an incident power of 300 µW, coupled with a Leica DLML confocal microscope (Leica, Wetzlar, Germany) with a 20× objective. The back-scattered Raman signal was collected and focused into a single grating monochromator (1200 lines mm−1) through 40 µm slits and detected using a Peltier-cooled CCD detector at −20 °C. The spectrometer was routinely calibrated with respect to the 520 cm−1 band of a silicon wafer.
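For readers reproducing the Pt and As quantification, the ICP-OES workflow described above amounts to a linear calibration on internal-standard-normalized intensities. The sketch below is generic: the Ge normalization and the emission lines are taken from the text, while the intensities, calibration levels, and the assumption of a simple linear fit are ours.

```python
import numpy as np

def calibrate_and_quantify(std_conc, std_signal, std_ge, sample_signal, sample_ge):
    """Quantify an analyte by ICP-OES with Ge internal-standard normalization.

    std_conc   : known concentrations of the calibration standards (mg/L)
    std_signal : analyte emission intensities of the standards
    std_ge     : Ge internal-standard intensities measured with each standard
    sample_*   : the corresponding readings for the unknown sample
    """
    ratios = np.asarray(std_signal) / np.asarray(std_ge)   # drift-corrected response
    slope, intercept = np.polyfit(std_conc, ratios, 1)     # linear calibration curve
    return ((sample_signal / sample_ge) - intercept) / slope

# Invented example for Pt at 214.424 nm with 1 ppm Ge monitored at 209.426 nm.
conc = [0.1, 0.5, 1.0, 5.0, 10.0]
c_pt = calibrate_and_quantify(conc,
                              [105, 520, 1040, 5150, 10300],
                              [1000, 1010, 995, 1005, 1000],
                              2080, 1002)
print(f"Pt in sample: {c_pt:.2f} mg/L")
```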
EPR Experiments The EPR spectra at X-band (ca. 9.4 GHz) were acquired using an Elexsys E500 spectrometer (Bruker GmbH, Billerica, MA, USA) equipped with an SHQ cavity and an ESR900 continuous-flow cryostat for low-temperature operation. The sample was prepared by transferring the AsPt blue solution into a standard EPR tube. Measurements were conducted both at room temperature and at 30 K, freezing the sample in liquid nitrogen before its insertion into the cavity. Conclusions On the basis of the data collected so far, the following interpretation of the process of AsPt blue formation can be proposed. In aerobic aqueous solutions, AP-1 may undergo a slow but progressive conversion into the so-called AsPt blue species. The latter appears to be the result of combined oligomerization and oxidation processes. Interestingly, an oligomeric white species, which was previously characterized, is obtained in the absence of dioxygen. Based on the results reported here, we can confirm that AsPt blue is an oligomeric species with a MW > 10 kDa. This species is mostly diamagnetic and is characterized by an intense absorption band at 600 nm. Analytical determinations revealed significant losses of platinum and arsenic in AsPt blue compared to AP-1. Despite the presence of some apparent heterogeneity, most likely due to the concomitant oligomerization/oxidation processes, we propose that the platinum centers in AsPt blue are predominantly in the +3 oxidation state, probably as diamagnetic Pt(III) pairs. This hypothesis is broadly supported by the XPS results and the 13C NMR spectra. The vibrational spectra provide further support for the Pt-Pt bond formation hypothesis by revealing a low Raman shift band specific to the Pt-Pt interaction. Attempts are being made to further characterize this species from a structural point of view. In any case, we have gathered enough evidence to document that this species is a non-canonical Pt blue, essentially diamagnetic in nature. Figure 2. Time-dependent spectrophotometric profiles showing Pt blue formation. AP-1 (5 × 10−4 M) incubated for up to 72 h in the presence of 10 mM phosphate buffer, pH = 7.4. The inset plot shows the variation in absorbance monitored at 595 nm. Fourteen time points from t = 0 to t = 72 h were considered.
Figure 5. XPS spectrum and relevant curve fitting of the Pt4f region recorded for AsPt blue. Figure 6. XPS spectra and relevant curve fittings of the Pt4f regions recorded for Pt white (a) and AP-1 (b). Figure 7. XPS spectra and relevant curve fittings of the As3d regions recorded for Pt blue (a), Pt white (b), and AP-1 (c). Scheme 1. Initial step of Pt white formation. Table 1. Elemental composition of the studied Pt-As compounds. * Without acetamide.
8,421
2024-07-01T00:00:00.000
[ "Chemistry" ]
Low-Temperature Fluoro-Borosilicate Glass for Controllable Nano-Crystallization in Glass Ceramic Fibers A fluorosilicate (FS) nano-crystallized glass ceramic (NGC) is one of the most commonly used gain materials for applications in optical devices due to its excellent thermal stability as well as high-efficiency luminescence. However, FS glass can hardly be used to prepare NGC fibers due to its high preparation temperature. Here, a series of low-temperature fluoro-borosilicate (FBS) glasses were designed for the fabrication of active NGC fibers. By modulating B2O3, the preparation temperature of FBS glass was reduced to 1050 °C, and the crystallization in FBS NGCs was more controllable than in FS NGC. The crystallization of the impure phase was inhibited, and single-phase rare earth (RE)-fluoride nanocrystals were controllably precipitated in the FBS NGCs. The 40Si-20B FBS NGC not only exhibited a higher optical transmittance, but its luminescence efficiency was also much higher than that of traditional FS NGCs. More importantly, NGC fibers were successfully fabricated by using the designed FBS glass as the core glass. Nanocrystals were controllably precipitated, and greatly enhanced upconversion luminescence was observed in the NGC fibers. The designed FBS NGCs provide high-quality optical gain materials and offer opportunities for fabricating a wide range of NGC fibers for multiple future applications, including fiber lasers and sensors. Introduction Active optical fibers have been extensively investigated owing to their potential for diverse applications in fiber lasers, fiber amplifiers and fiber sensors [1-3]. Generally, the properties of optical fibers are deeply governed by the glass matrices. The past few decades have witnessed developments in optical gain glasses for exploiting optical fibers featuring high-efficiency and multiple-wavelength luminescence. Nano-crystallized glass ceramic (NGC), a significant composite containing a large amount of glass phase as well as specific nanocrystals, has been developed as a desirable optical gain material because it possesses the advantage of glass, featuring high optical transmittance, and that of crystals, which exhibit excellent coordination environments for efficient luminescence [4-10]. Fluorosilicate (FS) NGC has been developed as one of the most common optical gain materials because it possesses strong framework structures constructed by [SiO4] networks as well as efficient luminescence when active ions are incorporated into fluoride crystals featuring intrinsically low phonon energy. Thus, FS NGC exhibits excellent thermal stability as well as high-efficiency luminescence. So far, a variety of FS NGCs containing NaYF4, LaF3, YF3, KSc2F7 and KYb3F10 have been designed to enhance the luminescence efficiency of active ions [11-20]. More importantly, FS NGC maintains the unique fiber drawing properties of glass when it softens upon heating. This provides significant opportunities for the fabrication of optical fibers containing nanocrystals based on NGCs. Since nanocrystals are precipitated from a glass matrix in NGC fibers, the luminescence efficiency is greatly enhanced, and the emission wavelength is modulated when active ions are successfully incorporated into the precipitated nanocrystals. Materials Preparation In this work, glasses with the nominal components of (60−x)SiO2-xB2O3-20KF-20ZnF2 were chosen as the host glasses (in mol%) (x = 0, 5, 10, 20, 30 and 40).
These samples are referred to as 60Si-FS, 55Si-5B, 50Si-10B, 40Si-20B, 30Si-30B and 20Si-40B FBS glasses, respectively. In addition, YbF3 and ErF3 were doped into the glasses to achieve upconversion (UC) luminescence. All glasses were prepared using the melt-quenching method. A 30 g stoichiometric mixture of the reagent-grade raw materials SiO2 (99.99%), ZnF2 (99.99%), KF (99.99%), YbF3 (99.99%) and ErF3 (99.99%) was mixed thoroughly and then melted in a platinum-rhodium crucible at temperatures ranging from 1000 to 1450 °C. The crucible was covered during the glass preparation process. The glass melt was poured onto a cold brass mold and pressed with another brass plate to prepare the precursor glasses. Then, the precursor glasses were heat treated to fabricate NGCs. In order to prepare NGC fibers, the Yb3+-Er3+ co-doped 40Si-20B FBS glass and a commercial borosilicate glass (composition: 80SiO2-11B2O3-9Na2O (mol%)) were selected as the core and clad materials, respectively. The NGC fibers were fabricated via the melt-in-tube method [26]. The precursor fiber was drawn at 1080 °C, where the core glass melted while the clad glass was softened. By quick drawing (15 m/s), the FBS precursor fiber was prepared. Then, the precursor fibers were heat treated to prepare NGC fibers. Characterizations Differential scanning calorimetry (DSC, STA 449 C, NETZSCH, Bavaria, Germany) analysis was performed in an argon atmosphere at a heating rate of 10 K min−1 to study the glass transition and crystallization temperatures. To identify the crystalline phases in the NGCs, X-ray diffraction (XRD) patterns were recorded on an X-ray diffractometer (Bruker, Fällanden, Switzerland) using Cu Kα (λ = 0.1541 nm) radiation. The morphology and size distribution of the nanocrystals in the NGCs were measured by high-resolution transmission electron microscopy (HR-TEM) (Tecnai G2, FEI, Omaha, NE, USA). The UC emission spectra of the samples were recorded using an Edinburgh FLS980 fluorescence spectrometer (Edinburgh Instruments, Edinburgh, UK). Transmission spectra were measured with a UV/VIS/NIR spectrophotometer (Lambda-900, PerkinElmer, Waltham, MA, USA). A 980 nm laser diode (LD) was used as the excitation source to measure the UC emission spectra. Quantum yields of the samples were measured using the same spectrometer equipped with an integrating sphere. The measurement details of the quantum yield values are similar to our previous works [10]. All measurements were carried out at room temperature. Results and Discussion The DSC curves of the FBS glasses are shown in Figure 1. From the DSC curves, the glass transition temperature (Tg) of the 50Si-10B, 40Si-20B, 30Si-30B and 20Si-40B glasses was found to be 455, 450, 454 and 453 °C, respectively. The crystallization peak temperature (Tp) of the 50Si-10B, 40Si-20B, 30Si-30B and 20Si-40B glasses was found to be 623, 615, 604 and 592 °C, respectively. To prepare high-quality NGCs containing fluoride nanocrystals, the heat treatment temperature for the glasses was set between Tg and Tp.
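As an aside on the batching step described in the Materials Preparation paragraph above, a nominal mol% composition can be converted into weighed masses for a 30 g batch as sketched below. The molar masses are standard values, the dopant fluorides are ignored for simplicity, a boron source batched directly as B2O3 is assumed, and the breakdown itself is ours rather than the authors'.

```python
# Convert a nominal mol% composition into batch masses for a 30 g melt.
MOLAR_MASS = {"SiO2": 60.08, "B2O3": 69.62, "KF": 58.10, "ZnF2": 103.38}  # g/mol

def batch_masses(mol_percent, total_g=30.0):
    """Scale the mole fractions by molar mass and normalize to the batch size."""
    grams = {c: f / 100.0 * MOLAR_MASS[c] for c, f in mol_percent.items()}
    scale = total_g / sum(grams.values())
    return {c: round(m * scale, 2) for c, m in grams.items()}

# 40Si-20B host glass: 40SiO2-20B2O3-20KF-20ZnF2 (mol%)
print(batch_masses({"SiO2": 40, "B2O3": 20, "KF": 20, "ZnF2": 20}))
```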
To reveal the evolution of crystalline phases in the NGCs, the XRD patterns of Yb3+-Er3+ co-doped FS and FBS NGCs were carefully investigated and are shown in Figure 2a. It is clear that KYb3F10 crystals were precipitated in the 60Si-0B FS NGC. Meanwhile, the diffraction peaks of the KZnF3 crystal were also observed in the XRD pattern of the 60Si FS NGC. KYb3F10 crystals confined Yb3+ ions in fluoride crystal environments featuring low phonon energy [10,17], while KZnF3 provided no appropriate lattice for the incorporation of RE ions. Thus, the crystallization of KZnF3 in the FS NGC made no contribution to the enhancement of luminescence for RE ions, and KZnF3 was an impure phase in the FS NGC. For the 55Si-5B FBS NGC, KZnF3 and KYb3F10 crystals were also both precipitated from the glass via heat treatment. The intensities of the diffraction peaks in the 55Si-5B FBS NGC were all higher than the corresponding peaks in the 60Si FS NGC, indicating that more KZnF3 and KYb3F10 crystals were precipitated in the 55Si-5B FBS NGC. However, the diffraction peaks of KZnF3 crystals were not observed in the XRD patterns of the FBS NGCs when the B2O3 concentration increased from 10 to 40%. More importantly, only the diffraction peaks of KYb2F7 crystals appeared in the XRD patterns of the 50Si-10B, 40Si-20B and 30Si-30B NGCs, proving that pure RE-fluoride crystals were precipitated in these FBS NGCs. For the 20Si-40B NGC, the diffraction peaks of KYb2F7 crystals were weak, and other impure phases could be observed in the XRD patterns. Therefore, the crystallization of the impure KZnF3 phase was successfully inhibited, and pure RE-fluoride crystals were controllably precipitated in the FBS NGCs with adjustable compositions when the B2O3 concentration changed from 10 to 30 mol%.
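The phase identification against JCPDS cards discussed above ultimately rests on Bragg's law with the Cu Kα wavelength given in the Characterizations paragraph. A small helper for converting measured 2θ positions into d-spacings (the example 2θ value is invented):

```python
import math

WAVELENGTH_NM = 0.1541  # Cu K-alpha, as stated in the Characterizations section

def d_spacing(two_theta_deg, wavelength_nm=WAVELENGTH_NM, order=1):
    """Bragg's law: n * lambda = 2 d sin(theta); returns d in nm."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

# e.g. a reflection observed at 2-theta = 26.0 degrees (illustrative value only)
print(f"d = {d_spacing(26.0):.4f} nm")
```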
Figure 2. (a) Compared XRD patterns of 1.0Yb3+-0.2Er3+ co-doped FS and FBS NGCs when heat treated at 520 °C for 10 h, (b) XRD patterns of no-doped and xYb3+-0.2Er3+ co-doped 40Si-20B NGCs (x = 1.0-2.0) when heat treated at 520 °C for 10 h, (c) XRD patterns of 1.0Yb3+-0.2Er3+ co-doped 40Si-20B glass and NGCs when heated at different temperatures, (d) TEM image of 1.0Yb3+-0.2Er3+ co-doped 40Si-20B NGC when heated at 580 °C for 10 h; the inset is the enlarged HR-TEM image. In order to illuminate the crystallization mechanism in FBS NGCs, the XRD patterns of 40Si-20B FBS NGCs with various concentrations of the dopants are presented in Figure 2b. The XRD pattern of the no-doped sample exhibited a broad band, and no crystalline feature could be found, indicating that no crystal was precipitated in the no-doped sample even though the glass was also heat treated at 520 °C. Interestingly, sharp diffraction peaks of the crystals were observed in the XRD patterns of the Yb3+-Er3+ co-doped NGCs. These peaks all matched well with those of the JCPDS card of the KYb2F7 (027-0457) crystal, proving that pure KYb2F7 crystals were precipitated in the 1.0Yb3+-Er3+ co-doped NGC. Furthermore, the intensities of the diffraction peaks of the KYb2F7 crystals were enhanced monotonously when the concentration of Yb3+ increased from 1.0 to 2.0 mol%. These results indicate that the crystallization in the FBS NGC was highly dependent on the doping of Yb3+. This crystal was not precipitated from the no-doped host glass, and a small number of dopants induced the crystallization in NGCs. This is called dopant-induced crystallization [10]. In the 60Si-FS glasses, interpenetrating phase separation was observed in the networks [37]. Yb3+ ions were distributed in the separated fluoride networks and worked as crystallization centers to induce the precipitation of KYb3F10 crystals (Figure 2a) [10].
Moreover, a large number of ZnF2 and KF units were distributed in the fluoride networks, and thus KZnF3 crystals were precipitated in the FS NGC. In the FBS glasses, [BO3] units were distributed among the glass networks and worked as network modifiers when the content of B2O3 was low. A part of the Si-O frameworks was broken, and the local viscosity was reduced when 5 mol% B2O3 was incorporated into the glass. Thus, more KYb3F10 and KZnF3 crystals were precipitated in the 55Si-5B FBS NGC after the heat treatments compared to the 60Si NGC (Figure 2a). However, when the content of B2O3 in the FBS glass increased to 10 mol%, a "boron anomaly" occurred in the FBS glasses. [BO4] tetrahedra appeared in the glass networks, worked as network formers and constructed the framework structure of the glass together with [SiO4] tetrahedra. Micro phase-separation also occurred in the FBS glass networks, and thus RE-fluoride crystals were precipitated from the separated fluoride networks, and the crystallization was induced by the doping of Yb3+ in the FBS NGCs (Figure 2a,b), similar to the case of the FS NGC. Actually, two moles of [BO4] units are produced when one mole of B2O3 replaces one mole of SiO2. Thus, the framework structures of the glass were strengthened, and the fluoride networks were dispersed by the introduction of B2O3 [36], which restrained the precipitation of the impure KZnF3 phase, as presented in Figure 2a,b. Therefore, the design of FBS NGCs inhibited the precipitation of the impure phase, and RE-fluoride crystals were controllably precipitated in the FBS NGC system when the glass composition varied over a large range. Figure 2c shows the XRD patterns of the 40Si-20B glass and NGCs. Owing to the amorphous morphology of the glass, a broad band was observed in the XRD pattern of the glass, indicating that no crystals were precipitated in the precursor glass. Only the diffraction peaks of the KYb2F7 crystal were observed in the XRD patterns of samples heat treated at 460 and 500 °C for 10 h. The diffraction peaks of KYb3F10 were also observed in the XRD pattern, apart from those of KYb2F7, when the sample was heat treated at 540 °C for 10 h. However, only the peaks of KYb3F10 crystals could be observed in the XRD pattern of the NGC heat treated at 580 °C for 10 h. The intense diffraction peaks indicate that a large number of KYb3F10 crystals were precipitated in the NGCs. These results prove that the crystalline phases in the FBS NGCs were also modulated by the heat treatments and transferred from KYb2F7 to KYb3F10 crystals when the temperature increased from 460 to 580 °C. Additionally, the TEM image of the 1.0Yb3+-0.2Er3+ co-doped 40Si-20B NGC heat treated at 580 °C for 10 h, shown in Figure 2d, revealed that crystal particles were uniformly dispersed in the glass matrix with diameters from 5 to 30 nm. The interval of the crystal lattice fringes could be measured directly in the HR-TEM image in the inset of Figure 2d, and its value was about 0.204 nm, which corresponded to the (440) crystal facet of KYb3F10, proving the precipitation of KYb3F10 nanocrystals in the 40Si-20B NGC. Owing to the high phonon energy of the FBS glass networks, no emission was observed in the 40Si-20B glass, as shown in Figure 3a. As proved above, KYb2F7 and KYb3F10 crystals were controllably precipitated in the FBS NGCs.
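The assignment of the 0.204 nm fringe spacing noted above to the (440) planes of KYb3F10 can be checked with the cubic d-spacing relation d(hkl) = a/sqrt(h² + k² + l²). The lattice parameter below (≈1.15 nm) is an assumed, literature-like value used only for this illustrative check; the authors rely on the JCPDS reference instead.

```python
import math

def cubic_d_spacing(a_nm, h, k, l):
    """d-spacing of the (hkl) planes in a cubic lattice of parameter a (nm)."""
    return a_nm / math.sqrt(h**2 + k**2 + l**2)

a_kyb3f10 = 1.154  # nm, assumed cubic lattice parameter (illustration only)
print(f"d(440) = {cubic_d_spacing(a_kyb3f10, 4, 4, 0):.3f} nm")  # ~0.204 nm
```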
Owing to the precipitation of RE-fluoride crystals, RE ions are incorporated into the crystal structure from the glass networks during the heat treatments. The RE-fluoride crystals possessed lower phonon energy compared to the glass networks, resulting in the high emission efficiency of RE ions in the NGCs. As shown in Figure 3a, intense emission peaks attributed to UC emissions of Yb3+-Er3+ ion pairs were observed in the NGCs. The peaks around 522, 540 and 650 nm were assigned to the 2H11/2 ⇒ 4I15/2, 4S3/2 ⇒ 4I15/2, and 4F9/2 ⇒ 4I15/2 transitions of Er3+ ions, respectively. The UC emission intensity of the NGCs was enhanced monotonously when the heat treatment temperature increased from 460 to 580 °C. The phonon energy of the KYb3F10 crystal was as low as 387 cm−1. The emission intensity of the NGC heat treated at 580 °C increased dramatically because a large number of KYb3F10 crystals were precipitated in the NGC. Accordingly, the controllable crystallization in NGCs significantly enhanced the UC emissions of Yb3+-Er3+ pairs. The compared emission spectra of FS and FBS NGCs when heat treated at 520 °C for 10 h are shown in Figure 3b. Excited by a 980 nm laser diode, the emission intensity was enhanced by the addition of B2O3 into the NGCs, reaching a maximum at 20 mol% B2O3. The emission intensity decreased obviously when the B2O3 concentration increased to 30%, and the emission in the 20Si-40B NGC was very weak due to the small number of RE-fluoride crystals in the NGC. The emission intensity in the 40Si-20B FBS NGC was even higher than in the 60Si FS NGC.
The incorporation of B2O3 into the glass enhanced the UC emission intensity of Er3+ in the NGC, and the 40Si-20B NGC was the most efficient gain material for UC luminescence. More importantly, the UC emission intensity of Er3+ in the 40Si-20B NGC was much higher than that in traditional β-NaYF4 and KYb2F7 NGCs, as shown in Figure 3c. In past studies, the quantum yields of UC luminescence in NGCs were all below 1%. The quantum yield of UC luminescence in the 40Si-20B NGC was as high as 2.16%, which is higher than that of the 60Si FS NGC (1.56%) and even 3.4 and 4.2 times higher than that of the traditional β-NaYF4 (0.64%) and KYb2F7 (0.51%) NGCs, respectively (Figure 3d). The NGC containing β-NaYF4 crystals has been considered one of the most efficient luminescent NGCs [38]. RE ions are incorporated into the crystal structures via ionic substitution for Y3+. Owing to the ionic mismatch between RE and Y3+ ions, the quantity of active ions incorporated into the crystal structures in β-NaYF4 NGC is usually small. For our 40Si-20B FBS NGC, RE-fluoride crystals were controllably precipitated from the glass, induced by the doped RE ions. RE ions were all confined in the fluoride crystals during the crystallization process of KYb3F10 [10], resulting in a more efficient UC emission in the 40Si-20B FBS NGC. In order to precipitate KYb2F7 crystals in the traditional NGC, a large amount of YbF3 (~6%) was added to the glass [39]. The highly concentrated RE ions in NGCs lead to heavy absorption of the excitation light, the probability of non-radiative transitions was very high, and the thermal quenching of luminescence was severe in the traditional KYb2F7 NGC. However, owing to the dopant-induced crystallization of the FBS NGCs, the concentrations of Yb3+ were low and the thermal quenching of luminescence was slight. Thus, the quantum yield in the traditional KYb2F7 NGC was much lower than in the 40Si-20B NGC. This indicates that dopant-induced crystallization in the FBS NGC contributed to the highest efficiency of UC luminescence and provided significant opportunities for overcoming the bottleneck in the UC luminescence efficiency of traditional NGCs. The transmission spectra of FS and FBS NGCs are shown in Figure 4. The peaks at 976 nm are all attributed to Yb3+: 2F7/2 ⇒ 2F5/2 transitions. The peaks at 800, 647, 537, and 520 nm were ascribed to transitions from the ground state 4I15/2 to the excited levels 4I9/2, 4F9/2, 4S3/2, and 2H11/2 of Er3+, respectively. The transmittances of the 60Si FS NGCs were lower than that of the precursor glass. The transmittances were reduced dramatically when the heat treatment temperature was increased. The scattering caused by the crystal particles in the 60Si NGC led to a decrease in the transmittance, as shown in the transmission spectrum (Figure 4a). For the 50Si-10B FBS samples, the transmittances of the NGCs also decreased when the heat treatment temperature increased. The transmittances of the 50Si-10B NGCs were higher than those of the 60Si FS NGCs. Interestingly, the transmittances of the 40Si-20B NGCs were almost equal to that of the precursor glass. The 40Si-20B NGC samples were all highly transparent, and their transmittance was as high as 90% at 600 nm even though the sample was heat treated at 580 °C for 10 h. As presented in the XRD pattern in Figure 2a, KZnF3 crystals were precipitated in the 60Si NGC. The peaks of KZnF3 in the 60Si NGC were sharper than those of the KYb2F7 crystals in the FBS NGCs.
The average size of the KZnF3 crystals was larger than that of the KYb2F7 crystals, which resulted in heavy optical scattering and the low optical transmittance of the 60Si NGC. The refractive index of the KYb3F10 crystal was about 1.48, and that of the FBS glass was measured as 1.49 at 633 nm. The small difference between the refractive index of the KYb3F10 crystal and that of the glass matrix contributed to the high transmittance of the NGCs. Moreover, the transmittances of the 30Si-30B NGCs decreased dramatically upon heat treatment owing to heavy separation of B-O phases when too much B2O3 was added to the glass. Therefore, the incorporation of B2O3 into the FS glass restrained the precipitation of the impure phase (KZnF3) and markedly increased the optical transmittance of the NGCs. The 40Si-20B NGCs exhibited the highest transmittances among the NGCs. As shown above, the RE ions doped into the 40Si-20B NGC exhibited the most efficient UC luminescence and the highest optical transmittance. Moreover, the crystallization of RE-fluoride nanocrystals in the 40Si-20B NGC was more controllable than in the FS NGC. This indicates that the 40Si-20B NGC is a more desirable gain material for optical devices. Previously, we attempted to fabricate an NGC fiber using the 60Si FS glass as the core glass through a melt-in-tube method. However, no nanocrystals precipitated in the fiber because of the volatilization of fluorine at the high preparation temperature. The compositions of the FS and FBS glasses were quantitatively determined by X-ray fluorescence (XRF) measurements (Table 1). The measured compositions differed from the nominal values. The fluorine losses for the FS and FBS glasses were calculated to be 40.35 and 27.60 mass%, respectively. The preparation temperature of the 40Si-20B FBS glass (1050 °C) was much lower than that of the 60Si FS glass (1450 °C), which reduces the volatilization of fluorine during the fiber drawing process. Accordingly, the 40Si-20B NGC is an excellent candidate for the fabrication of active NGC fibers.
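The fluorine losses above follow from comparing the nominal batch composition with the XRF-measured composition. A minimal sketch of that calculation is shown below; the nominal and measured fluorine contents used here are hypothetical placeholders, not the Table 1 values.

```python
def fluorine_loss(nominal_f_mass_pct: float, measured_f_mass_pct: float) -> float:
    """Relative fluorine loss: the fraction of the batched fluorine that
    volatilized during melting, expressed as a percentage (mass%)."""
    return 100.0 * (nominal_f_mass_pct - measured_f_mass_pct) / nominal_f_mass_pct

# Hypothetical nominal/measured fluorine contents (mass%) for illustration only.
print(f"FS glass:  {fluorine_loss(20.0, 11.9):.2f} mass% lost")   # roughly 40% lost
print(f"FBS glass: {fluorine_loss(20.0, 14.5):.2f} mass% lost")   # roughly 28% lost
```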
The NGC fibers were fabricated using the Yb3+-Er3+ co-doped 40Si-20B FBS glass as the core glass through a melt-in-tube method. The fiber core was amorphous and transparent, as shown in the inset of Figure 5a. The precursor fibers were then heat treated at 580 °C for 10 h to fabricate NGC fibers, and nanocrystals precipitated in the NGC fibers (inset of Figure 5c). When the fibers were irradiated with a 980 nm laser diode, an intense yellow emission was observed in the NGC fiber (inset of Figure 5b), which can be ascribed to the UC emission of Yb3+-Er3+ pairs, as presented in the spectrum in Figure 5. In contrast, no obvious emission was observed in the precursor fiber. These results indicate that the RE-fluoride nanocrystals were controllably precipitated in the fibers. Therefore, the designed FBS glass provides an excellent matrix for engineering NGC fibers for application in optical devices.

Conclusions

In this work, a series of FBS glasses were engineered to fabricate high-quality NGCs and active NGC fibers. The preparation temperature was reduced from 1450 °C to 1050 °C by the addition of B2O3 to the glasses. Compared to the FS NGC, crystallization in the FBS NGCs was better controlled owing to the dispersion of the fluoride network by B2O3. Impure-phase (KZnF3) crystals precipitated in the FS NGC but were not observed in the XRD patterns of the FBS NGCs. Pure-phase KYb2F7 nanocrystals were controllably precipitated in the FBS NGCs when the B2O3 content was varied from 10 to 30%. The crystalline phase changed from KYb2F7 to KYb3F10 when the heat treatment temperature was increased to 580 °C, and this led to the greatest enhancement of UC emissions in the NGCs.
The optical transmittances of the FBS NGCs were higher than those of the FS NGC owing to the controllable crystallization in the FBS NGCs. More importantly, the designed 40Si-20B FBS NGCs exhibited the highest transmittance as well as the most efficient UC luminescence. NGC fibers were successfully fabricated based on the 40Si-20B FBS glass, and a greatly enhanced UC emission was observed in the NGC fibers compared to the precursor fiber. These intriguing properties indicate that the designed FBS NGCs are excellent optical gain materials and promising matrices for the manufacture of optical fiber devices.
7,516.6
2023-05-01T00:00:00.000
[ "Materials Science", "Physics" ]
Completeness of the Gaia-verse IV: The Astrometry Spread Function of Gaia DR2

Gaia DR2 published positions, parallaxes and proper motions for an unprecedented 1,331,909,727 sources, revolutionising the field of Galactic dynamics. We complement these data with the Astrometry Spread Function (ASF), the expected uncertainty in the measured positions, proper motions and parallax for a non-accelerating point source. The ASF is a Gaussian function for which we construct the 5D astrometric covariance matrix as a function of position on the sky and apparent magnitude using the Gaia DR2 scanning law, and we demonstrate excellent agreement with the observed data. This can be used to answer the question 'What astrometric covariance would Gaia have published if my star was a non-accelerating point source?'. The ASF will enable characterisation of binary systems, exoplanet orbits, astrometric microlensing events and extended sources, which add an excess astrometric noise to the expected astrometry uncertainty. By using the ASF to estimate the unit weight error (UWE) of Gaia DR2 sources, we demonstrate that the ASF indeed provides a direct probe of the excess source noise. We use the ASF to estimate the contribution to the selection function of the Gaia astrometric sample from a cut on astrometric_sigma5d_max, showing high completeness for G < 20 dropping to < 1% in under-scanned regions of the sky at G = 21. We have added an ASF module to the Python package SCANNINGLAW (https://github.com/gaiaverse/scanninglaw) through which users can access the ASF.

INTRODUCTION

Gaia has initiated an era of large-scale Milky Way dynamical modelling by providing 5D astrometry (position, proper motion and parallax) for more than 1.3 billion stars (Gaia Collaboration et al. 2016, 2018; Lindegren et al. 2018). The Gaia satellite measures source positions at multiple epochs over the mission lifetime. These epoch astrometry measurements are the inputs of the Astrometric Global Iterative Solution (AGIS; Lindegren et al. 2012), which iteratively solves for the spacecraft attitude, the geometric calibration of the instrument, global parameters, and the 5D astrometry of each source: the right ascension and declination (α0, δ0), the proper motions (μα*, μδ) and the parallax (ϖ). Alongside the source astrometry, Gaia also publishes the 5D astrometric measurement covariance and various statistics of the astrometric solution for all sources which meet the quality cuts. The 5-parameter astrometric model of AGIS assumes sources are point-like with apparent non-accelerating uniform motion relative to the solar system barycenter, which we will refer to as 'simple point sources'. Both resolved and unresolved binary stars accelerate due to their orbits around the common center of mass, which shifts the centroid off a uniform-motion trajectory. For example, given the full epoch astrometry, it is expected that Gaia can characterise the orbits of stars with brown dwarf companions out to 10 pc and black hole companions out to more than 1 kpc when considering tight constraints on the uncertainty of the mass function M2^3 Mtot^-2 (Andrews et al. 2019). When one is interested in (the less constraining) orbital parameter recovery with ∼10% precision, Gaia might detect a staggering 20 k brown dwarfs around FGK stars out to many tens and up to a few hundred pc for the longer-period objects (100-3000 d), which could reach even 50 k out to several hundred pc when one is only interested in the detection of BD candidates (e.g.
for follow-up studies; Holl et al., subm.), with black hole companions being detectable out to several kpc. Similarly, exoplanet orbits pull their host stars away from uniform motion, although with a much smaller amplitude due to the lower companion mass. From simulations it is expected that Gaia is capable of detecting 21,000 long-period, 1-15 Jupiter-mass planets during the 5-year mission (Perryman et al. 2014), more than 4 times the number of currently known exoplanets. Ranalli et al. (2018) have further demonstrated that the 5-year Gaia mission will be able to find Jupiter-mass planets on 3 au orbits around 1 M⊙ stars out to 39 pc and Neptune-mass planets out to 1.9 pc. Not only will the presence of planets be detectable, but it is expected that ∼500 planets around M dwarfs will receive mass constraints purely from Gaia astrometry (Casertano et al. 2008; Sozzetti et al. 2014). Microlensing occurs when the light from a background source is gravitationally lensed by a foreground lensing star, causing a shift in the apparent position of the source that is detectable by high-precision astrometric surveys (Miralda-Escude 1996). The deflection can be used as a direct measurement of the lens mass, as demonstrated by Kains et al. (2017) using HST observations. A significant amount of work has gone towards predicting microlensing events using Gaia proper motions (Klüter et al. 2018; Bramich 2018; McGill et al. 2019), with 528 events expected in the extended Gaia mission, ∼39% of which pass astrometry quality cuts (McGill et al. 2020). For a small number of these events Gaia will be able to determine the lens mass to < 30% uncertainty (Klüter et al. 2020). Extended sources such as galaxies will have a reduced astrometric precision from each Gaia observation due to the increased spread of flux. Gaia scans a source in many different directions over the mission lifetime, from which the source shape can be reconstructed (Harrison 2011). With the Gaia epoch astrometry for the 5-year mission, Gaia will be able to distinguish between elliptical and spiral/irregular galaxies with ∼83% accuracy (Krone-Martins et al. 2013). These classifications would be incredibly valuable for galaxy morphology studies. The Gaia epoch astrometry will first be released in DR4, several years from now. However, a (very) condensed form of this large amount of information is stored in the summary statistics of the astrometric solution currently published in Gaia DR2 and updated in EDR3. Binary stars, exoplanet hosts, microlensing events and extended sources will induce an excess noise in the astrometric solution as they are not well described by simple point sources. This excess noise has been modelled for binaries (Wielen 1997; Penoyre et al. 2020), and Belokurov et al. (2020) have already found many binaries in Gaia DR2 using the renormalised unit weight error statistic, RUWE, that is, the re-normalised square root of the reduced χ² statistic of the astrometric solution. RUWE is a 1D summary statistic of the residuals of the 5-parameter astrometric solution of a source relative to the Gaia inertial rest frame. But we can glean even more information on the excess noise from the 5D uncertainty of the astrometric solution. The uncertainty in the 5D astrometric solution for a source in Gaia can be expressed as the convolution of Gaia's astrometric measurement uncertainty expected for a simple point source and excess noise.
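If both the simple-point-source measurement uncertainty and the excess noise are treated as independent Gaussian contributions, their convolution is again Gaussian with the covariances added. The following sketch illustrates that standard statistical fact with arbitrary 5x5 covariances; the matrices and sample size are purely illustrative and do not represent the Gaia pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_covariance(dim: int) -> np.ndarray:
    """Generate a random symmetric positive-definite matrix for illustration."""
    a = rng.normal(size=(dim, dim))
    return a @ a.T + dim * np.eye(dim)

sigma_asf = random_covariance(5)   # stand-in for the ASF covariance Sigma
excess = random_covariance(5)      # stand-in for the excess-noise covariance E

# Draw the two independent Gaussian contributions and add them: the covariance
# of the sum (i.e. the convolution of the two distributions) is Sigma + E.
samples = (rng.multivariate_normal(np.zeros(5), sigma_asf, size=200_000)
           + rng.multivariate_normal(np.zeros(5), excess, size=200_000))
empirical = np.cov(samples, rowvar=False)

print(np.allclose(empirical, sigma_asf + excess, rtol=0.05, atol=0.5))  # True, up to sampling noise
```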
We term Gaia's expected astrometric measurement uncertainty the Astrometry Spread Function (ASF) defined as the probability of measuring a simple point source to have astrometry r ∈ R 5 given the true source astrometry r ∈ R 5 and apparent magnitude ASF(r ) = P(r | r, ). (1) The excess noise will be driven by un-modelled source characteristics such as binary motion, exoplanet host motion, microlensing or extended source flux as well as any calibration noise which is not accounted for in the ASF. In this work, we'll assume that all significant calibration effects are included in the ASF such that the excess noise is dominated by un-modelled source characteristics. However this assumption breaks down in some regimes, particularly for bright sources in crowded regions where CCD saturation becomes a significant issue. Possible un-accounted calibration effects should be considered when using Gaia astrometry to search for excess noise due to genuine un-modelled source characteristics. Since the astrometric solution is evaluated using least squares regression, the ASF will be Gaussian distributed where Σ( , , ) ∈ R 5×5 is the expected covariance for a simple point source with position , and apparent magnitude as measured by Gaia. The astrometric calibration is also a function of source colour which was either estimated from BP − RP or added as a sixth parameter of the astrometric solution, -_ _ . As colours are only published for a subset of the Gaia catalogue and the astrometric correlation coefficients for pseudo-colour are not published in DR2, we neglect colour dependence of the astrometric solution in this work. For EDR3, all pseudo-colour correlation coefficients are published and it will be worth considering how this impacts the ASF. Given the ASF and published astrometric 5-parameter model uncertainties we can reconstruct the 5D excess noise and use it to characterise binary systems, exoplanet orbits, microlensing events and extended sources in Gaia without requiring the epoch astrometry. The focus of this paper is to construct the ASF for Gaia DR2. This builds on analysis of the scanning law from Boubert et al. (2020a) and Boubert et al. (2020b) and will be used, in conjunction with the results of to determine the selection function for the subsample of Gaia DR2 with published 5D astrometry. In Section 2 we provide a whistle-stop tour of the Gaia spacecraft, scanning law and how this translates to constraints on the position, proper motion and parallax of sources. This paper is focused on Gaia DR2 for which we estimate the ASF, although we note that the method will be directly applicable to Gaia EDR3. The method for constructing the ASF of Gaia is derived in Section 3 and the results compared with the astrometry sample are shown in Section 4. We will also use the ASF for an alternative derivation of the Unit Weight Error demonstrating the applicability of the method in Section 5. As a secondary motivation, the Gaia DR2 5D astrometry sample is selected from the full catalogue with a cut on the parameter _ 5 _ which is a function of the astrometric covariance matrix. In predicting the astrometric covariance for simple point sources, we can also estimate the contribution from this cut to the astrometric selection function which we will present in Section 6. Finally we will discuss applications of the ASF in Section 7 and provide instructions for accessing the data in Section 8 before concluding. 
(b) Gaia produces stronger measurements in the AL direction therefore the astrometry will be better constrained in the mean scan direction. Areas with more Equatorial polar scans (black) will constrain declination whilst lateral scans (white) constrain right ascension. The significance in the difference in directional constraints is reflected by the clustering of scan directions. Heavily clustered scan directions (light yellow) will produce a much stronger constraint in the mean scan direction than the perpendicular direction. The Scanning Law The Gaia spacecraft is in orbit around the Lagrange point 2 (L2), orbiting the Sun in phase with the Earth. The spacecraft spins with a 6 hour period around a central axis which precesses with an aspect angle of 45 deg around the pointing connecting the satellite and the Sun, with a 63 day period. This is similar to a spinning top which has been left long enough to wobble. The orbit of the spacecraft around the sun adds a third axis of rotation. Perpendicular to the spin axis, two fields of view (FoV) observe in directions separated by 106.5 deg. The direction in which each FoV is pointing at any point in time throughout Gaia's observing period is the scanning law. The Gaia DR2 observing period runs from July 25 2014 (10:30 UTC) until May 23 2016 (11:35 UTC). The scanning law for DR2 is published by DPAC and refined by Boubert et al. (2020a) and Boubert et al. (2020b, hereafter Paper III). Whilst this tells us where Gaia was pointing, it doesn't tell us whether Gaia was obtaining useful scientific measurements that contributed to the published data products. Many time periods in the DR2 window did not result in measurements which contributed to the Gaia astrometry as discussed in Paper III. In this paper we only include the scanning law in the OBMT 1 interval 1192.13-3750.56 rev (Lindegren et al. 2018, hereafter L18) which removes the Ecliptic Polar Scanning Law, an initial calibration phase of Gaia which contributed to the published photometry but not astrometry. DPAC have published a series of additional gaps in astrometry data taking 2 . We remove any time spans of the scanning law for which the gap is flagged as 'persistent'. Using the published Epoch Photometry for 550,737 variable sources (Riello et al. 2018;Evans et al. 2018;), Paper III constrained additional gaps which are persistent across all data products of Gaia DR2 which we also remove from the scanning law. Finally, Paper III determines the probability of an observation being recorded and used in Gaia DR2 in 19 magnitude bins. These observation probabilities will be used to weight observations in the ASF in Section 3.2. Taking Observations Both FoVs project source images onto a single panel of CCDs called the focal plane. On the focal plane there are 9 columns and 7 rows of CCDs, referred to as the astrometric field (AF), which measure the position of a source although the middle row only has 8 CCDs (because one of the 9 CCD positions is taken by a wave front sensor). As the spacecraft spins, stars track across the CCD panel in the along-scan direction and are observed with up to 9 astrometric CCDs during a single FoV transit (see e.g. Fig. 1 of Lindegren et al. 2016). Individual CCD measurements will be referred to as observations whilst a full track across the CCD panel is a scan (also referred to as a FoV transit). 
Before the AF, sources pass over the 'Sky Mapper' (SM) CCD which triggers the initial detection and needs to be confirmed by the first AF CCD in order for any observations within the scan to successfully provide a measurement. Each observation records the position and apparent brightness of 1 Onboard Mission Time (OBMT) is the timing system used in Gaia and is normalised such that OBMT is 0 in October 2013 and increments by 1 for every revolution of the Gaia satellite which corresponds to 6 hours 2 https://www.cosmos.esa.int/web/gaia/dr2-data-gaps the source. If the source is recorded with < 13 by the SM, a 2D observation window is assigned measuring position in the alongscan (AL) direction and orthogonal across-scan (AC) direction. For fainter stars with > 13, only the AL position is recorded. Observations are saved on-board Gaia in 'Star Packets' grouped by apparent magnitude in 19 bins (Table 1.10, Section 1.3.3 de Bruijne et al. 2018). The majority of data is uploaded to Earth, however some can be lost or deleted (see Section 3.3 Gaia Collaboration et al. 2016) changing the scanning law sampling for stars in different Star Packet magnitude bins. After a first process of CCD signal level, background, and PS-F/LSF calibration, the data is input to the AGIS pipeline (Lindegren et al. 2012) which uses an iterative linear regression algorithm to simultaneously fit the attitude of the spacecraft, a large number of calibration parameters, and the position, proper motion and parallax of all sources in Gaia DR2. We here provide a general description of how position, proper motion and parallax can be understood to depend on the nature of the observations a source receives, though in Gaia they are simultaneously solved from the offsets of all source observations with respect to its (iteratively improved) internal reference system. The precision with which position, proper motion and parallax of a source can be measured is heavily dependent on magnitude (beyond > 13 the uncertainties will monotonically increase with magnitude), the number of observations taken, the scan directions of these observations and their distribution in time. More observations will produce a greater precision therefore sources in regions of the sky with the most scans as shown in Fig. 1a will have the best constrained astrometry. Notably, the Galactic center in the middle of the plot has received only ∼ 10scans whilst the best observed regions of the sky are scanned over 100 times. For the vast majority of sources, Gaia only measures position in the AL direction and even for 2D observations, the AL position constraint is much tighter than the AC measurement (Lindegren et al. 2012). Therefore North-South scans in equatorial coordinates will constrain declination, whilst East-West scans constrain right ascension, . Fig. 1b gives the mean direction of Gaia DR2 scans modulo such that a North-South and South-North scan appear the same with = 0. The mean direction is estimated from the argument of the mean vector = arg [ exp(2 ) ]. ( This statistic is published for sources in Gaia EDR3 as _ _ _ 2. Darker areas will have stronger constraints whilst lighter areas constrain more tightly. The difference in accuracy in right ascension and declination depends on the clustering of scan directions. The absolute value of the mean scan vector, | exp(2 ) | which will be ∼ 1 for heavily clustered scans and ∼ 0 for a spread of scan directions. This is shown in Fig. 
1c where light areas will strongly constrain position in the mean scan direction but only provide a weak constraint in the perpendicular direction whilst dark regions have a spread of scan directions and therefore won't show a strong direction preference. This statistic is also published in Gaia EDR3 as _ _ _ 2. Constraints on the source proper motion come from measuring the position of a source at multiple different epochs and estimating the rate of change. A larger spread of observation times will produce a tighter proper motion constraint. Fig. 1d shows the standard deviation of observation times with light regions producing tighter proper motion constraints whilst dark regions produce a weaker constraint. Finally, source parallax is estimated from the apparent motion of a source relative to the background of distant sources due to Gaia's motion around the sun on a one year period. A larger spread of observations throughout the year will produce a tighter constraint on the source parallax. The position of an observation in the yearly solar orbit is described by the complex vector exp(2 ). As with the scan direction, the clustering of observations in the year is estimated from the absolute value of the mean vector | exp(2 ) |. If observations are heavily clustered at one time of year the absolute mean will be close to 1, shown by lighter areas of Fig. 1e, and only a weak constraint on parallax will be achieved. Values close to 0 have well spread observations throughout the year and therefore provide a stronger constraint on parallax. Data The previous sections have provided a qualitative prediction of Gaia's expected performance as a function of position on the sky. In the following sections we'll produce a quantitative estimate of the predicted precision with which Gaia can measure source astrometry as a function of position on the sky and apparent magnitude. Gaia DR2 (Gaia Collaboration et al. 2016Collaboration et al. , 2018 provides 5D astrometry for 1,331,909,727 of the 1,692,919,135 source in the full DR2 catalogue. To test our predictions, we'll use the full Gaia DR2 source catalogue and 5D astrometry sample. METHOD As input AGIS takes the 1D measurement of the position of each source in the AL and, for bright sources, also AC direction. For bright sources, Gaia produces a 2D observation however AGIS assumes the constraints in the AL and AC directions are uncorrelated treating them as independent 1D observations. We make the same assumption in this work. This is a gross simplification of all the steps which AGIS takes -for instance, calibrating the satellite attitude noise -however it allows for a very appealing and tractable derivation of the ASF from the available data. We will proceed with four key assumptions: • The 1D position measurement uncertainty is Gaussian. • Individual measurements, including AL and AC measurements from the same observation, are independent and uncorrelated. As the AGIS pipeline uses the same assumption, this will not produce any discrepancy between our predictions and the published Gaia astrometry. • The position measurement uncertainty is a function of source apparent magnitude at the time of observation only. Any dependence of the observation precision of the satellite as a function of time for a given apparent magnitude is neglected which we justify in Appendix A. This also assumes that the measurement uncertainty is colour independent. • Astrometric parameters of different sources are assumed to be independent. 
In reality measurements of different sources can be considered independent, however due to the joint estimation of the attitude and geometric calibration from the same set of observations, the posterior astrometric parameters will be correlated. Pre-launch estimates by Holl et al. (2010) predicted correlations of only a fraction of a percent for sources separated by less than one degree (in a fully calibrated AGIS solution dominated by photon noise). DR1 (see Sec. D.3 of Lindegren et al. 2016) seems to be well above that with correlation as high as perhaps 0.25 at separations up to ∼ 1 deg, though much smaller on longer scales. Studies on the quasar sample in DR2 (see Sect. 5.4 of Lindegren et al. 2018) show still very large covariances as small spatial scales (< 0.125 deg) and milder effects over larger spatial scales. With each successive data release it is expected that these spatial correlations will shrink, though they will never be zero, especially at small scales. Throughout this paper we also only consider sources with constant magnitude to keep the results simple and tractable however the formalism is easily generalisable to variable sources. Over this section we derive the ASF of Gaia DR2. As many different variables are introduced, we refer the reader to Table 1 to clarify our notation. Astrometry from linear regression Gaia's goal for each source in the astrometry catalogue is to measure the five parameter astrometric solution, r ∈ R 5 , consisting of the positions, proper motions and parallax. The AGIS pipeline estimates the astrometry of sources through linear regression on all observations of a single source in a step called source update (see Sect. 5.1 of Lindegren et al. 2012). We will use the same technique to determine the expected precision for a simple point source as a function of apparent magnitude and position on the sky. Take observations of a source at times with the scan direction where ∈ {1, . . . , }. The on-sky positions of the source at times in ICRS coordinates are ( , ). The source position relative to the solar system barycenter at the reference epoch (J2015.5 for Gaia DR2 L18) is 0 , 0 . The position at time will be a linear combination of the position at a reference epoch with the proper motion and parallax motion. The offset due to parallax motion is given by where , , are the barycentric coordinates of Gaia at time and is the parallax of the source. We have assumed the parallax and proper motion are small enough such that the parallax ellipse is only dependent on the source reference epoch position which keeps the system of equations linear. Therefore the position of the source at time will be given by * = * 0 + * + Π (6) = 0 + + Π where , is the source proper motion, is the time relative to the reference epoch and we use the notation * = cos( ) and * = cos( ). Writing this set of linear equations out in matrix notation: Our measurables are 1D positions in either the AL or AC direction of the Gaia focal plane. This is given by = * sin + cos where the scan position angle, is the scan direction of Gaia at the observation time for AL observations (shifted by /2 for AC observations) and is defined such that = 0 • in the direction of local Equatorial North, and = 90 • towards local East 3 . Substituting into Eq. 8: where = cos and = sin , M ∈ R ×5 is the design matrix for the linear equations and r ∈ R 5 is the vector of astrometric parameters. 
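As a rough illustration of the linear model just described, the sketch below builds the n x 5 design matrix from scan times, scan position angles and parallax factors. The column ordering, variable names and numerical values are illustrative assumptions for this sketch rather than Gaia conventions, and the parallax factors are taken as given inputs.

```python
import numpy as np

def design_matrix(t, psi, plx_ra, plx_dec):
    """Build the n x 5 design matrix of the 5-parameter astrometric model.

    Columns correspond to (alpha*0, delta0, mu_alpha*, mu_delta, parallax).
    t       : observation times relative to the reference epoch (years)
    psi     : scan position angles (radians), 0 towards local North, pi/2 towards East
    plx_ra  : parallax factor in the alpha* direction at each observation
    plx_dec : parallax factor in the delta direction at each observation
    """
    s, c = np.sin(psi), np.cos(psi)
    # 1D along-scan position: p = s*(a*0 + mu_a* t + w*Pi_a) + c*(d0 + mu_d t + w*Pi_d)
    return np.column_stack([s, c, s * t, c * t, s * plx_ra + c * plx_dec])

# Illustrative scan history (not a real Gaia source).
t = np.linspace(-0.9, 0.9, 15)                 # years from the reference epoch
psi = np.linspace(0.0, np.pi, 15)              # toy scan angles
plx_ra = 0.7 * np.cos(2 * np.pi * t)           # toy parallax factors
plx_dec = 0.3 * np.sin(2 * np.pi * t)

M = design_matrix(t, psi, plx_ra, plx_dec)
print(M.shape)  # (15, 5)
```

The predicted astrometric covariance then follows from weighted least squares on this design matrix, as formalised in the next paragraphs.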
Assuming Gaussian measurement uncertainty for both AL and AC measurements and assuming all observations are independent, the observed source positions are distributed x ∼ N (x, K) where the covariance matrix K = diag 2 1 , . . . , 2 . This measurement covariance implicitly assumes all observations are independent and uncorrelated, one of our key assumptions also adopted in AGIS. Following standard linear least squares regression (Hogg et al. 2010), the astrometric uncertainty covariance matrix of the inferred r is given by Expanding this out in terms of all scan angles, the full inverse covariance matrix is given by where Π = Π + Π . Eq. 11 assumes that every scan of a source will produce a detection which contributes to the astrometric solution. Even after removing gaps in the scanning law, there are periods of time and magnitudes which are less likely to result in good astrometric observations. We need to account for the efficiency of Gaia observations. Scan Weights As Gaia scans a source, up to 9 observations are taken with the 9 astrometric-field CCD columns. There are two ways in which observations may not be propagated to the astrometric solution. If a source is not detected and confirmed by the SM and first AF CCDs and allocated a window, none of the CCDs in the scan will produce a successful detection. Secondly, an individual CCD observation may either not be taken or the measurement may be down-weighted in the astrometric solution. There are many reasons why this might happen such as stray background light, attitude calibration or the source simply passing through the small gaps between CCD rows. Accounting for these processes, the astrometric precision matrix may be approximated as where ∼ Bernoulli( ) × Binomial (9, ) where is the fraction of scans used in the astrometric solution and is the probability of a CCD producing a successful observation. The binomial distribution assumes that a CCD observation is either successful or not therefore only allowing a full weight or zero weight. The weight formula in the AGIS pipeline (Eq. 66, Lindegren et al. 2012) does allow for non-discrete weights however we anticipate that this will have a small effect on our results. Assuming that the event of a successful scan or observation are independent events, the expected value of the weights is given by Therefore the expected astrometric precision is given by For sources with > 13, Gaia only measures a 1D position in the AL direction however for bright sources, < 13 and AC measurement is also taken. Following the method in Lindegren et al. (2012), we treat the AL and AC observations as independent 1D measurements such that Eq. 15 expands out to In Boubert et al. (2020b) the fraction of scans, ( ) which contribute to the Gaia photometry is estimated in Star Packet magnitude bins as a function of time in DR2. Due to their separate pipelines, the probability of an observation contributing to the astrometric solution will differ from the photometry. To determine the astrometry weights, we renormalise the photometry scan fraction using the published number of astrometric detections used, where AL is the weight for AL source observations since we have renormalised by the number of good AL observations used in the astrometry. The multiplication of the scan fraction by = 9 × 62 63 converts the scan fraction to average number of observations. 
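Continuing the illustration above, the following is a minimal sketch of how the expected precision matrix could be assembled by weighting each scan with its expected number of contributing CCD observations, E[w] = 9 p q (p: probability the scan is used, q: per-CCD success probability). The probabilities are placeholders; sigma_AL ≈ 0.37 mas is the value quoted later in the text for a G ≈ 16 source.

```python
import numpy as np

# Reuse the toy scan history and design matrix from the previous sketch.
t = np.linspace(-0.9, 0.9, 15)
psi = np.linspace(0.0, np.pi, 15)
s, c = np.sin(psi), np.cos(psi)
plx_ra, plx_dec = 0.7 * np.cos(2 * np.pi * t), 0.3 * np.sin(2 * np.pi * t)
M = np.column_stack([s, c, s * t, c * t, s * plx_ra + c * plx_dec])

def expected_covariance(M, sigma_al, p_scan=0.95, q_ccd=0.9):
    """Expected 5x5 covariance: invert sum_i E[w_i] m_i m_i^T / sigma_AL^2,
    with E[w] = 9 * p_scan * q_ccd contributing AL observations per scan."""
    expected_w = 9.0 * p_scan * q_ccd
    precision = expected_w * (M.T @ M) / sigma_al**2
    return np.linalg.inv(precision)

cov = expected_covariance(M, sigma_al=0.37)   # sigma_AL ~ 0.37 mas for G ~ 16 (from the text)
print(np.sqrt(np.diag(cov)))                  # predicted uncertainties on the 5 parameters
```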
There are 9 columns and 7 rows of CCDs in the astrometric field of Gaia The magnitude dependence of the ASF is a function of the AL astrometric uncertainty AL (top), the fraction of photometric observations which generate good astrometric observation used in the AGIS pipeline good (middle) and the ratio of AC to AL observations (bottom). In all cases the median and 16 th to 84 th percentiles of the Gaia DR2 astrometry sample are given by the red sold line and shaded area respectively. The distribution of AL (black histograms, log normalised) extends high above the median due to source excess noise. however one CCD is replaced by a wave front sensor hence only 62 are left. good ( ), shown in the middle panel of Fig. 2, is above 90% across the most magnitudes and only significantly deviates from 100% at the bright end. For a given source, the number of AL and AC observations is published in Gaia DR2 as _ _ _ and _ _ _ . These statistics do not account for downweighting of observations in the astrometry pipeline, however, assuming the AL and AC measurements of the same observations are equally likely to be down-weighted, the ratio between the numbers will be unaffected. = gives the fraction of observations which produce an AC measurement. The bottom panel of Fig. 2 shows that observations with < 13 produce AC measurements whilst > 13 do not. We use this fraction to relate the observation weights AC = ( ) AL . In truth, the scan fraction, , may be a weak function of position on the sky at bright magnitudes due to crowding causing problems with window assignment. However, as we'll see in the following section, the contribution from AC observations to the astrometric precision is ∼ 3% compared to the AL contribution and so any weak uncertainty in will have a small impact on the estimated precision. Centroid error The centroid error in the AL and AC directions, AL , AC is a function of the spacecraft instrumentation and apparent brightness of the source due to photon shot noise. For the remainder of this section, we assume that all CCDs in the astrometric field of the times in the DR2 time frame shown by blue and red arrows for FoV1 and FoV2 respectively. Bottom: Each scan, marked by the vertical red and blue dashed lines, contributes to the 5D astrometry constraints. The expected uncertainty on each astrometry parameter is shown for a source with = 16 and therefore AL = 0.37 mas and reduce with each subsequent scan. When fewer than 6 visibility periods are observed, only * 0 (green solid) and 0 (purple solid) are shown with priors placed on all parameters. With at least 6 visibility periods, uncertainties are also given for * (green dashed), (purple dashed) and (red dotted). CCD panel have similar noise properties. We also assume that this performance is time independent and does not depend on the position of the source on the plane. Changes to the spacecraft such as mirror condensation and micrometeoroid impacts mean that the performance of the space craft is not perfectly time independent however we demonstrate in Appendix A that the dependence is small compared to the scatter of individual measurements using the epoch photometry. Therefore we assume that all AL observations of a single source have the same precision and likewise for all AC observations such that Eq. 16 becomes The final unknown in Eq. 22 is AL , the astrometric centroid error of AL observations. AL was estimated from the Gaia published astrometry by Belokurov et al. 
(2020) using the formula 0.53 √ where was the number of AL observations used for the source astrometry published as _ _ _ _ . 0.53 was used as this empirically matched the published distribution in Fig.9 of L18. However √ is a strong function of position on the sky depending on scan directions and spread of observations throughout the year. This means that the running median as a function of magnitude will be heavily affected by where the given stars lie on the sky. For this work, we find a more mathematically motivated route to the scan variance. By summing up the first two diagonal terms of the inverse covariance matrix from Eq. 12, the dependence on the scan angle disappears. Therefore the AL astrometric error can be determined independent of position on the sky by substituting Σ for the published covariance and rearranging in terms of AL . where =1 AL = _ _ _ _ . As we will discuss in more detail in Section 6, the selection of the Gaia 5D astrometry sample included a cut on _ 5 _ . Sources with large astrometric uncertainty would not receive 5D astrometry and therefore, particularly at the dim end, AL would be biased low. To mitigate this, we calculate AL ( ) using all stars in Gaia DR2 with at least 6 _ _ . For sources without 5D astrometry we use the inverse of the published 2D astrometry covariance matrix as a proxy. This is a rough approximation and therefore we suggest that our results are only trusted out to 20.5 at which point the cut on _ 5 _ becomes significant (see Section 6). The distribution of AL is shown in the top panel of Fig. 2 demonstrating a relatively flat behaviour for < 13 where 2D observations are taken and time windows are truncated to avoid saturation. For > 18 the variance grows with magnitude due to photon shot noise. The red line gives the median value in 0.1mag and we linearly interpolate this as a function of magnitude to estimate AL ( ). For reference, the grey-scale histograms are the AL for 5D astrometry sources where the truncation for AL ∼ 10 mas is caused by the _ 5 _ cut. The blue line in the top panel of Fig. 2 is the blue line from L18. Across most of the magnitude range, our estimate is lower than L18 by ∼ 10%. This is expected because we're actually calculating slightly different statistics. L18 used the residuals of all AL observations relative to the best fit astrometric solution. In calculating the source astrometry, observations are assigned weights as a function of their residuals which disfavoured observations with large residuals from being used in the astrometric solution. Therefore the value of AL inferred by L18 will be higher than ours which has implicitly ignored large outliers. As our task in this paper is to predict the published 5D astrometry uncertainties, our formula for AL is the appropriate one to use. Finally, we can substitute in AL from Section 3.2. where we have defined as the magnitude dependent normalisation and Φ( , ) ≡ AL =1 ( )A as the scanning law dependent matrix. Φ has a weak magnitude dependence as the fractions ( ) change between the magnitude bins in which Gaia downloads data however, within any download bin, it is independent of magnitude. . The ratio of the median AL in HEALPix level 7 bins to the value of AL evaluated at the median magnitude of stars in the HEALPix bins highlights any dependence of AL on sky position in Galactic coordinates. 
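A minimal sketch of the rearrangement described above, assuming the published covariance is ordered with the two position terms first so that summing the first two diagonal elements of the precision matrix cancels the scan-angle dependence (sin^2 + cos^2 = 1). The example matrix and observation count are illustrative, not Gaia data.

```python
import numpy as np

def sigma_al_from_covariance(published_cov, n_good_obs_al):
    """Estimate the per-observation AL centroid error sigma_AL.

    The sum of the first two diagonal elements of the precision matrix equals
    N_AL / sigma_AL^2, independent of the scan directions. Assumes the 5D
    covariance is ordered with ra* and dec first and expressed in mas units.
    """
    precision = np.linalg.inv(published_cov)
    return np.sqrt(n_good_obs_al / (precision[0, 0] + precision[1, 1]))

# Illustrative (not real Gaia) 5D covariance with ~0.1 mas positional errors.
cov = np.diag([0.01, 0.012, 0.04, 0.05, 0.03])
print(sigma_al_from_covariance(cov, n_good_obs_al=200))   # about 1 mas for these toy numbers
```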
Particularly in the highest source density regions of the Galactic plane and bulge at brighter magnitudes, sources have significantly higher astrometric measurement uncertainty than the average across the sky. Astrometry Spread Function In the previous sections we have derived the expected precision, E Σ −1 for simple point sources as observed by Gaia. The DR2 data has been used to estimate AL ( ), ( ) and good ( ) as running medians as a function of magnitude. Φ( , ) = AL =1 ( )A is a function of the scanning law only and has no dependence on the Gaia astrometry data. For the remainder of this paper, we will simplify the notation taking Σ = E Σ −1 −1 as the expected 5D astrometry covariance for a simple point source in Gaia. For a point source moving without acceleration with true astro-metric coordinates r observed in Gaia DR2, the expected measured astrometric coordinates will be drawn from a multivariate normal distribution with covariance Σ, r ∼ N (r, Σ( , , )). This normal distribution is the Astrometry Spread Function where , , are the apparent magnitude and position of the source on the sky. To demonstrate how the astrometry is fit in practice, we show the expected observations and astrometric uncertainty for a hypothetical source at = 30 deg, = 10 deg with apparent magnitude = 16 in Fig. 3. The source is given proper motion * = 20 mas/y, = 20 mas/y which produces a trajectory from South East to North West. Adding the parallax ellipse for = 12 mas generates a spiralling apparent position observed by Gaia throughout DR2 given by the black-dashed line in the top panel of Fig. 3. Gaia scans this region of the sky 15 times in DR2 given by the blue and red arrows for scans from FoV1 and FoV2 respectively. Each scan improves the constraint on each of the five astrometry parameters the uncertainties for which are given in the bottom panel of Fig. 3. Gaia selects sources for the 5D astrometry catalogue which have at least 6 _ _ where a visibility period is a group of observations separated by less than four days. Where fewer than 6 visibility periods have been observed the AGIS pipeline places priors on the astrometry derived in Michalik et al. (2015) and only the 2D position constraints are published. We replicate this using the same priors and only providing uncertainties for the * 0 (green solid) and 0 (purple solid) parameters before the sixth visibility period (9th scan). After the sixth visibility period, the priors were dropped and the uncertainties on * (green dashed), (purple dashed) and (red dotted) parameters are also shown. For simplicity, this demonstration assumes all observations were successful and equally likely to contribute to the astrometry however as discussed in Section 3.2, this is not always the case and this is corrected for by weighting observations. RESULTS To test that our method is producing reasonable covariance matrices, we compare our predictions with the published 5D astrometry covariances. From the Gaia DR2 astrometry sample we determine the median published covariance on a level 7 HEALPix grid (Górski et al. 2005) in the magnitude range ∈ [18.1, 19.0] which represents a single Star Packet bin in which the scan fractions, are unchanged. We estimate the predicted covariance using the formula in Eq. 22 where is taken as the median apparent magnitude of stars in the given magnitude bin and HEALPix pixel. The scan angles and times are inferred at the central coordinates of the HEALPix pixel. All figures are shown in Galactic coordinates. 
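As a small usage sketch of the ASF defined above (and of the mock-catalogue application discussed later), a mock 'measured' astrometry for a simple point source is a single multivariate-normal draw from the predicted covariance. The proper motion and parallax echo the demonstration source in the text; the covariance itself is an illustrative placeholder, not a real Gaia prediction.

```python
import numpy as np

rng = np.random.default_rng(42)

# True astrometry of a hypothetical source: (ra* offset, dec offset in mas from a
# reference position, pmra*, pmdec in mas/yr, parallax in mas), using the
# demonstration values from the text (mu = 20 mas/yr in each component, parallax = 12 mas).
r_true = np.array([0.0, 0.0, 20.0, 20.0, 12.0])

# Illustrative ASF covariance for this sky position and magnitude (placeholder values).
sigma = np.diag([0.05, 0.06, 0.09, 0.10, 0.08])

# One realisation of what Gaia would publish for a non-accelerating point source.
r_measured = rng.multivariate_normal(r_true, sigma)
print(r_measured)
```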
The diagonal elements of both the median observed and pre-dicted covariance matrices are shown in Fig. 4 demonstrating excellent agreement down to degree scales in all components. In all coordinates the variance is significantly enhanced in regions which have been scanned less in DR2, most notably around the Galactic bulge. Thin streaks of boosted variance on the sky correspond to time periods in Gaia DR2 where data was not taken due to mirror decontamination or other disruptive processes. In Fig. 5 we compare the correlation coefficients evaluated by dividing the off-diagonal covariance elements by the square root of the products of their respective variances. Correlation coefficients are less dependent on the number of observations, which has largely been divided out, and more on the scan directions and time variance leading to a more complex and varied structure on the sky. Again, the observed correlation (upper right triangle) and predicted correlation (lower left) show excellent agreement down to small scale variations. Fig. 6 provides a more direct comparison between the predicted and observed covariances. Diagonal elements give the ratio of predicted to observed variance. Across the vast majority of the sky, there is strong agreement with noise dominating in underscanned regions. Two features stand out in the variance ratios where the model has not fully captured the system. A streak of scans in the South East and North West show underestimated uncertainties from the model. The scans in Gaia responsible for this are constrained and discussed in Section 6. Secondly, the Galactic bulge also shows a significant systematic underestimate against the observed variance. This is not unexpected as high source crowding can cause single windows to be allocated to multiple sources generating spurious centroid positions. The third panel of L18, Fig. B.4 shows the same issue but manifested in the _ _ of the source fits. We demonstrate this issue in Fig. 7 where we show the median AL ( , , ) evaluated using Eq. 21 with the median taken in every 0.1 mag magnitude bin and HEALPix level 7 pixel divided by AL ( ) evaluated at the median magnitude of stars in the HEALPix pixel. From Section 3.3, we expect AL to be independent of position on the sky which is a key assumption in our model. Across the sky AL shows only weak dependence on the scanning law at less than 10%. However, particularly for brighter magnitude bins, AL is not uniform over the sky as expected and is significantly higher in regions of the disk and bulge with the highest source density. This issue is further exacerbated for the bulge as it happens to reside in a region of the sky which has been scanned very few times by Gaia whereas the LMC and SMC which have been scanned more heavily show no clear signal. In future Gaia data releases, the Galactic bulge will likely receive significantly more scans reducing this issue. Fig. 7 also shows residual scanning law structure which is likely caused by the ∼ 20% variation in the instrument precision discussed in Appendix A. For example, the green strips in the North East and South where AL is systematically higher correspond to areas which received many observations before the first decontamination when the satellite measurement precision was at its worst as shown in Fig. A2. Fig. 8 shows the percentage of observations which took place before the first decontamination event in DR2 for which the highest regions match exactly with regions of the sky in Fig. 7 with enhanced AL . 
The diagonal elements of Fig. 6 show that these features are comparable to the background noise level and so are not of significant concern. The off-diagonal elements of Fig. 6 show the difference between predicted and observed correlation coefficients. The structure of the scanning law can be seen in white as the regions which are most heavily scanned will have the lowest uncertainty. There is Figure 9. The reduced 2 of the astrometric solution, UWE is estimated from the published covariances using the predicted covariance for simple point sources producing a distribution with median ∼ 1 (red solid). The distribution of source (black histogram, log normalised) extends out to high values of UWE due to sources with high excess noise. some marginal bias in the * and components but this is small compared with the overall signal seen in Fig. 5. From these results, we demonstrate that the ASF is accurate across the majority of the sky across all magnitudes at the 10% level. However for bright sources ( 18) close in crowded regions (| | 5 deg) un-corrected calibration effects become significant inflating the systematic uncertainties. When using Gaia DR2 astrometry to search for excess noise from genuine source characteristics, these systematic uncertainties should be taken into account. UNIT WEIGHT ERROR Unit Weight Error (UWE) is the reduced chi-squared statistic of the astrometric fit to observations. where x and x are the measured and expected position measurements of a source, K = diag[ 2 1 , 2 2 ... 2 ] is the measurement covariance and = − 5 is the number of degrees of freedom. For simple point sources UWE will be drawn from a Gamma distribution, UWE ∼ Γ [ /2, /2] such that the expected value is 1 and the variance is inversely proportional to the degrees of freedom. However any excess stellar motion or an extended flux distribution will introduce an excess UWE above 1 as happens for binary systems (Penoyre et al. 2020) or astrometric microlensing events (McGill et al. 2020). Gaia publishes 2 and the degrees of freedom = − 5 for all stars with 5D astrometry in DR2 from which UWE can be calculated. However, the published 2 is plagued by the DoF bug (L18) which makes values unreliable to use for estimating the excess noise. This can be remedied by renormalising the published UWE as a function of colour and apparent magnitude to produce a new statistic, RUWE 4 . RUWE is normalised such that the 41 st percentile is 1 as this was found to represent well behaved sources where the median showed significant contamination from sources with excess error. This works well at face value and produces a usable statistic however there are two limitations. Firstly RUWE does not follow a well defined 2 distribution as would be expected from UWE, therefore estimating the significance of excess noise is challenging. Secondly, in cases where excess noise is not equally likely in all colours and apparent magnitudes, the renormalisation can hide some of the expected excess. This would be problematic when establishing the binary fraction as a function of colour and absolute magnitude which is expected to vary considerably between stellar populations (Price-Whelan et al. 2020;Belokurov et al. 2020). An alternative of UWE for a source with measured 5D astrometry is given by where = 5 is the dimensionality of the astrometry. 
Given = (r − r) ∼ N (0, C) All off-diagonal elements of W produce antisymmetric integrands in y leaving only the diagonal elements Substituting Σ back into this we have UWE in terms of the published covariance Using this formula, we estimate UWE for all stars with 5D astrometry in Gaia DR2. The distribution of UWE as a function of magnitude, shown in the Fig. 9, is uniform with the median UWE 1. The fact that the median UWE sits slightly higher than 1 is due to the contribution from sources with excess noise. The spread of UWE which is greatest at ∼ 13 and narrows to fainter magnitude is a clear signature of excess error which is resolvable at brighter magnitude but becomes increasingly dominated by photon count noise for fainter sources. Our estimate is compared with the published UWE for sources with ∈ [18.1, 19.0] in Fig. 10. At these dim magnitudes, the impact of the DoF bug is small. Across the sky, our estimate of UWE is in excellent agreement with the published value producing no systematic residual signal in the right hand panel down to 10% uncertainty. In Gaia EDR3, the DoF bug is be fixed and our estimate of UWE will be superseded by the published value. However, the fact that our measurement is in good agreement with the published UWE is indicative that the published covariance alongside our prediction . Predicted 5Dmax after correcting for the DoF bug (red solid) as a function of magnitude for all sources in the Gaia DR2 astrometry shows strong agreement with the published values (median -blue solid, 16 th − 84 th percentiles -blue shaded). The model before correcting for the DoF bug (red dashed) shifts at ∼ 13 the magnitude at Gaia switches from 2D to 1D observations. The systematic underestimate of the prediction against the median published astrometry is expected to be due to remaining calibration uncertainties which we have not fully accounted for. of the ASF contains all of the information contained in UWE and more. Whilst UWE can be used to determine the probability and amplitude of any excess variance, the ASF has the potential to decode the orientation and time variation of excess noise. ASTROMETRIC SELECTION In order to construct unbiased dynamical models of the Milky Way, it is critically important that we have a strong understanding of the completeness of our sample. produced the selection function for the full Gaia DR2 catalogue however the subset of DR2 with 5D astrometry constitutes a biased subsample and therefore an astrometry selection function is required for studies which rely on parallax or proper motion data. The Gaia DR2 5D astrometry sample is the subset of the full sample that satisfies the cuts (L18, Section 4.3): To construct the selection function for the 5D astrometry sample we can combine the effect of these cuts with the full sample selection function where S 5Dast is the event that a source is published with 5D astrometry and S DR2 is the event that a source is included in DR2 with or without 5D astrometry. P(S DR2 ) is the full Gaia DR2 selection function estimated in . The probability of a star in DR2 receiving 5D astrometry, P(S 5Dast |S DR2 ), is governed by the three cuts outlined above. The second cut on _ _ ( VP ) is a complex function of the scanning law and detection probability and will be the subject of a future work. Here we will focus on the _ 5 _ ( 5Dmax ) cut. 
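Returning to the UWE estimate above: one consistent reading of the derivation is that, with the residual vector distributed as N(0, C) and the weights taken from the predicted simple-point-source precision, the expectation of the quadratic form reduces to a trace, UWE ≈ sqrt(Tr(Σ⁻¹C)/5). The sketch below implements that reading and should be taken as an interpretation under those assumptions rather than a verbatim transcription of the published formula.

```python
import numpy as np

def uwe_from_covariances(predicted_cov, published_cov):
    """Unit weight error estimated from the published 5D covariance, assuming
    the predicted simple-point-source precision as the weight matrix:
    UWE^2 = Tr(Sigma_pred^-1 @ C_published) / 5."""
    w = np.linalg.inv(predicted_cov)
    return np.sqrt(np.trace(w @ published_cov) / 5.0)

# Toy example: a source whose published covariance is inflated by excess noise.
sigma_pred = np.diag([0.04, 0.05, 0.08, 0.09, 0.07])              # simple point source (illustrative)
c_published = sigma_pred + np.diag([0.02, 0.02, 0.0, 0.0, 0.05])  # excess in position and parallax

print(uwe_from_covariances(sigma_pred, sigma_pred))   # ~1.0 for a well-behaved source
print(uwe_from_covariances(sigma_pred, c_published))  # >1, signalling excess noise
```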
2 5Dmax is the maximum eigenvalue of the scaled astrometric covariance matrix where ∈ R 5×5 is the published 5D covariance matrix and = diag[1, 1, sin ( ), /2, /2] where = 45 deg is the solar aspect angle of the Gaia satellite and = 1.75115 yr is the time window of observations used in Gaia DR2. Our aim here is to estimate the contribution to the selection function solely from the cut on 5Dmax , 5Dmax and VP are published for all sources in Gaia DR2 so this could be easily achieved by taking the ratio of number of sources with 5Dmax < 1.2 ( ) and VP > 5 to only those with VP > 5 as a function of apparent magnitude and position on the sky P( 5Dmax < 1.2 | VP > 5, , , ) This approach is limited by Poisson count noise. To resolve scanning law variations, one would need to resolve the sky to at least HEALPix level 7. Using 200 magnitude bins, this results in an average of ∼ 30 stars with astrometry per bin which will be dominated by the Milky Way disk. At high latitudes the inference will be entirely dominated by Poisson noise. Predicted Observed Predicted/Observed 10 −2 10 −1 10 0 astrometric sigma5d max (mas) 0.67 1.0 1.5 ratio √ encodes the magnitude dependence of the predicted astrometric precision of Gaia DR2. 5D astrometric covariance is only published for the subset of DR2 with 5D astrometry however 5Dmax is published for all sources in DR2. We estimate √ for all sources in DR2 with VP > 5 using Eq. 37 shown here as a function of magnitude. Instead, we can use Gaia's predicted covariance as a function of position on the sky given in Section 3. This enables us to reach unlimited resolution on the sky without HEALPix binning the data. We can predict 5Dmax for any source in Gaia as a function of magnitude and position on the sky where we have used the substitution Σ −1 = ( )Φ from Eq. 22 and ( ) is defined in Eq. 23. A comparison of the running median of the predicted 5Dmax (red dashed) and observed -_ 5 _ (blue solid) in Fig. 11 shows that the prediction overestimates for < 13 and underestimates for 13 < < 16. The cause of this is the 'DoF' bug detailed in L18, Appendix A. Our predicted 5Dmax has been corrected for the DoF bug whilst the published values, on which the astrometry was selected, had not been corrected. The DoF bug is de-corrected from our prediction dividing through by a factor from Eq. B1 to produce the red solid line, in good agreement with the published 5Dmax as a function of magnitude. The predicted value marginally systematically underestimates 5Dmax across all magnitudes by ∼ 10% which we conjecture may be linked to time dependence of AL which pro- duces systematic uncertainties at the same level however the exact cause of this discrepancy for 5Dmax is unclear. The predicted and observed distribution of 5Dmax on the sky are shown in Fig. 12 with the right panel showing strong agreement across the majority of the sky. Some residual streaks still persist in the South East and North West regions of the sky which match those seen in Section 4 when comparing the predicted and observed astrometry variances. These correspond to broken scans in Gaia DR2 which haven't previously been diagnosed. We use the HEALPix time extractor tool (Holl prep) to constrain the times at which these scans happened in DR2. The clearest time ranges are given in Table 2 where the time range OBMT= 1556 − 1560rev is the direct cause of the residual streaks discussed above. 5Dmax is published for all sources in Gaia DR2 whether or not they have published 5D astrometry. 
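A minimal sketch of the sigma5d_max computation described above, assuming the covariance rows and columns are ordered (ra, dec, parallax, pmra, pmdec) so that sin(xi) scales the parallax term and T/2 scales the proper motions. The example covariance is illustrative, not real Gaia data.

```python
import numpy as np

def sigma5d_max(published_cov, xi_deg=45.0, t_window_yr=1.75115):
    """sigma5d_max: square root of the largest eigenvalue of the scaled 5D covariance.

    Assumes ordering (ra, dec, parallax, pmra, pmdec), so the scaling matrix
    S = diag[1, 1, sin(xi), T/2, T/2] converts parallax and proper-motion
    uncertainties into equivalent positional uncertainties.
    """
    s = np.diag([1.0, 1.0, np.sin(np.radians(xi_deg)),
                 t_window_yr / 2.0, t_window_yr / 2.0])
    scaled = s @ published_cov @ s
    return np.sqrt(np.max(np.linalg.eigvalsh(scaled)))

# Illustrative covariance (mas and mas/yr units); not a real Gaia source.
cov = np.diag([0.04, 0.05, 0.09, 0.06, 0.07])
value = sigma5d_max(cov)
# Compare against a 1.2 mas threshold; the magnitude-dependent factor gamma(G)
# described in the text is omitted in this toy comparison.
print(value, value < 1.2)
```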
We can therefore use the published σ5d,max to estimate this precision factor for all stars in DR2. The distribution of its square root as a function of magnitude is shown in Fig. 13, where the distribution is largely flat at brighter magnitudes whilst declining for G > 13 due to photon count noise. The spread to lower values is driven by excess noise due to binaries and other accelerating or extended sources.

In every 0.1 mag bin we fit a two-component Gamma mixture model (ΓMM) to the distribution. One component of the mixture model fits the peak of the distribution, which is dominated by well behaved simple point sources, whilst the second component has an extended tail to low values which accounts for sources with significant excess noise. Examples of these fits in four magnitude bins are shown in Fig. 14, demonstrating reasonable agreement at dim magnitudes whilst somewhat cutting through the low tail at bright magnitudes. At dim magnitudes there is also a small excess of sources at large values. The precise cause of this tail is unclear, but since any cuts on σ5d,max will be on the low end, the fact that we haven't correctly modelled the high tail will only generate a < 1% systematic uncertainty in the inferred selection function. Priors used for each of the parameters in the ΓMM are given in Table 3. The parameters are fit using expectation maximisation and posterior distributions produced using emcee (Foreman-Mackey et al. 2013).

The behaviour of the ΓMM parameters as a function of magnitude is modelled with a single Gaussian Process. For values of the same parameter at different magnitudes, the GP uses a squared exponential kernel with a variance and a scale length. For different parameters, we assume no intrinsic correlation; however, correlations will be introduced between different parameters of the same magnitude bin through the covariance of MCMC samples. Applying k-fold cross validation with k = 5 we infer hyperparameter values of 0.224 for the kernel variance and 2.578 for the scale length. The posterior GP is shown in Fig. 15, where the blue solid and red dashed lines are the two components for each parameter. Due to a lack of bright sources in Gaia DR2 astrometry, the GP at the bright end is dominated by the prior from the kernel. Since a negligible proportion of stars will be influenced by the σ5d,max cut at these magnitudes, this is not a significant issue for the model.

Using the ΓMM as a function of magnitude, the selection function probability is given by the probability, under the ΓMM, that a source passes the threshold obtained by substituting σ5d,max = 1.2 γ(G) into Eq. 37. The selection probability is given at three magnitudes in Fig. 16, demonstrating that the cut only has a significant effect for G > 20. At the faintest magnitudes, regions of the sky which have been only sparsely scanned in Gaia DR2 are most likely to be removed due to the cut on σ5d,max. In the most extreme cases, such as in the Milky Way bulge, this can result in < 1% completeness in the Gaia DR2 astrometry sample. Due to the simplicity of our two-component ΓMM, the fits to the distribution of stars can produce significant offsets from the true distribution of data at the low tail, as is seen in the fourth panel of Fig. 14. The overestimate of the number of sources at low values in this case will lead to a significant overestimate of the number of stars with high σ5d,max which will subsequently get cut from the 5D astrometry sample.
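As an illustration of the kind of two-component Gamma mixture fit described above, the sketch below fits such a mixture to synthetic data by direct maximum likelihood with scipy; this is a simplification of the expectation-maximisation plus emcee procedure used in the text, and all numbers and starting values are illustrative only.

import numpy as np
from scipy import optimize, stats

def neg_log_like(params, x):
    # Two-component Gamma mixture: mixing weight w, (shape, scale) per component.
    w, a1, s1, a2, s2 = params
    pdf = (w * stats.gamma.pdf(x, a1, scale=s1)
           + (1.0 - w) * stats.gamma.pdf(x, a2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

rng = np.random.default_rng(0)
# Synthetic stand-in for the per-magnitude-bin quantity derived from sigma5d_max:
# a narrow peak of well-behaved sources plus a broad low tail of excess-noise sources.
x = np.concatenate([rng.gamma(20.0, 0.05, size=900),
                    rng.gamma(2.0, 0.20, size=100)])

bounds = [(0.01, 0.99), (0.1, 200.0), (1e-3, 10.0), (0.1, 200.0), (1e-3, 10.0)]
res = optimize.minimize(neg_log_like, x0=[0.9, 15.0, 0.06, 2.0, 0.25],
                        args=(x,), method="L-BFGS-B", bounds=bounds)
print(res.x)  # fitted [w, a1, s1, a2, s2]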
For this work we consider the method a proof of principle for applying the ASF in order to derive the selection function, and we will refine the fits to this factor as a function of magnitude when producing the full Gaia DR2 5D astrometry selection function.

Excess Covariance

In this work we have derived and discussed the importance of the ASF for analysing simple point sources in Gaia DR2. However, we haven't established how to use the ASF to estimate the excess covariance or precisely how this can be interpreted. Consider a source with true 5D astrometry r. However, the source is not a simple point source, such that the apparent position as a function of time is not well modelled by the 5D astrometric solution. If the excess noise may be parameterised by a 5D covariance E, the probability of measuring the apparent 5D astrometry as r' will be given by P(r') = N(r'; r, E). If one attempts to measure this source, the uncertainty with which the 5D astrometry is measured is given by the ASF, P(r'' | r') = N(r''; r', Σ). By multiplying the two distributions together and marginalising over r', we can determine the probability distribution of the measured 5D astrometry,

P(r'') = ∫ d⁵r' N(r''; r', Σ) N(r'; r, E) = N(r''; r, Σ + E) = N(r''; r, C).

Therefore, in this vastly oversimplified situation, the final measurement uncertainty for the 5D astrometry is given by the convolution of the excess noise and the ASF (with the ASF providing the contribution from the observation measurement uncertainty). There are two significant issues with this interpretation when considering the astrometry published by Gaia. Firstly, the AGIS pipeline does not formally infer the measurement uncertainty induced by excess noise. Residuals beyond simple point source astrometry are absorbed into a 1D excess noise parameter for each source as well as impacting the weights used for the given observations. The second problem is that source excess noise can disguise itself as a shift in the simple point source astrometry. As shown in Penoyre et al. (2020), excess binary motion can have complex effects on the posterior astrometry from Gaia, including a phenomenon called the proper motion anomaly (Kervella et al. 2019). Interpretation of the excess covariance will require simulating stellar populations and emulating the AGIS pipeline in order to forward model how the intrinsic properties of the source relate to the posterior excess.

Mock observations

Whilst we have entirely focused on the implications of the ASF for constraining excess source noise, it is also directly applicable to simulations in order to generate mock Gaia catalogues for Milky Way analogues. Recent simulations such as Auriga (Grand et al. 2017) and VINTERGATAN (Agertz et al. 2020) have demonstrated the ability of the latest generation of cosmological simulations to produce Milky Way analogues which are excellent tools for studying the physical processes which govern the evolution of our galaxy. Performing a direct comparison with Gaia observations requires the Gaia selection functions and measurement uncertainty. The ASF provides the expected uncertainty of 5D astrometry for a simple point source. Given a simulated star with astrometry r as observed from the Sun, the astrometry that would be measured by Gaia, r', can be inferred by sampling from the ASF, r' ∼ N(r, Σ(G, l, b)).
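A minimal numpy sketch of that sampling step follows; the true astrometry and the ASF covariance below are entirely made-up stand-ins, since a real application would evaluate the ASF at the star's magnitude and sky position.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true 5D astrometry of a simulated star, ordered as
# [ra* offset (mas), dec offset (mas), parallax (mas), pmra* (mas/yr), pmdec (mas/yr)].
r_true = np.array([0.0, 0.0, 2.5, -3.1, 7.4])

# Stand-in for the ASF covariance Sigma(G, l, b).
sigma_asf = np.diag([0.04, 0.03, 0.05, 0.02, 0.02])

# Mock Gaia measurement of a simple point source: r' ~ N(r, Sigma).
r_obs = rng.multivariate_normal(r_true, sigma_asf)

# For a non-simple source with published covariance C, the excess covariance under the
# convolution interpretation C = Sigma + E can be estimated by subtraction.
c_published = sigma_asf + np.diag([0.0, 0.0, 0.02, 0.01, 0.01])
e_excess = c_published - sigma_asf
print(r_obs)
print(np.diag(e_excess))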
ACCESSING THE ASF

The ASF is a useful tool for inferring excess astrometric covariance of Gaia 5D astrometry sources. To make this accessible, we've added a module to the scanninglaw Python package (https://github.com/gaiaverse/scanninglaw) (Boubert et al. 2020b). The user can ask the question 'What astrometric covariance would Gaia have published if my star was a simple point source?'. As always, this is demonstrated by determining the ASF covariance of the fastest main-sequence star in the Galaxy (S5-HVS1, Koposov et al. 2020) for Gaia DR2. The diagonal elements of the output covariance give the variances in α*, δ and ϖ (mas²) and in μ_α* and μ_δ (mas²/yr²).

import scanninglaw.asf as asf
from scanninglaw.source import Source

CONCLUSION

The Astrometry Spread Function is the astrometric uncertainty distribution which would be expected for a point source with linear motion relative to the solar system barycentre (a simple point source), given the source apparent magnitude and position on the sky. Gaia's DPAC estimate the astrometric solution using an iterative linear regression algorithm. Given the uncertainty of individual observations and the scanning law, we have been able to reconstruct the astrometric covariance that would be expected for a simple point source observed by Gaia DR2. The ASF is a 5D multivariate Gaussian distribution with mean 0 and covariance Σ ∈ R^{5×5}, where we have formally derived Σ(G, l, b).

Assuming the bulk of stars in the Gaia DR2 5D astrometry sample are simple point sources down to Gaia's detection limit, we compare our result with the published covariances and find extremely good agreement down to sub-degree scales on the sky. The only region with marginal disagreement is the highest source density regions of the bulge, where the combination of source crowding and few scans in Gaia DR2 invalidates our assumptions. Therefore we caution against use of the ASF in highly crowded regions with low scan counts.

We used the ASF in combination with the published covariance to infer the unit weight error for Gaia DR2 sources. The strong agreement with the published UWE demonstrates that the ASF can be used to find the excess error in Gaia observations due to physical source characteristics. The ASF will be a valuable tool for exploiting Gaia data to model binary stars, astrometric microlensing events and extended sources. Finally, we applied the ASF to predict the selection function contribution from the cut on astrometric_sigma5d_max used to generate the Gaia DR2 5D astrometry sample. This will be a key component of the full astrometry selection function, which is a vital tool for unbiased modelling of Milky Way kinematics from Gaia's 5D astrometry.

The DoF bug impacted sources with G > 13, although the increased photon count noise at dimmer magnitudes dampens the effect for dim stars. As a result, when estimating astrometric_sigma5d_max in Section 6, in order to obtain good agreement with the data, we must de-correct for the DoF bug by dividing through by the correction factor.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
Data-driven Stochastic Model for Quantifying the Interplay Between Amyloid-beta and Calcium Levels in Alzheimer's Disease

The abnormal aggregation of extracellular amyloid-β (Aβ) in senile plaques, resulting in calcium (Ca+2) dyshomeostasis, is one of the primary symptoms of Alzheimer's disease (AD). Significant research efforts have been devoted in the past to better understand the underlying molecular mechanisms driving Aβ deposition and Ca+2 dysregulation. To better understand this interaction, we report a novel stochastic model in which we analyze the positive feedback loop between Aβ and Ca+2 using ADNI data. A good therapeutic treatment plan for AD requires precise predictions. Stochastic models offer an appropriate framework for modelling AD, since AD studies are observational in nature and involve regular patient visits. The etiology of AD may be described as a multi-state disease process using the approximate Bayesian computation method. So, utilizing ADNI data from 2-year visits for AD patients, we employ this method to investigate the interplay between Aβ and Ca+2 levels at various disease development phases. Incorporating the ADNI data in our physics-based Bayesian model, we discovered that a sufficiently large disruption in either Aβ metabolism or intracellular Ca+2 homeostasis causes the relative growth rate in both Ca+2 and Aβ, which corresponds to the development of AD. The imbalance of Ca+2 ions causes Aβ disorders by directly or indirectly affecting a variety of cellular and subcellular processes, and the altered homeostasis may worsen the abnormalities of Ca+2 ion transportation and deposition. This suggests that altering the Ca+2 balance, or the balance between Aβ and Ca+2 by chelating them, may be able to reduce disorders associated with AD and open up new research possibilities for AD therapy.

Introduction

Alzheimer's disease (AD) is the most prevalent kind of adult dementia. AD is a medical disorder that gradually kills neurons and produces severe cognitive impairment [1,2]. Many medication therapies have been shown to slow the progression of AD, but there is no permanent cure [3,4,5,6,7]. The clinical and pathological hallmarks of AD include progressive neuronal loss, synaptic degradation, and the formation of amyloid plaques and neurofibrillary tangles in particular regions of the brain [8,9]. According to some findings, AD may be a systemic disease, since it affects not only neurons but also peripheral cells such as fibroblasts, lymphocytes, and platelets in AD patients [10,11,12].
Although the exact cause of AD is unknown, a few major theories, including the cholinergic, amyloid cascade, and tau hypotheses, have been presented to explain the progression of AD. The amyloid cascade theory appears to be the most likely, as there are numerous plaques composed of amyloid (Aβ) peptide in the AD patient's brain [13,14]. According to the amyloid cascade theory, Aβ oligomers and amyloid fibrils are formed by the aggregation of Aβ peptides, and these aggregates impair the function of neuronal cells [8]. Many mathematical models have been presented to explain the development of Aβ monomer synthesis or aggregation (see [9,15,16,17] and the references therein). Another hypothesis that has attracted a lot of attention proposes that disturbance of calcium (Ca+2) homeostasis is crucial to AD pathogenesis [18,19,20]. The disruption of Ca+2 homeostasis has been extensively explored in order to understand the processes of Aβ-induced neurotoxicity. Intracellular Ca+2 operates as a second messenger, regulating neuronal activities such as brain development and differentiation, action potentials, and synaptic plasticity [8,21,22]. The Ca+2 hypothesis of AD proposes that activation of the amyloidogenic pathway affects neuronal Ca+2 homeostasis as well as the processes involved in learning and memory. Aβ may alter Ca+2 signalling by numerous mechanisms, including boosting Ca+2 inflow from the extracellular space and stimulating Ca+2 release from intracellular repositories within the brain [23,24]. Moreover, growing evidence suggests that there is a positive loop between Ca+2 and Aβ levels [25,26,27]. We know by now, for example, that a persistently high concentration of Ca+2 is favorable to the formation of Aβ in rat cortical neurons by stimulating γ-secretase activity, which is crucial for the breakdown of amyloid precursor protein (APP) [25,28].

Some of the causes of substantial trial failure include an inadequate understanding of AD etiology and development, as well as inappropriate trial design. The creation of clinical trial simulations and mathematical modelling of AD progression are key tools for exploring the reasons why clinical trials fail and for refining clinical trial methodology. The poorly understood nature of AD etiology and development limits the capacity to build solid mechanistic models for reliable disease progression prediction. There are also mathematical models based on inverse problems that have been established to reflect modifications to cognition over time, as measured by errors on various cognitive tests used to assess patients' intellectual capabilities, such as the Modified Mini-Mental State Examination (MMSE) and the AD Assessment Scale [29,30,31,32,33,34,35,36].
The majority of models developed to date in the aforesaid context are stochastic in nature, as evidenced by [25,37,38]. The FDA has authorized the following medicines for the clinical treatment of AD: tacrine, rivastigmine, galantamine, and donepezil, which are acetylcholinesterase inhibitors (AChEIs) that increase the concentration and duration of activity of the neurotransmitter acetylcholine (ACh); another therapeutically utilized medicine is memantine, which is an N-methyl-D-aspartate (NMDA) receptor antagonist [39]. Additionally, lecanemab and aducanumab are newly approved medicines. In the brains of AD patients, NMDA receptors are overstimulated due to excess glutamate released by neurons, resulting in increased intracellular Ca+2 and the death of neuronal cells. Therefore, reducing Ca+2 fluxes during the disease state could stop the progression of AD. Importantly, a persistent increase in baseline Ca+2 may also play a role in disease progression by increasing the synthesis and toxicity of Aβ in cells harboring AD-related mutations. These mutually reinforcing regulations, in which Aβ promotes a Ca+2 increase, which in turn raises the level of Aβ, create a positive feedback loop that is expected to create a vicious circle leading to disease development [8,25,40].

Inspired by this fact, and using clinical data such as study data from the ADNI database (adni.loni.usc.edu), we developed and tested a simple novel stochastic model to predict the interplay between the Aβ and Ca+2 concentrations in AD progression in a clinical trial. Also, we have selected AD patients monitored at frequent visits, i.e., between 0-2 year visits. We analyzed the data for Aβ concentration and fitted it to the developed stochastic model using the approximate Bayesian computation (ABC) technique. ABC is a data-driven strategy that utilizes a number of low-cost numerical simulations. ABC evaluates unknown physical or model parameters and associated uncertainties using reference data from real-world experiments or higher-fidelity numerical simulations [41]. We found that during the disease state in the patient's brain, there is a tremendous increase in Aβ oligomers, which enhance the influxes of intracellular Ca+2. In return, Ca+2 encourages the production of these hazardous Aβ oligomers, and this fact reinforces the positive feedback between Ca+2 and Aβ. We show that the simulations of our model with the ADNI data correlate with the finding that a variety of dysregulations in Ca+2 and Aβ may lead to disease, as well as random fluctuations of Aβ in vulnerable patients that can lead to a transition from the "healthy" to the "pathological" state [8,15,25,40,42,43,44,45,46,47]. This vulnerability may explain the high prevalence of sporadic AD found in the elderly population.

The rest of the paper is organized as follows: In Section 2, we describe the modelling approach based on (i) the deterministic and stochastic models of the interplay between Aβ and Ca+2, and (ii) the ABC technique. In Section 3, we set up the experimental data and model the participant dynamics. We present our results and computational simulations based on the developed stochastic models in Section 4. The computational results have been obtained with an in-house developed MATLAB code, and all data analysis has been carried out in Python. Finally, we discuss our results, conclude our findings, and outline future directions in Sections 5 and 6.
Methodology

This section highlights the deterministic and stochastic modelling approaches for AD, incorporating the interplay between Aβ and Ca+2 within the Bayesian setting. This is achieved by defining the stochastic model of Aβ and Ca+2 by adding stochastic noises. Additionally, we simulate the trajectories of the stochastic model of Aβ and Ca+2 using the ADNI data by incorporating the ABC technique presented in Section 2.2. The aim is to fit the ADNI data of Aβ concentrations into the stochastic model of Aβ and see the impact on the Ca+2 concentrations.

The modelling approach

In this section, we will present the mathematical model for AD to account for the coexistence between Ca+2 and Aβ. AD is associated with Aβ produced by the cleavage of the amyloid precursor protein (APP), which is partly embedded in the plasma membrane. APP is cleaved by either an α- or a β-secretase. In the amyloidogenic pathway, cleavage of APP by the β-secretase generates sAPPβ and CTFβ. The latter is in turn cleaved by a γ-secretase to form Aβ. A rise in cytosolic Ca+2 enhances the production and release of Aβ, which leads to stimulation of γ-secretase activity in cortical neurons [40]. In resting neurons, the free cytosolic Ca+2 level is maintained around 50-100 nM, while it is increased up to 1 µM upon electrical or receptor-mediated stimulation. As described in [8], Ca+2 influx is enhanced by VGCCs or ligand-gated ion channels such as glutamate and acetylcholine receptors. However, the main intracellular Ca+2 store is the endoplasmic reticulum (ER), from where Ca+2 can be released through the inositol 1,4,5-trisphosphate receptor (IP3R) or through the ryanodine receptor. Decrease of cytosolic Ca+2 occurs through Ca+2-ATPases, the Na+/Ca2+ exchanger, or the mitochondrial uniporter. Aβ perturbs the balance between Ca+2 entry into and extrusion out of the cytoplasm. In healthy neurons, this process equilibrates, leading to a basal Ca+2 level in the range of 50-100 nM. Using transgenic mouse models for AD together with Ca+2 imaging, Kuchibhotla et al. [45] have shown that this resting concentration is higher in neurites located close to amyloid deposits, while another study reports that the basal level of Ca+2 in the cortical neurons of such animals is around 250 nM, i.e. twice that found in controls [48]. Ca+2 channels are also deregulated in brain cells, and the formation by Aβ oligomers of pores in the plasma membrane enhances the influx of extracellular Ca+2 [28]. This feedback is further reinforced by the fact that Ca+2 promotes the formation of these toxic oligomers. We will propose the simple model schematized in Fig. 1, based on the experimental observations using ADNI data explained in Section 3.
The main variables of the model are the intracellular Ca+2 concentration and the concentration of Aβ (without distinction between intracellular and extracellular compartments, nor between amyloid compounds of different lengths and in different oligomerization states). These concentrations are denoted by Aβ and Ca in the equations, respectively. The evolution of the two variables of the model is given as follows:

dAβ/dt = V1 + Vα · Ca^n / (Kα^n + Ca^n) − k1 · Aβ,   (1)
dCa/dt = V2 + kβ · Aβ^m − k2 · Ca,   (2)

where the Ca concentration represents the basal level of cytoplasmic Ca+2, whose value does not significantly depend on the short-lived Ca+2 peaks arising from the electrical activity of the neurons. Aβ is assumed to be synthesized at a constant rate V1 and eliminated with first-order kinetics, characterized by a rate constant k1. Activation of Aβ synthesis by Ca+2 is represented by a Hill term with a maximal rate Vα, a half-saturation constant Kα and a Hill coefficient n. Similarly, Ca+2 enters the cytoplasm at a constant rate V2 and is eliminated with first-order kinetics, characterized by a rate constant k2. Moreover, Aβ oligomers induce Ca+2 entry into the cell, putatively by provoking an increase in plasma membrane permeability. This process is characterized by a cooperativity coefficient m and a rate constant kβ. This latter term is taken as non-saturable, to model the formation of pores by oligomers of Aβ. The model Eqs. (1)-(2) are adopted from [40]. It is considered that in a healthy neuron, the concentrations of Aβ (∼ 5 nM) are lower than those of Ca+2 (∼ 50-100 nM) [49].

In the present section, we formalize this positive loop in a mathematical model and show that it exhibits bistability. Therefore, a stable steady state characterized by low levels of Ca+2 and amyloids, which corresponds to a healthy situation, coexists with another 'pathological' state where the levels of both compounds are high. The onset of the disease corresponds to the switch from the lower steady state to the higher one, induced by a large enough perturbation in either the metabolism of amyloids or the homeostasis of intracellular Ca+2. It is well known that the majority of models created to date for AD using clinical data in the above context are stochastic in nature [34,37,38,50,51]. Such models have the important advantage of allowing for variation in model parameters and disease biomarkers when predicting disease progression. Therefore, stochastic noises can be incorporated into the present model of AD, i.e., Eqs. (1)-(2), which focuses on the evolution of Aβ and Ca+2. The dynamics of Aβ and Ca+2 are perturbed by intrinsic or extrinsic noises. The intrinsic noises arise from the random fluctuations of biochemical reaction events, such as the stimulation of γ-secretase activity by calcium, the nucleated aggregation process, and the changes of cell membrane integrity induced by Aβ [52], whereas the extrinsic noises originate from the stochastic variations of the microenvironment for Aβ and Ca+2, which include pH, the concentrations of Na+, reactive oxygen species, neurons, and peripheral macrophages [40]. Additionally, AD is a neurological disorder that progresses over a long period of time, from a normal state to severe dementia. In contrast, the concentration of Ca+2 changes quickly, on the timescale of seconds or minutes [25].
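To make the structure of Eqs. (1)-(2) concrete, here is a minimal Python sketch that integrates them with scipy; the parameter values are illustrative placeholders only (the actual values are those of Table 1, adopted from [40]), so the resulting trajectories are not the paper's results.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters (not the Table 1 values).
p = dict(V1=0.002, Valpha=0.05, Kalpha=120.0, n=2.0, k1=0.02,
         V2=4.0, k2=0.05, kbeta=0.002, m=4.0)

def rhs(t, y, p):
    abeta, ca = y
    dabeta = (p["V1"] + p["Valpha"] * ca**p["n"] / (p["Kalpha"]**p["n"] + ca**p["n"])
              - p["k1"] * abeta)
    dca = p["V2"] + p["kbeta"] * abeta**p["m"] - p["k2"] * ca
    return [dabeta, dca]

sol = solve_ivp(rhs, (0.0, 500.0), y0=[0.0, 0.0], args=(p,), dense_output=True)
print(sol.y[:, -1])  # final (Abeta, Ca) concentrations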
On the basis of the above biological background, we incorporate the stochastic noises and the explicit time scales into the model Eqs. (1)-(2) as follows:

dAβ = ϵ [V1 + Vα · Ca^n / (Kα^n + Ca^n) − k1 · Aβ] dt + σ1 dB1(t),   (3)
dCa = [V2 + kβ · Aβ^m − k2 · Ca] dt + σ2 dB2(t),   (4)

where 0 < ϵ << 1 is used to indicate that the change of the Aβ concentration is much slower than that of the Ca+2 concentration, B_i(t), i = 1, 2, represent standard Wiener processes defined on a complete probability space (Ω, F, P), and σ_i² > 0, i = 1, 2, denote the intensities of the white noise; the other relevant parameters are adopted from [25]. The aim of adding a stochastic term to the model is to show that the stochastic noises can induce a jump transition from a state with a lower concentration of Aβ to a state with a higher concentration of Aβ using ADNI data. Such jump transitions represent a key phenomenon for AD. Secondly, we analyze the impacts of stochastic noises on the progression of Aβ and Ca+2, since AD models are stochastic. The novelty of the present research is in the development of a stochastic AD model of Aβ and Ca+2 using ADNI data for AD patients at 2-year visits (details are given in Section 3). It is expected that numerical simulations of the model will reproduce a variety of experimental observations about the disease using the ADNI data, which could be useful when developing therapeutic protocols to slow down the progression of AD. Since the ADNI data contain only the concentrations of Aβ, the aim is to fit the Aβ concentration data and incorporate its effect on the Ca+2 concentrations within the developed stochastic models. This can be done using the ABC technique, with details given in Section 2.2.

Approximate Bayesian Computation (ABC) technique

ABC is a data-driven approach that employs several low-cost numerical simulations. Using reference data from real-world experiments, such as ADNI data, or higher-fidelity numerical simulations, ABC also estimates unknown physical or model parameters, as well as their uncertainties [53].

Table 1: List of the parameter values used for the deterministic and stochastic models, adopted from [40].

Bayesian inference allows for the estimation of uncertainty by evaluating the likelihood of the model parameters given the experimental data [41,54,55]. Since our template contains only the Aβ concentration data, we treat the model Eqs. (3)-(4) as inverse problems and obtain data for Aβ from the ADNI database for AD patients per 2-year visit. To solve the differential equations (i.e., Eqs. 3-4) using Bayesian inference, we need to first specify the prior distributions for the unknown parameters (V1, Vα, Kα, n, k1, V2, kβ, and m), and then use Bayesian methods to update these priors based on the observed data. We assume that the priors for all the parameters are independent and normally distributed with a mean of 0 and a variance of 10. This is a fairly non-informative prior that allows for a wide range of possible values for the parameters. Next, we need to define the likelihood functions for the two differential equations. The likelihood function for the first equation (i.e., Eq. 3) is given by

p(Aβ | θ1, σ²) = ∏_{i=1}^{n} (1/√(2πσ²)) exp( −(Aβ_i − Âβ_i)² / (2σ²) ),

where Aβ = (Aβ1, Aβ2, ..., Aβn) is the vector of observed values for Aβ at times t = (t1, t2, ..., tn), Âβ_i is the predicted value of Aβ at time t_i based on the current parameter values θ1, and σ² is the measurement error variance.
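As a small illustration of how such a likelihood can be evaluated in practice, the sketch below compares hypothetical observed Aβ values against model predictions from a forward simulation; all numbers are made up for demonstration and are not ADNI values.

import numpy as np

def gaussian_log_likelihood(observed, predicted, sigma2):
    # Independent Gaussian measurement errors with variance sigma2,
    # matching the likelihood written above (log form for numerical stability).
    resid = observed - predicted
    return -0.5 * np.sum(resid**2 / sigma2 + np.log(2.0 * np.pi * sigma2))

# Hypothetical observed A-beta concentrations (nM) at three visits and the
# corresponding model predictions from a forward simulation (placeholders).
abeta_obs = np.array([210.0, 430.0, 610.0])
abeta_pred = np.array([200.0, 400.0, 600.0])
print(gaussian_log_likelihood(abeta_obs, abeta_pred, sigma2=25.0))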
The predicted values Âβ_i can be obtained by numerically solving the differential equation using the current parameter values, as given in Table 1. We can use any numerical solver, such as the Runge-Kutta method, to do this. The likelihood function for the second equation (i.e., Eq. 4) is given by

p(Ca | θ2, σ²) = ∏_{i=1}^{n} (1/√(2πσ²)) exp( −(Ca_i − Ĉa_i)² / (2σ²) ),

where Ca = (Ca1, Ca2, ..., Can) is the vector of observed values for Ca at times t = (t1, t2, ..., tn), Ĉa_i is the predicted value of Ca at time t_i based on the current parameter values θ2, and σ² is the measurement error variance. The predicted values Ĉa_i can also be obtained by numerically solving the differential equation (i.e., Eq. 4) using the current parameter values. To update the priors based on the observed data, we use Bayes' theorem [56]:

p(θ | Aβ, Ca, t) ∝ p(Aβ, Ca, t | θ) p(θ),

where θ = (V1, Vα, Kα, n, k1, V2, kβ, m) is the vector of unknown parameters. We use a Markov Chain Monte Carlo (MCMC) algorithm, a Hamiltonian Monte Carlo approach implemented in the Python package PyMC3, to sample from the posterior distribution p(θ | Aβ, Ca, t) and obtain estimates of the posterior mean [57,58]. After inserting the sampled parameters into our model and comparing the resulting simulated values of Aβ and Ca with the observed data, we can rate each sample based on its likelihood and use Bayes' theorem to determine the posterior distributions of the most likely parameter values for each patient.

To fit the ADNI data into the developed stochastic models, we replace the observed data with the ADNI data for Aβ concentrations and the time t, taken as the age of the patients in years. The details on the ADNI experimental setup data are given as follows.

Experimental setup analysis supported by ADNI data

ADNI data

The datasets used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The primary goal of the ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of AD. Specifically, we used the ADNI data prepared for the AD modelling challenge and followed the recommendations prescribed in this work to incorporate the interplay between Aβ and Ca+2 using the stochastic approach described in Section 2, and to fit data for the progression of AD.

Modelling the participant's dynamics

We use the qualitative template (available as a CSV file by permission on the ADNI website, adni.loni.usc.edu) for the progression of the AD project dataset, which includes the three ADNI phases: ADNI 1, ADNI GO, and ADNI 2.
This dataset contains measurements from brain MRI, PET, CSF, cognitive tests, demographics, and genetic information [50]. From ADNI 1/GO/2, we used the data for 1706 individuals with 6880 visits. The ADNI Conversion Committee made clinical diagnoses of MCI (mild cognitive impairment), NL (normal), EMCI (early mild cognitive impairment), LMCI (late mild cognitive impairment), and AD based on the standards outlined in the ADNI protocol. We designed an analysis based on the data for the individuals whose clinical follow-up visits fall within the 24-month window. Moreover, in the data we use, clinical follow-up visits with improperly arranged dates are discarded for each individual. Also, based on the data statistics, an acceptable time gap of 12 months is estimated for the present study. In the data set, measurements and clinical diagnoses with missing dates and information per visit are set to missing values in order to use the suggested procedure, and participants with fewer than two distinct visits are excluded. Notably, data sets containing missing values and clinical status are denoted as "Missing". For the present study, we have used only the AD patients' data. We assume that the participants with AD brains have, for each visit within 2 years, different Aβ concentrations as obtained from the ADNI. Also, we choose the mean of the baseline (bl), 12- and 24-month values of the Aβ concentration as the true initial value of Aβ, i.e., Aβ0, to incorporate the Bayesian inference described in Section 2.2. Therefore, using these initial values, we will first fit the ADNI data to the developed stochastic models, i.e., Eqs. (3)-(4). Then, we will investigate the relationship between the Aβ and Ca concentrations by fitting the ADNI data into the developed stochastic models. For the present study, we simply considered only AD patients, given our focus on examining the interplay between Aβ and Ca concentrations during the disease state.

Results

In this section, we will present the results obtained from the developed physics-based Bayesian model presented in Section 2. Our model can reproduce typical Aβ and Ca growth dynamics with or without the influence of stochastic noise. In the present study, we are interested in the analysis of such dynamics based on the stochastic models of Aβ and Ca (i.e., Eqs. 3-4), since AD models are predominantly stochastic. Moreover, we have fitted the data for Aβ for AD patients from the ADNI database within 2-year visits in the developed stochastic model. We used the Bayesian inference approach to fit the stochastic differential equations to data. In Bayesian inference, a prior distribution is placed on the parameters, and the posterior distribution of the parameters is estimated using Bayes' theorem, which takes into account both the likelihood of the data and the prior distribution of the parameters. Once the parameters have been estimated, the equations can be used to simulate the concentrations of Aβ and Ca over time, which can be compared to the experimental data. The latter constitutes a mathematical model of Aβ and Ca formulated within a Bayesian setting to understand the Ca dynamics mediated by Aβ in AD. Note that in Figs. 4-6, we produced the results based on the concentrations of Aβ and Ca. The parameter values are given in Table 1; these values were chosen such that the Ca+2 concentration develops more quickly than the amyloid Aβ concentration, given that the latter is defined by time scales of a number of years, whereas Ca+2 is characterized by seconds to minutes (reproduced from [40]).
To get an initial insight into the interplay between Aβ and Ca, the simulation of the developed deterministic model (i.e., Eqs. (1)-(2)) for the onset of AD, describing the positive loop between Aβ and Ca+2, is depicted in Fig. 2. It can be seen that the model for the onset of AD simulates the shift from a healthy to a diseased state. Here, the initial scenario corresponds to a stable state defined by low values of the Aβ and Ca+2 concentrations, representing the healthy situation. In contrast, the high values of the Aβ and Ca+2 concentrations represent the pathological situation [40]. According to the deterministic model (i.e., Eqs. (1)-(2)), the pathological state may only be attained by long-term changes in Ca+2 homeostasis, including those that impact the Ca+2 fluxes, as shown in Fig. 2. Our findings suggest that when the basal level of Ca+2 in the body increases over time, it can decrease the effectiveness of Ca signalling triggered by receptors [59].

Stochastic noises play a crucial role in the metabolism of Aβ and Ca+2 [25]. From now on, we will report the results based on the stochastic model, i.e., Eqs. (3)-(4), since the purpose of our study is to fit the ADNI data onto the developed stochastic model. Before fitting the ADNI data, we will delineate the results obtained using Eqs. (3)-(4) from stochastic simulations, as presented in Fig. 3 and Fig. 4. In Fig. 3 (a), we show that oscillations caused by molecular noise can cause the system to jump from the healthy state to the pathological steady state. Fig. 3 (a) illustrates how such a variation might lead to the disease. It is interesting to note that while Ca+2 evolves more quickly than Aβ, alterations in Ca+2 alone are unlikely to cause disease. In other words, the shift to the pathological steady state cannot be produced by a short-duration Ca+2 rise, which is a characteristic of the model that makes sense given that, without this feature, every action potential would lead to an increase in Aβ, as discussed in [8,9]. Yet, because of the positive feedback and the rapid development of Aβ, any noise-induced rise in Aβ will also generate an increase in Ca+2, reinforcing the initial increase in Aβ. Moreover, in Fig. 3 (b), we plot histograms representing the stationary density (probability density function, in units of 1/nM; parameters as in Table 1) obtained by stochastic simulations of Eqs. (3)-(4). We set the number of simulations and the length of each simulation to 1000. The Euler-Maruyama method has been used to simulate the stochastic differential equations for Aβ and Ca+2 for a large number of iterations until the system reaches its stationary density [60]. As depicted in Fig. 3 (b), the system of stochastic differential Eqs. (3)-(4) follows a normal distribution, which shows that the simulated data fit well. We found that, when the noise strengths σ1 and σ2 increase from zero, the number of extrema of the stationary density obtained by stochastic simulations changes. By varying these two parameters, the analysis can investigate how the stationary-state properties of the system change with different levels of noise strength. For example, the analysis could evaluate the number of extrema of the stationary density obtained by stochastic simulations, as well as other relevant properties such as the mean and variance of the system's state.
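The Euler-Maruyama scheme referred to above can be sketched in a few lines of Python; the drift terms follow Eqs. (3)-(4), while the parameter values, noise intensities and time step below are illustrative placeholders rather than the values used for the paper's figures.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters and noise intensities (not the Table 1 values).
p = dict(V1=0.002, Valpha=0.05, Kalpha=120.0, n=2.0, k1=0.02,
         V2=4.0, k2=0.05, kbeta=0.002, m=4.0, eps=0.01)
sigma1, sigma2, dt, nsteps = 0.05, 0.5, 0.01, 100_000

abeta, ca = 5.0, 50.0
for _ in range(nsteps):
    drift_ab = p["eps"] * (p["V1"] + p["Valpha"] * ca**p["n"] / (p["Kalpha"]**p["n"] + ca**p["n"])
                           - p["k1"] * abeta)
    # Guard the power term in case noise drives the concentration slightly negative.
    drift_ca = p["V2"] + p["kbeta"] * max(abeta, 0.0)**p["m"] - p["k2"] * ca
    abeta += drift_ab * dt + sigma1 * np.sqrt(dt) * rng.standard_normal()
    ca += drift_ca * dt + sigma2 * np.sqrt(dt) * rng.standard_normal()

print(abeta, ca)  # state at the end of the simulated time window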
Note that in Figs. 2-3, we produced the results based on the model Eqs. (1)-(4) without fitting the data. The next goal is to plot the predicted results by fitting simulated data, before adding the ADNI data into Eqs. (3)-(4) using Bayesian inference as described in Section 2.2. Therefore, we set up the initial conditions accordingly. At first, the simulated stochastic trajectories were sampled for three different values of Aβ0 for the stochastic model (i.e., Eqs. (3)-(4)), as presented in Fig. 4. These initial values of Aβ are chosen based on the ADNI data (i.e., we choose the mean values of the Aβ concentrations at the bl, 12- and 24-month visits as the true initial value of Aβ, i.e., Aβ0, for fitting the ADNI data), and time is chosen in years, which is actually the age of the patients, so that the simulated trajectories corresponding to the time periods utilized in the experimental measurements of Aβ completely align. As predicted in Fig. 4, we fit the stochastic data using Eqs. (3)-(4) for the Aβ concentration and analyzed its impact on the Ca+2 concentration. As evident from this figure, the temporal evolution of the Aβ concentration increases as Aβ0 increases. In return, there is a large jump in the Ca+2 concentration, and it keeps increasing until the system goes to the disease state. This makes sense in that increasing the Aβ concentration would increase the Ca+2 concentration, which leads to the progression of AD. Also, because of the positive feedback and the rapid growth of Aβ0, any noise-induced rise in Aβ will also produce an increase in Ca+2, which will reinforce the increase in Aβ [8,25,61].

Importantly, it should be noted that the amount of Aβ is a significant biomarker of AD. The National Institute on Aging (NIA), the Alzheimer's Association, and ADNI have recommended new criteria for diagnosing AD, and one of those suggestions uses Aβ as a biomarker for an early diagnosis of AD [50,62,63,64]. Medical studies have demonstrated that AD without symptoms can be found by counting the particles of Aβ in the cerebrospinal fluid or brain [65,66,67]. As science and technology advance, several biomarker tests to gauge the level of Aβ, such as beta-amyloid PET imaging and cerebrospinal fluid testing, are being employed in some settings to support the diagnosis. Moreover, work is being done to create straightforward and affordable biomarker tests [50,68]. Therefore, based on this knowledge, the disease progression associated with the amount of Aβ used in this study helps predict the likelihood of developing AD, because of the significant role that Aβ plays in the diagnosis of AD and Ca+2 dysregulation. Our aim is to fit the ADNI data for the Aβ concentration using the stochastic model (i.e., Eqs. (3)-(4)) and see the impact of Aβ on the Ca+2 concentration for AD patients as per a 2-year visit. In the context of solving the coupled differential equations, i.e., Eqs.
(3)-(4), ABC involves defining a model that generates the data, choosing a prior distribution for the model parameters, and simulating data from the model. The resulting posterior distribution can be used for inference about the model and the system being studied. Therefore, we have added the ADNI data and solved the inverse problem (i.e., Eq. (3)) using the ABC technique, as discussed in Section 2.2. Then the resulting data from Eq. (3) have been added to Eq. (4) to analyze the system for the interplay between Ca+2 and Aβ during the onset of AD. The two coupled equations, i.e., Eqs. (3)-(4), can be treated as an inverse problem by using Bayesian inference to estimate the values of the unknown parameters that give rise to the observed data. Specifically, we used the available Aβ data from the ADNI database to estimate the disease progression trajectories for Eq. (3), and then used these estimated trajectories as input to Eq. (4). We can then simulate the biomarker measurements predicted by Eq. (4) and compare them to the observed biomarker measurements. We then estimated the posterior distribution of the model parameters given the observed data, since in an inverse problem we are trying to infer the values of the unknown parameters.

The corresponding Aβ concentrations, with the relative contributions of the Ca+2 concentrations, are evaluated by Eqs. (3)-(4) and shown in Fig. 5. The effects of different values of Aβ0 on the Aβ concentrations at the bl, 12- and 24-month visits are presented in Fig. 5 (a). The time on the x-axis represents the age of the patients per year, assuming that the participants are all 57 years old at baseline, as per the ADNI data. It can be seen that there is rapid growth in the concentration of Aβ for AD patients for the three different visit frequencies from bl to 24 months. At an initial stage, AD patients have lower Aβ concentrations, but they keep increasing with the passage of time. In return, there is rapid growth in the Ca+2 concentrations, which corresponds to the Aβ growth during AD, as presented in Fig. 5 (b). This shows that the inclusion of Aβ has altered the steady-state populations of Ca+2 due to the interactions between the Aβ and Ca+2 signalling pathways, as depicted in Fig. 5. Aβ can disrupt calcium homeostasis by promoting calcium influx and inhibiting calcium efflux, leading to an increase in intracellular calcium levels. This increase in intracellular calcium can alter the steady-state populations of Ca+2 by affecting the dynamics of calcium-dependent processes, such as calcium-dependent enzyme activation, gene expression, and synaptic plasticity [19]. Moreover, it is noteworthy that the Ca+2 ions stimulate Aβ production, which increases the Ca+2 concentrations entering the cytoplasm of neuronal cells, resulting in a positive feedback loop. Interestingly, as the AD patients' age increases, there is rapid growth in the Aβ and Ca+2 concentrations, since AD progresses. The results and the model predictions obtained in Fig. 5 (a-b) align with previous theoretical and experimental studies, which reveal a feedback loop between Ca+2 levels and Aβ [25,40,43,44,45,46].

Next, to determine whether the ADNI data fit well in the stochastic models (i.e., Eqs. (3)-(4)) presented in Fig. 5, we plotted the histograms shown in Fig. 6 (a). As depicted in Fig.
6 (a), after fitting the ADNI data we see that both histograms, for Aβ (top) and Ca+2 (bottom), follow a normal distribution. This demonstrates that the distribution of the ADNI data set follows a normal distribution, with the majority of data points clustering around the mean value and fewer data points in the tails of the distribution. The number of simulations and the length of each simulation (which is in years) are set to 1000. The normal distribution shows that the ADNI data fit well into Eqs. (3)-(4). Since the data set follows a normal distribution, it allows us to make certain statistical inferences and predictions about the data. For example, we can use the mean and standard deviation of the distribution to calculate the probability of observing the experimental data and to estimate the unknown parameters (i.e., V1, Vα, Kα, n, k1, V2, kβ, and m) using the ABC technique as described in Section 2.2. In the present study, using Bayesian inference, the likelihood function is used to quantify the probability of observing the ADNI data given a particular parameter value or set of parameter values, as shown in Table 1. Specifically, the likelihood function is a function that takes in the observed data and a set of model parameters and returns the probability of observing the data given those parameter values [69]. Thus, using this approach, the estimated parameters (i.e., V1, Vα, Kα, n, k1, V2, kβ, and m) within the 95% confidence interval are 0.00720, 0.0435, 125, 1.98, 0.02, 4.989, 0.0021, 3.980, 0.099, respectively, taking the true parameter values as given in Table 1.

Importantly, there is a difference in the time scales represented on the x-axes of Fig. 3 (b) and Fig. 6 (a). This distinction arises from the specific analysis conducted in each figure. In Fig. 3 (b), we aimed to assess the goodness-of-fit of the simulated data generated by stochastic simulations of the Aβ and Ca+2 concentrations, as illustrated in Fig. 3 (a). On the other hand, in Fig. 6 (a), we focused on fitting the patient data obtained at the 12-month visit (as shown in Fig. 5) for the Aβ and Ca+2 levels. The objective here was to examine the stochastic simulation results for the simulated data and whether the ADNI data fit well or not, taking into account the inherent stochasticity of the Wiener process used in the simulations and the time scales of the ADNI data. Therefore, due to the different analyses performed in Fig. 6 (a) and Fig. 3 (b), the histograms presented in Fig. 3 (b) and Fig. 6 (a) have varying time scales.

Finally, the relation between the corresponding Aβ concentrations and the relative contributions of the Ca+2 concentrations is shown in Fig.
6 (b) after fitting the ADNI data. It can be seen that both concentrations are directly proportional to each other. This reveals that there is positive feedback between Ca+2 and Aβ. Specifically, as the concentration of Ca+2 increases, it leads to an increase in the concentration of Aβ, which in turn leads to an increase in the concentration of Ca+2, and so on. This positive feedback loop could be important in understanding the pathological mechanisms underlying AD, which is characterized by the accumulation of Aβ in the brain. This is consistent with experiments in which Ca+2 levels are high during AD and there is positive feedback between Ca+2 levels and Aβ concentrations [25,40,43,44,45,46,47]. As a result of the bistable tendency [40], the final size of Aβ is determined by the initial levels of Aβ and Ca+2. For example, if the initial concentrations of Aβ and Ca+2 are both set to low values, the system may not reach the threshold concentration needed to trigger the positive feedback loop, and the final size of Aβ would be small. On the other hand, if the initial concentrations of Aβ and Ca+2 are both set to high values, the system may quickly reach the threshold concentration, and the positive feedback loop would be initiated, leading to a large final size of Aβ. By systematically varying the initial concentrations of Aβ and Ca+2 and monitoring the resulting final sizes of Aβ, one can observe the dependence of the final size on the initial levels of these concentrations and infer the existence of a bistable tendency [70,71,25].

Another strategy in this direction is to assess the dynamics of Ca+2 inside neurons, as well as any existing alterations to its control during the development of the disease. It has been demonstrated experimentally that there may be a feed-forward loop between Aβ and Ca+2 regulation [72]. Mathematically, this has been proven to be a bistable switch, in that low levels of Aβ and Ca+2 (i.e., the healthy state) begin to rise due to any form of disruption that leads to increased Aβ or intracellular Ca+2, resulting in certain pathologic effects [40]. This suggests that if the existing levels of Aβ and Ca+2 are adjusted to their healthy states, the advancement of AD can be prevented. Because of the positive link between Ca+2 and Aβ, this might be achieved by reducing Ca+2 uptake. A second technique to delay the development of the disease is to raise and reinforce the intensity of the stochastic noise for Ca+2, since these noises tend to lessen the severity of the disease when the stationary distribution is unimodal. This is made feasible by controlling the microenvironment for Aβ and Ca+2. To the best of our knowledge, these are novel results based on the association of Aβ concentration obtained from ADNI data. The interactions between Aβ and Ca+2 add a new degree of complexity to important processes related to the beginning and progression of AD and may help to explain why viable therapeutic therapies for the disease have yet to be developed.
The novelty of our present study is that the proposed model utilizes clinical data to measure the relationship between Aβ and Ca+2, specifically ADNI data gathered per 2-year visit for AD patients. The initial conditions for Aβ and Ca+2 are chosen based on our own assumptions considering the specific data frame, since we are only interested in the interplay between Aβ and Ca+2 once the disease has started in the individuals. The data were analyzed using the ABC approach and fitted to our proposed model. The Markov chain Monte Carlo algorithm is used for the fully coupled model and for the parameterizations. Our research has shown that the presence of Aβ can create a zone with a steady state that is bistable. This means that the system can exist in two stable states, depending on the initial conditions. However, we used the ADNI data for AD patients only, so we discussed only the disease state and the progression of AD. It is believed that the growth of cytosolic Ca+2 due to Aβ can lead to the development of AD. Our focus is on the quick development of abnormal Ca+2 signals. The concentration of Aβ changes over a much longer period than Ca+2, due to accumulation over months, years or even decades. In this case, our research has shown that the presence of Aβ in the model can create a zone with a bistable steady-state population of Ca+2 due to the disruption of calcium homeostasis. However, the question of whether the steady-state population of Ca+2 in AD patients is bistable requires further investigation and validation using experimental data.

Multifidelity modelling is an approach used to efficiently predict the behaviour of a complex system by using multiple computational models of varying levels of fidelity or accuracy [73]. In the present study, we used the MCMC approach to combine low- and high-fidelity calculations. In this approach, the low-fidelity calculation is used to generate a proposal distribution, which is then refined by the high-fidelity model. Next, the MCMC algorithm iteratively samples from the proposal distribution and accepts or rejects the samples based on their likelihood. This allows the low-fidelity calculations to explore the parameter space efficiently, while the high-fidelity calculations are used to refine the results and increase their accuracy. The use of clinical datasets such as ADNI, in conjunction with the computational modelling described herein, facilitates the implementation of multi-fidelity association studies, which are novel and promising tools for evaluating the potential benefits and side effects of therapeutic agents that target known AD pathways. The study has the potential to generate novel ideas and hypotheses for future research, particularly in medication discovery and safety initiatives, which can be validated through additional cohort studies and clinical trials. Moreover, the findings may stimulate further research in the field by highlighting the significance of exploring Ca+2 and its potential effects on Aβ.

The computational results have been obtained with an in-house developed MATLAB code to solve the coupled ordinary differential equations of the developed models (i.e., Eqs. (1)-(4)) presented in Section 2.
Our data contain 1706 patients, and we divide the job among many processors. We have selected the numbers of CPUs as the divisors of 1706. For example, for 10,000 iterations, the computational time is 5012.1 s for 1 CPU and 140.3 s for 2 CPUs, etc. To attain the required total time for each model, it often takes several million time steps. If we solve the problem using conventional serial programming, this requires a significant amount of computing time. Through the use of Open MPI and the C programming language, we can reduce the computing time. We divide the sequential tasks involving the time step among the available processors for each time iteration and perform them in parallel. All figures have been plotted and shown in MATLAB after the data have undergone post-processing. To reduce the time required to acquire results for the parallel computation, we employed the SHARCNET supercomputer facilities (64 cores).
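The work-division idea described above can be illustrated with a short mpi4py sketch; the authors' implementation is in C with Open MPI, so this Python version is only a hedged illustration of the same scheme, and process_patient is a hypothetical stand-in for the per-patient simulation and fitting step.

from mpi4py import MPI

def process_patient(patient_id):
    # Hypothetical stand-in for the per-patient stochastic simulation / fitting work.
    return patient_id * 2

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_patients = 1706  # run with a rank count that divides 1706 (e.g. 2 or 853)
my_patients = range(rank, n_patients, size)        # round-robin split of patient indices
my_results = [process_patient(p) for p in my_patients]

all_results = comm.gather(my_results, root=0)      # collect per-rank results on rank 0
if rank == 0:
    print(sum(len(r) for r in all_results), "patients processed")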
Discussion

In this paper, we extended the stochastic mathematical model of AD by introducing ADNI data for Aβ and analyzed the interplay between Aβ and Ca+2. We investigated the dynamical behaviours of stochastic processes with the model by incorporating slow-fast timescales between Aβ and Ca+2, which revealed the influence of random noises on the advancement of AD. The number of AD modelling tools available to date has been fairly limited, most likely due to the enormous complexity of the molecular systems underlying its pathogenesis. Many mathematical models have previously been constructed to explore particular and well-defined features of the disease [9,15,16,17]. To the best of our knowledge, none of the computational models proposed so far explore the synaptic interplay between Aβ and Ca+2 using clinical data. Here, we provide a simple stochastic model that qualitatively describes the interactions between intracellular Aβ and Ca+2 using ADNI data. It is based on two simple coupled stochastic differential equations with Wiener processes, or stochastic noises.

Our goal was to analyze any potential functional effects or interplay between Aβ and Ca+2 resulting from the positive loop that exists between the two chemicals in AD patients' brains, despite the fact that the model is obviously oversimplified. Importantly, it is possible for Aβ to bind to NMDA receptors and produce Ca+2 dyshomeostasis, which results in oxidative stress, the formation of free radicals, and the death of neurons [74]. Furthermore, Aβ can activate mGluR5 receptors, which elevates postsynaptic Ca+2 levels in the cell [8,49]. Then, APP processing is accelerated by NMDA receptors and mGluR, which creates a positive feedback loop that boosts Ca+2 influx and free radical generation [75]. We have shown that the model explains well-known aspects of the disease, such as its inability to be reversed, the threshold-like transition to severe pathology following the relatively slow accumulation of symptoms, the so-called "prion-like" autocatalytic behaviour, and the naturally random nature of the disease's emergence that is typical of AD in sporadic cases.

Nonetheless, there are several more general characteristics of the bistable behaviour that may be mentioned. Here, bistable behaviour means that the final size of Aβ is dependent on the initial levels of Aβ and Ca+2 [25]. First, since each neuron is either in one steady state or the other, with those two states being distinguished by very different values of the concentration levels of Aβ and Ca+2 and of the enzymatic activities, average measurements of Aβ- or Ca+2-related quantities are expected to have little significance in terms of experimental observations of disease characteristics. The model contends that comprehensive identification of the condition of the neurons, or at least of the damaged ones, is required for experimental quantification. Therefore, in the present study, we considered only AD patients, and then, in damaged neurons, we analyzed the interplay between Aβ and Ca+2. Our previous study [8] shows that Aβ enhances the dysregulation of Ca+2 in AD, so the present studies validate our hypotheses. Furthermore, in terms of the progression of Alzheimer's disease, it is intriguing to note that dysregulations in both Aβ and Ca+2 constitute the causes and effects of the disease in this scenario originating from the positive loop. In this way, it unifies two theories for the onset of AD that are frequently presented as opposed in the literature: the "amyloid hypothesis", in which Aβ is presented as the causative factor, and the "Ca+2 hypothesis", in which the upregulation of Ca+2 signalling is assumed to play the primary role. Both molecules are intimately connected to one another and equally responsible for the development of AD during the clinical trial, according to the current study.

Our model is limited by the simplification of a single pathway of Aβ changes, from many stages (e.g., NL to MCI to AD using ADNI data) to AD. It is well recognized that AD progresses in many ways throughout the years. Clinical trials are difficult to conduct and expensive if large patient samples are researched over an extended period of time. Moreover, modelling and simulating AD dynamics can be done at a small cost, and they are invaluable resources for improving clinical trial designs and raising the probability of accurate treatment efficacy assessments. The results obtained here are consistent with earlier theoretical and experimental investigations [8,15,25,40,42,43,44,45,46,47]. Our goal in this work was to describe the progression of AD, which encompasses not just AD pathology but also the biochemical and cognitive alterations brought on by Aβ and Ca+2. The modelling strategy developed here can calculate individual Aβ and Ca+2 growth trajectories and markers of latent disease progression at the population level using ADNI data. Individuals are identified along simulated trajectories by utilizing the proposed framework offered by Bayesian inference. This study highlights the rising role of Ca+2 ions in the development of AD and focuses on the key components of the interplay between Aβ and Ca+2 homeostasis.
Conclusions Millions of individuals suffer from the progressive neurological disorder known as Alzheimer's disease. AD patients suffer from gradual, permanent cognitive deterioration. The biggest risk factor for AD is age. The pathological hallmarks of AD are the development of plaques in the brain, caused by the gradual deposition of cerebral amyloid-β (Aβ) peptides in the extracellular space, and of intracellular neurofibrillary tangles, made of misfolded proteins that typically stabilize the microtubules within neuronal axons. Moreover, there is growing evidence that long-term disturbances of intracellular Ca²⁺ homeostasis may be a key factor in AD. In particular, it appears that Aβ causes an increase in intracellular Ca²⁺, since multiple studies have discovered Ca²⁺ dysregulations resulting in higher Ca²⁺ entry into the cytoplasm in AD mouse models. Despite extensive studies, the pathophysiology of AD is still poorly understood, and the associated underlying molecular alterations have not yet been fully discovered. In the present study, we developed a simple, yet effective, stochastic model formalizing a positive feedback loop between Aβ and Ca²⁺. The novelty of the proposed model is that it incorporates clinical data, such as the ADNI data for AD patients per 2-year visit, for quantifying the interplay between Aβ and Ca²⁺. The data were fitted to the given model using the ABC technique. The goal was to analyze the specific roles of Aβ and Ca²⁺ in synaptic homeostasis and to discuss therapeutic protocols to slow down the progression of AD. More specifically, we investigated the underlying mechanisms that lead to neuronal hyperactivity and the role of Aβ growth in the Ca²⁺ dynamics. We demonstrated that in the AD brain, increasing Aβ concentrations could lead to an increase in Ca²⁺ dysregulation, which is harmful and promotes neuronal death. Moreover, there exists a positive feedback loop between the growth of both compounds (i.e., Aβ and Ca²⁺). It is expected that the proposed model will assist in a more precise prediction of the synaptic mechanism during AD and pave the way for the experimental testing of different hypotheses. We provided numerical simulations that agree with previous findings that a number of dysregulations within the brain can lead to a disease state [8,15,42,43,44,45,46,40,25,47]. Importantly, changing the balance between Aβ and Ca²⁺ concentrations, or lowering both concentrations, may be able to alleviate AD-related disorders and open up new research avenues for AD treatment. Our findings fill gaps in AD research by explaining how Aβ plaques develop, what happens when Ca²⁺ and Aβ interact, and how they induce selective neuronal death in AD patients. Future research will be able to evaluate model predictions and forecast the progression of the disease more precisely by including more patients, for instance MCI, EMCI, and NL participants, in the clinical trial data. Figure 1: (Color online) Graphic illustration of the mathematical model given by Eqs. (1)-(2) that describes the interaction between Ca²⁺ and Aβ during the progression of AD. The positive feedback exerted by Ca²⁺ on the creation of Aβ, together with the fact that Aβ tends to increase intracellular Ca²⁺, establishes a positive loop (motivated by [40]).
Figure 2: (Color online) Simulation of the transformation from a healthy to a pathological state in the model for the onset of AD, specified by Eqs. (1)-(2). The transition is caused by a shift in Ca²⁺ homeostasis, which is represented in the model by increasing the rate of Ca²⁺ entry (V2) from 4 to 5. The initial concentrations of both Aβ and Ca²⁺ are assumed to be zero. The parameter values are given in Table 1; these values were chosen such that the Ca²⁺ concentration develops more quickly than the amyloid Aβ concentration, given that the latter is defined by time scales of a number of years, whereas Ca²⁺ is characterized by seconds to minutes (reproduced from [40]). Figure 4: (Color online) Trajectories of the stochastic model fitted to simulated data, describing the interplay between Ca²⁺ and Aβ during the onset of AD and defined by Eqs. (3)-(4). The initial values of Aβ chosen here are 200, 400 and 600 nM (top to bottom), and the initial value of Ca²⁺ is set to 0. Again, the Aβ concentration is defined by time scales of a number of years, whereas Ca²⁺ is characterized by seconds to minutes. These values are chosen based on our assumptions for analyzing the ADNI data; Aβ is a promising biomarker that is measured (in nM) in CSF fluids collected from ADNI participants. Figure 5: (Color online) Trajectories of the stochastic model fitted to the ADNI data, describing the interplay between Ca²⁺ and Aβ during the onset of AD and defined by Eqs. (3)-(4). The initial values of Aβ chosen here are the means of the Aβ concentrations at the baseline (bl), 12- and 24-month visits (from top to bottom), taken as 200 nM, 400 nM and 600 nM. (a) The impact of Aβ (left) on (b) Ca²⁺ (right) is presented. The initial values of Ca²⁺ are set to zero. Figure 6: (Color online) (a) The blue histograms represent the stationary densities of Aβ (top) and Ca²⁺ (bottom) obtained from stochastic simulations after fitting the ADNI data to Eqs. (3)-(4); the red lines stand for the stationary density of Eqs. (3)-(4); the y-axis represents the probability density function (1/nM). (b) The straight line represents the interplay between the Aβ and Ca²⁺ concentrations after fitting the ADNI data to Eqs. (3)-(4). The initial conditions for Aβ and Ca²⁺ chosen here are 400 nM and 0 nM, since the patient data have been fitted at the 12-month visit (Fig. 5). The time scale chosen here is the age of the patients, ranging from 57 to 87 years, based on the ages given in the ADNI data. Note that the histograms presented in Fig. 3(b) and Fig. 6(a) have varying time scales.
12,593
2023-06-17T00:00:00.000
[ "Medicine", "Mathematics" ]
Joining Spacetimes on Fractal Hypersurfaces The theory of fractional calculus is attracting a lot of attention from mathematicians as well as physicists. The fractional generalisation of the well-known ordinary calculus is being used extensively in many fields, particularly in understanding stochastic process and fractal dynamics. In this paper, we apply the techniques of fractional calculus to study some specific modifications of the geometry of submanifolds. Our generalisation is applied to extend the Israel formalism which is used to glue together two spacetimes across a timelike, spacelike or a null hypersurface. In this context, we show that the fractional extrapolation leads to some striking new results. More precisely we demonstrate that, in contrast to the original Israel formalism, where many spacetimes can only be joined together through an intermediate thin hypersurface of matter satisfying some non- standard energy conditions, the fractional generalisation allows these spacetimes to be smoothly sewed together without any such requirements on the stress tensor of the matter fields. We discuss the ramifications of these results for spacetime structure and the possible implications for gravitational physics. Introduction The theory of fractional calculus has been considered a classical but obscure corner of mathematics [1,2,3]. It remained, until a few decades, a field by mathematicians, for mathematicians and of purely theoretical interest. Though it played a crucial role in the development of Abel's theory of integral equations and many mathematicians like Liouville, Riemann, Heaviside and Hilbert took an active interest in it, fractional calculus found limited applications and was referred to only occasionally, to simplify complicated solutions. For example, this formalism has been used quite often to simplify the solutions of both the diffusion as well as the wave equation (for example, see [4], and [5]). During the last few decades, however, this theory has found important applications for large number of practical real life situations. Indeed, fractional calculus is providing excellent tools to develop models of polymers and materials [6,7]. In particular, it has been found that to understand properties of various materials which require long-range order to hold, fractional calculus provides a sound platform [8]. Fractional calculus have also been found to naturally incorporate some subtle effects in the dynamics of fluids, and these have found important applications in understanding mechanical, chemical and electrical properties of nanofluids. However, possibly the most prominent application of these derivatives of non-integer order has been in the theory of fractals [9]. It has been found that for many stochastic processes, the phenomena progresses through increments which are not independent, but instead tend to retain some memory of previous increment, though not necessarily the immediately previous increment [9,10,12,11,13]. In other words, these are random processes with long term memory. The theory of fractional Brownian motion, which provides a very natural explanation for these effects, incorporates these persistence effects (or anti-persistence effects) though the fractional modification of the usual Brownian formula relating displacement and time [14]. 
It has now been understood that statistically speaking, all the naturally occurring signals are of the Weierstrass type, i.e continuous but non-differentiable [9,15] (here differentiation is in the sense of the usual calculus) and indeed, such Weierstrass-like functions arise even in many quantum mechanical situations. For example, it has been shown that many quantum mechanical problems involving discontinuous potentials possesses energy spectrum of the Weierstrass type [16]. Furthermore, the Feynman paths in the path integral formulation of quantum mechanics are also examples of these kind [17]. However, the most significant discovery has been that, though the naturally occuring functions are of the Weierstrass type and are endowed with a fractal dimension, they are fractionally differentiable and that the maximal order of differentiability is related to the box-dimension of the function [18,19]. Thus, fractional calculus has been highly advantageous in modelling dynamical processes in self-similar systems and for analysing processes which generate chaotic signals and are apparently irregular. In this paper, we apply the techniques of fractional calculus to general relativity. As is well known, the issue of final state of gravitational collapse is a long standing open question in general relativity. The appearance of spacetime singularity reveals the domain of failure of the classical theory of general relativity [20]. Quite naturally, it is assumed that general relativity must be corrected to eliminate these failures. Both the string theories and effective field theories necessitate that one must add terms involving higher order as well as higher derivatives in the Riemann tensor to incorporate the effects of physics at small scales. It is a general hope that these higher order corrections will certainly get rid of the singularities [21]. However, in absence of any comprehensive proof of these expectations, we propose to look at another alternative which may present itself at the small scales. As we shall see in the subsequent sections, fractional calculus, in any of it's possible alternative forms, define differentiation through an integration. Hence, it naturally incorporates non-local spacetime correlations and long-range interactions, which are expected to be natural at high energy scales, into account. Thus, many subtle non-local effects may manifest itself if one replaces the ordinary differentiation by it's fractional counterpart. One may immediately ask as to where should one look for such non-local terms to arise physically and envisage the regions of strong gravity where the classical theory of general relativity is known to require modifications 1 . An obvious candidate for the strong gravity regime is the black hole region since black holes are created due to gravitational collapse of matter fields in an intense gravitational field. There are two regions of the black hole spacetime which are ideally suited for the fractional effects to manifest itself. First is the horizon, which for small mass black holes are regions of intense gravitional field and second, near the singularity where the effects of strong gravity, though invisible to the asymptotic observer, are most spectacular. In either of these two situations, possibilities of long range order have quite interesting repercussions on modelling of spacetime. Let us first discuss the region near the horizon. 
The black hole horizon, here taken to be an event horizon, is a null expansion-free hypersurface which lies in the region adjoining two spacetimes. So, the horizon may be thought of as a null hypersurface which glues the two spacetimes. Naturally, the joining of two spacetimes through the hypersurface requires that some conditions on the spacetime variables be satisfied on the hypersurface. The Israel-Darmois-Lanczos (IDL) junction condition demands that the metric on either side of the horizon, when pulled back to the hypersurface, must be continuous [25]. In contrast, the extrinsic curvature of the horizon is not required to be continuous. In fact, consistency requires that the Riemann tensor and hence the extrinsic cuvature on the hypersurface admit delta function singularities. Using the Einstein equations, the Ricci part of this singularity is related to the stress tensor. Thus, the IDL condition only requires that the difference of the extrinsic curvatures of the hypersurface as embedded in these two spacetimes, must be proportional to the stress-energy tensor living on the horizon. In other words, due to the geometry itself, the hypersurface comes naturally equipped with a energy-momentum tensor. These junction conditions on the horizon has given rise to speculations of constructing a singularity free spacetimes, and in particular, non-singular black hole interiors, in the following way [22,23]: Take the exterior of the Schwarzschild horizon as the future spacetime region and the interior to de-Sitter horizon as the past spacetime region. The boundary between these two regions, the common hypersurface to these two regions, will be a thin null hypersurface endowed with some specific energy-momentum tensor derived from the IDL matching conditions. Thus, one may have well defined matching conditions to create a singularity free universe, with the exterior a Schwarzschild spacetime while the interior being a de-Sitter spacetime. However, in most of the cases, the matching conditions leads to energy-momentum tensors which violate some of the well known energy conditions. In [23], there have been attempts at constructing a singularity free universe by adjoining the de-Sitter interior with the inner horizon of a Reissner-Nordstrom black hole with a particular values of charge and mass. In this particular case, the matching is smooth, with no requirement of any energy momentum tensor. In general situations, these matchings are not smooth and require energy condition violating energy-momentum tensors on the matching hypersurface. The second point is related to the another such attempt where, the de-Sitter spacetime is glued to the Schwarzschild interior though a spacelike hypersurface. This attempt was made by [24], in their famous proposal of limiting curvature. They devised a model in which the Schwarzschild metric inside the black hole region is matched to a de-Sitter one at some spacelike junction surface which represent a thin transition layer. As a requirement of their proposal, this layer is placed at a region very close to the singularity where the curvature reaches it's limiting value. However again, for general singularity free matchings of the above kind, the junction layers admit energy momentum tensors which violate energy conditions. In particular, the effective stress-energy tensor of the model [24] violates the weak energy condition. In fact, in almost all similar attempts of creating singularity free models like that of [24], have energy condition violations. 
These kind of violations are actually characteristic of quantum effects which become important in strong gravitational field. As a remedy to these energy condition violations, we argue in this paper that the notion of fractional derivatives offers a possibility of creating singularity free universe through smooth matching of spacetimes. More precisely, we demonstrate the following: First, that the IDL junction conditions both for timelike/spacelike as well as for null hypersurfaces are modified due to the fractional generalisation of the spacetime connection. This fractional generalisation of the IDL conditions will in turn modify the energy-momentum tensor on the hypersurface 2 . Secondly, using specific examples, we show that this generalisation allows us to fix the conditions on the junction shell in such a way that the Schwarzschild or the Reissner-Nordstrom spacetimes can always be smoothly matched to the de-Sitter spacetime in the interior without any energy condition violating requirement on the energymomentum tensors of the adjoining shell. The paper is organised as follows: In the next section, section 2, we briefly discuss the mathematical formalism of fractional calculus and the relevant notations. In sections 3 and 4, we introduce the notations for timelike/spacelike and null hypersurfaces and discuss the generalisations of the IDL junction conditions for fractional exponents. We also argue that these generalised junction conditions leads to smooth joining of spacetimes, which otherwise are known to be joined only through a thin shell of matter. The implications of these are discussed in the Discussion section. Mathematical preliminaries Let us discuss some notations useful for the mathematical formulation of geometry of hypersurfaces. Let us consider a 4 dimensional spacetime (M, g) with signature (−, +, +, +). Let a hypersurface ∆ be embedded in M and is given by f : ∆ → M. We shall assume that the embedding relation is such that the restriction of f to the image of ∆ is C ∞ . Let, {x µ } be a local coordinate chart on M and {y a } be a local coordinate chart on ∆. The embedding relation implies x µ = x µ (y a ). Let, g µν be the metric on the spacetime in terms of it's local coordinates. The first fundamental form or the induced metric on ∆ is the pull back of the metric g under the map f . In the local coordinates this can be written as h ab . where, (∂x µ /∂y a ) = e µ a and we have used that e µ a ∂ µ is the push forward of the purely tangential vector field ∂ a onto the full spacetime M. One may define a linear connection and hence a derivative operator on the spacetime. Let, T M denote the tangent bundle on M and let, X and Y are two arbitrary vector fields on it. The covariant derivative is a linear map The Riemannian theory assumes the covariant derivative to be metric compatible, On the tangent bundle one may also define a covariant derivative (D) on ∆ using the Gauss decomposition formula: D X Y is purely tangential and K(X, Y ) is an element of the normal bundle and refereed to as the extrinsic curvature. The Gauss equation also implies along with that metric compatibility of ∇ with g that the derivative operator D a is metric compatibile with the metric on the hypersurface h ab (i.e. D a h bc = 0). In terms of the local coordinate charts, the Gauss equation gives the following expression for the derivative operator (for X ≡ ∂ a ): The extrinsic curvature can also be defined for the hypersurface in terms of the local coordinates. 
The normal bundle for the hypersurface is one dimensional. Let, n µ be the normal. The extrinsic curvature is For our later use, let us give the Gauss equation in terms of the local coordinates: The Codazzi equation in local coordinates is given by: where | denotes the covariant derivative with respect to the coordinates on the hypersurface. Several of these spacetime functions have different values on either sides of a hypersurface. Then, it is required to express their continuity across the hypersurface. A useful and prominent example of this idea is that of the Israel-Darmois-Lanczos (IDL) junction condition [25]. Consider a hypersurface ∆ which partitions the spacetime into two regions (M + , g + ) with coordinates {x µ The spacetime M + is assumed to be to the future of the spacetime M − . Quite naturally, it is not generally true that the metrics on these two spacetimes could be continuously matched across the hypersurface ∆ (The either side of the hypersurface ∆ has been installed with coordinates {y a }). The discontinuity in the metric would be reflected in the fact that Riemann tensor would have a delta-function singularity on the hypersurface. The Israel junction conditions provides a method to smoothly match these hypersurfaces by using the following trick: relate the Ricci part of the singular Riemann curvature tensor to the surface stress-tensor using the Einstein equations. For spacelike hypersurfaces, the Israel junction conditions for a smooth joining of hypersurfaces at ∆ is given by where However, if the extrinsic curvature is not identical on both the sides on the hypersurface ∆, the surface stress tensor (S ab ) on the hypersurface is However, on the null surface, the standard extrinsic curvature corresponding the normal of the hypersurface (which is also the tangent to null hypersurface) is always continuous and hence, one needs to define a transverse curvature [25]. The metric induced on the null hypersurface is again continuous but the discontinuities in the components of the transverse curvature is related to the energy-momentum tensor induced on the this hypersurface. In deriving the above relations, we have implicitly made two crucial assumtions: First, that the point functions are continuous and differentiable in the region under consideration. However, it may happen the scalar, vector or the tensor functions are only fractionally differentiable. In that case, the limiting values defined by our ordinary differential calculus become singular on the hypersurface. Thus, in addition to the IDL conditions, their fractional character must also be taken into account. Secondly, the spacetime connection is assumed to be a Levi-Civita connection. This arises since the spacetime is assumed to be a Riemannian spacetime and hence, the spacetime metric is compatible with the covariant derivative (∇ γ g αβ =0). The Gauss decomposition, eqn.(4), then imples that the connection on the hypersurface is also a Levi-Civita connection and that the extrinsic curvature is uniquely detremined in terms of this connection. However, it may happen that in the strong gravity regime we are interested in, the spacetime is slightly modified from it's Riemannian character and that the connection is not Levi-Civita connection derived from the metric. Quite naturally, in such a situation, the Gauss decomposition implies that the connection on the hypersurface will also be modified and the expression of the extrinsic curvature will also change. 
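For readers who want to see the non-fractional (q = 1) machinery in action before the fractional generalisation is introduced, the short sympy sketch below evaluates the tangential components of the extrinsic curvature in the form K_ab = (1/2)(£_n g_αβ) e^α_a e^β_b for the timelike hypersurface r = R_0 of the Schwarzschild spacetime. For a radial unit normal and a diagonal metric, the Lie derivative of the tangential metric components reduces to n^r ∂_r g_ab; the symbol names are ours, and the example is only a sanity check, not a computation taken from the paper.

```python
import sympy as sp

t, r, th, ph, M, R0 = sp.symbols('t r theta phi M R_0', positive=True)
f = 1 - 2*M/r
# Schwarzschild metric in coordinates (t, r, theta, phi)
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)

# Unit outward normal to the hypersurface r = R_0: n^alpha = sqrt(f) (d/dr)^alpha
n_r = sp.sqrt(f)

# Hypersurface coordinates are (t, theta, phi); for these components
# (Lie_n g)_{ab} = n^r d_r g_{ab}, hence K_ab = (1/2) n^r d_r g_{ab}
tangential = [0, 2, 3]
K = {}
for a in tangential:
    for b in tangential:
        K[(a, b)] = sp.simplify(sp.Rational(1, 2) * n_r * sp.diff(g[a, b], r)).subs(r, R0)

print(K[(0, 0)])   # K_tt          = -sqrt(1 - 2M/R_0) * M / R_0**2
print(K[(2, 2)])   # K_theta,theta =  sqrt(1 - 2M/R_0) * R_0
print(K[(3, 3)])   # K_phi,phi     =  sqrt(1 - 2M/R_0) * R_0 * sin(theta)**2
```

The analogous computation with the window derivative D^q_{r−Δ,r} in place of ∂_r gives the kind of fractional quantities used in the examples that follow.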
3 In [26], a fractional generalisation of the Lie derivative has been proposed and utilised to generalise the definition of the extrinsic curvature for non-null hypersurfaces. In this fractional generalisation, which is based on Caputo's modification of the Riemann-Liouville definition of fractional derivative (see the appendix), the usual definition of the extrinsic curvature K ab = (1/2)(£ n g αβ ) e α a e β b is modified to give: Here, the superscript q denotes the fractional parameter, 0 < q ≤ 1 (see the appendix) and D q r−∆,r denotes the derivative: where the integration is carried out from a spacepoint r − ∆ to r. In the context of matching of spacetimes across hypersurface, ∆ is taken to be the thickness of the hypersurface. The junction conditions will be modified from eqn. (11) to Naturally, because of the definition of the derivative, it has a non-local character imbedded into it. In the following sections, we shall utilize this generalisation of the definition of extrinsic curvature to modify the junction conditions for spacelike/timelike as well as null hypersurfaces. Additionally, we shall show that the junction conditions lead to a smooth matching of hypersurfaces. to a Minkowski metric on a timelike hypersurface. We show that depending on the width of the shell, the energy momentum tensor of the shell changes. We utilize this observation in the second example, which deals with matching of a Schwarzschild spacetime with a de-Sitter spacetime on a spacelike hypersurface. Again the energy-momentum tensor residing on the thin shell differs substantially from the standard results. Joining Minkowski and slowly rotating Kerr metrics Let us consider the metric of a Kerr spacetime in the slow-rotation approximation. We shall assume a shell of mass M and angular momentum J in the spacetime. The exterior spacetime (M + ) has the following metric: where f (r) = (1 − 2M/r) and a = (J/M ) M , is a parameter for the angular momentum which is usually used to replace the shell's angular momentum J. Let us assume that the shell is located at r = R 0 . The induced metric on the shell becomes: Using the definitions, ψ = (φ − ωt) with ω = (2M a/r 3 ), and keeping terms upto first order of a, we get the induced metric to be h ab dy a dy b = −f (r) dt 2 + R 2 0 (dθ 2 + sin 2 θ dψ 2 ). We shall use y a = (t, θ, ψ) as the co-ordinates on the shell and the parametric equations for the hypersurface in the form x α = x α (y a ) are t = t, θ = θ and φ = (ψ + ωt). The shell's unit normal is n α = f (r) −1/2 ∂ α r, which in the coordinates is given by: Now let's calculate the non vanishing components of extrinsic curvature. The definition of transverse component of the fractional generalisation is: where the projectors e α a s are e α t ∂ α = (∂ t + ω ∂ φ ), e α θ ∂ α = ∂ θ and e α φ ∂ α = ∂ φ . Let us first determine q K + tt , where + denotes that the variable is associated with the external spacetime. Note that since e α t ∂ α = (∂ t + ω∂ φ ), one get the only contribution from q K + tt = (1/2) q £ n (g + tt ). The other contribution to q K + tt from g + tφ is neglected since g + tφ = −(2M a sin 2 θ/r), is directly proportional to a and further, together with ω = (2M a/r 3 ) contributes an overall a 2 term. Note that due to the form of the normal vector, and the metric, only the first term in expansion in (19) contributes. Using the expression for D q r−∆,r (r −1 ) in the appendix, eqn. 
(92), we get and hence, using the metric, one easily determines that Similarly, one finds the contribution from q K tψ as follows: Using D q r−∆,r (r −1 ) in the appendix, eqn. (92), we get that Again, using D q r−∆,r (r 2 ) in the appendix , eqn. (86), we get q £ n (g + φφ ) = n r D q r−∆,r (r 2 sin 2 θ) Putting ω = (2M a/R 3 0 ) in eqn. (21), and using equations (22) and (23), we get The expression naturally leads to the following expressions for extrinsic curvatures: The angular components of the extrinsic curvatures are q K + θθ and q K + ψψ and their expressions may be found in exactly the same method and we get: For interior spacetime, we take it to be the flat Minkowski spacetime. So, to the past of the hypersurface at r = R 0 , the spacetime M − is given by the metric where ρ is a radial coordinate. The intrinsic metric on the hypersurface from the interior matches with the induced metric from the exterior region. The normal to the hypersurface is n α = (∂/∂ρ) α . The expressions for the extrinsic curvatures may be determined and the only non-vanishing components are q K θθ and q K φφ : Let us now determine the stress-energy tensor of the thin shell of matter forming the hypersurface joining the two spacetimes. The discontinuities in the extrinsic curvatures are related to the shell's surface stress-energy tensor S ab . The shell's matter may be assumed to be made of perfect fluid, with density σ = −S t t , pressure p = S θ θ and rotating with angular velocity ω = −S t ψ /(−S t t + S ψ ψ ). The expressions for these components of the energy momentum tensor are: Quite naturally, all the expressions of the energy momentum tensor are modified due to the improved notion of fractional differential. The modification takes the thickness of the shell into account. One very interesting notion is the determination of the angular velocity of the shell. The angular velocity is obtained from ω = S t ψ /(S t t − S ψ ψ ). This gives for R 0 2M , This expression given above for the angular velocity is different from that obtained in the usual case [25] but reduces to it in the limit ∆/R 0 → 0. Matching the Schwarzschild and the de-Sitter spacetimes The joining of exterior spacetime of the Schwarzschild black hole (taken as the exterior spacetime) with the de-Sitter spacetime has been the subject of many investigations, which were particularly directed to create singularity free models of black hole interior. One particularly interesting application was considered by Frolov, Markov and Mukhanov [24] to exemplify their limiting curvature hypothesis. They suggested that inside the Schwarzschild black hole, very close to the singularity, when the Planck scale is reached, there would be corrections to the Einstein theory of gravity. These corrections would not allow the curvature of the spacetime to dynamically grow to infinite values. Instead, the effective curvature of the spacetime would be bounded from below by −2 p , where p is the Planck length. Naturally, this hypothesis implies that there will be no curvature singularity. Instead, the model in [24] proposes that very close to the spacetime singularity, where the curvature reaches the 2 p , the spacetime makes a transition from the Schwarzschild to the de-Sitter spacetime by passing through a very thin transition layer . The spacetime passes through a deflation stage and instead of singularity, reaches a new inflating universe free of singularity. 
The matching of these two spacetimes require stress-energy tensors on the joining shell which violate energy conditions. In the following, we recalculate the stress-energy tensor on the matching shell using the fractional calculus and show that the stress-tensor is modified. The modified stress tensor will be shown to lead to smooth matching of the spacetimes. Let us match the de-Sitter spacetime with the interior Schwarzschild spacetime. The metric for the two spacetimes may be written in a combined form as: where f (r) = (2M/r − 1) for the Schwarzschild metric and the f (r) = [(r/l) 2 − 1] for the de-Sitter metric. For simplification, let us define a new set of coordinates: v = λ/ √ f . The induced metric on the spacelike surface becomes ds 2 = dλ 2 +r 2 dΩ 2 . The normal to this surface is given by: and n α = 1/ √ f , √ f , 0, 0 . The coordinates in the spacetime is taken to be x α = (v, r, θ, φ) and that of the hypersurface to be y a = (λ, θ, φ). This implies that e α q ∂ α = (1/ √ f )(∂/∂v) α , e α θ ∂ α = (∂/∂θ) α and e α φ ∂ α = (∂/∂φ) α . Let us evaluate the extrinsic curvatures. The general expressions for these quantities for either of these spacetimes are given by the following: For the Schwarschild Metric, which is taken to be the interior spacetime, these expressions are obtained using the equations (92) and (86) are: For the de-Sitter metric, taken to be the external or the future spacetime, the same expressions are given as, using (86) are : The jump in the components of the extrinsic curvatures are given by: The components of the stress-energy tensor is given by S q q = λ/4π and S θ θ = S φ φ = (κ + λ)/8π. Quite noticably, the values of the energy momentum tensors are markedly different from those obtained in [24]. The values differ by quantities which are proportional to the ratio (∆/R 0 ), and hence by choosing the value of this ratio judiciously, it can be easily seen that the energy momentum tensor can be made to vanish. Hence, one may match the two spacetimes smoothly across a spacelike hypersurface. Junction conditions for null hypersurfaces Let us consider a null hypersurface that partitions the 4-dimensional spacetime into two regions M + , g + µν (x + ) and M − , g − µν (x − ) , which we shall conveniently call as the future and the past respectively. Let us denote the coordinates of the spacetime as x α , α = 0, 1, 2, 3, whereas the coordinates on either side of the hypersurface will be denoted by y a , a = 1, 2, 3, which will mean the collective coordinates (λ, θ A ), where θ A , A = (2, 3) denotes the variables on the two-dimensional cross-sections of the hypersurface. On each side of the hypersurface, one may construct the tangents to the generators of the null hypersurface ( α ) and the transverse spacelike vectors (e α A ), which are tangents to the cross-sections (taken to be compact) of the hypersurface. These vectors shall be denoted by: with the following properties: α α = 0, α e α A = 0. These vectors may be constructed for both sides of the null hypersurface. Further, on each side, the basis needs four vectors and the fourth vector, will be taken to be a null vector. It will be denoted by n α with the following properties: α n α = −1, n α n α = 0, n α e α A = 0. The typical situation with a null surface is that the usual extrinsic curvature, K ab = (1/2) (£ g αβ ) e α a e β b , corresponding to the normal to the hypersurface is continuous, since the normal is also the tangent α . 
So, one usually defines the transverse component of the extrinsic curvature corresponding to the null vector field normal to the transverse cross-sections of the hypersurface. This vector is n a , such that .n = −1. The transverse extrinsic curvature may be defined as C ab = (1/2) (£ n g αβ ) e α a e β b . The stress-energy tensor of the shell is given by: is the shell's surface density and p = (−1/8π)[C λλ ] is the surface pressure. Null Charged Shell Collapsing on a Charged Black Hole Let us consider a spherically symmetric charged black hole of mass M and charge Q on which a null charged shell of mass E and charge q collapses. Outside the shell, the spacetime outside the total configuration may be viewed as a spherically symmetric Reissner-Nordstrom type geometry with mass (M + E) and charge (Q + q). As before, the spacetimes outside and inside the shell shall be denoted by + and − respectively. The metric is continuous across the shell. Let us check that the extrinsic curvature of the null surface corresponding to the null normal α is also continuous on either side of the surface. This is always true for the non-fractional case and precisely for this reason, the concept of the transverse curvatures have been introduced. We show that for the fractional case too, the extrinsic curvatures corresponding to the null normal of the surface is continuous. For the interior solution, we get that −α = (0, 1, 0, 0) and hence, the components of the fractional extrinsic curvature on the cross-sections are: q K − AB = (1/2) q £ g AB . Using the formulae from the appendix, equation (86), we get These two equations may be combined to the following form: For the exterior solution too, the null normal is given by +α = (0, 1, 0, 0) and the extrinsic curvature corresponding to this null normal is q K + AB = 1 2 q £ (g AB ) is given by: This implies that the extrinsic curvatures are also continuous q K + AB = q K − AB . The transverse extrinsic curvature is not continuous for this metric. The expression for q C + θθ is given by Similarly for q C + φφ , we get: So these two expressions may be combined to give: where σ AB = R 2 0 + R 2 0 sin 2 θ. Similarly, for the interior spacetime, the transverse component of the extrinsic curvature is given by: These equations immediately imply that the shell's surface pressure is zero for this case. The shell's surface density is This relation clearly implies that to satisfy the weak energy condition, we must have As a simple application, let us study if the charged black hole may be overcharged, so that the total charge (Q+q) exceed the total mass (M +E). It is a simple matter to check that the condition for overcharging violates the weak energy condition. So, even in the fractional modification, a charged black hole cannot be overcharged. Matching Schwarschild and de-Sitter spacetimes across horizons Let's start with a general form of the metric and then we shall specialize to the individual cases. The general form for a spherical symmetric metric in the advanced Eddington -Finkelstein coordinates is given by: The coordinates of the spacetime is given by x α = (v, r, θ, φ). Let us assume a null hypersurface (a shell) given by r = r 0 with coordinates of the shell being y a = (v, θ, φ). The null surface is foliated by compact surface S 2 . The vector fields tangent to the sphere are given by, e α θ = (∂/∂θ) α e α φ = (∂/∂φ) α . Let us now determine the set of null vectors tangent to the null surface which is given by the relation f (r 0 ) = 0. 
The generator of the null surface is α = (∂/∂v) α and the transverse null vector is n α = −(∂/∂r) α . Let us consider the interior metric (M − ) to be the de-Sitter spacetime: As usual, the standard extrinsic curvatures associated to the null normals of the are continuous and hence let us calculate the transverse extrinsic curvatures q C − θθ and q C − φφ : and similarly for q C + φφ , we get So combining them together, we get: where σ AB = R 2 0 + R 2 0 sin 2 θ. The quantity q C − vv gives: The exterior spacetime is taken to be the Schwarzschild spacetime (M + ), with the metric Again, the co-ordinate on null shell are (v, θ, φ). Let us calculate transverse curvatures. Just as in the previous case, the result is The q C + vv is given by: Let us calculate the quantities associated with the shell. The surface density µ = 0. The pressure is given by: So, if the matching surface is the horizon, R 0 = 2m = l and hence the pressure must by non vanishing. However, in the fractional modification, we may choose the ∆/R 0 judiciously to get a smooth matching of the two spacetimes. Matching the Reissner-Nordstrom and the de-Sitter spacetimes Let us determine the criteria for matching the Reissner-Nordstrom spacetime and the de-Sitter spacetimes on the inner horizon of the non-extremal charged black hole. Interestingly the matching is to be carried out on the inner horizon as was firts proposed in [23]. The Penrose diagram is given in fig 2. . The external spacetime is the Reissner-Nordstrom spacetime (M + ) with the following metric: The co-ordinates on null shell are (v, θ, φ) and R 0 = m− m 2 − Q 2 . The transverse extrinsic curvatures q C + θθ and q C + φφ may be written as The interior spacetime is the de-Sitter spacetime with matching at R 0 = l. The curvatures have already been found out in the previous subsection. The properties of the shell may be immediately obtained. The surface density µ = 0 but the pressure is So, again, in the standard case, when (∆/R 0 ) = 0 the spacetimes matching requires a shell which shall hold this pressure and hence the matching is not smooth. Incidentally, in [23], the authors noted that for special case like 3l 2 = Q 2 , there is a smooth matching of the two spacetimes. This matching is a special case. The fractional generalisation however, shows that it is possible to adjust the parameter (∆/R 0 ) to get a vanishing pressure and hence, a smooth matching of the spacetimes on the hypersurface. Discussions In this paper, we have developed the fractional generalisation of the Israel-Darmois-Lanczos junction conditions for spacelike/timelike as well as for null hypersurfaces. We have observed that there is a significant modifications due to the fractional generalisation. First, due to the definition of the fractional differentiation through an integral, it automatically incorporates the non-local spacetime correlations into itself. As a manifestation of this, the thickness of the shell gets incorporated into to the values of the shell's properties like the energy and pressure. We have taken several examples and have demonstrated that by choosing this thickness parameter ∆/R 0 judiciously, it is possible to join many spactimes smoothly across spacelike timelike or null hypersurfaces. A point of crucial importance is that must be mentioned here is that the dimension of the spacetime has been taken to be integral. Fractal dimensions may also be possibly included. 
In fact, general relativity may also be suitably adapted for fractal spacetimes, which would also require revising our notions of coordinate transformations and covariance. However, we have not attempted this path, of altering the theory of general relativity to recast it for all spacetime dimensions, integral or non-integral. Instead, we have looked for alternate avenues by generalising the notion of Lie-derivative which is initrinsically attached to the differentiable structure of the spacetime [26]. Unlike the usual Levi-Civita connection, there is no requirement of the metric and hence the Lie derivative is much primitive and turns out to be most useful. This generalisation has been used to construct the extrinsic curvatures and hence the surface properties of the shell. In the appendix, we have developed the reasons as to why we should expect that there should be some modification in the dynamics as well. We show that the Einstein equations modify significantly. The ramifications of these issues shall be dealt with in future papers. Fractional derivative The Riemann-Liouville definition of fractional calculus is usually given in the form of a integral transform of a specialised type, as given below [1,2,3]: where ν > 0. This definition is the foundation of the theory of fractional differentiation (and integration), but breaks down at the integral points, ν = 0, −1, −2, · · · . At those points the integration may however be replaced by the ordinary integration formula. The Caputo derivative is a modification of the Riemann-Liouville derivative where suitable modification have been applied so that it satisfies all the rules of a derivative. The Caputo derivative is defined as follows [1,2,3]: where the superscript q denotes the fractional parameter, 0 < q ≤ 1. To take into account of the tensor indices, a further modification is added in [26] as follows: For example, if the integration of the metric variable g ij (r) is to be carried out from one end of the shell (of thickness ∆) to the other, the above definition gives: where the integration limits have been chosen appropriately. This definition has been utilised in this paper. Beta Function and relation to the Hypergeometric functions In the paper, we have frequently made use of the Beta function, defined as: where a > 0, b > 0. In general, we may also write it as B x (a, b) = x a 1 a For D q r−∆,r (r −1 ), a similar calculation yields the following result: Using eqn. (83) and property of Gamma function i.e (1 − q)Γ(1 − q) = Γ(2 − q) we get: The computation for D q r−∆,r (r −2 ) proceeds along similar lines and gives: which using the equation (84) and (1 − q)Γ(1 − q) = Γ(2 − q) gives us: Modification of the Einstein equations The fractional derivative leads to a modification of the partial derivative. From the previous sections, we note that the Caputo derivative modifies the derivative through a factor (1 − q)∆/R 0 . Let us use this form to write for any function g, a modification of the derivative operator as: here β is some constant, and q denotes the fractional parameter. Using this definition of the derivative, the relation between new Christoffel symbol (for non-Levi-Civita connection) and old Christoffel symbol (for Levi-Civita connection) becomes :Γ α βγ = Γ α βγ ±α(1 − q) whereα is some constant. This gives a relation between old Riemann tensor and new Riemann tensor. 
The usual definition of the Riemann tensor is modified accordingly. The relation between the new Riemann tensor and the old one involves a correction term of the form μ = β ∂_γ Γ^α_{βδ} + ∂_γ α̃ − β ∂_δ Γ^α_{βγ} − ∂_δ α̃ ± α̃ (Γ^ν_{βδ} ± Γ^α_{νγ} ∓ Γ^ν_{βγ} ∓ Γ^α_{νδ}). Contracting indices gives the modified Ricci tensor R̃_{αβ} and, similarly, the modified Ricci scalar R̃, where η and τ are constants appearing in the contracted correction terms. The Einstein field equations get modified as well, through the combination R̃_{αβ} − (1/2) g_{αβ} R̃. So, the dynamics of the gravitational field gets modified under the fractional generalisation.
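The window-restricted fractional derivative D^q_{r−Δ,r} defined above can also be evaluated numerically, which provides a quick sanity check on the appendix results: for 0 < q < 1 the Caputo-type integral is well defined, and as q → 1 it should approach the ordinary derivative. The sketch below does this by quadrature for f(r) = r² and f(r) = 1/r; the radius and the integration window (shell thickness) are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo_window(fprime, r, delta, q):
    """D^q_{r-delta, r} f(r) = 1/Gamma(1-q) * int_{r-delta}^{r} f'(tau) (r - tau)^(-q) dtau.
    The substitution s = r - tau moves the algebraic singularity to s = 0, where quad's
    'alg' weight (here s^(-q)) handles it exactly."""
    integral, _ = quad(lambda s: fprime(r - s), 0.0, delta,
                       weight='alg', wvar=(-q, 0.0))
    return integral / gamma(1.0 - q)

r0, delta = 10.0, 0.5   # hypothetical shell radius and thickness
for q in (0.5, 0.9, 0.99):
    d_r2   = caputo_window(lambda x: 2.0 * x,     r0, delta, q)   # f(r) = r^2
    d_rinv = caputo_window(lambda x: -1.0 / x**2, r0, delta, q)   # f(r) = 1/r
    print(f"q = {q}: D^q(r^2) = {d_r2:.4f}, D^q(1/r) = {d_rinv:.6f}")
# As q -> 1 the two values approach the ordinary derivatives 2*r0 = 20 and -1/r0**2 = -0.01.
```

Applied to metric components instead of r² and 1/r, the same routine produces the kind of quantities that enter the modified junction conditions, with the correction controlled by q and by the ratio Δ/R_0.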
9,270.8
2018-10-25T00:00:00.000
[ "Mathematics" ]
Determination of the carrier concentration in CdSe crystals from the effective infrared absorption coefficient measured by means of photothermal infrared radiometry In this paper, a non-contact method that allows one to determine the carrier concentration in CdSe crystals is presented. The method relies on the measurement of the effective infrared absorption coefficient by means of photothermal infrared radiometry (PTR). In order to obtain the effective infrared absorption coefficient and the thermal diffusivity, the frequency characteristics of the PTR signal were analyzed in the frame of a one-dimensional heat transport model for infrared semitransparent crystals. The carrier concentrations were estimated using a theory introduced by Ruda and a recently proposed normalization procedure for the PTR signal. The deduced carrier concentrations of the investigated CdSe crystals are in reasonable agreement with those obtained using Hall measurements and infrared spectroscopy. The method presented in this paper can also be applied to other semiconductors with carrier concentrations in the range of 10¹⁴–10¹⁷ cm⁻³. M. Pawlak (corresponding author), Faculty of Physics, Astronomy and Informatics, Institute of Physics, Nicolaus Copernicus University, ul. Grudziądzka 5/7, 87-100 Toruń, Poland; e-mail: <EMAIL_ADDRESS>. Introduction Electrical characterization techniques of bulk semiconductors using conventional methods are very often inconvenient, as electrical contacts are needed. For industrial applications, on-line inspections are required. In this context, non-contact, non-destructive and fast methods for the measurement of the thermal and electrical properties of semiconductors seem to be more appropriate. Photothermal radiometry (PTR) is a non-contact method which enables the measurement of optical and thermal parameters of materials [1,2]. In the case of semiconductors, PTR additionally yields information about the recombination parameters [3,4]. Recently, it was found that for infrared (IR) semitransparent semiconductors, it is possible to obtain quantitative information on the carrier concentration after calibration against the Hall carrier concentration [5]. In this paper, a simple normalization procedure of the PTR spectra is presented, which allows one to determine the carrier concentration in CdSe single crystals using the normalized effective IR absorption coefficient measured by means of the PTR method and the theory proposed by Ruda [6]. Materials n-type CdSe single crystals were grown by the high-pressure Bridgman method without a seed under an argon overpressure. The crystals were cut perpendicular to the growth direction into 0.9- to 1.3-mm-thick plates. Next, the plates were mechanically polished and chemically etched in a mixture of K₂Cr₂O₇, H₂SO₃ and H₂O in the proportion 3:2:1. Then, they were treated in CS₂ and hot 50% NaOH solution and finally rinsed in water and ethyl alcohol. Experimental setups The PTR experimental setup is presented in Fig. 1. The thermal waves were excited using an argon-ion laser (LASER) with an output power of 200 mW and a photon energy of 2.41 eV (operating wavelength λ = 514 nm). The laser beam, of about 2 mm diameter, was intensity modulated by means of an acousto-optical modulator (A-O M) in the frequency range 1 Hz-100 kHz and focused onto the sample. Two BaF₂ lenses (L₁, L₂) were used to collect the IR radiation from the sample (S) onto the photoconductive mercury cadmium telluride (MCT, HgCdTe) detector with a detectivity peak at 10.6 μm.
The IR transmittance of the BaF₂ lens was about 90%. The MCT detector, covering the IR range from 2 to 12 μm, was supplied with a ZnSe cutoff filter (F) in front, having a transmittance of about 90% in the wavelength range of the detector. The signal (amplitude A and phase φ) of the MCT detector was amplified and filtered by a lock-in amplifier (LIA, Stanford 830 DSP) and analyzed by a personal computer (PC). The IR spectra were measured using the IRScope II attached to an IFS 66 IR spectrometer (Bruker GmbH, Ettlingen, Germany) equipped with a liquid-N₂-cooled mercury cadmium telluride (MCT) detector. The detailed description of the IR spectrometer and the experimental procedure was presented elsewhere [7]. The IR absorption coefficients were calculated from the transmission spectra under the assumption that the optical reflection coefficient is constant in the investigated spectral range. Theoretical model In the Drude theory, the free-carrier absorption coefficient β is proportional to the carrier concentration N [8]; in the classical expression it depends on the elementary charge e, the effective mass of the electron m_e, the electric permittivity ε₀, the speed of light c and the relaxation time τ of the scattering processes. For semiconductors, however, the dependence of the optical absorption coefficient on the wavelength of the absorbed radiation is rather of the type β(λ) ∝ λ^p, where 2 < p ≤ 3.5. The parameter p depends on the scattering mechanisms. There are three main types of scattering mechanisms: acoustic phonons, optical phonons and impurity scattering. Using the quantum mechanical approach, it is possible to calculate the free-carrier absorption coefficient; the IR absorption coefficient can then be written as a sum over all scattering mechanisms [9], with A_acoustic, A_optical and A_impurity the proportionality coefficients for the particular scattering mechanisms. For semiconductors with a carrier concentration N in the range of 10¹⁴-10¹⁷ cm⁻³, Ruda [6] found the following relationship between the IR absorption coefficient measured at λ = 10 μm, the p coefficient and the carrier concentration for n-ZnSe: N = 6.7 × 10¹⁵ · (3.5 − p) · β(λ = 10 μm) (3). (Fig. 2 shows the IR absorption spectra as a function of wavelength λ for the CdSe crystals with the best fits of formula (4) (lines) to the experimental data; for clarity, the number of experimental points shown was reduced.) Experimental results and discussion According to the validity range of the Ruda formula, the method presented should be applicable for carrier concentrations in the range 10¹⁴-10¹⁷ cm⁻³. Two samples that possess a concentration almost within this range were available for this study. Figure 2 presents the IR absorption coefficients for the CdSe crystals. As one can observe, with increasing excitation wavelength the IR absorption coefficient increases monotonically. This behavior is typical for absorption caused by free carriers. In Fig. 2, the experimental data are displayed together with a fit using the relation β(λ) = A·λ^p (4). The obtained fitting parameters p for a-CdSe and b-CdSe are 2.43 and 2.30, respectively. The absolute error in the determination of the p parameter is Δp = 0.02 and 0.03, respectively. This means that the dominating scattering mechanism in these crystals is phonon scattering. Assuming that formula (3) can be applied to the n-CdSe crystals, the carrier concentration can be estimated. The carrier concentrations for the a-CdSe and b-CdSe crystals calculated with formula (3) are 1.38 × 10¹⁷ and 3.54 × 10¹⁶ cm⁻³, respectively.
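The power-law fit β(λ) = Aλ^p used to extract the p parameter can be reproduced with a short script; a linear regression in log-log space is one standard way to do it. The data points below are synthetic stand-ins generated with a plausible exponent, not the measured CdSe spectra.

```python
import numpy as np

def fit_power_law(wavelength_um, beta_cm):
    """Fit beta(lambda) = A * lambda**p by linear regression of log(beta) on log(lambda)."""
    p, log_a = np.polyfit(np.log(wavelength_um), np.log(beta_cm), 1)
    return np.exp(log_a), p

# Synthetic absorption data rising roughly as lambda^2.4 (illustrative, not measured values)
lam = np.array([4.0, 6.0, 8.0, 10.0, 12.0])   # wavelength in micrometres
beta = 0.09 * lam**2.4                         # absorption coefficient in cm^-1
A, p = fit_power_law(lam, beta)
print(f"p = {p:.2f}, beta(10.6 um) = {A * 10.6**p:.2f} cm^-1")
```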
The effective IR absorption coefficient and the thermal diffusivity of the CdSe crystals were estimated using the one-dimensional heat transport model for IR semitransparent crystals [5]. The experimental procedure consists in measuring the PTR signal in the transmission and reflection configurations, with the PTR signal obtained in the reflection configuration used as the reference signal. The applied theoretical approach is valid only for semiconductors with a very short recombination lifetime (with a direct energy bandgap). For semiconductors with an indirect energy bandgap, such as silicon, the influence of plasma waves has to be taken into account. In paper [5], the authors found very good agreement between the experimental values of the thermal diffusivity obtained using the PTR method and the photopyroelectric technique [10]. The good agreement of the data obtained by the two different techniques proved the reliability of the PTR data and of the theoretical approach based on the adjustment of the two parameters, i.e., the thermal diffusivity D_t and the effective IR optical absorption coefficient β_IR,eff. The obtained values of the effective IR absorption coefficient for the a-CdSe and b-CdSe crystals were 7.5 ± 0.2 and 0.9 ± 0.1 cm⁻¹, respectively. The effective IR absorption coefficient β_IR,eff represents the IR absorption coefficient averaged over the wavelength range of the MCT detector, in this case between 2 and 12 μm. When the IR absorption coefficient increases monotonically with increasing excitation wavelength in the detection wavelength range, one can normalize the effective absorption coefficient to the IR absorption coefficient measured at λ = 10.6 μm. Figure 3 presents the proposed normalization procedure. Figure 3 shows that the relation between the effective IR absorption coefficient and the IR absorption coefficient measured at λ = 10.6 μm using the IR spectrometer can, for the samples investigated, be taken as linear in a first approximation. The slope coefficient is 2.6 ± 0.2. A limitation of this procedure is worth discussing here. In the case of semiconductors with a carrier concentration higher than 2 × 10¹⁷ cm⁻³, this procedure fails. For example, for Cd₁₋ₓMgₓSe crystals, the free-carrier absorption region moves toward smaller wavelengths, of about 6 μm [7]. Multiplying the values of the effective IR absorption coefficient of the CdSe crystals by the slope coefficient, one can calculate the normalized effective absorption coefficient β_n,IR,eff. Assuming that the Ruda formula (3) can be rewritten as N = 6.7 × 10¹⁵ · (3.5 − p) · β_n,IR,eff (5), the carrier concentration in the CdSe crystals can be calculated. Table 1 presents the values of the carrier concentration obtained using Hall measurements, IR spectroscopy and the PTR method. As one can see in Table 1, the carrier concentrations obtained using the three techniques for a-CdSe are in very good agreement. The errors given in Table 1 were obtained using the exact differential method, taking into account the errors in the determination of the normalization slope, the p coefficients and the effective IR absorption coefficients. Compared to the Hall method, the results obtained are in reasonable agreement. Although the Hall method usually has smaller errors (1-2%), its error strongly depends on the sample preparation procedure (contacts) [11]. The PTR method presented in this paper does not suffer from this sample preparation issue.
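As a numerical restatement of the normalization step and Eq. (5), the minimal sketch below plugs in the slope, the p values and the effective IR absorption coefficients quoted above. The printed numbers follow directly from those inputs and are not meant to reproduce the Table 1 entries, which additionally carry the propagated errors discussed in the text.

```python
def carrier_concentration(beta_ir_eff, p, slope=2.6):
    """Eq. (5): N = 6.7e15 * (3.5 - p) * beta_n_ir_eff, with beta_n_ir_eff = slope * beta_ir_eff.
    beta_ir_eff in cm^-1; the result is in cm^-3 (validity range roughly 1e14-1e17 cm^-3)."""
    beta_n_ir_eff = slope * beta_ir_eff     # normalization to the value at 10.6 um
    return 6.7e15 * (3.5 - p) * beta_n_ir_eff

# Values quoted in the text for the two crystals (p from the lambda^p fits,
# beta_ir_eff from the PTR frequency analysis)
print(f"a-CdSe: N ~ {carrier_concentration(7.5, 2.43):.2e} cm^-3")
print(f"b-CdSe: N ~ {carrier_concentration(0.9, 2.30):.2e} cm^-3")
```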
Conclusions A simple normalization procedure of the IR spectra, which allows one to determine the carrier concentration in CdSe crystals using the effective IR absorption coefficient deduced from PTR experiments, has been presented in this paper. The results obtained are in reasonable agreement with those obtained from the Hall measurements and the IR spectroscopy. The proposed method can be used for IR semitransparent semiconductors with carrier concentrations in the range between 10¹⁴ and 10¹⁷ cm⁻³. The estimated errors of the data deduced from the PTR measurements are also given.
2,469.6
2015-01-01T00:00:00.000
[ "Physics", "Materials Science" ]
A Comprehensive Solution Approach for CNC Machine Tool Selection Problem A proper CNC machine selection problem is an important issue for manufacturing companies under competitive market conditions. The selection of an improper machine tool can cause many problems such as production capabilities and productivity indicators considering time and money industrially and practically. In this paper, a comprehensive solution approach is presented for the CNC machine tool selection problem according to the determined criteria. Seven main and thirteen sub-criteria were determined for the evaluation of the seven alternatives. To purify the selection process from subjectivity, instead of a single decision-maker, the opinions of six different experts on the importance of the criteria were taken and evaluated using the Best-Worst method. According to the evaluations, the order of importance of the main criteria has been determined as cost, productivity, flexibility, and dimensions. After the weighting of the criteria, three different ranking methods (GRA, COPRAS, and MULTIMOORA) were preferred due to the high investment costs of the selected alternatives. The findings obtained by solving the problem of selection of the CNC machine are close to those obtained by past researchers. As a result, using the suggested methodology, effective alternative decision-making solutions are obtained. Introduction Companies need to have many plans related to marketing, financing, and production in today's competitive markets. On the other hand, companies, based on these strategies, have to take a series of decisions, especially at the stage of establishment and when making growth decisions. One of these decisions is the selection of machines and equipment to be used in manufacturing. Identifying the appropriate machine or equipment from among the alternatives available is also a very important decision which, in the long run, affects the efficiency of the production system. The use of suitable machinery improves the manufacturing process, ensures the effective use of manpower, increases productivity, and enhances the versatility of the system (Dağdeviren, 2008). Also, the characteristics of the chosen machine have a considerable effect on prices, efficiency, and performance numbers, which are the key objectives of the production strategy. Generally, computer numerical control (CNC) machines, which can be used with high precision to perform repetitive, challenging, and unsafe production jobs, are considered cost-effective equipment (Athawale and Chakraborty, 2010). CNC machines are regarded as cost-effective instruments that can be used to perform routine, demanding, and dangerous manufacturing tasks by offering a high degree of precision to eliminate human errors. CNC machines are also used in innovative fields such as the production of molds for phase change material (PCM) (Lim et al., 2018). A very complex decision problem is the purchase of such a technological machine tool, as it requires a large investment and has many alternative and selection criteria. There is a large amount of data to be analysed by the decision-maker and many features to consider for an appropriate and effective evaluation of the selection of machine tools. To choose the most suitable one, the decision-maker must be an expert or be familiar with the technical specifications of the machines (Rao, 2006). 
The scope of this paper, which is based on these needs, is to select a proper machine tool using Best-Worst weighted GRA, COPRAS, and MULTIMOORA methods. These methods are used to determine the order of priority with managerial insights and implications. However, this paper tries to answer the following questions: (1) What are the criteria of the most used features in the CNC machine tool selection process? (2) Which alternative CNC machine tool may be more suitable under variable weighted uncertainties? (3) How different weights of expert opinions will affect this selection problem on the Best-Worst methodology? The rest of this study is organized as follows: a related literature review is given in Section 2. In Section 3, at first, the problem definition is given and then, Best-Worst, Grey Relational Analysis (GRA), COPRAS, and MULTIMOORA methodologies are explained. The proposed solution approach and its implementation are placed in Section 4 with the numerical case study. In Section 5, the conclusion and discussions are presented for considering future studies. Literature Review For several years, machine tool selection has been an important decision problem for manufacturing firms. The primary explanation for this is that there are several issues with the selection of an inappropriate machine that affects overall efficiency and production capabilities in the long run (Taha and Rostam, 2012). A detailed literature review is given in Table 1 and some selected studies are summarized in the following. Since there is more than one criterion, Multi-Criteria Decision Making (MCDM) methods are widely used in the solution of the Machine Tool Selection (MTS) problem. Several options and criteria are evaluated in these studies to decide the best alternative. It is considered as the most suitable option for the decision-maker who, after rating the alternatives, gets the highest score (Ayağ and Özdemir, 2006). The researchers have used various approaches to solve the MTS problem until today. The Analytical Hierarchy Process (AHP) and TOPSIS Method are the most commonly used methods among these techniques. Due to uncertainties in the decision-maker's decisions, a fuzzy AHP instead of traditional AHP was used for the evaluation and justification of an advanced production system (Ayag and Ozdemir, 2006) with developing a software (Durán and Aguilo, 2008). To analyse the structure of the equipment selection problem and evaluate the weights of the parameters, Dağdeviren (2008) suggested an integrated approach using AHP and the PROMETHEE approach for obtaining the final rating and conducting sensitivity analysis by adjusting weights. Önüt et al. (2008) suggested a fuzzy TOPSIS based approach for the evaluation and selection of vertical CNC machining centres, where weights were determined by fuzzy AHP. Moreover, in order to measure the level of benefit provided by using fuzzy numbers in multi-criteria decision models, Yurdakul and Ic (2009) solved the problem of MTS and compared the solutions of TOPSIS and Fuzzy TOPSIS techniques. The TOPSIS method was used by Athawale and Chakraborty (2010) to evaluate CNC machines in terms of system features and costs. Then, as the consecutive studies, fuzzy numbers were used for pairwise comparison with an Analytic Network Process (ANP) which was proposed to improve the imprecise ranking of the company's requirements which is based on the conventional ANP for machine tool selection problem. 
The proposed methodology was developed to eliminate the effects of vagueness and uncertainty on the judgments of a decisionmaker (Ayağ and Özdemir, 2011). The next one is TOPSIS and ANP methods which are commonly used MCDM methods for performance analysis on the machine tool selection problem (Ayağ and Özdemir, 2012). Similarly, Fuzzy ANP and Fuzzy PROMETHEE-II techniques were integrated by Samanlioglu and Ayağ (2016) to solve the problem of machine tool selection. Chen et al. (2021) proposed an approach consisting of DEMANTEL, ULOWA, and PROMETHEE methods for mechanical product optimization design based on meta-action reliability. An example of the application and feasibility of their proposed method is demonstrated with an automatic pallet changer (APC) of a CNC machine tool. In this study, a new solution approach is proposed where criteria weights are determined by the Best-Worst method, and rankings are determined by considering with GRA, COPRAS, and MULTIMOORA methods. Within the scope of the study, a new solution approach in which weighting and ranking methods are used together has been tried to be put forward. The methods used are powerful methods that have not been used before in the machine tool selection problem and their effectiveness has been shown in previous studies in the literature and this study. Problem Definition (CNC Machine Selection Process) In the machine tool market, there are hundreds of CNC machine alternatives. In the first step, machine tool alternatives that can satisfy the company's needs should be identified. In the second stage, the defined alternatives are evaluated using any decision model. When comparing various machine tools, decision-makers use a set of criteria. These criteria are generally related to the technological features of the machine, but they also include criteria such as productivity, flexibility, cost, maintenance, and service. Ayağ and Özdemir (2006), as well as Ayağ (2007), defined 8 key criteria and 19 sub-criteria for machine tool selection. Productivity, flexibility, space adaptability, precision, reliability, safety and environment, and maintenance and service are the main criteria used in these studies. Taha and Rostam (2012) used literature information and expert opinion to develop 5 key and 27 sub-criteria that represent the technological characteristics of a machine tool. In their fuzzy-based decision-support system, Özceylan et al. (2016) used "cost", "quality", "flexibility", and "performance" as the main criteria. These four criteria are subdivided into 15 sub-criteria. Due to differences in manufacturing facilities and decision makers' viewpoints, different criteria have been used in machine tool selection in previous research. As shown by the examples in this section, technological features and cost elements are commonly used in machine selection. Methods In this paper, the Best-Worst method is applied for determining the criteria weights using the mean of the expert opinions via taking advantage of pairwise comparison from best to worst. This method has been preferred for reasons such as making less and more consistent comparisons, being able to be used with other methods to be used for sorting, and not having to deal with fractional numbers. On the other hand, the choice of CNC machine tool is one of the decision problems that require a very high investment. For this reason, alternatives and decisions can be compared by using more than one method rather than a single method for ranking the alternatives. 
As for the choice of alternatives, GRA is selected for its use of a reference series, COPRAS is selected to evaluate the performance of each alternative while taking contradictory situations into account, and MULTIMOORA is preferred because it applies a dominance solution over its subordinate ranking methods. These alternative selection methods are used with the criteria weights determined by the Best-Worst method. Consequently, the whole solution procedure is designed for a proper decision-making process on the CNC selection research problem. Best-Worst Method The method proposed by Rezaei (2015) is a multi-criteria decision-making method based on pairwise comparison. Applications have been made in areas such as supplier selection (Rezaei et al., 2016), assessment of the social sustainability of supply chains (Ahmadi et al., 2017), evaluation of service quality in the aviation industry (Gupta, 2018), and evaluation of companies' R&D performance (Salimi and Rezaei, 2018). The steps of the method are presented below (Kheybari et al., 2019). Step 1: The set of decision criteria (c_1, c_2, . . . , c_n) is determined. Step 2: The best (most important) and the worst (least important) criteria are determined. Step 3: A pairwise comparison is made between the best criterion and the other criteria using a scale of 1-9, and the BO vector A_B = (a_B1, a_B2, . . . , a_Bj, . . . , a_Bn) is obtained. (Here 1 means equally important, 9 means much more important.) Step 4: A pairwise comparison is made between the other criteria and the worst criterion, again using the 1-9 scale, and the OW vector A_W = (a_1W, a_2W, . . . , a_jW, . . . , a_nW) is obtained. Step 5: Optimal weights (w*_1, w*_2, . . . , w*_n) are calculated for each criterion. Ideally, the conditions w_B/w_j = a_Bj and w_j/w_W = a_jW should hold for each pair w_B/w_j and w_j/w_W. To satisfy these conditions as closely as possible, the following mathematical model is created, which minimizes the maximum absolute difference: min ξ, subject to |w_B/w_j − a_Bj| ≤ ξ and |w_j/w_W − a_jW| ≤ ξ for all j, with Σ_j w_j = 1 and w_j ≥ 0 for all j. By solving this model, the optimal weights and the optimal objective value ξ* are obtained; ξ* is the indicator that shows how consistent the evaluations are. If this value is close to zero, it means that a consistent evaluation has been made. Grey Relational Analysis (GRA) Method In grey system theory, a process about which the decision-maker has complete information is described as white, a process with no information is described as black, and a process with partial information is described as grey. In most decision problems with insufficient and/or incomplete information, the GRA method is used to select, rank, and evaluate (Chan and Tong, 2007; Yildirim, 2014; Aydemir, 2020). In the solution process, logical and numerical measures of the relationship between two decision series are called grey relational degrees, and values between 0 and 1 are assigned. The method consists of three steps: normalization, grey relational coefficient calculation, and grey relational degree calculation. In the first step, the data of the alternatives are transformed into comparison sequences by the normalization process. In the GRA method, the normalization is performed using Eqs. (7)-(9), respectively, according to benefit, cost, and optimality criteria (Feng and Wang, 2000; Yildirim, 2014; Sahin and Aydemir, 2019). Here: x_i(j) is the value of criterion j for alternative i; min x_i(j) is the smallest value of criterion j across the alternatives; max x_i(j) is the greatest value of criterion j across the alternatives; x_ob(j) is the reference series (ideal sequence) value for criterion j. After the normalization process, all values lie between 0 and 1.
A decision alternative (i) whose normalized value for a criterion (j) is equal or close to 1 is one of the best alternatives for that criterion. It is uncommon in practice that any decision alternative provides the best value for all criteria. Therefore, the alternative closest to the reference series should be determined (Kuo et al., 2008). For this process, the absolute differences between the reference series values and the normalized values of the alternatives are calculated using Eq. (10), and thus the absolute difference matrix is created (Yildirim, 2014). Here: x*_o(j) is the normalized reference value for criterion j; x*_i(j) is the normalized value of alternative i for criterion j. In the following step, the relationship between the desired and actual data is determined by calculating the grey relational coefficients from the absolute difference matrix. Grey relational coefficients (γ_0i(j)) are calculated with the help of Eq. (11). Δ_min and Δ_max in the equation are the smallest and largest values in the absolute difference matrix, and Δ_0i(j) expresses the absolute difference between the reference series value and the value of alternative i for criterion j. The distinguishing coefficient (ζ) can take values between 0 and 1 and is generally taken as 0.5 (Ho and Lin, 2003). In the last step, the grey relational degree is calculated by averaging the grey relational coefficients, and the ranking is performed according to this value. Grey relational degrees (γ_i) are determined by Eq. (12), dividing the sum of the grey relational coefficients calculated by Eq. (11) by the number of criteria (n), for the case where the criteria are equally weighted (Lin et al., 2002). If the criteria instead carry weights assigned by the decision-maker (w_j), grey relational degrees (γ_i) are determined by Eq. (13). The order of suitability and/or preference of the alternatives is obtained by sorting the calculated grey relational degrees in descending order. COPRAS Method The COPRAS method developed by Zavadskas et al. (1994) applies a stepwise ranking procedure to evaluate the performance of each alternative, taking into account the contradictory situations. It is a frequently preferred method especially for ranking processes in subjects such as evaluation of road design solutions (Zavadskas et al., 2007), supplier selection (Keshavarz Ghorabaee et al., 2014; Yildirim and Timor, 2019), investment project selection (Popović et al., 2012), and analysis of the basic factors of sustainable architecture (Amoozad Mahdiraji et al., 2018). The COPRAS method assumes a direct and proportional dependence of the degree of importance and utility of decision options on a system of criteria that adequately defines the alternatives, and on the values and weights of the criteria. Determining the importance, priority order, and degree of use of alternatives is carried out in five stages (Kaklauskas et al., 2005, 2006): Step 1: The weighted normalized decision matrix (D) is created. The aim is to obtain dimensionless weighted values from the comparative indices. For this, the following equation is used: the sum of the dimensionless weighted values of each criterion over all alternatives is then equal to the weight (q_j) of that criterion. Step 2: The sums of the weighted normalized indices defining alternative j are calculated. The sum for the criteria to be minimized is denoted S−j and the sum for the criteria to be maximized is denoted S+j.
The lower the value of indices such as total cost and implementation time (S−j), and the larger the value of indices calculated for criteria such as utility and strategy fit (S+j), the better the goals are achieved. Based on this, the total value of the indices is calculated with the following equation: Step 3: The degree of importance of the comparative alternatives (Q_j) is determined by the following equation: The larger the value of Q_j, the higher the priority of the alternative. The alternative with the highest Q_j value will be the one that meets the demands and targets the most. Step 4: The utility degree of alternative j (N_j) is calculated using equation (18): Step 5: The order of the alternatives is determined according to the utility degree (N_j). The alternative with an N_j value of 100 is the best. MULTIMOORA Method The Multi-Objective Optimization Based on Ratio Analysis (MOORA) method proposed by Brauers and Zavadskas (2006) was later developed as MULTIMOORA by Brauers and Zavadskas (2010) with the addition of the "Full Multiplicative Form of Multiple Objectives" method. MULTIMOORA (MOORA plus the full multiplicative form) thus consists of three subordinate methods: the ratio system, the reference point approach, and the full multiplicative form. MULTIMOORA is mostly used as a multi-criteria decision-making technique in practical applications in fields such as industry, economy, environment, health services, and information technologies. In this section, we first explain the MULTIMOORA method in terms of its subordinate ranking methods. The first step involves generating a decision matrix and weight vector, as seen below, with x_ij ratings for m alternatives and n criteria. Also, in MCDM problems the ratings of alternatives generally have different dimensions, so normalized ratings are required; for this, the Van Delft and Nijkamp normalization approach is used in the MULTIMOORA application, shown by Brauers et al. (2008) to be the most robust choice for the denominator in the ratio system: In certain cases, the three subordinate methods are also referred to as the ratio, complete multiplicative, and reference point forms, and they are used together to solve the problem at hand. The ratio method should be used as a completely compensatory model if the problem has independent criteria. The ratio system is computed by Eq. (21), where g is the number of beneficial criteria and y_i is the utility value. Using the ratio system, the best alternative is the one with the maximum utility y_i, and the alternatives are ranked in descending order of y_i with Eq. (22) (Hafezalkotob et al., 2019): The reference point approach, on the other hand, is a conservative method for measuring and comparing against the ratio system and the complete multiplicative form, using Eqs. (23)-(25). Initially, the maximal objective reference point (MORP) vector is defined as Eq. (23), where r_j represents the utility value (Hafezalkotob et al., 2019): Eq. (24) defines the distance between the weighted value of the vector members and the weighted alternative rating, and the efficiency of the Reference Point Approach is obtained from the maximum distance introduced in Eq. (25): The best alternative found by the Reference Point Approach has the smallest value of z_i, and the approach's ranking is provided by Eq.
(26): Although Brauers and Zavadskas (2012) demonstrated that using weights as multipliers in the full multiplicative form is meaningless, it is mentioned that the weights determined in the developed MULTIMOORA method proposed by Hafezalkotob and Hafezalkotob (2016) can be calculated as shown in Eq. (27): The maximum utility alternative is the best alternative based on the Full Multiplicative Form, and the sequence of this technique is obtained by equation (28) in descending order: Using these subordinate ranks, we also should decide the final ranking of the alternatives in the final phase. The aggregating multiple subordinate rankings are presented by Brauers and Zavadskas (2012) to obtain a final ranking list that is more robust than each ranking list of the subordinate methods. Dominance-based principles, mathematical operators, MCDM methods, and programming approaches are examples of these approaches. Using the principle of dominance, the original MULTIMOORA incorporates MOORA with the exact multiplicative form. At this point, it is obvious that Dominance Theory (Brauers and Zavadskas, 2011) is the most widely applied method; but, in recent years, other tools with potential success have been used instead of this theory (Brauers and Zavadskas, 2006;Hafezalkotob et al., 2019). As a result, the dominance theory is used in this analysis to produce a unified final ranking list. Results One of the most important decisions in the design and construction of a competitive manufacturing environment is the selection of the appropriate machine tools. This chapter contains the application of the proposed method to solve the machine tool selection problem. The basic framework of the methods proposed within the scope of the study and detailed in Section 3.2 is shown in Fig. 1. The method starts with determining the criteria to be used. After the literature review and the determination of the criteria by taking the expert opinion, the criteria weights were determined with the BWM, details of which are specified in Section 3.2.1. The determined weighted criteria are used as inputs to the GRA, COPRAS, and MULTIMOORA methods used in the ranking of machine alternatives. Final rankings were obtained as a result of the calculations made separately with these methods. Determination of Criteria and Weighting According to the consumer specifications, the appropriate machine should be selected from the existing database. At the beginning of the research, 4 main and 13 sub-criteria were determined to be used in the solution of the problem, taking into account the literature research and expert opinions. Dimensions (C 1 ), Flexibility (C 2 ), Productivity (C 3 ), and Cost (C 4 ) criteria, whose sub-criteria are shown in Table 2, were determined as the main criteria. The determined weights can be used with equal weight or they can be weighted differently according to the needs of the company. The importance of the criteria was determined by using the Best-Worst method, details of which are given in Section 3.2.1, as a result of the interviews with six experts. BWM is a method based on testing the importance level of criteria. Also, BWM emerges as a method that is being used frequently in scientific and industrial situations. The criterion weights determined as a result of the calculations made with the BO and OW vectors created as a result of expert opinions are shown in Tables 3, 4 and 5, respectively. 
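As an illustration of the weighting step just described, the following minimal sketch (in Python, using scipy) solves a linear variant of the Best-Worst model for a single expert. The Best-to-Others and Others-to-Worst vectors used here are hypothetical placeholder judgements, not the expert data behind Tables 3, 4 and 5, and the linear formulation is one common variant of the min-max model outlined in the Methods section; the sketch is intended only to show the mechanics of obtaining weights and the consistency indicator.

# Minimal sketch of a linear Best-Worst Method weighting step.
# The BO/OW vectors below are hypothetical, NOT the paper's expert data.
import numpy as np
from scipy.optimize import linprog

criteria = ["Dimensions", "Flexibility", "Productivity", "Cost"]
best, worst = 3, 0                       # assumption: Cost is the best criterion, Dimensions the worst
a_B = np.array([8.0, 4.0, 2.0, 1.0])     # Best-to-Others vector (1-9 scale), hypothetical values
a_W = np.array([1.0, 2.0, 4.0, 8.0])     # Others-to-Worst vector (1-9 scale), hypothetical values

n = len(criteria)
c = np.zeros(n + 1)
c[-1] = 1.0                              # decision vector is [w_1..w_n, xi]; minimise xi

A_ub, b_ub = [], []

def add_abs_constraint(expr):
    # Encode |expr . w| <= xi as two linear inequalities.
    A_ub.append(np.append(expr, -1.0)); b_ub.append(0.0)
    A_ub.append(np.append(-expr, -1.0)); b_ub.append(0.0)

for j in range(n):
    e1 = np.zeros(n); e1[best] += 1.0; e1[j] -= a_B[j]     # w_B - a_Bj * w_j
    add_abs_constraint(e1)
    e2 = np.zeros(n); e2[j] += 1.0; e2[worst] -= a_W[j]    # w_j - a_jW * w_W
    add_abs_constraint(e2)

A_eq = [np.append(np.ones(n), 0.0)]      # weights sum to one
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
weights, xi = res.x[:n], res.x[-1]
for name, w in zip(criteria, weights):
    print(f"{name:13s} {w:.3f}")
print(f"consistency indicator xi* = {xi:.3f}")

With the (perfectly consistent) placeholder vectors above, the sketch returns xi* = 0 and weights that rank cost first, followed by productivity, flexibility and dimensions, which mirrors the order of importance reported for the main criteria in this study.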
The weights of all the main and sub-criteria are shown in Table 6, and the values taken by the alternatives for these criteria (the decision matrix) are shown in Table 7. The weight calculations of the sub-criteria are presented in Appendix A. After determining the decision alternatives and criteria weights, the ranking process was started with the GRA, COPRAS, and MULTIMOORA methods. The following section explains the details of the sorting process with the aforementioned methods. Sorting the Alternatives Using GRA The method consists of three basic steps: normalization, grey relational coefficient calculation, and grey relational degree calculation. In the first step, the data of the alternatives are transformed into comparison sequences by normalizing the criteria according to whether they are benefit, cost, or optimality criteria. Normalized versions of the data presented in Table 7 are shown in Table 8. After the normalization process, the absolute value table is created using Eq. (10). The values in the absolute value table correspond to the absolute value of the difference between the reference series and the criterion value. The absolute values calculated are shown in Table 9. Grey relational coefficients (γ_0i(j)) are calculated with the help of equation (11). Then, the grey relational degrees (γ_i) to be used in the ranking are determined by dividing the total weighted grey coefficient value by the number of criteria, as shown in Table 10. As a result of the sorting made with the grey relational analysis method, the order of preference of the alternatives was determined according to these grey relational degrees (Table 10). Sorting the Alternatives Using COPRAS The second method used to sort the alternatives is the COPRAS method. This method starts with the formation of the weighted decision matrix with the help of Eq. (14). The matrix obtained with this equation is shown in Table 11. After calculating the normalized decision matrix, the sum of the criteria values to be minimized for each alternative (S−j) and the sum of the criteria values to be maximized (S+j) are calculated. Depending on the S−j and S+j values, the importance degrees of the alternatives (Q_j) are calculated using Eq. (17). Then, the utility degrees of the alternatives (N_j) are calculated by inserting the obtained Q_j values into Eq. (18). In the last step, the order of alternatives is obtained in descending order of the utility degree of the alternatives (N_j). The alternative with an N_j value of 100 is the best alternative. The S−j, S+j, Q_j and N_j values calculated for the alternatives and the priority order of the alternatives are shown in Table 12. The order obtained with the COPRAS method is given in Table 12. Sorting the Alternatives Using MULTIMOORA The first step of the MULTIMOORA method also includes creating a decision matrix and weight vector with x_ij ratings for m alternatives and n criteria, as seen below. As in the other methods, in the first step the normalization process is carried out using Eq. (20). The normalized decision matrix obtained by Eq. (20) is shown in Table 13. After the normalized decision matrix is created, the alternative ranking is determined according to the decreasing order of the calculated y_i value. The alternative ranking obtained with the Ratio System (RS), headed by A7, is shown in Table 14.
In the Reference Point Approach (RPA), which is a conservative method, first of all the absolute difference (distance) between the r_j values obtained by Eq. (23) and the normalized values (x*_ij) is determined. The order of the alternatives is then determined by the z values obtained using Eq. (25), the best alternative being the one with the smallest z value. The calculations of the Reference Point Approach are shown in Table 15, and the alternative ranking obtained with the Reference Point Approach is A7 > A5 > A4 > A3 > A2 = A6 > A1. In the Full Multiplicative Form (FMF), the product of the criteria values in the normalized decision matrix that are in the direction of maximization is divided by the product of the criteria values to be minimized, and u_i values for each alternative are calculated. The ranking according to the decreasing u_i values of the alternatives gives the ordering for this subordinate method. The calculations of the FMF approach are shown in Table 16, and the order obtained with the full multiplicative form is given there. At the last stage, the rankings found as a result of the calculations above have been converted into a single ranking with the theory of dominance. The final ranks were determined by taking the average of the rankings. The final rankings determined by applying the theory of dominance in the MULTIMOORA method are given in Table 17, together with the order obtained in this last step. As a result, the different sequences shown in Table 18 and Fig. 2 were determined. The sequences obtained with the COPRAS and MULTIMOORA methods are similar. The rankings obtained by grey relational analysis differ from the other two methods. Due to the dominant ranking, decision-makers may consider Alternative 1 as the best option. Discussion and Conclusions The abundance of machine alternatives, the difficulty in accessing reliable information, and the lack of experts in evaluating machine features make machine tool selection a difficult and important problem. In addition, it is known that an unsuitable machine selection adversely affects the efficiency, sensitivity, and flexibility of the entire production system. When all these situations are taken into consideration, it is seen that the right decision should be made by the right people, using appropriate methods, for the selection of a proper machine tool. When the studies in the literature are examined, it is seen that many different methods provide different solutions. In this context, many studies have been conducted in which the uncertainty situation, as well as deterministic methods, are taken into account. The important thing here is to make the right decision by evaluating the opinions of more than one expert working in the production environment with different methods, rather than relying on a single method and a single expert's opinion. In this paper, a new framework is proposed to examine the performance of different methods using the same criteria weights for a suitable machine tool (CNC machine) selection problem. The criteria weights determined by BWM were used to weight the decision matrices of the sorting methods in this new framework. Using seven alternatives, four main criteria, and thirteen sub-criteria, the machine alternatives were evaluated with the GRA, COPRAS, and MULTIMOORA methods.
To create a reliable final ranking in the MULTIMOORA method, the theory of dominance was used and the final rankings were determined by averaging the different rank values. In this way, the new approach, which includes BWM as the criteria weighting method, aims to increase the reliability of the final solution. In the evaluations made for the main criteria, it has been seen that the cost of the machine tool is the most important criterion, as in studies using similar criteria in the literature (Arslan et al., 2004; Önüt et al., 2008), followed by the productivity, flexibility and dimensions criteria, respectively. In the ranking made using the criterion weights obtained, it is seen that the COPRAS and MULTIMOORA methods give similar rankings, but the grey relational analysis method offers a different ranking. The proposed solution procedure is well suited to the research problem. The CNC machine selection problem has also been studied in many other pieces of research; in this respect, restricting the study to the selected seven alternatives, four main criteria, and thirteen sub-criteria can be regarded as the main research limitation. On the other hand, the main advantage is that the obtained results provide effective and robust decisions for the problem using comprehensive methods. In future studies, fuzzy logic-based methods could be used for the solution in cases where decision-makers express the importance levels of the criteria with linguistic variables. The evaluation of expert opinions could also be handled with intuitionistic approaches to MCDM methodologies.
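To make the mechanics of the three ranking procedures concrete, the following minimal sketch (in Python) applies weighted GRA, COPRAS, and the MULTIMOORA ratio system to a small hypothetical decision matrix with three alternatives, two benefit criteria and one cost criterion. The matrix, the weights, and the benefit/cost split are illustrative placeholders only, not the data of Tables 6-18; the sketch is meant solely to show how the three scores are computed and why their rankings can differ.

# Toy illustration of the three ranking methods used in the paper (GRA, COPRAS,
# MULTIMOORA ratio system). Decision matrix and weights are hypothetical.
import numpy as np

X = np.array([[3.0, 70.0, 120.0],          # rows: alternatives, columns: criteria
              [5.0, 60.0, 100.0],
              [4.0, 80.0, 150.0]])
benefit = np.array([True, True, False])    # last criterion (e.g. cost) is to be minimised
w = np.array([0.3, 0.3, 0.4])              # hypothetical criteria weights

# --- Grey Relational Analysis ---------------------------------------------
lo, hi = X.min(axis=0), X.max(axis=0)
norm = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
delta = np.abs(1.0 - norm)                 # reference series is the ideal value 1
zeta = 0.5                                 # distinguishing coefficient
gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
gra_score = (w * gamma).sum(axis=1)        # weighted grey relational degree

# --- COPRAS ----------------------------------------------------------------
d = w * X / X.sum(axis=0)                  # weighted normalised matrix
S_plus = d[:, benefit].sum(axis=1)
S_minus = d[:, ~benefit].sum(axis=1)
Q = S_plus + S_minus.min() * S_minus.sum() / (S_minus * (S_minus.min() / S_minus).sum())
N = 100.0 * Q / Q.max()                    # utility degree (best alternative = 100)

# --- MULTIMOORA: ratio system ----------------------------------------------
r = X / np.sqrt((X ** 2).sum(axis=0))      # vector (Van Delft-Nijkamp style) normalisation
y = (w * r)[:, benefit].sum(axis=1) - (w * r)[:, ~benefit].sum(axis=1)

for name, score in [("GRA", gra_score), ("COPRAS", N), ("MOORA ratio", y)]:
    order = np.argsort(-score) + 1         # alternatives from best to worst (1-based labels)
    print(f"{name:12s} scores = {np.round(score, 3)}  order = {list(order)}")

Because the three methods aggregate the same weighted information in different ways (reference-series closeness, proportional utility, additive ratio), they can legitimately produce different orderings on the same data, which is exactly the behaviour observed in this study when GRA disagreed with COPRAS and MULTIMOORA.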
7,345.8
2022-01-01T00:00:00.000
[ "Engineering" ]
The Swipe Card Model of Odorant Recognition Just how we discriminate between the different odours we encounter is not completely understood yet. While obviously a matter involving biology, the core issue is a matter for physics: what microscopic interactions enable the receptors in our noses (small protein switches) to distinguish scent molecules? We survey what is and is not known about the physical processes that take place when we smell things, highlighting the difficulties in developing a full understanding of the mechanics of odorant recognition. The main current theories, discussed here, fall into two major groups. One class emphasises the scent molecule's shape, and is described informally as a “lock and key” mechanism. But there is another category, which we focus on and which we call “swipe card” theories: the molecular shape must be good enough, but the information that identifies the smell involves other factors. One clearly-defined “swipe card” mechanism that we discuss here is Turin's theory, in which inelastic electron tunnelling is used to discern olfactant vibration frequencies. This theory is explicitly quantal, since it requires the molecular vibrations to take in or give out energy only in discrete quanta. These ideas lead to obvious experimental tests and challenges. We describe the current theory in a form that takes into account molecular shape as well as olfactant vibrations. It emerges that this theory can explain many observations hard to reconcile in other ways. There are still some important gaps in a comprehensive physics-based description of the central steps in odorant recognition. We also discuss how far these ideas carry over to analogous processes involving other small biomolecules, like hormones, steroids and neurotransmitters. We conclude with a discussion of possible quantum behaviours in biology more generally, the case of olfaction being just one example. This paper is presented in honour of Prof. Marshall Stoneham who passed away unexpectedly during its writing. A Brief Introduction to Our Senses Our senses allow us to demystify our surroundings. It is surprising then perhaps that one of our senses (smell) is still somewhat mysterious. Our senses receive and record input from the environment in order for us to respond and react in a fashion conducive to survival. A sense can vary from the very basic phototropism that plants exhibit as they grow towards light to the quite complex issue of pheromonal signalling in mate selection. Senses and their importance vary, of course, across species and environment. Note, for example, that cats can detect the difference in taste between sugar and saccharin, some snakes see via infrared light (heat), bats see via hearing (or echo-location), fish can smell water-soluble molecules, and dogs squirm at very high audio frequencies and detect cancerous scents that humans are quite oblivious to. For humans at least, science has a reliable idea of the mechanisms involved in most senses.
The taste of a molecule corresponds to which of the five (umami, sweet, sour, bitter and salty) receptors the molecule is able to activate (e.g., sodium glutamate, sucrose, acetic acid, quinine and sodium chloride respectively). Visual receptors allow us to see according to the wavelength of light that enters our eyes (red, green, blue) provided it is in the range 400-700 nm. Hearing uses mechanics within the ear to translate acoustic vibrations to sound: for example, frequencies of 16.35, 18.35 and 20.60 Hz correspond to musical notes C, D and E. Touch converts physical damage to sense receptors (heat, pressure) into sensory perception. Science knows the fundamentals: that glycerol makes things sweet, that blue and yellow can make green, that fundamental frequencies and integer related harmonics together sound nice, in ways identifiable by the chemistry and physics of the input information. Yet we do not completely understand the basic determinants (metrics) of how smell works at the odorant recognition level. Smell is a process where small molecules meet large receptor proteins (factors of 1000's larger in size) and depending on the combination of David and Goliath, there is (or is not) a triggering of a signalling cascade that results in a smell perceived by the brain. But how do particular molecules cause (or inhibit) this process? It is not just in olfaction that the effect of one specific small molecule can cause a cascade of important processes. Other examples include the triggering of cells by hormones or the signal transmission in nerves by acetylcholine [1]. This combination of sensitivity (one molecule can initiate a complex chain of events) and selectivity (different molecules generate distinct perceived odours) is very remarkable [2]. Thus the question of how this works in principle extends beyond olfaction: what controls the very specific actions of neurotransmitters, hormones, pheromones, steroids, odorants and anaesthetics? How could the side effects of certain drugs be predicted? How do we control desirable and undesirable interactions of molecule and receptor? Answering questions like these would not only satisfy basic scientific curiosity, but might also provide a firmer foundation for drug design and development. In important work that led to the award of the 2004 Nobel Prize in Physiology or Medicine, Axel and Buck isolated genes that coded for olfactory receptors, showing they belonged to the class of G-protein coupled receptors, GPCR [3]. Remarkable progress has been made over recent years regarding the genomics involved. However, whilst there is little doubt over what machinery is involved in the smelling (see Section 2), we still need to understand better the mechanics of how it does what it does. How can one understand the physics of the mechanisms that control the initial activation step when an odorous molecule meets one olfactory receptor? Though the crystal structure of soluble proteins can be determined, the detailed structure of olfactory receptors is still quite unclear because GPCRs are membrane proteins. Despite substantial progress [4,5] in producing large quantities of olfactory receptors (ORs), the ambitious aim of crystallizing these elusive proteins has yet to be achieved, thus there are still no detailed atomic structures of ORs. We note that whilst full structural information will surely be highly illuminating, a static picture of structure alone also may not tell us how odorant recognition is achieved. 
What We Know and What We Do not Know about Odorant Recognition As well as many of the biological mechanisms involved, we also know very precisely the molecular structure of most odorant molecules, and we can quantify a smell response. Response can be measured at the receptor level (the depolarization of the cell triggered by receptors) or by functional magnetic resonance imaging (fMRI) of the brain. It can also be measured by an individual's perception, though possibly less objectively. These parts of the puzzle, while understood, are difficult to manage because the number of degrees of freedom is so vast: the number of possible odorants may be in excess of 100,000 and the number of functional human receptor types is currently noted as 390 [6]. As a result, the number of different odorant-receptor combinations would be 390 × 100,000 (i.e., practically unlimited). Many programs, for example E-DRAGON, calculate molecular descriptors based on the molecular structure of the odorants submitted [7]. However, cross-correlation between molecular descriptors and response patterns reveals that no particular metric, such as number of carbons or the presence of a particular functional group, represents truly faithful response patterns. A common metric for odorant description is the shape, possibly defined via a van der Waals space-filling model, with particular odorants expected to fit within binding pockets of the receptor. There are certainly some correlations between the shape of a molecule and odorant receptor response, but likewise there are many cases where very different shapes produce the same pattern of activation of the odorant receptor repertoire, because many ORs are broadly tuned. It has been suggested that infrared (IR) vibrational spectra might be better predictors of smell than shape [8]. Programs like E-DRAGON, and those used for pharmaceutical design, do not directly implement vibrational spectra as an odorant metric. Furthermore, they explore the minimum energy (usually in vacuo) geometry of the odorant and do not account for effects at the binding site of the receptor or its environment. Regarding typical IR spectra, however, it is not certain that the correlation is any better than for shape in all known cases, though it does work for some [9]. One case where shape is clearly important in OR activation is the existence of odorant enantiomeric pairs that sometimes smell the same and sometimes smell different. The IR spectrum measured in achiral solution (the molecules are free to rotate) would not betray any difference between mirror-image related molecules, and yet the chiral environment of olfactory receptors would. Therefore, IR absorbance without any geometrical consideration does not explain or predict odorant response; on the other hand, shape-based theories also do not explain why enantiomers sometimes smell the same and sometimes they do not. Recent work [10,11] probes whether drosophila melanogaster identify chemical species on the basis of shape or not. In the experiments [10], four odorants were considered, along with their deuterated counterparts. The flies were trained to avoid one or other of the isotopic versions, and it was then found that they could generalize their response to the other molecules based on which isotope of hydrogen was used. It can be concluded that the flies are responding to the presence or absence of deuterium, rather than molecular shape. The effect of shape on OR activation can be excluded in this case on two counts.
First, replacing hydrogen by deuterium produces very little change in shape, yet the flies can distinguish the isotopes. Second, flies learn about deuterium from molecules with one shape, and then can generalize this knowledge to molecules of a different shape. One hypothesis is that the flies are responding to the vibrational frequency of the C−H bond stretch: since deuterium has twice the mass of hydrogen, the frequency drops by roughly a quarter following deuteration (the C−H stretch near 3000 cm−1 moves down to roughly 2100-2200 cm−1). This conjecture is given substantial support by a final experiment in which flies were allowed to respond to an odorant containing a nitrile (−C ≡ N) group, which has a vibrational frequency very similar to a C−D group (a rough numerical illustration of these frequency estimates is sketched below). They reacted to it as they would to the deuterated odorants. This is important evidence suggesting that drosophila melanogaster can distinguish odorants by their molecular vibrations [10]. Of course, the drosophila olfactory receptors are of a different type to human receptors. However, the only commonality required to support the swipe card model is a hydrophobic receptor environment and an acceptable energy gap tuned to the odorant vibrations. We address here how exactly these vibrations may be detected. The Problem with Odorant Recognition Humans can perceive by smell thousands of molecules, all small enough to be volatile, each of which activates a few olfactory receptors (it is very unusual that one odorant only activates one receptor). Further, humans can detect odorants at very small concentrations in air, as low as one part per trillion. The selectivity of these olfactory receptors is also especially remarkable considering that some odorants may agonize or antagonize a receptor [12]. There are 390 functional olfactory receptors in humans [6] that can respond to 100,000 or more odorants, thus eliminating the concept of 1:1 receptor-to-odorant matching. Another reason that receptors cannot have evolved to identify individual molecules (at least not all of them) is their ability to respond to chemicals never encountered before. Thus olfactory receptors are versatile and accommodating and yet often discriminating and selective. Olfactory receptors are large, floppy transmembrane proteins, containing tens of thousands of atoms. Yet their abilities are rightly envied by scientists designing artificial noses and other sensors. How is this selective, sensitive, powerful, versatile activation achieved? Understanding the physics of odorant recognition at the receptor level means understanding how an odorous molecule (which we shall call M) initiates a measurable signal. There are potential analogies with vision, where a photon causes an initial molecular transformation [13]. In this paper, we seek to better understand the corresponding atomic-scale mechanisms by which olfactory receptors are activated by odorants. We shall develop the swipe-card paradigm for the selective activation of receptors by small molecules. This paradigm recognizes that the small molecule must have a shape that is, in some sense, good enough to engage with the receptor, but some other property or process is needed to yield a selective response. With recent evidence that vibrations may indeed be detected, we suggest that the likely property is a vibration in the odorant molecule. We attempt here to identify and, where possible, quantify the first signal transduction step in the receptor that results in the release of a G-protein. Processes at the glomeruli and olfactory bulb are beyond the scope of this work.
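As a rough numerical illustration of the isotope argument above, the following sketch (in Python) estimates harmonic stretch frequencies for C−H, C−D and nitrile oscillators from representative force constants, and compares them with the thermal energy at room temperature. The force constants are illustrative, textbook-scale values rather than fitted data, and anharmonicity is ignored, so the numbers are only order-of-magnitude estimates.

# Rough harmonic-oscillator estimates of stretch wavenumbers for C-H, C-D and
# the nitrile (C#N) group, plus kT at 300 K for comparison.
# Force constants are illustrative order-of-magnitude values, not fitted data.
import math

AMU = 1.660539e-27      # kg
C_CM = 2.99792458e10    # speed of light, cm/s
KB = 1.380649e-23       # J/K
H = 6.62607015e-34      # J*s

def stretch_wavenumber(m1_amu, m2_amu, k_newton_per_m):
    # Harmonic stretch frequency in cm^-1 for a diatomic-like oscillator.
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU   # reduced mass
    omega = math.sqrt(k_newton_per_m / mu)             # angular frequency, rad/s
    return omega / (2.0 * math.pi * C_CM)

bonds = {
    "C-H": (12.0, 1.0, 480.0),      # ~480 N/m, illustrative single-bond stiffness
    "C-D": (12.0, 2.0, 480.0),      # same force constant, heavier isotope
    "C#N": (12.0, 14.0, 1790.0),    # triple bond, much stiffer, illustrative
}

for name, (m1, m2, k) in bonds.items():
    print(f"{name}: ~{stretch_wavenumber(m1, m2, k):.0f} cm^-1")

kT_cm = KB * 300.0 / (H * C_CM)     # thermal energy at 300 K in cm^-1
print(f"kT at 300 K: ~{kT_cm:.0f} cm^-1")

On these rough numbers the C−H stretch sits near 3000 cm−1, deuteration lowers it by roughly a quarter to about 2200 cm−1, and the nitrile stretch falls in the same region, which is the coincidence exploited in the fly experiments. All of these quanta are an order of magnitude larger than kT at ambient temperature, a point that matters for the quantum discussion later in the paper.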
Indubitably, there are important processes that control the overall perception [14] as the brain builds a scent perception from a number of receptors. Almost certainly any one molecule will be able to initiate a signal to the brain from a range of receptors. But the brain must have distinctive information to work with, and our concern here is just what molecular information determines whether a given receptor is activated and initiates a signal to the brain. Competing Theories of Odorant Recognition When odorant M reaches the hydrophobic cavity within the receptor, it will interact with amino acid residues of the GPCR protein, which comprises seven transmembrane helices. Though considerable progress has been made regarding proton switch activation in rhodopsin [15], exactly how M activates the olfactory receptor is uncertain in detail. The orientation of M will fluctuate, influenced by weak bonds such as van der Waals and electrostatic interactions. These relatively weak interactions may stabilize an active configuration that might induce an "on" configuration of the receptor (which we term R) and M, which we may label R+M. This is the stage to invoke analogies with a key in a lock [16], or a hand in a glove [17], and also to recognize that not only must M have the right shape, but somehow the key must be turned. The lock-and-key principle requires that there is a good fit between two reactants in order to create the desired product, and it operates as follows. In the absence of the ligand, a receptor protein fluctuates about some average configuration, and with this is associated a free energy, G_R. The average configuration is such as to minimize G_R. Once a ligand binds to the receptor, there is a new average configuration for the receptor. The change in average structure can induce a signal, and thus corresponds to the key turning. This mechanism works very well for many receptors (see for example Sigala et al. [18]), and is likely to be the mechanism when a receptor is tuned to just one ligand (as presumably is the case for pheromones). Given the success of this mechanism in many known cases, it is natural to extend it to olfaction, as has indeed been done by Amoore in 1962 and added to by Moncrieff in 1967 (refs). However, for promiscuous olfactory receptors that are known to respond to multiple ligands, it is far from clear that this mechanism can explain all properties of olfaction. Indeed, systematic studies show shape alone is a poor criterion for predicting odour [19]. We thus look for complementary possibilities. In any theory of odorant recognition, the olfactant molecular shape must play a role, if only to let the scent molecule access key parts of the receptor. Theories of the initial actuation event fall into two broad categories. One class relies on olfactant molecular shape alone, a class that covers many structure-activity relation descriptions. Some level of fit is clearly necessary, but even a good fit is not sufficient (see Figure 1): the odorant must somehow activate the receptor. What turns the key in the lock? As we have seen above, one natural assumption is that the odorant causes a mechanical deformation of the receptor. To illustrate the problem, consider ferrocene and nickelocene. These molecules have different odours, and yet have similar shapes (see the following figures). A systematic and extensive analysis of the problem by Charles Sell makes the point much more forcibly in our opinion [19].
A possible alternative model is what we have termed the swipe card picture [20]. It proposes that, whilst the shape must be good enough, other information characterising the odorant is also important. In lock-and-key models, a key of the right shape contains all the information to open the lock. In a swipe card (or keycard) model, the shape has to be good enough to fit the machine, but additional information is conveyed in a different way. Typical macroscopic swipe cards, like credit cards or hotel room cards, often encode the information magnetically. The specific swipe card model of odorant recognition we assess here uses a molecular vibration frequency as the additional information. The theory of Turin proposes that an electron transfer occurs if the odorant has the right vibrational frequency: discrimination and activation are achieved by an inelastic electron tunnelling (IET) mechanism, dependent on the ability of the odorant to absorb the correct amount of energy. IET describes a phenomenon well known in inorganic systems. The swipe card description was conceived as a generalisation of models like Turin's original idea of conventional IET within a biological context, but we emphasise that there are important differences. In our own work [20] we made a critical assessment of Turin's basic ideas, showed that the ideas seemed robust, needing values of key parameters in line with those from other biological studies. Our present paper extends the analysis, generalizing the simpler model of the previous paper, and assessing possible physical realizations. In our earlier publication, we could find no physics-based objections to Turin's model of the signal transduction mechanism in odorant recognition, in which discrimination and activation are achieved by IET. An advantage of such models is that they are potentially predictive. IET in inorganic systems is usually observed in circumstances that allow transmission over a continuum of energies. In biological systems, there are no equivalent continuous energy distributions. Our evaluation describing signalling times lets us compare the relative rates of non-discriminating tunnelling (characterized by an average time τ 0 ; in this case, energy is given to some combination of host modes or other degrees of freedom) and of discriminating tunnelling (characterized by a time τ 1 ). Successful selective activation requires the discriminating contribution (sensitive to the oscillator frequency ω o of odorant M) to dominate the non-discriminating contribution: τ 0 ≫ τ 1 . We showed this to be the case in rather general and robust circumstances in our previous paper. We stress that our receptor models need to recognize three points: 1. First, there will surely be some shape constraints, though these may play only a small part in discrimination. 2. Secondly, there are dynamic factors (such as conformational change) that appear detectable by olfactory receptors, so a purely static model is not appropriate. 3. Thirdly, we need to consider both charged components of the receptor/olfactant system and also charge transfers during actuation. Finally, we describe a quantized model for biological signal transduction at room temperature, a field of physics surrounded by controversy. Just as the initial events in photo-induced processes are very well described [13], a physically viable phonon-mediated mechanism is, we believe, well within the realms of reality as a putative signalling process. 
Even though the main thrust of the paper concerns what happens when the olfactant encounters a receptor, it is important to recognize that this is just one-albeit a critical one-of the steps between there being an olfactant in the atmosphere and the brain perceiving some odour. The sequence of events leading up to odorant recognition provides a context and lets us estimate a timescale for the critical steps. Given this context, we assess the feasibility of this proposed biological spectroscope as an olfactory detector. This extends our previous discussion, with a closer look at just what the relevant biological components might be, and what would be reasonable values of the basic parameters. This allows us to identify some of the implications of the model, and especially what might prove significant tests. We also note that odorant shape and frequency are not always sufficient to define a smell, since other factors, such as conformational mobility, are certainly important for a large class of enantiomers. But shape and vibrational frequency go a long way towards defining odour. Why Any Solution must Involve Physics Turin's mechanism is a specifically quantum idea, partly because of tunnelling, but primarily because a quantum oscillator can only receive or give energy as quanta of specific energy. A classical oscillator can, of course, give or receive any amount of energy. Inelastic electron tunnelling is long known in the physical sciences [22][23][24] and in a biological context in reactions [25]; however, it is new to biological signalling, and has led to misunderstandings [26]. Turin's basic idea, leads directly to possible experimental and theoretical tests. We shall discuss his ideas critically, emphasizing the observations that confront these ideas most significantly. Our first main aim is to check any possible physics-based objections, in an extension of our earlier work [20]. Our second main aim is to see what existing experimental scent studies imply. Thus, we examine especially those molecules that appear to be problematic. Such problem molecules are, in fact, challenges to almost all theories of odorant recognition. It will become clear that vibration frequencies do appear important, but there are still limits to what can be understood in terms of odorant shape and vibrations. In re-examining some of the ideas of how small molecules selectively activate receptors, we conjecture that the definition of signal generation here is repeated in other natural systems. Thus we have found a general rule that determines whether the two molecule types making up an enantiomer pair will smell the same or different [27], and this has implications for any model of signal transduction. We shall also discuss isotope effects [10], and the very striking observation that zinc nanoparticles available in the vicinity of the olfactory receptor can greatly enhance perceived odour intensity [28]. We note that often very subtle differences in molecular structure can drastically alter a scent, often in a surprising way, making scent prediction difficult. Evidence is emerging that, when shape information is combined with molecular vibrational data, good selectivity is possible [9]. For example, drosophila melanogaster can distinguish odorants by their molecular vibrations, and can even selectively avoid deuterated counterparts [10]. We note however, just as for shape, vibrations as a distinguishing characteristic alone is not enough. 
A swipe card model, going beyond the simpler lock and key ideas, can cover both requirements. Journey of the Odorant to the Receptor Smell is a process where we directly interact with the world. Once the odorant is inhaled, it is only a short journey for this molecule (M) to interact directly with our central nervous system. The first stage on this journey takes M to the olfactory mucus, the 10-40 µm thick covering of the olfactory epithelium [29]. The role of the olfactory mucus is not obvious, though it may simply moderate the concentration of odorants reaching the epithelium; it has even been suggested that the mucus serves as a separation column [30]. It has also been shown that diffusion of inhaled air towards the epithelium and its variable distribution inside the nasal cavity may be another way to differentiate scents before they hit the receptors [31]. The mucus layer contains odorant binding proteins (OBPs), small lipocalin carrier proteins whose role is unclear [32]. These OBPs have a high affinity for aldehydes and large fatty acids [33,34], so it seems likely their purpose is to assist transport of the largely hydrophobic odorants across this aqueous mucus layer to the epithelium. A further suggestion is that non-sensory respiratory cilia embedded in the nasal mucus aid odorant molecule transport. Also within this mucus layer reside biotransformation enzymes. The purpose of these enzymes is also as yet unclear. It is possible that they clear odorants away, or even metabolize them before they reach the receptor site [35]. It is usually assumed that the odorant is unaffected chemically on its journey to the receptor. However, comparisons of odours could well be affected even by small differences in metabolism, for instance from reaction rates depending on isotope, or chiral catalysts affecting enantiomers differently. Recognition of the Odorant and Signal Initiation Olfactory sensory neurons (OSNs) traverse the epithelium and project cilia that extend into the mucus. Each OSN type projects cilia containing one particular type of olfactory receptor (OR); the number of ORs at the cilia varies according to species. The odorant M meets the OR by passing the mucus layer interface and docking at a binding site in the protein. The main thrust of our paper will be what happens when the odorant M reaches the olfactory receptor: what are the primary activation events that lead to a signal being initiated and a G-protein released, which is the primary action of a GPCR? Signal Amplification and Processing Once activated, the OR releases subunits of G_olf in a series of local steps that are well understood [32,36,37]. The G_olf activates adenylyl cyclase III (AC), an enzyme which, in turn, drives an increase of the second messenger cyclic adenosine monophosphate (cAMP). Then cAMP binds to cyclic nucleotide-gated (CNG) cationic channels, opening these ion channels and resulting in an influx of Ca2+ and Na+ ions. This results in a depolarization of the OSN, with perhaps subsequent amplification steps (possibly by as much as 85% [36]). The axons of the OSNs project through the cribriform plate to the olfactory bulb (OB). In the bulb, neural axons route to structures called glomeruli [3], which are discrete loci on the olfactory bulb. For each type of OR, the location to which they extend in the brain is the same in all subjects.
One OSN expresses only one type of OR, and there is a direct, non-branching route from OSN to glomerulus type, which is referred to as zone-to-zone mapping [38]. The combinatorial pattern thus makes an impression on the brain which characterizes the smell of M. Functional magnetic resonance images can then reveal the regions of the brain activated by odorants [39]. There is evidence that the perception is at least partly a learned phenomenon [14]. However, this lies outside the scope of this paper, and is not discussed further. We make a clear distinction between the perception of smell and the depolarisation of a cell caused by OR activation: it is the latter that is our concern here.

Shape, Weak Bonds and Vibrations All Matter

All the current theories acknowledge that the odorant must fit within the receptor, and must remain there long enough for a signal to be generated. Thus, to some extent, shape and the weak interactions between the odorant and the receptor must matter. However, as we saw above, this is not enough. Turin's assertion is that vibrational modes of the odorant matter as well [8]. As is elaborated below, the conjecture is that the receptor exploits inelastic electron tunnelling to detect the molecule's vibration frequency. We note that vibrational frequencies are of course strongly dependent on the odorant geometry.

Inelastic Electron Tunnelling: Turin's Model

The physics of Turin's mechanism was assessed previously [20] and found to be consistent with other biophysical mechanisms. Here our aim is to confront theory with empirical observation, a more important test. Turin's model envisages two special points where the receptor and odorant M make contact: a donor D linked to a source of electrons, and an acceptor A linked to an electron sink. Because charge is moved, the electron transfer event from donor D to acceptor A will change the forces on M. This sudden change in force causes M to change vibrational state. Energy must be conserved overall. If the transfer is to occur as an inelastic tunnelling event, the electronic energy difference between the D and A sites must match the vibrational energy taken up by M. The transfer of the electron to A triggers a conformational change of the receptor, which produces the release of the α-subunit of a neighbouring G-protein (G); this, via the subsequent processes outlined above, initiates the large influx of Ca²⁺ ions into the cell, thus initiating a signal communicated by consequent firing of neurons to the brain. Figure 2 shows these first events at the ligand binding domain (LBD). The process just described, inelastic electron tunnelling, is very well established in inorganic systems. Its commonest form has tunnelling between metal junctions bridged by a single molecule [22] in an insulating gap. The molecule within the gap can be identified, and even its orientation revealed. But metal electrodes have a continuum of energies, so it is hard to resolve the weak inelastic transition superimposed on the dominant elastic transition to this continuum of states, even at the very low temperatures usually used. This problem is avoided in the system hypothesized for odorant recognition. In the olfactory receptor, the assumption is that D and A have discrete energies, and, to an extent that can be calculated [20] (see below), there is essentially no elastic transition; competition comes from weak transitions in which energy is taken up solely by host vibrations.
This is important: extracting a weak inelastic adjunct from a larger non-discriminating signal could be an unnecessarily noisy job for the brain, especially at ambient temperatures. With discrete initial and final states, the inelastic transition can be far better resolved. Of course, charge transport in the nose to D and from A must be inherently different in some ways, but this does not seem to raise any insuperable problems. For example, conducting polymer nanotubes (CPNTs) conjugated with human olfactory receptors have recently been created [40], which act as field-effect transistors (FETs). These biosensors are sensitive to a current increase upon odorant binding, even at low concentrations.

How the Charge Moves: Inter-Chain or Intra-Chain Charge Transfer

For a biological inelastic tunnelling process, two plausible routes can be proposed for the moving charge via the receptor helices. These are depicted in Figures 2 and 3. As usually described, the inelastic tunnelling transition takes the electron from donor D on one of the olfactory receptor's polypeptide chains through the odorant to an acceptor A on another polypeptide chain. This inter-strand picture (see Figure 2) was used in our earlier analysis. But it is not necessary that the electron passes through the molecule, the word "through" meaning that the odorant wavefunction is a significant part of the transition matrix element. A sudden change in electric field at the odorant is sufficient to cause it to change vibrational state. So an intra-strand charge transfer transition (see Figure 3) is a satisfactory alternative. We can show by explicit calculation (see Appendix A for simple analytical examples; more detailed results will be published separately) that the couplings between the odorant vibrations and the electron transition can easily be of the same size for both inter- and intra-chain charge transfers. Perhaps the main advantage of intra-chain charge transfer is that there is no need for any long-range motion of charge to re-set the donor and acceptor to their original states. It is possible, for example, that the original intra-chain electronic states will be recovered simply by the olfactant leaving the receptor, but that is pure speculation. In the case of inter-chain charge transfer, one might assume that this single-electron current is what starts the next stage in the series of local processes mentioned in Section 2.3. In the case of intra-chain charge transfer, there may be no current in the same sense, but there will be a significant, possibly short-lived, electric dipole moment. One conjecture might be that the electric field from this transient dipole initiates the next stage.

Time Scales Involved

Experimentally, olfaction occurs over milliseconds, which is decidedly slow compared with most processes at the molecular scale (see Table 1). In the model of Figure 2, the likely rate-determining steps involve transport of an electron to D or removal of an electron from A. For the receptor to operate, it requires an electron in the donor that is free to make the transition to the acceptor. Producing this initial state requires some input of energy, though this is not expected to be problematic, as voltages of order 0.5 V are certainly available in cells. The precise mechanisms are not known, but we can make some simple rough estimates for different options. First (Model 1), suppose that a charge $q$ must diffuse a typical distance $L$ with diffusion constant $D$ (related to the mobility by the Nernst-Einstein relation $\mu = qD/k_BT$).
Assuming there is no driving force (bias), the characteristic time will be given by $1/\tau_{X1} \sim D/L^2$. Secondly (Model 2), suppose the charge motion is diffusive, but biased by a field $U/L$; here $U$ may be an electrochemical potential. The drift velocity is $\mu E = \mu U/L$ and the distance to move is $L$, so $1/\tau_{X2} \sim (qD/k_BT)\,U/L^2$. With $q$ one electronic charge and $T$ = 300 K, the biased motion is faster for $U$ bigger than around 1/30 volt. With $D \sim 10^{-4}\ \mathrm{cm^2/s}$, typical of liquids, and $L \sim 100$ nm, one finds $D/L^2 \sim 10^6\ \mathrm{s^{-1}}$. $U$ may well be larger, say 0.5 V. Whilst these arguments are speculative, it is not unreasonable to expect characteristic times associated with charge transport to be of the order of a microsecond. Table 1 shows that, for the model of Figure 2, electron transport to the donor or from the acceptor could be relatively long, assuming incoherent hopping transfer of the electron at rates typical of other biological systems. No matter how fast the tunnelling event, the re-population of D (D must be replenished systematically) and the re-emptying of A put a bound on how many tunnelling events can occur in one receptor. In the intra-chain charge transfer of Figure 3, it is simply necessary for the charge transfer to be reversed, i.e., for the electron to return from A to D. This should not need transport over any extended distance, but may need the olfactant to leave the receptor. It is tempting to assume that only one electron can pass from the time the odorant enters the receptor until it leaves. However, we have no evidence on this point. It is certainly possible to devise models in which more than one electron would pass [20].

Table 1. Estimated timescales (time interval, estimate, description):
- $T_X$, 10 µs to 1 ms: the time taken for the reducing species X to diffuse through the cytoplasm.
- $T_I$, 1 µs to 1 ms: the time taken for charge injection into a helical backbone of the protein.
- The time taken for the charge to hop from RD on the helix to D (see Figure 2).
- 1 ns: the time taken for the electron to cross, elastically or inelastically, from D to A.
- $T_R$, 1 µs to 1 ms: the time taken for the charge to hop from A on the helix to RA (see Figure 2).

Which Electronic States are Important?

So far, we have simply observed that as an electron transfers from D to A, it alters the forces on M, so causing a change in the odorant's vibrational state. Such behaviour has parallels in many other solid state systems. Until more is certain about the receptor structure, doubts must remain as to precisely which groups D and A correspond to. We return to this question in Section 5.1. We have assumed that D and A are relatively localized, and that the odorant molecule M in the receptor is close to either D or A or, perhaps more probably, to both of them. The donor and acceptor species will have discrete energies, unlike the electrodes in most inorganic inelastic tunnelling experiments. Two distinct types of transition can be identified immediately. In one, there is a direct transition from D to A that is modulated by the presence of the odorant; this was the case considered in [20]. In the second type, there is an electron transition onto the odorant, followed by an incoherent second transfer to A. This second category, with electron transfer into the odorant molecule M (intra-molecular tunnelling [41,42]), requires available molecular orbitals of M close in energy to those of the donor and acceptor. Typically this requires re-hybridization between the adsorbed molecule and the receptor to make the energy differences suitably small [41,42].
We can rule out these ideas for OR activation, ultimately because the HOMO-LUMO gaps (between the highest occupied and lowest unoccupied molecular orbitals) of odorants are typically large, of order 10 eV. Thus we examined the case of extra-molecular tunnelling [20], electron transfer near the odorant molecule, which seems more probable. We shall discuss the possible natures of D and A later but, for the moment, assume that D and A can each be either occupied or unoccupied by an electron, and that the states (D occupied, A unoccupied) and (D unoccupied, A occupied) differ in energy by 0.1-0.2 eV. The odorant M may facilitate the transition (see Appendix B), but there is no need to assume a quasi-stationary state in which M hosts the electron for any length of time. We may consider here two possibilities for the electron tunnelling path: (1) the electron crosses between opposing helices (inter-helix crossing), possibly passing through M, which will affect the relevant tunnelling matrix element; (2) the electron moves within one helix (intra-helix crossing). The original model [20] assumed inter-helix crossing; there is little difference in the physics between the two cases (see above), but they have different implications in determining the chemical natures of the molecular units D and A, and just how they form part of the receptor structure. As noted, we regard D and A as localized, with their relevant electronic states compact compared with the distance between them. Sensible guesses from the distances between helices suggest a typical value of around 8 Å. This would be realistic if, as seems likely, these electron sources/sinks are amino acids, and if we compare with the distances between important residues in rhodopsin [43]. Site-directed mutagenesis studies [44,45] have determined that for odorant recognition in MOR-EG there are nine amino acids involved directly at the binding site, with Ser113 being a crucial H-bond donor for odorants bearing aliphatic alcohol groups. It is noteworthy that none of the nine is strongly conserved (see Figure 4 of [44]), and indeed some are at sites that are highly variable. Thus they can only be associated with binding, or with modifying the donor and acceptor characteristics; Katada et al. [44] associate them with binding. For electron transfer via the odorant, all that is needed is a combination of, firstly, the right quantized vibration frequencies and, secondly, a rapid electronic charge transfer. In the tunnelling transition between these two states, the key question becomes what impulse drives the vibrational excitation.

Odorant Vibrations

To activate the receptor, the odorant frequency has to stand out against the many background host modes, such as the C−H stretch vibrations that are abundant in the environment and occur at around 0.36 eV (2911.3 cm⁻¹) [46]. The couplings will depend on an effective charge and on the root mean square amplitude of vibration. We can estimate the root mean square (rms) atomic displacement of these stretches from the zero-point expression $x_{\mathrm{rms}} = \sqrt{\hbar/2M\omega}$, which, with $M$ the hydrogen (proton) mass and $\hbar\omega = 360$ meV, gives an rms amplitude of 0.076 Å. This is small, which is reassuring. Estimates of effective charges in the receptor environment are not simple. A potential problem is the abundance of C−H modes, even if each individually were weakly coupled. Turin [8] has suggested that frequencies around common C−H stretches are blind spots, for which the appropriate receptor type does not occur.
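As a minimal numerical cross-check of the rms amplitude quoted above (a sketch, not part of the original analysis), one can treat the C−H stretch as a harmonic mode of energy ħω = 0.36 eV in which effectively only the hydrogen moves:

```python
import math

HBAR = 1.054571817e-34   # J s
EV   = 1.602176634e-19   # J per eV
M_H  = 1.67262192e-27    # kg; proton mass (the carbon end barely moves)

def zero_point_rms(hbar_omega_eV, mass_kg):
    """rms displacement sqrt(hbar / (2 m omega)) of a harmonic mode
    in its ground state."""
    omega = hbar_omega_eV * EV / HBAR              # angular frequency, rad/s
    return math.sqrt(HBAR / (2.0 * mass_kg * omega))   # metres

print(f"{zero_point_rms(0.36, M_H) * 1e10:.3f} Angstrom")
# prints ~0.076 Angstrom, matching the value quoted in the text
```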
Odorant Vibration Coupling

The Huang-Rhys factor S is a measure of the coupling of the tunnelling transition to the vibrational mode of the olfactant M. To be more precise, it is a measure of the change in force experienced by the vibrational modes following the transfer of an electron from the donor to the acceptor site. It can be calculated using readily available electronic structure codes, and this can be done accurately for free odorant molecules. The same methods give vibrational mode frequencies. The Huang-Rhys factor thus allows us to make predictions for different odorants, based on the strength of this factor. The predicted values (to be discussed in a separate publication) do indeed lie in the useful range 0.05-0.3. The couplings are thus strong enough to be detectable by inelastic tunnelling, but not so strong that two- or three-phonon processes obscure discrimination.

Key Non-Radiative Transition Probabilities

We now estimate the key rates and probabilities for the inelastic tunnelling events. We shall need to calculate the relative rates of discriminating and non-discriminating tunnelling, since this determines whether a signal might be initiated that would allow the brain to distinguish odours. These rates are characterized by two timescales, $\tau_{T0}$ and $\tau_{T1}$, for transitions without and with excitation of the olfactant vibration, respectively (see Figure 4). Secondly, we need to estimate the spectral resolution that might be achieved. Thirdly, we need to make some estimate of limits, even crude, on the absolute rates (effectively, the absolute values of $\tau_{T0}$ and $\tau_{T1}$), so that we can verify that the timescales can be met. In these calculations, we shall make use of the large body of standard non-radiative transition theory. This theory takes various forms, including Huang-Rhys theory and Marcus theory, which have many elements in common but differ because of the specific cases at which they were first aimed. The standard elements of these theories include the idea of a configuration coordinate (reaction coordinate, see Figure 4), and the assumption of processes sufficiently slow that the usual perturbation approaches to transition probabilities (like the Fermi Golden Rule) can be used. We remark that, even in classical physics, the charge transfer from D to A would cause a change in force that would, in turn, change the vibrational state of the molecular oscillator. A quantum description makes two relatively simple changes. First, there is a more complete description of the charge transfer event, here a coherent event in which the electron loses energy and the molecule gains the same amount of vibrational energy. Secondly, the vibrational energy of the molecule can only change by discrete amounts, the vibrational quanta corresponding to its characteristic frequencies. In general, an odorant will have multiple vibrational modes but, for simplicity, we concentrate on the one mode presumed dominant in olfaction. At ambient temperatures, that mode will normally be in its ground state before excitation. For a typical odorant vibrational quantum of $\hbar\omega_0 = 0.2$ eV, the probability of that mode being in an excited state, $P_{ex} = \exp(-\hbar\omega_0/k_BT)$, is very small indeed, about 0.0006 at normal human body temperature ($k_BT = 0.027$ eV). An interesting case, previously alleged to prove vibration-based theories wrong, becomes pertinent here: the two isomers methyl cyanide (CH₃−C≡N) and methyl isocyanide (CH₃−N=C),
despite having very similar higher-frequency vibrational spectra, smell quite different. Wright's objection at the time was that "for quantum reasons, the vibrations in question [for smell] must have a rather low vibrational frequency and probably lie in the range 500 to 50 wave numbers" [47], and that differences in the lower-frequency region of the spectra explained the discrepancy in smell. We note here that, for this suggested refutation, the IR spectra do indeed differ [48], most notably in the 2,200 cm⁻¹ region, where methyl isocyanide exhibits much stronger IR absorbance; the case therefore fits comfortably within the predictions of the model used here, and it is not a refutation of a vibration-based model. The electron transfer between the initial state on D and the final state on A couples to the vibrations of the molecule M and of its environment. In effect, forces on the atoms change because charge has moved. The receptor environment will surely have some effect on the details of the vibrational modes of the molecule M, but we shall assume that the high-frequency modes associated with selectivity are not altered greatly from those of the free molecule; this assumption could be removed in larger calculations. The environment, including the "soft" floppy protein backbone fluctuations observed in protein dynamics, we take to be a collection of low-frequency oscillators that couple only very weakly to the mobile charge. This is a less accurate approximation for the amino acids near the donor and acceptor sites than for the remote regions [49]. The approximation can be improved once we are confident of the location of the donor and acceptor sites, and have reliable geometries for the receptors. We can then obtain the electron transfer rate, as in [20], using the standard theories of non-radiative transitions based on Fermi's golden rule [20,50-52]. The final expression involves an electronic matrix element (Appendix B) and factors that describe how readily the molecule and environment oscillators can take up energy. These factors are conveniently expressed in terms of a dimensionless Huang-Rhys factor S (the molecular relaxation energy for a mode divided by its vibrational energy quantum) for the molecular modes, and a reorganization energy for the environment modes. For inelastic tunnelling to be effective, the Huang-Rhys factor S for the molecule should lie roughly in the range 0.01 to 0.3. For S < 0.01, inelastic events will probably be too rare. For S > 0.3, multiphonon events will begin to be a problem. The environmental reorganization energy should also be small, so that the electron transfer is unlikely to be achieved using the softer environmental modes alone. It is the coupling to these environment modes that limits the spectral resolution of the inelastic tunnelling mechanism, through processes (for example) in which the vibrational energy needed, $\hbar\omega_0$, is the sum of a molecular mode energy $\hbar\omega_M$ and an environmental mode energy $\hbar\omega_e$. In at least some systems (Marcus, private communication) this environmental reorganization energy is very small, but calculations of a full molecule-plus-receptor system are desirable.

The Influence of the Environment

The environment of the odorant varies from air to wet mucus to a dry hydrophobic region. At the point of interest, within the binding domain embedded in the membrane and surrounded by hydrophobic phospholipids, the odorant is in a very dry environment.
That is, the vibrations around the odorant typically do not move charges (the important moving charges are only the atoms on the odorant and the itinerant electron). The inelastic tunnelling model is very sensitive to the reorganization energy of the environment, i.e., to the host vibration couplings λ. With the parameters chosen above, for values of λ below about 30 meV the inelastic channel dominates, but increasing λ to above 62 meV would mean that the olfactant environment plays a dominant role. In terms of Marcus theory, we must consider contributions to the reorganization energy from both inner-shell regions ($\lambda_i$) and outer-shell regions ($\lambda_o$) [25]. For the inner shell, we assume harmonic modes, giving a contribution $\lambda_i = \sum_\alpha S_\alpha\,\hbar\omega_\alpha$, where α runs over modes. As indicated above, most of these modes are soft, with low energies. There will be some modes with higher energies, like C−H vibrations, but all may be weakly coupled. For the outer-shell element $\lambda_o$ it is usual to use a continuum picture, estimated from the polarizability of the environment:
$$\lambda_o = (\Delta e)^2 \left( \frac{1}{2r_1} + \frac{1}{2r_2} - \frac{1}{d} \right) \left( \frac{1}{D_{op}} - \frac{1}{D_S} \right),$$
where $r_1$ and $r_2$ are characteristic radii of the donor and acceptor regions, $d$ their separation, $D_{op}$ is the square of the refractive index (the fast-response dielectric constant), $D_S$ is the static dielectric constant (the slow-response dielectric constant), and $\Delta e$ is the charge that is transferred. In essentially non-polar environments, such as the hydrophobic ligand binding domain, the charge transfer has little effect, and the outer-shell nuclei move very little. There is good reason to assume that these reorganization energies for the olfactory system will be small (see also the sections below). We note that the photosynthetic bacterium Rhodobacter capsulatus has values of λ below 30 meV at room temperature. These possible environmental vibrational excitations could be calculated once a detailed structure for the receptor system is known.

The Electronic Transition Matrix Element |t|

The electronic matrix element t for a non-radiative transition is never trivial to calculate accurately. This is especially true when overlaps are small, as here, where the electronic states of D and A have very small overlap. Quite possibly t would be negligible in the absence of the olfactant. Certainly t will be different when the odorant is present, partly from changes in geometry, but also because of extra terms in the wavefunction (cf. Appendix B). For a rough estimate, we might consider single molecular orbitals, giving an effective hopping energy of $t \approx v^2/(\varepsilon_M - \varepsilon_A)$, where $\varepsilon_M$ is the energy level of the relevant odorant orbital and $\varepsilon_A$ that of the acceptor, taking the appropriate highest occupied (HOMO) or lowest unoccupied (LUMO) molecular orbitals. For most olfactants, the HOMO-LUMO energy difference can be as big as 10 eV. The hopping integral v will not usually exceed 0.1 eV, as determined from the strength of hydrogen bonds between the donor, acceptor and molecule. Better knowledge of the atomic structures of likely D and A units would allow better estimates of this parameter. Thus, for instance, Newton et al. calculate the matrix element for Fe²⁺−Fe³⁺ electron transfer from the overlap of the orbitals of the two iron atoms [53]. However, we cannot usefully attempt such a calculation without better indications of what the important groups are and of their orientations. Again, we can compare our system with experimental data for C. vinosum, for which, when the experimental parameters are inserted into an equation similar to Equation (4) (with the same assumptions of non-adiabaticity and low temperatures), the matrix element obtained is 2.4 meV.
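A rough numerical sketch of this superexchange-style estimate may help fix the scale; the hopping integral v and the orbital energy gaps used below are assumed, illustrative values rather than measured ones:

```python
def effective_t(v_eV, gap_eV):
    """Effective D-A coupling t ~ v^2 / (eps_M - eps_A), with the odorant
    orbital acting only as a virtual intermediate state."""
    return v_eV ** 2 / gap_eV

for gap in (2.0, 5.0, 10.0):                        # assumed orbital gaps, eV
    print(f"gap {gap:4.1f} eV -> t ~ {effective_t(0.1, gap) * 1e3:.1f} meV")
# 5.0, 2.0 and 1.0 meV: the same few-meV scale as the C. vinosum comparison.
```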
This suggests our estimates have the right order of magnitude [53]. Fortunately, the proportions of transitions with and without olfactant vibrational excitation is essentially independent of t. Discriminating between Molecules The rate equation for tunnelling with or without olfactant vibrational excitation [20], can be summarized as: where n = 0 for the non-discriminating channel when the olfactant is not excited (or the wrong olfactant is blocking the electron route) and all energy is taken up by host vibrations. For the discriminating channel, where the olfactant takes up this energy, n = 1. Typical values of the important parameters, given in the Table 2, indicate τ T 0 ∼ 87ns and τ T 1 ∼ 0.15ns. These satisfy the condition that τ T 1 ≪ τ T 0 by a substantial margin. The discriminating inelastic channel dominates the tunnelling between these states with discrete energies. This is, of course, the opposite of what is found for inelastic tunnelling involving metal electrodes with their continua of initial or final electronic states. Table 2. Estimated values for the parameters needed to compute τ T 0 and τ T 1 [20]. Note here we use S = 0.1, which is more realistic than our previous S = 0.01. We discuss below the likely sensitivities of the various parameter values, see section below Perhaps the least certain parameter is the reorganization energy, λ, associated with the environment. If the environment were strongly coupled, the environment modes could take up most of the electronic energy and discrimination based on the olfactant mode would be ineffective. In Figure 5 we show a plot of the characteristic times for the channels with and without olfactant vibrational excitation as a function of λ. The figure shows that tunnelling primarily mediated by environment modes would dominate for values of λ > 62meV . Weak coupling to the host modes is crucial. Without a very detailed receptor structure, it is hard to check this further, but there are systems for which reorganization energies are in an acceptable range (Marcus, private communication). Challenging Cases The previous sections examined the basic physics of inelastic electron tunnelling as a potential critical step in olfaction. Even though the underlying physics appears viable, with credible parameters [20], the ultimate test is experiment. So how well do these ideas fit the observed phenomena of olfaction? We have chosen a set of examples that might challenge the role of inelastic electron tunnelling. From these, shown in Tables 3-5, we analyze and address implications. In several cases, further experiments are suggested. Isotopes Any isotope (see Figure 6) dependence of scent is inconsistent with standard notions of discrimination due to shape. However, humans and drosophila can indeed discriminate between isotopes in some cases, and drosophila can be trained to respond in a way that illustrates this [10]. Certainly there have been experiments that gave no evidence for an isotope effect [63], and the effect is relatively subtle. But the picture emerging leaves little doubt that there is an isotope effect [64]. It is possible to invoke special effects, such as isotope-dependent chemical reactions en route to the receptor, but the obvious effect of the isotopic mass difference is on the vibrational spectrum. Isotope dependence gives an opportunity to measure the sensitivity to the energy separation of D and A for those responsive receptors, since the model parameters t and S remain essentially the same. 
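That sensitivity can be explored directly with the rate expression above. The following sketch is illustrative only: the parameter values (|t| = 1 meV, S = 0.1, λ = 30 meV, ħω₀ = ε_D − ε_A = 0.2 eV, k_BT = 0.027 eV) are representative choices in the spirit of Table 2, with |t| an assumption consistent with the meV-scale estimates earlier; with these values the quoted τ_T0 and τ_T1 are recovered, and the same function can be used to see how the discriminating channel responds when the D−A separation or the odorant quantum is shifted, for example by isotopic substitution.

```python
import math

HBAR = 6.582119569e-16   # eV s
KT   = 0.027             # eV, k_B T near body temperature

def tunnelling_rate(n, t=1e-3, S=0.1, lam=0.030, eps=0.200, hw0=0.200):
    """Golden-rule rate 1/tau_Tn (s^-1) for a D->A transfer that deposits
    n quanta hw0 in the odorant mode; eps = eps_D - eps_A (all in eV)."""
    franck = (S ** n) * math.exp(-S) / math.factorial(n)
    width  = math.sqrt(4.0 * math.pi * lam * KT)
    gauss  = math.exp(-((eps - n * hw0 - lam) ** 2) / (4.0 * lam * KT))
    return (2.0 * math.pi / HBAR) * t ** 2 * franck * gauss / width

tau0 = 1.0 / tunnelling_rate(0)    # non-discriminating channel
tau1 = 1.0 / tunnelling_rate(1)    # discriminating channel
print(f"tau_T0 ~ {tau0 * 1e9:.0f} ns, tau_T1 ~ {tau1 * 1e9:.2f} ns")
# ~87 ns and ~0.15 ns with these assumed values, as quoted in the text.
```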
We can predict the isotopic change in vibrational frequency ∆δ and see how that relates to observed discrimination and non-discrimination. Density Functional Theory (DFT) computations using a B3LYP functional and the basis set 6-311+G(d,p) for acetophenone show that the largest shift in the IR spectra occurs towards the higher frequency end. For a simple oscillator in which only the proton moves, the frequency for a deuteron would be smaller by a factor of 1 − 1/ √ 2 ≈ 0.29, or about 800 cm −1 for these modes. Assuming this cluster of modes is significant in olfaction, the shift ∆δ ∼ 800cm −1 should readily suffice to disengage activation for at least one receptor. Possibly even a much smaller shift would suffice. Such psychophysical tests on humans and behaviour studies on drosophila can be strengthened with discrimination at the glomerular level (as opposed to the perception level, which is sometimes contentious [58]) by using calcium imaging [59]. This class of experiment could definitively establish discrimination and would be a desirable next step to establish the phenomenon of isotope discrimination at the receptor level free from ambiguity. This strongly indicates that vibrational modes are "smelt". The notion that smell can be learnt (by training) indicates that there are innate abilities that are not employed unless it is necessary. Evolution has resulted in a diminishing sense of smell for humans where the sense is not as relied upon as it once was. This is indicated by the presence of 1,000 olfactory genes, only~390 of which are still functional, and 462 are pseudo-genes). The two sulphur compounds will possess near identical partition coefficient in the mucus and likely reach the same receptors and have similar interactions at the binding site. This indicates the two molecules must differ in some other action at the site in order for there to be such a discrepancy in threshold. A possible alternative is that somehow after G-protein release the signal is amplified or reduced. But what feature of the odorant would tell the receptor to do this? Antagonists Eugenol (EG) and methyl isoeugenol (MIEG). Antagonism can occur at: the receptor level, the second messenger transduction level or at the membrane current level. In these studies ratiofluorometric studies were done to measure Ca 2+ influx and so indicate antagonism at the first step. Thus, olfactory receptors can be extremely sensitive and selective. Table 4. Interesting examples continued. Examples Observations Conclusions Smell the same. Hydrogen sulphide has the typical sulphuraceous smell and decaborane (though it contains no sulphur) also shares a "boiled onion, SH smell" [8]. Though no two odorants smell exactly the same (and have the same combinatorial code expressed to the glomeruli) there is some degree of overlap here. This poses the question: what causes two elementally and structurally very different molecules to activate some receptors in common? Smell different. The only difference between these two examples is the metal ion in the centre of the structure. Something other than shape differentiates these two. Ambergris smells "oceanic" at low concentration and "Rotting" at high concentration [58]. Similar discrimination has also be seen at the receptor [2] and the glomerular level [59] for a range of other odorants. At high enough concentrations receptors are recruited, where they otherwise would not be activated. 
- Conformationally mobile enantiomers: from a previous study [27] it has been found that mirror-image molecules with 6-membered-ring flexibility always smell different in their enantiomeric forms. This conformational mobility introduces an asymmetry in which one hand is enabled to activate the receptor and the other hand is frustrated.
- Conformationally immobile enantiomers, (RSRS)-tetrahydronootkatone and (SRSR)-tetrahydronootkatone: both smell "dusty-woody, fresh, green, sour, spicy, herbal, slightly fruity, animal, erogenic" [62]. From a previous study [27] it has been found that mirror-image molecules with a constrained 6-membered ring (a rigid molecule) always smell the same in their enantiomeric forms. This conformational rigidity reduces the asymmetry, rendering the molecules superimposable from the receptor's point of view.
- Steroids/pheromones, 5α-androst-16-en-3-one: the compound "smells strongly and disagreeably to about one third of people, one third smell it well, but do not describe it as particularly unpleasant; while one third cannot smell it" [55]. The "natural" steroid has a smell (according to a percentage of the population), implying that some people do not have the equipment or ability to smell 5α-androst-16-en-3-one, or have not been trained to detect it.

Structurally Similar Odorants with Different Thresholds

Examples where similar odorants have differing thresholds highlight the difference between affinity and efficacy. In olfaction, we define the propensity with which the odorant populates a receptor as affinity (typically binding), and the propensity with which it activates the receptor once there as efficacy (typically actuation), though the distinction is not as clear-cut as it might seem [65]. Affinity may include the rate of diffusion to particular receptor sites, the odorant's ability to cross into the hydrophobic domain, and its ability to make the necessary contacts once there and "bind". Affinity may also include how long the odorant remains within the ligand binding domain, and this time may impact on efficacy. One possible explanation for the differences observed in Figure 7 is given by Zarzo [66], who notes that the presence of P-450 enzymes may cause differing interactions with odorants and hence altered responses at the affinity level. Although efficacy and affinity cannot be treated as entirely independent, one can make a cautious separation of the two. Odorants that have very similar structures will usually have similar affinity for a given receptor site. However, there can still be differences in efficacy within our swipe card model. These could come from differences in t and/or S, and so might underlie the differences in perceived odour. For example, a receptor tuned to detect the S−H stretch will receive quite different signals when the S−H axis is differently oriented within the receptor. Odorants with very similar affinities provide good examples with which to measure Huang-Rhys factors S and compare them with observed efficacy, to see if actuation may be accurately described by our simple formulae.

Antagonists

Oka et al. [56] state emphatically that their own results, and those in previous reports, clearly demonstrate that "antagonists tend to be structurally related to the agonists, as is often the case for other GPCRs".
This conclusion may not be consistent with shape-based theory: the receptors would have to be incredibly sensitive to respond, always, in different ways to very similar ligands (there would have to be as many receptor types as there are smellable molecules). It could, however, be consistent with a swipe-card model, including cases where odorants can be differentiated spectrally. Comparing eugenol (EG) and methyl isoeugenol (MIEG) in Figure 8, we might surmise that it is the difference between the OH and O−CH₃ modes that accounts for the difference in their perceived odours. However, examining the whole set from this study reveals that other mOR-EG receptor agonists, such as methyl eugenol (MEG), do not possess the OH stretch either, though perhaps they do have a more suitable shape to fit the receptor site and the D/A contacts. These results strongly suggest that what is needed is a combination of suitable parameters δ, t and S. Both the fit and the correct vibrations are necessary. Oka et al. [56] have provided a set of odorants with varying levels of antagonism (and perhaps thus efficacy). Again, a good model for S may be able to account for any trends in Ca²⁺ response. (Figure 8: eugenol (EG, left) is an agonist of the olfactory receptor MOR-EG, methyl isoeugenol (MIEG, middle) is an antagonist, and methyl eugenol (MEG, right) is an agonist.) Antagonists remain puzzling in some cases. For example, Drosophila avoids CO₂ whilst suppressing this aversion when it is associated with food sources, in which case odorants present in such foods directly inhibit the CO₂-sensitive neurons in the antenna. Some such odorants have been identified [67]. These antagonists include 2,3-butanedione ((CH₃CO)₂), butanal (CH₃(CH₂)₂CHO), pentanal (CH₃(CH₂)₃CHO), and hexanol (CH₃(CH₂)₅OH). They do not resemble CO₂ in shape, and hexanol lacks a C=O unit. As regards vibrations, one interesting, if puzzling, feature is that all of these molecules have a vibration with frequency close to that of the infrared-inactive symmetric stretch of CO₂.

Smell the Same but Have Different Structures

Similarity of smell suggests that both molecules shown in Figure 9 activate at least one receptor in common. The commonality in the cases shown may be that they share similar vibrations at around 2,600 cm⁻¹. Thus, assuming that a receptor with tuning δ corresponds to sulphuraceousness, we can use this example as a model of a sufficient combination of δ, t and S in decaborane, although it is not the endogenous ligand (we assume that hydrogen sulphide is). Calculations using the B3LYP functional and the 6-31G** basis set indicate that the difference between the H₂S symmetric stretch and the strongest IR-absorbing B−H stretch in decaborane (around the 2,600 cm⁻¹ region) is around 25 cm⁻¹ (a result that verges on the limits of accuracy for small energy-difference calculations in DFT). We nonetheless surmise that the sulphur receptor must have a detection range of at least this amount, and so could be tuned to 2,600 ± 25 cm⁻¹, for example. By modelling the "perfect" sulphuraceous receptor we could predict, by again calculating S, when odorants do or do not smell sulphurous, particularly in surprising examples (p-menth-1-ene-8-thiol smells predominantly of grapefruit; see Figure 7). Determination of a human olfactory receptor responsible for "sulphur" detection would provide a good starting model against which to compare and contrast this example and the sulphur-containing examples in Figure 7.
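As a minimal sketch of what such a tuning range implies in the present model (the parameter values and the identification of the relevant stretches are assumptions for illustration, not results from the original analysis), the discriminating-channel rate from the expression given earlier barely changes when the receptor tuning and the odorant mode differ by 25 cm⁻¹, so a receptor tuned near 2,600 cm⁻¹ would be driven at almost the same rate by the two molecules:

```python
import math

HBAR = 6.582119569e-16    # eV s
KT   = 0.027              # eV
CM1  = 1.239842e-4        # eV per cm^-1

def discriminating_rate(hw_mode, eps, t=1e-3, S=0.1, lam=0.030):
    """n = 1 channel rate for a receptor with D-A gap eps (eV) and an
    odorant mode of energy hw_mode (eV); illustrative parameters."""
    width = math.sqrt(4.0 * math.pi * lam * KT)
    gauss = math.exp(-((eps - hw_mode - lam) ** 2) / (4.0 * lam * KT))
    return (2.0 * math.pi / HBAR) * t ** 2 * S * math.exp(-S) * gauss / width

eps  = 2600.0 * CM1                                  # receptor tuned near 2,600 cm^-1
r_sh = discriminating_rate(2600.0 * CM1, eps)        # assumed S-H stretch
r_bh = discriminating_rate(2625.0 * CM1, eps)        # B-H stretch, 25 cm^-1 away
print(f"rate(B-H) / rate(S-H) ~ {r_bh / r_sh:.2f}")  # ~0.94
```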
Smell Different with the Same Structures

Figure 10 shows some striking exceptions to shape theories. Ferrocene and nickelocene smell different, yet they appear to have similar shapes and probably similar tunnelling matrix elements. Presumably, one molecule fails to activate at least one receptor that the other activates. Their different vibrational spectra might explain the difference, and this can be estimated (see Appendix C). This should guide us to the difference that renders one odorant undetectable. We know from other systems (e.g., hydrogen sulphide and decaborane) that a difference of less than ∼25 cm⁻¹ is unlikely to be detected, so presumably there must be a larger difference between the vibrational quanta of the key modes of ferrocene and nickelocene. Cases where the structures are the same raise some broader issues. In the present case, the similarities of shape and interactions of ferrocene and nickelocene mean that they are likely to spend similar times in any receptor. Even when one has the wrong frequency, there could also be a weak contribution from tunnelling made possible by environment modes alone. Will there still be a different signal for that one receptor that will impact on the perceived scent? The point is that we have a series of steps: an electron must become available in D, a tunnelling transition must occur, and the electron must be removed from A. If the electron transfers into D or from A are slower than the tunnelling transition with only environmental modes contributing, then the signals could be the same even when the olfactant vibrations differ. See Section 5.2 for a further discussion of the effect of timescales. Another interesting example within this category arises from the work of Saito et al. [68], who measured mouse receptor-level responses to a plethora of odorant stimuli. Contrasting the similarly shaped 1-octanol and octanethiol, for example, one observes that both activate the same number of receptors, but only two in common. Furthermore, they activate these receptors with varying EC50 values. This demonstrates that similarly shaped odorants can smell different in character (according to "combinatorial coding", a single receptor difference can alter the smell signal) and are also likely to differ in odour threshold [66]. This indicates that vibrational analysis may explain the different receptor types activated, whereas the differing EC50 values may arise from the differing affinities of the O−H moiety versus the S−H moiety present in the molecules.

Smell Alters with Increasing Concentration

The distinction between the affinity of an olfactant for a particular receptor and its efficacy, determined by the signals initiated to the brain, can be crucial. One underlying question is whether the receptors are binary, having only on and off states. Rhodopsin receptors are known to be binary, i.e., to have on or off states only. But in the case of the β₂-adrenergic receptor (one of the better characterized GPCRs), dopamine (a weak partial agonist) is just as efficacious as isoproterenol (a full agonist) in disrupting what appears to be the molecular switch, yet this is not enough to induce the full activation of the receptor seen in the case of isoproterenol [69].
Further, it has been shown [69] using fluorescence resonance energy transfer (FRET) that, for the bimane-tryptophan quenching system, different types of agonists induce different types of conformational states, an observation which contradicts the binary proposition: the ligands do not simply modulate the equilibrium between an active "on" and an inactive "off" state, but produce many degrees in between. Receptors are not always binary. But are olfactory receptors binary? In a swipe card model like the inelastic tunnelling model, there are still some important features that we cannot yet decide. When an odorant binds to a receptor, can more than one electron tunnel, limited only by electron supply to D or removal from A? Or does the odorant need to leave and be replaced before the next electron can contribute to the signal? The concentration dependence of odour therefore introduces an extra degree of complexity. If the olfactory receptors were binary, the potency of an odorant's signal could be directly attributed to the number of receptors occupied by odorant molecules. The potency would thus vary linearly with concentration, at least at low concentrations. In olfaction, this is notoriously not the case: in many cases, the higher the concentration, the more likely an odorant is to change its character [59,70,71], which implies that, at saturation, certain "wrong" odorants are likely to find their way into an olfactory receptor and, whilst they may fit and bind inefficiently, they still activate the olfactory receptor to a degree. The change of smell with increasing concentration suggests that, as absolute receptor saturation is approached, some odorants can activate non-parent receptors. Receptors that are unimportant at low concentrations become significant when some other receptors are saturated (see Figure 11).

Conformationally Mobile Enantiomers

Enantiomers, chiral molecules M with left- and right-handed mirror-image forms, should all smell different in the simplest shape-based theories. More sophisticated (but less predictive) shape-based ideas argue that smell is combinatorial, and that parts of the odorant are detected by particular receptors [2]; this is also known as the odotope theory. Even so, it becomes hard to understand how enantiomers have different odours if only functional groups are detected by individual receptors: any chirality is lost, and all enantiomers would smell the same. In the simplest frequency-based models, since left- and right-handed variants have exactly the same frequencies, all enantiomers should smell the same. However, as we have been emphasizing, other factors influence the response of the (chiral) olfactory receptors: it is not just the frequencies that matter, but also their couplings to the electron transition, and the matrix element determining that transition. In a swipe card model, it is these extra factors that are critical in deciding whether chirality matters. Chiral molecules, as mirror images (see Figure 12), will have the same frequencies, and need the same δ in a receptor. However, in any given receptor, t and S will differ since, as shown in the figure, the two cases are not superimposable. The issue of superimposability fits well with the general swipe card approach. We have shown previously [27] that these ideas can indeed predict whether enantiomers will be differentiated. These ideas suggest that chiral molecules will be distinguishable to some extent.
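A minimal sketch of how this could play out follows; the values of S and t assigned to the two hands are purely illustrative assumptions, not derived from any calculation. If all other factors are equal, the discriminating channel scales as |t|² S e⁻ˢ, so modest differences in coupling already give a severalfold difference in how strongly the receptor is driven.

```python
import math

def actuation_strength(S, t_meV):
    """Relative strength of the one-quantum (discriminating) channel,
    proportional to |t|^2 * S * exp(-S); other factors assumed equal."""
    return (t_meV ** 2) * S * math.exp(-S)

left  = actuation_strength(S=0.10, t_meV=1.0)    # one hand in the receptor
right = actuation_strength(S=0.05, t_meV=0.7)    # its mirror image
print(f"mirror / original ~ {right / left:.2f}")  # ~0.26
```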
Receptors are clearly very sensitive to structural variations, and any change in stereochemistry would affect actuation. Enantiomers have mirror-image conformations that will asymmetrically activate the same chiral receptor, and any other conformational freedoms would exacerbate this. (Figure 12: two nootkatones; the (4R,4aS,6R)-(+)-enantiomer (left) smells of grapefruit (odour threshold 0.8 ppm) and its mirror image is "woody, spicy" (threshold 600 ppm) [60]. Note also that the (+)-enantiomer is around 750 times more potent an odorant than the (−)-enantiomer [61].) Usually, flexibility is said to aid receptor actuation, as if affinity were the only consideration. This seems not to be the case in olfaction, where conformational mobility can be either an aid or a hindrance to receptor actuation. It is usually assumed that flexibility, in the sense of adapting to a binding pocket, is tantamount to agonistic behaviour. The phenomenon of flexible yet distinguishable enantiomers, however, highlights the importance of efficacy and affinity in combination for actuation. This might be associated with features well known in non-radiative transition studies, but not normally considered in the biological context. For instance, we have concentrated on what, for non-radiative transitions, is the accepting mode; this takes up the energy in the non-radiative transition (and is the olfactant mode in our previous discussion). But other motions can affect the transition matrix element, and may enhance the transition; such modes are known as promoting modes. Promoting modes will have different symmetries, and may have substantial effects on t, again needing a more careful analysis than is usually found.

Conformationally Immobile Enantiomers

Whilst it is very rare that enantiomers smell exactly the same, both in intensity and in character, those that do share two common features. First, they have just one osmophoric group, a region of interesting electronegativity and superimposability. Secondly, they are not conformationally mobile. This is seen in the example in Figure 13, where any 6-membered-ring flexibility is constrained [20]. One simple way to test for true type 1 behaviour (enantiomers that smell identical) is to smell a racemic mixture of the optical isomers: if the component parts are identical, then a mixture of the two must in turn be identical. (Figure 13: type 1; the tetrahydronootkatones smell "dusty-woody, fresh, green, sour, spicy, herbal, slightly fruity, animal, erogenic"; on the left is (4R,4aS,6R,8aS)-(+)-tetrahydronootkatone and on the right its mirror image, (4S,4aR,6S,8aR)-(−)-tetrahydronootkatone.)

Steroids/pheromones

Why, in the case of 5α-androst-16-en-3-one (as in Figure 1), can less than 100% of the population detect the naturally occurring steroid? It is not difficult to believe that some people might miss one particular olfactory receptor, but smell is generally combinatorial, so a whole set of olfactory receptors would need to be missing [2]. However, as indicated earlier, pheromones are expected to behave differently from other olfactants, with one receptor responding to one ligand through a lock-and-key mechanism. This is probably what is happening here. We note that detecting steroids would need a class of receptors with larger than average binding sites; steroid and hormone molecules typically have ∼55 atoms, whereas odorants are generally smaller, 3-20 atoms. Thus, steroid and hormone receptors might work differently from those involved in smell (indeed, there may be crossover with the vomeronasal region).
Donor and Acceptor Specifications

For an inelastic tunnelling mechanism to work, the molecular units D and A have to satisfy certain important constraints. Just what D and A are is not clear. They are probably common units among the likely receptor structures. They must be able to occur in two charge states, which we might call full and empty (so the transition takes D(full)A(empty) to D(empty)A(full)), though that is possible for many molecular units. Transition metals, often found in living systems, are among the species that can occur in several charge states. The D and A units must be able to revert to their original states many times, i.e., D and A should not be destroyed in the olfaction process. It must be possible to feed an electron into D and remove an electron from A (inter-chain model), or to return the electron to D (intra-chain model). To detect odorants within milliseconds, and even though tunnelling via an odorant can be much faster, the replenishment of D and A should take place within a millisecond, but not longer. Whilst that is not a strong constraint as regards timescale, it does require other reactions outside the receptor to maintain the electrochemical equilibria that drive these motions. We note also that D and A must have sharp energy levels, which means that only weak interactions can be present to cause broadening. This is consistent with our calculated results, where all relevant interactions appear weak. Perhaps the strongest constraint on D and A is the need for a small energy splitting $\varepsilon_D - \varepsilon_A$ that corresponds to the small (but typical) vibrational quantum $\hbar\omega_0$. Most olfactants M and many possible molecular units of the receptor are closed-shell systems, and the gap between the highest occupied (HOMO) and lowest unoccupied (LUMO) levels is two orders of magnitude too large. Electron transfer from the HOMO of one unit to the LUMO of another is ruled out by their large energy difference, perhaps even 10 eV. One simple and general way round this problem is to assume that D and A are essentially the same molecular units, differing only slightly in geometry or because of slightly different units to which they are bound. We conjecture that likely donor/acceptor candidates are amino acid residues, perhaps of the same type, such as tryptophan (Trp). If, however, D and A are essentially identical (subject to the minor differences already mentioned), for example two tryptophan residues (there are tryptophans that are highly conserved), we can imagine a suitably small splitting. As a hypothetical example, if D and A differed in energy solely because of a single proton charge placed asymmetrically at 5 Å from D and 4 Å from A, this charge would then cause an energy separation $e^2/\varepsilon R_A - e^2/\varepsilon R_D \sim 0.72/\varepsilon$ eV which, for a dielectric constant ε = 3, is about 0.24 eV, corresponding to 1935 cm⁻¹. Whilst the lack of detailed receptor structural information means we cannot be too precise, it does make sense to suppose that D and A are typical units to be found in most, if not all, of the receptor types, and that subtle modification by surrounding residues provides the fine tuning to different olfactant phonons. This would reconcile nicely with the observation that, across OR types, amino acids on helices 4 and 5 are highly variable (the moderating residues) and those on helix 7 are highly conserved (the staple residues D/A) [72]. We cannot be more precise without further experimental structural information and, in view of the considerable disagreements about odour receptors [73], we can make only very tentative observations.
First, there are some common units, such as tryptophan, OH or SH groups, that might deviate in energy by a small amount because of surrounding charge. Others have observed [74] that N-ethylmaleimide (NEM) reacts with the sulfhydryl groups in olfactory receptors, rendering them irreversibly inactive; this strongly suggests that SH groups (perhaps in cysteine residues) might play a key role, possibly as D/A units. Secondly, there is evidence that very potent-smelling odorants also bind strongly to zinc [8], and that a zinc deficiency results in anosmia that is reversible upon supplementing the diet. Conceivably Zn²⁺ or Cu²⁺ are components of electron donors, although it is possible that their role involves protein structural stabilization rather than redox chemistry. The observation that zinc nanoparticles (but not zinc ions) can enhance the sensitivity of smell also suggests another role for zinc, perhaps as a source of electrons. Thirdly, the importance of NADPH for GPCR functioning has recently been emphasized and investigated [75]; odorant binding proteins, too, have a role that is not yet defined. One might conjecture that they are involved somehow in the donation or recycling of electrons. Finally, we still do not know whether D and A are situated on two adjacent helices (inter-helix tunnelling) or on a single helix (intra-helix tunnelling, see Section 3.3). This raises the possibility that a bridge, like a disulphide bridge between two cysteine residues on one helix (with −S−S− and −SH HS− oxidation states), is a component of D or A, as has been postulated before [8]. For inter-helix tunnelling, we should ask: what supplies the donor with its electron and removes it from the acceptor? We have assumed that there is some electrochemical reaction, or reactions, that can achieve this, though it is perhaps not obvious in the olfactory biology what this source is. There are several possible explanations. One possibility is that odorant binding at the receptor site provides the energy and electrochemical requirements to prime D and A.

Timescales Depend on the Full System

The brain distinguishes odorants by using information from receptors. Communication is achieved via influxes of ions triggered by activated receptors, with the information somehow encoded in the times between subsequent influxes. How does the brain distinguish the influxes produced when receptors are activated by odorants from the occasional activation of receptors by other molecules? There will be a small tunnelling rate even for an empty receptor, which presumably gives some background noise that the brain can filter out. But, in the inelastic tunnelling picture, are the tunnelling rates for the right odorant M ($1/\tau_M$, say) and for the wrong odorant W ($1/\tau_W$, say) sufficiently different? And how do these characteristic times $\tau_M$, $\tau_W$ compare with the other times for steps in the overall process? We know, for instance, that odours can be detected in a time of perhaps a millisecond. Since this time involves the transfer of information from the receptor and its interpretation in the brain, we should probably imagine events at the receptor itself taking perhaps a tenth of a millisecond. One can imagine several different situations. One possibility is that the receptor itself inhibits signals from wrong molecules, perhaps because the molecule is resident for too short a time, or because there are competing processes we have yet to identify.
Or, more generally, the brain could ignore signals below some threshold current, i.e., fewer than some critical number of activations in a given interval. Thus Crick [76] discusses attentional mechanisms for vision, describing the possible "correlated firing" of neurons, and saying that "spikes arriving at a neuron at the same time will produce a larger effect than the same number of spikes arriving at different times". For olfaction, the spikes arriving at effectively the same time might correspond to a number of receptors activated in a period of less than, or of the order of, a millisecond. The inelastic tunnelling rates we calculated previously were much faster, corresponding to a characteristic time of the order of nanoseconds. Our earlier calculations suggested that even the characteristic times $\tau_W$ for non-discriminating transitions were significantly shorter than a millisecond. We now offer several ways in which this apparent contradiction can be resolved. If the donor D and acceptor A could indeed be restored in times less than milliseconds, the shorter timescale for the right molecule ($\tau_M \ll \tau_W$) would be reflected directly in more influxes during the period over which the neuron integrates, producing a greater average current. That option seems more likely for intra-protein transfers (Figure 3) than for inter-protein transfers. If that were correct, the right molecule in a receptor could initiate several ionic influxes in each period of residence, and these would become integrated into a single event by the brain. Now suppose instead that a receptor cannot send more than one signal in a millisecond, perhaps because of the slowness of the processes that ensure the donor D contains an electron and the acceptor A is empty. Then both the right (discriminating) molecule M and the wrong (non-discriminating) molecule W would cause a single influx in the integration time, and the brain would regard them as equivalent. Where might these assumptions be wrong? One possibility is that the tunnelling rates are really much slower, so only the right molecules M are effective, even on the millisecond timescale. A second possibility is that we need to examine not just the tunnelling event in isolation, but the whole sequence of events from the arrival of the electron at the donor to the docking and departure of the olfactant from the receptor. Could the tunnelling rates be significantly lower, yet still leave inelastic tunnelling a viable process? In our estimates, we used an extension of Marcus theory, involving a reorganization energy that is the first moment of the line shape function. This reorganization energy brings together all the couplings to modes at any one single frequency into a single mode, by a linear transformation that is general for a harmonic system. In its usual form, there is the further assumption of a configuration coordinate that gathers modes of all frequencies into an effective environment coordinate with just one frequency. In the case of olfaction, as here, and in other cases of very weak coupling, this second assumption is not essential and can be avoided. For olfaction, the important requirement is that the discrimination should not be limited by two-phonon processes. With a one-phonon process, there is the potential for successful discriminating detection of odour. Two-phonon processes and beyond introduce weaker, slower signals that obscure discrimination. For such multiphonon processes, the modes can be those of the odorant or of the environment.
From the standard extension of Huang-Rhys theory [52] in the very weak coupling limit, the "right" transition has a probability proportional to the Huang-Rhys factor S M for the discriminating mode. The competing "wrong" transition probability would be a two phonon transition, where two modes accept energy, and the corresponding factor is ½ S'S", where S' and S" are the Huang-Rhys factors for these modes. Since values of S are in the range 0.01-0.3, this second probability could easily be smaller by a factor 100-500, or even more, since the phonon energy could be from environment modes that are less well coupled. Should the receptor be empty, the electronic transition matrix element would be reduced by a further factor. For example, a factor of order 30 (readily possible from simple models), would reduce the non-discriminating rate by a further factor of about 1000. So it seems possible that the non-discriminating transitions are weaker by a large enough margin to be at the level of noise. We now consider the interplay between the electron transfer process and other processes to which it is coupled. There are two types we have investigated, both involving a race between two processes: in the first, there is a straight race between one process leading to signalling and another that frustrates it; in the second there is a race in which the competing process delays the signalling, but does not frustrate it. In our model of the first type of interaction (the competing process frustrates signalling) we assume that, when the "right" odorant is in the receptor and when there is an electron in the donor D, then there is a constant probability that inelastic tunnelling occurs with a characteristic time τ M or τ W , depending on which molecule is present; as before, τ M ≪ τ W . We now also recognise that this key tunnelling process has competition. For instance, it might be prevented altogether if the electron on the donor D returns to the reservoir from which it came, or if the olfactant molecule leaves the receptor, or some further competing process. Suppose this competing process has a constant probability characterised by time τ R , largely independent of the odorant, but characteristic of the receptor and perhaps of the electronic reservoir that supplies electrons to D in the inter-protein case. In simple terms, there is a finite window of opportunity of order τ R . If this window is long enough for discriminating transitions (characteristic time τ M ) then the odorant will indeed initiate a signal to the brain. If the time τ R is short enough relative to τ W , the characteristic time for non-discriminating transitions, then the wrong molecules will give signals only at the level of noise. We can readily calculate the ratio of successful odorant events leading to influxes for "right" molecules M and "wrong" molecules W as [τ R + τ W ]/[τ R + τ M ]. This ratio can be quite substantial: with a short window of opportunity, only the fast process will matter. A description of our model of the second type of interaction (the competing process delays signalling) will be presented in a future publication. The main result is that the electron transition rates seen by the brain for both the "right" and "wrong" molecules get substantially reduced relative to the actual transition rates from D to A. This is because the electron can only reach A from D, but spends most of its time elsewhere. 
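The interplay described here is easy to make concrete with order-of-magnitude numbers. The short sketch below (ours, not from the paper) evaluates the event ratio [τR + τW]/[τR + τM] from the window-of-opportunity model, and compares one- and two-phonon probabilities using the quoted range of Huang-Rhys factors; all numerical values are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): orders of magnitude for the
# "window of opportunity" model and for one- vs two-phonon transitions.
# All numerical values below are assumptions chosen for illustration only.

def success_ratio(tau_R, tau_M, tau_W):
    """Ratio of signalling events for 'right' (M) vs 'wrong' (W) odorants,
    [tau_R + tau_W] / [tau_R + tau_M], as given in the text."""
    return (tau_R + tau_W) / (tau_R + tau_M)

# Assumed characteristic times (seconds): discriminating tunnelling ~ns,
# non-discriminating much slower, window of opportunity in between.
tau_M = 1e-9      # right odorant
tau_W = 1e-6      # wrong odorant (e.g. weaker, multiphonon-assisted)
tau_R = 1e-7      # competing process that closes the window

print(f"M/W event ratio: {success_ratio(tau_R, tau_M, tau_W):.1f}")

# One-phonon vs two-phonon probabilities in the very weak coupling limit:
# P1 ~ S_M, P2 ~ 0.5 * S1 * S2 (Huang-Rhys factors, typically 0.01-0.3).
S_M, S1, S2 = 0.1, 0.05, 0.05
print(f"one-phonon / two-phonon: {S_M / (0.5 * S1 * S2):.0f}")
```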
Thus, if the electron does not make a successful transition to A during one visit to D, the receptor has to wait for the return of the electron to D before another attempt is possible. The revised transition times are then τ′M ≈ τM τD/τR and τ′W ≈ τW τD/τR, where τD is the time associated with getting an electron to D, which might be much longer than the time needed to get an electron from D to A, and τR is the time taken for the electron to leave D by a competing route. Consequently the difference in time between signals (τ′M − τ′W = (τM − τW)τD/τR) could be much larger than the difference in the electron transfer times (τM − τW) if τD ≫ τR. Summary The development of the swipe card paradigm introduces a new and in many ways more satisfactory way of describing olfactory signal transduction. It gives a framework in which to evaluate critically theories like Turin's, and to identify key questions. Does a receptor measure a single electron crossing (Figure 2), several electrons crossing, or even none (Figure 3)? What is the nature of the olfactory G protein and how in turn does it propagate odorant-dependent information? How many G-proteins are released? What are the turn-off mechanisms and the timescales by which olfaction may be limited? What are good candidates for the donor and acceptor? Where does the supply of electrons come from? What happens to the odorant once it is smelled? Can we detect the odorant in a metabolized or even excited state? There remains a wealth of opportunities for future research, notably experimental. The field is seriously limited by a lack of careful odorant physiological tests. Elimination of trace impurities is crucial (our noses can detect 1 part in a billion), as is the determination of non-subjective descriptors of odorant response. Compromise on these two considerations can lead to conflicting results [54,63]. To avoid dispute, olfaction and its dependence on odorant vibrational modes should be tested in double-blind trials with samples that are at least gas-chromatograph pure. Only with the fullest care could one answer with confidence whether humans detect isotopic changes. Simple racemic mixtures can test discrimination of enantiomers: odorants that truly smell the same will do so when mixed together. There are also plenty of more general experiments, less laden with transduction hypotheses, that can be conducted. To test more general olfactory receptor characteristics such as conformational changes, an arsenal of techniques could be implemented to provide definitive answers: site-directed spin labelling, site-directed fluorescence quenching, sulfhydryl accessibility, disulphide cross-linking, and related spin-labelling studies. Questions could include: Are the receptors binary, as in rhodopsin? Or do they possess many degrees of actuation, as in recent discoveries [77] for the β2-adrenergic receptor? Given recent warnings on the amount of conflicting data analysis ubiquitous in olfaction (as it was for vision years ago) [73], we must be careful not to assume too much. The biophysical characterization of the olfactory receptors is an integral next step in developing any theory. Most conclusions on the processes in olfactory signalling are based on sequence homology analysis that compares olfactory receptors to bovine rhodopsin, which, whilst helpful, may lead to assumptions that olfaction always works in similar ways. Yet the specialization of the olfactory class is still to be established; the ORs may be an entirely different GPCR class.
Given the challenging cases above, they certainly seem to be overwhelmingly discerning. Given 50% of pharmaceuticals are targeted at GPCR's, there is not inconsiderable interest in this area [77] and the holy grail of X-ray crystallization and structural identification for olfactory receptors would mark considerable progress. That said, for serious modelling of D, A and tunnelling, positions of atoms to 0.1-0.2Å are needed, well beyond the best current data that resolves at best to 2Å. Since this is an order of magnitude too inaccurate for tunnelling rates, we may not be able to confirm too much, but there may at least be validation of what the predicted D/A units are. Further, the importance of a dynamic picture of the receptor is rapidly emerging [78,79], so the determination of a functioning coordinate frame would be a fundamental first step towards implementing molecular dynamics calculations to better understand the fluctuating world of receptor and ligand. One criticism levelled against a non-standard theory of olfaction, like Turin's, is that nature reuses mechanisms that work, so one should expect many GPCRs to use electron transfer; this is not believed to be the case. However, even though many GPCRs might use electron transfer, it might not be an optimal solution in general. Olfaction is a special case in which receptors are most useful when promiscuous, i.e., an organism does not know in advance what chemical species it will encounter in its environment. Effectiveness in this situation is greatly aided by using molecular vibration frequencies to identify molecules, since it makes it possible to recognise small chemical groups from which molecules are composed. By contrast, many receptors are finely tuned to one or very few molecules, in which case shape alone may be superior. However, even when high chemical specificity is demanded, something additional to shape alone may be necessary: for example, it would appear that shape alone is inadequate for steroids. The Future Prospects of the Turin Theory of Olfaction We have given a critical analysis and review of a model for olfaction, attempting to address directly its main challenges. Alongside the shape-based lock and key ideas, the role of molecular vibrations has been around for many years, but only in 1996 was a specific mechanism for signal transduction proposed. Most of our present discussion of the swipe card model has concentrated on Turin's specific proposal that molecular vibrations provide the information for activation, and that the relevant vibration frequency is recognized by inelastic tunnelling. This proposal plus simple shape constraints, we believe, goes a long way towards understanding how odorants activate olfactory receptors. We have previously shown the ideas to be sound as regards physics, and need only realistic values of key parameters. In our present paper, we have chosen as many distinct examples as possible that confront this theory. To the extent that the data are good, Turin's proposal of vibration frequencies monitored by inelastic electron tunnelling stands up well. It cannot be the whole story, since such further factors as conformational mobility have a role. Nonetheless, the vibration frequency is a crucial part that can dominate smell, and the swipe card description appears to be a more useful paradigm than lock and key. A shape-based theory cannot provide a full description of signal signatures, where the swipe card paradigm can. 
We know, for instance, that conformational mobility correlates with different odours for enantiomers [20]. We know that the receptor itself will be undergoing larger length-scale motions, as observed in protein studies. We can suspect that other dynamical aspects, from promoting modes to stochastic resonance, may have roles. Nonetheless, some form of inelastic electronic process seems a fully viable and important part of our sense of smell. No theory of olfaction can hope to be comprehensive until at least two experimental developments have been achieved. First, there is a very clear need for further careful olfaction experiments, using at least gas chromatograph pure samples in double blind or similar quality tests. Secondly, we need a detailed structure of the olfactory receptor, good enough to define or dismiss particular ideas for the atomic-scale processes. There are many more questions to be answered, and the field invites many interesting experimental studies. The Rise of Quantum Biology One important consequence of going beyond discrimination based on shape alone is that quantum phenomena become much more evident. Shape, of course, already invokes implicitly the quantum nature of chemical bonding. Inelastic electron transitions of the sort discussed here involve a coherent quantum electron transfer event. Using vibrational frequencies as a discriminant relies on the quantum behaviour of the odorant vibrational modes, since energy can only be given to an oscillator in units of its vibrational quantum. There may be other quantum aspects, such as the role of zero-point motion, but these are not evident at this stage. The lock and key paradigm was one of the earliest attempts to rationalize remarkably selective responses to different molecules. For large molecules, it is still a key concept. For small molecules, the underlying idea that shape is the sole critical factor fails badly. The swipe card paradigm, whether at this stage definitive as a model or not, introduces perhaps more productive ways of thinking that confront interesting observations in nature. For this reason alone it has the power to eliminate thinking based on theories that do not work and that road-block progress. Shape is not necessarily the actuating factor in smell; we must determine what factors are for reasons of phenomenological interest, but also because it is possible that the mechanisms and underlying processes of olfaction have parallels in the operation of a range of receptors activated by small molecules such as neurotransmitters, hormones, steroids, and so on. Since this is clearly a possible mechanism, surely nature and evolution would have used it somewhere! The details will never be precisely the same, but there is a clear grand challenge in the understanding of the responses of receptors to small molecules and linking them to their biomedical impacts. A. Simple Huang-Rhys Factor Model The Huang-Rhys factor gives a measure of the coupling of the olfactant to the rapid movement of electron charge from donor D to acceptor A. We have done detailed calculations using density functional theory, and these will be reported in another paper. Here we describe the calculation for a simple model system, since this identifies some of the key dependencies in an analytical calculation. Assume the olfactant is represented by a charge q bound harmonically to the centre of a cavity and able to move along the x axis. The harmonic vibration frequency is ω, and the force constant K = Mω 2 . 
We further assume that the electronic charge moves from one point D to another point A. We can choose these points to correspond to the inter-molecular (Figure 2) or intra-molecular (Figure 3) tunnelling cases. What we can estimate is the Huang-Rhys factor S as a function of the relative orientation of the jump path and the oscillator axis, and we can also examine the dependence on the dipole moment and on the vibrational frequency. The simplest calculation focuses on the change in the projection of the electric field along the oscillator axis as the electron moves from D to A. In Huang-Rhys theory, this transition corresponds to the switching on of a constant force that here we can represent as an electric field having a projection E in the x direction. The new force on the oscillator charge is F = Eq. In consequence, the oscillator now moves about the mean position X = F/K = Eq/K, so there is a change in mean dipole qX = Eq²/K. The relaxation energy that is relevant for the Huang-Rhys factor calculation is R = F²/2K = E²q²/2K. The Huang-Rhys factor itself is S = R/ℏω, or (E²q²/2K)/ℏω. Since the force constant is K = Mω², we obtain S = E²q²/(2Mℏω³). It then remains to calculate E for given positions of D and A, and also to decide what effective mass M and charge q are appropriate. This result is consistent with the more formal theory of infrared absorption (cf. Section 11.9.2 of [51]). The effective charges need careful discussion in any realistic case (see, e.g., [80]) but need not concern us so much in a model calculation.
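The closed form S = E²q²/(2Mℏω³) is simple enough to evaluate numerically. The sketch below is a minimal illustration of that formula; the effective charge, mass, field projection and mode frequency are our own assumed values, not parameters taken from the paper or from the density functional calculations it mentions.

```python
# Minimal numerical sketch of the simple model in Appendix A:
# S = E^2 q^2 / (2 M hbar omega^3) for a charge q bound harmonically
# along x, when the D -> A jump switches on a field projection E.
# The parameter values are illustrative assumptions, not values from the paper.

HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C
AMU = 1.66053906660e-27     # kg
C_LIGHT_CM = 2.99792458e10  # cm/s, to convert wavenumbers to rad/s

def huang_rhys(E_field, q, mass, omega):
    """S = E^2 q^2 / (2 M hbar omega^3), all quantities in SI units."""
    return (E_field * q) ** 2 / (2.0 * mass * HBAR * omega ** 3)

# Assumptions: effective charge ~0.3 e, reduced mass ~1 amu,
# a mode at 2500 cm^-1, and a field projection of ~1e9 V/m at the odorant.
omega = 2.0 * 3.141592653589793 * C_LIGHT_CM * 2500.0   # rad/s
S = huang_rhys(E_field=1e9, q=0.3 * E_CHARGE, mass=1.0 * AMU, omega=omega)
print(f"Huang-Rhys factor S ~ {S:.2e}")
```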
22,944.6
2012-11-12T00:00:00.000
[ "Physics", "Chemistry" ]
A review of bush dog Speothos venaticus ( Lund , 1842 ) ( Carnivora , Canidae ) occurrences in Paraná state , subtropical Brazil We report six new occurrence records of the bush dog Speothos venaticus, a widely distributed South American carnivore that is threatened with extinction. These records are accompanied by notes on the places where the records were made, such as vegetation type, date and information about the protection of areas. The records, obtained over the last 17 years in Paraná state, southern Brazil, offer an improved understanding of the species geographic range and the threats it faces and can enable better assessments of the conservation status of the species in southern Brazil. Introduction The bush dog, Speothos venaticus (Lund, 1842), is a small canid widely distributed in South America.It ranges from Panama (Central America) in the north to northeastern Argentina and Paraguay in the south; and also occurs in Colombia, Venezuela, the Guianas, Brazil, and eastern Bolivia and Peru (Alderton, 1994).Its unique features give it the most distinctive appearance of all canids: small size, elongated body, small eyes, short snout, short tail, short legs, and small and rounded ears, in addition to gregarious and diurnal behavior (Beisiegel and Zuercher, 2005;Busto and Pérez, 1998;Alderton, 1994;Sheldon, 1992). Considered naturally rare and difficult to see within its geographic range (Emmons and Feer, 1997;Beisiegel and Zuercher, 2005), the species has been recorded occasionally in South America, as summarized by Defler (1986) for Colombia; Beccaceci (1994) for Paraguay; Strahl et al. (1992) for Venezuela; Barnett et al. (2001) for Guyana; and Aquino and Puertas (1997) for the Peruvian Amazon.DeMatteo and Loiselle (2008) reviewed what was known about the species' distribution in order to assess its overlap with protected areas and to recommend strategies for its conservation.In Brazil the species has been recorded at a few sites in Goiás by Silveira et al. (1998); in the Amazonian region by Silva-Júnior and Soares (1999); in the Pantanal by Rodrigues et al. (2002); in the Atlantic Forest of São Paulo state by Beisiegel and Ades (2004); in Pará, Tocantins, and Maranhão by Oliveira (2009); in Mato Grosso by Michalski (2010); in Mato Grosso do Sul by Teribele et al. (2012); and in Paraná by Fusco and Ingberman (2012). 
Difficulties to obtain information on the species' ecology, distribution and behavior has generated significant interest in the species and led to research focused on its conservation.For example, Peres (1991) reported observations on bush dog hunting behavior in Amazonia.Busto and Pérez (1998) reported new insights into the species' behavior in the wild, based on observations of five individuals captured in the headwaters of the Acaray River (Paraguay) and raised in captivity at the Itaipu Wildlife Research Center.They reported that the animals, two parents and three pups, were captured in dense forest from a den constructed under the trunk of a canafístula tree (Peltophorum dubium).The den had a striking system of subterranean rooms with multiple exits that were small and well-concealed by vegetation.Other recent studies have focused on the natural history of the species in the Atlantic Forest (Beisiegel, 1999;Beisiegel and Ades, 2002); on methods of attracting bush dogs for research in Paraguay's Mbaracayú Reserve (DeMatteo et al., 2004); on radio-telemetry with captive animals (DeMatteo and Kochanny, 2004); on diet and habitat in eastern Paraguay (Zuercher et al., 2005); and on home range, activity patterns, and habitat selection (Lima et al., 2012).A review of existing information on bush dogs carried out by canid specialists was recently compiled by DeMatteo (2008). The red list of globally threatened species classified Speothos venaticus as Vulnerable in 1996, but demoted the species to Near Threatened in 2008.There was no explanation for the change, which was unexpected given that the species' population trend is declining and the threats it faces increasing (IUCN, 2012a, b). S. venaticus is listed in Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which protects the animal species most severely threatened by hunting at the international level.It is worth noting that just one other canid (gray wolf, Canis lupus) is listed on CITES Appendix I, which protects several other notoriously threatened species, including giant panda (Ailuropoda melanoleuca), red panda (Ailurus fulgens), elephants (Elephas maximus and Loxodonta africana), gorillas (Gorilla spp.), chimpanzees (Pan spp.), and Tasmanian wolf (Thylacinus cynocephalus), which is possibly extinct (CITES, 2012). In Brazil, state-level red lists suggest a more critical status for bush dogs.The species is considered likely extinct in Minas Gerais (Machado et al., 2005), despite a recent record in the state after 170 years without a sighting (Braga, 2012).In Rio de Janeiro and São Paulo bush dogs are classified as Critically Endangered (Bergallo et al., 2000;São Paulo, 1998), while in Paraná (Braga et al., 2010) and Rio Grande do Sul (Fontana et al., 2003) they are considered Vulnerable, in line with the national red list status (Chiarello et al., 2008).The Paraná red list offers no information regarding the species apart from noting that " [...] given the limited number of confirmed records in the state, maintaining the national-level classification is recommended [...]" (Braga et al., 2010, p. 71). In order to inform more detailed analyses of the species' geographical distribution and the design of effective conservation measures, we compiled records and information on occurrence localities over the last 17 years in Paraná state, subtropical Brazil.Based on this new information, we argue that an updated threat status is needed for Speothos venaticus in Paraná state. 
Methods Information on the species was compiled from several field studies over the last two decades in Paraná state, via direct observations of the species and its sign in the wild.Field observations were complemented with searches of collection records in museums and the scientific literature, and via interviews with trusted sources who are familiar with the species and have recorded it recently. Only records from Paraná state (southern Brazil) over the last 20 years were included in the study.Each record included geographical coordinates (degrees-minutes-seconds) and the most complete possible details, such as time of day, site characteristics, number of individuals, and other complementary information.Vegetation at the occurrence localities was classified following the system of Veloso et al. (1991).Threat categories discussed here are those used by the International Union for the Conservation of Nature and Natural Resources (IUCN, 2012a, b). ArcView10 software was used to map the records onto a layer of the Phytogeographic Map of Paraná (1989/1990) obtained from Maack/ITCG/IBGE. Results and Discussion The current literature contains very little information on bush dog occurrence in southern Brazil (e.g., Beisiegel and Zuercher, 2005;DeMatteo and Loiselle, 2008).Distribution maps of the species are based on confirmed occurrence records, more generalized estimated range maps like those of Eisenberg (1989), Redford andEisenberg (1992, 1999), and Emmons and Feer (1997), and a compilation of interviews with researchers working throughout the area where bush dogs are likely to occur.At the continental scale, the maps with the highest resolution are those of Beisiegel and Zuercher (2005), Zuercher et al. (2004), and DeMatteo & Loiselle (2008).At the regional level, Oliveira (2009) has provided precise occurrence records for Speothos venaticus in northern Brazil, and Michalski (2010) has done the same for some records in Mato Grosso state. Despite this striking lack of information regarding the species' occurrence in southern Brazil, the first documented record in the region was made in 1911 by the naturalist Hermann von Iheringi, who described the species Speothos wingei with the type locality of "Santa Catarina".Cabrera (1958Cabrera ( , 1960) ) did not accept Speothos wingei as valid, considering it one of two subspecies: Speothos venaticus venaticus and Speothos venaticus wingei.Wilson and Reeder (2005) subsequently treated wingei as one of three subspecies: S. venaticus venaticus, S. venaticus panamensis, and S. venaticus wingei.Vieira (1946) attributed Icticyon venaticus, the specimen von Ihering used to describe the new species, to the locality Colônia Hansa in Santa Catarina.This specimen and one other from the same locality are deposited in the mammal collection of the Museum of Zoology of the University of São Paulo, under N os 2864 and 2902, respectively.Vieira (1946, p. 157) described the fur of the Speothos wingei holotype as "[...] almost albino, with a very light yellowish-brown color on nearly the entire body, except the legs and tail.".According to the author, the other specimen from the same locality was identical to a specimen he examined from São João da Boa Vista, São Paulo state. Eighty-five years after the first record was made in Santa Catarina, Cimardi (1996) reported a second: near Sassafrás Biological Reserve, in the municipality of Mafra, in the northern portion of the state.Few additional details were provided. 
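Since each record is stored as degrees-minutes-seconds coordinates and then mapped, a small conversion to decimal degrees is typically needed before plotting. The following sketch is a generic illustration of that step, not the authors' ArcView workflow, and the example coordinate is hypothetical rather than one of the records in Table 1.

```python
# A small helper of the kind used before plotting occurrence records:
# converting degrees-minutes-seconds (DMS) coordinates to decimal degrees.
# This is a generic sketch, not the authors' workflow; the example
# coordinate below is hypothetical, not one of the records in Table 1.

def dms_to_decimal(degrees: int, minutes: int, seconds: float,
                   hemisphere: str) -> float:
    """Convert a DMS coordinate to signed decimal degrees.
    hemisphere is one of 'N', 'S', 'E', 'W'."""
    value = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere.upper() in ("S", "W") else value

# Hypothetical location in Paraná (southern hemisphere, west of Greenwich)
lat = dms_to_decimal(25, 30, 0.0, "S")
lon = dms_to_decimal(50, 15, 0.0, "W")
print(f"decimal degrees: ({lat:.4f}, {lon:.4f})")   # (-25.5000, -50.2500)
```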
We compiled eight records of bush dog in Paraná state over the last 17 years, as described below and illustrated in Figure 1. The geographic coordinates of these records are given in Table 1 and plotted in the map in Figure 1. These records represent a marked increase in the number of confirmed occurrences of the species in southern Brazil. (1) Iguaçu National Park, Foz do Iguaçu: a record made by Peter Crawshaw in 1995, during his studies of Leopardus pardalis and Panthera onca ecology in Foz do Iguaçu and in Argentina (Crawshaw, 1995). Reports by local residents regarding the species' presence were confirmed via an analysis of the bile acids in scats, which matched the standard profile for the species (Leite Pitman, pers. comm.). The area is protected by Iguaçu National Park, most of which is covered by Seasonal Semideciduous Forest. The park measures 185,262 ha and is contiguous with the 67,720 ha Argentinian park of the same name. (2) Serra dos Castelhanos, Guaratuba: a record made by biologists Liliani Marília Tiepolo and Alexandre Lorenzetto in 1999 between the Atlantic Forest mountain range and coastal lowlands, via interviews with a local resident. The resident described the species and reported having killed two "cachorro-pitoco" that attacked the chickens on his property. Vegetation in the area is Dense Montane, Submontane, and Lowland Ombrophilous Forest. The site is inside the Guaratuba Environmental Protection Area, a 199,596 ha sustainable use protected area, and in the immediate vicinity of Saint-Hilaire/Lange National Park, a 25,000 ha strictly protected area. (3) Estrada do Cerne (Cerne Road), Castro: this record is based on a stuffed specimen deposited in the Museum of Zoology of Ponta Grossa State University. The specimen was identified by Liliani Marília Tiepolo in 2003 and the record published by Zanon et al. (2003). According to the authors, the animal was killed by a hunter in the 1980s on Estrada do Cerne, between Castro and the town of Abapan, was collected by workers of the Lands and Cartography Institute (now ITCG), and was donated to Ponta Grossa State University, where it was stuffed and used as a teaching aid. Vegetation in the region is a mosaic of [...]. (4) Rio Cachoeira Nature Reserve, Antonina: tracks of the species were observed by Juliana Quadros in 2005 (on the Arapongas Trail) and 2007 (on the Ferro Trail) during field work in the reserve. Measuring 8,600 ha, the reserve protects Atlantic Forest in a variety of successional stages, including Alluvial Ombrophilous Forest, Dense Submontane Ombrophilous Forest, Dense Lowland Ombrophilous Forest, River and Ocean-influenced Pioneer Formations, as well as disturbed areas where former pasture is returning to forest. Fusco and Ingberman (2012) also recorded bush dog tracks at this locality. (5) Nova Fronteira Farm, Guaraqueçaba: three individuals were seen by ornithologists Fernando Costa Straube and Leonardo Deconto on 19 October 2011, at approximately 9:45 AM on the secondary road leading to Salto Morato Nature Reserve. The observers were able to observe the animals from a distance of 8-10 m and confirmed all of the characteristic features of the species: small size, short legs and tail, broad head, and small eyes. They also emphasized the rusty reddish color of the back of the animals' necks. The locality is at 22 m elevation and within the Nova Fronteira Farm (a privately owned ranch), approximately 1.6 km from the Guaraqueçaba River and 3.0 km from the southeastern border of Salto Morato Nature Reserve. The area is forested, with pioneer tree species along the forest edges and mature forest in its center. Near this site is the 2,340-ha Salto Morato Nature Reserve, which forms part of the 282,444-ha Guaraqueçaba Environmental Protection Area. Vegetation at Salto Morato consists of Alluvial Ombrophilous Forest, Dense Montane and Submontane Ombrophilous Forest, and River-influenced Pioneer Formations. (6) Serra do Itaqui Nature Reserve: two records of adult individuals were obtained with a camera trap on two consecutive days in August 2011 by Fusco and Ingberman (2012). The reserve covers 6,650 ha, ranges in elevation from 0 to 800 m, and is covered in Dense Ombrophilous Forest. The reserve forms part of a block of continuous forest within the Guaraqueçaba Environmental Protection Area, a sustainable use protected area comprising private properties and public and private strictly protected areas.
(7) Morro da Mina Nature Reserve: two adults were photographed with a camera trap in October 2011 by Fusco and Ingberman (2012).The 3,300 ha reserve is located in the town of Antonina, in coastal Paraná, and covered in Dense Ombrophilous Forest. (8) Sengés: two adult individuals were observed by geologist Márcio Kazubek at approximately 12:30, February 2011.The animals were on an infrequently used road in well-preserved gallery forest with Araucaria angustifolia trees, interspersed with natural fields and steep escarpments dominated by Pinus sp.plantations.The observer described the two animals as small with stubby tails, stocky bodies, large heads, dark brown coloration in the posterior portion of the body, and more reddish coloration on the head.The site is close to the old Fazenda Morungava, and contains a mosaic of soybean plantations, pastures, Pinus sp.plantations, and natural vegetation that remains connected to some degree.Human population density in the area is low, due to the rugged terrain of the Devonian Escarpment.The observer also mentioned that several species of mammals were commonly observed in the region, and emphasized that the two individuals he saw were not tayras (Eira barbara), which can be confused with bush dogs because of their similar size and coloration. One other putative record of S. venaticus in Paraná was discarded after a review of the information.The report was based on two bush dogs, male and female, that arrived at the Curitiba Zoo between 1986 and 1987.The animals were found in the Boqueirão neighborhood of the city (where the zoo is located), after children were reported throwing rocks at them.The person who rescued the animals (Francisco Cominese), however, revealed a detail that was not in the original record: the animals were inside a broken box that had been discarded in the street.One of the animals was outside the box and being harassed by the children, while the other was inside.The box was full of old scat, which suggests that the animals had traveled a long distance in it.Both animals were tame and allowed themselves to be petted, which is typical of animals raised in captivity.One of these individuals is now deposited in the Capão da Imbuia Natural History Museum.Given the uncertainty regarding the original locality, the record was excluded from our database. These records increase the number of localities where S. venaticus have been recorded in Paraná State from four (Mangini et al., 2009;Fusco and Ingberman, 2012) to eight.They also show that the species is occasional in Paraná in both forested and non-forested areas (i.e., open areas interspersed with croplands, Cerrado, and Araucaria forest), as illustrated by the records from Castro, Ponta Grossa, and Sengés.Likewise, these records show that S. venaticus travels during the day through disturbed areas close to densely populated towns and that it is hunted, as illustrated by the records from Serra dos Castelhanos and the Cerne Road. Of the records described in this paper, four are in protected areas or in the vicinity of protected areas.This reflects the important role these areas play in the conservation of threatened fauna.A high priority for research is now to determine the population densities of bush dogs at these localities. 
Speothos venaticus has been classified as threatened on the Paraná state red list since it was first published in 1995. In the first edition of the red list, Margarido (1995) considered the species Endangered, but noted the lack of official occurrence records in Paraná and cited the state of Santa Catarina as the southernmost extent of its range. In the second edition of the red list, Margarido and Braga (2004) classified the species as Critically Endangered, based on the only two records known at that time (Crawshaw, 1995; Zanon et al., 2003). In light of the scarcity of state-level records, in the list of Braga et al. (2010) the threat category was changed to Vulnerable, in line with the species' status at the national level. Given the new records described here, the infrequency of bush dog sightings, the degree of fragmentation and degradation of natural landscapes in Paraná, the growth in industrial monoculture agriculture in the state and the recent changes imposed to the Forest [...]
Table 1. Localities of Speothos venaticus in Paraná state.
Figure 1. Map of vegetation in Paraná state, southern Brazil, showing the occurrence record localities of Speothos venaticus.
3,945.2
2016-03-22T00:00:00.000
[ "Biology", "Environmental Science" ]
Analysis and Application of a Novel Layered Bidirectional Equalizer for Series Connected Battery Strings To eliminate the influence of the inconsistency on the cycle life and the available capacity of the battery pack, and improve the balancing speed, a novel inductor-based layered bidirectional equalizer (IBLBE) is proposed. The equalizer is composed of the bottom balancing circuits and the upper balancing circuits, and the two layer circuits both consist of a plurality of balancing sub-circuits, which allow the dynamic adjustment of equalization path and equalization threshold. The battery string is modularized by layered balancing circuits to realize fast active equalization, especially for long battery strings. By controlling the bottom balancing circuits, the individual cells can be balanced in each module. At the same time, the equalization between battery modules can be realized by controlling the upper balancing circuits. Simulation and experimental results demonstrate that the proposed equalizer can achieve fast active equalization for a long battery string, and has the characteristics of multi balancing path, large balancing current and high accuracy. The advantages of the proposed equalizer are further verified by a comparison with existing active equalizer. Introduction Since the individual cell has limited voltage and capacity, power batteries require a variety of series or parallel combinations to achieve the voltage or capacity level for various applications [1] [2]. However, due to the manufacturing inconsistency and unique performance characteristic of every single cell, the cells in series connected may suffer from serious imbalance between cell voltages or state of charge (SOCs) after several charging of discharging cycles, which can cause some of the cells to be overcharged or over discharged.This degrades the performance of battery strings, shortens its lifetime, and even poses a safety hazard (e.g., explosion or fire.etc.).Equalization for battery strings could be realized to prevent these phenomena and prolong the battery strings lifetime. Numerous equalization topology and balancing method have been proposed and they are well summarized in [1] [6] [16].The equalization topologies can be categorized as passive equalization and active equalization.The passive methods remove the excess energy from the fully charged cells through dissipative elements [3] [4]. The passive equalization is the most straightforward and the cheapest one.However, the excess energy is converted into heat rather than be stored, which leads to the energy waste and thermal management problems. The active cell balancing methods transfer energy from higher energy individual cell to lower energy cells.These active equalization circuits can generally be categorized as capacitors, converters or transformers circuits, each of which has its own advantages and disadvantages in terms of speed, accuracy, cost, and efficiency. 
Daowd [1], Ye [5], Shang [6], and others have proposed several equalization circuit topologies based on switched-capacitors or LC quasi-resonant circuits, which charge or discharge capacitors to realize energy transfer.These equalization circuits can realize zero-voltage gap.However, these circuits suffer from a longer balancing time, especially with a small difference in cells voltages.Transformer-type equalization circuits have the basic structure of a fly-back transformer [1] [14], which can be divided into a variety of types, such as single magnetic cores, multiple magnetic cores, single vice sides, and multiple vice sides.Transformer-type equalization circuits have the characteristics of high level of integration and high balancing speed but poor expandability and large transformer magnetic flux leakage. Finally, converter-type equalization circuits use DC-DC converters for energy transfer, including buck, boost and Ćuk.The equalization circuits proposed by Lee [7], Yarlagadda [8], Phung [9], Lu [10], Chen [11], and Guo [12] belong to this type.Converter equalization circuits can realize bidirectional energy flow with higher balancing efficiency, but they often require a complex switch array and a precise control algorithm.Two of the most common inductor-based equalization circuits are the adjacent equalizer proposed in [9] and the single switched inductor equalizer proposed in [8].As shown in figure 1(a), the inductor-based adjacent equalizer (IBAE) senses the voltage difference of the two neighboring cells, and transfers energy from the higher one to the lower one.Multiple balancing sub-circuits can operate at the same time, but it takes a relatively long time for transferring energy from the first one to the last one especially in the long battery string.Figure .1(b) presents the single switched inductor equalizer (SSIE).The SSIE can achieve the direct cell-to-cell energy transfer between any two cells in the battery string.However, this equalizer has a complex switch matrix which needs to operate 4 switches in each switching period, and it takes a long time to serve all cells in a long battery string with only one balancing bypass.In consideration of the advantages and disadvantages of the equalization methods discussed above, a novel inductor-based layered bidirectional equalizer (IBLBE) with the advantages of more balancing path and large balancing current is proposed.The main idea of the IBLBE is modularizing the battery string with layered balancing circuits based on power inductors to realize fast active equalization, especially for long battery strings.The IBLBE reduces the charging current of the higher energy cells in the charging process, and decreases the discharging current of the lower energy cells in the discharging process.The proposed equalizer realizes dynamic equalization and preventing over-charge and over-discharge. In this paper, the IBLBE is proposed and analyzed.Its circuit structure and equalization principle are introduced in Section 2. The models and calculation of the key parameters are derived in Section 3. In Section 4, the simulation results are presented.The experimental results and the advantages of IBLBE compared with the inductor-based adjacent equalizer are presented in Section 5, followed by the conclusion in Section 6. 
Structure of the Proposed Equalizer Figure 2 shows the system configuration of the proposed equalizer: figure 2(a) is the structure of the equalizer, figure 2(b) is the schematic diagram of the bottom balancing circuit, and figure 2(c) is the schematic diagram of a balancing sub-circuit. The battery string connected in series is subdivided into N modules, and each module contains n individual cells. The N battery modules are divided into two parts with cutoff point K: if N is an even number, N = 2K; if N is an odd number, N = 2K − 1. Every battery module Mi is equipped with a balancing sub-circuit Ei, which controls the charge transfer between Mi and the other modules. Each battery module is divided into two parts with cutoff point k: if n is an even number, n = 2k; if n is an odd number, n = 2k − 1. Every individual cell is equipped with a balancing sub-circuit Sj, which controls the charge transfer between Bj and the other individual cells in this module. The balancing sub-circuits Ei and Sj are implemented by buck-boost converters, which allow bidirectional energy flow. The balancing sub-circuits Ei compose the upper balancing circuits, which are used to achieve the equalization between battery modules. The balancing sub-circuits Sj compose the bottom balancing circuits, which are used for equalization between cells in each module. The proposed equalizer IBLBE can achieve dynamic adjustment of the equalization path and the equalization threshold, described as follows: (1) Dynamic adjustment of the equalization path. In the equalization process, the balancing sub-circuit Ei provides an equalization path for module Mi, and the balancing sub-circuit Sj provides an equalization path for individual cell Bj. Furthermore, the equalizer can extend the equalization path by operating Ei and Sj concurrently. (2) Dynamic adjustment of the equalization threshold. The equalization circuits have two equalization thresholds: one for the battery modules and the other for the individual cells in the same module. If the two thresholds are not met, the balancing circuits stop working. The working condition for Ei is V(M)max − V(M)min > 5 mV, where the V(M)s are the module terminal voltages. In every battery module, the working condition for Sj is V(B)max − V(B)min > 3 mV, where the V(B)s are the terminal voltages of the cells in the same module. The equalization thresholds could be adjusted according to the system requirements and the accuracy of the sampling circuits [15]. Equalization principle The battery strings have four different working states: the charging state, the discharging state, the idle state after charging and the idle state after discharging. The equalization principles of the charging state and the idle state after charging are the same; the equalization principles of the discharging state and the idle state after discharging are the same. Cell balancing based on voltage inconsistency is more easily implemented and more common [13]. In this paper, the cells' terminal voltages are employed as the index of inconsistency. A balancing circuit shown in figure 3 for 16 cells in series subdivided into 4 modules is used as the example to introduce the principles of the proposed equalizer.
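As a concrete illustration of the layered activation logic just described, the sketch below groups a 16-cell string into four modules and applies the two working conditions. The function names are ours, and the numeric thresholds are assumptions based on the stated conditions rather than values verified against the original circuit.

```python
# Illustrative sketch of the layered selection logic described above.
# Function and threshold names are ours; the thresholds follow the stated
# working conditions (module gap and cell gap), which are assumptions here.

MODULE_THRESHOLD = 0.005   # V, working condition for the upper circuits Ei
CELL_THRESHOLD = 0.003     # V, working condition for the bottom circuits Sj

def active_module_circuits(module_voltages):
    """Return True if the upper balancing circuits should run."""
    return max(module_voltages) - min(module_voltages) > MODULE_THRESHOLD

def active_cell_circuits(cell_voltages):
    """Return True if the bottom circuits of one module should run."""
    return max(cell_voltages) - min(cell_voltages) > CELL_THRESHOLD

# 16 cells grouped as N = 4 modules of n = 4 cells each (example voltages)
cells = [4.08, 4.05, 4.11, 4.06, 4.07, 4.09, 4.04, 4.05,
         4.10, 4.06, 4.08, 4.07, 4.03, 4.05, 4.06, 4.04]
modules = [cells[i:i + 4] for i in range(0, len(cells), 4)]
module_v = [sum(m) for m in modules]

print("upper layer active:", active_module_circuits(module_v))
for idx, m in enumerate(modules, start=1):
    print(f"module M{idx} bottom layer active:", active_cell_circuits(m))
```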
2.2.1 The equalization principles of charging state In the charging state or the idle state after charging, firstly, identify the highest cell voltage; secondly, control the corresponding sub-circuit Sj to transfer energy from the highest cell to the other cells (in the same module). Suppose that the cell voltage of B1 is higher than the others in module M1. The equalization principle can be divided into the following two stages. Stage 1: L1 charging. As shown in figure 4(a), the individual cell B1 charges the inductor L1 when the switch S1a is turned ON. The inductor L1 stores energy with the current gradually increasing, and some of the electrical energy is transferred into magnetic energy stored in the inductor. Here R is the total circuit resistance when S1a is turned ON, including the DC resistance of L1, the turn-on resistance of S1a, and the internal resistance of cell B1; iL1 is the current of inductor L1, and ton is the turn-on time of S1a. The general solution for iL1 is that of a first-order RL charging circuit. The turn-on time ton of S1a determines the peak value of the inductor current iL1, which is a key parameter for calculating the average current of the inductors and the average balancing current of the balancing sub-circuits. In this process, L1 stores energy and the balancing sub-circuit S1 decreases the charging current of B1: iB1 = ic − ieq1, in which iB1 is the current through B1, ic is the charging current of the battery string, and ieq1 is the equalization current of balancing sub-circuit S1. Stage 2: L1 discharging. As shown in figure 4(b), the inductor L1 charges cells B2, B3 and B4 in M1 through the flywheel diode of S1b when the switch S1a is turned OFF, realizing the energy transfer from cell B1 to the other cells. This is a first-order full-response circuit, and the inductor current decays accordingly; here VD is the forward voltage of the flywheel (body) diode of S1b, R′ is the loop total resistance, VB2-B4 is the total terminal voltage of B2, B3 and B4, and toff is the stop time of the L1 discharge. In this process, the inductor L1 charges B2, B3 and B4 through the flywheel diode of switch S1b, and the balancing sub-circuit S1 increases the charging current of B2, B3 and B4. iB2, iB3 and iB4 are the currents through B2, B3 and B4, respectively, and ieq1 is the equalization current of balancing sub-circuit S1. Equalization principles of discharging state In the discharging state or the idle state after discharging, firstly, identify the lowest cell voltage; secondly, control the corresponding sub-circuit Sj to transfer energy from the other cells (in the same module) to the lowest voltage cell. Suppose that the cell voltage of B3 is lower than the others in module M1. The equalization principle can be divided into the following two stages. Stage 1: L3 charging. As shown in figure 5(a), the individual cells B1 and B2 charge the inductor L3 when the switch S3a is turned ON. The inductor L3 stores energy with the current iL3 increasing, and some of the electrical energy is transferred into magnetic energy stored in the inductor. The balancing sub-circuit S3 increases the discharging current of cells B1 and B2. Module equalization principle In a long battery string, more equalization paths need to work simultaneously to improve the balancing speed. The proposed equalizer modularizes the battery string with layered bidirectional balancing circuits. The balancing sub-circuits Ei in the upper layer circuits are operated simultaneously with the sub-circuits Sj in the bottom layer, which extends the balancing path and can achieve fast equalization between modules.
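The two stages can be illustrated numerically with a first-order RL rise for stage 1 and a linear ramp-down through the flywheel diode for stage 2. The sketch below is only an order-of-magnitude illustration under assumed component values (loop resistance, diode drop, on-time); it is not a reproduction of the paper's equations or measured waveforms.

```python
# Rough numerical sketch of the two balancing stages for sub-circuit S1.
# Stage 1: B1 charges L1 through total resistance R_ON (first-order RL rise).
# Stage 2: L1 discharges into B2+B3+B4 through the flywheel diode until i = 0.
# Component values and voltages are illustrative assumptions only.
import math

V_B1 = 4.10        # V, highest cell
V_REST = 3 * 4.05  # V, series voltage of B2, B3, B4
V_DIODE = 0.4      # V, flywheel diode forward drop (assumed)
L1 = 100e-6        # H
R_ON = 0.05        # ohm, loop resistance with S1a ON (assumed)
T_ON = 65e-6       # s, switch on-time (assumed)

def i_stage1(t):
    """Rising inductor current while S1a is ON (zero initial current)."""
    return (V_B1 / R_ON) * (1.0 - math.exp(-R_ON * t / L1))

i_peak = i_stage1(T_ON)
print(f"peak inductor current ~ {i_peak:.2f} A")

# Stage 2: with resistance neglected, the current falls roughly linearly,
# di/dt = -(V_REST + V_DIODE)/L1, reaching zero after t_off:
t_off = i_peak * L1 / (V_REST + V_DIODE)
print(f"discharge time ~ {t_off * 1e6:.1f} us")
```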
In the charging state or the idle state after charging, the energy is transferred from the highest voltage module to the others. When the voltage of module M4 is higher than the others, the operation can be divided into two stages. As shown in figure 6(a), M4 charges the inductor L20 when the switch S20b is turned ON. Then, the inductor L20 charges the modules M1, M2 and M3 through the flywheel diode of S20a when the switch S20b is turned OFF, as shown in figure 6(b). In the discharging state or the idle state after discharging, the energy is transferred from the other modules to the lowest voltage one. When the voltage of module M2 is lower than the others, the operation can be divided into two stages. As shown in figure 6(c), the modules M3 and M4 charge the inductor L19 when the switch S19b is turned ON. Then, the inductor L19 charges the module M2 through the flywheel diode of S19a when the switch S19b is turned OFF, as shown in figure 6(d). Modeling, Analysis and calculation of the key parameters In this section several key parameters are analyzed, including the balancing speed, the balancing current, the inductor current, the duty cycle of the switch waveform, the switching frequency and the inductances. During the energy transfer process of an inductor L, denote the voltage across L by V1 when the switch is turned ON, namely the total voltage of the cells charging L; denote the voltage across L by V2 when the switch is turned OFF, namely the total voltage of the cells being discharged into by L. Modeling of inductor current and duty cycle of driving waveform Ignore the resistance, since its value is very small. Then the inductor current increases linearly, with slope V1/L, when the switch is turned ON. When the switch is turned OFF, ignoring the resistance and the forward voltage of the flywheel diode, the inductor current decreases linearly with slope V2/L. In the following, D represents the duty cycle and T represents the switching period. To make the DC-DC converters work in DCM, the inductor current needs to drop to 0 during the off-time of the switch. The condition that the duty cycle needs to satisfy is derived as follows: the peak current V1DT/L built up during the on-time must be removed at slope V2/L within the off-time (1 − D)T, so that D ≤ V2/(V1 + V2). In the equalization process, the duty cycles of the driving waveforms for the balancing sub-circuits Ei and Sj are the same. In the equalization for the charging state or the idle state after charging, the balancing circuits transfer energy from the individual cell with the highest voltage to the others, so V1 is the voltage of the highest cell and V2 is the sum of the voltages of the other cells in the module. In the equalization for the discharging state or the idle state after discharging, the balancing circuits transfer energy from the other cells to the lowest voltage cell, so V1 is the sum of the voltages of the other cells and V2 is the voltage of the lowest cell. Inductance, switching frequency and equalization speed A key indicator for the equalizer that needs to be considered is the cell-balancing time. This problem can be addressed by increasing the balancing current, because the amount of charge transferred from one cell to the others in unit time is proportional to the average value of the current through the cell [5].
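Under the linearised model above, the DCM requirement reduces to a simple duty-cycle bound. The sketch below evaluates that bound for assumed cell-level voltages; the function name and the example voltages are ours.

```python
# Sketch of the DCM duty-cycle bound implied by the linearised model:
# the inductor charges at slope V1/L for D*T and must discharge at slope
# V2/L to zero within (1-D)*T, giving D <= V2 / (V1 + V2).
# V1 and V2 below are assumed example values, not taken from the paper.

def max_dcm_duty(v_on, v_off):
    """Largest duty cycle that still lets the inductor current reach zero."""
    return v_off / (v_on + v_off)

# Cell-level balancing in the charging state: L charges from one cell
# (~4.1 V) and discharges into the other three cells of the module (~12.2 V).
print(f"charging-state bound:    D <= {max_dcm_duty(4.1, 12.2):.2f}")

# Cell-level balancing in the discharging state: L charges from the other
# cells of the module and discharges into the single lowest cell.
print(f"discharging-state bound: D <= {max_dcm_duty(12.2, 4.1):.2f}")
```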
In this expression ΔQ is the charge transferred from one cell to the others in one second, iavg is the average current through the cell, T is the switching period and f is the switching frequency. In the equalization process, the current in the balancing circuit is derived as follows. The peak value of the inductor current is ipeak = V1DT/L, and the average value of the inductor current follows from the triangular (DCM) current waveform. In the equalization for the charging state or the idle state after charging, the average values of the current through a cell and through a module Mi follow in the same way. The average current of the cells/modules is proportional to the product of the duty cycle D and the voltage ratio V1/V2, and is inversely proportional to the product of the switching frequency f and the inductance L. The values of D and V1/V2 cannot be adjusted at will; in that case, the selection of f and L can achieve different effects on the balancing current and the balancing speed. Simulation Result The simulation model of the proposed equalizer is built in PSIM 9.0. In order to reduce the balancing time, sixteen 1 F capacitors are employed to substitute for cells, with the initial voltages shown in table 1. The switching frequency is set to 10 kHz. The inductances in Sj are 100 µH, and the inductances in Ei are 330 µH. It is assumed that the capacitors, inductors and switches are all lumped elements. Moreover, the influence of parasitic inductance and parasitic capacitance and the deviation generated by AD conversion are ignored. The activation condition set for the balancing sub-circuits is given in (16), where V(B)max and V(B)min are the cell voltages in the same module. Figure 9 shows the simulation result of equalization for the idle state after charging. Firstly, the equalization between modules is achieved at about 0.113 s. Then, the equalization stops at 0.238 s, and the final voltage gap is 4 mV. Figure 10 shows the equalization simulation results of the discharging state with a 0.5 A discharging current. The module equalization is achieved at about 0.213 s and the cell balancing stops at 0.267 s, with a final voltage gap of 4 mV. Figure 11 shows the equalization simulation result of the idle state after discharging. The module balancing stops at about 0.217 s. The cell balancing stops at about 0.262 s, and the final voltage gap is 4 mV. Comparative study with typical inductor-based equalizers In order to validate the advantages in balancing speed of the proposed equalizer, comparative studies with the traditional inductor-based adjacent equalizer (IBAE) and the single switched inductor equalizer (SSIE) are implemented. In the comparative simulation, the initial cell voltages are the same as in the previous simulation shown in table 1; the inductances are 100 µH; the switching frequency is 10 kHz; the duty cycle is 48%. The topologies of IBAE and SSIE are shown in figure 1. The equalization simulation results of the traditional inductor-based equalizers (IBAE and SSIE) for 16 cells during the idle state are given in figure 12. The IBAE achieves the same 4 mV voltage gap after 0.347 s, and the SSIE achieves the same 4 mV voltage gap at about 0.851 s. The proposed equalizer achieves cell balancing at about 0.238 s in the idle state after charging. It reduces the balancing time by 31.4% compared with the IBAE (0.347 s) and by 72% compared with the SSIE (0.851 s).
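For a rough feel of the balancing current, the triangular DCM waveform gives the peak and average inductor currents directly from V1, D, T and L. The sketch below evaluates these quantities for assumed operating values; it reflects the linearised model above rather than the exact expressions in the paper.

```python
# Sketch of the balancing-current estimates discussed above, in the DCM
# triangular-current approximation. Formulas follow from the linear model
# (i_peak = V1*D*T/L); the numbers are illustrative assumptions.

def balancing_current(v_on, v_off, duty, freq, L):
    """Return (peak, average) inductor current over one switching period."""
    T = 1.0 / freq
    i_peak = v_on * duty * T / L
    t_on = duty * T
    t_off = i_peak * L / v_off          # time to ramp back down to zero
    i_avg = 0.5 * i_peak * (t_on + t_off) / T
    return i_peak, i_avg

peak, avg = balancing_current(v_on=4.1, v_off=12.2, duty=0.65,
                              freq=10e3, L=100e-6)
print(f"peak ~ {peak:.2f} A, average ~ {avg:.2f} A")
print(f"charge moved per second ~ {avg:.2f} C")   # dQ/dt equals i_avg
```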
Experimental results In order to further verify the equalization principles and show the balancing performance of the proposed equalizer, a balancing system is implemented and tested for sixteen 2.6 Ah Sanyo ternary lithium batteries. This paper adopts the traditional fuzzy logic control algorithm (FLC) [6]. In every equalization cycle, the equalization time is teq = 10 s and the standing time is tsd = 30 s. The standing time in every equalization cycle is aimed at eliminating the polarization voltage of the cells, which allows the real open-circuit voltages (OCVs) of the cells to be measured more accurately before the balancing instructions are issued. Equalization for charging state The battery string charging current is 0.5 A. The initial voltages of the cells and modules are shown in table 1. The balancing circuits S2, S5, S11, S14 and E4 operate in the beginning. Figure 13(a) shows the experimental waveforms of the inductor currents iL11 and iL20. The duty cycle of the PWM is 65%. The inductor current is discontinuous and varies in a saw-tooth waveform. The peak value of iL11 comes to 1.7 A, and the peak value of iL20 is 1.8 A. Figure 14 shows the 16 cells' voltage trajectories during equalization in the charging process. The initial voltage gap shown in table 1 is 152 mV, and it decreases to 3 mV at about 3197 s. Table 3 presents the 16 cells' voltages before and after equalization in the charging and discharging states. Equalization for discharging state In the discharging process, the discharging resistor is 150 Ω. The balancing sub-circuits S3, S8, S12, S15 and E1 operate in the beginning. Figure 13(b) presents the experimental waveforms of the inductor currents iL3 and iL17. The duty cycle of the PWM is 24%. The inductor current is discontinuous and varies in a saw-tooth waveform. The peak value of iL3 comes to 1.8 A, and the peak value of iL17 is 1.9 A. Figure 15 shows the 16 cells' voltage trajectories during equalization in the discharging process. The initial voltage gap shown in table 1 is 152 mV, and it decreases to 3 mV at about 3098 s. The final voltages of the 16 cells after equalization are shown in table 3. Conclusion The selection of switching frequency and inductance can achieve different levels of balancing current and balancing speed. Comparative simulations with two typical inductor-based equalizers (IBAE and SSIE) were implemented. The proposed equalizer reduces the balancing time by 31.4% compared with the IBAE and by 72% compared with the SSIE. The simulation and experimental results show that the proposed equalizer can decrease the charging current of higher voltage cells during the charging process and reduce the discharging current of lower voltage cells during the discharging process; the equalizer can transfer energy from higher voltage cells to lower voltage cells in the idle period after charging or discharging. Moreover, the layered balancing circuits extend the balancing path and increase the balancing current, which allows fast, active equalization, especially for a long battery string. As future work, the proposed topology will be applied to battery strings in energy storage and EVs.
Figure 2. System configuration of the proposed equalizer: (a) structure of the proposed equalizer; (b) schematic diagram of the bottom balancing circuit; (c) schematic diagram of a balancing sub-circuit.
Figure 4. Equalization principles of the proposed equalizer in the charging state or the idle state after charging: (a) cell B1 charges inductor L1; (b) inductor L1 charges cells B2, B3 and B4.
Figure 5. Principle of the proposed equalizer in the discharging state or the idle period after discharging: (a) B1 and B2 charge inductor L3; (b) L3 charges cell B3.
Figure 7. Two operating modes of the DC-DC converter: (a) continuous current mode (CCM); (b) discontinuous current mode (DCM).
Figure 8. Equalization simulation results for the charging state with a 0.5 A charging current: the equalization between modules is achieved at 0.035 s and the cell balancing stops at 0.234 s, with a final voltage gap of 4 mV.
Figure 9. Equalization simulation results of the proposed equalizer for 16 cells during the idle period after charging: (a) 16 cells' voltage trajectories; (b) 4 modules' voltage trajectories.
Figure 10. Equalization simulation results of the proposed equalizer for 16 cells during the discharging process: (a) 16 cells' voltage trajectories; (b) 4 modules' voltage trajectories.
Figure 11. Equalization simulation results of the proposed equalizer for 16 cells during the idle period after discharging: (a) 16-cell voltage trajectories; (b) 4-module voltage trajectories.
Figure 13. Experimental waveforms of the inductor currents: (a) iL11 and iL20 in equalization for the charging state; (b) iL3 and iL17 in equalization for the discharging state.
Figure 15. 16-cell voltage trajectories in equalization for the discharging state.
Table 1. Initial voltages of the 16 cells.
Equalization simulation for the charging state or the idle state after charging.
Table 2 summarizes the parameters of the IBLBE; the inductances and resistances in Table 2 were measured with a TH2810D LCR meter.
Table 2. Component values used for the equalizer.
Table 3. Voltages of the 16 cells before and after equalization in the charging and discharging states.
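To make the layered balancing logic of Figure 2 concrete, the following sketch shows one way the module-level sub-circuits Ei and the cell-level sub-circuits Sj could be selected from measured voltages. This is an illustration only: the paper's controller is fuzzy-logic based and its threshold values are not given in this excerpt, so the max-min criterion and the numeric thresholds below are assumptions.

```python
def select_subcircuits(cell_voltages, cells_per_module=4,
                       module_threshold=0.010, cell_threshold=0.005):
    """Pick which balancing sub-circuits to enable for one equalization cycle.

    cell_voltages: flat list of cell OCVs, grouped into consecutive modules of
    `cells_per_module` cells. Returns (active_E, active_S) as 1-based indices.
    Threshold values are placeholders, not the paper's settings.
    """
    modules = [cell_voltages[i:i + cells_per_module]
               for i in range(0, len(cell_voltages), cells_per_module)]
    module_voltages = [sum(m) for m in modules]

    active_E, active_S = [], []

    # Upper layer: module-level equalization if the module voltage gap is large.
    if max(module_voltages) - min(module_voltages) > module_threshold:
        active_E.append(module_voltages.index(max(module_voltages)) + 1)

    # Bottom layer: cell-level equalization inside each unbalanced module.
    for i, module in enumerate(modules):
        if max(module) - min(module) > cell_threshold:
            j = module.index(max(module))
            active_S.append(i * cells_per_module + j + 1)

    return active_E, active_S
```

Under this reading, the highest cell within each unbalanced module and the highest-voltage module receive balancing first, which is consistent with one cell-level sub-circuit per module (S2, S5, S11, S14) plus one module-level sub-circuit (E4) being reported as active at the start of the charging experiment.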
5,632.2
2017-03-27T00:00:00.000
[ "Engineering" ]
Pattern Avoidance in Task-Precedence Posets We have extended classical pattern avoidance to a new structure: multiple task-precedence posets whose Hasse diagrams have three levels, which we will call diamonds. The vertices of each diamond are assigned labels which are compatible with the poset. A corresponding permutation is formed by reading these labels by increasing levels, and then from left to right. We used Sage to form enumerative conjectures for the associated permutations avoiding collections of patterns of length three, which we then proved. We have discovered a bijection between diamonds avoiding 132 and certain generalized Dyck paths. We have also found the generating function for descents, and therefore the number of avoiders, in these permutations for the majority of collections of patterns of length three. An interesting application of this work (and the motivating example) can be found when task-precedence posets represent warehouse package fulfillment by robots, in which case avoidance of both 231 and 321 ensures we never stack two heavier packages on top of a lighter package. Introduction In this paper, we continue a rich tradition of extending the notion of classical pattern avoidance in permutations to other structures. Given permutations π = π 1 π 2 · · · π n and ρ = ρ 1 ρ 2 · · · ρ m we say that π contains ρ as a pattern if there exist 1 ≤ i 1 < i 2 < · · · < i m ≤ n such that π ia < π i b if and only if ρ a < ρ b . In this case we say that π i1 π i2 · · · π im is order-isomorphic to ρ and that π i1 π i2 · · · π im reduces to ρ. If π does not contain ρ, then π is said to avoid ρ. The classical definition of pattern avoidance in permutations has shown itself to be worthwhile in many fields including algebraic geometry [17] and theoretical computer science [9]. Analogues of pattern avoidance have been developed for a variety of combinatorial objects including Dyck paths [1], tableaux [11], set partitions [15], trees [14], posets [8], and many more. We use a definition of pattern avoidance that is similar to that used in the study of heaps [10], but distinct from that used in previous studies of trees. Unlike the question studied by Hopkins and Weiler [8] which identified classes of rightmost diamond for retrieving object 3. The labels represent the order in which each task of the 12 total tasks is executed. Each robot operates autonomously and independently, and each faces its own challenges. For example one of the objects may be at the back of the warehouse, there may be significant traffic along some of the paths the robots travel through the warehouse, or the robot assigned to retrieve an object may still be executing its previous assignment. Thus the labels on the least elements of each diamond can vary significantly, and there can be a large difference in the labelling of the least element of a particular diamond and its greatest element. In Figure 1.2, the first task completed is that the robot for object 3 arrives and picks up the rack containing object 3. Next, the robot retrieving object 1 arrives at the rack containing object 1. Next, the robot carrying object 1 rotates its rack on its back to have the correct orientation to the picker. This continues, and based on the labelling of the elements, we see that object 3 (the lightest) is placed in its shipping box first (in step 9), then object 1 (in step 11), and then object 2 (in step 12). So our human picker has placed two heavier objects on a lighter object (unless they rearrange the objects after packing). 
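To make the definition of containment concrete, a brute-force check (an illustrative sketch, not the Sage code used for the enumerations later in the paper) can test whether a permutation contains a given length-3 pattern; the safety property of the warehouse example then amounts to avoidance of 231 and 321, as stated next.

```python
from itertools import combinations

def contains(perm, pattern):
    """Classical containment: does `perm` contain `pattern` as a pattern?"""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        window = [perm[i] for i in idx]
        # order-isomorphism: the window and the pattern agree on all comparisons
        if all((window[a] < window[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(k)):
            return True
    return False

def avoids(perm, patterns):
    return not any(contains(perm, p) for p in patterns)

print(contains([2, 4, 1, 3], (2, 3, 1)))              # True: 2 4 1 reduces to 231
print(avoids([1, 3, 2, 4], [(2, 3, 1), (3, 2, 1)]))   # True: no 231 and no 321
```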
Then a sufficient (though not necessary) condition to ensure that two heavier objects do not arrive after a lighter object is that the associated permutation avoid 231 and 321. One could consider other applications that arise from task-precedence problems, but our motivating example can be generalized most appropriately by changing 4 tasks per autonomous robot to v tasks. Throughout this paper, the main question we answer is "How many elements are in D_{v,d}(P)?" for any collection P of patterns of length 3. In general we fix v ≥ 4 and a set of patterns P and then determine a formula for the sequence {|D_{v,d}(P)|}_{d≥1}, with key results for v = 4 shown in Table 1. The third column of the table gives entries from the Online Encyclopedia of Integer Sequences [13]. Our results for pattern-avoiding diamonds have connections with many other combinatorial objects, as evidenced by the low reference numbers. Sequences A260331, A260332 and A260579, however, are new results particular to this study of task-precedence posets. Our task, which answers our primary question, is to find f^P_{v,d}(x, y). Then when we substitute x = 1 and take the coefficient of y^d, we obtain |D_{v,d}(P)|. In Section 2 we consider collections of diamonds that avoid a single pattern of length 3. In Section 3 we consider collections of diamonds that avoid a pair of patterns of length 3, and in Section 4 we consider collections of diamonds avoiding three or more patterns of length 3. Finally, in Section 5, we list some open problems relating to this work. 2 Diamonds avoiding a single pattern of length 3 Before we count pattern-avoiding diamonds, it is useful to enumerate all diamonds. Proof: Let v ≥ 4 and d ≥ 1. First we choose the v labels for each diamond, and then there are (v − 2)! ways to arrange the internal vertex labels of any given diamond; we obtain the total count as the product of these choices over the d diamonds. Proof: It is impossible to avoid 123 while having a diamond, since the pattern is inherent in all valid diamond labellings. The patterns 132 and 213 The complement of a permutation π of length n, denoted by π^c, is obtained by replacing each letter j by the letter n − j + 1. The reverse of π = π_1 π_2 ... π_n, denoted by π^r, is π_n π_{n−1} ... π_1. We let π^rc be the reverse-complement of π and D_{v,d}(p)^rc be {π_D^rc | D ∈ D_{v,d}(p)}. Given a permutation π in S_n, lis(π) is the length of a longest increasing subsequence in π. For example, in the permutation 1 2 5 6 3 4 7 8 a longest increasing subsequence is 1 2 5 6 7 8 and lis(1 2 5 6 3 4 7 8) = 6. Given a permutation π in S_n, rlmax(π) is the number of right-left maxima in π. For example, in the permutation 2 4 6 8 1 3 5 7 a maximum is reached twice when reading right-to-left, and rlmax(2 4 6 8 1 3 5 7) = 2. Let Dyck_{v,d} be the set of all paths from (0, 0) to (d, vd) using only (0, 1) and (1, 0) steps (North and East steps) which stay weakly under y = vx. Given any p ∈ Dyck_{v,d}, touchpoints(p) is the number of times p touches the line y = vx, excluding the point (d, vd). In Figure 2.1, the Dyck path touches the line y = 4x three times and touchpoints(p) = 3. Given any p ∈ Dyck_{v,d}, corners(p) is the number of North steps that are followed by one or more East steps in p. In Figure 2.1, there are three places where the Dyck path has one or more North steps followed by one or more East steps, and corners(p) = 3. Given any p ∈ Dyck_{v,d}, height(p) is the greatest vertical distance from any point on p to the line y = vx.
In Figure 2.1, the longest distance from a corner in the Dyck path to the line y = 4x is seven (from (3,5) to (3,12)) and height(p) = 7. Lemma 1. Any element of D_{v,d}(132) has the elements on each diamond labelled in increasing order. Otherwise the label of the first element of the diamond together with the first descent would form a 132 pattern. Proof: We define a map φ from Dyck_{v,d} to D_{v,d}(132). To find φ(p), first write out the heights of the East steps. For each height, include a subscript j that indicates how many East steps are at that height. Reverse this sequence and add 1 to every item in the list, leaving the subscripts unchanged. Each of the elements of this list becomes the first label of a diamond, and then place vj labels in increasing order using the smallest elements that have not already been used as labels. This map is certainly reversible, with the first label on each diamond forming a list, unless there is an increase between diamonds, in which case the first label is repeated. Then the list is reversed and 1 is subtracted from each element, giving us the heights of the East steps in the Dyck path. This bijection is particularly natural when you examine common statistics on both paths and permutations. Following touchpoints, corners, and height through the bijection, we find they correspond exactly to right-left maxima, descents, and longest increasing subsequence on the permutation. Proof: These equalities hold by the bijection in Theorem 3 and trivial Wilf equivalence from Proposition 1. The patterns 231 and 312 Consider D in D_{v,d}(231), and suppose label vd occurs in position k. Then for all i < k and for all j > k, a_i < a_j. Consequently, if label vd is in position k, then labels (1, ..., k − 1) appear in positions (1, ..., k − 1). We define D^d_{v,j} to be the collection of labelled diamonds for d − 1 full diamonds with v vertices each followed by an incomplete diamond with j vertices, for j = 1, ..., v − 1. Likewise D^d_{v,j}(p) are those diamonds that avoid pattern p. Note, when j = 1 there exist no order relations in the final partial diamond. An example is shown in Figure 2.2. Here the generating function for descents (gfd) records x^{des(π_D)} for each diamond D, and C_i is the i-th Catalan number. We track where the largest label m can be placed so that the resulting labelling will avoid 231. When m appears on the greatest element of the i-th complete diamond, 1 ≤ i ≤ d − 1, we have α^i_{v,v−1} as the gfd for the vertices before m, and α^{d−i}_{v,1} as the gfd for the vertices following m. Because we have created a descent from m to the least element of the next diamond or partial diamond, we must also multiply by x to account for this extra descent. Hence the corresponding contribution to the gfd follows. Now, assume we have (d − 1) diamonds followed by an incomplete diamond with j vertices where j = 2, ..., v − 1. The m-th element can appear on any of the interior vertices but not on the least element of the incomplete diamond, or m can appear on the greatest element of any complete diamond. When m appears on any of the interior vertices of the final diamond we need to count the descents before m, after m, and from m itself. The descents that occur before m can be counted by α^d_{v,j−g}, where g is the number of interior vertices following m including m. The descents following m are counted by α^1_{v,g} because the same number of descents can occur in the remaining interior vertices as when we have a single incomplete diamond. We then count the descent that results from m by multiplying our gfd by x, but we do not get a descent from m when it appears on the final interior vertex.
We then sum over all possible values of g to give us the gfd when m appears on the interior vertices of the final diamond which gives us Also, m can appear on the greatest element of any of the full diamonds. When m appears on the greatest element of the i th complete diamond the gfd for vertices that appear before m is α i v,(v−1) and α d−i v,j for the vertices following m. We count the descent from m by multiplying our gfd by x. The total gfd when m appears on the greatest element of the i th diamond is then Tab. 2: The recursive steps necessary to find the generating function for descents in D5,3(231). Thus Lastly, we look at when we have d complete diamonds. The m th element can appear on any of the greatest elements. When m appears on the greatest element of the last diamond, the gfd is α d v,(v−1) which counts descents before m. When m appears on the greatest element of the i th complete diamond (1 ≤ i ≤ d − 1), the gfd for vertices that appear before m is α i v,(v−1) and α d−i v,v for vertices following m. We count the descent from m to the following least element by multiplying the gfd by x. Hence We can use this result to recursively obtain f 231 v,d (x, 1) for any v and d. The pattern 321 We were unable to find a closed formula for the pattern 321. In Table 4, we present the first few terms of the sequence and the first few generating functions for descents, which we found using Sage. We are 3 Diamonds avoiding a pair of patterns of length 3 Next, we study pairs of patterns of length 3. While there are 15 such pairs of patterns, we focus on the 8 pairs of patterns σ, ρ where |D v,d (σ, ρ)| is non-trivial. Diamonds avoiding the set of patterns 132, 213 Lemma 2. In order to avoid 132 and 213, the labels on each diamond must be increasing and consecutive. Proof: By Lemma 1, the labels appear in increasing order on each diamond. Then any label "missing" from consecutive labelling would either create 213 if it occurred before its surrounding labels, or a 132 if it occurred after. Therefore the labels on each diamond must be consecutive and increasing. . Proof: By Lemma 2, we know that the labels on each diamond are consecutive and increasing, so there is a diamond labelled 1, 2, . . ., v, another labelled v + 1, . . ., 2v, etc. So the only thing we must ensure is that the entire collection of diamonds avoids 132 and 213 between the respective diamonds. In their foundational paper, Simion and Schmidt enumerated permutations avoiding 132 and 213 [16], and the recursive nature of their proof can also be adapted to find our generating function for descents. The labels v(d − 1) + 1, . . . , vd must occur on either the first diamond, or the last. In the first case, they create a descent. In the second, they do not, giving a (1 + x) term in the generating function. We continue recursively and obtain: Proof: We proceed similarly to the proof of Theorem 5 with a recursive argument. By Lemma 2, the final diamond has only two possibilities, one of which forms a descent with the previous diamond, and one of which doesn't. Thus our descent generating function gains a (1 + x) term for each additional diamond, and exactly as in Theorem 5, the result follows. Proof: By examination of cases, any arrangement of two descents forms either a 132 or a 321. If a descent does not involve the 1, then either the 1 occurs before, causing a 132, or the 1 occurs after, causing a 321. Proof: By Lemma 4, we need only enumerate those diamonds with one descent where the descent involves the 1. 
Everything after the 1 increases, as does everything before the 1. In fact, the permutations associated to diamonds that avoid 132 and 321 look like a portion of the identity permutation was deleted from the front and . Proof: Let n = vd be the largest label on a diamond D ∈ D v,d (231, 312). Avoiding the pattern 231 means the 1 must be at the beginning and avoiding the pattern 312 implies everything after n must be decreasing which forces n to the end of the permutation. By a result of Simion and Schmidt on permutations, there are 2 v−3 ways to arrange the middle-level vertices within each of the d diamonds in order to avoid both 231 and 312 creating between 0 and v − 3 descents [16]. There are also two ways to either swap or not swap the last element of each diamond with the first element of the next. This gives the following generating function. Avoiding 231, 321 Lemma 5. All labels that appear after n = vd must be consecutive and increasing, and if a n = n, then a n = n − 1. Theorem 9. is the generating function for descents for D v,d (231, 321). Proof: We approach the proof similarly to that of Theorem 4 and partition our diamonds by the position of the largest element and proceed recursively. Because the proofs are very similar, we omit the details of this proof for brevity. The only differences are that since we are now avoiding 321, we have no descents after the appearance of the largest label, and we have different initial conditions on one diamond. Table 5 is an example of using this recursive technique to find the generating function for descents in D 5,3 (231, 321). Tab Proof: Let n = vd be the largest label in d diamonds with v vertices. Avoiding the pattern 132 forces all labels before n to be larger than all labels after. Avoiding the pattern 213 forces all labels before n to be increasing. Avoiding the pattern 321 forces all labels after n to be increasing. This indicates that all vertices that appear before n will be the consecutive numbers prior to n and all vertices after n will be the remaining elements ordered consecutively. A label a i = n iff i = vs for some s = 1, . . . , d, and there is only one arrangement for the rest of the elements. Therefore, there can only be, at most, one descent and it occurs between diamonds. So, Diamonds avoiding 231, 312, 321 We will proceed by examining what changes can be made to the identity permutation while still avoiding 231, 312, and 321. Lemma 6. For labels a i , a j , a k if a i , a j < a k , then i < k or j < k in order to avoid the patterns 312 and 321. A swap is when two consecutive labels from the identity permutation switch positions in the permutation. Since any permutation can be created from the identity using swaps, restricting our changes to swaps will not exclude any possibilities. Lemma 7. All swaps must be disjoint in order to avoid 321. Proof: We simply examine the cases when two swaps overlap in some way, either with two swaps executed on 3 elements, or two overlapping swaps on 4 elements. Proof: Every final element of a diamond can either remain unchanged or be swapped with the least element of the next diamond. This then gives the generating function (1 + x) d−1 for each possible swap. Let k represent the nonconsecutive positions from which to choose a swap among the interior vertices. Note that in a diamond there are v−3 positions to swap since there are v−2 interior vertices. By Lemma 6 and Lemma 7 any consecutive interior vertices can only be swapped disjointly. 
Since the swaps must be nonconsecutive, k must be chosen from the v − 3 − (k − 1) available positions. This gives the binomial coefficient C(v−2−k, k). We then sum over all k in order to generate all possible descents for a single diamond. Since we have d diamonds in which to execute these swaps, we raise this to the d-th power. The gfd for D_{v,d}(231, 312, 321) is then obtained by combining these factors. Proof: Let n be the largest label in any permutation. Due to the structure of the diamonds, any collection of patterns that includes 123 cannot be avoided. For any other collection of 4 or more patterns, the result is easily seen using the lemmas for avoiding a single pattern earlier in the paper. Open problems This investigation leaves several directions open for future study. We did not touch on patterns of length 4; they all remain open. We are confident the techniques of Bevan et al. [2] will give the growth rate and minimal polynomial for diamonds avoiding 321, but in addition it is likely that these techniques would also work for some patterns of length 4. Although the minimal polynomials are unlikely to generalize, the transition operators in particular cases could potentially even generalize to length k for the decreasing pattern k (k − 1) ... 2 1. There is also a wide variety of other poset classes, besides diamonds, that could be approached in this manner. We generalized our diamonds by adding additional elements and order relations between the least and greatest elements, but one could also imagine creating a diamond-type poset with more than 3 levels as another generalization.
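The enumerative conjectures throughout the paper were formed with Sage; for small cases, a plain brute-force enumeration in the same spirit can serve as a check. The sketch below follows our reading of the construction and is not the authors' code: each diamond receives v of the labels, its least element gets the smallest of them and its greatest element the largest, the v − 2 interior labels appear in any left-to-right order, and the diamonds are read consecutively to form the permutation.

```python
from itertools import combinations, permutations

def contains(perm, pattern):
    """Classical pattern containment for short patterns (brute force)."""
    k = len(pattern)
    return any(all((perm[i[a]] < perm[i[b]]) == (pattern[a] < pattern[b])
                   for a in range(k) for b in range(k))
               for i in combinations(range(len(perm)), k))

def diamonds(v, d, labels=None):
    """Yield the permutations associated with labelled diamonds in D_{v,d}."""
    if labels is None:
        labels = tuple(range(1, v * d + 1))
    if d == 0:
        yield ()
        return
    for chosen in combinations(labels, v):
        low, high = min(chosen), max(chosen)
        interior_labels = [x for x in chosen if x not in (low, high)]
        remaining = tuple(x for x in labels if x not in chosen)
        for interior in permutations(interior_labels):
            block = (low,) + interior + (high,)
            for tail in diamonds(v, d - 1, remaining):
                yield block + tail

def count_avoiders(v, d, patterns):
    return sum(1 for p in diamonds(v, d)
               if not any(contains(p, pat) for pat in patterns))

# e.g. count_avoiders(4, 2, [(1, 3, 2)]) counts |D_{4,2}(132)| for comparison
# with Table 1; runtimes grow quickly, so this is only useful for small v and d.
```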
5,051.2
2015-10-31T00:00:00.000
[ "Mathematics" ]
Instances and concepts in distributional space Instances (“Mozart”) are ontologically distinct from concepts or classes (“composer”). Natural language encompasses both, but instances have received comparatively little attention in distributional semantics. Our results show that instances and concepts differ in their distributional properties. We also establish that instantiation detection (“Mozart – composer”) is generally easier than hypernymy detection (“chemist – scientist”), and that results on the influence of input representation do not transfer from hyponymy to instantiation. Introduction Distributional semantics (Turney and Pantel, 2010), and data-driven, continuous approaches to language in general including neural networks (Bengio et al., 2003), are a success story in both Computational Linguistics and Cognitive Science in terms of modeling conceptual knowledge, such as the fact that cats are animals (Baroni et al., 2012), similar to dogs (Landauer and Dumais, 1997), and shed fur (Erk et al., 2010). However, distributional representations are notoriously bad at handling discrete knowledge (Fodor and Lepore, 1999;Smolensky, 1990), such as information about specific instances. For example, Beltagy et al. (2016) had to revert from a distributional to a symbolic knowledge source in an entailment task because the distributional component licensed unwarranted inferences (white man does not entail black man, even though the phrases are distributionally very similar). This partially explains that instances have received much less attention than concepts in distributional semantics. This paper addresses this gap and shows that distributional models can reproduce the age-old ontological distinction between instances and concepts. Our work is exploratory: We seek insights into how distributional representations mirror the instance/concept distinction and the hypernymy/instantiation relations. Our contributions are as follows. First, we build publicly available datasets for instantiation and hypernymy (Section 2). 1 Second, we carry out a contrastive analysis of instances and concepts, finding substantial differences in their distributional behavior (Section 3). Finally, in Section 4, we compare supervised models for instantiation detection (Lincoln -president) with such models for hypernymy detection (19th century president -president). Identifying instantiation turns out to be easier than identifying hypernymy in our experiments. Datasets We focus on "public" named entities such as Abraham Lincoln or Vancouver, as opposed to "private" named entities like my neighbor Michael Smith or unnamed entities like the bird I saw today), because for public entities we can extract distributional representations directly from corpus data. 2 No existing dataset treats entities and concepts on a par, which would enable a contrastive analysis of instances and concepts. Therefore, we create the data for our study, building two comparable datasets around the binary semantic relations of instantiation and hypernymy (see Table 2). 
This design enables us to relate our results to work on hypernymy (see Section 5), and provides a rich relational perspective on the instance-concept divide: In both cases, we are dealing with the relationship between a more general (concept/hypernym) and a more specific object (instance/hyponym), but, from an ontological perspective, hyponym concepts, as classes of individuals, are considered to be completely different from instances, both in theoretical linguistics and in AI (Dowty et al., 1981;Lenat and Guha, 1990;Fellbaum, 1998). We construct both datasets from the WordNet noun hierarchy. Its backbone is formed by hyponymy (Fellbaum, 1998) and it was later extended with instance-concept links marked with the Hypernym Instance relation (Miller and Hristea, 2006). We sample the items from WordNet that are included in the space we will use in the experiments, namely, the word2vec entity vector space, which is, to our knowledge, the largest existing source for entity vectors. 3 The space was trained on Google News, and contains vectors for nodes in FreeBase which covers millions of entities and thousands of concepts. This enables us to perform comparative analyses, as we sample instances and concepts from a common resource, and that we have compatible vector representations for both. INSTANCE. This dataset contains around 30K datapoints for instantiation (see Table 1 for statistics and Table 2 3. The I2I (instance-to-instance) subset pairs the instance with a random instance from another concept, a sanity check to ensure that we are not thrown off by the high similarity among instances (see Section 3). HYPERNYMY. This dataset contains hypernymy examples which are as similar to the INSTANCE dataset as possible. The set of potential hyponyms are obtained from the intersection between the nouns in the word2vec entity space and WordNet, excluding instances. Each of the nouns that has a direct WordNet hypernym as well as a co-hyponym is combined with the direct hypernym into a positive example. The confounders are then built in parallel to those for INSTANCE. Note that in this case the equivalent of NOTINST is actually not-hypernym (hence NOTHYP in the results discussion), and the equivalent of I2I is concept-to-concept (C2C). 5 Instances and Concepts We first explore the differences between instances and concepts by comparing the distribution of similarities of their word2vec vectors (cf. previous section). We use both a global measure of similarity (average cosine to all other members of the respective set), and a local measure (cosine to the nearest neighbor). The results, shown in Table 3, indicate that instances exhibit substantially higher similarities than concepts, both at the global and at the local level. 6 The difference holds even though we consider more unique concepts than instances (Table 1), and might thus expect the concepts to show higher similarities, at least at the local level. The global similarity of instances and concepts is the lowest (see last row in Table 3), suggesting that instances and concepts are represented distinctly in the space, even when they come from the same domain (here, newswire). Taken together, these observations indicate that instances are semantically more coherent than concepts, at least in our space. We believe a crucial reason for this is that instances share the same specificity, referring to one entity, while concepts are of widely varying specificity and size (compare president of the United States with artifact). 
Further work is required to probe this hypothesis. It is well established in lexical semantics that cosine similarity does not distinguish between hypernymy and other lexical relations, and in fact hyponyms and hypernyms are usually less similar than co-hyponyms like cat-dog or antonyms like good-bad (Baroni and Lenci, 2011). This result extends to instantiation: The average similarity of each instance to its concept is 0.110 (standard deviation: 0.12), very low compared to the figures in Table 3. The nearest neighbors of instances show a wide range of relations similar to those of concepts, further enriched by the instance-concept axis: Tyre - Syria (location), Thames river - estuary ("co-hyponym class"), Luciano Pavarotti - soprano ("contrastive class"), Joseph Goebbels - bolshevik ("antonym class"), and occasionally true instantiation cases like Sidney Poitier - actor. Modeling Instantiation vs. Hypernymy The analysis in the previous section suggests clearly that unsupervised methods are not adequate for instantiation, so we turn to supervised methods, which have also been used for hypernymy detection (Baroni et al., 2012; Roller et al., 2014). Also note that unsupervised asymmetric measures previously used for hypernymy (Lenci and Benotto, 2012; Santus et al., 2014) are only applicable to non-negative vector spaces, which excludes predictive models like the one we use. We use a logistic regression classifier, partitioning the data into train/dev/test portions (80/10/10%) and ensuring that instances/hyponyms are not reused across partitions. We report F-scores for the positive class on the test sets. Table 4 shows the results. Rows correspond to experiments. The task is always to detect instantiation (left) or hypernymy (right), but the confounders differ: We combine the positive examples with each of the individual negative datasets (NOTINST/NOTHYP, INVERSE, I2I/C2C, cf. Section 2, all balanced setups) and with the union of all negative datasets (UNION, 25% positive examples). The columns correspond to feature sets. We consider two baselines: Freq for the most frequent class, and 1Vec for a baseline where the classifier only sees the vector for the first component of the input pair - for instance, for NOTINST, only the instance vector is given. This baseline tests possible memorization effects (Levy et al., 2015). For instantiation, we have a third baseline, Cap. It makes a rule-based decision on the basis of capitalization where available and guesses randomly otherwise. The remaining columns show results for three representations that have worked well for hypernymy (see Roller et al. (2014) and below for discussion): concatenating the two input vectors (Conc), their difference (Diff), and concatenating the difference vector with the squared difference vector (DDSq). Instantiation. Instantiation achieves overall quite good results, well above the baselines and with nearly perfect F-scores for the INVERSE and I2I cases. Recall that these setups basically require the classifier to characterize the notion of instance vs. concept, which turns out to be an easy task, consistent with the analysis in the previous section. Indeed, for INVERSE, the 1Vec and Cap baselines also achieve (near-)perfect F-scores of 0.96 and 1.00 respectively; in this case, the input is either an instance or a concept vector, so the task reduces to instance identification. The distributional models perform at the same level (0.98-0.99).
The most difficult setup is NOTINST, where the model has to decide whether the concept matches the instance, with 0.79 best performance. Since the INVERSE and I2I cases are easy, the combined task is about as difficult as NOTINST, and the best result for UNION is the same (0.79). The very bad performance of 1Vec in this case excludes memorization as a significant factor in our setup. Instantiation vs. Hypernymy. Table 4 shows that, in our setup, hypernymy detection is considerably harder than instantiation: results are markedly lower (e.g., 0.57 for NOTHYP). The hypernymy setups require the classifier to model the notion of concept specificity (other concepts may be semantically related, but what distinguishes hypernymy is the fact that hyponyms are more specific), which is apparently more difficult than characterizing the notion of instance as opposed to concept. (Our hypernymy results are also lower than previous work: e.g., Roller et al. (2014) report 0.85 maximum accuracy on a task analogous to NOTHYP, compared to our 0.57 F-score. Since our results are not directly comparable in terms of evaluation metric, dataset, and space, we leave it to future work to examine the influence of these factors.) Frequency Effects. We now test the effect of frequency on our best model (Conc) on the most interesting dataset family (UNION). The word2vec vectors do not provide absolute frequencies, but frequency ranks. Thus, we rank-order our two datasets, split each into ten deciles, and compute new F-scores. The results in Figure 1 show that there are only mild effects of frequency, in particular compared to the general level of inter-bin variance: for INSTANCE, the lowest-frequency decile yields an F-score of 76% compared to 81% for the highest-frequency one. The numbers are comparable for the HYPERNYM dataset, with 28% and 36%, respectively. We conclude that frequency is not a decisive factor in our present setup. Input Representation. Regarding the effect of the input representation, we reproduce Roller et al.'s (2014) result that DDSq works best for hypernymy detection in the NOTHYP setup. In contrast, for instantiation detection it is the concatenation of the input vectors that works best (cf. the NOTINST row in Table 4). Difference features (Diff, DDSq) perform a pre-feature selection, signaling systematic commonalities and differences in distributional representations as well as the direction of feature inclusion; Roller et al. (2014) argued that the squared difference features "identify dimensions that are not indicative of hypernymy", thus removing noise. Concatenating vectors, instead, allows the classifier to combine the information in the features more freely. We thus take our results to suggest that the relationship between instances and their concept is overall less predictable than the relationship between hyponyms and hypernyms. This appears plausible given the tendency of instances to be more "crisp", or idiosyncratic, in their properties than concepts (compare the relation between Mozart or John Lennon and composer with that of poet or novelist and writer). This interpretation is also consistent with the fact that difference features work best for the INVERSE case, which requires characterizing the notion of inclusion, and concatenation works best for the I2I and C2C cases, where instead we are handling potentially unrelated instances or concepts. Error analysis. An error analysis on the most interesting INSTANCE setup (UNION dataset with Conc features) reveals errors typical for distributional approaches.
The first major error source is ambiguity. For example, WordNet often lists multiple "senses" for named entities (Washington as synonym for George Washington and a city name, a.o.). The corresponding vector representations are mixtures of the contexts of the individual entities and consequentely more difficult to process, no matter which sense we consider. The second major error source is general semantic relatedness. For instance, the model predicts that the writer Franz Kafka is a Statesman, presumably due to the bureaucratic topics of his novels that are often discussed in connection with his name. Similarly, Arnold Schönberg -writer is due to Schönberg's work as a music theorist. Finally, Einstein -river combines both error types: Hans A. Einstein, Albert Einstein's son, was an expert on sedimentation. Related Work Recent work has started exploring the representation of instances in distributional space: Herbe- (2015) and Gupta et al. (2015) extract quantified and specific properties of instances (some cats are black, Germany has 80 million inhabitants), and Kruszewski et al. (2015) seek to derive a semantic space where dimensions are sets of entities. We instead analyze instance vectors. A similar angle is taken in Herbelot and Vecchi (2015), for "artificial" entity vectors, whereas we explore "real" instance vectors extracted with standard distributional methods. An early exploration of the properties of instances and concepts, limited to a few manually defined features, is Alfonseca and Manandhar (2002). Some previous work uses distributional representations of instances for NLP tasks: For instance, Lewis and Steedman (2013) use the distributional similarity of named entities to build a type system for a semantic parser, and several works in Knowledge Base completion use entity embeddings (see Wang et al. (2014) and references there). The focus on public, named instances is shared with Named Entity Recognition (NER; see Lample et al. (2016) and references therein); however, we focus on the instantiation relation rather than on recognition per se. Also, in terms of modeling, NER is typically framed as a sequence labeling task to identify entities in text, whereas we do classification of previously gathered candidates. In fact, the space we used was built on top of a corpus processed with a NER system. Named Entity Classification (Nadeau and Sekine, 2007) can be viewed as a limited form of the instantiation task. We analyze the entity representations themselves and tackle a wider set of tasks related to instantiation, with a comparative analysis with hypernymy. Conclusions The ontological distinction between instances and concepts is fundamental both in theoretical studies and practical implementations. Our analyses and experiments suggest that the distinction is recoverable from distributional representations. The good news is that instantiation is easier to spot than hypernymy, consistent with it lying along a greater ontological divide. The bad (though expected) news is that not all extant results for concepts carry over to instances, for instance regarding input representation in classification tasks. More work is required to better assess the properties of instances as well as the effects of design factors such as the underlying space and dataset construction. 
An extremely interesting (and challenging) extension is to tackle "anonymous" entities for which standard distributional techniques do not work (my neighbor, the bird we saw this morning), in the spirit of Herbelot and Vecchi (2015) and Boleda et al. (2017).
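For readers who want to reproduce the supervised setup of Section 4, the sketch below (our illustration, not the released code) builds the Conc, Diff and DDSq pair representations from pre-extracted embedding vectors and trains a logistic regression classifier; obtaining the vectors and the train/dev/test partitioning described above is assumed to be handled elsewhere.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(u, v, mode="conc"):
    """Build pair features from two embedding vectors u (instance/hyponym)
    and v (concept/hypernym), following the representations discussed above."""
    if mode == "conc":        # concatenation of the two input vectors (Conc)
        return np.concatenate([u, v])
    if mode == "diff":        # vector difference (Diff)
        return u - v
    if mode == "ddsq":        # difference plus squared difference (DDSq)
        d = u - v
        return np.concatenate([d, d ** 2])
    raise ValueError(mode)

def train(pairs, labels, mode="conc"):
    """pairs: list of (u, v) numpy vectors; labels: 1 for a positive relation."""
    X = np.vstack([features(u, v, mode) for u, v in pairs])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, np.asarray(labels))
    return clf
```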
3,592.4
2017-04-01T00:00:00.000
[ "Computer Science" ]
Active learning for ontological event extraction incorporating named entity recognition and unknown word handling Background Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manual construction of labeled data, which are required for state-of-the-art supervised learning systems. Active learning is to choose the most informative documents for the supervised learning in order to reduce the amount of required manual annotations. Previous works of active learning, however, focused on the tasks of entity recognition and protein-protein interactions, but not on event extraction tasks for multiple event types. They also did not consider the evidence of event participants, which might be a clue for the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Methods Our method is based on a committee of two systems as follows: We first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method for the task of named entity recognition. Results and conclusion We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method can achieve better performance than such previous methods as entropy and Gibbs error based methods and a conventional committee-based method. We also show that the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improve the active learning method. In addition, the adaptation of the active learning method into named entity recognition tasks also improves the document selection for manual annotation of named entities. Background A common framework of information extraction systems is supervised learning, which requires training data that are annotated with information to be extracted. Such training data are usually manually annotated, where the annotation process is time-consuming and expensive. On the other hand, in biomedical domain, recent research efforts on information extraction are extending from focusing on a single event type such as protein-protein interaction (PPI) [1] and gene regulation [2] to simultaneously targeting more complicated, multiple biological events defined in ontologies [3], which makes the manual annotation more difficult. There is thus the need of reducing the amount of annotated data that are required for training event extraction systems. 
Active learning is the research topic of choosing 'informative' documents for manual annotation such that the would-be annotations on the documents may promote the training of supervised learning systems more effectively than the other documents [4]. It has been studied in many natural language processing applications, such as word sense disambiguation [5], named entity recognition [6][7][8], speech summarization [9] and sentiment classification. Its existing works can be roughly classified into two approaches: uncertainty-based approach [10] and committee-based approach [11]. The uncertainty-based approach is to label the most uncertain samples by using an uncertainty scheme such as entropy [12]. It has been shown, however, that the uncertainty-based approach may have worse performance than random selection [13][14][15]. In the biomedical information extraction, the uncertainty-based approach of active learning has been applied to the task of extracting PPIs. For instance, [16] proposed an uncertainty sampling-based approach of active learning, and [17] proposed maximum uncertainty based and density based sample selection strategies. While the extraction of PPI is concerned with a single event type of PPI, however, recent biomedical event extraction tasks [18] involve multiple event types, even hundreds of event types in the case of the Gene Regulation Ontology (GRO) task of BioNLP-ST'13 [19]. The committee-based approach, based on a committee of classifiers, selects the documents whose classifications have the greatest disagreements among the classifiers and passes them to human experts for annotation. This approach, however, has several issues in adaptation for event extraction tasks. First, event extraction (e.g. PPI extraction, gene regulation identification) is different from many other applications of active learning, which are in essence document classification tasks. Event extraction is to locate not only event keywords (e.g. bind, regulates), but also event participants (e.g. gene/protein, disease) within documents and to identify pre-defined relations between them (e.g. subject-verb-object). Thus, even if the event extraction systems produce confidence scores for its resultant events, the confidence scores do not correspond to the probability of how likely a document expresses an event type: in other words, how likely a document belongs to an event type class, which should be the goal of classifiers of the committee-based approach for event extraction. Second, previous classifiers for the committeebased approach may miss some details of events including event participants. For example, the keyword 'expression' may mislead a classifier to predict that the document with the keyword expresses gene expression event, although the document does not contain any gene name. Our target tasks of event extraction for active learning in this paper are those introduced in BioNLP-ST'13 [20], which involve multiple, complicated event types. Currently, there is only one event extraction system available for all the tasks, called TEES [21], and we need an additional classifier to follow the committee-based approach. We thus propose as an additional classifier a novel statistical method for informativity estimation, which predicts how likely a text expresses any event concept of a given ontology. The method is based on a language model for co-occurrences between n-grams and event concepts. 
Furthermore, it independently estimates the presence of event participants in a text and the probabilities of out-of-vocabulary words and combines them with the prediction of event concepts in the text. We collectively estimate the informativity of a text for all the concepts in a given ontology, similarly to the uncertainty-based approach of [22][23][24]. We also present a revised committee-based approach of active learning for event extraction, which combines the statistical method with the TEES system as follows: Since the confidence scores of the TEES system are not reliable for active learning, we take TEES outputs as binary, that is, whether the system extracts any instance of a concept from a text or not. The disagreement between the TEES system and the statistical model is captured when, given a text (T) and an event concept (C), the TEES system does not extract any instance of C in T, but the probabilistic model predicts a high probability of C in T. In other words, the TEES system is used for filtering potential false negatives, and the probabilistic model for ranking them. We further adapt our active learning method and the statistical method for event concept detection to named entity recognition, including gene name recognition. We show that our method can improve active learning for named entity recognition as well, when tested against the BioCreative and CoNLL datasets. Methods We formalize the general workflow of active learning as follows: At the start of round t, let U^(t−1) be the pool of unlabeled documents and let L^(t−1) be the pool of labeled documents, where t starts from 1. In round t, we select the most 'informative' document x^(t) from U, manually label it, and add it to L. If the label y^(t) is assigned to the document x^(t) by the oracle, the labeled and unlabeled document sets are updated as L^(t) = L^(t−1) ∪ {(x^(t), y^(t))} and U^(t) = U^(t−1) \ {x^(t)}. This process is iterated until a certain stopping criterion is met, such as U = ∅ or a pre-defined number of rounds. It can also be done in a batch mode, where a group of documents is selected at each round for the manual labeling. Active learning method for event extraction As explained above, our active learning method follows the committee-based approach. As the committee, we employ two classifiers: a classifier based on an event extraction system called TEES and a statistical classifier based on language modeling (see the next section for details). TEES [21] is a state-of-the-art biomedical event extraction system based on support vector machines, and was the only system that participated in all the tasks of BioNLP-ST'13, showing the best performance in many of the tasks [25]. The TEES system produces a confidence score for each event it extracts. However, we do not use the score for active learning because the confidence score does not indicate the probability of the event in the document. We also assume that if the TEES system extracts an event (E) from a document (D), D is not informative for E, because true positives are already not informative and because the correction (i.e. labeling) of false positives might not be useful for training event extraction systems where event descriptions are scarce, and thus there are far more negative examples than positive examples. In other words, the primary goal of our active learning method is to correct more false negatives, that is, to annotate the true events not extracted by the existing system. Figure 1 depicts the workflow of the proposed method.
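As a schematic illustration of the pool-update loop just formalized, the following sketch (ours, with a generic `informativity` scorer and `oracle` labeler standing in for the components described below) shows the overall control flow:

```python
def active_learning(unlabeled, labeled, informativity, oracle, rounds, batch=1):
    """Generic pool-based active learning loop.

    `informativity(doc, labeled)` scores a document given the current labeled
    pool; `oracle(doc)` returns its manual annotation.
    """
    for _ in range(rounds):
        if not unlabeled:
            break
        # rank the unlabeled pool and pick the top `batch` documents
        ranked = sorted(unlabeled, key=lambda d: informativity(d, labeled),
                        reverse=True)
        for doc in ranked[:batch]:
            labeled.append((doc, oracle(doc)))   # L(t) = L(t-1) ∪ {(x, y)}
            unlabeled.remove(doc)                # U(t) = U(t-1) \ {x}
    return labeled, unlabeled
```

In the proposed method the scorer is rebuilt each round from the current labeled pool, as described next.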
Our method works iteratively as follows: In round t, we train the TEES system and the statistical classifier based on L^(t−1). We measure the informativity of each unlabeled document among U^(t−1) and choose the top-ranked documents as feed for manual annotation. We measure the informativity score of a document at the sentence level, that is, as the average of the informativity scores of all the sentences in the document, as illustrated in (2). Fig. 1. Overview of the proposed active learning method: the integration of the underlying system into the active learning method. For the event extraction task, if the underlying event extraction system (TEES) can recognize the concept (C) in the given document (D), then D is not considered informative. In (2), θ_t indicates the current models of the TEES system and the statistical classifier at round t, but we will omit it for simplicity. The informativity of a sentence (S_k) is measured for the event concept set E, which contains all events defined in a given ontology, as expressed in (3). The informativity score for the event concept set is denoted as I(S_k, E). In fact, the BioNLP-ST'13 tasks include not only events, but also relations. A key difference between events and relations is that an event always involves an event keyword (e.g. 'regulates' for GeneRegulation), but a relation does not have any keyword (e.g. partOf). For simplicity, we mention only events in the paper, while our method handles both events and relations in the same way. Informativity for event concept set The informativity of a sentence for the event concept set is calculated as the sum of the informativity scores of the sentence for all the events as follows: As explained earlier, we treat a sentence as non-informative for an event if the event extraction system TEES can extract any instance of the event from the sentence. Otherwise, the informativity score is estimated as the probability of the concept given the sentence, as in (5). p(E_i|S_k) can be converted into (6) using Bayes' theorem. P(E_i) is estimated using maximum-likelihood estimation (MLE) based on the statistics of event annotations in the training data. As for P(S_k|E_i), we score the correlation between the sentence S_k and the event E_i with a real-valued scoring function Z (see below for details) and use the softmax function to represent it as a probabilistic value, as shown in (7). We use two types of units to approximately represent the sentence S_k: n-grams (NG) and predicate-argument relations (PAS) produced by the Enju parser [26]. A sentence is represented as a bag of elements of a unit, for example, a bag of all n-grams or a bag of all predicate-argument relations from the sentence. A. Using N-gram features for probability estimation If we use the bag-of-n-grams model, the score Z(S_k : E_i) is measured using the average of the correlation scores between the n-grams (NG) contained in the sentence and the event, as expressed in (8), where len(S_k) is the normalization factor, calculated as the word count of sentence S_k. The probability p(NG_j|E_i) between an n-gram and an event is in turn calculated using a correlation score W(NG_j, E_i) between the n-gram and the event, together with the softmax function, as shown in (9). The correlation score W(NG_j, E_i) is calculated using one of the following three methods: 1) Yates' chi-square test, 2) relative risk, and 3) odds ratio [27].
For the calculation of the three methods, a 2×2 table is constructed for each pair of an N-gram and an event at the level of sentences, as shown in Table 1. For example, a indicates the number of sentences that contain the N-gram NG j and express the event E i . Based on the 2×2 table, the three methods of Yates' chi-square test, relative risk, and odds ratio calculate the correlation score for the pair as shown in the formulas (10), (11), and (12), respectively. B. Using predicate-argument relation for probability estimation Similarly for the bag of predicate-argument relation model, the score Z(S k : E i ) is calculated with the average of the correlation scores between the event and the predicate-argument relations from the sentence, as in (13). Additional features of active learning We introduce two additional features of our active learning method: Incorporation of event participants and dealing with out-of-vocabulary words. Incorporation of event participants The absence of event participants should negatively affect the prediction of events. To reflect this observation, we utilized a gene name recognition system, called Gimli [28], in order to recognize gene/protein names, since most of the BioNLP shared tasks involve genes and proteins (e.g. gene expression, gene regulation, phosphorylation). We incorporate the results of the Gimli system into our active learning method as follows: Total T indicates the number of gene/protein names predicted in a sentence S k . In fact, the Gimli system can be replaced with other named entity recognition systems for tasks whose event participants are other than gene/protein. Since the event extraction tasks for evaluating our active learning method (i.e. BioNLP shared tasks) are mainly about gene/protein, we do not replace the Gimli system when evaluating the incorporation of event participants. When we apply our active learning method for the tasks of named entity recognition (NER), however, we will evaluate it against two NER systems (i.e. Gimli, Stanford NER system) (see for details Sections 'Active learning method for NER task' in Page 8, 'Datasets and employed systems' in Page 11, and 'Evaluation of active learning method for NER task' in Page 19). Dealing with OOV issue with word similarity When we use the n-gram features, there is Out-of-Vocabulary (OOV) issue, such that some n-grams in the test dataset may not appear in the training dataset. To tackle this issue, we adopt the word2vec system, which is an unsupervised method for representing each word as a vector in a latent semantic model and for measuring word similarity [29], as follows: Consider an n-gram NG out that does not occur in the training dataset. We use word2vec to find the top-k n-grams NG in that are closest to NG out , where the word similarity score between NG out and each NG in is designated as Sim(NG out , NG in ). We then recalculate the correlation scoring function W (NG out , E i ) as shown in Formula (16). Note that since word2vec can only handle unigrams, and also since unigrams show the best performance in our experiments of parameter optimization (see the next section), we only deal with unknown unigrams in this method. The word similarity scores are trained a priori using the whole set of MEDLINE abstracts released in April 2014. We denote the n-gram-based informativity of sentence calculated using the updated correlation scoring function (16) as I(S k , NG OOV ). 
For example, when the correlation scoring function in (9) is updated, the resultant informativity in (4) is denoted as I(S k , E, NG OOV ). Linear combination of n-gram and predicate-structure relation features While we choose either n-grams or predicate-argument relations as features, we also tested the linear combination of the two feature sets for our active learning method, as follows: Table 2 illustrates the calculation of informativity scores in pseudo codes. Active learning method for NER task We also adapt our active learning method to named entity recognition (NER), considering the ontology concepts of named entities (e.g. Gene, Disease) instead of events (e.g. PPI, gene regulation). The method for named entity recognition estimates informativity, or the likelihood of a text expressing any named entities. Similar to Eq. (2), the informativity estimation in the NER task is expressed in (18). θ t indicates the current model of a given NER system and the statistical classifier at round t, but we will omit it for simplicity. We evaluate our method with two NER systems of Gimli for biomedical domain and Stanford NER system for general domain (see Section "Results and discussion" for details of evaluation), one system at a time. The informativity of a sentence for named entity set is calculated as the sum of the informativity scores of the sentence for all the named entities as follows: Similar to the active learning method for event extraction, we treat a sentence as non-informative for an named entity if the NER system can recognize any instance of the named entity from the sentence. Otherwise, the informativity score is estimated as the probability of the named entity given the sentence as follows: The probability p(N i |S k ) is calculated as follows: Similarly to the estimation for event, p(N i ) is estimated using the maximum-likelihood estimation (MLE) based on the statistics of named entities in the training data. For the calculation of p(S k |N i ), we follow similar steps as in (7), using n-grams (i.e. Formula (8)), but not using PAS (i.e. Formula (13)). Comparison with related works In this section, we describe the previous methods of active learning that we compare with our proposed methods for event extraction in the evaluation experiments. A. Conventional committee-based method The committee based active learning, based on a committee of classifiers, selects the documents whose classifications have the greatest disagreements among the classifiers and passes them to human experts for annotation, expressed as follows: is the disagreements among the classifiers for a document x under the model θ, and the Y is the whole label set. We use the summation of disagreement over the sentence S k contained in the document x. For each sentence, we measure the collective disagreement over the whole event concept set E defined in the ontology by using the sum of all disagreement for all event E i . The disagreement D(E i |S k ) is calculated using the absolute value of the differences of the probability produced by the classifiers, named the aforementioned informativity estimation method and the TEES event extraction system. The p TEES (E i |S k ) is the probability estimated from the TEES system, and p Informativity (E i |S k ) is from the informativity estimation using statistical method, which is calculated in Eq. (6). Note that while p(E i |S k ) in Eq. (5) is estimated using Eq. 
(6) only for the sentences from which no E i is recognized by the TEES, the same informativity probability in Eq. (25) is estimated for all the sentences of unlabeled documents. However, as the TEES is a support vector machine (SVM) based system and does not produce probabilistic outputs, we use the confidence the SVM classifier has in its decision for an event prediction as follows: The confidence is calculated using the difference between the distances from the separating hyperplane produced by the SVM classifier. This measure has been shown to perform best in active learning [30,31], and the calculation is expressed as follows: Here, dist(m, S k ) is the distance from the separating hyperplane for the predicted label m in sentence S k . Similarly, in adapting to the NER task, for each sentence we measure the collective disagreement over the whole named entity concept set N as the sum of the disagreements for all named entities N i . The disagreement D(N i |S k ) is calculated as the absolute value of the difference between the probabilities produced by the two classifiers, namely the aforementioned informativity estimation method and the NER system. p NER (N i |S k ) is the marginal probability provided by the Conditional Random Field (CRF) model of the NER system, and p Informativity (N i |S k ) comes from the statistical informativity estimation. B. Entropy based active learning method Entropy is the most common measure of uncertainty, indicating a variable's average information content. The document selection of entropy-based methods is formalized as follows: H θ (Y |x) is the entropy of a document x under the model θ, and Y is the whole label set. We use the summation of entropy over the sentences S k contained in the document x. For each sentence S k , we use the aforementioned bag of n-grams method and estimate H(Y |S k ) as the average entropy of each n-gram NG j in S k , as follows: We estimate the collective entropy over the whole event concept set E defined in the ontology as the summation of the entropy for all events E i . H(E i |NG j ) is calculated using the Weka package for the calculation of entropy [32]. C. Gibbs error based active learning method The Gibbs error criterion has been shown to be effective for active learning [33]; the corresponding method selects the documents that maximize the Gibbs error, as follows: Similarly to the entropy-based implementation, we calculate the collective Gibbs error as follows: For the calculation of H Gibbs (E i |NG j ), we use the conditional probability p(E i |NG j ), defined as follows [33], where p(E i |NG j ) is estimated using the proposed method as shown in (9): Datasets and employed systems The BioNLP shared tasks (BioNLP-ST) were organized to track the progress of information extraction in biomedical text mining. In this paper, we used the datasets of three tasks, namely GRO'13 (Gene Regulation Ontology) [19], CG'13 (Cancer Genetics) [34] and GE'13 (Genia Event Extraction) [35]. Each corpus was manually annotated with an underlying ontology, whose number of concepts and hierarchy differ from task to task. A comparison between the datasets is given in Table 3. For the NER experiments, we employ the Stanford NER system for the CoNLL [37] dataset and the Gimli gene name recognition system [28] for the BioCreative II Gene Mention [38] dataset. Note that in the BioCreative task the named entities are naturally of one class, i.e., the Gene/Protein name, while the CoNLL dataset involves four classes of named entities (i.e. Person, Organization, Location, Misc).
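The selection criteria described above (committee disagreement, entropy, and Gibbs error) each reduce to a few lines once a probability distribution over event types is available per sentence. The sketch below is a minimal illustration under that assumption; the margin-style SVM confidence is a common formulation and is only a stand-in for the exact expression used with TEES.

```python
import math

def committee_disagreement(p_system, p_informativity):
    """Sum over events of |p_TEES(E_i|S_k) - p_Informativity(E_i|S_k)|."""
    return sum(abs(p_system[e] - p_informativity[e]) for e in p_system)

def entropy(probs):
    """Shannon entropy of a distribution over event types, H = -sum p*log(p)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def gibbs_error(probs):
    """Gibbs error of a distribution over event types, 1 - sum p^2."""
    return 1.0 - sum(p * p for p in probs)

def svm_margin_confidence(distances):
    """
    Margin-style confidence from SVM decision values: the difference between
    the two largest distances from the separating hyperplane (assumed form).
    """
    top2 = sorted(distances, reverse=True)[:2]
    return top2[0] - top2[1] if len(top2) == 2 else top2[0]
```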
Evaluation metrics for comparison of active learning methods To compare the performance of the different strategies of sample selection, we plot their performance in each iteration. Since the difference between some plots is not obvious, however, we mainly use the evaluation metric of deficiency for comparison [39,40], defined as follows: The acc t (C) is the performance of the underlying classifier C at t th round of learning iteration. AL is an active learning method, and REF is a baseline method (see below for details). n refers to the total number of rounds (i.e. 10). A deficiency value smaller than 1.0 means that the active learning method is superior to the baseline method, and in general, a smaller value indicates a better method. Parameter optimization We first take a parameter optimization step to determine the most appropriate parameters for the aforementioned calculation of informativity scores. Correlation measure and n-gram size As mentioned above, we considered three correlation measures to estimate the correlation score between ngram and event, including chi-square test, relative risk, and odds ratio. We also should determine the value of n for n-grams. To find the optimal solutions for the two tasks, we carried out a simulation of ontology concept prediction at the sentence level as follows: Given a sentence S i and N i ontology concepts manually annotated on the sentence, we predict the top N i ontology concepts in S i and compare them with the N i manually annotated concepts, measuring the overlap between the two concepts sets. We select the best combination of co-occurrence analysis method and n-gram size for the rest of experiments in this paper. Using 10-fold cross validation, the average prediction rate is calculated and reported in Table 4. Each column corresponds to an n-gram size, and each row to one of the three co-occurrence analysis methods used for the prediction. Note that when N=2 (i.e. bi-grams), it does not include unigrams for the calculation. N=1-2 indicates the mixture of unigrams and bi-grams. This experiment is carried out using the GRO'13 dataset. As shown in Table 4, for all co-occurrence analysis methods, the accuracy mostly drops as the length of Ngrams increases. This may happen due to the data sparseness problem for large N-grams. We choose to use chisquare test and unigrams for the following experiments based on the results. Parameter for the incorporation of event participants The parameter of δ in Eq. (15) is to determine the significance of effects of event participants on event concept prediction. We tested our active learning method in Eq. (14) against the GRO'13 dataset with the δ values set as 0.15, 0.25 and 0.35. We summarize the performance results in terms of deficiency in Table 5. We choose the δ = 0.25 for the following experiments based on the results. The deficiencies of active learning method using different factor against the GRO'13 are reported. The best deficiency is highlighted in boldface in this table and also in the tables below Parameter for dealing with OOV issue In dealing with the OOV issue, we choose top-k similar words for an unknown word, as in Formula (16). In order to choose the optimal value for k, we use the linear combination method in Eq. (17) with the other parameters α = 0.1, β = 0.1 and γ = 0.8, and test our active learning method against the GRO'13 dataset, as changing the k value from 5 to 25. We summarize the deficiency of the active learning method using the different k values in Table 6. 
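The deficiency measure can be computed directly from two learning curves. The sketch below follows the common definition from the cited works [39, 40] (shortfalls relative to the baseline's final performance, summed over rounds and taken as a ratio); the learning-curve numbers are hypothetical.

```python
def deficiency(acc_al, acc_ref):
    """
    Deficiency of an active learning method AL relative to a baseline REF.
    acc_al, acc_ref : per-round performance (e.g. F-score) of AL and REF.
    Values below 1.0 indicate that AL approaches the baseline's final
    performance faster than the baseline itself.
    """
    final_ref = acc_ref[-1]
    num = sum(final_ref - a for a in acc_al)
    den = sum(final_ref - a for a in acc_ref)
    return num / den if den else float("inf")

# Hypothetical learning curves over 10 rounds:
al  = [0.40, 0.52, 0.60, 0.66, 0.70, 0.73, 0.75, 0.76, 0.77, 0.78]
ref = [0.35, 0.45, 0.53, 0.60, 0.65, 0.69, 0.72, 0.74, 0.76, 0.78]
print(round(deficiency(al, ref), 3))   # < 1.0 => AL outperforms the baseline
```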
As the result, we choose k=25 for the remaining experiments. Evaluation of active learning methods for event extraction Active learning methods using informativity estimation In the following evaluations, we show the learning curves and deficiencies of the event extraction system TEES under different sample selection strategies against the dataset of GRO'13, CG'13 and GE'13 task. The active learning methods use only the informativity estimation, but not the additional features such as incorporation of event participants and dealing with OOV issue, which will be discussed in the next section. We compare the proposed active learning method with other sample selection strategies, including random selection, and entropy-based [17], and Gibbs error [33] based, as well as a conventional committee based active learning methods. We use the random selection as the baseline for deficiency calculation. Each experiment has ten rounds, where in each round, 10 % of the original training data are added for training the TEES system. The initial model of the TEES system before the first round is trained only on the development dataset. Note that the test data of each dataset is fixed. The followings are considered for the selection of additional 10 % training data in each round: -Random selection: We randomly split the training data into 10 bins in advance, and during the training phase in each round, one bin is randomly chosen. We report the averaged performance of random selection for ten times (hereafter referred as RS_Average). -Proposed active learning: We evaluate the method using either unigrams (Unigram) or predicate-argument relations (PAS). The resultant method is referred as AL(Informativity_Unigram) and AL(Informativity_PAS), respectively. -Conventional committee-based active learning: We evaluate the committee based method based on (22), using the confidence score produced by TEES. We estimate the informativity using either unigrams (Unigram) or predicate-argument relations (PAS) for the proposed statistical method. The resultant method is referred as AL(Conventional Committee_Unigram) and AL(Conventional Committee_PAS), respectively. We first apply those methods to the dataset of GRO'13 [19] and measure the performance change of the TEES system with the incremental feed of the training data. We summarize the deficiency for each method in Table 7. The proposed active learning methods and the conventional committee-based methods achieve deficiency value of less than 1, while the entropy and Gibbs error method achieve a deficiency higher than 1, suggesting that the entropy and Gibbs error methods do not perform better than that of random selection. Particularly, the AL(Informativity_Unigram) method achieves the best deficiency of 0.760, while the corresponding conventional committee based method achieves the performance of 0.832 in AL(ConventionalCommittee_Unigram), which is an 8.65 % improvement for the informativity based method over that of conventional committee-based method. However, when using the PAS model, the AL(Informativity_PAS) achieves deficiency of 0.845, which is 1.78 % worse than that of the committee-based method, whose deficiency is 0.830. In addition, when comparing the performance of the methods using the PAS and unigram, we notice that using the unigram, the proposed informativity method shows an 10.1 % improvement over that using PAS model, yet this is not evident in the committee-based method. 
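The evaluation protocol described above (ten rounds, 10 % of the unlabeled pool added per round, a fixed test set, and an initial model trained only on the development data) can be sketched as the following loop. The names train_system, evaluate and informativity are placeholders for the TEES training/evaluation wrappers and whichever scoring method is under test, and whether the development data is included in later retraining rounds is an assumption here.

```python
def run_active_learning(train_docs, dev_docs, test_docs,
                        train_system, evaluate, informativity, rounds=10):
    """
    Sketch of the round-based protocol: in each round the most informative
    10 % of the remaining unlabeled documents are moved into the training
    pool, the event extraction system is retrained, and it is evaluated on
    the fixed test set.
    """
    selected, remaining = [], list(train_docs)
    bin_size = max(1, len(train_docs) // rounds)
    model = train_system(dev_docs)          # initial model from development data only
    scores = []
    for _ in range(rounds):
        remaining.sort(key=lambda d: informativity(d, model), reverse=True)
        batch, remaining = remaining[:bin_size], remaining[bin_size:]
        selected.extend(batch)
        model = train_system(dev_docs + selected)   # retrain with the enlarged pool
        scores.append(evaluate(model, test_docs))
    return scores
```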
The results suggest that the proposed informativity method performs best when using the unigram model in the GRO'13 dataset. We then plot the learning curves for each method in Figs. 2 and 3. In Fig. 3, the AL(Informativity_Unigram) Comparison of active learning with informativity based, entropy-based, Gibbs error based, and conventional committee based method, random selection against CG'13 dataset. The learning curves for the TEES system under active learning (AL), using the Gibbs error based method (Gibbs Error), entropy based method(Entropy), conventional committee based method (ConventionalCommittee) and the proposed informativity method (Informativity), as well as the random selection (RS), when tested against the CG'13 task dataset. The active learning method uses the unigram model method is consistently performing over the other methods after 50 % of the documents are selected, which also explains the results in the comparison of deficiency values. In addition, in the comparison of average number of instances per ontological concept provided in [41], the GRO'13 dataset have 13 instances per concept, while such value for GE'13 dataset is 82. This also suggests that in datasets such as GRO'13 whose document annotation may not be abundant, the active learning method using the unigram may perform better than the PAS model. However, the experiment result in the GRO'13 dataset indicates that the proposed informativity based active learning method with unigram model can show better performance than the conventional committee-based, the entropy based and the Gibbs error based active learning methods. We then carry out a similar experiment using the CG'13 dataset. We summarize the deficiency for each method dataset. In addition, while comparing the proposed informativity method and committee-based method, the informativity method achieves better deficiency value over the committee-based method. In terms of deficiency difference, the improvements are 0.020 and 0.008, for PAS and unigram feature, respectively, which is a less obvious improvement for the informativity method. However, this also suggest that the PAS feature may be more sensitive than that of unigram in the CG'13 dataset. Note that one of the specialties in CG'13 dataset is that only a single We extend the aforementioned active learning methods to the GE'13 dataset, and the Table 7 summarize the deficiency of the methods. In Table 7, all methods achieve deficiency values less than the random selection. The method of Gibbs error based approach achieve the deficiency of 0.850, while the deficiency for the entropy method is 0.854. The proposed active learning methods using the unigram shows a more obvious improvement than that using PAS. For instance, in the committeebased method, there is an improvement of 40.1 % for the unigram model over the PAS model. This may suggest that, against the GE'13 dataset, the unigram feature is more suitable for proposed method than that of the PAS feature. We notice a more obvious improvement for the unigram model in the informativity method. Particularly, the best performing AL(Informativity_Unigram) achieve a deficiency value of 0.139. While the corresponding committee-based method achieve the deficiency of 0.263 in AL(ConventionalCommittee_Unigram). We plot the learning curves in Figs. 6 and 7. In the Fig. 
7, the active learning method using unigram generally shows obvious improvement over the baseline of random selection method, yet the active learning method using PAS show less significant improvement over the baseline method. This may due to the fact that the ontology defined in GE'13 task is generally less complicated than that in GRO'13 and CG'13. In addition, the document annotation in the GE'13 dataset may be abundant, as the average number of instances per ontological concept in GE'13 dataset is 82, above six times more than that of GRO'13 dataset [41]. Given the dataset with less complicated ontological concepts and abundant training data of document annotation, the unigram model may show obvious improvement for active learning methods. Incorporation of event participants We evaluate the active learning method that is incorporated with the recognition of gene/protein names for event extraction, as illustrated in Formula (14). We show the performance of the TEES system, with active learning method that is either with or without using the gene/protein names. Such experiment is carried out using the GRO'13 dataset. The experiment results are plotted in Fig. 8 and we summarize the deficiency values in the Table 8. In the Table 8, the incorporation of gene/protein names shows positive effects towards the active learning method for event extraction, for both of bag of n-gram or PAS method. By using the gene/protein names, the deficiency for the active learning method using PAS is further improved from 0.845 to 0.589, which is a 30.3 % improvement. Yet in the unigram model of the informativity method, the improvement is rather less significant of 7.1 %, which may suggest that some named entities are already captured as n-grams, thus redundant. In addition, we notice similar improvement of the conventional committee-based method by incorporating the information of event participants into the part of statistical informativity estimation, from 0.830 (i.e. Con-ventionalCommittee_PAS) to 0.693 (i.e. Conventional-Committee_PAS + NE), a 16.5 % improvement. However, this improvement is significantly less than that for our proposed method, which may indicate that the confidence scores of the TEES used by the conventional committee-based method hamper the effects of event participants. Dealing with OOV issue with word similarity The n-gram model is based on the 'registered' n-grams that occur in the training data, which has the issue of Outof-Vocabulary (OOV) words. We solve this by using the word2vec toolkit to find top-k words that are closest to a given OOV word in the test data and to use their weights to estimate the weight of the OOV word. The results of evaluating the word vector incorporation against the GRO'13 dataset are plotted in Fig. 9, and the deficiency is summarized in Table 9. Note that the experiments about OOV word handling are carried out only for events, excluding relations, observing that the relations of the BioNLP-ST'13 tasks are little affected by the OOV issue, since they are not associated with trigger words. By using the word similarity, the n-gram model method is further improved, as the deficiency of n-gram model goes from 0.790 to 0.769, an improvement of 2.66 %. The rather less significant improvement may suggest that such OOV issue is rather not prevalent in the GRO'13 dataset. 
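A minimal sketch of the OOV handling with word2vec is shown below, using gensim. Since Formula (16) is not reproduced in the text, the score for an unknown unigram is approximated here as a similarity-weighted average of the scores of its k nearest in-vocabulary unigrams; the embedding file path in the comment is hypothetical.

```python
from gensim.models import KeyedVectors

def oov_correlation(oov_word, event, correlation, wv, vocab, k=25):
    """
    Estimate W(NG_out, E_i) for an out-of-vocabulary unigram by combining the
    correlation scores of its k most similar in-vocabulary unigrams, weighted
    by word2vec similarity (a plausible stand-in for Formula (16)).
    """
    if oov_word not in wv:                    # also unknown to the embeddings
        return 0.0
    neighbours = [(w, s) for w, s in wv.most_similar(oov_word, topn=k * 4)
                  if w in vocab][:k]          # keep only 'registered' unigrams
    if not neighbours:
        return 0.0
    weighted = sum(s * correlation.get((w, event), 0.0) for w, s in neighbours)
    return weighted / sum(s for _, s in neighbours)

# The embeddings would be trained on MEDLINE abstracts, e.g.:
# wv = KeyedVectors.load("medline_word2vec.kv")   # hypothetical path
```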
Linear combination of n-gram and predicate-structure relation features Lastly, we linearly combine the proposed n-gram and predicate-structure relation features for the active learning, as expressed in Eq. (17), and to understand which of the active learning methods proposed in this paper are more important towards the overall performance. The results of comparison are plotted in Fig. 10, and we summarize the deficiency values in Table 10. Overall, the weight combination of (α=0.1, β=0.1, γ =0.8) shows the best performance (deficiency 0.563). Compared to PAS or unigram-based statistics, the incorporation of event participants has the most effect on the best performance. Note, however, that the model of using only the event participants, i.e., the weight combination of (α=0, β=0, γ =1), achieves the deficiency of 0.583, higher than the best deficiency, which indicates that the PAS or n-gram based statistics are complementary to event participants. Evaluation of active learning method for NER task We apply the active learning method into NER task as expressed in Eq. (18), and follow the similar experiment design. Each sample selection method starts with the same held-out labeled development dataset for model initialization and a pool of unlabeled training dataset for selection. In each round, 10 % of the unlabeled documents in the training dataset are selected by different sample selection strategies. For evaluation, we report the performance of NER system trained with the selected training document in each round, against the same held-out test dataset following the official evaluation procedure. The sample selection strategies are as follows: -Random selection: We randomly split the training dataset into 10 bins in advance, one bin is randomly chosen in each round. Following 10-fold cross validation, we report the averaged performance in each round. (hereafter referred to as RS_Average) -Entropy-based active learning: The entropy of documents are calculated, and select documents by their entropy values, from the top to bottom. (designated as AL(Entropy) ) -Maximum Gibbs Error based active learning: Similar to the entropy-based method, but uses the Gibbs error, as introduced in [33]. (designated as AL(GibbsError) ) -Proposed active learning method using informativity scoring only: Use the aforementioned system in Fig. 11 Comparison of active learning with informativity based, entropy-based, Gibbs error based, and conventional committee based method, and random selection against BioCreative dataset. The learning curves for the Gimli system under active learning (AL), using the Gibbs error based method (Gibbs Error), entropy based method(Entropy), conventional committee based method (ConventionalCommittee) and the proposed informativity method (Informativity), as well as the random selection (RS), when tested against the BioCreative task dataset Eq. (18), and selects documents based on their informativity scores. (designated as AL(Informativity)) -Conventional committee-based active learning: We evaluate the committee based method based on (22), using the confidence score produced by NER system. The resultant method is referred as AL(Conventional Committee). We applied these methods to the BioCreative dataset and plotted the learning curve of Gimli in Fig. 11, and summarized their deficiency values in Table 11. In Fig. 11, the proposed active learning method show steady improvement over the other methods in most rounds. 
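The linearly combined informativity described at the start of this passage is simply a weighted sum of the three component scores; a short sketch with the best-performing weight setting is given below (the component scores themselves would come from the estimators sketched earlier).

```python
def combined_informativity(i_unigram, i_pas, i_ne,
                           alpha=0.1, beta=0.1, gamma=0.8):
    """Weighted combination of the unigram, PAS and event-participant
    informativity scores, with the (0.1, 0.1, 0.8) setting reported as best."""
    return alpha * i_unigram + beta * i_pas + gamma * i_ne
```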
Based on the deficiency comparison in Table 11, the proposed method achieved a deficiency value of 0.514, while the deficiency for the conventional committee based method is 0.684. We carried out similar experiments with the CoNLL dataset, and the learning curves are plotted in Fig. 12, and the deficiencies are compared in Table 11. In Fig. 12, the proposed active learning method outperforms the other methods; and in terms of deficiency, the proposed method achieves 0.575 in the deficiency, a nearly 42 % improvement over the random selection. In contrast, the benchmark of Entropy and Gibbs error based approaches also are shows deficiency value of less than 1, yet their improvement over the random selection is nearly 26 % and 11 %. The deficiency for the conventional committee based method is 0.763. The experiment results in the BioCreative and CoNLL datasets indicate that the proposed informativity based method can show better performance than the conventional committee-based method, as well as the Entropy and Gibbs error based methods. Conclusions In this study, we proposed a novel active learning method for ontological event extraction, which is more complex than the simple PPI extraction. Our method measures the collective 'informativity' for unlabeled documents, in terms of the potential likelihood of biological events unrecognizable for the event extraction system. We evaluated the proposed method against the BioNLP Shared Tasks datasets, and showed that our method can achieve better performance than other previous methods, including entropy and Gibbs error based methods and the conventional committee-based method. In addition, the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improved the active learning method. Finally, we adapted the active learning method into named entity recognition tasks and showed that the method also improved the document selection for manual annotation of named entities. Fig. 12 Comparison of active learning with informativity based, entropy-based, Gibbs error based, and conventional committee based method, and random selection against CoNLL dataset. The learning curves for the Gimli system under active learning (AL), using the Gibbs error based method (Gibbs Error), entropy based method(Entropy), conventional committee based method (ConventionalCommittee) and the proposed informativity method (Informativity), as well as the random selection (RS), when tested against the CoNLL task dataset
9,738.4
2016-04-27T00:00:00.000
[ "Biology", "Computer Science" ]
Simulation and Test of Energy Consumption of High-Tc Superconducting Coils in Electromagnetic Levitation System In order to evaluate the energy consumption of the superconducting magnetic levitation system with real-time control, the methods of simulation and test were used to study the AC plus DC loss of the superconducting coil. The simulation established a 2D finite element model in COMSOL, which was based on the H equations method with simple form and fast calculation speed. The test was carried out on a new type of measurement system, which adopted the electrical method with high sensitivity, simple operation and easy implementation. The results of simulation and test showed that, under normal working conditions, the levitation current in the form of AC plus DC produced a relatively small energy loss below 1mW per meter. This conclusion is helpful for the promotion and application of superconducting magnetic levitation control system. I. INTRODUCTION H IGH-Tc superconducting magnetic levitation is a novel and promising research direction. It uses superconducting tapes to completely replace normal wires to wind the excitation coils of levitation electromagnets, and through real-time control of the current in superconducting coils to achieve the stable levitation of the system [1]- [3]. It can effectively solve the shortcomings of the normal-conducting magnetic levitation system, such as heating of the magnet, saturation of the magnetic field, limited carrying capacity, small levitation gap, and high rail cost [4]. This type of technical scheme has important practical significance for improving the performance of current maglev system and reducing its energy consumption. In our previous published work [5], we designed and built a set of high-Tc superconducting magnetic levitation test system based on Bi-2223/Ag tape, which achieved stable levitation without quenching under the conditions of 77K and current ≥ 20A, as shown in Fig. 1. This system not only verified the feasibility of using HTS tape to completely replace the normal wire for winding maglev coils, but also its current-carrying index has been greatly improved compared with previous experimental systems, reaching the level of engineering application for the first time. However, no indepth study of its loss characteristics. Broadly speaking, the superconducting magnetic levitation system is an AC application of superconducting magnets, and there should be a certain degree of AC loss, but it is different from the AC equipments such as superconducting motors [6]- [8], superconducting transformers [9], accelerator magnets [10], etc. When the system is normally suspended, although the control current inside superconducting coils is constantly changing, it actually superimposes a relatively small dynamic component on a large steady-state DC component, which is also generally called AC plus DC. In theory, the loss of AC plus DC should be less than the pure AC loss of the same level. However, the loss under this special working condition still needs to be studied. If the loss is too large, the advantages of superconducting tapes used in electromagnetic levitation systems cannot be reflected. Regarding AC plus DC loss, some scholars have proposed some theoretical calculation methods [11]- [13], but these methods have made varying degrees of assumptions and simplifications in their derivation process, not only the accuracy is not high, but the scope of application is limited. 
For this reason, this article focuses on the study of the AC plus DC loss of the superconducting magnetic levitation system from the perspective of simulation and test, and compares and discusses the energy consumption and total cost of superconducting and normal-conducting levitation systems. In addition, some issues need to be emphasized. A suspension system, in which the internal current of superconducting coils can be adjusted in real-time by an active control method, is a brand-new suspension system. Therefore, the current research on this type of suspension system, including design, loss, modeling, control, etc., are very few. This article focuses on the aspect of loss. Because it is different from the DC current in superconducting coils, this kind of suspension system will inevitably produce AC loss in the superconducting coils. If the loss is too large, it will restrict its subsequent application and promotion, so qualitative evaluation is urgently needed. Therefore, this article is not to propose a set of new theories or solve the engineering problems encountered, but to use simulation and test methods to study and analyze the energy consumption level of the suspension system according to its special working conditions, and evaluate the applicability of the system. The research conclusions can lay the foundation for follow-up research. A. MATHEMATICAL PROBLEMS IN SIMULATION For high-Tc superconductors, the finite element method (FEM) is a very effective simulation analysis method to study its electromagnetic characteristics and calculate its AC and DC losses. As for the mathematical calculation behind FEM, the H equations method with simple form and fast calculation speed is usually adopted [14]. The main feature of H equations method is that magnetic field intensity H is the independent variable, and its derivation process from Maxwell's equations is as follows. Firstly, the differential form of Maxwell's equations is shown as: In the solution domain, the relationships of magnetic flux density B versus magnetic field intensity H and electric field intensity E versus current density J are shown in (2) and (3), respectively. In (2) and (3), µ 0 represents the permeability of the vacuum. ρ represents the resistivity of the substance. For the liquid nitrogen region, ρ is a constant; for the superconducting region, it is necessary to calculate the equivalent resistivity of the superconductor according to the E-J index model, i.e. In (4), E 0 represents the critical electric field intensity, which is also called the quench judgment standard of superconductors; According to international standards, the value is generally 1µV /cm. J c represents the critical current density in A/m 2 . The value of n is used to describe the quench change rate of high-Tc superconductors, which is related to the inherent characteristics of superconducting materials. It needs to be noted, ρ sc is only a hypothetical intermediate variable, and does not represent the actual current density distribution of the superconductor. Substituting (2), (3) and (4) into the first two formulas in the system of equations (1), one can get the H equations used to solve the electromagnetic characteristics in the superconducting region, shown as The three variables H, E and J can be solved by equations (5), and then E and J can be integrated to obtain the loss value Q, namely (the subscript scr of the integral represents the region where the superconductor is located) B. 
SIMULATION MODEL AND RELATED SETTINGS The multiphysics coupling analysis software COMSOL was adopted, and in order to reduce the amount of calculation, a 2D simulation model of multi-layer tapes stack was established. In the calculation of electromagnetic heating, for taking into account the magnetic field between each layer of the superconducting coil, we subdivided the cross section of the coil, which was divided into multi-turns, and each turn represented a double-cake single layer, as shown in Fig. 2(a). The principal parameters of Bi-2223/Ag tape and HTS coil are shown in Tab. 1. In order to improve calculation efficiency, the simulation model was further optimized: 1) mesh the superconducting coil area according to a more regular quadrilateral; 2) coarsen the triangular mesh of the liquid nitrogen area. The improved model meshing is shown in Fig. 2 C. TEST AND FITTING OF THE E-J MODEL OF SUPERCONDUCTING COIL As known from Section 2.1, the E-J model of superconducting coil is needed in the simulation. Generally, different superconducting materials have different E-J models. In addition, the critical properties of superconducting materials are affected by various factors such as temperature, magnetic field and mechanical deformation. Even for the same superconducting sample, the results of its E-J model measured at different times may be different. Therefore, in order to improve the authenticity of the simulation, we firstly carried out experimental measurements on the critical characteristics of the superconducting coil, and obtained its true E-J exponential relationship through data fitting, as shown in Fig. 3. In Fig. 3, the abscissa I represents the current input into the superconducting coil through a constant current source; the ordinate U represents the steady-state voltage at both ends of the coil measured by a precision voltmeter. In fact, the meaning of U here and E in the previous equations are essentially similar, but the E in the equations has universal meaning and can represent superconductors of different shapes. Since the total length of the tape in a single superconducting coil is 13.5m, if E 0 = 1µV /cm is used as the quench criterion, the result of the fitting function is Considering that the cross-sectional area of the tape is 4.2mm × 0.23mm, which is 9.66 × 10 −7 m 2 , its critical current density can be calculated as J c = 5.01 × 10 7 A/m 2 . Then, the E-J model of the superconducting coil is The superconducting coils were made of Bi-2223/Ag tape produced by Innova Superconductor Technology Co., Ltd. Their researchers have previously measured the n values of different lengths (5cm-1m) of superconducting tapes, the results obtained were 8.1-15.5. However, after the superconducting tape is wound into a coil, the value of n will be attenuated to a greater extent, so the 8.39 we got here is consistent with their results. In addition, different superconducting coils have different values of n under different conditions, here we only measured the n value required by the simulation model. D. 
LOSS UNDER THE GIVEN WORKING CONDITIONS When the superconducting levitation system is in a steady state, its excitation current is in the form of AC plus DC, which can be approximated by the following formula (where I d represents the DC component, I a represents the amplitude of the AC component, and f represents the frequency of the AC component): When a current is applied to a high-Tc superconductor, the distribution of current density is not uniform, and with the passage of time, the distribution of current density will also change, which makes the loss of the superconductor at different moments different. In order to observe this phenomenon, we first selected a set of typical parameters for simulation, namely I d = 20A, I a = 5A, f = 50Hz. This set of parameters is closest to the steady-state operating conditions of the superconducting levitation system. The simulation step length is 1/10 cycles, and the total simulation time is 5 cycles, which we found is enough to analyze the data after a lot of simulation tests. At different times in the first cycles, the change trend of the current density distribution inside the superconducting coil is shown in Fig. 4(a)-(d). As can be seen from Fig. 4, in the initial stage of the current flowing into the coil, the areas with higher current density are mainly distributed inside the outermost and innermost tapes and at the two ends of the middle tapes; As it progresses, current begins to penetrate into the inner area of the coil. At the same time, the area with the largest current density gradually concentrates on the four corners of the coil section, and the value of current density gradually decreases. In addition, it can be seen that starting from the second cycle, the current density distribution inside the superconducting coil gradually tends to a stable state. Based on the simulation model and the given working conditions, the relationships between the coil loss Q t and the three parameters I d , I a and f were analyzed, as shown in Fig. 5(a), (b) and (c) respectively. These three parameters were not randomly selected, and were mainly based on the real maglev train system. When the maglev train runs stably, its steady-state levitation current is generally around 20A, the dynamic adjustment current is generally less than 5A, and its adjustment frequency is not greater than 100Hz. However, in order to cover more data and it is easier to operate than the experiment, the range of parameters was expanded in the simulation. As can be seen from Fig. 5, the total loss Q t of superconducting coil increases with the increasing of DC component I d , AC component I a and frequency f . In addition, the loss value of superconducting coil is relatively low. As for a typical operating condition, the power loss of a single superconducting coil is only about 0.04W . A. TEST SYSTEM AND ITS OPERATING PRINCIPLE Considering the small size and low loss of the superconducting coil, the loss test was carried out on a new type of measurement system, which adopts the electrical measurement method with high sensitivity, simple operation and easy implementation. Its principle diagram is shown in Fig. 6. 
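As a rough illustration of how the fitted E-J model and the AC plus DC drive current enter the loss calculation, the sketch below evaluates the power-law dissipation for the typical working point (I_d = 20 A, I_a = 5 A, f = 50 Hz). It assumes a uniform current density over the tape cross section, which ignores the redistribution that the H-formulation FEM model captures, so it only shows the structure of the calculation and will not reproduce the simulated loss values.

```python
import numpy as np

# Parameters quoted in the text
E0 = 1e-4                  # critical electric field, 1 uV/cm expressed in V/m
Jc = 5.01e7                # critical current density of the coil, A/m^2
n = 8.39                   # power-law index fitted for the wound coil
AREA = 4.2e-3 * 0.23e-3    # tape cross section, ~9.66e-7 m^2
LENGTH = 13.5              # m of tape in a single coil

def drive_current(t, I_d=20.0, I_a=5.0, f=50.0):
    """AC plus DC levitation current I(t) = I_d + I_a*sin(2*pi*f*t)."""
    return I_d + I_a * np.sin(2 * np.pi * f * t)

def e_field(J):
    """E-J index model: E = E0 * (|J|/Jc)^n, with the sign of J."""
    return E0 * np.sign(J) * (np.abs(J) / Jc) ** n

f, cycles, steps = 50.0, 5, 10                 # 5 cycles, step of 1/10 cycle
t = np.linspace(0, cycles / f, cycles * steps, endpoint=False)
J = drive_current(t) / AREA                    # crude uniform-J assumption
p = e_field(J) * J * AREA * LENGTH             # instantaneous dissipation, W per coil
print(np.mean(p))                              # time-averaged transport-loss estimate
```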
The working process is as follows: 1) firstly adjust the function generator to obtain the required AC and DC components, frequency and period of the current waveform output by the programmable AC and DC power supply; 2) then input the current into the superconducting sample to be tested, and collect the voltage at both ends of the sample and the current inside it through voltage leads and current transformer respectively; 3) finally send the voltage and current signals to the filter amplifier, after filtering and amplifying, to the oscilloscope recorder. In addition, for the superconducting coil, because the voltage signal contains inductive components and its phase leads the current signal by 90 degrees, it is necessary to use a compensation coil with a better inductance value to compensate it. Multiply the compensated voltage and current and after integrating, one can get the loss value of the sample. The test was completed at the Superconducting Research Center of Shanghai Jiaotong University. The test platform and test site are shown in Fig. 7(a) and (b), respectively. B. LOSS RESULTS OF THE TESTED SUPERCONDUCTING COIL We conducted a lot of experiments, data collection and postprocessing, and revised the simulation model based on the test results. In order to form a simple and intuitive comparison with simulation, only a part of the typical working conditions of a single superconducting coil were selected and divided into three cases: 1) set I a = 5A and f = 50Hz, then adjust I d = 5, 10, 15, 20, 25, 30A; 2) set I d = 20A, f = 50Hz, then adjust I a = 5, 7.5, 10, 12.5, 15A; 3) set I d = 20A, I a = 5A, then adjust f = 25, 40, 50, 60, 75Hz. The measured loss of the above three cases and compared with their corresponding simulation results after model correction are shown in Fig. 8(a), (b) and (c) respectively. As can be seen from Fig. 8, the measured data and the simulated data are in good agreement with the change trend, but the simulation results are slightly higher than the measured results in value. The reason may be that the measured loss is only the transmission loss, and the simulation results not only include the transmission loss, but also part of the magnetization loss caused by the changing magnetic field. However, the loss value of the superconducting coil is low, and the power loss of a single superconducting coil under normal working conditions is only about 0.04W . A. LOSS UNDER THE ACTUAL WORKING CONDITIONS In order to make the simulation results closer to the actual working conditions of superconducting levitation system, we collected several current data during the experiment, which represent the start floating, start landing, stable levitation and square-wave interference of the system, as shown in Fig. 9(a), (b), (c) and (d) respectively. Input the four actual levitation current into the simulation model of superconducting coil, the corresponding loss data can be obtained after calculation, as shown in Fig. 10 whether it is an equivalent formula or the test data, the simulation model can calculate the corresponding loss. As can be found from Fig. 10, under the four actual working conditions, the loss value generated by the superconducting coil is not large, and the maximum power loss of a single coil is less than 0.02W ; In addition, the area with large loss generally occurs at the moment when the levitation current fluctuates sharply. 
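The post-processing of the electrical measurement, multiplying the compensated voltage by the transport current and averaging over an integer number of cycles, amounts to a short numerical routine. The sampled waveforms below are made up purely to illustrate the calculation.

```python
import numpy as np

def measured_loss(v_comp, i):
    """
    Average dissipated power from sampled waveforms: the inductively
    compensated voltage v_comp(t) is multiplied by the transport current i(t)
    and averaged over a record that spans an integer number of AC cycles
    at a uniform sampling rate.
    """
    return float(np.mean(v_comp * i))          # mean of v*i = average power, W

# Hypothetical record: 5 cycles at 50 Hz sampled at 100 kHz
fs, f, cycles = 100_000, 50.0, 5
t = np.arange(0, cycles / f, 1 / fs)
i = 20 + 5 * np.sin(2 * np.pi * f * t)         # AC plus DC current, A
v_comp = 2e-3 * np.sin(2 * np.pi * f * t)      # in-phase (resistive) voltage, V
print(measured_loss(v_comp, i))                # ~5 mW for this made-up data
```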
However, the energy consumption of maglev trains mainly occurs in long-term steady-state operation, the loss generated by higher frequency and transient conditions can be negligi- ble in the total energy consumption when considering longterm accumulation. B. ENERGY CONSUMPTION AND TOTAL COST OF TWO LEVITATION SYSTEMS Under normal levitation conditions, i.e. the rated current is about 20A, the current fluctuation is about 5A, and the frequency range is 50-200Hz, the total loss of the superconducting levitation system with two coils is less than 0.1W . If aluminum wire (its resistance per unit length is 1.35 × 10 −3 Ω/m) is used to wind a normal conducting levitation electromagnet, the total resistance of the two coils is 0.0365Ω, and the resistive power loss is 14.58W , which is 145.8 times the loss of superconducting levitation system. In addition, considering the cooling efficiency of liquid nitrogen is about 1/15, the energy consumption of the superconducting levitation system using Bi tapes is about 1/10 of that of the normal conducting levitation system. However, one disadvantage of superconducting tape is that it is more expensive. Its price in Chinese market is about 150 Y uan/m (23 Dollars/m), which is about 15 times the price of aluminum wire. Besides, this is only the basic cost proposed by the manufacturer, including the tape itself and the winding coil, other content and factors such as the additional hardware requirements are not taken into consideration. Therefore, in order to save the amount of superconducting tape, a feasible solution is to increase the current of single-turn coil to reduce the number of coil turns. However, as the current increases, the loss generated by superconducting levitation system also increases, so a compromise between loss and price needs to be made. Assuming that the price of aluminum wire is 10 Y uan/m, the superconducting levitation system runs for 10 hours a day for 10 years, and the price of electricity is 1 Y uan/kW h, then the total cost versus excitation current of superconducting and normal-conducting levitation systems is shown in Fig. 11. As can be seen from Fig. 11, when the excitation current is in the range of 22-78A, the total cost of the superconducting levitation system is less than that of the normal-conducting levitation system, and the excitation current corresponding to the lowest cost is about 40A. One point worthy of special mention: due to the small size of the platform [5] and the low consumption of liquid nitrogen during the test, we stored the liquid nitrogen in an open foam dewar and used a liquid nitrogen pump to manually replenish the liquid nitrogen in real time. Because this method can effectively guarantee the normal operation of the test, we have not carried out research on the relevant thermal management system. However, we are currently planning to build a larger 1:1 test platform. The above method will no longer be applicable. We will learn from some schemes or examples in the application of HTS cables [15] to design and manufacture an enclosed liquid nitrogen cycle refrigeration system, to ensure the safe, automatic and stable operation of the system. The related thermal management research will be elaborated in a follow-up presentation. VOLUME 4, 2016 V. 
CONCLUSION In order to evaluate the energy consumption, economy and applicability of the superconducting magnetic levitation system, this article used the methods of simulation analysis and experimental measurement to study the AC plus DC loss of the superconducting coil, and compared with the normalconducting magnetic levitation system based on the results of simulation and test. The main conclusions are as follows: 1) The loss will increase with the increasing of the DC component, the amplitude and frequency of the AC component. 2) Under normal working conditions, the excitation current in the form of AC plus DC produces relatively small loss in the superconducting magnetic levitation system, and the order of magnitude is below 10 −3 W/m. 3) For the superconducting magnetic levitation system, only when its excitation current is placed in a certain range (the special case of this article is 22-78A), it has the advantages of energy consumption and total cost compared with the normal-conducting magnetic levitation system.
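The energy comparison quoted in the preceding section and in conclusion 3 can be checked with a few lines of arithmetic; the sketch below reproduces the wall-plug comparison at the rated 20 A working point and the ten-year electricity figures. Only the electricity part of the total cost is computed here; the tape-versus-wire material cost that also enters Fig. 11 is omitted.

```python
# Figures quoted in the text (two coils, rated current ~20 A)
I_RATED = 20.0                    # A
R_ALU = 0.0365                    # ohm, total resistance of two aluminium coils
P_ALU = I_RATED ** 2 * R_ALU      # resistive loss, ~14.6 W

P_SC_COILS = 0.1                  # W, upper bound on AC+DC loss of the two HTS coils
COOLING_PENALTY = 15              # ~15 W at the wall plug per watt removed at 77 K
P_SC_WALL = P_SC_COILS * COOLING_PENALTY   # ~1.5 W of wall-plug power

print(P_ALU, P_SC_WALL, P_ALU / P_SC_WALL)   # ratio ~10, as stated in the text

# Ten-year electricity use at 10 h/day and 1 yuan/kWh
HOURS = 10 * 365 * 10
print(P_ALU * HOURS / 1000, "kWh (aluminium) vs",
      P_SC_WALL * HOURS / 1000, "kWh (HTS, incl. cooling)")
```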
4,733.4
2021-01-01T00:00:00.000
[ "Physics", "Engineering" ]
A Stronger Theorem Against Macro-realism Macro-realism is the position that certain"macroscopic"observables must always possess definite values: e.g. the table is in some definite position, even if we don't know what that is precisely. The traditional understanding is that by assuming macro-realism one can derive the Leggett-Garg inequalities, which constrain the possible statistics from certain experiments. Since quantum experiments can violate the Leggett-Garg inequalities, this is taken to rule out the possibility of macro-realism in a quantum universe. However, recent analyses have exposed loopholes in the Leggett-Garg argument, which allow many types of macro-realism to be compatible with quantum theory and hence violation of the Leggett-Garg inequalities. This paper takes a different approach to ruling out macro-realism and the result is a no-go theorem for macro-realism in quantum theory that is stronger than the Leggett-Garg argument. This approach uses the framework of ontological models: an elegant way to reason about foundational issues in quantum theory which has successfully produced many other recent results, such as the PBR theorem. Introduction The concept of macro-realism was introduced to the study of quantum theory by Leggett & Garg alongside their eponymous inequalities [1]. Loosely, macro-realism is the philosophical position that certain "macroscopic" quantities always possess definite values. The Leggett-Garg inequalities (LGIs) are inequalities on observed measurement statistics that are derived by assuming a particular form of macro-realism and can be violated by measurements on quantum systems. The purpose of the LGIs is therefore to prove that quantum theory and macro-realism are incompatible. However, since its introduction the exact meaning of "macro-realism" has been the subject of debate [2][3][4][5][6][7][8]. Recently, there has been a surge of interest in violation of the LGIs from both physical and philosophical angles. The review in Ref. [9] comprehensively covers experimental and theoretical work up to 2014. More recent experimental work has focussed on noise tolerance and closing experimental loopholes [10][11][12]. Also, several theoretical investigations have aimed to interrogate and clarify exactly what is required to derive the LGIs and what is implied by their experimental violation [7,8,[13][14][15][16]. This paper follows the clarifying work of Ref. [8]. Macro-realism is an ontological position; that is, the statement that a certain "macroscopic" quantity is macrorealist is a statement about the real state of affairs, the ontology, of the universe. In the field of quantum foundations, the framework of ontological models has been developed as a way to analyse such statements in generality, making as few assumptions as possible. To use an ontological model to describe a system requires just two core assumptions. First, that the system being described has some ontological state-the real fact about how the system actually is (of course, which ontological state is currently occupied is generally unknown). Second, that standard probability theory can be applied to the ontological states. Their generality has made ontological models very useful and they have been used to derive and clarify many important results in quantum foundations including: Bell's theorem [17], the PBR theorem [18], and excess baggage [19]. Reference [20] comprehensively reviews many of these results. 
By using ontological models it is possible to illuminate and classify various definitions of macro-realism precisely [8]. This analysis reveals some fundamental loopholes in the Leggett-Garg argument for the incompatibility of quantum theory and macro-realism. In particular, it shows that violation of the LGIs serves only to rule out one subset of macro-realist models and that there are other macro-realist models of quantum theory which are compatible with the LGIs. For example, Bohmian quantum theory [21][22][23][24][25] can be viewed as a macro-realist model which reproduces all predictions of quantum theory and therefore cannot be ruled out by the Leggett-Garg argument. These loopholes are not experimental but logical; the only way to close them is to fundamentally change the argument. In this paper, a stronger theorem for the incompatibility between quantum theory and macro-realism is presented. This theorem closes a loophole in the Leggett-Garg argument and establishes that quantum theory is incompatible with a larger subset of macro-realist models. It does not prove incompatibility of quantum theory with all macrorealist models since this is not possible; such a result would be in conflict with the existence of the theory of Bohmian mechanics (section 5). The theorem proceeds in a very different manner than the Leggett-Garg argument and is related to the main theorem from Ref. [26]. It thereby circumvents many of the controversies of the original Leggett-Garg approach. It should be noted that mathematically there is no meaning to the stipulation that macro-realism is about "macroscopic" quantities, as opposed to other physical quantities that aren't "macroscopic". Philosophically, however, it is easy to understand the desire for macro-realism applying to "macroscopic" quantities. The types of physical quantity that humans experience are all considered macroscopic and they certainly appear to possess definite values. On the other hand, it is much easier to imagine that microscopic quantities that aren't directly observed behave in radically different ways. So while there is nothing in the structure of quantum theory to pick-out "macroscopic" versus "microscopic", the motivation for considering macro-realism does come from considering macroscopic quantities, hence the name. This paper is organised as follows. The framework of ontological models is introduced in section 2. This is then used in section 3 to give the precise definitions of macro-realism and its three sub-sets. Both of these sections are without reference to quantum theory and are entirely independent of it. This is as it should be, since those concepts apply to descriptions of any physical system based on philosophical assumptions and do not depend on any specific physical theory (of course their most common applications are with quantum systems). In section 4 macro-realism is applied to quantum theory and several useful definitions and lemmas about quantum ontological models are presented. Section 5 discusses the Leggett-Garg argument against macro-realism and explains why the existence of loopholes [8] restrict it to only a simple class of macro-realist models. Section 6 then proves the main theorem which introduces a new approach, ruling out a larger class of macro-realist models than Leggett-Garg while using weaker assumptions. A discussion follows in section 7 where conclusions are drawn, together with discussion of further research directions including the possibility for experiments based on this result. 
Ontological Models When we debate types of "realism" in quantum theory, we are normally making an ontological argument. We're trying to say something about the underlying actual state of affairs: whether such a thing exists, how it can or can't behave, and so on. So it is with macro-realism. The macro-realist, loosely, believes that the underlying ontology in some definite sense possesses a value for certain macroscopic quantities at all times. The framework of ontological models [27,28] has been developed to make discussions about ontology in physics precise and is the natural arena for such discussions. An ontological model is exactly that: a bare-bones model for the underlying ontology of some physical system. The system may also be correctly described by some other, higher, theory-such as Newtonian mechanics or a quantum theory-in which case the ontological model must be constrained to reproduce the predictions of that theory. By combining these constraints with the very general framework of ontological models, interesting and general conclusions can be drawn about the nature of the ontology. It is important to note that, while ontological models are normally used to discuss quantum ontology, the framework itself is entirely independent from quantum theory. The presentation of ontological models here follows Ref. [20], which contains a much more thorough discussion. As noted above, the framework of ontological models relies on just two core assumptions: 1) that the system of interest has some ontological state λ and 2) that standard probability theory may be applied to these states. Together, these bring us to consider the ontology of some physical system as represented by some measurable set Λ of ontic states λ ∈ Λ which the system might occupy. The requirement that Λ be measurable guarantees that probabilities over Λ can be defined. In the lab, a system can be prepared, transformed, and measured in certain ways. Each of these operational processes needs to be describable in the ontological model. Preparation must result in the system ending up in some ontic state λ, though the exact state need not be known. Thus, each preparation P gives rise to some preparation measure µ over Λ which is a probability measure (µ(∅) = 0, µ(Λ) = 1). For every measurable subset Ω ⊆ Λ, µ(Ω) gives the probability that the resulting λ is in Ω. Similarly, a transformation T of the system will generally change the ontic state from λ ∈ Λ to a new λ ∈ Λ. Recalling that the ontic state λ represents the entirety of the actual state of affairs before the transformation, then the final state can only depend on λ (and not the preparation method or any previous ontic states, except as mediated through λ ). The transformations must therefore be described as stochastic maps γ on Λ. A stochastic map consists of a probability measure γ(·|λ ) for each initial ontic state, such that for any measurable Ω ⊆ Λ, γ(Ω|λ ) is the probability that the final λ lies in Ω given that the initial state was λ 1 . Finally, a measurement M may give rise to some outcome E. Again, which outcome is obtained can only depend on the current ontic state λ . Therefore a measurement M gives rise to a conditional probability distribution P(E|λ ) 2 . For this paper it is only necessary to consider measurements that have countable sets of possible outcomes E. 
Putting these parts together: if we have a system where a preparation P is performed followed by some transformation T and some measurement M then the ontological model for that system must have some preparation measure µ, stochastic map γ, and conditional probability distribution P such that the probability of obtaining outcome E is is the effective preparation measure obtained by preparation P followed by transformation T . Note that ontological models are required to be closed under transformations. That is, for any preparation µ and transformation γ in the model then the preparation ν defined by Eq. (2) must also exist in the model (since a preparation followed by a transformation is itself a type of preparation). So an ontological model for some physical system does the following: 1. defines a measurable set Λ of ontic states for the system; 2. for each possible transformation method T defines a stochastic map γ from Λ to itself; 3. for each possible preparation method P defines a preparation measure µ over Λ, ensuring closure under the actions of the stochastic maps as in Eq. (2); 4. for each possible measurement method M defines a conditional probability distribution P over the outcomes given λ ∈ Λ; and then produces probabilities for measurement outcomes via Eqs. (1,2). The possible ontological models for a system can be constrained by requiring that these probabilities match those given by other theories known to accurately describe it (or probabilities obtained by experiment). It should be noted that ontological models are not usually defined in the measure-theoretic way presented here, but are often presented using probability distributions rather than measures. However it has been noted in Ref. [20] that this simplification precludes many reasonably ontological models, including the archetypal Beltrametti-Bugajski model [29] which simply takes ontic states to be quantum states. In order to do justice to macro-realism the more accurate approach has therefore been taken here. result of that paper is that the "macro-realism" intended by Leggett and Garg, as well as many subsequent authors, can be made precise in a reasonable way with the definition: "A macroscopically observable property with two or more distinguishable values available to it will at all times determinately possess one or other of those values." [8] Throughout this paper, "distinguishable" will be taken to mean "in principle perfectly distinguishable by a single measurement in the noiseless case". Note that macro-realism is defined with respect to some specific property Q. A macro-realist model might (and generally will) be macro-realist for some properties and not others. This property will have some values {q} and to be "observable" must correspond to at least one measurement M Q with corresponding outcomes E q . Reference [8] fleshes out this definition using ontological models and as a result describes three sub-categories of macro-realism. In order to discuss these it will be necessary to first define an operational eigenstate in ontological models. An operational eigenstate Q q of any value q of an observable property Q is a set of preparation procedures {P q }. This set is defined so that immediately following any P q with any measurement of the quantity Q will result in the outcome E q with certainty. 
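The display equations referred to above as Eqs. (1) and (2) appear to have been lost in extraction. In the standard measure-theoretic form that the surrounding prose describes, they read (a reconstruction, not a quotation):

```latex
\begin{align}
  \Pr(E \mid P, T, M) &= \int_{\Lambda} \Pr(E \mid \lambda)\,\mathrm{d}\nu(\lambda), \tag{1}\\
  \nu(\Omega) &= \int_{\Lambda} \gamma(\Omega \mid \lambda)\,\mathrm{d}\mu(\lambda)
  \qquad \text{for every measurable } \Omega \subseteq \Lambda. \tag{2}
\end{align}
```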
In other words, an operational eigenstate is simply an extension of the concept of a quantum eigenstate to ontological models: the preparations which, when appropriately measured, always return a particular value of a particular property. Note that if two values q, q have operational eigenstates then they can sensibly be called "distinguishable", since any system prepared in a corresponding operational eigenstate can be identified to have one value and not the other with certainty. The three sub-categories of macro-realism for some quantity Q are then: 1. Operational eigenstate mixture macro-realism (EMMR) -The only preparations in the model are operational eigenstates of Q or statistical mixtures of operational eigenstates. That is, for each preparation P q,i of each operational eigenstate Q q let µ q,i be the preparation measure and let {c q,i } q,i be a set of positive numbers summing to unity. In EMMR every preparation measure can be written in the form ν = q i c q,i µ q,i . Note that this means that the space of ontic states Λ need only include those λ accessible by preparing some operational eigenstate of Q, as no other ontic states can ever be prepared. 2. Operational eigenstate support macro-realism (ESMR) -Like EMMR, every ontic state λ ∈ Λ accessible by preparing some operational eigenstate but, unlike EMMR, there are preparation measures in the model that are not statistical mixtures of operational eigenstate preparations for Q. That is, let Ω be a measurable subset for which µ q (Ω) = 1 for every operational eigenstate preparation µ q . Then for every preparation measure ν in an ESMR model, ν(Ω) = 1 for all such Ω. Moreover, there exists some preparation procedure not in the mixture form required by EMMR. In other words, if you're certain to prepare an ontic state from some subset Ω when preparing an operational eigenstate of Q, then you're also certain to prepare an ontic state from Ω from any other preparation measure in the model. 3. Supra eigenstate support macro-realism (SSMR) -Every ontic state λ in the model will produce some specific value q λ of Q when a measurement of Q is made, but some of those ontic states are not accessible by preparing any operational eigenstate of Q. That is, for every λ ∈ Λ then there is some value q λ of Q such that P M (E q λ | λ) = 1 for every P M corresponding to a measurement of Q. Moreover, there exists some measurable subset Ω ⊂ Λ and preparation measure ν such that ν(Ω) > 0 while µ q (Ω) = 0 for every preparation measure µ q of an operational eigenstate Q q . To help unpack these definitions, they are illustrated in Fig. 1. In each of these cases, every ontic state λ (up to possible measure-zero sets of exceptions 3 ) is associated with a specific value q λ of Q, such that it can be sensibly said that λ "possesses" q λ . This is why they are all considered types of macro-realism. Let's consider this for each case in turn. In an EMMR model, any preparation can be viewed as a choice between preparations of operational eigenstates for values of Q, so any resulting ontic state λ "possesses" the corresponding value q since it could have been obtained by preparing an operational eigenstate of q. In an ESMR model, for every ontic state λ accessible by preparing some non-operational-eigenstate measure ν, λ can also be prepared by an operational eigenstate of exactly one value of Q (up to measure-zero sets of exceptions) and so similarly each ontic state "possesses" the corresponding value of Q. 
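To keep the three definitions distinct before examining Fig. 1, they can be restated in compact form (this is a summary of the prose above, with the mutual-exclusivity clauses omitted; ν denotes a general preparation measure and µ_q,i an operational eigenstate preparation, as in the text):

EMMR: every preparation measure is a mixture ν = Σ_q Σ_i c_q,i µ_q,i, with c_q,i ≥ 0 and Σ_q,i c_q,i = 1.

ESMR: if Ω ⊆ Λ satisfies µ_q(Ω) = 1 for every operational eigenstate preparation µ_q of Q, then ν(Ω) = 1 for every preparation measure ν in the model, and at least one ν is not of the EMMR mixture form.

SSMR: for every λ ∈ Λ there is a value q_λ with P_M(E_{q_λ} | λ) = 1 for every response function P_M of a measurement of Q, while some ν assigns positive probability to ontic states unreachable by any operational eigenstate preparation.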
Figure 1: Illustration of the three sub-categories of macro-realism as defined in the text. In each case the large square represents the whole ontic state space Λ, the four smaller squares indicate those subspaces of ontic states associated with each value q0..3 of some quantity Q, and the shaded regions represent those ontic states accessible by preparing some select preparation measures. (a) illustrates EMMR, where the squares for each qi are all ontic states preparable via some operational eigenstate preparation µq i ,j and all other allowed preparation measures are simply statistical mixtures of these, e.g. where the state space is exactly as in EMMR, but now more general preparation measures, such as the ν illustrated, are permitted. (c) illustrates SSMR, where now every λ in the box for qi must produce outcome qi in any appropriate measurement of Q, but the operational eigenstates no longer fill these boxes. That is, there are ontic states that lie outside the preparations for operational eigenstates. General preparation measures over the boxes are still permitted. In SSMR models the link between each λ ∈ Λ and the corresponding q λ is explicit in the definition. For any λ the q λ is that value for which P M (E q λ | λ) = 1 as specified in the above definition. Thus, λ "possesses" the value which it must return with certainty in any relevant measurement. Note that these three sub-categories of macro-realism are defined such that they are mutually exclusive, but they still have a natural hierarchy to them. EMMR can be seen as a more restrictive variation on ESMR, since you can make an EMMR model into an ESMR model simply by including a single preparation measure that is not a statistical mixture of operational eigenstate preparations (the ontic state space and everything else can remain unchanged). Similarly, SSMR can be seen as a less restrictive variation on ESMR. In ESMR, every ontic state λ can be obtained by preparing an operational eigenstate preparation for a value of Q, by definition of operational eigenstate it follows that a measurement of Q will therefore return some specific value for each ontic state (up to measure-zero sets of exceptions), which is the requirement on the ontic states for SSMR. Macro-realism in Quantum Theory Now that macro-realism and ontological models have been defined independently from quantum theory, it is time to apply these definitions to quantum systems. Doing this will enable precise discussion of the loopholes in the Leggett-Garg argument and proof of a stronger theorem. It will also be necessary to state some useful definitions and lemmas from ontological models. Ontological Models for Quantum Systems The postulates of quantum theory assign some Hilbert space H to a quantum system with dimension d so that the set of physical pure quantum states is P(H) = {|ψ ∈ H : ψ = 1, |ψ ∼ e iθ |ψ }. They also assign unitary operators on H to transformations and orthonormal bases over H to measurements. For simplicity, consider only systems with d < ∞. With this in mind, ontological models for quantum systems can be described in generality. An ontological model for a quantum system is defined by some ontic state space Λ as well as the relevant preparation measures, stochastic maps, and conditional probability distributions. For each state |ψ ∈ P(H) there must be a set ∆ |ψ of preparation measures µ |ψ -potentially one for each distinct method for preparing |ψ . 
Similarly, for each unitary operator U on H there is a set Γ U of stochastic maps γ U and for each basis measurement there is a set Ξ M of conditional probability distributions P M -again, potentially one stochastic map/probability distribution for each experimental method for transforming/measuring. In order to investigate the properties of possible ontological models it is required that the ontological model is capable of reproducing the predictions of quantum theory. That is, for any |ψ ∈ P(H), µ ∈ ∆ |ψ , U, γ ∈ Γ U , basis M , and P M ∈ Ξ M it is required that where ν is defined as in Eq. (2). Note also that ν ∈ ∆ U |ψ since preparing the quantum state |ψ (via any ontological preparation µ ∈ ∆ |ψ ) followed by performing the quantum transformation U (via any γ ∈ Γ U ) is simply a way to prepare the quantum state U |ψ . State Overlaps In order to properly discuss macro-realism in quantum theory it is necessary to discuss how to quantify state overlaps in ontological models. In quantum theory any pair of non-orthogonal states |ψ , |φ ∈ P(H) overlap by an amount quantified by the Born rule probability | ψ|φ | 2 . That is, for a system prepared in state |ψ the probability for it to behave (for all intents and purposes) like it was prepared in state |φ is | ψ|φ | 2 . Adapting this logic to an ontological model for the quantum system, consider the probability that a system prepared according to measure µ behaves like it was prepared according to ν. That is, the probability that the ontic state obtained from µ could also have been obtained from ν. This quantity is called the asymmetric overlap and is mathematically defined as [26,[30][31][32] recalling that the infimum of a subset of real numbers is the greatest lower bound of that set. This is because a preparation of ν has unit probability of producing a λ from each measurable subset Ω ⊆ Λ that satisfies ν(Ω) = 1 4 . Therefore by taking the minimum such Ω, µ(Ω) gives the desired probability. It is not difficult to see that-for an ontological model of a quantum system-the Born rule upper bounds the asymmetric overlap. Almost all 5 ontic states that can be obtained by preparing |φ will also return outcome |φ in any relevant measurement and so if a preparation of |ψ results in a λ that could have been obtained by preparing |φ then we know that this ontic state will almost surely return |φ in a relevant measurement. It follows that for any µ ∈ ∆ |ψ and ν ∈ ∆ |φ (ν | µ) ≤ | φ|ψ | 2 . A full proof of this is provided in Appendix B. It is useful to overload the definition of asymmetric overlap to include the probability that preparing an ontic state via µ will produce a λ accessible by preparing some quantum state |φ . This corresponds to which is clearly also upper bounded by the Born rule (|φ | µ) ≤ | φ|ψ | 2 . The next useful generalisation is the overlap of some preparation measure µ with two quantum states |0 , |φ . This can be thought of as the union of the overlaps expressed by (|φ | µ) and (|0 | µ) and is mathematically defined as or, equivalently as So in this way (|0 , |φ | µ) expresses the probability that sampling from µ produces a λ accessible by preparing either |0 or |φ . Since (|0 , |φ | µ) expresses the probability of a disjunction of two events that have probabilities (|0 | µ) and (|φ | µ), it follows (by Boole's inequality) that it is bounded as follows There are special triples of quantum states {|ψ , |φ , |0 } for which the bound Eq. (9) is necessarily saturated for all µ ∈ ∆ |ψ . 
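For reference, the overlap quantities introduced in this subsection can be written out explicitly (a reconstruction from the surrounding prose; the symbol ϖ is used here as a placeholder for the paper's notation):

ϖ(ν | µ) ≡ inf{ µ(Ω) : Ω ⊆ Λ measurable, ν(Ω) = 1 },

with the Born-rule bound ϖ(ν | µ) ≤ |⟨φ|ψ⟩|² for µ ∈ ∆_|ψ⟩ and ν ∈ ∆_|φ⟩, and, for the two-state version, the Boole bound

ϖ(|0⟩, |φ⟩ | µ) ≤ ϖ(|0⟩ | µ) + ϖ(|φ⟩ | µ).

As stated above, for special triples of states this last bound is saturated.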
Anti-distinguishable triples 6 have this property. They are triples {|ψ , |φ , |0 } which have a quantum measurement with outcomes E ¬ψ , E ¬φ , E ¬0 where the probability of obtaining outcome E ¬ψ for a system in state |ψ is zero (and similarly for E ¬φ , E ¬0 ). In other words, there is a measurement which can, with certainty, identify one state that was definitely not prepared. For this to be possible, almost no ontic states can be accessible by preparing all three states in the triple, because any such ontic state wouldn't be able to return any of the outcomes in the measurement. A full proof of this is provided in Appendix D. It is known that 7 triples of states {|ψ , |φ , |0 } with inner products a = | ψ|φ are necessarily anti-distinguishable by a projective measurement. Finally, it is useful to consider the effect that unitary transformations have on asymmetric overlaps. If some transformation U is applied, via some stochastic map γ ∈ Γ U , to a system prepared according to measure µ ∈ ∆ |ψ then the overlap with some quantum state |φ cannot decrease. That is where µ ∈ ∆ U |ψ is the preparation measure obtained by applying γ to µ as in Eq. (2). This is because any ontic state accessible by preparing µ will be mapped by γ onto ontic states accessible by preparing µ (since preparing µ then applying γ is a preparation of µ ). Similarly, ontic states accessible by preparing |φ map onto states preparable by U |φ . Thus ontic states in the overlap of µ and |φ are mapped by γ onto states in the overlap of µ and U |φ . Once again, a full proof is provided in Appendix C. Macro-realism for Quantum Systems Having laid the groundwork the above definition of macro-realism can now be applied to quantum systems. First consider what can count as a "macroscopically observable" quantity Q. To be observable Q must correspond to some quantum measurement M Q . Therefore, there is some orthonormal basis B Q so that for each value q of Q the corresponding outcome of M Q is a state in B Q . In order to make sense of the above definitions Q must also have operational eigenstates for each value q of Q. Fortunately this is straightforward in quantum theory: every state in B Q is an operational eigenstate. Moreover, because the elements of B Q are orthogonal it follows that preparations corresponding to different values q, q of Q are therefore distinguishable. Now consider an ESMR or EMMR model for a quantum system. For any state |ψ ∈ P(H) and any eigenstate |0 ∈ B Q the asymmetric overlap (|0 | µ) for any µ ∈ ∆ |ψ must be maximal. That is, A full proof of this is provided in Appendix E and only an outline provided here. Since each state in B Q is orthogonal to every other, no ontic state (up to measure-zero exceptions) can be accessible by preparing more than one state in B Q . Moreover-by definition of ESMR and EMMR-every ontic state must be accessible by preparing some operational eigenstate of Q. Therefore, any ontic state accessible by preparing µ must also be accessible by preparing exactly one state in B Q (up to measure-zero sets of exceptions). So the sum i (|i |µ) = 1 because each overlap is disjoint (up to measure-zero sets of exceptions) and by Eq. (5) each must be maximal, giving Eq. (12). Equation (12) is the key consequence of ESMR and EMMR that leads to the no-go theorem with quantum systems presented in section 6. Loopholes in the Leggett-Garg Proof The aim of the LGIs has always been to rule out macro-realist ontologies for quantum theory when the inequalities are violated. 
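Before turning to those loopholes, it is worth recording the two relations from the preceding subsections in explicit form (again a reconstruction from the prose, with ϖ as a placeholder symbol). For µ ∈ ∆_|ψ⟩, γ ∈ Γ_U and µ′ ∈ ∆_{U|ψ⟩} obtained by applying γ to µ, the monotonicity statement is presumably

ϖ(U|φ⟩ | µ′) ≥ ϖ(|φ⟩ | µ)      (Eq. (11)),

and, for an ESMR or EMMR model, the maximality statement for every eigenstate |0⟩ ∈ B_Q and every µ ∈ ∆_|ψ⟩ is presumably

ϖ(|0⟩ | µ) = |⟨0|ψ⟩|²      (Eq. (12)).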
However, in light of the above precise definition of macro-realism, some loopholes in the argument can be identified. The first loophole is that violation of the LGIs cannot rule out SSMR models of quantum systems. Indeed, no argument that rests on compatibility with quantum predictions can completely rule out SSMR models since there exists a well-known SSMR model for quantum systems that reproduces all quantum predictions: Bohmian mechanics [23][24][25]. To see that Bohmian mechanics provides an SSMR ontology consider, for example, the Bohmian description of a single spinless point particle in three-dimensional space (the argument for more general systems is analogous). Bohmian mechanics has the ontic state as a pair λ = ( r, |ψ ) ∈ R 3 × P(H) where r is the actual position of the particle and |ψ is the quantum state (or "pilot wave"). Note that the quantum state is part of the ontology here. The "macroscopically observable property" is the position of the particle, r, and any sharp measurement of position will reveal the true value of r with certainty. Thus, for any ontic state λ there is some value of the macroscopically observable property (that is, r) which is obtained with certainty from any appropriate measurement. Thus, Bohmian mechanics provides an SSMR ontological model. The second loophole is that LGI violation is also unable to rule out ESMR ontological models. This is also demonstrated through a counter-example in the form of the Kochen-Specker model for the qubit [35], which is an ontological model satisfying ESMR 8 . The Kochen-Specker model exactly reproduces quantum predictions for d = 2 dimensional Hilbert Spaces. As the LGIs are defined in d = 2 the Kochen-Specker model will violate them. A key question is why these counter-examples evade the Leggett-Garg argument. To derive the LGIs, one needs an additional assumption: non-invasive measurability. The Leggett-Garg approach compares the non-invasiveness of the measurement process on operational eigenstates with the invasiveness on preparations that are not operational eigenstates. Their violation shows that these other preparations cannot be expressed as mixtures of operational eigenstates. In neither ESMR nor SSMR can generic preparations be related to mixtures of operational eigenstates, so the Leggett Garg approach generically has loopholes for these types of macro-realism [8]. Bohmian mechanics and the Kochen-Specker model are both examples: they contain measurement disturbances that violate the non-invasive measurability assumption, while still satisfying SSMR and ESMR respectively. The crux is that both SSMR and ESMR models can include measurements that don't disturb the distribution over Λ if the system is prepared in an operational eigenstate, but still disturb the distribution over Λ for systems prepared in other ways. EMMR, by contrast, requires that all preparations are represented by statistical mixtures of operational eigenstates. If it can be demonstrated that operational eigenstates are not disturbed by a given measurement, then according to EMMR no preparations can be disturbed by that measurement. It is this feature that prevents EMMR models from violating the LGI (see Ref. [8] for a more extensive discussion of this point). Recent experiments [10,11] following Ref. [16] have sought to address a "clumsiness loophole" in Leggett-Garg. They drop non-invasive measurability as an assumption by incorporating control experiments to check the disturbance of the measurement on the operational eigenstates. 
These approaches follow the Leggett-Garg argument quite closely and show that the disturbance on some general preparation cannot be explained in terms of disturbances on a statistical mixture of operational eigenstates. As a result, they are still only able to rule out EMMR models. So the Leggett-Garg proof, even taking into account the clumsiness loophole, only rules out EMMR macrorealism and leaves loopholes for SSMR and ESMR. Moreover, the loophole for SSMR models cannot be fully plugged by any proof because Bohmian mechanics exists as a counter-example. Similarly, the loophole for ESMR cannot be fully pluged in d = 2 dimensions, since the Kochen-Specker model exists as a counter-example. This leaves a clear question: can the ESMR loophole be closed by another theorem for any d > 2? Answering this question needs a different approach, one which does not make use of the measurement disturbance assumptions at all. A Stronger No-Go Theorem Using the machinery developed above, it is possible to prove a theorem that rules out both ESMR and EMMR models for quantum systems with "macroscopically observable" properties with n > 3 values. This theorem is therefore stronger than the Leggett-Garg proof as it rules out ESMR models as well as EMMR models. First, assume there is an ontological model for a quantum system with d > 3 dimensions which is ESMR or EMMR for quantity Q with n = d > 3 values. By applying Eqs. (9,11,12) to specially chosen quantum states it is possible to prove a contradiction. Let |0 ∈ B Q be the eigenstate of some value q of Q and let |ψ ∈ P(H) be any other state of the system such that | 0|ψ | 2 ∈ (0, 1 2 ). Since quantum states in P(H) are equivalent up to global phase, 0|ψ can be taken to be a positive real without loss of generality. Now select another orthonormal basis such a basis always exists since, for any α, a τ ∈ (0, 1) exists such that |ψ is normalised. Define another state with respect to the same basis These states have been chosen such that 0|ψ = α = φ|ψ and therefore there is some unitary U satisfying U |0 = |φ and U |ψ = |ψ . Moreover, the inner products of {|ψ , |φ , |0 } satisfy Eq. (10) meaning that {|ψ , |φ , |0 } is an anti-distinguishable triple and therefore satisfies Eq. (9) with equality. Choose any preparation measure µ ∈ ∆ |ψ for |ψ and any stochastic map γ ∈ Γ U for U . If |ψ is prepared according to µ and then transformed according to γ such that µ ∈ ∆ |ψ is the resulting preparation measure (as in Eq. (2)) then = 2| 0|ψ | 2 = 2α 2 (21) where the first line follows from anti-distinguishability, the second from Eq. (11), and the third from ESMR/EMMR via Eq. (12). Now consider that (|φ , |0 | µ) ≤ P B (|0 , |1 | |ψ ). That is, (|φ , |0 | µ) is a lower bound on the probability that a quantum basis measurement in B has outcome |0 or |1 for a preparation of |ψ . To see this, consider that any ontic state preparable by both |ψ and |φ must return a measurement outcome compatible with both preparations: thus the outcome must be either |0 or |1 . Similarly any ontic state preparable by both |ψ and |0 must return the outcome |0 in such a measurement. A full proof of this is provided in Appendix F. Putting these inequalities together, one finds But α was defined to be in the range (0, 1 √ 2 ) and so this is a contradiction. The following summarises the assumptions which together imply this contradiction: 1. The ontology satisfies ESMR or EMMR. 2. 
The "macroscopically observable property" Q has n > 3 distinguishable values (requiring that the quantum system has d ≥ n > 3 dimensions). 3. An eigenstate |0 of Q can be chosen such that some quantum state |ψ satisfying | 0|ψ | ∈ (0, 1 √ 2 ) can be prepared. 4. The quantum transformation U and quantum measurement B described above can be performed. Assumptions (i-ii) are about possible underlying ontological models, while (iii-iv) are implications of standard quantum theory. The conclusion must therefore be that either ESMR/EMMR ontologies are impossible for n > 3 distinguishable values, or that quantum theory is not correct. Quantum theory is therefore incompatible with ESMR or EMMR macro-realism. Discussion The theorem in this paper proves that quantum theory is incompatible with ESMR and EMMR macro-realist ontologies where the macroscopically observable property has n > 3 distinguishable values. This is stronger than the argument from the Leggett-Garg inequalities, which is only able to rule out EMMR ontologies. Therefore, only SSMR models are left as possibilities for macro-realist quantum ontologies. As noted above, no argument is able to rule out all SSMR ontologies because Bohmian mechanics is an SSMR theory which reproduces all predictions of quantum theory. It may be possible, however, to produce a theorem that rules out some subset of SSMR theories. For example, Bohmian mechanics is a ψ-ontic theory [20]. That is, each ontic state λ = ( r, |ψ ) can only be accessed by preparing one quantum state, namely |ψ -there is no ontic overlap between different quantum states. It may therefore be possible to prove the incompatibility of quantum theory and all SSMR ontologies that aren't ψ-ontic. This, together with the result presented here, would essentially say that to be macro-realist you must have an ontology consisting of the full quantum state plus extra information. Many would consider this a very strong argument against macro-realism. For example, such models might reasonably be accused of simply artificially adding macro-realism on top of quantum theory, rather than providing an understanding of quantum theory that respects macro-realism. Of course, those sympathetic to Bohmian mechanics would not be swayed by any such arguments, as Bohmian mechanics is already macro-realist. Papers on the Leggett-Garg argument, including those addressing the clumsiness loophole [10,11,16], have concentrated on d = 2 dimensional systems. As a result of closely following the Leggett-Garg assumptions, they are still unable to rule out any models outside of EMMR. They certainly could not rule out all ESMR models, since the Kochen-Specker model satisfies ESMR and exists in d = 2. By contrast, the theorem in this paper works when d ≥ n > 3. The next stage is the development of an experimental test of the improved theorem. This requires a detailed, noise-tolerant analysis, as any experiment is unavoidably subject to non-zero noise. The asymmetric overlap measure, which is used to characterise the different categories of macro-realism, is an inherently noise-intolerant quantity. To bring the theorem into an experimentally testable form therefore requires a more noise-tolerant alternative and suitable adjusted characterisations of the different categories of macro-realism. This is possible, but it is not a simple process. Two different approaches for such noise-tolerant replacements are currently in development and will be the subject of future papers [36,37]. 
It is interesting to note that experiments based on this result will be an entirely new avenue for tests of macrorealism. Experimental tests based on the Leggett-Garg argument will always have certain features and difficulties in common (such as the clumsiness loophole noted in section 5). However, since the approach of this work is so different in character one can expect the resulting experiments to be similarly different, hopefully avoiding many of the difficulties common to Leggett-Garg while requiring challenging new high-precision tests of quantum theory in d > 2 Hilbert spaces. As is common in such foundational works, this paper has considered only the case of finite-dimensional quantum systems. It is hoped that an extension to infinite-dimensional cases should be possible. Due to the fact that quantum states become integrals over bases in the infinite case, a further layer of measure-theoretic complexity would likely be required. Actually developing such an extension therefore remains an interesting open problem. Finally, one should note that in this paper the "macro" quantity Q was taken to correspond to a measurement of basis B Q in the quantum case. A more general approach might allow Q to correspond to a POVM measurement instead. That is, for each value q of Q there would be some POVM element E q and the operational eigenstates |ψ of q would be those satisfying ψ|E q |ψ = 1. We are confident that the results presented here can be fairly directly extended to such a case and this would be another interesting avenue for further work. Such an extension would likely add significant complexity to the proofs without changing the fundamental ideas, however. A A Useful Lemma Here, a lemma is proved that will be useful in the following proofs. , which is a measurable function from Λ to [0, 1]. As measurable functions, the kernels of both f andf are measurable sets in Λ. Equation (24) implies that Consider the second term in Eq. (28); there are two possibilities. First, the term could be zero (if, for example, the only subsets of Λ\ kerf where f (λ) > 0 are measure-zero subsets according to µ). If it's not zero then, since for all λ ∈ Λ\ kerf f (λ) < 1 then Λ\ kerf dµ(λ) f (λ) < µ(Λ\ kerf ). In the latter case this would imply which is a contradiction as µ(Λ) = 1 by definition of a probability measure. The only option is therefore for this term to vanish, in which case 1 = µ(kerf ) = µ (ker(1 − f )) as desired. B Bounding Asymmetric Overlaps The main text quotes a bound, Eq. (5), for the asymmetric overlap together with a sketch of the proof. This will now be proved fully. The spirit of the proof is that if λ can be obtained by preparing some ν ∈ ∆ |φ then it should also return the outcome |φ with certainty in any measurement where that is an option (there are exceptions which make the proof more difficult than this). Thus, the probability of preparing µ and getting a λ which then returns the outcome |φ in a measurement is at least the probability of preparing µ and getting a λ which is accessible from some ν ∈ ∆ |φ , which is a paraphrase of the desired result. To prove Eq. (5), consider preparing |φ according to ν ∈ ∆ |φ and then performing some quantum measurement M φ |φ . For the ontological model to reproduce quantum probabilities, as in Eq. (3), it is therefore required that It is tempting to conclude from this that for every λ in the support of ν then P M φ (|φ | λ) = 1. 
However, this is not generally possible because it is not generally possible to define a support for the measure ν. Therefore, a slightly different proof is required. D Anti-Distinguishability and Asymmetric Overlap It is claimed (and roughly motivated) in the text that if {|ψ , |φ , |0 } is an anti-distinguishable triple then Eq. (9) must hold with equality. This will now be proved fully. F Tripartite Asymmetric Overlap and Quantum Measurements In the proof of the main theorem, it is claimed that where the right hand side is the quantum probability of the corresponding quantum measurement and the states and measurements are as defined in the text. This will now be proved. The gist of the proof is that if λ ∈ Λ is accessible by preparing two quantum states then it may only return measurement results that are compatible with both preparations. So if λ is accessible by preparing both |ψ and |0 then it may only return |0 in a measurement in the B basis and similarly if λ is accessible by preparing both |ψ and |φ then it may only return either |0 or |1 for a similar measurement. Thus, if one of these λ's is obtained in a preparation of |ψ then the measurement result is necessarily either |0 or |1 . This will now be fully fleshed out.
Graph theory and combinatorial calculus: an early approach to enhance robust understanding The objective of this work is to show an educational path for combinatorics and graph theory that has the aim, on one hand, of helping students understand some discrete mathematics properties, and on the other, of developing modelling skills through a robust understanding. In particular, for the path proposed to middle-school students, we used a connection between k-permutations and colourings of graphs: we indicated a way to solve problems related to counting all the possible arrangements of given objects in a k-tuple under given constraints. We solve this kind of problem by associating a graph with the constraints related to the k-tuple and by using graphs’ colourings, in which every colour is associated with one of the objects. The number of arrangements is given by finding the number of colourings through an algorithm called the Connection-Contraction Algorithm. The educational path is set within the Teaching for Robust Understanding framework and the goal, from the mathematical skills perspective, is to enhance modelling, passing from real situations (the fish problem in our experiment) to mathematical problems (the graph’s colouring in our experiment) and vice versa through the use of technology (the Connection-Contraction Algorithm with yEd editor, in our experiment), by using an extended modelling cycle. The meetings with students were videotaped and some results of the experimentation are given. Introduction Graph theory is a relatively new branch of mathematics that has emerged increasingly often on the international research scene for its countless applications (Derrible & Kennedy, 2011;Hart, 2008). However, it is also true that it can be used for a better understanding of mathematical concepts in the field of education: the understanding of combinatorics concepts often presents itself as quite challenging for secondary school students (Hart & Martin, 2018), and some concepts from graph theory can help their understanding. In this paper, after a review of relevant literature and introducing the research question, we present the theoretical framework to which we adhered and the mathematical content that led us to the innovative educational path that we brought to the classroom. We show some of the results obtained in the experimentation with students through a qualitative analysis, in terms of teaching for robust understanding (Schoenfeld, 2014(Schoenfeld, , 2016. We also illustrate how these activities can foster mathematical skills such as modelling (Greefrath, 2011). Graph theory and combinatorics in mathematics education Research in mathematics education includes studies on the teaching and learning of discrete mathematics. Let us consider this subject in papers of the 13th International Congress on Mathematical Education (Hart & Sandefur, 2018). Topics of discrete mathematics can be useful in the study of disciplines like computer science or operations research (see, e.g., Beineke & Wilson, 1997), and even if "discrete mathematics is a robust field with many modern applications" (Hart & Martin, 2018), in the U.S., for example, as well as in many other countries, "the 1 3 Common Core State Standards for Mathematics essentially excludes discrete mathematics" (Rosenstein, 2018). 
In particular, graph theory, which is one of the topics of discrete mathematics (Hart, Sandefur, & Ouvrier-Buffet, 2017), also plays a significant role in engineering and economics, and, of course, all of these topics are relevant for mathematics students (González, Muñoz-Escolano, & Oller-Marcén, 2019;Kolman, Zach, & Holoubek, 2013;Milková, 2009;Vidermanová & Melušová, 2011). Despite its 'recent' birth (in the second half of the 1700s) and the origin of its development (more closely related to games than to mathematical matters), graph theory is nowadays studied for both theoretical and practical reasons (Voloshin, 2009). In fact, graphs are useful tools for modelling real-life problems related to transportation networks, telecommunications, social networks, or big data (Derrible & Kennedy, 2011;Hart, 2008). Moreover, since some graph theory topics do not require prior knowledge to be mastered, several experiments involving these topics have been carried out in primary and secondary schools (Cartier, 2008;Ferrarello & Mammana, 2018;Niman, 1975;Oller-Marcén & Muñoz-Escolano, 2006;Santoso, 2018;Wasserman, 2017). Gonzàles, Muñoz-Escolano, and Oller-Marcén (2019) provided a theoretical analysis of the reasoning processes students used when solving graph-theory problems, in which they classified four levels of reasoning (recognition, use and formulation of definitions, classification, and proof), most of which are applicable also in primary and middle schools. "Combinatorics might be considered the mathematical art of counting. Combinatorial reasoning is the skill of reasoning about the size of sets, the process of counting, or the combinatorial setting to answer the question 'How many?'" (Hart & Sandefur, 2018, p. vi). Combinatorics does not depend on calculus, offers challenging problems that can be discussed with pupils, and can be used to train students in enumeration and generalisation and to present many applications (Kapur, 1970). At the same time, combinatorics is a field that most students find very difficult; most combinatorial problems do not have readily available solution methods (Batanero et al., 1997). Students often have difficulties working with combinatorial problems (Eisenberg & Zaslavsky, 2003;Fischbein & Gazit, 1988). Several studies over the years have promoted approaches to enhance students' capabilities in solving combinatorial problems, from primary children (English, 1991;Hoeveler, 2018;Zak, 2020) to middleand high-school students (Ďuriš et al., 2021). In our study, we aimed to address some difficulties with combinatorial problems by creating an educational path that takes graph theory into account as a support in solving the problems. Theoretical background We chose to design the activity and record the results using the Teaching for Robust Understanding (TRU) framework proposed by Schoenfeld (2013Schoenfeld ( , 2014Schoenfeld ( , 2016 and the modelling cycle introduced by Blum and Leiß (2007) and extended by Greefrath (2011). The TRU framework consists of five dimensions for powerful classrooms, described in Table 1. The framework identifies these five dimensions, which raise a truly effective teaching/learning context and foster deep student understanding, thereby achieving ambitious, robust teaching. 
Briefly, "if the content is rich, the students get to engage, they get powerful ideas, they build on each other's ideas, they can build positive identities with the teacher adjusting the level of instruction so that it is right for the students to engage productively" (Schoenfeld, video in https:// trufr amewo rk. org/). The content we choose can provide opportunities to learn; in particular, it can support the important disciplinary idea of mathematical modelling. As mentioned, we refer to the modelling cycle introduced by Blum and Leiß (2007) and extended by Greefrath (2011) (Fig. 1). Other modelling cycles were presented by Vorhölter et al. (2019). The modelling process is divided into various phases: there is the pole of reality (on the left), the one of mathematics (in the middle), and the one of technology (on the right). The real situation, given in the original problem, is translated into a real model and transferred into the mathematics realm in a mathematical model. The mathematical model is then technologically modelled and solved in the technology realm. Once the technological results are obtained, they are translated into mathematical solutions, re-interpreted in terms of real results, and given back to the rest of the world. The use of the technological realm could be useful for all students, and could aid students with difficulties, thereby aiming for an Equitable Access to Content. In the passage from the problem to the mathematical solution, the argumentation (Toulmin, 1958) plays an important role. It consists of one or more linked steps of reasoning that lead from an initial input to a conclusion by means of guaranteed rules. During this process, students realise not only that "the property is true", but also "why it is true"; it contributes not only to knowledge construction (Mariotti, 2008, p. 189), but also to explaining already-acquired knowledge to others. Personal knowledge construction makes students able to acquire Ownership of the content, even if Cognitive Demand is challenging. Argumentation is useful for a Formative Assessment, because the teacher can give students opportunities to deepen understanding by listening to and analysing their reasoning, rather than judging only the final results. At this point, students are ready to solve problems and use their acquired abilities and knowledge in different contexts, eventually making abstractions and generalisations, as well as connections to different topics. Research question The motivations for this work are based on the idea that difficulties often encountered in understanding and dealing with certain topics, may originate not only in the lack of prerequisites, but also in the teaching method of the teacher. In fact, we want to show that in order to teach certain topics, it is insufficient merely to know the subject matter, but, taking into account the knowledge and peculiarities of the students, it is also necessary to know what cognitive mechanisms may or may not lead to understanding of a particular topic, along with some appropriate teaching methods to deal with it. In practice, we want to show that what you teach, how you teach, and to whom you teach are all equally important. In particular, we wanted to test whether the robust understanding framework (TRU, see Sect. 3) could be useful in how to reach those topics perceived as difficult, such as combinatorial ones (what), with middle school pupils (to whom). 
To accomplish this goal, we wanted to use graph theory and technological tools, aiming to translate combinatorial problems into graph problems that can be solved algorithmically, following the Extended modelling cycle (see Sect. 3). Thus we posed the following research question (RQ): "Is it possible for 8th grade students to reach a Robust Understanding of challenging combinatorial topics, using graph theory and technological tools, to enhance modelling skills?" We divided this question into the following sub-questions, each one regarding one the five dimensions of the Robust Understanding framework. RQ1:"Is it possible for 8th grade students to understand rich Content, in our case graph theory, combinatorics, and the connection between them?" RQ2: "Would 8th grade students be able to positively answer such challenging Cognitive Demand?" RQ3: "Could a path aimed at challenging demand be for all the students, thanks to the use of technology, guaranteeing an Equitable Access to Content?" RQ4: "Could an approach using 'real objects' to represent 'mathematical objects' have an impact on Agency, Ownership, and Identity in students?" RQ5: "Is an approach based on Formative Assessment useful to help students in understanding?". In order to answer the research question RQ we report on our analysis of data in Sect. 8, according to the five dimensions of the TRU model, and taking into account, in several of the investigated dimensions, also the development of the Extended Modelling Cycle. The extent to which classroom activity structures provide opportunities for students to become knowledgeable, flexible, and resourceful disciplinary thinkers Discussions are focused and coherent, providing opportunities to learn disciplinary ideas, techniques, and perspectives, make connections, and develop productive disciplinary habits of mind The extent to which students have opportunities to grapple with and make sense of important disciplinary ideas and their use. Students learn best when they are challenged in ways that provide room and support for growth, with task difficulty ranging from moderate to demanding. The level of challenge should be conductive to what has been called a "productive struggle" The extent to which classroom activity structures invites and support the active engagement of all of the students in the classroom with the core disciplinary content being addressed by the class. Classrooms in which small number of students get most of the "air time" are not equitable, no matter how rich the content, all students need to involved in meaningful ways The extent to which students are provided opportunities to "walk the walk and talk the talk"-to contribute to conversations about disciplinary ideas, to build on others' ideas and have others build on theirs-in ways that contribute to their-in ways that contribute to their development of agency (the willingness to engage), their ownership over the content, and the development of positive identities as thinkers and learners The Extent to which classroom activities elicit student thinking and the subsequent interactions in response to those idea, building on productive beginnings and addressing emerging misunderstanding. Powerful instruction "meets students where they are" and gives them opportunities to deepen their understanding Mathematical content Combinatorics is one of the arch enemies of students in the high-school mathematics curriculum, and a new approach to teaching and learning it could be useful. 
There are several interesting connections between graph theory and combinatorics; the one expressed by Gionfriddo (2011) inspired the educational path that is the focus of this work. We briefly mention here some definitions and properties of graph theory that we used in our work with students. (For more details, see Voloshin, 2009). A graph is a pair G = (V, E) , where V is a nonempty set of n elements called vertices and E is a set of pairs of distinct elements of V called edges. If x, y are two vertices such that {x, y} is an edge of G , then x, y are said to be adjacent. A graph with n vertices K n is complete if E is the set of all pairs of distinct elements of V . A graph with n vertices Ω n is empty if E is the empty set. A vertex colouring (or simply a colouring) of a graph G is a mapping . The chromatic polynomial of G is defined to be a function P(G, ) that expresses the number of distinct colourings of G by at most colours for each positive integer . The chromatic number of G is the smallest number of colours necessary to colour a graph. It is possible to represent a graph graphically by associating each vertex with a point on the plane and each edge with a line joining adjacent vertices (Table 2). Graph G in the table has been vertex-coloured. Note that, when colouring a complete graph K n , all vertices must have different colours and that n colours are needed. Moreover, P K n , k = P k,n = k(k − 1) … (k − n + 1), where P k,n is the number of simple permutations of k objects in n places, with n ≤ k. Two vertices x and y are connected if there exists an ordered ( 2L + 1)-tuple Table 2 A null graph, a complete graph, and a coloured graph A graph is said to be connected if any pair of vertices is connected. For our purposes, we used only connected graphs. In the following, we introduce two graphs, connection graph and contraction graph, that will be used in the Connection-Contraction Algorithm: this algorithm provides all possible colourings of a graph with a given number of colours. Let G = (V, E) be a graph that is not a complete graph, i.e., G ≠ K n . Let x, y ∈ V be such that x and y are not adjacent vertices. We define the following two graphs (Table 3): • contraction graph: G∖xy , obtained from the graph G by substituting the vertices x and y with one vertex z = x = y that is adjacent to all the vertices adjacent to x and all the vertices adjacent to y in the graph G. We are now ready for the Connection-Contraction Algorithm (C-C A): 1. Suppose G is not a complete graph and let x and y be two non-adjacent vertices. 2. Generate the graphs (a) G + xy, connected, and (b) G∖xy , contracted. 3. If G + xy (or G�xy) is complete, we stop. If G + xy (or G�xy) is not complete, we generate two new graphs from G + xy (or G�xy) as in 1. and 2. We stop only when we obtain complete graphs. This algorithm produces the chromatic polynomial of a graph (giving all possible colourings of a graph with a given number of colours) and the chromatic number of the graph (the fewest number of colours needed to colour the graph). To understand why this is so, see the example below and consider the following reasoning. Let f be a colouring of a graph G , and x and y be two non- Therefore, if G is not a complete graph, and x and y are nonadjacent vertices, then P(G, ) = P(G + xy, ) + P(G�xy, ). 
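Since the Connection-Contraction recursion is completely mechanical, it is easy to implement. The following is a small Python sketch (ours, not part of the classroom materials; the function names and the test graph are chosen only for illustration) that applies the recursion P(G, λ) = P(G + xy, λ) + P(G∖xy, λ) until only complete graphs remain and returns the number of proper colourings with a given number of colours k:

```python
from itertools import combinations

def is_complete(vertices, edges):
    # A graph is complete when every pair of distinct vertices is adjacent.
    return all(frozenset(pair) in edges for pair in combinations(vertices, 2))

def falling_factorial(k, n):
    # Number of colourings of the complete graph K_n with k colours: k(k-1)...(k-n+1).
    result = 1
    for i in range(n):
        result *= (k - i)
    return result

def chromatic_count(vertices, edges, k):
    # Connection-Contraction: P(G, k) = P(G + xy, k) + P(G / xy, k) for non-adjacent x, y.
    vertices, edges = set(vertices), set(edges)
    if is_complete(vertices, edges):
        return falling_factorial(k, len(vertices))
    # Pick any pair of non-adjacent vertices.
    x, y = next(pair for pair in combinations(vertices, 2)
                if frozenset(pair) not in edges)
    # Connection graph G + xy: add the missing edge {x, y}.
    connected_edges = edges | {frozenset((x, y))}
    # Contraction graph G / xy: merge y into x, keeping all remaining adjacencies.
    contracted_edges = set()
    for edge in edges:
        merged = {x if v == y else v for v in edge}
        if len(merged) == 2:
            contracted_edges.add(frozenset(merged))
    return (chromatic_count(vertices, connected_edges, k)
            + chromatic_count(vertices - {y}, contracted_edges, k))

# Example: the 4-cycle a-b-c-d-a has (k-1)^4 + (k-1) proper colourings; with k = 4 that is 84.
V = {"a", "b", "c", "d"}
E = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]}
print(chromatic_count(V, E, 4))  # 84
```

The 4-cycle is used here only as a check: its chromatic polynomial is (λ − 1)⁴ + (λ − 1), which evaluates to 84 at λ = 4, and the function reproduces that count.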
Now, since the algorithm ends when both G + xy and G∖xy have been transformed into complete graphs, we get the chromatic polynomial of G to be: P(G, ) = P K 1 , + P K 2 , + … + P K t , , where K 1 , K 2 , …, K t are the complete graphs obtained from the previous algorithm. The chromatic number of the graph (the fewest number of colours needed to colour the graph), is then given by the smallest n such that K n is one of the complete graphs obtained at the end of the algorithm. As an example, we can apply the Connection Contraction Algorithm in order to know the number of different colourings of the graph in Fig. 2, with, for example, 4 colours ( Fig. 3). The chromatic polynomial of the graph G then turns out to be from which P(G, ) = P K 5 , + 4P K 4 , + 2P K 3 , , Study design The authors proposed the activity in an 8th grade class (in the 'Padre Pio da Pietralcina' school in Misterbianco, Italy); our challenge was to experiment with such topics in middle school, even if it is not in the regular curriculum, in order to start with some important ideas and habits of mind of mathematics, such as modelling. We were not acquainted with the students. We proposed the path to the teacher, who helped us set the activity up in a was that was suitable for her students. We assert that it is important to maintain a very close collaboration between researchers and teachers because "the results of the research will be directly applicable (instead of merely potentially relevant) to practice", as argued by Stylianides and Stylianides (2013, p. 334). Bishop (1998) posed the problem that the research community had not sufficiently answered real problems in real classrooms and claimed that "researchers need to engage more with practitioners' knowledge, perspectives, and work and activity situations, with actual materials and actual constraints and within actual social and institutional contexts" (p. 36). Other researchers have Fig. 3 The Connection-Contraction Algorithm since made similar observations. For instance, Wiliam and Lester (2008) claimed that research needed a radical shift towards interventions taken by researchers and teachers directly, which have been taken into account increasingly in recent years. For example, Ferrara and Ferrari (2020) designed and experimented an intervention, studying the impact of such an intervention when learners are engaged with new situations by thinking mathematically, while furthering and planning the activities with the regular classroom teacher. Sample The class we dealt with was composed of 26 students, many of whom were attentive and well-disposed towards learning mathematics, often actively participating during the lectures. However, a few students had some cognitive difficulties, and we wanted to propose to all the students a non-trivial topic, indeed a very rich, potentially difficult one. Study method To achieve our goal, we designed several activities based on the TRU framework, involving the use of bodies, paper, pencils, technological devices (interactive whiteboard and tablets), and specific software to draw graphs (yEd, a graph editor). The classroom was arranged in 'islands' (Fig. 4) where students sat in circles so that they could collaborate and help each other. The intervention was short. 
We agree with Stylianides and Stylianides (2013), who argued it is possible to design interventions of short duration in mathematics education that can alleviate typical problems of classroom practice: teachers can benefit from the observed methodology without messing up their curriculum structures. The whole path consisted of three meetings: the first two were held by the researchers and lasted 2 h each; the third one, to consolidate the concepts, was held by the teacher of the class. The researchers were present during the activities to introduce problems and lead collective discussion, but also to observe and interact with students. They video-taped and took pictures of most of the activities with a camera. Students' parents agreed to have their children video-taped by signing a consent form. In particular, the researchers videotaped all their interventions (there were two researchers in the classroom: if one of us was talking, the other one or the regular class teacher videotaped) with special attention to the mathematical discussion and to any speech arising from students. Moreover, they went around the classroom when students were working and videotaped students, paying particular attention to those having some difficulties and/or some good intuitions. With three of us (two researchers and the teacher), we could have a look at each 'island' (Fig. 4), to see how each group was working. The researchers then collected the video data to analysed them later on. Data for our analyses come from the transcriptions of videos and the pictures of students' productions in their exercise books and/or on devices: the authors conducted a content analysis with a directed approach, as a qualitative research technique, to support the theory chosen as a theoretical framework (TRU framework with Extended modelling cycle). Since we used a directed approach, the analysis starts with the chosen theory as guidance for coding. In the qualitative content analysis, in fact, the interpretation of the content of text data (transcriptions from videos, in our case) was done through the classification process of coding. We proceeded as follows: the authors viewed all the videos and selected those most interesting for the research, with open coding, and as a result of this process categories were formulated and revised. This ongoing method aims at a true description of the investigated phenomenon, without preconceptions of the researcher, to really understand the data, as explained by Mayring (2014, p. 79). One of the authors tagged each video with an initial code containing information with the following six tags: Content_modelling; Content_generalization; Cognitive demand; Equity; Ownership; Formative assessment. In this way, 12 of the 28 initial videos were selected. Then a second author viewed the 12 videos, tagging again each one with one of the six codes. Once the authors agreed on the tags and the videos were definitively selected and tagged, two authors viewed again them, commented on them, and caused transcripts of them to be made. The comments that arose from the authors are reported in Sect. 8, in order to acertain whether the five dimensions of the TRU framework, together with the Modelling cycle, were satisfied. The educational path As mentioned, the educational path was supported using the yEd software. Concerning the choice of this software, yEd is free software designed to create and manipulate diagrams, and therefore also graphs, supported by most operating systems. 
It is easy to use; one can decide how to draw each vertex (shape node, Fig. 5), the type of line for an edge, or rearrangement by dragging. The software seemed especially useful to the researchers in explaining the C-C Algorithm on the interactive whiteboard, and to the students in practicing the algorithm on the tablets, as connected and contracted graphs are easily created using Copy and Paste commands, and 'dragging' one vertex to another, or connecting two vertices, is done easily and quickly. The software dynamically manages the space on the virtual sheet (that is, the user has a potentially infinite sheet), which is especially useful considering that the complete evolution of the steps of the algorithm is not known a priori; moreover, it is easy to modify the colour of the vertices, helpful for explaining the colouring on the interactive whiteboard. First meeting In the first meeting, we dealt with three activities: Who is on the podium?, The fish problem, and Draw the relationship. The first one, Who is on the podium?, was designed to increase understanding of the mathematical concept of a simple k-permutation (useful, as shown in the following, for calculating the chromatic polynomial). The activity consisted of counting the arrangement of n students in k places. This was done using the classroom's chairs and the students. To start with, the number of chairs was 2, representing a podium with gold and silver medals; we counted how many possible podiums can be obtained with 3 classmates. We then added another classmate, and afterward, another chair: mathematically speaking, we required simple 2-permutations of 3 objects, simple 2-permutations of 4 objects, and simple 3-permutations of 4 objects, respectively. In the end, we asked the students to generalise the results to obtain the number of simple k-permutations of n objects. In the second activity, The fish problem, we introduced the leitmotiv of the whole path: The fish problem The owner of an aquarium store received the new fish he had ordered and now must arrange them in empty tanks. At the moment, there are only 4 empty tanks in the store, all with a coloured lid. The newly arrived fish are of 4 different varieties: regal tang, magnificent fire fish, clownfish, octopus. The shopkeeper knows very well that some of these fish cannot stay together in the same tank, as it would create a prey-predator relationship. In fact: Students tried to solve the problem by using the newly discovered rules on k-permutations, but they immediately realised that the rules were not suitable for the problem, and that while simple permutations are useful, they cannot be used to solve every kind of problem involving arrangements. Then we started with the apparently separate topic of a graph with the third activity, Draw the relationship. We presented graphs of several situations representing relationships among people (brothers, friends practicing the same sport, etc.), asking students, "How can you draw this situation graphically?" Students started drawing the situation on paper. Afterwards, the researchers started using the yEd editor, giving an opportunity to use several images of people as vertices, as well as an opportunity to use any picture as a vertex by simply dragging it into the sheet. In Fig. 
6, we show the graph representing the relationships between Homer Simpson and his sisters-in-law, defined by 'being in conflict': Homer is connected by an edge to each one of the sisters-in-law because they do not get along together, while there is no edge between the two women because they are not in conflict. We decided to use this example because we found it useful to imagine that two people in conflict want to stay in different places (or different colours, in term of the graphs' colourings), as in our fish problem (keeping fish in different tanks). If we have to colour the Simpsons' graph we should use at least two colours, one for Homer and a different one for the women. In general, graphs are used to model relationships and graph colouring is primarily used to model the conflict relationship, to help solve problems where you want to manage conflicts, as in this simple example. Second meeting In the second meeting, after recalling the graph's topic, we dealt with vertex colourings of graphs. We pointed out how easy it is to find the number of all possible n-colourings of a complete graph: it suffices to count the number k of vertices of the graph, then the number of k-permutations of n objects. Then, by using yEd at the interactive whiteboard, we explained the C-C A starting from a specific graph, together with the representations of all the steps of the algorithm, (Fig. 7). The class was invited to practice the algorithm, applying it to some graphs we provided. Each student chose to practice in the way he/she personally thought best: some used the yEd software installed on their tablets, others used pen and paper, another used clipped paper. All of them managed to master the use of the passages sufficiently. Afterwards, we discussed the usefulness of the algorithm, emphasising that, thanks to the resulting complete graphs, we can determine the chromatic number of each graph exactly, but above all, we are able to determine the chromatic polynomial of each graph. We also discussed the advantages of using the algorithm to obtain absolute certainty of having exhausted all possible colourings, as opposed to solving the problems by repeated attempts. Finally, students were guided to model The fish problem in terms of graphs and to see the connection between the two topics (combinatorics and graphs). They easily determined the particular chromatic polynomial associated with the graph of The fish problem (Fig. 8), obtaining as a result the value 72 as the total number of colourings of the graph with 4 colours from the chromatic polynomial n 4 − 4n 3 + 5n 2 − 2. Now the class was able to solve the problem easily, taking very little time, and succeeding, without too much effort, in carrying out the generalisation of the result to find the chromatic polynomial. In the third meeting, the general concept of the chromatic polynomial was consolidated and the path ended with a connection to algebra topics (polynomials) that students had been working on before starting this path. Analysis of the educational path Our activity is in the TRU framework because we proposed a rich topic (The Content) with several connections and realistic examples that foster modelling and conduct students to a productive struggle (Cognitive Demand). All students had personal devices, could work in groups, and had access to different ways of working, according to what was most suitable to them (Equitable Access). 
In this situation, there was active participation by students, who were free to model the problem as they preferred and to argue for their choices (Agency, Ownership, Identity). The teachers (here, the researchers) played a central role: they met students 'where they were', collected their ideas, built on their beginnings, and addressed their misunderstandings (Formative Assessment). Before going into the details of the analysis, we want to describe the class environment fostered by the teacher, Maria, to help explain how the class was prepared for the experiment. As mentioned, the students were attentive and active. This is also due to Maria, who had got her students used to thinking about, arguing for, and practicing mathematical concepts, rather than to listening to explanations of prepared mathematical topics. During her lectures, Maria often asked students what they thought, invited them to give their ideas, and had them share possible solutions of tasks with their classmates. The content of the experiment was very rich and potentially difficult, and we had doubts about obtaining helpful results. During the experimentation in class, we videotaped the meetings and, in the end, used the videos to categorise the results obtained in terms of the efficacy of the TRU framework, considering also the extended modelling cycle (Greefrath, 2011). Here we deal with the five dimensions of TRU, making our considerations with respect to each one and answering each sub-research question, which is reported at the beginning of the corresponding subsection. In the following, quotes of students or researchers are written in italics in the text.

Dimension 1: the content

RQ1: 'Is it possible for 8th grade students to understand rich Content, in our case graph theory, combinatorics, and the connection between them?'. Too often, at least in Italy, mathematics is presented as a set of separate chapters, and rarely do teachers work on connections among mathematical topics. The content we dealt with, instead, is rich in connections among topics and with the theme the class was working on before our experiment, namely, early algebra. We arrived at algebraic concepts only at the end of the path, passing through two apparently separate topics (combinatorics and graphs). Rather than being superficial, our content is rich indeed (it can be taught at the university level). Moreover, it helps develop one of the important mathematical skills, namely, modelling. We helped students with modelling using a mathematical concept that is easy to grasp, namely, a graph. So easy, in fact, that when we posed The fish problem (activity 2), a student had already drawn a graph before knowing the mathematical concept (Fig. 9). The student Dario drew the scheme shown in Fig. 9. Here is a dialog between Dario (D) and the researcher (R), which anticipates the vertex-colouring topic: D: "If the clownfish can stay together with the magnificent fire fish, they are no longer 4, but 3. It is like we have 3 varieties of fish in 4 tanks." R: "And if, instead, I leave them separated, they are 4." D: "Yes." R: "So, I can do this: I can consider them equal, right? … or I can consider them separated. Very good. This will be very useful, especially next time." Dario's scheme is the complement of the graph we then used to solve The fish problem (he joined the fish that can stay together instead of the fish that cannot), but the researcher here was focused only on how to draw, using a graph, the situation given by the relationships.
The researchers, at this point, were pleased to see that the student had chosen the right tool (graphs) to grasp potentially difficult content. As for modelling, here Dario is still in the rest of the world realm (Fig. 1), handling the real situation & problem. In fact, he talks about fish and tanks, but he is starting to face the problem in a mathematical way by schematising the problem on paper.

Dimension 2: cognitive demand

RQ2: 'Could 8th grade students be able to positively answer such a challenging Cognitive Demand?'. Even if we discussed only the usefulness of an intuitive object, like a graph, we emphasise that the posed task was not trivial to solve. It required a struggle. It was a challenge. At the beginning, when we asked how many possibilities the owner of the aquarium had to arrange the fish, students were not able to answer correctly. They tried to use what we had just explained: simple k-permutations (Table 4 shows Alice's and Giulia's attempts to solve the fish problem). In particular, two students, Alice and Giulia, counted 24 ways to arrange the 4 fish varieties in 4 tanks (simple 4-permutations of 4 objects) and 12 ways (6 + 6) to arrange 3 fish varieties (identifying two varieties that can stay in the same tank) in 3 tanks (simple 3-permutations of 3 objects). In this last calculation, they should have counted two simple 3-permutations of 4 objects instead. By the end of the path, the students understood how to count all possible arrangements, winning the challenge. We relate a discussion after dealing with vertex colourings and the algorithm, as recorded by the researcher in class: R: "How did we solve the fish problem? Do you remember?". Student: "Yes, we have 4 empty tanks and 4 varieties of fish." R: "Yes. So, how is the graph? We have 4 tanks, which means … What does it mean to have 4 tanks?". Student: "4 … 4 colours". (Several students said, together, "4 colours".) R: "4 colours: red tank, green tank, …. And the fish are: octopus, magnificent fire fish, clownfish and regal tang fish (drawing on the blackboard 4 points with the names of fish) and what about the links? We joined … whom?". Student: "Who cannot stay in the same tank." R: "Who cannot stay in the same tank, in such a way that they have a different colour." Students were able to pass from a real model & problem to a mathematical model & problem (as in the Blum and Leiß cycle, shown in Fig. 1). In fact, they easily translated real objects (fish and tanks) into mathematical objects (vertices of a graph and colours of vertices) and a real relationship (a food chain) into a mathematical relationship (edges of the graph). While one of the researchers drew the graph suggested by students with the fish names (Fig. 10), another one drew the graph with the yEd editor, anticipating the application of the C-C Algorithm and arriving at the computer model & problem in the technology realm (Fig. 1), making sure that students understood that, when colouring a graph, two unconnected vertices can have different colours (connection) or the same colour (contraction). After using the algorithm, students were invited to observe the complete graphs obtained (Fig. 11, which shows the complete graphs at the end of the Connection-Contraction Algorithm) and count the arrangements. The researcher then invited students to move back from the computer model towards the results: R: "What have we obtained?". Student: "2 C_3 and 1 C_4" (students called complete graphs C_n, instead of K_n, as we did in Sect.
5, because 'C' is the first letter of the Italian word for 'complete'). R: "And then how many ways do we have to colour, i.e., to put the fish in the tanks?". Students correctly counted, together, "for every C_3 with 4 colours, 4 · 3 · 2, twice, plus 4 · 3 · 2 · 1." The researcher wrote what they suggested on the blackboard (Fig. 12). By the end, students had arrived at the rest of the world realm for the final real results (Fig. 1), after struggling in the beginning with a non-trivial task.

Dimension 3: equitable access to mathematics

RQ3: 'Could a path aimed at a challenging demand, thanks to the use of technology, guarantee Equitable Access to Content for all the students?'. The class was composed of students with different aptitudes, but all very accustomed to technology. So, we decided to set up the educational path with an emphasis on technology, but we also wanted to leave students free to use the approach they felt would be best. We supported students in their method of choice, both in posing the problem and in practicing the algorithm. Some of them preferred technological devices; others, paper and pencil; still others cut the paper to create vertices and edges. Moreover, we let students work and went around the desks to support them in the mathematical activities. Here we show Asia's behaviour, which also emphasises how technology could be useful in such cases: she decided to cut the paper, contracting and connecting graphs by using and rearranging physical objects she created herself with coloured paper (Fig. 13). The researcher aided Asia with the task, but noticed that she had some difficulties, beginning with the first steps of the algorithm. The girl correctly understood the algorithm, but in moving the vertices she could represent only one situation (the one on which she was presently working), losing track of any previous steps. In fact, from the second step onward, she always connected vertices not yet joined, but forgot to contract them. Passing from the mathematical model to the computer model (Fig. 1) can help in solving the task, because technology, in this case, could have been used as an extension of memory: the software does not think for the person, but it can free the person's mind from remembering previous steps, leaving it free to focus on the important mathematical concepts. Indeed, we asked Giulia, another student, how she preferred to work and why. She answered: "Using the software, because it is more usable and quicker (than using paper and pen). Because you can do more in less time." We noticed, moreover, that in each step one can pass from a specific graph to the more general set of generated graphs, using the Zoom command in the software, keeping the whole algorithm under control. This is what Dario and Giulia did, for instance. See Fig. 14, which shows how Dario used the Zoom command to focus on the general and the specific. Another technological feature that can help students access mathematical activities is the virtual approach: the software gives the opportunity to manipulate the vertices as if they were real objects, taking advantage of the ease in obtaining and dragging copies. This is evident in what Dario, who used yEd, said: "We start from this one, the main (graph), where the square is joined to the octagon and the circle is alone. I copied it and overlapped the circle and the square, obtaining a complete graph. Instead, on this side I joined the circle and the square." Dario talked about movements ("I overlapped", "I joined") referring to virtual, not physical, objects.
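To make the two moves Dario describes concrete, here is a minimal sketch (our own illustration, not code from the study) of how connection and contraction evaluate a chromatic polynomial: repeatedly pick two non-adjacent vertices and either join them by an edge (they must receive different colours) or merge them (they receive the same colour), until only complete graphs remain; a complete graph on k vertices then contributes the number of simple k-permutations of the n colours. The conflict graph used below is only illustrative, since the article's specific prey-predator pairs are not reproduced here; it is chosen so that, like the fish graph, it decomposes into two complete graphs on 3 vertices and one on 4 vertices.

from itertools import combinations
from math import prod

def count_colourings(vertices, edges, n):
    # Connection-Contraction: for two non-adjacent vertices u and v,
    # the n-colourings split into those where u and v differ (connect them)
    # and those where they coincide (contract them).
    vertices = frozenset(vertices)
    edges = frozenset(frozenset(e) for e in edges)
    for u, v in combinations(sorted(vertices), 2):
        if frozenset((u, v)) not in edges:
            connected = edges | {frozenset((u, v))}   # connection step
            merged_vertices = vertices - {v}          # contraction step: v is merged into u
            merged_edges = set()
            for e in edges:
                e2 = frozenset(u if w == v else w for w in e)
                if len(e2) == 2:                      # drop an edge that collapses onto one vertex
                    merged_edges.add(e2)
            return (count_colourings(vertices, connected, n)
                    + count_colourings(merged_vertices, merged_edges, n))
    # No non-adjacent pair is left: the graph is complete on k vertices,
    # so its n-colourings are the simple k-permutations of the n colours.
    k = len(vertices)
    return prod(n - i for i in range(k))

# Illustrative conflict graph: a triangle A-B-C plus a vertex D in conflict only with A.
vertices = {'A', 'B', 'C', 'D'}
edges = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('A', 'D')]
print(count_colourings(vertices, edges, 4))  # prints 72, i.e. 2 * (4*3*2) + 4*3*2*1

For this graph the procedure reproduces the decomposition into two C_3 and one C_4 discussed above, and the count agrees with evaluating the chromatic polynomial n^4 - 4n^3 + 5n^2 - 2n at n = 4.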
Dimension 4: agency, ownership, and identity

RQ4: 'Could an approach using 'real objects' to represent 'mathematical objects' facilitate Agency, Ownership, and Identity in students?'. We wanted the students to be the main actors in the educational path; we wanted them to be active with their minds, hands, and entire bodies. This is why we engaged them from the beginning with activities involving their bodies and real objects that they could touch, instead of abstract mathematical objects, as in the first task used for simple k-permutations (Who is on the podium?). Being active in both mind and body was useful in generalising simple k-permutations, as shown in the following dialogue between Graziano (G) and the researcher: G: "If we have 5 people in 3 places, we have 5 · 4 · 3." R: "Good! That's right. Could you tell me 'mathematically'?". G: "The number of factors depends on the number of places." R: "And why do you start from 5?". G: "Because if we add a person, we have 5 possibilities from which to choose for the first place, and in the second place we will put 4, and in the third, 3." Graziano talked in terms of people and also used gestures to simulate the movements of a person added to the group of classmates on the 'podium'. Dario and other students referred to the activity using physical bodies in the second meeting, during a discussion on graph colouring: R: "If I have a complete graph with 2 vertices, and I have 5 colours, how many colourings can I have?". D: "20, because we have to compute 5 · 4." R: "How did you come up with it so quickly?". G: "Because we remember the example from our last meeting, concerning first and second place." In the second and third activities (The fish problem and Draw the relationship), students dealt with fish, friends, and cartoons, rather than with 'abstract' points (this choice was helpful to promote students' ownership). They were invited to avoid considering already-proven mathematics and to think and learn by their own 'creations'. They showed ownership of the content, easily passing from one representation of the topic to another. We relate a discussion that emphasises this ability to pass between real and mathematical topics and move easily within the realms of the extended modelling cycle (Fig. 1): R: "We have a complete graph with 2 vertices and 5 colours. How many colourings do we have?". Student: "20, because we have 2 tanks and 5 varieties of fish." Even if the student is confused about tanks (which should correspond to the number of colours, 5) and varieties of fish (which should correspond to the number of vertices, 2), he realises that the boundary between mathematical and real topics is porous. The researcher asked about graphs, and they answered in terms of tanks and fish. Students had no problem assigning equal/different colourings to equal/different tanks in the fish problem. They showed ownership of the topic, seeing it from different points of view.

Dimension 5: formative assessment

RQ5: 'Is an approach based on Formative Assessment useful to help students in understanding?'. We met the students 'where they were', leaving them time and space to think and produce according to their own ideas. For such a teaching method, it is neither easy nor helpful to provide a summative assessment. We did not want a quick, accurate answer from students, but rather questions, ideas, and strategies.
To encourage students, it can be useful to accompany them along their current path, suggesting ways to set up and/or fix the steps in such a way as to guide them in the right direction. We have some videos showing students who applied the C-C Algorithm correctly and quickly. Here, however, we want to relate what happened with Roberta, who decided to work on paper and needed some suggestions. First of all, we noticed that, as shown in the third picture of Fig. 15, Roberta filled the whole sheet before completing the algorithm. This problem could have been avoided by using the potentially infinite sheet in the yEd editor. However, the researcher did not want to force the student to use the software, because of our preference to respect the method students chose to use. This led the researcher to accompany the girl and help her fix the problem: she was essentially discarding graphs along the way. In particular, she often forgot the connected graph. Moreover, in the contracted graph, whenever she overlapped two vertices, she tended to draw two copies of the edges (Fig. 16). Again, it would have been helpful to suggest the use of the software, because it would have made all the steps 'automatic', but the researcher respected Roberta's choice. She needed the constant presence of the researcher to guide her in the various steps, even if she understood all the steps of the algorithm. All the corrections were made to help her keep the steps in mind, rather than to assign her a grade. Moreover, the help and interaction in class was not only between students and researchers, but also among classmates arranged in 'islands' (Fig. 4). We want to emphasise that although no grades were given to the students, they remembered in the second meeting what they had done and learned the previous week. Their aim was not related to a performance goal, but rather to a mastery goal. Students were genuinely interested in solving the fish problem and engaged in the whole activity, while also having fun.

Conclusions

The educational path presented in this paper was designed with inspiration taken from Gionfriddo (2011). The mathematical content was aimed at linking combinatorics and graph theory. While neither topic is part of the school curriculum, arguments can be made for including both (Sandefur, Lockwood, Hart, & Greefrath, 2022), and we were convinced to embrace this challenging task. Moreover, we also connected the topics to algorithms, computer use and modelling. The educational path is divided into phases marked by three activities (Who is on the podium?, The fish problem, and Draw the relationship) and a meeting on vertex colouring and the C-C Algorithm. The activities were configured as a tool to introduce the topic to students and initiate knowledge processes that would unfold through discussions, comparisons, and reasoning, with the help of digital and non-digital technologies. The modelling activity was prevalent throughout the entire process. In the first activity, Who is on the podium?, the modelling activity occurred through consultation with peers. Subsequently, in The fish problem, students immediately modelled the problem using graph theory and solved the question with the help of technology (yEd), then returned to the solution of the real problem by implementing the extended modelling cycle of Greefrath (2011). Upon finally solving The fish problem, the researcher asked, "What are the tanks?"
Students immediately answered, "the colours": not only was the modelling implemented, but there was also an awareness of the analogy between the problem posed and the mathematical tool used to solve it. The educational path in the classroom was carried out in the spirit of learning by doing: by doing, I discover, I think, I verify, I try, I argue my position. The evolution of the students' argumentative competence (Toulmin, 1958) is evident from their productions. Dario supported his thesis with conviction by setting out the different possibilities of colouring the graph, first with three, then with two colours, indicating the vertices he could colour in red and those he could colour in green. Students were able to apply the algorithm and justify why it works, observing that contracting two vertices means giving them the same colour, while connecting two vertices by an edge means they must receive different colours. The dimensions of the TRU framework found application in this pathway, in which, as we saw in a previous section, content, involvement, challenges, and new problems combined to create a mathematical identity appropriate for each student. Modelling skills were also enhanced, since students were able to pass among the Rest of the world, Mathematics, and Technology realms of the Extended Modelling cycle. With activities of this type, one has the opportunity to give meaning to mathematical concepts that often remain abstract and are generally considered difficult; all students can be creative and act as small mathematicians. At the end of the experimentation, when the students were asked what graphs might be useful for, Giulia answered: "We need them to solve problems. When we have a schema represented by a graph, we understand what we have to do. We were asked to work on the fish problem and we could not solve it at first. Today, with the graphs, we completed it." In Giulia's response we see the evolution of the whole activity: modelling (by graphs), solving problems (with the algorithm), and the possibility to re-use what they learned (to solve other problems).

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
11,639.6
2022-08-01T00:00:00.000
[ "Mathematics" ]
Dark state with counter-rotating dissipative channels Dark state as a consequence of interference between different quantum states has great importance in the fields of chip-scale atomic clocks and quantum information. For the Λ-type three-level system, this dark state is generally regarded as being dissipation-free because it is a superposition of the two lowest states without dipole transition between them. However, previous studies are based on the rotating-wave approximation (RWA), neglecting the counter-rotating terms in the system-environment interaction. In this work, we study non-Markovian quantum dynamics of the dark state in a Λ-type three-level system coupled to two bosonic baths and reveal the effect of counter-rotating terms on the dark state. In contrast to the dark state within the RWA, leakage of the dark state occurs even at zero temperature, as a result of these counter-rotating terms. Also, we present a method to restore the quantum coherence of the dark state by applying a leakage elimination operator to the system. Electromagnetically induced transparency discovered in quantum optics has long been an important effect in physics (see, e.g., ref. 1 for a review). This phenomenon of absorption cancelation is interpreted as the appearance of a dark state or coherent population trapping. In addition to atomic systems, dark states have also been observed in a number of solid-state systems including quantum dots 2,3, nitrogen-vacancy centers 4 and silicon-vacancy centers in diamond 5,6. In fact, dark states can have different applications in physics. Atomic clocks based on coherent population trapping 7-10 make high-precision time estimation possible using chip-scale, low-power devices. State transfer can be done with the adiabatic passage of the dark state 11-13. Operations on quantum states like squeezing 14 or decay suppression 15,16 can also be conducted with the help of a dark state. Moreover, dark states can have important applications to slow light 17 and the photocell 18. For the Λ-type three-level system, one of the advantages of the dark state is that it is composed of the two lowest states without dipole transition between them. Within the framework of the rotating-wave approximation (RWA) for the interaction between the system and the environment, the dark state is dissipation-free at a low enough temperature. For instance, the dark state is not influenced by spontaneous emission 1. Studies in two-level systems have shown that the counter-rotating terms can change the ground state 19-21, but there have been few studies regarding the influence of the counter-rotating terms on the dark state 22. When the coupling between the system and the environment becomes strong, the counter-rotating terms cannot be neglected. Thus, interesting phenomena in the quantum dynamics of the dark state are expected even at zero temperature, because the ground state within the framework of the RWA is no longer the ground state of the system when the counter-rotating terms are included. In this article, we study the quantum dynamics of the dark state beyond the RWA, where the Λ-type three-level system couples to two bosonic baths at zero temperature and the couplings between the system and the two baths contain both rotating and counter-rotating terms. We derive a non-Markovian quantum Bloch equation for the dark state using a quantum Langevin approach.
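For orientation, in a Λ-type system in which two fields with coupling strengths Ω_1 and Ω_2 drive the |1⟩-|3⟩ and |2⟩-|3⟩ transitions, the dark state is conventionally written in the standard form below; this is our restatement of the usual definition (possibly up to time-dependent phases), not a quotation of the paper's Eq. (2):

|D\rangle = \frac{\Omega_2 |1\rangle - \Omega_1 |2\rangle}{\sqrt{\Omega_1^2 + \Omega_2^2}},

which has no component on the excited state |3⟩ and is decoupled from the driving fields by destructive interference, so that within the RWA it does not radiate.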
In contrast to the dark state within the RWA, leakage of the dark state occurs due to the counter-rotating terms in the system-bath interaction, revealing the breakdown of the dissipation-free dark state at zero temperature. To suppress the leakage, we apply a leakage elimination operator to the system, which plays the role of keeping the upper level of the Λ-type three-level system empty. Indeed, the leakage of the dark state can be much reduced when applying the elimination operator, as shown below. The interaction Hamiltonian between the system and the two baths, Eq. (3), is written in terms of a_k and b_k, the annihilation operators of the bosonic modes of the two baths. Note that both rotating and counter-rotating terms are included in this interaction Hamiltonian. The dipole transition between |1⟩ and |2⟩ is forbidden in the considered Λ-type three-level system, so we omit this channel here. Under the RWA (i.e., only the rotating terms are considered), Eq. (3) is reduced to the Hamiltonian H_RWA. Let the two bosonic baths be in the vacuum state |0⟩ ≡ |0⟩_a ⊗ |0⟩_b, which corresponds to zero temperature for each bath. It is easy to check that the rotating terms annihilate the state |D⟩ ⊗ |0⟩. Thus, within the RWA the dark state remains dark at zero temperature.

Figure 1. (a) A Λ-type three-level system driven by two fields with frequencies ω_a and ω_b, respectively. The field on the left (right), which drives the transition between |1⟩ (|2⟩) and |3⟩, has a coupling strength Ω_1 (Ω_2) with this transition. The three-level system is also coupled to two bosonic baths, with the coupling strengths characterized by Γ_a and Γ_b. (b) Schematic illustration of the applied control pulses, where τ is the period of the pulses, Δ is the duration of each pulse, and h is the strength of the pulse.

Below we show how the two baths affect the quantum coherence of the dark state when the interaction Hamiltonian is H_int, so as to reveal the effect of the counter-rotating terms in the interaction Hamiltonian. Then, we present a method to restore the quantum coherence of the dark state by applying a leakage elimination operator to the system.

Non-Markovian quantum Bloch equation. The reduced density operator of the system can be expanded in the basis |i⟩⟨j|, where i, j = 1, 2, 3, with reduced density matrix elements ρ_ij(t). Here ρ(t) is the density operator of the total system and Tr_env denotes the trace over the degrees of freedom of the environments. Each reduced density matrix element ρ_ij(t) is just the expectation value of the system operator |j⟩⟨i|, and it can also be written, in the Heisenberg picture, in terms of (|j⟩⟨i|)(t), the operator represented in the Heisenberg picture, where |j⟩⟨i| is this operator at the initial time t = 0, while U(t) is the evolution operator, with T being the time-ordering operator. Here we study the case with the initial state of the total system given by |Ψ(0)⟩ ≡ |D(0)⟩ ⊗ |0⟩. To conveniently see the dynamical behavior of the system from non-Markovian to Markovian, we choose the correlation functions α_i of the two baths to be of the typical Ornstein-Uhlenbeck form. The non-Markovian to Markovian transition can be demonstrated by tuning the parameters γ_i, i.e., the inverses of the correlation times of the two baths. The coupling strength between the system and the ith bath is given by Γ_i, which corresponds to the decay rate under the Markovian approximation 23. Using the Heisenberg equation, we can derive a non-Markovian quantum Bloch equation for the expectation values of the system operators (see Methods).
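For reference, the Ornstein-Uhlenbeck correlation function mentioned above is commonly written as follows; the overall normalization is our assumption of the standard convention and may differ from the paper's expression for α_i:

\alpha_i(t - s) = \frac{\Gamma_i \gamma_i}{2} \, e^{-\gamma_i |t - s|}, \qquad i = a, b,

which reduces to the memoryless (Markovian) form \alpha_i(t - s) \to \Gamma_i \, \delta(t - s) in the limit \gamma_i \to \infty, consistent with Γ_i playing the role of the Markovian decay rate.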
The non-Markovianity of the quantum dynamics of the three-level system is reflected in the time-dependent coefficients entering this equation, which are solved via a hierarchical equation whose auxiliary terms carry indices m and n; the initial condition vanishes for m or n ≠ 0. With the reduced density operator ρ_sys(t) obtained by choosing the initial state of the total system as |Ψ(0)⟩ ≡ |D(0)⟩ ⊗ |0⟩, the fidelity of the dark state of the three-level system can be written as F(t) = ⟨D(t)| ρ_sys(t) |D(t)⟩. This quantity can be used to characterize the leakage of the dark state to other levels.

Breakdown of the dark state. As shown in Eq. (5), for the interaction Hamiltonian H_RWA with only the rotating terms, the dark state persists when the three-level system couples to two zero-temperature baths. Thus, F(t) = 1 in this case. Below we demonstrate the dynamical evolution of the fidelity F(t) of the dark state when the counter-rotating terms are included in the interaction Hamiltonian. For simplicity, we choose the same parameters for the two baths, i.e., the same correlation function α for both. We also choose the level of state |3⟩ to be the zero point of the energy. The energy difference between |1⟩ and |3⟩ is taken as ω_1 − ω_3 = ω_1 ≡ ω. Other parameters with the dimension of frequency are expressed as ratios to ω. To numerically calculate the quantum dynamics of the system, we need to truncate Eq. (15) at a given hierarchical order N, so that the auxiliary terms are dropped for m + n > N. As shown in Fig. 2(a), the numerical results at N = 4 (blue curve) are very close to the results at N = 10 (red dots), indicating that the results converge rapidly with the hierarchical order N. In this case, we choose Γ = 1. Note that the coupling strength between the system and the bath can be characterized by Γ, which corresponds to the decay rate of the system under the Markovian approximation 23. To distinguish different regimes of interaction between the system and the bath, we define Γ = 1 to be the deep-strong coupling regime, because the corresponding Markovian decay rate is then comparable to the system frequencies. Consequently, Γ = 0.01 and 0.1 are defined as the strong and ultrastrong coupling regimes, respectively. When Γ < 1, more accurate results are obtained at the same hierarchical order due to the even faster convergence of the numerical results in the weaker coupling regime between the system and the baths. Thus, the truncation of Eq. (15) at N = 4 can already give reliable results for the model that we study here. However, we choose N = 10 in the following calculations to make the obtained results more accurate. Also, Fig. 2(b) and (c) show the difference between the results at N = 10 and N = 20. It is found that the difference is below 10^-5, further indicating that the truncation is reliable. In Fig. 2(d), we show the fidelity evolution of the dark state by varying the coupling strength Γ between the system and the two baths. When Γ = 1, the fidelity of the dark state decays fast and then quickly reaches stationary oscillations in this strong system-bath coupling regime. These stationary oscillations correspond to a dynamic equilibrium of the dark state under the combined actions of the drive fields and the baths. When decreasing the coupling strength Γ, the fidelity of the dark state decays more slowly and takes a longer time to reach the stationary oscillations. However, with a given correlation time (inverse of γ) of the two baths, the dark state already exhibits appreciable leakage at Γ = 0.01 [see the blue curve in Fig. 2(d)]. In Fig.
2(e), we also show the effect of the environment correlation time (i.e., γ) on the fidelity evolution of the dark state. For a non-Markovian environment, which has a longer correlation time (i.e., smaller γ), the fidelity of the dark state decays more slowly with the evolution time t, in comparison with a Markovian environment with a shorter correlation time (i.e., larger γ). This reveals that the dark state leaks to other levels more slowly when it is coupled to a non-Markovian environment. The physical intuition for why the Markovian environment leads to a faster decay in fidelity may be related to the memory effect of the environment. Actually, owing to the memory effect of the environment, some information that is leaked into the bath can come back to the system in the non-Markovian case. Moreover, similar to Fig. 2(d), the fidelity of the dark state exhibits stationary oscillations at longer times even in the Markovian limit (large γ) of the baths. These oscillations are also due to the persistent drive applied to the system. From the results above, we can conclude that when the coupling strength between the system and the bath becomes strong, the dark state is unstable under the influence of the counter-rotating terms even when the environments are at zero temperature. Thus, owing to the counter-rotating terms in the interaction Hamiltonian, the dark state is no longer dissipation-free even at zero temperature.

Leakage reduction of the dark state. To reduce the leakage of the dark state, we introduce a leakage elimination operator 24-30. When this leakage elimination operator is added, the total Hamiltonian of the open system acquires an additional control term, namely the leakage elimination operator that suppresses leakage from the system to the environment. In numerical calculations, we use the rectangular control pulse c(t) as an example, which has a period of τ [see Fig. 1(b)]. Within the time intervals lτ ≤ t < lτ + Δ, where the positive integer l labels the lth period of the pulse, the control pulse is switched on and has an intensity of h. At other times, the control pulse is turned off. From Fig. 3(a), it can be seen that when the control pulse is applied, the fidelity of the dark state (gray and brown curves) decreases much more slowly with the evolution time, in sharp contrast with the fidelity without the control pulse (red curve). Thus, the leakage elimination operator works quite effectively in reducing the leakage of the dark state. Also, the brown curve has a higher fidelity than the gray one, implying that a higher pulse intensity yields a better control effect for suppressing the state leakage. Figure 3(b) presents the effect of the period of the control pulse on the leakage elimination. It is clear that the fidelity of the dark state increases when decreasing the period of the control pulse. Therefore, both higher intensity and higher frequency of the control pulse can strengthen the effect of the leakage elimination on the dark state. In Fig. 3(c) and (d), we further show the effect of the inverse of the correlation time, γ. Compared with Fig. 2(c), it can be seen that the leakage of the dark state is largely eliminated for very non-Markovian baths with small values of γ. This is a distinct advantage of the non-Markovian baths. Finally, we discuss the leakage elimination operator a bit further. Instead of using the eigenstates |1⟩, |2⟩ and |3⟩, we can also use the dark state |D(t)⟩, the bright state |B(t)⟩ and the eigenstate |3⟩ as the basis states of the Hilbert space of the three-level system, where the dark state is given in Eq. (2).
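Restated compactly (our notation, summarising the pulse just described):

c(t) = h \quad \text{for } l\tau \le t < l\tau + \Delta \ (l = 1, 2, \dots), \qquad c(t) = 0 \ \text{otherwise},

so the pulse train has period τ, duty cycle Δ/τ, and amplitude h; larger h and smaller τ correspond to the stronger and more frequent pulses that, as reported above, give better leakage suppression.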
Discussion

We have studied the non-Markovian quantum dynamics of the dark state in a Λ-type three-level system coupled to two bosonic baths and revealed the effect of the counter-rotating terms in the system-bath coupling on the dark state. Due to these counter-rotating terms, the dark state leaks to other states even at zero temperature, in sharp contrast to the dark state within the RWA. Thus, whether the dark state is really dark or not depends on the validity of the RWA in the considered system. Actually, our numerical results have shown that the counter-rotating terms cannot be neglected for a strong system-bath interaction, because these terms yield appreciable leakage of the dark state in this strong coupling regime. To restore the quantum coherence of the dark state, we propose to apply a leakage elimination operator to the system. Our numerical results indicate that the leakage of the dark state can indeed be greatly suppressed with the help of this leakage elimination operator. Our study reveals a possible mechanism for the dark state to leak and a way to fight against it. This may improve the precision of experiments related to the dark state. While studies of quantum dynamics beyond the RWA mainly focus on two-level systems, our work provides insights into the dynamics of a three-level system beyond the RWA.

Methods

From the total Hamiltonian H_tot given in Sec. II, it follows that the Heisenberg equations for the field operators a_k and b_k can be derived, with the evolution operator U(t) given in Eq. (11). Equation (21) can be formally solved.
3,707.2
2017-03-31T00:00:00.000
[ "Physics" ]
Resin-based dental pulp capping restoration enclosing silica and portlandite nanoparticles from natural resources Natural-based materials represent green choices for biomedical applications. In this study, resin pulp capping restoration enclosing strengthening silica and bioactive portlandite nanofillers were prepared from industrial wastes. Silica nanoparticles were isolated from rice husk by heat treatment, followed by dissolution/precipitation treatment. Portlandite nanoparticles were prepared by calcination of carbonated lime waste followed by ultrasonic treatment. Both were characterized using x-ray diffraction, energy dispersive x-ray, and transmission electron microscopy. For preparing pulp capping restoration, silica (after silanization) and/or portlandite nanoparticles were mixed with 40/60 weight ratio of bisphenol A-glycidyl methacrylate and triethylene glycol dimethacrylate. Groups A, B, and C enclosing 50 wt.% silica, 25 wt.% silica + 25 wt.% portlandite, and 50 wt.% portlandite, respectively, were prepared. All groups underwent microhardness, compressive strength, calcium release, pH, and apatite forming ability inspection in comparison to mineral trioxide aggregate (MTA) positive control. In comparison to MTA, all experimental groups showed significantly higher compressive strength, group B showed comparable microhardness, and group C showed significantly higher calcium release. Groups B and C showed prominent hydroxyapatite formation. Thus, the preparation of economic, silica-fortified, bioactive pulp capping material from under-utilized agricultural residues (rice husk) and zero-value industrial waste (carbonated lime from sugar industry) could be achieved. Preparation and characterization of silica nanoparticles from rice husk Silica was prepared by heat treatment of rice husk at 800 °C for 1 h in a muffle furnace.After the sample was cooled to room temperature, the isolated silica was washed using distilled water till the neutral pH of water filtrate and dried in a vacuum oven at 105 °C for 2 h.The isolated silica was then dissolved in 0.1 N NaOH followed by precipitation by drop-wise addition of dissolved silica in 0.1 N HCl under vigorous stirring at room temperature.The prepared sample was collected by centrifugation at 10,000 rpm, purified by repeated washing using distilled water, and centrifuged.Finally, the suspended silica nanoparticles were dialyzed against distilled water until neutral pH. The isolated silica nanoparticles were characterized using a transmission electron microscope (TEM) using a JEOL TEM (JEM-2100, JEOL, Tokyo, Japan) with an acceleration voltage of 100 kV.A drop of the suspension was used on a copper grid bearing a carbon film.X-ray diffraction pattern (XRD) was determined using a Bruker diffractometer (Bruker D 8 advance target).Cu Kα radiation source with a second monochromator (λ = 1.5405Ǻ) at 40 kV and 40 mA was used and the scanning rate was 0.2 min −1 . The atomic percentages of the prepared silica nanoparticles were obtained using Energy dispersive X-ray spectroscopy (EDX), hyphenated with Quanta FEG 250 scanning electron microscopy.The spectra were displayed on TEAM ® software at the acceleration voltage 20 kV. 
Silanization of the silica nanoparticles

The silane coupling agent was prepared by proportioning 70 wt.% ethanol in a glass beaker; a few drops of acetic acid were then added gradually into the solution to decrease the pH to 3-4. Finally, 6 wt.% trimethoxysilane was added 11,35. Stirring was done using a magnetic stirrer for 1 h. The silica nanoparticles synthesized from rice husk were incubated in the silane coupling agent for 2 h and then centrifuged (Megafuge 8R, Thermo Fisher, Germany) for 30 min. Finally, the precipitate was dried for 24 h in a hot oven at 80 °C 34.

Incorporation of the silanized silica and portlandite nanoparticles into the prepared resin matrix

The silanized silica and portlandite nanoparticles were incrementally added to the experimentally prepared resin matrix and then hand mixed using a plastic spatula to form a homogenous resin mix, in three different groups with a total 50 wt.% filler loading. The prepared groups were designed as follows: group A: 50 wt.% silanized silica nanofillers; group B: 25 wt.% silanized silica nanofillers and 25 wt.% portlandite nanofillers; and group C: 50 wt.% portlandite nanofillers. The three experimental groups A, B, and C were compared to a commercially available MTA product (PPH CERKAMED, Poland, batch number: 205211) supplied in the form of a powder that consists of calcium oxide with oxides of silicon, iron, aluminum, sodium, potassium, bismuth, magnesium, zirconium and calcium phosphate. The mixing liquid was distilled water.

Compressive strength measurement

Five specimens for each group were prepared for compressive strength testing using cylindrical Teflon molds 4 mm in diameter and 6 mm in height according to ADA specification no. 27 36,37. Each specimen was incrementally packed in a mold placed over a glass slab and a celluloid strip. Another celluloid strip was pressed at the mold's top against another glass slab to extrude excess material. Curing was done using a light-emitting diode (LED) curing unit (RTA, MINIS, Guilin Woodpecker Medical Instrument Co. Ltd., China): 1000-1200 mW/cm² for 40 s. Excess material was removed using Sof-Lex discs. Compressive strength testing was done after immersion of the specimens in distilled water for 24 h using a universal testing machine (AGX-PLUS, SHIMADZU, 5 kN) with a 50 N load cell and a crosshead speed of 0.5 cm/min 38.

Vickers microhardness measurement

Five specimens were prepared for each group using split Teflon molds 5 mm in diameter and 2 mm in thickness 39. Specimens were prepared in the same way as those for compressive strength testing. Excess material was removed using Sof-Lex discs. Hardness testing was then performed after immersion of the specimens in distilled water for 24 h using a Vickers hardness tester (NEXUS 4000™, INNOVATEST, model no. 4503, Netherlands). Specimens were subjected to a 100 g force for a 15 s dwell time for each indentation 40.
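For context, the two quantities measured here are computed from the recorded loads in the standard way; these are general definitions, not formulas taken from the cited specifications. The compressive strength of a cylindrical specimen of diameter d loaded to failure at force F is

\sigma_c = \frac{F}{A} = \frac{4F}{\pi d^2},

so for the 4 mm diameter specimens the loaded cross-section is about 12.6 mm². The Vickers hardness number obtained with an indentation force P (in kgf) leaving an impression of mean diagonal length d_i (in mm) is

HV = 1.8544 \, \frac{P}{d_i^2}.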
Calcium ions-releasing ability and pH measurement

Five cured cylindrical specimens with dimensions of 4 mm in diameter and 6 mm in height were prepared as previously mentioned. The specimens were then immersed in freshly prepared artificial saliva (2.38 g Na₂HPO₄, 0.19 g KH₂PO₄, and 8.00 g NaCl per liter of distilled water, adjusted with phosphoric acid to pH 6.75) 35 and stored at 37 °C for 1, 7 and 14 days. The volume of the artificial saliva was adjusted to be 10 mm³ according to the equation Vs = Sa/10, where Vs is the volume of the artificial saliva and Sa is the apparent surface area of each specimen 41. The calcium ions concentration was measured by an inductively coupled optical emission spectrometer (Ultima Expert, HORIBA, France). pH was measured after each incubation period using a pH meter (Jenway 3505, Bibby Scientific Limited, UK).

Evaluating the apatite-forming ability of the prepared groups

Five cured cylindrical specimens with dimensions of 4 mm in diameter and 6 mm in height were prepared and stored in 10 mm³ of artificial saliva for 14 days 41. The specimen surfaces were characterized using scanning electron microscopy and energy dispersive X-ray spectrometry (SEM-EDX; JCM-6000Plus NeoScope™, JEOL Ltd., Tokyo, Japan). The EDX analysis was carried out on the surface of the crystals present in the SEM image of each group. Two locations were selected and the results of the EDX analysis were averaged.

Statistical analysis

Numerical data were represented as mean and standard deviation (SD). Shapiro-Wilk's test was used to test for normality. Homogeneity of variances was tested using Levene's test. Data showed parametric distribution and variance homogeneity. Mechanical properties data were analyzed using one-way ANOVA followed by Tukey's post hoc test. Calcium ions release and pH data were analyzed using two-way mixed model ANOVA. Comparisons of simple effects were done utilizing one-way ANOVA followed by Tukey's post hoc test for independent variables and repeated measures ANOVA followed by Bonferroni post hoc test for repeated measurements. P-values were adjusted for multiple comparisons using Bonferroni correction.

Characterization of silica nanoparticles prepared from rice husk

The prepared silica nanoparticles were characterized using XRD, TEM, and EDX. The XRD pattern of the prepared silica nanoparticles, presented in Fig. 1, showed that they had a cristobalite structure with reflection peaks at 2θ values of 21.7°, 28.4°, 31.3° and 36.1°, which correspond to the 101, 111, 102, and 200 crystal planes 42. The EDX analysis, shown in Fig. 2, revealed the purity of the prepared silica nanoparticles, since Si and O atoms had peaks with strong intensity with traces of Cu and K. The sum of the atomic percent of silicon and oxygen atoms was 99.46%, while that of Cu and K together was 0.53%. Regarding the size of the prepared silica nanoparticles, TEM images showed particle sizes with a diameter range of 23-116 nm (Fig. 3). The EDX analysis shown in Fig. 5 displayed the good purity of the prepared portlandite nanoparticles, since the peaks for Ca and O atoms predominated in the spectrum. Traces of magnesium, sulfur, phosphorus, and silicon, which could have originated from the carbonated mud 27,28, were also found. The TEM images presented in Fig. 6, on the other hand, revealed the tetragonal and hexagonal plate-like shape of the prepared portlandite nanoparticles, with dimensions ≤ 100 nm.
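As a quick check of the immersion-volume rule quoted above for the calcium-release and apatite tests (Vs = Sa/10, with Sa the apparent surface area), the 4 mm × 6 mm cylindrical specimens give, assuming the full cylinder surface is counted (our arithmetic, not reported in the study):

Sa = 2\pi r(r + h) = 2\pi \times 2\,\text{mm} \times (2 + 6)\,\text{mm} \approx 100.5\ \text{mm}^2, \qquad Vs = Sa/10 \approx 10,

which matches the stated immersion volume of 10 per specimen.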
Mechanical properties evaluation of the prepared pulp capping materials Intergroup comparisons of the mechanical properties of the experimentally prepared pulp capping material are presented in Table 1.The proportion of the components in each group was determined according to preliminary experiments to obtain samples with reasonable strength and handling properties. The results of the mechanical properties evaluation showed that; for maximum compressive strength and modulus of elasticity, there was a significant difference between the tested groups, with the positive control (MTA), where MTA had a significantly lower value than all the experimental groups (p < 0.001).However, the maximum strain results showed no statistically significant difference among all groups at p = 0.476. As for the hardness results, the positive control group revealed significantly higher values than those of groups A and C, though there was no significant difference in the hardness values of group B. Calcium ions release Results of the mixed model analysis of the calcium ions release data revealed a significant interaction between the groups and the time of measurement (p < 0.001).Comparison of simple effects showed that; for different time intervals, all tested groups revealed significant differences in-between (p < 0.001).On day 1, post hoc pairwise comparisons displayed significantly higher calcium ions release for the positive control group than for the other groups (p < 0.001).At day 7, group C had significantly higher calcium ions release than the other groups (p < 0.001).In addition, the positive control and group B had significantly higher values than group A (p < 0.001).At day 14, all pairwise comparisons were statistically significant (p < 0.001), with group C having the highest calcium ions release, followed by the positive control, then group B and group A having the lowest mean value. For the positive control group, calcium ions release after 7 days was significantly lower than that after 1 and 14 days (p < 0.001), while for group A, the difference was not statistically significant (p = 0.105).For groups B and C, the difference was statistically significant, with calcium ions release values measured after 7 and 14 days being significantly higher than that at day 1 (p < 0.001).Mean and standard deviation values for calcium ions release are presented in Fig. 7. pH analysis The results of the mixed model analysis (p = 0.006) of the pH data revealed that there was a significant interaction between the groups and the time of measurement.Comparison of the simple effects showed that; for days 1 and 7, there was a significant difference between the tested groups (p < 0.05), while for day 14, the difference was not statistically significant (p = 0.333).On day 1, post hoc pairwise comparisons showed that the mean pH values of group C were significantly lower than those of the other groups (p < 0.001).On day 7, the positive control group exhibited a significantly higher mean pH value than the other groups (p < 0.001). For the positive control group, pH values measured after 7 days were significantly higher than those measured after 14 days (p = 0.030).While for the other groups, there was no statistically significant difference in their pH values among different time intervals (p > 0.05).Figure 8 represents the mean and standard deviation of the recorded pH values. The apatite forming ability The SEM images as well as the results of the EDX analysis are presented in Fig. 9 and Fig. 
10, respectively, for the positive control and the experimental groups after soaking in artificial saliva for 14 days. As revealed in the SEM images, there was evidence of the formation of calcium phosphate deposits in the positive control group, group B, and group C; these deposits had an irregular morphology in the positive control group, a shuttle/flower-like shape in group B, and nano-spherulites packed as clusters of spheroidal masses having needle-like projections in group C. On the other hand, group A showed no formation of any calcium phosphate deposits. These results were confirmed by the data obtained from the EDX analysis, where the Ca/P atomic ratios calculated from the measured calcium and phosphorus percentages were 2.11, 0, 1.63, and 2.25 for the positive control, group A, group B, and group C, respectively.

Discussion

Pulp capping therapy is a well-known restorative procedure aimed at preserving the vitality of the restored teeth; however, it represents a big challenge to dental practitioners and, consequently, manufacturers, owing to its proximity to the sensitive pulp tissue. In the current study, two types of nanoparticles have been prepared for use in pulp capping restorations. The preparation processes involved several steps. In the case of portlandite nanoparticle preparation, the process was as follows: (1) Heat treatment at 800 °C, which is known industrially as calcination and is used in several industries, such as the preparation of high-quality fillers. Calcination is a closed process, as all heat and gases evolved are utilized and carefully recycled 45. In our study, calcination caused the evolution of carbon dioxide gas in a relatively pure form, which could thus be easily collected from the system to be used as a gas or in other industrial operations. (2) Ultrasonic treatment of the produced portlandite to form nano-sized particles. Ultrasonic technology is a green technology for the preparation of nanomaterials, as it does not involve chemical utilization and is not a high-energy-consuming technology; moreover, ultrasonic technology has potential applications in our daily life activities, like household washing 46. In the case of the silica nanoparticles, the same heat treatment mentioned above was used, and then dissolution using sodium hydroxide followed by precipitation using hydrochloric acid was carried out. Heat treatment of rice husks generates energy that is usually used as a source of heating in several industries 47. The alkali and acid used in the preparation process are commonly used in many industries, such as the preparation of food-grade materials. For example, hydrochloric acid is used as an acidity regulator in the food industry (under the symbol E507), while sodium hydroxide is used in curing olives and in removing the skins of fruits and vegetables to be canned 48. Regarding the raw materials of the existing commercially available pulp capping materials, the formulation may differ according to their type and manufacturer. Some are green-based, as their main ingredient depends on refined Portland cement, as in ProRoot MTA (Dentsply Tulsa, Johnson City, TN, USA) and TheraCal (Bisco Inc, Schaumburg, IL, USA). However, others are not green-based, as their ingredients depend on synthetic chemical oxides, as in Ledermix MTA (Riemser, Riems, Germany), MTA Angelus (Angelus dental solutions, Londrina, PR, Brazil), and BioAggregate (Innovative BioCeramix Inc, Vancouver, Canada) 49, as well as the current commercial product used as a positive control in this study (MTA, PPH CERKAMED, Poland).
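For reference, regarding the Ca/P atomic ratios reported above (2.11, 0, 1.63 and 2.25), the benchmark value used later in the discussion follows from the stoichiometry of hydroxyapatite, Ca10(PO4)6(OH)2, whose calcium-to-phosphorus atomic ratio is

\text{Ca/P} = \frac{10}{6} \approx 1.67,

so EDX-derived Ca/P ratios near 1.67 (as in group B) indicate precipitates close to stoichiometric hydroxyapatite, while markedly higher ratios point to excess calcium-rich phases on the surface.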
The most used filler in dental composite resin restorations is silica. Silica particles of varying sizes and shapes can improve the physico-mechanical properties of dental composites. Most of the silica particles utilized in resin composite synthesis are fabricated through the sol-gel process, using harmful and costly chemicals like tetraethyl orthosilicate and sodium silicate. In the current study, silica was obtained from an inexpensive bio-based source, rice husk, and without the use of toxic chemicals. Bio-based materials are renewable, sustainable, and ecological 34,50. Silica nanoparticles were prepared from rice husk in a cristobalite crystal structure, as the XRD pattern showed. TEM images of the prepared silica confirmed their spherical shape and nano size (23-116 nm). The most preferred size and shape of silica fillers for use in dental composite restoration fabrication are nano-sized, spherical fillers. These preferred characteristics allow increased filler loading, thus enhancing the composites' fracture strength 51,52. Moreover, small fillers with variable sizes allow denser packing, which raises the volume fraction of the fillers in the resinous composite restoration. All these features were fulfilled by the silica nanoparticles prepared in this study from rice husk; this could explain the good mechanical properties of the experimental resinous pulp capping materials 50. The incorporation of nanoparticles into the resin matrix improves its wear resistance, modulus of elasticity, flexural strength, tensile strength, and fracture toughness 19; however, nanoparticles tend to agglomerate if their surface is not treated 51. Agglomeration and poor distribution in the resin matrix produce weak points as well as stress concentration areas, with eventual failure of the restoration 19. For this reason, the silica nanoparticles utilized in this study were silanized using a 6 wt.% trimethoxysilane coupling agent. The silanization process of the silica nanoparticles hydrolyzes the (-OCH3) groups into silanol groups using water in the ethanol solvent. At that point, the silanol groups form a covalent bond with hydroxyl groups on the silica surface. The silanol groups on adjacent silanes condense together, forming a polymer film on the silica surface. Furthermore, hydrogen bonds form between the hydroxyl groups on the surface of the silica particles and the carbonyl groups of the silane coupling agent via van der Waals forces 50,53. Silanization of the silica nanoparticles provides a homogenous distribution of the nanofillers in the resinous restoration, with an essential bond between the matrix and the filler phases 50,51. Due to the reasons mentioned above, adequate wetting of the prepared silica nanofillers was achieved in this study, with an overall improved performance of the material. Previous studies mentioned that dentin bridge formation after direct resinous pulp capping was significantly slower than that obtained after direct calcium hydroxide application, so microleakage may result, with subsequent failure 16,17. Therefore, in this study, a new resinous system containing a dentin-promoting agent (portlandite nanoparticles) was developed. This study matched the scope of other studies where Suzuki et al.
54 studied the wound healing process of rats' dental pulps that were directly pulp-capped with an experimental resinous restoration containing calcium phosphate as a dentin-promoting agent. The results of that study revealed that the incorporation of calcium phosphate into the experimental resinous restoration efficiently promoted dentin bridge formation and that the quantity of reparative dentin formed was directly related to the concentration of the added calcium phosphate. Another study by Kato et al. 55 reported that an experimental resinous pulp capping system containing dentin-promoting agents, such as hydroxyapatite, brushite, whitlockite, and octacalcium phosphate, induced reparative dentin. Similarly, in the present study, nano-sized hydrated calcium oxide (Ca(OH)2, portlandite) was added as a dentin-promoting filler to the experimental pulp capping restoration to give the pulp a self-healing potential and to reduce microleakage at the restoration-cavity interface. Pure portlandite with dimensions ≤ 100 nm was prepared from the carbonated lime waste of the sugar-beet industry, which generates huge amounts of carbonation lime waste holding calcium-rich portlandite during processing. The ultimate goal of pulp capping materials is reliable biological properties; thus, calcium ions release, pH, and apatite-forming ability inspections were performed to examine the bioactivity of the new experimental pulp capping materials. However, testing their mechanical and physical properties is no less important than testing the biological properties, as any dental material intended to restore teeth should sustain the applied masticatory and parafunctional forces; accordingly, inspecting the mechanical properties of a new dental restoration is an important factor in determining its clinical success 50. Additionally, a serious drawback of most MTA restorations is their weak mechanical properties 9,11; thus, in the current study both mechanical and biological properties were examined.
The Vickers hardness test is a reliable method for evaluating resinous composites. In general, measuring the hardness of a dental restoration provides a good estimate of its wear resistance and stability against oral environmental changes 34,57 . Many factors influence the hardness of a dental resinous composite restoration, including resin type, filler loading, and curing time 50,57 . In the current study, the four tested groups exhibited reasonable hardness values, comparable to a previous study 34 in which rice husk silica was incorporated into an experimental flowable resin composite with hardness results between 29 and 31 VHN. Although groups A and C showed significantly lower hardness than MTA (p < 0.001), group B showed a result comparable to, and not significantly different from, MTA. Several studies 58,59 have suggested that Vickers hardness can be increased by increasing the amount of particular filler types such as zirconia and aluminum oxide; these fillers were not added to the prepared experimental groups but are present in MTA, whose composition includes zirconia and aluminum according to the manufacturer. Dental caries is a troublesome disease that develops when cariogenic bacteria metabolize carbohydrates, generating acids that lower the pH of saliva and demineralize the teeth. A pH of 5.5-6 is regarded as the theoretical cariogenic pH 60 . The use of dental pulp capping materials that release active remineralizing species, such as calcium ions, is therefore considered a smart strategy to arrest dental caries. In the current study, all recorded pH values at the different immersion periods were above the critical cariogenic pH, which reveals a smart behavior of the investigated materials. This increase in pH most likely results from cations released by the tested materials. The sustained release of calcium ions was evaluated by immersing the control and experimental groups in artificial saliva for 1, 7, and 14 days. Previous studies have used various immersion solutions, such as simulated body fluid, Hanks' balanced salt solution, Dulbecco's phosphate-buffered saline, and distilled water; the current study used artificial saliva as the immersion medium to simulate oral environmental conditions 61 . The ability of MTA to release large amounts of calcium ions and alkalinize the surrounding fluid can be attributed to the surface hydration and dissolution of its calcium-silicate particles, which are highly reactive with water. This reaction forms calcium hydroxide that dissociates into Ca and OH ions, which are released into the medium and elevate the pH [62][63][64] .
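For reference, the Vickers number is computed from the indentation load and the mean indentation diagonal. The example values below are purely illustrative; they simply land in the 29-31 VHN range cited above.

```python
# Sketch: converting a Vickers indentation reading into a hardness number (VHN).
def vickers_hardness(load_kgf, d1_mm, d2_mm):
    d_mean = (d1_mm + d2_mm) / 2.0
    return 1.8544 * load_kgf / d_mean ** 2   # standard Vickers relation, HV = 1.8544*F/d^2

# Example: a 0.3 kgf load leaving ~0.137 mm diagonals gives ~30 VHN.
print(round(vickers_hardness(0.3, 0.137, 0.137), 1))
```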
Although the experimental groups B and C already contain calcium hydroxide (portlandite) in their filler, which can readily release Ca and OH ions, the pH of MTA was significantly higher than that of all the experimental groups on days 1 and 7. This can be explained by the resin in the composition of the experimental groups, which may have regulated the release of Ca and OH ions and consequently did not elevate the pH of the medium initially; by the end of the immersion period, however, this difference diminished. Furthermore, the results of this study demonstrated that the group B and group C pulp capping materials exhibited high levels of Ca ion release throughout the study. It is well known that amorphous calcium phosphate is initially formed in vivo and then transforms into a crystalline apatite-like phase by taking up OH- ions from the solution [65][66][67] . Calcium silicate-based restorations can also have a sealing effect derived from their apatite-forming capability: calcium ions in the pulp capping material react with phosphates in dentin, forming an apatite-like structure that creates mechanical and chemical bonds with dentin; this improves the sealing ability of the pulp cap and yields what are called 'self-healing restorations' 61 . In this study, the experimental groups were immersed in artificial saliva for 14 days to analyze their apatite-forming ability. The shape and size of the formed apatite crystals significantly affect the bioactivity of the material; smaller apatite crystals adsorb more protein and thereby attract more cells capable of inducing hard-tissue regeneration 2 . As revealed in the SEM images, groups B and C showed densely packed small crystals, suggesting that these groups can induce cell proliferation and hard-tissue formation 2 . According to the SEM observations and EDX analysis, MTA and experimental groups B and C exhibited apatite-forming ability while group A did not, evidently because of the absence of portlandite in this group. The Ca/P ratio of calcium phosphate precipitates significantly affects their degree of bioactivity. Previous studies reported Ca/P ratios of 3.84, 8.33, and 2.74 for MTA incubated in simulated body fluid for 1, 7, and 14 days, respectively 68 . In the current work, the Ca/P ratio of the calcium phosphate precipitated on MTA was higher than the stoichiometric Ca/P ratio of hydroxyapatite (1.67). The Ca/P ratio of the precipitates in group B (25% silica and 25% portlandite) was close to that of hydroxyapatite, while the Ca/P ratio for sample C (50% portlandite) was higher than that of hydroxyapatite and close to that of the positive control. The high Ca/P ratios may indicate excess calcium precipitates on the surface, which provide favorable bioactivity, biocompatibility, and hard-tissue-induction abilities for the experimental restoration materials 2 . The clinical success of bioactive pulp capping materials mainly depends on their ability to form and regenerate the apatite phase of hard tissues; in MTA, this hard-tissue-induction ability derives from the calcium hydroxide produced by its hydration reaction 2 .
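A small sketch of the Ca/P bookkeeping behind this kind of EDX comparison: the atomic percentages of Ca and P give the ratio that is compared with stoichiometric hydroxyapatite (1.67). The example composition is hypothetical, not a measurement from this study.

```python
# Sketch: Ca/P atomic ratio of a surface precipitate from EDX atomic-percent output,
# compared with stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2 -> Ca/P = 10/6 = 1.67.
def ca_p_ratio(atomic_percent):
    return atomic_percent["Ca"] / atomic_percent["P"]

hydroxyapatite_ratio = 10 / 6
sample = {"Ca": 18.2, "P": 10.6, "O": 71.2}     # hypothetical EDX reading (at.%)
ratio = ca_p_ratio(sample)
print(f"Ca/P = {ratio:.2f} (hydroxyapatite = {hydroxyapatite_ratio:.2f})")
if ratio > hydroxyapatite_ratio:
    print("Calcium-rich precipitate: excess Ca on the surface.")
```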
The results of our study demonstrated that the group B and C pulp capping materials possessed high bioactivity and showed modifications in surface morphology and chemical composition upon immersion in artificial saliva, equivalent to or even surpassing those of the positive control group (MTA). This finding was evidenced by the formation of packed calcium phosphate nano-spherulite clusters with needle-like projections in group C. The high bioactivity can be explained by a previous study 63 , which stated that the hydroxyl, ester, and ether chelating groups (OH, C=O, C-O-C, and C-O groups) present in hydroxyethyl methacrylate (HEMA) resin are coordination sites for chelating calcium ions 63 . Likewise, the Bis-GMA and TEGDMA resins in the experimental groups of the present study could form Bis-GMA- and TEGDMA-calcium chelate complexes, which provide initiation sites for apatite nucleation. Further studies are recommended to evaluate the biocompatibility of these novel experimental pulp capping materials on dental pulp cells and to compare their cytotoxic effect and dentin formation capability with a commercially available MTA product at different time intervals. Conclusion Within the limitations of this study, it was concluded that a reinforced bioactive resinous pulp capping material can be produced using low-cost rice husk and carbonated mud from the sugar industry to obtain silica and portlandite nanoparticles, respectively, and that the resulting pulp capping restorations have bioactivity equivalent to MTA but with better mechanical properties and a controlled setting time. Figure 1. The XRD pattern of the prepared rice husk silica nanoparticles. Figure 2. The EDX analysis of the prepared silica nanoparticles. Figure 3. The TEM images of the prepared silica nanoparticles at different magnifications. Figure 4. The XRD pattern of the prepared portlandite nanoparticles. Figure 5. The EDX analysis of the prepared portlandite nanoparticles. Figure 6. The TEM images of portlandite nanoparticles at different magnifications.
6,602.4
2024-07-17T00:00:00.000
[ "Materials Science", "Medicine" ]
The impact of tax policy stimulus on automobile choice-Evidence from Chinese automobile industry Using detailed national brand-level automobile sales data from January 2004 to the end of 2009, we quantify the impact of a series of tax policy stimuli initiated by the Chinese Government on automobile sales in China. These tax stimuli aimed either to discourage the purchase of high-displacement cars or to encourage the purchase of low-displacement cars. We conclude that the first two tax adjustments suppressed high-emission auto sales, while the third adjustment promoted overall auto sales. INTRODUCTION Total sales in the Chinese automobile market in 2009 reached 13.64 million units, a 46.15 percent year-on-year increase, surpassing the United States for the first time and making China the world's largest auto market. In 2010, automobile production reached 18.265 million units with 32.44 percent year-on-year growth and automobile sales reached 18.062 million units with 32.37 percent growth, again ranking first in the world. With auto production and ownership in China increasing each year, the enormous oil consumption driven by the popularity of automobiles has become increasingly prominent. Harmful gases are discharged into the air, causing air pollution and contributing to the greenhouse effect, so developing a low-carbon economy has become an urgent problem for every country. Against the backdrop of building a conservation-minded and eco-friendly society, it is necessary to develop an energy-efficient, low-emission, low-material-consumption automobile industry and to improve the product structure and purchasing orientation of the market through targeted policies. Against this background, the government adjusted the excise tax twice, on April 1, 2006 and September 1, 2008, and adjusted the purchase tax once, in January 2009. The first two excise tax adjustments aimed to "suppress big" and "encourage small", that is, to curb the consumption of high-emission cars and promote the consumption of low-emission ones. The third, purchase tax adjustment only intended to "encourage small", promoting consumption of low-emission cars by cutting the purchase tax. After these three tax adjustments, government departments, foreign media, and consumers disagreed on whether the goal of suppressing large-displacement cars and encouraging small ones had been achieved. Some experts believe the tax adjustments can only protect the environment in the short term and do not really change the automobile consumption structure, while others believe they deeply influence consumers' purchase intentions. Previous commentary has been subjective, lacking scientific evidence and reliable empirical data, and no academic research has evaluated the effects of China's automobile tax adjustments with empirical methods. To fill this research gap, this paper raises the following questions: (1) Did the two excise tax adjustments affect Chinese automobile sales? (2) Did the purchase tax adjustment affect the automobile market? (3) If both did, when did the maximum effect occur, and which moderator variables contributed to it? Specifically, we also examine the moderating roles of engine displacement and country of origin, answering whether high-emission cars were inhibited and low-emission ones promoted, and whether indigenous or joint-venture cars were more strongly affected.
This paper is the first to empirically investigate the effectiveness of tax adjustment in China.It also has some innovations: firstly, how external macroeconomic policy and enterprise micro strategy affect consumer purchase behavior is explored; secondly, in order to know the effect of changes in policy or strategy, scale effect and structural effect are separated; thirdly, it shows up the particularity of China automobile market, as well as the peculiarity of the two taxes; the forth, the policy effectiveness assessment is comparatively deficient at the dependent variable of marketing performance.The fifth, we stand on the sight of the consumer behavior but not the qualitative and theoretical researches. Abroad Previous studies abroad on policy adjustment have set the foundation for policy impact on auto.Earlier researches study the effect of different fuel tax level on consumer welfare, applying questionnaire data of U.S. consumer census (Fullerton and West, 1999). In Japan, Fullerton et al. (2004) take automobile consumption of Japanese residents and simulate the impact of policy changes on the family automobile consumption and the mileage.The policy simulation includes changing the cost per kilometer, such as tax based on emission, carbon tax, fuel tax, etc., and changing vehicle cost, such as tax based on engine size, displacement and years of use. In Mumbai, there are three policies towards decreasing pollution: upgrading the diesel engine to the greener CNG, rising fuel price and collecting auto license tax.Takeuchi et al. (2007) adapt first-hand questionnaire survey and assesse that which has stronger inhibitory effect on vehicle emissions among the above three policies by selection model and nested selection model.This study finally confirms that the most effective method is upgrading diesel engine by evaluating price elasticity and income elasticity, and it would not lead to bus fare rising but consumers converted to family cars, then proceed to exhaust emission. In United States, provided that gasoline price rose one cent, gasoline consumption would lead to a decrease of 0.2%, through investigating the impact of increasing gasoline tax on U.S. auto market (Bento et al., 2009).Gallego et al. (2011) study the impact of auto control policy on air pollution (carbon dioxide purity) in two Latin American urban Mexico and Santiago.They regard time as regulated variable and distributed it into three periods: peak, off-peak and weekend, proceeding to investigate the difference of policy responses in different period.Furthermore, they study policy impact on new car sales and used-cars trading volume through Diff-in-Diff methodology. 
Domestic With the development of auto industry and the worse situation of air pollution, policy adjustment in China is becoming a hot issue that many scholars analyze it in different angles and synthesize the literature on auto tax.Wang (2007) analyzes the impact of excise tax reform of China in 2006.Guo (2010) explores the impact of auto purchase tax initiated and implement on several stateowned firms in two time point respectively applying correlation analysis.It shows that most auto companies' stock soared in varying degrees.Particularly, Audi as a typical high-emission model, its sales is refrained by two auto purchase tax adjustment through descriptive analysis and correlation analysis (Zhu, 2010).Xiao and Ju (2011) study the impact of two auto excise tax released on April1, 2006 and September 1, 2008 on proenvironment (fuel consumption, etc) and social welfare.It finds that the policy decreased the total auto sales, thereby enabling to reduce fuel consumption, but the consumption structure of the various models is not affected.Afterwards, they also study the impact of the fuel tax reform policy announced in 2009, and find the same results.The only difference is that consumers' welfare loss is greater than the consumption tax loss due to fuel tax increasing.Chen et al. (2010) concern about the consumption tax reform in 2006, and find that the price effect and advertisement effect result from the time lag between releasing policy and implementing it. Systemizing the previous literature, the tax lever applied in adjusting foreign auto industry consumption structure is generally fuel tax, license tax, carbon tax, etc., rarely on excise tax and purchase tax, which is China's unique characteristics.Part of existing domestic study has focused on China auto industry tax policy which lays a good foundation for our study.Overall, the results are limited and most qualitative research based Displacement Before After Adjustment range on descriptive analysis and correlation analysis, etc. Articles that apply selection model and other scientific and systematic methodology are deficient. Nation equity and brand equity Nation equity is a concept of the equity or goodwill associated with a country.It refers to the generalized COO effects, including performance-based COO effects and normative COO effects (Maheswaran and Chen, 2008).Nation equity can impact the company or product performance-related perceptions, and can be positive or negative as a function of culture, politics, religion, economic development and other external macro factors (Maheswaran and Chen, 2008 Although limited literature devoted to nation equity and brand equity studies of auto industry, referring to nation equity and nation image in general (Wang and Deng, 2010;Wang et al., 2009), country of origin (COO) effects (Wang and Yang, 2004), brand equity and brand image Chen and Wang 3 (Fang et al. 2011), domestic researches have rich achievements and high academic value, laying a good foundation for our study on the heterogeneity of consumers' reaction of different COO and different brands auto tax adjustments. 
Three automobile industry tax policy adjustments To conserve energy, reduce emission, increase the awareness of environmental protection, encourage purchasing low-fuel and low-emission cars and optimize automobile production consumption structures, China government adjusted tax policy several times on local automobile market.Recalling the large-scale tax adjustment history of Chinese auto industry, there are three times adjustments. The first adjustment On March 21th, 2006, China Ministry of Finance and the State Administration of Taxation issued adjustment towards the items, rates and related policy of current excise tax.The adjustment was implanted in April 1 st 2006.It significantly improved the ratio of cars which displacements are above 2.5 liter.That is a great shock on high-emission cars, and tax-inclusive prices of some imported luxury cars in Shanghai increased by 150,000 overnight.However, the ratio of low-emission cars with under 1.5 liter displacements was decreased, which released the intense signal of "damps raises greatly is small".The detail tax rate adjustment of the first policy is shown in Table 1. The second adjustment After two years of market reaction, further adjustment towards auto excise tax was carried out in September 1 st 2008 by China Ministry of Finance and the State Administration of Taxation.This tax adjustment continued to raise the excise tax of high-emission and luxury cars, and lower the tax of low-emission ones.After the adjustment, manufacturers reacted differently and consumers also make sensitive responses.The detail tax rate adjustment of the second policy is shown in Table 2.These two-timesadjustments mainly focused on excise tax, aiming to manufacturers.The excise tax equals to manufacturers' price multiplied by the excise tax rate.Manufacturers passed on the tax to the ultimate consumer on their own, all controlled by the manufacturers themselves. The third adjustment Differed from the two previous excise tax adjustments, the third targets are car consumers.The car purchase tax is paid to the Internal Revenue Service.The general cost is the total auto price divided by 117% of the valueadded tax, then multiplied by purchase tax rate. The purchase tax adjustment initiated on January 20 th, 2009 reduced 5% purchase tax of cars with 1.6 liters and less displacement while the ratio of cars with over 1.6 liter displacement remained the same as before.The specific rate programs of the third adjustment are shown in Table 3. Within four years, China has three major adjustments of the tax ratio for the auto industry, respectively; twice excise tax adjustments against manufactures and once purchase tax adjustment against consumers.Well, whether do they have effect?How effective?Do they really help "damps raises greatly is small"?What fluctuation towards tax ratio adjustment do sales of different displacement, different brands and different models have? Empirical method We use empirical methods and quantitative analysis to build the marketing econometric model, regarding three tax adjustments as dummy variables, exploring the impact of the different policies' stimulation on auto sales. 
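The purchase-tax arithmetic described above can be written down directly. The sketch below assumes the 17 % VAT deduction and the cut from a 10 % baseline rate to 5 % for cars of 1.6 litres or less after January 20, 2009, as stated in the text; the example price is illustrative.

```python
# Sketch: purchase tax = (price / 1.17) * rate, with the rate halved for <=1.6 L cars
# after the January 2009 adjustment.
def purchase_tax(price_yuan, displacement_l, after_jan_2009):
    taxable_base = price_yuan / 1.17            # strip the 17 % value-added tax
    rate = 0.05 if (after_jan_2009 and displacement_l <= 1.6) else 0.10
    return taxable_base * rate

# A 100,000-yuan, 1.6 L car: the adjustment roughly halves the tax bill.
print(round(purchase_tax(100_000, 1.6, after_jan_2009=False)))  # ~8547 yuan
print(round(purchase_tax(100_000, 1.6, after_jan_2009=True)))   # ~4274 yuan
```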
Data In this study, the research data are monthly sales data for the Chinese auto industry from January 2004 to December 2009, covering 72 months and three tax policy adjustment periods. The sales data include 15,290 observations of 675 brands from 59 auto manufacturers in China. Variables include attributes such as sales, price, and displacement, together with dummy variables for joint-venture versus local brands. In addition, the study uses auxiliary data such as GDP and the monthly retail price index, which allow the impact of economic fluctuations and other external factors to be excluded through control variables. Logit model Suppose that customer i can choose a car from j brands, each described by an attribute vector X_jt that includes product attributes and environmental factors such as the three auto tax policies; P_jt is the retail price of brand j. Consumers can also choose not to buy any brand; we call this the outside good. The utility that consumer i derives from purchasing product j in period t is given by U_ijt = α_ij + β·X_jt + γ·P_jt + ε_ijt, where α_ij is customer i's intrinsic preference for brand j, β is the response coefficient of consumers to the observed product attributes and environment variables, γ is the response coefficient on price, and ε_ijt is customer i's idiosyncratic preference for brand j in period t. This gives rise to consumer i's logit choice probability for brand j in period t: Pr_ijt = exp(α_ij + β·X_jt + γ·P_jt) / (1 + Σ_k exp(α_ik + β·X_kt + γ·P_kt)). Control function Given this probability, the market share of brand j in period t is S_jt, the ratio of its sales to total auto market sales, and the share of the outside good is S_0t = 1 − Σ_j S_jt. For convenience of parameter estimation, the share equations are converted to the linear form ln S_jt − ln S_0t = α_j + β·X_jt + γ·P_jt, which is the terminal model used for estimation. Argument In this terminal model, the vector X on the right-hand side contains, first, the attributes of the auto brand products (COVARIATES), including manual or automatic transmission, two-box or three-box body, displacement, and a country dummy (1 for foreign, 0 for China); it also includes macro-environment variables such as the GDP level and oil prices. Interaction To capture the different impacts of the excise tax and purchase tax on cars of different displacement, and consumers' reactions by country of origin, we add the interaction terms displacement × policy and COO × policy to the variables. MODEL ESTIMATION AND RESULTS ANALYSIS We use STATA 10 to analyze the data. To control for brand effects and ensure reliable results, the main effects and interactions are estimated with fixed-effects regressions. Table 4 reports three regressions: Model (1) contains the main effects, Model (2) examines the displacement interactions, and Model (3) examines the moderating role of COO. Main effects We first analyze the main effects, controlling for price, transmission (manual or automatic), and body type (two- or three-box). The results show that the coefficients of policy 1 (β = -0.23, p < 0.01) and policy 2 (β = -0.18, p < 0.01) are significantly negative, indicating that the two excise tax adjustments suppressed auto sales, while the coefficient of policy 3 is significantly positive (β = 0.39), showing that the 2009 purchase tax cut promoted overall sales. From this analysis we find that the two excise taxes issued in 2006 and 2008 and the purchase tax issued in 2009 all affected the automobile market: the first two inhibited auto sales, and the third stimulated them.
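A minimal sketch of how the terminal share regression above could be estimated outside STATA (here with Python's statsmodels). The column names, the potential-market definition, and the clustering choice are assumptions about how such a brand-month panel might be organized, not the authors' actual code.

```python
# Sketch: ln(brand share) - ln(outside-good share) regressed on price, attributes,
# and the three policy dummies with brand fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("auto_panel.csv")          # one row per brand-month (hypothetical file)

df["share"] = df["sales"] / df["market_size"]                  # assumed potential-market column
outside = 1.0 - df.groupby("month")["share"].transform("sum")  # outside-good share per month
df["y"] = np.log(df["share"]) - np.log(outside)

model = smf.ols(
    "y ~ price + displacement + automatic + three_box + gdp + oil_price"
    " + stm1 + stm2 + stm3 + C(brand)",      # brand fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["brand"]})
print(model.summary())
```

Interaction terms such as displacement × policy or COO × policy can be added to the same formula (e.g., `stm1:displacement`, `stm3:coo`) to reproduce the moderation analyses reported in the following sections.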
Interaction effect Accounting for interaction effects, the value of Stm1* displacement is negative and significant (   -0.12, P<0.05), indicating that the higher auto displacement, the more suppression policy1 taking on auto sales; the interaction between Stm2 and displacement is not significant; the interaction effect between Stm3 and displacement is negative and significant on 0.1 level, showing that the smaller displacement models, the stronger promotion taken by policy 3.In other words, tax incentive policies that halved the purchase tax have deeper effect on low-emission cars.This finding is consistent with the original intention of the relevant policies implementation, indicating the effectiveness of policies; excise tax effectively inhibited the consumption of high-emission and environmental damage cars; halved purchase tax adjust consumer excise structure well and promoted the low-emission cars consumption in large scale.Another interesting regulated variable is country of origin.We know Stm 2 and country of origin have a significant positive interaction terms from the results of the third column.Country dummy variable that is 1 represents the joint venture (foreign); 0 is China's own brand, which shows that, after the implementation of policy 2 which limited the excise tax towards purchasing high-emission cars, the most difficult obstacle is domestic brands but not joint venture brands.The value by multiplying Stm 3 and COO is positive and significant, indicating that the purchase tax indeed promotes the consumption of low-emission cars; however, it mainly promotes joint venture auto manufactures.Local manufactures benefit from waiver policy and increase limited sales.In other words, research on COO tells us that it seems that the adjustment policies are effective to change the structure of auto consumption, guiding consumers to buy low or medium-emission economy cars in order to switch to low-carbon, environment-friendly consumption structure.However, when we analyze the internal consumption structure in detail, we find consumption on local high-emission is declining; after the arrival of purchase tax stimulus, Chinese consumers turn to be favor in low-emission cars; however, subject to a substantial increase in sales of more foreign joint venture auto brands.It shows the effect of COO is based on consumers psychology perspective.Recently, Chinese consumers still prefer foreign brand cars. Research Conclusion Based on large-scale panel data of China's automobile market, applying the brand selection model logic, using the "Natural experiment" research method in economics, the author studied the impact of three industry policy adjustments on China's automobile industry consumption structure.The validation of main effect and the analysis of displacement moderator variable interaction effect prove that, three tax adjustments play a role on "suppress high and encourage low".But the moderate effect of country of origin suggests that, compared with foreign famous brands, the competitiveness of China brands is not enough.In other words, it is an important task and challenge for domestic local brands to develop national brands and enhance brand equity by restraining the effect of country of origin mostly in the future. 
Establishing an integrated assessment modeling framework This research attempts to establish a conceptual framework to measure policy marketing effectiveness systematically and comprehensively through exploring the effects of policy objectives (displacement, market size, structure) and the outside effects of objectives (brand, COO, district, etc).The effects of policy objectives are broke down into the scale effect and the structural effect for in-depth analysis and research the heterogeneity of the outside objectives' reaction under the same policy. The assessment framework and methods can expand the effect of other macroscopic policy or micro strategy. Enriching related marketing theory This study focuses on the effect of government policy on customer purchase behavior and will be realized by identifying the scale effect (whether policy increase or decrease total consumption) and conversion effect (consumers' purchase intend transform from high emission vehicles into small ones).Ample tax items and changeful tax rate adjustments will provide multinational situations, and greatly enrich the existing theory around this field with the help of the rich conclusions of this study.The time interval between publication time and implementation time of the excise tax and purchase tax policy can reveal the strategic behavior that forefront consumption in order to enjoy the benefits before the tax rate changes under the influence of "look ahead" and postpone consumption so as to enjoy the change rate of affordable rules.This will enrich the existing marketing research achievement on consumers' "strategic behavior " and " rational consumer". This study considers the country of origin, brand and geographical effect which are the so called heterogeneity of different levels response policy.It contributes to comprehending the automotive products' nation equity, brand equity, and enrich existing researches. Enriching tax price leverage theory By employing the price leverage, tax has diverse moderating effects.This study quantifies the moderating effects, differences and conditions of excise tax and purchase tax in China's auto industry.These findings will enrich and optimize correlation theory in the tax field. Policy effectiveness evaluation It is of great significance for relevant departments using scientific methods to roundly assess policy effectiveness that which is better between grasping excise tax as indirect price lever and purchase tax as direct price leverage.This study can also provide important guiding on how to make policy adjustment more rationally and scientifically, such as tax items level (displacement) design, the direction and amplitude of the adjustment in each level, the time interval design of policy issued. Significance for auto manufacturers and dealers This study has a practical significance for all sectors in the auto industry.It is clear that policies will directly guide auto manufacturers' production layout and structure configuration decision, and affect dealers' sales layout.Based on our study, all sectors in auto industry can make more effective response strategies and optimal decision.Variables such as COO, brand, region, etc. can help auto firms to grasp their own advantages and disadvantages, so as to improve nation equity and brand equity. 
It also includes environment variables, namely three dummy variables for the auto tax policies: STM1, STM2, and STM3 denote the excise tax policy released in April 2006, the excise tax adjustment released in September 2008, and the purchase tax adjustment released in January 2009, respectively; each takes the value 1 once the policy is implemented and 0 otherwise. The arguments also include the price P_jt to control for price effects, and the residual ε_ijt. Table 1. Adjustment of consumption tax rates in China on April 1st, 2006. Table 2. Adjustment of consumption tax rates in China on September 1st, 2008. Table 3. Adjustment of purchase tax rates in China on January 20th, 2009. Table 4. Analysis results of main effects and interaction effects. The columns of the table list the results of the different regression models, all of which apply auto brand fixed effects. The dependent variable is the logarithm of a car's market share minus the logarithm of the outside-good market share. Stm1, Stm2, and Stm3 are the excise tax policy promulgated in April 2006, the excise tax adjustment promulgated in September 2008, and the purchase tax adjustment promulgated in January 2009, respectively; each equals 1 once implemented and 0 otherwise.
4,970
2015-01-14T00:00:00.000
[ "Economics", "Business" ]
Ionic Diffusion‐Driven Ionovoltaic Transducer for Probing Ion‐Molecular Interactions at Solid–Liquid Interface Abstract Ion–solid surface interactions are one of the fundamental principles in liquid‐interfacing devices ranging from various electrochemical systems to electrolyte‐driven energy conversion devices. The interplays between these two phases, especially containing charge carriers in the solid layer, work as a pivotal role in the operation of these devices, but corresponding details of those effects remain as unrevealed issues in academic fields. Herein, an ion–charge carrier interaction at an electrolyte–semiconductor interface is interrogated with an ion‐dynamics‐induced (ionovoltaic) energy transducer, controlled by interfacial self‐assembled molecules. An electricity generating mechanism from interfacial ionic diffusion is elucidated in terms of the ion–charge carrier interaction, originated from a dipole potential effect of the self‐assembled molecular layer (SAM). In addition, this effect is found to be modulated via chemical functionalization of the interfacial molecular layer and transition metal ion complexation therein. With the aiding of surface analytic techniques and a liquid‐interfacing Hall measurement, electrical behaviors of the device depending on the magnitude of the ion‐ligand complexation are interrogated, thereby demonstrating the ion–charge carrier interplays spanning at electrolyte–SAM‐semiconductor interface. Hence, this system can be applied to study molecular interactions, including chemical and physical influences, occurring at the solid–liquid interfacial region. S-3 water bath. Hence, as shown in Figure 1b, the electric output could have an apex at the first charging (C 1 ) and then be irregularly attenuated as time goes by. Hence, as shown in Figure 1b, the electric output could have an apex at the first charging (C 1 ) and then be decayed as time goes by. Herein, the apparent peak of electric outputs is considered as a major signal, and a corresponding simplified equivalent circuit is displayed at Figure S6b. In the circuit of Figure S6b, a relation between each current flow can be expressed by = − (S1) Voltage measured in external voltammeter (V) can be presented as follows; , where is a resistance in external circuit including contact resistances and assumed to be much smaller than . by the solution injection can be expressed with ⁄ , where Q is an amount of ions (electrons) adsorbed (accumulated) at the electrolyte-SAM-semiconductor interface. Hence, , and ∆ is a potential difference between electrolyte and semiconductor, which are in the vicinity of the SAM interface. This is equivalent to a potential difference between underneath the ion-adsorbed interface (denoted with subscript's') and un-adsorbed (DI water) interface (denoted with subscript'd') in the semiconducting layer. Hence, the equation (S1) can be denoted with both the equation (S2) and the equation (S3) as follows; , and , denote a potential of semiconductor near the SAM interface under the ion-adsorbed region and un-adsorbed region, respectively. Due to the diffusion-induced capacitive charging ( Figure 1a), dC 1 /dt can be expressed with an adsorption speed in electric double layer ( ⁄ , where and are a diffusion coefficient of adsorbed ion and Debye length, respectively) as follows [4] ; S-4 Herein, 0 , , and are vacuum permittivity, the relative permittivity of SAM, and a thickness of SAM, respectively. 
is assumed as DI water condition (equivalent to an initial state in the experiment). Thus, the equation (S4) can be expressed as follows; In a short-circuited condition, V=0, and then In an open-circuited condition, I=0, and then , which is equivalent to = × . The equation (S8) can be expressed with a sheet resistance of semiconductor (R sq ) by considering the extrinsic factors as follows. Supporting Note 2. Interpretation of the ESS interface In the equation (S7), , − , is a driving force of charge carrier flows near the SAMsemiconductor interface. As assumed in Supporting Note 1, the electrolyte-SAM-semiconductor interface can be regarded as the capacitor, of which electrodes have different potential screening capabilities. As shown in Figure S7a,b, an equal quantity of charges (ions and electrons) can be accumulated at the SAM (PFOTS) interface, which can have a interfacial charge density of . Hence, in both high (subscript 's') and low (subscript 'd') concentration regions, an adsorbed ion density at Stern layer ( , and , , respectively) can be equivalent to the charge density (per unit area) at the SAM-semiconductor interface ( , and , , respectively) ( = ). As can be expressed as , where q, N, and W are an electron charge, a dopant density, and a width of space charge region, S-5 respectively. [5] qN is constant throughout the semiconducting layer. From the zeta potential ( ) measurement, < indicated that , > , ( Figure S1a). Hence, in the high concentration region, larger , than , can induce a relatively wider than ( Figure S7a,b). can be shown with and as follows; [5] = 2 2 0 = − 2 0 (S11) is a dielectric constant of the semiconductor. Hence, as shown in Figure S7c,d, , can be much larger than , and spanned to deeper region in the semiconductor. Therefore, in the PFOTS case, ) and an internal resistor ( ) near the SAM interface, respectively); = 1 + 2 + ⋯ + . After the first interfacial charging at 1 , electrolyte can be diffused in aqueous phase (switch on), and therefore, drive both sequential interfacial charging ( +1 ) and S-9 dissipation towards bulk phase, simultaneously. ′ +1 is an resistance in each ionic diffusion. (b) A simplified equivalent circuit that is considering the first interfacial charging event. Figure S12. Device resistance as a function of exposed FeCl 3 concentrations. Resistances in each condition was obtained from a slope of current-voltage curves, which were measured at both terminals. S-13 Figure S13. Cl 2p XPS spectrum of Fe 3+ -adsorbed CA surface when exposed to 100 μM Table S1. Stability constant (log K) of mono-complex formation of metal ion and catechol ligand, and pKa value for metal ion acidity in aqueous state.
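The charging rate invoked in this supporting note scales as D/λ_D, the ion diffusion coefficient divided by the Debye length. As a small illustration, the Debye length for a dilute 1:1 electrolyte can be computed from the bulk ionic strength; the constants are textbook values and the 1 mM example concentration is illustrative only.

```python
# Sketch: Debye length lambda_D = sqrt(eps_r*eps0*kB*T / (2*NA*e^2*I)), I in mol/m^3.
import numpy as np

def debye_length_m(ionic_strength_mol_per_L, T=298.15, eps_r=78.5):
    eps0 = 8.854e-12      # F/m
    kB = 1.381e-23        # J/K
    e = 1.602e-19         # C
    NA = 6.022e23         # 1/mol
    I = ionic_strength_mol_per_L * 1e3          # convert to mol/m^3
    return np.sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * I))

print(debye_length_m(1e-3) * 1e9)   # ~9.6 nm for a 1 mM 1:1 electrolyte
```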
1,363.6
2021-10-31T00:00:00.000
[ "Chemistry", "Materials Science", "Physics" ]
Arsenite Removal from Drinking Water using Naturally available Laterite in Sri Lanka Arsenite, As(III) is the most soluble form of arsenic species. Arsenic removal efficiency by laterite (commonly found in Sri Lanka) was examined as a function of pH, initial arsenite concentration, laterite dosage, contact time and mixing rate. More than 90% arsenite removal could be achieved within 5 minutes when pH is around 10. By treating the water at this pH range, the current USEPA standard for arsenic in drinking water (10 ppb) can be maintained when the arsenite / laterite ratio is less than 10(g/g). Results of the study showed that naturally available laterite in Sri Lanka can be used as an effective adsorbent to treat arsenic contaminated water. Introduction Arsenic in natural water originates from natural and anthropogenic sources.Naturally it is released into groundwater from naturally occurring minerals [1].It has been reported that arsenic occurs naturally in about 245 minerals, which when subjected to weathering will release soluble arsenic into natural waters [2].It is commonly found in rocks and soils with high sulfur content.For example Arsenopyrite (FeAsS), pyrite (FeS2), orpiment (As2S3), realger (AsS), and chalcopyrite are some of the sulfide minerals, which contain high levels of arsenic [3].Among them arsenopyrite is the commonly available mineral.As2O5, scorodite (FeAsO4.2H2O),gypsum, Fe-smectite, claudetite (As2O3) are some other kinds of minerals containing arsenic.Therefore from the dissolution of these arsenic-bearing minerals arsenic can be released to groundwater systems.In the surface water, arsenic can be derived from weathering of geological materials, through mixing with high arsenic geothermal waters and by mixing of waste stream from a variety of industrial processes.Petroleum refining, glass melting, paper production, cement manufacturing, paint manufacturing, production of semiconductors and smelting of ores are some of the examples for those industrial processes.Also this is released into the environment by the dispersion of arsenic containing fertilizers, pesticides, and herbicides [2,[4][5][6]. The aqueous chemistry of arsenic is complicated.Speciation and solubility of arsenic is influenced both by redox status and pH.Arsenic occurs in the environment mainly as the inorganic arsenic oxides of arsenite(III) and arsenate(V).Arsenates predominate in well oxidized waters while arsenites occur in reduced environments [7][8][9].Depending on the pH, in aqueous solutions As(III) occurs in different forms such as H3AsO3, H2AsO3 -, HAsO3 2-and AsO3 3- [10].Similarly mole fraction distribution of arsenate species are H3AsO4, H2AsO4 -, HAsO4 2-, and AsO4 3- [11]. 
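The pH-dependent As(III) speciation summarized above can be made concrete with a small mole-fraction calculation. The pKa values used here are approximate literature values for arsenous acid and are assumptions introduced for illustration, not figures taken from this paper.

```python
# Sketch: mole-fraction distribution of As(III) species versus pH, using assumed
# pKa values for H3AsO3 (~9.2, ~12.1, ~13.4). H3AsO3 dominates below pH ~9;
# the anionic species only appear at high pH.
PKA = (9.2, 12.1, 13.4)

def as3_fractions(pH):
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** -p for p in PKA)
    denom = h**3 + h**2 * k1 + h * k1 * k2 + k1 * k2 * k3
    return {
        "H3AsO3":  h**3 / denom,
        "H2AsO3-": h**2 * k1 / denom,
        "HAsO32-": h * k1 * k2 / denom,
        "AsO33-":  k1 * k2 * k3 / denom,
    }

for pH in (7, 10):
    print(pH, {name: round(x, 3) for name, x in as3_fractions(pH).items()})
```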
Arsenite, As(III) is much more toxic, soluble and mobile than As(V) [8,12].It has been reported that arsenite is 25-60 times more toxic than arsenate and inorganic arsenic compounds are more toxic than organic [13].The acute and chronic toxicity of arsenic in humans has been documented especially in countries like Argentina, Bangladesh, China, Chile, Ghana, Hungary, West Bengal (India), Mexico, Thailand, Taiwan Vietnam and USA [1,14].In Sri Lanka also; questions were raised recently whether arsenic is one of the causative agents of chronic kidney diseases [4][5][6].Arsenic causes cancers and tumors in the skin, bladder, genital organs and eyes [1].Conjunctivitis, melanosis, hyperpigmentation, hyperkeratosis and peripheral vascular disorders are the most 2 commonly reported symptoms of chronic arsenic exposure.In severe cases gangrene in the limbs and malignant neoplasm have also been reported.Acute short-term exposure to high doses of arsenic also can cause adverse health effects.In addition to these, arsenic in drinking water can cause diabetes, anaemia as well as reproductive & developmental, immunological, and neurological effects [14]. The maximum permissible limit of arsenic in drinking water in Sri Lanka is 50 ppb (SLS 614, 1983), while USEPA and WHO (1993) guideline is 10 ppb.New Zealand drinking water guideline ( 2008) is also 10 ppb.According to the USEPA, the maximum contaminant level goal (MCLG) of arsenic in drinking water is zero.MCLG is the level below which there is no known or expected risk to health.Even for Sri Lanka a lower guideline which is reasonably achievable considering the treatment performance and rapidly improving analytical capability is needed to achieve public health protection goals. Several studies have been carried out on arsenic removal and those technologies generally fall into three major classes.They are chemical precipitation, adsorption and membrane separation.Each technology has its own merits and demerits [15].Most of the removal methodologies were based on adsorption where the removal efficiency and the cost effectiveness depend on the type of adsorbent used.Many natural & artificial adsorbents are used and it is understood that materials containing Fe, Al and SiO2 can remove arsenic from drinking water efficiently. Laterite locally known as "cabook" in Sri Lanka is a reddish colour highly weathered clayey rock material which is rich in iron or aluminum oxides.The constituents of laterite are minerals such as goethite, gibbsite, hematite, kaolinite etc. Due to the cellular or vesicular nature of laterite it has a high specific surface area, porosity and permeability [16,17]. Results of a past study on Sri Lakan laterite [16] indicate a dominant presence of the three oxides Fe2O3, Al2O3, and SiO2.According to those findings, the Fe-rich layer is well developed in the laterite from lowlands; where the laterization process is believed to be active.Both hydrous iron and aluminum oxide components in laterite have a pH of the point zero charge (commonly called as pHzpc) of 8.5-8.6 [18,19].This is the pH where the net surface charge is zero.Under natural conditions (typical pH value of naturally available laterite varies between 4-7) they are characterized by net positive surface charge and hence have the capacity to absorb anionic contaminants.Presence of Fe and Al, high porosity and the availability of anion exchange sites are advantageous for using laterite as an adsorbent for removal of arsenic. 
Fig.1-Laterite rock and lateritic soil Therefore in this research arsenite, As(III) removal efficiency using laterite, a naturally available material in Sri Lanka was studied.As(III) was selected primarily due to its high mobility, solubility, and toxicity.This paper presents the findings of this study and the results of a survey on arsenic in Sri lankan ground water as of 2001/2002 [15].More recent data (2012) are available in reference [20,21]. Arsenic Analysis After reviewing available analytical methodologies carefully, a method based on the Hydride Generation-Atomic Absorption Spectrometry (HG-AAS), was used for arsenic analysis due to its high sensitivity.Analysis of arsenic in the experiments was carried out using this method at the Department of Chemical Engineering, Faculty of Engineering, University of Peradeniya, Sri Lanka.A Hydride Generation Atomic Absorption Spectrophotometer with a hollow cathode lamp as the radiation source (AAnalyst 300, Perkin Elmer) was used for this purpose [22].The detection limit of the instrument was 1 ppb.Flow Injection Analysis System (FIAS 100) was used for continuous hydride (AsH3) generation.The process involves the reaction of the acidified solution (sample mixed with hydrochloric acid) with sodium borohydride as the reducing agent.Readings were verified by spiking the standard solutions (by checking the results of a standard arsenic solution, by spiking with different concentrations of arsenic.An alternative approach is to analyse a proportion of samples in duplicate). 2.2 Preparation of adsorbent Laterite used for this study was obtained from Pasyala, a town in South-Western in Sri Lanka.Initial pH of the 2g/L laterite -water suspension was 6.37.According to the literature both hydrous iron and aluminum oxide components in laterite have a pHzpc of 8.5-8.6 [18,19].Laterite sample was crushed, powdered (ground with a mortar and pestle) and sieved to separate the powder passing through 0.075 mm sieve.Smaller particle size was selected due to its high specific surface area, which is an advantage in the adsorption process. Procedure The experiments were conducted as batch studies at room temperature.1000 ppm standard arsenite solution was used as the stock arsenite solution and subsequent dilutions were carried out using distilled water, when necessary.Acid washed glass beakers were used for the experiments.NaOH and HNO3 were used to adjust the pH value while NaNO3 was used to maintain the ionic strength. Jar test apparatus was used and all the experiments were carried out in duplicate.The procedure used was as follows.Initial arsenite solutions were prepared diluting the stock solution and adding NaNO3 to maintain the ionic strength.Then laterite (prepared as described above) was added to the sample followed by rapid mixing to ensure complete mix in the solution.A variable speed electrical stirrer, inserted into the solution carried out this mixing.Then reducing the speed, slow mixing was performed, allowing reactions to take place and sorption to occur.Then the sample was allowed to settle about 1 hour.After settling the supernatant was taken to another bottle and preserved using concentrated HCl until analysis.Every sample was analyzed on the following day. Rapid mixing Slow mixing Settling (100 rpm / 1 minute) ("x" rpm / "y" minutes) ( 1 hour) Fig. 
2 -Treatment train employed 2.3.1 Effect of pH and Initial arsenite concentration The effect of pH on the removal of arsenite was studied by equilibrating the reaction mixture of 200 ppb arsenite solution in 0.1M ionic strength and 2 g/L laterite; at different initial pH values.100 rpm rapid mixing was performed for 1 minute which is followed by 30 rpm slow mixing rate for 15 minutes.Next the samples were allowed to settle about 1 hour and then the supernatant was preserved until analysis.Similar experiments were performed using 1200 ppb initial arsenite concentration (with 30g/L and 2 g/L laterite concentrations), and the effect of pH variation was studied. Effect of laterite dosage For finding out the optimum laterite dosage, different laterite concentrations such as 2, 5, 10, 15, 20, 30, 50 g/L were used.The experiment was performed similar to the procedure given in section 2.3.1 with an initial arsenite concentration of 200 ppb.The pH of the samples were maintained around 10. Effect of contact time and mixing rate To study the effect of contact time, a similar experiment was performed with initial arsenite concentration of 200 ppb and laterite concentration of 15 g/L at pH 10.The slow mixing time was varied as 5,10,15,30, 45 and 60 minutes and the slow mixing rate was maintained at 30 rpm.Rapid mixing was maintained at 100 rpm for 1 minute. The effect of slow mixing rate on arsenite removal was observed by performing a similar experiment but varying the slow mixing rate from 10 to 50 rpm.Here initial arsenite concentration was 200 ppb while initial laterite concentration was 20 g/L.The rapid mixing rate was 100 rpm within 1 minute and slow mixing rate was maintained about 10 minutes.pH was maintained around 10. Iron concentration of treated water Since laterite consists of Fe2O3, whether it increases the iron concentration in the treated effluent was verified.The total iron concentration of treated water was also tested using a HACH DR/2010 Spectrophotometer.Silver diethyl dithiocarbamate method was used in this analysis and the range of detection is 0 -0.200 ppm.Water with 200 ppb initial arsenite concentration at 0.1M ionic strength was treated using 20 g/L laterite at pH 10 and the final total iron concentration in the treated water was analysed.In the survey carried out in 2001/2002 [15] most of the samples were collected from the available dug wells and tube wells, which were used for drinking purposes (sampling locations are given in Fig. 3).Some of the samples (15% of the total samples) were collected from the areas where arsenic-bearing minerals are found (but only as minor accessory minerals) [17].About 110 samples were analysed.In these experiments 0.1M ionic strength was maintained.According to the results more than 50% arsenite removal could be obtained throughout the whole pH range.Removed amount depends on the used laterite concentration.In the acidic and neutral pH ranges the removal of arsenite by laterite is less than that in the basic pH range.According to the results, arsenic concentration of final effluent in the acidic pH range is 3-5 times higher than that in the basic pH range.Highest removal percentage was obtained within basic pH range mainly when pH is around 10 (more than 90% removal of arsenite).This may be explained by the different species of As(III) oxy-anions present in different pH ranges. 
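A sketch of the bookkeeping behind the batch results discussed here and in the following subsections: percentage removal, the arsenite-to-laterite loading for the 200 ppb / 20 g/L case, and a linearized Freundlich fit of the kind used for the adsorption isotherm. The example equilibrium arrays are placeholders, not the study's measured data.

```python
# Sketch: removal efficiency, loading ratio, and a linear Freundlich fit.
import numpy as np

def removal_percent(c0_ppb, ce_ppb):
    return 100.0 * (c0_ppb - ce_ppb) / c0_ppb

def loading_ug_per_g(c0_ppb, dose_g_per_L):
    return c0_ppb / dose_g_per_L          # ppb == ug/L, so this is ug As per g laterite

print(removal_percent(200.0, 10.0))        # 95 % removal needed to reach 10 ppb
print(loading_ug_per_g(200.0, 20.0))       # 10 ug/g at the 20 g/L dose

# Linearized Freundlich isotherm: log10(qe) = log10(Kf) + (1/n) * log10(Ce)
Ce = np.array([2.0, 8.0, 25.0, 60.0])      # equilibrium As in solution, ug/L (hypothetical)
qe = np.array([4.0, 9.5, 19.0, 33.0])      # As adsorbed per g laterite, ug/g (hypothetical)
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
print(f"1/n = {slope:.2f}, Kf = {10**intercept:.2f}")   # 1/n < 1 suggests favourable adsorption
```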
According to literature [10,23], H3AsO3, which is the neutral form of arsenite, is the predominant species in the acidic and neutral pH ranges.In basic pH ranges when pH>8 H2AsO3 -, HAsO3 2-, and AsO3 3-are dominant and therefore As(III) is non reactive below this pH.Therefore high arsenite removal efficiency can be obtained in the basic pH range. As described above, the hydrous oxides of iron and aluminum in laterite can have anion exchange sites, which is important for arsenite removal.Also when stirring the sample, due to the aeration, dissolved iron (Fe(II)) will be oxidized to Fe(III) and precipitated as iron (Fe(III)) oxyhydroxide.At high pH levels formation of smaller Fe(III) hydroxide precipitates get increased.These smaller precipitates provide a higher effective surface area for arsenite adsorption.This will enhance the process of flocculation at the slow mixing stage [24].Also due to the formation of aluminum hydroxide the flocculation process may be increased. If the removal of arsenite from water by laterite is only by adsorption, removal efficiency should be high when pH<pHzpc (pH of the point zero charge).But according to the findings of the current research, removal efficiency was high when pH>pHzpc.Therefore removal of arsenite is not only by adsorption, but may be due to both sorption mechanisms. (May be by absorption to laterite particles and by adsorption as described above).The actual removal process may be characterized by analysing the arsenic-laterite sludge.If an XRD analysis is carried out for both raw and used (for water treatment) laterite; it will give more insights in to the removal process. As represented in Fig. 6 and Fig. 7, arsenite removal by laterite get decreased with the increasing initial arsenite concentration and with the decreasing laterite concentration.This is due to the limited amount of sorption sites in a particular laterite dosage.Also iron oxyhydroxide formation gets decreased with decreasing laterite concentration. Effect of laterite dosage The study of removal percentage as a function of sorbent dosage is important in establishing the optimum use of sorbent for any sorption process.Fig 8 shows the effect of laterite dosage on the removal percentage.According to these results, by increasing the sorbent dosage from 2 to 50 g/L (i.e.increasing 25 times) the removal percentage increased from 62% to 99% (i.e.increased by 37%).The increase in removal percentage can be explained due to the greater number of sites available to arsenite with increasing laterite dosage.Therefore when the initial arsenite concentration is 200 ppb, to obtain the 10 ppb standard in final effluent, laterite dosage of 20 g/L or higher has to be used.This implies that arsenite / laterite ratio needs to be less than 10 (g/g) to reach the drinking water standard of 10 ppb.Laterite dosage should be further increased to treat water within neutral pH levels since the arsenite removal efficiency is less within this range (refer Fig. 4 -Fig.7).Fig. 9 shows the adsorption isotherm for arsenite removal by laterite.This can be used to calculate the required laterite dosage to obtain a particular equilibrium arsenite concentration.Furthermore the results obtained are well fitted in the linear form of Freundlich isotherm. Effect of contact time and mixing rate Fig. 
10 shows the effect of contact time with laterite for the removal efficiency; using a solution of initial arsenite concentration of 200 ppb.Sorption was very rapid reaching the equilibrium state within 5 minutes.This rapid removal is a great advantage for using this as an efficient arsenic removal method in drinking water treatment. Fig. 11 shows the arsenite removal percentage using laterite as a function of slow mixing rate (agitation rate).Varying the agitation rate from 10 rpm to 50 rpm does not have any significant effect on the removal process.This supports the earlier observation that the removal of arsenite by laterite is a rapid process. Iron concentration in the water treated using laterite No iron was detected in the treated water.The iron concentration in the same sample without adding arsenic was found to be 0.22 mg/L.This supports the earlier observations (discussed in section 3.1), that iron oxyhydroxides will form, adsorb arsenic and precipitate. Therefore considering these results, the arsenic removal method studied in the current research, by using naturally available laterite, can be recommended as a low-cost method for water treatment applications. The efficiency of this method may be increased by converting arsenite to arsenate or by using laterite with higher iron concentration.Then the optimum laterite dosage can be reduced than the currently obtained value.For this, laterite from various places in Sri Lanka has to be analyzed for the availability of high iron concentration and that can be used in this process. Fig.12-Proposed treatment train This method can be improved for direct application in the existing water treatment plants to supply water at a larger scale.Laterite should be mixed with raw water (contaminated with arsenic) at the coagulation stage, where alum is added to remove the turbidity of water. The treated water can be separated from the laterite sludge (which contains arsenic) at the settling tank.The efficiency of this treatment can be further increased by passing the treated water through a conventional sand filter.As described in literature [25,26], the accumulated sludge can be used to construct bricks.But a well defined method for the disposal of arsenic sludge should be researched. This method can also be used at household level to treat small quantities of drinking water, especially within the rural community.For this, a column filter with granules of laterite, can be used.A similar type of filter with pieces of bricks is used in the dry zone of Sri Lanka to remove fluoride from ground water.Arsenic removal efficiency of this filter system with laterite has to be further researched. Arsenic in Sri Lankan aquifers As mentioned above the USEPA standard for arsenic in drinking water is 10 ppb and the Sri Lankan standard is 50 ppb.According to the survey carried out in 2001/2002 [15], Very low arsenic levels (≤1 ppb) were reported in the samples analysed.In another study to analyse well water quality, less than 10 ppb average arsenic levels have been reported in 5 districts in SriLanka [20].Furthermore no arsenic has been detected in another study conducted to check the availability of heavy metals in 40 water samples collected from different water sources in 5 districts in SriLanka [21]. Effluent Filter bed ENGINEER 29 Conclusions Efficiency of arsenite removal using naturally available laterite in Sri Lanka is presented in this paper.More than 90% arsenite removal could be achieved within 5 minutes when pH is around 10. 
Therefore, by treating the water in this pH range, the current USEPA standard for arsenic in drinking water (10 ppb) can be maintained when the arsenite/laterite ratio is less than 10 µg/g. This method can be improved for direct application in existing water treatment plants to supply arsenic-free water at a larger scale. For this, further studies can be carried out to finalize an optimum dosage in the natural pH range. Laterite pieces in column filters can also be used at the household level to treat small quantities of drinking water, especially within affected communities. Therefore, as a further study, it is recommended to research the arsenite removal efficiency of this filter system in order to develop a low-cost water treatment methodology for field application. The feasibility of this method in affected areas, in the presence of other constituents such as fluoride, sulfate, phosphate, nitrate and chloride, has to be investigated. According to the survey carried out in 2001/2002 [15], high arsenic levels were not detected in the surveyed ground waters and the current USEPA standard for arsenic in drinking water (10 ppb) was not exceeded in the areas where samples were analysed (in shallow water, less than 15 m depth). Furthermore, it is suggested to implement a 10 ppb maximum permissible limit as the new arsenic standard in drinking water for Sri Lanka. This may help to achieve public health protection goals. Fig. 4 - Arsenite removal percentage using laterite as a function of pH, with an initial arsenite concentration of 200 ppb. Fig. 5 - Arsenite removal percentage using laterite as a function of pH, with an initial arsenite concentration of 1200 ppb. Fig. 8 - Arsenite removal percentage using laterite as a function of laterite dosage. Fig. 11 - Arsenite removal percentage using laterite as a function of slow mixing rate.
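The dosage and isotherm discussion above lends itself to a small worked example. The following Python sketch is illustrative only: the batch data, the fitted Freundlich constants, and the dose it returns are hypothetical placeholders, not values measured in this study. It shows how the linearized Freundlich fit and a simple mass balance can be combined to estimate the laterite dose needed to bring a 200 ppb feed down to the 10 ppb standard.

```python
# Illustrative sketch only: fits the linearized Freundlich isotherm
# log(qe) = log(Kf) + (1/n) * log(Ce) to batch-test data and estimates
# the laterite dosage needed to reach a target effluent concentration.
# All numbers below are hypothetical placeholders.
import numpy as np

# Hypothetical batch results: equilibrium As(III) concentration Ce (ug/L)
# and sorbed amount qe (ug As per g laterite).
Ce = np.array([5.0, 12.0, 40.0, 90.0])     # ug/L
qe = np.array([4.0, 6.5, 10.0, 14.0])      # ug/g

# Linear least-squares fit of log(qe) vs log(Ce) gives 1/n (slope) and log(Kf).
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
Kf, n = 10**intercept, 1.0 / slope
print(f"Freundlich fit: Kf = {Kf:.2f} (ug/g)(L/ug)^(1/n), n = {n:.2f}")

# Mass balance for a batch dose: dose * qe(Ce_target) = C0 - Ce_target
C0, Ce_target = 200.0, 10.0                # ug/L (200 ppb feed, 10 ppb standard)
qe_target = Kf * Ce_target**(1.0 / n)      # ug adsorbed per g laterite at 10 ppb
dose_g_per_L = (C0 - Ce_target) / qe_target
print(f"Estimated laterite dose: {dose_g_per_L:.1f} g/L")
```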
4,662
2014-04-20T00:00:00.000
[ "Environmental Science", "Chemistry" ]
Cells-in-Touch: 3D Printing in Reconstruction and Modelling of Microscopic Biological Geometries for Education and Future Research Applications Additive manufacturing (3D printing) and computer-aided design (CAD) still have limited uptake in biomedical and bioengineering research and education, despite the significant potential of these technologies. The utility of organ-scale 3D-printed models of living structures is widely appreciated, while the workflows for microscopy data translation into tactile accessible replicas are not well developed yet. Here, we demonstrate an accessible and reproducible CAD-based methodology for generating 3D-printed scalable models of human cells cultured in vitro and imaged using conventional scanning confocal microscopy with fused deposition modeling (FDM) 3D printing. We termed this technology CiTo-3DP (Cells-in-Touch for 3D Printing). As a proof-of-concept, we created dismountable CiTo-3DP models of human epithelial, mesenchymal, and neural cells by using selectively stained nuclei and cytoskeletal components. We also provide educational and research context for the presented cellular models. In the future, the CiTo-3DP approach can be adapted to different imaging and 3D printing modalities and comprehensively present various cell types, subcellular structures, and extracellular matrices. The resulting CAD and 3D printed models could be used for a broad spectrum of education and research applications. Introduction Additive manufacturing (AM), commonly termed 3D printing [1], is a methodology of physical reconstruction of three-dimensional structures and complex geometries from digital models of these objects formed (in a core concept, and in contrast to the traditional subtractive or formative manufacturing approaches) by layered deposition of the material [2]. The success of AM may be attributed to its affordability, flexibility, safety, and efficiency compared to more traditional manufacturing processes [3]. The most common modalities of 3D printing, in order of increasing spatial resolution capacity, include powder bed fusion (e.g., selective laser sintering), inkjet printing, stereolithography, and fused deposition modeling (FDM) [1]. The availability of affordable FDM desktop 3D printers provides an engaging interactive stimulus for better integrative 3D printed models in educational and research contexts. Here, we address the indicated challenges and present our proof-of-concept study together with the practical protocols for the methodology which we termed CiTo-3DP (Cells-in-Touch for 3D Printing) for producing 3D PLA prints from SCM serial images (z-stacks) of micrometer-scale biological objects. Using this approach, we created 3D-printed models of epithelial, mesenchymal, and neural human cells. These cell types are representative of solid tissues from different embryonic origins and have fundamental morphological differences that define the respective phenotypes. Using fluorescent contrasting agents, we visualized and printed subcellular structures. These structures include the nuclei and two types of cytoskeletal elements (f-actin stress fibers and contractile α-smooth muscle actin (α-SMA)) with a digital reconstruction of the cell surface shape. To enhance interactivity, we made our models dismountable. In addition, our work provides rich research and educational context of the presented workflow (Appendix A.1). The diversity of future applications for the CiTo-3DP approach is also discussed.
Study Design The following method was designed to generate 3D cell reconstructions from immunofluorescent confocal z-stack images of adherent in vitro cultured cells and optimized for 3D printing on commercially available FDM printers. The workflow applied in the current study is schematically shown in Figure 1. Three cell types were used for the proof-of-concept experimentation of the CiTo-3DP methodology, including human epithelial, mesenchymal, and neuronal cells. The model of an epithelial tissue cell was based on the images of the linear cells PANC-1, which are representative of the parenchyma of the pancreas in the state of malignancy (pancreatic adenocarcinoma). The mesenchymal phenotype was shown using primary healthy fibroblasts of skin derma (human dermal fibroblasts, HDFs). A model of a neuron was created based on the images of cell line SH-SY5Y, which is representative of neuroblastoma. The detailed technical notes for the workflow presented here are provided in Appendix B.1. Figure 1. The workflow applied in the current study. Z-stacks of 2D cell images were acquired with an Olympus FV3000 confocal laser scanning microscope.
Subsequently, CAD models of the cell were generated using Mimics Research software v21.0. Post-processing was performed using Mimics 3-matic. Finalization of the model for printing was conducted using Ultimaker CURA. PLA models were printed with an Ultimaker S5 printer. Image Acquisition and 3D Reconstruction Cells were imaged using an Olympus FV3000 confocal laser scanning microscopy system (Olympus, Tokyo, Japan). The confocal microscopy settings and image parameters used in this study are shown in Tables A2 and A3 in Appendix A. The TIFF z-stack images were imported into a biomedical image segmentation software, Mimics Research 21.0 (Materialise, Leuven, Belgium), which is commonly used in 3D macro-anatomical analysis of DICOM images (Figure A1 in Appendix B). It should be noted that image quality is the most important contributor to reconstruction accuracy. On import, image aspect and scale, in nm or µm, were validated against the coronal, axial, and sagittal coordinate axes. The main enabling tool used for 3D reconstruction of images was thresholding. The following workflow was applied: thresholding (tool: segment > threshold; or tool: segment > dynamic region grow) inputs grey-scale, or "Grey Value" (GV), pixel-intensity maxima and minima, allowing for 3D image segmentation into new masks appearing in the software's project management and 3D previewer windows. New masks are comprised of tessellated mesh surfaces, wrapped around individual or adjacent image pixels. In this way, imported image stacks were organized into 3D reconstructions of isolated cellular components. Alternatively, cellular components were separated by splitting the mask (tool: split mask). Following this, masks were cropped (tool: segment > crop mask; or tool: segment > region grow) to include only the information required. In cell biology, single cells or smaller cellular clusters may be segmented in this way. Due to the nature of fluorescent staining and confocal microscopy imaging, as well as the nature of the imaged subcellular structures, where the cytoskeleton plays the role of the tension-bearing element for the outer cell membrane, there could be several holes in the reconstructed cell membrane surface. Therefore, the surfaces of the segmented masks were expanded by filling (tool: segment > smart fill) and brushing in the individual 2D images (tool: segment > smart fill > local fill). Any reconstruction errors that are inconsistent with the imaged biology, which may arise due to image resolution and thresholding, were edited by highlighting the respective region (tool: segment > edit masks). The next step was optimization of the obtained 3D reconstruction for printing (post-processing). Post-Processing With the surfaces segmented and ready for post-processing, the relevant masks were converted into meshed geometries or parts. The parts (tool: segment > part) were exported into Mimics 3-matic software v13.0 (Figure 2a).
Figure 2 (caption excerpt): the model was designed as a dismountable set for greater interactivity. Note the nucleus geometry was Boolean subtracted from the cytoskeleton with a positive clearance factor for post-printing compatibility. The cytoskeleton geometry was also split into upper (shown in pink) and lower (shown in grey) parts. Mimics Research works in the validated image scale, but Mimics 3-matic is constrained to operate in mm, with actual scale stored in memory. Although this scale transformation is automatic within the Materialise (Leuven, Belgium) software package, it was validated by measurement of key lengths in both software packages (tool: measure > distance) (Figure 2b,c). Imported parts, displayed in the software Object Tree, were color coordinated (tool: object tree > object properties > colors) and aligned (tool: align) for improved workflow. To view object interiors, a viewing plane was defined and translated through the object (tool: object tree > section list > standard section > position step size). This proved useful in examining the compliance of meshed objects to the imaged biology. At this stage, meshes were representative of surfaces only, making them impossible to print with extruded filament of a non-negligible thickness. In practice, printing surfaces with thicknesses greater than or equal to 1 mm generates stable models, although this may vary with printing material and printer used. In Mimics 3-Matic, meshes were uniformly offset (tool: design > uniform offset > solid) by a minimum distance of 1 mm, with the solid fill option checked. Next, the models were smoothed (tool: fix > smooth; or fix > reduce; or fix > wrap; or finish > local smoothing; or remesh) to simplify tessellation and hence reduce printing time and cost (Figure 2d). This action is also known to improve the likelihood of printing success without sacrificing significant resolution. To improve the educational interactivity of the models, a range of editing tools are available in the software. In the presented CiTo-3DP methodology, PANC-1 and neuronal SH-SY5Y cell models were trimmed (tool: finish > trim > preserve inner and outer) to split the cytoskeleton component in two. Further to this, the nuclear component was removed, with a positive clearance factor in mm, from the cytoskeleton, allowing it to fit neatly inside the split parts (tool: design > local Boolean > subtraction) (Figure 2e). The same approach was utilized to separate the two cytoskeleton components (f-actin and α-SMA) and nuclei in the fibroblast models. If components are to be joined together by design slots or joints, a datum plane must be defined (tool: design > create analytical primitive > create datum plane), such that the relevant geometry may be cut (tool: design > cut) about the plane and designed for fitting (tool: design > create primitive; design > Boolean union). Note that compliance and compatibility must be carefully considered for part-fitting. To finalize meshed geometries, the software automatic mesh corrector algorithm (tool: fix > fix wizard > follow advice) was used. After this, the respective objects, now optimized for 3D printing, were exported as STL files into relevant pre-printing software. In our methodology, these STL files were opened in Ultimaker's pre-print software CURA v4.7.0.
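For readers without access to the Materialise suite, the same thresholding-and-meshing idea can be prototyped with open-source Python tools. The sketch below is an illustrative analog of the steps described above (grey-value thresholding of the z-stack, surface extraction, STL export); it is not the Mimics/3-matic workflow actually used in this study, and the file name, threshold level, and voxel spacing are hypothetical placeholders.

```python
# Illustrative open-source analog of the segmentation-to-STL steps described
# above. This is NOT the Mimics/3-matic workflow used in the paper; file names,
# threshold value, and voxel spacing below are hypothetical placeholders.
import numpy as np
import tifffile
from scipy.ndimage import gaussian_filter
from skimage import measure
from stl import mesh  # numpy-stl

stack = tifffile.imread("cell_zstack.tif").astype(np.float32)  # (z, y, x) volume
stack = gaussian_filter(stack, sigma=1.0)                      # suppress noise and small holes

# Grey-value threshold (analogous to "segment > threshold" in Mimics)
level = 40.0
# Voxel spacing in micrometers (z step, pixel size y, pixel size x) - assumed values
spacing_um = (0.5, 0.2, 0.2)

verts, faces, _, _ = measure.marching_cubes(stack, level=level, spacing=spacing_um)

# Wrap the triangulated surface into an STL mesh (units here are micrometers;
# the slicer can rescale, e.g. 1 um -> 1 mm for a 1000x physical model).
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    surface.vectors[i] = verts[tri]
surface.save("cell_surface.stl")
```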
Printing CURA is an open-access software, allowing users to import STL files into a virtual 3D workplace of the specific printer chosen for printing ( Figure A2 in Appendix B). Prior to opening the relevant STL files, CURA was configured to the printer used (tool: add printer). A wide range of pre-set printer configurations from Ultimaker (Geldermalsen, Netherlands) and other 3D printing companies is included in the software. The size of the printing bed, the type and number of extruders and the material used for extrusion were all defined, as were the slice orientations, layer thickness, infill, and settings for the printing of supports. The software also provided a 3D virtual preview of the print process to visualize the model as it would be printed, allowing further edits and refinements to the print strategy prior to actual printing. The relevant STL models, which use the 3-matic mm scale, were imported and adjusted to best fit on the printing bed. Any changes to scale were noted. As the presented workflow was used as a proof-of-concept, the printing configurations were selected for fast PLA printing, which correlates to a printed layer height of 0.2 mm. Notably, printing speed is directly related to layer height in millimeters and hence determines the quality or resolution of the final print. Shell thickness, or the number of horizontal layers in each shell, affects the final stability of the print, as does infill. A triangular infill of 10% was used for fast printing. Supports were generated, followed by object slicing, which determined the exact printing path the extruder would follow. The printing configuration and path followed determine speed of the print and the amount of material used. The sliced objects were exported directly into the printing hardware. Printing was initially observed to check for common 3D printing errors such as extruder clogging or poor build plate adhesion. Once printing was finished, models were allowed to cool and then removed from the build plate. Printing supports were removed manually. As such, a 3D reconstruction of complex cell geometry, imaged using confocal microscopy, was printed. Results The cell types selected for modeling differed significantly in their morphometric characteristics (Table 1). Further 3D modeling using CiTo-3DP methodology allowed reliable reproduction of the key features of the studied cells. The resolution of each print was calculated using the printing scale and layer height ( Table A4 in Appendix B). As follows from Table 1, the epithelial cell representative for the pancreatic adenocarcinoma (PANC-1) showed a compact phenotype, compared to fibroblasts. PANC-1 cells featured a round cell shape without long protrusions, higher density of f-actin at the outer cell borders (cortical localization), and centrally or slightly eccentrically located roundish nuclei ( Figure 3a). Note that the shape of the nucleus was irregular, in contrast to the common perception. The pancreatic cell model was 3D printed in colored and single-color (white) versions and made dismountable (Figure 3b-f). In this model, the post-processing operations allowed the reconstruction of the internal space and the outer cell shape based on the configuration of f-actin cytoskeleton filaments. 
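As a simple illustration of the print-resolution calculation referred to above (printing scale combined with layer height, Table A4 in Appendix B), the short sketch below computes the real-world feature size represented by one printed layer. The 2000x scale factor is a hypothetical example, not a value taken from the paper.

```python
# Back-of-the-envelope check of effective print resolution: one printed layer
# corresponds to (layer height / scale factor) in the original specimen.
# The scale factor below is a hypothetical example.
layer_height_mm = 0.2        # printed layer height used for fast PLA printing
scale_factor = 2000          # hypothetical upscaling from real cell size to print size

effective_resolution_um = layer_height_mm * 1000.0 / scale_factor
print(f"One 0.2 mm layer represents ~{effective_resolution_um:.2f} um of the real cell")
# e.g. at 2000x, a 0.2 mm layer maps to 0.1 um in the original specimen, finer
# than the lateral resolution of conventional confocal imaging.
```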
Interestingly, the PANC-1 cell model revealed the existence of the specific "niche" formed by f-actin filaments around the nucleus, which was not appreciable in confocal microscopy images, while it became clearly visible during virtual 3D conversion of the confocal z-stacks into STL files (Figure 3g-i). A mesenchymal cell phenotype was presented by primary fibroblasts derived from human skin derma. These cells showed typical spindle-like and relatively flattened cell bodies, with centrally located nuclei of various shapes. Notably, the HDFs cultured on stiff plastic surfaces also possessed α-SMA cytoskeletal filaments, which are a specific marker of differentiation into a contractile fibroblast phenotype (myofibroblasts), known to be responsible for fibrotic (scarring) processes (Figure 4a). In the 3D printed model, we reconstructed two HDFs that were contacting each other in cell culture. We produced a multicolored dismountable model, which included two parts of cytoskeleton (red PLA filament was used for modeling of f-actin, and the mint-colored filament was applied for α-SMA) and the nuclei were printed in blue color (Figure 4b,d). Interestingly, the nuclei of HDFs had a complex, slightly flattened shape, with delicately branched edges. The cellular f-actin filaments also surrounded the nuclei as was observed in the epithelial cell model. In contrast, the α-SMA fibers did not exhibit spatial coordination with the nuclei. Next, we demonstrated solid 3D-printed models of the same fibroblasts in a white color (Figure 4c) to emphasize the integration of the nuclei and two types of actin in the cytoskeleton. In Figure 4e,f, the 3D STL models for f-actin and α-SMA are shown. Neuronal-like differentiated SH-SY5Y cells were approximately three times smaller than the epithelial cells (PANC-1). They featured polygonal cell body shapes containing round-shaped nuclei with multiple axonal or dendritic protrusions (Figure 5a). For 3D printing, we segmented a central part of a single neuronal-like cell (Figure 5b). The staining pattern (red color for the f-actin cytoskeleton and turquoise/blue for nuclei) was reproduced in the 3D printed model that allowed for dismountability (Figure 5c,d). We applied a post-processing protocol to reconstruct the lower cell surface based on the f-actin fibers cytoskeleton configuration (Figure 5e). We also demonstrated a 3D printed model with an alternative color scheme (with mint-colored f-actin and blue nucleus) and revealed the complexity of the cellular nucleus surface shape (Figure 5f).
Discussion The microscopy-to-3D printing concept allows upscaling the unseen world of microscopy into perceptible matter. This provides researchers and educators with a tool to present their discoveries and teaching content at a more comprehensible scale, making it easier to communicate complex biomolecular subjects. In the current study, FDM printing technology was utilized to produce CAD-generated 3D reconstructions of confocal microscopy whole-cell imaging data. We utilized FDM 3D printing in our CiTo-3DP methodology due to its ease of use, speed, considerable commercial availability, and affordable operation. The FDM technology has reasonable resolution capabilities, with most commercial products able to print to actual resolutions, or extrusion layer heights, of down to 100 µm [13]. FDM 3D printing devices are also capable of printing other materials with varying physical attributes, such as flexibility, strength and transparency, and colors, although safety and printer compatibility need to be considered. We used not only white PLA material but also demonstrated that the multicolored and transparent materials can be adapted to our proposed CiTo-3DP protocol in a way similar to published prototypes [13,20,26]. Additional finishing of the models for perception enhancement can be performed, for example, by coating them with silicone rubber as shown elsewhere [8]. In our CiTo-3DP workflow, an Ultimaker 3D printer was chosen for model production.
The advantage of using Ultimaker hardware is its ease of use, commercial availability, affordability, and compatibility with pre-print software CURA v4.7.0. The latter included a virtual 3D visualization of the print process itself, allowing prior adjustment of various print settings such as print slice orientation, infills, and printing of supports. For more complex geometries, however, greater control and editing of the printing path could be an advantage in specific cases, although CURA has the advantage of being readily available. In saying that, due to the competitive market, we regarded other 3D printing hardware and software as comparable and easily interchangeable with the presented workflow. The current study represents a "proof-of-concept" technical note limited to the translation from the confocal images of the cells to the tactile models using FDM as the most accessible and affordable AM method. At the same time, more complex AM technologies such as SLS and two-photon 3D printing are indeed becoming more readily available. These advanced approaches offer greater precision, potentially making them better suited to the field of microscopy, where model upscaling, image resolution, and printing accuracy are vitally important. However, currently, they still appear less accessible and more expensive to entry-level users when compared with FDM 3D printing, and hence will likely experience less uptake into new industries. We envisage that in the future, the proposed CiTo-3DP methodology can be easily expanded and customized to merge with not only various additional staining methods (e.g., immunocytochemistry, organelle trackers), and high-resolution microscopy modalities, such as electron microscopy or superresolution microscopy, but also to the light-curing 3D printing workflows (e.g., 2-photon nanoparticles-aided polymerization [27]). This will allow the rapid creation of cellular models and subcellular structures of very high resolution and structural fidelity that potentially may be used in further bioengineering applications (e.g., preparation of tissue engineering scaffolds). The future of image processing and AM utilization in cell biology and related disciplines is promising. Various steps have been taken toward integrating image-based model simulations into common practice. Togni et al. [28] showed the efficacy of using finite-element method (FEM) multi-physics modeling software in undergraduate biology education, whilst Tang et al. [29] compared the biomechanical heterogeneity of living cells as measured by atomic force microscopy and finite-element simulation. Notably, both used generic computer-defined geometries. To implement this into a 3D printing workflow, further steps would be required to better define the objects. Inspecting surface mesh quality, generating a volume mesh, and validating against the imaged biology, would be required as a minimum to ensure accurate modeling. A range of finite element method and computational fluid dynamics (CFD) software, such as ANSYS, COMSOL Multiphysics, or even Materialise, are available, providing the file types and sizes that are transferable between software. Another promising technology entering this field is virtual reality. Virtual reality visualizations require similar image-processing analysis and hence provide equivalent educational benefits to students, all whilst negating the need and hence the cost of 3D printing. This too, has seen limited uptake in cell biology education. In the study by Cali et al. 
[30], virtual reality was used to visualize and aid quantitative analysis of reconstructed glial and neuronal cells. FDM 3D printed cellular and subcellular models have the potential to be used both as a visual aid, as described, and as a quantitative tool. This is of particular interest in the fields of bioengineering, computational biology, cellular and tissue morphometrics, and developmental biology. Analysis of morphogenetic behavior of living tissues has to date proven instrumental in biology-related fields [31], and 3D image reconstruction and FDM-printing pose as additional analytical tools. In the presented methodology, the clear differences between the PANC-1 (epithelial), HDF (mesenchymal), and SH-SY5Y (neuronal) phenotypes were revealed using 3D printed models and were shown across several cellular structures. Image processing and reconstruction of 3D geometries make basic morphometric measurements, such as cellular diameter, shape, height, and surface area easier to acquire. Additionally, the segmentation of various cellular structures allows intra-cellular comparisons to be made. FDM-printing models with the same material would also provide data on cellular volumetrics. That is, the amount of material required to print cellular structures of different cell types could be used as a comparative measurement. To improve image quality, finer voxel dimensions are recommended. Clearly, the presented workflow could be utilized for quantitative morphometrics with minimal adjustments. Finally, introducing 3D models for the presentation of experimental results in biological systems is a part of the trend to put discoveries in more translatable models. This is especially instrumental for research conducted on cellular and tissue levels since both cell microscopy and pathology lose the volumetric perspective. We hope that additive technology models can contribute to a better understanding of the spatial profile of tissues, accelerating research in matrix biology and mechanotransduction. The development of this approach has the potential to further revolutionize science education, by providing a strong nexus between laboratory skills, computational analysis, and communication of results [13]. This method has already been utilized in advancing the analysis of biomolecular data sets in the teaching of complex chemical molecular structures [26]. It is reasonable to suggest that AM technology applied for the reconstruction of micron-scale biological objects can contribute to knowledge generation advancement in life and material sciences, engineering, and medicine. Notably, 3D printing also offers an innovative and feasible way of introducing tactility into the educational curriculum, resulting in greatly improved learning outcomes by 3D printed models as tactile data visualizations [32]. For example, detailed 3D-printed anatomical models of prosected organs allow the replacement of several expensive and labor-intensive processes used in medical education [13]. In fact, 3D-printed replicas provide a physical interface through which users can directly interact with the source data and obtain difficult scientific and engineering concepts in a more accessible way. Such an approach reduces the cognitive load and improves knowledge translation. Threedimensionally printed models also can enhance learning experiences for visually impaired and disabled students and for students with special needs [32]. Three-dimensional printing does not come without limitations. 
Firstly, and most importantly, model quality is intrinsically dependent on microscopy image quality. That is, the mode of image acquisition has a direct impact on the final quality of results. Available computational power should also be considered regarding any increase in imaging resolution. Furthermore, using our proposed CiTo-3DP methodology, it is difficult to visualize smaller cellular structures, such as ribosomes, even after upscaling. Beyond image quality and microscope resolution, the method is also constrained by the resolution capacity of FDM printing and the spatial limitations of commercial printers. That is, increasing scaling factors to visualize more detailed biological structures would hinder project time, cost, and model ease-of-interactivity. Nevertheless, the CiTo-3DP methodology we have outlined here is highly transferrable and flexible. We provided three examples of CiTo-3DP methodology use without fully describing its potential in different fields of research and education. However, we envisage that, in the future, the proposed CiTo-3DP methodology could be utilized for a variety of applications, including (but not limited to) in silico simulations for biology, medicine, pharmacological research, tissue engineering, morphometrical analysis, multiphysics modeling, education, rehabilitation of visually impaired people, and integration into virtual reality. Conclusions In conclusion, the presented CiTo-3DP approach bridges the gap between the high-resolution imaging of subcellular living structures and additive manufacturing, allowing the translation of cellular biology messages through tactile, accessible, and interactive 3D printed models, and providing educators and researchers with a new way to display and analyze complex biological and engineering data. Acknowledgments: X.F., A.F. and A.G. thank Ewa Goldys (UNSW) for her devoted attitude in mentorship support and helpful discussions on the implementation of this project. The authors thank Sandhya Clement (UNSW and the University of Sydney) for providing PANC-1 cells. X.F. and A.G. thank Madison King and the team of the "Design Futures Lab" (https://www.making.unsw.edu.au/dfl/facilities/digital-fabrication-lab-3d-printing/) (accessed on 6 January 2023) for the help in development and optimization of the 3D printing protocols, and fabrication of the cellular models demonstrated in the current work. X.F., A.F. and A.G. thank Ayad Anwer and Lynn Ferris (UNSW) for the laboratory operations support. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. Abbreviations: AM-additive manufacturing; 1: STED-non-diffraction-limited stimulated emission depletion microscopy, TEM-transmission electron microscopy, MPM-multiphoton microscopy, SCF-scanning confocal microscopy, LSHM-light-sheet microscopy; 2: SET-serial electron tomography; AF-autofluorescence, gGFP-genetically encoded green fluorescent protein, Phalloidin-FITC-phalloidin conjugated with fluorescein isothiocyanate; DAPI-4′,6-diamidino-2-phenylindole, DAPI (omitted)*-zebrafish larva also was stained with DAPI, however, in the final visualization and 3D printed model the nuclear staining was omitted; DNA-deoxyribonucleic acid. Appendix A.1. The Biomedical Educational and Research Context The link between education and AM has previously been made by Kaplan et al.
[32] and others, showing learning outcomes of complex concepts are greatly improved through the production of physical data visualizations. Further to this, education is made more accessible and comprehensible to special needs and disabled students. Similar conclusions have been drawn in cell biology and microscopy-related fields by Perry et al. [5]. In these fields, image data are often visualized three-dimensionally digitally, on analytically powerful software such as ImageJ [30]. Visualizing data in this way is effective but requires additional cognitive load to isometrically decode within the human brain. Three-dimensional printing data reconstructions effectively reduce this cognitive load, and as such, has the capacity to revolutionize how scientists analyze and present their results and findings. In biomedical and bioengineering education, the presented CiTo-3DP methodology may be used to improve learning outcomes associated with complex biological concepts, such as cell morphology and tissue development. Additive technologies can provide biological constructors consisting of cellular and subcellular parts which can become a great way to teach spatial concepts such as tissue architecture and intracellular compartmentalization. The CiTo-3DP method goes beyond the teaching of "generic" eukaryotic cell types by showing the obvious variety in cell morphometry between PANC-1 and HDF cells ( Figure 5). As an example, students could be led through a laboratory-based experiment to culture and image basic mammalian cell phenotypes from epithelial, mesenchymal, or endodermal tissue origin. Other complex concepts such as cancer pathogenesis, cell shape regulation, EMT/MET, fibrosis, cell phenotypes, and transdifferentiation, as well as the general concept of dimensionalities-can be explored in specially designed experiments that now can be enhanced with 3D tactile visualization. Students could then develop their own interactive models in image processing compatible CAD software and 3D print them using FDM printers located at their respective institutes. Such a workflow would provide students with hands-on experience in cell culturing, microscopy imaging, computational data analysis, and CAD, whilst also providing them with enhanced learning outcomes. We suggest that some aspects of the presented CiTo-3DP protocols are particularly relevant for the biological and biomedical education and research context. • Cell culture terminology and methodology: linear (immortalized) cells vs. primary cells [33]. The technical article by Merck explains cell culture protocols applicable both to linear cells (presented by cancer PANC-1 cells) and primary cells (human dermal fibroblasts, HDF). • Healthy cells (HDF) vs. cancer cells (PANC-1). Fibroblasts are the main cell type in connective tissues, responsible for the production and degradation of collagen and other components of the extracellular matrix. A specialized form of fibroblasts, myofibroblasts, can exert strong contraction of tissue (particularly important for wound healing and regeneration). The source of PANC-1 cells in pancreatic ductal adenocarcinoma is deadly cancer with limited treatment options [34,35]. This malignant tumor commonly contains large amounts of collagen and fibroblasts, which together contribute to its treatment resistance [36]. • Embryonic origin of cells and tissues. Pancreatic adenocarcinoma originates from pancreatic glandular epithelium which has an ectodermal embryonic origin. Fibroblasts are cells of mesodermal origin. 
Neuron-like cells SH-SY5Y are derived from human neuroblastoma, a malignant tumor that originates from the neural crest cells. • Cell shape and phenotype. Untreated PANC-1 cells are characterized by epithelioid phenotype; the HDFs have a mesenchymal-like phenotype, while SH-SY5Y cells may have varying phenotypes, depending on the cell culture conditions. In the current study, neuronal-like differentiation was maintained in these cells. Among several classifying features, morphology is one of the most prominent and obvious signatures of cellular phenotype. Epithelioid cells have a rounded shape, and their nucleus is usually centrally located. The signature of mesenchymal cells is a more elongated shape, quite often spindle-like, and the nucleus of the cell is usually more eccentric [33]. Neurons feature a clearly discernible cell body with centrally located round nuclei and various types of cytoplasmatic processes (the branching ones are termed dendrites, and the long, non-branching processes are named axons). Cell shape is a recognized feature associated with adhesion and motility potential, as well as their differentiation commitment [37][38][39]. • EMT and MET. One of the critical hallmarks of cancer progression is the so-called epithelial-to-mesenchymal transition (EMT) and the reverse (MET) process, reflecting the adaptation of cancer cells to new environments, for example, during the metastatic colonization of distant organs. The signature for EMT is a loss of epithelioid phenotype in epithelial (healthy or malignant) cells and the acquisition of a mesenchymal phenotype. MET presents the opposite transition. EMT/MET phenotype changes are reflected, in particular, in cell shape [40,41]. • Fibrosis. Fibrosis is the scarring of tissues and organs, characterized by excessive accumulation of extracellular matrix. At certain stages, it is also associated with the rapid proliferation of fibroblasts and their transformation into myofibroblasts. The signature of myofibroblasts is an expression of α-smooth muscle actin (α-SMA). The fibroblast-to-myofibroblast transdifferentiation, as well as the transformation of other cells into myofibroblasts, is a typical sign of fibrosis [42][43][44][45][46]. • Cytoskeleton. Three types of subcellular structures were imaged in the current study: cell nuclei, polymerized f-actin filaments representing the cytoskeleton component defining the shape of cells [37][38][39], and the specialized form of actin, known as αsmooth muscle actin (α-SMA), which is recognized as a phenotypical marker of cells bearing mechanical stress, such as smooth muscle cells or myofibroblasts [47]. Both the shape of the cytoskeleton and the level of α-SMA expression are key indicators of the cell's functional state. Cell cytoskeleton and nucleus shape are dynamic characteristics that can reflect the phase of the mitotic cycle and the migration pattern [48]. In standard two-dimensional cell culture models, larger mean surface area and proportion of contractile α-SMA fibers indicate myofibroblast transdifferentiation of fibroblasts followed by excess synthesis of collagen [49]. This phenotypical transition reflects cellular fibrotic response on the tissue level, e.g., in skin scar or peri-implant connective tissue capsule formation [50]. These parameters are important to monitor in in vitro studies of cancer treatment, drug testing, and all the areas where cells are responding to external factors. 
• The research and bioengineering applications of the CiTo-3DP methodology are dependent on the choice of the cells and subcellular structures. For the printed cellular models presented in this study, we envisage a scope of analytical tasks related to the relationship between the nucleus and cytoskeleton. For example, the data on the mass vs. the volume of the organelles, the surface texture of the organelles, the architecture of the intracellular space, and their reorganization in response to the experimental stimuli could serve for more biologically accurate bioengineering simulations such as, for instance, computational fluid dynamics research and analysis of the intracellular mechanical microenvironment. Further development of the proposed approach, with multi-material models or layered multi-material coatings, may be useful in cognitive and rehabilitation sciences. The open-source Ultimaker software CURA v4.7.0 was used to prepare the .stl files for 3D printing and configure the print settings of the selected printer. All cell types were printed simultaneously on a dual-extrusion Ultimaker S5 FFF-technology printer with a 330 × 240 × 300 mm build volume. • White PLA material was extruded at 205 °C through a 0.4 mm extruder head onto a build plate surface at 65 °C. Fast printing settings were chosen to minimize printing time, which came to approximately 12 h. In particular, a 10% infill and 60° support angle were chosen. The printer was allowed to cool prior to removing the printed models from the build plate. Supports were removed by hand and with the aid of pliers. • Cropped 3D reconstructions of cellular structures were generated in Mimics Research 21.0 software from the imported z-stacks using grey-value thresholding. Single PANC-1 and SH-SY5Y cells and two connected HDF cells were, respectively, isolated. Minor edits were made to masks to better represent the cellular components imaged. Specifically, the Smart Fill tool was used to fill small holes between reconstructed voxels. • This was particularly important in generating close-to-solid nucleus structures. The initial length measurement of a selected nucleus object was taken in µm for scale verification throughout the workflow. The resultant objects were exported directly into Mimics 3-Matic for post-processing. A second length measurement of the previously selected nucleus object was taken in mm, which verified the import rescaling from µm to mm automatically performed by the Mimics software. • The 3D objects were optimized for 3D printing using various editing tools. A 1 mm external uniform offset was applied to the meshed surface geometries, followed by iterations of the smoothing, wrapping and remeshing tools. The models were designed such that the nucleus could be extracted from the rest of the cell body model. To achieve this, an XY-plane trim was performed to slice the cytoskeleton geometry in half. The aligned nucleus geometry was then Boolean-subtracted with a 1 mm clearance factor from the trimmed cytoskeleton geometries. • Finally, the quality of the resultant surface meshes was checked using the Fix Wizard tool. The surfaces (nucleus, cytoskeleton upper, cytoskeleton lower) were exported as separate .stl files. Mimics and 3-Matic (Materialise) are already readily used in biomedical research applications as design-orientated software. • As previously noted, Mimics has been used by Liu et al.
[13] as a cheaper and more ethically neutral alternative teaching aid to cadavers in medical education. In comparison to other commercial software, Martin et al. [7] showed that Mimics possessed more powerful image manipulation, visualization, and editing functions. 3-matic, also part of Materialise and often packaged with Mimics, allows for further design iterations and is well-suited to optimizing meshes for FDM 3D printing as STL files. • Notably, neither software has seen significant uptake in areas of micro-scale biology. In comparison to open-access image-processing software such as 3D Slicer and ImageJ, commercial software provides a faster, more powerful, and more versatile user experience. In terms of CAD, commercial software, such as 3-Matic, allows for greater interactivity to be easily built into printable models. Although this design power was not fully explored in this methodology, its effect was demonstrated by the interactivity of the PANC-1 cytoskeleton-nucleus cell model. To achieve similar results using free software would require a transfer between software, which is often cumbersome in terms of file formatting and file sizes. Considering that Materialise provides both image-processing and CAD, and is already used in biology-related sciences, it was chosen for this project. It should be noted that Materialise also offers a variety of online tutorial resources, making it far easier to learn the software.
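As a small follow-up to the morphometric and volumetric comparisons discussed above (cell volume, surface area, and the amount of print material needed per cell type), the sketch below shows how such quantities can be read directly from the exported STL parts with numpy-stl. The file names are hypothetical placeholders, and the calculation assumes watertight (closed) meshes.

```python
# Hedged sketch: basic morphometrics from exported STL parts. File names are
# hypothetical placeholders; units follow whatever scale the STL was saved in,
# and get_mass_properties() assumes a closed, watertight mesh.
from stl import mesh  # numpy-stl

for name in ("nucleus.stl", "cytoskeleton_upper.stl", "cytoskeleton_lower.stl"):
    part = mesh.Mesh.from_file(name)
    volume, center_of_gravity, _inertia = part.get_mass_properties()
    surface_area = part.areas.sum()
    print(f"{name}: volume = {volume:.1f}, surface area = {surface_area:.1f}, "
          f"centroid = {center_of_gravity.round(2)}")

# Comparing these numbers across cell types (e.g. PANC-1 vs. HDF vs. SH-SY5Y parts)
# gives the kind of volumetric / print-material comparison mentioned in the text.
```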
9,499
2023-06-01T00:00:00.000
[ "Education", "Engineering", "Biology", "Computer Science" ]
Cryogenic control system operational experience at SNS * Correspondence: howellm@ornl. gov This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http:// energy.gov/downloads/doe-publicaccess-plan). Research Accelerator Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831, USA Abstract SNS cryogenic system The design of the SNS cryogenic system is similar to the system deployed at Thomas Jefferson National Accelerator Facility (TJNAF) with some modifications. The SNS system is designed with about 60 % of the refrigeration capacity of the original TJNAF system [1]. Table 1 details the system specifications. Figure 1 is a simplified diagram of the system. The major components of the system include a purifier, helium gas storage, warm compressors, 4.5-K cold box, liquid helium storage, 2-K cold box, linear accelerator (LINAC) distribution system, controls system and additional ancillary systems. SNS cryogenic control system The Central Helium Liquefier (CHL) at SNS is a highly automated and highly reliable machine with an exceptional performance record. The control system was designed in a modular fashion within the Experimental Physics and Industrial Control System (EPICS) framework which allows it to integrate with the other controls in the SNS accelerator complex. This design EPICS includes a total of 14 Versa Module European (VME) Input Output Controllers (IOCs) and 23 Allen Bradley ControlLogix™ Programmable Logic Controllers (PLCs). Each subsystem has its own dedicated pair of ControlLogix™ PLC and a VME IOC. In this implementation, the lower level controls, equipment and instrumentation interface, and interlocks are contained in the PLC while the higher-level controls, Proportional Integral Derivative (PID) loops, diode temperature sensor modules and Linear Differential Variable Transformer (LVDT) modules are handled in the VME IOCs. A block flow diagram of the cryogenic control system is depicted in Fig. 2. In addition to the ControlLogix PLCs and VME IOCs, the cryogenic control system utilizes EPICS "soft" IOCs to implement the cryogenic alarm handler and upper level control sequences. An EPICS "soft" IOC is a program running on a host machine performing input/output (I/O) operations on devices with no direct hardware connected to the host machine, as well as executing sequence operations, open or closed loop controls, and other computations. The system is equipped with multiple sources of electrical power to maintain high system reliability, and consequently ensure uninterrupted control system operation. The primary power source is susceptible to interruptions caused by external factors; hence the control system devices are setup with a secondary or emergency power source delivered by an Uninterruptible Power Supply (UPS) with a diesel generator backup. Maintaining uninterrupted power to the control system through automatic transfer switches (ATS) is critical to the reliability and availability of the facility. 
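To make the role of the EPICS "soft" IOCs described above more concrete, the following Python sketch uses the pyepics Channel Access bindings to illustrate the kind of slow supervisory loop such a host-side program can run. The process variable names, the level threshold, and the heater trim value are hypothetical placeholders and do not correspond to actual SNS signals; fast interlocks remain in the PLCs, as described in the text.

```python
# Hedged illustration of a slow supervisory loop of the kind an EPICS "soft"
# IOC or host-side script can perform. PV names and numbers are hypothetical.
import time
from epics import caget, caput

LEVEL_PV = "CRYO:DEWAR:LHE_LEVEL"    # hypothetical liquid-helium level PV (%)
HEATER_PV = "CRYO:CM01:HEATER_SP"    # hypothetical cryomodule heater setpoint (W)

def check_and_trim(low_limit=55.0, trim_watts=5.0):
    """Poll the level PV and trim the heater setpoint down if the level drifts low."""
    level = caget(LEVEL_PV)
    if level is not None and level < low_limit:
        current = caget(HEATER_PV) or 0.0
        caput(HEATER_PV, max(current - trim_watts, 0.0))
        print(f"Level {level:.1f}% below {low_limit}%, heater trimmed down")

while True:
    check_and_trim()
    time.sleep(10)  # slow supervisory loop; hard interlocks stay in the PLCs
```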
For operator interface, SNS has selected the Extensible Display Manager (EDM), which is maintained at ORNL. A script was created to translate the JLab operator screens, which were implemented using Motif Editor and Display Manager (MEDM), into EDM. The SNS color and font standards were applied after the screens were translated. The screens were updated to reflect the SNS plant hardware design and to incorporate improvements suggested by JLab [2]. The initial controls development effort resulted in control screens for each of the subsystems and several of the individual pieces of equipment. An example of a control screen is provided in Fig. 3. As time progressed, additional screens were included for diagnostic purposes. Screens capturing important trip data or information about equipment health were added to assist with system troubleshooting. Summary screens were included for the valves and instruments of each subsystem to aid in the calibration and initial set up of the system. The control screens closely mimic the Piping and Instrument Diagrams (P&IDs) to support operator familiarity with both the drawings and control screens. This approach aids with system operation and troubleshooting. Integration of cryogenic control system with accelerator controls The integration of the cryogenics control system with the rest of the accelerator complex is of key importance. Data is provided from the cryogenic control system to the Radio Frequency (RF) control system to determine whether it is acceptable to apply RF power to the superconducting cavities. Additionally, the cryogenic control system is used to control the liquid level, pressure, temperature, and amount of electric heat in the cryomodules. It performs these functions in normal operating conditions and in transitional phases of operation. Several control sequences are in place to integrate these functions. The cryogenic control system resides on its own network and is separate from the accelerator network. The cryogenics plant is equipped with its own control room and is controlled from a location separate from the rest of the accelerator. In the off shifts when the cryogenic system is unmanned, the control system is equipped with an autodialer that calls the staff in the event of an alarm. The accelerator central control room (CCR) has read access to the cryogenic system, monitoring operation in the cryogenic facility during off shifts. This provides redundancy to the auto-dialer. Over time, desirable control functions were identified as important for the CCR operators to perform, avoiding excess call-ins for the cryogenic staff. To enable control, the chief operator in the CCR can log in to the cryogenic network where the EPICS access security configuration has been set to provide the chief operator with limited amount of write access to the cryogenic controls. The chief operator can make select modifications such as an adjustment to the electric heat in the cryomodules to stabilize pressure. An added challenge to integrating the controls of the cryogenics system is interfacing with the multiple vendors and contributors to the design and fabrication of the system. Most large-scale cryogenics systems are built by multiple vendors and institutes. For the SNS cryogenic system, TJNAF, Oak Ridge National Laboratory, Linde, Air Liquide, S2M, PHPK, and several other vendors were involved. SNS partnered with TJNAF personnel to lead the controls effort. However, several of these vendors provided PLC code for the operation of their component. 
A functional description was developed to guide the staff in coalescing these different code components into one cohesive functional control system. Control system standards Implementation and enforcement of standards in several areas, including software, hardware, screen design, device naming, and signal naming were recognized as early linchpins of the integration approach for the SNS controls system implementation. To ensure uniformity across all developed software, the SNS project negotiated projectwide licensing agreements. The most important aspect of control system standardization was the uniform use of the EPICS framework for all subsystem controls. EPICS provides tools for developing and executing control algorithms, a common communication protocol, Channel Access, and a set of configurable tools for graphical user interfaces (GUI). Contrary to tradition even in other EPICS laboratories, this includes both the conventional facilities and the target control systems, where integration was deemed important from the outset. Training was required for both commercial firms and partner laboratories not familiar with EPICS. The GUI tools available in EPICS include the EDM, developed for EPICS at the Oak Ridge Holifield facility and further enhanced in collaboration with the SNS controls group. EDM was chosen for easier maintenance and extensibility than competing EPICS display managers, and tools were developed to translate screens developed in two of these: MEDM and DM2K (European version of MEDM). Working with the operations team, SNS standardized layouts and color schemes used for operator screens. EDM facilitates consistent use of color rules by allowing selectable pre-defined configurations for similar types of screens. Linux was chosen as the operating system for control system development, as well as operator console, file management and high-level server applications [3]. SNS facilitated the use of hardware standards by establishing Basic Ordering Agreements (BOAs), which allowed all partners, subcontractors, and vendors to purchase selected standards at project-negotiated prices. The SNS control system makes far greater use of commercial PLCs than was traditional in EPICS-based systems. PLCs were used for subsystems that must be kept operating whether the rest of the control system is needed. SNS selected the Allen-Bradley ControlLogix™ family of PLCs for these applications. SNS originally standardized on the Motorola 2100 Power PC series of processors for its distributed IOCs, however, some limitations of this model have led to many IOCs being upgraded to Motorola 5500 s after some years of operational experience. An adapter card allows the same processor to be used for both VME and VME eXtension for Instrumentation (VXI) applications. BOAs were established for VME and VXI crates: Dawn for 7 slot VME crates; Wiener for 21 slot crates and Racal for VXI. A BOA was completed for standard, 19″ equipment racks. These were configured as required with doors, side-panels and/or other accessories. One of the first and most important standards agreed by the partner laboratories was for signal and device naming. Despite having established the standard, the application of the naming convention broke down during the development of the control system with multiple partners. The standardized names using several different interpretations of the original standards document appeared on drawings, screens, in documents, and prototypical databases. 
Reliability

The SNS cryogenic system has been 99.7% reliable over the last ten years while operating, on average, 5000 h per year. This equates to approximately 14 h of down time per year. It also means that each subcomponent of the system must greatly exceed 99.7% reliability to ensure the continued record of operating excellence. Figure 4 depicts the reliability data of the cryogenic system. The down time is calculated from the time the beam goes off to the time the beam returns. The most important aspects of recovery are response time, diagnostics and evaluation, and the implementation of the repair. When possible, these issues are corrected on scheduled maintenance days or during maintenance outages to avoid operational risk.
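As a quick back-of-the-envelope check of the figures quoted above, the allowed annual down time implied by a given availability can be computed directly; the snippet below reproduces the roughly 14 h per year cited for 99.7% availability over about 5000 operating hours (the numbers are taken from the text, and the result is an estimate, not a figure from the SNS logs).

```python
# Back-of-the-envelope: allowed down time implied by an availability target.
operating_hours_per_year = 5000.0
availability = 0.997

allowed_downtime_h = operating_hours_per_year * (1.0 - availability)
print(f"{allowed_downtime_h:.0f} h of down time per year")  # ~15 h, consistent with the quoted ~14 h
```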
The down time experienced by the cryogenic system over the last ten years of operation can be classified into six categories: sinus filter, output module, capacitor, PLC fault, heater power supply, and JT valve motor (Fig. 5). Of these six categories, the down time caused by the sinus filter was by far the biggest contributor, responsible for over 50% of the total down time for the cryogenic system. Although this sounds like a large number, there were only four down time events in the last ten years. The SNS 2-K cold box is equipped with four cold compressors, each powered by a Variable Frequency Drive (VFD). Each VFD is equipped with a sinus filter, an LC filter containing multiple inductors and capacitors. The wiring of the sinus filter in VFD3 was incorrect from the first installation of the VFD cabinet. Despite this issue, the system operated reliably for approximately ten years. At that point, the wiring within the sinus filter burned and an inductor in the filter failed. It was then realized that this fault had been the cause of a significant portion of the historical down time; previously, the VFD itself had been suspected and had been changed multiple times. After replacing the sinus filter, the system has been more stable, making it easier to execute the 2-K pump-down sequence.

The next largest category is output module failures. Each PLC within the control system is equipped with multiple input and output modules. The input modules detect the status of input signals such as temperature, pressure, and flow sensors, whereas output modules control devices such as valves, relays, and heaters. There were only two output module failures during the last ten years; however, any failure resulting in a 2-K cold box trip usually causes at least 8 hours of down time. Therefore, it is imperative that the control system have very high reliability. One of these failures occurred on the output module for the VFD of one of the cold compressors, and the other was related to the helium Dewar. Both resulted in tripping the 2-K cold box.

A single capacitor failure on the power supply card in a magnetic bearing cabinet is the next most significant cause of down time in the CHL. Each of the four cold compressors within the 2-K cold box has a magnetic bearing that levitates the cold compressor wheel during operation. If the magnetic bearing fails, the cold compressor is equipped with back-up ball bearings. When the magnetic bearing fails and the cold compressor lands on the back-up bearings while spinning, this is referred to as a "hard landing". The cold compressors are designed to withstand a small number of hard landings. In this event, the capacitor failed on one of the three phases of power. As a result, it was inconclusive whether the system transferred from line power to back-up battery power. In a similar installation at another institute, hard landings have resulted in the failure of a cold compressor. When this issue occurred, the emphasis was placed on the health of the system rather than on minimizing down time. A "wobble" test was performed to evaluate the health of the back-up bearing. For this test, a power supply is used to tilt the cold compressor, and the voltage readings are measured to determine the air gap between the shaft of the cold compressor and the bearing. These values were compared to the original values measured several years earlier. It was determined from that measurement that the system was healthy enough to restart.

The PLC that controls the 4-K cold box had a major fault that resulted in a complete memory loss. The PLC was able to be restarted and reloaded with the program to get the system running again. However, the root cause of the failure was not identified. Because the problem could not be identified, a spare PLC was loaded with the latest version of the code and swapped for the existing PLC. This resulted in approximately two shifts of down time. Identifying the latest revision of the code and having it easily accessible is an important consideration for minimizing down time in a situation such as this.

Two additional, smaller contributors of down time are displayed in Fig. 5. They both represent single events. The first was a motor failure on a Joule-Thomson (JT) valve on a cryomodule, and the other was a heater power supply that failed. The motor failure was quick to repair but required the beam to be shut off to allow access to the LINAC tunnel. The power supply failure fortunately coincided with a period when SNS was not producing neutrons. This coincidence allowed recovery to be done during non-production time, which resulted in minimal interruption (approximately a half hour) to neutron production.

Additional issues experienced not affecting reliability

The most important aspect of preventing down time is the awareness of the operations personnel. There are many activities that help make a system more reliable, such as preventative maintenance plans, calibration programs, and system alarms. However, they cannot replace the people who walk through the plant every day, looking, listening, and smelling the operation. In 2019, an abnormal noise was heard at the main warm helium gas valve to the 4-K cold box. This valve is located outside of the CHL building. Because the control system was not instrumented to read back this valve, the control screen indicated only the last commanded position, which was 100% open. When the operator took a closer look to find the source of the abnormal noise, it was noticed that the valve was almost closed. If the valve had closed, the cold box would have tripped, resulting in eight to ten hours of down time. The operations and maintenance crew formulated a plan to remove the pneumatic actuator while holding the valve open with a mechanical mechanism. After the pneumatic actuator was removed, a manual actuator was installed, holding the valve in its current position. With the manual actuator installed, the valve was slowly opened to restore it to 100% open. Upon inspection of the valve actuator, it was determined that the seals of the positioner had failed and it had filled with water.
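The root of the incident above is that a commanded valve position was displayed as if it were a measurement. One way to keep that distinction visible in display or logging logic is sketched below; the structure and field names are hypothetical and are not drawn from the SNS EDM screens.

```python
from dataclasses import dataclass

@dataclass
class ValvePoint:
    """A displayed valve value, annotated with how trustworthy it is."""
    name: str
    percent_open: float
    source: str  # "readback" if measured, "command" if only the last setpoint

    def label(self) -> str:
        # Make command-only values visually distinct so operators do not
        # mistake a setpoint for the actual valve position.
        suffix = "" if self.source == "readback" else " (command only, not measured)"
        return f"{self.name}: {self.percent_open:.0f}% open{suffix}"

print(ValvePoint("warm He supply valve", 100.0, "command").label())
```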
Also in 2019, an abnormal noise was detected coming from one of the turbines in the 4-K cold box. The inlet valve to the turbine is controlled from the operator screens but, once again, the displayed valve position did not represent the actual valve position, since the valve is not equipped with instrumentation for read back. For valves with no read back, the valve position command output is used to indicate position on the operator screens. When the valve was observed in the field, it was found to be oscillating from full open to almost closed. This is a major concern because it can cause turbine damage. Investigation determined there had been a failure of a pneumatic control module in the valve actuator on the turbine inlet valve. The control module was replaced. Over the next several months, multiple failures occurred with pneumatic control modules on the 4-K cold box valves, leading to the conclusion that the part was at end of life after approximately 15 years of service. During the next two maintenance outages, all thirty of the control modules were replaced, and an adequate supply of spares is now maintained in inventory.

Another issue that arose in the 4-K cold box was a glitch in the reading of the speed sensor of a turbine [4]. In this case, the speed sensors were outputting a very low voltage signal to a tachometer, causing the turbines to trip due to a loss of speed signal. An oscilloscope was installed to read both the output of the speed sensor and the output of the tachometer. It was discovered that, intermittently, the tachometer output signal would drop to zero. Figure 6 shows a screen shot of the oscilloscope reading at the output of the tachometer. Initially in this reading, the output is zero before it begins to read again. To rectify this, the speed sensor was positioned closer to the target on the turbine, which increased the voltage signal, and filters were added in the PLC logic to minimize the impact of a temporary signal glitch. For future installations, dual speed sensors should be considered [4].
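A software filter of the sort added to the PLC logic for the speed-signal dropouts can be approximated as a short hold-last-good-value debounce: a reading of zero is only believed after it persists for several consecutive scans. The Python below is an illustrative stand-in for the actual PLC implementation; the scan count and values are hypothetical.

```python
class SpeedGlitchFilter:
    """Ignore brief dropouts to zero in a turbine speed signal.

    A zero reading is only passed through after it has persisted for
    `hold_scans` consecutive PLC scans; shorter dropouts are bridged with
    the last good value. (Illustrative only; not the SNS ladder logic.)
    """

    def __init__(self, hold_scans: int = 3):
        self.hold_scans = hold_scans
        self.zero_count = 0
        self.last_good = 0.0

    def update(self, raw_speed: float) -> float:
        if raw_speed > 0.0:
            self.zero_count = 0
            self.last_good = raw_speed
            return raw_speed
        self.zero_count += 1
        # Bridge short glitches; report zero only if it persists.
        return self.last_good if self.zero_count < self.hold_scans else 0.0

f = SpeedGlitchFilter()
print([f.update(v) for v in [3000.0, 0.0, 0.0, 3010.0, 0.0, 0.0, 0.0]])
# -> [3000.0, 3000.0, 3000.0, 3010.0, 3010.0, 3010.0, 0.0]
```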
Another issue that surfaced over the years of operation was related to the network routers. It was originally intended that the cryogenics network would have redundant network routers. Because two routers were installed, it was assumed that they were fully redundant. During a power outage affecting one of the routers, it became clear that the switches were not fully redundant, resulting in a loss of control and monitoring of certain aspects of the cryogenic system. The old routers, which were approximately ten years old, did not support redundancy as originally thought and were upgraded to a new model supporting redundancy, dual power supplies, and automatic failover. The cryogenic control system has two core switches, which were upgraded to Cisco Catalyst 3850 switches. As shown in Fig. 7, the two switches were configured to use the Multiple Hot Standby Router Protocol (MHSRP) to provide routing redundancy. Each router is in its own HSRP group, supporting redundancy for internet traffic. Router A is the active router for group 1 and serves as the standby router for group 2. Router B is the active router for group 2 and serves as the standby router for group 1. When both routers are available, they share the IP traffic load. Should either router fail, the operational router becomes the active router of the group serviced by the failed router. If the failed router returns to operational availability, preemption restores load sharing between both routers [5].

In the early part of 2019, the cavity heaters in cryomodules 5 through 9 tripped multiple times. Since the electric heat in the helium vessels controls the operating pressure of the LINAC, disruption of the heat can cause problems. Depending on the amount of heat lost or gained, pressure and flow abnormalities can have a negative effect on the operation of the 2-K cold box. Further inspection and observation of the AC power distribution revealed that the isobar power strips that supply AC power to the PLC and power supplies indicated a faint 'Fault' light. Since the isobar was also a common component of the earlier trips, the isobar power strip was replaced. After the power strip was replaced, the new one also indicated a Fault. The AC phases were checked, as well as the ground. Since the isobars were fed from the automatic transfer switch (ATS), it was determined that the ATS might be the cause of the problem. The old ATS was replaced with a newer model, and the isobars' 'Fault' light went away. Analysis performed on the removed ATS indicated that 6 V was present from neutral to ground. Since the problem has not resurfaced, the old versions of the transfer switches were replaced with the newer models as a preventative measure [6].

FMEA

One of the most important lessons learned for the SNS cryogenic control system was the need for a structured way of determining how to prioritize work and bring the proper attention to necessary work, securing the funding and resources to perform it. A Failure Modes and Effects Analysis (FMEA) was performed for the entire cryogenic system in 2009. To perform such an analysis, evaluation matrices are created to evaluate and score events in terms of probability, severity, and detection. These three numbers are multiplied to give a risk priority number (RPN). Presumably, the higher RPNs should be prioritized over the lower RPNs. However, no system is perfect, and there are times when the judgment of the people conducting the work takes precedence over the actual FMEA result. The FMEA does yield a product that defines weaknesses in the process, a ranked list of items in need of focus, and an opportunity for a team to focus on a process, along with a driving force to produce action [4]. During the process of conducting the FMEA, it was clear that the probability evaluation matrix for the equipment was not applicable to controls. As a result, new tables were generated for controls hardware and software, and the analysis was performed. For this analysis, firmware was considered software. This effort resulted in a driving force that produced funding to update PLCs and IOCs operating with firmware that had known defects. Ultimately, this effort improved the long-term reliability of the SNS cryogenic system. See Table 2 for the FMEA controls probability evaluation matrix.

Table 2. FMEA probability evaluation matrix for controls.

Ranking the probability for software controls (rank):
  Known bug likely to occur in one month: 10
  Known bug likely to occur in 3-6 months: 7
  Known bug likely to occur in 1-3 years: 4
  Processor/module has SNS controls standardized version of software: 1

Ranking the probability for hardware controls (rank):
  Known hardware defect: 10
  Mean time to failure in 3-6 months: 7
  Mean time to failure in 1-3 years: 4
  Processor/module has SNS controls standardized version of hardware in manufacturer's intended environment: 1
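The RPN arithmetic described above is simply the product of the three scores, which can then be used to rank candidate failure modes; for example, a mode scored probability 7, severity 8, and detection 5 gives an RPN of 280. A minimal bookkeeping sketch is shown below; the entries are made-up illustrations, not actual SNS FMEA results.

```python
# Minimal FMEA bookkeeping: RPN = probability x severity x detection.
# The entries below are made-up illustrations, not actual SNS FMEA results.
failure_modes = [
    {"item": "IOC firmware with known defect", "prob": 7, "sev": 8, "det": 5},
    {"item": "Analog output module failure",   "prob": 4, "sev": 9, "det": 3},
    {"item": "Loose field wiring connection",  "prob": 6, "sev": 5, "det": 8},
]

for fm in failure_modes:
    fm["rpn"] = fm["prob"] * fm["sev"] * fm["det"]

# Highest-RPN items are (presumptively) addressed first.
for fm in sorted(failure_modes, key=lambda x: x["rpn"], reverse=True):
    print(f'{fm["rpn"]:4d}  {fm["item"]}')
```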
Calibration

The calibration effort conducted during the initial installation of the system was invaluable. The data sheets were used multiple times during start-up and commissioning to verify proper system operation. Not only were they consulted in the early phases of operation; years later, they are still utilized to quickly determine ranges of measurement and compare current performance to the original calibration. Some difficulties were observed in conducting the calibrations. Stainless steel devices installed in stainless steel wells tended to gall, and a cheater bar was required to remove these instruments. Some instruments were not designed to be calibrated with the system operating and require a plant shutdown to be maintained. Because the SNS cryogenic system has not had a sustained shutdown in the last fifteen years, many of these instruments are not calibrated routinely. As an alternative to routine calibrations, comparison screens were created to compare instruments that read similar values [7]. For example, all pressure transmitters that indicate low header pressure are put on one screen, as shown in Fig. 8. If one value is substantially different from the others, calibration can be prioritized, or control loops can be configured to use a different transmitter.

Understanding cryogenic system operating requirements

In the design and implementation of a cryogenic control system, it is important to understand the cryogenic system operating requirements. Using a modular PLC/IOC system for each subsystem has simplified troubleshooting and is a good practice that can be utilized in future installations. Consideration needs to be given to whether the system is to operate continuously for years or will have routine shutdown periods. Including test, calibration, and validation points and signals will facilitate maintenance and troubleshooting. Having the control system monitor its own health is another key aspect of the design of a highly reliable and available system. It was through this monitoring that a problem was detected with the SNS purifier temperature read backs. The system was displaying values that looked reasonable; however, the readings were holding their last value rather than accurately reflecting real-time information. The health monitoring detected the lack of variability in the readings and determined there was a problem. As a result, the problem was corrected and real-time readings resumed. In monitoring the system health, communication errors, module status, and signal status should be evaluated, and the appropriate action to take upon detection of an error should be defined. Operators must then be alerted to these off-normal conditions through alarms. These characteristics of the SNS cryogenic control system have been essential.

Loss of communication and alarming

Communication between IOCs and PLCs is essential to the operation of the control system. In practice, there will be losses of communication. It is important to prepare for this event ahead of time in the design and commissioning phases of the system development. All the PLCs and IOCs must take the proper action in the event of a loss of communication. For example, if the signal from a sensor is not valid, the PLC must perform predetermined actions to mitigate the situation. If communication is lost from a particular PLC, the IOC should perform predetermined actions to mitigate that situation. These events and the corresponding actions to take can be evaluated during the FMEA process.
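A common way to implement the predetermined actions described above is a per-signal watchdog: each value received from a PLC carries a timestamp, and if it has not refreshed within a timeout the IOC substitutes a safe fallback and raises an alarm. The sketch below is illustrative only; the timeout, name, and fallback are hypothetical and are not the SNS implementation.

```python
import time

class CommWatchdog:
    """Flag a PLC signal as stale if it has not updated within `timeout_s`."""

    def __init__(self, name: str, timeout_s: float, fallback):
        self.name = name
        self.timeout_s = timeout_s
        self.fallback = fallback
        self.value = fallback
        self.last_update = time.monotonic()

    def receive(self, value):
        self.value = value
        self.last_update = time.monotonic()

    def read(self):
        """Return (value, stale_flag); stale values are replaced by the fallback."""
        stale = (time.monotonic() - self.last_update) > self.timeout_s
        if stale:
            # Predetermined mitigation: use the safe fallback and alarm.
            print(f"ALARM: {self.name} communication lost, using fallback")
            return self.fallback, True
        return self.value, False

wd = CommWatchdog("2-K cold box suction pressure", timeout_s=5.0, fallback=None)
wd.receive(31.2)
print(wd.read())  # (31.2, False) while communication is healthy
```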
The auto-dialer has been a crucial piece of equipment for the SNS cryogenic control system. Since determining an automated response to every event is impossible, human intervention is a necessity in a cryogenic system. Selecting the correct alarms, and the values of those alarms, is a very important aspect of a high-reliability system. At SNS, when an alarm occurs during unstaffed periods, the auto-dialer calls a Subject Matter Expert (SME). Three people are always on call to respond to such alarms. Notifying the proper people at the time of the alarm provides the best chance of responding to a situation while minimizing down time.

Control screens

The display of information on a control screen can have an impact on troubleshooting. For example, when displaying a valve on a control screen, multiple indicators can be shown, such as the fail state, percent open, type of value (read back or command), the raw value of the signal it is controlling, the converted value it is controlling, and whether the valve is in automatic or manual mode. Displaying command values without annotating the type of value has caused confusion in operation, leading operations staff to interpret a commanded position as a read-back position. This can be improved by carefully displaying the information on the control screens. It can also be difficult to detect the cause of a trip for a piece of complex equipment without a trip capture program. Screens have been developed for the SNS cryogenic control system to capture the cause of a trip of components. The screens aid in the troubleshooting of the system and assist in minimizing down time. In developing these screens, consideration should be given to ensuring the screens are easily understood. Standardizing the nomenclature and color scheme of the displays makes the information more easily comprehensible. For example, a numerical zero can equal an "OK" condition and be displayed as green, while a numerical one can equal a "bad" condition and be displayed as red. The date and time of the condition should be readily displayed to assist the operator in troubleshooting. An example of one of the screens developed for this purpose is depicted in Fig. 9.

Redundancy

Redundancy is a critical component of any control system, and multiple lessons have been learned on this topic. Some critical instruments were installed with spares and some were not. It is important to review the system design to ensure spares are installed in the proper locations. Particular attention should be paid to instruments installed in high-radiation environments. The temperature diode and pressure transmitter life expectancies are greatly reduced in the SNS LINAC tunnel. As a result, the control pressure transmitter for the 2-K cold box has been changed to a transmitter in the CHL just upstream of the 2-K cold box. The network components require redundancy as part of the design to ensure continuous operation of the system. In the SNS system, redundancy is provided in the core and aggregate switches, as described in section 2.2 of this paper. Redundant links are provided from these switches to the edge switches. Each spare edge switch is installed adjacent to the operating switch. If an edge switch failure occurs, the patch cable can quickly be moved physically from the failed switch to the installed spare. Redundancy in power supply is also provided to the SNS cryogenic control system. Line power is provided to the control equipment through an ATS. If power is lost, the ATS switches power to a UPS, which is backed up by a diesel generator that automatically starts when power is lost. The control system remains powered even in sustained power outages, which has been an important aspect of maintaining high system availability.
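Where redundant or neighboring instruments measure the same quantity, the comparison-screen idea mentioned in the Calibration discussion can be automated: flag any instrument that disagrees with the median of its peers, and flag a reading whose variability has collapsed (the stuck-value symptom seen on the purifier temperatures). The sketch below is illustrative; the instrument names, units, and thresholds are hypothetical.

```python
from statistics import median, pstdev

def check_redundant_readings(readings, max_dev=0.5, min_spread=1e-3):
    """Compare instruments that should read alike.

    readings: dict of instrument name -> list of recent values.
    Flags an instrument as an outlier if its latest value deviates from the
    group median by more than `max_dev`, and as stuck if its recent history
    shows essentially no variability. Thresholds are illustrative.
    """
    latest = {name: hist[-1] for name, hist in readings.items()}
    group_median = median(latest.values())
    report = {}
    for name, hist in readings.items():
        outlier = abs(latest[name] - group_median) > max_dev
        stuck = pstdev(hist) < min_spread
        report[name] = {"outlier": outlier, "stuck": stuck}
    return report

header_pressures = {
    "PT-101": [2.93, 2.95, 2.94],
    "PT-102": [2.96, 2.94, 2.95],
    "PT-103": [3.70, 3.70, 3.70],  # disagrees with its peers and never changes
}
print(check_redundant_readings(header_pressures))
```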
Additional candidates for redundancy are the PLCs and the communication to the input/output chassis. Maintaining a hot spare of critical PLCs would allow system updates during the operation of the equipment; this was not included in the SNS cryogenic control system. With redundancy in the communication path to the input/output chassis, the system can continue to run until a maintenance day, when the problem can be addressed without affecting beam production.

Electrical design considerations

There has been an emphasis on electrical safety in national laboratories in the United States over the last several years. It is recommended to look for ways to incorporate electrical safety into the cryogenic control system design. First, use Nationally Recognized Testing Laboratory (NRTL) listed equipment or equivalent if it is available. Utilizing equipment from a company that has been through third-party testing and certification for product safety reduces the chances of unforeseen issues. Additionally, using low-voltage sensors and power supplies makes maintenance iterations more inherently safe. It is recommended to select instrumentation with 24 VDC signals, and heater power supplies and actuators under 50 VDC, if possible. Many issues and equipment failures over the years of operation could be attributed to loose wires. Screw terminals are easy to over- or under-torque, resulting in intermittent connections, which are typically very difficult to identify and resolve. As a result of this experience, many of the connections at SNS were changed to spring clamp terminals. These have been very reliable and consistent, with almost no intermittent connections.

Ethernet

It is also recommended that Ethernet be used for communication across PLC devices where possible. Issues have occurred at SNS using both DeviceNet and ControlNet for PLC communication, which has complicated operations and maintenance. Using Ethernet facilitates adding new control system equipment to the system. As control systems further develop, it is preferable to push more control to the PLC and have more robust communication; Ethernet can support this development. A recent control system architecture installed for the FRIB cryogenic system has utilized Ethernet while providing redundant communication paths [8]. Continuing with this strategy is encouraged.

Standardization

The successful implementation of a cryogenic control system is greatly enhanced by a proper standardization plan. This plan should include hardware, software, and naming conventions. It is suggested that a facility select a PLC vendor and stick with it, even if the standard is overkill for a given function. Having standard programming and available spare parts is valuable and has been useful at SNS. As part of this effort, it is recommended that standard I/O modules be selected for each type of signal, including analog input, analog output, binary input, and binary output. Similarly, there are many good instrumentation companies, and if standards are not applied, a facility can be overwhelmed with many different instrument vendors. It is recommended that a facility standardize on two or three companies. Software is much more complicated to standardize.
However, a detailed specification requiring functional description documents and a guide for commenting code can save time and money in the long-term operation and maintenance of the facility. The standardization of the code is facilitated by using standard hardware. At the same time, in a distributed development model, it is recommended that the facility controls team preside over all other controls efforts in order to integrate the entire effort. Naming conventions should be developed, communicated to the entire collaboration, and managed by the facility controls team. At SNS, this proved to be difficult, and there is variability in naming conventions between some of the vendors and the partner laboratories. This adds difficulty in troubleshooting the system and maintaining consistent documentation.

Conclusion

Maintaining the reliability of a cryogenic control system requires continuous, long-term effort. Much has been learned about the system at SNS in the last fifteen years of operation. The system has proven to be robust and reliable, but there are opportunities for improvement. The primary components of maintaining the control system reliability at SNS are a preventative maintenance program, an FMEA, and incorporating lessons learned to continuously improve the system. As part of preventative maintenance, consideration should be given to periodically updating control system hardware, firmware, and software to correct bugs and to prevent systems from becoming unsustainable due to obsolescence. This is a challenge when the upgrades require the plant to be warmed up or shut down, as these opportunities are rare. When calibrations and maintenance are not possible, alternative comparisons can be utilized to determine the accuracy of signals. An FMEA was completed to help prioritize efforts and provide a driving force for required funding and resource allocation. An environment of continuous improvement has been encouraged, as lessons learned have continued to be applied to the system. For future installations, consideration should be given to the lessons that have been learned at the SNS cryogenic control system. Consider all modes of operation when developing the control system; this will facilitate maintenance and calibration iterations. Redundancy and standardization are key characteristics of a control system and should be integrated into the design. Careful consideration should be given to electrical safety, communication reliability, and driving more control down to the PLC level in future installations.
8,306.2
2021-01-18T00:00:00.000
[ "Engineering", "Physics" ]
The Role of Strange Quasiquarks in Transport Properties of the QGP

We study the role of dynamical quarks in the transport properties of the deconfined hot medium utilizing the quasiparticle kinetic approach. The system is then composed of the quasiparticle excitations, whose interactions are encoded in their dynamical masses, with temperature dependence specified by the effective coupling extracted from lattice QCD thermodynamics. Evaluating the temperature and flavor profiles of the shear and bulk viscosities, as well as the electrical conductivity, we examine how different particle species, in particular strange quarks, modify the transport parameters of the deconfined matter.

Introduction

One of the major aims of the experimental and theoretical studies of the quark-gluon plasma (QGP) is to reveal its dynamical and transport properties, quantified by various transport parameters. The shear viscosity η measures the resistance of the fluid against momentum modifications during its longitudinal motion [1], while the bulk viscosity ζ indicates the energy dissipation caused by the expansion of the QGP [2]. Their dimensionless ratios to the entropy density, i.e. η/s and ζ/s, are the major input in the hydrodynamic equations [3]. The other important parameter is the electrical conductivity σ, since strong electric and magnetic fields are expected to emerge in non-central heavy-ion collisions. A precise determination of transport coefficients as functions of temperature and chemical potential is one of the main steps towards understanding the non-trivial evolution of strongly interacting matter. For a comprehensive study of the bulk properties of the deconfined matter conducted within the quasiparticle model (QPM), we refer the reader to [1,2,4]. In this work, we summarize our numerical results on transport coefficients at finite temperature and vanishing chemical potential. We assess the influence of dynamical quasiparticles on the evolution of different coefficients, focusing particularly on the role of the strange quark flavor. In Sec. 2, we briefly discuss the main assumptions of the QPM. In Sec. 3, we present the results for transport coefficients derived in kinetic theory under the relaxation time approximation. A summary and conclusions are given in Sec. 4.

The effective mass depends on the bare particle mass m_{0i} and on the gauge-independent self-energy Π_i; the explicit form of Π_{l,s}(T) for the light and strange quark sectors is given in [5]. The effective running coupling G(T) is deduced from the equation of state calculated in lattice gauge theory for the QGP with 2+1 quark flavors and for the gluon plasma in pure Yang-Mills theory. In this way, G(T) incorporates the non-perturbative dynamics near the QCD phase transition and reproduces the perturbative behavior in the very high-temperature regime [1,2].

Transport parameters in kinetic theory

Assuming that the system deviates only slightly from thermal equilibrium, we employ the expressions for the transport parameters determined from kinetic theory under the relaxation time approximation. The derivation and detailed analysis are presented in [1,5,6] for the shear viscosity η, in [2,5] for the bulk viscosity ζ, and in [2,4,7] for the electrical conductivity σ.
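For orientation, the generic relaxation-time-approximation expressions for the shear viscosity and the electrical conductivity of a gas of quasiparticles with degeneracies d_i, energies E_i = (p^2 + m_i^2(T))^{1/2}, electric charges q_i e, equilibrium distributions f_i^0, and relaxation times τ_i take the form below; the precise expressions used here (including the bulk viscosity, which in addition involves the speed of sound and the temperature derivative of the effective masses) are given in [1,2,4-7] and may differ in detail from this generic form.

```latex
\eta = \frac{1}{15T}\sum_i d_i \int \frac{d^3p}{(2\pi)^3}\,
       \frac{p^4}{E_i^2}\,\tau_i\, f_i^0\!\left(1 \pm f_i^0\right),
\qquad
\sigma = \frac{e^2}{3T}\sum_i d_i\, q_i^2 \int \frac{d^3p}{(2\pi)^3}\,
       \frac{p^2}{E_i^2}\,\tau_i\, f_i^0\!\left(1 \pm f_i^0\right),
```

with the upper (lower) sign for bosons (fermions).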
In Fig. 1 we present the total specific shear and bulk viscosities of the hot QCD medium, along with the individual contributions of each quasiparticle sector in the case of N_f = 2+1. The hierarchy in η_i/s follows the ordering in the effective masses [1], while in ζ_i/s this behavior is disturbed. Although strange quarks and gluons are described by different statistical distributions, there appears a quantitative resemblance between the ζ_s/s and ζ_g/s ratios. We find that the similarity arises due to a convolution of the factors entering the expression for the bulk viscosity [2]. Overall, the light quarks bring the main impact to the total shear and bulk viscosities, while the heaviest quasiparticles are the most effective ones in equilibrating momentum degradation within the QGP. Compared to pure Yang-Mills theory, the presence of dynamical quarks in QCD significantly increases the η/s and ζ/s ratios. Further, one clearly observes a smooth behavior of the specific viscosities in the vicinity of the QCD crossover, and strong non-monotonicities around T_c in pure Yang-Mills theory due to the first-order phase transition [1].

Fig. 2 (left) illustrates the bulk to shear viscosity ratios ζ_i/η_i obtained by dividing each curve presented on the left side of Fig. 1 by the corresponding curve on the right. We have utilized the total ζ/η ratio in QCD and in pure Yang-Mills theory to study the impact of quarks on the deviation of the deconfined matter from conformal invariance [2]. Here, we observe an interesting overlap of the ζ_s/η_s component coming from strange quarks with the total ζ/η ratio. Thus, the strange quark sector quantitatively resembles the bulk to shear viscosity ratio of the QGP. The contribution from the light quark sector slightly underestimates the total ζ/η ratio, while ζ_g/η_g is much larger due to the suppressed gluonic term in the shear viscosity, see the left panel of Fig. 1.

In the right panel of Fig. 2 we present the total scaled electrical conductivity σ/T along with the contributions from the light and strange quasiparticle sectors. The light-quark contribution is larger than that of the strange quarks, since it contains the electric charges of the up and down quarks and since the strange quark component is suppressed by the larger effective mass. The corresponding lattice data [8] near T_c is rather compatible with the QPM result, whereas a discrepancy between them emerges around T = 1.5 T_c and increases gently with temperature. This can be attributed to the fact that the lattice setup includes a pion mass M_π = 384(4) MeV, heavier than the physical one used in our model-building. For a detailed comparison of the total σ/T ratio with the results computed in other approaches, we refer the reader to [2,4].

Conclusions

We have examined the influence of different dynamical particles, especially the strange quarks, on the transport coefficients of the deconfined matter. The transport parameters were computed within kinetic theory under the relaxation time approximation for a medium described by the quasiparticle model (QPM). The dynamical masses of the quasiparticles are characterized by the effective temperature-dependent running coupling deduced from the entropy density obtained in lattice simulations. Assuming that the transport parameters are characterized by a common relaxation time for each quasiparticle species, we computed the temperature and flavor dependence of the specific shear η/s and bulk ζ/s viscosities, and of the scaled electrical conductivity σ/T. We juxtaposed the total results for the QGP with N_f = 2+1 with the transport parameters of the gluon plasma in pure Yang-Mills theory. Above the (pseudo)critical temperature, all transport parameters of the pure gluon plasma are much smaller than those for the QGP with dynamical quarks.
Thus, the presence of dynamical quarks significantly changes the transport properties of the deconfined matter. For the QGP with quark quasiparticles, we found that the major contribution to the transport coefficients comes from the light-quark sector, while the components from strange quarks and gluons are suppressed by their larger effective masses. While for the specific shear viscosity η/s we observed a clear ordering in the contributions from different quasiparticles, for the bulk viscosity ζ/s we found that the strange quarks and gluons contribute to the total ratio in almost equal amounts due to the convolution of their characteristics, such as effective masses and relaxation times. The bulk to shear viscosity ratio and its components reveal that, in the QPM, the ζ_s/η_s ratio coming from the strange quark sector quantitatively resembles the total ζ/η ratio of the QGP with 2+1 quark flavors. Finally, the individual contributions from light and strange quarks to the electrical conductivity σ/T were confronted with the total ratio and the corresponding lattice data. We found our results to be qualitatively consistent with the recent lattice QCD outcomes [8].
1,829.8
2022-01-01T00:00:00.000
[ "Physics" ]